• 1 Post
  • 411 Comments
Joined 4 years ago
Cake day: March 24th, 2022



  • There are a few ways to do this. As the other commenter said, you can attach a name and require ID at the door. ID could be as common as a credit card, a school ID, or even an official piece of mail. All of this is less invasive than biometrics, and more reliable too. Biometrics are always for convenience, not security.

    If you want to get extra cautious, sell tickets at the booth for an hour or two before the doors open and up until the beginning of the show. The ticket comes in the form of a paper wristband, like they use for alcohol, and you can pay cash.

    Want to buy a ticket for your friend? Use their legal name and they show ID at the door. Are they as paranoid as you? Send them cash.

    There’s another option. You can buy tickets for yourself and any number of companions. Only the purchaser has to show ID, and the entire party has to come in with the purchaser.

    There. And now you didn’t have to give Sam Altman legal authority to store and resell your biometric data to private surveillance networks and retail shops in exchange for seeing Taylor Swift live.














  • While I appreciate your disdain for the titans of industry, the policy you advocate, making platforms responsible for user content, is like tearing up the railways. It reminds me of those ridiculous laws from early in the automobile era that required a person to sound alarms and wave flags before driving through a city: policy custom-designed to undermine the utility and sustainability of the very thing it is meant to regulate. It would also destroy email, VPS services, VPN services, etc.

    I agree with the bit about antitrust though. And holding them responsible for advertisements is a very different question, because they actively solicit and promote advertisements. But otherwise your policy positions are insane.


  • While I’m bombarded by obnoxious content on social media, I very very rarely see content that is illegal in my area. Let’s stick a pin in that.

    According to a few sources I’ve seen, 500 hours of video are uploaded to YouTube every minute. Suppose Alphabet were held strictly liable for any of that content being illegal. Would they rely on automated systems to check the content, or human eyes? They might use some automation as a pre-check, but they’d be fools to rely on it, because if it misses something they’re on the hook. So how many FTEs would you need just to watch the videos uploaded to YouTube? Not even counting breaks, pauses, double-checking, etc., you’re looking at around 30,000 people. Say you pay them $15/hr with no benefits: that’s $10,800,000 per day, close to $4 billion a year, and that’s super lowballing it, because I’m not counting realistic wages, administrative overhead, benefits, or a realistic work pace. Maybe Alphabet could still afford it; they grossed $60 billion last year, and while they have lots of other expenses, some of that was probably profit. But then I’d ask: could anyone other than Alphabet afford it? Your average PeerTube instance, for instance? The same applies to all the rest.
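    The moderation math above can be sketched as a quick back-of-the-envelope script. This is just an illustration using the figures from the comment; the around-the-clock, 1x viewing pace with no breaks is the stated lowball assumption:

```python
# Back-of-the-envelope check of the content-moderation math.
# Assumption (the stated lowball): reviewers watch at 1x speed,
# around the clock, with no breaks, re-checks, or overhead.
HOURS_UPLOADED_PER_MINUTE = 500
WAGE_PER_HOUR = 15  # dollars, no benefits

# 500 hours of video arrive every minute, so keeping up in real time
# takes 500 * 60 = 30,000 people watching simultaneously.
reviewers_needed = HOURS_UPLOADED_PER_MINUTE * 60

# Staffing that pace 24 hours a day:
daily_cost = reviewers_needed * WAGE_PER_HOUR * 24
yearly_cost = daily_cost * 365

print(f"reviewers needed: {reviewers_needed:,}")   # 30,000
print(f"cost per day:     ${daily_cost:,}")        # $10,800,000
print(f"cost per year:    ${yearly_cost:,}")       # $3,942,000,000
```

    The yearly figure lands just under $4 billion, matching the comment's "close to $4 billion a year" before any realistic wages, benefits, or administrative overhead are added.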

    But back to my first observation. I don’t see a lot of stuff that’s illegal. I see things that are obnoxious, distracting, etc., but not illegal. It makes me wonder how you conduct yourself as an adult, or what your perspective on lawful speech is, if you find yourself constantly bombarded by material that you believe is or should be illegal.


  • I’m not sure what data-speech you personally think should or shouldn’t be legal, but I know what kinds a lot of people argue should be illegal: things ranging all the way from videographic records of child abuse (CSAM) to unauthorized distribution of copyrighted material to libel to hate speech to blasphemy, and plenty else besides. I think some of it is deservedly illegal (e.g. CSAM) and some of it shouldn’t be (e.g. blasphemy).

    My position is that in a pluralistic society there will be a variety of speech that people won’t want to see for various reasons, and they have a right not to see it. They have a right to have tools that allow them to not see things they don’t want to see. And government censorship of speech should be limited to the absolute bare minimum of speech that causes material harm, and legal responsibility for those rare instances of illegal speech should fall upon the speaker and not the platform or carrier.



  • No I get what you’re saying, but your understanding of the world as it exists is incorrect, and your values are for oppression and anti-freedom.

    Your incorrect understanding of reality: the on-platform tools that currently exist on Facebook are useless. Through account settings you are powerless to limit your exposure to content from strangers in your feed, much less your child’s, except by individually blocking accounts as you encounter them, while logged into the account you want to block them from. Even Bluesky, which also has insufficient tools, is slightly better in this regard. The few on-platform tools you are offered exist only to give you the illusion of control over your experience. Greater control is possible but not offered, because it’s less profitable. It could be mandated through law.

    Your anti-freedom values: making platforms responsible for user content will destroy them or force severe proactive censorship and real identity policies. None of that is conducive to a free and open society. The fediverse could not exist if servers could be held responsible for what users say or do. Most of the Internet couldn’t exist if one rogue or politically unpopular user could land the service they use in court by offending another.

    Your last paragraph is complete nonsense. The way to win an arms race is to come in with bigger arms. That’s where the government comes in: not to force its own will, but to restrain companies and empower people. The notion that giving people greater control of their experiences can harm them is insane.