

I thought of the one app India was trying to force all phones sold in the country to have. That one was also tracking locations.


It was already a no from me, dawg, you don’t have to convince me.
No, but seriously. Everything he tries to implement is like ’90s-cartoon-level evil and sloppy. I bet we could get him monologuing.


I wonder if that dumb ass has downloaded his own app and let a third party track the POTUS in real time. I almost feel sorry for the agents assigned to his security detail.


So, for the “it’s the parents’ fault” bit, I’ll say this. Parents are the arbiters of internet access in their homes. If that van with “Free Candy” written on it pulled into their driveway and they didn’t call the police or warn their children not to get in the van, yes, I would consider them liable.
The fact is, lots of parents do know their children are using social media like Facebook, Instagram, TikTok, etc. A lot of parents are my age and younger (the age where we grew up with the internet and social media in its toddler years, if not its infancy). A lot of us do know the dangers (and are probably addicted ourselves).
What some of us may lack is the knowledge to use parental controls effectively (and at least some of that is because we do dumb shit like using the same password for everything, or not changing default passwords).
But I also think that some of us (looking at you collective shout and other organizations like it) just want to offload our responsibilities onto these companies so we have someone to blame.
And even though I agree that what these companies are doing is wrong (directly targeting minors, deliberately making their platforms addictive, collecting data on minors etc), and I want them held accountable, I also don’t think ID collection is warranted, and I view this as a way to violate privacy and collect data for surveillance purposes which I believe is wrong to do to people who haven’t done anything illegal.
Even if that weren’t the case, these companies also just cannot be trusted to safeguard the PII data they’re wanting to collect. So as far as I’m concerned the ID verification thing is just not going to work.


I absolutely agree that parents play a role and bear some responsibility, both for their own and their children’s internet literacy and for what their children access on the internet. I also agree that companies bear some responsibility (for making their platforms addictive on purpose in order to make money off of people they already know are underage).
I just really want to put forth other ideas for fixing this problem, ones that don’t involve companies being forced by law to enact ID verification. They can’t be trusted to safeguard such information, and it feeds the databases they already have, which will more than likely be used to violate the privacy of their users.
If the government absolutely must get involved, making it illegal to produce and give access to a platform found to be addictive would be a start, but so would media and internet literacy education. Both are solutions that don’t violate the privacy of minors or adults.
Digital media literacy is part of the education system in Denmark and some other European countries and it’s been beneficial to their populace. I think it could be a good solution.


The harm doesn’t come from infinite scroll, auto play, or the algorithm in a vacuum.
But it has been shown statistically that when you gamify the system, and the content is harmful to consume in excess, those factors are what make it dangerous.
Tricking the brain into doing something harmful to itself by gamification is the problem. The algorithm, auto play and infinite scroll are just mechanisms to facilitate that. Novelty only plays a small part in that. The algorithm by itself doesn’t provide a dopamine hit. The infinite scroll by itself doesn’t provide a dopamine hit. The auto play feature by itself doesn’t cause a dopamine hit.
Even when you combine all three, the dopamine hit won’t come if the content being pushed isn’t sufficient to cause a rush of dopamine. And that rush often comes from things like upvotes and downvotes, badges, and achievements. Follower counts and the other metrics individual users rely on for dopamine are being weaponized against them to make money. And it was intentional on the part of Meta execs.


I have a question. What if it’s not just at a parenting level? What if it’s also at a school level? Because I think there is, at least partially, a disconnect between media/internet literacy and people of all ages, including children and parents.
I think we’re going to need such skills going forward, and there are places in the world where students are being taught these things and benefiting from them significantly.
Yet the immediate knee-jerk reactions seem to be to blame the parents and blame the companies that facilitate access to the content.
It doesn’t have to be a “parents by themselves against the world” system. But it also can’t just be a “companies protecting the children” system, because that’s not what companies do or are for. The need to maintain a profit margin flies directly in the face of holding companies responsible, and the laws seem intent on capping the monetary consequences of a breach.
I do feel that the least a parent should be required to do, before complaining to a governing body that someone else is “harming” their child, is show that they have done their due diligence to protect said child. We punish parents for willful negligence and child endangerment all the time. I don’t understand why this is different, but I also wonder if there are other options for educating both children and adults that could help the situation significantly.


And we took up arms because they can pry my cat videos from my cold dead hands?


I agree that your situation isn’t an edge case (at age 9 I found my dad’s locked VHS porn collection and learned that the lock could be circumvented with a fridge magnet).
But on the other hand, let’s say you post something to the internet that may be considered not okay for children, and let’s say that thing is about gunpowder (which you absolutely can make from foraged natural ingredients). It’s your personal website, it’s labeled as not intended for children, and you aren’t a big company, so you don’t have the ability to just hire another company for things like age verification.
Then you get sued by a regulatory body in another country because you didn’t adhere to their laws? Does that sound reasonable to you?
If a parent or guardian takes every reasonable precaution within the law to keep their kid safe, and that kid still gains access to something that can harm them, that’s an accident. If the parent takes no precautions whatsoever and lets the child they are legally responsible for raw dog life, because that’s too hard, or they don’t care, or whatever, then it seems reasonable to me that they be held responsible under the law.
Their right to have a third party protect their children ends at my right to privacy, which to me extends to a right to anonymity, specifically because it has already been shown that without anonymity, privacy just doesn’t exist in this age of the internet.
What does that mean? It means that companies that collect your data but promise “privacy” cannot be trusted to uphold that promise, which means the only option left is to be as anonymous as possible.
I want you to understand that I do agree that when one kid figures out the loophole, that loophole spreads like wildfire.
But on the other hand, if a child figured out how to turn off the security system to the family car, grabbed the keys and went for a joyride with their friends, is it the fault of the parents or the fault of the car manufacturer? Because one of them is legally liable under the law.
Would it be acceptable to have to send your thumbprint to BMW every time you wanted to drive your car?


I did not understand. I’ll see myself out.


Blacklisting it on your router would at least prevent it from trying to connect to an open WiFi network like your own guest network, which some people just don’t turn off or password-protect.
If you are one of those people and you’re reading this, turn that off. You can share your WiFi via QR code these days from just about any smartphone. Turn it off.
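As an aside, the string those QR codes encode is just plain text in the de-facto “WIFI:” format. A minimal sketch (the SSID and password here are placeholders, and qrencode is an optional third-party CLI):

```shell
# Build the de-facto "WIFI:" payload that phone cameras recognize.
# Backslash-escape any \ ; , : characters inside the SSID or password.
ssid='GuestNet'   # placeholder network name
pass='s3cret'     # placeholder password
payload="WIFI:T:WPA;S:${ssid};P:${pass};;"
echo "$payload"   # WIFI:T:WPA;S:GuestNet;P:s3cret;;
# To render it as a scannable code in the terminal (if installed):
#   qrencode -t ansiutf8 "$payload"
```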


Do you think that only Apple TVs have WiFi chips?


If you don’t have the technical know-how to physically lobotomize the TV’s WiFi chip, simply blocking its MAC address would suffice.
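For what it’s worth, here’s a rough sketch of what that looks like if your router happens to run OpenWrt (an assumption; on stock firmware the equivalent is usually a “MAC filtering” page in the admin UI, and the MAC address below is a placeholder):

```shell
# OpenWrt sketch: deny one MAC address on the first wireless interface.
# Caveat: a device that randomizes its MAC can still sidestep this.
uci set wireless.@wifi-iface[0].macfilter='deny'
uci add_list wireless.@wifi-iface[0].maclist='AA:BB:CC:DD:EE:FF'
uci commit wireless
wifi    # apply the new wireless config
```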


I think the simple fact that some of the people in this thread don’t understand is that the people they’re asking to vet the code don’t know how.
They may mean that the people who can vet code should do so before making a fuss about the AI written portions of it, but I don’t know that most of the people in opposition to their comments understand that context.
I haven’t coded anything since the ’90s. I know HTML and basic CSS and that’s it. I wouldn’t have known where to start without guides to explain what commands in Linux do and how they work together. Growing up with various versions of Windows and DOS, I’d still consider myself a novice computer user. I absolutely do know how to go into the command line and make things happen. But I wouldn’t know where to start to make a program. It’s not part of my skill set.
Most users are like that. They engage with only parts of a thing. It’s why so many people these days are computer illiterate due to the rise of smartphone usage and apps for everything.
It’d be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn’t retract. A lot of people wouldn’t know where to start.
I fully agree that other coders on the internet who frequent places like GitHub and make it a point to vet the code of other devs who provide their code for free probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed up behavior.
But the way I see it, there are two different groups, and they have very different views of this situation.
The people who can’t code are consumers. Their contribution is to use the software if they want, and if it works for them to spread by word of mouth what they like about it. Maybe to donate if they can and the dev accepts donations.
If those people choose to boycott, it’ll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.
The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there’s something wrong.
I suppose there’s a third subset of people in the case of FOSS work who can and often do help with projects, and I wonder if that is better or worse, for the reasons listed in the thread like poorly written human code and simple mistakes.
Humans certainly aren’t infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren’t going to make something up unless they have an ulterior motive.
Perhaps breaking things down into tiny chunks makes AI better, or its outputs more usable. Maybe there’s a “sweet spot”.
But I think people also worry that those who use AI often start to offload their own thinking onto it, and that’s dangerous for many reasons.
This person also admits to having depression. Depression can affect how you respond to information, how well you actually understand the information in front of you. It can make you forget things you know, or make things that much harder to recall.
I know that from experience. So in this case does the AI have more potential to help or do harm?
There’s a lot to this. I have not personally used Lutris, but before this happened I wouldn’t have thought twice about saying that I’ve heard good things about it if someone asked me for a Heroic launcher style software for Linux.
But just like the Ladybird fork of Firefox I don’t know that I feel comfortable suggesting it if this is the state of things. For the same reason I don’t currently feel comfortable recommending Windows 11 or Chrome.
There are so many sensitive things that OSes and web browsers handle that people take for granted. If nobody was sounding the alarm about those, I feel like nothing would get better. By contrast, Lutris isn’t swimming in a big pond of sensitive information, but it is running on people’s hardware, and they should have both the right to be informed and the right to choose.


There’s a problem with that. The vast majority of Linux users are probably more tech-savvy than average, but I’d wager that not all of them, or even most, have the skills to vet the code.
Lots of the people in the gaming space who are having Lutris suggested/recommended to them are not going in to check that code for problems. They install the flatpak and move on with their lives.
It appears (from what I’ve read, which isn’t necessarily the end-all, be-all) that the people taking exception to the use of AI to code Lutris are doing so because they do decompile and vet code.
My understanding is that it’s harder to vet AI code in general, because when it hallucinates, it may do so in ways that appear correct on the surface, and/or in ways that give no significant indication of what the code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior engineers to check the output because of the lack of a frame of reference.
You’re asking people who don’t have the skills to ignore people who do have the skills who are sounding the alarm.
I get that this person is a single person writing code and disseminating it for free. I get that we should be thankful for free and open software. I fully understand why this person might use AI to help with coding.
I understand that they are upset about the backlash. But that was a very much foreseeable consequence of the credits they gave the AI (a choice they made), and honestly the use of AI (which might have been called out later on if they hadn’t credited it).
They shot themselves in the foot with the part of their response that was flippant and a “fuck you” to anyone who might find the use of AI concerning.
There’s also the fact that AI is something that a lot of people in the Linux community at large seem to already be boycotting, and boycotting derivatives of it makes sense.
Just because you create something for free doesn’t mean people have to use it. Or that people aren’t free to boycott it.


Because of the backlash from gamers/consumers/the general public, or because it was detrimental to the production of their product?


I’m with you so far, but I question how that’s still not the publisher’s fault and their liability.
The main reason is that when the publisher puts a game up for sale on Steam, that entity chooses whether or not to add gameplay data, including music and trailers. So they are choosing to give that information to Valve and giving Valve permission to use it. Which means they are the ones who didn’t have the legal ability, per their license, to do so, but did so anyway.
The best I could say for this lawsuit with those facts is that Valve is guilty of taking their word for it that they were legally allowed to use the posted video or audio in that way.
If I license something, and my license includes certain provisions for distribution but not for sale or advertisement, and I then choose to advertise, I should be liable for that breach, not the venue I used as the mode of advertisement.
This is like suing a billboard company for posting an ad with artwork I didn’t properly license for the advertisement space.


I’ve never thought of Roblox as being bad. I think of it as dangerous (for the reasons you state), but I also think of it as a “for us, by us” kind of deal, meaning it’s a place and a game (or games) for kids.
My son isn’t allowed to play it in our house (if he goes to visit family or friends I don’t make restrictions on that), and I have the same rule for pretty much all online games including things like Fortnite. This is mostly because of voice chat.
I do think the benefits you list are an important part of the conversation that don’t get brought up much.


I don’t like people but I like them more than I like AI.
I also grew up in the 80’s when you’d ask an older sibling to beat the boss for you.
Think about the people you willingly surround yourself with. Then think about how often they agree with the things you think and say.
As the saying goes “I’m sure there’s someone out there who believes the exact opposite of everything I believe, and while I’m sure they aren’t a complete idiot…”
Everyone is susceptible to the feedback loop. Everyone can fall victim to the seduction of an echo chamber. While not everyone would ignore the red flag that this thing is a machine/digital algorithm rather than a person or sentient/sapient being, it’s not really that hard to see how we got here. Echo chambers exist all over the internet. The difference is that most of them have some voices of dissent. The AI LLM doesn’t offer that. They keep trying to add it in but it’s basically antithetical to the design.
When you add the fact that making it addictive benefits their bottom line, it’s pretty obvious that they are trying to walk the line between being regulated by the government and making their product as popular as possible.
I don’t think they really knew it would have this exact effect. But I do think they plan to take advantage of it now that they know and I don’t think we humans are all going to be able to fight the temptation of an automated propaganda machine.
This is especially true because mental health and healthcare in this country have been failing for decades, and even people who “don’t have mental health problems” aren’t magically mentally healthy; they just don’t know the status of their mental health. A whole lot of people, in the US especially, are mentally ill or facing neurological medical problems that they don’t know about.