And we will have even more energy when this internet fad dies off
In fairness, you can't just say it's not a zero-sum game when the article is supported by a quote from one individual saying they were glad it told them in some cases. We don't know how effective it is.
This is normalizing very intimate (and automated) surveillance. Kids all have smartphones and can google anything they want when they aren't using school hardware. If kids have any serious premeditation to do something bad, they will do it on their smartphones.
The only way this would be effective is if it catches students before they are aware they are being watched (poof, that's gone tomorrow), or if the student is so dirt poor that they don't have a smartphone or craptop.
And what else will the student data be used for? Could it be sold? It would certainly have value. Good intentions are right now… data is FOREVER.
Counterpoint: once this isn't an obscure thing and kids are aware of it, they will purposely use trigger words, because they are kids.
If kids/people are having mental health issues, what's the best way to handle that? By scanning for the symptom and telling them to stop being mentally troubled? I really doubt kids are getting the care they need based on these flags. Seems like a band-aid for the cultural/systemic issues that cause the mental illness/harm.
!remind me in 1 year
And what happens when you kill a sack of puppies you monster?
I mean, I guess this is related to technology??
I'm going to hold you to that.
No, they will judge you (the original commenter) as being above the law, and they will be wrong, which doesn't matter, as long as we feel continuity with our synthesized narrative.
Because truth doesn't matter. Our narrative just needs to be as loud as the opposition's, and then we can confuse people just like those in power… and then the impressionable people trying to understand what's going on or what's morally right will believe one side or the other, and truth will not need to be discussed, because it's not as catchy anyway.
Then people won't need to be trusted to form their own worldview based on facts; they can neatly choose between a few curated viewpoints, and holding views from multiple viewpoints will isolate them from relevance when they are shunned for not memeing their ideology like everyone else.
And nobody in this scenario has done anything illegal.
Do you even know what robots.txt is?
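For anyone who doesn't: robots.txt is a plain-text file a site owner puts at the web root to tell well-behaved crawlers which paths not to fetch. A minimal example (the paths here are illustrative, not from any site mentioned in this thread):

```
User-agent: *
Disallow: /private/
Allow: /
```

It's purely advisory; a crawler that ignores it isn't blocked, just impolite.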
Not super catchy…
There's no particular fuck-up mentioned in this article.
The company that conducted the study this article speculates on said these tools are getting rapidly better, and that they aren't suggesting a ban on AI development assistants.
Also, as quoted in the article, using these coding assistants is a process in and of itself. If you aren't using AI carefully and iteratively, then you won't get good results with current models. How we interact with models is as important as the models' capability. The article quotes that if models are used well, a coder can be 2x or 3x faster. Not sure about that personally… seems optimistic, depending on what's being developed.
It seems like a good discussion with no obvious conclusion, given the infancy of the tech. Yet the article's headline and accompanying image suggest it's wreaking havoc.
Reducing the complexity of this topic serves nobody. We should have the patience and impartiality to watch it develop and form opinions independently of commenter and headline sentiment. Groupthink has been particularly dumb on this topic, from what I've seen.
You are speaking for everyone, so right away I don't see this as an actual conversation, but a decree of fact by someone I know nothing about.
What are you saying is an important reminder? This article?
By constant activism, do you mean anything that occurs outside of Lemmy comments?
Why would we not take LLMs seriously?
It's really weird.
I want to believe people aren't this dumb, but I also don't want to be crazy for suggesting such nonsensical sentiment is manufactured. Such is life in the disinformation age.
Like, what are we going to do, tell all countries and fraudsters to stop using AI because it turns out it's too much of a hassle?
Old-looking shit with modern interiors should actually happen
LLMs look for patterns in their training data. So if you asked "2+2=", it would look at its training data and find a high likelihood that the text following "2+2=" is "4". It's not calculating; it's finding the most likely completion of the pattern based on the data it has.
So it's not deconstructing the word "strawberry" into letters and running a count… it tries to finish the pattern, and it fails at simple logic tasks that aren't baked into the training data.
But a new model, ChatGPT o1, checks against itself in ways I don't fully understand, and it now scores something like 85% on an international standardized math test, so they are making great improvements there. (Compared to a score of around 14% from the model that can't count the r's in "strawberry".)
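For contrast with the pattern-completion point above: counting letters is trivial for ordinary code, because a program actually deconstructs the string instead of predicting likely next tokens. A minimal Python sketch (the function name is mine, just for illustration):

```python
def count_letter(word: str, letter: str) -> int:
    """Deterministically count occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

This is exactly the kind of explicit, step-by-step operation an LLM isn't doing when it autocompletes "how many r's are in strawberry".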
Granted, I haven't tried GIMP 3. I'll have to give it a try. But I'm so fast with PS, and I hate how each program needs its own control scheme just to differentiate itself.
I just don't get why people hate Photoshop to the point of being unhelpful when people ask how to get it working. Especially when many people are pirating it anyway.
GIMP sucks.
And PS works with Wine if you already have it on Windows and drag over some system32 DLLs.
Well, it's shorter.