

Omg, a sustainable, repairable, and open source project costs the same as a closed source, non repairable, locked down option … Those are totally the same thing!
/S


There is an open source project to replace the innards:


No it’s not.
It might be to you, but there are enormous numbers of elderly and disabled people who would benefit from more assistance.
I still wouldn’t trust a robot around them given how inherently dangerous a massive motorized contraption is, but we also shouldn’t be blind to accessibility and utility just because we don’t personally need it.


And your point is wrong because you keep boiling it down to simple black and white.
The Nobel prize is not purely political and is not devoid of merit.
The world is not full of binary systems. It’s made of multivariable systems where multiple influences can be true at the same time.
If you want to make a point about why accurately predicting the structures of hundreds of thousands of proteins doesn’t deserve the Nobel in chemistry, then I’m all ears. Please tell us all exactly why you think their prize was political rather than meritocratic, and why predicting protein structures automatically isn’t important.
Because if you can’t answer that very specific question, then you weren’t making a point relevant to the conversation; you were making a snide generalization to hear yourself speak.


Thank you for finally spewing out the point you wanted to make from the jump. It’s irrelevant in the context of the original discussion, but you got to hear yourself talk.


Lmao, it’s binary cause you say it’s binary.
Bro, grow up. The world is not black and white. Literally not a single award on the planet is meritocratic if you insist on dealing in absolutes. Every award is given by some committee, and some room is left for human judgement, which leaves room for human bias, which makes it not perfectly meritocratic.
If you want to go on an unhinged rant that no one wants to listen to, then email the Nobel Foundation directly; don’t waste federated server time.


Lol, if you rigidly define things binarily in a way that doesn’t reflect real-world systems, then sure, they’re binary.


This is false, it’s not a binary system. The prize is both.


We’re all just different parts of the universe looking back at itself in different ways.


> I’d argue that it sometimes adds complexity to an already fragile system.

You don’t have to argue that; I think that’s inarguably true. But more complexity doesn’t inherently mean worse.
Automatic braking and collision avoidance systems in cars add complexity, but they also objectively make cars safer. Same with controls on the steering wheel: they add complexity because you now often have two places where things can be controlled and increasingly have to rely on drive-by-wire systems, but HOTAS (Hands On Throttle And Stick) interfaces help keep you focused on the road and make the overall system of driving safer. While semantic modelling and control systems absolutely can make things less safe, done well they can also let a robot or machine act in more human ways (like detecting that it’s injuring someone and stopping, for instance).
> Direct control over systems without unreliable interfaces, semantic translation layers, computer vision dependencies, etc. serves the same tasks without additional risks and computational overheads.

But in this case, Waymo is still having to do that. They’re still running their sensor data through incredibly complex machine learning models that are somewhat black boxes, producing semantic understandings of the world around them, and then acting on those models of the world. The primary difference between Waymo and Tesla isn’t about complexity or direct control of systems; it’s that Tesla relies on camera data, which is significantly worse than the human eye and brain, whereas Waymo and everyone else supplement their limited camera data with sensors like lidar and sonar that can see in ways and situations humans can’t, which lets them compensate.
That, and Waymo is actually a serious engineering company that takes responsibility seriously, takes far fewer risks, and is far more thorough about failure analysis, redundancy, etc.
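One way to see why redundant sensors matter is a toy fusion sketch. Nothing below reflects Waymo’s or Tesla’s actual software; the fusion rule, confidence numbers, and threshold are all invented for illustration, but they show how two independent, imperfect sensors can together be far more reliable than either alone:

```python
# Toy sensor-fusion sketch (illustrative only, not any real driving stack).
# Treat each sensor's confidence as an independent detection probability
# and combine the chances that BOTH sensors miss the obstacle.

def fuse_confidences(camera_conf: float, lidar_conf: float) -> float:
    """P(obstacle detected) = 1 - P(camera misses) * P(lidar misses)."""
    return 1.0 - (1.0 - camera_conf) * (1.0 - lidar_conf)

def should_brake(camera_conf: float, lidar_conf: float,
                 threshold: float = 0.9) -> bool:
    """Brake when the fused confidence clears an (invented) threshold."""
    return fuse_confidences(camera_conf, lidar_conf) >= threshold

# Fog: the camera barely sees the obstacle, but lidar still does.
print(should_brake(camera_conf=0.3, lidar_conf=0.95))  # True
# Neither sensor is confident: don't hard-brake on noise alone.
print(should_brake(camera_conf=0.3, lidar_conf=0.4))   # False
```

A camera-only system in the same fog is stuck at 0.3 confidence; adding the lidar reading lifts the fused estimate to about 0.97, which is the whole argument for multi-sensor redundancy in one line of arithmetic.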


“hur durr AI bad”
Read the fucking link. It literally won the Nobel prize.


They should. That’s how automation works. We should be building a society that doesn’t require as much work, not insisting on doing work that machines could do when we don’t want to.


You keep saying y’all and it’s telling.
Learn how to communicate with people, not the simplified boxes you put them in.
When you’re ready to have a conversation instead of just hearing yourself regurgitate mindless internet grandstanding I’m here.


LLMs are what’s usually sold as AI nowadays. Conventional ML is boring and too normal, not as exciting as a thing that processes your words and gives responses, almost as if it were sentient.
To be fair, that’s because there are a lot of automation situations where having semantic understanding of a situation can be extremely helpful in guiding action, compared to an ML model that is not semantically aware.
The reason AI video generation and outpainting are so good, for instance, is that the model analyzes a picture, divides it into human concepts using language, uses language to guide how those things can realistically move and change, and then applies actual image generation. Stuff like Waymo’s self-driving systems isn’t run through LLMs, but it is built on machine learning models operating on very similar principles to construct a semantic understanding of the driving world.
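That two-stage idea (raw inputs → semantic label → action) can be sketched in a few lines. Everything here is hypothetical: real systems learn both stages from data rather than hard-coding rules, and the labels and thresholds below are invented purely to show the structure:

```python
# Toy two-stage pipeline sketch: a "perception" stage maps raw readings
# to a human-level concept, and a "planning" stage acts on the concept
# instead of the raw numbers. Labels and rules are invented for illustration.

def perceive(distance_m: float, moving: bool) -> str:
    """Turn raw sensor readings into a semantic label."""
    if distance_m < 2.0:
        return "pedestrian_close" if moving else "obstacle_close"
    return "clear"

def plan(label: str) -> str:
    """Decide on the semantic label, not on the raw sensor values."""
    actions = {
        "pedestrian_close": "stop",
        "obstacle_close": "slow",
        "clear": "proceed",
    }
    return actions[label]

print(plan(perceive(1.5, moving=True)))    # stop
print(plan(perceive(10.0, moving=False)))  # proceed
```

The point of the split is that the planner reasons over concepts ("a pedestrian is close") rather than pixel values, which is the same shape of design as semantically aware driving stacks, just reduced to a toy.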


> I am treating you like a child because you refuse to use your brain.

No, you’re doing so because you started doomscrolling before you had coffee and now you’re trying to justify your uncalled-for rudeness.
> You gave me one obscure

It literally won the Nobel Prize.

> very early stage example

It is not early stage; predicting the structures of those proteins has already actively changed the course of biomedical science. This isn’t early-stage research that needs fleshing out, this is peer-reviewed published research that has caused entire labs and teams to completely change what they’re doing and how.

> that isn’t even connected to the overall rise in value of LLMs and other forms of AI

It is, in that it uses the same underlying type of algorithms and is literally from the same team that developed the “T” in ChatGPT.

> So you are claiming the next real AI revolution is justtttt around the corner with a totally new technology you swear?

I have not claimed that. I said that AI algorithms are likely to be part of our climate solutions and our ability to serve more people with less manual labour. They help to solve entirely new classes of problems and can do so far more efficiently than years of human labour.
Rage out about tech bubbles and hype bros if you want. Last time it was crypto, streaming before that, apps and mobile before that, social before that, the internet before that, etc etc. Hype bubbles come and go, sometimes the underlying technology is actually useful though.


You seem to be projecting about warped perspective.

> Sure LLMs and other forms of automation, artificial intelligence and brute forcing of scientific problems will continue to grow.

That’s not brute forcing of a scientific problem; it’s literally a new type of algorithm that lets computers solve fuzzy pattern-matching problems they never could before.

> What you are talking about though is extrapolating from that to a massive shift that just isn’t on the horizon.

I’m just very aware of the number of problems in society that fall into the category of fuzzy pattern matching / optimization. Quantum computing is also an exciting avenue for solving some of these problems, though it is incredibly difficult and complicated.

> You are delusional, you have read too many scifi books about AI and can’t get your brain off of that way of thinking being the future no matter how dystopian it is.

This is just childish name-calling.

> The value to AI just simply isn’t there, and that is before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon footprint reduction goals.

Quite frankly, you’re conflating the tech bro hype around LLMs with AI more generally. The ecological footprint of AlphaFold is tiny compared to previous methods of protein analysis, which took a lab full of people years to work out each individual structure. On top of the ecological footprint of all of those people and all of their resources for those years, they also had to use high-powered equipment like centrifuges and X-ray machines. AlphaFold did that hundreds of thousands of times with some servers in a year.

> Don’t come at me like you are being logical here, at least admit that this is the cool scifi tech dystopia you wanted and have been obsessed with. This is the only way you get to this point of delusion since the rest of us see these technologies and go “huh, that looks like it has some use” whereas people like you have what is essentially a religious view towards AI and it is pathetic and offensive towards religions that actually have substance to their philosophy and beliefs.

Again, more childish name-calling. You don’t know me; don’t act like you do.


I don’t have to dream; DeepMind literally won the Nobel Prize last year. My best friend did his PhD in protein crystallography, and it took him six years to solve the structure of a single protein underlying Legionnaires’ disease. He’s now at MIT and just watched DeepMind predict hundreds of thousands of them in a year.
If you vet your news sources by only listening to ones that are anti-AI then you’re going to miss the actual exciting advancements lurking beneath the oceans of tech bro hype.
Honestly, I haven’t seen a single article written by someone who actually understands the mathematics involved, so I call a huge amount of horseshit on your headline.