

Yes, the bed and the environment in general are part of the world model. What I mean is that's part of object identification and recognition of what objects to use for what task, etc. It's a separate concern from dexterity. Think of it this way: if you're thirsty and you pick up a cup, you're consciously thinking about moving your hand to grab the cup and bring it to your mouth. That's what the world model is concerned with. You're not aware of every individual muscle movement and all the micro adjustments that need to happen in order for the task to be completed. And that's what the running illustrates. It's the dexterity of the system in dealing with feedback from the world and making these adjustments in response.


You absolutely do have to deal with the impact of random events when you're doing anything in the physical world. You have wind, uneven ground, variations in weight distribution, and so on. That's what makes this sort of stuff so difficult in practice. All the tiny little errors quickly add up, so you can't just match expected input. You have to have a dynamic system that can adjust on the fly to the sensory data. Dealing with stuff like an uneven bed or a tilted surface is a completely separate problem from having a good enough world model internally.


And what specifically is it that you disagree with? But hey, I'm just a software engineer.


Running merely illustrates that the system can react with very little latency. It's obvious that this will be applicable in any application where the robot needs to quickly adapt to the environment, such as, say, factory work.


And that means we have robots that can exercise unprecedented body control in dynamic situations. If you don't understand the general applications of this, I really don't know what else to say to you.


which makes it all the more weird that the idea hasn’t been more widely adopted


I mean that's basically the idea behind neurosymbolic AI: have the LLM deal with natural language input, convert it to a formal spec, and hand it to a symbolic engine to execute https://arxiv.org/abs/2305.00813
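To make the hand-off concrete, here's a minimal sketch of that pipeline. The "LLM" is just a hard-coded stub, and the spec format (a tiny expression AST) is invented for illustration; the point is only the division of labor between the fuzzy front end and the deterministic engine.

```python
# Sketch of the neurosymbolic pipeline: natural language goes in,
# a formal spec comes out of the "LLM", and a symbolic engine
# executes the spec deterministically.

def llm_to_spec(text: str):
    """Stand-in for the LLM: turn natural language into a formal spec.
    A real system would prompt a model; here one example is hard-coded."""
    if text == "add 2 and 3, then double it":
        return ("mul", ("add", ("num", 2), ("num", 3)), ("num", 2))
    raise ValueError("unhandled input")

def execute(spec):
    """Symbolic engine: evaluate the spec by structural recursion."""
    op = spec[0]
    if op == "num":
        return spec[1]
    if op == "add":
        return execute(spec[1]) + execute(spec[2])
    if op == "mul":
        return execute(spec[1]) * execute(spec[2])
    raise ValueError(f"unknown op: {op}")

print(execute(llm_to_spec("add 2 and 3, then double it")))  # 10
```

The engine never sees natural language, so its output is reproducible and checkable, which is the whole appeal of the split.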


he posted this on his official account



So far, war crimes are still scheduled for tomorrow pending a TACO.


it’s always nice to get validated in your logic though :)
Ah yes, misinformation and propaganda as reported by mainstream western sources. Seeing chuds like you lose their shit really is rewarding ngl.
you keep on malding there kiddo, also why would you assume I’m Chinese? 🤣


kind of yeah, incidentally I experimented with a similar idea in a more restricted domain and it works pretty well https://lemmy.ml/post/41786590


Basically, the idea is to use a symbolic logic engine within a dynamic context created by the LLM. Traditionally, the problem with symbolic AI has been creating the ontologies. You obviously can't have a comprehensive ontology of the world because it's inherently context dependent, and there are an infinite number of ways you can contextualize things. What the neurosymbolic approach does is use LLMs for what they are good at, which is classifying noisy data from the outside world and building a dynamic context. Once that's done, it's perfectly possible to use a logic engine to solve problems within that context.
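A toy sketch of what that split could look like. The "LLM" is stubbed out, and the facts, rules, and predicate names are all made up for illustration; the symbolic side is a tiny forward-chaining engine over whatever context the classifier produced.

```python
# Neurosymbolic split: a stubbed "LLM" classifies noisy input into
# symbolic facts (the dynamic context), then a small forward-chaining
# engine derives new facts from rules within that context.

def llm_build_context(observation: str):
    """Stand-in for the LLM: extract symbolic facts from raw input."""
    facts = set()
    if "cup" in observation:
        facts.add(("container", "cup"))
    if "water" in observation:
        facts.add(("holds", "cup", "water"))
    return facts

# Rules are (premises, conclusion); "?x" is a variable.
RULES = [
    ([("container", "?x"), ("holds", "?x", "water")],
     ("drinkable_from", "?x")),
]

def match(premise, fact, bindings):
    """Unify one premise against one fact, extending bindings."""
    if len(premise) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(premise, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def all_matches(premises, facts, bindings):
    """Yield every binding that satisfies all premises."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = match(premises[0], fact, bindings)
        if b is not None:
            yield from all_matches(premises[1:], facts, b)

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new_facts = set()
        for premises, conclusion in rules:
            for b in all_matches(premises, derived, {}):
                new_facts.add(tuple(b.get(t, t) for t in conclusion))
        fresh = new_facts - derived
        if fresh:
            derived |= fresh
            changed = True
    return derived

facts = llm_build_context("a cup of water on the table")
result = forward_chain(facts, RULES)
print(("drinkable_from", "cup") in result)  # True
```

The ontology here only has to cover the current context, not the world, which is exactly what sidesteps the classic symbolic AI bottleneck.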


Trump would be parading the pilot on TV like it was the second coming if they had him.
no prob