LLMs are not the path forward to simulating a person, and that's by design. They cannot reason; it's not a matter of further advancement, it's how they work in principle. They are a statistical trick that generates text which merely looks like thought-out phrases, with no reasoning involved.
If someone tells you they might be the way forward to simulating a human, they are scamming you. No one who actually knows how they work says that, unless they are the CEO of a trillion-dollar company selling AI.

Oh, it probably wasn't about an existing language, but about someone studying what would become high-level languages, like studying linkers and symbolic representations of programs.