I did see that some flying taxi trials are happening in China, so I think that’s the most likely use case for them, assuming they’re automated.
also doesn’t require burning down a rain forest each time you run a query
Good point, I don’t think Xiaomi was on anybody’s radar here.
A bunch of Chinese companies are doing chip fabrication right now. It’s a big lucrative market that’s opening up now that western companies can’t sell advanced chips to China.
The complexity here lies in having to craft a comprehensive enough spec. Correctness is one aspect, but another is performance. If the AI craps out code that passes your tests but does it in a really inefficient way, then it’s still a problem.
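To make that concrete, here’s a minimal sketch (my own toy example, not from any particular AI tool): two Fibonacci implementations that pass the exact same input/output tests, even though one is exponential-time. A spec made only of test cases can’t tell them apart.

```python
# Both implementations pass the same correctness tests, but fib_slow is
# exponential-time. Input/output tests alone never reveal the difference.
def fib_slow(n):
    # Naive recursion: O(phi^n) calls.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n):
    # Iterative: O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

tests = [(0, 0), (1, 1), (10, 55)]
assert all(fib_slow(n) == expected for n, expected in tests)
assert all(fib_fast(n) == expected for n, expected in tests)
# Try fib_slow(40) and the performance gap becomes obvious,
# but no test in the spec above would ever catch it.
```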
Also worth noting that you don’t actually need AI to do such things. For example, Barliman is a tool that can do program synthesis. Given a set of tests to pass, it attempts to complete the program for you. Synthesis is performed using logic programming. Not only is it capable of generating code, it can also reuse code it’s already come up with as a basis for solving bigger problems.
https://github.com/webyrd/Barliman
here’s a talk about how it works https://www.youtube.com/watch?v=er_lLvkklsk
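The core idea is easy to illustrate. Barliman itself works relationally over a miniKanren interpreter; the sketch below is just a naive enumeration in Python (my own toy, not how Barliman is actually implemented) showing the same shape of problem: given only input/output tests, search for an expression that satisfies all of them.

```python
# Toy test-driven synthesis: enumerate small expression trees over a
# variable x and constants, and return the first one passing every test.
# (Barliman does this relationally with miniKanren; this is brute force.)
import operator

OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def candidates(depth):
    """Yield expression trees up to the given depth."""
    if depth == 0:
        yield ("x",)
        for c in range(4):
            yield ("const", c)
        return
    yield from candidates(depth - 1)
    for op in OPS:
        for left in candidates(depth - 1):
            for right in candidates(depth - 1):
                yield (op, left, right)

def evaluate(expr, x):
    if expr[0] == "x":
        return x
    if expr[0] == "const":
        return expr[1]
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def synthesize(tests, max_depth=2):
    """Complete the program from examples: first expr passing all tests."""
    for expr in candidates(max_depth):
        if all(evaluate(expr, x) == y for x, y in tests):
            return expr
    return None

# From f(2)=4, f(3)=9, f(5)=25 it recovers an expression equivalent to x*x.
print(synthesize([(2, 4), (3, 9), (5, 25)]))
```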
Yeah, TSMC called the US’s bluff here.
we absolutely do have green hydrogen at the moment https://rmi.org/insight/chinas-green-hydrogen-new-era/
Last I checked libs are the actual fascists participating in a literal genocide in Gaza and supporting ethnic cleansing in Donbas by the fascist regime the west installed in Ukraine in a violent coup.
These are different tools for different use cases. As we currently see in Ukraine, air superiority plays a huge role right now.
same
libs and coping, name a more iconic duo
could be advancements like this that now make it possible https://interestingengineering.com/energy/china-achieves-fusion-milestone-neural-networks
exactly
I’m saying that the medium of text is not a good way to create a world model, and the problems LLMs have stem directly from people trying to do that. Just because autocomplete produces results that look fancy doesn’t make it actually meaningful. These things are great for scenarios where you just want to produce something aesthetically pleasing like an image or generate some text. However, this quickly falls apart when it comes to problems where there is a specific correct answer.
Furthermore, there is plenty of progress being made with DNNs and CNNs using embodiment which looks to be far more promising than LLMs in actually producing machines that can interact with the world meaningfully. This idea that GPT is some holy grail of AI seems rather misguided to me. It’s a useful tool, but there are plenty of other approaches being explored, and it’s most likely that future systems will use a combination of these techniques.
Actually, we do know that there are diminishing returns from scaling already. Furthermore, I would argue that there are inherent limits in simply using correlations in text as the basis for the model. Human reasoning isn’t primarily based on language; we create an internal model of the world that acts as a shared context. Language is rooted in that model, and that’s what allows us to communicate effectively and understand the actual meaning behind words. Skipping that step leads to the problems we’re seeing with LLMs.
That said, I agree they are a tool, and they obviously have uses. I just think that they’re going to be a part of a bigger tool set going forward. Right now there’s an incredible amount of hype associated with LLMs. Once the hype settles we’ll know what use cases are most appropriate for them.
Right, I find LLMs are fundamentally no different from Markov chains. It doesn’t mean they’re not useful; they’re a tool that’s good for certain use cases. Unfortunately, we’re in a hype phase right now where people are trying to apply them to a lot of cases they’re terrible at, and where better tools already exist to boot.
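For anyone who hasn’t played with one, here’s a word-level Markov chain in a few lines (a toy of my own, conditioning on just one previous word, whereas LLMs condition on long contexts through learned representations; still, the sampling loop has the same shape of "pick the next token from a distribution learned from text"):

```python
# Toy bigram Markov chain: learn word -> next-word transitions from text,
# then generate by repeatedly sampling a successor of the last word.
import random
from collections import defaultdict

def train(text):
    """Build a bigram model: each word maps to its observed successors."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the"))
```

Every adjacent word pair in the output occurs somewhere in the training text; that’s all the model knows.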
Should the research he’s discussing also be disregarded? https://arxiv.org/pdf/2410.05229
fully automated luxury communism on the way :)
I do think it’s probably more of a cool toy than a practical solution like a train. But, as long as it doesn’t take away from building more trains, I’m not too bothered. It’s also kinda cool to see them pushing the boundaries of technology. People have been dreaming about flying cars for ages, and now it’s like a symbol of the future. So, these things show off how advanced China has become in a way everyone can see.