• 0 Posts
  • 777 Comments
Joined 3 years ago
Cake day: June 22, 2023


  • Tl;dw: he has two points:

    1. That between cameras and now AI monitoring, the cost of running an authoritarian regime has dropped drastically. He claims that running the Stasi used to cost something like 20% of the government budget, but the same surveillance can now be done for next to nothing, and it will be harder for governments to resist that temptation.

    2. That there hasn’t been much progress in physics since the 70s, so what happens if you point AI and its compute power at the field? We could see wondrous progress and a world of plenty.

    Personally I think point 1 is genuinely interesting and valid, and that point 2 is kind of incredible nonsense. Yes, all other fields are just simplified forms of physics, and physics fundamentally underlies all of them. That doesn’t mean that no new knowledge has come from those fields, and it doesn’t mean that new knowledge in physics automatically improves them. Physics has, in many ways, done its job. Obviously there’s still more to learn, but between quantum mechanics and general relativity, we can model most human-scale processes in our universe with incredible precision. The problem is that the closer we get to understanding the true underlying math of the universe, the harder it is to compute that math for a practical system… at a certain point, it requires a computer on the scale of the universe itself (see the rough sketch at the end of this comment).

    Most of our practical improvements in the past decade have come, and will continue to come, from chemistry, biology, and engineering in general, because there is far more room to improve human-scale processes by finding shortcuts and patterns and designing systems to behave the way we want. AI’s computer-scale pattern-matching ability will undoubtedly help with that, but I think it’s less likely that it can make any true physics breakthroughs, or that those breakthroughs would impact daily life that much.

    Again though, I think that point number 1 is incredibly valid. At the end of the day incentives, and specifically cost incentives, drive a massive amount of behaviour. It’s worth thinking about how AI changes them.
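
    To put a rough number on the “computer on the scale of the universe” claim, here’s a back-of-the-envelope sketch. It’s my own illustration, not from the video, and the helper name and byte count are just assumptions for the example: exactly simulating a quantum system means storing a state vector that doubles in size with every particle you add.

    ```python
    # Rough scaling of exact quantum simulation: a system of n two-level
    # particles (qubits) has a state vector of 2**n complex amplitudes.
    BYTES_PER_AMPLITUDE = 16  # one complex number at double precision (assumed)

    def state_vector_bytes(n_particles: int) -> int:
        """Memory needed to store the full state vector exactly."""
        return BYTES_PER_AMPLITUDE * 2 ** n_particles

    for n in (10, 30, 50, 100):
        print(f"{n:>3} particles -> {state_vector_bytes(n):.2e} bytes")

    # ~10 particles fits on a phone, ~50 already outstrips the biggest
    # supercomputers, and by ~300 the amplitude count alone exceeds the
    # number of atoms in the observable universe (~10**80).
    ```

    That exponential wall is exactly why the practical gains come from chemistry, biology, and engineering shortcuts rather than from more fundamental physics.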


  • I agree with everything you’re saying, but even speaking specialist to specialist, or to a group of specialist colleagues who might not be working on exactly what you’re working on, you still often simplify away the technical parts that aren’t relevant to the specific conversation and use precise language only on the parts that are, because that inherently helps the listener focus on the technical aspects you want them to focus on.


  • If you’re communicating with another scientist about the actual work you’re doing, then sure, there are times when you need to be specific.

    If you’re publishing official documentation on something or writing contracts, then yes, you also need to be extremely specific.

    But if you’re just providing a description of your work to a non-specialist, then no, there’s always a way of simplifying it for the appropriate context. The same goes for most specialist-to-specialist communication. There are specific sentences and moments where you use precision to distinguish between two different things, but if you insist on always speaking with maximum precision and accuracy, that’s simply poor communication: you’re providing unnecessary detail that detracts from the actual point you’re trying to convey.





  • Eh, I don’t really agree, depending on how simple you’re talking. If you mean bags within bags, or dumbing things down to a grade-school level, then sure, there are topics that can’t be described succinctly.

    But if you’re talking about simplifying things to the point that anyone who took a bit of undergrad math/science can understand them, then pretty much everything can be described in simple, easy-to-understand ways.

    Don’t get me wrong, I’ve seen many people at the top who can’t, but in every case it’s not because of the topic’s inherent complexity; it’s either because they don’t actually understand the topic as well as they seem to, or because they lack the social skills (or time / effort / setting) to properly analogize and adjust for the listener.










  • Exactly. Sci-fi writers almost never invent an entirely new technology for their books; they look at current technology, think a bit about where it might head, think about how that could interact with broader societal forces, realize some flaw therein, and write about it.

    Technologists are doing basically the same thing: looking at current technology, thinking about where it might head and what might be useful and/or profitable, and then trying to overcome current obstacles to develop and build it.

    But one of them takes a single person a year or two to write a book, while the other has to do research, build things, test them, break them, get funding, overcome the current obstacles, etc. If they start at the same time, it will look like the technologist has just built what they were warned not to, when in reality they’ve been building it the whole time on a parallel path.