• 2 Posts
  • 800 Comments
Cake day: October 19th, 2024

  • That’s the Victorian era for you. But for many earlier centuries women went out all the time to take care of various errands, so they must have had some form of acceptable public peeing, even if it’s not written down for us to study. Maybe they just gathered up their skirts and squatted behind a designated bush. Definitely sounds like the kind of thing Queen Victoria would have suppressed.

  • Okay, here are my estimates:

    1: 100%, but I don’t have a timeline. It’s not going as fast as the cultural hype suggests. We don’t even really understand human thinking yet, let alone how to make a computer do it. But I’m sure we’ll get there eventually.

    2: Also 100%. AI doesn’t need to decide on its own to kill all humans; it could be assigned that goal by some maniac. The barrier to possessing sophisticated AI software is nowhere near as high as the barrier to getting destructive nuclear weapons, bioweapons, etc. Sooner or later, I’m sure somebody who doesn’t think humanity should exist will try to unleash a malevolent AI.

    3: At or near zero, and I only include “or near” because mistakes happen. Automated systems that could potentially destroy the human race should always include physical links to people - for example, the way actually launching a nuclear missile requires physical actions by human beings. But of course there’s always the incompetence factor, which could annihilate the human race without any help from AI.

    It’s not enough to propose a “plausible” scenario; you also need to present a reason to believe it will happen. It’s plausible that a rogue faction could infiltrate the military, gain access to launch codes, and deliberately start WWIII. It’s plausible that a bio lab could create an organism that overcomes the human immune system and resists all medications. A nonzero chance of any of these happening isn’t proof that they’re inevitable, with or without AI.