• tal@lemmy.today · 12 hours ago

    I’ve got some pessimistic views on long-term AI risk. I’m not sure that aligning advanced AI goals with human goals over the long run is a solvable problem. We may never be able to achieve Friendly AI. I could believe that.

    But I certainly don’t think that AI development is “moving too fast”. There’s not really anything to be gained by slowing development down. I remember Elon Musk proposing a six-month moratorium on development; that doesn’t make any sense. A moratorium is only something you’d want if there were an imminent milestone that you believed carried major risk. In general, either AI is such an existential risk to humanity that it should be banned globally, with all development halted and that halt enforced, or you’d like to achieve it as soon as possible. We are not at a point where there is a consensus that that level of unacceptable risk exists, nor a global commitment to enforcing such a prohibition.

    I can believe that there might be an excess of infrastructure development in particular, and that the research side might not be moving quickly enough to support it. Like, we might be misallocating capital by buying a lot of specific chips without establishing that those chips are going to provide a worthwhile return. But in terms of the technology advancing…no, can’t agree there.

    • tal@lemmy.today · 12 hours ago

      And…let me make it even more concrete. I’d say that there are basically two scenarios:

      1. We establish that AI — for some definition of AI — is simply too dangerous for humanity to have. In that case, the right path is to ban AI globally. That means that nobody gets it. Some coalition of countries is going to have to be willing to attack anyone who tries to develop it. In that case, what we have is effectively an arms control restriction baked into customary international law. Participation is not optional. And, for the rest of humanity’s future, we need to be willing to enforce that. That means we need a viable verification protocol to ensure that nobody is developing it, as is normally the case for arms treaties. And everyone has to submit to that verification protocol.

      2. We don’t. In that case, we want to develop AI sooner rather than later.

      I am certainly not willing to say that #2 is the “right” scenario and #1 is the “wrong” one. But if we decide on #1, that comes with a lot of things that we need to be doing as a species. It won’t just be the pre-computer-era status quo persisting, where our limited state of technology was what maintained the situation.

      EDIT: I’d also add that, just as I’m not sure that Friendly AI is a solvable problem, I’m also not sure that it’s really viable to have a verification protocol that can prevent the development of AI. Past arms control treaties were not always successful even where verification was likely much easier; it’s hard to hide the construction of major warships, for example, yet parties still evaded the restrictions of the Washington Naval Treaty. #1 comes with its own set of hard problems too. Are parallel compute processors legal? What about their development and production? Under what restrictions may they be used? Is it possible to achieve advanced AI using CPUs (my guess is that it likely is)? If so, what new restrictions will need to be placed on access to and use of CPUs? How will we identify entities building production facilities for CPUs and GPUs? Will we need to track all existing CPUs and GPUs to try to identify entities who might be stockpiling them? How will we monitor what the great stores of them already out there are being used for?

      If we go with #1, that also entails a different world from the one that we live in today.