The model, called GameNGen, was built by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist. According to their paper on the research, the AI-generated version of the game can be played for up to 20 seconds while retaining all the features of the original, such as scores, ammunition levels and map layouts. Players can attack enemies, open doors and interact with the environment as usual.

After this period, the model begins to run out of memory and the illusion falls apart.

  • bamboo@lemm.ee · 2 months ago

    “No code” programming has been a thing for a while, long before the LLM boom. Of course all the “no code” platforms generate some kind of code based on rules provided by the user, not fundamentally different from an interpreter. This is consistent with that established terminology.

    • Blue_Morpho@lemmy.world · 2 months ago

      “No code” programming meant using a GUI to draw flowcharts that were then turned into running code. This is completely different.

      • bamboo@lemm.ee · 2 months ago

        Using a different high-level interface to generate code is completely different? The fundamental concept is the same even if the UI is very different.

        • Blue_Morpho@lemmy.world · 2 months ago (edited)

          Yes, it’s completely different. “No code” is actually all code, just written graphically instead of with words. Every instruction that ends up as CPU instructions still has to be drawn on the flowchart. If you want the “no code” tool to add A + B, you have to write A + B in a box on the flowchart. Have you taken a computer class? You must know what a flowchart is.
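
          To make the distinction concrete, here is a minimal sketch (hypothetical box names, not any real “no code” product) of what such a tool actually executes: every box the user draws maps one-to-one to an explicit instruction, run by a plain interpreter.

          ```python
          # Hypothetical sketch of a "no code" flowchart program.
          # Each box the user draws is an explicit instruction.
          flowchart = [
              {"box": "input",  "var": "A"},
              {"box": "input",  "var": "B"},
              {"box": "add",    "args": ["A", "B"], "out": "SUM"},
              {"box": "output", "var": "SUM"},
          ]

          def run(flowchart, inputs):
              """Deterministically execute each drawn box in order."""
              env, outputs = {}, []
              for node in flowchart:
                  if node["box"] == "input":
                      env[node["var"]] = inputs[node["var"]]
                  elif node["box"] == "add":
                      a, b = node["args"]
                      env[node["out"]] = env[a] + env[b]
                  elif node["box"] == "output":
                      outputs.append(env[node["var"]])
              return outputs

          print(run(flowchart, {"A": 2, "B": 3}))  # [5]
          ```

          Every behaviour of the program traces back to a box somebody drew, which is exactly the property the neural-net Doom lacks.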

          This Doom was done by having a neural net watch Doom being played. It then recreates the images from Doom based on what it “learned”. It doesn’t have any code for “mouse click -> call fire shotgun function”. Instead, it saw that when someone clicked the mouse, pixels on the screen changed in a particular way, so it simulates the same pixel pattern it learned.
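
          By contrast, here is a rough sketch of that kind of playable simulation loop (hypothetical names and context length; GameNGen itself uses a diffusion model, and this is only the interface, not the paper’s code):

          ```python
          import numpy as np

          def dummy_model(past_frames, past_actions):
              # Stand-in for the trained network (GameNGen uses a
              # diffusion model); here we just emit noise in the right
              # shape so the sketch runs. There is no game logic
              # anywhere in this function.
              return np.random.rand(*past_frames[-1].shape)

          def play(model, first_frame, get_action, steps=400):
              """Generate gameplay frame by frame from a learned model."""
              frames, actions = [first_frame], []
              for _ in range(steps):
                  actions.append(get_action(frames[-1]))
                  # The only "engine" is the model: given recent pixels
                  # and recent inputs, predict the next frame's pixels.
                  # A mouse click changes the output only because, in
                  # training data, certain pixel changes tended to
                  # follow a click. Context length is illustrative.
                  frames.append(model(frames[-64:], actions[-64:]))
              return frames

          frames = play(dummy_model, np.zeros((240, 320, 3)),
                        lambda f: "fire", steps=20)
          ```

          That rolling context window is also presumably why the illusion degrades: anything older than the window is simply forgotten, which matches the short playable stretch the article describes.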