• MinnesotaGoddam@lemmy.world
    9 hours ago

    yeah, and how many iterations did they do? how much did they storyboard? did the AI do the entire thing, and if so, did it follow the traditional concept-to-storyboard-to-scene process to generate it?

    we have a specific process that folks follow when making a film

    does the AI do something different? could we look at the tree and pick a different fork if we didn’t like a decision, rather than having to ask an entirely new question?

    it’s fascinating.

    • WalnutLum@lemmy.ml
      5 hours ago

      Afaik seedance only has like three modes of generation:

      1. Text to video
      2. First frame + text to video
      3. First frame + last frame + text to video

      From what I’ve seen, you can specify time ranges in the text for certain things, like “1-3s: slow pan in”, etc.

      People will use something like Google’s nano banana to generate still frames from a storyboard-like prompt, then have seedance generate the video for each 12-second-or-so portion
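
      That pipeline (stills as keyframes, short clips stitched between them) can be sketched roughly like this. Everything below is a hypothetical illustration, not the real Seedance or nano banana API: the function, field names, and the 12-second cap are all assumptions made up for the sketch.

      ```python
      # Hypothetical planner for the storyboard-to-video workflow described above.
      # None of these names come from the actual Seedance API; they just model
      # the three generation modes: text-only, first frame + text, and
      # first frame + last frame + text.

      def plan_jobs(shots, max_clip=12):
          """Split storyboard shots into video-generation jobs of at most
          `max_clip` seconds each.

          `shots` is a list of (prompt, duration_seconds) pairs. Consecutive
          jobs share a boundary still: the previous job's last frame becomes
          the next job's first frame, so clips chain together smoothly.
          """
          jobs = []
          for prompt, duration in shots:
              start = 0
              while start < duration:
                  length = min(max_clip, duration - start)
                  jobs.append({
                      "prompt": prompt,
                      "length_s": length,
                      # the very first clip is plain text-to-video; later
                      # clips anchor on the previous clip's final still
                      "first_frame": jobs[-1]["last_frame"] if jobs else None,
                      # placeholder id for a still you'd generate separately
                      "last_frame": f"still_{len(jobs) + 1}",
                  })
                  start += length
          return jobs

      jobs = plan_jobs([
          ("slow pan in on the house", 8),
          ("cut to interior, camera dollies left", 20),
      ])
      ```

      Here the 20-second shot gets split into a 12-second and an 8-second job, and each job after the first carries the previous one's last frame, matching the first-frame + last-frame + text mode.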