• DaddleDew@lemmy.world · 1 day ago

    An image is worth a thousand words. How is reading a text describing what is on the screen going to be better than just looking at the screen yourself, something you’ll need to do to read the description anyway? Aside from accessibility for the blind, the practicality of such a technology is questionable.

    The motivation behind this is obviously to facilitate the collection and reporting of user profiling data. Accessibility for the blind is only a side effect. Tech companies have been doing this with automated audio transcriptions for years already; now they’re after what you look at on your screen.

      • DaddleDew@lemmy.world · 1 day ago

        Read the whole post. I already acknowledged them, and I am expressing my doubts about the true motivations driving Microsoft to force a technology like this upon all their users, and my concerns about the real use they will make of it.

        Don’t you try to change the meaning of my post just so you can have a cause to white knight over. This isn’t Reddit.

    • rebelsimile@sh.itjust.works · 1 day ago

      This is 100% right: you don’t need an AI to describe something you’re already looking at. This is an absurd feature (again, aside from the accessibility portion, but that’s not what this is).