• ricecake@sh.itjust.works
      3 hours ago

      Potentially. Since we don’t know how any of it works because it doesn’t exist, it’s entirely possible that intelligence requires sentience in order to be recognizable as what we would mean by “intelligence”.

      If the AI considered the work trivial, or if it could do it faster or more precisely than a human, those would also be reasons to want one.
      Alternatively, we could design them to simply enjoy doing what we need. Knowing they were built to like a thing wouldn’t make them not like it. Food is tasty to motivate me to get the energy I need to live, and knowing that doesn’t lessen my enjoyment.

      • EndRedStateSubsidies@leminal.space
        3 hours ago

        Cells within cells.

        Interlinked.

        This post is unsettling. While LLMs definitely aren’t reasoning entities, the point is absolutely bang on…

        But at the same time, it reads like a comment from a bot.

        Is this a bot?