• collapse_already@lemmy.ml · 1 day ago

    What I don’t see is an explanation for why developing AI should be a priority. I don’t even think LLMs are really AI, so why should they get a free pass for their shoddy plagiarism? I haven’t seen any convincing evidence that LLMs are a step towards AGI.

    I am really dreading the LLM crime-prediction future. It will be like a really bad version of Minority Report. CRIMESTOPPER 3000 says you’re a minority, you’re poor, and you criticize our largest shareholder, therefore you’re going to jail to prevent your inevitable crime.

    We should be powering these energy hogs off before they make the majority of us poorer.