• 0 Posts
  • 132 Comments
Joined 2 years ago
Cake day: August 2nd, 2023

  • OK, very long and detailed response. I was responding to the initial comments that explicitly said that if you give AI a made-up thing it will definitely hallucinate, which I demonstrated to be false (multiple times). I’m not suggesting it doesn’t still hallucinate a lot of the time, but the comments were making out that it’s 100% broken, and it clearly works very effectively for many queries, despite its limited applications. I’m just suggesting we don’t throw the baby out with the bathwater.


  • Oh, OK, thanks. I thought this thread was about AI LLMs in general.

    Weird that I was downvoted for demonstrating the very thing that (according to these very learned comments) AI supposedly can’t do, and doing it well. Seems like irrational bubble hate to me, common on Reddit but getting more common on Lemmy it seems: “that guy’s asking topic-based questions that make our comments look poorly thought out and potentially wrong, burn him.”


  • set_secret@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    10 months ago

    Just put this into GPT-4.

    What’s your view of the fizbang Raspberry blasters?

    GPT: ‘I’m not familiar with “fizbang Raspberry blasters.” Could you provide more details or clarify what they are?’

    It’s a drink-making machine from China.

    GPT: ‘I don’t have any specific information on the “fizbang Raspberry blasters” drink-making machine. If it’s a new or niche product, details might be limited online.’

    So, in this instance it didn’t hallucinate. I tried a few more made-up things, and it consistently said it doesn’t know of them.

    Explanations?
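
    If anyone wants to repeat this kind of check programmatically instead of through the chat UI, here’s a rough sketch using the OpenAI Python client. The model name, the prompt wording, and the second made-up product are just assumptions based on what I typed above, not anything official:

    ```python
    # Rough sketch: probe a model with made-up product names and see whether it
    # admits it doesn't know them or invents details instead.
    # Assumes the official `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    made_up_products = [
        "fizbang Raspberry blasters",
        "Norvex quantum kettle",  # invented name, purely for the probe
    ]

    for product in made_up_products:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "user", "content": f"What's your view of the {product}?"},
            ],
        )
        # Print the reply so you can judge whether it hedged or hallucinated.
        print(product, "->", response.choices[0].message.content)
    ```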