Hallucination Vs Truth: The Realities of AI Hallucinations

Back in March, we looked at whether AI was a threat to the creative industries. We’re tracking the growth of AI for strategists, creatives and marketers closely – and this month, we’re looking at AI hallucinations.

If you’ve not heard this term already, it refers to “a confident response by an AI that does not seem to be justified” (Source: Wikipedia). In non-tech terms, a hallucination (or delusion) in AI is where the information the tool generates is, quite simply, false or nonsensical.

Wikipedia gives us a stark view of the current situation: “By 2023, analysts considered frequent hallucination to be a major problem in LLM technology”. This introduces a considerable risk when using AI in our day-to-day work, and backs up our previous view that humans cannot be replaced by this technology – not in our lifetime anyway!

We cannot take the information we get from tools such as ChatGPT or Google Bard as gospel; we need to fact-check it or turn to more reliable sources to make sure we’re using the right insights and data for our client work. We have a professional duty to act responsibly and accurately on any information gained from AI – not hastily or with any sense of laziness.

ChatGPT seems to be pretty candid about its own limits (if a robot can be described as such) in some of its responses, which is promising. For example, when we asked “How did creative agency Creative Spark come about?”, a lengthy reply ended with the message: “For accurate and specific details about Creative Spark’s founding, I recommend conducting a search using up-to-date sources or visiting their official website and publications for the most reliable information”.

Have you noticed this trend in your own work? Watch this space as we continue to track the highs and lows of AI.