On Wednesday, the Cambridge Dictionary announced that its 2023 word of the year is “hallucinate,” owing to the popularity of large language models (LLMs) such as ChatGPT, which sometimes produce erroneous information. The Dictionary also published an illustrated site explaining the term, saying, “When an artificial intelligence hallucinates, it produces false information.”
“The Cambridge Dictionary team chose hallucinate as its Word of the Year 2023 as it recognized that the new meaning gets to the heart of why people are talking about AI,” the dictionary wrote. “Generative AI is a powerful tool, but one we’re all still learning how to interact with safely and effectively; this means being aware of both its potential strengths and its current weaknesses.”
As we have covered previously, “hallucination” in relation to AI originated as a term of art in the machine learning field. As LLMs entered mainstream use through apps like ChatGPT late last year, the term spilled into general use and began to cause confusion among some, who saw it as unnecessary anthropomorphism. The Cambridge Dictionary’s first definition of hallucinate (for humans) is “to seem to see, hear, feel, or smell something that does not exist.” It implies perception from a conscious mind, and some object to that association.
Like all words, its meaning borrows heavily from context. When machine learning researchers use the term hallucinate (which they still do, frequently, judging by research papers), they typically understand an LLM’s limitations, for example that the AI model is not alive or “conscious” by human standards, but the general public may not. So in a feature exploring hallucinations in depth earlier this year, we suggested an alternative term, “confabulation,” that perhaps more accurately describes the creative gap-filling principle of AI models at work, without the perception baggage. (And guess what: that’s in the Cambridge Dictionary, too.)
“The widespread use of the word ‘hallucinate’ to refer to errors by systems like ChatGPT provides a fascinating snapshot of how we think about and anthropomorphise AI,” Henry Shevlin, an AI ethicist at the University of Cambridge, said in a statement. “As this decade progresses, I expect our psychological vocabulary will be further extended to encompass the strange abilities of the new intelligences we’re creating.”
Hallucinations have gotten individuals and companies into legal trouble over the past year. In June, a lawyer who cited fake cases invented by ChatGPT got in trouble with a judge and was later fined. In April, Brian Hood sued OpenAI for defamation when ChatGPT falsely claimed that Hood had been convicted in a foreign bribery scandal. The suit was later settled out of court.
In fact, LLMs “hallucinate” all the time. They draw associations between concepts from what they have learned during training (and later fine-tuning), and those associations are not always accurate. Where there are gaps in knowledge, they will generate the most plausible-sounding response. Many times that can be correct, given high-quality training data and proper fine-tuning, but other times it’s not.
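To make that “most plausible-sounding response” idea concrete, here is a minimal sketch of next-token sampling. The four candidate words and their scores are invented for illustration; a real LLM assigns logits to a vocabulary of tens of thousands of tokens:

```python
import math
import random

# Hypothetical scores a model might assign to the next word after a
# prompt like "The capital of Australia is". Numbers are invented.
logits = {"Canberra": 2.1, "Sydney": 1.8, "Melbourne": 0.9, "Auckland": -0.5}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {p:.2f}")

# Sampling picks tokens in proportion to probability. Nothing here
# checks facts: the wrong answer "Sydney" is drawn a substantial
# fraction of the time simply because its score is nearly as high.
choice = random.choices(list(probs), weights=list(probs.values()))[0]
print("sampled:", choice)
```

The point of the sketch is that plausibility, not truth, drives the output: when the training data leaves the top scores close together, the model can confidently emit a wrong answer.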
So far, OpenAI seems to be the only major technology company to significantly clamp down on erroneous hallucinations with GPT-4, which is one of the reasons that model is still seen as being in the lead. How they achieved this is part of OpenAI’s secret sauce, but OpenAI Chief Scientist Ilya Sutskever has previously mentioned that he thinks RLHF may provide a way to reduce hallucinations in the future. (RLHF, or reinforcement learning from human feedback, is a process whereby humans rate a language model’s responses, and those ratings are used to fine-tune the model further.)
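As a rough illustration of the “humans rate responses” step, and not OpenAI’s actual unpublished implementation, a reward model can be fit to pairwise human preferences. The sketch below uses a Bradley-Terry update on two invented responses, with plain scalars standing in for what would really be a neural network:

```python
import math

# Toy reward model: a lookup table from response to scalar score.
# In real RLHF this is a neural network; the responses are invented.
reward = {"accurate answer": 0.0, "confident fabrication": 0.0}

# A human labeler preferred the accurate answer over the fabrication.
preferred, rejected = "accurate answer", "confident fabrication"

LEARNING_RATE = 0.5
for step in range(100):
    # Bradley-Terry: P(preferred beats rejected) = sigmoid(r_p - r_r)
    margin = reward[preferred] - reward[rejected]
    p_correct = 1.0 / (1.0 + math.exp(-margin))
    # Gradient of the loss -log(p_correct) with respect to the margin
    grad = p_correct - 1.0
    reward[preferred] -= LEARNING_RATE * grad   # pushed up
    reward[rejected] += LEARNING_RATE * grad    # pushed down

print(reward)  # the human-preferred response ends up scored higher
```

In full RLHF, the learned reward model then steers the language model itself (typically via a reinforcement learning algorithm such as PPO) toward responses humans rated highly, which is why it is seen as one lever against hallucination.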
Wendalyn Nichols, the Cambridge Dictionary’s publishing manager, said in a statement, “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools. AIs are fantastic at churning through huge amounts of data to extract specific information and consolidate it. But the more original you ask them to be, the likelier they are to go astray.”
It has been a banner year for AI terms, according to the dictionary. Cambridge says it added other AI-related terms to its dictionary in 2023, including “large language model” (or “LLM”), “AGI,” “generative AI,” and “GPT.”