Dr. John M. Boyer, Dr. Wanda Boyer
This paper addresses the disconnect between the current capabilities of generative AI technologies and the expectations of scientists, business leaders, domain experts, and other users for cognitive computing capabilities along the pathway toward artificial general intelligence. AI hallucination is the pernicious problem in which generated content is substantively incorrect while appearing authoritative and correct. Emerging techniques such as retrieval-augmented generation are making progress in reducing AI hallucinations, but only at the lowest cognitive level, information retrieval. This case study presents numerous in-depth examples of AI hallucinations, along with related psychological phenomena, on tasks that are simple yet above the cognitive level of information retrieval. We also consider machine learning and AI ethics at these higher cognitive levels. We recommend a research focus on developing saturated benchmarks for generative AI technologies at the next two higher cognitive levels, based on five basic dimensions and five advanced areas within those levels.