Generative artificial intelligence produces output (text, images, sound, video, etc.) based on patterns extrapolated through machine learning from large collections of training data. (For definitions of terms related to AI, see the glossary provided by the Center for Integrative Research in Computing and Learning Sciences.)
A search engine like Google is designed to search within a collection of information to find answers to specific queries. Generative AI tools like ChatGPT are chatbots, not search engines. When prompted, the chatbot uses statistical models built from patterns in its training data to generate a response whose content matches its predictions about the meaning of the words in the prompt. In a response from ChatGPT, for example, the result is a string of text built one word at a time, with each word chosen as the most likely continuation of the words that came before it.
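To see what "choosing a likely next word" means in practice, here is a deliberately tiny sketch in Python. It is a toy illustration only, not how ChatGPT actually works: real chatbots use large neural networks trained on vast collections of text and predict subword tokens rather than whole words. Still, the generation loop (pick a likely next word, append it, repeat) is the same basic idea. The sample text and the `generate` function below are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: real chatbots use large neural networks over
# subword tokens, but the core idea is the same: predict a likely next
# word from the words generated so far.

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog and the dog chased the cat"
)

# Count how often each word follows each other word in the sample text.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    output = [prompt_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # the model has never seen this word, so it cannot continue
        # Choose the next word in proportion to how often it followed the
        # current word in the training text.
        choices, counts = zip(*candidates.items())
        output.append(random.choices(choices, weights=counts, k=1)[0])
    return " ".join(output)

print(generate("the"))
# Example output: "the cat sat on the rug the dog chased"
# The text looks fluent, but it was produced purely by counting word patterns.
```

Notice that the program has no idea what a cat or a mat is; it only reproduces word patterns it has counted. Large language models do the same thing at an enormously greater scale.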
Because generative AI tools respond so quickly and fluently, it's easy to assume that the information provided is authoritative and accurate. However, it's important to remember that every response is just pattern matching, with no true understanding behind it. The accuracy of a response depends on whether there was enough relevant training data for the model to statistically predict an appropriate answer.
Generative artificial intelligence is not actually intelligent. It does not think; it only mimics thought.
Generative AI tools like ChatGPT can be applied in various ways to assist with academic research, as discussed on the "AI Chatbots and Research" page of this guide. However, it's important to keep potential issues in mind.