Can we train people to recognize them?
Consider the following example, which is equally relevant to academic research papers and to the coursework and homework of undergraduate students who use AI in their daily work without being aware of the risk of “hallucinations”. A researcher decides to use a large language model (chatbot) to gather information about the “history of Bulgarian intellectual capital”. The chatbot may generate a seemingly well-structured, coherent and informative response that nevertheless contains fictitious key data. For example, it might invent the names of prominent Bulgarian intellectuals who never existed, attributing specific (and plausible-sounding) theories or publications to them. It might mention a “Professor Ivan Petrov” who supposedly pioneered a “Bulgarian theory of the knowledge economy” in the early twentieth century, and cite a non-existent journal article. It might also list research centres or academic societies dedicated to the study of intellectual capital in Bulgaria that have no historical basis, and even provide convincing (but incorrect) details of their founding dates, key achievements and notable members.
