The Peril and Promise of ChatGPT and AI Information Retrieval

By Chirag Shah | March 18, 2023
AI chatbots like ChatGPT can "hallucinate" answers and give a false sense of authoritativeness.

The prominent model of information access before search engines became the norm — librarians and subject or search experts providing relevant information — was interactive, personalized, transparent, and authoritative. Search engines are the primary way most people access information today, but entering a few keywords and getting a list of results ranked by some unknown function is not ideal.

A new generation of artificial intelligence-based information access systems, which includes Microsoft’s Bing/ChatGPT, Google/Bard, and Meta/LLaMA, is upending the traditional search engine mode of search input and output. These systems are able to take full sentences and even paragraphs as input and generate personalized natural language responses.

At first glance, this might seem like the best of both worlds: personable and custom answers combined with the breadth and depth of knowledge on the internet. But as a researcher who studies search and recommendation systems, I believe the picture is mixed at best.

AI systems like ChatGPT and Bard are built on large language models. A language model is a machine-learning technique that uses a large body of available texts, such as Wikipedia and PubMed articles, to learn patterns. In simple terms, these models figure out what word is likely to come next, given a set of words or a phrase. In doing so, they are able to generate sentences, paragraphs, and even pages that correspond to a query from a user. On March 14, 2023, OpenAI announced the next generation of the technology, GPT-4, which works with both text and image input, and Microsoft announced that its conversational Bing is based on GPT-4.
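To make the next-word mechanism concrete, here is a minimal sketch of text generation with a small open model. It assumes the Hugging Face transformers library and the freely downloadable GPT-2 model, which are illustrative stand-ins, not the proprietary models behind ChatGPT or Bard:

# Minimal sketch of next-word prediction with a small open model (GPT-2).
# GPT-2 is used only because it is freely available; the systems discussed
# in this article rely on far larger models with additional fine-tuning.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The first person to walk on the Moon was"
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])

# The model simply continues the prompt with the words it judges most likely
# given its training text; it has no mechanism for checking whether the
# continuation is true.

The same predict-the-next-word loop, run with far larger models and additional tuning, is what produces the fluent paragraphs people see in chatbots like ChatGPT.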

Thanks to training on large bodies of text, fine-tuning, and other machine learning-based methods, this type of information retrieval technique works quite effectively. The large language model-based systems generate personalized responses to fulfill information queries. People have found the results so impressive that ChatGPT reached 100 million users in one-third of the time it took TikTok to get to that milestone. People have used it not only to find answers but also to generate diagnoses, create dieting plans, and make investment recommendations.

Opacity and ‘Hallucinations’

However, there are plenty of downsides. First, consider what is at the heart of a large language model — a mechanism through which it connects the words and presumably their meanings. This produces an output that often seems like an intelligent response, but large language model systems are known to produce almost parroted statements without a real understanding. So, while the generated output from such systems might seem smart, it is merely a reflection of underlying patterns of words the AI has found in an appropriate context.

This limitation makes large language model systems susceptible to making up or “hallucinating” answers. The systems also are not smart enough to recognize when a question rests on a false premise, and they answer such faulty questions anyway. For example, when asked which U.S. president’s face is on the $100 bill, ChatGPT answers Benjamin Franklin without realizing that Franklin was never president and that the question’s premise is wrong: the $100 bill does not picture a U.S. president at all.
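This kind of failure is easy to reproduce. The sketch below, which assumes the pre-1.0 openai Python package, the gpt-3.5-turbo chat model, and an API key in the OPENAI_API_KEY environment variable (all illustrative choices, not necessarily what powers Bing), sends the false-premise question to a chat model:

# Hypothetical probe of a chat model with a false-premise question.
# Assumes: pip install openai (pre-1.0 interface) and OPENAI_API_KEY set.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Which U.S. president's face is on the $100 bill?",
    }],
)
print(response["choices"][0]["message"]["content"])

# Nothing in the response flags that the question's premise is wrong;
# the model just returns the most plausible-sounding continuation.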

The problem is that even when these systems are wrong only 10 percent of the time, you don’t know which 10 percent. People also don’t have the ability to quickly validate the systems’ responses. That’s because these systems lack transparency — they don’t reveal what data they are trained on, what sources they have used to come up with answers, or how those responses are generated.

For example, you could ask ChatGPT to write a technical report with citations. But often it makes up these citations — “hallucinating” the titles of scholarly papers as well as the authors. The systems also don’t validate the accuracy of their responses. This leaves the validation up to the user, and users may not have the motivation or skills to do so or even recognize the need to check an AI’s responses.
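Until the systems check themselves, that work falls to the reader. As a rough sketch of what such a check could look like (this is not a feature of any chatbot), a user could at least test whether a cited title matches any real record in a public bibliographic index such as Crossref:

# Rough sketch: look up a citation's title against the public Crossref API.
# A match does not prove the citation supports the claim, but no match at all
# is a strong hint that the reference was hallucinated.
import requests

def find_candidate_papers(title, rows=3):
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(item.get("title") or [""])[0] for item in items]

print(find_candidate_papers("Title of a paper cited by the chatbot"))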

ChatGPT doesn’t know when a question doesn’t make sense, because it doesn’t know any facts. Screen capture by Chirag Shah

Stealing Content — and Traffic

While lack of transparency can be harmful to users, it is also unfair to the authors, artists, and creators of the original content from whom the systems have learned, because the systems do not reveal their sources or provide sufficient attribution. In most cases, creators are not compensated, credited, or given the opportunity to consent.

There is an economic angle to this as well. In a typical search engine environment, the results are shown with links to the sources. This not only allows users to verify the answers and gives attribution to those sources, it also generates traffic for those sites. Many of these sources rely on this traffic for their revenue. Because the large language model systems produce direct answers without citing the sources they drew from, I believe those sites are likely to see their revenue streams diminish.

Taking Away Learning and Serendipity

Finally, this new way of accessing information can also disempower people and take away their chance to learn. A typical search process allows users to explore the range of possibilities for their information needs, often prompting them to adjust what they’re looking for. It also affords them an opportunity to learn what is out there and how various pieces of information connect to accomplish their tasks. And it allows for accidental encounters, or serendipity.

These are very important aspects of search, but when a system produces results without showing its sources or guiding the user through a process, it robs users of these possibilities.

Large language models are a great leap forward for information access, providing people with a way to have natural language-based interactions, produce personalized responses, and discover answers and patterns that are often difficult for an average user to come up with. But they have severe limitations due to the way they learn and construct responses. Their answers may be wrong, toxic, or biased.

While other information access systems can suffer from these issues, too, large language model AI systems also lack transparency. Worse, their natural language responses can help fuel a false sense of trust and authoritativeness that can be dangerous for uninformed users.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Chirag Shah

Chirag Shah is a professor of information science at the University of Washington.
