AI Chatbot Language Bias Revealed by Stanford Research - The Struggle with Non-English Languages | ZDNet

Matthew

AI chatbots now coming to the fore may lack the global diversity needed to serve international user bases. Many of today’s large language models tend to favor “Western-centric tastes and values,” asserts a recent study by researchers at Stanford University, and attempts to achieve what is referred to as “alignment” with a system’s or chatbot’s intended users often fall short.

It’s not for lack of trying, as the researchers, led by Diyi Yang, assistant professor at Stanford University and part of Stanford Human-Centered Artificial Intelligence (HAI), recount in the study. “Before the creators of a new AI-based chatbot can release their latest apps to the general public, they often reconcile their models with the various intentions and personal values of the intended users.” However, the alignment effort itself “can introduce its own biases, which compromise the quality of chatbot responses.”

Also: Nvidia will train 100,000 California residents on AI in a first-of-its-kind partnership

In theory, “alignment should be universal and make large language models more agreeable and helpful for a variety of users across the globe and, ideally, for the greatest number of users possible,” they state. In practice, however, annotators adapting datasets and LLMs for different regions may misinterpret the very instruments they are adjusting.

AI chatbots for various purposes – from customer interactions to intelligent assistants – keep proliferating at a significant pace, so there’s a lot at stake. The global AI chatbot market is expected to be worth close to $67 billion by 2033, growing at a rate of 26% annually from its current size of more than $6 billion, according to estimates from research firm Market.us.
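
Those two figures are consistent if you assume simple annual compounding over the decade to 2033: a base of roughly $6.6 billion growing 26% per year works out to about $6.6 billion × 1.26^10 ≈ $67 billion.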

“The AI chatbot market is experiencing rapid growth due to increased demand for automated customer support services and advancements in AI technology,” the report’s authors detail. “Interestingly, over 50% of enterprises are expected to invest more annually in bots and chatbot development than in traditional mobile app development.”

Also: If these chatbots could talk: The most popular ways people are using AI tools

The bottom line is that a huge variety of languages and communities across the globe are currently underserved by AI and chatbots. English-language instructions or engagements may include phrases or idioms (a literal rendering of “break a leg,” for instance) that are open to misinterpretation outside their original cultural context.

The Stanford study asserts that LLMs tend to reflect the preferences of their creators, who, at this point, are likely to be based in English-speaking countries. Human preferences are not universal, and an LLM must reflect “the social context of the people it represents – leading to variations in grammar, topics, and even moral and ethical value systems.”

The Stanford researchers offer the following recommendations to increase awareness of global diversity:

Recognize that the alignment of language models is not a one-size-fits-all solution. “Various groups are impacted differently by alignment procedures.”

Strive for transparency. This “is of the utmost importance in disclosing the design decisions that go into aligning an LLM. Each step of alignment adds additional complexities and impacts on end users.” Most human-written preference datasets do not include the demographics of their regional preference annotators. “Reporting such information, along with decisions about what prompts or tasks are in the domain, is essential for the responsible dissemination of aligned LLMs to a diverse audience of users.” A sketch of what such a disclosure record could look like appears after this list.

Seek multilingual datasets. The researchers looked at the Tülu dataset used in language models, of which 13% is non-English. “Yet this multilingual data leads to performance improvements in six out of nine tested languages for extractive QA and all nine languages for reading comprehension. Many languages can benefit from multilingual data.”
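
To make the disclosure and language-composition recommendations concrete, here is a minimal sketch in Python. It is only an illustration under assumptions of our own: the PreferenceDatasheet class, its field names, and the report format are invented for this article, not taken from the Stanford study or any standard datasheet specification.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PreferenceDatasheet:
    """Hypothetical disclosure record for a preference-tuning dataset."""
    name: str
    language_counts: Counter = field(default_factory=Counter)
    annotator_regions: Counter = field(default_factory=Counter)

    def add_example(self, language: str, annotator_region: str) -> None:
        # Tally both the language of the example and where its annotator is based.
        self.language_counts[language] += 1
        self.annotator_regions[annotator_region] += 1

    def report(self) -> str:
        # Summarize composition so it can ship alongside the aligned model.
        total = sum(self.language_counts.values())
        lines = [f"Datasheet for {self.name} ({total} preference examples)"]
        lines.append("Language composition:")
        for lang, n in self.language_counts.most_common():
            lines.append(f"  {lang}: {n}/{total} ({n / total:.0%})")
        lines.append("Annotator regions:")
        for region, n in self.annotator_regions.most_common():
            lines.append(f"  {region}: {n}")
        return "\n".join(lines)

# Toy usage: a mix that, like the Tulu data discussed above, is ~13% non-English.
sheet = PreferenceDatasheet(name="toy_preference_mix")
for _ in range(87):
    sheet.add_example("en", "United States")
for _ in range(13):
    sheet.add_example("sw", "Kenya")
print(sheet.report())
```

Even a record this small answers the two questions the study says most preference datasets leave open: which languages the data covers and who annotated it.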

Also: AI scientist: ‘We need to think outside the large language model box’

Working closely with local users is also essential to overcome cultural or language deficiencies or missteps with AI chatbots. “Collaborating with local experts and native speakers is crucial for ensuring authentic and appropriate adaptation,” wrote Vuk Dukic, software engineer and founder at Anablock, in a recent LinkedIn article. “Thorough cultural research is necessary to understand the nuances of each target market. Implementing continuous learning algorithms allows chatbots to adapt to user interactions and feedback over time.”

Dukic also urged “extensive testing with local users before full deployment to help identify and resolve cultural missteps.” In addition, “offering language selection allows users to choose their preferred language and cultural context.”
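
A minimal sketch of those last two suggestions, again under assumptions of our own (the generate_reply callable is a hypothetical stand-in for a real model API, and the locale list is illustrative): the bot honors a user-selected language and cultural context, and logs a per-exchange rating so approval rates can be compared across languages over time.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative locale set; a real deployment would drive this from product data.
SUPPORTED_LOCALES = {"en-US", "es-MX", "hi-IN", "sw-KE"}

@dataclass
class FeedbackLog:
    entries: List[dict] = field(default_factory=list)

    def record(self, locale: str, prompt: str, reply: str, thumbs_up: bool) -> None:
        # Store the locale with every rating so per-language quality is auditable.
        self.entries.append(
            {"locale": locale, "prompt": prompt, "reply": reply, "thumbs_up": thumbs_up}
        )

    def approval_by_locale(self) -> Dict[str, float]:
        # Share of thumbs-up responses per locale.
        stats: Dict[str, list] = {}
        for e in self.entries:
            ok, n = stats.get(e["locale"], [0, 0])
            stats[e["locale"]] = [ok + e["thumbs_up"], n + 1]
        return {loc: ok / n for loc, (ok, n) in stats.items()}

def chat_turn(generate_reply: Callable[[str, str], str], log: FeedbackLog,
              locale: str, prompt: str, thumbs_up: bool) -> str:
    # Honor the user's chosen language and cultural context before generating.
    if locale not in SUPPORTED_LOCALES:
        raise ValueError(f"unsupported locale: {locale}")
    reply = generate_reply(prompt, locale)
    log.record(locale, prompt, reply, thumbs_up)
    return reply

# Stand-in backend so the sketch runs without a real model behind it.
echo = lambda prompt, locale: f"[{locale}] You said: {prompt}"
log = FeedbackLog()
chat_turn(echo, log, "es-MX", "hola", thumbs_up=True)
chat_turn(echo, log, "en-US", "hello", thumbs_up=False)
print(log.approval_by_locale())  # {'es-MX': 1.0, 'en-US': 0.0}
```

Keeping the locale on every logged rating is what makes the “continuous learning” advice measurable: a falling approval rate in one language surfaces a cultural misstep that an aggregate metric would hide.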

https://techidaily.com
  • Title: AI Chatbot Language Bias Revealed by Stanford Research - The Struggle with Non-English Languages | ZDNet
  • Author: Matthew
  • Created at: 2024-10-12 04:35:10
  • Updated at: 2024-10-18 05:59:17
  • Link: https://app-tips.techidaily.com/ai-chatbot-language-bias-revealed-by-stanford-research-the-struggle-with-non-english-languages-zdnet/
  • License: This work is licensed under CC BY-NC-SA 4.0.