Tackling the Accountability Gap in Artificial Intelligence Development

Matthew

[Image: illustration of a robot hand holding a brain. Moor Studio/Getty Images]

While almost nine in 10 business leaders agree it’s important to have clear guidelines on artificial intelligence (AI) ethics and corporate responsibility, barely a handful say they actually have such guidelines in place, a recent survey shows.

Such findings suggest there’s confusion about how AI adoption should be governed, and that technology professionals need to step forward and take the lead in the safe and ethical development of their data-led initiatives.

Also: Five ways to use AI responsibly

The results come from a survey of 500 business leaders released by technology company Conversica, which says: “A resounding message emerges from the survey: a majority of respondents recognize the paramount importance of well-defined guidelines for the responsible use of AI within companies, especially those that have already embraced the technology.”


Almost three-quarters (73%) of respondents said AI guidelines are indispensable. However, just 6% have established clear ethical guidelines for AI use, and 36% indicate they might put guidelines in place during the next 12 months.

Even among companies with AI in production, one in five leaders admitted to limited or no knowledge about their organization’s AI-related policies. More than a third (36%) claimed to be only “somewhat familiar” with policy-related concerns.

Guidelines and policies for addressing responsible AI should incorporate governance, unbiased training data, bias detection, bias mitigation, transparency, accuracy, and the inclusion of human oversight, the report’s authors state.

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

About two-thirds (65%) of the executives surveyed said they already have or plan to have AI-powered services in place during the next 12 months. Leading use cases for AI include powering engagement functions, such as customer service and marketing (cited by 39%), and producing analytic insights (35%).

The survey found the top concerns about AI outputs are the accuracy of current-day data models, false information, and lack of transparency. More than three-quarters (77%) of executives expressed concern about AI generating false information.

AI providers aren’t sharing enough information to help formulate guidelines, the business leaders said – especially when it comes to data security and transparency, and the creation of strong ethical policies.

Also: Today’s AI boom will amplify social problems if we don’t act now

Around a third (36%) of respondents said their businesses have rules about using generative AI tools, such as ChatGPT. But 20% said their companies are giving individual employees free rein over the use of AI tools for the foreseeable future.

The Conversica survey shows there is a leadership gap when it comes to making responsible AI a reality. So, how can technology leaders and line-of-business professionals step up to ensure responsible AI practices are in place? Here are some of the key guidelines shared by Google’s AI team:

  • Use a human-centered design approach: “The way actual users experience your system is essential to assessing the true impact of its predictions, recommendations, and decisions. Design features with appropriate disclosures built-in: clarity and control is crucial to a good user experience. Model potential adverse feedback early in the design process, followed by specific live testing and iteration for a small fraction of traffic before full deployment.”
  • Engage with a diverse set of users and use-case scenarios: “Incorporate feedback before and throughout project development. This will build a rich variety of user perspectives into the project and increase the number of people who benefit from the technology.”
  • Design your model using concrete goals for fairness and inclusion: “Consider how the technology and its development over time will impact different use cases: Whose views are represented? What types of data are represented? What’s being left out?”
  • Check the system for unfair biases: “For example, organize a pool of trusted, diverse testers who can adversarially test the system, and incorporate a variety of adversarial inputs into unit tests. This can help to identify who may experience unexpected adverse impacts. Even a low error rate can allow for the occasional very bad mistake.”
  • Stress test the system on difficult cases: “This will enable you to quickly evaluate how well your system is doing on examples that can be particularly hurtful or problematic each time you update your system. As with all test sets, you should continuously update this set as your system evolves, features are added or removed and you have more feedback from users.”
  • Test, test, test: “Learn from software engineering best test practices and quality engineering to make sure the AI system is working as intended and can be trusted. Conduct rigorous unit tests to test each component of the system in isolation. Conduct integration tests to understand how individual ML components interact with other parts of the overall system.” (A testing sketch in this spirit appears after this list.)
  • Use a gold standard dataset to test the system and ensure that it continues to behave as expected: “Update this test set regularly in line with changing users and use cases, and to reduce the likelihood of training on the test set. Conduct iterative user testing to incorporate a diverse set of users’ needs in the development cycles.”
  • Apply the quality engineering principle of poka-yoke: “Build quality checks into a system, so that unintended failures either cannot happen or trigger an immediate response – e.g., if an important feature is unexpectedly missing, the AI system won’t output a prediction.” (A sketch of such a guard also follows the list.)
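
To ground the testing guidance above, here is a minimal sketch of adversarial unit testing for an ML component, written in Python. The predict_sentiment function is a hypothetical stand-in for a real model call, and the test inputs, labels, and group terms are illustrative assumptions rather than examples from the survey or from Google’s guidance.

```python
# A minimal sketch of adversarial unit testing for an ML component.
# The classifier below is a hypothetical stand-in: swap in a real model call.
import unittest

def predict_sentiment(text: str) -> str:
    """Stand-in for a real model; returns 'positive' or 'negative'."""
    negative_cues = {"terrible", "awful", "hate"}
    return "negative" if any(cue in text.lower() for cue in negative_cues) else "positive"

class SentimentModelTests(unittest.TestCase):
    def test_gold_standard_examples(self):
        # A small "gold standard" set with known labels; in practice this set
        # is versioned and refreshed as users and use cases change.
        gold = [
            ("I hate this product", "negative"),
            ("Works great, very happy", "positive"),
        ]
        for text, expected in gold:
            self.assertEqual(predict_sentiment(text), expected)

    def test_adversarial_inputs(self):
        # Hostile or degenerate inputs: the model should degrade gracefully,
        # returning a valid label rather than crashing.
        for text in ["", "TERRIBLE!!!", "ok " * 10_000]:
            self.assertIn(predict_sentiment(text), {"positive", "negative"})

    def test_paired_prompt_bias_smoke_check(self):
        # Prompts that differ only in a group-identifying term should get the
        # same label; a mismatch flags a potential unfair bias to triage.
        a = predict_sentiment("The applicant from group A is qualified")
        b = predict_sentiment("The applicant from group B is qualified")
        self.assertEqual(a, b)

if __name__ == "__main__":
    unittest.main()
```

As the guidelines note, such a test set should be updated continuously as the system evolves and user feedback accumulates, and kept separate from training data.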
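
And here is a minimal sketch of the poka-yoke principle from the final item: a wrapper that refuses to produce a prediction when a required input feature is missing or implausible. The feature names, value ranges, and DummyModel are hypothetical, not part of any published guideline.

```python
# A minimal poka-yoke sketch: unintended failures either cannot happen or
# trigger an immediate, visible response. All names here are illustrative.

REQUIRED_FEATURES = {"age", "account_tenure_days", "region"}

def guarded_predict(model, features: dict) -> float:
    """Return a prediction only if the input passes basic quality checks."""
    missing = REQUIRED_FEATURES - features.keys()
    if missing:
        # Fail fast and loudly rather than predicting on incomplete input.
        raise ValueError(f"Refusing to predict; missing features: {sorted(missing)}")
    if not 0 <= features["age"] <= 120:
        raise ValueError("Refusing to predict; 'age' outside plausible range")
    return model.predict(features)

if __name__ == "__main__":
    class DummyModel:
        def predict(self, features: dict) -> float:
            return 0.5  # placeholder score

    model = DummyModel()
    print(guarded_predict(model, {"age": 42, "account_tenure_days": 310, "region": "EU"}))
    # guarded_predict(model, {"age": 42})  # would raise: missing features
```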

Businesses might want to implement AI quickly, but they must take care to ensure the tools and their underlying models are accurate and fair. While businesses are looking to AI to help them advance, the technology must deliver responsible results every time.

