Companies Must Be Transparent About Their Use of Artificial Intelligence

Making the use of AI known is key to addressing any pitfalls, researchers said.


WASHINGTON, September 20, 2023 – Researchers at an artificial intelligence workshop Tuesday said companies should be transparent about their use of algorithmic AI in areas such as hiring and content writing.

Andrew Bell, a fellow at the New York University Center for Responsible AI, said that making the use of AI known is key to addressing any pitfalls AI might have.

Algorithmic AI is behind systems like chatbots, which can generate text and answer questions. It is used in hiring to screen resumes quickly and in journalism to write articles.

According to Bell, ‘algorithmic transparency’ is the idea that “information about decisions made by algorithms should be visible to those who use, regulate, and are affected by the systems that employ those algorithms.”

The need for this kind of transparency comes after events like Amazon's now-scrapped AI recruiting tool showing bias against women in the hiring process, and the Federal Trade Commission's probe of OpenAI, the company that created ChatGPT, over the generation of misinformation.

Incidents like these have brought the topic of regulating AI and making sure it is transparent to the forefront of Senate conversations.

Senate committee hears need for AI regulation

The Senate’s subcommittee on consumer protection heard proposals on September 12 to make AI use more transparent, including disclosing when AI is being used and developing tools to predict and understand the risks associated with different AI models.

Similar transparency methods were mentioned by Bell and his supervisor, Julia Stoyanovich, director of the Center for Responsible AI at New York University, a research center that explores how AI can be made safe and accessible as the technology evolves.

According to Bell, a transparency label on algorithmic AI would “[provide] insight into ingredients of an algorithm.” Similar to a nutrition label, a transparency label would identify all the factors that go into algorithmic decision making.
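As a rough illustration of what such a label might contain, the short sketch below models a hypothetical transparency label as a small Python data structure. The field names and example values are assumptions made for illustration; they are not a format published by Bell, Stoyanovich, or the NYU Center for Responsible AI.

from dataclasses import dataclass, field

@dataclass
class TransparencyLabel:
    # Hypothetical "nutrition label" for an algorithmic system;
    # the fields below are illustrative assumptions, not a published standard.
    system_name: str                                        # e.g., a resume-screening model
    purpose: str                                            # the decision the algorithm supports
    factors_considered: list = field(default_factory=list)  # inputs the model uses
    factors_excluded: list = field(default_factory=list)    # inputs deliberately left out
    known_limitations: list = field(default_factory=list)   # caveats users should know about

    def render(self) -> str:
        # Produce a plain-text label a company could publish alongside the system.
        return "\n".join([
            f"System: {self.system_name}",
            f"Purpose: {self.purpose}",
            "Factors considered: " + ", ".join(self.factors_considered),
            "Factors excluded: " + ", ".join(self.factors_excluded),
            "Known limitations: " + ", ".join(self.known_limitations),
        ])

# Example usage with made-up values:
label = TransparencyLabel(
    system_name="Resume screening model",
    purpose="Rank applicants for recruiter review",
    factors_considered=["work history", "listed skills", "education"],
    factors_excluded=["name", "gender", "age"],
    known_limitations=["trained on past hiring data, which may encode historical bias"],
)
print(label.render())

The point of such a structure is simply that the ingredients of an algorithmic decision are enumerated in one place that users, regulators, and affected people can read.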

Bell also suggested data visualization, which would require a company to publish a public-facing document explaining how its AI works and how it generates the decisions it produces.

Adding those disclosures creates a better ecosystem between AI and AI users, increasing trust among all stakeholders involved, Bell explained.

Bell and his supervisor built their workshop around an Algorithm Transparency Playbook, a document they published that has straightforward guidelines on why transparency is important and ways companies can go about it.

However, tech lobbying groups like the Computer and Communications Industry Association, which represents Big Tech companies, have spoken out in the past against the Senate regulating AI, claiming that it could stifle innovation.
