Ethical AI: Creating experiences that people will love and trust: Susan Etlinger
By Christi Warren
The use of artificial intelligence is growing rapidly — the adoption of the technology tripled in the past year alone, with an estimated 37 percent of organizations now using some form of AI in their operations.
As adoption accelerates, companies that use AI must begin to implement protocols to prevent bias. Susan Etlinger, an industry analyst for data and AI at Altimeter Group, outlined four ethics issues specific to AI at the 10th Data & AI for Media Week at Microsoft in San Francisco on May 2.
- Documented bias in data and algorithms that results in real harms to health, economic well-being, safety, and trust.
- Authenticity, especially in new interaction models such as chatbots and voice agents, and with regard to deepfakes. All of these emerging models challenge notions of authenticity and transparency.
- Explaining how and why algorithms reach certain conclusions.
- Limited governance structures for ensuring that the use of these technologies reflects our values.
Good intentions alone won't be enough. Organizations that use these technologies must actively build an ethical culture around data and AI.
“In the innovation conversation, ethics is an afterthought,” Etlinger said. “… My hope and dream is that we would start thinking what would happen if we created systems and services and products and worlds in which humans are at the center and we build them in a trustworthy way and we innovate from there.”