Microsoft has released the Human-AI eXperience (HAX) Toolkit, a set of practical tools to help teams strategically and responsibly apply best practices when building artificial intelligence technologies that interact with people.
The toolkit comes as AI-infused products and services, such as virtual assistants, route planners, autocomplete, recommendations, and reminders, are becoming increasingly popular and useful for many people. But these applications can also fall short in unhelpful ways, like misunderstanding a voice command or misinterpreting an image. In some cases, AI systems can exhibit disruptive behaviors or even cause harm.
Such negative outcomes are one reason AI developers have pushed for responsible AI guidance. Efforts to support responsible practices have traditionally focused on improving algorithms and models, but there is a critical need to also make responsible AI resources accessible to the practitioners who design the applications people use. The HAX Toolkit provides practical tools that translate human-AI interaction knowledge into actionable guidance.
“Human-centeredness is really all about ensuring that what we build and how we build it begins and ends with people in mind,” said Saleema Amershi, senior principal research manager at Microsoft Research. “We started the HAX Toolkit to help AI creators take this approach when building AI technologies.”
The toolkit currently consists of four components designed to…