IBM is set to launch a tool that analyses how and why algorithms make decisions in real time.
The Fairness 360 Kit will also scan for signs of bias and recommend adjustments.
There is increasing concern that algorithms used by both tech giants and other firms are not always fair in their decision-making.
For instance, in the past, image recognition systems have failed to identify non-white faces.
However, as algorithms increasingly make automated decisions about issues such as policing, insurance and what information people see online, the implications of their recommendations broaden.
Often algorithms operate within what is known as a “black box” – meaning their owners can’t see how they are making decisions.
The IBM cloud-based software will be open-source, and will work with a variety of commonly used frameworks for building algorithms.
Customers will be able to see, via a visual dashboard, how their algorithms are making decisions and which factors feed into the final recommendations.
It will also track the model’s record for accuracy, performance and fairness over time.
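The article does not detail which fairness metrics the kit applies, but the kind of bias check such tools perform can be illustrated with a disparate-impact calculation, a standard fairness metric. This is a minimal sketch; the function and variable names are illustrative, not IBM's actual API:

```python
# Illustrative sketch of a disparate-impact check, a common fairness metric.
# Names are hypothetical and do not reflect IBM's Fairness 360 Kit API.

def disparate_impact(decisions, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity between groups; the widely used
    "80% rule" flags ratios below 0.8 as potentially discriminatory.
    """
    priv = [d for d, g in zip(decisions, groups) if g == privileged]
    unpriv = [d for d, g in zip(decisions, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)      # favourable rate, privileged group
    rate_unpriv = sum(unpriv) / len(unpriv)  # favourable rate, other groups
    return rate_unpriv / rate_priv

# Example: 1 = loan approved; group "A" is treated as privileged.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(disparate_impact(decisions, groups, "A"), 2))  # → 0.25
```

Here group A is approved 80% of the time and group B only 20%, giving a ratio of 0.25, well below the 0.8 threshold; a dashboard tracking such a metric over time could surface this kind of skew.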
“We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making,” said David Kenny, IBM’s senior vice president of Cognitive Solutions.