Amazon has pledged $10 million (£7.5 million) to research grants aiming to make artificial intelligence (AI) more transparent and accountable.
In partnership with the National Science Foundation (NSF), the retailer said it would donate the money over the next three years to help develop systems that focus on fairness in AI and machine learning.
The partnership will focus on tackling various issues including adverse biases in AI, explainability and considerations of inclusivity in an effort to broaden acceptance of the technology.
Proposals will be accepted from this week until May 10, and the pair expect the project to result in new open source tools and publicly available data sets.
“With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry,” wrote Prem Natarajan, Amazon’s vice president of natural understanding in its AI division.
“Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers’ trust.”
The announcement comes after a study of Amazon’s Rekognition technology by the MIT Media Lab’s Joy Buolamwini and the University of Toronto’s Deborah Raji found that error rates rose significantly when the system attempted to identify women with darker skin tones.
The retail giant’s technology, which it has recently been marketing to police departments, was found to be highly accurate when identifying the gender of men.
However, when attempting to identify the gender of women with darker skin tones, its error rate rose to 31 per cent.