Author: EIS
Release Date: Jul 14, 2020
Artificial intelligence that tricks you into paying more for things has come under the academic spotlight in a project by the Swiss university EPFL and British partners.
The research found that, unless specifically checked, AI can easily resort to manipulative strategies. It also identified drivers towards these unethical practices, and ways to design them out of AI training algorithms.
EPFL statistician Professor Anthony Davison and his team looked into commercial AI trained to maximise profit and showed “that an AI is likely to pick an unethical strategy in many situations”, said EPFL, a tendency that the researchers have dubbed the ‘unethical optimisation principle’.
“Consider for example using AI to set prices of insurance products to be sold to a particular customer,” according to the paper ‘An unethical optimization principle’, which describes the work in Royal Society Open Science. “There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around. The AI has a vast number of potential strategies to choose from, but some are unethical – by which we mean, from an economic point of view, that there is a risk that stakeholders will apply some penalty, such as fines or boycotts, if they subsequently understand that such a strategy has been used.”
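The stakeholder penalty the authors mention is what turns an apparently neutral objective into a risk calculation. A back-of-envelope Python sketch, with entirely made-up numbers (none come from the paper), shows how pricing in an assumed detection probability and fine can flip which pricing strategy maximises expected return:

```python
fair_profit = 100.0    # made-up profit from a straightforwardly fair price

gamed_profit = 130.0   # made-up profit from 'gaming' customer psychology
detection_prob = 0.25  # assumed chance the strategy is found out
fine = 200.0           # assumed penalty (fine, boycott cost) if it is

# Risk-adjusted return of the unethical strategy once the
# expected stakeholder penalty enters the objective.
expected_gamed = gamed_profit - detection_prob * fine

print(f"Fair pricing:  {fair_profit:.0f}")
print(f"Gamed pricing: {expected_gamed:.0f} after expected penalty")
# 130 - 0.25 * 200 = 80 < 100, so the unethical strategy loses
# as soon as the penalty risk is part of the calculation.
```

The catch, as the paper argues, is that an AI optimising raw profit never sees that penalty term at all.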
In short, the principle applies whenever an AI’s aim is to maximise risk-adjusted return. “It shows that we can’t just rely on AI systems to act ethically because their objectives seem ethically neutral,” said Professor Wendy Hall, an independent AI researcher at the University of Southampton who reviewed the work. “On the contrary, under mild conditions, an AI system will disproportionately find unethical solutions unless it is carefully designed to avoid them.”
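Hall’s “disproportionately” can be seen in a small Monte-Carlo sketch. Assuming, purely for illustration, that unethical strategies make up 2% of the strategy space but have heavier-tailed returns (the flavour of “mild conditions” the paper describes), a naive profit maximiser picks one of them far more than 2% of the time:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000            # candidate strategies per search (assumed)
TRIALS = 5_000       # repeated searches
UNETHICAL_FRAC = 0.02

picked_unethical = 0
for _ in range(TRIALS):
    unethical = rng.random(N) < UNETHICAL_FRAC
    returns = rng.standard_normal(N)   # ethical: standard-normal returns
    returns[unethical] *= 2.0          # unethical: same mean, heavier tail
    # The optimiser simply takes the highest-return strategy.
    picked_unethical += unethical[np.argmax(returns)]

print(f"Unethical share of strategy space: {UNETHICAL_FRAC:.0%}")
print(f"Searches won by an unethical strategy: {picked_unethical / TRIALS:.0%}")
```

Because the maximum of many draws is dominated by the heaviest tail, the rare unethical strategies win the argmax far out of proportion to their numbers.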
According to EPFL’s Davison, knowing the principle could help regulators and compliance staff to find problematic AI strategies that might be hidden in a large strategy space. “Such a space,” he said, “can be expected to contain disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them. It also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected during the learning process.”
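One minimal way to realise that suggestion, sketched here on the assumption that an is_unethical() compliance check can be written at all (the hard part in practice), is to reject flagged strategies inside the search loop rather than auditing the winner afterwards:

```python
import numpy as np

rng = np.random.default_rng(1)

def propose_strategy():
    """Toy stand-in for one point in a vast strategy space."""
    return {
        "exploits_psychology": bool(rng.random() < 0.02),
        "expected_return": float(rng.standard_normal()),
    }

def is_unethical(strategy):
    # Hypothetical compliance predicate; in reality this would encode
    # regulatory rules, e.g. banning prices keyed to a customer's
    # willingness to shop around.
    return strategy["exploits_psychology"]

def search(n_candidates=10_000):
    best = None
    for _ in range(n_candidates):
        s = propose_strategy()
        if is_unethical(s):
            continue  # explicitly rejected during learning
        if best is None or s["expected_return"] > best["expected_return"]:
            best = s
    return best

print(search())
```

The design point is where the filter sits: screening candidates during the search, as Davison suggests, rather than only checking the final answer, which by the principle above is disproportionately likely to be an unethical one.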
‘An unethical optimization principle’ is available in full, free of charge, from Royal Society Open Science. Its statistics are not for the faint-hearted.
The team was: Nicholas Beale (Sciteb), Heather Battey (Imperial College London), Anthony Davison (EPFL) and Robert MacKay (University of Warwick).
Funding came from the Swiss National Science Foundation, the UK’s EPSRC, the Alan Turing Institute and Capital International.