Artificial Intelligence (AI) is increasingly used in the workplace and has even extended into our social lives and relationships. From seeking cures for diseases and streamlining manufacturing processes to writing content and powering online dating, AI has endless applications.
Although AI technology can yield incredible benefits, it can also give rise to ethical concerns. The scientific community is paying increasing attention to ensuring that AI is deployed in an ethical way.
Dr. Kevin Leyton-Brown is a Professor of Computer Science at UBC and the Director of CAIDA (Centre for Artificial Intelligence Decision-making and Action). This year, he was Program Co-Chair for the Association for the Advancement of Artificial Intelligence (AAAI) Conference, one of the world’s largest AI conferences.
Under his direction, this year’s AAAI conference introduced a new policy asking every author to “write a statement about the potential ethical impact of their work, including its broad societal implications, both positive and negative.” The policy also asked reviewers to assess papers’ ethical implications and warned that papers could be rejected for ethical reasons.

“A trend is underway in the community for considering the ethical impact of research work, and we felt it time to align with the movement,” Dr. Leyton-Brown explained.
But with such a vast number of papers, how does one administer such a policy? The conference received 10,000 submissions this year, of which around 1,700 were published, and the peer review process relied on the work of over 10,000 volunteers from all around the world. Kevin explained, “Papers always differ dramatically from one another, and reviewers typically review only four papers each, so everything is done in an incredibly distributed way. Some reviewers clearly paid attention to our new policy, and some probably did not. But we introduced the policy to make it clear to both authors and reviewers that ethical considerations are fundamentally important in AI research.”
Do ethics policies make a difference?
In the end, a very small fraction of AAAI papers were flagged by reviewers for ethical concerns. Kevin said he wasn’t surprised. “I don’t think that a large fraction of AI research is unethical in nature. It’s not as though most papers are describing something like, ‘Here’s how to make a laser for shooting people.’”
Instead, Kevin explained, most papers ask purely mathematical questions and aim to make incremental changes to AI techniques that already exist. In some cases, such work may eventually make a social, political, or economic impact, either positive or negative. “However, predicting such future impacts is incredibly hard for anyone, and in particular goes well beyond the expertise of computer scientists. Some unknown guy who made an incremental change to a programming language in 1953 couldn't have imagined that his change, along with millions of others, would lead to the gig economy.”
That doesn’t make ethics statements toothless, however. “We absolutely did weigh ethical considerations in the review process and in accept/reject decisions,” explained Kevin. “Points raised in papers’ ethics statements featured in the decision-making process. A paper flagged for ethical concerns by the reviewers was first overseen by the senior program committee, and then by the chairs, if escalated. The AAAI Ethics Committee’s opinion was sought in exceptional cases.”
Kevin described one such case. “One AI system used sensitive images of people, for which some consent had been obtained. The work was technically strong and would otherwise have been a clear accept; however, reviewers raised ethical concerns about the use of sensitive images. The chairs had an extended back-and-forth and also sought the AAAI Ethics Committee’s opinion. The authors had clearly made well-intentioned efforts before publication and furthermore were willing to make some of the additional changes that we requested. Nevertheless, we ended up rejecting the paper because no Institutional Review Board approval existed before data collection, and there were concerns about whether the subjects had been sufficiently well informed about the use of the images. We advised the authors to re-engage with their IRB before resubmitting the paper for publication.”
Kevin was quick to point out that in other cases where ethical concerns arose, authors similarly had good intentions. “Perhaps the researchers were merely a little bit thoughtless. They didn’t really look at their work through an ethical lens, but they did not do something intentionally unethical, just maybe a little careless.” Further, he explained that a key motivation for ethics statements was making authors stop and think about the ethical implications of their work, prompting some authors to revise or withdraw papers before they ever reached peer review.
Are ethics policies here to stay?
“I think something like this will probably stick around,” Dr. Leyton-Brown said. “But because ethics statements are fairly new, it’s unclear at this point how useful they will be. The worst outcome would be if such statements just become a superfluous, virtue-signaling exercise.”
But he did underscore the importance of how “these policies help put ethics on the agenda for everybody to start thinking and talking about in a sensible and disciplined manner. That can be particularly important in this new Twitter universe, where outrage can be amplified quickly and loudly. It’s important to have a due diligence phase to help conferences flag potentially problematic work that might otherwise disproportionately dominate the public conversation about AI.”
Read more about Kevin’s work in AI