The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
Is the deal linked to Google's buying spree of seven robotics companies in December alone, including Boston Dynamics, "a company holding contracts with the US military"? Is Google building an unstoppable robot army powered by AI? Does Google want to create something like Skynet? Or is this just the busybody gossip that naturally fills an information vacuum? The deal could simply be about improving search engine functionality.
All this uncertainty raises an unnerving question: what exactly is DeepMind so worried about that it insisted on creating an ethics board? Is the board a basic preventative measure, or a Hail Mary pass to save "humanity from extinction"? Whatever the answer, we don't want to feed the rumor mill here. But as professional ethicists, we can shed some light on the mysterious nature of ethics boards and what good they can do.
[Image: The Terminator. (Photo credit: Wikipedia)]
It's fair to assume that the smart folks at DeepMind have thought deeply about AI and its implications. AI is a very powerful technology that is largely invisible to the average person. Right now, AI controls airplanes, stock markets, information searches, surveillance programs, and more. These applications can't help but have a tremendous impact on society and ethics, increasingly so as nearly every futurist predicts that AI will become more pervasive in our lives.
AI developers are thus under pressure to get it right. Just as we'd want to make sure you knew how to be a responsible gun owner before we sold you one, DeepMind seems to have the same commonsense concern for responsibility as it sells potent AI technology and expertise. But because DeepMind is looking to a review board for ethical guidance, there are key cautionary issues to keep in mind as we follow its development.
1. Ethics Isn't Just About Legal Risk
The first issue to be concerned with is the limits of ethics when it is framed as legal advice.
We don't know who will be invited to sit on the ethics board, but we do know that "chief ethics officer" has been a popular role in business for more than a decade. That position has primarily been filled by lawyers focused on compliance issues and following extant law. Google exemplifies this trend with its Ethics & Compliance team, which works "with outside ethics counsel to ensure compliance with all relevant political laws and the associated filings and reports."
This specific focus can lead to wonderful outcomes, such as decreasing consumer risk and improving public safety. But let's not kid ourselves: the focus is dictated by the self-interested goal of minimizing corporate liability. Harms that aren't currently prohibited therefore receive little consideration. This is a notoriously grey moral area for emerging technologies, since they are usually unanticipated and unaddressed by laws or regulations. And just as poking unnecessarily into a hornet's nest is dangerously foolish, companies fear that probing issues beyond what is legally required may compromise plausible deniability and open up new possibilities for litigation.
As it turns out, there isn't much law that directly governs AI research, though the usual business laws about privacy, product liability, and so on still apply. So DeepMind's insistence on an ethics board may be a signal that it's interested in more than legal risk avoidance.
If that is the case, we hope the key players appreciate the full scope of ethics. Ethics isn't just about dictating rules for what you should and should not do. Especially in the domain of technology ethics, the answers to pressing questions tend to be murky: the law is often undefined; the applications of new technologies are hard to foresee; and social and political values conflict, both internally and with one another, in new ways.