Artificial intelligence (AI) has already changed the world. AI algorithms have been incorporated into everyday applications—such as social media, web mapping, facial recognition, and virtual home assistants—to make them more effective and user-friendly. For those who remember DOS command prompts, these accomplishments alone are impressive. Yet, they represent only AI’s opening act.
This technology has the potential to revolutionize every facet of human life. AIs may one day fly planes and drive cars. They could scale up manufacturing and agriculture, while making both safer and more sustainable. They could make us healthier by serving as our fitness trainers and assisting doctors in diagnosing diseases. Sky’s the limit then? Nope, it’s not even the starting line.
But for Allan Dafoe, director of the Centre for the Governance of AI, this boundless potential has a dark side. The same technology that could deliver these many gifts may also prove existentially dangerous.
An expert’s take on AI governance
Dafoe likens AI's world-altering potential to that of human intelligence. The human mind blessed us with the printing press, the light bulb, and penicillin. But it also conceived of such horrors as the guillotine, DDT, and nuclear weapons.
If we’re to maximize AI’s benefits while avoiding its dangers, we’ll need to institute governance over these algorithms—preferably before the theoretical risks coalesce into concrete threats.
Dafoe readily admits that this is a difficult task. The governance must be potent enough to be effective, yet not so strict as to strangle innovation and freedom. It must be open enough to entice stakeholder buy-in, yet not leave loopholes for bad actors to exploit. And it requires us to be prescient enough to adequately assess the risks yet not succumb to our fears and wild flights of fancy.
But if we manage to construct thoughtful laws and enforce them effectively, we can build the future we want instead of stumbling into one blindly—a nice change for humanity.
Is it relevant for my business?
For any industry, AI could introduce small shifts in the next couple of years or a complete upheaval within a decade. Because development is difficult and costly, there’s no way to adequately forecast arrival or adoption. Many experts, you’ll recall, predicted that we’d be carted around in self-driving vehicles by 2020. No such luck.
Even so, like other world-changing innovations, AI will eventually become bundled with laws, regulations, and standards. These will have widespread ramifications for any industry that uses the technology.
Self-driving cars are a useful reference point here, too. AI chauffeurs remain years away, and it will be even longer before every vehicle on the road is fully autonomous—if such a day ever comes. But the introduction of a few hundred thousand self-driving cars will change how we approach the governance of licensing, shipping, street laws, pedestrian safety, traffic violations, vehicle insurance, liability insurance, and a host of other social factors.
Because the effects could be so profound, the realm of influence so expansive, and the changes so difficult to predict, it’s relevant for almost any business to take an active role in monitoring the development of AI and its governance.
Is it actionable?
If your organization is actively working on the AI frontiers, you’ll want to make your voice heard and begin building the coalitions necessary to shepherd the attention of policymakers. If your AI timeline stretches further into the future, then today would be an ideal time to start educating your team in preparation.
What governance should you be looking to champion and institute? Unfortunately, there's no one-size-fits-all approach. Companies are spending billions on AI research, and because each industry operates within unique parameters, the AI that emerges from those efforts will be tailored to industry-specific needs.
With that said, experts like Dafoe argue that transparency will be essential. To ensure an AI makes fair and ethical decisions, we need to know how it reaches its conclusions. Hiding that analytical process behind proprietary ramparts risks institutionalizing practices that don't align with our values. For instance, AIs could exacerbate social prejudices and unfair practices if biases are baked into the algorithm, regardless of whether the programmers' biases were intentional or unconscious.
In fact, it's already happening today. Criminal justice systems currently use software to try to predict recidivism, that is, the likelihood that a person will reoffend. Such software often labels Black defendants as more likely to commit future crimes than white defendants; yet a ProPublica report into one such system found it correctly predicted future violent crime only 20 percent of the time.
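What would it mean, in practice, to audit a system like this for bias? One common approach is to compare error rates across groups, for example, how often people who did not reoffend were nonetheless flagged as high risk. The sketch below illustrates the idea on hypothetical, made-up data (the group labels, scores, and outcomes are all invented for illustration and have no connection to any real system):

```python
# Toy fairness audit on hypothetical data: compare, per group, how often
# people who did NOT reoffend were wrongly flagged as high risk.

def false_positive_rate(predictions, outcomes):
    """Share of non-reoffenders (outcome 0) flagged high-risk (prediction 1)."""
    flags_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flags_for_non_reoffenders:
        return 0.0
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1), ("B", 1, 0),
]

for group in ("A", "B"):
    preds = [p for g, p, _ in records if g == group]
    outs = [o for g, _, o in records if g == group]
    print(group, round(false_positive_rate(preds, outs), 2))
```

A large gap between groups on a metric like this is exactly the kind of signal transparency advocates want regulators, and the public, to be able to see; a model kept entirely behind proprietary walls cannot be checked this way.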
And because the power of AI has the potential to affect everyone, we all have a voice in the discussion of how to best govern it.
Gear up for the AI revolution with lessons ‘For Business’ from Big Think Edge. At Edge, more than 350 experts, academics, and entrepreneurs come together to teach essential skills in career development and lifelong learning. Prepare for new technological shifts with lessons such as:
- What Skills Will Set You Apart in the Age of Automation?, with David Epstein, Author, Range: Why Generalists Triumph in a Specialized World
- Making the Shift to Social: Heed the Mobile Revolution, with Mollie Spilman, Chief Revenue Officer, Criteo
- Imagine It Forward: Understand the Fundamentals of Changemaking, with Beth Comstock, Former Vice Chair, GE, and Author, Imagine It Forward
- Tackle the World’s Biggest Problems: The 6 Ds of Exponential Organizations, with Peter Diamandis, Founder and CEO, XPRIZE
- Global Adoption of Technology Has Accelerated Deceptively, with James Manyika, Director, McKinsey Global Institute
Request a demo today!