The topic of artificial intelligence (AI) couldn’t receive more hype. After decades of being an eccentric technology topic, it has suddenly jumped to the top of everybody’s list, with AI-washing permeating every aspect of technology.
But organizations need to be wary: the risks of adopting AI are high and probably underestimated, on both the business and societal levels. Setting aside the possible risk to all humanity, decisions about how AI is implemented have large financial and staffing implications that organizations will need to weigh carefully.
Beyond the larger ethical and existential questions about the threats of AI — technology leaders have warned of possible human extinction — there are more practical considerations such as the cost, the skills required, and the business risks. Let’s dive in and consider the top five risks of AI to your business.
Risk #1: Accuracy and Accountability
This may be the largest current problem with AI: many applications have serious gaps in sourcing, accuracy, and accountability.
Where does the information come from? How do you verify it? On what data set was the AI trained? Most AI-driven applications are neither transparent nor verifiable. To borrow the old Cold War maxim – “trust but verify” – most AI sourcing simply can’t be verified. It’s a black box.
In many cases, there are outright errors. Many of the mistakes made by chatbots have been documented. Here are just a few:
- Google’s Bard AI got off to a rocky start with some well-publicized mistakes and “hallucinations,” a phenomenon in which AI models make up facts. Its errors in the field of astronomy led a leading astrophysicist, Grant Tremblay, to observe that while Bard is “impressive,” AI chatbots like ChatGPT and Bard “have a tendency to confidently state incorrect information.”
- Microsoft’s Bing chatbot mangled key financial figures for a clothing retailer and botched a basic comparison of vacuum cleaners. Humans had to clean that up.
- OpenAI’s ChatGPT hallucinates routinely – as in the documented case in which it invented fictitious court citations for a lawyer using it for legal research.
- AI largely failed to help diagnose COVID-19 or aid clinical assessment during the pandemic: a review of 415 such AI tools found that none was fit for clinical use.
- Online real-estate data company Zillow took a $300 million write-down because its Zillow Offers program couldn’t price homes accurately with an AI-driven algorithm.
These are just a few examples – there are plenty more, and some pose risks to financial and physical health. The question organizations need to ask: do you trust AI without appropriate supervision and testing?
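What might that supervision look like in practice? Here is a minimal sketch – with placeholder functions standing in for a real model and a real review workflow, none of which comes from any particular vendor – of a gate that refuses to release AI output until a human signs off:

```python
# Minimal sketch of a human-in-the-loop gate around AI output.
# generate_answer() and the console review are hypothetical stand-ins.

def generate_answer(prompt: str) -> str:
    # Placeholder for a call to any AI model or chatbot API.
    return f"[AI draft answer to: {prompt!r}]"

def human_approves(draft: str) -> bool:
    # Simplest possible review step; in practice this would be a
    # review queue with a qualified human, not a console prompt.
    reply = input(f"Reviewer, approve this draft?\n{draft}\n[y/N] ")
    return reply.strip().lower() == "y"

def supervised_answer(prompt: str) -> str:
    # Never release AI output that no human has verified.
    draft = generate_answer(prompt)
    if not human_approves(draft):
        raise ValueError("Draft rejected by reviewer; do not publish.")
    return draft

if __name__ == "__main__":
    print(supervised_answer("Summarize our Q3 results."))
```

The specifics will vary, but the design principle is the one the examples above argue for: no AI answer reaches a customer, a court filing, or a balance sheet without a qualified human in the loop.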
Risk #2: Skills Gap
As organizations consider adopting AI, they need to ask whether they even have the skills and capabilities to do so. Putting AI into the wrong hands is a recipe for weaponized destruction. As one Silicon Valley source at an IT integrator told me: “Our customers barely understand cloud, how are they going to understand AI?”
Good point. Given the financial mistakes, hallucinations, and errors we have already seen cause serious harm, do most organizations have the AI expertise to use the technology safely? The answer is probably no.
McKinsey has identified a host of AI risks, including technical challenges such as data difficulties and issues with technology processes. The key to all AI is data – how that data is collected, stored, and used is critical. Without the proper understanding, organizations expose themselves to many risks, reputational, data, and security risks among them.
Risk #3: Intellectual Property and Legal Risks
If a person or an organization does something wrong, they are held accountable and liable for their mistakes under the rule of law. What about AI?
There is a host of legal issues in the application of AI. Will AI be treated like a human when it makes mistakes? The origin of an AI error – or of the data behind it – is particularly difficult to trace (e.g., in the case of hallucinations). And then there are the huge intellectual property (IP) questions. If AI models borrow from IP such as software, art, and music, who owns the result? AI disintermediates the owners of IP: when Google is used to search for something, it typically returns a link to the source or originator of the IP; not so with AI.
There is a host of other issues, too: data privacy, data bias, discrimination, security, and ethics. And deepfakes: who owns the rights when somebody creates a deepfake of you?
Imagine a large organization in which AI tools are being adopted across a spectrum of employees and divisions in “shadow IT” fashion. This is a legal and liability nightmare, and it has led dozens of companies to ban AI tools such as ChatGPT – including big names such as Apple, JPMorgan Chase, Citigroup, Deutsche Bank, Wells Fargo, and Verizon.
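Short of an outright ban, one common mitigation is to force every outbound prompt through a choke point that scrubs obviously sensitive strings before they reach a third-party chatbot. The sketch below is purely illustrative – a few naive patterns, not a real data-loss-prevention system:

```python
import re

# Illustrative only: naive redaction of sensitive strings before any text
# leaves the organization. Real data-loss prevention requires far more.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    # Replace anything matching a sensitive pattern with a labeled token.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

def safe_prompt(text: str) -> str:
    # Hypothetical choke point: every outbound prompt passes through
    # here before reaching a third-party chatbot or API.
    return redact(text)

print(safe_prompt("Contact: jane@example.com, card 4111 1111 1111 1111"))
```

In practice, organizations layer this kind of filtering with access controls and audit logging; the point is simply that “shadow IT” use of chatbots bypasses all of it.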
Risk #4: Costs
Every technology is ultimately assessed by its financial return on investment (ROI). Some technologies arrived with unique promise but ultimately failed because they cost more than they returned. Google Glass, Betamax, the Segway, and fuel-cell technology come to mind – all either failed outright or fell short of their expected market gains.
A specific example of AI failing to deliver ROI – and in fact incurring a loss – is Zillow’s misguided (and arrogant) attempt to automate home purchases with an AI-driven pricing algorithm, which, as noted above, didn’t work and cost the company hundreds of millions of dollars.
The scale of AI implementation – in investment, training, and potential – perhaps dwarfs that of any technology in the past. Accenture has already announced a $3 billion investment in AI, and the largest cloud providers are pouring tens of billions into new AI infrastructure. More traditional businesses in the Fortune 500 will have to spend enormous amounts of money to train staff and invest in new AI technologies.
Will it pay off? That will come down to specific results across myriad applications, but there will be some train wrecks along the way. A study by Boston Consulting Group and MIT Sloan found that only 11% of companies say they see a significant ROI on AI.
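The underlying arithmetic is simple even when the estimates are not. Here is a back-of-the-envelope sketch in which every figure is an invented assumption:

```python
# Back-of-the-envelope ROI check for a hypothetical AI project.
# Every figure below is an invented assumption, for illustration only.

investment = 2_000_000     # up-front: licenses, infrastructure, integration ($)
training = 500_000         # staff upskilling ($)
annual_benefit = 900_000   # estimated yearly savings / revenue lift ($)
annual_run_cost = 300_000  # ongoing model, cloud, and oversight costs ($)

upfront = investment + training
net_annual = annual_benefit - annual_run_cost

payback_years = upfront / net_annual
roi_3yr = (3 * net_annual - upfront) / upfront

print(f"Payback period: {payback_years:.1f} years")   # ~4.2 years
print(f"Three-year ROI: {roi_3yr:.0%}")               # -28%
```

Under those invented numbers, the project is still underwater after three years – exactly the kind of sobering math the BCG/MIT Sloan finding suggests many adopters are running into.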
Risk #5: The End of Humanity
I’ve left the largest risk of AI for last. It could nuke us all, deciding that it’s smarter and more powerful than us — and that humans aren’t even needed.
There has always been debate about the singularity – the point at which AI takes on a life of its own – and what it will mean for humanity. There are, after all, times when we don’t seem that smart.
The less dramatic question concerns AI’s soul. Does generative AI have a soul when it writes or composes music? Will humans embrace a soulless machine?
The questions get deeper when we ask about our safety, or what happens if AI does reach the singularity. What’s the impact on global security when terrorists or hostile nation-states use AI to attack others? Elon Musk has warned the United Nations about autonomous weapons controlled by artificial intelligence.
The wide range of risks to humanity recently led 1,000 technology leaders and experts to sign a letter warning about a vast threat to human existence.
This question is probably too big for the average CIO or CEO, but it’s certainly being taken up by global governments, with many already taking steps to restrict or outright ban AI.
This risk, along with the rest of the risks listed here, should lead you to only one conclusion: your organization may have to take a slow and steady approach to adopting AI – and you need a strategy that contemplates the risks right out of the gate. AI may not make humans extinct, but it could pose enormous risks to your business.