Most people are watching the hype over AI with detached interest. But as a CEO, you are under a great deal of pressure to implement it in your business. Everyone from your employees to shareholders to the media is shouting that any company that doesn’t will be left behind, made irrelevant, and left competing against rivals with a significant edge.
Implementing a new technology – especially one so controversial – is a big decision, though. And, like most complex decisions that happen in your company, making the call comes down to you. The success that results from your decision – and the failure – will land in your lap, too.
Curiously, making high-level decisions about things like implementing AI is something an AI can help with. According to a recent study by the IBM Institute for Business Value, 43% of CEOs are using generative AI to inform strategic decisions. This is because decisions have gotten so complex that those that land on the CEO’s desk require more than the traditional financial ledgers, operational reports, intuition, and life experience that went into CEO-level decisions just a few years ago.
Is using a generative AI as a fast research assistant a good idea, though? Like everything having to do with AI, the answer is complicated.
Accepting Bad Decisions
The decisions you face are very likely complex ones about sustainability, cybersecurity, diversity, equity and inclusion, AI, and stakeholder management and, according to the CEOs who responded to the IBM study, the weight of them is “crushing.”
Make a wrong call and the consequences can be disastrous. But there are no easy answers.
Certainly, accepting that not every decision you make will be right is part of the leadership skillset. “I tell managers that they have to make decisions,” George Tsounis, CTO of Stretto, told CIO.com. “It’s part of the job.” But you won’t always be right. “If you make nine out of 10 decisions right, you’re killing it,” he said. “[You] have to find comfort with the fact that [you] can fail.”
Don’t expect your stakeholders to find comfort in your failures, though. Today’s shareholders, employees, and even the media will call you out and blame you for mistakes, even when they were the ones who compelled you to take a stand on a controversial issue that may have had no clear financial benefit.
AI for Better Decisions
In the midst of this heated climate, AI dangles within easy reach. AI – itself a challenging decision – will throw considerable intellectual speed and vast amounts of stored wisdom at your problem at the tap of a keyboard.
Asking a generative AI to help with treacherous, high-level decisions is tempting. You will get an instant answer, backed by vast amounts of data. That answer will probably make perfect sense and will certainly be delivered with confidence and a convincing rationale for the decision.
The problem? It’s hard to know when an AI is doing what AIs do when they don’t know the answer: hallucinating one. And even when its conclusion is not fiction, you can’t be sure of the source data.
Generative AI is often trained on generic datasets and typically keeps its logic hidden from view. You can’t ask it if it is making things up or even how it arrived at a conclusion. So that decision might be, as the IBM study put it, “an engine of mistakes.”
Bad Data Makes Bad Decisions
As with all the data input you gather when analyzing your options – whether you are asking for the opinion of colleagues or consulting a spreadsheet – you have to consider how much you trust the source.
And that rather obvious point is the one that sticks for many CEOs: how good is the data the AI uses to come up with answers?
AIs can process vast quantities of data, quickly and accurately. Pose the right question, and an AI will deliver near-instant analysis of customer behavior, financial information, market and technology trends, and operational metrics. An AI can be called upon to use historical data to predict what customers will do, or to assess risk ahead of big decisions such as expansions, mergers, and acquisitions.
But, according to a report from Deloitte, the quality of data the AI uses to make its analysis is a problem. “To become an AI-fueled organization, you will need access to the right data sets, the ability to train algorithms on that data, and professionals who can interpret the information,” reads the report.
For many CEOs, getting the data sets in order comes before trusting an AI. They have legitimate concerns about the provenance of the data (61%) and the security of their own data (57%) in the hands of an AI.
Deciding About AI For Your Company
Given this level of concern, it is even more important that you make a decision about generative AI. There is a trickle-down effect in an organization: if the CEO uses AI to make decisions, so will others. But without a decision from leadership, personnel might use it without the sanction or knowledge of leaders. So setting policy around AI use is an important business decision that needs to come from the top.
Because of the troubling concerns about data origin and security, many companies – including Apple, Samsung, Verizon, and many financial institutions – have made the decision to ban generative AI use among rank and file employees.
Not All AI Is Generative
But that doesn’t mean that these companies are banning all forms of AI.
As Verizon’s CIO Nasrin Rezai pointed out in a recent roundtable discussion, generative AIs – those that generate text and images from a prompt – are not the only AI out there. “AI is a much broader science and discipline involving using computers to do things that traditionally require human intelligence,” she said.
There are many ways that AI can use its computational genius to lift the workload of your people, help you make better decisions, and deliver your products to market more competitively. The key to using it intelligently is to take a hard look at the data it uses and to use it for things you can trust it to do.
One report, from Gartner, for example, predicts that the grunt work of data collection, tracking, and reporting will be handled almost entirely by AI by the year 2030. This sort of work takes people away from things they are good at and asks them to do tasks computers are much better suited to. Using AI to build the datasets a generative AI draws on – datasets built from actual workflows, customers, and events at your own company – may sound futuristic. But it is an approach that will help you make better, faster decisions.
To learn more about how the artificial intelligence Moovila has built into our project and resource management platform, read this blog or explore our RPAX feature set.