It’s a fact: there exist innumerable quotes about risk-taking. We see them shared between loved ones, on social media, in business communications; the list goes on. Quotes like “The reward is in the risk itself,” “You do your best work when you push yourself and take risks,” and “If you don’t risk going out on the branch, you’ll never get the best fruit.” There’s substance here, but a key thing to remember is that all-or-nothing attitudes and pat summations of a situation are, in themselves, perhaps too risky to be worth the reward. Healthier, risk-averse mindsets are warranted. (But then again, is risking talking too deeply about risks a risk in and of itself?)
We’re veering into contextual paradox territory here. I digress.
A moral here, though, is that talking about, and then actually taking, important risks is a multi-tiered discussion. No doubt about it.
So, it would seem, is another loaded topic nowadays, one we cover what feels like every other day here on Future of Work:
AI.
Generative artificial intelligence (GenAI) has presented the world with a new slew of risks: misleading or inaccurate generated information (which can hinge on human prompting, a Pandora’s box of its own), perpetuation of bias, errors in training data, copyright infringement, privacy violations, malicious use in the hands of bad actors, and more. Even in the most seemingly beneficial GenAI-involved situations, the risks are there.
So, are these risks proving to be worth taking?
Of course, many believe they are. However, as mentioned, at-all-costs sorts of risks (and those are the kinds now being taken with GenAI) have drawbacks all their own.
According to new research from enterprise work assistance and AI platform Glean (conducted in conjunction with data from technology research and advisory firm ISG), there’s much to break down when it comes to GenAI-centric business value and the risks therein.
I’ve teased out the long-story-short version for you, readers:
- To develop this Glean-ISG report, 224 senior-level IT leaders (VP and C-suite) across the U.S. and Europe were surveyed. These leaders come from companies with at least 1,000 employees and at least $100 million in annual revenue.
- Leaders are eager to dedicate huge spend to GenAI; according to the report, budgets for GenAI are expected to nearly triple by 2025. Respondents reported that GenAI-based projects have consumed an average of 1.5% of IT budgets since 2023, a figure that has already risen to upwards of 2.7% so far in 2024. That “nearly triple by 2025” projection therefore works out to roughly 4.3%. (That’s actually quite big; see the quick back-of-the-napkin check after this list.)
- Logically, companies with higher revenue tend to plan to invest more in GenAI. Of those with more than $5 billion in annual revenue, 26% reportedly plan to invest more than 10% of their IT budgets in GenAI through 2025. (That’s bigger.)
- Additionally, more than a third (34%) of surveyed leaders disagreed with the idea that it’s better to slow down GenAI adoption than to risk negative consequences, suggesting that they’re “willing to make moves fast and produce new solutions quickly, rather than wait to see how the cycle plays out.” But is that the right M.O. here? After all, not every GenAI user comes from a Microsoft or an NVIDIA; cycles need to be monitored as new algorithms and models evolve, as new regulations come into play, and so on. This could perhaps be indicative of a troubling trend.
- It’s also sort of ironic that, per Glean, “only 8% of leaders surveyed said that their biggest concern about implementing GenAI was that the capabilities were changing too fast for them to be able to invest yet. Instead, their biggest concern is around a lack of expertise in generative AI in their organization.” So why, then, are prompt engineers and across-the-board AI experts scarce but in allegedly high demand? What are organizations missing?
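For the numerically inclined, here’s that back-of-the-napkin check mentioned above: a minimal, purely illustrative Python sketch using the budget-share percentages cited in the report. (The growth_multiple helper is ours, for illustration only; it isn’t anything from Glean or ISG.)

```python
# Sanity-check the "nearly triple by 2025" framing against the cited budget shares.
# Percentages are the GenAI share of IT budgets reported in the Glean-ISG study.
budget_share = {2023: 1.5, 2024: 2.7, 2025: 4.3}  # in percent

def growth_multiple(start_year: int, end_year: int) -> float:
    """Return how many times larger the GenAI budget share is in end_year vs. start_year."""
    return budget_share[end_year] / budget_share[start_year]

print(f"2023 -> 2024: {growth_multiple(2023, 2024):.2f}x")  # 1.80x
print(f"2023 -> 2025: {growth_multiple(2023, 2025):.2f}x")  # 2.87x, i.e. "nearly triple"
```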
To us (and to Glean and ISG), it seems clear that companies lack clear, uniform methods both for a) adopting GenAI and b) accurately measuring ROI on the investments they’re rapidly boosting. Yet into the risk bucket they leap. Are they onto something that others aren’t? Is there a layer of effectiveness they’re hitting that other companies can’t reach, or is the shark being jumped?
I suppose, for now, this amounts to great data, a bit of a think piece from us, and open-ended questions regarding what’ll be next. Q2 of 2024 begins in less than two weeks, yet since Q4 of 2022 the number of AI-powered changes blazing through industries has been, and we say this without undue exaggeration, purely astronomical. GenAI today is the worst it will ever be; that’s telling of how good it can, at times, be, and it also highlights how, at the end of the day, there are so many risks we’ve yet to even see, as a collective society.
But in the end, leaders will still drive budgets with implementations of powerful GenAI tools near the top. We’re very interested to see how these trends progress.
As Arvind Jain, Glean’s co-founder and CEO, put it:
“Today’s IT leaders have lived through multiple hype cycles heralding new technologies as transformative, and this study demonstrates that these leaders may see something different in GenAI. Companies are moving with unprecedented speed to invest in and deploy GenAI; as a result, they require solutions that enable them to adopt this powerful new technology quickly, and the key is to do so without taking on unnecessary risk. With the right partners, enterprises can unleash the productivity and operational efficiencies of GenAI and avoid negative consequences.”
The trick, it would seem, is to actually achieve that balance: putting the pedal to the metal and words into action, as it were, rather than letting a storm of GenAI pitfalls sweep our digital shores.
Click here to read the complete report for yourselves, among other resources Glean has made available.
Edited by Greg Tavarez