Avoiding AI’s False Prophets
If you are a senior executive exploring the emerging world of AI, I'd like to offer a word of advice: be careful where you are getting your guidance.
I’ve been amazed over the past few weeks at the “advice” being promoted by supposed experts in the field of AI. Though there is unquestionably some great content out there, there are also individuals spreading ideas that are – well, just wrong.
I find these misguided voices particularly concerning for leaders in the life sciences and healthcare sectors. Our industry has unique needs – clinical evidence requirements, regulatory obligations, patient privacy concerns – that can only be satisfied through well-formed AI plans.
Recent Examples of Questionable Advice
Here are a few samples of “expert” advice I’ve found questionable recently.
“You don’t need to know anything about AI to use it. You just talk to it.”
The notion that simply engaging with AI will generate high-impact results is false. Today, AI models are not sources of truth – they are sources of prediction about what may be true. Those predictions are driven by three things: how a model is engineered (i.e., what is the underlying design and methodology), trained (i.e., what data was used to train its responses), and used (i.e., what a user tells it and asks it to do).
If any one of those things is misaligned to the user’s actual intent (and they almost always are to some degree), unreliable results will ensue. Anyone with real experience in AI can tell you all the many ways that AI gets it wrong. When people suggest using AI is like pressing a button on a vending machine, they are glossing over the inaccurate, misleading, and even dangerous responses users might receive.
In the future, our AI capabilities will improve. But today, there are countless situations where you cannot use AI safely and effectively if you don’t understand how it works.
“AI just replaced your strategy consultant.”
I’ve seen this quote from dozens of “experts” fishing for clicks. It reminds me of the predicted demise of radiologists due to AI, as well as times when people were totally convinced that the key to a great strategy was a great set of templates.
Strategy is not templates or dashboards. It’s not market research and competitive intelligence. It’s not about creating forecasts, or setting incremental goals, or even analyzing your next product or service feature set. Strategy is a nuanced, disciplined approach to aligning people and investments in complex trade-off decisions about the future options for your company, your culture, and your ability to create value. Can AI support that with planning and analysis tasks? Absolutely. But does AI currently have access to the nuanced data and skills – based in creatively understanding human behavior, management styles, cultural weaknesses, capability dependencies, market experiences, risk tolerance, and more – associated with great strategies for a specific company? Not today. If highly successful strategies emerged from tactical planning tasks alone, every company would be a top performer. And by the way, your competitors have the same AI tools you do.
“You don’t need an AI strategy.”
The rationale for this argument is simple: AI is everywhere, so it should just be a part of your business strategy. You don’t need to think about AI distinctly.
AI should absolutely be a part of every business strategy. And if accurate data were always available, budgets for AI tools were infinite, organizational changes were easy, systems were seamlessly integrated, and people didn’t care about their careers, maybe organizations would not need an AI strategy. But since we don’t live in a world like that, here are a few reasons you need an actual AI strategy:
1. AI is not an incremental technology; it is a disruptor. How many horse-drawn plow manufacturers and Blockbuster store owners are in your town today? Approaching disruptive innovation in ignorance of its impact on your employees, operating model, and customers has not generally worked well in the past. And by the way, if you just add AI to existing processes, you make those processes more expensive...so be careful how you pursue incremental adoption.
2. Effective AI requires effective data and infrastructure. Organizations without a strategy for adopting and supporting AI will face an avalanche of business issues: uncontrolled data proliferation, IP disclosure risks, workflows that don’t scale, low-quality data, compliance problems, duplicative investments, conflicting insights, and more.
3. Resources are finite. Money, people, and time are all limited. When an external force is impacting every area of your business at once, you cannot commit to “do it all” – that’s a recipe for failure. You need a well-formed, prioritized plan with a strong foundation in value creation.
4. For many people, AI is a threat. When employees see a new technology described as a job killer and their leaders don’t show up with a clear plan for their future with it, the most valuable and marketable among them often leave for safer opportunities. In doing so, they drain your company of capability, capacity, and institutional knowledge.
“Forget use cases. No more software.”
Some voices on social media have argued that organizations no longer need to target specific uses of AI, and they won’t need software in the near future. The argument is that AI tools will just take care of whatever a business needs to operate. This is an example of magical thinking.
In the real world, there are three problems with this idea: quality, scalability, and feasibility. When different people use different tools and different processes for critical tasks, you create massive problems in consistency and therefore quality. And because the organization is not standardizing on the best way to work, its ability to scale and grow becomes highly constrained.
More importantly, for AI to work at all, it needs great data – which comes from controlled processes and reliable software. AI also needs processes – you cannot automate and optimize an undefined process. And AI makes mistakes, so you need ways of tailoring and constraining its operations within the variables appropriate for your organization (and its quality, privacy, and compliance obligations).
Finding Better Experts: Six Recommendations
So how can you tell a real AI expert from the false prophets? Here are a few things I look for in evaluating new hires and partners:
1. Do they have expertise in your industry? In healthcare and life sciences, it’s hard to see how AI guidance can be effective without a deep understanding of the industry itself. Dimensions around research, clinical data, safety, efficacy, validation requirements, real-world evidence, and regulatory considerations are vital to developing effective AI roadmaps.
2. Do they have a solid background in analytics and data sciences? If not, be cautious – much of what an expert needs to know about AI emerges from computational sciences and the challenges of grappling with real-world data. Here’s an easy hint: if the thing they think is most important is anything except the data, they likely don’t understand AI.
3. Can they work effectively at the intersection of business and IT? Effective leverage from AI emerges when financial, operational, and commercial workflows and decision points become tightly aligned to data, analytical, and AI capabilities. Look for people and organizations that can crosswalk effortlessly between those stakeholders and topics.
4. Have they ever programmed AI? People who know how to use AI chatbots are not AI experts, even if they use the products a lot. Find someone who works with APIs, understands software engineering, fine-tunes models, and has hands-on experience building more sophisticated AI applications.
5. Are they echoing press releases or giving proven guidance? There’s nothing wrong with sharing the latest news – I do it sometimes. But there is a difference between a sports commentator and an athlete. Many of the best AI players don’t stand around talking about AI; they are too busy using it to build smarter companies.
6. Have they ever designed and implemented complex digital transformations? Successful adoption of AI is associated with process re-engineering, user community engagement, requirements management, testing, and change management – not just cool tech. If the person doesn’t have a track record of upgrading organizations with innovative technologies, don’t trust them to learn it by trying to upgrade yours.
In Conclusion
My skepticism of some pundits does not mean I’m skeptical of AI. On the contrary, I think AI will be one of the greatest transformative technologies mankind has ever embraced. But like all transformations, it needs to be pursued with both vigor and rigor. Enthusiasm should be matched by responsible and experienced leadership that maximizes what these emerging technologies do for patients and practitioners around the world.