Planning for AI
What you need to know before committing to AI.
Do you have your AI strategy? If you don’t, be prepared to lose. Or at least, so say the consultants, tech journalists, and pundits. You can’t possibly be a competitive modern company without an AI strategy.
We are the last people to say that AI isn’t important, or that having an AI strategy isn’t a good thing—or even that, if you don’t start thinking about AI initiatives now, you’ll end up behind. Artificial intelligence is a game changer: it’s a revolutionary cluster of technologies that has the potential to make fundamental changes in how we live and work. However, much of what we read puts the cart before the horse. An AI strategy, if it’s just an AI strategy, doesn’t get you very much. An AI strategy that’s just an AI strategy is a weird managerial superstition: pour magic AI sauce over everything, and it will be awesome.
It needs to be said, though it should go without saying: don’t build an AI strategy without first thinking about your business objectives. Better: incorporate AI into your business strategies rather than building an AI strategy. Think about how AI can help you achieve your goals; don’t make it a goal in itself. On her blog Quam Proxime, Kathryn Hume counsels established enterprises against the temptation to become “AI-first.” Instead, they should be “something-else-first-with-an-AI-twist.” Enterprises need AI systems to work smart, to take advantage of their data, to learn about and improve on their past performance. Enterprises don’t need AI to become something new that they don’t yet understand. They need AI to build on the strengths they already have, and become what they already are, but better. That’s how to be innovative and transformational; and if, along the way, you come up with ideas and products that disrupt your current practices (and your industry), so much the better.
Start building that strategy by asking what, precisely, you want to accomplish. Do you want to improve customer experience? Do you want to improve internal processes like HR? Do you need some insights into product design, or are there features in your product that could take advantage of AI? Are there specific processes and tasks that are error-prone, repetitive, and just plain boring, or where some assistance can make your staff more effective? AI can help you accomplish all of these things and much more, just like a hammer can be used to build any kind of house. But to succeed, an AI strategy has to be part of an overall business plan. Whether you’re improving your current business or building out a new business, AI should serve your business plan, not the other way around.
As you’re building AI into your business plan, you’ll need to think about what AI can and can’t do, the foundational work that will allow you to build a successful AI program, the problems you’ll encounter as you build your application, and more. AI can do great things; but before you and your team can make it do great things, you need to look at it realistically and understand the challenges. For example, AI has proven to be really good at classifying things (for example, tagging pictures), but it really can’t tell you much about what’s in those pictures. In “The Seven Deadly Sins of AI Predictions,” Rodney Brooks points out that, while AI is really good at identifying pictures of people playing Frisbee, it can’t tell you whether Frisbees are good to eat, whether an infant can throw one, or anything else that requires commonsense understanding. In a business context, an AI application might be able to tell you whether customers look happy when they see a new product design, but it can’t answer a larger question, like whether the product will be a success.
James Cham, in “Machine Learning and AI: An Investment Perspective,” says “the biggest risk is that we as managers will make really bad decisions about where to invest, and we’ll end up wasting billions of dollars on stupid projects that nobody ends up caring about.” Your goal is not to be that manager.
What is AI, anyway?
You can read hundreds of articles defining artificial intelligence and machine learning. All the definitions will be somewhat different, and all will overlap. We’re not interested in that debate; John Pavlus notes that we use artificial intelligence and machine learning interchangeably, for better or for worse, and that we would be better off abandoning both terms altogether and just talking about automation. It’s a good point; if you’re automating a process effectively, it doesn’t matter whether you’re using a neural network, a rule-based system, or an older but simpler technique.
Nevertheless, it’s still necessary to start with a (brief) definition of AI. Here’s ours. AI is characterized by output that isn’t strictly dependent on the input or on the algorithm: the output of an AI system depends critically on a training process, in which the program learns how to perform its task. Training differentiates AI from traditional software applications and data analysis.
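To make that distinction concrete, here’s a minimal sketch in Python, using scikit-learn and a made-up fraud-flagging example (the threshold, amounts, and labels are all invented for illustration). The rule-based function behaves the same no matter what data you’ve seen; the trained model’s behavior comes from its training examples:

```python
from sklearn.linear_model import LogisticRegression

# A rule-based system: its behavior is fixed by the programmer.
def flag_transaction_rule(amount):
    return amount > 1000  # hand-picked threshold, never changes

# A trained system: its behavior depends on the labeled examples it sees.
amounts = [[20], [50], [900], [1200], [3000], [5000]]  # transaction sizes
labels = [0, 0, 0, 1, 1, 1]                            # 1 = fraudulent

model = LogisticRegression()
model.fit(amounts, labels)  # training: internal parameters are learned

print(flag_transaction_rule(1500))  # True, because the programmer said so
print(model.predict([[1500]])[0])   # depends on what the model learned
```

Change the training examples and the trained model’s predictions change, even though the code doesn’t. That dependence on training is what the definition above hinges on.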
What does AI do well?
It’s easy to be misled by almost apocalyptic statements about what AI can (or might) do. But you’re planning a business strategy for now, or a year from now—not decades from now. So, you have to think about the reality of what AI can do.
Solving specific problems
AI is very good at solving well-defined, specific problems. AlphaGo’s victory over Go expert Lee Sedol was impressive because playing Go well is extremely difficult, and the best guess was that we were a decade away from machines that could beat a player of Sedol’s caliber. But forget about statements like “There are 10^170 potential Go games.” The number of possible games is unimportant. What’s important is the problem itself: win a game with very simple, explicit rules. That’s a well-defined and very specific problem. We’re not asking for a system that can win any board game, or that can transfer its knowledge of one game to another, or that can choose whether it would prefer to play Go, Chess, or Checkers. Building a system that can win is neither simple nor easy, but for the purposes of business planning, that’s less important than the statement of the problem itself.
Think about autonomous vehicles, under development by Waymo (Google/Alphabet), Uber, Tesla, and others. Driving a car is not a well-defined, specific problem. But it can be broken down into well-defined, specific problems: planning a route, identifying road signs and signals, identifying obstructions (including other vehicles and pedestrians), detecting skids, managing the brakes, and so on. None of these problems are easy to solve, though they’re solvable. What’s more important is that they’re well-defined, and that’s why they’re solvable.
The same principle applies if you’re a small startup. Prospera analyzes images of tomatoes to detect insect infestations, disease, and other problems that reduce yield. This is a specific, well-defined problem. Prospera knows what sick tomato plants look like, so it can label images for training; it knows where to place the cameras to get good images of the plants; it knows how to collect data from the cameras reliably. The results may seem like magic, but that magic only happens because Prospera chose a well-defined problem, studied it carefully, and did the hard work needed to solve it.
Similarly, Chorus.ai listens to sales calls, transcribes them, and annotates the transcripts with action items and important topics that come up. Chorus isn’t trying to build a machine that “makes sales,” a goal that’s neither well-defined nor specific. It’s transcribing a conversation (a difficult but fairly well-understood problem) and looking for specific signals that indicate action items in that transcription. It’s an assistant to the salespeople, performing routine but necessary tasks. Again, the problem it’s solving is well-defined and specific.
It’s not surprising that breaking a problem into smaller parts makes it easier to solve; that’s what engineering is all about. Let’s say you want to build an AI application to help customer service agents; after all, your high-level business goal is to improve customer satisfaction. What might that look like? Improving customer satisfaction is a big problem that’s hard to define precisely. Here are some smaller, more specific steps toward an AI solution (sketched in code after the list); the system needs to:
- transcribe questions from the customers
- transcribe answers given by customer service representatives
- index and tag the questions and answers
- store the questions and answers in a database
- transcribe new questions and decide which features of those questions are important
- retrieve appropriate answers from the database
- present the answers to customer service agents, who in turn present them to the callers.
These are all simpler problems that can be solved—perhaps not solved easily, but solved nonetheless. In real life, you might need to break several of these problems down into even simpler, more specific tasks. Regardless of your problem, or the path you take to solve it, building an AI solution will require you to back off from bold, futuristic plans and break the problem down into well-defined and specific problems.
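As a hedged illustration of a few of these steps (indexing stored questions, retrieving the closest match), here’s a minimal sketch using TF-IDF similarity from scikit-learn; the questions, answers, and choice of similarity measure are illustrative assumptions, not a recommended architecture:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical store of past questions and their answers.
past_questions = [
    "How do I reset my password?",
    "Where can I see my invoice?",
    "How do I cancel my subscription?",
]
past_answers = [
    "Use the 'Forgot password' link on the login page.",
    "Invoices are under Account > Billing.",
    "Go to Account > Plan and choose Cancel.",
]

# Index the stored questions (the "index and tag" step).
vectorizer = TfidfVectorizer()
question_index = vectorizer.fit_transform(past_questions)

def suggest_answer(new_question):
    """Retrieve the stored answer most similar to the new question."""
    query = vectorizer.transform([new_question])
    scores = cosine_similarity(query, question_index)[0]
    return past_answers[scores.argmax()]

print(suggest_answer("I forgot my password"))
```

Transcription, tagging, and the agent-facing presentation would each be separate components, built by the same process of isolating one small, well-defined task.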
Augmenting humans
The key to using artificial intelligence effectively is to augment and assist humans, not to replace them. In “The Fatal Flaw of AI Implementation,” Jeanne Ross writes, “Companies that view smart machines purely as a cost-cutting opportunity are likely to insert them in all the wrong places and all the wrong ways.” If you try to use AI to replace humans, you’re bound to make poor decisions about how to use it.
As Ross points out, using AI for fraud detection will probably increase the number of fraud cases you discover. You don’t need as many people to sift through the data looking for anomalies, but you do need people to handle the cases you find. Furthermore, the additional cases an AI application discovers are likely to be harder than the cases you found using your older methods. AI eliminates the dull, repetitive part of the job, but you may require more staff (and more highly skilled staff) to deal with the less routine, more creative parts.
What makes AI hard?
If you’re going to build an AI strategy, you have to think about what makes AI difficult. Otherwise, you’re starting up a major project without understanding the costs.
Data practices
First and foremost: AI products are data products. Training your AI product requires data—and probably a lot of data. The somewhat harsh reality is that you’re unlikely to have useful data if you don’t already have solid data practices in your organization. In “The AI Hierarchy of Needs,” Monica Rogati spells out the steps needed to build an AI practice: identifying data sources, building data pipelines, cleaning and preparing data, identifying potential signals in your data, and measuring your results. And Andrew Ng, in his O’Reilly AI Conference keynote, says that organizations that succeed with AI will master strategic data acquisition: they won’t just have data; they will know how and where to get more.
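As a tiny illustration of what the “cleaning and preparing” rung of that hierarchy looks like in practice, here’s a sketch in pandas; the file name, columns, and label values are all hypothetical:

```python
import pandas as pd

# Hypothetical raw export; the file and column names are made up.
df = pd.read_csv("support_calls.csv")

# Basic hygiene before anything reaches a training pipeline.
df = df.drop_duplicates()
df = df.dropna(subset=["transcript", "outcome"])  # require the key fields
df["outcome"] = df["outcome"].str.strip().str.lower()

# Fail loudly instead of silently training on junk labels.
assert df["outcome"].isin({"resolved", "escalated"}).all(), "unexpected labels"

df.to_csv("support_calls_clean.csv", index=False)
```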
An organization that is using data effectively probably has most of the necessary infrastructure in place. Even if your AI aspirations aren’t directly related to your current data efforts, you’ll have figured out what needs to be done and will have the infrastructure ready. However, many organizations with AI aspirations don’t have good data practices, even if they think they do. Take a careful and skeptical look at how you currently use data as part of building your AI strategy.
Training and retraining
AI systems need to be trained before they can do any useful work. Training is essentially running the program on a set of known data and letting it build a model (essentially, tweaking internal parameters automatically) until it gets adequate results. Then you run it on a separate set of test data, data that it hasn’t seen before, to see whether the results are still satisfactory. Training goes hand-in-hand with writing and debugging the software—and it’s quite likely that more work will go into training than the rest of the development process.
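Here’s that loop in miniature, sketched with scikit-learn and one of its bundled datasets (both chosen purely for convenience; your data and model will differ):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # training: internal parameters are tuned

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"training accuracy: {train_acc:.2f}, held-out accuracy: {test_acc:.2f}")
# A large gap between these two numbers is the classic sign of overfitting.
```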
Therefore, if you’re planning an AI project, you need to account for training time. You also need to be aware of how training can go wrong. The pitfalls include:
- Overfitting: If your AI application achieves 100% accuracy on your training data, are you done? Is it time to break out the champagne? Unfortunately, no: if you get 100% accuracy on training data, your application has probably “memorized” the training data, and will most likely perform terribly on real-world data. This is called overfitting, and it’s a constant problem; a large gap between training accuracy and held-out accuracy (as in the sketch above) is the classic symptom. On the other hand, since your application won’t be 100% accurate on training data, it will never be 100% accurate on real-world data. That’s life. Some applications require much more accuracy than others: 99% is not good enough for an autonomous vehicle, but 60% accuracy may be all you need for a consumer app that’s recommending additional purchases.
- Bias: If your training data is biased, your application’s results will be biased. That seems obvious, but it’s easy for bias to sneak in unnoticed. Digital cameras are heavily optimized to perform well on light skin, and do poorly with dark skin, so applications like automated face recognition frequently produce incorrect results. There are fewer women in technical jobs, so AI systems trained on historical data tend not to show technical job postings to women. Researchers recently discovered that natural language processing software couldn’t process text containing African-American vernacular English, a significant problem if you’re trying to automate demographic research. And so it goes. Bias can have legal consequences, and “the AI did it” isn’t likely to play well in court.
- Retraining: It’s easy to think that training is a one-time, “set and forget” thing. It isn’t. Customers change, business conditions change, products change: any change in your environment can have an effect on your application. You may not notice at first, but its performance will gradually degrade over time. If you’re planning an AI project, you need to account for retraining; a minimal monitoring sketch follows this list.
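One lightweight way to catch that degradation is to track accuracy over a rolling window of labeled production outcomes and flag when it drops below a threshold. This is a minimal sketch; the window size, threshold, and class name are all invented for illustration:

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy in production; flag when retraining is due."""

    def __init__(self, window=500, threshold=0.90):
        self.results = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, actual):
        self.results.append(1 if prediction == actual else 0)

    def needs_retraining(self):
        if len(self.results) < self.results.maxlen:
            return False  # not enough evidence yet
        return sum(self.results) / len(self.results) < self.threshold

# In production, call monitor.record(prediction, actual) as each labeled
# outcome arrives, and schedule a retraining run when the flag trips.
monitor = DriftMonitor(window=5, threshold=0.8)  # tiny window for the demo
for prediction, actual in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    monitor.record(prediction, actual)
print(monitor.needs_retraining())  # True: rolling accuracy fell to 0.4
```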
Liability issues
AI systems have a deserved reputation for being inscrutable. They can give you results, but frequently can’t tell you why they gave a specific result. You often don’t care, but in many applications, such as medical imaging, the why is as important as the what. In some situations, inability to explain a result can be a legal liability. Developing AI systems that can explain their results is an area of ongoing research.
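One pragmatic mitigation, where the stakes demand it, is to prefer models that are inherently inspectable. As a minimal sketch (the dataset and model are scikit-learn conveniences, not a recommendation), a linear model’s learned coefficients at least show which inputs carry the most weight:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# An interpretable baseline: a linear model's coefficients show which
# inputs push a prediction toward one class or the other.
data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# Rank features by the magnitude of their learned weight.
weights = sorted(
    zip(data.feature_names, model.coef_[0]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, weight in weights[:5]:
    print(f"{name}: {weight:+.3f}")
```

A deep neural network offers no such direct readout, which is exactly the inscrutability problem.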
If the difficulty of doing AI hasn’t scared you away, you’re ready to start planning your AI project. Seriously—AI presents some big challenges. But, as Hume says, don’t let the perfect be the enemy of the good. These are all challenges that you can meet, provided you don’t start your project blind. Build a good data team, and develop good practices for working with data. Allow adequate time for training. Be aware of the problems you’ll encounter along the way. There’s nothing magic about AI—and likewise, there’s nothing magic about what can go wrong.
Getting ahead of the curve
AI is moving forward at speeds that seem almost inconceivable. Every week, if not every day, there are articles about new developments: new techniques, new hardware, new optimizations, and more. Can you afford not to get on the AI train?
You can’t. But don’t get on the train blindly; know where you’re headed before buying that first-class ticket. Treating artificial intelligence as a magic sauce that will make everything better will lead to expensive mistakes. Understand what you’re doing, why you’re doing it, and the limitations you face: both the limitations of AI itself, and the limitations of your organization.
If you do that, you’ll be prepared to take advantage of AI.