These days, my inboxes are overflowing with requests from marketers who want help evaluating Artificial Intelligence (AI) vendors.
I get it. I do. Many of the pitches sound solid(ish), but if you have been in Marketing for a while, there is often something suspicious that you can’t quite put your finger on. Sometimes that queasy feeling is about the vendor or the product; other times it’s just about your own lack of experience in this new world of Artificial Intelligence and Machine Learning.
So, how do you determine whether a particular vendor is right for you? Here’s what I recommend…
First, develop a short list (4-6 bullets) of what you are attempting to do. AI works best for tasks/projects that are repetitive and have a data requirement, so make sure to identify both of those elements. If you can’t, chances are you should shelve the project for the time being. It goes without saying that the task should also be something you can’t just as easily do with a spreadsheet.
You should also consider how you will measure the end result. Revenue? Leads? Cart adoption? Longer average user sessions? Increased page views per user? Something else? This will keep you focused, and it will allow you to be clear with the vendor about how you plan to measure success. Murky expectations are one of the biggest reasons many AI projects fail. If the client (you!) AND the vendor are both clear upfront, you’ll have a much better chance of success.
Next, look at who will implement and maintain this project, and how. Frankly, I used to skip this step, but I’ve learned the hard way that it’s critical. Who will onboard this project, and how much time will it take? (That’s the easy part.) What will the vendor team do, and what will the client team do, from the initial assessment through training and reporting the results? (This is also pretty cut-and-dried.) Who will maintain the software/system, and how much time will it take? (This part can get sticky.) Ongoing upkeep/maintenance and possible disruption are critical elements, and you’ll want to separate them from the onboarding process, as they require different timelines and, often, different skill sets.
I’ve finally gotten my act together, but in the past I had to pause several AI projects because we didn’t have the right resources to maintain things properly. It’s easy to get excited about adding Artificial Intelligence to the mix, but with AI projects, the ongoing maintenance often requires a different type of employee and skill set. Incidentally, if a vendor tells you an AI project that you’re working on together is set-it-and-forget-it, especially in the first year, RUN…. And RUN FAST.
Determine whether any of your existing vendors already do what your new project needs. Many of the bigger AI/ML software vendors do multiple things under the same umbrella or have sister companies you may not know about. Tacking on an extra module or package from a different division can be easier than adding an entirely new vendor, so it’s good to do a quick list check. If you ask, current vendors are also good at telling you which companies they’re most compatible with and work best with, and which ones they conflict with. Identifying what packages/systems you’d like to integrate with upfront can save lots of time down the road.
Speaking of conflicts…. When looking at the who/how part, you must review the conflicts. So many companies skip this step and then get burned. (For example, you often see this in email/personalization AI.) What other things are you doing and/or using that this new AI system will be in conflict with? If there are any, you’ll want to include the required peacemaking efforts in your project scope. Be sure to include data hierarchies and website speed/performance/accessibility when evaluating conflicts. We assign a “what could go wrong” person for every project. Our Negative Nelly helps identify all the conflicts, biases, and potential roadblocks to the project. (At first, I hated this concept, but it’s saved me oodles of time and even more money. It’s worth the extra scrutiny.)
After the initial gatekeeping exercise above, there are lots (and lots) of things you should look at. You’ve likely got a handle on the stuff like background (how long they’ve been in business, how many clients/users they have, the technology they’re using), reviews, case studies/examples, competitors, and pricing, but here are some other things that might be helpful:
Intelligence Scale: People call this all sorts of things (aptitude scale, smart score, wizard factor), but the gist is that you’re trying to rate how intelligent the package is on a scale of 0 to 7, with 0 being Not at All Intelligent and 7 being Extremely Intelligent. I don’t love this scale because intelligence is so subjective (which is one of the many reasons people struggle with the definition of Artificial Intelligence). Still, I do like to identify how much work whatever we’re buying will be and how the effort will be divided. I also find it uber-beneficial to know how much the solution will improve our current performance. Focusing on the software/product’s intelligence is a helpful way to predict that.
Plus, the marketing AI industry is still new(ish). Scoring the vendors ensures that we have taken an extra close look at how much of a dimension we think the software/product can add to our mix compared to the alternatives. Many of the best AI salespeople right now are hawking packages that a 9-year-old could build in Google Docs. #nohyperbole
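If it helps to see the scoring idea concretely, here’s a minimal sketch of a vendor scorecard in Python. It assumes you rate each vendor on the 0-to-7 scale described above across a few criteria and weight the criteria by how much they matter to your project. The criteria names, weights, and vendor ratings below are hypothetical examples, not a prescribed rubric.

```python
# Minimal vendor-scorecard sketch: rate each criterion 0-7
# (0 = Not at All Intelligent, 7 = Extremely Intelligent),
# then compute a weighted average per vendor.
# All names, weights, and ratings here are made-up examples.

def score_vendor(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Return a weighted average score (0-7) for one vendor."""
    total_weight = sum(weights.values())
    weighted = sum(ratings[criterion] * weights[criterion] for criterion in weights)
    return weighted / total_weight

# Hypothetical criteria, weighted by importance to this project.
weights = {"intelligence": 0.5, "data_handling": 0.3, "visibility": 0.2}

# Two hypothetical vendors rated on the same criteria.
vendor_a = {"intelligence": 6, "data_handling": 4, "visibility": 5}
vendor_b = {"intelligence": 3, "data_handling": 7, "visibility": 6}

print(round(score_vendor(vendor_a, weights), 2))  # 5.2
print(round(score_vendor(vendor_b, weights), 2))  # 4.8
```

Even a rough rubric like this forces the extra-close look the exercise is meant to produce: you have to write down what matters, how much, and how each vendor actually stacks up.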
The Data: What type(s) of data does the vendor/software use and/or support — zero-, first-, second-, or third-party? (Everyone says they can do this, but you need to be able to segment it, expire it, and so on.) Are there any list size requirements? What does the onboarding process look like? How is the data updated? How is it modeled/scored? What is the level of accuracy, and how can you impact it? What percentage of the data is suppressed and not used on average? How should the data be prioritized? What’s the best way to account for any known biases? What do you need to do with your data to get the biggest bang for your buck, initially and on an ongoing basis? (This is a BIG question, but it’s worth asking the vendor’s technical team members.) What are some common mistakes that occur with new setups? Are there any ongoing issues that should be addressed upfront? What are the most significant limitations of the current data structure and use? How will they evolve?
When you talk to the vendors about all things data, you’ll want to know about your data AND their data. If you’re new to AI projects and/or using outside AI-enabled solutions, this may seem weird, but it’s essential. You’ll want to know about their training and feeding data. Where did they get their training data? What sources? When? What biases did it have? What did they do to account for them? On an ongoing basis, what data are the models fed? Just your data? Other client data? A combination? Are they using supplemental data? Do they use any scheduled disruption data? How will their data impact you? (If the vendor can’t answer this question, they’re probably not the best choice for you.)
The Foundation (aka The Base): How does the system learn? How does it change? How does it get better? Where does it get worse? When you read the results and want to act on them, what’s the disruption process? What improvements can you expect the system to show within the first three months? What about six months? 12 months? Two years? Are disruptions and improvements done in scheduled batches or dripped slowly over a more extended time period? How much can you expect the overall system/software to improve, and at what intervals, organically and by vendor upgrades? What can be done to help the system best learn? When you’re reviewing the Foundation qualities, I’d also recommend listing what part(s) of Artificial Intelligence the system/software use(s) – NLP (natural language processing), NLG (natural language generation), ML (machine learning), Computer Vision, Deep Learning, etc. These days anyone who uses an if/then statement seems to think they’re an AI pioneer, so it’s good to know exactly how deep their expertise/experience is.
The Technology: What are the biggest differences between what the software offers and what you do now? This is an important topic to address because sometimes the technology just isn’t there to give you big improvements in the first year. A lot of your initial success is dependent on your data. If your data is garbage, no amount of AI will fix that. Be sure to find out the minimally viable product output you’ll get, so you can compare it to what you currently have. For many marketers, it’s also helpful to determine whether things will materially improve over x months/years. Something may sound great in theory, but in practice, the juice may not be worth the squeeze.
You’ll also want to know what limitations the current technology has and how the vendor plans to address them. What enhancements do they plan to add in the future, and when? What is their end game? (This loosely translates to how fast they’re trying to sell or go public, as their internal hullaballoo can have a direct impact on your experience.)
Before you work with any Marketing AI vendor, develop an outline of what you’re looking to get out of the product/solution, whether it be revenue, leads, performance, or something else altogether. Then, when you talk to the Tech folks, talk to them specifically about how their solution can help you now and in the future.
Training and Onboarding: What is the onboarding process? Is there a special onboarding team assigned to you? Is it the same team you will work with on a day-to-day basis once you’re up and running? How much of your training will be self-service through videos and documentation? How long does the onboarding process take? What type of team members do they expect to be involved, and when? Don’t sleep on digging into how this is all going to work. Several of the most well-known AI vendors aggressively sell their solutions as plug-and-play. Then, when you’re in the onboarding process, you realize you’re way out of your depth and need to bring in your already overburdened and overbooked Tech folks. Knowing what’s going to happen, with whom, and when, makes everything easier.
The Visibility: What level of visibility does the software have? How will you know how the system made a particular decision? Will you know what data influenced the decision? Do you have insight into how the decision could have been different? Are the significance/confidence levels available to you? At what point does the system move from test to control? As a warning, many vendors don’t love being queried about visibility, nor will all of them answer honestly or with complete transparency. It’s also one of the most important things you can ask an outside vendor about. Please don’t let them bury the question under the “bias” category. Visibility and access to the decision-making process are different from why the data and process can be biased. The level of visibility may be murky (many advanced models can’t tell you with 100% certainty why something was decided). The key here is to know what you’re getting, what control you have over it, and what you don’t.
The Client List: Speaking of controversial things to ask your AI vendor, I’ve learned (the very) hard way that it’s critical to ask what other clients they work with that are competitive to you and/or selling to a similar market. This doesn’t mean you shouldn’t work with vendors who work with other companies selling to your target audience. It means you should know the situation and how it may impact you. Salespeople fight me on this one a lot, saying that it’s a strength, not a weakness, that they work with many competitors. There’s no doubt that it can be a benefit, but it can also be a company-crusher, depending on what AI-product you are using, what you’re using said product for, how sophisticated the “intelligence” actually is, and what’s important to you. If you’re using an AI-enabled product to help build your gift business traffic and the vendor has (insert whatever number here) companies also selling into the gift market and they’re using the same logic for all of you, it may negatively impact your outcome, particularly in your organics and Shopping ads. (Please remember, in our new one-ask, one-answer world, you want to be Position Zero, not 4, 7, or 11.) Again, you can overcome this hurdle, and it’s best if you know what you’re getting yourself into beforehand.
The Team: With many (almost all?) of the AI vendors in the space right now, you’ll find that they have one or two AI geniuses the package is built around; a second, bigger group of tech people who implement the geniuses’ vision; and then a bunch of people who have been trained on the lingo but have minimal deep experience. Usually, the latter are your day-to-day contacts. (The geniuses are dragged out of their caves for sales pitches and speeches.) It’s not a bad system per se, but it’s still good to be clear on how it works and how, and to whom, you have what levels of access.
Privacy, Security, and Compliance: This topic is vast and can get extremely complicated quickly. It would be irresponsible to skip it, but it’s also somewhat of a moving target these days. I’m not a lawyer (and I don’t play one on the internet), but one of the ways I help is by being crystal clear about our expectations: what we’re okay with and what we’re not, how we will allow our data to be used and how we won’t. (This is absolutely critical.)
I’ve found that if you have a good place to start and are very upfront about your company’s position, vendors can tell you where they differ pretty quickly, which can save a lot of rigamarole. It’s important to note that many vendors are not transparent about their data collection practices, how they use your data, or how they combine your data with others’, which can negatively impact you if you haven’t prepared for it. (The pooling or combining your data thing I just mentioned? Get the vendor’s specific practices in writing. 100%. Don’t skip this.)
Many “AI experts” recommend you review the vendor’s Ethics statements. In my experience, these aren’t always the most accurate, so although we review them, we also give them our statements and ask them for some sort of acknowledgment or sign-off. The formality of this varies by the vendor and what they will do for us.
There are some amazing vendors in this space, and because it’s still pre-Gold-Rush, there are also lots of snake oil salesmen and shady hucksters. Please be careful about your data and your ownership of it. Your data is your most important asset. Full stop.
Free/Extended Trial: Does the vendor offer a free trial? Can you give them a test sample of your data (for free or for a small fee) to see what they can do with it? If not, what kind of guarantees do they have? You’ll likely know whether they’re a good fit for you in 90 days; 30 may not be enough. As an aside, we push hard for trials and data reviews. Some companies are very weak at handling legacy data, especially that of multi-channel direct marketers.
Miscellaneous: We ask for comprehensive product demos and/or access to a playground site. If we call for references, we insist on getting at least one past client that no longer does business with them, and we typically make that call first. We also ensure that the team we’ll get for implementation/ongoing work is identified (and approved) before we sign any contracts. Currently, a lot of this software is practically identical, and the only thing that separates the vendors is the people you’ll get to help you on a day-to-day basis. (Again, some of the companies in the AI/ML space with the absolute best salespeople are the weakest vendors, so please don’t be fooled by the fancy pitches and large expense accounts.)
Have questions about choosing a vendor for your Marketing AI projects? Have tips you’d like to share? Tweet @amyafrica or write info@eightbyeight.com.