Author: Kayla Wolf, Technology Strategy Consultant, Merkle
Using AI is about more than simplifying the marketer’s day
ChatGPT, Lensa, DALL-E 2: The last few months have highlighted the advances in AI capabilities as immediate, tangible products. AI is no longer theoretical in application. It can now deliver some of marketing’s greatest desires – writing unique content, monitoring customer activity, and reducing the workloads of marketers and account staff everywhere. Before you get too far down that road, now is the time to develop rules for your team’s ethical use of AI tools in your marketing program, because while AI can engage your customers, it is a tool that could just as easily estrange them.
The most underestimated danger in AI applications is not at the enterprise level, where legal and compliance teams are already prepared to keep a close eye on company procedures and ethical and legal regulations. Rather, the danger is in the AI used by individuals within an organization, without visibility from higher-level governance, in areas such as content development, journey orchestration, and relationship management.
What Does “Bad AI Use” Look Like?
Consider an email I received last month: B2B outreach from an AI technology vendor that had flagged me as a lead. It addressed me by name, with a person-based reply-to address at the company, and requested time to speak with me about their platform. Busy with end-of-year work, I filed it away. Another email came the following week, asking me to provide times I was free to meet with the sales rep about the product. Again, I filed it without a reply, because my quick read was that it was an automated email.
The following week, a third email: “If I’m not speaking to the right person, I’d appreciate a nudge in the right direction to better use my time.”
Cue my guilt for not responding. Sure, I get hundreds of emails a day and have more work than can easily be done in one, but I know it isn’t easy to be the account owner carving out time to reach out to a lead who never responds. I reread the email, preparing to reply with an apology that I wasn’t interested at this time. That’s when I noticed the footer at the bottom of the email alerting me that this was an automated message from which I could unsubscribe.
My immediate feeling was incredulity, followed by intense frustration and annoyance. This AI vendor had intentionally misled me into believing I was causing undue strain to a real, living person by not responding! The deception was designed to elicit a guilt response so that I would reply to their email. So why did I get upset? Clearly, upsetting a potential customer was not their intention. But I was being asked to make an emotional and mental investment that the AI vendor wasn’t willing to make itself – as evidenced by enlisting automation to send me these meeting requests instead of a real account manager.
As marketers rely more on AI within pieces of the tech stack to ease their workloads and increase delivery velocity, it is critical to recognize the impact this will have on customers and prospects as recipients. AI must be given an identity separate from that of an actual human being to avoid the situation I found myself in, and it is in your best interest to let AI reduce the burden of all parties in the marketing relationship, not just the marketer.
Establish this golden rule within your company’s daily users of AI:
Treat Your Audience How You Want To Be Treated.
If AI is going to reduce the burden on a marketer, it must also reduce the burden on the customer. Applications of this golden rule include:
· Emails or direct mail written by AI can personalize copy but should not trick the recipient into thinking it is a personal communication in order to elicit a response.
· Chatbots with AI and natural language processing or generation should give customers more latitude in how they ask questions while still returning a correct response. They should not force the customer to answer a dozen questions about their chat topic before being put in contact with a human customer service representative.
· AI measurement of customer service agent performance should prioritize improving the customer experience rather than simply reducing the length of calls.
As an industry, we will likely lean on AI more in 2023 than we have previously. Use it efficiently, but keep ethics and the human touch in mind as well.
Want to learn more? Read Merkle’s Ethical AI guide here.