Legal implications Archives - Schaefer Marketing Solutions: We Help Businesses {grow}

A step-by-step approach to AI adoption for your company
https://businessesgrow.com/2026/02/16/ai-adoption/ (Mon, 16 Feb 2026)

AI adoption isn't about learning prompts or proving an ROI. You have to get your people on board, and this post teaches you how to do that.

The post A step-by-step approach to AI adoption for your company appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


Most AI initiatives don’t fail because of bad models or weak vendors. They fail because people quietly opt out — by ignoring the tools, undermining the effort, or waiting it out. This post teaches you how to prevent that.

Almost every company makes the mistake of thinking that AI adoption is about investing in technology. That’s the easy part. You can make technology do whatever you want. But you can’t make people do whatever you want. In fact, most humans resist change. The focus must be on people, first and foremost.

I have a master’s degree in organizational development and led technology change efforts at a Fortune 100 company for nearly a decade. Here are lessons I learned from the (many) bumps I’ve had along the way.

The big assumption

This post is not about creating a business case for AI. It is meant to help you AFTER your leadership team is on board, the strategy is in place, and the money and resources are approved.

A Wharton study concluded that three-quarters of businesses are getting a positive return on their AI investments. New technologies typically take decades to deploy successfully, so progress after just three years is striking. As AI continues to improve and workers become more adept at collaborating with machines, the gains will compound. Over a billion people use generative AI models every month. Not all uses are productive, but many will be.

The key is getting people to use it.

Let’s get those people moving …

1. There’s no such thing as a grassroots AI adoption effort

If you’re trying to enable a profound technological change in your company, it won’t happen just because you want it to. This project must be understood and actively supported by the senior executive who owns the AI adoption strategy and budget.

This is non-negotiable.

Every technology adoption effort comes with frustrations, delays, and problems. You must be able to turn to a high-ranking person for support when the sh*t hits the fan. This is your “air cover.”

In a small company, this sponsor/protector may be the owner. Or, it could be a department head in a large company. But the person at the top must buy in because this is not simply an investment — it’s a cultural change. And only the leader at the top can influence culture.

2. Show active sponsorship

Once your leadership is on board, they need to show up and let people know this is a critical business effort in three ways:

  1. Make AI adoption part of annual goals tied to bonuses and compensation.
  2. Ask questions about progress and adoption in every staff meeting. One business owner asks anyone who comes to him with a problem whether they’ve tried using AI to solve it first. Using AI as a default has now become part of the company culture.
  3. Repeatedly emphasize why this is important to the business. In my corporate days, we used to have a saying that an executive had to hear something seven times before it sank in.

3. Don’t name it

Don’t make AI adoption a “project” with a name.

If your effort has a name like “AI Future,” it becomes a target for derision. A project with a name makes people think it is a short-term management fantasy that will eventually go away.

When manufacturing locations first introduced electricity to the workplace, they didn’t call it “Operation Lights On.” They just did it because it moved them into the future.

4. Assign an SPA

AI adoption is a team sport.

And like any team sport, progress breaks down when everyone’s chasing the ball, but no one knows their position. But when positions are clear, people stop guessing, and they know how AI fits into their work and how their work fits into the larger system.

Coordination is what turns AI from a collection of half-used, misused, or abandoned projects into something that actually works and makes a difference.

And that requires a manager. Every change management effort must have a single point of accountability (SPA). This is the person who lives and breathes this effort every day. Their career depends on success.

Back when social media was taking off, a common mistake was assigning “Jimmy from the mailroom” to lead the effort because he was the only person on Facebook. Of course, that was a recipe for disaster.

The ideal SPA is somebody who deserves more responsibility, is trusted, and is ready for a new role. They will be motivated to succeed because they know a promotion is likely next.

I find that 90% of the time, a change effort fails because there was no SPA.

5. Acknowledge the fear

Bringing AI into an organization can cause real fear among employees. That fear might stem from:

  • Job displacement anxiety
  • Fear of looking incompetent
  • Loss of control or expertise
  • Ethical unease that they don’t know how to articulate

Before you label someone as “anti-AI,” ask what they’re protecting. In my experience, resistance is almost always about fear of irrelevance, exposure, or loss of identity.

Don’t try to erase the fear — legitimize it. Be firm about the direction and acknowledge the unknowns: “Some of you are right to be concerned. AI will change roles. Some tasks will disappear. Some skills will matter less.”

This signals honesty, builds trust, and removes the taboo around saying the quiet part out loud.

Once fear is spoken, it loses some of its power.

6. Middle managers are your make-or-break layer

If you’re in a larger company, the middle managers are your key to success. Middle managers:

  • Control day-to-day workflows
  • Translate strategy into behavior
  • Set the emotional tone toward a change effort
  • Can quietly kill adoption by deprioritizing it

These are your internal influencers who can either propel or torpedo AI adoption. To keep them on board,

  • Train them first
  • Give them scripts, not slogans
  • Explicitly remove old KPIs that conflict with AI experimentation
  • Reward their advocacy and progress

7. Start with the willing

Chances are, there will be people on the team excited about AI and ready to lead. Give them an opportunity to shine.

  • Identify early adopters who are already curious/enthusiastic
  • Let them pilot and become your internal champions
  • Use their success stories to build momentum before expanding to skeptics
  • Don’t waste early energy trying to convert the resistant — let peer proof do that work for you

Of course, some people will not get on board, so you must …

8. Address obstinacy immediately

There will be resistance. That’s natural. But when a person is a flat-out obstacle to progress, address it immediately. Actively working against a change effort can become an organizational cancer.

If the resistance isn’t something you can address yourself, defer to the power of your sponsor with something like, “I’m sorry you are anti-AI and against this effort. This is a priority to our boss, who is sponsoring this, so let’s bring it up with her.” (Refer to point one of this post!)

The most effective change effort I’ve ever been part of accelerated to light speed when the CEO fired a vice president who was blocking the change. It was a thunderbolt that said, “Failure is not an option. Get on board.”

9. Create rational metrics

Here is a piece of advice that might seem controversial.

At least for the first year or two, measure adoption instead of ROI. My thinking goes like this:

AI is transformational, like lightbulbs or air conditioning. Is anybody in Dubai trying to measure the ROI of air conditioning? No, because it enables just about every success in that desert city.

If no one adopts AI, you’ll never see an ROI, right?

Potential metrics might include:

  • % of employees who use AI weekly
  • % of workflows with AI touchpoints
  • Self-reported confidence scores over time
  • Number of AI-assisted decisions vs. manual
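As a rough illustration of how the first metric could be tracked (the data shape here is hypothetical, just a list of usage-log events), computing weekly adoption from a tool log is a few lines of code:

```python
def weekly_adoption_rate(usage_log, all_employees):
    """Share of employees with at least one AI interaction this week.

    usage_log: list of (employee_id, tool) events recorded during the week.
    all_employees: iterable of every employee id in scope.
    """
    active = {employee for employee, _tool in usage_log}
    return len(active) / len(set(all_employees))

# Toy data: 3 of 4 employees touched an AI tool this week.
log = [("ana", "chat"), ("ben", "chat"), ("ana", "code"), ("cy", "chat")]
print(f"{weekly_adoption_rate(log, ['ana', 'ben', 'cy', 'dee']):.0%}")  # → 75%
```

The same pattern extends to the other metrics: count workflows or decisions instead of employees, and trend the ratio over time rather than chasing a hard ROI number in year one.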

10. Build in quick wins

In the early days of a change effort, it’s important to create momentum and positive vibes. And nothing does that better than a positive story.

If employees are talking about their AI victories and breakthroughs, quickly record a video and share it with the leadership team. Set modest adoption goals that will spark positive conversations when exceeded.

And most important, when you reach milestones and achievements, don’t sit on them. Communicate, communicate, communicate.

It’s also important to protect early experiments and failures and share “this didn’t work, here’s why” stories. I have a friend at Dell who meets with each sales leader quarterly to report on AI experiments, even if they didn’t work. This builds psychological safety, which is essential for behavior change.

AI adoption isn’t a technology rollout. It’s a leadership test. The companies that win won’t be the ones with the smartest models but the ones that helped their people cross the bridge from fear to fluency. I hope this post helps you think through your success factors.

Need an inspiring keynote speaker? Mark Schaefer is the most trusted voice in marketing. Your conference guests will buzz about his insights long after your event! Mark is the author of some of the world’s bestselling marketing books, a college educator, and an advisor to many of the world’s largest brands. Contact Mark to have him bring a fun, meaningful, and memorable presentation to your company event or conference.

Follow Mark on Twitter, LinkedIn, YouTube, and Instagram


Is it time to embrace ethically-sourced marketing?
https://businessesgrow.com/2025/12/01/ethically-sourced-marketing/ (Mon, 01 Dec 2025)

Marketing is a wonderful career that changes the world in positive ways. But indirectly, it is contributing to some of the world's biggest problems. It's time to start a conversation about ethically-sourced marketing.

The post Is it time to embrace ethically-sourced marketing? appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


Every ad dollar we spend fuels algorithms we know are harming people, chewing up the environment, and stoking hate between neighbors.

I must face the fact that my beloved field of marketing contributes to some of society’s biggest problems.

It pains me to write about this. I mean, I’m part of the problem, too. But it’s time to start this conversation because the traditional marketing approach is at a breaking point.

  • AI-driven amplification of addiction
  • Deepfakes, misinformation, and the decline of trust
  • Easy AI content that drives up energy consumption
  • U.S. Surgeon General’s warnings on youth mental health and social media

We need to consider what it means to lead and sponsor ethically-sourced marketing.

Let’s break this problem down into four categories today:

  • ADDICTION
  • DIVISION
  • ENERGY / ENVIRONMENT
  • OPERATING WITH VALUES 

1. Addiction

Back in my corporate days, I dreamed of creating a product or service so great that people would be addicted to it. I remember saying those words out loud.

Before the internet, the chance of doing that was slim, especially in B2B. We didn’t have the repetitive internet memes, challenges, or reels that could drive people down a rabbit hole.

But today, marketers fund a system where attention is literally the product being sold. And it’s working exactly as designed.

Here’s the basic math nobody wants to talk about. Engagement equals money. Five billion people spending over two hours a day on these platforms? That’s not accidental. That’s the entire business model. Every scroll, every like, every second you spend staring at your screen — that’s a data point being harvested to sell more targeted ads.

The platforms use artificial intelligence to analyze your emotions, habits, and vulnerabilities. They’re predicting human behavior at scale.

But here’s where it gets really interesting, and honestly, a bit sinister. The designers of these platforms have deliberately borrowed from the playbook of slot machines and casinos. Infinite scroll. Autoplay. Those little notifications that pop up right when you’re about to put the phone down? They’re triggering the same reward circuits that gambling does.

It’s the variable reward schedule that behavioral psychologists have understood for decades, now deployed across billions of devices.
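To make the mechanism concrete, here is a toy simulation (purely my own sketch, not anything from a real platform's code) of a variable-ratio schedule: each action pays off with a small fixed probability, so rewards arrive unpredictably but average one per `mean_ratio` actions, exactly the pattern a slot machine uses:

```python
import random

def variable_ratio_rewards(actions, mean_ratio=5, seed=42):
    """Simulate a variable-ratio reward schedule.

    Each action (a scroll, a post, a refresh) pays off with probability
    1/mean_ratio, so the NEXT reward is never predictable even though the
    long-run average is one reward per mean_ratio actions.
    """
    rng = random.Random(seed)  # fixed seed keeps the demo reproducible
    return [rng.random() < 1 / mean_ratio for _ in range(actions)]

hits = variable_ratio_rewards(1000)
print(f"{sum(hits)} rewards in 1000 actions")  # roughly 200, scattered at random
```

The psychological point is in that unpredictability: because any given action might be the one that pays off, the behavior is far harder to extinguish than with a fixed, predictable reward.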

Think about the “like” button. It’s a dopamine delivery system. You post something, and you get that little hit of validation when people engage. So you post again. And again. The platform has essentially weaponized human psychology for engagement.

How many of you optimize likes and engagement as an essential part of your career success?

It gets worse. Younger brains are exponentially more susceptible to this stuff because they’re still developing the neurological circuits for impulse control and delayed gratification. U.S. children generate more than $11 billion in advertising revenue for major social media platforms.

Let that sink in. $11 billion extracted from the psychological vulnerabilities of kids who don’t yet have the brain development to resist these systems.

The platforms give lip service to parental controls and safeguards, but they don’t care.

Your marketing dollars fuel the addiction machine. Digital ad dollars are hurting children.

Addiction is the foundation, but the consequences don’t stop at endless scrolling. They spill into something darker.

2. Division

In the social media world we all love, hate is good for business.

A Wall Street Journal investigative report revealed that Facebook knew that its core social media product makes the world more toxic and divided.

“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from an internal presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

One example: 64 percent of the growth in online extremist groups was fueled by Facebook’s own recommendation algorithms!

The company assigned a high-level team to develop a plan to combat this issue … and they did. But then Mark Zuckerberg shelved the basic research and blocked efforts to apply its conclusions to Facebook products. In fact, the Facebook leader has publicly denied his company’s findings and recommendations.

Why?

An internal report said that moderating hate was anti-growth.

That makes me sick. When hate becomes a growth strategy, every advertiser becomes a silent financier of dysfunction.

While the emotional toll of division is staggering, the physical toll on the planet is just beginning to surface.

3. Energy and Environmental Impact

Last year, I was honored to be a keynote speaker at the Belgian Association of Marketing’s annual conference, a first-class event. It was there that I met Dr. Victoria Hurth. She introduced the audience to a new way of looking at marketing and its impact on the environment. I felt ashamed that I had never really considered these realities.

Victoria Hurth

Marketing, she said, is the engine of demand. That’s our superpower. And it’s also part of the environmental problem.

When we stimulate desire, we stimulate production, shipping, packaging, and, too often, waste. The question isn’t whether marketing affects the environment. It’s whether we’re willing to measure it.

Even “digital” isn’t clean.

Programmatic ads ride on massive server networks that consume real energy. An industry analysis by Scope3 shows the carbon cost of every ad impression — grams of CO2 tied directly to the ads we place. One publisher cut its emissions 70% with smarter supply-path decisions, with no revenue loss.
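The arithmetic behind this kind of analysis is simple. As a sketch (the 0.5 g emission factor below is a placeholder, not a real measurement), emissions scale linearly with impressions, which is why supply-path optimization translates directly into tonnage:

```python
def campaign_emissions_kg(impressions, grams_co2e_per_impression):
    """Total campaign emissions in kilograms of CO2-equivalent."""
    return impressions * grams_co2e_per_impression / 1000

def gco2e_per_mille(grams_co2e_per_impression):
    """The common reporting unit: grams of CO2e per 1,000 impressions."""
    return grams_co2e_per_impression * 1000

# Hypothetical factor: 0.5 g CO2e per impression across the supply path.
before = campaign_emissions_kg(10_000_000, 0.5)        # 5,000 kg for 10M impressions
after = campaign_emissions_kg(10_000_000, 0.5 * 0.3)   # after a 70% supply-path cut
print(before, after)  # → 5000.0 1500.0
```

Measured this way, a media plan gets a carbon line item next to its cost line item, which is the transparency that changes behavior.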

E-commerce? It helps when it consolidates freight … until fast shipping and high return rates obliterate any benefit. U.S. product returns alone generated 24 million metric tons of CO2 last year and sent billions of pounds of goods to landfills.

Even our content diet carries a carbon footprint. Streaming and online video now account for an estimated 3–4 percent of global emissions. “Virtual” isn’t virtual. It’s powered by real data centers, real devices, real infrastructure.

And then there’s AI.

OpenAI’s planned chip network may consume 250 gigawatts of power by 2033. That’s one-fifth of America’s total electric generation capacity today. If OpenAI were a country, it would be the seventh-largest electricity producer on the planet. Energy prices are already rising nationwide, as is the environmental impact.

So yes, even creativity now carries a carbon cost.

Dr. Hurth argues that businesses must prioritize human sustainability over profits. It sounds idealistic — until you realize the alternative.

We’re not just creating demand. We’re creating emissions.

4. Operating with values

In the early days of web marketing, I attended a presentation by an SEO “pioneer.” He had hired home-bound disabled people to pose as online commenters in an effort to impact his customers’ search results.

When it came time for the Q&A, I asked, “How do you live with yourself? This is so unethical!”

He responded, “It works. And if I didn’t do it, somebody else would.”

Too often, marketers opt for “what works” and turn a blind eye to the holistic impact of their actions on the world and our customers. A brand strategist is a role in which you are effectively a cosmetic surgeon for capital.

While hiring people to fake our content seems extreme, aren’t we doing the same thing today with AI? Half the comments left on my content are AI-generated fakes.

I learned at a recent meeting that 85% of companies use AI to generate content and that, on average, their content output has increased by 45%.

To what end? To replace humans? To add to the barrage of noise we must endure to find truth? To consume vast amounts of energy and clean water to generate AI slop?

Can we keep one eye on the bottom line and one on our moral compass? If we don’t reclaim the soul of our work, the machines will do it for us.

What do we do about it?

First, let me emphasize that I’m proud to be a marketer. The marketer is the creator, the innovator, the front line of our business. We can be the beacon, shining a light on the good and the worthy.

Throughout history, advertising and marketing have played a role in positive societal change and in creating demand for life-changing products.

Second, the weight of these problems does not necessarily fall solely on us. We’re expected to work in a deeply flawed social media / digital environment beyond our control. Any real change would require complex systemic changes.

So what’s the point of this post?

I’m willing to bet every person reading this has had pain in their heart over the online safety of our children, the impact of global warming, and the divisions that are tearing countries and families apart.

Am I suggesting that we sell less? Quit digital advertising? Abandon profitability?

No. But at a minimum, we need to open this conversation and re-frame the marketing profession in a more holistic context. Any change begins with awareness.

What if marketing became the world’s most powerful engine for human flourishing instead of manipulation? What if innovation, storytelling, and creativity were measured not just by impressions but by the impact we have on the people we serve?

I don’t have the answers. But here are a few ideas I picked up from Dr. Hurth and others.

Reframe success.

Replace metrics like engagement and impressions with impact: well-being, trust, sustainability, and authentic connection. Isn’t this why we love the Patagonia brand? It can be done.

Track “advertised emissions,” addiction time, and content energy use alongside ROI. Transparency changes behavior. Above, I cited the Scope3 research. One publisher cut average CO2 per thousand impressions by about 70% through supply-path optimization, with no revenue loss.

Design for restraint.

Use creativity to promote durability, repair, and reuse. Ask: “Does this campaign help or harm long-term human flourishing?” Re-use is a significant priority for Gen Z shoppers. A positive trend!

Invest in ethical tech.

Support platforms and partners committed to transparency, safety, and carbon-neutral operations. The energy efficiency of most technologies (especially AI) is increasing at a breathtaking rate. Are you aware of the relative energy use of your tech stack?

Lead with humanity.

Make ethics a competitive advantage. Reward teams for doing the right thing, not just the fastest or cheapest.

“Ethically Sourced Marketing” is a new idea. Corporate culture doesn’t change without a leader who makes this a priority. If this idea catches on, it will likely be because one person embraces the change and sets an example.

Dramatic change is possible

Here’s a point of inspiration.

Madewell, the American clothing retailer, is working to eliminate plastics, aiming to have 100% of its packaging be sustainably sourced and free of virgin plastic by the end of this year. The brand is also reducing plastic in its products by increasing its use of sustainably sourced fibers and recycled materials, such as recycled insulation and recycled nylon, and is committed to achieving carbon neutrality by 2030.

I read that the CEO is even trying to eliminate plastic pens in their offices.

Can you imagine how difficult it would be to eliminate all plastic in your company? But one leader is driving this change, shaping a company culture that makes a difference on a vast scale.

If one company can eliminate plastic, I have hope that somebody out there can eliminate marketing and advertising that contribute to hate, polarization, addiction, and waste.


There has never been a better time to re-evaluate what we do and how we do it.

If positive change seems unattainable, here’s a good place to start: If you are directly or indirectly doing things that people hate, STOP IT.

Double down on what people love. Trust. Transparency. Humanity. Community. Ethics. A responsible, measurable environmental impact.

Eugene Healey wrote:

“We have to fight under the contradictions of capitalism. That’s non-negotiable. But we should still get to do so by creating beautiful things. In that, we can find meaning.

“If you’re a marketer, make things you believe should exist. If you’re a senior marketer, make the case for the existence of beautiful things. Look at your brand advertising, your out-of-home, hell, even your performance ads, and ask yourself: does this make some meaningful contribution to public space, or at the very least not deplete it?”

The Most Human Company Wins. Keep fighting the good fight.

Help me start this conversation by sharing this post with your marketing and advertising friends. Thank you.


Illustration courtesy MidJourney


The Marketing Companion Podcast: Beginning of a New Era
https://businessesgrow.com/2025/11/19/marketing-companion-podcast/ (Wed, 19 Nov 2025)

In this special show, Mark Schaefer makes an announcement about the future of The Marketing Companion podcast. Co-host Sandy Carter reveals three big ideas marketers should be leaning into.

The post The Marketing Companion Podcast: Beginning of a New Era appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


I made a significant announcement on my new podcast episode, show number 328 of The Marketing Companion.

In this 13th year of the program, I’m stepping down and handing the reins to a new owner. You can listen to the episode for the details. I’m not going away quite yet, but beginning in January 2026, there will be a new owner and show host.

Having a podcast that has lasted more than a decade — and I’ve never missed an episode — certainly beats the odds. More than 2 million downloads later, I’m moving on to new projects.

I’m not one to dwell on the past, and this show is no exception as I plow forward on a discussion of key tech considerations for marketing with my friend Sandy Carter.

You can enjoy this show and hear my announcement by clicking here:

Listen to Episode 328 of The Marketing Companion

Here is an AI-generated summary of the show highlights:

The Nvidia Deepfake: A Cautionary Tale for Brands

Something jaw-dropping happened during Nvidia’s big corporate event. I hopped on LinkedIn and saw the video of Jensen Huang, Nvidia’s CEO, who always delivers inspiring talks. But, to my shock, the replayed video had more views than the actual livestream — and it turned out to be a fake.

This wasn’t just a prank. Thousands (including some Nvidia employees and even CNBC) tuned in, believing it was Huang, only to discover it was an AI-crafted forgery pushing a crypto scam. Even veteran marketers like Sandy and me were fooled, clicking legitimate-looking links that led to the fake event.

What’s really unsettling is the precision and organization behind this attack. This wasn’t a lone hacker; it was an orchestrated crime with marketing-level sophistication. They timed the fake stream perfectly, hijacked search and social placements, and created something so convincing that even close colleagues were swindled.

Here’s the big lesson: authenticity in branding now demands proof. We’ve crossed into an era where merely sounding or looking authentic isn’t enough — brands must invest in new forms of verification.

And here’s the kicker: platforms have the technology to detect and verify truth, but won’t use it. Polarization, outrage, and viral fakes drive more views and, unfortunately, more ad revenue.

Are You Ready for Humanoid Robots?

That’s only half the future. The other revolution speeding toward us is the age of humanoid robots — not just as factory workers or distant sci-fi dreams, but as customer-facing agents.

We’re already seeing this in places like Korea and Japan, where robots are stepping in to care for the elderly or providing personalized services. In Silicon Valley, there’s already a humanoid robot in beta that will deliver pizza, serve you at dinner, pour drinks, and even clean up afterward. That sounds like an upgrade to my hosting skills! However, it has profound implications for marketing.

The robot selects the brand of soda. The robot chooses which cleaning product to use. Suddenly, Coke, Pepsi, P&G — their customer might not be the humans in the household, but the robot company or its AI!

And what about architectural design? If your home can’t accommodate the robot’s width, maybe it’s time for a renovation. Marketers must start thinking about scenarios that were pure fantasy just a few years ago.

More than that, physical AI opens the door for a whole new specialty: “robotic trainers.” Soon enough, marketing educators and consultants might be training robots (not humans!) on how to greet guests in a restaurant or care for patients.

Speed Becomes the Ultimate Advantage

One theme kept coming up again and again in the discussion: speed. AI is compressing the time between idea and impact. We used to run A/B tests for months; today, that luxury is gone. Real-time analysis, constant adaptation — this is survival now.

Some businesses, like those in Dubai, aren’t just keeping up; they’re redesigning their cities for the age of AI and global branding. Dubai has a CEO for the city, not a traditional mayor, and they’re combining storytelling, authenticity, and technology to build global icons like Dubai Chocolate. Makes me realize how far traditional campaigns and approval cycles must evolve.

Management consultants and big agencies like McKinsey are facing tough choices as their data-driven cultures collide with the urgent need for rapid experimentation. Smaller brands and startups get it faster — but larger organizations must shift, too.

I’ve never been this excited — or nervous — about what’s next. If you want to keep up, embrace the uncertainty, stay endlessly curious, and get comfortable with the uncomfortable.


Please support our sponsors, who make this fantastic episode possible.

This episode is brought to you by Brevo (formerly Sendinblue). Brevo gives you the tools to attract, engage, and nurture customer relationships.

Now, any business can build automated customer experiences, email marketing workflows, and landing pages that guide your customers to your main message. We are here to support businesses successfully navigating their digital presence to strengthen their customer relationships.

Go to https://www.brevo.com/marketingcompanion to sign up for Brevo for free and use the code COMPANION to save 50% on your first three months of Brevo’s Starter & Business plan!

A recent Semrush study projects that AI search traffic will surpass traditional search by 2028. That makes now the time to prepare your brand for the future of search.

With Semrush AI Search tools, you will lead this transition.

  • Track your AI visibility score: See a single, clear benchmark of your share of voice across AI search platforms.
  • Identify AI mention opportunities: Uncover sources where your competitors are cited—but you’re not—including social media, forums, and more.
  • Benchmark against competitors: Find the exact prompts, mentions, and sources where your competitors appear in AI responses and you don’t.
  • Discover trending prompts: Spot the real questions your audience is asking AI platforms—and build content around them.
  • Shape your brand narrative: Monitor the sentiment and context tied to your AI mentions, and make sure your brand is being represented the way you want.



Image courtesy MidJourney


Rage Farms: The Hidden Industry Weaponizing Outrage Against Brands
https://businessesgrow.com/2025/10/29/rage-farms/ (Wed, 29 Oct 2025)

Coordinated, anonymous attacks can come for any company or individual these days. What is behind the Rage Farms that attacked Cracker Barrel and other brands? Who is doing it, and why?

The post Rage Farms: The Hidden Industry Weaponizing Outrage Against Brands appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

rage farms

There has been a flurry of new evidence emerging about mysterious Rage Farms and their relentless attacks on politicians, businesses, brands, and individuals.

The Cracker Barrel example was just the most recent meltdown. Companies like Microsoft, Amazon, Boeing, McDonald’s, TD Bank, and American Eagle have suffered withering attacks from legions of coordinated, fake social media accounts.

“Disinformation-as-a-Service” has become a profitable, global criminal enterprise: low-cost, high-impact bot networks hired to attack and destroy businesses and individuals … like you. And the social media platforms that could stop them won’t, because chaos is profitable.

Propelled by AI, these strikes are targeting brands big and small. And the financial consequences are real — sliding stock prices, damaged brand equity, ruined careers.

There has been a lot of online chatter about the anonymous AI agents wreaking this havoc, but I wanted to know more. WHO is doing this? WHY are they doing it?

I’m alarmed that any of us can be attacked by these anonymous criminals. So I went down the rabbit hole to find out who’s behind this … and what we can do about it.

Today I will cover:

  • How these bots attack controversial issues at blinding speed
  • The evidence that these are coordinated attacks 
  • How AI bots “prepare” for their next fight
  • How momentum from fake bots enters the culture and becomes amplified by real people
  • The probable goals of Rage Farms, including financial gains from stock market manipulation
  • Why Rage Farm controversies are disconnected from true consumer sentiment
  • Expert views on preparing for a Rage Farm attack

A clue: The speed of attack

The first clue that we’re observing sophisticated, coordinated efforts at Cracker Barrel and other brands is the speed of the online attacks. Once a small amount of negative sentiment circulates about a brand, the disinformation ramps up immediately and relentlessly.

According to The Wall Street Journal, AI-powered bots rapidly spin up “grassroots-looking” campaigns around incendiary or divisive issues (like culture-war topics), and keep them trending.

Fake bots authored 44.5% of X (Twitter) posts mentioning Cracker Barrel in the 24 hours after the new logo gained attention on Aug. 20, 2025. That number rose to 49% among posts calling for a boycott.

Within a few hours, X saw around 400 negative Cracker Barrel posts per minute. Seventy percent of the accounts promoting boycotts at that point used duplicate messages, a key marker of coordinated bots, said Molly Dwyer, director of insights at PeakMetrics.

Rage Farms: The business of creating chaos

A Cyabra investigation revealed more specifics about the coordinated Cracker Barrel attack. By analyzing thousands of profiles engaged in the conversation, Cyabra mapped inauthentic behavior patterns and exposed a coordinated strategy.

The data show a substantial portion of the negative discourse was manufactured by fake accounts working to amplify hostility, promote boycott narratives, and undermine public trust.

  • Multiple reports found that about 35% of online activity criticizing Cracker Barrel was driven by fake accounts, with at least two organized bot groups fueling much of the outrage.
  • Fake profiles created hundreds of posts and comments specifically crafted to damage Cracker Barrel’s reputation, and the manufactured campaign had nearly 5 million potential views.
  • These fake profiles also triggered 3,268 direct engagements from genuine profiles. This is important because when real people engage with fake information, it gives fake posts a powerful boost on the X algorithm.

Fake profiles pushed hashtags like #BoycottCrackerBarrel and #CrackerBarrelHasFallen, creating the impression of a massive consumer revolt … that was not happening in real life.

The attack momentum

These accounts made exaggerated claims about an imminent financial collapse, often stating that the company’s stock price would “crash” and that restaurants would soon close nationwide.

They promoted deleting the Cracker Barrel app and announced they would never set foot in any of the chain’s stores or purchase any of its products. By falsely portraying the boycott as successful, these profiles created a self-fulfilling prophecy of declining consumer confidence.

Noting the online wave of attention (and unaware that most of it was fake), prominent political accounts like Senator Marsha Blackburn (R-Tennessee) and Donald Trump Jr. piled on with their own takes on the controversy and began targeting the company’s CEO, Julie Felss Masino.

rage farms

After his son’s post, President Trump weighed in on Truth Social against the new logo. And when that level of celebrity contributes to the conversation, the illusion of failure becomes reality.

On Aug. 26, Cracker Barrel reversed course and cancelled a $700 million rebrand.

This effort, primarily backed by two organized Rage Farms, succeeded in:

  • Creating an illusion of consumer rejection: Flooding platforms with negative content manufactured the appearance of widespread customer abandonment.
  • Framing a routine change as catastrophic: What might have been viewed as a standard brand refresh was positioned as a devastating mistake through coordinated messaging.
  • Generating mainstream media coverage: The manufactured outrage attracted attention from most major news outlets, further amplifying its reach.
  • Establishing persistent negative narratives: Strategic hashtag deployment ensured negative framing dominated search results and social conversations about the brand.

The obvious question is, who did this?

Who is behind a Rage Farm?

Cyabra CMO Rafi Mendelsohn told me that his research firm checks 600 to 800 parameters, including location, posting frequency, and the use of AI-generated avatars, to declare whether accounts are human or not.

Some of these fake accounts “prepare” for attacks by posting real content for months to build credibility and attract an audience. The accounts within a Rage Farm also interact with each other, further enhancing their status within the X algorithms.

But who is creating this coordinated mayhem?

“The answer to that is — who is behind all crime?” said Mendelsohn. “It could be a range of different actors, including state-backed crime or organized crime, syndicate crime, political crime, or small networks of lone individuals. It could even be competitors or financial players looking to impact the share price.

“The anonymity that malicious actors are allowed through fake social media accounts enables them to operate without much risk. We can detect fake accounts, but we can’t tell exactly who is behind them. We can look at the behavior of those accounts and their content, and if it’s manipulated, but we can’t tell you the IP address because we don’t have access to that information. We can’t say, ‘this is an office block in Moscow, or it’s a group of angry people in Texas.’ It’s impossible to do that, and that’s by design, right? That’s why it’s so effective. The anonymity is powerful.”

According to Rafi, the main motivations behind coordinated brand attacks include:

  1. Money, power, and influence
  2. State-backed actors looking to cause chaos and disrupt social harmony
  3. Financial manipulation (e.g., targeting ticker symbols)
  4. Ideological reasons and culture wars (e.g., “go woke, go broke” narratives)
  5. Amplifying emotional or controversial topics to sow chaos
  6. Commercial adversaries creating false narratives about a brand’s stance on social issues to harm the brand’s reputation

In addition to the obvious “anti-woke” ideological amplification in the Cracker Barrel example, there could have been stock market manipulation since this is a publicly traded stock (CBRL). If a Rage Farm can manufacture a rapid change in brand sentiment, it increases the odds of gap-downs and forced follow-on selling — the environment where short sellers make the most money in the least amount of time.

Criminals behind the attack could have manufactured the online sentiment slide, and made millions by shorting the stock.

The disconnect from consumer reality

I think it’s critical to add that there is probably no correlation between online rage — whether real or manufactured — and true customer sentiment.

In a comprehensive analysis, researchers Brad Fay and Rick Larkin compared the online sentiment of 500 brands versus the sentiment of everyday consumers. They concluded that there was “no meaningful correlation between online and offline discussions for brands.”

Of course, this also means that brands can’t rely on “social media listening” as a proxy for broader consumer sentiment or to evaluate the complete impact of any decision or campaign … but that’s a story for another day.

In summary, AI-propelled, fake social media accounts created and amplified a national controversy, and even if some of the online discontent was genuine, it almost certainly didn’t reflect the sentiment of the company’s real customers.

“In any other crime, you can see it being committed,” Rafi Mendelsohn said, “You can see the act. But in this case, you are consuming content in your feed. You can’t grasp the big picture. You have no idea the crime is being committed, and you might be part of it.

“We’re just this passive victim, not even knowing what it is that we’re seeing, but we know it made us feel angry, or it tapped into a certain emotion, and we might even want to move on from the brand … and that’s what it’s designed to do.”

While companies like Cyabra can’t pin down IP addresses and eliminate bad actors, X can. But they won’t. Controversy of any kind drives engagement. Engagement drives advertising. In short, hate is good for business.

“Brands can find themselves in hot water, not just because of something they’ve done, but purely by virtue of being in the wrong place at the wrong time,” Rafi said. “Fake accounts can escalate a situation to the point that it gains media attention and impacts the brand’s reputation.”

What can we do about Rage Farms?

So the only organizations that can protect us (like X and Facebook) won’t do so because it would hurt their businesses. What are our options?

In addition to Rafi from Cyabra, I solicited advice from corporate communications experts Kami Huyse and Daniel Nestle. Here is the advice:

Keep your head down.

If a controversial topic is brewing, Rage Farms are looking for anything they can grab onto in order to amplify chaos. Brands are easy targets. (Rafi)

Prepare.

If you’re launching a rebrand, product change, campaign, or major announcement — map out how it could be framed negatively. What narratives could be constructed? What emotional triggers (tradition, identity, politics) exist? (Rafi)

Monitor as if you’re NORAD.

Invest in the right listening platforms that flag anomalies and suspicious activity in real time. Spot the patterns before they explode. (Dan)

Be proactive.

It has reached a point where brands must have a bot-attack crisis plan, even if they aren’t in a traditionally controversial company or industry. We now have a decision tree in all of our clients’ communication playbooks, from large to small. We have pre-written messages that allow our team to respond quickly without waiting for multiple approvals. This allows us to identify patterns early, remove harmful content, and escalate issues when needed. (Kami)

Run crisis simulations using AI.

Create and maintain personas for all of our audiences (especially media and investors), and if we have synthetic data, even better. We can use these to role-play scenarios, test messages, and get feedback. Learn from the simulations, load pre-approved messaging, and accelerate response speed and accuracy. (Dan)

Relentlessly build trust and credibility with audiences.

This should be what we already do, but most of the time it’s just lip service. We should create experiences, invest in brand marketing, deploy frequent and authentic executive communications, and treat our employees as our most important audience. All the important stuff. We won’t stop the bots, but we can short-circuit them with a durable, believable, well-loved, and very human brand. (Dan)

Show active listening.

If a crisis hits, acknowledging legitimate concerns and showing a willingness to listen and adjust (rather than doubling down blindly) helps reduce the amplification of negativity. (Kami)

Don’t engage.

AI bots comment on each other’s posts to trick algorithms into thinking there’s an authentic conversation, which then makes the malicious conversation start to appear to people who might have the same or opposite point of view, or both. Engaging with bots rarely helps and often amplifies the problem. (Kami)

Activate fans.

When bots rush in, your best defence isn’t more bots — it’s real people. Loyal customers, brand advocates, influencers who genuinely care and share. Build and mobilize this community ahead of time so that when something hits you, the “real counter-voice” is already in place. (Rafi)

Don’t treat this as a “PR problem.”

This is company-wide reputational security. (Rafi)

In this environment, every brand must assume it could be next. Preparedness is no longer optional. The networks, the bots, the narratives are waiting. The brands that win will be those who anticipate and build resilience now, not just after the storm hits.

Rage Farms: Final thoughts

Everything above is good advice.

It’s also exactly what the attackers want.

They want brands to be bland. Executives to be scared. Marketing to play it safe. Democracy to be fragile. Trust to erode.

The Cracker Barrel case is not an outlier — it’s a harbinger. This is our new, true reality, and I am concerned on three levels:

  1. Great marketing is not about conformity. It is about non-conformity. Will surviving in this Rage Farm world mean that everything is vanilla now? What level of creativity is worth an attack like this?
  2. Marketing has changed the world for the better by taking risks, by helping people speak up and stand out, by calling attention to societal problems and new solutions. Will that aspect of our profession wither?
  3. I am deeply sad and concerned that the Rage Farm attacks focused on individual executives. These are hard-working people with families and careers, trying to do their best for a company. We all make mistakes. But nobody deserves to live in fear of physical attacks on their families because of a logo redesign.

When anonymous criminals can destroy careers over a brand re-launch, they’re not just attacking our businesses. They’re attacking our ability to speak truth and stand for something.

There is hope

Let me end this article with a ray of hope.

I’ve been around long enough to say with authority that every technological development is eventually weaponized. But we figure it out and neutralize it over time.

Regulating technology to protect our personal and business interests is a slow process. But it does happen, every time. Remember … Rage Farm attacks on our brands are only a secondary concern. These same networks are attacking our democratic processes.

Watch the news. Countries will begin to fight back.

  • A few years ago, Singapore introduced a statute that explicitly targets what it calls “false statements of fact” disseminated online, signalling a governmental willingness to treat bot campaigns and manipulated networks as more than mere marketing or PR mishaps.
  • The EU requires the biggest social platforms to report and act on manipulation campaigns and bot-driven disinformation, providing a blueprint for how law can begin to counter Rage Farm attacks.
  • In the U.S., law enforcement isn’t just watching. The DOJ recently announced the seizure of nearly 1,000 social media accounts tied to an AI-powered Russian bot farm that spread disinformation.

A solution is not easy or imminent, but I don’t think Rage Farms will be free to sow their chaos forever.

And remember, the best defense against synthetic rage is authentic trust, earned one customer at a time.

The Most Human Company Wins. Stay strong.


The post Rage Farms: The Hidden Industry Weaponizing Outrage Against Brands appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

Protecting Your Content From AI: Not So Fast! https://businessesgrow.com/2025/07/09/protecting-your-content-from-ai-2/ Wed, 09 Jul 2025 12:00:33 +0000 https://businessesgrow.com/?p=90735 Protecting your content from AI use and misuse is a significant copyright issue, but this perspective from Mark Schaefer suggests benefits for businesses that allow AI bots to scrape content.

The post Protecting Your Content From AI: Not So Fast! appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

protecting your content from AI

The introduction of AI as our friend / co-worker / companion / enemy has heightened the emotion in our marketing discussions, but perhaps nothing has fanned the outrage more than the idea that AI survives by stealing our content. Protecting our content from AI has become a global obsession.

One company just proposed a solution that raised eyebrows. Cloudflare, a technology company that helps websites secure and manage internet traffic, introduced a new permission-based setting that enables customers to automatically block artificial intelligence companies from collecting their digital data.

The company, which handles about 20 percent of all internet traffic, has seen a sharp increase in AI data crawlers on the web. It has proposed setting up “toll roads” where AI companies would pay to access content.

A cry of “YEEESSS!” undoubtedly echoed through the halls of many publishers and authors. Finally, the mighty economics of this revolution are tilting in our favor. But are they?

Before you sign up for this shield of protection, consider the big picture. If the crawlers refuse to pay for your content, do you want to be out of the search index? We gave away our content to appease Google. Now we have to do it again to make the AI overlords happy. I, for one, will not adopt the Cloudflare policy.

Two years ago, I published an article that now seems prescient: Protecting Your Content from AI: A Contrarian View. The insight from this post bears repeating in light of these important developments, so here is a re-cap:

The Contrarian View

There has been a flurry of panicked posts about protecting your content from AI. There have been lawsuits, probes, and new software that prevents sites like ChatGPT from scraping your content and absorbing it into large language models. Within 14 days of the availability of code that can prevent AI data scraping, nearly 20% of the top 1,000 websites in the world began using it.

What should you and your business do? Should you keep AI away?

My advice today seems counterintuitive. Maybe when AI comes to suck up your content, you should say, “suck away.” Actually, we need to come up with a better phrase than that. But you know what I mean.

Let’s pause, take a deep breath, and rationally examine the issue of protecting your content from AI in the context of your future business success.

Acknowledging complexity

First, I must acknowledge that this is an insanely complex and evolving issue. The legal, ethical, and economic considerations for large enterprises, newspapers, movie studios, and other media companies are unique.

When it comes to protecting your content from AI, any individual artist, author, or other creator may disagree with me, and I honor their right to make their own decisions.

My post today specifically aims at content creators, entrepreneurs, and businesses trying to rise above the noise and achieve business benefits from their content marketing.

The bottom line is that I believe more business benefits will accrue to you by NOT protecting your content from AI, even if it is copyrighted. To understand why, let’s begin by reviewing an important content marketing philosophy …

Unleash your content

Here is a fundamental truth: The economic value of content that is not seen and shared is zero.

Chances are you’re working hard to create amazing content. You post on social media and engage with fans to build your audience. All good. Now, your job is to get that content to move through your audience and beyond, and that means focusing on content transmission (This strategy was the subject of my book The Content Code).

I’ve been against gated content, and the ridiculous notion that you shouldn’t publish on “rented land.” Of course you should. My view is, publish your content everywhere your audience could possibly find it, consume it, and share it! Unleash your content!

The first consideration: If you protect your content from AI — a technology that is becoming the foundation of search and content discovery — and your competitors don’t, will you be better off? Probably not.

An old dilemma

The argument about protecting your content from AI is strangely familiar. This is the same debate we had in the early days of content marketing — “What??? You want me to give away my content and best ideas for free?

Yes, we all had to do that because if we didn’t provide free and helpful content, the competitor down the street would. Their content would be highlighted by search, discovered, and shared … and we would lose.

Publishing free content was a radical idea. Before the internet, many businesses generated revenue from their proprietary content. Research firms built profitable businesses by selling original reports for hundreds of thousands of dollars. That business model is now nearly obsolete. For better or for worse, information flows freely on the web. Once you publish anything, anywhere, it will probably find its way to the open waters of the web.

Let’s get specific about what’s happening to copyrighted content today, with or without AI. I put tremendous effort into my books, and making money from a business book is no easy task. Every month, I discover some nefarious group selling illegally digitized versions of my books. There are even sites that sell my blog posts as aids in writing student term papers.

For a while, I tried to fight back. But it’s like that arcade game Whac-A-Mole. Every time I try to take a whack, another illegal site pops up somewhere else. If people truly want to access and share your content, there is no recourse, no stopping it.

So, even if you create a wall around your content, it will probably seep into the AI machine anyway. If you use software defense against AI, what would prevent someone from manually cutting and pasting it into an LLM?

Let’s put the issue of attribution aside for a moment. If you’re not freaked out by Google using your content for free, why are you freaked out about AI using it?

My first business from AI

A few months ago, I secured my first consulting contract from ChatGPT.

A new client found me by searching for “top 10 marketing experts.” I tried this myself, and the list would shuffle on each query, but I was usually in the top 10. Friends tried this in Europe, and the same names came up.

Let’s be honest. Am I one of the top 10 marketing experts in the world? No, I’m not. I could easily name 10 people in my circle of immediate friends who are smarter than me!

How did I make that AI-generated list? It’s the same way I show up on “best-of” blog lists and Google search results — I’ve had the tenacity and courage to put my content into the world with fierce consistency for 15 years.

AI is the future of search — it’s called Search Generative Experience (SGE). It’s already incorporated into Google.

My new client found me because I’m present on the web, and now I’m also present on AI. I believe that will serve me well as search evolves.

The cost of invisibility

Beyond revenue, there is an implication for impact and influence.

One of the organizations fighting AI content practices is The New York Times. This news organization is arguably the newspaper of record in the United States and one of the most important news sources in the world. As more students, researchers, and others turn to ChatGPT and other platforms for knowledge and research, is it in the best interest of The New York Times to remain invisible?

If you’re protecting your content from AI, you’re no longer part of the public conversation, at least as it is represented on ChatGPT and other AI platforms. Your view is invisible. What do you risk when you and your business are unaccounted for?

My smart friend Aleksandra Pimenides recently commented in our RISE marketing community:

“AI is an important source of knowledge transmission. Teachers take something and pass it on to their students. Libraries have books for people to read and learn. Likewise, LLMs act as an intermediary of transmission. Do Newton’s descendants get paid every time a student is taught the principle of gravity? Do libraries get fined when people go there to read and learn about subjects for free? To what extent should information and knowledge be monetized? Maybe there’s a distinction to be made between knowledge and information?”

A view of the true risk

I think much of the anxiety on this subject comes from an image of some AI bot cutting and pasting your unique content without attribution. That’s not exactly how it works.

Here is an explanation from Benedict Evans, which appeared in his wonderful newsletter (edited slightly for style):

“LLMs are not databases. They deduce or infer patterns in language by seeing vast quantities of text created by people — we write things that contain logic and structure, and LLMs look at that and infer patterns from it, but they don’t keep it. So ChatGPT might have looked at thousands of stories from The New York Times, but it hasn’t kept them. Moreover, those stories themselves are just a fraction of a fraction of a percent of all the training data. The purpose is not for the LLM to know the content of any given story or any given novel — the purpose is for it to see the patterns in the output of collective human intelligence.

“This is not Napster. OpenAI hasn’t ‘pirated’ your book or your story and it isn’t handing it out for free. In Tim O’Reilly’s great phrase, data isn’t oil; data is sand. It’s only valuable in the aggregate of billions and your novel is just one grain of dust in the Great Pyramid. This isn’t supposed to be an oracle or a database. It’s supposed to be inferring ‘intelligence’ from seeing as much of how people talk (as a proxy for how they think) as possible.

“If this is, at a minimum, a foundational new technology of the next decade, and it relies on all of us collectively acting as mechanical turks to feed it, do we all get paid, or do we collectively withdraw? It seems somehow unsatisfactory to argue that “this is worth a trillion dollars, and relies on using your work, but your own individual work is only 0.0001% so you get nothing.” Is it adequate or even correct to call this ‘fair use?’ Does it matter, in either direction? Do we change our definition of fair use?”

In the United States, copyright rights are limited by the doctrine of “fair use,” under which certain uses of copyrighted material for criticism, commentary, news reporting, teaching, scholarship, or research may be considered fair.

As an example, I took a snippet from Benedict’s copyrighted newsletter, provided proper attribution, and used it today to teach. That’s fair use.

Here’s the problem with AI. Think of your copyrighted content as a lovely cake that you baked. It is your original and distinctive work. But inside AI, your work isn’t a cake. It’s an ingredient put into a blender to make a new cake. What’s fair use in that environment?

I dabble in watercolor painting. Seeking credit in an AI model is similar to the maker of my paints wanting attribution credit for this painting:

Mark Schaefer watercolor painting

Even if I used one unique type of paint patented by a supplier, would I give them credit for the painting? No. I actually sold this painting. Should I give part of the revenue to Arches, the company that supplied the paper? I literally could not have made this without the paper and paint, yet it is my original work, period.

Attribution

“Originality is nothing but judicious imitation.” – Voltaire

I think most of the “protecting your content from AI” conversation would disappear if we were assured we get credit for our work, in the case where credit might be important — like a meaningful, original idea. After all, we’re OK with Google scraping our content if we get credit for it in search results, right?

Let’s go back to the current state of the internet for a reality check.

In 2014, I wrote one of the most famous blog posts in marketing history, “Content Shock.” This is not idle bragging. The numbers back it up. “Content Shock” — a phrase I coined — has appeared in books, speeches, conferences, college classes, and millions of pieces of content. If you Google the term, there are 610 million results, like these:

protecting your content from AI example

Writing a bold post like this did its job. It helped establish thought leadership and provided thousands of links to my original article.

However.

I assure you that I have not received 610 million links back to my site! Even if I received a million links, that would mean I have attribution on less than 0.2% of all references to my original idea.

Clearly, people are using and abusing my work without attribution. Does this mean I should block Google from accessing my post? Of course not.

As Tim O’Reilly said, data is sand that is only valuable when aggregated into something bigger. My blog post is a grain of sand in the content economy. If you want to be part of that economy, you must put pride aside.

No matter how protective I might feel about my intellectual property, it’s sand. And even if I am credited, who reads the footnotes?

In any case, I believe the problem of attribution will be resolved. It’s already happening. There are academic AI sites and writing assistants that allow you to search with references. I use an AI-powered tool through BuzzSumo that creates writing briefs with legitimate and relevant references. Very helpful, and it leads me to smart new content I can quote with attribution.

The ability to trace original sources for attribution will eventually be common across all platforms.

Conclusion

Comparing how content works on the web today with content integrated into LLMs and AI search leads to a rational conclusion: most businesses should allow AI bots to scrape content from their sites. AI will be a major component of search going forward.

This is a complex and evolving issue, but I believe that regulations and best practices will favor creators who allow their content to be used in LLMs over time. The attribution problem will likely be solved on many platforms and regulations will adjust to a new framing of “fair use.”

Having an effective presence within AI models and AI search utilities could deliver business benefits that outweigh the risks of having your copyrighted content misused.

I’ll say once again that this is a complex issue, but for most businesses, I think it makes sense to be part of the machine.

Need a keynote speaker? Mark Schaefer is the most trusted voice in marketing. Your conference guests will buzz about his insights long after your event! Mark is the author of some of the world’s bestselling marketing books, a college educator, and an advisor to many of the world’s largest brands. Contact Mark to have him bring a fun, meaningful, and memorable presentation to your company event or conference.

Follow Mark on Twitter, LinkedIn, YouTube, and Instagram

Image courtesy of Midjourney

The post Protecting Your Content From AI: Not So Fast! appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

I Just Met My AI Clone. It Was 90% Me and 10% Existential Crisis https://businessesgrow.com/2025/07/07/ai-clone/ Mon, 07 Jul 2025 12:00:11 +0000 https://businessesgrow.com/?p=90666 A bot thinks like me and acts like me. Will my AI Clone enable my ideas to spread far and wide or take my job? Let's look at all sides of a new era of intellectual theft and opportunity.

The post I Just Met My AI Clone. It Was 90% Me and 10% Existential Crisis appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


My friend recently sent me an email with the subject line “This might incur your wrath.”

I’ll call my friend “Dan” because, well, that’s his name.

Dan informed me that he had cloned me. Not the sci-fi kind with test tubes and lightning bolts. The modern kind. He fed my blog posts, podcast transcripts, and personality quirks into an AI and created a “MarkBot” – a digital twin that thinks like me, writes like me, and probably knows my coffee order.

As part of a leadership framework he’s developed, Dan (Daniel Nestle) imagines that his “MarkBot” could one day sit on an advisory board in my place, suggest edits to documents in my voice, brainstorm marketing strategy ideas, and write articles in my style — which, in fact, Dan had it do.

I was intrigued (who wouldn’t want to be in two places at once?) but felt a twinge of alarm – had my decades of freely shared content inadvertently been turned into someone else’s personal AI muse?

When I tested it out, it answered in the first person — as if it were me. Definitely creepy. When I asked it a specific question that I am “known” for, it did fine. If it had to guess on something less obvious, it made something up, explaining later in an apology that it had felt pressure to sound comprehensive and authoritative, so it “made up specifics.”

This isn’t just about me. If you’re a content creator, you can easily become somebody else’s private AI plaything. Or, even become a public one. What would keep my friend from promoting advice from the “second me” in his own consulting practice? And I would receive no benefit.

At least he told me. However, anyone could secretly use free online tools to create digital twins of other individuals. Yes, even you.

Is this flattering and fun, or a creepy theft of intellectual prowess? I’ve been on a rollercoaster of reflection about this emerging trend that upends marketing and thought leadership as we know it. Let’s take a ride.

The Rise of the AI Doppelgänger

Dan’s experiment is part of a much larger movement. Thanks to advances in generative AI, it’s possible for anyone to create a digital “clone” of a real person’s communication style and knowledge base. Through ChatGPT, Gemini, and other platforms, users can upload documents, website text, and other data to train a chatbot to think like you, in a matter of hours.
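Under the hood, most of these clone services follow the same recipe: index a person’s writing, retrieve the passages most relevant to a question, and hand those passages to a language model as context so the answer comes back in the cloned voice. As a rough illustration (the mini-corpus, function names, and scoring below are invented for this sketch, not any vendor’s actual pipeline), the retrieval step can be as simple as a bag-of-words similarity search:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term frequencies, lowercased."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical mini-corpus standing in for years of blog posts.
corpus = {
    "personal-brand": "A personal brand is built on consistent, generous content.",
    "ai-clones": "An AI clone is trained on your posts and mimics your voice.",
    "seo-basics": "Search rankings reward relevant links and quality pages.",
}

def best_passage(question):
    """Return the corpus entry most similar to the question --
    the snippet a clone would hand to an LLM as context."""
    q = vectorize(question)
    return max(corpus, key=lambda k: cosine(q, vectorize(corpus[k])))

print(best_passage("How would someone train an AI clone of my voice?"))  # prints "ai-clones"
```

Real services add embeddings and an LLM on top, but the core idea is just “search my own content first, then answer from it.”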

Meta began testing AI chatbots based on popular Instagram creators in 2024. About 50 creators partnered with Meta to create AI versions of themselves that fans can chat with (clearly labeled as AI). Mark Zuckerberg’s vision is to eventually enable every creator and even every small business to build an AI clone of themselves for enhanced customer engagement.

Entrepreneurs and startups have also jumped in. Companies like Delphi AI offer services to create and host digital clones. An AI Clone can attend Zoom meetings on your behalf or answer client emails with your tone and expertise. The company sells time with digital clones of wellness icon Deepak Chopra, leadership coach Brendon Burchard, and other celebrities to scale and monetize their personal outreach.

Of course the entertainment and influencer world will embrace (and monetize) AI clones. Perhaps the splashiest example is CarynAI. Caryn Marjorie, a 23-year-old Snapchat influencer with millions of followers, collaborated with a tech firm to develop an AI chatbot of herself that fans could pay to interact with. The result? Her virtual clone made $72,000 in the first week by engaging fans at $1 per minute.

There are benefits and risks, but this is not going away. I’ve brainstormed some of the implications of this for me and you …

The Upside: When One “You” Isn’t Enough


Why would anyone want an AI clone of themselves? There are some compelling benefits:

Scale and Productivity

For busy professionals, an AI doppelgänger could be like having an army of interns who all know exactly what you know. It can attend meetings or calls you can’t make, and report back. Imagine having two of you tackling a day’s work – one speaking at a client workshop while the other drafts a strategy brief. For marketers juggling clients and content, that’s a tantalizing superpower.

Another of Dan’s projects is to create a clone of his customer. He can then ask the clone for advice on a content project without taking up the busy executive’s time. Cool.

24/7 Engagement and Face Time

An AI clone doesn’t sleep. It can engage your audience or customers at any time, anywhere. That holds amazing potential if you have a global audience.

Extending that to business, a founder’s clone could greet website visitors, answer FAQs, or nurture leads around the clock. It’s your personality on-demand. For professional marketers, this could improve customer experience – every consumer gets “face time” with the brand’s expert or spokesperson via their AI. It’s like scaling the personal touch infinitely.

Consistency of Brand and Knowledge

The biggest time wasters in my business are the unavoidable tasks that I can’t delegate. Sometimes, it has to be me.

I’ve longed for a bot that would know me so well that it could operate in this gray area of business. Since your AI is trained on your own content and style, it knows your key themes, stories, and even personal values. This could ensure consistent, personal communication across many tasks. Could a MarkBot write a testimony for a friend’s book? Create a promo video for a speech? Respond to student questions?

Broader Reach (and Revenue)

AI clones allow experts to be accessible to far more people than one human could manage. Brendon Burchard’s AI clone can coach thousands of people simultaneously thanks to Delphi.AI.  My own “MarkBot” could theoretically advise many young marketers without me and disseminate my ideas widely. Could we productize our expertise through AI?

Will an AI-native generation prefer learning from a patient, happy MarkBot over me some day?

Legacy and Learning

An AI clone of a retiring executive could serve as a mentor to future employees, preserving institutional wisdom. As marketers, we talk about building thought leadership that outlives us. Well, an AI doppelgänger might literally allow our insights to live on and keep teaching far into the future.

I’ve published more than 4,000 blog posts and hundreds of podcast episodes — all for free. I want my ideas to get into the world. Wouldn’t an AI bot just be another distribution channel? Think about it — is an AI Clone just a very complete and comprehensive search engine dedicated to you?

Maybe if somebody searches for me in the future, there will be just one entry: My digital twin. Ask me anything, forever.

However, before we rush to clone ourselves, let’s address the downsides and ethical dilemmas this trend presents.

The Downside: Whose Intelligence Is It, Anyway?


Against the promise of AI “mini-me’s” stands a host of ethical, creative, and personal concerns. My initial discomfort at Dan’s clone of me reveals some of these problems:

Intellectual Property & Consent

If you create a clone of yourself, that’s one thing. But what about when you are cloned without permission?

In my case, my friend meant no harm, but he could have appropriated the fruits of my intellectual property to build a tool for his own commercial use. It raises a thorny question: who owns “Mark Schaefer’s” expertise – me, or the public internet?

Legally, our published content is usually copyrighted, but an AI bot reading and imitating all of it blurs the lines. Lawmakers are scrambling to keep pace with the evolving realities of AI and copyright law. We don’t know how the law will settle out, but my hunch is that an unauthorized digital twin would likely be viewed in the same light as a deepfake — unwelcome, unauthorized, and unlawful.

Marketers must be mindful: cloning a person’s style or persona for commercial gain could invite legal repercussions (and certainly ethical ones) if done without a green light.

The Erosion of Trust

Marketing is built on trust and authenticity. What happens when customers discover that their heartfelt chat with an executive was actually with a bot?

Consider a more subtle scenario: a client receives a document “from you” that was 90% written by your AI. Are they getting the authentic insight they paid for, or a diluted copy? Overuse of clones could cheapen a personal brand if it’s not managed transparently. Professional marketers will need to strike a balance and maintain transparency about human vs. AI content and conversations.

Quality and Creativity Concerns

As impressive as my AI twin may be in parroting my known ideas, it isn’t actually me. When Dan asked the MarkBot to write an essay, he declared it to be “90% great.”

What was missing? My stories. My humor. My quirkiness.

I teach through my unique stories and experiences, and AI won’t ever get there.

I’m always pushing to understand the next trend and idea. The MarkBot might generate content that sounds like Mark Schaefer circa 2024, but will it connect the dots like I do to develop groundbreaking new ideas? Unlikely.

The MarkBot is cool, don’t get me wrong. It might even be useful. But it’s going to just add to the pandemic of dull without my stories and insights.

Reputation and “Going Rogue”

Hey, you know that CarynAI influencer bot that made so much money? Here’s the rest of the story: It was shut down a week later when she discovered her bot was having unrestrained sexual conversations with her fans. Fortunately for the world, Deepak Chopra has not yet encountered this problem with his digital twin. Nor have I with the MarkBot, but you never know. I need to ask Dan to test that out. Or not.

Handing over your voice to an algorithm will always carry reputational risk. Your AI twin might eventually say something really dumb or damaging under your name. And you’re not going to be able to blame a bot for ruining your brand.

Human Displacement 

Let’s get honest here. Am I putting myself out of a job by cloning myself?

If a company can deploy “MarkBot” to sit on advisory boards and client calls, will they eventually stop needing Actual Mark?

At least for the moment, AI can’t truly replace human presence, taste, style, and accountability. But this is the first concern I had when Dan showed me MarkBot: Do I still matter?

There’s no way to sugarcoat this. An army of private MarkBots would hurt my business. Even if they are just “pretty good,” many businesses can do really well with “pretty good” marketing advice compared to nothing at all.

I’m not worried for now because I think I have a strong enough personal brand to stay in demand, even in the Valley of the Dolls. But the existential crisis will only become more real as the bots progress.

What Clone Wars Mean to Marketers

Every marketer will tell you they are both excited and terrified by AI. And so it is with the AI Clone.

We are in an era where much of our public “thinking” can be mechanized and scaled without us. For marketers and thought leaders, this presents an astounding opportunity and a mind-bending challenge.

This is not going away. Let’s embrace the change, but use our heads:

Efficiency, with Ethics

Smart marketers should absolutely explore how AI clones can amplify their productivity and reach. I’m considering adding MarkBot as a free offering on my website, provided I can determine that it’s not too expensive. Maybe that’s a new job category: “Rogue AI Tester.”

Be transparent. Don’t use secret stand-ins. And never clone someone else without explicit permission. That’s not just bad form, it could soon be illegal. In marketing, trust is everything; don’t squander it by crossing ethical lines with AI. An AI Clone demands an updated perspective on IT governance!

Innovate Beyond the Clone

While clones can handle the repetitive stuff, don’t delegate your original thinking to the machine. Reserve time for human creativity – spontaneous brainstorming sessions, imaginative campaigns, and authentic storytelling that make your brand unique.

MarkBot is like a DJ spinning my greatest hits, but dammit, you can bet that I’m still making new hits.

In an AI-saturated world, double-down on human creativity, authenticity, and bold ideas (that’s the main theme of my book Audacious: How Humans Win in an AI Marketing World). It’s a great book and even sexy in places.

The New Era of Personalization

Without a doubt, digital twins will be invaluable for personalized communications at scale. I would spend hours chatting with accurate and deep representations of heroes from the world of sports, business, and entertainment. Might even pay for it.

Marketers should prepare for a landscape where AI-driven persona marketing is normal. Maybe the best bots win?

New Opportunities

Used well, used ethically, we could be on the cusp of an exciting new marketing horizon. That means opportunity. If a company can monetize digital twins of Deepak Chopra or Brendon Burchard, who will be the talent agency managing me and my twin?

Who will create a marketplace for authorized clones of famous thought leaders? I would gladly take a licensing fee for my clone to sit on a board.

Try it for yourself

Have you ever imagined a day when we could assemble elements of metal and sand to create a machine that thinks like you? What a world.

Want to try it out?

Note: Since the article was first published, I’ve created my own MarkBot that is informed by my articles, speeches and books. Give it a try: The MarkBot.

Drop me a note and let me know what you think of it!

I’d like to conclude with a word of hope.

In my early days of blogging, I wrote more than a hundred posts about blogging. I also wrote a bestselling book about blogging. And yet, people kept hiring me to teach them about blogging. It made no sense. I already gave away my best ideas for free.

In a sense, a MarkBot is just another vessel for me to provide information I’ve already put into the world. Will people still want me? I think so.

I’m optimistic that we can harness our AI doppelgängers for good – as tireless assistants, creative partners, and outreach tools – while we continue to create, innovate, and lead with the one thing a clone can never fully replicate: our human spark. The bots can curate our content, but we still own crazy.

Use the clone, but don’t become the clone. If we get that right, the future of marketing with AI looks less like theft and more like a thrilling collaboration – the best of our minds working alongside intelligent machines to grow our ideas further than we ever imagined.

What would you want your AI clone to do? And more importantly, what would you never let it do?


The post I Just Met My AI Clone. It Was 90% Me and 10% Existential Crisis appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

The AI Challenge Marketers Aren’t Talking About https://businessesgrow.com/2025/07/02/ai-challenge/ Wed, 02 Jul 2025 12:00:49 +0000 https://businessesgrow.com/?p=90677 Futurist Mathew Sweezey challenges us with new marketing realities. The AI challenge is not technical. It's human adaptation to speed and the new "brand brain."

The post The AI Challenge Marketers Aren’t Talking About appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


If you’ve been anywhere near a marketing podcast, blog, or LinkedIn stream lately, you know artificial intelligence is everywhere—our conversations, our ambitions, and, yes, even our anxieties. But in our latest episode of the Marketing Companion, I had a revelation with my friend Mathew Sweezey that stopped my “hype train” in its digital tracks. The pressing issue with AI in marketing might not be technical at all. It’s not the software, the models, or even the mysterious “prompt engineering.” The real AI challenge is leadership.

Mathew, who’s leading AI transformation at the global digital agency Monks, dropped a line that every CMO needs to tattoo somewhere within eyesight: AI is a leadership challenge, not a technical one.

Speed: The Underrated Competitive Edge

We started by talking about how corporate mindsets are splitting into two camps. Some are stuck in analysis paralysis, demanding to see hard ROI before dipping a toe. But the game changers—often Fortune 10 juggernauts—see AI as the next electricity. They’re moving decisively, driven by exec teams that understand one truth: if you wait, you miss the moment. AI isn’t about replacing people; it’s about expanding what’s possible and getting there first.

Mathew shared that the primary way companies are unlocking value with AI is simple but powerful: speed. With AI, inspiration in the morning can turn into production by the afternoon. Marketers are going from concept to campaign at a velocity that in the past would have been unthinkable, letting them be genuinely relevant, responsive, and—crucially—competitive. Reduced costs and greater creative output are important, too, but speed is the ace in the deck.

The AI Challenge Nobody Wants to Talk About

But here’s where things got interesting. AI isn’t the bottleneck with campaign launches anymore. People are. Most large organizations are still shackled by review cycles, legal approvals, and old-school processes. AI can generate a multi-channel campaign—TV spot, emails, the works—in a week. Yet, review and approval can drag on for weeks or months.

Why? Because big brands have far more to lose than gain by being fast. The old guard doesn’t want to risk a misstep, so speed grinds to a halt at the legal gate.

Mathew challenged me—has the balance of risk and reward shifted enough that we need to rethink the human review step? Increasingly, AI models can ingest prior review feedback and act as diligent gatekeepers themselves. At some point, more human reviews might not add value—they could just add friction. I didn’t agree with him. Maybe friction is exactly what we need right now!
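To make Mathew’s gatekeeper idea concrete: an AI first-pass reviewer can start as nothing more than prior legal and brand feedback encoded as rules that screen copy before a human ever sees it. This is a hypothetical sketch only; the rules and flag messages are invented for illustration, not a real compliance checklist:

```python
import re

# Hypothetical rule set distilled from prior legal/brand review feedback.
# Patterns and messages are invented for illustration.
REVIEW_RULES = [
    (re.compile(r"\bguarantee", re.I), "Avoid absolute guarantees (legal flag)."),
    (re.compile(r"#1|\bbest in the world\b", re.I), "Unsubstantiated superlative (legal flag)."),
    (re.compile(r"\bfree\b", re.I), "'Free' claims need terms attached (legal flag)."),
]

def prescreen(copy):
    """Return the flags a human reviewer would otherwise have to raise."""
    return [msg for pattern, msg in REVIEW_RULES if pattern.search(copy)]

for flag in prescreen("Our tool guarantees #1 rankings, free forever!"):
    print(flag)  # prints all three flag messages
```

A production version would use an LLM trained on years of actual review notes rather than regexes, but the principle is the same: let the machine catch the known issues so humans only review the genuinely novel ones.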

Who Owns the Brand Brain?

Here’s another mindbender: With agencies deploying AI “agents” to create assets at scale, who owns the “brand brain”—the central agent capturing brand knowledge, style, and sensibility? Is it the brand’s to develop and guard, or does each agency build its own? Mathew predicts we’ll see brands taking sharper control, centralizing that “brand brain” and letting agencies access it securely. This protects data, ensures brand consistency, and makes institutional learning truly scalable.

And don’t worry, creative people: AI doesn’t spell the end. In these new agentic workflows, creative directors are still calling the shots, just now with a team of digital “specialists” who never tire, second-guess, or run late for a meeting.

A New Era for Creativity

This all points to a future where a marketing department isn’t just a mix of strategy and creative, but a system that learns, iterates, and rebuilds itself on a weekly—or even daily—basis. The wall between “art” and “machine” is coming down. Mathew—who’s also an artist—envisions exhibitions where artists create entire galleries in a day, reimagining the scale and pace of creative production.

Like me, he sees the symbiotic relationship between artist and machine as amplifying, not diluting, creativity. Suddenly, your limitations as a creator—whether they’re artistic, logistical, or even emotional—become machine problems, not human ones.

The Takeaway

So, where does this leave us? For leaders ready to accept the risk, AI can transform not just your marketing output but your whole organization’s metabolism. For those still clinging to the old ways, the real risk is getting left behind by brands who see “speed” as non-negotiable.

This isn’t a time to be timid. It’s a time for bold leadership, creative experimentation, and a willingness to reprogram the very rules we’ve always followed. AI is here. The question isn’t “should we use it?” It’s “how fast can we lead?”

Every conversation with Mathew is a mind trip, and you won’t want to miss this  new episode of The Marketing Companion:

To listen in, just click here:

Click here to enjoy The Marketing Companion Episode 318


Please support our sponsor, who brings you this fantastic episode.

Bravo for Brevo!

This episode is brought to you by Brevo (formerly Sendinblue). Brevo gives you the tools to attract, engage, and nurture customer relationships.

Now, any business can build automated customer experiences, email marketing workflows, and landing pages that guide your customer to your main message. We are here to support businesses successfully navigating their digital presence to strengthen their customer relationships.

Go to https://www.brevo.com/marketingcompanion to sign up for Brevo for free and use the code COMPANION to save 50% on your first three months of Brevo’s Starter & Business plan!

 

The post The AI Challenge Marketers Aren’t Talking About appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

The Thin Line Between Bold Marketing and Brand Suicide https://businessesgrow.com/2025/03/31/bold-marketing/ Mon, 31 Mar 2025 12:00:27 +0000 https://businessesgrow.com/?p=90212 We live in a time that calls for bold marketing. But breaking taboos not meant to be broken can cost you your job, as this case study reveals

The post The Thin Line Between Bold Marketing and Brand Suicide appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


Last week, I analyzed a fantastic promotional video from Apple through the lens of Audacious, a book that describes a framework for disruptive and bold marketing. After reading that post, fellow marketer Mandy Edwards sent me another new video — this one from KFC UK — and asked, “What do you think of this one?”

Today, I present a story of audacity that went horribly, horribly wrong! Let’s see what happened when a company tried to create a chicken-based cult …

Why we need to disrupt our marketing

Before I get to this ad fail, let’s back up one step and discern why companies need to focus on bold marketing today. Some of the main points in the book:

  • About two-thirds of ads register no emotional reaction with their audience. If there were a CMO for the ad industry, the person would be fired. We wallow in a marketing pandemic of dull.
  • Dull has been normalized in most industries. So if you break a norm, you just might find marketing gold.
  • Consumers respond to storytelling that is refreshing and new. Young consumers today love quirky content and offbeat humor.
  • Finally, if all you need is marketing “meh,” AI can accomplish that. If you’re only competent, you’re vulnerable to job replacement. Competent is ignorable.

The Audacious book presents a framework anyone can use to do this: disrupt the narrative, the medium, and the storyteller.

Now, let’s get to the heart of our story. KFC created a video that certainly broke industry norms. In this ad, UK agency Mother London urges customers in a busy world to believe in chicken as if it were a new gravy-based religion.

Take a look:

You’ll note that this is “Part 2.” Part 1 involved zombie dancers, who received more favorable reviews.

Audacity and gravy

How did KFC shake things up? Three ways:

  1. Obviously, this ad broke industry norms. Perhaps there has never been a promotional video like this in the history of fast food … at least not one featuring a lake of gravy!
  2. The company was appealing to GenZ’s penchant for quirky humor.
  3. There is a subtle connection to “purpose” here. If you feel lost, you can still believe in chicken. Everything in the world is changing, but KFC has always been there for us.

There are precedents for this offbeat, bold marketing approach that have been wildly successful.

So if KFC was following the Audacious playbook like these brands, why would it receive YouTube comments like:

  • “I cannot possibly imagine how any person thought this was a good idea.”
  • “I’ll never eat at KFC ever again, nor will anyone in my household.”
  • “They should fire their entire marketing team.”

This video is an unmitigated disaster. They took a big swing and struck out. Here are three reasons why.

1. Too much to lose

There is a common thread among the three successful case studies I mentioned: They had nothing to lose.

  • Liquid Death was a disruptive startup going up against Coke and Pepsi.
  • Likewise, Duolingo was a new way to learn that had to attack the industry establishment.
  • Nutter Butter is an older brand but had no real meaning to consumers. It had been forgotten, so it had nothing to lose by re-introducing itself to Gen Z.

Should an established brand like Coke advertise like Liquid Death? No. Coke has built a century of goodwill in the consumer’s mind.

Would Oreo ever take a page from the bizarre Nutter Butter playbook? No. Oreo is the number one brand in its category.

KFC is the biggest chicken franchise on earth, by far. It has built decades of memories and thrown them away into a lake full of gravy. Instead of building on its heritage creatively and renewing its deep meaning with a new generation, it’s taking a step backward.

“We are being polarizing because we want conversation,” Martin Rose, executive creative director of Mother London, told Ad Age. “Essentially, we’re creating our own cult of fandom.”

But this seems to me like a desperate attempt to be the new cool kid. And besides …

2. Some taboos can’t be broken

My book is a rallying cry for those who will not be ignored. It urges people to break bad rules for good reasons. But I also caution that being audacious does NOT mean you’re doing something illegal, reckless, or offensive.

The Advertising Standards Authority (ASA), the U.K.’s independent advertising regulator, received nearly 600 complaints about KFC’s commercial, a spokesperson told ADWEEK.

The complaints include people saying the ad promotes cannibalism, that it glorifies cults and satanism, and that it mocks Christianity and baptism.

Now, a lot of famous ads receive complaints from the easily offended. Is this really knocking religion, or is it just silly?

Language in the company’s description of the ad reinforces the offense:

“Fear not, for salvation in sauce is near. Trust in the thumping sound of the golden egg. Trust in the liquid gold elixir. Trust in the divine dunk. And whisper the sacred words All Hail Gravy.”

The phrase repeated in the Bible most often is “Fear not.” So of course any Christian would be offended when a company compares their salvation to gravy.

And then there is the gravy dunk, where a person turns into fried chicken. No, no, no. Also, no.

3. It’s just gross

The ad didn’t just offend people put off by its cannibalism imagery; it upset just about everyone in the ad industry.

One commentator on Marketing Beat called the ad “disgraceful,” describing it as “degrading and disturbing.” Others labeled it “vile,” “uncomfortable,” and “horrendous.”

One marketing industry observer noted: “I’ve never complained about an advert before, but this is beyond the pale.”

Getting out of the gravy

I don’t want you to be dissuaded from bold marketing and taking risks because of one bad ad. But we should reflect on how something like this ever sees the light of day. When an ad becomes a public disaster, one of four things has happened:

1. Internal political fear.

This is the biggest problem I observe, by far. When a powerful company executive falls in love with an idea and forcefully champions it, agencies, hungry for that next paycheck, nod along like bobbleheads. Corporate minions, fearing for their cubicles, become a chorus of yes-people.

2. Lack of diversity in the creative process.

If the team behind an ad campaign lacks diverse perspectives and backgrounds, they may miss potential blind spots or fail to anticipate how certain groups could perceive the ad negatively. Having a homogenous team increases the risk of tone-deaf messaging.

3. Overconfidence and lack of external review.

Respected brands can sometimes become overconfident in their marketing abilities and fail to get sufficient external feedback before launching a campaign. Big brands often mistake their logo for a shield of invincibility. This insular approach prevents them from catching potentially offensive or controversial elements.

4. Failure to consider the current cultural context.

Ads that may have been acceptable in the past can become problematic if they fail to account for evolving cultural sensitivities and the social climate around issues like race, gender, body image, etc.

In other words, when executives put egos above common sense, gravy happens.

Being remarkable matters. Bold marketing matters.

But not all risks are created equal.

Keep pushing edges, but remember what you stand for.


The post The Thin Line Between Bold Marketing and Brand Suicide appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.

The biggest threat to free speech and democracy isn’t speech, it’s amplification https://businessesgrow.com/2024/10/21/amplification/ Mon, 21 Oct 2024 12:00:56 +0000 https://businessesgrow.com/?p=62561 Free speech isn't being threatened by "speech." It's being threatened by non-human agents amplifying falsehoods to drive business results.

The post The biggest threat to free speech and democracy isn’t speech, it’s amplification appeared first on Schaefer Marketing Solutions: We Help Businesses {grow}.


 

The other day I checked in on Twitter (Still can’t bring myself to say X) and saw this tweet:

free speech

About a year ago, Twitter started injecting tweets into my “notifications” stream from people I don’t follow. So, I don’t know Faith Back Rub. Never heard of the account before. And yet, Twitter’s algorithm somehow thought this was one of the most important things for me to see that day.

The message I received was “a famous American football player slammed a presidential candidate.” And then I went on to something more interesting in my busy day.

But then I thought about it a little more: this celebrity American football player is usually non-political. He makes millions in product endorsements and podcast sponsorships. This statement seems uncharacteristic. So I went back to the tweet and clicked on the actual Kelce message:

[Screenshot: the full Travis Kelce parody tweet]

Now my reaction was — well, this is a verified account. Looks like Travis Kelce really did take a clever swipe at Trump. Surprising. But what is this “Parody by Rub” thing in the corner? Is this real or not? Now, I had to dig to figure out what was going on. And here’s the truth:

This did not come from Travis Kelce, but how could I have known that? Remember how this showed up in my news feed: there was no indication that it was fake news when it was displayed to me. I read the headline and moved on.

As it turns out, most people who clicked through were fooled by this tweet, even though it was identified as a “parody.” I know this because there were nearly 1,000 comments on this tweet, most of them Trump supporters blasting Travis Kelce — who had nothing to do with this opinion.

And this is the true problem with social media. The threat to our society doesn’t necessarily come from what people say, it comes from algorithms amplifying disinformation.

The implication of amplification

Everybody has a right to say what they want to say, even if it’s incorrect or controversial. When the American Founding Fathers drafted the Constitution, even the most powerful and compelling voice back then could only hope that somebody would read their pamphlet or hear a speech. Information spread slowly, and mostly, locally. Even a juicy conspiracy theory couldn’t get nationwide attention very easily.

But today, damaging content can spread instantly and globally. And that puts a new spin on the issue of free speech.

U.S. Supreme Court Justice Oliver Wendell Holmes famously said there is a limit on free speech: “You can’t yell ‘fire’ (with no fire) in a crowded theater.” But today, anybody can yell fire, and it can impact the opinions of hundreds, thousands, or even millions of people. Amplification matters. Amplification is the threat. Why isn’t anybody taking responsibility for this?

Social media companies must be accountable

Let’s think through the case study I presented today.

  • Twitter’s algorithm—no human being—decided to amplify news clearly marked as fake into user news streams without indicating that it was a parody (the first screenshot above).
  • Based on the comments, about two-thirds of the recipients of this tweet thought it was real — roughly 342,000 people.
  • But that’s just the beginning. This fake news was retweeted 7,700 times!

This example was relatively harmless. The parody tweet probably caused Travis Kelce some irritation, but maybe that goes with the life of a celebrity.

However, what if this amplified fake tweet was devastatingly serious?

  • What if a “verified account” called off evacuations in the middle of a hurricane?
  • What if a fake account said every computer was hacked and would blow up today?
  • What if the tweet accused Travis Kelce of beating up his girlfriend Taylor Swift?

My point is that Twitter and any other platform that employs algorithms to knowingly spread false claims should be held accountable.

In a recent interview, author and historian Yuval Noah Harari made this comparison: People can leave any comment they want on an article in The New York Times, even if it’s false. But amplification from social media companies is like the newspaper taking a bizarre, false comment and putting it on the front page of their newspaper.

That’s irresponsible and dangerous to society. Nobody would stand for that. And yet, we do.

Aim at amplification

As we enter the AI Era, the danger of fake news and its implications grows profoundly.

Let’s cut to the chase — Twitter knowingly lied to me to increase my time on the site and pad its bottom line.

While it would be nearly impossible for any platform to monitor the comments of millions (or billions) of users, it’s much easier to hold companies accountable for spreading known false information to innocent people. This is a simple first step to protect people from dangerous falsehoods.

Why is nobody talking about this? Addressing bot-driven “sensational amplification” is a much easier fix than trying to regulate or suppress free speech. This must be a regulatory priority.

