What Did Cyrano Teach Us? AI, Ethics, and the Future of Human Communication

Once upon a time, a knight fell in love with a lady. But the knight was imperfect and thought the lady wouldn’t be able to overlook his prodigious proboscis. He decided to write love letters, but have a handsome colleague deliver them. And, go figure, the lady fell in love with the pretty face.

“And what,” you’re asking, “does Cyrano de Bergerac have to do with marketing or content or the price of beans?”

I’m so glad you asked.


Spoiler: Christian is the AI

To anyone who’s been following machine learning for a while, it’s quite apparent that most current AI and robotics are simply increasingly accurate and efficient mimics of discrete human capacities.

The computers are fed (by humans) lots of information and examples. Their little processors use gobs upon gobs of energy and cooling water in the process of “learning” the (human-defined) desired responses to those inputs. And, as has been widely reported, the responses reflect the (human) biases of the “teachers.”

There’s a song in South Pacific, “You’ve Got to Be Carefully Taught,” about how children become racist because their parents teach them to be so—whether intentionally or not. And we’re surprised that AI programmed primarily by white American males demonstrates a strong white American male bias?

Even when we look toward the “autonomous” applications, where the AI is programmed to find its own inputs and process accordingly to serve up content, make decisions, or take action in response to what it finds, I would venture a guess that it will still reflect the biases of its developers because it has to be guided to weight inputs according to a defined value structure.

And this is why, in doomsday scenarios, we end up in Lord of the Flies, but with robots. Or Frankenstein in binary. Humanity has not yet learned, itself, to live up to its inherent nobility. How can it create artificial intelligence that exceeds its own shortcomings?


What Makes Us Inherently Human?

Copywriters are lamenting AI’s ability to write more low-quality material, faster, at lower direct cost (indirect: see energy and water, above). Proofreaders are lamenting AI’s capacity to eliminate typos at warp speed. The entire marketing universe of LinkedIn on some days seems to be obsessing over how to determine whether content is written by AI.

None of that is truly relevant. Why not? Because AI, at this point, is essentially churning out work at the quality of a mediocre intern. And as with a human of that capacity, the more the AI is taught, the less awful its writing is.

So, instead of myopically focusing on symptoms, let’s broaden our view of the bigger issue. Why are we so eager to sacrifice an inherent human capacity in the name of productivity?

I’m talking about language. Anthropologists, psychologists, ethicists, biologists … all have their own notions about what characteristics are inherent to humanity. Empathy. Nuance. Context. Relationships. Chromosomal patterns. Bipedal locomotion. Spiritual identity and awareness of it. And so much more. But where they overlap is in language.

And I don’t mean the capacity for vocalization, or to communicate through sound. Listen to whale song or wolves and it’s clear those qualities transcend species.

Nope. I’m talking about the capacity to construct and convey a worldview, and to affect others’ perception of it, through the deliberate use of words. The capacity to engage in expansive thinking. To consult with one another and develop creative solutions to complex problems.

What happens when we sacrifice our capacity for language in the interest of using AI?

We already know. Just ask Indigenous scholars. Worldview, social standards, and identity are eliminated when a people loses its capacity to express them. How much more so when humanity as a whole attempts to outsource an inherent capacity to a soulless mimic?

I suspect that, without course correction, we are looking at one or more generations who are incapable of thinking expansively or independently, who seek and accept guidance from AI on all subjects without considering its limitations or how it can easily be manipulated to further a specific interest (likely not theirs).

In my own circle, I am already seeing friends exemplifying this behavior. These are fully grown adults leaning on ChatGPT for personalized legal, accounting, mental health, and nutritional guidance. I cringe.

And when my opinion is sought, I redirect my dear ones from the mediocre intern-level generalisms to individual interactions with trained professionals with whom a one-on-one, holistic relationship is core to the quality of advice delivered.


The Collapse of Meaning

When it comes to marketing, I’m seeing a collapse of meaning in the rush to use AI to churn out gluttonous masses of content to feed the AI that’s reading that content and then serving it up to humans.

The sad part is, we’ve already seen what comes of this. Around 2008 to 2011-ish, when content distribution largely shifted from labor- and cost-intensive print to the wider-reaching Internet, content mills paid poor-quality writers to churn out gluttonous masses of content to feed the search engines that were reading that content and then serving it up to humans.

In no way are these behaviors contributing to the quality of content, engaging the human capacity for imaginative thought or creativity, or serving the greater good.

By the way, this is not a new concern. In 1837, Ralph Waldo Emerson wrote an address called “The American Scholar,” in which he attempted to rally his countrymen to independent investigation of truth and divine reality. It includes one of my favorite unintentionally ironic lines from any book found in a library:

“Meek young men grow up in libraries, believing it their duty to accept the views, which Cicero, which Locke, which Bacon, have given, forgetful that Cicero, Locke, and Bacon were only young men in libraries, when they wrote these books.”

Each time the alarm bell is rung, it should force us to wrestle with the question: Just because we can, does it mean we should? The atom bomb. Synthetic bovine growth hormone. Agent Orange. The damming and rerouting of Western rivers to feed cities plunked in places incompatible with human life (I’m looking at you, Phoenix). Artificial intelligence. And the list goes on.


What’s the Lesson?

There is no clear answer. Humanity will continue to invent, to innovate, to evolve. We will continue to struggle with the ethical dilemmas presented by these new “solutions” that create their own problems.

As we do, though, do we hold ourselves to our higher nature, which accepts our universal interconnectedness as a truth? Despite the slowness, the softness, and the rejection of things that serve me but hurt you?

Or do we, by our actions, seek to serve only ourselves and eschew the responsibility to move into the future as one?

I don’t know. What I do know is that, at the end of the play, the pretty boy’s dead. The letter-writer doesn’t get the girl, still has a big nose, and gains one heck of a guilty conscience.

Imagine what could have been if Cyrano hadn’t decided to hide behind an illusion.

Imagine what could be if we don’t, either.
