
LightningStrike Studios

Professional Communication for the Chronically Busy™


Dec 01 2025

What Star Trek Can Teach Us About Artificial Intelligence

Kirk and Mudd defeat the android Norman
© Paramount Skydance Corporation

Mention artificial intelligence in the context of Star Trek, and you may think of Data, the android with the positronic neural net. Or perhaps the Emergency Medical Hologram with the poor bedside manner.

But Star Trek’s history with AI goes back much further, and much of it is far from positive. (This article originally appeared on lightningstrikestudios.com. If you're reading it anywhere else, it's stolen. Please let me know at jules@lightningstrikestudios.com.)

Remember the homicidal M5 unit that destroyed the U.S.S. Excalibur and almost put Kirk out of a job? Or Nomad, intent on sterilizing Earth if it couldn’t find its creator? Or Norman and Alice 1 through Alice 500, the androids tasked with keeping the Enterprise crew captive, but who couldn’t cope with a simple logic puzzle? Or Ruk, the android who threw Kirk around like a rag doll?

Returning to Data and the EMH, both at times went off the rails and put their ship and crew in danger. Like the time Data hijacked the Enterprise and went in search of his brother Lore. Or the time the EMH became linked with an alien AI-controlled weapon and terrorized the crew.

Of course, not all Star Trek experiences with artificial intelligence end in death and destruction. But even those that appear benign reveal serious problems with AI, problems that directly impact content creators and the businesses that use content creators today.

AI Music and Holodeck Games

In the TNG episode “The Ensigns of Command,” Picard commends Data on his violin performance and says that Data’s playing is “quite beautiful.” Data honestly replies, “Strictly speaking, sir, it is not my playing. It is a precise imitation of the techniques of Jascha Heifetz and Trenka Bron-Ken.”

Pulaski, Geordi, and Data enter the holodeck
© Paramount Skydance Corporation

Similarly, in the episode “Elementary, Dear Data,” Geordi and Data are playing a Sherlock Holmes story on the holodeck. Data demonstrates that he is adept at memorization and rote, but Doctor Pulaski insists that he is incapable of solving a real mystery. To prove her wrong, Data instructs the computer to give them “a Sherlock Holmes-type problem, but not one written specifically by Sir Arthur Conan Doyle.” Geordi then confirms, “So this will be something new, something created by the computer.”

The experiment fails.

As Pulaski points out, the story the computer provides is merely an amalgam of elements lifted from two different Holmes stories. (Yes, this demonstrates the limitations of the ship’s computer, not of Data. It’s not until the computer creates a sentient life form in the guise of Holmes’ archenemy Moriarty that Data is actually challenged. But that stretches the limits of credulity.)

In the Next Generation episode “11001001,” Riker meets and falls in love with the holodeck character Minuet. She is unlike any holodeck woman he’s encountered, and Picard comments, “She’s so very different from the other images we’ve experienced on the holodeck, isn’t she? She’s more intuitive.”

Of course, it’s all a lie. Minuet wasn’t created by the holodeck computer; she was programmed by the Bynars specifically to distract Riker from their attempt to hijack the ship. (It makes you wonder how the Bynars knew so much about Riker’s romantic preferences.)

Replicator Slop

Think, too, about the food replicator. While not artificial intelligence per se, it does demonstrate the limits of current AI technology.

The concept of a device that could produce any food you wanted with the push of a button was vaguely introduced in the original series. In the classic episode “Tomorrow Is Yesterday,” a visiting Air Force officer asks for some chicken soup and receives it a moment later.

The replicator produces a martini on Star Trek The Next Generation
© Paramount Skydance Corporation

But the idea fully materialized with The Next Generation. On board the Enterprise-D, you could walk up to an alcove in the wall and order anything from a cup of Earl Grey tea (hot) to a plate of Klingon gagh (although not live). The whole process, from your initial request to sitting down to your first sip or first bite, took only a few seconds.

Given the speed, versatility, and convenience of the replicators, you might wonder why anyone would actually cook. After all, your food would be nutritionally balanced, vegan-friendly (even the gagh), and taste like … food. Sort of.

It’s that last factor that kept the profession of chef alive, even in the affluent 24th-century Federation.

Throughout the series there were characters who insisted they could tell the difference between replicator food and “real food.”

In the episode “Sins of the Father,” Picard introduces Worf’s brother Kurn to caviar, explaining that it’s “a delicacy from the Caspian sea on Earth. It’s a favorite of mine. Our replicator has never done it justice, but I managed to store a few cases for special occasions.”

In the episode “The Price,” Troi tells the computer, “I would like a real chocolate sundae.” When the computer asks her to define “real,” Troi replies, “Real. Not one of your perfectly synthesized, ingeniously enhanced imitations. I would like real chocolate ice cream, real whipped cream.” (Apparently, Troi doesn’t understand that defining the word “real” using the word “real” isn’t really effective.) The computer objects that it is “programmed to provide sources of acceptable nutritional value.” Of course, that raises the question: what is “acceptable”? And who decides that?

To satisfy their desire for “real food,” some characters, at least occasionally, cooked meals using naturally grown ingredients. In Deep Space Nine, Benjamin Sisko is often seen cooking meals for himself and his son in his quarters. His father, Joseph Sisko, even owned a Creole restaurant in New Orleans where he cooked food from raw ingredients.

Joseph Sisko prepares real food from raw ingredients in Star Trek Deep Space Nine
Joseph Sisko prepares real food from raw ingredients
© Paramount Skydance Corporation

Joseph felt so passionately about real food that in the episode “Homefront,” he told Benjamin, “At dinner time, you’d better get yourself down to New Orleans. No son of mine is going to eat that replicated slop Starfleet calls food.”

And how does Benjamin respond? “You won’t get any argument from me!”

Why did chefs like Joseph Sisko still exist? And what does this have to do with artificial intelligence today?

AI Slop

The replicator couldn’t tell if the food it produced tasted good, or was even safe to consume.

In the Deep Space Nine episode “Babel,” Sisko almost spits out a sip of replicated coffee because it tastes so bad. When O’Brien tries to repair the replicators, he inadvertently activates a Bajoran device that replicates an aphasia virus along with the food it’s producing.

Likewise, today AI can’t tell if its output reads, sounds, or looks good, or even if it’s accurate.

In researching this article, I asked several AI systems to list Star Trek episodes that reference replicator food. Not only did they include episodes that never even mention replicator food, but some provided totally fabricated quotes.

Like the holodeck computer on the Enterprise-D, AI is great at copying, but it’s lousy at producing original content. It can merge and combine existing content, but it can’t create anything new. And its content is often repetitive, bland, and full of inaccuracies.

Scotty understood this. In “I, Mudd,” while planning their escape from Norman and the other androids, he observed, “Androids and robots, they’re just not capable of independent, creative thought.”

This weakness is only going to become more pronounced as AI begins training itself on data produced by AI. Remember what Joseph Sisko called replicated food? “Replicated slop.” The same term is now used to describe the output produced by artificial intelligence: AI slop.

You can tell ChatGPT to write a story with parameters you provide. But, like the holodeck-generated story Geordi, Data, and Pulaski played, it won’t be unique. It will simply be a collage of other people’s work, with no element of genuine creativity. It will have all the appeal of replicator food.

If you ever do ask AI to produce an original story, drawing, or song, how will you know the output is actually original? Just because YOU don’t recognize it doesn’t mean it’s really original. The artist who produced the original work would likely disagree.

Real Art Isn’t Theft

Best-selling author David Baldacci, who has written more than 60 novels, recently spoke before a Senate hearing titled “Too Big to Prosecute? Examining the AI Industry’s Mass Ingestion of Copyrighted Works for AI Training.”

Baldacci told the Senators, “My son asked ChatGPT to write a plot that read like a David Baldacci novel and in about five seconds three pages came out that had elements of pretty much every book I’d ever written including plot lines, twists, character names, narrative, the works. … Yes, it does read like my novels, because it is my novel.”

In his testimony, Baldacci also anticipated the objection that human artists, too, just steal work from other artists.

He said, “I’m aware of the argument that what AI did to me and other writers is no different than an aspiring writer reading other’s books and learning how to use them in original ways. I can tell you from personal experience that is flatly wrong.”

“I was once such an aspiring writer. My favorite novelist in college was John Irving. I read everything that Irving wrote. None of my novels read remotely like a John Irving novel. Why? Well, unlike AI, I can’t remember every line that Irving wrote, every detail about his characters, his plots.”

“The fact is, also unlike AI, I read other writers not to copy them or steal from them, but because I love their stories. I appreciate their talent. It’s motivated me to up my game.”

Don’t Miss Out On The Fun

There is still another reason — perhaps the most important reason — to not use AI to produce your content. It’s no fun!

Joseph Sisko wasn’t a chef because he couldn’t do anything else. He was a chef because he loved to cook. Why would he use a replicator to cook for him?

In the Deep Space Nine episode “The Muse,” when asked why he writes, Jake replies, “I think it’s mostly because I like to tell stories.” When pressed, he admits, “I guess I do want to be remembered.” If he liked to tell stories, why would Jake use a computer to write those stories for him? If he wanted to be remembered, letting a computer write for him wouldn’t be the way to do it.

If you’re an artist or a writer or a musician, it’s because you love your craft. Why would you use AI to do your work for you? Sure, you can produce more content faster, and perhaps earn more … for now. But it’s not YOUR content. And it’s not fun.

In the 24th century, if you want real food, go to a real chef. If you want real stories, go to a real writer.

In the 21st century, if you want real content, go to a real content creator.

Do you want original, quality content, produced by real humans for real humans? Contact us.


Written by Jules Smith

© 2004 - 2025 LightningStrike Studios