‘XO, KITTY’: A Messy, Terrible Show That Might Be the Future of Content

XO, Kitty cast photo at the LA premiere on May 11, 2023

INT. KITTY'S BEDROOM — MORNING

A bright and cheerful room adorned with posters of cute kittens. KITTY, a spirited and sassy 16-year-old, bounces around her bedroom, preparing for an exciting day. She wears a pink shirt that says "Cat Lover" and shorts, her hair tied up in a messy bun. Kitty's phone sits on her bed, buzzing with notifications.

KITTY (Smiling) Today's the day, Kitty Song. You've got this.

Kitty picks up her phone and checks the screen. A series of texts from her friends appear, along with a notification from "Bao Bao," her long-distance boyfriend.

Kitty taps on the notification, her face lighting up as she reads the message.

BAO BAO (TEXT) Can't wait to see you, Kitty. It's been too long. 😺

Kitty squeals with delight and starts typing a response.

KITTY (TEXT) Me too, Bao Bao! I'm on my way to the airport now. Prepare for a surprise of a lifetime. 😼

The intro above was generated by the deep learning algorithm ChatGPT (you may have heard of it) after I told it to "Write the first page of a script for the Netflix show XO, Kitty," plus or minus some other details for specificity. It's not a good script. "I know she'd be proud, but I also know she'd want me to pack some of her secret cookie recipe to charm Oliver even more," my protagonist says with all the tact of a subpar piece of fan fiction. If the "AI" revolution were based on clever twists and producing great art, I wouldn't be batting an eye as a writer over my job security.

The truth is, however, that the world of modern content is not based on artistic merit. XO, Kitty is a poorly written show made by humans, and its ratings have been healthy enough that Netflix has already renewed it for a second season. It's the type of content that is the bread and butter of the streaming era, and the future of content depends on whether it can be easily automated.

The story isn't the point

XO, Kitty follows the titular Kitty as she convinces her parents to let her attend an international school in Korea to pursue her long-distance boyfriend, Dae. Along the way, she meets new friends, learns more about her dead mom, who also attended the school, and engages in lots and lots of drama.

It's not surprising that a show based on a movie where "the protagonist accidentally sends letters to her past crushes and receives replies back" leans on a coincidence or two. Happy coincidences are the bedrock of the modern rom-com, with the meet-cute (i.e., our two romantic leads meeting by accident) being the foundational trope.

XO, Kitty pushes these coincidences into overdrive. The pen pal boyfriend she moved to Korea to be with conveniently develops money problems the moment she arrives, and so he is forced to fake-date a rich queen bee who needs a boyfriend to keep up appearances. Kitty coincidentally bumps into the fake girlfriend, Yuri, while en route to her new school and is given a ride in the girl's limo. Kitty is then coincidentally mistaken for a boy and "forced" to room with Dae and his best friends. And don't even get me started on the weird backstory with Kitty's mom. At every turn, the plot stumbles into a new contrivance to force our characters to interact as much as possible.

As you can tell by now, the writing for this show is not great, and critics panned it. As Abby Cavenaugh writes in Collider: "Unfortunately, XO, Kitty has none of the charm and humor that made us fall in love with Kitty in the first place. It tries, to the point of overkill — but the plotlines are so contrived and frankly ridiculous that this sequel doesn't even reach so-bad-it's-good territory."

The cleverest part is when you learn that Kitty is perhaps bisexual and has a crush on Yuri, a twist that recontextualizes that accidental limo ride as a meet-cute. It's the kind of twist that belongs in a better show, and if the writers were smart, they would have given it more room to breathe, but sadly we are always on to the next contrivance. This show would have benefited from significant editing. Half of these plotlines could have been axed, yet it's evident that quality was not the concern in production; popularity and profit were.

XO, Kitty was greenlit off the back of the success of the To All the Boys I've Loved Before trilogy, shortly after the final movie, To All the Boys: Always and Forever (2021), wrapped up the film series. It was a purely data-driven decision based on how successful that franchise had been, and it had nothing to do with the quality of the art.

Netflix has built its brand by making these types of decisions. They have been routinely praised and criticized for relying on analytics, which they only sparingly share with the public. It's impossible to know exactly what drives every renewal or greenlight decision, but it's quite clear that telling stories within existing IP has been a priority. From a spinoff of The Witcher (see The Witcher: Blood Origin) to multiple stories centering on the Spanish show Money Heist (see Money Heist: Korea — Joint Economic Area, Berlin, etc.), the company's most significant investments are content that tries to ride the coattails of preexisting media, especially media they already have the rights to produce or syndicate.

And yet Netflix is by no means the only company ordering, producing, and syndicating content based on this strategy. The Game of Thrones spinoff House of the Dragon was exclusively about riding its predecessor's success, and more may be on the way (see also A Knight of the Seven Kingdoms: The Hedge Knight). Paramount+'s and Disney+'s entire strategies have been to retread preexisting IP (i.e., Star Trek and the MCU/Star Wars, respectively). Once a modern company gets hold of an IP it believes is profitable, there is a natural incentive to keep iterating on the same kinds of stories, even when there isn't a good idea to anchor them: the need to advance the IP almost always precedes the need for a story.

And naturally, if that is your starting point, if all you care about is retelling the same stories over and over and over again, why would you value writing?

This technology can't do art

With the launch of GPT-4, it's safe to say that this has been the year of handwringing about AI. There have been a lot of takes about what this technology will mean for the future of work, life, and the nature of art. The debate has intensified with the 2023 Writers Guild of America strike, in which the union's writers withheld their labor to demand better pay and protections. A fight over the implementation of AI has been a central sticking point in those negotiations, and many are now arguing about the future of this technology.

Opinion is split between two poles. Those who are building this technology, or who at the very least perceive a financial benefit from it, see the adoption of AI as inevitable. Many executives in Hollywood and beyond have conveniently advanced the idea that AI will largely replace writers. As Nick Bilton argues in Vanity Fair:

“The more I see of these new machine-learning algorithms, though, the more I realize that the future is coming quickly. And in that future, a good number of people could lose their jobs — not to mention their grasp of what originated in the mind of a human or a machine. Soon, you’ll be asking yourself every time you read an article, Did a human write this, or did an algorithm? The answer is: You’ll never know.”

Yet many are quick to point out that the supposed ability to replace writers outright doesn't currently exist. "Where Are These AIs That Can Write TV Scripts, Exactly?" goes the title of an article by Paul Tassi in Forbes. "AI [came] up with something that, at first glance, reads plausibly, but on second glance, is shit," Black Mirror creator Charlie Brooker told Empire Magazine. As Tassi goes on to say in that Forbes article:

“One particular case that keeps coming up is the idea that AI is going to replace the creative field of writing. While in some cases, hyper-generic SEO bait on websites, that may be true, in fields like TV or film writing, the tech is not there. The tech is not even close to there, and everyone is acting like the extinction of writers in these fields is imminent, and that AI is some cudgel that can be wielded against creatives as they push for higher pay.

It’s a bluff.”

As someone who has been playing with predictive algorithms for a while now (Grammarly, InferKit, and now ChatGPT), I fall mainly into the latter camp. For this article, I tried very seriously to have ChatGPT produce a workable pilot script for XO, Kitty, not because I thought this was a benchmark of high art, but rather the opposite. XO, Kitty is a poorly written show, and if this technology cannot even regurgitate such repetitive content, what hope does it have of replacing all writers?

Although I eventually found success, it was not in the efficient way that men like Nick Bilton are advocating. After ten iterations, I learned that ChatGPT could not write a workable 25-minute script, not even an adequate draft. It can provide formulaic outlines (I'm talking bullet points). It can rattle off familiar tropes, but every prompt I used came back with narratives so unusable that editing them would have required me to effectively rewrite the entire thing from scratch.

It's important to note that this technology isn't AI in the science-fiction sense. Artificial general intelligence, the kind where entities like HAL 9000 can think and plan, is not what we have. Current AI cannot think. It is a very advanced predictive algorithm cribbing work from across the web without the ability to understand whether the string of words it's adding truly fits. That lack of context is critical to this discussion because it prevents AI from crafting anything more advanced than SEO copy. If something requires context to make a decision, this technology will struggle with it.

The way I finally produced a script with ChatGPT that was even half decent was a prolonged and painstaking process: having it generate a scene from an outline in which I specified a detailed set of requirements (who the characters are, where they are, what they want to accomplish, and how the scene should end), pausing to read the result, thinking about what should come next, and then providing a new scene outline with an equally detailed set of requirements. I was writing paragraphs of text to get a rough approximation of each scene I wanted to tell, and the output inevitably needed to be edited, trimmed down, and rewritten. Every scene was filled with fundamental mistakes, with a lot of crucial context failing to carry over from scene to scene.
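For the curious, here is roughly what that loop looks like when expressed as code against the OpenAI API rather than the ChatGPT web interface I actually used. This is a minimal sketch under stated assumptions, not my real process: the model name, the helper function, and the sample outlines are all illustrative.

import openai  # the 2023-era openai Python library

openai.api_key = "YOUR_API_KEY"  # assumption: you have an API key

def draft_scene(outline, story_so_far):
    """Ask the model for one scene, given a detailed outline plus a summary
    of everything that has already happened (context it otherwise drops)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write scenes for a teen rom-com pilot in screenplay format."},
            {"role": "user",
             "content": f"Story so far: {story_so_far}\n\n"
                        f"Write the next scene from this outline: {outline}"},
        ],
    )
    return response.choices[0].message.content

# Each outline spells out who the characters are, where they are, what they
# want, and how the scene ends; the human does the structural thinking.
scene_outlines = [
    "Kitty's bedroom, morning. Kitty texts Dae about her surprise trip. Ends with her leaving for the airport.",
    "Airport in Seoul. Kitty misses her ride and bumps into Yuri. Ends with Yuri offering her a lift.",
]

story_so_far = ""
for outline in scene_outlines:
    scene = draft_scene(outline, story_so_far)
    print(scene)                    # read it, wince, rewrite the dialogue by hand
    story_so_far += outline + " "   # carry context forward so later scenes don't contradict earlier ones

Even in this tidier form, notice who is doing the work: the human supplies the plot beat by beat, and the model only fills in words between the beats.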

Eventually, my prompts were getting so detailed that they called into question the fundamental premise that AI could bang out a script. That prompting work wasn't effortless. It required a lot of thinking and planning to do well. Even when a scene had no major structural mistakes, the AI-generated dialogue was too clunky to use. Hours in, I realized it might have been faster to write a draft on my own.

ChatGPT struggled to write even a bad episode of XO, Kitty, and only managed it with me holding its hand every step of the way. For complex tasks, current AI is very good at producing a lot of words, but if most of that work is utter nonsense, what's the point?

The fear of AI

As far as I can tell, the real fear around AI is not that this technology can replace human labor. It simply cannot do that. It's that those in power will use the introduction of this technology to devalue workers' labor so they can pay them less. Even in the most dystopian cases, writers will still need to do a lot of the work of stitching the jumble of words AI produces into something that is not merely an incomprehensible mess. It's just that this work will be less valued (see The Work of Art in the Age of AI).

We have seen companies use disruptive technologies to devalue labor in other industries before. Take AI translation. The translations modern AI produces are often far from perfect; mistakes abound and context gets lost. Still, many industries have stopped caring, using the technology as a pretext to pay translators less, even though those translators effectively have to rewrite vast portions of the output. As Max Deryagin, chair of the British Subtitlers' Association, noted in The Guardian about subtitle translation in TV and film:

“There is no lower limit [in pay]. It goes all the way to almost zero. It should be a golden moment. We have insane volumes of work.” [Instead, what he sees is widespread stress and burnout as subtitle translators try to make ends meet.]

This technological innovation has not led to gains for workers but the opposite, and the same thing will undoubtedly happen with writing. If the speed at which AI can produce words is used as a justification to reclassify existing writers as non-union prompt engineers or some other such nonsense, industry owners will be in a position to demand far more productivity for far less pay. Because, again, telling a good story isn't the point here. The point is generating content that scratches a particular itch that current algorithms say existing audiences want to watch and pay for. As long as the numbers on a screen tell companies like Netflix that audiences are tuning in, quality is irrelevant.

Imagine you are a writer. You log into an app and check the job listings. You notice a series description for a new show: "Draft for a show about a girl named Kitty. Must tie into the Netflix series To All the Boys I've Loved Before. Main character Kitty moves to Korea to find love. Ten episodes. The plot can be flexible but must be appropriate for the 11-to-17 female demographic."

That's all you get. You place a bid lower than your usual rate because you didn't get any leads yesterday. You get an automatic acceptance, provisional on completion within twelve hours. You start using the app's built-in AI, which costs about $20 a month. You have the AI give you an outline. It requires an hour or so of tweaking.

Once you are confident that the outline is coherent, you painstakingly work from scene to scene, making sure the AI iterates appropriately and rewriting dialogue and descriptions as you go. Ten hours pass. It's a mess, but it makes sense structurally, and you don't have any more time. You hit the submit button. Seconds pass. Fortunately, your story is approved, and you are paid a flat rate of $50.04. That draft is sent to another writer, and probably another one after that. Only the showrunner will be credited.

This is the future I see for AI-assisted writing: one where human labor is a necessary component of the assembly line but is valued no more than that of the people who stitch together our clothes or of workers in the many other blue-collar professions out there. Again, this is already the reality for many translators in the media field who have studied for years to hone their craft, only to find that their education no longer leads to reasonable rates.

This is not a new trend for writing either (stares at Medium). One of the main reasons we are seeing more of a reaction against automation now is the unique place TV writing has in our society. The industry is heavily unionized. It is also highly visible, with famous TV writers carrying a lot of social capital thanks to social media and the Internet, which allow them to interact with the American public directly.

In other words, automation is starting to affect rich people. We are having this conversation because of the unique privileges associated with this field, but it still addresses a real problem, one that asks what we, as a society, want from work and art more broadly.

Well, what do we want from art?

Writing sits in an interesting tension because, under capitalism, it is simultaneously a product and not one. As Ursula K. Le Guin said of literature: "Books aren't just commodities; the profit motive is often in conflict with the aims of art."

With AI, we are seeing the clash between these two interests come to a head in a powerful way. On one side are those who think art and work should not always be tied to profit.

On the other side are those who see the point of media not as making art but as making numbers on a graph go up. They are delighted by the prospect of AI because there is nothing they would love more than to automate everything. They want to crush all of humanity into numbers.

Is that what you want too?

Or, to put it another way, would you like everything on TV to be an even worse version of XO, Kitty? Hundreds of XO, Kittys perfectly tailored to your preferences, but in a way that is not good or even mediocre, merely okay. Bland sludge in which Kitty tells you a rough approximation of what you want to hear so you can shut off your brain and not think about the algorithm you will have to press buttons for in eight hours.

We are a society so keen to introduce technologies at breakneck speed, so comfortable with disruption (and so afraid of being called Luddites), that we do not resist things that seem terrible for fear of being called wrong.

Or, as ChatGPT would say:

“This fear of being labeled as luddites or resistant to progress has led us down a path where we often overlook the potential consequences of embracing new technologies without thorough evaluation. As a result, we find ourselves grappling with unforeseen challenges and ethical dilemmas that arise from our haste. It is crucial for us to strike a balance between innovation and thoughtful consideration, to question and critically assess the implications of each technological advancement. By cultivating a society that encourages constructive dialogue and values foresight over blind adoption, we can navigate the complexities of our rapidly evolving world with greater wisdom and ensure a more sustainable future for all.”

Hey, maybe the robot has a point.
