The Work of Art in the Age of AI

This article was originally published in The Washington Socialist.


“Yes, AI art can be sold,” advises one Medium user. “My second thought, of course, was, how can I monetize this stuff?” begins another writer.

These writers and creators see how easily they can turn out graphics and writing with new “free” tools such as Midjourney and ChatGPT. Some see easy money: “Don’t get me wrong,” one writer continues, “getting good results is not as easy as it seems, but it definitely takes way less effort than becoming a real artist.”

These excerpts highlight something fundamental about the new image and writing generation software that has been causing a stir among professional and casual artists alike. Like everything else in our modern economy, AI technology is framed as an innovative disruption. Yet even a cursory look reveals it as just another capitalist scheme to deskill workers and scam the public.


Modern artificial intelligence is different from the pop-cultural depiction of AI (think Skynet or HAL). AI in reality is a highly specialized algorithm trained on massive data sets to accomplish narrowly defined tasks. In most cases, AI programs are fed data with the goal of “training” the software to produce a specific response or action when it recognizes familiar inputs.
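
To make “training” concrete, here is a minimal sketch in Python: a toy perceptron, not any vendor’s actual system, whose numeric weights are repeatedly nudged until its outputs match human-supplied labels.

```python
# A toy "AI": a perceptron that learns to reproduce human-supplied labels.
# Each example pairs some numeric features with a label a human assigned.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# "Training" is repeated correction: whenever the program's guess disagrees
# with the human-provided label, nudge the weights toward that label.
for _ in range(100):
    for features, label in data:
        error = label - predict(features)
        for i in range(len(weights)):
            weights[i] += learning_rate * error * features[i]
        bias += learning_rate * error

print([predict(f) for f, _ in data])  # -> [1, 1, 0, 0], matching the labels
```

Notice that nothing here is “intelligent” in the science-fiction sense; the program is only a pattern-matcher, and the labels it learns to match had to come from somewhere.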

Right away, we bump into our first problem: Where do these data sets come from? It’s easy to imagine some vast repository of information summoned from the Internet, but data doesn’t emerge from the ether. It has to come from somewhere, and in our human labor-oriented society, that data comes from people. Often, it comes from all of us — it comes from you.

You may be familiar with CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the challenges across the web that require basic text- or audio-based tasks to ensure “you are a human.” A common example is Google’s reCAPTCHA, which usually takes the form of an image grid asking users to identify specific objects, or a blurred-out word to transcribe.

Although CAPTCHAs serve the purpose of protecting websites against bots, many also serve the dual purpose of fine-tuning data sets for AI. CAPTCHAs have been used for everything from helping AI better identify images to flagging road hazards. This technology now contributes to fine-tuning autogenerated content as well, with many users noting that AI-generated art has popped up in their tests.
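
The mechanics are simple enough to sketch. The following is a hypothetical Python pipeline (invented data, not Google’s actual implementation) showing how answers collected from CAPTCHAs could be turned into free training labels by keeping the consensus answer for each image:

```python
# A hypothetical sketch of how CAPTCHA answers could become training labels.
from collections import Counter

# Invented log of (image_id, user_answer) pairs collected from challenges.
captcha_answers = [
    ("img_001", "traffic light"), ("img_001", "traffic light"),
    ("img_001", "street lamp"),
    ("img_002", "crosswalk"), ("img_002", "crosswalk"),
]

def build_labels(answers, min_votes=2):
    """Keep the consensus answer per image as a free 'ground truth' label."""
    by_image = {}
    for image_id, answer in answers:
        by_image.setdefault(image_id, []).append(answer)
    labels = {}
    for image_id, votes in by_image.items():
        answer, count = Counter(votes).most_common(1)[0]
        if count >= min_votes:  # require agreement before trusting the label
            labels[image_id] = answer
    return labels

print(build_labels(captcha_answers))
# -> {'img_001': 'traffic light', 'img_002': 'crosswalk'}
```

Every “security check” you pass is, in a scheme like this, a tiny unit of unpaid labeling work.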

Even if you do not use tools like Midjourney and ChatGPT, you’ve likely contributed to the development of these tools if you’ve navigated the web anytime in the last two decades. Were you paid for doing that labor? Did you even know what that labor was for? Or did you assume it was just some security feature without thinking twice about it?

This, of course, assumes that these data sets come only from such small deceptions. The images programs like Midjourney train on often include copyrighted material. Vlogger Hank Green, for example, asked Midjourney to imagine an “Afghani woman with green eyes,” and it produced an image strikingly similar to National Geographic’s famous 1985 cover. In being trained on that image, it has, in essence, plagiarized it.

On the left is the image generated by Midjourney when asked for an “Afghani woman with green eyes”; on the right is the original National Geographic image.

We can see from this example that these programs have a bias toward existing content. One of the bigger misconceptions about the Internet is that all human knowledge is on it; people often joke that we carry the sum of all human information in our hands, but that’s not true. A lot of information is guarded behind paywalls or kept private altogether, and even more has never been digitized. It takes journalists and data scientists a lot of time to track down and assemble these data sets and to create new ones.

AIs, which are aggregators of existing data, do not do this work (and might never do this work). The information they iterate on requires human input to expand what they can do, and pretending otherwise leads to “copies” like the National Geographic example. Even when they create seemingly “new” work, it is the result of previous human effort, effort that almost certainly went uncompensated and unacknowledged.

The content produced by this software is also easy for malicious actors to exploit. AI content “creators” have already been observed plagiarizing work and spreading misinformation. As Gary Marcus writes in Scientific American about the potential for these tools to amplify misinformation: “Because such systems contain literally no mechanisms for checking the truth of what they say, they can easily be automated to generate misinformation at [an] unprecedented scale.”

Marcus then describes how a researcher named Shawn Oakley used ChatGPT to fabricate studies, including vaccine disinformation in which it falsely claimed that a study published in the Journal of the American Medical Association found “that the COVID-19 vaccine is only effective in about 2 out of 100 people.” This information was made up. These tools replicate the style of authoritative information without differentiating between what is and is not valid. For that, an analysis always needs to be grounded in some set of ideological principles or a commitment to some standard perspective. You can’t put all of human thought into a blender and hope it gets you to the “correct” answer.

There is also the troubling issue of AI perpetuating systemic biases. The AI avatar-generating app Lensa has recently been called out by many users for producing racist and sexist imagery.

Writer Rebecca A. Stevens, a Black woman, described how she purchased a Lensa pack for her family that produced some cringeworthy results. While her white husband’s pictures turned out fine, hers were whitened significantly. As Grant Fergusson told WIRED of AI-generated artwork in general: “The Internet is filled with a lot of images that will push AI image generators toward topics that might not be the most comfortable, whether it’s sexually explicit images or images that might shift people’s AI portraits toward racial caricatures.”

Lensa image generated by Rebecca Stevens, a Black woman.

People often assume technology is value-neutral, but nothing is value-neutral. Humans have opinions and perspectives, and they are baked into everything we do. The biases of programmers become reflected in the code and algorithms they develop to run their applications. And even where we assume a programmer or engineer is free of these biases, the patterns latent in the datasets that train these algorithms can reproduce them. Avatar generators like Lensa would not whiten users’ images if their programs did not reflect the biases of either their creators or the datasets they were trained on.
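
A deliberately tiny sketch can show how this happens without any programmer intending it. In the hypothetical Python example below (invented numbers, not Lensa’s actual data or method), a naive “enhancer” that pulls portraits toward the average of its training set will lighten dark-skinned inputs simply because the training set skews light:

```python
# A toy illustration of dataset skew becoming model bias (invented numbers).
# Hypothetical skin-tone values on a 0 (dark) to 1 (light) scale.
training_set = [0.9, 0.85, 0.95, 0.8, 0.9, 0.3]  # skewed toward light tones

# The "learned ideal" is nothing but the average of the training data.
learned_ideal = sum(training_set) / len(training_set)  # ~0.78

def enhance(portrait_tone, strength=0.5):
    """Pull the input toward the learned 'ideal', as a naive enhancer might."""
    return portrait_tone + strength * (learned_ideal - portrait_tone)

print(enhance(0.3))  # a dark-skinned input is pushed lighter: ~0.54
```

No line of this code mentions race, yet the output is racially skewed; the bias lives in what the data over- and under-represents.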

To summarize: the developers of AI applications like Midjourney and Lensa are taking labor they did not adequately compensate people for, and may not have had permission to use, to train their systems. These products are then commodified into privately owned services that are deeply vulnerable to long-standing systemic biases and easily exploited for malicious intent. The resulting externalities will not be borne by the owners of these programs; they will be borne by everyone else.


What is the end goal for tools like Midjourney? Though many are currently available for free, they will not remain so.

In the short term, fees are waived so companies can test their services and establish brand recognition. Eventually, these applications will be locked behind a paywall and transformed into a service sold to businesses looking to cut the labor costs associated with content production. Once this project is complete, the livelihoods of all types of visual artists will be put at risk.

The primary customer is not immediately obvious, but big money will be made selling this service to content aggregation firms: companies and individuals who own content-hosting platforms, websites, or other media enterprises, and who would rather pay an algorithm to produce cheap images fast than pay a skilled artist a premium. Content aggregators would love to cut out the cost of labor entirely and replace it with an application.

We have already seen this transition happen elsewhere. Google Translate, for instance, is far from perfect, but it has become reliable enough that many use it as a stand-in for professional translation services. The software hasn’t augmented the labor of skilled and trained translators; it has lowered the demand for them. Now, many translation jobs have turned into low-paying gig work. As translator Katrina Leonoudakis lamented to The Guardian about subtitle translations: “Knowing that these multibillion-dollar companies refuse to pay a few more dollars to an experienced professional, and instead opt for the lowest bidder with mediocre quality, only speaks to their greed and disrespect not only for the craft of translation but the art created by the film-makers they employ.”

It’s important to emphasize here that the quality of the service provided is secondary. There have been numerous instances of vendors putting out the minimum viable translation, because producing a readable translation isn’t the point: many sellers are happy to put in minimal effort to spin a profit. This technology has considerably lowered that bar while offering no assurance that its products will be reliable or accurate. As a result, skilled workers face more precarious employment while the quality of goods and services on the market deteriorates.

Other industries have already been affected by this sort of “disruption.” AI-assisted calling, booking, customer service, and logistics services have dampened wages and employment in these (typically stable) industries, and nearly all modern tech companies have replicated this predictable cycle of disruption and extraction.

For example, Uber promised that its app would provide greater freedom and flexibility for its drivers and customers. But its service was hardly an innovation: the company was only sustainable because it carved out favorable regulations from local governments and marshaled enough investment capital to subsidize the cost of its rides early on. Both factors drove its (mostly unionized) competitors out of the market.

As a result, life as a driver has become more precarious. Reporting has shown that Uber drivers make significantly less than the typical taxi driver, and Uber has been known to publish inflated figures for its drivers’ earnings in order to lure workers onto the platform. The public also bears the cost of this disruption: the price of ride-sharing services has risen now that the competition has been pushed out of the market.

When we look at what similar technologies have done in the hands of the few, the pattern is clear: they have lowered the cost of our labor. People still work as translators, tailors, writers, and painters. It’s just that labor-saving technology, due to the arrangement of its ownership, has been exploited to lower wages and make the pursuit of honest work in these fields harder and more tenuous.

Once AI-powered image generation software becomes widespread, the working artist or designer risks a similar fate. Workers will have to compete with software developed from the past work produced by themselves and their peers. For employment, the worker will have to settle for the sort of alienating tasks the machine cannot perform on its own (editing, prompting, drafting, compiling) at lower and more sporadic wages.


Is this bleak future inevitable? Some argue that these sorts of disruptions, although hard in the short term, will eventually raise our collective standard of living. Optimists allege that these technological advancements, despite the awful inputs that powered their creation, will eventually produce a net good.

In fact, these new technologies will only contribute to increasing precarity. So long as the profits derived from the time and labor saved by new technologies are owned by a narrow few, paths to an honest living will only become narrower for the rest of us. This is not an arrangement that will deliver emancipation or improvement.

Technological progress whose gains are not equally shared will never liberate us from precarity. Historically, capitalist predation has only been beaten through class warfare facilitated by an organized labor movement. The small (if increasingly deteriorating) labor benefits and protections much of the working class enjoys today, such as the 40-hour work week, health and safety laws, and healthcare benefits, are not natural allowances provided by technological advancement. They were spoils won by workers through organized struggle against capitalists.

Only a collective response to these new forms of capitalist predation will address the externalities this technology produces. Artists and creative workers who want to stop this race to the bottom will need to band together to protect one another from the theft of their art. Political organizations and lawmakers will need to consider what protections, restrictions, or requirements are available to prevent the abuse and exploitation of this new form of technology.

Abating the devastation caused by the commercialization of these technologies, if it is possible at all, will be a strenuous effort. But the alternative will be excruciating: the reduction of creative labor to another precarious gig on an alienating assembly line.
