I just watched a remarkable presentation by Brandon Sanderson on AI and the arts. I think it’s worth watching:
What It Means To Be Human | Art in the AI Era
I think Brandon considered his position well, and delivered the talk sincerely. I’m just not sure I agree with his thesis that the primary function and value of art is to change the creator.
I’m more of the mind that the primary function and value of art is to communicate to an audience, and elicit an emotional reaction. This is true whether you are making personally expressive art or commercial art.
I would merge the two ideas by saying that the creator is their own first audience.
I agree with Brandon that games can be art,[1] and I saw merit in some of his arguments about many of the reasons for disliking AI not being enough of a reason to be “anti” it… But, like @popcornflix, I don’t agree with his primary thesis. Art absolutely is about communication and eliciting a reaction. As @kewms implies… in some cases that intended reaction is for an audience of one; no less noble an aim.
Brandon is (I think) also wrong that it is within our ability to “beat” AI as a collective audience; he’s just shouting in the wind here.
Firstly, I don’t think he’s articulated a compelling enough case against AI in this talk. If this is all the argument he’s got, then expecting the masses to rise up and refuse art-like substances as an alternative to art is a bit unrealistic. I mean, the general population hasn’t even risen up to reject food-like substances as an alternative to food[2]… and this is food we’re talking about.
More importantly, though… We can’t win a “war” on AI any more than we can win a war on drugs. This is because the product being sold by AI isn’t a tool, it isn’t the outputs (whether you think they’re art or not), and it isn’t even time or efficiency (as in the John Henry example Brandon uses); the product being sold is a feeling. Use AI and you get to feel like a creator. And those of us who like creating things will know what an attractive and addictive feeling that is.
I was recently asked by one of my bosses to write a 2-page paper on a particular topic. Two people decided to be “helpful” and (despite knowing nothing about the topic) spent a day using AI to write the 2-pager and then sent it to me. Both AI written papers were garbage (poorly structured; awkward language; hopelessly incorrect content). My manually written paper took me less than 30 minutes start to finish.[3]
But… when you think about it, my paper actually took me 30 minutes plus 25 years of experience. Their papers just took them the day. And[4] they thought their papers were good enough[5]. And they were so proud of what they’d done. They felt like they’d researched and written a comprehensive paper.
That feeling is what AI sells, and that is why it’s inevitable.
I won’t ever[6] use AI for writing, because I can actually write. I also won’t use AI to take photographs because I can do that myself (I don’t even like using cameras with electronic viewfinders, using auto exposure modes or letting the camera choose the focus point).
But I can’t oil paint, and I can’t afford to pay someone to do an oil painting for me. I have used AI to make an oil painting of an idea I had for a book cover. Honestly, it made me feel good and looks much better than anything I could ever do.[7]
AI strikes me as very good at some things,[8] but I’m very conscious that my opinion of what constitutes a valid[9] use of AI depends primarily on my own experience and ability in a certain field.[10] This bothers me immensely, and until I resolve that I will continue to avoid it.
Limbo was definitely art (and a great game) Limbo (video game) - Wikipedia ↩︎
see Ultra-Processed People by Chris van Tulleken, or really just the contents of an average supermarket shelf and most likely the cupboards in your own kitchens ↩︎
I was probably finished before they’d even decided to start ↩︎
because they were comfortably atop Mount Stupid on the Dunning-Kruger curve ↩︎
they weren’t, and would have cost the organisation tens of millions if followed ↩︎
I don’t think?! I hope not anyway ↩︎
I won’t be using that cover — simply because it’s AI — and frankly the eventual book cover (in the unlikely event the book ever exists) will be worse because of it ↩︎
Things that spring to mind include: Manipulating large data sets. Natural language search. Content-aware fill in photoshop. ↩︎
as opposed to a highly offensive, immoral and ignorant one ↩︎
I’m definitely on the very same Mount Stupid I mentioned in an earlier footnote when it comes to oil painting ↩︎
Wow, this is a much longer than intended post, both in terms of words and time. Shoulda got AI to write it for me. ↩︎
I heartily agree, and I embrace Rick Rubin’s rule of thumb that “the audience comes last,” meaning that the artist’s taste is the highest priority, not pandering to what the artist thinks the audience wants.
I think that’s a good way to have a fulfilling career as an artist and certainly good advice. I’m not sure it makes it into my definition of art, though. I think it’s possible to make art you don’t yourself like.
Not every artist who is working to a commission is no longer producing art, surely? Does something stop being art because the artist no longer likes it themselves? Art, like beauty, is in the eye of the beholder.
Which now has me asking myself the question… is “art” objective? Can something be art to one person but not to someone else? (as opposed to merely art that they don’t like?)
EDITED TO ADD:
And does that mean AI can produce art if it doesn’t matter that the creator doesn’t itself believe it’s making art, only the audience? Oh, I keep changing my mind on this.
Yes! I take a broad view of art. I consider any human effort that results in an output communicating to others and eliciting an emotional response, even a purely aesthetic one, to be art. Note that this definition includes everything from painting to fight choreography to coding to cuisine. I believe it’s all art.
This is an important distinction. People who are anti-AI often insist that only the person holding the paintbrush makes an artistic contribution. There is a kind of meta-artistic practice that is impresarial in nature. George Lucas didn’t direct or write Empire Strikes Back, but would anyone dream of saying it wasn’t his artistic vision? This is similar to the artistic practice of using an AI to create art outside of your domain of expertise. Your taste and vision guide the artist-for-hire who executes your larger vision. It seems to me that there is no real difference between hiring an art department and using generative AI in that regard.
I have a friend who is a professional concept artist for big movies, and they are strongly against generative AI for concept art. Once they tried Suno for making music (an art for which they have no training), they were completely thrilled and enchanted that AI broke down the barriers between their creative vision and execution. This is exactly what @pigfender is talking about.
I understand this. I think the next step to acceptance is to recognize that everyone has a domain they wish to enter that is barred to them by ignorance, and for which AI can provide liberating relief. That in itself is a reason to embrace AI.
There is a huge amount of human potential trapped behind the walls of ignorance. People who can see the image, but aren’t trained painters. Folks who have an idea for software, but don’t know how to code. Generative AI has the potential to bring their visions into the world in a way that wouldn’t have been possible before.
I would hope that that would make it easier for you to embrace the benefits of AI.
I would be more sympathetic to this view if the stated goal of the AI companies weren’t to destroy the jobs of the people who do have those skills.
Asking AI to do a sketch for your own enjoyment is not the same as using AI to replace an entire animation department, but one leads to the other.
It’s the natural progression of technology in commercial art. Opposing it is a slippery slope; where do you stop? Should we have never embraced cinema because it killed vaudeville? Should cinema have stayed silent because of all the performers who had objectionable voices, à la Singin’ in the Rain?
IMHO, no-one has a divine right to an occupation. A person’s employment is directly tied to the value they create for their employer. The smartest people are always increasing their skill-set and always trying to provide more value to their employers. Or they strike out on their own.
Is it sad that a lot of animators are going to lose their jobs? Yes. Is it cause for celebration that many animators will learn to use AI to become 1-person animation studios? Also Yes.
If replacement voices were generated by consuming more energy than some cities, maybe?
No one has a right to an occupation.
Nor does anyone have a right to destroy the commons for their own use. Sanderson argues that the poor ethics of AI companies aren’t a sufficient argument against them, but I disagree. I think their ethics are inextricable from the rest of their proposed project.
The problem is that ethics are subjective. What is considered ethical by one person may not be considered ethical by another. This is why governing bodies sometimes institute a “code of ethics,” so that members share a standard of ethics that a vibrant, cosmopolitan society otherwise lacks.
This is where laws have utility. Laws allow the majority to codify their ethics and impose them on the minority who do not share them. Because there are many ethical standards, laws necessarily are more compromised and generalized than personal ethics. They are the best solution we have to reconcile many conflicting ethical standards.
Big Tech takes the position that anything that isn’t legally prohibited is permissible. To oppose Big Tech’s use of resources for AI, one must muster the support of a political majority, and pass a law that would prevent them from using so many resources. (Over their political and legal objections, one would assume.)
I’d rather spend my time writing cool stuff. (And squabbling on Discourse.) Let the political crusaders fight Google in the corridors of power.
Most political crusaders will recognize that the corridors of power are too corrupted by Big Tech’s bottomless campaign donations for any real headway to be made there. At least here in the U.S. (Thanks to the Citizens United ruling.)
Instead, they’ll be taking the battle to the ballot box and, ironically, to social media. Shame the Big Tech overlords long enough and loudly enough, and we might actually be able to force them to course correct.
Or, perhaps a true reforming crusader will be voted in and tackle the problem with some monopoly busting. Every few generations, the robber barons get out of hand and need to be sharply reined in. I’d say we’re about ready for another Taft.
Edited to add:
I personally would much rather spend my time writing and creating than fighting political battles as well. But I also feel I owe it to my grand-nieces, grand-nephews, and their peers to do my part to leave things better than when I got here. So far, my Gen-X peers and I haven’t done a very good job of accomplishing that.
Most creators thought that copyright laws already protected them against the kind of wholesale theft committed by the AI companies.
It would be more accurate to say that Big Tech’s philosophy (not just in this arena) is “stop me if you can.” By the time the legal system catches up to what they’re actually doing, the harm has already been done.
This is where we disagree. I think that the next step is to recognise my hypocrisy and step away from the computer, not to go “oh well then” and embrace the suck. GenAI has the ability to make people who don’t know better think they’ve made something and feel like a creator, but it’s not real happiness, just a quick hit of heroin. Users might end up finding that their creator-high starts waning over time from its artificiality, and what skills they did have in that area atrophy at an alarming rate.
My ignorance in the oil painting example is that I’m not only not interested enough to learn to paint, I’m also not good enough to see that my AI oil painting is actually garbage. And my experience of seeing the outputs of GenAI in every single area where I have some knowledge, skill and/or experience is that it is always garbage.
I would love to agree with you, but that view is based on a misunderstanding. Intellectual Property law is not there to protect creators. It never has been. Laws of copyright, patents, etc are there to maximise the return for people who want to copy things.
They are designed to be the absolute minimum protection necessary to avoid creators deciding it isn’t worth innovating, so that said innovations can then be copied as soon as possible by the market for maximum economic benefit.
Intellectual Property is not the same as other property. Build a hotel or a company and you can keep it forever. Invented a way to heat water more efficiently? We’d like to copy that as soon as possible, thank you. Come up with a new song, or maybe even a new genre of music? We’d like to “be inspired by that” through other works as soon as possible, thank you.
Big Tech are the ones that are showing they understand what global intellectual property laws are designed to do. The legal system has no catching up to do, this is the system working exactly as intended.
(Which annoys the hell out of me)
And that officially breaches my agreed word-limit for forum posting in 2026!
See ya!
How interesting. I used to teach intellectual property law. This is a pretty startling assertion. If I write a novel, which is my intellectual property and subject to copyright protections, how does this “maximize the return for people who want to copy things”? (I assume you aren’t referring to the more recent issue of AI companies scraping copyrighted material without the express permission of the authors. But if you do, I might agree with you.)
You are more of an optimist than I am in that regard. Shame can cause some performative posturing, perhaps a firing or two, but turning the battleship on a multi-billion-dollar technology initiative? I think you’ll find Big Tech remarkably shame-resistant.
If you believe the Corridors of Power are corrupt, how can you believe that the election cycle is less so? Even if we get a reforming crusader, they won’t be allowed to rule unless they stick to the lane they are given by the Powers That Be. For contrast, see JFK and RFK.
Most artists don’t understand copyright law at all.
First, you cannot “steal” intellectual property, because “theft” requires that the rightful owner be deprived of the property. Steal my car, and I no longer have it. Copy a manuscript and use it without permission, and the rightsholder still possesses the manuscript and can publish it. The correct term is “infringement.” If you use someone’s novel without their permission and sell copies, you are guilty of infringement, not theft.
The marketers and spin doctors who work for Big IP have foisted “theft language” onto infringement because it ties into cultural taboos against being a thief, and invites colorful similes like “movie pirates.”
Also, in order for theft to occur, there must be property to be improperly taken from the owner. You can’t get the police to arrest someone for “stealing your air” by standing on the public sidewalk in front of your home and breathing. That’s because the air is not property. Nobody owns it. Since it’s not property, it can’t be stolen.
“Style” is not property. Nor are ideas. Neither is protected by copyright (USA). Copyright narrowly protects the fixed expression of ideas. It’s incredibly hard to win a copyright infringement suit, because unless someone is duplicating your protected work exactly and selling what are essentially counterfeits, you’re in for a fight. Most infringement cases are settled for this reason.
At worst, AI companies are guilty of “use without permission,” but there’s already been a strong ruling that training AI on commercially available copyrighted works is legal under Fair Use because it is transformative. The same ruling held that downloading the books without paying for them amounted to wholesale infringement, and the company will have to compensate the authors for those downloads.
AI companies are no more “stealing” from artists than art students are when they practice by copying their favorite paintings. It’s training, and it’s transformative. The AIs don’t keep copies of the work in their memory; they keep complex and detailed measurements and correlations.
Copyright law creates no expectation of protection for ideas and artistic styles.