It Depends – Artificial Intelligence (AI) versus the Real World
Artificial Intelligence isn’t the new kid on the block. Understanding what AI really is and how it may actually be useful takes some digging.
The buzzword of the moment isn’t what you think it is. Artificial Intelligence isn’t the new kid on the block. (It was born in the 50s, the Boomer.) It is not as capable as some claim it to be (and, if history is any yardstick, likely will never be). It is more often than not, completely misunderstood, or assumed to be something it isn’t. It is real, though, and understanding what it really is and how it may actually be useful takes some digging.
What it really is and where it stands in the reality of the now can be discerned by those willing to do a little open-minded listening, and once understood, you can find a proper place for it in your workflow, or not. Will you be able to find it useful in your work or life? It depends.
What is AI? (No really, what is it?)
What has colloquially and collectively come to be called ‘AI,’ or more formally, Artificial Intelligence, is in reality a broad spectrum of loosely related programming schemas. Underneath this broad umbrella are the functional cousins: Machine Learning, Large Language Models, Recursive Relational Databases, Natural Language Processing, and a dozen or more similar but completely different programming covens that all do some, but not all, of what is broadly thought of as the demon AI.
The intended meaning behind the moniker is supposed to be emulating human thinking by a computer. But like all good (and bad) illusionist tricks, the appearances are always deceiving. For hundreds of years (look up ‘automata’ for a history lesson) humans have been creating machines that startlingly emulate what humans can do. Modern AI is just another of these “pay no attention to the man behind the curtain” phantoms.
For the most part, AI models are trained: shown a batch of samples of something, told what those samples all have in common, then exposed to ‘real world’ samples and tested on how accurately they determine whether the trained-on things appear in the new samples. A lot rides on the accuracy and suitability of the training methods and their execution. Train it badly and you could get hilarious (ask about the military tank-spotting AI story) or even highly dangerous results. And some of the ‘corrections’ AI companies have used to address poor training amount to adding more smoke and adjusting the mirrors.
Properly done, iteration through this process refines the ‘accuracy’ of the discovery until it is ‘good enough.’ Refinements of coding through the years have expanded many of these algorithms’ ability to generalize when given many, many examples in training. The idea is to extrapolate from the known into the new, searching for similar-enough patterns.
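The train-then-test loop described above can be sketched as a toy nearest-centroid classifier. Everything here is a hypothetical illustration of the general idea (the data, labels, and function names are mine, not any real AI system's):

```python
# Toy sketch of the train/evaluate loop: average labeled examples into one
# "pattern" (centroid) per label, then classify unseen samples by nearest match.

def train(samples):
    """'Training': average the labeled examples into one centroid per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """'Pattern matching': pick the label whose centroid is nearest."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Labeled training samples: (features, label) -- entirely made-up numbers
training = [([1.0, 1.0], "tank"), ([1.2, 0.9], "tank"),
            ([5.0, 5.0], "tree"), ([4.8, 5.2], "tree")]
model = train(training)

# 'Real world' samples the model never saw, used to measure accuracy
tests = [([1.1, 1.1], "tank"), ([5.1, 4.9], "tree")]
correct = sum(predict(model, f) == label for f, label in tests)
print(f"accuracy: {correct}/{len(tests)}")  # prints "accuracy: 2/2"
```

Note that the classifier matches patterns in the numbers it was fed, nothing more. Train it on unrepresentative samples (say, all the ‘tank’ photos taken on cloudy days) and the pattern it latches onto won’t be the one you intended, which is exactly the tank-spotting failure mode mentioned above.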
Pattern matching and extrapolation are the main lift here. Notice that in no place did I use the terms ‘understanding’ or ‘knowledge.’ The illusion that prompts are understood depends completely on the training and programming underlying the particular ‘AI’ being queried and its ability to compensate for known potential errors. It can appear miraculous… or not. I liken the usual results to what you’d get from a clueless intern doing research during a probationary period that will definitely not end in an offer of employment. But they’re getting better at pulling off the illusion all the time.
What it all comes down to is ‘AI’ is a tool. Like all other tools, when new and shiny, the user thinks they can use it to fix any problem. Eventually, reality catches up, and the tool is only applied to the problems that it can best address when wielded by a tool user who understands its limitations and actual utility.
What AI isn’t
What AI is not is equally important to realize. AI is not ‘thinking.’ It is just as capable of giving a ‘correct’ nonsense answer as one that happens to align with reality, as long as its algorithmic decision tree is followed properly. So, AI output should never be trusted as the end result without checking and verification by a human.
AI is not cheap. The heavy computational load of every query becomes a significant drain on the electrical grid when scaled. And as a corollary, AI is not a smart investment, since a new model could be introduced at any time that pulls the rug out from under the heavily invested front runners.
And the biggest ‘is not’: AI is not even good at many of the jobs it is currently asked to do. I look at the sample files from each new generative AI release, compare them to the actual prompts given, and I see significant gaps between what was asked for and what was delivered. Gaps that get lost in the flashiness of what they did get right. But a tool that only ever delivers 80% of what you want will never give you 100% satisfaction (unless you really had no idea what you were asking for in the first place, and if that’s you, what are you doing in our creative industry?). And each ‘improvement’ leaves a different 20% unaddressed. It seems they can’t ever get there from here.
Current (and future) Legal Standing and Ramifications
The reality of where AI creation stands, legally, is actually already quite resolved. The US Copyright Office created a two-part report clarifying the longstanding precedents and current laws that leave very little need for any future alterations or AI-specific legislation. They’ve further studied and produced a report on the economic implications of artificial intelligence to explain the state of things.
The TLDR version: no creative work produced by artificial intelligence can qualify for copyright protection, simply because an AI is not a human being, and only a human being’s work can qualify for copyright protection. Simple as that. And the implications should be obvious in industries, like our own, that hold most of their value within the intellectual property of their works.
Soon, any creator will realize that if you use AI to fully create your output, there is NO WAY to keep anyone from immediately stealing it and using it in their own work. Studios’ whole operating agenda is hoarding IP and charging for its access, but that just wouldn’t work with an AI-centric outflow.
Apparent recent “victory” isn’t much to celebrate
Even so, there are still attempts to get “AI-created work” recognized as copyrightable, even if applicants have to bend over backwards to get it. As reported in the fine art news world on January 30th of 2025, a VERY determined artist seemingly got his AI-sourced work recognized. But if you look very closely, the results were not as groundbreaking as they seem at a glance.
He was reduced to applying for a version of copyright that offers extremely limited protection, covering only the “selection, coordination, and arrangement of components” of a work, which is usually the realm of copyright for phone books. This category of copyright is reserved for very de minimis creator contributions to a whole work whose other elements are mostly in the public domain. It limits protection to the very specific choices made in arranging the work, not the look of the work or any of its other elements. The rest of the work remains in the public domain, and about the only way to infringe this type of copyright is with an exact copying of the whole work.
So, after being rejected on first application, he appealed by submitting a video showing how many alterations and decisions were made after the AI’s initial output. That provable extra human effort is what gets copyright protection, not the elements the AI provided; the rest of the work remains in the public domain. Small victory indeed. It’s only good for protecting against an exact copy of the whole work, like photocopying a page in a phone book, not the broad style of protections that those in our industry would be seeking.
Legal conclusions
Human effort is necessary. There is no financial gain to be had using AI to replace the human creative element. Studios will soon realize this and pivot, or suffer the loss of revenue. AI will likely still be used, but properly, as an enhancement tool for human-created endeavors. AI will be just another screwdriver in the toolbox available to the creative artists.
And, to address the other side of the current debate, AI is not stealing anyone’s work by training on available source material. Longstanding law already allows scraping the Internet’s images and content, for example. And the act of looking and emulating in order to learn is an even older common practice. Just consider: would you arrest an art student sitting in a museum, copying the works of the masters in a sketch pad to learn how they approached creating their works? That’s not illegal, though what a bad actor does with that training afterward could be.
Although there are often claims of ‘theft’ by AI, usually the thing being claimed stolen is not stealable. A ‘look alike’ artwork or ‘in the style of’ imagery has been an accepted part of the visual arts arena for centuries. Within our own industry, alarmist actors will argue that their right to how their performances are portrayed is theirs alone to determine. But for decades, they have already signed over those rights to producers so that the shows can be edited and manipulated (e.g. body and stunt doubles) into performances that the actors themselves never gave. Rigid control of alterations has long ago left the hands of the actors as part of the contracts they’ve been signing.
Don’t get me wrong. There are valid arguments against the use or abuse of AI in all forms of art that can be used to maintain a civil and legitimate discourse as to how the tool must be treated. But spurious vitriol at imaginary wrongs is wasting breath and muddying the waters of the truer debates.
Using AI for harm
Bad actors gonna act bad. Claiming a false attribution of a work (famous or not) is illegal. Defamation is wrong and criminal no matter how it is performed. Violations of rights of publicity, where they exist within state law, are actionable regardless of the technique employed. Using a person’s image or voice to espouse an idea that the source doesn’t agree with or consent to is ALREADY against the law in so many ways. Using AI is only the most recent way of acting badly. We have no need to invent new AI-centric laws to protect from those bad acts.
Unions’ Fears versus Likely Utility
Because of its apparent newness on the scene, there was a significant amount of discussion revolving around AI use in the most recent union and guild contract negotiations. Lots of unfounded doomsday fear-mongering and warnings were voiced as the topic was debated and addressed in the final language of the currently adopted contracts.
With discussions and established definitions of the new elements being implemented (like defining digital duplicates), the groundwork was laid, roughly, for allowing current work as well as surveying the path toward the future. Some of the decisions and understandings will likely turn out to be overkill or misdirected, as is always the case when our industry tries to adapt to new technology or changing world conditions. Once the hyperbole and rhetoric die down, there can be useful refinements to contracts that truly will help curb bad actors from using AI, or anything else, for bad acts.
For example, refinements should end up allowing AI to be used to adjust minor elements of shots to salvage otherwise good takes – e.g. a background actor looking into the camera (aka, spiking) during an otherwise good take. Fixing it with AI in post will allow the production to move to the next scene quickly and won’t affect the background actor’s reputation in any negative way (in truth, it may salvage their job). But in allowing such use, there will be a responsibility to ensure a complete replacement of the talent at issue isn’t uncompensated. So, if an actor’s digital duplicate is used to fill out a scene that could have been shot using the actor, but expediency or safety factors created the necessity, then the actor should be fully entitled to compensation for the work they would have done but was done by their “stand in.” In short, AI can be used to save money but never to shortchange individuals of their livelihood. A balance can be found and will be.
AI as a useful tool in our industry
There are plenty of use cases already where implementing AI in the workflow is a creative boon and replaces no one. By appropriately applying this new tool in assistance for rote tasks, or mundane but necessary production elements, AI could actually free up opportunities of more truly creative input from the human being helped. An assistant editor freed of the tasks of labeling, tagging, and grouping the transcription of takes would be able to more fully contribute to the needs of the editor and speed the whole process along, for example.
AI will never be the right tool for everything. But used where appropriate and with proper appreciation of its limitations, AI can find a place.
AI use in writing
Used the way we use spell check, AI shouldn’t prompt complaints that the result wasn’t exactly what the writer first penned. But replacing the writer’s bulk effort and natural creativity is cheating, just like hiring a ghostwriter to write your work and claiming it as your own. A happy balance can be found where the skill and craft of the writer are aided by the tools at hand, used in moderation, to help deliver the writer’s truly creative output.
AI in the future?
Its usefulness in specific tasks will be resolved and integrated into workflows. It will be another useful tool, used when the tool fits the task at hand. It will never be a fix-it-all answer.
I consider myself a realist when considering AI. When people hear my views, they may exclaim, “but it’ll get better!” It likely will get better at what it does. The key is to realize that “what it does” isn’t what the flashy marketing and high-hoping entrepreneurs are touting. Fluff and puffery are rampant in the sales pitches. Knowing AI’s history and knowing how it is currently built, I know what its potential may be. And I believe I know what it won’t be able to reach. You should, too.

Christopher Schiller is a NY transactional entertainment attorney who counts many independent filmmakers and writers among his diverse client base. He has an extensive personal history in production and screenwriting experience which benefits him in translating between “legalese” and the language of the creatives. The material he provides here is extremely general in application and therefore should never be taken as legal advice for a specific need. Always consult a knowledgeable attorney for your own legal issues. Because, legally speaking, it depends... always on the particular specifics in each case. Follow Chris on Twitter @chrisschiller or through his website.