
Chances are, you have seen them a lot lately.
AI just did something that would make Hayao Miyazaki roll his eyes harder than his son’s films ever did: it recreated the magic of Studio Ghibli’s animation style—without a single human hand involved.
Over the past month, AI-generated images in the iconic Ghibli aesthetic have flooded social media, racking up millions of views and shares. Some posts boasting “AI Ghibli” art on X and Instagram have hit over 100,000 likes, with users marveling at how eerily close the machine-made visuals are to the real thing.
In fact, the demand was so great that the CEO of the company behind the tool ended up pleading with people to stop using his own product.
Artists are calling it theft. Fans are calling it soulless. And Miyazaki? The legendary filmmaker once called AI-generated art “an insult to life itself,” so although he has not made any statements, you can imagine how he would feel about this.
Studio Ghibli has built its legacy on painstakingly detailed, hand-drawn animation—a single film takes years and up to 100,000 frames drawn by actual humans.
AI, on the other hand, just scraped all that hard work off the internet and spat out its own version in seconds.
And now we’re in a moral crisis.
This isn’t just about Ghibli; it’s about the future of creativity as a whole.
If AI can absorb and replicate decades of artistic evolution in an instant, it raises a question that affects everyone: who owns creativity? If an algorithm can profit from the collective work of generations, what does that mean for human innovation in any field?
The AI vs. human art debate isn’t just about artists—it’s about the value of human originality. As AI becomes more advanced and mimicry more seamless, this fight extends far beyond animation.
Because once companies can exploit one loophole, it becomes economically irrational not to exploit all the others. And that race is not something that necessarily benefits humanity as we know it.
Right now, people are fighting over whether this is a crime against art or the future of creativity, and many of them don’t even know what AI does.
So let’s break it down.
Imagine you’re an artist. You’ve spent years developing a unique style—let’s say you draw dreamy, hand-painted landscapes that look straight out of a Studio Ghibli movie.
One day, an AI model trained on millions of images, including ones that look a lot like yours, starts spitting out pictures in that same dreamy, hand-painted style.
How did it do that?
Generative AI isn’t magic. It’s a machine-learning model designed to create new stuff—images, text, music—based on what it’s been trained on.
The chart above breaks down how a Generative Adversarial Network (GAN) works. It starts with a random input vector, which the generator model uses to create a fake example.
That generated example, along with a real one, is sent to the discriminator model, which tries to figure out if it’s real or fake. The discriminator then makes a binary classification (real or fake), and both models learn from the process—the generator gets better at faking, and the discriminator gets better at spotting fakes.
Over time, this back-and-forth makes the generator’s outputs more and more realistic. Think about it like two athletes always trying to outrun each other.
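If you prefer code to diagrams, here is a minimal sketch of that loop in PyTorch. The “real” data is just a one-dimensional Gaussian so the example runs with no dataset, and every layer size and learning rate is an illustrative choice of mine, not a production setting:

```python
# A minimal GAN training loop in PyTorch, mirroring the chart: noise goes
# into the generator, the discriminator classifies real vs. fake, and both
# improve against each other. The "real" data is a 1-D Gaussian so the
# example runs with no dataset; all sizes and rates are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0    # real examples: N(3, 0.5)
    fake = generator(torch.randn(64, 8))     # fake examples from noise

    # Discriminator: binary classification, real -> 1, fake -> 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: get better at faking, i.e. make fakes classified as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator's output distribution should drift toward the real N(3, 0.5).
print(generator(torch.randn(1000, 8)).mean().item())
```

Run it and the final printed mean drifts toward 3.0: the generator never saw a single “real” sample directly, only the discriminator’s verdicts.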
This is the same basic idea behind models like Stable Diffusion, MidJourney, and DALL·E. These AI tools don’t just copy and store images like a giant folder of stolen artwork. Instead, they function as hyper-advanced pattern-recognition machines, learning from millions (or even billions) of images.
So when you ask an AI to generate “my portrait but Studio Ghibli style,” it doesn’t grab an old Ghibli frame and slap it onto your screen. It builds something “new”, using what it has statistically learned about what makes a painting look “Ghibli-like.” That’s how it can mimic a style without outright copying any one piece of art.
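In practice, that prompt-to-image step looks something like the sketch below, using the open-source diffusers library. The model ID and prompt are my own illustrative picks (checkpoints move around), but the shape of the call is the point: you pass text, and the pipeline samples an image from learned statistical patterns rather than retrieving any stored frame:

```python
# A hedged sketch of text-to-image generation with Hugging Face's
# `diffusers` library. The model ID below is an illustrative choice;
# swap in whichever checkpoint you actually have access to.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # assumed/illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The prompt steers the model's learned statistics; nothing is "looked up".
image = pipe("a portrait in a dreamy, hand-painted anime style").images[0]
image.save("portrait.png")
```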
At least, that’s what its defenders say. But before we get into the ethics, let’s talk about how this AI “learning” process actually works.
At first glance, both AI and human artists learn by “observing” existing art. That’s why defenders of AI-generated art will often say, “Well, human artists copy styles too. How is AI any different?”
So here’s the difference:
A human artist takes inspiration but then applies intent, judgment, and personal experience to create something new. They might study Hayao Miyazaki’s animation techniques, but they choose which elements to keep, which to change, and which to blend with their own artistic style.
AI, on the other hand, doesn’t choose anything. It has no judgment, no intent, no creativity. It simply remixes what it has been fed, based on probability. It generates images by stitching together mathematical predictions—not emotions, not ideas, not personal vision.
Which leads us to the real question: is this theft?
This is the heart of the controversy. Answer this question and you resolve the entire debate.
Artists argue that AI models trained on copyrighted art—without permission—are engaging in high-speed digital plagiarism.
Even if the AI doesn’t copy a specific frame from Spirited Away, it wouldn’t be able to generate Ghibli-style art at all if it hadn’t studied thousands of real Ghibli images first. That, to them, is still stealing.
On the other side, AI supporters argue that all artists learn by observing other artists. No one creates in a vacuum.
If a human can study Ghibli and develop a similar style, why can’t AI do the same?
Is it really theft if the AI isn’t copying any single piece but just absorbing and reinterpreting patterns—just like human artists do?
And this is where people start yelling at each other on the internet.
If AI training data is built on human-created artwork, why weren’t artists ever compensated? The answer comes down to three brutal realities of the AI industry: cost, competition, and legal loopholes.
Let me make this super clear:
Paying for data is never an option.
AI needs an insane amount of data to work well. We’re talking billions of images. If AI companies had to license every single piece of artwork they trained on, they’d go bankrupt before ever releasing one product.
Let’s do some rough math: if an AI model were trained on even 100 million pieces of artwork and had to pay an average of $5 per image, a rate well below a typical commission, that’s $500 million upfront in cold hard cash just for licensing.
In reality, Stable Diffusion, for instance, utilized the LAION-5B dataset, which contains over 5 billion image-text pairs.[1]
Midjourney's training data reportedly includes a list of approximately 16,000 artists whose works were used to develop its AI art-generating tools.[2]
So we’re talking about potentially billions of dollars in licensing costs alone, in an industry that already burns through billions.
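For the skeptical, the back-of-the-envelope math is easy to check. The $5-per-image rate is this article’s illustrative figure, not a market quote:

```python
# Back-of-the-envelope licensing costs at the assumed $5/image flat rate.
PRICE_PER_IMAGE = 5  # USD, illustrative

scenarios = {
    "hypothetical 100M-image model": 100_000_000,
    "LAION-5B-scale model": 5_000_000_000,
}
for name, n_images in scenarios.items():
    print(f"{name}: ${n_images * PRICE_PER_IMAGE:,}")
# hypothetical 100M-image model: $500,000,000
# LAION-5B-scale model: $25,000,000,000
```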
So, of course, none of it was ever paid for.
For companies like OpenAI, Stability AI, and MidJourney, paying wasn’t even a question—it was never financially viable to pay artists at scale. So they didn’t. And it’s a thousand times more cost-effective to fight the legal battles, which they have a high chance of winning, than to pay.
And that’s assuming artists even agreed to sell their work, which many wouldn’t.
As of early 2025, there are about 67,200 generative AI companies. That’s a lot. If you put 67,200 people on a street and made them fight, historians would immediately name it “The Great Street Battle of 2025” and spend decades analyzing what went wrong. So yeah, the competition is a bit intense.
So, even if one AI company decided to license artwork ethically, it would instantly fall behind competitors who took the free route.
Let’s ignore the cost for a moment and say Company A licenses its training data and only has permission to access 10 million images. Meanwhile, Company B scrapes the entire internet and trains on a billion images. Take a wild guess which one produces better results.
In an AI market where innovation cycles are measured in months, ethical licensing is a competitive disadvantage. Investors expect fast progress, and companies that take the slow, ethical approach risk getting crushed by those who don’t.
So they go with the optimal path: scrape now, deal with lawsuits later.
AI companies took the path of least resistance. Rather than spend years negotiating licensing deals, they scraped the data, built their models, and left the legal fallout for another day.
And now? That’s exactly what’s happening. Lawsuits are piling up from artists, stock photo companies, and even newspapers accusing AI companies of using copyrighted material without permission.
Some courts have ruled against AI firms, but the legal system moves slowly—far more slowly than the speed at which AI is evolving.
So the reality is that these companies calculated the risk and decided it was worth it. The damage is already done, and now the legal battles will shape the future of AI-generated content.
Beyond cost and competition, several additional legal factors contributed to why artists weren’t compensated for AI training data.
One major reason is the perception of data as a public good. Many AI companies operate under the assumption that anything publicly accessible on the internet is fair game for scraping, similar to how Google indexes web pages without compensating content creators.
While copyright laws technically protect artwork, enforcement is weak, especially when AI models transform data rather than directly reproducing it. This creates a legal gray area where companies can claim they are merely “learning” from the data rather than copying it.
AI companies also take advantage of the legal ambiguity surrounding transformative use.[3] They argue that their models don’t copy or store exact replicas of artworks but instead generate entirely new creations based on learned patterns.
This defense, often linked to fair use laws, has been used by tech companies in other fields to justify large-scale data scraping. Since AI-generated content doesn’t always resemble the original works, proving infringement in court becomes difficult and time-consuming.
Another factor is the lack of collective bargaining power among visual artists. Unlike musicians, who have organizations like ASCAP to protect their rights, or stock photographers who license work through platforms like Shutterstock, independent artists don’t have a unified system to negotiate fair compensation.
This made it easier for AI companies to exploit their work without facing significant industry-wide pushback.
Finally, there is the issue of first-mover advantage. The AI industry moved faster than legal frameworks could catch up, following the classic Silicon Valley approach of “move fast and break things.”
By the time lawsuits and regulations began taking shape, AI-generated content had already flooded the market, making it nearly impossible to retroactively compensate artists.
As mentioned above, many companies calculated that any legal consequences would be manageable compared to the potential profits of building advanced AI models early on.
In the end, AI companies saw an economic opportunity, exploited legal and structural gaps, and prioritized rapid growth over fairness—betting that any legal consequences would come too late to stop them.
It can be confusing to find your stance in this grey area. On one hand, your favourite artists are criticizing AI companies for scraping their work. On the other, you may not be able to say exactly what is wrong with that scraping. So how do you approach the problem?
To better understand these issues, let me introduce you to five of my philosophical power rangers.
Together, these frameworks help illustrate why the current system of AI-generated art is ethically problematic and unsustainable.
The rise of AI-generated art presents a profound ethical dilemma:
who controls creativity,
and who benefits from it?
For centuries, artists have earned a living through their skills, developing unique styles, and contributing to cultural progress. But with the advent of AI models trained on vast datasets of human-made art—often without consent—the balance of power has shifted.
AI companies claim their technology democratizes creativity, making artistic production faster, cheaper, and more accessible. But at what cost? Artists find themselves in a paradox where their work, once a means of personal expression and economic survival, has been repurposed to fuel an industry that excludes them from its profits.
Their styles are mimicked, their creative choices are reduced to algorithmic patterns, and their labor is absorbed into training data without permission or compensation.
This brings us to John Rawls’ Theory of Justice,[4] one of the most influential philosophical frameworks on fairness. Rawls argues that just societies are built by designing rules from behind a "veil of ignorance"—a hypothetical scenario where no one knows what position they will hold in society.
Would you agree to a system where your creative labor could be taken without consent if you didn’t know whether you’d be the artist or the AI developer profiting from it?
The answer is clear: no rational person would accept a system that strips them of their bargaining power. And yet, this is exactly the world AI art corporations are creating—one where artists have no leverage, no legal protections, and no ability to resist mass data extraction.
From a Rawlsian perspective, a just system would look very different. It would ensure that artists consent to how their work is used, that they share in the value it generates, and that they keep real bargaining power against the companies training on it.
Instead, AI companies exploit an existing power imbalance, using artists’ work because they can, not because they should. They assume that individual creators lack the resources to challenge them, and so they push forward with a model that benefits only those at the top while leaving artists economically and creatively dispossessed.
Rawls’ theory reminds us that fairness isn’t about what benefits the most powerful—it’s about ensuring no group is disproportionately disadvantaged. A world where AI replaces human artists without their consent is not just unfair.
It is fundamentally unjust.
Karl Marx’s concept of alienation describes how workers in capitalist societies become increasingly disconnected from the value they create.[5] In a traditional capitalist framework, workers produce goods and services but do not own the means of production, receiving only a fraction of the wealth they generate.
However, with the rise of AI-generated art, this alienation reaches an extreme—what can only be described as hyper-alienation.
Artists are not just underpaid or undervalued; they are being systematically erased from the economic cycle. Their creative labor is extracted, stripped of authorship, and repurposed into AI models that generate endless new content without their consent, credit, or compensation.
Unlike factory workers who at least receive wages for their time, artists whose work is absorbed into AI datasets receive nothing.
This marks a fundamental shift in the creative economy—from an exploitative system where individuals profit marginally from their skills to a fully extractive industry. AI companies transform past artistic labor into a limitless resource, ensuring that the true cultural producers—artists—no longer play any role in the economy they once shaped.
The industry moves from a model where human creativity is valued and rewarded to one where past works are endlessly recycled and monetized by corporations.
From a Marxist standpoint, this isn’t just a problem for individual artists—it’s an inevitable collapse point for the entire system. When creative workers lose economic agency, art is no longer dictated by artistic vision but by the relentless pursuit of profit.
The industry shifts from a culture of innovation and expression to one of mass-produced, algorithmic content optimized for engagement metrics rather than artistic integrity.
The consequences: creative labor loses its market value, art is optimized for engagement rather than meaning, and culture fills up with recycled, derivative content.
Marx would argue that the contradiction here is unsustainable. Capitalism depends on labor to function—but when AI models replace the labor force entirely, even capitalism itself risks self-destruction. If AI-generated art continues on its current trajectory, the system will not just exploit artists—it will erase them.
And in doing so, it may ultimately devalue art to the point where no one—neither artists nor audiences—finds meaning in it anymore.
In the age of AI-generated content, a fundamental ethical question arises: Should an artist’s work be used without their permission to train AI models?
The rapid development of AI in creative fields has led to a practice where vast amounts of human-made art, writing, and music are scraped from the internet and fed into machine learning systems—often without the original creators' consent. This raises serious concerns about intellectual property, artistic integrity, and the value of human creativity in an era where machines can replicate styles without attribution.
AI companies often defend this practice by claiming that their models “learn” in the same way humans do—by absorbing information, recognizing patterns, and synthesizing new ideas. But from a Kantian ethical perspective, this analogy is deeply flawed. A human learns through selective experience, judgment, and lived understanding.
An AI model, on the other hand, ingests everything at scale—indiscriminately, without consent, and without an understanding of the ethical boundaries that govern human learning.
Immanuel Kant’s categorical imperative suggests that we should only act according to principles that could be universalized—meaning that if everyone followed the same rule, it should still be morally acceptable.
Applied to AI, this principle demands a crucial test: Would AI companies accept it if the results of their own work—their models, research, and proprietary data—were scraped and fed into another AI system without their permission?
If they would find such a practice unacceptable, then by Kant’s own philosophy, their current approach to using artists’ work without consent fails the test of moral reasoning.
Ironically, we already know the answer. When DeepSeek, a China-based AI company, was accused of using OpenAI’s models to train its own chatbot, OpenAI reacted with outrage. The process, known as "distillation," involves taking outputs from a more advanced AI and using them to improve another system.
While common in the industry, OpenAI condemned this act as a violation of its terms of service—an unacceptable misuse of its intellectual property.
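Mechanically, distillation is not mysterious. Here is a minimal, self-contained sketch of the idea: a small “student” network learns to imitate a larger “teacher” purely from the teacher’s outputs, never touching its weights or training data. All shapes and hyperparameters are illustrative assumptions of mine:

```python
# A minimal sketch of distillation: the student is trained to match the
# teacher's output distribution on arbitrary queries. Sizes and rates are
# illustrative; real distillation pipelines are far more elaborate.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))
student = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(500):
    x = torch.randn(32, 10)              # queries sent to the teacher
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x), dim=-1)   # observed outputs

    # Train the student to match the teacher's answers, output by output.
    loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                    teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Notice the structural parallel: the student harvests the teacher’s outputs exactly the way image models harvest artists’ outputs.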
Ethical consistency demands that we hold AI companies to the same standard they would expect for themselves. If they wouldn’t want their intellectual labor to be harvested and repurposed without acknowledgment, then they cannot justify doing the same to artists.
AI companies say their models are a win for everyone—cheaper, faster, and more accessible art. From a business standpoint, that sounds great: endless creative content at minimal cost. But there’s a bigger ethical question—does this benefit outweigh the harm?
Utilitarianism, as laid out by Jeremy Bentham and John Stuart Mill, is simple:
maximize happiness and minimize suffering.
An action is only ethical if it creates the greatest good for the greatest number.
AI companies claim they’re democratizing art and expanding creativity. But look closer, and the harm starts to pile up—not just for artists, but for everyone.
Jobs disappear across industries. AI isn’t just replacing painters and illustrators; it’s creeping into writing, music, design, and even programming. When businesses can get AI-generated work in seconds for a fraction of the cost, human workers across creative and knowledge-based fields lose opportunities.
It’s already happening. Shopify’s CEO, Tobi Lütke, says his staff must prove a job can’t be done by AI before asking to hire more people.
Over time, it’s not just artists struggling—anyone trying to stand out online, from small business owners to independent writers, faces an uphill battle.
Creativity loses its human touch. The best art, music, and writing come from human experience—the struggles, emotions, and perspectives that make something meaningful. AI doesn’t feel joy, pain, or nostalgia; it only predicts patterns.
As AI-generated content dominates, cultural production could become increasingly shallow, optimized for engagement rather than depth. Imagine a world where music, movies, and even books start to feel eerily formulaic—because they are.
The next generation might never stand a chance. If AI continues to replace human creativity, what happens to young people aspiring to be artists, writers, or musicians?
Apprenticeships, entry-level jobs, and freelance gigs start to vanish, making it nearly impossible for newcomers to break in.
Without real-world experience, mentorship, or a way to make a living, entire industries could shrink, leaving fewer paths for future generations to explore their creative potential.
AI’s rapid rise isn’t just an artist’s problem—it’s everyone’s problem. When human creativity becomes undervalued, entire industries shift, economies change, and our cultural landscape risks becoming a sea of soulless, machine-made content.
From a utilitarian view, the ethical math doesn’t add up. If people bear the suffering while corporations and a handful of consumers reap the benefits, then the scale tips toward injustice.
If AI art mostly enriches big tech while stripping away careers and creative expression, utilitarianism would call it unethical. Efficiency alone isn’t a moral defense.
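To make the “ethical math” concrete, here is a toy utilitarian ledger. Every number is invented purely to show the structure of the argument: when benefits are concentrated and harms are diffuse, the aggregate can still come out negative:

```python
# A toy utilitarian tally. All utility values are invented placeholders;
# the point is the bookkeeping, not the magnitudes.
gains = {"AI companies": +50, "consumers of cheap images": +20}
harms = {"displaced artists": -60, "adjacent creative workers": -30,
         "cultural depth": -15}

net = sum(gains.values()) + sum(harms.values())
print(f"net utility: {net}")  # negative: fails the greatest-good test
```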
The real test for AI companies is this: Do their innovations actually make society better for everyone? If not, then the cost is too high.
The Ouroboros is a paradox—something that sustains itself by consuming itself. A serpent or dragon biting its own tail, endlessly devouring and renewing, trapped in an infinite loop of self-consumption.
It forces us to ask: can something truly grow if it only has itself to consume? Can renewal come from self-destruction, or is it an illusion? The Ouroboros is a symbol of endless hunger—an entity that can never escape itself.
If AI-generated art reaches a point where it displaces enough human artists, it risks becoming an Ouroboros—devouring the very ecosystem that sustains it. AI learns by training on human-made art; if professional artists vanish, the pool of fresh, high-quality work shrinks.
Without new creativity to feed on, AI risks stagnation, endlessly recycling its own derivatives in a loop of diminishing originality. In trying to replace artists, it may ultimately starve itself.
For all the hype around AI-generated art, there’s one inescapable truth: AI is only as good as the human-made art it learns from. Without artists, AI doesn’t have anything to work with.
Right now, generative AI models thrive because they’ve been trained on a goldmine of human creativity—billions of paintings, illustrations, and digital artworks scraped from across the internet.
But what happens when AI-generated content starts dominating the pool of available images? What happens when future AI models are trained not on human masterpieces, but on AI copies of AI copies?
The result? A slow but inevitable decline in quality.
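You can watch this decline happen in a toy simulation, a sketch of what researchers call model collapse. Each “generation” below fits a simple model (a Gaussian) to the previous generation’s output and then samples from it; with finite samples, diversity leaks away and never comes back. Real image models are vastly more complex, but the feedback loop is the same:

```python
# Toy "AI trained on AI" loop: fit a Gaussian to the previous generation's
# samples, then resample. With small samples, the estimated spread tends to
# drift downward over generations -- diversity is lost and never recovered.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)    # generation 0: "human" art

for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()           # "train" on current pool
    data = rng.normal(mu, sigma, size=20)         # "publish" model outputs
    if generation % 5 == 0:
        print(f"gen {generation:2d}: spread = {sigma:.3f}")
# In a typical run the spread has shrunk noticeably by generation 20.
```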
And AI companies know this.
They know they need a steady supply of fresh, high-quality human-made art to keep improving their models. But if AI-generated art floods the internet, replacing paid human work, where does that fresh supply come from?
This creates an existential crisis for the industry. If AI-generated art pushes human artists out of business, AI itself eventually runs out of high-quality data to learn from. At that point, companies have two choices: start paying to sustain the human creativity their models depend on, or keep training on recycled AI output and watch quality erode.
Either way, the current model is unsustainable. AI thrives on human creativity, but if it consumes too much without giving back, it might just starve itself out of existence.
AI and human creativity can totally coexist—if AI doesn’t end up eating the thing keeping it alive.
AI’s biggest selling point right now is also the thing that could kill it: fewer artists, fewer original ideas, fewer creative jobs, and a slow but steady drain on the well of human expression.
If AI feeds too aggressively on human creativity without sustaining it, it risks collapsing into an echo chamber of its own making.
When AI makes art cheaper, faster, and “good enough,” how many studios will still hire human artists? How many publishers will gamble on a new writer instead of feeding bestsellers into an algorithm? How many kids will even bother learning to draw when an app can do it in seconds?
If AI art wins, it won’t be because it’s better. It’ll be because it’s convenient and free. Convenience has a way of erasing things.
Like phone calls. Like handwritten letters. Like the feeling of getting lost in a hand-drawn world, where every detail was placed there by someone who cared.
And let me remind you that AI companies are not doing this for the public good. They’re doing this the same way drug dealers hand out free samples to teenagers.
Except these dealers are protected by vast economic resources and legal loopholes.
This is the kind of future we are heading toward, and not even Black Mirror has been pessimistic enough about it.
Read the original post: AI, Ghibli, and how to think about everything morally for detailed footnotes and direct interaction with the author.