For the last several years, generative AI has forced authors, publishers, and readers into a new conversation about what publishing is becoming. The discussion is often emotional, sometimes confused, and increasingly urgent. On one side are those who see AI as a tool that lowers barriers, widens participation, and gives ordinary creators access to capabilities that were once reserved for people with more money, more staff, and more industry knowledge. On the other side are those who see an increasingly flooded marketplace, weakened discoverability, and the rise of industrial-scale low-quality content that threatens to bury serious writers beneath noise.

That tension has led to a provocative proposal: Amazon KDP should charge authors an upload fee for each new title. The argument is that if AI has made it possible to publish books at near-zero cost and at industrial scale, then the platform needs to restore some friction to stop abuse. In some versions of the proposal, the fee would be refundable after a book reaches a certain sales threshold, functioning almost like a reverse advance. At first glance, this can sound practical. If the problem is zero-friction content dumping, then perhaps the answer is to make dumping expensive.

But that solution reveals something deeper about how many people are thinking about publishing in the age of generative AI. It suggests that what is being defended is not simply quality, but scarcity. And once that becomes clear, the question changes. The issue is no longer whether abuse exists. It does. The issue is whether the answer should be to make publishing harder again by placing the burden on authors rather than on the platform.

The Real Problem Is Not Imaginary

It is important to begin with honesty. There is a real problem here.

Generative AI has dramatically reduced the cost of producing content. It can assist with ideation, outlining, drafting, formatting, cover concepts, blurbs, metadata, and promotional copy. Used responsibly, these tools can help real authors work faster and with less expense. Used irresponsibly, they can enable high-volume production models in which hundreds of books are pushed into the marketplace with minimal effort, weak differentiation, and little regard for quality.

That creates a legitimate platform concern. If a publisher can profit from books that each sell only a handful of copies, simply because those books cost almost nothing to create and can be released at scale, then the market can become clogged with low-value content. Categories become noisier. Search becomes less useful. Recommendation systems can become polluted. Readers may have a harder time finding what they actually want.

So the concern itself should not be dismissed as panic or nostalgia. If there is industrial-scale manipulation, duplication, category stuffing, or mass low-value publishing, platforms have reason to intervene.

But acknowledging a real problem does not mean accepting every proposed remedy.

The Upload Fee Sounds Sensible Until You Ask What It Actually Does

The appeal of an upload fee is easy to understand. If abuse thrives because entry is frictionless, add friction. If content farms rely on volume, make volume more expensive. If low-quality books are profitable only because they cost nothing to produce, then charge for the privilege of publishing.

That is the logic.

The problem is that a universal upload fee does not actually distinguish between bad actors and good-faith creators. It does not ask whether a book is deceptive, duplicated, misleading, category spam, or manipulative. It asks only whether the author can afford to pay.

That makes it a capital-based filter.

And that matters because publishing has never been economically equal.

Some authors have always had better editors. Some could afford stronger covers. Some had money for ads while others did not. Some had staff, assistant support, technical skills, time, and preexisting audiences. Others had to self-edit, use premade covers, learn everything themselves, and build slowly from nothing. Inequality in publishing did not arrive with AI. It has always been here.

This is exactly why the current debate is so revealing. AI did not create unequal access. In many ways, it lowered the cost of entry for everyone. It gave ordinary creators access to tools that were previously too expensive, too specialized, or too time-consuming. It did not eliminate differences in resources, but it reduced some of the gap.

So when people now point to unequal resources as a reason to oppose AI-enabled scale, the response is obvious: that has always been the case. Authors have never competed on level ground. The difference now is that some of the tools that once belonged mostly to the well-resourced have become more broadly available.

That is not the creation of unfairness. In many respects, it is the reduction of it.

What Is Really Being Defended?

This is where the conversation becomes more uncomfortable.

A great deal of the rhetoric surrounding discoverability seems to assume that authors are owed some meaningful level of visibility. But that is not actually true. Authors are not owed discoverability. They are not owed chart positions, organic reach, or favorable algorithmic treatment simply because they wrote a book.

What authors can reasonably claim is something narrower and more defensible: the right to participate in a marketplace on fair terms. That means they can object to deception, manipulation, duplicate-content abuse, misleading metadata, and irrational platform favoritism. They can argue that readers should be able to navigate categories without being flooded by obvious junk. They can insist that marketplaces should function coherently.

But that is different from saying authors have a right to be found.

Markets do not owe anyone attention. Attention has always been competitive. In many cases, what is described as a discoverability crisis is partly the loss of an older advantage that existed under conditions of higher friction and lower competition. Some authors are not simply reacting to abuse. They are reacting to the end of a scarcity regime that once worked in their favor.

That does not mean all complaints are invalid. It means they need to be parsed carefully.

If AI has lowered barriers and widened participation, then a more crowded market is not automatically evidence of corruption. It may be evidence that access has expanded.

The Missing Data Problem

Another major weakness in the upload-fee argument is the absence of transparency.

Amazon does not provide the level of public data needed to justify a marketplace-wide penalty. Which categories are being most affected? Is the primary issue low-content books, coloring books, public-domain remixes, shallow nonfiction summaries, templated genre clones, or something else? Is original fiction actually being harmed in the same way? Are all sectors of KDP experiencing the same level of distortion, or are a few categories driving the concern?

Without that data, the argument becomes too broad.

If the heaviest abuse is concentrated in a few sectors, then why should every author across every category bear the cost of enforcement? Why should a fiction writer producing original work be penalized because another segment of the marketplace is being exploited at scale?

This is where the proposal begins to look less like smart governance and more like a blunt instrument. It punishes broadly without proving that the problem is broad.

Some of This May Be Market Correction, Not Market Collapse

There is another possibility worth considering. Some of what people are interpreting as a crisis may in fact be the leveling of a market that was previously inflated by friction.

Publishing has never been a pure meritocracy. Success was always shaped by barriers to entry, slower production cycles, specialized knowledge, access to services, technical competence, marketing experience, and simple timing. When fewer people could produce books easily, those already inside the system benefited from scarcity.

If AI lowers the cost of competent production, then some of the old advantages will weaken. More people can participate. More books can be made. More competition enters the field.

That does not prove the market is healthy in every respect. Certainly, some forms of abuse may increase under these conditions. But it does mean that not every decline in easy discoverability is proof of injustice. Some of it may simply be the collapse of older scarcity advantages.

That distinction matters.

Because once we recognize it, proposals to “restore friction” begin to look different. They are not always neutral attempts to defend quality. Sometimes they are attempts, conscious or not, to put the walls back up.

What Platforms Owe Authors and Readers

The real responsibility in this moment belongs less to authors than to platforms.

If Amazon believes reader experience is being degraded, then it should address the actual behaviors that degrade it. That means investing in smarter detection of duplication, spam, category abuse, misleading metadata, account farming, and coordinated manipulation. It means using behavior-based enforcement rather than imposing a blanket financial toll on all creators. It means applying category-specific scrutiny where abuse is concentrated instead of using one broad instrument across an entire publishing ecosystem.

The key question is not whether some friction is needed. The key question is what kind of friction is just.

A marketplace can absolutely justify rules against exploitative conduct. What it should not do is treat money as a proxy for legitimacy. Once that happens, governance has given way to gatekeeping.

Authorship in the Generative AI Era

Beneath all of this is a deeper issue: what authorship means when AI can help create.

Some people speak as if AI assistance automatically weakens or invalidates authorship. But authors have always used tools. They have always relied on software, editors, templates, collaborators, workflows, research systems, and production support. The existence of better tools does not eliminate human agency. It changes how that agency is exercised.

The more important distinction is not between AI-assisted and non-AI-assisted work in the abstract. The real distinction is between responsible authorship and exploitative production.

A legitimate author can use AI for brainstorming, structure, revision support, packaging assistance, or speed and still remain meaningfully responsible for the work. Another person can use any technology, AI or otherwise, to flood a platform with manipulative, low-value content. The moral dividing line is not merely speed. It is judgment, intent, disclosure where appropriate, honest presentation, and real engagement with readers.

That is the frame we need.

The Better Way Forward

The age of generative AI does present genuine challenges for publishing. It has made abuse easier to scale. It has also made creation more accessible. Both things are true at once.

What should be resisted is the temptation to solve one problem by recreating another. A universal upload fee would not only fail to distinguish carefully between legitimate creators and exploitative actors. It would also reintroduce scarcity through capital, giving the already advantaged another layer of protection while making participation harder for authors with fewer resources.

Authors do not have a right to discovery. But neither should they be asked to pay for a platform’s inability to govern precisely.

If publishing is changing because new tools have lowered barriers, then the answer is not to restore the old barriers by default. The answer is to build better systems for distinguishing genuine competition from manipulative noise.

That is a harder task than imposing a fee. But it is also the fairer one.

And in the long run, it is the only one that actually respects both authorship and access in the age of generative AI.