Showing posts with label AI-generated writing.

Saturday, April 11, 2026

Did AI kill my job, or open up a next chapter?; Public Source, April 10, 2026

[Kip Currier: I posted the following note and excerpt from this Public Source essay for the graduate students in my The Information Professional in Communities course this term:

I'm sharing this first-person essay by writer Austin Harvey, published by the Pittsburgh local journalism outlet Public Source, which I serendipitously came across and have posted to all of my blogs. Given the work that I currently do as a university faculty instructor, the piece raises thorny questions and considerations for me about what information centers/professionals can do to assist and/or "be there" for individuals and communities who are being displaced by AI.

Also, in what ways do academic programs like this one need to better prepare MLIS students to navigate AI-related positive and negative societal changes?

In what ways will information centers/professionals, as well as information center users, potentially be displaced by AI?

In what ways can information centers/professionals proactively adapt and/or manage this disruptive technological change?

What kinds of advocacy and actions by information professionals are required and needed?

Who are potential partners with whom information professionals can confer and collaborate on behalf of communities to strategically address present and future AI-fueled impacts?]


First-person essay by Austin Harvey, Public Source; Did AI kill my job, or open up a next chapter?

"Many writers feared that they would be the first ones to lose their jobs to AI. I did not share this fear, though I feel my heart rate spike every time I use an em-dash now — and you can pry them from my cold, dead hands when I’m gone. I saw value in human writing. I still do, and believe most people agree. We’ve gotten better at identifying AI-generated text, and while there is certainly a litany of websites out there publishing AI-generated articles, readers generally seem averse to them now. 

I was foolish to think none of this would affect me. 

I wasn’t replaced by AI. In fact, ATI’s editors made it very clear that they would never publish AI-generated articles. But AI was still a disruptive force. Search traffic fell. Google changed the rules on SEO and AdSense. We had editors quit or move on to other jobs, but we never hired anyone else to fill their positions. Our team of 12 became a team of seven, and for the better part of two years we were struggling to put out enough content to satisfy the algorithms. I was burning out constantly, still holding on to the idea that this was surely better than self-employment. 

Then, I was called into a meeting and told I was being let go at the end of January...

It wasn’t that I was replaced by AI, or that AI-generated articles were taking all of the search traffic; it was that a great number of people have stopped reading entirely, opting instead to simply ask ChatGPT or Gemini for answers to their questions. It’s an extension of the same issue that has caused many local news outlets to cease operations or cut staff."

Tuesday, April 7, 2026

I told the internet I use AI. Boy, was it mad.; The Washington Post, April 5, 2026

The Washington Post; I told the internet I use AI. Boy, was it mad.

"...Many people think that using AI at any stage of the writing process amounts to outsourcing your thinking to a machine, and they reacted badly to a journalist suggesting some AI use might be all right.

Obviously, I disagree, but I recognize those folks are grappling with important questions, such as “What is writing for?” and “Which uses of AI serve those purposes, and which undermine them?”"

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-à-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and that person still elects to purchase that book with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed by the purveyors of AI-generated books, or the platforms that make those books accessible for purchase, that those works are not written by human experts and authors. That is a deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one (except for bad actors) would countenance children's toys deceptively containing harmful lead, or dog and cat treats made with substances that can cause harm or death. Why should consumers not be similarly concerned about books that purport to be created by human experts but which may contain information that can cause harm, and even death, in some cases? 

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that falsely pose as human-authored works?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human-authored works for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but there are likely many other niches where such fakes remain hidden.

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Wednesday, September 24, 2025

What is AI slop? A technologist explains this new and largely unwelcome form of online content; The Conversation, September 2, 2025

Assistant Provost for Innovations in Learning, Teaching, and Technology, Quinnipiac University, The Conversation; What is AI slop? A technologist explains this new and largely unwelcome form of online content

"You’ve probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical – think Shrimp Jesus – and some are believable at a quick glance – remember the little girl clutching a puppy in a boat during a flood? 

These are examples of AI slop, low- to mid-quality content – video, images, audio, text or a mix – created with AI tools, often with little regard for accuracy. It’s fast, easy and inexpensive to make this content. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.

AI slop has been increasing over the past few years. As the term “slop” indicates, that’s generally not good for people using the internet...

Harms of AI slop

AI-driven slop is making its way upstream into people’s media diets as well. During Hurricane Helene, opponents of President Joe Biden cited AI-generated images of a displaced child clutching a puppy as evidence of the administration’s purported mishandling of the disaster response. Even when it’s apparent that content is AI-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.

AI slop also harms artists by causing job and financial losses and crowding out content made by real creators."

Sunday, November 21, 2021

Artificial intelligence is getting better at writing, and universities should worry about plagiarism; The Conversation, November 4, 2021

The Conversation; Artificial intelligence is getting better at writing, and universities should worry about plagiarism


"The dramatic rise of online learning during the COVID-19 pandemic has spotlit concerns about the role of technology in exam surveillance — and also in student cheating. 

Some universities have reported more cheating during the pandemic, and such concerns are unfolding in a climate where technologies that allow for the automation of writing continue to improve.

Over the past two years, the ability of artificial intelligence to generate writing has leapt forward significantly, particularly with the development of what’s known as the language generator GPT-3. With this, companies such as Google, Microsoft and NVIDIA can now produce “human-like” text.

AI-generated writing has raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism. As scholars with an interest in academic integrity and the intersections of work, society and educators’ labour, we believe that educators and parents should be, at the very least, paying close attention to these significant developments."