Showing posts with label trust. Show all posts

Thursday, September 18, 2025

AI could never replace my authors. But, without regulation, it will ruin publishing as we know it; The Guardian, September 18, 2025

Jonny Geller, The Guardian; AI could never replace my authors. But, without regulation, it will ruin publishing as we know it


[Kip Currier: This is a thought-provoking piece by literary agent Jonny Geller. He suggests an "artists’ rights charter for AI that protects two basic principles: permission and attribution". His charter idea conveys some aspects of the copyright area called "moral rights".

Moral rights provide copyright creators with a right of paternity (i.e., attribution) and a right of integrity. The latter can enable creators to exercise some level of control over how their copyrighted works may be adapted. The moral right of integrity, for example, was an argument in cases involving whether black-and-white films (legally) could be or (ethically) should be colorized. (See Colors in Conflicts: Moral Rights and the Foreign Exploitation of Colorized U.S. Motion Pictures.)

Moral rights are not widespread in U.S. copyright law because of tensions between the moral right of integrity and the right of free expression/free speech under the U.S. Constitution (whose September 17, 1787 birthday was yesterday). The Visual Artists Rights Act (1990) is a narrow example of moral rights under U.S. copyright law.

To Geller's proposed Artists' Rights Charter for AI I'd suggest adding the word and concept of "Responsibilities". Compelling arguments can be made for providing authors with some rights regarding use of their copyrighted works as AI training data. And, commensurately, persuasive arguments can be made that authors have certain responsibilities if they use AI at any stage of their creative processes. Authors can and ethically should be transparent about how they have used AI, if applicable, in the creation stages of their writing.

Of course, how to operationalize that as an ethical standard is another matter entirely. But just because it may be challenging to develop ethical language as guidance for authors, and to instill it as a broad standard, doesn't mean it shouldn't be attempted.]


[Excerpt]

"The single biggest threat to the livelihood of authors and, by extension, to our culture, is not short attention spans. It is AI...

As a literary agent and CEO of one of the largest agencies in Europe, I think this is something everyone should care about – not because we fear progress, but because we want to protect it. If you take away the one thing that makes us truly human – our ability to think like humans, create stories and imagine new worlds – we will live in a diminished world.

AI that doesn’t replace the artist, or that will work with them transparently, is not all bad. An actor who is needed for reshoots on a movie may authorise use of the footage they have to complete a picture. This will save on costs, the environmental impact and time. A writer may wish to speed up their research and enhance their work by training their own models to ask the questions that a researcher would. The translation models available may enhance the range of offering of foreign books, adding to our culture.

All of this is worth discussing. But it has to be a discussion and be transparent to the end user. Up to now, work has simply been stolen and there are insufficient guardrails on the distributors, studios, publishers. As a literary agent, I have a more prosaic reason to get involved – I don’t think it is fair for someone’s work to be taken without their permission to create an inferior competitor.

What can we do? We could start with some basic principles for all to sign up to. An artists’ rights charter for AI that protects two basic principles: permission and attribution."

Sunday, September 7, 2025

Nashville church helps unhoused people after downtown library fire; NewsChannel5, September 6, 2025

"When the Nashville Public Library's downtown branch closed after a fire, McKendree United Methodist Church stepped up to fill a critical gap for people experiencing homelessness who had lost their daily refuge.

"Alright we'll get ya all bagged up here," said Francie Markham, who volunteers at the church every Thursday morning helping people experiencing homelessness...

After losing their cool refuge with computers and resources, Smith said many people just wanted to avoid the long stretch of summer heat.

"So what we were able to do on our Tuesdays and Thursday meal is to allow them to come in much earlier rather than at the 11:30 times so they would be out of the element," Smith said.

"With the changing of the season we need it open as soon as we can," Smith said.

In the meantime, Smith and Markham keep doing what's written on the walls — serving kindness.

Despite initial reports the library would open soon after the fire, library officials say the library requires a third party inspection before it can open. The two nearest library branches, North Branch and Hadley Park, are both more than a 30-minute walk from the library downtown. 

Have you witnessed acts of community kindness during challenging times? Share your story with Kim Rafferty and help us highlight the helpers making a difference in Middle Tennessee. Email kim.rafferty@NewsChannel5.com to continue the conversation.

In this article, we used artificial intelligence to help us convert a video news report originally written by Kim Rafferty. When using this tool, both Kim and the NewsChannel 5 editorial team verified all the facts in the article to make sure it is fair and accurate before we published it. We care about your trust in us and where you get your news, and using this tool allows us to convert our news coverage into different formats so we can quickly reach you where you like to consume information. It also lets our journalists spend more time looking into your story ideas, listening to you and digging into the stories that matter."

Friday, May 30, 2025

This Latest AI Book Debacle Is A Disturbing Part Of A Growing Trend; ScreenRant, May 29, 2025


"Yet another AI scandal has hit self-publishing, as an author left generative AI in a final draft of their book - but this isn't an isolated incident, and reveals a growing, and deeply problematic, trend."

Saturday, April 26, 2025

U.S. autism data project sparks uproar over ethics, privacy and intent; The Washington Post, April 25, 2025

The Washington Post; U.S. autism data project sparks uproar over ethics, privacy and intent

"The Trump administration has retreated from a controversial plan for a national registry of people with autism just days after announcing it as part of a new health initiative that would link personal medical records to information from pharmacies and smartwatches.

Jay Bhattacharya, director of the National Institutes of Health, unveiled the broad, data-driven initiative to a panel of experts Tuesday, saying it would include “national disease registries, including a new one for autism” that would accelerate research into the rapid rise in diagnoses of the condition.

The announcement sparked backlash in subsequent days over potential privacy violations, lack of consent and the risk of long-term misuse of sensitive data.

The Trump administration still will pursue large-scale data collection, but without the registry that drew the most intense criticism, the Department of Health and Human Services said."

Thursday, February 27, 2025

Dying in Darkness: Jeff Bezos Turns Out the Lights in the Washington Post’s Opinion Section; Politico, February 26, 2025

MICHAEL SCHAFFER , Politico; Dying in Darkness: Jeff Bezos Turns Out the Lights in the Washington Post’s Opinion Section

"In personally announcing that he was dramatically re-orienting the editorial line, and in fact wouldn’t even run dissenting views, Bezos added another sharp example to a narrative that represents a grave threat to the Post’s image: The idea that its owner is messing around with the product in order to curry favor with his new pal Donald Trump, who has the power to withhold contracts from Amazon and other Bezos companies.

The paper’s image is not some abstract question for journalism-school professors. It’s a matter of dollars and cents. If readers don’t trust a publication’s name, no amount of Pulitzer-worthy scoops will fix it. For Bezos, a guy who believes that the Post needs to gain a broad-based audience, it’s a baffling blind spot...

Owners may get the final say at publications they own, but the wisest among them have let their newsrooms and editorial boards make their own decisions without fear or favor. That’s to prevent the very impression that Bezos is making — that of a mogul trying to disguise his own predilections as independent thought...

Yet even as leadership talked about amping up readership, the owner personally alienated real and potential readers: first by spiking the endorsement, then by showing up in the line of moguls at Trump’s inauguration and now by declaring that the publication would have one editorial line for all of its contributors. It all made his publication look wimpy, or possibly corrupt.

Instead of being an occasionally fussy repository of mostly mainstream points of view, the venerable publication’s opinion pages now risk looking like a vessel for a very rich owner to curry favor with the man who runs the government. It’s going to be hard to keep that image from sticking to the whole organization — including the non-wimpy, non-corrupt reporting corps that keep digging up scoops on the administration.

Bezos, of all people, should know this: He’s the branding whiz who came up with “Democracy Dies in Darkness.”

Among many journalists, Wednesday’s bombshell announcement is being debated as a matter of media ethics: Was Bezos within his rights as an owner to call the tune on opinion matters? Or was this type of process meddling a violation of norms that go back at least to the 1950s?...

“I am of America and for America, and proud to be so,” he added. “Our country did not get here by being typical. And a big part of America’s success has been freedom in the economic realm and everywhere else. Freedom is ethical — it minimizes coercion — and practical; it drives creativity, invention and prosperity.”

Sounds good late at night in the dorm room. But does said freedom include, say, the freedom to start a union at an Amazon warehouse? Or run a business without worrying that some monopolistic e-commerce behemoth is going to drive you under? Come to think of it, these sound like great subjects for energetic debate on a pluralistic op-ed page somewhere. Too bad Bezos, instead of embracing the great American history of arguing about freedom, announced that he’s not so keen on debate."

Thursday, January 16, 2025

Biden bids farewell with dark warning for America: the oligarchs are coming; The Guardian, January 15, 2025

The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming

"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.

“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.

The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.

The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.

Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”

In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”

Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."

Saturday, December 28, 2024

Overcoming AI’s Nagging Trust And Ethics Issues; Forbes, December 28, 2024

Joe McKendrick, Forbes; Overcoming AI’s Nagging Trust And Ethics Issues

"Trust and ethics in AI is what is making business leaders nervous. For example, at least 72% of executives responding to a recent survey from the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations...

"Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data."...

The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that's being generated may be unique, depending on how the output is being presented, there's always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”...

How can AI be best directed to be ethical and trustworthy? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”

Thursday, October 3, 2024

X’s AI chatbot spread voter misinformation – and election officials fought back; The Guardian, September 12, 2024

Rachel Leingang, The Guardian; X’s AI chatbot spread voter misinformation – and election officials fought back


"Finding the source – and working to correct it – served as a test case of how election officials and artificial intelligence companies will interact during the 2024 presidential election in the US amid fears that AI could mislead or distract voters. And it showed the role Grok, specifically, could play in the election, as a chatbot with fewer guardrails to prevent the generating of more inflammatory content.


A group of secretaries of state and the organization that represents them, the National Association of Secretaries of State, contacted Grok and X to flag the misinformation. But the company didn’t work to correct it immediately, instead giving the equivalent of a shoulder shrug, said Steve Simon, the Minnesota secretary of state. “And that struck, I think it’s fair to say all of us, as really the wrong response,” he said.


Thankfully, this wrong answer was relatively low-stakes: it would not have prevented people from casting a ballot. But the secretaries took a strong position quickly because of what could come next.


“In our minds, we thought, well, what if the next time Grok makes a mistake, it is higher stakes?” Simon said. “What if the next time the answer it gets wrong is, can I vote, where do I vote … what are the hours, or can I vote absentee? So this was alarming to us.”


Especially troubling was the fact that the social media platform itself was spreading false information, rather than users spreading misinformation using the platform.


The secretaries took their effort public. Five of the nine secretaries in the group signed on to a public letter to the platform and its owner, Elon Musk. The letter called on X to have its chatbot take a similar position as other chatbot tools, like ChatGPT, and direct users who ask Grok election-related questions to a trusted nonpartisan voting information site, CanIVote.org.


The effort worked. Grok now directs users to a different website, vote.gov, when asked about elections."

Tuesday, September 17, 2024

Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture; The Hastings Center, September 12, 2024

 The Hastings Center; Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture

"A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists:
Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science
Timothy Caulfield, LLB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta; Best-selling author & TV host

Moderator:
Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

Wednesday, September 4, 2024

NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence; National Endowment for the Humanities (NEH), August 27, 2024

Press Release, National Endowment for the Humanities (NEH); NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence

"The National Endowment for the Humanities (NEH) today announced grant awards totaling $2.72 million for five colleges and universities to create new humanities-led research centers that will serve as hubs for interdisciplinary collaborative research on the human and social impact of artificial intelligence (AI) technologies.

As part of NEH’s third and final round of grant awards for FY2024, the Endowment made its inaugural awards under the new Humanities Research Centers on Artificial Intelligence program, which aims to foster a more holistic understanding of AI in the modern world by creating scholarship and learning centers across the country that spearhead research exploring the societal, ethical, and legal implications of AI. 

Institutions in California, New York, North Carolina, Oklahoma, and Virginia were awarded NEH grants to establish the first AI research centers and pilot two or more collaborative research projects that examine AI through a multidisciplinary humanities lens. 

The new Humanities Research Centers on Artificial Intelligence grant program is part of NEH’s agencywide Humanities Perspectives on Artificial Intelligence initiative, which supports humanities projects that explore the impacts of AI-related technologies on truth, trust, and democracy; safety and security; and privacy, civil rights, and civil liberties. The initiative responds to President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes new standards for AI safety and security, protects Americans’ privacy, and advances equity and civil rights."

Thursday, August 29, 2024

Disinformation, Trust, and the Role of AI: Threats to Health & Democracy; The Hastings Center, September 9, 2024

The Hastings Center; Disinformation, Trust, and the Role of AI: Threats to Health & Democracy

"Join us for The Daniel Callahan Annual Lecture, hosted by The Hastings Center at Rockefeller University’s beautiful campus in New York. Hastings Center President Vardit Ravitsky will moderate a discussion with experts Reed Tuckson and Timothy Caulfield on disinformation, trust, and the role of AI, focusing on current and future threats to health and democracy. The event will take place on Monday, September 9, 5 pm. Learn more and register...

A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists
Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science
Timothy Caulfield, LLB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta; Best-selling author & TV host

Moderator:
Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

Tuesday, July 16, 2024

Peter Buxtun, whistleblower who exposed Tuskegee syphilis study, dies aged 86; Associated Press via The Guardian, July 15, 2024

 Associated Press via The Guardian; Peter Buxtun, whistleblower who exposed Tuskegee syphilis study, dies aged 86

"Peter Buxtun, the whistleblower who revealed that the US government allowed hundreds of Black men in rural Alabama to go untreated for syphilis in what became known as the Tuskegee study, has died. He was 86...

Buxtun is revered as a hero to public health scholars and ethicists for his role in bringing to light the most notorious medical research scandal in US history. Documents that Buxtun provided to the Associated Press, and its subsequent investigation and reporting, led to a public outcry that ended the study in 1972.

Forty years earlier, in 1932, federal scientists began studying 400 Black men in Tuskegee, Alabama, who were infected with syphilis. When antibiotics became available in the 1940s that could treat the disease, federal health officials ordered that the drugs be withheld. The study became an observation of how the disease ravaged the body over time...

In his complaints to federal health officials, he drew comparisons between the Tuskegee study and medical experiments Nazi doctors had conducted on Jews and other prisoners. Federal scientists did not believe they were guilty of the same kind of moral and ethical sins, but after the Tuskegee study was exposed, the government put in place new rules about how it conducts medical research. Today, the study is often blamed for the unwillingness of some African Americans to participate in medical research.

“Peter’s life experiences led him to immediately identify the study as morally indefensible and to seek justice in the form of treatment for the men. Ultimately, he could not relent,” said the CDC’s Pestorius."

Monday, June 17, 2024

Video Clip: The Death of Truth; C-Span, June 9, 2024

 C-Span; Video Clip: The Death of Truth

"Steven Brill, a journalist and NewsGuard Co-CEO, talked about his new book on online misinformation and social media, and their impact on U.S. politics and democracy."

Tuesday, June 4, 2024

Sure, Google’s AI overviews could be useful – if you like eating rocks; The Guardian, June 1, 2024

The Guardian; Sure, Google’s AI overviews could be useful – if you like eating rocks

"To date, some of this searching suggests subhuman capabilities, or perhaps just human-level gullibility. At any rate, users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day (presumably to combat iron deficiency). Memo to Google: do not train your AI on Reddit or the Onion."

Thursday, April 4, 2024

Ethics in an age of disinformation: Free webinar series from the National Press Club Journalism Institute; National Press Club Journalism Institute, April 4, 2024

Press Release, National Press Club Journalism Institute; Ethics in an age of disinformation: Free webinar series from the National Press Club Journalism Institute 

"The National Press Club Journalism Institute is pleased to announce a free, four-part webinar series focused on ethics in the age of disinformation. These discussions are geared toward equipping journalists and the public with tools to combat mis and disinformation efforts aimed at disrupting journalism and democracy.

All of these webinars are free and open to the public and are designed to provide tools and best practices to support ethical, trustworthy journalism."

Friday, August 25, 2023

Who owns your cells? Legacy of Henrietta Lacks raises ethical questions about profits from medical research; Cleveland.com, August 18, 2023

Cleveland.com; Who owns your cells? Legacy of Henrietta Lacks raises ethical questions about profits from medical research

"While the legal victory may have given the family some closure, it has raised concerns for bioethicists in Cleveland and elsewhere.

The case raises important questions about owning one’s own body; whether individuals are entitled to a share of the profits from medical discoveries derived from research on their own cells, organs and genetic material.

But it also offers a tremendous opportunity to not only acknowledge the ethical failures of the past and the seeds of mistrust they have sown, but to guide society toward building better, more trustworthy medical institutions, said Aaron Goldenberg, who directs the Bioethics Center for Community Health and Genomic Equity (CHANGE) at Case Western Reserve University."

Friday, August 11, 2023

A New Frontier for Travel Scammers: A.I.-Generated Guidebooks; The New York Times, August 5, 2023

Seth Kugel, The New York Times; A New Frontier for Travel Scammers: A.I.-Generated Guidebooks

"Though she didn’t know it at the time, Ms. Kolsky had fallen victim to a new form of travel scam: shoddy guidebooks that appear to be compiled with the help of generative artificial intelligence, self-published and bolstered by sham reviews, that have proliferated in recent months on Amazon.

The books are the result of a swirling mix of modern tools: A.I. apps that can produce text and fake portraits; websites with a seemingly endless array of stock photos and graphics; self-publishing platforms — like Amazon’s Kindle Direct Publishing — with few guardrails against the use of A.I.; and the ability to solicit, purchase and post phony online reviews, which runs counter to Amazon’s policies and may soon face increased regulation from the Federal Trade Commission.

The use of these tools in tandem has allowed the books to rise near the top of Amazon search results and sometimes garner Amazon endorsements such as “#1 Travel Guide on Alaska.”"

Friday, July 28, 2023

The Guardian’s editorial code has been updated – here’s what to expect; The Guardian, July 27, 2023

The Guardian; The Guardian’s editorial code has been updated – here’s what to expect

"Much has changed since 2011 – at the Guardian, in the way society shares information and opinions, and in the world at large. The updates reflect this. But as “the embodiment of the Guardian’s values”, which is how the editor-in-chief, Katharine Viner, described the code in an email to staff today, the standards by which journalists agree to be held accountable, while geared (as far as possible) to the modern environment, seek to maintain something immutable: trust."