Sunday, August 25, 2024

‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world; The Guardian, August 24, 2024

Yuval Noah Harari, The Guardian; ‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world

"Would having even more information make things better – or worse? We will soon find out. Numerous corporations and governments are in a race to develop the most powerful information technology in history – AI. Some leading entrepreneurs, such as the American investor Marc Andreessen, believe that AI will finally solve all of humanity’s problems. On 6 June 2023, Andreessen published an essay titled Why AI Will Save the World, peppered with bold statements such as: “I am here to bring the good news: AI will not destroy the world, and in fact may save it.” He concluded: “The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.”

Others are more sceptical. Not only philosophers and social scientists but also many leading AI experts and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction. Last year, close to 30 governments – including those of China, the US and the UK – signed the Bletchley declaration on AI, which acknowledged that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”. By using such apocalyptic terms, experts and governments have no wish to conjure a Hollywood image of rebellious robots running in the streets and shooting people. Such a scenario is unlikely, and it merely distracts people from the real dangers.

AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs. AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control."

Friday, August 23, 2024

Crossroads: Episode 2 - AI and Ethics; Crossroads from Washington National Cathedral, April 17, 2024

Crossroads from Washington National Cathedral; Crossroads: Episode 2 - AI and Ethics

"Tune in for the Cathedral's first conversation on AI and ethics. Whether you are enthusiastically embracing it, reluctantly trying it out, or anxious about its consequences, AI has taken our world by storm and according to the experts, it is here to stay. Dr. Joseph Yun, CEO of Bluefoxlabs.ai and AI architect for the University of Pittsburgh, the Rev. Jo Nygard Owens, the Cathedral's Pastor for Digital Ministry, and Dr. Sonia Coman, the Cathedral's Director of Digital Engagement discuss the state of AI, its risks, and the hope it can bring to the world."

U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents; The New York Times, August 23, 2024

Danielle Kaye and Lauren Hirsch, The New York Times; U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents

"The Justice Department filed an antitrust lawsuit on Friday against the real estate software company RealPage, alleging its software enabled landlords to collude to raise rents across the United States.

The suit, joined by North Carolina, California, Colorado, Connecticut, Minnesota, Oregon, Tennessee and Washington, accuses RealPage of facilitating a price-fixing conspiracy that boosted rents beyond market forces for millions of people. It’s the first major civil antitrust lawsuit where the role of an algorithm in pricing manipulation is central to the case, Justice Department officials said."

The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws; Wired, August 21, 2024

 Lily Hay Newman, Wired; The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

"AT THE 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies."

Wednesday, August 21, 2024

Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns; Medscape, August 21, 2024

Arthur L. Caplan, PhD, Medscape; Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns

"On the other hand, there are clearly many ethical issues about creating an artificial version of yourself. One obvious issue is how accurate this AI version of you will be if the death bot can create information that sounds like you, but really isn't what you would have said, despite the effort to glean it from recordings and past information about you. Is it all right if people wander from the truth in trying to interact with someone who's died? 

There are other ways to leave memories behind. You certainly can record messages so that you can control the content. Many people video themselves and so on. There are obviously people who would say that they have a diary or have written information they can leave behind. 

Is there a place in terms of accuracy for a kind of artificial version of ourselves to go on forever? Another interesting issue is who controls that. Can you add to it after your death? Can information be shared about you with third parties who don't sign up for the service? Maybe the police take an interest in how you died. You can imagine many scenarios where questions might come up about wanting to access these data that the artificial agent is providing. 

Some people might say that it’s just not the way to grieve. Maybe the best way to grieve is to accept death and not try to interact with a constructed version of yourself once you’ve passed. That isn’t really accepting death. It’s a form, perhaps, of denial of death, and maybe that isn’t going to be good for the mental health of survivors who really have not come to terms with the fact that someone has passed on."

Startup using blockchain to prevent copyright theft by AI is valued over $2 billion after fresh funding; CNBC, August 21, 2024

Ryan Browne, CNBC; Startup using blockchain to prevent copyright theft by AI is valued over $2 billion after fresh funding

"San-Francisco-based startup Story said Wednesday that it raised $80 million of funding for a blockchain designed to prevent artificial intelligence makers like OpenAI from taking creators’ intellectual property without permission."

Tuesday, August 20, 2024

WATCH: How Drones Are Saving Lives in Rural America; Government Technology, August 19, 2024

Nikki Davidson, Government Technology; WATCH: How Drones Are Saving Lives in Rural America

"Until recently, deputies in rural Manitowoc County, Wis., faced a challenge: responding to calls with limited visibility and resources. Traditional policing methods often left them at a disadvantage in vast, wooded areas. 

In June 2022, the Manitowoc County Sheriff’s Office embarked on a mission to integrate unmanned aerial vehicles (UAVs) into their law enforcement operations to give them an eye in the sky. Two lieutenants, Travis Aleff and Kyle Stotzheim, were tasked with spearheading the initiative, working “non-stop” for half a year to establish a fully operational drone team with 13 FAA-certified pilots.

Initially there were a lot of questions about the program’s cost-effectiveness and whether the investment in drones would yield tangible benefits...

To understand the real-world impact of drones in law enforcement, we requested examples from the sheriff’s office, complete with video footage. They provided three compelling cases, each demonstrating a different facet of how UAVs can revolutionize police work and enhance public safety.

DRONES AS A LIFELINE: ENHANCING MENTAL HEALTH CRISIS RESPONSE


One example highlights the potential of drones to aid in mental health crisis response. The Manitowoc County Sheriff’s Office received a call concerning a suicidal, armed individual who intended to harm themselves in a densely wooded county park. Watch the video below to see how the UAV was used as a tool to defuse and safely resolve the situation."

ABC, Kimmel Defeat George Santos Cameo Video Copyright Suit; Bloomberg Law, August 19, 2024

Kyle Jahner, Bloomberg Law; ABC, Kimmel Defeat George Santos Cameo Video Copyright Suit

"Jimmy Kimmel and ABC defeated former Rep. George Santos’ copyright lawsuit as a New York federal court found use of his Cameo videos on television constituted fair use."

Where AI Thrives, Religion May Struggle; Chicago Booth Review, March 26, 2024

 Jeff Cockrell, Chicago Booth Review; Where AI Thrives, Religion May Struggle

"The United States has seen one of the biggest drops: the share of its residents who said they belonged to a church, synagogue, or mosque fell from 70 percent in 1999 to 47 percent in 2020, according to Gallup.

One potential factor is the proliferation of artificial intelligence and robotics, according to a team of researchers led by Chicago Booth’s Joshua Conrad Jackson and Northwestern’s Adam Waytz. The more exposed people are to automation technologies, the researchers find, the weaker their religious beliefs. They argue that the relationship is not coincidental and that “there are meaningful properties of automation which encourage religious decline."

Researchers and philosophers have pondered the connection between science and religion for many years. The German sociologist Max Weber spoke of science contributing to the “disenchantment of the world,” or the replacement of supernatural explanations for the workings of the universe with rational, scientific ones. Evidence from prior research doesn’t support a strong “disenchantment” effect, Jackson says, but he and his coresearchers suggest that AI and robotics may influence people’s beliefs in a way that science more generally does not."

Authors sue Claude AI chatbot creator Anthropic for copyright infringement; AP, August 19, 2024

 MATT O’BRIEN, AP; Authors sue Claude AI chatbot creator Anthropic for copyright infringement

"A group of authors is suing artificial intelligence startup Anthropic, alleging it committed “large-scale theft” in training its popular chatbot Claude on pirated copies of copyrighted books.

While similar lawsuits have piled up for more than a year against competitor OpenAI, maker of ChatGPT, this is the first from writers to target Anthropic and its Claude chatbot.

The smaller San Francisco-based company — founded by ex-OpenAI leaders — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way...

The lawsuit was brought by a trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who are seeking to represent a class of similarly situated authors of fiction and nonfiction...

What links all the cases is the claim that tech companies ingested huge troves of human writings to train AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works. The legal challenges are coming not just from writers but visual artists, music labels and other creators who allege that generative AI profits have been built on misappropriation...

But the lawsuit against Anthropic accuses it of using a dataset called The Pile that included a trove of pirated books. It also disputes the idea that AI systems are learning the way humans do."

He Regulated Medical Devices. His Wife Represented Their Makers.; The New York Times, August 20, 2024

The New York Times; He Regulated Medical Devices. His Wife Represented Their Makers.

"For 15 years, Dr. Jeffrey E. Shuren was the federal official charged with ensuring the safety of a vast array of medical devices including artificial knees, breast implants and Covid tests.

When he announced in July that he would be retiring from the Food and Drug Administration later this year, Dr. Robert Califf, the agency’s commissioner, praised him for overseeing the approval of more novel devices last year than ever before in the nearly half-century history of the device division.

But the admiration for Dr. Shuren is far from universal. Consumer advocates see his tenure as marred by the approval of too many devices that harmed patients and by his own close ties to the $500 billion global device industry.

One connection stood out: While Dr. Shuren regulated the booming medical device industry, his wife, Allison W. Shuren, represented the interests of device makers as the co-leader of a team of lawyers at Arnold & Porter, one of Washington’s most powerful law firms."

Monday, August 19, 2024

Trump posts deepfakes of Swift, Harris and Musk in effort to shore up support; The Guardian, August 19, 2024

The Guardian; Trump posts deepfakes of Swift, Harris and Musk in effort to shore up support

"Donald Trump shared several AI-generated images of Taylor Swift and her fans vowing their support for his presidential campaign on Sunday, reposting them with the caption “I accept!” on his Truth Social platform. The deepfakes are part of a slew of images made with artificial intelligence that the former president has disseminated in recent days straddling the line between parody and outright election disinformation."

Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city; The Washington Post, August 19, 2024

Jenna Sampson, The Washington Post; Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city

"Miller made this pitch at a county library in Wyoming’s capital on a recent summer Friday, with a few friends and family filling otherwise empty rows of chairs. Before the sparse audience, he vowed to run the city of Cheyenne exclusively with an AI bot he calls “VIC” for “Virtual Integrated Citizen.”

AI experts say the pledge is a first for U.S. campaigns and marks a new front in the rapid emergence of the technology. Its implications have stoked alarm among officials and even tech companies...

The day before, Miller had scrambled to get VIC working after OpenAI, the technology company behind generative-AI tools like ChatGPT, shut down his account, citing policies against using its products for campaigning. Miller quickly made a second ChatGPT bot, allowing him to hold the meet-and-greet almost exactly as planned.

It was just the latest example of Miller’s skirting efforts against his campaign by the company that makes the AI technology and the regulatory authorities that oversee elections...

“While OpenAI may have certain policies against using its model for campaigning, other companies do not, so it makes shutting down the campaign nearly impossible.”"

New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'; The Law Journal Editorial Board via Law.com, August 16, 2024

The Law Journal Editorial Board via Law.com; New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'

"On July 29, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on generative artificial intelligence tools. The opinion follows on such opinions and guidance from several state bar associations, as well as similar efforts by non-U.S. bars and regulatory bodies around the world...

Focused on GAI, the opinion addresses six core principles: competence, confidentiality, communication, meritorious claims and candor to tribunal, supervision and fees...

What is not commonly understood, perhaps, is that GAI “hallucinates,” and generates content...

Not addressed in the opinion is whether GAI is engaged in the practice of law...

At the ABA annual meeting, representatives of more than 20 “foreign” bars participated in a roundtable on GAI. In a world of cross-border practice, there was a desire for harmonization."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

  Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"

A.L.S. Stole His Voice. A.I. Retrieved It.; The New York Times, August 14, 2024

The New York Times; A.L.S. Stole His Voice. A.I. Retrieved It.

"As scientists continued training the device to recognize his sounds, it got only better. Over a period of eight months, the study said, Mr. Harrell came to utter nearly 6,000 unique words. The device kept up, sustaining a 97.5 percent accuracy.

That exceeded the accuracy of many smartphone applications that transcribe people’s intact speech. It also marked an improvement on previous studies in which implants reached accuracy rates of roughly 75 percent, leaving one of every four words liable to misinterpretation.

And whereas devices like Neuralink’s help people move cursors across a screen, Mr. Harrell’s implant allowed him to explore the infinitely larger and more complex terrain of speech.

“It went from a scientific demonstration to a system that Casey can use every day to speak with family and friends,” said Dr. David Brandman, the neurosurgeon who operated on Mr. Harrell and led the study alongside Dr. Stavisky.

That leap was enabled in part by the types of artificial intelligence that power language tools like ChatGPT. At any given moment, Mr. Harrell’s implant picks up activity in an ensemble of neurons, translating their firing pattern into vowel or consonant units of sound. Computers then agglomerate a string of such sounds into a word, and a string of words into a sentence, choosing the output they deem likeliest to correspond to what Mr. Harrell has tried to say...

Whether the same implant would prove as helpful to more severely paralyzed people is unclear. Mr. Harrell’s speech had deteriorated, but not disappeared.

And for all its utility, the technology cannot mitigate the crushing financial burden of trying to live and work with A.L.S.: Insurance will pay for Mr. Harrell’s caregiving needs only if he goes on hospice care, or stops working and becomes eligible for Medicaid, Ms. Saxon said, a situation that, she added, drives others with A.L.S. to give up trying to extend their lives.

Those very incentives also make it likelier that people with disabilities will become poor, putting access to cutting-edge implants even further out of their reach, said Melanie Fried-Oken, a professor of neurology at Oregon Health & Science University."
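The excerpt above describes the decoding step at a high level: the implant's neural firing patterns are mapped to vowel or consonant sound units, which are then assembled into the words and sentences deemed likeliest. The short Python sketch below illustrates only that general likelihood-based idea; the phoneme probabilities, the tiny lexicon and the scoring function are hypothetical stand-ins, not the study's actual models.

# Toy illustration (hypothetical, not the study's model): turn per-timestep
# phoneme probabilities from a neural decoder into the likeliest word.
import math

# Hypothetical per-timestep phoneme probabilities.
timesteps = [
    {"HH": 0.70, "EH": 0.10, "L": 0.10, "OW": 0.05, "Y": 0.05},
    {"HH": 0.10, "EH": 0.60, "L": 0.20, "OW": 0.05, "Y": 0.05},
    {"HH": 0.05, "EH": 0.10, "L": 0.70, "OW": 0.10, "Y": 0.05},
    {"HH": 0.05, "EH": 0.05, "L": 0.10, "OW": 0.75, "Y": 0.05},
]

# Hypothetical phoneme-to-word lexicon.
lexicon = {
    ("HH", "EH", "L", "OW"): "hello",
    ("Y", "EH", "L", "OW"): "yellow",
}

def sequence_log_prob(phonemes, steps):
    # Sum of log-probabilities, assuming one phoneme per timestep.
    if len(phonemes) != len(steps):
        return float("-inf")
    return sum(math.log(step[p]) for p, step in zip(phonemes, steps))

def likeliest_word(steps):
    # Score every word in the lexicon and return the best match.
    scores = {word: sequence_log_prob(seq, steps) for seq, word in lexicon.items()}
    return max(scores, key=scores.get)

print(likeliest_word(timesteps))  # prints "hello" for the probabilities above

In the system the article describes, neural-network and language-model components choose among whole sentences rather than entries in a toy lexicon, but the same principle of selecting the likeliest output applies.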

Friday, August 16, 2024

How AI is revolutionising how firefighters tackle blazes and saving lives; The Guardian, August 15, 2024

Selena Ross, The Guardian; How AI is revolutionising how firefighters tackle blazes and saving lives

"What’s more, all that data needs to be recrunched anew every day as the weather forecast changes.

Humans can do this. But AI seems to be able to do it better, digesting mammoth datasets to predict wildfires fairly accurately as much as a week or even 10 days before they start...

Technosylva’s work is part of a wave of new fire modelling around the world drawing on AI, with each project grappling in a different way with the same three factors: fuel, weather, ignition. Many are not yet operational, but their creators expect them to be in the next couple of years."

San Francisco Moves to Lead Fight Against Deepfake Nudes; The New York Times, August 15, 2024

 Heather Knight, The New York Times; San Francisco Moves to Lead Fight Against Deepfake Nudes

"Instead, the lawsuit seeks to shutter the sites and permanently restrain those operating them from creating deepfake pornography in the future, and assess civil penalties and attorneys’ fees. On the question of jurisdiction, the suit argues that the sites violate state and federal revenge-pornography laws, state and federal child-pornography laws, and the California Unfair Competition Law, which prohibits unlawful and unfair business practices.

San Francisco is a fitting venue, the lawyers argued, as it is ground zero for the growing artificial intelligence industry. Already, people in the city can order driverless vehicles from their phones to whisk them around town, and the industry’s leaders, including OpenAI and Anthropic, are based there.

Mr. Chiu says he thinks the industry has largely had a positive effect on society, but the issue of deepfake pornography has highlighted one of its “dark sides.”

Keeping pace with the rapidly changing industry as a government lawyer is daunting, he said. “But that doesn’t mean we shouldn’t try.”"

Sasse’s spending spree: Former UF president channeled millions to GOP allies, secretive contracts; The Independent Florida Alligator, August 12, 2024

Garrett Shanley, The Independent Florida Alligator; Sasse’s spending spree: Former UF president channeled millions to GOP allies, secretive contracts

"In his 17-month stint as UF president, Ben Sasse more than tripled his office’s spending, directing millions in university funds into secretive consulting contracts and high-paying positions for his GOP allies.

Sasse ballooned spending under the president’s office to $17.3 million in his first year in office — up from $5.6 million in former UF President Kent Fuchs’ last year, according to publicly available administrative budget data.

A majority of the spending surge was driven by lucrative contracts with big-name consulting firms and high-salaried, remote positions for Sasse’s former U.S. Senate staff and Republican officials.

Sasse’s consulting contracts have been kept largely under wraps, leaving the public in the dark about what the contracted firms did to earn their fees. The university also declined to clarify specific duties carried out by Sasse’s ex-Senate staff, several of whom were salaried as presidential advisers."

Thursday, August 15, 2024

This Code Breaker Is Using AI to Decode the Heart’s Secret Rhythms; Wired, August 15, 2024

 Amit Katwala, Wired; This Code Breaker Is Using AI to Decode the Heart’s Secret Rhythms

"There’s an AI boom in health care, and the only thing slowing it down is a lack of data."

Surviving Putin's gulag: Vladimir Kara-Murza tells his story; The Washington Post, August 14, 2024

Today’s show was produced by Charla Freeland. It was edited by Allison Michaels and Damir Marusic and mixed by Emma Munger, The Washington Post; Surviving Putin's gulag: Vladimir Kara-Murza tells his story

"Pulitzer Prize winner Vladimir Kara-Murza, who was part of August’s massive prisoner exchange with Russia, sat down to talk with Post Opinions editor David Shipley about his time in jail, the importance of freedom of speech and what the future holds for Putin’s regime."

Russian court jails US-Russian woman for 12 years over $50 charity donation; The Guardian, August 15, 2024

Associated Press via The Guardian; Russian court jails US-Russian woman for 12 years over $50 charity donation

"A Russian court on Thursday sentenced the US-Russian dual national Ksenia Khavana to 12 years in prison on a treason conviction for allegedly raising money for the Ukrainian military.

The rights group the First Department said the charges stemmed from a $51 (£40) donation to a US charity that helps Ukraine.

Khavana, whom Russian authorities identify by her birth name of Karelina, was arrested in Ekaterinburg in February. She pleaded guilty in her closed trial last week, news reports said.

Khavana reportedly obtained US citizenship after marrying an American and moving to Los Angeles. She had returned to Russia to visit her family."

Artists Score Major Win in Copyright Case Against AI Art Generators; The Hollywood Reporter, August 13, 2024

 Winston Cho, The Hollywood Reporter; Artists Score Major Win in Copyright Case Against AI Art Generators

"Artists suing generative artificial intelligence art generators have cleared a major hurdle in a first-of-its-kind lawsuit over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems, with a federal judge allowing key claims to move forward.

U.S. District Judge William Orrick on Monday advanced all copyright infringement and trademark claims in a pivotal win for artists. He found that Stable Diffusion, Stability’s AI tool that can create hyperrealistic images in response to a prompt of just a few words, may have been “built to a significant extent on copyrighted works” and created with the intent to “facilitate” infringement. The order could entangle in the litigation any AI company that incorporated the model into its products."

Monday, August 12, 2024

Artificial Intelligence in the pulpit: a church service written entirely by AI; United Church of Christ, July 16, 2024

United Church of Christ; Artificial Intelligence in the pulpit: a church service written entirely by AI

"Would you attend a church service if you knew that it was written entirely by an Artificial Intelligence (AI) program? What would your thoughts and feelings be about this use of AI?

That’s exactly what the Rev. Dwight Lee Wolter wanted to know — and he let his church members at the Congregational Church of Patchogue on Long Island, New York, know that was what he was intending to do on Sunday, July 14. He planned a service that included a call to worship, invocation, pastoral prayer, scripture reading, sermon, hymns, prelude, postlude and benediction with the use of ChatGPT. ChatGPT is a free AI program developed by OpenAI, an Artificial Intelligence research company and released in 2022.

Taking fear and anger out of exploration

“My purpose is to take the fear and anger out of AI exploration and replace it with curiosity, flexibility and open-mindfulness,” said Wolter. “If, as widely claimed, churches need to adapt to survive, we might not recognize the church in 20 years if we could see it now; then AI will be a part of the church of the future. No matter what we presently think of it, it will be present in the future doing a lot of the thinking for us.”...

Wolter intends to follow up Sunday’s service with a reflection about how it went. On July 21, he will give a sermon about AI, with people offering input about the AI service. “We will discuss their reactions, feelings, thoughts, likes and dislikes, concerns and questions.” Wolter will follow with his synopsis sharing the benefits, criticisms, fears and concerns of AI...

Wolter believes we need to “disarm contempt prior to investigation,” when it comes to things like Artificial Intelligence. “AI is not going anywhere. It’s a tool–and with a shortage of clergy, money and volunteers, we will continue to rely on it.”"

Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization; Religion News Service, May 6, 2024

Aleja Hertzler-McCain, Religion News Service; Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization

"San Jose, California, Bishop Oscar Cantú, who leads the Catholic faithful in Silicon Valley, said that AI doesn’t come up much with parishioners in his diocese...

Pointing to the adage coined by Meta founder Mark Zuckerberg, “move fast and break things,” the bishop said, “with AI, we need to move very cautiously and slowly and try not to break things. The things we would be breaking are human lives and reputations.”...

Noreen Herzfeld, a professor of theology and computer science at St. John’s University and the College of St. Benedict and one of the editors of a book about AI sponsored by the Vatican Dicastery for Culture and Education, said that the AI character was previously “impersonating a priest, which is considered a very serious sin in Catholicism.”...

Accuracy issues, Herzfeld said, are one of many reasons it should not be used for evangelization. “As much as you beta test one of these chatbots, you will never get rid of hallucinations” — moments when the AI makes up its own answers, she said...

Larrey, who has been studying AI for nearly 30 years and is in conversation with Sam Altman, the CEO of OpenAI, is optimistic that the technology will improve. He said Altman is already making progress on the hallucinations, on its challenges to users’ privacy and reducing its energy use — a recent analysis estimated that by 2027, artificial intelligence could suck up as much electricity as the population of Argentina or the Netherlands."

Sunday, August 11, 2024

Pueblo artist seeking copyright protection for AI-generated work; The Gazette, August 8, 2024

O'Dell Isaac, The Gazette; Pueblo artist seeking copyright protection for AI-generated work

"“We’re done with the Copyright Office,” he said. “Now we’re going into the court system.”

Allen said he believes his case raises two essential questions: What is art? And if a piece doesn’t belong to the artist, whom does it belong to?

Tara Thomas, director of the Bemis School of Arts at Colorado College, said the answers may not be clear-cut.

“There was a similar debate at the beginning of photography,” Thomas said. "Was it the camera, or was it the person taking the photos? Is the camera the artmaker, or is it a tool?”

Allen said it took more than two decades for photography to gain acceptance as an art form.

“We’re at a similar place in AI art,” he said. 

“Right now, there is a massive stigma surrounding AI, far more so than there was with photography, so the challenge is much steeper. It is that very stigma that is contributing to the stifling of innovation. Why would anybody want to incorporate AI art into their workflow if they knew they couldn’t protect their work?”"

Dave Eggers’ Novel Was Banned From South Dakota Schools. In a New Documentary, the Community Fights Back (Exclusive); People, August 10, 2024

Carly Tagen-Dye, People; Dave Eggers’ Novel Was Banned From South Dakota Schools. In a New Documentary, the Community Fights Back (Exclusive)

"Bestselling author Dave Eggers wasn’t expecting to learn that his 2013 dystopian novel, The Circle, was removed from high schools in Rapid City, S.D. What's more, Eggers' book, along with four others, was designated “to be destroyed” by the school board as well.

“It was new to me, although the other authors that were banned have had the books banned again and again,” Eggers tells PEOPLE.

The decision to ban The Circle, as well as The Perks of Being a Wallflower by Stephen Chbosky, How Beautiful We Were by Imbolo Mbue, Fun Home by Alison Bechdel and Girl, Woman, Other by Bernardine Evaristo, is the subject of the documentary To Be Destroyed, premiering on MSNBC on Aug. 11 as part of Trevor Noah's "The Turning Point" series. Directed by Arthur Bradford, the film follows Eggers during his travels to Rapid City, where he met with the teachers and students on the frontlines of the book banning fight."

Should artists be terrified of AI replacing them?; The Guardian, August 11, 2024

The Guardian; Should artists be terrified of AI replacing them?

"Interviewing those at the techno-cultural vanguard, including Herndon, Dryhurst and Maclean, has given me some sense of peace. I realise that I have been hanging on to 20th-century notions of art practice and the cultural landscape, one where humans spent months and years writing, painting, recording and filming works that defined the culture of our species. They provided meaning, distraction, wellbeing. A reason to exist. Making peace may mean letting go of these historical notions, finding new meaning. While digitally generatable media is increasingly becoming the domain of AI, for example, might performance and tactile artforms, such as live concerts, theatre and sculpture, be reinvigorated?"

Friday, August 9, 2024

Utah outlaws books by Judy Blume and Sarah J Maas in first statewide ban; The Guardian, August 7, 2024

The Guardian; Utah outlaws books by Judy Blume and Sarah J Maas in first statewide ban

"Books by Margaret Atwood, Judy Blume, Rupi Kaur and Sarah J Maas are among 13 titles that the state of Utah has ordered to be removed from all public school classrooms and libraries.

This marks the first time a state has outlawed a list of books statewide, according to PEN America’s Jonathan Friedman, who oversees the organisation’s free expression programs.

The books on the list were prohibited under a new law requiring all of Utah’s public school districts to remove books if they are banned in either three districts, or two school districts and five charter schools. Utah has 41 public school districts in total.

The 13 books could be banned under House bill 29, which became effective from 1 July, because they were considered to contain “pornographic or indecent” material. The list “will likely be updated as more books begin to meet the law’s criteria”, according to PEN America.

Twelve of the 13 titles were written by women. Six books by Maas, a fantasy author, appear on the list, along with Oryx and Crake by Atwood, Milk and Honey by Kaur and Forever by Blume. Two books by Ellen Hopkins appear, as well as Elana K Arnold’s What Girls Are Made Of and Craig Thompson’s Blankets.

Implementation guidelines say that banned materials must be “legally disposed of” and “may not be sold or distributed”."