Friday, August 30, 2024

Major publishers sue Florida over ‘unconstitutional’ school book ban; The Guardian, August 30, 2024

The Guardian; Major publishers sue Florida over ‘unconstitutional’ school book ban

"Six major book publishers have teamed up to sue the US state of Florida over an “unconstitutional” law that has seen hundreds of titles purged from school libraries following rightwing challenges.

The landmark action targets the “sweeping book removal provisions” of House Bill 1069, which required school districts to set up a mechanism for parents to object to anything they considered pornographic or inappropriate.

A central plank of Republican governor Ron DeSantis’s war on “woke” on Florida campuses, the law has been abused by rightwing activists who quickly realized that any book they challenged had to be immediately removed and replaced only after the exhaustion of a lengthy and cumbersome review process, if at all, the publishers say.

Since it went into effect last July, countless titles have been removed from elementary, middle and high school libraries, including American classics such as Brave New World by Aldous Huxley, For Whom the Bell Tolls by Ernest Hemingway and The Adventures of Tom Sawyer by Mark Twain.

Contemporary novels by bestselling authors such as Margaret Atwood, Judy Blume and Stephen King have also been removed, as well as The Diary of a Young Girl, Anne Frank’s gripping account of the Holocaust, according to the publishers."

Breaking Up Google Isn’t Nearly Enough; The New York Times, August 27, 2024

The New York Times; Breaking Up Google Isn’t Nearly Enough

"Competitors need access to something else that Google monopolizes: data about our searches. Why? Think of Google as the library of our era; it’s the first stop we go to when seeking information. Anyone who wants to build a rival library needs to know what readers are looking for in order to stock the right books. They also need to know which books are most popular, and which ones people return quickly because they’re no good."

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

 Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."

The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day; The Conversation, August 29, 2024

 Director of the Center for Health Law, Ethics & Human Rights, Boston University, The Conversation; The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day

"I remain a strong supporter of the Nuremberg Code and believe that following its precepts is both an ethical and a legal obligation of physician researchers. Yet the public can’t expect Nuremberg to protect it against all types of scientific research or weapons development. 

Soon after the U.S. dropped atomic bombs over Hiroshima and Nagasaki – two years before the Nuremberg trials began – it became evident that our species was capable of destroying ourselves. 

Nuclear weapons are only one example. Most recently, international debate has focused on new potential pandemics, but also on “gain-of-function” research, which sometimes adds lethality to an existing bacteria or virus to make it more dangerous. The goal is not to harm humans but rather to try to develop a protective countermeasure. The danger, of course, is that a super harmful agent “escapes” from the laboratory before such a countermeasure can be developed.

I agree with the critics who argue that at least some gain-of-function research is so dangerous to our species that it should be outlawed altogether. Innovations in artificial intelligence and climate engineering could also pose lethal dangers to all humans, not just some humans. Our next question is who gets to decide whether species-endangering research should be done, and on what basis?"

Disinformation, Trust, and the Role of AI: Threats to Health & Democracy; The Hastings Center, September 9, 2024

The Hastings Center; Disinformation, Trust, and the Role of AI: Threats to Health & Democracy

"Join us for The Daniel Callahan Annual Lecture, hosted by The Hastings Center at Rockefeller University’s beautiful campus in New York. Hastings Center President Vardit Ravitsky will moderate a discussion with experts Reed Tuckson and Timothy Caulfield on disinformation, trust, and the role of AI, focusing on current and future threats to health and democracy. The event will take place on Monday, September 9, 5 pm. Learn more and register...

A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists
Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science
Timothy Caulfield, LLB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta; Best-selling author & TV host

Moderator:
Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

The Ethics of Developing Voice Biometrics; The New York Academy of Sciences, August 29, 2024

Nitin Verma, PhD, The New York Academy of Sciences; The Ethics of Developing Voice Biometrics

"Juana Catalina Becerra Sandoval, a PhD candidate in the Department of the History of Science at Harvard University and a research scientist in the Responsible and Inclusive Technologies initiative at IBM Research, presented as part of The New York Academy of Sciences’ (the Academy) Artificial Intelligence (AI) & Society Seminar series. The lecture – titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies” – explored the ethical implications associated with the development and use of AI-based tools such as voice biometrics. After the presentation, Juana sat down with Nitin Verma, PhD, a member of the Academy’s 2023 cohort of the AI & Society Fellowship, to further discuss the promises and challenges society faces as AI continues to evolve."

California advances landmark legislation to regulate large AI models; AP, August 28, 2024

TRÂN NGUYỄN, AP; California advances landmark legislation to regulate large AI models

"Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance of reining in the technology and its potential risks without stifling the booming homegrown industry. 

California, home of 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

After a decade of free Alexa, Amazon now wants you to pay; The Washington Post, August 27, 2024

The Washington Post; After a decade of free Alexa, Amazon now wants you to pay

"There was a lot of optimism in the 2010s that digital assistants like Alexa, Apple’s Siri and Google Assistant would become a dominant way we interact with technology, and become as life-changing as smartphones have been.

Those predictions were mostly wrong. The digital assistants were dumber than the companies claimed, and it’s often annoying to speak commands rather than type on a keyboard or tap on a touch screen...

If you’re thinking there’s no chance you’d pay for an AI Alexa, you should see how many people subscribe to OpenAI’s ChatGPT...

The mania over AI is giving companies a new selling point to upcharge you. It’s now in your hands whether the promised features are worth it, or if you can’t stomach any more subscriptions."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

World Intellectual Property Organization Adopts Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge; WilmerHale, August 26, 2024

WilmerHale; World Intellectual Property Organization Adopts Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge

"Following nearly twenty-five years of negotiations, members of the World Intellectual Property Organization (WIPO) recently adopted a treaty implementing the new requirement for international patent applicants to disclose in their applications any Indigenous Peoples and/or communities that provided traditional knowledge on which the applicant drew in creating the invention sought to be patented. The treaty was adopted at WIPO’s “Diplomatic Conference to Conclude an International Legal Instrument Relating to Intellectual Property, Genetic Resources, and Traditional Knowledge Associated with Genetic Resources,” which was held May 13–24. The goal of the treaty, known as the WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge, is to “prevent patents from being granted erroneously for inventions that are not novel or inventive with regard to genetic resources and traditional knowledge associated with genetic resources.” This treaty—the first treaty of its kind, linking intellectual property and Indigenous Peoples—also aims to “enhance the efficacy, transparency and quality of the patent system with regard to genetic resources and traditional knowledge associated with genetic resources.”

Once the treaty is ratified, patent applicants will have new (but nonretroactive) disclosure requirements for international patent applications."

EXAMINING THE WORKS OF C.S. LEWIS: CRITICAL THINKING AND ETHICS; United States Air Force Academy, August 26, 2024

Randy Roughton, U.S. Air Force Academy Strategic Communications, United States Air Force Academy; EXAMINING THE WORKS OF C.S. LEWIS: CRITICAL THINKING AND ETHICS

"Twentieth-century author C.S. Lewis’s books dominate the top shelf in Dr. Adam Pelser’s office. Pelser, who was recently recognized as an Inaugural Fellow of the Inklings Project, has used Lewis’ work to teach critical thinking skills and ethics in his Department of Philosophy course since 2018...

Reading with a critical eye

In Pelser’s course, cadets evaluate and discuss the philosophical arguments and themes in some of Lewis’s most influential non-fiction books and essays. They also observe how Lewis interacted with the philosophers and philosophies of his era, including the Oxford philosopher Elizabeth Anscombe, and the most noteworthy philosophers in history such as Aristotle, Plato, Immanuel Kant and David Hume.

Cadets read a series of Lewis books and learn to approach them with “a critical eye,” Pelser said. Like their professor, the cadets can raise their objections to Lewis’s arguments and study how the author interacted with his era’s other great thinkers...

Pelser has four goals for each course. First, he wants to deepen an understanding of the philosophical themes in Lewis’ writings. Second is a deeper understanding of the historical and contemporary philosophical influences on Lewis’s thought. The third goal is for cadets to learn to identify and summarize theses and arguments in philosophical texts. Finally, he wants each cadet to write and think through arguments carefully and clearly.

“A major critical thinking component is the dialogue in class when we push each other and challenge ideas,” Pelser said. “That is an important skill they learn in our course.”"

Chicago Public Library Debuts Initiative Offering Ebooks to the City’s Visitors During Special Events; Library Journal, August 23, 2024

Matt Enis, Library Journal; Chicago Public Library Debuts Initiative Offering Ebooks to the City’s Visitors During Special Events

"“Access to knowledge and information is the foundation of a thriving, equitable, and democratic city,” Mayor Johnson said in an announcement. “Thanks to Chicago Public Library and our dedicated librarians, we’re making this powerful initiative possible, ensuring that everyone in Chicago has the opportunity to learn, grow, and connect through universal access to literature.”"

A Good Way for ALA; American Libraries, July 24, 2024

Cindy Hohl, American Libraries; A Good Way for ALA

"As the first Dakota president and Spectrum Scholar representing the 1% of Indigenous librarians, I will reaffirm that diversifying the field remains overdue. We need to focus on creating opportunities for our colleagues to be represented across every library type in this field. When leaders come together to support the entire community, that act of selfless service elevates collective goodwill among us. The same is true for work life. When we remember what our ancestors taught us and use those teachings to make informed decisions, we can avoid pitfalls along the path toward equitable service.

We also must have the goal of eliminating acts of censorship. On June 2, 1924, the Indian Citizenship Act was passed, granting us dual citizenship. Also known as the Snyder Act, it provided Native Americans with new identities in a step toward equality. While voting credentials were provided to some, several states decided to withhold the same rights from Native American women. Even as the remaining states finally provided voting privileges by 1975, barriers remain today in rural areas where polling locations are out of reach or tribally issued identification cards are not considered an acceptable form of identification by states.

Access to libraries can also be a challenge in these rural areas. We have the ability to accept tribal IDs for library access and create sustainable employment opportunities to ensure success without barriers. That way no one is left behind when acts of censorship are creating a division among us. If we work together in this way, everyone can see themselves written in stories, their voices can be heard, and no one is silenced.

Our core values help us see that what one holds sacred is a touchstone in advancing this work as we strive to serve everyone in ­#AGoodWay together."

Sunday, August 25, 2024

‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world; The Guardian, August 24, 2024

The Guardian; ‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world

"Would having even more information make things better – or worse? We will soon find out. Numerous corporations and governments are in a race to develop the most powerful information technology in history – AI. Some leading entrepreneurs, such as the American investor Marc Andreessen, believe that AI will finally solve all of humanity’s problems. On 6 June 2023, Andreessen published an essay titled Why AI Will Save the World, peppered with bold statements such as: “I am here to bring the good news: AI will not destroy the world, and in fact may save it.” He concluded: “The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.”

Others are more sceptical. Not only philosophers and social scientists but also many leading AI experts and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction. Last year, close to 30 governments – including those of China, the US and the UK – signed the Bletchley declaration on AI, which acknowledged that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”. By using such apocalyptic terms, experts and governments have no wish to conjure a Hollywood image of rebellious robots running in the streets and shooting people. Such a scenario is unlikely, and it merely distracts people from the real dangers.

AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs. AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control."

Friday, August 23, 2024

Crossroads: Episode 2 - AI and Ethics; Crossroads from Washington National Cathedral, April 17, 2024

Crossroads from Washington National Cathedral; Crossroads: Episode 2 - AI and Ethics

"Tune in for the Cathedral's first conversation on AI and ethics. Whether you are enthusiastically embracing it, reluctantly trying it out, or anxious about its consequences, AI has taken our world by storm and according to the experts, it is here to stay. Dr. Joseph Yun, CEO of Bluefoxlabs.ai and AI architect for the University of Pittsburgh, the Rev. Jo Nygard Owens, the Cathedral's Pastor for Digital Ministry, and Dr. Sonia Coman, the Cathedral's Director of Digital Engagement discuss the state of AI, its risks, and the hope it can bring to the world."

U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents; The New York Times, August 23, 2024

Danielle Kaye and Lauren Hirsch, The New York Times; U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents

"The Justice Department filed an antitrust lawsuit on Friday against the real estate software company RealPage, alleging its software enabled landlords to collude to raise rents across the United States.

The suit, joined by North Carolina, California, Colorado, Connecticut, Minnesota, Oregon, Tennessee and Washington, accuses RealPage of facilitating a price-fixing conspiracy that boosted rents beyond market forces for millions of people. It’s the first major civil antitrust lawsuit where the role of an algorithm in pricing manipulation is central to the case, Justice Department officials said."

The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws; Wired, August 21, 2024

 Lily Hay Newman, Wired; The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

"At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies."

Wednesday, August 21, 2024

Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns; Medscape, August 21, 2024

Arthur L. Caplan, PhD, Medscape; Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns

"On the other hand, there are clearly many ethical issues about creating an artificial version of yourself. One obvious issue is how accurate this AI version of you will be if the death bot can create information that sounds like you, but really isn't what you would have said, despite the effort to glean it from recordings and past information about you. Is it all right if people wander from the truth in trying to interact with someone who's died? 

There are other ways to leave memories behind. You certainly can record messages so that you can control the content. Many people video themselves and so on. There are obviously people who would say that they have a diary or have written information they can leave behind. 

Is there a place in terms of accuracy for a kind of artificial version of ourselves to go on forever? Another interesting issue is who controls that. Can you add to it after your death? Can information be shared about you with third parties who don't sign up for the service? Maybe the police take an interest in how you died. You can imagine many scenarios where questions might come up about wanting to access these data that the artificial agent is providing. 

Some people might say that it's just not the way to grieve. Maybe the best way to grieve is to accept death and not try to interact with a constructed version of yourself once you've passed. That isn't really accepting death. It's a form, perhaps, of denial of death, and maybe that isn't going to be good for the mental health of survivors who really have not come to terms with the fact that someone has passed on."

Startup using blockchain to prevent copyright theft by AI is valued over $2 billion after fresh funding; CNBC, August 21, 2024

Ryan Browne, CNBC; Startup using blockchain to prevent copyright theft by AI is valued over $2 billion after fresh funding

"San-Francisco-based startup Story said Wednesday that it raised $80 million of funding for a blockchain designed to prevent artificial intelligence makers like OpenAI from taking creators’ intellectual property without permission."

Tuesday, August 20, 2024

WATCH: How Drones Are Saving Lives in Rural America; Government Technology, August 19, 2024

Nikki Davidson , Government Technology; WATCH: How Drones Are Saving Lives in Rural America

"Until recently, deputies in rural Manitowoc County, Wis., faced a challenge: responding to calls with limited visibility and resources. Traditional policing methods often left them at a disadvantage in vast, wooded areas. 

In June 2022, the Manitowoc County Sheriff’s Office embarked on a mission to integrate unmanned aerial vehicles (UAVs) into their law enforcement operations to give them an eye in the sky. Two lieutenants, Travis Aleff and Kyle Stotzheim, were tasked with spearheading the initiative, working “non-stop” for half a year to establish a fully operational drone team with 13 FAA-certified pilots.

Initially there were a lot of questions about the program’s cost-effectiveness and whether the investment in drones would yield tangible benefits...

To understand the real-world impact of drones in law enforcement, we requested examples from the sheriff’s office, complete with video footage. They provided three compelling cases, each demonstrating a different facet of how UAVs can revolutionize police work and enhance public safety.

DRONES AS A LIFELINE: ENHANCING MENTAL HEALTH CRISIS RESPONSE


One example highlights the potential of drones to aid in mental health crisis response. The Manitowoc County Sheriff’s Office received a call concerning a suicidal, armed individual who intended to harm themselves in a densely wooded county park. Watch the video below to see how the UAV was used as a tool to defuse and safely resolve the situation."

ABC, Kimmel Defeat George Santos Cameo Video Copyright Suit; Bloomberg Law, August 19, 2024

Kyle Jahner, Bloomberg Law; ABC, Kimmel Defeat George Santos Cameo Video Copyright Suit

"Jimmy Kimmel and ABC defeated former Rep. George Santos’ copyright lawsuit as a New York federal court found use of his Cameo videos on television constituted fair use."

Where AI Thrives, Religion May Struggle; Chicago Booth Review, March 26, 2024

 Jeff Cockrell, Chicago Booth Review; Where AI Thrives, Religion May Struggle

"The United States has seen one of the biggest drops: the share of its residents who said they belonged to a church, synagogue, or mosque fell from 70 percent in 1999 to 47 percent in 2020, according to Gallup.

One potential factor is the proliferation of artificial intelligence and robotics, according to a team of researchers led by Chicago Booth’s Joshua Conrad Jackson and Northwestern’s Adam Waytz. The more exposed people are to automation technologies, the researchers find, the weaker their religious beliefs. They argue that the relationship is not coincidental and that “there are meaningful properties of automation which encourage religious decline.”

Researchers and philosophers have pondered the connection between science and religion for many years. The German sociologist Max Weber spoke of science contributing to the “disenchantment of the world,” or the replacement of supernatural explanations for the workings of the universe with rational, scientific ones. Evidence from prior research doesn’t support a strong “disenchantment” effect, Jackson says, but he and his coresearchers suggest that AI and robotics may influence people’s beliefs in a way that science more generally does not."

Authors sue Claude AI chatbot creator Anthropic for copyright infringement; AP, August 19, 2024

 MATT O’BRIEN, AP; Authors sue Claude AI chatbot creator Anthropic for copyright infringement

"A group of authors is suing artificial intelligence startup Anthropic, alleging it committed “large-scale theft” in training its popular chatbot Claude on pirated copies of copyrighted books.

While similar lawsuits have piled up for more than a year against competitor OpenAI, maker of ChatGPT, this is the first from writers to target Anthropic and its Claude chatbot.

The smaller San Francisco-based company — founded by ex-OpenAI leaders — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way...

The lawsuit was brought by a trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who are seeking to represent a class of similarly situated authors of fiction and nonfiction...

What links all the cases is the claim that tech companies ingested huge troves of human writings to train AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works. The legal challenges are coming not just from writers but visual artists, music labels and other creators who allege that generative AI profits have been built on misappropriation...

But the lawsuit against Anthropic accuses it of using a dataset called The Pile that included a trove of pirated books. It also disputes the idea that AI systems are learning the way humans do."

He Regulated Medical Devices. His Wife Represented Their Makers.; The New York Times, August 20, 2024

The New York Times; He Regulated Medical Devices. His Wife Represented Their Makers.

"For 15 years, Dr. Jeffrey E. Shuren was the federal official charged with ensuring the safety of a vast array of medical devices including artificial knees, breast implants and Covid tests.

When he announced in July that he would be retiring from the Food and Drug Administration later this year, Dr. Robert Califf, the agency’s commissioner, praised him for overseeing the approval of more novel devices last year than ever before in the nearly half-century history of the device division.

But the admiration for Dr. Shuren is far from universal. Consumer advocates see his tenure as marred by the approval of too many devices that harmed patients and by his own close ties to the $500 billion global device industry.

One connection stood out: While Dr. Shuren regulated the booming medical device industry, his wife, Allison W. Shuren, represented the interests of device makers as the co-leader of a team of lawyers at Arnold & Porter, one of Washington’s most powerful law firms."

Monday, August 19, 2024

Trump posts deepfakes of Swift, Harris and Musk in effort to shore up support; The Guardian, August 19, 2024

The Guardian; Trump posts deepfakes of Swift, Harris and Musk in effort to shore up support

"Donald Trump shared several AI-generated images of Taylor Swift and her fans vowing their support for his presidential campaign on Sunday, reposting them with the caption “I accept!” on his Truth Social platform. The deepfakes are part of a slew of images made with artificial intelligence that the former president has disseminated in recent days straddling the line between parody and outright election disinformation."

Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city; The Washington Post, August 19, 2024

Jenna Sampson, The Washington Post; Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city

"Miller made this pitch at a county library in Wyoming’s capital on a recent summer Friday, with a few friends and family filling otherwise empty rows of chairs. Before the sparse audience, he vowed to run the city of Cheyenne exclusively with an AI bot he calls “VIC” for “Virtual Integrated Citizen.”

AI experts say the pledge is a first for U.S. campaigns and marks a new front in the rapid emergence of the technology. Its implications have stoked alarm among officials and even tech companies...

The day before, Miller had scrambled to get VIC working after OpenAI, the technology company behind generative-AI tools like ChatGPT, shut down his account, citing policies against using its products for campaigning. Miller quickly made a second ChatGPT bot, allowing him to hold the meet-and-greet almost exactly as planned.

It was just the latest example of Miller’s skirting efforts against his campaign by the company that makes the AI technology and the regulatory authorities that oversee elections...

“While OpenAI may have certain policies against using its model for campaigning, other companies do not, so it makes shutting down the campaign nearly impossible.”"

New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'; The Law Journal Editorial Board via Law.com, August 16, 2024

The Law Journal Editorial Board via Law.com; New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'

"On July 29, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on generative artificial intelligence tools. The opinion follows on such opinions and guidance from several state bar associations, as well as similar efforts by non-U.S. bars and regulatory bodies around the world...

Focused on GAI, the opinion addresses six core principles: competence, confidentiality, communication, meritorious claims and candor to tribunal, supervision and fees...

What is not commonly understood, perhaps, is that GAI “hallucinates,” and generates content...

Not addressed in the opinion is whether GAI is engaged in the practice of law...

At the ABA annual meeting, representatives of more than 20 “foreign” bars participated in a roundtable on GAI. In a world of cross-border practice, there was a desire for harmonization."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"