Friday, December 27, 2024

New Course Creates Ethical Leaders for an AI-Driven Future; George Mason University, December 10, 2024

Buzz McClain, George Mason University; New Course Creates Ethical Leaders for an AI-Driven Future

"While the debates continue over artificial intelligence’s possible impacts on privacy, economics, education, and job displacement, perhaps the largest question regards the ethics of AI. Bias, accountability, transparency, and governance of the powerful technology are aspects that have yet to be fully answered.

A new cross-disciplinary course at George Mason University is designed to prepare students to tackle the ethical, societal, and governance challenges presented by AI. The course, AI: Ethics, Policy, and Society, will draw expertise from the Schar School of Policy and Government, the College of Engineering and Computing(CEC), and the College of Humanities and Social Sciences (CHSS).

The master’s degree-level course begins in spring 2025 and will be taught by Jesse Kirkpatrick, a research associate professor in the CEC, the Department of Philosophy, and codirector of the Mason Autonomy and Robotics Center

The course is important now, said Kirkpatrick, because “artificial intelligence is transforming industries, reshaping societal norms, and challenging long-standing ethical frameworks. This course provides critical insights into the ethical, societal, and policy implications of AI at a time when these technologies are increasingly deployed in areas like healthcare, criminal justice, and national defense.”"

Why ‘A Christmas Carol’ Endures; The New York Times, December 24, 2024

Roger Rosenblatt , The New York Times; Why ‘A Christmas Carol’ Endures

"In some ways, the story’s enduring appeal is easy to account for. “A Christmas Carol” is, first and foremost, a ghost story — a genre that never seems to go out of fashion. But what’s less easy to account for, and more interesting, is how this 19th-century tale has continued to speak to modern readers, offering moral lessons that have only grown more relevant over the decades.

At its core, it is a story about the forces that exist within all of us: greed and generosity, hatred and love, repentance and forgiveness. It doesn’t hurt that it concerns one of literature’s most compelling characters: Ebenezer Scrooge."

After 55 Years, The MCU's Villains Still Follow The Golden Rule Stan Lee Insisted Should Never Be Broken; ScreenRant, December 24, 2024

 , ScreenRant; After 55 Years, The MCU's Villains Still Follow The Golden Rule Stan Lee Insisted Should Never Be Broken

"And all of these villains having some degree of nuance actually stems back to a rule set up by the father of modern Marvel Comics, Stan Lee. One of Stan's greatest contributions to the world, beyond his incredible characters, was the regular Soapbox columns he would include in the comics, where he would wax philosophical and share his personal ideas and values with the world. And in one such Soapbox column from March 1969, Stan shared this sentiment about the essence of heroes and villains:

"One of the things we try to demonstrate in our yarns is that nobody is all good, or all bad. Even a shoddy super-villain can have a redeeming trait, just as any howlin’ hero might have his nutty hang-ups. One of the greatest barriers to real peace and justice in this troubled world is the feeling that everyone on the other side of the ideological fence is a “bad guy”. We don’t know if you’re a far-out radical, or Mr. Establishment himself — if you’re a black militant or a white liberal — if you’re a pantin’ protest marcher or a jolly John Bircher — but, whatever you are, don’t get bogged down by kindergarten labels! It’s time we learned how fruitless it is to think in terms of us and them — of black and white. Maybe, just maybe, the other side isn’t all bad. Maybe your own point of view isn’t the only one that’s divinely inspired. Maybe we’ll never find true understanding until we listen to the other guy; and until we realize that we can never march across the Rainbow Bridge to true Nirvana — unless we do it side-by-side!""

‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years; The Guardian, December 27, 2024

, The Guardian; ‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years

"The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades...

Hinton is one of the three “godfathers of AI” who have won the ACM AM Turing award – the computer science equivalent of the Nobel prize – for their work. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said AI “could actually save humanity from extinction”."

Why ethics is becoming AI's biggest challenge; ZDNet, December 27, 2024

 Joe McKendrick, ZDNet ; Why ethics is becoming AI's biggest challenge

"Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.

To build responsibly curated AI models, "you need a team composed of more than just data scientists," Boinodiris said. "For decades, we've been communicating that those who don't have traditional domain expertise don't belong in the room. That's a huge misstep."

"It's also notable that well-curated AI models "are also more accurate models," she added. To achieve this, "the team designing the model should be multidisciplinary rather than siloed." The ideal AI team should include "linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds," she urged. "The wider the variety, the better." Team members are needed to weigh in on the following types of questions:

  • "Is this AI solving the problem we need it to?"
  • "Is this even the right data according to domain experts?"
  • "What are the unintended effects of AI?"
  • "How can we mitigate those effects?""

Character.AI Confirms Mass Deletion of Fandom Characters, Says They're Not Coming Back; Futurism, November 27, 2024

MAGGIE HARRISON DUPRÉ , Futurism; Character.AI Confirms Mass Deletion of Fandom Characters, Says They're Not Coming Back

"The embattled AI companion company Character.AI confirmed to Futurism that it removed a large number of characters from its platform, citing its adherence to the Digital Millennium Copyright Act (DCMA) and copyright law, but failing to say whether the deletions were proactive or in response to requests from the holders of the characters' intellectual property rights...

That's not surprising: Character.AI is currently facing a lawsuit brought by the family of a 14-year-old teenager in Florida who died by suicide after forming an intense relationship with a Daenerys Targaryen chatbot on its platform...

It's been a bad few months for Character.AI. In October, shortly before the recent lawsuit was filed, it was revealed that someone had created a chatbot based on a murdered teenager without consent from the slain teen's family. (The character was removed and Character.AI apologized, as AdWeek first reported.) And in recent weeks, we've reported on disturbing hordes of suicidepedophilia, and eating disorder-themed chatbots hosted by the platform, all of which were freely accessible to Character.AI users of all ages."

‘2073’ Review: Back to the Future; The New York Times, December 26, 2024

 , The New York Times; ‘2073’ Review: Back to the Future

"The existential questions guiding “2073,” Asif Kapadia’s audacious exercise in futurism, are broad and familiar ones. How did we get here? What does our future look like? How can we change our current course toward a brighter one?...

Big Tech, climate catastrophe, autocracy — these are the hallmarks of Kapadia’s vision of the future, and they each receive an origin story of sorts in the nonfiction portions of his film. Montages of archival footage are paired with expert commentary on how the issues are correlated, and the bleak future they presage. Kapadia also profiles a handful of female journalists, who, alongside the film’s array of villains, emerge as spirited heroes offering an iota of hope to counter the feeling of impending doom."

The AI Boom May Be Too Good to Be True; Wall Street Journal, December 26, 2024

 Josh Harlan, Wall Street Journal; The AI Boom May Be Too Good to Be True

 "Investors rushing to capitalize on artificial intelligence have focused on the technology—the capabilities of new models, the potential of generative tools, and the scale of processing power to sustain it all. What too many ignore is the evolving legal structure surrounding the technology, which will ultimately shape the economics of AI. The core question is: Who controls the value that AI produces? The answer depends on whether AI companies must compensate rights holders for using their data to train AI models and whether AI creations can themselves enjoy copyright or patent protections.

The current landscape of AI law is rife with uncertainty...How these cases are decided will determine whether AI developers can harvest publicly available data or must license the content used to train their models."

Tech companies face tough AI copyright questions in 2025; Reuters, December 27, 2024

 , Reuters ; Tech companies face tough AI copyright questions in 2025

"The new year may bring pivotal developments in a series of copyright lawsuits that could shape the future business of artificial intelligence.

The lawsuits from authors, news outlets, visual artists, musicians and other copyright owners accuse OpenAI, Anthropic, Meta Platforms and other technology companies of using their work to train chatbots and other AI-based content generators without permission or payment.
Courts will likely begin hearing arguments starting next year on whether the defendants' copying amounts to "fair use," which could be the AI copyright war's defining legal question."

The AI revolution is running out of data. What can researchers do?; Nature, December 11, 2024

Nicola Jones, Nature; The AI revolution is running out of data. What can researchers do?

"A prominent study1 made headlines this year by putting a number on this problem: researchers at Epoch AI, a virtual research institute, projected that, by around 2028, the typical size of data set used to train an AI model will reach the same size as the total estimated stock of public online text. In other words, AI is likely to run out of training data in about four years’ time (see ‘Running out of data’). At the same time, data owners — such as newspaper publishers — are starting to crack down on how their content can be used, tightening access even more. That’s causing a crisis in the size of the ‘data commons’, says Shayne Longpre, an AI researcher at the Massachusetts Institute of Technology in Cambridge who leads the Data Provenance Initiative, a grass-roots organization that conducts audits of AI data sets...

Several lawsuits are now under way attempting to win compensation for the providers of data being used in AI training. In December 2023, The New York Times sued OpenAI and its partner Microsoft for copyright infringement; in April this year, eight newspapers owned by Alden Global Capital in New York City jointly filed a similar lawsuit. The counterargument is that an AI should be allowed to read and learn from online content in the same way as a person, and that this constitutes fair use of the material. OpenAI has said publicly that it thinks The New York Times lawsuit is “without merit”.

If courts uphold the idea that content providers deserve financial compensation, it will make it harder for both AI developers and researchers to get what they need — including academics, who don’t have deep pockets. “Academics will be most hit by these deals,” says Longpre. “There are many, very pro-social, pro-democratic benefits of having an open web,” he adds."

Thursday, December 26, 2024

Harvard’s Library Innovation Lab launches Institutional Data Initiative; Harvard Law Today, December 12, 2024

 Scott Young , Harvard Law Today; Harvard’s Library Innovation Lab launches Institutional Data Initiative

"At the Institutional Data Initiative (IDI), a new program hosted within the Harvard Law School Library, efforts are already underway to expand and enhance the data resources available for AI training. At the initiative’s public launch on Dec. 12, Library Innovation Lab faculty director, Jonathan Zittrain ’95, and IDI executive director, Greg Leppert, announced plans to expand the availability of public domain data from knowledge institutions — including the text of nearly one million books scanned at Harvard Library — to train AI models...

Harvard Law Today: What is the Institutional Data Initiative?

Greg Leppert: Our work at the Institutional Data Initiative is focused on finding ways to improve the accessibility of institutional data for all uses, artificial intelligence among them. Harvard Law School Library is a tremendous repository of public domain books, briefs, research papers, and so on. Regardless of how this information was initially memorialized — hardcover, softcover, parchment, etc. — a considerable amount has been converted into digital form. At the IDI, we are working to ensure these large data sets of public domain works, like the ones from the Law School library that comprise the Caselaw Access Project, are made open and accessible, especially for AI training. Harvard is not alone in terms of the scale and quality of its data; similar sets exist throughout our academic institutions and public libraries. AI systems are only as diverse as the data on which they’re trained, and these public domain data sets ought to be part of a healthy diet for future AI training.

HLT: What problem is the Institutional Data Initiative working to solve?

Leppert: As it stands, the data being used to train AI is often limited in terms of scale, scope, quality, and integrity. Various groups and perspectives are massively underrepresented in the data currently being used to train AI. As things stand, outliers will not be served by AI as well as they should be, and otherwise could be, by the inclusion of that underrepresented data. The country of Iceland, for example, undertook a national, government-led effort to make materials from their national libraries available for AI applications. That is because they were seriously concerned the Icelandic language and culture would not be represented in AI models. We are also working towards reaffirming Harvard, and other institutions, as the stewards of their collections. The proliferation of training sets based on public domain materials has been encouraging to see, but it’s important that this doesn’t leave the material vulnerable to critical omissions or alterations. For centuries, knowledge institutions have served as stewards of information for the purpose of promoting the public good and furthering the representation of diverse ideas, cultural groups, and ways of seeing the world. So, we believe these institutions are the exact kind of sources for AI training data if we want to optimize its ability to serve humanity. As it stands today, there is significant room for improvement."

How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs; The New York Times, December 23, 2024

, The New York Times; How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs

"In the universe of science, however, innovators are finding that A.I. hallucinations can be remarkably useful. The smart machines, it turns out, are dreaming up riots of unrealities that help scientists track cancer, design drugs, invent medical devices, uncover weather phenomena and even win the Nobel Prize.

“The public thinks it’s all bad,” said Amy McGovern, a computer scientist who directs a federal A.I. institute. “But it’s actually giving scientists new ideas. It’s giving them the chance to explore ideas they might not have thought about otherwise.”

The public image of science is coolly analytic. Less visibly, the early stages of discovery can teem with hunches and wild guesswork. “Anything goes” is how Paul Feyerabend, a philosopher of science, once characterized the free-for-all.

Now, A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers."

Judge Strikes Down Portions of Arkansas Law That Threatened Librarians; The New York Times, December 24, 2024

, The New York Times; Judge Strikes Down Portions of Arkansas Law That Threatened Librarians

"A federal judge has struck down portions of an Arkansas law that could have sent librarians and booksellers to prison for providing material that might be considered harmful to minors.

The ruling by Judge Timothy Brooks of the U.S. District Court in the Western District of Arkansas is certain to be appealed. But his decision on Monday provided at least a temporary victory to librarians and booksellers who have said that the law would create a chilling effect since anyone could object to any book and pursue criminal charges against the person who provided it."

Wednesday, December 25, 2024

Matt Gaetz v the ethics committee; The Economist, December 23, 2024

The Economist; Matt Gaetz v the ethics committee

"On december 23rd a congressional committee released a lurid 37-page report alleging ethical misconduct by Matt Gaetz, the former maverick member of the House of Representatives who briefly stood as Donald Trump’s nominee for attorney-general. In a different time the investigation’s details about illicit sex and drug use would definitively end Mr Gaetz’s political career, and perhaps it will now. Yet he could soon test how far deviance has been defined down in America’s norm-smashing political era."

This company rates news sites’ credibility. The right wants it stopped.; The Washington Post, December 24, 2024

, The Washington Post; This company rates news sites’ credibility. The right wants it stopped.

"At a time when social media, podcasts and partisan outlets are displacing the mainstream media as news sources, the battle over NewsGuard’s future is symptomatic of a broader societal struggle over who gets to arbitrate the truth."

Should you trust an AI-assisted doctor? I visited one to see.; The Washington Post, December 25, 2024

, The Washington Post; Should you trust an AI-assisted doctor? I visited one to see.

"The harm of generative AI — notorious for “hallucinations” — producing bad information is often difficult to see, but in medicine the danger is stark. One study found that out of 382 test medical questions, ChatGPT gave an “inappropriate” answer on 20 percent. A doctor using the AI to draft communications could inadvertently pass along bad advice.

Another study found that chatbots can echo doctors’ own biases, such as the racist assumption that Black people can tolerate more pain than White people. Transcription software, too, has been shown to invent things that no one ever said."

Tuesday, December 24, 2024

ChatGPT search tool vulnerable to manipulation and deception, tests show; The Guardian, December 24, 2024

 , The Guardian; ChatGPT search tool vulnerable to manipulation and deception, tests show

"OpenAI’s ChatGPT search tool may be open to manipulation using hidden content, and can return malicious code from websites it searches, a Guardian investigation has found.

OpenAI has made the search product available to paying customers and is encouraging users to make it their default search tool. But the investigation has revealed potential security issues with the new system.

The Guardian tested how ChatGPT responded when asked to summarise webpages that contain hidden content. This hidden content can contain instructions from third parties that alter ChatGPT’s responses – also known as a “prompt injection” – or it can contain content designed to influence ChatGPT’s response, such as a large amount of hidden text talking about the benefits of a product or service."

Why Are So Many Christians So Cruel?; The New York Times, December 22, 2024

 , The New York Times; Why Are So Many Christians So Cruel?

"Here’s a question I hear everywhere I go, including from fellow Christians: Why are so many Christians so cruel?...

It’s a simple question with a complicated answer, but that answer often begins with a particularly seductive temptation, one common to people of all faiths: that the faithful, those who possess eternal truth, are entitled to rule. Under this construct, might makes right, and right deserves might.

Most of us have sound enough moral instincts to reject the notion that might makes right. Power alone is not a sufficient marker of righteousness. We may watch people bow to power out of fear or awe, but yielding to power isn’t the same thing as acknowledging that it is legitimate or that it is just.

The idea that right deserves might is different and may even be more destructive. It appeals to our ambition through our virtue, which is what makes it especially treacherous. It masks its darkness. It begins with the idea that if you believe your ideas are just and right, then it’s a problem for everyone if you’re not in charge.

In that context, your own will to power is sanctified. It’s evidence not so much of your own ambition, but of your love for the community. You want what’s best for your neighbors, and what’s best for your neighbors is, well, you...

Christ’s words were clear, and they cut against every human instinct of ambition and pride:

“The last will be first.”

“It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”

“If any man would come after me, let him deny himself and take up his cross and follow me.”

“Love your enemies and pray for those who persecute you.”

Those were the words. The deeds were just as clear. He didn’t just experience a humble birth; Jesus was raised in a humble home, far from the corridors of power. As a child, he was a refugee."

Monday, December 23, 2024

What Does AI Jesus Teach Us?; The Hastings Center, December 20, 2024

Gregory E. Kaebnick,  The Hastings Center ; What Does AI Jesus Teach Us?

"But there are some roles in which human presence is important.

This is a valuable insight that AI Jesus really does teach us. There would be ways of interacting with AI Jesus that did not suggest an inappropriate replacement of humans. For example, AI Jesus might be a tool for searching through the theological material on which it is trained and generating theologically informed answers to visitors’ questions. Visitors might thereby use AI Jesus to help them think more clearly or creatively about those questions.

But if the bioethics’ editors’ statement is on the right track, AI Jesus is troubling if it is in effect taking over the moral thinking—if it is genuinely seen as spiritual leader or moral guide. For one thing, we humans must be in charge of our moral governance. In much the way that, in a democracy, public policy must ultimately be collectively authorized by the citizens in order to be legitimate, so, too, must moral rules be collectively endorsed by the beings to whom they apply. For that idea of endorsement to have any meaning, we must all be thinking about them and settling on them together. We must collectively be the authors of morality, as it were.

Beyond that, it’s our society. Just as the core mission of a bioethics journal is to foster a community of people exchanging scholarly views about bioethical issues, the ultimate goal of a society is about creating the conditions of human flourishing. Human flourishing has something to do with the efficient production of things for people to enjoy, but most people would say that there’s more to it than that—hence my friend who would rather die than let a chatbot take over the biography he’s struggling to write. As philosophers tracing back to Aristotle have often held, human flourishing requires, not just contentment, but activity and engagement. It requires that humans be in the loop."


The god illusion: why the pope is so popular as a deepfake image; The Guardian, December 21, 2024

 , The Guardian; The god illusion: why the pope is so popular as a deepfake image

"The pope is an obvious target for deepfakes, according to experts, because there is such a vast digital “footprint” of videos, images and voice recordings related to Francis. AI models are trained on the open internet, which is stuffed with content featuring famous public figures, from politicians to celebrities and religious leaders.

“The pope is so frequently featured in the public eye and there are large volumes of photos, videos, and audio clips of him on the open web,” said Sam Stockwell, a research associate at the UK’s Alan Turing Institute.

“Since AI models are often trained indiscriminately on such data, it becomes a lot easier for these models to replicate the facial features and likeness of individuals like the pope compared with those who don’t have such a large digital footprint.”"

Sunday, December 22, 2024

Ethics revisited; British Dental Journal, December 20, 2024

Shaun Sellars  , British Dental Journal ; Ethics revisited

volume;

"But what about the future? Tomorrow's ethical dilemmas will likely centre around technology. Artificial intelligence is already reshaping diagnostics, and its influence will only grow. With this comes the responsibility to ensure that innovation enhances, rather than replaces, the human element of care.

The ethical landscape of dentistry will continue evolving, shaped by societal shifts, technological advances, and our commitment to doing better. If there's one lesson I've learned, it's that ethics isn't static - it's a living, breathing part of what we do. It challenges us to reflect, adapt, and, above all, remain human in our approach.

I leave you with this: Keep asking questions. Keep challenging norms. And never lose sight of why we chose this profession in the first place. Because at the heart of ethical dentistry lies something beautifully simple: a desire to do right by our patients, our colleagues, and ourselves. It's been an honour to be able to write for you all. Thank you for being part of the conversation."

Only 35% of Americans trust the US judicial system. This is catastrophic; The Guardian, December 21, 2024

David Daley, The Guardian; Only 35% of Americans trust the US judicial system. This is catastrophic

"It’s not surprising that Americans have lost all faith in something as anti-democratic as an unelected body (with a majority appointed by presidents who lost the popular vote) granted lifetime fiefdoms to cast final judgement over acts of the elected branches, without any accountability or ethics code that might, for example, prevent them from taking luxury vacations paid by billionaire benefactors."

Jeff Bezos to marry fiancée Lauren Sanchez in lavish $600M Aspen wedding next weekend: report; New York Post, December 21, 2024

 Anna Young, New York Post; Jeff Bezos to marry fiancée Lauren Sanchez in lavish $600M Aspen wedding next weekend: report

[Kip Currier: Think about how spiritually and ethically bankrupt -- how intellectually vacuous -- a person is who would choose to spend more than half a billion dollars on a wedding, amidst rampant suffering and vital needs in this world.

Imagine what even a fraction of that money could do to help people and this planet.] 

[Excerpt]

"A new report says billionaire Amazon founder Jeff Bezos will marry his fiancée Lauren Sanchez next  Saturday in an extravagant $600 million wedding in Aspen, Colorado."

Saturday, December 21, 2024

Every AI Copyright Lawsuit in the US, Visualized; Wired, December 19, 2024

Kate Knibbs, Wired; Every AI Copyright Lawsuit in the US, Visualized

"WIRED is keeping close tabs on how each of these lawsuits unfold. We’ve created visualizations to help you track and contextualize which companies and rights holders are involved, where the cases have been filed, what they’re alleging, and everything else you need to know."

Senate review of Supreme Court ethics finds more luxury trips and urges enforceable code of conduct; AP, December 21, 2024

LINDSAY WHITEHURST, AP; Senate review of Supreme Court ethics finds more luxury trips and urges enforceable code of conduct

"A nearly two-year investigation by Democratic senators of Supreme Court ethics details more luxury travel by Justice Clarence Thomas and urges Congress to establish a way to enforce a new code of conduct

Any movement on the issue appears unlikely as Republicans prepare to take control of the Senate in January, underscoring the hurdles in imposing restrictions on a separate branch of government even as public confidence in the court has fallen to record lows.

The 93-page report released Saturday by the Democratic majority of the Senate Judiciary Committee found additional travel taken in 2021 by Thomas but not reported on his annual financial disclosure form: a private jet flight to New York’s Adirondacks in July and jet and yacht trip to New York City sponsored by billionaire Harlan Crow in October, one of more than two dozen times detailed in the report that Thomas took luxury travel and gifts from wealthy benefactors."

Friday, December 20, 2024

Elon Musk is becoming a one-man rogue state – it’s time we reined him in; The Guardian, December 20, 2024

 , The Guardian; Elon Musk is becoming a one-man rogue state – it’s time we reined him in

"Elon Musk is, more or less, a rogue state. His intentions are self-serving and nefarious, and his nation-state level resources allow him to flout the law with impunity...

The sheer immorality of any one person possessing so much wealth is obvious to most people with basic amounts of empathy. But when it comes to Musk and the other 14 people worth more than $100bn, the morality of it is almost a secondary concern. Their individual wealth is a society-distorting threat to democracy in the same way that economics has always recognised monopolies to be dangerous to a functional market...

Plutocracy is not enough, though, because nothing is ever enough for the handful of men who have everything. Musk’s new obsessions (beyond the validation and human affection that he mistakenly believes he will find on social media) are attacking public servants, slashing social spending and going after the most vulnerable...

When rogue states behave this way – election interference, active disinformation campaigns, social media manipulation – other states call them out, or even impose sanctions. Musk is not simply a private citizen with an opinion and a large following. His sheer wealth, his control of X, and his new position within the US government place him in a different category...

Soon it will be the EU’s turn. What the union owes its citizens is not to play nice or mete out a meek slap on the wrist over the various alleged legalviolations by Musk and X that are under investigation, it’s to firmly and intently show that even interplanetary amounts of wealth don’t mean impunity, and that some things – like democracy – are not for sale."

Conclusion of Copyright Office’s Report on Artificial Intelligence Delayed Until 2025; The National Law Review, December 19, 2024

 Daniel J. Lass of Robinson & Cole LLP , The National Law Review; Conclusion of Copyright Office’s Report on Artificial Intelligence Delayed Until 2025

"This week, Director Shira Perlmutter indicated that the publication of part two of the U.S. Copyright Office’s three-part report on copyright issues raised by artificial intelligence (AI) would be further delayed. In her letter to the ranking members of the Senate Subcommittee on Intellectual Property and the House Subcommittee on Courts, Intellectual Property, and the Internet, Director Perlmutter indicated that although substantial progress had been made, the Office will not publish part two by the end of 2024 and now expects publication to occur in early 2025.

Part two of the report will describe the copyrightability of generative AI outputs and will build on part one of the report on digital replicas. Following the publication of part two, Director Perlmutter indicated that the third and final part would be published in the first quarter of 2025. Part three will relate to “analyzing the legal issues related to the ingestion of copyrighted works to train AI models, including licensing considerations and the allocation of potential liability.”"

Thursday, December 19, 2024

Getty Images Wants $1.7 Billion From its Lawsuit With Stability AI; PetaPixel, December 19, 2024

 MATT GROWCOOT, PETAPIXEL; GETTY IMAGES WANTS $1.7 BILLION FROM ITS LAWSUIT WITH STABILITY AI

"Getty, one of the world’s largest photo agencies, launched its lawsuit in January 2023. Getty suspects that Stability AI may have used as many as 12 million of its copyrighted photos to train the AI image generator Stable Diffusion. Getty is seeking $150,000 per infringement and 12 million photos equates to a staggering $1.8 trillion.

However, according to Stability AI’s latest company accounts as reported by Sifted, Getty is seeking damages for 11,383 works at $150,000 per infringement which comes to a total of $1.7 billion. Stability AI has previously reported that Getty was seeking damages for 7,300 images so that number has increased. But Stability AI says Getty hasn’t given an exact number it wants for the lawsuit to be settled, according to Sifted."