Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."

Amy Dickinson says goodbye in her final column; The Washington Post, June 30, 2024

The Washington Post; Amy Dickinson says goodbye in her final column

"Dear Readers: Since announcing my departure from writing this syndicated column, I have heard from scores of people across various platforms, thanking me for more than two decades of offering advice and wishing me well in my “retirement.” I am very touched and grateful for this outpouring of support...

The questions raised in this space have been used as teaching tools in middle schools, memory care units, ESL classes and prisons. These are perfect venues to discuss ethical, human-size dilemmas. On my last day communicating with you in this way, I feel compelled to try to sum up my experience by offering some lasting wisdom, but I’ve got no fresh insight. Everything I know has been distilled from wisdom gathered elsewhere...

Boxer Mike Tyson famously said, “Everybody has a plan, until they get punched ...” Punches are inevitable. But I do believe I’ve learned some universal truths that might soften the blows.

They are:...

Identify, develop, or explore your core ethical and/or spiritual beliefs...

I sometimes supply “scripts” for people who have asked me for the right words to say, and so I thought I would boil these down to some of the most important statements I believe anyone can make.

They are:

I need help.

I’m sorry.

I forgive you.

I love you, just as you are.

I’m on your side.

You’re safe.

You are not alone."

THE GREAT PRETENDERS; Toronto Life, February 14, 2024

SARAH TRELEAVEN, Toronto Life; THE GREAT PRETENDERS

"The “pretendian” phenomenon in Canada can be traced back to at least the 1930s, when Archibald Stansfeld Belaney donned leathers, renamed himself Grey Owl and began telling people his mother was Apache. He used his new identity to amass fame and fortune as an Indigenous author and conservationist. But the term itself didn’t gain traction in Canada until late 2016, when Indigenous journalists started pointing out the inconsistencies in bestselling author Joseph Boyden’s proclaimed Indigenous roots. Today, it’s used to broadly describe fakers who claim to be Indigenous but aren’t. (Some Inuit also use the term “pretenduit” as a way to address the specific co-opting of their heritage and culture.)

The list of high-profile Canadians busted for faking Indigenous identities has grown alarmingly long in recent years and includes academics, judges, professors and cultural icons. In October 2021, a CBC investigation revealed that Carrie Bourassa, a University of Saskatchewan professor, had falsely claimed to be Métis, Anishinaabe and Tlingit. In 2022, media raised questions about former judge Mary Ellen Turpel-Lafond’s purported Cree ancestry; she has maintained her Indigeneity but later lost her Order of Canada, among other awards. Last year, Memorial University removed Vianne Timmons from her role as the school’s president after a CBC report challenged her claims of Mi’kmaw heritage. And in one of the most explosive revelations to date, The Fifth Estate reported last October that 82-year-old singer and activist Buffy Sainte-Marie had lied about being a Cree survivor of the Sixties Scoop.

The problem is especially prevalent in Canadian academia, where the allure of money and status runs high. Universities have been under pressure to increase Indigenous student admissions—as of 2021, only 13 per cent of Indigenous people of working age had a university degree—and hire more Indigenous faculty. In their rush to boost their numbers, many institutions have overlooked the potential for scammers. Jean Teillet is a recently retired Métis lawyer in Vancouver who has worked on Indigenous-identity fraud cases. In the wake of the Bourassa scandal, the University of Saskatchewan hired Teillet to write a report on Indigenous-identity fraud, complete with recommendations on how to spot it. While some institutions are now introducing mechanisms to confirm membership in a recognized nation, including the presentation of official status documents, Teillet found that, for many applicants, claiming Indigeneity is as easy as ticking off a box. Universities are largely ignorant about the complexities of Indigenous identity, and they’re either too gullible or willfully blind to dubious claims."

‘They burned books, like the Nazis did 80 years ago’: Russia’s deadly attack on Ukraine’s biggest printing house; The Guardian, June 30, 2024

The Guardian; ‘They burned books, like the Nazis did 80 years ago’: Russia’s deadly attack on Ukraine’s biggest printing house

"Hryniuk said she did not know if the Russian military had deliberately targeted her workplace or had attempted to hit a train repair workshop next door...

In occupied areas, the Kremlin has forbidden the Ukrainian language, removed books from schools and imposed a patriotic pro-Russian curriculum. Statues of the Ukrainian poet Taras Shevchenko have been torn down. Vladimir Putin insists Ukraine does not exist. Its land, he says, is a part of “historical Russia”.

The strike on the factory wiped out 50,000 books. Among them were works of children’s literature and Ukrainian school textbooks – 40% of them were printed by Factor Druk – due to be sent to classrooms for the new September academic year...

“For me it’s so symbolic. They burned books, like the Nazis did 80 years ago. We have so many historical examples of Russia trying to kill off Ukrainian culture,” said Oleksiy Sobol, the head of the pre-press department. The Russian empire banned Ukrainian-language texts from the 17th century onwards, with follow-up edicts. Under Stalin, in the 1930s, Ukrainian poets and writers were shot – a generation known as the “executed renaissance”.

Since 2022, Russia has erased 172 libraries and nearly 2m books, according to the Ukrainian Book Institute...

Emily Finer, who heads a research team working on Ukrainian children’s literature at the University of St Andrews, called the attack a tragedy. “The priority given to publishing trauma-informed children’s books in wartime Ukraine is unprecedented,” she said. “Over 120 picture books in Ukrainian have been printed since 2022 to help children cope with their wartime experiences now and in the future.”

The strike took place a week before the Arsenal book festival, Kyiv’s biggest literary event. Many of the destroyed books were due to be sold there...

The Howard G Buffett Foundation, meanwhile, last week pledged €5.1m (£4.3m) to restore the printing house. “They can destroy books but not Ukrainian resilience and commitment,” said Buffett, the son of the billionaire US investor Warren Buffett."

Saturday, June 29, 2024

Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web; The Verge, June 28, 2024

Sean Hollister, The Verge; Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web

"Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use. 

When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:

I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding...

I am not a lawyer, but even I can tell you that the moment you create a work, it’s automatically protected by copyright in the US." 

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoë Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

 Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

"In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

The Voices of A.I. Are Telling Us a Lot; The New York Times, June 28, 2024

 Amanda Hess, The New York Times; The Voices of A.I. Are Telling Us a Lot

"Tech companies advertise their virtual assistants in terms of the services they provide. They can read you the weather report and summon you a taxi; OpenAI promises that its more advanced chatbots will be able to laugh at your jokes and sense shifts in your moods. But they also exist to make us feel more comfortable about the technology itself.

Johansson’s voice functions like a luxe security blanket thrown over the alienating aspects of A.I.-assisted interactions. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.,” Johansson said of Sam Altman, OpenAI’s founder. “He said he felt that my voice would be comforting to people.”

It is not that Johansson’s voice sounds inherently like a robot’s. It’s that developers and filmmakers have designed their robots’ voices to ease the discomfort inherent in robot-human interactions. OpenAI has said that it wanted to cast a chatbot voice that is “approachable” and “warm” and “inspires trust.” Artificial intelligence stands accused of devastating the creative industries, guzzling energy and even threatening human life. Understandably, OpenAI wants a voice that makes people feel at ease using its products. What does artificial intelligence sound like? It sounds like crisis management."

Friday, June 28, 2024

Joe Biden Is a Good Man and a Good President. He Must Bow Out of the Race.; The New York Times, June 28, 2024

 THOMAS L. FRIEDMAN, The New York Times; Joe Biden Is a Good Man and a Good President. He Must Bow Out of the Race.

"We are at the dawn of an artificial intelligence revolution that is going to change EVERYTHING FOR EVERYONE — how we work, how we learn, how we teach, how we trade, how we invent, how we collaborate, how we fight wars, how we commit crimes and how we fight crimes. Maybe I missed it, but I did not hear the phrase “artificial intelligence” mentioned by either man at the debate."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Thursday, June 27, 2024

God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?; Scientific American, March 19, 2024

Scientific American; God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?

"QuranGPT—which has now been used by about 230,000 people around the world—is just one of a litany of chatbots trained on religious texts that have recently appeared online. There’s Bible.Ai, Gita GPT, Buddhabot, Apostle Paul AI, a chatbot trained to imitate 16th-century German theologian Martin Luther, another trained on the works of Confucius, and yet another designed to imitate the Delphic oracle. For millennia adherents of various faiths have spent long hours—or entire lifetimes—studying scripture to glean insights into the deepest mysteries of human existence, say, the fate of the soul after death.

The creators of these chatbots don’t necessarily believe large language models (LLMs) will put these age-old theological enigmas to rest. But they do think that with their ability to identify subtle linguistic patterns within vast quantities of text and provide responses to user prompts in humanlike language (a feature called natural-language processing, or NLP), the bots can theoretically synthesize spiritual insights in a matter of seconds, saving users both time and energy. It’s divine wisdom on demand.

Many professional theologians, however, have serious concerns about blending LLMs with religion...

The danger of hallucination in this context is compounded by the fact that religiously oriented chatbots are likely to attract acutely sensitive questions—questions one might feel too embarrassed or ashamed to ask a priest, an imam, a rabbi or even a close friend. During a software update to QuranGPT last year, Khan had a brief glimpse into user prompts, which are usually invisible to him. He recalls seeing that one person had asked, “I caught my wife cheating on me—how should I respond?” Another, more troublingly, had asked, “Can I beat my wife?”

Khan was pleased with the system’s responses (it urged discussion and nonviolence on both counts), but the experience underscored the ethical gravity behind his undertaking."

New Tactic in China’s Information War: Harassing a Critic’s Child in the U.S.; The New York Times, June 27, 2024

Steven Lee Myers, The New York Times; New Tactic in China’s Information War: Harassing a Critic’s Child in the U.S.

"A covert propaganda network linked to the country’s security services has barraged not just Mr. Deng but also his teenage daughter with sexually suggestive and threatening posts on popular social media platforms, according to researchers at both Clemson University and Meta, which owns Facebook and Instagram...

The harassment fits a pattern of online intimidation that has raised alarms in Washington, as well as Canada and other countries where China’s attacks have become increasingly brazen. The campaign has included thousands of posts the researchers have linked to a network of social media accounts known as Spamouflage or Dragonbridge, an arm of the country’s vast propaganda apparatus.

China has long sought to discredit Chinese critics, but targeting a teenager in the United States is an escalation, said Darren Linvill, a founder of the Media Forensics Hub at Clemson, whose researchers documented the campaign against Mr. Deng. Federal law prohibits severe online harassment or threats, but that appears to be no deterrent to China’s efforts."

The Supreme Court rules for Biden administration in a social media dispute with conservative states; AP, June 26, 2024

MARK SHERMAN, AP; The Supreme Court rules for Biden administration in a social media dispute with conservative states

"The Supreme Court on Wednesday sided with the Biden administration in a dispute with Republican-led states over how far the federal government can go to combat controversial social media posts on topics including COVID-19 and election security.

By a 6-3 vote, the justices threw out lower-court rulings that favored Louisiana, Missouri and other parties in their claims that federal officials leaned on the social media platforms to unconstitutionally squelch conservative points of view.

Justice Amy Coney Barrett wrote for the court that the states and other parties did not have the legal right, or standing, to sue. Justices Samuel Alito, Neil Gorsuch and Clarence Thomas dissented. 

The decision should not affect typical social media users or their posts."

AI, Legal Tech, and Ethics: The Florida Bar’s Groundbreaking Guidelines; Legal Talk Network, June 27, 2024

Adriana Linares, Legal Talk Network; AI, Legal Tech, and Ethics: The Florida Bar’s Groundbreaking Guidelines

"Two friends of the podcast return for this episode of New Solo to talk all things legal tech and the latest in AI services for lawyers. Guests Renee Thompson and Liz McCausland are both accomplished mediators and solo practitioners who depend on tech to boost productivity and keep up with their busy lives.

AI is an emerging technology that is finding its way to more and more law offices. McCausland and Thompson served on a Florida Bar committee to draft an advisory opinion laying out ethical guidelines for the use of AI in legal practice.

With ethical guardrails published, what’s next? A best practices guide and clear definitions and examples of AI for legal services. Client consent, the impact on fees and confidentiality, and even how judges view the use of AI and informing the court that AI played a role in your presentation are all pieces of the puzzle."

Wednesday, June 26, 2024

The MTV News website is gone; The Verge, June 25, 2024

Andrew Liszewski, The Verge; The MTV News website is gone

"The archives of the MTV News website, which had remained accessible online after the unit was shut down last year by parent company Paramount Global, have now been completely taken offline. As Variety reported yesterday, both mtvnews.com and mtv.com/news now redirect visitors to the MTV website’s front page...

Although the MTV News website was no longer publishing new stories, its extensive archive, dating back over two decades to its launch in 1996, remained online. But as former staffers discovered yesterday, that archive is no longer accessible."

Tuesday, June 25, 2024

Collaborative ethics: innovating collaboration between ethicists and life scientists; Nature, June 20, 2024

Nature; Collaborative ethics: innovating collaboration between ethicists and life scientists

"Is there a place for ethics in scientific research, not about science or after scientific breakthroughs? We are convinced that there is, and we describe here our model for collaboration between scientists and ethicists.

Timely collaboration with ethicists benefits science, as it can make an essential contribution to the research process. In our view, such critical discussions can improve the efficiency and robustness of outcomes, particularly in groundbreaking or disruptive research. The discussion of ethical implications during the research process can also prepare a team for a formal ethics review and criticism after publication.

The practice of collaborative ethics also advances the humanities, as direct involvement with the sciences allows long-held assumptions and arguments to be put to the test. As philosophers and ethicists, we argue that innovative life sciences research requires new methods in ethics, as disruptive concepts and research outcomes no longer fit traditional notions and norms. Those methods should not be developed at a distance from the proverbial philosopher’s armchair or in after-the-fact ethics analysis. We argue that, rather, we should join scientists and meet where science evolves in real-time: as Knoppers and Chadwick put it in the early days of genomic science, “Ethical thinking will inevitably continue to evolve as the science does”1."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

 Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

  Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

After uproar over ethics, new 'Washington Post' editor won't take the job; NPR, June 21, 2024

David Folkenflik, NPR; After uproar over ethics, new 'Washington Post' editor won't take the job

"The ethical records of both men have come under withering scrutiny in recent days.

Lewis worked with Winnett at the Sunday Times in Britain in the early 2000s. After Lewis was named the youngest editor in the Daily Telegraph's history, he hired Winnett there. The two men, both Brits, worked hand-in-glove and won accolades in the U.K. for their scoops.

Yet NPR, the New York Times and the Post have reported on a parade of episodes involving both men in conduct that would be barred under professional ethics codes at major American news outlets, including the Post.

The incidents include paying a six-figure sum to secure a major scoop; planting a junior reporter in a government job to obtain secret and even classified documents; and relying on a private investigator who used subterfuge to secure people's confidential records and documents. The investigator was later arrested."