Showing posts with label AI. Show all posts

Tuesday, April 23, 2024

New Group Joins the Political Fight Over Disinformation Online; The New York Times, April 22, 2024

Steven Lee Myers, The New York Times; New Group Joins the Political Fight Over Disinformation Online

"Many of the nation’s most prominent researchers, facing lawsuits, subpoenas and physical threats, have pulled back.

“More and more researchers were getting swept up by this, and their institutions weren’t either allowing them to respond or responding in a way that really just was not rising to meet the moment,” Ms. Jankowicz said in an interview. “And the problem with that, obviously, is that if we don’t push back on these campaigns, then that’s the prevailing narrative.”

That narrative is prevailing at a time when social media companies have abandoned or cut back efforts to enforce their own policies against certain types of content.

Many experts have warned that the problem of false or misleading content is only going to increase with the advent of artificial intelligence.

“Disinformation will remain an issue as long as the strategic gains of engaging in it, promoting it and profiting from it outweigh consequences for spreading it,” Common Cause, the nonpartisan public interest group, wrote in a report published last week that warned of a new wave of disinformation around this year’s vote."

Monday, April 15, 2024

CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation; Carnegie Mellon University, April 11, 2024

 Kelly Saavedra, Carnegie Mellon University; CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation

"Carnegie Mellon University and Keio University have announced they will join forces with one another and with industry partners to boost AI-focused research and workforce development in the United States and Japan. The partnership is one of two new university partnerships between the two countries in the area of artificial intelligence announced in Washington, D.C., April 9 at an event hosted by U.S. Secretary of Commerce Gina Raimondo.

The collaboration joins two universities with outstanding AI programs and forward-looking leaders with leading technology companies committed to providing funding and resources aimed at solving real-world problems. 

CMU President Farnam Jahanian was in Washington, D.C., for the signing ceremony held in the Department of Commerce's Research Library, during which the University of Washington and the University of Tsukuba agreed to a similar collaboration."

Tuesday, April 9, 2024

Revealed: a California city is training AI to spot homeless encampments; The Guardian, March 25, 2024

Todd Feathers, The Guardian; Revealed: a California city is training AI to spot homeless encampments

"For the last several months, a city at the heart of Silicon Valley has been training artificial intelligence to recognize tents and cars with people living inside in what experts believe is the first experiment of its kind in the United States.

Last July, San Jose issued an open invitation to technology companies to mount cameras on a municipal vehicle that began periodically driving through the city’s district 10 in December, collecting footage of the streets and public spaces. The images are fed into computer vision software and used to train the companies’ algorithms to detect the unwanted objects, according to interviews and documents the Guardian obtained through public records requests.

Some of the capabilities the pilot project is pursuing – such as identifying potholes and cars parked in bus lanes – are already in place in other cities. But San Jose’s foray into automated surveillance of homelessness is the first of its kind in the country, according to city officials and national housing advocates. Local outreach workers, who were previously not aware of the experiment, worry the technology will be used to punish and push out San Jose’s unhoused residents."

Saturday, April 6, 2024

Where AI and property law intersect; Arizona State University (ASU) News, April 5, 2024

  Dolores Tropiano, Arizona State University (ASU) News; Where AI and property law intersect

"Artificial intelligence is a powerful tool that has the potential to be used to revolutionize education, creativity, everyday life and more.

But as society begins to harness this technology and its many uses — especially in the field of generative AI — there are growing ethical and copyright concerns for both the creative industry and legal sector.

Tyson Winarski is a professor of practice with the Intellectual Property Law program in Arizona State University’s Sandra Day O’Connor College of Law. He teaches an AI and intellectual property module within the course Artificial Intelligence: Law, Ethics and Policy, taught by ASU Law Professor Gary Marchant.

“The course is extremely important for attorneys and law students,” Winarski said. “Generative AI is presenting huge issues in the area of intellectual property rights and copyrights, and we do not have definitive answers as Congress and the courts have not spoken on the issue yet.”"

Thursday, April 4, 2024

George Carlin’s Estate Reaches Settlement After A.I. Podcast; The New York Times, April 2, 2024

Christopher Kuo, The New York Times; George Carlin’s Estate Reaches Settlement After A.I. Podcast

"The estate of the comedian George Carlin reached a settlement on Monday with the makers of a podcast who had said they had used artificial intelligence to impersonate Mr. Carlin for a comedy special...

Mr. Carlin’s estate filed the suit in January, saying that Mr. Sasso and Mr. Kultgen, hosts of the podcast “Dudesy,” had infringed on the estate’s copyrights by training an A.I. algorithm on five decades of Mr. Carlin’s work for the special “George Carlin: I’m Glad I’m Dead,” which was posted on YouTube. The lawsuit also said they had illegally used Mr. Carlin’s name and likeness."

Billie Eilish and Nicki Minaj want stop to 'predatory' music AI; BBC, April 2, 2024

Liv McMahon, BBC; Billie Eilish and Nicki Minaj want stop to 'predatory' music AI

"Billie Eilish and Nicki Minaj are among 200 artists calling for the "predatory" use of artificial intelligence (AI) in the music industry to be stopped.

In an open letter also signed by Katy Perry and the estate of Frank Sinatra, they warn AI "will set in motion a race to the bottom" if left unchecked...

Other artists have since spoken out about it, with Sting telling the BBC he believes musicians face "a battle" to defend their work against the rise of songs written by AI.

"The building blocks of music belong to us, to human beings," he said.

But not all musicians oppose developments in or use of AI across the music industry, and electronic artist Grimes and DJ David Guetta are among those backing the use of such AI tools."

Wednesday, April 3, 2024

‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets; The Guardian, April 3, 2024

The Guardian; ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets

"Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines."

Sunday, March 31, 2024

Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence; Rochester Institute of Technology (RIT), March 7, 2024

 Felicia Swartzenberg, Rochester Institute of Technology (RIT); Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence

"Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities lens, Selinger asks the questions, "How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?" Answering them, he explained, requires an interdisciplinary approach...

“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”

There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human toll that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.

In a chapter for a book that will be published by MIT Press, Selinger, along with co-authors Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of Surveillance Technology Oversight Project, offer concrete suggestions for conducting responsible AI audits, while also considering civil liberties objections."

Saturday, March 30, 2024

A.I.-Generated Garbage Is Polluting Our Culture; The New York Times, March 29, 2024

The New York Times; A.I.-Generated Garbage Is Polluting Our Culture

"To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture."

Thursday, March 28, 2024

AI hustlers stole women’s faces to put in ads. The law can’t help them.; The Washington Post, March 28, 2024

The Washington Post; AI hustlers stole women’s faces to put in ads. The law can’t help them.

"Efforts to prevent this new kind of identity theft have been slow. Cash-strapped police departments are ill equipped to pay for pricey cybercrime investigations or train dedicated officers, experts said. No federal deepfake law exists, and while more than three dozen state legislatures are pushing ahead on AI bills, proposals governing deepfakes are largely limited to political ads and nonconsensual porn."

Your newsroom needs an AI ethics policy. Start here.; Poynter, March 25, 2024

Poynter; Your newsroom needs an AI ethics policy. Start here.

"Every single newsroom needs to adopt an ethics policy to guide the use of generative artificial intelligence. Why? Because the only way to create ethical standards in an unlicensed profession is to do it shop by shop.

Until we create those standards — even though it’s early in the game — we are holding back innovation.

So here’s a starter kit, created by Poynter’s Alex Mahadevan, Tony Elkins and me. It’s a statement of journalism values that roots AI experimentation in the principles of accuracy, transparency and audience trust, followed by a set of specific guidelines.

Think of it like a meal prep kit. Most of the work is done, but you still have to roll up your sleeves and do a bit of labor. This policy includes blank spaces, where newsroom leaders will have to add details, saying “yes” or “no” to very specific activities, like using AI-generated illustrations.

In order to effectively use this AI ethics policy, newsrooms will need to create an AI committee and designate an editor or senior journalist to lead the ongoing effort. This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised."

Thursday, March 7, 2024

Public Symposium on AI and IP; United States Patent and Trademark Office (USPTO), Wednesday, March 27, 2024 10 AM - 3 PM PT/1 PM - 6 PM ET

 United States Patent and Trademark Office (USPTO); Public Symposium on AI and IP

"The United States Patent and Trademark Office (USPTO) Artificial Intelligence (AI) and Emerging Technologies (ET) Partnership will hold a public symposium on intellectual property (IP) and AI. The event will take place virtually and in-person at Loyola Law School, Loyola Marymount University, in Los Angeles, California, on March 27, from 10 a.m. to 3 p.m. PT. 

The symposium will facilitate the USPTO’s efforts to implement its obligations under the President’s Executive Order (E.O.) 14110 “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The event will include representation from the Copyright Office, build on previous AI/Emerging Technologies (ET) partnership events, and feature panel discussions by experts in the field of patent, trademark, and copyright law that focus on:

  1. A comparison of copyright and patent law approaches to the type and level of human contribution needed to satisfy authorship and inventorship requirements;
  2. Ongoing copyright litigation involving generative AI; and 
  3. A discussion of laws and policy considerations surrounding name, image, and likeness (NIL) issues, including the intersection of NIL and generative AI.

This event is free and open to the public, but in-person attendance is limited, so register early."

Friday, February 16, 2024

How to Think About Remedies in the Generative AI Copyright Cases; LawFare, February 15, 2024

  Pamela Samuelson, LawFare; How to Think About Remedies in the Generative AI Copyright Cases

"So far, commentators have paid virtually no attention to the remedies being sought in the generative AI copyright complaints. This piece shines a light on them."

From ethics to outsmarting Chat GPT, state unveils resource for AI in Ohio education; Cleveland.com, February 15, 2024

Cleveland.com; From ethics to outsmarting Chat GPT, state unveils resource for AI in Ohio education

"The state released a guide Thursday to help schools and parents navigate generative artificial intelligence in an ethical manner.

“When you use the term AI, I know in some people’s minds, it can sound scary,” said Lt. Jon Husted, whose InnovateOhio office worked with private sector organizations to develop the guide...

Every technology that’s come into society has been like that.”...

But AI is the wave of the future, and Husted said it’s important that students are exposed to it.

The AI toolkit is not mandatory but can be used as a resource for educators and families.

It doesn’t include many prescriptive actions for how to begin teaching and using AI. Rather, it contains sections for parents, teachers and school districts where they can find dozens of sample lessons and discussions about ethics, how to develop policies to keep students safe, and other topics.

For instance, teachers can find a template letter that they can send to school district officials to communicate how they’re using AI...

“Before you use AI in the classroom you will need a plan for a student with privacy, data security, ethics and many other things,” Husted said. “More is needed than just a fun tool in the classroom.”"

Monday, February 12, 2024

Inventorship guidance for AI-assisted inventions webinar; United States Patent and Trademark Office (USPTO), March 5, 2024 1 PM - 2 PM ET

United States Patent and Trademark Office (USPTO); Inventorship guidance for AI-assisted inventions webinar

"The United States Patent and Trademark Office (USPTO) plays an important role in incentivizing and protecting innovation, including innovation enabled by artificial intelligence (AI), to ensure continued U.S. leadership in AI and other emerging technologies (ET).

The USPTO announced Inventorship Guidance for AI-Assisted Inventions in the Federal Register. This guidance is pursuant to President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023), with provisions addressing IP equities. The guidance, which is effective on February 13, 2024, provides instructions to USPTO personnel and stakeholders on determining the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems.

The USPTO will host a webinar on Inventorship Guidance for AI-Assisted Inventions on Tuesday, March 5, from 1-2 p.m. EST. USPTO personnel will provide an overview of the guidance and answer stakeholder questions relating to the guidance.

This event is free and open to the public, but virtual space is limited, so please register early."

University Librarians See Urgent Need for AI Ethics; Inside Higher Ed, January 17, 2024

  Lauren Coffey, Inside Higher Ed; University Librarians See Urgent Need for AI Ethics

"Nearly three-quarters of university librarians say there’s an urgent need to address artificial intelligence’s ethical and privacy concerns, a survey finds.

Roughly half the librarians surveyed said they had a “moderate” understanding of AI concepts and principles, according to the study released Friday. About one in five said they had a slightly below moderate understanding, and roughly the same amount had a slightly above moderate understanding. Only 3 percent of respondents said they had a “very high” understanding.

The study, conducted in May 2023 by Leo Lo, president-elect of the Association of College and Research Libraries, had 605 respondents who completed the survey. Of those, 45 percent worked in research institutions and 30 percent in institutions with undergraduate and graduate programming."

Using AI Responsibly; American Libraries, January 21, 2024

Diana Panuncial, American Libraries; Using AI Responsibly

"Navigating misinformation and weighing ethical and privacy issues in artificial intelligence (AI) were top of mind for the panelists at “AI and Libraries: A Discussion on the Future,” a January 21 session at the American Library Association’s 2024 LibLearnX Conference in Baltimore. Flowers was joined by Virginia Cononie, assistant librarian and coordinator of research at University of South Carolina Upstate in Spartanburg; Dray MacFarlane, cofounder of Tasio, an AI consulting company; and Juan Rubio, digital media learning program manager for Seattle Public Library (SPL). 

Rubio, who used AI to create a tool to help teens at SPL reflect on their mental health and well-being, said there is excitement behind the technology and how it can be harnessed, but there should also be efforts to educate patrons on how to use it responsibly. 

“I think ethical use of AI comes with creating ethical people,” he said, adding that SPL has been thinking about implementing guidelines for using AI. “Be very aware of your positionality [as librarians], because I think we are in a place of privilege—not necessarily of money or power, but of knowledge.”"

Friday, February 9, 2024

The Friar Who Became the Vatican’s Go-To Guy on A.I.; The New York Times; February 9, 2024

 Jason Horowitz, The New York Times; The Friar Who Became the Vatican’s Go-To Guy on A.I.

"There is a lot going on for Father Benanti, who, as both the Vatican’s and the Italian government’s go-to artificial intelligence ethicist, spends his days thinking about the Holy Ghost and the ghosts in the machines.

In recent weeks, the ethics professor, ordained priest and self-proclaimed geek, has joined Bill Gates at a meeting with Prime Minister Giorgia Meloni, presided over a commission seeking to save Italian media from ChatGPT bylines and general A.I. oblivion, and met with Vatican officials to further Pope Francis’s aim of protecting the vulnerable from the coming technological storm."

Wednesday, February 7, 2024

EU countries strike deal on landmark AI rulebook; Politico, February 2, 2024

Gian Volpicelli, Politico; EU countries strike deal on landmark AI rulebook

"European Union member countries on Friday unanimously reached a deal on the bloc’s Artificial Intelligence Act, overcoming last-minute fears that the rulebook would stifle European innovation.

EU deputy ambassadors green-lighted the final compromise text, hashed out following lengthy negotiations between representatives of the Council, members of the European Parliament and European Commission officials...

Over the past few weeks, the bloc’s top economies Germany and France, alongside Austria, hinted that they might oppose the text in Friday’s vote...

Eventually, the matter was resolved through the EU’s familiar blend of PR offensive and diplomatic maneuvering. The Commission ramped up the pressure by announcing a splashy package of pro-innovation measures targeting the AI sector, and in one fell swoop created the EU’s Artificial Intelligence Office — a body tasked with enforcing the AI Act...

A spokesperson for German Digital Minister Volker Wissing, the foremost AI Act skeptic within Germany’s coalition government, told POLITICO: "We asked the EU Commission to clarify that the AI Act does not apply to the use of AI in medical devices."

A statement from the European Commission, circulated among EU diplomats ahead of the vote and seen by POLITICO, reveals plans to set up an “expert group” comprising EU member countries’ authorities. The group’s function will be to “advise and assist” the Commission in applying and implementing the AI Act...

The AI Act still needs the formal approval of the European Parliament. The text is slated to get rubber-stamped at the committee level in two weeks, with a plenary vote expected in April."