
Thursday, November 9, 2023

How robots can learn to follow a moral code; Nature, October 26, 2023

 Neil Savage, Nature; How robots can learn to follow a moral code

"Many computer scientists are investigating whether autonomous systems can be taught to make ethical choices, or to promote behaviour that aligns with human values. Could a robot that provides care, for example, be trusted to make choices in the best interests of its charges? Or could an algorithm be relied on to work out the most ethically appropriate way to distribute a limited supply of transplant organs? Drawing on insights from cognitive science, psychology and moral philosophy, computer scientists are beginning to develop tools that can not only make AI systems behave in specific ways, but also perhaps help societies to define how an ethical machine should act...

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple."

Sunday, November 5, 2023

High-level Advisory Body on Artificial Intelligence; United Nations, October 29, 2023

United Nations; High-level Advisory Body on Artificial Intelligence

"The UN's Response

To foster a globally inclusive approach, the UN Secretary-General is convening a multi-stakeholder High-level Advisory Body on AI to undertake analysis and advance recommendations for the international governance of AI.

Calling for Interdisciplinary Expertise

Bringing together up to 38 experts in relevant disciplines from around the world, the Body will offer diverse perspectives and options on how AI can be governed for the common good, aligning internationally interoperable governance with human rights and the Sustainable Development Goals."

Friday, November 3, 2023

The Copyright Battle Over Artificial Intelligence; Hard Fork, The New York Times, November 3, 2023

Kevin Roose and Casey Newton, Hard Fork, The New York Times; The Copyright Battle Over Artificial Intelligence

"President Biden’s new executive order on artificial intelligence has a little bit of everything for everyone concerned about A.I. Casey takes us inside the White House as the order was signed.

Then, Rebecca Tushnet, a copyright law expert, walks us through the latest developments in a lawsuit against the creators of A.I.-image generation tools. She explains why artists may have trouble making the case that these tools infringe on their copyrights."

Wednesday, November 1, 2023

Biden Issues Executive Order to Create A.I. Safeguards; The New York Times, October 30, 2023

Cecilia Kang, The New York Times; Biden Issues Executive Order to Create A.I. Safeguards

"President Biden signed a far-reaching executive order on artificial intelligence on Monday, requiring that companies report to the federal government about the risks that their systems could aid countries or terrorists to make weapons of mass destruction. The order also seeks to lessen the dangers of “deep fakes” that could swing elections or swindle consumers."

Tuesday, October 31, 2023

Georgia State Hosts Deep-Dive Event on Intellectual Property and AI; Georgia State News Hub, October 26, 2023

Georgia State News Hub; Georgia State Hosts Deep-Dive Event on Intellectual Property and AI

"Experts from inside and outside Georgia State University gathered for “Protect Your Ideas: IP, AI and Entertainment,” a first-of-its-kind forum that gave students, faculty and staff a chance to share and learn about intellectual property and artificial intelligence with an eye toward entertainment. The Oct. 10 event was jointly sponsored by the university’s Office of the Vice President for Research and Economic Development, the College of Law and Popular Culture Collective.

“Atlanta is a national hub for creativity, commerce and research, so it makes sense that we at Georgia State strive to educate people about intellectual property,” said university President M. Brian Blake, who gave opening remarks at the event. “Understanding how to protect your ideas is critical, regardless of your field.”...

After remarks by leadership, Kenny Franklin, senior licensing associate with Georgia State’s Office of Technology Transfer & Commercialization, hosted a fireside chat with College of Law alum Scott Frank, president and CEO of AT&T Intellectual Property LLC and chair of the Georgia Intellectual Property Alliance. The dialogue helped define intellectual property and reflected on its meaning in today’s knowledge economy.

“I tell people that intellectual property is like oxygen. It’s all around us and we don’t see it, but we wouldn’t survive without it,” Frank said."

Saturday, October 28, 2023

An AI engine scans a book. Is that copyright infringement or fair use?; Columbia Journalism Review, October 26, 2023

Mathew Ingram, Columbia Journalism Review; An AI engine scans a book. Is that copyright infringement or fair use?

"Determining whether LLMs training themselves on copyrighted text qualifies as fair use can be difficult even for experts—not just because AI is complicated, but because the concept of fair use is, too."

Thursday, October 26, 2023

Why I let an AI chatbot train on my book; Vox, October 25, 2023

Vox; Why I let an AI chatbot train on my book

"What’s “fair use” for AI?

I think that training a chatbot for nonprofit, educational purposes, with the express permission of the authors of the works on which it’s trained, seems okay. But do novelists like George R.R. Martin or John Grisham have a case against for-profit companies that take their work without that express permission?

The law, unfortunately, is far from clear on this question." 

Tuesday, October 24, 2023

The fingerprints on a letter to Congress about AI; Politico, October 23, 2023

Brendan Bordelon, Politico; The fingerprints on a letter to Congress about AI

"The message in the open letter sent to Congress on Sept. 11 was clear: Don’t put new copyright regulations on artificial intelligence systems.

The letter’s signatories were real players, a broad coalition of think tanks, professors and civil-society groups with a stake in the growing debate about AI and copyright in Washington.

Undisclosed, however, were the fingerprints of Sy Damle, a tech-friendly Washington lawyer and former government official who works for top firms in the industry — including OpenAI, one of the top developers of cutting-edge AI models. Damle is currently representing OpenAI in ongoing copyright lawsuits...

The effort by an OpenAI lawyer to covertly sway Congress against new laws on AI and copyright comes in the midst of an escalating influence campaign — tied to OpenAI and other top AI firms — that critics fear is shifting Washington’s attention away from current AI harms and toward existential threats posed by future AI systems...

Many of the points made in the September letter echo those made recently by Damle in other venues, including an argument comparing the rise of AI to the invention of photography."

Monday, October 23, 2023

Artists, copyright law, and the battle over artificial intelligence; 1A, October 23, 2023

Lauren Hamilton, 1A; Artists, copyright law, and the battle over artificial intelligence

"Tech companies have spent billions of dollars this year alone investing in the future of generative artificial intelligence.  

Generative AI apps like ChatGPT, Stable Diffusion and Bard deliver brand new text, images and code results – of comparable quality to human outputs – from user prompts. 

But have you ever wondered how an AI bot knows how to process a user’s request? 

It gets trained, using millions of data points – like books, poems, photos, illustrations and song lyrics – from all over the internet, including copyrighted material. 

In recent months, several authors have sued companies like Meta and OpenAI, alleging that the companies used their copyrighted works to train their generative AI models, all without permission or compensation.

It’s an issue of concern for many who work creative jobs; from authors, to musicians, voice actors and graphic designers.

What’s to come of the legal battles between creatives and AI companies? What role does copyright law play in shaping the future of artificial intelligence?"

Thursday, October 19, 2023

Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial'; NPR, October 19, 2023

NPR; Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial'

"Amy Kurzweil said the chatbot project and the book that came out of it underscored her somewhat positive feelings about AI.

"I feel like you need to imagine the robot you want to see in the world," she said. "We're not going to stop progress. But we can think about applications of AI that facilitate human connection.""

AI is learning from stolen intellectual property. It needs to stop.; The Washington Post, October 19, 2023

William D. Cohan, The Washington Post; AI is learning from stolen intellectual property. It needs to stop.

"The other day someone sent me the searchable database published by Atlantic magazine of more than 191,000 e-books that have been used to train the generative AI systems being developed by Meta, Bloomberg and others. It turns out that four of my seven books are in the data set, called Books3. Whoa.

Not only did I not give permission for my books to be used to generate AI products, but I also wasn’t even consulted about it. I had no idea this was happening. Neither did my publishers, Penguin Random House (for three of the books) and Macmillan (for the other one). Neither my publishers nor I were compensated for use of my intellectual property. Books3 just scraped the content away for free, with Meta et al. profiting merrily along the way. And Books3 is just one of many pirated collections being used for this purpose...

This is wholly unacceptable behavior. Our books are copyrighted material, not free fodder for wealthy companies to use as they see fit, without permission or compensation. Many, many hours of serious research, creative angst and plain old hard work go into writing and publishing a book, and few writers are compensated like professional athletes, Hollywood actors or Wall Street investment bankers. Stealing our intellectual property hurts." 

Wednesday, October 18, 2023

A.I. May Not Get a Chance to Kill Us if This Kills It First; Slate, October 17, 2023

Scott Nover, Slate; A.I. May Not Get a Chance to Kill Us if This Kills It First

"There is a disaster scenario for OpenAI and other companies funneling billions into A.I. models: If a court found that a company was liable for copyright infringement, it could completely halt the development of the offending model." 

Friday, October 13, 2023

Researchers use AI to read word on ancient scroll burned by Vesuvius; The Guardian, October 12, 2023

The Guardian; Researchers use AI to read word on ancient scroll burned by Vesuvius

"When the blast from the eruption of Mount Vesuvius reached Herculaneum in AD79, it burned hundreds of ancient scrolls to a crisp in the library of a luxury villa and buried the Roman town in ash and pumice.

The disaster appeared to have destroyed the scrolls for good, but nearly 2,000 years later researchers have extracted the first word from one of the texts, using artificial intelligence to peer deep inside the delicate, charred remains.

The discovery was announced on Thursday by Prof Brent Seales, a computer scientist at the University of Kentucky, and others who launched the Vesuvius challenge in March to accelerate the reading of the texts. Backed by Silicon Valley investors, the challenge offers cash prizes to researchers who extract legible words from the carbonised scrolls." 

Thursday, October 12, 2023

Ethical considerations in the use of AI; Reuters, October 2, 2023

Hanson Bridgett LLP, Reuters; Ethical considerations in the use of AI

"The burgeoning use of artificial intelligence ("AI") platforms and tools such as ChatGPT creates both opportunities and risks for the practice of law. In particular, the use of AI in research, document drafting and other work product presents a number of ethical issues for lawyers to consider as they contemplate how the use of AI may benefit their practices. In California, as in other states, several ethics rules are particularly relevant to a discussion of the use of AI."

Wednesday, October 11, 2023

Autonomous Vehicles Are Driving Blind; The New York Times, October 11, 2023

 Julia Angwin, The New York Times; Autonomous Vehicles Are Driving Blind

"For all the ballyhoo over the possibility of artificial intelligence threatening humanity someday, there’s remarkably little discussion of the ways it is threatening humanity right now. When it comes to self-driving cars, we are driving blind...

Despite all these real-world examples of harm, many regulators remain distracted by the distant and, to some, far-fetched disaster scenarios spun by the A.I. doomers — high-powered tech researchers and execs who argue that the big worry is the risk someday of human extinction. The British government is holding an A.I. Safety Summit in November, and Politico reports that the A.I. task force is being led by such doomers...

The doomer theories are “a distraction tactic to make people chase an infinite amount of risks,” says Heidy Khlaaf, a software safety engineer who is an engineering director at Trail of Bits, a technical security firm."

Wednesday, September 20, 2023

ANALYSIS: Professional Integrity Tops Lawyers’ Ethics Wish List; Bloomberg Law News, September 20, 2023

Melissa Heelan, Bloomberg Law News; ANALYSIS: Professional Integrity Tops Lawyers’ Ethics Wish List

"Lawyers have undergone some soul-searching in the wake of election fraud cases and the Jan. 6 raid on the US Capitol. So it stands to reason that they chose “maintaining the integrity of the profession” as the legal ethics category most in need of revision, according to a recent Bloomberg Law survey. 

The respondents, both in-house and law firm lawyers, also said that they want to see more guidance on artificial intelligence and technology.

The American Bar Association’s Model Rules of Professional Conduct, which provide the basis for state ethics rules, are divided into eight categories (in addition to a preamble), each comprised of anywhere between three (Counselor) and 18 (Client-Lawyer Relationship) rules."

Thursday, September 14, 2023

In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.; The New York Times, September 13, 2023

Cecilia Kang, The New York Times; In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.

"Elon Musk warned of civilizational risks posed by artificial intelligence. Sundar Pichai of Google highlighted the technology’s potential to solve health and energy problems. And Mark Zuckerberg of Meta stressed the importance of open and transparent A.I. systems.

The tech titans held forth on Wednesday in a three-hour meeting with lawmakers in Washington about A.I. and future regulations. The gathering, known as the A.I. Insight Forum, was part of a crash course for Congress on the technology and organized by the Senate leader, Chuck Schumer, Democrat of New York.

The meeting — also attended by Bill Gates, a founder of Microsoft; Sam Altman of OpenAI; Satya Nadella of Microsoft; and Jensen Huang of Nvidia — was a rare congregation of more than a dozen top tech executives in the same room. It amounted to one of the industry’s most proactive shows of force in the nation’s capital as companies race to be at the forefront of A.I. and to be seen to influence its direction."

Transcript: US Senate Judiciary Hearing on Oversight of A.I.; Tech Policy Press, September 13, 2023

Gabby Miller, Tech Policy Press; Transcript: US Senate Judiciary Hearing on Oversight of A.I.

"Artificial Intelligence (AI) is in the spotlight only a week into the U.S. Congress’ return from recess. On Tuesday, the Senate held two AI-focused Subcommittee hearings just a day before the first AI Insight Forum hosted by Senate Majority Leader Charles Schumer (D-NY).

Tuesday’s hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law was led by Chairman Sen. Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), another of a series of hearings in the committee on how best to govern artificial intelligence. It also corresponded with their formal introduction of a bipartisan bill by Sens. Blumenthal and Hawley that would deny AI companies Section 230 immunity. 

  • Woodrow Hartzog, Professor of Law, Boston University School of Law; Fellow, Cordell Institute for Policy in Medicine & Law, Washington University in St. Louis (written testimony)
  • William Dally, Chief Scientist and Senior Vice President of Research, NVIDIA Corporation (written testimony)
  • Brad Smith, Vice Chair and President, Microsoft Corporation (written testimony)

(Microsoft’s Smith will also be in attendance for Sen. Schumer’s first AI Insight Forum on Wednesday and NVIDIA’s CEO, Jensen Huang, will be joining him.)"

Tuesday, September 12, 2023

How industry experts are navigating the ethics of artificial intelligence; CNN, September 11, 2023

CNN; How industry experts are navigating the ethics of artificial intelligence

"CNN heads to one of the longest-running artificial intelligence conferences in the world, to explore how industry experts and tech companies are trying to develop AI that is fairer and more transparent."

Tuesday, September 5, 2023

USM tapped to develop ethics training in age of artificial intelligence; Portland Press Herald, September 5, 2023

Portland Press Herald; USM tapped to develop ethics training in age of artificial intelligence

"Thompson and other researchers at the Portland-based regulatory training and ethics center hope to better understand what is behind individuals’ tendencies to cut corners ethically and use that information to create training programs for businesses, nonprofits and colleges – including those in the UMaine System – that could help prevent cheating or other unethical conduct in research."