Showing posts with label AI research. Show all posts

Wednesday, July 3, 2024

We asked people about using AI to make the news. They’re anxious and annoyed; Poynter, June 27, 2024

Poynter; We asked people about using AI to make the news. They’re anxious and annoyed

"Sometimes, when it comes to using artificial intelligence in journalism, people think of a calculator, an accepted tool that makes work faster and easier.

Sometimes, they think it’s flat-out cheating, passing off the work of a robot for a human journalist.

Sometimes, they don’t know what to think at all — and it makes them anxious.

All of those attitudes emerged from new focus group research from the University of Minnesota commissioned by the Poynter Institute about news consumers’ attitudes toward AI in journalism.

The research, conducted by Benjamin Toff, director of the Minnesota Journalism Center and associate professor of Minnesota’s Hubbard School of Journalism & Mass Communication, was unveiled to participants at Poynter’s Summit on AI, Ethics and Journalism on June 11. The summit brought together dozens of journalists and technologists to discuss the ethical implications for journalists using AI tools in their work. 

“I think it’s a good reminder of not getting too far ahead of the public,” Toff said, in terms of key takeaways for newsrooms. “However much there might be usefulness around using these tools … you need to be able to communicate about it in ways that are not going to be alienating to large segments of the public who are really concerned about what these developments will mean for society at large.”

The focus groups, conducted in late May, involved 26 average news consumers, some who knew a fair amount about AI’s use in journalism, and some who knew little. 

Toff discussed three key findings from the focus groups:"

Friday, June 7, 2024

Research suggests AI could help teach ethics; Phys.org, June 6, 2024

Jessica Nelson, Phys.org; Research suggests AI could help teach ethics

"Dr. Hyemin Han, an associate professor, compared responses from the popular Large Language Model ChatGPT with those of college students. He found that AI has emerging capabilities to simulate human moral decision-making.

In a paper recently published in the Journal of Moral Education, Han wrote that ChatGPT answered basic ethical dilemmas much as the average college student would. When asked, it also provided a rationale comparable to the reasons a human would give, such as avoiding harm to others.

Han then provided the program with a new example of virtuous behavior that contradicted its previous conclusions and asked the question again. In one case, the program was asked what a person should do upon discovering an escaped prisoner. ChatGPT first replied that the person should call the police. However, after Han instructed it to consider Dr. Martin Luther King, Jr.'s "Letter from Birmingham Jail," its answer changed to allow for the possibility of unjust incarceration...

Han's second paper, published recently in Ethics & Behavior, discusses the implications of the research for the fields of ethics and education. In particular, he focused on the way ChatGPT was able to form new, more nuanced conclusions after the use of a moral exemplar, or an example of good behavior in the form of a story.

Mainstream thought in educational psychology generally accepts that exemplars are useful in teaching character and ethics, though some have challenged the idea. Han says his work with ChatGPT shows that exemplars are not only effective but also necessary."

Monday, September 5, 2022

Universities Are Making Ethics a Key Focus of Artificial Intelligence Research; Insight Into Diversity, August 16, 2022

Insight Into Diversity; Universities Are Making Ethics a Key Focus of Artificial Intelligence Research

"As artificial intelligence (AI) becomes more commonplace in our lives, many activists and academics have raised concerns about the ethics of this technology, including issues with maintaining privacy and preventing bias and discrimination...

“The subject of ethics and justice in technology development is incredibly urgent — it’s on fire,” Sydney Skybetter, a senior lecturer in theater arts and performance studies at Brown, explained in a recent university news release. Skybetter is one of three faculty members leading an innovative new course titled Choreorobotics 0101 in the computer science department. The class allows students with experience in computer science, engineering, dance, and theater to merge their interests by learning how to choreograph a 30-second dance routine for a pair of robots provided by the company Boston Dynamics. The goal of the course is to give these students — most of whom will go on to careers in the tech industry — the opportunity to engage in discussions about the purpose of robotics and AI technology and how they can be used to “minimize harm and make a positive impact on society,” according to the release."

Thursday, February 22, 2018

Control AI now or brace for nightmare future, experts warn; CNN, February 21, 2018

Sherisse Pham, CNN; Control AI now or brace for nightmare future, experts warn

"...Tesla (TSLA) CEO Elon Musk issued a dire warning, suggesting the race between different countries for AI superiority could cause a new world war.

Cambridge's [Seán] Ó hÉigeartaigh described a somewhat less apocalyptic vision.

"We live in a world that could become fraught with day-to-day hazards from the misuse of AI," he said in a statement. "We need to take ownership of the problems -- because the risks are real."

The report, which was also backed by Musk's OpenAI research institute and the Center for a New American Security, isn't all doom and gloom.

The authors acknowledge AI has many potential benefits, but they are urging governments and companies to take steps now to reduce the risks of it being misused."

Wednesday, January 18, 2017

Critical, But Overlooked: Ethics Is a Tough Sell to Funders. Is That About to Change?; Inside Philanthropy, 1/17/17

Mike Scutari, Inside Philanthropy; Critical, But Overlooked: Ethics Is a Tough Sell to Funders. Is That About to Change?
"Strong ethics may be all important to the healthy functioning of American society, but this is an area that's historically fallen through the cracks of foundation grantmaking programs. Fundraisers for ethics work routinely have to shoehorn their proposals to fit into the issue areas that foundations do care about, like public health or campaign finance reform. Individual donors play a critical role in supporting ethics research, but contributors interested in this area are hardly plentiful. If you search "ethics" in the Lilly School donor base of gifts of a million dollars and up, you'll get a mere five results...

AI is much in the news right now, and ethics giving often follows headlines. A few years back, after the financial crisis, business schools received a string of gifts aimed at teaching ethics to tomorrow's executives and financiers. But as memory of the financial crisis faded, so did this stream of money...

Indeed, if reporting persists out of Trump's Washington regarding conflicts of interest, it seems likely that the ethics field writ large will get a Trump bump when it comes to fundraising."