Showing posts with label researchers. Show all posts

Saturday, November 30, 2024

‘AI Jesus’ avatar tests man’s faith in machines and the divine; AP, November 28, 2024

JAMEY KEATEN, AP; ‘AI Jesus’ avatar tests man’s faith in machines and the divine

"Researchers and religious leaders on Wednesday released findings from a two-month experiment through art in a Catholic chapel in Switzerland, where an avatar of “Jesus” on a computer screen — tucked into a confessional — took questions from visitors on faith, morality and modern-day woes, and offered responses based on Scripture...

Philipp Haslbauer, an IT specialist at the Lucerne University of Applied Sciences and Arts who pulled together the technical side of the project, said the AI responsible for taking the role of “AI Jesus” and generating responses was GPT-4o by OpenAI, and an open-source version of the company’s Whisper was used for speech comprehension. 

An AI video generator from Heygen was used to produce voice and video from a real person, he said. Haslbauer said no specific safeguards were used “because we observed GPT-4o to respond fairly well to controversial topics.”

Visitors broached many topics, including true love, the afterlife, feelings of solitude, war and suffering in the world, the existence of God, plus issues like sexual abuse cases in the Catholic Church or its position on homosexuality.

Most visitors described themselves as Christians, though agnostics, atheists, Muslims, Buddhists and Taoists took part too, according to a recap of the project released by the Catholic parish of Lucerne.

About one-third were German speakers, but “AI Jesus” — which is conversant in about 100 languages — also had conversations in languages like Chinese, English, French, Hungarian, Italian, Russian and Spanish."

Tuesday, November 12, 2024

Elon Musk worries free speech advocates with his calls to prosecute researchers and critics; NBC News, November 12, 2024

NBC News; Elon Musk worries free speech advocates with his calls to prosecute researchers and critics

"Musk, the world’s richest person, has in the past two years called for several of his opponents to be prosecuted, and it’s something that free speech advocates say they could overlook if he were only an ordinary private citizen. 

But now that Musk is gaining political power as a close ally of President-elect Donald Trump, his demands for criminal charges against critics are much more worrisome, according to scholars and groups devoted to the First Amendment...

According to NBC News' review of Musk’s public statements, there’s an established pattern of him attacking nonprofit groups, journalists and others who produce information that he disagrees with or that may not be helpful to his goals or image — a pattern that runs counter to frequent vows by him that he’s a defender of the First Amendment’s guarantee of free speech."

Thursday, August 29, 2024

The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day; The Conversation, August 29, 2024

 Director of the Center for Health Law, Ethics & Human Rights, Boston University, The Conversation; The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day

"I remain a strong supporter of the Nuremberg Code and believe that following its precepts is both an ethical and a legal obligation of physician researchers. Yet the public can’t expect Nuremberg to protect it against all types of scientific research or weapons development. 

Soon after the U.S. dropped atomic bombs over Hiroshima and Nagasaki – two years before the Nuremberg trials began – it became evident that our species was capable of destroying ourselves. 

Nuclear weapons are only one example. Most recently, international debate has focused on new potential pandemics, but also on “gain-of-function” research, which sometimes adds lethality to an existing bacteria or virus to make it more dangerous. The goal is not to harm humans but rather to try to develop a protective countermeasure. The danger, of course, is that a super harmful agent “escapes” from the laboratory before such a countermeasure can be developed.

I agree with the critics who argue that at least some gain-of-function research is so dangerous to our species that it should be outlawed altogether. Innovations in artificial intelligence and climate engineering could also pose lethal dangers to all humans, not just some humans. Our next question is who gets to decide whether species-endangering research should be done, and on what basis?"

Tuesday, August 20, 2024

Where AI Thrives, Religion May Struggle; Chicago Booth Review, March 26, 2024

 Jeff Cockrell, Chicago Booth Review; Where AI Thrives, Religion May Struggle

"The United States has seen one of the biggest drops: the share of its residents who said they belonged to a church, synagogue, or mosque fell from 70 percent in 1999 to 47 percent in 2020, according to Gallup.

One potential factor is the proliferation of artificial intelligence and robotics, according to a team of researchers led by Chicago Booth’s Joshua Conrad Jackson and Northwestern’s Adam Waytz. The more exposed people are to automation technologies, the researchers find, the weaker their religious beliefs. They argue that the relationship is not coincidental and that “there are meaningful properties of automation which encourage religious decline.”

Researchers and philosophers have pondered the connection between science and religion for many years. The German sociologist Max Weber spoke of science contributing to the “disenchantment of the world,” or the replacement of supernatural explanations for the workings of the universe with rational, scientific ones. Evidence from prior research doesn’t support a strong “disenchantment” effect, Jackson says, but he and his coresearchers suggest that AI and robotics may influence people’s beliefs in a way that science more generally does not."

Saturday, August 3, 2024

AI is complicating plagiarism. How should scientists respond?; Nature, July 30, 2024

Diana Kwon, Nature; AI is complicating plagiarism. How should scientists respond?

"From accusations that led Harvard University’s president to resign in January, to revelations in February of plagiarized text in peer-review reports, the academic world has been roiled by cases of plagiarism this year.

But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.

Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.

But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK."

Wednesday, May 22, 2024

Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content; UT News, The University of Texas at Austin, May 21, 2024

UT News, The University of Texas at Austin; Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content

"When people learn things they should not know, getting them to forget that information can be tough. This is also true of rapidly growing artificial intelligence programs that are trained to think as we do, and it has become a problem as they run into challenges based on the use of copyright-protected material and privacy issues.

To respond to this challenge, researchers at The University of Texas at Austin have developed what they believe is the first “machine unlearning” method applied to image-based generative AI. This method offers the ability to look under the hood and actively block and remove any violent images or copyrighted works without losing the rest of the information in the model.

“When you train these models on such massive data sets, you’re bound to include some data that is undesirable,” said Radu Marculescu, a professor in the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering and one of the leaders on the project. “Previously, the only way to remove problematic content was to scrap everything, start anew, manually take out all that data and retrain the model. Our approach offers the opportunity to do this without having to retrain the model from scratch.”"

Tuesday, April 23, 2024

New Group Joins the Political Fight Over Disinformation Online; The New York Times, April 22, 2024

Steven Lee Myers, The New York Times; New Group Joins the Political Fight Over Disinformation Online

"Many of the nation’s most prominent researchers, facing lawsuits, subpoenas and physical threats, have pulled back.

“More and more researchers were getting swept up by this, and their institutions weren’t either allowing them to respond or responding in a way that really just was not rising to meet the moment,” Ms. Jankowicz said in an interview. “And the problem with that, obviously, is that if we don’t push back on these campaigns, then that’s the prevailing narrative.”

That narrative is prevailing at a time when social media companies have abandoned or cut back efforts to enforce their own policies against certain types of content.

Many experts have warned that the problem of false or misleading content is only going to increase with the advent of artificial intelligence.

“Disinformation will remain an issue as long as the strategic gains of engaging in it, promoting it and profiting from it outweigh consequences for spreading it,” Common Cause, the nonpartisan public interest group, wrote in a report published last week that warned of a new wave of disinformation around this year’s vote."

Monday, March 25, 2024

Judge dismisses Elon Musk's suit against hate speech researchers; NPR, March 25, 2024

NPR; Judge dismisses Elon Musk's suit against hate speech researchers

"A federal judge has dismissed X owner Elon Musk's lawsuit against a research group that documented an uptick in hate speech on the social media site, saying the organization's reports on the platform formerly known as Twitter were protected by the First Amendment. 

Musk's suit "is so unabashedly and vociferously about one thing that there can be no mistaking that purpose," wrote U.S. District Judge Charles Breyer in his Monday ruling. "This case is about punishing the Defendants for their speech."

Amid an advertiser boycott of X last year, Musk sued the research and advocacy organization Center for Countering Digital Hate, alleging it violated the social media site's terms of service in gathering data for its reports."

Thursday, March 21, 2024

Canada moves to protect coral reef that scientists say ‘shouldn’t exist’; The Guardian, March 15, 2024

The Guardian; Canada moves to protect coral reef that scientists say ‘shouldn’t exist’

"For generations, members of the Kitasoo Xai’xais and Heiltsuk First Nations, two communities off the Central Coast region of British Columbia, had noticed large groups of rockfish congregating in a fjord system.

In 2021, researchers and the First Nations, in collaboration with the Canadian government, deployed a remote-controlled submersible to probe the depths of the Finlayson Channel, about 300 miles north-west of Vancouver.

On the last of nearly 20 dives, the team made a startling discovery – one that has only recently been made public...

The discovery marks the latest in a string of instances in which Indigenous knowledge has directed researchers to areas of scientific or historic importance. More than a decade ago, Inuk oral historian Louie Kamookak compared Inuit stories with explorers’ logbooks and journals to help locate Sir John Franklin’s lost ships, HMS Erebus and HMS Terror. In 2014, divers located the wreck of the Erebus in a spot Kamookak suggested they search, and using his directions found the Terror two years later."

Friday, October 13, 2023

Researchers use AI to read word on ancient scroll burned by Vesuvius; The Guardian, October 12, 2023

The Guardian; Researchers use AI to read word on ancient scroll burned by Vesuvius

"When the blast from the eruption of Mount Vesuvius reached Herculaneum in AD79, it burned hundreds of ancient scrolls to a crisp in the library of a luxury villa and buried the Roman town in ash and pumice.

The disaster appeared to have destroyed the scrolls for good, but nearly 2,000 years later researchers have extracted the first word from one of the texts, using artificial intelligence to peer deep inside the delicate, charred remains.

The discovery was announced on Thursday by Prof Brent Seales, a computer scientist at the University of Kentucky, and others who launched the Vesuvius challenge in March to accelerate the reading of the texts. Backed by Silicon Valley investors, the challenge offers cash prizes to researchers who extract legible words from the carbonised scrolls." 

Thursday, August 3, 2023

Is facial recognition identifying you? Are there ‘dog whistles’ in ChatGPT? Ethics in artificial intelligence gets unpacked; Northeastern Global News, August 3, 2023

Northeastern Global News; Is facial recognition identifying you? Are there ‘dog whistles’ in ChatGPT? Ethics in artificial intelligence gets unpacked

"The graduate-level program at Northeastern is designed to teach researchers how to examine artificial intelligence and data systems through an ethical framework. The course is conducted by the Ethics Institute, an interdisciplinary effort supported by the Office of the Provost, the College of Social Sciences and Humanities (CSSH) and the Department of Philosophy and Religion...

The aim of the course was to both provide students with some background on the technical components underpinning these systems as well as the frameworks used to adequately analyze their ethical impact. 

Throughout the seminar, students each day were tasked with providing oral arguments based on the day’s reading. Each student was also tasked with developing an original thesis around the topic of discussion and presented it the final week of class. 

One central topic of discussion was algorithmic fairness, Creel says."  

Thursday, July 13, 2023

RFK Jr. is building a presidential campaign around conspiracy theories; NPR, July 13, 2023

NPR; RFK Jr. is building a presidential campaign around conspiracy theories

"What's not up for debate for scientists, researchers and public health officials is Kennedy's long track record of undermining science and spreading dubious claims.

"He has an enormous platform. He is going to, over the next many months, do a series of town hall meetings where he will continue to put bad information out there that will cause people to make bad decisions for themselves and their families, again putting children at risk and causing children to suffer," Offit said. "Because it's always the most vulnerable among us who suffer our ignorance.""

Tuesday, June 20, 2023

G.O.P. Targets Researchers Who Study Disinformation Ahead of 2024 Election; The New York Times, June 19, 2023

Steven Lee Myers, The New York Times; G.O.P. Targets Researchers Who Study Disinformation Ahead of 2024 Election

"On Capitol Hill and in the courts, Republican lawmakers and activists are mounting a sweeping legal campaign against universities, think tanks and private companies that study the spread of disinformation, accusing them of colluding with the government to suppress conservative speech online."

Saturday, April 29, 2023

Editors quit top neuroscience journal to protest against open-access charges; Nature, April 21, 2023

 Katharine Sanderson, Nature; Editors quit top neuroscience journal to protest against open-access charges

"More than 40 editors have resigned from two leading neuroscience journals in protest against what the editors say are excessively high article-processing charges (APCs) set by the publisher. They say that the fees, which publishers use to cover publishing services and in some cases make money, are unethical. The publisher, Dutch company Elsevier, says that its fees provide researchers with publishing services that are above average quality for below average price. The editors plan to start a new journal hosted by the non-profit publisher MIT Press.

The decision to resign came about after many discussions among the editors, says Stephen Smith, a neuroscientist at the University of Oxford, UK, and editor-in-chief of one of the journals, NeuroImage. “Everyone agreed that the APC was unethical and unsustainable,” says Smith, who will lead the editorial team of the new journal, Imaging Neuroscience, when it launches.

The 42 academics who made up the editorial teams at NeuroImage and its companion journal NeuroImage: Reports announced their resignations on 17 April. The journals are open access and require authors to pay a fee for publishing services. The APC for NeuroImage is US$3,450; NeuroImage: Reports charges $900, which will double to $1,800 from 31 May. Elsevier, based in Amsterdam, says that the APCs cover the costs associated with publishing an article in an open-access journal, including editorial and peer-review services, copyediting, typesetting, archiving, indexing, marketing and administrative costs. Andrew Davis, Elsevier’s vice-president of corporate communications, says that NeuroImage’s fee is less than that of the nearest comparable journal in its field, and that the publisher’s APCs are “set in line with our policy [of] providing above average quality for below average price”."

Tuesday, March 7, 2023

WHO kicks off deliberations on ethical framework and tools for social listening and infodemic management; World Health Organization (WHO), February 10, 2023

World Health Organization (WHO); WHO kicks off deliberations on ethical framework and tools for social listening and infodemic management

"WHO has convened a panel of experts to discuss ethical considerations in social listening and infodemic management. The aim of the ethics expert panel is to reach a consensus on ethical principles for social listening and other infodemic management activities and provide recommendations for health authorities and researchers.

The panel brings together experts from academia, health authorities, and civil society, with a wide range of expertise such as in biomedical ethics, data privacy, law, digital sociology, digital health, epidemiology, health communication, health promotion, and media studies.

An infodemic is an overabundance of information, including misinformation, that surges during a health emergency. During a health emergency, people seek, receive, process and act on information differently than in other times, which makes it even more important to use evidence-based strategies in response. Infodemic management practice, underpinned by the science of infodemiology, has rapidly evolved in recent years. Tools and experience that were developed during the COVID-19 pandemic response have already been applied to other outbreaks, such as Ebola, polio and cholera. 

Social listening in public health is the process of gathering information about people's questions, concerns, and circulating narratives and misinformation about health from online and offline data sources. Data gleaned from social media platforms are being used in a number of ways to identify and understand outbreaks, geographic and demographic trends, networks, sentiment and behavioral responses to public health emergencies. Offline data collection may include rapid surveys, townhalls, or interviews with people in vulnerable groups, communities of focus and specific populations. These data are then integrated with other data sources from the health system (such as health information systems) and outside of it (mobility data) to generate infodemic insights and inform strategies to manage infodemics.

However, the collection and use of this data presents ethical challenges, such as privacy and consent, and there is currently no agreed-upon ethical framework for social listening and infodemic management. 

The panel will focus on issues such as data control, commercialization, transparency, and accountability, and will consider ethical guidelines for both online and offline data collection, analysis and reporting. The goal is to develop an ethical framework for social listening and infodemic management to guide health authorities when planning and standing up infodemic insights teams and activities, as well as for practitioners when planning and implementing social listening and infodemic management."

Monday, May 30, 2022

Nature addresses helicopter research and ethics dumping; Nature, June 2, 2022

Nature; Nature addresses helicopter research and ethics dumping

"Exploitative research practices, sadly, come in all shapes and sizes. ‘Helicopter research’ occurs when researchers from high-income settings, or who are otherwise privileged, conduct studies in lower-income settings or with groups who are historically marginalized, with little or no involvement from those communities or local researchers in the conceptualization, design, conduct or publication of the research. ‘Ethics dumping’ occurs when similarly privileged researchers export unethical or unpalatable experiments and studies to lower-income or less-privileged settings with different ethical standards or less oversight."

Monday, March 7, 2022

Opinion: Genomics’ Ethical Gray Areas Are Harming the Developing World; Undark, February 24, 2022

DYNA ROCHMYANINGSIH, Undark; Opinion: Genomics’ Ethical Gray Areas Are Harming the Developing World

"Various ethics guidelines on health-related research — including UNESCO’s International Declaration on Human Genetic Data and international ethical guidelines published by the Council for International Organizations of Medical Sciences, or CIOMS, in collaboration with the World Health Organization — advise researchers to seek approval from an ethics committee in the host country. Such reviews are critical, bioethicists say, because cultural and social considerations of research ethics might vary between countries. In low-resource countries especially, ethics reviews are essential to protect the interests of participants and ensure that data are used in ways that benefit local communities.

Nowhere in Larena and Jakobsson’s paper, or in any of the subsequent publications based on the Philippines study, does the Uppsala team mention obtaining such an ethics approval in the Philippines — and Philippines officials say they never granted the team such an approval."

Tuesday, March 1, 2022

How to protect the first ‘CRISPR babies’ prompts ethical debate; Nature, February 25, 2022

Smriti Mallapaty, Nature; How to protect the first ‘CRISPR babies’ prompts ethical debate

"Two prominent bioethicists in China are calling on the government to set up a research centre dedicated to ensuring the well-being of the first children born with edited genomes. Scientists have welcomed the discussion, but many are concerned that the pair’s approach would lead to unnecessary surveillance of the children.

The proposal comes ahead of the possibly imminent release from prison of He Jiankui, the researcher who in 2018 shocked the world by announcing that he had created babies with altered genomes. He’s actions were widely condemned by scientists around the world, who called for a global moratorium on editing embryos destined for implantation. Several ethics committees have since concluded that the technology should not be used to make changes that can be passed on."

Saturday, February 26, 2022

World's first octopus farm stirs ethical debate; Reuters, February 23, 2022

Nathan Allen and Guillermo Martinez , Reuters; World's first octopus farm stirs ethical debate

"Since the 2020 documentary "My Octopus Teacher" captured the public imagination with its tale of a filmmaker's friendship with an octopus, concern for their wellbeing has grown.

Last year, researchers at the London School of Economics concluded from a review of 300 scientific studies that octopuses were sentient beings capable of experiencing distress and happiness, and that high-welfare farming would be impossible.

Raul Garcia, who heads the WWF conservation organisation's fisheries operations in Spain, agrees.

"Octopuses are extremely intelligent and extremely curious. And it's well known they are not happy in conditions of captivity," he told Reuters."

Friday, February 18, 2022

The government dropped its case against Gang Chen. Scientists still see damage done; WBUR, February 16, 2022

Max Larkin, WBUR ; The government dropped its case against Gang Chen. Scientists still see damage done

"When federal prosecutors dropped all charges against MIT professor Gang Chen in late January, many researchers rejoiced in Greater Boston and beyond.

Chen had spent the previous year fighting charges that he had lied and omitted information on U.S. federal grant applications. His vindication was a setback for the "China Initiative," a controversial Trump-era legal campaign aimed at cracking down on the theft of American research and intellectual property by the Chinese government.

Researchers working in the United States say the China Initiative has harmed both their fellow scientists and science itself — as a global cooperative endeavor. But as U.S.-China tensions remain high, the initiative remains in place."