Showing posts with label US. Show all posts

Tuesday, August 20, 2024

Where AI Thrives, Religion May Struggle; Chicago Booth Review, March 26, 2024

Jeff Cockrell, Chicago Booth Review; Where AI Thrives, Religion May Struggle

"The United States has seen one of the biggest drops: the share of its residents who said they belonged to a church, synagogue, or mosque fell from 70 percent in 1999 to 47 percent in 2020, according to Gallup.

One potential factor is the proliferation of artificial intelligence and robotics, according to a team of researchers led by Chicago Booth’s Joshua Conrad Jackson and Northwestern’s Adam Waytz. The more exposed people are to automation technologies, the researchers find, the weaker their religious beliefs. They argue that the relationship is not coincidental and that “there are meaningful properties of automation which encourage religious decline."

Researchers and philosophers have pondered the connection between science and religion for many years. The German sociologist Max Weber spoke of science contributing to the “disenchantment of the world,” or the replacement of supernatural explanations for the workings of the universe with rational, scientific ones. Evidence from prior research doesn’t support a strong “disenchantment” effect, Jackson says, but he and his coresearchers suggest that AI and robotics may influence people’s beliefs in a way that science more generally does not."

Friday, May 24, 2024

Navigating the Patchwork of AI Laws, Standards, and Guidance; American Bar Association (ABA), May 9, 2024

Emily Maxim Lamm, American Bar Association (ABA); Navigating the Patchwork of AI Laws, Standards, and Guidance

"The opening weeks of 2024 have seen a record number of state legislative proposals seeking to regulate artificial intelligence (AI) across different sectors in the United States...

With this type of rapid-fire start to the 2024 legislative season, the AI legal landscape will likely continue evolving across the board. As a result, organizations today are facing a complex and dizzying web of proposed and existing AI laws, standards, and guidance.

This article aims to provide a cohesive overview of this AI patchwork and to help organizations navigate this increasingly intricate terrain. The focus here will be on the implications of the White House AI Executive Order, existing state and local laws in the United States, the European Union’s AI Act, and, finally, governance standards to help bring these diverse elements together within a framework."

Monday, April 15, 2024

CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation; Carnegie Mellon University, April 11, 2024

Kelly Saavedra, Carnegie Mellon University; CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation

"Carnegie Mellon University and Keio University have announced they will join forces with one another and with industry partners to boost AI-focused research and workforce development in the United States and Japan. The partnership is one of two new university partnerships between the two countries in the area of artificial intelligence announced in Washington, D.C., April 9 at an event hosted by U.S. Secretary of Commerce Gina Raimondo.

The collaboration joins two universities with outstanding AI programs and forward-looking leaders with leading technology companies committed to providing funding and resources aimed at solving real-world problems. 

CMU President Farnam Jahanian was in Washington, D.C., for the signing ceremony held in the Department of Commerce's Research Library, during which the University of Washington and the University of Tsukuba agreed to a similar collaboration."

Friday, April 5, 2024

U.S., Finland Agree To Cooperate On Combating Foreign Disinformation; Radio Free Europe/Radio Liberty, April 4, 2024

RFE/RL, Radio Free Europe/Radio Liberty; U.S., Finland Agree To Cooperate On Combating Foreign Disinformation

"The United States and Finland have agreed to cooperate in “countering foreign state information manipulation,” the U.S. State Department announced on April 4. U.S. Secretary of State Antony Blinken and Finnish Foreign Minister Elina Valtonen signed the memorandum in Brussels at a summit of NATO foreign ministers. The two countries pledged to “expand information sharing about foreign disinformation, share best practices for countering it,” and align policies with the U.S. Framework to Counter Foreign State Information Manipulation. At a joint press briefing, Valtonen said “information manipulation is a growing challenge to democratic countries and open societies.”"

China Is Targeting U.S. Voters and Taiwan With AI-Powered Disinformation; Wall Street Journal, April 5, 2024

Dustin Volz, Wall Street Journal; China Is Targeting U.S. Voters and Taiwan With AI-Powered Disinformation


"Online actors linked to the Chinese government are increasingly leveraging artificial intelligence to target voters in the U.S., Taiwan and elsewhere with disinformation, according to new cybersecurity research and U.S. officials."

Thursday, December 14, 2023

Senator to Pope Francis: Not so fast on AI; Politico, December 14, 2023


"Congress hasn’t done enough work on artificial intelligence regulation in the U.S. to join Pope Francis’ proposal for a global treaty to regulate the technology, Sen. Mark Warner told POLITICO. On Thursday, Francis called for a binding treaty that would ensure artificial intelligence is developed and used ethically. He said in a statement that the risks of technology lacking human values of compassion, mercy, morality and forgiveness are too great — and that failing to regulate it could “pose a risk to our survival.”"

Friday, May 6, 2022

The Information War in Ukraine Is Far From Over; The New York Times, May 5, 2022

Serge Schmemann, The New York Times; The Information War in Ukraine Is Far From Over

"As a former K.G.B. agent, Mr. Putin sees the world as a battleground of conspiratorial maneuvers. In his speeches, the color revolutions in Ukraine and other former Soviet republics and the Arab Spring and other global upheavals are machinations to bolster American domination. As an heir to the Soviet worldview, he believes more than many Western leaders do in the importance of information warfare, both to give his regime a veneer of legitimacy and to challenge liberal democracy. On this battlefield, lies are ammunition in Mr. Putin’s long and increasingly personal struggle to stay in power.

As the war enters a new phase, as the images and horrors become familiar and the costs rise, it will become ever more difficult for the Biden administration and for Mr. Zelensky to sustain their early lead in the information war. That makes it all the more imperative for the West to press the message that this is not a war Ukraine chose and that the cost of allowing Mr. Putin to have his way in Ukraine would be far higher than the sacrifices required to block him."

Friday, April 22, 2022

AI and Copyright in China; Lexology, April 15, 2022

Fred Rocafort, Harris Bricken, Lexology; AI and Copyright in China

"In the landmark Shenzhen Tencent v. Shanghai Yingxun case, the Nanshan District People’s Court considered whether an article written by Tencent’s AI software Dreamwriter was entitled to copyright protection. The court found that it was, with copyright vesting in Dreamwriter’s developers, not Dreamwriter itself. In its decision, the court noted that “the arrangement and selection of the creative team in terms of data input, trigger condition setting, template and corpus style choices are intellectual activities that have a direct connection with the specific expression of the article.” These intellectual activities were carried out by the software developers.

The World Intellectual Property Organization (WIPO) has distinguished between works that are generated without human intervention (“AI-generated”) and works generated with material human intervention and/or direction (“AI-assisted”). In the case of AI-assisted works, artificial intelligence is arguably just a tool used by humans. Vesting of copyright in the humans involved in these cases is consistent with existing copyright law, just as an artist owns the copyright to a portrait made using a paintbrush or a song recorded using a guitar. The scenario in the Tencent case falls in the AI-assisted bucket, with Dreamwriter being the tool."

Monday, October 25, 2021

How Facebook neglected the rest of the world, fueling hate speech and violence in India; The Washington Post, October 24, 2021

The Washington Post; How Facebook neglected the rest of the world, fueling hate speech and violence in India

A trove of internal documents show Facebook didn’t invest in key safety protocols in the company’s largest market.

"In February 2019, not long before India’s general election, a pair of Facebook employees set up a dummy account to better understand the experience of a new user in the company’s largest market. They made a profile of a 21-year-old woman, a resident of North India, and began to track what Facebook showed her.

At first, her feed filled with soft-core porn and other, more harmless, fare. Then violence flared in Kashmir, the site of a long-running territorial dispute between India and Pakistan. Indian Prime Minister Narendra Modi, campaigning for reelection as a nationalist strongman, unleashed retaliatory airstrikes that India claimed hit a terrorist training camp.

Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech. “300 dogs died now say long live India, death to Pakistan,” one post said, over a background of laughing emoji faces. “These are pakistani dogs,” said the translated caption of one photo of dead bodies lined-up on stretchers, hosted in the News Feed.

An internal Facebook memo, reviewed by The Washington Post, called the dummy account test an “integrity nightmare” that underscored the vast difference between the experience of Facebook in India and what U.S. users typically encounter. One Facebook worker noted the staggering number of dead bodies."

Monday, August 24, 2020

Algorithms can drive inequality. Just look at Britain's school exam chaos; CNN, August 23, 2020

Zamira Rahim, CNN; Algorithms can drive inequality. Just look at Britain's school exam chaos

""Part of the problem is the data being fed in," Crider said.
"Historical data is being fed in [to algorithms] and they are replicating the [existing] bias."
Webb agrees. "A lot of [the issue] is about the data that the algorithm learns from," she said. "For example, a lot of facial recognition technology has come out ... the problem is, a lot of [those] systems were trained on a lot of white, male faces.
"So when the software comes to be used it's very good at recognizing white men, but not so good at recognizing women and people of color. And that comes from the data and the way the data was put into the algorithm."
Webb added that she believed the problems could partly be mitigated through "a greater attention to inclusivity in datasets" and a push to add a greater "multiplicity of voices" around the development of algorithms."

Friday, February 14, 2020

Coronavirus: The global race to patent a remedy; The Mercury News, February 13, 2020

Lisa M. Krieger, The Mercury News; Coronavirus: The global race to patent a remedy

"As a lethal coronavirus triggers a humanitarian crisis in the world’s most populous nation, who owns the rights to a potential cure?

The Bay Area’s pharmaceutical powerhouse Gilead Sciences is first in line for a Chinese patent for its drug called Remdesivir, which shows promise against the broad family of coronaviruses.

But now a team of Chinese scientists say they’ve improved and targeted its use — and, in a startling move, have also filed for a patent...

“Each side wants to be the entity that came up with the treatment for coronavirus,” said Jacob Sherkow, professor of law at the Innovation Center for Law and Technology at New York Law School. “This is not a knockoff of a Louis Vuitton handbag,”...

Patent protection — and market exclusivity — is the lifeblood of drug companies such as Gilead, creating the incentive to find, test and market a medicine."

Tuesday, September 17, 2019

TikTok’s Beijing roots fuel censorship suspicion as it builds a huge U.S. audience; The Washington Post, September 15, 2019

Drew Harwell and Tony Romm, The Washington Post; TikTok’s Beijing roots fuel censorship suspicion as it builds a huge U.S. audience

"TikTok’s surging popularity spotlights the tension between the Web’s global powers: the United States, where free speech and competing ideologies are held as (sometimes messy) societal bedrocks, and China, where political criticism is forbidden as troublemaking."

Monday, February 11, 2019

A Confederacy of Grift: The subjects of Robert Mueller’s investigation are cashing in; The Atlantic, February 10, 2019

Quinta Jurecic, The Atlantic; A Confederacy of Grift

"For people in the greater Trump orbit, the publicity of a legal clash with Robert Mueller provides a chance to tap into the thriving marketplace of fringe pro-Trump media. Disinformation in America is a business. And the profit to be turned from that business is a warning sign that the alternative stories of the Mueller investigation spun by the president’s supporters will have a long shelf life."

Sunday, November 18, 2018

To regulate AI we need new laws, not just a code of ethics; The Guardian, October 28, 2018

Paul Chadwick, The Guardian; To regulate AI we need new laws, not just a code of ethics

"For a sense of Facebook’s possible future EU operating environment, Zuckerberg should read the Royal Society’s new publication about the ethical and legal challenges of governing artificial intelligence. One contribution is by a senior European commission official, Paul Nemitz, principal adviser, one of the architects of the EU’s far-reaching General Data Protection Regulation, which took effect in May this year.

Nemitz makes clear the views are his own and not necessarily those of the European commission, but the big tech companies might reasonably see his article, entitled “Constitutional democracy and technology in the age of artificial intelligence”, as a declaration of intent.

“We need a new culture of technology and business development for the age of AI which we call ‘rule of law, democracy and human rights by design’,” Nemitz writes. These core ideas should be baked into AI, because we are entering “a world in which technologies like AI become all pervasive and are actually incorporating and executing the rules according to which we live in large part”.

To Nemitz, “the absence of such framing for the internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call”."

Thursday, October 11, 2018

Research ethics: are we minimizing harm or maximizing bureaucracy?; University Affairs/Affaires Universitaires, October 8, 2018

Karen Robson and Reana Maier, University Affairs/Affaires Universitaires; Research ethics: are we minimizing harm or maximizing bureaucracy?

"Researchers working with human subjects in North America and beyond are very familiar with ethics protocols required by institutions of higher education, protocols rightly put in place to minimize harm to research participants.

In Canada, individual higher education institutions have ethical jurisdiction over the research conducted within their walls and by their employees, although these operations are guided by the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS). First published in 1998 by the three main federal research funding agencies, the TCPS requires that all university research involving human subjects be approved by a research ethics board (REB) and outlines the principles to be upheld in assessing the ethical merits of an application, though no standardized process of application or evaluation is given. It is understandable that universities, following the TCPS, want to put steps in place to minimize the harm researchers may cause to potential participants.

In recent years, however, the Canadian ethics process seems to have become more of an exercise in bureaucracy than a reasonable examination of the harm posed by research, and we fear this process will prevent actual research from occurring...

We are obviously not arguing against the existence of ethics protocols or REBs, but we believe that ethics sprawl is discouraging researchers rather than protecting participants. The fetishization of rules and bureaucratic process in ethics review and a blanket worst-case scenario approach is a drain on researchers’ time and resources in return for – what? Do we have any evidence that this level of procedural minutiae is providing improved protection of research participants or preventing unethical research? We might want to consider taking a page from our colleagues south of the border: the National Science Foundation has, as of 2017, abolished the need for institutional research board ethical approval on all projects deemed “low risk.”"

Thursday, October 4, 2018

Publishers Escalate Legal Battle Against ResearchGate; Inside Higher Ed, October 4, 2018

Lindsay McKenzie, Inside Higher Ed; Publishers Escalate Legal Battle Against ResearchGate

"The court documents, obtained by Inside Higher Ed from the U.S. District Court in Maryland, include an “illustrative” but “not exhaustive list” of 3,143 research articles the publishers say were shared by ResearchGate in breach of copyright protections. The publishers suggest they could be entitled to up to $150,000 for each infringed work -- a possible total of more than $470 million.

This latest legal challenge is the second that the publishers have filed against ResearchGate in the last year. The first lawsuit, filed in Germany in October 2017, is ongoing. Inside Higher Ed was unable to review court documents for the European lawsuit.

The U.S. lawsuit is the latest development in a long and increasingly complex dispute between some academic publishers and the networking site."

Wednesday, August 8, 2018

The Chinese threat that an aircraft carrier can’t stop; The Washington Post, August 7, 2018

The Washington Post; The Chinese threat that an aircraft carrier can’t stop

"America’s vulnerability to information warfare was a special topic of concern. One participant recalled a conversation several years ago with a Russian general who taunted him: “You have a cybercommand but no information operations. Don’t you know that information operations are how you take countries down?”"

Thursday, May 31, 2018

An American Alternative to Europe’s Privacy Law; The New York Times, May 30, 2018

Tim Wu, The New York Times; An American Alternative to Europe’s Privacy Law

"To be sure, a European-style regulatory system operates faster and has clearer rules than an American-style common-law approach. But the European approach runs the risk of being insensitive to context and may not match our ethical intuitions in individual cases. If the past decade of technology has taught us anything, it is that we face a complex and varied array of privacy problems. Case-by-case consideration might be the best way to find good solutions to many of them and, when the time comes (if the time comes), to guide the writing of general federal privacy legislation.

A defining fact of our existence today is that we share more of ourselves with Silicon Valley than with our accountants, lawyers and doctors. It is about time the law caught up with that."

Thursday, May 24, 2018

Why you’re getting so many emails about privacy policies; Vox, May 24, 2018

Emily Stewart, Vox; Why you’re getting so many emails about privacy policies

"The United States hasn’t given up its seat on the table, but it could certainly take a bigger role than it has in order to ensure that other countries, when they do implement regulations on tech and information, aren’t going too far.

“People are concerned about privacy, hate speech, disinformation, and we aren’t leading on solutions to these concerns that would at the same time preserve the free flow of information,” Kornbluh said. “You don’t want some governments saying, ‘We’re combating fake news,’ and compromising human rights.”"