Thursday, October 17, 2024

Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?; The Guardian, October 16, 2024

Garrison Lovely, The Guardian; Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill? 

"I’m writing a book on the economics and politics of AI and have analyzed years of nationwide polling on the topic. The findings are pretty consistent: people worry about risks from AI, favor regulations, and don’t trust companies to police themselves. Incredibly, these findings tend to hold true for both Republicans and Democrats.

So why would Newsom buck the popular bill?

Well, the bill was fiercely resisted by most of the AI industry, including Google, Meta and OpenAI. The US has let the industry self-regulate, and these companies desperately don’t want that to change – whatever sounds their leaders make to the contrary...

The top three names on the congressional letter – Zoe Lofgren, Anna Eshoo, and Ro Khanna – have collectively taken more than $4m in political contributions from the industry, accounting for nearly half of their lifetime top-20 contributors. Google was their biggest donor by far, with nearly $1m in total.

The death knell probably came from the former House speaker Nancy Pelosi, who published her own statement against the bill, citing the congressional letter and Li’s Fortune op-ed.

In 2021, reporters discovered that Lofgren’s daughter is a lawyer for Google, which prompted a watchdog to ask Pelosi to negotiate her recusal from antitrust oversight roles.

Who came to Lofgren’s defense? Eshoo and Khanna.

Three years later, Lofgren remains in these roles, which have helped her block efforts to rein in big tech – against the will of even her Silicon Valley constituents.

Pelosi’s 2023 financial disclosure shows that her husband owned between $16m and $80m in stocks and options in Amazon, Google, Microsoft and Nvidia...

Sunny Gandhi of the youth tech advocacy group Encode Justice, which co-sponsored the bill, told me: “When you tell the average person that tech giants are creating the most powerful tools in human history but resist simple measures to prevent catastrophic harm, their reaction isn’t just disbelief – it’s outrage. This isn’t just a policy disagreement; it’s a moral chasm between Silicon Valley and Main Street.”

Newsom just told us which of these he values more."

Fact check: John Deere says Trump’s story about how he saved US jobs with a tariff threat is fictional; CNN, October 16, 2024

CNN; Fact check: John Deere says Trump’s story about how he saved US jobs with a tariff threat is fictional

"When former President Donald Trump was challenged at a Tuesday event about the potential economic harms of his proposal for across-the-board tariffs on imported goods, Trump told what sounded like a tariff success story.

He said that in response to his threat to impose hefty tariffs on John Deere if the storied American farm equipment maker went ahead with a plan to move some production from the US to Mexico, the company had just announced it was likely abandoning that outsourcing plan.

Trump said: “Are you ready? John Deere, great company. They announced about a year ago they’re gonna build big plants outside of the United States. Right? They’re going to build them in Mexico … I said, ‘If John Deere builds those plants, they’re not selling anything into the United States.’ They just announced yesterday they’re probably not going to build the plants, OK? I kept the jobs here.”

But a search of news articles and corporate press releases showed nothing about any such John Deere announcement the day prior. And in response to Trump’s story, a John Deere spokesperson told The Wall Street Journal and Bloomberg News that it had not changed its plans or announced any such changes.

The Trump campaign did not respond to a CNN request for any evidence for the former president’s story.

Trump has told numerous fictional tales in recent weeks. Aside from the John Deere story, the Republican presidential nominee made at least 19 false claims at the Tuesday event, which was a public interview at the Economic Club of Chicago that was conducted by John Micklethwait, editor-in-chief of Bloomberg News...

Guns and the Capitol riot: Trump, speaking of rioters at the Capitol on January 6, 2021, repeated his false claim that “not one of those people had a gun.” It has been proven in court that multiple rioters had guns – in addition to stun guns, knives, chemical sprays and numerous other weapons...

The size of the Capitol riot: Trump correctly noted that the Washington, DC, rally he addressed prior to the Capitol riot was peaceful, but then wrongly described the size of the riot, saying, “I don’t know what you had – five, six, seven hundred people – go down to the Capitol.”

Trump’s figures are way off. The Justice Department said in an official update earlier this month that about 1,532 defendants had, so far, been federally charged with crimes associated with the attack on the Capitol. The FBI said in 2021 that “approximately 2,000 individuals are believed to have been involved with the siege” and the actual number might well be hundreds higher...

Who pays tariffs: Trump repeated his false claim that, through tariffs, “We got hundreds of billions of dollars just from China alone.” US importers make the actual tariff payments, not China, and study after study has found that Americans bore the overwhelming majority of the cost of Trump’s tariffs on China."

Wednesday, October 16, 2024

Columbia Cancer Surgeon Notches 5 More Retractions for Suspicious Data; The New York Times, October 16, 2024

The New York Times; Columbia Cancer Surgeon Notches 5 More Retractions for Suspicious Data

"The chief of a cancer surgery division at Columbia University this week had five research articles retracted and a sixth tagged with an editor’s note, underscoring concerns about research misconduct that have lately bedeviled Columbia as well as cancer labs at several other elite American universities.

With the latest retractions, the Columbia lab, led by Dr. Sam Yoon, has had more than a dozen studies pulled over suspicious results since The New York Times reported in February on data discrepancies in the lab’s work.

The retracted studies were among 26 articles by Dr. Yoon and a more junior collaborator that a scientific sleuth in Britain, Sholto David, revealed had presented images from one experiment as data from another, a tactic that can be used to massage or falsify the results of studies.

Dr. Yoon’s more junior collaborator, Changhwan Yoon, no longer works in the lab, Columbia said in response to questions on Wednesday. But the university has said little else about what, if anything, it has done to address the allegations.

Since the Times article in February, Dr. Yoon’s name has been changed from Sam Yoon to S. Sunghyun Yoon on a Columbia website advertising surgical treatment options."

SC's book ban regulation is in effect. School librarians are caught in the crossfire.; The Post and Courier, October 16, 2024

Anna B. Mitchell and Valerie Nava Mitchell, The Post and Courier; SC's book ban regulation is in effect. School librarians are caught in the crossfire.

Computer scientist speaks of effects of AI on humanity; Allied News, October 15, 2024

Hailey Rogenski, Allied News; Computer scientist speaks of effects of AI on humanity

"What role will we let artificial intelligence play in our lives, and what effect will AI have on religion and the world? Can it replace human roles that require empathy?

Dr. Derek Schuurman, a Christian computer scientist from Calvin University in Grand Rapids, Mich., delved into those issues Oct. 7 at Grove City College in the college’s Albert A. Hopeman Jr. Memorial Lecture in Faith & Technology.

Schuurman is a member of the American Scientific Affiliation and an adviser for AI and Faith, a contributor to the Christian Scholars Review blog, a columnist for the Christian Courier, the author of Shaping the Digital World: Faith, Culture and Computer Technology and a co-author of A Christian Field Guide to Technology for Engineers and Designers...

“I think at that point we have to get back to that question and say, ‘what does it mean to be human?’” Schuurman said. “What does it mean to be made in the image of God? What does that imply for certain types of relationships and work about having a human doing that, because we choose to have someone who can actually have empathy for us, someone whose words can be influenced and shaped by the Holy Spirit speaking into our lives. There’s certain roles that require empathy, care (and) wisdom.”

Schuurman said he thinks some roles that require this kind of empathy, such as being a pastor or teacher, will remain untouched by AI.

He said the best way to use AI is to maintain a “hybrid approach” where “people do what people do well and machines do what machines do well.”"

Houston-area library moves Indigenous history book to fiction section; Lonestar Live, October 14, 2024

Ileana Garnad, Lonestar Live; Houston-area library moves Indigenous history book to fiction section

"A Houston-area public library reclassified a nonfiction children’s book about Native American history as fiction, after the title was reviewed by citizens, not librarians.

“I can only assume it is because it is a telling of the history of Indigenous people that they do not approve of,” said Teresa Kenney, a Montgomery County resident and founder of the Village Books store.

In September, “Colonization and the Wampanoag Story,” by Linda Coombs, was challenged in Montgomery County libraries by an unknown person, according to public records obtained by Kenney. Per county policy, the book was reviewed by a group of five citizens who weren’t required to consult a librarian...

The group’s meetings are closed to the public, so it is unclear why the book was reclassified as fiction. Details about the reconsideration committee, including its members, are not available on the county and library system websites."

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

Evan Robinson-Johnson, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

The Alarming History Behind Trump’s “Bad Genes” Comments; The Hastings Center, October 15, 2024

Daphne O. Martschenko, The Hastings Center; The Alarming History Behind Trump’s “Bad Genes” Comments

"The former president’s latest comments about immigrants bringing “bad genes” into the United States are part of a longer, racialized history in which claims about genetic difference have been used to further social divisions, explain social inequalities, and justify racial violence. Specifically, such claims have been used to resist the abolition of slavery, prohibit interracial marriage, forcibly sterilize the poor and communities of color, restrict immigration, and even rationalize mass-shootings...

The scientific community has challenged pseudoscientific justifications for hate before. While scientists can be wary of getting involved in politics, our research has the potential to disprove the harmful ideas being wielded by political actors. However, it also carries the risk of being misused in support of such ideas.

We must make it harder for scientific research to be wielded by those looking to create social divisions. For instance, some scientists have recommended altering scientific figures so that they are harder to “meme-ify,” and do not convey the false message that humanity is made up of biologically distinct populations. As another example, scientists have taken up the difficult task of reimagining how biology is taught in schools. Research shows that teaching students about the complexity of genetics can reduce noxious and incorrect beliefs about race and genetics.

We also need to do a better job of understanding the perspectives of those we do not agree with or who don’t orbit in the same circle. Scientists aren’t trained in mediation and conflict resolution, research communication, or public engagement. But their work extends beyond the lab and into society where it has real impacts. The next generation of scientists ought to be trained in these things. Otherwise, we risk regress rather than progress."

Millions of people are creating nude images of pretty much anyone in minutes using AI bots in a ‘nightmarish scenario’; New York Post, October 15, 2024

 Brooke Kate, New York Post; Millions of people are creating nude images of pretty much anyone in minutes using AI bots in a ‘nightmarish scenario’

"Online chatbots are generating nude images of real people at users’ requests, prompting concern from experts who worry the explicit deepfakes will create “a very nightmarish scenario.”

A Wired investigation on the messaging app Telegram unearthed dozens of AI-powered chatbots that allegedly “create explicit photos or videos of people with only a couple clicks,” the outlet reported. Some “remove clothes” from images provided by users, according to Wired, while others say they can manufacture X-rated photos of people engaging in sexual activity.

The outlet estimated that approximately 4 million users per month take advantage of the deepfake capabilities from the chatbots, of which there were an estimated 50. Such generative AI bots promised to deliver “anything you want about the face or clothes of the photo you give me,” Wired reported."

Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them.; Wired, October 15, 2024

Megan Farokhmanesh, Lauren Goode, Wired; Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them.

How Do You Make Ethical Decisions? 15 Leaders Reveal Their Approaches; Forbes, October 15, 2024

 Bruce Weinstein, Ph.D., Forbes; How Do You Make Ethical Decisions? 15 Leaders Reveal Their Approaches

"In honor of this year’s Global Ethics Day, a celebratory event created by the Carnegie Council for Ethics in International Affairs, I asked 15 leaders from business, law, education and the arts how they make ethical decisions.

These successful professionals are from the U.S., Canada, the U.K., France, the United Arab Emirates, Hungary, Australia, and South Africa. This is how they responded...

Since Global Ethics Day was created by the Carnegie Council for Ethics in International Affairs, I give the last word to its president, Joel H. Rosenthal.

True north

“When considering a course of action, I ask: What is my true north? What am I willing (and not willing) to do to achieve a goal, and how might those whom I respect judge my actions? It’s impossible to get ethical choices right all the time, and I’ve found that it’s essential to balance conviction with humility in any decision-making process. Most importantly, I try to remain open to reflection and correction along the way.”

Joel H. Rosenthal, President, Carnegie Council for Ethics in International Affairs, New York, New York"

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”"

Tuesday, October 15, 2024

FEMA resumes door-to-door visits in North Carolina after threats tied to disinformation; AP, October 15, 2024

Makiya Seminera and Sarah Brumfield, AP; FEMA resumes door-to-door visits in North Carolina after threats tied to disinformation

"Federal disaster personnel have resumed door-to-door visits as part of their hurricane-recovery work in North Carolina, an effort temporarily suspended amid threats that prompted officials to condemn the spread of disinformation."

Trump Lashes Out at Live Fact-Checks During Disaster of an Interview; The New Republic, October 15, 2024

Ellie Quinlan Houghtaling, The New Republic; Trump Lashes Out at Live Fact-Checks During Disaster of an Interview

"Donald Trump’s sit-down interview Tuesday with the Economic Club of Chicago went completely off the rails as the Republican presidential nominee struggled to offer concrete answers to a business-minded crowd, and miraculously performed even worse as he was fact-checked live onstage."

‘Armed Militias’ Claims In N.C. Driven By Social Media Misinformation; Forbes, October 14, 2024

Peter Suciu, Forbes; ‘Armed Militias’ Claims In N.C. Driven By Social Media Misinformation

""The amount of misinformation and disinformation we've seen around the recent hurricanes and help efforts is a strong example of how powerful those effects have become," explained Dr. Cliff Lampe, professor of information and associate dean for academic affairs in the School of Information at the University of Michigan.

Misinformation began even before Hurricane Helene made landfall, with the dubious claims that government officials were controlling the weather and directing the storm to hit "red states." The misinformation only intensified after the storm left a path of destruction.

"Over the last weeks we've seen death threats against meteorologists and now first responders in emergency situations," said Lampe. "There are a few things that are challenging about this. One is that belief persistence, which is the effect where people tend to keep believing what they have believed, makes it so that new information often doesn't make a difference in changing people's minds. We tend to think that good information will swamp out bad information, but unfortunately, it's not that simple."

Social media can amplify such misinformation in a way that was previously impossible.

"We saw that a small group of people acting on misinformation can disrupt services of the majority of people with a need," added Lampe.

"False information, especially on social media platforms, spreads incredibly fast. It's crucial to distinguish between misinformation and disinformation," said Rob Lalka, professor at Tulane University's Freeman School of Business and author of The Venture Alchemists: How Big Tech Turned Profits Into Power.

"Misinformation refers to false, incomplete, or inaccurate information shared without harmful intent, while disinformation is deliberately false information designed to deceive," Lalka continued...

"New technologies are making it increasingly hard to tell what's real and what's fake," said Lalka. "We now live in an era where Artificial Intelligence can generate lifelike images and audio, and these powerful tools should prompt us all to pause and consider whether a source is truly trustworthy.""

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

 Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:..."

The Limitation Effect: A White Paper; October 2024

New York University and University of California, San Diego; The Limitation Effect: A White Paper

Experiences of State Policy-Driven Education Restriction in Florida's Public Schools

"How can a teacher discuss Jim Crow laws without breaking state law? Should a librarian stop ordering books with LGBTQ+ characters? A new white paper by UC San Diego and NYU researchers reveals the experiences of K-12 educators and parents in Florida grappling with state policies and policy effects restricting access to instruction, books, courses, clubs, professional development, and basic student supports."

This threat hunter chases U.S. foes exploiting AI to sway the election; The Washington Post, October 13, 2024

The Washington Post; This threat hunter chases U.S. foes exploiting AI to sway the election

"Ben Nimmo, the principal threat investigator for the high-profile AI pioneer, had uncovered evidence that Russia, China and other countries were using its signature product, ChatGPT, to generate social media posts in an effort to sway political discourse online. Nimmo, who had only started at OpenAI in February, was taken aback when he saw that government officials had printed out his report, with key findings about the operations highlighted and tabbed.

That attention underscored Nimmo’s place at the vanguard in confronting the dramatic boost that artificial intelligence can provide to foreign adversaries’ disinformation operations. In 2016, Nimmo was one of the first researchers to identify how the Kremlin interfered in U.S. politics online. Now tech companies, government officials and other researchers are looking to him to ferret out foreign adversaries who are using OpenAI’s tools to stoke chaos in the tenuous weeks before Americans vote again on Nov. 5.

So far, the 52-year-old Englishman says Russia and other foreign actors are largely “experimenting” with AI, often in amateurish and bumbling campaigns that have limited reach with U.S. voters. But OpenAI and the U.S. government are bracing for Russia, Iran and other nations to become more effective with AI, and their best hope of parrying that is by exposing and blunting operations before they gain traction."

Kamala Harris’s last mile; The Ink, October 15, 2024

Anand Giridharadas, The Ink; Kamala Harris’s last mile

"For all of the vice president’s success thus far, it is important to name the greatest risk to her candidacy, in the hope of avoiding it: In the homestretch, Democrats cannot let themselves be defined as the Whole Foods party — a party that speaks convincingly to upscale and educated and socially conscious and politically engaged and often-voting Americans, but doesn’t similarly rouse working-class voters of various stripes and more disaffected, jaded, demoralized voters.

In the last mile of this election, so many of the remaining pool of undecided voters — or, more importantly, people undecided about voting — have simply lost faith that anyone will change anything for the better in their lifetime.

It is beyond ironic, beyond ridiculous even, that some people who feel this way, millions of them, are attracted to Donald Trump and J.D. Vance, two pillars of the establishment who are running for president on a platform that would only make the richest and most powerful Americans more rich and more powerful.

But it is happening, and it must be stopped...

This outreach must be backed by policy. At her best, Harris has embraced big ideas that would change the landscape of the country, from housing construction to the care economy. Go further. Be sweeping. Propose the kind of simple-to-understand, sweeping, universal policies that make people thirst for the future.

And, finally, be everywhere, all at once. It has been a relief to see Harris saturate the airwaves in recent days, after an earlier reticence.

There is so much reason not to believe in America in 2024. If you want people to believe again, especially the people who are right now still on the fence, you need to tell them a story that not only persuades them but all but rewires their brain. You need to help them make new meaning of what they have seen and heard and felt.

This will require being everywhere all at once, in their heads and hearts, morning, noon, and night. It doesn’t matter if every interview isn’t perfect. Show them your power, your life force, the life force that proposes to smash obstacles and change their lives. Do whatever media most helps you reach them. It doesn’t need to be the old guard. But people are looking for whether you are unafraid, because if you are, it might give you what it takes to help them."