Thursday, October 17, 2024

Fact check: John Deere says Trump’s story about how he saved US jobs with a tariff threat is fictional; CNN, October 16, 2024

CNN; Fact check: John Deere says Trump’s story about how he saved US jobs with a tariff threat is fictional

"When former President Donald Trump was challenged at a Tuesday event about the potential economic harms of his proposal for across-the-board tariffs on imported goods, Trump told what sounded like a tariff success story.

He said that in response to his threat to impose hefty tariffs on John Deere if the storied American farm equipment maker went ahead with a plan to move some production from the US to Mexico, the company had just announced it was likely abandoning that outsourcing plan.

Trump said: “Are you ready? John Deere, great company. They announced about a year ago they’re gonna build big plants outside of the United States. Right? They’re going to build them in Mexico … I said, ‘If John Deere builds those plants, they’re not selling anything into the United States.’ They just announced yesterday they’re probably not going to build the plants, OK? I kept the jobs here.”

But a search of news articles and corporate press releases showed nothing about any such John Deere announcement the day prior. And in response to Trump’s story, a John Deere spokesperson told The Wall Street Journal and Bloomberg News that it had not changed its plans or announced any such changes.

The Trump campaign did not respond to a CNN request for any evidence for the former president’s story.

Trump has told numerous fictional tales in recent weeks. Aside from the John Deere story, the Republican presidential nominee made at least 19 false claims at the Tuesday event, which was a public interview at the Economic Club of Chicago that was conducted by John Micklethwait, editor-in-chief of Bloomberg News...

Guns and the Capitol riot: Trump, speaking of rioters at the Capitol on January 6, 2021, repeated his false claim that “not one of those people had a gun.” It has been proven in court that multiple rioters had guns – in addition to stun guns, knives, chemical sprays and numerous other weapons...

The size of the Capitol riot: Trump correctly noted that the Washington, DC, rally he addressed prior to the Capitol riot was peaceful, but then wrongly described the size of the riot, saying, “I don’t know what you had – five, six, seven hundred people – go down to the Capitol.”

Trump’s figures are way off. The Justice Department said in an official update earlier this month that about 1,532 defendants had, so far, been federally charged with crimes associated with the attack on the Capitol. The FBI said in 2021 that “approximately 2,000 individuals are believed to have been involved with the siege” and the actual number might well be hundreds higher...

Who pays tariffs: Trump repeated his false claim that, through tariffs, “We got hundreds of billions of dollars just from China alone.” US importers make the actual tariff payments, not China, and study after study has found that Americans bore the overwhelming majority of the cost of Trump’s tariffs on China."

Wednesday, October 16, 2024

Columbia Cancer Surgeon Notches 5 More Retractions for Suspicious Data; The New York Times, October 16, 2024

The New York Times; Columbia Cancer Surgeon Notches 5 More Retractions for Suspicious Data

"The chief of a cancer surgery division at Columbia University this week had five research articles retracted and a sixth tagged with an editor’s note, underscoring concerns about research misconduct that have lately bedeviled Columbia as well as cancer labs at several other elite American universities.

With the latest retractions, the Columbia lab, led by Dr. Sam Yoon, has had more than a dozen studies pulled over suspicious results since The New York Times reported in February on data discrepancies in the lab’s work.

The retracted studies were among 26 articles by Dr. Yoon and a more junior collaborator that a scientific sleuth in Britain, Sholto David, revealed had presented images from one experiment as data from another, a tactic that can be used to massage or falsify the results of studies.

Dr. Yoon’s more junior collaborator, Changhwan Yoon, no longer works in the lab, Columbia said in response to questions on Wednesday. But the university has said little else about what, if anything, it has done to address the allegations.

Since the Times article in February, Dr. Yoon’s name has been changed from Sam Yoon to S. Sunghyun Yoon on a Columbia website advertising surgical treatment options."

SC's book ban regulation is in effect. School librarians are caught in the crossfire.; The Post and Courier, October 16, 2024

Anna B. Mitchell and Valerie Nava Mitchell, The Post and Courier; SC's book ban regulation is in effect. School librarians are caught in the crossfire

Computer scientist speaks of effects of AI on humanity; Allied News, October 15, 2024

HAILEY ROGENSKI, Allied News; Computer scientist speaks of effects of AI on humanity

"What role will we let artificial intelligence play in our lives, and what effect will AI have on religion and the world? Can it replace human roles that require empathy?

Dr. Derek Schuurman, a Christian computer scientist from Calvin University in Grand Rapids, Mich., delved into those issues Oct. 7 at Grove City College in the college’s Albert A. Hopeman Jr. Memorial Lecture in Faith & Technology.

Schuurman is a member of the American Scientific Affiliation and an adviser for AI and Faith, a contributor to the Christian Scholars Review blog, a columnist for the Christian Courier, the author of Shaping the Digital World: Faith, Culture and Computer Technology and a co-author of A Christian Field Guide to Technology for Engineers and Designers...

“I think at that point we have to get back to that question and say, ‘what does it mean to be human?’” Schuurman said. “What does it mean to be made in the image of God? What does that imply for certain types of relationships and work about having a human doing that, because we choose to have someone who can actually have empathy for us, someone whose words can be influenced and shaped by the Holy Spirit speaking into our lives. There’s certain roles that require empathy, care (and) wisdom.”

Schuurman said he thinks some roles that require this kind of empathy, such as being a pastor or teacher, will remain untouched by AI.

He said the best way to use AI is to maintain a “hybrid approach” where “people do what people do well and machines do what machines do well.”"

Houston-area library moves Indigenous history book to fiction section; Lonestar Live, October 14, 2024

Ileana Garnad, Lonestar Live; Houston-area library moves Indigenous history book to fiction section

"A Houston-area public library reclassified a nonfiction children’s book about Native American history as fiction, after the title was reviewed by citizens, not librarians.

“I can only assume it is because it is a telling of the history of Indigenous people that they do not approve of,” said Teresa Kenney, a Montgomery County resident and founder of the Village Books store.

In September, “Colonization and the Wampanoag Story,” by Linda Coombs, was challenged in Montgomery County libraries by an unknown person, according to public records obtained by Kenney. Per county policy, the book was reviewed by a group of five citizens who weren’t required to consult a librarian...

The group’s meetings are closed to the public, so it is unclear why the book was reclassified as fiction. Details about the reconsideration committee, including its members, are not available on the county and library system websites."

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

EVAN ROBINSON-JOHNSON, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

The Alarming History Behind Trump’s “Bad Genes” Comments; The Hastings Center, October 15, 2024

Daphne O. Martschenko, The Hastings Center; The Alarming History Behind Trump’s “Bad Genes” Comments

"The former president’s latest comments about immigrants bringing “bad genes” into the United States are part of a longer, racialized history in which claims about genetic difference have been used to further social divisions, explain social inequalities, and justify racial violence. Specifically, such claims have been used to resist the abolition of slavery, prohibit interracial marriage, forcibly sterilize the poor and communities of color, restrict immigration, and even rationalize mass-shootings...

The scientific community has challenged pseudoscientific justifications for hate before. While scientists can be wary of getting involved in politics, our research has the potential to disprove the harmful ideas being wielded by political actors. However, it also carries the risk of being misused in support of such ideas.

We must make it harder for scientific research to be wielded by those looking to create social divisions. For instance, some scientists have recommended altering scientific figures so that they are harder to “meme-ify,” and do not convey the false message that humanity is made up of biologically distinct populations. As another example, scientists have taken up the difficult task of reimagining how biology is taught in schools. Research shows that teaching students about the complexity of genetics can reduce noxious and incorrect beliefs about race and genetics.

We also need to do a better job of understanding the perspectives of those we do not agree with or who don’t orbit in the same circle. Scientists aren’t trained in mediation and conflict resolution, research communication, or public engagement. But their work extends beyond the lab and into society where it has real impacts. The next generation of scientists ought to be trained in these things. Otherwise, we risk regress rather than progress."

Millions of people are creating nude images of pretty much anyone in minutes using AI bots in a ‘nightmarish scenario’; New York Post, October 15, 2024

Brooke Kate, New York Post; Millions of people are creating nude images of pretty much anyone in minutes using AI bots in a ‘nightmarish scenario’

"Online chatbots are generating nude images of real people at users’ requests, prompting concern from experts who worry the explicit deepfakes will create “a very nightmarish scenario.”

A Wired investigation on the messaging app Telegram unearthed dozens of AI-powered chatbots that allegedly “create explicit photos or videos of people with only a couple clicks,” the outlet reported. Some “remove clothes” from images provided by users, according to Wired, while others say they can manufacture X-rated photos of people engaging in sexual activity.

The outlet estimated that approximately 4 million users per month take advantage of the deepfake capabilities from the chatbots, of which there were an estimated 50. Such generative AI bots promised to deliver “anything you want about the face or clothes of the photo you give me,” Wired reported."

Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them.; Wired, October 15, 2024

Megan Farokhmanesh, Lauren Goode, Wired; Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them.

How Do You Make Ethical Decisions? 15 Leaders Reveal Their Approaches; Forbes, October 15, 2024

Bruce Weinstein, Ph.D., Forbes; How Do You Make Ethical Decisions? 15 Leaders Reveal Their Approaches

"In honor of this year’s Global Ethics Day, a celebratory event created by the Carnegie Council for Ethics in International Affairs, I asked 15 leaders from business, law, education and the arts how they make ethical decisions.

These successful professionals are from the U.S., Canada, the U.K., France, the United Arab Emirates, Hungary, Australia, and South Africa. This is how they responded...

Since Global Ethics Day was created by the Carnegie Council for Ethics in International Affairs, I give the last word to its president, Joel H. Rosenthal.

True north

“When considering a course of action, I ask: What is my true north? What am I willing (and not willing) to do to achieve a goal, and how might those whom I respect judge my actions? It’s impossible to get ethical choices right all the time, and I’ve found that it’s essential to balance conviction with humility in any decision-making process. Most importantly, I try to remain open to reflection and correction along the way.”

Joel H. Rosenthal, President, Carnegie Council for Ethics in International Affairs, New York, New York"

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”"

Tuesday, October 15, 2024

FEMA resumes door-to-door visits in North Carolina after threats tied to disinformation; AP, October 15, 2024

MAKIYA SEMINERA AND SARAH BRUMFIELD, AP; FEMA resumes door-to-door visits in North Carolina after threats tied to disinformation

"Federal disaster personnel have resumed door-to-door visits as part of their hurricane-recovery work in North Carolina, an effort temporarily suspended amid threats that prompted officials to condemn the spread of disinformation."

Trump Lashes Out at Live Fact-Checks During Disaster of an Interview; The New Republic, October 15, 2024

Ellie Quinlan Houghtaling, The New Republic; Trump Lashes Out at Live Fact-Checks During Disaster of an Interview

"Donald Trump’s sit-down interview Tuesday with the Economic Club of Chicago went completely off the rails as the Republican presidential nominee struggled to offer concrete answers to a business-minded crowd, and miraculously performed even worse as he was fact-checked live onstage."

‘Armed Militias’ Claims In N.C. Driven By Social Media Misinformation; Forbes, October 14, 2024

Peter Suciu, Forbes; ‘Armed Militias’ Claims In N.C. Driven By Social Media Misinformation

""The amount of misinformation and disinformation we've seen around the recent hurricanes and help efforts is a strong example of how powerful those effects have become," explained Dr. Cliff Lampe, professor of information and associate dean for academic affairs in the School of Information at the University of Michigan.

Misinformation began even before Hurricane Helene made landfall, with the dubious claims that government officials were controlling the weather and directing the storm to hit "red states." The misinformation only intensified after the storm left a path of destruction.

"Over the last weeks we've seen death threats against meteorologists and now first responders in emergency situations," said Lampe. "There are a few things that are challenging about this. One is that belief persistence, which is the effect where people tend to keep believing what they have believed, makes it so that new information often doesn't make a difference in changing people's minds. We tend to think that good information will swamp out bad information, but unfortunately, it's not that simple."

Social media can amplify such misinformation in a way that was previously impossible.

"We saw that a small group of people acting on misinformation can disrupt services of the majority of people with a need," added Lampe.

"False information, especially on social media platforms, spreads incredibly fast. It's crucial to distinguish between misinformation and disinformation," said Rob Lalka, professor at Tulane University's Freeman School of Business and author of The Venture Alchemists: How Big Tech Turned Profits Into Power.

"Misinformation refers to false, incomplete, or inaccurate information shared without harmful intent, while disinformation is deliberately false information designed to deceive," Lalka continued...

"New technologies are making it increasingly hard to tell what's real and what's fake," said Lalka. "We now live in an era where Artificial Intelligence can generate lifelike images and audio, and these powerful tools should prompt us all to pause and consider whether a source is truly trustworthy.""

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include..."

The Limitation Effect: A White Paper; October 2024

New York University and University of California - San Diego, The Limitation Effect: A White Paper

Experiences of State Policy-Driven Education Restriction in Florida's Public Schools

"How can a teacher discuss Jim Crow laws without breaking state law? Should a librarian stop ordering books with LGBTQ+ characters? A new white paper by UC San Diego and NYU researchers reveals the experiences of K-12 educators and parents in Florida grappling with state policies and policy effects restricting access to instruction, books, courses, clubs, professional development, and basic student supports."

This threat hunter chases U.S. foes exploiting AI to sway the election; The Washington Post, October 13, 2024

The Washington Post; This threat hunter chases U.S. foes exploiting AI to sway the election

"Ben Nimmo, the principal threat investigator for the high-profile AI pioneer, had uncovered evidence that Russia, China and other countries were using its signature product, ChatGPT, to generate social media posts in an effort to sway political discourse online. Nimmo, who had only started at OpenAI in February, was taken aback when he saw that government officials had printed out his report, with key findings about the operations highlighted and tabbed.

That attention underscored Nimmo’s place at the vanguard in confronting the dramatic boost that artificial intelligence can provide to foreign adversaries’ disinformation operations. In 2016, Nimmo was one of the first researchers to identify how the Kremlin interfered in U.S. politics online. Now tech companies, government officials and other researchers are looking to him to ferret out foreign adversaries who are using OpenAI’s tools to stoke chaos in the tenuous weeks before Americans vote again on Nov. 5.

So far, the 52-year-old Englishman says Russia and other foreign actors are largely “experimenting” with AI, often in amateurish and bumbling campaigns that have limited reach with U.S. voters. But OpenAI and the U.S. government are bracing for Russia, Iran and other nations to become more effective with AI, and their best hope of parrying that is by exposing and blunting operations before they gain traction."

Kamala Harris’s last mile; The Ink, October 15, 2024

ANAND GIRIDHARADAS, The Ink; Kamala Harris’s last mile

"For all of the vice president’s success thus far, it is important to name the greatest risk to her candidacy, in the hope of avoiding it: In the homestretch, Democrats cannot let themselves be defined as the Whole Foods party — a party that speaks convincingly to upscale and educated and socially conscious and politically engaged and often-voting Americans, but doesn’t similarly rouse working-class voters of various stripes and more disaffected, jaded, demoralized voters.

In the last mile of this election, so many of the remaining pool of undecided voters — or, more importantly, people undecided about voting — have simply lost faith that anyone will change anything for the better in their lifetime.

It is beyond ironic, beyond ridiculous even, that some people who feel this way, millions of them, are attracted to Donald Trump and J.D. Vance, two pillars of the establishment who are running for president on a platform that would only make the richest and most powerful Americans more rich and more powerful.

But it is happening, and it must be stopped...

This outreach must be backed by policy. At her best, Harris has embraced big ideas that would change the landscape of the country, from housing construction to the care economy. Go further. Be sweeping. Propose the kind of simple-to-understand, sweeping, universal policies that make people thirst for the future.

And, finally, be everywhere, all at once. It has been a relief to see Harris saturate the airwaves in recent days, after an earlier reticence.

There is so much reason not to believe in America in 2024. If you want people to believe again, especially the people who are right now still on the fence, you need to tell them a story that not only persuades them but all but rewires their brain. You need to help them make new meaning of what they have seen and heard and felt.

This will require being everywhere all at once, in their heads and hearts, morning, noon, and night. It doesn’t matter if every interview isn’t perfect. Show them your power, your life force, the life force that proposes to smash obstacles and change their lives. Do whatever media most helps you reach them. It doesn’t need to be the old guard. But people are looking for whether you are unafraid, because if you are, it might give you what it takes to help them."

Monday, October 14, 2024

Why there’s so much disinformation out there now, and how you can combat it; WTOP, October 12, 2024

John Domen, WTOP; Why there’s so much disinformation out there now, and how you can combat it

"Most people, regardless of their political leaning, will acknowledge social media has become especially riddled with things that just aren’t true, especially in the world of politics where allegations follow candidates from all parties. So the supposed eyewitness coming to you from a disaster zone, or the supposed political insider, should probably be treated with some skepticism.

“When you see content that’s particularly disturbing or makes you very angry, a thing to keep in mind there is maybe making you angry was the point,” said Buntain. “This may not necessarily be legitimate content. It might be. It’s certainly plausible that it might be.”

“But when you start to see content that makes you angry, that tends to decrease your ability to do other things or think rationally about some of this content. Then you start to amplify it. Then you start to engage with it,” he added.

Triggering that response is often the real goal of social media algorithms. But if someone you know has been fooled, how do you convince them of that?

Because, as the saying goes, it can be harder to convince someone they’ve been fooled than it is to actually fool someone. But in some cases, at least deep down, truth might not be the real reason they’re re-sharing content that’s made up.

“They’re sharing it because it had some emotional resonance with them,” said Buntain. “And by just telling them that they … shared bad content, you’re sort of minimizing or ignoring the sort of emotional aspect that got them there, and then that’s not a good recipe for civil engagement around a particular topic.”"

Trump wages campaign against real-time fact checks; The Washington Post, October 14, 2024

The Washington Post; Trump wages campaign against real-time fact checks

"Donald Trump and his campaign have waged an aggressive campaign against fact-checking in recent months, pushing TV networks, journalism organizations and others to abandon the practice if they hope to interact with Trump...

The moves are the latest example of Trump’s long-held resistance to being called to account for his falsehoods, which have formed the bedrock of his political message for years. Just in recent weeks, for example, Trump has seized on fabricated tales of migrants eating pets and Venezuelan gangs overtaking cities in pushing his anti-immigration message as he seeks a second term in office.

Lucas Graves, a journalism and mass communications professor at the University of Wisconsin at Madison, said that publicly chafing at fact-checking has become a form of tribalism among some Republicans.

“Within the political establishment on the right, it is now considered quite legitimate — and quite legitimate to say publicly and openly — that you disapprove of fact-checking,” said Graves, author of “Deciding What’s True: The Rise of Political Fact-Checking in American Journalism.”"

Trump Suggests Using National Guard Against ‘Enemy from Within’ on Election Day; Wall Street Journal, October 14, 2024

C. Ryan Barber and Jimmy Vielkind, Wall Street Journal; Trump Suggests Using National Guard Against ‘Enemy from Within’ on Election Day

"Donald Trump suggested deploying the National Guard or military to respond to what he termed the “enemy from within” on Election Day, saying in an interview that aired Sunday that he was concerned about the chaos wrought by “radical left lunatics.”"

ScienceAdviser: Shifting from harm to resilience: ScienceAdviser honors Indigenous Peoples’ Day; Science, October 14, 2024

Science; ScienceAdviser: Shifting from harm to resilience

"Today, in honor of Indigenous Peoples’ Day, Science Staff Writer Rodrigo Pérez Ortega speaks with Diné genetic epidemiologist Krystal Tsosie about the holiday and the importance of Indigenous data sovereignty. The rest of this edition of ScienceAdviser is centered on research that is relevant to and/or being conducted by Indigenous scientists and communities...

Your work has focused on Indigenous data sovereignty. Can you tell me more about the current efforts pushing for Native tribes to have control over their own data?

One recent effort is the #DataBack movement, which is about reclaiming control over Indigenous data, specifically genomic and biological data that have been taken and stored without our consent. My colleague, Keolu Fox, and I have been advocating for Indigenous data sovereignty. We’ve even made stickers to raise awareness, and I love seeing them on water bottles and in public spaces. It’s a small, symbolic way to promote the idea that Indigenous data should be returned to the communities it belongs to."

Meteorologists Face Harassment and Death Threats Amid Hurricane Disinformation; The New York Times, October 14, 2024

The New York Times; Meteorologists Face Harassment and Death Threats Amid Hurricane Disinformation

"Meteorologists’ role of delivering lifesaving weather forecasts and explaining climate science sometimes makes them targets for harassment, and this kind of abuse has been happening for years, weather experts said. But amid the conspiracy theories and falsehoods that have spiraled online after Hurricanes Helene and Milton, they say the attacks and threats directed at them have reached new heights."

Sunday, October 13, 2024

Kip Currier: Character-Washing As Complicity?: Media Decision-Making, the Silence of Betsy DeVos, and Ethical Responsibilities. October 13, 2024

In the sixth paragraph of the October 12, 2024 New York Times article "A Frustrated Trump Lashes Out Behind Closed Doors Over Money", the reporters -- including Jonathan Swan and Maggie Haberman -- state that Donald Trump "disparaged Vice President Kamala Harris as" the epithet he used, a slur for mentally challenged persons. The article describes a Trump Tower dinner Trump hosted for billionaire donors on September 29, 2024. Among the attendees were "Betsy DeVos, the billionaire former education secretary under Mr. Trump, and her husband, Dick". The reporters also cite a significant number of corroborators of "the rant": "seven people with knowledge of the meal who spoke on the condition of anonymity to discuss private conversations".

I have some questions for Ms. DeVos later, but first, my questions for the New York Times reporters and the newspaper employees who often create the headlines for the stories that the reporters write:

  • Why wasn't the epithet mentioned in the headline of the article? This is arguably a shockingly noteworthy event that in the past would likely have disqualified any other presidential candidate. Yet the slur isn't telegraphed for the reader; the lede is buried in the sixth paragraph. Even the sub-headline -- "Donald J. Trump is feeling aggrieved, unappreciated by donors and fenced in by security concerns in the final stretch of the race." -- fails to mention anything about the disparaging term that appears later in the article.

  • Who made these decisions within the New York Times, and why? Was there any debate about these editorial decisions? 

  • Why is the comments function not turned on for this article? I wanted to see what other readers thought about the article and whether anyone else had opinions on the editorial decisions made. But the comments feature was not enabled as of the morning of October 13, 2024.

  • Did the persons responsible for this story engage in character-washing? Did they downplay the disparaging term used, for the sake of "journalistic objectivity" or for other reasons?

  • What are the ethical standards upon which you based your editorial decisions for this article? 

Now, my questions for Ms. DeVos:

You were the head of the Education Department for the Trump administration until you resigned the day after the January 6, 2021 insurrection at the U.S. Capitol. Your January 7 resignation letter states, in pertinent part:

We should be highlighting and celebrating your Administration's many accomplishments on behalf of the American people. Instead, we are left to clean up the mess caused by violent protests overrunning the U.S. Capitol in an attempt to undermine the people's business. That behavior was unconscionable for our country. There is no mistaking the impact your rhetoric had on the situation, and it is the inflection point for me. 

Impressionable children are watching all of this, and they are learning from us. I believe we each have a moral obligation to exercise good judgment and model the behavior we hope they would emulate. They must know from us that America is greater than what transpired yesterday. Today, I resign from my position, effective, Friday, January 8, in support of the oath I took to our Constitution, our people, and our freedoms.

In light of what you wrote more than three years ago, regarding "impressionable children" and the "moral obligation" we each have "to exercise good judgment and model the behavior we hope they would emulate": 

  • Why did you not walk out of the September 29, 2024 dinner when Trump used a term that disparages developmentally challenged individuals? 

  • What are the impacts that Trump's corrosive statements have on not only the communities that he disparages but also on our civil discourse and democracy?

  • Did you not -- and do you not still -- have a "moral obligation" to model words and actions that uplift people rather than demean them, and to speak up when Trump uses offensive language? 

  • What do you think Trump's statements say about his character?

  • What does it say about your character that you continue to associate with a person who uses slurs and demeaning words?

  • What are children learning from the words that Trump uses and from the silence of persons like yourself who fail to call out disparaging language? 

  • Why should the American people -- and the people of the world -- view you as a person who has the moral backbone and leadership competencies to continue to speak about education and the well-being of children?

Art Collective Behind Viral Image of Kamala Harris Sues for Copyright Infringement; artnet, October 11, 2024

Jo Lawson-Tancred , artnet; Art Collective Behind Viral Image of Kamala Harris Sues for Copyright Infringement

"A lawsuit filed by Good Trubble in a California district court on October 10 alleges that Irem Erdem of Round Rock, Texas, deliberately committed copyright infringement because of the image’s “widespread dissemination” online.

The digitally-created artwork designed by Bria Goeller for Good Trubble is titled That Little Girl Was Me. It was released on October 20, 2020, and went viral shortly after the last U.S. presidential election in November 2020, when Harris became the first Black and South Asian woman to be elected vice president. The image can be bought as a print or on t-shirts and other products on Good Trubble’s website, including a new version featuring the White House in celebration of Harris’s current bid for the presidency.

The image pairs the figure of Harris with the silhouette of activist Ruby Bridges as a young girl. It quotes from Norman Rockwell‘s iconic 1964 painting The Problem We All Live With, which depicts the historic event of a six-year-old Bridges being escorted by four deputy U.S. marshals into an all-white public school during the New Orleans school desegregation crisis of 1960. This measure was taken to protect her from the threat of violence, which is hinted at by a racial slur and the splatter of thrown tomatoes scrawled on the wall behind her."

Saturday, October 12, 2024

2024 Tech Ethics Symposium: Coming October 17-18!; Duquesne University, October 17-18, 2024

 Duquesne University; 2024 Tech Ethics Symposium: Coming October 17-18!; How is AI Transforming Our Communities?

"The Grefenstette Center for Ethics in Science, Technology, and Law will host the fifth annual Tech Ethics Symposium: “How is AI Transforming Our Communities?” This two-day symposium, co-sponsored by the Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business, will focus on how generative AI is transforming our daily lives and our communities. It will also explore how AI has already changed our region and will continue to alter our world in the next decade.

How do major stakeholders like journalists, educators, and tech workers use AI to shape our community? How have professional communities in tech, journalism, and education been impacted already by AI? What is the role of politics in responding to AI’s influence on, and through, these impactful stakeholder communities? What has AI changed for communities of faith, artists, people with disabilities, and historically marginalized communities? What can each of us do to utilize – or avoid – AI to ensure strong, healthy human communities?"

Man learns he’s being dumped via “dystopian” AI summary of texts; Ars Technica, October 10, 2024

BENJ EDWARDS, Ars Technica; Man learns he’s being dumped via “dystopian” AI summary of texts

"On Wednesday, NYC-based software developer Nick Spreen received a surprising alert on his iPhone 15 Pro, delivered through an early test version of Apple's upcoming Apple Intelligence text message summary feature. "No longer in a relationship; wants belongings from the apartment," the AI-penned message reads, summing up the content of several separate breakup texts from his girlfriend.

Spreen shared a screenshot of the AI-generated message in a now-viral tweet on the X social network, writing, "for anyone who’s wondered what an apple intelligence summary of a breakup text looks like."...

Spreen's message is the first time we've seen an AI-mediated relationship breakup, but it likely won't be the last." 

Notre Dame to explore faith-based ethical uses of AI; Crux, October 11, 2024

John Lavenburg, Crux; Notre Dame to explore faith-based ethical uses of AI

"About five months after Pope Francis spoke of the responsibility political leaders have to ensure that artificial intelligence is used ethically, the University of Notre Dame has announced that it will develop faith-based frameworks for ethical uses of the technology.

Notre Dame, one of the preeminent Catholic universities in the United States, located in South Bend, Indiana, announced on Oct. 10 that it has been awarded a $539,000 grant from Lilly Endowment Inc. to develop the frameworks – a process that will begin with a one-year planning project.

The development of the frameworks will be led by the Notre Dame Institute for Ethics and the Common Good. Meghan Sullivan, the institute’s director, said that “this is a pivotal moment for technology ethics.”

“[Artificial General Intelligence] is developing quickly and has the potential to change our economies, our systems of education and the fabric of our social lives,” Sullivan, who is also the university’s Wilsey Family College Professor of Philosophy, said in a statement. “We believe that the wisdom of faith traditions can make a significant contribution to the development of ethical frameworks for AGI.”

According to an announcement from the university, the one-year planning project to begin the process of developing the frameworks will engage and build a network of leaders in higher education and technology, as well as those of different faiths to broach the topic of ethical uses of AI, and eventually create the faith-based ethical frameworks.

“This project will encourage broader dialogue about the role that concepts such as dignity, embodiment, love, transcendence and being created in the image of God should play in how we understand and use this technology,” Sullivan said. “These concepts – as the bedrock of many faith-based traditions – are vital for how we advance the common good in the era of AGI.”

The project will culminate in September 2025 with a conference that will focus on the most pressing faith-based issues relating to the proliferation of AGI and provide training and networking opportunities for leaders who attend, according to the university."