
Saturday, November 1, 2025

DOJ faces ethics nightmare with Trump bid for $230M settlement; The Hill, October 31, 2025

REBECCA BEITSCH, The Hill; DOJ faces ethics nightmare with Trump bid for $230M settlement


[Kip Currier: This real-life "nightmare" scenario is akin to a hypothetical law school exam fact pattern with scores of ethics issues for law students to identify and discuss. Would that it were a fictitious set of facts.

If Trump's former personal attorneys, now in the top DOJ leadership, will not recuse themselves despite genuine conflicts of interest and appearances of impropriety, will the state and federal bar associations -- which license these attorneys and hold them to annual continuing legal and ethics education requirements to remain in good standing -- step in to scrutinize these lawyers' potential ethical lapses?

These unprecedented actions by Trump must not be treated as normal. Similarly, if Trump's former personal attorneys approve Trump's attempt to "shake down" the federal government and American taxpayers, their ethically dubious actions as DOJ leaders and officers of the court must not be normalized by the organizations charged with enforcing ethical standards for all licensed attorneys.

Moreover, approval of this settlement would be damaging to the rule of law and to public trust in the rule of law. If the most powerful person on the planet can demand that an organization -- whose leadership reports to him -- pay out a "settlement" for lawfully conducted actions and proceedings in a prior administration, what does that say about the state of justice in the U.S.? I posit that it would say the justice system has been utterly corrupted and is no longer subject to the equal application of its laws and ethical standards. No person is, or should be, above the law in our American system of government and checks and balances. Not even the U.S. President, despite the Roberts Court's controversial July 2024 ruling in Trump v. U.S., which recognized absolute and presumptive Presidential immunity in certain spheres.

Finally, a few words about "speaking out" and "standing up". It is vital for those in leadership positions to call out actions like the ones at hand, which arguably undermine the rule of law and incrementally move this country away from democratic governance and toward an autocratic nation state like Russia. I searched for and could find no statement by the American Bar Association (ABA) on this matter -- a matter clearly relevant to its membership, of which I count myself a member.

Will the ABA and other legal organizations share their voices on these matters that have such far-reaching implications for the rule of law and our nearly 250-year democratic experiment?

The paperback version of my Bloomsbury book, Ethics, Information, and Technology, becomes available on November 13. I intentionally included a substantial professional and character ethics section at the outset of the book because those principles are so integral to how we conduct ourselves in all areas of our lives: ethics precepts and values like integrity, attribution, truthfulness and avoidance of misrepresentation, transparency, accountability, and disclosure of conflicts of interest, as well as recusal when we have conflicts of interest.]


[Excerpt]

"The Department of Justice (DOJ) is facing pressure to back away from a request from President Trump for a $230 million settlement stemming from his legal troubles, as critics say it raises a dizzying number of ethical issues.

Trump has argued he deserves compensation for the scrutiny into his conduct, describing himself as a victim of both a special counsel investigation into the 2016 election and the classified documents case.

The decision, however, falls to a cadre of attorneys who previously represented Trump personally.

Rupa Bhattacharyya, who reviewed settlement requests in her prior role as director of the Torts Branch of the DOJ’s Civil Division, said agreements approved by the department are typically for tens of thousands of dollars, or at most hundreds of thousands.

“In the ordinary course, the filing of administrative claims is required. So that’s not unusual. In the ordinary course, a relatively high damages demand on an administrative claim is also not that unusual. What is unusual here is the fact that the president is making a demand for money from his own administration, which raises all sorts of ethical problems,” Bhattacharyya told The Hill.

“It’s also just completely unheard of. There’s never been a case where the president of the United States would ask the department that he oversees to make a decision in his favor that would result in millions of dollars lining his own pocket at the expense of the American taxpayer.”

It’s the high dollar amount Trump is seeking that escalates the decision to the top of the department, leaving Deputy Attorney General Todd Blanche, as well as Associate Attorney General Stanley Woodward, to consider the request."

Monday, July 7, 2025

Welcome to Your Job Interview. Your Interviewer Is A.I.; The New York Times, July 7, 2025

Natallie Rocha, The New York Times; Welcome to Your Job Interview. Your Interviewer Is A.I.

"Job seekers across the country are starting to encounter faceless voices and avatars backed by A.I. in their interviews. These autonomous interviewers are part of a wave of artificial intelligence known as “agentic A.I.,” where A.I. agents are directed to act on their own to generate real-time conversations and build on responses."

Friday, June 27, 2025

Hegseth announces new name of US navy ship that honored gay rights icon Harvey Milk; The Guardian, June 27, 2025

The Guardian; Hegseth announces new name of US navy ship that honored gay rights icon Harvey Milk


[Kip Currier: The money quote in this Guardian article is Pete Hegseth's statement that:

“People want to be proud of the ship they are sailing in."

It's an intentionally offensive statement against gay rights pioneer Harvey Milk. It's also a coded slur meant to troll LGBTQ+ people -- delivered at the tail end of Pride Month -- by suggesting which vessel names inspire feelings of pride and which do not.

Recall, too, that Hegseth kicked off June and Pride Month by announcing he would be renaming naval vessels that had been given the names of historical figures and civil rights activists, several of whom were veterans, like Harvey Milk, Cesar Chavez, and Medgar Evers.]


[Excerpt]

"The US defense secretary, Pete Hegseth, has formally announced that the US navy supply vessel named in honor of the gay rights activist Harvey Milk is to be renamed after Oscar V Peterson, a chief petty officer who received the congressional Medal of Honor for his actions in the Battle of the Coral Sea in the second world war.

“We are taking the politics out of ship naming,” Hegseth announced on Friday on X.

In an accompanying video statement, Hegseth added: “We are not renaming the ship to anything political. This is not about political activists, unlike the previous administration. Instead, we are renaming the ship after a congressional Medal of Honor recipient.”

“People want to be proud of the ship they are sailing in,” Hegseth added."

Wednesday, April 30, 2025

The Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts; Associated Press via Inc., April 28, 2025

Associated Press via Inc.; The Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts 

"In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to “advance equity” in AI development and curb the production of “harmful and biased outputs” are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.

And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and “responsible AI” in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on “reducing ideological bias” in a way that will “enable human flourishing and economic competitiveness,” according to a copy of the document obtained by The Associated Press.

In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.

But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive."

Friday, February 7, 2025

A Judge Tried to Get Out of Jury Duty. What He Said Cost Him His Job.; The New York Times, February 6, 2025

The New York Times; A Judge Tried to Get Out of Jury Duty. What He Said Cost Him His Job.


[Kip Currier: A bedrock principle of the American judicial system is a commitment to equity and fairness by those who are entrusted to be impartial adjudicators. This story reveals an individual who makes a mockery of that ethical imperative.]


[Excerpt]

"When Richard Snyder was running to be a town justice in tiny Petersburgh, N.Y., in 2013, he told a local news site that he would be fair and honest on the bench. Because he was not a lawyer, he also said he was “looking forward to learning about the law.”

He just learned something about it the hard way.

Mr. Snyder, a Republican, was unopposed in that 2013 race and won it with 329 votes. But in December he resigned after a disciplinary panel found that he had tried to get out of grand jury duty by introducing himself as a town justice and saying he could not be impartial based on his opinion of those who appeared in his court.

“I know they are guilty,” Mr. Snyder said in arguing to be excused, according to a court transcript. Otherwise, he explained, “they would not be in front of me.” (The judge dismissed him and notified the disciplinary panel.)"

Friday, December 27, 2024

New Course Creates Ethical Leaders for an AI-Driven Future; George Mason University, December 10, 2024

Buzz McClain, George Mason University; New Course Creates Ethical Leaders for an AI-Driven Future

"While the debates continue over artificial intelligence’s possible impacts on privacy, economics, education, and job displacement, perhaps the largest question regards the ethics of AI. Bias, accountability, transparency, and governance of the powerful technology are aspects that have yet to be fully answered.

A new cross-disciplinary course at George Mason University is designed to prepare students to tackle the ethical, societal, and governance challenges presented by AI. The course, AI: Ethics, Policy, and Society, will draw expertise from the Schar School of Policy and Government, the College of Engineering and Computing (CEC), and the College of Humanities and Social Sciences (CHSS).

The master’s degree-level course begins in spring 2025 and will be taught by Jesse Kirkpatrick, a research associate professor in the CEC and the Department of Philosophy, and codirector of the Mason Autonomy and Robotics Center.

The course is important now, said Kirkpatrick, because “artificial intelligence is transforming industries, reshaping societal norms, and challenging long-standing ethical frameworks. This course provides critical insights into the ethical, societal, and policy implications of AI at a time when these technologies are increasingly deployed in areas like healthcare, criminal justice, and national defense.”"

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock picking tools. Now, lock picking tools can open your door if you lock yourself out, and they can also open others'. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is that AI is so complex, it's beyond the knowledge of many of us how it works. Sometimes it looks ethical, but maybe what's behind it is amplifying bias by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important one is education: knowing what AI is about, how it works, and what AI can and cannot do. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just any software system. So education is very, very important to help us demystify AI, so we can talk about AI ethics. I want to emphasize transparency: if AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don't know what's inside, at least we have some assessment tools to know whether there's a risk or not in certain circumstances -- whether it can generate a harmful result or not -- very much like the stress testing of the financial system after 2008.”

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or background. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

Tuesday, July 16, 2024

Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems; American Bar Association (ABA), June 3, 2024

Gary Rhoades, American Bar Association (ABA); Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems

"The Civil Rights Act of 1968, also known as the Fair Housing Act (FHA), banned housing discrimination nationwide on the basis of race, religion, national origin, and color. One key finding that persuaded Dr. Martin Luther King Jr., President Lyndon Johnson, and others to fight for years for the passage of this landmark law confirmed that many Americans were being denied rental housing because of their race. Black families were especially impacted by the discriminatory rejections. They were forced to move on and spend more time and money to find housing and often had to settle for substandard housing in unsafe neighborhoods and poor school districts to avoid homelessness.

April 2024 marked the 56th year of the FHA’s attempt to end such unfair treatment. Despite the law’s broadly stated protections, its numerous state and local counterparts, and decades of enforcement, landlords’ use of high-tech algorithms for tenant screening threatens to erase the progress made. While employing algorithms to mine data such as criminal records, credit reports, and civil court records to make predictions about prospective tenants might partially remove the fallible human element, old and new biases, especially regarding race and source of income, still plague the screening results."

Tuesday, July 2, 2024

Are AI-powered church services coming to a pew near you?; Scripps News, May 10, 2024

Scripps News; Are AI-powered church services coming to a pew near you?

""Depending upon what data sets it's using, we get an intense amount of bias within AI right now," Callaway told Scripps News. "And it reflects, shock and awe, the same bias that we have as humans. And so having someone that is actually a kind of wise guide or mentor to help you discern how to even interpret, understand the results that AI is giving you is really important."

But Callaway says there's good that can come from AI, like translating the Bible into various languages...

Rabbi Geoff Mitelman, who helped found the studies at Temple B'Nai Or through his organization Sinai and Synapses, agrees, saying AI can be an aid in study...

However, there are concerns across religions about the interpretation of such texts, bias and misinformation.

"The spread of misinformation and how easy it is to create and then spread misinformation, whether that's using something like Dall-E or ChatGPT or videos and also algorithms that will spread misinformation — because at least for hundreds of thousands of years it was better for humans to trust than to not trust, right?" said Mitelman.

That cautious view of AI and religion seems to translate across practices, a poll from the Christian research group Barna shows. Over half of Christians, 52%, said they'd be disappointed if they found out AI was used in their church."

Wednesday, April 10, 2024

'Magical Overthinking' author says information overload can stoke irrational thoughts; NPR, Fresh Air, April 9, 2024

NPR, Fresh Air; 'Magical Overthinking' author says information overload can stoke irrational thoughts

"How is it that we are living in the information age — and yet life seems to make less sense than ever? That's the question author and podcast host Amanda Montell set out to answer in her new book, The Age of Magical Overthinking. 

Montell says that our brains are overloaded with a constant stream of information that stokes our innate tendency to believe conspiracy theories and mysticism...

Montell, who co-hosts the podcast Sounds Like A Cult, says this cognitive bias is what allows misinformation and disinformation to spread so easily, particularly online. It also helps explain our tendency to make assumptions about celebrities we admire...

Montell says that in an age of overwhelming access to information, it's important to step away from electronic devices. "We are meant for a physical world. That's what our brains are wired for," she says. "These devices are addictive, but I find that my nervous system really thanks me when I'm able to do that.""

Tuesday, January 2, 2024

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

Joy Buolamwini, The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Friday, July 7, 2023

The Supreme Court makes almost all of its decisions on the 'shadow docket.' An author argues it should worry Americans more than luxury trips.; Insider, July 7, 2023

Insider; The Supreme Court makes almost all of its decisions on the 'shadow docket.' An author argues it should worry Americans more than luxury trips.

"The decisions made on the shadow docket are not inherently biased, Vladeck said, but the lack of transparency stokes legitimate concerns about the court's politicization and polarization, especially as the public's trust in the institution reaches an all-time low.

"Even judges and justices acting in good faith can leave the impression that their decisions are motivated by bias or bad faith — which is why judicial ethics standards, even those few that apply to the Supreme Court itself, worry about both bias and the appearance thereof," Vladeck writes.

The dangers posed by the shadow docket are more perilous than the wrongs of individual justices, Vladeck argues, because the shadow docket's ills are inherently institutional." 

Monday, June 19, 2023

Ethical, legal issues raised by ChatGPT training literature; Tech Xplore, May 8, 2023

Peter Grad, Tech Xplore; Ethical, legal issues raised by ChatGPT training literature

""Knowing what books a model has been trained on is critical to assess such sources of bias," they said.

"Our work here has shown that OpenAI models know about books in proportion to their popularity on the web."

Works detected in the Berkeley study include "Harry Potter," "1984," "Lord of the Rings," "Hunger Games," "Hitchhiker's Guide to the Galaxy," "Fahrenheit 451," "A Game of Thrones" and "Dune."

While ChatGPT was found to be quite knowledgeable about popular works, lesser-known works such as Global Anglophone Literature—readings aimed beyond core English-speaking nations that include Africa, Asia and the Caribbean—were largely unknown. Also overlooked were works from the Black Book Interactive Project and Black Caucus Library Association award winners.

"We should be thinking about whose narrative experiences are encoded in these models, and how that influences other behaviors," Bamman, one of the Berkeley researchers, said in a recent Tweet. He added, "popular texts are probably not good barometers of model performance [given] the bias toward sci-fi/fantasy.""

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Tuesday, February 15, 2022

What internet outrage reveals about race and TikTok's algorithm; NPR, February 14, 2022

Jess Kung, NPR; What internet outrage reveals about race and TikTok's algorithm

"The more our lives become intertwined and caught up in tech and social media algorithms, the more it's worth trying to understand and unpack just how those algorithms work. Who becomes viral, and why? Who gets harassed, who gets defended, and what are the lasting repercussions? And how does the internet both obscure and exacerbate the racial and gender dynamics that already animate so much of our social interactions?"

Sunday, January 23, 2022

The Humanities Can't Save Big Tech From Itself; Wired, January 12, 2022

Wired; The Humanities Can't Save Big Tech From Itself

 "I’ve been studying nontechnical workers in the tech and media industries for the past several years. Arguments to “bring in” sociocultural experts elide the truth that these roles and workers already exist in the tech industry and, in varied ways, always have. For example, many current UX researchers have advanced degrees in sociology, anthropology, and library and information sciences. And teachers and EDI (Equity, Diversity, and Inclusion) experts often occupy roles in tech HR departments.

Recently, however, the tech industry is exploring where nontechnical expertise might counter some of the social problems associated with their products. Increasingly, tech companies look to law and philosophy professors to help them through the legal and moral intricacies of platform governance, to activists and critical scholars to help protect marginalized users, and to other specialists to assist with platform challenges like algorithmic oppression, disinformation, community management, user wellness, and digital activism and revolutions. These data-driven industries are trying hard to augment their technical know-how and troves of data with social, cultural, and ethical expertise, or what I often refer to as “soft” data.

But you can add all of the soft data workers you want and little will change unless the industry values that kind of data and expertise. In fact, many academics, policy wonks, and other sociocultural experts in the AI and tech ethics space are noticing a disturbing trend of tech companies seeking their expertise and then disregarding it in favor of more technical work and workers...

Finally, though the librarian profession is often cited as one that might save Big Tech from its disinformation dilemmas, some in LIS (Library and Information Science) argue they collectively have a long way to go before they’re up to the task. Safiya Noble noted the profession’s (just over 83% white) “colorblind” ideology and sometimes troubling commitment to neutrality. This commitment, the book Knowledge Justice explains, leads to many librarians believing, “Since we serve everyone, we must allow materials, ideas, and values from everyone.” In other words, librarians often defend allowing racist, transphobic, and other harmful information to stand alongside other materials by saying they must entertain “all sides” and allow people to find their way to the “best” information. This is the exact same error platforms often make in allowing disinformation and abhorrent content to flourish online."

Friday, May 7, 2021

In Covid Vaccine Data, L.G.B.T.Q. People Fear Invisibility; The New York Times, May 7, 2021

The New York Times; In Covid Vaccine Data, L.G.B.T.Q. People Fear Invisibility

"Agencies rely on population data to make policy decisions and direct funding, and advocates say that failing to collect sexual orientation and gender identity data on Covid-19 vaccine uptake could obscure the real picture and prevent vaccine distribution decisions and funds from positively impacting this population.

When it comes to Covid-19 vaccine distribution, “how can you design interventions and know where to target your resources if you don’t know where you’ve been?” said Dr. Ojikutu.

A February study showed that L.G.B.T.Q. people with high medical mistrust and concern about experiencing stigma or discrimination were least likely to say they would accept a Covid-19 vaccine.

“The reason we need to do data-driven, culturally responsive outreach is that medical mistrust — and along with that, vaccine hesitancy — among L.G.B.T.Q. people is rooted in the stigma and discrimination that this community has experienced over time,” said Alex Keuroghlian, a psychiatrist and director of the National LGBTQIA+ Health Education Center and the Massachusetts General Hospital Psychiatry Gender Identity Program."

Saturday, July 11, 2020

Wrongfully Accused by an Algorithm; The New York Times, June 24, 2020

The New York Times; Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

"Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias.

“There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said.

About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”"