Showing posts with label transparency. Show all posts

Saturday, February 8, 2025

Pentagon cuts off Hegseth town hall webcast after transparency pledge; Navy Times, February 7, 2025

Navy Times; Pentagon cuts off Hegseth town hall webcast after transparency pledge

"The Pentagon cut off a webcast of Defense Secretary Pete Hegseth’s first town hall with troops and department employees Friday as soon as questions began, and shortly after Hegseth promised to be transparent with service members and the public.

Hegseth delivered about 15 minutes of opening remarks, which touched on issues such as grooming standards, readiness, border security and the administration’s desire to root out diversity, equity and inclusion programs from the military, before opening the floor to questions...

The broadcast ended less than two minutes after Hegseth pledged to be open with service members and the public.

“I appreciate the service so many of you give,” Hegseth said. “I know so many people watching. It’s the honor of a lifetime to come alongside you. No one will work harder. No one’s going to be more — attempt to be more transparent with the American people and with you.”

When asked a follow-up question about why the department stopped broadcasting when questions began, the Pentagon’s press office said, “The [defense secretary’s] opening remarks were televised to allow a larger audience. The Q&A portion was open to in-person participants only.”

It does not appear the Pentagon broadcast any portion of town halls held by Hegseth’s predecessor, Lloyd Austin.

However, the Pentagon did not indicate in its Friday morning email announcing Hegseth’s town hall that the questioning portion would not be broadcast. The webpage with the town hall feed originally indicated the broadcast was scheduled to run for about an hour and a half...

Hegseth said one of his top priorities is “restoring the warrior ethos,” before harshly criticizing the Pentagon’s previous focus on improving diversity in the ranks.

“I think the single dumbest phrase in military history is, ‘Our diversity is our strength,’” Hegseth said. “I think our strength is our unity. Our strength is our shared purpose, regardless of our background, regardless of how we grew up, regardless of our gender, regardless of our race. In this department, we will treat everyone equally, we will treat everyone with respect and we will judge you as an individual by your merit and by your commitment to the team and the mission.”

The Trump administration views diversity, equity and inclusion programs — referred to as DEI — as efforts to divide the military instead of uniting it, Hegseth said.

But the Pentagon’s initial efforts to comply with the administration’s DEI orders have been rocky."

Thursday, February 6, 2025

Rubio named acting director of another US government agency: report; Fox News, February 6, 2025

Danielle Wallace, Fox News; Rubio named acting director of another US government agency: report

"Secretary of State Marco Rubio, who was tapped as the acting director of the United States Agency for International Development (USAID) just days ago, is taking on another new role in President Donald Trump's new administration. 

Rubio is now also serving as the acting director of the U.S. Archives, ABC News reported, citing a high-level official. Fox News Digital reached out to the State Department for comment, but it did not immediately respond.

Trump signaled last month his intention of replacing the now-former national archivist Colleen Shogan, who was appointed by former President Joe Biden, during a brief phone interview with radio host Hugh Hewitt. The National Archives notified the Justice Department in early 2022 over classified documents Trump allegedly took with him to his Mar-a-Lago estate in Florida after leaving office. That would later result in an FBI raid, and Trump being indicted by former special counsel Jack Smith. However, Biden nominated Shogan to run the National Archives and Records Administration (NARA) later in 2022, and the Senate confirmed her the following year.

The source told ABC News that Rubio has been the acting archivist since shortly after Trump was sworn in as the 47th president last month."

Wednesday, January 29, 2025

Trump’s firing of independent watchdogs raises concerns about government fraud and ethics; PBS News, January 27, 2025

PBS News; Trump’s firing of independent watchdogs raises concerns about government fraud and ethics

"In another sweeping move of his second term, President Trump fired more than a dozen inspectors general, the non-partisan watchdogs appointed to protect against abuses of power, waste and mismanagement across federal agencies. White House correspondent Laura Barrón-López discussed the impact with Glenn Fine, former inspector general for the Department of Justice."

Saturday, January 25, 2025

Ethics watchdog issues conflict of interest warning to Musk’s Doge agency; The Guardian, January 23, 2025

The Guardian; Ethics watchdog issues conflict of interest warning to Musk’s Doge agency

"A leading ethics watchdog has issued warnings to Donald Trump’s billionaire ally Elon Musk and the “department of government efficiency” (Doge), an agency Trump has stated he will create, claiming its use of encrypted messaging apps potentially violates the Federal Records Act (FRA).

American Oversight, which uses litigation to obtain public records and expose government misconduct, argues that Musk’s leadership of Doge raises “significant ethical concerns about potential conflicts of interest”, given his business empire and the substantial impact that Doge could have on federal agencies.

The warnings stem from reports that members of Doge, which aims to carry out dramatic cuts to the US government, are using the encrypted messaging app Signal with an auto-delete feature, which could hinder the preservation of official records."

Sunday, January 19, 2025

How Jeff Bezos can stop the bleeding at the Washington Post; The Guardian, January 17, 2025

The Guardian; How Jeff Bezos can stop the bleeding at the Washington Post

"More than 400 newsroom staffers at the Washington Post pleaded with the paper’s owner, Jeff Bezos, this week to do something about their beloved paper’s rapid – and very public – decline.

“We are deeply alarmed by recent leadership decisions that have led readers to question the integrity of this institution, broken with a tradition of transparency, and prompted some of our most distinguished colleagues to leave, with more departures imminent,” an extraordinary letter to Bezos read in part, as first reported by NPR’s David Folkenflik. It was signed by some of the Post’s most respected names, including the investigative reporter Carol Leonnig and the unofficial dean of DC politics writers Dan Balz.

I feel their pain and join their cause. I was proud to work at the Washington Post for six years, until 2022, as the paper’s media columnist. My ties to the paper go back much further; it was the Post’s Watergate reporting that piqued my interest, as a teenager, in journalism and (along with a whole generation of other young people) drew me into a lifelong career. I know and admire many reporters, editors, photographers, videographers, designers and others at the paper, and doubt I’ll ever give up my subscription...

Bezos may not care. The billionaire who bought the paper for $250m in 2013 has been in supplication mode to Donald Trump for months. One of the world’s richest individuals, Bezos seems more interested in palling around with the likes of fellow billionaire Elon Musk.

But let’s say he does care, for reasons that may span the spectrum from preserving his own place in history to defending press rights to improving the Post’s red-drenched bottom line.

What could he do, immediately, to stem the bleeding?

First, he should show up – soon – to hold a town hall with the newsroom, answer questions and take the heat. Do it on the record...

Second, he should clearly state – publicly – that he understands the importance of editorial freedom and pledge not to interfere with it. And he should communicate that he gets the importance of the Post’s history and mission, and that he will support it.

Third, he should dump his handpicked publisher, Will Lewis, from whom many of these problems originate. Lewis, a British journalist who hails from the world of Rupert Murdoch, is far from a paragon of journalistic excellence or good judgment. His appointment has been rejected by the body of the Post (and, eventually, by its readers); to put it mildly, the graft didn’t take. Recognizing that, and immediately beginning a search for a more suitable replacement, would be a huge – and essential – step in the right direction.

All of this should be transparent to the public, in keeping with how the Post has conducted itself for many years. It’s a core value."

Wednesday, January 8, 2025

Opinion | Trump’s Nominees Falsely Say I’m Censoring Conservatives — So They Want to Censor Me; Politico, January 5, 2025

Steven Brill, Politico; Opinion | Trump’s Nominees Falsely Say I’m Censoring Conservatives — So They Want to Censor Me

"Last week, The Washington Post published an article detailing how NewsGuard, whose journalists rate the reliability of news sources, has become the target of incoming Trump administration regulators and far-right Republicans in Congress. They are accusing me and my NewsGuard colleagues of being part of some left-wing conspiracy — or “cartel” in the words of the incoming chairs of both the FCC and the FTC. Our cartel is supposedly aimed at censoring conservative websites and their associated social media and video platforms.

What we actually do is provide consumers and businesses with our journalists’ assessments of the professional standards of thousands of news websites, assigning them reliability scores based on apolitical, journalistic factors — like accuracy, transparent correction policies and honest headlines. Advertisers, for example, can use these reliability scores to make sure their computerized placements of online ads avoid running alongside Russian disinformation, health care hoaxes or other content that could embarrass their brands. Consumers who subscribe to our browser extension can also see those ratings when they pull up an article or scroll through a Facebook or X feed.

If you click the link to the Post article, you’ll see that the reporters compiled a chart of our 0-100 point ratings for a sample of 20 news sites. It plainly demonstrates that we give high and low ratings to liberal and conservative sites alike, because the nine criteria we use to tally the point score have nothing to do with politics. After all, is there a liberal or conservative way to have a transparent policy for admitting and correcting errors or having headlines that accurately reflect what’s delivered in the story?"

Friday, December 27, 2024

New Course Creates Ethical Leaders for an AI-Driven Future; George Mason University, December 10, 2024

Buzz McClain, George Mason University; New Course Creates Ethical Leaders for an AI-Driven Future

"While the debates continue over artificial intelligence’s possible impacts on privacy, economics, education, and job displacement, perhaps the largest question regards the ethics of AI. Bias, accountability, transparency, and governance of the powerful technology are aspects that have yet to be fully answered.

A new cross-disciplinary course at George Mason University is designed to prepare students to tackle the ethical, societal, and governance challenges presented by AI. The course, AI: Ethics, Policy, and Society, will draw expertise from the Schar School of Policy and Government, the College of Engineering and Computing (CEC), and the College of Humanities and Social Sciences (CHSS).

The master’s degree-level course begins in spring 2025 and will be taught by Jesse Kirkpatrick, a research associate professor in the CEC and the Department of Philosophy, and co-director of the Mason Autonomy and Robotics Center.

The course is important now, said Kirkpatrick, because “artificial intelligence is transforming industries, reshaping societal norms, and challenging long-standing ethical frameworks. This course provides critical insights into the ethical, societal, and policy implications of AI at a time when these technologies are increasingly deployed in areas like healthcare, criminal justice, and national defense.”"

Thursday, October 31, 2024

A new study seeks to establish ethical collecting practices for US museums; The Art Newspaper, October 29, 2024

Annabel Keenan, The Art Newspaper; A new study seeks to establish ethical collecting practices for US museums

"As calls for the restitution of looted objects spread across the industry, the Penn Cultural Heritage Center (PennCHC) at the Penn Museum in Philadelphia is launching a study that will examine collecting policies and practices at US museums and encourage transparency and accountability in the sector. Launching today (29 October), the “Museums: Missions and Acquisitions Project” (dubbed M2A Project for short) will study over 450 museum collections to identify current standards and establish a framework for institutions to model their future practices...

The PennCHC has been supporting ethical collecting since its founding in 2008, including working closely with local communities in countries around the world to identify and preserve their cultural heritage. “US museums have historically acquired objects that were removed from these countries illegally or through pathways now considered inequitable,” says Richard M. Leventhal, the executive director of the PennCHC and co-principal investigator for the M2A Project. “The M2A Project is asking a very simple set of questions about these types of objects: Are US museums still acquiring them? And if so, why? Recent seizures of looted property and calls to decolonise collections force us to reconsider whether acquisitions best serve the missions of museums and the interests of their communities.”

The M2A Project evolved from the PennCHC’s Cultural Property Experts on Call Program that launched in 2020 in partnership with the US Department of State’s Cultural Heritage Coordinating Committee to protect at-risk cultural property against theft, looting and trafficking. Through this programme, the PennCHC collaborated with more than 100 museums and universities to study and document the trade in illicit artefacts."

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference for the AI and say, lock picking tools. Now, picking tools can open your door if you lock yourself out and it can also open others. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is, as AI is too complex, it's beyond the knowledge of many of us how it works. Sometimes it looks ethical but maybe what's behind it is amplifying the bias by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important one is education if you know what AI is about, how it works and what AI can do and what AI cannot. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now can be done in the past by just any software system. So education is very, very important to help us to demystify AI accordingly, so we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, if we understand how the decision is made, that becomes very, very important. And another important topic related to AI ethics is auditing if we don't know what's inside. At least we have some assessment tools to know whether there's a risk or not in certain circumstances. Whether it can generate a harmful result or is not very much like the stress testing to the financial system after 2008.”

Sunday, October 27, 2024

Book Bans Live on in School District Now Run by Democrats; The New York Times, October 27, 2024

The New York Times; Book Bans Live on in School District Now Run by Democrats

"What is clear is that many Pennridge parents are exhausted with the political battles that inflamed communities nationwide during the Covid-19 pandemic. They have reached a new political equilibrium, where some changes have become part of the firmament of public education, especially the expectation that parents will have visibility into all that their children are learning and reading at school — and some measure of a veto...

Aubrie Schulz, 16, a junior at Pennridge High School, said she had been frustrated by the limited offerings in the high school library. But as adults argued over gender, sex and race, she noted that what occurred in the library or classroom had only a narrow effect on students.

“We can get all the information on our phones,” she said."

Tuesday, October 22, 2024

On X, the Definition of ‘Blocking’ Is About to Change; The New York Times, October 21, 2024

The New York Times; On X, the Definition of ‘Blocking’ Is About to Change

"A lot has changed on the social media platform formerly known as Twitter since Elon Musk bought it two years ago. The company, renamed X, is on the verge of yet another major shift, with changes coming for what happens when one user blocks another.

The block function, a powerful tool which makes your account effectively invisible to anyone of your choosing, will soon let those people see what you are posting. The difference, according to a thread posted by X’s engineering account, is that blocked users will not be able to engage with the post in any way...

The overall sentiment from users, however, is that the impending change to the block feature will allow for more abuse."

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”"

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."

Thursday, July 18, 2024

Ethical AI: Tepper School Course Explores Responsible Business; Carnegie Mellon University Tepper School of Business, July 1, 2024

Carnegie Mellon University Tepper School of Business; Ethical AI: Tepper School Course Explores Responsible Business

"As artificial intelligence (AI) becomes more widely used, there is growing interest in the ethics of AI. A new article by Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University's Tepper School of Business, detailed a graduate course he developed titled "Ethics and AI." The course bridges AI-specific challenges with long-standing ethical discussions in business."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Monday, June 17, 2024

Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms; The New York Times, June 17, 2024

Vivek H. Murthy, The New York Times; Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms

"It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe. Evidence from tobacco studies show that warning labels can increase awareness and change behavior. When asked if a warning from the surgeon general would prompt them to limit or monitor their children’s social media use, 76 percent of people in one recent survey of Latino parents said yes...

It’s no wonder that when it comes to managing social media for their kids, so many parents are feeling stress and anxiety — and even shame.

It doesn’t have to be this way. Faced with high levels of car-accident-related deaths in the mid- to late 20th century, lawmakers successfully demanded seatbelts, airbags, crash testing and a host of other measures that ultimately made cars safer. This January the F.A.A. grounded about 170 planes when a door plug came off one Boeing 737 Max 9 while the plane was in the air. And the following month, a massive recall of dairy products was conducted because of a listeria contamination that claimed two lives.

Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food? These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability."

Wednesday, June 12, 2024

Why G7 leaders are turning to a special guest — Pope Francis — for advice on AI; NPR, June 12, 2024

NPR; Why G7 leaders are turning to a special guest — Pope Francis — for advice on AI

"Pope Francis himself has been at the receiving end of AI misinformation. Last year, a picture of the pope wearing a large white puffer coat went viral. The image was generated by AI, and it prompted conversations on deepfakes and the spread of disinformation through AI technology.

In his annual message on New Year's Day this year, the pope focused on how AI can be used for peace.

His work on the issue goes back several years, when the Vatican and tech companies like Microsoft started working together to create a set of principles known as the Rome Call for AI Ethics, published in 2020. Companies and governments that sign on to the call have agreed to voluntary commitments aimed at promoting transparency and accountability in AI development."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including Florida, California, New York, and DC, as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

Chancellor's Fellow, Deanery of Molecular, Genetic and Population Health Sciences, Usher Institute Centre for Biomedicine, Self and Society, The University of Edinburgh; Professor of Sociology, University of Manchester; and Lecturer in Nursing, University of Manchester, The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy, safety and marginalisation."