Showing posts with label trust. Show all posts

Tuesday, March 3, 2026

Trump drops attack on Big Law, but firms already capitulated; Democracy Docket, March 3, 2026

Marc Elias, Democracy Docket; Trump drops attack on Big Law, but firms already capitulated

"As pleased as I am with the outcome of these cases, this is not a story with a happy ending.

The capitulation of Big Law has done enormous damage to our democracy. Firms that were never targeted have stopped representing pro bono clients in voting rights and civil rights cases. Leaders in the profession are rarely willing to speak out. As everyday Americans challenge the illegality of Trump’s actions in the streets of our cities, large law firms remain notably absent.

No one who has paid attention over the past year will ever view the role of lawyers the same way again. Long after Trump leaves office, when we are cleaning up the rubble he leaves behind, the damage to the legal profession will endure.

That is why it is so important not only to remember those who stood and fought, but also those who cowered and gave in. For confidence to be restored, the leaders of the firms that made deals with Trump must be treated as pariahs in the legal world — just as the Ellisons will be in media and Sam Altman will be in tech. When the dust settles, we must be clear about who stood up for our democracy and who was willing to let it fall for personal gain.

I have been fighting — and winning — against Donald Trump for a long time. Yesterday, I was proud to see a hard-earned victory. But today, and in the days ahead, we must rebuild trust in the rule of law and our legal system — not only by celebrating those who did the right thing, but also by ensuring we never forget those who betrayed our cause."

Tuesday, February 24, 2026

Louvre Director Resigns, Months After Burglars Stole Crown Jewels; The New York Times, February 24, 2026

The New York Times; Louvre Director Resigns, Months After Burglars Stole Crown Jewels

Laurence des Cars’s departure is the latest setback for the world’s largest museum. Her tenure was marred by labor strikes, water leaks and security lapses that led to the heist in October.

"Laurence des Cars, the first female president of the Louvre Museum, resigned on Tuesday, less than three months after an audacious theft raised thorny questions about security at one of the world’s most famous museums.

Ms. des Cars submitted her resignation to the French president, Emmanuel Macron, who had appointed her in 2021 and championed her plans for an ambitious refurbishment of the museum, known as “Louvre — New Renaissance.”

The president’s office said in a statement that Mr. Macron had accepted Ms. des Cars’s resignation “as an act of responsibility at a time when the world’s largest museum needs both stability and a strong new impetus to successfully complete major security and modernization projects.”

Ms. des Cars’s resignation came a day before she was scheduled to testify before the French Parliament about the security lapses that led to the theft of a collection of jewels, which were valued at more than $100 million."

Wednesday, February 18, 2026

Honoring Alex Pretti’s Moral Courage and the Cost of Caring; The Hastings Center for Bioethics, February 17, 2026

Connie M. Ulrich, Mary D. Naylor, and Martha A. Q. Curley, The Hastings Center for Bioethics; Honoring Alex Pretti’s Moral Courage and the Cost of Caring

"The death of Alex Pretti, an ICU nurse who was killed last month in an anti-immigration protest in Minneapolis, is, first and foremost, a devastating loss for his loved ones. But it has also shaken the nursing profession to the core. 

People often encounter nurses at the bedside when they are ill or someone close to them is ill. But nurses also have a long history of advocating for social justice in their communities, speaking out against unjust policies, challenging unsafe practices, and advancing public health reforms.

The 2025 Code of Ethics for Nurses reflects this activism. It calls on all nurses to be civically engaged and to work toward policies and systems that have positive ends for the communities in which we live and work. Alex met this call. 

Alex used his ICU training to help someone in need; it was second nature to him and reflected his primary obligation as a registered nurse to protect the rights and well-being of patients, families, and communities. He lost his life because he helped a woman during a protest against federal immigration action in Minneapolis. Pretti stepped in front of the woman, who was on the ground, to protect her from being pepper sprayed by U.S. Border Patrol agents. Agents then pinned Pretti to the ground and shot him.

Nurses are no strangers to conflict and moral turmoil. They take a professional and ethical oath to care for anyone — victim or perpetrator — regardless of their identity or ideological belief. But Alex’s death exposes a stark and troubling reality for every nurse and healthcare provider: Immigration enforcement agents are now occupying spaces that should be protected in hospitals, waiting rooms, lobbies, and clinics. These are places where patients must feel safe and trust that they will receive care without discrimination and be protected from intimidation. 

The presence of immigration enforcement agents in these places is creating profound moral distress and a climate of deep fear for all those who deliver care and for the people who need it most within these buildings. Nurses and other healthcare providers are caught in the age-old dilemma between what is ethical and what is legal: They question what they ought to do when faced with immigration enforcement agents standing outside hospital rooms and observing the care they are ethically and professionally obligated to protect.

When nurses and other healthcare providers cannot meet their ethical duties to protect the rights and welfare of their patients, this distress can intensify into a deeper wound with lingering residue of regret and a searing violation of their sense of integrity. 

For their part, patients may withhold critical health information, become afraid to ask questions, and mistrust health professionals when immigration enforcement agents are present. Patients who are immigrants are most vulnerable to these harms, but other patients may also experience them. The harms – to healthcare providers and patients – can ultimately compromise ethical decision-making, patient-and family-centered care, and the overall quality of care that all patients deserve, and healthcare providers are trained to deliver.

The patients and families cared for by Alex will always remember him. Nurses will remember Alex’s sacrifice – that his caring extended beyond the walls of his hospital to the stranger he protected in his community. 

Nurses can honor Alex’s moral courage through our individual and professional resolve. We must say no more to the infiltration of immigration enforcement into healthcare spaces that were previously off limits to them. We must speak out on re-establishing “safe zones,” hospital-wide policies that limit enforcement access, and confidential reporting mechanisms that reflect the humanity of the nursing profession towards those we took an oath to serve. 

May a better and more humane world prevail, reminding each of us that moral courage carries risk, but it also helps us rise to the occasion when change and moral repair are needed most. We are at that moment.

Connie M. Ulrich, PhD, RN, is a registered nurse and professor of nursing and of medical ethics and health policy at the University of Pennsylvania School of Nursing and a Hastings Center Fellow. LinkedIn: connieulrich1. X: @cm_ulrich

Mary D. Naylor, PhD, RN, is a registered nurse and professor of gerontology and nursing at Penn’s School of Nursing. LinkedIn: Mary_Naylor. X: @MaryDNaylor

Martha A.Q. Curley, PhD, RN, is a registered nurse and professor of pediatric nursing at Penn’s School of Nursing. LinkedIn: Martha-a-q-curley. X: maqcurley. Bluesky: @maqc.bsky.social"

Monday, February 9, 2026

Essential Knowledge for Journalists Reporting on AI, Creativity and Copyright; Webinar, National Press Foundation: Thursday, February 19, 2026 12 PM - 1 PM EST

 Webinar, National Press Foundation: Essential Knowledge for Journalists Reporting on AI, Creativity and Copyright 

"Generative AI is one of the biggest technological and cultural stories of our time – and one of the hardest to explain. As AI companies train models on news articles, books, images and music, reporters face tough questions about permission, transparency and fair use. Should AI companies pay when creative works are used to train their AI models? Where’s the line between innovation and theft?

The National Press Foundation will host a webinar to help journalists make sense of the evolving AI licensing landscape and report on it with clarity and confidence. We’ll unpack what “AI licensing” really means, how early one-off deals are turning into structured revenue-sharing systems, and why recent agreements in media and entertainment could shift the conversation from conflict to cooperation.

Join NPF and a panel of experts for a free online briefing from 12-1 p.m. ET, Feb. 19, 2026. The practical, forward-looking discussion examines how trust, creativity, and innovation can coexist as this new era unfolds and will equip journalists with plain-language explanations, real-world examples, and story angles that help readers understand why AI licensing matters to culture, innovation and the creative ecosystem they rely on every day."

Monday, February 2, 2026

How the Supreme Court Secretly Made Itself Even More Secretive; The New York Times, February 2, 2026

The New York Times; How the Supreme Court Secretly Made Itself Even More Secretive

Amid calls to increase transparency and revelations about the court’s inner workings, the chief justice imposed nondisclosure agreements on clerks and employees.

"In November of 2024, two weeks after voters returned President Donald Trump to office, Chief Justice John G. Roberts Jr. summoned employees of the U.S. Supreme Court for an unusual announcement. Facing them in a grand conference room beneath ornate chandeliers, he requested they each sign a nondisclosure agreement promising to keep the court’s inner workings secret.

The chief justice acted after a series of unusual leaks of internal court documents, most notably of the decision overturning the right to abortion, and news reports about ethical lapses by the justices. Trust in the institution was languishing at a historic low. Debate was intensifying over whether the black box institution should be more transparent.

Instead, the chief justice tightened the court’s hold on information. Its employees have long been expected to stay silent about what they witness behind the scenes. But starting that autumn, in a move that has not been previously reported, the chief justice converted what was once a norm into a formal contract, according to five people familiar with the shift."

Sunday, December 28, 2025

Government Officials Once Stopped False Accusations After Violence. Now, Some Join In.; The New York Times, December 25, 2025

The New York Times; Government Officials Once Stopped False Accusations After Violence. Now, Some Join In.

"A churn of disinformation after a major news event is hardly a surprise anymore, but its spread after the Brown killings was not limited to the dark fringes of the internet. It was fueled by prominent figures in business and government whose false statements or politically charged innuendo compounded public anger and anxiety.

That has raised new alarms about the nature and quality of public discourse — and whether there is any consequence for those who degrade it or for the social media platforms that reward it."

Thursday, December 11, 2025

AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025

Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures

"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.

This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.

The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.

To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.

We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.

And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.

At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."

Friday, November 28, 2025

‘Traitor’: US representatives call for Trump envoy Witkoff to be fired after leaked Kremlin call; The Guardian, November 26, 2025

The Guardian; ‘Traitor’: US representatives call for Trump envoy Witkoff to be fired after leaked Kremlin call

Republicans and Democrats warn Witkoff ‘cannot be trusted’ after reportedly advising officials on peace plan

"A handful of US representatives have reacted furiously to a leaked recording in which the special envoy to Ukraine reportedly coached Moscow on how to handle Donald Trump, but most have so far remained mute on the revelation that American officials were advising a US adversary.

Don Bacon, a Republican representative, called for Steve Witkoff’s immediate dismissal. “For those who oppose the Russian invasion and want to see Ukraine prevail as a sovereign & democratic country, it is clear that Witkoff fully favors the Russians,” the Nebraska lawmaker wrote on X.

“He cannot be trusted to lead these negotiations. Would a Russian paid agent do less than he? He should be fired.”

Brian Fitzpatrick, a Pennsylvania Republican, wrote that the leak represented “a major problem” and “one of the many reasons why these ridiculous side shows and secret meetings need to stop”. He urged that the secretary of state, Marco Rubio, be allowed to “do his job in a fair and objective manner”.

Democratic representative Ted Lieu went further, calling Witkoff an “actual traitor,” and adding: “Steve Witkoff is supposed to work for the United States, not Russia.”

In a recording obtained by Bloomberg of a 14 October phone call between Witkoff and Yuri Ushakov, Vladimir Putin’s top foreign policy aide, Witkoff said peace would require Moscow gaining control of Donetsk and potentially additional Ukrainian territory."

Wednesday, November 19, 2025

Can You Believe the Documentary You’re Watching?; The New York Times, November 18, 2025

The New York Times; Can You Believe the Documentary You’re Watching?

"Like a surging viral outbreak, A.I.-generated video has suddenly become inescapable. It’s infiltrated our social feeds and wormed its way into political discourse. But documentarians have been bracing for impact since before most of us even knew what the technology could do.

Documentaries fundamentally traffic in issues of truth, transparency and trust. If they use so-called synthetic materials but present them as if they’re “real,” it’s not just a betrayal of the tacit contract between filmmaker and audience. The implications are far broader, and far more serious: a century of shared history is in jeopardy.

At a time when the idea of facts and shared reality is assaulted from every side, the turning point has arrived. The stakes couldn’t be higher. And we all need to pay attention."

Saturday, October 11, 2025

US universities must reject Trump’s ‘compact’. It is full of traps; The Guardian, October 7, 2025

The Guardian; US universities must reject Trump’s ‘compact’. It is full of traps

"As with other aspects of Donald Trump’s emerging mafia state, there is no guarantee that those bending the knee will not be bullied again. The government can always come back to universities and accuse them of having violated the agreement (still too many courses in victimhood studies; still too much “violence” – as defined by bureaucrats – vis-a-vis someone’s cherished ideas). The government will also encourage donors to claim back their cash. Since the compact’s criteria are exceedingly vague, those who take the offer will probably overdo compliance.

At the risk of sounding like one of those dreadful self-styled victims: universities are fragile institutions. Many American ones are excellent precisely because people trust each other and cooperate successfully without over-regulation (some Europeans can tell you what it means to be subject to constant assessments – and how a Soviet-style bureaucracy constantly distracts from research and teaching). Of course there is always plenty of academic infighting, but what the Trumpists are doing is consciously trying to create divisions by setting potential Trump administration collaborators against those determined to resist it. As has become apparent with other autocrats’ assaults on universities, even if institutions escape (sometimes literally, as they have to relocate to other countries) the worst, much damage has been done. This is why the nine universities should not only reject the compact, but also publicly explain what is wrong with it (otherwise they will be immediately charged with wanting to protect their tuition-racket, helping foreigners and “importing radicalism” to undermine American greatness).

Precisely because they have been losing court cases over free speech and visas for foreign students, Trumpists now seek to entrap universities in a deal that effectively removes the protections of federal law and gives the administration arbitrary power over them. The carrots serve to lure institutions of higher learning into a dark alley where, rather than just waiting with a big stick, the government can put a gun to their heads at any time."

Sunday, September 28, 2025

Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers; The Hastings Center for Bioethics, September 25, 2025

 The Hastings Center for Bioethics; Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers

"As artificial intelligence rapidly transforms healthcare, The Hastings Center for Bioethics has released an interactive tool to help policymakers, patients and providers understand the ways that AI is being used in medicine—from making a diagnosis to evaluating insurance claims—and navigate the ethical questions that emerge along the way.

The new tool, a Patient’s Journey with Medical AI, follows an imaginary patient through five interactions with medical AI. It guides users through critical decision points in diagnostics, treatment, and communication, offering personalized insights into how algorithms might influence their care. 

Each decision point in the Patient’s Journey includes a summary of the ethical issues raised and multiple choice questions intended to stimulate thinking and discussion about particular uses of AI in medicine. Policy experts from across the political spectrum were invited to review the tool for accuracy and utility.

The Patient’s Journey is the latest in a set of resources developed through Hastings on the Hill, a project that translates bioethics research for use by policymakers—with an initial focus on medical AI. “This isn’t just about what AI can do — it’s about what it should do,” said Hastings Center President Vardit Ravitsky, who directs Hastings on the Hill. “Patients deserve to understand how technologies affect their health decisions, and policymakers can benefit from expert guidance as they seek to ensure that AI serves the public good.”

The Greenwall Foundation is supporting this initiative. Additional support comes from The Donaghue Foundation and the National Institutes of Health’s Bridge2AI initiative.

In addition to using Hastings on the Hill resources, policymakers, industry leaders, and others who shape medical AI policy and practice are invited to contact The Hastings Center with questions related to ethical issues they are encountering. Hastings Center scholars and fellows can provide expert nonpartisan analysis on urgent bioethics issues, such as algorithmic bias, patient privacy, data governance, and informed consent.

“Ethics should not be an afterthought,” says Ravitsky. “Concerns about biased health algorithms and opaque clinical decision tools have underscored the need for ethical oversight alongside technical innovation.”

“The speed of AI development has outpaced the ethical guardrails we need,” said Erin Williams, President and CEO of EDW Wisdom, LLC — the consultancy working with The Hastings Center. “Our role is to bridge that gap —ensuring that human dignity, equity, and trust are not casualties of technological progress.”

Explore Patient’s Journey with Medical AI. Learn more about Hastings on the Hill."

Thursday, September 18, 2025

AI could never replace my authors. But, without regulation, it will ruin publishing as we know it; The Guardian, September 18, 2025

The Guardian; AI could never replace my authors. But, without regulation, it will ruin publishing as we know it


[Kip Currier: This is a thought-provoking piece by literary agent Jonny Geller. He suggests an "artists’ rights charter for AI that protects two basic principles: permission and attribution". His charter idea conveys some aspects of the copyright area called "moral rights".

Moral rights provide copyright creators with a right of paternity (i.e. attribution) and a right of integrity. The latter can enable creators to exercise some level of control over how their copyrighted works can be adapted. The moral right of integrity, for example, was an argument in cases involving whether black and white films (legally) could be or (ethically) should be colorized. (See Colors in Conflicts: Moral Rights and the Foreign Exploitation of Colorized U.S. Motion Pictures.) Moral rights are not widespread in U.S. copyright law because of tensions between the moral right of integrity and the right of free expression/free speech under the U.S. Constitution (whose September 17, 1787 birthday was yesterday). The Visual Artists Rights Act (1990) is a narrow example of moral rights under U.S. copyright law.

To Geller's proposed Artists' Rights Charter for AI I'd suggest adding the word and concept of "Responsibilities". Compelling arguments can be made for providing authors with some rights regarding use of their copyrighted works as AI training data. And, commensurately, persuasive arguments can be made that authors have certain responsibilities if they use AI at any stage of their creative processes. Authors can and ethically should be transparent about how they have used AI, if applicable, in the creation stages of their writing.

Of course, how to operationalize that as an ethical standard is another matter entirely. But just because it may be challenging to initially develop some ethical language as guidance for authors and strive to instill it as a broad standard doesn't mean it shouldn't be attempted or done.]


[Excerpt]

"The single biggest threat to the livelihood of authors and, by extension, to our culture, is not short attention spans. It is AI...

As a literary agent and CEO of one of the largest agencies in Europe, I think this is something everyone should care about – not because we fear progress, but because we want to protect it. If you take away the one thing that makes us truly human – our ability to think like humans, create stories and imagine new worlds – we will live in a diminished world.

AI that doesn’t replace the artist, or that will work with them transparently, is not all bad. An actor who is needed for reshoots on a movie may authorise use of the footage they have to complete a picture. This will save on costs, the environmental impact and time. A writer may wish to speed up their research and enhance their work by training their own models to ask the questions that a researcher would. The translation models available may enhance the range of offering of foreign books, adding to our culture.

All of this is worth discussing. But it has to be a discussion and be transparent to the end user. Up to now, work has simply been stolen and there are insufficient guardrails on the distributors, studios, publishers. As a literary agent, I have a more prosaic reason to get involved – I don’t think it is fair for someone’s work to be taken without their permission to create an inferior competitor.

What can we do? We could start with some basic principles for all to sign up to. An artists’ rights charter for AI that protects two basic principles: permission and attribution."

Sunday, September 7, 2025

Nashville church helps unhoused people after downtown library fire; NewsChannel5, September 6, 2025

"When the Nashville Public Library's downtown branch closed after a fire, McKendree United Methodist Church stepped up to fill a critical gap for people experiencing homelessness who had lost their daily refuge.

"Alright we'll get ya all bagged up here," said Francie Markham, who volunteers at the church every Thursday morning helping people experiencing homelessness...

After losing their cool refuge with computers and resources, Smith said many people just wanted to avoid the long stretch of summer heat.

"So what we were able to do on our Tuesdays and Thursday meal is to allow them to come in much earlier rather than at the 11:30 times so they would be out of the element," Smith said.

"With the changing of the season we need it open as soon as we can," Smith said.

In the meantime, Smith and Markham keep doing what's written on the walls — serving kindness.

Despite initial reports the library would open soon after the fire, library officials say the library requires a third party inspection before it can open. The two nearest library branches, North Branch and Hadley Park, are both more than a 30-minute walk from the library downtown. 

Have you witnessed acts of community kindness during challenging times? Share your story with Kim Rafferty and help us highlight the helpers making a difference in Middle Tennessee. Email kim.rafferty@NewsChannel5.com to continue the conversation.

In this article, we used artificial intelligence to help us convert a video news report originally written by Kim Rafferty. When using this tool, both Kim and the NewsChannel 5 editorial team verified all the facts in the article to make sure it is fair and accurate before we published it. We care about your trust in us and where you get your news, and using this tool allows us to convert our news coverage into different formats so we can quickly reach you where you like to consume information. It also lets our journalists spend more time looking into your story ideas, listening to you and digging into the stories that matter."

Friday, May 30, 2025

This Latest AI Book Debacle Is A Disturbing Part Of A Growing Trend; ScreenRant, May 29, 2025


"Yet another AI scandal has hit self-publishing, as an author left generative AI in a final draft of their book - but this isn't an isolated incident, and reveals a growing, and deeply problematic, trend."

Saturday, April 26, 2025

U.S. autism data project sparks uproar over ethics, privacy and intent; The Washington Post, April 25, 2025

The Washington Post; U.S. autism data project sparks uproar over ethics, privacy and intent

"The Trump administration has retreated from a controversial plan for a national registry of people with autism just days after announcing it as part of a new health initiative that would link personal medical records to information from pharmacies and smartwatches.

Jay Bhattacharya, director of the National Institutes of Health, unveiled the broad, data-driven initiative to a panel of experts Tuesday, saying it would include “national disease registries, including a new one for autism” that would accelerate research into the rapid rise in diagnoses of the condition.

The announcement sparked backlash in subsequent days over potential privacy violations, lack of consent and the risk of long-term misuse of sensitive data.

The Trump administration still will pursue large-scale data collection, but without the registry that drew the most intense criticism, the Department of Health and Human Services said."

Thursday, February 27, 2025

Dying in Darkness: Jeff Bezos Turns Out the Lights in the Washington Post’s Opinion Section; Politico, February 26, 2025

Michael Schaffer, Politico; Dying in Darkness: Jeff Bezos Turns Out the Lights in the Washington Post’s Opinion Section

"In personally announcing that he was dramatically re-orienting the editorial line, and in fact wouldn’t even run dissenting views, Bezos added another sharp example to a narrative that represents a grave threat to the Post’s image: The idea that its owner is messing around with the product in order to curry favor with his new pal Donald Trump, who has the power to withhold contracts from Amazon and other Bezos companies.

The paper’s image is not some abstract question for journalism-school professors. It’s a matter of dollars and cents. If readers don’t trust a publication’s name, no amount of Pulitzer-worthy scoops will fix it. For Bezos, a guy who believes that the Post needs to gain a broad-based audience, it’s a baffling blind spot...

Owners may get the final say at publications they own, but the wisest among them have let their newsrooms and editorial boards make their own decisions without fear or favor. That’s to prevent the very impression that Bezos is making — that of a mogul trying to disguise his own predilections as independent thought...

Yet even as leadership talked about amping up readership, the owner personally alienated real and potential readers: first by spiking the endorsement, then by showing up in the line of moguls at Trump’s inauguration and now by declaring that the publication would have one editorial line for all of its contributors. It all made his publication look wimpy, or possibly corrupt.

Instead of being an occasionally fussy repository of mostly mainstream points of view, the venerable publication’s opinion pages now risk looking like a vessel for a very rich owner to curry favor with the man who runs the government. It’s going to be hard to keep that image from sticking to the whole organization — including the non-wimpy, non-corrupt reporting corps that keep digging up scoops on the administration.

Bezos, of all people, should know this: He’s the branding whiz who came up with “Democracy Dies in Darkness.”

Among many journalists, Wednesday’s bombshell announcement is being debated as a matter of media ethics: Was Bezos within his rights as an owner to call the tune on opinion matters? Or was this type of process meddling a violation of norms that go back at least to the 1950s?...

“I am of America and for America, and proud to be so,” he added. “Our country did not get here by being typical. And a big part of America’s success has been freedom in the economic realm and everywhere else. Freedom is ethical — it minimizes coercion — and practical; it drives creativity, invention and prosperity.”

Sounds good late at night in the dorm room. But does said freedom include, say, the freedom to start a union at an Amazon warehouse? Or run a business without worrying that some monopolistic e-commerce behemoth is going to drive you under? Come to think of it, these sound like great subjects for energetic debate on a pluralistic op-ed page somewhere. Too bad Bezos, instead of embracing the great American history of arguing about freedom, announced that he’s not so keen on debate."