Showing posts with label AI tools.

Monday, October 20, 2025

The platform exposing exactly how much copyrighted art is used by AI tools; The Guardian, October 18, 2025

The Guardian; The platform exposing exactly how much copyrighted art is used by AI tools

"The US tech platform Vermillio tracks use of a client’s intellectual property online and claims it is possible to trace, approximately, the percentage to which an AI generated image has drawn on pre-existing copyrighted material."

Saturday, October 11, 2025

OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025

Futurism; OpenAI’s Sora Is in Serious Trouble

"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statementurging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.

Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.

As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.

“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.

Others accused OpenAI of abusing copyright to hype up its new app...

How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."

Saturday, October 4, 2025

I’m a Screenwriter. Is It All Right if I Use A.I.?; The Ethicist, The New York Times, October 4, 2025

The Ethicist, The New York Times; I’m a Screenwriter. Is It All Right if I Use A.I.?

"I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld"

Sunday, September 28, 2025

Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers; The Hastings Center for Bioethics, September 25, 2025

 The Hastings Center for Bioethics; Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers

"As artificial intelligence rapidly transforms healthcare, The Hastings Center for Bioethics has released an interactive tool to help policymakers, patients and providers understand the ways that AI is being used in medicine—from making a diagnosis to evaluating insurance claims—and navigate the ethical questions that emerge along the way.

The new tool, a Patient’s Journey with Medical AI, follows an imaginary patient through five interactions with medical AI. It guides users through critical decision points in diagnostics, treatment, and communication, offering personalized insights into how algorithms might influence their care. 

Each decision point in the Patient’s Journey includes a summary of the ethical issues raised and multiple choice questions intended to stimulate thinking and discussion about particular uses of AI in medicine. Policy experts from across the political spectrum were invited to review the tool for accuracy and utility.

The Patient’s Journey is the latest in a set of resources developed through Hastings on the Hill, a project that translates bioethics research for use by policymakers—with an initial focus on medical AI. “This isn’t just about what AI can do — it’s about what it should do,” said Hastings Center President Vardit Ravitsky, who directs Hastings on the Hill. “Patients deserve to understand how technologies affect their health decisions, and policymakers can benefit from expert guidance as they seek to ensure that AI serves the public good.”

The Greenwall Foundation is supporting this initiative. Additional support comes from The Donaghue Foundation and the National Institutes of Health’s Bridge2AI initiative.

In addition to using Hastings on the Hill resources, policymakers, industry leaders, and others who shape medical AI policy and practice are invited to contact The Hastings Center with questions related to ethical issues they are encountering. Hastings Center scholars and fellows can provide expert nonpartisan analysis on urgent bioethics issues, such as algorithmic bias, patient privacy, data governance, and informed consent.

“Ethics should not be an afterthought,” says Ravitsky. “Concerns about biased health algorithms and opaque clinical decision tools have underscored the need for ethical oversight alongside technical innovation.”

“The speed of AI development has outpaced the ethical guardrails we need,” said Erin Williams, President and CEO of EDW Wisdom, LLC — the consultancy working with The Hastings Center. “Our role is to bridge that gap — ensuring that human dignity, equity, and trust are not casualties of technological progress.”

Explore Patient’s Journey with Medical AI. Learn more about Hastings on the Hill."

Monday, September 22, 2025

Librarians Are Being Asked to Find AI-Hallucinated Books; 404 Media, September 18, 2025

CLAIRE WOODCOCK, 404 Media; Librarians Are Being Asked to Find AI-Hallucinated Books

"Reference librarian Eddie Kristan said lenders at the library where he works have been asking him to find books that don’t exist without realizing they were hallucinated by AI ever since the release of GPT-3.5 in late 2022. But the problem escalated over the summer after fielding patron requests for the same fake book titles from real authors—the consequences of an AI-generated summer reading list circulated in special editions of the Chicago Sun-Times and The Philadelphia Inquirer earlier this year. At the time, the freelancer told 404 Media he used AI to produce the list without fact checking outputs before syndication. 

“We had people coming into the library and asking for those authors,” Kristan told 404 Media. He’s receiving similar requests for other types of media that don’t exist because they’ve been hallucinated by other AI-powered features. “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy.” 

AI tools are changing the nature of how patrons treat librarians, both online and IRL. Alison Macrina, executive director of Library Freedom Project, told 404 Media that early results from a recent survey of emerging trends in how AI tools are impacting libraries indicate that patrons are growing more trusting of their preferred generative AI tool or product, and of the veracity of the outputs they receive. She said librarians report being treated like robots over library reference chat, and patrons getting defensive over the veracity of recommendations they’ve received from an AI-powered chatbot. Essentially, more people trust their preferred LLM over their human librarian."

Sunday, September 14, 2025

Preparing faith leaders to prepare others to use artificial intelligence in a faithful way; Presbyterian News Service, September 4, 2025

Mike Ferguson, Presbyterian News Service; Preparing faith leaders to prepare others to use artificial intelligence in a faithful way

"It turns out an engineer whose career included stops at Boeing and Amazon — and who happens to be a person of deep faith — has plenty to say about how faith leaders can use artificial intelligence in places of worship.

Jovonia Taylor-Hayes took to the lectern Wednesday during Faithful Futures: Guiding AI with Wisdom and Witness, which is being offered online and at Westminster Presbyterian Church in Minneapolis. The PC(USA)’s Office of Innovation is among the organizers and sponsors, which also include The Episcopal Church, the United Methodist Church and the Evangelical Lutheran Church in America.

Think of all the varied ways everyday people use AI, Taylor-Hayes said, including as an aid to streamline grocery shopping and resume building; by medical teams for note-taking; for virtual meetings and closed-captioning, which is getting better, she said; and in customer service.

“The question is, what does it look like when we stop and think about what AI means to me personally? Where does your head and heart go?” she asked. One place where hers goes is scripture, including Ephesians 2:10 and Psalm 139:14. “God has prepared us,” she said, “to do what we need to do.”

During the first of two breakout sessions, she asked small groups both in person and online to discuss questions including where AI shows up in their daily work and life and why they use AI as a tool."

Saturday, August 23, 2025

PittGPT debuts today as private AI source for University; University Times, August 21, 2025

MARTY LEVINE, University Times; PittGPT debuts today as private AI source for University

"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.

“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.

“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”

Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT has access to “the best LLMs on the market” — the large language models used to train AI.

Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.

“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”

“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”

On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”

“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”

Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.

Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”

PittGPT joins that list today, Helfrich said.

Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).

And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.

“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"

Friday, July 25, 2025

Virginia teachers learn AI tools and ethics at largest statewide workshop; WTVR, July 23, 2025


Wednesday, July 16, 2025

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

 , Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Saturday, June 28, 2025

Ethical guidance for AI in the professional practice of health service psychology; American Psychological Association, June 2025

American Psychological Association; Ethical guidance for AI in the professional practice of health service psychology

 "Artificial intelligence (AI) is developing rapidly and is increasingly being integrated into psychological practice. Many AI-driven tools are now available to assist with clinical decision-making, documentation, or patient engagement. These tools hold promises for improving access and efficiency, but they also raise ethical concerns that require careful consideration to safeguard patient well-being and trust.

APA’s Ethical Guidance for AI in the Professional Practice of Health Service Psychology (PDF, 126KB) was developed specifically for health service psychologists who want to ethically integrate AI into their practice. This document offers practical considerations and recommendations tailored to real-world clinical settings.

Whether you’re exploring new technologies or seeking guidance on tools already in use, this resource is designed to help you navigate the evolving landscape of AI while staying aligned with ethical responsibilities in psychological care."

Wednesday, June 18, 2025

AI copyright anxiety will hold back creativity; MIT Technology Review, June 17, 2025

MIT Technology Review; AI copyright anxiety will hold back creativity

"Who, exactly, owns the outputs of a generative model? The user who crafted the prompt? The developer who built the model? The artists whose works were ingested to train it? Will the social forces that shape artistic standing—critics, curators, tastemakers—still hold sway? Or will a new, AI-era hierarchy emerge? If every artist has always borrowed from others, is AI’s generative recombination really so different? And in such a litigious culture, how long can copyright law hold its current form? The US Copyright Office has begun to tackle the thorny issues of ownership and says that generative outputs can be copyrighted if they are sufficiently human-authored. But it is playing catch-up in a rapidly evolving field.

Different industries are responding in different ways...

I don’t consider this essay to be great art. But I should be transparent: I relied extensively on ChatGPT while drafting it...

Many people today remain uneasy about using these tools. They worry it’s cheating, or feel embarrassed to admit that they’ve sought such help...

I recognize the counterargument, notably put forward by Nicholas Thompson, CEO of The Atlantic: that content produced with AI assistance should not be eligible for copyright protection, because it blurs the boundaries of authorship. I understand the instinct. AI recombines vast corpora of preexisting work, and the results can feel derivative or machine-like.

But when I reflect on the history of creativity—van Gogh reworking Eisen, Dalí channeling Bruegel, Sheeran defending common musical DNA—I’m reminded that recombination has always been central to creation. The economist Joseph Schumpeter famously wrote that innovation is less about invention than “the novel reassembly of existing ideas.” If we tried to trace and assign ownership to every prior influence, we’d grind creativity to a halt." 

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there’s no difference between AI and, say, lock-picking tools. Lock-picking tools can open your door if you lock yourself out, and they can also open others’. That’s a crime, right? So it depends on how AI is used. From that perspective, there’s not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is that AI is so complex, it’s beyond the knowledge of many of us how it works. Sometimes it looks ethical, but maybe what’s behind it is amplifying bias by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important one is education: if you know what AI is about, how it works, and what AI can do and what it cannot. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just any software system. So education is very, very important to help us demystify AI accordingly, so we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don’t know what’s inside, at least we have some assessment tools to know whether there’s a risk or not in certain circumstances, whether it can generate a harmful result or not, very much like the stress testing of the financial system after 2008.”"

Friday, October 25, 2024

Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools; The New York Times, October 24, 2024

The New York Times; Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools

"President Biden on Thursday signed the first national security memorandum detailing how the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting “guardrails” on how such tools are employed in decisions varying from nuclear weapons to granting asylum.

The new document is the latest in a series Mr. Biden has issued grappling with the challenges of using A.I. tools to speed up government operations — whether detecting cyberattacks or predicting extreme weather — while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them...

The new guardrails would also prohibit letting artificial intelligence tools make a decision on granting asylum. And they would forbid tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without a human weighing in.

Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected from spying or theft by foreign adversaries, much as early nuclear weapons were. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to safeguard their inventions."

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

 Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between the various AI tools available today. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Friday, September 6, 2024

AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED; Pace University, September 3, 2024

 Johnni Medina, Pace University; AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED

"As a scholar deeply immersed in both technology and philosophy, James Brusseau, PhD, has spent years unraveling the complex ethics of artificial intelligence (AI).

“As it happens, I was a physics major in college, so I've had an abiding interest in technology, but I finally decided to study philosophy,” Brusseau explains. “And I did not see much of an intersection between the scientific and my interest in philosophy until all of a sudden artificial intelligence landed in our midst with questions that are very philosophical.”

Some of these questions are heavy, with Brusseau positing an example, “If a machine acts just like a person, does it become a person?” But AI’s implications extend far beyond the theoretical, especially when it comes to the impact on education, learning, and career outcomes. What role does AI play in higher education? Is it a tool that enhances learning, or does it risk undermining it? And how do universities prepare students for an AI-driven world?

In a conversation that spans these topics, Brusseau shares his insights on the place of AI in higher education, its benefits, its risks, and what the future holds...

I think that if AI alone is the professor, then the knowledge students get will be imperfect in the same vaguely definable way that AI art is imperfect."

Saturday, August 31, 2024

More Art School Classes Are Teaching AI This Fall Despite Ethical Concerns and Ongoing Lawsuits; ARTnews, August 30, 2024

KAREN K. HO, ARTnews; More Art School Classes Are Teaching AI This Fall Despite Ethical Concerns and Ongoing Lawsuits

"When undergraduate students return to the Ringling College of Art and Design this fall, one of the school’s newest offerings will be an AI certificate

Ringling is just the latest of several top art schools to offer undergraduate students courses that focus on or integrate artificial intelligence tools and techniques.

ARTnews spoke to experts and faculty at Ringling, Rhode Island School of Design (RISD), Carnegie Mellon University (CMU), and Florida State University about how they construct curriculum; how they teach AI in consideration of its limitations and concerns about ethics and legal issues; as well as why they think it’s important for artists to learn."

Thursday, August 29, 2024

The Ethics of Developing Voice Biometrics; The New York Academy of Sciences, August 29, 2024

Nitin Verma, PhD, The New York Academy of Sciences; The Ethics of Developing Voice Biometrics

"Juana Catalina Becerra Sandoval, a PhD candidate in the Department of the History of Science at Harvard University and a research scientist in the Responsible and Inclusive Technologies initiative at IBM Research, presented as part of The New York Academy of Sciences’ (the Academy) Artificial Intelligence (AI) & Society Seminar series. The lecture – titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies” – explored the ethical implications associated with the development and use of AI-based tools such as voice biometrics. After the presentation, Juana sat down with Nitin Verma, PhD, a member of the Academy’s 2023 cohort of the AI & Society Fellowship, to further discuss the promises and challenges society faces as AI continues to evolve."

Monday, August 19, 2024

Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city; The Washington Post, August 19, 2024

Jenna Sampson, The Washington Post; Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city

"Miller made this pitch at a county library in Wyoming’s capital on a recent summer Friday, with a few friends and family filling otherwise empty rows of chairs. Before the sparse audience, he vowed to run the city of Cheyenne exclusively with an AI bot he calls “VIC” for “Virtual Integrated Citizen.”

AI experts say the pledge is a first for U.S. campaigns and marks a new front in the rapid emergence of the technology. Its implications have stoked alarm among officials and even tech companies...

The day before, Miller had scrambled to get VIC working after OpenAI, the technology company behind generative-AI tools like ChatGPT, shut down his account, citing policies against using its products for campaigning. Miller quickly made a second ChatGPT bot, allowing him to hold the meet-and-greet almost exactly as planned.

It was just the latest example of Miller’s skirting efforts against his campaign by the company that makes the AI technology and the regulatory authorities that oversee elections...

“While OpenAI may have certain policies against using its model for campaigning, other companies do not, so it makes shutting down the campaign nearly impossible.”"

Friday, July 26, 2024

Students Weigh Ethics of Using AI for College Applications; Education Week via GovTech, July 24, 2024

Alyson Klein, Education Week via GovTech; Students Weigh Ethics of Using AI for College Applications

"About a third of high school seniors who applied to college in the 2023-24 school year acknowledged using an AI tool for help in writing admissions essays, according to research released this month by foundry10, an organization focused on improving learning.

About half of those students — or roughly one in six students overall — used AI the way Makena did, to brainstorm essay topics or polish their spelling and grammar. And about 6 percent of students overall — including some of Makena's classmates, she said — relied on AI to write the final drafts of their essays instead of doing most of the writing themselves.

Meanwhile, nearly a quarter of students admitted to Harvard University's class of 2027 paid a private admissions consultant for help with their applications.

The use of outside help, in other words, is rampant in college admissions, opening up a host of questions about ethics, norms, and equal opportunity.

Top among them: Which — if any — of these students cheated in the admissions process?

For now, the answer is murky."