Showing posts with label transparency. Show all posts

Thursday, October 31, 2024

A new study seeks to establish ethical collecting practices for US museums; The Art Newspaper, October 29, 2024

Annabel Keenan, The Art Newspaper; A new study seeks to establish ethical collecting practices for US museums

"As calls for the restitution of looted objects spread across the industry, the Penn Cultural Heritage Center (PennCHC) at the Penn Museum in Philadelphia is launching a study that will examine collecting policies and practices at US museums and encourage transparency and accountability in the sector. Launching today (29 October), the “Museums: Missions and Acquisitions Project” (dubbed M2A Project for short) will study over 450 museum collections to identify current standards and establish a framework for institutions to model their future practices...

The PennCHC has been supporting ethical collecting since its founding in 2008, including working closely with local communities in countries around the world to identify and preserve their cultural heritage. “US museums have historically acquired objects that were removed from these countries illegally or through pathways now considered inequitable,” says Richard M. Leventhal, the executive director of the PennCHC and co-principal investigator for the M2A Project. “The M2A Project is asking a very simple set of questions about these types of objects: Are US museums still acquiring them? And if so, why? Recent seizures of looted property and calls to decolonise collections force us to reconsider whether acquisitions best serve the missions of museums and the interests of their communities.”

The M2A Project evolved from the PennCHC’s Cultural Property Experts on Call Program that launched in 2020 in partnership with the US Department of State’s Cultural Heritage Coordinating Committee to protect at-risk cultural property against theft, looting and trafficking. Through this programme, the PennCHC collaborated with more than 100 museums and universities to study and document the trade in illicit artefacts."

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT’s Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock-picking tools. Picking tools can open your door if you lock yourself out, and they can also open other people's doors. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special about AI ethics compared with, say, computer security ethics, or the ethics of how to use a gun, for example. But what is different is that AI is so complex that how it works is beyond the knowledge of many of us. Sometimes it looks ethical, but maybe what's behind it is amplifying bias through the use of AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important thing is education: knowing what AI is about, how it works, and what AI can and cannot do. For now we have the fear that AI is so powerful it can do anything, but actually, many of the things people believe only AI can do could have been done in the past by just any software system. So education is very, very important to help us demystify AI, so we can talk about AI ethics. I want to emphasize transparency: if AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don't know what's inside, at least we have some assessment tools to know whether there's a risk in certain circumstances and whether it can generate a harmful result or not. It's very much like the stress testing applied to the financial system after 2008.”

Sunday, October 27, 2024

Book Bans Live on in School District Now Run by Democrats; The New York Times, October 27, 2024

The New York Times; Book Bans Live on in School District Now Run by Democrats

"What is clear is that many Pennridge parents are exhausted with the political battles that inflamed communities nationwide during the Covid-19 pandemic. They have reached a new political equilibrium, where some changes have become part of the firmament of public education, especially the expectation that parents will have visibility into all that their children are learning and reading at school — and some measure of a veto...

Aubrie Schulz, 16, a junior at Pennridge High School, said she had been frustrated by the limited offerings in the high school library. But as adults argued over gender, sex and race, she noted that what occurred in the library or classroom had only a narrow effect on students.

“We can get all the information on our phones,” she said."

Tuesday, October 22, 2024

On X, the Definition of ‘Blocking’ Is About to Change; The New York Times, October 21, 2024

The New York Times; On X, the Definition of ‘Blocking’ Is About to Change

"A lot has changed on the social media platform formerly known as Twitter since Elon Musk bought it two years ago. The company, renamed X, is on the verge of yet another major shift, with changes coming for what happens when one user blocks another.

The block function, a powerful tool which makes your account effectively invisible to anyone of your choosing, will soon let those people see what you are posting. The difference, according to a thread posted by X’s engineering account, is that blocked users will not be able to engage with the post in any way...

The overall sentiment from users, however, is that the impending change to the block feature will allow for more abuse."
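
In code terms, the change splits what was one switch into two separate permissions: visibility and engagement. Here is a minimal Python sketch of that before-and-after logic; all names are hypothetical, and it makes no claim to reflect X's actual implementation:

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        handle: str
        blocked: set = field(default_factory=set)  # handles this account has blocked

    def can_view(author: Account, viewer: Account, new_policy: bool = True) -> bool:
        """Can `viewer` see `author`'s public posts?"""
        if new_policy:
            return True  # new behavior: blocking no longer hides public posts
        return viewer.handle not in author.blocked  # old behavior: block hid posts

    def can_engage(author: Account, viewer: Account) -> bool:
        """Can `viewer` reply to, like or repost `author`'s posts?"""
        return viewer.handle not in author.blocked  # engagement stays off either way

    alice = Account("alice", blocked={"bob"})
    bob = Account("bob")
    print(can_view(alice, bob))    # True: bob can now see alice's posts
    print(can_engage(alice, bob))  # False: but he still cannot interact with them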

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”"

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

 Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."

Thursday, July 18, 2024

Ethical AI: Tepper School Course Explores Responsible Business; Carnegie Mellon University Tepper School of Business, July 1, 2024

Carnegie Mellon University Tepper School of Business; Ethical AI: Tepper School Course Explores Responsible Business

"As artificial intelligence (AI) becomes more widely used, there is growing interest in the ethics of AI. A new article by Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University's Tepper School of Business, detailed a graduate course he developed titled "Ethics and AI." The course bridges AI-specific challenges with long-standing ethical discussions in business."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Monday, June 17, 2024

Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms; The New York Times, June 17, 2024

 Vivek H. Murthy, The New York Times; Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms

"It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe. Evidence from tobacco studies show that warning labels can increase awareness and change behavior. When asked if a warning from the surgeon general would prompt them to limit or monitor their children’s social media use, 76 percent of people in one recent survey of Latino parents said yes...

It’s no wonder that when it comes to managing social media for their kids, so many parents are feeling stress and anxiety — and even shame.

It doesn’t have to be this way. Faced with high levels of car-accident-related deaths in the mid- to late 20th century, lawmakers successfully demanded seatbelts, airbags, crash testing and a host of other measures that ultimately made cars safer. This January the F.A.A. grounded about 170 planes when a door plug came off one Boeing 737 Max 9 while the plane was in the air. And the following month, a massive recall of dairy products was conducted because of a listeria contamination that claimed two lives.

Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food? These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability."

Wednesday, June 12, 2024

Why G7 leaders are turning to a special guest — Pope Francis — for advice on AI; NPR, June 12, 2024

NPR; Why G7 leaders are turning to a special guest — Pope Francis — for advice on AI

"Pope Francis himself has been at the receiving end of AI misinformation. Last year, a picture of the pope wearing a large white puffer coat went viral. The image was generated by AI, and it prompted conversations on deepfakes and the spread of disinformation through AI technology.

In his annual message on New Year's Day this year, the pope focused on how AI can be used for peace.

His work on the issue goes back several years, when the Vatican and tech companies like Microsoft started working together to create a set of principles known as the Rome Call for AI Ethics, published in 2020. Companies and governments that sign on to the call have agreed to voluntary commitments aimed at promoting transparency and accountability in AI development."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

 James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force comprised four workgroups: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools: guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy, and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

Chancellor's Fellow, Deanery of Molecular, Genetic and Population Health Sciences, Usher Institute Centre for Biomedicine, Self and Society, The University of Edinburgh; Professor of Sociology, University of Manchester; Lecturer in Nursing, University of Manchester, The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harrassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy, safety and marginalisation."

Wednesday, May 15, 2024

The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights; The National Law Review, May 13, 2024

  Danner Kline of Bradley Arant Boult Cummings LLP, The National Law Review; The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights

"As generative AI systems become increasingly sophisticated and widespread, concerns around the use of copyrighted works in their training data continue to intensify. The proposed Generative AI Copyright Disclosure Act of 2024 attempts to address this unease by introducing new transparency requirements for AI developers.

The Bill’s Purpose and Requirements

The primary goal of the bill is to ensure that copyright owners have visibility into whether their intellectual property is being used to train generative AI models. If enacted, the law would require companies to submit notices to the U.S. Copyright Office detailing the copyrighted works used in their AI training datasets. These notices would need to be filed within 30 days before or after the public release of a generative AI system.

The Copyright Office would then maintain a public database of these notices, allowing creators to search and see if their works have been included. The hope is that this transparency will help copyright holders make more informed decisions about licensing their IP and seeking compensation where appropriate."
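
To make the 30-day window concrete, here is a short Python sketch of the timing rule as described above; the notice fields and function names are our own illustration, not language from the bill:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TrainingDataNotice:
        developer: str
        model_name: str
        works: list      # copyrighted works identified in the training set
        filed_on: date

    def filed_within_window(notice: TrainingDataNotice, release: date) -> bool:
        # The bill would require filing within 30 days before or after
        # the public release of the generative AI system.
        return abs((notice.filed_on - release).days) <= 30

    notice = TrainingDataNotice(
        developer="ExampleAI",          # hypothetical developer
        model_name="example-model-v1",  # hypothetical system
        works=["Work A", "Work B"],
        filed_on=date(2024, 7, 11),
    )
    print(filed_within_window(notice, release=date(2024, 7, 1)))  # True: filed 10 days after release
    print(filed_within_window(notice, release=date(2024, 5, 1)))  # False: filed 71 days after release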

Tuesday, April 9, 2024

Supreme Court Justices Apply New Ethics Code Differently; Newsweek, April 9, 2024

Newsweek; Supreme Court Justices Apply New Ethics Code Differently

"Supreme Court justices are divided along political lines over whether or not to explain their recusals, and legal experts are very concerned."

Thursday, March 28, 2024

Your newsroom needs an AI ethics policy. Start here.; Poynter, March 25, 2024

Poynter; Your newsroom needs an AI ethics policy. Start here.

"Every single newsroom needs to adopt an ethics policy to guide the use of generative artificial intelligence. Why? Because the only way to create ethical standards in an unlicensed profession is to do it shop by shop.

Until we create those standards — even though it’s early in the game — we are holding back innovation.

So here’s a starter kit, created by Poynter’s Alex Mahadevan, Tony Elkins and me. It’s a statement of journalism values that roots AI experimentation in the principles of accuracy, transparency and audience trust, followed by a set of specific guidelines.

Think of it like a meal prep kit. Most of the work is done, but you still have to roll up your sleeves and do a bit of labor. This policy includes blank spaces, where newsroom leaders will have to add details, saying “yes” or “no” to very specific activities, like using AI-generated illustrations.

In order to effectively use this AI ethics policy, newsrooms will need to create an AI committee and designate an editor or senior journalist to lead the ongoing effort. This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised."
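
The “blank spaces” translate naturally into a checklist: a template in which each specific activity must get an explicit yes or no before the policy counts as complete. A brief Python sketch of that idea, with activity names that are illustrative rather than Poynter's actual list:

    # Hypothetical starter-kit template: None marks a blank space the
    # newsroom's AI committee still has to fill in with "yes" or "no".
    policy = {
        "ai_generated_illustrations": None,
        "ai_drafted_headlines": None,
        "ai_assisted_transcription": "yes",
    }

    def blanks_remaining(policy: dict) -> list:
        """Return the activities a newsroom leader has not yet decided."""
        return [activity for activity, decision in policy.items() if decision is None]

    print(blanks_remaining(policy))
    # ['ai_generated_illustrations', 'ai_drafted_headlines']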

Sunday, February 18, 2024

IT body proposes that AI pros get leashed and licensed to uphold ethics; The Register, February 15, 2024

Paul Kunert, The Register; IT body proposes that AI pros get leashed and licensed to uphold ethics

"Creating a register of licensed AI professionals to uphold ethical standards and securing whistleblowing channels to call out bad management are two policies that could prevent a Post Office-style scandal.

So says industry body BCS – formerly the British Computer Society – which reckons licenses based on an independent framework of ethics would promote transparency among software engineers and their bosses.

"We have a register of doctors who can be struck off," said Rashik Parmar MBE, CEO at BCS. "AI professionals already have a big role in our life chances, so why shouldn't they be licensed and registered too?"...

The importance of AI ethics was amplified by the Post Office scandal, says the BCS boss, "where computer generated evidence was used by non-IT specialists to prosecute sub postmasters with tragic results."

For anyone not aware of the outrageous wrongdoing committed by the Post Office, it bought the bug-ridden Horizon accounting system in 1999 from ICL, a company that was subsequently bought by Fujitsu. Hundreds of local Post Office branch managers were subsequently wrongfully convicted of fraud when Horizon was to blame."

Wednesday, January 10, 2024

Addressing equity and ethics in artificial intelligence; American Psychological Association, January 8, 2024

 Zara Abrams, American Psychological Association; Addressing equity and ethics in artificial intelligence

"As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who is part of a session at the upcoming 2024 Consumer Electronics Show (CES) on Harnessing the Power of AI Ethically.

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risk Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who is also speaking at CES on harnessing AI ethically. “We can’t just be dismissive and say: ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”"

Thursday, December 14, 2023

Big Tech funds the very people who are supposed to hold it accountable; The Washington Post, December 7, 2023

The Washington Post; Big Tech funds the very people who are supposed to hold it accountable

"“Big Tech has played this game really successfully in the past decade,” said Lawrence Lessig, a Harvard Law School professor who previously founded Stanford’s Center for Internet and Society without raising money outside the university. “The number of academics who have been paid by Facebook alone is extraordinary.”

Most tech-focused academics say their work is not influenced by the companies, and the journals that publish their studies have ethics rules designed to ward off egregious interference. But in interviews, two dozen professors said that by controlling funding and access to data, tech companies wield “soft power,” slowing down research, sparking tension between academics and their institutions, and shifting the fields’ targets in small — but potentially transformative — ways...

Harvard’s Lessig, who spent years heading a center on ethics issues in society at the university, is developing a system for academics to verify that their research is truly independent. He hopes to present the initiative, the Academic Integrity Project, to the American Academy of Arts and Sciences.

He is still looking for funding."