Showing posts with label AI technologies.

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

 Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Monday, July 29, 2024

Lawyers using AI must heed ethics rules, ABA says in first formal guidance; Reuters, July 29, 2024

Reuters; Lawyers using AI must heed ethics rules, ABA says in first formal guidance

"Lawyers must guard against ethical lapses if they use generative artificial intelligence in their work, the American Bar Association said on Monday.

In its first formal ethics opinion on generative AI, an ABA committee said lawyers using the technology must "fully consider" their ethical obligations to protect clients, including duties related to lawyer competence, confidentiality of client data, communication and fees...

Monday's opinion from the ABA's ethics and professional responsibility committee said AI tools can help lawyers increase efficiency but can also carry risks such as generating inaccurate output. Lawyers also must try to prevent inadvertent disclosure or access to client information, and should consider whether they need to tell a client about their use of generative AI technologies, it said."

Monday, July 1, 2024

Vatican conference ponders who really holds the power of AI; Religion News Service, June 27, 2024

Claire Giangravé, Religion News Service; Vatican conference ponders who really holds the power of AI

"The vice director general of Italy’s Agency for National Cybersecurity, Nunzia Ciardi, also warned at the conference of the influence held by leading AI developers.

“Artificial intelligence is made up of massive economic investments that only large superpowers can afford and through which they ensure a very important geopolitical dominance and access to the large amount of data that AI must process to produce outputs,” Ciardi said.

“You could say that we are colonized by AI, which is managed by select companies that brutally rake through our data,” she added.

Participants agreed that international organizations must enforce stronger regulations for the use and advancement of AI technologies.

“We need guardrails, because what is coming is a radical transformation that will change real and digital relations and require not only reflection but also regulation,” [Paolo] Benanti said.

The “Rome Call for AI Ethics,” a document signed by IBM, Microsoft, Cisco and U.N. Food and Agriculture Organization representatives, was promoted by the Vatican’s Academy for Life and lays out guidelines for promoting ethics, transparency and inclusivity in AI.

Other religious communities have also joined the “Rome Call,” including the Anglican Church and Jewish and Muslim representatives. On July 9, representatives from Eastern religions will gather for a Vatican-sponsored event to sign the “Rome Call” in Hiroshima, Japan. The location was chosen to underscore the dangerous consequences of technology left unchecked."

Wednesday, May 22, 2024

Are Ethics Taking a Backseat in AI Jobs?; Statista, May 22, 2024

 Anna Fleck, Statista; Are Ethics Taking a Backseat in AI Jobs?

"Data published jointly by the OECD and market analytics platform Lightcast has found that few AI employers are asking for creators and developers of AI to have ethical decision making AI skills. The two research teams looked for keywords such as “AI ethics”, “responsible AI” and “ethical AI” in job postings for AI workers across 14 OECD countries, in both English and the official languages spoken in the 14 countries studied. According to Lightcast, out of these, an average of less than two percent of AI job postings listed these skills. However, between 2019 and 2022 the share of job postings mentioning ethics-related keywords increased in the majority of surveyed countries. For example, the figure rose from 0.1 percent to 0.5 percent in the United States between the four years and from 0.1 percent to 0.4 percent in the United Kingdom.

According to Lightcast writer Layla O’Kane, federal agencies in the U.S. are, however, now being encouraged to hire Chief AI Officers to monitor the use of AI technologies, following an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. O’Kane writes: “While there are currently a very small number of postings for Chief AI Officer jobs across public and private sector, the skills they call for are encouraging: almost all contain at least one mention of ethical considerations in AI.”"
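
The approach described above is essentially keyword matching over job-posting text. Below is a minimal Python sketch of that idea; the keyword list, sample postings, and function names are illustrative assumptions, not the actual OECD/Lightcast methodology or data.

```python
# Minimal sketch of the keyword-matching approach described above.
# The keywords and sample postings are illustrative assumptions.

ETHICS_KEYWORDS = ("ai ethics", "responsible ai", "ethical ai")

def mentions_ethics(posting: str) -> bool:
    """True if the posting text contains any ethics-related keyword."""
    text = posting.lower()
    return any(keyword in text for keyword in ETHICS_KEYWORDS)

def ethics_share(postings: list[str]) -> float:
    """Fraction of postings mentioning at least one ethics keyword."""
    if not postings:
        return 0.0
    return sum(mentions_ethics(p) for p in postings) / len(postings)

# Hypothetical example: one of three postings mentions an ethics keyword.
sample = [
    "Machine learning engineer to deploy recommendation models",
    "AI developer focused on responsible AI and model audits",
    "Data scientist for forecasting and dashboards",
]
print(f"{ethics_share(sample):.1%}")  # 33.3%
```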

Saturday, January 27, 2024

Artificial Intelligence Law - Intellectual Property Protection for your voice?; JDSupra, January 22, 2024

Steve Vondran, JDSupra; Artificial Intelligence Law - Intellectual Property Protection for your voice?

"With the advent of AI technology capable of replicating a person's voice and utilizing it for commercial purposes, several key legal issues are likely to emerge under California's right of publicity law. The right of publicity refers to an individual's right to control and profit from their own name, image, likeness, or voice.

Determining the extent of a person's control over their own voice will likely become a contentious legal matter given the rise of AI technology. In 2024, with a mere prompt and a push of a button, a creator can generate highly accurate voice replicas, potentially allowing companies to utilize a person's voice without their explicit permission, for example by using an AI-generated song in a video or podcast, or as a voice-over for a commercial project. This sounds like fun new technology, until you realize that in states like California, where a right of publicity law exists, a person's voice can be a protectable asset, and one can sue those who wrongfully misuse it for commercial advertising purposes.

This blog will discuss a few new legal issues I see arising in our wonderful new digital age being fueled by the massive onset of generative AI technology (which really just means you input prompts into an AI tool and it generates art, text, images, music, etc.)."

Saturday, December 10, 2022

Your selfies are helping AI learn. You did not consent to this.; The Washington Post, December 9, 2022

The Washington Post; Your selfies are helping AI learn. You did not consent to this.

"My colleague Tatum Hunter spent time evaluating Lensa, an app that transforms a handful of selfies you provide into artistic portraits. And people have been using the new chatbot ChatGPT to generate silly poems or professional emails that seem like they were written by a human. These AI technologies could be profoundly helpful but they also come with a bunch of thorny ethical issues.

Tatum reported that Lensa’s portrait wizardry comes from the styles of artists whose work was included in a giant database for coaching image-generating computers. The artists didn’t give their permission for this, and they aren’t being paid. In other words, your fun portraits are built on work ripped off from artists. ChatGPT learned to mimic humans by analyzing your recipes, social media posts, product reviews and other text from everyone on the internet...

Hany Farid, a computer science professor at the University of California at Berkeley, told me that individuals, government officials, many technology executives, journalists and educators like him are far more attuned than they were a few years ago to the potential positive and negative consequences of emerging technologies like AI. The hard part, he said, is knowing what to do to effectively limit the harms and maximize the benefits."

Sunday, April 10, 2022

AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence; Emory University, Emory News Center, April 5, 2022

Emory University, Emory News Center; AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence

"As society increasingly relies on artificial intelligence (AI) technologies, how can ethically committed individuals and institutions articulate values to guide their development and respond to emerging problems? Join the Office of the Provost to explore the ethical implications of AI in a new AI.Humanity Ethics Lecture Series.

Over four weeks in April and May, world-renowned AI scholars will visit Emory to discuss the moral and social complexities of AI and how it may be shaped for the benefit of humanity. A reception will follow each lecture.

Matthias Scheutz: “Moral Robots? How to Make AI Agents Fit for Human Societies” 

Monday, April 11

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)

AI is different from other technologies in that it enables and creates machines that can perceive the world and act on it autonomously. We are on the verge of creating sentient machines that could significantly improve our lives and better human societies. Yet AI also poses dangers that are ours to mitigate. In this presentation, Scheutz will argue that AI-enabled systems — in particular, autonomous robots — must have moral competence: they need to be aware of human social and moral norms, be able to follow these norms and justify their decisions in ways that humans understand. Throughout the presentation, Scheutz will give examples from his work on AI robots and human-robot interaction to demonstrate a vision for ethical autonomous robots...

Seth Lazar: “The Nature and Justification of Algorithmic Power” 

Monday, April 18

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)

Algorithms increasingly mediate and govern our social relations. In doing so, they exercise a distinct kind of intermediary power: they exercise power over us; they shape power relations between us; and they shape our overarching social structures. Sometimes, when new forms of power emerge, our task is simply to eliminate them. However, algorithmic intermediaries can enable new kinds of human flourishing and could advance social structures that are otherwise resistant to progress. Our task, then, is to understand and diagnose algorithmic power and determine whether and how it can be justified. In this lecture, Lazar will propose a framework to guide our efforts, with particular attention to the conditions under which private algorithmic power either can, or must not, be tolerated.

Ifeoma Ajunwa: “The Unrealized Promise of Artificial Intelligence” 

Thursday, April 28

Lecture at 4 p.m., reception at 5:30 p.m.

Oxford Road Building — Presentation Room and Living Room/Patio

AI was forecast to revolutionize the world for the better. Yet this promise is still unrealized. Instead, there is a growing mountain of evidence that automated decision making is not revolutionary; rather, it has tended to replicate the status quo, including the biases embedded in our societal systems. The question, then, is what can be done? The answer is twofold: one part looks to what can be done to prevent automated decision making from both enabling and obscuring human bias. The second looks toward proactive measures that could allow AI to work for the greater good...

Carissa Véliz: “On Privacy and Self-Presentation Online” 

Thursday, May 5

Lecture at 4 p.m. 

Online via Zoom 

A long tradition in philosophy and sociology considers self-presentation as the main reason why privacy is valuable, often equating control over self-presentation and privacy. Véliz argues that, even though control over self-presentation and privacy are tightly connected, they are not the same — and overvaluing self-presentation leads us to misunderstand the threat to privacy online. Véliz argues that to combat some of the negative trends we witness online, we need, on the one hand, to cultivate a culture of privacy, in contrast to a culture of exposure (for example, the pressure on social media to be on display at all times). On the other hand, we need to readjust how we understand self-presentation online."

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki, Rachel Starr, et al., Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them; this is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates that align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."
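
The AVI pipeline the authors describe (visual, verbal, and vocal features extracted from a recorded video, then condensed into a report for the employer) might look roughly like the following sketch. Every name here is hypothetical; real vendor systems are proprietary, and the extraction step is exactly where the reliability and bias problems discussed above can enter.

```python
# Simplified, hypothetical sketch of the AVI pipeline described above.
# Feature names and dummy values are assumptions, not any vendor's API.

from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    smile_rate: float      # visual signal, e.g. fraction of frames with a smile
    keyword_hits: int      # verbal signal, e.g. role-relevant key words used
    pitch_variance: float  # vocal signal, e.g. variability in tone of voice

def extract_features(video_path: str) -> InterviewFeatures:
    """Stand-in for the platform's visual/verbal/vocal models.

    Returns fixed dummy values; a real system would run face, speech,
    and prosody models over the recorded video, and this is the step
    where measurement error and bias can creep in.
    """
    return InterviewFeatures(smile_rate=0.42, keyword_hits=7, pitch_variance=0.8)

def build_report(features: InterviewFeatures) -> dict:
    """Condense raw features into the report passed to the employer."""
    return {
        "engagement": round(features.smile_rate, 2),
        "keyword_relevance": features.keyword_hits,
        "vocal_range": round(features.pitch_variance, 2),
    }

report = build_report(extract_features("candidate_123.mp4"))
print(report)  # {'engagement': 0.42, 'keyword_relevance': 7, 'vocal_range': 0.8}
```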

Wednesday, September 25, 2019

‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?; The New York Times, September 20, 2019

The New York Times; ‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?

ImageNet Roulette, a digital art project and viral selfie app, exposes how biases have crept into the artificial-intelligence technologies changing our lives.

"But for Mr. Paglen, a larger issue looms. The fundamental truth is that A.I. learns from humans — and humans are biased creatures. “The way we classify images is a product of our worldview,” he said. “Any kind of classification system is always going to reflect the values of the person doing the classifying.”"