Showing posts with label AI technologies. Show all posts

Saturday, January 27, 2024

Artificial Intelligence Law - Intellectual Property Protection for your voice?; JDSupra, January 22, 2024

Steve Vondran, JDSupra; Artificial Intelligence Law - Intellectual Property Protection for your voice?

"With the advent of AI technology capable of replicating a person's voice and utilizing it for commercial purposes, several key legal issues are likely to emerge under California's right of publicity law. The right of publicity refers to an individual's right to control and profit from their own name, image, likeness, or voice.

Determining the extent of a person's control over their own voice will likely become a contentious legal matter given the rise of AI technology. In 2024, with a mere prompt and a push of a button, a creator can generate highly accurate voice replicas, potentially allowing companies to utilize a person's voice without their explicit permission: for example, using an AI-generated song in a video or podcast, or using it as a voice-over for a commercial project. This sounds like fun new technology, until you realize that in states like California, where a "right of publicity" law exists, a person's voice can be a protectable asset, and one can sue others who wrongfully misuse that voice for commercial advertising purposes.

This blog will discuss a few new legal issues I see arising in our wonderful new digital age being fueled by the massive onset of Generative AI technology (which really just means you input prompts into an AI tool and it will generate art, text, images, music, etc.)."

Saturday, December 10, 2022

Your selfies are helping AI learn. You did not consent to this.; The Washington Post, December 9, 2022

The Washington Post; Your selfies are helping AI learn. You did not consent to this.

"My colleague Tatum Hunter spent time evaluating Lensa, an app that transforms a handful of selfies you provide into artistic portraits. And people have been using the new chatbot ChatGPT to generate silly poems or professional emails that seem like they were written by a human. These AI technologies could be profoundly helpful but they also come with a bunch of thorny ethical issues.

Tatum reported that Lensa’s portrait wizardry comes from the styles of artists whose work was included in a giant database for coaching image-generating computers. The artists didn’t give their permission to do this, and they aren’t being paid. In other words, your fun portraits are built on work ripped off from artists. ChatGPT learned to mimic humans by analyzing your recipes, social media posts, product reviews and other text from everyone on the internet...

Hany Farid, a computer science professor at the University of California at Berkeley, told me that individuals, government officials, many technology executives, journalists and educators like him are far more attuned than they were a few years ago to the potential positive and negative consequences of emerging technologies like AI. The hard part, he said, is knowing what to do to effectively limit the harms and maximize the benefits."

Sunday, April 10, 2022

AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence; Emory University, Emory News Center, April 5, 2022

Emory University, Emory News Center; AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence

"As society increasingly relies on artificial intelligence (AI) technologies, how can ethically committed individuals and institutions articulate values to guide their development and respond to emerging problems? Join the Office of the Provost to explore the ethical implications of AI in a new AI.Humanity Ethics Lecture Series.

Over four weeks in April and May, world-renowned AI scholars will visit Emory to discuss the moral and social complexities of AI and how it may be shaped for the benefit of humanity. A reception will follow each lecture.

Matthias Scheutz: “Moral Robots? How to Make AI Agents Fit for Human Societies” 

Monday, April 11

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)


AI is different from other technologies in that it enables and creates machines that can perceive the world and act on it autonomously. We are on the verge of creating sentient machines that could significantly improve our lives and better human societies. Yet AI also poses dangers that are ours to mitigate. In this presentation, Scheutz will argue that AI-enabled systems — in particular, autonomous robots — must have moral competence: they need to be aware of human social and moral norms, be able to follow these norms and justify their decisions in ways that humans understand. Throughout the presentation, Scheutz will give examples from his work on AI robots and human-robot interaction to demonstrate a vision for ethical autonomous robots...

Seth Lazar: “The Nature and Justification of Algorithmic Power” 

Monday, April 18

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)


Algorithms increasingly mediate and govern our social relations. In doing so, they exercise a distinct kind of intermediary power: they exercise power over us; they shape power relations between us; and they shape our overarching social structures. Sometimes, when new forms of power emerge, our task is simply to eliminate them. However, algorithmic intermediaries can enable new kinds of human flourishing and could advance social structures that are otherwise resistant to progress. Our task, then, is to understand and diagnose algorithmic power and determine whether and how it can be justified. In this lecture, Lazar will propose a framework to guide our efforts, with particular attention to the conditions under which private algorithmic power either can, or must not, be tolerated.

Ifeoma Ajunwa: “The Unrealized Promise of Artificial Intelligence” 

Thursday, April 28

Lecture at 4 p.m., reception at 5:30 p.m.

Oxford Road Building — Presentation Room and Living Room/Patio


AI was forecast to revolutionize the world for the better. Yet this promise is still unrealized. Instead, there is a growing mountain of evidence that automated decision making is not revolutionary; rather, it has tended to replicate the status quo, including the biases embedded in our societal systems. The question, then, is what can be done? The answer is twofold: One part looks to what can be done to prevent the reality of automated decision making both enabling and obscuring human bias. The second looks toward proactive measures that could allow AI to work for the greater good...

Carissa Véliz: “On Privacy and Self-Presentation Online” 

Thursday, May 5

Lecture at 4 p.m. 

Online via Zoom 

A long tradition in philosophy and sociology considers self-presentation as the main reason why privacy is valuable, often equating control over self-presentation and privacy. Véliz argues that, even though control over self-presentation and privacy are tightly connected, they are not the same, and overvaluing self-presentation leads us to misunderstand the threat to privacy online. Véliz argues that to combat some of the negative trends we witness online, we need, on the one hand, to cultivate a culture of privacy, in contrast to a culture of exposure (for example, the pressure on social media to be on display at all times). On the other hand, we need to readjust how we understand self-presentation online."

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki, Rachel Starr, Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them. This is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., by age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates that align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."

Wednesday, September 25, 2019

‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?; The New York Times, September 20, 2019

The New York Times; ‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?

ImageNet Roulette, a digital art project and viral selfie app, exposes how biases have crept into the artificial-intelligence technologies changing our lives.

"But for Mr. Paglen, a larger issue looms. The fundamental truth is that A.I. learns from humans — and humans are biased creatures. “The way we classify images is a product of our worldview,” he said. “Any kind of classification system is always going to reflect the values of the person doing the classifying.”"