Showing posts with label AI tools.

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT’s Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock picking tools. Lock picking tools can open your door if you lock yourself out, and they can also open others' doors. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is that AI is so complex, it's beyond the knowledge of many of us how it works. Sometimes it looks ethical, but maybe what's behind it is amplifying bias through the use of AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important thing is education: knowing what AI is about, how it works, and what AI can and cannot do. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just any software system. So education is very, very important to help us demystify AI, so we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don't know what's inside, at least we have some assessment tools to know whether there's a risk or not in certain circumstances, whether it can generate a harmful result or not, very much like the stress testing of the financial system after 2008.”

Friday, October 25, 2024

Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools; The New York Times, October 24, 2024

The New York Times; Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools

"President Biden on Thursday signed the first national security memorandum detailing how the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting “guardrails” on how such tools are employed in decisions varying from nuclear weapons to granting asylum.

The new document is the latest in a series Mr. Biden has issued grappling with the challenges of using A.I. tools to speed up government operations — whether detecting cyberattacks or predicting extreme weather — while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them...

The new guardrails would also prohibit letting artificial intelligence tools make a decision on granting asylum. And they would forbid tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without a human weighing in.

Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected from spying or theft by foreign adversaries, much as early nuclear weapons were. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to safeguard their inventions."

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

 Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practice, tracing the history of GenAI applications in law and distinguishing between the various AI tools available today. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Friday, September 6, 2024

AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED; Pace University, September 3, 2024

 Johnni Medina, Pace University; AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED

"As a scholar deeply immersed in both technology and philosophy, James Brusseau, PhD, has spent years unraveling the complex ethics of artificial intelligence (AI).

“As it happens, I was a physics major in college, so I've had an abiding interest in technology, but I finally decided to study philosophy,” Brusseau explains. “And I did not see much of an intersection between the scientific and my interest in philosophy until all of a sudden artificial intelligence landed in our midst with questions that are very philosophical.”

Some of these questions are heavy, with Brusseau positing an example, “If a machine acts just like a person, does it become a person?” But AI’s implications extend far beyond the theoretical, especially when it comes to the impact on education, learning, and career outcomes. What role does AI play in higher education? Is it a tool that enhances learning, or does it risk undermining it? And how do universities prepare students for an AI-driven world?

In a conversation that spans these topics, Brusseau shares his insights on the place of AI in higher education, its benefits, its risks, and what the future holds...

I think that if AI alone is the professor, then the knowledge students get will be imperfect in the same vaguely definable way that AI art is imperfect."

Saturday, August 31, 2024

More Art School Classes Are Teaching AI This Fall Despite Ethical Concerns and Ongoing Lawsuits; ARTnews, August 30, 2024

Karen K. Ho, ARTnews; More Art School Classes Are Teaching AI This Fall Despite Ethical Concerns and Ongoing Lawsuits


"When undergraduate students return to the Ringling College of Art and Design this fall, one of the school’s newest offerings will be an AI certificate

Ringling is just the latest of several top art schools to offer undergraduate students courses that focus on or integrate artificial intelligence tools and techniques.

ARTnews spoke to experts and faculty at Ringling, Rhode Island School of Design (RISD), Carnegie Mellon University (CMU), and Florida State University about how they construct curriculum; how they teach AI in consideration of its limitations and concerns about ethics and legal issues; as well as why they think it’s important for artists to learn."

Thursday, August 29, 2024

The Ethics of Developing Voice Biometrics; The New York Academy of Sciences, August 29, 2024

Nitin Verma, PhD, The New York Academy of Sciences; The Ethics of Developing Voice Biometrics

"Juana Catalina Becerra Sandoval, a PhD candidate in the Department of the History of Science at Harvard University and a research scientist in the Responsible and Inclusive Technologies initiative at IBM Research, presented as part of The New York Academy of Sciences’ (the Academy) Artificial Intelligence (AI) & Society Seminar series. The lecture – titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies” – explored the ethical implications associated with the development and use of AI-based tools such as voice biometrics. After the presentation, Juana sat down with Nitin Verma, PhD, a member of the Academy’s 2023 cohort of the AI & Society Fellowship, to further discuss the promises and challenges society faces as AI continues to evolve."

Monday, August 19, 2024

Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city; The Washington Post, August 19, 2024

Jenna Sampson, The Washington Post; Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city

"Miller made this pitch at a county library in Wyoming’s capital on a recent summer Friday, with a few friends and family filling otherwise empty rows of chairs. Before the sparse audience, he vowed to run the city of Cheyenne exclusively with an AI bot he calls “VIC” for “Virtual Integrated Citizen.”

AI experts say the pledge is a first for U.S. campaigns and marks a new front in the rapid emergence of the technology. Its implications have stoked alarm among officials and even tech companies...

The day before, Miller had scrambled to get VIC working after OpenAI, the technology company behind generative-AI tools like ChatGPT, shut down his account, citing policies against using its products for campaigning. Miller quickly made a second ChatGPT bot, allowing him to hold the meet-and-greet almost exactly as planned.

It was just the latest example of Miller skirting efforts against his campaign, both by the company that makes the AI technology and by the regulatory authorities that oversee elections...

“While OpenAI may have certain policies against using its model for campaigning, other companies do not, so it makes shutting down the campaign nearly impossible.”"

Friday, July 26, 2024

Students Weigh Ethics of Using AI for College Applications; Education Week via GovTech, July 24, 2024

Alyson Klein, Education Week via GovTech; Students Weigh Ethics of Using AI for College Applications

"About a third of high school seniors who applied to college in the 2023-24 school year acknowledged using an AI tool for help in writing admissions essays, according to research released this month by foundry10, an organization focused on improving learning.

About half of those students — or roughly one in six students overall — used AI the way Makena did, to brainstorm essay topics or polish their spelling and grammar. And about 6 percent of students overall — including some of Makena's classmates, she said — relied on AI to write the final drafts of their essays instead of doing most of the writing themselves.

Meanwhile, nearly a quarter of students admitted to Harvard University's class of 2027 paid a private admissions consultant for help with their applications.

The use of outside help, in other words, is rampant in college admissions, opening up a host of questions about ethics, norms, and equal opportunity.

Top among them: Which — if any — of these students cheated in the admissions process?

For now, the answer is murky."

Thursday, July 11, 2024

The assignment: Build AI tools for journalists – and make ethics job one; Poynter, July 8, 2024

Poynter; The assignment: Build AI tools for journalists – and make ethics job one

"Imagine you had virtually unlimited money, time and resources to develop an AI technology that would be useful to journalists.

What would you dream, pitch and design?

And how would you make sure your idea was journalistically ethical?

That was the scenario posed to about 50 AI thinkers and journalists at Poynter’s recent invitation-only Summit on AI, Ethics & Journalism.

The summit drew together news editors, futurists and product leaders June 11-12 in St. Petersburg, Florida. As part of the event, Poynter partnered with Hacks/Hackers to ask groups of attendees to brainstorm ethically considered AI tools that they would create for journalists if they had practically unlimited time and resources.

Event organizer Kelly McBride, senior vice president and chair of the Craig Newmark Center for Ethics and Leadership at Poynter, said the hackathon was born out of Poynter’s desire to help journalists flex their intellectual muscles as they consider AI’s ethical implications.

“We wanted to encourage journalists to start thinking of ways to deploy AI in their work that would both honor our ethical traditions and address the concerns of news consumers,” she said.

Alex Mahadevan, director of Poynter’s digital media literacy project MediaWise, covers the use of generative AI models in journalism and their potential to spread misinformation."

Saturday, July 6, 2024

THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School; Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society; THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."
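
For readers who have never watched it happen, the "scraping" the authors describe is mundane to implement. What follows is a minimal, hypothetical Python sketch (the requests and BeautifulSoup libraries are one common toolchain among many, and the target URL is a placeholder): it fetches a single public page and extracts its visible text and links. The privacy tension the Article describes arises when loops like this run unattended across millions of pages of personal data.

    # Minimal, illustrative web scraper: fetch one public page and
    # extract its visible text and outbound links. The URL below is a
    # placeholder; real scraping pipelines crawl at vastly larger scale.
    import requests
    from bs4 import BeautifulSoup

    def scrape_page(url: str) -> dict:
        """Fetch a page and pull out its visible text and links."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        return {
            "url": url,
            "text": soup.get_text(separator=" ", strip=True),
            "links": [a["href"] for a in soup.find_all("a", href=True)],
        }

    page = scrape_page("https://example.com")  # placeholder target
    print(page["text"][:200])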

Saturday, January 27, 2024

Artificial Intelligence Law - Intellectual Property Protection for your voice?; JDSupra, January 22, 2024

Steve Vondran, JDSupra; Artificial Intelligence Law - Intellectual Property Protection for your voice?

"With the advent of AI technology capable of replicating a person's voice and utilizing it for commercial purposes, several key legal issues are likely to emerge under California's right of publicity law. The right of publicity refers to an individual's right to control and profit from their own name, image, likeness, or voice.

Determining the extent of a person's control over their own voice will likely become a contentious legal matter given the rise of AI technology. In 2024, with a mere prompt and a push of a button, a creator can generate highly accurate voice replicas, potentially allowing companies to utilize a person's voice without their explicit permission, for example by using an AI-generated song in a video or podcast, or as a voice-over for a commercial project. This sounds like fun new technology, until you realize that in states like California, where a "right of publicity" law exists, a person's voice can be a protectable asset, and one can sue others who wrongfully misuse that voice for commercial advertising purposes.

This blog will discuss a few new legal issues I see arising in our wonderful new digital age being fueled by the massive onset of Generative AI technology (which really just means you input prompts into an AI tool and it will generate art, text, images, music, etc.)."

Thursday, October 12, 2023

Ethical considerations in the use of AI; Reuters, October 2, 2023

Hanson Bridgett LLP, Reuters; Ethical considerations in the use of AI

"The burgeoning use of artificial intelligence ("AI") platforms and tools such as ChatGPT creates both opportunities and risks for the practice of law. In particular, the use of AI in research, document drafting and other work product presents a number of ethical issues for lawyers to consider as they contemplate how the use of AI may benefit their practices. In California, as in other states, several ethics rules are particularly relevant to a discussion of the use of AI."

Tuesday, August 1, 2023

The Ethics of Making (and Publishing) AI Art; Lifehacker, July 31, 2023

Brendan Hesse, Lifehacker; The Ethics of Making (and Publishing) AI Art

"This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here...

Are there ethical uses of AI art?

Despite the ethical and legal issues, some argue there is a place for these tools, and that they can even be helpful to professional artists...

Given all these concerns, it’s hard to recommend AI art creators, even if the intent to use them is innocent. Nevertheless, these tools are here, and unless some future regulations force them to change, we can’t stop folks from giving them a try. But, if you do, please keep in mind the legal and ethical issues associated with making and sharing AI art, think twice about sharing it, and never claim an AI-generated image as your own work."

Tuesday, March 14, 2023

Microsoft lays off team that taught employees how to make AI tools responsibly; The Verge, March 13, 2023

Zoe Schiffer and Casey Newton, The Verge; Microsoft lays off team that taught employees how to make AI tools responsibly

"Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs."

Wednesday, January 18, 2023

Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach; The New York Times, January 16, 2023

Kalley Huang, The New York Times; Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach

"In higher education, colleges and universities have been reluctant to ban the A.I. tool because administrators doubt the move would be effective and they don’t want to infringe on academic freedom. That means the way people teach is changing instead."

Friday, January 13, 2023

Advances in artificial intelligence raise new ethics concerns; PBS News Hour, January 10, 2023

PBS News Hour; Advances in artificial intelligence raise new ethics concerns

"In recent months, new artificial intelligence tools have garnered attention, and concern, over their ability to produce original work. The creations range from college-level essays to computer code and works of art. As Stephanie Sy reports, this technology could change how we live and work in profound ways."

Friday, May 20, 2022

Federal officials caution employers on using AI in hiring; FCW, May 12, 2022

Natalie Alms, FCW; Federal officials caution employers on using AI in hiring

"The growing use of artificial intelligence and other software tools for hiring, performance monitoring and pay determination in the workplace is compounding discriminiation against people with disabilities, federal civil rights officials say.

Artificial intelligence can be deployed to target job ads to certain potential applicants, hold online job interviews, assess the skills of job applicants and even decide if an applicant meets job requirements. But the technology can discriminate against applicants and employees with disabilities.

On Thursday, the Equal Employment Opportunity Commission and the Department of Justice put employers on alert that they're responsible for not using AI tools in ways that discriminate and for informing employees of their rights, agency officials told reporters."

Friday, January 24, 2020

This App Is a Dangerous Invasion of Your Privacy—and the FBI Uses It; Popular Mechanics, January 22, 2020

Popular Mechanics; This App Is a Dangerous Invasion of Your Privacy—and the FBI Uses It

"Even Google Wouldn't Build This

When companies like Google—which has received a ton of flack for taking government contracts to work on artificial intelligence solutions—won't even build an app, you know it's going to cause a stir. Back in 2011, former Google Chairman Eric Schmidt said a tool like Clearview AI's app was one of the few pieces of tech that the company wouldn't develop because it could be used "in a very bad way."

Facebook, for its part, developed something pretty similar to what Clearview AI offers, but at least had the foresight not to publicly release it. That application, developed between 2015 and 2016, allowed employees to identify colleagues and friends who had enabled facial recognition by pointing their phone cameras at their faces. Since then, the app has been discontinued.

Meanwhile, Clearview AI is nowhere near finished. Hidden in the app's code, which the New York Times evaluated, is programming language that could pair the app to augmented reality glasses, meaning that in the future, it's possible we could identify every person we see in real time.

Early Pushback

Perhaps the silver lining is that we found out about Clearview AI at all. Its public discovery—and accompanying criticism—have led to well-known organizations coming out as staunchly opposed to this kind of tech.

Fight for the Future tweeted that "an outright ban" on these AI tools is the only way to fix this privacy issue—not quirky jewelry or sunglasses that can help to protect your identity by confusing surveillance systems."

Tuesday, February 12, 2019

Rethinking Medical Ethics; Forbes, February 11, 2019

Forbes; Rethinking Medical Ethics

"Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision—and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI “black box”? How do they make sure AI systems avoid bias and protect patient privacy?

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges."