Showing posts with label generative AI. Show all posts

Saturday, October 5, 2024

Police reports written with advanced tech could help cops but comes with host of challenges: expert; Fox News, September 24, 2024

Christina Coulter, Fox News; Police reports written with advanced tech could help cops but comes with host of challenges

"Several police departments nationwide are debuting artificial intelligence that writes officers' incident reports for them, and although the software could cause issues in court, an expert says, the technology could be a boon for law enforcement.

Oklahoma City's police department was among the first to experiment with Draft One, an AI-powered software that analyzes police body-worn camera audio and radio transmissions to write police reports that can later be used to justify criminal charges and as evidence in court.

Since The Associated Press detailed the software and its use by the department in a late August article, the department told Fox News Digital that it has put the program on hold. 

"The use of the AI report writing has been put on hold, so we will pass on speaking about it at this time," Capt. Valerie Littlejohn wrote via email. "It was paused to work through all the details with the DA’s Office."...

According to Politico, at least seven police departments nationwide are using Draft One, which was made by police technology company Axon to be used with its widely used body-worn cameras."

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

 Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between various AI tools available today.  He provides an insightful overview of the historical application of GenAI in legal contexts and differentiates the various AI tools currently available. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Monday, September 23, 2024

Generative AI and Legal Ethics; JD Supra, September 20, 2024

Craig Brodsky, Goodell, DeVries, Leech & Dann, LLP, JD Supra; Generative AI and Legal Ethics

"In his scathing opinion, Cullen joined judges from New York, Massachusetts and North Carolina, among others, in concluding that improper use of AI-generated authorities may give rise to sanctions and disciplinary charges...

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern: GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules, ranging from competence (Md. Rule 19-301.1) to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rule 19-305.1 and Md. Rule 19-305.3), to the duties of a lawyer before a tribunal to be candid and pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Rule 19-301.1 includes technical competence, and GAI is just another step forward. It is here to stay. We must embrace it but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage."

Tuesday, September 17, 2024

Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture; The Hastings Center, September 12, 2024

 The Hastings Center; Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture

"A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists: Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science; Timothy Caulfield, LLB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta, best-selling author & TV host. Moderator: Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

Sunday, September 1, 2024

QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS; Creative Commons, July 24, 2024

Anna Tumadóttir, Creative Commons; QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS

"The intersection of AI, copyright, creativity, and the commons has been a focal point of conversations within our community for the past couple of years. We’ve hosted intimate roundtables, organized workshops at conferences, and run public events, digging into the challenging topics of credit, consent, compensation, transparency, and beyond. All the while, we’ve been asking ourselves: what can we do to foster a vibrant and healthy commons in the face of rapid technological development? And how can we ensure that creators and knowledge-producing communities still have agency?...

We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome, or at least better understand, the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons. Our consultations over the course of the last two years have confirmed that:

  • Folks want more choice over how their work is used.
  • If they have no choice, they might not share their work at all (under a CC license or strict copyright).

If these views are as wide ranging as we perceive, we feel it is imperative that we explore an intervention, and bring far more nuance into how this ecosystem works.

Generative AI is here to stay, and we’d like to do what we can to ensure it benefits the public interest. We are well-positioned with the experience, expertise, and tools to investigate the potential of preference signals.

Our starting point is to identify what types of preference signals might be useful. How do these vary or overlap in the cultural heritage, journalism, research, and education sectors? How do needs vary by region? We’ll also explore exactly how we might structure a preference signal framework so it’s useful and respected, asking, too: does it have to be legally enforceable, or is the power of social norms enough?

Research matters. It takes time, effort, and most importantly, people. We’ll need help as we do this. We’re seeking support from funders to move this work forward. We also look forward to continuing to engage our community in this process. More to come soon."

A bill to protect performers from unauthorized AI heads to California governor; NPR, August 30, 2024

NPR; A bill to protect performers from unauthorized AI heads to California governor

"Other proposed guardrails

In addition to AB2602, the performer’s union is backing California bill AB 1836 to protect deceased performers’ intellectual property from digital replicas.

On a national level, entertainment industry stakeholders, from SAG-AFTRA to The Recording Academy and the MPA, and others are supporting The “NO FAKES Act” (the Nurture Originals, Foster Art, and Keep Entertainment Safe Act) introduced in the Senate. That law would make creating a digital replica of any American illegal.

Around the country, legislators have proposed hundreds of laws to regulate AI more generally. For example, California lawmakers recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which regulates AI models such as ChatGPT.

“It's vital and it's incredibly urgent because legislation, as we know, takes time, but technology matures exponentially. So we're going to be constantly fighting the battle to stay ahead of this,” said voice performer Zeke Alton, a member of SAG-AFTRA’s negotiating committee. “If we don't get to know what's real and what's fake, that is starting to pick away at the foundations of democracy.”

Alton says in the fight for AI protections of digital doubles, Hollywood performers have been the canary in the coal mine. “We are having this open conversation in the public about generative AI and it and using it to replace the worker instead of having the worker use it as a tool for their own efficiency,” he said. “But it's coming for every other industry, every other worker. That's how big this sea change in technology is. So what happens here is going to reverberate.”"

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

Friday, August 23, 2024

Crossroads: Episode 2 - AI and Ethics; Crossroads from Washington National Cathedral, April 17, 2024

Crossroads from Washington National Cathedral; Crossroads: Episode 2 - AI and Ethics

"Tune in for the Cathedral's first conversation on AI and ethics. Whether you are enthusiastically embracing it, reluctantly trying it out, or anxious about its consequences, AI has taken our world by storm and according to the experts, it is here to stay. Dr. Joseph Yun, CEO of Bluefoxlabs.ai and AI architect for the University of Pittsburgh, the Rev. Jo Nygard Owens, the Cathedral's Pastor for Digital Ministry, and Dr. Sonia Coman, the Cathedral's Director of Digital Engagement discuss the state of AI, its risks, and the hope it can bring to the world."

The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws; Wired, August 21, 2024

 Lily Hay Newman, Wired; The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

"At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies."

Monday, July 29, 2024

The COPIED Act Is an End Run around Copyright Law; Public Knowledge, July 24, 2024

Lisa Macpherson, Public Knowledge; The COPIED Act Is an End Run around Copyright Law

"Over the past week, there has been a flurry of activity related to the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act. While superficially focused on helping people understand when they are looking at content that has been created or altered using artificial intelligence (AI) tools, this overly broad bill makes an end run around copyright law and restricts how everyone – not just huge AI developers – can use copyrighted work as the basis of new creative expression. 

The COPIED Act was introduced in the Senate two weeks ago by Senators Maria Cantwell (D-WA, and Chair of the Commerce Committee); Marsha Blackburn (R-TN); and Martin Heinrich (D-NM). By the end of last week, we learned there may be a hearing and markup on the bill within days or weeks. The bill directs agency action on standards for detecting and labeling synthetic content; requires AI developers to allow the inclusion of these standards on content; and prohibits the use of such content to generate new content or train AI models without consent and compensation from creators. It allows for enforcement by the Federal Trade Commission and state attorneys general, and for private rights of action. 

We want to say unequivocally that this is the wrong bill, at the wrong time, from the wrong policymakers, to address complex questions of copyright and generative artificial intelligence."

Monday, July 22, 2024

What Is The Future Of Intellectual Property In A Generative AI World?; Forbes, July 18, 2024

Ron Schmelzer, Forbes; What Is The Future Of Intellectual Property In A Generative AI World?

"Taking a More Sophisticated and Nuanced Approach to GenAI IP Issues

Clearly we’re at a crossroads when it comes to intellectual property and the answers aren’t cut and dry. Simply preventing IP protection of AI-generated works might not be possible if AI systems are used in any significant portion of the creation process. Likewise, prohibiting AI systems from making use of pre-existing IP-protected works might be a Pandora’s box we can’t close. We need to find new approaches that balance the ability to use AI tools as part of the creation process with IP protection of both existing works and the outputs of GenAI systems.

This means a more sophisticated and nuanced approach to clarifying the legal status of data used in AI training and developing mechanisms to ensure that AI-generated outputs respect existing IP rights, while still providing protection for creative outputs that have involved significant elements of human creativity in curation and prompting, even if the outputs are transformative recombinations of training data. Clearly we’re in the early days of the continued evolution of what intellectual property means."

This might be the most important job in AI; Business Insider, July 21, 2024

Business Insider; This might be the most important job in AI

"Generative AI can hallucinate, spread misinformation, and reinforce biases against marginalized groups if it's not managed properly. Given that the technology relies on volumes of sensitive data, the potential for data breaches is also high. At worst, though, there's the danger that the more sophisticated it becomes, the less likely it is to align with human values.

With great power, then, comes great responsibility, and companies that make money from generative AI must also ensure they regulate it.

That's where a chief ethics officer comes in...

Those who are successful in the role ideally have four areas of expertise, according to Mills. They should have a technical grasp over generative AI, experience building and deploying products, an understanding of the major laws and regulations around AI, and significant experience hiring and making decisions at an organization."

The Fast-Moving Race Between Gen-AI and Copyright Law; Baker Donelson, July 10, 2024

Scott M. Douglass and Dominic Rota, Baker Donelson; The Fast-Moving Race Between Gen-AI and Copyright Law

"It is still an open question whether plaintiffs will succeed in showing that use of copyrighted works to train generative AI constitutes copyright infringement and be able to overcome the fair use defense or succeed in showing that generative AI developers are removing CMI in violation of the DMCA.

The government has made some moves in the past few months to resolve these issues. The U.S. Copyright Office started an inquiry in August 2023, seeking public comments on copyright law and policy issues raised by AI systems, and Rep. Adam Schiff (D-Calif.) introduced a new bill in April 2024, that would require people creating a training dataset for a generative AI system to submit to the Register of Copyrights a detailed summary of any copyrighted works used in training. These initiatives will most likely take some time, meaning that currently pending litigation is vitally important for defining copyright law as it applies to generative AI.

Recent licensing deals with news publishers appear to be anywhere from $1 million to $60 million per year, meaning that AI companies will have to pay an enormous amount to license all the copyrighted works necessary to train their generative AI models effectively. However, as potential damages in a copyright infringement case could be billions of dollars, as claimed by Getty Images and other plaintiffs, developers of generative AI programs should seriously consider licensing any copyrighted works used as training data."

Thursday, July 11, 2024

The assignment: Build AI tools for journalists – and make ethics job one; Poynter, July 8, 2024

Poynter; The assignment: Build AI tools for journalists – and make ethics job one

"Imagine you had virtually unlimited money, time and resources to develop an AI technology that would be useful to journalists.

What would you dream, pitch and design?

And how would you make sure your idea was journalistically ethical?

That was the scenario posed to about 50 AI thinkers and journalists at Poynter’s recent invitation-only Summit on AI, Ethics & Journalism.

The summit drew together news editors, futurists and product leaders June 11-12 in St. Petersburg, Florida. As part of the event, Poynter partnered with Hacks/Hackers to ask groups of attendees to brainstorm ethically considered AI tools that they would create for journalists if they had practically unlimited time and resources.

Event organizer Kelly McBride, senior vice president and chair of the Craig Newmark Center for Ethics and Leadership at Poynter, said the hackathon was born out of Poynter’s desire to help journalists flex their intellectual muscles as they consider AI’s ethical implications.

“We wanted to encourage journalists to start thinking of ways to deploy AI in their work that would both honor our ethical traditions and address the concerns of news consumers,” she said.

Alex Mahadevan, director of Poynter’s digital media literacy project MediaWise, covers the use of generative AI models in journalism and their potential to spread misinformation."

Tuesday, July 9, 2024

Record labels sue AI music startups for copyright infringement; WBUR Here & Now, July 8, 2024

WBUR Here & Now; Record labels sue AI music startups for copyright infringement

"Major record labels including Sony, Universal Music Group and Warner are suing two music startups that use artificial intelligence. The labels say Suno and Udio rely on mass copyright infringement, echoing similar complaints from authors, publishers and artists who argue that generative AI infringes on copyright.

Here & Now's Lisa Mullins discusses the cases with Ina Fried, chief technology correspondent for Axios."

Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."

Saturday, June 29, 2024

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified worries such as inaccurate responses (70%), data security (68%), privacy and confidentiality of data (62%), complying with laws and regulations (60%), and ethical and responsible usage (57%) as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

 Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

"In short, professionals’ perceptions of AI include both concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

Tim O’Reilly, O’Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"