Showing posts with label responsible AI practices. Show all posts

Saturday, June 29, 2024

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024


"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

In short, professionals’ perceptions of AI mix concern with interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations, but they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Thursday, May 16, 2024

How to Implement AI — Responsibly; Harvard Business Review (HBR), May 10, 2024


"Regrettably, our research suggests that such proactive measures are the exception rather than the rule. While AI ethics is high on the agenda for many organizations, translating AI principles into practices and behaviors is proving easier said than done. However, with stiff financial penalties at stake for noncompliance, there’s little time to waste. What should leaders do to double-down on their responsible AI initiatives?

To find answers, we engaged with organizations across a variety of industries, each at a different stage of implementing responsible AI. While data engineers and data scientists typically take on most responsibility from conception to production of AI development lifecycles, nontechnical leaders can play a key role in ensuring the integration of responsible AI. We identified four key moves — translate, integrate, calibrate and proliferate — that leaders can make to ensure that responsible AI practices are fully integrated into broader operational standards."