Showing posts with label companies. Show all posts

Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Tuesday, August 6, 2024

How Companies Can Take a Global Approach to AI Ethics; Harvard Business Review (HBR), August 5, 2024

Favour Borokini, Harvard Business Review (HBR); How Companies Can Take a Global Approach to AI Ethics

"Getting the AI ethics policy right is a high-stakes affair for an organization. Well-publicized instances of gender biases in hiring algorithms or job search results may diminish the company’s reputation, pit the company against regulations, and even attract hefty government fines. Sensing such threats, organizations are increasingly creating dedicated structures and processes to inculcate AI ethics proactively. Some companies have moved further along this road, creating institutional frameworks for AI ethics.

Many efforts, however, miss an important fact: ethics differ from one cultural context to the next...

Western perspectives are also implicitly being encoded into AI models. For example, some estimates show that less than 3% of all images on ImageNet represent the Indian and Chinese diaspora, which collectively account for a third of the global population. Broadly, a lack of high-quality data will likely lead to low predictive power and bias against underrepresented groups — or even make it impossible for tools to be developed for certain communities at all. LLMs can’t currently be trained for languages that aren’t heavily represented on the Internet, for instance. A recent survey of IT organizations in India revealed that the lack of high-quality data remains the most dominant impediment to ethical AI practices.

As AI gains ground and dictates business operations, an unchecked lack of variety in ethical considerations may harm companies and their customers.

To address this problem, companies need to develop a contextual global AI ethics model that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams. This is particularly necessary if their operations span several geographies."

Wednesday, April 24, 2024

U.S. bans noncompete agreements for nearly all jobs; NPR, April 23, 2024

NPR; U.S. bans noncompete agreements for nearly all jobs

"The Federal Trade Commission narrowly voted Tuesday to ban nearly all noncompetes, employment agreements that typically prevent workers from joining competing businesses or launching ones of their own...

"For more than a year, the group has vigorously opposed the ban, saying that noncompetes are vital to companies, by allowing them to better guard trade secrets, and employees, by giving employers greater incentive to invest in workforce training and development."

Wednesday, November 1, 2023

Biden Issues Executive Order to Create A.I. Safeguards; The New York Times, October 30, 2023

Cecilia Kang, The New York Times; Biden Issues Executive Order to Create A.I. Safeguards

"President Biden signed a far-reaching executive order on artificial intelligence on Monday, requiring that companies report to the federal government about the risks that their systems could aid countries or terrorists to make weapons of mass destruction. The order also seeks to lessen the dangers of “deep fakes” that could swing elections or swindle consumers."

Tuesday, October 10, 2023

Navigating the patchwork of U.S. privacy and cybersecurity laws: key regulatory updates from summer 2023; Reuters, October 9, 2023

Reuters; Navigating the patchwork of U.S. privacy and cybersecurity laws: key regulatory updates from summer 2023

"The increasing patchwork of privacy and cybersecurity statutes, rules, and regulations on the state and federal level will likely result in further compliance costs to entities. In addition, these new laws create new grounds for governmental oversight that could result in a costly defense of regulatory investigations and exposure to civil penalties.

Indeed, federal and state regulators continue to enforce existing laws that may touch on privacy and cybersecurity with increasing frequency, and the addition of these new laws provide regulators with an increased ability to bring enforcement actions. Finally, the public disclosure requirements that many of these laws require expose companies to more potential lawsuits following any public notification resulting from an incident."

Wednesday, August 2, 2023

A More Ethical Approach to Employing Contractors; Harvard Business Review, August 2, 2023

Catherine Bracy, Harvard Business Review; A More Ethical Approach to Employing Contractors

"Companies that do not adopt high-road contracting practices create a race to the bottom, degrading job quality and career mobility. Enacting high-road practices, and requiring them of any vendor your company works with, helps mitigate the potential for worker harm in the first place which helps reduce future liability and risk."

Wednesday, June 10, 2020

Narcan or No?; American Libraries, June 1, 2020

Anne Ford, American Libraries; Narcan or No?

Several years into the opioid crisis, public librarians reflect on whether to stock free naloxone

"“I think some libraries are concerned about liability, even though most states have Good Samaritan laws around naloxone,” Duddy says. “And I think some people feel there’s not an [opioid overdose] issue where their library is located.”

The libraries to which American Libraries spoke cited different reasons for not seeking the free Narcan."

Thursday, February 13, 2020

How To Teach Artificial Intelligence; Forbes, February 12, 2020

Tom Vander Ark, Forbes; How To Teach Artificial Intelligence

"Artificial intelligence—code that learns—is likely to be humankind’s most important invention. It’s a 60-year-old idea that took off five years ago when fast chips enabled massive computing and sensors, cameras, and robots fed data-hungry algorithms...

A World Economic Forum report indicated that 89% of U.S.-based companies are planning to adopt user and entity big data analytics by 2022, while more than 70% want to integrate the Internet of Things, explore web and app-enabled markets, and take advantage of machine learning and cloud computing.

Given these important and rapid shifts, it’s a good time to consider what young people need to know about AI and information technology. First, everyone needs to be able to recognize AI and its influence on people and systems, and be proactive as a user and citizen. Second, everyone should have the opportunity to use AI and big data to solve problems. And third, young people interested in computer science as a career should have a pathway for building AI...

The MIT Media Lab developed a middle school AI+Ethics course that hits many of these learning objectives. It was piloted by Montour Public Schools outside of Pittsburgh, Pennsylvania, which has incorporated the three-day course in its media arts class."

Thursday, January 23, 2020

Five Ways Companies Can Adopt Ethical AI; Forbes, January 23, 2020

Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum, Forbes; Five Ways Companies Can Adopt Ethical AI

"In 2014, Stephen Hawking said that AI would be humankind’s best or last invention. Six years later, as we welcome 2020, companies are looking at how to use Artificial Intelligence (AI) in their business to stay competitive. The question they are facing is how to evaluate whether the AI products they use will do more harm than good...

Here are five lessons for the ethical use of AI."

Thursday, November 14, 2019

Why Businesses Should Adopt an AI Code of Ethics -- Now; InformationWeek, November 14, 2019

Gary Grossman, InformationWeek; Why Businesses Should Adopt an AI Code of Ethics -- Now

"Adherence to AI ethics breeds trust

According to Angel Gurria, Secretary-General of the Organization for Economic Co-Operation and Development (OECD): “To realize the full potential of [AI] technology, we need one critical ingredient. That critical ingredient is trust. And to build trust we need human-centered artificial intelligence that fosters sustainable development and inclusive human progress.” To achieve this, he adds that there must be an ethical dimension to AI use. This all underscores the urgency for companies to create and live by a responsible AI code of ethics to govern decisions about AI development and deployment.

The EU has developed principles for ethical AI, as has the IEEE, Google, Microsoft, Intel, Tencent and other countries and corporations. As these have appeared in only the last couple of years, AI ethics is very much an evolving field. There is an opportunity and critical need for businesses to lead by creating their own set of principles embodied in an AI code of ethics to govern their AI research and development to both further the technology while also helping to create a better tomorrow."

Tuesday, January 29, 2019

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

"I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Sunday, January 27, 2019

Can we make artificial intelligence ethical?; The Washington Post, January 23, 2019

Stephen A. Schwarzman, The Washington Post; Can we make artificial intelligence ethical?

"Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, an investment firm...

Too often, we think only about increasing our competitiveness in terms of advancing the technology. But the effort can’t just be about making AI more powerful. It must also be about making sure AI has the right impact. AI’s greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn’t anticipate. AI’s detractors warn of a dystopian nightmare in which AI rapidly replaces human beings at many jobs and tasks. If we want to realize AI’s incredible potential, we must also advance AI in a way that increases the public’s confidence that AI benefits society. We must have a framework for addressing the impacts and the ethics.

What does an ethics-driven approach to AI look like?

It means asking not only whether AI can be used in certain circumstances, but should it?

Companies must take the lead in addressing key ethical questions surrounding AI. This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave and when to disclose the use of AI to consumers, how to address concerns about AI’s effect on privacy and responding to employee fears about AI’s impact on jobs.

As Thomas H. Davenport and Vivek Katyal argue in the MIT Sloan Management Review, we must also recognize that AI often works best with humans instead of by itself."

Thursday, January 10, 2019

Pennsylvania High Court Decision Regarding Data Breach Increases Litigation Risk for Companies Storing Personal Data; Lexology, January 8, 2019

Ropes & Gray LLP, Lexology; Pennsylvania High Court Decision Regarding Data Breach Increases Litigation Risk for Companies Storing Personal Data

"This decision could precipitate increased data breach class action litigation against companies that retain personal data. No state Supreme Court had previously recognized the existence of a negligence-based duty to safeguard personal information, other than in the narrow context of health care patient information."

Thursday, November 29, 2018

Navy Official: Concerns About Intellectual Property Rights Becoming More 'Acute'; National Defense, NDIA's Business & Technology Magazine, November 29, 2018

Connie Lee, National Defense, NDIA's Business & Technology Magazine; Navy Official: Concerns About Intellectual Property Rights Becoming More 'Acute'

"Capt. Samuel Pennington, major program manager for surface training systems, said the fear of losing data rights can sometimes make companies reluctant to work with the government.

“We get feedback sometimes where they’re not willing to bid on a contract where we have full data rights,” he said. “Industry [is] not going to do that because they have their secret sauce and they don’t want to release it.”

Pennington said having IP rights would allow the Defense Department to more easily modernize and sustain equipment.

“Our initiative is to get as much data rights, or buy a new product that has open architecture to the point where [the] data rights that we do have are sufficient, where we can recompete that down the road,” he said. This would prevent the Navy from relying on the original manufacturer for future work on the system, he noted.

The issue is also being discussed on Capitol Hill, Merritt added. The fiscal year 2018 National Defense Authorization Act requires the Pentagon to develop policy on the acquisition or licensing of intellectual property. Additionally, the NDAA requires the department to negotiate a price for technical data rights of major weapon systems."

Saturday, November 10, 2018

Our lack of interest in data ethics will come back to haunt us; TNW, November 3, 2018

Jayson Demers, TNW; Our lack of interest in data ethics will come back to haunt us

"Outreach and attention

We can’t solve these ethical dilemmas by issuing judgments or making a few laws. After all, ethical discussions rarely result in a simple understanding of what’s “right” and what’s “wrong.” Instead, we should be concentrating our efforts on raising awareness of these ethical dilemmas, and facilitating more open, progressive conversations.

We need to democratize the conversation by encouraging consumers to demand greater ownership, control, and/or transparency over their own data. We need to hold companies accountable for their practices before they get out of hand. And we need the data scientists, entrepreneurs, and marketers of the world to think seriously about the consequences of their data-related efforts — and avoid sacrificing ethical considerations in the name of profits."

Monday, July 23, 2018

We Need Transparency in Algorithms, But Too Much Can Backfire; Harvard Business Review, July 23, 2018

Kartik Hosanagar and Vivian Jair, Harvard Business Review; We Need Transparency in Algorithms, But Too Much Can Backfire

"Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward."

Monday, July 16, 2018

UN Report Sets Forth Strong Recommendations for Companies to Protect Free Expression; Electronic Frontier Foundation (EFF), June 27, 2018

Jillian C. York, Electronic Frontier Foundation (EFF); UN Report Sets Forth Strong Recommendations for Companies to Protect Free Expression

"Through Onlinecensorship.org and various other projects—including this year’s censorship edition of our annual Who Has Your Back? report—we’ve highlighted the challenges and pitfalls that companies face as they seek to moderate content on their platforms. Over the past year, we’ve seen this issue come into the spotlight through advocacy initiatives like the Santa Clara Principles, media such as the documentary The Cleaners, and now, featured in the latest report by Professor David Kaye, the United Nations' Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. 

Toward greater freedom, accountability, and transparency 

The Special Rapporteur’s latest is the first-ever UN report to focus on the regulation of user-generated content online, and comes at a time of heated debate on the impact of disinformation, extremism, and hateful speech. The report focuses on the obligations of both State actors and ICT companies. It aims at finding user-centered, human rights law-aligned approaches to content policy-making, transparency, due process, and governance on platforms that host user-generated content."

Wednesday, May 23, 2018

No one’s ready for GDPR; The Verge, May 22, 2018

Sarah Jeong, The Verge; No one’s ready for GDPR

"The General Data Protection Regulation will go into effect on May 25th, and no one is ready — not the companies and not even the regulators...

GDPR is only supposed to apply to the EU and EU residents, but because so many companies do business in Europe, the American technology industry is scrambling to become GDPR compliant. Still, even though GDPR’s big debut is bound to be messy, the regulation marks a sea change in how data is handled across the world. Americans outside of Europe can’t make data subject access requests, and they can’t demand that their data be deleted. But GDPR compliance is going to have spillover effects for them anyway. The breach notification requirement, especially, is more stringent than anything in the US. The hope is that as companies and regulatory bodies settle into the flow of things, the heightened privacy protections of GDPR will become business as usual. In the meantime, it’s just a mad scramble to keep up."

Monday, April 30, 2018

The 7 stages of GDPR grief; VentureBeat, April 29, 2018

Chris Purcell, VentureBeat; The 7 stages of GDPR grief

"All of the systems we’ve built around handling personal data will need to be re-engineered to handle the new General Data Protection Regulation (GDPR) rules that go into effect that day. That’s a lot to accomplish, with very little time left.

While the eve of the GDPR deadline may not start parties like we had back on New Year’s Eve 1999 — when people counted down to “the end of the world” — stakeholders in organizations across the globe will be experiencing a range of emotions as they make their way through the seven stages of GDPR grief at varying speeds.

Like Y2K, May 25 could come and go without repercussion if people work behind the scenes to make their organizations compliant. Unfortunately, most companies are in the earliest stage of grief – denial – believing that GDPR does not apply to them (if they even know what it is). Denial rarely serves companies well. And in the case of GDPR non-compliance, it could cost them fines of up to 20 million euros ($24 million) or four percent of global annual turnover, whichever value is greater.

Luckily, there are sure-tell signs for each grief stage and advice to help individuals and their employers move through each (and fast):..."

Wednesday, April 25, 2018

In global AI race, Europe pins hopes on ethics; Politico, April 25, 2018

Janosch Delcker, Politico; In global AI race, Europe pins hopes on ethics

"One of the central goals in the EU strategy is to provide customers with insight into the systems.

That could be easier said than done.

“Algorithmic transparency doesn’t mean [platforms] have to publish their algorithms,” Ansip said, “but ‘explainability’ is something we want to get.”

AI experts say that to achieve such explainability, companies will, indeed, have to disclose the codes they’re using – and more.

Virginia Dignum, an AI researcher at the Delft University of Technology, said “transparency of AI is more than just making the algorithm transparent,” adding that companies should also have to disclose details such as which data was used to train their algorithms, which data are used to make decisions, how this data was collected, or at which point in the process humans were involved in the decision-making process."