
Saturday, June 22, 2024

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Wednesday, June 19, 2024

Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference; University of Oxford, In-Person Event on June 20, 2024

University of Oxford; Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference

"The Oxford University Institute for Ethics in AI is hosting an exciting one day conference in Athens on the 20th of June 2024, The Lyceum Project: AI Ethics with Aristotle, in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research...

Set in the cradle of philosophy, adjacent to the ancient site of Aristotle’s school, the conference will showcase some of the greatest philosophical minds and feature a special address from the Greek Prime Minister, Kyriakos Mitsotakis, as they discuss the most pressing question of our times – the ethical regulation of AI.

The conference will be free to attend (register to attend).

Professor John Tasioulas, Director of the Institute for Ethics in AI, said: ‘The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI. We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges.’

George Nounesis, Director & Chairman of the Board of NCSR Demokritos said: ‘There is no such thing as ethically neutral AI; and high-quality research on AI cannot ignore its inherent ethical aspects. Ancient Greek philosophy can serve as a valuable resource guiding us in this discourse. In this respect, Aristotelian philosophy can play a pivotal role by nurturing ethical reasoning and a comprehensive understanding of the societal implications of AI, broadening the dialogue with society.’

Alexandra Mitsotaki, President of the World Human Forum, said: ‘This conference is an important first step towards our vision to bring Aristotle’s lyceum alive again by showing the relevance of the teachings of the great philosopher for today’s global challenges. We aspire for the Lyceum to become a global point of connection. This is, after all, the original location where the great philosopher thought, taught and developed many of the ideas that formed Western Civilisation.’"

Wednesday, June 5, 2024

Supersharers of fake news on Twitter; Science, May 30, 2024

Sahar Baribi-Bartov, Briony Swire-Thompson, and Nir Grinberg, Science; Supersharers of fake news on Twitter

"Editor’s summary

Most fake news on Twitter (now X) is spread by an extremely small population called supersharers. They flood the platform and unequally distort political debates, but a clear demographic portrait of these users was not available. Baribi-Bartov et al. identified a meaningful sample of supersharers during the 2020 US presidential election and asked who they were, where they lived, and what strategies they used (see the Perspective by van der Linden and Kyrychenko). The authors found that supersharers were disproportionately Republican, middle-aged White women residing in three conservative states, Arizona, Florida, and Texas, which are focus points of contentious abortion and immigration battles. Their neighborhoods were poorly educated but relatively high in income. Supersharers persistently retweeted misinformation manually. These insights are relevant for policymakers developing effective mitigation strategies to curtail misinformation. —Ekeoma Uzogara

Abstract

Governments may have the capacity to flood social media with fake news, but little is known about the use of flooding by ordinary voters. In this work, we identify 2107 registered US voters who account for 80% of the fake news shared on Twitter during the 2020 US presidential election by an entire panel of 664,391 voters. We found that supersharers were important members of the network, reaching a sizable 5.2% of registered voters on the platform. Supersharers had a significant overrepresentation of women, older adults, and registered Republicans. Supersharers’ massive volume did not seem automated but was rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many."

Tuesday, April 27, 2021

Stop talking about AI ethics. It’s time to talk about power.; MIT Technology Review, April 23, 2021

MIT Technology Review; Stop talking about AI ethics. It’s time to talk about power.

"If there’s been a real trap in the tech sector for the last decade, it’s that the theory of change has always centered engineering. It’s always been, “If there’s a problem, there’s a tech fix for it.” And only recently are we starting to see that broaden out to “Oh, well, if there’s a problem, then regulation can fix it. Policymakers have a role.”

But I think we need to broaden that out even further. We have to say also: Where are the civil society groups, where are the activists, where are the advocates who are addressing issues of climate justice, labor rights, data protection? How do we include them in these discussions? How do we include affected communities?

In other words, how do we make this a far deeper democratic conversation around how these systems are already influencing the lives of billions of people in primarily unaccountable ways that live outside of regulation and democratic oversight?

In that sense, this book is trying to de-center tech and starting to ask bigger questions around: What sort of world do we want to live in?"

Monday, September 23, 2019

Manifesto Promotes ‘Ethics, Equity, and Empathy’; Streetsblog USA, September 20, 2019



Streetsblog USA; Manifesto Promotes ‘Ethics, Equity, and Empathy’


A design firm publishes a new credo for engineers, policymakers, and planners.

"Maryland-based design firm is seeking to revolutionize the century-old credo that shapes how policymakers and engineers plan communities — in order to force planners to prioritize human beings over automobiles and think deeply about how their decisions affect road safety. 

Toole Design, which has 17 offices in the United States and Canada, last week released a manifesto that seeks to substitute new concepts for the traditional “three Es” — education, enforcement, and engineering — that have guided transportation professionals as they have built the infrastructure of our towns and cities.

The new “three Es” that Toole proposes — “ethics, equity, and empathy”  — replace the object- and rule-centered approach that dominates the discipline with a moral one centered on people."



Sunday, February 17, 2019

With fitness trackers in the workplace, bosses can monitor your every step — and possibly more; The Washington Post, February 16, 2019

Christopher Rowland, The Washington Post; With fitness trackers in the workplace, bosses can monitor your every step — and possibly more



[Kip Currier: This article--and case study about the upshots and downsides of employers' use of personal health data harvested from their employees' wearable devices--is a veritable "ripped from the headlines" gift from the Gods for an Information Ethics professor's discussion question for students this week!... 
What are the ethics issues? 
Who are the stakeholders? 
What ethical theory/theories would you apply/not apply in your analysis and decision-making?
What are the risks and benefits presented by the issues and the technology? 
What are the potential positive and negative consequences?  
What are the relevant laws and gaps in law?
Would you decide to participate in a health data program, like the one examined in the article? Why or why not?

And for all of us...spread the word that HIPAA does NOT cover personal health information that employees VOLUNTARILY give to employers. It's ultimately up to you to decide what to do, but we all need to be aware of the pertinent facts so we can make the most informed decisions.
See the full article and the excerpt below...]   


"Many consumers are under the mistaken belief that all health data they share is required by law to be kept private under a federal law called HIPAA, the Health Insurance Portability and Accountability Act. The law prohibits doctors, hospitals and insurance companies from disclosing personal health information.


But if an employee voluntarily gives health data to an employer or a company such as Fitbit or Apple — entities that are not covered by HIPPA’s [sic] rules — those restrictions on disclosure don’t apply, said Joe Jerome, a policy lawyer at the Center for Democracy & Technology, a nonprofit in Washington. The center is urging federal policymakers to tighten up the rules.

“There’s gaps everywhere,’’ Jerome said.

Real-time information from wearable devices is crunched together with information about past doctors visits and hospitalizations to get a health snapshot of employees...

Some companies also add information from outside the health system — social predictors of health such as credit scores and whether someone lives alone — to come up with individual risk forecasts."

Tuesday, December 11, 2018

Government Is Using Algorithms — Is It Assessing Bias?; Government Technology, December 10, 2018

Michaelle Bond, Government Technology; Government Is Using Algorithms — Is It Assessing Bias?

"“Data science is here to stay. It holds tremendous promise to improve things,” said Julia Stoyanovich, an assistant professor at New York University and former assistant professor in ethical data management at Drexel University. But policymakers need to use it responsibly.

“The first thing we need to teach people is to be skeptical about technology,” she said.

Data review boards, toolkits and software that cities, universities, and data analysts are starting to develop are steps in the right direction to spur policymakers to think critically about data, researchers said."

Thursday, September 13, 2018

North Carolina, Warned of Rising Seas, Chose to Favor Development; The New York Times, September 12, 2018

John Schwartz and Richard Fausset, The New York Times; North Carolina, Warned of Rising Seas, Chose to Favor Development

[Kip Currier: Food for thought for all stakeholders (particularly anyone, anywhere, concerned and involved with matters of scientific research, data, modeling, ethics, law, and policy) as the Carolinas prepare for the arrival of Hurricane Florence.

The article's takeaway insight is in the last three sentences, excerpted and highlighted in bold below.]

"The leading scientific model used to forecast storm surge and its effect on coastal areas, known as Adcirc, was created in large part by Rick Luettich, director of the institute of marine sciences at the University of North Carolina.

In a telephone interview during a break from boarding up the windows of his home in Morehead City, on the coast, Mr. Luettich noted that before 2012, the state pursued progressive policies that put it in the forefront of coastal management. When the legislature pushed back against the clear scientific evidence underlying climate change, he said, “it came as a shock.”

There is a lesson in that, he said.

[Bold and red added for emphasis] “The process of converting scientific research into policy is one that we take for granted at times,” Mr. Luettich said. “What we learned is that you can’t take that for granted. We need to have a closer dialogue with policymakers, to make sure we’re on the same page.”

Saturday, March 24, 2018

Driverless cars raise so many ethical questions. Here are just a few of them.; San Diego Union-Tribune, March 23, 2018

Lawrence M. Hinman, San Diego Union-Tribune; Driverless cars raise so many ethical questions. Here are just a few of them.

"Even more troubling will be the algorithms themselves, even if the engineering works flawlessly. How are we going to program autonomous vehicles when they are faced with a choice among competing evils? Should they be programmed to harm or kill the smallest number of people, swerving to avoid hitting two people but unavoidably hitting one? (This is the famous “trolley problem” that has vexed philosophers and moral psychologists for over half a century.)

Should your car be programmed to avoid crashing into a group of schoolchildren, even if that means driving you off the side of a cliff? Most of us would opt for maximizing the number of lives saved, except when one of those lives belongs to us or our loved ones.

These are questions that take us to the heart of the moral life in a technological society. They are already part of a lively and nuanced discussion among philosophers, engineers, policy makers and technologists. It is a conversation to which the larger public should be invited.

The ethics of dealing with autonomous systems will be a central issue of the coming decades."

Thursday, January 19, 2017

Will open data survive Trump?; InfoWorld, January 16, 2017

Eric Knorr, InfoWorld; Will open data survive Trump?


"The incredible quantity of data collected across the federal government is a national treasure. Few other countries on earth apply the same energy, funding, and rigor to assembling such extensive stores. Even if ordinary citizens don't go to Data.gov for entertainment, both policymakers and business leaders need objective data to make sound decisions.

Before joining the Sunlight Foundation, Howard worked at O’Reilly Media, starting there a few years after Tim O’Reilly convened a group of open government advocates to develop the eight principles of open government data in 2007. Howard says the idea of open data really goes back to the Constitution, which stipulates an "Enumeration" (aka, census) be held to apportion Congressional seats -- an indication that "open data is in the DNA of the USA." Even further, open data harkens to the original Enlightenment idea that reason based on fact should govern human action.

We'll see how that quaint notion survives the postfact era. Meanwhile, consider contributing to the Sunlight Foundation and the Electronic Frontier Foundation."