Showing posts with label technology. Show all posts

Friday, September 6, 2024

AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED; Pace University, September 3, 2024

Johnni Medina, Pace University; AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED

"As a scholar deeply immersed in both technology and philosophy, James Brusseau, PhD, has spent years unraveling the complex ethics of artificial intelligence (AI).

“As it happens, I was a physics major in college, so I've had an abiding interest in technology, but I finally decided to study philosophy,” Brusseau explains. “And I did not see much of an intersection between the scientific and my interest in philosophy until all of a sudden artificial intelligence landed in our midst with questions that are very philosophical.”

Some of these questions are heavy, with Brusseau positing an example, “If a machine acts just like a person, does it become a person?” But AI’s implications extend far beyond the theoretical, especially when it comes to the impact on education, learning, and career outcomes. What role does AI play in higher education? Is it a tool that enhances learning, or does it risk undermining it? And how do universities prepare students for an AI-driven world?

In a conversation that spans these topics, Brusseau shares his insights on the place of AI in higher education, its benefits, its risks, and what the future holds...

I think that if AI alone is the professor, then the knowledge students get will be imperfect in the same vaguely definable way that AI art is imperfect."

Saturday, August 31, 2024

ChatGPT Spirituality: Connection or Correction?; Geez, Spring 2024 Issue: February 27, 2024

Rob Saler, Geez; ChatGPT Spirituality: Connection or Correction?

"Earlier this year, I was at an academic conference sitting with friends at a table. This was around the time that OpenAI technology – specifically ChatGPT – was beginning to make waves in the classroom. Everyone was wondering how to adapt to the new technology. Even at that early point, differentiated viewpoints ranged from incorporation (“we can teach students to use it well as part of the curriculum of the future”) to outright resistance (“I am going back to oral exams and blue book written in-class tests”).

During the conversation, a very intelligent friend casually remarked that she recently began using ChatGPT for therapy – not emergency therapeutic intervention, but more like life coaching and as a sounding board for vocational discernment. Because we all respected her sincerity and intellect, several of us (including me) suppressed our immediate shock and listened as she laid out a very compelling case for ChatGPT as a therapy supplement – and perhaps, in the case of those who cannot or choose not to afford sessions with a human therapist, a therapy substitute. ChatGPT is free (assuming one has internet), available 24/7, shapeable to one’s own interests over time, (presumably) confidential, etc…

In my teaching on AI and technology throughout the last semester, I used this example with theology students (some of whom are also receiving licensure as therapists) as a way of pressing them to examine their own assumptions about AI – and then, by extension, their own assumptions about ontology. If the gut-level reaction to ChatGPT therapy is that it is not “real,” then – in Matrix-esque fashion – we are called to ask how we should define “real.” If a person has genuine insights or intense spiritual experiences engaging in vocational discernment with a technology that can instantaneously generate increasingly relevant responses to prompts, then what is the locus of reality that is missing?"

Friday, June 16, 2023

Tennis stars get lots of hate online. The French Open gave them AI 'bodyguards'; NPR, June 8, 2023

Friday, April 1, 2022

Self-driving semis may revolutionize trucking while eliminating hundreds of thousands of jobs.; The Hill, March 23, 2022

Joseph Guzman, The Hill; Self-driving semis may revolutionize trucking while eliminating hundreds of thousands of jobs.

"Aniruddh Mohan, a PhD candidate in the department of engineering and public policy at Carnegie Mellon University and co-author of the study, said widespread implementation will depend on how successful pilot programs in the Sun Belt are in the coming years, but warned any lapse in safety could slow down progress. 

“One thing to keep in mind, just as we saw with the passenger vehicle automation race, the moment you even have one accident, that could really set the industry back,” Mohan said. 

“So I think it remains to be seen how quickly this develops.”"

Monday, January 24, 2022

Bloomsbury Acquisition of ABC-CLIO To Strengthen Tech, Market Reach; Library Journal, January 12, 2022

Maggie Knapp, Library Journal; Bloomsbury Acquisition of ABC-CLIO To Strengthen Tech, Market Reach

"Bloomsbury Publishing purchased ABC-CLIO in December 2021 for $22.9 million, bringing ABC-CLIO’s four imprints and 32 databases into U.K.-based Bloomsbury’s academic and professional division.

Becky Snyder, co-owner of ABC-CLIO, has 35 years with the company, which was founded in 1955. She noted that from her company’s perspective Bloomsbury was an optimal fit, as leadership looked at options to carry on the ABC-CLIO legacy. The imprints Praeger, Greenwood, and ABC-CLIO Solutions, as well as the company’s databases, often focus on historical and current events topics, presenting overviews, chronologies, primary sources, and analysis primarily for use in high school and up. Libraries Unlimited publishes educational and professional content for library and information service professionals.

One of the most compelling areas of opportunity Bloomsbury offered, Snyder said, was its strength and investment in current and future technology, which will allow ABC-CLIO products to continue the company’s commitment to scholarship while navigating accessibility standards, privacy protection, and emerging platforms and distribution formats." 

Sunday, November 28, 2021

193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence; UN News, November 25, 2021

UN News; 193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence

"Artificial intelligence is present in everyday life, from booking flights and applying for loans to steering driverless cars. It is also used in specialized fields such as cancer screening or to help create inclusive environments for the disabled.

According to UNESCO, AI is also supporting the decision-making of governments and the private sector, as well as helping combat global problems such as climate change and world hunger.

However, the agency warns that the technology ‘is bringing unprecedented challenges’.

“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable Artificial Intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues”, UNESCO explained in a statement.

Considering this, the adopted text aims to guide the construction of the necessary legal infrastructure to ensure the ethical development of this technology.

“The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member states in its implementation and ask them to report regularly on their progress and practices”, said UNESCO chief Audrey Azoulay."

Sunday, November 21, 2021

Artificial intelligence is getting better at writing, and universities should worry about plagiarism; The Conversation, November 4, 2021

The Conversation; Artificial intelligence is getting better at writing, and universities should worry about plagiarism


"The dramatic rise of online learning during the COVID-19 pandemic has spotlit concerns about the role of technology in exam surveillance — and also in student cheating. 

Some universities have reported more cheating during the pandemic, and such concerns are unfolding in a climate where technologies that allow for the automation of writing continue to improve.

Over the past two years, the ability of artificial intelligence to generate writing has leapt forward significantly, particularly with the development of what’s known as the language generator GPT-3. With this, companies such as Google, Microsoft and NVIDIA can now produce “human-like” text.

AI-generated writing has raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism. As scholars with an interest in academic integrity and the intersections of work, society and educators’ labour, we believe that educators and parents should be, at the very least, paying close attention to these significant developments."

Monday, May 3, 2021

Stephen Fry Would Like to Remind You That You Have No Free Will; The New York Times Magazine, May 2, 2021

David Marchese , The New York Times Magazine; Stephen Fry Would Like to Remind You That You Have No Free Will


"You said earlier you’ve been reading philosophy. Is there a particular idea that you’re tickled by lately? I suppose the real biggie is free will. I find it interesting that no one really talks about it: I would say that 98 percent of all philosophers would agree with me that essentially free will is a myth. It doesn’t exist. That ought to be shocking news on the front of every newspaper. I’m not saying we don’t look both ways before we cross the road; we decide not to leave it to luck as to whether a car is going to hit us. Nor am I saying that we don’t have responsibility for our actions: We have agency over the body in which our minds and consciousness dwell. But we can’t choose our brains, we can’t choose our genes, we can’t choose our parents. There’s so much. I mean, look at the acts of a sociopath, which are performed with absolute will in the sense that he means to do what he’s doing, but he’s doing it because he has desires and impulses which he didn’t choose to have. Nobody elects to be a sociopath. The difference between us and them is one of degree. That certainly interests me. But, generally speaking, I suppose ethics is the most interesting. You do wonder if there are enough people in the world thinking about the consequences of A.I. and technology...

What’s so interesting now is that in 20 or 30 years, we will be in exactly the same ethical positions as Prometheus and Zeus. We will say, “A.I. has reached this event horizon, this transformative moment in which it becomes self-conscious.” Will we then say we have to turn those machines off — be like Zeus — and not give A.I. fire? Or some will be like Prometheus. They will say, “Give A.I. fire; it would be fantastic to watch these creatures have their own will.”"

Monday, February 8, 2021

Want to Reverse Inequality? Change Intellectual Property Rules.; The Nation, February 8, 2021

Dean Baker, The Nation; Want to Reverse Inequality? Change Intellectual Property Rules.

Changes in IP have done far more than tax cuts to increase inequality—and US protection of IP could lead to a cold war with China.

"While the Reagan, George W. Bush, and Trump tax cuts all gave more money to the rich, policy changes in other areas, especially intellectual property have done far more to redistribute income upward. In the past four decades, a wide array of changes—under both Democratic and Republican presidents—made patent and copyright protection both longer and stronger."

Tuesday, January 19, 2021

Why Ethics Matter For Social Media, Silicon Valley And Every Tech Industry Leader; Forbes, January 14, 2021

Rob Dube, Forbes; Why Ethics Matter For Social Media, Silicon Valley And Every Tech Industry Leader

"At one time, the idea of technology and social media significantly influencing society and politics would’ve sounded crazy. Now, with technology so embedded into the fabric of our lives, it’s a reality that raises legitimate questions about Silicon Valley’s ethical responsibility. 

Should tech companies step in to create and enforce guidelines within their platforms if they believe such policies would help the greater good? Or should leaders allow their technology to evolve organically without filters or manipulation? 

One authority on this fascinating topic is Casey Fiesler—a researcher, assistant professor at the University of Colorado Boulder, and expert on tech ethics. She is also a graduate of Vanderbilt Law School. There, she found a passion for the intersections between law, ethics, and technology."

Sunday, June 7, 2020

These drones will plant 40,000 trees in a month. By 2028, they’ll have planted 1 billion; Fast Company, May 15, 2020

Adele Peters, Fast Company; These drones will plant 40,000 trees in a month. By 2028, they’ll have planted 1 billion

"Flash Forest, the Canadian startup behind the project, plans to use its technology to plant 40,000 trees in the area this month. By the end of the year, as it expands to other regions, it will plant hundreds of thousands of trees. By 2028, the startup aims to have planted a full 1 billion trees.

The company, like a handful of other startups that are also using tree-planting drones, believes that technology can help the world reach ambitious goals to restore forests to stem biodiversity loss and fight climate change."

Tuesday, April 21, 2020

IT’S ABOUT ETHICS IN COMIC BOOK JOURNALISM: THE POLITICS OF X-MEN: RED; Comic Watch, April 18, 2020

Bethany W Pope, Comic Watch; IT’S ABOUT ETHICS IN COMIC BOOK JOURNALISM: THE POLITICS OF X-MEN: RED

"X-Men: Red."

Tuesday, April 7, 2020

Artificial Intelligence and COVID-19: How Technology Can Understand, Track, and Improve Health Outcomes; Stanford University, April 2, 2020

Shana Lynch, Stanford University; Artificial Intelligence and COVID-19: How Technology Can Understand, Track, and Improve Health Outcomes


"On April 1, nearly 30 artificial intelligence (AI) researchers and experts met virtually to discuss ways AI can help understand COVID-19 and potentially mitigate the disease and the developing public health crisis.

COVID-19 and AI: A Virtual Conference, hosted by the Stanford Institute for Human-Centered Artificial Intelligence, brought together Stanford faculty across medicine, computer science, and humanities; politicians, startup founders, and researchers from universities across the United States.

“In these trying times, I am especially inspired by the eagerness and diligence of scientists, clinicians, mathematicians, engineers, and social scientists around the world that are coming together to combat this pandemic,” Fei-Fei Li, Denning Family Co-Director of Stanford HAI, told the live audience.

Here are the top-line takeaways from the day. Visit HAI’s website for more in-depth coverage or watch the full conference video."

Wednesday, November 13, 2019

Engineers need a required course in ethics; Quartz, November 8, 2019

Kush Saxena, Quartz; Engineers need a required course in ethics

"The higher education sector cannot ignore its role in preparing students for the future of work, as that is quite literally the purpose it serves. That includes integrating ethics into comprehensive computer science curricula.

Universities like MIT are leading the way by creating research collaborations across disciplines such as law and government, finding ways to embed topics around the societal impact of computing into the technical curriculum.

This type of rigorous education shouldn’t be accessible only to students who can get into elite universities. As more jobs require engineering skills, all institutions—from coding boot camps to community college courses to advanced state-funded PhD programs—need to follow suit."

Thursday, April 18, 2019

Ethics Alone Can’t Fix Big Tech: Ethics can provide blueprints for good tech, but it can’t implement them.; Slate, April 17, 2019

Daniel Susser, Slate; Ethics Alone Can’t Fix Big Tech: Ethics can provide blueprints for good tech, but it can’t implement them.
"Ethics requires more than rote compliance. And it’s important to remember that industry can reduce any strategy to theater. Simply focusing on law and policy won’t solve these problems, since they are equally (if not more) susceptible to watering down. Many are rightly excited about new proposals for state and federal privacy legislation, and for laws constraining facial recognition technology, but we’re already seeing industry lobbying to strip them of their most meaningful provisions. More importantly, law and policy evolve too slowly to keep up with the latest challenges technology throws at us, as is evident from the fact that most existing federal privacy legislation is older than the internet.

The way forward is to see these strategies as complementary, each offering distinctive and necessary tools for steering new and emerging technologies toward shared ends. The task is fitting them together.

By its very nature ethics is idealistic. The purpose of ethical reflection is to understand how we ought to live—which principles should drive us and which rules should constrain us. However, it is more or less indifferent to the vagaries of market forces and political winds. To oversimplify: Ethics can provide blueprints for good tech, but it can’t implement them. In contrast, law and policy are creatures of the here and now. They aim to shape the future, but they are subject to the brute realities—social, political, economic, historical—from which they emerge. What they lack in idealism, though, is made up for in effectiveness. Unlike ethics, law and policy are backed by the coercive force of the state."

Tuesday, April 16, 2019

Course organized by students tackles ethics in CS; The Brown Daily Herald, April 15, 2019

Janet Chang, The Brown Daily Herald; Course organized by students tackles ethics in CS

"Last spring, students in a new computer science social change course developed software tools for a disaster relief organization to teach refugee children about science and technology, a Chrome extension to filter hate speech on the internet and a mobile app to help doctors during patient visits.

Called CSCI 1951I: “CS for Social Change,” the course — now in its second iteration — was developed for computer science, design and engineering students to discuss and reflect on the social impact of their work while building practical software tools to help local and national partner nonprofits over the 15-week semester.

The course was initially conceived by Nikita Ramoji ’20, among others, who was a co-founder of CS for Social Change, a student organization that aims to add ethics education to college computer science departments. “The (general consensus) was that we were getting a really great computer science education, but we didn’t really have that social component,” she said."

Thursday, April 11, 2019

Do You Know What You’ve Given Up?; The New York Times, April 10, 2019

James Bennet, The New York Times; Do You Know What You’ve Given Up?

"It seems like a good moment to pause and consider the choices we’ve already made, and the ones that lie ahead. That’s why Times Opinion is launching The Privacy Project, a monthslong initiative to explore the technology, to envision where it’s taking us, and to convene debate about how we should control it to best realize, rather than stunt or distort, human potential."

Monday, April 8, 2019

Expert Panel: What even IS 'tech ethics'?; TechCrunch, April 2, 2019

Greg Epstein, TechCrunch; Expert Panel: What even IS 'tech ethics'?

"It’s been a pleasure, this past month, to launch a weekly series investigating issues in tech ethics, here at TechCrunch. As discussions around my first few pieces have taken off, I’ve noticed one question recurring in a number of different ways: what even IS “tech ethics”? I believe there’s lots of room for debate about what this growing field entails, and I hope that remains the case because we’re going to need multiple ethical perspectives on technologies that are changing billions of lives. That said, we need to at least attempt to define what we’re talking about, in order to have clearer public conversations about the ethics of technology."

Tuesday, February 26, 2019

Fixing Tech’s Ethics Problem Starts in the Classroom; The Nation, February 21, 2019

Stephanie Wykstra, The Nation; Fixing Tech’s Ethics Problem Starts in the Classroom

 

"Casey Fiesler, a faculty member in the Department of Information Science at the University of Colorado Boulder, said that a common model in engineering programs is a stand-alone ethics class, often taught towards the end of a program. But there’s increasingly a consensus among those teaching tech ethics that a better model is to discuss ethical issues alongside technical work. Evan Peck, a computer scientist at Bucknell University, writes that separating ethical from technical material means that students get practice “debating ethical dilemmas…but don’t get to practice formalizing those values into code.” This is particularly a problem, said Fiesler, if an ethics class is taught by someone from outside a student’s field, and the professors in their computer-science courses rarely mention ethical issues. On the other hand, classes focused squarely on the ethics of technology allow students to dig deeply into complicated questions. “I think the best solution is to do both…but if you can’t do both, incorporating [ethics material into regular coursework] is the best option,” Fiesler said."