Saturday, June 29, 2024

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoe Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hour worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified worries such as inaccurate responses (70%), data security (68%), privacy and confidentiality of data (62%), complying with laws and regulations (60%), and ethical and responsible usage (57%) as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

"In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

The Voices of A.I. Are Telling Us a Lot; The New York Times, June 28, 2024

Amanda Hess, The New York Times; The Voices of A.I. Are Telling Us a Lot

"Tech companies advertise their virtual assistants in terms of the services they provide. They can read you the weather report and summon you a taxi; OpenAI promises that its more advanced chatbots will be able to laugh at your jokes and sense shifts in your moods. But they also exist to make us feel more comfortable about the technology itself.

Johansson’s voice functions like a luxe security blanket thrown over the alienating aspects of A.I.-assisted interactions. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.,” Johansson said of Sam Altman, OpenAI’s founder. “He said he felt that my voice would be comforting to people.”

It is not that Johansson’s voice sounds inherently like a robot’s. It’s that developers and filmmakers have designed their robots’ voices to ease the discomfort inherent in robot-human interactions. OpenAI has said that it wanted to cast a chatbot voice that is “approachable” and “warm” and “inspires trust.” Artificial intelligence stands accused of devastating the creative industries, guzzling energy and even threatening human life. Understandably, OpenAI wants a voice that makes people feel at ease using its products. What does artificial intelligence sound like? It sounds like crisis management."

Friday, June 28, 2024

Joe Biden Is a Good Man and a Good President. He Must Bow Out of the Race.; The New York Times, June 28, 2024

Thomas L. Friedman, The New York Times; Joe Biden Is a Good Man and a Good President. He Must Bow Out of the Race.

"We are at the dawn of an artificial intelligence revolution that is going to change EVERYTHING FOR EVERYONE — how we work, how we learn, how we teach, how we trade, how we invent, how we collaborate, how we fight wars, how we commit crimes and how we fight crimes. Maybe I missed it, but I did not hear the phrase “artificial intelligence” mentioned by either man at the debate."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Thursday, June 27, 2024

God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?; Scientific American, March 19, 2024

Scientific American; God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?

"QuranGPT—which has now been used by about 230,000 people around the world—is just one of a litany of chatbots trained on religious texts that have recently appeared online. There’s Bible.Ai, Gita GPT, Buddhabot, Apostle Paul AI, a chatbot trained to imitate 16th-century German theologian Martin Luther, another trained on the works of Confucius, and yet another designed to imitate the Delphic oracle. For millennia adherents of various faiths have spent long hours—or entire lifetimes—studying scripture to glean insights into the deepest mysteries of human existence, say, the fate of the soul after death.

The creators of these chatbots don’t necessarily believe large language models (LLMs) will put these age-old theological enigmas to rest. But they do think that with their ability to identify subtle linguistic patterns within vast quantities of text and provide responses to user prompts in humanlike language (a feature called natural-language processing, or NLP), the bots can theoretically synthesize spiritual insights in a matter of seconds, saving users both time and energy. It’s divine wisdom on demand.

Many professional theologians, however, have serious concerns about blending LLMs with religion...

The danger of hallucination in this context is compounded by the fact that religiously oriented chatbots are likely to attract acutely sensitive questions—questions one might feel too embarrassed or ashamed to ask a priest, an imam, a rabbi or even a close friend. During a software update to QuranGPT last year, Khan had a brief glimpse into user prompts, which are usually invisible to him. He recalls seeing that one person had asked, “I caught my wife cheating on me—how should I respond?” Another, more troublingly, had asked, “Can I beat my wife?”

Khan was pleased with the system’s responses (it urged discussion and nonviolence on both counts), but the experience underscored the ethical gravity behind his undertaking."

New Tactic in China’s Information War: Harassing a Critic’s Child in the U.S.; The New York Times, June 27, 2024

Steven Lee Myers et al., The New York Times; New Tactic in China’s Information War: Harassing a Critic’s Child in the U.S.

"A covert propaganda network linked to the country’s security services has barraged not just Mr. Deng but also his teenage daughter with sexually suggestive and threatening posts on popular social media platforms, according to researchers at both Clemson University and Meta, which owns Facebook and Instagram...

The harassment fits a pattern of online intimidation that has raised alarms in Washington, as well as Canada and other countries where China’s attacks have become increasingly brazen. The campaign has included thousands of posts the researchers have linked to a network of social media accounts known as Spamouflage or Dragonbridge, an arm of the country’s vast propaganda apparatus.

China has long sought to discredit Chinese critics, but targeting a teenager in the United States is an escalation, said Darren Linvill, a founder of the Media Forensics Hub at Clemson, whose researchers documented the campaign against Mr. Deng. Federal law prohibits severe online harassment or threats, but that appears to be no deterrent to China’s efforts."

The Supreme Court rules for Biden administration in a social media dispute with conservative states; AP, June 26, 2024

Mark Sherman, AP; The Supreme Court rules for Biden administration in a social media dispute with conservative states

"The Supreme Court on Wednesday sided with the Biden administration in a dispute with Republican-led states over how far the federal government can go to combat controversial social media posts on topics including COVID-19 and election security.

By a 6-3 vote, the justices threw out lower-court rulings that favored Louisiana, Missouri and other parties in their claims that federal officials leaned on the social media platforms to unconstitutionally squelch conservative points of view.

Justice Amy Coney Barrett wrote for the court that the states and other parties did not have the legal right, or standing, to sue. Justices Samuel Alito, Neil Gorsuch and Clarence Thomas dissented. 

The decision should not affect typical social media users or their posts."

AI, Legal Tech, and Ethics: The Florida Bar’s Groundbreaking Guidelines; Legal Talk Network, June 27, 2024

Adriana Linares, Legal Talk Network; AI, Legal Tech, and Ethics: The Florida Bar’s Groundbreaking Guidelines

"Two friends of the podcast return for this episode of New Solo to talk all things legal tech and the latest in AI services for lawyers. Guests Renee Thompson and Liz McCausland are both accomplished mediators and solo practitioners who depend on tech to boost productivity and keep up with their busy lives.

AI is an emerging technology that is finding its way to more and more law offices. McCausland and Thompson served on a Florida Bar committee to draft an advisory opinion laying out ethical guidelines for the use of AI in legal practice.

With ethical guardrails published, what’s next? A best practices guide and clear definitions and examples of AI for legal services. Client consent, the impact on fees and confidentiality, and even how judges view the use of AI and informing the court that AI played a role in your presentation are all pieces of the puzzle."

Wednesday, June 26, 2024

The MTV News website is gone; The Verge, June 25, 2024

Andrew Liszewski, The Verge; The MTV News website is gone

"The archives of the MTV News website, which had remained accessible online after the unit was shut down last year by parent company Paramount Global, have now been completely taken offline. As Variety reported yesterday, both mtvnews.com and mtv.com/news now redirect visitors to the MTV website’s front page...

Although the MTV News website was no longer publishing new stories, its extensive archive, dating back over two decades to its launch in 1996, remained online. But as former staffers discovered yesterday, that archive is no longer accessible."

Tuesday, June 25, 2024

Collaborative ethics: innovating collaboration between ethicists and life scientists; Nature, June 20, 2024

Nature; Collaborative ethics: innovating collaboration between ethicists and life scientists

"Is there a place for ethics in scientific research, not about science or after scientific breakthroughs? We are convinced that there is, and we describe here our model for collaboration between scientists and ethicists.

Timely collaboration with ethicists benefits science, as it can make an essential contribution to the research process. In our view, such critical discussions can improve the efficiency and robustness of outcomes, particularly in groundbreaking or disruptive research. The discussion of ethical implications during the research process can also prepare a team for a formal ethics review and criticism after publication.

The practice of collaborative ethics also advances the humanities, as direct involvement with the sciences allows long-held assumptions and arguments to be put to the test. As philosophers and ethicists, we argue that innovative life sciences research requires new methods in ethics, as disruptive concepts and research outcomes no longer fit traditional notions and norms. Those methods should not be developed at a distance from the proverbial philosopher’s armchair or in after-the-fact ethics analysis. We argue that, rather, we should join scientists and meet where science evolves in real-time: as Knoppers and Chadwick put it in the early days of genomic science, “Ethical thinking will inevitably continue to evolve as the science does”1."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

After uproar over ethics, new 'Washington Post' editor won't take the job; NPR, June 21, 2024

David Folkenflik, NPR; After uproar over ethics, new 'Washington Post' editor won't take the job

"The ethical records of both men have come under withering scrutiny in recent days.

Lewis worked with Winnett at the Sunday Times in Britain in the early 2000s. After Lewis was named the youngest editor in the Daily Telegraph's history, he hired Winnett there. The two men, both Brits, worked hand-in-glove and won accolades in the U.K. for their scoops.

Yet NPR, the New York Times and the Post have reported on a parade of episodes involving both men in conduct that would be barred under professional ethics codes at major American news outlets, including the Post.

The incidents include paying a six-figure sum to secure a major scoop; planting a junior reporter in a government job to obtain secret and even classified documents; and relying on a private investigator who used subterfuge to secure people's confidential records and documents. The investigator was later arrested."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

Pope Francis meets Biden, Zelensky, and talks A.I. ethics at G7 summit; America: The Jesuit Review, June 20, 2024

America: The Jesuit Review; Pope Francis meets Biden, Zelensky, and talks A.I. ethics at G7 summit

"Pope Francis met individually with 10 world leaders at the G7 summit. He also made history as the first pope to attend and deliver a speech at the gathering, where he urged delegates to prioritize ethics in artificial intelligence for the common good. Earlier that day, he had met with 100 international comedians at the Vatican. In this episode of “Inside the Vatican,” hosts Colleen Dulle and Gerard O’Connell bring you inside both events."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Thursday, June 20, 2024

Something’s Rotten About the Justices Taking So Long on Trump’s Immunity Case; The New York Times, June 19, 2024

Leah Litman, The New York Times; Something’s Rotten About the Justices Taking So Long on Trump’s Immunity Case

"The court is a busy place, though the justices are completing decisions at the second slowest rate since the 1946 term, according to a recent article in The Wall Street Journal...

In 1974, the Watergate special prosecutor squared off against President Richard Nixon over his refusal to release Oval Office tape recordings of his conversations with aides. Nixon argued that he was immune from a subpoena seeking the recordings. Last year, Steve Vladeck, a law professor at the University of Texas at Austin, looked at how long that case took once it reached the Supreme Court on May 31 of that year. The justices gave the parties 21 days to file their briefs, and then 10 days to respond. Oral argument was held on July 8. Sixteen days later, on July 24, the court issued its 8-0 decision ordering Nixon to turn over the tapes. The chief justice, Warren Burger, who had been nominated to the court by Nixon, wrote the opinion. Total elapsed time: 54 days. Nixon subsequently resigned.

As of Tuesday, 110 days had passed since the court agreed to hear the Trump immunity case. And still no decision."

Wednesday, June 19, 2024

Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference; University of Oxford, In-Person Event on June 20, 2024

University of Oxford; Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference

"The Oxford University Institute for Ethics in AI is hosting an exciting one day conference in Athens on the 20th of June 2024, The Lyceum Project: AI Ethics with Aristotle, in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research...

Set in the cradle of philosophy, adjacent to the ancient site of Aristotle’s school, the conference will showcase some of the greatest philosophical minds and feature a special address from the Greek Prime Minister, Kyriakos Mitsotakis, as they discuss the most pressing question of our times – the ethical regulation of AI.

The conference will be free to attend (register to attend).

Professor John Tasioulas, Director of the Institute for Ethics in AI, said: ‘The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI. We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges.’

George Nounesis, Director & Chairman of the Board of NCSR Demokritos said: ‘There is no such thing as ethically neutral AI; and high-quality research on AI cannot ignore its inherent ethical aspects. Ancient Greek philosophy can serve as a valuable resource guiding us in this discourse. In this respect, Aristotelian philosophy can play a pivotal role by nurturing ethical reasoning and a comprehensive understanding of the societal implications of AI, broadening the dialogue with society.’

Alexandra Mitsotaki, President of the World Human Forum, said: ‘This conference is an important first step towards our vision to bring Aristotle’s lyceum alive again by showing the relevance of the teachings of the great philosopher for today’s global challenges. We aspire for the Lyceum to become a global point of connection. This is, after all, the original location where the great philosopher thought, taught and developed many of the ideas that formed Western Civilisation.’"

Tuesday, June 18, 2024

POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, June 14, 2024

The Vatican; POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, Borgo Egnazia (Puglia)

[Excerpt]

"An exciting and fearsome tool

Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ (Ex 35:31)”. [1] Science and technology are therefore brilliant products of the creative potential of human beings. [2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings. [3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use. [4]"

‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate; Bloomberg Law, June 18, 2024

Laura Heymann, Bloomberg Law; ‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate

"The US Supreme Court’s June 13 decision in the “Trump Too Small” trademark case revealed a potential rift among the justices on First Amendment jurisprudence but did little to advance intellectual property law...

Trademark law, the Supreme Court has said in prior cases, is primarily about two goals: preventing confusion among consumers by ensuring accurate source identification and preserving trademark owners’ reputation and goodwill. For these justices, the names clause passed muster because prohibiting the registration of personal names without consent was self-evidently reasonable in light of these purposes; no further analysis was required."