Sunday, February 17, 2019

Roger McNamee: ‘It’s bigger than Facebook. This is a problem with the entire industry’; The Observer via The Guardian, February 16, 2019

Alex Hern, The Observer via The Guardian; Roger McNamee: ‘It’s bigger than Facebook. This is a problem with the entire industry’

"Mark Zuckerberg’s mentor and an early investor in Facebook on why his book Zucked urges people to turn away from big tech’s toxic business model

Roger McNamee is an American fund manager and venture capitalist who has made investments in, among others, Electronic Arts, Sybase, Palm Inc and Facebook. In 2004, along with Bono and others, he co-founded Elevation Partners, a private equity firm. He has recently published Zucked: Waking Up to the Facebook Catastrophe...

Is this a Facebook problem or a Mark Zuckerberg problem?
It’s bigger than Facebook. This is a problem with the entire internet platform industry, and Mark is just one of the two most successful practitioners of it.

This is a cultural model that infected Silicon Valley around 2003 – so, exactly at the time that Facebook and LinkedIn were being started – and it comes from a specific route.

Silicon Valley spent the period from 1950 to 2003 first with the space programme, and then with personal computers and the internet. The cultures of those things were very idealistic: make the world a better place through technology. Empower the people who use technology to be their best selves. Steve Jobs famously characterised his computers as bicycles for the mind.

The problem with Google and Facebook is that their goal is to replace humans in many of the core activities of life...

Do you think there’s a version of history in which we don’t end up in this situation? 

The culture into which Facebook was born was this deeply libertarian philosophy that was espoused by their first investor, Peter Thiel, and the other members of the so-called “PayPal mafia”.

They were almost single-handedly responsible for creating the social generation of companies. And their insights were brilliant. Their ideas about how to grow companies were revolutionary and extraordinarily successful. The challenge was that they also had a very different philosophy from the prior generations of Silicon Valley. Their notion was that disruption was perfectly reasonable because you weren’t actually responsible for anybody but yourself, so you weren’t responsible for the consequences of your actions.

That philosophy got baked into their companies in this idea that you could have a goal – in Facebook’s case, connecting the whole world on one network – and that goal would be so important that it justified whatever means were necessary to get there."

Saturday, February 16, 2019

Vatican, Microsoft team up on artificial intelligence ethics; The Washington Post, February 13, 2019

Associated Press via The Washington Post; Vatican, Microsoft team up on artificial intelligence ethics

"The Vatican says it is teaming up with Microsoft on an academic prize to promote ethics in artificial intelligence.

Pope Francis met privately on Wednesday with Microsoft President Brad Smith and the head of a Vatican scientific office that promotes Catholic Church positions on human life.

The Vatican said Smith and Archbishop Vincenzo Paglia of the Pontifical Academy for Life told Francis about the international prize for an individual who has successfully defended a dissertation on ethical issues involving artificial intelligence."

Thursday, February 14, 2019

Parkland school turns to experimental surveillance software that can flag students as threats; The Washington Post, February 13, 2019

Drew Harwell, The Washington Post; Parkland school turns to experimental surveillance software that can flag students as threats

"The specter of student violence is pushing school leaders across the country to turn their campuses into surveillance testing grounds on the hope it’ll help them detect dangerous people they’d otherwise miss. The supporters and designers of Avigilon, the AI service bought for $1 billion last year by tech giant Motorola Solutions, say its security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.

But the advanced monitoring technologies ensure that the daily lives of American schoolchildren are subjected to close scrutiny from systems that will automatically flag certain students as suspicious, potentially spurring a response from security or police forces, based on the work of algorithms that are hidden from public view.

The camera software has no proven track record for preventing school violence, some technology and civil liberties experts argue. And the testing of their algorithms for bias and accuracy — how confident the systems are in identifying possible threats — has largely been conducted by the companies themselves."

What to tell patients when artificial intelligence is part of the care team; American Medical Association (AMA), February 13, 2019

Staff News Writer, American Medical Association (AMA); What to tell patients when artificial intelligence is part of the care team


"Artificial intelligence (AI) in health care can help manage and analyze data, make decisions and conduct conversations. The availability of AI is destined to drastically change physicians’ roles and everyday practices. It is key that physicians be able to adapt to changes in diagnostics, therapeutics and practices of maintaining patient safety and privacy. However, physicians need to be aware of ethically complex questions about implementation, uses and limitations of AI in health care.   

The February issue of the AMA Journal of Ethics® (@JournalofEthics) features numerous perspectives on AI in health care and gives you an opportunity to earn CME credit."

Wednesday, February 13, 2019

AI ethics: Time to move beyond a list of principles; Information Age, February 13, 2019

Nick Ismail, Information Age; AI ethics: Time to move beyond a list of principles

"AI ethics should be a universally accepted practice.

AI is only as good as the data behind it, and as such, this data must be fair and representative of all people and cultures in the world. The technology must also be developed in accordance with international laws, and we must tread carefully with the integration of AI into weaponry — all this fits into the idea of AI ethics. Is it moral, is it safe…is it right?...

Indeed, ‘an ethical approach to the development and deployment of algorithms, data and AI (ADA) requires clarity and consensus on ethical concepts and resolution of tensions between values,’ according to a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

Organisations and governments need help, and this report provides a broad roadmap for work on the ethical and societal implications of ADA-based technologies."

Defying Parents, A Teen Decides To Get Vaccinated; NPR, February 9, 2019

Amanda Morris and Scott Simon, NPR; Defying Parents, A Teen Decides To Get Vaccinated

"Ethan Lindenberger is getting vaccinated for, well, just about everything.

He's 18 years old, but had never received vaccines for diseases like hepatitis, polio, measles, mumps, rubella, or the chickenpox.

Lindenberger's mother, Jill Wheeler, is anti-vaccine. He said she has been influenced by online misinformation, such as a debunked study that claimed certain vaccines were linked with autism, or a theory that vaccines cause brain damage. Incorrect ideas like these have spread like wildfire, so much so that the CDC has explicitly tried to combat them, posting pages like "Vaccines Do Not Cause Autism.""

Facebook under pressure to halt rise of anti-vaccination groups; The Guardian, February 12, 2019

Ed Pilkington and Jessica Glenza, The Guardian; Facebook under pressure to halt rise of anti-vaccination groups

"Dr Noni MacDonald, a professor of pediatrics at Dalhousie University in Halifax, Nova Scotia, Canada, who has worked as an expert adviser to the WHO on immunization, questioned why Facebook was unrestrained by the stringent controls against misinformation put on drug companies. “We don’t let big pharma or big food or big radio companies do this, so why should we let this happen in this venue?”

She added: “When a drug company puts a drug up in the formal media, they can’t tell you something false or they will be sued. So why is this different? Why is this allowed?”"

Tuesday, February 12, 2019

A.I. Shows Promise Assisting Physicians; The New York Times, February 11, 2019

Cade Metz, The New York Times; A.I. Shows Promise Assisting Physicians

"Each year, millions of Americans walk out of a doctor’s office with a misdiagnosis. Physicians try to be systematic when identifying illness and disease, but bias creeps in. Alternatives are overlooked.

Now a group of researchers in the United States and China has tested a potential remedy for all-too-human frailties: artificial intelligence.

In a paper published on Monday in Nature Medicine, the scientists reported that they had built a system that automatically diagnoses common childhood conditions — from influenza to meningitis — after processing the patient’s symptoms, history, lab results and other clinical data."

Rethinking Medical Ethics; Forbes, February 11, 2019

Forbes; Rethinking Medical Ethics

"Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision—and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI “black box”? How do they make sure AI systems avoid bias and protect patient privacy?

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges."

‘Sorrow Is the Price You Pay for Love’; The Atlantic, February 5, 2019

Video by Erlend Eirik Mo, The Atlantic; ‘Sorrow Is the Price You Pay for Love’


[Kip Currier: A remarkable short video. Poignant, uplifting, inspiring. A reminder of what matters most, and what's worth striving for and toward.

Watch and share with others.]

"“So much in her story was compelling for me,” Mo told The Atlantic. “It is unique, about a girl doing a male macho dance, and universal, about love and sorrow.”"

EU Recalls Children’s Smartwatch Over Security Concerns; Lexology, February 8, 2019

Hunton Andrews Kurth LLP, Lexology; EU Recalls Children’s Smartwatch Over Security Concerns

"The European Commission has issued an EU-wide recall of the Safe-KID-One children’s smartwatch marketed by ENOX Group over concerns that the device leaves data such as location history, phone and serial numbers vulnerable to hacking and alteration."

Monday, February 11, 2019

A Confederacy of Grift: The subjects of Robert Mueller’s investigation are cashing in; The Atlantic, February 10, 2019

Quinta Jurecic, The Atlantic; A Confederacy of Grift

"For people in the greater Trump orbit, the publicity of a legal clash with Robert Mueller provides a chance to tap into the thriving marketplace of fringe pro-Trump media. Disinformation in America is a business. And the profit to be turned from that business is a warning sign that the alternative stories of the Mueller investigation spun by the president’s supporters will have a long shelf life."

Monday, February 4, 2019

How the Nazis Used the Rule of Law Against Jewish Lawyers; The Daily Beast, February 1, 2019


The Daily Beast; How the Nazis Used the Rule of Law Against Jewish Lawyers

A new book on the persecution of Jewish lawyers under the Third Reich ably documents a dark history—but fails to acknowledge the complicity of the law.

"Released in English for the first time by the American Bar Association (ABA), Lawyers Without Rights is a powerful work of history, commemorating Berlin’s Jewish attorneys while also describing how they were barred from their profession and, in most cases, driven from their city. Unfortunately, the tragedy of Lawyers Without Rights is not confined to history but permeates the ongoing idealization of “the rule of law.”"

Let Children Get Bored Again; The New York Times, February 2, 2019

Pamela Paul, The New York Times;

Let Children Get Bored Again

Boredom teaches us that life isn’t a parade of amusements. More important, it spawns creativity and self-sufficiency.

"Kids won’t listen to long lectures, goes the argument, so it’s on us to serve up learning in easier-to-swallow portions.

But surely teaching children to endure boredom rather than ratcheting up the entertainment will prepare them for a more realistic future, one that doesn’t raise false expectations of what work or life itself actually entails. One day, even in a job they otherwise love, our kids may have to spend an entire day answering Friday’s leftover email. They may have to check spreadsheets. Or assist robots at a vast internet-ready warehouse.

This sounds boring, you might conclude. It sounds like work, and it sounds like life. Perhaps we should get used to it again, and use it to our benefit. Perhaps in an incessant, up-the-ante world, we could do with a little less excitement."

Thursday, January 31, 2019

Facebook has declared sovereignty; The Washington Post, January 31, 2019

Molly Roberts, The Washington Post; Facebook has declared sovereignty

"That’s a lot of control, as Facebook has implicitly conceded by creating this court. But the court alone cannot close the chasm of accountability that renders Facebook’s preeminence so unsettling. Democracy, at least in theory, allows us to change things we do not like. We can vote out legislators who pass policy we disagree with, or who fail to pass policy at all. We cannot vote out Facebook. We can only quit it.

But can we really? Facebook has grown so large and, in many countries, essential that deleting an account seems to many like an impossibility. Facebook isn’t even just Facebook anymore: It is Instagram and WhatsApp, too. To people in many less developed countries, it is the Internet. Many users may feel more like citizens than customers, in that they cannot just quit. But they are not being governed with their consent.

No court — or oversight board — can change that."

An angry historian ripped the ultrarich over tax avoidance at Davos. Then one was given the mic.; The Washington Post, January 31, 2019

Eli Rosenberg, The Washington Post; An angry historian ripped the ultrarich over tax avoidance at Davos. Then one was given the mic.

"Rutger Bregman, a Dutch historian and author who studies poverty and global inequality, had a first this year: being invited to the world’s most prominent gathering of wealthy people — the World Economic Forum’s annual meeting in Switzerland — as a speaker...

[Bregman]...decided to say something during the panel discussion about income inequality he was on, hosted by Time magazine on Friday. He started by saying that he found the conference’s mix of indulgence and global problem-solving a bit bewildering.

“I mean 1,500 private jets have flown in here to hear Sir David Attenborough speak about how we’re wrecking the planet," he said. "I hear people talking the language of participation and justice and equality and transparency. But then almost no one raises the real issue of tax avoidance. And of the rich just not paying their fair share. It feels like I’m at a firefighters conference and no one is allowed to speak about water.

“This is not rocket science,” he said. “We can talk for a very long time about all these stupid philanthropy schemes, we can invite Bono once more, but, come on, we got to be talking about taxes. That’s it. Taxes, taxes, taxes — all the rest is bulls---, in my opinion.”"

The doorbells have eyes: The privacy battle brewing over home security cameras; The Washington Post, January 31, 2019

Geoffrey A. Fowler, The Washington Post; The doorbells have eyes: The privacy battle brewing over home security cameras

"We should recognize this pattern: Tech that seems like an obvious good can develop darker dimensions as capabilities improve and data shifts into new hands. A terms-of-service update, a face-recognition upgrade or a hack could turn your doorbell into a privacy invasion you didn’t see coming."

The Role Of The Centre For Data Ethics And Innovation - What It Means For The UK; Mondaq, January 22, 2019

Jocelyn S. Paulley and David Brennan, Gowling WLG, Mondaq; The Role Of The Centre For Data Ethics And Innovation - What It Means For The UK

"What is the CDEI's role?

The CDEI will operate as an independent advisor to the government and will be led by an independent board of expert members with three core functions:

  • analysing and anticipating risks and opportunities such as gaps in governance and regulation that could impede the ethical and innovative deployment of data and AI;
  • agreeing and articulating best practice such as codes of conduct and standards that can guide ethical and innovative uses of AI; and
  • advising government on the need for action including specific policy or regulatory actions required to address or prevent barriers to innovative and ethical uses of data.
As part of providing these functions, the CDEI will operate under the following principles:

  • appropriately balance objectives for ethical and innovative uses of data and AI to ensure they deliver the greatest benefit for society and the economy;
  • take into account the economic implications of its advice, including the UK's attractiveness as a place to invest in the development of data-driven technologies;
  • provide advice that is independent, impartial, proportionate and evidence-based; and
  • work closely with existing regulators and other institutions to ensure clarity and consistency of guidance.
The CDEI's first project will be exploring the use of data in shaping people's online experiences and investigating the potential for bias in decisions made using algorithms. It will also publish its first strategy document by spring 2019 where it will set out how it proposes to operate with other organisations and other institutions recently announced by the government, namely the AI Council and the Office for AI."

Recent events highlight an unpleasant scientific practice: ethics dumping; The Economist, January 31, 2019

The Economist; Recent events highlight an unpleasant scientific practice: ethics dumping

Rich-world scientists conduct questionable experiments in poor countries

"Ethics dumping is the carrying out by researchers from one country (usually rich, and with strict regulations) in another (usually less well off, and with laxer laws) of an experiment that would not be permitted at home, or of one that might be permitted, but in a way that would be frowned on. The most worrisome cases involve medical research, in which health, and possibly lives, are at stake. But other investigations—anthropological ones, for example—may also be carried out in a more cavalier fashion abroad. As science becomes more international the risk of ethics dumping, both intentional and unintentional, has risen. The suggestion in this case is that Dr He was encouraged and assisted in his project by a researcher at an American university."

State looking for a few ethical people; The Garden Island: Kauai's newspaper since 1901, January 31, 2019

Editorial, The Garden Island: Kauai's newspaper since 1901; State looking for a few ethical people

"Not so easy to say what’s ethical these days, as it seems to depend on one’s standards. Either way, if you’re one of those ethical people, the Hawaii State Ethics Commission wants to hear from you.

The Judicial Council is seeking applicants to fill one upcoming vacancy on the Hawaii State Ethics Commission. The term will run from July 1, 2019 through June 30, 2023...

Some of our brightest minds have tackled the significance of ethics. Here is what a few of them had to say:

• “History shows that where ethics and economics come in conflict, victory is always with economics. Vested interests have never been known to have willingly divested themselves unless there was sufficient force to compel them.” — B. R. Ambedkar

• “The first step in the evolution of ethics is a sense of solidarity with other human beings.” — Albert Schweitzer

• “Non-violence leads to the highest ethics, which is the goal of all evolution. Until we stop harming all other living beings, we are still savages.” — Thomas A. Edison

• “Ethics are more important than laws.” — Wynton Marsalis"

Facebook has been paying teens $20 a month for access to all of their personal data; Vox, January 30, 2019

Kaitlyn Tiffany, Vox; Facebook has been paying teens $20 a month for access to all of their personal data

"The shocking “research” program has restarted a long-standing feud between Facebook and Apple.

Facebook, now entering a second year of huge data-collection scandals, can’t really afford this particular news story. However, it’s possible the company just weighed the risks of public outrage against the benefits of the data and made a deliberate choice: Knowing which apps people are using, how they’re using them, and for how long is extremely useful information for Facebook."

Wednesday, January 30, 2019

Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’; The New York Times, January 29, 2019

Thomas L. Friedman, The New York Times; Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’


[Kip Currier: I just posted this Thomas L. Friedman New York Times piece as a MUST read for the students in my Information Ethics course. The "money quote" regarding the crux of the issue: 

"Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses."

And the call to action for all those who care and can do something, even if it is solely to raise awareness of the promise AND perils of these "deep" technologies:


"This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust."


Friedman leaves out another important "opening"--EDUCATION--and a critically important stakeholder group that is uniquely positioned and which must be ready and able to help prepare citizenry to critically evaluate "deep" technologies and information of all kinds--EDUCATORS.]
 

"Well, even though it’s early, I’m ready to declare the word of the year for 2019.

The word is “deep.”

Why? Because recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before. I’m talking about deep learning, deep insights, deep surveillance, deep facial recognition, deep voice recognition, deep automation and deep artificial minds.

Some of these technologies offer unprecedented promise and some unprecedented peril — but they’re all now part of our lives. Everything is going deep...

Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses...

This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust.

But deep trust and deep loyalty cannot be forged overnight. They take time. That’s one reason this old newspaper I work for — the Gray Lady — is doing so well today. Not all, but many people, are desperate for trusted navigators.

Many will also look for that attribute in our next president, because they sense that deep changes are afoot. It is unsettling, and yet, there’s no swimming back. We are, indeed, far from the shallow now."


Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones; The New York Times, January 29, 2019

Nicole Perlroth, The New York Times; Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones


"A bug this easy to exploit is every company’s worst security nightmare and every spy agency, cybercriminal and stalker’s dream. In emails to Apple’s product security team, Ms. Thompson noted that she and her son were just everyday citizens who believed they had uncovered a flaw that could undermine national security.

“My fear is that this flaw could be used for nefarious purposes,” she wrote in a letter provided to The New York Times. “Although this certainly raises privacy and security issues for private individuals, there is the potential that this could impact national security if, for example, government members were to fall victim to this eavesdropping flaw.”"

Tuesday, January 29, 2019

FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated; The Atlantic, January 29, 2019

Ian Bogost, The Atlantic; FaceTime Is Eroding Trust in Tech

Privacy paranoiacs have been totally vindicated.

"Trustworthy is hardly a word many people use to characterize big tech these days. Facebook’s careless infrastructure upended democracy. Abuse is so rampant on Twitter and Instagram that those services feel designed to deliver harassment rather than updates from friends. Hacks, leaks, and other breaches of privacy, at companies from Facebook to Equifax, have become so common that it’s hard to believe that any digital information is secure. The tech economy seems designed to steal and resell data."

New definition of privacy needed for the social media age; The San Francisco Chronicle, January 28, 2019

Jordan Cunningham, The San Francisco Chronicle; New definition of privacy needed for the social media age

"To bring about meaningful change, we need to fundamentally overhaul the way we define privacy in the social media age.

We need to stop looking at consumers’ data as a commodity and start seeing it as private information that belongs to individuals. We need to look at the impact of technology on young kids with developing brains. And we need to give consumers an easy way to ensure their privacy in homes filled with connected devices.

That’s why I’ve worked with a group of state lawmakers to create the “Your Data, Your Way” package of legislation."

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Meet the data guardians taking on the tech giants; BBC, January 29, 2019

Matthew Wall, BBC; Meet the data guardians taking on the tech giants

"Ever since the world wide web went public in 1993, we have traded our personal data in return for free services from the tech giants. Now a growing number of start-ups think it's about time we took control of our own data and even started making money from it. But do we care enough to bother?"

Big tech firms still don’t care about your privacy; The Washington Post, January 28, 2019

Rob Pegoraro, The Washington Post; Big tech firms still don’t care about your privacy

"Today is Data Privacy Day. Please clap.

This is an actual holiday of sorts, recognized as such in 2007 by the Council of Europe to mark the anniversary of the 1981 opening of Europe’s Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data — the grandfather of such strict European privacy rules as the General Data Protection Regulation.

In the United States, Data Privacy Day has yet to win more official acknowledgment than a few congressional resolutions. It mainly serves as an opportunity for tech companies to publish blog posts about their commitment to helping customers understand their privacy choices.

But in a parallel universe, today might feature different headlines. Consider the following possibilities."

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

"I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain; BoingBoing, January 27, 2019

Cory Doctorow, BoingBoing; Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain

"On Friday, hundreds of us gathered at the Internet Archive, at the invitation of Creative Commons, to celebrate the Grand Re-Opening of the Public Domain, just weeks after the first works entered the American public domain in twenty years.
 

I had the honor of delivering the closing keynote, after a roster of astounding speakers. It was a big challenge and I was pretty nervous, but on reviewing the saved livestream, I'm pretty proud of how it turned out.

Proud enough that I've ripped the audio and posted it to my podcast feed; the video for the keynote is on the Archive and mirrored to Youtube.

The whole event's livestream is also online, and boy do I recommend it."

Monday, January 28, 2019

Ethics as Conversation: A Process for Progress; MIT Sloan Management Review, January 28, 2019

R. Edward Freeman and Bidhan (Bobby) L. Parmar, MIT Sloan Management Review; Ethics as Conversation: A Process for Progress

"We began to use this insight in our conversations with executives and students. We ask them to define what we call “your ethics framework.” Practically, this means defining what set of questions you want to be sure you ask when confronted with a decision or issue that has ethical implications.

The point of asking these questions is partly to anticipate how others might evaluate and interpret your choices and therefore to take those criteria into account as you devise a plan. The questions also help leaders formulate the problem or opportunity in a more nuanced way, which leads to more effective action. You are less likely to be blindsided by negative reactions if you have fully considered a problem.

The exact questions to pose may differ by company, depending on its purpose, its business model, or its more fundamental values. Nonetheless, we suggest seven basic queries that leaders should use to make better decisions on tough issues."