Showing posts with label big data. Show all posts

Friday, May 28, 2021

Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says; CNBC, May 26, 2021

Emily DeCiccio, CNBC; Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says

"Privacy laws in the U.S. need to be updated, especially after Google struck a deal with a major hospital chain, medical ethics expert Arthur Kaplan said Wednesday.

“Now we’ve got electronic medical records, huge volumes of data, and this is like asking a navigation system from a World War I airplane to navigate us up to the space shuttle,” Kaplan, a professor at New York University’s Grossman School of Medicine, told “The News with Shepard Smith.” “We’ve got to update our privacy protection and our informed consent requirements.”

On Wednesday, Google’s cloud unit and hospital chain HCA Healthcare announced a deal that — according to The Wall Street Journal — gives Google access to patient records. The tech giant said it will use that to make algorithms to monitor patients and help doctors make better decisions."

Sunday, May 2, 2021

Killer farm robot dispatches weeds with electric bolts; The Guardian, April 29, 2021

The Guardian; Killer farm robot dispatches weeds with electric bolts

"In a sunny field in Hampshire, a killer robot is on the prowl. Once its artificial intelligence engine has locked on to its target, a black electrode descends and delivers an 8,000-volt blast. A crackle, a puff of smoke, and the target is dead – a weed, boiled alive from the inside.

It is part of a fourth agricultural revolution, its makers say, bringing automation and big data into farming to produce more while harming the environment less. Pressure to cut pesticide use and increasing resistance to the chemicals meant killing weeds was the top priority for the farmers advising the robot company.

The killer robot, called Dick, is the world’s first to target individual weeds in arable crops and, on its first public demonstration, it is destroying broad-leaved weeds identified using pattern recognition. A scout robot, called Tom, has already scanned the field in detail and passed the data to an AI engine called Wilma to plot the targets. Dick’s onboard AI then ensures a bullseye hit."

Tuesday, January 7, 2020

UK Government Plans To Open Public Transport Data To Third Parties; Forbes, December 31, 2019

Simon Chandler, Forbes; UK Government Plans To Open Public Transport Data To Third Parties

"The launch is a significant victory for big data. Occasionally derided as a faddish megatrend or empty buzzword, the announcement of the Bus Open Data Service shows that national governments are willing to harness masses of data and use them to create new services and economic opportunities. Similarly, it's also a victory for the internet of things, insofar as real-time data from buses will be involved in providing users with up-to-date travel info.

That said, the involvement of big data inevitably invites fears surrounding privacy and surveillance."

Thursday, April 25, 2019

The Legal and Ethical Implications of Using AI in Hiring; Harvard Business Review, April 25, 2019

Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband, and Lucinda Schettler, Harvard Business Review; The Legal and Ethical Implications of Using AI in Hiring

"Using AI, big data, social media, and machine learning, employers will have ever-greater access to candidates’ private lives, private attributes, and private challenges and states of mind. There are no easy answers to many of the new questions about privacy we have raised here, but we believe that they are all worthy of public discussion and debate."

Tuesday, April 9, 2019

Why we can’t leave Grindr under Chinese control; The Washington Post, April 9, 2019

Isaac Stone Fish, The Washington Post; Why we can’t leave Grindr under Chinese control

"Because a Chinese company now oversees Grindr’s data, photographs and messages, that means the [Chinese Communist] Party can, if it chooses to do so, access all of that information, regardless of where it’s stored. And that data includes compromising photos and messages from some of America’s most powerful men — some openly gay, and some closeted.

Couple this with China’s progress in developing big data and facial recognition software, industries more advanced there than in the United States, and there are some concerning national security implications of a Chinese-owned Grindr. In other words, Beijing could now exploit compromising photos of millions of Americans. Think what a creative team of Chinese security forces could do with its access to Grindr’s data."

Wednesday, March 6, 2019

The ethical side of big data; Statistics Netherlands, March 4, 2019

Masja de Ree, Statistics Netherlands; The ethical side of big data

"The power of data

Why do we need to highlight the importance of ethical data use? Dechesne explains: ‘I am a mathematician. My world is a world of numbers. My education did not put much emphasis on the power of data in our society, however. Numbers frequently have a veneer of objectivity, but any conclusions drawn on the basis of data are always contingent on the definitions maintained and the decisions made when designing a research project. These choices can have a huge impact on certain groups in our society. This is something we need to be aware of. Decisions have to be made. That is fine, of course, as long as everyone is mindful and transparent when making decisions.’"

Tuesday, February 19, 2019

Drones and big data: the next frontier in the fight against wildlife extinction; The Guardian, February 18, 2019

The Guardian; Drones and big data: the next frontier in the fight against wildlife extinction

"Yet it’s not more widely used because few researchers have the skills to use this type of technology. In biology, where many people are starting to use drones, few can code an algorithm specifically for their conservation or research problem, Wich says. “There’s a lot that needs to be done to bridge those two worlds and to make the AI more user-friendly so that people who can’t code can still use the technology.”

The solutions are more support from tech companies, better teaching in universities to help students overcome their fears of coding, and finding ways to link technologies together in an internet-of-things concept where all the different sensors, including GPS, drones, cameras and sensors, work together."

Tuesday, February 12, 2019

A.I. Shows Promise Assisting Physicians; The New York Times, February 11, 2019

Cade Metz, The New York Times; A.I. Shows Promise Assisting Physicians

"Each year, millions of Americans walk out of a doctor’s office with a misdiagnosis. Physicians try to be systematic when identifying illness and disease, but bias creeps in. Alternatives are overlooked.

Now a group of researchers in the United States and China has tested a potential remedy for all-too-human frailties: artificial intelligence.

In a paper published on Monday in Nature Medicine, the scientists reported that they had built a system that automatically diagnoses common childhood conditions — from influenza to meningitis — after processing the patient’s symptoms, history, lab results and other clinical data."

Thursday, January 10, 2019

All of Us program wants to change the face of medicine; University of Pittsburgh: University Times, January 8, 2019

Susan Jones, University of Pittsburgh: University Times; All of Us program wants to change the face of medicine

"Dr. Steven Reis wants all of you to become part of All of Us.

Pitt received a $46 million award in 2016 from the National Institutes of Health to build the partnerships and infrastructure needed to carry out the All of Us initiative, which seeks to gather health information from 1 million people nationwide to create a database to study different diseases and other maladies, and in the process change the face of medicine.

In Pennsylvania, Pitt is responsible for recruiting 120,000 participants and by early this week had reached 11,610. Nationally, there are more than a dozen other organizations now gathering participants and more than 80,000 people have enrolled nationwide. There are between 40 and 50 people working on the project at Pitt...

The institute “supports translational research, meaning how to get research from the bench to the bedside, to the patient, to practice, to the community, to health policy,” Reis said...

The information will be stored in a secure central database created by Vanderbilt University Medical Center, Verily Life Sciences (a Google company) and the Broad Institute in Cambridge, Mass. Volunteers will have access to their study results, along with summarized data from across the program."

Thursday, December 20, 2018

How You Can Help Fight the Information Wars: Silicon Valley won’t save us. We’re on our own.; The New York Times, December 18, 2018

Kara Swisher, The New York Times; How You Can Help Fight the Information Wars: Silicon Valley won’t save us. We’re on our own.

[Kip Currier: A rallying cry to all persons passionate about and/or working on issues related to information literacy and evaluating information...]

"For now, it’s not clear what we can do, except take control of our own individual news consumption. Back in July, in fact, Ms. DiResta advised consumer restraint as the first line of defense, especially when encountering information that any passably intelligent person could guess might have been placed by a group seeking to manufacture discord.

“They’re preying on your confirmation bias,” she said. “When content is being pushed to you, that’s something that you want to see. So, take the extra second to do the fact-check, even if it confirms your worst impulses about something you absolutely hate — before you hit the retweet button, before you hit the share button, just take the extra second.”

If we really are on our own in this age of information warfare, as the Senate reports suggest, there’s only one rule that can help us win it: See something, say nothing."

Friday, November 9, 2018

In Favor of the Caselaw Access Project; The Harvard Crimson, November 7, 2018

The Crimson Editorial Board, The Harvard Crimson; In Favor of the Caselaw Access Project

"We hope that researchers will use these court opinions to further advance academic scholarship in this area. In particular, we hope that computer programmers are able to take full advantage of this repository of information. As Ziegler noted, no lawyer will be able to take full advantage of the millions of pages in the database, but computers have an advantage in this regard. Like Ziegler, we are hopeful that researchers using the database will be able to learn more about less understood aspects of the legal system — such as how courts influence each other and deal with disagreements. Those big-picture questions could not have been answered as well without the information provided by this new database.

This project is a resounding success for the Harvard Library, which happens also to be looking for a new leader. We hope that the person hired for the job will be similarly committed to projects that increase access to information — a key value that all who work in higher education should hold near and dear. In addition to maintaining the vast amounts of histories and stories already in the system, Harvard’s libraries should seek to illuminate content that may have been erased or obscured. There is always more to learn."

Harvard Converts Millions of Legal Documents into Open Data; Government Technology, November 2, 2018

Theo Douglas, Government Technology; Harvard Converts Millions of Legal Documents into Open Data

[Kip Currier: Discovered the recent launch of this impressive Harvard University-anchored Caselaw Access Project, while updating a lecture for next week on Open Data.

The free site provides access to highly technical data, full text cases, and even "quirky" but fascinating legal info...like the site's Gallery, highlighting instances in which "witchcraft" is mentioned in legal cases throughout the U.S.

Check out this new site...and spread the word about it!]

"A new free website spearheaded by the Library Innovation Lab at the Harvard Law School makes available nearly 6.5 million state and federal cases dating from the 1600s to earlier this year, in an initiative that could alter and inform the future availability of similar areas of public-sector big data.

Led by the Lab, which was founded in 2010 as an arena for experimentation and exploration into expanding the role of libraries in the online era, the Caselaw Access Project went live Oct. 29 after five years of discussions, planning and digitization of roughly 100,000 pages per day over two years.

The effort was inspired by the Google Books Project; the Free Law Project, a California 501(c)(3) that provides free, public online access to primary legal sources, including so-called “slip opinions,” or early but nearly final versions of legal opinions; and the Legal Information Institute, a nonprofit service of Cornell University that provides free online access to key legal materials."

Sunday, August 5, 2018

Interview: Yuval Noah Harari: ‘The idea of free information is extremely dangerous’; The Guardian, August 5, 2018

Andrew Anthony, The Guardian; Interview: Yuval Noah Harari: ‘The idea of free information is extremely dangerous’

"Why is liberalism under particular threat from big data?
Liberalism is based on the assumption that you have privileged access to your own inner world of feelings and thoughts and choices, and nobody outside you can really understand you. This is why your feelings are the highest authority in your life and also in politics and economics – the voter knows best, the customer is always right. Even though neuroscience shows us that there is no such thing as free will, in practical terms it made sense because nobody could understand and manipulate your innermost feelings. But now the merger of biotech and infotech in neuroscience and the ability to gather enormous amounts of data on each individual and process them effectively means we are very close to the point where an external system can understand your feelings better than you. We’ve already seen a glimpse of it in the last epidemic of fake news.

There’s always been fake news but what’s different this time is that you can tailor the story to particular individuals, because you know the prejudice of this particular individual. The more people believe in free will, that their feelings represent some mystical spiritual capacity, the easier it is to manipulate them, because they won’t think that their feelings are being produced and manipulated by some external system...

You say if you want good information, pay good money for it. The Silicon Valley adage is information wants to be free, and to some extent the online newspaper industry has followed that. Is that wise?
The idea of free information is extremely dangerous when it comes to the news industry. If there’s so much free information out there, how do you get people’s attention? This becomes the real commodity. At present there is an incentive in order to get your attention – and then sell it to advertisers and politicians and so forth – to create more and more sensational stories, irrespective of truth or relevance. Some of the fake news comes from manipulation by Russian hackers but much of it is simply because of the wrong incentive structure. There is no penalty for creating a sensational story that is not true. We’re willing to pay for high quality food and clothes and cars, so why not high quality information?"

Tuesday, July 31, 2018

Big Data Is Getting Bigger. So Are the Privacy and Ethical Questions.; The Chronicle of Higher Education, July 31, 2018

Goldie Blumenstyk, The Chronicle of Higher Education; Big Data Is Getting Bigger. So Are the Privacy and Ethical Questions.

"Big data is getting bigger. So are the privacy and ethical questions.

The next step in using “big data” for student success is upon us. It’s a little cool. And also kind of creepy.

This new approach goes beyond the tactics now used by hundreds of colleges, which depend on data collected from sources like classroom teaching platforms and student-information systems. It not only makes a technological leap; it also raises issues around ethics and privacy.

Here’s how it works: Whenever you log on to a wireless network with your cellphone or computer, you leave a digital footprint. Move from one building to another while staying on the same network, and that network knows how long you stayed and where you went. That data is collected continuously and automatically from the network’s various nodes.

Now, with the help of a company called Degree Analytics, a few colleges are beginning to use location data collected from students’ cellphones and laptops as they move around campus. Some colleges are using it to improve the kind of advice they might send to students, like a text-message reminder to go to class if they’ve been absent."

Thursday, March 1, 2018

Professor Tells UN, Governments Of Coming “Tsunami” Of Data And Artificial Intelligence; Intellectual Property Watch, February 21, 2018

William New, Intellectual Property Watch; Professor Tells UN, Governments Of Coming “Tsunami” Of Data And Artificial Intelligence

"[Prof. Shmuel (Mooly) Eden of the University of Haifa, Israel] said this fourth revolution in human history is made up of four factors. First, computing power is at levels that were unimaginable. This power is what makes artificial intelligence now possible. The smartphone in your hand has 1,000 times the components of the first rocket to the moon, he said, which led to a chorus of “wows” from the audience.

Second is big data. Every time you speak on the phone or go on the internet, someone records it, he said. The amount of data is unlimited. Eden said he would be surprised if we use 2 percent of the data we generate, but in the future “we will.”

Third is artificial intelligence (AI). No one could analyse all of that data, so AI came into play.

Fourth is robots. He noted that they don’t always look like human forms. Most robots are just software doing some function...

Eden ended by quoting a hero of his, former Israeli Prime Minister Shimon Peres, who told him: “Technology without ethics is evil. Ethics without technology is poverty. That’s why we have to combine the two.”

Eden challenged the governments, the UN and all others to think about how to address this rapid change and come up with ideas. Exponentially."

Sunday, February 11, 2018

Computational Propaganda: Bots, Targeting And The Future; NPR, February 9, 2018

Adam Frank, NPR; Computational Propaganda: Bots, Targeting And The Future

"Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That's how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.

As someone who has worked at the hairy edges of computational science my entire career I am, frankly, terrified by the possibilities of computational propaganda. My fear comes exactly because I have seen how rapidly the power and the capacities of digital technologies have grown. From my perspective, no matter what your political inclinations may be, if you value a healthy functioning democracy, then something needs to be done to get ahead of computational propaganda's curve."

Saturday, February 3, 2018

China's Surveillance State Should Scare Everyone; The Atlantic, February 2, 2018

The Atlantic; China's Surveillance State Should Scare Everyone

[Kip Currier: This Atlantic article brings to mind the Black Mirror Bryce Dallas Howard-helmed episode "Nosedive"--in which social media-dependent social-climbing-Americans are ranked from 1 (not good) to 5 (cream of the crop). The difference: China's real life "good citizen score" surveillance system is way scarier than the one imagined in Nosedive; and is more like the "Under His Eye" dystopia of The Handmaid's Tale Gilead authoritarian state.]

"[China] is racing to become the first to implement a pervasive system of algorithmic surveillance. Harnessing advances in artificial intelligence and data mining and storage to construct detailed profiles on all citizens, China’s communist party-state is developing a “citizen score” to incentivize “good” behavior. A vast accompanying network of surveillance cameras will constantly monitor citizens’ movements, purportedly to reduce crime and terrorism. While the expanding Orwellian eye may improve “public safety,” it poses a chilling new threat to civil liberties in a country that already has one of the most oppressive and controlling governments in the world.

China’s evolving algorithmic surveillance system will rely on the security organs of the communist party-state to filter, collect, and analyze staggering volumes of data flowing across the internet. Justifying controls in the name of national security and social stability, China originally planned to develop what it called a “Golden Shield” surveillance system allowing easy access to local, national, and regional records on each citizen. This ambitious project has so far been mostly confined to a content-filtering Great Firewall, which prohibits foreign internet sites including Google, Facebook, and The New York Times. According to Freedom House, China’s level of internet freedom is already the worst on the planet. Now, the Communist Party of China is finally building the extensive, multilevel data-gathering system it has dreamed of for decades."

Wednesday, January 31, 2018

Privacy experts alarmed as Amazon moves into the health care industry; Washington Post, January 30, 2018

Abha Bhattarai, Washington Post; Privacy experts alarmed as Amazon moves into the health care industry

"Amazon.com on Tuesday announced a joint partnership with Berkshire Hathaway and JP Morgan to create an independent health-care company for their employees, putting an end to months of speculation that the technology giant was eyeing a foray into the medical industry. It’s yet another endeavor for the company, which last year spent $13.7 billion to enter the grocery business with its acquisition of Whole Foods Market. (Jeffrey P. Bezos, the founder and chief executive of Amazon, also owns The Washington Post.)

[Amazon, Berkshire Hathaway and JP Morgan Chase join forces to tackle employees’ health-care costs]

But as the online retailer expands into new industries — cloud computing, drones, tech gadgets, moviemaking and now health care — some privacy experts say the company’s increasingly dominant role in our lives raises concerns about how personal data is collected and used. What happens, for example, when a company that has access to our weekly shopping lists, eating habits and in-home Alexa-based assistants also becomes involved in our medical care?"

Thursday, July 20, 2017

'We are all mutants now': the trouble with genetic testing; Guardian, July 18, 2017

Carrie Arnold, Guardian; 'We are all mutants now': the trouble with genetic testing

"To get a better handle on all the variation in humans, scientists are going to need to sequence tens of millions of people. And the only way to ever get these kinds of large numbers is by sharing data. But regardless of how good the databases get, and how many people have their genomes sequenced, uncertainty will never completely go away."

Sunday, July 16, 2017

How can we stop algorithms telling lies?; Guardian, July 16, 2017

Cathy O'Neil, Guardian; How can we stop algorithms telling lies?

[Kip Currier: Cathy O'Neil is shining much-needed light on the little-known but influential power of algorithms on key aspects of our lives. I'm using her thought-provoking 2016 Weapons of Math Destruction: How Big Data Increases Inequality And Threatens Democracy as one of several required reading texts in my Information Ethics graduate course at the University of Pittsburgh's School of Computing and Information.]

"A proliferation of silent and undetectable car crashes is harder to investigate than when it happens in plain sight.

I’d still maintain there’s hope. One of the miracles of being a data sceptic in a land of data evangelists is that people are so impressed with their technology, even when it is unintentionally creating harm, they openly describe how amazing it is. And the fact that we’ve already come across quite a few examples of algorithmic harm means that, as secret and opaque as these algorithms are, they’re eventually going to be discovered, albeit after they’ve caused a lot of trouble.

What does this mean for the future? First and foremost, we need to start keeping track. Each criminal algorithm we discover should be seen as a test case. Do the rule-breakers get into trouble? How much? Are the rules enforced, and what is the penalty? As we learned after the 2008 financial crisis, a rule is ignored if the penalty for breaking it is less than the profit pocketed. And that goes double for a broken rule that is only discovered half the time...

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand evidence that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently. When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around."