On Sunday, November 10, the New York Times ran a story on the ability of bad actors to hide and distribute child pornography on the Internet.
On Tuesday, November 12, the New York Times ran a story on a unit of Google helping Ascension, the second-largest U.S. health organization, mine data on millions of patients in search of new insights. These activities are perfectly legal, but they are not transparent to the patients. The data seems to be used for good, but it is used in an environment where secrecy breeds suspicion.
The recent International Conference of Data Protection and Privacy Commissioners discussed the misuse of observed data for many purposes but focused on how such data enables microtargeting in elections, putting liberal democracies at risk. Since the conference, the willingness of companies that facilitate digital political ads to police outright false information in those ads has been debated by the CEOs of Facebook and Twitter.
It is hard to argue that any of these three issues is purely a privacy and data protection issue. Instead, they go to digital responsibility, or maybe even digital civility. Increasingly, the question of whether digital activities are appropriate will turn not on technical compliance but on what is responsible. As New Zealand Privacy Commissioner John Edwards recently argued in a keynote address for the IAPP, these issues, particularly for smaller countries affected by technology from larger trading partners, are about the preservation of cultural values.
Fair processing, which is required in some form or fashion by all privacy and data protection laws, requires organizations to own the risks they create for others. This includes the risk that individual autonomy may be lost in the digital world, but it also includes the risk to the full range of personal rights and interests affected by processing, or not processing, data. Increasingly, organizations are being required to balance that full bucket of risks, mitigate the injurious risks where possible, and determine which risks must be managed to gain the societal benefits that may only come from digital knowledge that drives smart decisions. What I am suggesting is that the moral obligation of organizations must now go further. It goes to the impact of information and information processing on all aspects of society. For the moment, the term that comes to mind is digital responsibility.
My concern is that having a corporate silo or a government enforcement agency labeled "privacy" reinforces the sense that the preponderant risk to be emphasized is the ability of individuals to control their digital footprint. This orphans other data-related risks. Individuals having control of their data might be a perfect state, but it does not capture the world we have lived in since observation became increasingly necessary to make things work and data became the driving force behind knowledge creation. I am not saying that our digital footprint should not be better respected, or that minimization, where appropriate, should not be a regulatory objective. But more and more, the vocabulary of our missions should reflect the fairness that is part of all laws governing the use, or nonuse, of data that affects not just individuals but society as a whole.
I would like to see a movement toward recognizing that privacy is just one aspect of digital responsibility. I would like to see the day when a CEO turns first to her or his senior leader for digital responsibility when the organization does something new in the digital age. At times, the issue will be privacy. At other times, it will be content moderation. And at still other times, it will be the impact on individuals at risk. From my perspective, there needs to be a recognition that the risks we face today go beyond privacy and data protection failures and extend to digital responsibility failures as well.