Alexa, Observation and Practical Solutions

Sitting in our family room is a little disk that answers to Alexa and joined us on Christmas day.  She (it?) sits there quietly asleep until someone calls her name.  Then she wakes up and does our bidding.  It is typically mundane things such as turning on the lights or finding music.  One might argue about the green impact of using networks and the cloud to turn off the lights, but it is like having the staff of Downton Abbey without needing a dormitory to house them.  As a lover of neat technological things, I really think Alexa is cool.  As a 27-year privacy professional, I can’t help but think about the dilemma that we privacy professionals have been dealing with for the past quarter century and that looks to be expanding.  We love what data and computing power bring us, but it grates against the recognized desire of many to have a space where they might be invisible to, or just shielded from, others.  So we do things, like activate digital servants, that may limit our ability to be invisible, and then yearn for solutions.

In the early 1970s, information policy choices related to an individual protection trilogy: individual autonomy, fair data use in the private sector, and freedom from government surveillance.  Examples of those early protections were the first European data protection laws, whose purpose was to protect individual autonomy; the Fair Credit Reporting Act, which assured fair processing of credit information; and the U.S. Privacy Act, which protects individuals from intrusion by the U.S. government.  As the 50th anniversary of Alan Westin’s “Privacy and Freedom” approaches, maintaining that trilogy of protections is getting more difficult, in part because people such as me want the benefits that come with information technologies like Alexa.

There are numerous factors that differentiate humans from other species.  One is that humans think abstract thoughts based on what they see, feel, hear and smell.  Another is that, while humans are social, there are actions they choose to obscure from others.  We humans are constantly in conflict between our desire to see and infer and our desire to keep others from doing the same to us.  Every communications technology breakthrough, from the invention of writing to the functionality that makes Alexa an effective servant, has increased the tension between seeing and inferring on the one hand and obscuring our actions from others on the other.

Technology requires a constant update of that trilogy of personal protection, and the update must be made not only with the rights and interests of individuals in mind but also with the interests of other individuals, society as a whole, and even private organizations.  While change in protections can be driven by technology, those new protections should be technology neutral.  And while the surest protections come from legally certain rules, the speed of change outpaces the ability to reach consensus and develop rules.  Therefore, individuals, organizations, law enforcement and courts should interpret norms and apply them to new situations.  Most recently, the new EU General Data Protection Regulation, which does not go into effect until 2018, is already being stressed by the learning made possible through observation as applied to new interactive technologies.

So Alexa is brought into the Abrams house.  We share a great deal with her so she can serve us better.  We temper what we say because of her presence, sometimes forgetting that all the applications linked to her are potential observers and inferrers.  The same context applies to our smart car and any other smart technology that we touch.  We also expect those doing the observing and inferring to apply a set of filters to determine when they are acting appropriately and when they are not.  We expect good companies to do the right thing, but guidance on what is right is often ambiguous.  The Information Accountability Foundation’s mission, in part, is to reduce that ambiguity by developing an infrastructure for making decisions that are legal, fair and just.

So, as the Information Accountability Foundation enters 2017, it remains focused on that trilogy of personal protection and on making it work with everyone’s desire to learn from observation and apply those inferences in a world where people also seek seclusion or some level of control over complex data use.  Every project and discussion area at the IAF is focused on practical solutions to this dilemma.

However, there is a process that must be followed to generate practical decision-making frameworks.  First, concepts must be imagined.  They must then be tested and refined in rigorous discussions with smart thinkers from all segments of our community.  Next, the concepts must be tested against real scenarios.  Methodologies for applying the concepts in actual decision making, and then in production, must be developed.  Lastly, the concepts must be disseminated in a manner that builds trust that they contribute to protecting individuals while supporting the beneficial use of information technologies.

Just before Christmas, the Article 29 Working Party (WP29) released an opinion on data portability as required by the General Data Protection Regulation.  To create a bright-line rule, the WP29 differentiated data based on its origin.  Data that was provided by the individual or observed from the individual’s behavior would be subject to portability; data that was derived or inferred would not.  This differentiation is based on a paper the IAF produced for an OECD session on big data in 2014, entitled “The Origins of Personal Data and its Implications for Governance.”  The paper’s development began in 2013 as a means of first recognizing that the nature of data has changed over the past fifty years.  Since most data no longer came from individuals in a manner where the individual knowingly registered, applied for a benefit, or paid a bill, the origins of data would have implications for data governance.  For organizations and regulators to be effective, a taxonomy classifying data by origin was needed.  The four classifications were conceptualized and then tested with data professionals.  Paper drafts were shared and modified.  Finally, the paper was presented at an OECD session on big data.  The IAF began using the data taxonomy in its big data work, as did others.  Eventually the WP29 used the taxonomy to create a bright-line rule for data portability.  This is a clear example of how governance concepts move from imagination to practical application in a public policy arena.

The IAF’s work in 2017 will continue that cycle.  The conceptualization of ethical big data began in 2013 and is the basis for a Canadian assessment framework that will be published on February 28.  The Effective Data Protection Governance (EDPG) project began in 2015 as a holistic governance concept; 2017 is the year in which its five components will be built out.  2017 is also the year the core concepts for applied digital ethics and for data protection governance for artificial intelligence will begin to take shape.  As always, the IAF begins its deliberations with concepts and ends with practical tools.

The IAF welcomes all stakeholders to its work of developing concepts that lead to practical, trust-enhancing solutions.  This work is necessary so that Alexa may learn while observing and we may still maintain the trilogy of protection essential to the society we all want to live in.