Dynamic Data Security Should Be the Policy Default: Dynamic Data Obscurity Revisited

The IAF used the phrase “Dynamic Data Obscurity” in 2015, when I organized a Washington dialogue and a Brussels session. With Schrems II and draft legislation in Canada, it is time to bring the term back. Below is an update of my 2015 blog. This update first appeared in various IAPP publications on January 8, 2021.

In 2030, I will be 80 years old and very dependent on data-driven technologies. In ten years, I will not own a car; instead I will share a vehicle with others, available whenever I need to go somewhere. The next generation of vehicles will be fed by data in the cloud based on observation of me and millions of other travelers. The data will be used for a full range of activities, from improving the design of future vehicles to billing for services. The car will take me to a clinic visit where the physician will already know my 180-day running history, based on the interaction between my various embedded medical devices. All of that history will be generated passively, based on consents long ago forgotten. The data will be shared among the many specialists charged with my health. When I arrive home, my house will recognize me, and the presets will customize my return experience (e.g., they will read my mood to decide if I want a decaf coffee or a single malt).

This is my future, and as you know, the technology to accomplish it is already here. For example, when I get into my wife’s car on the driver’s side, the car senses it is me and sets my driving preferences. If I had a defibrillator in my chest, it would report events to my physician. So, the leap to a more observational future is easy to predict. What still is not easy to predict is how that observational data will be governed to serve me and my community in a thoughtful and fair way. To the public, observation is tracking, and tracking makes people nervous, even as they have become dependent on things, like cars and medical devices, that are observation-dependent. I believe one of the keys will be an evolution of the concept of dynamic data obscurity, where data is governed down to the element level. In many ways, the foundation for the concept may be found in the increased definitional requirements for pseudonymization in the GDPR.

Data are not good or bad; it is when and how they are used that defines benevolence or maliciousness. Data receive context from other data, and data must be linked for that context to exist. There are enormous risks to people and society when data are linked for the wrong reasons and by the wrong persons. Given those risks, the default position should be that data in transit and data at rest are robustly obscured. That is why encryption is now so common.
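One way to obscure the linkable elements of a record by default is pseudonymization with a keyed hash. The sketch below is a minimal illustration of that idea, not any particular product or standard: the field names, the key, and the `pseudonymize` helper are all hypothetical, and in practice the key would live in managed key storage.

```python
import hmac
import hashlib

# Assumption for illustration only: a secret key held by a trusted party.
# Without this key, the pseudonym cannot be recomputed or re-linked.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Return a deterministic, key-dependent pseudonym for an identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# A hypothetical record: the linkable element is replaced before storage,
# while the non-identifying payload stays usable.
record = {"patient_id": "P-12345", "reading": 72}
obscured = {**record, "patient_id": pseudonymize(record["patient_id"])}
```

Because the hash is keyed and deterministic, the same identifier always yields the same pseudonym for the key holder, while parties without the key see only an opaque token.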

Many applications can also be run with the data still obscured. De-identification was already in wide use in the 1980s, and most analytics can be run with data links obscured. However, for accuracy purposes, data links must be used to match records before they are processed. The rules around when to obscure links and when not to must be robust and enforced, and technical measures must protect those linking processes. But, in the end, no single bright-line rule can cover all the human interests at issue when data come together. It is policy processes that ultimately carry that burden. That is why the word “dynamic” is used in the phrase “dynamic data obscurity.”
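The matching step described above can be sketched in code: two datasets, each stripped of raw identifiers, are joined on a keyed link token so the analytic runs without either side ever seeing the underlying IDs. Everything here is a hypothetical illustration under assumed names; the linking key would be held only by the trusted linking process, and the joined view discarded once the analytic completes.

```python
import hmac
import hashlib
from collections import defaultdict

# Assumption for illustration: a secret known only to the linking process.
KEY = b"shared-linking-secret"

def link_token(identifier: str) -> str:
    # Deterministic keyed hash: identical identifiers yield identical
    # tokens, so records match without the identifiers being revealed.
    return hmac.new(KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Two hypothetical datasets that carry only the obscured link.
visits = [{"token": link_token("P-1"), "visits": 3},
          {"token": link_token("P-2"), "visits": 1}]
devices = [{"token": link_token("P-1"), "alerts": 2}]

# Match on the obscured link, then keep only the rows present in both sets.
by_token = defaultdict(dict)
for row in visits + devices:
    by_token[row["token"]].update(row)

matched = [r for r in by_token.values() if "visits" in r and "alerts" in r]
```

The design choice is that linking is a privileged, auditable operation: only the holder of the key can produce matching tokens, which is one technical measure for enforcing rules about when links may be used.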

Today there is still policy confusion about the most basic processing terms, and further confusion about processing objectives and which rules should apply to which processes. However, the basics are these: data that link to people should be governed by dynamic rule sets, and data links should be governed for the brief time they are necessary to achieve a legitimate objective, unless demonstrable assessments determine that the links should remain in use. Again, that is why the term “dynamic” is used.

Other terms will also have to be defined, and defined in a globally consistent manner. Think tanks and academics have done a great deal of work to define the terms de-identified, anonymous, and pseudonymous. However, those terms need political definitions that are broadly accepted.

So, after five years, and before another law is enacted, let’s return to the concept of dynamic data obscurity. Let’s determine, before 2030 arrives, which terms still need to be defined for global use and which rules still need to be put in place.