It is increasingly understood that digital agendas in both the public and private sectors are about creating safe pathways for data to be turned into information, for information to be turned into knowledge, and for knowledge to facilitate actions that benefit people and society. Data enables technologies such as artificial intelligence (AI) that have significant potential to transform society and people’s lives. As the U.S. National Institute of Standards and Technology states, “AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world.”[1]
Less understood or acknowledged is the distinction between data used to create knowledge and the actions that may result from using or applying that knowledge. Knowledge creation, which also can be called “thinking with data,” is key to processes such as the data analytics used in product or service development and improvement. More broadly, it may be called “data-driven or digital innovation.”
Creating knowledge differs from applying it. Knowledge application applies the knowledge created to a specific or identifiable set of individuals. Understanding the differences between knowledge creation and knowledge application, the purposes for which each is undertaken, and the distinct risks each presents is critical to understanding how they should be regulated.
Knowledge creation in and of itself has little direct impact on individuals. The application of that knowledge, by contrast, leads to decisions and actions that can impact individuals and raises traditional privacy concerns. Because knowledge creation does not carry the same concerns and consequences, the two processes should not be treated the same way. Today, however, regulatory approaches, including legislation, oversight, and enforcement, do not necessarily distinguish between them. Both processes require controls, but because the risks of knowledge creation and knowledge application are not the same, the controls should not be the same.
At its extreme, polarization is affecting the public policy debate on privacy and data protection. Terms such as autonomy and control increasingly are being interpreted as personal sovereignty. Policymakers designing the new data-driven industrial policy are talking past independent regulators who are concerned that markets are dominated by data extraction that harms all individuals, particularly protected classes. This situation makes it increasingly difficult to craft legislation and regulation that both facilitates innovation and protects the full range of stakeholder interests. The resulting uncertainty has led some organizations to delay or forgo the creation of these insights. As a result, the current trajectory of data protection public policy has the potential to stifle further data-driven innovation.
The Information Accountability Foundation (IAF) believes it is critical that organizations be able to think with data and to engage in knowledge discovery and creation in order to achieve a trusted global digital ecosystem. The IAF has advocated for many years that there should be a distinction between knowledge creation (thinking with data) and knowledge application (acting with knowledge). Increasingly, knowledge creation leads to the development of new insights key to digital innovation. However, data protection laws are evolving, and are being enforced, without appreciating the distinction between these two processes. The result is confusion and hesitancy about whether advanced analytics may be used for knowledge creation when there is no distinct legal basis for processing personal data for that purpose.
The IAF undertook research to understand how organizations discover and create new knowledge and is pleased to publish the results of this research in our report Making Data Driven Innovation Work. The IAF research team conducted extensive interviews with organizations from numerous fields that use data to create knowledge, even if they did not identify their processes in that way. The interviews supplemented the team’s decade of work as researchers at the IAF and as consultants working on ethical assessments and demonstrable accountability. This project sought to clarify the impact of regulatory and public policy uncertainty on commercially driven knowledge creation, develop scenario-driven examples of this impact, develop a public policy model that enables responsible data-driven knowledge creation through a series of compensatory controls, and create a narrative and path for knowledge creation to be more formally recognized as a legitimate data processing activity in next generation privacy and data protection law.
In summary, many organizations use personal data as part of analytics processing (Corporate Research) to solve identified business problems, but most organizations do not use Corporate Research as a distinct processing activity more broadly because:
- Most organizations generally do not break data processing into two distinct phases: knowledge creation (i.e., the research function to identify a solution to a business problem) and knowledge application (i.e., application of the solution to the business problem), and/or
- In the EU, this portion of data processing (research) is complicated and/or limited. Personal data can be processed for scientific research purposes, which are narrowly defined, as long as sufficient safeguards have been implemented and the research is conducted “in accordance with relevant sector-related methodological and ethical standards, in conformity with good practice” (collectively, Safeguards), and/or
- Other countries and future laws limit the use of personal data for research purposes to the improvement or supply of products and services, or require the use of de-identified personal data for research and for socially beneficial purposes.
The IAF thinks that organizations might be able to use personal data more broadly for Corporate Research if, in addition to legal and regulatory modifications, appropriate Safeguards are put in place, and it proposes that those Safeguards be developed and implemented in a defined “Research Sandbox” environment.
The project developed potential safe pathways for data to be turned into information, for information to be turned into knowledge, and for knowledge to facilitate actions that are societally beneficial to people, all where processing is conducted in a manner that is lawful, fair, and just. The IAF appreciates that a paradigm shift of this level and type will require input from key stakeholders, including regulators and organizations. In short, this research report is just the start, and the IAF hopes to obtain further input through this project, which includes workshops in 2023.
[1] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), nist.gov