There is general agreement that privacy analysis and enforcement should be more risk based. The IAF increasingly frames risk in terms of consequential processing and is turning to case studies to clarify what consequential processing means.
Rental housing in the United States is a scarce resource, and rental housing usually cannot be obtained without a tenant screening report. While nothing could be more fundamental than a place to live, landlords are focused on their own risks when they use these reports to screen applicants. Few areas of risk management have changed more since the Internet revolution than tenant screening, a form of consumer reporting covered by the Fair Credit Reporting Act (FCRA). Twenty-five years ago, a landlord would pull a credit report that covered loans and bankruptcies. Today, the landlord can turn to any one of hundreds of tenant screening organizations that have access not only to a credit report but also to all government records that have been made public. Those data lack the precise identifiers that have improved traditional credit reporting, and they lack the oversight needed to make sure accurate information is being reported. Instead, these organizations rely on less precise algorithms, fueled by less precise data, that are tuned to not miss anything that might, or might not, be relevant to the applicant. In addition, landlords generally accept this reporting without any process to ascertain the accuracy of the information.
“Algorithms that scan everything from terror watch lists to eviction records spit out flawed tenant screening reports. And almost nobody is watching.” With those words, Lauren Kirchner and Matthew Goldstein began their Sunday New York Times story, “How Automated Background Checks Freeze Out Renters.” The authors describe data processing that is consequential to people’s lives. The consequences, such as potential homelessness, are more significant than the risks typically associated with AdTech used to offer products and services.[1] According to the New York Times article, the ease of access to public record information, such as criminal records, has made entry into the tenant screening business easy. Furthermore, the article says that many tenant screening services use very loose matching criteria, leading to false attribution of negative information to applicants. This practice costs people access to housing because the screening organizations have defaulted to a standard that favors landlords in tight markets.
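To make the matching problem concrete, the sketch below contrasts a loose, name-only matching rule with a stricter rule that also requires the date of birth to agree. The names, records, and matching rules are hypothetical illustrations, not drawn from any actual screening product or from the reporting described above.

```python
# Hypothetical sketch: how loose matching criteria can falsely attribute a
# public record to a rental applicant. All data and rules are illustrative.
from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    dob: str  # date of birth, "YYYY-MM-DD"


@dataclass
class PublicRecord:
    name: str
    dob: str
    record_type: str  # e.g., "eviction", "criminal"


def loose_match(applicant: Applicant, record: PublicRecord) -> bool:
    """Name-only rule: same last name plus same first initial.

    This is the kind of over-inclusive rule that "does not miss anything"
    but sweeps in records belonging to other people.
    """
    a_first, a_last = applicant.name.lower().split()
    r_first, r_last = record.name.lower().split()
    return a_last == r_last and a_first[0] == r_first[0]


def strict_match(applicant: Applicant, record: PublicRecord) -> bool:
    """Require the full name AND the date of birth to agree before
    attributing a record to the applicant."""
    return (applicant.name.lower() == record.name.lower()
            and applicant.dob == record.dob)


if __name__ == "__main__":
    applicant = Applicant("Maria Gonzalez", "1988-04-12")
    # An eviction record that belongs to a different person with a similar name.
    record = PublicRecord("Marta Gonzalez", "1961-09-30", "eviction")

    print("loose rule attributes record:", loose_match(applicant, record))    # True
    print("strict rule attributes record:", strict_match(applicant, record))  # False
```

Under the loose rule, the eviction record of a different person is attributed to the applicant; the stricter rule rejects it. The trade-off is the one described in the article: looser rules miss nothing, at the cost of accuracy.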
There is growing consensus that privacy law should prioritize risk. The IAF model legislation, the “Fair and Open Use Act,” is risk based, with five bands of risk. Yet privacy practitioners do not always agree on which risks are meaningful and whose risks should be modeled. As organizations turn to advanced analytics driven by ever-increasing data availability, the term consequential becomes even more meaningful, and few activities could be more consequential than putting roofs over the heads of children.
In looking for legislation to model, the Fair Credit Reporting Act is seen as one of the most enduring pieces of fair processing legislation ever enacted. It has both facilitated a consumer economy and provided rights to individuals when consequential decisions about them have been driven by data. The FCRA requires that data organized by third parties and used for substantive decision making be accurate, fair, and used only for such decisions. The FCRA also gives consumers the right to know when they are denied credit, insurance, employment, or housing based on such a report. It gives consumers the right to inspect the report, challenge the accuracy of the data, require that incomplete or incorrect data be fixed or removed within thirty days, and require that the corrected information be reported to the business that made the substantive decision about the consumer. When reports are used for employment, the FCRA requires the credit bureau to report negative information to the consumer at the same time it reports it to the business, so the consumer may immediately dispute the negative information or put it in context.
The FCRA requires reasonable procedures to assure maximum possible accuracy. This requirement should translate into reporting all the information that pertains to the individual and none of the information that pertains to other individuals with similar names, addresses, or other identifying information.
The FCRA has no immediacy requirement for negative information in a tenant screening report to be shared with the applicant when it is shared with the landlord, where a mismatched criminal record could block a lease. Where there is a housing shortage, a landlord will not wait 30 days to see whether a tenant report will be corrected. So individuals are harmed by inaccurate tenant reports. The result is that adults and their children cannot find a place to sleep, share a dinner, study, and play safely. Inaccurate tenant screening reports cause consequential harm that requires attention.
What should be a market differentiator among credit bureaus (including tenant screening organizations) is the quality of the algorithms that decide which data to include or exclude. That quality goes to maximum accuracy. If tenant screening agencies loosen matching criteria to include data that may or may not pertain to the consumer, the resulting inaccuracy is enforceable by the Federal Trade Commission. But with hundreds of tenant screening services, the FTC does not have the staff to police a very good but imperfect law. The FCRA could be improved by requiring the screening service to provide negative information to the applicant at the same time it is provided to the landlord.
While the FCRA and its enforcement are not perfect, they at least have the correct target. They are directed at fixing a defined consequential harm: being denied a substantial benefit based on inaccurate or badly processed data. Instead, privacy legislation in the United States is tending toward what professors Corien Prins and Lokke Moerel call “mechanical proceduralism,” a reliance on consumers to govern very complex processing that could not possibly be described in consumer-understandable privacy notices. Nor should such notices be the backstop. The IAF is not suggesting that transparency is unimportant or that individuals should not have control where control is effective. Rather, privacy law should prioritize consequential harms and give regulators tools that make it certain to actors who inappropriately harm individuals that they will be caught and punished. That approach probably means more resources for regulators. The IAF model legislation is focused on fair processing that regulates consequential harms. The IAF model also defines the powers the FTC would exercise to achieve more certain enforcement.
A future IAF blog will focus on consequential harms as they relate to the GDPR.
[1] AdTech used to manipulate elections, as in the Cambridge Analytica case, is different.