I had the privilege of attending the Privacy Law Scholars Conference (PLSC) this week in Washington, DC, an event I’ve enjoyed for the past several years (thanks to Dan & Chris). While (obviously) neither a privacy lawyer nor a scholar, I find this conference fascinating because of the smart folks who attend (present company excluded) and the sharp, thought-provoking dialog that takes place there. Like Las Vegas, what goes on at PLSC stays at PLSC, so this posting is not about the meetings, discussions and attendees. That said — as a commercial technology/data guy — I had a couple of thoughts about consumer privacy today in an on-line services, app-intensive, big data world that I just want to pound out and put down on paper. I write like I talk – so deal with it.
Notice & Consent
A large part of today’s consumer privacy model is predicated on notice and consent. When a consumer considers agreeing to use a product or service, the company provides them “notice” of the service terms and privacy policies, and the consumer agrees to (“consents to”) these terms. The general idea is that with full disclosure, the consumer is smart enough to decide whether this bargain is a good one or not. If the consumer agrees, then both parties are off to the races.
I think this model is broken for a plethora of reasons, but I’m just going to highlight two:
Reason 1: Notice doesn’t work because disclosure and disclaimer masquerade as transparency.
The idea was that you, the service peddler, will tell me, the consumer, how this works and what you’re going to do with my data — then I, the informed consumer, get to decide yea or nay. The notices now mostly consist of some benign primary and secondary uses, then drone on for paragraphs about potential other uses and partners — and then conclude with another set of paragraphs that essentially convey that the service provider can change their mind at any time and do any other thing they figure out is interesting or profitable at a later date. This may meet the notice and disclosure test, but it’s a far cry from transparency.
Sometimes, if the service provider’s changes are drastic enough, they issue a “New Policies” notice to the consumer that requires a new consent. The Apple iTunes store agreement might be an example here — does anyone ever really read this multi-page document anymore? I just want to buy and replay music and download apps that I can use on my various devices. How many pages should this agreement really need to be?
Reason 2: The use of “consent” in its present form is too inarticulate to actually memorialize the type of bargain that I as a consumer really want to make, or the actual bargain that the service provider is asking me to commit to.
It’s one thing if “consent” is given in the context of a specific set of circumstances with a particular implied bargain. It’s quite a different thing if it means perpetual consent (now and forever), derivative consent (to this bargain and any other), or transferable consent (to the service provider and any of their partners and agents). The default is now just “consent,” and in today’s world that is a very crude, very broad, very long-lived giveaway by a consumer.
Privacy Priorities for Tomorrow
So what should we do?
First, I think the notice and consent discussions should capitalize Consent. It’s easy for a business to give notice (well — after all the lawyers finish layering in all the might-be’s and CYAs). Those concerned with consumer privacy should focus on making consent more granular, with a half-life, and with more thought on the “essence” of the consent and the “edges” of the consent. Normal business agreements usually contain a very clear description of what the parties are and are not getting, and what is not being given up or exchanged. Unfortunately, this isn’t happening well in the on-line consent world today because the universe of potential gives and takes is far too unbounded.
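To make the idea concrete, here’s a rough sketch of what a granular, time-bounded consent record could look like in code. This is purely my own toy model — the field names, the half-life mechanic, and the example uses are all illustrative assumptions, not any real standard, regulation, or API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Consent:
    """A toy model of a narrow, time-bounded consent (illustrative only)."""
    purpose: str                      # the "essence": the specific bargain agreed to
    granted_at: datetime
    half_life: timedelta              # consent decays over time; it is not perpetual
    derivative_uses_ok: bool = False  # no consent to "any other bargain" by default
    transferable: bool = False        # no automatic consent to partners/agents
    excluded_uses: tuple = ()         # the "edges": uses explicitly carved out

    def is_valid(self, use: str, now: datetime) -> bool:
        """A use is permitted only if it isn't carved out, the consent
        hasn't aged past its half-life, and it matches the stated purpose
        (or derivative uses were explicitly allowed)."""
        if use in self.excluded_uses:
            return False
        if now - self.granted_at > self.half_life:
            return False
        return use == self.purpose or self.derivative_uses_ok

# Example: consent to targeted ads for 90 days, and nothing more
c = Consent(purpose="targeted_ads",
            granted_at=datetime(2013, 6, 1),
            half_life=timedelta(days=90),
            excluded_uses=("medical_inference",))

print(c.is_valid("targeted_ads", datetime(2013, 6, 15)))    # True: in scope, in time
print(c.is_valid("targeted_ads", datetime(2013, 12, 1)))    # False: past its half-life
print(c.is_valid("partner_resale", datetime(2013, 6, 15)))  # False: not the bargain struck
```

The point of the sketch is that perpetual, derivative, and transferable consent become explicit opt-ins rather than the silent default — the consumer’s “yes” covers exactly what it covers, and then it expires.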
Secondly, I think notice and consent should start to take into account something beyond the type of data or the use of the data (the current regulatory metrics). Big data gives us vast data connectivity, data storage and data analysis that take us well past the historical concerns of disclosure and purpose. During my working lifetime we’ve gone from data gathering -> data processing -> data insight -> data enlightenment. Big data makes the notion of consent more crucial because the context of consent is now so significant. Shouldn’t the type and endurance of my consent necessarily change with the throw weight of the impact of that decision?
With Big Data this isn’t just some namby pamby discussion of using my data for targeted advertising anymore — it’s now a discussion about real-time triangulating so much data about me that the level of enlightenment obtainable is almost beyond conceiving that I, or anyone else, would ever consent to it. Let’s throw medical records and social reading habits and physical location tracking into the mix and I might as well turn my entire life into a YouTube channel. Big data demands big consents — and the privacy community and regulatory grid likely needs to consider “degree of enlightenment” and “permanence of exposure” as factors in shaping the consents of tomorrow.
Thirdly, I think the business doctrines of “fairness” and “bargain” need more weighting in the notice and consent discussion. How fair or unfair is this bargain between the parties? We need to acknowledge that in most cases the service provider is much better equipped to assess the bargain than the consumer is. Should we protect consumers from making grossly unfair privacy bargains — or even prohibit service providers from burying too many bargains within one broad agreement for service? Perhaps the more unfair the bargain, the clearer the notice and the narrower the consent?
As someone once said to me “all boundaries are fuzzy and all containers leak”. That’s certainly true in today’s consumer app services privacy world.
There — I’m glad that’s done. Back to my day job.
Cheers. DC
PS. Dear “Privacy Posse” friends — I don’t think the privacy community needs to presume necessity of use when thinking about managing privacy. I think it’s OK to question why a practice or use should be allowable. I understand why there’s a focus on cost/benefit or economic/societal burden, but I think a privacy purist perspective is valuable too. After all, commerce is pretty good at adapting 🙂