The Death of Privacy

Matthew J Cahill
4 min read · Jan 24, 2020

co-authored w/ Shmuel Silverman

Navigating our day-to-day lives in a world with information at our fingertips is a reality we all seem to enjoy. The ease with which we can access information lets us look up lyrics and movie times, check company profiles, and gain instant insights into the people we meet. While we barely have time to reflect on this phenomenon, let’s consider the next iteration of this all-too-easy-to-access information technology. When will real-time data continuously feed into our everyday activities with distilled insights that shape our decisions, our relationships and, eventually, our institutions?

Imagine a world where Artificial Intelligence (AI) displays graphical information that not only shapes opinions but hacks our minds and becomes the message.

Technological advances in civilizations are best understood as extensions of some part of ourselves. Nature replicates itself in a wide variety of ways, and humans are but the latest iteration of evolution. The wheel extends the foot. Hammers and guns extend our hands. Computers extend our brains. The internet extends our reach. Each comes with intended and unintended consequences; we place different value judgments on them and often recategorize after the fact.

The pace of information delivered to us in real time has increased over the years. Augmented reality glasses and other personal augmentation devices have been limited by communication bandwidth. The move from 4G to 5G wireless communication enables these devices and, with them, human augmented reality. This upgrade in network capability brings cloud-based AI and decision-support recommendation systems closer to us, with the ability to influence our perceptions based on context such as our location, the person or activity we are currently engaged with, and so on.

For example, the AI could act as a lie-detector service, using biometrics to determine instantaneously whether what you hear or see is true. It could fact-check in real time and display colors that indicate true or false. Or, when you walk into a room, you could scan and see a real-time reading of each person’s net worth, or whether they carry a transmittable virus or disease. The entirety of an individual’s digital activity could be synthesized into single graphics and icons for easy consumption. This new AI tool could help you make better decisions about who you choose to engage with and how you interact with each person.
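To make the idea concrete, here is a minimal sketch in Python of how such a “distilled insight” overlay might reduce a whole person to a single color-coded badge. Everything in it — the PersonProfile fields, the thresholds, the badge colors — is hypothetical and invented for illustration; the article does not describe any actual implementation.

```python
from dataclasses import dataclass

# Hypothetical, simplified profile an AR overlay might receive from a cloud AI.
# All fields and thresholds here are invented for illustration only.
@dataclass
class PersonProfile:
    name: str
    statement_truth_score: float  # 0.0 = certainly false, 1.0 = certainly true
    net_worth_usd: float
    contagious: bool

def badge_color(profile: PersonProfile) -> str:
    """Distill an entire profile into one color the wearer glances at."""
    if profile.contagious:
        return "red"      # health flag overrides everything else
    if profile.statement_truth_score < 0.5:
        return "orange"   # the AI suspects this person is lying
    return "green"        # "safe to engage" -- note how much nuance is lost

if __name__ == "__main__":
    guest = PersonProfile("Alex", statement_truth_score=0.42,
                          net_worth_usd=1_250_000, contagious=False)
    print(guest.name, "->", badge_color(guest))  # Alex -> orange
```

Even this toy version exposes the core problem: a whole life is flattened into a single signal, and the thresholds deciding “orange” versus “green” are invisible to everyone in the room.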

Marshall McLuhan wrote in 1964 that “the medium is the message,” just as television (three channels of sanitized, homogenized, formulaic content) began ushering in the Information Age. An early intellectual pioneer, he sounded a warning then that was largely ignored in favor of our unbridled bias for technological innovation. We have blindly embraced media in all its glorious forms and watched producers chase one another to create ever more emotionally provocative content. Emotions are the hooks that shape and frame EVERY message we consume. Our brains have an almost insatiable appetite for new data, recording into our collective memory a string of binary stereotypes. Context is king. It makes all the difference.

Consider the following:

  • What do you do when you know someone is lying?
  • What do you do when you know how much money each person in the room makes?
  • Do you still choose to interact if you know a person has a life-threatening illness?
  • Do you still shake a person’s hand if you know they’ve watched porn earlier that day?
  • Did you automatically assume the person in question was male? Does it make a difference if it’s a woman who watched porn earlier that day?

Are you making the world “safer” if you know who in the room has a criminal record? Where the sex offenders are? What about someone who was only accused? Can that “news” somehow be scrubbed from their digital profile? The assumption that we can effectively use distilled, filtered real-time data to decide whom to engage with overlooks implicit bias. Even with a carefully curated list of criteria, what do we do when our brains are presented with the distilled graphic? If it makes us feel good, chances are we will still make decisions that fly in the face of reason. Humans have proven over and over again that we are not logical or rational.

How we will adapt to this technology cannot be foretold. History shows that we overvalue and blindly embrace technologies that appear to make our lives easier. There will be unintended consequences and potentially irreversible changes. It’s a conversation worth having, in the hope of creating more thoughtful practices that mitigate our worst impulses and designing policies that protect our public spaces and personal boundaries.


Matthew J Cahill

Disrupting biases that impact company performance. Public Speaking | Workshops | Consulting | Coaching