At this year’s Defrag, several themes emerged: identity management and security, quantified self, and (of course) robotics. And although the conference billed APIs as its common thread (and they were), the APIs themselves were part of a much bigger, louder conversation about what they enable.
At the conference, as we opened each door that APIs unlocked for us, we found an unintended theme emerging--one that lurked beneath the surface of many of the presentations and really only broke through during the break discussions. That theme was ethics, and whether we know how to be the new kind of human-machine blend we are on track to become.
Amber Case’s discussion of Calm Technology led us through a history of the cyborg fascination, pioneered by Steve Mann, that has ultimately resulted in things like Google Glass and OTG’s heads-up technology. It occurred to me as I watched Amber calmly talk about Calm Technology that it’s a very reassuring moniker for something that can feel very dangerous. Calm Technology devices are designed to provide data when you need it and to get out of your way when you don’t--in other words, technology that enhances your life rather than rules it.
Take, for example, Chris Dancy and his work. Dancy has taken quantified self to a new level that he refers to as “Existence as a Platform.” He’s been monitoring his every physiological reaction for the last three years, as well as all of the environmental data around him, in an effort to understand how one affects the other. And, based on that understanding, he can automatically tailor his environment to fit his physiological requirements. (Time to work? Cue lighting, temperature and noise levels for maximum productivity. Feeling anxious? Examine the weather, the news and the other beings in the environment to discover the root cause.)
Chris is a lively and engaged conversationalist, and I found myself one afternoon sitting on a couch with him, hypothesizing applications for his work. Consider the power of being able to adjust your environment automatically--with no effort on your part, because your house is a living organism that knows what you need. You could manage negative emotions like anxiety, depression, stress and anger more effectively by adjusting the world around you to trigger more positive emotions inside you.
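The mechanics behind that idea are essentially a feedback loop: sensors report physiological readings, and rules translate them into environmental adjustments. Here is a toy sketch of that loop in Python--every sensor name, threshold and action below is invented for illustration and is not how Dancy’s actual setup works:

```python
# Toy "responsive environment" rule engine. All field names, thresholds
# and actions are hypothetical, purely to illustrate the feedback loop.

def adjust_environment(readings):
    """Map a dict of physiological readings to environment adjustments."""
    actions = []
    if readings.get("heart_rate", 0) > 100:       # signs of stress or anxiety
        actions.append(("lights", "dim_warm"))
        actions.append(("audio", "play_ambient"))
    if readings.get("skin_temp_c", 37.0) < 36.0:  # running cold
        actions.append(("thermostat", "raise_2_degrees"))
    if readings.get("focus_mode", False):         # time to work
        actions.append(("lights", "bright_cool"))
        actions.append(("audio", "mute_notifications"))
    return actions

# An anxious moment at the start of a work session:
print(adjust_environment({"heart_rate": 110, "focus_mode": True}))
```

The interesting (and unsettling) part isn’t the rules themselves--it’s that the loop runs continuously, with no human in it.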
It all sounds so easy--maybe too easy. Will this capability create a society of agoraphobics? Will people become so reliant on an environment that knows and nurtures them that they cannot function outside that safe bubble?
Joe Burton is also a big advocate of wearable tech and its ability to provide a real-time personalized experience. He is CTO of Plantronics, the company that designs headsets so responsive to movement and physiological response that they can detect emotional reactions based on temperature and moisture levels of the skin.
Burton sees a future in which wearable tech data can be used for a higher purpose in the healthcare industry. Imagine being able to send data to your doctor, and how much more accurate your doctor’s assessments and recommendations would be when fueled by that specific data. But, again, that’s a line we haven’t figured out how to cross. Today, we answer routine questions at every doctor’s visit, but a 10-minute conversation once a year is easily trumped by a regular data feed of daily activities, stress levels, calorie consumption and sleep patterns. We are at a point right now where we can provide more data than we can consume. So how do we aggregate and analyze that information programmatically in order to actually make the best use of it?
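To make that question concrete, here is a minimal sketch of what “aggregate and analyze programmatically” could mean: reducing a raw daily feed to a handful of numbers a doctor could actually review. The field names and the stress threshold are assumptions for illustration, not any vendor’s real API:

```python
# Hypothetical aggregation of a daily wearable-data feed into a weekly
# summary. Field names and the stress scale are invented for this sketch.

from statistics import mean

def summarize_week(days):
    """Reduce raw daily readings to a few reviewable numbers."""
    return {
        "avg_sleep_hours": round(mean(d["sleep_hours"] for d in days), 1),
        "avg_calories": round(mean(d["calories"] for d in days)),
        "high_stress_days": sum(1 for d in days if d["stress"] > 7),
    }

week = [
    {"sleep_hours": 6.5, "calories": 2100, "stress": 8},
    {"sleep_hours": 7.0, "calories": 1900, "stress": 4},
    {"sleep_hours": 5.5, "calories": 2300, "stress": 9},
]
print(summarize_week(week))
# {'avg_sleep_hours': 6.3, 'avg_calories': 2100, 'high_stress_days': 2}
```

The hard part, of course, isn’t the arithmetic--it’s agreeing on which summaries are clinically meaningful, and who gets to see the raw feed.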
This was the general path of John Wilbanks’ talk, “Healthcare After the Deluge.” Wilbanks pointed out that we are drowning in data but have not yet built the infrastructure or policies to deal with it. He walked through a number of examples, including his own experience of requesting his genome data. Interestingly, while there is no straight path for providing your genome data to your doctor and having your doctor know what to do with it, there are plenty of paths the data travels on its own. Wilbanks received unsolicited emails from a variety of organizations that analyzed the data based on their own specific interests. But finding out that you have a higher probability of getting a specific type of cancer is not the same as being told what to do about it. We’re being bombarded with what we suspect is useful data that can help us, but we have not yet figured out a good way to harness it.
I was especially struck by the fact that, while Wilbanks’ genome data travels on its own to various places, there is no way to get that data back. I was reminded of this example on my way home from the conference, when I applied for Clear and had to go through a registration process that included a retinal scan. As I saw my retinal scans being accepted by Clear’s server, I had a moment of panic: My data was travelling to places unknown--would it live there for eternity?
When you combine this situation with the new capabilities that APIs provide--allowing us to take this data, scads and scads of it, and use it to instruct the things around us--it’s a rather frightening scenario. Are we enabling data transfers among machines faster than we can interpret the data, or the legalities around using it? Clearly this is uncharted territory, but it’s ground the industry needs to cover before the technology gets too far ahead of our ability not only to harness it but to protect ourselves from unintended consequences.