Making sense of sleep

I spent a lot of time in 2013–2014 exploring ideas at the intersection of data and health. I sketched out a few ideas and built a few different prototypes, some of which I told people about – like my personal API, api.naveen – and some of which never left my local development environment and test iPhone. I wanted to see a few themes through: making it easier to capture data, making sense of that data, and lastly, seeing whether, by bringing the data together, I could learn something about myself.

My deepest explorations were around tracking and capturing data – that’s always the hardest step, and the one that’s hardest to do authentically (without adding gamification, without sucking in data from other sources, without cheating by attempting to make sense of signals where none exist). So I coded up what I thought was a better journal. “What are the things Google can’t see about me?” was a question I often asked myself. My smart journal was designed not only to take input through various means – via API, via photo upload, via voice, and via free-form text that it would later make sense of – but to do it in a seamless, non-intrusive way.

I also believed in the idea that we should build better software around things we already do: not necessarily wear a new device around our hips, but build a better weight scale, a better monitor for medicine, a better journal.

I hacked away at a few concepts around the journal – and I became convinced that my ‘Journal’ would have the simplest of views possible: a Twitter-like free-form input box where you could type whatever you wanted, and it would log your entry smartly and add whatever sensor metadata it knew about. (Enter “70 bpm” at the end of your run home, and it would automatically tack on all the other signals it knew: timestamp, location, weather, whether other audio was playing on your phone, &c. – all shown on a timeline alongside any photo you took at the time.)
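
That enrichment is mostly just bundling an entry with whatever the phone can report at that moment. A rough sketch of the shape I had in mind – the sensor class and its return values here are placeholders, not anything I actually shipped:

    from datetime import datetime

    class PhoneSensors:
        """Stand-in for whatever the phone exposes; all values below are placeholders."""
        def current_location(self):
            return (40.7128, -74.0060)                    # hypothetical: last known GPS fix
        def current_weather(self):
            return {"condition": "clear", "temp_f": 68}   # hypothetical: fetched for that location
        def now_playing(self):
            return None                                   # nothing playing right now
        def recent_photos(self, minutes=10):
            return []                                     # photos taken in the last few minutes

    def log_entry(text, sensors):
        """Wrap a free-form journal entry with whatever context the phone knows."""
        return {
            "text": text,                                 # e.g. "70 bpm"
            "timestamp": datetime.now().isoformat(),
            "location": sensors.current_location(),
            "weather": sensors.current_weather(),
            "now_playing": sensors.now_playing(),
            "photos": sensors.recent_photos(minutes=10),
        }

    entry = log_entry("70 bpm", PhoneSensors())

The point is that you only ever type the one thing you care about; everything else gets tacked on silently.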

But I don’t always run, and I don’t always think to enter things like my heart rate or when I took some medicine. So, convinced I needed a more specialized journal that I would deliberately touch at least once a day, I built an alarm clock.

Why an alarm clock? For me, it satisfied a few things:

  • Everyone sleeps (total addressable market = everyone. HA!)
  • It passes Larry Page’s toothbrush test (a service that you use at least twice a day)
  • It’s the first thing you touch in the morning and the last thing you touch at night – imagine that as an entry point for everything else you might want to do as a service: “Hey, how was your day? Hit me with a mood and a journal entry. Tap in an emoji.” “Hello, tomorrow morning is going to be packed for you. Mind if I tell you to go to bed at 10:30pm tonight and set your alarm for the morning? You’ll need at least 6.5 hours if you’re going to make the gym and then your first meeting.” (See the sketch after this list.)
  • The alarm clock and the weight scale are the two things most people already have and need no additional training on. They are perfect examples of devices already in the room that you don’t have to get into the habit of wearing and recharging. (See “Thereables”.)
  • It also allowed me to explore other features: wake up to music instead of a random horn; get rid of snooze.
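
The bedtime suggestion in the third point above is simple arithmetic working backwards from tomorrow’s first commitment. A rough sketch, with a made-up prep time and a hypothetical calendar entry:

    from datetime import datetime, timedelta

    def suggest_schedule(first_event, sleep_needed_hours=6.5, prep_hours=1.0):
        """Work backwards from tomorrow's first commitment to an alarm time and a bedtime."""
        alarm = first_event - timedelta(hours=prep_hours)        # time to get up, get ready, get there
        bedtime = alarm - timedelta(hours=sleep_needed_hours)    # leave room for the sleep you need
        return bedtime, alarm

    # Hypothetical example: gym then a meeting, with the first commitment at 6:00 AM tomorrow.
    first_commitment = datetime(2016, 10, 12, 6, 0)
    bedtime, alarm = suggest_schedule(first_commitment)
    print(f"Go to bed by {bedtime:%I:%M %p}, alarm set for {alarm:%I:%M %p}")
    # -> Go to bed by 10:30 PM, alarm set for 05:00 AM

The hard part, of course, isn’t the arithmetic – it’s that the alarm clock is the one device that naturally sits at both ends of the day to deliver it.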

I moved on from exploring these ideas for a while – perhaps to come back to them some other time. Around the time I got going with our company Expa, I came across Sense by Hello. It perfectly captured the problems I was trying to solve: easy to log your sleep, and something you could interact with daily. I love that the device has stayed in nearly the same form all that time, too: packed with the right sensors to help you sleep and to keep track of how well you did. I am particularly a fan of the softly rising music that wakes you up (you’re never “afraid to wake up” with music, as opposed to a blaring horn).

A Sense device has the kind of Apple-like, big-company magic that makes you wonder how a startup did it – soft glows, hover gestures to wave it to silence in the morning, and sensors for things like air quality and temperature packed inside.

Their latest addition – Sense with Voice – is something I am looking forward to really getting into. Voice is a natural evolution of the product: you tell the computer what you want, and the computer gives you what you want. The key benefit of voice the team has talked about quite a bit is that a mobile phone’s screen (and any distracting notifications) no longer has to be the last thing you stare at before sleep. I am also hoping that voice-as-interface is one of the stepping stones to take them beyond the initial focus on sleep and into other aspects of our health.