The Fox and the Found(er)

I’ve been reading all the analyses and backstories and the opinion pieces of the Disney-Fox-Comcast situation. (Who doesn’t like a big deal?)

The Comcast offer is the larger of the two deals, by a lot: if someone gives you that offer, you should probably take it (though you’d also be taking on a higher risk of the deal being vetoed). Walking away from the Disney deal now would mean a $1.53b break fee for 21st Century Fox, but the higher Comcast deal would more than cover paying Disney back. (And Murdoch could still use the leftover money from the Comcast deal to buy even more shares of Disney and have both toys!)

However, if you founded a firm, ran the firm, and still control a big piece of the firm, the Disney deal makes way more sense. My favorite take on why Murdoch presently favors the Disney path is the one from Felix Salmon, shareholders be damned:

The official answer is that Disney-Fox is a “horizontal” merger, while Comcast-Fox would be a “vertical” merger, and well-paid readers of antitrust tea leaves consider the former to be easier to do than the latter. The increased chances of the government vetoing a Comcast deal mean that it makes sense for Murdoch to just go with Disney instead.

[…]

Once the Disney deal closes, Murdoch will be Disney’s largest individual shareholder and will have a direct line to Iger; he might even have a son, James, in a senior position at Disney. The combined Disney-Fox will include a huge amount of Murdoch DNA, especially when it comes to television production, and Murdoch will be justified in taking the occasional victory lap if and when Disney goes on to ever greater strengths.

There’s a founder mentality at work here: of course it’s about the money, but it’s also about your work, your legacy, and how best to make sure your company and vision outlast you. This is true of companies big and small.

Note: Nota bene is one of a few favorite newsletters in my inbox these days. Put it on your list.

A civilization of the mind

One sign of how much you accomplished in life is how many people from all different walks of life remember you when you’re gone. Of its many aspects, that’s my favorite thing about John Perry Barlow’s life: he touched so many people in so many spaces, from music to politics to technology.

Another is the ability to blend two different concepts from two different fields together – that’s how we create new ideas, after all. And last is the ability to know things well enough to explain them plainly to others and to use your words to lead. For instance, even though Barlow may not have had a background in engineering, he understood technology well enough to explain it simply to others. What a beautiful gift.

I believe Barlow did all three really well. This weekend, in remembrance, I went back to find a few good essays by Barlow (and one podcast h/t @msg).

My favorite is the passage where I believe he was the first to connect the Gibson term cyberspace with what we now know as our present-day global telecommunications network. From Crime and Puzzlement:

Whether by one telephonic tendril or millions, they are all connected to one another. Collectively, they form what their inhabitants call the Net. It extends across that immense region of electron states, microwaves, magnetic fields, light pulses and thought which sci-fi writer William Gibson named Cyberspace.

Cyberspace, in its present condition, has a lot in common with the 19th Century West. It is vast, unmapped, culturally and legally ambiguous, verbally terse (unless you happen to be a court stenographer), hard to get around in, and up for grabs. Large institutions already claim to own the place, but most of the actual natives are solitary and independent, sometimes to the point of sociopathy. It is, of course, a perfect breeding ground for both outlaws and new ideas about liberty.

The words ring just as true now as when they were written in 1990. Large institutions still claim to own the place, just as in the 90s and as with the old Wild West. But there are still natives out there on the edges, working to get out their ideas of freedom.

This must be where ice cream goes to die

Last night, I had a chance to taste the first batch of the season of MilkMade’s Brie Mine. It’s brie ice cream with a cabernet caramel swirl. I’ve never tasted anything like it before. Diana sold out in two days and is making just one more batch for February – so get over to the Tasting Room before they’re out!

This flavor joins another on my all-time favorites list: French Kiss – chocolat à l’orange; chocolate ice cream with notes of orange. Be sure to check out the other new ones for February: Brooklyn Ambrosia, Conversation Hearts, That’s Amore.

This must be where ice cream goes to die.

The archives of the heart

In 2015, exploring Teshima, I came across Christian Boltanski’s Les Archives du Cœur. Inside a small, beautiful cabin overlooking the bay, you’ll find a work of art that permanently houses recordings of heartbeats of people throughout the world. Boltanski has been recording these heartbeats since 2008; you can record your own heartbeat here, and you can listen to the beats of other visitors to this place in Teshima. Boltanski’s primary purpose in art has been to remind us of our own mortality. When measured like this, our heartbeats represent not only the passing of time, but our past and our experiences as they become coded into the pace of the rhythms. We like to think our own beats carry a code that’s unique in the world, shaped by our experiences. When we leave our heartbeats behind in Teshima, an island in the middle of the Seto Inland Sea, we’d like to think they’re left behind forever. It may be the first time you think of leaving your heartbeat behind in the world, independent of any agency or hospital that has recorded, and owned, yours before.

I’ve been leaving my heartbeats behind on the web too. And for the past few years, I’ve been working to archive the heartbeats I’ve posted on various online services. It took a little trial and error to figure out which service to host these on. I eventually picked GitHub, because 1) git respects all your original file formats; 2) you can keep a revision history of your data as it grows and becomes richer over time; 3) you can easily push to multiple remote locations to distribute your data and keep it in sync, meaning if GitHub ever went away, you’d still have your data in other places.
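For that third point, git makes the multi-remote setup simple: you add more than one remote to the same repository and push your history to each. A minimal sketch – the repository names and paths below are placeholders for this demo (in practice the second remote would be another host, like GitLab or a private server):

```shell
# Two bare repositories stand in for two remote hosts.
mkdir -p /tmp/archive-demo && cd /tmp/archive-demo
git init -q --bare remote-a.git
git init -q --bare remote-b.git

# The archive itself: one local repo with two remotes.
git init -q archive && cd archive
echo "my first bookmark" > links.txt
git add links.txt
git -c user.name=me -c user.email=me@example.com commit -qm "first snapshot"

git remote add origin ../remote-a.git
git remote add mirror ../remote-b.git

# Push the same history to both; if one host disappears,
# the other still holds a full copy.
git push -q origin HEAD:refs/heads/backup
git push -q mirror HEAD:refs/heads/backup
```

Each later snapshot is just another commit followed by the same two pushes, so the copies never drift apart.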

Usually, I back up my data privately, because in a few cases, they reveal private information like the email addresses of my connections. Whenever possible, like with del.icio.us (archive-del.icio.us) and my Twitter history (archive-twitter), I’ve been putting them up openly. They were open to begin with, so why not keep it that way?

This brings me to my Tumblr blog, which I used from 2007 to 2017. Tumblr was a beautiful platform for so many reasons. It wasn’t just a blog to me – there were plenty of other blogs and tools out there for hosting content, and at the time I already had a WordPress blog going. But I used Tumblr for my photos, my early NYC tech scene posts and my foursquare posts. This is largely because Tumblr was New York tech and New York tech was Tumblr.

All the changes at Tumblr over the last few years, with Yahoo’s acquisition and with David Karp leaving, got me thinking about the great connections and memories Tumblr helped me build, but also once more about the mortality of my work online. I researched a few different backup tools and eventually found one, tb-ng, that worked well enough to pull the hi-res photos and post content (it won’t pull your likes and reblogs). I pulled all my content out: archive-tumblr. If you want not just the archives but a live blog again, you can import your old Tumblr into WordPress like I did.

It was beautiful to take a few minutes to step back through that time.

I started looking around for other tools that could do this for me on other old services, like Flickr, and I came across Archive Team. They’re a group of hackers who, in their words, are:

[…] dedicated to saving our digital heritage. Since 2009 this variant force of nature has caught wind of shutdowns, shutoffs, mergers, and plain old deletions – and done our best to save the history before it’s lost forever. Along the way, we’ve gotten attention, resistance, press and discussion, but most importantly, we’ve gotten the message out: IT DOESN’T HAVE TO BE THIS WAY.

Between efforts like this one and the Internet Archive, it makes me happy to know so many other people want to preserve our history like this, save our creative work from obsolescence, and have us all remembered. It doesn’t have to be this way.

The beauty of the internet is that many different people from all over the world can work together to help you preserve your art and your work. And, depending on the tools used, you can save copies of your data in islands all around the world – resilient to any single node falling over, just as the internet was originally intended to be.

Your first 100,000 photographs are your worst

At the turn of every year, I try to do a bit of digital housecleaning. It’s nice to do this every once in a while: get all your files, your backups and other security details in order across all your devices and services.

While going through this most recent sweep, I started wondering how best to organize my photos. I’ve taken about 25,000 photos (and only about 900 videos) on my iPhone since 2012. The ones I took from 2007 to 2012 are all in an iPhoto Library file somewhere in a backup drive. So that’s probably another 20,000 photos, conservatively, taken over those years. Then, I easily have another 25,000+ photos in high-resolution form from the various cameras I have owned over the years.

Reviewing this history, I’m reminded of Cartier-Bresson: “Your first 10,000 photographs are your worst.” In the digital age, I think to myself, should this be 100,000 photographs? I think this not only because I don’t believe I’m a very good photographer yet, but also because it might take another 25,000 photos before we figure out how to safely and effectively store them forever.

For the photos coming from my cameras, I’ve been using Lightroom to organize and store into my backup drive.

Until now, though, the photos taken on iPhone I’ve just been leaving on my phone. However, I started running out of space on the phone. I was backing them up into Dropbox, but I’m out of space there too (really Dropbox, a 1TB limit for personal use? Why?). So I moved them all into Amazon Drive. If you have Prime, you get free unlimited storage of all photo files in hi-res format. It’s definitely the best deal going. The Amazon Photos user interface needs quite a lot of work, but the syncing is so much faster than the other services. (At least, it feels that way to me; maybe it’s just because I have the upload/download bandwidth and concurrent limits set to max.)

It feels like I go through this process every couple of years: pick a service, move everything over, hope you don’t lose anything, hope there are no proprietary file formats or file names or strange organization structures. As part of this review, though, I realized I prefer to organize photos a certain way “on disk”. Because the file sizes are so big, it makes sense not to keep them all in one big “Camera Uploads” folder. I group photos into folders by source, and then by year. For instance, each camera gets its own top-level source folder, underneath which photos from each year are grouped. The individual photo files are named according to the date they were taken. This lets me find photos manually much more quickly, and it lets me sync only the subsets I want across devices. A source isn’t just one of my cameras, either: photos friends send me get their own top-level grouping. I try to make sure whichever editing or backup tool I choose respects this hierarchy. They can hold whatever other metadata they each want, as long as the basic structure stays the same across applications.
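As a sketch of that layout, a small shell function can file a photo under source/year with a date-stamped name. The function name, library root and source names here are hypothetical examples of mine, not any tool’s convention:

```shell
archive_photo() {
  # usage: archive_photo <file> <library-root> <source> <YYYY-MM-DD>
  local file=$1 root=$2 source=$3 taken=$4
  local year=${taken%%-*}                     # "2018-01-14" -> "2018"
  mkdir -p "$root/$source/$year"
  mv "$file" "$root/$source/$year/${taken}_$(basename "$file")"
}

# e.g. archive_photo DSCF0123.jpg Photos fuji-x100 2018-01-14
#      files the shot as Photos/fuji-x100/2018/2018-01-14_DSCF0123.jpg
```

Because the hierarchy is just plain directories and file names, any editing or backup tool that respects the file system will respect it too.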

One side effect of this whole reorg is that my workflow for my digital camera is now the same as for my iPhone. The phone has truly turned into one of my cameras. As a photographer, no matter how big the memory card, I take all the photos off after a shoot, use the best tool available to organize them the way I want, and use another tool to post-process them the way I want them to look. Why should the latest camera, the iPhone X, be any different?

Another side effect is that my Fuji digital camera and the iPhone X are just two cameras, two devices. In fact, the way I see it now, they are two sets of lenses. Just as in the past I might have carried a wide and a portrait lens, I’m now carrying two lenses: whatever is on the Fuji, and the wide-angle ƒ/1.8 on the iPhone. They both have great lenses, they both have WiFi, they both take great shots I love (and other people do too, I hope) and they both make me creative.

So, in addition to the storage, why shouldn’t the workflow for both be the same? Onto the next 25,000 photos we go.

How do you keep your ever-increasing set of photos organized?

The more you know

A few people sent me the Times article on Strava’s global usage and paths in heat map form this week. Leaving aside the alarming headlines and shares for a second, I wanted to really think through: what are the real issues here? The data on Strava, whether in aggregate form or not, was already publicly available on their site. You can look up certain areas and paths and see all the top people that biked, ran and swam them. I think this feature has been there for as long as the site has existed. The fun of it is that you’ll discover other great athletes near you who run the same patterns – an interesting super-local social network of sorts based on paths. There’s a certain magic there that makes big cities like New York City feel more like a village.

Strava’s contract has always been: join our community, share your paths with other enthusiasts like yourself, and maybe learn about new paths near you that you wouldn’t have otherwise known. If you want to keep your data private, Strava has a way out. Its settings, like those of any other social sharing app, make it easy to block sharing and keep your paths private if you so choose. Strava is right to offer those tools and put the privacy settings back in the hands of the end user, who ultimately should be in control of whether their data is released and shown elsewhere.

The first issue

At the same time, however, people don’t know any better, are too busy, and can’t dig into every setting on every platform. There’s a lot of friction involved in getting end users to update settings. Most people just don’t know or don’t care. So if it really matters, platforms should take care not only to inform users, but to do their best to protect them as well. Many employers, including the government, have to work harder to keep their users informed of the settings and how to protect themselves and their organizations. Can we really trust users with the default settings on any system, knowing they’ll leave the defaults on, knowing they’ll never change the password on their wireless routers?

I’m reminded of a question Apple asks you every so often – which probably makes you think of your battery life more than it does your potential privacy leaks: ‘“________” has been using your location in the background. Do you want to continue allowing this?’ (Speaking of Apple, take a look at how many location-related settings there are in iOS; you can get very granular with this stuff and still not be able to cover it all.)

The second issue

Why do we trust all our data to Apple & Google fully knowing they can read and hear everything, but then panic over smaller companies and third parties having our information? Is it somehow different if you’re a small service or startup versus a large one?

The third issue

As another exercise, let’s fast-forward to a fully decentralized future (I hope), where you are fully in control of your data and you own your data security keys – individually or in aggregate. If a leak happened, would people just blame themselves? People always want someone to blame, and given the choice to manage their own security keys and data, I’m sure a lot of people would rather not deal with it, and would trade it away for the convenience of data-in-a-centralized-cloud instead.

The fourth issue

Data aggregates can get you into trouble even if each individual data point is harmless. An example: I know where you are right now based on the location in your most recent Instagram or Snapchat story. That’s only one data point, so maybe I can’t do much with it. But if I had a direct API feed of your full history, that would expose a lot more about your life – patterns and paths and timestamps and locations over time from which I can derive real meaning.

This isn’t new to the world of mobile and location data, either; it’s something that comes up in medicine regularly. Here’s a study from PLoS Med on ‘Ethical and Practical Issues Associated with Aggregating Databases’ from ten years ago:

Participants who consented to the collection of their data for use in a particular study, or inclusion in a particular database, may not have consented to “secondary uses” of those data for unrelated research, or use by other investigators or third parties. There is concern that institutional review boards (IRBs) or similar bodies will not approve of the formation of aggregated databases or will limit the types of studies that can be done with them, even if those studies are believed by others to be appropriate, since there is a lack of consensus about how to deal with re-use of data in this manner.

Combined databases can raise other important ethical concerns that are unrelated to the original consent process. For example, they may make it possible for investigators to identify individuals, families, and groups. Such concerns may be exacerbated in settings where there is the possibility of access to data by individuals who are not part of the original research team.

If you can opt out at the level of your individual data, you should also have an option to opt out of the “secondary uses” of your data. It’s not enough to let others determine how keys and uniquely identifiable information in datasets get anonymized. You should be able to opt out not only at the individual level, but also of the secondary-use process, because you don’t know and don’t have control over that process (and it also likely won’t benefit you or the primary study).

These aren’t easy problems to solve. They all trade some level of privacy for convenience, for wanting to be noticed, and for wanting free platforms (which are ad-supported, and which then take your data for secondary uses). Ultimately, to properly care for data, it will come down to a new sort of contract between an individual and a service: the individual’s right to hold their data for their own use, the right to take back their data when they so choose, and the right to be forgotten altogether.

The endgame (and AlphaGo)

I recently watched the movie AlphaGo on Netflix, which documents the lead-up to and the challenge match between DeepMind’s AlphaGo and Lee Sedol.

Three things jumped out at me watching the movie.

One, we know it’s going to happen. Even if you didn’t already know the outcome of AlphaGo versus Sedol, you know that someday soon the computer is going to surpass and beat the human. If not this version, then the next. If it can only win a few games now, it will win all games in the future. We already know the ending. But you can’t help wanting Sedol to win, because it means that we all win. It means we put off by another day the inevitable moment when the computer can beat us – not just by being faster than our brains and bodies, like previous inventions, but by learning by itself and out-thinking us.

Two, it’s in the way the documentary portrays the tension between the two sides that it strikes you: nobody thought it would happen so soon. Neither side, DeepMind or Sedol, thought so even as it was unfolding. It’s a moment that is simultaneously terrifying, heartbreaking and amazing. The endgame is near, but how amazing that we were able to get a group of people together to program that.

Three, and this is on your mind as you watch, and this is why the movie is so good: it asks the question “What happens when it happens?” What does it do to the psychology of humans to know they can’t win, that the computer has surpassed them? What’s the emotional toll? Sedol, even with his incredible winning percentage, has lost games to human players in the past, but this loss is so much different: knowing he (and, therefore, all of us humans) can’t win this one. And what happens after it happens? Will humans just play human games, and computers computer games?

In the end, we’re left with only a hope. A hope that the creativity of the machine will unlock a new creativity within us. To allow us to see moves and the world in new ways we hadn’t before envisioned. And like any other tool we’ve invented – the pencil, the bicycle, the car – the computer will continue doing just that.

A thousand true fans

I came across two great remembrance posts this afternoon, from Om and from Kottke, about the loss of their friend Dean Allen.

I never met him but I was always a fan of his work, from textism to Textpattern to favrd. When he first announced TextDrive back in 2004, I put up the $200 (a lot for that time in life) and signed up to support him and get a lifetime account out of it. We talk a lot these days about crowdfunding and bootstrapping your work via your first thousand true fans – TextDrive was the first time I’d ever encountered that concept on the web. It was a refreshing and beautiful thing.

I’m always amazed that one person can leave so much behind on the internet – and that he kept trying new things, just for the sake of trying things, and kept moving the web forward, because that’s the only way it ever moves forward.

It seems a thousand others thought so today as well.

Beliefs

Earlier this week, at a team dinner, we got to talking about whether people can and will ever move off of Facebook. The question was posed not necessarily as an exercise in how to start a new social network, but to talk about the company’s influence and network power, and to ask: if something else took its place, wouldn’t that service naturally go through the same evolution? Create a hook, aggregate people and data and attention, and then monetize that attention with advertising. The same question can be posed of all institutions that grow too big and centralize power: aggregate a resource, attract people in, and use network effects or other data lock-in to make it hard for them to leave.

When it comes to big companies, you can only believe in one of three things: 1) that if something grew too big, people will always have the freedom to vote with their feet and move on; 2) that government will step in and break it up; 3) that technology will improve to break us out of such lock-ins because technology wants to innovate and improve itself (it could be argued that the default state of technology is to always be in a process of self-improvement like this).

On the first, that we are free to choose whichever service: we now know this is increasingly untrue. With almost all types of centralization, and especially with network-effects businesses, you can’t just move on easily. You’re on it because your friends are on it, and they’re on it because you’re on it. You’re on it because it lists more products than the other sites and already has your billing information, which makes it more convenient. And because more people are shopping on it, more products and vendors get on it. So the more powerful each network becomes, the harder it is to remove yourself (and your data) from it. The “well, if they raise prices, I’ll just go elsewhere” argument just isn’t true these days, because at some point a service gets so big that there aren’t many good alternatives that will have the stuff you need.

You can believe that governments will step in and do what they did in previous eras: enforce antitrust law and break up things that get too big. You can believe that policy will change, but it may not always go your way (e.g., the state of the net neutrality debate now). Policy always reflects who’s running the government during any span of years. It can go one direction for a while and then, as we see now, flip into another for a few years. So you can’t solely believe in and wait for that either. (In fact, even when it does feel like it worked, as with Microsoft in the 90s, it could be argued it wasn’t just government intervention that made them stumble, but rather that they missed multiple technology trends in a row.)

So ultimately it’s left to technology to provide solutions for us, and that’s why, for those working in and watching the space, decentralized technologies like cryptocurrencies are so interesting.

In The New York Times Magazine this weekend, Steven Johnson has a beautifully written essay on bitcoin, blockchain, the underlying technologies behind these networks, and why there could be a ‘there’ there. Like a great photo or idea, it’s one of those things you wish you had done yourself – that was my first reaction! – written this way for others to understand. He explains it better than any of the hundreds of other posts and articles you’ll read on this stuff. From now on, when anyone asks me for a starter on networks and cryptocurrencies, I’ll just send them this essay as the first read.

You have to believe in one of these three things. Each has people behind the scenes creating policies and movements and supporting that style of action. But it’s the belief in technology that has consistently moved us forward in the past, and my belief is that it will once more – and that it should.