diego's weblog

there and back again

Category Archives: software

a billion waterdrops (beyond the last wave)

“So now, less than five years later, you can go up on a steep hill in Las Vegas and look West, and with the right kind of eyes you can almost see the high-water mark — that place where the wave finally broke and rolled back.” — Hunter S. Thompson, Fear and Loathing in Las Vegas: A Savage Journey to the Heart of the American Dream, 1972


Just one word. One word from which we can start pulling, as if it was the string we’ve rescued from a mix of ideas that form a confusing ball of yarn behind it. We start to pull.

There are some systems, or technologies, or services, that these days are being implicitly used as a framework for ideas — as if what we have now is all we'll ever have. Facebook, and the specific social interactions it rewards and punishes, is frequently noted as a sort of immovable place where something happened. As if it was Times Square or Deep Space.

This is what I mean:

A mother writes down her thoughts on her son's birthday, but he'll never read them. He'd died not long before, of a heroin overdose. Her words are filled with pain, and she questions her faith: "Where is the God that is making us all so sad?" she asks.

She posts these thoughts on her dead son’s Facebook page.

Someone replies: “Junkie.”

This is described in a post by Stephanie Wittels Wachs: “The End of Empathy” that you might have already read. The mom is her mother, the son her brother.

She goes on to talk about how in engaging directly with someone online, even someone who’s behaved like a complete knucklehead, the most frequent thing that happens is a softening, perhaps an apology, perhaps, even, a complete change of attitude.

But why?

My own ‘a-ha!’ moment in this regard came, thankfully, in a much lighter situation. A long, long time ago, when I lived in an island far far away — Ireland!! — I wrote a parody abridged script for The Matrix Revolutions that was quickly picked up and shared and read by thousands of people (In 2003, this was a big deal). Within that blast of short lived micro-Internet pseudo-fame, I always remember a particular interaction with one of the first people to comment on it (I had posted that on my weblog, with comments open, those were the days!). This person replied:

Its easy to make fun of something you don’t understand. You did a damn fine job of that.

Which led me to post a lengthy reply which could easily have turned into one of the many essays academics publish about these movies. I went all the way: comparison with other movies and texts, religious themes, even threw in some Plato for good measure. My honor as a nerd had been challenged, I had to reply.

The very first comment on my long reply was from that same guy (sadly, the comments were lost when I migrated away from MovableType a few years ago, so you can't see them on the page). He said, in essence, "Wow, you have really spent a ton of time thinking about this stuff, definitely more than I have; now I realize you're coming at this as a loving fan and I appreciate it, thanks mate." (Ireland, remember?)

But, again: why?

Connection is not communication

Aside from the odd psycho here and there, why is it that so many, many otherwise decent, perhaps even nice people appear to become trolls online? And why do they revert to being human beings if you engage directly?

The digital age has shown our true colors… hmm? We are all jerks until confronted, is that it? And then cowards? Or suddenly reasonable? Or what? Is it Facebook's fault, or Twitter's? Or…

Well, yes and no. Yes, because these toxic interactions that would otherwise never happen are happening over those channels, but it's not their "fault." It's just that they are carriers of information. They are indifferent to the semantics of the bits they transmit. They can just as easily be distributing a video of a puppy saving a kitten from drowning as they can be broadcasting a horrifying video of the beheading of an aid worker by some idiot psychopath.

This behavior happens because we have potentially connected everyone to everyone, and it’s happening over a channel that is connecting people without context, without community, without history, without background.

Someone can now reach you but without the cushion and controls and context of a community around that connection, it can’t be a communication, it just can’t. You need to have some idea of who someone is before you can interact with them effectively. Words without a context are meaningless and therefore carry little cost. This is why people (perhaps some with poor impulse control, perhaps other problems, but not always) can run around the Internet threatening others with rape, theft, and murder, and then casually get up from their desk, close their computers, kiss their kids as they finish their cereal, and go to work without a worry in their mind.

They would never do that at their neighborhood meeting. Never! Why? Because there are community rules. Because they would be run out of town, or put in jail, or whatever.

But wait! Aren’t social networks communities?

Nope. At least, not necessarily.

What we need is to get into our heads that Facebook, Twitter, LinkedIn, and any other "flat" social networking services are not communities.

Repeat after me. They are not communities. They are not. They. Are. Not. Even a Facebook Page is not a community any more than a marker attached to a wall on a random street (so that people can write on the wall) is a community.


The Original Facebook Wall: the Wailing Wall in the Old City of Jerusalem

Facebook, Twitter, LinkedIn: they are utilities. Nothing more, nothing less. We use electricity to power a TV, which can then show good things or bad things. We don't blame the power company because we used its electricity to cook dinner for someone we let into our home who turned out to be a jerk.

No, no, no, someone says. You’re talking about my Internet provider, like AT&T, or AOL or whatever. Those jerks. Facebook isn’t a utility! It’s the global consciousness arising from the deepest, purest corners of our shared soul! Everyone can grab someone else’s hand and sing Kumbaya in unison. Come on people…!

(cue the tumbleweeds, rolling across the view).

Interaction needs context

“Flat” one-size-fits-all social spaces like Facebook or MySpace (remember MySpace?) or Friendster (remember Friendster?) or AOL (Remember AOL?!?!) have been heralded one after the other as “the global village”, a “global consciousness”, a global whatever.

This, however, is — to put it technically and delicately — total bullshit. There’s no global village. There’s global reach. The fact that my connections to people can be drawn as arcs that travel across the planet doesn’t mean that my village is the planet, it just means that the people I talk to are dispersed.

AOL, Friendster, and MySpace are no longer talked about, more because of technological/execution failures than model failures. AOL got stuck with modems, but it still generated over 5 BILLION USD in revenue a year (yep). Friendster imploded under load but was revived for a while with electric paddles, and if they had managed to hang on I'm sure they'd still be alive, even if just barely. MySpace became a total garbage fire both in terms of performance and content so it couldn't be the beacon of the world, but it's still hanging in there.

The reasons those services failed are primarily technological. They thought modems would be cool forever. They couldn't adapt. They couldn't handle the load. Facebook pulled it off, and it's no minor feat. But check this out:


AOL Welcome Login Page, 20 Years Ago

Now do this: open your browser and visit facebook.com. Look at the image above. Look at the site. Repeat as much as you can stand it. Ok, maybe not that much.

I'm not even going to bother to point out all the overlaps. That's why we have a word like "obvious." You have a Facebook account like you have an account with the power company, or like you have with the cable company. Same goes for LinkedIn, but with a different purpose. Or Twitter. Step back and think about the accounts themselves, the services. You don't "enjoy" belonging to Facebook (early on, when it was exclusive and specific to campuses you did enjoy it, but that actually just reinforces the point that's coming). Just like you don't "enjoy" having an account with the electric company, but you have it because that way you can have light, cook, and watch TV and so forth. You are on Facebook because it gives you access to a bunch of other things. WhatsApp and Instagram (short private messages, photos) are also probably on a trajectory to settle as utilities. It's not a coincidence Facebook bought them.

The point is there have been many attempts at building social utility overlays for the Internet, and through work, luck, evolution and determination we finally got them.


This isn’t a putdown. At all. We need this stuff. We just have to remember that we need it to do something else.

It’s not the electric company’s fault that the movie was terrible

Facebook, Twitter, LinkedIn… are not “the end”. They’re not the last stage of human communication. They’re not even the beginning. They’re the necessary infrastructure we had to build before we could do all the cool stuff.

That word, “Junkie,” that’s the equivalent of a prank call within Facebook. Not that it’s a prank, but rather that it’s the lack of context that permits its execution and existence.

We need to move past this. We can’t think of Facebook as being responsible for destroying human empathy just as we didn’t think of the phone company as doing that because someone could prank-call you.

The text, the happy icons and the profile photos and everyone having a good time at lunch obscure the fact that these are minimum-common-denominator interactions, not insignificant by any means, but not unlike what you'd get if you were publishing your own little newsletter and sending it to whomever you pleased.

With this infrastructure in place, we must focus on communities that are meaningful in whichever reality you're engaging with.

How? There are many ways. Individual subreddits (rather than reddit itself) are an example of proto-digital communities. They lack the formal tools that communities need to shape rules and behavior, but the best ones create their own ad-hoc versions.

Medium is itself an example of something new that you can build on top of the utilities. Not quite a community, but not quite just a publishing platform either.

Communities can also be built effectively online, leveraging the connections created by the utility networks into well defined social spaces. How? For that, go read “Building Better Social Networks: Beyond Likes, Follows and Hashtags” from Gina Bianchini (co-founder of Ning, and therefore my old boss, so there’s disclosure for you!) who now runs Mightybell. From her article:

We can do better than platforms that require a convoluted combination of hashtags and poorly organized numbers for questions and answers to “have a conversation.” No one should have to work this hard to chat.

Exactly. What that means is that you don't have to have a conversation in an environment where you fear that some random person might appear and start making a mess. Conversely, it also means you're not always walking on eggshells, not diluting what you say, because there's a context to it that's provided by the community.

When you have true communities (either in the digital or the real world) the kinds of regular trolling and abuses we were talking about early on simply don’t happen, precisely because it’s a community, not just a communication channel that is indifferent to what it is being used for.

Social networks were one of the waves of the early 21st century. Large-scale digital social interconnections, with maximum coverage and total accessibility (governments permitting…).

For social and digital what comes next is not another wave, but what happens when a wave breaks reaching the shore:

A billion waterdrops.

(reposted from medium)

the fallacy of … tape.

A discussion has emerged in various corners of the Internets regarding a recent photo of Mark Zuckerberg in which someone spotted that he (apparently) covers the camera of his laptop, and possibly the mic as well, with tape. (As far as I know, this hasn't been confirmed, so I'd argue we can't really know for sure the purpose of that tape.)

Perhaps we can start by saying that if your “solution” to a problem is basically something that Homer Simpson has already done (see video above), you’re probably not on the right track.

Regardless, this led to articles like "Mark Zuckerberg Covers His Laptop Camera. You Should Consider It, Too."

John Gruber points out:

I think this is nonsense. Malware that can surreptitiously engage your camera can do all sorts of other nefarious things. If you can't trust your camera, you can't trust your keyboard either.

I’d go further and say that it is worse than nonsense: it is dangerous nonsense — because it creates a false sense of security.

The problem it "solves" is hilariously far down the list of problems you'd have if malware had taken over your camera without you noticing.

Because, yes, with the exception of (very rare) highly specialized attack vectors involving specific hardware elements, someone taking over the camera and bypassing low-level mechanisms that control it and the light pretty much guarantees they have full control of your system, including your keyboard, which by the way means they have all of your logins and passwords to all services, local and remote.

“Well, it doesn’t hurt, does it?” someone might say, but I’d argue that it does. It does hurt that this sort of nonsense can be propagated. It’s a bad meme. We should be talking about real security measures, improving software, whatever… except this.

It doesn't solve the real problem (again, because the real problem will usually be "someone has total control of your computer"), and it doesn't even solve the "problem" it's trying to solve. Because: how many HD cameras do you think are in a 15-foot radius of that laptop camera? I'd bet a couple of dozen, easily (at various angles, no doubt). Does anyone realistically think that malware that has taken silent, undetected control of a networked system running UNIX is just sitting there twiddling its thumbs and uploading JPEGs to a server somewhere? It's like some types of bug infestation: if you find them anywhere, chances are they're already everywhere.

This is the reality of the world today. Taping over a camera is as much a solution as sticking your head in the sand. Which is to say: none at all.

The universe doesn’t do straight lines

“Everything you’ve learned in school as ‘obvious’ becomes less and less obvious as you begin to study the universe. For example, there are no solids in the universe. There’s not even a suggestion of a solid. There are no absolute continuums. There are no surfaces. There are no straight lines.”

– R. Buckminster Fuller (1895–1983)


If you haven’t thought about this before, it’s one of those ideas that generates a special feeling, something you’ve always known but never articulated. The kind of ideas that make you look up into the sky, eyes lost in the distance and say “Yeah, that’s right,” while you smile and nod slightly.

It also lends itself to pseudo-myth busting: Of course there are straight lines! Here, let me show you, after which we look at a high-altitude image of salt flats, deserts, rocks, and any number of other things that appear to have reasonably straight lines here and there at different resolutions. But there's no "reasonably straight" in math, geometry, topology, and that's what we're talking about.

Even the most ubiquitous, inescapable "line" in nature, the skyline, or horizon, is not a line at all, but a curve of such diameter that its curvature can't be discerned by the naked eye unless you're, well, flying.
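A quick back-of-the-envelope sketch of the scale involved: the standard approximation for the distance to the horizon from eye height h over a sphere of radius R is √(2Rh).

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean radius of the Earth, in meters

def horizon_distance_km(eye_height_m):
    # Standard approximation: d ~ sqrt(2 * R * h), valid when h is much smaller than R.
    return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m) / 1000

print(round(horizon_distance_km(1.7), 1))   # ~4.7 km: standing on the beach
print(round(horizon_distance_km(11_000)))   # ~374 km: airliner cruise altitude
```

From the beach, the circle you can see is less than ten kilometers across; from cruise altitude it spans hundreds of kilometers, which is roughly when the bend finally starts to register.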

But… why? There’s no law of physics or biology expressly against straight lines, 90-degree angles, or perfect geometric shapes. DNA is a construct of incredible complexity. Surely a straight line wouldn’t be a problem if it had an advantage.

Thinking about it from an evolutionary/natural selection perspective it becomes clear pretty quickly that there’s little advantage to something being perfectly straight compared to anything “reasonably” straight (in the few cases in which you need it). On the other hand, “perfection” has a very clear cost.

Consider — Anything that can bend will eventually bend and cease being straight if that was its initial state. Therefore, the only way for something to remain straight for a long time with all the environmental uncertainty created by nature is for it to be extremely rigid, not flexible. So you end up with something that will not bend but that, no matter how strong, will have a point at which it breaks.

Flexibility matters. Bending is better than breaking, a fact that becomes a problem for humans since our bones get stronger as we grow, but the strength translates into less flexibility and therefore more of a chance of breaking outright.

It seems logical then that a perfectly straight line isn’t a thing you’d want in your evolutionary tree.

Speaking of trees.

Trees are interesting when you think about them in terms of construction. When we (humans, that is) started building really big things, we needed the help not only of straight lines but also of the simplest convex polyhedron possible, the tetrahedron, aka pyramid. (Even though the pyramids humans tend to build are not tetrahedrons, since they generally use a square, rather than triangular, polygonal base, the point remains.)


It's taken us 8,000 years to figure out how to build high without building wide. Meanwhile, many trees grow tall enough and strong enough that humans can live in them, and yet their weight distribution is not unlike a pyramid standing on its tip, supported by the root structure, which while smaller in mass is generally larger in surface covered. (The triangular areas I marked in the images above reference mass, not surface.) The tensile strength and toughness of the materials used matter a lot, of course, but so does what you're trying to use them for.

If you're just getting started at the whole civilization thing, and you're going to build a hut to protect yourself from the elements, or a small vehicle to carry stuff, it is better to use artificial constructs (straight lines, circles, etc.) because they make calculations easier, reproduction easier, and verification easier. Early on, at small scale, knowledge can be transferred verbally, but as soon as you start writing things down, simple geometries become even more important. You could carry the design to another city, where the master builder who came up with it wouldn't be able to verify your construction. The certainty of mathematics becomes a necessity, and the simpler the design, the simpler the concepts behind it, the easier it is not only to propagate but also to verify.

For us, then, up until well past the point when we’ve moved beyond simple construction capabilities, it pays off to expend the additional energy necessary to approach mathematical perfection. The advantages are many. The time and energy invested in, say, turning a tree trunk into lumber is acceptable not only because it is easier to use, but also because it’s easier to measure, partition, buy, sell. This, in turn, makes markets and therefore whole economies function more effectively and efficiently as well.


787 Dreamliner Wing Flex Test (source: Wired)

As you advance in building your civilization you start to see that evolving past a certain point both requires and enables flexibility in how and what you create. It's not just about architecture, or mechanical engineering. Clothing, for example, also had to pass through a period in which mass-production constraints around what you could deliver resulted in straight lines everywhere. Think back to the sharp edges and angles in suits and dresses of the late 1940s and 50s, when mass production of those items became commonplace.


Now, Betty & Don probably aren't fooling around with mass-produced stuff, but manufacturing capabilities invariably affect design and therefore fashion — even for high-end goods since, after all, they are part of the same ecosystem.

Attack of the rectangles

Now, this has all been very entertaining so far but my point is really (you guessed it) about software.

Straight lines are easier in software, too. Software interfaces remain stuck in the same era of rigidity that architecture, engineering, and even clothing were stuck in until the very end of the 20th century, when new processes and materials allowed us to start creating strong, bendable, curved surfaces.

Take a step back and look at your phone, or laptop screen. Start counting rectangles, or, as we could also call them, boxes.


There are boxes everywhere! Invisible boxes contain other boxes all over the place.

Don’t get me wrong, I am not saying boxes are evil or anything like that. Rectangles are fine, they’re our friends. They’re cool. In the 80s, I think it was even hip to be square for a while. But we’ve become overly reliant on them. We use them as a crutch. Instead of trying to figure out how to make something work for a specific task, we use rectangles everywhere, because we know they work, even if they aren’t perfect.

This matters because rigidity propagates directly from the interface into our thoughts. It is not the same to have an open white space to write in as to be given a small box and 140 characters. It is not.

In that vein, I don't see it as a coincidence that there are so many great text editors around that focus on eliminating everything but what you're typing.

Circular/rotating dials are better than vertical knobs because the human hand has more precision in radial movements than in linear movements. Our extremities are well adapted to rotating and sliding along curves, but everything in our computers is stuck within the vertical and horizontal confines of 2D Cartesian coordinate space. With touch on devices (and 3D) we can create interfaces that are more natural and organic, and better adapted ergonomically to how we operate in the real world. The moment you add any kind of spatial analysis using IR and so forth (e.g., Kinect, Leap), significant vertical and horizontal movements, while definitely useful, become secondary to the expressive power of the hand.
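As a rough sketch of what that implies in code (hypothetical names, not from any particular toolkit): driving a dial is just a change of coordinate system, from the x/y of the screen to an angle around a center.

```python
import math

def dial_value(touch_x, touch_y, center_x, center_y,
               min_value=0.0, max_value=1.0):
    # Assumes screen coordinates: x grows to the right, y grows downward.
    # Angle of the touch point around the dial center, measured clockwise
    # from the 12 o'clock position.
    angle = math.atan2(touch_x - center_x, center_y - touch_y)
    # Map (-pi, pi] to a 0..1 fraction of a full turn, then to the value range.
    fraction = (angle % (2 * math.pi)) / (2 * math.pi)
    return min_value + fraction * (max_value - min_value)

# Dragging to the 3 o'clock edge of a dial centered at (100, 100) is a quarter turn:
print(dial_value(150, 100, 100, 100))          # 0.25
print(dial_value(150, 100, 100, 100, 0, 11))   # 2.75, on an amp that goes to 11
```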

Some calendaring systems have now added margins around events to account for travel time, and if you happen to allow enough of your information to be constantly provided to a server, they can help find routes in advance, be smarter about alarms, and so forth.

The event itself though, fundamentally, is still some text in a box.


To begin with, no 'event' of any kind that you put in a calendar ever fits in a box. "Aha!" you'd say — many calendaring systems let you add text, attachments, and locations, and change colors, and so forth.

But you just put those things in a box, metaphorically, and literally, as far as the interface is concerned.

If you open the event you get even more boxes within boxes that contain some of these files and information, much of which is also in rectangle form.

And when you’re done, as the clock marks 1 pm, that box remains forever frozen to collect digital dust, exactly the same as it was when it started.

Finding meaning and purpose in actions

But that’s not what the event is, is it?

Whatever it is, it’s fluid. Maybe it doesn’t exactly start at that time. Maybe it doesn’t exactly end at that time. Can’t software take note of that?

Have you ever been to any kind of event, meeting, presentation, appointment, that could be properly described by the boundaries of a box or a rectangle? Something that starts here and ends there? Ever?

Say it was a meeting. What about the things that happened during the event? Why can’t you keep track of the links you loaded, the documents seen, changes made right there?

Phone calls made?

People that came and went? We had a list of participants, but we got Joe to come in and help us with something because, duh, he’s actually the one in charge of…

Right? See what I mean?

Maybe you can’t say exactly what shape all of that stuff takes, but it sure as hell doesn’t feel like it fits in anything that has right angles and a preset width and height.

Because N3xt

These ideas are obviously important to me and fundamental to how I’ve approached thinking about N3xt, but this isn’t about one system. It’s about trying on new ways to think about what we build, how we build it, and for what purpose.

We need to expand how we think about data and information to the point of challenging the modeling, storage and processing of long-standing fundamental constructs like pages, folders, lists, and so on, on "clients". It's a change that's already been happening in the "backend world" for a while now, and it's long overdue on the other side. It's time we move past using metaphors and concepts anchored in a paper-centric, office-centric, container-centric view of the world. It's time we let go of linear organizational schemes. Lists are great for some things, but surely they don't have to be the foundation for everything.

All the major online services rely on incredibly complex infrastructures in which data exists and interacts in a virtual world that is so… removed from what happens when you squeeze it all into http://… and a grid of pixels that it might as well be in another universe. Backend filesystems at scale stopped looking like filesystems a while ago, just to take one example. It’s time to bring some of the magic pixie dust over to the other side and see what happens.

We also have to consistently push against rigid structures: interfaces based on grids and boxes and lists and menus. We have lived with the same fundamental ideas for almost 50 years now, since the great Douglas Engelbart cracked open the fire pits of invention with the Mother of All Demos. Desktops, files, folders, pages, "cabinets", a digital approximation of the American Corporation in the 50s and 60s.

We’ve got the tools. We’ve got WebGL, and 3D frameworks, and inputs and sensors up the wazoo.

People aren’t going to be confused or terrified or anything like that. People are constantly adapting to new ways to interact, and we now have a generation that has grown up without knowing what a dialtone is.

In client software in particular, layers are closely linked, more interdependent than in backend software. In the server world hardware homogeneity and network layers actually help a bit in creating more elastic relationships between endpoints — so you can have something amazing like Presto, which gives you new possibilities: an aging squirrel in a wheelchair (SQL) strapped to the nose of a Space Orbiter (Presto) which can turn various toxic-otherwise-unusable solid and liquid fuels (in my metaphor, all the various horrible data sources that you can tap into… I'm looking at you, Oracle-over-JDBC) into precisely guided propulsion that will get you into orbit and back, so the squirrel gets to see space and we get to do something useful with all that horrible toxic stuff while still actually using something cool and shiny and moving things forward in between the two.

On the client, you can't quite do that. The coupling is too close, the constraints too tight, so if your data model is literally a SQLite table with 1,000 rows and columns, and you actually use row-access mechanisms and even, God help you, normalized table structures, then it is kind of inevitable that what's going to show up on screen will look like that. And if it doesn't, if you're smushing up the 1,000 rows into a neural network that will give you the ONE row you need and display just that, then why the hell are you storing and fetching stuff using SQLite? Why not just have a blob of 1,000 other blobs that you read and write atomically, along with whatever you preserve of the neural net between runs?
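For what it's worth, a minimal sketch of that alternative (hypothetical names, and assuming the data is small enough that plain JSON on disk is fine): read the whole thing, work on it in memory, write the whole thing back atomically.

```python
import json
import os
import tempfile

def load_blobs(path):
    # Read the whole collection at once; there is no row-by-row access.
    if not os.path.exists(path):
        return []
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def save_blobs(path, blobs):
    # Write to a temporary file and rename it over the original, so the
    # store is replaced atomically and never left half-written.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        json.dump(blobs, f)
    os.replace(tmp_path, path)

# Usage: load, mutate in memory however the interface needs, save.
items = load_blobs("events.json")
items.append({"what": "coffee with Joe", "roughly": "after lunch"})
save_blobs("events.json", items)
```

No rows, no schema, no accidental grid waiting to be drawn.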


This doesn’t mean that we have to throw everything away and start from scratch.

Not what I’m saying.

When I see something like what Microsoft proposed here for HoloLens, I have to concentrate pretty hard to keep myself from hitting my head against the wall repeatedly.

Because while what we already have is ok at what it does, we just have to let go of the notion that we can keep juicing it forever.

What I’m saying is that we’ll rethink how we approach everything new, and, sure, some of the old stuff might be replaced, but that’s not the point.

So for a quick glance at your watch, just some text that says “Event, 1 hr, starts in 5m” may be fine. You could even have a little colored rectangle around it.

But back in our growing mixed reality world, dealing with ambient sensors, phones, holographic lenses and wall displays … a BB-8 replica that rolls around the room beeping and whistling quietly…. and supercomputers in our pockets, in our desks, and in the infrastructure all around us, the only straight lines should be the ones we create in our minds.

Just like the horizon.

(cross-posted to medium)


 What’s n3xt? Let me start by saying that if you’re looking for a 30-second answer, or a quick pitch, or some shortcut to a category (“it’s like Tinder for piano tuners!”), or something along those lines, you will probably be disappointed.

For new products or technologies there’s this pervasive idea that if people can’t figure out what your thing does in 30 seconds, then you’re toast. Done. Or that if you can’t explain why something is valuable in a single sentence you haven’t distilled the concept enough.

That's true in some areas. There's value in brevity, sure. But brevity can also be a straitjacket. It forces us to use shortcuts, makes us dependent on analogies and pre-existing categories. It creates a rigid set of constraints that limit the possible complexity of the message and subsequent discourse.

I understand that this is sort of breaking "the rules", such as they are, and that in doing so I might not be able to reach some people. That's ok. I do this not for novelty or to 'be original' or whatever, but rather because I believe that complexity isn't a vice, and if we end up only communicating ideas that can fit on a bumper sticker it will become increasingly difficult to tackle the challenges that lie in front of us. I also believe that the extra effort and time will be worth it.

So, with that in mind…

Bits are fusing with atoms. Sensors and screens are proliferating, and along with them storage, networking and compute power.

Devices are becoming more specialized in both form and function.

And the world in which pixelated rectangles are the only pathway to our data will be gone.

In time, applications as we know them today will no longer exist. The Facebook News Feed is a great example of what the post-app world looks like in two dimensions.

In it we don’t see apps.

We see photos, messages, likes. Data plus context.


N3xt is software that runs on personal devices — phones, tablets, laptops, TVs, etc. It creates a personal mesh network that makes it easier to share data between devices, and it can expand outward to create circles of trust with peers connected not only based on geography or network topology but also real relationships between people. It can consume data from different sources and aggregate it in a way that can only be done client-side, creating context and connections across disparate services and data sources that would not be feasible otherwise. Leveraging context to show only the data that’s relevant allows us to break away from one-dimensional navigational mechanisms like lists or scrolling that become necessary when, unable to determine what is important, we just present a whole list of items that only might be.

Thanks for taking the time to read this far. Much more will be coming soon. In the meantime, you can follow me @diegodoval and @n3xtapp or send questions via email.

PS: www.whatsn3xt.com will be expanding. Not much going on there right now. I’ll talk about why in more detail in a future post.

(cross-posted to my medium.)


So last night I get a news alert from CNN: This year Europe received one million refugees.

I read that, and wondered, not for the first time: what am I supposed to do with this information?


Don't worry, this isn't another indictment of the 24-hour news cycle, or some empty complaint about the relentlessly depressive nature of a lot of news (in general, not even lately). This isn't about whether we have the information we need or deserve, or whether we can or should be doing anything, anything at all, about the apparently innumerable intractable problems that surround us, so much so that we end up making blockbusters of movie after movie that talk about nothing but the end of the world… because, presumably, one thing that will be for sure after that is that it will be quiet.


It’s about a shift in attitude, and it’s meant to be positive, not to bring anyone down. It’s meant to be freeing.

This is about a simple question that we need to start asking ourselves more frequently.

Here it is: why?


We have to start asking ourselves this question about a great many things, in a great many areas of human endeavor.

We have to ask why, but not in a negative way, not wailing or crying about how bad we have it. This isn't dark or cynical. We have to ask why in a way that can bring about new solutions, in a way that maybe requires us to throw away some old notions or to stop doing things the way we used to.

Because it’s time.


The question I’m talking about, the why in the way I mean it, refers to questioning things we’ve been repeating for a long time without thinking or maybe without really trying to figure out if there’s a better or different way for something.

We have spent many decades building stuff and coming up with ways to do stuff and creating organizations that can consistently both build and come up with ways to do stuff.

A lot of it is fine, really. This isn’t about throwing it all away.

But a lot of it is just not really useful anymore. It needs to be replaced, or simply removed.


Example: the news. CNN informs me that a million refugees went into Europe this year, via a notification on my phone.


To start somewhere, out of the many many things that happened in the last 12 hours, why did I get a notification for that and not for something else?

Easy: Many people (certainly in the news) have been freaking out about refugees. CNN wants readers. So some programming person somewhere saw this article come up and had to choose the ONE notification they would send out for the next 12 hours, because they know they can't send a million or we'll just delete the app, because they know we don't pay much attention anyway… and they chose something that is simultaneously "Serious" and "Newsworthy" but that will also almost certainly catch the eye. A MILLION REFUGEES!!! WHAT THE….

Yeah, sensationalism. So this CNN person (or more likely a group… think of it: this one notification, sent to millions of people, the result of a 30-minute meeting someone had in Atlanta a few hours ago).

But still, why? Well, CNN is (nominally at least) in the news business, so it must do news. And this is how we do news now, isn’t it?


This is what I imagine Jeff Zucker wakes up every day to, that scream inside his head. ISN’T THIS WHAT YOU WANT? MORE TWEETING OR WHATEVER? JUST TELL ME I’LL DO IT. Jeff has a tough job, no two ways about it.

Back to the point: CNN is trying to survive, doing news or the closest approximation they can manage while still being able to cut to commercials about Viagra and golf courses. That’s fine.

CNN was the inventor, really, of the 24-hour news cycle. They were (actually) serious once. Real journalist-like. They just decided that the world had become too connected and too complicated and that the previous diet of news only in the morning, evening, or night, was not enough. That we needed news ALL THE TIME.

Fine. It was just an extension, really, of what had happened earlier. It used to be there was one news program that everyone liked. Everyone had their favorite TV news anchor. The most “trusted.”

Before that, radio and newspapers held more sway. And even before that, just newspapers, or just a few printed pages every day, really, of things that were happening around towns. Around big towns specifically, because…

There. Right there.

There’s the why.

With the industrial revolution came the need for people to be closely together. Cities grew. As cities grew, they became more concentrated. People didn’t know each other anymore.

The old mechanisms for knowing what the hell was going on had broken down. If there was some pest killing all the chickens in the vicinity, you couldn’t rely on the old lady near the water hole anymore. If there was an impending band of roving asshats that was going to rob people you wouldn’t find out until they were on top of you. In this new, hypercompressed, always-on city, you couldn’t even see two feet in front of you, the smell and the dust and the ash and the pervasive rumors and bullshit so thick you needed someone to parse some of this stuff and tell you, well, simply, what you just had to know.

This is the why.

See, there is a why. There's always a why. Always a reason. This one in particular started to get left behind pretty early. One day, astute newspapermen (and yes, they were all men back then, trust me) realized that the more blood you talked about, the more papers you sold. People seemed to like it. Morbid curiosity? A twisted new way of finding entertainment? Perhaps finding that your life wasn't really that bad because, look, there's that other guy that just got run over by a truck and now can only eat with a straw, look at that, how horrible, and how's he progressing, that saint, in his recovery? He's a hero, that's what…

Ok, I went too far. I said I wouldn’t get dark, or cynical, moving on.

The point is, the news had a point. There was a why. There was a reason. Over time, the reason mutated, changed. Over time, we added the profit motive (well, that one jumped on board pretty much at the beginning, but the notion that there's more profit margin in abstract thought sold for entertainment than in manufacturing something still hasn't quite gotten through to some people).


Over time… over time, though, we just started doing it… well, because. Why do we watch the news? To be informed, to be good citizens, etcetera. True. True. To do our jobs… maybe, for some of us. But a lot of it, a lot of what we consume as “news” is really just filler. Useless. Even significant and perhaps tragic events are not really important to a lot of us, sad as that may sound. I’m not even going to go into the issue of how we selectively decide to freak out about the same thing depending on where it happens. I am talking about the objective value of the news, for example…

A small building collapses somewhere in St. Louis, MO. A few people killed and a few injured. No foul play… no crazy terrorists. Just an accident. Bad plumbing. Ok. Well, surely everyone in the vicinity should know about it, and gathering information and summarizing it for them is an eminently useful and worthwhile thing to do. They may have to find out if there are family or friends hurt. City officials might have to revisit their building codes, or something. Perhaps, even, the whole State or the whole country might have to have a discussion about structural integrity and such. But the actual, specific continued coverage of the collapse, the victims, the bystanders, is irrelevant to anyone outside a 50-mile radius of that building. And yet, the "news" would spend two days with wall-to-wall coverage, and the rescue, and so forth. Charities would be set up. You know how it'd go.

But it shouldn't be this way. There's local news, and there's global news, in that there are things that affect us locally ("the crossing two miles down from my house is flooded"), nationally ("a few drops of rain in Chicago freeze air traffic", heh), or world-wide ("Aliens land on White House lawn, demand rent-controlled apartment in NYC").


This wasn't a conspiracy — this is something that evolved naturally. We were doing one thing, and we kept doing it, and now we're still doing it even if it is not really useful and even if, sometimes, it starts to hurt us. Because I'm all for entertainment, but there's really no need for me to be informed about every tragedy in the world just because CNN has to fill its front page. There's a lot of people in the world. Bad stuff happens all the time. So I don't want to hear about a train crash in Thailand, horrible as that might be. I choose not to look at that, not to be uninformed, not because I don't care, but because it serves no purpose. Whatever "good" can come from this information (safety concerns? worrying about trains I take? donating money? what?) surely is diluted by the obvious immediate "bad" that comes from having to process this stuff non-stop.

So it's not about the news, or about CNN, although they do probably need some help to pull out of that sense of fully saturated colors tinted with desperation that I get from their every broadcast: "pleeeassse watch us! we'll have monkeys playing poker! Squirrels in space!!! ANYTHING!!!"

I’m just saying that we have accumulated lots and lots of habits and systems and gadgets that are no longer necessary. They may need replacement, or rethinking. They may need just to be removed.

The opportunities are everywhere. We just have to look for purpose in what we do, in what we have.

Inertia is a powerful force, but it’s not all-powerful. Questioning, finding purpose in the things we do, is clarifying, and while it sometimes leads to uncomfortable problems that don’t have easy solutions, it invariably ends up being a useful exercise.

Just give it a try.

encryption is bad news for bad guys! (and other things we should keep in mind)

Once again, a senseless act of violence shocks us and enrages us. Prevention becomes a hot topic, and we end up having a familiar “debate” about technology, surveillance, and encryption, more specifically, how to either eliminate or weaken encryption. Other topics are mentioned in passing (somehow, gun control is not), but ‘controlling’ encryption seems to win the day as The Thing That Apparently Would Solve A Lot Of Problems.

However, as of now, there is zero indication that encryption played any part in preventing security services from stopping the Paris attacks. There wasn’t a message with a date and names and a time, sitting in front of a group of detectives, encrypted.

I feel obligated to mention this, even if it should be obvious by now. “If only we could know what they’re saying,” sounds reasonable. It ignores the fact that you need incredibly invasive, massive non-stop surveillance of everyone, but setting that tiny detail aside it comes back to the (flawed) argument of “you don’t need encryption if you have nothing to hide.”

First off, needing to hide something doesn’t mean you’re a criminal. Setting aside our own intelligence and military services, this is what keeps Chinese dissidents alive (to use one of a myriad examples), and I’m sure there are a few kids growing up in ISIS-controlled areas that are using encrypted channels to pass along books, movies (plus, probably some porn), or to discuss how to get the hell out of there. In less extreme territory, hiding is instrumental in many areas of everyday life, say, planning surprise parties. Selective disclosure is a necessary component in human interaction. 

There’s only one type of debate we should be having about encryption, and it is how to make it easier to use, more widespread. How to make it better, not how to weaken it.

Because encryption can't be uninvented, and, moreover, widespread secure communications don't help criminals or terrorists; they hurt them.

(1) Encryption can’t be uninvented

A typical first-line-of-defense argument for encryption goes: "eliminating or weakening encryption does nothing to prevent criminals or terrorists from using encryption of their own." Any criminals or terrorists (from now on, "bad guys") with minimal smarts would know how to add their own encryption layer to any standard communication channel. The only bad guys you'd catch would be either lazy or stupid.

"Aha!" says the enthusiastic anti-encryption advocate. "That's why we need to make sure all the algorithms contain backdoors." What about all the books that describe these algorithms before the backdoors? Would we erase the memory of the millions of programmers, mathematicians, or anyone that's ever learned about this? And couldn't the backdoors be used against us? Also, get this: you don't even need a computer to encrypt messages! With just pen and paper you can effectively use any number of cyphers that in some cases are quite strong (e.g., one-time pads, or multilayered substitution cyphers, etc.) Shocking, I know.
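To underline how low-tech this is, here is a minimal sketch of a one-time pad. The same letter-shifting works with a pencil and a sheet of random key letters, and with a truly random, never-reused pad it is unbreakable.

```python
import secrets
import string

ALPHABET = string.ascii_uppercase

def make_pad(length):
    # One truly random key letter per message letter; never reuse a pad.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def encrypt(plaintext, pad):
    # Shift each letter forward by its key letter (mod 26), the same
    # operation you would do by hand with a lookup table.
    return "".join(
        ALPHABET[(ALPHABET.index(p) + ALPHABET.index(k)) % 26]
        for p, k in zip(plaintext, pad)
    )

def decrypt(ciphertext, pad):
    # Shift each letter back by the same key letter.
    return "".join(
        ALPHABET[(ALPHABET.index(c) - ALPHABET.index(k)) % 26]
        for c, k in zip(ciphertext, pad)
    )

message = "MEETATNOON"
pad = make_pad(len(message))
print(encrypt(message, pad))                # gibberish without the pad
print(decrypt(encrypt(message, pad), pad))  # MEETATNOON
```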

The only way to “stop” encryption from being used by bad guys would be to uninvent it. Which, hopefully, we can all agree is impossible.

Then there’s the positive argument for encryption. It’s good for us, and bad for bad guys.

(2) Herd immunity, or, Encryption is bad for bad guys

Maybe we in technology haven't done a good job of explaining this to law enforcement or politicians, or the public at large, but there's a second, more powerful argument that we often fail to make: widespread secure & encrypted communications and data storage channels hinder, not help, criminals, terrorists, and other assorted psychos.

That’s right. Secure storage and communications hurts bad guys.

Why? Simple: because bad guys, to operate, to prepare, obtain resources, or plan, need three things: money, time, and anonymity. They obtain these by leeching off their surroundings.

More and more frequently terrorists finance their activities with cybercrime. Stealing identities and credit cards, phishing attacks, and so forth. If everyone’s communications and storage (not just individuals but also banks, stores, etc) was always encrypted and more secure, criminals would have a much harder time financing their operations.

That is, to operate with fewer restrictions bad guys need to be able to exploit their surroundings. The more protected their surroundings are, the more exposed they are. More security and encryption also mean it's harder to obtain a fake passport, create a fake identity, or steal someone else's.

Epidemiologists have a term for this: Herd Immunity. Vaccines work only when in widespread use, for two reasons: first, the higher the percentage of immune individuals, the fewer avenues a disease has to spread; second, and as importantly, the lower the probability that a non-immune individual will interact with an infected one.

More advanced encryption and security also helps police agencies and security services. If the bad guys can’t get into your network or spy on your activities, you have more of a chance of catching them. The first beneficiaries of strong encryption are the very agencies tasked with defending us.


Dictatorships and other oppressive regimes hate encryption for a reason. Secure, widespread communication also strengthens public discourse. It makes communication channels harder to attack, allowing the free flow of information to continue in the face of ideologies that want nothing less than to shut it down and lock everyone into a single way of thinking, acting, and behaving.


(Postscript) Dear media: to have a real conversation we need your help, so get a grip and calm down. 

The focus on encryption is part of looking for quick fixes when there aren't any. In our fear and grief we demand answers and "safety," even to a degree that is clearly not possible. We cannot be 100% safe. I think people in general are pretty reasonable, and know this. But it's kind of hard to stay that way when we are surrounded by news reports that have all the subtlety and balance of a chicken running around with its head cut off. We are told that the "mastermind" (or "architect") of the attack is still at large. We hear of "an elaborate international terror operation." On day 3, the freakout seems to be intensifying, so much so that a reporter asks the President of the United States: "Why Can't We Take Out These Bastards?"

The Paris attacks were perpetrated by a bunch of suicidal murderers with alarm clocks, a few rifles, bullets, and some explosives. Their "plan" amounted to synchronizing their clocks and then starting to fire and/or blowing themselves up on a given date at roughly the same time, a time chosen for maximum damage.

"Mastermind"? Reporters need to take a deep breath and put things in context. This wasn't complicated enough to be "masterminded." We're not dealing with an ultra-sophisticated criminal organization headed by a Bond villain ready to deploy a doomsday device. This is a bunch of thugs with wristwatches and Soviet-era rifles. They are lethal, and we need to fight back. But they are not an existential threat to our civilization. We are stronger than that.

With less of an apocalyptic tone to the reporting we could have a more reasonable conversation about the very real and complex reality behind all of this. Naive? Maybe. Still —  it doesn’t hurt to mention it.



all your tech are belong to us: media in a world of technology as the dominant force

Pop quiz: who held the monopoly on radio equipment production in the US in 1918?

General Electric? The Marconi Company?

Radio Shack? (Jk!) :)

How about the US Military?

The US entered World War I “officially” in early April, 1917. Determined to control a technology of strategic importance to the war effort, the Federal Government took over radio-related patents owned by companies in the US and gave the monopoly of manufacturing of radio equipment to the Armed Forces — which at the time included the Army, the Navy, the Marine Corps, and the Coast Guard.

This takeover was short-lived (ending in late 1918) but it would have profound effects on how the industry organized in the years and decades that followed. The War and Navy departments, intent on keeping the technology under some form of US control, arranged for General Electric to acquire the American Marconi company and secure the patents involved.

The result was the Radio Corporation of America, RCA, a public company whose controlling interest was owned by GE.

Newspapers had been vertically integrated since their inception. The technology required for printing presses and the distribution networks involved in delivering the product were all "proprietary," in that they were controlled and evolved by the newspapers themselves. Even if the printing press had other uses, you couldn't easily repurpose a newspaper printing press to print books, or vice versa, and even if you could secure a printing press for newspapers (a massive investment) you could not hope to easily recreate the distribution network required to get the newspaper into the hands of consumers.

This vertical integration resulted in a combination of natural and artificial barriers to entry that would let a few key players, most notably William Randolph Hearst, leverage the resulting common economic, distribution and technological foundation to effect a consolidation in the market without engendering significant opposition. Later, movie studios relied on a similar set of controls over the technology employed — they didn't manufacture their own cameras, but by controlling creation and distribution, and with their aggregate purchasing power, they could dictate what technology was viable and how it was to be used.

Radio, early on, presented the possibility of a revolution in this regard. It could have allowed consumers to also be creators (at least on a small scale). The ability to broadcast was restricted by the size and power of the transmitter at your disposal, and you could start small. It was the first opportunity for a new medium to have the evolution of the underlying technology decoupled from the content it carried, but WWI and the intervention of the US government ensured this would not come to pass. The deal that resulted in the creation of RCA created, in effect, a vertical integration in radio similar to that of other media (in Britain, a pioneer of broadcast radio and later TV, the government had been largely in control from the beginning through the BBC, and so radio already was "vertically integrated").

This is a way of thinking that became embedded into how Media companies operated.

RCA went on to be at the center of the creation of the two other subsequent major media markets of the 20th century: music and television, and in both cases it extended the notion of technology as subservient to the content that it carried.

For every major new medium that appeared until late in the 20th century, media companies could control the technology that they depended on.

Over time, even as technology development broke off into its own path and started to evolve separately from media, media companies retained control of both the standards and the adoption rate (black and white to color, vinyl to CD, SD to HD, etc.). Media companies selected new technologies when and how they wanted, and they set the terms of use, the price, and the pace of its deployment. Consumers could only consume. By retaining control of the evolution of the technology through implicit control of standards, and explicit control of the distribution channels, they could retain overall control of the medium. Slowly, though, the same technology started to be used for more than one thing, and control started to slip away.

Then the Internet came along.

The great media/technology decoupling

TV, radio, CDs, even newspapers are all “platforms” in a technical sense, even if closed ones, in that they provide a set of common standards and distribution channels for information. In this way, the Internet appears to be “just another platform” through which media companies must deliver their content. This has led to the view that we are simply going through a transition not unlike that of, say, Vinyl to CDs, or Radio to TV.

That media companies can’t control the technology as they used to is clear. What is less clear is that this is a difference of kind, not of degree.

CNN can have a website, but it can neither control the technology standards or software used to build it, nor ensure that the introduction of a certain technology (say, Adobe Flash) will be followed by a period of stability long enough to recoup the investment required to use it. NBC can post shows online, but it can't prevent millions of people from downloading the show without advertisement through other channels. Universal Studios can provide a digital copy of a movie six months after its release, but in the meantime everyone who wanted to watch it already has, often without paying for it. These effects and many more are plainly visible, and as a result, prophecies involving the death of TV, the music industry, newspapers, movie studios, or radio are common.

The diagnoses are varied and they tend to focus, incorrectly, on the revenue side of the equation: it’s the media companies’ business models which are antiquated. They don’t know how to monetize. Piracy is killing them. They can’t (or won’t) adapt to new demands and therefore are too expensive to operate. Long-standing contracts get in the way (e.g. Premium channels & cable providers). The traditional business models that supported mass media throughout their existence are being made increasingly ineffective by the radically different dynamics created by online audiences, ease of copying and lack of ability to create scarcity, which drive down prices.

All of these are real problems but none of them is insurmountable, and indeed many media concerns are making progress in fits and starts in these areas and finding new sources of revenue in the online world. The fundamental issue is that control has shifted, irreversibly, out of the hands of the media companies.

For the first time in the history of mass media, technology evolution has become largely decoupled from the media that uses it, and, as importantly, it has become valuable in and of itself. This has completely inverted the power structure in which media operated, with media relegated to just another actor in a larger stage. For media companies, lack of control of the information channel used is behind each and every instance of a crack in the edifice that has supported their evolution, their profits, and their power.

Until the appearance of the Internet it was the media companies that dictated the evolution of the technology behind the medium and, as critically, the distribution channel. Since the mid-1990s, media companies have tried and generally failed to insert themselves as a force of control in the information landscape created by the digitization of media and the Internet. Like radio and TV, the Internet includes a built-in “distribution channel,” but unlike them it does not lend itself to natural monopolies over that channel apportioned by the government. Like other media, the Internet depends on standards and devices to access it, but unlike other media those standards and devices are controlled, evolved, and manufactured by companies that see media as just another element of their platforms, not as the driver of their existence.

This shift in control over technology standards, manufacture, demand, and evolution is without precedent, and it is the central factor driving the crisis media has found itself in since the early ’90s.

Now what?

Implicitly or explicitly, what media companies are trying to do with every new initiative and every effort (DRM, new formats, paywalls, apps) is to regain control of the platform. Given the actors that now control technology, it becomes clear why they are not succeeding and what they must do to adapt.

In the past, they may have attempted to purchase the companies involved in technology, fund competitors, and the like. Some of this is going on today, with the foremost examples being Hulu and Ultraviolet. As with past technological shifts, media companies have also resorted to lobbying and the courts to attempt to maintain control, but this too is a losing proposition long-term. Trying to wrest control of technology by lawsuits that address whatever the offending technology is at any given moment, when technology itself is evolving, advancing, and expanding so quickly, is like trying to empty the ocean by using a spoon.

These attempts are not effective because the real cause of the shift in power that has occurred is beyond their control. It is systemic.

In a world where the market capitalization of the technology industry is an order of magnitude or more larger than that of the media companies (and where, incidentally, a single company, Apple, has more cash on hand than the market value of all traditional media companies combined), it should be obvious that the battle for economic dominance has been lost. Temporary victories, if any, only serve to obfuscate that fact.

The media companies that survive the current upheaval will be those that accept their new role in this emerging ecosystem: that of an important player, but not a dominant one (this is probably the toughest part). There is, and will continue to be, demand for professionally produced content.

Whenever people in a production company, or a studio, or magazine, find themselves trying to figure out which technology is better for the business, they’re having the wrong conversation. Technology should now be directed only by the needs of creation, and at the service of content.

And everyone needs to adapt to this new reality, accept it, and move on… or fall, slowly but surely, into irrelevance.

When All You Have Is A Hammer

(x-post to medium)

Today’s software traps people in metaphors and abstractions that are antiquated, inefficient, or, simply, wrong. New apps appear daily and with a few exceptions they simply cement the dominance of these broken metaphors into the future.

Uncharacteristically, I’m going to skip a digression on the causes of this and leave that for another time (*cough*UI guidelines*cough*), and go straight to the consequences, which must begin with identifying the problem. I could point at the “desktop metaphor,” including its influence on the idea of “user interfaces,” as the source of many problems (people, not users!), but I won’t.

I’ll just focus on a simple question: Can you print it?

If You Can Print It…

Most of the metaphors and abstractions that deal with “old” media, content, and data simply haven’t evolved beyond being a souped-up digital version of their real-world counterparts. For example: you could print the homepage of the New York Times and it wouldn’t look much different from the paper version.

You could take an email thread, or a calendar and print it.

You can print your address book.

Consider: if you showed any of these printouts to someone from the early 20th century, they would have no problem recognizing them, and the only thing they might find hard to believe would be how good the typesetting is (or they’d be surprised by the pervasive use of color).

Our thinking in some areas has advanced little beyond creating a digital replica of industrial-era mechanisms and ideas. Not only can these things be printed: some data would be lost in the process (e.g. IP routing headers), but little to no information would be. With few exceptions (say, alarms built into a calendar), they can be printed without significant loss of fidelity or even functionality.

On the flip side, you could print a Facebook profile page, but once put in these terms, we can see that the static, printed page does not really replicate what a profile page is: something deeply interactive, engaging, and more than what you can just see on the surface.

Similarly, you could actually take all these printouts and organize them in basically the same way you organize them on your computer or online (with physical folders and shelves and desks) and you’d get basically the same functionality. Basic keyword searching is the main feature you’d lose, but as we all know from daily experience, for anything that isn’t Google, keyword search can be a hit-and-miss proposition.

This translation of a printed product into a simple digital version (albeit sprinkled with hyperlinks) has significant effects on how we think about information itself, placing constraints on how software works, from the highest levels of interaction to the lowest levels of code.

These constraints express themselves as a pervasive lack of context: how pieces of data relate to each other, how they relate to our environment, when we did something, with whom, and so on.

Lacking context, and because the “print-to-digital” translation has been so literal in so many cases, we look at data as text or media with little context or meaning attached, which leads modern software to resort to a one-word answer for anything that requires us to find a specific piece of information.
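To make that concrete, here’s a tiny, purely hypothetical sketch (the names and fields are mine, not any real API): the same note modeled first as the flat, print-friendly text we usually get, and then carrying the kind of context that would let software find it by who, when, and what it relates to, rather than by keywords alone.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

# A "printable" record: nothing but text. The only way back to it later
# is a keyword search over the body.
@dataclass
class FlatNote:
    body: str

# The same record carrying context: who it involves, when and where it was
# created, and what other items it relates to. Any of these can drive
# retrieval without typing a single keyword.
@dataclass
class ContextualNote:
    body: str
    created_at: datetime
    people: List[str] = field(default_factory=list)        # who was involved
    location: Optional[Tuple[float, float]] = None          # where it happened
    related_ids: List[str] = field(default_factory=list)    # linked events, threads, files
```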


Apparently, There Is One Solution To All Our Problems

Spend a moment and look at the different websites and apps on your desktop, your phone, or your tablet. Search fields, embodied in the iconic (pun intended) magnifying glass, are everywhere these days. The screenshot below includes common services and apps, used daily by hundreds of millions of people.

Search is everywhere

On the web, a website that doesn’t include a keyword-based search function is rare, and that’s before you even count the browser address bar, which has by now turned into a search field as well.

While the screenshot is of OS X, you could take similar screenshots on Windows or Linux. On phones and tablets, the only difference is that we see (typically) only one app at a time.

Other data organization tools, like tags and folders, have also increased our ability to get to data by generally flattening the space we search through.

The fact that search fields appear to be reproducing at an alarming rate is not a good sign. That’s not because search is inherently bad (or inherently good, for that matter). It’s a feature that tries to address real problems, but it cures a symptom rather than the underlying cause. It’s like a pilot whose helmet is one size too small taking aspirin for the headache instead of getting a bigger helmet.

Whether on apps or on the web, these search engines and features are good at returning a lot of results quickly. But that’s not enough, and it’s definitely not what we need.

Because searching very fast through a lot of data is not the same as getting quickly to the right information.

Et tu, Search?

Score one for iconography: search behaves, more often than not, exactly like a magnifying glass, as powerful and indiscriminate in its amplification as a microscope; only we’re not all detectives looking for clues or biologists zooming in on cell structure. What we need is something closer to a sieve than a magnifying glass: something that naturally gives us what we care about while filtering out what we don’t need.

Inspector Clouseau, possibly searching for a document

Superficially, the solution to this problem appears to be “better search”, but it is not.

Search as we understand it today will be part of the solution, a sort of escape hatch to be used when more appropriate mechanisms fail. Building those “appropriate mechanisms,” however, requires confronting the fact that software is, by and large, utterly unaware of both context and the concept of time beyond their most primitive forms: more likely to impose on us whatever it thinks is the “proper” way to do something than to adapt to how we already work and think, and frequently focused on recency at the expense of everything else. Today’s software and services fail both to associate enough contextual and chronological information with our data and to effectively leverage the contextual data that is available once we are retrieving or exploring.

Meaning what, exactly? Consider, for example, the specific case of trying to find the address of a place where you’re supposed to meet a friend in an hour. You created a calendar entry but neglected to enter the location. With today’s tools, you’d have to search through email for your friend’s name, and more likely than not you’d get dozens of email threads that have nothing to do with the meeting. If software used context even in simple ways, you could, as a primitive example, just drag the calendar entry and drop it on top of a list of emails, which the software would interpret as wanting to filter emails down to those around the date on which the entry was created. The number of emails matching that condition would be relatively small. Drag and drop your friend’s avatar from a list of contacts and more often than not you’d end up staring at an email thread from around that date with the information you need, no keyword search necessary.
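Here’s a minimal sketch of that interaction, with hypothetical Email and CalendarEntry models (none of this is a real mail or calendar API): dropping the entry on the mailbox becomes a date-window filter around when the entry was created, and dropping the avatar becomes a participant filter.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

# Hypothetical, minimal models -- not any real mail or calendar API.
@dataclass
class Email:
    subject: str
    participants: List[str]
    sent_at: datetime

@dataclass
class CalendarEntry:
    title: str
    created_at: datetime
    location: Optional[str] = None

def around_entry(emails: List[Email], entry: CalendarEntry,
                 window_days: int = 3) -> List[Email]:
    """'Drop the calendar entry on the mailbox': keep only emails sent
    within a few days of when the entry was created."""
    lo = entry.created_at - timedelta(days=window_days)
    hi = entry.created_at + timedelta(days=window_days)
    return [e for e in emails if lo <= e.sent_at <= hi]

def with_person(emails: List[Email], person: str) -> List[Email]:
    """'Drop the contact's avatar on the result': keep only emails that
    person participated in."""
    return [e for e in emails if person in e.participants]

# Chaining the two gestures narrows dozens of threads to a handful,
# with no keywords typed:
# candidates = with_person(around_entry(all_emails, entry), "Alex")
```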

In other words: search is a crutch. Even when we must resort to exhaustive searching, we don’t need a tool to search as much as we need a tool to find. Reducing every possible type of information need to “searching,” mostly with keywords (or whatever can be typed into a text field), is an inadequate metaphor to apply to a world that is increasingly being digitized and dumped wholesale into phones, laptops, and servers.

We need software that understands the world and that adapts, most importantly, to us. People. It’s not as far-fetched or difficult as it sounds.

2 idiots, 1 keyboard (or: How I Learned to Stop Worrying and Love Mr. Robot)

I’d rename it “The Three Stooges in Half-Wits at Work” if not for the fact that there are four of them. We could say the sandwich idiot doesn’t count, though he does do a good job with his line (“Is that a videogame?”), while extra points go to the “facepalm” solution of disconnecting a terminal to stop someone from hacking a server. It’s so simple! Why didn’t I think of that before!?!?!

Mr. Robot would have to go 100 seasons before it starts to balance out the stupidity that shows like NCIS, CSI and countless others have perpetrated on brains re: programming/ops/etc.

An alternative for writers who insist on not doing simple things, like talking to the computer person who keeps your studio from imploding: keep the stupid, but make it hilariously, over-the-top funny, like so:

We’ll count it even if it’s unintentional. That’s how nice we computer people are.

PS: and, btw, this, this, is why no one gets to complain about Mr. Robot’s shortcomings.

not evenly distributed, indeed

The future is already here – it’s just not very evenly distributed.

— William Gibson (multiple sources)

The speed at which digital content grows (and at which non-digital content has been digitized) has quickly outpaced the ability of systems to aid us in processing it in a meaningful way, which is why we are stuck living in a land of Lost Files, Trending Topics and Viral Videos.

Most of those systems use centuries-old organizational concepts like Folders and Libraries, or rigid hierarchical structures that are perfectly reasonable when everything exists on paper but grossly inadequate, not to mention wasteful and outdated, in the digital world. Once digitized, information is infinitely malleable, easily changeable, and can be referenced with a degree of precision and at scales that are simply impossible to replicate in the physical world; we should be leveraging those capabilities far more than we do today outside of a handful of new services.
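As one illustration of that kind of precision (the structure below is purely hypothetical, a sketch rather than any existing standard), a digital reference can pin down not just a document but a specific revision of it and a specific span of characters within that revision, something a physical citation can only approximate:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# A purely illustrative reference: document, revision, and character span.
@dataclass(frozen=True)
class SpanRef:
    doc_id: str      # stable identifier for the document
    revision: str    # which version of the document
    start: int       # first character of the span (inclusive)
    end: int         # end of the span (exclusive)

def resolve(ref: SpanRef, store: Dict[Tuple[str, str], str]) -> str:
    """Look the span up in an in-memory {(doc_id, revision): text} store."""
    return store[(ref.doc_id, ref.revision)][ref.start:ref.end]

# Example: quote the first fourteen characters of revision "2" of a document.
store = {("essay-42", "2"): "Bits are bits, communication is communication."}
print(resolve(SpanRef("essay-42", "2", 0, 14)))   # -> "Bits are bits,"
```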

Doing this effectively would require many changes across the stack, from protocols, to interfaces, to storage mechanisms, maybe formats. This certainly sounds pretty disruptive, but is it really? Or is there a precedent for this type of change?

What We Can Learn From The Natives

“Digital native” systems like social networks and other tools and services created in the last decade continue to evolve at an increasingly rapid pace around completely new mechanisms of information creation and consumption, so a good question to ask is whether it is those services that will simply take over as the primary way in which we interact with information.

Social media, services, and apps are “native” in that they are generally unbounded by the constraints of the old, physical-world paradigms: they simply could not exist before the arrival of the Internet, high-speed networks, and powerful, portable personal computing devices. They leverage (to varying degrees) a few of the ideas I’ve mentioned in earlier posts: context, typically in the form of time and/or geographical location; an understanding of how people interact and relate to each other; and a strong sense of time and semantics around the data they consume and create.

Twitter, Facebook, and others include the concept of time as a sorting mechanism and, at best, as another way to filter search. While this type of functionality is part of what we are discussing, it is not all of it, and just as “time” is not the only variable we need to consider, neither will social media replace all other types of media. Each social media service is focused on specific functionality, needs, and wants. Each has its own unique ‘social contract.’

Social media is but one example of the kind of qualitative jumps in functionality and capabilities that are possible when we leverage context, even in small ways. They are proof positive that people respond to these ideas, but they are also limited — specific expressions of the use of context within small islands of functionality of the larger world of data and information that we interact with.

Back on topic, the next question is: did the ‘digital natives’ succeed in part because they embraced the old systems and structures? And if so, wouldn’t that mean those systems are still relevant? The answer to both questions is: not really.

Post Hoc, Ergo Propter Hoc

Facebook and Twitter (to name just two) are examples of wildly successful new services that, when we look closely, have succeeded not because of the old hierarchical paradigms embedded into the foundations of computers and the Internet, but in spite of them. To be able to grow they have in fact left behind most of the recognizable elements on which the early Internet was built. Their server-side infrastructures are extremely complex and not even remotely close to what we’d have called a “website” barely a decade ago. On the client side, they are really full-fledged applications that don’t exist in the context of the Web as a mechanism for delivering content; new services use web browsers as multi-platform runtime environments, which is also why, as they transition to mobile devices, more of their usage happens in their own apps, in environments they fully control. They have achieved this through massive investments, on the order of hundreds of millions or billions of dollars, and enormous effort.

This has also carried its cost for the rest of the Web in terms of interconnectivity. These services and systems are in the Web, but not of it. They relate to it through tightly controlled APIs, even as they happily import data from other services. In some respects, they behave like a black hole of data, and they are often criticized for it.

This is usually considered to be a business decision — a need to keep control of their data and thus control of the future, sometimes with ominous undertones attached, and perhaps they could do more to open up their ability to interface with other services in this regard.

But there is another factor that is often overlooked and that plays a role at least as important. These services’ information graphs, structures, and patterns of interaction are qualitatively different from, and far removed from, the basic mechanisms the Web supports. For example, some of Facebook’s data streams can’t really be shared using the primitive mechanisms available through the hierarchical, fixed structures that form the shared foundation of the Internet: simple HTML, URLs, and open access. Whereas before you could attach a permalink to most pieces of content, some content within Facebook is intrinsically part of a stream of data that crumbles if you start to tease it apart, or that requires you to be signed in to verify whether you have access to it, how it relates to other people and content on the site, and so on. The same applies to other modern services. Wikipedia and Google have both managed to straddle this divide to some degree, Wikipedia by retaining extremely simple output structures, and Google by maintaining some ability to reference portions of its services through URLs, but this is quickly changing as Google+ is embedded more deeply throughout the core service.

Skype is an example of a system that creates a new layer of routing to deliver a service in a way that wasn’t possible before, while still retaining the ability to connect to its “old world” equivalent (POTS) through hybrid elements in its infrastructure. Because Skype never ran in a web browser, we tend not to think of it as “part of the Web,” something we do for Facebook, Twitter, and others, but that’s a mere historical accident of when it was built and the capabilities of browsers at the time. Skype has as much of a social network as Facebook does, but because it deals mostly with real-time communication we don’t put it in the same category as Facebook, even though there’s no real reason for that.

Bits are bits, communication is communication.

Old standards become overloaded and strained to cover every possible need or function (*cough*HTML5*cough*). Fear drives this: fear that new systems, instead of helping, would end up being counterproductive; concerns about balkanization, incompatibilities, and so forth. Those concerns are misplaced.

The fact is that new services have to discard most of the internal models and technology stacks (and many external ones) that the Internet supposedly depends on. They have to frequently resort to a “low fidelity” version of what they offer to connect to the Web in terms it can “understand.” In the past we have called these systems and services “walled gardens.” When a bunch of these “walled gardens” are used by 10, 20, or 30% of the population of the planet, we’re not talking about gardens anymore. You can’t hold a billion plants and trees in your backyard.

The balkanization of the Internet has already happened.

New approaches are already here.

They’re just not evenly distributed yet.
