The future is already here – it’s just not very evenly distributed.
— William Gibson (multiple sources)
The speed at which digital content grows (and at which non-digital content has been digitized) has quickly outpaced the ability of our systems to help us process it in a meaningful way, which is why we are stuck living in a land of Lost Files, Trending Topics, and Viral Videos.
Most of those systems use centuries-old organizational concepts like Folders and Libraries, or rigid hierarchical structures that are perfectly reasonable when everything exists on paper, but that are grossly inadequate, not to mention wasteful and outdated, in the digital world. Once information is digital, it is infinitely malleable, easily changeable, and can be referenced with a degree of precision and at scales that are simply impossible to replicate in the physical world, and we should be leveraging those capabilities far more than we do today outside of new services.
Doing this effectively would require many changes across the stack, from protocols to interfaces to storage mechanisms, and perhaps even formats. This certainly sounds pretty disruptive, but is it really? Or is there a precedent for this type of change?
What We Can Learn From The Natives
“Digital native” systems, like social networks and other tools and services created in the last decade, continue to evolve at an increasingly rapid pace around completely new mechanisms of information creation and consumption. A good question to ask, then, is whether it is those services that will simply take over as the primary way in which we interact with information.
Social media, services, and apps are “native” in that they are generally unbounded by the constraints of old, physical-world paradigms: they simply could not exist before the arrival of the Internet, high-speed networks, and powerful, portable personal computing devices. They leverage (to varying degrees) a few of the ideas I’ve mentioned in earlier posts: context, typically in the form of time and/or geographic location; an understanding of how people interact and relate to each other; and a strong sense of time and semantics around the data they consume and create.
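To make “context” a bit more concrete, here is a minimal sketch (in Python, with entirely hypothetical field names; no real service’s data model is implied) of how a native service might represent a piece of content, with the payload inseparable from its context:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextualItem:
    """A hypothetical content item as a 'native' service might model it."""
    body: str                               # the content itself
    author: str                             # who created it
    created_at: datetime                    # a strong sense of time
    location: tuple[float, float] | None    # optional geographic context
    audience: set[str] = field(default_factory=set)  # who may see it
    in_reply_to: str | None = None          # relationship to other items
    topics: set[str] = field(default_factory=set)    # lightweight semantics

item = ContextualItem(
    body="Shipping the new release today!",
    author="alice",
    created_at=datetime.now(timezone.utc),
    location=(37.77, -122.42),              # roughly San Francisco
    audience={"followers"},
    in_reply_to=None,
    topics={"release", "software"},
)
```

A folder hierarchy has nowhere meaningful to put most of these fields; a native service treats them as the primary keys of the whole experience.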
Twitter, Facebook, and others include the concept of time as a sorting mechanism and, at best, as another way to filter search. While this type of functionality is part of what we are discussing, it is not all of what we are talking about, and just as “time” is not the only variable we need to consider, neither will social media replace all other types of media. Each social media service is focused on specific functionality, needs, and wants. Each has its own unique ‘social contract.’
Social media is but one example of the kind of qualitative jump in functionality and capabilities that is possible when we leverage context, even in small ways. These services are proof positive that people respond to these ideas, but they are also limited: specific expressions of the use of context within small islands of functionality in the larger world of data and information that we interact with.
Back on topic, the next question is: did ‘digital natives’ succeed in part because they embraced the old systems and structures? And if so, wouldn’t that mean those systems are still relevant? The answer to both questions is: not really.
Post Hoc, Ergo Propter Hoc
Facebook and Twitter (to name just two) are examples of wildly successful new services that, when we look closely, have succeeded not because of the old hierarchical paradigms embedded into the foundations of computers and the Internet, but in spite of them. To be able to grow, they have in fact left behind most of the recognizable elements on which the early Internet was built. Their server-side infrastructures are extremely complex and not even remotely close to what we’d have called a “website” barely a decade ago. On the client side, they are really full-fledged applications that don’t exist in the context of the Web as a mechanism for delivering content. New services use web browsers as multi-platform runtime environments, which is also why, as they transition to mobile devices, more of their usage happens in their own apps, in environments they fully control. They have achieved this thanks to massive investments, on the order of hundreds of millions or billions of dollars, and enormous effort.
This has also carried a cost for the rest of the Web in terms of interconnectivity. These services and systems are in the Web, but not of it. They relate to it through tightly controlled APIs, even as they happily import data from other services. In some respects they behave like black holes of data, and they are often criticized for it.
This is usually considered to be a business decision: a need to keep control of their data, and thus of their future, sometimes with ominous undertones attached; perhaps they could indeed do more to open up their ability to interface with other services.
But there is another factor that is often overlooked and that plays a role that is as important, or more so. These services’ information graphs, structures, and patterns of interaction are qualitatively different from, and far removed from, the basic mechanisms that the Web supports. For example, some of Facebook’s data streams can’t really be shared using the primitive mechanisms available through the hierarchical, fixed structures that form the shared foundation of the Internet: simple HTML, URLs, and open access. Whereas before you could attach a permalink to most pieces of content, some pieces of content within Facebook are intrinsically part of a stream of data that crumbles if you start to tease it apart, or that requires you to be signed in so the service can verify whether you have access to it, how it relates to other people and content on the site, and so on. The same applies to other modern services. Wikipedia and Google have both managed to straddle this divide to some degree, Wikipedia by retaining extremely simple output structures, and Google by maintaining some ability to reference portions of its services through URLs, but this is quickly changing as Google+ is embedded more deeply throughout the core service.
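To make the contrast concrete, here is a minimal sketch (the URLs, endpoint, and token below are hypothetical placeholders, not any real service’s API) of the difference between dereferencing an open permalink and requesting an item from a closed, per-viewer stream:

```python
import urllib.request

# A classic web permalink: one stable URL, a plain HTTP GET, open access.
# Anyone, including a crawler or an archive, can dereference it with no
# identity and no session state. (example.org stands in for a real page.)
with urllib.request.urlopen("https://example.org/") as resp:
    html = resp.read()  # a self-contained document; the reference holds

# A stream item on a closed service: the same GET is meaningless without
# an authenticated, viewer-specific session. (Endpoint and bearer token
# are hypothetical.)
req = urllib.request.Request(
    "https://api.social.example/v1/stream/items/12345",
    headers={"Authorization": "Bearer <viewer-specific-token>"},
)
# What would come back is not a document but a fragment of a per-viewer
# stream: its meaning depends on who is asking, their relationships, and
# privacy rules, so it can't be linked, archived, or crawled like an
# open page.
```

The first request is essentially the whole contract of the open Web; the second only makes sense inside the service that minted the token.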
Skype is an example of a system that creates a new layer of routing to deliver a service in a way that wasn’t possible before, while still retaining the ability to connect to its “old world” equivalent (POTS) through hybrid elements in its infrastructure. Because Skype never ran in a web browser, we tend not to think of it as “part of the Web” the way we do Facebook, Twitter, and others, but that is a mere historical accident of when it was built and of the capabilities of browsers at the time. Skype has as much of a social network as Facebook does, yet because it deals mostly with real-time communication we don’t think of putting the two in the same category; there’s no real reason for that.
Bits are bits, communication is communication.
Old standards become overloaded and strained to cover every possible need or function *coughHTML5cough*. Fear drives this: fear that new systems, instead of helping, would end up being counterproductive; concerns about balkanization, incompatibilities, and so forth. Those concerns are misplaced.
The fact is that new services have to discard most of the internal models and technology stacks (and many external ones) that the Internet supposedly depends on. They frequently have to resort to a “low fidelity” version of what they offer to connect to the Web in terms it can “understand.” In the past we have called these systems and services “walled gardens.” But when a bunch of these “walled gardens” are used by 10, 20, or 30% of the population of the planet, we’re not talking about gardens anymore. You can’t hold a billion plants and trees in your backyard.
The balkanization of the Internet has already happened.
New approaches are already here.
They’re just not evenly distributed yet.