diego's weblog

there and back again

all your tech are belong to us: media in a world of technology as the dominant force

Pop quiz: who held the monopoly on radio equipment production in the US in 1918?

General Electric? The Marconi Company?

Radio Shack? (Jk!) :)

How about the US Military?

The US entered World War I “officially” in early April, 1917. Determined to control a technology of strategic importance to the war effort, the Federal Government took over radio-related patents owned by companies in the US and gave the monopoly of manufacturing of radio equipment to the Armed Forces — which at the time included the Army, the Navy, the Marine Corps, and the Coast Guard.

This takeover was short-lived (ending in late 1918) but it would have profound effects on how the industry organized in the years and decades that followed. The War and Navy departments, intent on keeping the technology under some form of US control, arranged for General Electric to acquire the American Marconi company and secure the patents involved.

The result was Radio Corporation of America, RCA, a public company whose controlling interest was owned by GE.

Newspapers had been vertically integrated since their inception. The technology required for printing presses and the distribution networks involved in delivering the product were all “proprietary,” in that they were controlled and evolved by the newspapers themselves. Even if the printing press had other uses, you couldn’t easily repurpose a newspaper printing press to print books, or vice versa, and even if you could secure a printing press for newspapers (a massive investment) you could not hope to easily recreate the distribution network required to get the newspaper into the hands of consumers.

This vertical integration resulted in a combination of natural and artificial barriers to entry that would let a few key players, most notably William Randolph Hearst, leverage the resulting common economic, distribution, and technological foundation to effect a consolidation in the market without engendering significant opposition. Later, movie studios relied on a similar set of controls over the technology employed — they didn’t manufacture their own cameras, but by controlling creation and distribution, and with their aggregate purchasing power, they could dictate what technology was viable and how it was to be used.

Radio, early on, presented the possibility of a revolution in this regard. It could have allowed consumers to also be creators (at least on a small scale). The ability to broadcast was restricted by the size and power of the transmitter at your disposal, and you could start small. It was the first opportunity for a new medium to have the evolution of the underlying technology decoupled from the content it carried, but WWI and the intervention of the US government ensured this would not come to pass. The deal that resulted in the creation of RCA created, in effect, the same kind of vertical integration in radio as in other media (in Britain, a pioneer of broadcast radio and later TV, the government had been largely in control from the beginning through the BBC, and so radio already was “vertically integrated”).

This is a way of thinking that became embedded in how media companies operated.

RCA went on to be at the center of the creation of the two other major media markets of the 20th century, music and television, and in both cases it extended the notion of technology as subservient to the content that it carried.

For every major new medium that appeared until late in the 20th century, media companies could control the technology that they depended on.

Over time, even as technology development broke off into its own path and started to evolve separately from media, media companies retained control of both the standards and the adoption rate (black and white to color, vinyl to CD, SD to HD, etc.). Media companies selected new technologies when and how they wanted, and they set the terms of use, the price, and the pace of its deployment. Consumers could only consume. By retaining control of the evolution of the technology through implicit control of standards, and explicit control of the distribution channels, they could retain overall control of the medium. Slowly, though, the same technology started to be used for more than one thing, and control started to slip away.

Then the Internet came along.

The great media/technology decoupling

TV, radio, CDs, even newspapers are all “platforms” in a technical sense, even if closed ones, in that they provide a set of common standards and distribution channels for information. In this way, the Internet appears to be “just another platform” through which media companies must deliver their content. This has led to the view that we are simply going through a transition not unlike that of, say, Vinyl to CDs, or Radio to TV.

That media companies can’t control the technology as they used to is clear. What is less clear is that this is a difference of kind, not of degree.

CNN can have a website, but it can control neither the technology standards nor the software used to build it, nor can it ensure that the introduction of a certain technology (say, Adobe Flash) will be followed by a period of stability long enough to recoup the investment required to use it. NBC can post shows online, but it can’t prevent millions of people from downloading the show without advertisement through other channels. Universal Studios can provide a digital copy of a movie six months after its release, but in the meantime everyone who wanted to watch it has, often without paying for it. These effects and many more are plainly visible, and as a result, prophecies involving the death of TV, the music industry, newspapers, movie studios, or radio are common.

The diagnoses are varied and they tend to focus, incorrectly, on the revenue side of the equation: it’s the media companies’ business models which are antiquated. They don’t know how to monetize. Piracy is killing them. They can’t (or won’t) adapt to new demands and therefore are too expensive to operate. Long-standing contracts get in the way (e.g. Premium channels & cable providers). The traditional business models that supported mass media throughout their existence are being made increasingly ineffective by the radically different dynamics created by online audiences, ease of copying and lack of ability to create scarcity, which drive down prices.

All of these are real problems but none of them is insurmountable, and indeed many media concerns are making progress in fits and starts in these areas and finding new sources of revenue in the online world. The fundamental issue is that control has shifted, irreversibly, out of the hands of the media companies.

For the first time in the history of mass media, technology evolution has become largely decoupled from the media that uses it, and, as importantly, it has become valuable in and of itself. This has completely inverted the power structure in which media operated, with media relegated to just another actor on a larger stage. For media companies, lack of control of the information channel used is behind each and every instance of a crack in the edifice that has supported their evolution, their profits, and their power.

Until the appearance of the Internet it was the media companies that dictated the evolution of the technology behind the medium and, as critically, the distribution channel. Since the mid-1990s, media companies have tried and generally failed to insert themselves as a force of control in the information landscape created by the digitization of media and the Internet. Like radio and TV, the Internet includes a built-in “distribution channel,” but unlike them it does not lend itself to government-apportioned natural monopolies over that channel. Like other media, the Internet depends on standards and devices to access it, but unlike other media the standards and devices are controlled, evolved, and manufactured by companies that see media as just another element of their platforms, and not as a driver of their existence.

This shift in control over technology standards, manufacture, demand, and evolution is without precedent, and it is the central factor driving the ongoing crisis media has found itself in since the early 90s.

Now what?

Implicitly or explicitly, what media companies are trying to do with every new initiative and every effort (DRM, new formats, paywalls, apps) is to regain control of the platform. Given the actors that now control technology, it becomes clear why they are not succeeding and what they must do to adapt.

In the past, they may have attempted to purchase the companies involved in technology, fund competitors, and the like. Some of this is going on today, with the foremost examples being Hulu and Ultraviolet. As with past technological shifts, media companies have also resorted to lobbying and the courts to attempt to maintain control, but this too is a losing proposition long-term. Trying to wrest control of technology by lawsuits that address whatever the offending technology is at any given moment, when technology itself is evolving, advancing, and expanding so quickly, is like trying to empty the ocean by using a spoon.

These attempts are not effective because the real cause of the shift in power that has occurred is beyond their control. It is systemic.

In a world where the market capitalization of the technology industry is an order of magnitude or more greater than that of the media companies (and where, incidentally, a single company, Apple, has more cash on hand than the market value of all traditional media companies combined), it should be obvious that the battle for economic dominance has been lost. Temporary victories, if any, only serve to obfuscate that fact.

The media companies that survive the current upheaval are those that accept their new role in this emerging ecosystem: one of an important player but not a dominant one (this is probably the toughest part). There still is and there will continue to be demand for content that is professionally produced.

Whenever people in a production company, or a studio, or magazine, find themselves trying to figure out which technology is better for the business, they’re having the wrong conversation. Technology should now be directed only by the needs of creation, and at the service of content.

And everyone needs to adapt to this new reality, accept it, and move on… or fall, slowly but surely, into irrelevance.

the planet remade: now, with asteroids!

Let me begin with a book recommendation: The Planet Remade: How Geoengineering Could Change The World by Oliver Morton.

I would change the title of this book to “The Planet Remade: How Geoengineering Has Changed The World And Will Continue To Change It As Long As Humans Are Monkeying On It, In It, and Around It.” But I understand that might be a less catchy title.

Look, I accept the distinction Morton makes between ‘willful change’ and not, and he needs to establish some boundaries for the discussion. It’s pretty clear we’ve already created massive changes in the planet’s systems. We have altered its features, most obviously by redirecting rivers, creating dams, digging giant tunnels into mountains, covering hundreds of thousands of square miles with concrete, cement, asphalt and all kinds of other crazy stuff (like, say… putting golf courses in the middle of deserts), and (mostly for bad reasons) blowing up lots and lots of different places. We have pumped and continue to pump trillions of tons of gases and chemicals into the biosphere. Geoengineering is already happening, so how about we do it for something other than manufacturing complicated barbecue grills, phone cases and Christmas tree decorations?

The book’s discussion on the transformation of the nitrogen cycle is particularly interesting, since this was a key factor in making Norman Borlaug’s high-yield dwarf “superwheat” a feasible crop at large scale (dwarf wheat consumes more nitrogen). Much is frequently said of Borlaug’s work and the Nobel prize he got for it (and with good reason) but less is known about the massive geoengineering activity that started before that work and made it possible.

Geoengineering will be a key element in reversing some of the effects of climate change, since it is pretty clear that “just” reducing emissions won’t cut it.

Just sulfate it.

If I had to bet on a method for climate engineering that’s going to be used in the next few decades, I’d go for stratospheric sulfate aerosols — which the book covers well. Why? As The Joker in TDK said of gasoline and dynamite: “They’re cheap!” If none of the world powers is going to do it, any one of a number of other countries will eventually decide that it’s time to stop the ocean from erasing their coast sooner rather than later. The consequences of this could lead to (surprise!) war, perhaps even nuclear war, which Morton discusses as well. Nothing like some optimism about saving the planet sprinkled with apocalyptic thinking. Just kidding, that’s something important to discuss too. (Nuclear winter is also discussed in terms of its climate impact).

Near the end the book spends a good amount of time talking about asteroids, but not in the way I thought would be … kind of obvious. It focuses on asteroids as an Extinction Level Event. Dino-killer, etc. The point he makes is that the various ideas discussed around how to stop an asteroid from crashing to earth are in a way similar to the idea of using geoengineering to save us from a different kind of cataclysm.

This is an interesting argument but….

Asteroid Mining + Stratospheric Aerosols = Profit!

Fine… maybe not profit, just saving the world. My point is, what the book doesn’t discuss is the use of asteroids for geoengineering… and not as an argument. It mentions “asteroid wrangling” but all hope is dashed when we see that it’s talking about moving an asteroid off-course to prevent it from hitting earth. Ridiculous. We have Bruce Willis for that!

One of my personal obsessions is the topic of asteroid mining. Yes, within the next few decades we will begin mining asteroids, there’s no doubt in my mind about that. And it seems inevitable to me that we’ll also be using some of the results of that for climate engineering via the stratosphere (and later to create massive structures in orbit around the planet).

Why? Because the biggest cost in seeding the stratosphere is energy: specifically, the kinetic energy you need to spend to move millions of tons of what is essentially dust from the ground (where it is manufactured cheaply) to its stratospheric destination over 8-10 kilometers above the surface of the earth, depending on latitude. This “cost” is more of a logistical cost than a pure energy cost. How so?

Option A: Airplane!

(Not the movie). Let’s say we are going to seed a million tons of sulfate aerosols into the stratosphere.

The energy required to lift a mass of a million tons of material to a height of 10,000 meters would be ~98.1 terajoules (give or take a Joule, E = m·g·h) = ~27 GWh (gigawatt-hours) = 27,000,000 kWh. In the US (with an average energy cost of 12c/kWh) just lifting the dust would cost at minimum around 3.3 million dollars. Add to that the necessary costs for stamps, copy paper, printing receipts and office parties, copies of Microsoft Windows, safety goggles, and such, and the cost would rise by several million more. So round it up to 10. 10 MM USD = 1 million tons of material at stratospheric height.

Now, the Mount Pinatubo eruption in 1991 is estimated to have injected 20 million tons of sulfates and resulted in an estimated 0.5 C cooldown across the planet within a year. This cooldown dissipated as quickly as it arrived (at least in geological terms) so a long term geoengineering operation would require adding sulfates for several years, perhaps decades.

With this we could derive a “baseline cost” of 200 million dollars to make global temperatures drop half a degree centigrade within a year. Sounds cheap! We could have a 2×1 offer and make it an even degree cooler.
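For the curious, here is that back-of-the-envelope arithmetic as a tiny Python sketch. It covers only the idealized lift energy (E = m·g·h) and the padded ~$10M-per-million-tons figure scaled to a Pinatubo-sized injection, with the same rough assumptions as above (nothing about planes, fuel, or logistics):

```python
# Back-of-the-envelope: cost of lifting sulfate aerosols to the stratosphere.
# Rough assumptions, as in the text: 1 million metric tons, 10 km altitude,
# US average electricity price of $0.12/kWh, Pinatubo-scale injection of 20 Mt.

G = 9.81                        # gravitational acceleration, m/s^2
MASS_KG = 1_000_000 * 1_000     # one million metric tons, in kg
HEIGHT_M = 10_000               # ~10 km to the lower stratosphere
PRICE_PER_KWH = 0.12            # USD, US average

energy_j = MASS_KG * G * HEIGHT_M     # E = m * g * h
energy_kwh = energy_j / 3.6e6         # 1 kWh = 3.6e6 J
lift_cost_usd = energy_kwh * PRICE_PER_KWH

print(f"Lift energy: {energy_j / 1e12:.1f} TJ = {energy_kwh / 1e6:.1f} GWh")
print(f"Pure lift cost: ${lift_cost_usd / 1e6:.1f} million")

# Pad the per-million-ton figure to ~$10M (stamps, office parties, goggles...)
# and scale to a Pinatubo-sized 20 Mt injection:
padded_cost_per_million_tons = 10e6
print(f"Padded 20 Mt cost: ${20 * padded_cost_per_million_tons / 1e6:.0f} million")
```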

The energy transfer, sadly, is not “pure,” and therefore neither is the cost. If you are spreading the material from, say, a plane, the weight of the plane, the fuel, transport from the factory to the airfield, and so forth also come into play. The logistics chain and equipment required become really complicated, really fast. Not impossible by any means, just complicated and much more costly, running into billions. For a less hand-wavy (and more systematic, but way longer) analysis, see Geoengineering Cost Analysis and Costs and economics of geoengineering.

Here’s where asteroids come into play.

Option B: Asteroids!

(Not the game). Using asteroids for this purpose seems to me like a perfect match. Any nasty by-products of the mining and manufacture remain in space, where hazardous chemical waste is not a problem since a lot of the stuff out there is already hazardous chemicals, plus no one can hear you scream.

Asteroids contain enough material to either obtain what you need directly or synthesize what you need using micro-factories landed on the asteroid (or built there by other micro-factories) for that purpose.

The energy required for the deployment of the material will be far lower (you’ll always need some amount of energy expenditure in the form of thrusters and the dispersion device), but you would be able to rely on gravity to do most of the work (if the asteroid in question has been captured and placed in orbit around the earth, even better). Instead of fighting gravity, we’d use it to our advantage.

Most of the maneuvers involved in transferring material would rely on gravity assists rather than rockets (plus aerobraking for atmospheric reentry when needed), which makes them cheaper and, something that is hardly ever mentioned, less prone to failure simply because there are fewer components in the system, particularly components of the very large, very explosive kind, like the Saturn V’s S-IC or the Space Shuttle’s SRBs.

Now that people are excitedly talking about the possibility that we may have found a Dyson Sphere in our own neighborhood (KIC 8462852 FTW – only 1,480 light years away!) talking about these types of projects could sound to people more like science and less like science fiction. As a bonus, this gets us closer to a Type II civilization. We’ll definitely need to throw a party when that happens.

TL;DR go read this book. It’s very likely that stratospheric sulfate aerosols will be used for climate engineering within the next few decades. But why wouldn’t we use asteroid capture and mining for that? Can this possibly be a new idea? Also: Dyson Spheres!

PS: I haven’t found discussion of this type of sourcing of material for geoengineering, so should this be a new idea I fully expect my fair share of the massive profits. Just let me know and I’ll send my bank information. Can’t send funds myself though, most of my money is in the hands of a Nigerian prince who is using it to process an inheritance.

When All You Have Is A Hammer

(x-post to medium)

Today’s software traps people in metaphors and abstractions that are antiquated, inefficient, or, simply, wrong. New apps appear daily and with a few exceptions they simply cement the dominance of these broken metaphors into the future.

Uncharacteristically, I’m going to skip a digression on the causes of this and leave that for another time (*cough*UI guidelines*cough*), and go straight to the consequences, which must begin with identifying the problem. I could point at the “Desktop metaphor,” including its influence on the idea of “User Interfaces,” as the source of many problems (people, not users!), but I won’t.

I’ll just focus on a simple question: Can you print it?

If You Can Print It…

Most of the metaphors and abstractions that deal with “old” media, content, and data simply haven’t evolved beyond being a souped-up digital version of their real-world counterparts. For example: you can print a homepage of the New York Times and it wouldn’t look much different from the paper version.

You could take an email thread, or a calendar and print it.

You can print your address book.

Consider: If you showed any of these printouts to someone from the early 20th century, they would have no problem recognizing them and the only thing that they may find hard to believe about it would be how good the typesetting is (or they would be surprised by the pervasive use of color).

Our thinking in some areas has advanced little beyond creating a digital replica of industrial-era mechanisms and ideas: not only can these things be printed, but in printing them some data would be lost (e.g. IP routing headers) while little to no information would be lost. With few exceptions (say, alarms built into the calendar), they can be printed without significant loss of fidelity or even functionality.

On the flip side, you could print a Facebook profile page, but once put in these terms, we can see that the static, printed page does not really replicate what a profile page is: something deeply interactive, engaging, and more than what you can just see on the surface.

Similarly, you could actually take all these printouts and organize them in basically the same way as you would organize them in your computer or online (with physical folders and shelves and desks) and you’d get basically the same functionality. Basic keyword searching is the main feature that you’d lose, but as we all know from our daily experience, for anything that isn’t Google, keyword search can be a hit-and-miss proposition.

This translation of a printed product into a simple digital version (albeit sprinkled with hyperlinks) has significant effects on how we think about information itself, placing constraints on how software works, from the highest levels of interaction to the lowest levels of code.

These constraints express themselves as a pervasive lack of context: in terms of how pieces of data relate to each other, how they relate to our environment, when we did something, with whom, and so on.

Lacking context, and because the “print-to-digital” translation has been so literal in so many cases, we look at data as text or media with little context or meaning attached, leading modern software to resort to a one-word answer for anything that requires finding a specific piece of information.


Apparently, There Is One Solution To All Our Problems

Spend a moment and look at the different websites and apps on your desktop, your phone, or your tablet. Search fields, embodied in the iconic (pun intended) magnifying glass, are everywhere these days. The screenshot below includes common services and apps, used daily by hundreds of millions of people.

Search is everywhere

On the web, a website that doesn’t include a keyword-based search function is rare, and that’s even considering that the browser address bar has by now turned into a search field as well.

While the screenshot is of OS X, you could take similar screenshots on Windows or Linux. On phones and tablets, the only difference is that we see (typically) only one app at a time.

Other data organization tools, like tags and folders, have also increased our ability to get to data by generally flattening the space we search through.

The fact that search fields appear to be reproducing at an alarming rate is not a good sign. It’s not because search is inherently bad (or inherently good, for that matter). It’s a feature that is trying to address real problems, but attempting to cure a symptom rather than the underlying cause. It’s like a pilot wearing a helmet one size too small and taking aspirin for the headache instead of getting a bigger helmet.

Whether on apps or on the web, these search engines and features are good at returning a lot of results quickly. But that’s not enough, and it’s definitely not what we need.

Because searching very fast through a lot of data is not the same as getting quickly to the right information.

Et tu, Search?

Score one for iconography: Search behaves, more often than not, exactly like a magnifying glass: as powerful and indiscriminate in its amplification as a microscope, only we’re not all detectives looking for clues or biologists zooming in on cell structure. What we need is something closer to a sieve than a magnifying glass: something that naturally gives us what we care about while filtering out what we don’t need.

Inspector Clouseau, possibly searching for a document

Superficially, the solution to this problem appears to be “better search”, but it is not.

Search as we understand it today will be part of the solution, a sort of escape hatch to be used when more appropriate mechanisms fail. Building those “appropriate mechanisms,” however, requires confronting the fact that software is, by and large, utterly unaware of either context or the concept of time beyond their most primitive forms — more likely to try to impose on us whatever it thinks is the “proper” way to do something than to adapt to how we already work and think, and frequently focused on recency at the expense of everything else. Today’s software and services fail both to associate enough contextual and chronological information, and to effectively leverage the contextual data that is available once we are in the process of retrieving or exploring.

Meaning what, exactly? Consider, for example, the specific case of trying to find the address of a place where you’re supposed to meet a friend in an hour. You created a calendar entry but neglected to enter the location. With today’s tools, you’d have to search through email for your friend’s name, and more likely than not you’d get dozens of email threads that have nothing to do with the meeting. If software used context even in simple ways, you could, as a primitive example, just drag the calendar entry and drop it on top of a list of emails, which the software would interpret as wanting to filter emails to those around the date on which the entry was created. The number of emails that would match that condition would be relatively small. Drag and drop your friend’s avatar from a list of contacts, and more often than not you’d end up staring at an email thread around that date that would have the information you need, no keyword search necessary.
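To make that concrete, here is a minimal sketch in Python of how those two drag-and-drop gestures could translate into successive context filters. The Email and CalendarEntry types and the sample data are hypothetical, just enough structure to show the idea; this is not any real mail or calendar API:

```python
# Sketch: filtering a mailbox by the context of a calendar entry and a contact,
# instead of by keywords. All types and data here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Email:
    subject: str
    sender: str
    sent: datetime

@dataclass
class CalendarEntry:
    title: str
    created: datetime

def around(entry: CalendarEntry, emails: list, days: int = 3) -> list:
    """Dropping a calendar entry on the mailbox: keep emails sent near its creation date."""
    window = timedelta(days=days)
    return [e for e in emails if abs(e.sent - entry.created) <= window]

def from_contact(emails: list, contact: str) -> list:
    """Dropping a contact's avatar on the result: keep emails from that person."""
    return [e for e in emails if e.sender == contact]

# Two gestures, two filters: no keyword search involved.
entry = CalendarEntry("Coffee with Alex", created=datetime(2015, 6, 1, 14, 0))
mailbox = [
    Email("Re: where should we meet?", "alex@example.com", datetime(2015, 6, 1, 15, 30)),
    Email("Quarterly report", "boss@example.com", datetime(2015, 5, 12, 9, 0)),
    Email("Lunch sometime?", "alex@example.com", datetime(2015, 3, 2, 11, 0)),
]
candidates = from_contact(around(entry, mailbox), "alex@example.com")
print([e.subject for e in candidates])   # ['Re: where should we meet?']
```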

In other words — search is a crutch. Even when we must resort to exhaustive searching, we don’t need a tool to search as much as we need a tool to find. It is very clear that reducing every possible type of information need to “searching,” mostly using keywords (or whatever can be entered into a textfield) is an inadequate metaphor to apply to the world, which is increasingly being digitized and dumped wholesale into phones, laptops and servers.

We need software that understands the world and that adapts, most importantly, to us. People. It’s not as far-fetched or difficult as it sounds.

the multichannel conundrum

(x-post to Medium)

I’ve been writing online for quite a while now. My earliest posts date back to late 2001/early 2002. I tried a bunch of different platforms and eventually settled on MovableType running on my own server, and a few years back I moved to hosted WordPress, where my primary weblog remains. As I’ve been revving up my writing in recent weeks I started wondering about other options.

why write where

Now, some people may think of posting in as many places as you can in purely utilitarian terms, as a way to “increase distribution” or whatever. I, however, think about it in terms of the mental space the tool creates, and how it affects my output. Which affects me. This effect is not restricted to online writing, where social feedback loops can be created instantly. I think the tool has a direct, real effect on what you write. All things being equal, writing on a typewriter will lead to something different than if you used, say, Notepad on Windows 95. I’m sure there are studies about this that confirm my completely unfounded assertion. However, I am not going to go on a yak-shaving expedition in an attempt to find out. Let us assume there are, and if not, then let’s agree there should be… and if not we can disagree*.

*Should someone object and try to say that we can “agree to disagree” then I will point out that, no, “agreeing to disagree” is just plain disagreeing but pretending you don’t, probably to avoid an actual conversation. “Agreeing to disagree” is to “disagreeing” what “agnostic” is to “atheist.”

A lot of what I write, of what I’ve always written, is long form. And a lot of what I write, of what I’ve always written, is connected. Not superficially, not just thematically, but actually connected, a long-running thread of obsessions and topics that expand (and, less frequently, collapse) non-linearly. Sometimes I’ve written hypertextually, simultaneously creating meaningful minor blocks of meaning and greater ideas that emerge out of the non-directed navigation of references between those minor blocks. By the by, I know “hypertextually” is not really a word, but I think it conveys what I mean.

While that structure is amusing to my brain (and possibly other brains!), it can have a fate worse than becoming incomprehensible: becoming invisible. If you see something that you don’t understand you have a choice to spend time and try to understand it, but if you don’t see something, regardless of complexity, well…

content survivability

So trying to keep that structure somewhat visible means lots of cross-referencing, which means what I write has to have exceptional survivability. This is less easy than it sounds. Services start and close down. Linking mechanisms change. Technically, theoretically, there’s nothing really preventing hyperlinked content from remaining available for referencing in perpetuity; in practice, perpetuity can be, and often is, a very, very short time. An easy example is Twitter and the tweet-boxes that they insist people must use to reference tweets. Some people take screenshots, most use the tweet boxes. Eventually Twitter will change, morph, be acquired, shut down, or maybe not, but I guarantee you that at some point in the next 10–20 years those boxes will simply stop working. At that time, regardless of how standards-compliant the HTML of the pages that contain those tweets is, they will be crippled, possibly severely. How many times have you read a news story recently that talks about how so-and-so tweeted such-and-such and it’s outrageous? Archive.org and its wonderful Wayback Machine don’t solve this issue.

Now, in general, this is not necessarily a bad thing. I’m sure that not everything has to be preserved forever. With distance history loses resolution, and that’s alright for lots of things. Even during crises a lot of what we do in life is mundane, inconsequential and it rightfully gets lost in time. Now that a lot of what we do is either in cyberspace or is reflected by/in it, it’s natural that inconsequential things end up there. We don’t care what Julius Caesar had for lunch one day in October as a teenager. Likewise, the fact that an Instagram photo of a future president’s lunch is lost in time will do nothing to alter history. However, if the choice for lunch leads to missing a bus that later crashed, then the entire incident will generally be recorded. Psychohistory comes to mind.

But I digress. The point is that I like the idea, personally, of knowing that I can maintain cross references valid for what I write, and that means having both a level of control over it as well as reducing the number of outlets in which it appears. Hence my weblog being fairly static in structure (I converted the MT weblog to static pages back during the transition).

This also limits the tools that can be used, to some degree, and according to my theory of how the tool shapes the message, it would naturally lead to stagnation, at minimum, stylistically, of what is said.

Which results in this so-called conundrum.

Trying new things is important though. That’s why I’m here. I may cross-post to my weblog for now, just for “backup,” but I am going to give Medium a try, and see what happens. This entire post resulted entirely from this experiment, and that’s a pretty good start.😛

letter to a rooster

Somewhere North of my residence.

Dear Sir/Madam

As a modern man, I am reasonably accepting of other lifeforms and their possibly bizarre rituals or traditions. For example, I both accept and embrace expats of New York or Chicago, who seem to never tire of extolling the virtues of their city of origin while not actually living in it. I am also fond of librarians and their index cards, whether printed or digital, and can even tolerate so-called “Foodies” as long as they maintain an appropriate distance (usually around 50 feet, depending on voice volume).

Now, with my bona fides established, I would like to lodge my complaint.

Whatever possessed your race (presumably millennia ago) to invent the screeching call you insist on perpetrating every day at the break of dawn, surely we can all agree that, given your current sub- or semi-urban circumstances, the time for such barbarism is now past.

In this modern world of ours we have a large number of items that are designed to wake people up as well as put them to sleep, in the form of smartphones, State of the Union addresses (both live and recorded), alarm clocks, movies starring Steven Seagal, or the occasional blunt instrument. I have no doubt that Fowl in general and Roosters in particular would be equally well served by any of the products that can be procured for little expense at your local Wal-Mart or Barn, Pottery or otherwise. Furthermore, were you to resist this notion one could almost call your obsessive clinging to this forgotten Alarm-Clock-less past pitiful, and you, Sir/Madam, a Luddite. And I do not believe that Sir/Madam wishes this notion to propagate among your contemporaries.

I am a gentleman; I do not wish to escalate matters. However, at this point I must inform you that, should you continue in this course of action, I will be forced to take measures to end this daily attack on the senses. These include, but may not be limited to: SEAL HALO drops on your present location, various types of artillery, grenades, arrows (both wooden and metallic), colored confetti, and low-yield nuclear-tipped ICBMs. Should you persist beyond that point I am also prepared to deploy offensive mechanisms banned by the Geneva Conventions, for example, non-stop rebroadcasts of Jersey Shore at high volume in your general direction.
I beg that you will listen to reason and relent, before the madness consumes us all.



PS: by “SEAL” I don’t mean the cute animals that hang out in pools at the Zoo or the Marina clapping at the tourist folk, and by “HALO” I don’t mean the critically-acclaimed game from Bungie. Although a shower of marine creatures and DVDs would probably range from inconvenient to downright annoying, it’s far tamer than what I am actually referring to.

PPS: Somehow it has gotten into my head that when not “roosting” (or whatever you call the screeching) you actually speak with the voice of Sir Sean Connery. Should this be accurate, please let me know. I am not about to declare war on 007. I am not a moron.
PPPS: Jimmy McMillan 2016.

2 idiots, 1 keyboard (or: How I Learned to Stop Worrying and Love Mr. Robot)

I’d rename it “The Three Stooges in Half-Wits at Work” if not for the fact that there are four of them. We could say the sandwich idiot doesn’t count, though he does a good job with his line (“Is that a videogame?”), while extra points go to the “facepalm” solution of disconnecting a terminal to stop someone from hacking a server. It’s so simple! Why didn’t I think of that before!?!?!

Mr. Robot would have to go 100 seasons before it starts to balance out the stupidity that shows like NCIS, CSI and countless others have perpetrated on brains re: programming/ops/etc.

Alternative for writers who insist on not doing simple things like talking to the computer guy who keeps your studio from imploding: keep the stupid, but make it hilariously, over-the-top funny, like so:

We’ll count it even if it’s unintentional. That’s how nice we computer people are.

PS: and, btw, this, this, is why no one gets to complain about Mr. Robot’s shortcomings.

not evenly distributed, indeed

The future is already here – it’s just not very evenly distributed.

— William Gibson (multiple sources)

The speed at which digital content grows (and at which non-digital content has been digitized) has quickly outpaced the ability of systems to aid us in processing it in a meaningful way, which is why we are stuck living in a land of Lost Files, Trending Topics and Viral Videos.

Most of those systems use centuries-old organizational concepts like Folders and Libraries, or rigid hierarchical structures that are perfectly reasonable when everything exists on paper, but that are grossly inadequate, not to mention wasteful and outdated, in the digital world. Once digital, information is infinitely malleable, easily changeable, and can be referenced with a degree of precision and at scales that are simply impossible to replicate in the physical world, and we should be leveraging those capabilities far more than we do today outside of new services.

Doing this effectively would require many changes across the stack, from protocols, to interfaces, to storage mechanisms, maybe formats. This certainly sounds pretty disruptive, but is it really? Or is there a precedent for this type of change?

What We Can Learn From The Natives

“Digital native” systems like social networks and other tools and services created in the last decade continue to evolve at an increasingly rapid pace around completely new mechanisms of information creation and consumption, so a good question to ask is whether it is those services that will simply take over as the primary way in which we interact with information.

Social media, services, and apps are “native” in that they are generally unbounded by the constraints of old, physically-based paradigms — they simply could not exist before the arrival of the Internet, high speed networks, or powerful, portable personal computing devices. They leverage (to varying degrees) a few of the ideas I’ve mentioned in earlier posts: context, typically in the form of time and/or geographical location, an understanding of how people interact and relate to each other, and a strong sense of time and semantics around the data they consume and create.

Twitter, Facebook, and others include the concept of time as a sorting mechanism and, at best, as another way to filter search. While this type of functionality is part of what we are discussing, it is not all of what we are talking about, and just as “time” is not the only variable that we need to consider, neither will social media replace all other types of media. Each social media service is focused on specific functionality, needs, and wants. Each has its own unique ‘social contract.’

Social media is but one example of the kind of qualitative jumps in functionality and capabilities that are possible when we leverage context, even in small ways. They are proof positive that people respond to these ideas, but they are also limited — specific expressions of the use of context within small islands of functionality of the larger world of data and information that we interact with.

Back on topic, the next question is, did ‘digital natives’ succeed in part because they embraced the old systems and structures? And if so, wouldn’t that mean that they are still relevant? The answer to both questions is: not really.

Post Hoc, Ergo Propter Hoc

Facebook and Twitter (to name just two) are examples of wildly successful new services that, when we look closely, have not succeeded because of the old hierarchical paradigms embedded into the foundations of computers and the Internet, but in spite of them. To be able to grow they have in fact left behind most of the recognizable elements on which the early Internet was built. Their server-side infrastructures are extremely complex and not even remotely close to what we’d have called a “website” barely a decade ago. On the client side, they are really full-fledged applications that don’t exist in the context of the Web as a mechanism for delivering content. New services use web browsers as multi-platform runtime environments, which is also why as they transition to mobile devices more of their usage happens in their own apps, in environments they fully control. They have achieved this thanks to massive investments, in the order of hundreds of millions or billions of dollars, and enormous effort.

This has also carried its cost for the rest of the Web in terms of interconnectivity. These services and systems are in the Web, but not of it. They relate to it through tightly controlled APIs, even as they happily import data from other services. In some respects, they behave like a black hole of data, and they are often criticized for it.

This is usually considered to be a business decision — a need to keep control of their data and thus control of the future, sometimes with ominous undertones attached, and perhaps they could do more to open up their ability to interface with other services in this regard.

But there is another factor that is often overlooked and that plays a role that is as important, or more so. These services’ information graphs, structures, and patterns of interaction are qualitatively different from, and far removed from, the basic mechanisms that the Web supports. For example, some of Facebook’s data streams can’t really be shared using the types of primitive mechanisms available through the hierarchical, fixed structures that form the shared foundation of the Internet: simple HTML, URLs, and open access. Whereas before you could attach a permalink to most pieces of content, some pieces of content within Facebook are intrinsically part of a stream of data that crumbles if you start to tease it apart, or that requires you to be signed in to verify whether you have access to it or not, how it relates to other people and content on the site, etc. The same applies to other modern services. Wikipedia and Google have both managed to straddle this divide to some degree, Wikipedia by retaining extremely simple output structures, and Google by maintaining some ability to reference portions of its services through URLs, but this is quickly changing as Google+ is embedded more deeply throughout the core service.

Skype is an example of a system that creates a new layer of routing to deliver a service in a way that wouldn’t have been possible before, while still retaining the ability to connect to its “old world” equivalent (POTS) through hybrid elements in its infrastructure. Because Skype never ran in a web browser, we tend not to think of it as “part of the Web,” something we do for Facebook, Twitter, and others, but that is a mere historical accident of when it was built and the capabilities of browsers at the time. Skype has as much of a social network as Facebook does, but because it deals mostly with real-time communication we don’t think of putting the two in the same category; there’s no real reason for that.

Bits are bits, communication is communication.

Old standards become overloaded and strained to cover every possible need or function *coughHTML5cough*. Fear drives this: fear that instead of helping, new systems would end up being counterproductive; concerns about balkanization, incompatibilities, and so forth. Those concerns are misplaced.

The fact is that new services have to discard most of the internal models and technology stacks (and many external ones) that the Internet supposedly depends on. They have to frequently resort to a “low fidelity” version of what they offer to connect to the Web in terms it can “understand.” In the past we have called these systems and services “walled gardens.” When a bunch of these “walled gardens” are used by 10, 20, or 30% of the population of the planet, we’re not talking about gardens anymore. You can’t hold a billion plants and trees in your backyard.

The balkanization of the Internet has already happened.

New approaches are already here.

They’re just not evenly distributed yet.

scenario #1

“I just know I’ve seen it before.”

You’re meeting Mike, who waits patiently while you mumble this. Browsing, navigating through files, searching. Something you were looking at just yesterday, something that would be useful… You remember telling yourself this is important, then getting sidetracked following up on the last in the list of emails you needed to exchange to set the time for the meeting, switching between checking your spam folder for misplaced messages and your calendar for available times, then a phone call… but that doesn’t help… you know you won’t find it. You briefly consider checking the browser on your laptop, but the thought of wading through two-dozen-plus spinning tabs as they load data you don’t need while trying to find something you can’t even describe precisely doesn’t sound like an inviting prospect. You give up.

The meeting moves on. You start to take some notes. Suddenly, a notification pops up but it goes away too quickly for you to see it. You don’t know what it is, so you load the app, disrupting the conversation and your note-taking. It’s a shipment tracking notification. You close the app and go back to your notes, now stuck at mid-sentence.

The flow of the conversation moves to a blog post Mike forwarded to you recently, but you can’t remember seeing it. You find the email, eventually, but after clicking on the link in the results page the window is blank and the page doesn’t finish loading. You wait five seconds. Ten. You give up, close the tab, and keep going.

Hours later, you are at home, reading through the news of the day, and you suddenly remember that blog post again. While it’s loading, you get an alert. Twin beeps. You glance at it. Meeting with Mike, 8 pm, it says. A second later, the phone beeps.

Meeting with Mike, 8 pm.

Two rooms away, you hear your tablet, beeping. You don’t need to go look at it. You know what it says.

Meeting with Mike, 8 pm.

It turns out that the time you set in the calendar entry when you originally created it was incorrect, the fixed one was a duplicate, and all your calendars are now happily notifying you of an upcoming meeting that actually happened hours ago. You dismiss the alert on your laptop, but this doesn’t do much for the alerts on your other devices.

In fact, an hour or so later, when you start using the tablet, the alert is still there, even though it’s two hours after the meeting should have happened. Now you’d like to finish reading what you had started earlier in the day, but the list of “cloud tabs” seems endless, and when you finally find what you want to read, you can’t remember exactly where you were in the article. You don’t want to read it all again, not now. You mark it to “read later” … and give up.

Oh, well. Maybe there’s something good on TV that you can watch on the phone.

islanded in the datastream: the limits of a paradigm

Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom.

— Clifford Stoll

“I just know I’ve seen it before.”

A simple statement about a straightforward problem, and perhaps the best embodiment of the irony that computers can, in theory, remember everything, and would allow us to access any information that exists within our reach — if only we could define precisely enough what we’re looking for.

It’s something that happens to everyone, in every context: at work, at home, at school. We could be trying to remember a reference to use in a document, or looking for a videoclip to show to a friend, or for a photo that (we think) we took during a trip years ago, an email from two weeks ago that has the information to get to our next meeting, or trying to find an article that we read last month and that we’d like to send to someone.

Google can search for something we’ve never seen out of billions of documents that match a keyword, but we can’t easily find a page we read last week. We create documents and then lose track of them. Our hard drives contain a sea of duplicates, particularly photos, images, and videos. Our email, messaging apps, digital calendars, media libraries, browser histories, file systems, all contain implicit and explicit information about our habits, the people we’ve met, places we’ve been, things we’ve done, but it doesn’t seem to help. Often, our digital world seems to be at the whim of an incompetent version of Maxwell’s Demon, actively turning much of our data into a chaotic soup of ever-increasing entropy that makes it increasingly harder to extract the right information at the right time.

We now have countless ways of instantly sharing and communicating. We consume, experience, create and publish in shorter bursts, mainlining tweets, photos, videos, status messages, whole activity feeds.

In the last ten years, the rise of relatively inexpensive, fast, pervasive Internet access, wireless infrastructure, and flexible new approaches to server-side computing has had an enormous effect in many areas, and has contributed to the creation of what are arguably entirely new realities, in a shift of a breadth and depth with few, if any, historical precedents.

But not everything is evolving at the same speed. In today’s seemingly limitless ocean of ideas and innovation, a significant portion of the software we use every day remains stuck in time, in some cases with core function and behavior that has remained literally unchanged for decades.

The software and services that we use to access and interact with our information universe have become optimized around telling us what’s happening now, while those that help us understand and leverage the present and the past, in the form of what we perceive and know, and the future, in the form of learning and making new connections, have become stale and strained under ever-growing data volumes.

Intranets hold data that we need and can’t find, while the Internet keeps pushing us to look at content we may not need but can’t seem to escape. Consumption of media has become nearly frictionless for some types of activities, like entertainment or brief social interactions, while becoming increasingly more difficult for others, like work or study activities that involve referencing and recall. Search engines have opened up access to information of all kinds and reduced the barriers to dissemination for new content as soon as it becomes available, while our ability to navigate through the data closest to us, in some cases data we ourselves produce, has in effect decreased.

Meanwhile, the proliferation of computing devices of all types and sizes, of sensors, and of services that provide ambient information has created a solid layer of new capabilities and raw data that we can rely on, but one that is rarely used extensively or beyond the obvious.

The limits of a paradigm

This isn’t about just interfaces, or publishing standards, or protocols. It can’t be fixed by a better bookmarking service or a better search engine, or nicer icons on your filesystem browser app. The interfaces, the protocols they use, the storage and processing models in clients and middleware are all tied, interdependent, or rather codependent.

What is happening is that we have reached the limits of paper-centric paradigms: a model of the digital world built upon organizational constructs and abstractions based on the physical world, and thus made up of hierarchies of data that can be understood as sequences of pages of printed paper. Hierarchies in which people, us, whom technology should be helping, are at the very bottom of the stack, and what we can see and do is constrained by what’s higher up in whatever hierarchy is at play at the moment.

These hierarchies are everywhere: filesystems, DNS, websites, email folders, music players, URL structures, contact lists. They are embedded in fundamental interface metaphors, in how we structure content — even present in the unseen depths of the Internet’s routing systems.

“Paper centric interfaces” are still pervasive. A significant percentage of modern software could be printed and not only retain most of its information, but also a lot of its “functionality.” For example, what most people do with their iPad calendar could be done as well using a piece of paper with a calendar drawn on it. And most of these things have not only remained unchanged for years, they are in many cases just plain electronic translations of paper-based systems (Cc, BCc, From, To, all existed as part of corporate memo flows well before email was invented).

The hierarchies embedded in our information systems explain why China has the ability to control (if not in every detail) what its citizens can and can’t see on the Internet.

Within these hierarchies, some services have carved out new spaces that show what’s possible when the digital starts to be unleashed from the physical. Wikipedia, Facebook and Twitter are perhaps some of the best examples. Not surprisingly, they’re all fully or partially banned in China, while their “local” copies/competitors can be controlled. But they are universes unto themselves.

There are many other consequences to our over-reliance on hierarchical, centralized architectures. Sites and services go down, and millions of people are affected. Currently, we are on a trajectory in which people increasingly have less, rather than more control over what they see. Privacy is easier to invade, whether by government actors in legitimate—although often overly broad—security sweeps, or by criminals. A single corporate network breach can expose millions of people to fraud or theft.

Large-scale services have to resort to incredibly complex caching systems and massive infrastructure to support their services: the largest Internet cloud services have more than one server for every 1,000 people online.

A lot of this infrastructure seems structurally incapable of change, but it only seems that way. Many companies that run large-scale web services already operate internal systems that no longer resemble anything remotely recognizable as what existed when the protocols they support first came into widespread use. They’re spaceship engines being forced into Ford Model-T frames, twisted into knots to “speak” in a form and at a pace that decades-old conceptual frameworks can process.

Clients, middleware, protocols, what’s visible, and what’s in our pockets and our homes, is stuck in time (examples and discussion on this in future posts). Getting it unstuck will require making changes at multiple levels, but it’s perfectly doable. The digital world is infinitely malleable, and so should be our view of it.

they’re moving to agile any day now

Great story: the most obsolete infrastructure money could buy. If you know the meaning of words/acronyms like RCS, VAX, VMS, Xenix, Kermit and many others, and have been waiting anxiously for a chance to see them used in actual sentences once more, here’s your chance. Choice quotes:

[…] on my first day I found that X was running what was supposedly largest VAXcluster remaining in the world, for doing their production builds. Yes, dozens of VAXen running VMS, working as a cross-compile farm, producing x86 code. You might wonder a bit about the viability of the VAX as computing platform in the year 2005. Especially for something as cpu-bound as compiling. But don’t worry, one of my new coworkers had as their current task evaluating whether this should be migrated to VMS/Alpha or to VMS/VAX running under a VAX emulator on x86-64


After a couple of months of twiddling my thumbs and mostly reading up on all this mysterious infrastructure, a huge package arrived addressed to this compiler project. […]

Why it’s the server that we’ll use for compiling one of the compiler suites once we get the source code! A Intel System/86 with a genuine 80286 CPU, running Intel Xenix 286 3.5. The best way to interface with all this computing power is over a 9600 bps serial port. Luckily the previous owners were kind enough to pre-install Kermit on the spacious 40MB hard drive of the machine, and I didn’t need to track down a floppy drive or a Xenix 286 Kermit or rz/sz binary. God, what primitive pieces of crap that machine and OS were.

Go read it, and try to avoid laughing, crying, shuddering, or shaking your head (possibly all at the same time).
