diego's weblog

there and back again

the territory is not the map (or, beyond the desert of the real)

. . . In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied an entire City, and the map of the Empire, an entire Province. In time, these Excessive Maps did not satisfy and the Schools of Cartographers built a Map of the Empire, that was of the Size of the Empire, and which coincided point for point with it. Less Addicted to the Study of Cartography, the Following Generations understood that that dilated Map was Useless and not without Pitilessness they delivered it to the Inclemencies of the Sun and the Winters. In the Deserts of the West endure broken Ruins of the Map, inhabited by Animals and Beggars; in the whole country there is no other relic of the Disciplines of Geography.

Suárez Miranda: Viajes de varones prudentes, libro cuarto, cap. XLV, Lérida, 1658

— Jorge Luis Borges, On rigor in science (translated by myself, and here’s why)

Today abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory — precession of simulacra — that engenders the territory, and if one must return to [Borges’] fable, today it is the territory whose shreds slowly rot across the extent of the map. It is the real, and not the map, whose vestiges persist here and there in the deserts that are no longer those of the Empire, but ours. The desert of the real itself.

— Jean Baudrillard, Simulacra and Simulations

(Translated by Sheila Faria Glaser)

The map is not the territory.

— Alfred Korzybski

Imagine we had invented airplanes and automobiles but no modern mapping or positioning systems. Without precise maps or GPS, we could go very fast and very far, we could stumble onto wonderful places to visit, we would occasionally meet long-lost friends. We would even be able to communicate at great speed by bringing documents and packages back and forth between a set of specific, well-known locations.

However, it would require enormous effort to make these experiences repeatable, consistent, and therefore reliably useful. Without appropriate navigational tools, pilots and drivers would have to rely on tedious record keeping and inefficient behaviors to describe a path — for example, keeping multiple copies of various partially accurate maps to stitch together something that may resemble a reasonable course to take, or using imprecise mechanisms to establish their positions. These pilots would often spend a lot of time on tasks that have nothing to do with traveling itself (like organizing all those maps), it would take them longer than necessary to reach even common destinations, and, perhaps frequently, they would get completely lost. This is perhaps not that different from what things were like, for example, for pre-Enlightenment European explorers. Every once in a while, the imprecision and unpredictability of incomplete information would pay off, as in someone “discovering” an unknown continent. But, most of the time, it was a fairly inefficient way to go about visiting our planet.

The Internet today is a vast, ever-expanding territory for which we have built great vehicles, but no modern maps. Instead, what we have is the equivalent of 15th century mapping technology to navigate a world that lets us, and perhaps even requires us to, move at 21st century speed. We have no equivalent of GPS for the information landscape we inhabit. No accuracy. No context. All we have are scraps and notes we take along the way, feeble markings on dog-eared charts that become outdated almost as soon as they are created.

Furthermore, this analogy oversimplifies the problem in one important regard — the “vehicles” of the Internet not only travel the territory but also expand it. You can add information explicitly (e.g., post a photo) or create new information implicitly simply by navigating (e.g., view counts on a video).

On the Internet, we quite literally create new roads as we walk.

The Internet as a “territory” is to some degree like that in Borges’ story: it creates the illusion that it and the map are one, that they are inextricably linked and interdependent, but they are not.

Modern systems and services are great at externally directed consumption. The digital mechanisms we use to navigate our present, the now, are increasingly sophisticated about putting new information in front of us.

But in almost every context, self-directed navigation through the vast ocean of data that we increasingly live in has become more difficult, and when we attempt to combine this with purpose and need for specific information (an article that we heard about, a document we created long ago, etc.) the task becomes even more difficult. Some of us are constantly trying out new organizational techniques for our folders, bookmarks, documents; most of us just make do with what we have.

Notifications are used across devices and software to trigger actions from us constantly, but in many cases their intent is not to aid or inform: they exist not because they are necessary or useful, but because the product or service in question wants to accelerate growth, increase “engagement”, or increase revenue.

In response, operating systems started to include a “Do Not Disturb” feature that can be activated manually or automatically at certain times (e.g. at night). This is a useful feature, but the question we should ask ourselves is: Why is it so necessary? It isn’t just the number of services or apps that counts here. It is also that these apps and services lack or ignore simple elements that should be at the forefront in a world in which we are awash in devices and sensors. Awareness of context and time, along with adaptation to the circumstances of the person using the software, are not common design parameters in our software and services. In the few cases when they are considered they are not given as much importance as they deserve, or they are implicitly sidestepped in the service of revenue or growth or some other internal metric. This is not a bad thing in and of itself, but it should be an explicit decision (even explicit to people using the product) and it rarely is.
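To make that concrete, here is a minimal sketch (entirely hypothetical names and rules, not any platform’s actual API) of what treating time and context as first-class design parameters could look like:

    from datetime import datetime, time

    # Hypothetical defaults; a real system would let the person tune these.
    QUIET_HOURS = (time(22, 0), time(7, 0))           # 10pm to 7am
    BLOCKING_ACTIVITIES = {"driving", "in_meeting", "sleeping"}

    def should_interrupt(urgency, activity, now=None):
        """Decide whether a notification is worth interrupting someone right now."""
        if now is None:
            now = datetime.now().time()
        start, end = QUIET_HOURS
        in_quiet_hours = now >= start or now <= end   # window wraps past midnight
        if urgency == "critical":                     # e.g. a call from family
            return True
        if activity in BLOCKING_ACTIVITIES:           # the person is busy; defer
            return False
        if in_quiet_hours:                            # defer until morning
            return False
        return True

    print(should_interrupt("promotional", "driving"))   # False: growth pings can wait

The point is not these particular rules, which are invented; it is that the decision to interrupt someone is made anywhere at all, rather than being left to whichever service shouts loudest.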

Modern systems also require relatively precise inputs to work efficiently and correctly. Take “searching,” for example: before we can search for something, we have to know not only what we’re searching for but the specific technique for how to find it. To begin with, ‘search’ has by now largely become synonymous with ‘keyword search.’ Any amount of disorganized, decontextualized information can, apparently, be managed effectively by typing a few words into a rectangular, two-dimensional box.

And there’s no doubt that if you understand the metaphors and abstractions involved, search as it exists today can be both an efficient and powerful tool. But, if you don’t, it can actually be counterproductive. Google is a spectacular achievement of a scope that is hard to overstate (even if rarely acknowledged) and a wonderful tool, but there can be a vast difference between someone who knows how to use it and someone who doesn’t, not unlike a person asked to find a particular book within the Library of Congress without a clear understanding of how its filing system works.

“Search,” in general, presumes that what you’re looking for is differentiated enough to be separated from everything else by using words, or, even more explicitly, a sequence of Unicode characters.

Consider that for a moment.

Even in the specific case of keyword search, words as they exist for us, in our heads, in speech, in writing, have context, nuance, pronunciation, subtlety. All of that is lost when we have to reduce what we are thinking to pure symbols, to just type—and then have a remote system work based not on what we mean, but on what it thinks we mean.

Search as it exists today is a tool designed primarily to find one element that shares one or two characteristics (typically keywords) with many others but has some fundamental difference that we can express symbolically. However, what we need is, more often than not, to be able to create or resurface connections across personal knowledge strands, the ability to find a specific item among many that are alike when we can’t define precisely what sets them apart.

If search engines aim for finding a needle in a haystack, what we often need is more like looking for a particular grain of sand on a beach. When “searching” for something that we’ve seen, we more frequently recall the context than the detail. “That website we looked at in the meeting last week” is a more frequent starting point than one or two keywords from the website in question.

In other words, this is not merely an issue of expertise or experience using the product, it is a fundamental problem rooted in the lack of subjective contextual information: subjective to the person using it, subjective to their situation, their place, who they are, the time it is, what they’ve seen in the past, the subtle and unique features of the landscape of their experience and knowledge.
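To make the contrast concrete, here is a toy sketch (the data, field names, and queries are invented purely for illustration) of keyword matching next to the kind of contextual filtering that recall actually relies on:

    from datetime import date

    # Toy corpus: each item carries contextual metadata alongside its text.
    items = [
        {"text": "Q3 revenue projections", "source": "meeting",
         "people": {"ana", "sam"}, "seen": date(2015, 7, 8)},
        {"text": "vacation photos from Peru", "source": "email",
         "people": {"mom"}, "seen": date(2015, 6, 2)},
    ]

    def keyword_search(query):
        # What search engines mostly do: match symbols in the text.
        return [i for i in items if query.lower() in i["text"].lower()]

    def contextual_search(source=None, person=None, after=None):
        # What recall mostly looks like: "that thing from the meeting last week."
        results = items
        if source:
            results = [i for i in results if i["source"] == source]
        if person:
            results = [i for i in results if person in i["people"]]
        if after:
            results = [i for i in results if i["seen"] >= after]
        return results

    print(keyword_search("revenue"))                          # needs the right word
    print(contextual_search(source="meeting", person="ana"))  # needs only the context

The first function only works if you remember the symbols; the second works the way memory does, from circumstances toward content.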

Evoking, on the other hand, is something that people excel at: correlating, incrementally building up thin strands of information into coherent threads that allow us to pull from them and bring what we’re looking for into view.

We’ve become mesmerized with the territory and forgotten the map — for example, when the structure of websites rather than that of the information in question determines indexing and results. Or: a message with to, from and cc is not the pinnacle of expression of small-group interpersonal communication dynamics, but we can see clearly that so far we have failed to evolve it in meaningful ways.

If we are to move past the current mire of software that remains stubbornly stuck in systems, modes of operation, protocols and interfaces created decades ago, if we are going to manage to break from interfaces that are barely one step removed from their paper-based counterparts, we will need software that doesn’t rely so deeply on how information is created, manipulated and stored. We are people, not users.

We connect — time, place, related ideas, half-remembered thoughts, sensory information, even feelings, all take part in the process of recall.

Software that in some sense “understands” and can help or even guide us through that process is what we should aim for, and figure out how to build. Software that adapts to us, instead of the other way around.


people, not users!

How we relate to things is dictated by how we think about them, which is strongly influenced by how we communicate about them… and the words we use for that purpose. Terminology matters beyond mere semantics. How we think about how our products are used, and the people who will be using our products, shapes what we build. I don’t care how obvious this may be, it’s still useful to say it, to be mindful of it.

We are people, not users.

When we “use a car” we think of ourselves as a person who drives, not “the user of a car”. You drive. The act of driving is not intrinsic to you, it’s temporary, just something you’re doing. When you use a pen, you are writing; when you use a brush, you are painting. So what are we “users” of software and services (and all things that make up the Internet) doing? And what does “user” mean, anyway?

Let’s look at the dictionary definition of ‘user’:

user |ˈyo͞ozər| noun

1 a person who uses or operates something, especially a computer or other machine.

• a person who takes illegal drugs; a drug user: the drug causes long-term brain damage in users | a heroin user.

• a person who manipulates others for personal gain: he was a gifted user of other people.

So according to this, if you’re a user you might be someone using a computer ‘or other machine’, a drug addict, or possibly a sociopath.


We can’t erase the other (pre-existing) interpretations of “user” from our heads, so just on this basis ‘user’ is not a great term. But let’s set that aside anyway and put things in context. Where did this specific use (ahem) of the word come from? How did it come to be used in computing? For those who don’t know what “Multics” means, The Jargon File — and if you don’t know what that is, Wikipedia, as always, can help — sheds some light:

user: n.

1. Someone doing ‘real work’ with the computer, using it as a means rather than an end. Someone who pays to use a computer. See real user.

2. A programmer who will believe anything you tell him. One who asks silly questions. [GLS observes: This is slightly unfair. It is true that users ask questions (of necessity). Sometimes they are thoughtful or deep. Very often they are annoying or downright stupid, apparently because the user failed to think for two seconds or look in the documentation before bothering the maintainer.] See luser.

3. Someone who uses a program from the outside, however skillfully, without getting into the internals of the program. One who reports bugs instead of just going ahead and fixing them.

The general theory behind this term is that there are two classes of people who work with a program: there are implementors (hackers) and lusers. The users are looked down on by hackers to some extent because they don’t understand the full ramifications of the system in all its glory. (The few users who do are known as real winners.) The term is a relative one: a skilled hacker may be a user with respect to some program he himself does not hack. A LISP hacker might be one who maintains LISP or one who uses LISP (but with the skill of a hacker). A LISP user is one who uses LISP, whether skillfully or not. Thus there is some overlap between the two terms; the subtle distinctions must be resolved by context.

Early computers were large, expensive systems that needed “administrators” and naturally people who could “use” but not “administer”.

“User” did not originally indicate a person. In a networked environment, at the network management level it may, for example, make sense to label a computer as a “user of the network”. In fact, UNIX considers any actor in the system, whether human or synthetic, to have a user (and a group) attached to it as a way of indicating access privileges to different functions or services. The “user” in UNIX is really a property that can be attached to things to indicate permissions and other relevant information (for example, a user will typically have a password, used for authentication, a home directory, etc).
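You can see this directly on any UNIX-like system. Here is a quick sketch using Python’s standard pwd module (available on UNIX-like systems only), which reads the account database; what comes back is a bundle of properties, not a person:

    import pwd  # standard library, UNIX-like systems only

    # Each entry is a record of properties: name, numeric id, home directory, shell.
    # The listing mixes people with synthetic actors such as "daemon" or "www-data",
    # which is exactly the point: "user" here is an access-control property.
    for entry in pwd.getpwall():
        print(entry.pw_name, entry.pw_uid, entry.pw_dir, entry.pw_shell)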

Over time “the person named Joe Smith has a user in the system” turned into “Joe Smith is a user in the system.” We started to mix the concept of a user being a virtual entity controlled by a person with a way to refer to the person themselves.

This use of terminology migrated from terminal-based systems (UNIX, et al) to desktop computers and beyond. An iPad, for example, has no need to distinguish between a user and an administrator. We are, in effect, “administrators” of our iPads, but we still call ourselves “users.” Incidentally, that’s what anyone building software for it calls us, and themselves, as well.

This is one of the reasons why so much of the software we use everyday feels impersonal: it was built for a “user”, a faceless entity that may or may not be human. If you happen to build software and disagree, I challenge you to visualize what comes to your mind when you think of the “user” of your software. For most of us it’s not a person, no face, no arms: a shapeless blob of requirements that, somehow, can manage clicks and taps and keypresses. It will take a concerted effort on your part to picture an actual human doing something specific with your software. And it should be the other way around.

So, why is this a problem?

We are more than what we use, and because we have purpose we are doing more than just using.

Homer is baffled when he can’t find the ‘Any’ key.

The effects of ignoring this simple truth seep in at all levels of language and thought, and ultimately define and constrain what we build. We think of “the user” of software. The user of a piece of software. Who’s unique in that sentence?

In general, we talk about what a person does, we describe their actions, we don’t define the person as a generic, interchangeable element while giving the privilege of specificity to the tool. Except in software.

In software, the designer of a word processor would generally refer to the person using their software as “the word processor user.” (let’s ignore, for the moment, the insanity of calling a tool for writing a “word processor.”)

Someone designing a hammer would not call the intended audience for their product “hammer users.”

They would give them purpose and a name. Carpenter, Sculptor, and so on, and in doing so they would be defining a context in which they intend their tool to be used, a context which would then help make the design better adapted to its intended task.

The designer of a pen would think of a person using the pen, not a “pen user”. A “user” uses the pen to let ink flow from the pen to the paper, permanently marking it in a way that may or may not make any sense. A person who writes has a purpose: a memo, a letter, a book. In that purpose you also discover design constraints. A pen used for calligraphy is different than a pen for everyday use, which is different than a pen used at high altitude, or in space, or for drawing.

Beyond “purpose” there’s something else that a person has that is frequently ignored by software: a life.

Just thinking of “the user experience” is not enough. A lot of software, and certainly the vast majority of dominant software and services, appears to rely on the Donald Trump principle of design: it screams ME, ME, ME. You want to shut down your computer? You can’t: your “word processor” demands that you make a decision about saving or not that file you were editing. Are you highly stressed while driving your car because you’re late to a job interview? Deal with it: your preferred social network wants you to check out the photo of a friend having lunch while on vacation, while you’re also being told that there’s a great offer on coat hangers at a nearby mall, and your calendar is alerting you that you have lunch with a friend soon, even though it’s set for 6 pm and the appointment was mistakenly set for the wrong time, then duplicated for the right time, leaving the old one intact. And who has lunch at 6 pm anyway?

How one would fix this problem is a good question. The answers will be heavily dependent on context. My point is that it’s a question we’re not asking as frequently as we should, if at all. And calling people ‘users’ gives us a pass on that.

Profile photos of friends are frequently used to lend an air of ‘humanity’ or ‘personalization,’ but strip that away and it’s the same content, the same links, the same design, the same choices. Conveniently, it also creates a barrier, and in the process defines a type of human-machine dialogue that puts the weight of decisions and results on “the user” while software absconds, claiming nothing is its responsibility.

We can do better than that.

For a while now I’ve been avoiding the term ‘user’ in everything I write, be it code or prose. It’s hard. And it’s completely worth it.

We are people, not users, and what we build, how we build it, and why we build it should reflect that.

PS: There are other aspects to this that I’ll expand on in future posts.

PPS: One of those topics is, specifically, how the more recent software development practice of creating “user stories” fits into (and affects) all this. Spoiler alert: it’s not enough. :)

software you can’t make fun of

We dream of Minority Report; what we have is The Drudge Report. Many of us would happily embrace the moral hazards of pre-crime enforcement—if we could only have some of the supercool holographic, seamless information technology manipulation we see in the movie.

Art is a powerful lens through which we can examine culture and society. Artistic expression is also a conduit through which we relate to each other and try to make sense of what’s happening around us. When it comes to technology, popular artistic expressions of it are interesting for two main reasons.

First, art reflects reality as widely perceived in a particular age. It acts as a mirror, and while it doesn’t represent objective reality, it can be a valid measure of how we really see ourselves. When we see Homer Simpson struggle to use a computer, we can laugh at it because he is an oaf, but we can also relate to what he is going through, because we experience it everyday.

Second, art can also express our collective fears, desires and aspirations. For example, The Terminator falls squarely in the “fear” category (and what better way to represent fear of technology than through a monomaniacal unstoppable killing machine with a heavy Austrian accent?) while Iron Man, at least in part, is more aspirational, a non-cynical view of Vorsprung durch Technik.

So what do popular representations of information technology in art tell us, not about what the tech industry thinks it is, but how people perceive it, how we relate to it, whether it’s doing what we want or not?

Asking this question leads to some uncomfortable answers. It is striking how art, and in particular popular art —TV, movies, bestselling books— is in almost universal agreement: in their current form, software, hardware, the Internet and information technologies are (apparently) good for humor, but not much else.

The flip-side is that when information technologies have to be used for a “serious” purpose (say, by detectives trying to solve a crime, doctors trying to save a life, spies chasing terrorists), without cynicism but matter-of-factly as part of the world, they rarely if ever look like what actually exists. Not only that, they are qualitatively different than what is available today.

It’s not just in the context of science fiction: whether in contemporaneous or futuristic settings, when information technologies are involved, nothing looks or, more importantly, behaves like what’s available in our everyday reality.

Here’s a short clip with some of my favorite examples:

Think about it beyond the examples in the video. On TV, in books, in movies, and not just for comedy. If you see a “realistic” depiction of the technologies we have, it is almost invariably in the context of humor or, at best, a cynical point of view. This is not a case of confirmation bias. Examples of use of information technology in realistic settings are few and far between, and most of those are for highly specialized areas, like computer experts.

Even in those cases, it’s still notable when it happens. The new USA Network show Mr. Robot (btw, watch this show, it’s awesome!) has actually gotten attention specifically because it stays true to reality.

Consider what you are exposed to everyday while using your computer or the Internet. The challenges involved in actually getting your work done (e.g., struggling to find the right information, reading through nearly incomprehensible email threads) or just everyday communication or entertainment (being bombarded with advertising, posting what you had for dinner, commenting on traffic). Now try to recall examples of media depictions of those activities in which they are just a matter of everyday life and not used as a comic foil, for humor, or acerbic/cynical commentary.

There aren’t many. We are spending significant portions of our lives in front of screens, and yet a straightforward, realistic depiction of this reality seems to automatically become a joke. Even non-fictional accounts of events tend to avoid the subject altogether.

The appearance of other technologies also involved humor, fear, cynicism, but there was also a good share of positive depictions and, more importantly, reflections of reality as reality without turning it into a punchline. Phones, for example, could be used for jokes (perhaps, most effectively, by Seinfeld) but also as an “everyday” communication mechanism. Cars, rockets, biotechnology, advanced medicine, modern industry, media itself, all have been depicted for good, for bad, but also as a staple of life.

It’s valid to consider that, perhaps, there’s something intrinsic in information technology that resists use in artistic representations, but that’s not the case.

In fact, art and popular media are awash in representations of information technologies used for purposes other than humor.

It’s just that those representations are completely removed from reality—drastically divergent from what we actually use today.

Many of these clips are for stories set in the future, but one would be hard pressed to trace a path between, say, the Mac Finder or Internet Explorer and the systems depicted. After all, the main element that separates “futuristic” interfaces from those placed in alternate versions of the present, like in James Bond movies, is the common appearance of 3D holographic interactive displays. Even more: movies like the Iron Man series or TV shows like Agents of S.H.I.E.L.D. all manage to skip the part where it’s all in the future and place those interfaces and interaction mechanisms into the fictional “present” they depict.

Computers that don’t get in the way seem to be more part of the ‘Fiction’ than the ‘Science’ in Science Fiction.

Apparently, to have your computer help you, without interrupting, and without getting in the way, seems like a fantastical notion. To enjoy entertainment without glitches or being flooded with inane commentary seems preposterous.

Literary representations of technology, even in dystopian contexts, also tend to prefer non-linear extrapolation. William Gibson’s description of Cyberspace is a good example:

“Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…”

— William Gibson, Neuromancer

What is significant is that even though these representations of information technology are unrealistic, we still connect with them. They don’t strike us as nonsensical. They just seem far-fetched, particularly as we return to the reality of pages that never load, files that can’t be found, viruses, spam attacks, and a flood of data we can’t digest into anything sensible.

If these depictions represent something worth aspiring to, we need to start moving along that path instead of burrowing deeper into the hole we find ourselves in: we need software you can’t make fun of.

At least, not as easily as it is today. :-)

‘Internet’ redux

Is HTML a protocol or a format? What does “Layer 7” mean? What are RFCs?

What is the difference between the Web and the Internet?

For the vast majority of people, even many people in technology, the answer to all those questions will be “I don’t know.”

And that’s the way it should be. Most of us drive cars, but very few of us could pick apart an engine, much less build one from scratch.

But everyone who owns a car knows the difference between the car and the engine: engines come in different forms, they can be used for vehicles and machines other than cars, and so forth. 

Most people, when told “the car is not the engine,” would probably say “obviously,” while very few if any would be surprised by the statement.

But when dealing with information technologies, the situation is reversed.

The Browser is not the Web. An App is not the Internet. Word isn’t Windows. Google is not your Internet connection. Many people would be surprised, if not confused, by some or all of these statements. Only a minority would think “obviously.”

If you are in that minority, you may be surprised that this is the case, but it is.

Cars and engines are physical; the fact that they are two different things — albeit tightly interconnected, even interdependent to different degrees — is almost axiomatic. We have used, seen, heard of, or studied instances in which engines exist without cars, which is why the concepts are easy to perceive as separate. They are physical constructs, they can be touched, while the fact that they typically break down in nearly binary terms (i.e., the car either starts or doesn’t, the fridge either works or doesn’t) is also useful.

On the other hand, information technologies seem to blend into each other, are intertwined in ways invisible to the layperson. The multitude of layers, software, services, providers, is a key element in the Internet’s strength, but it also makes it more opaque for people who use it. The massive efforts of engineers over the last few decades have resulted in a multitude of recovery and error states in which it’s hard to tell what, exactly, is wrong. When a short video is “buffering” for ten minutes, is it because the Internet is slow? Is it because of a router rebooting somewhere? The cable modem? The PC? The Browser? The Browser’s plugin? Or any of the ten other possible problems we haven’t thought of? And that’s even when you have a reasonable understanding of the myriad of components involved along the path for delivering one particular service.
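A rough sketch of just how many independent layers even a trivial “is it working?” check has to walk through (the host name below is a placeholder, and a real diagnostic would check far more than this):

    import socket

    def diagnose(host="example.com", port=443):
        """Walk a couple of the layers a 'slow video' complaint could involve."""
        try:
            ip = socket.gethostbyname(host)                        # DNS layer
        except socket.gaierror:
            return "DNS lookup failed: resolver? router? ISP?"
        try:
            with socket.create_connection((ip, port), timeout=3):  # transport layer
                pass
        except OSError:
            return "TCP connection failed: firewall? routing? the remote server?"
        return "Network path looks fine; the problem is higher up (app, CDN, plugin...)"

    print(diagnose())

Even this toy version can only shrug and point upward or downward in the stack, which is roughly where most of us end up too.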

Until a few years ago, most people didn’t experience “the Internet” through anything other than a Web browser and their email app. Even now, as everything from your scale to your security system to your gaming console starts to connect to (and often, depend on) the Internet, the exact nature of what’s going on remains elusive. “Reboot the router” has become the new “Reboot the PC.” 

To add to the confusion, hardware itself has become more integrated, more difficult to separate into discrete components.

The browser becomes the operating system, the operating system becomes the display, the display becomes the computer, the computer becomes the Internet.

The Web, Browsers, Email, Operating Systems: not only do they all seem linked to varying degrees that are hard to pull apart; they have also evolved in an apparently seamless progression that has had the unintended effect of blurring the lines between them. You can see web pages in your email, you can check your email inside a browser, and so on.

It’s as if combustion engines existed only as part of, say, four door cars. After decades of only seeing engines in four-door cars, it would be natural to mix them up. Only mechanics would be left to argue that an engine could be used to power other vehicles, or other machinery.

Why does this matter?

This matters because over time it seems hard to imagine that parts of the whole can be repurposed or used for anything but what we use them for. 

With software, it’s far too easy to conflate form and function. And when the software in question involves simple (but not simplistic), fundamental abstractions, it’s easy for those abstractions to swallow everything else.

The Web browser paradigm, for example, has been so spectacularly successful that it seems nothing could exist in its place, particularly when Web browsers, designed primarily to support HTTP and HTML, subsumed Gopher or FTP early on, and entire runtime environments and applications later.

At the dawn of the Web, browsers implemented various protocols, and someone that accessed, for example, an FTP site understood that the browser was (in a sense) “pretending” to be an FTP client. Today, if the average user happens to stumble, via hyperlinks, onto an FTP server, they will be unlikely to recognize the fact that they’re not looking at a website. They probably won’t notice that the URL starts with “ftp://” (most modern browsers hide the protocol portion of URLs, some hide URLs altogether) and if they do, they probably won’t know what it means. They may even think that the site in question is poorly designed.
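The distinction the browser now hides is still right there in the URL itself; a quick sketch:

    from urllib.parse import urlparse

    # The scheme names the protocol being spoken; the content is a separate matter.
    for url in ("http://example.com/index.html",
                "ftp://example.com/pub/file.txt",
                "gopher://example.com/1/"):
        parts = urlparse(url)
        print(parts.scheme, "->", parts.netloc, parts.path)

    # http -> example.com /index.html
    # ftp -> example.com /pub/file.txt
    # gopher -> example.com /1/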

Over time, it has become harder to conceive of anything that doesn’t rely on this paradigm and we try to fit all new concepts into it. One implementation of the idea of hypertext has achieved in a sense a form of inevitability. It seems inescapable that this is what hypertext should be.

But of course it isn’t.

The Web, Email, Calendars, Address Books, Filesystems: they are all in a sense elemental. Their primal nature and scale of deployment make it easy to mix up their current incarnation with their essential form and function.

When thinking of the information we use in various tasks and contexts, it’s nearly impossible to avoid putting information not just in categories but in whatever is the most common way in which we use them. Computing has exposed the true nature of data as an ethereal, infinitely malleable construct, but we think in analogies and physical terms. A document becomes a web page, or a file. A message becomes an email, or a ‘text’. An appointment becomes a calendar entry. The phrase “following a hyperlink” will create in a lot of people the mental image of clicking or tapping on blue underlined text within a Web page, in a Web browser, even if the concept of a hyperlink is not tied uniquely to HTML, or the World Wide Web, underlined text, or, perhaps most obviously, the color blue.

Avoiding these categories is, admittedly, difficult. The software that, in our collective psyche, has come to be associated so strongly with those categories has remained nearly unchanged in its basic behavior and function since it was first developed, decades ago, which makes it seem even more like an unchangeable feature of our information landscape, rather than the very first step in a long evolutionary process.

If we are ever going to advance beyond the initial paradigms and metaphors of data rooted in the physical world, we need to reclaim some of these terms.

The future of the Internet should not be tied down to the idea of the Web. A lot of it will happen through interfaces other than a web browser. New protocols could supplement old ones, new formats will coexist with those that dominate.

We just have to keep in mind that none of it is fixed, and constantly question the assumptions behind it, looking for a better way.

That’s why I often use the term “Internet” to describe not just one aspect of it but the whole that comprises it. Protocols, servers, routers, your home computer or tablet, a Web page, all of these are parts of the Internet Experience. Using “Internet” in this way is in strict terms inaccurate, but it is also necessary.

It’s a way to escape the confines of terms that are overloaded and overused to such a degree that they restrict our imagination.

Because, maybe, by embracing the notion that the Internet is “everything” we can, hopefully, rediscover the fundamental truth that the Internet can be anything.

Data is more than just bits.

Information is more than Data.

The Internet is more than the Web.

maybe because both words end with “y”

In an apparent confusion between the word “utility” and the word “monopoly,” the Wall Street Journal runs an opinion piece today called “The Department of the Internet” that has to be one of the most disingenuous (and incoherent) efforts to attack Net Neutrality I’ve seen in recent times. The author, currently a hedge fund manager and previously at Bell Labs/AT&T, basically explains all of the ways in which AT&T slowed down innovation, either by omission, errors of judgment, or willful blocking of disruptive technologies.

All of them because, presumably, AT&T was classified as a “utility.” I say “presumably” because at no point does the piece establish a clear causal link between AT&T’s service being a utility and the corporate behavior he describes.

Thing is, AT&T behaved like that primarily because it was a monopoly.

And how do we know that it was its monopoly power that was the primary factor? Because phone companies never really stopped being regulated in the same way — and yet competition increased after the breakup of AT&T. In fact, you could argue that regulation on the phone system as a whole increased as a result of the breakup.

Additionally, it was regulation that forced companies to share resources they otherwise never would have shared. In fact the example of “competition” in the piece is exactly an example of government intervention similar to what Net Neutrality would do:

“The beauty of competition is that you get network neutrality for free. AT&T cut long-distance rates in the 1980s when MCI and Sprint started competing fiercely.”

Had the government not intervened on multiple occasions (whether in the form of legislation, the Courts, or the FCC, and most dramatically with the breakup), AT&T would never have allowed third parties to sell long distance to their customers, much less at lower rates than them.

There’s more than one fallacy in the piece on how “utilities are bad”:

A boss at Bell Labs in those days explained what he called the Big Lie, using water utilities as an example. Delivering water involves mostly fixed costs. So every decade or so, water companies engineer a shortage. Less water over the same infrastructure meant that they needed to raise rates per gallon to generate returns. When the shortage ends, they spend the extra money coming in on fancy facilities, thus locking in the higher rates for another decade.

So — someone, decades ago, gave an example of the corruption of water companies to the author, and regardless of whether this “example” is true or not, real, embellished or a complete fabrication, and regardless of whether the situation is, I don’t know, maybe a little different half a century later and dealing with bits and not water molecules, it’s apparently something good to throw out there anyway. (In fact, I struggle to see exactly what AT&T could do that would be analogous to the abuse he’s describing).

Again, this is presumed, since no causal link is established showing that, if true, the described ‘bad behavior’ is conclusively the result of something being a utility rather than, well, any other reason, like corruption, incompetence, or just greed.

To close — I’ve seen that a number of people/organizations (many but not all of them conservatives) are opposed to Net Neutrality. My understanding is that this is because of fear of over-regulation. Fair enough. Have any of them thought about how it would affect them? Perhaps it’s only when it’s implemented that they will realize that their readers/customers, by an overwhelming majority, have little choice of ISPs. Very few markets have more than two choices, and almost no markets have competitive choices (i.e., choices that are at equivalent levels of speed or service).

But I’m sure that the Wall Street Journal, or Drudge, or whoever will be happy to pay an extra fee to every IP carrier out there so their pages and videos load fast enough and they don’t lose readers.


the importance of Interstellar

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

— Dylan Thomas (1951)

Over the last few years a lot of movies -among other things- seem to have shrunk in ambition while appearing to be “bigger.” The Transformers series of movies are perhaps the best example. Best way to turn off your brain while watching fights of giant robots and cool explosions? Sure. But while mega-budget blockbusters focus on size, many of them lack ambition and scope. Art, entertainment, and movies in particular, given their reach, matter a lot in terms of what they reflect of us and what they can inspire. For all their grandiose intergalactic-battle-of-the-ages mumbo jumbo, Transformers and other similar movies always feel small, and petty. Humans in them are relegated to bit actors that appear to be props necessary for the real heroes (in this case, giant alien robots) to gain, or regain, inspiration and do what they must do. And always, always by chance. Random people turn into key characters in world-changing events just because they stumbled into the wrong, or right, (plot)hole.

Now, people turned into “the instruments of fate (or whatever),” if you will, is certainly a worthwhile theme and something that does happen. But stories in which the protagonists (and people in general) take the reins and attempt to influence large-scale events through hard work, focus, cooperation, even -gasp!- study, became less common for a while. Art reflects the preoccupations and aspirations of society, and it seems that by the mid-to-late 2000s we had become reliant on the idea of the world as reality TV – success is random and based on freakish circumstances, or, just as often, on being a freak of some sort. This isn’t a phenomenon isolated to science fiction — westerns, for example, declined in popularity but also turned “gritty” or “realistic” and in the process, for the most part, traded stories of the ‘purity of the pioneering spirit’ or ‘taming the frontier’ for cesspools of dirt, crime, betrayal and despair.

Given the reality of much of the 20th century, it was probably inevitable that a lot of art (popular or not) would go from a rosy, unrealistically happy and/or heroic view of the past, present, and future, to a depressing, excessively pessimistic view of them. Many of the most popular heroes in our recent collective imaginations are ‘born’ (by lineage, by chance, etc) rather than ‘made’ by their own efforts or even the concerted efforts of a group. Consider: Harry Potter, the human characters in Transformers (and pretty much any Michael Bay movie since Armageddon), even more obviously commercial efforts like Percy Jackson or Twilight along with other ‘young adult’ fiction and pretty much all other vampire movies, which have the distinction of creating ‘heroes’ simultaneously randomly and through bloodlines; the remake of Star Trek turned Kirk joining Starfleet into something he didn’t really want to do; the characters in The Walking Dead; the grand-daddy of all of these: Superman… and, even, as much as I enjoy The Lord of The Rings, nearly everything about its view of good and evil involves little in the way of will and intent from the main characters. Characters talk a great deal about the importance of individuals and their actions, but in the end they’re all destined to do what they do and the key turning points are best explained as either ‘fate’, simply random, or manipulated by people of ‘greater wisdom and/or power’ like Gandalf, Galadriel, Elrond and so on. Good and evil are defined along the lines of a eugenics pamphlet in a way that gets to be creepy more often than not (the ‘best’ are fair-skinned, with blue or green eyes, and from the West, the ‘worst’ are dark-skinned, speak in hellish tongues and are from the East, along with an unhealthy obsession with bloodlines and purity of blood, and so on; Gandalf “progresses” from Gray to White, while Saruman falls from being the leader as Saruman the White into shrunken evil serving Sauron, the Dark Lord… as “Saruman of Many Colours”… you get the idea).

All of which is to say: I don’t think it’s a coincidence that in this environment good Science Fiction in general, and space exploration SF in particular, is always relegated a bit, particularly in movies. There is nothing random about space exploration: it requires an enormous amount of planning, study, effort, hard work, and money. You can’t inherit a good space program. It has to be painstakingly built, and supported, across decades. When a not-insignificant percentage of society flatly discards basic scientific theories in favor of religious or political dogma while giving an audience to Honey Boo Boo or Duck Dynasty, it’s not illogical for studios to finance another animated movie with talking animals rather than to push people beyond their comfort zones.

Even so, there’s always been good SF, if perhaps not as frequently as SF fans would like. And over the last 20 years we have started to see Fantasy/SF stories that combine a more “realistic” view of the world, but mixed in with the more idealistic spirit of movies like The Right Stuff. In these we have characters succeeding, or at least ‘fighting the good fight’, through exertion of will, the resolve to change their reality. And even if there’s an element of ‘fate’ or chance in the setup, the bulk of the story involves characters that aren’t just pushed around by forces beyond their control. Nolan’s Dark Knight trilogy, Avatar, Serenity, most of Marvel’s new movies: Iron Man, Captain America, The Avengers. Watchmen. In books, the Already Dead series and the Coyote series, both of which could make for spectacularly good movies if ever produced. In TV, Deadwood, which is perhaps the best TV series of all time, was a good example of the same phenomenon — it felt realistic, but realistically complex, with characters that weren’t just swept up in events, and that exhibited more than one guiding principle or idea. We got ‘smaller’ movies like Moon that were excellent, but large-scale storytelling involving spaceflight that wasn’t another iteration of a horror/monster/action movie is something I’ve missed in the last few years.

What about last year’s Gravity? It was visually arresting and technically proficient but fairly mundane in terms of what actually happens. It’s not really inspiring — it’s basically the story of someone wrecking their car in the middle of the desert and having to make it to the next gas station… but in space, with a focus on experiencing a spiritual rebirth, and in case we were confused about the metaphor we see the main character literally crawl out of mud and water and then slowly stand and start to walk. Bullock’s character in Gravity is also one of those guided by circumstances, frequently displaying a lack of knowledge about spaceflight that even the original monkeys that flew in the early space missions would have slapped their foreheads about.

Which brings me to Interstellar. No doubt it will be compared to 2001: A Space Odyssey (with reason) and to Gravity (with less reason). Interstellar is more ambitious than 2001 in terms of science, matching it or exceeding it in terms of story scope and complexity, while leaving Gravity in the dust. It shares some themes and some of the serious approach to both science and fiction with 2007’s Sunshine (… at least for the first 30 minutes or so, afterwards Sunshine shares more with Alien), as well as with the (in my opinion) under-appreciated Red Planet (2000) and even some elements of the much less convincing Mission to Mars. It also reminded me of Primer in terms of how it seamlessly wove pretty complex ideas into its plot.

We haven’t had a “hard” SF space movie like this for a while. Key plot points involve gravitational time-dilation, wormholes, black holes, quantum mechanics/relativity discrepancies… even a 3D representation of a spacetime tesseract (!!!!). 2001 was perfect about the mechanics of space flight, but Interstellar also gets as deep into grand-unified theory issues as you can probably get without losing a lot of the audience, and goes much further than 1997’s Contact. There are some plot points that are weak (or, possibly, that I may have missed an explanation for, I’ll need another viewing to confirm…), and sometimes there are moments that feel a bit slow or excessively, shall we say, ‘philosophical’, although in retrospect the pauses in action were effective in making what followed even more significant.
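(For the curious: the gravitational time dilation the plot hinges on is textbook general relativity. For a clock hovering at distance r from a non-rotating mass M, proper time runs slow relative to a distant observer roughly as follows; the movie dramatizes the numbers, of course.)

    % Gravitational time dilation for a static observer near a Schwarzschild mass:
    % d\tau is the proper time near the mass, dt the time measured far away.
    \[
      \frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}}
    \]
    % As r approaches the Schwarzschild radius 2GM/c^2 the ratio goes to zero:
    % hours near the black hole correspond to years back home.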

Comparisons and minor quibbles aside, Interstellar is spectacular; the kind of movie you should, nay, must watch in a theater, the bigger screen the better, preferably on IMAX.

The movie not only has a point of view,  it is unapologetic about it. It doesn’t try to be “balanced,” and it doesn’t try to mix in religion even as it touches on subjects in which it frequently is mixed in the name of making “all points of view heard.” Interstellar is not “anti religion” … and it is not pro-religion either. There’s a fundamental set of circumstances in the plot that allows the movie to sidestep pretty much all of the usual politics and religion that would normally be involved. Perhaps someone can argue whether those circumstances are realistic (although something like the Manhattan project comes to mind as an example of how it can actually happen). But the result is that the movie can focus almost exclusively on science, exploration, our ability to change things, either individually or in groups.

This, to me, felt truly refreshing. Everything that has to do with science these days is mixed in with politics and/or religion. This also helps the story in its refusal to “dumb things down”…  its embrace of complexity of ideas, even if less focused on a lot of specific technical details than, say, Apollo 13 was, which is a natural result of having the Apollo data at hand.

How many people, I wonder, know by now what NASA’s Apollo program really was? Sometimes it seems to be relegated to either conspiracy joke material or mentioned in passing to, for example, explain how your phone is more powerful than the computers that went to the moon. Somehow what was actually attempted, and what was actually achieved, isn’t remarkable anymore, and the true effort it took is less appreciated as a result. With that, we are making those things smaller, which gives us leeway to do, to be less. It makes “raging against the dying of the light” sound like a hopelessly romantic, useless notion. It justifies how approaching big challenges these days frequently happens in ways that make us “involved” in the same way that Farmville relates to actual farming. Want to feel like you’ve solved world hunger? Donate $1 via text to Oxfam. Want to “promote awareness of ALS”? Just dump a bucket of ice water on your head. Want to “contribute in the fight against cancer”? Add a $3 donation while checking out of the supermarket. No need to get into medicine or study for a decade. Just bump your NFC-enabled phone against this gizmo and give us some money, we’ll do the rest.

I’m not saying that there is no place for those things, but recently it seems that’s the default. Why? Many commentators have talked about how these days we lack an attitude best described by Kennedy’s famous line “Ask not what your country can do for you, ask what you can do for your country.” But I don’t think the issue is not wanting to do anything, or not wanting to help. I think the issue is that we have gotten used to being scared and feeling powerless in the face of complexity. We’ve gone from the 60’s attitude of everyone being able to change the world to feeling as if we’re completely at the mercy of forces beyond our control. And we’ve gone overboard about whatever we think we can control: people freaking out about the use of child seats in cars, or worrying about wearing helmets when biking, while simultaneously doing little as societies about the far greater threat of climate change.

When education was a privilege of very few, very rich people, it was possible for pretty much everyone to accept a simplistic version of reality. That was before affordable mass travel, before realtime communications, before two devastating world wars and any number of “smaller” ones. Reality has been exposed for the truly messy, complicated thing it is and always was. But instead of embracing it we have been redefining reality downwards, hiding our collective heads in the sand, telling ourselves that small is big. Even heroism is redefined — everyone’s a hero now.

Interstellar is important not just as a great science fiction movie, not just because it is inspiring when it’s so much easier to be cynical about the past, the present or the future, but also because beyond what it says there’s also how it says it, with a conviction and clarity that is rare for this kind of production. It’s not a coincidence that it references those Dylan Thomas verses more than once. It’s an idealistic movie, and in a sense fundamentally optimistic, although perhaps not necessarily as optimistic about outcomes as it is about opportunities.

It’s about rekindling the idea that we can think big. A reminder of what we can attempt, and sometimes achieve. And, crucially, that at a time when we demand predictability out of everything, probably because it helps us feel somehow ‘in control’, it is also a reminder in more ways than one that great achievement, like discovery, has no roadmap.

Because if you always know where you’re going and how you’re getting there, you may be ‘safe’, but it’s unlikely you’ll end up anywhere new.

here’s when you get a sense that the universe is telling you something

In the same Amazon package you get:

    The latest Thomas Pynchon novel.
    The World War Z Blu-ray.

Telling you what exactly…. well, that is less clear.

what a startup feels like (sometimes)

That is all.

the apple developer center downpocalypse


We’re now into day three of the Apple Developer Center being down. This is one of those instances in which Apple’s tendency to “let products speak for themselves,” an approach that ordinarily has a lot going for it, can be counterproductive. In three days we’ve gone from “Downtime, I wonder what they’ll upgrade,” to “Still down, I wonder what’s going on?” to “Still down, something bad is definitely going on.”

Which, btw, is the most likely scenario at this point. If you’ve ever been involved in 24/7 website operations you can picture what life must have been like since Thursday for dozens, maybe hundreds of people at Apple: no sleep, constant calls, writing updates to be passed along the chain, increasingly urgent requests from management wanting to know, exactly, how whatever got screwed up got screwed up, and that competing with the much more immediate problem of actually solving the issue.

And a few people in particular, likely less than a dozen, are under particular pressure. I’m not talking about management (although they have pressure of their own) but the few sysadmins, devops, architects and engineers that are at the center of whatever team is responsible for solving the problem, which undoubtedly was also in charge of the actual maintenance that led to the outage in the first place, so the pressure is multiplied.

Even for global operations at massive scale, this is what it usually comes down to — a few people. They’re on the front lines, and hopefully they know that some of us appreciate their efforts and that of the teams working non-stop to solve the problem. I know I do.

The significance of the dev center is hard to see for non-developers, but it’s real and this incident will likely have ripple effects beyond the point of resolution. Days without being able to upload device IDs, or create development profiles. Schedules gone awry. Releases delayed. People will re-evaluate their own contingency plans and maybe question their app store strategy. Thousands of developers are being affected, and ultimately, this will affect Apple’s bottom line.

And that’s why this situation is not the kind of thing that you’ll let go on for this long unless there was a very, very good reason (only a couple of days from reporting quarterly results, no less). Maybe critical data was lost and they’re trying to rebuild it (what if everyone’s App IDs just went up in smoke?). Maybe it was a security breach (what if the root certs were compromised?). The likelihood that there will be consequences for developers, as opposed to just a return to the status quo, goes up with every hour that this continues. As Marco said: “[…]  if you’re an iOS or Mac App Store developer, I’d suggest leaving some free time in the schedule this week until we know what happened to the Developer Center.”

In fact, it could be that at least part of the delay has to do with coming up with procedures and documentation, if not a full-on PR strategy. Apple hasn’t traditionally behaved this way, but Tim Cook has managed things very differently than Steve Jobs in this regard.

Finally, I’ve been somewhat surprised by the lack of actual reporting on this. One day, maybe two days… but three? Nothing much aside from minor posts on a few websites, and not even much on the Apple-dedicated sites. This is where real reporting is necessary. Having sources that can speak to you about what’s going on. Part of the problem is that the eventual impact of this will be subtle, and modern media doesn’t do subtle very well. It’s less about the immediate impact or people out of a job than about a potential gap in future app releases. A whole industry is in fact dependent on what goes on with that little-known service, and with iOS 7/Mavericks being under NDA, Apple’s developer forums, which are also down, are the only place where you can discuss problems and file bug reports. Some developer, somewhere, is no doubt blocked from being able to do any work at all. 

Apple should, perhaps against its own instincts, try their best to explain what happened and how they’ve dealt with it. Otherwise, the feeling that this will just happen again will be hard to shake off for a lot of people. For Apple, this could be an opportunity to engage with their developer community more directly. Here’s hoping.

diego’s life lessons, part III

Excerpted from the upcoming book: “Diego’s life lessons: 99 tips for survival, fun, and profit in today’s baffling bric-a-brac world.” (see Part I and Part II).

#9 make the right career choices

Everyone will have seven careers in their lifetime, someone said once, and we all repeated it even if we have no idea why.

The key to career planning, though, is to keep in mind that while the world of today ranges from complicated to downright baffling, the world of tomorrow will be pretty predictable, since as we all know it will just be a barren hellscape populated by Zombies.

So the question is: post-Zombie Apocalypse, what will you need to be? Survival in the new Zombie-infested world will require the skills of any good D&D party: a Healer, a Warrior, a Thief, and a Wizard — which in a world without magic means someone to tinker with things, build weapons, design shelters with complicated spring traps, and knowledge of how to brew a good cup of coffee.

Clearly you don’t want to be a Healer (read: medic/doctor), since that means no one will be able to fix you — you should have friends or relatives with careers in medicine, however, for obvious reasons. Being a Thief will be of limited use, but more importantly it’s not really the kind of thing you can practice for without turning to a life of crime as defined by our pre-Zombie civilization (post-Zombies, most of the things we consider crimes today will become fairly acceptable somehow, so you may be able to pull this off with the right timing).

That leaves you with either Warrior or Wizard, which translates roughly to: Gun Nut or Hacker. And by “Hacker” we mean the early-1980s definition of hacker, rather than the bastardized 2000s version, and one that is not restricted to computers.

So. Your choices for a new career path are as follows:

  • If you’re a Nerd, become a Hacker.
  • If you’re neither a Nerd nor a Hacker, just become a Gun Nut, it’s the easiest and fastest way to post-apocalyptic survival. This way, while you wait for Zombies to strike you won’t need to worry (for example) about a lookup being O(N) or not, or why the CPU on some random server is pegged at 99% without any incoming requests.
  • If you’re already a Gun Nut, you’re good to go. Just keep buying ammo.
  • If you’re already a Hacker… please don’t turn into an evil genius and destroy the world. Try taking up some activity that will consume your time for no reason, like playing The Elder Scrolls V: Skyrim or learning to program for Blackberry.

NOTE (I): If you’re in the medical profession, just stay put. We will protect you so you can fix our sprained ankles and such.
NOTE (II): there is also the rare combination of Hacker/Nerd+Gun Nut, but you should be aware that this is a highly volatile combination of skills which can have unpredictable results on your psyche.

#45: purchase a small island in the Pacific Ocean

As far as having a permanent vacation spot, this one really is a no-brainer. Why bother with hotels when you can own a piece of slowly sinking real estate? Plus, according to highly reliable sources, you don’t need to be a billionaire.

True, you will have significant coconut-maintenance fees and you’ll probably need a small fleet of Roombas to keep the place tidy, but coconuts are delicious and the Roombas can help in following lesson #18.

NOTE I: don’t be fooled by the “Pacific” part of “Pacific Ocean.” There’s nothing “pacific” about it. There’s storms, cyclones, tsunamis, giant garbage monsters, sharks, jellyfish, and any number of other dangers. Therefore, an important followup to purchase the island is to buy an airline for it. You know, to be able to get away quickly, just in case.

NOTE II: this is actually an alternative to the career choices described above, since it is well known that Zombies can’t swim.

NOTE III: the island should not be named Krakatoa — see lesson #1. Aside from this detail, owning a Pacific Island does not directly conflict with lesson #1, since the cupboard can be actually located in a hut somewhere in the island (multiple cupboard hiding spots are also advisable).

#86 Stock up on Kryptonite

Ok, so let me tell you about this guy… He wears a cape and tights. He frequently disrobes in public places. He makes a living writing for a newspaper with an owner that makes Rupert Murdoch look like Edward R. Murrow. He has deep psychological scars since he is the last survivor of a cataclysmic event that destroyed his civilization. He leads a secret double life, generally disappearing whenever something terrible happens. He is an illegal alien. Also, he is an ALIEN.

Does this look like someone trustworthy to you? Hm?

That’s right. This is not a stable person.

Add to the list that he can fly, even in space, stop bullets, has X-ray vision, can (possibly) travel back in time and is essentially indestructible. How is this guy not a threat to all of humanity?

Lex Luthor was deeply misunderstood — he could see all this, but his messaging was way off. Plus there were all those schemes to Take Over The World, which should really be left to experts like genetically engineered mice.

The only solution to this menace is to keep your own personal stash of Kryptonite. Keep most of it in a cupboard (see lesson #1) and a small amount on your person at all times.

After all, you never know when this madman will show up.

