diego's weblog

there and back again

Category Archives: Uncategorized


The worst things in life inevitably blindside you.

Inevitably. I’d even say “by definition” but I can’t quite bring myself to do it since any kind of close analysis reveals it as one of those things that sounds good but is actually bleh when you think about it for more than two seconds. Yeah, you read that right: bleh. You know what I mean.

The best things don’t. The best moments are invariably the pinnacle of a metaphorical mountain. The best things in life build up, requiring an enormous amount of effort and care, and when they’re done sometimes it’s easy to forget that they’re good because of that and that’s something we also fuck up frequently around here, but that’s not what I’m talking about today.

What I’m talking about is the surprise. Yes, you will be surprised, guaranteed.

You know why? Because if you have a shred of survival instinct and you see something terrible coming, you move out of the way. We do this constantly, without even thinking. We are continuously patching stuff up so it doesn’t blow up in our face. We fix the leaks. We prop up the structure so it doesn’t come crashing down on us right this second, because right this second I have to finish a project and then later I’ve got to shop for some groceries, and honey would you please pick up my prescriptions while you’re at the store?

If you see it coming, you move out of the way. Which leaves you with the things you don’t see coming.

Blindsided, always. Sometimes literally, as in truck-unexpectedly-crashing-into-the-side-of-your-car blindsided, sometimes not.

I’d even say that the worst of the worst things are also the ones that look insignificant at first glance. The literal truck in the previous paragraph, for example, is pretty bad, but it’s also something that will have a fairly straightforward resolution — assume a happy ending and let’s say it’s just a few stitches, a visit to the shop, and a lot of haggling with the insurance company. You didn’t see it coming, but you can see it going, so to speak. The path out of the disaster zone is clear. The advantage of major catastrophes is that we can see so clearly how severely fucked we are that we’re forced to do something about it. We simply have no choice.

Those moments, though, when something happens or when you are told something truly awful that hits you as if you were struck physically, moments that feel like something breaks inside you. A fracture in your soul. And, in many cases, like a fracture, it can be ignored for a while, disregarded: “it’s bad, but not that bad really, right?”


I’ve come to realize that defining that moment is important because the only way to move forward from any situation, particularly a bad one, is to make some things end.

And the only way for something to end is for it to have begun. Which means you have to identify the beginning.

There was a phone call, early evening I think. I was in the kitchen. That for some reason I remember vividly. Phone rings, check caller ID: wow, there’s someone I haven’t talked to… or even thought about… in quite a while…

A minute later the call was over and I didn’t know what to do. I couldn’t go to the hospital. Couldn’t. Many reasons, all of them probably bad, is what I’m sure I’d think now if I remembered any of them. Many reasons, many. I was probably in shock, whatever that is, because I just put the phone down on the kitchen counter and I just went about my business, continuing along the path of mundane activity that I was already set on for that Friday night.

I’ve spent some time trying to decide whether it was that one moment or something earlier that was truly the first rock of the avalanche. There’s some pretty ugly stuff from about 14 months prior that nearly broke me, but I managed to pull out, somehow. I spent a little more than a year on a fairly positive trajectory of some sort, flying uncertain, but gaining speed, and then when this happened it was like that freeze frame in movies when they zoom into something, cut to total silence, you hear a metallic sound or a crack, for a second nothing happens, and then… BOOOOOOOOOOM!

So: the phone call is what I’ve settled on.


I didn’t know it then, but it was at that moment that the edifice of my life had begun to collapse in slow motion. Nothing felt right. Like one of those warning signs you read or hear about in movies… earthquakes, typhoons, hurricanes. Warning signs that go beyond the rational or the known. The air around you feels heavy, charged. Something is coming.

That moment.

Tiny rocks, rolling down the hill.

A tocsin. (not a typo).

It would take a little over a year from that point until it would all finally crash on top of my head — and quite spectacularly, I might add. I’m someone who doesn’t trust easily, but during these last few years I’ve made an effort, and every time I’ve reached out the universe has lashed back at me viciously. The Rach 3 is easier than this.

Had I played a part? Self-fulfilling prophecies and all that? Maybe, but definitely not all the time. I’ve kept trying. So far every time I give it a shot, it’s turned out badly. Whatever mistakes I’ve made, I’ve paid for them, and then some. I’ve been crawling out of the rubble in darkness for a while now.

I’m stepping out now, but I can look back at the ruins that will soon lose sight of me because I know when it began.

1,263 days ago.

It ends today.

islanded in the datastream: the limits of a paradigm

Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom.

— Clifford Stoll

“I just know I’ve seen it before.”

A simple statement about a straightforward problem, and perhaps the best embodiment of the irony that computers can, in theory, remember everything and would allow us to access any information within our reach — if only we could define precisely enough what we’re looking for.

It’s something that happens to everyone, in every context: at work, at home, at school. We could be trying to remember a reference to use in a document, or looking for a videoclip to show to a friend, or for a photo that (we think) we took during a trip years ago, an email from two weeks ago that has the information to get to our next meeting, or trying to find an article that we read last month and that we’d like to send to someone.

Google can search for something we’ve never seen out of billions of documents that match a keyword, but we can’t easily find a page we read last week. We create documents and then lose track of them. Our hard drives contain a sea of duplicates, particularly photos, images, and videos. Our email, messaging apps, digital calendars, media libraries, browser histories, file systems, all contain implicit and explicit information about our habits, the people we’ve met, places we’ve been, things we’ve done, but it doesn’t seem to help. Often, our digital world seems to be at the whim of an incompetent version of Maxwell’s Demon, actively turning much of our data into a chaotic soup of ever-increasing entropy that makes it increasingly harder to extract the right information at the right time.

We now have countless ways of instantly sharing and communicating. We consume, experience, create and publish in shorter bursts, mainlining tweets, photos, videos, status messages, whole activity feeds.

In the last ten years the rise of relatively inexpensive, fast, pervasive Internet access, wireless infrastructure, and flexible new approaches to server-side computing has had an enormous effect in many areas, and has contributed to the creation of what are arguably entirely new realities in a shift of a breadth and depth with few, if any, historical precedents.

But not everything is evolving at the same speed. In today’s seemingly limitless ocean of ideas and innovation, a significant portion of the software we use every day remains stuck in time, in some cases with core function and behavior that has remained literally unchanged for decades.

The software and services that we use to access and interact with our information universe have become optimized around telling us what’s happening now, while those that help us understand and leverage the present and the past, in the form of what we perceive and know, and the future, in the form of learning and making new connections, have become stale and strained under ever-growing data volumes.

Intranets hold data that we need and can’t find, while the Internet keeps pushing us to look at content we may not need but can’t seem to escape. Consumption of media has become nearly frictionless for some types of activities, like entertainment or brief social interactions, while becoming increasingly more difficult for others, like work or study activities that involve referencing and recall. Search engines have opened up access to information of all kinds and reduced the barriers to dissemination for new content as soon as it becomes available, while our ability to navigate through the data closest to us, in some cases data we ourselves produce, has in effect decreased.

Meanwhile, the proliferation of computing devices of all types and sizes, of sensors, and of services that provide ambient information gives us a solid layer of new capabilities and raw data that we can rely on, but one that is rarely used extensively or beyond the obvious.

The limits of a paradigm

This isn’t about just interfaces, or publishing standards, or protocols. It can’t be fixed by a better bookmarking service or a better search engine, or nicer icons on your filesystem browser app. The interfaces, the protocols they use, the storage and processing models in clients and middleware are all tied, interdependent, or rather codependent.

What is happening is that we have reached the limits of paper-centric paradigms: a model of the digital world built upon organizational constructs and abstractions based on the physical world, and thus made up of hierarchies of data that can be understood as sequences of pages of printed paper. Hierarchies in which people (us, whom technology should be helping) are at the very bottom of the stack, and what we can see and do is constrained by what’s higher up in whatever hierarchy is at play at the moment.

These hierarchies are everywhere: filesystems, DNS, websites, email folders, music players, URL structures, contact lists. They are embedded in fundamental interface metaphors, in how we structure content — even present in the unseen depths of the Internet’s routing systems.
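The constraint these hierarchies impose is easy to see in code. As a toy sketch (all names here are hypothetical, not any real system’s API), compare a strict path hierarchy, where each item has exactly one location, with a tag-based index, where the same item can surface under many overlapping views:

```python
# Toy illustration: a strict hierarchy forces one "location" per item,
# while a set of tags lets the same item surface under many views.

# Hierarchical: each photo lives at exactly one path. To find it,
# you must remember the one path the hierarchy assigned to it.
hierarchy = {
    "photos/2012/italy/beach.jpg": b"...",
}

# Tag-based: the same photo is reachable from any combination of
# attributes, in any order.
tags = {
    "beach.jpg": {"2012", "italy", "beach", "family"},
}

def find_by_tags(index, wanted):
    """Return the items that carry every requested tag."""
    return [name for name, item_tags in index.items() if wanted <= item_tags]

print(find_by_tags(tags, {"italy", "beach"}))  # ['beach.jpg']
```

Nothing deep is happening here; the point is only that the hierarchical version bakes one fixed organizational choice into the address of the data, while the tag version keeps the organization separate from the data itself.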

“Paper-centric interfaces” are still pervasive. A significant percentage of modern software could be printed and not only retain most of its information, but also a lot of its “functionality.” For example, what most people do with their iPad calendar could be done just as well using a piece of paper with a calendar drawn on it. And most of these things have not only remained unchanged for years, they are in many cases just plain electronic translations of paper-based systems (Cc, Bcc, From, To all existed as part of corporate memo flows well before email was invented).

The hierarchies embedded in our information systems explain why China has the ability to control (if not in detail) what its citizens can and can’t see on the Internet.

Within these hierarchies, some services have carved out new spaces that show what’s possible when the digital starts to be unleashed from the physical. Wikipedia, Facebook and Twitter are perhaps some of the best examples. Not surprisingly, they’re all fully or partially banned in China, while their “local” copies/competitors can be controlled. But they are universes unto themselves.

There are many other consequences to our over-reliance on hierarchical, centralized architectures. Sites and services go down, and millions of people are affected. Currently, we are on a trajectory in which people increasingly have less, rather than more control over what they see. Privacy is easier to invade, whether by government actors in legitimate—although often overly broad—security sweeps, or by criminals. A single corporate network breach can expose millions of people to fraud or theft.

Large-scale services have to resort to incredibly complex caching systems and massive infrastructure to support their services: the largest Internet cloud services have more than one server for every 1,000 people online.

A lot of this infrastructure seems structurally incapable of change, but it only seems that way. Many companies that run large-scale web services already operate internal systems that no longer resemble anything remotely recognizable as what existed when the protocols they support first came into widespread use. They’re spaceship engines being forced into Ford Model-T frames, twisted into knots to “speak” in a form and at a pace that decades-old conceptual frameworks can process.

Clients, middleware, protocols, what’s visible, and what’s in our pockets and our homes: all of it is stuck in time (examples and discussion on this in future posts). Getting it unstuck will require making changes at multiple levels, but it’s perfectly doable. The digital world is infinitely malleable, and so should be our view of it.

new kindle – a quick review, and other thoughts

You may be thinking: New Kindle? The new tablet?

No, not that one. That’s out in November.

The Kindle with the “Paperwhite” display?

No, not that one either. That one comes out next week, but my order won’t ship until October.

This is perhaps the biggest problem I see with the new Kindle(s) Amazon announced last week: confusion. But more on that in a second. My answer to “should I get it?” also follows.

The new Kindle I’m referring to is the keyboard-less version of what Amazon now calls “Kindle e-Readers.” It’s the cheapest Kindle but it’s also the best suited, in my opinion, for long-form reading.

The best ebook reader

This new Kindle is a bit lighter than the old one, jet-black replacing the dark but dull gray of yesteryear. It is a bit faster. To my eyes, the screen looks a bit better. In other words, it screams INCREMENTAL IMPROVEMENT all around.

That’s fine with me, really. For a sub-$100 device that does exactly what it’s supposed to do and that remains, thankfully, focused primarily on a single function –reading–, this isn’t that far from the perfect device. The perfect device would be lighter but probably not much thinner, since if it were thinner it might get in the way of holding it properly.

This model is the primary device on which I read long-form writing. The news-reading king is the iPad, but for books the Kindle wins hands-down. It can be used under any light conditions, it’s lighter, and if the iPad’s battery lasts (seemingly) for an infinite amount of time, this Kindle’s battery is infinity plus one.

If you do a lot of long-form reading, this is the device to get, no question.


Versions, versions, versions

That’s all well and good, but what about all the others? As I mentioned above, this is where the problem starts.

This is the list of the Kindles Amazon is selling at the moment. It’s important to note that these are the names that Amazon uses:

  • Kindle. Keyboardless e-reader with eInk display. WiFi only.
  • Kindle Paperwhite. Keyboardless touch-sensitive e-Reader with a backlit eInk display. WiFi only; replaces the 2011 Kindle Touch (which was, to be succinct, terrible, since it was heavier and bigger, and its touch responsiveness was all over the place).
  • Kindle Paperwhite 3G. Same as Kindle Paperwhite, but with global 3G connectivity.
  • Kindle Keyboard 3G. This is the 2010 Kindle, the only one of the current line to include a physical keyboard.
  • Kindle Fire. The original Kindle Fire released last year. “You mean, the one that is bulky, slow, heavy, etc?” Yeah, that one.
  • Kindle Fire HD. The new Kindle Fire with a higher resolution screen, faster processor, more storage, and MIMO (better WiFi, although that depends on your router).
  • Kindle Fire HD 8.9″. Same as the Kindle Fire HD but with a bigger screen (still smaller than the iPad’s 9.7″).
  • Kindle Fire HD 8.9″ 4G LTE Wireless. Same as the Kindle Fire HD 8.9″ but with 4G LTE (which requires a yearly subscription of $50 for a 250 MB monthly data plan).

You may be tempted to think that Amazon just decided to split Kindle versions by feature or options, but that’s not the case. Every one of those models has options to choose from: ad-supported (what Amazon calls “With Special Offers”) or not for the Kindles, which lowers their prices, and different amounts of memory for the Kindle Fire models.

For any buyer –not just the “average” buyer that tends to appear, unicorn-like, in many reviews– this is incredibly confusing. Each of these products has its own product page, each touting the device’s advantages. The “3G” or “4G” monikers in the product name force you to choose upfront whether you want cellular wireless or not. Each product page references the other products, and the descriptions are very similar. It’s easy to click on the wrong product page by mistake and think you’re getting one thing while actually buying another. Price may offset confusion somewhat, and the fact that Kindles are less expensive than the alternative, the iPad in particular, may entice some buyers to power through all the marketing nonsense and figure out what to buy anyway.

Amazon seems to have decided that strength is in numbers, Paradox of Choice be damned. I think that this is a mistake. It prevents simple, verbal recommendations from happening. “Should I get a Kindle?” is a common question I hear. There is no way to answer that quickly and with accuracy. The best you can do is extrapolate, assume they want an ebook reader, and say: “Yes, just get the cheapest one, less than $100.”

Compare that to the other typical question: “Should I get an iPad?” You can easily answer this with yes or no. The iPad is the iPad. True, there are choices to be made once you have decided to get one: price, memory, 3G or not. But the basic product choice has already been made. I, like most others, expect that if Apple announces a smaller iPad at some point it will be clearly distinguished, like, say, “iPad Mini,” but it won’t go beyond that.

What would be simpler?

My own preference for the Kindle line would be something like this. Three products: Kindle, Kindle Fire Mini, and Kindle Fire. (Since Apple hasn’t announced an “iPad Mini” yet, that would work, and it would make sure that stories about an iPad Mini, if it’s released, would also mention the Kindle Fire Mini as a competitor). Drop the original Kindle Fire and the Kindle Keyboard (I have no doubt that Amazon keeps the Kindle Keyboard around because it sells, but my argument would be that at some point whatever you’re making from it is undercutting clarity of choice, and therefore sales of the newer devices). Make everything else an option. Forget about adding “HD” to the name: it will not age well.

Obviously this won’t happen now, but who knows, maybe next year. As for whether the new Kindle Fire models will be worthy iPad challengers, that remains to be seen, but the initial reviews suggest the same lack-of-polish software problems that the original Kindle Fire has. This is somewhat beside the point, however: Amazon’s beachhead into the tablet space is by making Kindle Fire a window into all Amazon content you own, as opposed to trying to match the iPad as a more general computing device. For that purpose, it’s a good tablet.

I’m a huge fan of Amazon as a company and the Kindle e-reader as a product. I also think that the Kindle Fire is important to provide a credible competitor to Apple, so I’m looking forward to seeing it evolve.

In the meantime, though, my recommendation remains the same: if you do a lot of long-form reading, get the cheapest Kindle. For everything else, there’s the iPad. :-)

the rumor singularity

I’ve talked before about what I believe are some of the effects of a world in which news is more pull than push, and in which the continuous rumor mill keeps stories about upcoming products, real or imagined, front and center. So far so good.

Then today I came across this Techcrunch article: Thoughts On Apple’s Latest TV Efforts.

On the surface, this looks like the usual article about the perennial Apple TV rumors (what I called Steve’s Last Theorem). Thinking about it a bit more, however, I’d say this is a good example of a new level of meta rumor mongering that we’ve been seeing more frequently lately.

Why? Deconstructing:

  1. “Thoughts on…”. This is an opinion piece… on a product that has not been announced and that we have no idea when, how, or even if it will come to market.
  2. “…Apple’s latest TV Efforts”. Moreover, this is about the latest iteration of the rumor, and it seems as if we are discussing version 2 (or 3, or 4) of an actual product.

So: not only is it an opinion piece on an imaginary product, it’s also discussing the evolution of this imaginary product and commenting on it, referencing previous articles as if they were fact, not speculation.

We’re through the looking glass here, past the event horizon. We have reached the rumor singularity. We could easily take the next logical step after that article and, for example, write a thoughtful critique of Apple’s API restrictions for the TV AppStore that doesn’t yet exist, on a device that hasn’t yet been released. Afterwards, we could start polishing up our pitchforks and have a good round of blog posts and commentary on how insane those imaginary API constraints are, and watch Apple’s stock price go up or down based entirely on widespread reaction to the imagined constraints of the potential API that the product that doesn’t exist yet may or may not have. Eventually the rumored product can be rumored to be canceled because of perceived tepid demand, and everyone can just move on to the next thing that doesn’t yet exist to obsess about. Naturally, in this case everyone will afterwards be able to talk about “Apple’s failed TV efforts”.

I really think Apple, Facebook (think perennial “Facebook phone” rumors), and others could save themselves a lot of time and effort and simply not release anything else. Just let the rumor singularity take over, and enjoy the ride. :-)

“I broke it” vs. “it’s broken”

“I don’t know what I did.” (Cue ominous music.)

These are usually the first words that non tech-nerds will utter when asking for help with a computer problem. The printer was working, now it’s not. A certain toolbar has always been there. Now it’s not. Double-clicking on an icon used to load a certain application, now it doesn’t. A network drive used to be available, now it isn’t. And on, and on it goes.

Let’s step back for a moment — quoting from my post a couple of weeks ago, cargo-cult troubleshooting:

There’s an interesting aside to this in terms of why we assume that the problem is on our end first, rather than the other. It’s what I call the “I broke it vs. It’s broken” mindset, of which I’ll say more in another post, but that in essence says that with computer systems we tend to look at ourselves, and what is under our control, as the source of the problem, rather than something else. This is changing slowly in some areas, but in a lot of cases, with software in particular, we don’t blame the software (or in this case, the internet service). We blame ourselves. As opposed to nearly everything else, where we don’t blame ourselves. We say “the car broke down,” not “I broke the car.” We say “The fridge isn’t working properly” as opposed to “I wonder what I did to the fridge that it’s no longer working”. And so on. We tend to think of Google, Apple, and pretty much anyone else as black boxes that function all the time, generally ignoring that these are enormously complex systems run by non-superhuman beings on non-perfect hardware and software. Mistakes are made. Software has bugs. Operational processes get screwed up. That’s how things are; they do the best they can, but nothing’s perfect.

This is perhaps my biggest pet peeve with computing devices. Whenever someone tells me “I don’t know what I did,” I give them the fridge example and they say, “good point… but… this was working before.” They see the point, but they still blame themselves. And it drives me crazy. (Note: As I mention in the paragraph, some of this comes from a sense of infallibility we assign to web services, but that’s a slightly different topic and so I’ll leave that for yet another post. What I want to discuss here has to do with personal computing devices themselves).

This didn’t happen by chance. Software has long placed seemingly impossible choices on users. I still chuckle at DOS’s “Abort, Retry, Fail?” prompt when, say, a DIR operation failed. Of course, there’s a meaning to each of those options (never mind that in practice it didn’t make much of a difference which one you chose, since they usually appeared when there was a hardware failure).

Now, this is fairly common with new technologies — early on there are many more low-level details exposed to the user that allow them to create problems. The difference with software is its malleability and the fact that we chose, early on, to expose this malleability to everyday users, and many of the initial metaphors were low-level enough that they could easily be misused, like, say, the filesystem (a bit more on that below).

Granted, software does present more opportunities for a user to make a mistake and “break things” than your average fridge, but in my mind that’s not an excuse. Software should be flexible, yes, but it should also be resilient to user choices, allowing easy recovery and an understanding, on the part of the device, of state.

Frequently the source of the error is a hardware problem. These days, automatic updates can also break software or misconfigure settings. This isn’t the user’s fault. Many other times, it was, in fact, something the user did that “broke it.” But my argument is that even in that case it’s our responsibility as software designers to build software that is resilient, if you will, to user choices. Back to the fridge for a moment: you can break a fridge by doing things that, for example, push the motor too hard, or if you’re smashing the controls inside, but it requires time, it’s not easy, and it can’t happen in a split second while you are distracted.

The filesystem is a great example of this problem. It’s too easy to make mistakes. While using it, you have to pay attention not just to the task at hand but also have to devote a significant amount of attention to the mechanics of doing it to make sure you don’t, say, wipe out some unrelated documents while trying to create a new one. That’s why I really like the idea of what Google has done with Google Docs and what Apple is trying to do with iCloud in general, pushing the filesystem out of the way to leave in place just a document metaphor, closer to what’s in place in iOS (for an in-depth discussion of this topic, you should read the iCloud section in John Siracusa’s excellent OS X 10.8 Ars Technica review. And if you haven’t yet, read the whole thing while you’re at it.) These new approaches aren’t perfect by any means. Behavior and functionality are still wonky at times and it’s hard to let go of an interaction metaphor that we have been using for decades, but we have to start somewhere.

There are many more areas in which this kind of shift has to happen, but since in the end it all comes down to data, it’s really, at the core, a difference in how we approach data creation, modification, and deletion. Creation should be painless; modification should almost always automatically maintain version information; switching between versions/states should be easy; deleting information should be very difficult and, in a few select cases, pretty much impossible (if you think this is extreme, consider: do you have the option to delete your computer’s firmware? Not really, and for good reason, but the firmware isn’t the only critical component in a computer). This can apply to UI state, system settings, device setup, display preferences, you name it. Incidentally, all large-scale web services have to implement these notions one way or another. Bringing down your entire web service because of one bad build just won’t do. :)
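To make the data model described above concrete, here is a minimal sketch in Python (all names are hypothetical, not any real system’s API): every modification appends a new version instead of overwriting, switching between versions is trivial, and “deletion” is a reversible soft-delete that hides the data rather than destroying it.

```python
class VersionedValue:
    """A value whose history is append-only and whose deletion is reversible."""

    def __init__(self, initial):
        self._versions = [initial]  # creation is painless: one line, one version
        self._deleted = False

    def set(self, value):
        self._versions.append(value)  # modification never overwrites history

    def get(self, version=-1):
        # Switching between versions/states is just an index choice.
        if self._deleted:
            raise KeyError("value is hidden (soft-deleted)")
        return self._versions[version]

    def delete(self):
        self._deleted = True  # "deletion" only hides; history survives

    def restore(self):
        self._deleted = False

doc = VersionedValue("draft")
doc.set("final")
assert doc.get() == "final"      # current state
assert doc.get(0) == "draft"     # earlier state, still reachable
doc.delete()
doc.restore()                    # an "undelete" is always possible
assert doc.get() == "final"
```

The same idea, applied to UI state or system settings instead of a single value, is what lets a system recover gracefully from whatever the user (or an automatic update) did to it.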

We know how we got here. For a while, we had neither the tools nor the computing power and storage to implement these ideas, but that is no longer the case. It’s taken a long time to get these concepts into our heads; it will take a long time to get them out, but I think we’re making progress and we’re on the right path. Here’s hoping.

the era of wysiwyg product announcements

Watching the Microsoft Surface announcement a few weeks ago, I was struck by the same things nearly everyone has commented on: the wooden delivery, the crashes, the interesting ideas coupled with Microsoft’s equivocations on what should constitute the best experience (“It has a soft keyboard, and a real keyboard that is kind of like a soft keyboard, and a real keyboard that is more like a typical keyboard and… a pen!”).

Yet, something kept bugging me, until it suddenly hit me: Microsoft was announcing a product that wasn’t ready.

Pre-announcing. This used to be how things were done. Back in the day when CES, COMDEX (remember COMDEX?) and other similar conferences ruled the roost, products were announced and demoed with great fanfare months and months (sometimes a year or more) before they were available. Demo followed by demo, release date changes, multi-year development cycles. In the 90s and early 2000s these events were the rule, not the exception. We even had an acronym to go along with this way of announcing products: FUD (fear, uncertainty, and doubt), where the announcement was made only to scare off the competition or to create pressure, even when the product being announced didn’t actually exist.

This is no longer how product releases are done, however. We could even draw a faint analogy to personal computer UIs in the 80s, when MacOS brought WYSIWYG to personal computers. It used to be “what you see is what you’ll get…maybe…eventually” and now it’s pretty much “what you see is what you get.” Products are announced and often available the same day, or within a few weeks at most. There’s almost no daylight between a product’s announcement and its wide availability. As with many things in recent years, we can credit Apple with having changed the game.

By and large, Apple no longer pre-announces products. The one glaring exception is operating systems and developer tools, where you need some degree of “pre-announcing” to get developers on board so that at release the new OS & toolkits are supported by 3rd party apps. It’s much more a situation of announcing the developer version (and its immediate availability), followed by an announcement of the consumer version. OS X Mountain Lion, most recently, was announced for consumer release about a month and a half prior to wide availability, and that’s probably as close as you can get if you want to give 3rd party developers some time to adjust to the final version of the OS.

In the case of Apple, this is in keeping with their theory of product design, in which they “know what’s best.” If you’re not really going to adjust that much to input from others, what’s the point in pre-announcing?

That’s not the whole story though. There are other factors at play. Rumors and analysis for upcoming products now spread faster and earlier than ever before. In the 90s speculation about new products was much more limited; most of the activity was around information provided by the company in the first place, whoever they were. With blogs and news sites that publish almost anything they get their hands on, rumors, spy photos & video coming from all corners of the globe (including from production lines), there’s now a constant drumbeat of essentially free PR for new products that didn’t exist even a few years ago. From a PR perspective, what would be the value of squashing a rumor with actual information (say, the resolution of the next iPhone’s camera, to name just one thing)? People would a) stop speculating, leading to fewer articles, and b) start criticizing the actual choice without having the product in front of them, which is also bad. The only time I remember Apple “breaking” this rule was with the announcement of the original iPhone, almost a full 6 months before its release, but in that case there was already a fever pitch of speculation and they knew that the product spec would get out partially in government documents related to FCC testing and whatnot, so controlling the story was more important than waiting for availability.

Another key factor is that Apple has demonstrated they can sustain an extraordinarily stable release cycle for their devices. iPhones & iPods (plus iOS) in early Fall. iPads in early Spring. Major updates for Macs (plus OS X) in the summer. No doubt there are internal reasons why keeping this cycle makes sense, in terms of component updates and such as well as to create a general development pace, but it also has the side-effect that the speculation and rumors have rough targets and therefore they start to “seed”, on cue, information and expectation for the upcoming product, all without Apple spending a single cent.

Incidentally, I think this was also a key reason why Apple stopped going to conferences like CES years ago. They want to control their schedule, they don’t want to be forced to announce products in January when they may be available in August. And the always-on, rumor-fueled digital news cycle lets them do just that.

So I think that in this case Apple has adapted to a new world where you don’t need a constant drumbeat of PR to remind people that a new product is coming; blogs & news sites do it for you. Others, like Google, have partially adapted to this by centering major announcements around Google I/O, but they still release products at random times, losing some of the steam that an increase in rumor and expectation would provide.

And Microsoft? They appear to be stuck in the 1990s. They attempted to replicate the pageantry associated with Apple events without the substance. The Surface announcement came out of the blue (up to the announcement, speculation was all over the place, mostly focusing on the announcement having something to do with Xbox). In and of itself this isn’t a bad thing, but surprises aren’t always good, particularly when followed by lots of unknowns. The software crashed during the demo, and they didn’t let anyone touch the devices, leading to (justified) speculation that they just weren’t ready. They didn’t announce a release date. They didn’t announce pricing. They left pretty much everything important open to speculation while showing enough to quash many rumors, so criticism and opinions appear to be based in fact. (Non-tech people I know who don’t follow these things closely actually thought that Microsoft had released the device.) The worst of both worlds.

Some of Microsoft’s missteps in this area probably relate to the fact that they coupled, to some degree, parts of the Windows 8 announcement with the tablets, and for Windows 8 you need OEMs to fall in line (not to mention the Windows Phone 7/8 incompatibility issues floating out there as well), a problem Apple doesn’t have. With so many versions and variants it’s difficult for one single product to pack a lot of punch in terms of mindshare.

We’ll have to wait and see if Microsoft is able to adapt to the new way of doing things now that Windows 8, even with all its variants, presents a more cohesive picture of where they want to go and perhaps allows them to create a more stable release cycle. In the meantime, Apple will continue to leverage WYSIWYG product announcements to their advantage.

the dark knight rises: an epic conclusion

As I wrote eight (!) years ago, Batman is my favorite superhero character. With Batman Begins, The Dark Knight, and finally The Dark Knight Rises we finally have a movie saga worthy of an iconic comic book character, one that is unlikely to be topped any time soon. It is epic, a great conclusion to the best superhero trilogy ever put on the big screen, and if you enjoy movies you should go see this one in a theater, where it can be experienced as it was meant to be.

I’ve said before that Batman can be defined in contrast with his enemies. If the Joker in TDK was pure id, Bane in TDKR is much more calculating ego and even anti-super-ego (even if that is mangling Freudian theory to some degree), and it suits the evolution of the overall story well, connecting the enemy of Batman Begins (The League of Shadows) with the unrestrained anarchy of The Dark Knight represented by the Joker into one, with more impact than Knightfall, the comic book series that foreshadows some of the movie’s plot.

The movie isn’t perfect. Parts of it feel rushed, certain plot points seem at times pulled out of thin air, and others can be seen coming a mile away, three areas in which The Dark Knight was superior. There were times when I found myself admiring the scenery rather than being immersed in it, and I would have liked at least an oblique reference to the fate of the Joker (if there was one, I missed it in what is at times fast, mumbled or distorted dialogue). These minor failings don’t detract too much from the movie in my opinion, and the movie’s climax is superior to that of The Dark Knight.

The Dark Knight Rises is Christopher Nolan at the top of his game, a master of cinema weaving an extremely complex story with great skill (although perhaps not to the level of Inception), and I can’t wait to see what he does next.

ideas are bulletproof

An act of violence like what happened in Colorado is not really something that we can make sense of, as much as we might try. It is sad but true that this was “just” the act of someone who’s clearly mentally unbalanced, with effortless access to assault weapons and riot gear when he clearly should not have been able to purchase anything more deadly than a set of plastic scissors.

If the first part of The Dark Knight Rises (more on the movie in the next post) is to some degree an expression of the idea that one person can’t push back on an ocean wave crashing on the shore, the second part embodies the notion that how we react, what we do, in the face of forces beyond our control matters. Standing up to something matters. The wave eventually recedes.

I admit that I wasn’t completely unconcerned about going to see the movie last night. The lizard brain is hard to completely quiet down. But I went, and so did a lot of other people. And there was something reassuring, small and yet valuable, about that.

I am not, by any means, saying that going to watch a movie represents some kind of a deeply held moral stance or a profound act of strength of character. Not at all. First, it echoes too much of the post-9/11 notion that “shopping is patriotic” (I’m paraphrasing–you know what I’m talking about). Second, I have no doubt some of the people that went did so mindlessly, that is, without specific intent. But I also have no doubt that for a lot of people there was a kernel of fear in their minds, and what matters is they got over it, and went on with their lives. Some people probably didn’t get over it, and didn’t go — and that’s fine too. I’m not talking about individual actions here, but about the reaction of the collective. Empty theaters on Friday night after what happened at midnight on Thursday would have been a bad sign. A sign that we as a group had given up, retreated to some degree in the face of what’s in essence a world that is beyond our control, even if we like to tell ourselves that it isn’t.

So if millions of people going to see a movie in spite of fear isn’t sudden proof of a culture-wide show of courage, it is also true that there is something important in that people did do it: the simple but powerful idea that life, down to its most routine and perhaps even frivolous moments, is worth living, not only when we can protect ourselves from every possible danger and somehow live without fear, but precisely in spite of the fact that we can’t.

Adding to something I said at the end of this post many years ago: Ideas are bulletproof — but only if we believe in them.

nexus 7 and the android experience consistency conundrum

“So now the home screen is locked to portrait mode?”

This was one of the first questions in my mind during the first few minutes of using the Nexus 7, which finally arrived yesterday. After a few hours of use, I have to say: I like it. So what’s this about portrait mode then?

I’ve used the original 7 inch Galaxy Tab, running Froyo (Android 2.2) first, then Gingerbread (Android 2.3), which had the UI locked in portrait mode. I own a Galaxy Tab 10.1, which at the moment runs Android 3.x (Honeycomb, apparently to be updated soon to 4.0 Ice Cream Sandwich) and in which the home screen UI is locked in landscape mode. (Note: while the Kindle Fire is an Android device, the fact that Amazon has modified it to the degree that they have makes it unsuitable in my view for inclusion in a list of “Android tablets”.)

Now with the Nexus 7, running Android 4.1 (Jelly Bean), the home screen UI is again in portrait mode. Although we have to use “again” carefully here since the original Galaxy Tab was really just the phone OS/UI installed on a larger slate, and not really designed for tablets.

Why does the home screen orientation matter? The fact that the home screen UI is now locked to portrait mode may seem like a relatively minor thing, and it is, but I think it is representative of a larger issue facing Google with Android in general: they need to decide when something is good enough, and stop making major changes for a while.

In the tablet space Google hasn’t really had a Google-branded flagship device before the Nexus 7, so we could chalk up some inconsistency to that. But Google has released phones under the Nexus brand for a while now, and every iteration has been different. I own and have used a Nexus One, Nexus S, and Galaxy Nexus, and while these are all great devices, and in my opinion the best Android smartphones of their respective generations, every new device with its corresponding new OS has made significant, and often bewildering, UI changes.

“Primary” UI buttons (i.e., the equivalent of Apple’s home button) have gone from hardware to software. Their number and functions have changed. Defaults have shifted significantly with each release (even when restoring settings from the same Google account). The store has undergone significant changes and rebranding. Jelly Bean’s home screen by default now greets you with a magazine-like interface to highlight content from your “library,” also a new concept post-introduction of the Google Play store. Under the covers, APIs have undergone a dramatic (and, overall, welcome) improvement, but every release feels somewhat disconnected from the previous one by making major changes to what apps are supposed to do.

Now, don’t get me wrong: Android initially, and for years afterwards, from the UI to its APIs, was inferior to iOS in my opinion (and yes, I’ve developed and released software on both), and with Jelly Bean, and the Galaxy Nexus and Nexus 7 hardware platforms, Google finally has something that is at least up to the challenge, so continued iteration has paid off in that regard. Additionally, an under-appreciated factor in making drastic changes is that Android’s market share on tablets has been tiny, which gives them an opportunity to evolve more quickly.

Android would benefit from less fragmentation of both versions and experience, and a faster update cycle. Part of getting there requires Google to finally settle on the major features of the Android experience and evolve more incrementally in the next few releases. iOS has a real advantage in uniformity of the experience (both in terms of hardware and software) across devices: if you know how to use one iOS device, you know how to use them all. This hasn’t been entirely true of Android devices until Jelly Bean and the Google Play Store.

The wildcard here is the OEMs. They seem to be addicted to making unnecessary modifications and customizations that add little value and are actually counterproductive in that they invalidate, to varying degrees, the knowledge that a user may have about Android from other devices. Their incentive is actually to make it harder, not easier, to switch to another manufacturer — another advantage Apple has.

With the Nexus 7 and Jelly Bean, Google has a chance to establish a dominant device and experience that could have the effect of forcing the OEMs to see the value in consistency, and over time perhaps this can also trickle over to the Android smartphone space, something that will improve the lives of developers and users alike.

Here’s hoping. :)

skeuomorphic software, invisible hardware

A number of articles in the last few months have argued against the increasingly common use of skeuomorphisms in UI design. A recent one, that is also a good summary of the argument, is can we please move past Apple’s silly, faux-real UIs? by Tom Hobbs. A key point these arguments make is that software shouldn’t necessarily try to imitate the physical object(s) it is replacing, since we are both encumbering software with constraints it doesn’t naturally have, and we’re missing the opportunity to really leverage the malleability of software interfaces to create entirely new things.

In the case of Apple, though, I think there may be a reason beyond those usually associated with the use of skeuomorphic design, one rooted in a view of their products as a deeply integrated combination of hardware and software.

Before going into it in more detail, let me say that I actually agree with the general case against overuse of skeuomorphisms — I think that we have not done enough as an industry to explore new ways of creating, presenting, and manipulating information. There’s definite value in retaining well-known characteristics in UIs for common tasks, but the problem is when we simply substitute the task of designing a UI with copying its real-world equivalent. We haven’t scratched the surface of what is possible with highly portable, instant-on, location-aware, context-aware, always-connected, high resolution, touch-based (or not) hardware, and just copying what came before us is unnecessarily restrictive.

The case of Apple is slightly different, however. They don’t just produce software, they design and produce the whole package. Arguably, a lot of the success of iOS devices hinges precisely on the high level of integration between hardware and software.

So the question is, if we consider the whole package, not just the software, does that change the reasoning behind Apple’s consistent move towards skeuomorphic UIs? I think it does.

Consider the hardware side of the equation. With every new generation of hardware, whether iPhone, iPad, Mac, or even displays, Apple moves closer and closer to the notion of “invisible hardware”. In recent product introductions they’ve frequently touted how, for example, the iPad or the Retina MacBook to some degree fades into the background: it’s just you, and your content. This materializes in many ways, from the introduction of Retina displays to the consistent move towards removing extraneous elements from displays (no product names, no logos — just the bezel and the display).

I’ve written about this before when I discussed the end of the mechanical age. Apple has been for years moving towards devices that disappear from view even as you’re holding them in your hand, making them simpler (externally), even monolithic in their appearance, just slabs of aluminum and glass. Couple this with a skeuomorphic design approach for the software and what you get is a view of the world where single-purpose objects fade away for those that can essentially morph into the object you need at any one time.

In other words: I think Apple’s overall design direction, implicitly or explicitly, is that of replacing the object rather than just the function.

Today, this can be done with invisible hardware and skeuomorphic software. In the future, barring the zombie apocalypse or some such :-) we could have devices based on nanomachines that in fact physically morph to take on the characteristics of whatever you need to use.

As I said before, I think that we should be exploring new user interfaces and letting go of the shackles of UIs created decades or even centuries ago to find new and better ways of interfacing with the vast ocean of data that permeates reality. Apple’s approach in the meantime, however (regardless of my personal preference), strikes me as a valid direction that is not at all run-of-the-mill overuse of skeuomorphisms, but something deeper: a slow but steady replacement of inert physical objects with malleable — and seamless — analog equivalents, each with a digital heartbeat connected to the datastream at its core.

