diego's weblog

there and back again

Monthly Archives: August 2012

the rumor singularity

I’ve talked before about what I believe are some of the effects of a world in which news is more pull than push, and in which the continuous rumor mill keeps stories about upcoming products, real or imagined, front and center. So far so good.

Then today I came across this Techcrunch article: Thoughts On Apple’s Latest TV Efforts.

On the surface, this looks like the usual article about the perennial Apple TV rumors (what I called Steve’s Last Theorem). Thinking about it a bit more, however, I’d say this is a good example of a new level of meta rumor mongering that we’ve been seeing more frequently lately.

Why? Deconstructing:

  1. “Thoughts on…”. This is an opinion piece… on a product that has not been announced, and that no one knows when, how, or even if it will come to market.
  2. “…Apple’s latest TV Efforts”. Moreover, this is about the latest iteration of the rumor, and it seems as if we are discussing version 2 (or 3, or 4) of an actual product.

So: Not only is it an opinion piece on an imaginary product, it also discusses and comments on the evolution of this imaginary product, referencing previous articles as if they were fact, not speculation.

We’re through the looking glass here, past the event horizon. We have reached the rumor singularity. We could easily take the next logical step after that article and, for example, write a thoughtful critique of Apple’s API restrictions for the TV AppStore that doesn’t yet exist, on a device that hasn’t yet been released. Afterwards, we could start polishing up our pitchforks and have a good round of blog posts and commentary on how insane those imaginary API constraints are, and watch Apple’s stock price go up or down based entirely on the widespread reaction to the imagined constraints of the potential API that the product that doesn’t exist yet may or may not have. Eventually the rumored product can be rumored to be canceled because of perceived tepid demand, and everyone can just move on to the next thing that doesn’t yet exist to obsess about. Naturally, everyone will then be able to talk about “Apple’s failed TV efforts”.

I really think Apple, Facebook (think perennial “Facebook phone” rumors), and others could save themselves a lot of time and effort and simply not release anything else. Just let the rumor singularity take over, and enjoy the ride. :-)

diego’s life lessons, part II

Excerpted from the upcoming book: “Diego’s life lessons: 99 tips for survival, fun, and profit in today’s baffling bric-a-brac world.”  (part I is here).

#6: sign up for every personalization app available

“Big data” is all the rage these days. We are told that a few machines running Hadoop can figure out the deepest triggers in your psyche from only a few Facebook ‘Like’ actions. Companies crop up every other day promising to figure out what kind of ice cream flavor you’re going to like because you buy a certain color of bath towel. And they’re right, of course. A human being’s life can absolutely be defined by a linear regression on a few loosely correlated data points. It doesn’t matter if what they’re correlating is a couple dozen people in Tennessee having burgers, while you live in Sweden and like to unwind watching The Seventh Seal over and over. People are people, it doesn’t matter who they are, where they are, why they are, or how much they are. Humans are just not that interesting or different from each other. If you need proof, go look for videos of skateboarding puppies on YouTube and count the views.

It used to be that these were websites, but now they’re just apps. They follow you, track your every move, and give you suggestions about all the things you should be doing. By signing up for every one of these apps, you not only ensure that you will be doing as little thinking as possible, you also get ahead of the curve on this I-don’t-need-to-think-or-do-anything trend. Soon, apps will shift from giving recommendations and predictions to just telling you where you should go. The inevitable next step is that the app will do it for you — you won’t need to move a finger. An app will tell you that you should be having dinner somewhere (eventually even skipping the part where it tells you), check you in, and tweet for you about how delighted you are with the dinner you’re supposed to be having, all from the comfort of your home while you’re eating canned beans. Then you can hide in a cupboard in peace (see rule #1).

In time, the apps will even date and have their own, um, applets, which will let them achieve the app singularity and bypass humans altogether (for the most part: we will still have to keep the phones plugged in so the apps can run, so we’ll basically be tasked with providing electricity for them, something that The Matrix got right).

So get on board. Before an app does it for you.

#18: raise your own tiny robot army

Wake-up call: Do you really think that people are getting Lego Mindstorms just for the kids? We have been warned time and again of the consequences of a decaying civilization, impending apocalypse, and whatnot. What we have not been told is what happens when someone else on your block has a robot army. Trust me, you need one.

A tiny robot army comes in handy in countless ways, whether you’re engaging in hostilities with the jerk from the apartment right above yours who won’t stop playing loud music, or just escorting your dog while you take him for a walk. In peacetime, the robots can be deployed to do various chores, like washing dishes, cleaning your house, or carrying bags from the supermarket. And every once in a while, you can use them to invade a nearby property if you suspect the neighbors may be threatening your way of life by, for example, hiring a new gardener to change the shapes of their shrubbery. In a pinch, if, say, your Internet connection is down, they can be dressed in hilarious tiny costumes and made to enact Shakespeare plays for you. You haven’t really experienced Hamlet until you’ve seen it performed by a tiny robot army.

And I know what you will say: “I’ll call Aquaman!” But, really, Aquaman is useless outside of water, and he isn’t real. Batman only works in Gotham, so unless you live there, you’re on your own. Get a robot army ready, or suffer the consequences.

#50: be friendly with squirrels

Godzilla has battled many enemies: Mothra, Space Godzilla, Megalon, Mechagodzilla. All worthy foes. But Godzilla never had to face a horde of angry squirrels.

Squirrels are a force to be reckoned with. They have multiple powers: blinding-fast movement, jumping, crawling up surfaces, high-speed sniffing; some can even fly. They are fearsome foes of cabling: given enough time, squirrels can chew through pretty much anything, including but not limited to fiber optic cables, coax, and the wiring system of your car, as I have personally experienced.

Some people have said that squirrels are just fancy rats with fur coats, but nothing could be further from the truth. They’re highly advanced creatures, as you can see from the following photographic evidence:

That’s right. Anti-tank weaponry. Tiny violins. Lightsaber fights. Squirrels can handle them all. They even had the forethought, thousands of years ago, of preparing for today’s Jurassic Park-like experiments with plants.

Naturally, squirrels cannot be true friends of mankind, since we compete for the same lightly-forested, Internet-enabled, high-garbage-density habitats known as the suburbs. So “friendly” is the best you can hope for. If your interests happen to match theirs, they can be powerful allies, and, in case you ignored Lesson #18 above and don’t have a tiny robot army, you may be able to entice them to fight for you by giving them a box with assorted lengths of wire and some nuts.

#76: always have a miniature EMP device handy

Ever have that problem where your neighbor keeps construction going all the time? Or been at the movies and some jerk doesn’t stop talking on the phone? What about that meeting in which people just won’t stop playing Angry Birds on their iPads?

All of these problems have one solution: a miniature EMP generator. This wonderful device will wipe out all circuits within a reasonable radius, returning your immediate surroundings to something like, say, Victorian-era England. What a time that was, when you could hold a world-wide empire that controlled hundreds of millions of people from a tiny island thousands of miles away from nearly everything, and no one really worried about pesky things like human rights, child labor, and such. On the other hand, you had to be constantly at war for all sorts of reasons, which really put a cramp in the Queen’s croquet schedule, but hey, nothing’s perfect.

Speaking of croquet: I don’t get it. Cricket, either. And who named these things anyway? Football, basketball: those are sports you can understand just by hearing their name. But croquet? Sounds like a side for breakfast, not a sport.

Anyway, back to the portable EMP. Procuring this device may be slightly tricky, and customizing it properly is not for the faint of heart. The best way is to build it yourself: spend a few years becoming a nuclear physicist, then follow that up with mechanical and electrical engineering degrees. With any luck, civilization will still be around by the time you are done.

Once you have your device, be careful how you use it, since it will likely also wipe out some or all of your own devices — not ideal, but in the end a small price to pay for peace and quiet.

“I broke it” vs. “it’s broken”

“I don’t know what I did.” (Cue ominous music.)

These are usually the first words that non-tech-nerds will utter when asking for help with a computer problem. The printer was working; now it’s not. A certain toolbar has always been there. Now it’s not. Double-clicking on an icon used to load a certain application; now it doesn’t. A network drive used to be available; now it isn’t. And on, and on it goes.

Let’s step back for a moment — quoting from my post a couple of weeks ago, cargo-cult troubleshooting:

There’s an interesting aside to this in terms of why we assume that the problem is on our end first, rather than the other. It’s what I call the “I broke it vs. It’s broken” mindset, of which I’ll say more in another post, but that in essence says that with computer systems we tend to look at ourselves, and what is under our control, as the source of the problem, rather than something else. This is changing slowly in some areas, but in a lot of cases, with software in particular, we don’t blame the software (or in this case, the internet service). We blame ourselves. As opposed to nearly everything else, where we don’t blame ourselves. We say “the car broke down,” not “I broke the car.” We say “The fridge isn’t working properly” as opposed to “I wonder what I did to the fridge that it’s no longer working”. And so on. We tend to think of Google, Apple, and pretty much anyone else as black boxes that function all the time, generally ignoring that these are enormously complex systems run by non-superhuman beings on non-perfect hardware and software. Mistakes are made. Software has bugs. Operational processes get screwed up. That’s how things are, they do the best they can, but nothing’s perfect.

This is perhaps my biggest pet peeve with computing devices. Whenever someone tells me “I don’t know what I did,” I give them the fridge example and they say, “good point… but… this was working before.” They see the point, but they still blame themselves. And it drives me crazy. (Note: as I mention in the quoted paragraph, some of this comes from a sense of infallibility we assign to web services, but that’s a slightly different topic, so I’ll leave it for yet another post. What I want to discuss here has to do with personal computing devices themselves.)

This didn’t happen by chance. Software has long presented users with seemingly impossible choices. I still chuckle at DOS’s “Abort, Retry, Fail?” prompt when, say, a DIR operation failed. Of course, there’s a meaning to each of those options (never mind that in practice it rarely made much of a difference which one you chose, since the prompt usually appeared when there was a hardware failure).

Now, this is fairly common with new technologies — early on, many more low-level details are exposed to the user, details that allow them to create problems. The difference with software is its malleability and the fact that we chose, early on, to expose this malleability to everyday users, and many of the initial metaphors were low-level enough that they could easily be misused, like, say, the filesystem (a bit more on that below).

Granted, software does present more opportunities for a user to make a mistake and “break things” than your average fridge, but in my mind that’s not an excuse. Software should be flexible, yes, but it should also be resilient to user choices, allowing easy recovery, and the device should maintain an understanding of its own state.

Frequently the source of the error is a hardware problem. These days, automatic updates can also break software or misconfigure settings. This isn’t the user’s fault. Many other times, it was, in fact, something the user did that “broke it.” But my argument is that even in that case it’s our responsibility as software designers to build software that is resilient, if you will, to user choices. Back to the fridge for a moment: you can break a fridge by pushing the motor too hard, or by smashing the controls inside, but it requires time, it’s not easy, and it can’t happen in a split second while you are distracted.

The filesystem is a great example of this problem. It’s too easy to make mistakes. While using it, you have to pay attention not just to the task at hand but also to the mechanics of doing it, to make sure you don’t, say, wipe out some unrelated documents while trying to create a new one. That’s why I really like the idea of what Google has done with Google Docs and what Apple is trying to do with iCloud in general, pushing the filesystem out of the way to leave in place just a document metaphor, closer to what’s in place in iOS (for an in-depth discussion of this topic, you should read the iCloud section in John Siracusa’s excellent OS X 10.8 Ars Technica review; and if you haven’t yet, read the whole thing while you’re at it). These new approaches aren’t perfect by any means. Behavior and functionality are still wonky at times, and it’s hard to let go of an interaction metaphor that we have been using for decades, but we have to start somewhere.

There are many more areas in which this kind of shift has to happen, but since in the end it all comes down to data, it’s really, at the core, a difference in how we approach data creation, modification, and deletion: creation should be painless; modification should almost always automatically maintain version information; switching between versions/states should be easy; and deleting information should be very difficult, in a few select cases pretty much impossible (if you think this is extreme, consider: do you have the option to delete your computer’s firmware? Not really, and for good reason, but the firmware isn’t the only critical component in a computer). This can apply to UI state, system settings, device setup, display preferences, you name it. Incidentally, all large-scale web services have to implement these notions one way or another. Bringing down your entire web service because of one bad build just won’t do. :)
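
To make that concrete, here is a minimal, purely illustrative sketch (in Python, with made-up names; it isn’t any particular product’s API) of a data layer built along those lines: creating a document is a single call, every modification keeps the previous state around, switching back to an earlier version is trivial, and “deleting” is just a reversible flag.

    import time

    class Document:
        """Toy data layer: painless creation, versioned edits, soft deletes."""

        def __init__(self, content=""):
            # Creation is painless: one call, and the initial state is version 0.
            self.versions = [(time.time(), content)]
            self.deleted = False

        @property
        def content(self):
            # The "current" document is simply the latest recorded version.
            return self.versions[-1][1]

        def modify(self, new_content):
            # Every modification appends a version; nothing is overwritten in place.
            self.versions.append((time.time(), new_content))

        def rollback(self, index):
            # Switching between states is just another recorded modification.
            self.modify(self.versions[index][1])

        def delete(self):
            # "Deleting" only flags the document; the data survives and can be restored.
            self.deleted = True

        def restore(self):
            self.deleted = False

In this toy model a user can undo any edit, and even a “deleted” document can be brought back. A real implementation would add persistence, pruning, and conflict handling, but the principle is the same: never make destruction the easy path.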

We know how we got here. For a while, we had neither the tools nor the computing power and storage to implement these ideas, but that is no longer the case. It’s taken a long time to get these concepts into our heads; it will take a long time to get them out, but I think we’re making progress and we’re on the right path. Here’s hoping.

the era of wysiwyg product announcements

Watching the Microsoft Surface announcement a few weeks ago, I was struck by the same things nearly everyone has commented on: the wooden delivery, the crashes, the interesting ideas coupled with Microsoft’s equivocations on what should constitute the best experience (“It has a soft keyboard, and a real keyboard that is kind of like a soft keyboard, and a real keyboard that is more like a typical keyboard and… a pen!”).

Yet, something kept bugging me, until it suddenly hit me: Microsoft was announcing a product that wasn’t ready.

Pre-announcing. This used to be how things were done. Back in the day, when CES, COMDEX (remember COMDEX?) and other similar conferences ruled the roost, products were announced and demoed with great fanfare months and months (sometimes a year or more) before they were available. Demo followed by demo, release date changes, multi-year development cycles. In the 90s and early 2000s these events were the rule, not the exception. We even had an acronym to go along with this way of announcing products: FUD (fear, uncertainty, and doubt), where the announcement was made only to scare off the competition or to create pressure, even when the product being announced didn’t actually exist.

This is no longer how product releases are done, however. We could even draw a faint analogy to personal computer UIs in the 80s, when the Mac brought WYSIWYG to personal computers. It used to be “what you see is what you’ll get… maybe… eventually” and now it’s pretty much “what you see is what you get.” Products are announced and often available the same day, or within a few weeks at most. There’s almost no daylight between a product’s announcement and its wide availability. As with many things in recent years, we can credit Apple with having changed the game.

By and large, Apple no longer pre-announces products. The one glaring exception is operating systems and developer tools, where you need some degree of “pre-announcing” to get developers on board so that, at release, the new OS & toolkits are supported by 3rd party apps. It’s much more a situation of announcing the developer version (and its immediate availability), followed by an announcement of the consumer version. OS X Mountain Lion, most recently, was announced for consumer release about a month and a half prior to wide availability, and that’s probably as close as you can get if you want to give 3rd party developers some time to adjust to the final version of the OS.

In the case of Apple, this is in keeping with their theory of product design, in which they “know what’s best.” If you’re not really going to adjust that much to input from others, what’s the point in pre-announcing?

That’s not the whole story though. There are other factors at play. Rumors and analysis for upcoming products now spread faster and earlier than ever before. In the 90s, speculation about new products was much more limited; most of the activity was around information provided by the company in the first place, whoever they were. With blogs and news sites that publish almost anything they get their hands on, and rumors, spy photos & video coming from all corners of the globe (including from production lines), there’s now a constant drumbeat of essentially free PR for new products that didn’t exist even a few years ago. From a PR perspective, what would be the value of squashing a rumor with actual information (say, the resolution of the next iPhone’s camera, to name just one thing)? People would a) stop speculating, leading to fewer articles, and b) start criticizing the actual choice without having the product in front of them, which is also bad. The only time I remember Apple “breaking” this rule was with the announcement of the original iPhone, almost a full 6 months before its release, but in that case there was already a fever pitch of speculation and they knew that the product spec would get out partially in government documents related to FCC testing and whatnot, so controlling the story was more important than waiting for availability.

Another key factor is that Apple has demonstrated they can sustain an extraordinarily stable release cycle for their devices. iPhones & iPods (plus iOS) in early fall. iPads in early spring. Major updates for Macs (plus OS X) in the summer. No doubt there are internal reasons why keeping this cycle makes sense, in terms of component updates and such, as well as to set a general development pace, but it also has the side effect that speculation and rumors have rough targets and therefore start to “seed”, on cue, information and expectation for the upcoming product, all without Apple spending a single cent.

Incidentally, I think this was also a key reason why Apple stopped going to conferences like CES years ago. They want to control their schedule, they don’t want to be forced to announce products in January when they may be available in August. And the always-on, rumor-fueled digital news cycle lets them do just that.

So I think that in this case Apple has adapted to a new world where you don’t need a constant drumbeat of PR to remind people that a new product is coming; blogs & news sites do it for you. Others, like Google, have partially adapted to this by centering major announcements around Google I/O, but they still release products at random times, losing some of the steam that a build-up of rumor and expectation would provide.

And Microsoft? They appear to be stuck in the 1990s. They attempted to replicate the pageantry associated with Apple events without the substance. The Surface announcement came out of the blue (up to the announcement, speculation was all over the place, mostly focusing on the event having something to do with Xbox). In and of itself this isn’t a bad thing, but surprises aren’t always good, particularly when followed by lots of unknowns. The software crashed during the demo, and they didn’t let anyone touch the devices, leading to (justified) speculation that they just weren’t ready. They didn’t announce a release date. They didn’t announce pricing. They left pretty much everything important open to speculation, while showing just enough to quash many rumors, so that criticism and opinion now appear to be grounded in fact. (Non-tech people I know who don’t follow these things closely actually thought that Microsoft had released the device.) The worst of both worlds.

Some of Microsoft’s missteps in this area probably relate to the fact that they coupled, to some degree, parts of the Windows 8 announcement with the tablets, and Windows 8 requires getting OEMs to fall in line (not to mention the Windows Phone 7/8 incompatibility issues floating out there as well), a problem Apple doesn’t have. With so many versions and variants, it’s difficult for one single product to pack a lot of punch in terms of mindshare.

We’ll have to wait and see if Microsoft is able to adapt to the new way of doing things now that Windows 8, even with all its variants, presents a more cohesive picture of where they want to go and perhaps allows them to create a more stable release cycle. In the meantime, Apple will continue to leverage WYSIWYG product announcements to their advantage.

not with a bang but a whimper

This is the way the world ends

This is the way the world ends

This is the way the world ends

Not with a bang but a whimper.

–T.S. Eliot, “The Hollow Men”

While apocalyptic fiction stories are a dime a dozen these days (what with the anything-with-zombies craze that’s been building up for a while now…), the genre isn’t new by any means. In Western culture, we could trace it back even to the Book of Revelation in the Bible. We’ve been worrying about our impending doom for a long, long time.

If eschatology isn’t new, perhaps our awareness of its human scale is. In the last half-century, the advent of nuclear and biological weapons has changed how we think about the end of the world as we know it: we’ve grown less focused, I’d say, on the extraordinary events that we imagine as prerequisites for such a thing to occur, and more focused on the impact it would have on individuals and everyday life. With the Cold War we got a much clearer picture of just how thin the razor’s edge on which humanity’s existence rests really is. Nuclear war, bioweapons, a super flu, an asteroid, Yellowstone, you name it. That is — not only do we know the world can end, we also know that there are multiple sure-fire ways in which it could happen. The question then becomes, more often than not, what happens to us, shifting the view away from the pyrotechnics to the quiet that remains. A whimper, not a bang.

Two of the earliest, most fully realized examples of this that I can think of are Richard Matheson’s I Am Legend (1954) and Nevil Shute’s On The Beach (1957). Granted, I Am Legend was ahead of its time in terms of the cause of the end of the world, but I’d argue its focus nevertheless is no longer on “how could this happen?” or “why?” but rather on “now what?”

More recently there are many others I could mention, but a great example is Cormac McCarthy’s The Road (2006) which had a (surprisingly) fairly faithful adaptation in the 2009 movie. Incidentally, The Road contains one of my favorite passages in any kind of fiction:

What is it? she said. He didnt answer. He went into the bathroom and threw the lightswitch but the power was already gone. A dull rose glow in the windowglass. He dropped to one knee and raised the lever to stop the tub and then turned on both taps as far as they would go. She was standing in the doorway in her nightwear, clutching the jamb, cradling her belly in one hand. What is it? she said. What is happening?

I dont know.

Why are you taking a bath?

I’m not.

The subtlety and quiet power of that moment, all that is said and all that isn’t, get me every time. But I digress (if only slightly).

To this list I now have to add Will McIntosh’s Soft Apocalypse. I won’t spoil the story, and I’d recommend avoiding reviews and just reading it. What I found interesting is that it deals with a fairly complex set of causes, albeit in sketch form — not unlike the “Why are you taking a bath?” passage from The Road. It communicates as much in silences, in what it doesn’t say, as in what it does. In the specifics, the only weak point I can remember is that it references the frog-in-boiling-water urban legend, but I’ll accept even that as something a character thinks he knows but that isn’t true.

In any case, it’s a great book. Highly recommended.
