diego's weblog

there and back again

Category Archives: technology

the multichannel conundrum

(x-post to Medium)

I’ve been writing online for quite a while now. My earliest posts date back to late 2001/early 2002. I tried a bunch of different platforms and eventually settled on MovableType running on my own server, and a few years back I moved to hosted WordPress, where my primary weblog remains. As I’ve been revving up my writing in recent weeks I started wondering about other options.

why write where

Now, some people may think of posting in as many places as you can in purely utilitarian terms, as a way to “increase distribution” or whatever. I, however, think about it in terms of the mental space the tool creates, and how it affects my output. Which affects me. This effect is not restricted to online writing, where social feedback loops can be created instantly. I think the tool has a direct, real effect on what you write. All things being equal, writing on a typewriter will lead to something different than if you used, say, Notepad on Windows 95. I’m sure there are studies about this that confirm my completely unfounded assertion. However, I am not going to go on a yak-shaving expedition in an attempt to find out. Let us assume there are, and if not, then let’s agree there should be… and if not we can disagree*.

*Should someone object and try to say that we can “agree to disagree” then I will point out that, no, “agreeing to disagree” is just plain disagreeing but pretending you don’t, probably to avoid an actual conversation. “Agreeing to disagree” is to “disagreeing” what “agnostic” is to “atheist.”

A lot of what I write, of what I’ve always written, is long form. And a lot of what I write, of what I’ve always written, is connected. Not superficially, not just thematically, but actually connected, a long-running thread of obsessions and topics that expand (and, less frequently, collapse) non-linearly. Sometimes I’ve written hypertextually, simultaneously creating small self-contained blocks of meaning and greater ideas that emerge out of the non-directed navigation of references between those blocks. By the by, I know “hypertextually” is not really a word, but I think it conveys what I mean.

While that structure is amusing to my brain (and possibly other brains!), it can have a fate worse than becoming incomprehensible: becoming invisible. If you see something that you don’t understand you have a choice to spend time and try to understand it, but if you don’t see something, regardless of complexity, well…

content survivability

So trying to keep that structure somewhat visible means lots of cross-referencing, which means what I write has to have exceptional survivability. This is less easy than it sounds. Services start and close down. Linking mechanisms change. Technically, theoretically, there’s nothing really preventing hyperlinked content from remaining available for referencing in perpetuity; in practice, perpetuity can be, and often is, a very very short time. An easy example is Twitter and the tweet-boxes that they insist people must use to reference tweets. Some people take screenshots; most use the tweet boxes. Eventually Twitter will change, morph, be acquired, shut down, or maybe not, but I guarantee you that at some point in the next 10–20 years those boxes will simply stop working. At that time, regardless of how standards-compliant the HTML of the pages that contain those tweets may be, they will be crippled, possibly severely. How many times have you read a news story recently that talks about how so-and-so tweeted such-and-such and it’s outrageous? Archive.org and its wonderful Wayback Machine don’t solve this issue.
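As an aside: this suggests a kind of defensive archiving. Below is a minimal sketch, in Python, of the sort of thing I mean, assuming the standard blockquote-plus-script markup Twitter currently hands out for embeds (the file names are hypothetical). It strips the script dependency from a saved page, leaving only the static blockquote, which needs no external service to remain readable:

    import re

    # A typical tweet embed is a <blockquote class="twitter-tweet"> followed by
    # a <script> tag that loads Twitter's widgets.js to style it. The blockquote
    # itself (tweet text, author, permalink) is plain HTML.
    EMBED_SCRIPT = re.compile(
        r'<script[^>]*src="https?://platform\.twitter\.com/widgets\.js"[^>]*>'
        r'\s*</script>',
        re.IGNORECASE,
    )

    def flatten_tweet_embeds(html: str) -> str:
        """Remove the widgets.js dependency, leaving the static blockquote."""
        return EMBED_SCRIPT.sub("", html)

    if __name__ == "__main__":
        # "post.html" / "post-archived.html" are hypothetical file names.
        with open("post.html", encoding="utf-8") as f:
            page = f.read()
        with open("post-archived.html", "w", encoding="utf-8") as f:
            f.write(flatten_tweet_embeds(page))

The result loses Twitter’s styling, but the quoted text and the permalink survive no matter what happens to the service.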

Now, in general, this is not necessarily a bad thing. I’m sure that not everything has to be preserved forever. With distance history loses resolution, and that’s alright for lots of things. Even during crises, a lot of what we do in life is mundane and inconsequential, and it rightfully gets lost in time. Now that a lot of what we do is either in cyberspace or is reflected by/in it, it’s natural that inconsequential things end up there. We don’t care what Julius Caesar had for lunch one day in October as a teenager. Likewise, the fact that an Instagram photo of a future president’s lunch is lost in time will do nothing to alter history. However, if the choice of lunch leads to missing a bus that later crashed, then the entire incident will generally be recorded. Psychohistory comes to mind.

But I digress. The point is that I like the idea, personally, of knowing that I can keep cross-references valid for what I write, and that means both having a level of control over it and reducing the number of outlets in which it appears. Hence my weblog being fairly static in structure (I converted the MT weblog to static pages back during the transition).

This also limits the tools that can be used, to some degree, and according to my theory of how the tool shapes the message, it would naturally lead to stagnation, at minimum stylistic, in what is said.

Which results in this so-called conundrum.

Trying new things is important though. That’s why I’m here. I may cross-post to my weblog for now, just for “backup,” but I am going to give Medium a try and see what happens. This entire post resulted from this experiment, and that’s a pretty good start. :-P

maybe because both words end with “y”

In an apparent confusion between the word “utility” and the word “monopoly,” the Wall Street Journal runs an opinion piece today called “The Department of the Internet” that has to be one of the most disingenuous (and incoherent) efforts to attack Net Neutrality I’ve seen in recent times. The author, currently a hedge fund manager and previously at Bell Labs/AT&T, basically explains all of the ways in which AT&T slowed down innovation, whether by omission, errors of judgment, or willful blocking of disruptive technologies.

All of them because, presumably, AT&T was classified as a “utility.” I say “presumably” because at no point does the piece establish a clear causal link between AT&T’s service being a utility and the corporate behavior he describes.

Thing is, AT&T behaved like that primarily because it was a monopoly.

And how do we know that it was its monopoly power that was the primary factor? Because phone companies never really stopped being regulated in the same way — and yet competition increased after the breakup of AT&T. In fact, you could argue that regulation on the phone system as a whole increased as a result of the breakup.

Additionally, it was regulation that forced companies to share resources they otherwise never would have shared. In fact, the example of “competition” in the piece is exactly an example of government intervention similar to what Net Neutrality would do:

“The beauty of competition is that you get network neutrality for free. AT&T cut long-distance rates in the 1980s when MCI and Sprint started competing fiercely.”

Had the government not intervened on multiple occasions (whether in the form of legislation, the courts, or the FCC, and most dramatically with the breakup), AT&T would never have allowed third parties to sell long distance to its customers, much less at lower rates than its own.

There’s more than one fallacy in the piece on how “utilities are bad”:

A boss at Bell Labs in those days explained what he called the Big Lie, using water utilities as an example. Delivering water involves mostly fixed costs. So every decade or so, water companies engineer a shortage. Less water over the same infrastructure meant that they needed to raise rates per gallon to generate returns. When the shortage ends, they spend the extra money coming in on fancy facilities, thus locking in the higher rates for another decade.

So — someone, decades ago, gave the author an example of the corruption of water companies, and regardless of whether this “example” is real, embellished, or a complete fabrication, and regardless of whether the situation is, I don’t know, maybe a little different half a century later when dealing with bits and not water molecules, it’s apparently something good to throw out there anyway. (In fact, I struggle to see exactly what AT&T could do that would be analogous to the abuse he’s describing.)

Again, this is presumed, since no causal link is established showing that, even if true, the described “bad behavior” was conclusively the result of something being a utility rather than, well, any other reason, like corruption, incompetence, or just greed.

To close — I’ve seen that a number of people/organizations (many but not all of them conservatives) are opposed to Net Neutrality. My understanding is that this is because of a fear of over-regulation. Fair enough. But have any of them thought about how it would affect them? Perhaps it’s only when it’s implemented that they will realize that their readers/customers, by an overwhelming majority, have little choice of ISPs. Very few markets have more than two choices, and almost no markets have competitive choices (i.e., choices at equivalent levels of speed or service).

But I’m sure that the Wall Street Journal, or Drudge, or whoever will be happy to pay an extra fee to every IP carrier out there so their pages and videos load fast enough and they don’t lose readers.


the importance of Interstellar

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

                                                    Dylan Thomas (1951)

Over the last few years a lot of movies -among other things- seem to have shrunk in ambition while appearing to be “bigger.” The Transformers series of movies is perhaps the best example. Best way to turn off your brain while watching fights of giant robots and cool explosions? Sure. But while mega-budget blockbusters focus on size, many of them lack ambition and scope. Art, entertainment, and movies in particular, given their reach, matter a lot in terms of what they reflect of us and what they can inspire. For all their grandiose intergalactic-battle-of-the-ages mumbo jumbo, Transformers and other similar movies always feel small, and petty. Humans in them are relegated to bit actors, props necessary for the real heroes (in this case, giant alien robots) to gain, or regain, inspiration and do what they must do. And always, always by chance. Random people turn into key characters in world-changing events just because they stumbled into the wrong, or right, (plot)hole.

Now, people turned into “the instruments of fate (or whatever),” if you will, is certainly a worthwhile theme and something that does happen. But stories in which the protagonists (and people in general) take the reins and attempt to influence large-scale events through hard work, focus, cooperation, even -gasp!- study, became less common for a while. Art reflects the preoccupations and aspirations of society, and it seems that by the mid-to-late 2000s we had become reliant on the idea of the world as reality TV: success is random and based on freakish circumstances, or, just as often, on being a freak of some sort. This isn’t a phenomenon isolated to science fiction — westerns, for example, declined in popularity but also turned “gritty” or “realistic,” in the process, for the most part, trading stories of the ‘purity of the pioneering spirit’ or ‘taming the frontier’ for cesspools of dirt, crime, betrayal and despair.

Given the reality of much of the 20th century, it was probably inevitable that a lot of art (popular or not) would go from a rosy, unrealistically happy and/or heroic view of the past, present, and future, to a depressing, excessively pessimistic view of them. Many of the most popular heroes in our recent collective imaginations are ‘born’ (by lineage, by chance, etc.) rather than ‘made’ by their own efforts or even the concerted efforts of a group. Consider: Harry Potter; the human characters in Transformers (and pretty much any Michael Bay movie since Armageddon); even more obviously commercial efforts like Percy Jackson or Twilight, along with other ‘young adult’ fiction and pretty much all other vampire movies, which have the distinction of creating ‘heroes’ simultaneously randomly and through bloodlines; the remake of Star Trek, which turned Kirk joining Starfleet into something he didn’t really want to do; the characters in The Walking Dead; the grand-daddy of all of these: Superman… and even, as much as I enjoy The Lord of The Rings, nearly everything about its view of good and evil involves little in the way of will and intent from the main characters. Characters talk a great deal about the importance of individuals and their actions, but in the end they’re all destined to do what they do, and the key turning points are best explained as either ‘fate’, simply random, or manipulated by people of ‘greater wisdom and/or power’ like Gandalf, Galadriel, Elrond and so on. Good and evil are defined along the lines of a eugenics pamphlet, in a way that gets to be creepy more often than not (the ‘best’ are fair-skinned, with blue or green eyes, and from the West; the ‘worst’ are dark-skinned, speak in hellish tongues, and are from the East; add an unhealthy obsession with bloodlines and purity of blood, and so on; Gandalf “progresses” from Gray to White, while Saruman falls from leading as Saruman the White to shrunken evil serving Sauron, the Dark Lord… as “Saruman of Many Colours”… you get the idea).

All of which is to say: I don’t think it’s a coincidence that in this environment good Science Fiction in general, and space-exploration SF in particular, is always relegated a bit, especially in movies. There is nothing random about space exploration: it requires an enormous amount of planning, study, effort, hard work, and money. You can’t inherit a good space program. It has to be painstakingly built, and supported, across decades. When a not-insignificant percentage of society flatly discards basic scientific theories in favor of religious or political dogma while giving an audience to Honey Boo Boo or Duck Dynasty, it’s not illogical for studios to finance another animated movie with talking animals rather than push people beyond their comfort zones.

Even so, there’s always been good SF, if perhaps not as frequently as SF fans would like. And over the last 20 years we have started to see Fantasy/SF stories that combine a more “realistic” view of the world with the more idealistic spirit of movies like The Right Stuff. In these we have characters succeeding, or at least ‘fighting the good fight’, through exertion of will, the resolve to change their reality. And even if there’s an element of ‘fate’ or chance in the setup, the bulk of the story involves characters that aren’t just pushed around by forces beyond their control. Nolan’s Dark Knight trilogy, Avatar, Serenity, most of Marvel’s new movies: Iron Man, Captain America, The Avengers; Watchmen. In books, the Already Dead series and the Coyote series, both of which could make for spectacularly good movies if ever produced. In TV, Deadwood, which is perhaps the best TV series of all time, was a good example of the same phenomenon — it felt realistic, but realistically complex, with characters that weren’t just swept up in events, and that exhibited more than one guiding principle or idea. We got ‘smaller’ movies like Moon that were excellent, but large-scale storytelling involving spaceflight that wasn’t another iteration of a horror/monster/action movie is something I’ve missed in the last few years.

What about last year’s Gravity? It was visually arresting and technically proficient but fairly mundane in terms of what actually happens. It’s not really inspiring — it’s basically the story of someone wrecking their car in the middle of the desert and having to make it to the next gas station… but in space, with a focus on experiencing a spiritual rebirth; in case we were confused about the metaphor, we see the main character literally crawl out of mud and water and then slowly stand and start to walk. Bullock’s character in Gravity is also one of those guided by circumstances, frequently displaying a lack of knowledge about spaceflight that even the original monkeys that flew in the early space missions would have slapped their foreheads about.

Which brings me to Interstellar. No doubt it will be compared to 2001: A Space Odyssey (with reason) and with Gravity (with less reason). Interstellar is more ambitious than 2001 in terms of science, matching or exceeding it in terms of story scope and complexity, while leaving Gravity in the dust. 2007’s Sunshine shares some of its themes and some of its serious approach to both science and fiction (… at least for the first 30 minutes or so; afterwards it shares more with Alien), as do the (in my opinion) under-appreciated Red Planet (2000) and even some elements of the much less convincing Mission to Mars. It also reminded me of Primer in terms of how seamlessly it wove pretty complex ideas into its plot.

We haven’t had a “hard” SF space movie like this for a while. Key plot points involve gravitational time-dilation, wormholes, black holes, quantum mechanics/relativity discrepancies… even a 3D representation of a spacetime tesseract (!!!!). 2001 was perfect about the mechanics of space flight, but Interstellar also gets as deep into grand-unified-theory issues as you can probably get without losing a lot of the audience, and goes much further than 1997’s Contact. There are some plot points that are weak (or, possibly, whose explanation I may have missed; I’ll need another viewing to confirm…), and sometimes there are moments that feel a bit slow or excessively, shall we say, ‘philosophical’, although in retrospect the pauses in action were effective in making what followed even more significant.

Comparisons and minor quibbles aside, Interstellar is spectacular; the kind of movie you should, nay, must watch in a theater, the bigger the screen the better, preferably in IMAX.

The movie not only has a point of view, it is unapologetic about it. It doesn’t try to be “balanced,” and it doesn’t try to mix in religion even as it touches on subjects where religion frequently is mixed in, in the name of making “all points of view heard.” Interstellar is not “anti-religion”… and it is not pro-religion either. There’s a fundamental set of circumstances in the plot that allows the movie to sidestep pretty much all of the usual politics and religion that would normally be involved. Perhaps someone can argue whether those circumstances are realistic (although something like the Manhattan Project comes to mind as an example of how it can actually happen). But the result is that the movie can focus almost exclusively on science, exploration, and our ability to change things, either individually or in groups.

This, to me, felt truly refreshing. Everything that has to do with science these days is mixed in with politics and/or religion. This also helps the story in its refusal to “dumb things down”… its embrace of complexity of ideas, even if it is less focused on specific technical details than, say, Apollo 13 was (a natural result of that film having the actual Apollo data at hand).

How many people, I wonder, know by now what NASA’s Apollo program really was? Sometimes it seems to be relegated to either conspiracy joke material or mentioned in passing to, for example, explain how your phone is more powerful than the computers that went to the moon. Somehow what was actually attempted, and what was actually achieved, isn’t remarkable anymore, and the true effort it took is less appreciated as a result. With that, we are making those things smaller, which gives us leeway to do, to be less. It makes “raging against the dying of the light” sound like a hopelessly romantic, useless notion. It justifies how approaching big challenges these days frequently happens in ways that makes us “involved” in the same way that Farmville relates to actual farming. Want to feel like you’ve solved world hunger? Donate $1 via text to Oxfam. Want to “promote awareness of ALS”? Just dump a bucket of ice water on your head. Want to “contribute in the fight against cancer”? Add a $3 donation while checking out of the supermarket. No need to get into medicine or study for a decade. Just bump your NFC-enabled phone against this gizmo and give us some money, we’ll do the rest.

I’m not saying that there is no place for those things, but recently it seems that’s the default. Why? Many commentators have talked about how these days we lack the attitude best described by Kennedy’s famous line “Ask not what your country can do for you, ask what you can do for your country.” But I don’t think the issue is not wanting to do anything, or not wanting to help. I think the issue is that we have gotten used to being scared and feeling powerless in the face of complexity. We’ve gone from the ’60s attitude of everyone being able to change the world to feeling as if we’re completely at the mercy of forces beyond our control. And we’ve gone overboard about whatever we think we can control: people freaking out about the use of child seats in cars, or worrying about wearing helmets when biking, while simultaneously doing little as societies about the far greater threat of climate change.

When education was a privilege of very few, very rich people, it was possible for pretty much everyone to accept a simplistic version of reality. That was before affordable mass travel, before realtime communications, before two devastating world wars and any number of “smaller” ones. Reality has been exposed for the truly messy, complicated thing it is and always was. But instead of embracing it we have been redefining reality downwards, hiding our collective heads in the sand, telling ourselves that small is big. Even heroism is redefined — everyone’s a hero now.

Interstellar is important not just as a great science fiction movie, not just because it is inspiring when it’s so much easier to be cynical about the past, the present or the future, but also because beyond what it says there’s also how it says it, with a conviction and clarity that is rare for this kind of production. It’s not a coincidence that it references those Dylan Thomas verses more than once. It’s an idealistic movie, and in a sense fundamentally optimistic, although perhaps not necessarily as optimistic about outcomes as it is about opportunities.

It’s about rekindling the idea that we can think big. A reminder of what we can attempt, and sometimes achieve. And, crucially, that at a time when we demand predictability out of everything, probably because it helps us feel somehow ‘in control’, it is also a reminder in more ways than one that great achievement, like discovery, has no roadmap.

Because if you always know where you’re going and how you’re getting there, you may be ‘safe’, but it’s unlikely you’ll end up anywhere new.

here’s when you get a sense that the universe is telling you something

In the same Amazon package you get:

    The latest Thomas Pynchon novel.
    The World War Z blu ray.

Telling you what, exactly… well, that is less clear.

the apple developer center downpocalypse


We’re now into day three of the Apple Developer Center being down. This is one of those instances in which Apple’s tendency to “let products speak for themselves,” an approach that ordinarily has a lot going for it, can be counterproductive. In three days we’ve gone from “Downtime, I wonder what they’ll upgrade,” to “Still down, I wonder what’s going on?” to “Still down, something bad is definitely going on.”

Which, btw, is the most likely scenario at this point. If you’ve ever been involved in 24/7 website operations you can picture what life must have been like since Thursday for dozens, maybe hundreds of people at Apple: no sleep, constant calls, writing updates to be passed along the chain, increasingly urgent requests from management wanting to know, exactly, how whatever got screwed up got screwed up, all of it competing with the much more immediate problem of actually solving the issue.

And a few people in particular, likely fewer than a dozen, are under particular pressure. I’m not talking about management (although they have pressures of their own) but the few sysadmins, devops, architects and engineers at the center of whatever team is responsible for solving the problem, a team which undoubtedly was also in charge of the actual maintenance that led to the outage in the first place, so the pressure is multiplied.

Even for global operations at massive scale, this is what it usually comes down to — a few people. They’re on the front lines, and hopefully they know that some of us appreciate their efforts and that of the teams working non-stop to solve the problem. I know I do.

The significance of the dev center is hard to see for non-developers, but it’s real and this incident will likely have ripple effects beyond the point of resolution. Days without being able to upload device IDs, or create development profiles. Schedules gone awry. Releases delayed. People will re-evaluate their own contingency plans and maybe question their app store strategy. Thousands of developers are being affected, and ultimately, this will affect Apple’s bottom line.

And that’s why this is not the kind of situation you let go on for this long unless there’s a very, very good reason (only a couple of days away from reporting quarterly results, no less). Maybe critical data was lost and they’re trying to rebuild it (what if everyone’s App IDs just went up in smoke?). Maybe it was a security breach (what if the root certs were compromised?). The likelihood that there will be consequences for developers, as opposed to just a return to the status quo, goes up with every hour that this continues. As Marco said: “[…] if you’re an iOS or Mac App Store developer, I’d suggest leaving some free time in the schedule this week until we know what happened to the Developer Center.”

In fact, it could be that at least part of the delay has to do with coming up with procedures and documentation, if not a full-on PR strategy. Apple hasn’t traditionally behaved this way, but Tim Cook has managed things very differently than Steve Jobs in this regard.

Finally, I’ve been somewhat surprised by the lack of actual reporting on this. One day, maybe two days… but three? Nothing much aside from minor posts on a few websites, and not even much on the Apple-dedicated sites. This is where real reporting is necessary: having sources that can speak to you about what’s going on. Part of the problem is that the eventual impact of this will be subtle, and modern media doesn’t do subtle very well. It’s less about the immediate impact or people out of a job than about a potential gap in future app releases. A whole industry is in fact dependent on what goes on with that little-known service, and with iOS 7/Mavericks being under NDA, Apple’s developer forums, which are also down, are the only place where you can discuss problems and file bug reports. Some developer, somewhere, is no doubt blocked from being able to do any work at all.

Apple should, perhaps against its own instincts, do its best to explain what happened and how it was dealt with. Otherwise, the feeling that this will just happen again will be hard to shake off for a lot of people. For Apple, this could be an opportunity to engage with its developer community more directly. Here’s hoping.

diego’s life lessons, part III

Excerpted from the upcoming book: “Diego’s life lessons: 99 tips for survival, fun, and profit in today’s baffling bric-a-brac world.” (see Part I and Part II).

#9 make the right career choices

Everyone will have seven careers in their lifetime, someone said once, and we all repeated it even if we have no idea why.

The key to career planning, though, is to keep in mind that while the world of today ranges from complicated to downright baffling, the world of tomorrow will be pretty predictable, since as we all know it will just be a barren hellscape populated by Zombies.

So the question is: post-Zombie Apocalypse, what will you need to be? Survival in the new Zombie-infested world will require the skills of any good D&D party: a Healer, a Warrior, a Thief, and a Wizard — which in a world without magic means someone to tinker with things, build weapons, design shelters with complicated spring traps, and knowledge of how to brew a good cup of coffee.

Clearly you don’t want to be a Healer (read: medic/doctor), since that means no one will be able to fix you — you should have friends or relatives with careers in medicine, however, for obvious reasons. Being a Thief will be of limited use, but more importantly it’s not really the kind of thing you can practice for without turning to a life of crime as defined by our pre-Zombie civilization (post-Zombies, most of the things we consider crimes today will become fairly acceptable somehow, so you may be able to pull this off with the right timing).

That leaves you with either Warrior or Wizard, which translates roughly to: Gun Nut or Hacker. And by “Hacker” we mean the early-1980s definition of hacker, rather than the bastardized 2000s version, and one that is not restricted to computers.

So. Your choices for a new career path are as follows:

  • If you’re a Nerd, become a Hacker.
  • If you’re neither a Nerd nor a Hacker, just become a Gun Nut; it’s the easiest and fastest way to post-apocalyptic survival. This way, while you wait for the Zombies to strike you won’t need to worry (for example) about a lookup being O(N) or not, or why the CPU on some random server is pegged at 99% without any incoming requests.
  • If you’re already a Gun Nut, you’re good to go. Just keep buying ammo.
  • If you’re already a Hacker… please don’t turn into an evil genius and destroy the world. Try taking up some activity that will consume your time for no reason, like playing The Elder Scrolls V: Skyrim or learning to program for Blackberry.

NOTE (I): If you’re in the medical profession, just stay put. We will protect you so you can fix our sprained ankles and such.
NOTE (II): there is also the rare combination of Hacker/Nerd+Gun Nut, but you should be aware that this is a highly volatile combination of skills which can have unpredictable results on your psyche.

#45 purchase a small island in the Pacific Ocean

As far as permanent vacation spots go, this one really is a no-brainer. Why bother with hotels when you can own a piece of slowly sinking real estate? Plus, according to highly reliable sources, you don’t need to be a billionaire.

True, you will have significant coconut-maintenance fees and you’ll probably need a small fleet of Roombas to keep the place tidy, but coconuts are delicious and the Roombas can help in following lesson #18.

NOTE I: don’t be fooled by the “Pacific” part of “Pacific Ocean.” There’s nothing “pacific” about it. There are storms, cyclones, tsunamis, giant garbage monsters, sharks, jellyfish, and any number of other dangers. Therefore, an important followup to purchasing the island is to buy an airline for it. You know, to be able to get away quickly, just in case.

NOTE II: this is actually an alternative to the career choices described above, since it is well known that Zombies can’t swim.

NOTE III: the island should not be named Krakatoa — see lesson #1. Aside from this detail, owning a Pacific island does not directly conflict with lesson #1, since the cupboard can actually be located in a hut somewhere on the island (multiple cupboard hiding spots are also advisable).

#86 stock up on Kryptonite

Ok, so let me tell you about this guy… He wears a cape and tights. He frequently disrobes in public places. He makes a living writing for a newspaper with an owner that makes Rupert Murdoch look like Edward R. Murrow. He has deep psychological scars since he is the last survivor of a cataclysmic event that destroyed his civilization. He leads a secret double life, generally disappearing whenever something terrible happens. He is an illegal alien. Also, he is an ALIEN.

Does this look like someone trustworthy to you? Hm?

That’s right. This is not a stable person.

Add to the list that he can fly, even in space, stop bullets, has X-ray vision, can (possibly) travel back in time and is essentially indestructible. How is this guy not a threat to all of humanity?

Lex Luthor was deeply misunderstood — he could see all this, but his messaging was way off. Plus there were all those schemes to Take Over The World, which should really be left to experts like genetically engineered mice.

The only solution to this menace is to keep your own personal stash of Kryptonite. Keep most of it in a cupboard (see lesson #1) and a small amount on your person at all times.

After all, you never know when this madman will show up.


When my home phone… you know, the bulky, heavy one, plugged in to a wireline (perhaps for sentimental reasons, at this point), rings… I don’t answer.


It is muted. Permanently.

There’s a generation … a group of people, a dividing line, somewhere… for whom the idea of a dialtone, of verified communication, sounds insane. Most of them are kids at this point, sure, but some aren’t. To me, it is noticeable. To others, it is alien.

A dialtone.

Think about it: how many people alive today don’t know what a dialtone is? Have never heard one?

How many people do not answer their phone because they assume it’s spam?

Spam. Email… bits, translated into voice (also bits). Video. TV, or, truthfully, the constructs that TV (and to some degree radio) created.


Something to consider…

the reason behind windows phone’s dominance in some geographies

via daringfireball, Nick Wingfield points to places in the world where Windows Phone is outselling the iPhone. Gruber notes, correctly, that these are not Apple strongholds. Blackberry is also extremely popular in those geographies.

What is special about those places? Is it that they have some cultural quirk that prevents them from appreciating iOS?

No. It’s about exchange rates and import controls.

Imports to Argentina, for example, are effectively frozen. People can’t get all sorts of things, from books to electronics. Simple kitchen appliances are in some cases hard to come by. Anecdotally, I can say with some degree of certainty that people would love to get Apple products, and yet Apple products are in extremely short supply since the government denies import licenses unless you export the same amount. Car companies export grains so they can bring in cars. RIM set up a factory in the country just so they could sell phones (you can imagine Apple, given its size and scale, didn’t bother).

As a reference, see this Businessweek article:

After months of negotiations, [BMW] figured out a fix. The government agreed to let in BMW’s vehicles as long as the company’s Argentine subsidiary exported an equivalent amount of upholstery leather, car parts … processed rice. Echeagaray worked a deal with the Ministry of Industry to get the necessary import permits.

Russia and India are not exactly the same story, but they match shades of it. The exchange-rate factor is a big issue too (more so in Russia and India than in Argentina): the cost of Apple products translates more directly into dollar terms, since they are manufactured in a few locations worldwide and priced in dollars, as opposed to being manufactured and priced locally in local currencies. This makes them expensive. No doubt Apple is making a conscious decision here to avoid devaluing its products in real terms.

assume good intentions

A good friend once told me: “Assume good intentions.” Those three words have been hugely influential in my world view in the last few years. Once you make this idea explicit it can shape how you think about what others do in significant ways.

I was reading today about some of the brouhaha surrounding Lean In and the whole why-is-a-billionaire-woman-telling-women-everywhere-what-to-do thing, and there was a reference to the launch of Circles.

Gina & Team: congratulations on the launch, it must have been a crazy effort and it looks great.

It seems it’s been building up for a while (the controversy around the book, that is) but I had not seen it until today when I read this article in The New Yorker.

Why I bring this up is that what keeps coming back to me in all of this is how our perspective in the Valley is sometimes clouded by second-hand opinions, innuendo, and gossip, for example around who got funded by whom or which idea is “in”. Yes, this is not unique to the Valley, but it happens frequently here and so I can attest to it, in my own backyard (so to speak… the actual inhabitants of my shared backyard are bluebirds and squirrels).

Putting yourself out there, through a book, art, or even, yes, software, is a hard thing to do. People misunderstand and misinterpret your intentions and motivations constantly, and the schadenfreude that is sadly all-too-common makes things even harder. But we are all just people, trying to do the best we can. The number of significant zeros in your bank account doesn’t change that in most cases. And I say that having very few significant zeros left in my own bank account.

But, funny thing (not ha-ha funny), most of the people that have such strong opinions on these things have never done them. They “talk about the book” without having “read the book.” (You really need to read The New Yorker article to get this reference). Some of my brothers-in-arms work at Evernote, but do they get press and coverage when they “just” keep an awesome service/app running? No. They get press when someone breaks into their systems.

Controversy sells.

Don’t get me wrong: critics are good. But it’s a matter of degrees. I’m not saying you need to write a book to be able to critique a book, or that you need to start a company to be able to give your opinion on how one should be run, but at the very least spend a moment and consider the effort involved. Avoid ad hominems. Forget about money for a second. Consider how much of their lives these people are sacrificing trying to do something.

Assume good intentions.

I bet that if you did that you’d find yourself a bit more forgiving of missteps, a bit more understanding, a bit more willing to believe.

And for those who are doing it, regardless of the scope or (apparent) size of your project, here’s something I could not say out loud because it would sound terrible given my accent… but I can write it: Gina, Sheryl, and all of you out there who are putting yourselves, your sanity, on the line for an idea: Give ’em hell.


kindle paperwhite: good device, but beware the glow

For all fellow book nerds out there, we close the trilogy of Kindle reviews for this year with a look at the Kindle Paperwhite, adding to the plain Kindle review and the Kindle Fire HD review.

This device has gotten the most positive reviews we’ve seen this side of an Apple launch. I don’t think I’ve read a single negative review, and most of them are positively glowing with praise. A lot of it is well deserved. The device is light, fast, and the screen is quite good. The addition of light to the screen, which everyone seems bananas about, is also welcome, but there are issues with it that could be a problem depending on your preference (more on that in a bit).


Touch response is better than on the Kindle Touch as well. There are enough minor issues with it that it’s not transparent as an interface — while reading, it’s still too easy to do something you didn’t intend (e.g. tap twice and skip ahead more than one page, or swipe improperly on the homescreen and end up opening a book instead of browsing), but it doesn’t happen so often that it gets in the way. Small annoyance.

Something I do often when reading books is highlight text and –occasionally– add notes for later collection/analysis/etc. Notes are a problem on both Kindles for different reasons (no keyboard on the first, slow-response touch keyboard on the second) but the Paperwhite gets the edge, I think. The Paperwhite is also better than the regular Kindle for selection in most cases (faster, by a mile), with two exceptions: at the end of paragraphs it’s harder than it should be to avoid selecting part of the beginning of the next, and once you highlight, the text gets block-highlighted as opposed to underlined, which not only gets in the way of reading but also results in an ugly flash when the display refreshes as you flip pages. Small annoyances #2 and #3.

Overall though, during actual long-form reading sessions I’d say it works quite well. Its quirks appear to be of the kind that you can get used to, rather than the kind you potentially can’t stand.


Speaking of things you potentially can’t stand: the Paperwhite has a flaw, minor to be sure, but visible. The light at the bottom of the screen generates a weird negative glow, “hotspots,” or a kind of blooming effect in the lower-screen area that can be, depending on lighting conditions, brightness, and your own preference, fairly annoying. Now, don’t get me wrong — sans light, this is the best eink screen I’ve ever seen. But the light is on by default and is, in part, a big selling point of the device, so it deserves a bit more attention.

Some of the other reviews mention this either in passing or not at all, with the exception of Engadget where they focused on it (just slightly) beyond a cursory mention.

Pogue over at the NYT:

“At top brightness, it’s much brighter. More usefully, its lighting is far more even than the Nook’s, whose edge-mounted lamps can create subtle “hot spots” at the top and bottom of the page, sometimes spilling out from there. How much unevenness depends on how high you’ve turned up the light. But in the hot spots, the black letters of the text show less contrast.

The Kindle Paperwhite has hot spots, too, but only at the bottom edge, where the four low-power LED bulbs sit. (Amazon says that from there, the light is pumped out across the screen through a flattened fiber optic cable.) In the middle of the page, where the text is, the lighting is perfectly even: no low-contrast text areas.”

The Verge:

“There are some minor discrepancies towards the bottom of the screen (especially at lower light settings), but they weren’t nearly as distracting as what competitors offer.”


Engadget:

“Just in case you’re still unsure, give the Nook a tilt and you’ll see it clearly coming from beneath the bezel. Amazon, on the other hand, has managed to significantly reduce the gap between the bezel and the display. If you look for it, you can see the light source, but unless you peer closely, the light appears to be coming from all sides. Look carefully and you’ll also see spots at the bottom of the display — when on a white page, with the light turned up to full blast. Under those conditions, you might notice some unevenness toward the bottom. On the whole, however, the light distribution is far, far more even than on the GlowLight.”

So it seems clear that the Nook is worse (I haven’t tried it), but Engadget was the only one to show clear shots of the differences between them, and I don’t think their screenshots clearly show what’s going on. Let me add my own. Here are three images:


The first is the screen in a relatively low-light environment at 75% screen brightness (photo taken with an iPhone 5, click on them to see them at higher res). The second two are the same image with different Photoshop filters applied to show more clearly what you can perhaps already see in the first image — those black blooming areas at the bottom of the screen, inching upwards.

The effect is slightly more visible with max brightness settings:

What is perhaps most disconcerting is that what’s most visible is not the light but the lack of it — the black areas are the parts that aren’t as illuminated as the rest, before the full effect of light distribution across the display takes hold.

Being used to the previous Kindles, when I first turned it on my immediate reaction was to think that I’d gotten a bad unit, especially because this issue wasn’t something reviews had put much emphasis on (some seemed to dismiss it altogether), but it seems that’s just how it is. Maybe it’s one of those things that you usually don’t notice but, once you do, you can’t help but notice.

So the question is — does it get in the way? After reading on it for hours I think it’s fair to say that it fades into the background and you don’t really notice it much, but I still kept seeing it, every once in a while, and when I did it would bother me. I don’t know if over time the annoyance –or the effect– will fade, but I’d definitely recommend you try to see it in a store if you can.


Weight-wise, while heavier than the regular Kindle, the Paperwhite seems to strike a good balance. You can hold it comfortably in one hand for extended periods of time, and immerse yourself in whatever you’re reading. Speaking of holding it — the material of the bezel is more of a fingerprint magnet than on previous Kindles, for some reason, and I find myself cleaning it more often than I’ve done with the others.

The original Kindle Touch was OK, but I still ended up using the lower-end Kindle for regular reading. If I can get over the screen issue, the Paperwhite may be the touch e-reader that breaks that cycle. Time will tell.

