diego's weblog

there and back again


The universe doesn’t do straight lines

“Everything you’ve learned in school as ‘obvious’ becomes less and less obvious as you begin to study the universe. For example, there are no solids in the universe. There’s not even a suggestion of a solid. There are no absolute continuums. There are no surfaces. There are no straight lines.”

– R. Buckminster Fuller (1895–1983)


If you haven’t thought about this before, it’s one of those ideas that generates a special feeling, something you’ve always known but never articulated. The kind of idea that makes you look up into the sky, eyes lost in the distance, and say “Yeah, that’s right,” while you smile and nod slightly.

It also lends itself to pseudo-myth busting: Of course there are straight lines! Here, let me show you, after which we look at a high-altitude image of salt flats, deserts, rocks, and any number of other things that appear to have reasonably straight lines here and there at different resolutions. But there’s no “reasonably straight” in math, geometry, or topology, and that’s what we’re talking about.

Even the most ubiquitous, inescapable “line” in nature, the skyline, or horizon, is not a line at all, but a curve of such a large diameter that its curvature can’t be discerned by the naked eye unless you’re, well, flying.

But… why? There’s no law of physics or biology expressly against straight lines, 90-degree angles, or perfect geometric shapes. DNA is a construct of incredible complexity. Surely a straight line wouldn’t be a problem if it had an advantage.

Thinking about it from an evolutionary/natural selection perspective it becomes clear pretty quickly that there’s little advantage to something being perfectly straight compared to anything “reasonably” straight (in the few cases in which you need it). On the other hand, “perfection” has a very clear cost.

Consider — Anything that can bend will eventually bend and cease being straight if that was its initial state. Therefore, the only way for something to remain straight for a long time with all the environmental uncertainty created by nature is for it to be extremely rigid, not flexible. So you end up with something that will not bend but that, no matter how strong, will have a point at which it breaks.

Flexibility matters. Bending is better than breaking, a fact that becomes a problem for humans since our bones get stronger as we grow, but the strength translates into less flexibility and therefore more of a chance of breaking outright.

It seems logical then that a perfectly straight line isn’t a thing you’d want in your evolutionary tree.

Speaking of trees.

Trees are interesting when you think about them in terms of construction. When we (humans, that is) started building really big things, we needed the help not only of straight lines but also of the simplest convex polyhedron possible, the tetrahedron, aka pyramid. (Even though the pyramids humans tend to build are not tetrahedrons, since they generally use a square, rather than triangular, polygonal base, the point remains.)


It’s taken us 8,000 years to figure out how to build high without building wide. Meanwhile, many trees grow tall enough and strong enough that humans can live in them, and yet their weight distribution is not unlike a pyramid standing on its tip, supported by the root structure, which while smaller in mass is generally larger in surface covered. (The triangular areas I marked in the images above reference mass, not surface.) The tensile strength and toughness of the materials used matters a lot of course, but so does what you’re trying to use them for.

If you’re just getting started at the whole civilization thing, and you’re going to build a hut to protect yourself from the elements, or a small vehicle to carry stuff, it is better to use artificial constructs (straight lines, circles, etc.) because they make calculations easier, reproduction easier, and verification easier. Early on, at small scale, knowledge can be transferred verbally, but as soon as you start writing things down, simple geometries become even more important. You could carry the design to another city, where the master builder that came up with it wouldn’t be able to verify your construction. The certainty of mathematics becomes a necessity, and the simpler the design, the simpler the concepts behind it, the easier it is not only to propagate but also to verify.

For us, then, up until well past the point when we’ve moved beyond simple construction capabilities, it pays off to expend the additional energy necessary to approach mathematical perfection. The advantages are many. The time and energy invested in, say, turning a tree trunk into lumber is acceptable not only because it is easier to use, but also because it’s easier to measure, partition, buy, sell. This, in turn, makes markets and therefore whole economies function more effectively and efficiently as well.


787 Dreamliner Wing Flex Test (source: Wired)

As you advance in building your civilization you start to see that evolving past a certain point both requires and enables flexibility in how and what you create. It’s not just about architecture, or mechanical engineering. Clothing, for example, also had to pass through a period in which mass-production constraints around what you could deliver resulted in straight lines everywhere. Think back to the sharp edges and angles in suits and dresses of the late 1940s and 50s, when mass production of those items became commonplace.


Now, Betty & Don probably aren’t fooling around with mass-produced stuff, but manufacturing capabilities invariably affect design and therefore fashion — even for high-end goods since, after all, they are part of the same ecosystem.

Attack of the rectangles

Now, this has all been very entertaining so far but my point is really (you guessed it) about software.

Straight lines are easier in software, too. Software interfaces have been somewhat stuck in the same era of rigidity in which architecture, engineering, and even clothing were trapped until the very end of the 20th century, when new processes and materials allowed us to start creating strong, bendable, curved surfaces.

Take a step back and look at your phone, or laptop screen. Start counting rectangles, or, as we could also call them, boxes.


There are boxes everywhere! Invisible boxes contain other boxes all over the place.

Don’t get me wrong, I am not saying boxes are evil or anything like that. Rectangles are fine, they’re our friends. They’re cool. In the 80s, I think it was even hip to be square for a while. But we’ve become overly reliant on them. We use them as a crutch. Instead of trying to figure out how to make something work for a specific task, we use rectangles everywhere, because we know they work, even if they aren’t perfect.

This matters because rigidity propagates directly from the interface into our thoughts. It is not the same to have an open white space to write in as to be given a small box and 140 characters. It is not.

In that vein, I don’t see it as a coincidence that there are so many great text editors around that focus on eliminating everything but what you’re typing.

Circular/Rotating dials are better than vertical knobs because the human hand has more precision on radial movements than on linear movements. Our extremities are well adapted to rotating and sliding along curves, but everything in our computers is stuck within the vertical and horizontal confines of 2D Cartesian coordinate space. With touch on devices (and 3D) we can create interfaces that are more natural and organic, and that can be better adapted ergonomically to how we operate in the real world. The moment you add any kind of spatial analysis using IR and so forth (e.g., Kinect, Leap), significant vertical and horizontal movements, while definitely useful, become secondary to the expressive power of the hand.

Some calendaring systems have now added margins around events to account for travel time, and if you happen to allow enough of your information to be constantly provided to a server, they can help find routes in advance, be smarter about alarms, and so forth.

The event itself though, fundamentally, is still some text in a box.


To begin with, no ‘event’ of any kind that you put in a calendar ever fits in a box. Aha! you’d say — many calendaring systems let you add text, attachments, and locations, and change colors, and so forth.

But you just put those things in a box, metaphorically, and literally, as far as the interface is concerned.

If you open the event you get even more boxes within boxes that contain some of these files and information, much of which is also in rectangle form.

And when you’re done, as the clock marks 1 pm, that box remains forever frozen to collect digital dust, exactly the same as it was when it started.

Finding meaning and purpose in actions

But that’s not what the event is, is it?

Whatever it is, it’s fluid. Maybe it doesn’t exactly start at that time. Maybe it doesn’t exactly end at that time. Can’t software take note of that?

Have you ever been to any kind of event, meeting, presentation, appointment, that could be properly described by the boundaries of a box or a rectangle? Something that starts here and ends there? Ever?

Say it was a meeting. What about the things that happened during the event? Why can’t you keep track of the links you loaded, the documents seen, changes made right there?

Phone calls made?

People that came and went? We had a list of participants, but we got Joe to come in and help us with something because, duh, he’s actually the one in charge of…

Right? See what I mean?

Maybe you can’t say exactly what shape all of that stuff takes, but it sure as hell doesn’t feel like it fits in anything that has right angles and a preset width and height.

Because N3xt

These ideas are obviously important to me and fundamental to how I’ve approached thinking about N3xt, but this isn’t about one system. It’s about trying on new ways to think about what we build, how we build it, and for what purpose.

We need to expand how we think about data and information to the point of challenging the modeling, storage and processing of long-standing fundamental constructs like pages, folders, lists, and so on, on “clients”. It’s a change that’s already been happening in the “backend world” for a while now, and it’s long overdue on the other side. It’s time we move past using metaphors and concepts anchored in a paper-centric, office-centric, container-centric view of the world. It’s time we let go of linear organizational schemes. Lists are great for some things, but surely they don’t have to be the foundation for everything.

All the major online services rely on incredibly complex infrastructures in which data exists and interacts in a virtual world that is so… removed from what happens when you squeeze it all into http://… and a grid of pixels that it might as well be in another universe. Backend filesystems at scale stopped looking like filesystems a while ago, just to take one example. It’s time to bring some of the magic pixie dust over to the other side and see what happens.

We also have to consistently push back against the rigid impulse to create interfaces based on grids and boxes and lists and menus. We have lived with the same fundamental ideas for almost 50 years now, since the great Douglas Engelbart cracked open the fire pits of invention with the Mother of All Demos. Desktops, files, folders, pages, “cabinets”, a digital approximation of the American Corporation in the 50s and 60s.

We’ve got the tools. We’ve got WebGL, and 3D frameworks, and inputs and sensors up the wazoo.

People aren’t going to be confused or terrified or anything like that. People are constantly adapting to new ways to interact, and we now have a generation that has grown up without knowing what a dialtone is.

In client software in particular, layers are closely linked, more interdependent than in backend software. In the server world hardware homogeneity and network layers actually help a bit in creating more elastic relationships between endpoints — so you can have something amazing like Presto, which gives you new possibilities: an aging squirrel in a wheelchair (SQL) strapped to the nose of a Space Orbiter (Presto) which can turn various toxic-otherwise-unusable solid and liquid fuels (in my metaphor, all the various horrible data sources that you can tap into… I’m looking at you, Oracle-over-JDBC) into precisely guided propulsion that will get you into orbit and back, so the squirrel gets to see space and we get to do something useful with all that horrible toxic stuff while still actually using something cool and shiny and moving things forward in between the two.

On the client, you can’t quite do that. The coupling is too close, the constraints too tight, so if your data model is literally a SQLite table with 1,000 rows and columns, and you actually use row-access mechanisms and even, God help you, normalized table structures, then it is kind of inevitable that what’s going to show up on screen will look like that. And if it doesn’t, if you’re smushing up the 1,000 rows into a neural network that will give you the ONE row you need and display just that, then why the hell are you storing and fetching stuff using SQLite? Why not just have a blob of 1,000 other blobs that you read and write atomically, along with whatever you preserve of the neural net between runs?
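To make the contrast concrete, here’s a minimal sketch of the “blob of blobs” approach. It is purely hypothetical: the file name, the StateBlob shape, and the save/load helpers are invented for illustration, not how any particular app (N3xt included) actually works. The whole client state is read and written atomically as one document, so the storage layer never pushes a row-and-column shape onto the interface.

```typescript
// Hypothetical sketch: persist client state as one atomic blob instead of
// normalized SQLite rows. All names here (StateBlob, saveState, loadState,
// app-state.json) are invented for illustration.
import { promises as fs } from "fs";
import * as path from "path";

interface StateBlob {
  records: unknown[];   // the "1,000 other blobs": opaque to the store
  modelState?: unknown; // e.g. whatever you keep of the neural net between runs
}

const STORE = path.join(process.cwd(), "app-state.json");

// Write the whole blob at once: write to a temp file, then rename it over the
// old one. The rename is atomic on the same filesystem, so a reader never
// sees a half-written state.
async function saveState(state: StateBlob): Promise<void> {
  const tmp = `${STORE}.tmp`;
  await fs.writeFile(tmp, JSON.stringify(state), "utf8");
  await fs.rename(tmp, STORE);
}

// Read the whole blob back; there is no row-level access to leak into the UI.
async function loadState(): Promise<StateBlob> {
  try {
    return JSON.parse(await fs.readFile(STORE, "utf8")) as StateBlob;
  } catch {
    return { records: [] }; // first run, or nothing persisted yet
  }
}
```

The trade-off is that you give up ad-hoc queries over rows, which is exactly the point if the interface never needed to look like a table in the first place.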


This doesn’t mean that we have to throw everything away and start from scratch.

Not what I’m saying.

When I see something like what Microsoft proposed here for HoloLens, I have to concentrate pretty hard to keep myself from hitting my head against the wall repeatedly.

Because while what we already have is ok at what it does, we just have to let go of the notion that we can keep juicing it forever.

What I’m saying is that we’ll rethink how we approach everything new, and, sure, some of the old stuff might be replaced, but that’s not the point.

So for a quick glance at your watch, just some text that says “Event, 1 hr, starts in 5m” may be fine. You could even have a little colored rectangle around it.

But back in our growing mixed reality world, dealing with ambient sensors, phones, holographic lenses and wall displays … a BB-8 replica that rolls around the room beeping and whistling quietly…. and supercomputers in our pockets, on our desks, and in the infrastructure all around us, the only straight lines should be the ones we create in our minds.

Just like the horizon.

(cross-posted to medium)

An open letter to the creators of “The Expanse”

C/O Everyone That Creates ‘The Expanse’

Possibly Somewhere in Hollywood? Dunno. Well, these are the Internets, so wherever you are…


You’re going to be cancelled.

You know it, the executives know it, and we know it. Yup, we, the audience, even those of us who forked over the cash for the iTunes season pass.

What? How do I know? While I have neither heard nor read any news in this regard whatsoever, I am absolutely certain that after you have put out several more episodes of what is without a doubt the best hard Science Fiction show in years you will face the obvious result: imminent cancellation.

That’s right. The show is too fucking good. You could have plodded along with some aliens invading or apocalyptic something-or-other, or zombies, all goo with some crappy plastic models hanging on strings, but no. You had to focus on gritty realism, character development, and seamless execution of what the day-to-day reality of a spacefaring civilization would be like. Visual and sensory detail and beauty that put you above a majority of all science fiction ever put on screen. Evolving complex plots or characters that would need several hours of “Previously on…” previews to get anyone up to speed. A worthy successor to every good show we’ve loved in recent years: Battlestar Galactica, Stargate Universe, Firefly…. Hell, the pilot is on par with some of the best movies in the genre (save for, naturally, the fact that the plot is not appropriately compressed).

Btw, speaking of Firefly. Get some Joss Whedon juice in there. A few more jokes. You need one character who isn’t brooding. Someone has to be fired up about being in space, right?

Anyway, you have chosen to put your show on TV, Syfy of all places, which as we have seen pretty much guarantees you will be cancelled in the most unexpected, gut-wrenching way possible.

Granted, you may be able to advance painfully for a few seasons, as you are constantly shut down or threatened with cancellation, and then brought back from the brink in what at this point is, I think, a concerted effort by certain television executives to whip up a fan movement that will continue to buy every book, movie, pin, photo, shred of set, or any kind of other artifact of the show for years to come. Merchandising rights. That’s what they go for, isn’t it?

Regardless. This is going to happen.

So, I beseech you to get ready now and, furthermore, I would also like to extend any offer of help I can for the campaign that will follow your upcoming cancellation. Maybe we can come up with a hashtag?

To all space nerds! Get ready as well. We should start the online howling now, just to be in shape for when it matters. Plus, I’m sure we all have some forms leftover from BSG or SGU that we can use.

To everyone else: you know, you could do us space nerds a solid and just watch this goddamn science fiction show for once. You ignored our pleas with BSG. Fine. Same with everything else, really. Just once we would like to have nice things. Put it on mute, we don’t care. Give it ratings. You can keep CSI, Survivor, and whatever else.

IMPORTANT RE: THE LAST TWO PARAGRAPHS — getting people to watch the show will not, as you would expect, stop it from getting cancelled. It will get cancelled. It will. This is a fact. The sooner we all accept this, the better. Having more people watching might make it easier to have a return season after the outrage though, so the effort still counts.




PS: Alternatively, you could let the quality of the show go to hell. This will not stop it from being cancelled, but at least it will hurt less when it happens.

PPS: And, again, Syfy? Syfy!?! Syfy has perfected this process to an art form: create a great science fiction show, then either starve it or cancel it for no apparent reason, usually “lack of audience” or “high cost.” Science fiction has an audience. Star Wars, The Martian, even Interstellar, all made tons of money. Meanwhile on TV Duck Dynasty gets eight seasons. Something’s rotten in Denmark. In some cases I understand they pick up stuff produced by others, but they seem to have a spectacular record of just butchering whatever falls into their lap. Or maybe this is all a vast, shrewd conspiracy.

PPPS: Syfy, get your game on. And before cancelling stuff, talk to Netflix or Amazon. Those guys do TV now! They can help. Hell, I’d even take Hulu.





totally like whatever, you know?

Something that continues to resonate, years after I first saw it: Taylor Mali‘s “Totally like whatever, you know?”. Spend 3 minutes and check it out, you won’t regret it. (Clip from HBO’s Russell Simmons Presents Def Poetry)

I implore you, I entreat you,
and I challenge you: To speak with conviction.
To say what you believe in a manner that bespeaks
the determination with which you believe it.
Because contrary to the wisdom of the bumper sticker,
it is not enough these days to simply QUESTION AUTHORITY.
You gotta speak with it, too.

And here’s the original version, slightly different than the one in the video.

Bonus: something I wrote a couple of years ago “honestly, let’s unpack this: it’s like, you know…very unique?”

all your tech are belong to us: media in a world of technology as the dominant force

Pop quiz: who held the monopoly on radio equipment production in the US in 1918?

General Electric? The Marconi Company?

Radio Shack? (Jk!) :)

How about the US Military?

The US entered World War I “officially” in early April, 1917. Determined to control a technology of strategic importance to the war effort, the Federal Government took over radio-related patents owned by companies in the US and gave the monopoly of manufacturing of radio equipment to the Armed Forces — which at the time included the Army, the Navy, the Marine Corps, and the Coast Guard.

This takeover was short-lived (ending in late 1918) but it would have profound effects on how the industry organized in the years and decades that followed. The War and Navy departments, intent on keeping the technology under some form of US control, arranged for General Electric to acquire the American Marconi company and secure the patents involved.

The result was the Radio Corporation of America, RCA, a public company whose controlling interest was owned by GE.

Newspapers had been vertically integrated since their inception. The technology required for printing presses and the distribution networks involved in delivering the product were all “proprietary,” in that they were controlled and evolved by the newspapers themselves. Even if the printing press had other uses, you couldn’t easily repurpose a newspaper printing press to print books, or vice versa, and even if you could secure a printing press for newspapers (a massive investment) you could not hope to easily recreate the distribution network required to get the newspaper into the hands of consumers.

This vertical integration resulted in a combination of natural and artificial barriers to entry that would let a few key players, most notably William Randolph Hearst, leverage the resulting common economic, distribution and technological foundation to effect a consolidation in the market without engendering significant opposition. Later, movie studios relied on a similar set of controls over the technology employed — they didn’t manufacture their own cameras, but by controlling creation and distribution, and with their aggregate purchasing power, they could dictate what technology was viable and how it was to be used.

Radio, early on, presented the possibility of a revolution in this regard. It could have allowed consumers to also be creators (at least on a small scale). The ability to broadcast was restricted by the size and power of the transmitter at your disposal, and you could start small. It was the first opportunity for a new medium to have the evolution of the underlying technology decoupled from the content it carried, but WWI and the intervention of the US government ensured this would not come to pass. The deal that resulted in the creation of RCA created, in effect, a similar vertical integration in radio as in other mediums (in Britain, a pioneer of broadcast radio and later TV, the government had been largely in control from the beginning through the BBC, and so radio already was “vertically integrated”).

This is a way of thinking that became embedded into how Media companies operated.

RCA went on to be at the center of the creation of the two other subsequent major media markets of the 20th century: music and television, and in both cases it extended the notion of technology as subservient to the content that it carried.

For every major new medium that appeared until late in the 20th century, media companies could control the technology that they depended on.

Over time, even as technology development broke off into its own path and started to evolve separately from media, media companies retained control of both the standards and the adoption rate (black and white to color, vinyl to CD, SD to HD, etc.). Media companies selected new technologies when and how they wanted, and they set the terms of use, the price, and the pace of its deployment. Consumers could only consume. By retaining control of the evolution of the technology through implicit control of standards, and explicit control of the distribution channels, they could retain overall control of the medium. Slowly, though, the same technology started to be used for more than one thing, and control started to slip away.

Then the Internet came along.

The great media/technology decoupling

TV, radio, CDs, even newspapers are all “platforms” in a technical sense, even if closed ones, in that they provide a set of common standards and distribution channels for information. In this way, the Internet appears to be “just another platform” through which media companies must deliver their content. This has led to the view that we are simply going through a transition not unlike that of, say, Vinyl to CDs, or Radio to TV.

That media companies can’t control the technology as they used to is clear. What is less clear is that this is a difference of kind, not of degree.

CNN can have a website, but it can neither control the technology standards or software used to build it, nor ensure that the introduction of a certain technology (say, Adobe Flash) will be followed by a period of stability long enough to recoup the investment required to use it. NBC can post shows online, but it can’t prevent millions of people from downloading the show without advertisement through other channels. Universal Studios can provide a digital copy of a movie six months after its release, but in the meantime everyone that wanted to watch it has, often without paying for it. These effects and many more are plainly visible, and as a result, prophecies involving the death of TV, the music industry, newspapers, movie studios, or radio are common.

The diagnoses are varied and they tend to focus, incorrectly, on the revenue side of the equation: it’s the media companies’ business models which are antiquated. They don’t know how to monetize. Piracy is killing them. They can’t (or won’t) adapt to new demands and therefore are too expensive to operate. Long-standing contracts get in the way (e.g. Premium channels & cable providers). The traditional business models that supported mass media throughout their existence are being made increasingly ineffective by the radically different dynamics created by online audiences, ease of copying and lack of ability to create scarcity, which drive down prices.

All of these are real problems but none of them is insurmountable, and indeed many media concerns are making progress in fits and starts in these areas and finding new sources of revenue in the online world. The fundamental issue is that control has shifted, irreversibly, out of the hands of the media companies.

For the first time in the history of mass media, technology evolution has become largely decoupled from the media that uses it, and, as importantly, it has become valuable in and of itself. This has completely inverted the power structure in which media operated, with media relegated to just another actor on a larger stage. For media companies, lack of control of the information channel used is behind each and every instance of a crack in the edifice that has supported their evolution, their profits, and their power.

Until the appearance of the Internet it was the media companies that dictated the evolution of the technology behind the medium and, as critically, the distribution channel. Since the mid-1990s, media companies have tried and generally failed to insert themselves as a force of control in the information landscape created by the digitalization of media and the Internet. Like radio and TV, the Internet includes a built-in “distribution channel,” but unlike them it does not lend itself to natural monopolies over that channel apportioned by the government. Like other media, the Internet depends on standards and devices to access it, but unlike other media the standards and devices are controlled, evolved, and manufactured by companies that see media as just another element of their platforms, and not as a driver of their existence.

This shift in control over technology standards, manufacture, demand, and evolution is without precedent, and it is the central factor driving the ongoing crisis media has found itself in since the early 90s.

Now what?

Implicitly or explicitly, what media companies are trying to do with every new initiative and every effort (DRM, new formats, paywalls, apps) is to regain control of the platform. Given the actors that now control technology, it becomes clear why they are not succeeding and what they must do to adapt.

In the past, they may have attempted to purchase the companies involved in technology, fund competitors, and the like. Some of this is going on today, with the foremost examples being Hulu and Ultraviolet. As with past technological shifts, media companies have also resorted to lobbying and the courts to attempt to maintain control, but this too is a losing proposition long-term. Trying to wrest control of technology by lawsuits that address whatever the offending technology is at any given moment, when technology itself is evolving, advancing, and expanding so quickly, is like trying to empty the ocean by using a spoon.

These attempts are not effective because the real cause of the shift in power that has occurred is beyond their control. It is systemic.

In a world where the market capitalization of the technology industry is an order of magnitude or more larger than that of the media companies (and when, incidentally, a single company, Apple, has more cash on hand than the market value of all traditional media companies combined), it should be obvious that the battle for economic dominance has been lost. Temporary victories, if any, only serve to obfuscate that fact.

The media companies that survive the current upheaval are those that accept their new role in this emerging ecosystem: one of an important player but not a dominant one (this is probably the toughest part). There still is and there will continue to be demand for content that is professionally produced.

Whenever people in a production company, or a studio, or magazine, find themselves trying to figure out which technology is better for the business, they’re having the wrong conversation. Technology should now be directed only by the needs of creation, and at the service of content.

And everyone needs to adapt to this new reality, accept it, and move on… or fall, slowly but surely, into irrelevance.

2 idiots, 1 keyboard (or: How I Learned to Stop Worrying and Love Mr. Robot)

I’d rename it “The Three Stooges in Half-Wits at Work” if not for the fact that there are four of them. We could say the sandwich idiot doesn’t count, though, but he does a good job with his line (“Is that a videogame?”) while extra points go to the “facepalm” solution of disconnecting a terminal to stop someone from hacking a server. It’s so simple! Why didn’t I think of that before!?!?!

Mr. Robot would have to go 100 seasons before it starts to balance out the stupidity that shows like NCIS, CSI and countless others have perpetrated on brains re: programming/ops/etc.

Alternative for writers that insist on not doing simple things like talking to the computer guy that keeps your studio from imploding: keep the stupid, but make it hilariously, over-the-top funny, like so:

We’ll count it even if it’s unintentional. That’s how nice we computer people are.

PS: and, btw, this, this, is why no one gets to complain about Mr. Robot’s shortcomings.

the importance of Interstellar

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

– Dylan Thomas (1951)

Over the last few years a lot of movies -among other things- seem to have shrunk in ambition while appearing to be “bigger.” The Transformers series of movies are perhaps the best example. Best way to turn off your brain while watching fights of giant robots and cool explosions? Sure. But while mega-budget blockbusters focus on size, many of them lack ambition and scope. Art, entertainment, and movies in particular, given their reach, matter a lot in terms of what they reflect of us and what they can inspire. For all their grandiose intergalactic-battle-of-the-ages mumbo jumbo, Transformers and other similar movies always feel small, and petty. Humans in them are relegated to bit actors that appear to be props necessary for the real heroes (in this case, giant alien robots) to gain, or regain, inspiration and do what they must do. And always, always by chance. Random people turn into key characters in world-changing events just because they stumbled into the wrong, or right, (plot)hole.

Now, people turned into “the instruments of fate (or whatever),” if you will, is certainly a worthwhile theme and something that does happen. But stories in which the protagonists (and people in general) take the reins and attempt to influence large-scale events through hard work, focus, cooperation, even -gasp!- study, became less common for a while. Art reflects the preoccupations and aspirations of society, and it seems that by the mid-to-late 2000s we had become reliant on the idea of the world as reality TV – success is random and based on freakish circumstances, or, just as often, on being a freak of some sort. This isn’t a phenomenon isolated to science fiction — westerns, for example, declined in popularity but also turned “gritty” or “realistic,” and in the process, for the most part, traded stories of the ‘purity of the pioneering spirit’ or ‘taming the frontier’ for cesspools of dirt, crime, betrayal and despair.

Given the reality of much of the 20th century, it was probably inevitable that a lot of art (popular or not) would go from a rosy, unrealistically happy and/or heroic view of the past, present, and future, to a depressing, excessively pessimistic view of them. Many of the most popular heroes in our recent collective imaginations are ‘born’ (by lineage, by chance, etc.) rather than ‘made’ by their own efforts or even the concerted efforts of a group. Consider: Harry Potter; the human characters in Transformers (and pretty much any Michael Bay movie since Armageddon); even more obviously commercial efforts like Percy Jackson or Twilight, along with other ‘young adult’ fiction and pretty much all other vampire movies, which have the distinction of creating ‘heroes’ simultaneously randomly and through bloodlines; the remake of Star Trek, which turned Kirk joining Starfleet into something he didn’t really want to do; the characters in The Walking Dead; the grand-daddy of all of these: Superman… and, even, as much as I enjoy The Lord of The Rings, nearly everything about its view of good and evil involves little in the way of will and intent from the main characters. Characters talk a great deal about the importance of individuals and their actions, but in the end they’re all destined to do what they do, and the key turning points are best explained as either ‘fate’, simply random, or manipulated by people of ‘greater wisdom and/or power’ like Gandalf, Galadriel, Elrond and so on. Good and evil are defined along the lines of a eugenics pamphlet in a way that gets to be creepy more often than not (the ‘best’ are fair-skinned, with blue or green eyes, and from the West; the ‘worst’ are dark-skinned, speak in hellish tongues and are from the East; add an unhealthy obsession with bloodlines and purity of blood, and so on; Gandalf “progresses” from Gray to White, while Saruman falls from being the leader as Saruman the White into shrunken evil serving Sauron, the Dark Lord… as “Saruman of Many Colours”… you get the idea).

All of which is to say: I don’t think it’s a coincidence that in this environment good Science Fiction in general, and space exploration SF in particular, is always relegated a bit, especially in movies. There is nothing random about space exploration: it requires an enormous amount of planning, study, effort, hard work, and money. You can’t inherit a good space program. It has to be painstakingly built, and supported, across decades. When a not-insignificant percentage of society flatly discards basic scientific theories in favor of religious or political dogma while giving an audience to Honey Boo Boo or Duck Dynasty, it’s not illogical for studios to finance another animated movie with talking animals rather than to push people beyond their comfort zones.

Even so, there’s always been good SF, if perhaps not as frequently as SF fans would like. And over the last 20 years we have started to see Fantasy/SF stories that combine a more “realistic” view of the world with the more idealistic spirit of movies like The Right Stuff. In these we have characters succeeding, or at least ‘fighting the good fight’, through exertion of will, the resolve to change their reality. And even if there’s an element of ‘fate’ or chance in the setup, the bulk of the story involves characters that aren’t just pushed around by forces beyond their control. Nolan’s Dark Knight trilogy, Avatar, Serenity, most of Marvel’s new movies: Iron Man, Captain America, The Avengers, Watchmen. In books, the Already Dead series and the Coyote series, both of which could make for spectacularly good movies if ever produced. In TV, Deadwood, which is perhaps the best TV series of all time, was a good example of the same phenomenon — it felt realistic, but realistically complex, with characters that weren’t just swept up in events, and that exhibited more than one guiding principle or idea. We got ‘smaller’ movies like Moon that were excellent, but large-scale storytelling involving spaceflight that wasn’t another iteration of a horror/monster/action movie is something I’ve missed in the last few years.

What about last year’s Gravity? It was visually arresting and technically proficient but fairly mundane in terms of what actually happens. It’s not really inspiring — it’s basically the story of someone wrecking their car in the middle of the desert and having to make it to the next gas station… but in space, with a focus on experiencing a spiritual rebirth, and in case we were confused about the metaphor, we see the main character literally crawl out of mud and water and then slowly stand and start to walk. Bullock’s character in Gravity is also one of those guided by circumstances, frequently displaying a lack of knowledge about spaceflight that even the original monkeys that flew in the early space missions would have slapped their foreheads about.

Which brings me to Interstellar. No doubt it will be compared to 2001: A Space Odyssey (with reason) and with Gravity (with less reason). Interstellar is more ambitious than 2001 in terms of science, matching it or exceeding it in terms of story scope and complexity, while leaving Gravity in the dust. It also shares some themes and some of the serious approach to both science and fiction with 2007’s Sunshine (… at least for the first 30 minutes or so; afterwards Sunshine shares more with Alien), as well as with the (in my opinion) under-appreciated Red Planet (2000) and even some elements of the much less convincing Mission to Mars. It also reminded me of Primer in terms of how it seamlessly wove pretty complex ideas into its plot.

We haven’t had a “hard” SF space movie like this for a while. Key plot points involve gravitational time-dilation, wormholes, black holes, quantum mechanics/relativity discrepancies… even a 3D representation of a spacetime tesseract (!!!!). 2001 was perfect about the mechanics of space flight, but Interstellar also gets as deep into grand-unified theory issues as you can probably get without losing a lot of the audience, and goes much further than 1997’s Contact. There are some plot points that are weak (or, possibly, that I may have missed an explanation for; I’ll need another viewing to confirm…), and sometimes there are moments that feel a bit slow or excessively, shall we say, ‘philosophical’, although in retrospect the pauses in action were effective in making what followed even more significant.

Comparisons and minor quibbles aside, Interstellar is spectacular; the kind of movie you should, nay, must watch in a theater, the bigger the screen the better, preferably on IMAX.

The movie not only has a point of view, it is unapologetic about it. It doesn’t try to be “balanced,” and it doesn’t try to mix in religion even as it touches on subjects into which religion frequently is mixed in the name of making “all points of view heard.” Interstellar is not “anti-religion” … and it is not pro-religion either. There’s a fundamental set of circumstances in the plot that allows the movie to sidestep pretty much all of the usual politics and religion that would normally be involved. Perhaps someone can argue whether those circumstances are realistic (although something like the Manhattan Project comes to mind as an example of how it can actually happen). But the result is that the movie can focus almost exclusively on science, exploration, and our ability to change things, either individually or in groups.

This, to me, felt truly refreshing. Everything that has to do with science these days is mixed in with politics and/or religion. This also helps the story in its refusal to “dumb things down”…  its embrace of complexity of ideas, even if less focused on a lot of specific technical details than, say, Apollo 13 was, which is a natural result of having the Apollo data at hand.

How many people, I wonder, know by now what NASA’s Apollo program really was? Sometimes it seems to be relegated to either conspiracy joke material or mentioned in passing to, for example, explain how your phone is more powerful than the computers that went to the moon. Somehow what was actually attempted, and what was actually achieved, isn’t remarkable anymore, and the true effort it took is less appreciated as a result. With that, we are making those things smaller, which gives us leeway to do, to be less. It makes “raging against the dying of the light” sound like a hopelessly romantic, useless notion. It justifies how approaching big challenges these days frequently happens in ways that make us “involved” in the same way that Farmville relates to actual farming. Want to feel like you’ve solved world hunger? Donate $1 via text to Oxfam. Want to “promote awareness of ALS”? Just dump a bucket of ice water on your head. Want to “contribute in the fight against cancer”? Add a $3 donation while checking out of the supermarket. No need to get into medicine or study for a decade. Just bump your NFC-enabled phone against this gizmo and give us some money, we’ll do the rest.

I’m not saying that there is no place for those things, but recently it seems that’s the default. Why? Many commentators have talked about how these days we lack an attitude best described by Kennedy’s famous line “Ask not what your country can do for you; ask what you can do for your country.” But I don’t think the issue is not wanting to do anything, or not wanting to help. I think the issue is that we have gotten used to being scared and feeling powerless in the face of complexity. We’ve gone from the 60s attitude of everyone being able to change the world to feeling as if we’re completely at the mercy of forces beyond our control. And we’ve gone overboard about whatever we think we can control: people freaking out about the use of child seats in cars, or worrying about wearing helmets when biking, while simultaneously doing little as societies about the far greater threat of climate change.

When education was a privilege of very few, very rich people, it was possible for pretty much everyone to accept a simplistic version of reality. That was before affordable mass travel, before realtime communications, before two devastating world wars and any number of “smaller” ones. Reality has been exposed for the truly messy, complicated thing it is and always was. But instead of embracing it we have been redefining reality downwards, hiding our collective heads in the sand, telling ourselves that small is big. Even heroism is redefined — everyone’s a hero now.

Interstellar is important not just as a great science fiction movie, not just because it is inspiring when it’s so much easier to be cynical about the past, the present or the future, but also because beyond what it says there’s also how it says it, with a conviction and clarity that is rare for this kind of production. It’s not a coincidence that it references those Dylan Thomas verses more than once. It’s an idealistic movie, and in a sense fundamentally optimistic, although perhaps not necessarily as optimistic about outcomes as it is about opportunities.

It’s about rekindling the idea that we can think big. A reminder of what we can attempt, and sometimes achieve. And, crucially, that at a time when we demand predictability out of everything, probably because it helps us feel somehow ‘in control’, it is also a reminder in more ways than one that great achievement, like discovery, has no roadmap.

Because if you always know where you’re going and how you’re getting there, you may be ‘safe’, but it’s unlikely you’ll end up anywhere new.

indiana… smith

via an old post from Mystery Man on Film, The “Raiders” Story Conference: the transcripts of meetings in 1978 during which George Lucas, Steven Spielberg and Lawrence Kasdan ironed out what would become Raiders of The Lost Ark. It is really something to see the movie unfold in the discussion, the recurring themes and references (e.g. James Bond), the highly structured way in which Lucas (in particular) approached the story-crafting process, and moments like this, when Lucas first names the character:

Kasdan: Do you have a name for this person?

Lucas: I do for our leader.

Spielberg: I hate this, but go ahead.

Lucas: Indiana Smith. It has to be unique. It’s a character. Very Americana square. He was born in Indiana.

Kasdan: What does she call him, Indy?

Lucas: That’s what I was thinking. Or Jones. Then people can call him Jones.

If you’re interested at all in art, movies, or the creative process in general, the transcript and Mystery Man’s analysis are a must-read. (Almost a Movie has more formats).

must watch (canceled) tv

There is no doubt in my mind that TV has gotten measurably better in the last decade or so. Something, I imagine, having to do with people figuring out how to really create art in a medium that is relatively young by historical standards. Setting aside the vagaries of the physical medium of TV (which Netflix, with House of Cards just proved pretty convincingly didn’t matter, if HBO hadn’t done that already…) there’s the episodic nature of it, the idea that this isn’t something you go watch in a theater but that you experience at home, either by yourself or with others.

Sometimes TV series are canceled before they even get to the point of closing off the story in a good way, usually after one season. Every once in a while those canceled series still stand the test of time, and even if the story is left unfinished they’re still worth watching. I thought I’d add two here that fall in that category, with the caveat that if you get into them you should fully expect to be frustrated when you reach the end.

Rubicon (13 episodes, 1 season, 2010)

Perhaps what I appreciate the most about Rubicon is the silences. No dialog, just long stretches in which people do what they do in everyday life… like being in their apartment by themselves, for example. Not constant action and interaction between characters…. But people being alone and still moving the story forward. This is extraordinarily difficult to pull off and Rubicon does it really well. Almost everything in Rubicon is against the grain. It’s a conspiracy thriller set in our post-9/11 world with a distinct 1970s vibe, where people carry around huge piles of paper, memos, and reports and rarely use computers. Subtle character building instead of in-your-face exposition. Steady but slow story building, with strands emerging until it all comes together in the last few episodes. As far as I can tell it is only available through Amazon Instant Video, but you may be able to find it through, um, other means. AMC has dropped the ball on not having this on iTunes, or DVD/Blu-Ray. Then again, they nearly destroyed The Walking Dead in Season 2, and almost managed to kill Mad Men over some silly argument around a few extra minutes per episode, so I’m not that surprised. I’m rooting for them to do better, though.

If you like movies like The Conversation (1974), Three Days Of The Condor (1975) or The Parallax View (1974) then you are sure to enjoy this series. Note the dates on those movies — not a coincidence.

SGU: Stargate Universe (40 episodes, 2 seasons, 2009-2011)

Stargate became, to some degree, the heir of Star Trek as a TV Science Fiction franchise, but SGU took things to the next level. The writing is spectacularly good, and the “cliche problem” is almost non-existent, as are occurrences of Deus ex machinas (I say again: almost). SF classics like Rama and 2001: A Space Odyssey are clearly strong influences here. It is available pretty much everywhere, including Netflix. It ends with a semi-cliffhanger that will almost certainly never be properly resolved (maybe a Kickstarter campaign could fix that… but I’m not holding my breath).

If you like the re-imagined Battlestar Galactica then this series is a must-watch. It is one of the few series I’ve seen that does Science Fiction right (two others that come to mind at the moment are Caprica and Firefly, both cancelled as well — perhaps the subject of a follow-up post). Watching SGU makes you wonder if its writers and producers were also following Ron Moore’s “Battlestar Galactica Series Bible” (Google that, if you don’t know what I’m talking about).

The BSG Bible has more to say about the “cliche problem” I mentioned before:

Story. We will eschew the usual stories about parallel universes, time-travel, mindcontrol, evil twins, God-like powers and all the other cliches of the genre. Our show is first and foremost a drama. It is about people. Real people that the audience can identify with and become engaged in. It is not a show about hardware or bizarre alien cultures. It is a show about us. It is an allegory for our own society, our own people and it should be immediately recognizable to any member of the audience. 

(My emphasis). SGU does “break” those rules now and again, certainly more than BSG ever did. But it does not do it because it’s out of things to say; it does it in the interest of the overall story arc, which in my mind makes it acceptable. For BSG, for example, Edward James Olmos revealed later that he had a clause in his contract that no strange aliens or monsters would ever appear on the show, because he wanted to ensure that the story stayed focused on human drama (basically, if a monster or alien showed up, he would just drop dead of a heart attack at that point). Apparently, this made the writers nervous when they introduced the concept of Hybrids, but Olmos was fine with that because it fit the story and was a natural outgrowth of it (thank the Gods! heh). What SGU does is generally within that framework.

So, enjoy! And be prepared to scream (silently… or not) at your TV at the end. You are going to wish these series had arrived at an appropriate conclusion.

honestly, let’s unpack this: it’s like, you know…very unique?

I am fascinated by (obsessed with?) slang, colloquialisms, jargon, argot, and of course language use and misuse in general. Perhaps most entertaining are slang and colloquialisms that pop up and become widespread in the space of a few years.

“Honestly…,” “Let’s unpack this,” and a few notable others have become more frequent (at least from my point of view) and I wanted to dissect them a bit and think about what could be behind them.

New terms or ways of communicating can be hard to see “appear” sometimes, since they enter everyday language incrementally, and the best part is that some of them may not be new at all, “new” defined here as “having popped up in the last 10 years”, but they may be new to me as they become common or even pervasive in the conversations I have and the information landscape that I inhabit.

There’s more than pure nerdish entertainment to this. For one thing, it can be used as a lens through which to look at society and culture, but more specifically at organizations and what makes them tick. Religions, in particular, are an interesting subtype of organization since some of them maintain their high-level structures for hundreds or thousands of years. For example, Scientology’s obsession with redefining language is notable in that they are at the extreme end of the spectrum, combining both jargon and repurposing of common language, which naturally affects how we communicate and therefore relate to, and to some degree how we perceive, reality.

Startups go through a similar (even if simultaneously more overt and less structured) process in this regard. Most of us have seen how companies have their own terminology for everything. In engineering, in particular, you could literally sit through an entire conversation about infrastructure between two engineers from the same company and never know what they’re talking about, while in marketing or sales they don’t so much invent terminology as repurpose it freely, leading to an overloading of commonly used terms that can sometimes create confusion (e.g. “Active users” or even “pageviews”).

I’m not saying that startups, tech companies, or even non-tech companies are cults (Apple’s perception as such notwithstanding…), but there are some similarities that I think speak to the need of a group, no matter of what kind, to define itself as separate from everyone else, and, of the mechanisms necessary for that to happen, language is one of the easier starting points.

But back to more widely shared colloquialisms and/or slang: here are a few personal favorites that I’ve observed becoming more common in recent years, and some of my own musings on what’s behind them.

Some of these trigger “old man yells at cloud” syndrome in me, since (apparently) I have a hard time handling the cognitive dissonance, sheer nonsense, or just plain lack of meaning involved.

“Like, you know…” and the invisible question mark that follows

This one is fairly established, dating I think back to the mid-90s. And it hasn’t just endured, it has become so widespread and entrenched that it’s definitely worth mentioning.

It’s one of the most fascinating colloquialisms in my opinion. It’s a simile in which the structure that follows “like” is not explicit, but rather vaguely points to some idea that perhaps, maybe, hopefully, the other person shares in some indeterminate way in the statement we’re about to make, while expressing that we really don’t care too much one way or the other.

It is maddening to me to be in a conversation in which the other person constantly trails off, attaching “like, you know”s and question marks at the end of sentences. We are, apparently, not supposed to have conviction anymore, and language tinted with this construct communicates that clearly. It says: I have nothing invested in this statement.

All too often, in fact, “Like, you know…?” has no follow up at all and it just trails off, the question mark implicit in the inflection of our voice, the interrogative tone, the you know parenthetical. It’s filler, pretending that you’re saying something when you really aren’t, a statement without content, a commitment to nothing in particular that nevertheless creates the impression that we’re communicating. Whatever is said gets turned into a question, something to be challenged on the receiving end. But when the receiver also answers with similar lack of definition, then it’s just a bunch of words strung together, isn’t it? A charade: because, actually, we don’t want to have a real conversation.

Declarative language, the straight-up statement of beliefs, of facts, of what we know to be true even if it is subjective, has been appropriated by the extremes, the Glenn Becks of the world. The alternative, nuance and complexity of thought, is in everyone else often replaced by a quivering indecision.

The flip side of this indecision is how we pretend to counteract it with an earnest declaration: “Honestly…”


This type of preface or clarification instantly triggers, at least for me, the thought that the rest of what the other person’s been saying has not been “honest.” Not “dishonest” necessarily, but the addition raises the level of whatever comes after over what came before. And, when it’s used constantly it just makes me question everything.

Aside from combining it with “like, you know…”, to give the appearance of weight while simultaneously reducing the importance of what we’re saying, “honestly” is also used in many other cases. Why are we suddenly using this modifier so frequently? Is it that in a world where The Onion‘s headlines can appear as serious as those in The New York Times we have suddenly decided that, by default, everything is suspect? Or is it perhaps that PR, marketing and advertising are so pervasive that we look at everything with skepticism and some degree of mistrust, requiring the additional emphasis of “honestly” to separate what we say from what we’re supposed to say? Maybe a bit of both. Ironically, advertising continues to be pretty effective. Instead of applying these filters to ads, we look at everything with suspicion.

“Let’s unpack this”

This one seems to have become more common in the last couple of years. I don’t know if it’s been traced back to its origins, but it seems to me that it’s a byproduct of technology, both explicitly and implicitly: partly a matter of lack of trust, but also of increased (real and perceived) complexity.

Explicitly: software first, where so many things are “packaged” and have to be “unpacked” before you can look at them. More importantly, e-commerce has created a relatively new phenomenon of boxes everywhere. We all get packages at home or the office that have to be unpacked. Think back to pre-e-commerce days: how common was it to get a package? For most people, not very. Now unpacking is a frequent action in our daily lives.

Implicitly: everything around us now has layers within layers, a Matryoshka doll of seemingly never-ending complexity. The phrase “let’s take a look under the hood” used to be applicable beyond cars because the world generally had one level. You’d open the hood and there was the engine. Done. Now “under the hood” is just the first of many layers, even in cars (batteries, microprocessors, software…). A phone is no longer just a phone, and you can even have a phone built into your car, never mind connected to it. A car contains maps. The maps contain reviews. The reviews link to social media. And on and on it goes. The ongoing merging of cyberspace and meatspace leads us down rabbit holes in everything we touch.

Which also relates to “Honestly,” since “unpacking” is often applied to statements by public officials, and even to facts. The only way you would need to “unpack” a statement is if its true meaning, or its different interpretations, were “packed” under the “wrapping” of its surface. Orwellian doublespeak (or maybe newspeak?) ingrained to the degree that the default assumption becomes that there’s hidden meaning, or inherent obfuscation. Hence “Honestly” may be functioning as a vaccine for “Unpacking,” something that communicates “unpacking not required.”

“Very unique”

Once more, I chalk this one up to an attempt to counteract the lack of trust we’ve come to assume in what’s communicated. It is more commonly used by marketing types, but recently I’ve heard it with alarming frequency in other contexts.

Something is either unique, or it isn’t. It can’t be “very unique,” or “incredibly unique.” Period. But I suppose that when words like “unique” have become overused, we start to add adjectives in the hopes of making it clear that, yes, this is unique, as opposed to all those other things that we say are unique even if they’re not.

This is the most egregious misuse of an adjective, but there are others. I typically use words like “beautiful,” “love,” “hate,” and others sparingly, because their weight is diminished by attaching them to everything. I like rain in itself, but also because I appreciate sunny days more when they’re juxtaposed with the alternative, and vice versa.

If everything is beautiful, if beautiful is the norm, then how do we talk about something that is special, that touches us beyond that? We start adding superlatives: “incredibly beautiful,” “profoundly beautiful” and so forth (“profound” is another overused term these days, now that I think about it). Until that becomes the way we refer to even the menu transition of an iPhone app, or some icon, or the color of a couch, at which point our language leaves us little room to enjoy the occasional genuinely good thing, because we have done away with contrast by turning everything into positive happy feelings.

Most of the time, nothing remarkable happens, our lives are routine, and that should be just fine. Also, a lot of things just suck. And that’s a good thing, because if they didn’t we wouldn’t be able to tell when they don’t.

kindle paperwhite: good device, but beware the glow

For all fellow book nerds out there, we close this year’s trilogy of Kindle reviews with a look at the Kindle Paperwhite, adding to the plain Kindle review and the Kindle Fire HD review.

This device has gotten the most positive reviews we’ve seen this side of an Apple launch. I don’t think I’ve read a single negative one, and most are positively glowing with praise. A lot of it is well deserved. The device is light, fast, and the screen is quite good. The addition of a light to the screen, which everyone seems to go bananas over, is also welcome, but it has issues that could be a problem depending on your preferences (more on that in a bit).


Touch response is better than on the Kindle Touch as well. There are enough minor issues that it’s not a transparent interface: while reading, it’s still too easy to do something you didn’t intend to (e.g. tap twice and skip ahead more than one page, or swipe improperly on the homescreen and open a book instead of browsing). But it doesn’t happen so often that it gets in the way. Small annoyance.

Something I do often when reading books is highlight text and, occasionally, add notes for later collection/analysis/etc. Notes are a problem on both Kindles for different reasons (no keyboard on the first, a slow-response touch keyboard on the second), but the Paperwhite gets the edge, I think. The Paperwhite is also better than the regular Kindle for selection in most cases (faster, by a mile), with two exceptions: at the end of a paragraph it’s harder than it should be to avoid selecting part of the beginning of the next one, and once you highlight a passage the text gets block-highlighted rather than underlined, which not only gets in the way of reading but also produces an ugly flash when the display refreshes as you flip pages. Small annoyances #2 and #3.

Overall, though, during actual long-form reading sessions I’d say it works quite well. Its quirks seem to be the kind you can get used to, rather than the kind you potentially can’t stand.


Speaking of things you potentially can’t stand, the Paperwhite has a flaw, minor to be sure, but visible: the light at the bottom of the screen generates a weird negative glow, “hotspots” or a kind of blooming effect in the lower-screen area that can be, depending on lighting conditions, brightness, and your own preferences, fairly annoying. Now, don’t get me wrong: sans light, this is the best e-ink screen I’ve ever seen. But the light is on by default and is in part a big selling point of the device, so it deserves a bit more attention.

Some of the other reviews mention this only in passing or not at all, with the exception of Engadget, which went (just slightly) beyond a cursory mention.

Pogue over at the NYT:

“At top brightness, it’s much brighter. More usefully, its lighting is far more even than the Nook’s, whose edge-mounted lamps can create subtle “hot spots” at the top and bottom of the page, sometimes spilling out from there. How much unevenness depends on how high you’ve turned up the light. But in the hot spots, the black letters of the text show less contrast.

The Kindle Paperwhite has hot spots, too, but only at the bottom edge, where the four low-power LED bulbs sit. (Amazon says that from there, the light is pumped out across the screen through a flattened fiber optic cable.) In the middle of the page, where the text is, the lighting is perfectly even: no low-contrast text areas.”

The Verge:

“There are some minor discrepancies towards the bottom of the screen (especially at lower light settings), but they weren’t nearly as distracting as what competitors offer.”


And Engadget:

“Just in case you’re still unsure, give the Nook a tilt and you’ll see it clearly coming from beneath the bezel. Amazon, on the other hand, has managed to significantly reduce the gap between the bezel and the display. If you look for it, you can see the light source, but unless you peer closely, the light appears to be coming from all sides. Look carefully and you’ll also see spots at the bottom of the display — when on a white page, with the light turned up to full blast. Under those conditions, you might notice some unevenness toward the bottom. On the whole, however, the light distribution is far, far more even than on the GlowLight.”

So it seems clear that the Nook is worse (I haven’t tried it), but Engadget was the only one to show clear shots of the differences between them, and I don’t think their photos clearly show what’s going on. Let me add my own. Here are three images:


The first is the screen in a relatively low-light environment at 75% brightness (photos taken with an iPhone 5; click on them to see higher-res versions). The other two are the same image with different Photoshop filters applied to show more clearly what you can perhaps already see in the first: those black blooming areas at the bottom of the screen, inching upwards.
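If you want to reproduce that step without Photoshop, a rough equivalent is a simple grayscale conversion plus an aggressive contrast stretch. Here’s a minimal sketch using Python and Pillow; the filename and the exact settings are placeholders, and pretty much any hard levels/contrast push will surface the same unevenness:

    # A rough, hypothetical equivalent of the Photoshop step: convert to grayscale
    # and exaggerate contrast so the uneven illumination near the bottom of the
    # screen becomes obvious. Assumes Pillow is installed; "paperwhite.jpg" is a
    # placeholder name for the photo of the screen.
    from PIL import Image, ImageOps, ImageEnhance

    img = Image.open("paperwhite.jpg").convert("L")   # brightness is all we care about
    img = ImageOps.autocontrast(img, cutoff=2)        # stretch levels, clipping 2% at each end
    img = ImageEnhance.Contrast(img).enhance(3.0)     # push contrast hard to reveal the blooming
    img.save("paperwhite-contrast.png")

The levels/curves tool in any photo editor does the same thing; the point is only to amplify the brightness differences the eye already picks up at the bottom of the screen.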

The effect is slightly more visible with max brightness settings:

What is perhaps most disconcerting is that what stands out is not the light but the lack of it: the black areas are the parts that aren’t as illuminated as the rest, before the light fully distributes across the display.

Being used to the previous Kindles, my immediate reaction when I first turned it on was to think I’d gotten a bad unit, especially because the reviews had either put little emphasis on this issue or dismissed it altogether. But it seems that’s just how it is. Maybe it is one of those things that you usually don’t notice but, once you do, you can’t stop noticing.

So the question is: does it get in the way? After reading on it for hours I think it’s fair to say that it fades into the background and you don’t really notice it much, but I still kept seeing it every once in a while, and when I did it would bother me. I don’t know whether, over time, the annoyance or the effect itself will fade, but I’d definitely recommend trying to see it in a store if you can.


Weight-wise, while heavier than the regular Kindle, the Paperwhite strikes a good balance. You can hold it comfortably in one hand for extended periods and immerse yourself in whatever you’re reading. Speaking of holding it: the material of the bezel is more of a fingerprint magnet than on previous Kindles, for some reason, and I find myself cleaning it more often than I’ve had to with the others.

The original Kindle Touch was OK, but I still ended up using the lower-end Kindle for regular reading. If I can get over the screen issue, the Paperwhite may be the touch e-reader that finally breaks that cycle. Time will tell.
