diego's weblog

there and back again


encryption is bad news for bad guys! (and other things we should keep in mind)

Once again, a senseless act of violence shocks us and enrages us. Prevention becomes a hot topic, and we end up having a familiar “debate” about technology, surveillance, and encryption; more specifically, about how to either eliminate or weaken encryption. Other topics are mentioned in passing (somehow, gun control is not), but ‘controlling’ encryption seems to win the day as The Thing That Apparently Would Solve A Lot Of Problems.

However, as of now, there is zero indication that encryption played any part in preventing security services from stopping the Paris attacks. There wasn’t a message with a date and names and a time, sitting in front of a group of detectives, encrypted.

I feel obligated to mention this, even if it should be obvious by now. “If only we could know what they’re saying” sounds reasonable. It ignores the fact that you would need incredibly invasive, massive, non-stop surveillance of everyone; but setting that tiny detail aside, it comes back to the (flawed) argument of “you don’t need encryption if you have nothing to hide.”

First off, needing to hide something doesn’t mean you’re a criminal. Setting aside our own intelligence and military services, this is what keeps Chinese dissidents alive (to use one of a myriad examples), and I’m sure there are a few kids growing up in ISIS-controlled areas that are using encrypted channels to pass along books, movies (plus, probably some porn), or to discuss how to get the hell out of there. In less extreme territory, hiding is instrumental in many areas of everyday life, say, planning surprise parties. Selective disclosure is a necessary component in human interaction. 

There’s only one type of debate we should be having about encryption, and it is how to make it easier to use and more widespread. How to make it better, not how to weaken it.

Because encryption can’t be uninvented, and, moreover, widespread secure communications don’t help criminals or terrorists; they hurt them.

(1) Encryption can’t be uninvented

A typical first-line-of-defense argument for encryption goes: “eliminating or weakening encryption does nothing to prevent criminals or terrorists from using encryption of their own.” Any criminals or terrorists (from now on, “bad guys”) with minimal smarts would know how to add their own encryption layer to any standard communication channel. The only bad guys you’d catch would be either lazy or stupid.

“Aha!” says the enthusiastic anti-encryption advocate. “That’s why we need to make sure all the algorithms contain backdoors.” What about all the books that describe these algorithms before the backdoors? Would we erase the memory of the millions of programmers, mathematicians, or anyone that’s ever learned about this? And couldn’t the backdoors be used against us? Also, get this: you don’t even need a computer to encrypt messages! With just pen and paper you can effectively use any number of cyphers that in some cases are quite strong (e.g., one-time pads, or multilayered substitution cyphers). Shocking, I know.
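To make the point concrete: a one-time pad is just combining each character of a message with a truly random key of the same length, which you can do with pen and paper using modular addition, or on a computer with XOR. A minimal sketch for illustration (the function names are mine, not any library’s; real use has strict requirements, chiefly never reusing a key):

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: XOR each byte against a truly
    random key of the same length. The key must never be reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at dawn"
ciphertext, key = otp_encrypt(message)
assert otp_decrypt(ciphertext, key) == message
```

With a genuinely random, single-use key the ciphertext is information-theoretically secure, which is exactly why “uninventing” this is not an option: the whole scheme fits in a dozen lines, or on the back of an envelope.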

The only way to “stop” encryption from being used by bad guys would be to uninvent it. Which, hopefully, we can all agree is impossible.

Then there’s the positive argument for encryption. It’s good for us, and bad for bad guys.

(2) Herd immunity, or, Encryption is bad for bad guys

Maybe we in technology haven’t done a good job of explaining this to law enforcement, politicians, or the public at large, but there’s a second, more powerful argument that we often fail to make: widespread secure & encrypted communication and data storage channels hinder, not help, criminals, terrorists, and other assorted psychos.

That’s right. Secure storage and communications hurt bad guys.

Why? Simple: in order to operate, prepare, obtain resources, and plan, bad guys need three things: money, time, and anonymity. They obtain these by leeching off their surroundings.

More and more frequently, terrorists finance their activities with cybercrime: stealing identities and credit cards, phishing attacks, and so forth. If everyone’s communications and storage (not just individuals’ but also banks’, stores’, etc.) were always encrypted and more secure, criminals would have a much harder time financing their operations.

That is, to operate with fewer restrictions, bad guys need to be able to exploit their surroundings. The more protected their surroundings are, the more exposed they are. More security and encryption also mean it’s harder to obtain a fake passport, create a fake identity, or steal someone else’s.

Epidemiologists have a term for this: herd immunity. Vaccines work only when in widespread use, for two reasons: first, the higher the percentage of immune individuals, the fewer avenues a disease has to spread; and, just as importantly, the lower the probability that a non-immune individual will interact with an infected one.
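The herd-immunity intuition can be made concrete with the classic threshold formula: an outbreak fades once each case infects fewer than one new person. A quick sketch (the R0 values below are rough textbook figures, not from this post):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Minimum immune fraction for an outbreak to die out.

    Each infected person would otherwise infect r0 others; if a
    fraction `immune` of their contacts is protected, the effective
    number is r0 * (1 - immune). Requiring that to fall below 1
    gives immune > 1 - 1/r0.
    """
    return 1 - 1 / r0

# Highly contagious diseases need near-universal immunity:
print(herd_immunity_threshold(15))   # measles-like R0, ~0.93
print(herd_immunity_threshold(1.5))  # flu-like R0, ~0.33
```

The analogy to encryption is direct: the more of the “herd” that is protected by default, the fewer soft targets there are for the bad guys to leech off.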

More advanced encryption and security also helps police agencies and security services. If the bad guys can’t get into your network or spy on your activities, you have more of a chance of catching them. The first beneficiaries of strong encryption are the very agencies tasked with defending us.


Dictatorships and other oppressive regimes hate encryption for a reason. Secure, widespread communication also strengthens public discourse. It makes communication channels harder to attack, allowing the free flow of information to continue in the face of ideologies who want nothing less than to shut it down and lock everyone into a single way of thinking, acting, and behaving.


(Postscript) Dear media: to have a real conversation we need your help, so get a grip and calm down. 

The focus on encryption is part of looking for quick fixes when there aren’t any. In our fear and grief we demand answers and “safety,” even to a degree that is clearly not possible. We cannot be 100% safe. I think people in general are pretty reasonable, and know this. But it’s kind of hard to stay that way when we are surrounded by news reports that have all the subtlety and balance of a chicken running around with its head cut off. We are told that the “mastermind” (or “architect”) of the attack is still at large. We hear of “an elaborate international terror operation.” On day 3, the freakout seems to be intensifying, so much so that a reporter asks the President of the United States: “Why Can’t We Take Out These Bastards?”

The Paris attacks were perpetrated by a bunch of suicidal murderers with alarm clocks, a few rifles, bullets, and some explosives. Their “plan” amounted to synchronizing their clocks and then opening fire and/or blowing themselves up on a given date at roughly the same time, a time chosen for maximum damage.

“Mastermind”? Reporters need to take a deep breath and put things in context. This wasn’t complicated enough to be “masterminded.” We’re not dealing with an ultra-sophisticated criminal organization headed by a Bond villain ready to deploy a doomsday device. This is a bunch of thugs with wristwatches and Soviet-era rifles. They are lethal, and we need to fight back. But they are not an existential threat to our civilization. We are stronger than that.

With less of an apocalyptic tone to the reporting we could have a more reasonable conversation about the very real and complex reality behind all of this. Naive? Maybe. Still —  it doesn’t hurt to mention it.



all your tech are belong to us: media in a world of technology as the dominant force

Pop quiz: who held the monopoly on radio equipment production in the US in 1918?

General Electric? The Marconi Company?

Radio Shack? (Jk!) :)

How about the US Military?

The US entered World War I “officially” in early April, 1917. Determined to control a technology of strategic importance to the war effort, the Federal Government took over radio-related patents owned by companies in the US and gave the monopoly of manufacturing of radio equipment to the Armed Forces — which at the time included the Army, the Navy, the Marine Corps, and the Coast Guard.

This takeover was short-lived (ending in late 1918) but it would have profound effects on how the industry organized in the years and decades that followed. The War and Navy departments, intent on keeping the technology under some form of US control, arranged for General Electric to acquire the American Marconi company and secure the patents involved.

The result was the Radio Corporation of America, RCA, a public company whose controlling interest was owned by GE.

Newspapers had been vertically integrated since their inception. The technology required for printing presses and the distribution networks involved in delivering the product were all “proprietary,” in that they were controlled and evolved by the newspapers themselves. Even if the printing press had other uses, you couldn’t easily repurpose a newspaper printing press to print books, or vice versa, and even if you could secure a printing press for newspapers (a massive investment) you could not hope to easily recreate the distribution network required to get the newspaper into the hands of consumers.

This vertical integration resulted in a combination of natural and artificial barriers to entry that would let a few key players, most notably William Randolph Hearst, leverage the resulting common economic, distribution, and technological foundation to effect a consolidation in the market without engendering significant opposition. Later, movie studios relied on a similar set of controls over the technology employed — they didn’t manufacture their own cameras, but by controlling creation and distribution, and with their aggregate purchasing power, they could dictate what technology was viable and how it was to be used.

Radio, early on, presented the possibility of a revolution in this regard. It could have allowed consumers to also be creators (at least on a small scale). The ability to broadcast was restricted by the size and power of the transmitter at your disposal, and you could start small. It was the first opportunity for a new medium to have the evolution of the underlying technology decoupled from the content it carried, but WWI and the intervention of the US government ensured this would not come to pass. The deal that resulted in the creation of RCA created, in effect, a similar vertical integration in radio as in other mediums (in Britain, a pioneer of broadcast radio and later TV, the government had been largely in control from the beginning through the BBC, and so radio already was “vertically integrated”).

This is a way of thinking that became embedded in how media companies operated.

RCA went on to be at the center of the creation of the two other major media markets of the 20th century, music and television, and in both cases it extended the notion of technology as subservient to the content it carried.

For every major new medium that appeared until late in the 20th century, media companies could control the technology that they depended on.

Over time, even as technology development broke off into its own path and started to evolve separately from media, media companies retained control of both the standards and the adoption rate (black and white to color, vinyl to CD, SD to HD, etc.). Media companies selected new technologies when and how they wanted, and they set the terms of use, the price, and the pace of its deployment. Consumers could only consume. By retaining control of the evolution of the technology through implicit control of standards, and explicit control of the distribution channels, they could retain overall control of the medium. Slowly, though, the same technology started to be used for more than one thing, and control started to slip away.

Then the Internet came along.

The great media/technology decoupling

TV, radio, CDs, even newspapers are all “platforms” in a technical sense, even if closed ones, in that they provide a set of common standards and distribution channels for information. In this way, the Internet appears to be “just another platform” through which media companies must deliver their content. This has led to the view that we are simply going through a transition not unlike that of, say, Vinyl to CDs, or Radio to TV.

That media companies can’t control the technology as they used to is clear. What is less clear is that this is a difference of kind, not of degree.

CNN can have a website, but it can neither control the technology standards or software used to build it, nor ensure that the introduction of a certain technology (say, Adobe Flash) will be followed by a period of stability long enough to recoup the investment required to use it. NBC can post shows online, but it can’t prevent millions of people from downloading the show without advertisement through other channels. Universal Studios can provide a digital copy of a movie six months after its release, but in the meantime everyone that wanted to watch it already has, often without paying for it. These effects and many more are plainly visible, and as a result, prophecies involving the death of TV, the music industry, newspapers, movie studios, or radio are common.

The diagnoses are varied and they tend to focus, incorrectly, on the revenue side of the equation: it’s the media companies’ business models which are antiquated. They don’t know how to monetize. Piracy is killing them. They can’t (or won’t) adapt to new demands and therefore are too expensive to operate. Long-standing contracts get in the way (e.g. Premium channels & cable providers). The traditional business models that supported mass media throughout their existence are being made increasingly ineffective by the radically different dynamics created by online audiences, ease of copying and lack of ability to create scarcity, which drive down prices.

All of these are real problems but none of them is insurmountable, and indeed many media concerns are making progress in fits and starts in these areas and finding new sources of revenue in the online world. The fundamental issue is that control has shifted, irreversibly, out of the hands of the media companies.

For the first time in the history of mass media, technology evolution has become largely decoupled from the media that uses it, and, as importantly, it has become valuable in and of itself. This has completely inverted the power structure in which media operated, with media relegated to just another actor on a larger stage. For media companies, lack of control of the information channel used is behind each and every instance of a crack in the edifice that has supported their evolution, their profits, and their power.

Until the appearance of the Internet it was the media companies that dictated the evolution of the technology behind the medium and, as critically, the distribution channel. Since the mid-1990s, media companies have tried and generally failed to insert themselves as a force of control in the information landscape created by the digitalization of media and the Internet. Like radio and TV, the Internet includes a built-in “distribution channel,” but unlike them it does not lend itself to government-apportioned natural monopolies over that channel. Like other media, the Internet depends on standards and devices to access it, but unlike other media the standards and devices are controlled, evolved, and manufactured by companies that see media as just another element of their platforms, and not as a driver of their existence.

This shift in control over technology standards, manufacture, demand, and evolution is without precedent, and it is the central factor driving the ongoing crisis media has found itself in since the early ’90s.

Now what?

Implicitly or explicitly, what media companies are trying to do with every new initiative and every effort (DRM, new formats, paywalls, apps) is to regain control of the platform. Given the actors that now control technology, it becomes clear why they are not succeeding and what they must do to adapt.

In the past, they may have attempted to purchase the companies involved in technology, fund competitors, and the like. Some of this is going on today, with the foremost examples being Hulu and Ultraviolet. As with past technological shifts, media companies have also resorted to lobbying and the courts to attempt to maintain control, but this too is a losing proposition long-term. Trying to wrest control of technology by lawsuits that address whatever the offending technology is at any given moment, when technology itself is evolving, advancing, and expanding so quickly, is like trying to empty the ocean by using a spoon.

These attempts are not effective because the real cause of the shift in power that has occurred is beyond their control. It is systemic.

In a world where the market capitalization of the technology industry is an order of magnitude or more larger than that of the media companies (and when, incidentally, a single company, Apple, has more cash on hand than the market value of all traditional media companies combined), it should be obvious that the battle for economic dominance has been lost. Temporary victories, if any, only serve to obfuscate that fact.

The media companies that survive the current upheaval will be those that accept their new role in this emerging ecosystem: that of an important player, but not a dominant one (this is probably the toughest part). There is, and will continue to be, demand for content that is professionally produced.

Whenever people in a production company, or a studio, or magazine, find themselves trying to figure out which technology is better for the business, they’re having the wrong conversation. Technology should now be directed only by the needs of creation, and at the service of content.

And everyone needs to adapt to this new reality, accept it, and move on… or fall, slowly but surely, into irrelevance.

the planet remade: now, with asteroids!

Let me begin with a book recommendation: The Planet Remade: How Geoengineering Could Change The World by Oliver Morton.

I would change the title of this book to “The Planet Remade: How Geoengineering Has Changed The World And Will Continue To Change It As Long As Humans Are Monkeying On It, In It, and Around It.” But I understand that might be a less catchy title.

Look, I accept the distinction Morton makes between ‘willful change’ and not, and he needs to establish some boundaries for the discussion. It’s pretty clear we’ve already created massive changes in the planet’s systems. We have altered its features, most obviously by redirecting rivers, creating dams, digging giant tunnels into mountains, covering hundreds of thousands of square miles with concrete, cement, asphalt and all kinds of other crazy stuff (like, say… putting golf courses in the middle of deserts), and (mostly for bad reasons) blowing up lots and lots of different places. We have pumped and continue to pump trillions of tons of gases and chemicals into the biosphere. Geoengineering is already happening, so how about we do it for something other than manufacturing complicated barbeque grills, phone cases and Christmas tree decorations?

The book’s discussion on the transformation of the nitrogen cycle is particularly interesting, since this was a key factor in making Norman Borlaug’s high-yield dwarf “superwheat” a feasible crop at large scale (dwarf wheat consumes more nitrogen). Much is frequently said of Borlaug’s work and the Nobel prize he got for it (and with good reason) but less is known about the massive geoengineering activity that started before that work and made it possible.

Geoengineering will be a key element in reversing some of the effects of climate change, since it is pretty clear that “just” reducing emissions won’t cut it.

Just sulfate it.

If I had to bet on a method for climate engineering that’s going to be used in the next few decades, I’d go for stratospheric sulfate aerosols — which the book covers well. Why? As The Joker in TDK said of gasoline and dynamite: “They’re cheap!” If none of the world powers is going to do it, any one of a number of other countries will eventually decide that it’s time to stop the ocean from erasing their coast sooner rather than later. The consequences of this could lead to (surprise!) war, perhaps even nuclear war, which Morton discusses as well. Nothing like some optimism about saving the planet sprinkled with apocalyptic thinking. Just kidding, that’s something important to discuss too. (Nuclear winter is also discussed in terms of its climate impact).

Near the end the book spends a good amount of time talking about asteroids, but not in the way I thought would be … kind of obvious. It focuses on asteroids as an Extinction Level Event. Dino-killer, etc. The point he makes is that the various ideas discussed around how to stop an asteroid from crashing to earth are in a way similar to the idea of using geoengineering to save us from a different kind of cataclysm.

This is an interesting argument but….

Asteroid Mining + Stratospheric Aerosols = Profit!

Fine… maybe not profit, just saving the world. My point is, what the book doesn’t discuss is the use of asteroids for geoengineering… and not as an argument. It mentions “asteroid wrangling” but all hope is dashed when we see that it’s talking about moving an asteroid off-course to prevent it from hitting earth. Ridiculous. We have Bruce Willis for that!

One of my personal obsessions is the topic of asteroid mining. Yes, within the next few decades we will begin mining asteroids, there’s no doubt in my mind about that. And it seems inevitable to me that we’ll also be using some of the results of that for climate engineering via the stratosphere (and later to create massive structures in orbit around the planet).

Why? Because the biggest cost in seeding the stratosphere is energy; specifically, the energy you need to spend to move millions of tons of what is essentially dust from the ground (where it is manufactured cheaply) to its stratospheric destination 8-10 kilometers or more above the surface of the earth, depending on latitude. This “cost” is more of a logistical cost than a pure energy cost. How so?

Option A: Airplane!

(Not the movie). Let’s say we are going to seed a million tons of sulfate aerosols into the stratosphere.

The energy required to lift a mass of a million tons of material to a height of 10,000 meters would be ~98.1 terajoules (give or take a joule, E = m × g × h) = ~27 GWh (gigawatt-hours) = ~27,000,000 kWh. In the US (with an average energy cost of 12c/kWh), just lifting the dust would cost at minimum about 3.3 million dollars. Add to that the necessary costs for stamps, copy paper, printing receipts and office parties, copies of Microsoft Windows, safety goggles, and such, and the cost would rise by several million more. So round it up to 10. 10 MM USD = 1 million tons of material at stratospheric height.
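The arithmetic is simple enough to check in a few lines (a sketch: gravitational potential energy only, ignoring aircraft, fuel, and all the other real-world overhead):

```python
# Back-of-the-envelope: energy and cost to lift 1 million metric tons
# of material to 10 km altitude, E = m * g * h.
G = 9.81                # m/s^2, gravitational acceleration
MASS_KG = 1e9           # 1 million metric tons in kilograms
HEIGHT_M = 10_000       # stratospheric altitude used in the text
USD_PER_KWH = 0.12      # rough US average electricity price

energy_j = MASS_KG * G * HEIGHT_M   # joules
energy_kwh = energy_j / 3.6e6       # 1 kWh = 3.6e6 J
cost_usd = energy_kwh * USD_PER_KWH

print(f"{energy_j / 1e12:.1f} TJ")            # ~98.1 TJ
print(f"{energy_kwh / 1e6:.2f} million kWh")  # ~27.25 million kWh
print(f"${cost_usd / 1e6:.2f} million")       # ~$3.27 million
```

The pure-energy number is tiny compared to the logistics costs discussed below, which is the whole point: it’s not the joules that are expensive, it’s getting them delivered at altitude.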

Now, the Mount Pinatubo eruption in 1991 is estimated to have injected 20 million tons of sulfates and resulted in an estimated 0.5 C cooldown across the planet within a year. This cooldown dissipated as quickly as it arrived (at least in geological terms) so a long term geoengineering operation would require adding sulfates for several years, perhaps decades.

With this we could derive a “baseline cost” of 200 million dollars to make global temperatures drop half a degree centigrade within a year. Sounds cheap! We could have a two-for-one offer and make it an even degree cooler.

The energy transfer, sadly, is not “pure,” and therefore neither is the cost. If you are spreading the material from, say, a plane, the weight of the plane, the fuel, transport from the factory to the airfield, and so forth also come into play. The logistics chain and equipment required become really complicated, really fast. Not impossible by any means, just complicated and much more costly, running into the billions. For a less hand-wavy (and more systematic, but way longer) analysis, see Geoengineering Cost Analysis and Costs and economics of geoengineering.

Here’s where asteroids come into play.

Option B: Asteroids!

(Not the game). Using asteroids for this purpose seems to me like a perfect match. Any nasty by-products of the mining and manufacture remain in space, where hazardous chemical waste is not a problem since a lot of the stuff out there is already hazardous chemicals, plus no one can hear you scream.

Asteroids contain enough material to either obtain what you need directly or synthesize it using micro-factories landed on the asteroid (or built there by other micro-factories) for that purpose.

The energy required for the deployment of the material will be far lower (you’ll always need some amount of energy expenditure in the form of thrusters and the dispersion device), but you would be able to rely on gravity to do most of the work (if the asteroid in question has been captured and placed in orbit around the earth, even better). Instead of fighting gravity, we’d use it to our advantage.

Most of the maneuvers involved in transferring material would rely on gravity assists rather than rockets (plus aerobraking for atmospheric reentry when needed), which makes them cheaper and, something that is hardly ever mentioned, less prone to failure, simply because there are fewer components in the system, particularly components of the very large, very explosive kind, like the Saturn V’s S-IC or the Space Shuttle’s SRBs.

Now that people are excitedly talking about the possibility that we may have found a Dyson Sphere in our own neighborhood (KIC 8462852 FTW – only 1,480 light years away!), talking about these types of projects could sound to people more like science and less like science fiction. As a bonus, this gets us closer to a Type II civilization. We’ll definitely need to throw a party when that happens.

TL;DR go read this book. It’s very likely that stratospheric sulfate aerosols will be used for climate engineering within the next few decades. But why wouldn’t we use asteroid capture and mining for that? Can this possibly be a new idea? Also: Dyson Spheres!

PS: I haven’t found discussion of this type of sourcing of material for geoengineering, so should this be a new idea I fully expect my fair share of the massive profits. Just let me know and I’ll send my bank information. Can’t send funds myself though, most of my money is in the hands of a Nigerian prince who is using it to process an inheritance.

the multichannel conundrum

(x-post to Medium)

I’ve been writing online for quite a while now. My earliest posts date back to late 2001/early 2002. I tried a bunch of different platforms and eventually settled on MovableType running on my own server, and a few years back I moved to hosted WordPress, where my primary weblog remains. As I’ve been revving up my writing in recent weeks I started wondering about other options.

why write where

Now, some people may think of posting in as many places as you can in purely utilitarian terms, as a way to “increase distribution” or whatever. I, however, think about it in terms of the mental space the tool creates, and how it affects my output. Which affects me. This effect is not restricted to online writing, where social feedback loops can be created instantly. I think the tool has a direct, real effect on what you write. All things being equal, writing on a typewriter will lead to something different than if you used, say, Notepad on Windows 95. I’m sure there are studies about this that confirm my completely unfounded assertion. However, I am not going to go on a yak-shaving expedition in an attempt to find out. Let us assume there are, and if not, then let’s agree there should be… and if not we can disagree*.

*Should someone object and try to say that we can “agree to disagree” then I will point out that, no, “agreeing to disagree” is just plain disagreeing but pretending you don’t, probably to avoid an actual conversation. “Agreeing to disagree” is to “disagreeing” what “agnostic” is to “atheist.”

A lot of what I write, of what I’ve always written, is long form. And a lot of what I write, of what I’ve always written, is connected. Not superficially, not just thematically, but actually connected, a long-running thread of obsessions and topics that expand (and, less frequently, collapse) non-linearly. Sometimes I’ve written hypertextually, simultaneously creating meaningful minor blocks of meaning and greater ideas that emerge out of the non-directed navigation of references between those minor blocks. By the by, I know “hypertextually” is not really a word, but I think it conveys what I mean.

While that structure is amusing to my brain (and possibly other brains!), it can have a fate worse than becoming incomprehensible: becoming invisible. If you see something that you don’t understand you have a choice to spend time and try to understand it, but if you don’t see something, regardless of complexity, well…

content survivability

So trying to keep that structure somewhat visible means lots of cross-referencing, which means what I write has to have exceptional survivability. This is less easy than it sounds. Services start and close down. Linking mechanisms change. Technically, theoretically, there’s nothing really preventing hyperlinked content from remaining available for referencing in perpetuity; in practice, perpetuity can be, and often is, a very, very short time. An easy example is Twitter and the tweet-boxes that they insist people must use to reference tweets. Some people take screenshots; most use the tweet boxes. Eventually Twitter will change, morph, be acquired, shut down, or maybe not, but I guarantee you that at some point in the next 10–20 years those boxes will simply stop working. At that time, regardless of how standards-compliant the HTML of the pages that contain those tweets is, they will be crippled, possibly severely. How many times have you read a news story recently that talks about how so-and-so tweeted such-and-such and it’s outrageous? Archive.org and its wonderful Wayback Machine don’t solve this issue.

Now, in general, this is not necessarily a bad thing. I’m sure that not everything has to be preserved forever. With distance, history loses resolution, and that’s alright for lots of things. Even during crises, a lot of what we do in life is mundane and inconsequential, and it rightfully gets lost in time. Now that a lot of what we do is either in cyberspace or is reflected by/in it, it’s natural that inconsequential things end up there. We don’t care what Julius Caesar had for lunch one day in October as a teenager. Likewise, the fact that an Instagram photo of a future president’s lunch is lost in time will do nothing to alter history. However, if the choice of lunch leads to missing a bus that later crashed, then the entire incident will generally be recorded. Psychohistory comes to mind.

But I digress. The point is that I like the idea, personally, of knowing that I can maintain cross references valid for what I write, and that means having both a level of control over it as well as reducing the number of outlets in which it appears. Hence my weblog being fairly static in structure (I converted the MT weblog to static pages back during the transition).

This also limits the tools that can be used, to some degree, and according to my theory of how the tool shapes the message, that would naturally lead to stagnation, stylistic at the very least, in what is said.

Which results in this so-called conundrum.

Trying new things is important though. That’s why I’m here. I may cross-post to my weblog for now, just for “backup,” but I am going to give Medium a try, and see what happens. This post resulted entirely from that experiment, and that’s a pretty good start. :-P

maybe because both words end with “y”

In an apparent confusion between the word "utility" and the word "monopoly," the Wall Street Journal runs an opinion piece today called "The Department of the Internet" that has to be one of the most disingenuous (and incoherent) efforts to attack Net Neutrality I've seen in recent times. The author, currently a hedge fund manager and previously at Bell Labs/AT&T, basically explains all of the ways in which AT&T slowed down innovation, either by omission, errors of judgment, or willful blocking of disruptive technologies.

All of them because, presumably, AT&T was classified as a “utility.” I say “presumably” because at no point does the piece establish a clear causal link between AT&T’s service being a utility and the corporate behavior he describes.

Thing is, AT&T behaved like that primarily because it was a monopoly.

And how do we know that it was its monopoly power that was the primary factor? Because phone companies never really stopped being regulated in the same way — and yet competition increased after the breakup of AT&T. In fact, you could argue that regulation on the phone system as a whole increased as a result of the breakup.

Additionally, it was regulation that forced companies to share resources they otherwise would never have. In fact the example of “competition” in the piece is exactly an example of government intervention similar to what Net Neutrality would do:

“The beauty of competition is that you get network neutrality for free. AT&T cut long-distance rates in the 1980s when MCI and Sprint started competing fiercely.”

Had the government not intervened on multiple occasions (whether in the form of legislation, the Courts, or the FCC, and most dramatically with the breakup), AT&T would never have allowed third parties to sell long distance to their customers, much less at lower rates than its own.

There's more than one fallacy in the piece on how "utilities are bad":

A boss at Bell Labs in those days explained what he called the Big Lie, using water utilities as an example. Delivering water involves mostly fixed costs. So every decade or so, water companies engineer a shortage. Less water over the same infrastructure meant that they needed to raise rates per gallon to generate returns. When the shortage ends, they spend the extra money coming in on fancy facilities, thus locking in the higher rates for another decade.

So — someone, decades ago, gave an example of the corruption of water companies to the author, and regardless of whether this “example” is true or not, real, embellished or a complete fabrication, and regardless of whether the situation is, I don’t know, maybe a little different half a century later and dealing with bits and not water molecules, it’s apparently something good to throw out there anyway. (In fact, I struggle to see exactly what AT&T could do that would be analogous to the abuse he’s describing).

Again, this is presumed, since no causal link is established in the sense that if true, the described ‘bad behavior’ is conclusively the result of something being a utility rather than, well, any other reason, like corruption, incompetence, or just greed.

To close — I've seen that a number of people/organizations (many but not all of them conservatives) are opposed to Net Neutrality. My understanding is that this is because of fear of over-regulation. Fair enough. Have any of them thought about how it would affect them? Perhaps it's only when it's implemented that they will realize that their readers/customers, by an overwhelming majority, have little choice of ISPs. Very few markets have more than two choices, and almost no markets have competitive choices (i.e., choices that are at equivalent levels of speed or service).

But I’m sure that the Wall Street Journal, or Drudge, or whoever will be happy to pay an extra fee to every IP carrier out there so their pages and videos load fast enough and they don’t lose readers.


the importance of Interstellar

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

                                                    Dylan Thomas (1951)

Over the last few years a lot of movies -among other things- seem to have shrunk in ambition while appearing to be "bigger." The Transformers series of movies are perhaps the best example. Best way to turn off your brain while watching fights of giant robots and cool explosions? Sure. But while mega-budget blockbusters focus on size, many of them lack ambition and scope. Art, entertainment, and movies in particular, given their reach, matter a lot in terms of what they reflect of us and what they can inspire. For all their grandiose intergalactic-battle-of-the-ages mumbo jumbo, Transformers and other similar movies always feel small, and petty. Humans in them are relegated to bit actors that appear to be props necessary for the real heroes (in this case, giant alien robots) to gain, or regain, inspiration and do what they must do. And always, always by chance. Random people turn into key characters in world-changing events just because they stumbled into the wrong, or right, (plot)hole.

Now, people turned into "the instruments of fate (or whatever)," if you will, is certainly a worthwhile theme and something that does happen. But stories in which the protagonists (and people in general) take the reins and attempt to influence large-scale events through hard work, focus, cooperation, even -gasp!- study, became less common for a while. Art reflects the preoccupations and aspirations of society, and it seems that by the mid-to-late 2000s we had become reliant on the idea of the world as reality TV – success is random and based on freakish circumstances, or, just as often, on being a freak of some sort. This isn't a phenomenon isolated to science fiction — westerns, for example, declined in popularity but also turned "gritty" or "realistic" and in the process, for the most part, traded stories of the 'purity of the pioneering spirit' or 'taming the frontier' for cesspools of dirt, crime, betrayal and despair.

Given the reality of much of the 20th century, it was probably inevitable that a lot of art (popular or not) would go from a rosy, unrealistically happy and/or heroic view of the past, present, and future, to a depressing, excessively pessimistic view of them. Many of the most popular heroes in our recent collective imaginations are 'born' (by lineage, by chance, etc.) rather than 'made' by their own efforts or even the concerted efforts of a group. Consider: Harry Potter; the human characters in Transformers (and pretty much any Michael Bay movie since Armageddon); even more obviously commercial efforts like Percy Jackson or Twilight, along with other 'young adult' fiction and pretty much all other vampire movies, which have the distinction of creating 'heroes' simultaneously randomly and through bloodlines; the remake of Star Trek, which turned Kirk joining Starfleet into something he didn't really want to do; the characters in The Walking Dead; the grand-daddy of all of these: Superman… and, even, as much as I enjoy The Lord of The Rings, nearly everything about its view of good and evil involves little in the way of will and intent from the main characters. Characters talk a great deal about the importance of individuals and their actions, but in the end they're all destined to do what they do and the key turning points are best explained as either 'fate', simply random, or manipulated by people of 'greater wisdom and/or power' like Gandalf, Galadriel, Elrond and so on.
Good and evil are defined along the lines of a eugenics pamphlet in a way that gets to be creepy more often than not (the 'best' are fair-skinned, with blue or green eyes, and from the West, the 'worst' are dark-skinned, speak in hellish tongues and are from the East, along with an unhealthy obsession with bloodlines and purity of blood, and so on; Gandalf "progresses" from Gray to White, while Saruman falls from being the leader as Saruman the White into shrunken evil serving Sauron, the Dark Lord… as "Saruman of Many Colours"… you get the idea).

All of which is to say: I don't think it's a coincidence that in this environment good Science Fiction in general, and space exploration SF in particular, is always relegated a bit, especially in movies. There is nothing random about space exploration: it requires an enormous amount of planning, study, effort, hard work, and money. You can't inherit a good space program. It has to be painstakingly built, and supported, across decades. When a not-insignificant percentage of society flatly discards basic scientific theories in favor of religious or political dogma while giving an audience to Honey Boo Boo or Duck Dynasty, it's not illogical for studios to finance another animated movie with talking animals rather than push people beyond their comfort zones.

Even so, there's always been good SF, if perhaps not as frequently as SF fans would like. And over the last 20 years we have started to see Fantasy/SF stories that combine a more "realistic" view of the world with the more idealistic spirit of movies like The Right Stuff. In these we have characters succeeding, or at least 'fighting the good fight', through exertion of will, the resolve to change their reality. And even if there's an element of 'fate' or chance in the setup, the bulk of the story involves characters that aren't just pushed around by forces beyond their control. Nolan's Dark Knight trilogy, Avatar, Serenity, most of Marvel's new movies: Iron Man, Captain America, The Avengers; Watchmen. In books, the Already Dead series and the Coyote series, both of which could make for spectacularly good movies if ever produced. In TV, Deadwood, which is perhaps the best TV series of all time, was a good example of the same phenomenon — it felt realistic, but realistically complex, with characters that weren't just swept up in events, and that exhibited more than one guiding principle or idea. We got 'smaller' movies like Moon that were excellent, but large-scale storytelling involving spaceflight that wasn't another iteration of a horror/monster/action movie is something I've missed in the last few years.

What about last year's Gravity? It was visually arresting and technically proficient but fairly mundane in terms of what actually happens. It's not really inspiring — it's basically the story of someone wrecking their car in the middle of the desert and having to make it to the next gas station… but in space, with a focus on experiencing a spiritual rebirth; in case we were confused about the metaphor, we see the main character literally crawl out of mud and water and then slowly stand and start to walk. Bullock's character in Gravity is also one of those guided by circumstances, frequently displaying a lack of knowledge about spaceflight that even the original monkeys that flew in the early space missions would have slapped their foreheads about.

Which brings me to Interstellar. No doubt it will be compared to 2001: A Space Odyssey (with reason) and with Gravity (with less reason). Interstellar is more ambitious than 2001 in terms of science, matching or exceeding it in story scope and complexity, while leaving Gravity in the dust. 2007's Sunshine shares some themes and some of the serious approach to both science and fiction (… at least for the first 30 minutes or so; afterwards it shares more with Alien), as does the (in my opinion) under-appreciated Red Planet (2000) and even some elements of the much less convincing Mission to Mars. It also reminded me of Primer in terms of how it seamlessly wove pretty complex ideas into its plot.

We haven't had a "hard" SF space movie like this for a while. Key plot points involve gravitational time-dilation, wormholes, black holes, quantum mechanics/relativity discrepancies… even a 3D representation of a spacetime tesseract (!!!!). 2001 was perfect about the mechanics of space flight, but Interstellar also gets as deep into grand-unified theory issues as you can probably get without losing a lot of the audience, and goes much further than 1997's Contact. There are some plot points that are weak (or, possibly, for which I may have missed an explanation; I'll need another viewing to confirm…), and sometimes there are moments that feel a bit slow or excessively, shall we say, 'philosophical', although in retrospect the pauses in action were effective in making what followed even more significant.
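As an aside, the time-dilation the movie plays with is easy to get a feel for numerically. Here's a minimal back-of-the-envelope sketch, assuming a non-rotating (Schwarzschild) black hole; the film's extreme "one hour equals seven years" effect actually requires a rapidly spinning black hole, so this simplified version understates what's possible:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon of a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

def time_dilation_factor(r, mass_kg):
    """Proper time elapsed per unit of far-away coordinate time,
    for a clock hovering at radius r in the Schwarzschild metric."""
    rs = schwarzschild_radius(mass_kg)
    return math.sqrt(1 - rs / r)

# A clock hovering at 1.25x the horizon radius ticks at
# sqrt(1 - 0.8) of the rate of a distant observer's clock.
sun_mass = 1.989e30  # kg
rs = schwarzschild_radius(sun_mass)
print(round(time_dilation_factor(1.25 * rs, sun_mass), 3))  # → 0.447
```

The factor goes to zero as the clock approaches the horizon, which is why "slow" and "close" are effectively the same thing in the story.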

Comparisons and minor quibbles aside, Interstellar is spectacular; the kind of movie you should, nay, must watch in a theater, the bigger the screen the better, preferably on IMAX.

The movie not only has a point of view, it is unapologetic about it. It doesn't try to be "balanced," and it doesn't try to mix in religion even as it touches on subjects in which it frequently is mixed in the name of making "all points of view heard." Interstellar is not "anti religion" … and it is not pro-religion either. There's a fundamental set of circumstances in the plot that allows the movie to sidestep pretty much all of the usual politics and religion that would normally be involved. Perhaps someone can argue whether those circumstances are realistic (although something like the Manhattan project comes to mind as an example of how it can actually happen). But the result is that the movie can focus almost exclusively on science, exploration, our ability to change things, either individually or in groups.

This, to me, felt truly refreshing. Everything that has to do with science these days is mixed in with politics and/or religion. This also helps the story in its refusal to “dumb things down”…  its embrace of complexity of ideas, even if less focused on a lot of specific technical details than, say, Apollo 13 was, which is a natural result of having the Apollo data at hand.

How many people, I wonder, know by now what NASA’s Apollo program really was? Sometimes it seems to be relegated to either conspiracy joke material or mentioned in passing to, for example, explain how your phone is more powerful than the computers that went to the moon. Somehow what was actually attempted, and what was actually achieved, isn’t remarkable anymore, and the true effort it took is less appreciated as a result. With that, we are making those things smaller, which gives us leeway to do, to be less. It makes “raging against the dying of the light” sound like a hopelessly romantic, useless notion. It justifies how approaching big challenges these days frequently happens in ways that makes us “involved” in the same way that Farmville relates to actual farming. Want to feel like you’ve solved world hunger? Donate $1 via text to Oxfam. Want to “promote awareness of ALS”? Just dump a bucket of ice water on your head. Want to “contribute in the fight against cancer”? Add a $3 donation while checking out of the supermarket. No need to get into medicine or study for a decade. Just bump your NFC-enabled phone against this gizmo and give us some money, we’ll do the rest.

I'm not saying that there is no place for those things, but recently it seems that's the default. Why? Many commentators have talked about how these days we lack an attitude best described by Kennedy's famous line "Ask not what your country can do for you; ask what you can do for your country." But I don't think the issue is not wanting to do anything, or not wanting to help. I think the issue is that we have gotten used to being scared and feeling powerless in the face of complexity. We've gone from the '60s attitude of everyone being able to change the world to feeling as if we're completely at the mercy of forces beyond our control. And we've gone overboard about whatever we think we can control: people freaking out about the use of child seats in cars, or worrying about wearing helmets when biking, while simultaneously doing little as societies about the far greater threat of climate change.

When education was a privilege of very few, very rich people, it was possible for pretty much everyone to accept a simplistic version of reality. That was before affordable mass travel, before realtime communications, before two devastating world wars and any number of “smaller” ones. Reality has been exposed for the truly messy, complicated thing it is and always was. But instead of embracing it we have been redefining reality downwards, hiding our collective heads in the sand, telling ourselves that small is big. Even heroism is redefined — everyone’s a hero now.

Interstellar is important not just as a great science fiction movie, not just because it is inspiring when it’s so much easier to be cynical about the past, the present or the future, but also because beyond what it says there’s also how it says it, with a conviction and clarity that is rare for this kind of production. It’s not a coincidence that it references those Dylan Thomas verses more than once. It’s an idealistic movie, and in a sense fundamentally optimistic, although perhaps not necessarily as optimistic about outcomes as it is about opportunities.

It’s about rekindling the idea that we can think big. A reminder of what we can attempt, and sometimes achieve. And, crucially, that at a time when we demand predictability out of everything, probably because it helps us feel somehow ‘in control’, it is also a reminder in more ways than one that great achievement, like discovery, has no roadmap.

Because if you always know where you're going and how you're getting there, you may be 'safe', but it's unlikely you'll end up anywhere new.

here’s when you get a sense that the universe is telling you something

In the same Amazon package you get:

    The latest Thomas Pynchon novel.
    The World War Z blu ray.

Telling you what, exactly… well, that is less clear.

the apple developer center downpocalypse


We’re now into day three of the Apple Developer Center being down. This is one of those instances in which Apple’s tendency to “let products speak for themselves,” an approach that ordinarily has a lot going for it, can be counterproductive. In three days we’ve gone from “Downtime, I wonder what they’ll upgrade,” to “Still down, I wonder what’s going on?” to “Still down, something bad is definitely going on.”

Which, btw, is the most likely scenario at this point. If you've ever been involved in 24/7 website operations you can picture what life must have been like since Thursday for dozens, maybe hundreds of people at Apple: no sleep, constant calls, writing updates to be passed along the chain, increasingly urgent requests from management wanting to know, exactly, how whatever got screwed up got screwed up, all of it competing with the much more immediate problem of actually solving the issue.

And a few people in particular, likely less than a dozen, are under particular pressure. I’m not talking about management (although they have pressure of their own) but the few sysadmins, devops, architects and engineers that are at the center of whatever team is responsible for solving the problem, which undoubtedly was also in charge of the actual maintenance that led to the outage in the first place, so the pressure is multiplied.

Even for global operations at massive scale, this is what it usually comes down to — a few people. They’re on the front lines, and hopefully they know that some of us appreciate their efforts and that of the teams working non-stop to solve the problem. I know I do.

The significance of the dev center is hard to see for non-developers, but it’s real and this incident will likely have ripple effects beyond the point of resolution. Days without being able to upload device IDs, or create development profiles. Schedules gone awry. Releases delayed. People will re-evaluate their own contingency plans and maybe question their app store strategy. Thousands of developers are being affected, and ultimately, this will affect Apple’s bottom line.

And that's why this situation is not the kind of thing you let go on for this long unless there's a very, very good reason (only a couple of days before reporting quarterly results, no less). Maybe critical data was lost and they're trying to rebuild it (what if everyone's App IDs just went up in smoke?). Maybe it was a security breach (what if the root certs were compromised?). The likelihood that there will be consequences for developers, as opposed to just a return to the status quo, goes up with every hour that this continues. As Marco said: "[…] if you're an iOS or Mac App Store developer, I'd suggest leaving some free time in the schedule this week until we know what happened to the Developer Center."

In fact, it could be that at least part of the delay has to do with coming up with procedures and documentation, if not a full-on PR strategy. Apple hasn't traditionally behaved this way, but Tim Cook has managed things very differently than Steve Jobs in this regard.

Finally, I’ve been somewhat surprised by the lack of actual reporting on this. One day, maybe two days… but three? Nothing much aside from minor posts on a few websites, and not even much on the Apple-dedicated sites. This is where real reporting is necessary. Having sources that can speak to you about what’s going on. Part of the problem is that the eventual impact of this will be subtle, and modern media doesn’t do subtle very well. It’s less about the immediate impact or people out of a job than about a potential gap in future app releases. A whole industry is in fact dependent on what goes on with that little-known service, and with iOS 7/Mavericks being under NDA, Apple’s developer forums, which are also down, are the only place where you can discuss problems and file bug reports. Some developer, somewhere, is no doubt blocked from being able to do any work at all. 

Apple should, perhaps against its own instincts, try their best to explain what happened and how they’ve dealt with it. Otherwise, the feeling that this will just happen again will be hard to shake off for a lot of people. For Apple, this could be an opportunity to engage with their developer community more directly. Here’s hoping.

diego’s life lessons, part III

Excerpted from the upcoming book: “Diego’s life lessons: 99 tips for survival, fun, and profit in today’s baffling bric-a-brac world.” (see Part I and Part II).

#9 make the right career choices

Everyone will have seven careers in their lifetime, someone said once, and we all repeated it even if we have no idea why.

The key to career planning, though, is to keep in mind that while the world of today ranges from complicated to downright baffling, the world of tomorrow will be pretty predictable, since as we all know it will just be a barren hellscape populated by Zombies.

So the question is: post-Zombie Apocalypse, what will you need to be? Survival in the new Zombie-infested world will require the skills of any good D&D party: a Healer, a Warrior, a Thief, and a Wizard — which in a world without magic means someone to tinker with things, build weapons, design shelters with complicated spring traps, and knowledge of how to brew a good cup of coffee.

Clearly you don’t want to be a Healer (read: medic/doctor), since that means no one will be able to fix you — you should have friends or relatives with careers in medicine, however, for obvious reasons. Being a Thief will be of limited use, but more importantly it’s not really the kind of thing you can practice for without turning to a life of crime as defined by our pre-Zombie civilization (post-Zombies, most of the things we consider crimes today will become fairly acceptable somehow, so you may be able to pull this off with the right timing).

That leaves you with either Warrior or Wizard, which translates roughly to: Gun Nut or Hacker. And by “Hacker” we mean the early-1980s definition of hacker, rather than the bastardized 2000s version, and one that is not restricted to computers.

So. Your choices for a new career path are as follows:

  • If you’re a Nerd, become a Hacker.
  • If you're neither a Nerd nor a Hacker, just become a Gun Nut; it's the easiest and fastest way to post-apocalyptic survival. This way, while you wait for Zombies to strike you won't need to worry (for example) about a lookup being O(N) or not, or why the CPU on some random server is pegged at 99% without any incoming requests.
  • If you’re already a Gun Nut, you’re good to go. Just keep buying ammo.
  • If you’re already a Hacker… please don’t turn into an evil genius and destroy the world. Try taking up some activity that will consume your time for no reason, like playing The Elder Scrolls V: Skyrim or learning to program for Blackberry.

NOTE (I): If you’re in the medical profession, just stay put. We will protect you so you can fix our sprained ankles and such.
NOTE (II): there is also the rare combination of Hacker/Nerd+Gun Nut, but you should be aware that this is a highly volatile combination of skills which can have unpredictable results on your psyche.

#45: purchase a small island in the Pacific Ocean

As far as permanent vacation spots go, this one really is a no-brainer. Why bother with hotels when you can own a piece of slowly sinking real estate? Plus, according to highly reliable sources, you don't need to be a billionaire.

True, you will have significant coconut-maintenance fees and you’ll probably need a small fleet of Roombas to keep the place tidy, but coconuts are delicious and the Roombas can help in following lesson #18.

NOTE I: don’t be fooled by the “Pacific” part of “Pacific Ocean.” There’s nothing “pacific” about it. There’s storms, cyclones, tsunamis, giant garbage monsters, sharks, jellyfish, and any number of other dangers. Therefore, an important followup to purchase the island is to buy an airline for it. You know, to be able to get away quickly, just in case.

NOTE II: this is actually an alternative to the career choices described above, since it is well known that Zombies can’t swim.

NOTE III: the island should not be named Krakatoa — see lesson #1. Aside from this detail, owning a Pacific Island does not directly conflict with lesson #1, since the cupboard can be actually located in a hut somewhere in the island (multiple cupboard hiding spots are also advisable).

#86 Stock up on Kryptonite

Ok, so let me tell you about this guy… He wears a cape and tights. He frequently disrobes in public places. He makes a living writing for a newspaper with an owner that makes Rupert Murdoch look like Edward R. Murrow. He has deep psychological scars since he is the last survivor of a cataclysmic event that destroyed his civilization. He leads a secret double life, generally disappearing whenever something terrible happens. He is an illegal alien. Also, he is an ALIEN.

Does this look like someone trustworthy to you? Hm?

That’s right. This is not a stable person.

Add to the list that he can fly, even in space, stop bullets, has X-ray vision, can (possibly) travel back in time and is essentially indestructible. How is this guy not a threat to all of humanity?

Lex Luthor was deeply misunderstood — he could see all this, but his messaging was way off. Plus there were all those schemes to Take Over The World, which should really be left to experts like genetically engineered mice.

The only solution to this menace is to keep your own personal stash of Kryptonite. Keep most of it in a cupboard (see lesson #1) and a small amount on your person at all times.

After all, you never know when this madman will show up.


When my home phone… you know, the bulky, heavy one, plugged into a wireline (perhaps for sentimental reasons, at this point), rings… I don't answer.


It is muted. Permanently.

There’s a generation … a group of people, a dividing line, somewhere… for whom the idea of a dialtone, of verified communication, sounds insane. Most of them are kids at this point, sure, but some aren’t. To me, it is noticeable. To others, it is alien.

A dialtone.

Think about it, how many people alive today don’t know what a dialtone is? Have never heard one?

How many people do not answer their phone because they assume it’s spam?

Spam. Email… bits, translated into voice (also bits). Video. TV, or, truthfully, the constructs that TV (and to some degree radio) created.


Something to consider…

