Goodbye Privacy. Goodbye Open Internet?

The famous story of the pregnant teen outed to her father by Target epitomizes the power of big data and advanced psychometrics to wring potent conclusions from seemingly innocuous snippets of data. Andreas Weigend, the former chief data scientist at Amazon, has written a must-read book that dives deep into this topic. And an article in The Observer shows how these techniques are being applied to influence voters:

“With this, a computer can actually do psychology, it can predict and potentially control human behaviour. It’s what the scientologists try to do but much more powerful. It’s how you brainwash someone. It’s incredibly dangerous.

“It’s no exaggeration to say that minds can be changed. Behaviour can be predicted and controlled. I find it incredibly scary. I really do. Because nobody has really followed through on the possible consequences of all this. People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”

– Jonathan Rust, Director, Cambridge University Psychometric Centre.

So metadata about your internet behavior is valuable, and can be used against your interests. This data is abundant, and Target only has a drop in the ocean compared to Facebook, Google and a few others. Thanks to its multi-pronged approach (Search, AdSense, Analytics, DNS, Chrome and Android), Google has detailed numbers on everything you do on the internet, and because it reads all your (sent and received) Gmail it knows most of your private thoughts. Facebook has a similar scope of insight into your mind and motivations.

Ajit Pai, the new FCC chairman, was a commissioner when the privacy regulations repealed last week were instituted last fall. He wrote a dissent arguing that applying internet privacy rules only to ISPs (Internet Service Providers, companies like AT&T or Comcast) was not only unfair but ineffective in protecting your online privacy, since the “edge providers” (companies like Google, Facebook, Amazon and Netflix) are regulated by the FTC rather than the FCC, and they are currently far more zealous and successful at destroying your privacy than the ISPs:

“The era of Big Data is here. The volume and extent of personal data that edge providers collect on a daily basis is staggering… Nothing in these rules will stop edge providers from harvesting and monetizing your data, whether it’s the websites you visit or the YouTube videos you watch or the emails you send or the search terms you enter on any of your devices.”

True as this is, it would be naive to expect now-chairman Pai to replace the repealed privacy regulations with something consistent with his concluding sentiment in that dissent:

“After all, as everyone acknowledges, consumers have a uniform expectation of privacy. They shouldn’t have to be network engineers to understand who is collecting their data. And they shouldn’t need law degrees to determine whether their information is protected.”

So it’s not as though your online privacy was formerly protected and now it’s not. It just means the ISPs can now compete with Google and Facebook to sell details of your activity on the internet. There are still regulations in place requiring the data to be aggregated and anonymized, but experiments have shown that effective anonymization is surprisingly difficult.

If you don’t like playing the patsy, it’s possible to fight a rearguard action by using cookie-blockers, VPNs and encryption, but such measures look ever more Canute-like. Maybe those who tell us to abandon the illusion that there is such a thing as privacy are right.

So, after privacy, what’s the next thing you could lose?

Goodbye Open Internet?

Last week’s legislation was a baby-step towards what the big ISPs would like, which is to ‘own’ all the data that they pipe to you, to charge content providers differentially for bandwidth, and to filter and modify content (for example by inserting or substituting ads in web pages and emails). They are currently forbidden from doing this by the FCC’s 2015 net neutrality regulations.

So the Net Neutrality controversy is back on the front burner. Net neutrality is a free-market issue, but not in the way that those opposed to it believe; the romantic notion of a past golden age of the internet, free of government intrusion, is hogwash. The consumer internet would never have happened without the common carrier regulations that allowed consumers to attach modems to their phone lines. AT&T fought tooth and nail against those regulations, wanting instead to control the data services itself, along the lines of the British Prestel service. If AT&T had won that battle with the regulators, the internet would have remained an academic backwater. Not only was the internet founded under government sponsorship, but it owes its current vibrant and innovative character to strongly enforced government regulation:

“Without Part 68, users of the public switched network would not have been able to connect their computers and modems to the network, and it is likely that the Internet would have been unable to develop.”

For almost all US consumers, internet access service is limited to a duopoly (telco and cableco). Internet content services, on the other hand, participate in an open market teeming with competition (albeit with near-monopolies for Google and Facebook in their respective domains). This is thanks to the net neutrality regulations that bind the ISPs:

  • No Blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.
  • No Throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
  • No Paid Prioritization: broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration of any kind — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing content and services of their affiliates.

If unregulated, ISPs will be compelled by structural incentives to do all these things and more, as explained by the FCC:

“Broadband providers function as gatekeepers for both their end user customers who access the Internet, and for edge providers attempting to reach the broadband provider’s end-user subscribers. Broadband providers (including mobile broadband providers) have the economic incentives and technical ability to engage in practices that pose a threat to Internet openness by harming other network providers, edge providers, and end users.”

It’s not a simple issue. ISPs must have robust revenues so they can afford to upgrade their networks; but freedom to prioritize, throttle and block isn’t the right solution. Without regulation, internet innovation suffers. Instead of an open market for internet startups, gatekeepers like AT&T and Comcast pick winners and losers.

Net neutrality simply means an open market for internet content. Let’s keep it that way!

ITExpo: The Future is Now: Mobile Callers Want Visuals with Voice over the Existing Network

I will be moderating a panel on this topic at ITExpo East 2012 in Miami at 2:30 pm on Wednesday, February 1st.

The panelists will be Theresa Szczurek of Radish Systems, LLC, Jim Machi of Dialogic, Niv Kagan of Surf Communications Solutions and Bogdan-George Pintea of Damaka.

The concept of visuals with voice is a compelling one, and there are numerous kinds of visual content that you may want to convey. For example, when you do a video call with FaceTime or Skype, you can switch the camera to show what you are looking at if you wish, but you can’t share your screen or photos during a call.

FaceTime, Skype and Google Talk all use the data connection for both the voice and video streams, and the streams travel over the Internet.

A different, non-IP technology for videophone service, called 3G-324M, is widely used by carriers in Europe and Asia. It carries the video over the circuit-switched channel, which enables better quality (lower latency) than the data channel. An interesting application of this lets companies put their IVR menus into a visual format, so instead of having to listen through a tedious list of options you don’t want, you can instantly select your choice from an on-screen menu. Dialogic makes back-end equipment that makes applications like on-screen IVR possible on 3G-324M networks.

Radish Systems uses a different method to provide a similar visual IVR capability for when your carrier doesn’t support 3G-324M (none of the US carriers do). The Radish application is called Choiceview. When you make a call from your iPhone to a Choiceview-enabled IVR, you dial the call the regular way, then start the Choiceview app on your iPhone. The Choiceview IVR matches the Caller ID on the call with the phone number you typed in during app setup, and pushes a menu to the appropriate client. So the call goes over the old circuit-switched network, while Choiceview communicates over the data network. Choiceview is strictly a client-server application: a Choiceview server can push any data to a phone, but the phone can’t send data the other way, nor can two phones exchange data via Choiceview.
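The Caller-ID matching step is easy to picture in code. Here is a minimal sketch of the flow as I understand it, assuming a hypothetical server-side lookup table keyed by the number entered during setup; none of these names are real Choiceview APIs.

```python
# A minimal sketch of the Caller-ID matching flow described above.
# Nothing here is a real Choiceview API; the names are purely illustrative.

class AppSession:
    """Stands in for the data-network connection to the smartphone app."""
    def push(self, payload):
        print("pushing to app:", payload)

registered = {}  # phone number entered during app setup -> AppSession

def app_registers(phone_number, session):
    # The app registers over the data network, keyed by the user's number.
    registered[phone_number] = session

def incoming_call(caller_id):
    # The IVR answers the circuit-switched call; if the Caller ID matches a
    # registered app, it pushes a visual menu over the data network.
    session = registered.get(caller_id)
    if session is None:
        return "no app registered: fall back to audio-only prompts"
    session.push({"menu": ["Billing", "Support", "Store hours"]})
    return "visual menu pushed"

app_registers("+12145551234", AppSession())
print(incoming_call("+12145551234"))
```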

So this ITExpo session will try to make sense of this mix: multiple technologies, multiple geographies and multiple use cases for visual data exchange during phone calls.

Using the Google Chrome Browser

I have some deep-seated opinions about user interfaces and usability. It normally takes me only a few seconds to get irritated by a new application or device, since they almost always contravene one or more of my fundamental precepts of usability. So when I see a product that gets it righter than I could have done myself, I have to say it warms my heart.

I just noticed a few minutes ago, using Chrome, that the tabs behave in a better way than on any other browser that I have checked (Safari, Firefox, IE8). If you have a lot of tabs open, and you click on an X to close one of them, the tabs rearrange themselves so that the X of the next tab is right under the mouse, ready to be clicked to close that one too. Then, after closing all the tabs you are no longer interested in, when you click on a remaining one, the tabs resize themselves back to the right size. This is a very subtle user interface feature. Chrome has another that is a monster, not subtle at all, and so nice that only stubborn sour grapes (or maybe patents) stop the others from emulating it: the single input field for URLs and searches. I’m going to talk about how that fits with my ideas about user interface design in just a moment, but first let’s go back to the tab sizing on closing with the mouse.

I like this feature because it took a programmer some effort to get it right, yet it only saves a user a fraction of a second each time it is used, and only some users close tabs with the mouse (I normally use Cmd-W), and only some users open large numbers of tabs simultaneously. So why did the programmer take the trouble? There are at least two good reasons: first, let’s suppose that 100 million people use the Chrome browser, and that they each use the mouse to close 12 tabs a day, and that in 3 of these closings, this feature saved the user from moving the mouse, and the time saved for each of these three mouse movements was a third of a second. The aggregate time saved per day across 100 million users is 100 million seconds. At 2,000 working hours per year, that’s more than 10 work-years saved per day. The altruistic programmer sacrificed an hour or a day or whatever of his valuable time, to give the world far more. But does anybody apart from me notice? As I have remarked before, at some level the answer is yes.
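To make the arithmetic explicit, here is the same back-of-the-envelope calculation in code (the user counts and timings are, of course, just the assumptions made above):

```python
users = 100_000_000              # assumed Chrome users
assisted_closes_per_day = 3      # closings per user per day where the feature helps
seconds_saved_per_close = 1 / 3  # assumed saving per assisted closing

seconds_per_day = users * assisted_closes_per_day * seconds_saved_per_close
work_years_per_day = seconds_per_day / 3600 / 2000  # 2,000 working hours per year

print(f"{seconds_per_day:,.0f} seconds saved per day")  # 100,000,000
print(f"{work_years_per_day:.1f} work-years per day")   # about 13.9
```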

The second reason it was a good idea for the programmer to take this trouble has to do with the nature of usability and the choice of products. There is plenty of competition in the browser market, and it is trivial for a user to switch browsers. Usability of a program is an accretion of lots of little ingredients. So in the solution space addressed by a particular application, the potential gradation of usability is very fine-grained, each tiny design decision moving the needle a tiny increment in the direction of greater or lesser usability.

But although ease of use of an application is an almost infinitely variable property, whether a product is actually used or not is effectively binary. It is a very unusual consumer (guilty!) who continues to use multiple browsers on a daily basis. Even if you start out that way, you will eventually fall into the habit of using just one. For each user of a product, there is a threshold on that fine gradation of usability where the effort of using the product balances against its benefit. If the product falls below that effort/benefit threshold it gradually falls into disuse. Above that threshold the user forms the habit of using it regularly.

Many years ago I bought a Palm Pilot. For me, that user interface was right on my threshold. It teetered there for several weeks as I tried to get into the habit of depending on it, but after I missed a couple of important appointments because I had neglected to put them into the device, I went back to my trusty pocket Day-Timer. For other people, the Palm Pilot was above their threshold of usability, and they loved it, used it and depended on it. Not all products are so close to the threshold of usability. Some fall way below it. You have never heard of them – or maybe you have: how about the Apple Newton? And some land way above it; before the iPhone nobody browsed the Internet on their phones – the experience was too painful. In one leap the iPhone landed so far above that threshold that it routed the entire industry.
The razor-thin line between use and disuse
The point here is that the ‘actual use’ threshold is a razor-thin line on the smooth scale of usability, so if a product lies close to that line, the tiniest, most subtle change to usability can move it from one side of the line to the other. And in a competitive market where the cost of switching is low, that line isn’t static; the competition is continuously moving the threshold up. This is consistent with “natural selection by variation and survival of the fittest.” So product managers who believe their usability is “good enough,” and that they need to focus on new features to beat the competition, are often misplacing their efforts – they may be moving their product further to the right on the diagram above than they are moving it up.

Now let’s go on to Chrome’s single field for URLs and searches. Computer applications address complicated problem spaces. In the diagram below, each circle represents the aggregate complexity of an activity performed with the help of a computer. The horizontal red line represents the division between the complexity handled by the user and that handled by the computer. In the left circle most of the complexity is dealt with by the user; in the right circle most is dealt with by the computer. For a given problem space, an application will fall somewhere on this line. For searching databases, HAL 9000’s circle sits almost entirely above this line, while SQL’s sits far lower. The classic example of this is the graphical user interface: it is vastly more programming work to create a GUI system like Windows than a command-line system like MS-DOS, and a GUI is correspondingly vastly easier on the user.

Its single field for typing queries and URLs clearly makes Chrome sit higher on this line than the browsers that use two fields. With Chrome the user has less work to do: he just gives the browser an instruction. With the others the user has to both give the instruction and tell the computer what kind of instruction it is. On the other hand, the programmer has to do more work, because he has to write code to determine whether the user is typing a URL or a search. But this is always going to be the case when you make a task of a given complexity easier on the user. In order to relieve the user, the computer has to handle more complexity. That means more work for the programmer. Hard-to-use applications are the result of lazy programmers.

The programming required to implement the single field for URLs and searches is actually trivial. All browsers have code to try to form a URL out of what’s typed into the address field; the programmer just has to assume it’s a search when that code can’t generate a URL. So now, having checked my four browsers, I have to partially eat my words. Both Firefox and IE8, even though they have separate fields for web addresses and searches, do exactly what I just said: address field input that can’t be made into a URL is treated as a search query. Safari, on the other hand, falls into the lazy programmer hall of shame.
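To show roughly what that takes, here is a toy version of the heuristic. It is not how any of these browsers actually implement it, just a sketch of the idea: if the input can’t plausibly be a URL, treat it as a search.

```python
from urllib.parse import quote_plus

def resolve_omnibox(text):
    """Toy heuristic: treat the input as a URL if it plausibly is one,
    otherwise turn it into a search query."""
    text = text.strip()
    looks_like_url = (" " not in text) and (
        text.startswith(("http://", "https://")) or "." in text
    )
    if looks_like_url:
        if not text.startswith(("http://", "https://")):
            text = "http://" + text  # assume the scheme was omitted
        return text
    return "https://www.google.com/search?q=" + quote_plus(text)

print(resolve_omnibox("bbc.co.uk"))          # http://bbc.co.uk
print(resolve_omnibox("weather in dallas"))  # becomes a search URL
```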

This may be a result of a common “ease of use” fallacy: that what is easier for the programmer to conceive is easier for the user to use. The programmer has to imagine the entire solution space, while the user only has to deal with what he comes across. I can imagine a Safari programmer saying “We have to meet user expectations consistently – it will be confusing if the address field behaves in an unexpected way by doing a search when the user was simply mistyping a URL.” The fallacy of this argument is that while the premise is true (“it is confusing to behave in an unexpected way”), you can safely assume that an error message is always unexpected, so rather than deliver one of those, the kind programmer will look at what is provoking the error message, try to guess what the user might have been trying to achieve, and deliver that instead.

There are two classes of user mistake here: one is typing into the “wrong” field, the other is mistyping a URL. On all these browsers, if you mistype a URL you get an unwanted result. On Safari it’s an error page, on the others it’s an error page or a search, depending on what you typed. So Safari isn’t better, it just responds differently to your mistake. But if you make the other kind of “mistake,” typing a search into the “wrong” field, Safari gives an error, while the others give you what you actually wanted. So in this respect, they are twice as good, because the computer has gracefully relieved the user of some work by figuring out what they really wanted. But Chrome goes one step further, making it impossible to type into the “wrong” field, because there is only one field. That’s a better design in my opinion, though I’m open to changing my mind: the designers at Firefox and Microsoft may argue that they are giving the best of both worlds, since users accustomed to separate fields for search and addresses might be confused if they can’t find a separate search field.

ITExpo West — Achieving HD Voice On Smartphones

I will be moderating a panel discussion at ITExpo West on Tuesday 5th October at 11:30 am in room 306B: “Achieving HD Voice On Smartphones.”

Here’s the session description:

The communications market has been evolving toward fixed high definition voice services for some time now, and nearly every desktop phone manufacturer includes support for G.722 and other codecs. Why? Because HD voice makes the entire communications experience a much better one than we are used to.

But what does it mean for the wireless industry? When will wireless communications become part of the HD revolution? How will handset vendors, network equipment providers, and service providers have to adapt their current technologies in order to deliver wireless HD voice? How will HD impact service delivery? What are the business models around mobile HD voice?

This session will answer these questions and more, discussing both the technology and business aspects of bringing HD into the mobile space.

The panelists are:

This is a deeply experienced panel; each of the panelists is a world-class expert in his field. We can expect a highly informative session, so come armed with your toughest questions.

White Spaces Geolocation Database

For now, all White Spaces devices will use a geolocation database to avoid interfering with licensed spectrum users. The latest FCC Memorandum and Order on TV White Spaces says that it is still OK to have a device that uses spectrum sensing only (one that doesn’t consult a geolocation database for licensed spectrum users), but to get certified for sensing only, a device will have to satisfy the FCC’s Office of Engineering and Technology, then be approved by the Commissioners on a case-by-case basis.

So all the devices for the foreseeable future are going to use a geolocation database. But they will have spectrum-sensing capabilities too, in order to select the cleanest channel from the list of available channels provided by the database.

Fixed devices (access points) will normally have a wired Internet connection. Once a fixed device has figured out where it is, it can query the database over the Internet for a list of available channels. Then it can advertise itself on those channels.

Mobile devices (phones, laptops etc.) will normally have non-whitespace connections to the Internet too, for example Wi-Fi or cellular data. These devices can know where they are by GPS or some other location technology, and query the geolocation database over their non-whitespace connection. If a mobile device doesn’t have non-whitespace Internet connectivity, it can sit and wait until it senses a beacon from a fixed whitespace device, then query the geolocation database over the whitespace connection. There is a slight chance at this point that the mobile device is using a licensed frequency inside the licensee’s protected contour. This chance is mitigated because the contour includes a buffer zone, so a mobile device inside a protected contour should be beyond the range of any whitespace devices outside that contour. The interference will also be very brief, since when it gets the response from the database it will instantly switch to another channel.
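In rough code, the lookup a device performs might look something like the sketch below. The HTTP/JSON interface and URL shown here are entirely hypothetical; each database provider defines its own query interface, and the spectrum-sensing step is reduced to a simple noise comparison.

```python
import requests  # assumes a simple HTTP/JSON query interface; purely illustrative

DATABASE_URL = "https://whitespace-db.example.com/channels"  # hypothetical endpoint

def available_channels(lat, lon, device_type="mobile"):
    """Ask the (hypothetical) geolocation database which TV channels are
    free to use at this location."""
    resp = requests.get(DATABASE_URL,
                        params={"lat": lat, "lon": lon, "type": device_type})
    resp.raise_for_status()
    return resp.json()["channels"]  # e.g. [21, 27, 34, 41]

def pick_channel(channels, measured_noise_dbm):
    """Combine the database answer with local spectrum sensing:
    of the permitted channels, pick the one that measures quietest."""
    return min(channels, key=lambda ch: measured_noise_dbm[ch])
```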

Nine companies have proposed themselves as geolocation database providers. Here they are, linked to the proposals they filed with the FCC:

Here’s an example of what a protected contour looks like. Here’s an example database. Note that this database is not accurate yet.

Actually, a geolocation database is overkill for most cases. The bulk of the information is just a reformatting of data the FCC already publishes online; it’s only 37 megabytes compressed. It could be kept in the phone, since it doesn’t change much (it is only updated weekly).

The proposed database will be useful for those rare events where the number of wireless microphones needed is so large that it won’t fit into the spectrum reserved for microphones, though in this case spectrum sensing would probably suffice. In other words, the geolocation database is a heavyweight solution to a lightweight problem.

Google sells out

Google and Verizon came out with their joint statement on Net Neutrality on Monday. It is reasonable and idealistic in its general sentiments, but contains several of the loopholes Marvin Ammori warned us about. It was released in three parts: a document posted to Google Docs, a commentary posted to the Google Public Policy Blog, and an op-ed in the Washington Post. Eight paragraphs in the statement document map to seven numbered points in the blog. The first three numbered points map to the six principles of net neutrality enumerated by Julius Genachowski [jg1-6] almost a year ago. Here are the Google/Verizon points as numbered in the blog:

1. Open access to Content [jg1], Applications [jg2] and Services [jg3]; choice of devices [jg4].
2. Non-discrimination [jg5].
3. Transparency of network management practices [jg6].
4. FCC enforcement power.
5. Differentiated services.
6. Exclusion of Wireless Access from these principles (for now).
7. Universal Service Fund to include broadband access.

The non-discrimination paragraph is weakened by the kinds of words that are invitations to expensive litigation unless they are precisely defined in legislation. It doesn’t prohibit discrimination, it merely prohibits “undue” discrimination that would cause “meaningful harm.”

The managed (or differentiated) services paragraph is an example of what Ammori calls “an obvious potential end-run around the net neutrality rule.” I think that Google and Verizon would argue that their transparency provisions mean that ISPs can deliver things like FIOS video-on-demand over the same pipe as Internet service without breaching net neutrality, since the Internet service will commit to a measurable level of service. This is not how things work at the moment; ISPs make representations about the maximum delivered bandwidth, but they don’t specify for consumers a minimum below which the connection will not fall.

The examples the Google blog gives of “differentiated online services, in addition to the Internet access and video services (such as Verizon’s FIOS TV)” appear to have in common the need for high bandwidth and high QoS. This bodes extremely ill for the Internet. The evolution of Internet access service to date has been one of steadily increasing bandwidth and QoS. The implication of this paragraph is that these improvements will be skimmed off into proprietary services, leaving the bandwidth and QoS of the public Internet stagnant.

Many consider the exclusion of wireless egregious. I think that Google and Verizon would argue that there is nothing to stop wireless being added later. In any case, I am sympathetic to Verizon on this issue, since wireless is so bandwidth-constrained relative to wireline that it seems necessary to ration it in some way.

The Network Management paragraph in the statement document permits “reasonable” network management practices. Fortunately the word “reasonable” is defined in detail in the statement document. Unfortunately the definition, while long, includes a clause which renders the rest of the definition redundant: “or otherwise to manage the daily operation of its network.” This clause appears to permit whatever the ISP wants.

So on balance, while it contains a lot of worthy sentiments, I am obliged to view this framework as a sellout by Google. I am not alone in this assessment.

iPhone 4 gets competition

When the iPhone came out it redefined what a smartphone is. The others scrambled to catch up, and now with Android they pretty much have. The iPhone 4 is not in a different league from its competitors the way the original iPhone was. So I have been trying to decide between the iPhone 4 and the EVO for a while. I didn’t look at the Droid X or the Samsung Galaxy S, either of which may be better in some ways than the EVO.

Each has stronger and weaker points in both hardware and software. The Apple wins on the subtle user interface ingredients that add up to delight. It is a more polished user experience. Lots of little things. For example, I was looking at the clock applications. The Apple stopwatch has a lap feature and the Android doesn’t. I use the timer a lot; the Android timer copied the Apple look and feel almost exactly, but a little worse. It added a seconds display, which is good, but the spin-wheel to set the timer doesn’t wrap. To get from 59 seconds to 0 seconds you have to spin the display all the way back through. The whole idea of a clock is that it wraps, so this indicates that the Android clock programmer didn’t really understand time. Plus, when the timer is actually running, the Android cutely just animates the time-set display, while the Apple timer clears the screen and shows a countdown. This is debatable, but I think the Apple way is better: the countdown display is less cluttered, more readable, and more clearly in a “timer running” state.

The Android clock has a wonderful “desk clock” mode, which the iPhone lacks. I was delighted with the idea, especially the night mode, which dims the screen and lets you use it as a bedside clock. Unfortunately, when I came to actually use it, the hardware let the software down. Even in night mode the screen is uncomfortably bright, so I had to turn the phone face down on the bedside table.
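Going back to the timer wheel for a moment: the wrap-around behaviour the Android programmer missed is, in principle, a one-liner of modular arithmetic. A sketch (obviously not the actual picker code):

```python
def spin(seconds, clicks):
    """Move a seconds wheel by `clicks`, wrapping 59 -> 0 and 0 -> 59."""
    return (seconds + clicks) % 60

print(spin(59, 1))   # 0: one click forward from 59 lands on 0
print(spin(0, -1))   # 59: one click back from 0 lands on 59
```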

The EVO wins on screen size. Its 4.3 inch screen is way better than the iPhone’s 3.5 inch screen. The iPhone’s “retina” display may look like the better specification on paper, but the difference in image quality is imperceptible to my eye, and the greater size of the EVO screen is a compelling advantage.

The iPhone has far more apps, but there are some good ones on the Android that are missing on the iPhone, for example the amazing Wi-Fi Analyzer. On the other hand, this is also an example of the immaturity of the Android platform, since there is a bug in Android’s Wi-Fi support that makes the Wi-Fi Analyzer report out-of-date results. Other nice Android features are the voice search feature and the universal “back” button. Of course you can get the same voice search with the iPhone Google app, but the iPhone lacks a universal “back” button.

The GPS on the EVO blows away the GPS on the iPhone for accuracy and responsiveness. I experimented with the Google Maps app on each phone, walking up and down my street. Apple changed the GPS chip in this rev of the iPhone, going from an Infineon/GlobalLocate to a Broadcom/GlobalLocate. The EVO’s GPS is built-in to the Qualcomm transceiver chip. The superior performance may be a side effect of assistance from the CDMA radio network.

Incidentally, the GPS test revealed that the screens are equally horrible under bright sunshine.

The iPhone is smaller and thinner, though the smallness is partly a function of the smaller screen size.

The EVO has better WAN speed, thanks to the Clearwire WiMax network, but my data-heavy usage is mainly over Wi-Fi in my home, so that’s not a huge concern for me.

Battery life is an issue. I haven’t done proper tests, but I have noticed that the EVO seems to need charging more often than the iPhone.

Shutter lag is a major concern for me. On almost all digital cameras and phones I end up taking many photos of my shoes as I put the camera back in my pocket, having pressed the shutter button and assumed the photo was taken at that moment rather than half a second later. I just can’t get into the habit of standing still and waiting for a while after pressing the shutter button. The iPhone and the EVO are about even on this score, both sometimes taking an inordinately long time to respond to the shutter – presumably auto-focusing. The pictures taken with the iPhone and the EVO look very different; the iPhone camera has a wider angle, but the picture quality of each is adequate for snapshots. On balance the iPhone photos appeal to my eye more than the EVO ones.

For me the antenna issue is significant. After dropping several calls I stuck some black electrical tape over the corner of the phone which seems to have somewhat fixed it. Coverage inside my home in the middle of Dallas is horrible for both AT&T and Sprint.

The iPhone’s FM radio chip isn’t enabled, so I was pleased when I saw FM radio as a built-in app on the EVO, but disappointed when I fired it up and discovered that it needed a headset to be plugged in to act as an antenna. Modern FM chips should work with internal antennas. In any case, the killer app for FM radio is on the transmit side, so you can play music from your phone through your car stereo. Neither phone supports that yet.

So on the plus side, the EVO’s compelling advantage is the screen size. On the negative side, it is bulkier, the battery life is shorter, and the software experience isn’t quite as polished.

The bottom line is that the iPhone is no longer in a class of its own. The Android iClones are respectable alternatives.

It was a tough decision, but I ended up sticking with the iPhone.

Google pushes fiber

Google announced that it is going to wire a select few communities with gigabit broadband connections. This could be huge.

Something is wrong with broadband access in the US. In 2008 the US ranked 15th in the world on a composite score of household penetration, speed and price.

Google is setting out to demonstrate a better way, though other countries already offer such demonstrations. The current international benchmark for price and speed is Stockholm, at $11 per month for 100 Mbps. There are similar efforts in the US, for example Utopia in Utah. One of the key features of these implementations of fiber as a utility is that the supplier of the fiber does not supply content, since doing so would create a structural conflict of interest.

Google does supply content, so it will be interesting to see how it deals with this conflict. I doubt there will be any problems in the short term, but in the long term it will be very hard to resist the impulse to use all the competitive tools available; “Don’t be evil” isn’t a useful guideline on a long, gentle slope.

OK, it’s easy to be cynical, but at least Google is trying to do something to improve the broadband environment in the US, and it may be a long time before the short-term allure of preferred treatment for its own content outweighs the strategic benefit of improved national broadband infrastructure. And this initiative will undoubtedly help to accelerate the deployment of fiber to the home, if only by goading the incumbents.

I touched on the issue of municipal dark fiber a while back.

Why are we waiting?

I just clicked on a calendar appointment enclosure in an email. Nothing happened, so I clicked again. Then suddenly two copies of the appointment appeared in my iCal calendar. Why on earth did the computer take so long to respond to my click? I waited an eternity – maybe even as long as a second.

The human brain has a clock speed of about 15 Hz. So anything that happens in less than 70 milliseconds seems instant. The other side of this coin is that when your computer takes longer than 150 ms to respond to you, it’s slowing you down.

I have difficulty fathoming how programmers are able to make modern computers run so slowly. The original IBM PC ran at well under 2 MIPS. The computer you are sitting at is around ten thousand times faster. It’s over 100 times faster than a Cray-1 supercomputer. This means that when your computer keeps you waiting for a quarter of a second, equally inept programming on the same task on an eighties-era IBM PC would have kept you waiting 40 minutes. I don’t know about you, but I encounter delays of over a quarter of a second with distressing frequency in my daily work.
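The scaling behind that claim is straightforward (the 10,000x figure is the rough ratio quoted above, not a measurement):

```python
modern_delay_s = 0.25        # the quarter-second pause that annoys us today
speedup_vs_ibm_pc = 10_000   # rough speed ratio quoted above

equivalent_pc_delay_min = modern_delay_s * speedup_vs_ibm_pc / 60
print(f"{equivalent_pc_delay_min:.0f} minutes")  # about 42 minutes, i.e. roughly 40
```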

I blame Microsoft. Around Intel, the joke about performance was “what Andy giveth, Bill taketh away.” This was actually a winning strategy for decades of Microsoft success: concentrate on features and speed of implementation, and never waste time optimizing for performance because the hardware will catch up. It’s hard to argue with success, but I wonder: could a software company obsessed with performance be even more successful than Microsoft?