Net neutrality – Holland leads the way

Service providers can offer any product they wish. But consumers have certain expectations when a product is described as ‘Internet Service.’ So net neutrality regulations are similar to truth in advertising rules. The primary expectation that users have of an Internet Service Provider (ISP) is that it will deliver IP datagrams (packets) without snooping inside them and slowing them down, dropping them, or charging more for them based on what they contain.

The analogy with the postal service is obvious, and the expectation is similar. When Holland passed a net neutrality law last week, one of the bill’s co-authors, Labor MP Martijn van Dam, compared Dutch ISP KPN to “a postal worker who delivers a letter, looks to see what’s in it, and then claims he hasn’t read it.” This snooping was apparently what set off the furor that led to the legislation:

“At a presentation to investors in London on May 10, analysts questioned where KPN had obtained the rapid adoption figures for WhatsApp. A midlevel KPN executive explained that the operator had deployed analytical software which uses a technology called deep packet inspection to scrutinize the communication habits of individual users. The disclosure, widely reported in the Dutch news media, set off an uproar that fueled the legislative drive, which in less than two months culminated in lawmakers adopting the Continent’s first net neutrality measures with real teeth.” – New York Times

Taking the analogy with the postal service a little further: the postal service charges by volume. The ISP industry behaves similarly, with tiered rates depending on bandwidth. Net neutrality advocates don’t object to this.

The postal service also charges by quality of service, like delivery within a certain time and guaranteed delivery. ISPs don’t offer this kind of service to consumers, though it is one that subscribers would probably pay for if it were applied voluntarily and transparently. For example, suppose I subscribe to 10 megabits per second of Internet connectivity; I might be willing to pay a premium for a guaranteed maximum delay on UDP packets. The ISP could then add value for me by prioritizing UDP packets over TCP when my bandwidth demand exceeded 10 megabits per second. Is looking at the protocol header snooping inside the packets? Kind of, because the TCP or UDP header is inside the IP packet, but on the other hand, it might be like looking at a piece of mail to see whether it is marked Priority or bulk rate.
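
To make that distinction concrete, here is a minimal sketch in Python (purely illustrative, and not any ISP’s actual implementation) of the kind of check such prioritization needs. It reads only the protocol field of the IPv4 header, as defined in RFC 791, and never touches the payload.

    # Illustrative sketch: classify an IPv4 packet by reading only the
    # protocol field of the IP header (one byte at offset 9, per RFC 791).
    # The payload is never examined, which is the point of the analogy:
    # it is like reading the "Priority" marking on an envelope.

    IPPROTO_TCP = 6
    IPPROTO_UDP = 17

    def priority_class(ip_packet: bytes) -> str:
        """Return a queueing class based solely on the IP protocol field."""
        if len(ip_packet) < 20:          # shorter than a minimal IPv4 header
            return "default"
        protocol = ip_packet[9]          # the protocol field
        if protocol == IPPROTO_UDP:
            return "low-latency"         # e.g. voice or video over UDP
        if protocol == IPPROTO_TCP:
            return "bulk"                # e.g. web browsing, file transfer
        return "default"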

A subscriber may even be interested in paying an ISP for services based on deep packet inspection. In a recent conversation, an executive at a major wireless carrier likened net neutrality to pollution. I am not sure what he meant by this, but he may have been thinking of spam-like traffic that nobody wants, but that neutrality regulations might force a service provider to carry. I use Gmail as my email service, and I am grateful for the Gmail spam filter, which works quite well. If a service provider were to use deep packet inspection to implement malicious-site blocking (like phishing site blocking or unintentional download blocking) or parental controls, I would consider this a service worth paying for, since the PC-based capabilities in this category are too easily circumvented by inexperienced users.

Notice that all these suggestions are for voluntary services. When a company imposes a product on a customer who would prefer an alternative, the customer is justifiably irked.

What provoked KPN to start blocking WhatsApp was that KPN subscribers were abandoning KPN’s SMS service in favor of WhatsApp. This caused a revenue drop. Similarly, as VoIP services like Skype grow, voice revenues for service providers will drop, and service providers will be motivated to block or impair the performance of those competing services.

The dumb-pipe nature of IP has enabled the explosion of innovation in services and products that we see on the Internet. Unfortunately for the big telcos and cable companies, many of these innovations disrupt their other service offerings. Internet technology enables third parties to compete with legacy cash cows like voice, SMS and TV. The ISP’s rational response is to do whatever is in its power to protect those cash cows. Without network neutrality regulations, the ISPs are duty-bound to their investors to protect the profitability of their other product lines by blocking the competitors on their Internet service, just as KPN did. Net neutrality regulation is designed to prevent such anti-competitive behavior. A neutral net obliges ISPs to allow competition on their access links.

So which is the free-market approach? Allowing network owners to do whatever they want on their networks and block any traffic they don’t like, or ensuring that the Internet is a level playing field where entities with the power to block third parties are prevented from doing so? The former is the free market of commerce, the latter is the free market of ideas. In this case they are in opposition to each other.

Using the Google Chrome Browser

I have some deep-seated opinions about user interfaces and usability. It normally only takes me a few seconds to get irritated by a new application or device, since they almost always contravene one or more of my fundamental precepts of usability. So when I see a product that gets it righter than I could have done myself, I have to say it warms my heart.

I just noticed a few minutes ago, using Chrome, that the tabs behave better than on any other browser I have checked (Safari, Firefox, IE8). If you have a lot of tabs open and you click on an X to close one of them, the tabs rearrange themselves so that the X of the next tab is right under the mouse, ready to be clicked to close that one too. Then, after you have closed all the tabs you are no longer interested in, when you click on a remaining one the tabs resize themselves to the right size. This is a very subtle user interface feature. Chrome has another that is a monster, not subtle at all, and so nice that only stubborn sour grapes (or maybe patents) stop the others from emulating it: the single input field for URLs and searches. I’m going to talk about how that fits with my ideas about user interface design in just a moment, but first let’s go back to the tab sizing on closing with the mouse.

I like this feature because it took a programmer some effort to get it right, yet it only saves a user a fraction of a second each time it is used, and only some users close tabs with the mouse (I normally use Cmd-W), and only some users open large numbers of tabs simultaneously. So why did the programmer take the trouble? There are at least two good reasons: first, let’s suppose that 100 million people use the Chrome browser, and that they each use the mouse to close 12 tabs a day, and that in 3 of these closings, this feature saved the user from moving the mouse, and the time saved for each of these three mouse movements was a third of a second. The aggregate time saved per day across 100 million users is 100 million seconds. At 2,000 working hours per year, that’s more than 10 work-years saved per day. The altruistic programmer sacrificed an hour or a day or whatever of his valuable time, to give the world far more. But does anybody apart from me notice? As I have remarked before, at some level the answer is yes.
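
Spelled out, that back-of-the-envelope arithmetic looks like this (the same assumed numbers as above, just multiplied through):

    # Aggregate time saved by the tab-closing behaviour, using the assumptions
    # in the text: 100 million users, 3 helpful closings per user per day,
    # a third of a second saved each time.
    users = 100_000_000
    helpful_closings_per_day = 3
    seconds_saved_each = 1 / 3

    seconds_per_day = users * helpful_closings_per_day * seconds_saved_each
    hours_per_day = seconds_per_day / 3600
    work_years_per_day = hours_per_day / 2000    # 2,000 working hours per year

    print(f"{seconds_per_day:,.0f} seconds per day")        # 100,000,000
    print(f"{work_years_per_day:.1f} work-years per day")   # about 13.9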

The second reason it was a good idea for the programmer to take this trouble has to do with the nature of usability and the choice of products. There is plenty of competition in the browser market, and it is trivial for a user to switch browsers. Usability of a program is an accretion of lots of little ingredients. So in the solution space addressed by a particular application, the potential gradation of usability is very fine-grained, each tiny design decision moving the needle a tiny increment in the direction of greater or lesser usability. But although ease of use of an application is an infinitely variable property, whether a product is actually used or not is effectively a binary property. It is a very unusual consumer (guilty!) who continues to use multiple browsers on a daily basis. Even if you start out that way, you will eventually fall into the habit of using just one.

For each user of a product, there is a threshold on that infinite gradation of usability that balances effort against the benefit of using the product. If the product falls below that effort/benefit threshold it gradually falls into disuse; above that threshold the user forms the habit of using it regularly. Many years ago I bought a Palm Pilot. For me, that user interface was right on my threshold. It teetered there for several weeks as I tried to get into the habit of depending on it, but after I missed a couple of important appointments because I had neglected to put them into the device, I went back to my trusty pocket Day-Timer. For other people, the Palm Pilot was above their threshold of usability, and they loved it, used it and depended on it.

Not all products are so close to the threshold of usability. Some fall way below it. You have never heard of them – or maybe you have: how about the Apple Newton? And some land way above it; before the iPhone nobody browsed the Internet on their phones – the experience was too painful. In one leap the iPhone landed so far above that threshold that it routed the entire industry.
The razor thin line between use and disuse
The point here is that the ‘actual use’ threshold is a razor-thin line on the smooth scale of usability, so if a product lies close to that line, the tiniest, most subtle change to usability can move it from one side of the line to the other. And in a competitive market where the cost of switching is low, that line isn’t static; the competition is continuously moving the threshold up. This is consistent with “natural selection by variation and survival of the fittest.” So product managers who believe their usability is “good enough,” and that they need to focus on new features to beat the competition, are often misplacing their efforts – they may be moving their product further to the right on the diagram above than they are moving it up.

Now let’s go on to Chrome’s single field for URLs and searches. Computer applications address complicated problem spaces. In the diagram below, each circle represents the aggregate complexity of an activity performed with the help of a computer. The horizontal red line represents the division between the complexity handled by the user and that handled by the computer. In the left circle most of the complexity is dealt with by the user; in the right circle most is dealt with by the computer. For a given problem space, an application will fall somewhere on this line. For searching databases, HAL 9000 has the circle almost entirely above this line; SQL is way further down. The classic example of this is the graphical user interface. It is vastly more programming work to create a GUI system like Windows than a command-line system like MS-DOS, and a GUI is correspondingly vastly easier on the user.

Its single field for typing queries and URLs clearly makes Chrome sit higher on this line than the browsers that use two fields. With Chrome the user has less work to do: he just gives the browser an instruction. With the others the user has to both give the instruction and tell the computer what kind of instruction it is. On the other hand, the programmer has to do more work, because he has to write code to determine whether the user is typing a URL or a search. But this is always going to be the case when you make a task of a given complexity easier on the user. In order to relieve the user, the computer has to handle more complexity. That means more work for the programmer. Hard-to-use applications are the result of lazy programmers.

The programming required to implement the single field for URLs and searches is actually trivial. All browsers have code to try to form a URL out of what’s typed into the address field; the programmer just has to assume it’s a search when that code can’t generate a URL. So now, having checked my four browsers, I have to partially eat my words. Both Firefox and IE8, even though they have separate fields for web addresses and searches, do exactly what I just said: address field input that can’t be made into a URL is treated as a search query. Safari, on the other hand, falls into the lazy programmer hall of shame.
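
As a rough illustration of how trivial it is, here is a sketch of that heuristic in Python (hypothetical, and nothing like any browser’s actual omnibox code): try to interpret the input as a URL, and fall back to a search when that fails.

    from urllib.parse import quote_plus

    def resolve_omnibox_input(text: str) -> str:
        """Treat the input as a URL if it plausibly is one; otherwise search.
        A toy heuristic: real browsers are far more elaborate."""
        text = text.strip()
        has_scheme = text.startswith(("http://", "https://"))
        # Crude URL test: no spaces, and either a scheme or a dot in the host part.
        looks_like_url = " " not in text and (has_scheme or "." in text.split("/")[0])
        if looks_like_url:
            return text if has_scheme else "http://" + text
        return "https://www.google.com/search?q=" + quote_plus(text)

    # resolve_omnibox_input("nytimes.com")        -> "http://nytimes.com"
    # resolve_omnibox_input("net neutrality law") -> a search-results URL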

This may be a result of a common “ease of use” fallacy: that what is easier for the programmer to conceive is easier for the user to use. The programmer has to imagine the entire solution space, while the user only has to deal with what he comes across. I can imagine a Safari programmer saying “We have to meet user expectations consistently – it will be confusing if the address field behaves in an unexpected way by doing a search when the user was simply mistyping a URL.” The flaw in this argument is that while the premise is true (“it is confusing to behave in an unexpected way”), an error message is always unexpected too; so rather than deliver one of those, the kind programmer will look at what is provoking the error message, try to guess what the user was trying to achieve, and deliver that instead.

There are two classes of user mistake here: one is typing into the “wrong” field, the other is mistyping a URL. On all these browsers, if you mistype a URL you get an unwanted result. On Safari it’s an error page, on the others it’s an error page or a search, depending on what you typed. So Safari isn’t better, it just responds differently to your mistake. But if you make the other kind of “mistake,” typing a search into the “wrong” field, Safari gives an error, while the others give you what you actually wanted. So in this respect, they are twice as good, because the computer has gracefully relieved the user of some work by figuring out what they really wanted. But Chrome goes one step further, making it impossible to type into the “wrong” field, because there is only one field. That’s a better design in my opinion, though I’m open to changing my mind: the designers at Firefox and Microsoft may argue that they are giving the best of both worlds, since users accustomed to separate fields for search and addresses might be confused if they can’t find a separate search field.

MIMO for handset Wi-Fi

I mentioned earlier that the Wi-Fi Alliance requires MIMO for 802.11n certification except for phones, which can be certified with a single stream. This waiver was granted for several reasons, including power consumption, size, and the difficulty of getting two spatially separated antennas into a handset. Atheros and Marvell appear to have overcome those difficulties; both have announced 2×2 Wi-Fi chips for handsets. Presumably TI and Broadcom will not be far behind.

The Atheros chip is called the AR6004. According to Rethink Wireless,

The AR6004 can use both the 2.4GHz and the 5GHz bands and is capable of real world speeds as high as 170Mbps. Yet the firm claims its chip consumes only 15% more power than the current AR6003, which delivers only 85Mbps. It will be available in sample quantities by the end of this quarter and in commercial quantities in the first quarter of next year.

The AR6004 appears to be designed for robust performance. It incorporates all the optional features of 802.11n intended to improve rate at range. Atheros brands this suite of features “Signal Sustain Technology.” The AR6004 is also designed to reduce the total solution footprint, by including on-chip power amplifiers and low-noise amplifiers. Historically on-chip CMOS power amplifiers have performed worse than external PAs using GaAs, but Atheros claims to have overcome this deficiency, prosaically branding its solution “Efficient Power Amplifier.”

The 88W8797 from Marvell uses external PAs and LNAs, but saves space a different way, by integrating Bluetooth and FM onto the chip. The data sheet on this chip doesn’t mention as many of the 802.11n robustness features as the Atheros one does, so it is unclear whether the chip supports LDPC, for example.

Both chips claim a maximum 300 Mbps data rate. Atheros translates this to an effective throughput of 170 Mbps.
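
That 300 Mbps is just the top 802.11n PHY rate for two spatial streams. As a sanity check, here is how it falls out of the standard’s parameters (a sketch assuming MCS 15, a 40 MHz channel and the short guard interval):

    # 802.11n PHY rate for MCS 15: 2 spatial streams, 64-QAM, rate-5/6 coding,
    # 40 MHz channel, 400 ns short guard interval.
    streams = 2
    data_subcarriers = 108        # 40 MHz HT channel
    bits_per_subcarrier = 6       # 64-QAM
    coding_rate = 5 / 6
    symbol_time = 3.6e-6          # seconds, with the short guard interval

    bits_per_symbol = streams * data_subcarriers * bits_per_subcarrier * coding_rate
    phy_rate = bits_per_symbol / symbol_time
    print(f"{phy_rate / 1e6:.0f} Mbps")    # 300 Mbps

The 170 Mbps “real world” figure is presumably what remains after protocol overhead such as preambles, acknowledgements and contention.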

Of course, these chips will be useful in devices other than handsets. They are perfect for tablets, where there is plenty of room for two antennas at the right separation.

ISP Performance numbers from Netflix

Interesting numbers from the Netflix Tech Blog.

Several things jump out at me. First, cable is faster than DSL, and wireless is the slowest. Second, again no surprise, urban is faster than rural. But the big surprise to me is the Verizon number. They have spent a ton on FiOS, and according to Trefis about half of Verizon’s broadband customers are now on FiOS. So according to these numbers, even if we suppose that Verizon’s non-FiOS customers were getting a bandwidth of zero, the average bandwidth available to a FiOS customer appears to be less than 5 megabits per second.
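
The arithmetic behind that bound is just averaging. A sketch, treating the overall Verizon figure as an assumption (roughly 2.4 Mbps, consistent with the clustering noted below):

    # If about half of Verizon's broadband subscribers are on FiOS, the FiOS
    # average can be at most the overall average divided by that share, even
    # if every non-FiOS subscriber contributed zero bandwidth.
    overall_average_mbps = 2.4    # assumed; read off the Netflix chart
    fios_share = 0.5              # per Trefis, about half of subscribers

    fios_upper_bound = overall_average_mbps / fios_share
    print(f"FiOS average is at most {fios_upper_bound:.1f} Mbps")   # 4.8 Mbps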

Since FiOS is a very fast last mile, the bottleneck might be in the backhaul, or, more likely, in some bandwidth-throttling device. Whichever way you slice it, it’s hard to cast these numbers in a positive light for Verizon.

Netflix measurements of ISP bandwidth
Update January 31, 2011: This story in the St. Louis Business Journal says that Charter, the ISP with the best showing in the Netflix measurements, is increasing its speeds further, with no increase in price. This is good news. It is time that ISPs in the US started to compete on speed.

Contemplating the graphs, the lines appear to cluster to some extent in three bands, centered on 1.5 Mbps, 2 Mbps and 2.5 Mbps. If this is evidence of traffic shaping, these are the numbers that ISPs should be using in their promotional materials, rather than the usual “up to” numbers that don’t mention minimums or averages.

ITExpo East 2011: C-01 “Connecting the Distributed Enterprise via Video”

I will be moderating this panel at IT Expo in Miami on February 3rd at 9:00 am:

In today’s distributed workforce environment, it’s essential to be able to communicate with employees and customers across the globe both efficiently and effectively. Until recently, doing so was far more easily said than done: not only was the technology not in place, but video wasn’t accepted as a form of business communication. Now that video has burst onto the scene by way of Apple’s FaceTime, Skype and Gmail video chat, consumers are far more likely to pick video over voice – both at home and at work. But, though demand has never been higher, enterprise networks still experience a slowdown when employees attempt to access video streams from the public Internet, because IP video has not been properly provisioned. This session will provide an overview of the main deployment considerations so that IP video can be successfully deployed inside or outside the corporate firewall without impacting the performance of the network, as well as how networks need to adapt to accommodate widespread desktop video deployments. It will also explore the latest in video compression technology in order to elucidate the relationship between video quality, bandwidth, and storage. With the technology in place, an enterprise can efficiently leverage video communication to lower costs and increase collaboration.

The panelists are:

  • Mike Benson, Regional Vice President, VBrick Systems
  • Anatoli Levine, Sr. Director, Product Management, RADVISION Inc.
  • Matt Collier, Senior Vice President of Corporate Development, LifeSize

VBrick claims to be the leader in video streaming for enterprises. Radvision and LifeSize (a subsidiary of Logitech) are oriented towards video conferencing rather than streaming. It will be interesting to get their respective takes on bandwidth constraints on the WLAN and the access link, and what other impairments are important.

ITExpo East 2011: NGC-04 “Meeting the Demand for In-building Wireless Networks”

I will be moderating this panel at IT Expo in Miami on February 2nd at 12:00 pm:

Mobility is taking the enterprise space by storm – everyone is toting a smartphone, tablet, laptop, or one of each. It’s all about what device happens to be the most convenient at the time, and the theory behind unified communications – anytime, anywhere, any device. The adoption of mobile devices in the home and their relevance in the business space has helped drive a new standard for enterprise networking, which is rapidly becoming a wireless opportunity: WiFi networks offer not only the convenience and flexibility of in-building mobility, but are also much easier and more cost-effective to deploy than Ethernet. Furthermore, the latest wireless standards largely eliminate the traditional performance gap between wired and wireless, and, when properly deployed, WiFi networks are at least as secure as wired ones. This session will discuss the latest trends in enterprise wireless, the secrets to successful deployments, and how to make the most of your existing infrastructure while moving forward with your WiFi installation.

The panelists are:

  • Shawn Tsetsilas, Director, WLAN, Cellular Specialties, Inc.
  • Perry Correll, Principal Technologist, Xirrus Inc.
  • Adam Conway, Vice President of Product Management, Aerohive

Cellular Specialties in this context is a system integrator, and one of their partners is Aerohive. Aerohive’s special claim to fame is that they eliminate the WLAN controller, so each access point controls itself in cooperation with its neighbors. The only remaining centralized function is the management. Aerohive claims that this architecture gives them superior scalability, and a lower system cost (since you only pay for the access points, not the controllers).

Xirrus’s product is unusual in a different way, packing a dozen access points into a single sectorized box, to massively increase the bandwidth available in the coverage areas.

So is it true that Wi-Fi has evolved to the point that you no longer need wired Ethernet?

ITExpo East 2011: NGC-02 “The Next Generation of Voice over WLAN”

I will be moderating this panel at IT Expo in Miami on February 2nd at 10:00 am:

Voice over WLAN has been deployed in enterprise applications for years, but has yet to reach mainstream adoption (beyond vertical markets). With technologies like mobile UC, 802.11n, fixed-mobile convergence and VoIP for smartphones raising awareness/demand, there are a number of vendors poised to address market needs by introducing new and innovative devices. This session will look at what industries have already adopted VoWLAN and why – and what benefits they have achieved, as well as the technology trends that make VoWLAN possible.

The panelists are:

  • Russell Knister, Sr. Director, Business Development & Product Marketing, Motorola Solutions
  • Ben Guderian, VP Applications and Ecosystem, Polycom
  • Carlos Torales, Cisco Systems, Inc.

All three of these companies have a venerable history in enterprise Wi-Fi phones; the two original pioneers of enterprise Voice over Wireless LAN were Symbol and Spectralink, which Motorola and Polycom acquired respectively in 2006 and 2007. Cisco announced a Wi-Fi handset (the 7920) to complement their Cisco CallManager in 2003. But the category has obstinately remained a niche for almost a decade.

It has been clear from the outset that cell phones would get Wi-Fi, and it would be redundant to have dedicated Wi-Fi phones. And of course, now that has come to pass. The advent of the iPhone with Wi-Fi in 2007 subdued the objections of the wireless carriers to Wi-Fi and knocked the phone OEMs off the fence. By 2010 you couldn’t really call a phone without Wi-Fi a smartphone, and feature phones aren’t far behind.

So this session will be very interesting, answering questions about why enterprise voice over Wi-Fi has been so confined, and why that will no longer be the case.

Sharing Wi-Fi Update

Back in February 2009 I wrote about how Atheros’ new chip made it possible for a phone to act as a Wi-Fi hotspot. A couple of months later, David Pogue wrote in the New York Times about a standalone device to do the same thing, the Novatel MiFi 2200. The MiFi is a Wi-Fi access point with a direct connection to the Internet over a cellular data channel. So you can have “a personal Wi-Fi bubble, a private hot spot, that follows you everywhere you go.”

The type of technology that Atheros announced at the beginning of 2009 was put on a standards track at the end of 2009; the “Wi-Fi Direct” standard was launched in October 2010. So far about 25 products have been certified. Two phones have already been announced with Wi-Fi Direct built-in: the Samsung Galaxy S and the LG Optimus Black.

Everybody has a cell phone, so if a cell phone can act as a MiFi, why do you need a MiFi? It’s another by-product of the dysfunctional billing model of the mobile network operators. If they simply bit the bullet and charged à la carte by the gigabyte, they would be happy to encourage you to use as many devices as possible through your phone.

Wi-Fi Direct may force a change in the way that network operators bill. It is such a compelling benefit to consumers, and so trivial to implement for the phone makers, that the mobile network operators may not be able to hold it back.

So if this capability proliferates into all cell phones, we will be able to use Wi-Fi-only tablets and laptops wherever we are. This seems to be bad news for Novatel’s MiFi and for cellular modems in laptops. Which leads to another twist: Qualcomm’s Gobi is by far the leading cellular modem for laptops, and Qualcomm just announced that it is acquiring Atheros.

Net Neutrality Fallout

Stacey Higginbotham posted an analysis of the FCC Net Neutrality report and order on GigaOM. She concludes:

As a consumer, it’s depressing, …it leaves the mobile field open for the creation of walled gardens and incentivizes the creation of application-specific devices.

Sure enough, just two weeks after the publication of the R&O, Ryan Kim reports on GigaOM that MetroPCS announced on January 3rd plans to charge extra based on what you access, rather than on the quantity or quality of the bandwidth you consume.