Net Neutrality and consumer benefit

A story in Wired dated December 17th reports on a webinar presented by Allot Communications and Openet.

A slide from the webinar shows how network operators could charge by the type of content being transported rather than by bandwidth:

DPI integrated into Policy Control & Charging

In an earlier post I said that strict net neutrality is appropriate for wired broadband connections, but that for wireless connections the bandwidth is so constrained that the network operators must be able to ration bandwidth in some way. The suggestion of differential charging for bandwidth by content goes way beyond mere rationing. What makes it egregious is that the bandwidth costs the wireless service provider the same regardless of what is carried on it. Consumers don’t want to buy content from Internet service providers; they want to buy connectivity – access to the Internet.

In cases where a carrier can legitimately claim to add value, it would make sense to let it charge more. For example, real-time communications demands traffic prioritization and tighter timing constraints than other content. Consumers may be willing to pay a little more for the better-sounding calls that result.
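
To make the prioritization point concrete, here is a minimal sketch (mine, not from the webinar or the earlier post) of how a VoIP application can ask for expedited treatment by setting the DSCP bits on its UDP socket. The address and port are placeholders, and whether the carrier honors the marking is exactly the policy question at issue.

    import socket

    # DSCP "Expedited Forwarding" (EF, value 46) shifted into the top six bits
    # of the IP TOS byte: 46 << 2 == 0xB8. Routers that honor DSCP markings
    # place traffic marked this way in a low-latency queue.
    DSCP_EF_TOS = 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)

    # Voice packets sent on this socket now carry the EF marking; whether any
    # given network honors it is entirely up to the operator.
    sock.sendto(b"rtp voice payload goes here", ("192.0.2.10", 5004))  # placeholder address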

But this should be the consumer’s choice. Allowing mandatory charging for what is currently available free on the Internet would mean the death of the mobile Internet, and its replacement with something like interactive IP-based cable TV service. The Internet is currently a free market where the best and best-marketed products win. Per-content charging would close this down, replacing it with an environment where product managers at carriers would decide who is going to be the next Facebook or Google, kind of like AOL or CompuServe before the Internet. The lesson of the Internet is that a dumb network connecting content creators with content consumers leads to massive innovation and value creation. The lesson of the PSTN is that an “intelligent network,” where network operators control the content, leads to decades of stagnation.

In a really free market, producers get paid for adding value. Since per-content charging by carriers doesn’t add value, but merely diverts revenue from content producers to the carriers, it would be impossible in a free market. If a wireless carrier successfully imposed it, that would indicate that wireless Internet access is not a free market, but something more like a monopoly or cartel, which should be regulated for the public good.

Dumb mobile pipes

An interesting story from Bloomberg says that Ericsson is contemplating owning wireless network infrastructure. Ericsson is already one of the top five mobile network operators worldwide, but it doesn’t own any of the networks it manages – it is simply a supplier of outsourced network management services.

The idea here is that Ericsson would own and manage its own network, and wholesale the services on it to MVNOs. If this plan goes through, and if Ericsson is able to stick to the wholesale model and not try to deliver services directly to consumers, it will be huge for wireless network neutrality. It would be a truly disruptive development, in that it could lower barriers to entry for mobile service providers and open up the wireless market to innovation at the service level.

[update] Upon reflection, I think this interpretation of Ericsson’s intent is over-enthusiastic. The problem is spectrum. Ericsson can’t market this to MVNOs without spectrum. So a more likely interpretation of Ericsson’s proposal is that it will pay for infrastructure, then sell capacity and network management services to spectrum-owning mobile network operators. Not a dumb pipes play at all. It is extremely unlikely that Ericsson will buy spectrum for this, though there are precedents for equipment manufacturers buying spectrum – Qualcomm and Intel have both done so.

[update 2] With the advent of white spaces, Ericsson would not need to own spectrum to offer a wholesale service from its wireless infrastructure. The incremental cost of provisioning white spaces on a cellular base station would be relatively modest.

QoS meters on Voxygen

The term “QoS” is used ambiguously. The two main categories of definition are, first, QoS Provisioning: “the capability of a network to provide better service to selected network traffic,” which means packet prioritization of one kind or another; and second, more literally, Quality of Service: the degree of perfection of a user’s audio experience in the face of potential impairments to network performance. These impairments fall into four categories: availability, packet loss, packet delay and tampering. Since this second sense is normally used in the context of trying to measure it, we could call it QoS Metrics as opposed to QoS Provisioning. I would put issues like choice of codec and echo into the larger category of Quality of Experience, which includes all the possible impairments to audio experience, not just those imposed by the network.
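
As an illustration of the QoS Metrics sense, here is a rough sketch that turns measured one-way delay and packet loss into an R-factor and an estimated MOS for a G.711 call. The coefficients are the widely quoted simplified E-model (ITU-T G.107) approximations, so treat the output as indicative rather than authoritative.

    import math

    def r_factor(one_way_delay_ms, loss_fraction):
        """Simplified E-model estimate for a G.711 call (indicative only)."""
        d = one_way_delay_ms
        # Delay impairment: grows slowly below ~177 ms, much faster above it.
        delay_impairment = 0.024 * d
        if d > 177.3:
            delay_impairment += 0.11 * (d - 177.3)
        # Loss impairment for G.711 with random packet loss.
        loss_impairment = 30.0 * math.log(1.0 + 15.0 * loss_fraction)
        return 94.2 - delay_impairment - loss_impairment

    def mos(r):
        """Map an R-factor onto the 1-to-5 Mean Opinion Score scale."""
        if r <= 0:
            return 1.0
        if r >= 100:
            return 4.5
        return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

    r = r_factor(one_way_delay_ms=150.0, loss_fraction=0.01)
    print("R = %.1f, estimated MOS = %.2f" % (r, mos(r)))  # roughly R=86, MOS=4.2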

By “tampering” I mean any intentional changes to the media payload of a packet, and I am OK with the negative connotations of the term since I favor the “dumb pipes” view of the Internet. On phone calls the vast bulk of such tampering is transcoding: changing the media format from one codec to another. Transcoding always reduces the fidelity of the sound, even when transcoding to a “better” codec.

Networks vary greatly in the QoS they deliver. One of the major benefits of going with VoIP service provided by your ISP (Internet Service Provider) is that your ISP has complete control over QoS. But there is a growing number of ITSPs (Internet Telephony Service Providers) that contend that the open Internet provides adequate QoS for business-grade telephone service. Skype, for example.

But it’s nice to be sure. So I have added a “QoS Metrics” category in the list to the right of this post. You can use the tools there to check your connection. I particularly like the one from Voxygen, which frames the test results in terms of the number of simultaneous voice sessions that your WAN connection can comfortably handle. Here’s an example of a test of ten channels:

Screen shot of Voxygen VoIP performance metrics tool
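
For comparison with the Voxygen numbers, here is a back-of-the-envelope sketch using my own assumptions, not Voxygen’s methodology: a G.711 call at 20 ms packetization costs roughly 87 kbps per direction once RTP/UDP/IP and Ethernet overhead are included, and a quarter of the link is reserved for other traffic.

    # Rough estimate of how many G.711 calls a WAN uplink can comfortably carry.
    PER_CALL_KBPS = 87.0   # ~64 kbps payload plus RTP/UDP/IP/Ethernet overhead at 20 ms packets
    HEADROOM = 0.75        # use at most 75% of the link for voice

    def comfortable_call_count(uplink_kbps):
        return int((uplink_kbps * HEADROOM) // PER_CALL_KBPS)

    for uplink_kbps in (768, 1536, 3000, 10000):   # typical uplink speeds, in kbps
        print("%6d kbps uplink -> ~%d calls" % (uplink_kbps, comfortable_call_count(uplink_kbps)))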

Third Generation WLAN Architectures

Aerohive claims to offer the first example of a third-generation Wireless LAN architecture.

  • The first generation was the autonomous access point.
  • The second generation was the wireless switch, or controller-based WLAN architecture.
  • The third generation is a controller-less architecture.

The move from the first generation to the second was driven by enterprise networking needs. Enterprises need greater control and manageability than smaller deployments. First-generation autonomous access points didn’t have the processing power to handle the demands of greater network control, so a separate category of device was a natural solution: in the second-generation architecture, “thin” access points did all the real-time work, and delegated the less time-sensitive processing to powerful central controllers.

Now the technology transition to 802.11n enables higher capacity wireless networks with better coverage. This allows enterprises to expand the role of wireless in their networks, from convenience to an alternative access layer. This in turn further increases the capacity, performance and reliability demands on the WLAN.

Aerohive believes this generational change in technology and market requires a corresponding generational change in system architecture. A fundamental technology driver for 802.11n, the ever-increasing processing bang for the buck delivered by Moore’s law, also yields enough low-cost processing power to move the control functions from central controllers back into the access points. Aerohive aspires to lead the enterprise Wi-Fi market into this new architecture generation.

Superficially, getting rid of the controller looks like a return to the first generation architecture. But an architecture with all the benefits of a controller-based WLAN, only without a controller, requires a sophisticated suite of protocols by which the smart access points can coordinate with each other. Aerohive claims to have developed such a protocol suite.

The original controller-based architectures ran all network traffic through the controller: the management plane, the control plane and the data plane. The bulk of network traffic is on the data plane, so bottlenecks there do more damage than on the other planes. Modern controller-based architectures therefore have “hybrid” access points that handle the data plane, leaving only the control and management planes to the controller device (Aerohive’s architect, Devin Akin, says, “distributed data forwarding at Layer-2 isn’t news, as every other vendor can do this.”) Aerohive’s third-generation architecture takes the next step and distributes control-plane handling as well, leaving only the management function centralized, and that’s just software on a generic server.
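
To summarize the division of labor described above, here is a small illustrative data structure, my own summary of the article rather than an Aerohive artifact, showing where each plane lives in each generation.

    # Where each WLAN plane is handled in the three architecture generations,
    # as summarized in this post (illustrative only, not an Aerohive artifact).
    WLAN_GENERATIONS = {
        "gen 1: autonomous APs": {
            "data plane": "each access point",
            "control plane": "each access point",
            "management plane": "per-AP configuration",
        },
        "gen 2: controller-based": {
            "data plane": "controller (or hybrid APs in later designs)",
            "control plane": "controller",
            "management plane": "controller",
        },
        "gen 3: controller-less": {
            "data plane": "cooperating access points",
            "control plane": "cooperating access points",
            "management plane": "software on a generic server",
        },
    }

    for generation, planes in WLAN_GENERATIONS.items():
        print(generation)
        for plane, handled_by in planes.items():
            print("  %-16s -> %s" % (plane, handled_by))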

Aerohive contends that controller-based architectures are expensive, poorly scalable, unreliable, hard to deploy and not needed. A controller-based architecture is more expensive than a controller-less one, because controllers aren’t free (Aerohive charges the same for its APs as other vendors do for their thin ones: under $700 for a 2×2 MIMO dual-band 802.11n device). It is not scalable because the controller constitutes a bottleneck. It is not reliable because a controller is a single point of failure, and it is not needed because processing power is now so cheap that all the functions of the controller can be put into each AP, and given the right system design, the APs can coordinate with each other without the need for centralized control.

Distributing control in this way is considerably more difficult than distributing data forwarding. Control-plane functions include all the security features of the WLAN, like authentication and admission, multiple VLANs and intrusion detection (WIPS). Greg Taylor, wireless LAN services practice lead for the Professional Services Organization of BT in North America, says, “The number one benefit [of a controller-based architecture] is security,” so a controller-less solution has to reassure customers that their vulnerability will not be increased. According to Dr. Amit Sinha, Chief Technology Officer at Motorola Enterprise Networking and Communications, other functions handled by controllers include “firewall, QoS, L2/L3 roaming, WIPS, AAA, site survivability, DHCP, dynamic RF management, firmware and configuration management, load balancing, statistics aggregation, etc.”

You can download a comprehensive white paper describing Aerohive’s architecture here.

Motorola recently validated Aerohive’s vision, announcing a similar architecture, described here.

Here’s another perspective on this topic.

Google sells out

Google and Verizon came out with their joint statement on Net Neutrality on Monday. It is reasonable and idealistic in its general sentiments, but contains several of the loopholes Marvin Ammori warned us about. It was released in three parts: a document posted to Google Docs, a commentary posted to the Google Public Policy Blog, and an op-ed in the Washington Post. Eight paragraphs in the statement document map to seven numbered points in the blog. The first three numbered points map to the six principles of net neutrality enumerated by Julius Genachowski [jg1-6] almost a year ago. Here are the Google/Verizon points as numbered in the blog:

1. Open access to Content [jg1], Applications [jg2] and Services [jg3]; choice of devices [jg4].
2. Non-discrimination [jg5].
3. Transparency of network management practices [jg6].
4. FCC enforcement power.
5. Differentiated services.
6. Exclusion of Wireless Access from these principles (for now).
7. Universal Service Fund to include broadband access.

The non-discrimination paragraph is weakened by the kinds of words that are invitations to expensive litigation unless they are precisely defined in legislation. It doesn’t prohibit discrimination; it merely prohibits “undue” discrimination that would cause “meaningful harm.”

The managed (or differentiated) services paragraph is an example of what Ammori calls “an obvious potential end-run around the net neutrality rule.” I think that Google and Verizon would argue that their transparency provisions mean that ISPs can deliver things like FiOS video-on-demand over the same pipe as Internet service without breaching net neutrality, since the Internet service will commit to a measurable level of service. This is not how things work at the moment; ISPs make representations about the maximum delivered bandwidth, but they don’t specify for consumers a minimum below which the connection will not fall.

The examples the Google blog gives of “differentiated online services, in addition to the Internet access and video services (such as Verizon’s FIOS TV)” appear to have in common the need for high bandwidth and high QoS. This bodes extremely ill for the Internet. The evolution of Internet access service to date has been one of steadily increasing bandwidth and QoS. The implication of this paragraph is that these improvements will be skimmed off into proprietary services, leaving the bandwidth and QoS of the public Internet stagnant.

Many consider the exclusion of wireless egregious. I think that Google and Verizon would argue that there is nothing to stop wireless from being added later. In any case, I am sympathetic to Verizon on this issue, since wireless is so bandwidth-constrained relative to wireline that it seems necessary to ration it in some way.

The Network Management paragraph in the statement document permits “reasonable” network management practices. Fortunately the word “reasonable” is defined in detail in the statement document. Unfortunately the definition, while long, includes a clause which renders the rest of the definition redundant: “or otherwise to manage the daily operation of its network.” This clause appears to permit whatever the ISP wants.

So on balance, while it contains a lot of worthy sentiments, I am obliged to view this framework as a sellout by Google. I am not alone in this assessment.

Net Neutrality heating up

I got an email from Credo this morning asking me to call Julius Genachowski to ask him to stand firm on net neutrality.

The nice man who answered told me that the best way to make my voice heard on this issue is to file a comment at the FCC website, referencing proceeding number 09-191.

So that my comment would be a little less ignorant, I carefully read an article on the Huffington Post by Marvin Ammori before filing it.

My opinion on this is that ISPs deserve to be fairly compensated for their service, but that they should not be permitted to double-charge for a consumer’s Internet access. If some service like video on demand requires prioritization or some other differential treatment, the ISP should only be allowed to charge the consumer for this, not the content provider. In other words, every bit traversing the subscriber’s access link should be treated equally by the ISP unless the consumer requests otherwise, and the ISP should not be permitted to take payments from third parties like content providers to preempt other traffic. If such discrimination is allowed, the ISP will be motivated to keep last-mile bandwidth scarce.

Internet access in the US is effectively a duopoly (cable or DSL) in each neighborhood. This lack of competition has caused the US to become a global laggard in consumer Internet bandwidth. With weak competition and ineffective regulation, a rational ISP will forgo the expense of network upgrades.

ISPs like AT&T view the Internet as a collection of pipes connecting content providers to content consumers. This is the thinking behind Ed Whitacre’s famous comment, “to expect to use these pipes for free is nuts!” Ed was thinking that Google, Yahoo and Vonage are using his pipes to his subscribers for free. The “Internet community,” on the other hand, views the Internet as a collection of pipes connecting people to people. From this point of view, the consumer pays AT&T for access to the Internet, and Google, Yahoo and Vonage each pay their respective ISPs for access to the Internet. Nobody is getting anything for free. It makes no more sense for Google to pay AT&T for a subscriber’s Internet access than it would for an AT&T subscriber to pay Google’s connectivity providers for Google’s Internet access.

All you can eat?

The always good Rethink Wireless has an article, “AT&T sounds deathknell for unlimited mobile data.”

It points out that with “3% of smartphone users now consuming 40% of network capacity,” the carrier has to draw a line. Presumably because if 30% of AT&T’s subscribers were to buy iPhones, they would consume 400% of the network’s capacity.

Wireless networks are badly bandwidth constrained. AT&T’s woes with the iPhone launch were caused by lack of backhaul (wired capacity to the cell towers), but the real problem is on the wireless link from the cell tower to the phone.

The problem here is one of setting expectations. Here’s an excerpt from AT&T’s promotional materials: “Customers with capable LaptopConnect products or phones, like the iPhone 3G S, can experience the 7.2 [megabit per second] speeds in coverage areas.” A reasonable person reading this might think that it is an invitation to do something like video streaming. Actually, a single user of this bandwidth would consume the entire capacity of a cell-tower sector:
HSPA cell capacity per sector per 5 MHz
Source: High Speed Radio Access for Mobile Communications, edited by Harri Holma and Antti Toskala.

This poses a dilemma – not just for AT&T but for all wireless service providers. Ideally you want the network to be super responsive, for example when you are loading a web page. This requires a lot of bandwidth for short bursts. So imposing a bandwidth cap, throttling download speeds to some arbitrary maximum, would give users a worse experience. But users who consume a lot of bandwidth continuously – streaming live TV, for example – make things bad for everybody.
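
One common way to reconcile burstiness with a sustained-rate limit is a token bucket, which lets short bursts run at full speed while capping the long-run average. The post doesn’t prescribe any particular mechanism, so the sketch below is purely illustrative and the rates are made-up numbers.

    import time

    class TokenBucket:
        """Allow bursts up to `capacity` bytes while limiting the long-run
        average to `rate` bytes per second (illustrative sketch only)."""

        def __init__(self, rate_bytes_per_s, capacity_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = capacity_bytes
            self.tokens = capacity_bytes        # start with a full burst allowance
            self.last_refill = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, up to the burst capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes     # short bursts pass at full speed
                return True
            return False                        # sustained overuse gets throttled

    # Example: ~1 Mbps sustained with a 2 MB burst allowance (made-up parameters).
    bucket = TokenBucket(rate_bytes_per_s=125000, capacity_bytes=2000000)
    print(bucket.allow(1500))   # a single 1500-byte packet fits within the burst allowance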

The cellular companies think of users like this as bad guys, taking more than their share. But actually they are innocently taking the carriers up on the promises in their ads. This is why the Rethink piece says “many observers think AT&T – and its rivals – will have to return to usage-based pricing, or a tiered tariff plan.”

Actually, AT&T already appears to have such a policy – reserving the right to charge more if you use more than 5GB per month. This is a lot, unless you are using your phone to stream video. For example, it’s over 10,000 average web pages or 10,000 minutes of VoIP. You can avoid running over this cap by limiting your streaming videos and your videophone calls to when you are in Wi-Fi coverage. You can still watch videos when you are out and about by downloading them in advance, iPod style.
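
As a sanity check on those figures, here is a quick back-of-the-envelope calculation. The assumptions are mine, not AT&T’s: roughly 500 KB for an average web page circa 2009, and about 64 kbps of G.711 codec payload for a VoIP call, ignoring header overhead.

    # What a 5 GB monthly cap buys, under the assumptions stated above.
    CAP_BYTES = 5 * 1000**3                  # 5 GB (decimal gigabytes)
    PAGE_BYTES = 500 * 1000                  # ~500 KB per average web page
    VOIP_BYTES_PER_MINUTE = 64000 / 8 * 60   # 64 kbps -> 480,000 bytes per minute

    print("Web pages per month: %.0f" % (CAP_BYTES / PAGE_BYTES))               # ~10,000
    print("VoIP minutes per month: %.0f" % (CAP_BYTES / VOIP_BYTES_PER_MINUTE)) # ~10,400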

This doesn’t seem particularly burdensome to me.

Network Neutrality – FCC issues NPRM

I wrote earlier about FCC chairman Julius Genachowski’s plans for regulations aimed at network neutrality. The FCC today came through with a Notice of Proposed Rule Making. Here are the relevant documents from the FCC website:

Summary Presentation: Acrobat
NPRM: Word | Acrobat
News Release: Word | Acrobat
Genachowski Statement: Word | Acrobat
Copps Statement: Word | Acrobat
McDowell Statement: Word | Acrobat
Clyburn Statement: Word | Acrobat
Baker Statement: Word | Acrobat

The NPRM itself is a hefty document, 107 pages long; if you just want the bottom line, the Summary Presentation is short and a little more readable than the press release. The comment period closes in mid-January, and the FCC will respond to the comments in March. I hesitate to guess when the rules will actually be released – this is hugely controversial: 40,000 comments filed to date. Here is a link to a pro-neutrality advocate. Here is a link to a pro-competition advocate. I believe that the FCC is doing a necessary thing here, and that the proposals properly address the legitimate concerns of the ISPs.

Here is the story from Reuters, and from AP.

3G network performance test results: Blackberries awful!

ARCchart has just published a report summarizing the data from a “test your Internet speed” applet that it publishes for iPhone, BlackBerry and Android. The dataset comprises millions of readings, from every country and carrier in the world. The highlights from my point of view:

  1. 3G (UMTS) download speeds average about a megabit per second; 2.5G (EDGE) speeds average about 160 kbps and 2G (GPRS) speeds average about 50 kbps.
  2. For VoIP, latency is a critical measure. The average on 3G networks was 336 ms, with a variation between carriers and countries ranging from 200 ms to over a second. The ITU reckons latency becomes a serious problem above 170 ms. I discussed the latency issue on 3G networks in an earlier post.
  3. According to these tests, BlackBerrys are on average only about half as fast as iPhones and Android phones for both download and upload on the same networks. The BlackBerry situation is complicated because they claim to compress data streams, and because all data normally passes through BlackBerry servers. The ARCchart report looks into the reasons for the BlackBerry’s poor showing:

The BlackBerry download average across all carriers is 515 kbps versus 1,025 kbps for the iPhone and Android – a difference of half. Difference in the upload average is even greater – 62 kbps for BlackBerry compared with 155 kbps for the other devices.
Source: ARCchart, September 2009.
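
For anyone curious how speed-test applets like ARCchart’s work at the most basic level, the sketch below times the download of a known payload and divides by the elapsed time. The URL is a placeholder and this is my illustration, not ARCchart’s methodology; real tools average several transfers and account for TCP ramp-up.

    import time
    import urllib.request

    def measure_download_kbps(url):
        """Time one download and report throughput in kbps (rough indicator only)."""
        start = time.monotonic()
        with urllib.request.urlopen(url) as response:
            payload = response.read()
        elapsed = time.monotonic() - start
        return (len(payload) * 8 / 1000.0) / elapsed

    # Placeholder URL: substitute any reasonably large, publicly hosted test file.
    print("%.0f kbps" % measure_download_kbps("https://example.com/testfile.bin"))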