Net Neutrality and consumer benefit

A story in Wired dated December 17th reports on a webinar presented by Allot Communications and Openet.

A slide from the webinar shows how network operators could charge by the type of content being transported rather than by bandwidth:

DPI integrated into Policy Control & Charging

In an earlier post I said that strict net neutrality is appropriate for wired broadband connections, but that wireless bandwidth is so constrained that network operators must be able to ration it in some way. The suggestion of charging differently for bandwidth depending on the content it carries goes far beyond mere rationing. It is egregious because the bandwidth costs the wireless service provider the same regardless of what is carried on it. Consumers don’t want to buy content from Internet service providers; they want to buy connectivity – access to the Internet.

In cases where a carrier can legitimately claim to add value, it would make sense to let it charge more. For example, real-time communication has tighter timing constraints than other content and demands traffic prioritization to meet them. Consumers may be willing to pay a little more for the better-sounding calls that result.

But this should be the consumer’s choice. Allowing mandatory charges for what is currently available free on the Internet would mean the death of the mobile Internet, and its replacement with something like interactive IP-based cable TV service. The Internet is currently a free market where the best and best-marketed products win. Per-content charging would close this down, replacing it with an environment where product managers at carriers decide who is going to be the next Facebook or Google, rather like AOL or CompuServe before the Internet. The lesson of the Internet is that a dumb network connecting content creators with content consumers leads to massive innovation and value creation. The lesson of the PSTN is that an “intelligent network,” where network operators control the content, leads to decades of stagnation.

In a genuinely free market, producers get paid for adding value. Per-content charging by carriers doesn’t add value; it merely diverts revenue from content producers to the carriers, so it would be impossible in a free market. If a wireless carrier successfully imposed it, that would indicate that wireless Internet access is not a free market, but something more like a monopoly or cartel which should be regulated for the public good.

Video calling from your cell phone

Phone numbers are an antiquated kind of thing, yet we are sufficiently beaten down by the machines that we think it natural to identify a person by a ten-digit number. Maybe the demise of the numeric phone keypad, as big touch-screens take over, will change matters on this front. Meanwhile, phone numbers are holding us back in important ways. Because they are bound to the PSTN, which doesn’t carry video calls, video calls are harder to place than voice calls: we don’t have people’s video addresses as handy as their phone numbers.

This year, three new products attempted to address this issue in remarkably similar ways – clearly an idea whose time has come. The products are Apple’s FaceTime, Cisco’s IME and a startup product called Tango.

In all three of these products, you make a call to a regular phone number, which triggers a video session over the Internet. You only need the phone number – the Internet addressing is handled automatically. The automatic addressing has to solve two problems: finding a candidate address, then verifying that it is the right one. Here’s how each of the three new products does the job:

1. FaceTime. When you first start FaceTime, it sends an SMS (text message) to an Apple server. The SMS contains sufficient information for the Apple server to reliably associate your phone number with the XMPP (push services) client running on your iPhone. With this authentication performed, anybody else who has your phone number in their address book on their iPhone or Mac can place a videophone call to you via FaceTime.

2. Cisco IME (Inter-Company Media Engine). The protocol used by IME to securely associate your phone number with your IP address is ViPR (Verification Involving PSTN Reachability), an open protocol specified in several IETF drafts co-authored by Jonathan Rosenberg, who is now at Skype. ViPR can be embodied in a network box like IME, or in an endpoint like a phone or a PC.
Here’s how it works: you make a phone call in the usual way. After you hang up, ViPR looks up the phone number you called to see if it is also ViPR-enabled. If it is, ViPR performs a secure mutual verification, using proof of knowledge of the previous PSTN call as a shared secret. The next time you dial that phone number, ViPR makes the call over the Internet rather than through the phone network, so you can do wideband audio and video with no per-minute charge. A major difference between ViPR and FaceTime or Tango is that ViPR has no central registration server. The directory in which ViPR looks up phone numbers is stored in a distributed hash table (DHT): a distributed database whose contents are spread across the network, with each ViPR participant contributing a little storage. The DHT uses an algorithm called Chord, which defines how each node connects to the other nodes and how information is looked up (see the sketch after this list).

3. Tango, like FaceTime, has its own registration servers. The authentication on these works slightly differently. When you register with Tango, it looks in the address book on your iPhone for other registered Tango users, and displays them in your Tango address book. So if you already know somebody’s phone number, and that person is a registered Tango user, Tango lets you call them in video over the Internet.
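To make the directory mechanism concrete, here is a minimal sketch of the general DHT idea in Python: phone numbers and nodes are hashed onto a ring, and a record lives on the first node clockwise from its key. This is a toy illustration of a Chord-style lookup, not ViPR’s actual wire protocol; the node names, hashing choices and record format are all assumptions made for the example.

```python
# Toy Chord-style directory lookup: each node owns the arc of the hash ring
# ending at its own ID; a key is stored on the first node whose ID is >= the
# key's hash (wrapping around). This is the general DHT idea, not ViPR itself.
import bisect
import hashlib

RING_BITS = 32  # toy ring size; real Chord deployments use far larger rings

def ring_hash(value: str) -> int:
    """Map a string (node name or phone number) onto the hash ring."""
    digest = hashlib.sha1(value.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** RING_BITS)

class ToyDHT:
    def __init__(self, node_names):
        # A sorted node list stands in for Chord's finger tables and routing.
        self.nodes = sorted((ring_hash(n), n) for n in node_names)

    def successor(self, key_hash: int) -> str:
        """First node clockwise from the key's position on the ring."""
        ids = [node_id for node_id, _ in self.nodes]
        i = bisect.bisect_left(ids, key_hash) % len(self.nodes)
        return self.nodes[i][1]

    def store(self, records, phone_number: str, route: str):
        node = self.successor(ring_hash(phone_number))
        records.setdefault(node, {})[phone_number] = route

    def lookup(self, records, phone_number: str):
        node = self.successor(ring_hash(phone_number))
        return records.get(node, {}).get(phone_number)

dht = ToyDHT(["ap-east", "ap-west", "eu-central", "office-pbx"])
records = {}
dht.store(records, "+14155551234", "sip:alice@203.0.113.10")  # hypothetical route
print(dht.lookup(records, "+14155551234"))  # -> sip:alice@203.0.113.10
```

The point of the structure is that any participant can find the node responsible for a phone number without a central server; real Chord adds finger tables so a lookup takes O(log n) hops rather than consulting a global sorted list.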

Dumb mobile pipes

An interesting story from Bloomberg says that Ericsson is contemplating owning wireless network infrastructure. Ericsson is already one of the top 5 mobile network operators worldwide, but it doesn’t own any of the networks it manages – it is simply a supplier of outsourced network management services.

The idea here is that Ericsson will own and manage its own network, and wholesale the services on it to MVNOs. If this plan goes through, and if Ericsson is able to stick to the wholesale model and not try to deliver services direct to consumers, it will be huge for wireless network neutrality. It is a truly disruptive development, in that it could lower barriers to entry for mobile service providers, and open up the wireless market to innovation at the service level.

[update] Upon reflection, I think this interpretation of Ericsson’s intent is over-enthusiastic. The problem is spectrum. Ericsson can’t market this to MVNOs without spectrum. So a more likely interpretation of Ericsson’s proposal is that it will pay for infrastructure, then sell capacity and network management services to spectrum-owning mobile network operators. Not a dumb pipes play at all. It is extremely unlikely that Ericsson will buy spectrum for this, though there are precedents for equipment manufacturers buying spectrum – Qualcomm and Intel have both done so.

[update 2] With the advent of white spaces, Ericsson would not need to own spectrum to offer a wholesale service from its wireless infrastructure. The incremental cost of provisioning white spaces on a cellular base station would be relatively modest.

QoS meters on Voxygen

The term “QoS” is used ambiguously. The two main senses are, first, QoS Provisioning: “the capability of a network to provide better service to selected network traffic,” which in practice means packet prioritization of one kind or another; and second, more literally, Quality of Service: the degree of perfection of a user’s audio experience in the face of potential impairments to network performance. These impairments fall into four categories: availability, packet loss, packet delay and tampering. Since the second sense is normally used in the context of trying to measure it, we could call it QoS Metrics, as opposed to QoS Provisioning. I would put issues like choice of codec and echo into the larger category of Quality of Experience, which includes all the possible impairments to audio experience, not just those imposed by the network.
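For the metrics sense, a common way to roll delay, jitter and packet loss into a single score is a simplified form of the ITU-T G.107 E-model, which computes an R-factor and maps it to an estimated MOS. The sketch below uses one widely quoted simplification; the coefficients vary between tools, so treat it as illustrative rather than as any particular meter’s formula.

```python
# Rough MOS estimate from measured network impairments, using a commonly
# quoted simplification of the ITU-T G.107 E-model. Coefficients are
# approximate and tools differ; this is illustrative, not a standard.
def estimate_mos(latency_ms: float, jitter_ms: float, loss_percent: float) -> float:
    # Jitter is often folded in as extra effective latency.
    effective_latency = latency_ms + 2.0 * jitter_ms + 10.0

    # Start from the default R-factor and subtract a delay impairment.
    if effective_latency < 160.0:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0

    # Subtract an impairment for packet loss (~2.5 R points per 1% loss).
    r -= 2.5 * loss_percent
    r = max(0.0, min(100.0, r))

    # Map R-factor to MOS (ITU-T G.107 mapping).
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

# Example: 80 ms latency, 10 ms jitter, 0.5% loss on a decent WAN link.
print(round(estimate_mos(80.0, 10.0, 0.5), 2))  # roughly 4.3 -- "good" quality
```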

By “tampering” I mean any intentional changes to the media payload of a packet, and I am OK with the negative connotations of the term since I favor the “dumb pipes” view of the Internet. On phone calls the vast bulk of such tampering is transcoding: changing the media format from one codec to another. Transcoding always reduces the fidelity of the sound, even when transcoding to a “better” codec.
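To see why even a single codec hop costs fidelity, here is a minimal sketch of µ-law companding round-trip error. It uses the textbook continuous companding equations with 8-bit quantization rather than the segmented G.711 tables, so the exact error figures are illustrative only; the point is simply that the decoded samples never exactly match the originals.

```python
# Continuous mu-law companding (mu = 255), quantized to 8 bits, to show that
# a lossy codec hop never returns the exact original samples. This uses the
# textbook companding equations, not the segmented G.711 table, so the error
# figures are illustrative only.
import math

MU = 255.0

def mulaw_encode(sample: float) -> int:
    """Compress a sample in [-1, 1] to an 8-bit code (0..255)."""
    compressed = math.copysign(math.log1p(MU * abs(sample)) / math.log1p(MU), sample)
    return int(round((compressed + 1.0) * 127.5))  # map [-1, 1] -> [0, 255]

def mulaw_decode(code: int) -> float:
    """Expand an 8-bit code back to a sample in [-1, 1]."""
    compressed = code / 127.5 - 1.0
    return math.copysign(math.expm1(abs(compressed) * math.log1p(MU)) / MU, compressed)

# A few samples of a 1 kHz tone at 8 kHz sampling, amplitude 0.5.
original = [0.5 * math.sin(2 * math.pi * 1000 * n / 8000) for n in range(8)]
roundtrip = [mulaw_decode(mulaw_encode(s)) for s in original]
worst_error = max(abs(a - b) for a, b in zip(original, roundtrip))
print(f"worst round-trip error: {worst_error:.5f}")  # nonzero: fidelity is lost
```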

Networks vary greatly in the QoS they deliver. One of the major benefits of going with VoIP service provided by your ISP (Internet Service Provider) is that your ISP has complete control over QoS. But there is a growing number of ITSPs (Internet Telephony Service Providers) that contend that the open Internet provides adequate QoS for business-grade telephone service. Skype, for example.

But it’s nice to be sure. So I have added a “QoS Metrics” category in the list to the right of this post. You can use the tools there to check your connection. I particularly like the one from Voxygen, which frames the test results in terms of the number of simultaneous voice sessions that your WAN connection can comfortably handle. Here’s an example of a test of ten channels:

Screen shot of Voxygen VoIP performance metrics tool
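Framing results as a channel count is essentially per-call bandwidth arithmetic. As a rough illustration (my own back-of-envelope assumptions, not Voxygen’s methodology): a G.711 call sent as 20 ms RTP packets needs about 80 kbit/s in each direction before layer-2 overhead, so the usable uplink capacity divided by that figure bounds the number of comfortable simultaneous calls.

```python
# Back-of-envelope: how many G.711 calls fit on a WAN link? These are my own
# illustrative assumptions, not Voxygen's test methodology.
def g711_call_bandwidth_kbps(packet_ms: float = 20.0) -> float:
    payload_bytes = 8000 * 1 * packet_ms / 1000   # 64 kbit/s codec -> 160 B per 20 ms
    overhead_bytes = 20 + 8 + 12                  # IPv4 + UDP + RTP headers
    packets_per_second = 1000.0 / packet_ms
    return (payload_bytes + overhead_bytes) * 8 * packets_per_second / 1000.0

def max_calls(link_kbps: float, utilization: float = 0.8) -> int:
    """Concurrent calls that fit, leaving headroom for other traffic."""
    return int(link_kbps * utilization // g711_call_bandwidth_kbps())

print(g711_call_bandwidth_kbps())   # ~80 kbit/s per direction, before layer-2 overhead
print(max_calls(1000))              # e.g. a 1 Mbit/s uplink -> about 10 calls
```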

Third Generation WLAN Architectures

Aerohive claims to be the first example of a third-generation Wireless LAN architecture.

  • The first generation was the autonomous access point.
  • The second generation was the wireless switch, or controller-based WLAN architecture.
  • The third generation is a controller-less architecture.

The move from the first generation to the second was driven by enterprise networking needs. Enterprises need greater control and manageability than smaller deployments. First generation autonomous access points didn’t have the processing power to handle the demands of greater network control, so a separate category of device was a natural solution: in the second generation architecture, “thin” access points did all the real-time work, and delegated the less time-sensitive processing to powerful central controllers.

Now the technology transition to 802.11n enables higher-capacity wireless networks with better coverage. This allows enterprises to expand the role of wireless in their networks, from a convenience to an alternative access layer. This in turn further increases the capacity, performance and reliability demands on the WLAN.

Aerohive believes this generational change in technology and market requires a corresponding generational change in system architecture. A fundamental technology driver of 802.11n, the ever-increasing processing bang for the buck delivered by Moore’s law, also provides enough low-cost processing power to move the control functions from central controllers back to the access points. Aerohive aspires to lead the enterprise Wi-Fi market into this new architecture generation.

Superficially, getting rid of the controller looks like a return to the first generation architecture. But an architecture with all the benefits of a controller-based WLAN, only without a controller, requires a sophisticated suite of protocols by which the smart access points can coordinate with each other. Aerohive claims to have developed such a protocol suite.

The original controller-based architectures routed all network traffic through the controller: the management plane, the control plane and the data plane. The bulk of network traffic is on the data plane, so bottlenecks there do more damage than on the other planes. Modern controller-based architectures therefore have “hybrid” access points that handle the data plane, leaving only the control and management planes to the controller device (Aerohive’s architect, Devin Akin, says, “distributed data forwarding at Layer-2 isn’t news, as every other vendor can do this.”) Aerohive’s third-generation architecture takes the next step and distributes control plane handling as well, leaving only the management function centralized, and that’s just software on a generic server.

Aerohive contends that controller-based architectures are expensive, poorly scalable, unreliable, hard to deploy and not needed. A controller-based architecture is more expensive than a controller-less one, because controllers aren’t free (Aerohive charges the same for its APs as other vendors do for their thin ones: under $700 for a 2×2 MIMO dual-band 802.11n device). It is not scalable because the controller constitutes a bottleneck. It is not reliable because a controller is a single point of failure, and it is not needed because processing power is now so cheap that all the functions of the controller can be put into each AP, and given the right system design, the APs can coordinate with each other without the need for centralized control.

Distributing control in this way is considerably more difficult than distributing data forwarding. Control plane functions include all the security features of the WLAN, like authentication and admission, multiple VLANs and intrusion detection (WIPS). Greg Taylor, wireless LAN services practice lead for the Professional Services Organization of BT in North America, says “The number one benefit [of a controller-based architecture] is security,” so a controller-less solution has to reassure customers that their vulnerability will not be increased. According to Dr. Amit Sinha, Chief Technology Officer at Motorola Enterprise Networking and Communications, other functions handled by controllers include “firewall, QoS, L2/L3 roaming, WIPS, AAA, site survivability, DHCP, dynamic RF management, firmware and configuration management, load balancing, statistics aggregation, etc.”
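As a deliberately simplified illustration of what distributing the control plane means, the sketch below shows access points replicating client session state directly to their peers, so a client can roam between APs without a controller holding the association table. It is a toy model assuming a simple full-mesh exchange between a handful of APs; it is not Aerohive’s actual protocol suite.

```python
# Toy model of a controller-less control plane: each AP pushes client session
# state to its peer APs, so a client can roam without a central controller
# holding the association table. Illustrative only; not Aerohive's protocol.
from dataclasses import dataclass, field

@dataclass
class ClientSession:
    mac: str
    vlan: int
    authenticated: bool

@dataclass
class AccessPoint:
    name: str
    peers: list = field(default_factory=list)     # other APs in the group
    sessions: dict = field(default_factory=dict)  # mac -> ClientSession

    def associate(self, session: ClientSession) -> None:
        """Client joins this AP; state is replicated to peers (control plane)."""
        self.sessions[session.mac] = session
        for peer in self.peers:
            peer.sessions[session.mac] = session  # no controller involved

    def roam_in(self, mac: str):
        """Fast roam: reuse the replicated session instead of re-authenticating."""
        return self.sessions.get(mac)

ap1, ap2 = AccessPoint("ap-lobby"), AccessPoint("ap-floor2")
ap1.peers, ap2.peers = [ap2], [ap1]
ap1.associate(ClientSession(mac="aa:bb:cc:dd:ee:ff", vlan=20, authenticated=True))
print(ap2.roam_in("aa:bb:cc:dd:ee:ff"))  # session already known to the peer AP
```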

You can download a comprehensive white paper describing Aerohive’s architecture here.

Motorola recently validated Aerohive’s vision, announcing a similar architecture, described here.

Here’s another perspective on this topic.

ITExpo West — Achieving HD Voice On Smartphones

I will be moderating a panel discussion at ITExpo West on Tuesday 5th October at 11:30 am in room 306B: “Achieving HD Voice On Smartphones.”

Here’s the session description:

The communications market has been evolving to fixed high definition voice services for some time now, and nearly every desktop phone manufacturer is including support for G.722 and other codecs now. Why? Because HD voice makes the entire communications experience a much better one than we are used to.

But what does it mean for the wireless industry? When will wireless communications become part of the HD revolution? How will handset vendors, network equipment providers, and service providers have to adapt their current technologies in order to deliver wireless HD voice? How will HD impact service delivery? What are the business models around mobile HD voice?

This session will answer these questions and more, discussing both the technology and business aspects of bringing HD into the mobile space.

The panelists are:

This is a deeply experienced panel; each of the panelists is a world-class expert in his field. We can expect a highly informative session, so come armed with your toughest questions.

ITExpo West — Building Better HD Video Conferencing & Collaboration Systems

I will be moderating a session at ITExpo West on Tuesday 5th October at 9:30 am: “Building Better HD Video Conferencing & Collaboration Systems,” which will be held in room 306A.

Here’s the session description:

Visual communications are becoming more and more commonplace. As networks improve to support video more effectively, the moment is right for broad market adoption of video conferencing and collaboration systems.

Delivering high quality video streams requires expertise in both networks and audio/video codec technology. Often, however, audio quality gets ignored, despite it being more important to efficient communication than the video component. Intelligibility is the key metric here, where wideband audio and voice quality enhancement algorithms can greatly improve the quality of experience.

This session will cover both audio and video aspects of today’s conferencing systems, and the various criteria that are used to evaluate them, including round-trip delay, lip-sync, smooth motion, bit-rate required, visual artifacts and network traversal – and of course pure audio quality. The emphasis will be on sharing best practices for building and deploying high-definition conferencing systems.

The panelists are:

  • James Awad, Marketing Product Manager, Octasic
  • Amir Zmora, VP Products and Marketing, RADVISION
  • Andy Singleton, Product Manager, MASERGY

These panelists cover the complete technology stack, from chips (Octasic) to equipment (RADVISION) to network services (MASERGY), so please bring your questions about any technical aspect of video conferencing systems.

ITExpo West — The State of VoIP Peering

I will be moderating a session at ITExpo West on Monday 4th October at 2:15 pm: “The State of VoIP Peering,” which will be held in room 304C.

Here’s the session description:

VoIP is a fact – it is here, and it is here to stay. That fact is undeniable. To date, the cost savings associated with VoIP have largely been enough to drive adoption. However, the true benefits of VoIP will only be realized through the continued growth of peering, which will keep calls on IP backbones rather than moving them onto the PSTN. Not only will increased peering continue to reduce costs, it will increase voice call quality – HD voice, for instance, can only be delivered on all-IP calls.

Of course, while there are benefits to peering, carriers have traditionally not taken kindly to losing their PSTN traffic, for which they are able to bill by the minute. But, as the adoption of IP communications continues to increase – and of course the debate continues over when we will witness the true obsolescence of the PSTN – carriers will have little choice but to engage in peering relationships.

This session will offer a market update on the status and growth of VoIP peering, as well as the trends and technologies that will drive it going forward, including wideband audio and video calling.

The panelists are:

This is shaping up to be a fascinating session. Rico can tell us about the hardware technologies that are enabling IP end-to-end for phone calls, and Mark and Grant will give us a real-world assessment of the state of deployment, the motivations of the early adopters, and the likely fate of the PSTN.

White Spaces Geolocation Database

For now, all White Spaces devices will use a geolocation database to avoid interfering with licensed spectrum users. The latest FCC Memorandum and Order on TV White Spaces says that it is still OK to have a device that uses spectrum sensing only (one that doesn’t consult a geolocation database for licensed spectrum users), but to get certified for sensing only, a device will have to satisfy the FCC’s Office of Engineering and Technology, then be approved by the Commissioners on a case-by-case basis.

So all the devices for the foreseeable future are going to use a geolocation database. But they will have spectrum-sensing capabilities too, in order to select the cleanest channel from the list of available channels provided by the database.

Fixed devices (access points) will normally have a wired Internet connection. Once a fixed device has figured out where it is, it can query the database over the Internet for a list of available channels. Then it can advertise itself on those channels.

Mobile devices (phones, laptops etc.) will normally have non-whitespace connections to the Internet too, for example Wi-Fi or cellular data. These devices can know where they are by GPS or some other location technology, and query the geolocation database over their non-whitespace connection. If a mobile device doesn’t have non-whitespace Internet connectivity, it can sit and wait until it senses a beacon from a fixed whitespace device, then query the geolocation database over the whitespace connection. There is a slight chance at this point that the mobile device is using a licensed frequency inside the licensee’s protected contour. This chance is mitigated because the contour includes a buffer zone, so a mobile device inside a protected contour should be beyond the range of any whitespace devices outside that contour. Any interference will also be very brief, since the device will switch to another channel as soon as it gets the response from the database.
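The database query itself is conceptually simple: the device sends its coordinates and device class, and gets back the channels that are free at that location. The sketch below shows the shape of such an exchange against a hypothetical HTTP endpoint; the URL and JSON fields are placeholders, since each database provider defined its own interface.

```python
# Sketch of a white-spaces device asking a geolocation database which TV
# channels it may use at its current position. The endpoint URL and JSON
# fields are hypothetical placeholders, not any provider's real API.
import json
import urllib.request

DATABASE_URL = "https://example-wsdb.example.com/available-channels"  # placeholder

def query_available_channels(lat: float, lon: float, device_type: str = "fixed"):
    request_body = json.dumps({
        "latitude": lat,
        "longitude": lon,
        "deviceType": device_type,   # fixed devices get different power limits
    }).encode()
    req = urllib.request.Request(
        DATABASE_URL,
        data=request_body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # Expected shape (assumed): {"channels": [21, 27, 34, ...]}
    return reply.get("channels", [])

# A device would then pick the cleanest of these channels by spectrum sensing.
# channels = query_available_channels(37.7749, -122.4194)
```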

Nine companies have proposed themselves as geolocation database providers. Here they are, linked to the proposals they filed with the FCC:

Here’s an example of what a protected contour looks like. Here’s an example database. Note that this database is not accurate yet.

Actually, a geolocation database is overkill for most cases. The bulk of the information is just a reformatting of data the FCC already publishes online; it’s only 37 megabytes compressed. It could be kept in the phone since it doesn’t change much; it is updated weekly.
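To illustrate why a locally cached copy could do the job, here is a sketch of an offline check: the device computes its distance to each protected transmitter in the cached list and rules out a channel if it falls inside that station’s contour plus a keep-out buffer. The contour records and buffer distance are made-up illustrative values, and real contours are irregular polygons rather than circles.

```python
# Offline check against a locally cached (and weekly refreshed) contour list.
# Contours are modeled as circles here for simplicity; the real FCC contours
# are irregular polygons, and all figures below are made up for illustration.
import math

EARTH_RADIUS_KM = 6371.0
BUFFER_KM = 14.4  # assumed keep-out margin added around each protected contour

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Cached records: channel, transmitter location, contour radius in km (illustrative).
CACHED_CONTOURS = [
    {"channel": 30, "lat": 37.75, "lon": -122.45, "radius_km": 80.0},
    {"channel": 41, "lat": 38.10, "lon": -121.90, "radius_km": 95.0},
]

def channel_is_free(channel: int, lat: float, lon: float) -> bool:
    for c in CACHED_CONTOURS:
        if c["channel"] != channel:
            continue
        if distance_km(lat, lon, c["lat"], c["lon"]) < c["radius_km"] + BUFFER_KM:
            return False  # inside (or too close to) a protected contour
    return True

print(channel_is_free(30, 37.77, -122.42))  # False: well inside the contour
print(channel_is_free(30, 40.00, -120.00))  # True: far from the protected area
```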

The proposed database will be useful for those rare events where so many wireless microphones are needed that they won’t fit into the spectrum reserved for microphones, though even in that case spectrum sensing would probably suffice. In other words, the geolocation database is a heavyweight solution to a lightweight problem.

Genuine Disruption from PicoChip

Clayton Christensen turned business thinking upside-down in 1997 with his book “The Innovator’s Dilemma,” in which he popularized the term “disruptive technology” in an analysis of the disk drive business. Since then abuse and over-use have rendered the term a meaningless cliché, but the idea behind it is still valid: well-run large companies that pay attention to their customers and make all the right decisions can be defeated in the market by upstarts that emerge from low-end niches with lower-cost, lower-performance products.

PicoChip is following Christensen’s script faithfully. First it made a low-cost consumer-oriented chip that performed many of the functions of a cellular base station. Now it has added some additional base station functions to address the infrastructure market.

Traditional infrastructure makers now face the prospect of residential device economics moving up to the macrocell.
From Rethink Wireless