Dumb mobile pipes

An interesting story from Bloomberg says that Ericsson is contemplating owning wireless network infrastructure. Ericsson is already one of the top five mobile network operators worldwide, but it doesn’t own any of the networks it manages – it is simply a supplier of outsourced network management services.

The idea here is that Ericsson would own and manage its own network, and wholesale the services on it to MVNOs. If this plan goes through, and if Ericsson sticks to the wholesale model rather than trying to deliver services directly to consumers, it will be huge for wireless network neutrality. It would be a truly disruptive development, in that it could lower barriers to entry for mobile service providers and open up the wireless market to innovation at the service level.

[update] Upon reflection, I think this interpretation of Ericsson’s intent is over-enthusiastic. The problem is spectrum. Ericsson can’t market this to MVNOs without spectrum. So a more likely interpretation of Ericsson’s proposal is that it will pay for infrastructure, then sell capacity and network management services to spectrum-owning mobile network operators. Not a dumb pipes play at all. It is extremely unlikely that Ericsson will buy spectrum for this, though there are precedents for equipment manufacturers buying spectrum – Qualcomm and Intel have both done so.

[update 2] With the advent of white spaces, Ericsson would not need to own spectrum to offer a wholesale service from its wireless infrastructure. The incremental cost of provisioning white spaces on a cellular base station would be relatively modest.

QoS meters on Voxygen

The term “QoS” is used ambiguously. There are two main senses. The first is QoS Provisioning: “the capability of a network to provide better service to selected network traffic,” which means packet prioritization of one kind or another. The second is more literal: Quality of Service, the degree of perfection of a user’s audio experience in the face of potential impairments to network performance. These impairments fall into four categories: availability, packet loss, packet delay and tampering. Since this second sense normally comes up in the context of measuring it, we could call it QoS Metrics, as opposed to QoS Provisioning. I would put issues like choice of codec and echo into the larger category of Quality of Experience, which covers all possible impairments to the audio experience, not just those imposed by the network.
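
To make the QoS Metrics sense concrete, here is a minimal sketch that turns two measured impairments (one-way delay and packet loss) into an estimated MOS score, using the simplified ITU-T G.107 E-model. The codec constants are illustrative values for G.711 with packet loss concealment, and the sketch ignores jitter buffers and burst loss; it illustrates the idea rather than describing any particular measurement tool.

```python
# Rough QoS-metrics sketch: estimate a MOS score from measured one-way delay
# and packet loss using the simplified ITU-T G.107 E-model. Constants for the
# codec impairment (ie, bpl) are illustrative G.711-with-PLC values.

def delay_impairment(one_way_delay_ms: float) -> float:
    """Id term: the penalty grows sharply once one-way delay passes ~177 ms."""
    penalty = 0.024 * one_way_delay_ms
    if one_way_delay_ms > 177.3:
        penalty += 0.11 * (one_way_delay_ms - 177.3)
    return penalty

def loss_impairment(loss_pct: float, ie: float = 0.0, bpl: float = 25.1) -> float:
    """Ie_eff term: codec impairment inflated by random packet loss."""
    return ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)

def r_to_mos(r: float) -> float:
    """Map an R-factor onto the 1.0..4.5 MOS scale."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

def estimate_mos(one_way_delay_ms: float, loss_pct: float) -> float:
    r = 93.2 - delay_impairment(one_way_delay_ms) - loss_impairment(loss_pct)
    return round(r_to_mos(r), 2)

print(estimate_mos(one_way_delay_ms=20, loss_pct=0.0))   # clean connection: ~4.4
print(estimate_mos(one_way_delay_ms=150, loss_pct=1.0))  # loaded WAN link: ~4.2
```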

By “tampering” I mean any intentional changes to the media payload of a packet, and I am OK with the negative connotations of the term since I favor the “dumb pipes” view of the Internet. On phone calls the vast bulk of such tampering is transcoding: changing the media format from one codec to another. Transcoding always reduces the fidelity of the sound, even when transcoding to a “better” codec.

Networks vary greatly in the QoS they deliver. One of the major benefits of buying VoIP service from your ISP (Internet Service Provider) is that your ISP has complete control over QoS. But a growing number of ITSPs (Internet Telephony Service Providers), Skype for example, contend that the open Internet provides adequate QoS for business-grade telephone service.

But it’s nice to be sure. So I have added a “QoS Metrics” category in the list to the right of this post. You can use the tools there to check your connection. I particularly like the one from Voxygen, which frames the test results in terms of the number of simultaneous voice sessions that your WAN connection can comfortably handle. Here’s an example of a test of ten channels:

Screen shot of Voxygen VoIP performance metrics tool
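
Here is a back-of-the-envelope version of that channel-count idea, assuming G.711 with 20 ms packets, typical RTP/UDP/IPv4 plus Ethernet framing overhead, and an arbitrary 75% headroom factor. This is not Voxygen’s methodology, just a sketch of the arithmetic behind “how many calls fit comfortably on this link.”

```python
# How many simultaneous G.711 calls fit on a WAN uplink? The framing overhead
# and the 75% "comfort" headroom are assumptions for illustration only.

CODEC_RATE_BPS = 64_000           # G.711 payload bit rate
PACKET_INTERVAL_S = 0.020         # 20 ms packetization -> 50 packets per second
HEADER_BYTES = 12 + 8 + 20 + 18   # RTP + UDP + IPv4 + Ethernet framing

def per_call_bps() -> float:
    payload_bytes = CODEC_RATE_BPS / 8 * PACKET_INTERVAL_S   # 160 bytes per packet
    packets_per_second = 1 / PACKET_INTERVAL_S
    return (payload_bytes + HEADER_BYTES) * 8 * packets_per_second

def comfortable_channels(uplink_bps: float, headroom: float = 0.75) -> int:
    """Leave a fraction of the link free for everything else."""
    return int(uplink_bps * headroom // per_call_bps())

print(per_call_bps() / 1000)            # ~87 kbps per call on the wire
print(comfortable_channels(2_000_000))  # calls that fit on a 2 Mbps uplink: 17
```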

Third Generation WLAN Architectures

Aerohive claims that its system is the first example of a third-generation Wireless LAN architecture.

  • The first generation was the autonomous access point.
  • The second generation was the wireless switch, or controller-based WLAN architecture.
  • The third generation is a controller-less architecture.

The move from the first generation to the second was driven by enterprise networking needs. Enterprises need greater control and manageability than smaller deployments do. First-generation autonomous access points didn’t have the processing power to handle the demands of greater network control, so a separate category of device was a natural solution: in the second-generation architecture, “thin” access points did the real-time work and delegated the less time-sensitive processing to powerful central controllers.

Now the technology transition to 802.11n enables higher capacity wireless networks with better coverage. This allows enterprises to expand the role of wireless in their networks, from convenience to an alternative access layer. This in turn further increases the capacity, performance and reliability demands on the WLAN.

Aerohive believes this generational change in technology and market requires a corresponding generational change in system architecture. A fundamental technology driver of 802.11n, the ever-increasing processing bang for the buck delivered by Moore’s law, also provides enough low-cost processing power to move the control functions from central controllers back into the access points. Aerohive aspires to lead the enterprise Wi-Fi market into this new architecture generation.

Superficially, getting rid of the controller looks like a return to the first generation architecture. But an architecture with all the benefits of a controller-based WLAN, only without a controller, requires a sophisticated suite of protocols by which the smart access points can coordinate with each other. Aerohive claims to have developed such a protocol suite.

The original controller-based architectures ran all network traffic through the controller: the management plane, the control plane and the data plane. The bulk of network traffic is on the data plane, so bottlenecks there do more damage than on the other planes. Modern controller-based architectures therefore have “hybrid” access points that handle the data plane, leaving only the control and management planes to the controller (Aerohive’s architect, Devin Akin, says, “distributed data forwarding at Layer-2 isn’t news, as every other vendor can do this.”) Aerohive’s third-generation architecture takes the next step and distributes control-plane handling as well, leaving only the management function centralized, and that’s just software on a generic server.
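
The generational shift is easier to see laid out plane by plane. The little table below simply restates the placement described in the paragraph above; it is a reading aid, not vendor documentation.

```python
# Which device handles each plane, by WLAN architecture generation,
# as summarized from the discussion above.

PLANE_PLACEMENT = {
    # generation                          (data plane,   control plane,         management plane)
    "1st gen: autonomous APs":            ("AP",          "AP",                  "per-AP"),
    "2nd gen: controller + thin APs":     ("controller",  "controller",          "controller"),
    "2nd gen: hybrid APs":                ("AP",          "controller",          "controller"),
    "3rd gen: controller-less (Aerohive)":("AP",          "APs, cooperatively",  "software on a generic server"),
}

for generation, (data, control, management) in PLANE_PLACEMENT.items():
    print(f"{generation:38} data: {data:11} control: {control:20} mgmt: {management}")
```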

Aerohive contends that controller-based architectures are expensive, poorly scalable, unreliable, hard to deploy and not needed. A controller-based architecture is more expensive than a controller-less one, because controllers aren’t free (Aerohive charges the same for its APs as other vendors do for their thin ones: under $700 for a 2×2 MIMO dual-band 802.11n device). It is not scalable because the controller constitutes a bottleneck. It is not reliable because a controller is a single point of failure, and it is not needed because processing power is now so cheap that all the functions of the controller can be put into each AP, and given the right system design, the APs can coordinate with each other without the need for centralized control.

Distributing control in this way is considerably more difficult than distributing data forwarding. Control-plane functions include all the security features of the WLAN, like authentication and admission, multiple VLANs and intrusion detection (WIPS). Greg Taylor, wireless LAN services practice lead for the Professional Services Organization of BT in North America, says “The number one benefit [of a controller-based architecture] is security,” so a controller-less solution has to reassure customers that their vulnerability will not be increased. According to Dr. Amit Sinha, Chief Technology Officer at Motorola Enterprise Networking and Communications, other functions handled by controllers include “firewall, QoS, L2/L3 roaming, WIPS, AAA, site survivability, DHCP, dynamic RF management, firmware and configuration management, load balancing, statistics aggregation, etc.”

You can download a comprehensive white paper describing Aerohive’s architecture here.

Motorola recently validated Aerohive’s vision, announcing a similar architecture, described here.

Here’s another perspective on this topic.

ITExpo West — Achieving HD Voice On Smartphones

I will be moderating a panel discussion at ITExpo West on Tuesday 5th October at 11:30 am in room 306B: “Achieving HD Voice On Smartphones.”

Here’s the session description:

The communications market has been evolving toward fixed high-definition voice services for some time, and nearly every desktop phone manufacturer now includes support for G.722 and other wideband codecs. Why? Because HD voice makes the entire communications experience much better than the one we are used to.

But what does it mean for the wireless industry? When will wireless communications become part of the HD revolution? How will handset vendors, network equipment providers, and service providers have to adapt their current technologies in order to deliver wireless HD voice? How will HD impact service delivery? What are the business models around mobile HD voice?

This session will answer these questions and more, discussing both the technology and business aspects of bringing HD into the mobile space.

The panelists are:

This is a deeply experienced panel; each of the panelists is a world-class expert in his field. We can expect a highly informative session, so come armed with your toughest questions.

ITExpo West — Building Better HD Video Conferencing & Collaboration Systems

I will be moderating a session at ITExpo West on Tuesday 5th October at 9:30 am: “Building Better HD Video Conferencing & Collaboration Systems,” which will be held in room 306A.

Here’s the session description:

Visual communications are becoming more and more commonplace. As networks improve to support video more effectively, the moment is right for broad market adoption of video conferencing and collaboration systems.

Delivering high quality video streams requires expertise in both networks and audio/video codec technology. Often, however, audio quality gets ignored, even though it matters more to effective communication than the video component does. Intelligibility is the key metric here, and wideband audio and voice quality enhancement algorithms can greatly improve the quality of experience.

This session will cover both audio and video aspects of today’s conferencing systems, and the various criteria that are used to evaluate them, including round-trip delay, lip-sync, smooth motion, bit-rate required, visual artifacts and network traversal – and of course pure audio quality. The emphasis will be on sharing best practices for building and deploying high-definition conferencing systems.

The panelists are:

  • James Awad, Marketing Product Manager, Octasic
  • Amir Zmora, VP Products and Marketing, RADVISION
  • Andy Singleton, Product Manager, MASERGY

These panelists cover the complete technology stack, from chips (Octasic) to equipment (RADVISION) to network services (MASERGY), so please bring your questions about any technical aspect of video conferencing systems.

ITExpo West — The State of VoIP Peering

I will be moderating a session at ITExpo West on Monday 4th October at 2:15 pm: “The State of VoIP Peering,” which will be held in room 304C.

Here’s the session description:

VoIP is a fact – it is here, and it is here to stay. That fact is undeniable. To date, the cost savings associated with VoIP have largely been enough to drive adoption. However, the true benefits of VoIP will only be realized through the continued growth of peering, which will keep calls on IP backbones rather than moving them onto the PSTN. Not only will increased peering continue to reduce costs, it will increase voice call quality – HD voice, for instance, can only be delivered on all-IP calls.

Of course, while there are benefits to peering, carriers have traditionally not taken kindly to losing their PSTN traffic, for which they are able to bill by the minute. But, as the adoption of IP communications continues to increase – and as the debate continues over when we will witness the true obsolescence of the PSTN – carriers will have little choice but to engage in peering relationships.

This session will offer a market update on the status of VoIP peering and its growth, as well as the trends and technologies that will drive its adoption going forward, including wideband audio and video calling.

The panelists are:

This is shaping up to be a fascinating session. Rico can tell us about the hardware technologies that are enabling IP end-to-end for phone calls, and Mark and Grant will give us a real-world assessment of the state of deployment, the motivations of the early adopters, and the likely fate of the PSTN.

White Spaces Geolocation Database

For now, all White Spaces devices will use a geolocation database to avoid interfering with licensed spectrum users. The latest FCC Memorandum Opinion and Order on TV White Spaces says that it is still OK to have a device that uses spectrum sensing only (one that doesn’t consult a geolocation database of licensed spectrum users), but to get certified for sensing-only operation, a device will have to satisfy the FCC’s Office of Engineering and Technology, then be approved by the Commissioners on a case-by-case basis.

So all the devices for the foreseeable future are going to use a geolocation database. But they will have spectrum-sensing capabilities too, in order to select the cleanest channel from the list of available channels provided by the database.

Fixed devices (access points) will normally have a wired Internet connection. Once a fixed device has figured out where it is, it can query the database over the Internet for a list of available channels. Then it can advertise itself on those channels.

Mobile devices (phones, laptops and so on) will normally have non-whitespace connections to the Internet too, for example Wi-Fi or cellular data. These devices can determine where they are by GPS or some other location technology, and query the geolocation database over their non-whitespace connection. If a mobile device doesn’t have non-whitespace Internet connectivity, it can wait until it senses a beacon from a fixed whitespace device, then query the geolocation database over the whitespace connection. There is a slight chance at this point that the mobile device is transmitting on a licensed frequency inside the licensee’s protected contour. This chance is mitigated because the contour includes a buffer zone, so a mobile device inside a protected contour should be beyond the range of any whitespace devices outside that contour. Any interference will also be very brief, since as soon as the device gets the response from the database it will switch to a permitted channel.
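
Putting the last three paragraphs together, the device-side logic might look something like the sketch below. The database URL, query parameters and response fields are hypothetical placeholders (each approved database provider will define its own interface), and the sensing step simply picks the quietest of the channels the database allows.

```python
# Hypothetical sketch of a white-spaces device consulting a geolocation
# database, then using local spectrum sensing to pick the cleanest channel.

import json
import urllib.parse
import urllib.request

WSDB_URL = "https://whitespace-db.example/query"   # hypothetical endpoint

def available_channels(lat: float, lon: float, device_type: str = "mobile") -> list[int]:
    """Ask the database which TV channels are free at this location."""
    query = urllib.parse.urlencode({"lat": lat, "lon": lon, "type": device_type})
    with urllib.request.urlopen(f"{WSDB_URL}?{query}") as response:
        return json.load(response)["channels"]

def pick_channel(lat: float, lon: float, noise_floor_dbm: dict[int, float]) -> int:
    """Of the channels the database allows, use the one that sensing says is
    quietest (unmeasured channels are treated as noisy and avoided)."""
    allowed = available_channels(lat, lon)
    return min(allowed, key=lambda ch: noise_floor_dbm.get(ch, 0.0))
```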

Nine companies have proposed themselves as geolocation database providers. Here they are, linked to the proposals they filed with the FCC:

Here’s an example of what a protected contour looks like. Here’s an example database. Note that this database is not accurate yet.

Actually, a geolocation database is overkill for most cases. The bulk of the information is just a reformatting of data the FCC already publishes online, and it amounts to only 37 megabytes compressed. It could be cached on the phone, since it changes little; the data is updated weekly.
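
Here is a sketch of that keep-it-on-the-phone alternative: cache the compressed dataset locally and refresh it once it is more than a week old. The file name and download URL are hypothetical.

```python
# Cache the (roughly 37 MB compressed) channel-availability data on the
# device and refresh it weekly. Paths and URL are hypothetical placeholders.

import os
import time
import urllib.request

CACHE_PATH = "whitespace_channels.gz"                     # hypothetical local file
DB_URL = "https://fcc-data-mirror.example/channels.gz"    # hypothetical mirror
ONE_WEEK_S = 7 * 24 * 3600

def ensure_fresh_copy() -> str:
    """Return a local copy of the dataset, downloading it only if the cached
    file is missing or more than a week old."""
    try:
        if time.time() - os.path.getmtime(CACHE_PATH) < ONE_WEEK_S:
            return CACHE_PATH
    except OSError:
        pass  # no cached copy yet
    urllib.request.urlretrieve(DB_URL, CACHE_PATH)
    return CACHE_PATH
```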

The proposed database will be useful for those rare events where more wireless microphones are needed than will fit into the spectrum reserved for them, though even in that case spectrum sensing would probably suffice. In other words, the geolocation database is a heavyweight solution to a lightweight problem.

Genuine Disruption from PicoChip

Clayton Christensen turned business thinking upside-down in 1997 with his book “The Innovator’s Dilemma,” in which he popularized the term “disruptive technology” through an analysis of the disk drive business. Since then, abuse and overuse have rendered the term a meaningless cliché, but the idea behind it is still valid: well-run large companies that pay attention to their customers and make all the right decisions can be defeated in the market by upstarts that emerge from low-end niches with lower-cost, lower-performance products.

PicoChip is following Christensen’s script faithfully. First it made a low-cost consumer-oriented chip that performed many of the functions of a cellular base station. Now it has added in some additional base station functions to address the infrastructure market.

Traditional infrastructure makers now face the prospect of residential device economics moving up to the macrocell.
From Rethink Wireless

FCC to address White Spaces at September 23rd Meeting

The agenda for the September 23rd FCC Commission Meeting lists:

TV White Spaces Second MO&O: A Second Memorandum Opinion and Order that will create opportunities for investment and innovation in advanced Wi-Fi technologies and a variety of broadband services by finalizing provisions for unlicensed wireless devices to operate in unused parts of TV spectrum.

Early discussion of White Spaces proposed that client devices would be responsible for finding vacant spectrum to use. This “spectrum sensing” or “detect and avoid” technology was sufficiently controversial that a consensus grew to supplement it with a geolocation database, where the client devices determine their location using GPS or other technologies, then consult a coverage database showing which frequencies are available at that location.

Among the Internet companies, this consensus now appears to have evolved further: eliminate the spectrum-sensing requirement for geolocation-enabled devices.

The Register says that this is because spectrum sensing doesn’t work.

The Associated Press predicts that the Order will go along with the Internet companies, and ditch the spectrum sensing requirement.

Some of the technology companies behind the white spaces effort are fighting a rearguard action, saying there are good reasons to retain spectrum sensing as an alternative to geolocation. The broadcasting industry (represented by the NAB and MSTV) wants to require both. It will be interesting to see whether the FCC leaves any spectrum-sensing provisions in the Order.

Intel Infineon: history repeats itself

Unlike its perplexing McAfee move, Intel’s acquisition of Infineon’s wireless business was one it had to make. Intel must make Atom succeed. The smartphone market is growing fast, and the media tablet market is in the starting blocks. Chips in these devices are increasingly systems-on-chip, combining multiple functions. To sell application processors into phones, you must have a baseband story. Infineon’s RF expertise is a further benefit.

As Linley Gwennap said when he predicted the acquisition a month ago, the fit is natural: Intel needs 3G and LTE basebands, and Infineon has no application processor.

Linley also pointed out Intel’s abysmal track record for acquisitions.

Intel has seen this movie before, and for the same strategic reasons. It acquired DSP Communications in 1999 for $1.6 Bn. The idea then was to enter the cellphone chip market with DSP’s baseband plus XScale, the ARM processor line Intel built on the StrongARM technology it got from DEC. It was great in theory, and XScale got solid design wins in the early smartphones, but Intel neglected XScale, letting its performance lead over other ARM implementations dwindle, and its only significant baseband customer was RIM.

In 2005, Paul Otellini became CEO; at that time AMD was beginning to make worrying inroads into Intel’s market share. Otellini regrouped, focusing on Intel’s core business, which he saw as “Intel Architecture” chips. But XScale runs an instruction set architecture that competes with IA, namely ARM. So rather than continuing to invest in its own competition, Intel sold off its flagging cellphone chip business (baseband and XScale) to Marvell for $0.6 Bn, and set out to create an IA chip that could compete with ARM in size, power consumption and price. Hence Atom.

But does instruction set architecture matter that much any more? Intel’s pitch on Atom-based netbooks was that you could have “the whole Internet” on them, including the parts that run only on IA chips. But now there are no such parts. Everything relevant on the Internet works fine on ARM-based systems like Android phones. iPhones are doing great even without Adobe Flash.

So from Intel’s point of view, this decade-later redo of its entry into the cellphone chip business is different. It is doing it right, with a coherent corporate strategy. But from the point of view of the customers (the phone OEMs and carriers) it may not look so different. They will judge Intel’s offerings on price, performance, power efficiency, wireless quality and how easy Intel makes it to design-in the new chips. The same criteria as last time.

Rethink Wireless has some interesting insights on this topic…