Network neutrality: free market vs. regulation

Network neutrality is a contentious topic, particularly because billions of dollars of revenue hinge on the way it pans out. Because of these high stakes, partisans are motivated to use all the tools of rhetoric to argue for their positions. One way the debate is commonly misrepresented is as a choice between regulation and the free market. It is actually a debate between more and less regulation. A free market is one where competition leads to abundant choice, and where consumers have the option to select one service over another. In a monopoly there is no choice, and consumers must take or leave what they are offered. Internet access is not a monopoly, but it is close, because of the limited number of suppliers in any particular location. The supposed purpose of network neutrality regulation is to preserve the current vibrant free market in Internet services (services provided via the Internet, as opposed to Internet access service).

There are several arguments against network neutrality. One is that it is impossible to enforce: the Internet access providers will subtly strangle third-party services through intentional incompetence if the regulators try to force them to stay open. This seems to be one of the ways that the CLECs were held off until the demise of UNE-P. A second is that there is no need for regulation because there is no threat to third-party services. This argument points out that the commercial Internet has been going strong for over a decade, so why fiddle with it when it is working so well? The rebuttal is that Ed Whitacre, CEO of AT&T, has famously said that AT&T intends to charge providers of services over the network, like Google, twice: once for their own Internet access, as now, and a second time for access to AT&T’s customers, on top of what those customers already pay for their own Internet access. If the access providers succeed in this, it will become much harder for small companies to offer new services on the Internet.

Another argument against network neutrality regulation is that network neutrality is good, that we have it now, and that any Internet access provider that tries to mess with it will get savaged by customer opinion and lose customers to competitors who keep their networks open. This argument assumes that there is a free market in Internet access, and that a move by the telcos to double-charge service providers would not be immediately echoed by the MSOs (the cable operators). Or, in a slightly modified version, it assumes that if the telcos and MSOs jump on this together, public outcry will force immediate regulatory reversal, so why try to fix something that ain’t broke yet?

Putting on the other hat: the access network operators have poured, and continue to pour, billions of dollars a year into upgrading their networks. They certainly deserve a fair return on their investments. The problem is that competition is so vigorous that neither the telcos nor the MSOs can afford to raise their rates to pay for all this build-out. But service providers like Google have cash coming out of their ears, and they would happily share some of it in exchange for guaranteed priority for their content on congested networks.

OK, let’s take that hat off again. While Google does indeed have money coming out of its ears, and probably could afford to send some of it the way of the access providers, what about service providers with no spare cash, in other words, startups? The Internet has been a hothouse of innovation because of its spectacularly low barriers to entry. It costs virtually nothing to set up a new Internet service. Loss of network neutrality would shut down this innovation, because the telcos and MSOs would get to decide which of these services survived. Their track record at this stinks. If they had an inkling how to do it, there would be no Google, YouTube, MySpace or eBay, because the telcos would have put services like these into place 20 years ago. But they didn’t. Large companies are structurally incapable of this kind of radical innovation, while a vibrant free market does it naturally. This is because evolution by natural selection is a far more potent force than executive edict, and “let a thousand flowers bloom” is a vastly more fertile approach than “let’s not do anything risky.” Looked at this way, the telcos’ presentation of tiered service as “free market” rings hollow.

Wi-Fi Interference Experiments

Interesting new series of white papers on Wi-Fi interference from Craig Mathias of the Farpoint Group. He set up a couple of clients and attempted various activities (file transfer, VoIP, video streaming) in the presence of interference from various sources (microwave oven, cordless phone, DECT phone, another AP, a Bluetooth headset) and characterized the impairments. His conclusions were that some interference sources can completely shut down some uses (almost all of them shut down video), but that interference can be managed and does not present a long-term stopper to Wi-Fi.

Missing from the tests was 802.11n. This should make a huge difference, for several reasons. First, its MIMO operation is intrinsically more resistant to interference. Second, 11n operates both in the 2.4 GHz band (like 11b/g) and in the 5 GHz band (like 11a). The 5 GHz band is immune to microwave oven interference, and to most cordless phone interference. Its disadvantage of shorter range is mitigated by the multipath gain of MIMO.

DiVitas and enterprise-controlled FMC

Still at VoiceCon, there was a great presentation in an FMC panel by Vivek Khuller of DiVitas Networks. DiVitas has just released a product that rebuts the idea that cellular/Wi-Fi roaming requires participation by a Mobile Network Operator. Cellular companies get the vast bulk of their revenues from consumers (though business users are more profitable) and have not been motivated to tailor their services to businesses. This offering from DiVitas, and MVNO efforts like Sotto, address a gaping need in the market.

Vivek prefaced his presentation by enumerating three big-picture observations.

First, the current crop of FMC devices is actually the second generation – we just didn’t recognize the first generation, which is mobile computers. They are converged devices because they have multiple network connections: POTS (built-in modem), Ethernet, Wi-Fi and possibly a WWAN card from Verizon or Sprint or whoever. Of course the truth of this observation depends on your definition of FMC, but it illuminates the roots of DiVitas’s strategy.

Second, when you start a job at a new company, they give you two things: a phone and a computer. They get both of them through the same type of channel, which is not a service provider.

Third, Skype came out of the blue from the service provider perspective; none of its founders had any voice service provider experience, yet Skype is now the biggest voice service provider in the world by subscriber count.

He then launched into his presentation, pointing out that cellular penetration in the consumer space is about 75%, while in the enterprise space it’s only 20%, even though 75% of workers consider mobility “critical” or “important,” and the chances of finding a person at their desk are less than 30%.

He gave the reasons for the lack of penetration of cell phones in the enterprise as cost, control and complexity. On the cost front, he showed us his corporate phone bills – $14K for mobile and $1.4K for wireline. He acknowledged that a lot of his cellular use was lab testing, but felt that the point was still valid – cellular service is actually roughly 10x as expensive as wireline.

On the control front he pointed out that 80% of corporations use a PBX rather than Centrex, and he felt the primary reason was control issues.

On complexity, he pointed out that current solutions require multiple devices and servers – he might easily have added that they also require multiple MNO relationships for companies with international presence.

He went on to identify three major forces of change in the enterprise mobility market that are related to these barriers: first, the massive uptake of Wi-Fi, which reduces costs and increases control; second, the advent of SIP, which, together with the availability of a lot of high-quality open source code, lowers barriers to entry by leveraging engineering resources and increasing control; third, the increasing potency of cell phones, with more processing power and connectivity, which he sees as reducing the complexity and cost of the overall solution.

He ended by claiming a six-month ROI for his enterprise-based FMC solution, which may be hyperbolic, but would remain impressive even if it were far longer.
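
To see why a six-month figure is at least plausible, here is a minimal payback-period sketch. All the figures are hypothetical (loosely inspired by the $14K monthly cellular bill above), not DiVitas’s numbers:

    #include <stdio.h>

    /* Hypothetical payback-period arithmetic -- illustrative figures only. */
    int main(void) {
        double deployment_cost = 60000.0; /* servers, dual-mode handsets, integration */
        double cellular_bill   = 14000.0; /* monthly cellular spend, cf. the bill above */
        double offload_factor  = 0.7;     /* fraction of minutes moved onto Wi-Fi/PBX */

        double monthly_savings = cellular_bill * offload_factor;
        double payback_months  = deployment_cost / monthly_savings;

        printf("Monthly savings: $%.0f\n", monthly_savings);
        printf("Payback period:  %.1f months\n", payback_months);
        return 0;
    }

With these invented inputs the payback comes out at around six months; the real answer obviously depends entirely on the deployment cost and on how many cellular minutes actually move onto Wi-Fi.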

Charlie Giancarlo on telepresence at VoiceCon

This morning’s keynote speech at VoiceCon by Cisco’s Charlie Giancarlo was polished and entertaining. What jumped out for me was his description of telepresence. He had just demoed video phone calls, then went on to telepresence. My immediate thought was, “yes, a video phone call with a bigger screen.” But Charlie must have met this reaction before, because he started to stress the radical nature of telepresence. As you know, telepresence is the idea of putting a bunch of big-screen LCDs around a conference table so it looks as though people are sitting there. HP and DreamWorks have had a system called Halo for a couple of years, but it’s hugely expensive.

Charlie’s point about the novelty of telepresence is that you have to experience it to understand it. He said that after a few minutes of a meeting, you forget that the person isn’t really there; subjectively, the interaction is face-to-face.

The second surprise from Charlie was that the Cisco version of telepresence has a total cost of around $10,000 per month per telepresence room. This seems to be a lot lower than the cost of Halo.

I can believe that you have to experience it to understand it, because of TiVo. TV viewing is a completely different (and much better) experience with TiVo. TiVo owners are all evangelists. They tell their TiVo-less friends that they will love it if they just try it. The friends believe it, but they don’t bother to get a TiVo. Then, when they do, their reaction is “why didn’t you tell me?” and they become ignored evangelists, too. But I still don’t have a Slingbox.

FMS and FMC

Detractors of Fixed Mobile Convergence can’t see the value in being able to walk into your office talking on a cell phone, then pick up your desk phone and seamlessly continue the call. They are right; that’s a small hook on which to hang a massive reworking of the corporate voice network. But that’s not Fixed Mobile Convergence, so it’s a straw-man argument. In Fixed-Mobile Convergence, you walk into your office talking on a cell phone, sit down, and continue the conversation on the same cell phone. The value driver is that you no longer need a desk phone – that’s a big saving.

So now you are wondering where the “convergence” is in this scenario. Isn’t this just FMS – Fixed Mobile Substitution? Yes and no. If the wireless connection remains cellular, it is FMS. But the second big value driver comes in if the call seamlessly transitions to Wi-Fi or Bluetooth, and stops using cellular minutes. This is FMC. The line between FMC and FMS is quite blurred. FMS has the disadvantage that signal coverage can be weak in some buildings, and if everybody in a densely populated office goes the FMS route, the cell won’t have the capacity to serve them all. But this problem is addressed by an interesting new product category called the femtocell, which is just like a Wi-Fi access point except that it uses cellular frequencies and protocols. So if you deploy femtocells in your office you are going the FMS route; if you use the Wi-Fi network instead, you are doing FMC. The FMC scenario therefore requires the phones to be dual mode (cellular plus Wi-Fi). FMS has the advantage that the phones can be cheaper, since they can leave out the Wi-Fi radio.
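
To make the FMS/FMC distinction concrete, here is a minimal sketch of the bearer-selection decision a dual-mode phone might make. The structure, names and threshold are invented for illustration; real FMC clients weigh many more factors (network load, QoS, authentication state):

    #include <stdio.h>

    /* Hypothetical dual-mode handover logic -- invented for illustration. */
    typedef enum { BEARER_CELLULAR, BEARER_WIFI } bearer_t;

    typedef struct {
        int wifi_available;  /* 1 if an authorized corporate SSID is in range */
        int wifi_rssi_dbm;   /* signal strength of that Wi-Fi network */
    } radio_state_t;

    /* Prefer Wi-Fi (free PBX minutes) whenever it is usable. */
    bearer_t select_bearer(const radio_state_t *s) {
        const int WIFI_THRESHOLD_DBM = -70;  /* invented handover threshold */
        if (s->wifi_available && s->wifi_rssi_dbm > WIFI_THRESHOLD_DBM)
            return BEARER_WIFI;      /* FMC: call becomes VoIP on the PBX */
        return BEARER_CELLULAR;      /* FMS: stay on cellular (or femtocell) */
    }

    int main(void) {
        radio_state_t at_desk = { 1, -55 };  /* strong office Wi-Fi */
        printf("Bearer: %s\n",
               select_bearer(&at_desk) == BEARER_WIFI ? "Wi-Fi" : "cellular");
        return 0;
    }

In the femtocell (FMS) case no such decision is needed on the phone: the handset stays on its cellular radio, and the femtocell simply gives it a stronger, local cell.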

Another issue is who manages the cell phone. Voice service providers have for years been pushing a service called Centrex, which obviates the need for a PBX on the company premises. But most businesses have resisted: they prefer to control their communications infrastructure themselves. The same objection applies to FMS, but not necessarily to FMC. With the enterprise-oriented (i.e. non-UMA, non-IMS) flavors of FMC, once the call is on the Wi-Fi network it is just a regular VoIP call on the corporate PBX. This means that it is billed at non-cellular rates (free for internal calls), and it can offer all the regular PBX features.

From a market-segment point of view, the BlackBerry is the closest thing to an enterprise cell phone that currently exists. But it doesn’t (yet) offer any PBX call features, nor does it have Wi-Fi, and it is sold through and controlled by the mobile network operators, so it also fails the test of control by the IT department.

Nokia, recognizing these issues, sells its Eseries phones not only through mobile operators but also through interconnects, the same distribution channel as PBXs. Nokia has also endowed the Eseries phones with enterprise-grade manageability (though via the carrier-oriented OMA-DM rather than the enterprise-oriented WBEM). Nokia’s Eseries strategy still lacks PBX features on the phone, but the Eseries phones run the Symbian S60 operating system, for which there is a vibrant developer community, so if there isn’t yet a third-party PBX-style client, there soon will be.

Another cell phone project taking this approach is OpenMoko. This goes even further than Nokia’s Eseries, since all the software, including the operating system (Linux), is open source.

The hidden telecom money-drain

Corporations have trimmed their telecommunications expenses down to the bone, but there is still a huge telephone-related money leak at most companies. This is the cell phone bill that so many employees simply expense, so it isn’t a controlled part of the IT department budget.

Businesses are finding it increasingly urgent to manage their cell phones the way they manage their network and computer equipment. One company seeking to help with this is IntegratedMobile.

But bringing cell phones under the corporate manageability umbrella is just the first step in integrating them into the IT strategy. The next step is to treat them the same way as regular phones for their voice capabilities, and as laptops for their data capabilities.

Treating cell phones the same way we treat laptops would call for a standard managed corporate software build with a common image worldwide. Treating them the same way we treat corporate desk phones would call for them to have all the standard PBX phone features, and to be administered on the PBX the same way desk phones are.

The trouble with this vision is that there is no worldwide mobile network operator (MNO), and in the US phones are normally bought through the MNO. So the phone doesn’t yet exist that can be bought centrally, loaded with a common manageable image, and deployed worldwide.

FMS in the enterprise

Value-conscious consumers are increasingly wondering why they need to pay for two phones, and deciding to save by ditching their wireline phone. This is technically termed FMS – Fixed Mobile Substitution. Will the same thing happen in the business phone world? There’s a precedent for it: desktop computers are being driven out of offices by mobile PCs. Could desktop phones be displaced by mobile ones in the same way? It has been reported that more than half of calls made from businesses are cellular, so the substitution in usage is already well under way.

There are several objections. The sound quality of cellular conversations is abysmal. The cost per minute is much higher. Business desk phones have all sorts of features that cell phones lack. The form factor of cell phones is inconvenient in some ways – you can’t clamp one to your ear with your shoulder to free up your hands. Plus, cordless phones have been available for PBXs for years, and they have sold very badly.

Almost everybody in business has a cell phone. There is no way that these people are going to abandon their cell phones, but if the technical and usability obstacles are removed, they may see no further need for a desk phone. Cutting this expense has to be attractive to businesses focused on ROI.

Wi-Fi Security Risks

Ryan Naraine writes about exploits on Wi-Fi networks and how easy they are, first with a tool called Silica, then with free software running on a Nokia N800.

Exploits of this type can be prevented by elementary network hygiene, using the authentication and encryption techniques of 802.11i.

A different kind of vulnerability, described by “Johnny Cache,” is more insidious.

In lab tests, it has been possible for a device masquerading as an access point to respond to probe frames (which must always be sent in the clear, before any authentication can take place) with a malformed packet that causes a buffer overrun in the computer that is looking for a network. Because these buffer overruns are in the 802.11 driver, they can be designed to execute hostile code in kernel mode.

Of course, this type of vulnerability is specific to particular implementations of the Wi-Fi driver, and all the reported ones have been fixed. More reassuringly, there is no reported case of this type of exploit actually being carried out in the wild. But the principle remains that a badly written network driver can compromise your security regardless of the higher-level measures you take, and that wireless networks are more vulnerable to this type of exploit than wired ones.
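
The root cause in the reported driver bugs was typically a missing length check when parsing the information elements of a management frame. Here is a minimal sketch of the pattern; the structure and names are simplified illustrations, not any vendor’s actual driver code:

    #include <stdint.h>
    #include <string.h>

    /* Simplified 802.11 information-element parsing -- illustration only.
     * Each IE on the wire is: 1-byte id, 1-byte length, then payload. */
    #define MAX_SSID_LEN 32

    struct scan_result {
        uint8_t ssid[MAX_SSID_LEN];
        uint8_t ssid_len;
    };

    /* Returns 0 on success, -1 on malformed input. */
    int parse_ssid_ie(const uint8_t *ie, size_t ie_bytes, struct scan_result *out)
    {
        if (ie_bytes < 2)
            return -1;            /* too short to hold id + length */
        uint8_t claimed_len = ie[1];
        if (claimed_len > ie_bytes - 2)
            return -1;            /* length field runs past the frame */
        if (claimed_len > MAX_SSID_LEN)
            return -1;            /* the check the buggy drivers omitted: a
                                     hostile AP can claim a length over 32 and
                                     overrun a fixed-size buffer in the kernel */
        memcpy(out->ssid, ie + 2, claimed_len);
        out->ssid_len = claimed_len;
        return 0;
    }

    int main(void) {
        /* A well-formed SSID IE: id=0, length=4, payload "corp" */
        const uint8_t good[] = { 0x00, 0x04, 'c', 'o', 'r', 'p' };
        struct scan_result r;
        return parse_ssid_ie(good, sizeof good, &r);  /* returns 0 */
    }

Because a probe response is accepted before any 802.11i authentication happens, no amount of higher-layer security protects against a driver that skips that third check.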

So, is Wi-Fi too insecure for corporate use? Neither of the two classes of vulnerability discussed here seems to be a stopper. The Naraine exploits are addressed by simple common sense; the known driver vulnerabilities were repaired before anybody exploited them in the wild. There are almost certainly more like them waiting to be found, but on the scale of risks this has so far ranked low compared to the many widely publicized instances of physical theft of a laptop.