First 802.11n handset spotted in the wild – what took so long?

The fall 2009 crop of ultimate smartphones looks more penultimate to me, with its lack of 11n. But a handset with 802.11n has come in under the wire for 2009. Not officially, but actually. Slashgear reports a hack that kicks the Wi-Fi chip in the HTC HD2 phone into 11n mode. And the first ultimate smartphone of 2010, the HTC Google Nexus One, is also rumored to support 802.11n.

These are the drops before the deluge. Questions to chip suppliers have elicited mild surprise that there are still no Wi-Fi Alliance certifications for handsets with 802.11n. All the flagship chips from all the handset Wi-Fi chipmakers are 802.11n. Broadcom is already shipping volumes of its BCM4329 11n combo chip to Apple for the iTouch (and I would guess the new Apple tablet), though the 3GS still sports the older BCM4325.

Some fear that 802.11n is a relative power hog, and will flatten your battery. For example, a GSMArena report on the HD2 hack says:

There are several good reasons why Wi-Fi 802.11n hasn’t made its way into mobile phones hardware just yet. Increased power consumption is just not worth it if the speed will be limited by other factors such as under-powered CPU or slow-memory…

But is it true that 802.11n increases power consumption at a system level? In some cases it may be: the Slashgear report linked above says: “some users have reported significant increases in battery consumption when the higher-speed wireless is switched on.”

This reality appears to contradict the opinion of one of the most knowledgeable engineers in the Wi-Fi industry, Bill McFarland, CTO at Atheros, who says:

The important metric here is the energy-per-bit transferred, which is the average power consumption divided by the average data rate. This energy can be measured in nanojoules (nJ) per bit transferred, and is the metric to determine how long a battery will last while doing tasks such as VoIP, video transmissions, or file transfers.

For example, Table 1 shows that for 802.11g the data rate is 22 Mbps and the corresponding receive power-consumption average is around 140 mW. While actively receiving, the energy consumed in receiving each bit is about 6.4 nJ. On the transmit side, the energy is about 20.4 nJ per bit.

Looking at these same cases for 802.11n, the data rate has gone up by almost a factor of 10, while power consumption has gone up by only a factor of 5, or in the transmit case, not even a factor of 3.

Thus, the energy efficiency in terms of nJ per bit is greater for 802.11n.

Here is his table that illustrates that point:
Effect of Data Rate on Power Consumption

Source: Wireless Net DesignLine 06/03/2008
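
A quick sanity check on those numbers, as a minimal sketch in Python: the 802.11g figures are taken from the quote above, while the 802.11n figures are assumptions inferred from the quoted ratios (about 10x the data rate at about 5x the receive power).

```python
# Energy per bit = average power / average data rate; conveniently,
# mW divided by Mbps comes out directly in nJ per bit.

def energy_per_bit_nj(power_mw: float, rate_mbps: float) -> float:
    """Energy consumed per bit transferred, in nanojoules."""
    return power_mw / rate_mbps

# 802.11g receive: 22 Mbps at ~140 mW (figures from the McFarland quote)
print(energy_per_bit_nj(140, 22))           # ~6.4 nJ/bit, matching the quote

# 802.11n receive (assumed): ~10x the data rate at ~5x the power
print(energy_per_bit_nj(140 * 5, 22 * 10))  # ~3.2 nJ/bit - about half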

The discrepancy between this theoretical superiority of 802.11n’s power efficiency and the complaints from the field may be explained in several ways. For example, the power efficiency may actually be better and the reports wrong. Or there may be some error in the particular implementation of 802.11n in the HD2 – a problem that led HTC to disable it for the initial shipments.

Either way, 2010 will be the year for 802.11n in handsets. I expect all dual-mode handset announcements in the latter part of the year to have 802.11n.

As to why it took so long, I don’t think it did, really. The chips only started shipping this year, and there is a manufacturing lag between chip and phone. I suppose a phone could have started shipping around the same time as the latest iTouch, which was September. But 3 months is not an egregious lag.

All you can eat?

The always good Rethink Wireless has an article, “AT&T sounds deathknell for unlimited mobile data.”

It points out that with “3% of smartphone users now consuming 40% of network capacity,” the carrier has to draw a line. Presumably because if 30% of AT&T’s subscribers were to buy iPhones, they would consume 400% of the network’s capacity.

Wireless networks are badly bandwidth constrained. AT&T’s woes with the iPhone launch were caused by lack of backhaul (wired capacity to the cell towers), but the real problem is on the wireless link from the cell tower to the phone.

The problem here is one of setting expectations. Here’s an excerpt from AT&T’s promotional materials: “Customers with capable LaptopConnect products or phones, like the iPhone 3G S, can experience the 7.2 [megabit per second] speeds in coverage areas.” A reasonable person reading this might think that it is an invitation to do something like video streaming. Actually, a single user of this bandwidth would consume the entire capacity of a cell-tower sector:
HSPA cell capacity per sector per 5 MHz
Source: High Speed Radio Access for Mobile Communications, edited by Harri Holma and Antti Toskala.
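
To make that concrete, here is a minimal sketch. The 7.2 Mbps figure is AT&T’s; the sector capacity is an assumption in the range the chart above suggests.

```python
# One heavy user versus a sector: HSPA throughput per sector per 5 MHz
# carrier is on the order of a few Mbps (an assumption based on the
# chart above), so a single 7.2 Mbps stream saturates it.

SECTOR_CAPACITY_MBPS = 7.0   # assumed aggregate sector throughput

def sector_share(user_rate_mbps: float) -> float:
    """Fraction of the sector's capacity one user consumes."""
    return user_rate_mbps / SECTOR_CAPACITY_MBPS

print(sector_share(7.2))   # > 1.0: the advertised peak exceeds the whole sector
print(sector_share(0.5))   # one 500 kbps video stream: ~7% of the sector
```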

This poses a dilemma – not just for AT&T but for all wireless service providers. Ideally you want the network to be super responsive, for example when you are loading a web page. This requires a lot of bandwidth for short bursts. So imposing a bandwidth cap, throttling download speeds to some arbitrary maximum, would give users a worse experience. But users who use a lot of bandwidth continuously – streaming live TV for example – make things bad for everybody.

The cellular companies think of users like this as bad guys, taking more than their share. But actually they are innocently taking the carriers up on the promises in their ads. This is why the Rethink piece says “many observers think AT&T – and its rivals – will have to return to usage-based pricing, or a tiered tariff plan.”

Actually, AT&T already appears to have such a policy – reserving the right to charge more if you use more than 5GB per month. This is a lot, unless you are using your phone to stream video. For example, it’s over 10,000 average web pages or 10,000 minutes of VoIP. You can avoid running over this cap by limiting your streaming videos and your videophone calls to when you are in Wi-Fi coverage. You can still watch videos when you are out and about by downloading them in advance, iPod style.
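
Here is the arithmetic behind those figures, as a minimal sketch; the page size and VoIP bit-rate are assumptions (roughly 500 KB per average page, 50 kbps for VoIP including packet overhead):

```python
# What a 5 GB monthly cap buys, under assumed workloads.

CAP_BYTES = 5 * 10**9            # the 5 GB cap
PAGE_BYTES = 500 * 10**3         # assumed average web page size
VOIP_BITS_PER_S = 50 * 10**3     # assumed VoIP rate with IP overhead

web_pages = CAP_BYTES // PAGE_BYTES
voip_minutes = CAP_BYTES * 8 // VOIP_BITS_PER_S // 60

print(f"{web_pages:,} web pages")        # 10,000 pages
print(f"{voip_minutes:,} VoIP minutes")  # ~13,333 minutes
```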

This doesn’t seem particularly burdensome to me.

Why are we waiting?

I just clicked on a calendar appointment enclosure in an email. Nothing happened, so I clicked again. Then suddenly two copies of the appointment appeared in my iCal calendar. Why on earth did the computer take so long to respond to my click? I waited an eternity – maybe even as long as a second.

The human brain has a clock speed of about 15 Hz. So anything that happens in less than 70 milliseconds seems instant. The other side of this coin is that when your computer takes longer than 150 ms to respond to you, it’s slowing you down.

I have difficulty fathoming how programmers are able to make modern computers run so slowly. The original IBM PC ran at well under 2 MIPS. The computer you are sitting at is around ten thousand times faster. It’s over 100 times faster than a Cray-1 supercomputer. This means that when your computer keeps you waiting for a quarter of a second, equally inept programming on the same task on an eighties-era IBM PC would have kept you waiting 40 minutes. I don’t know about you, but I encounter delays of over a quarter of a second with distressing frequency in my daily work.
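
A minimal sketch of that arithmetic, using the perception threshold and speed-up factor as roughly estimated in the paragraphs above:

```python
# A ~15 Hz perceptual "clock" means anything under ~67 ms feels instant.
print(1 / 15)   # ~0.067 s

# If today's machine is ~10,000x faster than an original IBM PC, a
# quarter-second stall today is the work-equivalent of this much PC time:
SPEEDUP = 10_000
stall_seconds = 0.25
print(stall_seconds * SPEEDUP / 60, "minutes")   # ~41.7, i.e. about 40 minutes
```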

I blame Microsoft. Around Intel, the joke about performance was “what Andy giveth, Bill taketh away.” This was actually a winning strategy for decades of Microsoft success: concentrate on features and speed of implementation and never waste time optimizing for performance, because the hardware will catch up. It’s hard to argue with success, but I wonder if a software company obsessed with performance could be even more successful than Microsoft.

Network Neutrality – FCC issues NPRM

I wrote earlier about FCC chairman Julius Genachowski’s plans for regulations aimed at network neutrality. The FCC today came through with a Notice of Proposed Rule Making. Here are the relevant documents from the FCC website:

Summary Presentation: Acrobat
NPRM: Word | Acrobat
News Release: Word | Acrobat
Genachowski Statement: Word | Acrobat
Copps Statement: Word | Acrobat
McDowell Statement: Word | Acrobat
Clyburn Statement: Word | Acrobat
Baker Statement: Word | Acrobat

The NPRM itself is a hefty document, 107 pages long; if you just want the bottom line, the Summary Presentation is short and a little more readable than the press release. The comment period closes in mid-January, and the FCC will respond to the comments in March. I hesitate to guess when the rules will actually be released – this is hugely controversial: 40,000 comments filed to date. Here is a link to a pro-neutrality advocate. Here is a link to a pro-competition advocate. I believe that the FCC is doing a necessary thing here, and that the proposals properly address the legitimate concerns of the ISPs.

Here is the story from Reuters, and from AP.

Dual mode phone trends update 3

I last looked at dual mode phone certifications on the Wi-Fi Alliance website almost a year ago.

Here’s what has happened since, through the first three quarters of 2009:
Wi-Fi Alliance Dual-Mode Phone Certifications 2005-2009

There are still no certifications for 802.11 draft n, and almost none for 802.11a.

Here’s another breakdown, by manufacturer and year. Click on the chart to get a bigger image. This shows that the Wi-Fi enthusiasts have been pretty constant over the years: Nokia, HTC, Motorola and Samsung, joined more recently by Sony Ericsson and LG. Note that the 2009 figures are only through Q3, so the growth is even more impressive than it seems from this chart.
Wi-Fi Alliance Dual-Mode Phone Certifications 2005-2009 by OEM

The all-time champion is Samsung, with a total of 84 phone models certified for Wi-Fi, followed by Nokia with 68, then HTC with 54. This changes if you look just at smartphones, where Nokia has 61 total certifications to HTC’s 34 and Samsung’s 29.

3G network performance test results: Blackberries awful!

ARCchart has just published a report summarizing the data from a “test your Internet speed” applet that it offers for iPhone, Blackberry and Android. The dataset is millions of readings, from every country and carrier in the world. The highlights from my point of view:

  1. 3G (UMTS) download speeds average about a megabit per second; 2.5G (EDGE) speeds average about 160 kbps and 2G (GPRS) speeds average about 50 kbps.
  2. For VoIP, latency is a critical measure. The average on 3G networks was 336 ms, with a variation between carriers and countries ranging from 200 ms to over a second. The ITU reckons latency becomes a serious problem above 170 ms. I discussed the latency issue on 3G networks in an earlier post.
  3. According to these tests, Blackberries are on average only half as fast as iPhones and Android phones for both download and upload on the same networks. The Blackberry situation is complicated because they claim to compress data streams, and because all data normally goes through Blackberry servers. The ARCchart report looks into the reasons for Blackberry’s poor showing:

The BlackBerry download average across all carriers is 515 kbps versus 1,025 kbps for the iPhone and Android – a difference of half. Difference in the upload average is even greater – 62 kbps for BlackBerry compared with 155 kbps for the other devices.
Source: ARCchart, September 2009.

Femtocell pricing chutzpah

It’s like buying an airplane ticket then getting charged extra to get on the plane.

The cellular companies want you to buy cellular service then pay extra to get signal coverage. Gizmodo has a coolly reasoned analysis.

AT&T Wireless is doing the standard telco thing here, conflating pricing for different services. It is sweetening the monthly charge option for femtocells by offering unlimited calling. A more honest pricing scheme would be to provide femtocells free to anybody who has a coverage problem, and to offer the femtocell/unlimited calling option as a separate product. Come to think of it, this is probably how AT&T really plans for it to work: if a customer calls to cancel service because of poor coverage, I expect AT&T will offer a free femtocell as a retention incentive.

It is ironic that this issue is coming up at the same time as the wireless carriers are up in arms about the FCC’s new network neutrality initiative. Now that smartphones all have Wi-Fi, if the handsets were truly open we could use our home Wi-Fi signal to get data and voice services from alternative providers when we were at home. No need for femtocells. (T-Mobile@Home is a closed-network version of this.)

Presumably something like this is on the roadmap for Google Voice, which is one of the scenarios that causes the MNOs to fight network neutrality tooth and nail.

FCC to issue Net Neutrality rules

In a speech to the Brookings Institution today, FCC Chairman Julius Genachowski announced that the FCC is initiating a public process to formulate net neutrality rules for broadband network operators based on six principles:

  1. Open access to Content
  2. Open access to Applications
  3. Open access to Services
  4. Freedom for users to attach devices to the network
  5. Non-discrimination for content and applications
  6. Transparency of network management practices

The first four of these principles were initially articulated by former FCC Chairman Michael Powell in 2004 as the “Four Freedoms.” Numbers 5 and 6 are new. The forthcoming rules will apply these six principles to all broadband access technologies, including wireless.

Genachowski made the case that Internet openness is essential and that it is threatened. He acknowledged that network providers need to manage their networks, and said that they can control spam and help to maintain intellectual property integrity without compromising these principles.

The threats to Internet openness come from reduced competition among ISPs and conflicts of interest within the ISPs, because they are also trying to be content providers.

Genachowski rightly sees these threats as serious:

This is not about protecting the internet against imaginary dangers. We’re seeing the breaks and cracks emerge, and they threaten to change the Internet’s fundamental architecture of openness. This would shrink opportunities for innovators, content creators and small businesses around the country, and limit the full and free expression the internet promises. This is about preserving and maintaining something profoundly successful and ensuring that it’s not distorted or undermined.

These rules will be very tough to enforce. The fundamental structure of the business works against them. A more effective approach may be to break up the ISPs into multiple independent companies, for example: Internet access operations, wide area network operations, and service/content/application operations. The neutrality problem is in the access networks – the WANs and the services are healthier. With only the telcos (DSL and fiber) and the MSOs (cable) there is not enough competition for a free market to develop. This is why Intel pushed so hard for WiMAX as a third mode of broadband access, though it hasn’t panned out that way. It is also why municipal dark fiber makes sense, following the model of roads, water and sewers.

VoIP on the cellular data channel

In a recent letter to the FCC, AT&T said that it had no objection to VoIP applications on the iPhone that communicate over the Wi-Fi connection. It furthermore said:

Consistent with this approach, we plan to take a fresh look at possibly authorizing VoIP capabilities on the iPhone for use on AT&T’s 3G network.

So why would anybody want to do VoIP on the cellular data channel, when there is a cellular voice channel already? Wouldn’t voice on the data channel cost more? And since the voice channel is optimized for voice and the data channel isn’t, wouldn’t voice on the data channel sound even worse than cellular voice already does?

Let’s look at the “why bother?” question first. There are actually at least four reasons you might want to do voice on the cellular data channel:

  1. To save money. If your voice plan has some expensive types of call (for example international calls) you may want to use VoIP on the data channel for toll bypass. The alternative to this is to use the voice channel to call a local access number for an international toll bypass service (like Rebtel).
  2. To get better sound quality: the cellular voice codecs are very low bandwidth and sound horrible. You can choose which codec to run over the data network and even go wideband. At IT Expo West a couple of weeks ago David Frankel of ZipDX demoed a wideband voice call on his laptop going through a Sprint Wireless Data Card. The audio quality was excellent.
  3. To get additional service features: companies like DiVitas offer roaming between the cellular and Wi-Fi networks that makes your cell phone act as an extension behind your corporate PBX. All these solutions currently use the cellular voice channel when out of Wi-Fi range, but if they were to go to the data channel they could offer wideband codecs and other differentiating features.
  4. For cases where there is no voice channel. In the example of David Frankel’s demo, the wireless data card doesn’t offer a voice channel, so VoIP on the data channel is the only option for a voice connection.

Moving on to the issue of cost, an iPhone unlimited data plan is $30 per month. “Unlimited” is AT&T’s euphemism for “limited to 5GB per month,” but translated to voice that’s a lot of minutes: even with IP packet overhead the bit-rate of compressed HD voice is going to be around 50K bits per second, which works out to about 13,000 minutes in 5GB. So using it for voice is unlikely to increase your bill. On the other hand, many voice plans are already effectively unlimited, what with rollover minutes, friend and family minutes, night and weekend minutes and whatnot, and you can’t get a phone without a voice plan. So for normal (non-international) use voice on the data channel is not going to reduce your bill, but it is unlikely to increase it, either.
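
A minimal sketch of that bit-rate arithmetic; the codec rate and packetization interval are assumptions (a ~24 kbps wideband codec sent in 20 ms packets, with 40 bytes of IP/UDP/RTP headers per packet):

```python
# On-the-wire bit-rate for compressed HD voice, then minutes in 5 GB.

CODEC_BPS = 24_000         # assumed wideband codec rate (AMR-WB class)
PACKET_INTERVAL_S = 0.020  # assumed packetization: one packet per 20 ms
HEADER_BITS = 40 * 8       # IP + UDP + RTP headers per packet

packets_per_s = 1 / PACKET_INTERVAL_S               # 50 packets/s
wire_bps = CODEC_BPS + packets_per_s * HEADER_BITS  # ~40 kbps; call it
                                                    # ~50 kbps with margin
cap_bits = 5 * 10**9 * 8
print(round(wire_bps), "bps on the wire")
print(cap_bits / 50_000 / 60, "minutes at 50 kbps")  # ~13,333 minutes
```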

Finally we come to the issue of whether voice sounds better on the voice channel or the data channel. The answer is, it depends on several factors, primarily the codec and the network QoS. With VoIP you can radically improve the sound quality of a call by using a wideband codec, but do impairments on the data channel nullify this benefit?

Technically, the answer is that they can. The cellular data channel is not engineered for low latency. Variable delays are introduced by network routing decisions and by router queuing decisions. Latencies in the hundreds of milliseconds are not unusual. This will change with the advent of LTE, where latencies will be of the order of 10 milliseconds. The available bandwidth is also highly variable, in contrast to the fixed bandwidth allocation of the voice channel. It can sometimes drop below what is needed for voice with even an aggressive variable-rate codec.

In practice VoIP on the cellular data channel can sometimes sound much better than regular cellular voice. I mentioned above David Frankel’s demo at IT Expo West. I performed a similar experiment this morning with Michael Graves, with similarly good results. I was on a Polycom desk phone, Michael used Eyebeam on a laptop, and the codec was G.722. The latency on this call was appreciable – I estimated it at around 1 second round trip. There was also some packet loss – not bad for me, but it caused a sub-par experience for Michael. Earlier this week at Jeff Pulver’s HD Connect conference in New York, researchers from Qualcomm demoed a handset running on the Verizon network using EVRC-WB, transcoding to G.722 on Polycom and Gigaset phones in their lab in San Diego. The sound quality was excellent, but the latency was very high – I estimated it at around two seconds round trip.

The ITU addresses latency (delay) in Recommendation G.114. Delay is a problem because normal conversation depends on turn taking. Most people insert pauses of up to about 400 ms as they talk. If nobody else speaks during a pause, they continue. This means that if the one-way delay on a phone conversation is greater than 200 ms, the talker doesn’t hear an interruption within the 400 ms break, and starts talking again, causing frustrating collisions.

The ITU E-Model for call quality identifies a threshold at about 170 ms one-way at which latency becomes a problem. The E-Model also tells us that increasing latency amplifies other impairments – notably echo, which can be severe at low latencies without being a problem, but at high latencies even relatively quiet echo can severely disrupt a talker.
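
A minimal sketch of that turn-taking logic; the 400 ms pause is the figure cited above, and the test is simply whether the round trip fits inside the pause:

```python
# A talker pauses for up to ~400 ms. The pause takes one one-way delay to
# reach the listener, and the listener's reply takes another one-way delay
# to come back, so the talker hears a response no sooner than 2x the
# one-way delay. If that exceeds the pause, the talker has already resumed.

def collisions_likely(one_way_delay_ms: float, pause_ms: float = 400) -> bool:
    return 2 * one_way_delay_ms > pause_ms

print(collisions_likely(150))   # False: comfortable margin
print(collisions_likely(250))   # True: past the ~200 ms one-way threshold
print(collisions_likely(336))   # True: the 3G average reported by ARCchart
```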

Some people may be able to handle long latencies better than others. Michael observed that he can get used to high latency echo after a few minutes of conversation.