First 802.11n handset spotted in the wild – what took so long?

The fall 2009 crop of ultimate smartphones looks more penultimate to me, with its lack of 11n. But a handset with 802.11n has come in under the wire for 2009. Not officially, but actually. SlashGear reports a hack that kicks the Wi-Fi chip in the HTC HD2 phone into 11n mode. And the first ultimate smartphone of 2010, the HTC Google Nexus One, is also rumored to support 802.11n.

These are the drops before the deluge. Questions to chip suppliers have elicited mild surprise that there are still no Wi-Fi Alliance certifications for handsets with 802.11n. Every handset Wi-Fi chipmaker's flagship chip supports 802.11n. Broadcom is already shipping its BCM4329 11n combo chip in volume to Apple for the iPod touch (and, I would guess, the new Apple tablet), though the iPhone 3GS still sports the older BCM4325.

Some fear that 802.11n is a relative power hog, and will flatten your battery. For example, a GSMArena report on the HD2 hack says:

There are several good reasons why Wi-Fi 802.11n hasn’t made its way into mobile phone hardware just yet. Increased power consumption is just not worth it if the speed will be limited by other factors such as an under-powered CPU or slow memory…

But is it true that 802.11n increases power consumption at a system level? In some cases it may be: the SlashGear report linked above says that “some users have reported significant increases in battery consumption when the higher-speed wireless is switched on.”

This reality appears to contradict the opinion of one of the most knowledgeable engineers in the Wi-Fi industry, Bill McFarland, CTO at Atheros, who says:

The important metric here is the energy-per-bit transferred, which is the average power consumption divided by the average data rate. This energy can be measured in nanojoules (nJ) per bit transferred, and is the metric to determine how long a battery will last while doing tasks such as VoIP, video transmissions, or file transfers.

For example, Table 1 shows that for 802.11g the data rate is 22 Mbps and the corresponding receive power-consumption average is around 140 mW. While actively receiving, the energy consumed in receiving each bit is about 6.4 nJ. On the transmit side, the energy is about 20.4 nJ per bit.

Looking at these same cases for 802.11n, the data rate has gone up by almost a factor of 10, while power consumption has gone up by only a factor of 5, or in the transmit case, not even a factor of 3.

Thus, the energy efficiency in terms of nJ per bit is greater for 802.11n.

Here is his table that illustrates that point:
[Table 1: Effect of Data Rate on Power Consumption]

Source: Wireless Net DesignLine 06/03/2008
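
To make the energy-per-bit arithmetic concrete, here is a minimal Python sketch using only the figures quoted above. The 802.11g numbers (140 mW at 22 Mbps) are McFarland's; the 802.11n values simply apply his quoted ratios (roughly 10x the data rate at 5x the power) and are not measured numbers.

    def energy_per_bit_nj(power_mw, rate_mbps):
        """Energy per transferred bit in nanojoules: average power / average data rate."""
        watts = power_mw * 1e-3            # mW -> W (joules per second)
        bits_per_sec = rate_mbps * 1e6     # Mbps -> bits per second
        return watts / bits_per_sec * 1e9  # J/bit -> nJ/bit

    # 802.11g receive, per McFarland's figures
    print(energy_per_bit_nj(140, 22))           # ~6.4 nJ/bit

    # 802.11n receive, applying the quoted ratios (~10x rate, ~5x power)
    print(energy_per_bit_nj(140 * 5, 22 * 10))  # ~3.2 nJ/bit, half the energy per bit

The point is that for a fixed-size transfer the faster radio finishes sooner, so it spends less total energy even while drawing more instantaneous power.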

The discrepancy between this theoretical superiority of 802.11n’s power efficiency and the complaints from the field can be explained in several ways. For example, 802.11n’s power efficiency may actually be better and the field reports wrong. Or there may be some error in the particular implementation of 802.11n in the HD2 – a problem that led HTC to disable it for the initial shipments.

Either way, 2010 will be the year for 802.11n in handsets. I expect all dual-mode handset announcements in the latter part of the year to have 802.11n.

As to why it took so long, I don’t think it did, really. The chips only started shipping this year, and there is a manufacturing lag between chip and phone. I suppose a phone could have started shipping around the same time as the latest iPod touch, which was September. But three months is not an egregious lag.

All you can eat?

The always-good Rethink Wireless has an article, “AT&T sounds deathknell for unlimited mobile data.”

It points out that with “3% of smartphone users now consuming 40% of network capacity,” the carrier has to draw a line. Presumably because if 30% of AT&T’s subscribers were to buy iPhones, they would consume 400% of the network’s capacity.

Wireless networks are badly bandwidth-constrained. AT&T’s woes with the iPhone launch were caused by lack of backhaul (wired capacity to the cell towers), but the real problem is on the wireless link from the cell tower to the phone.

The problem here is one of setting expectations. Here’s an excerpt from AT&T’s promotional materials: “Customers with capable LaptopConnect products or phones, like the iPhone 3G S, can experience the 7.2 [megabit per second] speeds in coverage areas.” A reasonable person might read this as an invitation to do something like video streaming. Actually, a single user at this bandwidth would consume the entire capacity of a cell-tower sector:
[Figure: HSPA cell capacity per sector per 5 MHz]
Source: High Speed Radio Access for Mobile Communications, edited by Harri Holma and Antti Toskala.
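
Two quick calculations make the point. The 7.2 Mbps figure is from AT&T’s copy; the sector budget is my assumption for illustration (the cited table puts HSPA throughput per sector per 5 MHz in the single-digit Mbps range):

    advertised_mbps = 7.2  # AT&T's advertised HSPA peak

    # Assumption (mine, for illustration): total HSPA throughput per
    # 5 MHz sector is single-digit Mbps, shared among every user in it
    sector_capacity_mbps = 7.0

    print(advertised_mbps / sector_capacity_mbps)  # > 1: one full-speed user saturates the sector

    # And the data volume piles up fast at that rate:
    gb_per_hour = advertised_mbps / 8 * 3600 / 1000  # Mbps -> GB per hour
    print(gb_per_hour)  # ~3.2 GB in one hour of full-rate streaming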

This poses a dilemma – not just for AT&T but for all wireless service providers. Ideally you want the network to be super responsive, for example when you are loading a web page. This requires a lot of bandwidth for short bursts. So imposing a hard rate cap, throttling download speeds to some arbitrary maximum, would give users a worse experience. But users who consume a lot of bandwidth continuously – streaming live TV, for example – make things bad for everybody.
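
One standard way to split this difference is traffic shaping with a token bucket: short bursts run at full speed, while the sustained rate is capped. Here is a minimal sketch of the technique – illustrative only, not any carrier’s actual policy:

    import time

    class TokenBucket:
        """Allow bursts up to `burst_bits` at full speed while capping
        the long-run average rate at `rate_bps`."""
        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps
            self.capacity = burst_bits
            self.tokens = burst_bits
            self.last = time.monotonic()

        def allow(self, bits):
            now = time.monotonic()
            # Refill tokens for the elapsed time, up to the burst capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= bits:
                self.tokens -= bits
                return True
            return False

    # e.g. 1 Mbps sustained with 10 Mbit bursts: a web page loads at full
    # speed, but continuous streaming settles to the 1 Mbps floor
    bucket = TokenBucket(rate_bps=1e6, burst_bits=10e6)

With shaping like this, bursty web browsing stays snappy while the continuous streamer is the one throttled.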

The cellular companies think of users like this as bad guys, taking more than their share. But actually they are innocently taking the carriers up on the promises in their ads. This is why the Rethink piece says “many observers think AT&T – and its rivals – will have to return to usage-based pricing, or a tiered tariff plan.”

Actually, AT&T already appears to have such a policy – reserving the right to charge more if you use more than 5 GB per month. That is a lot, unless you are using your phone to stream video. For example, it’s over 10,000 average web pages or 10,000 minutes of VoIP. You can avoid running over this cap by limiting your streaming videos and your videophone calls to when you are in Wi-Fi coverage. You can still watch videos when you are out and about by downloading them in advance, iPod-style.
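
Those figures check out under reasonable assumptions (mine, not AT&T’s): roughly 500 KB for an average web page, and about 64 kbps for a VoIP stream including packet overhead.

    cap_bytes = 5e9  # 5 GB, decimal, as carriers count it

    page_bytes = 500e3                  # assumed average web page (~500 KB)
    voip_bytes_per_min = 64e3 / 8 * 60  # 64 kbps -> 480 KB per minute

    print(cap_bytes / page_bytes)          # 10,000 web pages
    print(cap_bytes / voip_bytes_per_min)  # ~10,400 VoIP minutes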

This doesn’t seem particularly burdensome to me.

Why are we waiting?

I just clicked on a calendar appointment enclosure in an email. Nothing happened, so I clicked again. Then suddenly two copies of the appointment appeared in my iCal calendar. Why on earth did the computer take so long to respond to my click? I waited an eternity – maybe even as long as a second.

The human brain has an effective “clock speed” of about 15 Hz, so anything that happens within one tick (under about 70 milliseconds) seems instant. The other side of this coin is that when your computer takes longer than 150 ms to respond to you, it’s slowing you down.

I have difficulty fathoming how programmers are able to make modern computers run so slowly. The original IBM PC ran at well under 2 MIPS. The computer you are sitting at is around ten thousand times faster. It’s over 100 times faster than a Cray-1 supercomputer. This means that when your computer keeps you waiting for a quarter of a second, equally inept programming on the same task on an eighties-era IBM PC would have kept you waiting 40 minutes. I don’t know about you, but I encounter delays of over a quarter of a second with distressing frequency in my daily work.
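
The arithmetic, spelled out with the rough factors above:

    brain_tick_s = 1 / 15  # ~67 ms: the perceptual "tick" from above
    ui_delay_s = 0.25      # a quarter-second stall
    speedup = 10_000       # today's PC vs. the original IBM PC, roughly

    print(ui_delay_s > brain_tick_s)  # True: you notice the stall
    print(ui_delay_s * speedup / 60)  # ~41.7 -> "about 40 minutes" on the old PC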

I blame Microsoft. Around Intel, the joke about performance was “what Andy giveth, Bill taketh away.” This was actually a winning strategy for decades of Microsoft success: concentrate on features and speed of implementation, and never waste time optimizing for performance, because the hardware will catch up. It’s hard to argue with success, but I wonder whether a software company obsessed with performance could be even more successful than Microsoft.