FCC to address White Spaces at September 23rd Meeting

The agenda for the September 23rd FCC Commission Meeting lists:

TV White Spaces Second MO&O: A Second Memorandum Opinion and Order that will create opportunities for investment and innovation in advanced Wi-Fi technologies and a variety of broadband services by finalizing provisions for unlicensed wireless devices to operate in unused parts of TV spectrum.

Early discussion of White Spaces proposed that client devices would be responsible for finding vacant spectrum to use. This “spectrum sensing” or “detect and avoid” technology was sufficiently controversial that a consensus grew to supplement it with a geolocation approach: client devices determine their location using GPS or other technologies, then consult a coverage database showing which frequencies are available at that location.
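
To make the geolocation approach concrete, here is a minimal sketch, in Python, of what a client device would do: take its GPS coordinates, ask the database which TV channels are vacant at that spot, and transmit only on one of those. The database endpoint and response format are made up for illustration; the actual database administrators and query protocol were still being worked out at the FCC.

```python
import json
import urllib.request

# Hypothetical white-spaces database endpoint and response format,
# for illustration only.
DB_URL = "https://whitespace-db.example/available"

def available_channels(lat, lon):
    """Ask the geolocation database which TV channels are vacant here."""
    with urllib.request.urlopen(f"{DB_URL}?lat={lat}&lon={lon}") as reply:
        return json.load(reply)["channels"]   # e.g. [21, 24, 38, 41]

def pick_channel(lat, lon):
    """Pick a channel the database says is free at this location."""
    channels = available_channels(lat, lon)
    if not channels:
        raise RuntimeError("no vacant TV channels at this location")
    # A real device would also weigh power limits, adjacent-channel
    # rules and so on; first-free is enough for a sketch.
    return channels[0]

# The device gets lat/lon from GPS (or another location technology),
# then transmits only on a channel the database reports as vacant.
print(pick_channel(37.7749, -122.4194))
```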

Among the Internet companies, this consensus now appears to have evolved to eliminate the spectrum-sensing requirement for geolocation-enabled devices.

The Register says that this is because spectrum sensing doesn’t work.

The Associated Press predicts that the Order will go along with the Internet companies, and ditch the spectrum sensing requirement.

Some of the technology companies behind the white spaces are fighting a rearguard action, saying there are good reasons to retain spectrum sensing as an alternative to geolocation. The broadcasting industry (represented by the NAB and MSTV) wants to require both. It will be interesting to see whether the FCC leaves any spectrum-sensing provisions in the Order.

Intel Infineon: history repeats itself

Unlike its perplexing McAfee move, the Infineon acquisition was one Intel had to make. Intel must make the Atom succeed. The smartphone market is growing fast, and the media tablet market is in the starting blocks. Chips in these devices are increasingly systems-on-chip, combining multiple functions. To sell application processors in phones, you must have a baseband story. Infineon’s RF expertise is a further benefit.

As Linley Gwennap said when he predicted the acquisition a month ago, the fit is natural: Intel needs 3G and LTE basebands, and Infineon has no application processor of its own.

Linley also pointed out Intel’s abysmal track record for acquisitions.

Intel has been through this movie before, for the same strategic reasons. It acquired DSP Communications in 1999 for $1.6 Bn. The idea there was to enter the cellphone chip market with DSP’s baseband plus the XScale ARM processor that Intel got from DEC. It was great in theory, and XScale got solid design wins in the early smart-phones, but Intel neglected XScale, letting its performance lead over other ARM implementations dwindle, and its only significant baseband customer was RIM.

In 2005, Paul Otellini became CEO; at that time AMD was beginning to make worrying inroads into Intel’s market share. Otellini regrouped, refocusing on Intel’s core business, which he saw as being “Intel Architecture” chips. But XScale runs an instruction set architecture that competes with IA, namely ARM. So rather than continuing to invest in its competition, Intel sold off its flagging cellphone chip business (baseband and XScale) to Marvell for $0.6 Bn, and set out to create an IA chip that could compete with ARM in size, power consumption and price. Hence Atom.

But does instruction set architecture matter that much any more? Intel’s pitch on Atom-based netbooks was that you could have “the whole Internet” on them, including the parts that run only on IA chips. But now there are no such parts. Everything relevant on the Internet works fine on ARM-based systems like Android phones. iPhones are doing great even without Adobe Flash.

So from Intel’s point of view, this decade-later redo of its entry into the cellphone chip business is different. It is doing it right, with a coherent corporate strategy. But from the point of view of the customers (the phone OEMs and carriers) it may not look so different. They will judge Intel’s offerings on price, performance, power efficiency, wireless quality and how easy Intel makes it to design-in the new chips. The same criteria as last time.

Rethink Wireless has some interesting insights on this topic…

Google sells out

Google and Verizon came out with their joint statement on Net Neutrality on Monday. It is reasonable and idealistic in its general sentiments, but contains several of the loopholes Marvin Ammori warned us about. It was released in three parts: a document posted to Google Docs, a commentary posted to the Google Public Policy Blog, and an op-ed in the Washington Post. Eight paragraphs in the statement document map to seven numbered points in the blog. The first three numbered points map to the six principles of net neutrality enumerated by Julius Genachowski [jg1-6] almost a year ago. Here are the Google/Verizon points as numbered in the blog:

1. Open access to Content [jg1], Applications [jg2] and Services [jg3]; choice of devices [jg4].
2. Non-discrimination [jg5].
3. Transparency of network management practices [jg6].
4. FCC enforcement power.
5. Differentiated services.
6. Exclusion of Wireless Access from these principles (for now).
7. Universal Service Fund to include broadband access.

The non-discrimination paragraph is weakened by the kinds of words that are invitations to expensive litigation unless they are precisely defined in legislation. It doesn’t prohibit discrimination, it merely prohibits “undue” discrimination that would cause “meaningful harm.”

The managed (or differentiated) services paragraph is an example of what Ammori calls “an obvious potential end-run around the net neutrality rule.” I think that Google and Verizon would argue that their transparency provisions mean that ISPs can deliver things like FIOS video-on-demand over the same pipe as Internet service without breaching net neutrality, since the Internet service will commit to a measurable level of service. This is not how things work at the moment: ISPs make representations about the maximum delivered bandwidth, but for consumer service they don’t specify a minimum below which the connection will not fall.

The examples the Google blog gives of “differentiated online services, in addition to the Internet access and video services (such as Verizon’s FIOS TV)” appear to have in common the need for high bandwidth and high QoS. This bodes extremely ill for the Internet. The evolution to date of Internet access service has been steadily increasing bandwidth and QoS. The implication of this paragraph is that these improvements will be skimmed off into proprietary services, leaving the bandwidth and QoS of the public Internet stagnant.

Many consider the exclusion of wireless egregious. I think that Google and Verizon would argue that there is nothing to stop wireless being added later. In any case, I am sympathetic to Verizon on this issue, since wireless is so bandwidth-constrained relative to wireline that it seems necessary to ration it in some way.

The Network Management paragraph in the statement document permits “reasonable” network management practices. Fortunately the word “reasonable” is defined in detail in the statement document. Unfortunately the definition, while long, includes a clause which renders the rest of the definition redundant: “or otherwise to manage the daily operation of its network.” This clause appears to permit whatever the ISP wants.

So on balance, while it contains a lot of worthy sentiments, I am obliged to view this framework as a sellout by Google. I am not alone in this assessment.

Net Neutrality heating up

I got an email from Credo this morning asking me to call Julius Genachowski to ask him to stand firm on net neutrality.

The nice man who answered told me that the best way to make my voice heard on this issue is to file a comment at the FCC website, referencing proceeding number 09-191.

So that my comment would be a little less ignorant, I carefully read an article on the Huffington Post by Marvin Ammori before filing it.

My opinion on this is that ISPs deserve to be fairly compensated for their service, but that they should not be permitted to double-charge for a consumer’s Internet access. If some service like video on demand requires prioritization or some other differential treatment, the ISP should only be allowed to charge the consumer for this, not the content provider. In other words, every bit traversing the subscriber’s access link should be treated equally by the ISP unless the consumer requests otherwise, and the ISP should not be permitted to take payments from third parties like content providers to preempt other traffic. If such discrimination is allowed, the ISP will be motivated to keep last-mile bandwidth scarce.

Internet access in the US is effectively a duopoly (cable or DSL) in each neighborhood. This absence of competition has caused the US to become a global laggard in consumer Internet bandwidth. With weak competition and ineffective regulation, a rational ISP will forego the expense of network upgrades.

ISPs like AT&T view the Internet as a collection of pipes connecting content providers to content consumers. This is the thinking behind Ed Whitacre’s famous comment, “to expect to use these pipes for free is nuts!” Ed was thinking that Google, Yahoo and Vonage are using his pipes to reach his subscribers for free. The “Internet community” on the other hand views the Internet as a collection of pipes connecting people to people. From this point of view, the consumer pays AT&T for access to the Internet, and Google, Yahoo and Vonage each pay their respective ISPs for access to the Internet. Nobody is getting anything for free. It makes no more sense for Google to pay AT&T for a subscriber’s Internet access than it would for an AT&T subscriber to pay Google’s connectivity providers for Google’s Internet access.

iPhone 4 gets competition

When the iPhone came out it redefined what a smartphone is. The others scrambled to catch up, and now with Android they pretty much have. The iPhone 4 is not in a different league from its competitors the way the original iPhone was. So I have been trying to decide between the iPhone 4 and the EVO for a while. I didn’t look at the Droid X or the Samsung Galaxy S, either of which may be better in some ways than the EVO.

Each has stronger and weaker points, in both hardware and software. The Apple wins on the subtle user interface ingredients that add up to delight. It is a more polished user experience. Lots of little things. For example, I was looking at the clock applications. The Apple stopwatch has a lap feature and the Android doesn’t. I use the timer a lot; the Android timer copied the Apple look and feel almost exactly, but a little worse. It added a seconds display, which is good, but the spin-wheel to set the timer doesn’t wrap. To get from 59 seconds to 0 seconds you have to spin the display all the way back through. The whole idea of a clock is that it wraps, so this indicates that the Android clock programmer didn’t really understand time. Plus, when the timer is actually running, the Android cutely just animates the time-set display, while the Apple timer clears the screen and shows a countdown. This is debatable, but I think the Apple way is better: the countdown display is less cluttered, more readable, and more clearly in a “timer running” state.

The Android clock has a wonderful “desk clock” mode, which the iPhone lacks. I was delighted with the idea, especially the night mode, which dims the screen and lets you use it as a bedside clock. Unfortunately, when I came to actually use it, the hardware let the software down. Even in night mode the screen is uncomfortably bright, so I had to turn the phone face down on the bedside table.
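
Coming back to the timer’s spin-wheel: the wrapping behavior it lacks is one line of modular arithmetic. Here’s a sketch in plain Python (hypothetical widget logic, not the actual Android source):

```python
def spin(value, steps, modulus=60):
    """Move a clock-style wheel by `steps` positions, wrapping at `modulus`.

    A seconds wheel should behave like a clock face:
    one step up from 59 is 0, and one step down from 0 is 59.
    """
    return (value + steps) % modulus

assert spin(59, 1) == 0    # 59 -> 0 in a single step, no scrolling back
assert spin(0, -1) == 59   # 0 -> 59 in a single step
```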

The EVO wins on screen size. Its 4.3-inch screen is way better than the iPhone’s 3.5-inch screen. The iPhone’s “retina” display may look like a better specification, but the difference in image quality is imperceptible to my eye, and the greater size of the EVO screen is a compelling advantage.

The iPhone has far more apps, but there are some good ones on the Android that are missing on the iPhone, for example the amazing Wi-Fi Analyzer. On the other hand, this is also an example of the immaturity of the Android platform, since there is a bug in Android’s Wi-Fi support that makes the Wi-Fi Analyzer report out-of-date results. Other nice Android features are the voice search feature and the universal “back” button. Of course you can get the same voice search with the iPhone Google app, but the iPhone lacks a universal “back” button.

The GPS on the EVO blows away the GPS on the iPhone for accuracy and responsiveness. I experimented with the Google Maps app on each phone, walking up and down my street. Apple changed the GPS chip in this rev of the iPhone, going from an Infineon/GlobalLocate to a Broadcom/GlobalLocate. The EVO’s GPS is built-in to the Qualcomm transceiver chip. The superior performance may be a side effect of assistance from the CDMA radio network.

Incidentally, the GPS test revealed that the screens are equally horrible under bright sunshine.

The iPhone is smaller and thinner, though the smallness is partly a function of the smaller screen size.

The EVO has better WAN speed, thanks to the Clearwire WiMax network, but my data-heavy usage is mainly over Wi-Fi in my home, so that’s not a huge concern for me.

Battery life is an issue. I haven’t done proper tests, but I have noticed that the EVO seems to need charging more often than the iPhone.

Shutter lag is a major concern for me. On almost all digital cameras and phones I end up taking many photos of my shoes, because I put the camera back in my pocket after pressing the shutter button, assuming the photo got taken at that moment rather than half a second later. I just can’t get into the habit of standing still and waiting for a while after pressing the shutter button. The iPhone and the EVO are about even on this score, both sometimes taking an inordinately long time to respond to the shutter, presumably while auto-focusing. The pictures taken with the iPhone and the EVO look very different; the iPhone camera has a wider angle, but the picture quality of each is adequate for snapshots. On balance the iPhone photos appeal to my eye more than the EVO ones.

For me the antenna issue is significant. After dropping several calls I stuck some black electrical tape over the corner of the phone, which seems to have somewhat fixed it. Coverage inside my home in the middle of Dallas is horrible for both AT&T and Sprint.

The iPhone’s FM radio chip isn’t enabled, so I was pleased when I saw FM radio as a built-in app on the EVO, but disappointed when I fired it up and discovered that it needed a headset to be plugged in to act as an antenna. Modern FM chips should work with internal antennas. In any case, the killer app for FM radio is on the transmit side, so you can play music from your phone through your car stereo. Neither phone supports that yet.

So on the plus side, the EVO’s compelling advantage is the screen size. On the negative side, it is bulkier, the battery life is shorter, and the software experience isn’t quite so polished.

The bottom line is that the iPhone is no longer in a class of its own. The Android iClones are respectable alternatives.

It was a tough decision, but I ended up sticking with the iPhone.

Dual Mode Phone Trends Update 4

We are halfway through the year, so it’s time for another look at Wi-Fi phone certifications. Three things jump out this time: first, a leap in the number of Wi-Fi phone models in the second quarter of 2010; second, the arrival of 802.11n in handsets; and third, Samsung’s market-leading commitment to 802.11n. According to Rethink Wireless, “Samsung’s share of the smartphone market was only about 5% in Q1 but it aims to increase this to almost 15% by year end.” Samsung Wi-Fi-certified a total of 73 dual mode phones in the first six months of 2010, more than three times as many as second-place LG with 23. In the 11n category, Samsung’s lead was even more dominant: its 40 certifications were ten times as many as either of the second-place OEMs.

Here is a chart of dual mode phones certified with the Wi-Fi Alliance from 2008 to June 30th 2010. We usually do this chart stacked, but side-by-side gives a clearer comparison between feature phones and smart phones. Note that up to the middle of 2009 smart phones outpaced feature phones, but then the positions switched. This is a natural progression of Wi-Fi into the mass market, but it may also be exaggerated by a quirk of reporting: of HTC’s 17 certifications in the first half of 2010, only one was categorized as a smart phone.
Dual mode phones by quarter 2008-2010

The chart below shows the growth of 802.11n. It starts in January 2010 because only one 11n phone was certified in 2009, at the end of December. As you can see, the growth is strong. I anticipate that practically all new dual mode phone certifications will be for 802.11n by the end of 2010.

802.11n phones 2010 by month

Below is the same chart sliced by manufacturer instead of by month. The iPhone is missing because it wasn’t certified until July, and the iPad is missing because it’s not a phone. With only one 802.11n phone, Nokia has become a technology laggard, at least in this respect. The RIM Pearl 8100/8105 certifications are the only ones with STBC, an important feature for phones because it improves rate at distance. All the major chips (except those from TI) support STBC, so the phone OEMs must be either leaving it disabled or just not bothering to certify for it.

802.11n phones 2010 by manufacturer

Wi-Fi for Mice and Keyboards

A while back the Wi-Fi Alliance announced a new certification program, Wi-Fi Direct, which enables a PC to connect directly with other Wi-Fi devices without having to go through an Access Point.

The Wi-Fi certification process for Wi-Fi Direct is scheduled to launch by the end of 2010, but there are already two pre-standard implementations of this concept: My Wi-Fi, an Intel product that ships in Centrino 2 systems, and Wireless Hosted Network, which ships in all versions of Windows 7.

The Wi-Fi Direct driver makes a single Wi-Fi adapter on the PC look like two to the operating system: one ordinary one that associates with a regular Access Point, and a second acting as a “Virtual Access Point.” The virtual access point (Microsoft calls it a “SoftAP”) actually runs inside the Wi-Fi driver on the PC (labeled WPAN I/F in the Intel diagram below).

To the outside world the Wi-Fi adapter also looks like two devices, each with its own MAC address: one is the PC, just as it would appear without Wi-Fi Direct, and the other is an access point. Devices that associate with that access point join the PC’s PAN (Personal Area Network).
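
On Windows 7, this is the Wireless Hosted Network feature, driven by netsh. Here’s a minimal sketch in Python that shells out to those commands to bring the SoftAP up; the SSID and key are placeholders, it needs an elevated prompt, and the adapter’s driver has to support hosted networks.

```python
import subprocess

def start_softap(ssid, key):
    """Bring up the Windows 7 'Wireless Hosted Network' (the SoftAP).

    The same physical Wi-Fi adapter keeps its normal association with
    the infrastructure access point; this enables the second, virtual
    access point that other devices can join to form the PC's PAN.
    """
    subprocess.check_call(
        ["netsh", "wlan", "set", "hostednetwork",
         "mode=allow", f"ssid={ssid}", f"key={key}"])
    subprocess.check_call(["netsh", "wlan", "start", "hostednetwork"])

# Placeholder network name and passphrase, for illustration only.
start_softap("MyPersonalAreaNetwork", "a-strong-passphrase")
```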

This yields several benefits in various use cases.

I wrote a couple of years ago about how a company called Ozmo planned to use a Wi-Fi PAN to connect peripherals to PCs, replacing Bluetooth and proprietary wireless technologies. That plan has now come to fruition. Earlier this month Ozmo announced that it had received $10.8 million in additional funding, and this week it announced two major customers: Primax, a leading ODM of wireless mice, and NMB Technologies, a leading ODM of wireless keyboards.

Here’s a slide from one of their promotional presentations giving a comparison with Bluetooth and proprietary technologies:
Comparison of Ozmo's low power Wi-Fi technology with Bluetooth and proprietary solutions for Human Interface Devices (HIDs)

The essence of Ozmo’s approach is low cost, multi-device, low bandwidth and low power consumption. Wi-Fi Direct has another use case that is high bandwidth, with no requirement for low power.

If you want to stream video from your PC to a TV using traditional Wi-Fi (“infrastructure mode”), each packet goes from the PC to the access point, then from the access point to the TV, so every packet occupies the spectrum twice. Wi-Fi Direct effectively doubles the available throughput, since each packet flies through the ether only once, directly from the PC to the TV. But it actually does better than that. Supposing the PC and the TV are in the same room, but the access point is in a different room, the PC can transmit at much lower power. Another similar Wi-Fi Direct session can then happen in another room in the house. Without Wi-Fi Direct the two sessions would have to share the access point, taking turns to use the spectrum. So we get increased aggregate throughput both from halving the number of packet transmissions, and from allowing simultaneous use of the spectrum by multiple sessions (if they are far enough apart).
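
Here’s the arithmetic with made-up numbers, just to make the comparison concrete; 100 Mbps is an assumed useful air rate, not a measurement of any particular product.

```python
# Assumed useful over-the-air throughput in the room (illustrative only).
AIR_RATE_MBPS = 100

# Infrastructure mode: every packet crosses the air twice (PC -> AP, AP -> TV),
# so a single stream gets at most half the air time.
infrastructure_stream = AIR_RATE_MBPS / 2          # 50 Mbps

# Wi-Fi Direct: each packet crosses the air once.
direct_stream = AIR_RATE_MBPS                      # 100 Mbps

# Two sessions sharing one access point also share its air time;
# two direct sessions far enough apart can reuse the spectrum simultaneously.
two_streams_via_ap = infrastructure_stream / 2     # 25 Mbps each
two_streams_direct = direct_stream                 # 100 Mbps each

print(infrastructure_stream, direct_stream, two_streams_via_ap, two_streams_direct)
```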

A Wi-Fi buff would point out that you can already do all this with ad-hoc mode, but Wi-Fi Direct purports to be usable by mortals, and to work interoperably, neither of which could be said for ad-hoc mode until recently. In January Infinitec announced a new point-to-point video streaming product that claims to be easy to use and universally interoperable; Engadget implies it uses ad-hoc mode, though Google can’t find the words “ad hoc” anywhere on the Infinitec website.

Between the bandwidth extremes of mice and TVs, lie numerous other potential uses, like headsets (which Ozmo also supports); syncing phones, cameras and media players; and wireless printers.

Wi-Fi Ubiquity

ABI came out with a press release last week saying that 770 million Wi-Fi chips will ship in 2010. This is an amazing number. Where are they all going? Fortunately ABI included a bar-chart with this information in the press release. Here it is (click on it for a full-size view):

Wi-Fi chip shipments worldwide. Source: ABI

The y axis isn’t labeled, but the divisions appear to be roughly 200 million units.

This year shows roughly equal shipments going to phones, mobile PCs, and everything else. There is no category for Access Points, so presumably fewer of those are sold than “pure VoWi-Fi handsets.” I find this surprising, since I expect the category of pure VoWi-Fi handsets to remain moribund. Gigaset, which makes an excellent cordless handset for VoIP, stopped using Wi-Fi and went over to DECT because of DECT’s superior characteristics for this application.

There is also no listing for tablet PCs, a category set to boom; they must be subsumed under MIDs (Mobile Internet Devices).

The chart shows the portable music player category growing vigorously through 2015. iPod unit sales were down 8% year on year in 1Q10, and have been pretty much stagnant since 2007. ABI must be thinking that even with unit sales dropping, the attach rate of Wi-Fi will soar.

The category of “Computer Peripherals” will probably grow faster than ABI seems to anticipate. Wireless keyboards and mice use either Bluetooth or proprietary radios currently, but the new Wi-Fi alliance specification “Wi-Fi Direct” will change that. Ozmo is aiming to use Wi-Fi to improve battery life in mice and keyboards two to three-fold. Since all laptops, most all-in-one PCs and many regular desktops already have Wi-Fi built-in (that’s at least double the Bluetooth attach rate) this may be an attractive proposition for the makers (and purchasers) of wireless mice and keyboards. Booming sales of tablet PCs may further boost sales of wireless keyboards and mice.

HD Voice, Peering and ENUM

The most convenient route between telephone service providers is through the PSTN, since you can’t offer phone service without connecting to it. Because of this convenience telephone service providers tend to consider PSTN connectivity adequate, and don’t take the additional step of delivering IP connectivity. This is unfortunate because it inhibits the spread of high quality wideband (HD Voice) phone calls. For HD voice to happen, the two endpoints must be connected by an all-IP path, without the media stream crossing into the PSTN.

For example, OnSIP is my voice service provider. Any calls I make to another OnSIP subscriber complete in HD Voice (the G.722 codec), because I have provisioned my phones to prefer this codec. Calls I make to phone numbers (E.164 numbers) that don’t belong to OnSIP complete in narrowband (G.711), because OnSIP has to route them over the PSTN. If OnSIP were able to use an IP address for these calls instead of an E.164 number, it could avoid the PSTN and keep the call in G.722.
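
In practice, “prefer this codec” just means the order of payload types in the SDP offer my phones send: G.722 (static RTP payload type 9) listed ahead of G.711 mu-law (type 0) and A-law (type 8). Here’s a minimal sketch in Python of such an offer’s audio section; the address and port are placeholders, and a real offer carries more attributes than this.

```python
def sdp_audio_offer(ip, port):
    """Build the audio media section of an SDP offer that prefers G.722.

    Payload types are the standard static assignments:
    9 = G.722 (wideband), 0 = PCMU (G.711 mu-law), 8 = PCMA (G.711 A-law).
    Listing 9 first tells the far end we would rather talk wideband.
    """
    return "\r\n".join([
        f"m=audio {port} RTP/AVP 9 0 8",
        f"c=IN IP4 {ip}",
        "a=rtpmap:9 G722/8000",   # G.722 is registered at 8000 despite sampling at 16 kHz
        "a=rtpmap:0 PCMU/8000",
        "a=rtpmap:8 PCMA/8000",
    ]) + "\r\n"

print(sdp_audio_offer("192.0.2.10", 49170))   # placeholder address and port
```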

Xconnect has just announced an HD Voice Peering initiative, where multiple voice service providers share their numbers in a common directory called an ENUM directory. When a subscriber places a call, the service provider looks up the destination number in the ENUM directory; if it is there, the lookup returns a SIP address (URI) to substitute for the phone number, and the call can complete without going over the PSTN. About half the participants in the Xconnect trial go a step further than ENUM pooling: they interconnect (“peer”) their networks directly through an Xconnect router, so the traffic doesn’t need to traverse the public Internet. [See correction in comments below]
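
Under the hood an ENUM lookup is just a DNS query: reverse the digits of the E.164 number, join them with dots, append the directory’s zone, and read the NAPTR records that come back. Here’s a minimal sketch using Python and the dnspython package; the number is a placeholder, and I’m using the public e164.org zone discussed below rather than Xconnect’s private directory.

```python
import dns.resolver   # pip install dnspython (2.x; older versions use dns.resolver.query)

def enum_lookup(e164_number, zone="e164.org"):
    """Look up a phone number in an ENUM directory and return any SIP URIs.

    ENUM maps +15551234567 to the DNS name 7.6.5.4.3.2.1.5.5.5.1.e164.org
    and reads the NAPTR records found there.
    """
    digits = e164_number.lstrip("+")
    name = ".".join(reversed(digits)) + "." + zone
    uris = []
    for rr in dns.resolver.resolve(name, "NAPTR"):
        if b"sip" in rr.service.lower():
            # The regexp field looks like "!^.*$!sip:user@example.net!";
            # the URI is the middle part.
            uris.append(rr.regexp.decode().split("!")[2])
    return uris

# If this returns a URI, the call can stay on IP end to end (and in G.722);
# if not, the provider falls back to the PSTN and narrowband audio.
print(enum_lookup("+15551234567"))   # placeholder number
```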

There are other voice peering services that support this kind of HD connection, notably the VPF (Voice Peering Fabric). The VPF has an ENUM directory, but as the name suggests, it does not offer ENUM-only service; all the member companies interconnect their networks on a VPF router.

Some experts maintain that for business-grade call quality, it is essential to peer networks rather than route over the public Internet. Packets that traverse the public Internet are prone to delay and loss, while properly peered networks deliver packets quickly and reliably. In my experience, this has not been an issue. My access to OnSIP and to Vonage is over the public Internet, and I have never had any quality issues with either provider. From this I am inclined to conclude that explicit peering of voice networks is overkill, and that if you have a VoIP connection all that is needed for HD voice communication is to list your phone number in an ENUM directory. Presumably the voice service providers in Xconnect’s trial that are not peering share this opinion.

Xconnect’s ENUM directory is enormous, partly because it is pooled with Pathfinder, the GSMA ENUM directory administered by Neustar. It held over 120 million numbers as of 2007.

Xconnect and the VPF only add to their ENUM directories the numbers owned by their members. But even if you are not a customer of one of their members, you can still list your number in a public ENUM directory, e164.org. That way, anybody who checks the directory for your number can route the call over the Internet. Calls made this way don’t need to use SIP trunks, and they can complete in HD voice.

If you happen to have an Asterisk PBX, you can easily provision it to check a list of ENUM directories before it places a call.