Goodbye Privacy. Goodbye Open Internet?

The famous story of the pregnant teen outed to her father by Target epitomizes the power of big data and advanced psychometrics to wring potent conclusions from seemingly innocuous snippets of data. Andreas Weigend, the former chief data scientist at Amazon, has written a must-read book that dives deep into this topic. And an article in The Observer shows how these techniques are being applied to influence voters:

“With this, a computer can actually do psychology, it can predict and potentially control human behaviour. It’s what the scientologists try to do but much more powerful. It’s how you brainwash someone. It’s incredibly dangerous.

“It’s no exaggeration to say that minds can be changed. Behaviour can be predicted and controlled. I find it incredibly scary. I really do. Because nobody has really followed through on the possible consequences of all this. People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”

– Jonathan Rust, Director, Cambridge University Psychometric Centre.

So metadata about your internet behavior is valuable, and can be used against your interests. This data is abundant, and Target only has a drop in the ocean compared to Facebook, Google and a few others. Thanks to its multi-pronged approach (Search, AdSense, Analytics, DNS, Chrome and Android), Google has detailed numbers on everything you do on the internet, and because it reads all your (sent and received) Gmail it knows most of your private thoughts. Facebook has a similar scope of insight into your mind and motivations.

Ajit Pai, the new FCC chairman, was a commissioner last fall, when the privacy regulations that were repealed last week were instituted. He wrote a dissent arguing that applying internet privacy rules only to ISPs (Internet Service Providers, companies like AT&T or Comcast) was not only unfair but also ineffective at protecting your online privacy, since the “edge providers” (companies like Google, Facebook, Amazon and Netflix) are regulated by the FTC rather than the FCC, and are currently far more zealous and successful at destroying your privacy than the ISPs:

“The era of Big Data is here. The volume and extent of personal data that edge providers collect on a daily basis is staggering… Nothing in these rules will stop edge providers from harvesting and monetizing your data, whether it’s the websites you visit or the YouTube videos you watch or the emails you send or the search terms you enter on any of your devices.”

True as this is, it would be naive to expect now-chairman Pai to replace the repealed privacy regulations with something consistent with his concluding sentiment in that dissent:

“After all, as everyone acknowledges, consumers have a uniform expectation of privacy. They shouldn’t have to be network engineers to understand who is collecting their data. And they shouldn’t need law degrees to determine whether their information is protected.”

So it’s not as though your online privacy was formerly protected and now it’s not. It just means the ISPs can now compete with Google and Facebook to sell details of your activity on the internet. There are still regulations requiring that the data they sell be aggregated and anonymized, but experiments have shown effective anonymization to be surprisingly difficult.
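Why is it so difficult? Because a handful of quasi-identifiers, each innocuous on its own, can pin down an individual when joined against another dataset that carries names; Latanya Sweeney famously showed that ZIP code, birth date and sex alone uniquely identify most Americans. Here is a toy sketch of such a linkage attack in Python, with all names and data invented:

```python
# Toy "linkage attack": re-identify an anonymized record by joining its
# quasi-identifiers (ZIP code, birth date, sex) against a public roster.
# All names and data are invented.

anonymized_browsing = [
    # (zip, birthdate, sex, sites_visited) -- name removed, so "anonymous"
    ("02139", "1985-07-14", "F", ["diapers.example", "cribs.example"]),
    ("02139", "1990-01-02", "M", ["golf.example"]),
]

public_roster = [
    # e.g. a voter roll: name plus the same three quasi-identifiers
    ("Alice Smith", "02139", "1985-07-14", "F"),
    ("Bob Jones",   "02139", "1990-01-02", "M"),
]

# Index the roster by quasi-identifier triple; a unique match means the
# "anonymous" browsing record is re-identified.
index = {(z, b, s): name for name, z, b, s in public_roster}
for z, b, s, sites in anonymized_browsing:
    name = index.get((z, b, s))
    if name:
        print(f"{name} visited {sites}")
```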

If you don’t like playing the patsy, it’s possible to fight a rearguard action by using cookie-blockers, VPNs and encryption, but such measures look ever more Canute-like. Maybe those who tell us to abandon the illusion that there is such a thing as privacy are right.

So, after privacy, what’s the next thing you could lose?

Goodbye Open Internet?

Last week’s legislation was a baby step towards what the big ISPs would like, which is to ‘own’ all the data that they pipe to you, charging content providers differentially for bandwidth, and filtering and modifying content (for example by inserting or substituting ads in web pages and emails). They are currently forbidden from doing this by the FCC’s 2015 net neutrality regulations.

So the Net Neutrality controversy is back on the front burner. Net neutrality is a free-market issue, but not in the way that those opposed to it believe; the romantic notion of a past golden age of an internet free of government intrusion is hogwash. The consumer internet would never have happened without the common carrier regulations that allowed consumers to attach modems to their phone lines. AT&T fought tooth and nail against those regulations, wanting instead to control the data services themselves, along the lines of the British Prestel service. If AT&T had won that battle with the regulators, the internet would have remained an academic backwater. Not only was the internet founded under government sponsorship, but it owes its current vibrant and innovative character to strongly enforced government regulation:

“Without Part 68, users of the public switched network would not have been able to connect their computers and modems to the network, and it is likely that the Internet would have been unable to develop.”

For almost all US consumers, internet access is limited to a duopoly (telco and cableco). Internet content services, on the other hand, participate in an open market teeming with competition (albeit with Google and Facebook holding near-monopolies in their domains). This is thanks to the net neutrality regulations that bind the ISPs:

  • No Blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.
  • No Throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
  • No Paid Prioritization: broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration of any kind — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing content and services of their affiliates.

If unregulated, ISPs will be compelled by structural incentives to do all these things and more, as explained by the FCC:

“Broadband providers function as gatekeepers for both their end user customers who access the Internet, and for edge providers attempting to reach the broadband provider’s end-user subscribers. Broadband providers (including mobile broadband providers) have the economic incentives and technical ability to engage in practices that pose a threat to Internet openness by harming other network providers, edge providers, and end users.”

It’s not a simple issue. ISPs must have robust revenues so they can afford to upgrade their networks; but freedom to prioritize, throttle and block isn’t the right solution. Without regulation, internet innovation suffers. Instead of an open market for internet startups, gatekeepers like AT&T and Comcast pick winners and losers.

Net neutrality simply means an open market for internet content. Let’s keep it that way!

FCC Title II Ruling

I got an email from the Heartland Institute today, purporting to give an expert opinion about today’s Net Neutrality ruling. The money quote reads: “The Internet is not broken, it is a vibrant, continually growing market that has thrived due to the lack of regulations that Title II will now infest upon it.”

This is wrong both on Internet history and on the current state of broadband in the US.

It was the common carriage regulatory requirement on voice lines that first enabled the Internet to explode into the consumer world, by obliging the phone companies to allow consumers to hook up modems. It is the current unregulated environment in the US that has caused our Internet to become, if not broken, at least considerably worse than it is in many other countries:

America currently ranks thirty-first in the world in terms of average download speeds and forty-second in average upload speeds, according to a recent study by Ookla Speedtest. Consumers pay much more for Internet access in the U.S. than in many other countries.

Net Neutrality, Congestion, DRM

Videos burn up a lot more bandwidth than written words, per hour of entertainment. The Encyclopedia Britannica is 0.3 GB in size, uncompressed. The movie Despicable Me is 1.2 GB, compressed. Consequently we should not be surprised that most Internet traffic is video traffic:

The main source of the video traffic is Netflix, followed by YouTube:
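A back-of-envelope calculation shows why video dominates. The movie’s running time (~95 minutes) and the reading-rate figures below are my own rough assumptions, not from the sources above:

```python
# Rough bits-per-hour-of-entertainment comparison, using the sizes quoted
# above. Running time and reading speed are assumed values.
GIGABYTE_BITS = 8e9

movie_bits = 1.2 * GIGABYTE_BITS   # Despicable Me, compressed
movie_hours = 95 / 60              # assumed ~95-minute running time
movie_mbps = movie_bits / (movie_hours * 3600) / 1e6
print(f"Streaming the movie: ~{movie_mbps:.1f} Mbit/s sustained")       # ~1.7

# Reading text at an assumed 250 words/minute, ~6 bytes per word:
reading_bps = 250 * 6 * 8 / 60
print(f"Reading text: ~{reading_bps:.0f} bit/s")                        # ~200

print(f"Video-to-text ratio: ~{movie_mbps * 1e6 / reading_bps:,.0f}x")  # ~8,400x
```

At roughly four orders of magnitude more bits per hour, even a modest shift of audience time from reading to video swamps everything else on the wire.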

Internet Service Providers would like to double-dip, charging you for your Internet connection, and also charging Netflix (which already pays a different ISP for its connection) for delivering its content to you. And they do.

To motivate content providers like Netflix to pay extra, ISPs that don’t care about their subscribers could hold those providers to ransom, using network congestion to make Netflix movies look choppy, blocky and freezy until Netflix coughs up. And they do:


This example illustrates the motivation structure of the industry. Bandwidth demand is continuously growing. An ISP can cope with that growth in two basic ways: increase capacity, or ration the bandwidth it already has. The Internet core is sufficiently competitive that its capacity grows by leaps and bounds. The last mile to the consumer is far less competitive, so the ISP has little motivation to upgrade its equipment. It can simply prioritize packets from Netflix and whoever else is prepared to pay the toll, and let the rest drop undelivered.
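Here is a toy simulation of that rationing strategy, with invented numbers: a strict-priority scheduler on a link whose offered load exceeds its capacity. Prioritized traffic sails through; best-effort traffic absorbs every drop:

```python
from collections import deque

LINK_CAPACITY = 100   # packets the link can transmit per tick
QUEUE_LIMIT = 200     # buffer space for best-effort packets

paid = deque()
best_effort = deque()
sent = {"paid": 0, "best_effort": 0}
dropped = 0

for tick in range(1000):
    # Offered load exceeds capacity: 60 paid + 80 best-effort per tick.
    paid.extend(["p"] * 60)
    for _ in range(80):
        if len(best_effort) < QUEUE_LIMIT:
            best_effort.append("b")
        else:
            dropped += 1  # buffer full: the packet is silently discarded

    # Strict priority: drain paid traffic first; best effort gets leftovers.
    budget = LINK_CAPACITY
    while budget and paid:
        paid.popleft(); sent["paid"] += 1; budget -= 1
    while budget and best_effort:
        best_effort.popleft(); sent["best_effort"] += 1; budget -= 1

print(sent, "dropped:", dropped)
# All 60,000 paid packets get through; best effort loses about half of
# its 80,000 -- without the ISP laying a single new fiber.
```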

One might expect customers to complain if this was happening in a widespread way. And they do:

Free market competition might be a better answer to this particular issue than regulation, except that this problem isn’t really amenable to competition; you need a physical connection (fiber ideally) for the next generation of awesome immersive Internet. Running a network pipe to the home is expensive, like running a gas pipe, or a water pipe, or a sewer, or an electricity supply cable, or a road; so like all of those instances, it is a natural monopoly. Natural monopolies work best when strongly regulated, and the proposed FCC Title II action on Net Neutrality is a good start.

Digital Rights Management

Unrelated but easily confused with Net Neutrality is the issue of copyright protection. The Stop Online Piracy Act, or SOPA, was defeated by popular outcry for being too expansive. The remedies proposed by SOPA were to take down websites hosting illegal content, and to oblige ISPs to block illegal content from their networks.

You might have noticed in the first graphic above that about 3% of what consumers consume (“Downstream”) online is “filesharing,” a.k.a. music and video piracy. It is pretty much incontrovertible that the Internet has devastated the music business. One might debate whether it was piracy or iTunes that did it in, but either way the fact of Internet piracy gave Steve Jobs a lot of leverage in his negotiations with the music industry. What’s to prevent a similar disembowelment of the movie industry, when a consumer in Dallas can watch a movie like “Annie” for free in his home theater before it has even been released?

The studio that distributes the movie would like to make sure you pay for seeing it, and don’t get a pirated copy. I think so too. This is a perfectly reasonable position to take, and if the studio was also your ISP, it might feel justified in blocking suspicious content. In the US it is not unusual for the studio to be your ISP (for example if your ISP is Comcast and the movie is Despicable Me). In a non-net-neutral world an ISP could block content unilaterally. But Net Neutrality says that an ISP can’t discriminate between packets based on content or origin. So in a net-neutral world, an ISP would be obliged to deliver pirated content, even when one of its own corporate divisions was getting ripped off.

This dilemma is analogous to free speech. The civilized world recognizes that in order to be free ourselves, we have to put up with some repulsive speech from other people. The alternative is censorship: empowering some bureaucrat to silence people who say unacceptable things. Enlightened states don’t like to go there, because they don’t trust anybody to define what’s acceptable. Similarly, it would be tough to empower ISPs to suppress content in a non-arbitrary but still timely way, especially when the content is encrypted and the source is obfuscated. Opposing Net Neutrality on the grounds of copyright protection is using the wrong tool for the job. It would be much better to find an alternative solution to piracy.

Actually, maybe we have. The retail world has “shrinkage” of about 1.5%. The credit card industry remains massively profitable even while factoring in a provision for fraud at about 3% of customers compromised.

Total Existing Card Fraud Losses and Incidence Rate by Year. Source: LexisNexis.

“Filesharing” at 3% of download volume seems manageable in that context, especially since it has trended down from 10% in 2011.

ALA Troubled by Court’s Net Neutrality Decision

My thoughts on network neutrality can be found here and some predictions contingent on its loss here, so obviously I am disheartened by this latest ruling. The top Google hit on this news is currently a good story at GigaOm, and further down Google’s hit list is a thoughtful article in Forbes, predicting this result, but coming to the wrong conclusion.

I am habitually skeptical of “slippery slope” arguments, where we are supposed to fear something that might happen, but hasn’t yet. So I sympathize with pro-ISP sentiments like that Forbes article in this regard. On the other hand, I view businesses as tending to be rational actors, maximizing their profits under the rules of the game. If the rules of the game incent the ISPs to move in a particular direction, they will tend to move in that direction. Because competition is so limited among broadband ISPs (for any home in America there are rarely more than two options, regardless of the actual number of ISPs in the nation), they are currently incented to ration their bandwidth rather than to invest in increasing it. This decision is a push in that same direction.

Arguably the Internet was born of Federal action that forced a corporation to do something it didn’t want to do: without the Carterfone decision, there would have been no modems in the US. Without modems, the Internet would never have gotten off the ground.

Arguments that government regulation could stifle the Internet miss the point that all business activity in the US is done under government rules of various kinds: without those rules, competitive market capitalism could not work. So the debate is not over whether the government should ‘interfere,’ but over what kinds of interference the government should engage in, and with what motivations. I take the liberal view that a primary role of government is to protect citizens from exploitation by predators. I am an enthusiastic advocate of competitive-market capitalism too, where it can exist.

The structure of capitalism pushes corporations to charge as much as possible and provide as little as possible for the money (‘maximize profit’). In a competitive market, the counter-force is competition: customers can get better, cheaper service elsewhere, or forgo the service without harm. But because of the local lack of competition, broadband in the US is not a competitive market, so there is no counter-force. And since few would argue that you can live effectively in today’s US without access to the Internet, you can’t forgo service without harm.

The current rules of the broadband game in the US have moved us to a pathetically lagging position internationally, so it seems reasonable to change them. Unfortunately, this latest court decision changes them in the wrong direction, freeing ISPs to ration and charge more for connectivity rather than encouraging them to invest in bandwidth. If you agree that this is a bad thing, you can do some token venting here: http://act.freepress.net/sign/internet_FCC_court_decision2/

Here is a press release from an organization that few people could find fault with.

Gesture recognition in smartphones

This piece from the Aberdeen Group shows accelerometers and gyroscopes becoming universal in smartphones by 2018.

Accelerometers were exotic in smartphones when the first iPhone came out – used mainly to sense the phone’s orientation for displaying portrait or landscape mode. Then came the idea of using them for dead-reckoning assist in location sensing. iPhones have always had accelerometers, and since all current smartphones are basically copies of the original iPhone, it is actually odd that some smartphones still lack them.

Predictably, when supplied with a hardware feature, the app developer community came up with a ton of creative uses for the accelerometer: magic tricks, pedometers, air-mice, and even user authentication based on waving the phone around.
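The pedometer is the simplest of these to sketch. A toy version (Python, with invented data) counts a step on each upward crossing of a threshold by the acceleration magnitude; real pedometers low-pass filter the signal and adapt the threshold per user:

```python
import math

def count_steps(samples, threshold=11.0):
    """Count steps in a stream of (ax, ay, az) accelerometer samples (m/s^2).

    A step is registered on each upward crossing of `threshold` by the
    acceleration magnitude -- the crudest possible pedometer.
    """
    steps, above = 0, False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and not above:
            steps += 1  # rising edge: one footfall
            above = True
        elif magnitude < threshold:
            above = False
    return steps

# Invented data: gravity (~9.8) with a jolt on every fourth sample.
walk = ([(0, 0, 9.8)] * 3 + [(0, 0, 13.0)]) * 10
print(count_steps(walk))  # -> 10
```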

Not all sensor technologies are so fertile. For example the proximity sensor is still pretty much only used to dim the screen and disable the touch sensing when you hold the phone to your ear or put it in your pocket.

So what about the user-facing camera? Is it a one-trick pony like the proximity sensor, or a springboard to innovation like the accelerometer? Although videophoning has been a perennial bust, I would argue for the latter: the you-facing camera is pregnant with possibilities as a sensor.

Looking at the Aberdeen report, I was curious to see “gesture recognition” on a list of features that will appear on 60% of phones by 2018. The others on the list are hardware features, but once you have a camera, gesture recognition is just a matter of software. (The Kinect is a sidetrack to this, provoked by lack of compute power.)
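As a taste of how little software it takes, here is a crude left/right swipe detector built on nothing but frame differencing, using OpenCV and NumPy (my choice of tools, not anything the report specifies). Production gesture recognizers are vastly more robust:

```python
import cv2
import numpy as np

# Crude swipe detector: difference successive webcam frames, threshold the
# change, and track the horizontal centroid of the moving pixels. If the
# centroid travels a third of the frame width, call it a swipe.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
track = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, motion = cv2.threshold(cv2.absdiff(gray, prev), 30, 255,
                              cv2.THRESH_BINARY)
    prev = gray
    ys, xs = np.nonzero(motion)
    if len(xs) > 500:             # enough moving pixels to care about
        track.append(xs.mean())   # horizontal centroid of the motion
        travel = track[-1] - track[0]
        if abs(travel) > gray.shape[1] / 3:
            print("swipe right" if travel > 0 else "swipe left")
            track.clear()
        elif len(track) > 10:
            track.pop(0)          # sliding window over recent centroids
    else:
        track.clear()             # motion stopped: reset the gesture
```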

In a phone, compute power means battery drain, so that’s a limitation on using the camera as a sensor. But each generation of chips becomes more power-efficient as well as more powerful, and as phone makers add more and more GPU cores, the developer community finds useful new ways to max them out.

Gesture recognition is already here with Samsung, and will soon be on every Android device. The industry is gearing up for innovation in phone-based computer vision with OpenVX from Khronos. When always-on computer vision becomes feasible from a power-drain point of view, gesture recognition and face tracking will look like baby steps. Smart developers will come up with killer applications that are currently unimaginable. For example, how about a library implementation of Paul Ekman’s emotion recognition algorithms to let you know how you are really feeling right now? Or, in concert with Google Glass, one that ensures you will never again be oblivious to your spouse’s emotional temperature?
Update November 19th: Here’s some news and a little bit of technology background on this topic…
Update November 22: It looks like a company is already engaged on the emotion-recognition technology.

Upcoming WebRTC World Conference

Last month, WebRTC was included in Firefox for Android. It has been available for a while in the desktop Chrome, Firefox, and Opera browsers. Justin Uberti of Google claims that this adds up to a billion devices running WebRTC.

To get up to speed on the challenges and opportunities of WebRTC, and the future of real-time voice/video communications, Wirevolution’s Charlie Gold will be attending the WebRTC World Conference next month and writing about it here. The conference runs Nov 19-21 at the Convention Center in Santa Clara, CA.

At the conference we hope to make sense of the wide range of applications integrating WebRTC, and to relate them to integration opportunities for service providers. The applications range from standard video-enabled apps such as webcasting and security, to enterprise applications such as CRM and contact centers, to emerging opportunities such as virtual reality and gaming. Service providers can combine WebRTC with IMS and RCS, and can use it to manage network capabilities and end users’ quality of experience.

The conference is organized around four tracks: developer workshops, B2C, enterprise, and service providers.

  • The developer workshop topics include concepts and structures of WebRTC, implementation and build options, standardization efforts, signaling options, applications to the Internet of Things (IoT), codec evolution, monitoring and alarms, firewall traversal with STUN/TURN/ICE, large-scale simultaneous delivery in applications such as webcasting, gaming and virtual reality (VR), and security.
  • Business and consumer applications sessions cover successful deployment strategies for use cases like collaboration and conferencing, call centers and the Internet of Things (IoT). Other sessions on this track cover security, device requirements and regulatory issues.
  • Service provider workshops include IMS value in a world of WebRTC, how to use WebRTC, deployment strategies, how to extend existing services and offer new services using WebRTC, using WebRTC to acquire new users, and understanding the network impact of WebRTC.
  • The enterprise track has additional sessions on integrating WebRTC into your contact center and websites (public, supplier, internal). These sessions cover details like mapping out your integration strategy between WebRTC and SIP, choosing between a media server and direct media interoperation, and how to deploy a WebRTC portal.

Keynotes will be from Ericsson, Alcatel-Lucent, Mozilla, Genband, Mavenir, Radisys, CafeX and presumably others.

To round it out, there will be a plethora of special workshops, real-time demos, panels and round tables.

With the momentum of WebRTC growing by leaps and bounds, we are looking forward to attending and sharing more on WebRTC next month.

Multipath ambiguity

In Wi-Fi, multipath used to mean the way that signals traversing different spatial paths arrive at different times. Before MIMO this was considered an impairment to the signal, but with multiple antennas this ‘spatial diversity’ is exploited to deliver the huge speed increases of 802.11n over 802.11g.

This week has seen a big PR push on a different “multipath,” embodied in IETF RFC 6824, “TCP Extensions for Multipath Operation with Multiple Addresses,” (MPTCP). Of course you know that one of the robustness features of IP is that it delivers packets over different routes, but RFC 6824 takes it to another level, as described in this article in Ars Technica, and this one at MIT Technology Review.
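To application code, multipath TCP is designed to look like ordinary TCP; the kernel manages the extra subflows (say, Wi-Fi and cellular at once). A minimal sketch, assuming a Linux kernel with MPTCP enabled and Python 3.10 or later (which defines socket.IPPROTO_MPTCP); the handshake falls back to plain TCP if the far end doesn’t speak MPTCP:

```python
import socket

# Open an MPTCP socket: identical to TCP except for the protocol number.
# On an MPTCP-capable kernel, additional subflows can be added to the
# connection transparently; if the server doesn't support MPTCP, the
# connection degrades gracefully to ordinary TCP.
proto = getattr(socket, "IPPROTO_MPTCP", socket.IPPROTO_TCP)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM, proto) as s:
    s.connect(("example.com", 80))
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n"
              b"Connection: close\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))
```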