Network Neutrality: History, Regulation and Future

Author: Christopher T. Marsden
Position: Senior Lecturer in Law, University of Essex (2007-)
Pages: 91-108


1. History: Trust-to-Trust and Control of Communications

Network neutrality is the latest phase of an eternal argument over control of communications media. The Internet was held out by early legal and technical analysts to be special, due to its decentred construction, separating it from earlier “technologies of freedom” (de Sola Pool, 1983) including radio and the telegraph. Spar (2001) argues that control is a historical evolutionary step in communications media development, while Wu (2010), following Lessig (1999), argues that closure need not be an inevitable outcome.

The Internet had never been subject to regulation beyond that needed for interoperability and competition, building on the Computer I and II inquiries by the Federal Communications Commission (FCC) in the United States (Werbach, 2005), and on the end-to-end (E2E) design principle first described by Saltzer, Reed and Clark (1984). That principle was itself bypassed by the need for greater trust and reliability in the emerging broadband network by the late 1990s, particularly as spam email gave rise to viruses, botnets and other risks. As a result, E2E has gradually given way to trust-to-trust mechanisms, in which receipt of the message by one party’s trusted agent replaces receipt by the final receiver (Clark/Blumenthal, 2011). This agent is almost always the Internet Service Provider (ISP), and it is regulation of this party which is at stake in net neutrality. ISPs not only remove spam and other hazardous material before it reaches the (largely technically uneducated) subscriber; they can also remove other potentially illegal material on behalf of governments and copyright holders (the two most active censors on the Internet), as well as prioritising packets for their own benefit. As a result, the E2E principle would be threatened were it not already moribund.

The legal policy and regulatory implications of rapidly standardising innovation in the communications ecology were well understood by Benkler, who was concerned with the need to maintain interoperability and openness to ensure a ‘commons’ in which unaffiliated and non-commercial innovation could flourish (Benkler, 1998a, 1998b). The Internet’s core values of openness and democracy have been established by accident as well as design. Noam (2008) states: “There is nothing especially new about [media law’s] recent round – net neutrality – as a conceptual issue, or in terms of its policy options, except for the terminology”. Benkler (1998a) and Lemley and McGowan (1998) have argued that, though network effects may tend towards closure of the network, regulatory scrutiny is not the only route to greater openness.

It is not novel to claim that protocols regulate user behaviour on the Internet (“Code is law”, as Lessig [1999a] put it), but legal commitment to freedom of speech means that law can regulate the Internet by enforcing conditions to enable free speech. As Wu (2003a) explains, laws can regulate the Internet as surely as vice versa, and with more constitutional authority if less technical virtuosity (Mayer-Schonberger, 2008; Reidenberg, 2005). By 1998, the innovation-control argument hinged on Microsoft’s leveraging of its operating system monopoly into browser and video software; by 2000 this had led to scrutiny of AOL-Time Warner, notably the potential for foreclosure of Instant Messaging and video (Faulhaber, 2002), and of cable-telephony horizontal mergers such as that between AT&T and MediaOne (Lemley and Lessig, 1999). The debate then moved on to control over WiFi, an unlicensed spectrum technology capable of providing local area network connectivity and of loosening the control over end-users exerted by fixed and wireless ISPs (Croxford and Marsden, 2001). Net neutrality as a description was first applied to the debate about Internet traffic management practices (ITMP), or Quality of Service on the Internet, in 2003 (Lessig and Wu, 2003; Wu, 2003b), though the debate began when academics feared that cable TV’s closed business model would overtake the open Internet in 1999 (Lemley and Lessig, 1999; Lessig, 1999a, 1999b).

Initial treatment of network neutrality discussed ensuring four ‘Net Freedoms’ (FCC, 2005) for end-users: freedom to attach devices, run applications, receive the content packets of their choice and to receive “Service Plan Information... meaningful information” (see the section on transparency). Even in 2011, scholars are suggesting that freedom to innovate can be squared with design prohibitions (van Schewick, 2010), despite over a decade of multi-billion dollar protocol development by the ISP community resulting in the ability to control traffic coming onto their networks (Waclawsky, 2005), and wholesale rationing of end-user traffic. Berners-Lee (2006) explained: “There have been suggestions that we don't need legislation because we haven't had it. These are nonsense, because in fact we have had net neutrality in the past - it is only recently that real explicit threats have occurred.” Berners-Lee was particularly adamant that he did not wish to see Quality of Service (QoS) prohibited, because that is precisely the prohibition demanded by some US net neutrality advocates – and opposed by the network engineering community.

1.1. History: Definition and Development

Net neutrality may be seen to comprise two separate non-discrimination commitments (Marsden, 2010a), one of universal service and another of common carriage. Backward-looking ‘net neutrality lite’ claims that Internet users should not be disadvantaged due to opaque and invidious practices by their current Internet Service Provider – the company providing the Internet connection into their home. The argument is that a minimum level of service should be provided which offers open Internet access without blocking or degrading specific applications or protocols – what has been described as an updated form of universal service (Mueller, 1998), generally proposed at 2Mbps. That provides a basic level of service which all subscribers should eventually receive.

Forward-looking ‘positive net neutrality’ describes a practice whereby higher QoS at higher prices should be offered on fair, reasonable and non-discriminatory (FRAND) terms to all comers, a modern equivalent of common carriage (Noam, 1994). It is a more debatable principle, with many content providers and carriers preferring exclusive arrangements. The type of service which may be entitled to FRAND treatment could result in short-term exclusivity in itself: for instance, wireless/mobile cell towers may only be able to carry a single high-definition video stream at any one point in time, and therefore a monopoly may result. As common carriage dictates terms but not the specific market conditions, transparency and non-discrimination would not automatically result in a plurality of services. I argue against social or economic justifications either for barring any proprietary high-speed traffic at all, or for strict versions of net neutrality that would not allow any traffic prioritisation. There is too much at stake either to expect government to supplant the market in providing higher-speed connections, or to expect the market to continue to deliver openness without the most basic of policy and regulatory backstops to ensure some growth (Meisel, 2010, p. 20).

The net neutrality problem is complex and far-reaching: attempts to dismiss it as a problem that can be overcome by local loop (last mile) telecoms competition (Cave et al., 2009; Renda, 2008) do not fully acknowledge persistent problems with market failure. The physical delivery of Internet to consumers is subject to a wide range of bottlenecks, not simply in the last mile to the end-user. There is little ‘middle mile’ (backhaul) competition in fixed ISP markets, even in Europe where the commitment to regulation for competition remains, as wholesale backhaul is provided by the incumbent privatised national telecoms provider (in the UK, British Telecom). Even if platforms did compete, in heavily cabled countries for instance, there would remain ‘n-sided’ market problems in that there is no necessary direct (even non-contractual) relationship between innovative application providers and ISPs (Economides and Tåg, 2007), so that platforms may set rules to ‘tax’ data packets that ultimately impoverish the open innovation value chain, ultimately causing consumer harm. Thus, archetypal garage start-ups such as Facebook (founded 2004) and YouTube (founded 2005) would have had less opportunity to spread ‘virally’ across the Internet, as their services would have been subject to these extra costs. Many commercial content providers, such as Google, use content delivery networks and other caching mechanisms to accelerate the speed of delivery to users, in essence reducing the number of these ‘hops’. Content is therefore already delivered at different speeds, depending on the priority the content provider pays for rather than on the ISPs’ policies.


1.2. History: How Traffic Management Has Changed Common Carriage

Network congestion and lack of bandwidth at peak times is a feature of the Internet, and has always existed. That is why video over the Internet was simply unfeasible until the late 1990s, why Voice over the Internet has patchy quality, and why engineers have been trying to create higher QoS. E2E is a two-edged sword, with the advantages of openness and a dumb network, and the disadvantages of congestion, jitter and, ultimately, a slowing rate of progress for high-end applications such as high definition video. Those disadvantages have led some to propose zoning the network via QoS, and in this the debate has obvious parallels with ‘common carriage’. Common carriers who claim on the one hand the benefits of rights of way and other privileges, yet on the other practise traffic management for profit rather than network integrity, are trying to have their cake and eat it (Frieden, 2010b). It is worth stating what common carriage is not. It is not a flat rate for all packets, nor necessarily a flat rate for all packets of a certain size. It is, rather, a mediaeval non-discrimination bargain between sovereign and transport network or facility, in which an exchange is made: in return for accepting the obligations of classification as a common carrier, these private actors are granted rights and benefits that an ordinary private carrier would not receive. As Cherry (2006) has written, common carriers are not a solution to a competition problem; they far predate competition law. Common carriage prevents discrimination between the same traffic types: if I offer you transport of your high definition video stream of a certain protocol, then the next customer could demand the same, subject to capacity, were the Internet subject to common carriage.

New technology lets ISP routers (if so equipped) look inside a data packet to ‘see’ its content, via what is known as deep packet inspection (DPI) and other techniques. Previous routers were not powerful enough to conduct more than a shallow inspection, which simply established the header information – the equivalent of the postal address on the packet. An ISP can use DPI to determine whether a data packet would benefit from high-speed transport – as a television stream does in requiring a dedicated broadcast channel – and offer higher-speed dedicated capacity to that content, typically real-time-dependent content such as television, movies or telephone calls using VOIP. Most voice calls and video today use a dedicated line, your copper telephone line or cable line; tomorrow they may use dedicated high-speed lanes on your Internet connection.
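The difference between shallow (header-only) and deep inspection can be sketched in a few lines of Python. This is an illustrative toy, not production DPI: the packet below is a hand-built IPv4 example, and the application signatures are deliberately simplistic.

```python
import struct

def shallow_inspect(packet: bytes) -> dict:
    """Shallow inspection: read only the IPv4 header -- the 'postal address'."""
    version_ihl, tos, total_len = struct.unpack("!BBH", packet[:4])
    proto = packet[9]                          # transport protocol (6 = TCP)
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    return {"src": src, "dst": dst, "protocol": proto, "length": total_len}

def deep_inspect(packet: bytes) -> str:
    """Deep inspection: look past the IP header into the bytes that follow.

    A real DPI engine would also parse the TCP/UDP header and maintain
    per-flow state; here we just pattern-match on what follows the header."""
    ihl = (packet[0] & 0x0F) * 4               # IPv4 header length in bytes
    payload = packet[ihl:]
    if payload.startswith(b"GET ") or payload.startswith(b"POST"):
        return "http"                          # application-layer signature match
    if b"BitTorrent protocol" in payload:
        return "bittorrent"
    return "unknown"
```

A shallow router can route and meter on the header fields alone; only the second function needs the processing power to scan every payload byte, which is why DPI arrived with faster router hardware.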

That could make good business for ISPs that wish to offer higher capability via DPI (not all ISPs will do so, and it is quite possible to manage traffic less obtrusively by using the DiffServ protocol to prioritise traffic streams within the same Internet channel). Waclawsky (2005) stated, “This is the emerging, consensus view: [it] will let broadband industry vendors and operators put a control layer and a cash register over the Internet and creatively charge for it”.
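The less obtrusive DiffServ approach mentioned above can be sketched in Python: an application (or an ISP edge device) marks outgoing packets with a DiffServ Code Point via the IP TOS byte, and routers along the path may then queue marked traffic differently. The class values are from the DiffServ RFCs (2474, 3246); whether any given network honours the marks is an operator decision, and the socket option shown assumes a Linux host.

```python
import socket

# DiffServ code points (6-bit DSCP values, RFC 2474 / RFC 3246)
DSCP_EF = 46           # Expedited Forwarding: low-loss, low-latency (e.g. VoIP)
DSCP_BEST_EFFORT = 0   # default class

def mark_socket(sock: socket.socket, dscp: int) -> None:
    """Set the DSCP bits on outgoing packets via the IP TOS byte.

    The DSCP occupies the top 6 bits of the 8-bit TOS field, so it is
    shifted left by 2 (the low 2 bits are used for ECN)."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
```

Marking says nothing about enforcement: a DiffServ-aware router gives EF-marked packets a priority queue, while a neutral (or hostile) network can simply ignore or rewrite the field, which is why marking alone does not settle the neutrality question.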

DPI and other techniques that let ISPs prioritise content also allow them to slow down other content, as well as speed up content for those that pay (and for emergency communications and other ‘good’ packets). This potentially threatens the business of companies that compete with that content: Skype offers VOIP using normal Internet speeds; uTorrent and BBC’s iPlayer offer video using peer-to-peer (P2P) protocols. Encryption is common in these applications and partially successful in overcoming these ISP controls, but, even if all users and applications used strong encryption, this would not succeed in overcoming decisions by ISPs simply to route known premium traffic to a ‘faster lane’, consigning all other traffic to a slower non-priority lane (a policy explanation simplifying a complex engineering decision). P2P is designed to make the most efficient use of congested networks, and its proponents claim that with sufficient deployment, P2P could largely overcome congestion problems.

Traffic management techniques affect not only high-speed, high-money content, but by extension all other content too. You can only build a high-speed lane on a motorway by creating inequality, and often these ‘improvement works’ slow down everyone currently using the roads. The Internet may be different in that regulators and users may tolerate much more discrimination in the interests of innovation. To make this decision on an informed basis, it is in the public interest to investigate transparently both net neutrality ‘lite’ (the slow lanes) and net neutrality ‘heavy’ (what rules allow higher speed content). For instance, in the absence of oversight, ISPs could use DPI to block some content altogether, if they decide it is not to the benefit of ISPs, copyright holders, parents or the government. ISP blocking is currently widespread in controlling spam email, and in some countries in blocking sexually graphic illegal images.

One of the main claims by ISPs wishing to traffic-manage the Internet is that Internet traffic growth is unmanageable by traditional means of expansion of bandwidth and that, therefore, their practices are reasonable. In order to research this claim properly, regulators need access to ISP traffic measurement data. There are several possible means of accessing data at Internet Exchange points, but much data is private, either because it is exchanged between two peers who do not use an exchange, or because it is carried by a content delivery network (CDN). No government regulator has produced any reliable data, and carriers’ and CDNs’ own data is subject to commercial confidentiality (for instance, Google’s proprietary CDN). In June 2009, Epitiro benchmarking tests showed UK broadband running at 0.9 Mbps in evening peak time, a rate below that which would permit video streaming of the BBC iPlayer. The delays on the network also made it unreliable for video gaming or VOIP (ThinkBroadband, 2009): “users received on average 24% of the maximum ‘up to’ headline speeds advertised [...] During peak hours (6 pm to midnight) speeds dipped by approximately 20% [...] Ping times, an important metric for online game playing came in at around 150 ms which is too high for acceptable gaming performance.”

2. Regulation: the Law of Net Neutrality

Although net neutrality was the subject of FCC regulatory discussions and merger conditions from 2003 (Frieden, 2010b, 2011), its status was unsure in mid-2011, with no legislation passed by Congress and FCC actions reserved to isolated examples of discrimination that were litigated (Comcast v. FCC, 2010). President Obama came into office committed to net neutrality regulation (Marsden, 2010a: 1). A Notice of Proposed Rule Making (NPRM) by the FCC extended a consultation on net neutrality over 2009-10. This process was finishing just as the Court of Appeals in April 2010 (Comcast v. FCC, 2010) judged that the FCC’s regulatory actions in this area were not justified by its reasoning under the Telecommunications Act 1996 (Ammori, 2010). The successful Comcast appeal meant that the FCC had three legal choices: reclaim Title II common carrier authority over ISPs under the 1996 Telecommunications Act, ask Congress to re-legislate to grant it Title I authority, or try to assert its own Title I authority subject to legal challenge (Marsden, 2010a). It adopted this last course in its Order of 23 December 2010 (FCC, 2010), which is to be challenged before the courts (Frieden, 2011, pp. 6-15). This stay of regulatory action may leave the FCC in suspended animation for much of 2012, and researchers must look elsewhere for net neutrality regulation (Marsden, 2010b; Meisel, 2010; Donahue, 2010).

The European institutions in late 2009 agreed to impose transparency and net neutrality ‘lite’ conditions on ISPs, in directives that had to be implemented in national law by May 2011. BEREC (2010) notes that legal provisions in the Directives permit greater ‘symmetric’ regulation of all operators, not simply dominant actors, but asks for clarification of these measures: Access Directive, Art 5(1) now explicitly mentions that NRAs are able to impose obligations “on undertakings that control access to end-users to make their services interoperable”. The new wider scope for solving interoperability disputes may be used: “revised article 20 of the Framework Directive now provides for the resolution of disputes between undertakings providing electronic communications networks or services and also between such undertakings and others that benefit from obligations of access and/or interconnection (with the definition of access also modified in Art 2 AD as previously stated). Dispute resolutions cannot be considered as straightforward tools for developing a regulatory policy, but they do provide the option to address some specific (maybe urgent) situations. The potential outcome of disputes based on the transparency obligations can provide a ‘credible threat’ for undertakings to behave in line with those obligations, since violation may trigger the imposition of minimum quality requirements on an undertaking, in line with Art 22(3) USD.”

The European Commission is in 2011 consulting on the future of the Universal Service Obligation (EC, 2010), which may be extended to 2Mbps broadband (impacting member state law in 2012); this will mark a new ‘line in the sand’ in Europe for minimum service levels. It will also require commitments to offering that level of access to the open Internet, not to a throttled, blocked, walled-garden area.

2.1. National Regulatory Responses

Net neutrality has been most effectively carried into legislation or regulation in Japan and the European Union, as well as in Norway and Canada (where it is called ITMP: De Beer, 2009). Norway, a European Economic Area member (though not a full EU member), dealt with net neutrality in 2008-9. A complaint first arose from a dispute between an ISP, NextGenTel, and the Norwegian state broadcaster NRK in mid-2006 (Marsden, 2010a, pp. 172–173). The Norwegian regulator persuaded the ISPs and cable companies to sign a co-regulatory pact on transparency and consumer rights in 2009. The Norwegian Code (2009) states:

- Internet users must be given complete and accurate information about the service they are buying, including capacity and quality.

- Users may send and receive content of their choice, use services and applications of their choice and connect any hardware and software that does not harm the network.

- The connection cannot be discriminated against based on application, service, content, sender or receiver.

At national level, EU member states have been slow to recognise net neutrality problems, despite strong anecdotal evidence (Dunstone, 2006). In the UK, Ofcom has confined itself to measuring ISP broadband performance and making it easier for consumers to switch to rival providers (Kiedrowski, 2007). The government itself has been inert, even erroneously reporting to the European Commission in its 15th Annual Implementation Report on telecoms liberalisation that no problems were occurring.

The Netherlands, in June 2011, introduced a net neutrality provision into Parliament, following controversy over KPN Mobile’s intention to charge extra for VOIP and text messaging provided by alternative providers. The vote was postponed twice, on 14 and 21 June, and the measure was still pending when this article was submitted.

Net neutrality is politically controversial in Canada, where a celebrated breach took place in 2005 (De Beer, 2009). The regulator held an evidence-based inquiry into net neutrality in 2009. As a result, new principles of transparency and non-discrimination were declared; these await cases and regulatory decisions to add detail to the broad declarations.

2.1.1. Bandwidth Caps

Usage based billing (UBB), to use the Canadian expression, is not new in Internet policy, being the default in most countries prior to the introduction of broadband modems in the late 1990s. Only in countries with unmetered local calls, such as Canada and the United States, was Internet use ‘all you can eat’ (Oftel, 2000). UBB became a headline issue in 2010 in both the United States and Canada. Different practices have been identified by Geist (2011).

With the introduction of broadband cable in Canada, its regulator, the Canadian Radio-television and Telecommunications Commission (CRTC), permitted UBB with monthly download caps on users. This was justified by the shared resource used by cable modem subscribers in the local loop. The CRTC (2011) reiterated its permission for UBB, justified by reference to its responsibilities to ensure competition under Section 7 of the Telecommunications Act 1993. Comcast in the US created a 250GB cap (Burstein, 2008), which was considered more transparent than its previous use of DPI and other techniques, led by its subcontractor Sandvine, to prevent peer-to-peer transfers.

Most UBB relates to maximum download capacity, and is assessed independently of the maximum download speeds which users can receive, the latter being the ‘headline rates’ generally used in broadband advertising to consumers. OECD (2008) shows that, of 215 broadband packages sampled, almost half would result in users exceeding their monthly caps within three hours of use at advertised maximum speeds. OECD (2010) shows that while two countries (Japan, South Korea) have replaced almost half of their copper lines with fibre, the vast majority of lines elsewhere are still copper-based. There is wide variation in practices between countries, though comparisons are difficult to put into context (Bauer, 2010). Countries at the bottom of the OECD tables for bandwidth provision in 2008, Australia and New Zealand, have adopted the radical step of commissioning a national fibre local loop to replace their incumbent telephony monopolies. Public intervention is by no means taboo in broadband investment, and the European Commission has repeatedly approved all non-urban public investment in fibre deployments proposed by Member States. Broadband is not an investment to be left wholly to the private sector, and investment incentives such as permitting UBB will not of themselves ensure national fibre to the premises.
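The OECD (2008) finding above is simple arithmetic: a monthly cap is exhausted quickly when measured against the advertised ‘up to’ speed. The sketch below shows the calculation; the cap and speed figures are hypothetical illustrations, not values taken from the OECD sample.

```python
def hours_to_exhaust_cap(cap_gb: float, advertised_mbps: float) -> float:
    """Hours of continuous downloading at the advertised ('up to') speed
    before a monthly cap is exhausted. Uses 1 GB = 1e9 bytes, 1 byte = 8 bits."""
    cap_bits = cap_gb * 1e9 * 8
    seconds = cap_bits / (advertised_mbps * 1e6)
    return seconds / 3600

# A hypothetical 25 GB monthly cap at an advertised 20 Mbps lasts under
# three hours of continuous use: hours_to_exhaust_cap(25, 20) ≈ 2.8
```

The same function shows why caps bite harder as headline speeds rise: doubling the advertised speed halves the time in which the cap can be consumed, which is the tension between speed-led advertising and usage-based billing.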

The deployment of fibre to the local exchange is in itself no major current constraint on capacity: it is the backhaul cost from the telephone exchange to the Internet that is the constraint (and, in future, the cost of fibre from the exchange closer to the customer). All broadband users share the backhaul capacity from the local exchange to the Internet, capacity which must in most cases be bought wholesale from the incumbent. Incumbents can therefore control the capacity available to competitive ISPs. Burstein (2011) has stated his belief that current caps are designed to prevent ‘over-the-top’ (OTT) video from being delivered via broadband, competing with the triple-play offers of ISPs, which want subscribers to pay for a telephone line, broadband service and cable or Internet-delivered video programming (also Crawford, 2011). OTT video would compete with the last of these services, and degrading or capping the broadband service can protect the incumbent’s video service. Burstein estimates the backhaul costs to ISPs at under $1/month, whereas Ofcom (2006) estimated the backhaul costs of the BBC’s iPlayer video catch-up service to UK ISPs as in the order of £4-5/month. Prices have fallen rapidly with increases in transmission efficiency in the intervening period (Moore’s Law alone would have decreased prices by 75% over five years). Much more research is needed into backhaul costs and other constraints on UBB.

2.1.2. Transparency and ‘Reasonable Traffic Management’

One of the several principles of network neutrality promulgated by both the FCC and the European Commission is that only ‘reasonable network management’ be permitted, and that the end-user be informed of this reasonableness via clear information (Faulhaber, 2010). Both the FCC in the US and the European Commission have relied on non-binding declarations to make clear their intention to regulate the ‘reasonableness’ of traffic management practices. In Canada, the CRTC has relied on inquiries, to the dissatisfaction of advocates, while in Norway and Japan non-binding self-regulatory declarations have thus far gone unenforced.

Transparency is a work in progress, and best regulatory information practices have yet to emerge – without such practices, any commitment to net neutrality is specious. Faulhaber (2010) has suggested four basic principles based on examination of other industries’ information regulation: “1) disclose all information relevant to customer choice, 2) to which customers have easy access, 3) clearly and simply, and 4) in a way that is verifiable.” He argues that Comcast would not have been reprimanded by the FCC had its traffic management been more transparent. I suggest a fifth principle: information should be cross-compared by an accredited independent third party that is not reliant on broadband industry funding, such as a consumer protection agency. This could be carried out at arm’s length via a self- or co-regulatory agreement.

Since May 2011, both European regulators and the European Commission have begun to attempt to define ‘reasonable traffic management’ for the purposes of the European law on Internet traffic. This is likely to produce more robust guidelines for both ISPs and consumers (Sluijs, 2010), with a BEREC work group due to report by the end of 2011. The European law was, in 2009, amended to include the following:

“19. Transparency obligations on public communications network providers providing electronic communications services available to the public to ensure end-to-end connectivity, [...] disclosure regarding any conditions limiting access to and/or use of services and applications where such conditions are allowed by Member States in conformity with Community law, and, where necessary and proportionate, access by national regulatory authorities to such information needed to verify the accuracy of such disclosure”.¹

In the UK, Ofcom has tried to encourage industry self-regulation via transparency Codes of Conduct. It has also carried out measurement of ISP practices in collaboration with SamKnows, a consultancy that has also worked with the FCC. SamKnows is measuring seventeen metrics over 2010-12.² It has worked with Ofcom since 2008, and with the FCC since 2010 (for the latter it is conducting 11 tests over a three-year period). The US FCC-SamKnows tests, under the project name TestMyISP, are also supported by Measurement Lab, notably the New America Foundation. The Canadian CRTC made rules in 2009, but there is little evidence of enforcement of CRTC principles of reasonableness, which are to be applied on a case-by-case basis (Geist, 2011).

2.2. Implementing Regulation of Net Neutrality

Net neutrality regulatory solutions under the 2009 European Directives had to be implemented by May 2011. They can be classified by the ‘degree of self-regulation’ involved, from basic informal communication through to formal regulation. The general trend is towards an expansion of the scope of co-regulation, often at the expense of statutory regulation. A wide variety of models of co-regulatory tools exists (EU, 2003) for those actions that require coordinated or joint implementation (Marsden et al., 2008; Tambini, Leonardi and Marsden, 2007). Without co-regulation responsive to constitutional protection of freedom of expression at national level, measures cannot be self-sustaining (Marsden, 2011).

In the UK, Ofcom has continually attempted, since 2008, to reach a self-regulatory solution. By 2011, with the deadline for implementation of the EC Directives drawing near, the government-funded Broadband Stakeholder Group (BSG) produced a Code of Conduct, over which the UK government minister indicated that Berners-Lee would play an oversight role (Vaizey, 2011). Whether such a ramshackle arrangement satisfies the European Commission, which is legally obliged to monitor implementation, remains to be seen in the course of 2012. The Commission is likely first to ask the 27 Member States for details of their implementations, before a further information request can be made as a prelude to a possible case for a preliminary ruling before the Court of Justice of the European Union (CJEU). Such a case would be unlikely to be heard before 2013.

In the US, co-regulation is a novel concept, and the implementation of the technical means for measuring reasonable traffic management is to be tested in a self-regulatory forum, though with FCC blessing: the Broadband Internet Technical Advisory Group (BITAG), under Executive Director and FCC veteran Dale Hatfield. Its specific duties include offering ‘safe harbor’ opinions on traffic management practices to parties making a formal reference for an advisory technical opinion: “Specific TWG functions include: (i) identifying ‘best practices’ by broadband providers and other entities; (ii) interpreting and applying ‘safe harbor’ practices; (iii) otherwise providing technical guidance to industry and to the public; and/or (iv) issuing advisory opinions on the technical issues germane to the TWG’s mission that may underlie disputes among discrete parties.” (BITAG, 2011, section 7.1). BITAG has a broad multi-stakeholder constituency and is therefore far from simply an industry self-regulatory solution, but it charges companies for testing of their solutions and is not currently mandated by law, and it therefore continues to act as a self- rather than co-regulatory forum.³ As a Delaware-incorporated entity with published bylaws and an antitrust policy that formally excludes government activity, BITAG is a classic self-regulatory organisation in structure. US legal and policy scholars may wish to research the extent to which this offers advantages and costs in constitutional oversight and regulatory flexibility as compared with the more administrative-law-supported bodies in Europe. Phil Weiser has proposed that such a co-regulatory mechanism be supported (Weiser, 2009).

Unsurprisingly, net neutrality regulation has been fiercely resisted by ISPs, and its implementation has relied on a series of declarations and merger conditions prior to full implementation via regulations and legislation. Mergers afford regulators the opportunity to introduce such relatively minor adjustments because the merging parties are eager to conclude the overall deal, and trade off the relatively minor inconvenience of controls on traffic management in the interests of successful approval. Just as consumers – even with perfect information – may not view traffic management as the primary consideration in their subscription to broadband (and are thus easy targets for restrictive conditions so long as industry standards prevent real choice between ISPs), so ISPs may make the strategic choice to accept some limited traffic management conditions as a price of approval. The proposed 2011 merger of AT&T and T-Mobile could also illustrate the propensity to enforce net neutrality via merger conditions, as could the merger of Level 3 and Global Crossing, important Tier 1 backbone providers with extensive Content Delivery Networks.

2.3. The Special Case of Wireless or Mobile Net Neutrality?

Mobile remains a poor substitute for the fixed Internet: even mobile smartphone users (the most advanced mobile users) in 2010 downloaded an average of only 79 Megabytes per month (Cisco, 2011). It is misleading to use headline percentage growth to suggest there is a major congestion issue – people are finally using the Internet on mobile networks via dongles and smartphones, so percentage growth looks dramatic precisely because absolute usage remains small. Mobile data traffic in 2010 totalled 237 Petabytes, which Cisco states is three times the size of the entire Internet in 2000. More relevant is that it was 1% of the Internet in 2010, which carried a global total of 21 Exabytes. If mobile data grows twice as fast as the global Internet for the next decade, it will amount to roughly 11% of the entire Internet by 2020. At that point, it will become more than a statistical insignificance in global terms. Mobile claims should be met with robust scepticism while mobile is such a minute part of total measured Internet traffic; indeed, a substantial part of mobile ‘traffic’ is intended in future to be handed off to femtocells, WiFi cells and other fixed wireless infrastructure, piggybacking on the relatively stable and mature fixed Internet that is expanding to meet demand. Mobile is a trivial proportion of overall Internet traffic by volume, but commands massive premiums over fixed traffic for the service provided.
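The projection above can be sanity-checked with simple compound-growth arithmetic. The sketch below is illustrative only: the annual growth rates are assumptions chosen for illustration (taking “twice as fast” to mean double the annual growth rate), not Cisco’s actual forecasts.

```python
# Back-of-the-envelope check of the mobile-traffic projection in the text.
# Growth rates are illustrative assumptions, not Cisco's forecasts.
mobile = 0.237      # Exabytes in 2010 (237 Petabytes, per Cisco, 2011)
internet = 21.0     # Exabytes carried by the whole Internet in 2010

internet_growth = 0.34                 # assumed annual growth of total traffic
mobile_growth = 2 * internet_growth    # "twice as fast as the global Internet"

# Compound both totals forward from 2010 to 2020.
for _ in range(10):
    mobile *= 1 + mobile_growth
    internet *= 1 + internet_growth

share = mobile / internet
print(f"Mobile share of Internet traffic in 2020: {share:.0%}")  # roughly 11%
```

Starting from a 1% share, a growth-rate gap of this size multiplies the share by about a factor of ten over the decade, which is why mobile remains a small fraction of total traffic even under aggressive growth assumptions.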

European regulators’ group BEREC (2010, p. 11) explained: “mobile network access may need the ability to limit the overall capacity consumption per user in certain circumstances (more than fixed network access with high bandwidth resources) and as this does not involve selective treatment of content it does not, in principle, raise network neutrality concerns.” They explain that though mobile will always need greater traffic management than fixed (“traffic management for mobile accesses is more challenging”), symmetrical regulation must be maintained to ensure technological neutrality: “there are not enough arguments to support having a different approach on network neutrality in the fixed and mobile networks. And especially future-oriented approach for network neutrality should not include differentiation between different types of the networks.” BEREC (2010, p. 3) concluded that mobile should be subject to the ‘net neutrality lite’ provisions available under Directives 2009/136/EC and 2009/140/EC, listing some breaches of neutrality: “blocking of VoIP in mobile networks occurred in Austria, Croatia, Germany, Italy, the Netherlands, Portugal, Romania and Switzerland”. The FCC’s comment period on its Open Internet inquiry specifically asked for answers on the regulation of managed or specialized services and on wireless net neutrality. In its Order (FCC, 2010), the FCC announced that it was prepared not to enforce its proposed regulation on wireless services in the near future. This means that the faster-growing and more competitive US market will be less regulated, whereas the more sluggish and less competitive European market will be more regulated.

3. The Future: Public Policy Considerations in Net Neutrality

Net neutrality is a more politically important issue than telecommunications regulators are equipped or legally bound to explore, as at stake are technologies of censorship. BEREC (2010, p. 20) explains:

“Freedom of expression and citizens rights, as well as media pluralism and cultural diversity, are important values of the modern society, and they are worth being protected in this context – especially since mass communication has become easier for all citizens thanks to the Internet. However intervention in respect of such considerations lies outside the competence of BEREC.”

“Putting a cash register on the Internet” (Waclawsky, 2005) will permit much more granular knowledge of what an ISP’s customers are downloading and uploading on the Internet. ISPs could filter out both annoying and illegal content. For instance, they could ‘hear’ criminal conversations, such as those of terrorist sympathisers, illegal pornographers, harassers and those planning robberies, as well as libellous commentary. They could also ‘see’ illegal downloading of copyrighted material. They would be obliged to cooperate with law enforcement or even the copyright industries in these scenarios, and this could create even greater difficulties where speech is legal in the country of origin but illegal where it is received (Deibert et al., 2010). Net neutrality is therefore less unpopular with smaller ISPs, which wish to avoid the legal liability morass that Directive 2000/31/EC (E-Commerce Directive) and other national ISP non-liability ‘safe harbor’ [sic] laws are expressly designed to prevent.

Politicians in 2011 were reviewing the E-Commerce Directive (COM, 2010, pp. 10-11), and passing local laws that favour, for instance, their copyright industries, such as the Digital Economy Act 2010 in the United Kingdom or the HADOPI law in France. In the discussions to amend the E-Communications Framework via Directives 2009/136/EC and 2009/140/EC, large well-resourced European incumbent ISPs saw the opportunity to make common cause with mobile operators (Wu, 2007) and others, in an alliance to prevent transparency and permit filtering. The regulation of the Internet is erecting entry barriers with the connivance of the incumbent players, with potentially enormous consequences for free speech, free competition and individual expression. This may be the correct option for a safer Internet policy (to prevent exposing children to illegal and/or offensive content), but it signals an abrupt change from the open Internet (Zittrain, 2008). It is therefore vital that regulators address the question of the proper ‘lite’ approach to net neutrality to prevent harm to the current Internet, as well as beginning to address the heavier questions of positive – or tiered – breaches of network neutrality.

Forms of private censorship by intermediaries have been increasing throughout the last decade even as the law continues to declare those intermediaries (mainly ISPs, but increasingly also video hosting companies such as YouTube, social networks such as Facebook, and search providers such as Google) to be ‘three wise monkeys’. These intermediaries are not subject to liability for their customers’ content under the Electronic Commerce Directive (2000/31/EC) so long as they have no actual or constructive knowledge of that content: if they “hear no evil, see no evil and speak no evil” (Marsden, 2010a, pp. 105-149). Any net neutrality solution needs to be holistic, considering ISPs’ roles in the round.

Privacy inquiries can also impact on regulatory control of traffic management: the European Commission took the UK government to the European Court of Justice over its approval of the secret and invasive behavioural advertising practices of British Telecom and Phorm in 2006. The introduction of network neutrality rules into European law came under the rubric of consumer information safeguards and privacy regulation, not competition rules, and in 2011 the US Congress was actively exploring privacy rules and controls on ISP behavioural advertising activities.

Finally, regulations passed in licensing can affect network neutrality at a fundamental level. Interoperability requirements can form a basis for action where an ISP blocks an application. Furthermore, wireless ISPs may be required to provide open access, as in the FCC auction of 700 MHz Upper Block C frequencies in 2008 (Rosston and Topper, 2010, pp. 115-116), or under the more general common carriage requirements traditionally imposed on public communications networks since before the dawn of modern communications, beginning with railways and telegraphs (Railways Act 1844).

3.1. The Future Development of Net Neutrality and the Internet

The future of the Internet is a non-trivial issue; in fact it is central to the future of productivity in most industries. It is an enabling technology, which means that the exchange of information on this open platform promises (and delivers) real efficiencies in the economy and society generally, as it aids collaboration and improvement (Carnoy et al., 1993). It is also socially enabling, as ‘Web 2.0’ or ‘the participative web’ (Schrage, 2000; Seely Brown and Duguid, 2000): it has become a virtual playground, classroom, laboratory and chat room (Palfrey and Gasser, 2008; Tapscott, 1999). Moreover, small businesses and solo, home-based workers depend on the Internet. The promise of virtual worlds and massive online collaboration is to extend this impact even further by 2020.

Benkler’s (2006) Wealth of Networks analysis treats the Internet as a giant experiment, combining a laboratory with user innovation and feedback, while Boyle (2008) describes a wider movement in Enclosing the Commons of the Mind and Post (2009) extends a comparison with Jeffersonian America. The open Internet is a commons for all to enjoy. That is the basis for claims that it should be preserved and regulation introduced to prevent any further enclosure of that commons, while at the same time ensuring that the commons is not ruined by free-riders – that there is no ‘tragedy of the commons’. The open Internet is by no means the only or necessarily the most important place for public opinion to be formed, but it is the open public space that gives legitimacy to all these private or semi-private spaces.

The problems of development and the global digital divide are intimately connected to net neutrality. Internet connectivity is still very expensive for most developing countries, despite attempts to ensure local Internet peering points (exchanges) and new undersea cables, for instance those serving East Africa. To flood the developing world’s ISPs with video traffic, much of which comes from major video production countries such as India, Nigeria and of course Hollywood, could place local ISPs in serious financial peril. Casualties in such circumstances include, for instance, countries blacklisted by major ISPs for producing large amounts of spam: Nigerian consumers have previously discovered that their email was blocked because their ISP was also used by spammers. The second development problem on which the net neutrality debate centres is the wireless Internet. Most developing countries’ citizens have much lower bandwidth than the West, and most of their connectivity is mobile: India is probably the poster child for a country with at least ten times more mobile than fixed phone subscribers. In the next few years, the developing-world Internet user will test the limits of mobile networks, and capacity as well as price may determine whether they can expect a rapidly developing or a third-world Internet experience. I flag up development issues because they are critical. Universal service is still a pipe dream for many in the developing world, and when it arrives, the definition it is given will determine the minimum threshold that ISPs have to achieve. As Mueller (2007, p. 7) states, net neutrality “must also encompass a positive assertion of the broader social, economic and political value of universal and non-discriminatory access to Internet resources among those connected to the Internet”.

The types of non-net neutrality employed in West Asia/North Africa in winter 2010-11 were politically rather than economically motivated: political censorship designed to prevent citizens’ access to the Internet. Mueller (2007, p. 8) argues that the tendency of governments in both repressive and traditionally democratic regimes to impose liability on ISPs to censor content, for a plethora of reasons, supports a policy of robust non-interference. That is especially valuable in countries where there is much less discussion of how government deployment of ISPs as censors can endanger user privacy and freedom of expression. Mueller suggests that the net neutrality metaphor could be used to hold all filtering and censorship practices up to the light, as well as other areas of Internet regulation, such as domain name governance. Network neutrality has become an important policy issue discussed at the United Nations Internet Governance Forum (IGF), where discussions of net neutrality have substantially increased (IGF, 2008, 2009).

We may expect to see more protest behaviour by ‘netizens’ who do not agree with net neutrality policies, especially where ISPs are seen to have failed to inform end-users fully about the implications of policy changes. Regulators and politicians are challenged publicly by such problems, particularly given the ubiquity of email, Twitter and social media protests against censorship, and two Pirate Party MEPs have been elected to the European Parliament. Social activism against corporate control of the Internet is a growing research field (Hart, 2011).

4. Conclusions: Future Policy Research

The Internet’s evolution is dynamic and complex. The availability and design of a suitable regulatory response must reflect this dynamism, and also the responsiveness of regulators and market players to each other. National legislation should therefore be future-proof and avoid being overly prescriptive, lest it respond prematurely to the emerging environment. The pace of change in the relation between architecture and content on the Internet requires continuous improvement in the regulator’s research and technological training. Regulators can monitor both commercial transactions and traffic shaping by ISPs to detect potentially abusive discrimination. An ex ante requirement to demonstrate internal network metrics to content-provider customers and consumers may be a practical solution, via a regulatory or co-regulatory reporting requirement. The need for better research into the nature of congestion problems on the Internet and their effect on content and innovation is clear (Marsden et al., 2008). These conclusions support a light-touch regulatory regime involving reporting requirements and co-regulation with, as far as possible, market-based solutions. Solutions may be international as well as local, and international coordination of best practice and knowledge will enable national regulators to keep up with the technology ‘arms race’.
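A reporting requirement of the kind suggested above could in principle take a machine-readable form that regulators or consumers query directly. The sketch below is purely hypothetical: the record fields, the figures, and the 20% degradation threshold are invented for illustration and do not reflect any regulator’s actual scheme.

```python
# Hypothetical sketch of a machine-readable traffic-management disclosure.
# One record per application class: measured throughput (Mbit/s) with and
# without the ISP's traffic shaping applied. All names/values are invented.
disclosures = [
    {"isp": "ExampleNet", "app_class": "video_streaming", "shaped": 4.0, "unshaped": 5.0},
    {"isp": "ExampleNet", "app_class": "p2p_filesharing", "shaped": 0.5, "unshaped": 5.0},
    {"isp": "ExampleNet", "app_class": "web_browsing",    "shaped": 5.0, "unshaped": 5.0},
]

def flag_potential_discrimination(records, max_degradation=0.2):
    """Return (app_class, degradation) pairs where shaping cuts throughput
    by more than the assumed threshold - candidates for regulatory scrutiny."""
    flagged = []
    for r in records:
        degradation = 1 - r["shaped"] / r["unshaped"]
        if degradation > max_degradation:
            flagged.append((r["app_class"], round(degradation, 2)))
    return flagged

print(flag_potential_discrimination(disclosures))
```

The point of such a format is not the threshold itself but that standardised disclosure lets third parties, not only the regulator, detect non-neutral treatment without a ‘smoking gun’ investigation.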

The European legal basis for regulatory intervention is an enabling framework to prevent competition abuses and discrimination, under which national regulators need the skills and evidence base to investigate unjustified discrimination. Regulators expecting a ‘smoking gun’ to present itself should be advised against such a reactive approach. A more proactive approach to monitoring and researching non-neutral behaviours will make network operators much more cognisant of their duties and obligations. A consumer- and citizen-orientated intervention depends on preventing unregulated non-transparent controls exerted over traffic, whether imposed by ISPs for financial advantage or by governments eager to use this new technology to filter, censor and enforce copyright against their citizens. Unravelling the previous ISP limited liability regime risks removing the efficiency of that approach in permitting the free flow of information for economic and social advantage.


References

AMMORI, M. (2010). “How I lost the big one bigtime”. http://ammori.org/2010/04/07/how-i-lost-the-big-one-bigtime

AYRES, I.; BRAITHWAITE, J. (1992). Responsive regulation. Transcending the deregulation debate. New Haven, CT: Yale University Press.

BAUER, Johannes M. (2010). “Learning from each other: promises and pitfalls of benchmarking in communications policy”. Info. Vol. 12, no. 6, pp. 8-20.

BENKLER, Y. (1998a). “Communications infrastructure regulation and the distribution of control over content” [online]. Telecommunications Policy. Vol. 22, no. 3, pp. 183-196. http://www.benkler.org/PolTech.pdf

BENKLER, Y. (1998b). “Overcoming agoraphobia: building the commons of the digitally networked environment” [online]. Harvard Journal of Law and Technology. Vol. 11, no. 2, pp. 287-400. http://jolt.law.harvard.edu/articles/pdf/v11/11HarvJLTech287.pdf

BENKLER, Y. (2006). The wealth of networks. How social production transforms markets and freedom. New Haven, CT and London: Yale University Press.

BEREC (2010). BEREC Response to the European Commission’s consultation on the open Internet and net neutrality in Europe. BoR (10) 42. http://www.erg.eu.int/doc/berec/bor_10_42.pdf

BERNERS-LEE, Tim (2006). “Net neutrality: this is serious” [2006/06/21, 16:35]. http://dig.csail.mit.edu/breadcrumbs/node/144

BITAG (2011). By-laws of Broadband Industry Technical Advisory Group.

BOYLE, J. (2008). The public domain: enclosing the commons of the mind. New Haven, CT: Yale University Press.

BURSTEIN, D. (2008). “Comcast’s fair 250 gig bandwidth cap”. Fast Net News. DSL Prime, Docsis report, 21 October.

BURSTEIN, D. (2011). “Wireline costs and caps: a few facts”. Fast Net News. DSL Prime, 6 March.

CARNOY, M.; CASTELLS, M.; COHEN, S. S.; CARDOSO, F. H. (1993). The new global economy in the information age; reflections on our changing world. New York: Macmillan.

CAVE, M.; VAN EIJK, N.; PROSPERETTI, L. [et al.] (2009). “Statement by European academics on the inappropriateness of imposing increased Internet regulation in the EU”. 8 January 2009.

CHERRY, Barbara A. (2006). “Misusing network neutrality to eliminate common carriage threatens free speech and the postal system”. Northern Kentucky Law Review. Vol. 33, no. 4, pp. 483-511.

CHERRY, Barbara A. (2008). “Back to the future: how transportation deregulatory policies fore-shadow evolution of communications policies”. The Information Society. Vol. 24, no. 5, pp. 273-291.

CISCO (2011). Cisco visual networking index: global mobile data traffic forecast Update, 2010-2015.


CLARK, David D. (1988). “The design philosophy of the DARPA Internet protocols”. Computer Communications Review. Vol. 18, no. 4, August, pp. 106-114.

CLARK, David D.; BLUMENTHAL, Marjory S. (2011). “The end-to-end argument and application design: the role of trust”. Federal Communications Law Journal. Vol. 63, no. 2, pp. 357-390.

COM (2002). Better regulation action plan. COM 278. Brussels.

COM (2010). A digital agenda for Europe. COM 245. Brussels.

Comcast v. FCC (2010). Decision of the United States Court of Appeals for the District of Columbia Circuit, delivered 6 April. No. 08-1291.

CRAWFORD, S. (2011). The big squeeze: the looming cable monopoly [forthcoming].

CROWCROFT, J. (2011). “The affordance of asymmetry or a rendezvous with the random?” Communications and Convergence Review. Vol. 3, no. 1.

CRTC (2011). Usage-based billing for Gateway Access Services and third-party Internet access services. Telecom Decision 2011-44. File number: 8661-C12-201015975. Ottawa, 25 January 2011.

DE BEER, J. (2009). “Network neutrality in the Great White North (and its impact on Canadian culture)”. Telecommunications Journal of Australia. Vol. 59, no. 2, pp. 24.1-24.19.

DE SOLA POOL, I. (1983). Technologies of freedom. Cambridge MA: Belknap.

DEIBERT, R. J.; PALFREY, J. G.; ROHOZINSKI, R.; ZITTRAIN, J. (eds) (2010). Access controlled: the shaping of power, rights, and rule in cyberspace. Cambridge, MA: MIT Press.

Digital Economy Act (2010).

DONAHUE, H. (2010). “The network neutrality inquiry”. Info. Vol. 12, no. 2, pp. 3-8.

DUNSTONE, C. (2006). Presentation by Carphone Warehouse/TalkTalk CEO at the 2006 Ofcom conference.

ECONOMIDES, N.; TÅG, J. (2007). Net neutrality on the internet: A two-sided market analysis. Working Paper. New York: NYU Center for Law and Economics.

EUROPEAN COMMISSION (EC) (2010). Consultation on the future of the universal service obligation.

EUROPEAN UNION (EU) (2003). Inter institutional agreement.

FAULHABER, Gerald R. (2002). “Network effects and merger analysis: instant messaging and the AOL-Time Warner case”. Telecommunications Policy. Vol. 26, no. 5-6, pp. 311-333.

FAULHABER, Gerald R. (2010). “Transparency and broadband Internet service providers”. International Journal of Communication. Vol. 4, pp. 738-757.

FCC (2005). Appropriate framework for broadband access to the Internet over wireline facilities et al. Policy statement, 20 FCC Rcd 14986 (2005) (Internet policy statement).


FCC (2010). In the matter of preserving the open Internet broadband industry practices. GN Docket No. 09-191, WC Docket No. 07-52. Report and order. Adopted: 21 December 2010.

FRIEDEN, Rob (2010a). Winning the silicon sweepstakes: can the United States compete in global telecommunications? New Haven, CT: Yale University Press.

FRIEDEN, Rob (2010b). “Invoking and avoiding the First Amendment: how Internet service providers leverage their status as both content creators and neutral conduits”. The Journal of Constitutional Law. Vol. 12, no. 5, pp. 1279-1324. University of Pennsylvania Law School.

FRIEDEN, Rob (2011). “A layered and nuanced assessment of network neutrality rationales”. In: TILEC Workshop on Law and Economics, Tilburg, 20 June.

GAINES, S. E.; KIMBER, C. (2001). “Redirecting self-regulation”. Environmental Law. Vol. 13, no. 2, pp. 157-184.

GEIST, Michael (2011a). “Unpacking the policy issues behind bandwidth caps & usage based billing”. 1 February.

GEIST, Michael (2011b). “Canada’s usage based billing controversy: how to address the wholesale and retail issues”. March 2011.

HARRIS, Susan; GERICH, Elise (1996). “Retiring the NSFNET backbone service: chronicling the end of an era”. ConneXions. Vol. 10, no. 4, April.

HART, Jeffrey A. (2011). “The net neutrality debate in the United States”. Journal of Information Technology & Politics. No. 1, 2011, p. 1.

HASLINGER, Gerhard; NUNZI, Giorgio; MEIROSU, Catalin; FAN, Changpeng; ANDERSEN, Frank-Uwe (2011). “Traffic engineering supported by Inherent Network Management: analysis of resource efficiency and cost saving potential”. International Journal of Network Management. Vol. 21, no. 1, pp. 45-64, January-February. DOI: 10.1002/nem.770.

INTERNET GOVERNANCE FORUM (IGF) (2008). “Network neutrality: examining the issues and implications for development”. Co-hosted workshop, 4 December.

INTERNET GOVERNANCE FORUM (IGF) (2009). Programme, format and schedule for the 2009 meeting. Revision of 4 June 2009.

KIEDROWSKI, T. (2007). “Net neutrality: Ofcom’s view”.

LABOVITZ, C.; IEKEL-JOHNSON, S.; MCPHERSON, D.; OBERHEIDE, J.; JAHANIAN, F.; KARIR, M. (2009). ATLAS Internet Observatory Annual Report, and their presentation to the North American Network Operators Group – an industry body – NANOG (2009).

LEMLEY, M. A.; LESSIG, L. (1999). Ex parte declaration of Professor Mark A. Lemley and Professor Lawrence Lessig in the matter of: Application for consent to the transfer of control of licenses of Mediaone Group, Inc. to AT&T Corp, CS Docket No. 99-251, before the Federal Communications Commission.


LESSIG, L. (1999a). Code and other laws of cyberspace. New York: Basic Books.

LESSIG, L. (1999b). “The limits in open code: regulatory standards and the future of the Net”. Berkeley Technology Law Journal. Vol. 14, no. 2, pp. 759-770.

LESSIG, L.; WU, T. (2003). Letter to the FCC Ex parte, 22 August 2003.

MALIK, O. (2010). “U.S. mobile data traffic to top 1 exabyte”. 7 November.

MARSDEN, C. (2001). “The start of end-to-end? Internet protocol television”. Intermedia. No. 29, pp. 4-8.

MARSDEN, C. (2010a). Net neutrality: towards a co-regulatory solution. London: Bloomsbury Academic.

MARSDEN, C. (2010b). “Appeals Court demolishes FCC legal argument for ancillary jurisdiction without Title I argument in Comcast”. 6 April.

MARSDEN, C. (2011). Internet co-regulation: European law and regulatory legitimacy in cyberspace. Cambridge: Cambridge University Press.

MARSDEN, C.; CAVE, J. et al. (2006). Assessing indirect impacts of the EC proposals for video regulation. TR-414 for Ofcom. Santa Monica, CA: RAND.

MARSDEN, C.; SIMMONS, S.; BROWN, I.; WOODS, L.; PEAKE, A., ROBINSON, N. et al. (2008). “Options for and effectiveness of Internet self- and co-regulation phase 2: case study report”. 15 January 2008. Prepared for European Commission DG Information Society & Media.

MAYER-SCHONBERGER, V. (2008). “Demystifying Lessig”. Wisconsin Law Review. No. 4, pp. 713-746.

MEISEL, J. P. (2010). “Trinko and mandated access to the Internet”. Info. Vol. 12, no. 2, pp. 9-27.

MINTS (2007). Methodology. Page last modified 30 August.

MINTS (2009). MINTS pages updated, many new reports, further slight slowdown in wireline traffic growth rate. 17 November.

MUELLER, M. (1998). Universal service: competition, interconnection, and monopoly in the making. Washington DC: AEI Press.

MUELLER, M. (2007). “Net neutrality as global principle for Internet governance”. Internet Governance Project Paper IGP07-003.

NOAM, E. M. (1994). “Beyond liberalization II: the impending doom of common carriage”. Telecommunications Policy. Vol. 18, no. 6, pp. 435-452.

NOAM, E. M. (2008). “Beyond net neutrality: end-user sovereignty”. Columbia University Draft Paper for 34th Telecoms Policy Research Conference, 14 August.

Norwegian Code (2009). “Guidelines for net neutrality”.

ODLYZKO, Andrew; LEVINSON, David (2007). “Too expensive to meter: the influence of transaction costs in transportation and communication” [draft].


OECD (2008). OECD Broadband portal, table 5(m): Time to reach bit/data caps at advertised speeds. September 2008.

OECD (2010). OECD Broadband portal, table 1l: Percentage of fibre connections in total broadband among countries reporting fibre subscribers. June 2010.

OFCOM (2006). Market impact assessment: BBC new on-demand video proposals.

OFTEL (2000). Draft Direction under Condition 45 of the Public Telecommunications Licence granted to British Telecommunications plc of a dispute between BT and MCI Worldcom concerning the provision of a Flat Rate Internet Access Call Origination product (FRIACO), noting at point 3 that “BT cited concerns about network capacity and the principle of capacity charging”.

PALFREY, J.; GASSER, U. (2008). Born digital: understanding the first generation of digital natives. New York: Basic Books.

POST, D. (2009). In search of Jefferson’s Moose: notes on the state of cyberspace. New York: Oxford University Press.

REIDENBERG, J. (2005). “Technology and Internet jurisdiction”. University of Pennsylvania Law Review. Vol. 153, no. 6, pp. 1951-1974.

RENDA, A. (2008). I own the pipes, you call the tune: The net neutrality debate and its (ir)relevance for Europe. CEPS Special Reports. Brussels: Centre for European Policy Studies.

ROONEY, Ben (2011). “Net neutrality debate in Europe is ‘over’”. 28 February.

ROSSTON, G. I.; TOPPER, M. D. (2010). “An anti-trust analysis of the case for wireless net neutrality”. Information Economics and Policy. Vol. 22, no. 10, pp. 103-119.

SALTZER, J. H.; REED, D.; CLARK, D. (1981). “End to end arguments in system design”. In: Second International Conf. on Distributed Computing Systems, pp. 509-512.

SCHRAGE, M. (2000). “The debriefing: John Seely Brown”. Wired. No. 8.08, August.

SEELY BROWN J.; DUGUID, P. (2000). The social life of information. Cambridge, MA: Harvard Business School Press.

SLUIJS, Jasper P. (2010). “Network neutrality between false positives and false negatives: introducing a European approach to American broadband markets”. Federal Communications Law Journal. Vol. 62, p. 77.

TAMBINI, D.; LEONARDI, D.; MARSDEN, C. (2008). Codifying cyberspace: communications self-regulation in the age of Internet convergence. London: Routledge.

TAPSCOTT, D. (1999). Growing up digital: the rise of the net generation. New York: McGraw Hill.

TEUBNER, G. (1986). “The transformation of law in the welfare state”. In: G. TEUBNER (ed.). Dilemmas of law in the welfare state. Berlin: W. de Gruyter.

THINKBROADBAND (2009). Average mobile broadband speed clocks in at 0.9 meg. 10 June.


VAIZEY, Ed (2011). Hansard HC Deb, 5 April 2011, c259WH.

WACLAWSKY, J. G. (2005). “IMS 101: What you need to know now”. Business Communications Review. June 2005, pp. 18-23.

WEISER, P. (2009). “The future of Internet regulation”. U.C. Davis Law Review. Vol. 43, no. 2, pp. 529-590.

WERBACH, Kevin (2005). “The Federal Computer Commission”. North Carolina Law Review. Vol. 84, no. 1, p. 21.

WERBACH, Kevin (2010). “Off the hook”. Cornell Law Review. Vol. 95, p. 535.

WU, T. (2003a). “When code isn’t law” [online]. Virginia Law Review. Vol. 89, p. 679.

WU, T. (2003b). “Network neutrality, broadband discrimination” [online]. Journal of Telecommunications and High Technology Law. Vol. 2, pp. 141-172.

WU, T. (2007). Wireless net neutrality: cellular Carterfone and consumer choice in mobile broadband. New America Foundation Wireless Future Program Working Paper #17, February.

YOO, C. (2010). “The changing patterns of Internet usage”. Federal Communications Law Journal. Vol. 63, no. 1, pp. 67-90.

ZITTRAIN, J. (2008). The future of the Internet and how to stop it. New Haven, CT: Yale University Press.

----------------------

[1] Annex to Directive 2002/20/EC (Authorisation Directive), as amended by Directive 2009/140/EC, OJ L 337/68, 18 December 2009.

[2] For more details and methodology, see http://www.samknows.com/broadband/ofcom_and_samknows for Ofcom and http://www.samknows.com/broadband/ofcom_and_samknows for the FCC tests.

[3] For details see http://members.bitag.org/kwspub/BITAG_Membership/
