The logical design (the code that governs the network of nodes and ends) of the Internet defines how the Internet works. The Internet was originally designed without any internal intelligence: the nodes of the network did not care about (or even attempt to know) what content was communicated across them; they made only a best-effort attempt to transmit packets of information from one end (the originating node or device) to another (the receiving node or device) via non-predetermined pathways. This approach was enabled by adherence to openly published standards and protocols for the transfer of these data packets across the network, such as the TCP/IP suite of protocols and the layered protocol stack established by the Open Systems Interconnection (OSI) Basic Reference Model.
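The best-effort principle can be illustrated with a minimal sketch in Python using UDP, which, like IP itself, forwards datagrams without acknowledgements or retries (the loopback address and the `send_best_effort` helper below are illustrative choices, not part of the original text):

```python
import socket

def send_best_effort(message: bytes) -> bytes:
    """Fire a datagram at a receiver with no delivery guarantee."""
    # Receiver end: bind to an ephemeral loopback port (OS picks it).
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))
    addr = receiver.getsockname()

    # Sender end: sendto() hands the packet to the network and returns.
    # There is no acknowledgement, no retransmission, no ordering --
    # the network merely tries its best to deliver it.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(message, addr)

    data, _ = receiver.recvfrom(1024)  # delivery is likely here, never guaranteed
    sender.close()
    receiver.close()
    return data

print(send_best_effort(b"hello, end-to-end"))
```

Over the loopback interface the datagram almost always arrives, but nothing in the protocol promises it: any reliability (as in TCP) is added by the communicating ends, not by the network in between.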
This logical model, known as the end-to-end model, in conjunction with adherence to open protocols (namely, the TCP/IP suite), produced a network that was fundamentally open, equitable and neutral (Lessig, 2001). It was open in the sense that users needed only to connect their computers to the network to share or receive content; equitable in the sense that the design was not optimized for any particular content or user, and thus the network was open to uses and users it was not originally designed for; and neutral in the sense that the network was incapable of discriminating against specific content. This final aspect of the original Internet model is the foundation of “net neutrality”, which is currently threatened by new technologies and legal-institutional frameworks.
An alternative logical model, which is now increasingly embedded into the current Internet structure, and which is becoming rapidly more attractive in an era of large-scale corporate mergers within and between telecommunications providers and mass-media content deliverers, is asynchronous transfer mode (ATM). This model moves the intelligence from the ends of the network to the middle, giving the owners of the infrastructure (the wires and, increasingly, the spectrum), as well as the Internet service providers (ISPs; the gatekeepers), the control to dispense with “net neutrality” when it benefits their particular interests (i.e., to favour the content, applications and traffic of their corporate partners over those of competitors or independents). Thus, layering intelligence onto the neutral TCP/IP suite leads to serious concerns over government transparency (think North Korea), the viability of a free and independent media (think China), and of course security and privacy (think the Patriot Act of the United States).
Standards are also a crucial component in enabling content-level applications and services. Open standards, such as XML and its extensions (the Geography Mark-up Language, GML, for example), allow data and information to be exchanged and shared among software applications, users and devices. These open standards fuel the interchange of data and information that underlies the services provided over the Internet, thus making collaboration and the production of new content possible. Defining domain-specific standards is also increasingly important for the filing, cataloguing and sharing of domain-specific information.
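The interchange that open mark-up enables can be sketched with a small example: because the format is openly specified, any conforming parser can recover the data without knowledge of the software that produced it. The GML-style fragment below is illustrative only (a simplified point geometry, not a complete GML document), and the `extract_coordinates` helper is an assumed name:

```python
import xml.etree.ElementTree as ET

# A minimal GML-style fragment: a point encoded as "lon,lat" text.
# Any XML parser that understands namespaces can read it.
GML_SNIPPET = """\
<gml:Point xmlns:gml="http://www.opengis.net/gml">
  <gml:coordinates>-75.7,45.4</gml:coordinates>
</gml:Point>"""

def extract_coordinates(xml_text: str) -> tuple:
    """Parse a GML-style point and return its (lon, lat) pair."""
    root = ET.fromstring(xml_text)
    ns = {"gml": "http://www.opengis.net/gml"}
    lon, lat = root.find("gml:coordinates", ns).text.split(",")
    return (float(lon), float(lat))

print(extract_coordinates(GML_SNIPPET))  # (-75.7, 45.4)
```

Because the schema, not the producing application, defines the meaning of each element, the same fragment can flow between a mapping service, a database and a desktop client without loss.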