Proposing new technologies for the future Internet involves testing these technologies beforehand. Although theory, simulation, or emulation can be useful tools in the design process, experimentation with the “real” Internet is necessary to test a protocol’s reaction to unexpected, real events
that evade simulation or emulation. The Internet is now an
important production and business tool. ISPs would not risk taking it down to install new, untested routing software, nor risk running experimental software on it. An interruption, or an unexpected failure that takes the whole network down, could result in the loss of substantial existing business and revenue. Thus, extensive experimentation is a crucial step toward coming even close to convincing an ISP to deploy any new enhancement. On the other hand, deploying the new technology might actually be the only way of testing it under “realistic” conditions. These two facts
create a vicious circle, where disruptive technologies are not
deployed for lack of sufficient experimental evidence that they would not harm deployed Internet services, while new protocols can never be tested to the extent necessary, because ISPs are averse to deploying them.
To overcome this deadlock, global experimentation platforms like PlanetLab have been deployed with the goal of
allowing researchers to (i) conduct large scale experiments
in a “real” Internet topology and with “real” Internet traffic, while (ii) not disrupting the Internet’s regular behavior
and performance. PlanetLab is an open, overlay network of “virtualized” computing and communication resources that are connected through the Internet. By using virtualization software like VINI, it allows different experiments to share a large set of these resources by allocating dedicated “slices” of resources, while at the same time being exposed
to real Internet events and traffic. Of course, when setting
up testbeds and doing virtualization, both full realism in
experimentation and reproducibility of experimental results
may not always be achievable. Researchers will have to strike the right tradeoff between experiments that take into account real traffic and network conditions, and experiments
that can be reproduced, or at least provide enough context
to be meaningful for comparison.
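The slice abstraction can be pictured with a minimal sketch (hypothetical class and function names, not PlanetLab’s actual API): each experiment receives a dedicated share of resources on many shared nodes, and the shares of different experiments cannot overlap.

```python
# Illustrative sketch of the "slice" idea: each node's resources are
# partitioned among experiments, which run concurrently but isolated.
# (Hypothetical classes and names; NOT PlanetLab's actual API.)

class Node:
    def __init__(self, name, cpu_cores, bandwidth_mbps):
        self.name = name
        self.free_cpu = cpu_cores
        self.free_bw = bandwidth_mbps

    def carve(self, cpu, bw):
        """Reserve a dedicated share of this node for one slice."""
        if cpu > self.free_cpu or bw > self.free_bw:
            raise RuntimeError(f"{self.name}: not enough free resources")
        self.free_cpu -= cpu
        self.free_bw -= bw
        return {"node": self.name, "cpu": cpu, "bw": bw}

def allocate_slice(experiment, nodes, cpu, bw):
    """Give an experiment a dedicated 'slice' spanning many nodes."""
    return {"experiment": experiment,
            "vnodes": [n.carve(cpu, bw) for n in nodes]}

nodes = [Node("planet1", cpu_cores=8, bandwidth_mbps=1000),
         Node("planet2", cpu_cores=8, bandwidth_mbps=1000)]

s1 = allocate_slice("new-routing-protocol", nodes, cpu=2, bw=200)
s2 = allocate_slice("traffic-measurement", nodes, cpu=2, bw=200)
# Both experiments now run on the same physical nodes, each confined
# to its own reserved share, while seeing real Internet traffic.
```

The point of the sketch is the allocation discipline: an experiment can only consume what its slice reserved, so a misbehaving experiment degrades its own slice, not the platform.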
Another effort in this direction is XORP (eXtensible Open Router Platform) [15, 5]. XORP is an open software project that aims to allow experimental networking software to run side by side with production software. It runs a virtual router stack on commodity PCs or servers,
and is also used by the VINI platform. Through a carefully
designed Forwarding Engine Machine (FEM) the forwarding
tables are exposed to a large number of concurrently running
routing protocols like OSPF, RIP, BGP, multicast protocols
like PIM, as well as experimental protocols under testing.
Each of these protocols runs as software in user space, to enable better (sandboxed) isolation and security. What is more, each of these (or groups of them) could run on a separate physical router. Thus, if multicast fails, for example, it would at least not bring down the unicast service. Furthermore, if one router is compromised by, say, a “router worm”, this software isolation could prevent its further spread (something that is not guaranteed by today’s Internet routers). Finally,
this stack enables individual changes to the routing software
to be implemented much more easily and quickly. In other
words, XORP promises to deliver stability for the provider
and flexibility for deployment.
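The isolation idea described above can be sketched roughly as follows (hypothetical names, not XORP’s real interfaces): several routing protocols, each an independent process, install routes into one shared forwarding table through a narrow interface, so tearing down one protocol’s routes leaves the others intact.

```python
# Rough sketch of a forwarding-engine layer shared by isolated routing
# protocol processes. (Hypothetical classes and names; NOT XORP's API.)

class ForwardingTable:
    def __init__(self):
        self._routes = {}  # prefix -> (next_hop, owning protocol)

    def install(self, owner, prefix, next_hop):
        """Narrow interface through which protocols add routes."""
        self._routes[prefix] = (next_hop, owner)

    def withdraw_all(self, owner):
        """On a protocol crash, remove only that protocol's routes."""
        self._routes = {p: r for p, r in self._routes.items()
                        if r[1] != owner}

    def lookup(self, prefix):
        entry = self._routes.get(prefix)
        return entry[0] if entry else None

fib = ForwardingTable()
fib.install("ospf", "10.0.0.0/8", "192.0.2.1")    # unicast route
fib.install("pim", "224.1.1.0/24", "192.0.2.9")   # multicast route

# If the multicast process fails, only its state is torn down;
# the unicast service keeps forwarding:
fib.withdraw_all("pim")
```

This mirrors the failure-containment argument in the text: the multicast protocol’s crash removes its own routes, while the unicast entries installed by OSPF survive untouched.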
As with any virtualization approach, XORP faces performance concerns: it introduces an extra processing layer (the FEM) with its respective overhead, and a commodity PC cannot be faster than specialized hardware. On the other hand, XORP’s technology could allow many cheap routers to be combined into a much more powerful router cluster, and it is claimed that, on a high-performance server, XORP can “virtualize” connections of up to 40 Gbps. Nevertheless, it is still
mostly envisioned to address “edge” routers rather than
“backbone” ones. Finally, XORP may also be used inside generic virtualization software like VINI, to allow different experiments/ISPs to run different routing protocols.
Taking virtualization a step further, one could even envision different Internet architectures, rather than just experiments, to be running side by side. One such proposal, called
CABO (Concurrent Architectures are Better than One), advocates separating the providers of the physical network
infrastructure from the service providers themselves. Despite a large economy running over the “net”, its economic
incentives are misaligned, stifling growth. Different ISPs
control different, often small parts of the network, and are
rarely on both ends of an end-to-end session. To change anything in the core functionality of the protocols and offer a new service, a number of different ISPs need to come to agreement and coordinate. In
addition to the inherent difficulty of such an endeavor, this
also creates a disincentive for ISPs to compete by innovating.
How can an ISP attract customers from another ISP by creating and offering a better service, when that service often
requires the agreement of that second ISP in order to work?
For example, imagine an ISP wishes to offer QoS guarantees
to its customers. Even if it invests heavily in equipping its own network with QoS provisions, these would be useless as soon as its traffic had to traverse other networks that do not comply.
CABO aims at better aligning these economic incentives.
A service provider would now lease physical resources end-to-end from different infrastructure providers, and would be able to offer a better service to its customers. Furthermore, different service providers would not only have the ability, but also every incentive, to evolve, improve, and innovate in their (virtual) networks, in order to attract customers to connect through them. Of course, this does not preclude
an entity from being both the infrastructure and service
provider in some cases, as for example when a national infrastructure provider (e.g. France Telecom) also acts as a service provider (e.g. for DSL).
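The separation of roles could be sketched as follows (all names hypothetical; CABO itself does not prescribe such an interface): a service provider leases capacity hop by hop from several infrastructure providers and stitches the leases into one end-to-end virtual path that it fully controls.

```python
# Sketch of CABO's role separation: infrastructure providers own the
# physical links; a service provider leases capacity end-to-end.
# (Hypothetical classes and names; illustrative only.)

class InfrastructureProvider:
    def __init__(self, name, links):
        # links: {(a, b): spare capacity in Mbps on that physical link}
        self.name = name
        self.links = links

    def lease(self, a, b, mbps):
        """Hand a dedicated share of one physical link to a lessee."""
        key = (a, b)
        if self.links.get(key, 0) < mbps:
            raise RuntimeError(f"{self.name}: no capacity on {key}")
        self.links[key] -= mbps
        return {"provider": self.name, "link": key, "mbps": mbps}

def build_virtual_path(hops, mbps):
    """Lease each hop from whichever provider owns that link."""
    return [prov.lease(a, b, mbps) for prov, a, b in hops]

ip1 = InfrastructureProvider("infra-A", {("paris", "london"): 1000})
ip2 = InfrastructureProvider("infra-B", {("london", "nyc"): 1000})

# The service provider composes a path across both substrates and can
# now deploy, say, its own QoS scheme over the entire path:
path = build_virtual_path([(ip1, "paris", "london"),
                           (ip2, "london", "nyc")], mbps=100)
```

Because every hop of the path is leased to the same service provider, an end-to-end property like QoS no longer depends on the voluntary cooperation of transit ISPs, which is exactly the incentive realignment the text describes.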
However, this would imply that different service providers
would now have to share physical resources. Hence, some
means is necessary (i) to be able to guarantee each the required resources to operate its services, and (ii) to isolate