Monday, May 29, 2006

Internet 2: The Next Generation Internet

Introduction
Internet2 is a not-for-profit consortium led by 207 universities in the United States, working in partnership with over 60 leading companies and government agencies to develop and deploy advanced network applications and technologies, accelerating the creation of tomorrow's Internet.

Internet2's mission is to develop and deploy advanced network applications and technologies for research and higher education. Its primary goal is to ensure the transfer of these new technologies and applications to the broader education and networking communities. The major goals of Internet2 are to:
• Create a leading-edge network capability for the national research community
• Enable revolutionary Internet applications
• Ensure the rapid transfer of new network services and applications to the broader Internet community.
Internet2 is not a separate physical network and will not replace the Internet. Internet2 brings together institutions and resources from academia, industry and government to develop new technologies and capabilities that can then be deployed in the global Internet. Close collaboration with Internet2 corporate members will ensure that new applications and technologies are rapidly deployed throughout the Internet. Just as email and the World Wide Web are legacies of earlier investments in academic and federal research networks, the legacy of Internet2 will be to expand the possibilities of the broader Internet.

Internet2 and its members are developing and testing new technologies, such as IPv6, multicasting and quality of service (QoS), that will enable revolutionary Internet applications requiring performance not possible on today's Internet. More than a faster Web or email, these technologies will enable completely new applications such as digital libraries, virtual laboratories, distance-independent learning and tele-immersion.
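Of the technologies named above, IPv6 is the simplest to illustrate. A minimal sketch using Python's standard library, just to show the larger address family (the address here uses the reserved documentation prefix, not a real Internet2 host):

```python
import socket

# IPv6 uses 128-bit addresses (vs. 32-bit in IPv4), written as
# colon-separated hex groups; "2001:db8::" is the documentation prefix.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
print(sock.family == socket.AF_INET6)  # prints True
sock.close()

# An IPv6 address packs into 16 bytes, four times the IPv4 size.
raw = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
print(len(raw))  # prints 16
```

The 128-bit address space is what lets IPv6 give every device a globally routable address instead of sharing scarce IPv4 addresses.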

The History of the Internet
With the invention of nuclear weapons and the beginning of the Cold War, the American authorities faced a strategic problem: How could they ensure the reliability and stability of communications after a nuclear attack?
A government-funded research organization known as RAND was entrusted with finding an appropriate solution to this problem. RAND's research determined the necessity of building a decentralized communication network. The principles were simple: each node in the network would have the capability to originate, pass and receive messages. At the source node, a message would be divided into packets; each packet would be separately addressed and would make its own way through the network. At the destination node, the packets would be reassembled to rebuild the original message. This message-delivery technique, called packet switching, proved vital to the construction of computer networks.
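The packetize-address-reassemble idea can be sketched in a few lines. A toy illustration only (the packet format and function names here are invented for the example, not any real protocol):

```python
# Toy packet switching: split a message into addressed,
# sequence-numbered packets, then reassemble in any order.
def packetize(message: str, size: int = 4) -> list[dict]:
    """Split a message into separately addressed packets."""
    return [
        {"seq": i, "dest": "node-B", "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets: list[dict]) -> str:
    """Rebuild the original message regardless of arrival order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)

packets = packetize("HELLO ARPANET")
packets.reverse()  # packets may arrive out of order
print(reassemble(packets))  # prints "HELLO ARPANET"
```

Because each packet carries its own address and sequence number, no single path through the network is critical, which is exactly the resilience property RAND was after.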

In 1969, ARPA (the Advanced Research Projects Agency), part of the US Department of Defense, started to build a leased-line network, which it called ARPANET (Advanced Research Projects Agency Network). This network linked four nodes (four American universities) running large time-sharing computers. The impact of ARPANET was tremendous. Using NCP (Network Control Protocol), it allowed the transfer of information between hosts on the same network, so scientists and researchers were able to access remote computer systems and share computing resources at a rate of 50 kbps.
In 1972, Ray Tomlinson of BBN (Bolt Beranek and Newman) created the first e-mail program, which enabled electronic messages to be sent over decentralized networks. E-mail rapidly gained popularity, and became the most common means of communication over networks.
In the early 1970s, a group from ARPA began to develop a new protocol suite, called TCP/IP (Transmission Control Protocol/Internet Protocol), which allowed different computer networks to interconnect and communicate with each other; the term Internet was used for the first time in the paper describing TCP. The first commercial version of ARPANET, called Telenet, was also launched during this era. Soon after, ARPANET expanded to connect universities and research centers in Europe and eventually became a global network. By the late 1970s, people were able to participate in discussions over networks through newsgroup services such as USENET.
When other networks, such as CSNET (Computer Science Network), BITNET (Because It's Time Network) and NSFnet (National Science Foundation network), started to offer e-mail and FTP (File Transfer Protocol) services in the 1980s, inter-network connections became prevalent, and every network had to adopt the TCP/IP suite of protocols, which replaced NCP completely. The term Internet, referring to the group of computers communicating via TCP/IP, started to become popular.
As time passed, more and more nodes were added. Data transmission became faster, especially when dedicated lines, such as T1 carriers, were introduced. All of these developments contributed to the expansion of the Internet and triggered the formation of organizations, such as the IAB (Internet Activities Board) and the IETF (Internet Engineering Task Force), that sought to develop the Internet further.
Internet2 and Next Generation Internet (NGI)
Much of the research in the US today is funded either by NASA or by the US Department of Defense. To maintain their supremacy in the field of science and technology, US authorities started two parallel projects along the lines of ARPANET in October 1996, namely Internet2 and the Next Generation Internet (NGI). The university-led Internet2 and the federally led NGI are parallel and complementary initiatives.
Internet2 universities are required to provide high-performance networks on their campuses and to commit $60 million per year in investments, with corporate members investing another $30 million over the lifetime of the project. In addition, Internet2 member institutions may receive funding in the form of competitively awarded grants from the federal agencies participating in the federal Next Generation Internet initiative.

Internet2 is systematically absorbing the National Science Foundation's very high-speed Backbone Network Service (vBNS). More than 50 Internet2 institutions have received competitively awarded vBNS grants under the NSF's High Performance Connections program.

In fact, vBNS could be considered the heart of Internet2, or at least its substantive launch pad. Begun in 1995 with an investment of $50 million under a five-year cooperative agreement with MCI, the service links six NSF supercomputer centers and was initially implemented to design and support "gigabit testbeds" for R&D on advanced networking technologies.

What does it offer?
Requiring state-of-the-art infrastructure, Internet2 universities are connected to the Abilene network backbone through regional network aggregation points called gigaPoPs. Abilene supports transfer rates between 2.4 gigabits per second and 9.6 gigabits per second. The speed of Internet2 can be appreciated from this comparison: a file that takes 171 hours to download over a 56 Kbps modem would take 74 hours over an ISDN connection, 25 hours over a DSL/cable line, 6.4 hours over a T1 network and just 30 seconds over Internet2.
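The comparison above can be checked with simple arithmetic. A quick sketch, assuming the file size implied by the 56 Kbps figure (about 4.3 GB) and treating each link speed as ideal sustained throughput with no protocol overhead:

```python
# File size implied by "171 hours at 56 kbps":
FILE_BITS = 171 * 3600 * 56_000  # about 34.5 billion bits (~4.3 GB)

# Nominal link speeds in bits per second (idealized).
links = {
    "56 kbps modem": 56_000,
    "ISDN (128 kbps)": 128_000,
    "T1 (1.544 Mbps)": 1_544_000,
    "Internet2 (~1 Gbps effective)": 1_000_000_000,
}

for name, bps in links.items():
    hours = FILE_BITS / bps / 3600
    print(f"{name}: {hours:.1f} hours")
```

The computed times (74.8 hours for ISDN, about 6.2 hours for T1, roughly half a minute at gigabit rates) line up with the figures quoted above; the small differences come from rounding in the original comparison.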

Some of the major applications of Internet2 would comprise:
1. Tele-immersion, which enables users at geographically distributed sites to collaborate in real time in a shared, simulated, hybrid environment as if they were in the same physical room.
2. New services and capabilities envisioned for Internet2 offer important opportunities to move the Digital Libraries program into new areas. Very high-bandwidth and bandwidth reservation will allow currently exotic materials such as continuous digital video and audio to move from research use to much broader use. Images, audio and video can, at least from a delivery point of view, move into the mainstream currently occupied almost exclusively by textual materials. This will also facilitate more extensive research in the difficult problems of organizing, indexing and providing intellectual access to these classes of materials.
3. A virtual laboratory, which enables a group of researchers located around the world to work together on a common set of projects. As with any other laboratory, the tools and techniques are specific to the domain of the research, but the basic infrastructure requirements are shared across disciplines. Although related to some of the applications of tele-immersion, the virtual laboratory does not assume a priori the need for a shared immersive environment.
4. Virtual Museums can be created as a result of high-bandwidth. With curators digitizing their collections, the wealth of assembled artifacts can be available to anyone with a high-speed connection.
5. Real-time access/visualization of simulation results. Faster access to remote computers that perform data analysis and simulations.
6. Access to, and control of, scarce and expensive remote devices, such as MRI machines, telescopes, etc.
7. Advanced network-quality video conferencing and distance learning.
8. Setting up a war information network, synchronising forces on land, in the air and at sea, and providing them with real-time satellite images and video of positions on the ground.

Conclusion
Indian universities and corporations can certainly take a leaf out of the book of their US counterparts, especially when India has a vision of becoming an IT superpower by the year 2020. The Government of India should also play a proactive role in this regard.