Internet2: The Once and Future Net
July 10, 2001
By Daniel Tynan
On academia's high-powered Internet2, researchers are redefining what computer networks can do.
At a high school in North Carolina, students use an atomic force microscope to push cold viruses around as if they were chess pieces. Astronomers at the University of Florida in Gainesville gather infrared images of astral dust that may some day form into a planet. In Columbus, OH, a doctor peers inside a patient's abdomen on a monitor and discusses the procedure with surgeons in the OR.
What's remarkable about these events is not so much what the researchers did as where they did it: on their computers, in classrooms and offices miles from the actual event. And they did it using a new generation of the Internet that few outside academia have been privileged to enjoy.
A Time Machine
Today, 185 universities and research labs are breezing along on a parallel network to the public Internet called Internet2. Launched in 1996, Internet2 offers super-fast connections to two fiber-optic backbones and networking protocols designed to ensure that data arrives at its destination without loss or delay.
Internet2 is more than just a bandwidth banquet; it's the petri dish where tomorrow's Internet applications are being grown, from new ways to conduct surgery to virtual worlds where you can interact with colleagues across the continent.
Universities are using Internet2 to open their doors to remotely located students via distance-learning applications. Computer scientists are using it to collaborate on complex computational projects such as long-term weather forecasting. Researchers at the University of Washington are streaming high-definition video over the network in what could be a preview of tomorrow's TV.
Last February, New York University and Rensselaer Polytechnic Institute premiered the first opera distributed over Internet2. Entitled "The Technophobe and the Madman," the work was performed simultaneously on two stages 160 miles apart.
Ted Hanss, director of application development for Internet2 in Ann Arbor, MI, urges people to "think about Internet2 as a time machine, showing us where the [public] Internet will be in three to five years."
Hanss's timetable may be a tad optimistic. Although Internet2's cutting-edge networking technology is already being introduced on the commercial Net, upgrading the public infrastructure will take far longer, especially for the "last mile" of slow, dial-up connections used by the vast majority of netizens.
But it's easy to understand Hanss's enthusiasm. Internet2's sheer, awe-inspiring speed indulges projects with grand goals: to defeat geography and circumvent the barriers of time and space.
History of the Future
Back in the mid-90s, the Internet was going through a midlife crisis. Initially built to help universities share research data, the Net was starting to bog down under the weight of commercial traffic. Research institutions no longer had the bandwidth they needed.
It was time to build an exclusive new network that could handle bandwidth-intensive applications from the ground up. With these goals in mind, 34 research institutions got together in 1996 to form the Internet2 consortium.
Around the same time, the federal government launched the Next Generation Internet initiative, a project with virtually identical goals but focused on government agencies such as NASA and the Department of Defense.
Over time, the two networks have become complementary, sharing similar research goals and resources. The biggest difference is that while Next Generation Internet is paid for by tax dollars, Internet2 is privately funded.
To join Internet2, you must be an educational institution or private firm willing to use the network to collaborate and support the development of new applications. Annual costs run between $500,000 and $1 million per university, according to Internet2 spokesperson Greg Wood, most of it going toward upgrading campus networks.
In June, the consortium announced it now had member universities in all 50 states. Over the next few years, it plans to connect thousands of elementary and secondary schools, libraries and museums to the Internet2 backbone.
Meanwhile, private technology companies like IBM and Cisco Systems have poured in millions more, typically as equipment grants to member universities. What they get in return is expert feedback on the design of new products.
For example, Cisco Systems has relied extensively on Internet2 research in designing its next generation of networking routers, the devices that forward data packets on the Internet.
"We're not in it for altruism," says Stephen Wolff, manager of business development for Cisco in Washington, DC. "It costs us something to participate in Internet2, and we hope to regain that and more by translating the technology into products people will want to buy."
The Fast Lane
Internet2's biggest advantage is raw speed. The network uses two high-performance optical backbones: MCI WorldCom's very-high-performance Backbone Network Service (vBNS), and Abilene, a 10,000-mile backbone built specifically for Internet2 and named after the Kansas railhead that opened the old West to settlement.
Though these backbones are similar to those on the commercial Internet, only about three million users can access Internet2, versus several hundred million on the public Net.
Internet2 members also enjoy much faster connections to the backbone, eliminating a major cause of Net slowdowns. About one quarter connect directly to the backbone; the rest link up through so-called giga-pops, high-speed access points located in different regions of the country.
The minimum connection speed is a blistering 155 megabits per second, a hundred times faster than a typical university lab connection and almost 3,000 times faster than a dial-up modem.
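For a sense of scale, those comparisons work out roughly as follows. (A back-of-the-envelope check; the 1.5-megabit "typical lab connection" is an assumed T1-class line, not a figure from the article.)

```python
# Back-of-the-envelope check of the article's speed comparisons.
# The 1.5-Mbps lab link is an assumption (a T1-class line); the other
# figures come from the article.
internet2_min = 155_000_000   # bits per second, Internet2 minimum
lab_link      = 1_500_000     # assumed T1-class university lab link
dialup_modem  = 56_000        # 56K modem

print(f"vs. lab link:  ~{internet2_min / lab_link:.0f}x")       # ~103x
print(f"vs. 56K modem: ~{internet2_min / dialup_modem:,.0f}x")  # ~2,768x
```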
Across the Universe
But there's more to the network than sheer bandwidth. Network researchers are working on ways to implement so-called Quality of Service guarantees, designed to prevent data loss and minimize delays as signals bounce from machine to machine.
Thanks to the network's simplified design, data is sent more efficiently and with fewer "hops" between routers. Researchers are also looking at ways to give some data transmissions higher priority than others. By marking the data as "urgent," researchers can make sure real-time video of surgeries crosses the network before less time-sensitive data such as e-mail.
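In practice, marking traffic as "urgent" comes down to setting a few bits in each packet's IP header. A minimal sketch in Python, assuming a Linux-style socket API; the code point shown is the standard DiffServ "Expedited Forwarding" mark, and whether packets are actually expedited depends on every router along the path honoring it:

```python
import socket

# Sketch: marking a socket's traffic for priority treatment by setting
# the IP Type-of-Service byte. 0x2E is the DiffServ "Expedited
# Forwarding" code point; the DSCP occupies the top six bits of the byte.
DSCP_EF = 0x2E
TOS_BYTE = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
# Datagrams sent on this socket now carry the EF mark in their headers;
# routers configured for QoS can forward them ahead of bulk traffic.
```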
Another key Internet2 technology is multicasting. This allows a single data stream such as a live video broadcast to travel across the Internet and then split off copies of itself to multiple destinations. On the public Internet, the originating server must transmit a separate data stream to each user, greatly increasing congestion.
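The receiving side of multicast can be sketched with standard sockets: each viewer joins a shared group address, and the network, not the sender, replicates the stream to every member. A minimal Python sketch (the group address and port are arbitrary examples; joining can fail on hosts without a multicast-capable network):

```python
import socket
import struct

# Sketch of the receiving side of IP multicast. The group address is an
# arbitrary example from the multicast range (224.0.0.0/4).
GROUP, PORT = "224.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join the group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # sock.recvfrom(65535) would now deliver datagrams sent to the group.
except OSError as err:
    print("join failed (no multicast route?):", err)
```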
Researchers are also using the network to test version 6 of the Internet Protocol (IPv6), the fundamental software that controls the way data is sent over the Internet. Among other things, the new protocol vastly increases the number of potential Internet addresses, preparing for a future where devices from cell phones to refrigerators are connected to the Net.
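The address-space expansion is easy to quantify: IPv4 addresses are 32 bits wide, IPv6 addresses are 128. A quick illustration in Python:

```python
import ipaddress

# IPv4 addresses are 32 bits; IPv6 widens them to 128 bits, enough to
# give every cell phone and refrigerator its own address.
print(f"IPv4 addresses: {2**32:,}")     # 4,294,967,296
print(f"IPv6 addresses: {2**128:.2e}")  # ~3.40e+38

# The standard library parses both forms:
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)  # 4 6
```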
To Infinity... and Beyond
Astronomer Charlie Telesco figures that if he can't go to the mountain, he can always bring the mountain to his monitor. The University of Florida professor uses Internet2 to solve the eternal dilemma of how to be in two places at once, as well as overcome the problem of sharing scarce resources among a pool of hungry researchers.
From his office in Gainesville, Telesco employs an Internet2 link to peer through the eight-meter telescope at the top of Mauna Kea in Hawaii, some 4,500 miles away. Using a video conferencing application on his PC, Telesco can pan a camera around the control room at the Gemini Observatory and converse with his counterparts on the Big Island.
On another screen in his office, a series of control panels surround the image transmitted by the telescope's infrared camera. As the data refreshes 100 times per second, minute dust particles gradually become visible against a sea of background radiation. One day, Telesco says, these particles will coalesce into planets.
"In the old days, the astronomer would come up the mountain, baby-sit the instruments, and gather the data," says observatory spokesperson Peter Michaud. By the time an astronomer arrived, however, clouds might have moved in, and observation conditions might no longer be optimal.
With Internet2, Gemini can alert the astronomer to log on when conditions are right and gather data remotely, increasing both the telescope's efficiency and the quality of the data it collects.
But the arrangement is not without drawbacks. For security reasons, Gemini won't let anyone control its $185 million telescopes remotely, so astronomers must tell Gemini employees how to adjust the telescope's settings.
At first, Telesco had trouble getting the infrared images to come through Gemini's firewall. Now that it's up and running, Telesco is sold on his high-speed connection, which he calls "really fabulous."
The University of Florida, which built the infrared cameras used at Gemini's observatories in Hawaii and Chile, is working with the Spanish government on a 10-meter telescope in the Canary Islands. When it comes online, it too will be hooked to Internet2.
"It's likely I might have time on all three telescopes at exactly the same time," says Telesco. "One way to handle that is for me to stay at Florida, have students at each observatory, and remotely link to all three."
When he does, Telesco may have a few thousand netizens peering over his shoulder. "Eventually we want to do a Webcast from the control room," Michaud says. "Let people see how it happens, and dispel some misconceptions about how science is done."
Nano a Nano
While the rule at Mauna Kea is "Look, but don't touch!", elsewhere on Internet2 you can find gadgets and computers that remote users can control directly. At the University of North Carolina's nanoManipulator lab, for example, researchers can "touch" objects as tiny as a strand of DNA, even from hundreds of miles away.
At one end of the lab's nanoManipulator device sits a scanning probe microscope. On the other end is a computer running sophisticated 3-D modeling software. Using a special joystick known as the Phantom, researchers direct a micro-sized probe over the surface of a fibrin fiber, an essential element of blood clots that measures barely 50 nanometers high.
The Phantom's force-feedback mechanism simulates what it feels like to touch, squish, even split such fibers in two, capturing data that help scientists understand the nature of blood disorders.
Thanks to Internet2, scientists have used the nanoManipulator to conduct experiments as far away as Redmond, WA, more than 2,300 miles from UNC's Chapel Hill campus. Researchers at Ohio State have employed the device to manipulate fibrin fibers, while students at Orange High School in Hillsborough, NC, have kicked around adenoviruses, soccer-ball-shaped cold microbes used in gene therapy.
The real goal of the high-speed connection, however, is to foster collaboration with scientists at other universities.
"Collaboration is the best way to get good work done," says Sean Washburn, professor of physics and astronomy at UNC. "As a friend of mine once said, if you rub two graduate students together, you get sparks."
The biggest problem the nanoManipulator encounters is latency: delays that occur as data packets bounce from one Internet router to the next. Even a delay of 1/20th of a second between moving the joystick and feeling the feedback is enough to throw most people off, Washburn says. That puts a hard limit on how far away an operator can be and still control the device.
The problem, says UNC professor of computer science Kevin Jeffay, is that the Internet isn't designed for "real-time" applications like the nanoManipulator. Even Internet2, which offers much lower latency than the commercial Net, can't always cut it. To work around these limitations, Jeffay and other scientists devise schemes to give certain data packets higher priority than others, so the network sends them on faster.
Even then, he notes, there are fundamental limits to how far such applications can work. "If you try to control the microscope from China, the speed of light delay to China and back is going to be high enough so that you can't control it," says Jeffay. "But going from [North Carolina] to Ohio, the speed of light isn't such a problem."
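Jeffay's speed-of-light limit is easy to check with rough numbers. A sketch, where the great-circle distances and the two-thirds-of-c figure for light in fiber are approximations, not figures from the article:

```python
# Rough check of the speed-of-light floor on round-trip delay. Light in
# optical fiber travels at roughly two-thirds of c, about 200,000 km/s.
C_FIBER_KM_S = 200_000.0

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time over fiber, ignoring router delays."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

print(f"Chapel Hill to Columbus, OH: ~{round_trip_ms(600):.0f} ms")
print(f"Chapel Hill to China:       ~{round_trip_ms(12000):.0f} ms")
# The haptics threshold quoted earlier is 1/20th of a second, i.e. 50 ms:
# Ohio sits comfortably under that floor, China far over it.
```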
While mere mortals rarely need to manipulate molecules in real time, UNC's research has broad implications for anyone who hopes to simulate realistic "touch" over a network connection. Low-latency networks will be essential for such applications as finding defects in an integrated circuit, examining the feel of a suit fabric before purchasing it, or doing battle over an interactive gaming network.
The Abdominal Showman
Examining the far reaches of inner and outer space may make for compelling science, but it's unlikely to translate directly to new Internet applications that affect our everyday lives. In fields like remote telemedicine, however, a fast network connection could be the difference between life and death.
At Ohio State's Center for Minimally Invasive Surgery, a patient is undergoing laparoscopy to repair a ventral hernia. Tubes feed a tiny digital camera into his abdomen while surgeons watch their work on a TV screen. On the other side of town, another surgeon watches a live broadcast from inside the patient's body and consults with the doctors.
Remote hospital procedures such as this one will one day become routine, says Dr. Scott Melvin, director of the center in Columbus. Melvin has been both chief surgeon and consultant on operations that have been broadcast to San Francisco and Washington, DC.
"This gives you the opportunity to get an instant second opinion," Melvin says. "Many surgeons don't have experts available down the hall. [Using Internet2] a surgeon in a rural area can call up and say, 'Can you come look at this for me?'"
Video conferencing over the commercial Internet is notoriously bad, thanks to data loss, latency problems and poor color fidelity. Jerry Johnson, the research scientist in charge of the Ohio State project, says it took him and OSU engineer Bob Dixon six months to convince surgeons they could get medical-quality video over Internet2.
"But now they love it because it's so spontaneous," says Johnson, who uses off-the-shelf videoconferencing gear like Polycom's ViewStation, which costs $4,000 and up. "You don't have to preplan it; all you need is a good 768-kilobit-per-second connection at each end."
Unfortunately, the remote clinics most in need of medical expertise are unlikely to have access to Internet2. Even for non-surgical medical video conferences, you'd still need a fast DSL or cable-modem connection, which are not always easy to come by.
But as more clinics and individuals gain access to broadband connections, telemedicine may become a standard way to monitor patients remotely. "We'll no longer just call up patients to ask how they feel," says Melvin. "We'll be able to see inside of them and gather objective data in real time."
You Are There... Almost
Most Internet2 applications are really just variations on teleconferencing. Ultimately, you're still looking at a two-dimensional video wall. The goal of the National Tele-Immersion Initiative is to break through that wall, to create the illusion that colleagues across the continent are in the cubicle next door, close enough to touch.
A collaboration among four research centers (Advanced Network and Services in Armonk, NY; UNC Chapel Hill; the University of Pennsylvania; and Brown University), the NTII might be the most ambitious Internet2 project so far. Even the researchers can't talk about it without lapsing into Star Trek metaphors.
"Tele-immersion is kind of a cross between the holodeck and the transporter beam," says Jaron Lanier, chief scientist for Advanced Network and Services and a pioneer in the field of virtual reality.
Indeed, today's version shares some qualities of both fictional devices.
At UNC's tele-immersion lab, a graduate student sits at a desk in front of seven digital video cameras. A reporter sits in another room and wears polarized sunglasses and a headset, whose movements are tracked by infrared lights embedded in the ceiling. A three-dimensional image of the student appears on the wall in front of him, against a scanned backdrop of an actual office.
By craning his neck, the reporter can see objects on the desk behind the student, or rather the objects that appear to be behind her. And although she can't see him (for purposes of this demonstration, the video is one-way only), when she smiles and waves hello, the reporter instinctively waves back.
Work Trek: The Next Generation
From these humble beginnings, NTII's architects spin scenarios in which people interact with each other and virtual objects in 3-D spaces.
Building inspectors could tour structures without leaving their desks. Automobile designers from Detroit and Germany could meet to conceive the next generation of sport utility vehicles. Geographically distant surgeons could experiment with excising a virtual tumor before working on the actual patient. Scientists from across the continent could magnify molecules or shrink down galaxies and walk through them.
So far, researchers have conducted tele-immersion experiments between Chapel Hill, Armonk and Philadelphia. The cost in bandwidth is enormous, however; each session consumes as much as 25 percent of the backbone.
And even the most basic tele-immersion experience requires a daunting amount of equipment. Besides the cameras and the headgear, it takes two 35-pound Sharp projectors to create the telecubicle image and eight high-end PCs to acquire and deliver the 3-D graphics.
Lanier estimates that practical implementations of tele-immersion are at least 10 years away, an eon in Internet time. Still, the advances in the underlying technology have been impressive, says Herman Towles, senior research associate in UNC's computer science department. "When we started this project we used a $2 million SGI Reality [system] to create these images," Towles says. "Now we use PCs costing less than $20,000."
Though Internet2's potential is compelling, you probably won't wake up one day to find your den turned into a holodeck. While some Internet2 technology is already employed in isolated locations on the commercial Net, the most compelling applications may only be available to those who can afford to pay for them.
One huge stumbling block is the so-called "last mile" connection. When the Internet2 faithful talk about broadband, they're speaking of a world where every computer is connected at 100 megabits per second or faster. Right now about 95 percent of U.S. netizens access the Internet using 56K modems; upgrading the public infrastructure to deliver roughly 2,000 times that level of performance could take decades.
Initially at least, bandwidth-intensive applications will be limited to big organizations that can foot the bill for high-speed connections. For example, telemedicine will probably appear first at major regional medical centers, says John Patrick, vice president of Internet Technology for IBM in Somers, NY. Then as bandwidth gets cheaper, it will spread to local hospitals and eventually trickle down to doctors' offices.
A Question of Quality
Besides raw bandwidth, most Internet2 apps require guaranteed quality of service; the data needs to arrive on time and intact. Wolff says Cisco is already deploying new routers with QoS features across the Net. But the many thousands of existing routers will need to be upgraded with new operating systems and internal cards, a costly and time-consuming process. Those most likely to endure the expense: major Internet service providers and backbone operators who want to offer premium services to corporate customers.
Implementing QoS means that some data will get better treatment, much as the U.S. Postal Service handles first-, second- and third-class mail. And first-class Net delivery will cost you.
"You have to fundamentally charge for these services," says Jeffay. "If you don't, then everybody's a high priority."
Similarly, Internet protocol version 6 is being used today in thousands of corporate and research networks, which communicate with other IPv6 sites by "tunneling over" the current Net protocols. But it will be many years before IPv6 replaces the current version 4, if ever, because the IPv4 network is still growing at a frantic pace.
"I see IPv6 taking hold in corporations," says Todd Needham, manager of research programs at Microsoft. "It's much easier and cheaper to administer."
Needham notes that the upcoming Windows XP operating system has quality-of-service capabilities and support for IPv6 built in. But as with QoS, routers need to be upgraded and software written to take advantage of the new technology. The companies making the biggest push toward the new protocol will be makers of wireless devices, who need the expanded IP addresses version 6 provides.
Companies like IBM and Microsoft have already used basic multicasting (sending material from a single source to multiple destinations) for internal communications. But doing it on a large scale, or managing audio and video sent between multiple locations, is a problem researchers are just beginning to tackle.
May the Market Force Be with You
But technological barriers are usually the simplest ones to remove. Moving from a collaborative environment like Internet2 to the commercial Internet introduces a host of new problems.
In a high-stakes competitive environment, companies seeking a market advantage could attempt to implement their own schemes, which may not be compatible with others. In these battles, the best technology does not always win.
"There are tremendous market forces at play that will impact what the actual architecture of the commercial Net is," Jeffay says. "From my perspective, the real problems are the socio-economic ones of how do you get these companies to play together."
As in any grand experiment, what goes on in the lab won't always translate to the real world. When it's still inside the petri dish, an experiment can be controlled. What happens when the beast is unleashed is anyone's guess.
Daniel Tynan writes about technology and culture from his home in North Carolina.