In 1986, the National Science Foundation (NSF) brought online the first part of its computing backbone, NSFNET, which was soon upgraded to run over an optical fiber network.
NSFNET grew rapidly, connecting thousands and then millions of university computers around the United States by the early 1990s. The network’s success convinced the Department of Energy (DOE) and NASA to establish their own networks using the same protocols, giving birth to the DOE’s Energy Sciences Network and the NASA Science Internet. These new networks joined ARPANET and NSFNET as the four major networks of the U.S. internet.
Commercial internet service did not exist until 1989. Cerf noted that commercial software and hardware from firms like Proteon, and later Juniper and IBM, could be bought for dial-up use. “You could buy those [products]. You couldn’t buy [internet] service, because it was only available to people who were sponsored by the U.S. government research agencies,” Cerf said.
In 1988, Cerf, who was then vice president of the Corporation for National Research Initiatives, received permission from the Federal Networking Council to connect the MCI Mail service, which had been providing commercial email over phone lines via modem since 1983, to the internet. Cerf knew the service well: as vice president of MCI’s Digital Information Services from 1982 to 1986, he had led the development of MCI Mail. The connection broke the restriction prohibiting commercial traffic from traveling over the government backbone, he explained.
This new commercial, rather than government-based, email service launched in 1989. It opened the floodgates for commercial players, and the major email providers of the time, such as AOL, CompuServe, and Telenet’s Telemail, which were providing messaging via telephone modems, soon followed suit. This rapid expansion of commercial internet services prompted the NSF to shut down its dedicated backbone in 1995.
Powering the Growing Web
The 1990s saw the development of web search technology, driven by users’ need for more efficient ways to find information as the World Wide Web expanded exponentially. Web search derived largely from earlier work on information retrieval from text, such as DARPA’s TIPSTER program, begun in 1991. These earlier programs helped set the stage for a line of DARPA investments in information extraction and search that continues today. By the late ’90s, the need for better machine processing of the information on the growing web, as well as for more precise search capabilities, was becoming critical to users’ ability to find what they were looking for. In the national security context, the challenges of accessing, managing, and sharing data residing on classified DOD networks were becoming more urgent. “A lot of information management technologies had been developed by DARPA,” Hendler said, but “the question was, how could we apply them to this exponentially growing space?”
Much of DARPA’s work before the late 1990s balanced the needs of different government user communities with the challenge of controlling when and where to share – or not to share – information between individuals, agencies, or battlefield commanders. However, as the information needs of warfighters grew, especially for complex operations, DARPA also started working with the DOD to develop better technologies for improving the sharing of information between the different military services and particularly for coalition operations in which the United States had to work with forces from other countries or the U.N.
One particular challenge was that even within the military, let alone on the open web, people needed a way to find documents or data relevant to them even when the terms used differed. “A good example,” said Hendler, “was that what a United States warfighter would refer to as a ‘MiG-25’ might be called a ‘Foxbat’ in a report filed by United Nations troops.” Users needed search and retrieval that wasn’t limited to the words in a document but could also draw on knowledge about how entities were related to each other. “We needed a way that users could search on ‘Russian fighter aircraft’ and find MiGs, Foxbats, and all the other things which they might be called,” said Hendler.
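To make Hendler’s example concrete, here is a minimal sketch, in Python, of how a small knowledge graph of entity relations can widen a keyword search. Everything in it is hypothetical: the triples, relation names, and document snippets are invented for illustration and do not come from any DARPA system.

```python
# Hypothetical knowledge graph: (subject, relation, object) triples.
# All names here are invented for illustration.
TRIPLES = [
    ("MiG-25", "instanceOf", "Russian fighter aircraft"),
    ("MiG-29", "instanceOf", "Russian fighter aircraft"),
    ("Foxbat", "aliasOf", "MiG-25"),
    ("Fulcrum", "aliasOf", "MiG-29"),
]

def expand(term: str) -> set[str]:
    """Return the query term plus every entity the graph links to it.

    Simplification: every relation is followed from object back to
    subject, so aliases and instances both widen the query.
    """
    names = {term}
    changed = True
    while changed:
        changed = False
        for subject, _relation, obj in TRIPLES:
            if obj in names and subject not in names:
                names.add(subject)
                changed = True
    return names

def search(query: str, documents: list[str]) -> list[str]:
    """Keyword search widened by knowledge-graph query expansion."""
    terms = {t.lower() for t in expand(query)}
    return [d for d in documents if any(t in d.lower() for t in terms)]

docs = [
    "UN observers reported a Foxbat overflight at 0600.",
    "Maintenance log for the transport fleet.",
]
# A plain keyword match on the query would miss the first report; the
# expansion finds it via Foxbat -> MiG-25 -> Russian fighter aircraft.
print(search("Russian fighter aircraft", docs))
```

The substance is in the expansion step: the report matches not because the user typed “Foxbat,” but because the graph records how that name relates to the concept the user asked for.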
To address this problem, DARPA invested in the DARPA Agent Markup Language (DAML) program, which Hendler launched in 1999. A key researcher on the project was MIT professor Tim Berners-Lee, who had invented the World Wide Web at CERN in 1989 and who coined the term “Semantic Web” in the mid-’90s. It was clear that for these technologies to be used widely by the military, they would have to be supported by commercial web companies, which could provide the scale needed to cover the many millions of pages of the growing web. The early languages of the Semantic Web, promoted by Berners-Lee, Hendler, and others, eventually diffused throughout the World Wide Web and became commercial standards. For example, one of the best-known uses of the Semantic Web has been in the creation of the “knowledge graph” technologies used today by large internet search and social-networking companies. Semantic Web technologies are also widely used in the health care and life sciences community.
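For a rough sense of what those standards look like in practice, the sketch below uses rdflib, an open-source Python library for RDF, to state a few facts as triples and query them with SPARQL. The URIs, classes, and alias relation are invented for illustration; this shows the style of reasoning the Semantic Web languages standardized, not any particular production knowledge graph.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# A made-up namespace for this example.
EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Facts as RDF triples: a class hierarchy, a typed entity, and an alias.
g.add((EX.FighterAircraft, RDFS.subClassOf, EX.Aircraft))
g.add((EX.MiG25, RDF.type, EX.FighterAircraft))
g.add((EX.Foxbat, EX.aliasOf, EX.MiG25))

# SPARQL: find everything that is an Aircraft through the class
# hierarchy, or that is an alias of such an entity.
results = g.query("""
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex: <http://example.org/>
    SELECT ?thing WHERE {
        { ?thing rdf:type/rdfs:subClassOf* ex:Aircraft . }
        UNION
        { ?thing ex:aliasOf/rdf:type/rdfs:subClassOf* ex:Aircraft . }
    }
""")

for row in results:
    print(row.thing)  # the URIs for MiG25 and for its alias, Foxbat
```

A query for the general class finds the specific aircraft and its alias because the relationships are stated as machine-readable data rather than buried in document text, which is precisely the capability that knowledge graphs later commercialized.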