The internet's 'father' says it was born with two big flaws
- Vint Cerf, one of the creators of the internet, said the network had two big flaws when he launched it.
- The internet didn't have room for all the devices that would eventually be connected to it, said Cerf, now Google's chief internet evangelist.
- It also didn't have any built-in security protocols.
- Even though both shortcomings proved problematic, Cerf's not certain he would have fixed them if he had to do it all over again.
The internet was born flawed. But if it hadn't been, it might not have grown into the worldwide phenomenon it's become.
That's the take of Vint Cerf, and if anyone would know, it's him. He's widely considered to be one of the fathers of the international network and helped officially launch it in 1983.
When the internet debuted, Cerf, who is now a vice president at Google and its chief internet evangelist, basically didn't set aside enough room to handle all the devices that would eventually be connected to it. Perhaps even more troubling, he and his collaborators didn't build into the network a way of securing data that was transmitted over it.
You might chalk up the lack of room on the internet, which was later corrected with a system-wide upgrade, to a lack of vision. When Cerf was helping to set up the internet, it was a simple experiment, and he couldn't really imagine it getting as large as it became.
The security flaw, on the other hand, can be chalked up, at least in part, to simple expediency, Cerf said in a recent interview with Business Insider.
"I had been working on this for five years," he said. "I wanted to get it built and tested to see if we could make it work."
The internet had a space problem
The lack of room on the internet has to do with the addressing system Cerf created for it. Every device connected directly to the network must have a unique numerical address. When Cerf launched it, the internet had a 32-bit addressing system, meaning it could support about 4.3 billion (2 to the 32nd power) devices. And that seemed like plenty when he was designing the system in the 1970s.
That number "was larger than the population of the planet at the time, the human population of the planet," he said.
But after the internet took off in the 1990s and early 2000s, and more and more computers and other devices were connecting to the network, it became clear that 4.3 billion addresses weren't going to be nearly enough. Cerf and other internet experts realized relatively early that they needed to update the internet protocols to make room for the flood of new devices connecting to the network.
So, in the mid-1990s, the Internet Engineering Task Force started to develop Internet Protocol version 6, or IPv6, as an update to the software underlying the network. A key feature of IPv6 is its 128-bit addressing system, which provides room for 2 to the 128th power unique addresses.
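To put the two address spaces side by side, Python's standard ipaddress module can report the size of each (a minimal sketch, not anything from Cerf's original work):

```python
import ipaddress

# The full 32-bit IPv4 space versus the full 128-bit IPv6 space.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2 ** 32
ipv6_space = ipaddress.ip_network("::/0").num_addresses        # 2 ** 128

print(f"IPv4: {ipv4_space:,} addresses")
print(f"IPv6: {ipv6_space:,} addresses")
print(f"IPv6 is {ipv6_space // ipv4_space:,} times larger")
```

That works out to roughly 340 undecillion IPv6 addresses - far more than one for every device likely to be connected.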
But it's taken years for companies and other organizations to buy into, test, and roll out IPv6. The standard didn't officially launch until 2012. And even today, Google estimates that only a little more than a quarter of users accessing its sites from around the world have an IPv6 address. Even the United States has only about a 35% adoption rate, according to Google.
"Now that we see the need for 128-bit addresses in IPv6, I wish I had understood that earlier, if only to avoid the slow pain of getting IPv6 implemented," Cerf said.
But hindsight is 20-20, and he acknowledges that it's highly unlikely that he could have pushed through a 128-bit addressing system at the time, because it would have seemed like overkill.
"I don't think ... it would have passed the red-face test," Cerf said. He continued: "To assert that you need 2 to the 128th [power] addresses in order to do this network experiment would have been laughable."
Security was an afterthought
Security was also something Cerf skipped for his experiment. Transmissions were generally sent "in the clear," meaning they could potentially be read by anyone who intercepted them. And the network didn't have built-in ways of verifying that a user or device was who or what it attested to be.
Even today, some data is still transmitted in the clear, a vulnerability that has been exploited by hackers. And authentication of users remains a big problem. The passwords that consumers use to log into various web sites and services have been widely compromised, giving malicious actors access to plenty of sensitive data.

One of the most widely used security methods on the internet was actually developed around the time that Cerf was putting together the protocols underlying the network. The concept for what's called public-key encryption technology was described publicly in a paper in 1976. The RSA algorithm - one of the first public-key cryptographic systems - was developed the following year.
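As a rough illustration of the public-key idea described in that 1976 paper and in RSA, here is textbook RSA with toy numbers (a sketch for intuition only - real deployments use enormous primes and padding schemes; the modular-inverse form of pow requires Python 3.8+):

```python
# Textbook RSA with tiny primes -- for intuition only, never for real security.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (shared openly)
d = pow(e, -1, phi)        # private exponent: 2753 (kept secret)

message = 65                        # a number standing in for some data
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key
decrypted = pow(ciphertext, d, n)   # only the private-key holder can decrypt

assert decrypted == message
print(ciphertext, decrypted)        # 2790 65
```

The appeal of the scheme is that the encryption key can be published for anyone to use, while only the holder of the matching private key can read the result.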
But at the time, Cerf was deep in the push to finalize the internet protocols so that, after years of development, he could launch the system. He needed to get the protocols ported to multiple operating systems and to set a deadline for operators of the internet's predecessor networks to switch over to them.
"It would not have aided my sense of urgency to have to ... have to stop for a minute and integrate the public-key crypto into the system," he said. "And so we didn't."
The lack of security may have helped boost usage
Even with the benefit of hindsight, Cerf doesn't think it would have been a good idea to build security into the internet when it launched. Most of the early users of the network were college students, and they weren't likely to be very "disciplined" when it came to remembering and maintaining their keys and passwords, he said. Many could easily have found themselves locked out of the network.
"Looking back on it, I don't know whether it would have worked out to try to incorporate ... this key-distribution system," he said, continuing: "We might not have been able to get much traction to adopt and use the network, because it would have been too difficult."
The security situation on the internet ended up being somewhat easier to address than its lack of space, Cerf said. It was relatively easy to add public-key cryptography to the internet later through various services and features, and several are now widely used. For example, HyperText Transfer Protocol Secure, or HTTPS - the protocol web sites use to secure the transmission of web pages - relies on a public-key cryptographic system.
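As an illustration of that retrofit, Python's standard library can open a TLS connection of the kind HTTPS uses and inspect the server's public-key certificate (a minimal sketch; the hostname is just an example):

```python
import socket
import ssl

hostname = "www.example.com"   # any HTTPS-enabled site
context = ssl.create_default_context()

# Open a TCP connection, then wrap it in TLS, verifying the certificate chain.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issued to:", dict(field[0] for field in cert["subject"]))
        print("Issued by:", dict(field[0] for field in cert["issuer"]))
```

None of this required changing the underlying internet protocols; the encryption rides on top of them.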
Other types of security features have also been bolted on after the fact, he noted, such as two-factor authentication systems, which typically require users to enter a randomly generated code in addition to their password when logging into certain sites.
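In practice, many of those codes are time-based one-time passwords (TOTP), derived from a shared secret and the current time rather than generated at random. The sketch below follows RFC 6238 using only Python's standard library (the secret shown is a made-up placeholder):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period              # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; prints a six-digit code
```

Because both the server and the user's phone derive the same code from the shared secret, the code can change every 30 seconds without anything new crossing the network.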
Security "is retrofittable into the internet," he said.