Security Competition for the Internet of Things


The Internet of Things is presently all things to everyone. For some it is the connected home where your fridge can 'talk' to your cooker. I presume the conversation will be, "What are your plans for dinner?" To others it is being able to control the lighting or heating at home from your work desk or even your watch. Regardless of what it is to you, one thing is certain: there are real risks to deployment. I mean, what if someone hacks into (God forbid) your cooker and starts a fire? Well, people are looking into the security of IoT, as the following post from the New York Times makes clear.

The vision of the so-called internet of things — giving all sorts of physical things a digital makeover — has been years ahead of reality. But that gap is closing fast.

Today, the range of things being computerized and connected to networks is stunning, from watches, appliances and clothing to cars, jet engines and factory equipment. Even roadways and farm fields are being upgraded with digital sensors. In the last two years, the number of internet-of-things devices in the world has surged nearly 70 percent to 6.4 billion, according to Gartner, a research firm. By 2020, the firm forecasts, the internet-of-things population will reach 20.8 billion.

The optimistic outlook is that the internet of things will be an enabling technology that will help make the people and physical systems of the world — health care, food production, transportation, energy consumption — smarter and more efficient.

The pessimistic outlook? Hackers will have something else to hack. And consumers accustomed to adding security tools to their computers and phones should expect to adopt similar precautions with internet-connected home appliances.

“If we want to put networked technologies into more and more things, we also have to find a way to make them safer,” said Michael Walker, a program manager and computer security expert at the Pentagon’s advanced research arm. “It’s a challenge for civilization.”

To help address that challenge, Mr. Walker and the Defense Advanced Research Projects Agency, or Darpa, created a contest with millions of dollars in prize money, called the Cyber Grand Challenge. To win, contestants would have to create automated digital defense systems that could identify and fix software vulnerabilities on their own — essentially smart software robots as sentinels for digital security.

A reminder of the need for stepped-up security came a few weeks after the Darpa-sponsored competition, which was held in August. Researchers for Level 3 Communications, a telecommunications company, said they had detected several strains of malware that launched attacks on websites from compromised internet-of-things devices.

The Level 3 researchers, working with Flashpoint, an internet risk-management firm, found that as many as one million devices, mainly security cameras and video recorders, had been harnessed for so-called botnet attacks. They called it “a drastic shift” toward using internet-of-things devices as hosts for attacks instead of traditional hosts, such as hijacked data center computers and computer routers in homes.

And last week, researchers at Akamai Technologies, a web content delivery company, reported another security breach. They detected hackers commandeering as many as two million devices, including Wi-Fi hot spots and satellite antennas, to test whether stolen user names and passwords could be deployed to gain access to websites.

The Cyber Grand Challenge was announced in 2013, and qualifying rounds began in 2014. At the outset, more than 100 teams were in the contest. Through a series of elimination rounds, the competitors were winnowed to seven teams that participated in the finals in August in Las Vegas. The three winning teams collected a total of $3.75 million in prize money.

With the computer security contest, Darpa took a page from a playbook that worked in the past. The agency staged a similar contest that served to jump-start the development of self-driving cars in 2005. It took the winning team’s autonomous vehicle nearly seven hours to complete the 132-mile course, a dawdling pace of less than 20 miles per hour.

Still, the 2005 contest proved that autonomous vehicles were possible, brushing aside longstanding doubts and spurring investment and research that led to the commercialization of self-driving car technology.

“We’re at that same moment with autonomous cyberdefense,” Mr. Walker said.

The contest, according to the leaders of the three winning teams, was a technical milestone, but it also shed light on how machine automation and human expertise might be most efficiently combined in computer security.

In the security industry, the scientists say, there is a lot of talk of “self-healing systems.” But the current state of automation, they add, typically applies to one element of security, such as finding software vulnerabilities, monitoring networks or deploying software patches. And automated malware detection, for instance, is often based on large databases of known varieties of malicious code.
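To make the "database of known malware" idea concrete, here is a minimal sketch of signature-based detection in Python. The hash database and file paths are hypothetical, and real products use far richer signatures (byte patterns, heuristics, behavioural rules), but the core lookup works roughly like this:

    import hashlib

    # Hypothetical database of SHA-256 hashes of known malicious files.
    KNOWN_MALWARE_HASHES = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder entry
    }

    def sha256_of_file(path):
        """Hash a file in chunks so large files do not exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_malware(path):
        """Flag a file only if its hash appears in the signature database."""
        return sha256_of_file(path) in KNOWN_MALWARE_HASHES

The weakness the article points at is visible right in the code: a sample whose hash is not already in the database sails straight through, which is why brand-new attack code, like the code written for the Darpa event, defeats purely signature-based tools.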

For the Darpa test, the attack code was new, created for the event. In the capture-the-flag style contest, the teams played both offense and defense. For the humans, it was hands-off during the competition. The software was on its own to find and exploit flaws in opponents’ software, scan networks for incoming assaults and write code to tighten its defenses.
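The contest systems relied on far more sophisticated machinery than anything shown here (symbolic execution, automated exploit generation, automatic patching), but the flavour of "software finding flaws in software" can be hinted at with a toy random fuzzer. Everything below is a hypothetical sketch, including the parse_record function standing in for the program under test:

    import random

    def parse_record(data: bytes) -> int:
        """Hypothetical target program with a hidden parsing bug."""
        length = data[0]              # declared payload length in the first byte
        payload = data[1:1 + length]  # actual payload may be shorter than declared
        # IndexError when the declared length exceeds the real payload (or is zero).
        return payload[length - 1]

    def fuzz(target, trials=10_000):
        """Throw random inputs at the target and collect any that make it crash."""
        crashes = []
        for _ in range(trials):
            data = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
            try:
                target(data)
            except Exception as exc:
                crashes.append((data, exc))
        return crashes

    if __name__ == "__main__":
        found = fuzz(parse_record)
        print(f"{len(found)} crashing inputs found")
        if found:
            print("example:", found[0])

A crashing input like the ones this loop turns up is only the first step; the contest machines then had to decide automatically whether the flaw was exploitable and how to patch it without breaking the program, which is where the hard research lies.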

The winners succeeded in integrating different software techniques, in ways not done before, into automated “cybersecurity systems.” The contest was conducted in a walled-off computing environment rather than the open internet.

The scientists agree that further development work needs to be done for the technology to be used broadly on commercial networks and the open internet.

“But this was a demonstration that automated cyberdefense is mature enough, and it’s coming,” said David Melski, captain of the second-place team whose members came from the University of Virginia and a spinoff start-up from Cornell University, GrammaTech, where Mr. Melski heads research.

The first-place team, which won $2 million, was a group from ForAllSecure, a spinoff from Carnegie Mellon University. Hours after the Darpa contest, its cyberreasoning software, called Mayhem, went up against the best human teams at Defcon, an annual hacking competition.

In that three-day contest, Mayhem held its own for two days and proved itself to be extremely strong on defense. But by the third day, the human experts had come up with more innovative exploits than Mayhem, said David Brumley, a professor at Carnegie Mellon and chief executive of ForAllSecure.

Still, the automated system displayed its power, especially in keeping up with the scale of security challenges in the internet-of-things era. “The number of things automated systems can look at is so vast that it changes the game,” Mr. Brumley said.

Yan Shoshitaishvili, a Ph.D. candidate who led the third-place team, a group from the University of California, Santa Barbara, is focusing his research on designing “centaur” systems that effectively combine machine firepower with human expertise.

Humans are still better than computers at understanding context — and security is so often defined by context. For example, you do want to broadcast your GPS location data to friends in a social app like Glympse; you do not want a program sending out location data if you’re in a battlefield tank.
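As a small, hedged illustration of that context dependence (the names and fields below are hypothetical, not from any real product): the same "share my location" action is allowed or denied purely on context that a human or policy layer has to supply, which is exactly the judgment computers still struggle to make on their own.

    from dataclasses import dataclass

    @dataclass
    class Context:
        """Hypothetical context supplied by a human or a policy layer."""
        app: str
        sharing_with_friends: bool
        sensitive_environment: bool  # e.g. a military vehicle

    def may_broadcast_location(ctx: Context) -> bool:
        """Sharing with friends is fine; a sensitive environment overrides everything."""
        if ctx.sensitive_environment:
            return False
        return ctx.sharing_with_friends

    # Identical code, two different answers, driven entirely by context.
    print(may_broadcast_location(Context("Glympse", True, False)))   # True
    print(may_broadcast_location(Context("telemetry", True, True)))  # False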

“In the real world,” Mr. Shoshitaishvili said, “humans can assist these automated systems. That’s the path ahead.”
 

Culled from the New York Times
