
The Robotics Revolution:
What It Means and What to Watch for Next

by Peter Warren Singer 

Whether it is a report about the latest drone strike into Pakistan or an awesome web video of a cute robot dancing in the latest style, it seems like robots are taking over the world, figuratively if not yet literally. But behind their growing presence in the news is perhaps something bigger: a story that is reshaping the history of war and politics, and even of humanity.

 

Where are we now?

While unmanned systems have a long history, dating back to da Vinci’s designs for a robotic knight and including German remote-controlled torpedo boats in the First World War, it wasn’t until just a decade ago that they truly took off in war. Advances in technology made unmanned systems more usable, especially the incorporation of GPS, which allowed such systems to locate themselves in the world. At the same time, the new conflicts that followed 9/11 drove demand. When U.S. forces first went into Afghanistan, the U.S. military had only a handful of unmanned aerial systems (UAS, also called “remotely piloted aircraft” or, more colloquially, “drones”) in the air, none of them armed, and zero on the ground. Now it has a force inventory of more than 8,000 in the air and more than 12,000 on the ground. Another measure of how far the change has gone: last year, the U.S. Air Force trained more unmanned systems operators than fighter and bomber pilots combined.

But when we think about technologies like the Predator or the PackBot, we need to remember that they are just the first generation, the Model T Fords and Wright Flyers compared to what is already in the prototype stage. We are still at the “horseless carriage” stage of this technology, describing these technologies by what they are not, rather than wrestling with what they truly are. These technologies are “killer applications” in all the meanings of the term. They are technologies that advance the power of killing, but also have a disruptive effect on existing structures and programs. That is, they are akin to advancements like the airplane or the steam engine in allowing greater power and reach in war, but they are also akin to what iPods did to the music industry, changing it forever.

 

What Next? The Robotics Revolution

While many are surprised by the existing use of robotics, the pace of change won’t stop. We may have thousands now, but as one three-star U.S. Air Force general noted in my book Wired for War, very soon it will be “tens of thousands.”

But the numbers matter in another way. It won’t be tens of thousands of today’s robots, but tens of thousands of tomorrow’s robots, with far different capabilities.

One of the laws in action when it comes to technology is Moore’s Law: the observation that the computing power that fits on a microchip doubles roughly every two years. It has become an encapsulation of broader exponential trends in technology, with capability constantly doubling upon itself in everything from processing power to storage to broader patterns of innovation. If Moore’s Law holds true over the next 25 years the way it has held true over the last 40, then our chips, our computers, and, yes, our robots will be as much as a billion times more powerful than today. But Moore’s Law is not a law of physics. It doesn’t have to hold true. What if our technology delivers just 1/1000th of that change? In this slowed-down scenario, we’d still see a mere 1,000,000 times the change.
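The arithmetic behind such projections is simple compounding. Here is a minimal back-of-envelope sketch; the doubling periods are illustrative assumptions, not measured constants, and small changes in the assumed period swing the result by orders of magnitude:

# A back-of-envelope sketch of the compounding behind Moore's Law-style
# projections. The doubling periods below are assumed for illustration.

def growth_factor(years, doubling_period_years):
    """How many times more capable a technology becomes if it
    doubles every `doubling_period_years` years for `years` years."""
    return 2 ** (years / doubling_period_years)

for period in (1.0, 1.5, 2.0):
    print(f"doubling every {period} years over 25 years: "
          f"{growth_factor(25, period):,.0f}x")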

The bottom line is that what was once only fodder for science-fiction conventions like Comic-Con is now being talked about seriously in places like the Pentagon. A robotics revolution is at hand.

I should be clear here. The robot revolution under way is not the Robopocalypse that Steven Spielberg is preparing to film. It is not the kind where you need to worry about the former governor of California showing up at your door, à la The Terminator.

Instead, every so often, a technology comes along that changes the rules of the game. These technologies – be they fire, the printing press, gunpowder, the steam engine, the computer, etc. – are rare, but truly consequential.

The key to what makes a technology revolutionary is not merely its new capabilities, but the questions it raises. Truly revolutionary technologies force us to ask what is possible that wasn’t possible a generation before. But they also force us to re-examine what is proper. They raise issues of right and wrong that we didn’t have to wrestle with before.

The historical comparisons that people make to the robotics revolution illustrate this. When I conducted interviews for my book, I asked people to give historical parallels to where they think we stand now with robotics. As I noted earlier with the comparison to the “horseless carriage,” many of them, especially engineers, liken where we are now with robotics to the advent of the automobile. Indeed, at this stage of the last century, Ford was selling fewer than 1,000 cars a year. Within a decade, especially spurred on by the military proving ground of the First World War, it was selling a million a year.

If the horseless carriage is the parallel, think of the ripple effects that cars had on everything from our geopolitics to our law enforcement. A group of people who were, at the time, desert nomads became crucial players in the global economy simply because they lived over a sticky black substance previously considered more of a nuisance than anything else. The greater use of that same – now crucial – resource has changed the global climate. The growing use of cars, in turn, led to new concepts that reshaped the landscape, whether through highways and suburbia, or through new social notions.

Others, such as Bill Gates, make a different comparison, to the computer in 1980. Much like robots today, the computer back then was a big, bulky device for which we could only conceive a few functions. Importantly, the military was the main spender on computers’ research and development and a key client driving the marketplace, again comparable to the development of robots.

But soon, computers changed. They got smaller. We figured out more and more functions and applications that they could perform, both in war and in civilian life. And they proliferated. It soon got to the point that we stopped thinking of most of them as “computers.” I drive a car with more than 100 computers in it. No one calls it a “computerized car.” I have a number of computers in my kitchen. I call them things like “microwave” or “coffee maker.”

The same thing is happening with robotics – not just the changes in size and proliferation, but also the reconceptualization. Indeed, if you buy a new car today, it will come equipped with things like “parking assist” or “crash avoidance” technologies. These are kind ways of saying that we stupid humans are not good at parallel parking and too often don’t look in our blind spots. So, the robotic systems in our car will handle these things for us.

But, again, just as the story of the automobile was more than just the shift from owning horse stables to garages, so, too, was the computer about more than never having to remember long-division tables again. What mattered, again, were the ripple effects. The game-changing technology reshaped the modern information-rich economy, allowing billions of dollars to be made and lost in nanoseconds. It led to new concepts of social relations and even privacy. I can now “friend” someone in China I’ve never met. Of course, I may now be concerned about my niece social networking with people whom she’s never met. It became a tool of law enforcement (imagine the TV show CSI without computers), but also led to new types of crime (imagine explaining “identity theft” to J. Edgar Hoover). And it may even be leading to a new domain of war, so-called “cyber-war.”

This comparison is striking because it illustrates how bureaucracies often have a hard time keeping up with revolutionary change. For example, as late as 2001, when computers were obviously indispensable, the director of the FBI was so averse to them that he didn’t have one in his office and never used email. Sound amazing? Well, the current U.S. secretary of Homeland Security, the agency in charge of the civilian side of American cyber-security, doesn’t use email today in 2012.

The final comparison that is made is perhaps a darker one. It is to the work on the atomic bomb in the 1940s. Scientists, in particular, talk about the field of robotics today in much the same way they talked about nuclear research back in the 1940s. If you are a young engineer or computer scientist, you will find yourself drawn towards it. It is the cutting edge. It is where the excitement is, and where the research money is.

But many worry that their experience will turn out just like that of those amazing minds that were drawn towards the Manhattan Project, like a moth to an atomic flame. They are concerned that the same mistakes could be repeated – of creating something and only after the fact worrying about the consequences. Will robotics, too, be a genie we one day wish we could put back in the bottle?

The underlying point here is that too often in discussions of technology we focus on the widget. We focus on how it works and its direct and obvious uses. But that is not what history cares about. The ripple effects are what make a technology revolutionary. Indeed, with robotics, issues on the technical side may ultimately be much easier to resolve than the dilemmas that emerge from our human use of them.

 

How Our Robots Are Changing

The first key ripple effect with robotics is the diversification of the field and expansion of the market itself.

The first generations of aerial robots were much like the manned systems they were replacing, even down to some of them looking as if the cockpit where the pilot would sit had simply been painted over. Now we are seeing an explosion of new types, ranging in size, shape, and form. With no human inside, they can stay in the air not just for hours, but for days, months, and even years, with wings the length of a football field. Alternatively, they can be as small as an insect. And, of course, they need not be modelled after our manned machines, but can instead take their design cues from nature, or even the bizarre.

The other key change is their gain in intelligence and autonomy. This is a whole new frontier for weapons development. Traditionally, we’ve compared weapons based on their lethality, range, or speed. Think about the comparison between a Second World War B-17 bomber plane and a B-24 bomber plane. The B-24 could be considered superior because it flew faster, further, and carried more bombs. The same could be said in comparing the MQ-9 Reaper UAS with its predecessor, the MQ-1 Predator. The Reaper is better because it flies faster and further and carries more bombs. But the Reaper is also something else, which we couldn’t say about previous generations of weapons: It is smarter, and more autonomous. We are not yet in the world of The Terminator, where weapons make their own decisions, but the Reaper can do things like take off and land on its own, fly mission waypoints on its own, and carry sensors that make sense of what they are seeing, such as identifying a disruption in the dirt from a mile overhead and recognizing it as something that we humans call a “footprint.”
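To make concrete what even this limited autonomy amounts to, here is a minimal, hypothetical sketch of waypoint following, the simplest version of the “fly mission waypoints on its own” capability. Every name and number in it is an illustrative assumption, not actual flight software:

import math

# Hypothetical sketch of autonomous waypoint following. The waypoints,
# arrival tolerance, and step size are assumed values for illustration;
# this is not modelled on any real UAS autopilot.

WAYPOINTS = [(0.0, 0.0), (5.0, 2.0), (9.0, 7.0)]  # assumed (x, y) positions in km
ARRIVAL_RADIUS_KM = 0.2                           # assumed "close enough" tolerance
STEP_KM = 0.1                                     # assumed distance per control tick

def fly_mission(pos):
    for wp in WAYPOINTS:
        # The autopilot loop: steer toward the waypoint until close enough,
        # with no operator input required at any step.
        while math.dist(pos, wp) > ARRIVAL_RADIUS_KM:
            bearing = math.atan2(wp[1] - pos[1], wp[0] - pos[0])
            pos = (pos[0] + STEP_KM * math.cos(bearing),
                   pos[1] + STEP_KM * math.sin(bearing))
        print(f"reached waypoint {wp}")
    return pos

fly_mission((0.0, -1.0))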

From these changes comes a crucial opening up of the user base and the functionality of robotics. Much as you once could only use a computer if you first learned a programming language like BASIC, so, too, could you once only use robotic systems if you were highly trained. To fly an early version of the Predator drone, for instance, you had to be a rated pilot. Now, just as my three-year-old can navigate his iPad without even knowing how to spell, so, too, can you fly some drones with an iPhone app.

This greater usability opens up the realm of possible users, lowering the costs and spreading the technology even further. So, we are seeing the range of uses expand not just in the military, but also, once proved on the military side, moving over to the civilian world. Take aerial surveillance with UAS. It’s gone from a military activity to border security to police to environmental monitoring. Similarly, the notion of using a robotic helicopter to carry cargo to austere locations was first tested out in Afghanistan, but is now being looked at by logging companies.

A key step in moving this forward in the U.S. will be the integration of unmanned aerial systems into the National Airspace System (NAS) and expanded civilian use. Congress has recently set a deadline of 2015 for the Federal Aviation Administration (FAA) to figure out how to make this happen. While it is unclear whether the FAA will meet that deadline, the step is coming, and with it, the next ripple effect outwards in the market.

Indeed, what the opening of the civilian airspace will do to robotics is akin to what the internet did to desktop computing. The field was there before, but then it boomed like never before. For instance, if you are a maker of small tactical surveillance drones in the U.S. right now, your client pool numbers effectively one: the U.S. military. But when the airspace opens up, you will have as many as 21,000 new clients – all the state and local police agencies that either have expensive manned aviation departments or can’t afford them.

Beyond the obvious applications moved over from the military side, the real change occurs when imagination and innovation cross with profit-seeking. This is where parallels to computer or aviation history hold most, as the civilian side then starts to lead the way for the military. For instance, the idea of moving freight via airplanes was not originally a military role. It started out in 1919 with civilians. Today, it’s both a major military role (the U.S. military’s Air Mobility Command has some 134,000 members) and an industry that moves more than $10 trillion in global trade annually. And, yes, a number of airfreight firms are starting to explore drone air cargo delivery.

If history is any lesson, there are many more ways we don’t yet know of that robotics might be applied to other fields. Who saw agriculture as a field to be computerized? And yet the application of computers has led to massive efficiency gains. So, too, is agriculture appearing to be an area in which robotics will drive immense change, from the surveillance of the fields to the crop-dusting to the picking and harvesting.

 

The Global Revolution

As this progress in robotics plays out, it leads to more ripple effects, notably on the global level. While this is a robotics revolution, it will not be solely an American revolution.

The U.S. is certainly ahead now in this revolution, and well it should be, given that it outspends the rest of the world on military research and development.

There is a rule, however, in both technology and war that means the U.S. should not rest on its laurels: There is no such thing as a permanent first-mover advantage. Companies like IBM and Commodore may have once led the world of computing, but their wares likely don’t sit on your desk today. Similarly, the British may have invented the tank in the First World War, inspired by an H.G. Wells short story about “Land Ironclads.” But it was the Germans who figured out how to use them better in the Blitzkrieg of the Second World War.

Today, there are more than 50 other countries building, buying, and using military robotics of some sort. They range from close allies like Canada and the United Kingdom to potential adversaries like Iran, China, Russia, and Pakistan. Indeed, China has gone from having no UAS under development just a few years back to showing off well more than 25 different models of Chinese-made drones at its tradeshows, ranging from the Predator-like “Pterodactyl” to a stealthy, lethal-looking “Dark Sword.”

 

Battles of Ideas and Persuasion

The introduction of a revolutionary technology brings new races for ideas and new interactions of knowledge, power, and communication. In the case of robotics, a fascinating new cross has emerged between intellectual-property rights and defence studies.

Because robotics, like the car, the computer, or the atomic bomb before it, is a field critical to both security and industry, we are unsurprisingly seeing attempts to steal its secrets for copying abroad. The examples already range from advanced persistent threats in the cyber-security space targeting the secrets of major defence manufacturers, to a salesman for a small robotics maker I spoke with, who happened to see a clone of his firm’s ground robot being sold at an Asian arms fair.

Beyond the stealing of design secrets, unmanned systems have also opened a competition to reach into the communications of the machines themselves. In Iraq, insurgents managed to hack into the video feed of U.S. military drones – in effect, the equivalent of a robber listening in on a police radio scanner. What is even more notable is that the insurgents were able to do so using a $29 piece of software they had obtained from a Russian website. It had originally been designed to allow college kids to illegally download movies online.

As we use more and more systems that are digitally controlled, where a human is not physically inside, we will see a new step in this race open. The battle is not just for design secrets and access to communications, but also for control. We enter into an era of battles of persuasion.

This is a fundamental shift. We have never been able to “persuade” a weapon to do what its owner didn’t want. You never could change the direction of a bullet or arrow in mid-flight. Now you can do the equivalent. The goal then moves from only seeking to destroy the enemy’s plane or tank to co-opting it – “persuading” it to do things its original owners wouldn’t want. “Recode all allied soldiers as enemies, and all enemy soldiers as friendly.” A human would ask why, needing motivation to change his or her ways. With the proper access, a computer will just comply.
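A toy sketch of why “persuasion” here really just means access: in a purely digital identification-friend-or-foe table, whoever can write to the data decides who counts as friendly. Everything below is a hypothetical illustration, not any real targeting system:

# Hypothetical illustration of the "battle of persuasion": a machine's
# notion of friend and foe is just data, and anyone with write access
# can invert it. Not modelled on any real system.

iff_table = {"unit_17": "friendly", "unit_42": "hostile"}

def classify(unit_id):
    # The machine does not ask why; it reads the table and complies.
    return iff_table.get(unit_id, "unknown")

def recode_all(table):
    """The attacker's one move: swap every friendly/hostile label."""
    for unit, label in table.items():
        table[unit] = {"friendly": "hostile", "hostile": "friendly"}[label]

print(classify("unit_17"))  # friendly
recode_all(iff_table)       # the "persuasion" step: no motive required
print(classify("unit_17"))  # hostile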

 

Privacy and the Law

A computer will also not ask for an explanation when tasked with surveillance. While some say drones are no different from manned planes or surveillance cameras, and so raise no new privacy issues, this is incorrect. There are many similarities, but also fundamental differences.

To operate, a robot is always gathering and storing information about the world around it. Always. This is different from a regular plane, for example, where the human operator is gathering most of this information but cannot store it for playback. A robot’s operating requirements mean that even in the course of regular operations, it is gathering and storing information about everything that crosses its path. This gives robots an advantage over human-operated planes, where a conscious decision to acquire and store data must be made. The other main advantage of unmanned systems is their ability to loiter for long periods of time, which again allows them to draw in more information than manned systems. Taking in vast quantities of information happens unintentionally – a robot on a “Where’s Waldo?” mission to hunt down one person in a city will still be gathering data on the entirety of that city throughout the search process.
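A minimal sketch of why this incidental collection is structural rather than optional: even a hypothetical search for a single target stores every observation made along the way. All names below are illustrative assumptions:

# Hypothetical sketch of the "Where's Waldo?" search described above:
# a robot hunting one target still observes, and can store, everything
# else that crosses its sensor along the way.

observation_log = []  # every frame the platform processes, target or not

def observe(frame_id, subjects):
    # Operating requirement: each frame must be processed, hence storable.
    observation_log.append((frame_id, subjects))

def search_for(target, frames):
    for frame_id, subjects in enumerate(frames):
        observe(frame_id, subjects)  # incidental collection happens here
        if target in subjects:
            return frame_id
    return None

frames = [["pedestrian_a"], ["pedestrian_b", "target_x"], ["pedestrian_c"]]
print(search_for("target_x", frames))  # target found in frame 1...
print(observation_log)                 # ...but bystanders were logged too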

Visual information is not the only type of data being gathered. Unmanned systems also carry out electronic surveillance. A drone unveiled at the DefCon hacking conference last year can crack Wi-Fi networks and intercept text messages and cell phone conversations, all without the knowledge or help of either the communications provider or the customer. This type of drone draws in electronic information on a wide group of people beyond the intended target, including those who have not signed a user agreement or otherwise signaled they accept this intrusion upon their privacy.

Finally, the size and mobility of robotic systems are fundamentally different: they are being designed in increasingly smaller sizes, and they are able to move and track targets covertly when required. A robotic system can watch from above, but it can also get up close and personal, unlike a fixed security camera or a high-altitude spy plane.

These differences lie at the heart of much of the suspicion of domestic use of unmanned systems. Such suspicion has been encouraged by the American Civil Liberties Union, and by right-wing commentators on Fox News who have urged Americans to use their Second Amendment powers to shoot down drones (something already done by a group of hunters in Pennsylvania, who shot down a drone doing environmental monitoring).

As with revolutionary inventions of the past – like the horseless carriage and the manned airplane – no amount of hand-wringing or fear-mongering by pundits late to the game will lead to a ban on technology of such great promise.

Instead, a revolutionized world requires the establishment of new rules, which in turn requires an understanding of the new technology. Much of the substance of these rules will likely come from public discourse and the private sector. For example, the origins of the modern way we drive can be found in Rules of the Road, published in 1903 by William P. Eno. Known as “the father of traffic safety,” Eno filled his book with such then-revolutionary ideas as passing only on the left, stoplights, and one-way streets. (Ironically, he never drove himself; he was always chauffeured.)

We are seeing a similar evolution now, whether in the development of industry codes of conduct or guidelines for university research groups. But much like the early “rules of the road”, these will need enforceable laws to make them real. Early cars and planes needed more than Eno’s book – mainstream use of these inventions demanded the drafting of traffic laws and the creation of regulatory institutions like the Federal Aviation Administration. Similarly, the increasing use of unmanned systems has highlighted a gap at the state and federal levels that demands action.

 

The Psychological Side

There is a degree of irony to all the calls for regulation. Our reactions to drones policing city streets from above, computers at the National Security Agency reading emails, and smartphones letting Starbucks know when there’s a potential customer walking nearby, are still mostly determined by the very fuzzy combination of our identity and our emotions – by the DNA coding and chemical makeup that drives human psychology.

What then will be the reaction to the intensification of the surveillance state? Will we respond similarly to those teenagers given access to Facebook and Twitter who couldn’t care less that the world is watching and who have embraced the system to the point of overload? Or will we respond with fear? And how will our response to being watched impact the way we look at the human operators behind the robots?

We are facing the domestic version of the problem confronting our counterterrorist efforts abroad – the impact of robots on the very human “war of ideas”.

We need to consider what message we think we are sending with our robot watchers versus the one publics are actually receiving, and the range of impacts that message can have. U.S. troops in Afghanistan describe unmanned systems as reassuring, saying that they can sleep better because they feel like someone is always overhead, watching out for them. On the other hand, many Afghan civilians fear and distrust them.

Some, such as one senior State Department official, believe that our unmanning of war “…plays to our strength. The thing that scares people is our technology.” The idea is that it has deterrent value precisely because it is scary.

But the psychology of scaring people with technology is a tricky business. There is the risk that robotic surveillance will instead be perceived as an intrusive “Big Brother,” as the Russian police who used drones to monitor protesters have been called. Or the machines might be seen as emblematic of those trying to police people they don’t know, on the cheap, from afar. The drone becomes like the cameras favoured by the disconnected and corrupt Baltimore police force of the TV show The Wire, who watch a world of crime play out that they don’t understand.

 

User Questions

The spread of innovation in robotics represents another trend of both opportunity and peril. An ever-wider set of users is innovating with robotics for all sorts of positive purposes, from the great work being done by young students in robotics labs at McGill University to the team in Australia that built an autonomous drone to help find lost bushwalkers.

But not all of the people behind the machines have only the best in mind. Take the traditional notion of using a robotic drone for surveillance. The new users are not just militaries or police, but also civilians. These include news journalists who have reported on natural disasters with drones, and even parents who want new ways to watch their kids. A father in the U.S. gave new meaning to the term “helicopter parent,” using an automated quadcopter drone to escort his child to the school bus stop.


The problem is that each and every technology has its darker side. The same field of drone journalism that reports important stories with a whole new level of fidelity also advances the field of paparazzi. For instance, Gary Morgan, chief executive officer of Splash News, a celebrity-photo agency, has already said he’d like to be buzzing his quarry soon with silent, miniature drones mounted with tiny cameras: “It would strike fear in the hearts of every celebrity having a birthday party.” And one has the sense that the child at the bus stop may one day tell a therapist about a father who loved him a bit too much – to the point of following him with a drone.

 

Open Source

More seriously, just as software has gone “open source,” so has warfare. Robotics is not a technology like the atomic bomb or aircraft carrier, where only the great powers can build and use it effectively. Instead, just like with the “app” in the field of software, it is not just the big boys who control the field. The barriers to entry are not exceptionally high, and that means that bad actors will be able to gain and use this advanced technology.

If history is any guide, the repurposing of a low-entry revolutionary technology tends to happen fairly quickly. Indeed, the first car bomb was set off as early as 1905, used in an assassination attempt on the Ottoman sultan. Similarly, the first hijacking of a plane took place in 1931, very early in civilian air travel.

A particular area of concern, then, is the use of robotic systems by terrorists and other non-state actors. Israel as a state has long used drones, and now so has its non-state opposition. Hezbollah, for example, is not a major state military, but it has already operated UAVs, as has Hamas.

The impact of this trend is twofold. The first is that it reinforces the empowerment of individuals and small groups against the power of the state. During the Second World War, for example, Hitler’s entire Luftwaffe could not manage to reach across the Atlantic to strike at Canada or the U.S. Just a few years ago, a blind 77-year-old man managed to build his own drone that flew itself across the Atlantic.

And one man’s hobby may be another man’s plot. In 2011, the U.S. arrested Rezwan Ferdaus, a man who wanted to recreate the 9/11 attacks (not so ironically, he had been angered by drone attacks in the Mideast intended to stop terrorism). Unable to hijack planes, he instead obtained a large drone and planned to fly it into the Pentagon. Fortunately, he made the mistake of asking an FBI informant where he could obtain C-4 explosives. The plot was averted, but it showed we are now in a world where it is easier to get the drone than the bomb.

This greater reach and power may also see a lowering of the bar. One does not have to be suicidal to carry out attacks that previously might have required one to be so. This allows new players into the game, making al-Qaeda 2.0 and the next-generation version of the Unabomber or Timothy McVeigh far more lethal.

Just as car bombs are not the only way automobile technology has been misused, we should not make the mistake of only focusing on terrorism when it comes to the potential negative uses of robotics. The early horseless carriage may have been reworked into a car bomb by turn-of-the-century terrorists, but the main illegal use was as a getaway device for criminals. Similarly, the best example of innovation in the field of robotics this year might be the team of thieves in Taiwan, who used tiny helicopters equipped with pinhole cameras to carry out a jewellery heist. They made away with $4 million worth of loot before being caught.

 

The Biggest Impact

Perhaps the biggest ripple effect of the robot, however, is in reshaping the narrative in that most important realm of war. We are seeing a reordering of how we conceptualize war, how we talk about it, and how we report it.

In democracies, there have always been deep bonds between the public and its wars. Citizens have historically participated in decisions to take military action, through their elected representatives, helping to ensure broad support for wars and a willingness to share the costs, both human and economic, of enduring them.

In the U.S., our Constitution explicitly divided the president’s role as commander-in-chief in war from Congress’s role in declaring war. Yet, these links and this division of labour are now under siege as a result of a technology that our founding fathers never could have imagined.

We don’t have a draft anymore. Less than 0.5 per cent of Americans over 18 serve in the active-duty military. We do not declare war anymore. The last time Congress actually did so was in 1942 – against Bulgaria, Hungary, and Romania. We don’t buy war bonds or pay war taxes anymore. During the Second World War, 85 million Americans purchased war bonds that brought the government $185 billion. In the last decade, we bought none and instead gave the richest five per cent of Americans a tax break.

And now we possess a technology that removes the last political barriers to war. The strongest appeal of unmanned systems is that we don’t have to send someone’s son or daughter into harm’s way. But when politicians can avoid the political consequences of the condolence letter – and the impact that military casualties have on voters and on the news media – they no longer treat the previously weighty matters of war and peace the same way.

For the first 200 years of American democracy, engaging in combat and bearing risk – both personal and political – went hand in hand. In the age of drones, that is no longer the case.

This last year, unmanned systems carried out strikes from Afghanistan to Yemen. The most notable of these continuing operations is the not-so-covert war in Pakistan, where the United States has carried out more than 350 drone strikes since 2004.

Yet, this operation has never been debated in Congress. More than seven years after it began, there has not even been a single vote for or against it. This campaign is not carried out by the Air Force – it is being conducted by the CIA. This shift affects everything from the strategy that guides it to the individuals who oversee it (civilian political appointees) and the lawyers who advise them (civilians rather than military officers).

It also affects how we, and our politicians, view such operations. U.S. President Barack Obama’s decision to send a small, brave Navy SEAL team into Pakistan for 40 minutes was described by one of his advisers as “the gutsiest call of any president in recent history.” Yet, few even talk about the decision to carry out more than 350 drone strikes in the very same country, and certainly not with the same “gutsy” narrative.

I do not condemn these strikes – I support most of them, especially in the cases where it is the only way to get an identified terrorist leader. What troubles me, though, is how a new technology is short-circuiting the decision-making process for what used to be the most important choice a democracy could make. Something that would have previously been viewed as a war, not just by our leaders, but also by our media and public, is simply not being treated like a war.

The change is not limited to covert action. Last spring, the U.S. launched airstrikes on Libya as part of a NATO operation to prevent Moammar Gadhafi’s government from massacring civilians. In late March, the White House announced that the American military was handing over combat operations to its European partners and would thereafter play only a supporting role.

The distinction was crucial. The operation’s goals quickly evolved from a limited humanitarian intervention into an air war supporting local insurgents’ efforts at regime change. But it had limited public support and no congressional approval.

When the administration was asked to explain why continuing military action would not be a violation of the War Powers Resolution – a Vietnam-era law that requires notifying Congress of military operations within 48 hours and getting its authorization after 60 days – the White House argued that American operations did not “involve the presence of U.S. ground troops, U.S. casualties, or a serious threat thereof.” But they did involve something we used to think of as war: blowing up stuff – lots of it.

Starting on April 23, American unmanned systems were deployed over Libya. For the next six months, they carried out at least 146 strikes on their own. They also identified and pinpointed the targets for most of NATO’s manned strike jets. This unmanned operation lasted well past the 60-day deadline of the War Powers Resolution, extending to the very last airstrike that hit Gadhafi’s convoy on Oct. 20 and led to his death.

Choosing to make the operation unmanned proved critical to initiating it without congressional authorization and continuing it with minimal public support. On June 21, when NATO’s air war was lagging, an American Navy helicopter was shot down by pro-Gadhafi forces. This previously would have been a disaster, with the risk of an American aircrew being captured, or even killed. But the downed helicopter was an unmanned Fire Scout, and the story didn’t even make the newspapers the next day.

Congress has not disappeared from all decisions about war – just the ones that matter. The same week that American drones were carrying out their 145th unauthorized airstrike in Libya, the president notified Congress that he had deployed 100 Special Operations troops to a different part of Africa.

This small unit was sent to train and advise Ugandan forces battling the cultish Lord’s Resistance Army, and was explicitly ordered not to engage in combat. Congress applauded the president for notifying it about this small noncombat mission, but did nothing about having its laws ignored in the much larger combat operation in Libya.

We must now accept that technologies that remove humans from the battlefield, from unmanned systems like the Predator to cyber-weapons like the Stuxnet computer worm, are becoming the new normal in war. And like it or not, the new standard we’ve established for them is that leaders need to seek approval only for operations that send people into harm’s way – not for those that involve waging war by other means.

Without any actual political debate, we have set an enormous precedent, blurring the civilian and military roles in war and circumventing the Constitution’s mandate for authorizing it. Freeing the executive branch to act as it chooses may be appealing to some now, but many future scenarios will be less clear-cut. And each political party will very likely have a different view, depending on who is in the White House.

The ease of these operations raises concern not just about their initiation, but also about how we frame them, sometimes focusing only on the seeming absence of direct risks while ignoring the broader context. Unmanned operations are not “costless,” as they are too often described in the news media and government deliberations. Even worthy actions can have unintended consequences. Faisal Shahzad, the would-be Times Square bomber, was drawn into terrorism by the very Predator strikes in Pakistan meant to stop terrorism.

Similarly, CIA drone strikes outside of declared war zones are setting a troubling precedent that we might not want to see followed by the close to 50 other nations that now possess the same unmanned technology – not just our allies, who are starting to contemplate the risks, but also nations like China, Russia, Pakistan, and Iran that might abuse these precedents in even worse ways.

A deep deliberation on war was something the framers of the Constitution sought to build into our system, and that example was followed by other systems of democracy in allied countries, as well. Yet, these thinkers in past centuries could not have imagined war being reframed in such a manner. To them, war involved both the act of, and the risk of, violence. It was about killing, but it was also about sending people into harm’s way to do so. Now, the technology opens up new possibilities, and new questions for our democracies.

 

Going to War

This changing meaning of “going to war” isn’t just about the nation – it is also about the individual. For 5,000 years of humans at war, the experience of going to war had the same essential meaning. Whether one was talking about the ancient Greeks going off to fight Troy, or my grandfather going off to fight the Japanese in the Pacific theatre of the Second World War, going to war meant going to a place of such danger that one might never come home again.

This essential truth is now changing. Note how a Predator pilot described his wartime experience of fighting insurgents in Iraq, while still being at home in Nevada: “You are going to war for 12 hours, shooting weapons at targets, directing kills on enemy combatants and then you get in the car, drive home and within 20 minutes you are sitting at the dinner table talking to your kids about their homework.”

This new experience of going to war is not easy. Indeed, far from the portrayal of UAS pilots as “video gamers” who don’t care about what they do, these remote warriors are experiencing notable challenges, including rates of combat stress and burnout comparable to those physically in the field.

Though they may be doing so from afar, these UAS pilots are still experiencing acts of violence. One American non-commissioned officer spoke to me about the heartbreak of watching a team of NATO soldiers die on screen, while the unarmed drone that her team was flying could only helplessly circle above. They also face a weird disconnect of being at home and at war simultaneously. Another officer spoke of standing in line at a Burger King, and then realizing she’d been part of a “kill chain” decision just half an hour earlier.

We have not been in this new world long enough to think that we can fully understand it all, but it is clear that all forms of war carry psychological costs.

 

Conclusions

The ripple effects of robotics will continue to push out into all sorts of domains, in ways both expected and unexpected. Through it all, though, one fundamental principle will hold true as it has in the past: There are always two sides to technological revolutions. From our new technologies we gain amazing capabilities that seem straight out of science fiction. But from those same technologies we also gain new human dilemmas that seem equally straight out of science fiction. Moore’s Law is operative, but so is Murphy’s Law.

The issues of “drones,” “unmanned systems,” and “robots” all seem futuristic, but notice how none of the examples that were explored in this article were from the future. This sets a great challenge for us all, well before we have to worry about our robotic vacuum cleaners sneaking up on us at night.

Are we going to let the fact that what is unveiling itself now seems like science fiction keep us in denial that it is already part of our technological and political reality?

Peter Warren Singer is Director of the 21st Century Defense Initiative at Brookings and author of the New York Times bestseller Wired for War: The Robotics Revolution and Conflict in the 21st Century. This article is derived from a talk given to Media@McGill at McGill University. For further information, visit www.pwsinger.com or follow Peter on Twitter: @peterwsinger.

