The future of air travel may include planes without pilots. We have the technology, but the risks that must be overcome, and kept in check consistently over time, are “mind-boggling”, says Matthew Rosenquist.

Boeing will begin testing pilotless jetliners in 2018. Yes, the future of air travel may include planes without pilots. Just computers and calculations to get passengers safely to their destinations. Advances in Artificial Intelligence (AI) are opening up possibilities to make flying safer, more consistent, easier to manage, and more cost-efficient.

“The basic building blocks of the technology clearly are available,” said Mike Sinnett, Boeing’s vice president of product development.

Automation and safety 

Planes are already under the control of computers most of the time. They can take off, fly to their destination, and even land in semi-automatic modes. The question is not whether it is technically possible, but whether it would be safe in every situation that puts passengers at risk. It is not about the 99% of flight time, but about those unexpected and unforeseen moments when snap decisions are required to keep passengers safe.

Not too long ago, Captain Chesley Sullenberger made a miraculous effort to avoid disaster when a flock of geese struck his plane shortly after takeoff. With the engines rendered ineffective and against serious odds, he was able to avoid populated areas of New York City and glide the plane to a safe landing on the Hudson River. The pilot saved 150 passengers and potentially countless people on the ground.

Cybersecurity factors 

Autonomous planes, full of passengers, flying with significant force, and carrying a tremendous amount of highly flammable fuel, may be a prime target for certain cyber threats. Total control would be the ultimate goal, but even the ability to disrupt operations may be sufficient to cause horrendous loss. As a result, autonomous flight development will be a huge test for AI security fundamentals, integration, and sustainable operations. AI-controlled air transport is an admirable goal with significant potential benefits for all, but the associated risks that must be overcome and managed consistently over time are mind-boggling.

Consider this: a malicious actor taking control of an AI-controlled car could cause a handful of deaths. Taking over an AI-controlled plane could result in a situation like 9/11, in which thousands of people die, many more suffer short- and long-term injuries, and, most importantly, an indelible message is sent to an entire society that strikes a chord of long-lasting fear. Every day there are tens of thousands of flights in the skies above. Aside from the passengers at risk, each one could be used as a weapon against targets on the ground.

The business risks are equally severe. If such a situation manifested and one or more of a manufacturer’s planes were hacked and intentionally brought down, it could crater the company; neither the plane manufacturer nor the airline would likely remain viable.

The fallible control 

Personally, I like humans in the loop. There is no doubt people are fallible, unpredictable, and inconsistent. From a cybersecurity perspective, they can be tough for an attacker to anticipate. The very weakness humans bring to complex systems is ironically a safety control against malicious attackers.

Then there is the fear factor. A flesh-and-blood pilot has a committed stake in the safety of the plane, the passengers, and themselves. Their own lives are at risk as well. Under pressure, humans have a remarkable ability to adapt and overcome when facing unexpected or new situations that put their mortality in the balance. I am not sure that is something that can be programmed into a computer.

We are entering the age of AI. It will bring enormous benefits, but humanity still has a lot to learn when it comes to deciding the proper role of, and trust we will place in, digital intelligence. Large-scale human safety may be one of those places where AI is better suited as an accompaniment to human involvement. Such teamwork may bring the very best of both worlds. We are learning, just as we are teaching machines. Both human and AI entities still have a lot to discover.

For more articles, subscribe to Matthew’s cybersecurity blog.