Design Engineering

AI and the Future

By Dr. Ishwar K. Puri, PhD, P.Eng.   


To exploit AI successfully, engineers must distinguish between uncertainty and risk.

In his seminal work, Risk, Uncertainty and Profit, famed economist Frank Knight wrote that the distinction between risk and uncertainty comes down to a matter of measurability. Since risk can be measured, robust predictions can be made, provided all risks are known. Uncertainty, on the other hand, can’t be similarly measured and therefore poses unknown risks, which throw forecasts out of whack and precipitate unreasonable decisions.

Unfortunately, we live in a world characterized by volatility, uncertainty, complexity and ambiguity. Society is subject to forces unlike those of any other era, and reality is often hazy and easily misread. Change is influenced by multiple social, political, economic and technological forces, and is often abrupt and unpredictable.

What does this mean for engineers? Ten years ago, only one in six people worldwide used the Internet. Today, that fraction is one in two, or approximately 3.8 billion people globally. Of them, 2.8 billion use social media, and overwhelmingly they do so on a mobile device.

It’s inevitable that, as we become even more connected, so will business and industry. However unromantic and intrusive this might sound, it will be increasingly impracticable to ‘go off the grid’.


Today’s smart machines are typically driven by expert systems. These systems include software that enables decision making (e.g., to support a medical diagnosis or the operation of a smart grid). The engine of that software is based on if-then rules that are learned progressively through experience.

If this sounds like reasoning, it is. The reasoning of the software in a smart system is based on a library that contains certain facts, which are the ‘ifs’, and outcomes, which are the ‘thens’. As new knowledge is archived in this library, an inference engine in the software uses if-then rules to develop new facts, or ifs, and suggests different outcomes, or the ‘then what happens’. This is the basis of a class of artificial intelligence, which itself has now become an all-embracing term.
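For readers curious how such an inference engine chains rules together, the short Python sketch below illustrates the idea. The facts, rules and outcomes are invented placeholders for illustration only; they are not drawn from any real diagnostic or smart-grid system.

```python
# Minimal sketch of a rule-based expert system using forward chaining.
# Facts are the 'ifs'; each rule maps a set of conditions to an outcome, the 'then'.

facts = {"fever", "cough"}  # known facts (illustrative only)

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "refer_to_physician"),
]

def infer(facts, rules):
    """Apply if-then rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, outcome in rules:
            if conditions <= derived and outcome not in derived:
                derived.add(outcome)  # a new fact becomes a fresh 'if'
                changed = True
    return derived

print(infer(facts, rules))  # e.g. {'fever', 'cough', 'flu_suspected'} (order may vary)
```

Each pass through the loop treats any newly derived fact as a fresh ‘if’, which is how the library of facts and outcomes described above grows with experience.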

Siemens reports that the global market for smart machines is growing by almost 20 percent annually and will reach about $15 billion by 2019. Just as the Internet has connected people, it is increasingly connecting smart machines. Expert systems currently make up the largest market fraction of smart systems but, according to some sources, their share will be overtaken by autonomous robots by 2024.

There are, of course, critics of this trend. Stephen Hawking, for example, called artificial intelligence “the worst event in the history of our civilization” while Elon Musk told Rolling Stone, “Climate change is the biggest threat that humanity faces this century, except for AI.”

Whether those predictions prove true or not, AI-enabled autonomous robots will continue to proliferate for a simple reason: they will be inexpensive. As the number of robotic appliances continues to increase, the cost of sensors will keep falling. The global market for robotics sensors already exceeds $16 billion.

As a result, the proliferation and diversification of smart systems based on interconnected artificial intelligence will lead to disruptive technologies and introduce more uncertainty.

Here’s the problem. The human brain is not a computer. Likewise, computers, although capable of intelligent action, cannot reproduce the cognition and intelligence of our brains. Artificial intelligence algorithms are trained with known data. Consequently, their acquired if-then rules cannot anticipate uncertain, or unknown, circumstances, nor formulate rational decisions when those circumstances arise.

Artificial intelligence methods have been developed over the course of more than a half century. Their influence has ebbed and flowed during that time but now, through integration with pervasive connectivity and inexpensive sensors, AI is enabling significant technologies.

Nevertheless, today’s wave of AI is based on very primitive models of our brains. Sensors do not yet mimic how we perceive; computer memory cannot duplicate how we remember; and current if-then AI rules cannot truly duplicate how we reason and make decisions and then act.

One could say that the AI algorithms that relate facts to outcomes (i.e. the if-then rules) are the result of rigorous problem-based and experiential learning, but without any appreciation of the underlying physics.

Even so, AI has transitioned from a scientific advance to an engineering tool. Continuing innovation in an increasing number of domains is requiring engineers from all disciplines to learn how to integrate AI tools into their engineering designs.

Open-source tools, such as Amazon’s DSSTNE, Microsoft’s DMTK, and Google’s TensorFlow, contain software libraries that enable machine learning. Google, for example, recently released an open-source AI tool called DeepVariant that reconstructs a person’s genome from gene-sequencing data more accurately than other methods.
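As a rough sense of what these libraries offer, the sketch below uses TensorFlow’s Keras interface to learn a simple classification rule from examples. The data, labels and model size are invented for illustration; a real application would substitute sensor readings, images or sequencing data.

```python
# Illustrative only: a tiny neural-network classifier built with TensorFlow's Keras API.
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 4).astype("float32")   # 200 samples, 4 made-up features
y = (x.sum(axis=1) > 2.0).astype("float32")    # an invented binary label to learn

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)           # learn the rule from examples
print(model.predict(x[:3]))                    # predicted probabilities for three samples
```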

Amazon’s Alexa and Apple’s Siri use natural language processing to make decisions. Oncologists are training IBM Watson to help them diagnose and treat lung cancer. Tesla and Google are competing to bring autonomous, self-driving cars to consumers. The Israeli company Zebra Medical Vision is developing tools for radiology with greater-than-human accuracy.

Engineers are responsible for training the software engines of smart systems. They do so by developing a variety of if-then rules for different applications. To have confidence in the AI-enabled product, whether it is a refrigerator or a car, they must therefore understand the difference between uncertainty and risk, and be able to account for volatility and complexity.

An uncertain technological future requires adaptable and resilient engineers who can see through the fog to create robust engineering designs based on AI. They must understand the capabilities and limitations of both their environments and the cognition afforded through AI. And, they must have the courage to make audacious but safe decisions. Therein lies the challenge for engineering leaders and educators.

firesidewiththedean.wordpress.com

Dr. Ishwar K. Puri is dean of the Faculty of Engineering and professor of mechanical engineering at McMaster University, a Fellow of the Canadian Academy of Engineering, and serves as chair of the Canadian National Council of Deans of Engineering and Applied Science.
