In Focus
July/August 2019

High-Tech Future Comes With Risks and Rewards

BY PATRICK INGRAHAM

Throughout works of science fiction, a common trope involves man inventing a machine to do his bidding. Time and time again, the machine rises up, defies its programming, and wreaks havoc on mankind.

As technologies like artificial intelligence, machine learning, automation, advanced robotics, and mobile supercomputing move from the realm of science fiction to reality, humans must be aware that while the machines may not actually rise up and enslave humanity, there are very real consequences if tomorrow’s engineers are unprepared.

Technological Rewards

In a 2015 article in Foreign Affairs, Klaus Schwab, executive chairman of the World Economic Forum, explained that the world is at the beginning of a “fourth industrial revolution.” New technologies are creating new ways of meeting needs and disrupting industries.

“Engineers, designers, and architects,” Schwab wrote, “are combining computational design, additive manufacturing, materials engineering, and synthetic biology to pioneer a symbiosis between microorganisms, our bodies, the products we consume, and even the buildings we inhabit.”

The revolution is progressing all around. In manufacturing, intelligent automated robots are quickly and efficiently carrying out tasks in a fraction of the time required by humans. Public works and government employees are using AI processes to analyze complex data from water and energy systems, allowing them to efficiently maintain and distribute these resources to the public. Blockchain and the internet of things are leading to new ways of conducting trade and commerce and exchanging valuable information. And drones and unmanned autonomous vehicles are providing inspections, transportation, prospecting, extraction, and surveillance in numerous industries.

While all these advances can help engineers carry out their jobs, the risks involved with these technologies could be catastrophic without the proper human analysis, development, and monitoring. As Schwab wrote, leaders and citizens must “together shape a future that works for all by putting people first, empowering them, and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.”

Technological Risks

Peter Neumann is a computer researcher and the senior principal scientist in SRI International’s computer science lab in Menlo Park, California. In his spare time, he is the moderator and aggregator of “The Risks Digest,” an online forum on risks to the public in computers and related systems. He believes his book, Computer-Related Risks, is just as timely today as it was when it was published in 1995.

In the book, Neumann summarizes many real events, with a wide range of causes and effects, involving computer technologies and the people who depend on those technologies. Examples include hacker attacks, communications equipment outages, financial losses, and events involving advanced computerization.

“It is quite remarkable that almost everything in the book is still true and still relevant today,” Neumann says. “In many cases, the situation is even worse, because many of the same problems still continue to recur and because the internet has exploded the opportunities for miscreants to include attacks on almost anyone from potentially every system in the world.”

Some of the stories Neumann has recently highlighted in The Risks Digest have covered a security risk in Bluetooth devices, defective robotic bomb detectors used by the Iraqi military, and a malfunction in a supercomputer designed to predict US stock futures that cost an investor millions.

“Most people think that algorithms and computerization make automated machines faster and better decision-makers than humans,” says Neumann. “While that can be true most of the time, the 1% of the time that it is not can lead to devastating consequences in a number of areas.”

Many computer and software experts, including Neumann, point to the “trolley problem” to highlight how automated machines and AI fail to account for human moral dilemmas in their decision making.

The scenario in the problem features a runaway trolley barreling down the tracks. Ahead on the tracks, five people are tied up and can’t move. In the distance, a person stands next to a lever. If the person pulls this lever, the trolley will switch to a different set of tracks. On the sidetrack, however, there is one person. Your choices: (1) Do nothing, and the trolley kills the five people on the main track, or (2) Pull the lever, diverting the trolley onto the sidetrack where it will kill one person.

Most people, when given the options, would likely opt to pull the lever and sacrifice one person to save five. In AI-driven machines such as autonomous vehicles, that kind of ethical judgment is hard to program; when a machine must make a life-or-death choice, it may not weigh the value of human life the way a person would.
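The difficulty is easy to see even in a toy sketch. A purely count-minimizing rule makes the "obvious" utilitarian choice, but it encodes no notion of the worth of any individual life or of any other moral consideration. The function and names below are hypothetical illustrations, not drawn from any real autonomous-vehicle system:

```python
# A deliberately naive, hypothetical controller for the trolley
# scenario: it reduces the ethical dilemma to pure arithmetic.

def choose_action(casualties_if_nothing: int, casualties_if_switch: int) -> str:
    """Pick whichever action minimizes the casualty count."""
    if casualties_if_switch < casualties_if_nothing:
        return "pull_lever"
    return "do_nothing"

# Classic setup: five people on the main track, one on the side track.
print(choose_action(5, 1))  # prints "pull_lever"
```

Everything a human would agonize over, such as who the people are, whether diverting the trolley makes the agent complicit, and whether counting lives is even the right frame, is invisible to a rule like this. That gap is precisely what makes encoding ethics into autonomous systems so hard.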

Phil Laplante, P.E., a professor of software and systems engineering at Penn State University, believes AI must be closely monitored, especially within critical systems and infrastructures.

“AI is no substitute for human wisdom and common sense,” Laplante says. “In critical systems an error or mistake made by AI can be catastrophic. Software engineers, and really any engineer whose job may force them to come into contact with critical systems, need to be qualified and knowledgeable to ensure risks are mitigated or eliminated before something bad happens. And it will happen if we are unprepared.”

Automation and Workforce Effects

A 2019 report from the Metropolitan Policy Program at the Brookings Institution, Automation and Artificial Intelligence, examines how advances in computing and communication technologies affect different occupations and industries.

The report shows that, although most jobs are not highly susceptible to automation, lower-wage jobs built around routine task content are at the highest risk. Nevertheless, automation will affect tasks in virtually all occupational groups.

“Intelligent machines may well take over from humans many traditionally protected task areas,” the report reads, “but expert opinion converges around several broad areas that appear to be particularly challenging for today’s machines and that likely will remain so in the near future.”

The report goes on to mention that machines cannot readily do “non-routine abstract manual” activities that involve perception, manipulation, dexterity, or physical adaptability. Machines also lack creative intelligence and social intelligence—the ability to think on their feet and work or empathize with human beings.

A similar study from the National Academies of Sciences, Engineering, and Medicine, Information Technology and the US Workforce, suggests that although many routine tasks are susceptible to automation, advances in AI, machine learning, and robotics are making nonroutine tasks susceptible too.

Those include tasks like writing news articles, answering questions, driving, and navigating uneven terrain. To keep pace as automation moves beyond the routine, policymakers and researchers should continually track and analyze workforce data to identify trends, emerging problems, and potential solutions.

How Do Engineers Adapt and Advance in a Tech World?

International associations and US government agencies are taking the first steps toward making emerging technology safer and more secure.

In April 2016, the Institute of Electrical and Electronics Engineers launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The initiative sparked collaboration among policymakers, engineers, designers, developers, and corporations from around the world. This year, as part of the initiative, IEEE released the first edition of its Ethically Aligned Design guide, which addresses how autonomous and intelligent systems can be designed and implemented ethically.

In 2017, the National Institute of Standards and Technology within the Department of Commerce, under an Executive Order by President Trump, requested public input to assess the scope of efforts to educate and train the American cybersecurity workforce, including education curricula, training, apprenticeship programs, and primary school through higher education.

A subsequent report by NIST showed that the country needs immediate and sustained improvements to its cybersecurity workforce. Findings and recommendations included expanding the pool of candidates by retraining those in noncybersecurity fields and increasing participation of women, minorities, and veterans.

“Responsible engineering of AI systems involves engineers ensuring security,” Laplante says. “This starts with education, certification, and licensure to ensure we have qualified, knowledgeable engineers. PEs from other disciplines should get out ahead of these technological trends and learn to build safe and secure software. It might not be tomorrow, but soon every sector and industry will be affected by these technologies.”
