The Intersection of Innovation and Ethical Responsibility

In the 1700s, soon after Ben Franklin created the postal service in the United States, criminals invented mail fraud. In the 1800s, with the telegraph and the telephone, criminals invented wire fraud. In the twentieth century, when technologists invented the internet, it was apparent to anyone who knew history that the invention of new forms of fraud was unavoidable.

The tech sector, to its credit, always looked forward. The problem was that, to its detriment, too few people spent time looking in the rearview mirror, or even accepted the virtue of doing so, long enough to use knowledge of the past to anticipate the problems around the corner.

Within a year of the AI party in Davos, artificial intelligence started to create a broader set of questions for society. The public’s trust in technology had previously centered on privacy and security, but artificial intelligence was now also making people feel uneasy and was quickly becoming a central topic of public discussion.

Computers were becoming endowed with the ability to learn and make decisions, increasingly free from human intervention. But how would they make these decisions? Would they reflect the best of humanity? Or something much less inspiring? It had become increasingly apparent that AI technologies desperately needed to be guided by strong ethical principles if they were to serve society well.

This day had long been in the making. More than a decade before researchers at Dartmouth College held a summer study in 1956 to explore the development of computers that could learn—marked by some as the birth of academic discussion about AI—Isaac Asimov had written his famous “three laws of robotics” in the short story “Runaround.” It was a science fiction account of humanity’s attempt to create ethical rules that would guide the autonomous, AI-based decision-making of robots. As dramatically illustrated in the 2004 film I, Robot starring Will Smith, it did not go well.

AI has developed in fits and starts since the late 1950s, most notably for a short time in the mid-1980s with a flurry of hype, investment, start-ups, and media interest in “expert systems.” But why did it burst onto the scene in such a big way in 2017, sixty years later? It wasn’t because it was a fad. On the contrary, it reflected trends and issues that were far broader and had long been converging.

There is no universally agreed-upon definition of AI across the tech sector, and technologists understandably advance their own perspectives with vigor. In 2016, I spent some time discussing the emergence of new AI issues with Microsoft’s Dave Heiner, who at the time was working with Eric Horvitz, who had long led much of our basic research in the field. When I pressed Dave, he provided me with what I still regard as one helpful way to think about AI: “AI is a computer system that can learn from experience by discerning patterns in data fed to it and thereby make decisions.” Eric uses a somewhat broader definition, suggesting that “AI is the study of computational mechanisms underlying thought and intelligent behavior.” While this often involves data, it can also be based on experiences such as playing games, understanding natural languages, and the like. The ability of a computer to learn from data and experience and make decisions—the essence of these definitions of AI—is based on two fundamental technological capabilities: human perception and human cognition.
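As a purely illustrative aside, the kind of system Heiner describes can be sketched in a few lines of Python: a program that discerns a pattern in labeled examples and then makes a decision about new input. The toy fraud-detection data and the simple nearest-neighbour rule below are assumptions made for this example, not anything drawn from the book.

# A minimal sketch of Heiner's definition in code: the program "learns"
# by keeping labeled examples, discerning which known pattern a new
# input most resembles, and then making a decision.
# The data and the nearest-neighbour rule are illustrative assumptions.

def squared_distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decide(training_data, new_point):
    # Decide the label for new_point by copying the label of the
    # closest example seen so far (a one-nearest-neighbour rule).
    closest_features, closest_label = min(
        training_data,
        key=lambda example: squared_distance(example[0], new_point),
    )
    return closest_label

# "Experience": feature vectors (transaction amount, count of prior
# complaints) paired with human-assigned labels.
training_data = [
    ((120, 0), "legitimate"),
    ((950, 7), "fraud"),
    ((110, 1), "legitimate"),
    ((880, 9), "fraud"),
]

# The system now makes decisions about inputs it has never seen.
print(decide(training_data, (900, 6)))  # fraud
print(decide(training_data, (100, 0)))  # legitimate

Real AI systems replace this hand-written rule with statistical models trained on vast amounts of data, but the shape of the idea is the same: patterns discerned in past experience drive decisions about new situations.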

Human perception is the ability of computers to perceive what is happening in the world the way humans do through sight and sound. At one level, machines have been able to “see” the world since the camera was invented in the 1830s. But it always took a human to understand what was depicted in a photograph. Similarly, machines have been able to “hear” since Thomas Edison invented the phonograph in 1877. But no machine could understand or transcribe what it heard as accurately as a human being.

Excerpted from pages 192 to 194 of ‘Tools and Weapons’ by Brad Smith and Carol Ann Browne
