Jun 13, 2014

The Dangers of Machine Learning

So have we worked out how to replicate human thinking? Far from it. Instead, the founding vision has taken a radically different form. AI is all around you, and its success is down to big data and statistics: making complex calculations using huge quantities of information. We have built minds, but they are not like ours.

Their reasoning is unfathomable to humans, and the implications of this development are now attracting concern. As we come to rely more and more on this new form of intelligence, we may need to change our own thinking to accommodate it.

There are two major areas of artificial intelligence.

One is rule-based, where a robot, machine or computer is effectively told exactly how to behave in each situation. It is programmed by a human and will almost always perform as expected.

The other is machine learning, where the device learns all by itself. Although how it learns is seeded with human programming, beyond that it is on its own.

Let us say we want an AI to answer questions about a simple topic: what cats like to eat, for instance. The rule-based approach is to build, from scratch, a database of facts about cats and their dietary habits, plus logical rules for turning those facts into answers.
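To make that concrete, here is a tiny sketch, not from the article, of what a hand-written rule base might look like. The facts and names are invented for illustration.

```python
# Hypothetical, hand-written rule base about cat diets.
# Every fact is authored by a person, so the system can only answer
# questions somebody has explicitly anticipated.
CAT_DIET_RULES = {
    "tuna": "safe in small amounts",
    "cooked chicken": "safe",
    "chocolate": "toxic to cats",
    "onion": "toxic to cats",
}

def ask(food: str) -> str:
    """Answer by looking up an explicit rule; fail loudly if none exists."""
    if food in CAT_DIET_RULES:
        return f"{food}: {CAT_DIET_RULES[food]}"
    return f"{food}: no rule found - a human must write one"

print(ask("tuna"))      # tuna: safe in small amounts
print(ask("avocado"))   # avocado: no rule found - a human must write one
```

The system behaves exactly as written, and the answer to anything outside its rules is simply "I don't know".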

With machine learning, you instead feed in data indiscriminately – internet searches, social media, recipe books and more. After doing things like counting the frequency of certain words and how concepts relate to one another, the system builds a statistical model that gauges the likelihood of cats enjoying certain foods.
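Purely as an illustration, and not anything from the article, the counting step might look something like the Python below. The corpus and candidate foods are made up.

```python
from collections import Counter
import re

# Toy corpus standing in for indiscriminately gathered web text; in reality
# this would be billions of documents from searches, social media and recipes.
corpus = [
    "my cat loves tuna and eats it every day",
    "cats eat chicken happily",
    "never let your cat eat chocolate",
    "tuna is a favourite treat for cats",
]

candidate_foods = ["tuna", "chicken", "chocolate"]

# Count how often each candidate food appears in a sentence that mentions cats.
counts = Counter()
for sentence in corpus:
    words = re.findall(r"[a-z]+", sentence.lower())
    if any(word.startswith("cat") for word in words):
        for food in candidate_foods:
            if food in words:
                counts[food] += 1

# Turn the raw counts into a crude likelihood score.
total = sum(counts.values())
for food, n in counts.most_common():
    print(f"{food}: {n / total:.2f}")
```

Notice that the sentence warning against chocolate still counts in chocolate's favour: the model only sees co-occurrence, not meaning, which is a small hint at how the statistics can mislead.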

Google Translate is a great example of machine learning. Rather than the massive task of hand-coding translation rules for dozens of languages, Google mines huge amounts of text from the web, including documents that already exist in more than one language, and learns which words and phrases tend to go together.
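As a toy illustration of the idea, and certainly not how Google Translate actually works internally, you can imagine simply choosing whichever translation has been seen most often alongside a phrase. The phrase pairs below are invented.

```python
from collections import Counter

# Invented aligned phrase pairs, standing in for parallel text mined from
# the web (e.g. documents published in both English and French).
observed_pairs = [
    ("cat food", "nourriture pour chat"),
    ("cat food", "nourriture pour chat"),
    ("cat food", "aliments pour chats"),
    ("cat food", "nourriture pour chat"),
]

def translate(phrase: str) -> str:
    """Pick whichever translation co-occurred with the phrase most often."""
    candidates = Counter(fr for en, fr in observed_pairs if en == phrase)
    if not candidates:
        return "<no data>"
    return candidates.most_common(1)[0][0]

print(translate("cat food"))  # nourriture pour chat
```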

This works well when most of what it finds online is true and accurate. Where it falls over is when the information online is inaccurate, and often the inaccuracy is cultural rather than accidental: the phrase "obama is the antichrist" appears more than 1 million times on the web.

The real concern is grey areas. Survivalists have worried that the indicators that somebody might be a terrorist look very similar to the indicators that somebody is a survivalist. The US Government has told businesses that anyone buying survival equipment and paying with cash should be reported as a potential terrorist.

We are one step away from a computer determining that you are quite possibly a terrorist. But what makes this very scary is that the computer won’t be able to say why it thinks that. It will just say that, based on what it has learned, this is the conclusion it comes to.

In the early days of AI, “explainability” was prized. When a machine made a choice, a human could trace why. Yet the reasoning made by a data-driven artificial mind today is a massively complex statistical analysis of an immense number of data points. It means we have traded “why” for simply “what”.

Even if a skilled technician could follow the maths, it might not be meaningful. The maths would not reveal why the system made a decision, because the decision wasn't arrived at by a set of rules that a human can interpret.
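Here is a deliberately simplified sketch of that trade-off, reusing the survival-equipment example from above; the rules, signals and weights are all invented.

```python
# The rule-based system can point at the exact rule that fired.
def rule_based_flag(paid_cash: bool, bought_survival_gear: bool):
    if paid_cash and bought_survival_gear:
        return True, "rule: cash purchase of survival equipment"
    return False, "no rule matched"

# The learned system only reports a score distilled from many weighted signals.
def learned_flag(signals, weights, threshold=0.5):
    score = sum(s * w for s, w in zip(signals, weights))
    return score > threshold, f"score={score:.2f} (no human-readable reason)"

print(rule_based_flag(True, True))
print(learned_flag([1, 0, 1, 1], [0.21, 0.05, 0.17, 0.33]))
```

The first answer comes with a reason a person can argue with; the second comes with a number.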

The article I have been quoting from is "Higher State of Mind", New Scientist, 10 August 2013, via The Age. It mentions how Google would show ads asking "Have you ever been arrested?" when somebody searched for a name commonly given to black people.

The stakes are higher now that intelligent machines are beginning to make inscrutable decisions about mortgage applications, medical diagnoses and even whether you are guilty of a crime.