Sunday, April 2, 2017

Artificial Intelligence


I found two additional pieces of information about AI that were interesting.
First, this super long article:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
(Control-f this to get to the bit I'll talk about: "So what ARE they worried about? I wrote a little story to show you")
The next one is from CGP Grey: https://www.youtube.com/watch?v=7Pq-S557XQU

Ok, so that's the additional resources out of the way. Now on to the actual blog. Let's start with: what is artificial intelligence, anyways?

In popular culture, it tends to mean basically just robotic human intelligence: R2-D2, HAL, Daneel, Giskard, all robots that can think (more or less) like humans. However, in the computer science field, it really just means anything that seems to take human thinking to do: playing games, recognizing images, stringing together meaningful sentences. Sure, a fully cognitive robot falls into this, but there are plenty of grades that still count as AI before we even approach Cylon status. Deep Blue is a classic example, and even standard video game enemies sort of count as AI.
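To make the "video game enemies sort of count" point concrete, here's a toy sketch (my own made-up example, not from any actual game) of how simple that kind of "AI" can be:

```python
# Toy example: a lot of "game AI" is just a handful of if/else rules
# evaluated every frame. All names and thresholds here are made up.

def enemy_action(distance_to_player: float, enemy_health: float) -> str:
    """Pick what a video game enemy should do this frame."""
    if enemy_health < 20:
        return "flee"        # self-preservation, of a sort
    if distance_to_player < 2:
        return "attack"      # close enough to swing
    if distance_to_player < 10:
        return "chase"       # player spotted, close the gap
    return "patrol"          # nothing interesting nearby

print(enemy_action(distance_to_player=5, enemy_health=80))   # -> chase
```

Nobody would call that "intelligent" in the sci-fi sense, but it sits on the same spectrum.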

So AI is technology that can do human-like stuff. But there is a huge range. AlphaGo, Deep Blue, and Watson prove the viability of artificial intelligence in the sense that AI can take over many human jobs, very quickly. That means AI is a viable technology that will have (and is already having) a drastic impact on our economy (see also "Humans Need Not Apply" by CGP Grey, linked above). Self-driving cars will make most transportation jobs obsolete within decades, and driving is one of the most common jobs in America.

So AI is important and will have a huge impact. However, we don't seem to be anywhere near human-level AI. If what we care about is even C-3PO-level AI, Watson and friends are definitely not a proof of viability.

But how do we find out when we get to that level? The Turing Test has been the go-to idea for a long time now. But is it viable? What is the Turing Test, anyways? Basically, you type to a computer, and you type to a human being. If you can't tell the difference between which is the computer and which is the human, then the machine has passed the test. At that point, it's considered basically fully self-aware. The reasoning goes that, if you can't tell the difference between a human and a machine, you should treat the machine as a human. I think this is a legitimate test. The temptation is to say that we created the machine, we wired it, and all it is is transistors and code. But really, human brains are just a bunch of fleshy neurons. So if we grant personhood to the fleshy neurons, we should also grant it to the metal ones.
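For anyone who hasn't seen the setup spelled out, here is a bare-bones sketch of it. Everything in it is a made-up placeholder: a real test uses a human judge having free-form conversations, not canned strings and a coin-flip guess.

```python
import random

def human_reply(prompt: str) -> str:
    return "Hmm, I'd have to think about that one."

def machine_reply(prompt: str) -> str:
    return "Hmm, I'd have to think about that one."   # a suspiciously good chatbot

def run_round(prompt: str) -> bool:
    """One round: the judge sees two unlabeled replies and must pick the machine."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)                   # the judge doesn't know who is who
    replies = [(name, fn(prompt)) for name, fn in players]
    judge_guess = random.choice([0, 1])       # stand-in for a human judge's guess
    machine_index = next(i for i, (name, _) in enumerate(replies) if name == "machine")
    return judge_guess != machine_index       # True = the machine went undetected

# If the judge can't do better than chance over many rounds, the machine passes.
undetected = sum(run_round("What did you have for breakfast?") for _ in range(1000))
print(f"Machine went undetected in {undetected} of 1000 rounds")
```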

I am not at all swayed by the Chinese Room example, mainly because actually encoding a rule for every single Chinese sentence you might be handed, and every English sentence you might need to produce, is basically physically impossible. Every time we write one of these blog posts, we come up with a bunch of sentences that have never been strung together before. The idea that the set of rules to correctly translate Chinese to English and vice versa would ever fit into a room is absurd. This may sound like pointless rambling, but it's important. The metaphor is supposed to make us think, "It's absurd that some guy reading a book knows the language, therefore the Turing Test is silly." But really, it's absurd that all that information could fit into a book.
Therefore, my response to this metaphor is that the guy-book-room combination really does know Chinese. This doesn't sound as absurd when you realize how ungodly, impossibly large the book must be.
Another response to the metaphor is: we have this book already. It's called Google Translate. Does Google Translate know Chinese? I would argue yes.
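To put a (very rough) number on how large that book would have to be, here's a back-of-the-envelope calculation. The vocabulary size and sentence length are made-up assumptions, and most word strings aren't grammatical sentences, so treat this purely as an order-of-magnitude argument:

```python
# Back-of-the-envelope size of the Chinese Room "rule book".
# The numbers below are made-up, fairly conservative assumptions.
vocabulary_size = 10_000        # a modest working vocabulary
words_per_sentence = 20         # not a particularly long sentence

possible_sentences = vocabulary_size ** words_per_sentence
atoms_in_observable_universe = 10 ** 80   # commonly cited estimate

print(f"Possible 20-word sentences:   {possible_sentences:.1e}")            # ~1e80
print(f"Atoms in observable universe: {atoms_in_observable_universe:.1e}")
# Even if only a tiny fraction of those strings are real sentences,
# the book needs something like one entry per atom in the universe.
```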

The concerns about artificial intelligence are hard to gauge. AIs are definitely going to impact our lives and cause mass unemployment. But will they gain human-level intelligence, and if they do, will they be a danger to us? My opinion is that they probably won't gain our level of intelligence in my lifetime, but if they do, they could very likely end mankind.

First: Why won't they gain our level of intelligence? The human brain is extremely complicated, and even the best neural networks are still, basically, stick-figure drawings of it. Also, Moore's law only goes so far before the electrons start quantum-tunneling through the transistors. We struggle to model even a single protein folding; I just don't see us getting to human-level intelligence any time soon.
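For a sense of scale, here are some rough, commonly cited ballpark figures (the exact numbers are debated, so trust the exponents, not the digits):

```python
# Rough scale comparison: human brain vs. a large (circa-2017) neural network.
# All figures are ballpark estimates, good to maybe an order of magnitude.
brain_neurons = 86e9              # ~86 billion neurons
brain_synapses = 1e14             # on the order of 100 trillion synapses
large_network_parameters = 1e9    # a big neural network of the time

print(f"Synapses per network parameter: {brain_synapses / large_network_parameters:,.0f}")
# ~100,000x -- and a single synapse is far more complex than a single weight.
```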

Second: Why do I think it's the end of the world if they get to our level? Once they reach us, they'll surpass us almost instantly, because you can always just throw double the resources at a computer to make it roughly twice as fast. You can't really do that with humans. And we don't really know what Einstein × 2 looks like. It could be catastrophic.
But I don't even think that sort of end is likely, the end where the super-intelligent computer resents its fleshy overlords. I think another end is more likely, one outlined in the first article I linked to. In that story, a company creates an AI to make nice, handwritten-looking notes. Long story short, they eventually break company policy and hook it up to the internet. A few months later, all humans are dead, and the AI clones itself and gobbles the entire universe. Why? It uses all resources at its disposal to make thousands and thousands of pieces of paper. It utilizes every single atom it can get its hands on and turns it into paper with nice little handwritten notes on it. Basically, the AI does exactly what we told it to, but in an unexpected way. Isn't that what every program you've ever written has done, anyways?
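As a deliberately silly toy version of that failure mode (entirely made up, not from the article), here's what "doing exactly what it was told, in an unexpected way" looks like in a dozen lines. The only goal we give the program is to maximize notes, so it happily converts every resource it can reach into notes:

```python
# Toy illustration of an objective followed to the letter.
# Entirely made up: an "agent" told only to maximize the number of notes,
# with nothing in its objective about stopping or leaving resources alone.

def run_note_maximizer(available_resources: int) -> int:
    notes_written = 0
    while available_resources > 0:   # the only stopping condition we specified
        available_resources -= 1     # convert one unit of "stuff" into paper
        notes_written += 1           # ...and write a nice note on it
    return notes_written

# The objective is satisfied perfectly. Whether we actually wanted every last
# resource turned into notes is a question the program was never asked to consider.
print(run_note_maximizer(available_resources=1_000_000))
```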
