Simply put, a strict symbol-processing machine can never be a symbol-understanding machine.
R1: IF FEELS(In-Love-With(x)) THEN Assert(Handsome(x))
R2: IF BELIEVES(Obese(x)) THEN NOT(Handsome(x))
R3: IF BELIEVES(Proposes(x) AND Handsome(x)) THEN Accept-Proposal(x)
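To make the point concrete, here is a minimal sketch (my own illustration, not from the original post) of R1-R3 as pure symbol manipulation in Python. The predicate names and the `apply_rules` helper are hypothetical; the program matches and rewrites tokens, with no grasp of what "Handsome" or "In-Love-With" mean.

```python
def apply_rules(feels, believes):
    """Derive conclusions from mental-state symbols by applying R1-R3."""
    asserted = set()
    # R1: IF FEELS(In-Love-With(x)) THEN Assert(Handsome(x))
    for pred, x in feels:
        if pred == "In-Love-With":
            asserted.add(("Handsome", x))
    # R2: IF BELIEVES(Obese(x)) THEN NOT(Handsome(x))
    for pred, x in believes:
        if pred == "Obese":
            asserted.discard(("Handsome", x))
    # R3: IF BELIEVES(Proposes(x) AND Handsome(x)) THEN Accept-Proposal(x)
    accepted = {x for pred, x in believes
                if pred == "Proposes" and ("Handsome", x) in asserted}
    return asserted, accepted

asserted, accepted = apply_rules(
    feels={("In-Love-With", "Bob")},
    believes={("Proposes", "Bob")},
)
print(accepted)  # {'Bob'}
```

Swap the tokens for gibberish strings and the program behaves identically, which is exactly the "syntax without semantics" worry.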
develop something that is somewhat comparable to consciousness and intent, and therefore become a threat to everything else.
The classic counterargument to "symbol processing is not understanding" is usually "if the simulation is comprehensive enough, it will be indistinguishable from intelligence". Wave if you think a comprehensive simulation of infinitely many situations is possible. What about a half-comprehensive one?
He doesn't mention binary computation in the way you suggest, Briareos.
This two-symbol system is the foundational principle on which all of digital computing is based. Everything a computer does involves manipulating two symbols in some way. As such, a computer can be thought of as a practical type of Turing machine—an abstract, hypothetical machine that computes by manipulating symbols. A Turing machine's operations are said to be "syntactical", meaning they only recognize symbols, not the meaning of those symbols—i.e., their semantics. Even the word "recognize" is misleading because it implies a subjective experience, so perhaps it is better to say simply that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.
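A tiny two-symbol machine makes the "syntactical" point vivid. The sketch below (my own illustration; the `run` helper and the transition-table format are hypothetical, not a standard API) drives a tape head with a lookup table mapping (state, symbol) to (write, move, next state). It inverts every bit, yet nothing in it "knows" that 0 and 1 denote anything at all.

```python
def run(tape, rules, state="start"):
    """Execute a transition table over a finite tape of symbols."""
    cells = list(tape)
    pos = 0
    # Halt when the machine enters the "halt" state or runs off the tape.
    while state != "halt" and 0 <= pos < len(cells):
        write, move, state = rules[(state, cells[pos])]
        cells[pos] = write                 # write a symbol
        pos += 1 if move == "R" else -1    # move the head
    return "".join(cells)

# Rules that invert every bit, scanning left to right.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run("1011", invert))  # 0100
```

Replace "0"/"1" with any other pair of tokens and the machine is unchanged: it is sensitive to symbol shapes, not to what they stand for.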
I think that hundreds of years from now if people invent a technology that we haven’t heard of yet, maybe a computer could turn evil. But the future is so uncertain. I don’t know what’s going to happen five years from now. The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. Hundreds of years from now I hope we’ve colonized Mars. But we’ve never set foot on the planet so how can we productively worry about this problem now?
It's like Ng says: you might currently believe that self-driving cars are just around the corner, but in reality "we're firmly on our way to being safer than a drunk driver."