
Topic: The Myth of Sentient Machines

https://www.psychologytoday.com/blog/mind-in-the-machine/201606/the-myth-sentient-machines

Quote by Bobby Azarian:
Simply put, a strict symbol-processing machine can never be a symbol-understanding machine.
« Last Edit: 03. June 2016, 07:43:47 by Kolya »
Funny ... I was at a talk yesterday by John Searle, who more or less said the exact same thing: syntax != semantics.

Edit: Oh, I just saw that he's cited in the article ;-)

Briareos H

Not a good article IMO. And thank you for highlighting the biggest problem with it.
Quote: Put simply, a strict symbol-processing machine can never be a symbol-understanding machine.
I would expect such a claim to require formal proof, which is not provided -- and I am actually convinced that the opposite is true. Instead, the author resorts to an argument from authority ("Searle and Feynman said so!"), then goes off on pointless tangents which he thinks are 'compelling'. The paragraph about the worthlessness of simulation is complete pseudo-science and is really shameful.

Also, reminding the reader that computers use binary as if it were in any way meaningful is misleading. Turing machines don't have to use binary code. Even mentioning such a thing makes it appear as if the author wanted to scream "Look! Computers can only treat ones and zeroes while we are analog, we are so much better!" without saying it out loud because they realized it was a fallacy.

My conviction is that consciousness, reasoning and 'symbol-understanding' are a byproduct of the specialized, synchronous, extremely powerful memory-based classification engines within our brains, controlled by a highly non-linear chemically-mediated decision system aggregating desire, initiative and willpower.

Now that first part, these classification engines, we're getting there: computers based on neural nets are here and are becoming great at classifying. Although we're still far from the same cognitive abilities as biological brains, I'm willing to bet that we're going to reach a Turing-test-shattering breakthrough in natural language recognition and production within the next 10 years; there is nothing technically infeasible about it, we mostly lack processing power.

As for the decision system, an argument could be made (and it is made in the article although in a strange, indirect way) that even though a computer may understand a stimulus, it will lack the willpower to act on it on its own. But our decision mechanisms, although complicated, variable and susceptible to external influence, are hardly magical. I see no compelling argument to say similar mechanisms can't be simulated.
« Last Edit: 03. June 2016, 13:43:43 by Briareos H »
Acknowledged by 3 members: ThiefsieFool, fox, Marvin
I don't like how many times the article mixes the words "never" with "might never", "almost never", "may be impossible", and so on. Yes, there is substantial evidence a Turing machine cannot be made into a truly intelligent, thinking machine and it is true that a perfect simulation of a process is not equal to the process itself. However, that's a moot point when the actual question at hand is "Can a machine outthink humans and try to wipe us out?"
You don't need a conscious AI for that. Not even a perfectly simulated one.

Also, what Briareos said.
Acknowledged by: rccc
He doesn't mention binary computation in the way you suggest, Briareos. What he says is that computing symbols in a binary system does not lead to experiences. As opposed to a biochemical system. It may still be a faulty argument. It's certainly wide open to discussion, but he's not being as polemic as you said.
I tend to agree with his view, for the following reasons: the only approximation we have of what "intelligence" is, is that it would be something like thinking the way we do. (If you have a more definitive description, let's hear it!) Therefore the question is whether an artificial system could be similar to us. Whether that is intelligent is beside the point.
So what makes up a human, then? Well, its experiences do. And that is the point Azarian is making. But to have first-hand human experiences this artificial system would need to live among us undetected. And that is pretty much impossible.
Its body would also have to be extremely close to a human body. Not just for camouflage, but to be able to have the same experiences. Obvious ones would be affection, eating, sleep. But there are many less obvious experiences. For example, it is known that digestive problems have a still-unexplained correlation with depression. Our digestive system shapes how we see the world.
I'm willing to bet that not a single AI scientist is simulating or even taking into account these kinds of experiences and how they form our perception and hence what we call "intelligence". And that is what Azarian means when he says they are trying to take shortcuts that will never work.
In the end, if you really succeeded in either simulating or building everything that makes up a human, what you would have is - a human. That includes death and excludes incredibly fast calculations. Of course there are cheaper ways to come by a human, so it's all rather pointless.

XKILLJOY98

Don't get me started.

It's no myth. What is a brain but a biological computer, and what is a computer but a technological brain?

In the past, people viewed things as impossible that are now a reality; this is very much the same thing.

I for one firmly believe it is very possible.
Put a brain on a glass platter then and see what it thinks about itself and its situation. The point is that you are not your brain. And your body isn't just a machine to carry your head-computer around. Everything you think and therefore everything you consider intelligent cannot be separated from your experience of being a human body. 
And if you don't believe that, then try communicating with a dolphin. It's another brain, just with a different body, right? You may say that it's "not intelligent", but what you are really saying is that its experiences are too different from yours. Therefore you can't understand it. From the point of view of another dolphin it's perfectly intelligent.

An interesting side note is that the human trait which allows us to somewhat imagine what the life of a dolphin or an AI might be like - empathy - is one that the "AI"s we have are very bad at. Try telling Siri/Google Now that you have a deadly disease. You probably weren't expecting an empathetic response anyway. It's just a glorified chatbot after all, a symbol processor that doesn't understand any of these symbols. No intelligence whatsoever. Even if it came up with an empathetic answer, it would be because someone put it there, i.e. it is faked.
Here's an interesting paper about the importance of empathy for developing human level AI: http://www-formal.stanford.edu/cmason/circulation-ws07cmason.pdf
It becomes rather funny when they try to simulate affect in programmatic ways.
R1: If FEELS(In-Love-With(x)) then Assert(Handsome(x))
R2: IF BELIEVES(Obese(x)) then NOT(Handsome(x))
R3: IF BELIEVES(Proposes(x) and Handsome(x)) Then Accept-Proposal(x)
I think I can guess how well this will work out. It's still a processor dealing with symbols. What sexual attraction or love actually mean and can do to one's thoughts will forever escape it. And so it will stay stupid.
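Spelled out in code, those three rules are nothing but lookups over labelled strings. A minimal sketch (my own toy rendering in Python, not the paper's actual implementation; the facts about "Bob" and the rule priority are invented for illustration):

# Toy rendering of rules R1-R3 as plain symbol lookups; nothing here feels anything.
beliefs  = {("Obese", "Bob")}          # BELIEVES(...)
feelings = {("In-Love-With", "Bob")}   # FEELS(...)

def handsome(x):
    # R2 is given priority over R1 here -- an arbitrary choice the rules don't specify.
    if ("Obese", x) in beliefs:        # R2: BELIEVES(Obese(x)) -> not Handsome(x)
        return False
    return ("In-Love-With", x) in feelings   # R1: FEELS(In-Love-With(x)) -> Handsome(x)

def accept_proposal(x, proposes):
    # R3: BELIEVES(Proposes(x) and Handsome(x)) -> Accept-Proposal(x)
    return proposes and handsome(x)

print(accept_proposal("Bob", proposes=True))   # False: "Obese" outranks "In-Love-With"

The program shuffles the symbols exactly as specified and never gets anywhere near what being in love is like, which is the point.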
« Last Edit: 04. June 2016, 08:51:58 by Kolya »
In theory, I believe everything could be simulated eventually - even love. At which point that might become technically possible, and why that would be useful, is another question; and which role quantum mechanics vs. binary systems will play, I don't know. But I don't think that machines need to become perfect simulations of humans to develop something that is somewhat comparable to consciousness and intent and thereby become a threat to everything else.

« Last Edit: 04. June 2016, 10:08:47 by fox »
Quote by fox: develop something that is somewhat comparable to consciousness and intent and thereby become a threat to everything else.

A robot is supposed to bring heavy boxes from A to B. If it encounters an obstacle in its way, it is programmed to calculate whether it can safely drive over said obstacle or whether going around would be faster. And if it recognizes shape and motion in sync with itself, it's supposed to wave its robot arm.  :droid:
Then place a kitten in its way and a mirror on the side. You get an AI with something that is somewhat comparable to consciousness and an intent to drive over a kitten. Fortunately the kitten ran away, singularity averted. 
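To make it concrete, here is roughly what that robot's "mind" amounts to; a minimal sketch with invented names and thresholds, not any real robot's code:

# Hypothetical obstacle logic for the box-carrying robot; all numbers made up.
def handle_obstacle(obstacle_height_cm, drive_over_seconds, detour_seconds):
    # "Decides" by comparing two pre-programmed costs; it has no idea what a kitten is.
    if obstacle_height_cm <= 5 and drive_over_seconds < detour_seconds:
        return "drive over it"
    return "go around"

def mirror_check(shape_matches_self, motion_in_sync):
    # Waves at anything that looks and moves like itself, mirror or not.
    return "wave robot arm" if shape_matches_self and motion_in_sync else "carry on"

print(handle_obstacle(obstacle_height_cm=4, drive_over_seconds=5, detour_seconds=30))  # drive over it
print(mirror_check(shape_matches_self=True, motion_in_sync=True))                      # wave robot arm

The "intent to drive over a kitten" is just the first branch firing on a small, soft obstacle.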

The classic counter argument to "symbol processing is not understanding" usually is "if the simulation is comprehensive enough it will be indistinguishable from intelligence". Wave if you think a comprehensive simulation of infinite situations is possible. What about half-comprehensive?
Quote: The classic counter argument to "symbol processing is not understanding" usually is "if the simulation is comprehensive enough it will be indistinguishable from intelligence". Wave if you think a comprehensive simulation of infinite situations is possible. What about half-comprehensive?

I admit that I'm entirely out of my league here. But why would a machine have to simulate infinite situations? If that were possible, it would be able to foresee the future, but that is not what this is about. Humans can't do that either - we only have to be able to deal with actually emerging situations (using past experiences to adapt/optimize the reactions, plus some gambling/assuming). That might be incredibly complex, but I don't understand why it shouldn't be possible at some point?
« Last Edit: 04. June 2016, 13:22:26 by fox »
A machine cannot adapt like a human because it lacks experience of the world. While it can "learn" that placing the red ball into the cup results in an energy boost, whereas blue balls do nothing, even such a pitifully simple experiment requires pre-programming of what a ball is, what to do with it, and even that energy is a good thing.
Feed a newborn baby and it requires no explanations. Being hungry and being fed is enough, because it has a living body that can experience stuff.
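Just to show how much has to be handed to the machine up front, here is a hypothetical version of that red-ball experiment (names, rewards and numbers are all mine): the balls, the possible actions and the fact that energy is worth having are written in before any "learning" starts.

import random

BALLS   = ["red", "blue"]                 # pre-programmed: what a ball is
ACTIONS = ["place_in_cup", "ignore"]      # pre-programmed: what can be done with one

def energy_gain(ball, action):            # pre-programmed: energy is the goal
    return 1.0 if (ball == "red" and action == "place_in_cup") else 0.0

value = {(b, a): 0.0 for b in BALLS for a in ACTIONS}

for _ in range(1000):                     # trial-and-error "learning"
    ball, action = random.choice(BALLS), random.choice(ACTIONS)
    value[(ball, action)] += 0.1 * (energy_gain(ball, action) - value[(ball, action)])

print(max(ACTIONS, key=lambda a: value[("red", a)]))    # place_in_cup
print(round(value[("blue", "place_in_cup")], 2))        # ~0.0 -- blue balls do nothing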

Humans can deal with an infinite number of situations because they can adapt memories of previous experiences to new situations taking into account the differences. The process of how these memories are formed, reinforced and overwritten, their quality and how they influence each other, and how they make up an image of the world is inseparable from the human experience and the emotions they invoke.

Pretty soon the baby learns that its father's hairy, flat chest does not feed it. It's not as comfortable either. But the father is more likely to expose the baby to new situations, which is interesting because it learns about danger and reward. It will still take several years until it learns about other people's motives and about empathy.

The human experience is just very complex. I don't understand why AI researchers and others have underestimated it for so long and continue to do so, while at the same time being scared of a potential success. It's like telling yourself the old Frankenstein story over and over again. That book was written during the "year without a summer", 1816, by a woman who had recently had a miscarriage. Not by a scientist or engineer.
The part I don't get is why you are convinced that a machine, theoretically at this point, wouldn't be able to emulate all the needed processes (collection of sensory input, analysis, filtering, storage, inter-connecting via a neural network, etc.) at some point? It seems that you yourself are only relying on the complexity argument - which could be quite dangerously short-sighted, imo. A machine like that would certainly have different experiences and arrive at different conclusions, but I don't think that's all that relevant at this point of the discussion.
« Last Edit: 04. June 2016, 16:32:31 by fox »
I'm arguing that without experiencing the world an AI cannot come to any kind of understanding of it. And without experiencing a human life it will not develop anything humans would call intelligence. 

If an AI was placed in a robot body with enough sensors to experience the world and had the same inherent needs and skills as a new born human and was taught (not programmed) for years, it might become an artificial lifeform that develops consciousness, intent and an intelligence befitting its robot life.

But that's not going to happen, because we don't know enough about the skills a human baby inherits. For example, language acquisition is still a mystery, despite or because of Chomsky (who convinced linguists that babies are born with a hereditary grammar for every language in the world, which gets hooked into during language acquisition).

We also wouldn't know how to teach such a robot. It would likely be costly and long-term, and the result might be artificial algae or perhaps a dormouse. Instead the expectation seems to be that feeding a program tonnes of symbols and competing rules will make it connect the dots at some point. Like Google's neural network that is currently reading thousands of romance novels in the hope of enhancing its emotional intelligence.

For 50 years we tried throwing increased processing power at it, getting nowhere. Siri et al. are conceptually not different from the ELIZA script from 1966. And that's still the most human-like intelligence we have come up with: a script that looks for a few keywords and otherwise falls back to stock phrases.
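For anyone who hasn't seen how little is going on under the hood, a toy ELIZA-style script (my own few lines, not Weizenbaum's original and certainly not Siri's actual code) really is just keyword matching plus stock phrases:

import random

# keyword -> canned response; anything else falls through to a stock phrase
RULES = {
    "mother": "Tell me more about your family.",
    "sad":    "Why do you feel sad?",
    "dream":  "What does that dream suggest to you?",
}
STOCK = ["Please go on.", "I see.", "How does that make you feel?"]

def reply(utterance):
    words = utterance.lower().split()
    for keyword, response in RULES.items():
        if keyword in words:
            return response
    return random.choice(STOCK)

print(reply("I had a dream about my mother"))        # keyword hit: canned family question
print(reply("I was told I have a deadly disease"))   # no keyword: random stock phrase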
For the moment you are right, as far as we know. And I tend to agree that it is unlikely something like that will happen before we have figured out pretty much everything about how the human brain works. However, in my opinion this is only a matter of time and I wouldn't underestimate the progress in the related fields - especially with multi-national corporations like Google and Microsoft pressing matters. The complexity argument is not working in the long run - as mankind should've learned from a number of experiences before, in my opinion.
« Last Edit: 04. June 2016, 18:22:17 by fox »

Briareos H

Quote: He doesn't mention binary computation in the way you suggest, Briareos.
Quote by Bobby Azarian: This two-symbol system is the foundational principle that all of digital computing is based upon. Everything a computer does involves manipulating two symbols in some way. As such, they can be thought of as a practical type of Turing machine—an abstract, hypothetical machine that computes by manipulating symbols.
A Turing machine’s operations are said to be “syntactical”, meaning they only recognize symbols and not the meaning of those symbols—i.e., their semantics. Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.
My complaint here was merely that the author didn't need to remind us that computers use binary data in order to introduce the fact that they are Turing machines. He didn't need to mention binary at all, but still does so, awkwardly; I get the feeling that he wanted to imply more but was wary of an association fallacy. Or maybe I'm reading too much into it, that is very possible :p and it's not relevant to my point anyway.
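(For what it's worth, the purely "syntactical" operation the quoted passage describes is easy to picture. A minimal two-symbol Turing-machine sketch, my own toy example: it inverts whatever is on its tape by blind table lookup and nothing else.)

# state, symbol -> new symbol, head move, next state: pure rule lookup, no meaning
RULES = {
    ("flip", "0"): ("1", 1, "flip"),
    ("flip", "1"): ("0", 1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape):
    cells, head, state = list(tape) + ["_"], 0, "flip"
    while state != "halt":
        symbol = cells[head]
        new_symbol, move, state = RULES[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells).rstrip("_")

print(run("010011"))   # 101100 -- the machine never knows what the bits stand for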

My point, and I'll use the rest of your posts as a basis here, is that it would have been fine if he had said a computer could never think like a human, and nothing more. I think so too, because as you rightfully point out there is no true duality between our perception and our thinking. Perceptual integration, memory, pattern building etc. are a continuum of experience that is highly dependent on our physical and chemical states. I will never put that into question, and if anyone's interested in the subject, I heartily recommend one of my favorite works of philosophy, "Phenomenology of Perception" by Maurice Merleau-Ponty, which bridges phenomenological philosophy (albeit a slightly weird version of it, not exactly Husserl), early existentialism and psychology in a remarkably easy-to-follow and rational way.

A computerized brain needs a human experience to think like a human and as such needs the same inputs -- which it will never get, or at least not in our lifetimes. Alright. But to say that it cannot think at all, i.e. that it can never be intelligent and self-reflective, is another thing altogether.

When you look at pictures that went through Google's Deep Dream, most objects get transformed into animal faces. It does so because it was trained to see animal faces everywhere: when you present it with something it doesn't know, it is going to represent it in a way where it can see an animal face in it. I am arguing that if it were trained with enough (i.e. more) neurons, and with a learning set that encompassed the entire web, the way it would represent data when presented with a new input would be in no way different from the way an "intelligent entity living in the web" would represent data. As such, I fully believe that the idea in your last post's second-to-last paragraph (feeding romance novels) is sound, and I don't agree with your conclusion. When triggered in the right way, it could understand and translate any language, it could predict outcomes of complex systems (hello stock market abuse), it could understand the hidden desires and motivations of most internet users and interact with them in a meaningful way (hello targeted ads), and it could create new art in the styles that it learned (which it already does).
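That "it can only see what it was trained to see" behaviour is trivial to reproduce in miniature. A nearest-centroid toy classifier (entirely invented data, nothing to do with Deep Dream's actual architecture) has no choice but to force every new input into one of the classes it already knows:

# Toy nearest-centroid "classifier": it must map any input onto a known class,
# just as a net trained on animal faces sees animal faces everywhere.
CENTROIDS = {
    "dog":  (0.9, 0.1),
    "cat":  (0.1, 0.9),
    "bird": (0.5, 0.5),
}

def classify(features):
    def dist(label):
        return sum((f - c) ** 2 for f, c in zip(features, CENTROIDS[label]))
    return min(CENTROIDS, key=dist)

print(classify((0.8, 0.2)))    # dog
print(classify((40.0, -3.0)))  # still one of dog/cat/bird, however alien the input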

What exactly is the step leading from there to consciousness? From copying Picasso on command to deciding to create an entirely new art style? What is missing? Nobody knows for sure. And I truly believe that nobody today can reasonably argue that self-awareness can not come out of this. If you want my opinion, I'd say it would need a decision-making system and probably meta-level synchronicity circuits. Being able to see itself in time and act upon it. After learning so much on human concepts of consciousness, I'm sure it would get the hint rather quickly.

Finally, I disagree with your pessimism regarding the history of AI. Use of wide, multi-layered neural networks with large data sets only became possible very recently, thanks to distributed computing and efficient data representation. What was done before was purely algorithmic. I don't know how Siri works, but I'm almost certain it only uses very basic learning techniques. Neural networks are extremely computationally intensive, they really are.

EDIT: Don't get me wrong, I do not hold blind faith in Google's neural networks. Maybe they're not optimal, maybe --like real brains-- there needs to be different kinds of neurons for specialized tasks, maybe there needs to be an element of simulated biology. But unlike the author of the article, I see no fundamental reason to dismiss possible intelligence coming out of such systems. In my opinion, the framework of thought which allows him to categorically deny potential sentience is at best unproven, at worst completely false.
« Last Edit: 04. June 2016, 20:48:46 by Briareos H »

XKILLJOY98

It is important to note (in case it hasn't already been brought up) that we have already invented a quantum computer, so computers no longer strictly run on binary. (Also, both Microsoft and Google have AI research teams, and even brilliant scientists such as Stephen Hawking say that it is inevitable.)

Also, if an AI is smart enough it can experience the world and learn. And never say never - don't say it won't happen. Perhaps brain uploading can help us "create" a digital mind.
« Last Edit: 05. June 2016, 22:57:29 by XKILLJOY98 »
Here's an interesting (German) article about the philosopher Nick Bostrom, who wrote a book warning about the AI apocalypse that was then recommended by Bill Gates, Elon Musk and others.
http://www.zeit.de/2016/21/nick-bostrom-oxford-philosoph-kuenstliche-intelligenz

He has no idea how it would happen. Instead his premise is: If it happened, how would it go?
His way of assessing the likelihood of machines reaching human-like intelligence is to poll experts on when they expect it to happen.
Of course we already know that the answer will invariably be a definitive "soon", as it has been for half a century.
Did you read his book? Here is the full version of the Andrew Ng quote mentioned in the Zeit article:
    I think that hundreds of years from now if people invent a technology that we haven’t heard of yet, maybe a computer could turn evil. But the future is so uncertain. I don’t know what’s going to happen five years from now. The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. Hundreds of years from now I hope we’ve colonized Mars. But we’ve never set foot on the planet so how can we productively worry about this problem now?

I'm with Nick Bostrom for sure. We are already trying to develop it so now is the time to think about the consequences.

Here's a bit more about Ng's point of view (which I don't post to prove a point, just because he too has interesting things to say).
http://www.wired.com/brandlab/2015/05/andrew-ng-deep-learning-mandate-humans-not-just-machines/
No, I haven't read Bostrom's book. I'm with Ng, who actually works in AI development, whereas Bostrom is a philosopher. Not because philosophy has nothing interesting to say about the subject, but because there's the usual misunderstanding when software developers speak of defined terms like "artificial intelligence", then it gets mangled by marketing, and then a philosopher hears it. Or your average Joe.

Ng is not talking about human level intelligence, much less sentience, but Bostrom is.

It's like Ng says: you might currently believe that self-driving cars are just around the corner, but in reality "we’re firmly on our way to being safer than a drunk driver." :D
Quote: It's like Ng says: you might currently believe that self-driving cars are just around the corner, but in reality "we’re firmly on our way to being safer than a drunk driver." :D

Autonomous postal bus: a "milestone" for local public transport
http://www.heise.de/newsticker/meldung/Autonomer-Postbus-Ein-Meilenstein-fuer-den-oeffentlichen-Nahverkehr-3249008.html

It’s official: Drone delivery is coming to D.C. in September
https://www.washingtonpost.com/news/the-switch/wp/2016/06/24/its-official-drone-delivery-is-coming-to-d-c-in-september/

Hermes tests delivery by delivery robots
http://www.heise.de/newsticker/meldung/Hermes-testet-Zustellung-per-Liefer-Roboter-3235308.html
« Last Edit: 27. June 2016, 07:41:00 by fox »
Car company said autopilot did not notice the truck because of bright sun

Ex-Navy SEAL Brown is first person to die while using self-driving vehicle

http://www.dailymail.co.uk/news/article-3668916/Former-Navy-SEAL-killed-wheel-Tesla-autopilot-motorist-die-self-driving-car-recorded-near-miss-just-month-earlier.html
Yeah, I heard about it. I want to point out that the actual safety of these systems is an aspect of minor importance to this discussion. Those systems already being out in the wild (and mostly without incidents) is what really counts, I think. Surely these accidents do have an impact on development and political decisions, but I doubt that they will lead to much, if any, delay. My point is that Ng is dead wrong if he thinks that things like self-driving cars are still far away from our reality. Them being not 100% safe only underlines how much we need regulation and careful risk assessment right now.

Btw: this also makes a case against TTIP and Co.
« Last Edit: 02. July 2016, 23:04:56 by fox »
Acknowledged by: Kolya
Good for me. I'm a terrible driver.  :stroke: