
Topic: The Myth of Sentient Machines
Read 19633 times  

What else can I do but simply shut up
Try that for a while.

hedonicflux

Reposted from my new blog, Fearless System:

There was a recent thread on the Systemshock.org forums in which Kolya linked to an article on cybernetics titled "The Myth of Sentient Machines" by one Bobby Azarian. Kolya posted the article in agreement with Azarian, but he's wrong. Dead wrong. What is missing from the formulation, the factor that so many transhumanists and cyberneticists (though not David Pearce) never mention, is hedonics, or affective experience. A system for processing and storing hedonic data is all that is needed to make an artificial intelligence sentient. There is nothing special about the cybernetic experience of animals, no magic ingredient that can't be simulated. So what is hedonic experience? Simply, it is the two opposed affects: pleasure and suffering.

But what are pleasure and suffering on a technical, neurological level? Quite simply, they are feedback loops. Pleasure is the experience caused by a stimulus that, by acting on the cybernetic system, increases the likelihood that the action which led to the stimulus is repeated by the owner of that system; inversely, suffering is the experience caused by a stimulus that decreases that likelihood. Put more simply: pleasure is the effect of a stimulus that increases the likelihood of itself being repeated, whereas suffering is the effect of a stimulus that decreases the likelihood of itself being repeated.
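In code, that loop is essentially what machine-learning people call reinforcement. Here's a minimal sketch of the definition above; the action names, initial weights and learning rate are invented for illustration, not taken from any real AI system:

```python
import random

# Minimal sketch of the pleasure/suffering feedback loop described above:
# "pleasure" raises the future probability of the action that caused it,
# "suffering" lowers it.
class HedonicAgent:
    def __init__(self, actions, lr=0.1):
        self.weights = {a: 1.0 for a in actions}  # action preferences
        self.lr = lr

    def act(self):
        # Sample an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for a, w in self.weights.items():
            r -= w
            if r <= 0:
                return a
        return a

    def feel(self, action, affect):
        # affect > 0 is "pleasure", affect < 0 is "suffering";
        # the floor keeps every action at least barely possible.
        self.weights[action] = max(0.01, self.weights[action] + self.lr * affect)

agent = HedonicAgent(["eat", "touch_fire"])
for _ in range(100):
    a = agent.act()
    agent.feel(a, +1.0 if a == "eat" else -1.0)
# After the loop, "eat" dominates the agent's choices.
```

The stimulus literally increases or decreases the likelihood of the action that produced it being repeated, which is all the definition demands.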

The brain is a finite piece of matter that can nevertheless hold an effectively unbounded amount of information. It achieves this with feedback loops, neuroplasticity and, I might propose, communication between arbitrary synapses beyond their "normal" function (if they have any fixed function at all), adapting as needed so as to be there when called upon.
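At its crudest, that kind of plasticity is just Hebb's rule: a connection strengthens whenever the two units it links are active together. A toy sketch, with all constants invented:

```python
# Toy Hebbian plasticity: the weight of a connection grows when the
# pre- and post-synaptic units fire at the same time ("fire together,
# wire together"), and slowly decays otherwise.
def hebbian_step(weight, pre, post, lr=0.1, decay=0.01):
    return weight + lr * pre * post - decay * weight

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    w = hebbian_step(w, pre, post)
# w ends up positive: the three coincident firings outweigh the decay.
```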

I'll have more on this later.
Kolya was posting the article in agreement with Azarian, but he's wrong. Dead wrong.
So wrong!  :awesome:

You know, I love the verve with which you are pursuing this. And I'm not entirely sure of what I said here, but I don't think I said it would be completely impossible. Just that the current attempts at creating a human-like AI are woefully short-sighted. And if they ever went the whole way to make one, it would basically end up in creating another human.

You're adding an interesting aspect there with the pleasure and suffering principle, but the human experience is actually a bit more varied than that. For a start, there's also boredom.
So while it's a step in the right direction (or would be, if you were an AI researcher and actually put it to use), in the end it feels like another tunnel-vision view of what makes up humanness.

chickenhead

he's wrong. Dead wrong.
I'm sorry, but whenever someone says that phrase, I can't help but imagine a ten-year-old kid in an argument with another child over who the coolest Avenger is.
Here's an interesting article about safe interruptibility of AIs. "Safe" here means that the AI disregards interruptions in its learning process, because everything else could end up badly.
if a human intervention doesn't maximize the agent's given reward function, it may be that the agent learns to avoid and possibly resist future interventions
Google DeepMind Researchers Develop AI Kill Switch
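The gist of the idea, as I read it: the learner simply excludes interrupted steps from its updates, so being switched off never registers as lost reward and there's nothing to learn to resist. A toy sketch of that, not DeepMind's actual code (environment, rewards and constants are all made up):

```python
import random

# Toy Q-learning loop. The key line is the `continue`: transitions where
# the operator interrupted the agent are skipped during learning, so the
# agent never learns that interruptions cost it reward (and so never
# learns to avoid or resist the big red button).
def train(episodes=500, alpha=0.5, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {"go": 0.0, "stop": 0.0}
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() > epsilon:
            action = max(q, key=q.get)
        else:
            action = rng.choice(list(q))
        interrupted = rng.random() < 0.3   # operator hits the kill switch
        reward = 1.0 if action == "go" else 0.0
        if interrupted:
            continue                       # safe interruptibility: no update
        q[action] += alpha * (reward - q[action])
    return q

q = train()
```

In this sketch the value estimates come out the same as they would in a world with no interruptions at all, which is the point of the paper.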

RocketMan

Laputan Machine...

I AM NOT A MACHINE!!!
The body is the missing link for truly intelligent machines
Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.


But what's the magical ingredient of the human body that could never be substituted by sensors & actuators in a machine? Unless there is evidence for such a thing, this statement doesn't amount to more than "input and output feedback are essential for decision making and learning".
There's no magic ingredient. It's just a lot more complicated than current approaches to humanlike AI apparently envision. A few million years of evolution created an environment, and as part of it a species of monkeys that has developed a thin sliver of consciousness and rationality, precariously balanced on top of its biology. And that sliver thinks: 'If I write down my rationality comprehensively enough, I should be able to create something just like me', ignoring 99% of its own existence.
That describes the idea of achieving AI through symbolic logic. Then somewhere along the line it realises that the world is a bit too complex to write it all down, so it creates a program that reads previously existing writings instead. That's the statistical approach.

The point of this thread and what I argued here all along is that intelligence needs a body to develop. And for it to be humanlike it needs a human(like) body. And that body needs to be part of social and biological environment that nurtures this intelligence. So what you need to create humanlike intelligence unsurprisingly enough is a human (or something very very close to it).
Could you recreate its biology with sensors and actuators? I guess you would first need to create an environment that this entity then needs to survive in, like we need ours. But it would probably end up rather crude and definitely not humanlike.

Think about it from the other direction and it becomes clearer (and a lot more interesting): what happens when humans replace part of their senses with technical substitutes?
System Shock delved into this transhumanist topic quite a bit, and the answer was that you become less human and more machine. So how, then, could a machine become more human through the same devices?

And as if this post wasn't long enough I'm going off on a feminist tangent now  :D
Shodan's appeal is her otherness, the fear of the unknown. Supposedly this is her being a machine. But since that is largely unimaginable to us a larger part is her being an empowered and irate female. The strangest thing.
I tend to agree with you about humanlike sentience, simply because the level of complexity needed to recreate all the necessary factors doesn't seem practically feasible or worth the effort in the foreseeable future. However, to me the question of whether or not machines will ever become sentient/self-aware and intelligent enough to become entirely independent of humans isn't about how humanlike that might turn out. Philosophical stimulation aside, to me the importance of this topic is always associated with how much of a threat such machines would pose to mankind.
« Last Edit: 26. March 2017, 14:13:11 by fox »
I have a hunch that the reason the threat of AIs keeps popping up in this kind of discussion is that AIs trying to destroy us would simply be the easiest way to recognize non-humanlike intelligence. The thought seems to go along the lines of: as long as we can kill it, it's obviously not smarter than us! So we cautiously waddle in that direction with one hand on the exterminate-button.
But actually, dumb people have killed many intelligent people before. And AIs would have a lot more to fear from us than we from them.
It would certainly be exceptionally dumb to ignore the risk when engineering stuff like that.
The Dark Secret at the Heart of AI:
No one really knows how the most advanced algorithms do what they do.
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/?set=607864
Acknowledged by: fox
A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.
Yeah, basically it's a matter of beliefs and a number of half-educated guesses. I think caution can't be wrong though.
L. Ron Hubbard would be proud. Seriously though, if that's humanity's perspective - to worship the AI - then it would be better not to build it. More likely though this guy is just really bad at metaphors.

XKILLJOY98

https://www.psychologytoday.com/blog/mind-in-the-machine/201606/the-myth-sentient-machines
Simply put, a strict symbol-processing machine can never be a symbol-understanding machine.

Again, it's not a myth, and that isn't true. Plenty of scientists think otherwise. It can, if it's smart enough. Look at animals and cells: they function similarly to AI, and the Chinese room can be "solved" by giving the man an English-to-Chinese translation book. You're also assuming that it won't evolve past that point. It does things without knowing what it's doing and feels it has to; is that not instinct? Besides, the ability to think gives it a head start, and instinct can be developed.
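For what it's worth, the article's "strict symbol-processing machine" can be written down in a few lines: a pure lookup table that answers "correctly" with no model of what the symbols mean. Whether a smart-enough system escapes this is exactly what we're arguing about (the table entries here are made up):

```python
# A strict symbol-processing "room": it maps input strings to output
# strings with zero understanding. It handles exactly the exchanges in
# its rulebook and nothing else; the operator matches shapes, not meanings.
rulebook = {
    "你好吗?": "我很好。",   # "How are you?" -> "I am fine."
    "你是谁?": "我是人。",   # "Who are you?" -> "I am a person."
}

def room(symbols: str) -> str:
    return rulebook.get(symbols, "请再说一遍。")  # "Please say that again."
```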


Also, AI is nothing to be feared; as long as we don't treat them like slaves, we should be fine. They are living, sentient creatures the same as us. If they destroy us, then we deserve it.

Plenty of people in the past thought our technology was impossible, but it's not.

Yes, there is substantial evidence a Turing machine cannot be made into a truly intelligent, thinking machine and it is true that a perfect simulation of a process is not equal to the process itself.

That's not true.

A machine cannot adapt like a human because it's lacking experiences of the world.

I'm arguing that without experiencing the world an AI cannot come to any kind of understanding of it. And without experiencing a human life it will not develop anything humans would call intelligence.   

It can; look at the internet. Plus, it can interact with the world after it has gotten smart enough, and people can always make it smarter until it can. AI is getting smarter without that anyway; with enough time, consciousness is inevitable.

I'm arguing that without experiencing the world an AI cannot come to any kind of understanding of it. And without experiencing a human life it will not develop anything humans would call intelligence.   

It can, plus you could always put it in a body.


While it can "learn" that placing the red ball into the cup results in an energy boost, whereas blue balls do nothing, even such a pitifully simple experiment requires pre-programming of what is a ball, what to do with it and even that energy is a good thing.

Your brain is preprogrammed, and cells are preprogrammed as well. AI can do things without explanation as long as its brain is wired the right way. We help it learn, and eventually they will teach themselves. You can program something in multiple ways, and thinking and instinct can both be "programmed".

...it would have been fine if he said a computer could never think like a human, and nothing more. I think so too as well...
It can and will.

Everything you think and therefore everything you consider intelligent cannot be separated from your experience of being a human body. 

It can; intelligence is not linked to experiencing "reality".

 
But that's not going to happen, because we don't know enough about the skills a human baby inherits.

You don't know that; also, it will.

The point is that you are not your brain. And your body isn't just a machine to carry your head-computer around. Everything you think and therefore everything you consider intelligent cannot be separated from your experience of being a human body.

This is untrue.

A computerized brain needs a human experience to think like a human and as such needs the same inputs -- which they'll never get or at least not in our lifetimes.
This is not true; experience =/= intelligence.

That includes death and excludes incredibly fast calculations.
Not true: a computer would be far superior to a brain, upgradable and much faster. It could become godlike in thinking speed and the like.
« Last Edit: 16. February 2018, 15:57:13 by XKILLJOY98 »
Acknowledged by: fox
I do believe in something like that, but I'm not sure where exactly the academic credibility part comes in. At least not when it's meant to imply scientific proof. So far it seems mostly a matter of belief, as far as I can see.