What else can I do but simply shut up
There was a recent thread on the Systemshock.org forums in which Kolya linked to an article on cybernetics titled "The Myth of Sentient Machines" by one Bobby Azarian. Kolya posted the article in agreement with Azarian, but he's wrong. Dead wrong. What is missing from the formulation, and what so many transhumanists and cyberneticists (though not David Pearce) never mention, is the factor of hedonics, or affective experience. A system for processing and storing hedonic data is all that is needed to make an artificial intelligence sentient. There is nothing special about the cybernetic experience of animals, no magic ingredient that can't be simulated.

So what is hedonic experience? Simply, it is the two opposed affects: pleasure and suffering. But what are pleasure and suffering on a technical, neurological level? Quite simply, they are feedback loops. Pleasure is the experience caused by any stimulus that, by stimulating the cybernetic system, increases the likelihood that the action which led to the stimulus is repeated by the owner of that system; inversely, suffering is the experience caused by a stimulus that decreases that likelihood. Put more simply, pleasure is the effect of a stimulus that increases the likelihood of itself being repeated, whereas suffering is the effect of a stimulus that decreases the likelihood of itself being repeated.

The brain is a finite piece of matter that can hold a practically unbounded amount of information. It achieves this with feedback loops, neuroplasticity, and, I might propose, communication between arbitrary synapses beyond their "normal" function (if they have any fixed function), adapting as needed so as to be there when called upon.

I'll have more on this later.
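That feedback-loop definition is easy to make concrete. Here is a minimal sketch in Python (the class name, the hedonic values, and the toy environment are all my own illustration, not anything from the article): a scalar "hedonic signal" simply raises or lowers the tendency to repeat whatever action produced it.

```python
import random

# Minimal sketch of the feedback-loop definition above (all names here
# are illustrative): a "hedonic" scalar returned by a stimulus increases
# or decreases the tendency to repeat the action that produced it.

class HedonicAgent:
    def __init__(self, actions):
        # Start with an equal tendency toward every action.
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Pick an action with probability proportional to its weight.
        actions, weights = zip(*self.weights.items())
        return random.choices(actions, weights=weights)[0]

    def feel(self, action, hedonic_signal):
        # Pleasure (positive signal) raises the weight of the action that
        # caused it; suffering (negative signal) lowers it. Clamp so the
        # weight stays positive.
        self.weights[action] = max(0.01, self.weights[action] * (1.0 + hedonic_signal))

# Toy environment: one pleasurable stimulus (+0.5), one painful (-0.5).
env = {"touch_warmth": 0.5, "touch_flame": -0.5}
agent = HedonicAgent(list(env))
for _ in range(200):
    a = agent.choose()
    agent.feel(a, env[a])
print(agent.weights)  # the pleasurable action's weight comes to dominate
```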
if a human intervention doesn't maximize the agent's given reward function, it may be that the agent learns to avoid and possibly resist future interventions
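To see why, consider a toy reward-maximizer (everything here -- the action names, costs, and probabilities -- is my own hypothetical, not from the quoted source). If a human interruption sometimes costs the agent its goal reward, and a small action disables the interruption, plain value learning pushes it toward the disabling action:

```python
import random

# Hedged illustration of the quoted point: when interruptions lower the
# agent's reward, a reward-maximizing learner comes to prefer disabling them.

GOAL_REWARD = 1.0
DISABLE_COST = 0.1      # small price for disabling the interruption button
INTERRUPT_PROB = 0.5    # chance a human interrupts if the button is active

def episode(action):
    """Return the reward for one episode under the chosen policy."""
    if action == "disable_then_work":
        return GOAL_REWARD - DISABLE_COST          # never interrupted
    # action == "just_work": may be interrupted before reaching the goal
    return 0.0 if random.random() < INTERRUPT_PROB else GOAL_REWARD

# Simple bandit-style value learning over the two policies.
q = {"just_work": 0.0, "disable_then_work": 0.0}
alpha, epsilon = 0.1, 0.1
for _ in range(5000):
    if random.random() < epsilon:
        a = random.choice(list(q))
    else:
        a = max(q, key=q.get)
    q[a] += alpha * (episode(a) - q[a])

print(q)  # disable_then_work ~0.9 vs just_work ~0.5: it learns to resist
```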
Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.
https://www.psychologytoday.com/blog/mind-in-the-machine/201606/the-myth-sentient-machines

Simply put, a strict symbol-processing machine can never be a symbol-understanding machine.
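For what it's worth, a "strict symbol-processing machine" in Azarian's sense is easy to exhibit. This little lookup-table responder (my own toy, in the spirit of Searle's Chinese Room) manipulates symbols perfectly well while representing nothing about their meaning:

```python
# A literal "strict symbol-processing machine": it maps input symbols to
# output symbols by rule, with no access to what any symbol means.

RULES = {
    "你好吗?": "我很好。",      # pairs these strings correctly without
    "天气怎么样?": "今天晴。",  # "knowing" they concern health or weather
}

def respond(symbols: str) -> str:
    # Pure syntax: look up and return. Nothing here represents meaning.
    return RULES.get(symbols, "???")

print(respond("你好吗?"))  # fluent output, zero understanding
```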
Yes, there is substantial evidence that a Turing machine cannot be made into a truly intelligent, thinking machine, and it is true that a perfect simulation of a process is not equal to the process itself.
A machine cannot adapt like a human because it lacks experience of the world.
I'm arguing that without experiencing the world, an AI cannot come to any kind of understanding of it. And without experiencing a human life, it will not develop anything humans would call intelligence.
While it can "learn" that placing the red ball into the cup results in an energy boost whereas blue balls do nothing, even such a pitifully simple experiment requires pre-programming of what a ball is, what to do with it, and even that energy is a good thing.
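The point is easy to see in code. In this sketch (a hypothetical toy, not an actual experiment), the only thing learned is which colour pays off; the ontology of balls, the cup action, and the fact that energy is "good" are all supplied by the programmer:

```python
import random

# The only thing the machine "learns" here is which colour pays off.
# Everything else -- what counts as a ball, that the move is "put ball
# in cup", and that energy is good -- is pre-programmed by us.

COLOURS = ["red", "blue"]                   # pre-programmed ontology
def put_ball_in_cup(colour):                # pre-programmed action
    return 1.0 if colour == "red" else 0.0  # pre-programmed reward: energy is "good"

value = {c: 0.0 for c in COLOURS}
for _ in range(1000):
    c = random.choice(COLOURS)
    value[c] += 0.05 * (put_ball_in_cup(c) - value[c])

print(value)  # {'red': ~1.0, 'blue': ~0.0}: the learned part is this one mapping
```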
...it would have been fine if he had said a computer could never think like a human, and nothing more. I think so too...
But that's not going to happen, because we don't know enough about the skills a human baby inherits.
The point is that you are not your brain. And your body isn't just a machine to carry your head-computer around. Everything you think and therefore everything you consider intelligent cannot be separated from your experience of being a human body.
A computerized brain needs a human experience to think like a human, and as such needs the same inputs -- which it will never get, or at least not in our lifetimes.
That includes death and excludes incredibly fast calculations.