Intuitively I don't blame you, but the statistics and science would suggest otherwise.
- Bad weather poses a far greater challenge to the pilot of an aircraft than to the driver of a car. This is where billions of calculations per second offer the pilot a much greater advantage. A computer may be necessary to land a plane in a gale-force storm with zero visibility, but in a car you can always just pull over, or at the very least drive really slowly.
- The number of contingencies a car must deal with is far greater than that of a plane, and most of them are not known in advance. A plane doesn't have to account for nearly as much. Death to the occupants can only result from a collision, and there are only three things it can collide with: a bird, another plane, and the ground, the first of which is no match for a plane.
If everyone is so keen on making driverless vehicles, they should start with aircraft.
In the end, if you really succeeded in either simulating or building everything that makes a human, what you would have is a human. That includes death and excludes incredibly fast calculations.
The point is that you are not your brain. And your body isn't just a machine to carry your head-computer around. Everything you think and therefore everything you consider intelligent cannot be separated from your experience of being a human body.
What sexual attraction or love actually mean, and what they can do to one's thoughts, will forever escape a machine. And so it will stay stupid.
A machine cannot adapt like a human because it lacks experience of the world. While it can "learn" that placing the red ball into the cup results in an energy boost whereas blue balls do nothing, even such a pitifully simple experiment requires pre-programming of what a ball is, what to do with it, and even that energy is a good thing.
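To make the pre-programming point concrete, here is a minimal sketch (all names and numbers invented for illustration) of the red-ball experiment as a trivial reward learner. Notice how much is hard-coded before any "learning" happens: what the objects are, what the only possible action is, and that energy is the thing worth maximizing.

```python
import random

# Everything outside the loop is pre-programmed knowledge:
# the set of objects, the single available action, and the
# assumption that "energy" is the quantity to maximize.
BALLS = ["red", "blue"]

def place_in_cup(ball):
    """Hard-coded world: only red balls yield an energy boost."""
    return 1.0 if ball == "red" else 0.0

# Estimated value of each action, learned from experience.
value = {ball: 0.0 for ball in BALLS}
counts = {ball: 0 for ball in BALLS}

for trial in range(1000):
    # epsilon-greedy choice: mostly exploit, sometimes explore
    if random.random() < 0.1:
        ball = random.choice(BALLS)
    else:
        ball = max(BALLS, key=lambda b: value[b])
    reward = place_in_cup(ball)
    counts[ball] += 1
    # incremental average of the rewards observed for this action
    value[ball] += (reward - value[ball]) / counts[ball]

# value["red"] converges to 1.0, value["blue"] stays near 0.0
```

The machine "learns" which colour pays off, but it never learns what a ball, a cup, or energy is; all of that lives in the scaffolding a human wrote.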
Humans can deal with an infinite number of situations because they can adapt memories of previous experiences to new situations, taking the differences into account. The process by which these memories are formed, reinforced and overwritten, their quality, how they influence each other, and how they make up an image of the world is inseparable from the human experience and the emotions it invokes.
But that's not going to happen, because we don't know enough about the skills a human baby inherits. Language acquisition, for example, is still a mystery, despite (or because of) Chomsky, who convinced linguists that babies are born with an innate grammar covering every language in the world, which gets hooked into during language acquisition.
My point, and I'll use the rest of your posts as a basis here, is that it would have been fine if he had said a computer could never think like a human, and nothing more. I think so too, because, as you rightfully point out, there is no true duality between our perception and our thinking. Perceptual integration, memory, pattern building, etc. form a continuum of experience that is highly dependent on our physical and chemical states.
When you look at pictures that have been run through Google's Deep Dream, most objects get transformed into animal faces. It does this because it was trained to see animal faces everywhere: when you present it with something it doesn't know, it will represent it in a way that lets it see an animal face in it. I am arguing that if it were trained with enough (i.e. more) neurons, and with a learning set that encompassed the entire web, the way it would represent a new input would be no different from the way an "intelligent entity living in the web" would represent it. As such, I fully believe that the idea in your last post's second-to-last paragraph (feeding it romance novels) is sound, and I don't agree with your conclusion. When triggered in the right way, it could understand and translate any language, it could predict outcomes of complex systems (hello, stock market abuse), it could understand the hidden desires and motivations of most internet users and interact with them in a meaningful way (hello, targeted ads), and it could create new art in the styles that it learned (which it already does).
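The "sees faces everywhere" behaviour has a simple mechanical core: Deep Dream runs gradient ascent on the input itself, nudging the image toward whatever maximizes the network's learned feature activations. Here is a deliberately toy sketch of that idea (the template, scores, and numbers are invented stand-ins, not the real model): a "network" whose only learned concept is one face-like pattern, so any unfamiliar input drifts toward that pattern.

```python
# Toy stand-in for Deep Dream's gradient ascent on the input.
# The network's entire "knowledge" is one linear template, so
# every input, however unfamiliar, is pulled toward a "face".
TEMPLATE = [1.0, -1.0, 1.0, -1.0]  # the one pattern the net knows

def face_score(img):
    # dot product: how strongly the image matches the template
    return sum(t * x for t, x in zip(TEMPLATE, img))

def dream(img, steps=50, lr=0.1):
    img = list(img)
    for _ in range(steps):
        # gradient of face_score w.r.t. img[i] is TEMPLATE[i],
        # so ascent simply adds the template over and over
        for i in range(len(img)):
            img[i] += lr * TEMPLATE[i]
    return img

noise = [0.2, 0.1, -0.3, 0.4]  # an "unknown" input
out = dream(noise)             # now matches the template far better
```

The real system does this against deep convolutional features rather than a single template, but the shape of the argument is the same: the output reflects what the network was trained on, not what the input "really" is.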
In order for any sentient being to pose a threat, it must have some sort of motivation. Motivation, while neurochemical in nature, is based on emotional prerequisites such as being able to form goals, anticipate rewards, and form affective correlations between the sheer notion of a reward and the personal experience of being rewarded. Human beings are drug addicts. We are addicted to the stimulating sensations of norepinephrine and dopamine. We are relieved by the calming effects of serotonin, prolactin, oxytocin, etc. This high-low cycle is strongly correlated with certain habitual and instinctive behaviours such as eating and having sex. It can easily be extended to any perceived reward and the actions necessary to achieve it. Until AI becomes capable of emotions, it is unlikely to spontaneously form goals of any kind that aren't part of its programming guidelines. If a robot cannot feel anything emotionally, it has no feedback loop. It has no motivation. As K alluded to, it cannot relate to the human condition or even the mammalian condition and will exhibit very predictable responses to stimuli. It will not turn on us and start wiping us out. What would be its inclination to do that? So that it can start a colony of robot children and take over the world? Come on.
Along comes a buddhist and says "There is no you.".
Capitalism has nothing to do with this. And your view of Buddhism is simplistic and ignorant. For your own sake, be more curious.
I'm not against people believing that enlightenment will just come to them if they sit around long enough
Asian 'wisdom' is usually one of two things: either common-sense stuff wrapped up in elaborate phrasing to make it sound extra wise, or gibberish that has no place in a sane person's mind.
No one romanticised Asian beliefs. You're the one dishing out ignorant and condescending clichés. You pose an aggressively stupid cliché and declare your tolerance at the same time. It's like saying "I'm not against blacks dancing around wildly if that's all they know." As I told you, I didn't think ZylonBane was wrong. You do write a lot of daft posts and I hope you will learn to rein that in. I just thought his reaction was uncalled for in that situation.
I'm not against people believing that enlightenment will just come to them if they sit around long enough, any more than I'm against people believing that when they die they go to a sky paradise.
Buddha's teachings are indeed frequently about things that should be common sense, and yeah, Buddhist wisdom can sound like a lot of gibberish to modern Westerners. However, just because it sounds like gibberish to you doesn't mean it's meaningless, right? Fortunately there are many books about it, written specifically with the modern Westerner in mind. And just because you think something should be common sense doesn't mean it really is (when it counts). Not judging something you don't understand should be common sense, yet most of us do it without even realizing it most of the time (including me). Sometimes it's necessary to drag such things into consciousness, over and over again.