AI, the Singularity, and Other BS That Ranks Up There with the Zombie Apocalypse

“It’s a machine[.] It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. It just runs programs.”

Like many overly specialized people, he knows his field very well and knows little about anything else (even though he thinks he knows everything).

I tend to follow science news. I find it very interesting even though it’s not my particular forte, primarily because any understanding of the free market can’t exist without an understanding of scientific and technological developments. For instance, the rise of MERS and MRSA, the return of diseases we thought we left behind generations ago, and the end of the age of antibiotics may very well solve the problem of Social Security insolvency in exactly the way we wouldn’t want it solved. The rapid advances in 3D printing are likely going to end manufacturing as a major industry within a generation or two, meaning that anyone making plans for the US to once again be a manufacturing superpower is just unspeakably clueless. The problem is that recently, in the wake of movies like Her and Transcendence and Stephen Hawking’s laughably preposterous article about how thinking machines will kill us all, there has been a slew of articles warning of the coming singularity and the end of mankind.

This has sadly led me to the conclusion that scientists, for an awfully well educated and knowledgeable bunch, are surprisingly stupid. I get why Hawking has led his current charge against God in trying to prove that God doesn’t exist; I think we all understand why Hawking would hate God so much that he would have to lead a crusade against the very idea of God. It’s not a rational crusade, but we all understand the motivations behind it.

So let’s lay out their case. Computers are getting faster and more complex all the time. They will continue to do so. They will continue to do so until a computer program reaches the point where it isn’t just a very well written program but actual sentient artificial intelligence. At that point this self-aware intelligence will have the ability to (A) improve itself faster than we could improve it, (B) replicate other AIs like itself, and (C) develop a desire for self-preservation…which pretty much means that Skynet here will kill us all. (It tells you a lot about the attitude of these scientists that they feel the jump from sentience to genocidal rampage is a logical progression.)

Seriously, that’s the argument they’re making. In the last three months I have seen at least two articles a week in various usually respectable scientific forums.

But while the Chicken Littles fear the coming Matrix of our Terminator and Cylon overlords…I sit here and yawn, knowing this is just as preposterous as a Zombie Apocalypse, and as scientifically based as the delusion of global warming.

How do I know this?

If you’re afraid of this ever being a real issue, you need to come back to reality.

Because, unlike a lot of these scientists (who seem to deal in fiction more than science), I’ve actually bothered to look at the evidence. You know, like the numerous scientific studies on near-death experiences, or the 2,000+ page tome “Reincarnation and Biology: A Contribution to the Etiology of Birthmarks and Birth Defects,” which details 225 cases of evidence of reincarnation based on the scars, birthmarks, and medical problems found in young children who accurately remember a past life (when you can get that detailed in your research, think about how much actual evidence there must be). Or the fact that while science has dissected and parsed the brain very well, it still can’t seem to find consciousness. And why can’t it find it? Because, as the previous evidence suggests, consciousness, the thing they want to create in AI, isn’t in the mass of synapses in your head but in your soul. The divine part of you that may be using this sack of meat you call a body is what gives you intelligence, a sense of self, and an identity. And that cannot be replicated in a CPU or in lines of code. Ever.
It’s as simple as that. I don’t have to get into the complexities of the Chinese Room or the Frame Problem or the other reasons philosophers have already pointed out why you’ll never have AI…without a soul it doesn’t matter how complex the computer system and code are, because consciousness does not reside in human beings in a physical form, and thus it cannot reside in a machine. You could recreate the entire human brain, with each cell represented by a functional quantum processor and connected with all the synapses in exactly the same order as a functioning brain, running the most brilliant code in the world…but even then the computer, while amazing, will not be sentient. As a famous movie put it, “It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. It just runs programs.” And unless for some reason the Heavens decide to breathe a soul into a machine (and I can’t even fathom such a reason), AI will remain a convenient tool of science fiction for helping us define exactly what it is to be human, and nothing more.

But you know what, let’s play their own game and ignore, for just a minute, that consciousness is a creation of the soul and not the synapse, to show why this is preposterous. Let’s say, for the sake of argument, that there is no soul and that the mind is just a complex computer that can be replicated. Then by the same argument there must be thousands of planets out there with intelligent life (since life on Earth is nothing special, the odds of self-replicating DNA become very likely—they have to be likely, because if they’re unlikely we shouldn’t be here)—and certainly at least some of those races should have reached the technological singularity before we did (if we assume there are numerous planets with intelligent life, it becomes statistically preposterous that we would be the first to reach this level). At that point the AI would have either destroyed the biological beings who created it or come to some kind of coexistence with them. Either way, if you have AI, space travel becomes a given, as the AI would have a greater propensity for long-term thinking and would expand out to other planets in its own solar system, and from there to any habitable planets in other star systems (the time and resources needed for interstellar travel, which prevent humans from traveling to other stars, would be irrelevant to thinking machines; you could easily launch thinking machines toward other star systems along with the machines needed to mine and build new machines, and they could reach other planets in a hundred years, which wouldn’t mean much to a machine). They would expand and expand and expand. And the entire galaxy would be filled with the radio signals of these AIs communicating with each other (as the desire of any self-conscious being is to learn)…and we certainly would have picked that up. Oh wait, there is nothing there, no sign of intelligent life anywhere, be it biological or computer based.
Which means either that we’re the first race in the galaxy to get this close (unlikely, if life on Earth is nothing special) or that these AIs are so far from us that we haven’t picked up their signals (again unlikely, in a galaxy of 400 billion stars that is only 120,000 light-years across). If Hawking and the rest of the Chicken Littles are right about the nature of life and the singularity, the galaxy (hell, the universe) should be filled with AI-colonized worlds, each communicating with the others to share knowledge. But there’s not even a trace of this. That seems statistically unlikely. There is an argument that we might not pick up biological signals, since it’s unlikely you’d have mass colonies of biological life spreading across the galaxy—but all those arguments go out the window with AI. There is nothing there, and if the singularity were possible, there should be. But reaching that point of deduction would require actual thought and not just hysterics.
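The timescale argument above can be checked with some back-of-the-envelope arithmetic. The galactic diameter figure comes from the post itself; the galaxy’s age, the probe speed, and the replication overhead are my own assumed round numbers, not figures from any particular study. The point is only that even under sluggish assumptions, colonization time is a rounding error next to the galaxy’s age:

```python
# Back-of-the-envelope check on the "silent galaxy" argument.
# Assumed figures:
#   - Milky Way diameter: ~120,000 light-years (stated in the post)
#   - Age of the galaxy: ~13 billion years (common estimate; my assumption)
#   - Probe speed: 1% of light speed (an arbitrary, conservative assumption)

GALAXY_DIAMETER_LY = 120_000        # light-years, from the post
GALAXY_AGE_YEARS = 13_000_000_000   # ~13 billion years (assumed)
PROBE_SPEED_FRACTION_C = 0.01       # 1% of light speed (assumed)

# At 0.01c, covering one light-year takes 100 years, so crossing
# the whole galactic disk takes:
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C

# Allow a generous 10x overhead (assumed) for stopping at stars
# along the way to mine materials and build new machines:
colonization_time_years = crossing_time_years * 10

# What fraction of the galaxy's lifetime does that represent?
fraction_of_galaxy_age = colonization_time_years / GALAXY_AGE_YEARS

print(f"Crossing time:        {crossing_time_years:,.0f} years")
print(f"Colonization time:    {colonization_time_years:,.0f} years")
print(f"Share of galactic age: {fraction_of_galaxy_age:.2%}")
```

Under these assumptions the crossing takes 12 million years and full colonization 120 million, under one percent of the galaxy’s age, so a machine civilization arising almost any time in galactic history would have had ample time to be everywhere by now.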

I don’t know about you, but maybe the silence of the universe is because life is not so cheap, and sentience not so easy to create.


Filed under Long Term Thinking, People Are Stupid
