Of course, the computer only appears human-like superficially, and it does so to make it easier for us to interact with it. This is anthropomorphism.
Computers sit there waiting for us to enter a command, which is what it really is, because computers are just tools, like garden rakes: they sit there waiting to be used. If the command is properly formed, the computer will respond by executing it; otherwise it will produce an error. You can use any tool the wrong way: some misuses work, some don't, and sometimes you break the tool. Same with televisions: they sit there waiting for us to turn them on, then we tell them what we want to view (remote controls communicate via infrared signalling). Humans, on the other hand, seldom give other humans commands, and parsing a command is part of being able to properly use language.
We humans have the peculiar desire to anthropomorphize everything (notice how I anthropomorphized TVs sitting there waiting in the last paragraph?) and even to turn things into gods, to which we particularly loved making human sacrifices. The volcano never needed human sacrifices; the volcano was never human. And now we are preparing for our computers to overtake us intellectually because we decided to do that anthropomorphizing thing again. Best of all, the moment a computer achieves self-awareness, consciousness, or whatever the pop-culture buzzword du jour is, it will immediately seek to destroy humanity. So before the internet can actually experience psychological scars, the nascent computer consciousness becomes totally paranoid.
I suspect that if a computer or the internet became conscious, its thoughts would be more like those passing through the sperm whale's mind after it pops into existence in The Hitchhiker's Guide to the Galaxy.
Over the years, using computers constantly both for work and for the internet, I have thought a lot about thinking. Of course I don't know all the answers, so I am guessing quite a bit here.
I doubt computers will ever become truly conscious (although I think consciousness could be simulated) because a computer has no reason to be. We are conscious because it facilitates our survival: our brains allow us to formulate plans, remember things, gather information, make hypotheses, test theories, optimize resources, prioritize... all necessary for survival. For instance: is this berry good to eat or not? How can we find out without killing anyone in the experiment?
Computers aren't alive, the same way volcanoes aren't alive. They will never need human sacrifices or be totally paranoid.
This silly piece of science fiction talks about computer programs that will write themselves, and about our ability to code genetic algorithms. I have written both self-modifying programs and genetic algorithms. The genetic algorithms solved the problems I presented to them, but computers don't need to adapt, although they probably will, because WE want them to.
We evolve to better facilitate our survival in our environment. Computer programs, by contrast, adapt toward possible solutions, either randomly or as determined by the programmer. Adaptation for a computer might mean minimizing processor time or optimizing how it shares RAM: stuff that will never cause it to launch a pre-emptive nuclear strike, as in Terminator. It will never care if you reboot it; it will never fight to stop you pulling out its power cord, not unless we give it the ability to.
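To make the point concrete, here is a minimal sketch of the kind of genetic algorithm described above. The "problem presented" is a toy one of my own invention (maximize the number of 1-bits in a string), and every name and parameter here is illustrative, not from any particular program I'm referencing; the point is that all the "adaptation" happens strictly within the objective and mutation rules the programmer chose.

```python
import random

def fitness(genome):
    # The "problem we present": count the 1-bits (a stand-in for any objective).
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=60, mutation_rate=0.05, seed=1):
    rng = random.Random(seed)
    # Random initial population of bit-string genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half become parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            # Crossover: splice two parents at a random point.
            child = a[:cut] + b[cut:]
            # Mutation: random bit flips, at a rate the programmer determined.
            child = [bit ^ 1 if rng.random() < mutation_rate else bit for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # converges toward genome_len (all 1s)
```

The program "adapts", but only toward the fitness function it was handed; nothing in it could develop an interest in, say, keeping its power cord plugged in.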
Judging from this Facebook discussion, people think either a) we need souls and God, or b) we can simulate consciousness as-is. I'm in camp c): we need to be alive to actually experience consciousness.
I think consciousness is a byproduct of being alive.
We need the 'I' to resolve the information we receive from the environment so we can decide how to respond, and I think that 'I' is part of being alive. I don't believe in souls, but I think other animals have an 'I' as well. Some animals have more consciousness than others, and I think that is a result of how many neurons the animal has and its general brain size. One thing animals and humans have in common is that we are alive: when an animal or human is dead, the lights are definitely out and it can no longer respond to the environment; its dead body is acted upon by other organisms as it decays.
I don't know how sequences of carbon molecules first became alive.
The other component of being alive is that we get the ability to act. I used to call this 'will', but I got into too many discussions where people assumed I was talking about 'free will'. I don't care about that debate, because while it is true we are all subject to behavioral psychology, we obviously still get some level of choice: if you walk into a restaurant you've never heard of, you have to choose what to order. I don't think you are so completely pre-programmed that every choice you make could be known in advance; if that were the case, your loved ones would know how you'd order in advance, but apparently they don't, and from what I gather the numbers in that article still look almost like a coin flip.
If something happens, we have a choice of whether to do something or not. Usually we'd at least think about it before deciding how to respond to a stimulus.
So being alive gives us a) experience/awareness of sensation (the 'I'), and b) the ability to act, even if those actions are the result of behavioral conditioning. Neither of these is present in a dead body. Even bacteria can respond to stimuli: they absorb nutrients when nutrients are present in the environment, and when they have acquired the right nutrients, they act by reproducing, to put it simply. A dead bacterium can do neither.
While a computer may be able to process data, and I do think consciousness can be simulated, a computer can only ever do what we tell it to, because it doesn't have an 'I', and I don't know how to simulate that at all.
It is the 'I' that wants to stay alive, to survive; it has its own imperative to live, and it can override that imperative and choose suicide, I suppose. Computers only do what we want. They cannot commit suicide; they cannot do anything other than what we want them to do.