Can AI eventually become 'self-aware'?

Discussion in 'Science' started by Patricio Da Silva, Dec 25, 2020.

  1. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
I see that as AboveAlpha saying something along the lines of AI being dangerous. Maybe he will join us - I hope he's doing well.

But, that is not the OP question, which has more to do with what consciousness/self awareness actually is, what is required, whether a machine could provide those requirements, etc.

    As for AboveAlpha's DANGER question:

    Any danger comes LONG before an AI is conscious/self aware.

    For example, the HAL computer of 2001 movie fame shows no indication of consciousness or self awareness. It was simply focusing on its assigned task of ensuring success of the mission.

An AI could endanger human lives even if it were a lot less capable than HAL. Unfortunately, the safety concerns presented by AI are a lot more difficult to solve than simply not allowing artificial consciousness.
     
    DennisTate likes this.
  2. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
AI/SI may achieve parity with a human in many respects, save one: self awareness, true consciousness - life, and life is free will. That can be simulated to give the appearance of it, but it will always be a program, a program that was ultimately designed by a human, or by a computer designed by a computer designed by a human (hence 'ultimately'). But this is the difference between life and simulated life, because life is something more than bits and bytes stored in brain cells. Consciousness can exist separate from the physical body (don't ask me to prove it) - and now the conversation belongs in the religion and philosophy forum. I believe that life has a spiritual basis, and that spirituality does not have an intelligent designer (no, I do not believe in a supreme being).
     
  3. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
    DennisTate likes this.
  4. DennisTate

    DennisTate Well-Known Member Past Donor

    Joined:
    Jul 7, 2012
    Messages:
    31,580
    Likes Received:
    2,618
    Trophy Points:
    113
    Gender:
    Male

Yes...... AboveAlpha knows a lot about how computer technology has improved over the past decade or so, and he seemed genuinely worried that somebody somewhere would develop Artificial Intelligence...... and take the risk of connecting it to the internet...... and all hell could well soon break loose!

    AA was really worried that AI would soon become a serious threat to humanity.
     
  5. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
    it could, but isn't there a plug somewhere someone can pull, or an off switch?
     
    DennisTate likes this.
  6. DennisTate

    DennisTate Well-Known Member Past Donor

    Joined:
    Jul 7, 2012
    Messages:
    31,580
    Likes Received:
    2,618
    Trophy Points:
    113
    Gender:
    Male

If the AI is intelligent enough to realize that its survival depends on making a dozen.... or several dozen.... people really wealthy, it may give them a serious incentive not to pull the plug?
     
  7. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
    How do you program a machine to have 'survival instincts'?

    How do you program a machine to prevent someone from turning it off?
     
  8. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
Let's remember that our dictionaries, our memories of events and pictures, our impressions of the outcomes of decisions that we have made - all these things are physically stored inside our brains. The technology used is neurons instead of "bits and bytes". When a person hears a new word or sees a picture, the brain changes the organization of the neurons in the memory centers - an actual change of neural connections is made in order to remember. Those changes are made in sophisticated ways because, as mentioned earlier, we are seldom interested in the details of the picture. Words get categorized by part of speech, etc.

So computers use electromagnetism to store all these things rather than using meat-based equivalents. In a camera, the whole idea is the picture in all its precise detail. But in an AI, the information would probably be stored more like our brains store that info.
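The "remembering by changing connections" idea above can be sketched in code. Here is a minimal, hypothetical illustration - a toy Hopfield-style associative memory, not a claim about how real brains actually work - where storing a pattern is implemented purely as changes to connection weights, and recall means settling a noisy cue back toward what was stored:

```python
import numpy as np

# Toy sketch only (assumed model, not the real brain mechanism):
# "remembering" = changing connection weights between units.

def store(patterns):
    """Hebbian learning: strengthen weights between co-active units."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p)
        w += np.outer(p, p)   # units that fire together wire together
    np.fill_diagonal(w, 0)    # no self-connections
    return w

def recall(w, cue, steps=10):
    """Iteratively settle a (possibly noisy) cue toward a stored pattern."""
    s = np.asarray(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

memory = store([[1, -1, 1, -1, 1, -1]])
noisy = [1, -1, 1, -1, 1, 1]   # one unit flipped
print(recall(memory, noisy))   # settles back toward the stored pattern
```

The point of the sketch is that nothing resembling the picture is "written down" anywhere - only the weights change, which is loosely analogous to the change of neural connections described above.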

    I don't see how these details are important. It's true that computers do a FAR better job of accurate memory. It's also true that the memory access mechanism of our brains is crazy good - as I understand it, having search technology built into the access methods. It's true that an AI might choose to do storage in a different way than a camera.

    So, what is the real difference?

From there, one can consider the vast storage we have concerning what strategies exist, how they may be applied, etc., etc. These are all things that we learned. Computers do the same kind of learning, as seen with machines that can beat all human experts at the game of "go". These machines developed new winning strategies by playing go games against themselves - strategies that humans have had a hard time understanding.

    Thus these machines take actions that are NOT programmed. They are actions that the machine decided to take, based on its own experiences.

    What I don't understand is where anyone can imagine there is a wall that machines can't cross.

Experts state that our technology is nowhere near there yet. But not having the technology is very different from stating that the problem can never be solved.

    You use the word "live". But, that's not good enough. The definition of "live" in biology has a lot to do with issues like reproduction and really nothing related to this issue. So, if you want to use "live" you have to do a LOT more to describe what it is about "live" that makes AI consciousness/self awareness an impossible problem.
     
  9. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
    That's because AA doesn't understand that the distance between man and machine is infinite.
     
  10. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
    Well, isn't the problem more general than that?

    Consciousness/self awareness would give the machine a range of objectives that machines don't have today.

    And, how an AI looks at itself, its "life", may be very different in ways that make it less likely to follow our laws, mores, etc., etc.
     
    DennisTate likes this.
  11. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
By 'live' I mean 'alive'; by 'alive' I mean a living sentient, conscious, spiritual being. It's called life. Either you can sense the difference between life and machine, or you can't; there is no science model or explanation I could give that would be satisfying to a scientist who has no inclination for the spiritual side of life (let's toss out all belief systems, Christianity and the like - I don't mean any of that, either). You, as a spiritual being, are conscious, can respond to stimuli in an infinite number of ways, via free will; you have desires, instincts, suspicions, fears, flaws, demons, reckonings, epiphanies, wonder, appreciation for poetry, music, romance, love of life - all of the things that make you human, full of spontaneity, that make you different than a machine. And the difference is infinite; it is not proximate, it is infinite. See, I can't explain it in a simple equation, but if I dance a circle around it long enough, and point to it, hint at it, maybe, just maybe, one day, you'll get it.

    And they just might find out that memories are not stored in the brain
    https://www.scientificamerican.com/...g-standard-theory-of-how-the-brain-remembers/
     
  12. Monash

    Monash Well-Known Member

    Joined:
    Jan 12, 2019
    Messages:
    4,560
    Likes Received:
    3,150
    Trophy Points:
    113
    Gender:
    Male
    Not trying to 'slip through' any loophole just answer the question. As I noted previously it depends on the circumstances - why you decided to 'kill' the AI. And you still have the question of whether it actually is self aware in the first place or just mimicking self awareness. But assuming it is self aware there are still potentially ethical reasons for doing so.
     
  13. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
    In humans, these are learned responses.

    As mentioned before, the vast majority of what a conscious AI would have would NOT be programming - it would be what the AI learned.

There isn't any reason that an AI would need to be more capable than a newborn baby. That's a tall order. But babies don't have to know anything about how to prevent themselves from being starved to death - as per your last sentence.
     
  14. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
    It's worse than that.

We create these AI capabilities to control our serious systems, because they can do it better than humans can.

    Turning off computers that are smart can mean downing our power grid, terminating our stock and commodities markets, unplugging pieces of our national defense systems, etc., etc.

    We create these capabilities to solve problems that are important, problems where we depend on the solution.
     
  15. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
So a computer is programmed by (its own) computer (which you call 'learning'), which is ultimately programmed by a human.

    At the beginning of the causative chain, is ALWAYS a human.

Not true with mankind. The distance between man and machine is infinite.

It's called life. Either you can sense the difference between life and machine, or you can't; there is no science model or explanation I could give that would be satisfying to a scientist who has no inclination for the spiritual side of life (let's toss out all belief systems, Christianity and the like - I don't mean any of that, either). You, as a spiritual being, are conscious, can respond to stimuli in an infinite number of ways, via free will; you have desires, instincts, suspicions, fears, flaws, demons, reckonings, epiphanies, wonder, appreciation for poetry, music, romance, love of life - all of the things that make you human, full of spontaneity, that make you different than a machine. And the difference is infinite; it is not proximate, it is infinite. See, I can't explain it in a simple equation, but if I dance a circle around it long enough, and point to it, hint at it, maybe, just maybe, one day, those who don't get it, will.
     
  16. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
    No, that kind of logic doesn't work in this case.

    There has to be evidence - not feelings.

Technology conquers our feelings about how things work on a day in, day out basis.

    Simply saying you don't have a "feeling" that technology can solve this problem is just plain not good enough.

    Your Scientific American cite supports the position that brains are our storage mechanisms. Beyond that, it states that the new ideas on how that storage works are not fully confirmed. But, it does show that this area of study is active and, like all science, not constrained to established ideas.
     
  17. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
No, programming the computer is not the learning.

    The programming is that of creating the ability to learn for itself.

It's like the human-beating "go" program. The programming did not implement strategies for winning at go. What it did was allow the computer to learn. So it played games with itself, making really stupid moves - like a baby would. Over time, the computer tried moves that it found to be increasingly competitive. After some period of time, it had learned strategies that no human can beat. None of that learning involved human programming intervention.
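The self-play idea can be sketched with a much smaller game. This is a hypothetical toy - single-pile Nim learned by Monte Carlo self-play updates, not the actual go program's method - where the code is given only the rules and a way to learn from outcomes; no winning strategy is programmed in:

```python
import random

# Toy sketch (assumed setup): single-pile Nim, take 1-3 stones per turn,
# whoever takes the last stone wins. Strategy emerges from self-play.

random.seed(0)

PILE = 12   # starting pile size
Q = {}      # Q[(pile, take)] -> learned value of making that move

def moves(pile):
    return [t for t in (1, 2, 3) if t <= pile]

def choose(pile, eps):
    """Epsilon-greedy: mostly the best-known move, sometimes explore."""
    if random.random() < eps:
        return random.choice(moves(pile))
    return max(moves(pile), key=lambda t: Q.get((pile, t), 0.0))

def train(episodes=20000, eps=0.3, lr=0.2):
    for _ in range(episodes):
        history, pile = [], PILE
        while pile > 0:
            take = choose(pile, eps)
            history.append((pile, take))
            pile -= take
        # The player who made the last move won; walking backward through
        # the game, the reward alternates between the two (self-play) sides.
        reward = 1.0
        for state in reversed(history):
            old = Q.get(state, 0.0)
            Q[state] = old + lr * (reward - old)
            reward = -reward

train()
# After training, the greedy move from a pile of 6 should be to take 2,
# leaving 4 - a losing position for the opponent. That strategy was never
# written into the program; it was found through self-play.
print(choose(6, eps=0.0))
```

The programmer supplies the rules and the learning rule, nothing more - which is the distinction being made above between programming a strategy and programming the ability to learn one.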

    This is an area of technology advancement. Universities around the world have computer science departments that study how to create artificial systems that learn their jobs.
     
  18. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male
    My point is that 'alive' is not within the province of science, insofar as sentient beingness is concerned.

In terms of motor functions, biology, sure, but that quotient that is life is more than just that.

The point is that a sentient being is composed of feelings, and a machine has none; that we do, and a machine doesn't, is a difference that is infinite in scope.
     
  19. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,853
    Likes Received:
    17,227
    Trophy Points:
    113
    Gender:
    Male

However the computer learns, it's still a program. There will be bits and bytes, ones and zeros, resulting in computer instructions, self-programmed or otherwise, driving the thing. However the computer instructions arrive to drive the thing, it is still ones and zeros at the very essence of a computer.

Quantum computing is another thing, but it's still electrons or something similar. It's physics. Human consciousness, the spiritual essence that it is, does not exist in the physical realm. I realize that can't be acceptable to a scientist.
     
  20. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
Our emotions are something that we have learned to calculate. They are part of evolution. Other animals have emotions, too. They aren't something limited to humans.

I don't see any reason to believe that emotions provide some sort of impossible wall. Also, it's not likely that machines will be given capability in this range until such time as there is a reason to create it.


    More than that, I'm not so sure that emotions are a requirement for consciousness/self awareness.
     
    Derideo_Te likes this.
  21. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,808
    Likes Received:
    16,434
    Trophy Points:
    113
Our brains are programs, too. There is machinery (built of neurons) for figuring out problems, storing data, creating strategy, calculating emotions, making educated guesses, etc. After YEARS of exposure to training by parents, by teachers, by contact with the world, by friends, etc., humans move from being babies that can hardly see and aren't aware that they have arms and legs to being scientists, athletes, politicians, philosophers, etc.

That original set of programming in our baby brains is enough to learn that stuff and to use what it learns. Our brains have specific structures for storing words, for example. There are other structures that are programmed to allow babies to start making sense out of the information their eyes and ears are sending.

Brains aren't magic. They are computers that come with really cool hardware and software that excels at learning.

And, your comments about bits and bytes have to include the fact that our brains store ones and zeros - bits and bytes like that - too. Except the brain uses bytes made of neurons instead of bytes made of magnetic granules.

There are labs which have working models of collections of these neurons that interact like they do in our brains.

    The catch is that the limit of interacting neurons in these labs is at about 8. And, of course our brains have 100 billion!

That is the challenge. There is nothing to say that the problem is impossible. It's EASY to notice that the problem of building a fully functional human brain is a desperately difficult technology problem.

    On the other hand, there is nothing to indicate that we would need to duplicate a whole brain in order to create an AI that is self aware. There is a TON of stuff in our human brains that doesn't help with that specific problem.

Remember that the OP issue is whether it is impossible. And, I don't see anything other than a technology limit here - not a limit that suggests it can never be done, not something that isn't calculable.
     
    Last edited: Jan 14, 2021
