Why computers* will not become self-aware.

Discussion in 'Science' started by RevAnarchist, Dec 14, 2014.

  1. Tigger2

    I think the main problem with machine intelligence is the ability to guess and make mistakes. Humans make many of their extremely fast decisions without properly analysing all the information they have, and they consequently accept the mistakes this creates. I'm not sure we could ever accept a machine that worked in this way.
     
  2. Tigger2

    Same way as you create a child. Or a pet dog.
     
  3. OldManOnFire

    You didn't answer my question. How do YOU know the limitations of AI?
     
  4. Tigger2

    Beg your pardon, meant to come back to it and forgot.

    You wouldn't know the limitations of AI, just as you wouldn't know the limitations of children. How they develop comes down to ongoing education, monitoring and chastisement for breaching the rules.
    Three things are generally assumed about AI:
    1. That it has infinite intelligence.
    2. That it has massively superior strength.
    3. That all other AIs would join a rogue one.
    But I don't think any of these would be true. We might build an AI stronger than a man, but not stronger than a JCB.
    The AIs we build would have enough intelligence to do the tasks we set them.
    AIs would be taught what is right and wrong; they would not support the actions of a bad AI.
     
  5. WillReadmore

    Whether an AI is residing in a framework that has physical strength of ANY kind is really of no interest, I think. This isn't about creating robots that can smash buildings. Smashing buildings just isn't the concern.

    An AI that is connected to the internet is of huge concern. Consider what an AI might be able to accomplish after learning how our stock markets work. Today, we have "GameStop" - a nationwide event caused by some crazy day traders. Think what an AI might accomplish.

    Think of an AI that learns how our national and local power grids work, including how the financial reward is figured out.

    Read the story of the Elon Musk battery system that saved a section of Australia from GIGANTIC power bills. Were those bills "right" or "wrong"? Those gigantic power bills certainly did major damage to the population before Musk came up with a solution.

    I don't know how you would teach an AI what is "right" or "wrong" about these things. WE don't know what is right and wrong!

    I just do not believe this is as easy as teaching "right" or "wrong". AND, I think teaching that is really hard, even though it isn't enough.
     
  6. OldManOnFire


    The bottom line is what 'we' choose to do with AI...in which 'we' means the world. AI won't be any more scary than what 'we' decide to do with it. If 'we' create an AI police force, with the ability to learn, adjust and evolve, and the ability to kill, we will want to be very careful. How will we identify and manage the 'we' around the world when they attempt to tamper with things like the stock market, or power grids, navigation systems, etc...all of which can be under the radar for a long time prior to being detected? I'm not concerned about how we approach this in the US...I'm concerned about those around the world who have evil motives...
     
  7. WillReadmore

    I don't believe it's that easy.

    If "we" become capable of creating AI's of the characteristics being discussed, any entity having such a device would have gigantic power.

    Expecting it to be uniformly used for good or successfully contained hits me as highly unlikely.
     
  8. Tigger2

    Again you fall into the same trap. You could give a sheep access to the whole internet and it would learn nothing. Every human has access to the whole internet with our massively powerful brains, yet we cannot begin to absorb it.
    A computer with access to the internet would only be able to absorb what its memory allowed and what its processor could cope with.

    Why do you keep thinking they are unlimited?
     
  9. WillReadmore

    No. The issue is NOT memory size. It is the ability to use available information to design strategies for achieving a goal.

    Let's remember that an AI with the capacity to learn and design strategy CAN beat humans. The fact that humans can't beat a computer at "go" anymore is serious evidence. Also, the programmers of that system were experts in creating the ability to learn and create strategy - they weren't expert "go" players. In that case, the AI learned strategies that human "go" experts have a hard time understanding - let alone the fact that THEY didn't create those strategies. The AI in that case does NOT win by any brute-force memory approach. For one thing, there are FAR too many possible moves in "go" for that to be realistic. THAT is why the game of "go" is interesting - it absolutely requires a strategic approach.
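
    As a rough back-of-the-envelope illustration of why brute force is off the table for "go", here is a minimal sketch comparing naive game-tree sizes; the branching factors and game lengths below are commonly cited approximations, not figures from this thread:

    ```python
    # Rough comparison of naive game-tree sizes for chess and go.
    # Branching factors and typical game lengths are approximate.
    CHESS_BRANCHING, CHESS_PLIES = 35, 80
    GO_BRANCHING, GO_PLIES = 250, 150

    chess_tree = CHESS_BRANCHING ** CHESS_PLIES   # on the order of 10^123
    go_tree = GO_BRANCHING ** GO_PLIES            # on the order of 10^359

    print(f"chess: roughly 10^{len(str(chess_tree)) - 1} positions")
    print(f"go:    roughly 10^{len(str(go_tree)) - 1} positions")
    ```

    Both numbers dwarf anything that could be enumerated, which is why a "go" engine has to learn an evaluation and a strategy rather than look positions up.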


    And, we have serious evidence today that far less is required to cause issues such as "GameStop". That kind of manipulation is EASILY within the capacity of an AI with far less than the powers discussed in this thread. Those who mucked up our stock market weren't some sort of superhuman geniuses.

    I gave you two clear examples of cases where an AI capable of designing and executing strategy could cause SERIOUS problems - stock market and power grid.

    I don't believe you have an answer. And I don't, either. My contention is that nobody does.
     
  10. Tigger2

    Memory and processing together.
    Assuming the dastardly machine has worked out not only how to bring down the stock market but also a reason to want to - not to mention the ability to outwit the other computers built to prevent its cunning plans.
    Yet humans can easily outwit our cleverest computer.
     
  11. Tigger2

    This reminds me of the experiment where a monkey beat a human in a computer test rigged to the monkey's advantage. And the guy who lost then pointed out that it was humans who made the computer in the first place.
     
  12. OldManOnFire

    I also don't believe it's that easy. It is an emerging technology that will move faster than we think, and it can do wonderful things, but it won't take long for the evil 'we' to realize its nefarious applications. The technology and its potential won't be halted, so society must pay close attention...
     
  13. WillReadmore

    No human can win against a computer in the game of "go", thought to be the top of the strategy-game heap.

    There will be more and more cases where human-created strategy cannot compete with computer-created strategy.

    The issue won't be "how to bring down the stock market". The issue will be "how to win".

    Today, we have large corporate interests working on how to do that. The desire to do that is not going to go away. And, those with computer power are not even SLIGHTLY interested in crippling their computers when it comes to winning at the stock market.
     
  14. WillReadmore

    Well said. I was thinking the same thing and didn't express it well.

    Is winning at the stock market using machines "evil"? Experts put huge effort into doing that TODAY using computers. And, we have futures markets that are rife with speculation tactics that block the actual purpose of commodities markets.

    I don't believe we really have a good definition of what would be "evil" for a computer to do. What we would hate, what would be really damaging, would include computers doing a better job of the same stuff we humans are making every effort to do today.
     
  15. OldManOnFire

    Interesting that if someone tried to take a computer into a casino to gain an advantage in gaming, that person would probably leave in an ambulance. However, when institutions use computers to buy/sell equities in the stock market casino, no problem.

    Evil might be defined as having a quantifiable unfair advantage over others in the stock market? What Russia or North Korea might do with AI will call for a different definition...
     
  16. WillReadmore

    Great example.
     
  17. Tigger2

    Interesting aside. The London stock exchange spent millions on a hyperfast fibre connection that gives them information hundredths of a second before other exchanges. The cost was considered worth it for the gains.
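
    For a sense of where those hundredths of a second come from, here is a minimal latency sketch; the link distance and refractive index are illustrative assumptions, not figures from the exchange:

    ```python
    # Rough one-way latency estimate for a hypothetical exchange fibre link.
    # The distance and refractive index below are illustrative assumptions.
    SPEED_OF_LIGHT_KM_S = 299_792      # km/s in vacuum
    FIBRE_REFRACTIVE_INDEX = 1.47      # typical silica fibre
    LINK_DISTANCE_KM = 350             # e.g. roughly a London-Frankfurt route

    one_way_s = LINK_DISTANCE_KM / (SPEED_OF_LIGHT_KM_S / FIBRE_REFRACTIVE_INDEX)
    print(f"one-way latency: {one_way_s * 1000:.2f} ms")   # about 1.7 ms
    ```

    An edge of even a few milliseconds is enough for an automated trader to see and act on price changes before slower participants.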
     
  18. Tigger2

    Back on subject, I predict that within the next 5 years the internet will get restrictions designed to stop the growing storm of cyber attacks.
    The main one will be that users will need a licence from the government, for which they will have to prove their identity.
     
  19. WillReadmore

    I certainly hope you are right about that.

    However, I wasn't indicating an AI that had malicious intent.

    Those who engineered the "GameStop" situation operated fully within the law. And creating a new law that would prevent their strategy is not easy. It's not as if they found some loophole in stock trading.

    The same could be true for other systems where an AI with an objective of making money (or some other reasonably legit objective) could have a major impact.
     
  20. Distraff

    We would have to figure out how life became self-aware first. We know that brains arose during evolution, with some of the earliest examples being simple fish. Our current understanding of the brain can't explain self-awareness, but maybe there are more dimensions to reality that explain it.
     
  21. Tigger2

    I certainly think that in the early days maliciousness will not occur to AIs. They will be the tools of the nefarious.
     
  22. WillReadmore

    I think you are still missing the point.

    Please remember that NEITHER of the examples I gave had ANYTHING AT ALL to do with maliciousness.

    In fact they involve EXACTLY what humans are doing TODAY.

    I'm just pointing out that an AI with the objective of turning a profit and with the power of AIs in this discussion could do a FAR superior job of it, leaving the very purpose of our stock and commodities markets in tatters, causing huge additional costs in electric power, etc.

    I'm saying that with NO MALICIOUSNESS AT ALL and with objectives that HUMANS have today and fully WITHIN THE LAW, an AI of the power we're discussing could have catastrophic impact.

    Please read up on "GameStop". Even these crazy day traders had an impact that made news around the world, with many millions of dollars won and lost over ... nothing. And, entirely within the law.

    We've seen worldwide deleterious effects of market speculation in oil prices.

    I'm only saying that an AI could beat humans at these "games" - just like even today an AI can beat humans at "go" (and at chess, too, but chess is easier than go).
     
  23. Tigger2

    Try to remember I am not here to directly answer your points. Some of my comments are MY opinion of something you are discussing.

    That an algorithm could work the money markets is very old news. Simply adding the ability for the algorithm to learn is also not new.
    What I am trying to get my head round is why you think this points to a dangerous future.
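
    For what "an algorithm that works the markets and learns" can mean in its simplest form, here is a minimal sketch; the price series, the window ranges and the profit measure are invented for illustration and are not from this thread:

    ```python
    # Minimal "learning" trading rule: a moving-average crossover whose
    # window lengths are tuned on past prices. Purely illustrative.
    from itertools import product

    prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110, 112, 115]

    def moving_average(series, window):
        return sum(series[-window:]) / window

    def backtest(prices, fast, slow):
        """Naive profit from going long while the fast average is above the slow one."""
        cash, position = 0.0, 0
        for t in range(slow, len(prices)):
            fast_ma = moving_average(prices[:t], fast)
            slow_ma = moving_average(prices[:t], slow)
            if fast_ma > slow_ma and position == 0:      # buy signal
                position, cash = 1, cash - prices[t]
            elif fast_ma < slow_ma and position == 1:    # sell signal
                position, cash = 0, cash + prices[t]
        return cash + (prices[-1] if position else 0)

    # The "learning" step: search parameter combinations against historical data.
    best = max(product(range(2, 5), range(5, 9)), key=lambda p: backtest(prices, *p))
    print("best (fast, slow) windows:", best, "profit:", round(backtest(prices, *best), 2))
    ```

    Real trading systems are far more elaborate, but the shape is the same: a rule with free parameters plus a feedback loop that tunes them against data.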
     
  24. OldManOnFire

    It's the challenge!
     
  25. OldManOnFire

    Well...fear of AI being in the wrong people's hands? Concerns about the unknown limitations, if any, of super-intelligent machines? Fear of AI's impact on society, like fewer jobs or AI law enforcement? Fear of machines being smarter and more capable than humans? Fear of the unknown...
     
