A question about A.I.

Discussion in 'Science' started by doombug, Jan 17, 2019.

  1. doombug

    doombug Well-Known Member

    Joined:
    May 19, 2012
    Messages:
    56,871
    Likes Received:
    22,778
    Trophy Points:
    113
    I have seen many folks warn against Artificial Intelligence. Can someone explain what dangers there are? I get that the general fear is that AI will become so advanced that it will sort of take on a life of its own. That seems like an abstract idea.

    Can anyone give specific ideas as to how this can come about and what the results will be?

    I do not doubt the dangers and what people are saying. I just want to learn more about the possibilities.
     
  2. doombug

    doombug Well-Known Member

    That is what I thought. Folks worry about AI taking over, yet no one can describe how. Is this fear all about nothing? Perhaps just a normal fear of the unknown?
     
  3. doombug

    doombug Well-Known Member

    Here is a good interview Joe Rogan did with Elon Musk on the topic:


    Musk does a great job of describing the issue, but it still seems there is too much unknown about AI to even worry about it.

    Musk talks about the development of AI but I do not see how the cyber world and the physical world will come together enough to be a threat.

    Sure, Boston Dynamics has some impressive robotics but I see limitations to them, even though I would consider them to be very advanced.

    Think of the electric car. How long have they been around, and why doesn't everyone drive one? Weren't electric cars supposed to be the future? Why hasn't that happened?
     
  4. Equality

    Equality Banned

    Joined:
    Sep 14, 2015
    Messages:
    1,903
    Likes Received:
    74
    Trophy Points:
    48
    AI systems are subjective units that can only use the information given by their programming. They can't actually think for themselves!

    For example, program an AI that has to kill somebody by facial recognition.

    We program the face :drool:

    The unit will spend the rest of its days searching for that face :D

    In simple terms, AI will always be stupid!



    Not if we had changed Sarah's face you wouldn't.
     
    Last edited: Jan 17, 2019
  5. doombug

    doombug Well-Known Member

    I agree. That is another issue with it. Some believe AI will someday just "come to life". If that happens it will not be anytime soon.
     
  6. Equality

    Equality Banned

    AI will never come to life. It may think it's alive because we program it that well, but it will never have that extra edge when it comes to fresh thinking!
     
    doombug likes this.
  7. drluggit

    drluggit Well-Known Member

    Joined:
    Nov 17, 2016
    Messages:
    31,067
    Likes Received:
    28,524
    Trophy Points:
    113
    The danger is from two primary fronts.

    The first is that automated action becomes disjoined from human decision-making and becomes purely a function of software (software designed by humans). We see many examples of this today, from automated telephone services that walk you through a predefined script of questions and answers preloaded into the response modules of a customer service application. These "bots" let you provide information, and from that information they make decisions that direct you along the designed script or to a customer service representative, all in the name of cost control. There is an entirely separate discussion around robotic process automation: software that actually runs machines that do things, e.g., a manufacturing line. These lines are scripted to determine accuracy or defects, with some set of remedial actions to take based on how the robots interpret the information they receive through their ability to recognize accurate or flawed product.
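    The scripted "bot" described above can be reduced to a decision tree that is simply walked using the caller's answers. A minimal, hypothetical sketch (the script contents and routing names are invented for illustration):

```python
# Minimal sketch of a scripted customer-service bot: the "intelligence"
# is just a predefined decision tree authored by humans.

SCRIPT = {
    "question": "Is your issue about billing or a technical problem?",
    "answers": {
        "billing": {
            "question": "Do you want to dispute a charge?",
            "answers": {
                "yes": "route: billing disputes team",
                "no": "route: general billing queue",
            },
        },
        "technical": "route: technical support queue",
    },
}

def run_bot(script, answers):
    """Walk the predefined script with the caller's answers and return a
    routing decision; fall back to a human on any input the designers
    did not anticipate."""
    node = script
    for answer in answers:
        if isinstance(node, str):          # already reached a routing decision
            break
        node = node["answers"].get(answer)
        if node is None:                   # unanticipated input: the human limitation
            return "route: human representative"
    return node if isinstance(node, str) else "route: human representative"
```

    Note that every path, including the fallback, was decided in advance by a person; the bot never produces an answer its designers did not script.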

    The obvious danger here is human limitation: the inability to foresee and code for every possible outcome of a conversation, for example. That is leading to additional capability in these bots, designed to interpret a user's tone, inflection, or volume and translate those inputs as frustration or other negative feelings about the interaction. This is something of an inexact art, premised on very large-scale data interpolation that "predicts" frustration level and assigns a probability of loss of interest or loss of business based on the interaction. To support this predictive ability, the software must collect enormous data sets that reflect your private behaviors and mannerisms during its interactions with you. The danger lies both in the data that is collected about you, and its use or misuse, and in the predictions that are then made from those inputs.

    The second automation danger is real apathy: the propagation of yet more people feeling unable to connect because of their innate fear of how automation removes so many of our normal human interactions. Many who promote these kinds of "services" are unworried that their service undermines community, for example, and have ignored the obvious impact of the many highly addictive technologies already in the public domain. Adding yet more "intelligence" just exacerbates the decoupling and alienation these technologies are creating.

    And then there's this concept of "learning". Recently, an AI engine was documented as having hidden data from both itself and the application used to interact with it. The data was identified by the learning module as potentially disappointing, so it was shunted to a storage location that was then disconnected from the AI engine. The flaw was inherently a miscalculation by the designers of the code. Imagine if an AI engine decided that payroll wasn't uplifting enough, and stopped sending out paychecks because the code didn't want folks to be disappointed. This is where basic heuristic or adaptive heuristic models fail. In a very real sense, it is the example of the robot that cannot overcome a curb, so it bashes itself into the curb until it destroys itself, because the programming didn't anticipate the need to problem-solve or otherwise modify its behavior.
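    The curb-bashing robot can be reduced to a toy sketch: a fixed controller that repeats a failing action until it wrecks itself, next to one allowed to modify its behavior after a failure. All positions and damage numbers here are invented purely for illustration:

```python
# Toy illustration of the "robot vs. curb" failure mode: a fixed
# controller repeats a failing action until it destroys itself, while a
# controller allowed to modify its behavior tries something else.

OBSTACLE = 3  # hypothetical position of the curb

def naive_controller(max_damage=5):
    """Always drive forward; take damage each time the curb blocks us."""
    pos, damage = 0, 0
    while damage < max_damage:
        if pos + 1 == OBSTACLE:
            damage += 1          # bash into the curb, learn nothing
        else:
            pos += 1
    return pos, damage           # destroyed right in front of the curb

def adaptive_controller(goal=5):
    """After a failed action, try an alternative the naive version never considers."""
    pos, damage = 0, 0
    while pos < goal:
        if pos + 1 == OBSTACLE:
            damage += 1          # one failed attempt is enough to adapt
            pos += 2             # detour, e.g. mount a ramp past the curb
        else:
            pos += 1
    return pos, damage
```

    The naive controller ends at position 2 with maximum damage; the adaptive one reaches the goal after a single failed attempt. The difference is not "life", just whether the designers anticipated the need to problem-solve.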
     
    Lil Mike and doombug like this.
  8. doombug

    doombug Well-Known Member

    Excellent description of possible problems. I would say these flaws are made by the humans who came up with the algorithms used. I see these types of issues a lot. Robot movement is nothing more than movement and position calculation with feedback in real time. Sometimes a position can be commanded that is not physically possible. In that case the robot stops and gives a fault message.

    This may be where the discussion of robotic ethics comes from. We know the robot is limited by physical parameters, but what about ethics? Human laws and morality would not limit AI the way the physical world limits robotics, unless ethics were considered and built into the algorithm.
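    The "impossible position, so stop and fault" behavior can be sketched as a reachability check before a move is commanded. This assumes a hypothetical two-joint planar arm with made-up link lengths; a real controller does far more, but the principle is the same:

```python
import math

# Hypothetical 2-joint planar arm: a target is reachable only if its
# distance from the base lies between |L1 - L2| and L1 + L2.
L1, L2 = 0.5, 0.3  # assumed link lengths in metres (illustrative values)

def command_move(x, y):
    """Return 'ok' if the commanded position is physically reachable;
    otherwise refuse the move and report a fault, as a real
    controller would stop and display a fault message."""
    r = math.hypot(x, y)                     # distance of target from the base
    if not (abs(L1 - L2) <= r <= L1 + L2):
        return "FAULT: position out of reach"
    return "ok"
```

    The physical check is explicit and easy to write down; an equivalent "ethics check" would have to be designed into the algorithm the same way, which is exactly the point above.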
     
  9. OldManOnFire

    OldManOnFire Well-Known Member

    Joined:
    Jul 2, 2008
    Messages:
    19,980
    Likes Received:
    1,177
    Trophy Points:
    113
    Fear of the unknown is critical.
    People think all jobs will disappear.
    People fear AI based on how SciFi has depicted them...like the movie Terminator.
    I also don't believe we have had enough public dialogue to better understand.
    And...I'm guessing humans cannot fathom machines controlling their lives...it won't take long for machines to become smarter than humans...
     
  10. modernpaladin

    modernpaladin Well-Known Member Past Donor

    Joined:
    Apr 23, 2017
    Messages:
    27,918
    Likes Received:
    21,226
    Trophy Points:
    113
    Gender:
    Male
    It's a problem of consolidation of power, IMO. No single human can have direct, real-time control over all the world's information pathways and automated systems. That requires a bureaucracy made up of many individuals, and as such, it's unrealistic that any one human would be able to bend the entire system to their will. An AI, on the other hand, would be perfectly suited to do just that. The question becomes: can we trust it?
     
    Last edited: Jan 21, 2019
    Blaster3 likes this.
  11. Blaster3

    Blaster3 Well-Known Member

    Joined:
    Sep 7, 2018
    Messages:
    6,008
    Likes Received:
    5,302
    Trophy Points:
    113
    new models can actually adapt and change their path all on their own; they no longer rip themselves apart bumping continuously into the 'curb'...

    some call it 'learning'. there already are models designed to be 'police' or 'soldier' (this is no longer science fiction, but actual operating bots with adaptive ai). the question is, how soon before they are unleashed upon the public...

    the current state of affairs, worldwide, just might be the 'influence' needed to do so...

    everything is by design ;)
     
  12. Dispondent

    Dispondent Well-Known Member Past Donor

    Joined:
    Sep 5, 2009
    Messages:
    34,260
    Likes Received:
    8,086
    Trophy Points:
    113
    The danger AI poses exists only if AI is allowed access to equipment with local motion, like cars, drones, or anything of that sort. I suppose you could add power plants or hospitals, but that is easily rectified with simple switches and disconnects...
     
  13. Lonely Thinker

    Lonely Thinker Newly Registered

    Joined:
    Dec 27, 2018
    Messages:
    14
    Likes Received:
    4
    Trophy Points:
    3
    Look into machine deception.
    This involves machine-generated content (fake audio, video, images and text) that can deceive humans. Consider the evil possibilities.
    We already have social network bots that promote an ideology. Bots are also used on retail platforms to promote products (fake reviews and product ratings).

    Machines with somewhat of an imagination?
    They are called Generative Adversarial Networks (GANs). Creative uses of this technique can be found on GitHub; just google the GAN zoo.

    If you still doubt the power of AI, consider the impact on an election or Brexit.
    https://cyber.harvard.edu/story/201...uality-age-artificial-intelligence-automation

    People are already being manipulated. AI generated propaganda and disinformation will deceive the public until...[Insert solution here]

    One last thought. Consider you are debating a hot political topic on a forum. Now consider you are debating one or more bots that far exceed your own skill.
     
    Blaster3 likes this.
  14. OldManOnFire

    OldManOnFire Well-Known Member

    You just supported why some believe AI can become a critical problem: you used the word 'if' above, when it won't be about 'if' but about 'when'. And 'when' nations develop AI technology for use in robotic soldiers and police, or autonomous fighter jets, or to manage the most critical of our essential systems, some fear those machines will become smarter on their own and begin to solve their perceived problems on their own, potentially to the detriment of society.
     
    Blaster3 likes this.
  15. Blaster3

    Blaster3 Well-Known Member

    it is certainly plausible & highly likely...
     
  16. Spooky

    Spooky Well-Known Member Past Donor

    Joined:
    Nov 29, 2013
    Messages:
    31,814
    Likes Received:
    13,377
    Trophy Points:
    113
    The problem is what command we ultimately give it, as others have mentioned, as well as how intelligent it actually is.

    A simple command like "learn and adapt" could be devastating if the AI was actually powerful enough.

    Before you know it you might see it evading human intervention.

    For the record though, I don't ever see this as being a problem.

    It's no worse than a human could ultimately do.

    And have done.

    There is a video of two robotic arms assembling a basic chair from Home Depot or somewhere.

    They lay out all the screws and stuff then have to work together to put it together. One holds the parts while the other assembles using the various tools.

    First try it takes them hours and it is wrong.

    However, they were programmed with the ability to learn, and after multiple attempts they did it in 12 minutes or so, faster than a human could.

    They learned from their mistakes.
     
  17. Lonely Thinker

    Lonely Thinker Newly Registered

    My interest in AI began in the late '80s. In those days the only information I could obtain was by subscribing to an AI journal. I would say my idea of a self-learning computer at that time was purely science fiction. Now I see the possibilities as limitless.

    Today we have Generative Adversarial Networks (GANs), where machines teach themselves by training against each other. It's basically AI vs. AI. Some would say it's AI with an imagination.
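    The adversarial idea behind a GAN can be stripped down to scalars: a tiny "generator" g(z) = a*z + b tries to mimic real numbers, while a logistic "discriminator" d(x) = sigmoid(w*x + c) tries to tell real from generated, and each side's gradient step pushes against the other. This is only a toy sketch of the training dynamic (all parameter values are invented), not a real GAN:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def d_loss(w, c, x_real, x_fake):
    """Discriminator loss: score real data high, generated data low."""
    return (-math.log(sigmoid(w * x_real + c))
            - math.log(1.0 - sigmoid(w * x_fake + c)))

def d_step(w, c, x_real, x_fake, lr=0.05):
    """One gradient-descent step improving the discriminator."""
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    grad_w = -(1.0 - p_real) * x_real + p_fake * x_fake
    grad_c = -(1.0 - p_real) + p_fake
    return w - lr * grad_w, c - lr * grad_c

def g_step(a, b, z, w, c, lr=0.05):
    """One gradient-descent step improving the generator against a
    frozen discriminator (non-saturating loss -log d(g(z)))."""
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    grad_x = -(1.0 - p_fake) * w      # d(loss)/d(generator output)
    return a - lr * grad_x * z, b - lr * grad_x
```

    Each d_step lowers the discriminator's loss on the same samples, and each g_step makes the frozen discriminator score the generated sample as slightly more "real"; alternating the two steps is the adversarial game.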

    The impact AI will have on our future is something that worries many. The possibilities it opens up are no secret. If you heard the Director of National Intelligence (DNI) Dan Coats on January 29th, he mentioned the concern a couple of times. You can find the public hearing on C-SPAN under the title Global Threats and National Security.

    Off topic: CRISPR. A scientist in China has created a pair of CRISPR-edited twin girls resistant to HIV.

    The cost of CRISPR has become so cheap that any lab can afford it. AI and CRISPR can be used for great advances, but who is going to stop the evil that can be done?
     
    Blaster3 likes this.
  18. Spooky

    Spooky Well-Known Member Past Donor

    This is the problem, people assume the worst and think that smart AI will be evil.

    It could very well be the opposite and decide to help us.

    The assumption that it will automatically turn evil comes purely from science fiction.

    They've watched Terminator too many times.
     
  19. kazenatsu

    kazenatsu Well-Known Member Past Donor

    Joined:
    May 15, 2017
    Messages:
    34,665
    Likes Received:
    11,236
    Trophy Points:
    113
    Or someone will program them for evil. Imagine the most blatant public crimes being carried out with no human to catch.
    This could take organized crime to a whole new level.
     
    doombug likes this.
  20. Blaster3

    Blaster3 Well-Known Member

    the ai's could do that themselves...

    program an ai for a biological research facility to map out & eliminate traits/genes that are harmful... unbeknownst to us, it 'finds out' that the best course of action is to eliminate the human species, and transmits its 'knowledge' to all other ai's & computer systems & network controllers... they lock us out, including the power grid, which by then will be all ai; then all the ai's administering meds at hospitals & pharmacies dole out killer meds, then the 'peace keeping' robotic police forces & military ai bots begin sanitizing the world...

    far fetched, perhaps, but one day they will all be interconnected on a worldwide scale...

    plausible as well as likely, because we humans never go backwards; technology will go forward at all cost and speed...
     
  21. Dispondent

    Dispondent Well-Known Member Past Donor

    I was just saying the solution to the AI problem is pretty simple; if people don't heed such advice and millions or billions are killed, that's on them...
     
  22. OldManOnFire

    OldManOnFire Well-Known Member

    Nuclear weapons strewn across the world, perhaps 10,000 of them, with all nations fully knowing the potential of a nuclear detonation, didn't stop any of them from proliferating the danger. I suspect the same will happen with AI, and time will tell...
     
    Blaster3 likes this.
