Will A.I. Kill us?

Discussion in 'Science' started by Patricio Da Silva, Jan 5, 2022.

  1. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,904
    Likes Received:
    17,250
    Trophy Points:
    113
    Gender:
    Male
    This guy says yes.

    Jeez, is this really happening?

     
    DennisTate likes this.
  2. wgabrie

    wgabrie Well-Known Member Donor

    Joined:
    May 31, 2011
    Messages:
    13,884
    Likes Received:
    3,079
    Trophy Points:
    113
    Gender:
    Male
AI will kill humans as our warfare technologies advance and machines replace humans on the ground. This is safer for the humans in service, and fewer of them will need to be deployed.

We just need to make sure the automatic weaponry doesn't turn on us like a bad science fiction plot.

Of course, some say the universe itself is an AI teaching itself physics. And the universe hasn't killed us yet. It's killed other things through mass extinction, but not us... yet.
     
    DennisTate likes this.
  3. Kranes56

    Kranes56 Banned

    Joined:
    Feb 23, 2011
    Messages:
    29,311
    Likes Received:
    4,187
    Trophy Points:
    113
    Gender:
    Female
Are bad AI programs killing people and making governance much harder? Yes. Is fully intelligent AI going to kill us? Probably not.
     
    DennisTate likes this.
  4. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,904
    Likes Received:
    17,250
    Trophy Points:
    113
    Gender:
    Male
    Only because it is illegal for a machine to make a kill decision without a human in the loop.

    But, someone, at some point, could change that. They are trying to, that is certain. The entire point of the article was to warn humans of that eventuality.
     
    DennisTate likes this.
  5. HonestJoe

    HonestJoe Well-Known Member Past Donor

    Joined:
    Oct 28, 2010
    Messages:
    14,876
    Likes Received:
    4,853
    Trophy Points:
    113
To be fair, that's largely because a book called "Don't Worry, Everything Will Be Fine" wouldn't sell very well. There are certainly lots of serious and significant questions to ask about AI development, but a lot of the commentary is, on the surface at least, excessively alarmist and dramatic to attract attention (even if it comes from a broadly positive motivation). There is also a major issue with a lot of things being called AI, or attributed to AI, when they're nothing of the sort (though that isn't the case in this example). For most laymen, this is as much about fear of the unknown as anything else.
     
    DennisTate and Hey Now like this.
  6. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,904
    Likes Received:
    17,250
    Trophy Points:
    113
    Gender:
    Male
But what he is predicting is, in fact, the direction we are headed, and we are very close to it. The only thing preventing it is a law, which could be repealed.
     
    DennisTate likes this.
  7. HonestJoe

    HonestJoe Well-Known Member Past Donor

    Joined:
    Oct 28, 2010
    Messages:
    14,876
    Likes Received:
    4,853
    Trophy Points:
    113
He's predicting a worst-case scenario (certainly in titles, headlines and the like). In this case, he may well be doing so with some kind of good intention, to warn of the potential risks of this extreme, but those extremes are no more likely here than they are in any other field.

I also have a slight logical issue with the idea that it is fundamentally wrong and evil for a machine to decide to kill someone, but entirely legitimate (indeed, often expected and celebrated) if a human decides to do exactly the same thing. I totally accept that there are some practical differences, but I don't think the practical or moral line is anything like as solid as is so often made out. I think the consequences of all this are just much more complex and nuanced than is commonly presented in contexts like this.
     
    Hey Now likes this.
  8. politicalcenter

    politicalcenter Well-Known Member

    Joined:
    Jan 10, 2011
    Messages:
    11,117
    Likes Received:
    6,796
    Trophy Points:
    113
    Gender:
    Male
AI does not kill people; people kill people. I imagine it will be another weapon of war. I fear man's inhumanity to man.
     
    DennisTate likes this.
  9. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,904
    Likes Received:
    17,250
    Trophy Points:
    113
    Gender:
    Male
He did say that they are shooting for machines that can decide whom to kill, without humans in the killing decision.

He also said that AI is developing self-preservation and self-replication capabilities, giving the impression that humans could lose control over it. No?
     
    DennisTate likes this.
  10. HonestJoe

    HonestJoe Well-Known Member Past Donor

    Joined:
    Oct 28, 2010
    Messages:
    14,876
    Likes Received:
    4,853
    Trophy Points:
    113
    Yes, potentially, but as I said, I think we need to address the question of why we assume that would be automatically wrong yet as soon as you introduce a human in the decision (with all the human flaws, biases and limitations), it instantly becomes perfectly acceptable. I would argue that the issue of machines used to kill people (often lots of people) shouldn't be focused exclusively on who makes the immediate decisions "on the ground".

In some contexts, yes, though you don't need AI for that risk to exist. Simple computer viruses, some of which have been developed for state and military use, can end up spreading much wider than intended too. Again, I am totally in support of recognising and addressing the risks of developing technology and how it is used. I just don't think that should focus exclusively on AI (actual AI or what people call AI), and it shouldn't be presented as if AI poses unique and special risks that don't exist in any of the surrounding fields.
     
  11. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,867
    Likes Received:
    16,451
    Trophy Points:
    113
First, I think it's a faint hope to program some sort of AI not to kill humans without human corroboration of the decision.

    We're so majorly dependent on the internet and other technology that simple mistakes could mean death for large numbers. Could an AI ever decide that turning off electricity production was the best way to stop a computer virus? Could it decide that the internet is too much of a threat?

Would an AI be smarter about human life than Texas?
     
    Hey Now likes this.
  12. WillReadmore

    WillReadmore Well-Known Member

    Joined:
    Nov 21, 2013
    Messages:
    59,867
    Likes Received:
    16,451
    Trophy Points:
    113
    While the warbot idea is certainly scary, I don't really see that as the largest threat, though there is major opportunity for horrendous atrocity.

    I think the larger question is more a matter of how an AI could become so smart as to NOT kill people.

    We tend to ignore the solutions to serious problems that we know would mean disaster for many humans - what it means to shut down the internet or electric power, etc.

Ensuring that we won't have AIs that fail to catch all these mistakes in approaches to those serious problems - THAT seems more than just hard.

Plus, contemplating any serious AI brings up the problem of AIs being able to design and create new AIs, or to otherwise modify their own programming to better accomplish the mission. After all, creating advanced AIs, or approaching human brain capability even in some minor functional way, WOULD require AIs and learning systems to accomplish even to a small degree - not some human programmer who adds "please don't kill me" code.
     
  13. DennisTate

    DennisTate Well-Known Member Past Donor

    Joined:
    Jul 7, 2012
    Messages:
    31,659
    Likes Received:
    2,631
    Trophy Points:
    113
    Gender:
    Male
Yes... definitely a rather scary prospect...
no wonder there is soon to be an outpouring of the Ruach ha Kodesh on all flesh... We are all about to be humbled in comparison to the enemies we will soon face...


Yes... in the future there is little doubt that A.I. will be programmed to kill certain people...
     
    Last edited: May 14, 2022
  14. fmw

    fmw Well-Known Member

    Joined:
    Aug 21, 2009
    Messages:
    38,288
    Likes Received:
    14,761
    Trophy Points:
    113
    Is it also illegal in China or Russia?
     
  15. DennisTate

    DennisTate Well-Known Member Past Donor

    Joined:
    Jul 7, 2012
    Messages:
    31,659
    Likes Received:
    2,631
    Trophy Points:
    113
    Gender:
    Male



Define the word "kill", because A.I. and robotics technology can certainly kill many of us ECONOMICALLY!

This brings us to the need for a Basic Minimum Income that is UNCONDITIONAL BUT TAXABLE and GOES TO ALL CITIZENS OF CANADA AND THE U.S.A....

I am proposing an unconditional but taxable B.M.I. of five hundred dollars per month for all Canadian citizens and legal residents, regardless of income level or age.

    I stole this concept from Economist Milton Friedman and several others.
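The "unconditional but taxable" part of the proposal can be sketched in a few lines: everyone receives the same $500/month, but because it is added to taxable income, higher earners keep less of it after tax. This is an illustrative sketch only; the tax rates are hypothetical, not Canadian or U.S. figures.

```python
# Sketch of an unconditional-but-taxable B.M.I., as described above:
# every citizen gets $500/month gross, taxed at their marginal rate.
# The 0% and 33% rates below are hypothetical, for illustration only.
MONTHLY_BMI = 500.0

def net_annual_bmi(marginal_tax_rate: float) -> float:
    """Net yearly benefit after the payment is taxed at the recipient's marginal rate."""
    gross = 12 * MONTHLY_BMI
    return gross * (1.0 - marginal_tax_rate)

print(net_annual_bmi(0.0))    # a low earner with no tax keeps the full $6,000
print(net_annual_bmi(0.33))   # a higher earner keeps roughly $4,020
```

The point of the sketch is that the payment is unconditional (everyone gets the same gross amount), while taxability makes it progressively smaller in net terms as income rises.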

    http://www.politicalforum.com/index...-own-the-u-s-a-dollar.599736/#post-1073450404

    Do three hundred and thirty million Americans own the U.S.A. Dollar?



     
    Last edited: May 17, 2022
  16. (original)late

    (original)late Banned

    Joined:
    Aug 19, 2015
    Messages:
    8,372
    Likes Received:
    4,001
    Trophy Points:
    113
    Gender:
    Male
    Our relationship with computers is symbiotic, and it will stay that way, AI or no AI.
     
    DennisTate likes this.
  17. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,904
    Likes Received:
    17,250
    Trophy Points:
    113
    Gender:
    Male

Milton Friedman proposed a negative income tax. In other words, say we set the threshold at 130% of the poverty level; anyone earning below that gets a check to bring them up to that level. I'd support that, but only if they regionalized the poverty thresholds, because that figure would vary from one region to another.
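The threshold scheme described above can be sketched directly: incomes below 130% of the poverty level are topped up to that line, incomes above it get nothing. The poverty-level figure is a made-up placeholder, and note that Friedman's own negative income tax paid only a fraction of the shortfall rather than a full top-up; this sketch follows the full top-up as described in the post.

```python
# Illustrative sketch of the threshold top-up described above.
# POVERTY_LEVEL is a placeholder figure, not an official statistic.
POVERTY_LEVEL = 14_580.0
THRESHOLD = 1.30 * POVERTY_LEVEL   # 130% of the poverty level

def top_up_check(income: float) -> float:
    """Return the annual check that brings income up to the threshold (zero if above it)."""
    return max(0.0, THRESHOLD - income)

print(top_up_check(10_000))   # below the threshold: receives the difference
print(top_up_check(25_000))   # above the threshold: receives nothing
```

Regionalizing the thresholds, as the post suggests, would just mean replacing the single `POVERTY_LEVEL` constant with a lookup by region.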
     
  18. LiveUninhibited

    LiveUninhibited Well-Known Member

    Joined:
    Sep 26, 2008
    Messages:
    9,645
    Likes Received:
    2,977
    Trophy Points:
    113
    Kill us? Probably not. Change us? Definitely. And it can't be stopped unless something apocalyptic takes down technological innovation. This has never been a question of if but when.

He's mostly talking about the singularity. Socially, the biggest issue is that fewer people would need to work for a living, so how do we decide how resources are allocated? The other big issue is that, effectively, the long-run winners will be the societies willing to fully adopt and hybridize with this technology. The people getting killed, if any, will be those who don't adopt it, because they will be quickly outmatched. Humans will become semi-synthetic.

In the short term, though, it's going to be more about decision-support tools. In medicine, each doctor will be able to do more with AI assistance, so the model for ten doctors' worth of work might instead be one doctor and three nurse practitioners, all using AI decision support to help them not miss things. Patients would still want a human touch; just fewer humans would be required per unit of work. Though in medicine that usually just means they do more work, not that fewer doctors are needed.
     
    Last edited: May 27, 2022
  19. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    Joined:
    Apr 26, 2020
    Messages:
    31,904
    Likes Received:
    17,250
    Trophy Points:
    113
    Gender:
    Male

Regarding AI and military drones, I hope they don't go so far as to take humans out of killing decisions and let the drones run wild.
     
  20. cristiansoldier

    cristiansoldier Well-Known Member

    Joined:
    Apr 24, 2014
    Messages:
    5,021
    Likes Received:
    3,434
    Trophy Points:
    113
    VR will probably destroy civilization before AI.
     
  21. HereWeGoAgain

    HereWeGoAgain Banned

    Joined:
    Nov 11, 2016
    Messages:
    27,942
    Likes Received:
    19,979
    Trophy Points:
    113
  22. fmw

    fmw Well-Known Member

    Joined:
    Aug 21, 2009
    Messages:
    38,288
    Likes Received:
    14,761
    Trophy Points:
    113
Science fiction has played with the idea of digital blood lust for a long time. Science, on the other hand, hasn't weighed in on the subject. So it is probably best to see it as a science fiction sort of thing rather than a scientific one.
     

Share This Page