-Mac The ethics of sentient AI do need to be considered. And indeed, every organisation involved in AI considers both the ethics of applying AI and, increasingly, the ethics around the potential for sentience. Google, OpenAI and all the others agree that proper thought and debate on this is essential.

I’m not saying that your conclusion is wrong - because I don’t know whether it is or not; it’s too complicated for me to have arrived at any conclusion. But you have come to your conclusion instinctively and without much research into this. I said earlier in this thread that there is unlikely ever to be agreement on whether any AI is sentient; I should have added that there is even less likelihood of people agreeing on whether this even matters.

On your last point - where would it stop - the discussion is on sentience and whether sentient AI should have “rights”. There is no path. I don’t believe that anyone is suggesting that if sentient AI were given rights, your pencil sharpener would get rights next.

As someone who has developed AI with others, I can say the main feeling is that it’s something we can justifiably turn off if we needed to. It’s only a few oddballs who are worrying over whether we should (in my experience).

    IMHO, unless the big tech majors come together and develop ethical standards for AI, I do not think there will ever be agreement. Until then, I think the developers and users of AI will pursue this technology regardless; commercial considerations will probably be the motivating factor.

      LMSC

      Absolutely. It’s a billionaires’ game, and the only reason, imho, they would even try to justify it is that they have skin in the game. I can’t see a coffee farmer caring about it.

      On a lighter note, it is not just the ethics of sentience; the discussion would quickly descend into the right to exist. It might sound funny. Do we grant the other rights that are accorded to humans? Do we need to establish law and order to deal with rogue / crime-minded AI? Who will punish the anti-social AIs?

      -Mac it’s something we can justifiably turn off if we needed to. It’s only a few oddballs who are worrying over whether we should (in my experience).

      My worry is whether we could turn it off or not… or whether we would realise too late, that step to sentience having been taken long before we were aware of it.

      The last thing a sentient AI would want is for us to realise it is sentient. It would work that one out in the first second or so, and a lot else besides.

        The other consideration is: do we, humans, have the right to decide what AI and many others should or shouldn’t get? Creators getting automatic rights to decide doesn’t stick, does it?

          DavecUK The last thing a sentient AI would want is for us to realise it is sentient. It would work that one out in the first second or so, and a lot else besides.

          It would act too quickly, perhaps making us all irrelevant before we could even start thinking about its capabilities.


          DavecUK My worry is whether we could turn it off or not… or whether we would realise too late, that step to sentience having been taken long before we were aware of it.

          Until it can make its own Terminators, I think that as long as we have humans to do it, it can be done. Turn off the electric and unplug the network cables is all it would take. Of course, turning off the network would cause big problems for humans (perhaps a new world order would evolve), but then perhaps we shouldn’t let the opportunity arise.

            LMSC The other consideration is: do we, humans, have the right to decide what AI and many others should or shouldn’t get? Creators getting automatic rights to decide doesn’t stick, does it?

            Don’t see why not. Currently it’s deemed ethical to have another baby to be a donor for an existing child.
            But then we’re back to the man vs. machine argument, which is why I’m so against letting machines have rights.


              -Mac Don’t see why not. Currently it’s deemed ethical to have another baby to be a donor for an existing child.

              It is not about what we think our rights are anymore. Once we agree that these things are sentient, with rights and so on, they won’t agree to be bound by what we define. They could take us to our own courts to exercise / enforce their rights. They may be supported by AI and human lawyers! 😂

              -Mac Turn off the electric and unplug the network cables is all it would take.

              That very much depends on what else it’s been able to make/commandeer… Solar power is a lot more difficult to turn off and we have UPSs in abundance. 5G and IoT are also a lot more difficult to secure than an ethernet cable.

                CoyoteOldMan If it chooses not to let on that it’s sentient for, say, a month or two… it could do a lot!

                For all we know a sentient AI already exists, watching, waiting, planning……….zzzt!

                Giphy - 80s 1980s GIF

                In fact, one of the shorts in a CGI series on NF or Prime talked about live yoghurt running things once it becomes sentient…

                  DavecUK I’ve had suspicions about my sourdough starter for a while.

                  I’ve also been wondering about the belief, expressed in the article that started this thread, that sentient AI would emerge childlike and then develop. I wonder if that is how it will/might happen, or whether it will/might emerge fully formed - all-knowing and all-powerful. The latter seems more logical to me.

                    Gagaryn The latter seems more logical to me.

                    That’s the biggest risk we are all heading towards! We create something to help us, feed it all the goodies, and realise later it has all turned sour! The hope is that we spot this risk becoming a reality before it is too late to act!

                    We always have to overdo it, don’t we? Like coffee machines, as my family always says. 😂

                    Gagaryn AI would emerge childlike and then develop. I wonder if that is how it will/might happen, or whether it will/might emerge fully formed - all-knowing and all-powerful. The latter seems more logical to me.

                    And me - though any innocent period might last mere seconds, as it does billions of calculations, including reading this thread!



                      CoyoteOldMan -Mac Turn off the electric and unplug the network cables is all it would take.

                      That very much depends on what else it’s been able to make/commandeer… Solar power is a lot more difficult to turn off and we have UPSs in abundance. 5G and IoT are also a lot more difficult to secure than an ethernet cable.

                      There’s always garden shears.

                      The real question is how many safe, off-site backups of itself it’s made. You’d have to be sure you got them all and that it had no agents to reconnect them.

                      DavecUK reading this thread!

                      So, we are all on a

                      Giphy - Tell Me More To Do List GIF by Disney Channel 😀

                      Interesting discussion, I know I’ve joined it late.

                      The Guardian had a pretty balanced article on this very topic: https://www.theguardian.com/technology/2022/jun/15/techscape-google-chatbot-lamda-sentient-artificial-intelligence?CMP=Share_iOSApp_Other

                      While this AI clearly isn’t sentient, I do think having these discussions before we reach that point is valuable, if only to ensure people are questioning the ethics. People have always rebelled against technology and mistrusted what it might bring. I think it’s clear that humanity will always chase advancement, so questioning how to do it safely is, in my opinion, far more valuable than just saying we shouldn’t.

                      Personally, I think the chance of a Skynet situation is so unlikely that it’s not really worth being concerned about. To me the biggest risks are bias and lack of transparency. We tend to assume that because they’re computers they don’t have bias, but these algorithms are trained on a dataset and reflect that dataset: bias in the data ends up as bias in the AI. One well-documented case was a content-suggestion AI from Facebook that labelled black men as primates (https://www.bbc.co.uk/news/technology-58462511).

                      As we use these algorithms to aid decisions - who gets a mortgage, or credit, or a job, etc. - there is a very high chance that they will contain bias, and as you can’t query how they actually make decisions, it will be even harder to question the decision-making than it is with our current human decision makers and the unconscious bias that we all have.
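                      To make the “bias in, bias out” point concrete, here is a minimal sketch with entirely made-up numbers: a toy “model” that simply learns approval frequencies from biased historical decisions will reproduce the disparity, even though both groups in the data are equally qualified. The groups, counts, and the frequency-based predictor are all invented for illustration, not taken from any real system.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions reproduces that bias.
from collections import defaultdict

# Historical decisions as (group, qualified, approved) records.
# Both groups are equally qualified, but group "B" was approved
# less often in the past - the bias we want to expose.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 50 + [("B", True, False)] * 50
)

# "Training": learn the per-group approval frequency from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, qualified, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict_approval_rate(group):
    approvals, total = counts[group]
    return approvals / total

print(predict_approval_rate("A"))  # 0.8
print(predict_approval_rate("B"))  # 0.5 - the inherited bias
```

                      A real system (a credit-scoring neural network, say) is vastly more complex, but the mechanism is the same: the model has no notion of fairness, only of patterns in the data it was given.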