DavecUK The problem really goes back to, what is life, sentience etc..

Not sure it totally does. It’s more about what is useful for humanity in my head.

DavecUK The logic is inescapable: if you could manufacture a machine with electronic neurones that precisely replicated the brain and its activity/abilities, and compared that with a “living brain in a jar”…I’ll call this the brain in a jar hypothesis…what would be the difference?

The difference would be, imho, that the brain had come about because of natural evolution and the other is man-made.

DavecUK Scientists can’t even agree whether a virus is alive or not…I personally don’t believe a virus is alive….but I believe a bacterium is.

I don’t know why they can’t just agree that anything that can evolve is ‘alive’ but that not all living things need to have rights. We don’t worry about the lives of the wheat that makes our bread. We don’t even extend that concern to meat <shrug> - which I for one am happy about philosophically, as long as the animals get good welfare.

-Mac You obviously don’t like evolution much.

What makes you say that? I am in awe at the ability of evolution to “find solutions” to environmental conditions that are as difficult and varied as they are - and I’m even more in awe at the elegance of some of these solutions and the goofiness of others. I have pretty much all the books by Dawkins, and I very much enjoy re-reading Stephen Jay Gould.

I simply don’t think that “evolved” entities automatically deserve a special place in and of themselves. Ethics - it seems to me - should depend on the complexity of the entity and how ‘intelligent’ (and we can debate what that means) it is, not on whether it’s the result of however many millions of years of biochemically-driven evolution or a few tens (hundreds?) of years of technical evolution.

I think Gagaryn and DavecUK have summarised my position much more clearly than I have managed to do in the last two days.

-Mac The ethics of sentient AI do need to be considered. And indeed every organisation that is involved in AI considers the ethics of both the application of the AI and increasingly the ethics around potential for sentience. Google, OpenAI and all the others agree that proper thought and debate on this is essential.

I’m not saying that your conclusion is wrong - because I don’t know whether it is or not; it’s too complicated for me to have arrived at any conclusion. But you have come to your conclusion instinctively and without much research into this. I said earlier in this thread that there is unlikely ever to be agreement on whether any AI is sentient; I should have also said that there is even less likelihood of people agreeing whether this even matters.

On your last point - where would it stop - the discussion is on sentience and whether sentient AI should have “rights”. There is no slippery slope. I don’t believe that anyone is suggesting that if sentient AI were given rights, your pencil sharpener would get rights next.

As someone who has developed AI with others, the main feeling is that it’s something we can justifiably turn off if we needed to. It’s only a few oddballs who are worrying over whether we should (in my experience).

    IMHO, unless the big tech majors come together and develop ethical standards for AI, I do not think there will ever be agreement. Until then, I think the developers and users of AI will pursue this technology regardless. Commercial considerations will probably be a motivating factor.

      LMSC

      Absolutely. It’s a billionaires’ game and the only reason, imho, they would even try to justify it is that they have skin in the game. I can’t see a coffee farmer caring about it.

      On a lighter note, it is not just the ethics of sentience; the discussions would quickly descend into the right to exist. It might sound funny. Do we extend the other rights that are accorded to humans? Do we need to establish law and order to deal with rogue / criminally minded AI? Who will punish the anti-social AIs?

      -Mac it’s something we can justifiably turn off if we needed to. It’s only a few oddballs who are worrying over whether we should (in my experience).

      My worry is whether we could turn it off or not… or would we realise too late. That step to sentience having been taken long before we were aware of it.

      The last thing a sentient AI would want, is for us to realise it is sentient? It would work that one out in the first second or so and a lot else besides.

        The other consideration is: do we, humans, have the right to decide what AI and many others should or shouldn’t get? “Creators get automatic rights to decide” doesn’t stick, does it?

          DavecUK The last thing a sentient AI would want, is for us to realise it is sentient? It would work that one out in the first second or so and a lot else besides.

          It would be too quick to act and could perhaps make us all irrelevant before we could even start thinking about its capabilities.

          DavecUK My worry is whether we could turn it off or not… or would we realise too late. That step to sentience having been taken long before we were aware of it.

          Until it can make its own Terminators, I think as long as we have humans to do it, it can be done. Turn off the electric and unplug the network cables is all it would take. Of course, turning off the network would cause big problems for humans (perhaps a new world order would evolve), but then perhaps we shouldn’t let the opportunity arise.

            LMSC The other consideration is do we, humans, have the right to decide what AI and many others should or shouldn’t get? “Creators get automatic rights to decide” doesn’t stick, does it?

            Don’t see why not. Currently it’s deemed ethical to have another baby to be a donor for an existing child.
            But then we’re back to the man vs. machine argument which is why I’m so against letting machines have rights.


              -Mac Don’t see why not. Currently it’s deemed ethical to have another baby to be a donor for an existing child.

              What we think is our right won’t be relevant anymore. Once we agree that these things are sentient, have rights and so on, they won’t agree to be bound by what we define. They could take us to our own courts to exercise / enforce their rights. They may be supported by AI and human lawyers! 😂

              -Mac Turn off the electric and unplug the network cables is all it would take.

              That very much depends on what else it’s been able to make/commandeer… Solar power is a lot more difficult to turn off and we have UPSs in abundance. 5G and IoT are also a lot more difficult to secure than an ethernet cable.

                CoyoteOldMan If it chooses not to let on it’s sentient for say a month or two….it could do a lot!

                For all we know a sentient AI already exists, watching, waiting, planning……….zzzt!


                In fact one of the shorts that’s part of a CGI series on NF or Prime talked about live yoghurt running things when it becomes sentient….

                  DavecUK I’ve had suspicions about my sourdough starter for a while.

                  I’ve also been wondering about the belief, expressed in the article that started this thread, that sentient AI would emerge childlike and then develop. I wonder if that is how it will/might happen, or whether it will/might emerge fully formed - all-knowing and all-powerful. The latter seems more logical to me.

                    Gagaryn The latter seems more logical to me.

                    That’s the biggest risk we are all heading towards! We create something to help us, feed it all the goodies, and realise later it has turned sour! The hope is that we recognise this risk becoming a reality before it is too late to act!

                    We always have to overdo it, don’t we? Like coffee machines, as my family always says. 😂

                    Gagaryn AI would emerge childlike and then develop. I wonder if that is how it will/might happen, or whether it will/might emerge fully formed - all-knowing and all-powerful. The latter seems more logical to me.

                    Same here - there might be an innocent period lasting mere seconds, as it does billions of calculations, including reading this thread!


                      CoyoteOldMan -Mac Turn off the electric and unplug the network cables is all it would take.

                      That very much depends on what else it’s been able to make/commandeer… Solar power is a lot more difficult to turn off and we have UPSs in abundance. 5G and IoT are also a lot more difficult to secure than an ethernet cable.

                      There are always garden shears.

                      The real question is how many safe, off-site backups of itself it’s made. You’d have to be sure you got them all and that it had no agents to reconnect them.