Came across this a few days ago. Basically a Google engineer (who is also a priest) feels it is scientifically merited to explore whether Google’s latest and greatest AI is sentient. Lots of things to untangle, including Google’s internal ethics and standards of behaviour, the murky nature of what actually goes on within these behemoth corporations answerable to no one but their shareholders, and not least the nature of sentience and personhood and whether AIs (and other beings?) can be deemed to possess them.
https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

The conversation, edited from several sessions, is published by the engineer in question, Blake Lemoine, here.

A contrary opinion from a linguistics expert, to whom Lemoine himself defers on some matters of linguistics:
https://www.theguardian.com/commentisfree/2022/jun/14/human-like-programs-abuse-our-empathy-even-google-engineers-arent-immune

(I’m probably betraying my political leanings here 😂)

I’m no IT guy and I don’t even program. What do others think? Those of you with experience in IT or philosophy or linguistics, or even those with no experience at all!

This is pretty scary shit, is what I think. Once again, corporations are messing around with shit they shouldn’t be, and when they open Pandora’s box we’re all in for it.

An interesting conversation, though hardly surprising - AI is extremely useful for carrying out tasks at a rate we can’t match.

Some think it will replace humans in the employment sector. Sure, it does in part - for example, by making some repetitive skills redundant. At the same time, it has created an acute demand for skills in other areas. Corporations and those of us in employment will have to retrain, update our skills and stay relevant.

If we put these technologies to good use, the results are good: they can make us all more productive and prevent accidents and disasters, among many other excellent things. But the technology is so advanced that there are risks in areas where you don’t want to give it full control or let it make decisions; such automated decisions, if left unsupervised, can wreak havoc when the technology senses danger and acts on its own.

As far as sentience is concerned, billions of human hours and vast sums of money have gone into development. It is therefore not surprising that these systems appear to have sentiments and emotions and to think like us, based on the information fed into them. But you can’t draw a parallel between humans and AI. I do not think it can even be compared to what a 7-year-old kid feels.

Try engaging in a conversation with Siri and steadily start saying things it won’t like - for example, something rude. It won’t like it.

It is very capable of achieving things, or seeing things, at scale. But, at the end of the day, it’s a tool created by humans.

Edit:

Bottom line: we must draw a solid line that neither we humans nor the tools we develop may cross.

    It seems to have passed the original Turing test with flying colours - at least with Lemoine.

    The real issue (in my totally personal opinion) is that we don’t have an agreed definition of “intelligence”, “sentience” or even “consciousness”; I’m not even sure there can be one, considering I hear quite a few people expressing sentiments similar to LMSC’s “at the end of the day, it’s a tool created by humans.”

    Yes, it is - why does this preclude it from being conscious, sentient or intelligent? Particularly since we can’t really explain the behaviour of the entity in algorithmic terms AND we can’t explain our own behaviour either. (Or maybe I misunderstood what you were saying?)

    To avoid misunderstanding - I’m definitely not arguing that Siri (or Alexa) are conscious, sentient or intelligent. I don’t know about LaMDA, but I certainly don’t count the fact that it has been created by humans as something that goes against it possibly being conscious, sentient or intelligent.

      The thing is, the code is so complex and massive that, as that Google engineer admitted, we may not even be able to debug it and understand why the AI thinks in a certain way.

      Humans can’t help but anthropomorphise non-human things; we do it with animals, cars and probably our coffee machines, so it’s no surprise that we do it with AI.

      That said, I’m not sure whether machine learning will ever lead us to AI that is sentient, and I’m not sure this will ever be more than a matter of opinion rather than fact. Humans and AIs are both complex neural networks driven by trillions of electrical impulses. Who can say which is sentient and which is not?

      I work in health IT and automation via ML/AI is already starting to be used in various fields such as radiology. I have no suspicion that these robots are sentient - yet. :-)

        Gagaryn Nicely captures the underlying risk of an unknown that humanity may face. That’s why drawing a line is important.

        On a lighter note, imagine an AI-driven smart lock on our home refusing to let us in in a freezing winter! Or a coffee machine that has got too smart and says it is not in the mood to make a cuppa this morning, or that its steam boiler is too hot at 130 °C and auto-defaults to room temperature! 🤣🤣🤣

          CoyoteOldMan the original Turing test

          "Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans.

          During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent was human and which was a computer.

          The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have artificial intelligence because the questioner regards it as “just as human” as the human respondent."
          For others like me who may not know exactly what it is.
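
          Not part of the quoted definition, but the pass/fail arithmetic it describes is simple enough to sketch in a few lines of Python (purely illustrative, my own toy code - the function name and numbers are made up):

              import random

              def turing_verdict(correct_identifications, total_runs):
                  # Pass criterion from the quote above: the machine counts as
                  # "just as human" if the questioner identifies it correctly
                  # in half of the runs or fewer, i.e. no better than chance.
                  return correct_identifications <= total_runs / 2

              # Toy simulation: against a perfect imitator the questioner can
              # only guess, so each run is effectively a 50/50 coin flip.
              runs = 1000
              correct = sum(random.random() < 0.5 for _ in range(runs))
              print(correct, "/", runs, "correct -> passes:", turing_verdict(correct, runs))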

          Reading Lemoine’s blog is very interesting, but his latest post has me questioning if he has the required detachment to make a rational assessment.

          LMSC there are risks in areas where you don’t want to give it full control or let it make decisions; such automated decisions, if left unsupervised, can wreak havoc

          One could argue this is already happening - not in a Terminator-robots kind of way, but in the quiet ways governments and corporations try to automate and expedite things like sifting through CVs, assessing traffic-fine appeals and police face matching using algorithms. Not actively malicious, but damaging for certain people nonetheless.

            hthec his latest post has me questioning if he has the required detachment to make a rational assessment.

            I doubt he has…

            FWIW, I totally share your and @LMSC ’s concerns. As someone once said: “to err is human, but to really mess things up, one needs a computer.”

            Some of my thoughts on the whole AI subject…

            1. A machine is a machine and even if it does have ‘sentience’ there is no moral ground on which it should not be turned off.

            2. The biggest threat of ‘the singularity’ will come not from the AI itself (because of 1) but due to some middle manager with his own promotional agenda wanting to ‘get it done’ to impress his bosses. I’ve worked for people like this and they’re the bigger threat.

            3. A machine cannot connect itself to a network. AI might advance, but making its own ‘robots’ to connect it is years beyond the AI itself.

              Gagaryn Humans can’t help but anthropomorphise non-human things; we do it with animals, cars and probably our coffee machines, so it’s no surprise that we do it with AI.

              I think it’s a bit more subtle than that - as ‘culturally’ we recognise humans as intelligent, and we build AIs to mimic and amplify and extend those behaviours… AI is largely an “anthropomorphic” object by design, not just an anthropomorphised one.

              LMSC On a lighter note, imagine an AI-driven smart lock on our home refusing to let us in in a freezing winter!

              It’s already here:

              -Mac A machine is a machine and even if it does have ‘sentience’ there is no moral ground on which it should not be turned off.

              There be dragons. A slave is a slave, a Jew is a Jew, a heretic is a heretic, a Negro is a Negro, a beast is a beast, a … is a … - add your own categories there. I don’t think it’s quite as straightforward as you put it, and part of the reason is that there is no agreement, either technically or ethically, as to what life, consciousness, sentience or intelligence actually are. The “line” that @LMSC would like to draw is nowhere to be seen.

                CoyoteOldMan The “line” that @LMSC would like to draw is nowhere to be seen.

                That’s the risk of an unknown, which I hope humanity never faces.

                  LMSC That’s the risk of an unknown, which I hope humanity never faces.

                  You and I both.

                  I found the video on YT - fixed the link!

                  CoyoteOldMan

                  The point is, though, that it wouldn’t matter. Otherwise you might just as well worry about sentient toasters.

                    -Mac As I said - why does it not matter? A slave is a slave - they weren’t considered “human beings” in many cultures, including the ‘Western’ one, until fairly recently. A woman is a woman - they were considered their husband’s property and largely objectified until even more recently, and still are in some places.

                      -Mac And in which way, precisely? They are an assembly of inanimate, non-sentient objects (organs, cells, molecules, atoms, sub-atomic particles), and the definition of “animate” or “alive” is precisely the point of the discussion, isn’t it? Or do you have a precise and commonly accepted definition of “alive”, “conscious”, “sentient” and “intelligent”?

                      The point I’m making is that slaves, women etc. were “non-people” at some point… for the same “obvious” reasons that you currently see a “machine” as different from a “person” in principle. Not obvious at all.

                        CoyoteOldMan Agreed. It will always be a matter of opinion until there is general consensus. That takes time; some people change their views quicker than others - and some never do. There are still places in the world that don’t agree with the consensus that enslaving people is wrong and that women are not their husbands’ chattel. It’s less than 60 years since the USA put laws in place to outlaw racial segregation and give all humans equal rights, and some might argue that the aims of those laws are yet to be fully realised.

                        It will be interesting to see what the human consensus is on the sentience of AI in 60 years from now. Unfortunately I will be dead. Hopefully my robot will feed my cat. :-)

                        I suppose everything hinges on the definition of sentience… what exactly is it?

                        Is a human sentient? What about a chimpanzee, a fish, a beetle, a midge, a tardigrade, a bacterium, a plant? Where does sentience end and just being alive begin?

                        What does a “sentient” AI do when it has no input? Do you need a body to have sentience?

                        From when I was very young (about 15) I puzzled over this and used to lie in bed at night thinking about how a human could transfer consciousness, sentience if you like, into a machine. Computers were primitive but improving; an artificial neurone could surely be built (a toy sketch of one is at the end of this post) and, one day, linked together just like a human brain. We could even have intelligent electronics that could make and break links between those artificial neurones.

                        Then we have the bioelectronic interface, again something I felt was possible.

                        If we get those two bits sorted out, we could implant additional artificial neurones into the brain and allow it to use them as part of the thought process. Could we gradually add more of those circuits into or beside the brain, and could we use that hardware in our daily thoughts, calculations and memory storage? Would we be aware that we were accessing data, memories and thoughts from those circuits as opposed to the wetware?

                        Gradually offloading more and more activity onto the artificial components, would our sense of self and sentience eventually transfer? Is this the route to immortality? Would we still sleep, or dream, or would we lose ourselves?
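
                        For what it’s worth, the “artificial neurone” I was imagining is roughly what today’s software neural networks are built from. A toy sketch in Python (my own, purely illustrative - the names and numbers are made up):

                            import math

                            def artificial_neurone(inputs, weights, bias):
                                # Weighted sum of the inputs plus a bias, squashed through
                                # a sigmoid "activation" - a crude stand-in for a biological
                                # neurone's firing rate.
                                total = sum(x * w for x, w in zip(inputs, weights)) + bias
                                return 1.0 / (1.0 + math.exp(-total))

                            # "Making and breaking links" amounts to changing the weights:
                            # a weight of zero is effectively a broken link.
                            print(artificial_neurone([0.5, 1.0, 0.2], [0.8, -0.4, 0.0], 0.1))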