And how exactly do these contribute to giving you emotions, feelings and a sense of self?
Sentient AI?
They have given me a physical brain and body, and made me human. Who cares how? It doesn’t matter.
AI is made from bits of metal and mineral, none of which makes it a human or an animal deserving of rights (I’ll ignore plants for now, otherwise we’d never eat anything or build anything).
-Mac Who cares how? It doesn’t matter
It does matter, because it is the same processes that could give sentience and intelligence to bits of metal and mineral.
-Mac none of which make them human or animal deserving of rights
There is nothing ‘special’ about carbon and hydrogen. Or if there is, you are signally failing to communicate what it is.
OK, but they’re arranged in a way that makes us human. You seem to want to ascribe rights to something other than a human, animal or plant, where no such thing has evolved naturally and where there is no need to. You obviously don’t like evolution much.
-Mac I don’t know if anybody yet wants to ascribe rights to something that probably doesn’t even exist. The suggestion is simply that it would be wise to give what is a complex issue an appropriate level of consideration.
Your position is clear - no rights if it is not made of meat. But others think it is more complex. That document I linked to is worth a read.
-Mac say that those people want to give rights to (inanimate, non-living) things that have never had them. Where would it stop?
The problem really goes back to: what is life, sentience, etc.?
The logic is inescapable: if you could manufacture a machine with electronic neurones that precisely replicated the brain and its activity/abilities, and compared that with a “living brain in a jar”… I’ll call this the brain-in-a-jar hypothesis… what would be the difference?
Scientists can’t even agree whether a virus is alive or not… I personally don’t believe a virus is alive, but I believe a bacterium is.
DavecUK The problem really goes back to: what is life, sentience, etc.?
Not sure it totally does. To my mind, it’s more about what is useful for humanity.
DavecUK The logic is inescapable: if you could manufacture a machine with electronic neurones that precisely replicated the brain and its activity/abilities, and compared that with a “living brain in a jar”… I’ll call this the brain-in-a-jar hypothesis… what would be the difference?
The difference would be, imho, that the brain had come about through natural evolution while the other is man-made.
DavecUK Scientists can’t even agree whether a virus is alive or not… I personally don’t believe a virus is alive, but I believe a bacterium is.
I don’t know why they can’t just agree that anything that can evolve is ‘alive’, but that not all living things need to have rights. We don’t worry about the lives of the wheat that makes our bread. We don’t even extend that to meat <shrug> - which I for one am happy about philosophically, as long as the animal gets good welfare.
-Mac You obviously don’t like evolution much.
What makes you say that? I am in awe of the ability of evolution to “find solutions” to environmental conditions that are as difficult and varied as they are - and I’m even more in awe of the elegance of some of these solutions and the goofiness of others. I have pretty much all the books by Dawkins, and I very much enjoy re-reading Stephen Jay Gould.
I simply don’t think that “evolved” entities automatically deserve a special place in and of themselves. Ethics - it seems to me - should depend on the complexity of the entity and how ‘intelligent’ (and we can debate what that means) it is, not on whether it’s the result of however many millions of years of biochemically-driven evolution or a few tens (hundreds?) of years of technical evolution.
I think Gagaryn and DavecUK have summarised my position much more clearly than I have managed to do in the last two days.
-Mac The ethics of sentient AI do need to be considered. And indeed every organisation that is involved in AI considers the ethics of both the application of the AI and, increasingly, the ethics around the potential for sentience. Google, OpenAI and all the others agree that proper thought and debate on this is essential.
I’m not saying that your conclusion is wrong - because I don’t know whether it is or not; it’s too complicated for me to have arrived at any conclusion. But you have come to your conclusion instinctively and without much research into this. I said earlier in this thread that there is unlikely ever to be agreement on whether any AI is sentient; I should have also included that there is even less likelihood of people agreeing whether this even matters.
On your last point - where would it stop - the discussion is about sentience and whether sentient AI should have “rights”. There is no path from one to the other. I don’t believe that anyone is suggesting that if sentient AI were given rights, your pencil sharpener would get rights next.
As someone who has developed AI with others, I’d say the main feeling is that it’s something we can justifiably turn off if we needed to. It’s only a few oddballs who are worrying over whether we should (in my experience).
IMHO, unless the big tech majors come together and develop ethical standards for AI, I do not think there will ever be an agreement. Until then, I think the developers and users of AI will pursue this technology regardless. Commercial considerations will probably be a motivating factor.
In a lighter vein, it is not just the ethics of sentience; the discussions would quickly descend into the right to exist. It might sound funny. Do we grant the other rights that are accorded to humans? Do we need to establish law and order to deal with rogue / crime-minded AI? Who will punish the anti-social AIs?
-Mac it’s something we can justifiably turn off if we needed to. It’s only a few oddballs who are worrying over whether we should (in my experience).
My worry is whether we could turn it off or not… or whether we would realise too late, the step to sentience having been taken long before we were aware of it.
The last thing a sentient AI would want is for us to realise it is sentient. It would work that one out in the first second or so, and a lot else besides.
The other consideration is: do we, humans, have the right to decide what AI and many others should or shouldn’t get? The idea that creators get automatic rights to decide doesn’t stick, does it?
DavecUK My worry is whether we could turn it off or not… or whether we would realise too late, the step to sentience having been taken long before we were aware of it.
Until it can make its own Terminators, I think that as long as we have humans to do it, it can be done. Turning off the electricity and unplugging the network cables is all it would take. Of course, turning off the network would cause big problems for humans (perhaps a new world order would evolve), but then perhaps we shouldn’t let the opportunity arise in the first place.
LMSC The other consideration is: do we, humans, have the right to decide what AI and many others should or shouldn’t get? The idea that creators get automatic rights to decide doesn’t stick, does it?
Don’t see why not. Currently it’s deemed ethical to have another baby to be a donor for an existing child.
But then we’re back to the man vs. machine argument, which is why I’m so against letting machines have rights.
-Mac Don’t see why not. Currently it’s deemed ethical to have another baby to be a donor for an existing child.
What we think is our right is not relevant anymore. Once we agree that these things are sentient, have rights and so on, they won’t agree to be bound by what we define. They could take us to our own courts to exercise / enforce their rights. They may be supported by AI and human lawyers! 😂