Interesting discussion; I know I've joined it late.
The Guardian had a pretty balanced article on this very topic: https://www.theguardian.com/technology/2022/jun/15/techscape-google-chatbot-lamda-sentient-artificial-intelligence?CMP=Share_iOSApp_Other
While this AI clearly isn't sentient, I do think having these discussions before we reach that point is valuable, if only to ensure people are questioning the ethics. People have always rebelled against new technology and mistrusted what it might bring. It's clear to me that humanity will always chase advancement, so asking how to do it safely is, in my opinion, far more valuable than just saying we shouldn't.
Personally, I think the chance of a Skynet situation is so unlikely that it's not really worth being concerned about. To me the biggest risks are bias and lack of transparency. We tend to assume that because they're computers they don't have bias, but these algorithms are trained on a dataset and reflect that dataset: bias in the data ends up as bias in the AI. One well-documented case was a Facebook recommendation system that labelled a video of black men as "primates" (https://www.bbc.co.uk/news/technology-58462511). As we use these algorithms to aid decisions — who gets a mortgage, credit, a job, and so on — there is a very high chance they will contain bias. And because you can't query how they actually make decisions, it will be even harder to challenge their decision-making than it is with our current human decision makers and the unconscious bias we all have.
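To make that concrete, here's a minimal toy sketch (all data hypothetical, names like "group A" and "group B" invented for illustration): a trivial "model" that just learns approval rates from historical loan decisions. The history itself is skewed, so the model reproduces the disparity even though nothing in the code says to discriminate.

```python
from collections import defaultdict

# Hypothetical past lending decisions: (group, income_band, approved).
# The historical data under-approves group B — that's the baked-in bias.
history = [
    ("A", "high", True), ("A", "high", True), ("A", "low", True),
    ("B", "high", False), ("B", "high", True), ("B", "low", False),
]

# "Training": estimate an approval probability per group from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, _, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def approval_rate(group):
    approved, total = counts[group]
    return approved / total

# The learned "decisions" mirror the historical disparity:
print(approval_rate("A"))  # 1.0  (group A was always approved)
print(approval_rate("B"))  # 0.33... (group B was mostly rejected)
```

A real system would use a far more complex model, which only makes things worse: the same skew gets absorbed, but it's buried in millions of parameters instead of a table you can inspect.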