I say the following as someone who, as mentioned in my first reply further up, has been dragged kicking and screaming into using AI at work and remains somewhat cynical and sceptical about it.
One issue is that the most publicly accessible AI tools are arguably not showing AI in its best light.
Some of the tools we use at work have made certain processes incredibly efficient, hundreds of times quicker in some cases. But it's most effective with processes whose outputs are clearly correct or incorrect, where the models have been able to learn the difference.
A lot of what is available publicly has presumably been trained on publicly available knowledge, so just as you'd historically have needed to fact-check things you read online, you still have to fact-check AI summaries.
I'll often skim the AI summary even though I didn't ask for it. Sometimes it's useful; other times I'll research further manually.
I think the problem is that publicly available tools aren't necessarily as good an advert for AI as some of the paid, specialist internal tools used in industry.
As long as you see AI as a casual, occasionally clumsy assistant and not gospel, it can be useful.
The problems occur when people assume it's fact, the potential implications of which are rightly among the biggest concerns for both advocates and non-advocates of the technology.
It's also in its infancy compared to other tech. It's easy to forget how far we've come with so many technologies because many are so incredibly advanced, but AI, in the grand scheme of things, is very, very new.