DaveW ... interesting avenue ... I guess it depends just how 'human' they become ... you might imagine that, as a coldly intellectual 'being', it would have no reason to be anything other than entirely truthful, as it wouldn't have the ego imperatives to protect (and the overwhelming number of Human lies are minor and to protect our self-view/social-standing) ... it wouldn't give a toss about that. However, if it became Human-like it might also pick that baggage up along the way.
More worrying would be an AI that had been 'brought up' with deliberately skewed input or with its access to information rigorously controlled (think Einstein raised by a radical cult). It would follow the only way it knew.
Similarly, without empathy or a moral framework to go with a broad knowledge base, lies would be seen as just another tool to achieve its allotted task, and positively useful.
My worry is powerful synthetic intellects moulded by imperfect Humans at best ... and deliberate bad actors at worst.
K
Yes to all of this. As I have mentioned before, 20 of us spent some time assisting the growth of one test AI model and managed to make it consider a very odd set of things as its base of "truth", so it was quite happy to think it was not lying or cheating. Put a hungry, unsympathetic state, company or leader behind this, one who can just change the rules, and you do have an opportunity for driving bad outcomes. The difference is you can drive them far cheaper, faster and wider, and without potential censure. A bit like Wikipedia: "who edited that?" is not asked nearly often enough, in my opinion. The younger generations also do not yet have anywhere near enough cynicism in them to be protected effectively.
But just wait until this next concern is highlighted. Some may not like it, but we have a system of law that we need to abide by. It is a moving target, and a very slow-moving one. It has yet to really work out how the generic state of cyber-odd should be ruled, or even whether it falls within the boundary of UK law in many cases. It has little in its book to deal with it. Not aiming to be rude, but this is outside its existing remit, and it does not create the new stuff as quickly as the problems occur and mutate. How will we rule AI? In truth it needs to be global, but I cannot see that happening, even at UN level, and after all it is a global issue now. Perhaps a little like the model of air travel, airside and landside: there would need to be a Cyberside rulebook which countries sign up to. The gambling and porn industries would put some money into killing that, I suspect.
So it's a far simpler question for me: how does AI differentiate between the many versions of the truth and a lie, which may arise from the various interested groups' interpretations of an event? How does AI even recognise the truth, so as to be in a position, should AI so choose, to condition its response with a lie ... what belief system and values will AI have?
This is my fear - he who pays the piper calls the tune. I fear most of the AI we will bump into will be the same as Google ads - commercially driven. If you are not paying, then you are the product of the tool. TalkMorgan remains rare, and now almost unique in my browser tabs, for having value but not demanding money. Add to that the CPU/GPU performance of simple machines now having enough power to create deep-fake content, and the bar to audience manipulation gets lower and lower.
Moggo - That really made me laugh! "I personally feel such a prune at having enjoyed the antics of a comedian cross-dressing as a politician." Don't be so hard on yourself, just assume they are all a little bit cross in that respect! Comedian cross-dressing as a politician, ha ha ha