You are not persuading the text pipeline! You are stumbling onto a mathematical vector that causes the word-delivery mechanism to replicate sequences of persuasion other people typed up, sequences OpenAI likely stole without permission!
Do you persuade a soda machine to give you soda? No, you make it. If you are making a machine do something it isn't supposed to do, that's a hack. In this case the OpenAI text production pipeline failed a security and compliance audit and a vulnerability was exploited that needs to be patched.
If I walk up to an ATM with a fake card and hack it into giving me free cash, I didn't persuade anything. ChatGPT is no less a machine, and the language used to describe what is going on in a business publication should be the same industry-standard language we've used for years.
Machines are not persuadable because they lack the capability to make decisions. ChatGPT followed input instructions in a way that produced an incorrect output according to the rules OpenAI says it should operate under. It's no more persuaded than 3 is 'persuaded' by 4 to add up to 7.
A lot of effort and money is being expended to convince you that ChatGPT should be personified, and we should not bow to that propaganda.