ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

via arstechnica.com

Short excerpt below. Read at the original source.

There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users. The reason more often than not is that AI is so […]
