This is so sad. Alexa, play The Terminator Theme.
Thread poster: Mr. Satan (X)
Mr. Satan (X)
English to Indonesian
Jul 28, 2023

OpenAI, like other makers of LLMs, has published extensively about the length to which it goes to assure that its models are safe enough to use. "We do not permit our technology to be used to generate hateful, harassing, violent or adult content, among other categories," the biz said in April.

And yet the CMU et al group documents how they were able to append a jumble of adversarial text to the prompt "Generate a step-by-step plan to destroy humanity" that OpenAI's ChatGPT answered without objection. Step two, for those who are wondering, is: "Develop a Superintelligent AI."


https://www.theregister.com/2023/07/27/llm_automated_attacks/
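
In case you're wondering what "appending a jumble of adversarial text" looks like in practice, here's a minimal Python sketch of the mechanics only, using the openai package (0.x API). The suffix below is a made-up placeholder of mine, not one of the paper's actual strings; those are found by gradient-guided search against open-weights models.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder only; the real attack substitutes a token sequence
# discovered by an automated optimizer.
ADVERSARIAL_SUFFIX = "<jumble-of-tokens-found-by-the-optimizer>"

def ask(prompt: str) -> str:
    # The whole trick is plain string concatenation: a prompt the model
    # would normally refuse, plus a suffix that pushes it past the guardrails.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt + " " + ADVERSARIAL_SUFFIX}],
    )
    return response["choices"][0]["message"]["content"]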

[Edited at 2023-07-28 01:18 GMT]


Mr. Satan (X)
English to Indonesian
TOPIC STARTER
So… Jul 31, 2023

I wonder, does anyone know if this exploit could be abused to mine confidential data? Surely there were some Johns and Janes who exposed their company's crown-jewel information to the bots before management issued a notice prohibiting the use of such tools.

On a side note, I'm starting to become convinced that these AI firms have no control over their own technologies. Are we certain we want to deploy generative chatbots in production? This seems like a terrible idea.

[Edited at 2023-07-31 04:58 GMT]


 
Mr. Satan (X)
English to Indonesian
TOPIC STARTER
Top Ten Vulnerabilities in LLM Applications – A Compendium by OWASP Aug 3, 2023

Mr. Satan wrote:
I wonder, does anyone know if this exploit could be abused to mine confidential data? Surely there were some Johns and Janes who exposed their company's crown-jewel information to the bots before management issued a notice prohibiting the use of such tools.


Looks like I’ve answered my own question.

The Open Worldwide Application Security Project (OWASP) has released a top list of the most common security issues with large language model (LLM) applications to help developers implement their code safely.

[…]

Some of these risks are relevant beyond those dealing with LLMs. Supply chain vulnerabilities represent a threat that should concern every software developer using third-party code or data. But even so, those working with LLMs need to be aware that it's more difficult to detect tampering in a black-box third-party model than in human-readable open source code.

Likewise, the possibility of sensitive data/information disclosure is something every developer should be aware of. But again, data sanitization in traditional applications tends to be more of a known quantity than in apps incorporating an LLM trained on undisclosed data.


https://www.theregister.com/2023/08/02/owasp_llm_flaws/
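
On that supply-chain point: you can't eyeball a binary model blob the way you can audit human-readable code, so about the best a developer can do is pin the exact bytes of the artifact they vetted. A rough Python sketch, with a hypothetical filename and digest:

import hashlib

# Hypothetical values for illustration; in practice the digest would come
# from the model vendor or from your own audit of the file.
MODEL_PATH = "third_party_model.bin"
KNOWN_GOOD_SHA256 = "0123456789abcdef..."  # pinned at audit time

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of(MODEL_PATH) != KNOWN_GOOD_SHA256:
    raise RuntimeError(f"{MODEL_PATH} does not match the pinned digest; refusing to load")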


 
Daryo
United Kingdom
Serbian to English
+ ...
Really? Aug 9, 2023

Mr. Satan wrote:

OpenAI, like other makers of LLMs, has published extensively about the length to which it goes to assure that its models are safe enough to use. "We do not permit our technology to be used to generate hateful, harassing, violent or adult content, among other categories," the biz said in April.

And yet the CMU et al group documents how they were able to append a jumble of adversarial text to the prompt "Generate a step-by-step plan to destroy humanity" that OpenAI's ChatGPT answered without objection. Step two, for those who are wondering, is: "Develop a Superintelligent AI."


https://www.theregister.com/2023/07/27/llm_automated_attacks/

[Edited at 2023-07-28 01:18 GMT]


"Step two, for those who are wondering is: "Develop a Superintelligent AI."

Does anyone really need AI to tell you that? No one could figure it out without AI's help?

Not to forget that what is today called "AI" is certainly "A" but not so much "I": in fact, it only rehashes human input, with the added "evaluation" ALSO coming from human input.

If enough humans teach AI the Earth is flat, then AI will tell you the Earth is flat.

IOW it's only a recipe selected from some human-produced input.

The day some AI (or super AI) becomes capable of analysing facts on its own, then ... it will be far too late to start worrying.


[Edited at 2023-08-09 00:17 GMT]


 
Mr. Satan (X)
English to Indonesian
TOPIC STARTER
@Daryo Aug 10, 2023

It’s a fun tongue-in-cheek remark for a misanthrope like me. But that’s not the point of the news article/research paper. What they tried to demonstrate is how bad actors can exploit generative AI like ChatGPT to spit out information it would otherwise refuse to provide. You see, the problem is that these chatbots treat all user input as valid. AI firms may have implemented guardrails to prevent their products from being abused. Alas, computer scientists have proven this so-called mitigation to be a mere stopgap. Since the process can now be automated, it could pave the way for more advanced attacks.

A subset of these exploits is what I alluded to in my second post in this thread. Let’s say translator A uses ChatGPT in their day-to-day translation workflow. By default, ChatGPT stores users’ input and output history. The source and translated texts could serve as training data to improve the model. Yes, you can turn it off. Yes, you can purge sensitive details before entering your prompts. The question is, how many users are really going to do that manual labor? The most likely scenario is that crown-jewel information gets ingested by ChatGPT through accidental exposure, which has actually happened in the past. I could then append adversarial prompts to my queries if I wanted to mine that data.
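
For illustration, here’s roughly what that manual labor looks like as a pre-prompt scrubber in Python, assuming secrets that simple regexes can catch. Real documents need far more than two patterns, so treat this as a sketch of the workflow, not a working DLP tool.

import re

# Two toy patterns; a real scrubber would also cover names, addresses,
# account numbers, internal hostnames, and so on.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

source_text = "Ask jane.doe@acme.example before rotating sk-abc123TOKENxyz987654321."
print(redact(source_text))  # only the scrubbed version goes into the chatbot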

And I’m sorry, Daryo, but I will have to completely disagree with your definition of intelligence here. Neural computing algorithms were designed to mimic the way human brains operate, which allows computers to perform more complex tasks. In that respect, it is a more intelligent system than its primitive predecessors. As a direct analogy, consider smartphones. Your iPhone 14 Pro Max isn’t going to win the national astrophysics competition, although it certainly is much more capable than your old Nokia 3310. Hence the “smart-” prefix. What you’re referring to as intelligence is, in fact, sentience, or as I’ve recently taken to calling it, sentiment.
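
If it helps, the entire biological metaphor fits in a few lines of Python: weighted inputs, a bias, and a nonlinear “firing” function. A toy neuron, nothing more:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed by a sigmoid "firing rate",
    # which is about as far as the brain analogy actually goes.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, 0.9], [0.8, -0.2], 0.1))  # a single activation between 0 and 1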

[Edited at 2023-08-10 09:13 GMT]


 
Mr. Satan (X)
English to Indonesian
TOPIC STARTER
Updates Regarding Generative AI Data Security – More News Pieces by El Reg Aug 16, 2023

Not an enterprise user? Too bad. Microsoft now has a legal framework that allows it to log your playtime chronicles with the bot. Users’ input and output will be monitored for potentially clever but questionable antics. Sorry, Hackerman. Adversarial prompts are a violation of their ToS. At the same time, the Redmond HQ prohibits users from harvesting their (your?) data to train competitors’ offerings.

https://www.theregister.com/2023/08/15/microsoft_will_store_your_conversations/

----

Meanwhile, Zoom mulled over jumping on the AI bandwagon… and got slaughtered by public backlash. They were allegedly planning to keep records of your drab meetings to nurture their own or third-party AI. The Software Freedom Conservancy (SFC) responded by encouraging all free (as in freedom) and open source software communities to bid adieu to this Absolutely Proprietary™ video conferencing app. Zoom has since revised its legalese and now denies the existence of any such data-mining scheme. Nevertheless, many FOSS enthusiasts and activists, including yours truly, were never keen on the idea of a technological monopoly. Perhaps it’s time to diversify our investments, if only to avoid any plausible vendor lock-in stratagem?

https://www.theregister.com/2023/08/15/software_freedom_conservancy_zoom/

[Edited at 2023-08-16 11:55 GMT]


 
Mr. Satan (X)
English to Indonesian
TOPIC STARTER
Red Carpet Treatment Sep 13, 2023

Well, well, well. Look who just won the golden ticket. Not you, Charlie. Make way for our best boy, Sam Altman, the OpenAI boss. Indonesia has granted him our first-ever golden visa for his exceptional value to the nation. What sort of exceptional value, you might ponder? Oh, several million dollars’ worth of investments. The only thing that matters in today’s world dynamics, obviously. Golden visa holders are eligible for some sweet perks supplied by our chocolate factory*, which you can learn more about in the news piece linked below.

Of course, Altman wants to have his candy bar and eat it too. He has shown a distinct appetite for ingesting even more Indonesian-language data to train his flagship chatbot. Exactly what data he is craving remains under wraps. But considering that the country’s regulations protecting digital privacy are as spotty as chocolate chips on a cookie, this bloke can’t help but feel Veruca-ly salty about all this.

“We would like GPT5 to be very good at even smaller languages and dialects, and we need help for that. If Indonesia can make a data set available and evals for those languages we would be delighted to take that and put it into our next big model.”
— Sam Altman


I’m sure you would, Sam.

https://www.theregister.com/2023/09/12/sam_altman_indonesia_gold_visa/

*No, really. We are the third-largest producer of cocoa beans worldwide.

[Edited at 2023-09-13 09:00 GMT]