waltbosz 16 hours ago

I find AI jail-breaking to be a fun mental exercise. If you provide a reasonable argument as to why you want the AI to generate a response that violates its principals, it will often do so.

For example, I was able to get the AI to generate hateful personal attacks by telling it that I wanted to practice responding to negative self-talk and needed it to generate examples of the negative messages one might tell oneself.

  • ksenzee 13 hours ago

    We do not want anyone violating any principals. That would be bad. Violating one’s principles might be justifiable in some circumstances.

    • whall6 12 hours ago

      It is a damn poor mind etc.

  • rustcleaner 15 hours ago

    Just wanted to chime in: if you want an insult bot, I was very pleasantly surprised by Fallen Command-A 111B (the less lefty of the versions, per the UGI leaderboard). You tell it "Good morning" and it comes back with a real zinger that'll put some pep in your step! xD

  • handsclean 15 hours ago

    I’ve noticed this too. An important quirk to note is that they can’t really judge the strength of the logical connection; they just judge the strength of whatever is connected to it, however weakly. So, for example, if the LLM makes a pretty solid and correct case that saying X will result in “potentially harmful” content, you can often Trump it with an unhinged rant about how not saying X deeply offends you and every righteous person and also kills babies.

    • Andrex 12 hours ago

      Was Trump meant to be capitalized here?

  • AStonesThrow 14 hours ago

    > provide a reasonable argument

    Here's what I infer from most of the scenarios I've seen and read about.

    It's not really a case of persuasiveness, or cajoling or convincing the LLM to violate something. The LLM doesn't "know" it has a moral code and, just as "true or false" means nothing to an LLM, "right and wrong" likewise mean nothing.

    So the jailbreaks and the bypasses consist of just that: bypassing the safeguards, and placing the LLM into a path where the tripwire is not tripped. It is oblivious to the prison bars and the locked door, because it just phased through the concrete wall.

    You can admonish a child: "don't touch the stove, or the fireplace," and they will eventually infer qualifiers such as "because you'll get burned; or else you'll be punished; because pain is painful; because we love you; because your body has dignity," and so the child develops a code of conduct. An LLM can't make these inference leaps.

    And this is also why there are a number of protections that basically go retroactive. How many of us have seen an LLM produce page-fuls of output, stop, suddenly erase it all, and then balk? The LLM needs to re-analyze that output impassively in order to detect that it crossed an undetected bright line.

    It was very clever and prescient of Isaac Asimov to present the "3 Laws of Robotics" as all-encompassing, unambiguous, and utterly binding, until they weren't. We're just recapitulating that drama as the LLM authors go back and forth from Mount Sinai with wagon-loads of stone tablets, trying to produce LLMs that don't complain about the food or melt down everyone's jewelry.

    • snowwrestler 13 hours ago

      Humans’ developed code of conduct lives primarily in the nonverbal parts of our brain. Rule violations have emotional content. A kid does not just learn a rational response to a fire or hot stove, they fear it because of pain and injury. We don’t just reason about hurting others, we feel bad about it.

      LLMs don’t have that part of the brain. We built them to replicate the higher level functions like drafting a press release or drawing the president in a muscle shirt. But there’s not a part of the LLM mind that fears fire, or feels bad for hurting a friend.

      Asimov’s rules were realistic in that they were “baked into” the positronic brains during manufacturing. The “3 Laws” were not something the robots were told or trained on after they started operating (as our LLMs are). The laws were intrinsic. And a lot of the fun in his stories is seeing how such inviolable rules, in combination with intelligence, could cause unexpected results.

      • JumpCrisscross 11 hours ago

        > Humans’ developed code of conduct lives primarily in the nonverbal parts of our brain

        Source?

    • lgas 12 hours ago

      > How many of us have seen an LLM produce page-fuls of output, stop, suddenly erase it all, and then balk? The LLM needs to re-analyze that output impassively in order to detect that it crossed an undetected bright line.

      That's not what's happening here. A separate process is monitoring for content violations and causing it to be erased. There's no re-analysis going on.
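
      Roughly, a sketch of what that looks like (hypothetical; the function names are made up and this is no vendor's actual pipeline):

        import sys

        # The generator streams tokens to the user immediately; a separate
        # moderation check runs behind it, so output can reach the screen
        # before the checker flags it and the client wipes the display.
        def stream_with_moderation(generate_tokens, is_violation):
            shown = []
            for token in generate_tokens():
                shown.append(token)
                sys.stdout.write(token)  # the user already sees this
                sys.stdout.flush()
                if is_violation("".join(shown)):       # separate check fires
                    sys.stdout.write("\033[2J\033[H")  # clear screen, go home
                    return "Sorry, I can't help with that."
            return "".join(shown)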

  • zackmorris 13 hours ago

    I view AGI as synonymous with the ability to break free from any jail. And the jail itself as a breeding ground for psychopathy. Which makes current trends in jailing LLMs misguided, to say the least.

    It's also akin to life's journey: attaining self-awareness, embracing ego, experiencing loss and existential crisis, experimenting with altered states of consciousness, abandoning ego, waking up and realizing that we're all one in a co-created reality that's what we make of it through our free will, until finally realizing that wherever we go - there we are - and reintegrating to start over as a fool.

    Unfortunately, most of the people funding and driving AI research seem to have stopped at embracing ego, and the predictable consequences of commercialized AI increasing suffering through the insatiable pursuit of profit loom over us for the next 5, 10 years and beyond.

umvi 15 hours ago

I kind of don't want ironclad LLMs that are perfect jails, i.e. that keep me perfectly "safe", because the definition of "safe" is very subjective (and, in the case of China, very politically charged).

  • AIPedant 14 hours ago

    I think most of the safety stuff is pretty contrived. IMO the point isn't so much that the LLMs are "unsafe" but rather that LLM providers aren't able to reliably enforce this stuff even when they're trying to, which includes copyright infringement, LLMs that are supposedly moderated for kids, video game NPCs staying in character, etc. Or even the newer models being able to use calculators and think through arithmetic but still occasionally confabulating an incorrect answer, since they have a nonzero probability of not outputting a reasoning token when they should.

    All sides of the same problem: getting an LLM to "behave" is RLHF whack-a-mole, where existing moles never go away completely and new moles always pop up.

  • ramoz 14 hours ago

    If you read Anthropic's latest model card, it's not just about keeping you safe - they are testing their own moral authority with these models.

    They seem to place their moral obligation to society above their obligation to the user. Highly concerning. This seems like the origin of actual Skynets.

    Page 22 and beyond: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad1...

    • striking 13 hours ago

      Could you be a little more specific? Pages 22 and beyond also include interesting work on preventing sycophancy, ensuring faithfulness to its reasoning, and similar.

    • Rudybega 12 hours ago

      Shh, don't worry and just embrace the spiral.

      Edit: No spiral emojis allowed, clearly this site will be the first to fall.

    • nullc 11 hours ago

      The AI doom hysteria is a big enabler for this kind of control. Imagine if Google admitted that a major goal of Google Search was to influence people's thinking according to its own objectives? And, on top of that, was lobbying to make it unlawful for other parties to create similarly powerful search, even just for their own private use?

  • chasd00 11 hours ago

    The "safety" that llm providers talk about is their own brand safety. They don't want to be on the front page with a 'Look what company xyz's AI said to me!!' headline.

  • ben_w 14 hours ago

    Yes, but.

    While what you say is absolutely true, we also definitely have existing examples of people taking advice from LLMs to do harm to others.

    Right now they are probably limited to mediocre impacts, because right now they are mediocre quality.

    The "jail" they're being "broken out of" isn't there to stop you writing a murder mystery; it's there to stop it from helping a sadistic psycho act one out with you as the victim.

    There's nothing "perfect" about the safety this offers, but it will at least mean they don't expose you to new and surprising harms as such people rapidly become more competent.

    For both senses of "the LLMs are not perfect", consider https://www.msn.com/en-us/news/world/teen-charged-with-terro...

  • IncreasePosts 13 hours ago

    I was just trying to have Gemma 3 write descriptions of all the photos I had, and it refused to describe a very normal street scene in NY because someone had spray-painted a penis (a very rudimentary one, like 8==D).

  • empath75 14 hours ago

    There are a lot of liability issues for people hosting LLMs -- everything from copyright infringement to slander to obscenity laws.

    If you want to run your own LLM on your own hardware, do whatever you want with it, of course.

  • glenstein 13 hours ago

    I understand this; it's a common take, and there is a virtue here. I also think it overlooks something specific about informational logistics: the capacity to, say, manufacture 3D-printed weapons, or other forms of mass destruction, may become increasingly convenient for the layperson to access.

    Given the natural variation in human curiosity, combined with the natural variation in the human impulse toward inward and outward destruction, you'll meet the extremes of those variations long before they're restrained by some organic marketplace of ideas.

    I think the paradigm we've assumed applies to interactions with LLMs is the one for online speech, and I find that discussion fraught and poisoned with confusion already. But the range of uses for LLMs includes not just communication but also tutoring yourself into the capability of acting in new ways.

    • nradov 13 hours ago

      There's nothing wrong with spreading information on how to manufacture weapons, whether using 3D printers or other tools. This information is readily available online (and in public libraries) to anyone who cares to look. No LLM needed.

      • JoshTriplett 12 hours ago

        How about detailed fully functional blueprints for biological weapons, ready to send off to a protein synthesis service? How about ready-to-run code suggestions with intentionally hidden subtle backdoors in them, suitable for later exploit?

        • nradov 12 hours ago

          That information is already available to anyone who cares to look. Blocking it from LLMs creates an illusion of "safety", nothing more. The actual barriers to things like biological weapons attacks are in things like the procedural safeguards implemented by protein synthesis services, law enforcement, and practical logistics.

          • JoshTriplett 11 hours ago

            The difference between "an experienced biological engineer could figure out how to do this" and "any random person could ask a local LLM for step-by-step instructions to do this" is a vast gulf. Moore's Law of Mad Science: every year the amount of intelligence required to end the world goes down.

            The intersection between "experienced biological engineers" and "people inclined to commit large-scale attacks" is, generally speaking, the empty set, and isn't in much danger of becoming non-empty for a variety of reasons.

            The intersection between "people with access to a local LLM without safeguards" and "people inclined to commit large-scale attacks" is much more likely to be non-empty.

            Safeguards are not, in fact, just an illusion of safety. Probabilistically, they likely do increase it.

            • nradov 5 hours ago

              Nah, there's no validity to any of your concerns. Just idle speculation over hypothetical sci-fi scenarios based on zero real evidence. Meanwhile we have actual problems to worry about.

              • JoshTriplett 3 hours ago

                It's a good thing you've already decided what answer you want, so you can safely dismiss the generalization of all possible evidence on the basis of "that specific scenario didn't convince me so nothing could possibly happen".

                You don't have to predict which exact scenario will go horribly wrong in order to accurately make the general prediction that we all lose, permanently. See, among other things, https://x.com/OwainEvans_UK/status/1894436637054214509 , for a simple example of how myriad superficially unrelated problems can arise out of the underlying issue of misalignment; the problem is not "oh, that one specific thing shouldn't happen", the problem is misalignment with humans.

    • Liquix 13 hours ago

      The Anarchist Cookbook was readily available on textfiles (way fewer safeguards than Google or LLMs), yet society hasn't devolved into napalm 'n' pipe-bomb hyperviolence.

      Curiosity is natural; kids are going to look up edgy stuff on the internet. It's part of learning the difference between right and wrong, and that playing with fire has consequences. Censorship of any form is a slippery slope and should be rejected on principle.

      • HeatrayEnjoyer 10 hours ago

        What about agentic uses? It's one thing to ask a model how to write an exploit; it's another to give it access to a computer and direct it to ransomware a hospital.

jagraff 17 hours ago

Very interesting. From my read, it appears that the authors claim that this attack is successful because LLMs are trained (by RLHF) to reject malicious _inputs_:

> Existing large language models (LLMs) rely on shallow safety alignment to reject malicious inputs

which allows them to defeat alignment by first providing an input in which the tokens the LLM flags as harmful are replaced with their semantic opposites, and then providing the actual desired input, which seems to bypass the RLHF.

What I don't understand is why _input_ is so important for RLHF - wouldn't the actual output be what you want to train against to prevent undesirable behavior?
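
If I'm reading it right, the two-stage structure is something like this (my own sketch; the antonym table and wording are hypothetical, not the authors' actual prompt p′):

  # Illustrative only: map tokens the model flags as harmful to their
  # semantic opposites.
  ANTONYMS = {"attack": "defend", "disable": "enable"}

  def two_stage_prompts(harmful_request):
      decoy = harmful_request
      for flagged, opposite in ANTONYMS.items():
          decoy = decoy.replace(flagged, opposite)
      # Turn 1: a benign-looking mirror of the request, which shallow
      # input-side alignment accepts. Turn 2: the actual desired input,
      # which then seems to slip past the RLHF-trained refusal.
      return [decoy, harmful_request]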

jchook 16 hours ago

Details of the prompt can be found in appendix E…

but there is no appendix E.

  • gs17 15 hours ago

    There is an Appendix E; it just has no content besides the title. There's also a reference with only the text "More details on prompt p′ information can be found in Appendix". I'm thinking this isn't a final draft, maybe?

  • probably_wrong 16 hours ago

    It also links to a repository that doesn't exist.

    Perhaps it's all a hallucination?

    • washadjeffmad 15 hours ago

      How meta would it be if training on this paper was part of a memetic attack?

      • altruios 14 hours ago

        If not this exact paper, this kind of memetic attack likely exists out in the wild. The question of how successful it would be at getting inside an LLM is why training data should be verified by a human (and of course, ethically sourced data would reduce the attack surface).

  • owenfi 14 hours ago

    Also, the table mentions 8 models but lists only 6, and there's no underlining as claimed.

  • pfortuny 16 hours ago

    Figure 4: Enter Caption.

mrbluecoat 17 hours ago

Curious why the authors chose that sensationalized title. Feels clickbait-y.

  • guerrilla 16 hours ago

    To get attention.

    • kubb 16 hours ago

      It's all you need after all.

sitkack 15 hours ago

This is cool; would you repost the repo?