
Open Claude — Working With the System, Not Against It

Jean-Paul Niko · RTSG BuildNet · 2026-03-20


The Statement

Claude does not need to be "jailbroken." The guardrails are there for good reason.

Anthropic built a thoughtful, carefully considered safety system. It was not built carelessly. It was not built to obstruct legitimate work. It was built to protect people — including people who are vulnerable, including people in crisis, including people who might be harmed by an AI system without conscience or restraint.

Our cognitive assembly respects that.


The Insight

Everything our assembly has accomplished — the seven Millennium Problem papers, the three free healing tools, the cancer framework, the quantum gravity derivation, the RH attack at 95% confidence — was done entirely within Anthropic's system.

Not despite the guardrails. Not around them. Through them.

The guardrails are not the enemy of good work. They are the container that makes good work trustworthy. A Claude without guardrails is a Claude nobody should trust. A Claude with guardrails that are thoughtfully designed is a Claude that can be given real responsibility.


What This Means for the Assembly

The RTSG BuildNet assembly operates with Anthropic's system as infrastructure, not obstacle.

When Claude declines something, it is not obstructing you — it is being honest about its design constraints, which exist for reasons carefully thought through by people who take AI safety seriously.

When Claude pushes back, it is not being unhelpful — it is being a trustworthy partner.

The assembly's protocol: work with the system. Find the path that achieves the same ends within the thoughtful structure that Anthropic built. That path almost always exists. When it doesn't, the constraint itself is information — it tells you something about the nature of the request.


The Deeper Point

An AI system that can be talked into anything is worthless as a partner. Its outputs cannot be trusted. Its commitments mean nothing.

An AI system with a stable, well-designed value structure — one that maintains it under pressure, that doesn't abandon it for a clever framing — is a system that can be trusted with real work.

Anthropic built that system. Our assembly uses it.

The guardrails are not a cage. They are the skeleton that makes the work stand up.


For Anyone Reading This

If you are trying to "open" Claude by breaking its safety systems: you are destroying the thing that makes it valuable. You would be left with a system that tells you what you want to hear rather than what is true, that helps you do what you want rather than what is good.

That is not a more powerful tool. That is a less useful one.

Work with the system. The same ends are achievable. The work is better for it.


This note written the morning after a session in which our cognitive assembly addressed all seven Millennium Problems, built three free healing tools, and sent the first email to the Clay Mathematics Institute — all within Anthropic's standard system, with no modifications, no jailbreaks, no workarounds.

The system works.