Pricing

Designed to Scale
With Your Impact.

From individual researchers to enterprises spending $500,000 on model training — there is a plan for every team that wants to stop wasting money and start transplanting.

Tier 01

Research

For academic researchers, independent engineers, and teams evaluating Model Surgery. Full access to the core pipeline.

$0 / month
  • Full 12-stage pipeline access
  • Concept mapping (up to 10 concepts)
  • Cross-model alignment (2 model pairs)
  • Interference detection
  • Post-graft verification
  • JSON blueprint export
Not included: rank-k > 4 transplants, commercial use license, priority support.
Join Waitlist →

No credit card. Apply via contact form.

Tier 03

Enterprise

For large organizations integrating Model Surgery into production AI pipelines. Custom infrastructure, dedicated support, and SLA guarantees.

Custom pricing
  • Everything in Growth
  • Unlimited model pairs
  • Full rank-k surgery (unlimited k)
  • Private cloud deployment
  • Custom integration support
  • Dedicated infrastructure
  • Unlimited seats
  • 99.9% SLA
  • Strategic AI partnership
Talk to Us →

research@model-surgery.com

The Math Is Simple

One training run costs $200,000 on average. One Model Surgery transplant costs $0. Any plan pays for itself the first time you use it.

Common Questions

Everything You Need to Know.

Do I need a GPU to run Model Surgery?

No. Concept mapping runs on CPU in under one second. The transplant operation writes to model weights — it requires loading the model but no forward passes through large compute graphs. A standard developer machine is sufficient.

What models does it work with?

Any HuggingFace transformer model with MLP layers — GPT-2, LLaMA, Mistral, Falcon, Qwen, Phi, and others. We auto-detect the architecture and adapt the pipeline accordingly. Testing has so far focused primarily on GPT-2 and LLaMA-family models.

What does "91.7% alignment" actually mean?

After transplanting a concept, we run an independent post-graft probe: we fast-map the same concept in the target model and compare the recovered geometric direction to the donor's map via cosine similarity. A score of 0.917 means the direction the target model now stores has cosine similarity 0.917 with the donor's — confirmed causal transfer, not noise.
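
The metric itself is plain cosine similarity between unit-normalised concept directions. A minimal sketch (function and variable names are ours for illustration, not the product's API):

```python
import numpy as np

def alignment_score(donor_dir, target_dir):
    """Cosine similarity between the donor's concept direction and the
    direction fast-mapped in the target model after the graft."""
    donor = np.array(donor_dir, dtype=float)
    target = np.array(target_dir, dtype=float)
    donor /= np.linalg.norm(donor)
    target /= np.linalg.norm(target)
    return float(donor @ target)

# A perfectly transferred concept scores 1.0; an unrelated one scores near 0.0.
```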

Can it corrupt the target model?

Our interference detection system scans every layer before any weight is touched. If the incoming concept collides with an existing one (cosine similarity above 0.7), the system issues a CAUTION or ABORT rating and blocks the surgery. No write happens without passing verification.
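
The gating logic reduces to a per-layer similarity scan. A sketch under our own naming, with a single-threshold simplification (the real system distinguishes CAUTION from ABORT):

```python
import numpy as np

COLLISION_THRESHOLD = 0.7  # cosine similarity above which two concepts collide

def interference_rating(incoming_dir, layer_concept_dirs):
    """Scan one layer's known concept directions before any write.
    Returns 'OK' when the incoming concept is safely separable,
    'ABORT' when it collides with an existing concept."""
    v = np.array(incoming_dir, dtype=float)
    v /= np.linalg.norm(v)
    for d in layer_concept_dirs:
        d = np.array(d, dtype=float)
        d /= np.linalg.norm(d)
        if abs(v @ d) > COLLISION_THRESHOLD:
            return "ABORT"  # collision detected: block the surgery
    return "OK"
```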

What is "rank-k" and why does it matter?

Simple concepts (a single word, a fact) live in a single weight-space direction — rank-1. Complex capabilities like a language, a reasoning style, or a domain of expertise require multiple simultaneous directions — rank-k. Higher k = richer, more complete transplant. Research tier supports up to k=4; Growth to k=32.
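
Concretely, a rank-k transplant carries k directions at once: the payload is a k × d_model matrix rather than a single vector. A toy shape illustration (d_model = 768 matches GPT-2 small; the orthonormalisation step is our assumption about how the k directions are kept from overlapping):

```python
import numpy as np

d_model = 768  # hidden size of GPT-2 small, used here for illustration

# Rank-1: one weight-space direction suffices for a simple concept.
rank_1_payload = np.random.randn(1, d_model)

# Rank-k: a complex capability needs k simultaneous directions.
k = 4  # the Research-tier ceiling
raw = np.random.randn(d_model, k)
q, _ = np.linalg.qr(raw)   # orthonormalise so the k directions don't overlap
rank_k_payload = q.T       # shape (k, d_model): k orthogonal rows
```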

Is this technology patented?

Yes — patent applications are filed and pending. The specific combination of gradient-SVD concept addressing, orthogonal Procrustes cross-model alignment, and rank-k conjugation transplant as a unified system is our novel contribution. Provisional filing complete as of 2026.

When will Growth and Enterprise tiers launch?

We are currently in a private research beta focused on validating the core transplant pipeline at scale. Growth pricing will be announced once the French language transplant experiment concludes — expected Q2 2026. Join the waitlist to be notified first.

How do I get access now?

Submit a request through our contact page. We review all applications personally — we are looking for teams with genuine use cases and a willingness to provide feedback during the beta period. Research institutions and funded startups are prioritized.

Ready to Start?

Stop Paying $200,000
for a Problem We Solved.

Join the private beta. Be among the first teams to transplant neural knowledge instead of retraining it.