From individual researchers to enterprises spending $500,000 on model training — there is a plan for every team that wants to stop wasting money and start transplanting.
For academic researchers, independent engineers, and teams evaluating Model Surgery. Full access to the core pipeline.
No credit card. Apply via contact form.
For AI startups and product teams. Unlimited concepts, commercial license, priority support, and advanced rank-k surgery.
Notify me when Growth launches.
For large organizations integrating Model Surgery into production AI pipelines. Custom infrastructure, dedicated support, and SLA guarantees.
research@model-surgery.com
One training run costs $200,000 on average. One Model Surgery transplant costs $0. Any plan pays for itself the first time you use it.
No. Concept mapping runs on CPU in under one second. The transplant operation writes to model weights — it requires loading the model but no forward passes through large compute graphs. A standard developer machine is sufficient.
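Concretely, the write step looks like an ordinary in-place tensor edit. A minimal Python sketch, illustrative only: the layer index is arbitrary and the zero delta stands in for what the real pipeline computes.

```python
# Minimal sketch: a transplant is an in-place weight edit, no forward pass.
# The layer choice and the zero delta are placeholders, not the real pipeline.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # loads fine on CPU

with torch.no_grad():
    weight = model.transformer.h[6].mlp.c_proj.weight  # one MLP weight matrix
    delta = torch.zeros_like(weight)   # the real delta comes from the pipeline
    weight.add_(delta)                 # pure tensor write: no forward pass, no GPU
```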
Any HuggingFace transformer model with MLP layers: GPT-2, LLaMA, Mistral, Falcon, Qwen, Phi, and others. We auto-detect the architecture and adapt the pipeline accordingly. We test primarily on GPT-2 and LLaMA-family models.
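As a rough illustration of what auto-detection involves, here is a minimal sketch that locates MLP weight matrices by parameter name. The marker list and helper function are assumptions for the example, not our production code:

```python
# Minimal sketch: find MLP weight matrices across common HF architectures.
# GPT-2 names them mlp.c_fc / mlp.c_proj; LLaMA and Mistral use
# mlp.gate_proj / mlp.up_proj / mlp.down_proj; Falcon uses
# mlp.dense_h_to_4h / mlp.dense_4h_to_h. The "mlp." marker covers all of these.
from transformers import AutoModelForCausalLM

MLP_MARKERS = ("mlp.",)  # extend for architectures with other naming schemes

def find_mlp_weights(model):
    """Return {parameter_name: tensor} for every 2-D MLP weight matrix."""
    return {
        name: param
        for name, param in model.named_parameters()
        if any(marker in name for marker in MLP_MARKERS) and param.dim() == 2
    }

model = AutoModelForCausalLM.from_pretrained("gpt2")
print(f"found {len(find_mlp_weights(model))} MLP weight matrices")  # 24 for GPT-2
```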
After transplanting a concept, we run an independent post-graft probe: we fast-map the same concept in the target model and compare its geometric direction to the donor's map via cosine similarity. A score of 0.917 means the target model now stores the concept along nearly the same direction as the donor (1.0 would be a perfect match), confirming causal transfer rather than noise.
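The comparison itself is a single cosine similarity between two direction vectors. A minimal sketch, assuming the concept has already been mapped in both models (the fast_map_concept calls in the comments are hypothetical placeholders):

```python
# Minimal sketch of the post-graft probe's comparison step.
import torch
import torch.nn.functional as F

def post_graft_score(donor_dir: torch.Tensor, target_dir: torch.Tensor) -> float:
    """Cosine similarity between donor and target concept directions."""
    return F.cosine_similarity(donor_dir.flatten(), target_dir.flatten(), dim=0).item()

# donor_dir = fast_map_concept(donor_model, "concept")    # hypothetical mapper
# target_dir = fast_map_concept(target_model, "concept")  # hypothetical mapper
donor_dir, target_dir = torch.randn(768), torch.randn(768)  # placeholder vectors
print(f"post-graft similarity: {post_graft_score(donor_dir, target_dir):.3f}")
```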
Our interference detection system scans every layer before any weight is touched. If a concept collision above 0.7 cosine similarity is detected, the system issues a CAUTION or ABORT rating and prevents the surgery. No write happens without verification.
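Conceptually, the scan reduces to a worst-case cosine check per layer. A minimal sketch, assuming each layer's existing concept directions are available as a matrix of row vectors; the 0.7 threshold matches the answer above, while the softer CAUTION band is illustrative:

```python
# Minimal sketch of pre-surgery interference detection.
import torch
import torch.nn.functional as F

ABORT_THRESHOLD = 0.7    # hard collision threshold from the answer above
CAUTION_THRESHOLD = 0.5  # illustrative softer band, an assumption

def rate_interference(new_dir: torch.Tensor,
                      layer_directions: dict[int, torch.Tensor]) -> str:
    """Rate an incoming concept against every layer's stored directions."""
    worst = 0.0
    for directions in layer_directions.values():  # (n_concepts, dim) per layer
        sims = F.cosine_similarity(directions, new_dir.unsqueeze(0), dim=1).abs()
        worst = max(worst, sims.max().item())
    if worst > ABORT_THRESHOLD:
        return "ABORT"    # collision detected: no weights are written
    if worst > CAUTION_THRESHOLD:
        return "CAUTION"
    return "OK"

layers = {i: torch.randn(16, 768) for i in range(12)}  # placeholder stored concepts
print(rate_interference(torch.randn(768), layers))
```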
Simple concepts (a single word, a fact) live in a single weight-space direction: rank-1. Complex capabilities like a language, a reasoning style, or a domain of expertise require multiple simultaneous directions: rank-k. Higher k yields a richer, more complete transplant. The Research tier supports up to k=4; Growth supports up to k=32.
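In linear-algebra terms, rank-k means keeping the top k singular directions of the concept's signal. A minimal torch sketch, with a random matrix standing in for the real gradient data:

```python
# Minimal sketch: rank-1 vs rank-k concept maps via truncated SVD.
import torch

grad_signal = torch.randn(768, 3072)  # stand-in for a concept's gradient matrix
U, S, Vh = torch.linalg.svd(grad_signal, full_matrices=False)

k = 4  # Research tier cap; Growth allows up to k=32
rank_k_map = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]  # keep top-k directions

# k=1 captures a single direction (a word, a fact); larger k retains more
# simultaneous directions (a language, a reasoning style, a domain).
```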
Yes: patent applications have been filed and are pending. The specific combination of gradient-SVD concept addressing, orthogonal Procrustes cross-model alignment, and rank-k conjugation transplant as a unified system is our novel contribution. Our provisional filing is complete as of 2026.
We are currently in a private research beta focused on validating the core transplant pipeline at scale. Growth pricing will be announced once the French-language transplant experiment concludes, expected in Q2 2026. Join the waitlist to be notified first.
Submit a request through our contact page. We review all applications personally — we are looking for teams with genuine use cases and a willingness to provide feedback during the beta period. Research institutions and funded startups are prioritized.
Join the private beta. Be among the first teams to transplant neural knowledge instead of retraining it.