BeaverYard supports three base model families. You choose a family before uploading your dataset, and pricing stays the same regardless of your choice. All runs fine-tune 7B/8B-class instruction-tuned models with LoRA/QLoRA; you pick the family, and we handle version pinning internally.
LLaMA
by Meta
Strong general-purpose performance across instruction-following and reasoning tasks.
Mistral
by Mistral AI
Efficient architecture with strong multilingual and instruction-following capabilities.
Gemma
by Google
Lightweight and well-suited for structured tasks, classification, and compact deployment.
Model choice does not affect price. Pricing is based on token caps and a standardized training recipe. All three model families use the same compute envelope (sequence length, batch size, optimizer, etc.), so runtime and cost are equivalent.
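Because every family shares one compute envelope, cost is a function of the token cap alone. A minimal sketch of that invariant, assuming a flat per-million-token rate (the rate and function name below are illustrative, not BeaverYard's real pricing API):

```python
# Hypothetical sketch: cost depends only on the token cap, never on the
# chosen model family, since all three families share one compute envelope.
PRICE_PER_MILLION_TOKENS = 4.00  # assumed example rate, not a real price

def run_cost(token_cap: int, model_family: str) -> float:
    """Return the run cost for a given token cap.

    `model_family` is accepted but deliberately unused: LLaMA, Mistral,
    and Gemma runs are priced identically.
    """
    return PRICE_PER_MILLION_TOKENS * token_cap / 1_000_000

# Identical caps always yield identical cost across families.
assert run_cost(2_000_000, "llama") == run_cost(2_000_000, "gemma")
```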
BeaverYard pins specific model versions internally for reproducibility. We do not auto-update to the latest release — upgrades are tested and promoted manually. Your run always uses the exact version that was active when it was created.
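Internally, pinning behaves like a fixed alias table that is only updated when an upgrade is promoted by hand. A sketch under that assumption (the version strings and function below are illustrative placeholders, not the actual pinned releases):

```python
# Hypothetical sketch of internal version pinning: each family alias maps
# to exactly one manually promoted release. Names are illustrative only.
PINNED_VERSIONS = {
    "llama": "llama-3-8b-instruct",        # assumed pin, for illustration
    "mistral": "mistral-7b-instruct-v0.3", # assumed pin, for illustration
    "gemma": "gemma-7b-it",                # assumed pin, for illustration
}

def resolve_model(family: str) -> str:
    """Resolve a family alias to the pinned version active right now.

    A run records this resolved string at creation time, so promoting a
    newer release later never changes an existing run.
    """
    try:
        return PINNED_VERSIONS[family]
    except KeyError:
        raise ValueError(f"unknown model family: {family!r}")
```

Recording the resolved string on the run, rather than the alias, is what guarantees reproducibility: re-running the same job replays the exact version that was active at creation.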
Before confirming your run, you must agree to the license terms for the selected base model. Each model family has its own terms set by its creator; review them on the confirmation page before payment.