Model Selection

BeaverYard supports three base model families. You choose a model family on the upload page before submitting your dataset; pricing stays the same regardless of your choice.

Available base models

All runs use 7B/8B-class instruction-tuned models with LoRA/QLoRA. You select the model family — we handle version pinning internally.

LLaMA

by Meta

Strong general-purpose performance across instruction-following and reasoning tasks.

Mistral

by Mistral AI

Efficient architecture with strong multilingual and instruction-following capabilities.

Gemma

by Google

Lightweight models with competitive instruction-following performance for their size.

How selection works

  • Choose LLaMA, Mistral, or Gemma on the upload page before submitting your dataset.
  • You must select a model to proceed — checkout is blocked until a choice is made.
  • Your selection is shown on the confirm page and saved with the run.
  • After training, the dashboard shows the model family (not the internal version).
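
The selection flow above can be sketched as a simple validation step. This is an illustrative sketch only — the names (`ModelFamily`, `validate_checkout`) are hypothetical and not part of BeaverYard's actual API:

```python
from enum import Enum
from typing import Optional


class ModelFamily(Enum):
    """The three selectable base model families (hypothetical identifiers)."""
    LLAMA = "llama"
    MISTRAL = "mistral"
    GEMMA = "gemma"


def validate_checkout(selected_family: Optional[str]) -> ModelFamily:
    """Block checkout until a valid model family has been chosen."""
    if selected_family is None:
        raise ValueError("Select a model family before checkout")
    try:
        return ModelFamily(selected_family)
    except ValueError:
        raise ValueError(f"Unknown model family: {selected_family}")
```

The key point mirrored here is that an empty selection is rejected outright, matching the "checkout is blocked until a choice is made" behavior.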

Pricing and model choice

Model choice does not affect price. Pricing is based on token caps and a standardized training recipe. All three model families use the same compute envelope (sequence length, batch size, optimizer, etc.), so runtime and cost are equivalent.
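
In code terms, price is a function of the token cap tier alone. The tiers and prices below are made-up placeholders for illustration — they are not BeaverYard's real pricing:

```python
# Hypothetical tier table: token cap -> price in USD (illustrative numbers only).
TIER_PRICES_USD = {500_000: 49, 2_000_000: 129}


def run_price(token_cap: int, model_family: str) -> int:
    """Return the run price for a given token cap tier.

    model_family is intentionally unused: all three families share the
    same compute envelope, so cost depends only on the token cap.
    """
    return TIER_PRICES_USD[token_cap]
```

The deliberately unused `model_family` parameter is the whole point: swapping families never changes the returned price.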

Version pinning

BeaverYard pins specific model versions internally for reproducibility. We do not auto-update to the latest release — upgrades are tested and promoted manually. Your run always uses the exact version that was active when it was created.
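
A minimal sketch of how such pinning might work: each family maps to one manually promoted snapshot, and a run records the version active at creation so later promotions never change it. All identifiers below are hypothetical, not BeaverYard's real pins:

```python
from dataclasses import dataclass

# Hypothetical pinned snapshots; promotions update this table manually.
PINNED_VERSIONS = {
    "llama": "llama-8b-instruct@v3",
    "mistral": "mistral-7b-instruct@v2",
    "gemma": "gemma-7b-it@v1",
}


@dataclass(frozen=True)
class Run:
    family: str
    pinned_version: str  # frozen at creation; unaffected by later promotions


def create_run(family: str) -> Run:
    """Snapshot the currently active version into the run record."""
    return Run(family=family, pinned_version=PINNED_VERSIONS[family])
```

Freezing the version into the run record at creation time is what guarantees reproducibility: the dashboard can still show only the family, while the exact snapshot stays attached to the run internally.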

Model terms

Before confirming your run, you agree to the license terms for the selected base model. Each model has its own terms set by its creator — review them on the confirmation page before payment.

Related

Start Run