Quickstart

How to train a model from scratch

Step 1: Prepare the data

To get started, download a toy English-German dataset for machine translation containing 10k tokenized sentences:

wget https://s3.amazonaws.com/opennmt-trainingdata/toy-ende.tar.gz
tar xf toy-ende.tar.gz
cd toy-ende

The data consists of parallel source (src) and target (tgt) files with one sentence per line and tokens separated by spaces:

  • src-train.txt
  • tgt-train.txt
  • src-val.txt
  • tgt-val.txt

Validation files are used to evaluate the convergence of the training. They usually contain no more than 5k sentences.

$ head -n 2 toy-ende/src-train.txt
It is not acceptable that , with the help of the national bureaucracies , Parliament 's legislative prerogative should be made null and void by means of implementing provisions whose content , purpose and extent are not laid down in advance .
Federal Master Trainer and Senior Instructor of the Italian Federation of Aerobic Fitness , Group Fitness , Postural Gym , Stretching and Pilates; from 2004 , he has been collaborating with Antiche Terme as personal Trainer and Instructor of Stretching , Pilates and Postural Gym .
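Before building vocabularies, it is worth checking that the source and target files are truly parallel (same number of lines). A minimal sketch in Python; the helper below is illustrative and not part of EOLE:

```python
def check_parallel(src_path, tgt_path):
    """Verify that two corpus files have the same number of lines,
    as required for parallel training data. Returns the line count."""
    with open(src_path, encoding="utf-8") as f_src, \
         open(tgt_path, encoding="utf-8") as f_tgt:
        src_lines = f_src.readlines()
        tgt_lines = f_tgt.readlines()
    assert len(src_lines) == len(tgt_lines), (
        f"{src_path} and {tgt_path} differ in length: "
        f"{len(src_lines)} vs {len(tgt_lines)}"
    )
    return len(src_lines)

# Example usage (after extracting toy-ende.tar.gz):
# n = check_parallel("toy-ende/src-train.txt", "toy-ende/tgt-train.txt")
```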

We need to build a YAML configuration file to specify the data that will be used:

# toy_en_de.yaml

## Where the vocab(s) will be written
src_vocab: toy-ende/run/example.vocab.src
tgt_vocab: toy-ende/run/example.vocab.tgt
# Prevent overwriting existing files in the folder
overwrite: False

# Corpus opts:
data:
    corpus_1:
        path_src: toy-ende/src-train.txt
        path_tgt: toy-ende/tgt-train.txt
    valid:
        path_src: toy-ende/src-val.txt
        path_tgt: toy-ende/tgt-val.txt

From this configuration, we can build the vocab(s) that will be necessary to train the model:

eole build_vocab -config toy_en_de.yaml -n_sample 10000

Notes:

  • -n_sample is advised here – it represents the number of lines sampled from each corpus to build the vocab.
  • This configuration is the simplest possible, without any tokenization or other transforms. See recipes for more complex pipelines.
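The exact layout of the generated vocab files can vary between versions, but they are typically plain text with one token per line, optionally followed by a tab-separated count. A small sketch for inspecting the first entries, under that assumption:

```python
def read_vocab(path, n=10):
    """Read the first n entries of a vocab file, assuming one token
    per line with an optional tab-separated count (an assumption
    about the file format; check your generated files)."""
    entries = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i >= n:
                break
            parts = line.rstrip("\n").split("\t")
            token = parts[0]
            count = int(parts[1]) if len(parts) > 1 else None
            entries.append((token, count))
    return entries

# Example usage:
# print(read_vocab("toy-ende/run/example.vocab.src"))
```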

Step 2: Train the model

To train a model, we need to add the following to the YAML configuration file:

  • the vocabulary path(s) that will be used: these can be the files generated by eole build_vocab;
  • training-specific parameters.

# toy_en_de.yaml

# Model architecture
model:
    architecture: transformer

# Train on a single GPU
training:
    world_size: 1
    gpu_ranks: [0]
    model_path: toy-ende/run/model
    save_checkpoint_steps: 500
    train_steps: 1000
    valid_steps: 500
    # adapt dataloading defaults to very small dataset
    bucket_size: 1000

Then you can simply run:

eole train -config toy_en_de.yaml

This configuration will run a default transformer model. It will run on a single GPU (world_size 1 & gpu_ranks [0]).
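For reference, training on more than one GPU uses the same two options; a sketch for a two-GPU machine (adjust the ranks to your hardware):

```
training:
    world_size: 2
    gpu_ranks: [0, 1]
```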

Before the training process actually starts, it is possible to dump transformed samples for visual inspection. The number of sample lines to dump per corpus is set with the -n_sample flag.

Step 3: Predict / Translate

eole predict -model_path toy-ende/run/model -src toy-ende/src-test.txt -output toy-ende/pred_1000.txt -gpu 0 -verbose

Now you have a model which you can use to predict on new data. We do this by running beam search. This will output predictions into toy-ende/pred_1000.txt.
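To get a rough sense of prediction quality, you can compare the output file against a reference; for proper evaluation use a standard tool such as sacreBLEU. A crude, pure-Python unigram-precision sketch (a sanity metric only, not BLEU):

```python
from collections import Counter

def unigram_precision(hyp_lines, ref_lines):
    """Fraction of hypothesis tokens that also appear in the
    corresponding reference line. A crude sanity metric only."""
    matched = total = 0
    for hyp, ref in zip(hyp_lines, ref_lines):
        hyp_counts = Counter(hyp.split())
        ref_counts = Counter(ref.split())
        for tok, c in hyp_counts.items():
            matched += min(c, ref_counts[tok])  # clipped token matches
            total += c
    return matched / total if total else 0.0

# Example usage:
# hyps = open("toy-ende/pred_1000.txt", encoding="utf-8").readlines()
# refs = open("toy-ende/tgt-test.txt", encoding="utf-8").readlines()
# print(unigram_precision(hyps, refs))
```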

Note:

The predictions are going to be quite terrible, as the demo dataset is small. Try running on some larger datasets!

For example, you can download millions of parallel sentences for translation or summarization.

How to generate with a pretrained LLM

Step 1: Convert a model from Hugging Face Hub

EOLE provides a universal Hugging Face converter that supports most modern LLM architectures. To convert a model:

export EOLE_MODEL_DIR=<where_to_store_models>
export HF_TOKEN=<your_hf_token>

eole convert HF --model_dir "meta-llama/Llama-3.1-8B-Instruct" \
--output $EOLE_MODEL_DIR/llama-3.1-8b-instruct \
--token $HF_TOKEN

See here for all conversion command line options.

Currently supported model families include: Llama, Mistral, Phi, Gemma, Qwen, Whisper (audio), and various vision-language models.

Step 2: Prepare an inference.yaml config file

Even though it is not mandatory, the best way to run inference is to use a config file; here is an example:

# llama3-inference.yaml

# Model info
model_path: "/path_to/llama-3.1-8b-instruct"

# Inference
seed: 42
max_length: 256
gpu: 0
batch_type: sents
batch_size: 1
compute_dtype: fp16
#random_sampling_topk: 40
#random_sampling_topp: 0.75
#random_sampling_temp: 0.1
beam_size: 1
n_best: 1
report_time: true
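The prediction input file contains one example per line, so multi-line prompts need their newlines encoded with a placeholder (the ⦅newline⦆ marker mentioned in the finetuning note below is assumed here; verify against your tokenizer/transforms setup). A tiny helper sketch:

```python
def to_prompt_line(text, placeholder="⦅newline⦆"):
    """Flatten a multi-line prompt to the one-example-per-line
    format read by `eole predict`, assuming newlines are
    represented with the given placeholder."""
    return text.replace("\n", placeholder)

# Example usage:
# with open("input.txt", "w", encoding="utf-8") as f:
#     f.write(to_prompt_line("What is the capital of France?\nAnswer:") + "\n")
```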

For MMLU-style single-token scoring:

# mmlu-inference.yaml

model_path: "/path_to/my-model"

seed: 42
max_length: 1
gpu: 0
batch_type: sents
batch_size: 1
compute_dtype: fp16
beam_size: 1
report_time: true
src: None
tgt: None

You can run the MMLU benchmark with:

eole tools run_mmlu --config mmlu-inference.yaml

Step 3: Generate text

eole predict --config /path_to_config/llama3-inference.yaml \
--src /path_to_source/input.txt \
--output /path_to_target/output.txt

How to finetune a pretrained LLM

See Llama2 recipe for an end-to-end example.

Note:

If you want to enable the "zero-out prompt loss" mechanism to ignore the prompt when calculating the loss, you can add the insert_mask_before_placeholder transform as well as the zero_out_prompt_loss flag:

transforms: [insert_mask_before_placeholder, sentencepiece, filtertoolong]
zero_out_prompt_loss: true

The default value of response_pattern, used to locate the end of the prompt, is "Response : ⦅newline⦆", but you can set another value to align it with your training data.
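Conceptually, zeroing out the prompt loss means giving zero loss weight to every target position up to and including the response marker, so only the response tokens contribute to training. An illustrative sketch of that idea, not EOLE's actual implementation:

```python
def zero_out_prompt_loss_mask(target_tokens, response_marker):
    """Per-token loss weights: 0.0 on prompt tokens (up to and
    including the response marker), 1.0 on the response tokens.
    Illustrative only -- not EOLE's implementation."""
    if response_marker in target_tokens:
        cut = target_tokens.index(response_marker) + 1
    else:
        cut = 0  # no marker found: keep loss on all tokens
    return [0.0] * cut + [1.0] * (len(target_tokens) - cut)

# Example usage:
# mask = zero_out_prompt_loss_mask(["Q", ":", "<resp>", "A", "."], "<resp>")
```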