Prediction

Predictions​

class eole.predict.prediction.Prediction(src, srclen, pred_sents, attn, pred_scores, estim, tgt_sent, gold_score, word_aligns, ind_in_bucket)[source]​

Bases: object

Container for a predicted sentence.

  • Variables:
    • src (LongTensor) – Source word IDs.
    • srclen (List[int]) – Source lengths.
    • pred_sents (List[List[str]]) – Words from the n-best predictions.
    • pred_scores (List[List[float]]) – Log-probs of the n-best predictions.
    • attns (List[FloatTensor]) – Attention distribution for each prediction.
    • gold_sent (List[str]) – Words from the gold prediction.
    • gold_score (List[float]) – Log-prob of the gold prediction.
    • word_aligns (List[FloatTensor]) – Word alignment distribution for each prediction.

log(sent_number, src_raw='')[source]​

Log prediction.

class eole.predict.prediction.PredictionBuilder(vocabs, n_best=1, replace_unk=False, phrase_table='', tgt_eos_idx=None)[source]​

Bases: object

Build a word-based prediction from the batch output of predictor and the underlying dictionaries.

Replacement based on "Addressing the Rare Word Problem in Neural Machine Translation" []

  • Parameters:
    • vocabs (dict[str, Vocab]) – A dict mapping each side's Vocab.
    • n_best (int) – Number of predictions produced.
    • replace_unk (bool) – Replace unknown words using attention.
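The replacement strategy from the cited paper copies the most-attended source word over each unknown token in the prediction. A minimal pure-Python sketch of that idea (the names `replace_unk_tokens`, `pred_tokens`, `src_tokens`, and `attn` are illustrative stand-ins, not part of the eole API):

```python
def replace_unk_tokens(pred_tokens, src_tokens, attn):
    """Copy the most-attended source word over each <unk> prediction.

    All names here are illustrative (not the eole API): attn[i][j] is the
    attention weight of target step i over source position j.
    """
    out = []
    for i, tok in enumerate(pred_tokens):
        if tok == "<unk>":
            weights = attn[i]
            # Index of the source position with maximal attention.
            j = max(range(len(weights)), key=lambda k: weights[k])
            out.append(src_tokens[j])
        else:
            out.append(tok)
    return out
```

A phrase table, when provided, would be consulted before falling back to the raw source word.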

Predictor Classes​

class eole.predict.inference.Inference(model, vocabs, gpu=-1, n_best=1, min_length=0, max_length=100, max_length_ratio=1.5, ratio=0.0, beam_size=30, top_k=0, top_p=0.0, temperature=1.0, stepwise_penalty=None, dump_beam=False, block_ngram_repeat=0, ignore_when_blocking=frozenset({}), replace_unk=False, ban_unk_token=False, tgt_file_prefix=False, phrase_table='', data_type='text', verbose=False, report_time=False, global_scorer=None, out_file=None, report_align=False, gold_align=False, report_score=True, logger=None, seed=-1, with_score=False, return_gold_log_probs=False, add_estimator=False, optional_eos=[])[source]​

Bases: object

Predict a batch of sentences with a saved model.

  • Parameters:
    • model (eole.modules.BaseModel) – Model to use for prediction
    • vocabs (dict[str, Vocab]) – A dict mapping each side's Vocab.
    • gpu (int) – GPU device. Set to negative for no GPU.
    • n_best (int) – How many beams to wait for.
    • min_length (int) – See eole.predict.decode_strategy.DecodeStrategy.
    • max_length (int) – See eole.predict.decode_strategy.DecodeStrategy.
    • beam_size (int) – Number of beams.
    • top_p (float) – See eole.predict.greedy_search.GreedySearch.
    • top_k (int) – See eole.predict.greedy_search.GreedySearch.
    • temperature (float) – See eole.predict.greedy_search.GreedySearch.
    • stepwise_penalty (bool) – Whether coverage penalty is applied every step or not.
    • dump_beam (bool) – Debugging option.
    • block_ngram_repeat (int) – See eole.predict.decode_strategy.DecodeStrategy.
    • ignore_when_blocking (set or frozenset) – See eole.predict.decode_strategy.DecodeStrategy.
    • replace_unk (bool) – Replace unknown token.
    • tgt_file_prefix (bool) – Force the predictions to begin with the provided -tgt.
    • data_type (str) – Source data type.
    • verbose (bool) – Print/log every prediction.
    • report_time (bool) – Print/log total time/frequency.
    • global_scorer (eole.predict.GNMTGlobalScorer) – Prediction scoring/reranking object.
    • out_file (TextIO or codecs.StreamReaderWriter) – Output file.
    • report_score (bool) – Whether to report scores
    • logger (logging.Logger or NoneType) – Logger.

classmethod from_config(model, vocabs, config, model_config, device_id=0, global_scorer=None, out_file=None, report_align=False, report_score=True, logger=None)[source]​

Alternate constructor.

  • Parameters:
    • model (eole.modules.BaseModel) – See __init__().
    • vocabs (dict *[*str , Vocab ]) – See __init__().
    • config – Command line options.
    • model_config – Options saved with the model checkpoint.
    • global_scorer (eole.predict.GNMTGlobalScorer) – See __init__().
    • out_file (TextIO or codecs.StreamReaderWriter) – See __init__().
    • report_align (bool) – See __init__().
    • report_score (bool) – See __init__().
    • logger (logging.Logger or NoneType) – See __init__().

predict_batch(batch, attn_debug)[source]​

Predict a batch of sentences.

class eole.predict.Translator(model, vocabs, gpu=-1, n_best=1, min_length=0, max_length=100, max_length_ratio=1.5, ratio=0.0, beam_size=30, top_k=0, top_p=0.0, temperature=1.0, stepwise_penalty=None, dump_beam=False, block_ngram_repeat=0, ignore_when_blocking=frozenset({}), replace_unk=False, ban_unk_token=False, tgt_file_prefix=False, phrase_table='', data_type='text', verbose=False, report_time=False, global_scorer=None, out_file=None, report_align=False, gold_align=False, report_score=True, logger=None, seed=-1, with_score=False, return_gold_log_probs=False, add_estimator=False, optional_eos=[])[source]​

Bases: Inference

predict_batch(batch, attn_debug)[source]​

Translate a batch of sentences.

class eole.predict.GeneratorLM(model, vocabs, gpu=-1, n_best=1, min_length=0, max_length=100, max_length_ratio=1.5, ratio=0.0, beam_size=30, top_k=0, top_p=0.0, temperature=1.0, stepwise_penalty=None, dump_beam=False, block_ngram_repeat=0, ignore_when_blocking=frozenset({}), replace_unk=False, ban_unk_token=False, tgt_file_prefix=False, phrase_table='', data_type='text', verbose=False, report_time=False, global_scorer=None, out_file=None, report_align=False, gold_align=False, report_score=True, logger=None, seed=-1, with_score=False, return_gold_log_probs=False, add_estimator=False, optional_eos=[])[source]​

Bases: Inference

predict_batch(batch, attn_debug, scoring=False)[source]​

Predict a batch of sentences.

class eole.predict.Encoder(model, vocabs, gpu=-1, n_best=1, min_length=0, max_length=100, max_length_ratio=1.5, ratio=0.0, beam_size=30, top_k=0, top_p=0.0, temperature=1.0, stepwise_penalty=None, dump_beam=False, block_ngram_repeat=0, ignore_when_blocking=frozenset({}), replace_unk=False, ban_unk_token=False, tgt_file_prefix=False, phrase_table='', data_type='text', verbose=False, report_time=False, global_scorer=None, out_file=None, report_align=False, gold_align=False, report_score=True, logger=None, seed=-1, with_score=False, return_gold_log_probs=False, add_estimator=False, optional_eos=[])[source]​

Bases: Inference

predict_batch(batch, attn_debug)[source]​

Predict a batch of sentences.

Decoding Strategies​

eole.predict.greedy_search.sample_with_temperature(logits, temperature, top_k, top_p)[source]​

Select next tokens randomly from the top k possible next tokens.

Samples from a categorical distribution over the top_k words using the category probabilities logits / temperature.

  • Parameters:
    • logits (FloatTensor) – Shaped (batch_size, vocab_size). These can be logits (in (-inf, inf)) or log-probs (in (-inf, 0]). (The distribution actually uses logits - logits.logsumexp(-1), which equals the input if it already consists of log-probabilities whose exponentials sum to 1.)
    • temperature (float) – The logits are divided by this value. The higher the temperature, the more likely a non-max word is sampled.
    • top_k (int) – Only this many words can be chosen; the other logits are assigned probability 0.
    • top_p (float) – Keep the most likely words until their cumulative probability exceeds p. If used together with top_k, both conditions are applied.
  • Returns:
    • topk_ids: Shaped (batch_size, 1). These are the sampled word indices in the output vocab.
    • topk_scores: Shaped (batch_size, 1). These are essentially (logits / temperature)[topk_ids].
  • Return type: (LongTensor, FloatTensor)
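The filtering described above can be sketched in pure Python for a single example (this stand-in operates on a plain list of floats and is illustrative only; the real function works on batched PyTorch tensors):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, top_k=0, top_p=0.0):
    """Illustrative single-example stand-in for the batched tensor version.

    logits: plain list of floats over the vocabulary (hypothetical input;
    the real function takes a (batch_size, vocab_size) FloatTensor).
    Returns (sampled_index, scaled_logit), mirroring topk_ids/topk_scores.
    """
    scaled = [l / temperature for l in logits]
    # Softmax over the scaled logits (numerically stabilized).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    keep = set(order)
    if top_k > 0:
        # Keep only the k most probable tokens.
        keep &= set(order[:top_k])
    if top_p > 0.0:
        # Nucleus filtering: keep the most likely tokens until their
        # cumulative probability exceeds p; both filters apply when
        # top_k is also set.
        cum, nucleus = 0.0, []
        for i in order:
            nucleus.append(i)
            cum += probs[i]
            if cum > top_p:
                break
        keep &= set(nucleus)
    # Renormalize over the surviving tokens and sample one.
    filtered = [(i, probs[i]) for i in order if i in keep]
    r = random.random() * sum(p for _, p in filtered)
    for i, p in filtered:
        r -= p
        if r <= 0:
            return i, scaled[i]
    return filtered[-1][0], scaled[filtered[-1][0]]
```

With top_k=1 the sampling degenerates to greedy argmax decoding.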

Scoring​

class eole.predict.penalties.PenaltyBuilder(cov_pen, length_pen)[source]​

Bases: object

Returns the Length and Coverage Penalty function for Beam Search.

  • Parameters:
    • length_pen (str) – Option name of the length penalty.
    • cov_pen (str) – Option name of the coverage penalty.
  • Variables:
    • has_cov_pen (bool) – Whether a coverage penalty is set (if None, applying it is a no-op). Note that the converse isn't true: setting beta to 0 should also make the coverage penalty a no-op.
    • has_len_pen (bool) – Whether a length penalty is set (if None, applying it is a no-op). Note that the converse isn't true: setting alpha to 1 should also make the length penalty a no-op.
    • coverage_penalty (Callable[[FloatTensor, float], FloatTensor]) – Calculates the coverage penalty.
    • length_penalty (Callable[[int, float], float]) – Calculates the length penalty.

coverage_none(cov, beta=0.0)[source]​

Returns zero as penalty

coverage_summary(cov, beta=0.0)[source]​

Our summary penalty.

coverage_wu(cov, beta=0.0)[source]​

GNMT coverage re-ranking score.

See "Google's Neural Machine Translation System" []. cov is expected to be sized (*, seq_len), where * is probably batch_size x beam_size but could be several dimensions like (batch_size, beam_size). If cov is attention, then the seq_len axis probably sums to (almost) 1.

length_average(cur_len, alpha=1.0)[source]​

Returns the current sequence length.

length_none(cur_len, alpha=0.0)[source]​

Returns unmodified scores.

length_wu(cur_len, alpha=0.0)[source]​

GNMT length re-ranking score.

See "Google's Neural Machine Translation System" [].
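Both GNMT penalties follow closed-form expressions from the paper and can be sketched in plain Python (illustrative stand-ins, not the eole implementations; the coverage value here is returned as a positive quantity assumed to be subtracted from the hypothesis score):

```python
import math

def length_wu(cur_len, alpha=0.0):
    # GNMT length penalty: lp(Y) = ((5 + |Y|) / 6) ** alpha.
    # With alpha = 0 (or cur_len = 1) this is 1.0, i.e. a no-op divisor.
    return ((5.0 + cur_len) / 6.0) ** alpha

def coverage_wu(attn_sums, beta=0.0):
    # GNMT coverage penalty over total attention per source position:
    # -beta * sum_j log(min(attn_sums[j], 1.0)).
    # A fully covered source (every sum >= 1) incurs zero penalty.
    return -beta * sum(math.log(min(a, 1.0)) for a in attn_sums)
```

A hypothesis score would then be reranked as roughly log_prob / length_wu(len, alpha) minus the coverage term, which is what GNMTGlobalScorer below coordinates.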

class eole.predict.GNMTGlobalScorer(alpha, beta, length_penalty, coverage_penalty)[source]​

Bases: object

NMT re-ranking.

  • Parameters:
    • alpha (float) – Length parameter.
    • beta (float) – Coverage parameter.
    • length_penalty (str) – Length penalty strategy.
    • coverage_penalty (str) – Coverage penalty strategy.
  • Variables: