meta/meta-llama-3.1-405b-instruct

Meta's flagship 405-billion-parameter language model, fine-tuned for chat completions

Input
Configure the inputs for the AI model.

Top k
The number of highest-probability tokens to consider when generating the output. If > 0, only the top k tokens with the highest probability are kept (top-k filtering).
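To make the behavior concrete, here is a minimal Python sketch of top-k filtering over a toy logit vector; the values and the top_k_filter helper are invented for illustration, not part of any API.

```python
# A minimal sketch of top-k filtering over a toy next-token distribution.
import numpy as np

def top_k_filter(logits: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k highest-scoring logits; mask the rest to -inf."""
    if k <= 0:
        return logits  # k == 0 disables top-k filtering
    kth_best = np.sort(logits)[-k]  # score of the k-th best token
    return np.where(logits >= kth_best, logits, -np.inf)

logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
filtered = top_k_filter(logits, k=2)
probs = np.exp(filtered - filtered.max())
probs /= probs.sum()
print(probs)  # only the two highest-probability tokens keep non-zero mass
```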

Top p
A probability threshold for generating the output. If < 1.0, only the smallest set of most-probable tokens whose cumulative probability reaches top_p is kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
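As a sketch of nucleus filtering under the same toy setup: tokens are kept in probability order until their cumulative mass first reaches top_p, and the rest are zeroed out. The top_p_filter helper and the probabilities are hypothetical.

```python
# A minimal sketch of nucleus (top-p) filtering, after Holtzman et al. (2019).
import numpy as np

def top_p_filter(probs: np.ndarray, top_p: float) -> np.ndarray:
    """Zero out all but the smallest prefix of tokens (in probability
    order) whose cumulative probability reaches top_p, then renormalize."""
    order = np.argsort(probs)[::-1]                   # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1   # smallest prefix reaching top_p
    keep = np.zeros_like(probs, dtype=bool)
    keep[order[:cutoff]] = True
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()

probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
print(top_p_filter(probs, top_p=0.9))  # only the 0.05 tail is dropped
```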

Prompt
The text prompt to send to the model.

Max tokens
The maximum number of tokens the model should generate as output.

Min tokens
The minimum number of tokens the model should generate as output.
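How the two limits interact is implementation-specific, but a common scheme is to suppress the end-of-sequence token until min_tokens have been produced and to hard-stop at max_tokens. The sketch below assumes that scheme; sample_next_token and EOS_ID are toy stand-ins, not part of this model's API.

```python
# A toy generation loop illustrating one common min/max token scheme:
# EOS is ignored before min_tokens, and the loop halts at max_tokens.
import random

EOS_ID = 0  # hypothetical end-of-sequence token id

def sample_next_token(tokens):
    # Toy sampler: random token ids, with EOS appearing occasionally.
    return random.choice([EOS_ID, 1, 2, 3])

def generate(min_tokens: int, max_tokens: int) -> list[int]:
    tokens: list[int] = []
    while len(tokens) < max_tokens:
        token = sample_next_token(tokens)
        if token == EOS_ID and len(tokens) >= min_tokens:
            break  # EOS is honored only after min_tokens
        if token != EOS_ID:
            tokens.append(token)  # EOS before min_tokens is discarded
    return tokens

print(generate(min_tokens=3, max_tokens=8))
```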

Temperature
The value used to modulate the next-token probabilities. Lower values make the output more focused and deterministic; higher values make it more random.
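A minimal sketch of what temperature does mechanically, assuming the standard formulation of dividing the logits by the temperature before the softmax; the values are invented for illustration.

```python
# Temperature scaling: T < 1 sharpens the distribution, T > 1 flattens it.
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.0])
print(softmax_with_temperature(logits, 0.5))  # sharper: mass concentrates on the top token
print(softmax_with_temperature(logits, 1.5))  # flatter: mass spreads out
```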

Stop sequences
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
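One way to picture the behavior is a client-side truncation pass; truncate_at_stop below is a hypothetical helper, not part of this model's API.

```python
# A minimal sketch of applying a comma-separated stop list to generated text.
def truncate_at_stop(text: str, stop_sequences: str) -> str:
    """Cut `text` at the earliest occurrence of any stop sequence."""
    for stop in stop_sequences.split(","):
        index = text.find(stop)
        if index != -1:
            text = text[:index]
    return text

print(truncate_at_stop("Hello world <stop> trailing text", "<end>,<stop>"))
# -> "Hello world "
```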

Prompt template
A template to format the prompt with. If not provided, the default prompt template will be used.
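As a sketch, assuming the {prompt} and {system_prompt} placeholder convention and the Llama 3.1 chat format with header tokens; check the model page for the exact default template.

```python
# Formatting a raw prompt with a chat template; the template string below
# follows the Llama 3.1 instruct format but is an assumption, not the
# confirmed default for this deployment.
LLAMA_31_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

formatted = LLAMA_31_TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="Explain top-k sampling in one sentence.",
)
print(formatted)
```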

Presence penalty
Penalizes tokens that have already appeared in the output, regardless of how often, nudging the model toward introducing new tokens.

Frequency penalty
Penalizes tokens in proportion to how often they have already appeared in the output, discouraging verbatim repetition.
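A sketch contrasting the two penalties, assuming the common OpenAI-style formulation (a flat subtraction for presence, a count-scaled subtraction for frequency); the exact formula this server uses may differ, and the values here are invented.

```python
# Presence penalty: flat subtraction, once per token already seen.
# Frequency penalty: subtraction that grows with the token's count so far.
import numpy as np
from collections import Counter

def apply_penalties(logits, generated_ids, presence_penalty, frequency_penalty):
    counts = Counter(generated_ids)
    penalized = logits.copy()
    for token_id, count in counts.items():
        penalized[token_id] -= presence_penalty            # flat, once per seen token
        penalized[token_id] -= frequency_penalty * count   # grows with repetition
    return penalized

logits = np.zeros(5)
print(apply_penalties(logits, [2, 2, 2, 4], 0.5, 0.3))
# token 2: -0.5 - 0.3*3 = -1.4 ; token 4: -0.5 - 0.3*1 = -0.8
```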

Output
The generated text returned by the model.
