meta/meta-llama-3-8b

Base version of Llama 3, an 8-billion-parameter language model from Meta.

Input

Configure the inputs for the model.

top_k
The number of highest-probability tokens to consider when generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering).

top_p
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

prompt
The input prompt.

max_tokens
The maximum number of tokens the model should generate as output.

min_tokens
The minimum number of tokens the model should generate as output.

temperature
The value used to modulate the next-token probabilities.

prompt_template
Prompt template. The string `{prompt}` will be substituted for the input prompt. To generate dialog output, use this template as a starting point and construct the prompt string manually, leaving `prompt_template={prompt}`.
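The template substitution described above behaves like a plain string replace. A minimal sketch (the `render` helper is hypothetical, for illustration only):

```python
def render(prompt_template: str, prompt: str) -> str:
    """Substitute the literal string "{prompt}" with the input prompt."""
    return prompt_template.replace("{prompt}", prompt)

# The pass-through template leaves the prompt unchanged, which is what
# keeping `prompt_template={prompt}` means when you construct a dialog
# prompt string manually before sending it to the model.
print(render("{prompt}", "Once upon a time"))  # → Once upon a time
```

With a non-trivial template such as `"Q: {prompt}\nA:"`, the same helper wraps the raw prompt in whatever dialog framing you choose.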

presence_penalty
Presence penalty: penalizes new tokens based on whether they have already appeared in the text so far.

frequency_penalty
Frequency penalty: penalizes new tokens based on how often they have already appeared in the text so far.
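Taken together, a request to this model might look like the following sketch using the Replicate Python client. The parameter values are illustrative examples, not recommended defaults:

```python
# Illustrative input payload for meta/meta-llama-3-8b.
# All values here are examples chosen for demonstration.
model_input = {
    "prompt": "Write a haiku about autumn.",
    "max_tokens": 128,
    "min_tokens": 0,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,
    "prompt_template": "{prompt}",  # pass the prompt through verbatim
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
}

# With the `replicate` package installed and REPLICATE_API_TOKEN set,
# the call streams output tokens (network call, so commented out here):
# import replicate
# for token in replicate.run("meta/meta-llama-3-8b", input=model_input):
#     print(token, end="")

# Basic sanity checks on the payload ranges described above.
assert 0.0 < model_input["top_p"] <= 1.0
assert model_input["top_k"] > 0
assert model_input["min_tokens"] <= model_input["max_tokens"]
```

Setting `temperature` low with a tight `top_p` makes output more deterministic; raising both increases diversity.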
