meta/llama-2-13b-chat

A 13-billion-parameter language model from Meta, fine-tuned for chat completions.

Input
Configure the inputs for the AI model.

Seed: random seed; leave blank to randomize the seed.

Debug: provide debugging output in logs.

Top-k (range 0–100): when decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
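A minimal sketch of the top-k idea, assuming sampling from a softmax over raw logits (pure Python, illustrative only — not the model's actual decoder):

```python
import math
import random

def top_k_sample(logits, k, rng=None):
    """Sample a token index from only the k highest-scoring logits."""
    rng = rng or random.Random(0)
    # Keep the indices of the k most likely tokens; discard the rest.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the surviving logits, then sample proportionally.
    weights = [math.exp(logits[i]) for i in top]
    total = sum(weights)
    return rng.choices(top, weights=[w / total for w in weights], k=1)[0]
```

With k=1 this degenerates to greedy decoding; raising k widens the pool of candidate tokens.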

Top-p (range 0–1): when decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
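A minimal sketch of top-p (nucleus) sampling, under the same illustrative assumptions as above — keep the smallest set of tokens whose cumulative probability reaches p, then sample within it:

```python
import math
import random

def top_p_sample(logits, p, rng=None):
    """Sample a token index from the nucleus of cumulative probability p."""
    rng = rng or random.Random(0)
    # Softmax over all logits.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    ranked = sorted(((e / total, i) for i, e in enumerate(exps)), reverse=True)
    # Accumulate the most likely tokens until their mass reaches p.
    kept, cum = [], 0.0
    for prob, i in ranked:
        kept.append((prob, i))
        cum += prob
        if cum >= p:
            break
    idxs = [i for _, i in kept]
    weights = [prob for prob, _ in kept]
    return rng.choices(idxs, weights=weights, k=1)[0]
```

Unlike top-k, the candidate pool here grows or shrinks with how confident the model is.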

Prompt: the prompt to send to the model.

Temperature (range 0.01–5): adjusts the randomness of outputs; values greater than 1 are more random, values near 0 are more deterministic, and 0.75 is a good starting value.
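Temperature works by dividing the logits before the softmax, which sharpens or flattens the resulting distribution. A small illustrative sketch:

```python
import math

def apply_temperature(logits, temperature):
    """Return softmax probabilities after temperature scaling.

    T < 1 sharpens the distribution (more deterministic);
    T > 1 flattens it (more random).
    """
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

At low temperature the most likely token absorbs nearly all the probability mass; at high temperature the choices approach uniform.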

Maximum new tokens (range 1–100): maximum number of tokens to generate. A word is generally 2–3 tokens.

Minimum new tokens (range -1 to 100): minimum number of tokens to generate. To disable, set to -1. A word is generally 2–3 tokens.

Stop sequences: a comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
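The stop behavior described above can be sketched as truncating the generated text at the earliest occurrence of any listed sequence (an assumption about the server-side implementation, shown here only to illustrate the semantics):

```python
def truncate_at_stop(text, stop_csv):
    """Cut generated text at the earliest occurrence of any stop sequence.

    Stop sequences arrive as one comma-separated string, e.g. '<end>,<stop>'.
    """
    cut = len(text)
    for seq in stop_csv.split(","):
        pos = text.find(seq)
        if pos != -1:
            cut = min(cut, pos)
    return text[:cut]
```

Note the stop sequence itself is not included in the output.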

Weights: path to fine-tuned weights produced by a Replicate fine-tune job.
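Putting the inputs together, a hypothetical invocation via the Replicate Python client might look like the following. The parameter names (`prompt`, `top_k`, `max_new_tokens`, etc.) are assumptions based on Replicate's Llama 2 input schema, not confirmed by this page:

```python
# Input values chosen to sit inside the ranges documented above.
input_params = {
    "prompt": "Explain what a token is in one sentence.",
    "temperature": 0.75,    # range 0.01-5; 0.75 is the suggested start
    "top_k": 50,            # range 0-100
    "top_p": 0.9,           # range 0-1
    "max_new_tokens": 100,  # range 1-100
    "min_new_tokens": -1,   # -1 disables the minimum
    "stop_sequences": "<end>,<stop>",
    "debug": False,
    # "seed" omitted to randomize it, per the description above.
}

# Hypothetical call (requires the `replicate` package and an API token):
# import replicate
# output = replicate.run("meta/llama-2-13b-chat", input=input_params)
# print("".join(output))
```

Treat this as a sketch: check the model's actual schema before relying on any name.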

Output
The generated output will appear here.


llama-2-13b-chat - ikalos.ai