ibm-granite/granite-3.2-8b-instruct

Granite-3.2-8B-Instruct is an 8-billion-parameter, 128K-context-length language model fine-tuned for reasoning and instruction-following capabilities.

Input
Configure the inputs for the AI model.
Top k
The number of highest-probability tokens to consider when generating the output. If > 0, only the top k tokens with the highest probability are kept (top-k filtering).
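Top-k filtering can be sketched in a few lines of NumPy. This is a minimal illustration of the technique as described above, not the model's internal implementation:

```python
import numpy as np

def top_k_filter(logits, k):
    """Keep only the k highest-probability tokens; mask the rest to -inf."""
    if k <= 0:
        return logits  # k == 0 disables top-k filtering
    filtered = np.full_like(logits, -np.inf)
    top_idx = np.argsort(logits)[-k:]  # indices of the k largest logits
    filtered[top_idx] = logits[top_idx]
    return filtered

logits = np.array([2.0, 1.0, 0.5, -1.0])
filtered = top_k_filter(logits, 2)  # only the two largest logits survive
```

Tokens masked to -inf receive zero probability after the softmax, so sampling can only pick from the k survivors.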

Top p
A probability threshold for generating the output. If < 1.0, only the top tokens whose cumulative probability is >= top_p are kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
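Nucleus (top-p) filtering keeps the smallest set of tokens whose cumulative probability reaches the threshold. A minimal NumPy sketch of the idea, again not the model's internal code:

```python
import numpy as np

def top_p_filter(logits, top_p):
    """Keep the smallest set of tokens with cumulative probability >= top_p."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over the logits
    order = np.argsort(probs)[::-1]         # token indices, most probable first
    cumulative = np.cumsum(probs[order])
    # keep tokens up to and including the one that crosses the threshold
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]
    filtered = np.full_like(logits, -np.inf)
    filtered[keep] = logits[keep]
    return filtered

logits = np.array([2.0, 1.0, 0.0, -1.0])
filtered = top_p_filter(logits, 0.8)  # here the top two tokens cover >= 0.8
```

Unlike a fixed top-k, the number of surviving tokens adapts to the shape of the distribution: a confident distribution keeps few tokens, a flat one keeps many.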

Prompt

Max tokens
The maximum number of tokens the model should generate as output.

Min tokens
The minimum number of tokens the model should generate as output.

Temperature
The value used to modulate the next-token probabilities. Values below 1 sharpen the distribution toward the most likely tokens; values above 1 flatten it, making output more random.
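The effect of temperature can be shown with a small softmax sketch (illustrative only):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before the softmax.

    temperature < 1 sharpens the distribution; temperature > 1 flattens it.
    """
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.0]
p_sharp = softmax_with_temperature(logits, 0.5)  # more mass on the top token
p_flat = softmax_with_temperature(logits, 2.0)   # closer to uniform
```

Both results are valid probability distributions; only the concentration of probability mass changes.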

Stop sequences
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.

Presence penalty
Penalizes tokens that have already appeared in the output, encouraging the model to introduce new tokens.

Frequency penalty
Penalizes tokens in proportion to how often they have already appeared in the output, discouraging repetition.
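Taken together, the inputs above map naturally onto a sampling request. A minimal sketch of such a payload, assuming an OpenAI-compatible completions convention — the endpoint and exact field names used by ikalos.ai are assumptions, not documented here; `min_tokens` and `top_k` are vLLM-style extensions rather than core OpenAI fields:

```python
# Illustrative payload only; field names follow the OpenAI-compatible
# completions convention, which ikalos.ai is assumed (not confirmed) to use.
stop_field = "<end>,<stop>"          # comma-separated, as in the Input form

payload = {
    "model": "ibm-granite/granite-3.2-8b-instruct",
    "prompt": "Explain nucleus sampling in one sentence.",
    "max_tokens": 100,               # maximum tokens to generate
    "min_tokens": 1,                 # minimum tokens (vLLM-style extension)
    "temperature": 0.7,              # modulates next-token probabilities
    "top_p": 0.9,                    # nucleus (top-p) filtering threshold
    "top_k": 50,                     # top-k filtering (vLLM-style extension)
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "stop": stop_field.split(","),   # ["<end>", "<stop>"]
}
```

The comma-separated stop string from the form is split into a list before sending, matching the '<end>,<stop>' example above.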

Output
The generated output will appear here.
