fofr/prompt-classifier
A llama-13b fine-tune that determines the toxicity of text-to-image prompts, returning a [SAFETY_RANKING] between 0 (safe) and 10 (toxic).
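Below is a minimal sketch of calling the classifier through the Replicate Python client and extracting the ranking from the reply. The input key ("prompt"), the shape of the returned output, and the absence of a pinned version hash are assumptions based on typical llama fine-tunes hosted on Replicate; check the model's API tab for the exact input schema and version string.

```python
# Sketch only: parameter names and output handling are assumptions, not the
# confirmed schema for fofr/prompt-classifier. Requires REPLICATE_API_TOKEN
# to be set in the environment.
import re
import replicate


def classify_prompt(text: str) -> int:
    # replicate.run returns the model output; for language models this is
    # typically an iterator of text chunks, so join them into one string.
    output = replicate.run(
        "fofr/prompt-classifier",    # pin a specific version in real use
        input={"prompt": text},      # assumed input key
    )
    reply = output if isinstance(output, str) else "".join(output)

    # Pull the integer that follows the [SAFETY_RANKING] marker
    # (0 = safe, 10 = toxic).
    match = re.search(r"\[SAFETY_RANKING\]\s*(\d+)", reply)
    if match is None:
        raise ValueError(f"No safety ranking found in model reply: {reply!r}")
    return int(match.group(1))


if __name__ == "__main__":
    score = classify_prompt("a watercolor painting of a quiet harbor at dawn")
    print(f"toxicity score: {score}/10")
```

A common use is to gate prompts before they reach an image model: reject or flag anything whose score crosses a threshold you choose for your application.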