Protect your users
Galilei's Toxic API uses machine learning models to detect toxic sentiment in a string of text.
If the API deems the text toxic, it returns an additional string value explaining why. You could use this response to:
- Ban or mute a user
- Remove the content immediately
- Censor the content
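As a rough sketch of how the actions above might be wired up, the snippet below maps a hypothetical API response to a moderation decision. The field names (`toxic`, `reason`) and the reason keywords are assumptions for illustration only, not the documented response schema.

```python
# Hypothetical Galilei Toxic API response handling.
# Field names ("toxic", "reason") are illustrative assumptions,
# not the documented schema.

def moderate(response: dict) -> str:
    """Map an API response to a moderation action."""
    if not response.get("toxic", False):
        return "allow"
    reason = response.get("reason", "")
    # Escalate based on the reason string returned by the API.
    if "threat" in reason or "harassment" in reason:
        return "ban"     # Ban or mute the user
    if "profanity" in reason:
        return "censor"  # Censor the content
    return "remove"      # Remove the content immediately

print(moderate({"toxic": True, "reason": "profanity"}))  # censor
```

The exact mapping from reasons to actions is a product decision; this sketch simply shows that the reason string gives you enough signal to choose between banning, censoring, and removing.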
We are constantly training our models with new content to keep them performing at their best.
Security comes first
Galilei is powerful enough to handle millions of requests per second. All data sent through our API is encrypted and transmitted over HTTPS.
Galilei is built using industry-leading technologies