Service operation
The backend is managed by Salad.com, Venice.AI and Presearch in tandem.
PreGPT 2.0 is fueled by Salad.com's and Venice.AI's decentralized networks of GPU nodes. For more insight into Salad's distributed cloud technology or Venice.AI's platform, visit their respective websites.
PreGPT 2.0 currently uses the Mistral 7B LLM and models from Venice.AI.
Many parts of PreGPT 2.0, including the LLMs and the UI (forked from Hugging Face), are open-sourced. The inference endpoints are hosted by Salad.com.
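As a rough illustration of how a chat frontend can talk to a hosted LLM backend of this kind, the sketch below posts a prompt to a hypothetical OpenAI-compatible chat completions endpoint. The URL, model identifier, and API key are placeholders for illustration only; they are not Presearch's, Salad's, or Venice.AI's actual endpoints.

```python
# Hypothetical sketch: querying an OpenAI-compatible chat completions
# endpoint such as one a hosted GPU inference provider might expose.
# The URL, model name, and API key are placeholders, not real services.
import requests

API_URL = "https://example-inference-host.invalid/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

payload = {
    "model": "mistral-7b-instruct",  # example model id, assumed for illustration
    "messages": [
        {"role": "user", "content": "Explain decentralized GPU inference in two sentences."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Print the assistant's reply from the first completion choice.
print(response.json()["choices"][0]["message"]["content"])
```

This mirrors the common pattern used by open-source chat UIs: the frontend sends the conversation as a list of messages and renders the model's reply from the response payload.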
While we aim for consistent uptime, occasional downtime may occur due to high demand.
Like all chatbots, PreGPT has its limits. It does not provide real-time or web-based information, and its knowledge extends only to roughly Spring 2023, when the Mistral 7B training data ends. The current model primarily supports English, with limited capability in other languages.