Mystic.ai is a platform for deploying and scaling ML models quickly. It offers cost and performance optimizations, a Turbo Registry for faster cold starts, fully managed Kubernetes, automatic GPU scaling, and a user-friendly dashboard, making it a strong fit for data scientists and AI engineers.

FEATURES
Deploy and Scale Machine Learning Models with Ease
Turbo Registry in Rust for Lower Cold Starts
Custom SDXL and Fine-Tuned LLM Pipeline with Pipeline AI Library

What is Mystic.ai?

Mystic.ai offers a straightforward and efficient way to deploy and scale machine learning models. With Mystic.ai, you can run any AI model in your own Azure/AWS/GCP account or deploy it to their shared GPU cluster. The platform provides cost optimizations by letting you pay for GPUs at cloud-provider rates and run inference on spot instances to maximize GPU utilization. It also offers performance optimizations, such as bringing your own inference engine (vLLM, TensorRT, and more).

One of the standout features of Mystic.ai is its Turbo Registry, a new Docker registry written in Rust that drastically lowers cold starts for ML models, so your AI models start quickly and serve requests with minimal delay. In addition, Mystic.ai simplifies the developer experience with a fully managed Kubernetes platform that runs in your own cloud, so no Kubernetes or DevOps experience is required. The platform also offers APIs, a CLI, and a Python SDK to deploy and run ML models easily.

With Mystic.ai, you get a high-performance platform to serve your AI models efficiently. The platform automatically scales GPUs up and down with the number of API calls your models receive, providing a cost-effective and scalable way to run ML inference. Mystic.ai also offers a beautiful dashboard for viewing and managing all your ML deployments, giving you full control over your AI models and infrastructure. Overall, Mystic.ai is a strong choice for data scientists and AI engineers who want to deploy ML models with minimal friction.

Mystic.ai Features

Deploy and Scale Machine Learning Models with Ease

Mystic.ai offers a seamless solution for deploying and scaling Machine Learning models with ease. This feature is crucial for businesses looking to optimize their AI infrastructure and quickly bring their AI products to market.
  • With Mystic.ai, users can deploy ML models in their own Azure/AWS/GCP account or in Mystic's shared GPU cluster. The process is simple and efficient, allowing users to run ML inference in the most cost-effective and scalable way.
  • Mystic.ai provides cloud integration with AWS/Azure/GCP, enabling users to harness the power of their preferred cloud provider for AI model deployment.
  • The platform automatically scales GPUs based on the number of API calls received by the models, ensuring optimal performance and cost efficiency.
  • Users can take advantage of cost-saving optimizations such as paying for GPUs at cloud-provider rates, running inference on spot instances, and parallelizing models on the same GPU for maximum utilization.
  • Mystic.ai also offers performance optimizations to ensure fast model execution and minimal cold starts. Users can bring their own inference engines and leverage a high-performance model loader built in Rust for lower cold starts.
  • The platform simplifies the developer experience by eliminating the need for Kubernetes or DevOps expertise. APIs, CLI tools, and a Python SDK are provided to facilitate easy deployment and management of ML models. Additionally, a user-friendly dashboard allows users to monitor and manage their deployments effectively.

Turbo Registry in Rust for Lower Cold Starts

Mystic.ai introduces Turbo Registry, a new Docker registry built in Rust that drastically decreases cold starts for ML models, enhancing the platform's performance and efficiency for deploying AI products.
  • The Turbo Registry leverages Rust's capabilities to lower cold starts significantly, providing users with faster model loading times and improved overall performance.
  • Users can benefit from reduced latency and downtime when deploying ML models, resulting in a more seamless and responsive experience for end-users.
  • The registry integrates seamlessly with Mystic.ai's existing infrastructure, allowing users to leverage the enhanced performance and efficiency without any additional complexity.
  • Turbo Registry complements the platform's other optimizations, such as spot-instance inference and GPU sharing, further improving the overall speed of deployments.

Custom SDXL and Fine-Tuned LLM Pipeline with Pipeline AI Library

Mystic.ai offers a unique feature that allows users to create custom SDXL and fine-tuned LLM pipelines using the Pipeline AI library. This versatile tool empowers users to package and deploy AI pipelines efficiently and effectively.
  • The Pipeline AI library enables users to wrap and package AI pipelines, whether it's a standard PyTorch model, a HuggingFace model, or a combination of multiple models using preferred inference engines.
  • Users can easily create custom SDXL and fine-tuned LLM pipelines, tailoring them to specific project requirements and optimizing performance.
  • With just a few simple steps, users can upload their pipeline containers and deploy new versions seamlessly on their preferred cloud provider.
  • Mystic.ai's RESTful APIs allow users to call their models with ease, providing instant API endpoints for running models and managing deployments.
  • The Pipeline AI library streamlines the process of deploying AI models, offering a flexible and user-friendly solution for packaging and running custom pipelines; a minimal sketch of the wrapping pattern follows this list.
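To make the packaging step concrete, here is a minimal sketch of wrapping a toy model with the library. The `@entity` and `@pipe` decorators appear in the walkthrough further down this page; the graph-building calls (`Pipeline`, `Variable`, `builder.output`, `get_pipeline`) are our assumptions about the pipeline-ai API and may differ from the current release.

```python
from pipeline import Pipeline, Variable, entity, pipe


@entity
class EchoModel:
    # Stand-in for a real PyTorch or HuggingFace model.
    @pipe(on_startup=True, run_once=True)
    def load(self):
        self.prefix = "echo: "

    @pipe
    def predict(self, prompt: str) -> str:
        return self.prefix + prompt


# Build the inference graph that Mystic will serve.
# (Assumed builder API; check the library docs for the exact calls.)
with Pipeline() as builder:
    prompt = Variable(str)
    model = EchoModel()
    model.load()
    builder.output(model.predict(prompt))

echo_pipeline = builder.get_pipeline()
```

The same pattern scales from this toy example to an SDXL or fine-tuned LLM pipeline: load weights once in a startup `@pipe`, then expose inference as a regular `@pipe` method.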

How to Use Mystic.ai?

Step 1: Sign Up or Log In to Mystic.ai
  • Navigate to the Mystic.ai homepage.
  • Click on the 'Sign up / Log in' button located at the top-right corner of the page.
  • Enter your email and password to log in, or click on 'Sign Up' to create a new account.
Step 2: Install the Open-Source Python Library
  • Open your terminal or command prompt.
  • Run the command `pip install pipeline-ai` to install the Pipeline AI Python library provided by Mystic.ai (the package is published on PyPI as pipeline-ai but imported as `pipeline`).
  • Verify the installation by running: `pip show pipeline-ai`.
Step 3: Wrap Your Machine Learning Pipeline
  • Create a new Python file and import required libraries: `from huggingface_hub import snapshot_download`, `from vllm import LLM, SamplingParams`, `from pipeline import entity, pipe`.
  • Define your ML pipeline class using Mystic.ai’s @entity decorator: `@entity class LlamaPipeline`.
  • Create a function to load your model: `@pipe(on_startup=True, run_once=True) def load_model(self):`.
  • Download the model snapshot and initialize it: `model_dir = "/tmp/llama2-7b-cache/"`, `snapshot_download("meta-llama/Llama-2-7b-chat-hf", local_dir=model_dir, token="YOUR_HUGGINGFACE_TOKEN")`, `self.llm = LLM(model_dir, dtype="bfloat16")`, `self.tokenizer = self.llm.get_tokenizer()`. A consolidated sketch of the whole file follows this list.
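Assembled into one file, the fragments above look roughly like this. Everything through `get_tokenizer()` comes from the steps listed here; the `generate` method and its sampling parameters are an illustrative addition of ours, since the page stops at model loading.

```python
# llama_pipeline.py — the Step 3 fragments assembled into one file.
from huggingface_hub import snapshot_download
from vllm import LLM, SamplingParams
from pipeline import entity, pipe


@entity
class LlamaPipeline:
    @pipe(on_startup=True, run_once=True)
    def load_model(self):
        # Download the model once at startup and hand it to vLLM.
        model_dir = "/tmp/llama2-7b-cache/"
        snapshot_download(
            "meta-llama/Llama-2-7b-chat-hf",
            local_dir=model_dir,
            token="YOUR_HUGGINGFACE_TOKEN",  # placeholder: use your own token
        )
        self.llm = LLM(model_dir, dtype="bfloat16")
        self.tokenizer = self.llm.get_tokenizer()

    @pipe
    def generate(self, prompt: str) -> str:
        # Illustrative inference step (not shown on this page).
        params = SamplingParams(temperature=0.7, max_tokens=256)
        outputs = self.llm.generate([prompt], params)
        return outputs[0].outputs[0].text
```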
Step 4: Deploy Your Pipeline to Your Cloud Account
  • Navigate to the root directory of your pipeline project in the terminal.
  • Run the command to push your pipeline container to Mystic.ai: `pipeline container push`.
  • Wait for the upload process to complete, which will package and upload your pipeline.
Step 5: Get an API Endpoint for Your Model
  • After your container is uploaded, Mystic.ai will provide an API endpoint for your model.
  • Use the RESTful API to call your model (a Python equivalent is sketched after this list):

    ```bash
    curl -X POST 'https://www.mystic.ai/v4/runs/stream' \
      --header 'Authorization: Bearer YOUR_TOKEN' \
      --header 'Content-Type: application/json' \
      --data '{ "pipeline": "user/pipeline_streaming:v1", "inputs": [{"type":"string","value":"A lone tree in the desert"}] }' \
      -N
    ```
  • Check the response to ensure your model is running correctly.
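For reference, here is the same call made from Python with the `requests` library. This is a sketch of ours rather than an official Mystic.ai SDK example; the endpoint, token placeholder, and payload simply mirror the curl command above.

```python
import requests

# Mirror of the curl command above; YOUR_TOKEN is a placeholder.
resp = requests.post(
    "https://www.mystic.ai/v4/runs/stream",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={
        "pipeline": "user/pipeline_streaming:v1",
        "inputs": [{"type": "string", "value": "A lone tree in the desert"}],
    },
    stream=True,  # equivalent of curl's -N: don't buffer the streamed response
)
resp.raise_for_status()
for line in resp.iter_lines():
    if line:
        print(line.decode())
```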
Step 6: Monitor and Manage Your Deployments
  • Log into your Mystic.ai dashboard.
  • Navigate to the 'Deployments' section to view all your running models, pipelines, and versions.
  • Use the dashboard to monitor performance, scale up or down GPUs, and manage API tokens.

Mystic.ai Pricing

  • Basic — $0 + compute /month
    Everything you need to get started building your AI tools.
    Includes: upload up to 5 private pipelines, $20 free compute credits, and support via Discord and email.

  • Starter — $30 + compute /month
    Features that enable small teams and businesses to scale.
    Includes: everything in Basic, unlimited pipeline uploads, teams with up to 3 invited members, up to 5 pipelines on Turbo Registry, and deployment to fractional A100s.

  • Pro — $100 + compute /month
    Advanced features for professional AI developers and teams.
    Includes: everything in Starter, up to 30 pipelines on Turbo Registry, and deployment to full A100s and H100s.

  • Enterprise
    Airtight security and privacy to run AI in your infrastructure.
    Includes: run AI models as an API within your own cloud or infrastructure of choice.

Mystic.ai Frequently Asked Questions

What cloud providers does Mystic.ai support for deploying AI models?

How does Mystic.ai optimize costs for running AI models?

What performance optimizations does Mystic.ai offer for AI models?

What makes Mystic.ai a simple and beautiful developer experience for deploying AI models?