AI Inference Termination (AIT)

Maingrid’s AI Inference Termination (AIT) is a critical part of the end-to-end AI ecosystem we provide. This entire infrastructure is deployed seamlessly within the user's own cloud environment, offering a managed platform for powerful, scalable AI applications.

With AIT, Maingrid ensures efficient session management and rapid resource turnover while keeping the underlying AI infrastructure robust and flexible.

In technical terms, AI inference termination refers to the point at which a user’s inference request (Origination), such as an AI prompt, has been fully processed and the results have been returned. In Maingrid, AIT marks the completion of a session, whether it is a single inference or an ongoing set of requests. When an end user submits a prompt to Maingrid, CIMS orchestrates the request, assigns resources, and processes the inference. Upon completion, AIT finalizes the session, allowing CIMS to free up resources and prepare for the next request.
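To make the lifecycle concrete, the sketch below follows a single request from Origination through processing to Termination. It is an illustration only: the names (InferenceSession, originate, terminate) and the placeholder resource handle are assumptions for this example, not part of the Maingrid or CIMS API.

```python
# Illustrative sketch of an AIT-style session lifecycle.
# All names below are hypothetical; they are not the Maingrid or CIMS API.
from dataclasses import dataclass, field


@dataclass
class InferenceSession:
    prompt: str
    resources: list = field(default_factory=list)
    completed: bool = False

    def originate(self) -> "InferenceSession":
        # Origination: the request enters the system and resources are assigned.
        self.resources.append("worker-1")  # placeholder resource handle
        return self

    def process(self) -> str:
        # The orchestrator runs the inference against the assigned resources.
        return f"result for: {self.prompt}"

    def terminate(self) -> None:
        # Termination (AIT): the results have been returned, so the session is
        # finalized and its resources are released for the next request.
        self.resources.clear()
        self.completed = True


session = InferenceSession("Summarize this document").originate()
result = session.process()
session.terminate()
print(result, session.completed)  # resources are now free for the next request
```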

Key AIT Features:

  • Python and Beyond: Maingrid supports all code written in Python and its derivatives, accommodating major open-source models, user-trained models, and multi-modal AI models alike.

  • Automated Dockerization: Users don’t need to worry about containerizing their applications. Maingrid automatically packages user code into a Docker image, with dependencies configured for smooth execution and scaling; a sketch of the kind of entry point this applies to follows this list.

  • Secure Code Integration: Users can securely pull code directly from their Git repositories into their AWS environment. Maingrid never accesses or interacts with user code or models, ensuring complete privacy.

  • Fully Secured Endpoints: All endpoints are protected, with access restricted solely to the user. This includes secure networking and communication for API and embedding endpoints; see the request example after this list.
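As a rough picture of what automated Dockerization operates on, the sketch below is a minimal, self-contained inference entry point of the kind Maingrid could package into a Docker image. The file name handler.py, the predict function, and its signature are assumptions made for this example only, not Maingrid’s actual packaging conventions.

```python
# handler.py - a minimal, self-contained inference entry point.
# The file name, function name, and signature are illustrative assumptions;
# they are not Maingrid's actual packaging conventions.

def predict(prompt: str) -> dict:
    """Run a single inference and return a JSON-serializable result."""
    # A real handler would load a model once and run it here; this stub
    # simply echoes the prompt so the example stays self-contained.
    return {"prompt": prompt, "completion": prompt.upper()}


if __name__ == "__main__":
    print(predict("hello from Maingrid"))
```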

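Once an endpoint is deployed, only the user can reach it, from within their own environment and with their own credentials. The snippet below shows what such a call might look like; the AIT_ENDPOINT and AIT_TOKEN environment variables, the URL, the Authorization header, and the request payload are assumptions for illustration.

```python
# Calling a deployed inference endpoint from within the user's own environment.
# The environment variable names, URL, header, and payload shape are all
# illustrative assumptions; actual details depend on the user's deployment.
import os

import requests

endpoint = os.environ.get("AIT_ENDPOINT", "https://example.invalid/infer")
token = os.environ.get("AIT_TOKEN", "")  # credentials stay in the user's environment

response = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {token}"},
    json={"prompt": "Summarize this document"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```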
For further technical specifications, visit the Tech Specs page.
