
Deployment Platform Updates

October 17, 2022 | 9:30 min read

The TrueFoundry team has been working hard over the last month adding features to our ML deployment platform. Our goal is to build a deployment platform that makes it absolutely easy to deploy ML models and services while enforcing the best engineering and security principles. A great ML platform needs a solid engineering platform underneath it, which is why a lot of our initial focus has been on delivering a solid platform for deploying code.

ML Platform components

Out of all the pieces of an ML platform, we focus on the serving infrastructure, monitoring, and all the automation around them.

A lot of work went into building our deployment platform on top of Kubernetes. The goal has been to make it possible to deploy in under 5 minutes: the platform takes care of building the image from the source code, storing it in a Docker registry, and finally deploying the application on Kubernetes. A few of the updates from the last month include the following:

Ability to choose instance family while deploying

Machine learning models can have very different inference latency or performance depending on the instance type. For example, when testing the inference latency of a Hugging Face model on Intel vs. AMD processors, we found the Intel processors to be around 30% faster. That’s why we now have an option that allows users to choose the instance type while deploying their workloads. If no instance type is selected, the workload can be deployed on any available instance type.
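Under the hood, pinning a workload to a specific instance type on Kubernetes is typically done with a `nodeSelector` on the well-known `node.kubernetes.io/instance-type` label that cloud providers attach to nodes. A minimal sketch of the idea (the deployment name, image, and instance type below are placeholders, not TrueFoundry's own spec):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hf-inference            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hf-inference
  template:
    metadata:
      labels:
        app: hf-inference
    spec:
      nodeSelector:
        # Well-known label populated by cloud providers; pins the pods
        # to nodes of this instance type (here, an Intel-based c5 node).
        node.kubernetes.io/instance-type: c5.xlarge
      containers:
        - name: model-server
          image: registry.example.com/hf-model:latest  # placeholder image
```

Leaving the `nodeSelector` out corresponds to the default behavior described above, where the scheduler may place the workload on any available instance type.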

Choose instance type while deploying

Logs and Metrics for Deployments

We earlier provided a Grafana link for viewing logs and metrics. While Grafana is highly customizable, permission and access control wasn’t really possible with it. It also turned out to be a bit slow and difficult to understand for users who weren’t familiar with Grafana. That’s why we implemented our own UI for showing logs and metrics, which should suffice in most cases. We still offer the Grafana integration in the public cloud for more advanced users.

System Metrics
Application Logs

Permission Control On Secret Groups

We can now add users as an editor, viewer, or admin on secret groups.
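To illustrate the idea, here is a minimal sketch of a role-based check on a secret group. The role names come from the post; the role hierarchy, actions, and function names are hypothetical and only meant to show how such a permission model typically works:

```python
# Illustrative role-based access check (hypothetical model, not
# TrueFoundry's actual implementation). Roles: viewer < editor < admin.

ROLE_RANK = {"viewer": 0, "editor": 1, "admin": 2}

# Hypothetical minimum role required for each action on a secret group.
REQUIRED_ROLE = {
    "read": "viewer",
    "update": "editor",
    "manage_permissions": "admin",
}

def can_perform(user_role: str, action: str) -> bool:
    """Return True if user_role meets or exceeds the role the action requires."""
    return ROLE_RANK[user_role] >= ROLE_RANK[REQUIRED_ROLE[action]]
```

With a hierarchy like this, an admin can do everything an editor can, and an editor everything a viewer can, so only one role needs to be stored per user per secret group.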


GitHub and Bitbucket Integration

Users can now deploy directly to TrueFoundry from any GitHub or Bitbucket repository. They can connect their own private repositories using the OAuth flow and select the appropriate parameters to deploy the application.
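As a sketch of what the OAuth authorization-code flow for private-repo access looks like, here is the first step against GitHub's documented OAuth endpoints. The `client_id` and `redirect_uri` values are placeholders, and how TrueFoundry wires this in on its side is an assumption:

```python
# Step 1 of the OAuth authorization-code flow: build the URL that sends
# the user to GitHub to grant repository access. Endpoints are GitHub's
# documented OAuth URLs; the client values below are placeholders.
from urllib.parse import urlencode

GITHUB_AUTHORIZE_URL = "https://github.com/login/oauth/authorize"
GITHUB_TOKEN_URL = "https://github.com/login/oauth/access_token"

def build_authorize_url(client_id: str, redirect_uri: str, scope: str = "repo") -> str:
    """Return the GitHub consent URL requesting repository access."""
    params = {"client_id": client_id, "redirect_uri": redirect_uri, "scope": scope}
    return f"{GITHUB_AUTHORIZE_URL}?{urlencode(params)}"

# Step 2 (not executed here): GitHub redirects back to redirect_uri with a
# temporary `code`, which the platform exchanges at GITHUB_TOKEN_URL for an
# access token it can then use to clone the private repo and build the image.

url = build_authorize_url("my-client-id", "https://app.example.com/callback")
```

Bitbucket's OAuth flow follows the same authorization-code pattern with its own endpoints.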


Over the next month, we will be working on a few exciting features:

  1. Making the platform more intuitive and easier to use
  2. Automated deployment of the TrueFoundry stack on any Kubernetes cluster
  3. Support for teams
  4. Deployment rollback functionality

Stay tuned and let us know your feedback!
