[Avg. reading time: 8 minutes]

Why MLflow

MLflow provides comprehensive support for traditional ML workflows, making it straightforward to track experiments, manage models, and deploy solutions at scale.

Key Features

Intelligent (Auto)logging

- Simple Integration for scikit-learn, XGBoost, and more
- Automatic Parameter Capture (logs all model hyperparameters without manual intervention)
- Built-in Evaluation Metrics (automatically computes and stores relevant performance metrics)
- Model Serialization (handles complex objects like pipelines seamlessly)

Compare Model Performance Across Algorithms

  • Save Time: No more manually tracking results in spreadsheets or notebooks

  • Make Better Decisions: Easily spot which algorithms perform best on your data

  • Avoid Mistakes: Never lose track of promising model configurations

  • Share Results: Team members can see all experiments and build on each other’s work

The MLflow UI supports this with:

  • Visual charts comparing accuracy, precision, and recall across all your models

  • Sortable tables showing parameter combinations and their results

  • Quick filtering to find models that meet specific performance criteria

  • Export capabilities to share findings with stakeholders

Flexible Deployment

  • Real-Time Inference for low-latency prediction services
  • Batch Processing for large-scale scoring jobs
  • Edge Deployment for offline and mobile applications
  • Containerized Serving with Docker and Kubernetes support
  • Cloud Integration across AWS, Azure, and Google Cloud platforms
  • Custom Serving Logic for complex preprocessing and postprocessing requirements

Capabilities

Tracking Server & MLflow UI

Start a new project

In VSCode, open your workspace.

Open Shell 1 (Terminal/GitBash)

uv init mlflow_demo
cd mlflow_demo
uv add mlflow pandas numpy scikit-learn matplotlib

Option 1: Store MLflow details on the local filesystem

mlflow server --host 127.0.0.1 --port 8080

Open this URL and copy the file into your VSCode workspace:

https://github.com/gchandra10/uni_multi_model/blob/main/01-lr-model.py

Open Shell 2

Activate the virtual environment, then run the script:

python 01-lr-model.py

Open your browser and go to http://127.0.0.1:8080

View the Experiment


Option 2: Store MLflow details in a Local Database

mlflow server --host 127.0.0.1 --port 8080 \
--backend-store-uri sqlite:///mlflow.db

Option 3: Store MLflow details in a Remote Database

export AWS_PROFILE=your_profile_name

mlflow server --host 127.0.0.1 --port 8080 \
  --default-artifact-root s3://yourbucket \
  --backend-store-uri 'postgresql://yourhostdetails/'

Model Serving

Open Shell 3

Optional: activate the virtual environment.


export MLFLOW_TRACKING_URI=http://127.0.0.1:8080

mlflow models serve \
  -m "models:/Linear_Regression_Model/1" \
  --host 127.0.0.1 \
  --port 5001 \
  --env-manager local

Real-Time Prediction

Open Shell 4

Optional: activate the virtual environment.


curl -X POST "http://127.0.0.1:5001/invocations" \
  -H "Content-Type: application/json" \
  --data '{"inputs": [{"ENGINESIZE": 2.0}, {"ENGINESIZE": 3.0}, {"ENGINESIZE": 4.0}]}'

OR

curl -X POST http://127.0.0.1:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{
        "dataframe_split": {
          "columns": ["ENGINESIZE"],
          "data": [[2.0],[3.0],[4.0]]
        }
      }'
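The same request can be issued from Python using only the standard library. A sketch mirroring the second curl payload (it assumes the serving endpoint above is running, and prints an error instead of crashing if it is not):

```python
import json
import urllib.request

# same payload as the dataframe_split curl example
payload = {
    "dataframe_split": {
        "columns": ["ENGINESIZE"],
        "data": [[2.0], [3.0], [4.0]],
    }
}

req = urllib.request.Request(
    "http://127.0.0.1:5001/invocations",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read()))  # a JSON object containing the predictions
except OSError as e:
    # serving endpoint not reachable; start `mlflow models serve` first
    print("request failed:", e)
```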

#mlflow #serving #mlflow_server

Ver 0.3.6

Last change: 2025-12-02