Research Article | Peer-Reviewed

Building Scalable MLOps Pipelines with DevOps Principles and Open-Source Tools for AI Deployment

Received: 5 October 2025     Accepted: 3 November 2025     Published: 11 December 2025
Abstract

The convergence of Artificial Intelligence (AI) with DevOps, DataOps, and MLOps has transformed the software development lifecycle, enabling scalable, automated, and intelligent systems. This paper explores the transition from traditional DevOps to MLOps, emphasizing the integration of machine learning workflows into continuous integration, deployment, and training pipelines. We present a practical framework for implementing MLOps using tools such as MLflow, Airflow, and Kubernetes, and address challenges like overfitting, underfitting, and model drift. The proposed architecture leverages Docker and ONNX for model packaging and deployment, ensuring reproducibility and cross-platform compatibility. Through real-world examples and pipeline automation strategies, we demonstrate how MLOps enhances model reliability, governance, and performance monitoring in dynamic environments. This study contributes to the growing body of knowledge on AI-driven DevOps by offering actionable insights for researchers and practitioners aiming to build robust ML systems. As a concrete demonstration, we build an Apache Airflow pipeline that loads, trains, and evaluates an ML model, stores it, and serves it for inference through a Streamlit UI, packaged with Docker and auto-scaled with Kubernetes as the container orchestration tool. We also describe techniques for implementing and automating continuous integration (CI), continuous delivery (CD), and continuous training (CT) for ML systems. This work applies primarily to predictive AI systems.
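
As a minimal sketch of the pipeline outlined above (not the exact implementation reported in the paper), the following Apache Airflow TaskFlow DAG loads data, trains a model, evaluates it, and stores the artifact, logging the evaluation metric to MLflow. The dataset, classifier, artifact directory, and run name are illustrative assumptions, and Airflow 2.4 or later is assumed for the "schedule" argument.

from datetime import datetime
from pathlib import Path

import joblib
import mlflow
from airflow.decorators import dag, task
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical local artifact location; a production pipeline would use shared or object storage.
ARTIFACT_DIR = Path("/tmp/mlops_demo")


@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False, tags=["mlops"])
def ml_training_pipeline():
    @task
    def load_data() -> str:
        # Load a toy dataset, split it, and persist the splits for downstream tasks.
        ARTIFACT_DIR.mkdir(parents=True, exist_ok=True)
        X, y = load_breast_cancer(return_X_y=True)
        splits = train_test_split(X, y, test_size=0.2, random_state=42)
        data_path = ARTIFACT_DIR / "splits.joblib"
        joblib.dump(splits, data_path)
        return str(data_path)

    @task
    def train(data_path: str) -> str:
        # Fit a baseline classifier on the training split and save the model artifact.
        X_train, _, y_train, _ = joblib.load(data_path)
        model = RandomForestClassifier(n_estimators=100, random_state=42)
        model.fit(X_train, y_train)
        model_path = ARTIFACT_DIR / "model.joblib"
        joblib.dump(model, model_path)
        return str(model_path)

    @task
    def evaluate(data_path: str, model_path: str) -> float:
        # Score the held-out split and log the metric to the MLflow tracking server.
        _, X_test, _, y_test = joblib.load(data_path)
        model = joblib.load(model_path)
        accuracy = float(accuracy_score(y_test, model.predict(X_test)))
        with mlflow.start_run(run_name="airflow_training"):
            mlflow.log_metric("test_accuracy", accuracy)
        return accuracy

    data = load_data()
    model = train(data)
    evaluate(data, model)


ml_training_pipeline()

The stored model file (or an ONNX export of it) can then be copied into a Docker image that serves a Streamlit inference UI, with Kubernetes handling replica auto-scaling as described in the abstract.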

Published in American Journal of Artificial Intelligence (Volume 9, Issue 2)
DOI 10.11648/j.ajai.20250902.29
Page(s) 297-309
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Artificial Intelligence, DevOps, MLOps, Overfitting, Docker, Kubernetes, DataOps, Machine Learning Lifecycle

Cite This Article
  • APA Style

    Minh, T. Q., Lan, N. T., Phuong, L. T., Cuong, N. C., & Tam, D. C. (2025). Building Scalable MLOps Pipelines with DevOps Principles and Open-Source Tools for AI Deployment. American Journal of Artificial Intelligence, 9(2), 297-309. https://doi.org/10.11648/j.ajai.20250902.29


  • ACS Style

    Minh, T. Q.; Lan, N. T.; Phuong, L. T.; Cuong, N. C.; Tam, D. C. Building Scalable MLOps Pipelines with DevOps Principles and Open-Source Tools for AI Deployment. Am. J. Artif. Intell. 2025, 9(2), 297-309. doi: 10.11648/j.ajai.20250902.29


  • AMA Style

    Minh TQ, Lan NT, Phuong LT, Cuong NC, Tam DC. Building Scalable MLOps Pipelines with DevOps Principles and Open-Source Tools for AI Deployment. Am J Artif Intell. 2025;9(2):297-309. doi: 10.11648/j.ajai.20250902.29


  • @article{10.11648/j.ajai.20250902.29,
      author = {Trinh Quang Minh and Ngo Thi Lan and Lam Tan Phuong and Nguyen Chi Cuong and Do Chi Tam},
      title = {Building Scalable MLOps Pipelines with DevOps Principles and Open-Source Tools for AI Deployment},
      journal = {American Journal of Artificial Intelligence},
      volume = {9},
      number = {2},
      pages = {297-309},
      doi = {10.11648/j.ajai.20250902.29},
      url = {https://doi.org/10.11648/j.ajai.20250902.29},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20250902.29},
     year = {2025}
    }
    


  • TY  - JOUR
    T1  - Building Scalable MLOps Pipelines with DevOps Principles and Open-Source Tools for AI Deployment
    AU  - Trinh Quang Minh
    AU  - Ngo Thi Lan
    AU  - Lam Tan Phuong
    AU  - Nguyen Chi Cuong
    AU  - Do Chi Tam
    Y1  - 2025/12/11
    PY  - 2025
    N1  - https://doi.org/10.11648/j.ajai.20250902.29
    DO  - 10.11648/j.ajai.20250902.29
    T2  - American Journal of Artificial Intelligence
    JF  - American Journal of Artificial Intelligence
    JO  - American Journal of Artificial Intelligence
    SP  - 297
    EP  - 309
    PB  - Science Publishing Group
    SN  - 2639-9733
    UR  - https://doi.org/10.11648/j.ajai.20250902.29
    VL  - 9
    IS  - 2
    ER  - 


Author Information
  • Trinh Quang Minh, Ngo Thi Lan, Lam Tan Phuong, Nguyen Chi Cuong, Do Chi Tam: Faculty of Engineering and Technology, Tay Do University, Can Tho City, Viet Nam
