Multi-Agent Code Orchestrator using CrewAI, Ollama, and Docker Compose

The primary goal of this project was to create an AI-powered machine learning assistant using CrewAI, integrated with a large language model (LLM) served by Ollama (the LLaMA model). The assistant is designed to guide users through defining, assessing, and solving machine learning problems. The project leverages multiple AI agents, each specialized in a different phase of the machine learning workflow, and coordinates their tasks to produce a coherent and effective solution.

The purpose of this project was to streamline the machine learning workflow by automating the processes of data preprocessing, model building, and validation using AI agents. This is particularly useful for users who may not be experts in AI but still need to develop robust machine learning models. The project aims to simplify the entire coding flow for ML tasks, making it accessible and efficient.

How the Project Is Set Up

    • Setting Up the LLM with Ollama:

      I configured the system to use the llama3.1 model from Ollama, pulling the model in advance so it is ready for the AI agents to use. The LLaMA family performs well across a wide range of NLP tasks, making it a suitable choice for generating and validating complex AI coding flows.

    • Streamlit UI for User Interaction:

      I built a user-friendly interface with Streamlit, allowing users to upload a CSV file, select a variable to predict, and execute the tasks. Streamlit provides an interactive platform ideal for creating web apps around data science workflows, making it accessible for non-technical users.

    • Defining AI Agents with Specific Roles:

      I broke down the ML workflow into specialized tasks, each handled by a dedicated AI agent:

      • Data Preprocessing Agent: Focused on generating a coding plan for data cleaning, preprocessing, and feature engineering.
      • Model Building Agent: Tasked with designing the coding flow for model building, evaluation, and other related phases.
      • Validation Agent: Ensured the correctness of the entire coding flow, fixing any errors and finalizing the code.

    • Task Definitions and Execution:

      I defined the specific tasks that each agent must perform, ensuring a sequential and logical flow from data preprocessing to final validation:

      • Task Data Preprocess: The Data Preprocessing Agent analyzed the uploaded data and created a preprocessing plan.
      • Task Choose Model: The Model Building Agent took the output of the preprocessing task and designed the subsequent phases of the coding flow.
      • Task Check Code: The Validation Agent reviewed the entire code, fixed any issues, and ensured that the output was error-free.

    • Creating and Executing the Crew:

      I orchestrated the collaboration between the agents by creating a Crew object that included all the agents and tasks, configured for sequential execution. The crew's kickoff() method was then called to start the process.

    • Containerising the Application:

    • I started by creating a Dockerfile for the Streamlit app, which sets up the environment and installs the required dependencies.
    • I also pulled the Ollama image directly from Docker Hub to ensure the LLaMA model was ready for use.
    • I orchestrated the two services (Ollama and the Streamlit app) using Docker Compose.
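A docker-compose sketch for the two services; the service names, ports, volume name, and environment variable are assumptions, and the `app` service is built from the Streamlit Dockerfile described above:

```yaml
# docker-compose.yml (illustrative)
services:
  ollama:
    image: ollama/ollama          # pulled from Docker Hub
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama # persist pulled models across restarts

  app:
    build: .                      # Streamlit app Dockerfile in the repo root
    ports:
      - "8501:8501"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # reach Ollama by service name
    depends_on:
      - ollama

volumes:
  ollama_data:
```

Inside the Compose network, the Streamlit container reaches Ollama by the service name `ollama` rather than `localhost`, which is why the base URL is passed in as an environment variable.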