Lab 1: Hands-on: Running an ML Service in a Container
Task 1: Create a directory named docker-ml-scenario and move into it:
65456@labuser/~$ mkdir docker-ml-scenario
65456@labuser/~$ cd docker-ml-scenario
Task 2: Create the three required files. After this step, the docker-ml-scenario folder should contain these three files:
65456@labuser/~$ touch app.py requirements.txt Dockerfile
Task 3: Add the required packages to the requirements.txt file.
65456@labuser/~$ cat <<EOF >>requirements.txt
Flask
scikit-learn==1.0.2
EOF
# Check whether the file is updated.
65456@labuser/~$ cat requirements.txt
Flask
scikit-learn==1.0.2
Task 4: Write a Flask app with a simple ML model in the app.py file.
65456@labuser/~$ cat <<EOF >>app.py
import joblib
from flask import Flask, request, jsonify
from sklearn.linear_model import LogisticRegression
import numpy as np
# Create the Flask app
app = Flask(__name__)
# == ML MODEL SETUP (RUNS ONCE AT STARTUP) ==
# In a real-world scenario, you would load a pre-trained model file.
# For this lab, we'll train and "save" a dummy model right here.
# 1. Create a dummy dataset
X_train = np.array([[1], [2], [3], [4], [10], [11], [12], [13]])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1]) # Two classes: 0 and 1
# 2. Train a simple Logistic Regression model
model = LogisticRegression()
model.fit(X_train, y_train)
print("Model trained successfully!")
# == API ENDPOINTS ==
@app.route("/")
def index():
return "<h1>ML Model Deployment API is running!</h1>Send a POST request to /predict
"
@app.route("/predict", methods=['POST'])
def predict():
"""
Receives a POST request with JSON data and returns a prediction.
Request body should look like: {"data": [5]}
"""
try:
# Get the json from the request
json_payload = request.get_json()
# The user's input data
input_data = np.array(json_payload['data']).reshape(-1, 1)
# Make a prediction
prediction = model.predict(input_data)
# Return the result as JSON
return jsonify({
'input': json_payload['data'],
'prediction': int(prediction[0])
})
except Exception as e:
return jsonify({'error': str(e)}), 400
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
EOF
Task 5: Create a Dockerfile with the given code.
FROM python:3.7
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
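Note: requirements.txt is copied and installed before the rest of the source (COPY . .) so that Docker can cache the dependency layer; rebuilding after a code-only change then skips the pip install step.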
Task 6: Build the Docker image using the docker build command.
docker build -t ml-app .
Task 7: Run the Docker image.
docker run -p 5000:5000 ml-app
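Once the container is running, you can test the API from another terminal. A sample request to the /predict endpoint (with this dummy model, an input of 5 should fall in class 0):
curl -X POST http://localhost:5000/predict \
     -H "Content-Type: application/json" \
     -d '{"data": [5]}'
# Expected response:
# {"input": [5], "prediction": 0}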
Lab 2: Running a Model as a Service
Step 1: Install Required Packages
- Run the following commands in your terminal to install pip and the required Python packages:
curl https://bootstrap.pypa.io/pip/3.6/get-pip.py -o get-pip.py
sudo apt-get install python3-distutils -y
python3 get-pip.py --force-reinstall
python3 -m pip install --user numpy scikit-learn flask flask-restful
Step 2: Generate the Model
- Create the project folder and move into it:
mkdir SampleProject
cd SampleProject
- Create an empty folder named models to store the generated model:
mkdir models
- Create a file model_generator.py with the following content:
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import pickle

iris = datasets.load_iris()
validation_size = 0.20
seed = 100
X_train, X_test, Y_train, Y_test = train_test_split(iris.data,
                                                    iris.target,
                                                    test_size=validation_size,
                                                    random_state=seed)
knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)
with open('models/iris_classifier_model.pk', 'wb') as model_file:
    pickle.dump(knn, model_file)
- Run the script to create the required model and store it in the models folder:
python3 model_generator.py
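Optionally, sanity-check the pickled model before wiring it into the API; a minimal check using the same Setosa measurements as the curl test later in this lab:
python3 - <<EOF
import pickle
# Load the model saved by model_generator.py and classify one sample
with open('models/iris_classifier_model.pk', 'rb') as model_file:
    model = pickle.load(model_file)
# Features in dataset order: sepal length, sepal width, petal length, petal width
print(model.predict([[5.1, 3.5, 1.4, 0.3]]))  # expected output: [0] (Iris-Setosa)
EOF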
Step 3: Develop a RESTful API using Flask
- Create a file iris_classifier.py and add the following API implementation:
from flask import Flask, request
from flask_restful import Resource, Api
import pickle

app = Flask(__name__)
api = Api(app)

def classify(petal_len, petal_wd, sepal_len, sepal_wd):
    species = ['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica']
    with open('models/iris_classifier_model.pk', 'rb') as model_file:
        model = pickle.load(model_file)
    species_class = int(model.predict([[petal_len, petal_wd, sepal_len, sepal_wd]])[0])
    return species[species_class]

class IrisPredict(Resource):
    def get(self):
        sl = float(request.args.get('sl'))
        sw = float(request.args.get('sw'))
        pl = float(request.args.get('pl'))
        pw = float(request.args.get('pw'))
        result = classify(sl, sw, pl, pw)
        return {'sepal_length': sl,
                'sepal_width': sw,
                'petal_length': pl,
                'petal_width': pw,
                'species': result}

api.add_resource(IrisPredict, '/classify/')
Step 4: Run the Model as a Service
- Set the FLASK_APP environment variable:
export FLASK_APP=iris_classifier.py
- Set the other required environment variables:
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
- Start the server and expose the API as a service on port 8000:
python3 -m flask run --host=0.0.0.0 --port=8000
- Open a new terminal and try to access the API using the /classify/ endpoint:
curl "http://0.0.0.0:8000/classify/?sl=5.1&sw=3.5&pl=1.4&pw=0.3"
Example Output
A successful call returns JSON similar to:
{
  "sepal_length": 5.1,
  "sepal_width": 3.5,
  "petal_length": 1.4,
  "petal_width": 0.3,
  "species": "Iris-Setosa"
}
Notes and Tips
- If you see an error related to packages, ensure flask and flask-restful are installed for the same Python interpreter you use to run the app (here, python3).
- You can modify KNN parameters (e.g., n_neighbors) if you want to experiment.
- The classify function above names its parameters petal-first (petal_len, petal_wd, sepal_len, sepal_wd), but the REST endpoint passes the values positionally as sl, sw, pl, pw (sepal first, then petal), which matches the feature order the model was trained on; only the parameter names are misleading.
- If you run this on a remote VM, access the API via the machine's IP address instead of 0.0.0.0.
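One further optional refinement: classify reopens the pickled model on every request, which is fine for the lab but wasteful in practice. A minimal sketch of loading it once at import time instead:
# Load the model once when the module is imported, instead of once per request
with open('models/iris_classifier_model.pk', 'rb') as model_file:
    MODEL = pickle.load(model_file)

def classify(petal_len, petal_wd, sepal_len, sepal_wd):
    species = ['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica']
    return species[int(MODEL.predict([[petal_len, petal_wd, sepal_len, sepal_wd]])[0])]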
Optional: Alternate API (sepal-first classify signature)
If you prefer keeping names aligned with the Iris dataset (sepal first, then petal), here is an equivalent variant you can use in iris_classifier.py:
from flask import Flask, request
from flask_restful import Resource, Api
import pickle

app = Flask(__name__)
api = Api(app)

def classify(sl, sw, pl, pw):
    species = ['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica']
    with open('models/iris_classifier_model.pk', 'rb') as model_file:
        model = pickle.load(model_file)
    # The model expects features in the order: sepal length, sepal width, petal length, petal width
    species_class = int(model.predict([[sl, sw, pl, pw]])[0])
    return species[species_class]

class IrisPredict(Resource):
    def get(self):
        sl = float(request.args.get('sl'))
        sw = float(request.args.get('sw'))
        pl = float(request.args.get('pl'))
        pw = float(request.args.get('pw'))
        result = classify(sl, sw, pl, pw)
        return {
            'sepal_length': sl,
            'sepal_width': sw,
            'petal_length': pl,
            'petal_width': pw,
            'species': result
        }

api.add_resource(IrisPredict, '/classify/')
Lab 3: Scaling an ML Service Using Kubernetes
Minikube Hands-on: Deploying iris-classifier-site
Learn how to install Minikube, start a Kubernetes cluster, deploy the iris-classifier-site app, expose it via NodePort, and test the endpoint.
Prerequisites: Install Docker and Minikube
sudo apt install docker.io -y
sudo systemctl unmask docker
sudo service docker restart
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
rm minikube_latest_amd64.deb
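You can confirm the installation before starting the cluster:
minikube version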
Step 1: Start the Minikube Cluster
minikube start
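Confirm the cluster came up cleanly:
minikube status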
Step 2: Deploy the iris-classifier-site Application
Create Deployment YAML
echo "apiVersion: apps/v1
kind: Deployment
metadata:
name: iris-classifier-site
labels:
app: web
spec:
replicas : 1
selector :
matchLabels:
app : iris-classifier
template :
metadata :
labels : { app : iris-classifier }
spec:
containers:
- name: mlexample
image: gpcplay/playimages:mlexample
ports:
- containerPort: 5000" > iris-classifier-deployment.yaml
Apply Deployment
kubectl apply -f iris-classifier-deployment.yaml
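You can wait for the rollout to finish (the first run may take a while because the container image has to be pulled):
kubectl rollout status deployment/iris-classifier-site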
Step 3: Create Service to Expose App
Create Service YAML
echo "apiVersion: v1
kind: Service
metadata:
name: iris-classifier-svc
spec:
selector:
app: iris-classifier
type: NodePort
ports:
- port: 5000
nodePort : 30800
targetPort: 5000" > iris-classifier-service.yaml
Apply Service
kubectl apply -f iris-classifier-service.yaml
Step 4: Verify and Test
Check Pods
kubectl get pods
Check Services
kubectl get services
Set NODE_PORT
export NODE_PORT=$(kubectl get services/iris-classifier-svc -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT
Test Endpoint
curl "http://$(minikube ip):$NODE_PORT/classify/?sl=5.1&sw=3.5&pl=1.4&pw=0.3"
Notes & Tips
- If Minikube fails to start, use:
minikube start --driver=docker - Check pod issues:
kubectl describe pod <pod-name> - View logs:
kubectl logs -l app=iris-classifier
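To actually scale the service, increase the replica count; the NodePort Service load-balances across all pods whose labels match its selector:
kubectl scale deployment iris-classifier-site --replicas=3
kubectl get pods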