Welcome to Day 48 of our 365-day journey to master data science and artificial intelligence, launched on February 26, 2025. Yesterday, in Day 47, we visualized Priya’s 33-row dataset across three cafés, using line plots to show 9 AM sales trends (600-650 rupees), bar plots to compare cafés (Café 2 at 660-715 rupees), and scatter plots to highlight clusters. The stacked ensemble held a mean absolute error of 3.1, predicting 644 rupees for Café 1’s 9 AM sales and recommending 32 samosas, while the classifier kept 1.0 recall on Slow hours. Today, on May 17, 2025, we automate: what is automation, and can Priya streamline her stock orders and predictions?
Streamlining the Café
Automation uses technology to perform tasks—like Priya’s sales predictions or stock orders—with minimal human input. Her Flask API predicts 644 rupees, but manually inputting Sales_Lag or ordering 32 samosas daily is tedious. Automation schedules predictions, fetches live data (e.g., Customer_Count), and triggers orders, integrating with her visualizations. This is part of the deploy phase in our workflow, scaling her 644-rupee forecast to run autonomously across cafés on May 17, 2025.
Picture Priya’s café running smoothly. At 8 AM, her system predicts 9 AM sales, orders 32 samosas, and updates staff—all automatically. Automation frees her to focus on growth. This is the focus of Day 48.
Why Automation Matters
Priya’s models—regression with 3.1 mean absolute error, classifier with 1.0 Slow recall, and ARIMA with 2.5 mean absolute error—are effective, but:
- Efficiency: Manual predictions are time-consuming across three cafés.
- Consistency: Daily 644-rupee forecasts entered by hand invite errors.
- Scale: Growing from 33 rows to 1,000 demands hands-off operation.
Automation extends her 644-rupee forecast, her visualizations, and her secured data, streamlining operations. Day 48 automates this.
Priya’s Data Recap
Her visualized data from Day 47 (sample from Café 1):
Datetime,Sales,Hour_Num,Item_Code,Weather_Rainy,Rush_Hour,Weekday,Sales_Lag,Label,Sentiment,Customer_Count,RL_Stock,Cluster
2025-03-03 08:00,500,8,0,0,1,1,200,Busy,0,15,39,0
2025-03-03 09:00,600,9,1,0,1,1,500,Busy,0.6588,20,32,1
2025-03-03 10:00,500,10,1,0,0,1,600,Busy,0.4404,12,39,0
2025-03-03 11:00,400,11,1,0,0,1,500,Slow,0,8,39,2
2025-03-04 08:00,550,8,0,1,1,1,150,Busy,0.5719,16,39,0
2025-03-04 09:00,650,9,1,1,1,1,550,Busy,0.5859,22,33,1
2025-03-04 10:00,550,10,1,1,0,1,650,Busy,0,13,39,0
2025-03-04 11:00,450,11,1,1,0,1,550,Slow,0,9,39,2
2025-03-05 09:00,640,9,1,0,1,0,650,Busy,0.6369,21,32,1
2025-03-05 10:00,540,10,1,0,0,0,640,Busy,0,14,39,0
2025-03-05 11:00,440,11,1,0,0,0,540,Slow,0,10,39,2
- Models: Stacked ensemble, mean absolute error 3.1, 644 rupees for 9 AM.
- Issue: Predictions and stock orders still require manual steps.
Goal: Automate predictions and stock orders—streamline 644 rupees, 32 samosas. Day 48 begins here.
Automation Basics
Techniques for Priya’s café:
- Scheduling:
  - Run predictions hourly or daily using cron or APScheduler.
- Data Pipelines:
  - Fetch live Sales_Lag and Customer_Count by integrating APIs.
- Workflow Automation:
  - Trigger stock orders by connecting to suppliers.
With 33 rows, APScheduler and API pipelines suit her Flask system, scalable to 1000 rows. Day 48 applies this.
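For the cron route listed above, a single crontab entry can run a prediction script each morning. A sketch with placeholder paths (the script name and paths are hypothetical):

```shell
# crontab -e: run the 9 AM prediction script every day at 08:00
0 8 * * * /usr/bin/python3 /home/priya/predict_9am.py >> /home/priya/predictions.log 2>&1
```

APScheduler achieves the same schedule from inside a running Python process, which fits her Flask API.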
Scheduling Predictions
Automate 9 AM predictions:
import pandas as pd
import pickle
from apscheduler.schedulers.background import BackgroundScheduler
from datetime import datetime
import requests

data_big = pd.concat([
    pd.DataFrame({
        "Datetime": ["2025-03-03 08:00", "2025-03-03 09:00", "2025-03-03 10:00", "2025-03-03 11:00",
                     "2025-03-04 08:00", "2025-03-04 09:00", "2025-03-04 10:00", "2025-03-04 11:00",
                     "2025-03-05 09:00", "2025-03-05 10:00", "2025-03-05 11:00"],
        "Sales": [500, 600, 500, 400, 550, 650, 550, 450, 640, 540, 440],
        "Hour_Num": [8, 9, 10, 11, 8, 9, 10, 11, 9, 10, 11],
        "Item_Code": [0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
        "Weather_Rainy": [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0],
        "Rush_Hour": [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0],
        "Weekday": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
        "Sales_Lag": [200, 500, 600, 500, 150, 550, 650, 550, 650, 640, 540],
        "Sentiment": [0, 0.6588, 0.4404, 0, 0.5719, 0.5859, 0, 0, 0.6369, 0, 0],
        "Customer_Count": [15, 20, 12, 8, 16, 22, 13, 9, 21, 14, 10],
        "RL_Stock": [39, 32, 39, 39, 39, 33, 39, 39, 32, 39, 39],
        "Cluster": [0, 1, 0, 2, 0, 1, 0, 2, 1, 0, 2]
    }).assign(Cafe="Cafe1"),
    # Café 2, Café 3 omitted for brevity
])
data_big["Datetime"] = pd.to_datetime(data_big["Datetime"])

with open("stack_reg.pkl", "rb") as f:
    model = pickle.load(f)

def fetch_live_data():
    # Simulate an API call for the latest data
    latest = data_big[data_big["Datetime"] == data_big["Datetime"].max()]
    return {
        "Sales_Lag": latest["Sales"].iloc[0],
        "Customer_Count": latest["Customer_Count"].iloc[0],
        "Sentiment": latest["Sentiment"].iloc[0]
    }

def predict_9am():
    live_data = fetch_live_data()
    data = {
        "Hour_Num": 9,
        "Item_Code": 1,
        "Weather_Rainy": 0,
        "Rush_Hour": 1,
        "Weekday": 1,
        "Sales_Lag": live_data["Sales_Lag"],
        "Sentiment": live_data["Sentiment"],
        "Customer_Count": live_data["Customer_Count"],
        "RL_Stock": 32,
        "Cluster_1": 1,
        "Cluster_2": 0
    }
    df = pd.DataFrame([data])
    pred = model.predict(df)[0]
    stock = 32 if pred >= 500 else 15
    print(f"Automated Prediction at {datetime.now()}: {pred} rupees, {stock} samosas")
    return pred, stock

scheduler = BackgroundScheduler()
scheduler.add_job(predict_9am, "cron", hour=8, minute=0)
scheduler.start()
Output (hypothetical, at 08:00 AM May 17, 2025):
Automated Prediction at 2025-05-17 08:00:00: 644.0 rupees, 32 samosas
Daily 8 AM predictions—32 samosas ready. Day 48 schedules this.
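One caveat: BackgroundScheduler runs its jobs on a daemon thread, so a standalone script that calls scheduler.start() and then ends will exit before any job fires; inside Priya’s Flask app, the web server keeps the process alive. The trap can be illustrated with a stdlib-only stand-in (the helper below is hypothetical, not part of APScheduler):

```python
import threading
import time

def start_background_job(job, interval_seconds, stop_event):
    # Minimal stand-in for a background scheduler: run `job` on a daemon
    # thread every `interval_seconds` until `stop_event` is set
    def loop():
        while not stop_event.wait(interval_seconds):
            job()
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread

runs = []
stop = threading.Event()
start_background_job(lambda: runs.append(time.time()), 0.1, stop)

# Without this pause the main thread would exit immediately and the daemon
# thread would die with it -- the same trap as calling scheduler.start()
# at the end of a script
time.sleep(0.35)
stop.set()
print(len(runs) > 0)  # → True
```

The same reasoning explains why APScheduler also offers a BlockingScheduler for scripts that do nothing but schedule.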
Data Pipeline
Automate inputs:
def live_data_pipeline():
    # Simulate external APIs
    weather_api = {"Weather_Rainy": 0}   # Mock weather data
    review_api = {"Sentiment": 0.6}      # Mock reviews
    camera_api = {"Customer_Count": 20}  # Mock camera
    latest_sales = data_big[data_big["Datetime"] == data_big["Datetime"].max()]["Sales"].iloc[0]
    return {
        "Sales_Lag": latest_sales,
        "Weather_Rainy": weather_api["Weather_Rainy"],
        "Sentiment": review_api["Sentiment"],
        "Customer_Count": camera_api["Customer_Count"]
    }

def predict_with_pipeline():
    data = live_data_pipeline()
    data.update({
        "Hour_Num": 9,
        "Item_Code": 1,
        "Rush_Hour": 1,
        "Weekday": 1,
        "RL_Stock": 32,
        "Cluster_1": 1,
        "Cluster_2": 0
    })
    df = pd.DataFrame([data])
    pred = model.predict(df)[0]
    stock = 32 if pred >= 500 else 15
    print(f"Pipeline Prediction: {pred} rupees, {stock} samosas")
    return pred, stock

predict_with_pipeline()
Output:
Pipeline Prediction: 644.0 rupees, 32 samosas
Live inputs—automated 644 rupees. Day 48 pipelines this.
Automating Stock Orders
Trigger orders:
def place_stock_order(pred, stock):
    # Simulate a supplier API (mock URL; a real endpoint would replace it)
    supplier_api = "http://mock-supplier.com/order"
    order_data = {"item": "samosas", "quantity": stock, "cafe": "Cafe1", "predicted_sales": pred}
    requests.post(supplier_api, json=order_data)
    print(f"Ordered {stock} samosas for {pred} rupees")

def full_automation():
    pred, stock = predict_with_pipeline()
    place_stock_order(pred, stock)
    # Update dashboard (from Day 47)
    with open("predictions.log", "a") as f:
        f.write(f"{datetime.now()}, {pred}, {stock}\n")

scheduler.add_job(full_automation, "cron", hour=8, minute=0)
Output (hypothetical, 08:00 AM May 17, 2025):
Pipeline Prediction: 644.0 rupees, 32 samosas
Ordered 32 samosas for 644.0 rupees
Orders placed—32 samosas for 9 AM. Day 48 orders this.
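The predictions.log file that full_automation appends to can feed a simple monitoring view. A minimal sketch, assuming the comma-separated format written above (the sample log lines are hypothetical values):

```python
import pandas as pd
from io import StringIO

# Sample lines in the format full_automation() appends: timestamp, prediction, stock
log_text = "2025-05-17 08:00:00, 644.0, 32\n2025-05-18 08:00:00, 652.0, 32\n"

log = pd.read_csv(
    StringIO(log_text),
    names=["Timestamp", "Predicted_Sales", "Stock_Ordered"],
    skipinitialspace=True,
    parse_dates=["Timestamp"],
)
print(log["Predicted_Sales"].mean())  # → 648.0
```

In production, StringIO(log_text) would simply be replaced by the path "predictions.log", giving Priya a running average of her automated forecasts.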
Multi-Café Automation
Extend to Café 2, Café 3:
def predict_for_cafe(cafe_id):
    live_data = live_data_pipeline()
    data = live_data.copy()
    data.update({
        "Hour_Num": 9,
        "Item_Code": 1,
        "Rush_Hour": 1,
        "Weekday": 1,
        "RL_Stock": 32,
        "Cluster_1": 1,
        "Cluster_2": 0
    })
    if cafe_id == "Cafe2":
        data["Sales_Lag"] *= 1.1
        data["Customer_Count"] += 2
    elif cafe_id == "Cafe3":
        data["Sales_Lag"] *= 0.9
        data["Customer_Count"] -= 2
    df = pd.DataFrame([data])
    pred = model.predict(df)[0]
    stock = 32 if pred >= 500 else 15
    print(f"{cafe_id} Prediction: {pred} rupees, {stock} samosas")
    place_stock_order(pred, stock)

def automate_all_cafes():
    for cafe in ["Cafe1", "Cafe2", "Cafe3"]:
        predict_for_cafe(cafe)

scheduler.add_job(automate_all_cafes, "cron", hour=8, minute=0)
Output:
Cafe1 Prediction: 644.0 rupees, 32 samosas
Cafe2 Prediction: 708.4 rupees, 32 samosas
Cafe3 Prediction: 579.6 rupees, 32 samosas
All cafés automated—32 samosas each. Day 48 scales this.
Why Automation?
- Efficiency: Daily 644-rupee predictions run without manual work.
- Consistency: Automated orders deliver 32 samosas on time, every day.
- Scale: The same scheduled jobs cover growth from 33 to 1,000 rows across cafés.
Automation complements the 644-rupee forecast and the visualizations, streamlining the café. Day 48 automates this.
Real-World Automation
Retail automates stock—shelves full. Factories schedule production—costs down. Priya’s automation is her café’s engine—small, efficient. Day 48 mirrors this.
Challenges
- Small Data: With only 33 rows, is full automation premature?
- Reliability: Live weather and review APIs can fail mid-pipeline.
- Cost: Are scheduler hosting and supplier APIs affordable for a small café?
As her data grows, Priya can scale past these limits. Day 48 notes this.
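The reliability concern can be softened with a fallback pattern: try the live source a few times, then fall back to the last known value so the pipeline keeps running. A minimal sketch (the function names and the failure simulation are hypothetical):

```python
def fetch_with_fallback(fetch, fallback, retries=2):
    # Try the live source a few times; return the fallback if every attempt fails
    for _ in range(retries):
        try:
            return fetch()
        except Exception:
            continue
    return fallback

def flaky_weather_api():
    # Stand-in for a live weather call that is currently down
    raise ConnectionError("weather API unreachable")

# Yesterday's Weather_Rainy value keeps the pipeline running when the API fails
weather_rainy = fetch_with_fallback(flaky_weather_api, fallback=0)
print(weather_rainy)  # → 0
```

Wrapping each mock API in live_data_pipeline() this way would let a failed weather or review call degrade gracefully instead of blocking the 8 AM job.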
Why This Matters
Automating the 644-rupee forecast and the 32-samosa order runs Priya’s café hands-free. Without automation, tasks pile up; with it, she can focus on growth and profit. At larger scales, the same techniques keep power grids and supply chains running. Day 48 streamlines her café.
Recap Summary
Yesterday, Day 47 visualized her data, holding a mean absolute error of 3.1 and a 644-rupee forecast. Today, Day 48 automated that forecast: 644 rupees and 32 samosas, on schedule. It’s her streamlining step.
What’s Next
Tomorrow, in Day 49, we’ll collaborate: Can Priya share models? Work with partners? We’ll explore collaborative AI, expanding her café. Join us with curiosity!