Continuous Learning & Feedback System
How Olympus Cloud AI systems learn and improve from user interactions through distributed feedback loops.
Overview
Olympus Cloud uses a distributed feedback architecture rather than a centralized learning platform. Feedback signals are collected at the point of interaction and flow into two complementary storage systems:
- ClickHouse -- Persistent analytics storage for recommendation feedback, used for long-term performance measurement and model retraining decisions.
- Redis -- Short-lived learning state (24-hour TTL) for the LearningEngine, which adjusts AI suggestion behavior in near-real-time on a per-tenant basis.
Each feedback channel is purpose-built for its domain. Recommendation feedback is persisted to ClickHouse for durable analytics. Conversational AI feedback (e.g., thumbs-up/down on assistant responses) is recorded to Cloud Logging for observability, not to a database.
There is no single "learning service." Feedback collection and learning are co-located with the services that generate AI outputs, following the same data ownership boundaries as the rest of the platform.
Architecture
Olympus Cloud Continuous Learning
┌─────────────────────────────────────────────────────────────────────┐
│ USER INTERACTIONS │
│ POS App │ Customer App │ Kiosk │ Cockpit │ AI Assistant │
└─────┬────────────┬──────────────┬──────────┬──────────────┬────────┘
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌──────────────────────────────┐ │ ┌──────────────────────────────┐
│ Recommendation Feedback │ │ │ AI Assistant Feedback │
│ POST /recommendations/ │ │ │ (thumbs-up/down, ratings) │
│ feedback │ │ │ │
└──────────┬───────────────────┘ │ └──────────┬───────────────────┘
│ │ │
▼ │ ▼
┌──────────────────────┐ │ ┌──────────────────────┐
│ ClickHouse │ │ │ Cloud Logging │
│ recommendation_ │ │ │ (structured JSON) │
│ feedback table │ │ │ │
└──────────┬─────────────┘ │ └───────────────────────┘
│ │
▼ ▼
┌──────────────────────────────────────────────────────────────┐
│ LearningEngine (Redis) │
│ Per-tenant learning state │ 24h TTL │ Batch processing │
└──────────────────────────────┬───────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────┐
│ Improved Recommendations & Suggestions │
│ RecommendationEngine │ ContextEnricher │ PolicyFilter │
└──────────────────────────────────────────────────────────────┘
Recommendation Feedback Loop
The recommendation engine uses a closed feedback loop: recommendations are served, user interactions are recorded, and acceptance metrics inform future scoring.
How It Works
- The RecommendationEngine generates scored suggestions using collaborative filtering, content-based matching, and contextual enrichment.
- The client application presents recommendations to the user (staff or customer).
- When the user accepts or dismisses a recommendation, the client sends a feedback event via the REST API.
- Feedback is persisted to the ClickHouse recommendation_feedback table.
- Performance metrics (acceptance rates, revenue impact) are computed from this table and used to evaluate recommendation quality over time.
Feedback API
POST /recommendations/feedback
Content-Type: application/json
{
"tenant_id": "restaurant-1",
"location_id": "loc_123",
"order_id": "ord_789",
"item_id": "app-001",
"recommendation_type": "cross_sell",
"was_accepted": true,
"position": 0,
"experiment_id": "exp_abc",
"experiment_variant": "variant_b"
}
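The payload above can be assembled and validated client-side before posting. The sketch below is illustrative rather than part of the platform API (the build_feedback_event helper and its validation rules are assumptions); the field names and allowed recommendation types match the schema documented on this page:

```python
# Recommendation types listed in the feedback schema below.
VALID_TYPES = {"upsell", "cross_sell", "substitute", "popular", "personalized"}

def build_feedback_event(tenant_id, location_id, order_id, item_id,
                         recommendation_type, was_accepted, position,
                         experiment_id=None, experiment_variant=None):
    """Assemble a feedback event for POST /recommendations/feedback."""
    if recommendation_type not in VALID_TYPES:
        raise ValueError(f"unknown recommendation_type: {recommendation_type}")
    if position < 0:
        raise ValueError("position is 0-indexed and must be non-negative")
    return {
        "tenant_id": tenant_id,
        "location_id": location_id,
        "order_id": order_id,
        "item_id": item_id,
        "recommendation_type": recommendation_type,
        "was_accepted": was_accepted,
        "position": position,
        "experiment_id": experiment_id,
        "experiment_variant": experiment_variant,
    }

event = build_feedback_event(
    tenant_id="restaurant-1", location_id="loc_123", order_id="ord_789",
    item_id="app-001", recommendation_type="cross_sell",
    was_accepted=True, position=0,
    experiment_id="exp_abc", experiment_variant="variant_b",
)
# The client would then POST this dict as JSON to /recommendations/feedback.
```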
Performance Metrics API
Query aggregated recommendation performance:
GET /recommendations/performance?tenant_id=restaurant-1&location_id=loc_123&days=30
Returns acceptance rates by recommendation type, total recommendation count, and estimated revenue attributed to accepted recommendations.
Tenant Learning Engine
The LearningEngine class provides near-real-time learning from operator feedback on AI-generated suggestions (pricing, staffing, inventory, and other operational recommendations).
Location
| Component | Path |
|---|---|
| LearningEngine | backend/python/app/services/ai/learning_engine.py |
| Tests | backend/python/tests/test_learning_engine.py |
| Integration | backend/python/app/services/events/restaurant_processor.py |
How It Works
The LearningEngine operates on a buffer-and-batch model:
- Buffer -- Each piece of feedback is appended to an in-memory buffer with its suggestion_id, action (accepted/rejected/modified), context, and timestamp.
- Batch trigger -- When the buffer reaches 100 items, batch processing is triggered automatically.
- Analysis -- Feedback is grouped by suggestion type (e.g., pricing, staffing, inventory). For each type:
  - Acceptance rate is calculated. If it falls below 30%, rejection pattern analysis runs.
  - Modification patterns are extracted if users frequently modify rather than accept suggestions outright.
- Storage -- Learning signals are persisted to Redis with a 24-hour TTL under keys like learning_engine:rejections:{type} and learning_engine:modifications:{type}.
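The buffer-and-batch flow can be sketched as follows. This is an illustrative reconstruction, not the actual LearningEngine implementation; the batch size of 100 and the 30% threshold come from the description above, and a plain dict stands in for Redis:

```python
from collections import defaultdict

BATCH_SIZE = 100        # batch trigger from the description above
LOW_ACCEPTANCE = 0.30   # below this rate, rejection analysis runs

class LearningBufferSketch:
    def __init__(self):
        self.buffer = []
        self.signals = {}   # stands in for Redis in this sketch

    def record(self, suggestion_id, action, context):
        # suggestion_id is formatted "{type}:{id}", e.g. "pricing:discount-123"
        self.buffer.append({"suggestion_id": suggestion_id,
                            "action": action, "context": context})
        if len(self.buffer) >= BATCH_SIZE:
            self._process_batch()

    def _process_batch(self):
        # Group buffered feedback by suggestion type.
        by_type = defaultdict(list)
        for fb in self.buffer:
            by_type[fb["suggestion_id"].split(":", 1)[0]].append(fb)
        for stype, items in by_type.items():
            accepted = sum(1 for fb in items if fb["action"] == "accepted")
            rate = accepted / len(items)
            if rate < LOW_ACCEPTANCE:
                # The real engine analyzes rejection patterns and writes
                # to Redis under learning_engine:rejections:{type} (24h TTL).
                self.signals[f"learning_engine:rejections:{stype}"] = {
                    "acceptance_rate": rate, "samples": len(items)}
        self.buffer.clear()
```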
Feedback Actions
| Action | Meaning | Triggers |
|---|---|---|
| accepted | User adopted the suggestion as-is | Acceptance rate calculation |
| rejected | User dismissed the suggestion | Rejection pattern analysis (if rate is below 30%) |
| modified | User adjusted the suggestion before accepting | Modification learning |
Rejection Pattern Analysis
When acceptance rate drops below 30% for a suggestion type, the engine analyzes rejection context across four dimensions:
| Dimension | Redis Key Example | What It Reveals |
|---|---|---|
| Time of day | patterns.time_of_day | Suggestions rejected during specific hours |
| Day of week | patterns.day_of_week | Day-specific rejection trends |
| User role | patterns.user_role | Role-based preference differences |
| Business state | patterns.business_state | Rejections correlated with busy/slow periods |
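The four-dimension analysis amounts to a counting pass over rejection contexts. The rejection_patterns helper below is a sketch, not the engine's actual code; the context field names (hour, day, role, state) follow the feedback context shown in this page's code example:

```python
from collections import Counter

# Maps feedback-context fields to the four pattern dimensions above.
DIMENSIONS = {"hour": "time_of_day", "day": "day_of_week",
              "role": "user_role", "state": "business_state"}

def rejection_patterns(rejection_contexts):
    """Count rejections along each dimension (a sketch of the analysis)."""
    patterns = {dim: Counter() for dim in DIMENSIONS.values()}
    for ctx in rejection_contexts:
        for field, dim in DIMENSIONS.items():
            if field in ctx:
                patterns[dim][ctx[field]] += 1
    return {dim: dict(counts) for dim, counts in patterns.items()}

patterns = rejection_patterns([
    {"hour": 18, "day": "Friday", "role": "manager", "state": "busy"},
    {"hour": 18, "day": "Saturday", "role": "server", "state": "busy"},
])
# A spike in one bucket (e.g. hour 18) suggests suggestions are being
# rejected under a specific condition rather than uniformly.
```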
Code Example
from app.services.ai.learning_engine import LearningEngine
from app.core.redis import RedisManager
# Initialize with Redis connection
engine = LearningEngine(redis=redis_manager)
# Record operator feedback on a suggestion
await engine.record_suggestion_feedback(
suggestion_id="pricing:discount-123",
action="rejected",
context={
"hour": 18,
"day": "Friday",
"role": "manager",
"state": "busy",
},
modification=None,
)
Redis Key Structure
learning_engine:rejections:{suggestion_type}
→ { patterns: { time_of_day: {...}, day_of_week: {...}, ... }, samples: N, generated_at: "..." }
learning_engine:modifications:{suggestion_type}
→ { status: "pending_training", samples: N, generated_at: "..." }
All keys expire after 86,400 seconds (24 hours), ensuring the learning state reflects recent operator behavior rather than stale historical patterns.
The 24-hour TTL means the system naturally adapts to changing operator preferences without manual resets. If a restaurant changes management style or seasonal patterns shift, old learning signals expire automatically.
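Writing a learning signal with that expiry is a single setex call in redis-py, which stores the value and sets the TTL atomically. The key and payload helpers below are illustrative; only the key format, payload shape, and 86,400-second TTL come from this page:

```python
import json
from datetime import datetime, timezone

TTL_SECONDS = 24 * 60 * 60  # 86,400 s: keys expire after one day

def learning_key(kind, suggestion_type):
    """Build a key like learning_engine:rejections:pricing."""
    # kind is "rejections" or "modifications"
    return f"learning_engine:{kind}:{suggestion_type}"

def signal_payload(patterns, samples):
    """Serialize a learning signal in the documented payload shape."""
    return json.dumps({
        "patterns": patterns,
        "samples": samples,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

key = learning_key("rejections", "pricing")
value = signal_payload({"time_of_day": {"18": 12}}, samples=40)
# With a live redis-py client the write would be:
#   redis_client.setex(key, TTL_SECONDS, value)
```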
Feedback Event Schema
Recommendation Feedback (ClickHouse)
Feedback on menu item recommendations is persisted to the ClickHouse recommendation_feedback table:
| Field | Type | Description |
|---|---|---|
| tenant_id | String | Tenant identifier |
| location_id | String | Location identifier |
| order_id | String | Associated order |
| item_id | String | Recommended menu item |
| recommendation_type | String | One of: upsell, cross_sell, substitute, popular, personalized |
| was_accepted | Boolean | Whether the user added the item to their order |
| position | Int | Position in the recommendation list (0-indexed) |
| experiment_id | String (nullable) | A/B test experiment ID |
| experiment_variant | String (nullable) | Assigned variant |
| feedback_time | DateTime | When the feedback was recorded |
LearningEngine Feedback (Redis)
Operational suggestion feedback flows through the LearningEngine buffer:
| Field | Type | Description |
|---|---|---|
| suggestion_id | String | Format: {type}:{id} (e.g., pricing:discount-123) |
| action | String | One of: accepted, rejected, modified |
| context | Dict | Contextual metadata (hour, day, role, business state) |
| modification | Dict (nullable) | What the user changed if action is modified |
| timestamp | DateTime | UTC timestamp of the feedback |
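The record above can be modeled as a small dataclass. This is a sketch of the documented schema, not the engine's actual internal type; the validation rules are assumptions that follow the field descriptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

ACTIONS = {"accepted", "rejected", "modified"}

@dataclass
class SuggestionFeedback:
    suggestion_id: str            # "{type}:{id}", e.g. "pricing:discount-123"
    action: str                   # accepted | rejected | modified
    context: dict                 # hour, day, role, business state
    modification: Optional[dict] = None
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.action not in ACTIONS:
            raise ValueError(f"invalid action: {self.action}")
        if self.action == "modified" and self.modification is None:
            raise ValueError("modified feedback must include the modification")

    @property
    def suggestion_type(self):
        """The type prefix used to group feedback during batch analysis."""
        return self.suggestion_id.split(":", 1)[0]
```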
ClickHouse Analytics
Recommendation feedback in ClickHouse enables long-term performance analysis and model evaluation.
Acceptance Rate by Type
SELECT
recommendation_type,
count(*) AS total_recommendations,
sum(CASE WHEN was_accepted THEN 1 ELSE 0 END) AS accepted,
avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate,
  avgIf(position, was_accepted) AS avg_position_when_accepted
FROM recommendation_feedback
WHERE tenant_id = %(tenant_id)s
AND location_id = %(location_id)s
AND toDate(feedback_time) >= today() - interval 30 day
GROUP BY recommendation_type
Revenue Impact from Accepted Recommendations
WITH accepted_recs AS (
SELECT order_id, item_id
FROM recommendation_feedback
WHERE tenant_id = %(tenant_id)s
AND location_id = %(location_id)s
AND was_accepted = true
AND toDate(feedback_time) >= today() - interval 30 day
)
SELECT
sum(oi.price * oi.quantity) AS recommended_revenue
FROM accepted_recs ar
JOIN order_items oi
ON ar.order_id = oi.order_id AND ar.item_id = oi.item_id
Data Flow
The ClickHouse analytics data follows the standard Olympus Cloud OLAP pipeline:
Rust Commerce Service → Cloud Spanner (OLTP)
│
▼
GCP Pub/Sub
│
▼
Python ML Service → ClickHouse Cloud (OLAP)
│
▼
recommendation_feedback table
Python services do not read from Cloud Spanner directly. All analytics data flows through Pub/Sub into ClickHouse, following the Architecture 3.0 data ownership model. The RecommendationFeatureStore reads exclusively from ClickHouse.
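A subscriber in the Python ML service would decode each Pub/Sub message and insert a row into recommendation_feedback. The transform below is a sketch: the message envelope and the insert call are assumptions, and only the column set comes from the schema on this page:

```python
import json
from datetime import datetime, timezone

# Column order for the recommendation_feedback table (from the schema above).
COLUMNS = ("tenant_id", "location_id", "order_id", "item_id",
           "recommendation_type", "was_accepted", "position",
           "experiment_id", "experiment_variant", "feedback_time")

def message_to_row(message_data: bytes):
    """Turn a Pub/Sub message body into a recommendation_feedback row."""
    event = json.loads(message_data)
    # Nullable / server-filled fields default when absent.
    event.setdefault("experiment_id", None)
    event.setdefault("experiment_variant", None)
    event.setdefault("feedback_time",
                     datetime.now(timezone.utc).isoformat())
    return tuple(event[col] for col in COLUMNS)

row = message_to_row(json.dumps({
    "tenant_id": "restaurant-1", "location_id": "loc_123",
    "order_id": "ord_789", "item_id": "app-001",
    "recommendation_type": "cross_sell", "was_accepted": True,
    "position": 0,
}).encode())
# A ClickHouse client would then batch rows and run something like:
#   client.insert("recommendation_feedback", rows, column_names=COLUMNS)
```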
AI Assistant Feedback
Conversational AI feedback (for agents like Maximus, Minerva, and the Support Agent) uses a different path than recommendation feedback.
How It Works
- Users can provide thumbs-up/thumbs-down or text feedback on AI assistant responses.
- This feedback is recorded as structured JSON log entries in Google Cloud Logging.
- Feedback is not persisted to a database table. It is available for analysis through Cloud Logging queries and log-based metrics.
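Recording assistant feedback then amounts to emitting one structured log line; on Cloud Run, JSON written to stdout is ingested as jsonPayload. The helper below is a sketch in which the agent, session_id, and comment fields are assumptions, while event_type and rating match the Cloud Logging filter used on this page:

```python
import json
import sys
from datetime import datetime, timezone

def log_assistant_feedback(agent, session_id, rating, comment=None):
    """Emit an ai_feedback event as one JSON line (Cloud Run -> jsonPayload)."""
    if rating not in ("positive", "negative"):
        raise ValueError("rating must be 'positive' or 'negative'")
    entry = {
        "event_type": "ai_feedback",   # matched by the Cloud Logging filter
        "agent": agent,
        "session_id": session_id,
        "rating": rating,
        "comment": comment,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    sys.stdout.write(json.dumps(entry) + "\n")
    return entry

entry = log_assistant_feedback("maximus", "sess_42", "negative",
                               comment="answer cited the wrong menu")
```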
Why Cloud Logging
| Concern | Approach |
|---|---|
| Storage | Cloud Logging retains logs per retention policy (default 30 days) |
| Querying | Log Explorer and log-based metrics in Cloud Monitoring |
| Alerting | Log-based alerts can fire on feedback quality drops |
| Cost | No additional database cost for low-volume feedback signals |
Querying Assistant Feedback
# View recent negative feedback in Cloud Logging
gcloud logging read \
'resource.type="cloud_run_revision" AND jsonPayload.event_type="ai_feedback" AND jsonPayload.rating="negative"' \
--project=olympuscloud-dev \
--limit=50
If assistant feedback volume grows to a level where structured querying becomes important, a future enhancement could persist these events to ClickHouse. Currently the volume does not justify the added complexity.
A/B Testing Integration
The recommendation engine supports experiment-based feedback tracking for controlled evaluation of algorithm changes.
How Experiments Work
- Create an experiment with multiple variants (e.g., control vs variant_b with different scoring weights).
- Assign sessions to variants via POST /recommendations/experiments/{id}/assign.
- Recommendations include experiment_id and experiment_variant in responses.
- Feedback records carry these fields, enabling per-variant acceptance rate analysis.
- Query experiment results via GET /recommendations/experiments?tenant_id=...&status=running.
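Assignment itself happens server-side via the assign endpoint; a common way to implement such an endpoint is deterministic hash bucketing, sketched below purely as an assumption (the platform's actual assignment logic is not documented here). Hashing experiment and session IDs together gives a stable variant per session without storing state:

```python
import hashlib

def assign_variant(experiment_id, session_id, variants):
    """Deterministically bucket a session into one variant (a sketch)."""
    digest = hashlib.sha256(f"{experiment_id}:{session_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % len(variants)
    return variants[bucket]

variants = ["control", "variant_b"]
v1 = assign_variant("exp_abc", "sess_42", variants)
v2 = assign_variant("exp_abc", "sess_42", variants)
# The same session always lands in the same variant; feedback events
# then carry experiment_id and the assigned experiment_variant.
```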
Experiment Feedback Analysis
Because experiment_id and experiment_variant are stored alongside every feedback record in ClickHouse, you can compute per-variant metrics:
SELECT
experiment_variant,
count(*) AS total,
avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate
FROM recommendation_feedback
WHERE experiment_id = 'exp_abc'
GROUP BY experiment_variant
Related Documentation
- ACP AI Router -- Model tier selection and cost optimization for AI inference
- AI Agent RAG Configuration -- Knowledge base setup for AI agents
- Agent Contexts & Personas -- System prompts and persona definitions for each AI agent