
Continuous Learning & Feedback System

How Olympus Cloud AI systems learn and improve from user interactions through distributed feedback loops.

Overview

Olympus Cloud uses a distributed feedback architecture rather than a centralized learning platform. Feedback signals are collected at the point of interaction and flow into two complementary storage systems:

  • ClickHouse -- Persistent analytics storage for recommendation feedback, used for long-term performance measurement and model retraining decisions.
  • Redis -- Short-lived learning state (24-hour TTL) for the LearningEngine, which adjusts AI suggestion behavior in near-real-time on a per-tenant basis.

Each feedback channel is purpose-built for its domain. Recommendation feedback is persisted to ClickHouse for durable analytics. Conversational AI feedback (e.g., thumbs-up/down on assistant responses) is recorded to Cloud Logging for observability, not to a database.

Key Design Principle

There is no single "learning service." Feedback collection and learning are co-located with the services that generate AI outputs, following the same data ownership boundaries as the rest of the platform.


Architecture

                     Olympus Cloud Continuous Learning
┌─────────────────────────────────────────────────────────┐
│                    USER INTERACTIONS                     │
│  POS App │ Customer App │ Kiosk │ Cockpit │ AI Assistant │
└──────┬───────────┬──────────┬───────────┬─────────┬─────┘
       │           │          │           │         │
       ▼           ▼          ▼           ▼         ▼
┌──────────────────────────────┐     │     ┌──────────────────────────────┐
│   Recommendation Feedback    │     │     │    AI Assistant Feedback     │
│   POST /recommendations/     │     │     │  (thumbs-up/down, ratings)   │
│        feedback              │     │     │                              │
└──────────┬───────────────────┘     │     └──────────┬───────────────────┘
           │                         │                │
           ▼                         │                ▼
┌──────────────────────┐             │     ┌──────────────────────┐
│      ClickHouse      │             │     │    Cloud Logging     │
│   recommendation_    │             │     │  (structured JSON)   │
│   feedback table     │             │     │                      │
└──────────┬───────────┘             │     └──────────────────────┘
           │                         │
           ▼                         ▼
┌──────────────────────────────────────────────────────────────┐
│                    LearningEngine (Redis)                    │
│  Per-tenant learning state │ 24h TTL │ Batch processing      │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│           Improved Recommendations & Suggestions             │
│  RecommendationEngine │ ContextEnricher │ PolicyFilter       │
└──────────────────────────────────────────────────────────────┘

Recommendation Feedback Loop

The recommendation engine uses a closed feedback loop: recommendations are served, user interactions are recorded, and acceptance metrics inform future scoring.

How It Works

  1. The RecommendationEngine generates scored suggestions using collaborative filtering, content-based matching, and contextual enrichment.
  2. The client application presents recommendations to the user (staff or customer).
  3. When the user accepts or dismisses a recommendation, the client sends a feedback event via the REST API.
  4. Feedback is persisted to the ClickHouse recommendation_feedback table.
  5. Performance metrics (acceptance rates, revenue impact) are computed from this table and used to evaluate recommendation quality over time.

Feedback API

POST /recommendations/feedback
Content-Type: application/json

{
  "tenant_id": "restaurant-1",
  "location_id": "loc_123",
  "order_id": "ord_789",
  "item_id": "app-001",
  "recommendation_type": "cross_sell",
  "was_accepted": true,
  "position": 0,
  "experiment_id": "exp_abc",
  "experiment_variant": "variant_b"
}

Performance Metrics API

Query aggregated recommendation performance:

GET /recommendations/performance?tenant_id=restaurant-1&location_id=loc_123&days=30

Returns acceptance rates by recommendation type, total recommendation count, and estimated revenue attributed to accepted recommendations.
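To illustrate the aggregation behind this endpoint, here is a minimal Python sketch (not the service's actual implementation). The field names mirror the recommendation_feedback schema described later in this page; the per-row `revenue` field is a hypothetical stand-in for the joined order-item amounts.

```python
from collections import defaultdict

def recommendation_performance(rows):
    """Aggregate feedback rows into acceptance rates by type,
    total recommendation count, and attributed revenue."""
    by_type = defaultdict(lambda: {"total": 0, "accepted": 0})
    revenue = 0.0
    for row in rows:
        bucket = by_type[row["recommendation_type"]]
        bucket["total"] += 1
        if row["was_accepted"]:
            bucket["accepted"] += 1
            revenue += row.get("revenue", 0.0)
    return {
        "acceptance_rates": {
            t: b["accepted"] / b["total"] for t, b in by_type.items()
        },
        "total_recommendations": sum(b["total"] for b in by_type.values()),
        "recommended_revenue": revenue,
    }
```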


Tenant Learning Engine

The LearningEngine class provides near-real-time learning from operator feedback on AI-generated suggestions (pricing, staffing, inventory, and other operational recommendations).

Location

| Component | Path |
| --- | --- |
| LearningEngine | backend/python/app/services/ai/learning_engine.py |
| Tests | backend/python/tests/test_learning_engine.py |
| Integration | backend/python/app/services/events/restaurant_processor.py |

How It Works

The LearningEngine operates on a buffer-and-batch model:

  1. Buffer -- Each piece of feedback is appended to an in-memory buffer with its suggestion_id, action (accepted/rejected/modified), context, and timestamp.
  2. Batch trigger -- When the buffer reaches 100 items, batch processing is triggered automatically.
  3. Analysis -- Feedback is grouped by suggestion type (e.g., pricing, staffing, inventory). For each type:
    • Acceptance rate is calculated. If it falls below 30%, rejection pattern analysis runs.
    • Modification patterns are extracted if users frequently modify rather than accept suggestions outright.
  4. Storage -- Learning signals are persisted to Redis with a 24-hour TTL under keys like learning_engine:rejections:{type} and learning_engine:modifications:{type}.
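The buffer-and-batch flow above can be sketched in a few lines of Python. This is an illustrative simplification, not the real LearningEngine; the batch size is parameterized here so the trigger is easy to see, and Redis persistence is omitted.

```python
from collections import defaultdict

class FeedbackBuffer:
    """Illustrative buffer-and-batch sketch of the LearningEngine flow."""

    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.buffer = []

    def record(self, suggestion_id, action, context):
        """Append one feedback item; process the batch when full."""
        self.buffer.append(
            {"suggestion_id": suggestion_id, "action": action, "context": context}
        )
        if len(self.buffer) >= self.batch_size:
            return self._process_batch()
        return None

    def _process_batch(self):
        # Group by suggestion type -- the prefix of `{type}:{id}` ids.
        by_type = defaultdict(list)
        for item in self.buffer:
            by_type[item["suggestion_id"].split(":", 1)[0]].append(item)
        self.buffer = []
        signals = {}
        for stype, items in by_type.items():
            accepted = sum(1 for i in items if i["action"] == "accepted")
            rate = accepted / len(items)
            signals[stype] = {
                "acceptance_rate": rate,
                # Below 30% acceptance, rejection pattern analysis would run.
                "needs_rejection_analysis": rate < 0.30,
            }
        return signals
```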

Feedback Actions

| Action | Meaning | Triggers |
| --- | --- | --- |
| accepted | User adopted the suggestion as-is | Acceptance rate calculation |
| rejected | User dismissed the suggestion | Rejection pattern analysis (if rate is below 30%) |
| modified | User adjusted the suggestion before accepting | Modification learning |

Rejection Pattern Analysis

When acceptance rate drops below 30% for a suggestion type, the engine analyzes rejection context across four dimensions:

| Dimension | Redis Key Example | What It Reveals |
| --- | --- | --- |
| Time of day | patterns.time_of_day | Suggestions rejected during specific hours |
| Day of week | patterns.day_of_week | Day-specific rejection trends |
| User role | patterns.user_role | Role-based preference differences |
| Business state | patterns.business_state | Rejections correlated with busy/slow periods |
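As a rough sketch, the four-dimension analysis amounts to counting rejection contexts along each axis. The dimension-to-key mapping below follows the `context` dict shown in the code example in this section; the helper itself is hypothetical.

```python
from collections import Counter

# Maps each pattern dimension to the corresponding `context` field.
DIMENSIONS = {
    "time_of_day": "hour",
    "day_of_week": "day",
    "user_role": "role",
    "business_state": "state",
}

def analyze_rejections(contexts):
    """Count rejection contexts along the four pattern dimensions."""
    return {
        dim: dict(Counter(ctx[key] for ctx in contexts if key in ctx))
        for dim, key in DIMENSIONS.items()
    }
```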

Code Example

from app.services.ai.learning_engine import LearningEngine
from app.core.redis import RedisManager

# Initialize with Redis connection
engine = LearningEngine(redis=redis_manager)

# Record operator feedback on a suggestion
await engine.record_suggestion_feedback(
    suggestion_id="pricing:discount-123",
    action="rejected",
    context={
        "hour": 18,
        "day": "Friday",
        "role": "manager",
        "state": "busy",
    },
    modification=None,
)

Redis Key Structure

learning_engine:rejections:{suggestion_type}
→ { patterns: { time_of_day: {...}, day_of_week: {...}, ... }, samples: N, generated_at: "..." }

learning_engine:modifications:{suggestion_type}
→ { status: "pending_training", samples: N, generated_at: "..." }

All keys expire after 86,400 seconds (24 hours), ensuring the learning state reflects recent operator behavior rather than stale historical patterns.

tip

The 24-hour TTL means the system naturally adapts to changing operator preferences without manual resets. If a restaurant changes management style or seasonal patterns shift, old learning signals expire automatically.


Feedback Event Schema

Recommendation Feedback (ClickHouse)

Feedback on menu item recommendations is persisted to the ClickHouse recommendation_feedback table:

| Field | Type | Description |
| --- | --- | --- |
| tenant_id | String | Tenant identifier |
| location_id | String | Location identifier |
| order_id | String | Associated order |
| item_id | String | Recommended menu item |
| recommendation_type | String | One of: upsell, cross_sell, substitute, popular, personalized |
| was_accepted | Boolean | Whether the user added the item to their order |
| position | Int | Position in the recommendation list (0-indexed) |
| experiment_id | String (nullable) | A/B test experiment ID |
| experiment_variant | String (nullable) | Assigned variant |
| feedback_time | DateTime | When the feedback was recorded |

LearningEngine Feedback (Redis)

Operational suggestion feedback flows through the LearningEngine buffer:

| Field | Type | Description |
| --- | --- | --- |
| suggestion_id | String | Format: {type}:{id} (e.g., pricing:discount-123) |
| action | String | One of: accepted, rejected, modified |
| context | Dict | Contextual metadata (hour, day, role, business state) |
| modification | Dict (nullable) | What the user changed if action is modified |
| timestamp | DateTime | UTC timestamp of the feedback |

ClickHouse Analytics

Recommendation feedback in ClickHouse enables long-term performance analysis and model evaluation.

Acceptance Rate by Type

SELECT
    recommendation_type,
    count(*) AS total_recommendations,
    sum(CASE WHEN was_accepted THEN 1 ELSE 0 END) AS accepted,
    avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate,
    avgIf(position, was_accepted) AS avg_position_when_accepted
FROM recommendation_feedback
WHERE tenant_id = %(tenant_id)s
  AND location_id = %(location_id)s
  AND toDate(feedback_time) >= today() - interval 30 day
GROUP BY recommendation_type

Revenue Impact from Accepted Recommendations

WITH accepted_recs AS (
    SELECT order_id, item_id
    FROM recommendation_feedback
    WHERE tenant_id = %(tenant_id)s
      AND location_id = %(location_id)s
      AND was_accepted = true
      AND toDate(feedback_time) >= today() - interval 30 day
)
SELECT
    sum(oi.price * oi.quantity) AS recommended_revenue
FROM accepted_recs ar
JOIN order_items oi
    ON ar.order_id = oi.order_id AND ar.item_id = oi.item_id

Data Flow

The ClickHouse analytics data follows the standard Olympus Cloud OLAP pipeline:

Rust Commerce Service → Cloud Spanner (OLTP)
                ↓
          GCP Pub/Sub
                ↓
Python ML Service → ClickHouse Cloud (OLAP)
                ↓
  recommendation_feedback table
warning

Python services do not read from Cloud Spanner directly. All analytics data flows through Pub/Sub into ClickHouse, following the Architecture 3.0 data ownership model. The RecommendationFeatureStore reads exclusively from ClickHouse.


AI Assistant Feedback

Conversational AI feedback (for agents like Maximus, Minerva, and the Support Agent) uses a different path than recommendation feedback.

How It Works

  • Users can provide thumbs-up/thumbs-down or text feedback on AI assistant responses.
  • This feedback is recorded as structured JSON log entries in Google Cloud Logging.
  • Feedback is not persisted to a database table. It is available for analysis through Cloud Logging queries and log-based metrics.
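On Cloud Run, one JSON object written per line to stdout is ingested by Cloud Logging as a jsonPayload. The sketch below shows that pattern with the standard library; it is illustrative only (the real services may use the google-cloud-logging client instead), but the `event_type` and `rating` fields match the gcloud query shown later in this section.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line; structured stdout
    lines become jsonPayload entries in Cloud Logging."""

    def format(self, record):
        payload = {"severity": record.levelname, "message": record.getMessage()}
        # Extra structured fields are attached via `extra={"fields": ...}`.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("ai_feedback")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def record_assistant_feedback(agent, session_id, rating, comment=None):
    """Log a thumbs-up/down event ("positive" or "negative")."""
    logger.info(
        "assistant feedback received",
        extra={"fields": {
            "event_type": "ai_feedback",
            "agent": agent,
            "session_id": session_id,
            "rating": rating,
            "comment": comment,
        }},
    )
```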

Why Cloud Logging

| Concern | Approach |
| --- | --- |
| Storage | Cloud Logging retains logs per retention policy (default 30 days) |
| Querying | Log Explorer and log-based metrics in Cloud Monitoring |
| Alerting | Log-based alerts can fire on feedback quality drops |
| Cost | No additional database cost for low-volume feedback signals |

Querying Assistant Feedback

# View recent negative feedback in Cloud Logging
gcloud logging read \
  'resource.type="cloud_run_revision" AND jsonPayload.event_type="ai_feedback" AND jsonPayload.rating="negative"' \
  --project=olympuscloud-dev \
  --limit=50
info

If assistant feedback volume grows to a level where structured querying becomes important, a future enhancement could persist these events to ClickHouse. Currently the volume does not justify the added complexity.


A/B Testing Integration

The recommendation engine supports experiment-based feedback tracking for controlled evaluation of algorithm changes.

How Experiments Work

  1. Create an experiment with multiple variants (e.g., control vs variant_b with different scoring weights).
  2. Assign sessions to variants via POST /recommendations/experiments/{id}/assign.
  3. Recommendations include experiment_id and experiment_variant in responses.
  4. Feedback records carry these fields, enabling per-variant acceptance rate analysis.
  5. Query experiment results via GET /recommendations/experiments?tenant_id=...&status=running.
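Step 2's assignment is typically deterministic so that a returning session always lands in the same variant. A common technique is hashing the session and experiment IDs together, sketched below; this is an assumption about the approach, not the documented implementation of the assign endpoint.

```python
import hashlib

def assign_variant(session_id, experiment_id, variants):
    """Deterministically map a session to one of the experiment's
    variants by hashing, so repeated assignments are stable."""
    digest = hashlib.sha256(f"{experiment_id}:{session_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```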

Experiment Feedback Analysis

Because experiment_id and experiment_variant are stored alongside every feedback record in ClickHouse, you can compute per-variant metrics:

SELECT
experiment_variant,
count(*) AS total,
avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate
FROM recommendation_feedback
WHERE experiment_id = 'exp_abc'
GROUP BY experiment_variant
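Once per-variant counts are in hand, a two-proportion z-test is a simple way to check whether an observed acceptance-rate difference is likely real rather than noise. The helper below is a standard statistics sketch, not part of the platform; values of |z| above roughly 1.96 correspond to significance at the 5% level.

```python
from math import sqrt

def two_proportion_z(accepted_a, total_a, accepted_b, total_b):
    """z-statistic comparing two variants' acceptance rates
    (positive when variant B outperforms variant A)."""
    p_a = accepted_a / total_a
    p_b = accepted_b / total_b
    # Pooled rate under the null hypothesis of no difference.
    pooled = (accepted_a + accepted_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se
```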