Learning Analytics Dashboard

How to monitor AI learning performance, track feedback trends, and measure recommendation quality improvements over time.

Overview

Olympus Cloud provides two complementary sources of learning analytics that together give operators and developers visibility into how AI suggestions are performing:

  • Recommendation Performance API -- A REST endpoint backed by ClickHouse that returns acceptance rates, revenue impact, and per-type breakdowns for menu item recommendations over configurable time windows.
  • LearningEngine Redis Signals -- Short-lived learning state (24-hour TTL) stored in Redis that captures rejection patterns and modification trends for operational suggestions such as pricing, staffing, and inventory.

These analytics are consumed programmatically via the Python ML service API. There is no standalone dashboard UI at this time -- the data feeds into the NebusAI Cockpit and restaurant management views through the existing API layer.

> **Info:** This guide focuses on the analytics and monitoring aspects of the learning system. For details on how feedback is collected and processed, see the Continuous Learning & Feedback System guide.


Recommendation Performance Metrics

The RecommendationEngine exposes a performance metrics method that queries the ClickHouse recommendation_feedback table and returns aggregated analytics.

API Endpoint

```http
GET /recommendations/performance?tenant_id=restaurant-1&location_id=loc_123&days=30
```
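From a client, this request is straightforward to assemble. The helper below only builds the query URL; the function name and base URL are illustrative, and the result can be fetched with any HTTP client (requests, httpx, etc.):

```python
from urllib.parse import urlencode

def build_performance_url(base_url: str, tenant_id: str, location_id: str, days: int = 30) -> str:
    """Build the performance-metrics request URL for a tenant/location window."""
    query = urlencode({"tenant_id": tenant_id, "location_id": location_id, "days": days})
    return f"{base_url}/recommendations/performance?{query}"

# Example: point at a locally running ML service (address is an assumption).
url = build_performance_url("http://localhost:8000", "restaurant-1", "loc_123", days=30)
```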

Response Structure

The endpoint returns a PerformanceMetricsResponse with the following fields:

| Field | Type | Description |
|---|---|---|
| period_days | int | Number of days in the analysis window |
| total_recommendations | int | Total recommendations served in the period |
| overall_acceptance_rate | float | Aggregate acceptance rate across all types |
| recommended_revenue | float | Estimated revenue from accepted recommendations |
| by_type | dict | Per-type breakdown (see below) |
| trending_items | list | Recommendation types with acceptance rates above 25% |
| improvement_suggestions | list | Auto-generated suggestions when performance is low |

Per-Type Breakdown

The by_type field contains metrics for each recommendation type (upsell, cross_sell, substitute, popular, personalized):

| Field | Type | Description |
|---|---|---|
| total | int | Total recommendations of this type |
| accepted | int | Number accepted by users |
| acceptance_rate | float | Acceptance rate (0.0 to 1.0) |
| avg_position | float | Average list position when accepted (0-indexed) |

Example Response

```json
{
  "period_days": 30,
  "total_recommendations": 4250,
  "overall_acceptance_rate": 0.182,
  "recommended_revenue": 12450.00,
  "by_type": {
    "cross_sell": {
      "total": 1800,
      "accepted": 410,
      "acceptance_rate": 0.228,
      "avg_position": 0.8
    },
    "upsell": {
      "total": 1200,
      "accepted": 108,
      "acceptance_rate": 0.090,
      "avg_position": 1.5
    },
    "popular": {
      "total": 850,
      "accepted": 170,
      "acceptance_rate": 0.200,
      "avg_position": 1.2
    },
    "personalized": {
      "total": 400,
      "accepted": 120,
      "acceptance_rate": 0.300,
      "avg_position": 0.5
    }
  },
  "trending_items": [
    {"type": "personalized", "acceptance_rate": 0.300, "total": 400}
  ],
  "improvement_suggestions": [
    "Upsell acceptance is low. Try featuring higher-value appetizers and sides."
  ]
}
```
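A consumer of this response can flag types whose acceptance lags. A minimal sketch (the function name is ours; the 15% default mirrors the overall-acceptance threshold used by the API):

```python
def underperforming_types(metrics: dict, threshold: float = 0.15) -> list:
    """List recommendation types whose acceptance rate falls below `threshold`."""
    return sorted(
        rec_type
        for rec_type, stats in metrics.get("by_type", {}).items()
        if stats["acceptance_rate"] < threshold
    )
```

With a response like the one above, only the upsell type would be flagged at the default threshold.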

Auto-Generated Improvement Suggestions

The API produces contextual improvement suggestions based on current metrics:

| Condition | Suggestion |
|---|---|
| Overall acceptance rate below 15% | "Acceptance rate is below 15%. Consider refining recommendation relevance." |
| Upsell acceptance rate below 10% | "Upsell acceptance is low. Try featuring higher-value appetizers and sides." |
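The logic amounts to simple threshold checks. A hypothetical sketch (the function name is ours; the strings mirror the conditions above):

```python
def improvement_suggestions(overall_rate: float, upsell_rate: float) -> list:
    """Emit one suggestion per tripped threshold, as the performance endpoint does."""
    suggestions = []
    if overall_rate < 0.15:
        suggestions.append(
            "Acceptance rate is below 15%. Consider refining recommendation relevance."
        )
    if upsell_rate < 0.10:
        suggestions.append(
            "Upsell acceptance is low. Try featuring higher-value appetizers and sides."
        )
    return suggestions
```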

LearningEngine Analytics Signals

The LearningEngine stores learning signals in Redis that represent near-real-time insights into how operators respond to AI-generated suggestions. These signals are available for monitoring and are consumed by downstream systems to adjust future suggestion behavior.

Source Code

| Component | Path |
|---|---|
| LearningEngine | backend/python/app/services/ai/learning_engine.py |
| Tests | backend/python/tests/test_learning_engine.py |
| Integration | backend/python/app/services/events/restaurant_processor.py |

Querying Redis Learning Signals

Learning signals are stored under predictable key patterns. You can query them directly from Redis for monitoring:

```shell
# Rejection patterns for a suggestion type
redis-cli GET "learning_engine:rejections:pricing"

# Modification patterns for a suggestion type
redis-cli GET "learning_engine:modifications:staffing"
```
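The same keys can be read from Python. The sketch below assumes the values are JSON-encoded strings; the `get` parameter is any callable with the shape of redis-py's `Redis.get`, so a real client or a test stub both work:

```python
import json

# Key pattern as queried above.
SIGNAL_KEY = "learning_engine:{kind}:{suggestion_type}"

def read_signal(get, kind: str, suggestion_type: str):
    """Fetch and decode one learning signal.

    Returns None when the key is absent (e.g. after the 24-hour TTL expires).
    """
    raw = get(SIGNAL_KEY.format(kind=kind, suggestion_type=suggestion_type))
    return json.loads(raw) if raw is not None else None
```

With a real connection you would pass `redis.Redis(...).get` as the `get` argument.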

Rejection Pattern Structure

When the acceptance rate for a suggestion type drops below 30%, the engine stores a rejection analysis across four dimensions:

```json
{
  "patterns": {
    "time_of_day": {"18": 45, "19": 38, "20": 22},
    "day_of_week": {"Friday": 52, "Saturday": 33},
    "user_role": {"manager": 60, "staff": 25},
    "business_state": {"busy": 70, "slow": 15}
  },
  "samples": 105,
  "generated_at": "2026-02-20T14:30:00+00:00"
}
```

Each dimension shows a frequency count of rejections by context value, revealing when and why operators reject suggestions:

| Dimension | What It Reveals | Example Insight |
|---|---|---|
| time_of_day | Hour-based rejection patterns | Pricing suggestions rejected during dinner rush (18-20) |
| day_of_week | Day-specific trends | Staffing suggestions rejected on weekends |
| user_role | Role-based differences | Managers reject inventory suggestions more than chefs |
| business_state | Business context correlation | Suggestions rejected when restaurant is busy |
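To surface the dominant rejection context programmatically, a small helper can pick the highest-count value in one dimension (the function name is illustrative):

```python
def peak_context(patterns: dict, dimension: str):
    """Return (context_value, rejection_count) for the most-rejected context
    in one dimension of a rejection-pattern payload."""
    counts = patterns[dimension]
    value = max(counts, key=counts.get)
    return value, counts[value]
```

Applied to the sample payload above, the time_of_day dimension peaks at hour 18 with 45 rejections.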

Modification Pattern Structure

When operators modify suggestions rather than accepting them outright, the engine records this for future training:

```json
{
  "status": "pending_training",
  "samples": 23,
  "generated_at": "2026-02-20T14:30:00+00:00"
}
```

The pending_training status indicates that modification patterns have been captured but not yet consumed by a retraining pipeline.

Key Expiration

All learning signal keys expire after 86,400 seconds (24 hours). This means:

  • Signals reflect the most recent 24 hours of operator behavior
  • Stale patterns from changed business conditions expire automatically
  • Monitoring systems should poll periodically rather than relying on cached values

ClickHouse Analytics Queries

For deeper analysis beyond the REST API, you can query the ClickHouse recommendation_feedback table directly.

Acceptance Rate Trend Over Time

```sql
SELECT
    toDate(feedback_time) AS date,
    count(*) AS total,
    avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate
FROM recommendation_feedback
WHERE tenant_id = %(tenant_id)s
  AND location_id = %(location_id)s
  AND toDate(feedback_time) >= today() - interval 30 day
GROUP BY date
ORDER BY date
```

Acceptance Rate by Recommendation Type

```sql
SELECT
    recommendation_type,
    count(*) AS total_recommendations,
    sum(CASE WHEN was_accepted THEN 1 ELSE 0 END) AS accepted,
    avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate,
    avg(position) AS avg_position_when_accepted
FROM recommendation_feedback
WHERE tenant_id = %(tenant_id)s
  AND location_id = %(location_id)s
  AND toDate(feedback_time) >= today() - interval 30 day
GROUP BY recommendation_type
```

Revenue Impact from Accepted Recommendations

```sql
WITH accepted_recs AS (
    SELECT order_id, item_id
    FROM recommendation_feedback
    WHERE tenant_id = %(tenant_id)s
      AND location_id = %(location_id)s
      AND was_accepted = true
      AND toDate(feedback_time) >= today() - interval 30 day
)
SELECT
    sum(oi.price * oi.quantity) AS recommended_revenue
FROM accepted_recs ar
JOIN order_items oi
    ON ar.order_id = oi.order_id AND ar.item_id = oi.item_id
```

A/B Experiment Performance Comparison

```sql
SELECT
    experiment_variant,
    count(*) AS total,
    avg(CASE WHEN was_accepted THEN 1.0 ELSE 0.0 END) AS acceptance_rate
FROM recommendation_feedback
WHERE experiment_id = 'exp_abc'
GROUP BY experiment_variant
```
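To judge whether a difference between variants is signal rather than noise, the per-variant counts can be fed into a standard two-proportion z-test. This sketch is our addition, not part of the service:

```python
import math

def ab_z_test(accepted_a: int, total_a: int, accepted_b: int, total_b: int):
    """Two-proportion z-test on acceptance rates of two experiment variants.

    Returns (z, p_value) where p_value is two-sided.
    """
    p_a, p_b = accepted_a / total_a, accepted_b / total_b
    pooled = (accepted_a + accepted_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value
```

For example, 200/1000 acceptances versus 150/1000 yields a z-score near 2.9, significant at the 1% level.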

Monitoring AI Assistant Feedback

Conversational AI feedback (for agents like Maximus, Minerva, and the Support Agent) is logged as structured JSON to Google Cloud Logging rather than persisted to a database.

Querying Negative Feedback

```shell
gcloud logging read \
  'resource.type="cloud_run_revision" AND jsonPayload.event_type="ai_feedback" AND jsonPayload.rating="negative"' \
  --project=olympuscloud-dev \
  --limit=50
```
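Once entries are exported (e.g. with gcloud's `--format=json`), the same filter can be applied to the payloads in Python. The function name is ours; the field names match the filter above:

```python
def negative_ai_feedback(entries: list) -> list:
    """Keep only structured payloads for negative AI-feedback events."""
    return [
        entry for entry in entries
        if entry.get("event_type") == "ai_feedback" and entry.get("rating") == "negative"
    ]
```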

Setting Up Log-Based Alerts

You can create Cloud Monitoring alerts that fire when negative AI feedback exceeds a threshold:

  1. Create a log-based metric in Cloud Monitoring filtering on jsonPayload.event_type="ai_feedback" and jsonPayload.rating="negative"
  2. Set an alerting policy with the desired threshold (e.g., more than 10 negative ratings in 1 hour)
  3. Route alerts to PagerDuty or Slack via notification channels

Configuration and Thresholds

LearningEngine Thresholds

The following thresholds are defined in the LearningEngine implementation:

| Parameter | Value | Effect |
|---|---|---|
| Batch size | 100 feedback items | Processing triggers when the buffer reaches 100 |
| Low acceptance threshold | 30% | Rejection-pattern analysis runs when acceptance drops below this |
| Redis TTL | 86,400 seconds (24h) | Learning signals expire after one day |
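The batching behavior can be pictured as a simple bounded buffer. This is an illustrative sketch, not the actual LearningEngine implementation:

```python
class FeedbackBuffer:
    """Accumulates feedback items and signals when a processing batch is due."""

    BATCH_SIZE = 100  # matches the documented batch size

    def __init__(self):
        self.items = []

    def add(self, item) -> bool:
        """Buffer one item; return True when the batch should be processed."""
        self.items.append(item)
        if len(self.items) >= self.BATCH_SIZE:
            self.items.clear()  # hand the batch off to processing, then reset
            return True
        return False
```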

Performance API Thresholds

The performance endpoint generates improvement suggestions based on these thresholds:

| Threshold | Value | Suggestion Generated |
|---|---|---|
| Overall acceptance rate | Below 15% | Refine recommendation relevance |
| Upsell acceptance rate | Below 10% | Feature higher-value appetizers and sides |
| Trending threshold | Above 25% | Type appears in the trending_items list |

These thresholds are currently hardcoded in the recommendation routes. To adjust them, modify backend/python/app/api/recommendation_routes.py in the get_performance_metrics handler.