Operational Metrics

For Business Users

Operational excellence requires consistent, trusted metrics. Olytix Core provides a unified view of operations across fulfillment, support, production, and service delivery.

Common Operational Challenges

Operations teams face unique data challenges:

  • Real-time visibility needed but data is delayed
  • Metrics defined differently across locations/shifts
  • No single view of end-to-end operations
  • Reactive instead of proactive management

Olytix Core Operations Solution

Operations Cube

# cubes/operations.yml
cubes:
  - name: operations
    sql: "SELECT * FROM {{ ref('fct_operations') }}"
    description: "Unified operational metrics"

    measures:
      # Throughput metrics
      - name: orders_processed
        type: count
        description: "Total orders processed"

      - name: units_shipped
        type: sum
        sql: units_quantity
        description: "Total units shipped"

      - name: throughput_rate
        type: number
        sql: "COUNT(*) / NULLIF(SUM(processing_time_minutes) / 60.0, 0)"
        description: "Orders processed per hour of processing time"

      # Efficiency metrics
      - name: avg_processing_time
        type: avg
        sql: processing_time_minutes
        format: number
        description: "Average order processing time (minutes)"

      - name: on_time_rate
        type: avg
        sql: "CASE WHEN shipped_on_time THEN 1.0 ELSE 0.0 END"
        format: percentage
        description: "Percentage shipped on time"

      # Quality metrics
      - name: error_count
        type: count
        filters:
          - sql: "has_error = true"
        description: "Orders with errors"

      - name: error_rate
        type: number
        sql: "SUM(CASE WHEN has_error THEN 1 ELSE 0 END)::float / NULLIF(COUNT(*), 0)"
        format: percentage
        description: "Error rate"

      # Cost metrics
      - name: total_cost
        type: sum
        sql: processing_cost
        format: currency

      - name: cost_per_order
        type: avg
        sql: processing_cost
        format: currency

    dimensions:
      - name: operation_id
        type: string
        primary_key: true

      - name: operation_date
        type: time
        sql: operation_timestamp
        granularities: [hour, day, week, month]

      - name: facility
        type: string
        sql: facility_code
        description: "Processing facility"

      - name: shift
        type: string
        sql: shift_name
        description: "Work shift (day, evening, night)"

      - name: operation_type
        type: string
        sql: operation_type

      - name: status
        type: string
        sql: operation_status
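To make the measure SQL concrete, here is the same on_time_rate and error_rate arithmetic applied to a few illustrative rows in Python (sample data only, not real facility figures):

```python
# Sketch of how the on_time_rate and error_rate measures aggregate.
# Each dict stands in for one row of fct_operations.
orders = [
    {"shipped_on_time": True,  "has_error": False},
    {"shipped_on_time": True,  "has_error": False},
    {"shipped_on_time": False, "has_error": True},
    {"shipped_on_time": True,  "has_error": False},
]

# on_time_rate: AVG(CASE WHEN shipped_on_time THEN 1.0 ELSE 0.0 END)
on_time_rate = sum(1.0 for o in orders if o["shipped_on_time"]) / len(orders)

# error_rate: SUM(CASE WHEN has_error THEN 1 ELSE 0 END)::float / NULLIF(COUNT(*), 0)
error_rate = sum(1 for o in orders if o["has_error"]) / len(orders)

print(on_time_rate)  # 0.75
print(error_rate)    # 0.25
```

Averaging a 0/1 flag is the standard trick for turning a boolean column into a rate that still aggregates correctly across any dimension slice.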

Key Operational Metrics

# metrics/operations.yml
metrics:
  # Overall Equipment Effectiveness (OEE)
  - name: oee
    description: |
      Overall Equipment Effectiveness = Availability × Performance × Quality
      Industry benchmark: 85%
    type: derived
    expression: |
      availability_rate * performance_rate * quality_rate
    format: percentage
    meta:
      owner: operations
      target: 0.85

  # Perfect Order Rate
  - name: perfect_order_rate
    description: |
      Percentage of orders delivered:
      - On time
      - Complete
      - Undamaged
      - With correct documentation
    type: ratio
    numerator: operations.perfect_orders
    denominator: operations.total_orders
    format: percentage
    meta:
      owner: operations
      target: 0.95

  # Cycle Time
  - name: order_cycle_time
    description: |
      Average time from order receipt to delivery.
    type: simple
    expression: operations.avg_cycle_time
    format: duration_hours

  # First Pass Yield
  - name: first_pass_yield
    description: |
      Percentage of items completed correctly the first time,
      without rework or corrections.
    type: ratio
    numerator: operations.first_pass_count
    denominator: operations.total_count
    format: percentage
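The OEE expression above is a straight product of three component rates. A quick Python sketch with illustrative values (not sourced from any real facility) shows how close a plant can sit to the 85% benchmark even with strong components:

```python
# OEE = Availability x Performance x Quality, per the metric definition above.
# Component values are illustrative only.
availability_rate = 0.90   # uptime / scheduled production time
performance_rate = 0.95    # actual throughput / ideal throughput
quality_rate = 0.99        # good units / total units

oee = availability_rate * performance_rate * quality_rate
meets_target = oee >= 0.85

print(oee)           # ~0.846 -> just below the 0.85 benchmark
print(meets_target)  # False
```

Because the three rates multiply, a few points lost in each component compound quickly; this is why OEE targets are hard to hit.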

Real-Time Operations Dashboard

Operations Control Center
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Today's Performance (Real-Time)
────────────────────────────────────────────────────────────

Orders Processed │ Throughput     │ On-Time Rate │ Error Rate
    2,847        │   142/hour     │    96.2%     │   0.8%
  ↑ 12% vs avg   │ ↑ 8% vs target │  ✓ On target │ ✓ Below 1%

Facility Performance
────────────────────────────────────────────────────────────
Facility    │ Orders │ Throughput │ On-Time │ Errors │ Status
────────────┼────────┼────────────┼─────────┼────────┼────────
Chicago     │  1,250 │   165/hr   │  97.5%  │  0.5%  │ ✓ Good
Dallas      │    892 │   128/hr   │  95.8%  │  0.9%  │ ✓ Good
Phoenix     │    705 │   118/hr   │  94.1%  │  1.2%  │ ⚠ Monitor

Hourly Trend
────────────────────────────────────────────────────────────
200 ┤              ╭─────╮
    │         ╭────╯     ╰────╮
150 ┤    ╭────╯               ╰───╮
    │╭────╯                       ╰──
100 ┤╯
 50 ┤
    └─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────
         6AM   8AM  10AM  12PM   2PM   4PM   6PM   8PM  Now

Active Alerts
────────────────────────────────────────────────────────────
⚠ Phoenix: Throughput 12% below target (last 2 hours)
⚠ Chicago: Equipment PM due in 4 hours
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Operational Use Cases

Supply Chain Visibility

cubes:
  - name: supply_chain
    sql: "SELECT * FROM {{ ref('fct_shipments') }}"

    measures:
      - name: shipments_in_transit
        type: count
        filters:
          - sql: "status = 'in_transit'"

      - name: avg_transit_days
        type: avg
        sql: transit_days

      - name: late_shipments
        type: count
        filters:
          - sql: "actual_delivery > expected_delivery"

      - name: on_time_delivery_rate
        type: number
        sql: |
          SUM(CASE WHEN actual_delivery <= expected_delivery THEN 1 ELSE 0 END)::float
          / NULLIF(COUNT(*), 0)
        format: percentage

    dimensions:
      - name: origin
        type: string
        sql: origin_facility

      - name: destination
        type: string
        sql: destination_region

      - name: carrier
        type: string
        sql: carrier_name

      - name: shipment_type
        type: string
        sql: shipment_type

Quality Management

cubes:
  - name: quality
    sql: "SELECT * FROM {{ ref('fct_quality_events') }}"

    measures:
      - name: defect_count
        type: count
        filters:
          - sql: "event_type = 'defect'"

      - name: defects_per_million
        type: number
        sql: |
          SUM(CASE WHEN event_type = 'defect' THEN 1 ELSE 0 END)::float
          / NULLIF(SUM(units_inspected), 0) * 1000000
        description: "Defects per million opportunities (DPMO)"

      - name: sigma_level
        type: number
        sql: |
          -- Calculate approximate sigma level from DPMO
          CASE
            WHEN dpmo < 4 THEN 6.0
            WHEN dpmo < 63 THEN 5.0
            WHEN dpmo < 668 THEN 4.5
            WHEN dpmo < 6210 THEN 4.0
            ELSE 3.5
          END

    dimensions:
      - name: defect_type
        type: string
        sql: defect_category

      - name: root_cause
        type: string
        sql: root_cause_category

      - name: production_line
        type: string
        sql: line_id
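The sigma_level CASE mapping is easy to sanity-check outside the warehouse. A small Python mirror of the same thresholds:

```python
def sigma_level(dpmo: float) -> float:
    """Approximate sigma level from defects per million opportunities,
    mirroring the CASE expression in the quality cube above."""
    if dpmo < 4:
        return 6.0
    if dpmo < 63:
        return 5.0
    if dpmo < 668:
        return 4.5
    if dpmo < 6210:
        return 4.0
    return 3.5

print(sigma_level(3.4))    # 6.0 -- classic Six Sigma threshold
print(sigma_level(500))    # 4.5
print(sigma_level(10000))  # 3.5
```

The 3.4 DPMO cutoff for six sigma is the conventional figure (with the standard 1.5-sigma shift); the intermediate bands here are coarse steps, which is usually adequate for dashboard display.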

Service Level Management

metrics:
  - name: sla_compliance
    description: |
      Percentage of operations meeting SLA requirements.
    type: ratio
    numerator: operations.sla_met_count
    denominator: operations.total_count
    format: percentage

  - name: mttr
    description: |
      Mean Time To Repair - average time to resolve issues.
    type: simple
    expression: incidents.avg_resolution_time
    format: duration_hours

  - name: mtbf
    description: |
      Mean Time Between Failures - average uptime between incidents.
    type: simple
    expression: equipment.avg_time_between_failures
    format: duration_days
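MTTR and MTBF together determine equipment availability via the standard relationship availability = MTBF / (MTBF + MTTR). That derived metric is not defined in the YAML above, but a quick sketch with illustrative numbers shows how the two combine:

```python
# availability = MTBF / (MTBF + MTTR) -- standard reliability formula.
# Values below are illustrative, not from any real equipment fleet.
mtbf_hours = 720.0   # ~30 days of uptime between failures
mttr_hours = 4.0     # 4 hours average repair time

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(round(availability, 4))  # 0.9945 -> roughly "two nines and a half"
```

This is why driving MTTR down often moves availability faster than stretching MTBF: the repair time sits directly in the denominator.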

Alerting and Monitoring

Operational Alerts

alerts:
  # Throughput alert
  - name: low_throughput
    description: "Throughput below target"
    metric: operations.throughput_rate
    condition: "value < 100"  # orders per hour
    severity: warning
    channels:
      - slack: "#ops-alerts"

  # Error rate alert
  - name: high_error_rate
    description: "Error rate exceeding threshold"
    metric: operations.error_rate
    condition: "value > 0.02"  # 2%
    severity: critical
    channels:
      - slack: "#ops-critical"
      - pagerduty: ops-oncall

  # SLA breach warning
  - name: sla_breach_risk
    description: "SLA at risk of breach"
    metric: operations.on_time_rate
    condition: "value < 0.95"  # 95%
    severity: warning
    channels:
      - email: operations-managers@company.com

  # Quality alert
  - name: quality_issue
    description: "Quality metrics degrading"
    metric: first_pass_yield
    condition: "value < 0.98"  # 98%
    severity: warning
    lookback: "1 hour"
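How the alert engine parses `condition` strings is not shown in this doc; a hypothetical sketch of evaluating a "value < 0.95"-style condition against a metric reading might look like this:

```python
# Hypothetical condition evaluator for "value < 0.95"-style alert rules.
# Olytix Core's actual parsing logic may differ; this only illustrates the idea.
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}

def check_alert(condition: str, value: float) -> bool:
    """Return True if the metric value trips the condition."""
    _, op, threshold = condition.split()  # e.g. ["value", "<", "0.95"]
    return OPS[op](value, float(threshold))

print(check_alert("value < 0.95", 0.93))   # True  -> alert fires
print(check_alert("value > 0.02", 0.008))  # False -> no alert
```

Keeping conditions as simple comparison strings makes them easy to review in version control alongside the metric definitions.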

Anomaly Detection

anomaly_detection:
  - metric: operations.throughput_rate
    method: statistical
    sensitivity: 2  # standard deviations
    baseline_period: 7 days
    alert_on: low

  - metric: operations.error_rate
    method: statistical
    sensitivity: 2
    baseline_period: 30 days
    alert_on: high
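A minimal sketch of what the `statistical` method with `sensitivity: 2` implies: flag any reading more than two standard deviations from the baseline mean. (The detector Olytix Core actually ships is not shown here; this only illustrates the principle.)

```python
# Z-score style anomaly check: |value - mean| > sensitivity * stdev.
from statistics import mean, stdev

def is_anomaly(baseline: list[float], value: float, sensitivity: float = 2.0) -> bool:
    """True if value falls outside the baseline's normal band."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > sensitivity * sigma

# Illustrative daily throughput baseline (orders/hour over 7 days)
baseline = [140, 145, 150, 138, 142, 148, 144]
print(is_anomaly(baseline, 110))  # True  -> well below the normal band
print(is_anomaly(baseline, 143))  # False -> within normal variation
```

This is why the config asks for a `baseline_period`: the mean and standard deviation have to be estimated from recent history before a new reading can be judged against them.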

Shift and Facility Comparisons

Cross-Facility Analysis

{
  "metrics": [
    "operations.orders_processed",
    "operations.throughput_rate",
    "operations.on_time_rate",
    "operations.error_rate"
  ],
  "dimensions": ["operations.facility"],
  "filters": [
    { "dimension": "operations.operation_date", "operator": "gte", "value": "2024-01-01" }
  ]
}

Shift Performance

{
  "metrics": [
    "operations.orders_processed",
    "operations.avg_processing_time",
    "operations.error_rate"
  ],
  "dimensions": ["operations.shift", "operations.operation_date.day"],
  "order_by": [{ "field": "operations.operation_date.day", "direction": "asc" }]
}

Integration Examples

IoT Data Integration

from olytix_core import OlytixCoreClient
import paho.mqtt.client as mqtt

client = OlytixCoreClient("http://localhost:8000")

def on_sensor_data(sensor_id, reading):
    # Query current thresholds from Olytix Core
    thresholds = client.query(
        dimensions=["equipment.sensor_id", "equipment.low_threshold", "equipment.high_threshold"],
        filters=[{"dimension": "equipment.sensor_id", "operator": "equals", "value": sensor_id}]
    ).data[0]

    # Check against thresholds; alert() is your notification hook
    if reading < thresholds["equipment.low_threshold"]:
        alert("Low reading", sensor_id, reading)
    elif reading > thresholds["equipment.high_threshold"]:
        alert("High reading", sensor_id, reading)

ERP Integration

# Sync operational metrics to ERP
daily_metrics = client.query(
    metrics=[
        "operations.orders_processed",
        "operations.total_cost",
        "operations.on_time_rate"
    ],
    dimensions=["operations.facility", "operations.operation_date.day"],
    filters=[
        {"dimension": "operations.operation_date", "operator": "equals", "value": "2024-01-20"}
    ]
).data

# Post to ERP
for record in daily_metrics:
    erp.post_production_stats(
        facility=record["operations.facility"],
        date=record["operations.operation_date.day"],
        orders=record["operations.orders_processed"],
        cost=record["operations.total_cost"],
        otd_rate=record["operations.on_time_rate"]
    )

Best Practices

Real-Time vs. Batch

Metric Type     │ Update Frequency │ Use Case
────────────────┼──────────────────┼────────────────────────
Safety metrics  │ Real-time        │ Immediate action needed
Throughput      │ Every 5 minutes  │ Operational monitoring
Quality metrics │ Hourly           │ Trend analysis
Cost metrics    │ Daily            │ Financial reporting

Metric Hierarchy

Strategic Metrics (Executive)
├── OEE, Perfect Order Rate, Total Cost
└── Tactical Metrics (Manager)
    ├── Throughput, On-Time Rate, Error Rate
    └── Operational Metrics (Supervisor)
        └── Orders/Hour, Cycle Time, Defect Count

Next Steps

Ready to implement operational metrics?

  1. Explore AI/ML integration →
  2. Set up real-time dashboards →
  3. Configure alerts →

Start with Visibility

Begin by creating visibility into current operations before adding alerts and automation. You need to understand normal before you can detect abnormal.