How Feedback Works
Every conversation in your Conversation Logs is reviewable. You have two levels of feedback:
Message-Level: Thumbs Up / Thumbs Down
On any AI-generated message, you can give a thumbs up or thumbs down (a sketch of the signal this captures follows the list).
- Thumbs up tells us the AI got it right — the tone, the information, the approach. We use this to reinforce good behavior.
- Thumbs down flags something you didn’t like — a wrong answer, poor tone, missing information, or an action that shouldn’t have happened.
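Conceptually, each thumb is a small structured signal attached to one message. Here is a minimal sketch of what that signal might carry, using hypothetical names (Cevro's actual data model may differ):

```typescript
// Hypothetical shape of a message-level feedback event.
// All names here are illustrative assumptions, not Cevro's real schema.
interface MessageFeedback {
  conversationId: string;
  messageId: string;      // the AI-generated message being rated
  rating: "up" | "down";  // up reinforces good behavior; down flags an issue
  comment?: string;       // optional note on what went wrong (assumed field)
  submittedAt: Date;
}
```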
Ticket-Level: Score and Comment
Beyond individual messages, you can score the entire conversation (a sketch of what a review captures follows the list):
- Override the AICSAT score — Click the score badge to assign your own rating (1–10), overriding the automated quality score
- Review the ticket — Click Review Ticket to open a detailed scoring modal where you can rate per-section and leave comments explaining your assessment
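Taken together, a review bundles the override and the per-section ratings into one record. A minimal sketch, assuming hypothetical field and section names:

```typescript
// Hypothetical shape of a ticket-level review.
// Section names and all fields are illustrative assumptions.
interface TicketReview {
  conversationId: string;
  aicsatOverride?: number; // 1-10; replaces the automated AICSAT score
  sections: Array<{
    name: string;          // e.g. "Tone", "Accuracy", "Resolution"
    score: number;
    comment?: string;      // why this section scored the way it did
  }>;
  overallComment?: string;
}
```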
Feedback Analysis
When you leave a thumbs-down, something happens immediately: the Cevro Feedback Assistant appears in a chat widget at the bottom of your screen. This AI assistant:
- Acknowledges your feedback — confirms what you flagged
- Analyzes the conversation — reviews what the AI did and identifies the likely issue
- Suggests a fix — often it’s an AIP instruction gap, a missing tool, or an edge case
Escalate to a Human
If the automated analysis isn’t enough — or if you want to discuss the issue directly — you can escalate from the feedback chat to connect with a Cevro engineer or CSM. This creates a live thread where your team and ours can collaborate on the fix.
This is the fastest way to get a human from Cevro looking at a specific conversation. Use it whenever you see something that needs expert attention.
What Happens with Your Feedback
Every piece of feedback is reviewed. Here’s the typical lifecycle:
Feedback is triaged
Your feedback appears in our internal review queue. The Cevro team categorizes it and identifies the root cause.
Root cause identified
Most issues fall into a few categories (modeled in a brief sketch after the list):
- AIP gap — The AI Procedure instructions didn’t cover this scenario. Fix: update the AIP.
- Missing data — The agent didn’t have access to the information it needed. Fix: connect a tool or add a field.
- Edge case — A situation the AIP didn’t anticipate. Fix: add the edge case to instructions.
- Process issue — The AI followed instructions correctly, but the process itself needs changing. Fix: update the SOP.
- Platform improvement — Something we need to fix on our end. Fix: engineering work.
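For illustration only, the triage outcome can be pictured as a small discriminated union pairing each category with its fix; the identifiers are assumptions, not Cevro internals:

```typescript
// Hypothetical triage model: each root-cause category maps to the fix
// described in the list above. Identifiers are illustrative assumptions.
type TriageOutcome =
  | { category: "aip_gap"; fix: "update_aip" }
  | { category: "missing_data"; fix: "connect_tool_or_add_field" }
  | { category: "edge_case"; fix: "extend_aip_instructions" }
  | { category: "process_issue"; fix: "update_sop" }
  | { category: "platform_improvement"; fix: "engineering_release" };
```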
Fix is applied
AIP updates are applied and take effect on the next conversation. Platform fixes ship through our release cycle.
The Iteration Cycle
Going live is the beginning, not the end. The feedback loop is how your agent improves: you flag issues in live conversations, fixes are triaged and applied, and the updated agent handles the next round of conversations you review.
Enhanced Analysis with QA Scoring
For operators who want deeper quality analysis beyond thumbs and comments, QA Scoring provides:
- Custom scorecards — Define your own quality criteria with weighted sections and rules (see the sketch after this list)
- Per-rule breakdowns — See exactly which criteria passed or failed, with the AI’s reasoning
- Manual review scales — Score conversations using your own rating system
- Score trends over time — Track quality improvement across agents, brands, and time periods
- 100% coverage — Every conversation scored, not just the ones you flag
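To make weighted sections and rules concrete, here is one possible shape for a scorecard definition, with a tiny scoring helper. Everything here (names, weights, the scoring formula) is an illustrative assumption; Cevro's actual scorecard format may differ:

```typescript
// Hypothetical custom scorecard: weighted sections, each with pass/fail rules.
// Structure and all names are illustrative assumptions.
const scorecard = {
  name: "Support Quality v1",
  sections: [
    {
      name: "Accuracy",
      weight: 0.5, // section weights sum to 1
      rules: [
        { id: "correct-bonus-info", description: "Bonus details match current promotions" },
        { id: "no-invented-policy", description: "No made-up policies or limits" },
      ],
    },
    {
      name: "Tone",
      weight: 0.3,
      rules: [{ id: "brand-voice", description: "Matches the brand voice guide" }],
    },
    {
      name: "Process",
      weight: 0.2,
      rules: [{ id: "escalation", description: "Escalated when the AIP requires it" }],
    },
  ],
};

// Weighted score: each section contributes its rule pass rate times its weight.
// e.g. weightedScore({ Accuracy: 1, Tone: 0.5, Process: 1 }) === 0.85
function weightedScore(passRates: Record<string, number>): number {
  return scorecard.sections.reduce(
    (total, section) => total + section.weight * (passRates[section.name] ?? 0),
    0,
  );
}
```

Under this shape, a conversation that passes every Accuracy and Process rule but only half of the Tone rules would score 0.5 × 1 + 0.3 × 0.5 + 0.2 × 1 = 0.85.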
Tips for Effective Feedback
- Be specific in comments — “The agent gave wrong bonus info” is more actionable than “bad response”
- Reference what you expected — “Should have offered the Weekend Reload instead of saying no bonuses available”
- Flag patterns, not just one-offs — If you see the same issue across multiple conversations, mention that in your feedback
- Use thumbs-up generously — Positive signals are just as important as negative ones for training
- Escalate when stuck — If automated analysis doesn’t help, use the escalation button to get a human in the loop
Related
- Conversation Logs — Where to find and review conversations
- QA Scoring — Set up custom scorecards for systematic quality assurance
- Understanding Metrics — How AICSAT and other scores are calculated
- Going Live — The go-live checklist and what to monitor after launch