Your AI agent gets better when you tell it what’s working and what isn’t. The feedback system is how you communicate directly with the Cevro team about your agent’s performance — and it’s the single most impactful thing you can do after going live.

How Feedback Works

Every conversation in your logs is reviewable, and you can leave feedback at two levels:

Message-Level: Thumbs Up / Thumbs Down

On any AI-generated message, you can give a thumbs up or thumbs down.
  • Thumbs up tells us the AI got it right — the tone, the information, the approach. We use this to reinforce good behavior.
  • Thumbs down flags something you didn’t like — a wrong answer, poor tone, missing information, or an action that shouldn’t have happened.
Don’t just flag problems. A thumbs-up on messages that impressed you is equally valuable — it helps us understand what “good” looks like for your brand so we can reinforce it across all conversations.
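If you mirror this feedback in your own tooling, a message-level signal reduces to a small record. The sketch below is purely illustrative; the type and field names are our assumptions, not Cevro's actual data model.

```typescript
// Hypothetical shape of a message-level feedback record.
// Field names are assumptions for illustration, not Cevro's schema.
type ThumbRating = "up" | "down";

interface MessageFeedback {
  conversationId: string; // the ticket the message belongs to
  messageId: string;      // the AI-generated message being rated
  rating: ThumbRating;    // thumbs up reinforces; thumbs down flags
  flaggedAt: Date;        // when the operator left the rating
}

// Example: flagging a wrong answer for later triage.
const feedback: MessageFeedback = {
  conversationId: "conv_123",
  messageId: "msg_456",
  rating: "down",
  flaggedAt: new Date(),
};
```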

Ticket-Level: Score and Comment

Beyond individual messages, you can score the entire conversation:
  • Override the AICSAT score — Click the score badge to assign your own rating (1–10), overriding the automated quality score
  • Review the ticket — Click Review Ticket to open a detailed scoring modal where you can rate per-section and leave comments explaining your assessment
Comments are especially valuable. A thumbs-down tells us something was wrong — a comment tells us what and why.
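A ticket-level review combines the score override with the per-section ratings and comments from the review modal. The following is a minimal sketch of what that might look like as data, assuming hypothetical section names and fields; only the 1–10 override scale comes from the docs above.

```typescript
// Hypothetical shape of a ticket-level review. The AICSAT override is a
// 1-10 rating; section names and fields are illustrative assumptions.
interface SectionReview {
  section: string;  // e.g. "Tone", "Accuracy", "Resolution"
  score: number;    // rating for this section
  comment?: string; // the "what and why" behind the score
}

interface TicketReview {
  conversationId: string;
  scoreOverride: number;     // manual 1-10 rating replacing the automated AICSAT score
  sections: SectionReview[]; // per-section breakdown from the review modal
  comment?: string;          // overall assessment
}

const review: TicketReview = {
  conversationId: "conv_123",
  scoreOverride: 4,
  sections: [
    { section: "Accuracy", score: 2, comment: "Quoted the wrong bonus terms." },
    { section: "Tone", score: 8 },
  ],
  comment: "Right tone, wrong information; AIP likely missing bonus rules.",
};
```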

Feedback Analysis

When you leave a thumbs-down, something happens immediately: the Cevro Feedback Assistant appears in a chat widget at the bottom of your screen. This AI assistant:
  1. Acknowledges your feedback — confirms what you flagged
  2. Analyzes the conversation — reviews what the AI did and identifies the likely issue
  3. Suggests a fix — often it’s an AIP instruction gap, a missing tool, or an edge case
You can chat back and forth with the assistant (up to 5 messages) to clarify the issue or provide more context.
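One way to picture the assistant's output is as a three-part analysis matching the acknowledge/analyze/suggest flow, with the back-and-forth capped at five messages. This is a sketch under our own assumptions; the type names and cap-enforcement logic are illustrative, not Cevro internals.

```typescript
// Hypothetical model of a Feedback Assistant exchange.
type LikelyCause = "aip_gap" | "missing_tool" | "edge_case";

interface AssistantAnalysis {
  acknowledgement: string;  // confirms what you flagged
  likelyCause: LikelyCause; // what the conversation review points to
  suggestedFix: string;     // e.g. an AIP instruction to add
}

const analysis: AssistantAnalysis = {
  acknowledgement: "You flagged this reply for giving wrong bonus information.",
  likelyCause: "aip_gap",
  suggestedFix: "Add the current bonus terms to the AIP's promotions instructions.",
};

const MAX_OPERATOR_MESSAGES = 5; // the back-and-forth is capped at 5 messages

function canReply(operatorMessagesSent: number): boolean {
  return operatorMessagesSent < MAX_OPERATOR_MESSAGES;
}
```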

Escalate to a Human

If the automated analysis isn’t enough — or if you want to discuss the issue directly — you can escalate from the feedback chat to connect with a Cevro engineer or CSM. This creates a live thread where your team and ours can collaborate on the fix.
This is the fastest way to get a human from Cevro looking at a specific conversation. Use it whenever you see something that needs expert attention.

What Happens with Your Feedback

Every piece of feedback is reviewed. Here’s the typical lifecycle:
1. You flag an issue — Thumbs-down on a message, a low score, or a comment explaining what went wrong.
2. Feedback is triaged — Your feedback appears in our internal review queue. The Cevro team categorizes it and identifies the root cause.
3. Root cause identified — Most issues fall into a few categories (a hypothetical mapping is sketched after these steps):
  • AIP gap — The AI Procedure instructions didn’t cover this scenario. Fix: update the AIP.
  • Missing data — The agent didn’t have access to the information it needed. Fix: connect a tool or add a field.
  • Edge case — A situation the AIP didn’t anticipate. Fix: add the edge case to instructions.
  • Process issue — The AI followed instructions correctly, but the process itself needs changing. Fix: update the SOP.
  • Platform improvement — Something we need to fix on our end. Fix: engineering work.
4. Fix is applied — AIP updates take effect on the next conversation. Platform fixes ship through our release cycle.
5. You see the improvement — Monitor subsequent conversations to verify the fix. If it’s not right, flag it again — iteration is the name of the game.
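To make the triage buckets in step 3 concrete, here is a hypothetical mapping from root-cause category to the kind of fix each one triggers. The identifiers are our assumptions for illustration, not Cevro's taxonomy.

```typescript
// Hypothetical mapping of root-cause categories to fixes, matching step 3.
type RootCause =
  | "aip_gap"
  | "missing_data"
  | "edge_case"
  | "process_issue"
  | "platform_improvement";

const fixFor: Record<RootCause, string> = {
  aip_gap: "Update the AIP instructions",
  missing_data: "Connect a tool or add a field",
  edge_case: "Add the edge case to the instructions",
  process_issue: "Update the SOP",
  platform_improvement: "Engineering work shipped via the release cycle",
};
```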

The Iteration Cycle

Going live is the beginning, not the end. The feedback loop is how your agent improves:
Go live → Monitor conversations → Leave feedback → Cevro fixes → Agent improves → Repeat
The operators who get the best automation rates are the ones who actively review conversations and provide feedback in the first few weeks. It’s like onboarding a new hire — the more guidance you give early on, the faster they ramp up.
Don’t wait for problems to come to you. Proactively review a sample of conversations daily, especially in the first 2 weeks after going live. Check both escalated tickets (where something went wrong) and automated tickets (to confirm quality).

Enhanced Analysis with QA Scoring

For operators who want deeper quality analysis beyond thumbs and comments, QA Scoring provides:
  • Custom scorecards — Define your own quality criteria with weighted sections and rules
  • Per-rule breakdowns — See exactly which criteria passed or failed, with the AI’s reasoning
  • Manual review scales — Score conversations using your own rating system
  • Score trends over time — Track quality improvement across agents, brands, and time periods
  • 100% coverage — Every conversation scored, not just the ones you flag
QA Scoring transforms your feedback from ad-hoc observations into systematic quality assurance.
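For a sense of what a custom scorecard with weighted sections and rules could look like, here is a minimal sketch. The structure, names, and weighting scheme are assumptions for illustration, not Cevro's scorecard format.

```typescript
// Hypothetical QA scorecard: weighted sections, each with pass/fail rules.
interface Rule {
  id: string;
  description: string; // the criterion the conversation is graded against
}

interface Section {
  name: string;
  weight: number; // section weights should sum to 1
  rules: Rule[];
}

const scorecard: Section[] = [
  {
    name: "Accuracy",
    weight: 0.5,
    rules: [
      { id: "acc-1", description: "Bonus terms quoted match current promotions" },
      { id: "acc-2", description: "No unsupported claims about account status" },
    ],
  },
  {
    name: "Tone",
    weight: 0.3,
    rules: [{ id: "tone-1", description: "Matches the brand voice guide" }],
  },
  {
    name: "Process",
    weight: 0.2,
    rules: [{ id: "proc-1", description: "Escalated when required by the SOP" }],
  },
];

// A weighted score sums each section's pass rate times its weight.
function weightedScore(passRates: Record<string, number>): number {
  return scorecard.reduce((sum, s) => sum + (passRates[s.name] ?? 0) * s.weight, 0);
}
```

In a setup like this, the per-rule breakdown comes from grading each rule individually, and the trend charts follow from scoring every conversation rather than only the flagged ones.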

Tips for Effective Feedback

  • Be specific in comments — “The agent gave wrong bonus info” is more actionable than “bad response”
  • Reference what you expected — “Should have offered the Weekend Reload instead of saying no bonuses available”
  • Flag patterns, not just one-offs — If you see the same issue across multiple conversations, mention that in your feedback
  • Use thumbs-up generously — Positive signals are just as important as negative ones for training
  • Escalate when stuck — If automated analysis doesn’t help, use the escalation button to get a human in the loop