Defining Evaluation Criteria

Set up the rules and weights that determine how submissions are scored — from simple thresholds to multi-factor analysis.

Overview

Evaluation criteria are the foundation of your grader's intelligence. They define how form submissions are analyzed, weighted, and scored to determine lead quality. This guide covers setting up effective criteria that align with your sales process.

Criteria Types

1. Field-Based Criteria

Direct evaluation of form field values:

{
  "field": "company",
  "condition": "contains",
  "values": ["enterprise", "corporation", "international"],
  "weight": 25,
  "description": "Large company indicators"
}

Common Field Patterns:

  • Email Domain - Score based on company email domains
  • Job Title - Weight decision-maker titles higher
  • Company Size - Prefer specific employee ranges
  • Industry - Target or avoid certain sectors
  • Location - Geographic preferences
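As a concrete sketch, a rule in the shape shown above could be evaluated against a submission like this (the `evaluateFieldRule` helper is hypothetical, not part of the product API):

```javascript
// Illustrative evaluator for a field-based criterion, mirroring the
// JSON rule shape shown above. Hypothetical helper, for illustration only.
function evaluateFieldRule(rule, submission) {
  const value = String(submission[rule.field] || '').toLowerCase();
  switch (rule.condition) {
    case 'contains':
      // Award the rule's weight if any listed keyword appears in the value.
      return rule.values.some((v) => value.includes(v)) ? rule.weight : 0;
    default:
      return 0; // Unknown conditions contribute nothing.
  }
}

const rule = {
  field: 'company',
  condition: 'contains',
  values: ['enterprise', 'corporation', 'international'],
  weight: 25,
};

const points = evaluateFieldRule(rule, { company: 'Globex International' });
// points === 25
```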

2. Behavioral Criteria

Analyze patterns and engagement:

  • Form Completion Rate - How many fields were filled
  • Response Quality - Length and detail of text responses
  • Time on Page - Engagement before submission
  • Return Visits - Multiple touchpoints
  • Download History - Content engagement
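Of these, form completion rate is the simplest to compute. A minimal sketch, assuming submissions arrive as plain objects and the field list is known (field names here are examples, not a fixed schema):

```javascript
// Sketch: form completion rate = filled fields / total fields.
// Treats null, undefined, and whitespace-only values as unfilled.
function completionRate(submission, fields) {
  const filled = fields.filter(
    (f) => submission[f] != null && String(submission[f]).trim() !== ''
  ).length;
  return filled / fields.length;
}

const fields = ['name', 'email', 'company', 'phone', 'message'];
const rate = completionRate(
  { name: 'Ada', email: 'ada@example.com', company: 'Acme', phone: '', message: null },
  fields
);
// rate === 0.6 (3 of 5 fields filled)
```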

3. Derived Criteria

Calculated values from multiple fields:

// Example: Company maturity score
if (website && employees > 50 && revenue) {
  score += 20; // Established company
}

// Email domain reputation
if (email.endsWith('.edu')) {
  score += 10; // Educational discount interest
} else if (email.endsWith('.gov')) {
  score += 15; // Government prospect
}

Setting Up Criteria

Step 1: Define Your Ideal Customer Profile (ICP)

Before creating criteria, document your ICP:

| Attribute | Ideal | Good | Poor |
| --- | --- | --- | --- |
| Company Size | 100-500 employees | 50-100 or 500+ | <50 employees |
| Industry | SaaS, Tech, Healthcare | Financial, Retail | Agriculture, Non-profit |
| Job Title | VP, Director, Manager | Coordinator, Analyst | Intern, Student |
| Budget | $10k+ annually | $5k-10k | <$5k |
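As an illustration, the Company Size row of the table might translate into a scoring function like this (the point values 20/10/0 are illustrative, not prescribed):

```javascript
// Sketch: score company size against the ICP tiers in the table above.
// Point values are illustrative; tune them to your own weighting.
function companySizeScore(employees) {
  if (employees >= 100 && employees <= 500) return 20; // Ideal tier
  if ((employees >= 50 && employees < 100) || employees > 500) return 10; // Good tier
  return 0; // Poor tier: under 50 employees
}
```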

Step 2: Map Criteria to Form Fields

For each form field, define scoring rules:

Email Field Criteria

{
  "field": "email",
  "rules": [
    {
      "condition": "domain_in",
      "values": ["fortune500domains.com"],
      "points": 30,
      "label": "Fortune 500 Company"
    },
    {
      "condition": "domain_type",
      "values": ["business"],
      "points": 15,
      "label": "Business Email"
    },
    {
      "condition": "domain_type", 
      "values": ["personal"],
      "points": -10,
      "label": "Personal Email"
    }
  ]
}
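The `domain_type` condition above implies some way of classifying an email's domain. A minimal sketch, assuming a hand-maintained list of free-mail providers (the list here is illustrative and far from exhaustive):

```javascript
// Sketch: classify an email's domain as 'personal' or 'business'.
// The free-provider list is illustrative, not complete.
const PERSONAL_DOMAINS = new Set([
  'gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com',
]);

function domainType(email) {
  const domain = (email.split('@')[1] || '').toLowerCase();
  return PERSONAL_DOMAINS.has(domain) ? 'personal' : 'business';
}
```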

Company Field Criteria

{
  "field": "company",
  "rules": [
    {
      "condition": "keywords",
      "values": ["enterprise", "corporation", "solutions", "systems"],
      "points": 20,
      "label": "Enterprise Keywords"
    },
    {
      "condition": "length",
      "min": 3,
      "points": 10,
      "label": "Valid Company Name"
    }
  ]
}

Step 3: Weight Your Criteria

Assign importance weights to each criterion:

| Criterion | Weight | Rationale |
| --- | --- | --- |
| Company Email Domain | 25% | Strong indicator of company size |
| Job Title Seniority | 20% | Decision-making authority |
| Industry Match | 15% | Product-market fit |
| Company Size | 15% | Budget capability |
| Form Completion | 10% | Engagement level |
| Geographic Location | 10% | Sales territory alignment |
| Other Factors | 5% | Miscellaneous signals |
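With weights in place, the total score is the weighted sum of per-criterion sub-scores. A sketch, assuming each criterion yields a 0-100 sub-score (the property names are illustrative):

```javascript
// Sketch: combine per-criterion sub-scores (0-100) into a weighted total.
// Weights mirror the table above and sum to 1.0 (i.e., 100%).
const WEIGHTS = {
  emailDomain: 0.25,
  titleSeniority: 0.20,
  industryMatch: 0.15,
  companySize: 0.15,
  formCompletion: 0.10,
  location: 0.10,
  other: 0.05,
};

function weightedScore(subScores) {
  return Object.entries(WEIGHTS).reduce(
    (total, [name, weight]) => total + weight * (subScores[name] || 0),
    0
  );
}

const score = weightedScore({
  emailDomain: 80, titleSeniority: 100, industryMatch: 60,
  companySize: 40, formCompletion: 100, location: 50, other: 0,
});
// score ≈ 70 for this example
```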

Advanced Criteria Patterns

Multi-Field Conditional Logic

// High-value prospect detection
// Note: String.prototype.includes takes a single search string,
// so keyword lists are checked with Array.prototype.some.
const seniorTitles = ['VP', 'Director', 'Chief'];
const targetIndustries = ['Technology', 'Healthcare'];
if (seniorTitles.some((t) => jobTitle.includes(t)) &&
    companySize > 100 &&
    targetIndustries.includes(industry)) {
  score += 40; // Premium prospect
}

// Risk mitigation
const freeMailDomains = ['gmail.com', 'yahoo.com'];
if (freeMailDomains.some((d) => email.endsWith(d)) &&
    !website &&
    !phone) {
  score -= 20; // Potential spam/low quality
}

Industry-Specific Scoring

{
  "industries": {
    "healthcare": {
      "keywords": ["hospital", "medical", "clinic", "health"],
      "job_titles": ["CTO", "IT Director", "CMIO"],
      "company_size_weight": 1.5,
      "base_score": 60
    },
    "manufacturing": {
      "keywords": ["manufacturing", "factory", "production"],
      "job_titles": ["Plant Manager", "Operations Director"],
      "geographic_preference": ["midwest", "south"],
      "base_score": 55
    }
  }
}
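One way configuration like this could be consumed, sketched here with a trimmed-down config and a hypothetical keyword lookup:

```javascript
// Sketch: pick an industry base score by matching keywords against the
// company name. Config mirrors the JSON above, trimmed to the fields used.
const INDUSTRIES = {
  healthcare: {
    keywords: ['hospital', 'medical', 'clinic', 'health'],
    base_score: 60,
  },
  manufacturing: {
    keywords: ['manufacturing', 'factory', 'production'],
    base_score: 55,
  },
};

function industryBaseScore(companyName) {
  const name = companyName.toLowerCase();
  for (const { keywords, base_score } of Object.values(INDUSTRIES)) {
    if (keywords.some((k) => name.includes(k))) return base_score;
  }
  return 0; // No industry match.
}
```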

Time-Based Scoring

// Urgency indicators
const urgencyKeywords = ['urgent', 'asap', 'immediate', 'deadline'];
if (urgencyKeywords.some((k) => message.toLowerCase().includes(k))) {
  score += 15; // Time-sensitive opportunity
}

// Seasonal adjustments (getMonth() is 0-based, so Oct-Dec map to quarter 4)
const quarter = Math.floor(new Date().getMonth() / 3) + 1;
if (quarter === 4 && industry === 'retail') {
  score += 10; // Year-end budget cycles
}

Testing and Validation

A/B Testing Criteria

  1. Split Traffic - Direct 50% of submissions to each criteria set
  2. Track Conversion - Monitor which criteria produce better leads
  3. Statistical Significance - Wait for adequate sample size
  4. Implementation - Roll out winning criteria to all traffic
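Step 1's split works best when it is deterministic, so a given lead always lands in the same criteria set. A sketch using a simple string hash (any stable hash would do; this one is illustrative only):

```javascript
// Sketch: deterministic 50/50 variant assignment by hashing a submission ID.
function assignVariant(submissionId) {
  let hash = 0;
  for (const ch of String(submissionId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // Keep as unsigned 32-bit.
  }
  return hash % 2 === 0 ? 'A' : 'B';
}
```

Because the assignment depends only on the ID, re-scoring the same submission never flips its variant mid-test.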

Validation Metrics

| Metric | Target | Calculation |
| --- | --- | --- |
| Precision | >70% | True positives / All positive predictions |
| Recall | >80% | True positives / All actual positives |
| F1 Score | >75% | 2 × (Precision × Recall) / (Precision + Recall) |
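These metrics follow directly from confusion-matrix counts. A sketch:

```javascript
// Sketch: precision, recall, and F1 from true-positive (tp),
// false-positive (fp), and false-negative (fn) counts.
function validationMetrics(tp, fp, fn) {
  const precision = tp / (tp + fp); // True positives / all positive predictions
  const recall = tp / (tp + fn);    // True positives / all actual positives
  const f1 = (2 * precision * recall) / (precision + recall);
  return { precision, recall, f1 };
}

const m = validationMetrics(80, 20, 10);
// precision = 0.8, recall ≈ 0.889, f1 ≈ 0.842
```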

Historical Analysis

Review past leads to calibrate criteria:

-- Example analysis query: conversion rate of high-scoring leads (last 90 days)
SELECT
  score_range,
  AVG(score) AS avg_score,
  COUNT(*) AS total_leads,
  SUM(CASE WHEN converted = 1 THEN 1 ELSE 0 END) AS conversions,
  SUM(CASE WHEN converted = 1 THEN 1 ELSE 0 END) * 100.0 / COUNT(*) AS conversion_rate
FROM submissions
WHERE score_range = 'high'
  AND created_date > DATE_SUB(NOW(), INTERVAL 90 DAY)
GROUP BY score_range;

Best Practices

1. Start Simple, Iterate Complex

Begin with basic field-based criteria, then add behavioral and derived patterns as you gather data.

2. Regular Review Cycles

  • Weekly - Review top/bottom performers
  • Monthly - Analyze conversion trends
  • Quarterly - Major criteria adjustments

3. Negative Scoring

Don't just reward good signals — penalize bad ones:

  • Personal email addresses
  • Generic job titles
  • Incomplete submissions
  • Suspicious patterns

4. Score Distribution Balance

Aim for this distribution:

  • Hot Leads (80-100): 10-15%
  • Warm Leads (50-79): 60-70%
  • Cold Leads (0-49): 15-25%
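To check your actual distribution against these targets, bucket historical scores and compare the shares. A sketch using the thresholds above:

```javascript
// Sketch: bucket scores into hot/warm/cold shares using the
// thresholds above (hot >= 80, warm 50-79, cold < 50).
function distribution(scores) {
  const counts = { hot: 0, warm: 0, cold: 0 };
  for (const s of scores) {
    if (s >= 80) counts.hot += 1;
    else if (s >= 50) counts.warm += 1;
    else counts.cold += 1;
  }
  const n = scores.length || 1; // Avoid division by zero on empty input.
  return { hot: counts.hot / n, warm: counts.warm / n, cold: counts.cold / n };
}

const shares = distribution([90, 70, 60, 30]);
// shares => { hot: 0.25, warm: 0.5, cold: 0.25 }
```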

Common Pitfalls

Over-Fitting

Creating too many specific rules that don't generalize:

// Bad: Too specific
if (company === 'Acme Corp' && jobTitle === 'VP Marketing') {
  score += 50; // Won't help with other prospects
}

// Good: Generalizable pattern
if (companySize > 100 && ['VP', 'Director'].some((t) => jobTitle.includes(t))) {
  score += 25; // Applies to many prospects
}

Bias Introduction

Unconsciously encoding preferences that limit opportunities:

  • Geographic bias
  • Industry prejudice
  • Company size assumptions
  • Title hierarchy assumptions

Insufficient Data

Making criteria changes based on small sample sizes. Ensure you have at least 100 submissions before major adjustments.

Next Steps

After setting up evaluation criteria, validate them against historical submissions and monitor the metrics above before rolling changes out broadly.

Troubleshooting

All Leads Scoring Too High/Low

  • Review weight distribution
  • Check for missing negative criteria
  • Validate against historical data
  • Consider industry benchmarks

Inconsistent Scoring

  • Audit criteria conflicts
  • Simplify complex conditional logic
  • Test with sample data
  • Review field mapping accuracy

Support Resources

  • Criteria Templates - Industry-specific starting points
  • Scoring Simulator - Test criteria with sample data
  • Best Practices Guide - Advanced patterns and techniques
  • Expert Consultation - Schedule a criteria review session