How We Prevent Referral Fraud Without Hurting UX
The technical and UX challenges of building a fraud-resistant referral system.
Referral fraud costs businesses millions. Uber once discovered a fraud ring that had generated over $50,000 in fake referral bonuses using bot accounts.
When we built Select, fraud prevention was a core requirement—not an afterthought. But here's the challenge: aggressive fraud prevention destroys user experience. Block too aggressively, and you'll reject legitimate referrals.
Here's how we found the balance.
The Fraud Landscape
Common referral fraud patterns:
1. Self-Referral
Users create multiple accounts to refer themselves.
Signals:
- Same IP address for referrer and referred
- Same device fingerprint
- Similar email patterns (john1@gmail, john2@gmail)
- Account created immediately after clicking referral link
2. Referral Farms
Organized groups that mass-create accounts to harvest referral rewards.
Signals:
- Burst of referrals from same source
- Referred accounts never become active
- Unusual geographic patterns
- Datacenter IP addresses
3. Click Fraud
Bots clicking referral links to manipulate analytics or attribution.
Signals:
- Inhuman click timing
- Missing browser fingerprint data
- Known bot user agents
- Impossible travel (clicks from multiple countries in seconds)
4. Cookie Stuffing
Forcing referral cookies onto users without their knowledge.
Signals:
- Referral clicks without page views
- Impossible referral chains
- Clicks from hidden iframes
Our Multi-Layer Approach
We built fraud prevention as a scoring system, not a binary filter. Each signal contributes to a "risk score," and actions are taken based on thresholds.
Layer 1: IP Intelligence
```
IP Analysis:
├── Is it a datacenter IP? (+30 risk)
├── Is it a known VPN/proxy? (+20 risk)
├── Has this IP referred before? (+10 risk per occurrence)
├── Geographic consistency check
└── IP reputation score (third-party)
```
We don't block VPN users outright—many legitimate users use VPNs. But a datacenter IP combined with other signals raises suspicion.
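In code, this layer boils down to additive checks. A simplified sketch of the idea (the helper names and the reputation weighting are illustrative, not our exact production values):

```typescript
// Illustrative IP-layer scoring: each signal adds to a running risk total.
// The geographic consistency check is omitted here for brevity.
interface IpSignals {
  isDatacenterIp: boolean;
  isKnownVpn: boolean;
  priorReferralCount: number; // times this IP appeared as a referrer before
  reputationScore: number;    // 0 (clean) to 1 (bad), from a third-party feed
}

function scoreIpLayer(signals: IpSignals): number {
  let risk = 0;
  if (signals.isDatacenterIp) risk += 30;
  if (signals.isKnownVpn) risk += 20;
  risk += 10 * signals.priorReferralCount;
  risk += Math.round(20 * signals.reputationScore); // weight is illustrative
  return risk;
}
```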
Layer 2: Device Fingerprinting
We generate a lightweight device fingerprint using:
- Browser characteristics (timezone, language, screen size)
- Canvas rendering patterns
- Audio context fingerprint
- WebGL renderer info
This fingerprint persists across sessions without cookies, making it harder for fraudsters to appear as unique users.
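A simplified sketch of the client-side collection and hashing, showing only a few of the signals above (the exact entropy sources and encoding we use are omitted):

```typescript
// Illustrative only: gather a few browser characteristics and hash them
// client-side, so only the digest ever leaves the device.
async function deviceFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  ctx?.fillText("fingerprint", 2, 10); // canvas rendering pattern

  const parts = [
    Intl.DateTimeFormat().resolvedOptions().timeZone, // timezone
    navigator.language,                               // language
    `${screen.width}x${screen.height}`,               // screen size
    canvas.toDataURL(),                               // canvas output
  ].join("|");

  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(parts)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```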
Privacy note: We hash fingerprints immediately and never store raw device data. The fingerprint is only used for fraud detection, not tracking.
Layer 3: Behavioral Analysis
Real users behave differently than bots and fraudsters:
| Signal | Legitimate | Fraudulent |
|---|---|---|
| Time on page before signup | 30+ seconds | < 5 seconds |
| Mouse movement | Natural curves | Linear/none |
| Form filling | Variable speed | Instant paste |
| Session depth | Multiple pages | Single page |
We track these signals in the SDK and factor them into the risk score.
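A simplified sketch of how those signals might feed the score (the weights and field names here are illustrative, not our production values):

```typescript
// Illustrative behavioral scoring: thresholds mirror the table above.
interface BehaviorSignals {
  secondsOnPageBeforeSignup: number;
  mouseMoveEvents: number;     // 0 for headless or bot sessions
  formFillMs: number;          // time from first keystroke to submit
  pagesViewedInSession: number;
}

function scoreBehaviorLayer(s: BehaviorSignals): number {
  let risk = 0;
  if (s.secondsOnPageBeforeSignup < 5) risk += 20;
  if (s.mouseMoveEvents === 0) risk += 15;
  if (s.formFillMs < 1000) risk += 15; // near-instant paste
  if (s.pagesViewedInSession <= 1) risk += 5;
  return risk;
}
```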
Layer 4: Velocity Limits
Even without fraud signals, unusual velocity is suspicious:
- Max 10 successful referrals per user per day
- Max 3 referrals from same IP per hour
- Referred users must wait 24h before referring others
- Cooldown period after failed referral attempts
These limits rarely affect legitimate users but stop abuse at scale.
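Conceptually these checks are just counters over time windows. A rough sketch (the counter store and key format are illustrative):

```typescript
// Illustrative velocity checks; limits match the list above.
const LIMITS = {
  referralsPerUserPerDay: 10,
  referralsPerIpPerHour: 3,
  newAccountReferralDelayHours: 24,
};

async function passesVelocityChecks(
  counters: { get(key: string): Promise<number> }, // e.g. backed by Redis
  userId: string,
  ip: string,
  accountAgeHours: number
): Promise<boolean> {
  if (accountAgeHours < LIMITS.newAccountReferralDelayHours) return false;
  if ((await counters.get(`referrals:user:${userId}:day`)) >= LIMITS.referralsPerUserPerDay) return false;
  if ((await counters.get(`referrals:ip:${ip}:hour`)) >= LIMITS.referralsPerIpPerHour) return false;
  return true;
}
```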
Layer 5: Network Analysis
We build a graph of referral relationships. Fraudulent networks have distinct patterns:
```
Legitimate Network:
User A → User B → User C
User A → User D
(branching, organic growth)

Fraud Network:
User X → User Y₁, Y₂, Y₃, Y₄, Y₅...
(single referrer, many referred)

User X₁ ↔ User X₂
User X₂ ↔ User X₃
(circular referrals)
```
When we detect suspicious network patterns, we flag the entire cluster for review.
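Two of those patterns are cheap to detect directly on the graph. An illustrative sketch (the fan-out threshold is a placeholder, not our production value):

```typescript
// Flag referrers with implausible fan-out or mutual (circular) referrals.
type ReferralGraph = Map<string, string[]>; // referrerId -> referred userIds

function suspiciousClusters(graph: ReferralGraph, fanOutThreshold = 20): string[] {
  const flagged: string[] = [];
  for (const [referrer, referred] of graph) {
    // Fan-out: one account referring an implausible number of users
    if (referred.length >= fanOutThreshold) flagged.push(referrer);
    // Two-node cycle: A refers B and B refers A
    for (const r of referred) {
      if (graph.get(r)?.includes(referrer)) flagged.push(referrer);
    }
  }
  return [...new Set(flagged)];
}
```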
The Risk Scoring System
Each referral gets a risk score from 0-100:
| Score | Action |
|---|---|
| 0-20 | Approved automatically |
| 21-50 | Approved with monitoring |
| 51-75 | Held for manual review |
| 76-100 | Rejected automatically |
Most legitimate referrals score below 15. Most fraud attempts score above 70. The tricky cases are in between.
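In code, the mapping from score to action is a straightforward threshold check, roughly:

```typescript
// Thresholds mirror the table above; action names are illustrative.
type ReferralAction = "approve" | "approve_and_monitor" | "manual_review" | "reject";

function actionForScore(score: number): ReferralAction {
  if (score <= 20) return "approve";
  if (score <= 50) return "approve_and_monitor";
  if (score <= 75) return "manual_review";
  return "reject";
}
```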
Protecting UX
Here's where most fraud systems fail: they optimize for catching fraud at the expense of user experience.
Our principles:
1. Never Block Silently
If a referral is rejected, we tell the user why (in generic terms) and provide a path forward.
Bad: Referral just doesn't work, no explanation.
Good: "We couldn't verify this referral. If you believe this is an error, contact support."
2. Err on the Side of Approval
For borderline cases, we approve the referral but flag it for review. The user gets their reward immediately, and we investigate in the background.
If we later confirm fraud, we can revoke rewards. But false positives hurt more than false negatives.
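One way to model this is a referral lifecycle in which approval and reward issuance are reversible. A simplified sketch (state and field names are illustrative):

```typescript
// Illustrative referral record: approve first, keep the option to revoke later.
type ReferralState = "approved" | "approved_flagged" | "under_review" | "revoked";

interface ReferralRecord {
  id: string;
  riskScore: number;
  state: ReferralState;
  rewardIssuedAt?: Date;
  revokedAt?: Date; // set only if fraud is confirmed after the fact
}
```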
3. Graduated Responses
First-time suspicious behavior gets a warning. Repeated patterns trigger restrictions. Only confirmed fraud results in account suspension.
4. Fast Appeals
Users can appeal rejected referrals. Our target: respond to appeals within 24 hours.
The Results
After implementing this system:
| Metric | Before | After |
|---|---|---|
| Fraud rate | 8.3% | 0.4% |
| False positive rate | N/A | 0.1% |
| User complaints (fraud-related) | 12/week | 1/week |
| Avg. time to detect fraud | 7 days | < 1 hour |
We cut the fraud rate by roughly 95% while holding the false positive rate to 0.1%.
What We Don't Do
Some fraud prevention tactics we deliberately avoid:
Open Questions
Fraud prevention is an arms race. Areas we're still improving:
- AI-generated content - Bot accounts that use LLMs to produce realistic activity are harder to detect
- Residential proxies - Fraudsters using real residential IPs are harder to detect
- Slow-burn fraud - Patterns that only emerge over weeks of normal-looking activity
For Developers
If you're building your own system, start simple.
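A minimal starting point might combine one IP check with one velocity check and threshold the result (the helper names here are placeholders for your own data layer):

```typescript
// Illustrative starter: one IP-intelligence check plus one velocity check.
async function basicReferralRisk(
  ip: string,
  db: {
    isDatacenterIp(ip: string): Promise<boolean>;
    referralsFromIpLastHour(ip: string): Promise<number>;
  }
): Promise<number> {
  let risk = 0;
  if (await db.isDatacenterIp(ip)) risk += 30;                 // Layer 1
  if ((await db.referralsFromIpLastHour(ip)) >= 3) risk += 25; // Layer 4
  return risk; // reject above a single threshold; review the rest manually
}
```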
Don't build everything at once. Start with Layer 1 (IP intelligence) and add complexity as you encounter new fraud patterns.
Select's fraud prevention is built-in and automatic. Start your free trial and let us handle the hard parts.