
Sex Work Safety Protocol: A Ready-to-Implement Specification


Executive Summary

This is a complete, ready-to-build system for sex worker collective safety. It provides pseudonymous reputation tracking, verification codes, and mathematical protection against retaliation—without becoming a marketplace or collecting identity data.

1. What You're Building

1.1 Core Purpose

  • For sellers: Screen buyers safely before meeting

  • For buyers: Build reputation through safe, reliable behavior

  • For the collective: Share safety intelligence without exposure

1.2 What It Is NOT

  • ❌ A dating site or escort directory

  • ❌ A booking platform

  • ❌ A payment processor

  • ❌ A social network

  • ❌ An advertising platform

It's screening infrastructure only.

2. The Mathematical Core (Non-Negotiable)

2.1 How Reputation Works

Each buyer has two scores calculated from seller ratings:

Safety Score (S):

text
S = 25th percentile of all "Safe?" ratings (0-1)

What's the worst 25% of this buyer's safety behavior?

Reliability Score (R):

text
R = 25th percentile of all "Reliable?" ratings (0-1)

What's the worst 25% of this buyer's reliability?

Trust Coefficient (T):

text
T = S × R

Combined safety AND reliability

Influence Weight (I):

text
I = T³

Buyer's impact on seller reputations (cubic damping)

2.2 Why This Math Matters

  • 25th percentile: Shows pattern behavior, not averages

  • Multiplication (S×R): Requires both safety AND reliability

  • Cubic damping (T³): Unsafe buyers lose influence rapidly

Example: a buyer with T=0.8 has I ≈ 0.51 (about half influence); a buyer with T=0.5 has I ≈ 0.13 (almost none). Retaliatory down-rating by unsafe or unreliable buyers is therefore mathematically neutered: their ratings carry almost no weight.
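
As a minimal sketch (assuming ratings are stored as lists of 0/1 flags; `trust_metrics` is an illustrative helper, not part of the spec), the four quantities can be computed and checked against the numbers above:

```python
import numpy as np

def trust_metrics(safe_flags, reliable_flags):
    """Compute S, R, T, I from lists of 0/1 rating flags (sketch)."""
    S = float(np.percentile(safe_flags, 25))      # worst-quartile safety
    R = float(np.percentile(reliable_flags, 25))  # worst-quartile reliability
    T = S * R   # requires both safety AND reliability
    I = T ** 3  # cubic damping of rating influence
    return S, R, T, I

# Two bad safety flags out of eight drag the worst quartile down hard,
# even though three quarters of the ratings are positive:
S, R, T, I = trust_metrics([0, 0, 1, 1, 1, 1, 1, 1], [1] * 8)
```

Note how the 25th percentile reacts to a pattern of bad flags that a simple average would soften.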

3. Verification System (Anti-Fraud)

3.1 The Code Flow

text
1. Seller and buyer meet (arranged off-platform)
2. After meeting, seller generates 6-digit code
3. Seller gives code to buyer
4. Buyer uses code to rate seller
5. Code invalidated after use

No code = no rating. Prevents fake reviews, Sybil attacks, rating manipulation.
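
Code generation in step 2 can be sketched with a CSPRNG; the dict keys loosely mirror the verification_codes table in section 5, but the function and field names here are illustrative, not the real implementation:

```python
import secrets
from datetime import datetime, timedelta, timezone

def generate_code(valid_days=30):
    """Generate one single-use 6-digit verification code (sketch)."""
    now = datetime.now(timezone.utc)
    return {
        "code": f"{secrets.randbelow(1_000_000):06d}",  # 000000-999999
        "generated_at": now,
        "expires_at": now + timedelta(days=valid_days),  # 30-day expiry
        "used_at": None,  # stamped on redemption; enforces single use
    }
```

Using `secrets` rather than `random` matters: codes must be unpredictable or the no-code-no-rating guarantee fails.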

3.2 Code Characteristics

  • 6 digits (1,000,000 combinations)

  • Single-use only

  • Expires after 30 days (unused)

  • Rate-limited (5 attempts/hour)

  • Cannot be traced to buyer identity
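
The redemption rules above (single-use, 30-day expiry, 5 attempts/hour) can be sketched together; this in-memory version is an assumption about how enforcement might look, and a real deployment would back it with the database:

```python
import time
from collections import defaultdict, deque

class CodeRedeemer:
    """Sketch of redemption rules: single-use, expiring, rate-limited."""
    MAX_ATTEMPTS = 5   # attempts per buyer
    WINDOW = 3600.0    # one hour, in seconds

    def __init__(self):
        self.codes = {}                      # code -> expiry timestamp
        self.attempts = defaultdict(deque)   # buyer_id -> attempt times

    def issue(self, code, ttl=30 * 24 * 3600):
        self.codes[code] = time.time() + ttl

    def redeem(self, buyer_id, code, now=None):
        now = time.time() if now is None else now
        window = self.attempts[buyer_id]
        while window and now - window[0] > self.WINDOW:
            window.popleft()                 # drop attempts older than 1h
        if len(window) >= self.MAX_ATTEMPTS:
            return "rate_limited"
        window.append(now)
        expiry = self.codes.pop(code, None)  # pop => single-use
        if expiry is None:
            return "invalid"
        if now > expiry:
            return "expired"
        return "ok"
```

Counting every attempt, valid or not, is what makes brute-forcing a 6-digit space impractical at 5 tries/hour.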

4. User Workflows

4.1 Seller Flow

text
1. Buyer contacts seller (off-platform ad/website)
2. Seller asks for buyer's platform pseudonym
3. Seller looks up buyer reputation:
   - Safety score (0-1)
   - Reliability score (0-1)
   - Trust coefficient (0-1)
   - Influence weight (0-1)
   - Sample size + confidence interval
4. Seller decides: accept/decline
5. If accept → meet off-platform
6. After meeting → generate code, optionally rate buyer
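
The spec calls for "sample size + confidence interval" in step 3 without mandating a method. One reasonable choice, assuming the flags are binary, is the Wilson score interval, sketched here:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (sketch)."""
    if n == 0:
        return (0.0, 1.0)  # no data: maximally uncertain
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, centre - half), min(1.0, centre + half))
```

Unlike a naive ±band, the Wilson interval widens honestly for small samples, which is exactly the signal a seller screening a lightly-rated buyer needs.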

4.2 Buyer Flow

text
1. Create pseudonymous account
2. Contact sellers through their external channels
3. Share pseudonym when asked
4. Meet seller (if accepted)
5. Get code from seller after meeting
6. Use code to rate seller (optional)

5. Technical Implementation (Minimal)

5.1 Database Schema (Simplified)

sql
-- Users table
CREATE TABLE users (
    id UUID PRIMARY KEY,
    pseudonym TEXT UNIQUE NOT NULL,
    account_type TEXT CHECK (account_type IN ('seller', 'buyer')),  -- portable; PostgreSQL has no inline ENUM
    created_at TIMESTAMP,
    last_active TIMESTAMP
);

-- Ratings table
CREATE TABLE ratings (
    id UUID PRIMARY KEY,
    rater_id UUID REFERENCES users(id),
    rated_id UUID REFERENCES users(id),
    safe BOOLEAN,  -- NULL if not rated
    reliable BOOLEAN,  -- NULL if not rated
    created_at TIMESTAMP
);

-- Verification codes
CREATE TABLE verification_codes (
    id UUID PRIMARY KEY,
    code CHAR(6) UNIQUE,
    seller_id UUID REFERENCES users(id),
    generated_at TIMESTAMP,
    expires_at TIMESTAMP,
    used_at TIMESTAMP NULL,
    used_by_buyer_id UUID REFERENCES users(id) NULL
);

5.2 Core API Endpoints

text
# Seller endpoints
GET    /api/buyer/:pseudonym/reputation
POST   /api/codes/generate
POST   /api/ratings/submit
GET    /api/codes/history
GET    /api/messages  # Seller-to-seller only

# Buyer endpoints
POST   /api/ratings/verify_code
POST   /api/ratings/submit_rating

# Public endpoint (no auth)
GET    /api/seller/:pseudonym/profile  # Minimal verification page
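
A framework-agnostic sketch of the reputation lookup behind `GET /api/buyer/:pseudonym/reputation`; the `lookup_scores` dependency is an assumed helper returning `(S, R, sample_size)` or `None`, and routing/auth wiring is omitted:

```python
def buyer_reputation_handler(pseudonym, lookup_scores):
    """Handler body for the buyer reputation endpoint (sketch)."""
    result = lookup_scores(pseudonym)
    if result is None:
        return {"status": 404, "error": "unknown pseudonym"}
    S, R, n = result
    if S is None or R is None:
        # New buyer: no ratings yet, zero influence
        return {"status": 200, "safety": None, "reliability": None,
                "trust": None, "influence": 0.0, "sample_size": n}
    T = S * R
    return {"status": 200, "safety": S, "reliability": R,
            "trust": T, "influence": T ** 3, "sample_size": n}
```

Injecting `lookup_scores` keeps the handler testable without a database and independent of whether the pilot lands on Django or Flask.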

5.3 Reputation Calculation (Python Example)

python
import numpy as np

def calculate_safety_score(buyer_id):
    # Get all non-null "safe" flags for this buyer
    ratings = Rating.objects.filter(rated_id=buyer_id, safe__isnull=False)
    safe_flags = [r.safe for r in ratings]

    if not safe_flags:
        return None  # No data

    # 25th percentile of the 0/1 flags
    return np.percentile(safe_flags, 25)

def calculate_reliability_score(buyer_id):
    # Same calculation over the "reliable" flags
    ratings = Rating.objects.filter(rated_id=buyer_id, reliable__isnull=False)
    reliable_flags = [r.reliable for r in ratings]

    if not reliable_flags:
        return None  # No data

    return np.percentile(reliable_flags, 25)

def calculate_influence_weight(buyer_id):
    S = calculate_safety_score(buyer_id)
    R = calculate_reliability_score(buyer_id)

    if S is None or R is None:
        return 0.0  # New buyer, no influence

    T = S * R
    I = T ** 3  # Cubic damping

    return I
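
The spec defines I as a buyer's "impact on seller reputations" without pinning down how it is applied. One natural reading, sketched here as an assumption rather than the specified mechanism, is an influence-weighted average of buyer ratings of a seller:

```python
def seller_score(ratings):
    """Influence-weighted mean of buyer->seller rating flags (sketch).

    `ratings` is a list of (flag, influence_weight) pairs; exactly how
    I feeds into seller reputation is a design choice, not fixed above.
    """
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return None  # only zero-influence raters so far
    return sum(flag * w for flag, w in ratings) / total_weight

# A low-trust buyer's negative rating barely moves the score:
score = seller_score([(1, 0.512), (1, 0.512), (0, 0.013)])
```

This is where cubic damping pays off: the retaliatory `0` above is outvoted roughly 80-to-1 by two trusted buyers.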

6. Security & Privacy

6.1 What's NOT Collected

  • Real names

  • Phone numbers

  • Email (except optional recovery)

  • Location data

  • Payment information

  • Device fingerprints

  • IP addresses (beyond session management)

6.2 What IS Collected (Minimal)

  • Pseudonyms

  • Binary safety/reliability flags

  • Verification codes (temporary)

  • Encrypted seller messages (auto-expire)

6.3 Hosting Requirements

  • Jurisdiction: Where sex work is legal/tolerated (Germany ideal)

  • Encryption: AES-256 at rest, TLS 1.3+ in transit

  • No third-party tracking: Self-hosted everything

  • Web-only: No app stores, no native apps

7. Governance Structure

7.1 Cooperative Model

text
Ownership: Seller cooperative (one member, one vote)
Funding: Fixed monthly dues (€5-20, tiered by region)
Voting: All algorithm changes require member vote
Transparency: Public changelog, financial reports

7.2 Why Cooperative?

  • Aligns incentives with safety (not growth)

  • Prevents investor capture

  • Ensures seller control

  • No transaction fees = no incentive to maximize volume

8. Pilot Implementation Plan

Phase 1: Foundation (Week 1-2)

text
1. Set up basic web app (Django/Flask + PostgreSQL)
2. Implement user accounts (pseudonyms only)
3. Build reputation calculation engine
4. Create verification code system

Phase 2: Core Features (Week 3-4)

text
1. Buyer lookup interface for sellers
2. Rating submission (binary flags only)
3. Code verification for buyer ratings
4. Seller-to-seller messaging (encrypted, ephemeral)

Phase 3: Polish (Week 5-6)

text
1. Mobile-responsive design
2. Security hardening
3. User documentation
4. German translation (for pilot)

Phase 4: Pilot Launch (Week 7-8)

text
1. Invite 20-50 trusted sellers (Germany)
2. Onboard with training materials
3. 3-month pilot period
4. Weekly feedback collection

9. Cost Structure (Monthly)

text
Hosting: €100-200 (privacy-focused provider)
Development: €500-1000 (part-time maintainer)
Legal: €200-500 (German sex work law specialist)
Total: €800-1700/month

Break-even: 80-170 sellers at €10/month average dues (€800-1,700 ÷ €10).

10. Success Metrics

Safety Metrics

  • Reduction in reported boundary violations

  • Decrease in repeat offender incidents

  • Seller satisfaction with screening

System Metrics

  • Rating submission rate (>50% target)

  • Verification code usage rate (>60% target)

  • Seller retention (>80% target)

Anti-Metrics (What NOT to optimize)

  • ❌ Total number of buyers

  • ❌ Platform engagement time

  • ❌ Transaction volume

  • ❌ Revenue growth beyond sustainability

11. First 100 Days Checklist

Day 1-30: Build Core

  • Basic web app with user accounts

  • Reputation calculation engine

  • Verification code system

  • Seller buyer lookup interface

Day 31-60: Polish & Secure

  • Mobile-responsive design

  • Encryption implementation

  • German translation

  • User documentation

Day 61-90: Pilot Launch

  • Invite 20 pilot sellers

  • Onboard with training

  • Collect feedback

  • Adjust based on feedback

Day 91-100: Evaluate & Plan

  • Analyze pilot data

  • Plan scaling strategy

  • Document lessons learned

  • Prepare for next phase

12. The One-Page Version (For Developers)

Build this:

python
# Core logic in ~20 lines
# (User and percentile_25 are assumed helpers: account lookup by
#  pseudonym, and the 25th percentile of a list of 0/1 flags)
def screen_buyer(buyer_pseudonym):
    buyer = User.get(pseudonym=buyer_pseudonym)

    # Calculate scores
    S = percentile_25(buyer.safe_ratings)  # 0-1
    R = percentile_25(buyer.reliable_ratings)  # 0-1
    T = S * R
    I = T ** 3

    # Return screening info
    return {
        'safety': S,
        'reliability': R,
        'trust': T,
        'influence': I,
        'sample_size': buyer.rating_count,
        # Crude ±0.2 band around S, not a statistical interval
        'confidence': f"{max(0, S - 0.2):.2f}-{min(1, S + 0.2):.2f}"
    }

Plus:

  • Pseudonymous user accounts

  • Verification codes (6-digit, single-use)

  • Seller messaging (encrypted, auto-expire)

  • No identity collection

  • No marketplace features

13. Why This Will Work

  1. Solves real pain point: Sellers need better screening

  2. Mathematically sound: Retaliation protection baked in

  3. Privacy-preserving: No identity = no blackmail/legal risk

  4. Sustainable: Fixed dues align incentives with safety

  5. Scalable: Simple architecture, clear boundaries

14. Next Step

If you're a developer:

  1. Fork the reference implementation (when available)

  2. Build the 20-line core + verification codes

  3. Test with 2-3 trusted sellers

If you're a seller:

  1. Form a cooperative (5+ members)

  2. Find a developer (tech-savvy ally)

  3. Start with manual screening + shared spreadsheet

  4. Gradually automate as trust builds

The system is ready to build. The math works. The need is real. Start small, build trust, scale gradually.

This isn't a startup. It's infrastructure. Build it like you'd build a well: carefully, sustainably, for the community that needs it.

Minimal Federated Trust-Bound Social Infrastructure (Ur-Protocol) Complete Specification and Field Manual v0.5 Part I: Specification 0. Scope Ur-Protocol defines a portable identity + small-group coordination substrate. It is not: a platform a company service a monolithic app a global social graph It is: a protocol that allows many independent servers and many independent clients to coordinate small human groups safely and cheaply The protocol guarantees: identity continuity social proof admission/recovery group ordering/consistency server replaceability client replaceability Everything else (UX, features, aesthetics) is out of scope. 0.1 Notational Conventions The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. 0.5 Fo...