Seller-Collective Safety & Reputation Infrastructure
Complete System Specification
PART I: CHARTER
1. Purpose and Function
1.1 Core Function
This system exists solely to:
- make risk legible across the seller collective,
- make unsafe behavior expensive through loss of access,
- enable quiet, collective exclusion of unsafe buyers,
- prevent retaliatory review power from affecting seller safety,
while preserving:
- pseudonymity for all participants,
- discretion and non-propagation of activity,
- seller autonomy in accept/decline decisions,
- the right to exit without penalty.
1.2 What This System Is
- A screening registry for buyer behavior patterns
- A seller network for safety coordination
- A two-sided rating system with asymmetric authority
- A piece of collective risk infrastructure
1.3 What This System Is Not
The system is explicitly not:
- a directory or listing of sellers,
- a buyer discovery or search tool,
- a booking, scheduling, or calendar system,
- a payment, escrow, or financial service,
- an advertising or promotional platform,
- a content hosting service,
- a social network or community platform,
- a marketplace or transaction facilitator.
Any feature that moves the system toward these functions constitutes mission failure and must be rejected.
2. Foundational Principles (Non-Negotiable)
2.1 Authority Proportional to Risk
Axiom: Influence within the system is proportional to risk borne.
Because sellers bear bodily risk in interactions, sellers hold primary epistemic authority over safety judgments. Buyer input exists conditionally, weighted by demonstrated trustworthiness.
2.2 Demand-Pull Market Structure
Axiom: Buyers seek; sellers select.
The market operates on seller selection, not buyer discovery. Any feature that inverts this relationship—making sellers visible to buyers, creating buyer browsing capabilities, or facilitating buyer-initiated discovery—is prohibited.
2.3 Minimalism as Safety
Axiom: Every feature is a liability until proven necessary.
Each additional feature creates:
- attack surface for exploitation,
- legal exposure,
- identity leakage vectors,
- mission creep pressure,
- governance complexity.
Features must justify themselves against these costs. Unnecessary features are structural risk.
2.4 No Identity Escrow
Axiom: The system must never hold, verify, or infer real-world identity.
This includes:
- government-issued identification,
- phone numbers or contact information,
- payment credentials or financial data,
- device fingerprints or tracking identifiers,
- biometric data,
- location data,
- any other personally identifying information.
Rationale: Identity escrow creates:
- coercion points (leverage against participants),
- subpoena targets (legal vulnerability),
- blackmail vectors (for external actors and internal abuse),
- governance capture points (those who control identity control the system).
The system cannot leak, weaponize, or be compelled to surrender information it does not possess.
2.5 Discretion as Infrastructure
Axiom: Non-propagation is a design requirement, not a moral preference.
Discretion enables:
- safe exit from sex work,
- role separation (preventing sex work identity from contaminating other life domains),
- protection from social and economic retaliation,
- reduction of stigma-based harm.
The system is designed to forget by default. Long-lived data is structural liability.
2.6 Exit Without Penalty
Axiom: Participants may leave at any time without consequence.
- Accounts can be deleted.
- Reputation does not follow users outside the system.
- Re-entry resets reputation (this cost itself discourages cycling).
- No permanent records, no "shadow bans," no trailing consequences.
Exit capability prevents the system from becoming a mechanism of permanent control.
3. Prohibited Uses and Mission Boundaries
3.1 Hard Constraints
The system must never become or incorporate:
- Seller directory or profile system
  - No browsable seller listings
  - No seller search functionality
  - No seller discovery features
- Buyer discovery tool
  - No buyer profiles visible to sellers
  - No "available buyers" lists
  - No "highly rated buyers" browsing
  - No buyer matching or recommendation
- Transaction infrastructure
  - No booking or scheduling
  - No payment processing
  - No escrow services
  - No pricing tools
  - No invoicing
- Content platform
  - No photo/video hosting
  - No erotic content
  - No advertising space
  - No promotional tools
- Social features
  - No public feeds
  - No social graphs beyond seller safety network
  - No follower/following mechanics
  - No public profiles (beyond minimal verification profiles)
  - No reputation displays beyond functional screening
- Growth mechanics
  - No referral programs
  - No network effects that benefit platform over users
  - No gamification
  - No engagement optimization
Exception - Minimal Seller Verification Profiles:
Sellers may have minimal public profile pages that:
- Display seller's pseudonym
- Confirm seller is platform member (verification)
- Provide instructions for buyers to create accounts
- Do NOT contain: contact info, services, pricing, photos, reviews, or any marketplace functionality
Rationale: This minimal profile enables the screening workflow (buyer account creation, reputation lookup) without transforming the platform into a marketplace or directory.
Profile pages are discovery-neutral: they confirm membership but do not facilitate searching or browsing of sellers.
3.2 Enforcement
Any proposal to add features in these categories must be:
- rejected by default,
- subject to seller vote if raised,
- evaluated against mission integrity,
- and blocked if it creates mission creep risk.
4. Governance Structure
4.1 Ownership
The system is owned by a seller cooperative.
- No external shareholders
- No venture capital
- No private equity
- No individual majority ownership
Ownership structure must prevent:
- extraction of value to non-participants,
- governance capture by external interests,
- mission drift under investor pressure.
4.2 Funding Model
Revenue sources (permitted):
- Seller membership dues (fixed periodic fee)
- Cost recovery for infrastructure
- Grants from aligned organizations (with no governance strings)
Revenue sources (prohibited):
- Buyer fees or charges
- Transaction fees or percentage-based revenue
- Advertising
- Data sales
- Any revenue model that scales with platform activity volume
Rationale: Volume-linked revenue creates perverse incentives. If platform revenue increases with activity, the platform has incentive to:
- maximize transactions (conflicting with safety),
- retain unsafe buyers (to maintain volume),
- add features that increase engagement (mission creep).
Fixed dues align platform incentives with member safety, not growth.
4.3 Voting Rights
- One member, one vote
- Members are active sellers in good standing
- Voting on:
  - Constitutional amendments
  - Algorithm changes
  - Rule modifications
  - Budget allocation
  - Governance structure changes
4.4 Constitutional Charter
The cooperative operates under a constitutional charter that:
- defines mission boundaries (this document),
- requires supermajority (e.g., 2/3) to amend core principles,
- prohibits certain changes entirely (identity escrow, marketplace features),
- establishes transparency requirements,
- creates accountability mechanisms.
4.5 Change Control Process
For algorithm or rule changes:
1. Proposal published to membership with:
   - Detailed specification
   - Rationale
   - Risk analysis
   - Alternatives considered
2. Comment period (minimum 30 days)
3. Member vote (simple majority for operational changes, supermajority for structural changes)
4. Public changelog maintained
5. Rollback capability required before deployment
For emergency changes (security/safety):
- Temporary implementation allowed
- Membership notification within 24 hours
- Retroactive vote within 14 days
- Automatic rollback if vote fails
Transparency requirement: Silent changes void system legitimacy. All changes must be:
- announced before implementation,
- explained in accessible language,
- documented in public changelog,
- subject to rollback if vote fails.
4.6 Dispute Resolution
Internal disputes:
- Mediation by elected member committee
- Binding arbitration if mediation fails
- No external legal system involvement unless unavoidable
Platform decisions:
- Appeal process for membership decisions
- Transparent criteria for account suspension/termination
- Due process requirements
5. Enforcement Philosophy
5.1 Quiet Exclusion
Enforcement operates through loss of access, not through:
- public banning,
- shaming,
- call-outs,
- permanent marks,
- public records of exclusion.
Mechanism: Unsafe buyers simply find fewer sellers willing to accept them. No announcement. No spectacle. No drama.
5.2 Local Autonomy
Each seller retains full autonomy to:
- accept any buyer regardless of reputation,
- decline any buyer for any reason,
- set their own thresholds and criteria,
- ignore collective recommendations.
The system provides information; sellers make decisions.
5.3 No Permanent Consequences
- Reputation resets on exit/re-entry
- No permanent bans
- No "sex offender registry" model
- Pattern visibility, not permanent marking
This prevents the system from becoming a mechanism of permanent social control.
6. Scope Limits and Epistemic Humility
6.1 What This System Does Not Solve
This system does not address:
- Economic coercion (poverty forcing sex work participation)
- Immigration precarity (lack of alternatives due to legal status)
- Structural labor market failures (absence of other viable work)
- Criminalization (legal risk creating vulnerability)
- Stigma (social consequences of sex work)
- Upstream inequality (conditions that create "voluntary" sex work under constrained choice)
These require broader social and political change. This system is:
- internal market infrastructure,
- harm reduction where agency exists,
- not a substitute for decriminalization, labor rights, or economic justice.
6.2 First-Instance Harm
Critical limitation: This system does not prevent first-instance harm.
It makes pattern behavior visible and costly. Sellers bear initial risk; the collective bears pattern-prevention responsibility.
Mechanism:
- New buyers have no reputation history
- First interactions generate initial data
- Pattern emerges only after multiple interactions
- System optimizes for preventing repeat offenders, not first offenses
Implication: Risk-tolerant sellers (or those in controlled environments) function as pattern detectors, generating initial safety data that benefits the collective.
6.3 Heterogeneous Risk Tolerance
Risk tolerance varies across sellers based on:
- personal factors,
- operational environment (controlled spaces vs. outcall),
- support systems,
- financial pressure,
- experience level.
The system provides uncertainty information so that each seller may apply their own risk thresholds based on their circumstances and capabilities.
Design consequence: The system must support stratified decision-making, not impose uniform rules.
7. Values and Commitments
7.1 Seller-Centered Design
Every design decision prioritizes:
- seller safety,
- seller autonomy,
- seller privacy,
- seller collective power.
When trade-offs arise, seller interests supersede:
- buyer convenience,
- platform growth,
- external stakeholder preferences,
- abstract principles (unless those principles protect sellers).
7.2 Harm Reduction, Not Moralism
The system takes no position on:
- whether sex work should exist,
- whether sex work is empowering or exploitative,
- whether participants are victims or agents,
- the moral status of buying or selling sex.
The system exists to reduce harm within the market as it exists, not to advocate for or against the market itself.
7.3 Anti-Carceral Approach
The system must not:
- cooperate with law enforcement,
- share data with legal authorities,
- create records suitable for prosecution,
- function as surveillance infrastructure.
Rationale: Carceral approaches to sex work consistently increase harm to sellers. This system exists in opposition to criminalization, not in cooperation with it.
7.4 Dignity and Respect
Participants—sellers and buyers—deserve:
- respectful treatment,
- clear communication,
- transparent rules,
- accountability when harmed.
The system operates with the presumption that people are capable of good-faith participation until their behavior demonstrates otherwise.
PART II: TECHNICAL SPECIFICATION
8. System Architecture
8.1 Platform Type
Web-based application only.
No native mobile or desktop applications.
Rationale:
Application stores (Apple App Store, Google Play Store) impose:
- external governance and content policies,
- arbitrary removal capability,
- moral enforcement mechanisms,
- centralized control points.
Native applications create:
- device identifiers and fingerprints,
- forced telemetry and tracking,
- payment trail visibility,
- update control by external parties,
- platform dependency.
Web-based architecture preserves:
- platform independence,
- governance autonomy,
- continuity of access,
- privacy through lack of device integration.
A Progressive Web App (PWA) is acceptable: it can provide an app-like experience while remaining web-based.
8.2 Infrastructure Requirements
Hosting:
- Distributed hosting to prevent single point of failure
- Jurisdiction selection based on legal protection for sex work
- Redundancy and backup systems
- DDoS protection
Network:
- HTTPS only (TLS 1.3+)
- Certificate pinning where feasible
- No mixed content
- No third-party scripts except essential, audited libraries
Data storage:
- Encrypted at rest (AES-256 or equivalent)
- Encrypted in transit (TLS 1.3+)
- Database access strictly controlled
- Regular security audits
8.3 Third-Party Dependencies
Prohibited:
- Google Analytics or similar tracking
- Social media integration (Facebook Login, etc.)
- Advertising networks
- Third-party cookies
- Embedded content from external domains
- CDN-hosted tracking scripts
Permitted (with audit):
- Open-source libraries (security-audited)
- Self-hosted analytics (privacy-respecting, aggregate only)
- Email service (for account recovery only, not marketing)
9. Identity and Account System
9.1 Account Types
Two account types:
1. Seller Account
   - Persistent pseudonym
   - Minimal non-identifying metadata
   - Access to seller communication tools
   - Rating submission capabilities
   - Buyer lookup functionality
   - Verification code generation
   - Minimal public profile (for buyer screening workflow)
2. Buyer Account
   - Persistent pseudonym
   - Minimal non-identifying metadata
   - Limited rating submission (structured only, with verification code)
   - No access to seller communication tools
   - No lookup capabilities
9.2 Pseudonym Requirements
Pseudonym characteristics:
- User-selected string
- Unique within system
- No real-name requirement
- No verification against external identity
- No restriction on changes (but reputation does not transfer)
What pseudonyms are not:
- Not real names (but can be if user chooses)
- Not verified identities
- Not linked to external accounts
- Not discoverable outside system context
9.3 Metadata Minimization
Collected metadata (maximum):
- Account creation date
- Last activity date
- Account type (seller/buyer)
- Interaction count (number of ratings given/received)
- Aggregate reputation scores
Not collected:
- Real names
- Contact information (phone, email except for recovery)
- Payment information (if dues paid via external system)
- Location data
- Device identifiers
- IP addresses (beyond immediate session management)
- Browsing behavior
- Social connections outside safety network
9.4 Persistence Rationale
Why accounts persist:
Persistence makes behavior cumulative:
- Safe behavior compounds into increased access
- Unsafe behavior compounds into decreased access
- Patterns become visible across time
- Account cycling becomes costly (reputation loss)
Without persistence:
- Burner accounts enable consequence-free unsafe behavior
- Pattern detection becomes impossible
- Collective memory fails
- System reduces to individual judgment per interaction
Trade-off accepted: Persistence creates some permanence of record. Mitigated by:
- Exit capability (reputation does not follow users)
- No real identity linkage
- No external propagation
- Data minimization
9.5 Account Lifecycle
Creation:
- Pseudonym selection
- Account type selection
- Agreement to terms (charter principles)
- Optional email for recovery (strongly recommended but not required)
Active use:
- Submit ratings
- Query buyer reputation (sellers only)
- Generate verification codes (sellers only)
- Communicate within safety network (sellers only)
- Update account settings
Suspension (rare, seller-voted):
- Criteria defined in governance process
- Appeals process available
- Temporary status with review period
Deletion:
- User-initiated at any time
- All associated ratings remain (pseudonym anonymized)
- Account cannot be recovered
- Re-entry permitted with new account (reputation reset)
9.6 Authentication
Requirements:
- Strong password (enforced complexity)
- Optional two-factor authentication (TOTP, not SMS)
- No social login
- No biometrics
- Session timeout after inactivity
Recovery:
- Email-based recovery if email provided
- Security questions (if no email)
- No account recovery without one of these mechanisms (priority: user privacy over convenience)
10. Reputation System: Inputs
10.1 Seller → Buyer Ratings (Primary Signal)
Fields (only):
- Safe: Binary {0, 1}
  - 1 = interaction was physically and behaviorally safe
  - 0 = interaction involved boundary violations, coercion, threat, or unsafe behavior
- Reliable: Binary {0, 1}
  - 1 = buyer adhered to agreement (timing, payment, scope)
  - 0 = buyer deviated from agreement materially
No additional fields. No free text. No commentary. No qualitative descriptors.
Rationale:
- Free text becomes gossip, leverage, and legal exposure
- Qualitative fields invite subjective judgment and dispute
- Binary signals are:
- Unambiguous
- Legally defensible
- Culturally neutral
- Resistant to manipulation
- Focused on safety essentials
Submission:
- Optional (sellers not required to rate every interaction)
- Can be updated (if new information emerges)
- Timestamped
- Pseudonymous (visible as "seller" but not which seller to most users)
10.2 Buyer → Seller Ratings (Secondary Signal)
Submission prerequisite:
Buyers can submit ratings only with a valid one-time verification code provided by the seller after interaction.
Verification code characteristics:
- Six-digit numeric code
- Auto-generated by system
- Single-use only
- Expires after use or 30 days (whichever comes first)
- Associated with specific seller-buyer pair
Code workflow:
- After interaction, seller generates code via platform
- Seller provides code to buyer (verbal, text, or other off-platform method)
- Buyer enters code when submitting rating
- System validates code, accepts rating if valid, invalidates code
- If code invalid/expired/already used: rating rejected with explanation
Rationale:
- Prevents rating submission without actual interaction
- Prevents Sybil attacks (fake ratings from non-interactions)
- Seller retains control over who can rate them
- Creates verification without identity linkage
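As a sketch of the code lifecycle described above (the `CodeStore` class and its method names are illustrative assumptions, not part of this specification; a production version would persist codes server-side):

```python
import secrets
from datetime import datetime, timedelta, timezone

CODE_TTL = timedelta(days=30)  # codes expire after 30 days if unused

class CodeStore:
    """Illustrative one-time verification code store:
    six-digit, single-use, bound to a specific seller-buyer pair."""

    def __init__(self):
        self._codes = {}  # code -> [seller, buyer, issued_at, used]

    def generate(self, seller, buyer):
        # secrets gives a cryptographically random six-digit code
        code = f"{secrets.randbelow(1_000_000):06d}"
        self._codes[code] = [seller, buyer, datetime.now(timezone.utc), False]
        return code

    def redeem(self, code, seller, buyer):
        entry = self._codes.get(code)
        if entry is None:
            return False  # unknown code
        s, b, issued_at, used = entry
        if used or (s, b) != (seller, buyer):
            return False  # already used, or wrong seller-buyer pair
        if datetime.now(timezone.utc) - issued_at > CODE_TTL:
            return False  # expired
        entry[3] = True   # single-use: invalidate on successful redemption
        return True
```

A rating submission would call `redeem` once; a second attempt with the same code is rejected, which enforces the single-use rule.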
Fields (only):
Structured options with closed vocabulary:
- Listing Accuracy: {Accurate, Minor deviation, Significant deviation}
  - Refers only to: appearance, services offered, location, timing
  - Not: subjective quality judgments
- Punctuality: {On time, Slightly late, Very late, Did not show}
  - Objective time-based assessment
- Professionalism: {Professional, Adequate, Unprofessional}
  - Refers to: communication clarity, adherence to boundaries, respect
No additional fields. No free text. No narrative. No sexual content descriptors. No quality-of-service ratings beyond professionalism.
Rationale:
- Prevents review-as-coercion ("do extra services or I'll rate you badly")
- Prevents sexual objectification through reviews
- Prevents narrative construction that could be used as leverage
- Limits buyer rating to observable, non-intimate factors
- Maintains focus on mutual professionalism, not performance evaluation
Submission:
- Optional
- Requires valid verification code
- Weighted by buyer's trust coefficient (see Section 11)
- Cannot be updated after initial submission (prevents retaliatory editing)
10.3 Rating Visibility
Seller ratings of buyers:
- Visible to all sellers (aggregate scores)
- Individual rating breakdowns visible only to submitting seller
- Not visible to buyers (except as reflected in aggregate scores)
- Not visible to other buyers
Buyer ratings of sellers:
- Visible to seller being rated
- Contribute to seller's aggregate score (weighted)
- Not visible to other buyers
- Not visible to other sellers (except as part of aggregate score)
Rationale: Prevents:
- Buyer coordination against sellers
- Buyer harassment campaigns
- Social graph construction
- Reputational surveillance
11. Reputation System: Aggregation and Weighting
11.1 Objective
Create a formal mathematical model where:
- Unsafe buyer behavior diminishes that buyer's influence on seller reputation
- Influence loss precedes access loss (buyers lose voice before they lose access)
- Retaliation becomes structurally ineffective
- Pattern behavior becomes visible across the collective
- Risk tolerance heterogeneity is supported (sellers can apply different thresholds)
11.2 Inputs for Buyer b
From all interactions, buyer b accumulates:
- Safe flags: {s₁, s₂, s₃, ...} where each sᵢ ∈ {0, 1}
- Reliable flags: {r₁, r₂, r₃, ...} where each rᵢ ∈ {0, 1}
- Interaction count: n (total number of ratings received)
11.3 Aggregate Safety Score (S_b)
Robust aggregation using lower quantile:
S_b = quantile₀.₂₅({s₁, s₂, ..., sₙ})
Rationale for lower quantile:
Standard mean aggregation allows unsafe buyers to "wash out" negative ratings with volume. If a buyer has:
- 20 safe interactions
- 8 unsafe interactions
the mean would be: (20×1 + 8×0) / 28 ≈ 0.71
But this obscures critical risk. The 25th percentile captures the lower tail of behavior: it asks "what does the worst quartile of this buyer's behavior look like?"
For the example above, more than a quarter of the interactions are unsafe, so the 25th percentile is 0 (unsafe), which correctly signals risk.
Alternative quantile values:
- 0.10 (10th percentile): more aggressive, flags buyers with rare unsafe incidents
- 0.25 (25th percentile): recommended baseline
- 0.50 (median): too forgiving, allows too much risk-washing
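The quantile aggregation can be made concrete with a short sketch. The `lower_quantile` helper and its inverted-CDF convention (return the smallest observed value at or above the requested probability mass) are illustrative assumptions, not mandated by this specification:

```python
import math

def lower_quantile(flags, q=0.25):
    """Inverted-CDF quantile: smallest value v with P(X <= v) >= q.
    Chosen so the result is always an actual observed rating."""
    s = sorted(flags)
    idx = max(math.ceil(q * len(s)) - 1, 0)
    return s[idx]

# A buyer with 20 safe and 8 unsafe interactions looks fine under the mean
flags = [1] * 20 + [0] * 8
mean = sum(flags) / len(flags)       # ~0.71: obscures the risk
s_b = lower_quantile(flags, q=0.25)  # 0: the worst quartile is unsafe
```

Raising `q` toward 0.5 makes the score more forgiving; lowering it toward 0.10 flags buyers with rarer unsafe incidents, matching the alternatives listed above.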
11.4 Aggregate Reliability Score (R_b)
Same robust aggregation:
R_b = quantile₀.₂₅({r₁, r₂, ..., rₙ})
Interpretation: What does the worst quartile of this buyer's reliability look like?
11.5 Trust Coefficient (T_b)
Combined score:
T_b = S_b × R_b
Interpretation:
- T_b ∈ [0, 1]
- T_b = 1: buyer is consistently safe AND reliable (top quartile behavior is perfect)
- T_b = 0: buyer is either unsafe OR unreliable (bottom quartile has failures)
- T_b captures joint safety and reliability
Why multiplication:
- Safety and reliability are both necessary
- An unsafe but reliable buyer (shows up on time but violates boundaries) should have T_b ≈ 0
- A safe but unreliable buyer (no boundaries crossed but frequently flakes) should have T_b ≈ 0
- Only buyers who are both safe AND reliable achieve high T_b
11.6 Influence Weight (I_b)
Nonlinear damping:
I_b = (T_b)^k
Recommended: k = 3
Effect of nonlinear exponent:
| T_b | I_b (k=3) | Interpretation |
|---|---|---|
| 1.0 | 1.0 | Perfect behavior → full influence |
| 0.9 | 0.73 | Mostly safe → reduced influence |
| 0.8 | 0.51 | Some issues → half influence |
| 0.7 | 0.34 | Pattern concerns → strong reduction |
| 0.5 | 0.13 | Significant issues → minimal influence |
| 0.3 | 0.03 | Unsafe pattern → near-zero influence |
| 0.0 | 0.0 | Clear danger → no influence |
Rationale:
- Linear weighting (k=1) reduces influence proportionally: T_b = 0.5 gives I_b = 0.5 (still half influence)
- Quadratic (k=2) reduces more: T_b = 0.5 gives I_b = 0.25
- Cubic (k=3) collapses influence rapidly: T_b = 0.5 gives I_b = 0.125
Why k=3:
- Strong deterrent effect
- Clear threshold behavior (influence drops dramatically below T_b ≈ 0.8)
- Mathematically simple
- Tunable if collective decides differently
Alternative values:
- k=2: softer damping, more forgiving
- k=4: harder damping, less forgiving
- k=3: recommended baseline
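A minimal sketch of the damping computation (the function name is an illustrative assumption); it reproduces the k=3 column of the table above, up to rounding convention:

```python
def influence_weight(s_b, r_b, k=3):
    """Trust coefficient T_b = S_b * R_b, then nonlinear damping I_b = T_b ** k."""
    t_b = s_b * r_b
    return t_b ** k

# I_b for the T_b values in the table (with R_b held at 1.0 so T_b = S_b)
for t in (1.0, 0.9, 0.8, 0.7, 0.5, 0.3, 0.0):
    print(t, round(influence_weight(t, 1.0), 2))
```

Changing `k` to 2 or 4 implements the softer or harder damping alternatives without touching any other part of the pipeline.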
11.7 Seller Reputation Calculation
For seller s, receiving ratings from multiple buyers:
Inputs:
- Buyer b gives seller s rating: x_b,s (normalized to [0,1])
- Buyer b has influence weight: I_b
Weighted aggregate:
Q_s = Σ(I_b × x_b,s) / Σ(I_b)
Interpretation:
- Ratings from high-I_b buyers (safe, reliable) carry more weight
- Ratings from low-I_b buyers (unsafe, unreliable) carry less weight
- Retaliatory rating from unsafe buyer has minimal impact
Example:
Seller receives three ratings:
- Buyer A (I_b = 1.0): rating = 0.9 → weighted contribution = 0.9
- Buyer B (I_b = 0.8): rating = 0.8 → weighted contribution = 0.64
- Buyer C (I_b = 0.1, unsafe): rating = 0.2 (retaliation) → weighted contribution = 0.02
Q_s = (1.0×0.9 + 0.8×0.8 + 0.1×0.2) / (1.0 + 0.8 + 0.1)
= (0.9 + 0.64 + 0.02) / 1.9
= 1.56 / 1.9
= 0.82
Without weighting:
Q_s = (0.9 + 0.8 + 0.2) / 3 = 0.63
Effect: Retaliation from Buyer C barely affects seller's score. The unsafe buyer's voice has collapsed.
11.8 Hard Exclusion Threshold
Rule: If I_b < ε, exclude buyer's ratings entirely.
Recommended: ε = 0.05
Effect:
- Buyers with T_b < 0.37 (approximately) are completely excluded from affecting seller reputation
- This is the "voice loss before access loss" mechanism
- Buyers can still attempt to book, but their reviews have zero weight
Rationale:
- Prevents even marginal influence from dangerous buyers
- Clear bright-line threshold
- Prevents "death by a thousand paper cuts" (many tiny-weighted bad reviews)
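Sections 11.7 and 11.8 combine into a few lines. This sketch (names assumed) applies the ε cutoff before the weighted aggregate and reproduces the worked example from Section 11.7, where Buyer C's I_b = 0.1 clears ε = 0.05 but still contributes almost nothing:

```python
EPSILON = 0.05  # hard exclusion threshold from Section 11.8

def seller_score(ratings):
    """Weighted aggregate Q_s = sum(I_b * x) / sum(I_b), after dropping
    any rater whose influence weight I_b falls below EPSILON."""
    kept = [(i_b, x) for i_b, x in ratings if i_b >= EPSILON]
    if not kept:
        return None  # no admissible ratings yet
    total_weight = sum(i_b for i_b, _ in kept)
    return sum(i_b * x for i_b, x in kept) / total_weight

# Worked example: Buyers A, B, and the retaliating unsafe Buyer C
ratings = [(1.0, 0.9), (0.8, 0.8), (0.1, 0.2)]
q_s = seller_score(ratings)  # ~0.82 weighted, versus 0.63 unweighted
```

A rating pair with I_b below ε is simply never summed, which is the "voice loss before access loss" mechanism in code.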
11.9 Confidence Intervals and Sample Size
Problem: Early ratings have high variance. A buyer with 1 or 2 interactions may have:
- S_b = 1.0 (looks perfect)
- But this is based on tiny sample, low confidence
Solution: Display confidence intervals alongside scores.
Confidence interval formula:
For a binary proportion p based on n samples, the normal-approximation (Wald) interval is:
CI = p ± z × √(p(1-p)/n)
Where:
- z = 1.96 for 95% confidence
- p = score (S_b or R_b)
- n = interaction count
Caveat: the Wald interval collapses to zero width when p = 0 or p = 1, so for scores at or near the boundaries (e.g., a perfect R_b = 1.0) a Wilson score or Clopper-Pearson interval should be used instead.
Display format (illustrative values):
Safe score (S_b): 0.85 [based on 12 interactions, 95% CI: 0.68–0.95]
Reliable score (R_b): 1.0 [based on 12 interactions, 95% CI: 0.83–1.0]
Trust coefficient (T_b): 0.85
Influence weight (I_b): 0.61
Warning threshold:
If n < 5, display:
⚠ Low sample size: interpret with caution
Effect on seller decision-making:
- High-tolerance seller: May accept buyer with T_b = 0.75, n = 3 (uncertain but willing to risk)
- Medium-tolerance seller: Waits for T_b = 0.80, n ≥ 5 (moderate confidence)
- Low-tolerance seller: Requires T_b = 0.90, n ≥ 10 (high confidence, narrow CI)
This operationalizes risk tolerance heterogeneity without imposing uniform rules.
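The Wald-style formula above degenerates at p = 0 or p = 1 (a perfect R_b would display a zero-width interval). One standard alternative that stays informative at the boundaries is the Wilson score interval; the following sketch (function name assumed) is illustrative, not mandated by this specification:

```python
import math

def wilson_interval(p, n, z=1.96):
    """Wilson score interval for a binary proportion. Unlike the Wald
    interval p +/- z*sqrt(p(1-p)/n), its width stays nonzero at p = 0 or 1."""
    if n == 0:
        return (0.0, 1.0)  # no data: maximal uncertainty
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(center - half, 0.0), min(center + half, 1.0))

low, high = wilson_interval(1.0, 12)  # a perfect score still shows real uncertainty
```

For R_b = 1.0 over 12 interactions this yields roughly [0.76, 1.0], which supports the "low sample size" warnings far better than a degenerate interval.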
11.10 Bayesian Prior (Optional Enhancement)
Problem: New buyers (n=0) have no data. How should they be treated?
Current approach: New buyers have undefined S_b, R_b, T_b, I_b.
Bayesian alternative: Start with neutral prior, update with data.
Prior assumption:
- S_b prior = 0.75 (assume moderate safety until proven otherwise)
- R_b prior = 0.75 (assume moderate reliability)
- Prior "weight" = 2 equivalent interactions
Update formula:
S_b = (prior_weight × prior_S + n × observed_S) / (prior_weight + n)
Effect:
- New buyer starts with T_b = 0.75 × 0.75 = 0.56, I_b = 0.56³ = 0.18 (reduced influence)
- As real data accumulates, prior fades in importance
- After n=10, prior contributes only 17% of weight
Recommendation: Optional. Adds complexity but handles cold-start problem more gracefully.
If not implemented: New buyers have "unknown" status, and high-tolerance sellers generate initial data.
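The update formula above, in code (constants per this section; the function name is an illustrative assumption):

```python
PRIOR_SCORE = 0.75   # neutral prior: assume moderate safety/reliability
PRIOR_WEIGHT = 2     # prior counts as two equivalent interactions

def posterior_score(observed_mean, n):
    """Shrink the observed score toward the prior; the prior fades as n grows."""
    return (PRIOR_WEIGHT * PRIOR_SCORE + n * observed_mean) / (PRIOR_WEIGHT + n)

# Cold start: no data yet, so the score is the pure prior
s0 = posterior_score(0.0, 0)   # 0.75
t0 = s0 * s0                   # T_b = 0.5625
i0 = t0 ** 3                   # I_b ~0.18: reduced influence for unknowns
# After 10 all-safe interactions, the prior holds 2/12 ~17% of the weight
s10 = posterior_score(1.0, 10)  # ~0.958
```

The same helper applies unchanged to both S_b and R_b, since each is a binary-flag mean before quantile robustification is layered on.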
12. Visibility and Access Control
12.1 Asymmetric Visibility (Critical Design)
Buyers cannot see buyers:
- No buyer profile browsing
- No buyer-to-buyer messaging
- No buyer social graph
- No "other buyers who saw this seller" features
- No buyer reputation leaderboards
- No buyer search functionality
Rationale:
- Prevents buyer coordination (brigading, harassment campaigns)
- Prevents collusion (review manipulation)
- Prevents blackmail networks ("I'll rate you up if you rate me up")
- Prevents social graph construction
- Maintains buyer pseudonymity from other buyers
Sellers can see buyer reputation (when needed):
- Seller queries specific buyer pseudonym
- System returns: S_b, R_b, T_b, I_b, confidence intervals, sample size
- No browsing of all buyers
- No unsolicited buyer suggestions
Buyers cannot see seller discovery:
- No seller browsing within platform
- No seller search within platform
- No seller recommendations
- No seller profiles with service descriptions (only minimal verification profiles)
Rationale:
- Demand-pull market (buyers seek sellers externally, then use platform for screening)
- Prevents platform from becoming marketplace
- Reduces legal exposure
- Maintains seller control of visibility
12.2 Seller-to-Seller Communication Layer
Purpose:
- Share safety intelligence
- Coordinate on pattern detection
- Discuss boundary issues
- Provide mutual support
Features:
- Direct messaging between sellers
- Group messaging (opt-in)
- Safety alerts (time-limited)
- Discussion threads (ephemeral)
Constraints:
- Messages auto-expire (default: 30 days, configurable)
- No permanent archives
- No searchable history beyond 90 days
- No screenshot prevention (not technically feasible), but strong norms against sharing outside platform
Buyer communication layer:
- Does not exist
- Buyers have no parallel communication tools
- Buyers cannot message each other
- Buyers cannot message sellers within platform (contact occurs off-platform)
Rationale:
- Seller collective immunity requires internal coordination
- Buyer coordination creates risk, not value
- Asymmetry reflects asymmetry of risk
12.3 Operational Flow (Canonical)
Standard interaction workflow:
1. Buyer discovers seller off-platform (ad, website, referral, etc.)
   - Seller's ad/profile may include platform profile link (optional but recommended)
2. Buyer reviews seller's platform presence (optional)
   - Clicks profile link (if provided)
   - Sees verification that seller is platform member
   - Creates buyer account or logs in if desired
3. Buyer contacts seller off-platform with initial inquiry
4. Seller conducts screening:
   - Requests buyer's platform pseudonym (if buyer has account)
   - OR provides platform link and requests buyer create account
   - Queries buyer pseudonym in system
   - Reviews: S_b, R_b, T_b, I_b, confidence intervals, sample size
5. Seller evaluates risk
   - Applies own risk threshold
   - Decides: accept, decline, request more information
6. If accept: interaction arranged and occurs off-platform
   - Platform not involved in booking, payment, or logistics
   - All operational details handled externally
7. After interaction: seller generates verification code
   - Seller logs into platform
   - Navigates to "Generate rating code"
   - System generates six-digit code
   - Displays: "Provide this code to buyer: 123456"
   - Seller shares code with buyer (verbally, text, etc.)
8. Seller submits rating (optional)
   - Submits Safe flag {0,1}
   - Submits Reliable flag {0,1}
   - Flags added to buyer's aggregate scores
9. Buyer submits rating (optional)
   - Navigates to "Rate seller"
   - Enters verification code: 123456
   - System validates code
   - If valid: presents rating form (listing accuracy, punctuality, professionalism)
   - Submits structured rating
   - Code invalidated after use
   - Rating weighted by buyer's I_b
Code security:
- Codes are impractical to guess (6 digits = 1,000,000 combinations, single-use, rate-limited)
- Rate limiting: 5 failed code attempts per buyer per hour
- Expired codes rejected with clear message
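The per-buyer attempt limit can be sketched as a sliding window (the class and method names are illustrative assumptions, not part of this specification):

```python
import time
from collections import defaultdict, deque

MAX_ATTEMPTS = 5        # failed code entries allowed...
WINDOW_SECONDS = 3600   # ...per buyer per hour

class AttemptLimiter:
    """Sliding-window limiter for failed verification-code attempts."""

    def __init__(self):
        self._failures = defaultdict(deque)  # buyer -> timestamps of failures

    def allowed(self, buyer, now=None):
        now = time.time() if now is None else now
        q = self._failures[buyer]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # drop failures that fell outside the window
        return len(q) < MAX_ATTEMPTS

    def record_failure(self, buyer, now=None):
        self._failures[buyer].append(time.time() if now is None else now)
```

Only failed attempts are recorded, so legitimate buyers who enter a valid code on the first try are never throttled.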
No loops. No amplification. No discovery. No transactions.
The platform touches only the screening step and post-interaction rating verification.
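The code lifecycle above (generation, single use, 30-day expiry, and the five-failures-per-hour limit) can be sketched in Python. `CodeStore` and its in-memory dicts are illustrative stand-ins for real persistence, not part of the spec:

```python
import secrets
from datetime import datetime, timedelta

CODE_TTL = timedelta(days=30)    # spec: codes valid for 30 days
MAX_FAILED_PER_HOUR = 5          # spec: 5 failed attempts per buyer per hour

class CodeStore:
    """In-memory sketch of the verification-code lifecycle (illustrative)."""

    def __init__(self):
        self.codes = {}       # code -> {"seller": id, "issued": ts, "used": bool}
        self.failures = {}    # buyer_id -> list of failed-attempt timestamps

    def generate(self, seller_id):
        # secrets gives a uniformly random, cryptographically strong 6-digit code
        code = f"{secrets.randbelow(1_000_000):06d}"
        self.codes[code] = {"seller": seller_id,
                            "issued": datetime.utcnow(), "used": False}
        return code

    def redeem(self, code, buyer_id, now=None):
        now = now or datetime.utcnow()
        # Rate limiting: count this buyer's failures in the last hour
        recent = [t for t in self.failures.get(buyer_id, [])
                  if now - t < timedelta(hours=1)]
        if len(recent) >= MAX_FAILED_PER_HOUR:
            return "Too many failed attempts"
        entry = self.codes.get(code)
        if entry is None or now - entry["issued"] > CODE_TTL:
            self.failures.setdefault(buyer_id, []).append(now)
            return "Code invalid or expired"
        if entry["used"]:
            self.failures.setdefault(buyer_id, []).append(now)
            return "Code already used"
        entry["used"] = True    # single-use: invalidated on redemption
        return "OK"
```

The return strings mirror the error states listed in section 16.3; a real deployment would persist codes and failure counters (e.g. in PostgreSQL and Redis) rather than process memory.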
12.4 Seller Query Interface (Wireframe)
Input:
[Search buyer by pseudonym: __________] [Search]
Output:
═══════════════════════════════════════════════════════════════
Buyer: [pseudonym]
───────────────────────────────────────────────────────────────
Safe score (S_b): 0.85 [12 interactions, 95% CI: 0.68–0.95]
Reliable score (R_b): 1.0 [12 interactions, 95% CI: 0.83–1.0]
Trust coefficient (T_b): 0.85
Influence weight (I_b): 0.61
⚠ Note: This buyer has moderate trust. Interpret with caution.
[View detailed history (sellers only)] [Report issue]
═══════════════════════════════════════════════════════════════
What sellers do NOT see:
- Individual ratings from other sellers (only aggregates)
- Buyer's ratings of sellers (except their own)
- Other buyers this buyer has interacted with
- External contact information
- Location or identifying data
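The confidence intervals shown alongside S_b and R_b can be produced with a Wilson score interval, a standard choice for small-sample binomial proportions. The spec does not mandate this particular estimator, so treat it as one reasonable option:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion such as S_b."""
    if n == 0:
        return (0.0, 1.0)   # no data: maximally uncertain
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))
```

Unlike the naive normal approximation, the Wilson interval behaves sensibly at small n and at proportions near 0 or 1 (e.g. a perfect R_b of 1.0 over 12 interactions still yields a wide interval), which is exactly what sellers need when judging thin rating histories.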
12.5 Access Control and Quiet Exclusion
Seller autonomy:
- Each seller independently decides accept/decline
- No platform-imposed bans or restrictions
- No forced acceptance rules
- No quotas or balancing
Optional collective thresholds:
- Sellers may opt into shared defaults (e.g., "auto-decline if T_b < 0.6")
- Defaults are private, reversible, seller-specific
- Defaults are suggestions, not mandates
No public banning:
- No "banned buyers" list
- No public wall of shame
- No announcements of exclusion
- No spectacle
Quiet exclusion mechanism:
- Unsafe buyers simply find fewer sellers willing to accept them
- Buyers are not told why (prevents gaming)
- Buyers retain ability to attempt contact
- Pattern of rejections signals reputation problem, but no explicit communication
Rationale:
- Public banning creates retaliation risk
- Public banning creates legal exposure
- Public banning creates escalation dynamics
- Quiet exclusion is safer and sufficient
Buyer experience of exclusion:
- Buyer contacts seller A: declined (no reason given)
- Buyer contacts seller B: declined (no reason given)
- Buyer contacts seller C: accepted (high-tolerance seller)
- Buyer gradually realizes access is limited but has no specific target for retaliation
12.6 Seller Discovery and Initial Contact
Permitted seller-initiated sharing:
Sellers may share their platform profile link externally for screening purposes. Profile link provides:
- Seller's pseudonym (for buyer account lookup/creation)
- Platform verification (confirms seller is platform member)
- Basic instructions for buyers on how to create account if needed
What profile link does NOT contain:
- Seller's real identity
- Contact information
- Location
- Services offered
- Pricing
- Photos
- Any content that makes platform a "marketplace"
Profile link format:
https://[platform-domain]/verify/[seller-pseudonym-hash]
Profile page contents (minimal):
═══════════════════════════════════════════════════════════
This seller uses [Platform Name] for safety screening.
To interact with this seller:
1. Create a buyer account (if you don't have one)
2. Contact the seller off-platform to arrange details
3. After your interaction, the seller will provide you a code to submit feedback
Learn more about [Platform Name]: [link to public info page]
[Create Buyer Account] [Log In]
═══════════════════════════════════════════════════════════
Usage patterns:
Pattern A - Default disclosure: Seller includes profile link in all public advertisements:
- Ad text: "I use [Platform Name] for screening. Profile: [link]"
- Buyer sees this before initial contact
- Buyer creates account proactively (optional)
Pattern B - Screening-phase disclosure: Seller shares profile link during screening conversation:
- Seller (via text/email): "I screen through [Platform Name]. Here's my profile: [link]"
- Buyer views seller's profile page (no reputation detail is shown, but confirmed platform membership establishes seller legitimacy)
- Buyer creates account or logs in
Pattern C - Post-interaction disclosure: Seller provides link only after interaction for rating collection:
- Least useful (provides no pre-screening benefit)
- Still valid for rating collection purposes
Recommended: Pattern A or B (enables screening function).
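The `/verify/[seller-pseudonym-hash]` path can be derived with a salted, truncated hash. The spec fixes only the URL shape, so the salt, digest choice, and truncation length below are illustrative assumptions:

```python
import hashlib

def profile_link(domain, pseudonym, salt=b"platform-wide-secret"):
    """Build the /verify/ URL for a seller pseudonym.

    A salted, truncated SHA-256 keeps the path stable for a given
    pseudonym without exposing the pseudonym itself in the URL.
    Salt value and 16-character truncation are illustrative choices.
    """
    digest = hashlib.sha256(salt + pseudonym.encode("utf-8")).hexdigest()[:16]
    return f"https://{domain}/verify/{digest}"
```

Because the hash is deterministic, the same pseudonym always yields the same link (so sellers can print it in ads), while the salt prevents outsiders from confirming a guessed pseudonym by recomputing the hash.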
13. Data Practices and Security
13.1 Data Minimization
Principle: Collect only the minimum data necessary for core function.
Data collected:
- Pseudonyms
- Account type (seller/buyer)
- Interaction dates (timestamp precision: day, not minute)
- Rating flags (Safe, Reliable, structured buyer ratings)
- Verification codes (active codes only)
- Aggregate scores (derived from ratings and cached, not independently collected)
- Messages (ephemeral, auto-expire)
Data NOT collected:
- Real names
- Contact information (except optional recovery email, stored separately)
- Payment information (dues handled via external processor)
- Location or GPS data
- Device fingerprints
- IP addresses (used only for immediate session management, never logged)
- Browsing behavior or analytics
- Social connections outside safety network
- Content of interactions (what services, where, etc.)
- Photographic or biometric data
13.2 Data Retention
Ratings: Retained indefinitely (necessary for pattern detection)
Verification codes: Retained while active (30 days max), deleted when used or expired
Messages: Auto-expire (default 30 days, maximum 90 days)
Logs:
- Operational logs: 7 days
- Security logs: 30 days
- No long-term archiving
Account data after deletion:
- Pseudonym anonymized (replaced with hash)
- Ratings remain (associated with anonymized ID, not pseudonym)
- All other data purged within 24 hours
Rationale:
- Long-lived data is liability
- Minimal retention reduces legal exposure
- Ephemeral data prevents weaponization
- Pattern detection requires rating persistence, but not identity persistence
13.3 Encryption
At rest:
- AES-256 encryption for database
- Separate encryption for backups
- Key management via hardware security module (HSM) or equivalent
In transit:
- TLS 1.3 minimum
- Perfect forward secrecy
- Certificate pinning where feasible
- No downgrade to older TLS versions
Application layer:
- Password hashing: Argon2id (current best practice)
- No reversible password storage
- No plaintext secrets in code
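The spec calls for Argon2id, which in Python typically means the third-party argon2-cffi package. To keep this sketch dependency-free it uses stdlib `hashlib.scrypt` to illustrate the same salt-and-verify pattern; swap in `argon2.PasswordHasher` for production:

```python
import hashlib
import hmac
import os

# Illustrative stand-in: scrypt from the stdlib demonstrates the pattern
# (random salt, memory-hard KDF, constant-time verify). Production should
# use Argon2id per section 13.3, e.g. via the argon2-cffi package.

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + key   # store salt alongside the derived key


def verify_password(password, stored):
    salt, key = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)   # constant-time comparison
```

The important properties carry over regardless of KDF: per-user random salt (no rainbow tables), a deliberately expensive derivation (slows brute force), and constant-time comparison (no timing leak).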
13.4 Access Control (Internal)
Role-based access:
- Administrators: system maintenance, no data access
- Security team: logs and anomaly detection, no user data access
- Support: limited access for dispute resolution (audit-logged)
- Developers: no production data access (use anonymized test data)
Principle of least privilege:
- Each role has minimum necessary access
- All access logged
- Regular access reviews
- Two-person rule for sensitive operations
No single point of compromise:
- No single admin account with full access
- No "master key" that decrypts everything
- Compartmentalization of sensitive functions
13.5 Legal Protection
Jurisdiction selection:
- Host in jurisdiction with strong privacy laws
- Host in jurisdiction with sex work decriminalization or tolerance
- Avoid jurisdictions with mandatory data retention
- Avoid jurisdictions with broad surveillance laws
Legal structure:
- Establish as non-profit cooperative (if possible)
- Clear terms of service establishing platform as infrastructure, not marketplace
- Legal counsel specializing in sex work, privacy, and tech
Subpoena response:
- Minimal compliance (only what legally required)
- Immediate user notification (unless legally prohibited)
- Transparency report (annual, detailing requests received and responses)
- Challenge overbroad requests
No proactive cooperation:
- No voluntary data sharing with law enforcement
- No "partnerships" with police or prosecutors
- No moral panic compliance
13.6 Third-Party Processors
Email (account recovery):
- Use privacy-respecting provider (ProtonMail, Tutanota, or similar)
- No tracking or analytics in emails
- Minimal email content (links only, no data)
Payment (membership dues):
- Use processor with anonymity support (crypto, privacy-focused payment services)
- No stored payment credentials on platform
- No transaction history beyond "dues paid / not paid"
Hosting:
- Use provider with strong privacy commitment
- Use provider resistant to takedown pressure
- Distributed or federated hosting if feasible
Analytics (if any):
- Self-hosted only (Matomo, Plausible, or similar)
- No third-party analytics services
- Aggregate data only, no individual tracking
- Opt-out capability
14. Attack Surface and Failure Modes
14.1 Failure Mode: Retaliatory Reviews
Attack: Unsafe buyer gives seller bad rating in retaliation for declining service or reporting safety issue.
Countermeasure:
- Buyer's I_b collapses when Safe flags are low
- Retaliatory rating carries minimal or zero weight
- Seller's Q_s barely affected
Residual risk: Minimal. System structurally defeats this attack.
14.2 Failure Mode: Review Coercion
Attack: Buyer pressures seller during interaction: "Do this or I'll review-bomb you."
Countermeasure:
- Buyer demonstrating coercive behavior will be flagged Safe=0
- This immediately reduces future I_b
- Coercion attempt backfires (buyer loses influence)
Residual risk: Low. Coercion attempt is self-defeating in weighted system.
14.3 Failure Mode: Buyer Brigading
Attack: Multiple buyers coordinate to all give low ratings to a seller.
Countermeasure:
- Buyers cannot see other buyers
- No buyer communication layer
- No social graph
- Coordination is structurally difficult
Additional protection:
- If brigading detected (multiple low ratings in short time from new accounts), flag for review
- Governance can adjust algorithms to detect suspicious patterns
Residual risk: Low due to structural barriers.
14.4 Failure Mode: Sybil Attack (Fake Accounts)
Attack: Malicious actor creates many fake buyer accounts to:
- Generate fake positive ratings (for themselves)
- Generate fake negative ratings (for others)
Countermeasure:
- Account creation rate limiting
- Interaction history required to build influence (I_b grows with legitimate interaction history)
- Fake accounts with no real interaction history have I_b ≈ 0
- Sellers can see interaction count (n) and confidence intervals
- Verification codes required for buyer ratings (fake accounts cannot rate without seller-provided codes)
Residual risk: Low. Verification code requirement makes Sybil attacks much harder—attacker would need to actually interact with sellers to get codes.
Enhancement consideration: CAPTCHA or proof-of-work on account creation (increases cost of Sybil attack).
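The proof-of-work enhancement mentioned above could take a hashcash-like form: the server issues a random challenge and the client must find a nonce whose hash has enough leading zero bits. The difficulty value and token format here are illustrative:

```python
import hashlib
import secrets

DIFFICULTY = 12   # leading zero bits required; tune so solving takes ~a second

def challenge():
    """Server-issued random challenge string."""
    return secrets.token_hex(8)

def leading_zero_bits(digest):
    """Count leading zero bits of a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(chal, difficulty=DIFFICULTY):
    """Client side: brute-force a nonce meeting the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{chal}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(chal, nonce, difficulty=DIFFICULTY):
    """Server side: one hash to check the submitted nonce."""
    digest = hashlib.sha256(f"{chal}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

Verification costs the server one hash while each fake account costs the attacker thousands, which is the asymmetry that raises the price of a Sybil attack without adding any identity requirement.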
14.5 Failure Mode: Seller Collusion
Attack: Group of sellers coordinate to falsely flag a buyer as unsafe to exclude competition for that buyer's business.
Countermeasure:
- Lower-quantile aggregation makes this harder (requires 25%+ of ratings to be false)
- Buyer can appeal via governance dispute process
- Sellers submitting false flags risk reputation within seller network
Residual risk: Moderate. Difficult to prevent entirely, but:
- Requires coordinated effort
- Visible to governance if pattern emerges
- Social cost within seller collective
Mitigation: Transparency to buyers about why they have low scores (number of Safe=0 flags, without identifying which sellers).
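The lower-quantile aggregation referenced here can be made concrete with a small sketch. The exact estimator (k-th smallest value with k = ceil(q·n)) is an assumption, but it shows why fewer than 25% adverse flags cannot move a buyer's score:

```python
import math

def lower_quantile(flags, q=0.25):
    """Empirical lower quantile of binary Safe/Reliable flags.

    Taking the ceil(q*n)-th smallest value means that as long as
    strictly fewer than q of the ratings are adverse, the score is
    unchanged -- the robustness property cited against collusion.
    The estimator choice is an illustrative assumption.
    """
    if not flags:
        return None
    ordered = sorted(flags)
    k = max(0, math.ceil(q * len(flags)) - 1)
    return ordered[k]
```

With 12 ratings, two colluding sellers submitting Safe=0 leave the score at 1; it takes three (exactly 25%) to drag it to 0, matching the "25%+ of ratings" figure above.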
14.6 Failure Mode: Platform Drift into Marketplace
Attack: (Internal) Platform operators or governance gradually add features that turn system into marketplace (booking, payments, discovery).
Countermeasure:
- Constitutional charter prohibits these features
- Requires supermajority vote to change
- Clear mission statement in governance documents
- Regular community review of feature requests
Residual risk: Moderate. Requires vigilance.
Mitigation: Strong norms, transparent decision-making, member education about mission boundaries.
14.7 Failure Mode: Monetization Capture
Attack: (Internal or external) Pressure to monetize through ads, transaction fees, or investor funding.
Countermeasure:
- Cooperative ownership structure (no external shareholders)
- Constitutional prohibition on certain revenue models
- Fixed-dues funding model
Residual risk: Low if governance holds firm.
Mitigation: Financial transparency, regular reporting to membership, clear communication about funding sustainability.
14.8 Failure Mode: External Platform Gatekeeping
Attack: App stores (Apple, Google) remove platform for moral reasons.
Countermeasure:
- Web-based platform (no app store dependency)
- Distributed hosting (no single point of removal)
- Mirrors and redundancy
Residual risk: Minimal. Web-first architecture removes app stores as a chokepoint.
14.9 Failure Mode: Legal Exposure Through Archives
Attack: Law enforcement or civil litigants subpoena platform data, especially message archives or rating details.
Countermeasure:
- Minimal data retention (ephemeral messages)
- No long-term logs
- Pseudonymous accounts (no real identity to expose)
- Strong legal jurisdiction selection
Residual risk: Moderate. Subpoenas are always possible, but minimal data limits exposure.
Mitigation: Immediate user notification of subpoenas, legal challenge to overbroad requests, transparency reporting.
14.10 Failure Mode: Social Engineering / Phishing
Attack: Attacker impersonates platform or admin to extract user credentials or data.
Countermeasure:
- User education about platform communication methods
- No admins ever request passwords or verification codes
- Two-factor authentication encouraged
- Clear domain verification (HTTPS, certificate pinning)
Residual risk: Moderate. Users remain vulnerable to sophisticated phishing.
Mitigation: Regular security reminders, clear official communication channels, reporting mechanism for suspicious contact.
14.11 Failure Mode: Code Sharing/Trading
Attack: Buyers share unused verification codes or sellers generate codes for non-existent interactions.
Countermeasures:
- Code-to-seller relationship tracked (buyer can only use code for the seller who generated it)
- Statistical monitoring: sellers generating excessive unused codes flagged for review
- Buyers rating same seller multiple times with different codes flagged
- Governance committee reviews suspicious patterns
Residual risk: Moderate. Determined actors could game this, but:
- Requires coordination
- Creates statistical anomalies (detectable)
- Limited benefit (fake positive ratings still weighted by buyer's I_b)
Mitigation: Community norms against code trading, clear terms of service prohibition.
14.12 Failure Mode: Code Coercion
Attack: Buyer pressures seller to provide code for non-existent interaction, or demands code in advance.
Countermeasures:
- Education: sellers instructed to provide codes only after completed interactions
- Seller autonomy: refusing to provide code is always acceptable
- No penalties for unused codes (reduces pressure to "use them all")
- Clear messaging: "Only provide codes after interactions"
Residual risk: Low. Seller controls code generation; coercion is limited.
15. Governance: Operational Details
15.1 Membership Criteria
Seller membership:
- Self-identification as seller
- Agreement to charter principles
- Payment of dues
- Participation in good faith
No additional requirements:
- No verification of sex work status
- No proof of identity
- No minimum activity level
Buyer accounts:
- Not members (no voting rights)
- Can create accounts freely
- Subject to reputation system
15.2 Dues Structure
Fixed periodic fee (not volume-based):
- Monthly or annual dues
- Tiered by region (adjusted for purchasing power)
- Waiver available for financial hardship (application process)
Example structure:
- High-income region: $20/month or $200/year
- Middle-income region: $10/month or $100/year
- Low-income region: $5/month or $50/year
- Hardship waiver: free (requires brief explanation, no verification)
Rationale:
- Fixed dues align incentives with safety, not growth
- Tiered structure ensures accessibility
- Hardship waiver prevents financial exclusion
15.3 Voting Procedures
Regular votes (simple majority):
- Operational policy changes
- Budget allocation
- Non-structural governance changes
- Committee elections
Supermajority votes (2/3 required):
- Constitutional amendments
- Algorithm changes affecting reputation weighting
- Changes to core principles
- Addition of prohibited features
Voting mechanics:
- Electronic voting (secure, auditable)
- Voting period: minimum 14 days
- Quorum requirement: 20% of membership
- Results published within 24 hours
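The quorum and threshold rules above reduce to a short predicate. How abstentions count toward quorum is left open by the text, so this sketch counts only cast votes (an assumption):

```python
def motion_passes(votes_for, votes_against, membership_size, supermajority=False):
    """Apply the 20% quorum and majority/supermajority rules.

    Only cast votes count toward quorum here -- an assumption; the
    charter would need to pin down how abstentions are treated.
    """
    cast = votes_for + votes_against
    if cast < 0.20 * membership_size:
        return False   # quorum not met
    threshold = 2 / 3 if supermajority else 1 / 2
    return votes_for > threshold * cast
```

For a 1000-member collective, a 150–40 result fails (190 votes is below the 200-vote quorum), while a 120–100 result passes an ordinary motion but the same margin would fail a constitutional amendment.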
15.4 Committees
Security Committee:
- Monitors for attacks and abuse
- Responds to security incidents
- Conducts regular audits
- Reports to membership quarterly
Dispute Resolution Committee:
- Handles appeals and conflicts
- Reviews edge cases
- Recommends policy clarifications
- Reports to membership quarterly
Governance Committee:
- Manages voting procedures
- Reviews charter compliance
- Facilitates constitutional amendments
- Ensures transparency
Terms:
- Elected annually
- Staggered terms (50% each year for continuity)
- Term limits (maximum 3 consecutive terms)
- Recall mechanism (petition + vote)
15.5 Transparency Requirements
Public (to membership):
- All algorithm changes
- All policy changes
- All votes and results
- Financial statements (quarterly)
- Security incident reports (anonymized)
- Subpoena transparency reports (annual)
Private:
- Individual user data
- Specific security vulnerabilities (until patched)
- Ongoing investigations
Mechanism:
- Regular newsletters to membership
- Public changelog (version-controlled)
- Annual report
- Open forum for questions
15.6 Dispute Resolution Process
Step 1: Self-resolution
- Parties attempt to resolve directly (if safe to do so)
Step 2: Mediation
- Request mediation from Dispute Resolution Committee
- Neutral mediator assigned
- Non-binding recommendation
Step 3: Binding arbitration
- If mediation fails, arbitration by 3-member panel
- Decision is final within platform
- No external legal system involvement unless participant chooses to pursue independently
Appeal:
- Can appeal to full membership (requires petition with 10 member signatures)
- Membership vote on whether to overturn arbitration decision
Protections:
- No retaliation for filing disputes
- Confidentiality maintained
- Process documented for accountability
16. User Experience and Interface Design
16.1 Design Principles
Simplicity:
- Minimal cognitive load
- Clear information hierarchy
- No unnecessary features
Privacy:
- No tracking
- No social features that leak information
- Minimal metadata exposure
Accessibility:
- Works on low-bandwidth connections
- Works on older devices
- Screen reader compatible
- Multiple language support
Discretion:
- Non-identifiable visual design (doesn't "look like sex work platform")
- No auto-playing media
- Private browsing mode respected
16.2 Seller Interface
Dashboard:
═══════════════════════════════════════════════
Welcome, [pseudonym]
Quick actions:
• Look up buyer reputation
• Generate rating code for recent interaction
• Submit rating for recent interaction
• View messages from seller network
• Update account settings
Recent activity:
• 3 new messages in safety alerts
• 1 new interaction rated
• 2 unused rating codes generated
• 0 pending disputes
Your reputation (Q_s): 4.7/5.0 (based on 82 ratings)
Your profile link: [copy link]
═══════════════════════════════════════════════
Buyer lookup flow:
1. Enter buyer pseudonym →
2. View reputation scores + confidence intervals →
3. Decision (accept/decline) →
4. [If accept] After interaction, generate code and submit rating
Rating submission:
Rate buyer: [pseudonym]
Was this interaction safe?
( ) Yes ( ) No
Was this buyer reliable?
( ) Yes ( ) No
[Submit rating] [Cancel]
Code generation flow:
Generate Rating Code
This code allows a buyer to submit a rating for you.
Only provide this code to buyers after a completed interaction.
[Generate Code]
═══════════════════════════════════════════════
Code generated: 847392
Provide this code to your buyer so they can rate the interaction.
This code:
• Is valid for 30 days
• Can be used only once
• Cannot be traced back to buyer's identity
[Copy Code] [Generate Another] [View Code History]
═══════════════════════════════════════════════
Code history view:
Rating Code History
Recent codes:
• 847392 - Generated 5 min ago - Status: Unused
• 621038 - Generated 2 days ago - Status: Used
• 394857 - Generated 4 days ago - Status: Expired (unused)
• 182746 - Generated 1 week ago - Status: Used
[Show more]
Messaging:
Seller Network Messages
[Compose new message]
Recent threads:
• Safety alert: Pattern concern in [region] (3 hours ago)
• Discussion: Screening best practices (1 day ago)
• Support: New member introductions (2 days ago)
[View all threads]
16.3 Buyer Interface
Dashboard:
═══════════════════════════════════════════════
Welcome, [pseudonym]
Your reputation:
Safe score: 0.85 (based on 12 interactions)
Reliable score: 1.0 (based on 12 interactions)
What this means:
• You have good standing in the community
• Most sellers will consider accepting your requests
• Continue being safe and reliable to maintain access
Recent ratings you submitted: [View history]
[Update account settings]
═══════════════════════════════════════════════
Rating submission:
Rate your experience with seller
First, enter the rating code provided by the seller:
[______] (6 digits)
[Verify Code]
═══════════════════════════════════════════════
Code verified ✓
Rate your experience with seller: [pseudonym]
Listing accuracy:
( ) Accurate ( ) Minor deviation ( ) Significant deviation
Punctuality:
( ) On time ( ) Slightly late ( ) Very late ( ) Did not show
Professionalism:
( ) Professional ( ) Adequate ( ) Unprofessional
[Submit rating] [Cancel]
Note: Your rating will be weighted based on your safety and reliability scores.
Error states:
Code verification failed:
• "Code invalid or expired" (code doesn't exist or >30 days old)
• "Code already used" (prevents double-rating)
• "Too many failed attempts" (rate limiting triggered)
• "You must complete an interaction before rating" (educational message)
What buyers cannot do:
- Search or browse buyers
- See other buyers' profiles
- Message other buyers
- View their own influence weight (I_b) directly (only aggregate scores are shown)
- Browse or search sellers within platform
16.4 Mobile Responsiveness
Requirements:
- Responsive design (works on phones, tablets, desktops)
- Touch-friendly interface
- Readable on small screens
- Fast loading on mobile networks
Not requirements:
- Native app (prohibited)
- Push notifications (potential privacy leak)
- Offline functionality (not needed)
17. Launch and Scaling Strategy
17.1 Pilot Phase
Initial launch:
- Invite-only (trusted seller network)
- Small geographic region or community
- 50-100 initial seller members
- Intensive feedback collection
- Rapid iteration on UX and policy
Pilot goals:
- Validate reputation algorithm
- Test dispute resolution process
- Identify UX pain points
- Test verification code system
- Build trust within initial community
- Establish governance norms
Duration: 3-6 months
17.2 Expansion Strategy
Geographic expansion:
- One region at a time (prevents overwhelming support capacity)
- Prioritize regions with:
- Strong seller organizing
- Legal environment favorable to harm reduction
- Existing trust networks
Invitation model:
- Existing members can invite new members
- Invitation approval by governance (prevents bad actors)
- Gradual scaling (controlled growth)
Not:
- Open registration (too risky)
- Viral marketing (inappropriate for sensitive service)
- Paid advertising (mission-incompatible)
17.3 Sustainability Planning
Financial sustainability:
- Break-even target: 1000 paying members (at $10/month average = $10k/month)
- Costs:
- Hosting: $1-2k/month
- Development: $3-5k/month (part-time developers)
- Administration: $2-3k/month
- Legal: $1-2k/month
- Reserve fund: $1k/month
- Estimated total: roughly $8-13k/month; the 1000-member target funds the lower-middle of this range
Technical sustainability:
- Open-source codebase (allows community contribution)
- Documentation for handoffs
- No dependency on single developer
- Regular security audits
Governance sustainability:
- Smooth leadership transitions
- Knowledge transfer processes
- Regular elections
- Preventing burnout through distributed responsibility
17.4 Success Metrics
Safety metrics:
- Reduction in reported boundary violations (tracked via seller surveys)
- Reduction in repeat offenders (tracked via buyer pattern data)
- Seller satisfaction with safety outcomes (periodic surveys)
System metrics:
- Active seller membership (target: steady growth, not explosive)
- Buyer reputation distribution (expect: most buyers moderate-to-high, small tail of low-trust)
- Rating submission rate (target: >50% of interactions rated)
- Verification code usage rate (target: >60% of codes used)
- Dispute rate (target: <2% of interactions)
Governance metrics:
- Voter participation rate (target: >30%)
- Committee turnover (healthy: 50% per year)
- Transparency report compliance (target: 100%)
Anti-metrics (what not to optimize for):
- Total number of buyers (bigger is not better)
- Transaction volume (not a marketplace)
- Platform engagement time (efficiency is better)
- Revenue growth beyond sustainability
18. Edge Cases and Special Considerations
18.1 Cross-Border Interactions
Problem: Buyer and seller in different legal jurisdictions.
Approach:
- System remains jurisdiction-neutral
- Ratings submitted based on behavior, not legality
- No legal advice or compliance guidance
- Users responsible for understanding local laws
18.2 Touring/Traveling Sellers
Problem: Seller works in multiple regions, may use different working names.
Approach:
- Single account, single pseudonym
- Account follows seller across regions
- No need to create multiple accounts
- Reputation travels with account
Alternative: If seller prefers separate identities in different regions:
- Can create multiple accounts
- Each account has separate reputation
- No cross-linking (preserves discretion)
18.3 Buyers Who Are Also Sellers
Problem: Some people both buy and sell sex.
Approach:
- Can have both seller and buyer accounts (separate pseudonyms)
- Accounts not linked
- Each account builds separate reputation
- No special treatment or privileges
18.4 Group or Duo Providers
Problem: Some sellers work in pairs or groups.
Approach:
- Each seller has individual account
- Buyers rate each seller individually
- Sellers coordinate externally on who to accept
- No joint accounts (prevents accountability diffusion)
18.5 Agencies and Management
Problem: Some sellers work through agencies.
Approach:
- Individual seller accounts only (no agency accounts)
- Agency cannot control seller's account
- Agency cannot view seller's ratings or data
- Ratings about seller behavior, not agency behavior
Rationale: Prevents agency capture of reputation system.
18.6 Returning Users After Exit
Problem: User deletes account, then wants to return.
Approach:
- Can create new account
- Reputation does not transfer
- Starts fresh (this is intentional cost of exit)
Exception: If deletion was due to platform error or safety concern (verified by dispute committee), reputation can be manually restored.
18.7 Inactive Accounts
Problem: Accounts unused for extended period.
Approach:
- No automatic deletion
- Reputation remains (valuable for pattern detection even if account inactive)
- Can reactivate at any time
Exception: If account breaches terms (spam, abuse), can be suspended by governance vote.
18.8 Reputation Rehabilitation
Problem: Buyer with low reputation wants to improve.
Approach:
- Possible through demonstrated changed behavior
- Quantile-based aggregation means recent good behavior improves scores
- Takes time (intentionally)
- No "expungement" or reset
Mechanism:
- As new Safe=1, Reliable=1 ratings accumulate, quantile₀.₂₅ shifts upward
- I_b gradually recovers
- Buyer regains access through sustained good behavior
Not allowed:
- Account cycling to reset reputation
- Paid rehabilitation programs
- Administrative override without pattern of good behavior
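The recovery trajectory can be simulated with the quantile₀.₂₅ aggregation described above; the exact estimator (k-th smallest with k = ceil(0.25·n)) is an illustrative assumption:

```python
import math

def quantile_25(flags):
    """25th-percentile score over binary flags (illustrative estimator)."""
    ordered = sorted(flags)
    return ordered[max(0, math.ceil(0.25 * len(ordered)) - 1)]

# A buyer with three Safe=0 flags among nine early ratings sits at score 0.
# Each sustained Safe=1 rating dilutes the adverse share; the score recovers
# once that share falls below the 25% mark (3/13 ~ 23%).
history = [1, 0, 1, 0, 1, 1, 0, 1, 1]   # 3 of 9 adverse -> score 0
scores = []
for _ in range(6):
    history.append(1)                    # one new good rating at a time
    scores.append(quantile_25(history))
```

The simulated trajectory is 0, 0, 0, 1, 1, 1: three more good ratings are needed before the score flips, which is the "takes time, intentionally" property in prose form.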
18.9 Lost or Compromised Verification Codes
Problem: Seller generates code but loses it, or buyer claims to have lost code.
Approach:
- Seller can view code history to retrieve unused codes
- If code truly lost and buyer needs to rate, seller can generate new code
- No backdoor bypass (buyer must have valid code to rate)
If account compromised:
- Seller can invalidate all unused codes
- Change password immediately
- Review recent code usage for suspicious activity
19. Communication and Education
19.1 User Onboarding
New seller onboarding:
- Charter principles explained
- How reputation system works
- How to query buyer reputation
- How to generate and manage verification codes
- How to submit ratings
- How to use seller network
- How to share profile link for screening
- Governance participation information
New buyer onboarding:
- What the system is (and is not)
- How to build good reputation
- What Safe and Reliable mean
- How ratings work with verification codes
- Appeal process
Format:
- Brief written guide
- Video tutorials (optional)
- FAQ
- Live onboarding session (for pilots)
19.2 Ongoing Education
Regular communications:
- Quarterly newsletter
- System updates and changes
- Governance decisions and votes
- Security tips and best practices
- Community spotlights (anonymized success stories)
Resources:
- Safety best practices library (seller-contributed)
- Legal resources (jurisdiction-specific)
- Technical help documentation
- Glossary of terms
19.3 External Communication
Public-facing:
- Website explaining mission and principles
- Press inquiries (handled by governance committee)
- Research partnerships (with strict privacy protections)
What not to communicate:
- User data or statistics that could identify participants
- Specific implementation details that could enable attacks
- Internal disputes or conflicts
19.4 Cultural Norms
Encouraging:
- Honest rating (not vindictive, not inflated)
- Respectful communication in seller network
- Constructive feedback in governance
- Supporting new members
- Providing verification codes only after completed interactions
Discouraging:
- Public shaming or call-outs
- Gossip outside seller network
- Gaming the system
- False ratings (intentional misrepresentation)
- Code sharing or trading
Enforcement:
- Norms primarily enforced socially (peer accountability)
- Governance intervention only for serious violations
- Education before punishment
20. Technical Implementation Notes
20.1 Technology Stack (Recommendations)
Backend:
- Language: Python, Ruby, or Node.js (mature, well-supported)
- Framework: Django, Rails, or Express (depends on language choice)
- Database: PostgreSQL (robust, open-source, handles complex queries)
- Caching: Redis (for session management, rate limiting)
Frontend:
- HTML/CSS/JavaScript (standard web technologies)
- Framework: React or Vue (if complex UI needed, otherwise vanilla JS)
- Mobile-first responsive design (Bootstrap or Tailwind CSS)
Infrastructure:
- Hosting: Privacy-respecting provider (Njalla, 1984 Hosting, or similar)
- CDN: Minimal use, self-hosted if possible
- Monitoring: Self-hosted (avoid third-party analytics)
Security:
- SSL/TLS: Let's Encrypt or similar
- WAF: Web application firewall for DDoS protection
- Rate limiting: Prevent abuse
- Security audits: Annual third-party penetration testing
20.2 Database Schema (Simplified)
Users table:
id (uuid, primary key)
pseudonym (string, unique)
account_type (enum: seller/buyer)
created_at (timestamp)
last_active (timestamp)
email_hash (optional, for recovery)
password_hash (argon2id)
Ratings table:
id (uuid, primary key)
rater_id (uuid, foreign key to users)
rated_id (uuid, foreign key to users)
rating_type (enum: safe/reliable/listing_accuracy/punctuality/professionalism)
rating_value (integer or enum)
created_at (timestamp)
VerificationCodes table:
id (uuid, primary key)
code (string, 6 digits, indexed, unique while active)
seller_id (uuid, foreign key to users, seller only)
generated_at (timestamp)
expires_at (timestamp, default: generated_at + 30 days)
used_at (timestamp, nullable)
used_by_buyer_id (uuid, foreign key to users, buyer only, nullable)
status (enum: active/used/expired)
Messages table:
id (uuid, primary key)
sender_id (uuid, foreign key to users, seller only)
recipient_id (uuid, foreign key to users, seller only, or null for group)
content (text, encrypted)
created_at (timestamp)
expires_at (timestamp)
Computed reputation table (cached):
user_id (uuid, primary key)
S_b (float)
R_b (float)
T_b (float)
I_b (float)
interaction_count (integer)
last_updated (timestamp)
Indexes:
- code (for fast lookup during verification)
- seller_id + status (for seller dashboard)
- expires_at (for cleanup job)
Cleanup job: Nightly deletion of expired codes (status=expired and expires_at < now - 90 days).
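The VerificationCodes table and its lifecycle (generate, single-use redemption, nightly cleanup) can be sketched end-to-end. SQLite stands in for PostgreSQL purely for illustration, and the function names are assumptions, not the real API; note the partial unique index implementing "unique while active."

```python
import secrets
import sqlite3
from datetime import datetime, timedelta

def open_db() -> sqlite3.Connection:
    # SQLite stands in for the PostgreSQL schema above (illustrative only).
    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE verification_codes (
            id INTEGER PRIMARY KEY,
            code TEXT NOT NULL,
            seller_id TEXT NOT NULL,
            generated_at TEXT NOT NULL,
            expires_at TEXT NOT NULL,
            used_at TEXT,
            used_by_buyer_id TEXT,
            status TEXT NOT NULL DEFAULT 'active'
        )""")
    # "Unique while active" via a partial index.
    db.execute("""
        CREATE UNIQUE INDEX idx_active_code
        ON verification_codes (code) WHERE status = 'active'""")
    return db

def generate_code(db, seller_id: str, now: datetime) -> str:
    code = f"{secrets.randbelow(10**6):06d}"   # six digits, zero-padded
    db.execute(
        "INSERT INTO verification_codes (code, seller_id, generated_at, expires_at) "
        "VALUES (?, ?, ?, ?)",
        (code, seller_id, now.isoformat(), (now + timedelta(days=30)).isoformat()),
    )
    return code

def redeem_code(db, code: str, buyer_id: str, now: datetime) -> bool:
    # Single-use: only a still-active, unexpired code can be redeemed.
    cur = db.execute(
        "UPDATE verification_codes SET status='used', used_at=?, used_by_buyer_id=? "
        "WHERE code=? AND status='active' AND expires_at > ?",
        (now.isoformat(), buyer_id, code, now.isoformat()),
    )
    return cur.rowcount == 1

def nightly_cleanup(db, now: datetime) -> int:
    # Mark overdue codes expired, then delete those past the 90-day retention.
    db.execute(
        "UPDATE verification_codes SET status='expired' "
        "WHERE status='active' AND expires_at <= ?",
        (now.isoformat(),),
    )
    cur = db.execute(
        "DELETE FROM verification_codes WHERE status='expired' AND expires_at < ?",
        ((now - timedelta(days=90)).isoformat(),),
    )
    return cur.rowcount
```

The atomic UPDATE in `redeem_code` doubles as the uniqueness check: two concurrent redemption attempts cannot both see the code as active.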
20.3 API Design (Internal)
Endpoints (authenticated):
POST /api/ratings/submit
GET /api/buyer/:pseudonym/reputation
GET /api/seller/me/reputation
POST /api/codes/generate
POST /api/ratings/verify_code
GET /api/codes/history
GET /api/messages
POST /api/messages/send
GET /api/account/settings
PUT /api/account/settings
DELETE /api/account
GET /api/seller/:pseudonym/profile (public, no auth required)
Authentication:
- Session-based (cookies, HTTP-only, secure)
- JWTs for mobile/API access (if needed)
- Rate limiting on all endpoints
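The session authentication above can be sketched with a signed-token scheme using only the standard library. This is an illustrative HMAC construction, not a full JWT implementation; a vetted library (or the framework's built-in session handling) would be used in practice, and the key and function names are assumptions.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

SECRET = b"replace-with-a-server-side-secret"   # hypothetical; load from config

def sign_session(payload: dict) -> str:
    # Encode the payload, then append an HMAC-SHA256 tag over it.
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + tag).decode()

def verify_session(token: str) -> Optional[dict]:
    body, _, tag = token.encode().partition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None   # forged or tampered token
    return json.loads(base64.urlsafe_b64decode(body))
```

`hmac.compare_digest` avoids timing side channels when comparing tags; the token would ride in an HTTP-only, Secure cookie as the Authentication list specifies.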
20.4 Deployment Considerations
Staging environment:
- Test all changes before production
- Anonymized test data only
- No real user data in staging
Deployment process:
- Blue-green deployment (zero downtime)
- Automated testing before deploy
- Rollback capability
- Changelog updated automatically
Backup strategy:
- Daily encrypted backups
- Offsite storage
- Tested restore procedure
- 30-day retention
Monitoring:
- Uptime monitoring
- Error tracking (self-hosted Sentry or similar)
- Performance metrics
- Security alerts
21. Future Considerations (Outside Current Scope)
These items are not part of the current system but may be considered in the future, subject to governance approval and alignment with charter principles:
21.1 Federated/Distributed Architecture
Concept: Instead of single centralized system, multiple instances that share reputation data.
Potential benefits:
- Increased resilience
- Harder to shut down
- Regional customization
Challenges:
- Trust between instances
- Data synchronization
- Complexity
Status: Interesting long-term possibility, but adds significant complexity. Centralized architecture acceptable for pilot and early scaling.
21.2 Blockchain/Decentralization
Concept: Store reputation data on blockchain or distributed ledger.
Potential benefits:
- No central point of control
- Tamper-resistant records
- Transparency
Challenges:
- Privacy risks (public ledger)
- Technical complexity
- Energy consumption
- Governance of protocol changes
Status: Not recommended. Blockchain adds more problems than it solves for this use case. Centralized architecture with strong governance is simpler and more privacy-preserving.
21.3 Machine Learning for Pattern Detection
Concept: Use ML to detect suspicious patterns (Sybil attacks, coordinated brigading, etc.).
Potential benefits:
- Faster detection of attacks
- More sophisticated pattern recognition
Challenges:
- Black box decision-making
- Bias and fairness concerns
- Requires significant data
- Explainability problems
Status: Not recommended for launch. Simple statistical methods (quantile aggregation, threshold rules) are more transparent and auditable. ML could be considered later if attacks become sophisticated, but only with strong governance oversight.
21.4 Integration with External Verification
Concept: Allow users to optionally link external verified identity (e.g., government ID) to gain "verified" status.
Status: Explicitly rejected. Conflicts with "no identity escrow" principle. Creates coercion vector. Not compatible with charter.
21.5 Payment Integration
Concept: Allow payments to occur through platform (bookings, escrow, tipping).
Status: Explicitly rejected. Conflicts with mission boundaries. Turns system into marketplace. Creates legal exposure. Not compatible with charter.
22. Frequently Asked Questions
22.1 Why web-only, no apps?
App stores impose external governance, can remove apps arbitrarily, and require telemetry that compromises privacy. A web-based platform preserves independence and user privacy.
22.2 Why no real identity verification?
Identity verification creates:
- Coercion points (leverage for blackmail)
- Legal vulnerability (subpoena targets)
- Platform capture (whoever controls identity controls access)
The system protects safety without needing real identity.
22.3 Why can't buyers see other buyers?
Buyer-to-buyer visibility enables:
- Coordination for harassment
- Brigading and review manipulation
- Social graph construction
- Blackmail networks
Preventing buyer visibility protects sellers and maintains buyer pseudonymity.
22.4 Why no public banning?
Public bans create:
- Retaliation targets
- Legal exposure
- Escalation dynamics
Quiet exclusion (buyers simply lose access gradually) is safer and just as effective.
22.5 Why seller-only funding?
Buyer funding or transaction fees create perverse incentives. Platform would have incentive to maximize volume rather than safety. Fixed seller dues align platform with member interests.
22.6 Can good buyers recover from a bad rating?
Yes, but it takes time. Continued good behavior (Safe=1, Reliable=1 ratings) will gradually raise your quantile-based scores and rebuild I_b. This is intentional—trust must be re-earned through demonstrated behavior.
22.7 What if a seller falsely flags me as unsafe?
A single false rating has limited impact due to quantile aggregation. If a pattern of false ratings emerges, you can appeal to the dispute resolution committee. Sellers who submit false ratings risk their own reputation within the seller network.
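Both answers (22.6 and 22.7) rest on the 25th-percentile aggregation defined in Appendix B, which the sketch below illustrates. The linear-interpolation method in the helper is an assumption about the exact quantile variant used.

```python
def quantile(ratings, q):
    """Quantile aggregation with linear interpolation
    (the interpolation method is an illustrative assumption)."""
    xs = sorted(ratings)
    idx = q * (len(xs) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] * (1 - (idx - lo)) + xs[hi] * (idx - lo)

# 22.7: one false "unsafe" rating among eight honest ones barely registers.
mixed = [1, 1, 1, 1, 1, 1, 1, 0]
print(quantile(mixed, 0.25))                # 1.0

# 22.6: recovery is gradual -- three bad ratings take many good ones to outgrow.
history = [0, 0, 0, 1, 1]
print(quantile(history, 0.25))              # 0.0
print(quantile(history + [1] * 7, 0.25))    # 0.75, climbing back
```

The asymmetry is deliberate: a lone outlier cannot drag a good buyer down, but a genuine pattern of bad ratings sits at the low quantile until it is outweighed by sustained good behavior.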
22.8 Is this system trying to end sex work?
No. This system takes no position on whether sex work should exist. It exists to reduce harm within the market as it exists.
22.9 Why no photos or profiles on the platform?
This is not a marketplace or discovery platform. Sellers maintain their own marketing externally. Platform only handles screening after contact has been made, plus minimal verification profiles for buyer account creation.
22.10 What if the platform is shut down by authorities?
Web-based architecture, distributed hosting, and legal jurisdiction selection reduce this risk. If shutdown occurs, data minimization ensures limited exposure. Seller network can rebuild with lessons learned.
22.11 Why do I need a code to rate a seller?
Verification codes prevent fake ratings from people who never actually interacted with the seller. This protects against Sybil attacks and ensures rating authenticity.
22.12 What if I lose my verification code?
Contact the seller who provided it. They can look up the code in their history if it hasn't been used. If necessary, they can generate a new code for you.
22.13 Can sellers see who rated them?
Sellers see aggregate scores but not which specific buyers left which specific ratings (except their own submissions). This protects buyer privacy while maintaining accountability.
23. Conclusion and Call to Action
23.1 What This System Achieves
This system creates:
- Legible risk (pattern behavior becomes visible)
- Expensive unsafe behavior (loss of access)
- Structural protection against retaliation (weighted reputation system)
- Collective safety (seller network coordination)
- Autonomous decision-making (each seller controls their own thresholds)
- Verified authenticity (verification codes prevent fake ratings)
- Exit without penalty (accounts can be deleted)
- Privacy preservation (pseudonymity, no identity escrow)
It achieves these goals without:
- Becoming a marketplace
- Controlling who can participate
- Creating permanent records
- Cooperating with law enforcement
- Imposing moral judgments
23.2 What This System Requires
Success requires:
- Seller solidarity (collective commitment to honest rating)
- Active governance (participation in votes and decisions)
- Financial sustainability (paying dues)
- Cultural norms (respecting privacy, honest feedback, no public shaming, proper code usage)
- Vigilance against mission creep (resisting feature requests that compromise principles)
23.3 How to Get Involved
For sellers interested in pilot:
- Contact governance committee (details in separate document)
- Review charter and commit to principles
- Participate in onboarding
- Provide feedback during pilot phase
For developers interested in contributing:
- Code will be open-source (license TBD)
- Security review opportunities
- Documentation contributions
For researchers:
- Research partnerships possible (with strict privacy protections)
- Aggregate anonymized data may be available for harm-reduction research
- No individual data access
For allies and advocates:
- Support sex worker organizing
- Advocate for decriminalization
- Spread awareness of harm-reduction tools
- Respect privacy and discretion of users
23.4 Final Statement
This system exists to make sex work safer for those who choose to do it. It is infrastructure, not ideology. It is harm reduction, not abolition. It is collective power, not platform control.
It will succeed only if sellers govern it, protect it, and hold it accountable to its mission.
Solidarity.
Appendices
Appendix A: Glossary of Terms
Agency: The capacity to make meaningful choices. In the context of sex work, refers to the degree of autonomy vs. coercion.
Epistemic authority: Justified claim to knowledge based on position or experience. Sellers have epistemic authority on safety because they bear the risk.
I_b (Influence weight): Mathematical measure of buyer's impact on seller reputation, scaled by buyer's own trustworthiness.
Mission creep: Gradual expansion of system scope beyond original purpose, often creating risks or conflicts.
Pseudonymity: Use of a consistent but non-real identity. Differs from anonymity (no persistent identity) and from real-name systems.
Quantile aggregation: Statistical method that looks at specific percentiles (e.g., 25th percentile) rather than averages, making it harder to "wash out" negative signals with volume.
Quiet exclusion: Loss of access without public announcement or shaming. Preferred enforcement mechanism.
R_b (Reliability score): Aggregate measure of whether buyer honors agreements (timing, payment, scope).
S_b (Safety score): Aggregate measure of whether buyer respects boundaries and does not threaten/harm.
T_b (Trust coefficient): Combined measure of safety and reliability (S_b × R_b).
Verification code: Six-digit single-use code generated by seller and provided to buyer after interaction, required for buyer to submit rating.
Weighted reputation: System where different raters' inputs have different levels of influence based on their own trustworthiness.
Appendix B: Mathematical Formulas (Summary)
Safety score:
S_b = quantile₀.₂₅({s₁, s₂, ..., sₙ})
Reliability score:
R_b = quantile₀.₂₅({r₁, r₂, ..., rₙ})
Trust coefficient:
T_b = S_b × R_b
Influence weight:
I_b = (T_b)^k where k = 3 (recommended)
Seller reputation:
Q_s = Σ(I_b × x_b,s) / Σ(I_b)
Hard exclusion threshold:
If I_b < ε (where ε = 0.05), exclude rating entirely
Confidence interval:
CI = p ± z × √(p(1-p)/n) where z = 1.96 for 95% confidence
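These formulas can be checked end-to-end with a short numeric sketch. The quantile interpolation method and the example numbers are illustrative assumptions, not prescribed values.

```python
import math

EPSILON = 0.05   # hard exclusion threshold from above

def quantile(xs, q):
    # Linear-interpolation quantile (illustrative assumption about the variant).
    xs = sorted(xs)
    idx = q * (len(xs) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] * (1 - (idx - lo)) + xs[hi] * (idx - lo)

def influence(s_ratings, r_ratings, k=3):
    # S_b and R_b are 25th-percentile aggregates; T_b = S_b * R_b; I_b = T_b^k.
    s_b = quantile(s_ratings, 0.25)
    r_b = quantile(r_ratings, 0.25)
    return (s_b * r_b) ** k

def seller_reputation(pairs):
    # pairs: (I_b, x_b,s); ratings below the hard threshold are dropped entirely.
    kept = [(i, x) for i, x in pairs if i >= EPSILON]
    total = sum(i for i, _ in kept)
    return sum(i * x for i, x in kept) / total if kept else None

def confidence_interval(p, n, z=1.96):
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)
```

One caveat the last formula makes visible: for p near 1 with small n, the normal-approximation interval can exceed 1.0, a known limitation of this estimator at extreme proportions.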
Appendix C: Change Log
Version 1.0 (Current document)
- Initial comprehensive specification
- Charter principles established
- Technical architecture defined
- Governance structure outlined
- Verification code system integrated
Future versions will be logged here with dates and descriptions of changes.
Appendix D: References and Further Reading
Harm reduction in sex work:
- Global Network of Sex Work Projects (NSWP) resources
- Sex Workers Outreach Project (SWOP) guidelines
- Red Umbrella Fund documentation
Reputation systems:
- Resnick & Zeckhauser, "Trust Among Strangers in Internet Transactions" (2002)
- Jøsang et al., "A Survey of Trust and Reputation Systems for Online Service Provision" (2007)
Privacy-preserving systems:
- Tor Project documentation
- Signal Protocol specifications
- Privacy by Design framework (Cavoukian)
Platform cooperativism:
- Trebor Scholz, "Platform Cooperativism" (2016)
- Platform Coop Consortium resources