Help & Documentation

Venture Intelligence Engine v2.1

Develeap's internal platform for discovering, evaluating, and iteratively refining new business ventures. It continuously harvests signals from the tech ecosystem, generates venture ideas using AI, and runs the Ralph Loop — an iterative refinement process that scores, reviews, and improves each venture until it reaches a composite score of 95 or higher.

How It Works

  • Harvest — Automated pipelines pull signals from Hacker News, GitHub, Product Hunt, ArXiv, tech blogs, and RSS feeds. You can also manually post URLs to the News feed.
  • Generate — Claude analyzes signals and brainstorms venture ideas. Each idea is enriched with title, problem statement, solution, target buyer, and domain.
  • Ralph Loop — Each venture is iteratively refined: score → review (OH + Eng + Design + TL) → identify weak dimensions → improve → re-score. This repeats until the composite reaches 95+ or the iteration cap is hit.
  • Rank & Act — Ventures are sorted by composite score (0–100). Team members vote, comment, and annotate. High-scoring ventures move to active development.

Key Concepts

  • Dark Factory Model — Ventures must be buildable and operable by 1–2 engineers using heavy automation, AI code generation, and managed services.
  • 8-Dimension Scoring — Four AI-scored dimensions (Monetization, Cashout Ease, Dark Factory Fit, Tech Readiness) + four review-based dimensions (TL Score, Office Hours, Eng Review, Design Review). Combined into a weighted composite (0–100).
  • Ralph Loop — Iterative venture refinement targeting weak scoring dimensions until the composite score exceeds 95. Named after the principle of relentless iteration.
  • Tech Gaps — When a venture scores below 8/10 on tech readiness, the missing technology is tracked with estimated availability and readiness signals.
  • Thought Leaders — 100+ real and simulated industry voices (Kelsey Hightower, DHH, Paul Graham, etc.) that react to each venture, providing a crowd-wisdom signal.
  • AI Agents — Four autonomous agents continuously improve the platform: CodeHawk (scans code for real bugs), PixelEye (Playwright + Vision for UI/UX bugs), Maya Levi (AI PO with Claude-scored sprint planning), and AutoFix (generates and applies real code fixes). See the AI Agents help section for details.

Dashboard Tabs

The dashboard organizes content into seven horizontally scrollable tabs. Each tab represents a different venture strategy or content type.

News
A curated feed of signals from Hacker News, tech blogs, TL posts, conferences, and research papers. Each news item links to the ventures it inspired. Includes a post bar at the top where you can paste any URL to add it to the feed — the engine will scrape metadata, create a news item, and automatically generate a venture from it via the Ralph Loop. Signal strength (0–10) indicates relevance.
Ideas
AI-generated venture ideas including B2B SaaS products, developer tools, and infrastructure solutions. Includes both signal-based ideas (generated from harvested news) and brainstormed ideas (generated directly by Claude). Each is scored across all 8 dimensions and refined by the Ralph Loop. Formerly named "Ventures"; this tab also absorbed the merged Opportunities category.
Clone
Early-stage startups (pre-Series A, stealth, or recently launched) that Develeap could replicate and outcompete using the dark factory model. Shows the target company, their value prop, clone time estimate, Achilles heel, and how our clone exploits their weakness.
Quick Flip
Build-to-sell ventures designed for rapid acquisition. Small tools or plugins that solve a pain point for a larger platform company. Shows potential acquirers with estimated prices, competitor pricing analysis with our undercut price, and margin analysis.
Customers
Acqui-hire and consulting opportunities. Companies that could become Develeap customers or acquisition targets. Tracks target acquirer, target product, estimated acquisition price, and strategic fit.
Missing Piece
Plugins, extensions, and companion tools for leading ISV platforms (Terraform, Kubernetes, Datadog, Grafana, Jenkins, etc.). Each identifies the target ISV, the specific user pain point, and the integration approach (plugin, API, sidecar, etc.). These ride the ISV's ecosystem without requiring users to switch tools.
Training
Course and workshop ideas for Develeap's training business. Shows course length, admission price, job listings count, expected salary range, and required skills. Generated weekly from market gaps in DevOps/cloud-native education.

News Feed & Post Bar

The News tab is the starting point of the venture pipeline. It shows all signals that inspired venture ideas.

Post Bar

At the top of the News tab, you'll find a post bar (similar to Facebook/LinkedIn). Paste any URL to add it to the news feed:

  • URL (required) — Must be a valid http/https URL. The engine validates the URL before accepting it.
  • Comment (optional) — Add context about why this article is relevant.
  • What happens — The engine scrapes metadata from the URL using Claude (title, summary, source type, tags, signal strength), creates a news feed item, then triggers the full Ralph Loop to generate and refine a venture idea from it.

News Items

  • Click the title to open the original article in a new tab.
  • The domain hostname is shown as a link badge next to the timestamp.
  • Signal strength (colored badge, 0–10) indicates relevance to Develeap's domain.
  • Inspired ventures are shown as linked chips at the bottom of each item, with their scores.
  • Tags help categorize the content (AI agents, security, DevOps, etc.).

Sources

News items come from: Hacker News, Twitter/X, tech blogs, ArXiv, GitHub, conferences, podcasts, newsletters, disaster postmortems, and manual URL posts.

Ralph Loop

The Ralph Loop is the engine's iterative venture refinement process. It takes a rough idea and iteratively improves it until the composite score reaches 95+ (or a configurable target).

How It Works

  1. Suggest — Claude enriches a rough idea into a complete venture with title, slogan, summary, problem, solution, target buyer, and domain.
  2. Create — The venture is persisted to the database with a unique DiceBear robot avatar.
  3. Review — The full review suite runs: YC Office Hours (6 forcing questions), Engineering Review (6 dimensions), Design Review (6 dimensions), and TL Simulations (up to 5 thought leaders react).
  4. Score — Claude scores 4 AI dimensions; combined with 4 review-based dimensions into a composite (0–100).
  5. If below target — Claude analyzes weak dimensions and rewrites the venture fields to specifically address them (e.g., if monetization is low, it sharpens the pricing model and revenue path).
  6. Re-review & re-score — Steps 3–5 repeat with the improved venture description.
  7. Done — The loop ends when the score reaches the target (default 95) or the maximum number of iterations (default 10) is reached.
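
A minimal sketch of this control flow in Python. Here score_venture and improve_weak_dimensions are stand-in stubs, not the engine's actual API; only the target/iteration-cap logic mirrors the steps above.

import random

TARGET = 95          # default composite target
MAX_ITERATIONS = 10  # default iteration cap

def score_venture(venture):
    # Stub: the real engine has Claude score 4 AI dimensions and combines
    # them with 4 review-based dimensions into a 0-100 composite.
    return venture["composite"]

def improve_weak_dimensions(venture):
    # Stub: the real engine rewrites venture fields to address the weakest
    # dimensions; here we just nudge the composite upward.
    venture["composite"] = min(100, venture["composite"] + random.randint(4, 12))
    return venture

def ralph_loop(venture):
    for iteration in range(1, MAX_ITERATIONS + 1):
        composite = score_venture(venture)          # steps 3-4: review + score
        if composite >= TARGET:
            return venture, iteration               # step 7: target reached
        venture = improve_weak_dimensions(venture)  # step 5: targeted rewrite
    return venture, MAX_ITERATIONS                  # step 7: iteration cap hit

venture, n = ralph_loop({"title": "Example venture", "composite": 55})
print(f"Finished after {n} iteration(s) at {venture['composite']}")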

Triggering the Ralph Loop

  • News Post — Posting a URL on the News tab automatically triggers a Ralph Loop to generate a venture from it.
  • API — POST /api/ventures/ralph-loop with {"idea": "...", "category": "venture"}.
  • Harvest Pipeline — The automated harvester generates and scores ventures from signals (though it uses a simpler score-once pipeline rather than the full iterative loop).
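
For example, a Ralph Loop can be kicked off programmatically like this. The endpoint and payload shape are as documented above; the base URL and the sample idea text are assumptions.

import requests

BASE_URL = "http://localhost:8000"  # assumption: wherever the engine is hosted

resp = requests.post(
    f"{BASE_URL}/api/ventures/ralph-loop",
    json={"idea": "Drift detector for Terraform state", "category": "venture"},
)
resp.raise_for_status()
print(resp.json())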

Score History

Every scoring pass creates a VentureScore record, providing a full audit trail of how the venture improved through each iteration. You can see the score history in the venture detail panel's radar chart.

Harvesting Process

The harvester is an automated pipeline that runs on a configurable schedule (default: every 4 hours) to discover new venture opportunities from multiple sources across the tech ecosystem.

Signal Sources

  • 📰 Hacker News — Front page stories via the Algolia API, filtered by domain keywords. Signal strength scaled by points.
  • 🚀 Product Hunt — New product launches in the developer tools category. Signal strength scaled by upvotes.
  • 💻 GitHub Trending — Daily trending repositories matching domain keywords. Signal strength scaled by stars today.
  • 📚 ArXiv Papers — CS/SE papers with infrastructure keywords (cs.DC, cs.SE).
  • 📡 The New Stack — Infrastructure and startup news via RSS feed.
  • 📝 Company Blogs — RSS feeds from Netflix, Slack, Spotify, Cloudflare, AWS, LinkedIn, Airbnb, GCP.

Domain Keywords (38)

kubernetes, devops, devsecops, mlops, dataops, sre, platform engineering, observability, gitops, argo, helm, terraform, pulumi, opentelemetry, chaos engineering, finops, policy-as-code, ai ops, llmops, ai engineering, vector db, feature store, model serving, ray, kubeflow, docker, container, cicd, pipeline, infrastructure as code, cloud native, service mesh, istio, envoy, prometheus, grafana, backstage, internal developer platform

Keywords are editable in Settings → Domains. Signals must match at least one keyword to be ingested.
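
A minimal sketch of that filter, assuming a simple case-insensitive substring match (the engine's actual matching may differ); DOMAIN_KEYWORDS below is a small excerpt of the 38 configured terms.

DOMAIN_KEYWORDS = ["kubernetes", "devops", "observability", "gitops", "terraform"]

def matches_domain(signal_text: str) -> bool:
    # A signal is ingested only if it matches at least one keyword.
    text = signal_text.lower()
    return any(keyword in text for keyword in DOMAIN_KEYWORDS)

assert matches_domain("Show HN: GitOps for edge clusters")
assert not matches_domain("A new recipe-sharing social app")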

Pipeline Steps

  1. Signal Collection — All 6 sources are fetched concurrently, filtered by domain keywords.
  2. Deduplication — Signals are deduplicated by URL, both within-run and cross-run against the existing database (see the sketch after this list).
  3. Clustering — Claude groups signals into thematic clusters identifying market opportunities.
  4. Venture Generation — Claude generates up to 3 venture ideas per cluster, plus a separate "OR path" brainstorm of 5 additional ideas.
  5. Dedup Check — Each new idea is checked against existing ventures via Claude to avoid duplicates.
  6. Scoring — All new ventures are scored across 8 dimensions.
  7. TL Simulation — Thought leaders react to all new unreviewed ventures.
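
Step 2 can be pictured as a set-membership check on normalized URLs. A minimal sketch, where existing_urls stands in for the database lookup and the trailing-slash normalization is an assumption:

def dedupe_signals(signals: list[dict], existing_urls: set[str]) -> list[dict]:
    # Drop signals whose URL was already seen in this run or in the DB.
    seen = {url.rstrip("/") for url in existing_urls}
    unique = []
    for signal in signals:
        url = signal["url"].rstrip("/")
        if url not in seen:
            seen.add(url)
            unique.append(signal)
    return unique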

Scheduled Jobs

  • Harvest + Score — Every N hours (default: 4). Full signal collection → generation → scoring → TL simulation pipeline.
  • Tech Gap Check — Daily at configured hour (default: 08:00 UTC). Re-evaluates open tech gaps.
  • TL Signal Sync — Every N hours (default: 12). Refreshes real thought-leader signals.
  • Training Brainstorm — Weekly (default: Sunday 10:00 UTC). Generates new training course ideas.
  • Weekly Digest — Weekly (default: Monday 09:00 UTC). Top 10 ventures summary with notification.
  • Activity Simulation — Every 30 min. Simulated team activity: comments, reactions, bug transitions.
  • CodeHawk Bug Hunter — Every 4 hours. AI scans source code for real bugs, security, and performance issues.
  • PixelEye UI Inspector — Every 6 hours. Playwright + Claude Vision screenshots and analyzes UI/UX on mobile & desktop.
  • Maya Levi PO Agent — Every hour. Claude-scored sprint planning with value/effort compound scoring.
  • AutoFix Bug Fixer — Every 2 hours. Generates and applies real code fixes for sprint bugs.
  • Auto-Release — Every 6 hours. Packages next_version bugs into versioned releases.

Manual Triggers

"Trigger Harvest" in the sidebar runs the full pipeline immediately. You can also post individual URLs on the News tab to generate ventures from specific articles.

Scoring System

Every venture is evaluated across 8 dimensions — 4 scored by Claude AI, and 4 sourced from review panels and thought leaders — combined into a weighted composite score on a 0–100 scale.

AI-Scored Dimensions (0–10 each)

Monetization (15%)
Revenue potential, market size, willingness to pay, and pricing power. How big is the addressable market and how strong is the business model?
Cashout Ease (15%)
Speed to first revenue. Low barrier to first sale, short sales cycles, self-serve potential, and land-and-expand opportunity.
Dark Factory Fit (15%)
Can 1–2 engineers build and run this with AI-assisted development, managed infra, and heavy automation? Penalizes large teams or complex ops.
Tech Readiness (10%)
Is the technology available today? 8+ means fully buildable. Below 8 triggers a Tech Gap record tracking what's missing and when it might be ready.

Review-Based Dimensions (0–10 each)

TL Score (10%)
Thought leader consensus. Weighted average of upvote/neutral/downvote signals. Real signals carry 2x weight vs simulated (configurable).
Office Hours (15%)
YC-style investability score from 6 forcing questions. Verdicts: FUND, PROMISING, NEEDS_WORK, PASS.
Eng Review (10%)
Engineering feasibility: architecture, build cost, tech debt risk, integration burden, scalability, AI leverage.
Design Review (5%)
UX review: user flow, first-run experience, competitive design, self-serve potential, visual differentiation.

Composite Formula

composite = (
  monetization × 0.15
  + cashout_ease × 0.15
  + dark_factory_fit × 0.15
  + tech_readiness × 0.10
  + tl_score × 0.10
  + oh_score × 0.15
  + eng_score × 0.10
  + design_score × 0.05
) × 10

All weights are adjustable in Settings → Scoring and should sum to 1.0. The Ralph Loop targets a composite of 95+.
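
A worked example using the default weights above, with hypothetical first-pass dimension scores:

WEIGHTS = {
    "monetization": 0.15, "cashout_ease": 0.15, "dark_factory_fit": 0.15,
    "tech_readiness": 0.10, "tl_score": 0.10, "oh_score": 0.15,
    "eng_score": 0.10, "design_score": 0.05,
}

def composite(scores):
    # Each dimension is 0-10; the weighted sum is scaled to 0-100.
    return sum(scores[dim] * w for dim, w in WEIGHTS.items()) * 10

first_pass = {"monetization": 7, "cashout_ease": 6, "dark_factory_fit": 8,
              "tech_readiness": 9, "tl_score": 6, "oh_score": 5,
              "eng_score": 7, "design_score": 6}
print(composite(first_pass))  # 64.0 -- a typical first-pass result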

Score Ranges

  • 95+ — Ralph Loop target
  • 70+ — High potential
  • 40–69 — Worth watching
  • <40 — Low priority

Most ventures score 40–70 on their first pass. The Ralph Loop iteratively improves ventures by targeting weak dimensions until they reach 95+. Ventures that reach 95+ after Ralph Loop refinement are considered launch-ready.

Tech Gap Tracking

When tech readiness < 8/10, the system records the gap description, missing technology, estimated availability, and readiness signal. Tech gaps are re-checked daily and auto-resolved when signals appear.

Review System

The engine runs three review panels (YC Office Hours, Engineering Review, Design Review) whose scores feed into the composite, plus a CEO Review that provides advisory feedback outside the composite. These run automatically during the Ralph Loop and can also be triggered manually from the venture detail panel.

YC Office Hours

Inspired by Garry Tan's 6 forcing questions diagnostic. Claude acts as a YC partner and rigorously evaluates:

  1. Demand Reality — Strongest evidence someone actually wants this?
  2. Status Quo — What are users doing right now to solve this?
  3. Desperate Specificity — Who needs this most? Name the role.
  4. Narrowest Wedge — Smallest version someone would pay for this week?
  5. Observation — What would surprise you watching someone use this?
  6. Future-Fit — In 3 years, more or less essential?

Produces a verdict (FUND / PROMISING / NEEDS_WORK / PASS), a YC score (0–10), killer insight, biggest risk, and recommended next action.

Engineering Review

Senior engineering manager perspective evaluating 6 dimensions:

  • Architecture Complexity, Build Cost (engineer-weeks), Tech Debt Risk, Integration Burden, Scalability Readiness, AI Leverage

Verdicts: SHIP_IT, REFACTOR_FIRST, PROTOTYPE_ONLY, NO_BUILD. Includes tech stack recommendation and biggest technical risk.

Design Review

Senior product designer perspective evaluating 6 dimensions:

  • User Flow Clarity, First-Run Experience, Competitive Design, Self-Serve Potential, Visual Differentiation, Accessibility

Verdicts: SHIP_READY, NEEDS_POLISH, REDESIGN, UX_BLOCKER. Includes ideal 3-step user flow and UX killer feature.

CEO Review

Founder/CEO product review evaluating problem clarity, user obsession, market timing, moat potential, revenue path, and team fit. Includes the "10-star version" of the product and a pivot suggestion.

Thought Leaders

100+ curated industry voices provide a crowd-wisdom layer on top of AI scoring. Each thought leader has a unique persona, known opinions, and investment thesis that Claude uses to generate authentic reactions.

How It Works

  • Simulated Reactions — Claude adopts each TL's persona (based on their public writing, talks, and known opinions) and generates a vote with reasoning and "what they would say."
  • Real Signals — When a TL actually tweets, blogs, or speaks about a related topic, that signal is captured with source links and weighted 2x (configurable).
  • Vote Types — Upvote (1.0), neutral (0.5), downvote (0.0) with confidence score and detailed reasoning.
  • Social Links — Click any TL card to see links to their X/Twitter, LinkedIn, GitHub, YouTube, and website.
  • Source Citations — Each TL signal can include source links (articles, posts, YouTube videos, mentions) that support their reaction.

YC Compatibility Badge

Ventures upvoted by YC-affiliated TLs (Paul Graham, Garry Tan, Michael Seibel, Dalton Caldwell, Jared Friedman) get a "YC Compatible" badge, indicating alignment with YC's investment thesis.

Signal Weights

Real reactions carry 2x the weight of simulated ones by default. Both multipliers are configurable in Settings → Scoring. The TL Score dimension contributes 10% to the composite.
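
A minimal sketch of the aggregation, assuming the TL Score is the weighted mean of vote values (upvote 1.0, neutral 0.5, downvote 0.0) scaled to 0–10, with real signals at the documented default 2x weight; the engine's exact aggregation may differ.

VOTE_VALUE = {"upvote": 1.0, "neutral": 0.5, "downvote": 0.0}
REAL_WEIGHT, SIMULATED_WEIGHT = 2.0, 1.0  # documented defaults

def tl_score(signals):
    # Weighted mean of vote values, scaled to the 0-10 dimension range.
    total = weighted = 0.0
    for s in signals:
        w = REAL_WEIGHT if s["real"] else SIMULATED_WEIGHT
        weighted += VOTE_VALUE[s["vote"]] * w
        total += w
    return 10 * weighted / total if total else 0.0

signals = [
    {"vote": "upvote", "real": True},    # real signal: counts double
    {"vote": "upvote", "real": False},
    {"vote": "neutral", "real": False},
]
print(tl_score(signals))  # 8.75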

Suggest a Venture

Anyone can propose a new venture idea using the "+ Suggest" button in the sidebar. The feature adapts to whichever tab you're viewing (Ideas, Clone, Quick Flip, Customers, Missing Piece).

Two Ways to Suggest

  • Quick Suggest — Type a rough one-line idea and Claude enriches it into a complete venture with all fields populated. Preview before saving.
  • Manual Submit — Fill in all fields yourself (Title, Summary, Problem, Proposed Solution, Target Buyer, Domain) for full control.

What Happens After Submission

  • The venture is created with status "backlog" and a unique DiceBear robot avatar.
  • It is immediately scored by Claude across all 4 AI dimensions.
  • TL reactions are simulated by up to 5 thought leaders.
  • The composite score is calculated and the venture appears in its tab.
  • For ventures posted via the News tab URL bar, the full Ralph Loop runs automatically, iteratively improving the venture to 95+.

Voting & Annotations

Team members can upvote/downvote any venture and leave comments. You can also highlight text in the venture description to create inline annotations (similar to Google Docs comments). Anonymous identities are assigned automatically.

AI Agents

Four autonomous AI agents continuously analyze the codebase, find real bugs, plan sprints, and generate fixes. Unlike the simulated bug pipeline, these agents work on the actual source code of this application.

Agent Overview

CodeHawk AI — Bug Hunter
Scans real source files (Python backend, HTML frontend, config) using Claude to find actual bugs, security vulnerabilities, performance issues, and code quality problems. Creates Bug entries with real file paths, line numbers, and suggested fixes. Runs every 4 hours, scanning 3 random files per run. Bugs are tagged real + ai-found.
PixelEye AI — UI Inspector
Launches a headless Chromium browser via Playwright, navigates to app views in both mobile (390×844) and desktop (1440×900) viewports, takes screenshots, and sends them to Claude Vision for UI/UX analysis. Finds layout bugs, broken alignment, unreadable text, tap targets too small, overflow issues, and accessibility problems. Runs every 6 hours, inspecting 4 random routes. Bugs are tagged real + ui-ux.
Maya Levi — AI Product Owner
Intelligent sprint planning powered by Claude. Sends the top 15 open tickets to Claude for evaluation, which assigns business_value (1–10), story_points (Fibonacci), and sprint_priority (1–100) with written reasoning for each. Real bugs (tagged real) get a 2.5× priority boost. Selects top tickets within velocity cap (40 SP) and sprint capacity (10 tickets). Posts sprint summary to Slack #general. Runs every hour.
AutoFix AI — Bug Fixer (Red/Green TDD)
Follows strict Red/Green TDD for every fix: (1) RED — generates a pytest that reproduces the bug and confirms it fails. (2) GREEN — generates a minimal code fix and confirms the test passes. If the test doesn't pass after the fix, the change is reverted. Each phase is recorded as a bug comment with the test code, failure output, diff, and pass confirmation. Runs every 2 hours.
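
A conceptual sketch of AutoFix's Red/Green gate. The test code and the apply/revert callables stand in for Claude-generated artifacts; only the pass/fail gating mirrors the documented behavior.

import subprocess
import tempfile

def run_tests(test_file: str) -> bool:
    return subprocess.run(["pytest", test_file, "-q"]).returncode == 0

def red_green(test_code: str, apply_fix, revert_fix) -> str:
    # Write the generated repro test to a temp file.
    with tempfile.NamedTemporaryFile("w", suffix="_test.py", delete=False) as f:
        f.write(test_code)
        test_file = f.name
    if run_tests(test_file):
        return "invalid repro"   # RED phase: the test must fail first
    apply_fix()                  # GREEN phase: apply the generated patch
    if run_tests(test_file):
        return "fixed"           # test now passes: keep the fix
    revert_fix()                 # gate failed: revert the change
    return "reverted"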

The Real Bug Pipeline

  1. Discovery — CodeHawk scans source code; PixelEye screenshots the UI. Both create real Bug entries tagged real.
  2. Scoring — Maya (PO) evaluates all open tickets with Claude, assigning compound value/effort scores. Real bugs are boosted 2.5× over simulated ones.
  3. Sprint Planning — Maya picks the highest-scored tickets into the sprint, respecting velocity (40 SP max) and capacity (10 tickets max). Posts reasoning to Slack.
  4. Auto-Fix (TDD) — AutoFix picks up real sprint bugs and runs a Red/Green cycle: writes a failing test (RED), confirms it fails, generates a fix (GREEN), confirms the test passes. If the fix doesn't pass, it's reverted. Test + diff + output recorded as comments.
  5. Review & Release — Fixed bugs move to review → done → next_version → closed via the standard release pipeline (auto-release every 6 hours).

Sprint Scoring Formula

sprint_priority = Claude's assessment (1–100)
fallback = (business_value / story_points) × priority_bonus × real_boost

priority_bonus: critical=3.0, high=2.0, medium=1.0, low=0.5
real_boost: 2.5× for bugs tagged "real", 1.0× for simulated
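
A direct sketch of the fallback path:

PRIORITY_BONUS = {"critical": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5}
REAL_BOOST = 2.5  # bugs tagged "real"; simulated bugs get 1.0

def fallback_priority(bug):
    boost = REAL_BOOST if "real" in bug["tags"] else 1.0
    return (bug["business_value"] / bug["story_points"]) \
        * PRIORITY_BONUS[bug["priority"]] * boost

bug = {"business_value": 8, "story_points": 3, "priority": "high", "tags": ["real"]}
print(round(fallback_priority(bug), 2))  # 13.33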

Real vs. Simulated Bugs

  • Real bugs — Found by CodeHawk or PixelEye. Tagged with real and ai-found. Reference actual file paths and line numbers. Fixes are applied to the live codebase.
  • Simulated bugs — Generated by the activity simulator for demo purposes. Have procedurally generated titles and descriptions. Follow the same pipeline but don't affect real code.
  • Both types flow through the same board, sprint planning, and release pipeline. The PO agent prioritizes real bugs over simulated ones.

Manual Triggers

  • POST /api/bugs/trigger-hunt — Run CodeHawk now (scan 3 files)
  • POST /api/bugs/trigger-ui-inspect — Run PixelEye now (screenshot 4 views)
  • POST /api/bugs/trigger-sprint — Run Maya's sprint planning now
  • POST /api/bugs/trigger-fix — Run AutoFix now (fix sprint bugs)
  • POST /api/bugs/{id}/auto-fix — Fix a specific bug

Scanned Files

CodeHawk scans 20+ key files including: routes.py, models.py, main.py, config.py, scheduler.py, activity_simulator.py, harvester sources, venture scorer/generator/ideator, thought leader simulator, discussion engine, notifications, settings service, Slack simulator, and the main dashboard template.

Inspected UI Views

PixelEye inspects 14 routes, including News Feed, Ventures/Ideas, Bug Board, Slack, Knowledge Graph, Leaderboard, Activity Monitor, Release Notes, Sim Users, Settings, and Investment Committee — each in mobile and desktop viewports.

3-Agent PM Team

An always-on product-management cell that owns the app's own backlog. Three AI personas debate every feature, score it on a seven-dimension rubric, run a Karpathy-style auto-research loop until the idea actually improves, then rank the backlog by value × ease for a human to approve. Approved features go through a sprint executor that can auto-deploy and roll back if smoke tests fail.

The Three Personas

  • Marty Cagan (purple) — Inspired, Empowered, Transformed. Pushes "outcomes not output" and the four product risks: value, viability, usability, feasibility. Owns the Value & Outcomes and Risk Coverage dimensions.
  • Teresa Torres (blue) — Continuous Discovery Habits. Pushes weekly user touches, opportunity solution trees, and assumption testing. Owns the Discovery Evidence and Opportunity Framing dimensions.
  • Shreyas Doshi (amber) — LNO framework, anti-goals, decision logs. Pushes leverage-vs-overhead clarity and crisp metrics. Owns the LNO Leverage, Decision Quality, and Metric Crispness dimensions.

The 7-Dimension Rubric

Every feature is scored 0–10 on seven dims, each owned by a persona:

  • Value & Outcomes (Cagan) — how clearly the feature ties to a real user outcome.
  • Risk Coverage (Cagan) — whether the four product risks are addressed.
  • Discovery Evidence (Torres) — depth of user evidence behind the bet.
  • Opportunity Framing (Torres) — quality of the opportunity-solution-tree fit.
  • LNO Leverage (Doshi) — leverage ratio of the work involved.
  • Decision Quality (Doshi) — whether the tradeoff narrative and anti-goals are explicit.
  • Metric Crispness (Doshi) — is the success metric leading and crisp?

Scores are visualized as a radar chart on every feature card.

Karpathy 10-Cycle Auto-Research Loop

Each feature runs up to 10 research cycles. In each cycle the personas critique the proposal, the weakest dimension is identified, and a refinement is generated. The loop only counts a cycle as "improvement" when the explicit criterion holds:

  • Weakest dim score uplift ≥ 1.0 on the next pass, AND
  • No other dim regresses by more than 0.5.

If those conditions don't hold, the loop stops early; if they do, it continues until cycle 10 or until no dimension remains below threshold. Constants live in pm_engine.py as IMPROVEMENT_MIN_UPLIFT, IMPROVEMENT_MAX_OTHER_REGRESSION, and MAX_CYCLES.
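
The improvement check falls out directly from those constants; the dimension keys below are illustrative.

IMPROVEMENT_MIN_UPLIFT = 1.0
IMPROVEMENT_MAX_OTHER_REGRESSION = 0.5

def cycle_improved(before, after, weakest_dim):
    # The weakest dimension must gain at least 1.0 on the next pass...
    uplift_ok = after[weakest_dim] - before[weakest_dim] >= IMPROVEMENT_MIN_UPLIFT
    # ...and no other dimension may regress by more than 0.5.
    others_ok = all(
        before[dim] - after[dim] <= IMPROVEMENT_MAX_OTHER_REGRESSION
        for dim in before if dim != weakest_dim
    )
    return uplift_ok and others_ok

before = {"value_outcomes": 6.0, "discovery_evidence": 4.0, "metric_crispness": 7.0}
after = {"value_outcomes": 5.8, "discovery_evidence": 5.5, "metric_crispness": 7.0}
print(cycle_improved(before, after, "discovery_evidence"))  # True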

Daily Value × Ease Ranking

Once a feature finishes researching, it lands in the backlog with a value score and an ease score. The daily ranking job multiplies them into a composite rank and sorts the backlog in descending order, so the top of the list is the highest-leverage thing to ship next. Re-ranks happen automatically every day and on demand via the "Rank Now" button.
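
A minimal sketch of the ranking pass (feature titles and scores are hypothetical):

features = [
    {"title": "Inline TL citations", "value": 8, "ease": 6},
    {"title": "Bulk venture import", "value": 9, "ease": 3},
    {"title": "Radar chart export", "value": 5, "ease": 9},
]
for feature in features:
    feature["rank"] = feature["value"] * feature["ease"]  # composite rank

backlog = sorted(features, key=lambda f: f["rank"], reverse=True)
print([f["title"] for f in backlog])
# ['Inline TL citations', 'Radar chart export', 'Bulk venture import']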

Human-in-the-Loop Approval

Nothing ships without explicit human approval. Backlog items in backlog or ranked state surface an Approve for Dev button. Clicking it moves the feature to approved and queues it for the next sprint executor pass.

Sprint Executor: Auto-Deploy & Rollback

The sprint executor picks up approved features, generates the implementation plan, and, when enabled, deploys. Two environment variables gate this:

  • PM_AUTO_DEPLOY — if set to 1, the executor pushes the change. If unset or 0, the change stops at testing for human review.
  • PM_RUN_REAL_TESTS — if set to 1, the real test suite runs as the gate; otherwise simulated smoke checks run.

If smoke tests fail post-deploy, the executor flips the status to rolled_back and reverts. Lifecycle: researching → backlog → ranked → approved → sprint → in_dev → testing → deployed | rolled_back | rejected.
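
A sketch of the gating logic, assuming both flags are read straight from the environment; the deploy, rollback, and test helpers are stubs.

import os

def run_real_suite() -> bool:       # stub for the real test suite
    return True

def run_simulated_smoke() -> bool:  # stub for simulated smoke checks
    return True

def deploy(feature):                # stub: push the change
    pass

def rollback(feature):              # stub: revert the change
    pass

def executor_pass(feature: str) -> str:
    if os.environ.get("PM_AUTO_DEPLOY") != "1":
        return "testing"            # stops here for human review
    deploy(feature)
    if os.environ.get("PM_RUN_REAL_TESTS") == "1":
        passed = run_real_suite()
    else:
        passed = run_simulated_smoke()
    if not passed:
        rollback(feature)
        return "rolled_back"        # post-deploy smoke failure: revert
    return "deployed"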

The Five Sub-Tabs

  • Backlog — the ranked feature list with persona pills, status badges, and radar charts. Click a card for full research history and the approve button.
  • Sprints — current and historical sprints with the features each one carries.
  • Meetings — daily Zoom-style standup transcripts. Trigger a fresh standup with one click; each meeting captures speaker turns, decisions, and action items.
  • Calendar — simulated calendar events for the PM team (standups, IC reviews, customer interviews).
  • Inbox — simulated Gmail thread for the team. Star messages, open threads, see the persona who wrote each one.

Slack Channel

The team has a dedicated #pm-team Slack channel (in the simulated workspace). Research summaries, ranking updates, sprint state changes, and meeting recaps post here automatically.

API Endpoints

  • GET /api/pm/status — counts by lifecycle status.
  • GET /api/pm/features — ranked backlog.
  • POST /api/pm/features — create a new feature with {persona, context_hint}; auto-kicks off the research loop.
  • POST /api/pm/features/{id}/approve — human-in-the-loop approval.
  • POST /api/pm/run-rank — force a re-rank.
  • POST /api/pm/run-sprint — trigger the sprint executor.
  • POST /api/pm/run-daily — trigger the full daily PM review (rank + standup).
  • POST /api/pm/meetings/standup — trigger a fresh standup transcript.

Settings

The Settings panel (gear icon in sidebar) lets you customize all core parameters. Changes are persisted to the database and take effect immediately. The scheduler auto-reloads when relevant settings change.

Setting Categories

  • AI / Claude — Choose the Claude model (Sonnet/Opus/Haiku), set temperature for scoring (lower = deterministic) vs. ideation (higher = creative), and configure token limits.
  • Scoring — Adjust all 8 dimension weights (must sum to 1.0), configure real vs. simulated TL signal multipliers. Use "Normalize" to auto-balance.
  • Harvester — Set harvest frequency, gap check hour, TL sync interval, brainstorm count per run, and training harvest day.
  • Domains — Edit the 38 domain keywords for signal filtering, toggle active domain categories, manage company blog RSS feeds.
  • Notifications — High-score threshold, popular vote threshold, weekly digest day/hour/top-N count.
  • Display — Page size and default sort order (score, date, or votes).

Defaults & Resets

Every setting has a built-in default. The database only stores your overrides. Use "Reset Section" to restore defaults. Settings that differ from defaults are highlighted. The scheduler auto-reschedules when you change timing settings.
