Crewcial Partners Internal Training Series
Last Updated: February 27, 2026
Estimated Implementation Time: 3-4 weeks
Estimated Time Savings: 12-16 hours per review once deployed
An annual PowerPoint presentation that provides comprehensive analysis of a client’s complete private equity portfolio, typically 15-25 slides.
Time Required: 15-20 hours per client review
Workflow:
1. Analyst pulls performance data from multiple sources (2-3 hours)
2. Downloads benchmark data from Cambridge Associates (30 min)
3. Creates comparison tables in PowerPoint (1-2 hours)
4. Reads manager letters, quarterly updates, meeting notes for each fund (4-6 hours)
5. Writes 2-3 bullet commentary for each fund (3-4 hours)
6. Creates market overview slides (2-3 hours)
7. Formats, reviews, refines presentation (2-3 hours)
Pain Points:
- High time investment for what’s largely mechanical assembly
- Inconsistent quality when rushed (quarter-end crunch)
- Currently done ad hoc; many clients who should receive this don’t
- Knowledge locked in analyst’s head; difficult to delegate or scale
Time Required: 4-6 hours analyst time (primarily review and strategic commentary)
New Workflow:
1. LLM pulls structured data from defined sources (automated)
2. LLM populates presentation template following specifications (automated)
3. LLM drafts fund commentary based on manager materials (automated, requires validation)
4. Analyst reviews, refines strategic insights, validates accuracy (human judgment)
5. Final formatting polish and client customization (minimal)
Expected Gains:
- 70-75% time reduction
- Consistent output quality
- Scalable to all clients who should receive reviews
- Documented process = easier delegation and training
Private Asset Reviews serve three strategic functions:
Reality: Many clients who would benefit from annual private asset reviews don’t receive them because of production time constraints. LLM automation makes systematic delivery viable.
Creating this workflow forces documentation of:
- Where our data actually lives and how it’s structured
- What “good commentary” looks like in our house style
- Decision criteria for what’s “material” vs. noise
- Quality thresholds for client-ready deliverables
Side benefit: This documentation becomes training infrastructure for new analysts.
STOP: Before proceeding, document your current reality.
Answer these questions honestly (write them down):
By the end of this implementation, you will have:
✅ Data Infrastructure: Documented sources, standardized formats, automated pulls
✅ Context Documents: Templates, examples, style guide loaded into LLM context
✅ Specifications: Slide-by-slide requirements that the LLM can execute against
✅ Quality Framework: Explicit criteria for “ready for client delivery”
✅ Iteration Process: Documented feedback loop for continuous improvement
Time to Target: 3-4 weeks of setup for first review; incremental improvement thereafter.
This section explains how the four disciplines apply specifically to Private Asset Review automation.
Bad approach:
"Create a private equity portfolio review for Client XYZ."
What you’ll get: Generic template with placeholder text. Zero actual data. 3-4 hours of manual rework.
Why it fails:
- No access to your performance data
- No understanding of your format/style
- No knowledge of what’s “material” to highlight
- No benchmark context
Lesson: Prompt Craft alone cannot handle complex, multi-source, data-intensive deliverables. You need infrastructure.
What this means for Private Asset Reviews:
The LLM needs access to your entire informational environment:
Critical Context Engineering Task:
Create a Private Asset Review Context Document (.claw.md or equivalent) that the LLM loads at session start. This document contains:
# Private Asset Review Context
## Our Standard Approach
[Describe your methodology, philosophy, what clients care about]
## Data Sources & Formats
- Performance data: [Location, format, update frequency]
- Benchmark data: [Source, how to interpret quartiles]
- Manager communications: [Where stored, naming conventions]
## Output Standards
- Slide template: [File path or description]
- Font/formatting: [Specify to avoid inconsistencies]
- Commentary style: [Active voice, specific metrics, no hedging language]
## Quality Thresholds
- What makes commentary "material enough" to include
- When to escalate to analyst vs. proceed autonomously
- Accuracy validation requirements

🚨 GAP-FILLING REQUIRED: You cannot proceed to Phase 2 until you’ve documented:
1. Where your fund performance data lives and how it’s structured
2. Your benchmark data source and interpretation methodology
3. Location of manager communications (letters, reports, notes)
4. Standard template or style guide (even if informal)
What this means:
Every slide in the review needs a complete specification that includes:
- Data inputs (source, format, location)
- Processing logic (calculations, comparisons, filters)
- Layout requirements (where each element appears)
- Quality criteria (what makes this slide “done”)
Example Specification Hierarchy:
PRIVATE ASSET REVIEW SPECIFICATION
├── Market Overview (Slides 1-3)
│ ├── Slide 1: Market Performance Summary
│ ├── Slide 2: Fundraising & Deployment Trends
│ └── Slide 3: Vintage Year Analysis
├── Portfolio Overview (Slide 4)
│ └── Performance Table: All Funds vs Benchmarks
└── Individual Fund Analysis (Slides 5-N)
└── One slide per fund following standard template
We’ll build complete specifications in Phase 3.
The Critical Question: How does the LLM decide what’s “material” enough to highlight?
Example Decision Points:
When reading a manager’s quarterly letter, which developments merit inclusion in the 2-3 bullet commentary?
Without Intent Engineering (Generic AI Response):
- “The fund made several new investments this quarter.”
- “Portfolio companies showed mixed performance.”
- “The manager remains optimistic about the strategy.”
With Intent Engineering (Crewcial Standards):
- Only include developments that:
  - Materially impact NAV (>5% mark change in a portfolio company)
  - Represent strategic pivots (new sector focus, geographic expansion)
  - Signal operational concerns (management changes, covenant issues)
  - Explain performance vs benchmark (why outperforming/underperforming)
🚨 GAP-FILLING REQUIRED: Define YOUR materiality thresholds:
BEFORE starting any Private Asset Review automation, complete this 10-minute exercise away from your computer.
Grab pen and paper. Answer these questions:
What does “done” look like for THIS specific review?
Write 3-5 concrete completion criteria:
1. ________________________________
2. ________________________________
3. ________________________________
4. ________________________________
5. ________________________________
What are the hard parts for THIS review?
- [ ] Missing recent performance data for Fund(s): ________________
- [ ] No benchmark match for Fund(s): ________________
- [ ] Manager communication gaps (no recent letters)
- [ ] Client-specific formatting requirements
- [ ] Unclear strategic context for some funds
- [ ] Other: ________________________________
For THIS review, what matters most? (Rank your top 3)
- _____ Speed (need it done quickly)
- _____ Accuracy (performance numbers must be perfect)
- _____ Insight (strategic commentary quality)
- _____ Consistency (matches our standard format exactly)
Purpose of this exercise: Clarity on what you’re optimizing for BEFORE the LLM shapes your thinking.
Goal: Get all required data into standardized, LLM-accessible formats.
Time Required: 4-6 hours (one-time setup; incremental maintenance thereafter)
Performance Data Inventory
Create a spreadsheet documenting:
| Data Element | Source System | Location | Format | Update Frequency | Owner |
|---|---|---|---|---|---|
| Fund NAV | _______ | _______ | _______ | _______ | _______ |
| Capital Called | _______ | _______ | _______ | _______ | _______ |
| Distributions | _______ | _______ | _______ | _______ | _______ |
| IRR (since inception) | _______ | _______ | _______ | _______ | _______ |
| TVPI | _______ | _______ | _______ | _______ | _______ |
| DPI | _______ | _______ | _______ | _______ | _______ |
| Commitment Amount | _______ | _______ | _______ | _______ | _______ |
| Vintage Year | _______ | _______ | _______ | _______ | _______ |
🚨 BLOCKER: If you cannot fill out this table, you cannot proceed. This documentation is prerequisite infrastructure.
Best Practice Structure (Based on Industry Standards):

Create a master spreadsheet: Private_Performance_Data.xlsx

Required columns:
- Client_ID (unique identifier)
- Client_Name (full legal name)
- Fund_Name (official fund name)
- Fund_ID (unique identifier, if available)
- Asset_Class (Buyout/Venture/Growth/RE/Infra/Credit)
- Strategy (specific strategy within asset class)
- Vintage_Year (fund inception year)
- Total_Commitment (client’s commitment in $)
- Called_Capital (cumulative called in $)
- Distributions (cumulative distributed in $)
- NAV (current net asset value in $)
- IRR_Inception (since-inception IRR in %)
- TVPI (Total Value to Paid-In multiple)
- DPI (Distributions to Paid-In multiple)
- RVPI (Residual Value to Paid-In multiple)
- As_Of_Date (data date, typically quarter-end)
Template Structure:
Client_ID | Client_Name | Fund_Name | Vintage_Year | Asset_Class | Strategy | Commitment | Called | Distributed | NAV | IRR | TVPI | DPI | RVPI | As_Of_Date
🚨 ACTION REQUIRED:
1. Export your current performance data
2. Reformat to match this structure (or document your preferred structure if different)
3. Save as: [Location]/Private_Performance_Master.xlsx
4. Document location here: ________________________________
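Before generation runs, it is worth verifying the master file actually carries the columns the specifications depend on. A minimal sketch, assuming the file is exported to CSV and uses the short column names from the template row above (the function name and schema list are illustrative, not a vendor standard):

```python
import csv

# Columns from the template row above; adjust if your documented
# structure differs.
REQUIRED_COLUMNS = [
    "Client_ID", "Client_Name", "Fund_Name", "Vintage_Year", "Asset_Class",
    "Strategy", "Commitment", "Called", "Distributed", "NAV",
    "IRR", "TVPI", "DPI", "RVPI", "As_Of_Date",
]

def missing_columns(csv_path):
    """Return the required columns absent from the master file's header row."""
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    return [c for c in REQUIRED_COLUMNS if c not in header]
```

Running this at the top of the workflow turns a silent data gap into an explicit blocker, consistent with the 🚨 BLOCKER rule above.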
Source: Cambridge Associates (or your alternative benchmark provider)
Standard Approach:
- Download vintage-year-matched benchmark data quarterly
- Store in consistent format
- Map your funds to appropriate benchmark universes
Required Benchmark Metrics:
- Median IRR (vintage-matched)
- 1st Quartile IRR threshold
- 3rd Quartile IRR threshold
- Median TVPI
- Quartile TVPI ranges
🚨 ACTION REQUIRED: Create Benchmark_Data.xlsx. Suggested format:
Vintage_Year | Asset_Class | Strategy | Median_IRR | Q1_IRR | Q3_IRR | Median_TVPI | Q1_TVPI | Q3_TVPI | As_Of_Date
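The vintage-matching rule used later (exact Vintage_Year, then Strategy if more specific, else Asset_Class) can be sketched as a simple lookup. This is a hedged illustration assuming benchmark rows are loaded as dicts keyed by the suggested columns; the function name is illustrative:

```python
def match_benchmark(fund, benchmarks):
    """Vintage-matched lookup: try exact (Vintage_Year, Strategy) first,
    then the broader (Vintage_Year, Asset_Class).
    Returning None means 'no clean match -> flag for analyst review'."""
    by_strategy = {(b["Vintage_Year"], b["Strategy"]): b for b in benchmarks}
    by_class = {(b["Vintage_Year"], b["Asset_Class"]): b for b in benchmarks}
    return (by_strategy.get((fund["Vintage_Year"], fund["Strategy"]))
            or by_class.get((fund["Vintage_Year"], fund["Asset_Class"])))
```

Keeping the match logic in one place makes the later escalation rule ("if no exact match: flag for analyst review") mechanical rather than judgment-based.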
Goal: Centralized, organized access to all manager-provided materials.
Best Practice Structure:
/Private_Equity_Materials/
├── [Manager_Name]/
│ ├── Quarterly_Letters/
│ │ ├── 2024_Q4_Letter.pdf
│ │ ├── 2024_Q3_Letter.pdf
│ │ └── ...
│ ├── Annual_Reports/
│ │ ├── 2024_Annual_Report.pdf
│ │ └── ...
│ └── Meeting_Notes/
│ ├── 2025_02_15_Call_Notes.docx
│ └── ...
🚨 ACTION REQUIRED:
If storage is disorganized: Spend 2-3 hours creating standardized folder structure. This is prerequisite work.
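Once the folder structure above is in place, the "most recent quarterly letter" lookup the commentary step depends on becomes trivial. A minimal sketch assuming the YYYY_QN_Letter.pdf naming convention shown above, so lexicographic sort equals chronological order (the function name is illustrative):

```python
from pathlib import Path

def latest_quarterly_letter(manager_dir):
    """Return the filename of the most recent letter under
    Quarterly_Letters/, or None if the manager has no letters on file."""
    letters = sorted(Path(manager_dir).glob("Quarterly_Letters/*.pdf"))
    return letters[-1].name if letters else None
```

A None result here maps directly onto the "no manager materials from last 6 months" escalation trigger defined later.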
Before proceeding to Phase 2, verify:
If any checkbox is unchecked, that’s your blocker. Resolve before advancing.
Goal: Create the informational environment the LLM operates within.
Time Required: 3-4 hours (one-time setup)
File: Private_Asset_Review_Context.md
Template:
# Private Asset Review Context Document
*Version 1.0 | Last Updated: [DATE]*
## Purpose
This document provides all context an LLM needs to generate Private Asset Reviews
that meet Crewcial Partners quality standards.
## Our Approach to Private Asset Reviews
### Philosophy
[Describe your methodology. Example:]
"We provide clients with transparent, data-driven analysis of their private
portfolios, emphasizing performance context (vs vintage-matched benchmarks),
material developments affecting NAV, and strategic positioning for long-term value."
### Client Expectations
[What do clients care about most? Example:]
- Performance vs. benchmark (are we in top half? top quartile?)
- Strategic developments affecting fund direction
- Risk/concern flags (GP changes, covenant issues, sector concentration)
- Outlook and positioning
## Data Sources & Interpretation
### Performance Data
- **Location:** [File path]
- **Format:** [Describe structure]
- **Key Metrics:**
- IRR: Annualized return since inception
- TVPI: Total Value / Paid-In Capital (includes NAV + Distributions)
- DPI: Distributions / Paid-In Capital (realized returns only)
- RVPI: Residual Value / Paid-In Capital (unrealized NAV component)
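The three multiples defined above are simple ratios with the identity TVPI = DPI + RVPI, which doubles as a consistency check on source data. A minimal sketch (function and field names are illustrative):

```python
def value_multiples(called, distributed, nav):
    """Compute the multiples as defined above.
    By construction, TVPI = DPI + RVPI."""
    return {
        "DPI": distributed / called,           # realized returns only
        "RVPI": nav / called,                  # unrealized NAV component
        "TVPI": (distributed + nav) / called,  # total value
    }
```

If a vendor-supplied TVPI disagrees with DPI + RVPI from the same file, that is a data-quality flag worth escalating.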
### Benchmark Data (Cambridge Associates)
- **Location:** [File path]
- **Interpretation:**
- 1st Quartile: Top 25% of funds
- Median: 50th percentile
- 3rd Quartile: Bottom 25% threshold
- **Our Standard:** Client funds should target median or better for vintage/strategy
### Manager Communications
- **Location:** [Folder path]
- **Types:** Quarterly letters, annual reports, meeting notes
- **Priority:** Most recent materials take precedence
## Output Standards
### Presentation Template
- **File:** [Path to PowerPoint template]
- **Standard Slide Count:** 15-25 slides (varies by portfolio size)
- **Sections:**
1. Market Overview (2-3 slides)
2. Portfolio Performance Table (1 slide)
3. Individual Fund Analysis (1 slide per fund)
### Formatting Requirements
- **Font:** [Specify: e.g., Arial 11pt for body, 14pt for headers]
- **Colors:** [Specify brand colors if applicable]
- **Logo/Branding:** [Requirements]
- **Confidentiality Footer:** [Required text]
### Commentary Style Guide
**DO:**
- Use active voice
- Cite specific metrics (e.g., "Portfolio company ABC grew revenue 25% YoY")
- Connect performance to operational drivers
- Be concise (2-3 bullets per fund, 15-25 words each)
**DON'T:**
- Use hedging language ("amid challenging conditions")
- Make generic market observations
- Speculate beyond source materials
- Use jargon without explanation
**Example - GOOD:**
"Fund outperformed vintage median (IRR: 18.5% vs 14.2%) driven by strong exits
in two healthcare services companies, partially offset by markdowns in consumer
retail holdings."
**Example - BAD:**
"The fund performed well despite challenging market conditions, with the manager
executing their strategy effectively."
## Quality Thresholds
### Materiality Standards
Include commentary on developments that meet ANY of these criteria:
- Performance: >300bps variance from benchmark
- Portfolio impact: >5% NAV change in single portfolio company
- Strategic: GP changes, fund restructuring, covenant issues
- Sector: New allocation >15% of fund NAV
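The materiality standards above are exactly the kind of decision criteria worth encoding so the LLM (or a pre-filter) applies them consistently. A hedged sketch; the event field names are illustrative, not a house schema:

```python
def is_material(event):
    """Apply the materiality thresholds above to one extracted development.
    Any single criterion met -> include in commentary."""
    return any([
        abs(event.get("benchmark_variance_bps", 0)) > 300,   # performance
        abs(event.get("company_nav_change_pct", 0)) > 5,     # portfolio impact
        event.get("strategic_event", False),  # GP change, restructuring, covenant
        event.get("new_sector_pct_nav", 0) > 15,             # sector allocation
    ])
```

Usage is per-development: extract candidate events from a manager letter, then keep only those where `is_material` returns True.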
### Accuracy Requirements
- All performance numbers must match source data exactly
- Benchmark comparisons must use vintage-matched peer groups
- Fund names must match legal entity names (no abbreviations)
### Escalation Triggers (Flag for Analyst Review)
- Performance data missing or >90 days stale
- Benchmark match uncertain (no clear vintage/strategy peer group)
- Material concerns in manager letters (GP changes, legal issues, covenant breaches)
- Contradictions between performance data and manager commentary
## Quality Checklist (Before Client Delivery)
An output is "ready for client delivery" when ALL criteria are met:
- [ ] All performance data matches source systems
- [ ] Benchmark comparisons are vintage-matched and accurate
- [ ] Every fund has 2-3 bullets of commentary (unless data unavailable)
- [ ] Commentary cites specific operational/strategic drivers (not generic)
- [ ] Formatting matches template exactly
- [ ] No placeholder text remains
- [ ] Confidentiality footer present
- [ ] Analyst has reviewed and approved strategic insights

🚨 ACTION REQUIRED: Fill in all bracketed placeholders in the template above with YOUR actual information.
Save as: Private_Asset_Review_Context.md in a location your LLM can access.
Best practice: Provide 2-3 examples of past reviews that hit your quality bar.
How to use:
1. Export previous high-quality reviews to PDF
2. Store in /Examples/Private_Asset_Reviews/
3. Reference in your context document: “See examples folder for reference quality”
If you don’t have examples: That’s okay. You’ll create the first “gold standard” manually, then use it as reference going forward.
For each fund in client portfolio, maintain brief context notes:
Format: Fund_Context_Notes.xlsx
| Fund_Name | Asset_Class | Strategy | Key_Context |
|---|---|---|---|
| ABC Capital Fund III | Buyout | Lower middle market B2B software | Strong track record, consistent strategy, focus on vertical SaaS with recurring revenue >70% |
| XYZ Ventures Fund II | Venture | Early-stage enterprise AI | Newer manager, opportunistic approach, higher risk profile, concentrated portfolio (8-10 companies) |
Purpose: Provides strategic context the LLM can incorporate when drafting commentary.
🚨 OPTIONAL ACTION: Create this if you have 1-2 hours. Significantly improves output quality but not prerequisite.
Goal: Write complete, executable specifications for every slide type.
Time Required: 4-5 hours (one-time creation; reusable thereafter)
A good specification is:
- Complete: No missing information, no assumptions
- Structured: Consistent format, easy to parse
- Testable: Clear success criteria
- Bounded: Explicit scope (what’s in/out)
Slide 1: Private Equity Market Performance
## SLIDE SPECIFICATION: Market Performance Summary
### Slide Title
"Private Equity Market Performance | [Period]"
Example: "Private Equity Market Performance | 2024 Annual Review"
### Data Sources
1. Cambridge Associates quarterly benchmark data (most recent available)
2. Pitchbook fundraising statistics
3. [SPECIFY YOUR SOURCES]: ________________________________
### Content Requirements
**Section 1: Market-Level Performance (Top Half of Slide)**
- Chart: PE benchmark performance by strategy (last 5 years)
- Buyout (Median IRR)
- Venture (Median IRR)
- Growth Equity (Median IRR)
- Data source: Cambridge Associates benchmark file
- Format: Bar chart or line graph
**Section 2: Key Market Observations (Bottom Half)**
2-3 bullets covering:
- Overall market performance trend (improving/declining)
- Best/worst performing strategy and why
- Notable vintage year performance (if relevant)
**Quality Criteria:**
✓ All data is from most recent quarter available
✓ Source attribution present ("Source: Cambridge Associates Q4 2024")
✓ Observations cite specific metrics (not generic)
✗ No speculation beyond data provided
**Example Output:**
"Private equity market performance remained strong in 2024:
• Buyout funds posted median IRR of 14.2% (up from 12.8% in 2023)
• Venture capital saw bifurcation: early-stage median 8.5% vs late-stage 18.3%
• 2020 vintage funds (4-year seasoning) showing top quartile performance across strategies"
**Processing Instructions for LLM:**
1. Load Cambridge benchmark data for [PERIOD]
2. Calculate median IRR by strategy
3. Identify highest/lowest performing strategy
4. Draft 2-3 observations citing specific metrics
5. Create chart using data
6. Format per template

🚨 ACTION REQUIRED: Fill in your actual data sources and modify structure to match YOUR standard approach.
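Step 2 of the processing instructions (median IRR by strategy) can be sketched with the standard library alone. A hedged illustration assuming benchmark records are loaded as dicts with `Strategy` and `IRR` fields (names illustrative):

```python
from collections import defaultdict
from statistics import median

def median_irr_by_strategy(rows):
    """Group benchmark records by strategy and take the median IRR,
    feeding the market-performance chart and observations."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["Strategy"]].append(row["IRR"])
    return {s: median(irrs) for s, irrs in groups.items()}
```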
Slide 2: Fundraising & Deployment Trends
## SLIDE SPECIFICATION: Fundraising Trends
### Slide Title
"Private Equity Fundraising & Deployment | [Period]"
### Data Sources
1. Pitchbook fundraising statistics
2. [YOUR SOURCE]: ________________________________
### Content Requirements
**Chart 1: Annual Fundraising Volume**
- 5-year trend line (bar chart)
- Total capital raised per year
- Segmented by asset class if data available
**Chart 2: Deployment Pace**
- Dry powder vs deal volume trend
- Industry-wide metric
**Key Observations (2-3 bullets):**
- Fundraising trend direction
- Deployment pace assessment (accelerating/slowing)
- Implications for portfolio construction (if relevant)
**Quality Criteria:**
✓ Data from credible industry source (Pitchbook, Preqin, Cambridge)
✓ Timeframe specified clearly
✓ Charts are readable and properly labeled
**Processing Instructions:**
1. Access fundraising data for [TIMEFRAME]
2. Create 5-year trend visualization
3. Draft observations on key trends
4. Format per template

Slide 3: Vintage Year Analysis
## SLIDE SPECIFICATION: Vintage Year Performance
### Slide Title
"Private Equity Performance by Vintage Year"
### Data Sources
1. Cambridge Associates vintage year benchmarks
2. Client portfolio vintage distribution (for context)
### Content Requirements
**Chart: Vintage Year Performance Heatmap**
- Rows: Vintage years (last 10-15 years)
- Columns: Asset class (Buyout, Venture, Growth, etc.)
- Color coding:
- Green: 1st quartile performance
- Yellow: 2nd quartile
- Orange: 3rd quartile
- Red: 4th quartile
**Client Portfolio Context:**
- Overlay: Which vintage years our client has exposure to
- Helps them see where their portfolio sits in cycle
**Key Observations (2-3 bullets):**
- Best/worst vintage years and why
- Current vintage positioning (recent funds in strong/weak vintages?)
- Implications for portfolio construction
**Quality Criteria:**
✓ Vintage analysis covers at least 10 years
✓ Client portfolio vintage distribution accurately represented
✓ Color coding is consistent and clear
**Processing Instructions:**
1. Load Cambridge vintage year data
2. Create heatmap visualization
3. Overlay client vintage exposure from performance data
4. Draft vintage performance observations
5. Format per template

🚨 DECISION POINT: Do you include vintage year analysis in your market overview? If not standard, remove this spec.
## SLIDE SPECIFICATION: Portfolio Performance Summary Table
### Slide Title
"[Client Name] Private Equity Portfolio | Performance Summary as of [Date]"
### Data Sources
1. Private_Performance_Master.xlsx (client fund data)
2. Benchmark_Data.xlsx (vintage-matched benchmarks)
### Table Structure
**Columns (Left to Right):**
1. Fund Name (sorted alphabetically or by vintage year - [SPECIFY YOUR PREFERENCE])
2. Vintage Year
3. Asset Class / Strategy
4. Commitment ($M) [or specify currency]
5. Called ($M)
6. Distributed ($M)
7. NAV ($M)
8. IRR (%)
9. TVPI (x)
10. DPI (x)
11. Benchmark IRR (%) [vintage-matched median]
12. Quartile Ranking [vs benchmark]
**Formatting Requirements:**
- Font: [Specify size]
- Number formats:
- Currency: $##.#M (millions, one decimal)
- IRR: ##.#% (one decimal)
- Multiples: #.##x (two decimals)
- Color coding:
- IRR > Benchmark: [Specify color, e.g., green highlight]
- IRR < Benchmark: [Specify color, e.g., yellow/orange]
- Quartile 1-2: [Color]
- Quartile 3-4: [Color]
**Summary Row (Bottom):**
- Total Commitment (sum)
- Total Called (sum)
- Total Distributed (sum)
- Total NAV (sum)
- Portfolio-weighted IRR (calculate)
- Portfolio TVPI (calculate)
- [DO NOT include benchmark in summary row - not mathematically meaningful]
**Data Processing Logic:**
1. **Load client performance data:**
- Filter for Client_ID = [CLIENT]
- Filter for most recent As_Of_Date
2. **Match benchmarks:**
- For each fund, lookup benchmark using:
- Vintage_Year (exact match)
- Asset_Class (exact match) OR Strategy (if more specific)
- If no exact match: Flag for analyst review
3. **Calculate quartile ranking:**
- If IRR >= Q1 threshold: "1st Quartile"
- If IRR >= Median and < Q1: "2nd Quartile"
- If IRR >= Q3 and < Median: "3rd Quartile"
- If IRR < Q3: "4th Quartile"
4. **Apply formatting:**
- Color code cells per requirements above
- Format numbers per specification
5. **Calculate summary:**
- Sum commitment/called/distributed/NAV
   - Portfolio IRR: compute from the pooled cash flows of all funds (note that (Total Distributed + Total NAV) / Total Called is the portfolio TVPI, not an annualized IRR)
- Portfolio TVPI = (Total Distributed + Total NAV) / Total Called
**Quality Criteria:**
✓ All fund data present (no missing rows)
✓ All benchmarks matched (or flagged if unavailable)
✓ Calculations verified (spot-check 2-3 funds manually)
✓ Formatting consistent
✓ Summary row calculates correctly
**Error Handling:**
- If benchmark unavailable: Leave cell blank, add footnote "Benchmark data unavailable"
- If performance data >90 days stale: Flag with footnote "As of [DATE] - updated data pending"
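The >90-day staleness rule above is worth encoding as a check rather than a manual scan. A minimal sketch using stdlib dates (function name illustrative):

```python
from datetime import date

def is_stale(as_of, today, limit_days=90):
    """Error-handling rule above: data older than 90 days gets the
    'updated data pending' footnote and a Tier 2 review flag."""
    return (today - as_of).days > limit_days
```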
**Processing Instructions:**
1. Load Private_Performance_Master.xlsx
2. Filter for [CLIENT_ID]
3. Load Benchmark_Data.xlsx
4. Execute benchmark matching logic
5. Calculate quartile rankings
6. Apply formatting
7. Generate summary row
8. Insert into PowerPoint template at designated location
9. Validate quality criteria

🚨 ACTION REQUIRED:
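The quartile-ranking and summary-row logic in the specification above translates directly into code. A sketch following the spec's thresholds exactly (Q1/Q3 are the top-25% and bottom-25% IRR thresholds; fund dicts use the template-row column names):

```python
def quartile_rank(irr, q1, median, q3):
    """Quartile bucketing per the processing logic above."""
    if irr >= q1:
        return "1st Quartile"
    if irr >= median:
        return "2nd Quartile"
    if irr >= q3:
        return "3rd Quartile"
    return "4th Quartile"

def summary_row(funds):
    """Sum the dollar columns and compute portfolio TVPI.
    Per the spec, no benchmark figure is aggregated into the summary row."""
    called = sum(f["Called"] for f in funds)
    distributed = sum(f["Distributed"] for f in funds)
    nav = sum(f["NAV"] for f in funds)
    return {
        "Commitment": sum(f["Commitment"] for f in funds),
        "Called": called,
        "Distributed": distributed,
        "NAV": nav,
        "TVPI": (distributed + nav) / called,
    }
```

Spot-checking 2-3 funds by hand against these functions satisfies the "calculations verified" quality criterion.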
This is the most critical specification—it’s repeated for every fund in the portfolio.
## SLIDE SPECIFICATION: Individual Fund Analysis
### Slide Title
"[Fund Name] | [Strategy]"
Example: "ABC Capital Fund III | Lower Mid-Market Buyout"
### Slide Layout (Standard Template)
**Top Section: Performance Comparison Table**
- Replicates key columns from portfolio overview for THIS fund:
- Vintage, Commitment, Called, Distributed, NAV, IRR, TVPI, DPI
- Benchmark comparison (vintage-matched)
- Quartile ranking
**Right Side Box: Fund Highlights**
- Vintage Year: [YEAR]
- Total Fund Size: [AMOUNT]
- Client Commitment: [AMOUNT] ([%] of fund)
- Strategy: [DESCRIPTION]
- GP Commitment: [IF AVAILABLE]
**Bottom Section: Performance Commentary (2-3 bullets)**
- Material developments from manager materials
- Performance drivers (operational, strategic)
- Portfolio positioning/outlook
### Data Sources
**Performance Data:**
- Private_Performance_Master.xlsx (filter for this fund)
**Fund Highlights Data:**
- Total Fund Size: [YOUR SOURCE - e.g., fund documents, manager reports]
- Client Commitment %: Calculate from (Client Commitment / Total Fund Size)
- GP Commitment: [YOUR SOURCE or mark as "N/A" if unavailable]
**Commentary Sources (Priority Order):**
1. Most recent quarterly letter (last 90 days)
2. Most recent annual report (if no quarterly letter)
3. Meeting notes from manager calls (last 6 months)
### Commentary Generation Specification
**Objective:** Provide 2-3 bullet points that give client meaningful context on fund performance and positioning.
**Materiality Criteria (Include developments that meet ANY of these):**
1. **Performance Attribution:**
- IRR variance from benchmark >300 basis points (either direction)
- TVPI significantly above/below vintage median
- Explanation: What drove outperformance or underperformance?
2. **Portfolio Company Developments:**
- NAV mark changes >5% for any single portfolio company
- Successful exits (IPO, strategic sale) in last quarter/year
- Material operational inflection (revenue growth >25% YoY, margin expansion >300bps)
- Downward marks or write-offs
3. **Strategic Positioning:**
- New sector allocations >15% of fund NAV
- Geographic expansion
- Fund restructuring or recapitalization
- Changes in deployment pace (accelerating vs slowing)
4. **Organizational/Risk Factors:**
- GP or key personnel changes
- Fund extensions or term modifications
- Covenant issues or portfolio company stress
- Legal/regulatory matters
**Exclusion Criteria (DO NOT include):**
- Generic market commentary without fund-specific impact
- Boilerplate language from manager letters (e.g., "We remain focused on our strategy")
- Speculation not grounded in source materials
- Minor portfolio company developments (<3% NAV impact)
**Processing Instructions:**
1. **Access manager materials:**
- Navigate to /Private_Equity_Materials/[MANAGER_NAME]/
- Identify most recent quarterly letter or annual report
- If available, check meeting notes from last 6 months
2. **Extract material developments:**
- Read manager materials with materiality criteria in mind
- Flag sections discussing:
- Performance vs expectations
- Major portfolio company updates
- Strategic shifts or positioning
- Risk factors or concerns
3. **Draft 2-3 bullets:**
- Bullet 1 (Performance): Address IRR vs benchmark if >300bps variance
- Structure: "[Fund] [outperformed/underperformed] vintage median ([IRR]% vs [Benchmark]%) driven by [specific factors]"
- Bullet 2 (Operational/Strategic): Most material portfolio development
- Structure: "[Specific company/sector] [development] resulting in [outcome/impact]"
- Bullet 3 (Positioning/Outlook): Forward-looking context if provided
- Structure: "[Manager] [action/focus] positioning for [strategic objective]"
4. **Quality validation:**
- ✓ Each bullet cites specific metrics or developments (not generic)
- ✓ Information comes from source materials (not speculation)
- ✓ Length: 15-25 words per bullet
- ✓ Active voice, no hedging language
- ✗ No placeholder text or TBD markers
5. **Escalation triggers:**
- Flag for analyst review if:
- No manager materials available from last 6 months
- Contradictions between performance data and commentary
- Material concerns mentioned (GP changes, covenant issues, legal matters)
- Insufficient information to draft 2 meaningful bullets
**Example Outputs:**
**Example 1 - Outperformer:**
"ABC Capital Fund III outperformed vintage median (IRR: 22.3% vs 14.8%) driven by successful exit of portfolio company DataCorp through strategic sale to Oracle, generating 4.2x return. Fund increased allocation to vertical SaaS businesses to 65% of NAV, reflecting manager's thesis on recurring revenue defensibility. Two new platform investments deployed in Q4 within healthcare IT, maintaining disciplined entry multiples at 8-9x EBITDA."
**Example 2 - Underperformer:**
"XYZ Ventures Fund II trailed vintage median (IRR: 6.2% vs 11.5%) due to markdowns in three consumer-focused portfolio companies reflecting challenging macro conditions and heightened competition. Fund took 45% write-down on RetailCo following bankruptcy filing in Q3. Manager has slowed deployment pace, preserving $25M in dry powder (18% of committed capital) for follow-on support to existing winners rather than new platform investments."
**Example 3 - Steady Performer:**
"DEF Growth Equity Fund I tracking inline with vintage median (IRR: 16.1% vs 15.8%) with balanced portfolio performance across software and healthcare services holdings. Portfolio company MedTech Solutions expanded gross margins from 42% to 51% following international expansion into European markets. Fund 85% deployed with remaining capital reserved for follow-on rounds in top quartile performers."
### Fund Highlights Box Data Requirements
**Required Fields:**
- Vintage Year: [YYYY] - Source: Performance data
- Total Fund Size: [AMT] - Source: [SPECIFY YOUR SOURCE]
- Client Commitment: [AMT] - Source: Performance data
- Client % of Fund: [%] - Calculated
- Strategy: [TEXT] - Source: [SPECIFY]
**Optional Fields (if available):**
- GP Commitment: [AMT or %]
- Fund Term: [Years]
- Geography: [Region]
**🚨 ACTION REQUIRED:**
1. Where is "Total Fund Size" data stored in your systems? ________________________________
2. How do you currently track GP commitment? ________________________________
3. Are there other fund attributes you include? ________________________________
### Quality Checklist for Individual Fund Slides
Before marking slide as "complete," verify:
- [ ] Performance table matches portfolio summary (consistency check)
- [ ] Fund highlights box is fully populated (no TBD/missing fields)
- [ ] 2-3 commentary bullets present
- [ ] Commentary cites specific developments (not generic)
- [ ] Sources are documented (which letter/report/meeting note)
- [ ] Formatting matches template exactly
- [ ] If material concerns flagged, analyst review completed

Goal: Encode decision criteria and quality thresholds so the LLM knows what “good” looks like.
Time Required: 2-3 hours
Create document: Quality_Framework_Private_Asset_Review.md
# Quality Framework: Private Asset Review
## Purpose
This document defines what makes a Private Asset Review "ready for client delivery"
vs "needs analyst review" vs "not acceptable."
## Three Quality Tiers
### Tier 1: Ready for Client Delivery
Output meets ALL of these criteria:
**Data Accuracy:**
- [ ] All performance metrics match source systems exactly (zero discrepancies)
- [ ] Benchmark comparisons use vintage-matched peer groups
- [ ] Calculations verified (spot-check IRR, TVPI, DPI)
- [ ] As-of dates consistent across all slides
**Content Quality:**
- [ ] Every fund has 2-3 bullets of commentary (unless unavailable)
- [ ] Commentary cites specific operational/strategic drivers
- [ ] No generic/boilerplate language ("challenging conditions," "executing strategy")
- [ ] Material developments appropriately highlighted per criteria
**Formatting:**
- [ ] Matches template exactly (fonts, colors, layout)
- [ ] No placeholder text remains ([TBD], [INSERT], etc.)
- [ ] Charts/tables are readable and properly labeled
- [ ] Confidentiality footer present
- [ ] Client name correct throughout
**Completeness:**
- [ ] All sections present (market overview, portfolio table, individual funds)
- [ ] No missing data (or appropriate footnotes if unavailable)
- [ ] Sources attributed where required
### Tier 2: Needs Analyst Review
Output has minor issues requiring attention:
**Triggers:**
- Performance data >90 days stale for any fund
- Benchmark unavailable for 1-2 funds (footnoted but needs context)
- Commentary for 1-2 funds is thin (only 1 bullet or generic)
- Material concerns mentioned in manager letters (GP changes, covenant issues)
- Contradictions between performance data and manager commentary
**Action:** Analyst reviews flagged items, makes corrections, approves for delivery.
**Estimated review time:** 30-60 minutes
### Tier 3: Not Acceptable
Output has major flaws requiring significant rework:
**Blockers:**
- Performance data missing for >25% of funds
- Calculation errors in summary metrics
- Placeholder text or formatting severely broken
- Commentary is entirely generic across all funds
- Material errors in fund names, vintage years, or client identification
**Action:** Do not proceed. Fix data sources or specification before retry.
## Decision Boundaries
### When to Auto-Populate vs Flag for Human Review
**Auto-populate (proceed without human intervention) when:**
- All data sources available and recent (<90 days)
- Benchmarks match cleanly
- Manager materials available and straightforward
- No material concerns flagged in source documents
**Flag for analyst review when:**
- Data gaps or staleness
- Unclear benchmark matches
- Material concerns (GP changes, legal issues, covenant breaches)
- Contradictions between data sources
- Insufficient source material to draft meaningful commentary
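The auto-populate vs. flag boundary above can be expressed as a simple routing function. This is a minimal sketch, assuming a per-fund state record; the `FundContext` fields are illustrative names, not fields from any Crewcial system.

```python
from dataclasses import dataclass, field

# Hypothetical per-fund state record; field names are illustrative.
@dataclass
class FundContext:
    data_age_days: int
    benchmark_matched: bool
    has_manager_materials: bool
    material_concerns: list = field(default_factory=list)
    data_contradictions: bool = False

def route_fund(ctx: FundContext) -> str:
    """Return 'auto' to proceed unattended, else 'review: <reasons>'."""
    reasons = []
    if ctx.data_age_days >= 90:
        reasons.append(f"data is {ctx.data_age_days} days old (>=90)")
    if not ctx.benchmark_matched:
        reasons.append("no clean benchmark match")
    if not ctx.has_manager_materials:
        reasons.append("insufficient source material for commentary")
    if ctx.material_concerns:
        reasons.append("material concerns: " + ", ".join(ctx.material_concerns))
    if ctx.data_contradictions:
        reasons.append("data sources contradict each other")
    return "review: " + "; ".join(reasons) if reasons else "auto"
```

Logging the reasons (rather than returning a bare yes/no) gives the analyst a ready-made review checklist for each flagged fund.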
### Materiality Thresholds (Repeat for Emphasis)
**Performance variance:** Include if IRR vs benchmark delta >300 bps
**Portfolio company impact:** Include if NAV mark change >5%
**Strategic developments:** Include if new allocation >15% NAV
**Risk factors:** ALWAYS include GP changes, covenant issues, legal matters
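The four materiality thresholds above translate directly into a predicate the workflow can apply to each candidate development. A minimal sketch; the thresholds mirror the text (300 bps, 5% NAV, 15% of NAV), but the `kind` labels are my own naming, not a house convention.

```python
def is_material(kind: str, value: float = 0.0) -> bool:
    """Apply the playbook's materiality thresholds to one development."""
    if kind == "performance_variance_bps":   # IRR vs benchmark delta
        return abs(value) > 300
    if kind == "nav_mark_change_pct":        # portfolio company mark change
        return abs(value) > 5.0
    if kind == "new_allocation_pct_nav":     # new strategic allocation
        return value > 15.0
    if kind == "risk_factor":
        # GP changes, covenant issues, legal matters: always include.
        return True
    return False
```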
## Quality Calibration Examples
### Example: GOOD Commentary
"Fund outperformed vintage median (IRR: 18.5% vs 14.2%) driven by strong exits
in two healthcare services companies, partially offset by markdowns in consumer
retail holdings. Portfolio company MedCo grew EBITDA 35% YoY following successful
bolt-on acquisition strategy."
**Why it's good:**
- Cites specific metrics (18.5% vs 14.2%, 35% EBITDA growth)
- Explains performance drivers (exits, markdowns)
- Names sectors/companies (healthcare services, consumer retail, MedCo)
- Connects to operational factors (bolt-on acquisitions)
### Example: BAD Commentary
"The fund performed well this quarter despite challenging market conditions.
The manager remains focused on executing their disciplined investment strategy
and continues to see attractive opportunities in the pipeline."
**Why it's bad:**
- No specific metrics
- Generic market commentary
- Boilerplate language ("challenging conditions," "disciplined strategy")
- No operational drivers or portfolio company details
- Could apply to any fund
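A first-pass screen for the "could apply to any fund" failure mode can be automated: check each bullet for known boilerplate phrases and for the presence of at least one concrete number. This is a sketch under the assumption that good commentary always cites a metric; the phrase list is seeded from the BAD example above and should be extended with your own boilerplate catalogue.

```python
import re

# Illustrative phrase list drawn from the BAD example; extend over time.
BOILERPLATE = [
    "challenging market conditions",
    "disciplined investment strategy",
    "attractive opportunities",
    "performed well this quarter",
    "executing their strategy",
]

def commentary_flags(bullet: str) -> list:
    """Return reasons a bullet fails the specificity bar (empty = passes)."""
    flags = []
    lower = bullet.lower()
    for phrase in BOILERPLATE:
        if phrase in lower:
            flags.append(f"boilerplate: '{phrase}'")
    # Good commentary cites at least one number (IRR, %, bps, multiple).
    if not re.search(r"\d", bullet):
        flags.append("no specific metric cited")
    return flags
```

This is a screen, not a judge: a bullet that passes can still be weak, so it complements (rather than replaces) the analyst quality check.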
## Analyst Review Protocol
When output is flagged for Tier 2 review:
1. **Review flagged items** (15-20 min)
- Check performance data accuracy
- Verify benchmark matches
- Read source materials for flagged funds
2. **Make corrections** (15-30 min)
- Update commentary for thin/generic bullets
- Add context for benchmark mismatches
- Address material concerns flagged
3. **Validate quality tier** (10 min)
- Confirm now meets Tier 1 criteria
- Final formatting check
4. **Approve for delivery**
**Total estimated time:** 30-60 minutes (vs 15-20 hours manual creation)

Document key decision points the LLM will encounter:
# Decision Tree: Commentary Generation
## When reading manager quarterly letter...
**Step 1: Identify material developments**
- Scan for: Performance drivers, portfolio company updates, strategic positioning, risk factors
- Check: Does development meet materiality criteria? (See Quality Framework)
- If YES → Extract and flag for inclusion
- If NO → Skip
**Step 2: Assess information completeness**
- Question: Can I draft 2-3 specific bullets from available materials?
- If YES → Proceed to drafting
- If NO → Flag: "Insufficient source material - analyst review required"
**Step 3: Draft bullets**
- For each material development:
- Lead with specific metric or outcome
- Connect to operational/strategic driver
- Keep to 15-25 words
- Active voice, no hedging
**Step 4: Quality check**
- Review against GOOD/BAD examples
- If any bullet is generic → Revise with more specificity
- If cannot add specificity → Flag for analyst input
**Step 5: Escalation check**
- Does commentary mention: GP changes, legal issues, covenant breaches, major write-offs?
- If YES → Flag: "Material concern - analyst review required"
- If NO → Proceed

Goal: Validate that specifications produce acceptable output; refine based on failures.
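The completeness and escalation checks in the decision tree (Steps 2 and 5) reduce to a small disposition function. A sketch under assumed inputs: `developments` is the list of material developments extracted in Step 1, `bullets` the drafted commentary; the escalation term list is illustrative.

```python
# Terms that force analyst review per Step 5 (illustrative list).
ESCALATION_TERMS = ("gp change", "legal", "covenant breach", "write-off")

def commentary_disposition(developments: list, bullets: list) -> str:
    """Mirror Steps 2 and 5: enough material? any escalation triggers?"""
    if not developments:
        return "flag: insufficient source material - analyst review required"
    text = " ".join(bullets).lower()
    if any(term in text for term in ESCALATION_TERMS):
        return "flag: material concern - analyst review required"
    return "proceed"
```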
Time Required: 3-4 hours (including 2-3 test cycles)
Choose a pilot client portfolio with these characteristics: - Moderate complexity (5-8 funds, not too simple/not overwhelming) - Recent performance data available - Manager materials accessible - Someone familiar with the portfolio who can validate quality
Document test case: - Client: ________________________________ - Number of funds: ________ - Vintage range: ________ to ________ - Expected completion time: ________ hours
Execute the full workflow:
Track: - Time required: ________ (LLM processing + your setup time) - Output quality tier: 1 / 2 / 3 (per Quality Framework) - What worked well: ________________________________ - What failed: ________________________________
For each failure, diagnose root cause:
| Failure | Root Cause | Fix |
|---|---|---|
| Performance data didn’t load | Data format incompatible | Revise data standardization in Phase 1 |
| Benchmark match failed | Specification logic unclear | Update Step 3.2 matching algorithm |
| Commentary too generic | Materiality criteria not applied | Strengthen intent engineering (Phase 4) |
| Formatting broken | Template spec incomplete | Add missing formatting details to spec |
Common failure patterns:
Data access issues: - LLM couldn’t locate files → Document exact file paths in context - Format not parseable → Standardize to CSV or clearly structured Excel
Specification ambiguity: - “Similar funds” → Define similarity (same vintage? same strategy?) - “Recent materials” → Define recency (last 90 days? last quarter?) - “Material developments” → Quantify (>5% NAV? >300bps variance?)
Quality threshold failures: - Output is generic → Add more counter-examples to context - Missing key info → Check if source materials were actually available - Formatting inconsistent → Tighten specification language
After identifying failures, iterate:
Repeat until output reaches Tier 1 quality.
Target: 80%+ of slides reach Tier 1 on first pass; 20% require minor analyst review (Tier 2).
Acceptable: 60% Tier 1, 40% Tier 2, 0% Tier 3.
Unacceptable: >10% Tier 3 outputs. Go back to Phase 2-3 and strengthen specifications.
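The target/acceptable/unacceptable bands above can be computed mechanically from a test run's per-slide tier counts. A minimal sketch; the verdict strings are mine, and the band between "acceptable" and "unacceptable" (e.g. 1-10% Tier 3) is treated as "iterate" since the text does not define it.

```python
def grade_run(tier_counts: dict) -> str:
    """tier_counts: {1: n, 2: n, 3: n} slides per quality tier."""
    total = sum(tier_counts.values()) or 1
    pct = {t: 100 * tier_counts.get(t, 0) / total for t in (1, 2, 3)}
    if pct[3] > 10:
        return "unacceptable: strengthen Phase 2-3 specifications"
    if pct[1] >= 80 and pct[3] == 0:
        return "target met"
    if pct[1] >= 60 and pct[3] == 0:
        return "acceptable"
    return "below acceptable: iterate"
```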
Create:
Implementation_Log_Private_Asset_Review.md
Track each test iteration:
# Implementation Log: Private Asset Review
## Test Run #1
- **Date:** [DATE]
- **Client:** [TEST CLIENT]
- **LLM Used:** [Model name/version]
- **Time:** [Hours]
- **Quality Outcome:** Tier [1/2/3]
- **Key Issues:**
1. [Issue description]
2. [Issue description]
- **Fixes Applied:**
1. [What was changed in specs/context]
2. [What was changed]
## Test Run #2
- **Date:** [DATE]
- **Changes from Run #1:** [Summary]
- **Outcome:** [Improved/Same/Worse]
- **Remaining Issues:** [List]
## Test Run #3 (if needed)
...
## Final Production Version
- **Date:** [DATE]
- **Quality Metrics:**
- Tier 1: [%] of slides
- Tier 2: [%] of slides
- Tier 3: [%] of slides
- **Time Savings:** [Manual hours] → [LLM-assisted hours]
- **Ready for production:** YES / NO

Prerequisites before going live: - ✅ Test case completed with 80%+ Tier 1 quality - ✅ All specifications documented and refined - ✅ Context document finalized - ✅ Data sources validated and accessible - ✅ Quality framework socialized with team - ✅ Analyst review process defined
Select pilot clients: - Portfolio characteristics similar to test case - Clients who won’t receive review for 30-60 days (buffer for refinement) - Internal stakeholders aligned on pilot program
Pilot protocol: 1. Generate draft using LLM workflow 2. Analyst reviews using Quality Framework 3. Document time spent (LLM + analyst review) 4. Track quality tier distribution 5. Collect feedback from reviewing analyst
Key metrics to track:
| Metric | Target | Actual |
|---|---|---|
| Time per review | 4-6 hours | _____ |
| Tier 1 quality % | >80% | _____ |
| Analyst review time | <2 hours | _____ |
| Reviews completed/quarter | [YOUR GOAL] | _____ |
Qualitative feedback: - What still requires heavy editing? - What quality issues recur? - What surprised you (good or bad)?
Rollout plan:
Month 1: Pilot (2-3 clients)
Month 2: Expand to 5-7 clients, incorporate
learnings
Month 3: Full production for all eligible clients
Risk mitigation: - Maintain manual backup process for first quarter - Over-communicate with clients: “We’ve enhanced our review process” - Build in buffer time for unexpected issues
Symptom: LLM generates nonsense or placeholder text despite good specifications.
Root cause: Underlying data is incomplete, stale, or inconsistent.
Example: - Performance data has missing quarters for several funds - Manager materials not uploaded for 6+ months - Benchmark file hasn’t been updated since 2023
Fix: - Phase 1 prerequisite work is critical. Do not skip data infrastructure setup. - Establish ongoing data maintenance protocol (who updates, when, how) - Build data validation checkpoints into workflow
Prevention: Monthly data quality audit before quarter-end reviews.
Symptom: Commentary is well-written but so generic it could apply to any fund.
Example: “The fund executed its strategy effectively despite market volatility.”
Root cause: Materiality criteria not enforced; LLM defaults to safe, vague language.
Fix: - Strengthen materiality thresholds in specification - Add counter-examples to context (“This is BAD commentary”) - Require specific metrics in every bullet (enforce in quality check)
Prevention: Include 5-10 examples of good vs bad commentary in context document.
Symptom: Funds compared to wrong benchmarks (wrong vintage, wrong strategy).
Root cause: Matching logic in specification is ambiguous.
Example: Growth equity fund compared to buyout benchmark because vintage matched.
Fix: - Make matching algorithm explicit: Vintage + Strategy (not just vintage) - Define fallback logic: If no exact match, what’s acceptable secondary match? - Flag for analyst review if match confidence is low
Prevention: Manually validate benchmark matches for first 3-5 pilots.
Symptom: Review includes outdated information; client receives materials with old data.
Root cause: No freshness validation in workflow.
Fix: - Add timestamp check to specification: Flag if data >90 days old - Require “As of [DATE]” on every performance metric - Build data refresh reminder into quarterly calendar
Prevention: Automate data freshness checks; block review generation if data is stale.
Symptom: Commentary misses important developments because source materials weren’t accessed.
Root cause: Source document location/naming convention not standardized.
Fix: - Standardize folder structure (Phase 1) - Document naming convention explicitly - Add file existence check to workflow (flag if expected materials missing)
Prevention: Quarterly audit of manager materials repository; flag missing uploads.
Time Efficiency: - Baseline (manual): 15-20 hours per review - Target (LLM-assisted): 4-6 hours per review - Savings: 10-14 hours (65-75% reduction)
Quality Distribution: - Target: 80% Tier 1, 20% Tier 2, 0% Tier 3 - Track by pilot client and over time
Scale Impact: - Baseline: [X] reviews completed annually - Target: [2-3X] reviews completed annually - Measure: Reviews per analyst per quarter
Client Feedback: - Do clients find reviews valuable? - Any quality concerns raised? - Increased engagement (questions, follow-up calls)?
Analyst Feedback: - Were the time savings estimates accurate? - Is the quality acceptable? - What is still frustrating or manual? - Would they use it for the next review?
Process Improvement: - What specifications needed refinement? - What data quality issues surfaced? - What institutional knowledge got documented?
After each quarter-end cycle:
Maintain versioned specifications:
Private_Asset_Review_Spec_v1.0.md (Initial production)
Private_Asset_Review_Spec_v1.1.md (Post Q1 2026 refinements)
Private_Asset_Review_Spec_v1.2.md (Post Q2 2026 refinements)
Track what changed and why: - What quality issue did update address? - What feedback prompted change? - Measurable improvement?
After 2-3 successful pilots: - Document success stories - Share time savings data - Socialize best practices with team - Create “train the trainer” session for new analysts
Week 1: Data Infrastructure - [ ] Complete Step 1.1-1.5 (map data sources, standardize formats) - [ ] Validate data quality and freshness - [ ] Document data maintenance protocol
Week 2: Context Engineering - [ ] Create context document (Step 2.1) - [ ] Gather example reviews if available (Step 2.2) - [ ] Optional: Build fund-specific context notes (Step 2.3)
Week 3: Specification Engineering - [ ] Write slide specifications (Step 3.1-3.3) - [ ] Document quality framework (Step 4.1) - [ ] Create decision trees (Step 4.2)
Week 4: Testing & Refinement - [ ] Select test case client (Step 5.1) - [ ] Execute first test run (Step 5.2) - [ ] Analyze failures and iterate (Step 5.3-5.4) - [ ] Document learnings (Step 5.5)
Month 2: Pilot Deployment - [ ] Generate 2-3 pilot reviews - [ ] Measure quality and time metrics - [ ] Refine based on feedback - [ ] Prepare for scale
Questions about this playbook?
Contact: [Stephane Ligonde / Internal LLM Working Group]
Suggestions for improvement?
Document in: Implementation_Log_Private_Asset_Review.md
Found a better approach?
Share with team and we’ll update next version
v1.0 (February 27, 2026) - Initial release - Based on Jonathan Goldberg's Private Asset Review wishlist - Incorporates Nate B. Jones's four-disciplines framework - Pilot-tested with [TBD - add after pilot]
Contributors: - Stephane Ligonde (Framework & Authorship) - Jonathan Goldberg (Use Case Definition & Requirements) - [Add pilot testers here after testing phase]
This playbook is a living document. Update it based on your learnings and share improvements with the team.