
There is a version of due diligence that most institutional allocators practice, and there is a version that actually protects capital. The gap between them is not effort. It is structure.
The typical DDQ review goes something like this: an analyst sends a questionnaire to a fund manager. The manager returns a completed document (40, sometimes 60 pages) covering strategy, operations, team, compliance, and risk. The analyst reads it carefully. They flag a few things that catch their attention. They write up their notes. The Investment Committee reviews the summary and asks follow-up questions. Capital gets committed, or it doesn't.
This process has a structural limitation that rarely gets named directly: without a shared rubric, the depth and focus of any review are naturally shaped by the analyst's experience, the time available, and the particular sections of the document that demand attention in that moment. No single reviewer can be expected to cover every dimension with equal rigor across every engagement. And when a manager suffers a significant loss event or fails an operational audit three years later, there is often no documented record of what was reviewed and what the review prioritized.
This is not an intelligence problem, but a process one.
The root cause
To understand why this happens, consider what a DDQ actually is, and what it is not.
The LP originates the questions, often using a standardized template from ILPA or AIMA. But the GP authors every answer. This distinction matters more than the industry tends to acknowledge. A completed DDQ is not a neutral disclosure document. It is a marketing document that happens to be structured like one. The manager controls the framing of every response, chooses which details to surface and which to bury in qualifications, and presents the information in the most favorable light a reasonable person could defend. Industry practitioners put it plainly: DDQs are like pitchbooks with footnotes.
The questions may be standardized, but the answers are curated. A fund with a compliance function that reports through General Counsel rather than an independent board will describe that compliance function in detail and at length. What it will not do is flag the reporting line as a governance concern. A manager charging a 2% fee with no hurdle rate will position that fee structure as commensurate with their strategy's complexity and track record. They will not frame it as an above-market arrangement that shifts the burden of proof.
An allocator reading a completed DDQ straight through is, in a sense, accepting the manager's editorial decisions about what matters. The document pulls attention toward strengths and away from structural vulnerabilities, not through deception, but through emphasis.
A scoring framework inverts this dynamic entirely. Instead of reading to understand, you are reading to evaluate against a predefined set of criteria. The criteria are yours, not the manager's. You are no longer working through their document; you are running their document through your filter. The question is no longer "what did they say about compliance?" It is "does their compliance structure meet our independence threshold: yes or no?"
This distinction sounds subtle, but its implications are significant. When the evaluation criteria are fixed in advance, three things happen that cannot happen in an unstructured review.
First, coverage becomes mandatory. You cannot skip operational infrastructure because the investment thesis section was compelling. Every dimension gets assessed, every time.
Second, comparison becomes possible. Once you have evaluated ten managers against the same rubric, you have a library of comparable assessments. A fee structure that looked reasonable in isolation looks different when you can see it ranked against ten others. Patterns emerge that no individual review would surface.
Third, accountability becomes real. A structured assessment creates a documented record of what was evaluated and what was flagged. For a foundation or endowment operating under fiduciary duty, this is not a bureaucratic nicety. It is protection.
What a good framework measures
The temptation when building a scoring framework is to make it comprehensive. Resist this. Comprehensive frameworks become checklists that analysts race through. The goal is not coverage for its own sake. The goal is to force explicit judgment on the dimensions where judgment most often fails.
There are five areas where allocator due diligence most needs to focus, because they are structurally easy to underweight when you're reading a document the manager wrote.
Organizational stability. The question is not whether the fund has talented people. It is whether the structure survives the departure of one of them. Signal to look for: a formal key person clause with defined succession triggers. Its absence in a DDQ is rarely a coincidence.
Operational infrastructure. Most allocators evaluate managers on investment merit and treat operational due diligence as secondary. This ordering is backwards. A fund with a mediocre investment process but sound operations will survive. A fund with a brilliant process and inadequate operations can be destroyed by a single failure. If the administrator's name is not explicitly stated in the DDQ, that is worth noting.
Compliance independence. This is a structural question, not a behavioral one. A CCO reporting through General Counsel has a different incentive structure than one reporting to an independent board. Neither arrangement guarantees good behavior, but the former makes it structurally easier for a compliance concern to be contained before it ever surfaces.
Liquidity alignment. The evaluation is nearly mechanical: do the redemption terms match the liquidity profile of the underlying assets? Most avoidable LP losses in the past two decades trace back to this mismatch. Quarterly redemptions with 45-day notice work for liquid credit. They do not work for illiquid private assets dressed up in a similar structure.
Fee economics. A 2% management fee with no hurdle rate is not automatically disqualifying. But it shifts the burden of proof onto the manager. A scoring framework forces that conversation into the open rather than letting it get absorbed into 40 pages of investment process description, where it tends to disappear.
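To make these dimensions operational, some teams fix them as an explicit criteria set before any document is opened. The sketch below is one minimal way to do that in Python; the wording of each threshold question is our own distillation of the descriptions above, not an industry-standard taxonomy.

```python
# Illustrative criteria set: one threshold question per dimension,
# distilled from the descriptions above. The wording is an example,
# not an industry-standard taxonomy.
DDQ_CRITERIA = {
    "Organizational stability":
        "Is there a formal key person clause with defined succession triggers?",
    "Operational infrastructure":
        "Is the fund administrator explicitly named, with sound supporting infrastructure?",
    "Compliance independence":
        "Does the CCO report to an independent board rather than through General Counsel?",
    "Liquidity alignment":
        "Do redemption terms match the liquidity profile of the underlying assets?",
    "Fee economics":
        "If fees sit above market with no hurdle, has the manager met the burden of proof?",
}
```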
Flags, not scores
The most useful implementation of a DDQ scoring framework is not a numerical score. Numbers imply false precision and invite gaming. Instead, a binary red flag / green flag rubric for each criterion forces the analyst to make a judgment call and then support it with specific evidence from the document.
This approach has two advantages over numerical scoring. It is faster, because it eliminates the cognitive overhead of deciding whether something is a 6 or a 7. And it is more honest, because it forces the analyst to commit to a position rather than retreating to the middle of a scale.
In practice, a complete assessment of a hedge fund DDQ across these five dimensions should yield somewhere between 15 and 20 data points. The ratio of red to green flags is less important than the quality of the flags. Three well-documented red flags (no key person clause, fees above market with no hurdle, administrator identity unclear) are more useful than 18 superficial ones.
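As a concrete illustration, here is a minimal sketch of what the underlying record might look like in Python. The class and field names, the sample manager, and the evidence strings are all assumptions for illustration, not a prescribed schema; the point is that every flag is binary and must carry evidence.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Flag(Enum):
    GREEN = "green"
    RED = "red"


@dataclass
class Criterion:
    dimension: str   # one of the five areas above
    question: str    # the yes/no judgment the analyst commits to
    flag: Flag
    evidence: str    # specific support pulled from the DDQ


@dataclass
class Assessment:
    manager: str
    criteria: List[Criterion] = field(default_factory=list)

    def red_flags(self) -> List[Criterion]:
        return [c for c in self.criteria if c.flag is Flag.RED]

    def matrix(self) -> str:
        """Compact summary a committee member can scan in a few minutes."""
        header = (f"{self.manager}: {len(self.red_flags())} red / "
                  f"{len(self.criteria) - len(self.red_flags())} green")
        rows = [f"  [{c.flag.value.upper():<5}] {c.dimension}: {c.evidence}"
                for c in self.criteria]
        return "\n".join([header, *rows])


# Illustrative entries only; a full review would carry 15 to 20 of these.
assessment = Assessment(manager="Example Fund LP", criteria=[
    Criterion("Organizational stability",
              "Formal key person clause with succession triggers?",
              Flag.RED, "No key person provision located in the DDQ."),
    Criterion("Compliance independence",
              "Does the CCO report to an independent board?",
              Flag.RED, "CCO reporting line runs through General Counsel."),
    Criterion("Liquidity alignment",
              "Do redemption terms match underlying asset liquidity?",
              Flag.GREEN, "Quarterly redemptions, 45-day notice, liquid credit book."),
])

print(assessment.matrix())
```

One design choice worth noting in a structure like this: evidence is a required field. A flag with no supporting citation cannot be recorded, which is exactly the discipline the rubric is meant to enforce.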
The output should feed directly into IC preparation. A committee member reviewing a manager for the first time should be able to read the scoring matrix in five minutes and arrive at the meeting knowing exactly where to probe.
Institutional memory as leverage
The value of building and following this kind of framework goes beyond analytical rigor. It compounds into institutional memory.
Analysts leave, and the institutional knowledge accumulated through years of manager reviews walks out the door with them. A scoring framework, applied consistently, creates a knowledge base that is organizational rather than individual. A new analyst joining the team inherits not just a process but a library of assessments that teaches them, concretely, what sound and unsound structures look like across a range of managers and strategies.
This is the deepest value of structured due diligence. It converts tacit knowledge, the accumulated judgment of experienced analysts who have seen a lot of DDQs, into explicit processes that can be transmitted, audited, and improved.
Most allocators treat due diligence as a reading exercise. The ones who get this right treat it as an engineering problem: how do you build a system that produces consistent, high-quality evaluations regardless of who is doing the evaluating? A scoring framework is not the whole answer. But it is where the answer starts.
This is precisely the problem Finpilot was built to solve. Finpilot is an AI knowledge management system for institutional allocators that automates workflows like DDQ analysis and brings consistency to the evaluation of both structured and unstructured documents. If you'd like to walk through how your team could apply this framework, we're happy to show you a live example. Book a 15-minute demo with Finpilot today.
