March 2026
How PE Firms Should Evaluate AI Investments in Portfolio Companies
The AI Investment Due Diligence Gap in Private Equity
Private equity firms are burning capital on AI consulting engagements that produce 120-page reports instead of measurable business impact. The problem is systematic: PE operating partners lack evaluation frameworks to distinguish genuine AI value creation from expensive theater.
McKinsey research shows that 70% of AI investments in mid-market companies fail to deliver projected ROI within 18 months. The gap isn't technical—it's in due diligence. Traditional IT assessment frameworks miss AI-specific risks because they evaluate technology adoption rather than business outcomes.
The cost of AI theater is predictable. Consultancies pitch "comprehensive AI strategies" that require $500K Phase 1 assessments, followed by $2M "transformation roadmaps." Six months later, portfolio companies have slide decks and proofs of concept, but no production systems moving the revenue needle.
This pattern repeats because PE firms apply traditional software due diligence to AI investments. They ask whether the technology works instead of whether it solves a quantifiable business problem. The result: portfolio companies deploy AI solutions in search of problems to solve.
According to Anthropic's Economic Index research, successful AI implementations follow the opposite pattern—they start with specific cost reduction targets and work backward to appropriate technology choices.
The Four-Question AI Value Framework for Operating Partners
Four binary questions separate AI investments that create value from consulting theater. Each question requires a yes/no answer—complex scoring rubrics invite rationalization.
Question 1: Does this solve a $1M+ annual cost problem? AI implementations below this threshold disappear into operational noise. If you cannot point to a specific line item that will decrease by at least $1M annually, you are buying technology in search of a problem. The math is unforgiving: once ongoing run costs are counted, a $500K implementation needs $1M+ in annual savings to keep payback inside an 18-month window (a worked example follows below).
Question 2: Can we measure the before/after objectively? Measurable means: invoices processed per hour, customer service tickets resolved, contract review time, inventory turns. Not: "improved decision-making" or "enhanced customer experience." If the metric requires surveys to measure, the ROI will be disputed at every board meeting.
Question 3: Does this require 6+ months of "organizational change management"? Change management is consulting code for "we cannot automate your current process, so we need to redesign your business around our technology." Legitimate AI implementations automate existing workflows. If your people need extensive retraining, the solution is over-engineered.
Question 4: Are we buying technology or buying transformation consulting? Technology purchases name specific tools, integration points, and deliverable systems. Transformation consulting sells "strategic frameworks" and "organizational readiness assessments." The difference shows up in SOWs: technology vendors specify what systems will be built; consultants specify what reports will be delivered.
Red flags include: any proposal over 50 pages, timelines with "phases" stretching beyond 6 months, or vendors who cannot demo working software in the first meeting.
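To make Question 1's math concrete, here is a minimal payback sketch in Python. The implementation cost, savings, and run-cost figures are hypothetical placeholders, not benchmarks.

    def payback_months(implementation_cost: float, annual_savings: float,
                       annual_run_cost: float = 0.0) -> float:
        """Months until cumulative net savings cover the one-time implementation cost."""
        net_monthly = (annual_savings - annual_run_cost) / 12
        if net_monthly <= 0:
            return float("inf")  # the project never pays back
        return implementation_cost / net_monthly

    # Hypothetical figures: $500K build, $1M/yr gross savings, $300K/yr run and maintenance cost
    print(f"{payback_months(500_000, 1_000_000, 300_000):.1f} months")  # ~8.6 months

The point of the sketch is the sensitivity: modest ongoing run costs or savings that come in below projection push payback past the 18-month window quickly.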
Technical Red Flags Every Operating Partner Should Know
LangChain implementations signal over-engineering. LangChain is an abstraction layer that adds complexity without business value for standard use cases. Companies using LangChain for basic document processing or API integrations are optimizing for vendor demos rather than production reliability. "Why we don't use LangChain" explains the technical reasoning, but the business implication is simpler: unnecessary abstraction layers create vendor lock-in and maintenance overhead.
"Comprehensive AI strategy" proposals that never name specific use cases indicate consultants selling methodology rather than solutions. Legitimate AI vendors lead with concrete examples: "We'll automate your invoice processing to reduce manual review time from 40 minutes to 30 seconds per invoice."
MLOps infrastructure before proven use cases puts the cart before the horse. Companies proposing Kubernetes clusters and model deployment pipelines before identifying which models will run in production are selling enterprise theater. Start with direct API calls to proven models. Build infrastructure after proving value.
Proprietary model training for standard business problems wastes capital. Document classification, data extraction, and customer service automation are solved problems. Custom models make sense for genuinely unique domains—specialized manufacturing processes or proprietary financial instruments. Everything else should use Anthropic's Claude API or equivalent foundation models.
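As a reference point, the "buy the foundation model" path can be this small. The sketch below calls the Claude API directly for invoice field extraction; the model name and prompt are illustrative, and the invoice text is a stand-in for a real document.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    invoice_text = "ACME Corp Invoice #4821, dated 2026-02-14, total due $12,430.00"

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name; use whatever is current
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Extract the vendor, invoice number, date, and total as JSON:\n" + invoice_text,
        }],
    )
    print(message.content[0].text)

No orchestration framework, no model training, no deployment pipeline: one dependency and a prompt is usually enough to prove or disprove the use case before any infrastructure spend.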
Integration complexity that exceeds the value of automation indicates poor solution architecture. If connecting to your ERP system requires six months of custom development, the implementation is badly designed. According to Anthropic's MCP documentation, modern AI integrations should require days or weeks of connector development, not quarters.
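For a sense of scale, here is a hedged sketch of what a modern connector can look like using the MCP Python SDK's FastMCP helper. The tool name is invented for illustration and the ERP lookup is stubbed; a real connector would call the ERP's API inside the function.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("erp-connector")

    @mcp.tool()
    def get_open_invoices(vendor_id: str) -> list[dict]:
        """Return open invoices for a vendor (stubbed; a real connector would query the ERP)."""
        return [{"invoice_id": "INV-0042", "vendor_id": vendor_id, "amount": 1250.00}]

    if __name__ == "__main__":
        mcp.run()

A connector of roughly this shape, pointed at a real system of record, is a days-to-weeks effort; anything quoted in quarters deserves scrutiny.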
Timeline red flags: solutions promising results in under 6 weeks are demos masquerading as production systems. Solutions requiring over 6 months are consulting engagements masquerading as technology implementations. Legitimate AI automation takes 6-16 weeks from kickoff to production deployment.
Portfolio Company AI Readiness Assessment
Data audit reveals implementation feasibility before vendor selection. Most AI failures trace to data problems discovered after contracts are signed. The audit answers three questions: what systems exist, what data is accessible, and what percentage of that data is clean enough to use immediately.
Start with system inventory: ERP platform, CRM system, document storage, and communication tools. AI implementations require API access or database connectivity to these systems. If critical data lives in spreadsheets or requires manual export, factor data migration costs into project budgets.
Data accessibility differs from data existence. Your accounting system may contain ten years of invoice data, but if extracting structured records requires IT tickets and vendor approvals, implementation timelines extend accordingly. Document data access procedures and approval workflows before evaluating AI vendors.
Data cleanliness determines AI accuracy. Foundation models excel with clean, consistent inputs and struggle with data quality problems. Survey a representative sample: do customer records have consistent formatting, are product codes standardized across systems, do document naming conventions follow patterns? Plan for data cleaning as a separate workstream before AI implementation begins.
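That survey can be scripted in an afternoon. The sketch below assumes an exported CSV with hypothetical column names and a made-up product-code format; swap in the fields that matter for the company being audited.

    import pandas as pd

    sample = pd.read_csv("customer_sample.csv")  # hypothetical export from the CRM

    checks = {
        "missing email": sample["email"].isna().mean(),
        "nonstandard product code": (~sample["product_code"].astype(str)
                                      .str.fullmatch(r"[A-Z]{3}-\d{4}")).mean(),
        "unparseable created date": pd.to_datetime(sample["created_at"], errors="coerce")
                                      .isna().mean(),
    }

    for check, failure_rate in checks.items():
        print(f"{check}: {failure_rate:.1%} of sampled records")

Failure rates from a representative sample are enough to size the data-cleaning workstream before any vendor contract is signed.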
Process mapping identifies the highest-value automation targets within each portfolio company. Map workflows that consume significant employee time and follow predictable patterns. Invoice processing, contract review, customer onboarding, and inventory management typically offer strong automation candidates.
Technical infrastructure assessment determines whether existing systems can support AI integrations without major upgrades. Modern AI implementations require REST API access, webhook support, and sufficient network bandwidth for document transfers. Legacy systems may require middleware layers or system upgrades before AI integration becomes feasible.
Team capacity evaluation distinguishes between AI literacy and implementation capability. Teams need AI literacy to identify use cases and evaluate vendor claims. Implementation requires technical project management, system integration experience, and vendor relationship management. Most portfolio companies need external implementation support but should retain long-term operational control.
Budget reality separates pilot programs from production deployments. Pilots run on sample data with manual oversight. Production systems handle full transaction volumes with automated error handling, monitoring, and backup procedures. Pilot budgets typically represent 10-15% of production deployment costs.
When to Build vs. Buy vs. Partner for Portfolio AI Projects
Build proprietary AI capabilities when competitive advantage requires unique approaches that vendors cannot replicate. Proprietary models make economic sense for companies processing millions of documents with domain-specific formatting, or manufacturers with unique quality control requirements that off-the-shelf computer vision cannot address.
Manufacturing companies with custom inspection procedures often require proprietary computer vision models trained on their specific defect patterns. Financial services firms processing unique document types may need custom extraction models. SaaS platforms can differentiate through AI features that competitors cannot easily replicate.
Buy standard business process automation through established vendors. Invoice processing, customer service chatbots, and basic document classification are commodity problems with proven solutions. Salesforce, Microsoft, and specialized AI vendors offer production-ready systems that integrate with existing business software.
Accounting automation, HR onboarding, and customer support represent strong "buy" categories. Multiple vendors offer tested solutions with established ROI metrics. The technology is mature enough that customization requirements signal over-engineering rather than competitive necessity.
Partner for complex integrations requiring deep domain expertise combined with technical implementation capability. Construction companies automating project cost estimation need partners who understand both construction workflows and AI model architecture. Healthcare organizations implementing clinical decision support need AI expertise combined with regulatory compliance knowledge.
"Comprehensive AI partnerships" are a false economy: they create vendor dependency without transferring capability to internal teams. Partnerships should include knowledge transfer milestones and train internal staff to manage AI systems independently. Avoid partnerships that position the vendor as the long-term operator of AI systems.
Structure partnerships to transfer capability gradually. Phase 1: vendor implements and operates systems. Phase 2: vendor trains internal team members to handle routine maintenance and monitoring. Phase 3: internal team assumes operational control with vendor providing technical support only.
Vendor evaluation for mid-market portfolio companies prioritizes production reliability over cutting-edge features. Evaluate vendors on: reference customers with similar use cases, documented integration procedures, clear SLA definitions, and transparent pricing models that scale predictably with usage volume.
Measuring AI ROI Across Your Portfolio
Leading indicators predict AI implementation success before revenue impact appears in financial statements. System utilization rates, accuracy metrics, and exception handling volumes signal whether deployments will deliver projected ROI. Track these metrics weekly during the first 90 days of production operation.
Document processing systems should achieve 95%+ straight-through processing rates within 60 days of deployment. Customer service automation should handle 70%+ of routine inquiries without human escalation within 30 days. If leading indicators miss these benchmarks, investigate technical problems before they compound.
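Straight-through processing is simple to compute from an event log and worth wiring up before go-live. A minimal sketch, assuming a hypothetical log where each processed document records whether a person touched it or an exception fired:

    def straight_through_rate(events: list[dict]) -> float:
        """Share of documents processed end to end with no manual touch and no exception."""
        if not events:
            return 0.0
        untouched = sum(1 for e in events if not e["manual_review"] and not e["exception"])
        return untouched / len(events)

    # Hypothetical weekly log entries
    week = [
        {"doc_id": "INV-101", "manual_review": False, "exception": False},
        {"doc_id": "INV-102", "manual_review": True,  "exception": False},
        {"doc_id": "INV-103", "manual_review": False, "exception": False},
    ]
    print(f"{straight_through_rate(week):.0%}")  # 67%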
Lagging indicators measure actual business impact through cost reduction, revenue increase, or capacity expansion. Cost reduction shows up in labor hours, error correction time, and process cycle time. Revenue impact appears through faster customer onboarding, improved sales conversion, or expanded service capacity.
Manufacturing companies measure AI ROI through defect detection rates, inspection time reduction, and quality control cost savings. Service companies track customer service resolution time, sales process automation, and administrative cost reduction.
Industry-specific metrics matter because AI implementations solve different problems across sectors. Construction firms measure project estimation accuracy and change order processing time. Healthcare organizations track patient scheduling efficiency and clinical documentation time.
The measurement infrastructure problem requires investment before AI deployment begins. You cannot manage what you do not measure, but most companies lack baseline metrics for processes they plan to automate. Install measurement systems 30-60 days before AI deployment to establish accurate baseline performance.
Quarterly board reporting templates standardize AI investment tracking across portfolio companies. Report utilization metrics, accuracy benchmarks, cost reduction achieved, and identified expansion opportunities. Include specific examples of successful automations and lessons learned from implementation challenges.
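One way to keep that reporting consistent across companies is a shared record shape. The fields below are illustrative rather than a prescribed template.

    from dataclasses import dataclass, field

    @dataclass
    class QuarterlyAIReport:
        """Illustrative per-company record behind a quarterly board slide."""
        company: str
        quarter: str
        utilization_pct: float          # actual volume vs. projected volume
        straight_through_pct: float     # accuracy benchmark for document workflows
        cost_reduction_usd: float       # realized savings this quarter
        expansion_opportunities: list[str] = field(default_factory=list)
        lessons_learned: list[str] = field(default_factory=list)

    report = QuarterlyAIReport(
        company="ExampleCo", quarter="Q2 2026",
        utilization_pct=78.0, straight_through_pct=96.5, cost_reduction_usd=310_000,
        expansion_opportunities=["extend invoice automation to AP approvals"],
        lessons_learned=["ERP export needed a cleanup pass before go-live"],
    )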
Kill underperforming AI projects within 90 days of production deployment. Warning signs include: utilization rates below 50% of projected levels, accuracy requiring constant manual correction, or user resistance preventing adoption. Early project termination preserves capital for more promising opportunities.
Scale successful pilots across similar portfolio companies through technology and process standardization. Document integration procedures, train technical teams, and establish vendor relationships that support multi-company deployments. Successful AI implementations in one portfolio company often apply to others in similar industries.
Building AI Capability Across Your Portfolio
The talent acquisition challenge requires distinguishing between AI literacy and implementation expertise. AI literacy means understanding capabilities, limitations, and business applications of current AI technology. Implementation expertise means architecting systems, managing vendor relationships, and operating AI infrastructure in production environments.
Operating partners need AI literacy to evaluate investment opportunities and oversee implementations. Portfolio company teams need both AI literacy to identify use cases and implementation expertise to deploy systems successfully. Most companies require external support for initial implementations but should build internal capability for ongoing operation and expansion.
Cross-portfolio learning accelerates AI adoption by sharing successful patterns between portfolio companies. Quarterly knowledge sharing sessions allow technical teams to discuss integration approaches, vendor experiences, and operational lessons learned. Document standard operating procedures for common AI implementations.
Successful invoice processing automation at one portfolio company provides a template for similar implementations across the portfolio. Share integration code, vendor selection criteria, and change management approaches. Avoid reinventing solved problems across multiple companies.
Technology standardization creates economies of scale through volume discounts and shared expertise, but limits flexibility for company-specific requirements. Standardize on foundational platforms—Anthropic Claude for language processing, established computer vision APIs for image analysis—while allowing customization for industry-specific needs.
The benefits of standardization include: volume pricing discounts, shared technical expertise across the portfolio, and faster implementation timelines for subsequent companies. The risks include: reduced flexibility for unique requirements and vendor dependency across multiple investments.
Exit preparation requires demonstrating systematic AI capabilities rather than point solutions. Buyers value portfolio companies with AI infrastructure that supports business growth, not companies dependent on specific vendor relationships. Document AI system architecture, operational procedures, and expansion roadmaps.
The sustainability question addresses whether AI systems become technical debt over time. Sustainable AI implementations use standard protocols, maintain clear documentation, and operate independently of specific vendor dependencies. Avoid custom solutions that require ongoing consulting relationships to maintain.
Ready to evaluate AI investments across your portfolio with a systematic approach? Reach out to discuss how Tenon's framework applies to your specific portfolio companies and industry focus.