In our recent webinar, hosted in partnership with Datasite, we explored how private equity firms are using agentic AI in their deal-sourcing workflows. Here are the key talking points:
Origination remains central to private equity. Traditional deal flow has typically been intermediated, with off-market opportunities heavily reliant on relationships, in-person meetings, and timing. Against that backdrop, agentic AI is being positioned as a way to expand sourcing capacity by running autonomous or semi-autonomous workflows in the background, while keeping humans in the approval loop.
AI adoption among GPs falls into distinct maturity levels.
At a baseline level, many firms use AI as a productivity layer. Common uses include drafting memos, summarising diligence, and cleaning up decks. These uses improve efficiency but do not fundamentally change the investment process.
A smaller but growing group embeds AI into specific workflows. Examples include thematic sourcing, diligence tracking, and portfolio KPI monitoring. These use cases are narrower but repeatable and begin to change how work gets done.
A leading group, however, is moving toward agentic systems that continuously monitor, flag issues, and trigger actions, with humans still approving key steps. This stage is described as process-level differentiation rather than simple efficiency gains. AI may influence underwriting and investment committee preparation, but it is not described as fully autonomous decision-making.
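The "monitor, flag, approve, act" pattern can be sketched in a few lines. This is a minimal illustration, not any firm's actual system: the `Signal` type, severity thresholds, and action names are all hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop agentic monitor.
# Signal, monitor(), and the action strings are illustrative only.

@dataclass
class Signal:
    company: str
    kind: str      # e.g. "ownership_change", "kpi_drift"
    severity: int  # 1 (low) to 3 (high)

def monitor(signals, approve):
    """Flag material signals; trigger actions only for those a human approves."""
    actions = []
    for s in signals:
        if s.severity >= 2:      # the agent flags autonomously
            if approve(s):       # a human stays in the approval loop
                actions.append(f"open_diligence:{s.company}")
    return actions

signals = [
    Signal("Acme Robotics", "ownership_change", 3),
    Signal("Beta Foods", "kpi_drift", 1),
    Signal("Gamma Health", "kpi_drift", 2),
]

# A stand-in approval policy: a partner approves only high-severity flags.
print(monitor(signals, approve=lambda s: s.severity == 3))
# → ['open_diligence:Acme Robotics']
```

The point of the structure is that the autonomy sits in the monitoring and flagging, while the `approve` callback keeps the consequential step with a person.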
In sourcing specifically, AI usage is already widespread in at least some capacity, with roughly 30–40% of screened firms using some form of sourcing AI.
What an agentic sourcing system looks like in practice
One example of an internal system is a deal sourcing platform built over a long period and used to provide direct, non-intermediated access to the market. In this case, the system drives the majority of deal flow—up to 80–85%. Opportunities sourced outside the system are still entered into it to strengthen a reinforcement loop.
"Everyone is going to build and buy, and the question is, what are you going to build and what are you going to buy?" – Henry Lindemann, Blueflame
Consistent usage by everyone in the firm and disciplined capture of information are essential operating requirements for such a system to work. Over time, the system can learn from partner behaviour.
The system is also described as a way to scale without relying on large teams of juniors for sourcing. Partners can access a large set of companies directly, while juniors focus on analyst work on specific deals rather than volume-based sourcing.
Data, internal signals, and adoption as core enablers
Concerns raised about agentic AI are consistent with broader AI concerns, specifically garbage in, garbage out; unclear sources of truth; and difficulty making processes repeatable and scalable. Accuracy and hallucinations are recurring issues, especially when users extrapolate from satisfying general-purpose chat experiences to high-stakes workflows.
A key mitigation is grounding workflows in high-quality data and clearly defined sources of truth. One approach described connects systems to external private company databases and internal CRMs, then builds workflows on top of that foundation.
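The grounding idea can be made concrete with a small sketch. The merged store below stands in for an external private-company database plus an internal CRM; all identifiers and field names are illustrative assumptions, not a real vendor API.

```python
# Hedged sketch: answering only from defined sources of truth.
# external_db / internal_crm and their fields are hypothetical.

external_db = {"acme": {"revenue_eur_m": 42, "sector": "robotics"}}
internal_crm = {"acme": {"last_contact": "2024-11-02", "owner": "JD"}}

def lookup(company_id):
    """Answer only from the designated sources; never fabricate a value."""
    record = {}
    for source_name, source in (("db", external_db), ("crm", internal_crm)):
        if company_id in source:
            # Tag each field with its provenance so every downstream
            # workflow value has a traceable source of truth.
            record.update({k: (v, source_name)
                           for k, v in source[company_id].items()})
    return record or None  # an explicit "unknown" beats a hallucinated answer

print(lookup("acme"))
print(lookup("zeta"))  # → None: no source of truth, no answer
```

The design choice worth noting is the final line: returning `None` when no source covers a query is what distinguishes a grounded workflow from a model that extrapolates, which is exactly the hallucination risk raised above.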
Capturing internal decision history
Beyond external data, internal signals are described as critical. A firm that has stored years of internal decisions in a single place can use that history to build and refine its sourcing and evaluation logic. This is framed as strong data management—often associated with CRM or ECM discipline—rather than AI itself.
Adoption and user experience
Adoption is described as one of the hardest problems. Without consistent daily usage, a system risks becoming “just a search engine.” Strong adoption can be driven by leadership expectations and disciplined, methodical use.
User experience also matters. Even if a system is functionally strong, investors may not use it without a workflow and interface that fits how they already work. Different personas are described, ranging from developers using APIs, to “agent builders,” to senior users who prefer minimal training, to users who primarily operate through email. Higher adoption is associated with embedding the technology into existing habits rather than trying to change behaviour.
Build vs buy: how firms are approaching implementation
The build-versus-buy decision is described as firm-specific. Some organizations with more sophisticated data and monitoring workflows may lean toward building internal agentic use cases, starting with data ingestion and automation such as OCR to reduce manual entry and tagging. Others may benefit from an external product that provides an “80% layer” that is consistent across firms, while still allowing tailoring of agentic workflows on top.
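An ingestion-and-tagging starting point of the kind described might look like the sketch below. The `ocr()` step is a stub standing in for a real OCR engine (such as Tesseract), and every name and field here is an illustrative assumption; the point is the downstream normalisation and auto-tagging that replace manual entry.

```python
import re

# Illustrative data-ingestion sketch. In practice ocr() would call a
# real OCR engine; it is stubbed here so the normalisation and
# auto-tagging logic stays runnable.

def ocr(document_bytes):
    # Stub standing in for a real OCR call.
    return "ACME  Robotics\nRevenue: EUR 42m\nSector: robotics"

def ingest(document_bytes):
    """Turn raw document text into a clean, pre-tagged CRM record."""
    text = ocr(document_bytes)
    name = re.sub(r"\s+", " ", text.splitlines()[0]).title()
    fields = dict(
        line.split(":", 1) for line in text.splitlines() if ":" in line
    )
    return {
        "company": name,
        "revenue": fields.get("Revenue", "").strip(),
        "tags": [fields.get("Sector", "").strip()],  # auto-tag, no manual entry
    }

print(ingest(b""))
# → {'company': 'Acme Robotics', 'revenue': 'EUR 42m', 'tags': ['robotics']}
```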
The static distinction between building and buying is described as collapsing. With accessible foundation models, investment professionals can increasingly build components themselves, while still relying on vendors for standardized layers. The practical question becomes where to build and where to buy.
Building a differentiated internal system is described as a major investment, including full-time software teams and ongoing maintenance to stay current with data sets. For firms without a strong engineering culture, using off-the-shelf software and customizing on top is presented as a pragmatic path, especially given that many capabilities that once required custom development now exist in the market.