The Artificial Fiduciary: How Black Box AI Models Are Compromising Fiduciary Duties Within Private Equity
July 18, 2025

Artificial intelligence is transforming the principal operations of private equity at a pace that challenges the existing framework and regulations of securities law. Fund managers increasingly leverage AI to build predictive models and automate investment decisions, including deal sourcing, financial modeling, risk analysis, and operational benchmarking, reducing the degree of manual analysis and oversight in the process. The rapid development of machine learning has enabled groundbreaking innovation, making AI more prevalent in the analysis and execution of private equity strategies than ever before. Nevertheless, the benefits of AI come at a price: novel forms of human-machine interaction raise issues that the existing regulatory infrastructure of the financial markets was not designed to address.
At the core of securities regulation lie the fiduciary duties owed by private fund advisers to their investors.[1] Today, AI cannot fulfill these fiduciary duties without human oversight. Pursuant to the Investment Advisers Act of 1940 (the “IAA”), private fund advisers have a duty of care, which obligates advisers to provide investment advice that is in the client’s best interest.[2] Furthermore, the IAA mandates a duty of loyalty, requiring advisers to disclose all material facts and to eliminate or disclose conflicts of interest.[3] In private equity, these duties also encompass performing adequate diligence, monitoring investments, and aligning decisions with the fund’s strategy and governing documents.
Conflicts arise when AI-driven systems make investment decisions based on “black box” models. “Black box” AI models rely on algorithms whose internal logic is not visible to, or fully understood by, the user.[4] AI developers intentionally conceal the internal operations of these models to protect intellectual property.[5] This lack of transparency raises significant concerns, as private fund advisers are unable to comprehend, verify, or meaningfully rely on the AI’s decision-making methods.[6] As such, AI models may compromise the fiduciary duties owed to investors, thus violating securities regulations implemented by the Securities and Exchange Commission.[7]
Former Securities and Exchange Commission Chair Gary Gensler stated at the National Press Club: “If the optimization function in the AI system is taking the interest of the platform into consideration as well as the interest of the customer, this can lead to conflicts of interest.”[8] Gensler continued: “In finance, conflicts may arise to the extent that advisers or brokers are optimizing to place their interests ahead of their investors’ interests. That’s why I’ve asked SEC staff to make recommendations for rule proposals for the Commission’s consideration regarding how best to address such potential conflicts across the range of investor interactions.”[9] Gensler’s remarks suggest that AI-driven conflicts of interest could become widespread and that revisions to securities regulations may be necessary to uphold fiduciary ethics and protect investors.[10]
The intersection between computer science and law may offer a solution: Fiduciary AI capable of legal compliance could fill the void left by modern black box models.[11] While still at the conceptual stage, Fiduciary AI refers to AI specifically designed and audited to facilitate fiduciary duties with respect to its operator.[12] The purpose of Fiduciary AI is to align model outputs with the legal standards of care and loyalty.[13] Such AI must be designed to identify and assess the best interests of its principals while maintaining loyalty, so that the AI acts in accordance with those interests.[14] It must also be designed to identify multiple principals and determine how their interests can be aggregated into a single objective function.[15] Nonetheless, Fiduciary AI may not be a complete solution: where principals’ interests conflict, no single aggregation can fully serve each principal’s best interest, limiting the effectiveness of the approach.[16] AI models, no matter how advanced, are unlikely to replace the human judgment, accountability, or discretion required to fulfill fiduciary obligations under the IAA. Investment advisers remain responsible for all decisions and must be able to explain, document, and stand behind the rationale for those decisions.
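For readers less familiar with the computer science side, the aggregation problem described above can be illustrated with a brief conceptual sketch. The code below is purely hypothetical and is not drawn from the cited paper; all names and scores are invented for illustration. It shows a weighted-sum objective function over two principals whose interests conflict, and why such an aggregation can fail to distinguish options that sharply favor one principal from an option that treats both evenly.

```python
# Hypothetical sketch: collapsing several principals' interests into a
# single objective function via a weighted sum. All deal names, principal
# names, and scores are illustrative, not drawn from any real model.

# Each principal scores candidate deals (higher = better aligned with
# that principal's interest). Note the sharp conflict on deal_1/deal_2.
principal_scores = {
    "limited_partner_a": {"deal_1": 0.9, "deal_2": 0.2, "deal_3": 0.5},
    "limited_partner_b": {"deal_1": 0.1, "deal_2": 0.8, "deal_3": 0.5},
}

def aggregate(scores, weights):
    """Collapse per-principal scores into one objective via a weighted sum."""
    deals = next(iter(scores.values())).keys()
    return {
        deal: sum(weights[p] * scores[p][deal] for p in scores)
        for deal in deals
    }

weights = {"limited_partner_a": 0.5, "limited_partner_b": 0.5}
objective = aggregate(principal_scores, weights)

# With equal weights, every deal aggregates to exactly 0.5: the single
# objective function cannot tell a deal that strongly favors one principal
# apart from one that treats both evenly -- a toy version of the
# conflicting-fiduciary problem described in the text.
print(objective)
```

The takeaway is not that aggregation is impossible, but that the weighted objective discards exactly the information a fiduciary must weigh: whose interest is being sacrificed, and by how much.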
The AI revolution has made tremendous strides within the past decade, ushering in a new age of technological advancement within private equity. Although AI has been constructively adopted into the dynamics of private equity, it remains a tool, not a decision-maker. For all that AI can accomplish, it still lacks the fundamental qualities of human judgment and, in that respect, remains inferior to human decision-making.
This advisory is a general overview of the issues discussed above and is not intended as legal advice. If you have any questions, please feel free to contact Joseph Spina at (516) 296-9120 or via email at JSpina@cullenllp.com, Ariel Ronneburger at (516) 296-9182 or via email at aronneburger@cullenllp.com, or Christian Lastihenos at (516) 296-9108 or via email at Clastihenos@cullenllp.com.
This advisory provides a brief overview of the most significant changes in the law and does not constitute legal advice. Nothing herein creates an attorney-client relationship between the sender and recipient.
Footnotes
[1] See Murphy v. Schaible, 108 F.4th 1257, 1266 (10th Cir. 2024).
[2] 1 Securities Arb. Proc. Manual § 3-11 (2025).
[3] Id.
[4] Matthew Kosinski, What is black box AI?, IBM (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai.
[5] Id.
[6] Navigating the Risks of Using Artificial Intelligence for Investment Decisions in Private Equity, RKON (Aug. 15, 2024), https://www.rkon.com/articles/artificial-intelligence-in-private-equity/.
[7] See Amy Caiazza, The Use of Artificial Intelligence by Investment Advisors: Considerations based on an Advisor’s Fiduciary Duties, Wilson Sonsini (May 28, 2020), https://www.wsgr.com/en/insights/the-use-of-artificial-intelligence-by-investment-advisers-considerations-based-on-an-advisers-fiduciary-duties.html.
[8] John Sullivan, Could AI Cause a 401(k) Fiduciary Breach? SEC’s Gensler Says Yes, National Association of Plan Advisors (July 17, 2023), https://www.napa-net.org/news/2023/7/could-ai-cause-401k-fiduciary-breach-secs-gensler-says-yes/.
[9] Id.
[10] Id.
[11] Sebastian Benthall & David Shekman, Designing Fiduciary Artificial Intelligence, arXiv (July 27, 2023), https://arxiv.org/abs/2308.02435.

[12] Sebastian Benthall & David Shekman, Designing Fiduciary Artificial Intelligence, arXiv 1 (July 27, 2023), https://arxiv.org/pdf/2308.02435.
[13] Id.
[14] Id.
[15] Id. at 9.
[16] Id.