AI Has a Trust Problem. Here's Why CPAs Are the Unexpected Solution

Artificial intelligence is becoming central to how companies operate, but it also brings a major trust issue: can businesses count on AI to be fair, reliable, and safe? CPAs, known for their objectivity and high professional standards, are the unexpected heroes working to make sure we can trust these complex systems.

The Problem: Can We Trust AI?

Businesses use AI for everything from hiring people to predicting market shifts and spotting security risks. While this saves time and money, there is a hidden problem: nobody can easily verify that an AI system is accurate or unbiased. Developers often treat their models as proprietary black boxes, leaving companies unsure whether the AI they rely on is reliable and fair.

Why CPAs Are the Solution

CPAs, or Certified Public Accountants, are trained to be objective and independent and to act in the public interest. These are values AI systems don't have on their own. CPAs are trusted because they follow strict professional rules and put integrity first. Those same skills are now needed to review and confirm that AI systems work the way they should.

It’s Not Really an “AI Audit” Yet

Experts say what CPAs do here isn't really an "AI audit" in the traditional sense. Instead, they provide "assurance" over AI: they examine a system and assess whether it can be trusted. Because the word "assurance" carries different legal meanings in different jurisdictions, some practitioners call these engagements "evaluation reports." The shifting terminology shows the field is still young and working out how to label these new services.

CPAs Already Have a Playbook: SOC Reports

The work CPAs will do for AI resembles what they already do when examining companies' systems through SOC reports. SOC stands for System and Organization Controls, and these reports evaluate how secure, private, and reliable an organization's systems are. SOC 2 and SOC 3 reports already help businesses demonstrate to customers and partners that their systems meet established criteria, and the same approach can be extended to AI systems.

Not All AI Systems Are the Same

Every AI system carries different risks. An AI that books vacations, for example, is far less risky than one that assists doctors during surgery. CPAs must use professional judgment, along with recognized frameworks such as those from NIST or ISO, to decide what needs to be checked for each system. The more critical the system, the stricter the evaluation should be.

Europe Is Leading the Way

Right now, Europe is ahead in regulating AI. The EU Artificial Intelligence Act bans certain uses of AI outright and imposes strict requirements on high-risk systems. Experts expect these rules to become a de facto global standard because they help prevent AI from amplifying bias or causing harm at scale.

Trust as a Business Advantage

Proving that your company's AI is fair and trustworthy is not just prudent risk management; it is also smart business. Companies that can show their AI works properly stand out from competitors, adopt new technology faster, and earn greater public confidence.

Conclusion: CPAs Secure the Future of AI

AI's trust problem puts businesses and society at risk. CPAs, with their long record of integrity and independence, are well placed to evaluate AI systems and provide assurance that they are reliable. As AI becomes even more central to business, CPAs will be the professionals who help keep the future of technology secure and trustworthy.


Based on the Journal of Accountancy article “A new frontier: CPAs as AI system evaluators” by Jamie J. Roessner.