How Baker Tilly developed a framework for evaluating the responsible use of AI
Accountability in artificial intelligence (AI) is crucial: it directly impacts customer trust, brand reputation, legal liability and ethical considerations. And with AI-powered systems handling everything from customer interactions to strategic decision-making, accountability cannot be an afterthought.
So, when a large service provider using AI models for predictive analysis came to us for assistance in verifying those models, our team set about developing a framework to evaluate the responsible use of AI.
"The framework we developed can be used to evaluate any AI use case - be it custom-developed models or those embedded in other products."
Transparent technology
Our member firms in Canada, the US and the Netherlands collaborated to develop the framework.
Encompassing currently available laws and regulations, as well as best practices around AI, the framework allows our team to evaluate AI systems and report findings across five key categories.
It can be used to evaluate any AI use case - be it custom-developed models or those embedded in other products.
Uncovering outliers
The client was using predictive analysis of geo-location-based tagging to support pipeline maintenance for energy companies. Our team uncovered specific outliers in the client's data set with the potential to cause the model to miscalculate over time.
Catching these outliers early saved the client from erroneous analysis and a significant loss of revenue, and lowered the risk of reputational damage.
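As a purely illustrative sketch - not the framework's actual method, and with all data and thresholds hypothetical - one common way to surface outliers that could skew a predictive model over time is a robust statistical check, such as the median/MAD modified z-score:

```python
def mad_outliers(values, threshold=3.5):
    """Return indices of values whose modified z-score exceeds threshold.

    The modified z-score uses the median and the median absolute
    deviation (MAD), which are far less sensitive to the outliers
    themselves than the mean and standard deviation.
    """
    n = len(values)
    s = sorted(values)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    abs_dev = sorted(abs(x - median) for x in values)
    mad = (abs_dev[n // 2] + abs_dev[(n - 1) // 2]) / 2
    if mad == 0:
        return []  # no spread at all; nothing can be flagged this way
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [i for i, x in enumerate(values)
            if 0.6745 * abs(x - median) / mad > threshold]

# Hypothetical wall-thickness readings (mm) along a pipeline segment;
# one mis-tagged reading stands out from the rest.
readings = [12.1, 12.0, 11.9, 12.2, 12.0, 11.8, 12.1, 3.4, 12.0, 11.9]
print(mad_outliers(readings))  # -> [7]
```

A median-based check is a deliberate design choice here: with a small sample, a single extreme value inflates the ordinary mean and standard deviation enough to partially mask itself, whereas the median and MAD remain stable.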
The framework also helped demonstrate the client’s commitment to responsible use of AI, giving them a competitive advantage in the market and building confidence among their stakeholders.