Leesburg, VA — Mosaic Data Science has joined EBG Advisors, the affiliate consultancy of national law firm Epstein Becker Green, to support the advisory network’s nationwide algorithmic bias auditing and risk management services through Mosaic’s award-winning artificial intelligence (AI) capabilities.  

Developed in alignment with the National Institute of Standards and Technology (NIST) AI Risk Management Framework, EBG Advisors' algorithmic bias auditing and risk management initiative unites data and social scientists to identify and advise on processes that can help reduce bias in AI without being cumbersome or blocking progress.

Mosaic is tasked with critically evaluating the AI lifecycle for potential risk and abuse, helping Epstein Becker Green and EBG Advisors clients understand how AI-based technology can be applied to their businesses while identifying possible harmful outcomes.

“Integrated quality assurance and model monitoring provide peace of mind and empirical evidence that deployed models or algorithms continue to be in alignment with relevant regulations and organizational values, while still empowering data scientists to innovate and increase the value that models provide.”

Michael Shumpert, Managing Director of Mosaic Data Science

To audit for algorithmic bias within AI models, Mosaic analyzes model or algorithm inputs and outputs under various scenarios, as well as candidate data sets, the code used to train the models, and the trained models themselves. Mosaic's data scientists do this in the context of existing or proposed regulations or legislation that dictate relevant ethical standards for an algorithm and its use cases, such as non-discrimination based on race or gender.
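The announcement does not disclose Mosaic's audit tooling, but a minimal sketch of one common input/output analysis, a disparate impact ratio computed across a protected attribute, might look like the following. The data, column names, and the `disparate_impact_ratio` helper are hypothetical, and the 0.8 threshold reflects the EEOC's widely used four-fifths rule of thumb rather than any specific standard cited in the release.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest favorable-outcome rate across groups.

    Values below 0.8 are commonly flagged under the EEOC "four-fifths" rule.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical audit sample: model decisions (1 = favorable) by applicant group.
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

ratio = disparate_impact_ratio(audit, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 here, well below the 0.8 threshold
```

In practice such a check would run against a model's decisions on real or simulated applicant pools, with the relevant protected attributes and outcome definitions dictated by the applicable regulations.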

Next, Mosaic develops analyses and experiments to verify compliance with the identified standards. If the statistical evidence suggests that an algorithm falls short, Mosaic helps identify adjustments that bring it into compliance. Every model moved to production is evaluated first, preventing problematic models from being released. Post-deployment monitoring then ensures that issues with a model, or changes in the data fed into it, are identified and handled without potentially costly delays.
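The release likewise does not name a monitoring stack; as an illustration of the post-deployment checks described above, the sketch below uses a two-sample Kolmogorov-Smirnov test to flag drift in a single model input. The feature values and the alerting threshold are assumptions made for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical data: a feature's distribution at deployment vs. recent production traffic.
rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # captured at deployment
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # same feature, now shifted

# Two-sample KS test: a small p-value indicates the input distribution has changed.
stat, p_value = ks_2samp(reference, production)

ALERT_P = 0.01  # illustrative alerting threshold
if p_value < ALERT_P:
    print(f"Drift alert: KS statistic={stat:.3f}, p={p_value:.2e}; review the model.")
else:
    print("No significant input drift detected.")
```

A production monitor would typically run checks like this on a schedule across many features and on the model's outputs, escalating alerts for human review before drift can cause the kinds of harms regulators are watching for.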

“Federal and state regulators are actively using existing statutes and regulations to threaten industry with enforcement if models cause harm, especially in such areas as consumer products and services, employment and healthcare. Further, as new regulatory compliance standards for AI continue to emerge, those companies that have invested in their algorithmic quality assurance will be the most prepared to verify the quality of their algorithms and models and mitigate regulatory risks.”

Bradley Merrill Thompson, Member of the Firm at Epstein Becker Green and Chief Data Scientist at EBG Advisors