Model audits are a way for lone data scientists or regulated businesses to have their data science products checked by an independent, experienced third party.
Why do you need a model audit?
Model audits are particularly useful in a few scenarios:
- You’re the lone data scientist, with no one able to check your work
- Your data scientist has left and you need to build up institutional knowledge about the model
- You need an independent review for regulatory or external reasons
Locke Data can review and audit developed models. Working from data provenance through to production deployment, we review code and documentation to determine the quality of a model’s build and outline important findings.
Reviewing models offers a useful learning experience, assurance of quality, and an independent assessment for key stakeholders.
What’s involved in a model audit?
Model audits typically involve assessing:
- data provenance
- sampling techniques
- feature engineering
- model relevance to the business problem
- code quality
- code reproducibility
- model evaluation metrics
- deployment methodology
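As an illustration of the model evaluation step, the sketch below (in Python here, though an audit would use whichever language the model was built in) compares a model's predictions against held-out labels. The labels, predictions, and metric choices are hypothetical, purely for illustration:

```python
# Hypothetical illustration of checking model evaluation metrics:
# recompute headline numbers from held-out labels and predictions.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def confusion_counts(y_true, y_pred):
    """Counts of true/false positives and negatives for binary labels."""
    counts = {"tp": 0, "fp": 0, "tn": 0, "fn": 0}
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            counts["tp"] += 1
        elif t == 0 and p == 1:
            counts["fp"] += 1
        elif t == 0 and p == 0:
            counts["tn"] += 1
        else:
            counts["fn"] += 1
    return counts

# Made-up held-out labels and model predictions for the example.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(accuracy(y_true, y_pred))          # → 0.75
print(confusion_counts(y_true, y_pred))  # → {'tp': 3, 'fp': 1, 'tn': 3, 'fn': 1}
```

Recomputing reported metrics independently like this, rather than trusting the numbers in a model's documentation, is one simple way an auditor builds confidence in an evaluation.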
Reviewing these aspects of a model typically requires interviews, access to documentation, and access to the model’s code. Our primary analytical language in-house is R, but we’re able to assess models developed in a range of languages.