{"id":126403,"date":"2025-04-03T16:18:53","date_gmt":"2025-04-03T16:18:53","guid":{"rendered":"http:\/\/cryptospotters.net\/?p=126403"},"modified":"2025-04-03T16:18:53","modified_gmt":"2025-04-03T16:18:53","slug":"how-zero-knowledge-proofs-can-make-ai-fairer","status":"publish","type":"post","link":"http:\/\/cryptospotters.net\/?p=126403","title":{"rendered":"How zero-knowledge proofs can make AI fairer"},"content":{"rendered":"<p>Source: Cointelegraph.com News. Opinion by: Rob Viglione, co-founder and CEO of Horizen Labs<br \/>\nCan you trust your AI to be unbiased? A recent research paper suggests it\u2019s a little more complicated. Unfortunately, bias isn\u2019t just a bug \u2014 it\u2019s a persistent feature without proper cryptographic guardrails.<br \/>\nA September 2024 study from Imperial College London shows how zero-knowledge proofs (ZKPs) can help companies verify that their machine learning (ML) models treat all demographic groups equally while still keeping model details and user data private.\u00a0<br \/>\nZero-knowledge proofs are cryptographic methods that enable one party to prove to another that a statement is true without revealing any additional information beyond the statement\u2019s validity. When defining \u201cfairness,\u201d however, we open up a whole new can of worms.\u00a0<br \/>\nMachine learning bias<br \/>\nWith machine learning models, bias manifests in dramatically different ways. It can cause a credit scoring service to rate a person differently based on their friends\u2019 and communities\u2019 credit scores, which can be inherently discriminatory. It can also prompt AI image generators to depict the Pope and ancient Greeks as people of different races, as Google\u2019s AI tool Gemini infamously did last year.\u00a0<br \/>\nSpotting an unfair ML model in the wild is easy. If the model is depriving people of loans or credit because of who their friends are, that\u2019s discrimination. 
If it\u2019s revising history or treating specific demographics differently to overcorrect in the name of equity, that\u2019s also discrimination. Both scenarios undermine trust in these systems.<br \/>\nConsider a bank using an ML model for loan approvals. A ZKP could prove that the model isn\u2019t biased against any demographic without exposing sensitive customer data or proprietary model details. With ZK and ML, banks could prove they\u2019re not systematically discriminating against a racial group. That proof would be real-time and continuous, unlike today\u2019s inefficient government audits of private data.\u00a0<br \/>\nThe ideal ML model? One that doesn\u2019t revise history or treat people differently based on their background. AI must adhere to anti-discrimination laws like the US Civil Rights Act of 1964. The problem lies in baking that into AI and making it verifiable.\u00a0<br \/>\nZKPs offer a technical pathway to guarantee this adherence.<br \/>\nAI is biased (but it doesn\u2019t have to be)<br \/>\nWhen dealing with machine learning, we need to be sure that any attestations of fairness keep the underlying ML models and training data confidential. They need to protect intellectual property and users\u2019 privacy while giving users enough visibility to know that the model is not discriminatory.\u00a0<br \/>\nNot an easy task. ZKPs offer a verifiable solution.\u00a0<br \/>\nZKML (zero-knowledge machine learning) is how we use zero-knowledge proofs to verify that an ML model is what it says on the box. ZKML combines zero-knowledge cryptography with machine learning to create systems that can verify AI properties without exposing the underlying models or data. 
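To make the underlying proof idea concrete, here is a minimal sketch of a classic interactive zero-knowledge proof, a Schnorr-style protocol in which a prover demonstrates knowledge of a secret exponent x with g^x mod p = y without revealing x. This toy (with deliberately tiny, insecure parameters) illustrates the general "prove without revealing" mechanism; it is not the ZKML frameworks the article describes.

```python
import secrets

# Toy Schnorr-style interactive zero-knowledge proof of knowledge of a
# discrete log. Tiny illustrative parameters only -- NOT secure.
p, q, g = 23, 11, 2          # g = 2 generates a subgroup of prime order q = 11 mod 23

x = 7                        # prover's secret
y = pow(g, x, p)             # public statement: "I know x such that g^x = y mod p"

# Commit: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Challenge: verifier sends a random c.
c = secrets.randbelow(q)

# Response: prover sends s = r + c*x (mod q). Because r is uniformly
# random, s reveals nothing about x on its own.
s = (r + c * x) % q

# Verify: g^s must equal t * y^c (mod p); this holds iff the prover
# really knew x, yet the transcript (t, c, s) leaks nothing else.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

ZKML systems generalize this pattern: the "statement" becomes a claim about an entire model's behavior, proven without exposing the weights or training data.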
We can also take that concept and use ZKPs to identify ML models that treat everyone equally and fairly.\u00a0<br \/>\nPreviously, using ZKPs to prove AI fairness was of limited value because the proofs could only cover one phase of the ML pipeline. That made it possible for dishonest model providers to construct data sets that satisfied the fairness requirements even when the model did not. The ZKPs also imposed unrealistic computational demands and long wait times to produce proofs of fairness.<br \/>\nIn recent months, ZK frameworks have made it possible to scale ZKPs to determine the end-to-end fairness of models with tens of millions of parameters, and to do so with provable security.\u00a0<br \/>\nThe trillion-dollar question: How do we measure whether an AI is fair?<br \/>\nLet\u2019s break down three of the most common group fairness definitions: demographic parity, equality of opportunity and predictive equality.\u00a0<br \/>\nDemographic parity means that the probability of a specific prediction is the same across different groups, such as race or sex. Diversity, equity and inclusion departments often use it as a measure of how well a company\u2019s workforce reflects the demographics of the broader population. It\u2019s not the ideal fairness metric for ML models because expecting every group to have the same outcomes is unrealistic.<br \/>\nEquality of opportunity is easy for most people to understand. It gives every group the same chance of a positive outcome, assuming its members are equally qualified. 
It does not optimize for outcomes; it only requires that every demographic have the same opportunity to get a job or a home loan.\u00a0<br \/>\nLikewise, predictive equality measures whether an ML model makes predictions with the same accuracy across various demographics, so no one is penalized simply for being part of a group.\u00a0<br \/>\nIn both cases, the ML model is not putting its thumb on the scale for equity reasons; it is only ensuring that no group is discriminated against in any way. This is an eminently sensible fix.<br \/>\nFairness is becoming the standard, one way or another<br \/>\nOver the past year, the US government and other countries have issued statements and mandates around AI fairness and protecting the public from ML bias. Now, with a new administration in the US, AI fairness will likely be approached differently, returning the focus to equality of opportunity and away from equity.\u00a0<br \/>\nAs political landscapes shift, so do fairness definitions in AI, moving between equity-focused and opportunity-focused paradigms. We welcome ML models that treat everyone equally without putting thumbs on the scale. Zero-knowledge proofs can serve as an airtight way to verify that ML models are doing this without revealing private data.\u00a0<br \/>\nWhile ZKPs have faced plenty of scalability challenges over the years, the technology is finally becoming affordable for mainstream use cases. We can use ZKPs to verify training data integrity, protect privacy, and ensure the models we\u2019re using are what they say they are.\u00a0<br \/>\nAs ML models become more interwoven in our daily lives, and as our future job prospects, college admissions and mortgages come to depend on them, we could use a little more reassurance that AI treats us fairly. 
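The three fairness definitions above reduce to simple per-group rates. Here is a minimal sketch computing them on hypothetical loan-approval data (the data and function names are illustrative, not from the Imperial College study): demographic parity compares approval rates, equality of opportunity compares true-positive rates among the qualified, and predictive equality compares false-positive rates among the unqualified.

```python
# Sketch of the three group-fairness metrics on toy data (1 = approve,
# 1 = actually qualified). All names and numbers are hypothetical.

def rate(values):
    return sum(values) / len(values)

def demographic_parity(y_pred, group):
    # P(prediction = 1) per group: equal approval rates across groups.
    return {g: rate([p for p, gr in zip(y_pred, group) if gr == g])
            for g in sorted(set(group))}

def equality_of_opportunity(y_true, y_pred, group):
    # True-positive rate per group: among the qualified (y_true = 1),
    # each group should be approved equally often.
    return {g: rate([p for t, p, gr in zip(y_true, y_pred, group)
                     if gr == g and t == 1])
            for g in sorted(set(group))}

def predictive_equality(y_true, y_pred, group):
    # False-positive rate per group: among the unqualified (y_true = 0),
    # no group should be wrongly approved more often than another.
    return {g: rate([p for t, p, gr in zip(y_true, y_pred, group)
                     if gr == g and t == 0])
            for g in sorted(set(group))}

y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity(y_pred, group))              # {'A': 0.5, 'B': 0.5}
print(equality_of_opportunity(y_true, y_pred, group)) # {'A': 0.5, 'B': 0.5}
print(predictive_equality(y_true, y_pred, group))     # {'A': 0.5, 'B': 0.5}
```

A ZKML system would prove that per-group rates like these are (near-)equal for a deployed model without revealing the model weights or the individual records behind the rates.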
Whether we can all agree on the definition of fairness, however, is another question entirely.<br \/>\nThis article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author\u2019s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.<\/p>","protected":false},"excerpt":{"rendered":"<p>Source: Cointelegraph.com News. Opinion by: Rob Viglione, co-founder and CEO of Horizen Labs Can you trust your AI to be unbiased? A recent research paper suggests it\u2019s a little more complicated.&hellip; <\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[5],"tags":[],"_links":{"self":[{"href":"http:\/\/cryptospotters.net\/index.php?rest_route=\/wp\/v2\/posts\/126403"}],"collection":[{"href":"http:\/\/cryptospotters.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/cryptospotters.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"http:\/\/cryptospotters.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=126403"}],"version-history":[{"count":0,"href":"http:\/\/cryptospotters.net\/index.php?rest_route=\/wp\/v2\/posts\/126403\/revisions"}],"wp:attachment":[{"href":"http:\/\/cryptospotters.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=126403"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/cryptospotters.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=126403"},{"taxonomy":"post_tag","embeddable":true
,"href":"http:\/\/cryptospotters.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=126403"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}