IBM Researchers Propose ‘Factsheets’ for AI Transparency

23 Aug 2018 by Technology

Artificial intelligence is getting smarter day by day, and the technology has enormous potential to transform the way we live and work. Still, many of us don’t trust AI. Despite all the good work done by AI-powered systems, customers remain sceptical about their safety and transparency. To address these concerns, IBM proposes transparency documentation for AI services.

According to a paper published by IBM Research, the lack of standard practices for documenting how an AI service was created, tested, trained, deployed, and evaluated is one reason for the trust deficit. IBM believes that the concept of factsheets for AI services can solve this problem.

The IBM researchers believe that trusted AI technology rests on four pillars: fairness, robustness, explainability, and lineage. “To build AI systems that are truly trusted, we need to strengthen all the pillars together. Our comprehensive research and product strategy are designed to do just that, advancing on all fronts to lift the mantle of trust into place,” Aleksandra Mojsilovic, head of AI foundations at IBM Research, said in a blog post.

However, the researchers point out that the pillars alone are not enough to build trust in AI; the information provided with an AI service must also be complete. They argue that a Supplier’s Declaration of Conformity (SDoC, or factsheet) could be the solution. “Like nutrition labels for foods or information sheets for appliances, factsheets for AI services would provide information about the product’s important characteristics. Standardizing and publicizing this information is key to building trust in AI services across the industry,” Mojsilovic explained in the blog.

IBM believes that AI service developers and providers should voluntarily release SDoCs, an effective step toward ensuring the transparency of their services. If you are wondering what questions could be included in a factsheet, here are a few examples:

Were the dataset and model checked for bias? If “yes”, describe the bias policies that were checked, the bias-checking methods, and the results.

Was any bias mitigation performed on the dataset? If “yes”, describe the mitigation method.

Are the algorithm’s outputs explainable/interpretable? If “yes”, explain how explainability is achieved (e.g. a directly explainable algorithm, local explainability, explanations via examples).

Was the service tested on any additional datasets? Do they have a datasheet or data statement?
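To make the idea concrete, the sample questions above could be captured in a simple machine-readable structure. The sketch below is purely illustrative: the field names, the example values, and the `missing_fields` helper are assumptions made for this example, not IBM’s actual factsheet schema.

```python
# Hypothetical machine-readable factsheet (SDoC) for an AI service.
# All field names and values here are illustrative assumptions.
factsheet = {
    "service_name": "example-credit-scoring",  # hypothetical service
    "bias_checked": True,
    "bias_details": {
        "policies_checked": ["disparate impact"],
        "checking_methods": ["four-fifths rule"],
        "results": "no policy violations found",
    },
    "bias_mitigation": {
        "performed": True,
        "method": "reweighing of training samples",
    },
    "explainability": {
        "interpretable": True,
        "approach": "local explainability",
    },
    "additional_datasets": [
        {"name": "holdout-2018", "has_datasheet": True},
    ],
}

def missing_fields(sheet, required=("bias_checked", "explainability")):
    """Return the required factsheet fields that are absent."""
    return [field for field in required if field not in sheet]

print(missing_fields(factsheet))  # → []
```

A standardized schema of this kind is what would let factsheets be compared across providers, much as nutrition labels can be compared across food products.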

If properly implemented, the SDoC concept might resolve some of the ambiguity surrounding AI and bring greater transparency to the AI market.


