Google Cloud

The building blocks for responsible technology

Artificial intelligence (AI) is rapidly becoming an indispensable part of the way organizations operate. Companies use AI to automate business processes, gain insights through data analysis, and engage with customers and employees. While AI has clearly added significant value to businesses, we must ensure that our solutions are inclusive and reflect our company values.

AI has not always proven to be fully inclusive. Incidents have shown that failing to manage the unintended and undesired consequences of AI can lead to reputational damage and a loss of consumers’ trust. According to research by Capgemini, 62% of consumers place higher trust in a company whose AI interactions they perceive as ethical, and 59% said they would be more loyal to such a company. So how can we harness the benefits that AI has to offer, while simultaneously developing safer, more accountable products and earning and keeping consumers’ trust?

The foundation

At Google, we believe that products and design should be transparent and should work for everyone. We recognize that this area is dynamic and evolving, so we approach it with humility and a willingness to adapt as we learn over time. Our commitment and our way of working are outlined in our AI Principles, which set out our core values for implementing AI.

The structure

Building on these fundamental principles, we introduced Explainable AI: a set of tools and frameworks that help us understand how mathematical models reach their decisions and conclusions. We have also introduced the idea of Model Cards. Model cards are short documents that accompany AI-driven models and provide an overview of a model’s suggested uses, its limitations, and details of its performance evaluation procedures. We propose model cards as a step towards the responsible democratization of AI technology, increasing transparency into how well that technology works. They encourage developers to consider their impact on a diverse range of people from the start of the development process and to keep those people in mind throughout.
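To make this concrete, the snippet below is a minimal, hand-rolled sketch of the kind of information a model card typically captures. The structure and every field value are illustrative assumptions for this post, not the schema of Google’s Model Card Toolkit.

```python
# Illustrative sketch of a model card's contents; all names and numbers are
# hypothetical and chosen only to show the shape of the information.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    name: str
    overview: str
    intended_uses: List[str]
    limitations: List[str]
    # Evaluation results broken down by group, so differences in performance
    # across populations stay visible instead of being averaged away.
    evaluation_by_group: Dict[str, Dict[str, float]] = field(default_factory=dict)

card = ModelCard(
    name="toy-loan-approval-classifier",
    overview="Binary classifier that flags loan applications for human review.",
    intended_uses=["Internal triage of applications before a human decision."],
    limitations=["Not evaluated on applicants from regions outside the training data."],
    evaluation_by_group={
        "age_under_40": {"accuracy": 0.91, "false_positive_rate": 0.04},
        "age_40_and_over": {"accuracy": 0.86, "false_positive_rate": 0.09},
    },
)

print(card.name, card.evaluation_by_group["age_40_and_over"])
```

Publishing even a simple summary like this alongside a model makes its intended scope, known gaps, and per-group performance part of the conversation from day one.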


Maintenance

Besides building a strong structure using these tools, it is important to keep re-evaluating that structure. Mathematical models rely on patterns and behaviors from the past to predict the future. The current global pandemic is a prime example of an event that can disrupt and transform the way we live and work, and thereby affect the effectiveness of predictive models. Just as organizations need to reassess their business strategies, they also need to re-evaluate their mathematical models. How was the model built? What data were used? How are features weighted? What assumptions were made while building the model? The faster you can detect changing trends and adjust and test your models, the more accurately you can respond, and the safer and more accountable your products will be.
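As a simple illustration of this kind of re-evaluation, the sketch below compares the distribution a feature had at training time with recent production data and flags when the two diverge. The feature, the synthetic data, and the alerting threshold are all assumptions made up for this example.

```python
# Minimal sketch of one way to watch for feature drift in a deployed model.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Stand-ins for real data: what the model saw during training versus what it
# sees today (e.g. customer spend before and during a disruptive event).
training_spend = rng.normal(loc=100.0, scale=20.0, size=5_000)
recent_spend = rng.normal(loc=70.0, scale=35.0, size=1_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two samples
# are unlikely to come from the same distribution, i.e. the feature has drifted.
statistic, p_value = ks_2samp(training_spend, recent_spend)

DRIFT_P_VALUE = 0.01  # illustrative alerting threshold
if p_value < DRIFT_P_VALUE:
    print(f"Feature drift detected (KS={statistic:.3f}, p={p_value:.2e}); "
          "re-examine the model's data, features, and assumptions.")
else:
    print("No significant drift detected for this feature.")
```

Running a check like this on a schedule is one lightweight way to notice when the assumptions behind a model no longer hold.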

For example, you can use our What-If Tool to inspect your ML models, examine the range of values for each feature, and optimize the models for fairness. In the video below, we briefly explain how this works.
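For readers who prefer code to video, the sketch below shows roughly how the What-If Tool can be loaded inside a notebook, assuming TensorFlow and the witwidget package are installed. The example records, feature names, and toy predict function are illustrative assumptions, not a fixed recipe.

```python
# Minimal notebook sketch: load a few examples into the What-If Tool with a
# stand-in predict function. All data and feature names are hypothetical.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, income, label):
    """Pack one record into a tf.train.Example proto, as the tool expects."""
    return tf.train.Example(features=tf.train.Features(feature={
        "age": tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

examples = [make_example(34.0, 52000.0, 1), make_example(51.0, 38000.0, 0)]

def predict_fn(examples_to_score):
    # Stand-in for a real model: return [negative, positive] scores per example.
    return [[0.3, 0.7] for _ in examples_to_score]

config_builder = (WitConfigBuilder(examples)
                  .set_custom_predict_fn(predict_fn)
                  .set_label_vocab(["declined", "approved"]))
WitWidget(config_builder, height=600)  # renders the interactive tool in the notebook
```

From the rendered widget you can slice the examples by feature, edit individual records to see how predictions change, and compare performance across groups.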

Sky, one of Europe’s leading media and communications companies, started using the What-If Tool in combination with Google’s AI Platform: “Understanding how models arrive at their decisions is critical for the use of AI in our industry. We are excited to see the progress made by Google Cloud to solve this industry challenge. With tools like What-If Tool, and feature attributions in AI Platform, our data scientists can build models with confidence, and provide human-understandable explanations.” - Stefan Hoejmose, Head of Data Journeys at Sky.

The roof

The future of AI is still being built. It will be a collective effort of people from different industries, backgrounds and cultures, but with the same ambition: to make a positive impact on the world. Every industry is facing its own AI revolution and is in need of guidance. In order to develop more accountable products and to keep consumers’ trust, it is essential to keep the building blocks for trustworthy technology in mind. Google Cloud is happy to help.

If you are investing in AI, please read our guide on responsible AI practices and refer to our Inclusive ML guide. By learning together, we can build more ethical solutions and improve our society for everyone.

Sources: