Which values should we instill in artificial intelligence?
The responsible use of technology is a topic that affects not only the environment and the economy, but also society at large. Today, it is artificial intelligence that makes us reflect most deeply on the social and political ramifications of technology. AI has already become ubiquitous in many ways, yet at the same time it is generating intense debate. Is it a blessing or an imminent danger?
Clearly, the prospects of AI demand special attention, and it is important for humanity to define the principles it wants this technology to be built on. Governments, IT vendors and consumers are currently developing their own frameworks for AI.
Christina Tikhonova, President of Microsoft Russia, spoke at the Microsoft Envision Forum about the six ethical principles for AI development formulated by the corporation and published in the book The Future Computed.
So, according to Microsoft, the six principles are fairness, reliability, privacy and security, inclusiveness, transparency, and accountability. Here’s a brief explanation:
Fairness is what we expect from technology. Unlike humans, it should hold no prejudices about gender (recall credit officers who are more likely to approve loans to men than to women) or skin color. Speaking of fairness, we want technology to make decisions without bias. It is therefore better to have a diverse AI development team that can represent the views of various social groups. To prevent biases from being embedded in the technology from the start, we need to make sure that the data set the AI system learns from is as diverse and representative as possible.
Reliability. What happens if something goes wrong? How will an algorithm behave in an unexpected situation? AI reliability is all about that perennial “what if” question. All failure scenarios and the machine’s responses to them must be foreseen and fully accounted for. It is equally important to define how humans can make timely adjustments to the system. In any situation, artificial intelligence should remain nothing more than a tool in human hands.
Privacy and security. Like any other cloud technology, AI systems must comply with privacy laws that govern the collection, processing and storage of data to protect personal information.
Inclusiveness is closely related to fairness. AI products and services must accommodate a wide range of human needs and practices through inclusive design methods, and they must be free of barriers that could unintentionally discriminate against any group of people.
Transparency. Given the ever-increasing impact of AI on our lives, it is our duty to make its algorithms transparent to the general public, so that people can understand a system’s decision-making mechanism and be aware of its potential risks and mistakes.
Accountability. The people and businesses that develop AI systems must be accountable for their work. The standards of their accountability and responsibility should be modeled on those already established in fields such as healthcare and privacy law.
Because artificial intelligence holds great promise for the future, it is essential to articulate the principles for its development and use at the global level. To help the world see how AI can help us solve common problems, Microsoft has launched several programs under the concept of AI for Good.
One of those programs, AI for Healthcare, became especially relevant during the pandemic, when AI was leveraged in a vast number of scenarios to help diagnose COVID-19, conduct tests, predict contagion and assess the efficacy of treatment methods.
A good example is Botkin.AI, a Russian company that has developed an AI system to analyze a variety of medical images, particularly lung scans. The system can detect pneumonia at early stages and thus speed up treatment. More broadly, the AI for Good initiative lowers the barriers to entry for cloud and AI technologies through grants, education, research and strategic partnerships.
Beyond its obvious effect on the competitiveness, success and very survival of businesses, technology is a powerful catalyst of change in society and the world at large. Digital transformation strategies must therefore entail a responsible attitude toward technology and sustainable development.