Gartner: How D&A leaders develop successful AI governance
According to Gartner, D&A leaders must put four key actions in place for AI governance to be effective.
Governance helps organizations make sure that tasks, accountability, decision rights, and policies are assigned and taken care of. It is about asking the right questions and providing appropriate feedback.
AI governance also means resolving dilemmas, managing risk, setting policies, and making investment decisions for the use of AI.
1. Document facts about AI models for all projects
For AI-powered systems, keeping technical knowledge up to date alongside governance activities is essential. Documenting facts about the AI models in every project is important for earning the trust of the business.
The documentation can take different forms: visuals, metadata and, most importantly, automation (the ability to produce model updates and keep the records current).
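As a rough illustration, here is a minimal Python sketch of what such automated model documentation could look like. The fields, names, and values are illustrative assumptions, not anything prescribed by Gartner; adapt them to whatever facts your governance process requires.

```python
# Minimal sketch of automated model fact documentation, assuming a
# Python-based training pipeline. All names and fields are illustrative.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class ModelFacts:
    """Facts about a trained model that governance wants on record."""
    model_name: str
    version: str
    owner: str
    training_data: str            # pointer to the dataset snapshot used
    metrics: dict = field(default_factory=dict)
    intended_use: str = ""
    known_limitations: str = ""
    documented_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self, path: str) -> None:
        """Write the facts to a JSON file so they can be versioned and audited."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)


# Called at the end of every training run so the record is produced
# automatically rather than relying on manual updates.
facts = ModelFacts(
    model_name="churn-classifier",
    version="1.4.0",
    owner="data-science-team@example.com",
    training_data="s3://datasets/churn/2024-05-01-snapshot",
    metrics={"roc_auc": 0.91, "precision": 0.78},
    intended_use="Flag accounts likely to churn for the retention team.",
    known_limitations="Trained on EU customers only; not validated elsewhere.",
)
facts.to_json("model_facts_churn_1.4.0.json")
```

Generating a record like this at the end of every training run keeps the documentation current without relying on anyone remembering to update it by hand.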
2. Establish basic standards for creating and launching AI
Establish standards early (covering collaboration, data management, formats, and bias mitigation) and make sure the AI team follows them. For bias mitigation in particular, explainability and interpretability standards must be in place; they are useful when issues with the data structure or other problems arise.
Such standards make development and deployment easier; without them, expanding AI across the company becomes harder.
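As one possible illustration of turning such a standard into something enforceable, the sketch below shows a simple pre-deployment bias check in Python. The demographic parity metric, the 0.10 threshold, the column names, and the scikit-learn-style `predict` interface are all assumptions chosen for the example.

```python
# Minimal sketch of a pre-deployment bias check that a standard could
# require every model to pass before launch. Thresholds and names are
# illustrative assumptions, not a prescribed rule.
import numpy as np
import pandas as pd


def demographic_parity_gap(predictions: np.ndarray, groups: pd.Series) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = pd.Series(predictions).groupby(groups.values).mean()
    return float(rates.max() - rates.min())


def bias_gate(model, features: pd.DataFrame, sensitive_col: str,
              max_gap: float = 0.10) -> None:
    """Block deployment when the gap between groups exceeds the agreed standard."""
    preds = model.predict(features.drop(columns=[sensitive_col]))
    gap = demographic_parity_gap(preds, features[sensitive_col])
    if gap > max_gap:
        raise ValueError(
            f"Demographic parity gap {gap:.2f} exceeds the {max_gap:.2f} standard; "
            "document the cause or remediate before deployment."
        )
```

Wiring a gate like this into the release pipeline is one way to make the standard enforceable rather than merely advisory.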
3. Govern only the most important things
As a rule, reserve central governance for compliance, security, and shared data. Less critical items should be governed centrally only temporarily, and less critical business areas should be handled by their business owners. Data scientists need freedom: their work typically affects few business lines and carries little risk.
4. Partner with legal and compliance
It is crucial to establish communication with legal and compliance teams to make sure AI initiatives comply with the relevant requirements.
There are two categories to track: existing laws and regulations that AI must meet, and those that target AI specifically.
Legal and compliance counterparts will be familiar with the existing laws, so AI governance teams should consult them first on the measures and approaches to adopt. These measures include guidelines, review dates, and industry-specific validation related to privacy, data protection, intellectual property, competition, and corporate law.
These laws and regulations, however, are inconsistent across jurisdictions and often not yet enforced. They are still helpful as a general indication of what will be expected of AI in the future. Legal and compliance teams should jointly decide on the needed course of action.
AI also attracts intense attention around safety and value, which is part of why companies find it hard to govern, and there is still a lack of clarity about AI's reputational, business, and social impact.
So what matters is not only implementation but also educating the business for its own benefit.
AI Catalog's chief editor