09.06.2023

Self-Regulation Is the Standard in AI, for Now

AI and self-regulation

Do you worry that the rapid advancement of AI might have harmful effects? Do you wish it were governed by a federal law? If so, you are in fast-growing company. Unfortunately, no new regulations aimed at limiting the use of AI currently exist in the United States, so companies adopting the technology are largely on their own, and self-regulation is, for now, their best option.

"AI" overtook "big data" as technology's favorite buzzword some time ago, but the debut of ChatGPT in late November 2022 marked the start of an AI gold rush that caught many AI analysts off guard, thanks to the model's ability to mimic human speech and comprehension.

The astonishing rise of generative models in popular culture, fueled by ChatGPT's release, has raised many questions about where it is all headed. Wonder at AI's ability to produce imaginative poetry and captivating prose is giving way to worry about its harmful effects, which range from consumer harm and lost jobs to wrongful incarceration and even, some fear, the extinction of the human species.

Some people are deeply concerned. Last month, a group of AI researchers called for a six-month pause on building generative models larger than GPT-4, the enormous language model OpenAI had unveiled the month before.

The open letter, whose signatories include Turing Award winner Yoshua Bengio and OpenAI co-founder Elon Musk, warns that advanced AI "might signify a fundamental change in the history of life on Earth, and should be prepared for and handled with comparable care and resources," adding: "Unfortunately, this degree of management and planning is not taking place."

It should come as no surprise that calls for AI regulation are growing. Polls show Americans consider AI untrustworthy and want it regulated, especially for consequential matters such as self-driving cars and access to government benefits. According to Musk, AI might "destroy civilizations."

However, while several new local laws target AI (including one in New York City governing the use of AI in hiring, whose enforcement was recently postponed), there are no new federal AI laws awaiting a green light in Congress (although AI already falls under existing rules for highly regulated industries such as financial services and healthcare).

With all the AI excitement, what should a company do? It is no wonder companies want to reap the benefits of AI; being "data-driven" is widely viewed as a requirement for surviving in the digital age. But companies also want to avoid the downsides, real or perceived, of using AI badly, especially in today's litigious, cancel-prone environment.

"AI is the Wild West," Andrew Burt, founder of the AI law firm BNH.ai, told Datanami earlier this year. "Nobody is adept at managing risk. Everyone carries it out differently."

That said, there are a number of frameworks businesses can use to help manage AI risk. Burt recommends the AI Risk Management Framework (RMF) from the National Institute of Standards and Technology (NIST), which was released earlier this year.

The RMF helps businesses think through how their AI functions and where it could cause harm. It uses a "Map, Measure, Manage, and Govern" approach to understand and, ultimately, reduce the risks of deploying AI across a range of products.
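
As a rough, hypothetical illustration of how a team might organize its work around those four functions, the sketch below keeps a simple risk register keyed to each one. The data model, field names, and severity scale are illustrative assumptions, not part of the NIST publication itself:

```python
# A minimal sketch of tracking AI risks under the four NIST AI RMF
# functions (Govern, Map, Measure, Manage). The classes, fields, and
# severity scale are illustrative assumptions, not NIST's specification.
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    GOVERN = "govern"    # policies, accountability, culture
    MAP = "map"          # context: where and how the AI is used
    MEASURE = "measure"  # metrics: quantify identified risks
    MANAGE = "manage"    # act: prioritize and mitigate risks


@dataclass
class RiskEntry:
    system: str            # which AI system the risk belongs to
    description: str       # what could go wrong
    function: RmfFunction  # which RMF function covers it
    severity: int          # 1 (low) to 5 (high), an assumed scale


@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def risks_for(self, function: RmfFunction) -> list[RiskEntry]:
        """Return risks logged under one RMF function, highest severity first."""
        found = [e for e in self.entries if e.function is function]
        return sorted(found, key=lambda e: e.severity, reverse=True)


register = RiskRegister()
register.add(RiskEntry("loan-scoring-model",
                       "Biased approvals across protected groups",
                       RmfFunction.MEASURE, severity=5))
register.add(RiskEntry("loan-scoring-model",
                       "No owner assigned for model decisions",
                       RmfFunction.GOVERN, severity=4))

for risk in register.risks_for(RmfFunction.MEASURE):
    print(f"[{risk.severity}] {risk.system}: {risk.description}")
```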

Although businesses are concerned about the legal risks of adopting AI, Burt argues those concerns are currently outweighed by the benefits; companies, he says, are more excited than anxious. Still, as he has emphasized for years, there is a clear correlation between an AI system's value and its risk.

Another methodology for managing AI risk comes from Cathy O'Neil, CEO of O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) and a 2018 Datanami Person to Watch: a framework ORCAA calls Explainable Fairness.

Organizations can use the Explainable Fairness framework to test their algorithms for bias and to think through what happens when disparities in outcomes are found. What criteria, for instance, may a bank legitimately use to approve or reject a student loan application, or to set a loan's interest rate higher or lower?

The bank must, of course, use data to answer those questions. But what data, specifically, which variables describing the loan applicant, can it use? Which factors may legally be considered, and which should be off-limits? According to O'Neil, answering such questions is neither simple nor easy.
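
To make the idea of bias testing concrete, here is a minimal sketch of one common check: the disparate impact ratio, which compares approval rates across two groups. The data, group labels, and the 0.8 threshold (the "four-fifths rule" borrowed from US employment law) are illustrative assumptions; this is not ORCAA's Explainable Fairness methodology itself:

```python
# A minimal sketch of one common bias check for a loan-approval model:
# the disparate impact ratio (the approval rate of one group divided by
# the approval rate of a reference group). The decisions below are
# hypothetical, and the 0.8 cutoff is only an illustrative threshold.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of applications that were approved."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of group A's approval rate to reference group B's."""
    rate_b = approval_rate(group_b)
    return approval_rate(group_a) / rate_b if rate_b else float("inf")

# Hypothetical model decisions (True = loan approved) for two groups.
group_a = [True, False, False, True, False, False, False, True]  # 37.5%
group_b = [True, True, False, True, True, False, True, True]     # 75.0%

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule threshold
    print("Potential adverse impact: review the model's input variables.")
```

A failing check like this does not by itself prove the model is unlawful; it flags which input variables and outcomes need the kind of case-by-case legitimization O'Neil describes below.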

Speaking at Nvidia's GPU Technology Conference (GTC) last month, O'Neil explained: "That's the whole idea of this framework: those factors have to be legitimized. Legitimacy is decided case by case... Can one give more to one individual and less to another?"

Even in the absence of new AI legislation, businesses should start thinking about how to apply AI fairly and ethically under existing laws, advises Triveni Gandhi, the responsible AI lead at data analytics and AI software provider Dataiku.

"People need to begin asking themselves, 'Okay, how do we take the law the way it is and apply it to current contexts?'" she says. Beyond the laws already on the books, many people are also thinking about the moral and ethical standards by which AI should be developed, and companies are beginning to ask themselves such questions even without general AI laws.

Gandhi recommends using frameworks to guide businesses as they begin their ethical AI initiatives. Several frameworks and ways of thinking are available in addition to the NIST RMF, she notes; the task is simply to pick the one that fits best and get to work.

Gandhi urges businesses to start exploring these frameworks and familiarizing themselves with the issues, since doing so will help them begin their own journeys toward ethical AI. The worst thing they can do is delay getting started while hunting for the "ideal framework."

The obstacle, she says, is the expectation of immediate perfection. No product, pipeline, or process is flawless when first launched, but starting is better than having nothing at all.

The path to AI legislation in the US is likely to be long and convoluted, with no clear end in sight. However, the AI Act, a new rule being developed by the European Union, could take effect later this year.

The AI Act would provide a uniform regulatory and legal framework for uses of AI that affect EU citizens, covering how AI is developed, what businesses may do with it, and the legal consequences of violating the rules. The act is expected to ban certain AI applications deemed too hazardous and to require businesses to obtain authorization before deploying AI for particular high-risk use cases.

The AI Act could serve as a template for American AI legislation if US states adopt European-style rules, much as California modeled the California Consumer Privacy Act (CCPA) on the EU's General Data Protection Regulation (GDPR).

What is needed is a worldwide agreement on AI ethics, and that would be a positive thing, says Sray Agarwal, a data scientist and principal consultant at Fractal.

"You never want a privacy law or any kind of ethical regulation in the US to be the opposite of that of another country it trades with," says Agarwal, who advises the United Nations pro bono on ethical AI issues. "There must be universal agreement. Organizations like the OECD, the World Economic Forum, the United Nations, and other international bodies need to come to a consensus, let's say global norms, that everyone adheres to."

Agarwal, though, is not holding his breath for that consensus to arrive any time soon. "We have not reached that point yet. We are not even close to responsible AI," he says. "We haven't even implemented it holistically and thoroughly across businesses for very straightforward machine learning models. So its implementation in something like ChatGPT is a difficult question to discuss."

In the meantime, Agarwal says, corporations should continue to apply their own ethical standards. In the absence of government or industry regulation, self-regulation remains the next best option.

Yasmin Anderson

AI Catalog's chief editor
