Google restricts Gemini ahead of democratic elections
Google has announced new limits to stop Gemini, its artificial intelligence tool, from responding to certain election-related queries.
The tech giant said that restricting Gemini in the US and India was part of its strategy to deploy generative AI technologies in a "responsible manner."
The company said in a blog post on Tuesday, "Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses."
"We continuously work to improve our protections, and we take our responsibility to provide high-quality information for these types of queries seriously."
Over 4 billion voters in 50 nations are expected to cast ballots in 2024, making it the largest election year ever. For many of those voters, it will also be the first election held since generative AI became publicly accessible.
The potential for generative AI to produce convincing false material in the form of text, as well as deepfake photos and videos, has alarmed experts.
A recent OnePoll study found that, ahead of the US elections later this year, Republicans and Democrats alike ranked AI-generated material among their top concerns.
The study of 2,000 Americans, commissioned by the non-profit organization Defending Digital Campaigns and the security company Yubico, found that 42% of Democrats and 49% of Republicans believed artificial intelligence would negatively affect the results of the elections.
David Treece, Vice President of Solutions Architecture at Yubico, stated, "We found it interesting that over 78 percent of respondents are concerned about AI-generated content being used to impersonate a political candidate or create inauthentic content, with Democrats at 79 percent and Republicans at 80 percent."
"Maybe even more telling is the belief that AI will negatively impact the results of this year's elections."
Google joins other businesses in placing restrictions on AI products; earlier this year, OpenAI, the company behind ChatGPT, outlined its strategy to prevent its technology from being abused.
OpenAI said it was bringing together members of its technical, legal, policy, safety, and threat intelligence teams to investigate and address potential misuse of its ChatGPT and DALL-E products.
"As we prepare for elections in 2024 across the world's largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency," a company blog post from January said.
Like any new technology, these tools come with benefits as well as challenges. They are also unprecedented, and the company said it will keep refining its approach as it learns more about how its tools are used.
AI Catalog's chief editor