In early October, Twitter announced that it would be ending its use of an artificial intelligence system that it said could help users find and remove malicious accounts and bots.
The company said the system, which it called “Intelligent Change,” would now only be used for automated content removal and “intelligence-enhanced” content.
Twitter said the technology had been used in a variety of other applications, including trending topics, and was intended to help users see if their tweets contained spam or fake accounts.
The move came after a flurry of negative coverage of the program, with critics calling it a bot that only took down accounts that were threatening.
But the company acknowledged the use of the system could lead to mistakes and misbehavior.
In a statement to The Verge, a Twitter spokesperson said: “While we are pleased to be able to make this change, it is important to note that the use or misuse of the intelligent change technology will not be tolerated on our platform.
This type of misuse will not impact our user-generated content or content that is flagged for removal.”
The company also acknowledged that the program produced false positives and errors in some cases, but said it could not determine how often such errors would occur.
The new automated process “was never intended to provide a comprehensive list of accounts that are flagged for action by the program,” the company said.
The bot would also likely have missed accounts that did not violate the company's rules or its terms of service.
The news sparked outrage from some Twitter users, who pointed to a tweet from early last year, in which the company was already warning users that bots would be used in the future.
But a week later, the company removed the bot, and it was only replaced with a new automated system.
Twitter has since released a statement clarifying that it will not use the system to delete accounts outright, only to help it find them and flag them for removal from its platform.
“We know some people will use this new system to see if they’re on the wrong side of the law,” the statement read.
“But our intention was to help Twitter do the right thing by helping us find and eliminate spam, bot accounts and others that are threatening to disrupt our users.”
As part of the program’s final phase, Twitter also started allowing users to request that their tweets be automatically flagged for moderation, something it previously prohibited.
Twitter users had also been asking for the program to be shut down after it came under fire.
In December, the social media company said it was shutting down the system “in the interests of user safety.”
The move was met with criticism from some users, including former Twitter CEO Dick Costolo, who said the company should have done more to address the issue of bots.
“The way it was put together, I think it was an incredibly, incredibly bad decision,” Costolo told Business Insider.
“I think we should have shut it down for a while.”
The bot removal announcement came after Twitter began taking a more proactive approach to its bot program. A recent update to its automated moderation system lets users ask, via an alert message, for their tweets to be flagged for spam and fake-account removal, and to request that their tweets be removed from their feed for the same reason.
The updated system is currently available to the public, with a full rollout expected over the next few weeks.
In early December, Twitter had announced plans to roll out a new version of its bot system that would let users ask the company to filter bot content and remove it from the platform.
Twitter also said it would “begin to roll-out an opt-in system that will allow users with a strong reputation to request their accounts to be removed from Twitter for posting false information.”