LOS ANGELES — It’s been a week since Twitter announced a profound shift in how its service will work, yet there have been few grumblings from the adult industry and its legions of Twitter users — including countless cam girls, clip artists and performers who rely on the platform to stay connected with fans while building their careers.
Could it be that Twitter’s efforts to curb “bad behavior” have been a boon to adult, and will continue to prune the trolls and other misusers of its service — or has the hammer yet to fall on practices that many simply consider to be business as usual?
According to Twitter CEO Jack Dorsey, the new moves to screen overall user behavior — rather than just the content of a specific tweet — are the latest in a series intended to remove abusive users, including fake accounts, scammers and “search marketers” seeking to game the system.
By examining thousands of behavioral signals (reportedly including a user’s ratio of tweets to followed and unfollowed accounts, the number of times a user has been blocked, and how closely a user is tied to other accounts that exhibit bad behavior or originate from a single IP address), Twitter hopes to stop problems proactively, before an abuse report is ever filed.
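To make the idea concrete, behavior-based screening of this kind can be sketched as a weighted score over a handful of signals. The signal names, weights and threshold below are invented for illustration; Twitter has not published its actual model.

```python
# Hypothetical sketch of behavior-based scoring as described above.
# All signal names, weights and the threshold are invented for
# illustration -- this is not Twitter's real model.

def behavior_score(signals: dict) -> float:
    """Combine a few behavioral signals into one abuse score in [0, 1]."""
    weights = {
        "block_count": 0.4,        # how often the account has been blocked
        "follow_churn": 0.3,       # rapid follow/unfollow activity
        "shared_ip_cluster": 0.3,  # ties to flagged accounts on one IP
    }
    # Each signal is assumed pre-normalized to [0, 1]; clamp just in case.
    return sum(weights[name] * min(signals.get(name, 0.0), 1.0)
               for name in weights)

def should_downrank(signals: dict, threshold: float = 0.5) -> bool:
    """Flag the account for reduced visibility once the score crosses a threshold."""
    return behavior_score(signals) >= threshold
```

Because a score like this is computed from account actions rather than tweet text, it works the same way regardless of what language a tweet is written in — which is the deployment advantage Dorsey alludes to.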
Although it is not clear at what threshold a user may be banned outright, a “shadowban” is more likely, with Wikipedia defining the practice as “blocking a user or their content from an online community such that the user does not realize that they have been banned.” This non-confrontational approach, notes the online encyclopedia, makes “a user’s contributions invisible or less prominent to other members of the service, [in the hope that] in the absence of reactions to their comments, the problematic user will become bored or frustrated and leave the site.”
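The shadowban mechanic the encyclopedia describes — the banned user still sees their own posts, while everyone else does not — can be illustrated in a few lines. The data structures and account names here are invented for illustration.

```python
# Minimal illustration of the shadowban mechanic described above:
# the flagged user's replies stay visible to that user alone, so
# they get no signal that anything has changed. Names are invented.

shadowbanned = {"spam_account"}

def visible_replies(replies, viewer):
    """Return the replies a given viewer would actually see."""
    return [r for r in replies
            if r["author"] not in shadowbanned or r["author"] == viewer]

replies = [
    {"author": "fan_one", "text": "Great set!"},
    {"author": "spam_account", "text": "Click my link"},
]

# An ordinary viewer never sees the shadowbanned reply,
# while the shadowbanned user still sees their own post.
ordinary_view = [r["author"] for r in visible_replies(replies, "fan_one")]
banned_view = [r["author"] for r in visible_replies(replies, "spam_account")]
```

The filtering happens entirely on the read path, which is what makes the ban invisible to its target: nothing about the account itself changes, only what other viewers are shown.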
“A lot of our past action has been content-based,” says Dorsey, “and we have been shifting more and more toward conduct and behaviors on the system.”
Twitter’s testing of the new system reportedly resulted in an eight percent decline in abuse reports covering conversations in a tweet’s replies, and a four percent dip in abuse reports from search. Additionally, fewer than one percent of Twitter accounts are said to be responsible for the bulk of abuse reports.
One benefit of Twitter’s new system is the relative ease with which it can be deployed: it does not rely on a tweet’s content (and the hundreds of languages in which tweets appear), but on actions that can be flagged without human intervention.
“Directionally, it does point to probably our biggest impact change. This is a step, but we can see this going quite far,” Dorsey reveals. “It’s not dependent on hiring more people, it’s a model built into the network.”
While activated by default, Twitter’s new behavioral filters will be optional, with a search toggle allowing unfettered access to everything on its service — an affirmative nod to freedom of expression at the expense of perfect policing.
“This is not an endpoint,” Dorsey concludes. “We have to be constantly 10 steps ahead. Because even a system like this, a new model, people will figure out how to game it, [to] take advantage of it.”
As for how these changes will impact adult, only time will tell.