YouTube says it is making changes to its platform in advance of the 2020 United States elections in an effort to curb the spread of false information about candidates or the election process.
Vetting content uploaded to YouTube for such a specific purpose is difficult, mainly because Google has to verify the information creators provide, whether it’s something they say or something they show.
The best way to inform people, the company argues, is to offer as much context as possible around the available content. YouTube is looking to surface quality videos in search results and ‘watch next’ panels, which it says should tilt the balance in favor of truthful information. The company also says it will help quality content from campaigns, candidates, and political creators reach a wider audience, and will help keep their accounts secure.
The heavy lifting will be done invisibly, behind the scenes. Detecting deepfakes and manipulated content is the work of algorithms and teams of people, and YouTube says it’s always trying to stay ahead of the technology. One tool is the Intelligence Desk, which tracks trends in the community and addresses them before they become a problem.
The goal is to offer a platform for political discourse that can be trusted, something that has proven difficult for other platforms as well, such as Twitter and Facebook. The difference is that both Twitter and YouTube promise to tackle the issue head-on, while Facebook has chosen a more lax approach to the elections.
Lastly, YouTube is expanding its political advertising policies to increase transparency around funding. Users will have an easier time seeing who’s buying political ads on YouTube, Google, and partner properties.
YouTube’s Community Guidelines are designed to cover technically manipulated or doctored content, misleading claims about voting or the census process, and even content that makes false claims about political candidates.
Furthermore, attempting to impersonate other people or channels, or artificially inflating view, like, and comment counts, will result in account termination.
“Content that comes close to violating our Community Guidelines is a fraction of 1% of what’s watched on YouTube in the U.S.,” explained Leslie Miller, VP of Government Affairs & Public Policy for YouTube.
“To reduce this even further, in January 2019, we launched changes to our recommendations systems to limit the spread of harmful misinformation and borderline content. The result is a 70% average drop in watch time of this content coming from non-subscribed recommendations in the U.S.”
While some of these measures are already in effect, YouTube will also rely on user input. Many of the changes happen server-side, but the community is asked to report inaccurate content as well.