Trust and safety

Transparency Report: Second Half of 2020

Today we’re publishing our Transparency Report for July - December 2020, covering government requests we received as well as actions we took to remove inappropriate content and fake accounts. In the six months since our last report, we’ve continued to see record engagement on LinkedIn, including more than a 40% increase in conversations on the platform. With more people relying on LinkedIn to connect, learn, create content, and find jobs, we continue to see an increase in member reports on posts, comments, and messages, and an increase in the volume of content removed for violating our Professional Community Policies.

Here are a few highlights from the July - December 2020 report:

  • The number of government requests for data about our members went up, but the requests encompassed far fewer member accounts than the prior reporting period.

  • Our automated defenses blocked the vast majority (98.3%) of the fake accounts we took action on during this period. We also saw a decrease in spam and scam content generated by fake accounts, due to automated defenses catching them more quickly.

  • We saw big increases in the volume of content removed in a number of categories, including misinformation and violent or graphic content, driven in part by world events that triggered polarizing content, such as the U.S. elections and COVID-19.

  • We made improvements to our reporting, including more accurate attribution of re-shared content that violates our policies, and a broadening of our harassment category to encompass a wider set of abusive and insulting content.

You’ll find many more details in the Report, as well as more information in our official blog on features and policies designed to ensure a safe and professional experience on LinkedIn. Our responsibility doesn’t end with reporting on policy violations, and we’re continuing to invest in expanded platform safety tools and teams.

###

*An earlier version of this post attributed a decrease in spam and scam content to a reduction in fake profiles. We edited the post to clarify that improved AI review systems (as well as improved attribution of re-shared content) account for the decrease in spam and scam content.