Trust and safety

Transparency Report: Second half of 2019

Today we’re releasing our latest Transparency Report, covering both the government requests we received and our own actions to remove inappropriate content and fake accounts from LinkedIn during the July-December 2019 reporting period. We see transparency as crucial to maintaining the trust of our members and to building a safe, trusted, and professional platform.

So what do the numbers show? As expected, across most reporting categories in our Government Report and our Community Report, we continue to see an incremental increase in requests and policy enforcement activity, in keeping with our growing member engagement. A few items of note:

  • In this reporting period, we saw a decrease in fake account registration attempts. We blocked nearly 8 million fake accounts at registration and caught another 3.4 million through our proactive tools and safety teams. We continue to invest in fake account detection measures to protect our members.

  • We updated our methodologies for classifying and reporting harassing content, as well as hateful and derogatory content. These are key focus areas for our teams and it’s important that we track them as accurately as possible.

  • We also modified the way we report child exploitation content: we now count individual items of content removed rather than the number of reports we submitted to the relevant authorities.

There’s a lot more detail in the report itself, and a lot more work going on behind the scenes to build a safe, professional community on LinkedIn, including new tools and warnings to limit harassment on the platform and an improved experience for members reporting content that doesn’t belong on LinkedIn. We’ll continue to look for ways to refine and expand our reporting going forward.