Equity

Mythbusting the Feed: How We Work to Address Bias

Whether we’re building AI models or new features, we continuously iterate on our processes to uncover and address any instances of bias in our products and to enable fairness on the platform. This work to find and address bias is always ongoing.

This work is especially important on our LinkedIn Feed and is the focus of the third and final part of our “Mythbusting the Feed” blog series. If you missed part one or two in the series, make sure to go back and check them out to learn what types of conversations are welcome on LinkedIn, how the Feed algorithm works, how to curate your Feed, and common myths and misconceptions about how the Feed works.

Sabry Tozin, our VP of Engineering, expands on some of your most-asked questions in the videos below:

How do we detect bias in our algorithms?

To help drive equitable outcomes for our members, we aim to create a platform that is free from unfair bias. If harmful biases are identified anywhere in our products, platform, or processes, we work to address them quickly. We continue to develop and share methods for assessing and mitigating potential unfair bias in our AI models, including those that power recommendation and ranking systems.
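As a rough illustration of what one such assessment could look like, the sketch below compares how much feed exposure a ranking gives to posts from different member groups. This is a minimal, hypothetical example: the position-discount weighting, the grouping function, and the gap metric are all assumptions for illustration, not LinkedIn’s actual methods or code.

```python
from collections import defaultdict

def exposure_by_group(ranked_posts, group_of):
    """Average exposure each group receives in a ranked feed.

    Exposure is modeled with a simple position-discount weight
    (1 / rank), a common proxy for how much attention a slot gets.
    Both the weighting and the grouping are illustrative assumptions.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for rank, post in enumerate(ranked_posts, start=1):
        group = group_of(post)       # hypothetical: maps a post to an author group
        totals[group] += 1.0 / rank  # higher positions contribute more exposure
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}

def exposure_gap(ranked_posts, group_of):
    """Difference between the best- and worst-served groups.

    A large gap would flag the ranking for closer review; the threshold
    for "large" would be a separate policy choice, not shown here.
    """
    averages = exposure_by_group(ranked_posts, group_of)
    return max(averages.values()) - min(averages.values())
```

In practice, a check like this would be run across many members’ feeds and many identity dimensions, and a persistent gap would prompt deeper investigation and mitigation rather than a single automated fix.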

Does LinkedIn limit the distribution of certain posts or people?

We do not limit distribution or remove content based on race, ethnicity, political views, or other characteristics. However, we do remove content that violates our Professional Community Policies, which can include harassment, misinformation, and hate speech. We may also choose not to broadly promote low-quality content, such as engagement bait.

This series is intended to help increase understanding of the LinkedIn Feed. As the world around us changes quickly, we’ll continue our work to be the professional community for people looking to learn a new skill, grow professionally, share knowledge, help others, or take the next step in their career journey.

Equity Journey Update: 10 Million and Counting

Addressing bias is a step on our journey to help drive equitable outcomes for all members of the global workforce. An important component of this work centers on our LinkedIn Feed and on understanding how our members experience our platform, especially those from historically and systemically marginalized communities. Several years ago, we began collecting gender and disability demographic data globally, but we realized it didn’t represent the much broader spectrum of identities among our members. In 2021, we expanded those efforts in the U.S. to initially include nine identity dimensions (with a write-in option for race/ethnicity and sexual orientation) and asked members to self-identify on LinkedIn. As we continue to scale Self-ID, we’ll evaluate what additional identity dimensions are needed to both create equal access to opportunity and help drive more equitable outcomes for all members of the global workforce. (Check out my article where I explained why I chose to self-identify.)

Today, more than 10 million members have shared some aspect of their identity on LinkedIn. As more members join us on this Equity journey and participate in Self-ID, we’ll be able to evaluate how members from historically and systemically marginalized communities experience the platform. We’ll also be able to expand and share more relevant workforce trends by accounting for how members’ demographic identities may impact their access to economic opportunity, and launch new products and experiences to help drive more equitable outcomes for members facing barriers.

Every day, more people from across the world come to LinkedIn to share their knowledge and perspectives and to discuss topics they care about. We’ll continue sharing our equity journey by providing transparency into the Feed, how it works, and what we are doing to address any bias. We know how important it is for members to have safe, trusted, and authentic conversations that resonate with them and their communities.