Protecting data and preventing bias: Creating a responsible AI strategy

As more staffing firms explore the use of AI for staffing, whether crafting pre-screen questions or creating client pitches, data privacy and security considerations have risen to the forefront of the conversation. And with good reason: Staffing firms handle large amounts of data, and it's crucial to have practices in place that keep that data private and secure, with vendors and staffing firms partnering to prioritize data security. On top of that, firms must build strategies to prevent bias and ensure the outputs created by automation and AI tools remain fair.

How can staffing firms ensure their teams keep important information secure and private, particularly when using automation and AI? How is Bullhorn ensuring our systems are secure and our customers' information remains private? What are some ways firms can prevent bias when using AI tools?

Read on to learn more about Bullhorn's approach to AI and what customers should keep in mind as they build their responsible AI strategies.

What steps has Bullhorn taken to ensure its AI-driven capabilities adhere to current regulations and privacy and security standards?

We have partnered with TrustArc, a provider of third-party data privacy compliance solutions, certifications, and data governance, to evaluate our processes and the way we’re building our automated decision technology—the engine behind candidate matching and other AI capabilities in our Bullhorn Copilot suite—to ensure our AI data governance is accountable, fair in practice, and transparently used.

We are presently working towards a Responsible AI certification of Bullhorn Copilot with TrustArc, which will show that Bullhorn Copilot has been built with data protection and privacy in mind.

We have also partnered with O'Neil Risk Consulting and Algorithmic Auditing (ORCAA) to undertake a review of the Bullhorn Copilot Automatch functionality (currently available in Bullhorn Automation), which helps recruiters match the right candidates to the right roles by providing recommendations based on data found in the candidate's profile and resume. ORCAA helps companies ensure that their algorithms and AI perform as intended, operate within sensible guardrails, and avoid discrimination, bias, and other problems. Specifically, we worked with ORCAA to audit the Bullhorn Copilot Automatch functionality for inherent bias in light of NYC's automated employment decision tools legislation, Local Law 144. The results of the audit showed that candidates grouped by gender and race/ethnicity were treated equally by Bullhorn Copilot Automatch.
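To make the audit criterion concrete, here is a minimal sketch of the kind of disparate-impact check that Local Law 144 audits typically report: per-group selection rates and impact ratios relative to the most-favored group. The field names, sample data, and the four-fifths guideline mentioned in the comments are illustrative assumptions, not ORCAA's or Bullhorn's actual methodology.

```python
from collections import defaultdict

def impact_ratios(records, group_key, selected_key="selected"):
    """Per-group selection rates and impact ratios.

    `records` is a list of dicts, each with a demographic field (e.g.
    "gender" or "race_ethnicity") and a boolean for whether the tool
    recommended the candidate. Field names are illustrative.
    """
    counts = defaultdict(lambda: {"selected": 0, "total": 0})
    for r in records:
        g = r[group_key]
        counts[g]["total"] += 1
        counts[g]["selected"] += int(r[selected_key])

    rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
    top_rate = max(rates.values())  # rate of the most-favored group
    # Impact ratio: each group's rate relative to the highest rate.
    return {g: rate / top_rate for g, rate in rates.items()}

# Illustrative data only, not audit results.
sample = [
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": False},
]
print(impact_ratios(sample, "gender"))
# Ratios near 1.0 suggest groups are treated similarly; the common
# "four-fifths" guideline flags ratios below 0.8 for closer review.
```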

Our overall approach is to build these automation and AI solutions within a considered framework we can feel confident in. Experts like ORCAA and TrustArc help us ensure that we're doing the things our customers trust us to do. We believe our thoughtful approach, including expert analysis of products like Bullhorn Automation and Copilot, provides the foundation on which we can build so that, as a SaaS provider, we remain compliant and accountable under new AI and automated decision-making laws and regulations.

We can think of this similarly to how Bullhorn ATS is SOC 2, Type 2-certified, which means we have the right overall security, availability, confidentiality, and privacy controls in place. If an organization passes SOC 2, you can feel confident they're taking the right steps. We aim to ensure that by undertaking third-party expert analyses of our products, we’re following the proper criteria and the tools we're building are appropriately protecting our customers' data.

What do Bullhorn customers need to do to keep their data private and secure?

First and foremost, staffing firms should have established internal security practices to ensure their employees understand how to handle personal information and sensitive data.

From there, firms should ensure they work with trusted vendors and software providers who have done their due diligence to make their solutions secure and to limit access to only the employees who need it.

It’s important to note that our customers have complete control over who has access to their data, whether it’s Bullhorn, a Marketplace partner, or a third-party service provider, such as an LLM provider that you might use to power generative AI functions.

On the Bullhorn side, we've built our AI and automation products using models that we host ourselves and data pipelines that we own, meaning all data is securely stored in our data centers. High-volume actions that you can take in Bullhorn, like candidate ranking, happen entirely on systems we own. That data never leaves the same data center and secure system that houses the rest of Bullhorn's data. Just as our customers have trusted us with their data in the cloud since 1999, it's that same concept: we control the system end to end.

What’s the most important thing our customers should know about how Bullhorn is approaching privacy and security in our AI solution, Copilot?

Our customers should know that the foundation of how we build all of our tools is safe, secure, and transparent. We can tell you what data is being used to generate the output of AI-powered queries; for example, when a candidate search is run, you can always see the exact criteria that were used to generate it. All the data we use is transacted securely, with access limited to only those who need it, in the same way as the rest of your data.
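As a sketch of what that transparency can look like in practice, a search call can return the criteria it actually used alongside the matches, so the user can always inspect what drove the output. The function and field names below are hypothetical, not Bullhorn's API.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SearchResult:
    """Pairs the matches with the exact criteria that produced them."""
    criteria: dict[str, Any]
    matches: list[dict] = field(default_factory=list)

def search_candidates(candidates: list[dict], **criteria) -> SearchResult:
    """Filter candidates and return the criteria alongside the results."""
    matches = [
        c for c in candidates
        if all(c.get(k) == v for k, v in criteria.items())
    ]
    return SearchResult(criteria=criteria, matches=matches)

result = search_candidates(
    [{"skill": "Java", "location": "Boston"},
     {"skill": "Python", "location": "Boston"}],
    skill="Java",
    location="Boston",
)
print(result.criteria)  # the user-visible record of what the search used
print(result.matches)
```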

There are two sides to the privacy and security of AI tools. First, there's building a smart solution, which requires taking lots of data and figuring out what the AI tool needs in order to create the output you requested. The other side is using it: whenever I ask Copilot a question, I send only the data necessary to answer that question and get the output back.

In both of those transaction scenarios—when we're building or when we're trying to give you answers—that data is all transacted securely. And, ultimately, I think that is the most important thing.

There are two main components to a responsible AI strategy: securing your data and preventing bias. We’ve shared what Bullhorn and our customers should do to protect their data. Next, how can staffing pros work to reduce or eliminate bias in their use of AI, and how do AI tools provide unbiased outputs?

At the most basic level, our tools will not produce biased outcomes if your data doesn’t reflect bias. For actions like candidate ranking or generating a summary with AI, we can only choose from the data in your system. So, my first recommendation is to ensure you have solid internal procedures to reduce bias in recruiting and ensure you’re representing all individuals equally. This means having a broad, heterogeneous dataset that includes different genders, backgrounds, and geographies so the AI tool provides outputs based on diverse data. Assuming you're doing the right things to avoid bias in hiring, the software should follow your lead.
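One simple way to spot-check that "broad, heterogeneous dataset" point is to summarize how the candidate pool is distributed across a few attributes before letting an AI tool rank or summarize from it. This is a minimal sketch under illustrative assumptions; the attribute names and the pool below are made up, and real demographic data should of course be handled under your own privacy rules.

```python
from collections import Counter

def representation_report(candidates, fields=("gender", "location")):
    """Share of the candidate pool in each value of a few attributes,
    to spot segments that are under-represented."""
    report = {}
    for f in fields:
        counts = Counter(c.get(f, "unknown") for c in candidates)
        total = sum(counts.values())
        report[f] = {value: round(n / total, 2) for value, n in counts.items()}
    return report

pool = [
    {"gender": "female", "location": "Boston"},
    {"gender": "male", "location": "Boston"},
    {"gender": "male", "location": "London"},
]
print(representation_report(pool))
# {'gender': {'female': 0.33, 'male': 0.67},
#  'location': {'Boston': 0.67, 'London': 0.33}}
```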

When building our solutions, particularly ones that generate outputs based on your data, like Bullhorn Automation and Copilot, we do not use any data that could be perceived as biased when generating the outputs. For example, our automation and AI tools do not refer to candidates' names when generating the screening questions you ask them to create.
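To illustrate the idea of leaving identity out of the generation step, here is a rough sketch of building a screening-question prompt from job-relevant fields only. The excluded-field list, function name, and prompt wording are assumptions for illustration, not Bullhorn's implementation.

```python
# Fields deliberately excluded when building the prompt: names and other
# identifiers carry no job-relevant signal and can introduce bias.
EXCLUDED_FIELDS = {"name", "first_name", "last_name", "email", "phone"}

def build_screening_prompt(candidate: dict, job_title: str) -> str:
    """Build a prompt for screening questions from job-relevant fields only."""
    relevant = {k: v for k, v in candidate.items() if k not in EXCLUDED_FIELDS}
    return (
        f"Write three screening questions for a {job_title} candidate "
        f"with this background: {relevant}"
    )

prompt = build_screening_prompt(
    {"name": "Jane Doe", "skills": ["Java", "SQL"], "years_experience": 6},
    job_title="Backend Developer",
)
print(prompt)  # the candidate's name never reaches the model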

It’s also worth mentioning that we do not use personally identifiable data to build the models that power our automation and AI tools. When you ask Copilot to generate a candidate summary, the tool only uses certain parts of the candidate’s data—like work experience—to generate the output. Then, the data is essentially “thrown away”, meaning it just goes back to being stored in your records and is not held or kept in our system. We built our models on successful placements so the outputs are aligned with the outcomes our customers are working towards.
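As an illustration of both points, building on placement outcomes while leaving personally identifiable data out, a training example might be assembled roughly like this. The field names and structure are hypothetical, not Bullhorn's actual pipeline.

```python
PII_FIELDS = {"name", "email", "phone", "address", "date_of_birth"}

def placement_to_example(candidate: dict, job: dict, placed: bool) -> dict:
    """Turn a historical placement record into a training example:
    keep only non-identifying, job-relevant features and use the
    placement outcome as the label."""
    features = {k: v for k, v in candidate.items() if k not in PII_FIELDS}
    return {
        "candidate_features": features,  # e.g. skills, work experience
        "job_features": {"title": job["title"], "skills": job.get("skills", [])},
        "label": int(placed),            # successful placement = positive example
    }

example = placement_to_example(
    {"name": "Jane Doe", "skills": ["Java"], "years_experience": 6},
    {"title": "Backend Developer", "skills": ["Java", "SQL"]},
    placed=True,
)
print(example)
```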


Learn more about Bullhorn Copilot and the future of staffing AI


Alistair Lowe-Norris AIGP CCMP

Jason Heilman, Bullhorn - great to see the investment in Responsible AI and the work you've taken to build off the existing work on handling sensitive information being SOC 2, Type 2-certified. Staying ahead of legislation, and working to become TrustArc certified, definitely demonstrates to customers that you're committed to ethical, trustworthy, and responsible use of AI.

Cheers to our brilliant partners 🥂 for leading the charge with AI responsibility! 👏🏽 Your dedication to data protection is impressive. We're excited to continue this journey together!
