Sumser: How the EU rules on AI will alter HR’s relationship with tech

The number of agencies that regulate HR is about to grow—and the accompanying penalties facing employers can make EEO compliance look tame.

If you find compliance requirements burdensome, wait until you see what’s coming with the European Union’s proposed regulation governing artificial intelligence, which is making waves in HR circles. In short, within a year, you will need access to employment attorneys who understand technology, AI, and the new and existing EU tech and privacy regulations. Shortly thereafter, you’ll need the same kind of help and advice, but this time focused on all the new U.S. state and federal laws that will inevitably follow.

And guess what?

Employment lawyers and technology go together like oysters and cupcakes. There are about as many employment lawyers who really understand technology as there are chimpanzees in New York City (apologies to the chimps—and my wife).


And get this: The proposed regulation implies that both the vendors of automated HR AI systems and their customers need to be fully competent to manage and override AI in HR. It would be quite a feat to bring the profession up to speed as fast as the technology is proliferating, especially since much of technology's appeal is that it takes care of things so you don't have to.

That’s about to change.

In late April, the EU published the first draft of a regulation defining its approach to governing AI, which focuses heavily on HR and recruiting technologies. Like the General Data Protection Regulation before it, the EU regulation will become the definitive baseline for global entities. And the internet makes us all global players, more or less, since tech programs do not know the citizenship of users, employees or their data.

Because the law changes slowly, tech companies have been able to run ahead of it for over 40 years. In fact, many big companies depend on staying ahead of regulators. Technology evolves quickly, and the window between invention and regulation is highly profitable.

Software, in particular, usually works on this model and is often given a boost by the same governments that regulate it. Software product liability is a case in point: there is currently no such thing. The organizations making the employment decisions carry all liability for those decisions, both by contract with the vendors and under anti-discrimination laws. But as software becomes a ubiquitous part of recruiting and hiring, performance management and monitoring (surveillance) of employees, we are going to see more legal oversight, including potential liability for solution providers.



At the heart of the EU’s proposed regulation are some familiar themes from GDPR:

  • Fairness—including preventing discrimination against individuals and classes;
  • Transparency—vendors must be able to explain the logic involved in the AI system and how it works (see the sketch after this list); and
  • The right of individuals to participate in their future—people will be able to challenge the automated decisions and have more control over who has data about them and how it gets used.
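
What does "explain the logic involved" mean in practice? Here is a toy, hypothetical sketch; a real vendor's model would be far more complex, and every name and weight below is invented. The shape of the output is the point: a candidate's score plus the per-feature contributions that produced it.

```python
# A toy, hypothetical scoring model: all feature names and weights are
# invented for illustration, not drawn from any real product.

WEIGHTS = {"skills_match": 0.5, "years_experience": 0.3, "assessment": 0.2}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    """Return a candidate's score and each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"skills_match": 0.8, "years_experience": 0.4, "assessment": 0.9}
)
print(round(score, 2))  # 0.7
print(why)  # {'skills_match': 0.4, 'years_experience': 0.12, 'assessment': 0.18}
```

Real systems rarely reduce to a handful of legible weights, which is exactly why the transparency requirement has teeth.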

GDPR imposes legal requirements on whoever uses an AI system for profiling and/or automated decision-making, even if they acquired the system from a third party. Compared to GDPR, the new regulation introduces additional obligations for AI vendors, prohibits certain AI practices outright and defines more precise requirements for high-risk AI systems and their users.


The category of “high-risk AI system” includes a distinct focus on HR technology systems and tools, including:

  • Recruitment and selection: job ads, sorting applications (i.e., screening, filtering, matching, scoring, ranking), and assessments or evaluations of candidates in the course of interviews or tests; and
  • HR tech: making decisions about promotions, work task allocation and monitoring/assessing performance and behavior.

The proposed regulation requires that vendors and users of a high-risk AI system:

  • Can understand its output and use it fairly and appropriately (not letting the system make decisions without interrogating both the decision/prediction/ranking and how it was made);
  • Can ensure accuracy, robustness and cybersecurity to foster resilience against errors, inconsistencies, technical faults, unauthorized use and exploitation of vulnerabilities;
  • Can provide training, validation and testing data, including relevance, representativeness, accuracy, completeness, and ongoing bias monitoring, detection and correction;
  • Can establish a risk-management system and maintain it continuously throughout the offering’s lifetime; identify and analyze known and foreseeable risks; estimate and evaluate those risks; and adopt risk management measures;
  • Can create automatic logs that ensure traceability of the system’s workings (see the sketch after this list);
  • Can enable human oversight of the AI system by someone who fully understands the system’s capabilities and limitations and can decide not to use the system or its output in any particular situation in order to minimize risks to health, safety or fundamental rights; and
  • Can register the system and document its compliance before introducing it to the market.
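
To make the traceability and human-oversight requirements concrete, here is a minimal, hypothetical sketch; every name in it is invented. Each AI recommendation is appended to an audit log, and nothing takes effect until a named human reviewer confirms or overrides it.

```python
import json
import time
from dataclasses import dataclass

# Hypothetical wrapper illustrating two of the obligations above:
# automatic, traceable logs and a human decision point that can
# discard the system's output.

@dataclass
class Recommendation:
    candidate_id: str
    score: float     # the model's ranking score
    rationale: dict  # feature-level explanation supplied by the vendor

AUDIT_LOG = "ai_decisions.jsonl"  # append-only log, one record per event

def log_event(event: dict) -> None:
    """Append a timestamped record so every output is traceable."""
    event["ts"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def review(rec: Recommendation, reviewer: str, accept: bool, reason: str = "") -> bool:
    """A human reviewer confirms or overrides the AI recommendation.
    Nothing downstream happens until this step runs."""
    log_event({
        "candidate_id": rec.candidate_id,
        "ai_score": rec.score,
        "rationale": rec.rationale,
        "reviewer": reviewer,
        "accepted": accept,
        "override_reason": None if accept else reason,
    })
    return accept

# Example: the reviewer rejects a ranking they cannot justify.
rec = Recommendation("c-1042", 0.91, {"skills_match": 0.31, "years_experience": 0.6})
review(rec, reviewer="hr-lead@example.com", accept=False,
       reason="Score driven by a tenure proxy; needs bias review.")
```

The design point is that the override path and the log are built into the workflow, not bolted on after the fact.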

In short, everyone who touches or manages the system is subject to both automated and human oversight.

The goals are to prevent users from mindlessly following the machine’s recommendations, to demand that systems and training are kept up to date, and to make sure that system performance is documented, monitored and evaluated.
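
As one minimal sketch of what ongoing monitoring could look like, assume you can export screening outcomes tagged with a self-reported group label; everything below is hypothetical data. The four-fifths rule, long used in U.S. EEO practice, flags adverse impact when any group's selection rate falls below 80% of the best-performing group's rate.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, passed_screen: bool) pairs."""
    totals, passed = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold * the best group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical screening data: group A passes 60 of 100, group B 35 of 100.
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 35 + [("B", False)] * 65

rates = selection_rates(outcomes)
print(rates)                  # {'A': 0.6, 'B': 0.35}
print(adverse_impact(rates))  # {'B': 0.583...} -> below 0.8, flag for review
```

A flag like this is a prompt for investigation, not a verdict; the regulation's point is that somebody is looking.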

Most organizations use HR software tools that include some level of intelligent recommendations or predictions. Over the past five years, the functionality crept into the office through the periodic updates of existing tools; that’s how SaaS software works.

See also: AI and hiring bias—why you need to teach your robots well

Unlike the EU's single regulation, local American municipalities are busy developing their own sets of AI rules. Travesties such as facial recognition systems that misgender people, or that fail to recognize people with darker skin as human, create a compelling sense of urgency.

As an HR leader, it’s imperative that you understand where the AI is in your organization, how it works, how to turn it off and where it is likely to make errors. AI-based programs can offer insights that we would not normally have, but they do not provide answers. They hopefully give us better questions and opportunities to pay attention to what is actually going on in our organizations.

