Tristan Fretwell: Navigating employee data privacy with the rise of AI

With the growing availability and use of artificial intelligence in the workplace, employers have difficult-to-pass-up opportunities to improve the efficiency and productivity of their operations.

This rise in available technology, however, also comes with new risks, including risks tied to how employers collect, analyze, and use employees’ personal information. As the number of employers using AI grows, businesses and organizations have certain obligations to ensure compliance with privacy laws while also safeguarding their employees’ personal data.

Across most industries, employers are rapidly adopting AI for tasks like recruiting and hiring, onboarding and retention, and analyzing employee performance. And these tools continue to evolve and become more powerful every day. AI can process vast amounts of data, make predictions, and offer insights based on employee behavior, work patterns, and other personal data.

While the use of AI may improve efficiency, it also raises concerns about how employers collect and use this information. AI’s ability to analyze employee data creates the potential to infringe employees’ privacy rights, on top of potential claims of bias and discrimination. And the lack of a comprehensive federal law governing employee data privacy in the U.S. compounds these challenges. Instead, employers must navigate a mixture of federal and state laws.

For example, under the Americans with Disabilities Act, employers must keep an employee’s medical information confidential. Information obtained from medical exams or requested about an employee’s disability requires certain protections under the ADA, including maintaining that information on separate forms and in separate files. Employers using AI to track this information should ensure these technologies keep the information confidential and restrict access to it.

Another federal law is the Fair Credit Reporting Act, or FCRA, which governs the use of consumer reports and background checks for employment purposes. Under the FCRA, employers must obtain written consent from employees or job applicants before seeking these reports. If an employer uses AI-driven algorithms or automated systems to make hiring decisions, it is critical that the data used is accurate and that candidates are informed about the screening process.

And according to the Consumer Financial Protection Bureau, other types of consumer reports can also trigger FCRA compliance, including those that track employees’ work performance or are intended to predict a worker’s behavior.

The Electronic Communications Privacy Act, or ECPA, governs the interception and monitoring of electronic communications in the workplace. Employers can monitor company-owned devices and communications for business purposes, but employees retain a reasonable expectation of privacy in their personal communications. Employers that use AI to monitor employee productivity should ensure the monitoring aligns with the privacy expectations they have communicated to employees.

Despite these federal protections, the patchwork of employee data privacy law in the U.S. remains incomplete. These laws generally focus on specific types of data or specific industries, and none specifically regulates the use of AI. Several federal agencies, including the Equal Employment Opportunity Commission, the National Labor Relations Board, and the Department of Labor, have offered limited guidance on these issues, but it is unclear whether the current administration will maintain or change these recommendations.

A few states, like California and Illinois, have established additional protections to fill the gaps in federal regulation. For example, the California Privacy Rights Act gives employees the right to access their data, correct inaccuracies, and object to some data processing. The law also requires employers to provide privacy notices, obtain employee consent before processing, and follow specific data minimization principles.

Illinois likewise amended its Human Rights Act to require employers to notify employees when AI is being used for specific purposes. And other states are considering similar protections.

As the use of AI in the workplace grows, staying up to date on new laws regulating AI is critical to avoiding government investigations and legal claims from employees. Employers should therefore take deliberate steps to ensure AI is being used safely.

First, employers should develop and communicate clear policies related to the use of AI in the workplace. These policies should outline what data employers collect, how employers use that data, and the purpose of any monitoring. Employees should be aware of how an employer’s AI tools may access or monitor their data.

Second, employers should adopt appropriate cybersecurity measures. As businesses adopt new AI technologies in the workplace, employers should regularly audit and update their cybersecurity. This ensures an employer’s policies are being followed and that proper protections are in place for the data these tools collect and use.

Finally, regular training for employees about the appropriate use of AI and data privacy risks is important. On top of training on the approved ways employees can use AI in the workplace, employers should update and offer these trainings as their workforce changes and technologies evolve. These trainings should also cover the steps employees can take to protect an employer’s confidential business information when using AI at work.

Given that AI will keep transforming the modern workplace, employers must balance innovation with responsibility. By adhering to privacy laws, implementing clear policies, and safeguarding employee data, employers can create an environment that not only leverages AI’s potential but also prioritizes the protection of employees’ personal information.•

__________

Tristan Fretwell is an attorney at Taft and a member of the firm’s employment and labor and commercial litigation groups. Opinions expressed are those of the author.
