Artificial Intelligence in the Workplace: Generating AI Policies That Comply With State and Federal Law
The boom of artificial intelligence reached critical mass in 2025 and continues to grow and evolve. Broadly, generative artificial intelligence (AI) refers to computer systems and programs that perform tasks and create output resembling human intelligence. Examples include decision-making, complex analysis and summarization, prediction of behavior, optimization, and drafting of documents.
Nearly every major technology company has invested in, developed, or uses some form of AI. For example, Google has developed its Gemini technology, Meta has its Meta AI embedded into Facebook and Instagram, Elon Musk’s X and Tesla both use Grok AI, and, of course, OpenAI has its ubiquitous ChatGPT. Microsoft has built Copilot into its suite of office products, so even the workplace constants of Outlook, Word, PowerPoint, and Excel are not immune from its reach. More specific tools for individual professions include Thomson Reuters’ CoCounsel, built into its Westlaw platform for lawyers; DeepScribe, an AI scribe for physicians and other clinicians; and Asana’s “work management” platform for project managers.
With AI’s omnipresence, employers need to be cognizant of the impacts, both positive and negative, these technologies can have on the workplace. To that end, as with many issues in the workplace, the best sword and shield is simple: draft an effective policy that outlines how to responsibly use AI and limit the employer’s liability from its potential misuse. A policy for the use of AI in the workplace should, at a minimum, address the main concerns for most employers: discrimination and bias risks, confidentiality and intellectual property, and compliance with state and federal law.
Discrimination and Bias
One use of AI in the workplace is in the human resources sector, with AI technologies available to assist in the employee screening and hiring process. As with all hiring decisions, the methodology behind the decision must be unbiased and nondiscriminatory to comply with Title VII of the Civil Rights Act of 1964, 42 U.S.C. Section 2000e-2 et seq., as well as other applicable state and federal laws.
In 2023, the EEOC released a technical assistance document focused on preventing discrimination against job seekers and workers. EEOC-NVTA-2023-2, Title VII and AI: Assessing Adverse Impact (May 18, 2023). Under the current administration, this document has been removed from the EEOC’s website; however, it remains valuable for analyzing the application of key established aspects of Title VII to an employer’s use of AI in hiring.
In sum, the document explains that the use of AI tools in place of human analysis in selection procedures to make employment decisions such as hiring, promotion, and firing will still trigger the protections of Title VII, which prohibits discrimination on the basis of race, color, religion, sex, and national origin; companion statutes extend similar protections to age, disability, genetic information, and other legally protected classes. This applies even if “facially neutral tests” have the effect of disproportionately excluding individuals based on those protected characteristics, referred to as “disparate” or “adverse” impact discrimination.
Importantly, under the new EEOC leadership, this can include so-called “reverse” discrimination based on diversity, equity, and inclusion programs. See “What You Should Know About DEI-Related Discrimination at Work” (Mar. 19, 2025).
At least one court in the Northern District of California has allowed a discrimination case regarding the use of AI in the hiring process alleging violations of Title VII, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) to move forward beyond the pleading stage. See Mobley v. Workday, 740 F. Supp. 3d 796 (N.D. Cal. 2024).
In this case, the plaintiff alleged that the HR management service had used algorithmic decision-making tools that improperly discriminated against him and other candidates based on protected characteristics. The court discussed that the outsourcing of hiring decisions to an AI program would not limit liability for the employer and would cut “against the well-recognized directive that courts are to construe remedial statutes such as Title VII, the ADEA, and the ADA broadly to effectuate their purposes.”
The AI vendor could thus be treated as a third-party agent whose conduct is imputed to the employer. While no final determination of liability has been made, the court preliminarily certified a collective of similarly situated individuals in May 2025.
Because of this, any employer using such AI tools to make employment decisions should consult counsel to consider: conducting an analysis of the program to ensure it does not have a disparate impact on any particular group based on protected characteristics; and drafting a policy explicitly stating that the use of such programs is not intended to discriminate on the basis of any protected Equal Employment Opportunity characteristic.
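As a point of reference, the adverse-impact screen described in the EEOC's technical assistance is commonly operationalized with the "four-fifths rule": a selection rate for any group that is less than 80% of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below illustrates the arithmetic only, using hypothetical applicant counts; an actual disparate-impact analysis should be conducted with counsel and appropriate statistical expertise.

```python
# Minimal sketch of the four-fifths (80%) rule screen for adverse impact.
# Group names and counts are hypothetical; this illustrates the arithmetic,
# not a legally sufficient analysis.

def selection_rates(applicants, selected):
    """Selection rate per group: number selected divided by number of applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def four_fifths_check(applicants, selected, threshold=0.8):
    """Compare each group's rate to the highest group's rate.

    Returns, per group, the impact ratio and whether it falls below the
    threshold (i.e., is flagged for potential adverse impact).
    """
    rates = selection_rates(applicants, selected)
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

applicants = {"Group A": 100, "Group B": 100}
selected = {"Group A": 60, "Group B": 40}
result = four_fifths_check(applicants, selected)
# Group B's 40% rate is two-thirds of Group A's 60% rate, below 0.8, so flagged.
```

A flagged ratio is a screening signal, not a legal conclusion; the EEOC guidance also recognizes that statistical significance and practical context matter.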
Confidentiality, Intellectual Property, and Content
AI also presents unique scenarios for general productivity in the workplace, with capabilities such as drafting emails, summarizing documents, performing advanced spelling and grammar checks, and creating outlines. These functions can make workplace tasks more efficient and increase productivity. However, the companies that own these AI technologies also may claim certain rights to the content their chatbots produce. For example, OpenAI’s privacy policy provides that it collects personal information from users and may store such data, but it also allows for business account administrators to control their organization’s data and keep that data confidential. See Enterprise Privacy at OpenAI (June 4, 2025).
With respect to employee privacy, data protection laws may apply to the personal information input into AI programs. The Consumer Financial Protection Bureau (CFPB) has issued a circular stating that the Fair Credit Reporting Act (FCRA) applies to the use of third-party consumer reports when used for background checks, monitoring of employees, and creating algorithmic scores. See Consumer Financial Protection Circular 2024-06 (Oct. 24, 2024).
This means that the information input into these programs is subject to the same protections as other consumer reports, requiring notice of the use of such reports prior to taking any adverse action and allowing employees to dispute, correct, or delete inaccurate, incomplete, or unverifiable information.
AI in the Law
Additionally, lawyers should be aware of the use of AI in the practice of law. In 2024, the American Bar Association (ABA) issued guidance on how artificial intelligence technologies may affect privilege and professional responsibility, concluding that using AI to perform legal research does not relieve the lawyer of the duty to verify the research and identifying a heightened need for “accountability, critical thinking, and professional judgment.” See Formal Opinion 512 (July 29, 2024).
This is made more evident by the strict sanctions being levied against lawyers who used AI to draft briefs without verifying the accuracy of the information therein. See, e.g., Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023). And for clients, OpenAI’s CEO himself has admitted that there is no legal confidentiality when using ChatGPT. See Snyder, Jason. OpenAI: ChatGPT Wants Legal Rights. You Need the Right to Be Forgotten (July 27, 2025).
Because the needs of different businesses and industries will vary, the best practice for drafting an AI policy will depend on the nature of the employer’s business. For example, legal professionals will need to include language prohibiting the practice of law through AI and ensuring that lawyers and paralegals double- and triple-check all sources when using AI tools for research. In addition, confidentiality and attorney-client privilege must be discussed. Medical professionals will need to confirm that the AI programs being used are compliant with the Health Insurance Portability and Accountability Act (HIPAA) and other relevant privacy acts.
Broadly, employers can use an AI use policy to expand existing confidentiality and proprietary business information policies, ensuring that the use of AI does not forfeit the company’s rights to its own confidential business information. And all employers should advise employees not to rely on AI exclusively in place of human judgment.
Compliance With State Law
Finally, when drafting any policy that affects workers, employers need to be aware of the applicable laws in the individual states where their employees live and work. No comprehensive federal law currently exists to regulate the use of AI in the workplace, which means multi-state employers will need to review the laws of all states in which their employees are located.
Because the AI boom remains ongoing, some legislatures have been slow to catch up. However, multiple states have enacted legislation, and others are continuously attempting to do so.
For example, Illinois has enacted a law (effective January 1, 2026) amending the Illinois Human Rights Act and prohibiting employers from using biased AI, including the use of zip codes as a proxy for discrimination, in hiring practices. 775 ILCS 5/2-101(M), (N).
New York City has implemented its Automated Employment Decision Tools law, prohibiting the use of such tools unless they have undergone an independent bias audit within one year prior to use. N.Y.C. Admin. Code Section 20-870 et seq. Other states with enacted laws addressing AI include Georgia, Florida, Virginia, New York, Massachusetts, Connecticut, New Jersey, Maryland, Michigan, Indiana, Wisconsin, Minnesota, New Mexico, Oregon, Utah, Arizona, Montana, South Dakota, Tennessee, New Hampshire, Delaware, Alaska, Hawaii, Colorado, Texas, and California.
Thus, when drafting a comprehensive AI policy, employers will need, as with all employment policies, to ensure that their language is fluid and evolves as the law does. Conducting a 50-state survey and monitoring for new developments will ensure that multi-state employers have a policy that meets their needs when it comes to AI.
This article was originally published by ALM Media Properties, LLC in The Legal Intelligencer on October 29, 2025, and is reprinted with permission. Further duplication without permission is prohibited. All rights reserved.