As the use of Artificial Intelligence (AI) becomes more commonplace, attorneys and the legal system will be called upon to address its use and application, including the growing role of AI evidence in litigation. While this nascent technology offers powerful new tools, it also presents profound concerns about accuracy, reliability, fairness, and confidentiality. As AI tools for investigation, analysis, and decision-making continue to advance and become more widespread, attorneys must understand their potential benefits, applications, and drawbacks.
Artificial Intelligence, or AI, is a branch of computer science that focuses on using computers and machines to execute tasks that have traditionally required human intelligence and input. AI systems are essentially prediction models: given a prompt, they generate output based on patterns learned from their training data. As part of the training process, the AI system is fed massive amounts of information, and its predictions are tested to verify that they are accurate. To ensure users receive accurate output, much of an AI system's training occurs before the public uses it. However, many AI systems continue to use progressive learning algorithms that adapt as new data is provided.
Some attorneys use AI systems to manage and analyze the vast amounts of data and digital evidence commonly collected in complex cases. AI systems can quickly sort through and locate relevant information that might take humans weeks or months to find manually. Generative AI, however, represents a dramatic step forward. Instead of simply analyzing existing content, generative AI can create new content, such as audio, images, text, videos, and simulations.
Generative AI is particularly controversial because it can respond to queries with false answers, sometimes called “hallucinations.” A hallucination occurs when a generative AI system perceives a pattern that is nonexistent or imperceptible to human beings and uses it to generate an output that is nonsensical or inaccurate.
Generative AI can also introduce bias and prejudice into the legal system, which can lead to discrimination and unfair outcomes. For example, if AI systems disproportionately identify individuals from marginalized communities as being high-risk, individuals in those communities could face harsher sentences, perpetuating systemic bias within the legal system.
AI is transforming how criminal cases are being litigated. Lawyers may use AI to analyze case precedents, conduct research, or create template documents for use in court. AI systems can streamline research and document review, particularly in cases involving data that is stored electronically. However, there is a significant concern that AI systems may misappropriate client data in violation of attorney-client confidentiality.
An attorney could inadvertently violate their duty of confidentiality by putting confidential client information into an AI system that learns from the questions asked. While the AI system might not directly reveal confidential information to a third party, a lawyer who uses AI to craft legal documents could violate their duty of confidentiality by exposing confidential client information to an AI provider that does not guarantee the information will not be used to continue training the AI system. Once confidential information has been incorporated into the AI system’s training data, it is virtually impossible to remove without completely retraining the system.
Attorneys are ethically required to take reasonable precautions to prevent unauthorized access to confidential information. Currently, reasonable precautions do not necessarily include security measures like encryption or access controls, so long as the method of transmission affords a reasonable expectation of privacy. However, attorneys must take reasonable precautions to ensure they do not input confidential information into an AI system that could possibly reveal it now or in the future. Lawyers who use publicly available AI systems, like ChatGPT, must strictly examine the terms and conditions of the AI use agreement to ensure that shared data will not be used to improve the service, and should avoid entering sensitive client information into the system unless these safeguards are in place. Attorneys and law firms should establish guidelines and procedures to ensure employees who use AI systems do not compromise client confidentiality. Likewise, attorneys should provide clients with assurances that confidential information will not be input into an AI system.
Courts have begun creating new rules and implementing new policies to address the use of AI in court. Some courts are mandating that attorneys disclose any submissions that may have been generated in whole or in part by AI. Courts are also sanctioning attorneys for the misuse of AI. For example, the Southern District of New York sanctioned attorneys who wrote a legal brief using generative AI, finding that the attorneys "abandoned their responsibilities" when they submitted a legal brief written by AI. A federal judge in Wyoming sanctioned attorneys for filing pretrial motions that contained citations fabricated by AI and revoked the pro hac vice admission of an attorney who drafted motions without verifying the legitimacy of the case citations.
The use of AI in the legal system represents a significant shift in how cases are being handled. While AI systems promise greater efficiency, their use raises important questions about accuracy, bias, fairness, and confidentiality. Law schools like the University of California, Berkeley, and Arizona State University have introduced courses on how AI operates within the practice of law. Other law schools have introduced clinics and labs where students work with AI systems and gain hands-on experience with how AI evidence and resources can be used effectively and ethically.
As the widespread use of AI in the legal profession becomes almost unavoidable, judges, attorneys, and legal experts must develop clear standards and regulations to ensure accuracy, reliability, fairness, and confidentiality regarding the use of AI evidence. Legal professionals must become educated on the capabilities and limitations of AI with an emphasis on its ethical and fair use.
Hope Lefeber is a federal criminal defense attorney with over three decades of experience. She has been consistently honored as a Pennsylvania Super Lawyer, as one of the Top 100 Trial Lawyers by the National Trial Lawyers Association, and as one of the Top 10 Criminal Defense Attorneys in Pennsylvania by the National Academy of Criminal Defense Attorneys. Ms. Lefeber has represented high-profile clients, published numerous articles, lectured on federal criminal law issues, and appeared on TV news as a legal expert. She represents clients in Philadelphia and its surrounding counties, in New York, and in federal courts nationwide.
When you need the expertise of an experienced federal criminal defense attorney, contact the Law Offices of Hope Lefeber.