Artificial Intelligence (AI) is revolutionizing the way businesses operate, boosting efficiency and streamlining processes. Its rise has people across industries wondering what the future has in store for them, how they can prepare their business to use AI, and how they can protect their business from the potential threats that open frameworks pose.

As GLC is home to many people who love productivity and rely on data to ensure efficiency, the rise of AI has naturally piqued our interest. Our core concern is that new technology must be carefully managed to ensure ethical, accurate, and responsible decision-making. Human oversight remains essential to maintaining quality, fairness, and accountability in AI-driven workflows.

Fear of change in the way we work will always be a source of anxiety. With more than 30 years in business, our own company has been through several waves of new technology, including the onset of digitization and public access to the internet. New systems are constantly becoming more powerful and user friendly, and AI is no exception. It is a tool whose capacity and limitations the whole world seems to be exploring. The real questions for professional firms are: where does AI pose a risk to privacy or compliance, and how can we prepare for the most effective use of AI in business?

In an effort to answer these questions, GLC’s Director of Records, John Brokaw, ran a research project to determine whether AI platforms could make a routine business process faster and more efficient. The project selected was an AI-driven evaluation and ranking of candidate resumes for an open position within the company, run with a strong emphasis on privacy and the elimination of bias. He decided to run the AI evaluation alongside the usual manual process for comparison.

John’s report on the project states: “It has been reported that a hiring manager, given a large number of resumes to review, will spend only a few seconds giving each of them a quick initial scan. Conscious or unconscious bias relating to matters such as the candidate’s name, address, education, or number of jobs may quickly remove a qualified candidate from consideration.”

The question posed: could an AI system without those individual biases, and without the constraints of a busy schedule, do the same task more quickly and efficiently? John then set about entering the qualification criteria into several AI programs to ensure each was assessing the right information, and doing so responsibly.

He says: “The first step in using AI for this purpose is to remove or redact all personally identifiable information (PII) from the resumes… Removal of PII does not ensure the elimination of bias in ranking resumes. Other factors, such as resume formatting, language style, implicit demographic indicators, and excessive employment moves or gaps, can still be detected and weighted to the candidate’s detriment. These factors must be addressed specifically in the prompt to the AI, giving instructions to ignore or otherwise handle them.”
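To make the redaction step concrete, here is a minimal sketch of the kind of pre-processing this implies. The patterns and placeholder labels are illustrative assumptions, not GLC’s actual tooling; a production redaction workflow would need far broader coverage and human review.

```python
import re

# Illustrative patterns only; names, addresses, and school names generally
# require entity recognition rather than simple regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"),
    "ZIP":   re.compile(r"\b\d{5}(-\d{4})?\b"),
}

def redact_pii(resume_text: str) -> str:
    """Replace common PII with placeholder tokens before the text is sent to an AI platform."""
    redacted = resume_text
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[{label} REDACTED]", redacted)
    return redacted

sample = "Jane Doe | jane.doe@example.com | (555) 123-4567 | Brooklyn, NY 11201"
print(redact_pii(sample))
# -> "Jane Doe | [EMAIL REDACTED] | [PHONE REDACTED] | Brooklyn, NY [ZIP REDACTED]"
# Note the candidate's name is untouched here, which is exactly why simple
# pattern matching alone is not enough for this step.
```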

Regarding the prompt in question, he says it must be carefully constructed (a sketch combining the elements below into a single prompt follows the output list). Besides pointing to the target resumes and providing instructions to eliminate bias, the AI application must be:

  1. Assigned a role: in this case, hiring manager
  2. Given the objective: hiring based on a formal job description
  3. Given a list of special requirements for this particular position
  4. Given a comprehensive list of indicators: e.g., previous experience, customer service, technical skills
  5. Provided the associated percentage weights to be used in evaluating the resume; the indicators and their percentages for this project were the same as those used in the annual review of similar incumbents
  6. Given instructions on the desired output.

For this project, the AI application was asked to provide:

  • An overall summary statement of its findings

  • A ranked list of resumes with, for each resume:

      • An overall 1-to-10 rating

      • A 1-to-10 rating for each of the indicators

      • Comments for each indicator

      • A recommendation for review

      • A recommended interview strategy, focus, and sample questions
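As one way to picture how these pieces fit together, here is a minimal sketch of a prompt assembled from the elements above. The indicator names, weights, and wording are illustrative assumptions, not the actual prompt used in the project.

```python
# A minimal sketch of assembling the six prompt elements into one text.
# Indicator names and weights below are assumptions for illustration.
INDICATORS = {                      # 4. comprehensive list of indicators
    "Previous experience": 40,     # 5. with associated percentage weights
    "Customer service": 30,
    "Technical skills": 30,
}

def build_prompt(job_description: str, special_requirements: list[str], resumes: list[str]) -> str:
    indicator_lines = "\n".join(f"- {name}: {weight}%" for name, weight in INDICATORS.items())
    requirement_lines = "\n".join(f"- {req}" for req in special_requirements)
    resume_blocks = "\n\n".join(f"RESUME {i + 1}:\n{text}" for i, text in enumerate(resumes))
    return (
        "You are a hiring manager.\n"                                   # 1. assigned role
        "Objective: evaluate and rank the redacted resumes below "
        "against the job description.\n"                                # 2. objective
        "Ignore resume formatting, language style, implicit demographic "
        "indicators, employment gaps, and number of job changes.\n\n"   # bias instructions
        f"JOB DESCRIPTION:\n{job_description}\n\n"
        f"SPECIAL REQUIREMENTS:\n{requirement_lines}\n\n"               # 3. special requirements
        f"INDICATORS AND WEIGHTS:\n{indicator_lines}\n\n"
        "OUTPUT: an overall summary; a ranked list of resumes with an "
        "overall 1-10 rating, a 1-10 rating and comment per indicator, "
        "a recommendation for review, and a suggested interview strategy "
        "with sample questions.\n\n"                                    # 6. desired output
        f"{resume_blocks}"
    )
```

In practice, each platform’s own chat interface or API would receive this text; the point is simply that every one of the six elements has an explicit, auditable place in the prompt.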

This project was run across three popular AI platforms. Weighted scores were calculated, and candidates were grouped into “Strongly Recommended”, “Recommended”, “Considered”, and “Not Recommended” for interviews.
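For readers curious how weighted scores translate into those groupings, here is a minimal sketch of the arithmetic. The weights and cutoff thresholds are assumptions for illustration; the project’s actual values are not published here.

```python
# Illustrative only: weights and band cutoffs are assumptions, not the project's values.
WEIGHTS = {"Previous experience": 0.40, "Customer service": 0.30, "Technical skills": 0.30}

# Cutoffs for grouping the weighted 1-10 scores, checked from highest to lowest.
BANDS = [(8.5, "Strongly Recommended"), (7.0, "Recommended"), (5.0, "Considered")]

def weighted_score(indicator_ratings: dict[str, float]) -> float:
    """Combine per-indicator 1-10 ratings into a single weighted score."""
    return sum(WEIGHTS[name] * rating for name, rating in indicator_ratings.items())

def group(score: float) -> str:
    for cutoff, label in BANDS:
        if score >= cutoff:
            return label
    return "Not Recommended"

ratings = {"Previous experience": 9, "Customer service": 8, "Technical skills": 7}
score = weighted_score(ratings)       # 0.40*9 + 0.30*8 + 0.30*7 = 8.1
print(round(score, 1), group(score))  # 8.1 Recommended
```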

The AI results were compared with the hiring manager’s review. Both processes produced similar lists with the same top candidates. The person ultimately hired through the manual process was among the “Strongly Recommended” candidates in the AI system found to have the most comprehensive insights. While the hiring manager’s manual choices were the only ones used in the actual hiring process, the comparison provided good insight into the capabilities of AI systems for similar tasks.

The finding of this research project was that quality control standards must be rigorously upheld when using AI for tasks like this, and that human input is essential. Specifically for the hiring process, he lists:

  • Results should be audited for bias and fairness, and examined to make sure qualified candidates are not excluded based on race, gender, age, disability, or other protected characteristics (a sketch of one such check follows this list).

  • Use accurate, representative, and up-to-date data for AI input (e.g., the job description) and eliminate redundant or misleading information that may skew AI decisions.

  • AI should assist rather than replace human judgment in hiring decisions. Face-to-face interviews, whether in person or via remote technology, are essential for all final hiring decisions.

  • The use of AI must be transparent and clearly documented, candidates must be informed of its use, and candidates must be allowed to contest or ask for explanations regarding AI-based decisions.

  • The use of AI must comply with relevant laws and regulations. For example, hiring in New York City must comply with NYC Local Law 144, which stipulates audits, transparency, candidate notification, availability of assessments, and non-discrimination based on protected classes.
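As an illustration of what a bias-and-fairness audit might involve (see the first point above), here is a minimal sketch of the selection-rate comparison that impact-ratio audits, such as those required under NYC Local Law 144, typically rely on. The group labels and counts are invented for demonstration; demographic data for such an audit would normally come from voluntary self-identification, not from the redacted resumes.

```python
# Illustrative impact-ratio check; the category names and counts are invented.
# Selection rate = candidates advanced / candidates in group.
# Impact ratio   = a group's selection rate / the highest group's selection rate.
advanced = {"Group A": 12, "Group B": 7}   # e.g., placed in "Recommended" or better
totals   = {"Group A": 40, "Group B": 30}

rates = {g: advanced[g] / totals[g] for g in totals}
best = max(rates.values())
impact_ratios = {g: rate / best for g, rate in rates.items()}

for g, ratio in impact_ratios.items():
    # The "four-fifths rule" treats ratios below 0.8 as a flag for closer review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {ratio:.2f} ({flag})")
```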

Human oversight is absolutely crucial when using AI for any task, and this project made that clearer than ever. Every step of the process—manually removing personally identifiable information, crafting a well-thought-out AI prompt, and ensuring the AI worked with accurate and relevant data—required human input. Just as importantly, once the AI generated its results, we carefully reviewed them for bias and compared them to the manual resume evaluations.

At its core, this research project aimed to see whether AI could streamline resume reviews. We wanted to know: Could AI provide rankings that aligned with human hiring decisions while maintaining privacy, eliminating bias, and ensuring high-quality results? The answer was promising. In fact, the AI's top choices closely matched those of the hiring manager, showing potential for AI-assisted hiring.

That said, AI can’t—and shouldn’t—replace human judgment. To use it responsibly, we need strict quality control, including regular bias audits, transparent documentation, legal compliance, and, above all, human oversight.

We are all for enhancing efficiency, but human expertise must remain at the heart of every process. AI may change how we work, but the thoughtfulness and intention behind our decisions matter more than ever. By combining the insights of AI with human judgment and oversight, we can create a future where technology enhances our capabilities rather than replaces them.