Artificial Intelligence in College Admissions

How the increasing use of AI in college admissions is threatening the system

By: Leila Blake

We have all used artificial intelligence. Whether through ChatGPT, Google's search AI, customer-service chatbots, or one of its many other forms, nearly everyone has interacted with this technology. As AI becomes more normalized, students, workers, artists, teachers, and college admissions officers are growing over-reliant on its services. In recent years, college admissions teams have increasingly adopted AI into their process. Some colleges use it only to help screen the hundreds of thousands of applications they receive each year, while others let these programs make the final decision on whether an applicant is admitted. This raises issues regarding AI’s bias toward precedent, its inability to provide human insight or original thought, and the ethics of letting it decide the futures of millions of people. Used in moderation, AI is a valuable tool for speeding up the grueling college admissions process, but excessive reliance on it for important decisions is eroding the integrity of the admissions process.

Different AI programs are being developed and used for multiple purposes in college admissions offices; some enhance the process, while others diminish it. One algorithm was created to screen applicants by sorting through and processing their GPAs and the rigor of their coursework, working more efficiently and accurately than a person could. Rick Clark, the Executive Director of Strategic Student Access at Georgia Tech, stated on a podcast that it takes at least “several minutes” per applicant to do these calculations by hand, and doing this for thousands of applicants takes up hours that could instead be spent on more human tasks such as reviewing essays and teacher recommendations. Using AI to assist with computations like these gives admissions officers more time to thoroughly consider an applicant’s character and fit for the school. Other colleges, however, use AI programs to review students’ writing and let those programs make the final decision on an applicant’s suitability for the school. This is where questions of ethics and bias are most prominent.
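To make that kind of screening computation concrete, here is a minimal sketch in Python of one way a GPA-and-rigor index could be calculated and used to rank a pool of applicants. The weights, cap, field names, and formula are illustrative assumptions, not Georgia Tech’s or any other college’s actual method.

```python
# Hypothetical sketch of an automated GPA-and-rigor screening index.
# The weights, cap, and data fields are assumptions made for illustration,
# not any college's real formula.

AP_BONUS_PER_COURSE = 0.1   # assumed bump per advanced course
MAX_RIGOR_BONUS = 0.8       # assumed cap on the total rigor bonus

def academic_index(gpa: float, advanced_courses: int) -> float:
    """Combine an unweighted GPA with a capped course-rigor bonus."""
    rigor_bonus = min(advanced_courses * AP_BONUS_PER_COURSE, MAX_RIGOR_BONUS)
    return round(gpa + rigor_bonus, 2)

applicants = [
    {"name": "Applicant A", "gpa": 3.8, "advanced_courses": 6},
    {"name": "Applicant B", "gpa": 3.5, "advanced_courses": 2},
]

# The whole pool is scored and ranked in seconds, rather than the
# "several minutes" per file that doing it by hand would take.
ranked = sorted(
    applicants,
    key=lambda a: academic_index(a["gpa"], a["advanced_courses"]),
    reverse=True,
)
for a in ranked:
    print(a["name"], academic_index(a["gpa"], a["advanced_courses"]))
```

The point of the sketch is not the particular formula but the speed: a purely numerical calculation like this is the kind of task AI handles well, freeing reviewers for the parts of an application that require human judgment.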

Source: https://www.intelligent.com/8-in-10-colleges-will-use-ai-in-admissions-by-2024

As AI continues to develop and advance, the temptation grows to use it more frequently and for a wider variety of tasks. Recently, it has been found that AI can scan recommendations or essays and infer an applicant’s traits or values through an algorithm. This has become one of the most common applications of AI in the college admissions process, with more than 70% of survey respondents stating that they use it for such tasks. Using AI this way makes it easy to overlook that it cannot actually think or make judgments itself. Regardless of what it seems capable of, because it follows an algorithm built from previous data, AI cannot determine whether applicants are a fit for a school the way a person can. According to a study commissioned by Intelligent.com, the majority of people actually believe that this lack of thinking helps eliminate bias, since AI holds no prejudice and no understanding of social constructs such as gender or race. In reality, these programs copy the patterns of students who have previously been admitted. If a certain kind of student (e.g., one of a particular race or gender) has been accepted more frequently in the past, the AI will be more likely to continue admitting applicants like them.
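That feedback loop can be shown with a deliberately tiny sketch: a “model” that simply memorizes per-group acceptance rates from invented historical data will score two identically qualified new applicants differently, purely because of the pattern baked into its training set. Real admissions models are far more complex, but the mechanism by which precedent becomes bias is the same.

```python
# Toy illustration of how a model trained on past decisions inherits their skew.
# The historical records and the "model" (group-level acceptance frequencies)
# are invented for illustration only.
from collections import defaultdict

# Hypothetical past decisions: group X was admitted far more often than group Y.
history = [("X", 1), ("X", 1), ("X", 1), ("X", 0),
           ("Y", 1), ("Y", 0), ("Y", 0), ("Y", 0)]

admits = defaultdict(int)
totals = defaultdict(int)
for group, admitted in history:
    totals[group] += 1
    admits[group] += admitted

# "Training" here is just memorizing each group's historical acceptance rate.
learned_rate = {group: admits[group] / totals[group] for group in totals}
print(learned_rate)  # {'X': 0.75, 'Y': 0.25}

# Two new applicants with identical qualifications get different scores
# solely because of the pattern in the historical data.
for group in ("X", "Y"):
    print(f"New applicant from group {group}: predicted admit probability {learned_rate[group]}")
```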

Another concern regarding AI in college admissions is the ethics of its use, and whether it is trustworthy enough to have a considerable say in determining the futures of millions of applicants. Two out of three admissions professionals are concerned about AI ethics and about which uses of AI are or are not fair. Some worry about AI’s ability to understand an applicant’s circumstances, while others question its judgment or the methods it uses to reach its decisions. Because these systems cannot exercise independent judgment, their decisions can be inaccurate and unfair compared with those made by a human. For example, if an essay is written in simpler sentences, an AI could conclude that the writer is unsophisticated, while a human reader might appreciate the concision. This rigidity makes AI an inherently unreliable and unsuitable tool for drawing conclusions from written work. Another ethical question is the transparency of AI programs. People worry about how AI reaches its decisions, since it is rarely clear to applicants or the public how these systems are built or how they arrive at their conclusions. A balance needs to be found between AI use and human review to ensure accuracy and efficiency, as well as to maintain people’s trust in the process.

There are clear risks in the increasing use of and reliance on AI in college admissions: concerns about bias, ethics, and security are common and undermine the credibility of the process. At the same time, burnout, human error, and other problems in the current system show where technology can genuinely improve it. With applicant numbers rising every year, using AI to analyze data and numerical scores saves time and effort that humans do not need to spend. Admissions teams should continue to perform tasks such as reviewing recommendations and applicants’ essays, but using AI to make menial, tedious, and mechanical jobs more efficient benefits the process, and in moderation it can help minimize human bias and mistakes. In the coming years, AI use in admissions will likely continue to grow, though it is unlikely to completely replace the people working in admissions, given the problems that accompany over-reliance on AI in this field.
