
Journal of Information Technology and Integrity Volume 3 (2025), Article ID: JITI-112

https://doi.org/10.33790/jiti1100112

Review Article

Artificial Intelligence in Higher Education: Ethical Challenges, Governance Frameworks, and Student-Centered Pathways

Marc P. Knox1*, MBA, and Avril W. Knox2, DSW

1*Associate Chief, Digital Engagement & Innovation, Dallas College, Dallas, Texas, United States.

2Assistant Professor & MSW Field Education Director, School of Social Work, East Texas A&M University, Commerce, Texas, United States.

Corresponding Author: Marc P. Knox, MBA, Associate Chief, Digital Engagement & Innovation, Dallas College, Dallas, Texas, United States.

Received date: 24th September, 2025

Accepted date: 24th October, 2025

Published date: 27th October, 2025

Citation: Knox, M. P., & Knox, A. W. (2025). Artificial Intelligence in Higher Education: Ethical Challenges, Governance Frameworks, and Student-Centered Pathways. J Inform Techn Int, 3(2): 112.

Copyright: ©2025. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Artificial intelligence (AI) is rapidly reshaping higher education, transforming pedagogical processes, the learning experience, and administrative procedures. Although these innovations bring new levels of personalization, operational efficiency, and predictive capability, they also raise serious ethical concerns about bias, fairness, governance, and surveillance. This review synthesizes research on AI in higher education, drawing on foundational literature, normative models, case studies, and accounts of technological advances. It describes the opportunities and threats of AI adoption and stresses the importance of participatory design, governance systems, and, above all, continuous ethical oversight. Such safeguards protect the integrity and equity of AI systems and help sustain stakeholder trust as the education sector moves forward.

Keywords: Accountability, Algorithmic Governance, Artificial Intelligence (AI), Ethical Dilemmas, Ethical Frameworks, Ethics, Fairness, Frameworks, Governance, Higher Education, Institutional Governance, Student Trust, Surveillance, Technological Advancement, Technological Perspective, Widespread Datafication

Introduction

The last twenty years have witnessed the widespread deployment of algorithmic systems in society, and the integration of artificial intelligence (AI) into higher education has gained momentum over the past decade. Universities increasingly use AI for admissions, learning analytics, student advising, and classroom management. These tools can significantly enhance student learning and academic performance, opening the prospect of education personalized at scale. Although AI raises questions of equity, ethics, and governance, it also offers potential solutions to long-standing educational problems, including personalized learning and early intervention for at-risk students.

Researchers argue that AI in education is not a purely technical phenomenon but a sociotechnical one, shaped by social, political, and ethical contexts, as Williamson and Eynon note [1]. As institutions introduce AI, they face a tension between the promise of data-driven innovation and the threats of bias, surveillance, and erosion of student autonomy [2,3]. This article offers a comprehensive review of the literature on AI in higher education, synthesizing findings across background theory, ethical concerns, technological development, governance and oversight, student experience, and case studies.

Theoretical Framework: Sociotechnical Systems Theory

Sociotechnical Systems Theory (STS) describes how technology and human systems interact to shape institutional outcomes. Rather than treating artificial intelligence (AI) as a neutral technology, STS understands it as embedded in a web of interrelated values, cultures, and technical infrastructures [3].

Sociotechnical Systems Theory thus contextualizes AI in higher education by highlighting the dynamic relationships among individuals, institutions, and machines. It underscores that ethical practice is not achieved by technical design alone but emerges from how technology is socially organized, managed, and interpreted. Grounding this review in STS also permits a nuanced exploration of how universities can negotiate the tensions among innovation, governance, and human values in the era of AI.

Method of Literature Selection

Relevant literature was located through searches of ProQuest, Scopus, ScienceDirect, Google Scholar, and Taylor & Francis Online, using keywords such as artificial intelligence, higher education, ethics, governance, fairness, and student trust. The review concentrated on peer-reviewed articles and systematic reviews published between 2020 and 2025 that addressed ethical frameworks, governance models, and student-oriented uses of AI. Articles unrelated to higher education or lacking ethical analysis were excluded, although major theoretical works published before 2020 were retained to ground key concepts. This search strategy ensured that the most recent and influential scholarship on the ethical and sociotechnical dimensions of AI in higher education was incorporated.

Literature Review

Artificial Intelligence in Education: Historical and Conceptual Foundations

Artificial intelligence has been integrated into education continuously over the last twenty years. Educational data mining and learning analytics, as introduced by Baker and Siemens [5], provided the first large-scale entry point, enabling institutions to use clickstream data, learning management system records, and academic performance data to forecast academic risk and deliver targeted interventions. They argue that these tools shift pedagogy from reactive to proactive and provide personalization at scale, a possibility that makes customized learning one of the most promising aspects of the future of education.

In their foundational textbook, Russell and Norvig [6] explain the algorithms and architectures, such as decision trees, deep neural networks, and reinforcement learning, that underlie these applications. Their work situates educational AI within a broader shift in computational intelligence: the ability to process complex data in real time has opened classroom possibilities that were previously inaccessible. Williamson and Eynon [1], meanwhile, offer a timely reminder that AI in education is not a new aspiration. The instructional models of the 1960s-1980s promised customized learning routes but ultimately failed because of limited computing power and simplistic models. Today's AI inherits those ambitions while introducing new threats, including the pervasive datafication of students and the imposition of algorithmic controls. This historical perspective shows that AI in education has developed not linearly but cyclically, with each generation of innovation extending and complicating earlier models.

Ethical Dilemmas: Bias, Fairness, and Inequality

Bias in artificial intelligence is a pervasive problem. Mehrabi et al. [7] catalogue its types, including measurement, representation, and historical bias, and argue that such inequities often reflect deeply rooted social patterns rather than mere flaws in code. They note that technical fixes cannot work without measures to address the underlying structural imbalances.

Noble [2] offers a clear example of how this process works, showing how search engines can perpetuate racist stereotypes. Her analysis is not explicitly about higher education, but the findings remain instructive: algorithmic systems can reinforce discriminatory discourse, foreshadowing the harms that AI systems in universities could inflict on marginalized students. According to Veale and Binns [8], assessing fairness often requires access to demographic data, yet obtaining that data may itself violate privacy regulations. This conflict highlights the entanglement of accuracy, privacy, and justice in educational AI.
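To make this tension concrete, the sketch below runs the simplest kind of fairness audit, a demographic parity check, on synthetic decisions. Even this minimal audit needs exactly the protected group labels whose collection Veale and Binns flag as legally and ethically fraught; the data and the choice of metric are illustrative assumptions, not drawn from any cited study.

```python
# Minimal demographic-parity audit on synthetic decisions (illustrative).
# Note that the audit itself requires the protected attribute `group`,
# which institutions may be unable to collect lawfully.
import numpy as np

rng = np.random.default_rng(1)
n = 500
decisions = rng.random(n) < 0.4          # synthetic model outcomes (True = admit)
group = rng.choice(["A", "B"], size=n)   # protected attribute the audit depends on

rates = {g: decisions[group == g].mean() for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])       # 0.0 would mean equal admit rates
print(f"admit rates: {rates}, parity gap: {gap:.3f}")
```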

Floridi et al. [9] contribute AI4People, an ethical framework built on the principles of beneficence, non-maleficence, autonomy, justice, and explicability. The framework bridges philosophy and practice, offering higher education leaders a roadmap for aligning AI with the common good. Collectively, these works argue that bias in AI is not merely a technical matter but a product of ingrained social and political processes.

Technological Innovations in AI and Education

Advances in artificial intelligence (AI) are reshaping higher education through adaptive learning platforms, predictive analytics, intelligent tutoring systems, and generative AI tools. Each of these technologies offers pedagogical possibilities while creating new ethical and instructional tensions that require critical analysis.

Predictive analytics and adaptive learning systems can individualize the learning process and identify at-risk students, improving engagement and retention [5,10]. They support instructor-led teaching and individualized learning pathways. However, their dependence on automated decision-making can foster data determinism, in which machine reasoning substitutes for human judgment. Without careful attention, these tools risk reducing learning to measurable outcomes while neglecting the social and emotional dimensions at the core of student development.
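As an illustration of the predictive analytics discussed above, the following is a minimal sketch of an early-warning classifier trained on synthetic engagement data. The features, data, and model are hypothetical assumptions for exposition, not any institution's actual system, and, consistent with the caution above, its output is framed as a score for human advisors rather than an automated verdict.

```python
# Hypothetical early-warning sketch: logistic regression over synthetic
# LMS engagement features. Illustrative only, not a production system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Assumed features: weekly logins, submission rate, forum posts, grade average.
X = rng.random((n, 4))
# Synthetic label: low engagement loosely correlated with "at risk".
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.25, n) < 0.75).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

risk = clf.predict_proba(X_test)[:, 1]   # a score for advisors, not a verdict
print(f"holdout AUC: {roc_auc_score(y_test, risk):.2f}")
```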

AI-based tutoring systems and chatbots provide opportunities for real-time feedback and formative assessment [11]. Through co-design, such systems can be built with students to enhance autonomy and involvement. Overreliance on automated tutoring, however, may encourage superficial learning that emphasizes following procedure over questioning. Teachers should therefore apply these tools as extensions of, not substitutes for, human teaching.

Surveillance technologies (such as plagiarism detectors and remote proctoring tools), by contrast, carry severe pedagogical consequences. Although their stated aim is academic integrity, they tend to foster cultures of compliance and mistrust [3]. Students under constant surveillance are discouraged from the creativity, collaboration, and risk-taking that are hallmarks of authentic learning.

Lastly, the emergence of generative AI and automated feedback tools creates new demands for AI literacy. Educators and students must think critically about algorithmic outputs and their effects on authorship and evaluation. These developments highlight a central pedagogical principle: AI should act not as an independent teacher but as an augmentative companion. The future of efficient, student-centered education lies in balancing technological efficiency with the relational and reflective dimensions that make learning human.

Challenges in Implementing AI in Higher Education

Despite this potential, the implementation of artificial intelligence in higher education faces numerous challenges. A major concern is bias and inequality. Mehrabi et al. [7] demonstrate that machine-learning systems often replicate structural disparities, while Noble [2] shows how algorithms perpetuate racial stereotypes in search-engine results. The paradox, as Veale and Binns [8] note, is that reducing bias may require collecting sensitive demographic information, yet gathering such information creates additional ethical risks.

Slade and Prinsloo [12] argue that unresolved questions about consent, data ownership, and predictive analytics raise ethical dilemmas. Similarly, Jones, Black, and McLaughlin [13] and Zeng, Lu, and Huangfu [14] note that governance structures remain incoherent at both institutional and national levels, producing uneven oversight.

As Selwyn, Pangrazio, and Nemorin [3] show, surveillance technologies normalize monitoring in schools, raising questions of privacy and autonomy. Kemp and Livingston [11] add that excluding students from design processes makes AI systems seem invasive and undermines their legitimacy.

Finally, AI faces technical and contextual constraints. Broussard [15] shows that algorithms routinely fail to understand human needs, leading to harmful misclassifications. Williamson and Eynon [1] add that AI deployment demands infrastructure and expertise that many institutions cannot afford. Collectively, these works demonstrate that, despite AI's potential for innovation, its application is limited by ethical, technical, and institutional factors.

Governance, Accountability, and Surveillance

The rapid growth of artificial intelligence confronts college campuses with complex governance issues. Slade and Prinsloo [12] identify dilemmas of consent, data ownership, and the use of predictive analytics, while Selwyn, Pangrazio, and Nemorin [3] caution against the normalization of surveillance technologies.

At the institutional level, Jones, Black, and McLaughlin [13] report on efforts to design governance structures that balance innovation and accountability. Recent institutional case studies show universities instituting formal AI governance frameworks that emphasize transparency, stakeholder engagement, and ethical oversight [16]. Oncioiu [17] reaches a similar conclusion: effective governance depends on active stakeholder involvement and established lines of responsibility, particularly for AI tools that influence student evaluation and academic integrity.

Zeng, Lu, and Huangfu [14] extend this discussion to the cross-national level, showing how regulatory environments shape university practices. These findings highlight the need for governance that is dynamic, context-specific, and oriented toward building trust.

Case Studies and Applied Examples

The scholarship on artificial intelligence in higher education is primarily conceptual or theoretical, but several empirical studies provide case-based insight into the practical complexities of implementation. Such examples help bridge the gap between theory and practice. Recent empirical work by Wu et al. [10] demonstrates that universities where faculty and students engage in policy formulation exhibit greater trust in AI applications and fewer governance failures, supporting the importance of the participatory governance models outlined by Kemp and Livingston [11].

Participatory Design in Student Support

As Kemp and Livingston [11] describe, AI-based support tools were developed through a series of participatory design workshops in which students collaborated directly on their creation. These workshops showed that students were more inclined to trust and accept AI systems when they had a direct part in designing them. Participants emphasized, in particular, the importance of systems that addressed not only their academic needs but also their social and emotional needs.

Surveillance Technologies in Schools

Selwyn, Pangrazio, and Nemorin [3] examine surveillance technologies in school settings, including biometric attendance systems, classroom camera monitoring, and predictive behavior algorithms. These tools were introduced to increase efficiency and discipline, but they produced unintended consequences: students complained of being micromanaged, and teachers reported losing room for professional choice and judgment. These cases show how technological innovation can normalize surveillance, provoking ethical issues of privacy, consent, and trust. Both accounts point to a larger challenge: ensuring that AI implementation aligns with the educational goal of developing independent, critical-thinking students.

Institutional Governance Frameworks

Jones, Black, and McLaughlin [13] provide institutional-level examples of AI governance structures trialed in universities. Some institutions have established AI ethics committees to review emerging technologies, while others have adopted policy guidelines mandating transparency and accountability in the use of student data. These cases demonstrate the challenge of operationalizing governance: although the policies existed, their implementation was uneven, and faculty and students reported confusion about how the guidelines worked in practice. The example shows that governance is not merely a matter of creating structures but of embedding them in institutional culture.

Cross-National Policy Environments

Zeng, Lu, and Huangfu [14] discuss differences in higher education AI governance between China and Europe. In China, state policy drove AI adoption, focusing largely on scaling data collection and embedding AI in national innovation strategies. By contrast, European institutions often framed governance in terms of privacy and compliance with the General Data Protection Regulation (GDPR). These examples suggest that institutional practices are strongly shaped by national policy environments, implying that AI governance cannot be understood apart from its wider geopolitical and cultural context.

Critical Synthesis: Contradictions, Tensions, and Gaps in Practice

The literature from 2022 to 2025 indicates a growing consensus that artificial intelligence (AI) in higher education presents not only an opportunity but also a continuous ethical challenge. Although responsible-AI governance frameworks have proliferated, actual implementation still lags behind stated principles. Wu, Zhang, and Carroll [16] and Oncioiu [17] show that although universities have formally established ethics committees, algorithmic review boards, and governance policies, in many instances these mechanisms exist only on paper. Institutions with fewer resources or less technical capacity find it especially hard to institutionalize high-level ethical commitments. This gap between AI policy and AI practice is among the most pressing tensions in the recent literature.

Transparency is a hallmark of ethical AI governance, but it often conflicts with privacy and feasibility. Veale and Binns [8] explain that gauging algorithmic fairness requires demographic and contextual information that institutions may be legally or ethically unable to collect. Slade and Prinsloo [12] warn that even well-intentioned analytics systems can violate student autonomy if implemented without consent. According to Wu et al. [16], a common response in universities has been tiered models of transparency: detailed documentation for high-stakes AI systems, such as predictive analytics in admissions or retention, and lighter summaries for lower-risk systems. The practice is not ideal, but it reflects a real-world trade-off between moral ideals and institutional realities.
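A hypothetical sketch of such a tiered-transparency register follows; the tier names, example systems, and documentation levels are illustrative assumptions, not drawn from any cited university policy.

```python
# Hypothetical tiered-transparency register: higher-risk AI systems get
# deeper documentation, lower-risk systems get lighter summaries.
RISK_TIERS = {
    "high": {
        "examples": ["admissions scoring", "retention prediction"],
        "documentation": "full technical report, data provenance, ethics review",
    },
    "medium": {
        "examples": ["course recommendation"],
        "documentation": "summary datasheet plus annual review",
    },
    "low": {
        "examples": ["campus FAQ chatbot"],
        "documentation": "brief public description",
    },
}

def documentation_required(system: str) -> str:
    """Return the documentation level for a named system; unknown systems
    default to the highest tier, failing safe toward more scrutiny."""
    for spec in RISK_TIERS.values():
        if system in spec["examples"]:
            return spec["documentation"]
    return RISK_TIERS["high"]["documentation"]

print(documentation_required("retention prediction"))
```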

Adaptive technologies and learning analytics are promoted on grounds of personalization and efficiency [5,6]. Their use, however, can slide smoothly into student surveillance, eroding students' confidence. As Selwyn, Pangrazio, and Nemorin [3] note, surveillance practices have become normalized in academic settings, and Sommers and Zhang [18] trace the same ethical issue in a K-12 context, underscoring how pervasive privacy risks are in educational AI systems. Institutions that framed AI as a human-centered support system rather than a control system achieved more sustainable trust. Wu et al. [16] found that when universities clearly differentiated pedagogical analytics (to improve learning) from disciplinary analytics (to enforce behavior), students showed markedly higher trust in AI tools.

Equity-by-design remains a distant goal. Mehrabi et al. [7] offer a robust taxonomy of bias and mitigation strategies, but, as Williamson and Eynon [1] observe, smaller and regional institutions often lack the data needed to run fairness audits effectively. As Broussard [15] and Oncioiu [17] argue, complex, uninterpretable AI models can do more harm than good in low-resource contexts. The recent literature instead proposes prioritizing interpretable, transparent systems, even at some cost to predictive accuracy. By adopting smaller, interpretable models documented with model cards or algorithmic transparency statements, universities can begin building equitable AI infrastructures that are not prohibitively expensive.
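As a sketch of what the model cards and algorithmic transparency statements mentioned above might look like in a lightweight form, the following hypothetical structure records intended use, data provenance, limitations, and fairness checks; the fields and values are illustrative assumptions, not a standard schema.

```python
# Hypothetical lightweight model card for an interpretable advising model.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    fairness_checks: list = field(default_factory=list)

card = ModelCard(
    name="advising-risk-v1 (hypothetical)",
    intended_use="Flag students for outreach by a human advisor only",
    training_data="De-identified LMS engagement records (assumed)",
    known_limitations=["May underrepresent part-time students"],
    fairness_checks=["Demographic parity gap below 0.05 across reported groups"],
)
print(card)
```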

Kemp and Livingston [11] show that participatory co-design boosts legitimacy and relevance, although it remains underused; most institutions introduce AI tools in response to administrative requirements rather than stakeholder involvement. Wu et al. [10] demonstrate that Big Ten campuses using organized consultation, such as ethics committees with student representation, show lower rates of algorithmic mistrust and higher faculty buy-in. Oncioiu [17] likewise argues that ethics must be embedded in organizational practice rather than treated as a compliance exercise. Together these results mark a significant shift toward participatory governance as both an ethical and a functional requirement.

Governance structures remain fragmented despite a positive developmental trend. High-level statements of principle abound, but both Sommers and Zhang [18] and Oncioiu [17] warn that policy statements rarely specify how to balance ethics against practical constraints such as staffing, time, and technical expertise. This points to the need for what Wu et al. [16] call adaptive governance: continuous review and revision that scales oversight and transparency to institutional capacity. In resource-constrained environments, lighter but consistent documentation (brief ethics summaries, risk registers, and annual tool reviews) can instill ethical reflection without overwhelming staff.

Together, these studies indicate that policy alone cannot achieve ethical AI governance in higher education. It requires active negotiation among competing values: openness and confidentiality, creativity and restraint, involvement and convenience. The 2022-2025 literature underscores a crucial point: ethical frameworks are only as effective as the institutional ecosystems that support them. Responsible AI is not a final destination but an ongoing practice of ethical tuning, in which institutions revise their technical aspirations to reflect the human values of fairness, accountability, and care. Sustaining this balance requires humility, flexibility, and, above all, a willingness to operationalize ethics not as an abstract objective but as everyday teaching and learning stewardship.

Discussion

The literature reveals a persistent debate over the possibilities and dangers of AI. On the one hand, AI enables individualized learning, anticipatory assistance, and administrative efficiency [5,6]. On the other, it can deepen inequality, diminish student privacy, and erode institutional trust [2,3].

Key themes emerge:

• Equity and fairness necessitate rigorous audits of AI systems [7,8].

• Ethical frameworks should be integrated into institutional governance and accountability practices [13,14].

• Participatory design and clear communication are essential to student trust and legitimacy [11].

The literature review reveals a central tension in the introduction of AI to higher education: its dual potential to drive innovation and to reinforce inequities. On the one hand, applications such as predictive analytics and adaptive learning platforms offer opportunities to personalize education, improve retention, and deliver timely interventions to at-risk students [5,6]. On the other, documented discrimination and inequity in AI systems show that unchecked use of these technologies may amplify systemic inequality [2,7]. This paradox confirms that AI in education is not only a technological issue but also an ethical and regulatory one.

Even though frameworks such as AI4People [9] offer guiding principles, those principles translate poorly into practice. Institutions often introduce AI without explaining its implications to stakeholders or conducting a comprehensive fairness analysis. The tension between privacy and fairness described by Veale and Binns [8] further complicates implementation, and the failure of decontextualized algorithms to grasp human needs, as Broussard [15] illustrates, makes it harder still. Collectively, these results suggest that ethics must be enacted not only as guidelines but within everyday institutional practice.

Universities have been experimenting with various forms of governance [13], but there is no consensus on what constitutes effective governance. As Zeng, Lu, and Huangfu [14] illustrate, governance varies widely across countries, raising questions about scalability and transferability. Meanwhile, Slade and Prinsloo [12] and Selwyn et al. [3] warn that without public oversight, AI may become an instrument of surveillance and control rather than support. Governance must therefore balance innovation with accountability, giving institutions access to predictive tools without compromising student data.

Student experience is a pivotal factor in AI adoption. Kemp and Livingston [11] demonstrate that participatory design is associated with increased trust and legitimacy, whereas Selwyn et al. [3] and Noble [2] caution that poorly managed AI can harm autonomy and perpetuate discrimination. The literature suggests that AI's effectiveness depends more on how its users perceive it than on its technical sophistication. Excluding students from design processes risks alienating them and compromising institutional credibility.

On the technological front, emerging tools such as adaptive platforms and NLP-based tutoring agents are potentially valuable, yet they often arrive alongside more controversial technologies, including biometric surveillance [3]. This incongruity highlights a point that should not be overlooked: not every technological change amounts to an educational improvement. Institutions must therefore distinguish critically between innovations that align with pedagogical values and those that conflict with ethical commitments.

Lastly, the literature identifies gaps and future needs. Williamson and Eynon [1] emphasize historical awareness, observing that discussions of AI often overlook earlier waves of technological optimism and disappointment. Longitudinal research is necessary to determine whether AI systems fulfill their promise of promoting equity and improving learning outcomes or merely repeat earlier cycles of technological hype. Comparative case studies across diverse cultural and national contexts [14] would likewise provide deeper insight into how governance forms, resource levels, and cultural expectations shape AI.

Taken together, these ideas suggest that AI in higher education is not merely a technical fix for educational problems but a sociotechnical phenomenon that must be approached with care. Its future depends on how well universities incorporate ethical frameworks, build inclusive systems and governance capacity, and guard against unintended consequences. Whether AI advances the mission of higher education, generating both opportunity and justice, will hinge on how institutions balance innovation with equity. These findings indicate that AI adoption in higher education is not simply a technological undertaking but a multifaceted process of cultural, ethical, and institutional negotiation.


Future Directions for AI in Higher Education

The literature proposes several courses of action for developing AI responsibly in higher education. First, ethical issues should be addressed through frameworks such as AI4People [9]. Second, governance models must be strengthened to integrate national and global policy standards with institutional oversight [13,14]. Third, student-centered design should be prioritized to build trust and reduce perceptions of surveillance [3,11].

Future efforts should also keep humans central to decision-making to prevent overreliance on autonomous systems [15]. Cross-cultural and comparative studies are needed to examine how different contexts shape AI [1,14], and longitudinal research would reveal AI's effects on long-term learning outcomes and equity. New scholarship reinforces the need for global standards of artificial-intelligence governance and ongoing ethics education, especially in academic roles [17]. Future AI research should be longitudinal and cross-disciplinary, assessing the long-term effects of AI on learning equity and institutional culture [18].

Conclusion

Higher education is undergoing an artificial-intelligence revolution that brings both opportunities and threats. As the reviewed literature demonstrates, AI adoption should be guided by ethical standards, sound governance, and participatory design to deliver equitable outcomes. Future research should pursue longitudinal case studies, further cross-cultural comparisons of governance, and ways to incorporate student voices into AI development. By adopting open, transparent, and responsible practices, higher education can move toward an equitable AI future that fosters innovation while protecting rights and values.

Competing Interests:

The authors declare that they have no competing interests.

References

  1. Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.

  2. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

  3. Selwyn, N., Pangrazio, L., & Nemorin, S. (2020). Surveillance technologies in schools: Mapping the terrain. Oxford Review of Education, 46(3), 304–322.

  4. Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). The social construction of technological systems: New directions in the sociology and history of technology. MIT Press.

  5. Baker, R. S., & Siemens, G. (2014). Educational data mining and learning analytics. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences (2nd ed., pp. 253–274). Cambridge University Press.

  6. Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

  7. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35.

  8. Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 1–17.

  9. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.

  10. Wu, C., Zhang, H., & Carroll, J. M. (2024). Artificial intelligence governance in higher education: Comparative case studies from Big Ten universities. arXiv preprint.

  11. Kemp, N., & Livingston, D. (2021). AI and student support: Building trust through participatory design. Journal of Educational Technology, 48(3), 45–60.

  12. Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529.

  13. Jones, R., Black, K., & McLaughlin, P. (2022). Developing governance frameworks for AI in higher education. Higher Education Policy, 35(2), 202–217.

  14. Zeng, Y., Lu, E., & Huangfu, C. (2022). Linking governance and artificial intelligence: A new challenge for higher education. AI & Society, 37(1), 91–104.

  15. Broussard, M. (2019). Artificial unintelligence: How computers misunderstand the world. MIT Press.

  16. Wu, C., Zhang, H., & Carroll, J. M. (2024). AI governance in higher education: Case studies of guidance at Big Ten universities. Future Internet, 16(10), 354.

  17. Oncioiu, I. (2025). Artificial intelligence governance in higher education. Societies, 15(6), 144.

  18. Sommers, E., & Zhang, L. (2025). The ethics of using AI in K–12 education: A systematic literature review. Technology, Pedagogy and Education, 34(1), 45–67.

