ETHICAL CHALLENGES OF AI IN HR: WITH COMPANY CASES

Introduction

I want to demonstrate that AI in HR is more than a theoretical concept: real businesses are now facing real ethical dilemmas. In this article, I draw on recent company examples and established HR and employee relations literature to move from theory to critique.

 

AI's current role in human resources

  • Evaluating video interviews and resumes.
  • Tracking output and performance, particularly when working remotely or from home.
  • Chatbots for virtual assistants and HR concerns.
  • Predictive analytics: skill gaps, attrition, and turnover.

These applications align with what Blyton & Turnbull (2004), Boxall, Purcell & Wright (2008), and Bratton & Gold (2017) describe as the acceleration of HR functions. However, they also create moral conflict.

Examples of businesses and ethical dilemmas in 2025

The cases analysed below draw on Langley (2025), The Economic Times (2025), and Croft (2024).

Critical analysis and theoretical viewpoints

1. Herzberg’s two-factor theory

Firstly, Herzberg's two-factor theory clarifies the impact of AI on motivation. Opaque algorithms and coercive policies frequently violate hygiene factors such as fair treatment, privacy, and transparency, producing dissatisfaction. Poor hygiene can lead to ethical and morale problems, as the Google mandatory-data case and Workday's algorithmic bias demonstrate (Herzberg, 1959). Applied carefully, AI can strengthen motivators such as growth and recognition, but these cases show that without ethical oversight AI becomes a source of demotivation rather than encouragement.

2. Control Theory

When considering AI monitoring in HR, I find Control Theory highly pertinent. Control theory centres on feedback loops and the alignment of individual performance with organizational goals (Armstrong, 2017). Constant employee monitoring in cases such as LawSikho produces a one-way control system instead of a helpful feedback loop. This, in my view, is an abuse of AI: it undermines autonomy and trust rather than guiding performance, underscoring the moral dangers of techno-determinism.

3. Bandura’s Social Cognitive Theory

These situations also relate to Bandura's Social Cognitive Theory, specifically self-efficacy: AI tools can either boost or erode workers' confidence (Bandura, 1986). For instance, when Workday's resume-screening algorithms disproportionately affect older applicants, those applicants' confidence in their ability to succeed is diminished. This is an ethical dilemma in my opinion: technology ought to enhance workers' capabilities, not penalize them through embedded biases.

I now set these examples against further theory:

  • Organizational control and power: Clegg et al. (2006) contend that power in organizations is unevenly distributed. AI surveillance, coerced consent, and secretive algorithms shift power significantly toward management and technology providers.
  • Voice and employee relations: According to Blyton & Turnbull (2004), voice and trust are essential. When employees feel pressured (as in the Google case) or kept uninformed (as in the LawSikho case), their voices are silenced and relationships deteriorate.
  • Ethical distance and strategic alignment: Purcell & Boxall (2022) contend that HR strategy must align with values, business logic, and ethical standards. Employees suffer when AI tools are implemented solely for cost or efficiency reasons, without regard for justice or fairness.
  • Technological determinism versus social shaping: Bratton & Gold (2017) caution against treating AI as inevitable and value-neutral. These business examples demonstrate the importance of social decisions about how AI is developed, regulated, and implemented.

Critical analysis: advantages and disadvantages

What do we gain, and what remains problematic?

Positives and strengths:

  • Efficiency improvements: automating repetitive HR duties, freeing HR personnel for higher-value work, and expediting procedures.
  • If properly designed, certain AI tools can lessen bias (e.g., eliminating human resume-filtering errors).
  • As demonstrated in previous reports by PepsiCo, IBM, and others, predictive analytics aids planning for staffing, turnover, and training needs (Paradigmiq, 2025).


Limitations and moral hazards:

  • Coercion and consent: the line between required and optional is blurring. The Google case demonstrates the coercive nature of requiring data for benefits.
  • The transparency "black box": workers frequently cannot see how algorithmic decisions are made.
  • Discrimination and bias: the Workday case demonstrates that even tools designed with "neutral" goals can discriminate via historical data or proxies.
  • The cost to trust and morale: relationships suffer when people believe secretive AI is watching and judging them, and cultures can deteriorate.
  • Legal and regulatory uncertainty: some nations have lagged behind, and workers may go unprotected due to inadequate enforcement.

What businesses should do differently (My suggestions)


Here, I make recommendations for HR professionals based on these cases and theory.

1. Control and Supervision

  • Create committees to oversee AI HR tools, including employee representatives and ethics and legal specialists.
  • Routinely evaluate the privacy and bias of AI tools in use.
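The bullet above calls for routine bias evaluations. As a minimal sketch of what one such check could look like, the widely used four-fifths (adverse impact) rule compares selection rates across groups; the group names, counts, and 0.8 threshold below are illustrative assumptions, not a definitive audit method:

```python
# Illustrative sketch: a simple adverse-impact check (the "four-fifths rule")
# that an AI oversight committee might run on screening outcomes.
# Groups, counts, and the 0.8 threshold are assumptions for illustration.

def selection_rate(selected, applicants):
    """Fraction of applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate.

    outcomes: dict mapping group -> (selected, applicants).
    Returns a dict of group -> ratio; ratios below 0.8 are commonly
    treated as evidence of possible adverse impact.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results for one review period
outcomes = {
    "group_a": (45, 100),  # 45% selected
    "group_b": (30, 100),  # 30% selected
}

ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is about 0.67, well under 0.8
print(flagged)  # group_b would be flagged for human review
```

A check like this is deliberately crude; a real audit would also examine proxies, intersectional groups, and statistical significance, but even this level of routine monitoring gives an oversight committee something concrete to review.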

2. Consent and Openness

  • Obtain clear consent before using employee data, and make opt-outs practical.
  • Explain how scoring and decisions are made.


3. A human in the loop

  • Human judgment should always be retained in important HR decisions (discipline, promotion, and hiring).
  • AI should be used as a guide, not a decision maker.

4. Voice and Participation of Employees

  • Engage employees in the development, implementation, and evaluation of AI systems.
  • Provide avenues for complaint when AI judgments seem unjust.

5. Education and Ethical Competence

  • Training on bias, ethical AI, and data privacy (both local and international laws) is necessary for HR personnel.
  • Workers must understand what information is gathered and how it is used.

6. Implementation Localization

  • Following IHRM theory, adapt tools to national legal frameworks and cultural norms.
  • What is ethically acceptable in the US or the EU might need to be adapted for Asia, Africa, or Latin America.

7. Compliance & Regulation

  • Stay informed about laws such as the EU AI Act, employment law, and privacy laws.
  • Be prepared for liability and penalties (as the Workday case shows).
 

These recommendations align with Bankins' (2021) decision-making framework for the ethical use of AI in HRM.

 

My conclusion

AI in HR can be beneficial, but it also raises significant moral dilemmas. The Google/Nayya, Workday, LawSikho, and Serco cases demonstrate real problems with privacy, discrimination, surveillance, and consent. HRM, IHRM, and employee relations theory is useful because it cautions us to consider context, voice, and power. In my opinion, businesses should not view AI solely as a "magical efficiency tool": transparency, human voice, ethical boundaries, and supervision are crucial.

References

Armstrong, M. (2017) Armstrong’s Handbook of Human Resource Management Practice. 14th edn. London: Kogan Page.

Bandura, A. (1986) Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice Hall.

Bankins, S. (2021) ‘The ethical use of artificial intelligence in human resource management: a decision-making framework’, Ethics and Information Technology. Available at: https://www.researchgate.net/figure/Ethical-applications-of-artificial-intelligence-to-HRM-a-decision-making-framework_fig1_356539511 (Accessed: 28 November 2025).

Blyton, P. & Turnbull, P. (2004) The Dynamics of Employee Relations. 3rd edn. Basingstoke: Palgrave Macmillan.

Boxall, P., Purcell, J. & Wright, P. (2008) The Oxford Handbook of Human Resource Management. Oxford: Oxford University Press.

Bratton, J. & Gold, J. (2017) Human Resource Management: Theory and Practice. 6th edn. London: Palgrave Macmillan.

Brewster, C., Chung, C. & Sparrow, P. (2017) Globalizing Human Resource Management. 2nd edn. London: Routledge.

Briscoe, D., Schuler, R. & Tarique, I. (2012) International Human Resource Management: Policies and Practices for Multinational Enterprises. 4th edn. London: Routledge.

Business Insider (2025) ‘Google used startup Nayya’s AI to manage employee health benefits — and some staff say they were told data sharing was mandatory.’ Available at: https://www.businessinsider.com (Accessed: 28 November 2025).

Clegg, S., Courpasson, D. & Phillips, N. (2006) Power and Organizations. London: SAGE Publications.

Edwards, T. & Rees, C. (2011) International Human Resource Management: Globalization, National Systems and Multinational Companies. 2nd edn. Harlow: Pearson Education.

Frege, C. & Kelly, J. (2020) Comparative Employment Relations in the Global Economy. 2nd edn. London: Routledge.

Herzberg, F. (1959) The Motivation to Work. New York: Wiley.

Paradigmiq (2025) ‘Predictive analytics in HR: global company examples (PepsiCo, IBM, Unilever).’ Available at: https://paradigmiq.com/blog/predictive-analytics-hr (Accessed: 28 November 2025).

Purcell, J. & Boxall, P. (2022) Strategy and Human Resource Management. 5th edn. London: Red Globe Press.

The Economic Times (2025) ‘AI tool flags moonlighting at LawSikho; employee resigns after surveillance concerns.’ Available at: https://economictimes.indiatimes.com (Accessed: 28 November 2025).

The Guardian (2025) ‘UK regulator orders Serco Leisure to stop using facial recognition to monitor staff attendance.’ Available at: https://www.theguardian.com (Accessed: 28 November 2025).



  

Comments

  1. This is an excellent and thought-provoking analysis of the ethical side of AI in HR. I really like how you linked real company cases with HRM theory — it gives your argument both depth and credibility. You could make it even stronger by suggesting how organizations can create a clear balance between innovation and employee trust in AI adoption.

    1. I appreciate your insightful comments. To illustrate the potential benefits and moral conundrums of adopting AI, as covered by Bratton and Gold (2017) and Marchington and Wilkinson (2020), I sought to critically connect real-world business cases with accepted HRM theories. I agree that the analysis could be strengthened by offering ways to balance employee trust and innovation. Purcell and Boxall (2022) support this idea, highlighting the importance of coordinating technology use with open and equitable HR procedures. As Blyton and Turnbull (2004) note, preserving open communication and trust is crucial to ensuring AI enhances rather than detracts from human-centered employment relations.
  2. This article presents a very timely and insightful discussion on how AI is reshaping HR practices. I really like how you connect theory with practical examples, such as Amazon’s bias issues and IBM’s career support, which makes the content realistic and relatable. The reminder that AI should support human judgment—not replace it—shows a mature understanding of HR’s people-centred nature. The flow is clear and engaging!

    1. I appreciate your positive comments. In keeping with Bratton and Gold's (2017) view of HR as both strategic and people-focused, I aimed to provide a timely analysis of how AI is changing HR by tying theoretical viewpoints to real examples such as IBM's AI-driven career support and Amazon's bias issues. I also wanted to support Purcell and Boxall's (2022) contention that technology should supplement human judgment in HR procedures rather than replace it. As Marchington and Wilkinson (2020) and Blyton and Turnbull (2004) highlight, preserving the human element of HR while using AI innovation is crucial to ensuring fairness, engagement, and trust in contemporary organizations.
  3. This is a very thoughtful and well-structured discussion of the ethical dilemmas created by AI in HR. I appreciate how you connected real business cases with foundational HR and employee relations theories — it shows strong analytical depth. Your emphasis on power, transparency, and employee voice highlights the human side of technology adoption, which is often overlooked. The recommendations you propose, such as “human in the loop” decision-making and localization of implementation, are especially practical. You might further strengthen it by briefly contrasting how different industries (e.g., tech vs. manufacturing) face unique ethical pressures when deploying AI in HR.

    1. I appreciate your thoughtful comments. By connecting actual business cases with fundamental HR and employee relations theories, I sought to critically examine the ethical conundrums of AI in HR, drawing on Blyton and Turnbull (2004), Bratton and Gold (2017), and Clegg, Courpasson and Phillips (2006) for analytical depth. The emphasis on power, transparency, and employee voice reflects the importance of upholding human-centered practices despite technological advances. The recommendations, including "human-in-the-loop" decision-making and localized implementation, are intended to be practically applicable, and Purcell and Boxall (2022) emphasize that aligning AI adoption with ethical and strategic considerations is crucial. Since different industries face distinct difficulties in balancing innovation and trust, considering industry-specific ethical pressures would add further nuance.
  4. This is a crucial analysis. Your case studies are the best example of the conflict between efficiency and ethics. I like the practical suggestions you made, particularly the recommendation under "A human in the loop". However, if an AI tool is shown to decrease bias in hiring yet also decrease staff trust and morale because of its black-box character, what should HR prioritize ethically: reduced bias or increased trust?

    1. I appreciate your insightful comment. The situation you present exemplifies the age-old conflict in HR ethics between efficiency and relational trust. As Bratton and Gold (2017) and Blyton and Turnbull (2004) emphasize, HRM must balance performance goals, employee relations, and ethical considerations. Purcell and Boxall (2022) contend that trust, transparency, and employee engagement are the cornerstones of long-term organizational success, even though eliminating bias in hiring is essential for equity and legal compliance. By fusing human judgment with AI's analytical objectivity, a "human-in-the-loop" strategy can ease these conflicts and help ensure that decisions are both fair and regarded as legitimate by employees. In practice, HR should prioritize a well-rounded approach that reduces bias while preserving trust, for example through open communication and collaborative decision-making.
  5. Great discussion! I especially value your focus on ethical considerations and employee relations alongside AI’s practical advantages. Highlighting human judgment, transparency, and participation makes this a very relevant guide for responsible HR technology implementation.

    1. I appreciate the compliments. My goal was to emphasize that, although AI has useful benefits for HR, employee relations and ethical issues remain crucial, as noted by Bratton and Gold (2017) and Blyton and Turnbull (2004). By emphasizing human judgment, transparency, and participatory practices, the discussion supports Purcell and Boxall's (2022) contention that responsible HR technology implementation requires balancing efficiency, trust, and engagement. This approach ensures that AI upholds, rather than undermines, the human-centered principles necessary for long-term HRM.
  6. This is a clear article on how AI is changing the employee experience at work. It explains how AI can help by making things faster, more personal, and more efficient. It also shows the risks, such as losing the human touch, privacy issues, and unfair treatment if it is not used carefully. Overall, it reminds us that while AI can improve work, it needs to be used with care.

    1. I appreciate your input. Although Bratton and Gold (2017) and Marchington and Wilkinson (2020) have discussed the risks of AI, including the loss of human touch, privacy issues, and potential unfair treatment, I wanted to demonstrate how AI can improve the employee experience by increasing efficiency, personalization, and speed. This supports Purcell and Boxall's (2022) claim that technology in HR should enhance human judgment rather than replace it. By employing AI sensibly, businesses can balance increased productivity with moral and interpersonal concerns, keeping employee engagement and trust at the forefront.
  7. The article discusses the use of AI and virtual assistants in relation to Human Resource Management. Within it, the author examines the ethical boundaries of AI and its lack of a clear "human voice" in HRM. The article discusses well-known corporations such as Google, Workday, and Serco in the context of the nuances and issues surrounding AI: behavioral and legal discrimination, privacy, surveillance, and consent paradoxes. It integrates employee relations with HRM and IHRM theories, sharpening the context around employee "power" and voice, oversight, and balance in AI implementation. The use of AI as merely a transactional tool to lower operational and administrative workloads is strongly criticized, and an appeal is made to businesses to ensure ethical supervision and human balance in the technology. This is particularly important for enhancing trust within the workplace and protecting workers' rights. The appeal to HR professionals and organizational leaders for the responsible adoption of AI that seeks to improve workplace relations is a valuable one.

    1. I appreciate your thoughtful comments. Your careful consideration of the article's discussion of the relational and ethical aspects of AI in HRM is greatly appreciated. When incorporating technology into HR procedures, it is crucial to preserve employee "voice", equity, and supervision, as noted by Blyton and Turnbull (2004) and Bratton and Gold (2017). Google, Workday, and Serco serve as examples of how AI's efficiency must be balanced with human judgment to prevent problems such as discrimination, privacy violations, and eroded trust. Echoing Boxall and Purcell (2016), the argument highlights that AI should be used as a supportive mechanism under ethical supervision rather than just as a transactional tool. This would improve workplace relations, transparency, and employee empowerment in a people-focused HR environment.
  8. This is an excellent article. You have discussed how to face the ethical challenges of AI in HRM within an organization. You have also argued that accepting AI's efficiency benefits requires ethical safeguards, transparency, and human oversight to ensure the technology serves both organizational goals and employee dignity. Furthermore, you have recommended seven principles for HRM with respect to these ethical challenges.

    1. I appreciate your insightful comments. I'm glad you found the discussion of the moral dilemmas raised by AI in HRM insightful. Integrating AI into HR requires balancing efficiency with ethical protections, transparency, and respect for human dignity, as noted by Bratton and Gold (2017) and Blyton and Turnbull (2004). The seven suggested principles are intended to help organizations ensure AI promotes accountability, fairness, and trust while advancing organizational objectives and employee well-being. Following Boxall and Purcell (2016), the focus remains on upholding a human-centered approach so that technology complements, rather than replaces, moral and responsible HR practices.
  9. This is a very relevant and well-written article about the ethical side of AI in HR. I really admire how you’ve connected theoretical perspectives to real-world organizational cases, presenting not just theoretical issues but real-world challenges. Your recommendations are also very practical.

    1. I sincerely appreciate your insightful comments. I'm so happy that the link between theory and practical examples made sense to you. My objective was to demonstrate the practical application of ethical principles in the AI-driven HR environment of today. I sincerely appreciate your feedback regarding the suggestions. I think that the future of ethical HR will be determined by how well innovation and ethics are balanced.

  10. Really enjoyed reading this! You’ve done an amazing job breaking down the ethical side of AI in HR. I like how you’ve shown that it’s not just about efficiency, but also about fairness, transparency & treating people with respect.

    1. Your kind words and insightful analysis are greatly appreciated. The emphasis on ethics frequently gets overlooked when conversations about AI center on automation and efficiency, so I'm delighted it resonated with you.

      Your point about equity, openness, and respect captures the core of responsible AI in HR. Even though AI has the potential to improve decision-making, ethical governance and human-centered design ensure that technology works for people rather than against them.

      Using ethical frameworks such as explainable AI models, bias auditing, and inclusive data practices enables HR teams not only to meet legal requirements but also to foster genuine employee trust. In the end, technology should enhance human potential rather than diminish or replace it.
  11. The article highlights how AI adoption in HR raises complex ethical questions around fairness, transparency, and accountability. It emphasizes that while AI can streamline recruitment, performance evaluation, and workforce planning, it can also amplify biases if algorithms are trained on skewed data. The inclusion of company case studies makes the discussion practical, showing both successful implementations and cautionary tales. It acknowledges AI’s efficiency gains while critically examining risks such as discrimination, privacy breaches, and loss of human empathy. It positions ethical AI as not just a compliance issue but a business imperative for reputation, employee trust, and long-term sustainability.

    1. I appreciate your insightful observations. I agree that AI presents difficult ethical problems for HR, especially with regard to fairness and transparency, since algorithms can readily replicate current disparities when organizational data reflects deeply ingrained bias (Brewster et al., 2017). Scholars emphasize that these advantages must not come at the expense of equity, privacy, or dignity at work, even though the efficiency gains in hiring, planning, and performance management are well documented (Bratton and Gold, 2017). As you point out, case studies demonstrate that technology can only be successful when it is backed by moral leadership and transparent accountability systems. This is consistent with the broader body of HRM literature, which contends that ethical AI adoption is a strategic relational priority for maintaining legitimacy and trust in work relationships (Blyton and Turnbull, 2004).

  12. Sashini, this article shows a clear view of how AI is reshaping HR and creating real ethical concerns. It explains how AI expands HR functions but also increases managerial power. I like how the examples of Google and Workday show risks such as coercive consent and biased algorithms. These concerns echo Blyton and Turnbull’s (2004) work on trust and voice. The article argues that AI must align with ethical HR strategy, not only efficiency goals (Purcell & Boxall, 2022).

    1. I appreciate your thoughtful comment. I agree that AI is changing HR while also changing the balance of power in businesses, especially by increasing managerial supervision and data-driven control over workers. This is in line with concerns expressed in the literature on employment relations, where academics contend that if technology is not regulated ethically, it can exacerbate power and voice imbalances (Blyton and Turnbull, 2004). Your use of Google and Workday as examples shows how algorithmic systems may result in discriminatory outcomes and coercive consent, highlighting the need for more openness and protections. As you correctly note, the difficulty lies in making sure AI promotes HR values instead of just operational goals. According to Purcell and Boxall (2022), technology adoption needs to be based on strategic ethical alignment in order to safeguard participation, fairness, and trust.

  13. This post raises very important points about the ethical challenges of using AI in HR. I like how you examine both the benefits & the risks around fairness, privacy & bias. It is a reminder that while AI offers great potential, it must be implemented with care & ethical oversight


    1. Thanks for your helpful comments. I'm glad the article clearly showed the ethical problems of using AI in HR, like being fair, keeping data safe, and avoiding prejudice (Blyton & Turnbull, 2004; Boxall, Purcell & Wright, 2008; Bratton & Gold, 2017). The idea was to look at the good things AI can do but also think carefully about the bad things, pointing out that using AI responsibly needs good ethical rules (Farnham, 2015; Brewster et al., 2017). I appreciate you noticing this balanced way of thinking, which makes it even clearer that we need to put ethical rules into how we use AI in HR. This will help make sure we use new technology and protect workers' rights.

  14. AI in HR is powerful, speeding hiring and analytics, but it brings real ethical headaches: bias, surveillance, opaque “black-box” decisions, and weakened employee voice (seen in Google, Workday, Serco, and LawSikho). Organizations must insist on transparency, human-in-the-loop oversight, employee participation, bias audits, and localized implementation. Done right, AI supports fairness and efficiency; done poorly, it destroys trust and morale.

    1. The dual reality of AI in HR is succinctly and clearly captured in your argument. I concur that while AI improves analytics and speeds up hiring, it also presents significant ethical issues, particularly when systems function without meaningful employee input or transparency. The examples you cite demonstrate how quickly trust can erode when decisions feel automated or unchallengeable. I appreciate your emphasis on human-in-the-loop oversight and context-specific implementation, because HR ultimately relies on judgement, empathy, and fairness. AI becomes a tool for empowerment rather than control when organizations combine it with robust governance, bias checks, and open communication. However, without these protections, AI runs the risk of escalating inequality and depressing worker morale, which is why ethical discipline is crucial.

  15. Good article.
    It offers a convincing and comprehensive review of the ethical and practical ramifications of AI in HR. The moral conundrums that occur when AI is used without adequate control are skillfully illustrated through recent business cases such as Google, Workday, and LawSikho. Connecting these instances to traditional HR and organizational theories, including Bandura's Social Cognitive Theory, Control Theory, and Herzberg's Two-Factor Theory, strengthens the critique and shows how theory can guide practice. The benefits and risks are discussed in a balanced manner, emphasizing efficiency improvements as well as possible threats to employee autonomy, justice, and confidence. The suggestions, which provide practical advice on consent, openness, employee involvement, and ethical supervision, are especially helpful.

    1. I appreciate your careful and thorough comments. I like how you acknowledged the need to balance the practical benefits of AI in HR with its ethical risks. Because behavior and systems change through ongoing interaction, misuse of AI can directly affect employee autonomy and trust, as Bandura (1986) explains. Connecting real-world examples to Control Theory and Herzberg's Two-Factor Model (Herzberg, 1959) strengthens our understanding of how technology can influence motivation and fairness. Your reflection is also consistent with Bratton and Gold's (2017) emphasis on ethical oversight in contemporary HRM. Since employee participation and transparency remain essential to the responsible adoption of AI, I'm glad the suggestions were helpful.
  16. This is an incredibly insightful analysis of AI in HR! I really appreciate how you go beyond the theoretical and bring in real company cases, it makes the ethical challenges much more tangible. The way you connect AI practices to theories like Herzberg’s two-factor model, Control Theory, and Bandura’s Social Cognitive Theory really helps to understand the human impact behind the technology. I especially liked your emphasis on transparency, employee voice, and “human in the loop” decision making. Too often, organizations see AI as a purely efficiency driven tool and forget that morale, trust, and fairness are just as important. Your recommendations feel practical and grounded, like involving employees in AI oversight and adapting tools to cultural and legal contexts.

    1. I sincerely appreciate your kind and considerate comments. I'm happy that the incorporation of actual business cases made the ethical ramifications of AI more vivid. Because AI fundamentally affects motivation, autonomy, and behavior in the workplace, it is crucial to relate AI practices to Bandura's Social Cognitive Theory, Control Theory, and Herzberg's motivators. As Bratton and Gold (2017) point out, trust and justice must be the cornerstones of any HR system, which is why your point regarding openness and employee voice is so important. Maintaining a "human in the loop" is consistent with Blyton and Turnbull's (2004) contention that technology should never take precedence over human judgment. Thank you so much for acknowledging the importance of employee-centered, culturally sensitive AI governance.

