Artificial intelligence and the transformation of HRM: Toward a reconceptualization of HR system effectiveness

Main Article Content

Anaïs Hébrard
Jean Frantz Ricardeau Registre

Abstract

Research in strategic human resource management (HRM) has traditionally focused on content, that is, on the practices implemented within organizations. This perspective also dominates recent work on artificial intelligence (AI), which centres mainly on the transformation of tasks and activities. An emerging approach, however, emphasizes process, offering a better understanding of how HR systems are structured and of the signals they send to employees. In this vein, this article proposes a conceptual model showing how the parameters of responsible AI (reliability, safety, and trust) influence the dimensions of the HR process (distinctiveness, consistency, and consensus) and shape perceived fairness signals. The study thereby contributes to the literature and offers recommendations to guide future research.

Article Details

How to Cite
Hébrard, A., & Registre, J. F. R. (2025). Intelligence artificielle et transformation de la GRH : Vers une reconceptualisation de l’efficacité des systèmes RH. Diversité Urbaine, 22(2). Retrieved from https://diversite-urbaine.ojs.umontreal.ca/index.php/diversite-urbaine/article/view/33
Section
Articles
Author Biographies

Anaïs Hébrard, Université de Montréal

Doctoral candidate

Centre de recherche interuniversitaire sur la mondialisation et le travail, École de relations industrielles

Université de Montréal 

Jean Frantz Ricardeau Registre, Université de Montréal

Doctoral candidate

Chaire BMO en diversité et gouvernance, École de relations industrielles

Université de Montréal
