Artificial intelligence in human resource management: a challenge for the human-centred agenda?
Abstract
The ILO human-centred agenda puts the needs, aspirations and rights of all people at the heart of economic, social and environmental policies. At the enterprise level, this approach calls for broader employee representation and involvement that could be powerful factors for productivity growth. However, the implementation of the human-centred agenda at the workplace level may be challenged by the use of artificial intelligence (AI) in various areas of corporate human resource management (HRM). While firms are enthusiastically embracing AI and digital technology in a number of HRM areas, their understanding of how such innovations affect the workforce often lags behind or is not viewed as a priority. This paper offers guidance as to when and where the use of AI in HRM should be encouraged, and where it is likely to cause more problems than it solves.
Introduction
Sustainable development is at the core of national and international discussions on development issues. At the enterprise level, the ILO defines sustainability as “operating a business so as to grow and earn profit, and recognition of the economic and social aspirations of people inside and outside the organization on whom the enterprise depends, as well as the impact on the natural environment” (ILO 2007). According to the ILO, “sustainable enterprises should innovate, adopt environmentally friendly technologies, develop skills and human resources, and enhance productivity to remain competitive in national and international markets” (ILO 2007).
The ILO Centenary Declaration for the Future of Work emphasizes “the role of sustainable enterprises as generators of employment and promoters of innovation and decent work” and, in this regard, underlines the importance of “supporting the role of the private sector as a principal source of economic growth and job creation by promoting an enabling environment for entrepreneurship and sustainable enterprises […] in order to generate decent work, productive employment and improved living standards for all”. Creating “productive workplaces” and “productive and healthy conditions” of work are critical in achieving this goal (ILO 2019a).
At both the macro- and micro-levels, the ILO promotes the “high road” approach to productivity which “seeks to enhance productivity through better working conditions and the full respect for labour rights as compared to the “low road” which consists of the exploitation of the workforce” (ILO, n.d.). The “high road” is related to the ILO’s “human-centred agenda”, a key part of the ILO human-centred approach to the future of work highlighted in the ILO Centenary Declaration for the Future of Work and described in depth in the related Work for a brighter future – Global Commission on the Future of Work report. This approach puts “workers’ rights and the needs, aspirations and rights of all people at the heart of economic, social and environmental policies” (ILO 2019a) and calls for investments in people’s capabilities, in the institutions of work and in decent and sustainable work (ILO 2019b). It is expected that such investments would be combined with a people-centred approach to business practices at the workplace level.
This paper explores when and how AI is used in HRM, and when its impact on firm and individual performance is positive, negative or cannot be properly assessed. We start by looking at the principles of the “high road” approach and how these principles relate to the use of AI in HRM. We then look specifically at the pluses and minuses of AI in the workplace, focusing on such aspects of HRM as hiring and work organization. We conclude with a brief overview of some possible policy responses to the AI-related and other technological challenges.
Principles of the “high road”
Since the Western Electric studies carried out in the 1920s and 1930s (Landsberger 1958), evidence has accumulated year by year about the advantages of taking employee management seriously: look after employees, and they will look after the employer’s interests; empower employees to make decisions, from quality circles to lean production to agile management, and performance and quality improve.
In the 1950s and early 1960s, Douglas McGregor described the developing literature on the effectiveness of management practices as “Theory Y” and contrasted it with “Theory X”, which essentially views employees as simply another factor of production, like raw materials in manufacturing (McGregor 1960). Frederick Taylor and his scientific management approach were arguably the originators of a sophisticated view of Theory X, which is rooted in a simple, conservative (with a small “c”) notion that employees are mainly motivated by money, need to be told what to do by experts, and will shirk their responsibilities if not watched closely. Theory Y rests on the much more complex but more accurate assumption that employees have many complicated motivations and, if managed correctly, will do the right thing for the employer even when they are not monitored or incentivized by financial rewards and punishments. The contemporary incarnation of Theory X and Theory Y, with a few new twists, is the idea of a “high road” for Theory Y practices and a “low road” for Theory X.
In recent decades, evidence has accumulated about the advantages of Theory Y approach of taking employee management seriously and the most fundamental element of that approach, reciprocity: if employers look after the interests of their employees, then the employees in turn will be inclined to look after the interests of their employer.
The ILO data from the Better Work and Sustaining Competitive and Responsible Enterprises (SCORE) programmes1 provides evidence of the positive effects of such an approach, showing that “improved workplace cooperation, effective workers’ representation, quality management, clean production, human resource management and occupational safety and health, as well as supervisory skills training, particularly among female supervisors, all increase productivity”. Moreover, “better management also helps to lower accidents at work2 and employee turnover and reduces the occurrence of unbalanced production lines (where work piles up on one line while other workers are sitting idle)”. Evidence also points to “increased productivity and profitability associated with a reduction in verbal abuse and sexual harassment.”3
Evidence has even moved past showing reductions in turnover and improvements in individual and organizational productivity to financial performance. The strongest of these studies is arguably Edmans (2011) which finds that companies making the “best places to work” ranking have higher than anticipated share prices in future years. A different study finds a similar market-beating performance for companies that have greater managerial integrity and ethics (Guiso, Sapienza and Zingales 2015). Another global study shows that companies that have better management (including more sophisticated human resource practices) perform better on a wide range of economic dimensions (Bloom and Van Reenen 2010).
None of this is to suggest that tracking employee performance, setting standards for their work efforts, and rewarding and punishing are irrelevant. However, relying solely on those tactics is not enough.
At the same time, it is important to note that, at least in the short term, the “low road” approach to management can allow firms to break even or even improve economic performance (though not social outcomes) where the initial practices are simplistic. In countries and sectors where labour standards and laws are not always respected and workers are often not organized and represented, the “low road” approach to productivity is still common, in part because it is simpler for management and may appeal to a world view that centres on management’s own role. However, the “low road” approach is also seeing something of a resurgence even in the most sophisticated sectors of the world’s leading economies, as we note below.
The return of Theory X using artificial intelligence
The use of artificial intelligence (AI) in HRM can challenge the implementation of the ILO-led human-centred agenda at the workplace level. While firms are enthusiastically embracing artificial intelligence and digital technology in a number of their HRM areas, their understanding of how such innovations affect the workforce is often not viewed as a priority or lags behind (Rogovsky and Cooke 2021).
Many enterprises in both developing and developed countries are replacing employee empowerment approaches, such as quality circles and lean production, with an “optimization” approach in which experts, and the AI algorithms they create, take back the decision-making that empowerment had devolved to workers. Optimization appeals to many managers because it sounds inherently more efficient. As a result, the evidence on employee empowerment as a productivity driver is largely ignored (Cappelli 2020).
The application of data science to worker-related questions, together with growing computing power, has spawned a huge number of applications, indeed an entire industry of vendors, offering solutions to virtually every human resource question. This takes decision-making out of the hands of employees and their supervisors alike, turning it over to the software and ultimately to the vendors and their programmers who generate answers to human resource problems. In 2020, 28 per cent of US employers reported using data science tools to “replace line manager duties in assigning tasks and managing performance”, and a further 39 per cent were planning to start doing so the following year (Mercer 2020).
The use of AI in the form of data science in workforce management is not per se a bad thing. As with AI in other contexts, it may allow us to answer questions that could not be addressed before: not every AI solution takes decisions away from humans. For example, advice to employees about possible career paths can be generated by machine-learning algorithms based on what has worked best in the past for other workers like them; rigorous advice on questions like this has simply not been available before. It is also the case that decisions currently made by managers and supervisors are often so poor, driven by subjectivity and bias, that it is easy for data science solutions to do better. In hiring, for example, data-based algorithms find it easy to outperform line managers who have no relevant training and base their decisions largely on subjective opinion. More generally, the lag in productivity growth across most industrialized countries stems, at least in part, from insufficient investment in solutions where “capital”, which includes software, takes over tasks from workers and performs them at lower cost. Consider, for example, what it would cost a large employer that receives thousands of job applications every year to do the initial classification of applications by hand instead of by applicant tracking software.
The issue in terms of guidance is knowing when the application of these AI techniques is useful (i.e. they solve new problems and handle tasks better than humans do) and where they are counterproductive (i.e. they offer no advantage over human decisions and may actually make employment relationships worse).
Finding such a mix is a challenge that involves managerial as well as moral dimensions. At the very least, we believe that when there is a choice between options that are equal in terms of organizational outcomes, employers should choose the one that is better for employees. This principle coincides with standard utilitarian views of ethics and with economic interpretations of Pareto improvements.4 Perhaps more importantly, it draws on the civil law principle of “abuse of right”: the fact that one party has the legal right to do something does not entitle it to do so if doing so damages other parties without creating benefits (Mughal, unpublished).
There are still very few studies that examine the implications of artificial intelligence for corporate HRM. Tambe, Cappelli and Yakubovich (2019) noted “a substantial gap between the promise and reality of artificial intelligence” in the area of HRM. They identified four major challenges in using artificial intelligence as part of HRM:
- complexity of HR phenomena, which makes them difficult to model;
- limitations of small data sets;
- accountability issues associated with fairness and other ethical and legal constraints when decisions are made by algorithms; and
- potentially negative employee reactions to managerial decisions based on data-based algorithms.
In particular, from both economic and social points of view there is a growing concern over the use of artificial intelligence algorithms for hiring (Cappelli 2019) and for work organization (Cappelli 2020). These issues will be considered next.
The pluses and minuses of AI in the workplace
It may be easiest to grasp the general principles behind the use of AI through some common examples. Before we look into the “optimization” policies and practices per se, let us focus on hiring, which is perhaps the most basic, time-consuming and important of the employee management questions. The evidence increasingly points to the fact that we do not handle this process well even without AI: we rely on ad hoc methods of finding recruits, mainly just hoping that the right ones come to us, and then we hope that hiring managers, typically untrained in the process and relying on off-the-cuff interviews, will somehow find the best candidates to hire. Then we do not check whether the ones we have hired turned out to be good or bad, so we do not learn from the process. What we do know is that this process gives ample room for biases to influence decisions: my personal views on what constitutes a good cultural “fit” shape who gets hired, as does how much I like candidates, which is strongly correlated with how similar they are to me.
Hiring is actually a context where the prospects for algorithms are best. The way data science ideally works starts with machine learning, where the software (the “machine” in this case) looks at the attributes of as many current and past employees as possible to see how those attributes relate to their quality as employees. The software is agnostic as to what should matter and how it should matter: relationships can be non-linear, simultaneous, in any form. It generates a single equation to measure the attributes associated with a good performer, rather than, as with prior “best practice” approaches, one score for, say, IQ, one for prior experience, one for interviews, and so forth. The machine learning algorithm then looks at any potential candidate and tells you how similar they are to those who were your best performing employees in the past.
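The “single equation” idea can be sketched in a few lines of code. The example below is a minimal illustration, not any vendor’s actual product: the attributes (years of experience and a test score), the toy data and the training settings are all invented, and a real system would use far richer features and far more records.

```python
import math

# Hypothetical training data: (years_experience, test_score) for past
# employees, labelled 1 if they were rated good performers, 0 otherwise.
past_employees = [
    ((1.0, 0.2), 0), ((2.0, 0.4), 0), ((1.5, 0.3), 0),
    ((3.0, 0.9), 1), ((4.0, 0.8), 1), ((5.0, 0.7), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=2000):
    """Fit the 'single equation' w1*x1 + w2*x2 + b by stochastic
    gradient descent on a logistic model."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            err = sigmoid(w1 * x1 + w2 * x2 + b) - label
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def score(candidate, w1, w2, b):
    """0-1 score: how closely a candidate resembles past good performers."""
    x1, x2 = candidate
    return sigmoid(w1 * x1 + w2 * x2 + b)

w1, w2, b = train(past_employees)
```

On this toy data, a candidate resembling the past good performers receives a higher score than one resembling the weaker group, which is all the equation does: it encodes past patterns, for better or worse.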
The plus of this approach is that it is objective. Unlike human assessors, it will not give higher scores to more attractive applicants or to those most similar to us. Algorithms have the advantage of treating all similar observations the same way: if a college degree is counted a certain way, no extra credit is given for the college where the boss is an alumnus. Cowgill (2020) finds that an algorithm used to predict who should advance to short-list status did a better job than human recruiters in part because it did not over-value credentials with higher social status, such as degrees from elite universities.5 An algorithm will also find predictive factors that humans, with our more limited experience, would never find. Another plus is that, once set up, using algorithms to hire is remarkably cheaper than relying on humans.
A downside shared with human assessors is that if the prior experience on which an algorithm is trained was shaped by bias, the algorithm will be biased as well. Amazon’s hiring algorithm, for example, gave higher scores to men because in the past Amazon managers had given higher scores to male employees (Cappelli 2019). Another downside is the issue now known as “explainability”: can we explain to candidates why they were not hired when they ask why their scores were low? It is difficult for machine learning algorithms to address such questions. Complaints from gig workers that the algorithms managing them are biased have led organizations like the UK-based Workers Info Exchange to press gig companies to explain to their contractors why and how their algorithms made the decisions they did (Murgia 2021). It also takes very large data sets to generate machine learning algorithms, and few employers hire enough employees to build their own. As a result, they are likely to rely on algorithms produced by vendors, with no guarantee, or even reason to believe, that the vendor’s algorithm will predict hiring success for their jobs.
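The bias-inheritance mechanism is worth making concrete: dropping the protected attribute from the data does not remove the bias if some remaining feature is correlated with it. The sketch below uses invented data and a deliberately crude “model” (a group average) to show the effect; real systems fail in the same way, only less transparently.

```python
# Invented history: (proxy_feature, gender, past_rating). The proxy
# (say, membership of a historically male-dominated activity) is
# perfectly correlated with gender in this toy data.
history = [
    (1, "m", 0.90), (1, "m", 0.80), (1, "m", 0.85),
    (0, "f", 0.60), (0, "f", 0.55), (0, "f", 0.65),
]

def mean_rating_by_proxy(data, proxy_value):
    """What a 'gender-blind' model learns: the average past rating
    among records sharing this proxy value. Gender is never read."""
    ratings = [r for proxy, _gender, r in data if proxy == proxy_value]
    return sum(ratings) / len(ratings)

# The model never sees gender, yet it reproduces the biased ratings,
# scoring the proxy group higher.
score_proxy_group = mean_rating_by_proxy(history, 1)
score_other_group = mean_rating_by_proxy(history, 0)
```

Because the historical ratings favoured one group, the proxy group scores higher even though gender was excluded from the inputs, which is essentially what happened in the Amazon case.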
A related issue is that some of the factors that have been used in generating these algorithms might give us qualms. For example, the commuting distance from one’s home to a job has been shown to be a good predictor of turnover and some aspects of performance. Where one lives, therefore, shapes the likelihood of getting a job. Social media postings are sometimes used in building hiring algorithms as well. Most employers would probably want limits placed on the kind of information on which the algorithms are based, something that is not possible when one uses algorithms produced elsewhere.
From the human-centred point of view, these practices are not only potentially discriminatory, as the Amazon case shows; they also prevent decent candidates from getting the jobs they deserve.
If hiring is amongst the most promising uses of AI, perhaps the most troublesome is the use of software to determine workers’ schedules. This is not a new idea, but its use has expanded considerably to a wide range of jobs.6 Some 42 per cent of US companies now use it (Harris and Gurchensky 2020). The goal is a sensible one: to “optimize” the work scheduling process so as to minimize the total amount of labour needed to cover assignments and to make sure that everyone is doing roughly the same amount of work, allocated across similar schedules. The reason this approach is troublesome is that we have other approaches that work even better, in which the employees themselves work out schedules through a process of negotiation and social exchange: I’ll cover for you this weekend if you take my shift next week, for example. Scheduling algorithms cut both employees and supervisors out of the process and end up being quite rigid and unable to respond to last-minute adjustments.7 A study of optimization approaches in scheduling found that they increased turnover and turnover costs while adding nothing to performance outcomes (Kesavan and Kuhnen 2017). The effort to cut costs in one category (headcount) increased them in another (turnover).
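The rigidity complaint can be made concrete with a small sketch. The scheduler below balances headcount exactly as such tools intend, but knows nothing about individual preferences; the names, shifts and preferences are invented for illustration.

```python
# Shifts to cover, and (hypothetically) the shifts each worker would
# rather avoid -- information the optimizer never sees.
shifts = ["sat", "sun"]
avoid = {"ana": {"sat"}, "ben": {"sun"}}

def optimize(shifts, workers):
    """Assign each shift to the least-loaded worker: pure headcount
    balancing, with no notion of preferences."""
    load = {w: 0 for w in workers}
    plan = {}
    for shift in shifts:
        worker = min(load, key=load.get)
        plan[shift] = worker
        load[worker] += 1
    return plan

plan = optimize(shifts, avoid)
conflicts = sum(1 for s, w in plan.items() if s in avoid[w])

# A single worker-negotiated swap covers the same shifts at the same
# headcount with no conflicts -- the social exchange the software removes.
swapped = {"sat": "ben", "sun": "ana"}
conflicts_after_swap = sum(1 for s, w in swapped.items() if s in avoid[w])
```

The optimizer’s plan is perfectly “balanced” yet puts both workers on the very shift each wanted to avoid, while a simple swap achieves the same coverage with none.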
The evidence that the flexible approach works is, by the standards of rigorous research, about as good as it gets. It improves a range of outcomes for employees, from better job attitudes (Baltes et al. 1999) to better accommodation of life challenges outside of work, including evidence that it is worth extra salary to employees (Kelly et al. 2008). For employers, it leads to higher productivity.8 Software, in contrast, assumes that workers are interchangeable, imposes schedules without any consideration of the varying needs of individual employees, and is not at all flexible when last-minute problems pop up. As with many of these new practices, the question is: what problem is it really solving, and is the solution worse than the original problem?
Then we have situations where existing practices that involve empowering employees have worked extremely well, yet there is a push to replace them with software. Beginning in the 1970s, efforts to involve employees in solving workplace problems, borrowed from Japan by North American and West European companies, worked so well that they spread systematically throughout industrialized countries and beyond: from union-based cooperative programmes on safety problems, to quality circles in which workers identified the causes of quality problems, and then to lean production, in which workers took over some of the tasks of the industrial engineers, redesigning their own jobs to improve productivity and quality. The evidence that lean production in the form of Toyota’s operating model worked so much better than anything else, especially than the efforts at GM and Volkswagen to solve productivity and quality problems with automation, was so clear that it was impossible to ignore (MacDuffie and Pil 1997). Lean production spread from there to other industries, including healthcare.
Recently, though, we have seen efforts to replace the employee involvement that was at the heart of lean production with machine learning software. The new approach is called “machine vision”. Rather than having employees figure out what is wrong with their work processes, it captures what employees are doing now with cameras. Some of the new software ends there, monitoring assembly line workers constantly to make sure that they perform their tasks exactly as designed. Other software, known as robotic process automation, takes those video images and figures out how to redesign tasks to make them more efficient. In other words, it takes over the tasks the workers used to do in lean production (Simonite 2020). Other vendors reassemble jobs to push simpler tasks down to cheaper labour,9 the classic “deskilling” practice with the classic pushback: the narrow, simple tasks that result are so boring that engagement, commitment and performance ultimately decline. The workers are performing the same tasks they had done before, with the difference that the most, and possibly only, interesting part of those jobs, the control over how the work is done, is now gone. That control is what made the boring jobs tolerable.
More generally, it is also difficult to argue that paying vendors to take over a task that employees either were already doing or could do, namely updating how tasks are performed through lean production, is going to be cheaper, especially because lean production is a never-ending process that has to be recalibrated whenever there is a change anywhere in the system.
A final, especially illustrative example comes from the earlier days of information technology and the introduction of numerically controlled machines in machining work. Here the question was who would perform the tasks of setting up and programming those machines, something that has to be done frequently, whenever they switch over to a new product or to new specifications for it. One option was to hire engineers who were skilled programmers and have them learn the machining context of each organization, which would mean getting rid of many of the machinists. The other was to take the machinists, who already had the knowledge for the latter tasks, and teach them programming. The former was easier, but the latter was far cheaper in the long run: not only did it avoid the churning costs of laying off one group of workers and hiring another, and not only were machinists paid less than engineers, but the employer also created a cadre of employees with skills unique to it. Unlike the programming engineers, who could easily leave for jobs elsewhere, these machinist-programmers now had the best jobs they were likely to find anywhere (Kelley 1996).
Managing the transition: why the “wrong” choices are made
There is sometimes a view, stemming from simple economic assumptions, that “firms” always make the most efficient choices because if they do not, they go out of business. But most businesses do fail, and it is possible for larger companies to make the wrong decisions for some time and yet stay in business. There are also so many decisions to be made in a business that some of them will inevitably be wrong.
Employers are not rational calculating machines; they are humans, with the same limits on decision-making ability as the rest of us. In the workplace, though, there are systematic reasons why employers might choose the “low road” approach even when alternatives objectively make more sense. One reason is that high-road approaches, which require engaging employees and soliciting their best efforts, are not easy to pursue. They require sustained efforts at communication, building trust, and so forth. Not every business leader has the inclination to pursue that path, nor the knowledge base to do so. Leaders who come from engineering backgrounds are taught optimization approaches to business problems that, when focused on worker issues, come down to minimizing the costs of employing workers. That approach per se is not the issue, as long as we have complete and accurate measures of costs and benefits10. But few if any employers have those measures.
Consider, for example, the cost of turnover, which is one of the most basic facts necessary to operate efficiently. Organizations that are focused on making money need to know what those costs are, and where they occur, in order to determine how much investment in heading them off is worthwhile. When they are measured at all, it is common to count only the administrative costs of hiring a replacement, which vastly undercounts the true costs. A very careful look found that even in front-line retail jobs, two-thirds of the costs of turnover arise between the time when employees give notice and the time they actually depart, in part because of negative effects on the peers who remain and in part because of the demands placed on them by recruiting, hiring and onboarding replacements. Those costs are massively greater than the administrative costs (Kuhn and Yu 2021). Employers have not calculated them partly because it is difficult to do, but ultimately because of the unspoken assumption that, unlike, say, the costs of missing inventory, they are not big enough to bother with.
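The arithmetic behind the undercounting is simple enough to sketch. The figures below are invented; only the two-thirds share reflects the Kuhn and Yu (2021) finding cited above, and the assumption that the administrative proxy captures the remaining third is ours, purely for illustration.

```python
# Hypothetical administrative cost of replacing one front-line worker
# (posting, screening, paperwork) -- the usual proxy for turnover cost.
admin_cost = 1000.0

# Roughly two thirds of turnover costs arise between the notice and the
# actual departure (lost peer productivity, recruiting demands on the
# remaining staff). Assume the administrative proxy covers the rest.
pre_departure_share = 2.0 / 3.0
total_cost = admin_cost / (1.0 - pre_departure_share)

# How far the proxy undercounts the true cost of each departure.
undercount_factor = total_cost / admin_cost
```

Even under these illustrative assumptions, an employer relying on the administrative proxy underestimates the cost of each departure threefold, which distorts any “optimization” built on top of that number.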
At the same time, employers’ incorrect assumptions can also be explained by a lack of understanding of how humans actually behave. Many employers are simply convinced that employees must be tightly controlled in order to be productive, and refuse to accept that employees can contribute more when they are given the freedom to express their views, contribute to the decision-making process and are expected to take initiative11. Another reason, which is investor driven, is the quirkiness of financial accounting: Chief Financial Officers (CFOs) are more likely to invest in software than in employees because software is an asset that can be depreciated, that is, paid off over time, whereas training and other investments in employees are current expenses that must be paid off completely in the year they are “purchased” (Cappelli 2023).
To summarize, we offer some practical suggestions on the use of AI in corporate HRM (see Box 1). The choice between AI tools and reliance on employees depends in part, but only in part, on the nature of the tasks in question. The traditional view that we should automate the simplest tasks is not necessarily the right advice, as we saw earlier with lean production, where simple tasks were bundled together into jobs that workers largely controlled. There the workers were able to take over supervisory tasks and proved more adaptable than robots (for example, they did not need to be reprogrammed). Beyond the nature of the tasks, the context also determines the choice between AI and humans.
Box 1. AI and HRM: Q&A

| Question | Answer |
|---|---|
| Is it a new task? | Go digital. |
| Is it something where the outcome has to be explained and justified? | Think first about what the complaints will be and how you can answer them with AI. |
| Do you have a lot of data about HR issues readily available and a capable analytics team? | Expect to spend a great deal of resources getting the databases organized. But do not even bother if you are doing this to address a single task, such as predicting turnover, since the fixed costs are too high. |
| Which tasks should continue to be performed by current employees? | Identify the individual tasks in the work that develop employee engagement and commitment. |
| How should vendors be chosen? | Choose vendors that can provide evidence on the exact tasks you want performed (e.g., improvement in the percentage of hires rated as good) and that can validate the algorithm with your data. |
Policy responses to the AI-related and other technological challenges
Governments and social partners can come up with a number of policies and practices to help guide corporate HR functions in responding to AI-related opportunities as well as other technological challenges. Many of these are in line with the ILO-driven human-centred agenda, in particular with its pillars related to “harnessing and managing technology for decent work” and the “universal entitlement to lifelong learning that enables people to acquire skills and to reskill and upskill”12 (ILO 2019b).
Many governments have been active in promoting a knowledge economy, the development of high-tech firms and technological upgrading in the manufacturing sector through smart manufacturing underpinned by innovation (Cooke, forthcoming). For example, in 2015 the Chinese government launched “Made in China 2025”, a national strategic initiative aimed at transitioning China from a “large manufacturing country” to a “strong manufacturing country” through innovations related to digital technology and artificial intelligence (Kania 2019). The success of such an initiative largely depends on the development of a well-educated workforce equipped with the skills and knowledge required by employers. In this case, the industrial policy of making more use of AI went together with upgrading the education and skills of workers.
Technological challenges imply that workers will experience more labour market transitions as some jobs are automated, and they will need more support than ever to navigate those transitions throughout their lives. In particular, younger workers will need help in “navigating increasingly difficult school-to-work transition” (Cooke, forthcoming), while older workers will need to be able to stay economically active for as long as they want.13 Lifelong learning policies will help workers prepare for these transitions. Interestingly, data science algorithms may actually be useful here, first in creating a more efficient labour market for matching workers and jobs, and second in making better predictions as to what kinds of skills individuals will need next, based on their current experience and jobs.
Conclusion
In this paper we have identified some of the key challenges for the high-road approach to employee management associated with rapid technological development and, in particular, with the use of AI. While the use of AI in HRM, especially for hiring and work organization, is promising, the low-road approach remains common and many suboptimal decisions are being made. The situation can be improved by broader employee engagement in HR-related decision-making, training managers in the principles and practice of the high-road approach, and smart government policies. Particular attention should be paid to the development of the “knowledge economy”, to harnessing and managing technology for decent work, and to the universal entitlement to lifelong learning that enables people to acquire skills and to reskill and upskill.
As far as research is concerned, we call for more research to be done on:
- the pluses and minuses of using AI in HRM;
- the “natural boundaries” between humans and AI;
- how to ensure that AI does not inherit mistakes made by humans in the past (for example, in hiring);
- how AI products can become truly self-learning;
- ways to encourage fruitful collaboration between data scientists and HRM professionals in developing AI products; and
- the role of policymakers in encouraging the use of “people-friendly” AI and in promoting high-road corporate practices.
References
Baltes, Boris B., Thomas E. Briggs, Joseph W. Huff, Julie A. Wright, and George A. Neuman. 1999. “Flexible and Compressed Workweek Schedules: A Meta-Analysis of Their Effects on Work-Related Criteria”. Journal of Applied Psychology 84 (4): 496–513.
Bernstein, Ethan, Saravanan Kesavan, and Bradley Staats. 2014. “How to Manage Scheduling Software Fairly”. Harvard Business Review, December 2014.
Bloom, Nicholas, and John Van Reenen. 2010. “Why Do Management Practices Differ across Firms and Countries?” Journal of Economic Perspectives 24 (1): 203–224.
Cappelli, Peter. 2019. “Your Approach to Hiring Is All Wrong”. Harvard Business Review, May–June 2019.
———. 2020. “Stop Overengineering People Management: The Trend toward Optimization Is Disempowering Employees”. Harvard Business Review, September–October 2020.
———. 2023. Our Least Important Asset: Why the Relentless Focus on Finance and Accounting Is Bad for Business and Employees. Oxford: Oxford University Press.
Cooke, Fang Lee. Forthcoming. “Towards a Human-Centred Approach to Increasing Workplace Productivity: A Multi-Level Analysis of China”. In The Human-Centred Approach to Increasing Workplace Productivity: Evidence from Asia, edited by Nikolai Rogovsky and Fang Lee Cooke. Geneva: ILO.
Cowgill, Bo. 2020. “Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening”. Research paper. Columbia Business School.
Edmans, Alex. 2011. “Does the Stock Market Fully Value Intangibles? Employee Satisfaction and Equity Prices”. Journal of Financial Economics 101 (3): 621–640.
Ghosheh, N.S., Jr., Sangheon Lee, and Deirdre McCann. 2006. “Conditions of Work and Employment for Older Workers in Industrialized Countries: Understanding the Issues”, ILO Conditions of Work and Employment Series No. 15.
Guiso, Luigi, Paola Sapienza, and Luigi Zingales. 2015. “The Value of Corporate Culture”. Journal of Financial Economics 117 (1): 60–76.
Harris, Stacey, and Amy L. Gurchensky. 2020. Sierra-Cedar 2019–2020 HR Systems Survey: 22nd Annual Edition. Sierra-Cedar.
ILO. 2007. Conclusions concerning the promotion of sustainable enterprises. International Labour Conference. 96th Session.
———. 2019a. ILO Centenary Declaration for the Future of Work.
———. 2019b. Work for a Brighter Future – Global Commission on the Future of Work.
———. 2021. Decent Work and Productivity. GB.341/POL/2.
———. n.d. “Productivity”. https://www.ilo.org/global/topics/dw4sd/themes/productivity/lang--en/index.htm.
Kania, Elsa B. 2019. “Made in China 2025, Explained: A Deep Dive into China’s Techno-Strategic Ambitions for 2025 and Beyond”. The Diplomat, 1 February 2019.
Kelley, Maryellen R. 1996. “Participative Bureaucracy and Productivity in the Machined Products Sector”. Industrial Relations: A Journal of Economy and Society 35 (3): 374–399.
Kelly, Erin L., Ellen Ernst Kossek, Leslie B. Hammer, Mary Durham, Jeremy Bray, Kelly Chermack, Lauren A. Murphy, and Dan Kaskubar. 2008. “Getting There from Here: Research on the Effects of Work–Family Initiatives on Work–Family Conflict and Business Outcomes”. The Academy of Management Annals 2 (1): 305–349.
Kesavan, Saravanan, and Camelia M. Kuhnen. 2017. “Demand Fluctuations, Precarious Incomes, and Employee Turnover”. Working paper. Kenan‑Flagler Business School.
Kuhn, Peter, and Lizi Yu. 2021. “How Costly is Turnover? Evidence from Retail”. Journal of Labor Economics 39 (2).
Landsberger, Henry A. 1958. Hawthorne Revisited: Management and the Worker, Its Critics, and Developments in Human Relations in Industry. Ithaca, NY: Cornell University.
Lee, Byron Y., and Sanford E. DeVoe. 2012. “Flextime and Profitability”. Industrial Relations: A Journal of Economy and Society 51 (2): 298–316.
Liem, Cynthia C.S., Markus Langer, Andrew Demetriou, Annemarie M.F. Hiemstra, Achmadnoer Sukma Wicaksana, Marise Ph. Born, and Cornelius J. König. 2018. “Psychology Meets Machine Learning: Interdisciplinary Perspectives on Algorithmic Job Candidate Screening”. In Explainable and Interpretable Models in Computer Vision and Machine Learning, edited by Hugo Jair Escalante, Sergio Escalera, Isabelle Guyon, Xavier Baró, Yağmur Güçlütürk, Umut Güçlü and Marcel van Gerven, 197–253. Cham: Springer.
MacDuffie, John Paul, and Frits K. Pil. 1997. “Changes in Auto Industry Employment Practices: An International Overview”. In After Lean Production: Evolving Employment Practices in the World Auto Industry, edited by Thomas A. Kochan, Russell D. Lansbury and John Paul MacDuffie, 9–44. Ithaca, NY: Cornell University.
McGregor, Douglas. 1960. The Human Side of Enterprise. New York: McGraw‑Hill.
Mercer. 2020. 2020 Global Talent Trends Study.
Mughal, Munir Ahmad. Unpublished. “What is Abuse of Rights Doctrine?” 8 September 2011.
Murgia, Madhumita. 2021. “Workers Demand Gig Economy Companies Explain their Algorithms”. Financial Times, 13 December 2021.
Rogovsky, Nikolai, and Fang Lee Cooke, eds. 2021. Towards a Human-Centred Agenda: Human Resource Management in the BRICS Countries in the Face of Global Challenges. Geneva: ILO.
Simonite, Tom. 2020. “When AI Can’t Replace a Worker, It Watches Them Instead”. WIRED, 27 February 2020.
Tambe, Prasanna, Peter Cappelli, and Valery Yakubovich. 2019. “Artificial Intelligence in Human Resources Management: Challenges and a Path Forward”. California Management Review 61 (4): 15–42.
Van den Bergh, Jorne, Jeroen Beliën, Philippe De Bruecker, Erik Demeulemeester, and Liesje De Boeck. 2013. “Personnel Scheduling: A Literature Review”. European Journal of Operational Research 226 (3): 367–385.
WTW. n.d. “WorkVue”. https://www.wtwco.com/en-ch/solutions/products/work-vue.