In preparing for that talk at the HR Technology Conference, I referenced two highly recommended books: How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg, and Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans and Avi Goldfarb. While neither book is “about” HR—or even the workplace—both provide excellent frameworks for thinking about information, data, technology and AI, and offer great examples of how understanding these “non-HR” concepts can help those of us in HR get better at making talent decisions.
I thought I’d devote this month’s column to sharing a few ideas from those books and my own personal thoughts on how we might want to view our people challenges a little differently.
Data don’t always mean what you think they mean.
How Not to Be Wrong opens with an extremely interesting tale from World War II. As air warfare gained prominence, the challenge for the military was figuring out where and in what amount to apply protective armor to fighter planes and bombers. Apply too much armor and the planes become slower, less maneuverable and use more fuel. Too little armor, or if it’s in the “wrong” places, and the planes run a higher risk of being brought down by enemy fire.
To make these determinations, military leaders examined the amount and placement of bullet holes on damaged planes that returned to base following their missions. The data showed almost twice as much damage to the fuselage compared to other areas—most notably the engine compartments, which generally had little damage. These data led the military leaders to conclude that more armor needed to be placed on the fuselage.
But mathematician Abraham Wald examined the data and came to the opposite conclusion. The armor, Wald said, doesn’t go where the bullet holes are; it should go where the bullet holes aren’t—specifically, on the engines. The key insight came when Wald looked at the damaged planes that returned to base and asked where all the “missing” bullet holes to the engines were. The answer was that the “missing” bullet holes were on the missing planes, i.e., the ones that didn’t make it back safely to base. Planes that got hit in the engines didn’t come back, but those that sustained damage to the fuselage generally could make it home. The military put Wald’s recommendations into effect, and they stayed in place for decades.
The reason I wanted to share the story here, and talked about it in my recent presentation, is that it reminds us that raw data, even in substantial amounts and of good quality (like being able to measure and count every bullet hole on every plane that returned to base), are not usually sufficient to help us gain insight. The placement and number of bullet holes were not insight—they were simply information, data. It took human traits—curiosity, intuition, a willingness to ask different questions of the data—which were essentially Wald’s contribution, before those data could lead to insight and better decisions.
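To make Wald’s point concrete, here is a small simulation sketch. All of the numbers are invented for illustration—the survival probabilities are not historical figures—but the mechanism is the one in the story: if engine hits are more often fatal, then counting bullet holes only on the planes that return will dramatically understate engine damage.

```python
import random

random.seed(42)

# Invented, illustrative probabilities: the chance a plane survives a
# hit to each section. Engine hits are assumed far more often fatal.
survival_prob = {"fuselage": 0.95, "engine": 0.40}

hits_overall = {"fuselage": 0, "engine": 0}      # every hit, every plane
hits_on_returners = {"fuselage": 0, "engine": 0}  # hits we can actually count

for _ in range(10_000):
    section = random.choice(["fuselage", "engine"])  # hits land evenly
    hits_overall[section] += 1
    if random.random() < survival_prob[section]:
        hits_on_returners[section] += 1

# The full fleet takes roughly equal hits to each section, but the
# returning planes show far fewer engine holes. The "missing" engine
# holes are on the planes that never came back.
print(hits_overall)
print(hits_on_returners)
```

Looking only at `hits_on_returners`—as the military leaders initially did—makes the fuselage look like the vulnerable spot; only the unobservable `hits_overall` tells the true story.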
Here’s a simple but powerful definition of artificial intelligence.
The other book I referenced above, Prediction Machines, builds on this idea that data alone are not sufficient to make better decisions by breaking the decision-making process down into its component elements. It goes on to describe how modern technologies (like AI, machine learning and natural-language processing) are supplementing the kind of human thinking and added value that Wald supplied in the plane-armor example.
In the model, information is fed into an engine or an algorithm for the purposes of making a “prediction.” Put simply, a prediction is the process of filling in the missing data. Prediction takes what information we have and uses it to generate the information we don’t have. But the key going forward is that the tools and technologies that are being applied to these engines and algorithms are making it possible to simultaneously take in much more raw input data, generate more predictions, learn from these predictions and do all of this faster and cheaper every day.
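The book’s definition—prediction as filling in missing information—can be sketched in a few lines of code. The example below is a toy, with invented candidate data and a deliberately simple nearest-neighbor vote; real AI screening tools are far more sophisticated, but the shape is the same: use the information we have (past candidates with known outcomes) to generate the information we don’t have (a new candidate’s fit).

```python
# Toy "prediction machine": fill in the missing datum (a new candidate's
# fit) from the data we do have. All names and numbers are invented.

past_candidates = [
    # (years_experience, skills_matched, was_a_good_fit)
    (1, 2, False),
    (3, 4, True),
    (5, 6, True),
    (2, 1, False),
    (6, 5, True),
]

def predict_fit(years, skills, k=3):
    """Predict the missing label by majority vote of the k most
    similar past candidates (squared Euclidean distance)."""
    ranked = sorted(
        past_candidates,
        key=lambda c: (c[0] - years) ** 2 + (c[1] - skills) ** 2,
    )
    votes = [fit for _, _, fit in ranked[:k]]
    return votes.count(True) > votes.count(False)

print(predict_fit(4, 5))  # resembles the successful hires -> True
print(predict_fit(1, 1))  # resembles the poor fits -> False
```

The point of the sketch is not the algorithm but the economics the book describes: once a prediction like this is cheap, it can be run over every applicant, at any volume, as fast as the applications arrive.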
Here’s an HR-centric example of why this is important: For years, most of the new technologies that were being developed to support talent-acquisition processes generally served to make communicating and broadly distributing job openings easier. They also, after a time, made it faster and simpler for candidates to apply to these openings. Greater distribution and easier applications then led to significant increases in the average application volume per job.
While the technology did offer some process and administrative-support improvements for HR and recruiters, it generally did not make up for the fact that application volumes were so much higher. Essentially, if finding the right candidate for a job is compared to finding a needle in a haystack, then all the early HR tech did was make the haystack bigger, and without offering any more help to locate the needle.
But today there are myriad HR and talent-acquisition tools that focus on one thing—generating the “prediction,” or the answer to the question: “How good a fit is the candidate for the opening?” And since these tools make this prediction faster and at far greater scale than a recruiter reviewing resumes or phone-interview logs could, it doesn’t matter how large the haystack gets. In fact, you could argue that only now does it make sense to try to generate the largest haystack possible for every job, as the new AI tools are able to make these predictions so quickly and, over time, more accurately.
In an AI world, humans become more important, not less.
I’d like to sum this up with a quick story I came across when working on my recent presentation. In the early days of programmers’ efforts to develop a world-class computer chess program, one of the strategies they used was to feed the program the results of thousands of games played by top-level human chess grandmasters. The thinking was that the computer program could digest these thousands of games and millions of individual moves played by the very best human chess players and begin to “learn” how to play at that level and eventually surpass the human players. But these initial efforts led to some curious results. In the first games the computer played after ingesting all that data, it kept sacrificing its own queen early in the games, usually leading to a quick defeat at the hands of its human opponent.
The problem? Apparently, at the grandmaster level of chess, the strategic sacrifice of one’s own queen is often a precursor to victory. At that level, one’s chess skills are so advanced that a seemingly bad move is actually a strategic play that can become a winning one. But the computer had not yet developed enough high-level “thinking” about chess to understand that, so it interpreted the data in a linear manner—“sacrificing the queen leads to winning”—and simply began to give up the queen all of the time, early in its games.
Whether it is a game of chess or choosing the best candidate for an open job, we are now able to throw more computing power and more advanced technologies at our challenges than ever before. But these technologies only understand what we can tell them, and we still can’t truly tell them what is unique about us, our people, our culture and our organizations. The role of people in the AI age, I believe, will be elevated. But in order for HR professionals to remain relevant, they will need a deep understanding of how to harness these tools in the best way.
For much more detail on these ideas (and plenty more of them, too), check out the books I mentioned, as I think both are essential reading for any HR or business professional who knows he or she must get better at using data and technology to power better decision-making.
It’s also not too early to make your plans to join us for the HR Technology Conference set for Oct. 1-4, 2019, in Las Vegas, where all the HR-technology innovations that can help you and your organization make better talent decisions will be on display. Leaders from some of the most innovative, successful companies in the world will also share their experiences and successes implementing these technologies.
Author: Steve Boese is currently a Co-Chair of the HR Technology Conference and a Technology Editor at LRP Publications. Previously, he was a Director of Talent Management Product Strategy at Oracle, on the product team building the next generation of enterprise HCM solutions, Oracle Fusion HCM. Steve has over 20 years’ experience implementing enterprise technologies for Human Resources, Recruiting, Finance and Distribution, including significant experience with Oracle Applications in numerous industries and locations, and has served in a wide range of roles, from team member to team lead, Project Manager and Manager of HR Technology. Ref: http://hrexecutive.com