TIP
The Industrial-Organizational Psychologist
Volume 49/Number 1
July 2011
Featured Articles
7
A Message From Your President
Adrienne Colella
9
From the Editor
Lisa Steelman
11
A Journey Through a Profession Full of Opportunities: Eduardo
Salas’ Presidential Address at the 26th Annual SIOP Conference,
April 14, 2011
17
How I-O Can Contribute to the Teacher Evaluation Debate:
A Response to Lefkowitz
Lorin Mueller
23
Using I-O to Fight a War
Douglas R. Lindsay
29
Applicant Faking: A Look Into the Black Box
Matthias Ziegler
38
SIOP Recommends Review of Uniform Guidelines
Doug Reynolds and Eric Dunleavy
43
OPM Bringing the Science of Validity Generalization (VG) to
Federal Hiring Reform
James C. Sharf
Editorial Departments
49
On the Legal Front: Supreme Court Hears Oral Arguments in
Wal-Mart v. Dukes
Art Gutman and Eric M. Dunleavy
55
TIP-TOPics for Students: Industrial-Organizational Psychology’s
Contribution to the Fight Against Terrorism
Lily Cushenbery and Jeffrey Lovelace
59
TIP-TOPics Call for Graduate Student Columnist(s)
61 Practitioners’ Forum: IGNITE Your Work Life: One Practitioner’s
Reflections of SIOP 2011
Rich Cober
66 Translating the Magic Data: An I-O Psychology Fable
Eric D. Heggestad
69 Good Science–Good Practice
Jamie Madigan and Tom Giberson
74 Pro-Social I-O–Quo Vadis? Science–Practice-Impact: An
Expanding I and O
Stuart Carr
81 Practice Perspectives: Shaping the Future of Industrial-Organizational Psychology Practice
Rob Silzer and Rich Cober
90 The Academics’ Forum: Participant Pool Considerations and
Standard Practices
Satoris S. Culbertson
97 Max. Classroom Capacity: Audacity
Daniel Sachau
102 Foundation Spotlight: Announcing the Joyce and Robert Hogan
Award for Excellence in Personality and Work Performance Research
Milt Hakel
News & Reports
105 A Record-Breaking SIOP Takes the Windy City by Storm!
Lisa M. Finkelstein and Mariangela Battista
109 Saturday Theme Track: Using Data to Drive Organizational
Decisions and Strategy
Deborah E. Rupp
111 So Many Great Speakers, So Little Time: The Sixth Annual Junior
Faculty Consortium Report
Mark C. Frame
113 LGBT Committee and Allies Outreach With The Night Ministry
Brian Roote
117 THe Educational Outreach (THEO) Program at SIOP 2011
Kizzy M. Parks and Mikki Hebl
118 2011 Frank Landy SIOP 5K Fun Run
Paul Sackett
121 SIOP Program 2012: San Diego
Deborah E. Rupp
122 Presidential Coin Celebrating Our Science and Practice Award Winners
125 2011 SIOP Award Winners
131 Announcement of New SIOP Fellows
Walter C. Borman
136 SIOP 2011 Highlights
138 The Virtual Workforce: Designing, Leading and Optimizing:
Registration Open for 2011 Leading Edge Consortium!
141 Report From the APA Council of Representatives, February 2011
David B. Peterson
143 Announcing the New Editor of Industrial and Organizational
Psychology
Scott Highhouse
146 Obituaries: Anthony T. Dalessio, Michael W. Maughan,
Kenneth Albert Millard, Robert Anthony Ramos
150 SIOP Members in the News
Clif Boutelle
154 IOTAS
Stephen Young
156 Announcing New SIOP Members
Kimberly Smith-Jentsch
162 Conferences & Meetings
David Pollack
164 CALLS & ANNOUNCEMENTS
166 INFORMATION FOR CONTRIBUTORS
167 SIOP OFFICERS AND COMMITTEE CHAIRS
168 ADVERTISING INFORMATION
Cover: Fruits and pickles for sale in a rural roadside shop. Assam, North
Eastern India, 1/2/2009
Photo courtesy of Jaya Pathak, doctoral student, Florida Institute of Technology
A Message From Your President
Adrienne Colella
Wow! What a great conference in Chicago! The most exciting (and scary)
event for me was receiving the presidential gavel from President Eduardo
Salas, the most interesting man in the world. He, along with his immediate
predecessors, Kurt Kraiger and Gary Latham, has left SIOP in great
shape. We weathered the financial downturn of the last several years. We
have begun several new initiatives that will expand the visibility and impact
of SIOP and the field of I-O psychology in the arenas of nonprofit organizations, the larger science community, the business community, and the general public. I am thrilled to be taking on this role for the year and moving SIOP
along in several directions.
One of SIOP’s strategic goals is to be the professional organization of
choice for professionals in the field of I-O psychology. Although membership
has been growing modestly, most of this growth has been due to large increases in student membership. This is great, but I would like to see a greater number of students become professional members. Also, it is important to understand why midcareer professionals leave the organization. Toward these goals,
we’ll be starting to monitor our retention and attrition process to find out why
people leave (or fail to join) and what SIOP can do to be a more attractive
organization to these valuable potential members. Another membership issue
that will receive attention over the next year is increasing the presence of
racial/ethnic minority group members in the field and in the organization.
How many of you in the past year have heard about policy decisions
being made about workplace psychology issues and wondered why the field
of I-O psychology isn’t mentioned in these conversations? How many of you
have seen some “expert” going on about workplace psychology issues who
completely ignores the body of science generated in our field? How many of
you have family members who have no idea what you do for a living? These
are all issues relating to the visibility and advocacy of I-O psychology in a
variety of domains. Over the next year, SIOP will make further headway in
increasing the visibility and advocacy of the organization and the science and
practice of I-O psychology. We are nearing NGO status with the UN (thank
you, John Scott!), which will increase our visibility in the area of humanitarian work. Steve Kozlowski and his committee have just presented a thorough and long-term plan for how SIOP can better advocate for I-O psychology in the larger scientific community. SIOP will begin implementing this
plan over the next year. Finally, Gary Latham has done a great job in getting
the word out about us to the human resource management community
through a partnership with SHRM. Continuing along these lines, we will be
seeking professional advice on how to market the field to the business community and general public.
Every president has a theme, and my theme is the IMPACT of I-O psychology science and practice on the welfare and performance of individuals,
organizations, and society. Highlighting and celebrating the impact of our
field is imperative to building an identity as a profession and to getting the
word out about what I-O psychology has to offer. I’ll have more to say on this
as the year progresses, but in the meantime, if you have done a project where
you really created positive change, let me know ([email protected]).
I’m looking forward to the upcoming year. There are a lot of exciting
things going on in SIOP. Before I say goodbye, I want to thank my predecessors Eduardo, Kurt, and Gary who have been great mentors; Dave Nershi
and the SIOP Administrative Staff who keep everything running (they are the
core of the organization); and all of the hundreds of volunteers who keep SIOP
moving onward. The hours and energy that Lisa Finkelstein, Mariangela
Battista, and their conference committees put into making the conference a
success are mind-boggling.
Bye!
7th Annual SIOP Leading Edge Consortium
The Virtual Workforce:
Designing, Leading, and Optimizing
October 14–15, 2011
Louisville, Kentucky
Hilton Seelbach
General Chair: Kurt Kraiger
Practice Chair: Andrea Goldberg
Science Chair: Lori Foster Thompson
Research Chair: Allen Kraut
Registration Is Now Open!
www.siop.org/lec
Reflections on a Good Summer Read
Lisa Steelman
Florida Tech
There are no housewives, no outrageous celebrities, and no zany mayhem
in this issue of TIP. So why should TIP be on your summer reading list?
Because this issue of TIP is chock full of news and information—good clean
I-O psychology fun!
Starting us out is a greeting from your new President Adrienne Colella,
followed by Eduardo Salas’ presidential address from this year’s conference. Join me in thanking Ed for his service to the society and the profession!
Lorin Mueller provides a response to Joel Lefkowitz’s article on evaluations of teacher performance that appeared in the April 2011 TIP. Lorin discusses considerations on whether or not value-added models of teacher evaluations are appropriate, the legality of using value-added models, and
thoughts on how I-O psychology can have an impact on this important societal issue. Next you can read Lt. Colonel Doug Lindsay’s postdeployment
thoughts on the impact of I-O psychology on the war effort. This is followed
by an article on some new thinking about applicant faking behavior from
Matthias Ziegler.
Discussion of the Uniform Guidelines on Employee Selection Procedures
continues and is even ramping up. Doug Reynolds and Eric Dunleavy provide an update on potential regulatory review of the Guidelines and share a
letter from President Salas to the EEOC stating SIOP’s position on this matter. Jim Sharf discusses validity generalization and the Uniform Guidelines
within the Office of Personnel Management (OPM). On the Legal Front,
Art Gutman and Eric Dunleavy discuss the myriad of issues in the Wal-Mart v. Dukes class action lawsuit regarding alleged gender discrimination in
pay and promotion.
With their last TIP-TOPics article, this one on I-O psychology making an
impact on research on terrorism, we bid farewell to the Penn State team.
Great thanks to Scott Cassidy, Patricia Grabarek, Shin-I Shih, Lily
Cushenbery, Christian Thoroughgood, Amie Skattebo, Katina Sawyer,
Rachel Hoult Tesler, and Joshua Fairchild for your hard work and great
contributions to TIP over the last 2 years! It’s time to pass the TIP-TOPics
torch to a new team of graduate student authors. The call for submissions for
new TIP-TOPics authors can be found on page 59 and online at
www.siop.org under calls and announcements, nominations. We are accepting applications until July 11, 2011.
And yet another thanks is due, this time to the multitalented, multitasker
Joan Brannick for her insights on the Practitioners’ Forum column for the
past year. Joan has stepped out of the column, and I am pleased to announce
that Rich Cober and the rest of the Professional Practice Committee will be
taking over. This time, Rich discusses lessons learned from preparing for and
participating in an IGNITE presentation at the 2011 conference. It was part
of the Saturday Theme Track. You can read more about this Theme Track,
“Using Data to Drive Organizational Decisions and Strategy,” in Deborah
Rupp’s report. Also, don’t miss Eric Heggestad’s I-O fable that was presented as part of the IGNITE session.
In other articles, Jamie Madigan and Tom Giberson continue to synthesize the practice implications of I-O research, and Stu Carr interviews several SIOP members on their reflections of the conference and how humanitarian work psychology and corporate social responsibility can continue to
have an impact. In Practice Perspectives, Rob Silzer and Rich Cober
provide a high-level summary of the Future of I-O Psychology Practice Survey that assessed practitioners’ views of the future of I-O practice, what
practitioners can do, and what SIOP can do to contribute to and facilitate I-O
practice as we move forward. Their recommendations are thought provoking.
In the Academics’ Forum, Tori Culbertson discusses considerations
associated with every academic researcher’s frenemy: the participant pool.
The guest columnist for Marcus Dickson’s Max. Classroom Capacity column is SIOP Distinguished Contributions in Teaching Award Winner Daniel
Sachau. If you need a good dose of chutzpah, this one is for you. And in the
Foundation Spotlight, Milt Hakel announces the Hogan Award for Excellence in Personality and Work Performance Research.
The activity within SIOP continues, unabated. If you would like to relive
those chilly but terrific moments of the Chicago conference, check out Lisa
Finkelstein and Mariangela Battista’s review of the 2011 conference program. You can read about other highlights of the conference too: Deborah
Rupp’s review of the wildly successful Saturday Theme Track, Mark
Frame’s action-packed Junior Faculty Consortium, wonderful events sponsored by the LGBT Committee (outreach with the Night Ministry) and the
Education and Training Committee in partnership with the Committee for
Ethnic Minority Affairs (THEO: The Educational Outreach Program), and of
course the results of the Frank Landy 5K Fun Run.
Rounding out this issue is an overview of the 2012 conference in San
Diego (something to look forward to) and David Peterson’s report from the
APA Council of Representatives. We congratulate SIOP award winners
in a number of categories, and we remember prominent SIOP members
Anthony “Skip” Dalessio, Bob Ramos, Michael Maughan, and Ken Millard.
A Journey Through a Profession Full of Opportunities:
Eduardo Salas’ Presidential Address at the 26th Annual
SIOP Conference, April 14, 2011
I want to tell you a story. A story of a very rewarding, rich, fun, and, still
today, a deeply fulfilling journey that I hope never stops. A journey through a
profession full of opportunities. Opportunities to make a difference in the
workplace; to have an impact on people’s lives; to uncover new cognitions,
behaviors, and feelings that matter in the workplace.
I want to tell you a very personal journey of an I-O psychologist who never
dreamed of the opportunities that lay ahead. I want to talk to you not as your
president, but as your colleague, as someone who cares very much about this
SIOP family, as someone who loves this profession we are engaged in. I want
to share a story—my story—and along the way share with you some insights
I’ve gained and some opinions I’ve developed about our field. With your permission and indulgence, I want to celebrate this journey with you.
I left Lima, Peru, about 36 years ago. It was a summer night in Lima. It
was the most exciting yet frightening and sad night I had experienced in Peru.
I was leaving a secure environment, a mother that was terminally ill, my
brother and two sisters, my girlfriend (now my wife!), and many very close
friends. It was my first visit to the U.S. and I had no idea of what to expect
or what would follow.
It was a long flight to NYC and an even longer flight to Kearney, NE. Yes,
talk about culture shock! A small town of 25,000 people during the weekdays
and far fewer during the weekends. This was a small town that welcomed me and opened my eyes to the values and principles that today I
respect, believe, and admire about America. A small town where I learned
about American football. I am probably the only Peruvian who follows and
cheers for the Huskers! A college town (now UN-K) that reinforced my desire
to be an I-O psychologist, which was my goal. I left Peru wanting to become
an I-O psychologist, and Nebraska was my first opportunity to learn about
this country, its people, its land, its resources, and its core values: that motivation and hard work lead to success!
In 11th grade I read a book, Psicología Industrial by Norman R. F. Maier
(first edition). And I was hooked. That is what I wanted to do—I wanted to
be an I-O psychologist. So, the journey began. A journey that led me from
Nebraska to Florida International University in Miami, to UCF (got my MS
in I-O there), and to Old Dominion University.
At FIU, I wanted to work with Wayne Cascio, but he transferred to Colorado when I got there! (Sorry that I missed you, Wayne.) Working with
Wayne Burroughs at UCF, I experienced many opportunities to do applied
work in different sectors, and it was there that I was introduced to the Navy.
At ODU working with my mentors Ben Morgan and Albert Glickman, I
learned the value of theory, measurement, and application. I learned about the
scientist–practitioner model (which is ingrained in my mind and heart). I
learned about human factors and how this field also impacts the workplace. I
learned from my classmates (Scott Tannenbaum, John Mathieu, Bob
Jones, and many others) to think critically, to debate, to study, to write, and
yes, to play! We had great parties! It was at ODU where the opportunity arose
to study work teams and training and development. My journey and passion
for these topics was born there.
Let me share with you more thoughts about work teams, a science and practice that is alive and well but transforming itself. I truly believe that we have a
wealth of knowledge about how to manage work teams. This knowledge is not
perfect, but what we know is useful, practical, and in many cases, it works.
We know about what teams do, think, and feel; how to measure team
behaviors, attitudes and cognitions; what facilitates and what hinders team
effectiveness; how to train teams; what organizational factors influence team
behavior; how teams make decisions; how team leaders function and behave;
how teams manage conflict; and what team cognition is. We have a plethora
of theories, tools for measuring and assessing team performance, and a set of
well-designed and robust team-based instructional strategies, and more.
However, the team world has changed. We may be entering a new era in work
teams: the era of distributed and virtual teams, the era of multiteam systems,
the era of human–robot teams, the era of teams in extreme settings, the era of
avatars as teammates. I think these new forms of teams will need theories and
methodologies that help us learn about their idiosyncratic features. I submit
that these new conceptual drivers must be multidisciplinary (beyond I-O concepts), time based, multilevel, and parsimonious enough to capture the
essence of these new forms. The methodologies must be observational in
nature (away from self-report, if possible), qualitative, naturalistic, with techniques that help us uncover the components (like social network analysis or
computational models) but robust enough that replication is possible. These
are the challenges I see ahead for the science of work teams.
Back to the journey. The Navy days. What wonderful years I spent with
the Navy (15!) in Orlando. They were some of the best professional days I’ve
had! With Jan Cannon-Bowers, Joan Johnston, Kim Smith-Jentsch, Dan
Dwyer, and many others (whose names appear on the thank-you board). With
our industry and academic partners we built the dream lab. What an opportunity! Our lab was rooted in science (theory, methods), yet with an eye on the
customer and the need to deliver useful products. We were productive, publishing as much as we could in the best outlets we could, but we never ignored
developing practical tools for instructors, team leaders, or acquisition personnel. We sought to learn about our customer’s settings, deploying on ships for
example. We did whatever it took for them to understand that we cared about
their jobs and that we could help. And over time we won them over, and they
became our best allies and supporters. I believe our lab was a model for the
scientist–practitioner model. We knew what it meant: Let’s solve their problems with the best science we can, and then we will tell the world. At the end
of the day, I always thought that our best product and accomplishment was that
“We changed people’s minds.” We changed the minds of engineers, pilots,
sailors, instructors, team leaders, and training command leaders about how
to view learning, how to train, how to diagnose competencies, how to develop teams, how to be leaders, and how to build simulators for learning. That
was our legacy, changing people’s minds. That remains today.
I joined UCF 11 years ago and have been fortunate to have colleagues like
Barbara Fritzsche, Bob Dipboye, Bob Pritchard, Leslie DeChurch, and
Kim Smith-Jentsch, who left the Navy to join us, and, more recently,
Nathan Carter and Dana Joseph. At UCF a whole new world opened up to
me and new opportunities for impact arose. There were opportunities in
healthcare, the corporate world, the financial and oil industries. I learned
quickly that these industries were hungry for our science and practice. These
industries wanted (and still do today) basic advice on leadership, training,
teams, and organizational functioning. They wanted tips on how to manage
people. It sounded simple. I was perplexed about the nature of their requests,
their interests, and their problems. They were so basic. And deep down I am
thinking and saying to myself, we have some answers! We can provide
advice, tools, tips, and interventions. We can help! But I always asked myself,
where are my colleagues? Where are the I-O psychologists? Why aren’t there
more like me here? Why are we not here with these industries? We, SIOP,
have been trying to answer these questions for some time with visibility and
advocacy initiatives. While progress has been made, more work is needed.
Over the last 7–8 years I decided I would do something about this. And
so I launched onto a path of translating our science into practice. Yes, translating our science into practice is something I believe we need to do more of.
We need to value, support, and teach our next generation of I-O psychologists
to do more. There is nothing wrong with translating our reliable findings into
language that organizations can use, nothing!
If done correctly, organizations want more, they appreciate it and value it,
they use it, and ultimately, we have impact! Having made an impact is an
incredible feeling. When you say to the organization “I think you should
move to the left because we know from our science that is best” and they
move! What a feeling. So that is what I’ve been doing with my students: Trying to transform healthcare, the aviation community, and the military one
brick at a time by translating our science, by educating these industries
about who we are and why we matter to them. And what we do does matter!
These activities and my involvement with SIOP’s Executive Board made me
think a lot about who we are, what we do, and why. For good or bad, I had
begun to think and seek an answer to two questions: (1) What is our soul?
What is the soul of an I-O psychologist? And (2) What will our future as a
field and as a Society look like? I offer these ideas as food for thought and in
the hope that we can engage in a dialogue. I see our soul as having many features, all interrelated, of course, and all adding to its richness. The first feature is the questions we ask. These are the guiding light, if you will. The questions, focused on some organizational problem or issue, set the direction and
motivation for the research or the practical interventions we pursue. The
questions are our point of departure; they are what guide us. Two, I think our
theories are an integral part of who we are. Our theories are the “engines” that
lead us to seek knowledge, to prove, to disprove, to validate, to integrate, to
summarize and help us generalize. Our theories help us with focusing on our
problem domain. Lewin said it many decades ago, “There is nothing so practical as a good theory.” So our questions and theories ground us. Three, our
methodological rigor is part of our soul. We use robust approaches to understand the world of work; whether we conduct experiments and studies in the
lab, in the field, or in simulations, that is part of our soul. Next, replicable
results are part of our soul. We seek to replicate knowledge, to ensure it is
reliable. This is a hallmark of a science and it has to be part of our soul. Our
soul is also about our evidence of what works, our tips and suggestions, our
practical tools, strategies, and techniques. The evidence-based solutions that
we offer organizations are the catalyst for impact, for making a difference,
and that is in our soul. At the end of the day, our soul is the scientist–practitioner model! It represents all that we are: scientists and practitioners. This is
a model that sometimes is thought of as overrated and overused. Sometimes
it’s abused, and many times it is trivialized. But in the end it is our soul. No
one handed me this “model.” I learned about it and began to appreciate what
it meant and why it matters when my journey began. Wherever I go and whatever I do as an I-O psychologist that is what I carry: our rich theories, our
methodological rigor, our replicable findings, and our evidence-based solutions…our soul indeed!
I have also been thinking about our future as a profession. How will we
look in the year 2025? What will the conference feel like? What research and
practice will be the “hot” items?
There is no question that I-O psychology is growing. There are more master’s and doctoral programs than ever before. We are now global. We are more
diverse than ever and we are getting younger. The conference is no longer an
“intimate” event. The world of work is changing. You get the point, change
keeps happening. All this suggests the need for us to be adaptive, change, try
new things to engage the upcoming generations, to keep relevant with the
world of work, and boy do we struggle with that—adapting! Changing!
At least at the leadership level of SIOP. This is in no way a criticism of
anyone on previous or current boards. It is only a self-reflection of my own
behavior. I thought before taking the seat of president that I would change
things, launch bold programs that we need. And to my surprise the moment I
got in the seat I became as conservative as I have ever been. And while
things got done and new initiatives were launched, I always felt that more
future-oriented actions and initiatives were needed. I felt we needed more
boldness in what we propose and launch. We need to risk more and not be
afraid of the many stakeholders we need to please. We must be adaptive and
willing to visualize how 2025 will look and do something such that we
remain a viable Society and our field enjoys the respect of those who we try
to influence and educate. The whole SIOP family must tolerate risks. To
respectfully borrow from Martin Luther King, I have a dream. Yes, I have a
dream. A dream that SIOP will represent a field where science and practice
live in peace and both nurture each other. Where our diversity is our strength
and not a distraction. Where translations are valued. Where our practice is so
robust that “organizations do move to the left” when we tell them we have
evidence and then thank us for making an impact. I have a dream where our
soul as I-O psychologists becomes enriched by our research, theories, findings, methodologies, and by what we do in practice. A dream that I hope will
become a reality. Only time will tell.
I would never have made it here if it wasn’t for my father. I hope he is
smiling wherever he is now. You see, he listened to the 11th grader about
what most interested him. And one night he brought me a book and said, “I
think this is what you want to do, read it. I think you’ll enjoy it.” And he
handed me Maier’s book.
In closing, I have not forgotten the place where I was born, one never
does. It is in Peru where I learned to love, cry, and appreciate life. But I am
eternally grateful to the places and the people who gave me the opportunity
to enrich my mind and soul, to learn, to become an I-O psychologist, and to
have an impact in the workplace. This is the place where the journey began
and continues. I hope I have served you well.
How I-O Can Contribute to the Teacher Evaluation
Debate: A Response to Lefkowitz
Lorin Mueller
American Institutes for Research
Some ideas are so intuitively appealing that we as a society are simply
unable to resist them. Using student achievement data to evaluate teacher effectiveness is one of those concepts. Regardless of the many measurement and
practical criticisms of these models, they are likely to be a fixture in schools for
the foreseeable future. It is with this belief in mind that I respond to Joel
Lefkowitz’s editorial in the April 2011 issue of The Industrial-Organizational
Psychologist (Lefkowitz, 2011) on the topic of whether current directions in
teacher evaluation may end up with systems that are not lawful. This article
covers some of the research behind the issues raised in Joel’s editorial and proposes directions for I-O psychologists to explore in responding to these issues.
Joel’s editorial raises three main issues with respect to “value-added models” (VAMs) as they are used to measure teacher effectiveness. The first issue
is whether or not VAMs are appropriate to use for measuring teacher effectiveness. The second issue is whether a teacher (or group of teachers) has a basis
for a lawsuit if terminated because of a low score on a value-added scale. The
third issue is why I-O psychologists haven’t been asked to contribute our
expertise to the problem of constructing valid teacher performance evaluations.
Are Value-Added Models Appropriate?
With respect to the first issue, whether these models are appropriate for use,
the data are to some extent equivocal. The first widely publicized VAM is generally recognized to be the Tennessee Value-Added Assessment System
(TVAAS; Sanders & Horn, 1994), which found that school and teacher effects
were relatively consistent year to year and that student gains were not related
to initial achievement levels. Another early model was the California REACH
(Rate of Expected Academic Change; Doran & Izumi, 2004).1 Both of these
models are adequately complex in relation to those Lefkowitz posits as sufficient; the TVAAS is a mixed-effects model, whereas the REACH model is a
mixed-effects model incorporating the trajectory required to be proficient by
the time of graduation. One of the first major reviews conducted by RAND
(McCaffrey, Lockwood, Koretz, & Hamilton, 2003) concluded that VAMs
show consistent teacher effects regardless of the specifics of the model
employed and age of the students. The RAND report acknowledged that many
factors influenced individual teacher value-added estimates, including student-level
variables, nonrandom missing data, and test scaling issues (e.g., linking
error and disarticulated performance levels), and made several recommendations for researching and implementing effective VAMs.
1 It should be noted that California law prohibits teacher evaluations from being based on student test scores, as does New York.
Other researchers have been less positive about the validity of VAMs. For example, Braun (2005)
notes that little can be certain with respect to VAMs because students are not
randomly assigned to teachers in practice, and VAMs require many assumptions, not the least of which is that teachers within the same unit of comparison (which depends on the model) are given the same level of resources. Jacob,
Lefgren, and Sims (2008) note that although individual teacher contributions to
achievement are estimable, the effects erode quickly, implying that focusing on
individual teachers may be less valuable than focusing on teams of teachers.
Finally, a recent review by Briggs and Domingue (2011) found that a recent
analysis of teacher effects in the Los Angeles Unified School District did not
properly consider the confidence interval of the teacher effect estimate, thereby potentially misclassifying many of the teachers in the study.
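To make concrete the kind of model these reviews are debating, a minimal value-added specification can be written as a mixed-effects model. This is a deliberate simplification for illustration, not the actual TVAAS or REACH equations, and the notation is my own:

y_{ijt} = \mu_t + \beta^{\top} x_{ij} + \theta_{j(i,t)} + \varepsilon_{ijt},
\qquad \theta_j \sim N(0, \tau^2), \quad \varepsilon_{ijt} \sim N(0, \sigma^2),

where y_{ijt} is the end-of-year score (or gain) of student i taught by teacher j in year t, \mu_t is a year mean, x_{ij} holds whatever student covariates the model admits, and the shrunken estimate of the random teacher effect \theta_j is the teacher’s “value added.” Much of the disagreement summarized above reduces to what belongs in x_{ij}, how the outcome is scaled, and how large the uncertainty around each estimated \theta_j really is.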
Based on the research on VAMs, we can be reasonably certain of the following findings. First, we need complex models to evaluate teacher effects.
Second, we need large sample sizes, likely larger than those found within the
typical classroom. Third, large changes can occur in teacher effect estimates on
the basis of a few students moving into a different performance level or dropping out of the model by virtue of being unavailable for baseline testing or end-of-year testing. Lastly, the proportion of variance in student achievement attributable to student characteristics (e.g., general cognitive ability, parental support,
socioeconomic status, and student motivation) is very large and may influence
numerous practical and statistical issues related to teacher effect estimates.
Based on our extensive experience in performance evaluation in other occupations, here is what we can assume to be true. First, there will be, to some extent,
perverse learning in the system, such as “teaching to the test,” teachers focusing
their energies on those students who can move easily into a new achievement level, or ignoring high-performing students who are at little risk of missing
achievement goals. Second, there is probably a teacher effect, although
it will be small and difficult to separate from the teaching context. Specifically, teachers are nested within schools, nested within districts, and teachers are
dependent, to some extent, on other teachers to perform effectively. Given what
we know and what we can assume, well-constructed VAMs are probably appropriate for measuring teacher effectiveness to the extent that they (a) are part of a
larger evaluation system, (b) are based on long-term data, and (c) are based on
well-constructed tests that provide reliable gain scores for students at all performance levels and do not lend themselves to a narrowly focused curriculum.
The Legality of Using VAMs
A second issue Joel raises is whether teachers could sue on the basis of terminations or other adverse personnel actions made on the basis of VAM data.
Although Joel presents a number of compelling arguments on why VAMs
might be susceptible to legal challenges, I believe this is unlikely. From an
empirical standpoint, if VAMs are of low reliability, they are unlikely to result
in widespread adverse impact (i.e., teachers will be identified as underperforming at random even if there are group differences). Second, states and districts that implement VAMs are generally doing so for business reasons. For
example, states and districts seeking Race to the Top grants are required to
include VAMs as part of their evaluation process. Finally, many states are
beginning to legislate the extent to which VAMs are to be weighted in teacher
performance evaluations. These issues make it extremely unlikely that a legal
challenge to VAMs would be successful even at the lowest judicial levels.
Why I-Os Aren’t Consulted
The third issue raised in Joel’s column was why I-O psychologists aren’t
consulted with regard to designing effective teacher performance evaluation
instruments. There are three answers to this, each relating to a different constituency in the development of teacher performance evaluation systems. At
the most technical level, implementation, I-O psychologists are consulted.
Some of us are working with professionals in the field of teacher professional development to develop and refine teacher performance evaluation instruments. And those of us who are doing this work agree with Joel’s point that
VAMs may be of limited utility in determining which teachers are actually
the most effective. We’ve also done a terrific job as a field of making the principles of good performance-evaluation design available to non-I-O technical
audiences. Most of the work I have seen in teacher performance evaluation
incorporates many I-O principles and makes reference to our literature.
At a second level, the professionals charged with interpreting and implementing policies don’t often turn to I-O psychologists for help because they
don’t know us. This constituency is made up of professionals in departments of
education at the federal and state levels charged with overseeing standards for
teacher hiring, pay, and retention. These professionals don’t seek out our advice
because we don’t publish in professional journals related to education, and we
don’t often seek grants to study teacher performance issues. Some I-O psychologists have cultivated personal relationships through their practices or universities, but this constituency still typically goes to education researchers for support.
At a third level are the legislators and political appointees who are really
driving the changes to VAMs. These people don’t come to us because they simply do not know who we are. Policy makers often seek out the advice of economists and educators for guidance on creating innovative policies to improve
student achievement. There are at least a few reasons for our inability to get a
seat at the policy table. First, we don’t offer simplistic solutions, such as teacher
pay-for-performance schemes that some economists support but are based on
untenable assumptions about teacher motivation and its relation to pay levels
and student achievement. Simplistic solutions sound really good and are easy to
sell to an impatient populous but don’t affect real change. In general, our solutions are long term and require continued commitment to best practices. Second, we don’t take public stands on these issues. Very few I-O psychologists
make themselves available to news and opinion outlets, and SIOP rarely takes
stands on particular issues (despite sometimes clear scientific evidence and best
practices). Getting our organization involved in important public debates is not
something we’ve done well through the years. In a sense, these two issues interact to make the teacher evaluation problem intractable. The professionals who
are perhaps most underutilized in designing and implementing effective teacher
evaluation systems are the least likely to be asked to contribute and the first to
be shown the door when immediate results are (typically) not present.
What We Can Do About It
VAMs in teacher performance evaluation are a concept that is here to stay.
The concept simply has too much face validity for measurement professionals
and I-O psychologists to be able to talk the voting public and their elected representatives out of it. The concept has persisted despite arguments from all sides;
initial models were too simple and didn’t account for important covariates; now
models are too complex and can’t be understood by the layperson. In the last
section, I focus on the areas where I think I-O psychology can contribute to the
teacher evaluation debate and what we can do to achieve those contributions.
One of our strengths relative to education researchers and economists is
that we’re used to thinking about implementation. I often refer to my job as
being more like engineering than business consulting. VAMs, based on what
we know, are an implementation issue rather than a scientific one. As such, we
can think of VAMs as a generalizability theory problem: How many students
and years of teacher performance must we observe to make a reliable conclusion about that teacher’s effect on student learning? I-O psychology researchers
could take conditional error estimates of teacher effects and estimate how
many years of similar performance would give an administrator enough confidence that the teacher was underperforming (or exceeding expectations).
Similarly, we can think of this as a validation study. From a convergence perspective, how do other measures of teacher performance corroborate the VAM
data? Specifically, can classroom observations indicate which in-class behaviors
are most highly related to stronger teacher effects in VAMs? From a content perspective, are there different subcategories of the job of teaching that moderate the
impact of key behaviors on student outcomes? This research has been done in the
education field, but all too often such studies consider teaching to be one job when in fact
it is likely to vary considerably across schools, grades, and content areas.
We can also contribute by helping policy makers to understand the implications of various mechanisms for calculating the teacher effect on the types of
errors the system is likely to make. For example, the effect estimate is more
likely to be prone to high standard errors if the mechanism uses student achievement levels as the outcome measure (e.g., proportion proficient, proportion
below proficient) as compared with using vertically scaled scores. In addition,
I-O psychologists can study the impact on teacher behavior of various types of
VAMs to determine the extent to which teachers engage in counterproductive or
narrow behaviors to maximize their performance on the VAM. I-O psychologists are also well-suited to study contextual performance issues that may not
manifest in the early stages of VAM implementation but may reveal themselves
later as teachers compete more to show the highest value-add estimates for
career advancement. We could conduct Monte Carlo studies to investigate the
impact of groups of teachers raising their performance within a school or district, which in some models could be misclassified as a school-level effect and
ultimately counterproductive to teamwork in the teaching environment.
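As a minimal sketch of that Monte Carlo idea (every parameter below is invented for illustration; a real study would draw them from an estimated model), the following Python fragment simulates how often a noisy single-year value-added estimate flags teachers who are not actually in the true bottom group:

# Sketch: how often does noisy value-added flag the "wrong" teachers?
# All variance parameters are hypothetical and chosen only for illustration.
import random

random.seed(1)

N_TEACHERS = 1000
TRUE_SD = 0.10        # spread of true teacher effects, in test SD units
ERROR_SD = 0.15       # sampling error of a single-year estimate
FLAG_FRACTION = 0.10  # flag the bottom 10% on the observed estimate

true_effects = [random.gauss(0.0, TRUE_SD) for _ in range(N_TEACHERS)]
observed = [t + random.gauss(0.0, ERROR_SD) for t in true_effects]

n_flagged = int(FLAG_FRACTION * N_TEACHERS)
by_true = sorted(range(N_TEACHERS), key=lambda i: true_effects[i])
by_observed = sorted(range(N_TEACHERS), key=lambda i: observed[i])
true_bottom = set(by_true[:n_flagged])
flagged = set(by_observed[:n_flagged])

false_flag_rate = len(flagged - true_bottom) / n_flagged
print(f"Flagged teachers not truly in the bottom {FLAG_FRACTION:.0%}: {false_flag_rate:.0%}")

Extending the same simulation with a shared school- or district-level shift would show how a coordinated improvement by a team of teachers can be absorbed into the school effect, the misattribution described above.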
It’s also important to emphasize the broader implications of being left out of
this critical debate. As the world economy becomes more competitive, education
is one of the key areas we must improve to continue to grow economically. SIOP
can take a more active role in public policy debates, but it means getting off the
fence and creating a mechanism for doing so. We will never have total agreement
among our membership on important policy issues, but we can have some agreement on the scientific and practical issues that policy makers must take into
account when devising accountability systems. The role of VAMs in teacher performance appraisals isn’t the only area that could use theory- and evidence-based input into policy issues. Other key debates where our input could be useful include motivating the long-term unemployed, revising the federal personnel
system, employment and disability law, and healthcare. All of these areas have I-O psychologists contributing at the implementation level but not the policy level.
Currently, policy makers go to economists for workforce issues, and I-O
psychologists are left to implement ill-conceived policies based on market
theories with little direct empirical support relating to individual behavior. As
a professional society, we need to work hard to change that.
References
Braun, H. I. (2005). Using student progress to evaluate teachers: A primer on value-added
models. Princeton, NJ: Educational Testing Service. Retrieved from http://www.ets.org/
Media/Research/pdf/PICVAM.pdf.
Briggs, D., & Domingue, B. (2011). Due diligence and the evaluation of teachers: A review
of the value-added analysis underlying the effectiveness ranking of Los Angeles Unified School
District teachers by the Los Angeles Times. Boulder, CO: National Education Policy Center.
Retrieved from http://nepc.colorado.edu/publication/due-diligence.
Doran, H. C., & Izumi, L. T. (2004). Putting education to the test: A value-added model for
California. San Francisco, CA: Pacific Research Institute. Retrieved from http://www.pacificresearch.org/docLib/200702021_Value_Added.pdf.
Jacob, B. A., Lefgren, L., & Sims, D. (2008). The persistence of teacher-induced learning
gains. Cambridge, MA: National Bureau of Economic Research. Retrieved from
http://www.nber.org/papers/w14065.
Lefkowitz, J. (2011). Rating teachers illegally? The Industrial-Organizational Psychologist,
48(4), 47–49.
McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., & Hamilton, L. S. (2003). Evaluating
value-added models for teacher accountability. Santa Monica, CA: RAND Corporation.
Retrieved from http://www.rand.org/pubs/monographs/2004/RAND_MG158.pdf.
Sanders, W. L., & Horn, S. P. (1994). The Tennessee Value-Added Assessment System:
Mixed-model methodology in educational assessment. Journal of Personnel Evaluation in Education, 8, 299–311.
Using I-O to Fight a War
Douglas R. Lindsay, Lt. Col., PhD
United States Air Force Academy
Industrial-organizational psychology is critical to prosecuting modern
warfare. This is a pretty strong statement and one that may make some people a little uncomfortable. However, it is truer today than ever before. Due to
the complex nature of modern warfare, the power and understanding that I-O
psychologists bring to bear is vital for success. In a military environment, having the right people in the right place at the right time with the right skill set
is not a nice thing to have; it is a necessity. In fact, it is a matter of life or death.
Although the military establishment may seem like a fairly nuanced type
of work environment (even while employing millions of workers), the military has been an important organization with respect to psychology and, in
particular, to I-O psychology. In fact, the relationship between I-O and the
military goes back almost 100 years to World War I (Landy & Conte, 2004).
This relationship has continued through the decades to many projects such as
Army Alpha (and Beta; Yerkes, 1921), Armed Services Vocational Aptitude
Battery (ASVAB; Jensen, 1985; Murphy, 1984), and Project A (Campbell &
Knapp, 2001), to name just a few. Even now, a quick search of the key word
military in just a few top journals (i.e., Journal of Applied Psychology,
Human Performance) yielded over 500 articles. So relevant is the military
that the American Psychological Association even created the Division of
Military Psychology (Division 19) with its own peer-reviewed journal (Military Psychology) to examine and understand the benefits and contributions of
this “unique” population.
Several months ago, I offered some comments in a TIP article titled
“Reflections From a Deployment to Afghanistan: The Relevance of I-O in a
War Zone.” In that article, I brought up several questions I had as I was going
through my recent deployment where I thought I-O had relevance (and there
were many). Now, having completed that deployment, I would like to offer
some insights as to where I-O is making strong contributions and where I
think we could improve. This will be done by explaining the particular challenge of military operations and how I-O has been used to deal with this situation. In particular, there are three areas in which this is apparent: job rotation, measurement, and employee contentment.
One of the enduring challenges in successfully conducting a military
operation is the coordinated movement of military personnel (and associated
civilians and contractors) into and out of the theater of operations. This is not
a simple task about putting one person into the right job. It is about putting
thousands of people into the right jobs for rotating periods of time in different locations in different countries across different military branches (Army,
Navy, Air Force, and Marines). For example, in order to fill one position for
a 6-month deployment, you must project the vacancy far enough in advance
to find a qualified replacement; identify that individual; ensure that person
has the minimum necessary qualifications, is of the appropriate rank level
(military jobs also have a rank requirement on them), and can be released during that time period by the organization that owns them (in other words, the
losing organization must be able to temporarily backfill the position); notify
the selected individual, provide them the necessary predeployment training,
get them their equipment, and transfer them into the theater (and that is just
to get them ready to actually start the work). There are literally thousands of
people simultaneously doing this at various points in the process all across
the world. I-O psychologists are the perfect professionals to assist in such an
endeavor. Due to our training and experience, we have a drastic impact at
every point along that chain. Although the military does not specifically hire
I-O psychologists, we do have many people in the organization with I-O education and training that assist in this process. As an example, I am a behavioral scientist in the Air Force but have a PhD in I-O psychology.
Ironically, for my deployment, I was caught in an interesting situation.
The position for which I deployed was not quite what I had envisioned.
The position was requisitioned for a specific skill set from the Army to be
filled by the Air Force. This type of thing happens often as one branch of
service (Army, Navy, Air Force, and Marines) takes the lead with other
branches acting in a supporting role with respect to certain functions. The
position was asked for by the Army with certain codes and specifications outlined in the position description. However, when the information was
received by the Air Force, it was translated into Air Force codes and specifications. Interestingly, and probably not surprisingly, when these types of
translations occur, there can be discrepancies. That was exactly what happened in my case. The Army wanted a certain set of skills and that set of skills
is packaged differently in the military services. In a way, it can be seen as
someone knowing what they want but not knowing how to ask for it. However, due to my background and experience in the field of I-O, I was able to
fix the situation and ensure that there was more clarity as to who would fill
subsequent deployments into that position. In the end, I was able to contribute
effectively to the mission.
This is not an easy endeavor. One of the constant balancing acts that takes
place is how long one should leave these individuals in their deployed position. Because people are in these jobs for various lengths of time, it is
very difficult to build up a corporate knowledge of what is going on. Imagine
if your organization had to deal with a minimum of 100% turnover every single year, with no end in sight. What type of impact would that have on the
organization? One way that this has been mitigated in the military is to ensure
an adequate job analysis is conducted. Even with the high turnover, this specific process of defining the requirements for these vital positions helps to
ensure that the next person coming into the position will have the minimum
necessary qualifications and enough organizational knowledge to be effective
immediately upon hitting the ground. Although we have had some missteps
here and there, for the most part we are ensuring that most folks have the right
skills prior to assuming their deployed position. We are currently in our 9th
year of military operations in Afghanistan. Although this may seem like a relatively long time, it is often referred to as “9 one-year wars” versus a 9-year
war due to this high level of turnover. I-O is critical to ensuring that no matter
how long this military operation takes place, we are setting people (and the
organization) up for success by putting the right people into the right positions.
Another challenge area has to do with how the measurement of local
national attitudes and beliefs is conducted. The nation of Afghanistan has a
roughly 28% literacy rate (43% for men; 13% for women; Central Intelligence Agency, 2011). Add to that the fact that the nation is at war and you
have a very complicated data collection environment. Although there are
pockets in the country that are fairly stable (like the capital city of Kabul), in
order to get a representative sample of the Afghan population you obviously
have to get out of the major cities. This creates an enormous measurement
challenge. The result has been a reliance on data collection techniques such
as polling and atmospherics. Each of these methods has its limitations.
Polling typically takes place with the use of Afghan nationals conducting
the interviews. This helps in dealing with the translation and cultural considerations. However, there are several things that must be kept in mind regarding polling in Afghanistan. The first of these is that the questions must be
aimed at the level of the participant. That means adequate pretesting needs to
be done to ensure that this largely illiterate society is receiving and understanding the questions in the same way they are being asked. The second factor is that Afghan people like to tell you what they think you want to hear.
That means that the information they are passing on may not accurately
reflect what they are actually feeling. A third factor is that, due to the fighting that takes place on a daily basis, it is difficult to travel to particular parts
of the country (even for an Afghan citizen). This means that some areas may
be overrepresented while others may be underrepresented. A final aspect has
to do with the violence that citizens face on a daily basis from the Taliban. If
they are perceived as being supportive of international forces, then they are
subject to horrific retaliation from the Taliban. This has a direct impact on the
types of questions that are asked because they often do not answer questions
that they perceive to be dangerous to them, their family, or their village. As
psychologists, we are in a perfect place to understand the human in this particular environment. Although this is a difficult task, I believe that we are
making headway in this area, and the knowledge that we bring regarding culture and decision-making processes helps to inform our polling efforts.
Atmospherics is a different type of information collection that I was actually unfamiliar with prior to my deployment. In essence, atmospherics is a passive
form of information gathering where an individual (an Afghan) frequents certain places (e.g., coffee shop, mosque) and listens to what people are talking
about. This information is then recorded and passed on for analysis. Oftentimes,
the collectors are given certain topics to listen for and key into those as they are
being talked about. The key here is unobtrusiveness. The intent is to pick up on
what folks are talking about and to see if there are any important messages that
are being repeated. Again, due to the high illiteracy rate in the country, much of
the information is passed on via word of mouth. As you are probably thinking
now, there are limitations to such a passive form of collection, such as where
these collectors should be and to whom they should listen. And what about the
conversations that occur outside of public places, translation issues, and so on?
Although this type of method is not one we traditionally use when collecting
data, it is a predictable response to the environment in which we find ourselves.
I think I-O psychologists can help inform how we collect and process such data
in the future. One of the many strengths of I-O psychologists is the deep set of
quantitative and qualitative skills that can be applied in such a situation to figure out how best to capture and process such information.
A final challenge has to do with what I would classify as employee contentment. Specifically, how do we deal with the issues, stress, and separation
that the employees will experience (in this case, the individual soldier)? There
are many different factors going through an individual’s mind when they are
informed that they will spend up to 1 year away from their family, social support, and/or familiar surroundings by serving in a war zone. This means missing such things as holidays, birthdays, graduations, and other significant life
events while simultaneously processing the aspects of being in harm’s way. In
addition, these notifications occasionally occur at the last minute (in my case
I had 2 weeks’ notice for my 6-month deployment), which means added stress
to the situation as they quickly prepare for their upcoming “assignment.” As I-O psychologists, we naturally start to think about things like work–life balance, employee safety, compensation, and motivation. All of these issues are
important and will likely impact the employee and their performance.
Work–life balance is relevant here because, in a sense, there is no balance.
In this case, the employee is facing a prolonged period of work, with a drastically different life part of the work–life equation. In addition, for those that
are married or have children, the family members left behind are also facing
a difficult adjustment process without that individual at home.
As a means of mitigating some of these factors, the organization makes an
attempt to partially compensate the soldier for their service. This comes in the
form of such benefits as hazardous duty pay, family separation allowance, and
tax-free income (earned while the military member is in the war zone).
Although clearly not meant to offset the total “cost” of the individual going to
war, it is an attempt by the organization to provide for the member (and family, if applicable) while they are serving in the hostile environment. Along with
this compensation, there are many programs for the families of these deployed
members that attempt to help out those left behind (childcare programs, adjustment programs, and so forth). These programs help support
families and provide the deployed member some assurance that the organization is helping to look after their family while they are serving their country. All
of these issues benefit from I-O expertise, and we are learning new information every week about how to more adequately support these employees.
One of the largest challenges of ongoing operations is the recurrent nature
of military deployments. There are numerous military members who are on
their third, fourth, or fifth (and in some cases more) deployment. This is a difficult situation to handle because these operations do not have a definitive
end date. For example, combat operations have officially ceased in Iraq, but
there are still approximately 50,000 military members deployed to that country (as well as numerous civilians and contractors). With hot spots popping
up all over the world, it is hard to predict how this operational tempo will
look from year to year. The result is a constant predeployment, deployment,
and postdeployment cycle. This type of pace will have predictable long-term
effects on those involved. If you factor in the nature of war experiences, the
fact that many of these members also face combat, and the threat of improvised explosive devices on a daily basis, there are stress-related issues that
must also be dealt with (e.g., PTSD). We are making progress dealing with
such issues, but there is clearly more work that can be done in this area.
The bottom line is that I-O psychologists have played, and will continue to play, a critical role in how military operations are conducted. In fact, we may be in the best position to provide critical information as to how these operations should be conducted. The
investment of I-O psychologists not only benefits the military in terms of
conducting asymmetric and urban warfighting, but generalizable findings
drawn from these extreme conditions can be applied to the battles encountered in nonmilitary organizations.
Applicant Faking: A Look Into the Black Box
Matthias Ziegler
Humboldt-Universität zu Berlin, Berlin, Germany
Abstract
Applicant faking behavior has been a concern probably for as long as personality questionnaires have been used for selection purposes. By now, the
impact of faking on test scores (more specifically, on their means, construct validity, and criterion validity) is more or less agreed upon. Many researchers
have put their efforts toward developing methods that prevent faking or help
to correct for it. However, models describing the actual faking process are
scarce. The models that do exist primarily focus on describing antecedents
that lead to faking or moderate the amount of faking. The cognitive process
itself mostly remains a black box. Based on existing literature from survey
research as well as qualitative and quantitative research, the present paper
introduces a process model of applicant faking behavior. Using this model as
a starting point, possible future research areas are discussed.
Many different definitions for applicant faking behavior on a personality
questionnaire exist. Ziegler, MacCann, and Roberts (2011) recently wrote in an
edited book on faking: “Faking represents a response set aimed at providing a
portrayal of the self that helps a person to achieve personal goals. Faking
occurs when this response set is activated by situational demands and person
characteristics to produce systematic differences in test scores that are not due
to the attribute of interest” (p. 8). In the same book, MacCann, Ziegler, and
Roberts (2011) reviewed different definitions of faking that exist in the literature. They concluded: “It thus seems that most experts view faking as a deliberate act, distinguishing this from other forms of response distortion that may
not be conscious and intentional. Faking is thus a deliberate set of behaviors
motivated by a desire to present a deceptive impression to the world. Like most
other behavior, faking is caused by an interaction between person and situation
characteristics” (p. 311). There also seems to be growing agreement that, as a
conservative estimate, about 30% of applicants fake (Converse, Peterson, &
Griffith, 2009; Griffith, Chmielowski, & Yoshita, 2007; Griffith & Converse,
2011; Peterson, Griffith, & Converse, 2009) and that this rate might be moderated by such aspects as employability (Ellingson, 2011). Furthermore, it is
hardly controversial that faking impacts mean scores of personality questionnaires in applicant settings (Birkeland, Manson, Kisamore, Brannick, & Smith,
2006; Ziegler, Schmidt-Atzert, Bühner, & Krumm, 2007). Less agreement
exists as to the effect of faking on construct and criterion validity (Ellingson,
Smith, & Sackett, 2001; Ziegler & Bühner, 2009; Ziegler, Danay, Schölmerich,
& Bühner, 2010). Furthermore, a lot of research is dedicated to developing
questionnaires or warnings that prevent faking (Bäckström, Björklund, & Larsson, 2011; Dilchert & Ones, 2011; Stark, Chernyshenko, & Drasgow, 2011) or
methods that allow correcting for faking (Kuncel, Borneman, & Kiger, 2011;
Paulhus, 2011; Reeder & Ryan, 2011). Even though this research has come up
with a lot of encouraging results, its aim might seem a bit hasty to people outside of the faking community. Transferring the research path taken so far to
medicine, for example, would mean that scientists try different cures for a new
virus or different prevention strategies without having fully understood the
virus’ nature. In other words, despite the amount of research being done, little
is known about the actual faking process. What do people think about when
faking a personality questionnaire? Even though there are some models of faking (Mueller-Hanson, Heggestad, & Thornton, 2006; Snell, Sydell, & Lueke,
1999), most of these models are concerned with the antecedents of faking
behavior but not faking itself. Therefore, this paper introduces a process model
based on existing literature and presents some new research to further elucidate the nature of faking.
The Process of Answering Personality Questionnaires
The actual thought process that takes place when answering a personality questionnaire item has been examined in detail in survey research. Krosnick (1999) summarized different models and ideas and proposed a four-step
process of responding that consists of comprehension, retrieval, judgment,
and mapping. In this sense, test takers first have to encode the item and form
a mental representation of its content (comprehension). During the next step,
information is retrieved that is of value with regard to the item content
(retrieval). This information is then compared with the mental representation
of the item (judgment) and the result mapped onto the rating scale (mapping).
Obviously, this is an optimal process that occurs when people are motivated
to answer the items of the questionnaire in a sincere manner. Krosnick used
the term optimizing for this strategy. Depending on factors such as motivation, cognitive ability, and fatigue, some test takers might not undergo this
optimal process. In that case, a satisficing strategy is used. Here, respondents
adjust the effort put forward relative to their expectation of the importance of
the test. Tourangeau and Rasinski (1988) suggested a similar process model
that included an additional step during mapping called editing. This editing is
supposed to reflect socially desirable responding or faking. Thus, one could
conclude that a rather detailed model of faking behavior already exists. However, some results from faking research raise doubts. Birkeland et al. (2006)
in their meta-analysis demonstrated that, depending on the job, applicants
fake personality traits differently. Moreover, there is evidence that some people fake in the wrong direction (Griffith, 2011). In addition, there is growing
agreement that faking cannot be regarded as a unitary process that is comparable for each and every test taker. This idea is supported by empirical evidence showing that there are at least two different forms of faking: slight and
extreme faking (Zickar, Gibby, & Robie, 2004; Zickar & Robie, 1999). Thus,
it seems worthwhile to further investigate the cognitive process people undergo when faking on a personality questionnaire.
A Qualitative Study of Faking
We conducted a qualitative study using the cognitive interview technique
(Dillman & Redline, 2004; Willis, 2004) to elucidate the cognitive process
underlying faked personality questionnaire responses. Participants first filled
out a personality questionnaire with a specific faking instruction (see Ziegler &
Bühner, 2009): they were to assume they were applying for a place in an undergraduate psychology program. In order to get a place in the program, they
had to first take the personality test, the NEO-PI-R (Ostendorf & Angleitner,
2004). However, they were told a test expert would examine the results for faking. All participants were asked to think out loud, to voice all their thoughts as
they filled out the personality questionnaire. The entire time a test administrator
was present but not visible to the participant. The administrator took notes and
in case participants fell silent, reminded them to speak all thoughts out loud.
Finally, participants were administered a semistructured interview. Participants
were asked how they tried to achieve the given goal, whether they applied that
technique to all questions, and finally if they could name the strategy they had
used. There were 50 participants (34 women and 16 men). All were undergraduate students enrolled in various degree programs (27 were psychology students). The
average age and semester were 22.26 (SD = 1.91) and 1.89 (SD = 2.01), respectively. Two faking experts independently analyzed the combined information
with the goal of developing a cognitive process model of faking. The approach
chosen to analyze the qualitative data was based on grounded theory.
After analyzing all the information gathered, both judges independently
identified the two main strategies for intentionally distorting a questionnaire
suggested by Zickar et al. (2004): slight faking and extreme faking (see also
Robie, Brown, & Beaty, 2007). Rater agreement between the two judges was then assessed using Cohen's kappa (κ = .77). A third expert was consulted in cases of
nonagreement. Based on this, about 20% of the participants were categorized
as extreme fakers and 80% as slight fakers.
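For readers who want to see the arithmetic behind the agreement statistic reported above, the sketch below computes Cohen's kappa for two raters who each classify the same participants as slight or extreme fakers. The ratings are invented for illustration and are not the study's data.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters assigning nominal labels to the same cases."""
    n = len(labels_a)
    # Observed agreement: proportion of cases on which the two raters agree.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over categories of the product of the marginal proportions.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ratings for 10 participants ("S" = slight faker, "E" = extreme faker).
rater_1 = ["S", "S", "E", "S", "S", "E", "S", "S", "E", "S"]
rater_2 = ["S", "S", "E", "S", "E", "E", "S", "S", "E", "S"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # about .78 for these invented ratings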
Of more interest, though, were the actual thought processes that were verbalized. We first looked at the strategies participants reported using to fake.
In most cases this provided little insight because participants could not really name a strategy. Of more help were the participants’ responses to how they
had tried to achieve the goal of faking good. Here two strategies emerged.
Most people said that they took the answer they would have
given under normal circumstances and pushed it a little in the “right” direction. These responses were confirmed by the actual thought protocols. To
give an example, one participant said when pondering his answer to the item
“I keep my things clean and tidy:” “Well, I guess I don’t really do that, but if
I want to be selected, I better endorse a four.” Only a few participants, those
labeled as extreme fakers, stated that they endorsed the highest possible category. However, one important aspect that needs to be mentioned is that neither extreme nor slight fakers faked all items regardless of their content.
Before considering an answer, participants judged whether the question
would reveal information important to select psychology students. If that was
the case, they faked. If it was not the case participants did one of two things.
They either answered honestly or they answered neutrally using the middle
category, some even did both alternatingly. The middle category was often
chosen in order to avoid a wrong answer or keep information believed to be
unnecessary for the position applied for a secret.
Another interesting finding was that students currently enrolled in psychology faked in different ways than students in other academic programs, and the differences were rather pronounced. Whereas psychology students tended to endorse
Conscientiousness items and portray themselves as low in Neuroticism, other
students faked Openness and portrayed themselves as more neurotic.
Developing a Process Model of Applicant Faking Behavior
Our findings can be integrated into the general models by Krosnick (1999)
and Tourangeau and Rasinski (1988) described above. The resulting model is
displayed in Figure 1. We did find evidence for the four-stage process model
consisting of comprehension, retrieval, judgment, and mapping. However,
there were also several noteworthy differences or extensions.
Figure 1: A cognitive process model of applicant faking behavior. [Figure not reproduced. In the model, comprehension is followed by an importance classification: low-importance items are answered honestly or with the middle category, whereas high-importance items proceed through optimizing or satisficing, retrieval/judgment, and mapping. Person characteristics (e.g., specific knowledge, implicit theories, general mental ability, ability to reflect, self-understanding, motivation, self-deception, honesty, self-concept, frame of reference, honest trait standing, faking style, employability, self-efficacy beliefs, overclaiming, narcissism) and situation characteristics (e.g., stakes, opportunity, supervision, presence of a test administrator, warnings) influence each stage.]
One noteworthy aspect is that the interaction between person and situation
characteristics could be observed in each of the stages. It seems that people first
evaluate the importance of an item in terms of the situational demand (e.g.,
application for a certain job or student program). If the test taker judges the item
as unimportant in regard to the demand of the situation, no faking occurs. Thus,
immediately after forming a mental representation of the item content, its
importance is classified. Therefore, the general model was extended to include
this classification. As a consequence, even the use of the maximum response
strategy does not necessarily result in a maximum score because not all items
are faked. The results support the idea that specific knowledge and implicit theories about the position applied for are used for evaluation of importance. Further, the stakes of the situation may also impact the importance decision.
Following this initial importance classification are retrieval and judgment.
This process is similar to that described by Krosnick (1999). Personality traits
such as self-understanding or the ability to reflect may push this process more toward optimizing or satisficing. On the situation side,
the presence of the administrator clearly increased the likelihood of optimizing.
The mapping stage happened so quickly that it was virtually impossible
to gather information from the thought protocols. Of course, the “true” standing regarding the item poses a natural limit for the amount of faking possible.
Personality traits such as honesty or a high perceived employability can be
assumed to affect mapping as well. Drawing a distorted picture of one’s self
in an applicant setting certainly requires high self-efficacy beliefs (Ziegler,
2007); that is, the test taker believes he or she can live up to this standard. Alternatively, so-called dark personality traits such as narcissism or a general tendency to overclaim might be at work. Finally, continued lying
requires a certain cognitive effort. One has to keep up with the lies. Consequently, general mental ability (McGrew, 2009) was included in our model.
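To make the stages more concrete, the following sketch simulates, in deliberately simplified form, how a single respondent might answer one positively keyed item under the model: the item is first classified as important or unimportant for the position, unimportant items are answered honestly or with the middle category, and important items are shifted from the honest answer either slightly or to the scale maximum depending on faking style. The scale, probabilities, and styles are illustrative assumptions, not parameters estimated in the study.

import random

SCALE_MAX, SCALE_MID = 5, 3  # assumed 5-point rating scale, positively keyed item

def answer_item(honest_answer, item_is_important, faking_style, p_honest_if_unimportant=0.5):
    """Return a (possibly faked) rating under the simplified five-stage model."""
    # Importance classification: items judged uninformative for the position are not faked.
    if not item_is_important:
        # Either answer honestly or retreat to the neutral middle category.
        return honest_answer if random.random() < p_honest_if_unimportant else SCALE_MID
    # Mapping with editing: extreme fakers jump to the scale maximum,
    # slight fakers push their honest answer one step in the "right" direction.
    if faking_style == "extreme":
        return SCALE_MAX
    return min(honest_answer + 1, SCALE_MAX)

random.seed(1)
for important in (True, False):
    print(important, answer_item(honest_answer=2, item_is_important=important, faking_style="slight"))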
Implications and Future Research
At first glance, knowing more about the actual thought process does not
seem to bear immediate practical implications. An important issue, though, is the importance classification and its influence on the way uninformative
items are answered. The increased use of the middle category is a potential
problem. Thus, rating scales without such a middle category might be advisable. Moreover, the fact that prior knowledge as well as implicit theories
about the job are used suggests that providing job information before the test
might level the playing field. This might be particularly necessary if the
applicant pool contains people new to the job as well as experienced workers. Otherwise, job experience might help applicants fake in the right direction, leaving people new to the job at a disadvantage. Because the personality questionnaire is used to assess individual personality differences but not
job knowledge, this would endanger the test score’s construct validity.
The methods used here only give insight into conscious faking efforts.
However, as Paulhus has outlined (Paulhus, 2002), there are also unconscious
processes (e.g., self-deceptive enhancement and self-deceptive denial) influencing the answering process. Their influence as well as their interaction with other
personality and situational aspects of the model suggested here will be important to study. The occurrence of two qualitatively different ways to use a rating
scale has already been reported for low-stakes settings. Here, people preferring
either middle or extreme categories could be distinguished (Rost, Carstensen,
& von Davier, 1997, 1999). The extent to which these response styles relate to
slight and extreme faking should be examined using large samples. Otherwise,
we might be talking about the same phenomena using different labels.
This study also contradicts one of the dogmas of faking research, which
is to always use real applicant samples. In order to investigate basic cognitive
processes it sometimes is necessary to go back to the laboratory. This insight
might also be true for other faking research questions.
The model introduced here still has many blank spots on the situation side.
This clearly underscores a need to broaden the research perspective and place
more emphasis on situational factors associated with faking. This idea is anything but new. Nevertheless, systematic research on how situations are perceived and how this perception influences behavior is rare.
Finally, the importance classification of items took place so naturally and
quickly that it seems unlikely that such a classification does not occur in low-stakes settings as well. Thus, an important next step would be to further elucidate the cognitive process of item answering under normal conditions.
The model introduced here must be considered as preliminary. As the
many incomplete lists imply, the enumeration of person and situation aspects
influencing the thought process is by no means exhaustive. However, the
model should be understood as a starting point for more research aimed at
understanding applicant faking behavior more fully. To this end, the hypotheses put forward here based mainly on qualitative research should be tested
using quantitative methods and large samples. Finally, the proposed
model shows that to understand the complexity of faking behavior, more
elaborate models including person and situation variables, as well as their
interaction, are necessary. Assumptions that a general editing process or the
differentiation between slight and extreme faking suffice to explain individual differences in actual faking behavior are definitely premature.
This paper’s purpose was to elucidate the black box that is applicant faking
behavior. Using existing cognitive models for the item answering process as well
as qualitative analyses, a model was introduced consisting of five stages: comprehension, importance classification, retrieval, judgment, and mapping (including editing). The importance classification based on knowledge and implicit theories, the handling of supposedly neutral items (i.e., uninformative with regard to the faking goal), as well as clear evidence for the person–situation interaction
provide additional insight into the cognitive process taking place during faking.
References
Bäckström, M., Björklund, F., & Larsson, M. R. (2011). Social desirability in personality
assessment: Outline of a model to explain individual differences. In M. Ziegler, C. MacCann &
R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 201–213). New
York, NY: Oxford University Press.
Birkeland, S. A., Manson, T. M., Kisamore, J. L., Brannick, M. T., & Smith, M. A. (2006).
A meta-analytic investigation of job applicant faking on personality measures. International
Journal of Selection and Assessment, 14, 317–335.
Converse, P., Peterson, M., & Griffith, R. (2009). Faking on personality measures: Implications for selection involving multiple predictors. International Journal of Selection and Assessment, 17, 47–60.
Dilchert, S., & Ones, D. S. (2011). Application of preventive strategies. In M. Ziegler, C.
MacCann & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp.
177–200). New York, NY: Oxford University Press.
Dillman, D. A., & Redline, C. D. (2004). Testing paper self-administered questionnaires:
Cognitive interview and field test comparisons. In S. Presser, J. M. Rothgeb, M. P. Couper, J. T.
Lessler, E. Martin, J. Martin & E. E. Singer (Eds.), Methods for testing and evaluating survey
questionnaires. Hoboken, NJ: Wiley.
Ellingson, J. E. (2011). People fake only when they need to fake. In M. Ziegler, C. MacCann
& R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 19–33). New
York, NY: Oxford University Press.
Ellingson, J. E., Smith, D. B., & Sackett, P. R. (2001). Investigating the influence of social
desirability on personality factor structure. Journal of Applied Psychology, 86, 122–133.
Griffith, R. L. (2011, April). Can faking ever be overcome in high-stakes testing? Paper presented at the Debate conducted at the 26th Annual Conference for the Society for Industrial and
Organizational Psychology, Chicago, IL.
Griffith, R. L., Chmielowski, T., & Yoshita, Y. (2007). Do applicants fake? An examination
of the frequency of applicant faking behavior. Personnel Review, 36, 341–357.
Griffith, R. L., & Converse, P. D. (2011). The rules of evidence and the prevalence of applicant faking. In M. Ziegler, C. MacCann, & R. D. Roberts (Eds.), New perspectives on faking in
personality assessment (pp. 34–52). New York, NY: Oxford University Press.
Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50, 537–567.
Kuncel, N. R., Borneman, M., & Kiger, T. (2011). Innovative item response process and
Bayesian faking detection methods: More questions than answers. In M. Ziegler, C. MacCann &
R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 102–112). New
York, NY: Oxford University Press.
MacCann, C., Ziegler, M., & Roberts, R. D. (2011). Faking in personality assessment: Reflections and recommendations. In M. Ziegler, C. MacCann & R. D. Roberts (Eds.), New perspectives
on faking in personality assessment (pp. 309–329). New York, NY: Oxford University Press.
McGrew, K. (2009). CHC theory and the human cognitive abilities project: Standing on the
shoulders of the giants of psychometric intelligence research. Intelligence, 37, 1–10.
Mueller-Hanson, R., Heggestad, E. D., & Thornton, G. C. (2006). Individual differences in
impression management: An exploration of the psychological processes underlying faking. Psychology Science, 3, 288–312.
Ostendorf, F., & Angleitner, A. (2004). NEO-PI-R. NEO Persönlichkeitsinventar nach Costa und
McCrae. Revidierte Fassung. [NEO-PI-R. NEO Personality Inventory]. Göttingen, Germany: Hogrefe.
Paulhus, D. L. (2002). Socially desirable responding: The evolution of a construct. In H. I.
Braun, D. N. Jackson, & D. E. Wiley (Eds.), The role of constructs in psychological and educational measurement (pp. 49–69). Mahwah, NJ: Erlbaum.
Paulhus, D. L. (2011). Overclaiming on personality questionnaires. In M. Ziegler, C. MacCann, & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 151–164). New York, NY: Oxford University Press.
Peterson, M., Griffith, R., & Converse, P. (2009). Examining the role of applicant faking in
hiring decisions: Percentage of fakers hired and hiring discrepancies in single- and multiple-predictor selection. Journal of Business and Psychology, 24, 1–14.
Reeder, M. C., & Ryan, A. M. (2011). Methods for correcting for faking. In M. Ziegler, C.
MacCann & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp.
131–150). New York, NY: Oxford University Press.
Robie, C., Brown, D. J., & Beaty, J. C. (2007). Do people fake on personality inventories?
A verbal protocol analysis. Journal of Business and Psychology, 21, 489–509. doi:10.1007/s10869-007-9038-9
Rost, J., Carstensen, C. H., & von Davier, M. (1997). Applying the mixed Rasch model to
personality questionnaires. In J. Rost & R. E. Langeheine (Eds.), Applications of latent trait
and latent class models in the social sciences (pp. 324–332). New York, NY: Waxmann.
Rost, J., Carstensen, C. H., & von Davier, M. (1999). Are the Big Five Rasch scalable? A
reanalysis of the NEO-FFI norm data. Diagnostica, 45, 119–127.
Snell, A. F., Sydell, E. J., & Lueke, S. B. (1999). Towards a theory of applicant faking: Integrating studies of deception. Human Resource Management Review, 9, 219–242.
Stark, S., Chernyshenko, O. S., & Drasgow, F. (2011). Constructing fake-resistant personality tests using item response theory: High-stakes personality testing with multidimensional pairwise preferences. In M. Ziegler, C. MacCann, & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 214–239). New York, NY: Oxford University Press.
Tourangeau, R., & Rasinski, K. A. (1988). Cognitive processes underlying context effects in attitude measurement. Psychological Bulletin, 103, 299–314.
Willis, G. (2004). Cognitive interviewing revisited: A useful technique, in theory? In S.
Presser, J. M. Rothgeb, M. P. Couper, J. T. Lessler, E. Martin, J. Martin & E. E. Singer (Eds.),
Methods for testing and evaluating survey questionnaires (pp. 299–317). Hoboken, NJ: Wiley.
Zickar, M. J., Gibby, R. E., & Robie, C. (2004). Uncovering faking samples in applicant,
incumbent, and experimental data sets: An application of mixed-model item response theory.
Organizational Research Methods, 7, 168–190.
Zickar, M. J., & Robie, C. (1999). Modeling faking good on personality items: An item-level
analysis. Journal of Applied Psychology, 84, 551–563.
Ziegler, M. (2007). Situational demand and its impact on construct and criterion validity of
a personality questionnaire: State and trait, a couple you just can't study separately! Doctoral dissertation, LMU München, Fakultät für Psychologie und Pädagogik.
Ziegler, M., & Bühner, M. (2009). Modeling socially desirable responding and its effects.
Educational and Psychological Measurement, 69, 548.
Ziegler, M., Danay, E., Schölmerich, F., & Bühner, M. (2010). Predicting academic success
with the Big Five rated from different points of view: Self-rated, other rated and faked. European
Journal of Personality. doi: 10.1002/per.753
Ziegler, M., MacCann, C., & Roberts, R. D. (2011). Faking: Knowns, unknowns, and points
of contention. In M. Ziegler, C. MacCann & R. D. Roberts (Eds.), New perspectives on faking
in personality assessment (pp. 3–16). New York, NY: Oxford University Press.
Ziegler, M., Schmidt-Atzert, L., Bühner, M., & Krumm, S. (2007). Fakability of different
measurement methods for achievement motivation: Questionnaire, semi-projective, and objective. Psychology Science, 49(4), 291–307.
SIOP Recommends Review of Uniform Guidelines
Doug Reynolds
Development Dimensions International
Eric Dunleavy
DCI Consulting Group
Early this spring, SIOP encouraged several federal agencies to review and
revise the Uniform Guidelines on Employee Selection Procedures (UGESP),
one of the primary sources of guidance for regulatory review of the validity
of assessments used for employment decision making in organizations. This
article briefly summarizes the background for the request and provides the
text of SIOP’s recommendation.
Background
On January 18, 2011, President Obama issued Executive Order 13563,
which directs federal agencies to develop a plan to review existing regulations within each agency’s purview.1 Consistent with the president’s Open
Government Initiative, the order emphasized the need for federal regulations
that are well-integrated, innovative, flexible, based on the best available science, and adopted through a process that involves public participation. Further, the order required each agency to prepare a plan for conducting a “retrospective analysis” of existing regulations to determine which should be
modified, streamlined, expanded, or repealed for the purpose of ensuring the
effectiveness and efficiency of the regulatory programs.
Consistent with this directive, federal agencies issued calls for public
input on which regulations should be reviewed and the methods for conducting the review and revisions. Requests of this nature were posted by each of
the existing UGESP-sponsoring agencies: the Equal Employment Opportunity Commission (EEOC), the Department of Labor (DoL), and the Department of Justice (DoJ). SIOP prepared a response to each of these agencies.
An initial draft of the response was circulated among the Executive Board for
review and comment, and a revised version was signed by SIOP President
Eduardo Salas. In March of this year, the SIOP Administrative Office submitted separate responses to EEOC, DoL, and DoJ.
SIOP’s Response
The substantive portions of SIOP’s response to the EEOC are reprinted
below. Responses to DoL and DoJ were nearly identical to the EEOC version.
The Society for Industrial and Organizational Psychology (SIOP) welcomes the opportunity to provide a response to the Equal Employment
Opportunity Commission’s (“EEOC”) request for public comment on the
plan for retrospective analysis of significant regulations. We commend the EEOC for offering the opportunity to provide suggestions regarding the regulations that should be reviewed and the factors to be considered as the review is conducted.
1 www.federalregister.gov/articles/2011/01/21/2011-1385/improving-regulation-and-regulatory-review
SIOP is a Division of the American Psychological Association (APA), an
organizational affiliate of the American Psychological Society, and
includes over 3,900 member industrial-organizational psychologists and
3,000 student affiliates. The Society's mission is to enhance human well-being and performance in organizational and work settings by promoting
the science, practice, and teaching of industrial-organizational psychology.
On behalf of SIOP, I am writing to express our view that the Uniform
Guidelines on Employee Selection Procedures2 (the “Guidelines”) and
their corresponding Questions and Answers3 should be included among
the initial regulations to be reviewed in the retrospective analysis.
The Guidelines are a critical source of guidance for employers who intend
to select and manage their workforce using fair and valid selection
processes. According to Section 1(B) of the Guidelines, their purpose is
clear: “These guidelines incorporate a single set of principles which are
designed to assist employers, labor organizations, employment agencies,
and licensing and certification boards to comply with requirements of Federal law prohibiting employment practices which discriminate on grounds
of race, color, religion, sex, and national origin and provide a framework
for determining the proper use of tests and other selection procedures.”
Furthermore, the Guidelines describe research strategies (i.e., validation
research) that can be used to determine whether a selection procedure is sufficiently job-related, a critical question when a selection process has the
potential to adversely impact protected classes. Determining whether a selection procedure is sufficiently job-related is a research question that SIOP
members are particularly well suited to help answer; I-O psychologists have
been conducting research on this topic for many decades. Our members work
for and consult with both the federal government and many of the nation’s
largest private employers. SIOP members also conduct scientific research
and provide expert testimony on behalf of agencies, plaintiffs and defendants
in legal proceedings that involve employee selection and validation methods.
The science of personnel assessment and employee selection has evolved
substantially since the Guidelines were published in 1978. Advancements
in scientific research and innovations in the practice of employee selection have been incorporated into the SIOP Principles for the Validation
and Use of Personnel Selection Procedures4 (“Principles”). The Principles have been revised three times since the Guidelines were published, most recently in 2003. The Principles specify established scientific findings and generally accepted professional practices from the field of personnel selection psychology related to the choice, development, evaluation, and use of personnel selection procedures. Likewise, the Standards for Educational and Psychological Testing5 (“Standards”), which are jointly published by the American Educational Research Association (AERA), APA, and the National Council on Measurement in Education (NCME), have been revised twice since 1978 and are currently undergoing another revision. These Standards are written to address professional and technical issues of test development and use in education, clinical practice, and employment contexts.
2 www.access.gpo.gov/nara/cfr/waisidx_10/29cfr1607_10.html
3 www.eeoc.gov/policy/docs/qanda_clarify_procedures.html
4 www.siop.org/_Principles/principles.pdf
Revisions to these technical guidance documents have been made to
ensure that contemporary selection is based on current scientific research.
Over the last 33 years there have been considerable advances in validation theory, substantial refinements in our understanding of how to best
implement traditional validation strategies, and new evidence related to
the availability and adequacy of modern alternative validation strategies.
Furthermore, the practice of employee assessment has changed dramatically over this timeframe as new technologies have emerged.
We suggest the Guidelines as a high priority for revision because we
believe the regulatory standards should consider contemporary scientific
research and practice. Professional associations like SIOP, APA, AERA,
and NCME have documented these advances in scholarly literature and in
technical authorities like the Principles and Standards. Unfortunately, there
are inconsistencies between the Guidelines and some scholarly literature
related to validation research and the use of employee selection procedures,
and between the Guidelines and other technical authorities. These inconsistencies create substantial ambiguity for employers that use employee
selection procedures, as well as for federal agencies and the courts when
determining whether a selection procedure is job-related. Consideration of
contemporary research and scientifically supported recommendations will
help clarify the standards for valid selection procedures.
The Guidelines themselves anticipated the need to maintain currency and
consistency with other technical authorities. For example, in Section 5(A)
the Guidelines state: “New strategies for showing the validity of selection procedures will be evaluated as they become accepted by the psychological profession.” In Section 5(C), the Guidelines are described as “intended to be consistent with generally accepted professional standards for
evaluating standardized tests and other selection procedures, such as
those described in the Standards for Educational and Psychological Tests
prepared by a joint committee of the American Psychological Association, the American Educational Research Association, and the National Council on Measurement in Education (American Psychological Association, Washington, D.C., 1974) (hereinafter “A.P.A. Standards”) and standard textbooks and journals in the field of personnel selection.” In summary, we feel that a revision to the Guidelines is overdue, and we welcome the opportunity to contribute to the effort.
5 www.apa.org/science/programs/testing/standards.aspx
On behalf of SIOP, it is my sincere hope that timely review of the Guidelines will serve as a focal point for positive dialogue among agencies, private employers, and other stakeholders with expertise in the current science and practice of employee selection. We strongly encourage the
EEOC to include the Uniform Guidelines among the initial regulations to
be reviewed in the Retrospective Analysis.
Should the Commission agree to undertake such a review, SIOP requests the
involvement of experts in our field during the review process. SIOP would
be pleased to identify a group of nationally recognized personnel selection
experts to assist with the review and possible revision process. Please contact SIOP’s Executive Director, Mr. David Nershi or me; we will immediately alert our Board to empanel an appropriate group of such experts.
Sincerely,
Eduardo Salas, PhD
President, Society for Industrial and Organizational Psychology
Next Steps
Of course, the request for input on regulations to be included in the agencies’ retrospective analyses does not convey an obligation to revise any specific regulation. Each agency has 120 days to prepare a preliminary plan for
reviewing existing regulations; these preliminary plans should then be posted for public comment before being finalized. Subsequent retrospective
reviews should also be planned by each agency. SIOP’s public comments will
hopefully increase the likelihood that our members are involved in the
process if the agencies decide to review and/or potentially revise UGESP. It
is interesting to note that we were not alone in our opinion regarding the Uniform Guidelines: the Society for Human Resource Management6 and The Center
for Corporate Equality7 also identified UGESP as a high-priority candidate
for review in their comments to DoL.
It is likely that the identification of specific regulations for review will
require several steps. Even if UGESP is considered for review, the review
itself would likely be a lengthy and detailed process; actually revising the
Guidelines would be even more arduous. However, one potential step toward
revision of the Uniform Guidelines has now been taken.
6 http://dolregs.ideascale.com/a/dtd/SHRM-Response-to-Executive-Order-13563/126225-12911
7 http://dolregs.ideascale.com/a/dtd/Review-the-Uniform-Guidelines-on-Employee-SelectionProcedures/123333-12911
OPM Bringing the Science of Validity
Generalization (VG) to Federal Hiring Reform
James C. Sharf1
Brief History
In the early 1970s, the Lawyers Committee for Civil Rights was successful
in their Title VII challenge to the (then) Civil Service Commission’s (CSC)
Federal Service Entrance Exam (FSEE). FSEE was subsequently replaced by
the Professional and Administrative Career Examination (PACE), which was later abandoned in the
infamous Luevano consent decree signed in the final hour (no exaggeration) of
the Carter Administration. Both FSEE and PACE were measures of general
cognitive ability. Recall also that following the prohibition of “race norming”
(drafted by the author) in the Civil Rights Act of 1991, the Secretary of Labor
suspended use of the General Aptitude Test Battery (GATB), whose race norming and reliance on VG had both been commented upon favorably by the
National Research Council (Hartigan & Wigdor, 1989; DeGeest & Schmidt,
2011). The FSEE, PACE, and GATB were each a measure of general cognitive
ability (GCA). Subsequently, OPM delegated hiring decision-making responsibility to the agencies, and this delegated authority has never been rescinded. In
fiscal year 2009 there were approximately 160,000 new federal hires from
among approximately 11 million applicants. On average across these agencies,
about 1 in 65 applicants was selected, typically on the basis of knowledge, skill, and ability (KSA) ratings derived from the applicant's written essay.
Federal Reform
Last May, the administration issued a hiring reform memorandum, which
was followed by guidance over the signature of the director of the Office of
Personnel Management (OPM, 2009). Under this reform, the online
USAJOBS is to become federally managed, applicants' written essays are
eliminated, and each federal agency is to decide the format an applicant is to
follow (resumé/cover letter/transcripts/agency-specific application/online
occupational KSA questionnaire), but “category ratings” are now required for
assessing candidates. The “rule of three” operational definition of merit has
been tossed by political diktat to be replaced by banding (Schmidt, 1995)
under the guise of "category ratings" (Highhouse, 2008).
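For readers unfamiliar with banding, the sketch below illustrates one common variant, standard-error-of-difference (SED) banding, in which candidates whose scores fall within a reliability-based band below the top score are treated as statistically indistinguishable. The scores, reliability, and confidence level are invented for illustration, and Schmidt (1995), cited above, argues that the logic of such bands is flawed; the code simply shows the mechanics.

import math

def sed_band(scores, reliability, sd, z=1.96):
    """Return the candidates whose scores are statistically indistinguishable from the top score."""
    # Standard error of the difference between two scores, then the band width.
    sed = sd * math.sqrt(2 * (1 - reliability))
    band_width = z * sed
    top = max(scores)
    return sorted((s for s in scores if s >= top - band_width), reverse=True)

# Hypothetical applicant test scores (T-score metric), scale reliability, and SD.
scores = [72, 70, 69, 66, 64, 61, 58, 55]
print(sed_band(scores, reliability=0.90, sd=10))  # band width = 1.96 * 10 * sqrt(0.2), about 8.8 points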
History of OPM’s Scientific Contribution to Hiring Reform
The scientific leading edge of federal hiring reform is OPM’s proposed use
of “shared registers” (today n = 12) based on partnering with cooperating agencies to test applicants for jobs in common using online (Sharf, 2009), unproctored
computer adaptive test (CAT) methodology measuring each candidate's general cognitive ability.
1 [email protected]
Validity evidence for cognitive ability measures used for
employment selection is a significant legacy of OPM, which began in 1977 when
Frank Schmidt (newly hired at CSC) presented his breakthrough meta-analysis
solution to the problems with local validation studies to Uniform Guidelines
negotiators from the four signatory agencies (EEOC, 1978). Schmidt (Schmidt
& Hunter, 1977) presented his peer-reviewed findings that the lack of statistical
power characterizing local small-sample validation studies could be overcome
by aggregating criterion-related validities across studies, thus averaging out the sampling errors of small, less reliable samples. Schmidt and Hunter described their
methodology as “validity generalization,” which provided a significantly
improved level of statistical accuracy sufficient to make inferences about an individual applicant’s future productivity. Notwithstanding the peer-reviewed paradigm shift of Schmidt and Hunter’s contribution to industrial psychology (Murphy, 2003), the Uniform Guidelines negotiators (including the author) followed
the early 1970s conventional wisdom of “situational specificity” and “single
group validity.” Under Schmidt’s leadership in the decade following publication
of the Uniform Guidelines, OPM established the empirical foundation of
research contributing to the science of validity generalization (Corts, Muldrow,
& Outerbridge, 1977; Hirsh, Northrup, & Schmidt, 1986; Hunter, 1981; Hunter
& Hirsh, 1987; Hunter & Hunter, 1984; Hunter & Schmidt, 1983; Hunter,
Schmidt, & Hunter, 1979; Lilienthal & Pearlman, 1983; McDaniel, 1985;
McKillip, Trattner, Corts, & Wing, 1977; McKillip & Wing, 1980; Northrup,
1979, 1980, 1986; Payne & Van Rijn, 1978; Pearlman, 1979; Pearlman, Schmidt,
& Hunter, 1980; Schmidt, Hunter, McKenzie, & Muldrow, 1979; Schmidt,
Hunter, Outerbridge, & Goff, 1988; Schmidt, Hunter, Outerbridge, & Trattner,
1986; Schmidt, Hunter, Pearlman, & Hirsh, 1985; Trattner, 1985).
VG and the Uniform Guidelines
The Uniform Guidelines have not been revised since 1978, but the APA
Standards and the SIOP Principles have together been updated five times, reflecting
the cumulative knowledge based on empirical personnel selection and appraisal research (Sharf, 2006). Ten years after the Uniform Guidelines, VG as a personnel research strategy was commented upon favorably by the National
Research Council of the National Academy of Sciences (Hartigan & Wigdor,
1989): “We accept the general thesis of validity generalization, that the results
of validity studies can be generalized to many jobs not actually studied, but we
urge a cautious approach of generalizing validities only to appropriately similar jobs.” Contemporary SIOP Principles (2003) endorse VG as follows:
At times, sufficient accumulated validity evidence is available for a selection procedure to justify its use in a new situation without conducting a
local validation research study. (p. 27)
Meta-analysis…can be used to determine the degree to which predictor-criterion relationships are…generalizable to other situations…. Meta-analysis
requires the accumulation of findings from a number of validity studies to
determine the best estimates of the predictor-criterion relationship for the
kinds of work domains and settings included in the studies…. (p. 28)
Meta-analysis is the basis for the technique that is often referred to as “validity generalization.” In general, research has shown much of the variation in
observed differences in obtained validity coefficients in different situations
can be attributed to sampling error and other statistical artifacts…. These
findings are particularly well-established for cognitive ability tests…. (p. 28)
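As an illustration of the arithmetic behind validity generalization, the sketch below runs a bare-bones meta-analysis in the Schmidt–Hunter spirit: validity coefficients from several hypothetical local studies are pooled with sample-size weights, and the variance expected from sampling error alone is compared with the observed variance across studies. The study values are invented, and the full procedure also corrects for artifacts such as range restriction and criterion unreliability, which are omitted here.

# Hypothetical local validation studies: (sample size N, observed validity r).
studies = [(68, 0.05), (45, 0.44), (120, 0.26), (80, 0.10), (95, 0.38)]

n_total = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / n_total                    # N-weighted mean validity
var_observed = sum(n * (r - r_bar) ** 2 for n, r in studies) / n_total
n_mean = n_total / len(studies)
var_sampling = (1 - r_bar ** 2) ** 2 / (n_mean - 1)                 # variance expected from sampling error
var_residual = max(var_observed - var_sampling, 0.0)

print(f"N-weighted mean validity:          {r_bar:.3f}")
print(f"Observed variance of validities:   {var_observed:.4f}")
print(f"Variance expected from sampling:   {var_sampling:.4f}")
print(f"Share attributable to sampling:    {var_sampling / var_observed:.0%}")
print(f"Residual ('true') variance:        {var_residual:.4f}")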
Today, validity can be defended without the need for a local validity study
based on meta-analytic validity generalization research anchored using the
O*NET to document the verbal, quantitative, and technical/problem-solving
tasks, skills, and abilities required on the job (Sharf, 2010). Notwithstanding
more than 3 decades of peer-reviewed research and endorsement of validity generalization by the National Research Council (National Academy of Science,
1982), enforcement agencies in the public (DoJ) and private sectors (EEOC and
OFCCP) continue to apply the Uniform Guidelines literally. In a 2007 Commissioners’ meeting on the general topic of employment testing, EEOC was urged
by various panelists to update the Uniform Guidelines. As former Commissioner Fred Alvarez (2007) noted: “employers cannot technically comply with the
standards set forth in the Guidelines in their current, obsolete form because
industrial psychologists will not likely be persuaded to abandon state-of-the-art
validation in favor of a decades-old methodology presented in the Guidelines.”
In the 32 years since the Uniform Guidelines were adopted, the federal
courts have found in favor of generalizing validity evidence for cognitive
ability tests without having to conduct local validation studies or investigations of single-group validity (see the Cases of Note section).
As early as 1988, the author (along with Dick Jeanneret) was successful in presenting validity generalization evidence to justify the use of a cognitive ability test, and a presentation of validity generalization evidence (along with Jack Hunter) was affirmed by the Fifth Circuit in 1989 (Bernard v. Gulf Oil Corp., 1989). Thus, it is entirely responsible to conclude that the
validity generalization of cognitive ability tests—OPM’s scientific leading
edge of federal reform—has been professionally embraced, endorsed by no
less than the National Research Council, and has been upheld in numerous
district courts as well as by the Fifth Circuit. Because VG is the general rebuttal to a disparate impact claim of discrimination,2 STAY TUNED!
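Because the article closes on VG as a rebuttal to a disparate impact claim, it may help to recall how such a claim is typically established in the first place: under the Uniform Guidelines' four-fifths (80%) rule, the selection rate of the focal group is compared with that of the most favored group. The sketch below computes that ratio for hypothetical applicant-flow figures.

def adverse_impact_ratio(hired_focal, applied_focal, hired_reference, applied_reference):
    """Ratio of the focal group's selection rate to the reference group's selection rate."""
    return (hired_focal / applied_focal) / (hired_reference / applied_reference)

# Hypothetical applicant-flow figures.
ratio = adverse_impact_ratio(hired_focal=12, applied_focal=100,
                             hired_reference=30, applied_reference=150)
print(f"Impact ratio = {ratio:.2f}")                 # 0.60 for these figures
print("Below the four-fifths threshold:", ratio < 0.8)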
References
Alvarez, F. W. (2007, May 16). Remarks. Presented at the U.S. Equal Employment Opportunity Commission Meeting on Employment Testing and Screening. Retrieved from
http://eeoc.gov/eeoc/meetings/archive/5-16-07/alvarez.html.
2 Letter from David Copus, Esq., to Charles James, Director, Office of Federal Contract Compliance Programs, U.S. Department of Labor (March 27, 2006; http://www.ogletreedeakins.com/
uploads/James%20FINAL%‌20letter1.pdf; available from author: [email protected] ).
The Industrial-Organizational Psychologist
45
Corts, D. B., Muldrow, T. W., & Outerbridge, A. M. (1977). Research base for the written
test portion of the Professional and Administrative Career Examination (PACE): Prediction of
job performance for customs inspectors. Washington, DC: U.S. Office of Personnel Management, Personnel Research and Development Center.
EEOC. (1978). Uniform guidelines on employee selection procedures, 43 Fed. Reg. 38290 (1978). Adopted jointly with the Department of Justice, the Department of Labor, and the Civil Service Commission (now OPM).
DeGeest, D., & Schmidt, F. (2011). The impact of research synthesis methods on industrial-organizational psychology: The road from pessimism to optimism about cumulative knowledge. Research Synthesis Methods, 3-4, 185–197.
Hartigan, J. A., & Wigdor, A. K. (Eds.). (1989). Fairness in employment testing: Validity generalization,
minority issues, and the General Aptitude Test Battery. Washington, DC: National Academy Press.
Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1(3), 333–342.
Hirsh, H. R., Northrup, L. C., & Schmidt, F. L. (1986). Validity generalization results for
law enforcement occupations. Personnel Psychology, 39, 399–420.
Hunter, J. E. (1981). The economic benefits of personnel selection using ability tests: A
state-of-the-art review including a detailed analysis of the dollar benefit of U.S. Employment
Service placements and a critique of the low-cutoff method of test use. Washington, DC: U.S.
Employment Service.
Hunter, J. E., & Hirsh, H. R. (1987). Applications of meta-analysis. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 2). New York, NY: Wiley.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternate predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., & Schmidt, F. L. (1983). Quantifying the effects of psychological interventions
on employee job performance and work force productivity. American Psychologist, 38, 473–478.
Hunter, J. E., Schmidt, F.L., & Hunter, R. F. (1979). Differential validity of employment
tests by race. Psychological Bulletin, 31, 215–232.
Lilienthal, R. A., & Pearlman, K. P. (1983). The validity of Federal selection tests of
aide/technicians in the health, science, and engineering fields. (ORPD-83-1), Washington, DC:
U.S. Office of Personnel Management, Office of Personnel Research and Development.
McDaniel, M. A. (1985). The evaluation of a causal model of job performance: The interrelationships of general mental ability, job experience, and job performance. Dissertation,
George Washington University.
McKillip, R. H., Trattner, M. H., Corts, D. B., & Wing, H. (1977). The professional and
administrative career examination: Research and development. Washington, DC: U.S. Office of
Personnel Management.
McKillip, R. H., & Wing, H. (1980). Application of a construct model in assessment for
employment. In construct validity in psychological measurement. Princeton, NJ: Educational
Testing Service.
Murphy, K. (2003). Validity generalization: A critical review. Mahwah, NJ: Erlbaum.
National Academy of Science (1982). Ability testing: Uses, consequences, and controversies. Washington, DC: National Academy Press.
Northrup, L. (1979). Psychometric support for test item type linkages to six ability constructs measured in the entrance examination for the D.C. fire department. Washington, DC:
U.S. Office of Personnel Management, Personnel Research and Development Center.
Northrup, L. C. (1980). Documentation of the constructs used in the Test 500 (PACE). Washington, DC: U.S. Office of Personnel Management.
Northrup, L. C. (1986). Validity generalization results for apprentice and helper-trainer
positions. Washington, DC: U.S. Office of Personnel Management, Office of Staffing Policy.
Office of Personnel Management. (2009). Hiring reform requirements: Elimination of written essays (KSAs). Retrieved from http://www.opm.gov/HiringReform/HiringReformRequirements/EliminationofKSA/index.aspx
Payne, S. S. & Van Rijn, P. (1978). Development of a written test of cognitive abilities for
entry into the DC fire department: The task–ability–test linkage procedure. Washington, DC:
U.S. Office of Personnel Management, Personnel Research and Development Center.
Pearlman, K. (1979). The validity of tests used to select clerical personnel: A comprehensive summary and evaluation. Washington, DC: U.S. Office of Personnel Management, Personnel Research and Development Center.
Pearlman, K., Schmidt, F. L., & Hunter, J. E. (1980). Validity generalization results for tests
used to predict training success and job proficiency in clerical occupations. Journal of Applied
Psychology, 65, 373–406.
Schmidt, F. L. (1995). Why all banding procedures in personnel selection are logically
flawed. Human Performance, 8(3), 165–177.
Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem of
validity generalization. Journal of Applied Psychology, 62, 529–540.
Schmidt, F. L., Hunter, J. E., McKenzie, R., & Muldrow, T. (1979). The impact of valid selection procedures on workforce productivity. Journal of Applied Psychology, 64, 609–626.
Schmidt, F. L., Hunter, J. E., Outerbridge, A. N., & Goff, S. (1988). Joint relation of experience
and ability to job performance: Test of three hypotheses. Journal of Applied Psychology, 73, 46–57.
Schmidt, F. L., Hunter, J. E., Outerbridge, A. N., & Trattner, M. H. (1986). The economic
impact of job selection methods on the size, productivity, and payroll costs of the federal workforce: An empirical demonstration. Personnel Psychology, 39, 1–29.
Schmidt, F. L., Hunter, J. E., Pearlman, K., & Hirsh, H. R. (1985). Forty questions about
validity generalization and meta-analysis. Personnel Psychology, 38, 697–798.
Sharf, J. C. (2006, September). The maturation of validity generalization (VG) in defending
ability assessment. Presented at the 33rd International Congress on Assessment Center Methods,
London, England. Retrieved from http://www.assessmentcenters.org/pdf/IC2006_Sharf_ho.pdf.pdf
Sharf, J. (2009). Unproctored internet testing: faster, better, cheaper…choose two. Presented
at European Association of Test Publishers, Brussels. (available from author: [email protected])
Sharf, J. (2010). O*NET anchors validity generalization defense of employment ability testing. Presented at the European Association of Test Publishers, Barcelona. (available from author:
[email protected]).
Society for Industrial and Organizational Psychology, Inc. (2003). Principles for the validation and use of personnel selection procedures (4th ed.). Bowling Green, OH: Author.
Trattner, M.H. (1985). The validity of aptitude and ability tests for semiprofessional occupations using the Schmidt-Hunter interactive validity generalization procedures. (OSP-85-3),
Washington, D.C.: U.S. Office of Personnel Management, Office of Staffing Policy.
Cases of Note
Agulera v. Cook County Police & Corrections Merit Board 582 F.Supp. 1053, 1057 (N.D.
Ill. 1984), aff’d 760 F.2d 844, 847-48 (7th Cir. 1985), cited with approval in Davis v. City of Dallas, 777 F.2d 205, 212-13, n.6 (5th Cir. 1985), cert. denied, 476 U.S. 1116, 106 S.Ct. 1972, 90
L.Ed.2d 656 (1986).
Bernard v. Gulf Oil Corp. 890 F.2d 735 (5th Cir. 1989).
Bruckner v. Goodyear Tire and Rubber Co., 339 F.Supp. 1108 (N.D. Ala. 1972), aff’d per curiam, 476 F.2d 1287 (5th Cir. 1973).
Brunet v. City of Columbus 642 F.Supp. 1214 (S.D. Ohio 1986).
Friend v. City of Richmond, 588 F.2d 61 (1978).
Pegues v. Mississippi State Employment Service, 488 F.Supp. 239 (N.D. Miss. 1980), aff’d in part and rev’d in part, 699 F.2d 760 (5th Cir.), cert. denied, 464 U.S. 991 (1983).
Rivera v. City of Wichita Falls 665 F.2d 531, 538 at n.10 (5th Cir. 1982).
Taylor v. James River Corp., 51 FEP Cases 893 (1988).
Watson v. Fort Worth Bank and Trust, 108 S.Ct. 2777 (1988).