From the Abstract of the paper "The best and the rest: revisiting the norm of normality of individual performance" by Ernest O'Boyle Jr. and Herman Aguinis:

We revisit a long-held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution. We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes. Results are remarkably consistent across industries, types of jobs, types of performance measures, and time frames and indicate that individual performance is not normally distributed—instead, it follows a Paretian (power law) distribution. Assuming normality of individual performance can lead to misspecified theories and misleading practices. Thus, our results have implications for all theories and applications that directly or indirectly address the performance of individual workers including performance measurement and management, utility analysis in preemployment testing and training and development, personnel selection, leadership, and the prediction of performance, among others.

I am fully occupied marking exams just now and am not able to look into the paper carefully.

Some thoughts, though.

The general thesis of non-normality is fine, but it is about as novel as claiming that the Earth orbits the Sun.

Calibrated measures, such as IQ and many psychological tests, are almost by definition normal in the population for which they are developed. There are underlying assumptions, of course, but the point is that they are tailored to be so. Similarly, exam performance can be made to look normal by an appropriate choice of the set of questions and the allocation of marks.

Looking at Study 1, for example, it is ridiculous to talk about a normal distribution. Firstly, before even looking at the data we know that they are non-negative counts, generally with small values. The methodology of selecting "leading" journals makes the numbers even smaller. I should think more before criticising, but a selective procedure like this invalidates the basic assumptions behind a normal approximation.

The histograms shown towards the end of Study 1 clearly show this; they should have been put at the beginning of the study. The mean and the standard deviation are almost meaningless for this kind of data (counts, with the bulk of the mass at low values). If anything, the histograms suggest starting with an exponential or Gamma distribution and discarding the normal outright. Pareto is fine as well.

Also, using a chi-squared test to evaluate the quality of the fit is primitive.

By the way, the fact that Pareto fits better than the normal does not show that it is any good: any of the distributions mentioned above will fit better than the normal.
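To illustrate the point with a sketch (synthetic data, not the paper's): fit several candidate distributions by maximum likelihood and compare log-likelihoods. On heavily right-skewed data of this shape, the normal loses to essentially any of the skewed families, so "beats the normal" is a very low bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic right-skewed "performance" data (illustrative only, not Study 1's data)
data = stats.lomax.rvs(c=2.0, size=1000, random_state=rng) + 0.01

# Fit each candidate by maximum likelihood (location pinned at 0 for the
# non-negative families) and compare log-likelihoods.
fits = {
    "normal": (stats.norm, stats.norm.fit(data)),
    "exponential": (stats.expon, stats.expon.fit(data, floc=0)),
    "gamma": (stats.gamma, stats.gamma.fit(data, floc=0)),
}
loglik = {name: np.sum(dist.logpdf(data, *p)) for name, (dist, p) in fits.items()}

# On data like this, the normal comes last.
print(sorted(loglik, key=loglik.get, reverse=True))
```

Nothing here ranks exponential against Gamma against Pareto in general; the point is only that all of them clear the normal easily.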

As it happens, I recently marked a second-year Assignment for Practical Statistics, where students do this kind of thing (fitting exponential distributions, evaluating the fit). They would not have got good marks for using a chi-squared test. Q-Q plots and Kolmogorov-Smirnov-type tests are much better.

Students certainly do not get good marks if they simply fit a distribution and do not evaluate the quality of the fit.
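A minimal sketch of the kind of check I mean, using scipy on synthetic exponential data (one caveat: estimating the parameters from the same sample biases the KS p-value, which a Lilliefors correction or a parametric bootstrap would fix; the statistic itself is still a useful distance):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.expon.rvs(scale=2.0, size=500, random_state=rng)

# Kolmogorov-Smirnov distance between the empirical CDF and each fitted CDF.
loc, scale = stats.expon.fit(data, floc=0)
ks_exp = stats.kstest(data, stats.expon(loc=loc, scale=scale).cdf)

mu, sigma = stats.norm.fit(data)
ks_norm = stats.kstest(data, stats.norm(mu, sigma).cdf)

# The exponential fit sits much closer to the empirical CDF than the normal fit.
print(ks_exp.statistic, ks_norm.statistic)

# A Q-Q plot of the same comparison (needs matplotlib):
#   stats.probplot(data, dist=stats.expon, sparams=(loc, scale), plot=plt.gca())
```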

Call this the Chi-Squared Problem.

We have submitted a paper using chi-squared, and it has been rejected for the reasons stated above.

We do the following. No knowledge of statistics is required.

Step One

Wikipedia tells us chi-squared is about a sum of squares of something that is supposed to be zero in some way. (Expected, actual: it doesn't matter whether you understand the difference.)

http://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
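Concretely, with a toy example of my own (not from the Wikipedia page), the "something that is supposed to be zero" is the gap between observed and expected counts:

```python
import numpy as np
from scipy import stats

# Toy die-roll counts tested against a uniform ("fair die") null
observed = np.array([30, 14, 34, 45, 57, 20])
expected = np.full(6, observed.sum() / 6)

# Pearson's chi-squared: squared gaps between observed and expected counts,
# each scaled by the expected count, summed over the cells
chi2_stat = np.sum((observed - expected) ** 2 / expected)
p = stats.chi2.sf(chi2_stat, df=len(observed) - 1)

# scipy's one-liner gives the same statistic
assert np.isclose(chi2_stat, stats.chisquare(observed).statistic)
print(chi2_stat, p)
```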

Step Two

We look up the Generalized Method of Moments:

http://en.wikipedia.org/wiki/Generalized_method_of_moments

There is a function g(Y, θ) that is supposed to have an expectation of zero. Y and θ can be vectors or whatever.

This is put into some big expression that involves a sum of squares and is called J.

J has an asymptotic chi-squared distribution.
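For the record, here is roughly what J looks like in the simplest overidentified case (my own toy sketch: exponential data, two moment conditions for one parameter, so J is asymptotically chi-squared with one degree of freedom):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
y = rng.exponential(scale=2.0, size=2000)  # true parameter theta = 2
n = len(y)

# Two moment conditions that are zero in expectation at the true theta:
# E[y - theta] = 0 and E[y^2 - 2*theta^2] = 0  (2 moments, 1 parameter)
def moments(theta):
    return np.column_stack([y - theta, y**2 - 2 * theta**2])

def J(theta, W):
    m = moments(theta).mean(axis=0)
    return n * m @ W @ m  # the "big expression that involves a sum of squares"

# One-step GMM with identity weighting, then efficient weighting W = S^{-1}
theta1 = optimize.minimize_scalar(lambda t: J(t, np.eye(2)),
                                  bounds=(0.1, 10), method="bounded").x
S = np.cov(moments(theta1), rowvar=False)
res = optimize.minimize_scalar(lambda t: J(t, np.linalg.inv(S)),
                               bounds=(0.1, 10), method="bounded")
theta2, Jstat = res.x, res.fun

# Under the null, Jstat is asymptotically chi-squared with 2 - 1 = 1 df
p = stats.chi2.sf(Jstat, df=1)
print(theta2, Jstat, p)
```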

Step Three

Rewrite phrases like "using a chi-squared test" to "using GMM".

Step Four

Copy some phrases about consistency, asymptotic distributions, and so on from another paper.

Done.

It does not matter that you don't understand GMM. All that matters is that almost no one really does.

When GMM came out in the 1980s at the University of Minnesota, people in econ and business at U Chicago and U Minn started using simple statistics and justifying them as GMM. Before that they had to perform elaborate residual-analysis rituals.

Lars Peter Hansen, the person who saved us from doing residual analysis or understanding what we were doing:

http://home.uchicago.edu/~lhansen/

I audited his class at Chicago in the 1980s. I did not understand GMM then and am proud to say I still don’t. But the above procedure works in academia and financial services and probably if you have to testify in court as an expert witness. (I will disavow this comment if I am ever in that situation.)

GMM is still not on the Cambridge Tripos exams I looked at:

http://www.maths.cam.ac.uk/postgrad/mathiii/pastpapers/2011/index.html

But it is on the U Minn Econ prelim exam.

http://www.econ.umn.edu/graduate/prelim_archive.html

Look at the Econometrics exam for Fall 2011 (fall_11_metrics.pdf).

The first and last problems involve GMM.
