The General Factor of Intelligence: How General Is It?


To add an air of verisimilitude, I topped each test score off with a little extra noise (mean zero, small standard deviation relative to the scores).


Once again, let me emphasize that every ability contributing to the test scores is completely independent of every other, and none of them is preponderant on any of the tests, much less all of them. When I do a factor analysis as before, I find that a single made-up factor, call it g, describes nearly half of the variance, and every one of the eleven variables loads substantially and positively on it. As they used to say: this is no coincidence, comrades! Moreover, if I do a standard test of whether this pattern of correlations is adequately explained by a single factor, the data pass with flying colors. All of which is, by construction, a complete artifact.

Once again, this isn't a fluke. Repeating the simulation from scratch, i.e., with freshly drawn abilities, test compositions and noise, gives the same picture every time. You can use my code for this simulation (or the sketch just below) to play around with what happens as you vary the number of tests and the number of abilities. Now, I don't mean to offer this model of thousands of IID abilities adding up as a serious depiction of how thought works, or even of how intelligence test scores work.
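If you'd rather not download anything, a minimal sketch of the ability-sampling setup might look like this in Python. The sizes here (5,000 people, 2,000 abilities, 11 tests, each test tapping a random half of the abilities) and the noise level are illustrative choices of mine, not the exact values used above:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_abilities, n_tests = 5000, 2000, 11
per_test = n_abilities // 2          # each test taps a random half of the pool

# Every ability is independent of every other; none dominates anything.
abilities = rng.normal(size=(n_people, n_abilities))

scores = np.empty((n_people, n_tests))
for j in range(n_tests):
    sampled = rng.choice(n_abilities, size=per_test, replace=False)
    scores[:, j] = abilities[:, sampled].sum(axis=1)
# a little extra noise for verisimilitude
scores += rng.normal(scale=0.1 * scores.std(), size=scores.shape)

# All pairwise correlations come out positive (around 0.5, since any two
# tests share about half their abilities), and the leading eigenvector of
# the correlation matrix -- the "general factor" -- soaks up a large share
# of the variance, despite there being no common cause at all.
R = np.corrcoef(scores, rowvar=False)
eigs = np.linalg.eigvalsh(R)[::-1]
print("variance described by the first factor:", eigs[0] / n_tests)
```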

My point, like Thomson's, is to show you that the signs which the g-mongers point to as evidence for its reality, for there having to be a single predominant common cause, actually indicate nothing of the kind. Thomson's model does this in a particularly extreme way, where those signs are generated entirely through the imprecision of our measurements.



There are other models — for instance, the "dynamical mutualism" model of van der Maas et al., in which initially independent abilities come to reinforce one another during development — which produce the same positive correlations, and the same good fit to a single factor, without any general factor existing at all. This should surprise no one who's even casually familiar with distributed systems or self-organization. Those supposed signs of a real general factor are thus completely uninformative as to the causes of performance on intelligence tests.

Heritability is irrelevant

Someone will object that g is highly heritable, and say that this couldn't be true if it were merely an artifact.

But this also has no force: Thomson's model can easily be extended to give the appearance of heritability, too. Having spent far too long, in a previous post, covering what heritability is, why estimating the heritability of IQ is difficult to meaningless, and why it tells us nothing about how malleable IQ is, I won't re-traverse that ground here.

Determining the heritability of an unobserved variable like g raises a whole extra set of problems — there is a reason you see so many more estimates of the heritability of IQ than of g — though if you want to define "general intelligence" as a certain weighted sum of test scores, that is at least operationally measurable. Suppose that, mirabile dictu, all the problems are solved and we learn the heritability of g, and it's about the same as the best estimate of the narrow-sense heritability of IQ, which is around 0.34.

Does it make sense to go from "g is heritable" to "g is real and important"? I have to say that I find it an extraordinarily silly inference, and I'm astonished that anyone who understands how to calculate a heritability has ever thought otherwise. Height, in developed countries, has a heritability around 0.8. Blood triglyceride levels have a heritability of about 0.5. Thus the sum of height and triglycerides is heritable.

How heritable will depend on the correlations between the additive components of height and those of triglycerides; assuming, for simplicity, that there aren't any, the heritability of their sum will be anywhere from 0.5 to 0.8, depending on how the two traits' variances compare. The fact that this trait is heritable doesn't make it any less meaningless. It'd still be embarrassing for the Thomson model if it couldn't produce the appearance of heritability, since after all no one is saying that the measured or even real heritability of IQ is always and exactly zero.
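To make the height-and-triglycerides arithmetic explicit — this step is my addition, and it leans on the independence assumption just stated — write $\sigma^2_X$ and $\sigma^2_Y$ for the two traits' variances. The narrow-sense heritability of the sum is then just a variance-weighted average of the two heritabilities,

\[
h^2_{X+Y} \;=\; \frac{h^2_X\,\sigma^2_X + h^2_Y\,\sigma^2_Y}{\sigma^2_X + \sigma^2_Y},
\]

which must land between the smaller and the larger of the two — between 0.5 and 0.8 here — wherever the variance weights happen to fall.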

But that's very easy, and the logic is the same as for combining height and triglycerides. Assume, as in classical biometric models, that the strength of each ability for each person is then the sum of three components, one purely genetic and additive across genes, one purely genetic and associated with gene interactions, and one purely environmental, and that these are perfectly independent of each other.

Say that the strict-sense heritability of each ability — the ratio of the additive genetic variance to the total variance in the ability — is 0.5. The test scores, being linear combinations of abilities plus noise, will also be heritable. The g found by factor analysis, being a linear combination of the test scores, is itself a linear combination of the abilities and noise, and so, in turn, heritable.

If they are uncorrelated, then the heritability of the test scores will be slightly less than 0.5, because of the added measurement noise. If the environmental contributions to different abilities are positively correlated, the total environmental variance in the test scores will be larger, so their heritability will be lower. Since, to repeat, the meta-analysis of Devlin, Daniels and Roeder puts the heritability of IQ at around 0.34, this is, if anything, more than enough. In a sentence: Thomson's ability-sampling model not only creates the illusion of a general factor of intelligence where none exists, it can also make this illusory factor look heritable.
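A quick numerical check of that claim. The variance components and the noise level below are illustrative assumptions of mine, chosen only to match the h² = 0.5 setup above:

```python
import numpy as np

# Each ability's variance splits into additive-genetic, non-additive-genetic
# and environmental parts; Va / (Va + Vna + Ve) = 0.5 gives every ability a
# strict-sense heritability of 0.5, as assumed in the text.
Va, Vna, Ve = 0.5, 0.2, 0.3
noise_var = 5.0                     # measurement noise added to the test score

rng = np.random.default_rng(0)
n_abilities = 2000
w = rng.binomial(1, 0.5, size=n_abilities)   # which abilities this test taps

# For a sum of independent abilities, variances just add, so the score's
# narrow-sense heritability is its additive variance over its total variance:
h2_score = (w * Va).sum() / ((w * (Va + Vna + Ve)).sum() + noise_var)
print(h2_score)                     # slightly below 0.5, because of the noise
```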

What has the factorial analysis of human abilities ever done for us?

It might be the case that, while exploratory factor analysis isn't a generally reliable tool for causal inference, for some reason it happens to work in psychological testing. To believe this, I would want to see many cases where it had at least contributed to important discoveries about mental structure which had some other grounds of support. These are scarce. The five-factor theory of personality, as I mentioned above, is probably the best candidate, and it fails confirmatory factor analysis tests.

As Clark Glymour points out, lesion studies in neuropsychology have uncovered a huge array of correlations among cognitive abilities, many of them very specific, none of which factor analyses predicted, or even hinted at. Similarly, congenital defects of cognition, like Williams's Syndrome, drive home the point that thought is a biological process with a genetic basis (if that needs driving).

But Williams's Syndrome is simply not the kind of thing anyone would have expected from factor analysis, and for that matter a place where the IQ score, while not worthless, is not much help in understanding what's going on.


Stepping back a bit, the lack of success of factor analysis in psychology is actually surprising, because of the circularity in how psychological tests have come to be designed. The psychologists start with some traits or phenomena, which seem somehow similar to them, to exhibit a common quality, be it "intelligence" or "neuroticism" or "authoritarianism" or what-have-you. The psychologists make up some tests where a high score seems, to intuition, to go with a high degree of the quality. They will even draw up several such tests, and show that they are all correlated, and extract a common factor from those correlations.

So far, so good; or at least, so far, so non-circular. This test or battery of tests might be good for something. But now new tests are validated by showing that they are highly correlated with the common factor, and the validity of g is confirmed by pointing to how well intelligence tests correlate with one another and how much of the inter-test correlations g accounts for.

That is, to the extent construct validity is worried about at all, which, as Borsboom explains, is not as much as it should be. There are better ideas about validity, but they drive us back to problems of causal inference. By this point, I'd guess it's impossible for something to become accepted as an "intelligence test" if it doesn't correlate well with the Wechsler and its kin, no matter how much intelligence, in the ordinary sense, it requires; but, as we saw with the first simulated factor analysis example, that makes it inevitable that the leading factor fits well.

I don't want to be misunderstood as being on some positivist-behaviorist crusade against inferences to latent mental variables or structures. As I said, my deepest research interest is, exactly, how to reconstruct hidden causal structures from data. Furthermore, I think it's pretty plain that psychologists have found compelling evidence for many kinds of latent mental structure. For instance, I defy anyone to explain the experimental results on mental rotation without positing mental representations which act in very specific ways.

But exploratory factor analysis is not a solution to this problem.

Doing without g

The end result of the self-confirming circle of test construction is a peculiar beast. To the extent g correlates with anything from actual cognitive psychology, it's working memory capacity (see this, and especially the conclusion). If we want to understand the mechanisms of intelligent thought, how they are implemented biologically, and how they grow and flourish or fail to do so, I cannot see how this helps at all. Of course, if g was the only way of accounting for the phenomena observed in psychological tests, then, despite all these problems, it would have some claim on us.

But of course it isn't. My playing around with Thomson's ability-sampling model has taken, all told, about a day, and gotten me at least into back-of-the-envelope, Fermi-problem range. In fact, the biggest problem with Thomson's model is that the appearance of g is too strong: it easily passes tests for there being only a single factor, when real intelligence tests, such as the Wechsler, all fail them.


If it wasn't a distraction from my real work, I'd look into whether weakening the assumption that tests are completely independent, uniform samples from the pool of shared abilities couldn't produce something more realistic. In particular, I'd try self-reinforcing urn schemes (sketched below). If we must argue about the mind in terms of early-twentieth-century psychometric models, I'd suggest that Thomson's is a lot closer than the factor-analytical ones to what's suggested by the evidence from cognitive psychology, neuropsychology, functional brain imaging, general evolutionary considerations and, yes, evolutionary psychology (which I think well of, when it's done right): that there are lots of mental modules, which are highly specialized in their information-processing, and that almost any meaningful task calls on many of them, their pattern of interaction shifting from task to task.
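For concreteness, the reinforcement I have in mind looks something like this — a Pólya-urn-flavoured sampling rule, where every number and name is hypothetical and serves only to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
n_abilities, n_tests, per_test = 2000, 11, 1000
weights = np.ones(n_abilities)       # the urn starts with one ball per ability

tests = []
for _ in range(n_tests):
    p = weights / weights.sum()
    # abilities already tapped by earlier tests are likelier to be re-used
    picked = rng.choice(n_abilities, size=per_test, replace=False, p=p)
    tests.append(picked)
    weights[picked] += 1.0           # self-reinforcement
```

The overlap between tests then varies systematically rather than uniformly, which should roughen up the too-perfect single-factor fit.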

All of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by a given age. This predictive ability is vastly weaker than many people would lead you to believe. This would still be true if I introduced a broader mens sana in corpore sano score, which combined IQ tests, physical fitness tests, and (to really return to the classical roots of Western civilization) rated hot-or-not sexiness.

Indeed, since all these things predict success in life of one form or another, and are all more or less positively correlated, I would guess that MSICS scores would do an even better job than IQ scores. I could even attribute them all to a single factor, a, for arete, and start treating it as a real causal variable.


By that point, however, I'd be doing something so obviously dumb that I'd be accused of unfair parody and arguing against caricatures and straw-men. If, after looking at your watch, you say that it's 12 o'clock, and I point out that your watch has stopped at 12, I am not saying that it's not 12 o'clock, just that your watch doesn't actually give you any evidence about the time. Similarly, pointing out that factor analysis and related techniques are unreliable guides to causal structure does not establish the non-existence of a one-dimensional latent variable driving the success of almost all human mental performance.

It's possible that there is such a thing. But the major supposed evidence for it is irrelevant, and it accords very badly with what we actually know about the functioning of the brain and the mind.

The refrigerator-mother of methodology

I am not sure what the oddest aspect of this situation is, because there are so many. It may be a statistician's bias, but the things I keep dwelling on are the failures of methodology, which are not, alas, confined to all-correlations-all-the-time psychologists, but also seen in the right (that is, wrong) sort of labor-market sociologist, economists who regress countries' growth rates on government policies, etc.

As the late sociologist Aage Sørensen used to complain, in work of this style the statistical models come to substitute for substantive theory rather than express it. A more charitable view would be that these researchers are piling up descriptions, and hoping that someone will come along, any decade now, with explanations. Many psychometric and econometric theorists know much better, but they seem to have little influence on practice. To paraphrase Hume: when we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any paper — of macroeconomics or correlational psychology, for instance — let us ask, Does it draw its causal inferences from observations with consistent methods? No.

Does it draw its causal inferences from experiments, controlled or randomized? No. Commit it then to the recycling bin: for it can contain nothing but sophistry and illusion.

If I want quick summaries of my data, then means, variances and correlations are reasonable things to use, especially if all the distributions are close to Gaussian.

If I want to do serious analyses, I need to start comparing distributions, and it's not as if there aren't methods to do this. If I want to do data mining, then sticking to easily-manipulated linear models makes lots of sense; if I want to find causal relationships, at the very least I should test for nonlinearities (which hardly anyone ever seems to do in the IQ field), or, better yet, turn to non-parametric estimates.

If there are lots of positive correlations and I want to summarize them, then finding some factors and checking them by decomposing the variance is one reasonable trick. If I want to argue that there must be a preponderant common cause, it's no good to keep pointing out how much of the variance that first factor describes, when plenty of other, incompatible causal structures will give me that too.

There is a name for this mode of reasoning. An intelligent response to this criticism would be to look for other aspects of the data (including things other than correlation coefficients), or maybe even new experiments, which could tell apart different causal structures. The fact that, a century after Spearman, everyone is still just manipulating the correlation matrix shows the lack of such intelligence. I have deliberately tried to avoid, here, the issues which make the argument about g and IQ so much more heated than ones about, say, labor-market sociology.

But those issues do exist, and are heated, and so you might think that they would drive people to use better methods which could help settle the questions. This doesn't seem to happen. Some examples: If you insist on looking at differences in IQ scores between social groups, and doing so without trying real causal inference, it is still mystifying to me why you would, at this late date, stick to comparing means, variances and correlations.

It would be vastly more informative to look at the whole relative distribution. This, in turn, can be adjusted for the relative distribution of covariates, in much more flexible and powerful ways than ordinary regression allows. The math should not be beyond anyone who understands what a distribution function is.

The propensity-score-matching method of estimating causal effects, due to Don Rubin and co-workers, can be adapted to the meanest understanding, but I can't find anyone who's done the obvious study of using it to estimate the difference in IQ between blacks and whites of similar education, health, economic status, etc.


If you know of such a study, tell me. This would in no way tell us whether the gap (if there really is one) was genetic, but it would tell us how big a mean difference we're looking at, in a way which regression simply can't.
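To show just how little machinery the missing study would need, here is a bare-bones sketch of the matching step. The data frame, its column names ("group", "iq", and the covariates), and the one-nearest-neighbour rule are all hypothetical choices of mine, not anyone's published procedure:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_iq_gap(df: pd.DataFrame, covariates: list[str]) -> float:
    # 1. Estimate each person's propensity to belong to group 1,
    #    given the covariates (education, health, income, ...).
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["group"])
    ps = model.predict_proba(df[covariates])[:, 1]
    in_1 = df["group"].to_numpy() == 1

    # 2. Match every group-1 member to the group-0 member with the
    #    closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[~in_1].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[in_1].reshape(-1, 1))

    # 3. Average the within-pair IQ differences: the mean gap among
    #    people who are comparable on the measured covariates.
    iq = df["iq"].to_numpy()
    return float((iq[in_1] - iq[~in_1][idx.ravel()]).mean())
```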


Taking the IQ gap at face value, a persistent question has been whether the tests are biased. Suppose there is an underlying variable of general intelligence. I doubt it, but I've been wrong before. Nobody claims that IQ tests perfectly measure general intelligence.

So we have a latent trait, and an imperfect index of the trait which shows a difference between groups. The question is whether the index measures the trait the same way in the two groups. What people have gone to great lengths to establish is that IQ predicts other variables the same way for the two groups, i.e., predictive invariance. This is not the same thing as unbiased measurement, but it does have a bearing on the question of measurement bias: it provides strong reason to think such bias exists.

As Roger Millsap and co-authors have shown in a series of papers going back to the early 1990s, when two groups differ on a latent trait, predictive invariance and unbiased measurement are generally incompatible: establishing the former is evidence against the latter. Since there's been persistent doubt about whether intelligence tests measure intrinsic ability or acquired knowledge, I'd have hoped that someone would do the experiment of controlling what the test-takers know. Nobody seems to have tried this until very recently, and lo and behold it makes the black-white IQ gap go away, and this on tests which are quite respectably g-loaded, i.e., strongly correlated with the general factor.

The psychologist Robert Abelson has a very nice book on Statistics as Principled Argument where he writes that "Criticism is the mother of methodology".

I was going to say that such episodes cast that in doubt, but it occurred to me that Abelson never says what kind of mother. To combine Abelson's metaphor with Harlow's famous experiments on love in monkeys, observational social science has been offered a choice between two methodological mothers: one of them warm and cuddly and familiar and utterly un-nourishing (the old world of linear regression, analysis of variance, factor analysis, etc.), the other cold and wiry but actually able to nourish (modern methods of causal inference). Not surprisingly, social scientists, being primates, overwhelmingly go for the warm fuzzies.

This, to me, indicates a deep failure on the part of the statistical profession to which I am otherwise proud to belong. It is never a good sign when your discipline's knowledge is the wire-mesh mother all the baby monkeys avoid if at all possible. Less metaphorically, the perpetuation of these fallacies decade after decade shows there is something deeply amiss with the statistical education of social scientists.

Summary

Building factors from correlations is fine as data reduction, but deeply unsuited to finding causal structures. The mythical aspect of g isn't that it can be defined, or, having been defined, that it describes a lot of the correlations on intelligence tests; the myth is that this tells us anything more than that those tests are positively correlated. It has been known for almost as long as factor analysis has been around that positive correlations can arise in many ways which involve nothing remotely like a general factor of intelligence.

Does a single general factor show up outside the industrialised West? To find out, Warne and Burningham searched the literature for mental ability studies in non-industrialised, non-Western cultures (defined as less than half the population being White or European). The analysis covered nearly 100 datasets from 31 cultures — including Thailand, Uganda, Papua New Guinea and Guyana — from every inhabited continent and world region save Europe and Australia. Median sample sizes were modest, but thanks to some very large samples Warne and Burningham were working with more than 50,000 participants in all.

They wanted to explore which cultures and which sets of tasks featured performance variation that could be reduced down to one factor akin to g, and which would firmly resist such reduction. There are many ways to do factor analysis, especially around the decision rule of when to stop generating underlying factors, and this choice obviously influences the results that follow.


In most of the datasets, a single underlying factor emerged straight away. In other cases, two underlying factors emerged, but these were similar enough to also end up reducing to one factor in a second round of analysis, saving one single exception. That is striking enough on its own; more so when you note that, on average, the first factor extracted explained almost half of the variance in performance across different tests — very similar to the g research in Western samples.

Chris Brand's book is clear and complete: it presents the state of the art in a language suitable for undergraduates. It's not censored: you have access to what the author said before he was first silenced and eventually sacked by Edinburgh University. Putting the book on line doesn't imply any acceptance on my part of the ideas and opinions expressed by Chris Brand (I wouldn't be competent enough to judge that). No distribution of the book is allowed by any means (mail, web, etc.).

This book can't be put on line at any location other than Douance. A personal copy in electronic format for scholarly use is permitted. You accept that downloading the book involves NO challenge to the copyright status of the original book as intellectual property. By downloading the book you fully recognize that Philippe Gouillou has no responsibility for its content.