Monday, December 14, 2009
Google results for +"nobel laureate" +X, where X is one of the following:
Of course, there are more winners to refer to in Physics than in Economics, so we should control for that. Dividing the number of Google results by the number of winners gives these per capita rates:
If the intellectual merit of a body of ideas is not so well established, you're more likely to deflect attention by reassuring everyone that, hey, it can't be that crazy -- after all, the guy is a Nobel laureate. Perhaps that's why physics ranks above chemistry here: string theory and the like have taken physics further into speculation than the more grounded chemistry.
Tuesday, November 17, 2009
At Cognitive Daily, Men often treat their friends better than women do:
The researchers say these three studies show that men are more tolerant of their friends' failings than women. Does this mean that men are more "sociable"? That's less certain. After all, it could be that women value the friendships more, and so are harsher judges when they perceive a betrayal. Regardless of your interpretation of these results, however, it seems that the stereotype of "men harsh, women friendly" is not always valid. In many cases, it can be said that women are less tolerant than men.
The research focused on college roommates. The only area where males were harsher than females in evaluating their roommates was hygiene. In any case, there's other research which I've drawn upon to suggest that males are much better at scaling up into social units capable of "collective action" than females are.
Monday, November 16, 2009
In this discussion about pop music at Steve Sailer's, the topic of generations came up, and it's one where few of the people who talk about it have a good grasp of how things work. For example, the Wikipedia entry on generations notes that cultural generations only showed up with industrialization and modernization -- true -- but doesn't offer a good explanation for why. It also doesn't distinguish between loudmouth generations and silent generations, which alternate over time. As long as a cohort "shares a culture," it's considered a generation, but that misses most of the dynamics of generation formation. My view of it is pretty straightforward.
First, we have to notice that some cohorts are full-fledged Generations with ID badges like Baby Boomer or Gen X, and some cohorts are not as cohesive and stay more out of the spotlight. Actually, one of these invisible cohorts did get an ID badge -- the Silent Generation -- so I'll refer to them as loudmouth generations (e.g., Baby Boomers, Gen X, and before long the Millennials) and silent generations (e.g., the small cohort cramped between Boomers and X-ers).
Then we ask why the loudmouth generations band together so tightly, and why they show such strong affiliation with their generation that they continue to talk and dress the way they did as teenagers or college students even after they've hit 40. Well, why does any group of young people band together? Because social circumstances look dire enough that the world seems to be going to hell, so you have to stick together to help each other out. It's as if an enemy army invaded and you had to form a makeshift army of your own.
That is the point of ethnic membership badges like hairstyle, slang, clothing, musical preferences, etc. -- to show that you're sticking with the tribe in desperate times. That's why teenagers' clothing has logos visible from down the hall, why they spend half their free time digging into a certain music niche, and why they're hyper-sensitive about what hairstyle they have. Adolescence is a socially desperate time, not unlike a jungle, in contrast to the more independent situation you enjoy during full adulthood. Being caught in more desperate circumstances, teenagers freak out about being part of -- fitting in with -- a group that can protect them; they spend the other half of their free time communicating with their friends. Independent adults have fewer friends, keep in contact with them much less frequently, and don't wear clothes with logos or the cover art from their favorite new album.
OK, so that happens with every cohort -- why does this process leave a longer-lasting impact on the loudmouth cohorts? It is the same cause, only writ large: there's some kind of social panic, or over-turning of the status quo, that's spreading throughout the entire culture. So they not only face the trials that every teenager does, but they've also got to protect themselves against this much greater source of disorder. They have to form even stronger bonds, and display their respect for their generation much longer, than cohorts who don't face a larger breakdown of security.
Now, where this larger chaos comes from, I'm not saying. I'm just treating it as exogenous for now, as though people who lived along the waterfront would go through periods of low need for banding together (when the ocean behaved itself) and high need to band together (when a flood regularly swept over them). The generation forged in this chaos participates in it, but it got started somewhere else. The key is that this sudden disorder forces them to answer "which side are you on?" During social-cultural peacetime, there is no Us vs. Them, so cohorts who came of age in such a period won't see generations in black-and-white, do-or-die terms. Cohorts who come of age during disorder must make a bold and public commitment to one side or the other. You can tell when such a large-scale chaos breaks out because there is always a push to reverse "stereotypical gender roles," as well as a surge of identity politics.
The intensity with which they display their group membership badges and groupthink is perfectly rational -- when there's a great disorder and you have to stick together, the slightest falter in signaling your membership could make the others think that you're a traitor. Indeed, notice how the loudmouth generations can meaningfully use the phrase "traitor to my generation," while silent generations wouldn't know what you were talking about -- you mean you don't still think The Ramones are the best band ever? Well, OK, maybe you're right. But substitute "I've always thought The Beatles were over-rated," and watch your peers crowd around you with torches and pitchforks.
By the way, why did cultural generations only show up in the mid-to-late 19th C. after industrialization? Quite simply, the ability to form organizations of all kinds was restricted before then. Only after transitioning from what North, Wallis, and Weingast (in Violence and Social Orders -- working paper here) call a limited access order -- or a "natural state" -- to an open access order, do we see people free to form whatever political, economic, religious, and cultural organizations that they want. In a natural state, forming organizations at will threatens the stability of the dominant coalition -- how do they know that your bowling league isn't simply a way for an opposition party to meet and plan? Or even if it didn't start out that way, you could well get to talking about your interests after awhile.
Clearly young people need open access to all sorts of organizations in order to cohere into a loudmouth generation. They need regular hang-outs. Such places couldn't be formed at will within a natural state. Moreover, a large cohort of young people banding together and demanding that society "hear the voice of a new generation" would have been summarily squashed by the dominant coalition of a natural state. It would have been seen as just another "faction" that threatened the delicate balance of power that held among the various groups within the elite. Once businessmen are free to operate places that cater to young people as hang-outs, and once people are free to form any interest group they want, then you get generations.
Finally, on a practical level, how do you lump people into the proper generational boxes? This is the good thing about theory -- it guides you in practice. All we have to do is get the loudmouth generations' borders right; in between them go the various silent or invisible generations. The catalyzing event is a generalized social disorder, so we just look at the big picture and pick a peak year plus maybe 2 years on either side. You can adjust the length of the panic, but there seems to be a 2-year lead-up stage, a peak year, and then a 2-year winding-down stage. Then ask, whose minds would have been struck by this disorder? Well, "young people," and I go with 15 to 24, although again this isn't precise.
Before 15, you're still getting used to social life, so you may feel the impact a little, but it's not intense. And after 24, you're on the path to independence, you're not texting your friends all day long, and you've stopped wearing logo clothing. The personality trait Openness to Experience rises during the teenage years, peaks in the early 20s, and declines after; so there's that basis. Plus the likelihood to commit crime -- another measure of reacting to social desperation -- is highest between 15 and 24.
So, just work your way backwards by taking the oldest age (24) and subtracting it from the first year of the chaos, and then taking the youngest age (15) and subtracting it from the last year of the chaos. "Ground zero" for that generation is the chaos' peak year minus 20 years.
As an example, the disorder of the Sixties lasted from roughly 1967 to 1972. Applying the above algorithm, we predict a loudmouth generation born between 1943 and 1957: Baby Boomers. Then there was the early '90s panic that began in 1989 and lasted through 1993 -- L.A. riots, third wave feminism, etc. We predict a loudmouth generation born between 1965 and 1978: Generation X. There was no large-scale social chaos between those two, so that leaves a silent generation born between 1958 and 1964. Again, they don't wear name-tags, but I call them the disco-punk generation based on what they were listening to when they were coming of age.
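The subtraction rule above is simple enough to sketch in a few lines of Python -- a minimal illustration of the post's arithmetic, with the two worked examples as checks:

```python
# Boundary rule from the post: a loudmouth generation comprises those
# aged 15-24 at some point during the social chaos, so its birth range
# runs from (first chaos year - 24) to (last chaos year - 15).
def loudmouth_birth_range(first_chaos_year, last_chaos_year):
    return first_chaos_year - 24, last_chaos_year - 15

def ground_zero(peak_chaos_year):
    # "Ground zero" for the generation is the chaos' peak year minus 20.
    return peak_chaos_year - 20

# Worked examples from the post:
print(loudmouth_birth_range(1967, 1972))  # Baby Boomers: (1943, 1957)
print(loudmouth_birth_range(1989, 1993))  # Generation X: (1965, 1978)
```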
Going farther back, what about those who came of age during the topsy-turvy times of the Roaring Twenties? The mania lasted from roughly 1923 to 1927, forming a loudmouth generation born between 1899 and 1912. This closely corresponds to what academics call the Interbellum Generation. The next big disruption was of course WWII, which in America really struck between 1941 and 1945, creating a loudmouth generation born between 1917 and 1930. This would be the young people who were part of The Greatest Generation. That leaves a silent generation born between 1913 and 1916 -- don't know if anyone can corroborate their existence or not. That also leaves The Silent Generation proper, born between 1931 and 1942.
Looking forward, it appears that these large social disruptions recur with a period of about 25 years on average. The last peak was 1991, so I predict another one will strike in 2016, although with 5 years' error on both sides. Let's say it arrives on schedule and has a typical 2-year build-up and 2-year winding-down. That would create a loudmouth generation born between 1990 and 2003 -- that is, the Millennials. They're already out there; they just haven't hatched yet. And that would also leave a silent generation born between 1979 and 1989.
My sense is that Millennials are already starting to cohere, and that 1987 is more like their first year, making the silent generation born between 1979 and 1986 (full disclosure: I belong to it). So this method surely isn't perfect, but it's pretty useful. It highlights the importance of looking at the world with some kind of framework -- otherwise we'd simply be cataloguing one damn generation after another.
Tuesday, May 05, 2009
A simple but powerful way to determine whether or not there's an irrational bubble is to look for a lot of people participating in a trend who have no business doing so. For instance, a Mexican strawberry-picker making $15,000 a year who gets a $720,000 loan for a home. If these don't-belong-there people make up a larger and larger fraction of all who get loans, that strongly suggests that everyone is trying to get in on a speculative bubble -- and that the gatekeepers of the activity are increasingly debauching their entry standards to accommodate the losers.
One datum suggesting an irrational bubble in education is that a much larger fraction of the population goes to college now and that, not surprisingly, the average IQ of college students has declined by about 2/3 of a standard deviation -- admissions boards have begun to scrape deeper down into the sludgebucket of society.
How about looking even earlier? High school is compulsory, so we can't really use high school enrollment to judge whether there's a bubble or not. But what about the sub-group of high school that ostensibly exists to prepare college-bound students for college? Enrolling there is up to the students' choice, perhaps with some bullying from their parents. Even at this early stage, there is strong evidence of an irrational bubble.
What got me thinking about this was a recent NYT article on how teachers feel about the Advanced Placement program, which is based on a report from the Thomas B. Fordham Institute. The key item that popped out was the claim that participation in the AP program has exploded in recent years, and that this has made a fair fraction of teachers anxious about whether there are students there who shouldn't be. This sure smells like a bubble.
First, let's make sure that the AP program really is exploding as they say, and then we'll see if there's a rational basis for it or not. To measure participation in the AP program, I simply took the number of AP tests taken and divided it by the high school population size. (The AP data are here, and the high school pop data are here, Table A-1.) The AP data go back to 1988, while the high school pop data end in 2007, so I looked at the period from 1988 to 2007. Here are both the total number of AP tests taken and the per capita rate:
An exponential trend accounts for 99.8% of the year-to-year variation for the total number of tests taken, and 99.2% in the per capita case. So, clearly participation in the AP program has been exploding at least since 1988.
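For anyone who wants to replicate that kind of fit, here is a minimal sketch of the method: regress log(tests) on year and report the R². The counts below are synthetic placeholders growing at a steady rate, not the actual AP figures cited above:

```python
import math

# Synthetic stand-in data: test counts growing ~9% per year from 1988.
# (Placeholder numbers, NOT the real AP series.)
years = list(range(1988, 1998))
counts = [int(500_000 * 1.09 ** (y - 1988)) for y in years]

# Least-squares fit of log(count) on year; near-perfect R^2 indicates
# exponential growth, as claimed for the AP data.
n = len(years)
ys = [math.log(c) for c in counts]
mx, my = sum(years) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, ys))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(years, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(f"annual growth ~{math.exp(slope) - 1:.1%}, R^2 = {r_squared:.4f}")
```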
Now, is there a sound basis for this increase -- like, maybe kids these days are just getting exponentially smarter? Without looking at the data, we know this is wrong since the main determinant of doing well in AP classes is IQ, and that is influenced mostly by genes and unpredictable aspects of the environment, which haven't been changing so rapidly from one year to the next.
Turning to data on how well 17 year-olds are doing academically, let's look at some tables from the 2007 version of the Digest of Education Statistics (all under Chapter 2, and then Educational Achievement). Table 112 shows that on the National Assessment of Educational Progress, the average reading score for 17 y.o.s did not change from 1971 to 2004. Table 115 shows that the percent of 17 y.o. students who are at the 300 level or above in reading did not change from 1971 to 2004. Tables 125 and 126 show the same lack of change for math skills tested by the NAEP. Table 135 shows that the average Critical Reading score on the SAT did not change from 1988 onward -- indeed, it was steady back to about 1976, and had been declining before then. There was a modest uptick in Math scores (15 points, or 0.15 s.d.). The Critical Reading or Verbal score is more highly g-loaded than the Math score for the SAT, or is a better measure of IQ, which means the apparent uptick in Math scores may not mean as much as we'd think.
Taken together, these data show that the academic fundamentals of high schoolers have not changed since the 1970s. If there has been no upswing at all in the fundamentals -- let alone an exponential one -- then the explosion of the AP program is accounted for completely by irrational factors. It looks just like the housing bubble -- the pool of deserving borrowers didn't explode, so the surge in borrowing must have been due to a bunch of undeserving people, namely low-income people, pouring in. Here are two graphs showing that this happened in the AP program too:
The first shows the distribution of AP scores, where 5 is the highest. You can check the numbers for yourself in the previous link to the AP data, but there has been no change in the percent of all tests that received a score of 4 or 5 -- there have not been more and more smarties piling into AP classrooms, at least not since 1988. Therefore, everyone who deserved to be there was already there. However, the percent of all tests receiving a score of 1 -- telling the student, "why did you even bother?" -- has doubled from 10% to 21%. Those receiving a 2 shrank a tiny amount, from about 23% to 21%, while those receiving a 3 declined from about 32% to 24%. This means that, unlike the smarties, more and more dummies have been allowed into the AP program.
This is reflected in the change in the mean and standard deviation of test scores: keeping the smarties fixed while adding a lot more dummies will drag down the mean and increase the heterogeneity or variance. That's analogous to the housing bubble causing a decline in the mean creditworthiness of the population of borrowers, and an increase in their heterogeneity, as both the sound and the unsound begin to rub shoulders in loan offices. And just as lenders increasingly cheapened their standards by not requiring down payments or proof of income, so high school teachers and administrators have allowed increasingly ill-prepared -- stupid -- students into the AP program.
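The mean-and-variance effect is easy to demonstrate with hypothetical numbers. The counts below are illustrative, loosely shaped like the score percentages above, not the actual AP data:

```python
from statistics import mean, pstdev

# Hypothetical score counts: the smarties (4s and 5s) are held fixed
# while extra 1s pour in -- the pattern described in the post.
before = [5] * 20 + [4] * 15 + [3] * 32 + [2] * 23 + [1] * 10
after  = [5] * 20 + [4] * 15 + [3] * 32 + [2] * 23 + [1] * 40

# Adding low scorers drags the mean down and widens the spread.
print(f"before: mean={mean(before):.2f}, sd={pstdev(before):.2f}")
print(f"after:  mean={mean(after):.2f}, sd={pstdev(after):.2f}")
```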
In sum, there is very strong evidence from AP tests for a speculative bubble in education. Most of what I've read on whether or not such a bubble exists has focused on college -- soaring tuition, more and therefore dumber students, and so on. These data, though, show that the mania extends even to high school, not just higher ed. For at least the past five years, there have been many news stories about competitive admission to pre-school, so perhaps someone could dig up some numbers to show an exponential increase there too that can't be rationalized by a change in fundamentals. In any case, it's clear that this bubble is much more general than the college data suggest.
Curiously, the phrase "education bubble" has not appeared at all in the NYT, although it has appeared many times in the blogs that the newspaper hosts. Googling the phrase gets 39,000 hits. Rises and falls in tuition get plenty of coverage, but that doesn't show that the reporters are aware of the irrational bubble -- they just think it's unfair, that college should be cheaper so that more can attend. But just as no one was allowed to say that most low-income borrowers were undeserving of home loans since they were disproportionately black and Hispanic, so we aren't allowed to say that a lot of college students are nowhere near being "college material" -- that would violate the "demotic life and times," as Jacques Barzun has dubbed the zeitgeist from roughly the 1960s until today. We cripple our minds by imbibing political correctness.
The bursting of the education bubble may be decades away -- it sure has been going on for awhile, so its period may be much longer than that of the housing or stock market bubbles. Let's just hope that when it happens, it will turn out that hedge funds and investment banks won't have exposed themselves to all of this silliness, and that we won't be plunged into another multi-year recession.
Thursday, February 14, 2008
Red states, blue states, and affordable family formation is a commentary on a new article by Steve Sailer about his Affordable Family Formation theory. I don't have much to add, except a note on this:
To get back to the main point, Sailer is making a geographic argument, that Democrats do better in coastal states because families are less likely to live in coastal metropolitan areas, because housing there is so expensive, because of the geography: less nearby land for suburbs. This makes a lot of sense, although it doesn't really explain why the people without kids want to vote for Democrats and people with kids want to vote for Republicans. I can see that more culturally conservative people are voting Republican, and these people are more likely to marry and have kids at younger ages--but in that sense the key driving variable is the conservatism, not the marriage or the kids.
I think that Steve's response would be that a family and kids tend to make you more inclined toward social conservatism. Specifically, full-throated principled defenses of lifestyle libertarianism are less attractive to people who aren't going to be indulging in that in any case because of the constraints of family life.
Saturday, December 22, 2007
The New York Times has a story, Where Boys Were Kings, a Shift Toward Baby Girls:
...South Korea is the first of several Asian countries with large sex imbalances at birth to reverse the trend, moving toward greater parity between the sexes. Last year, the ratio was 107.4 boys born for every 100 girls, still above what is considered normal, but down from a peak of 116.5 boys born for every 100 girls in 1990.
Please note that the "normal" sex ratio at birth is usually skewed somewhat toward males, around 105 to 100. (The explanation I once received is that sperm carrying the Y chromosome swim faster because they are smaller; I'd appreciate anyone correcting this if they know the true story.) I also found it peculiar that the article did not note that another East Asian society, Japan, has switched from son to daughter preference in the past few decades. The moral of this story, I think, is that economic and social development are more critical in shaping these trends than laws enacted from on high. Japan developed earlier than South Korea, and the change in societal attitudes on this issue occurred earlier there as well.
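For reference, the quoted boys-per-100-girls ratios convert into the share of births that are male like this:

```python
# Convert a "boys per 100 girls" sex ratio into the male share of births.
def pct_male(boys_per_100_girls):
    return boys_per_100_girls / (boys_per_100_girls + 100)

# Ratios from the post: the "normal" 105, South Korea's 2006 figure,
# and its 1990 peak.
for ratio in (105, 107.4, 116.5):
    print(f"{ratio}:100 -> {pct_male(ratio):.1%} male")
# 105:100 -> 51.2% male; 107.4:100 -> 51.8%; 116.5:100 -> 53.8%
```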
Sunday, April 22, 2007
PNAS has a paper titled Growth, innovation, scaling, and the pace of life in cities, which notes that a majority of humans now live in cities. I know that historically cities were a population sink (and only a small minority ever lived in them), but I have to wonder what evolutionary implications the normality of city life will have for our species over the next few hundred years (assuming some sort of collapse or explosion doesn't make the idea of humanity irrelevant). I say this because I suspect that the transition from hunter-gatherer to "dense" village living was highly significant (as illustrated by the mass die-off from disease in the New World upon exposure to the Eurasian pathogen pool). Robin Dunbar's work suggests that our cognitive social intelligence doesn't scale up much past around 200 individuals. Villages aren't necessarily much more populous than that, but cities are.