The force was with him!

In my post below where I focused on patent law it was noted that even more blatant abuses of the spirit of intellectual property occur in copyright. So I was interested to see that George Lucas has lost a lawsuit in the United Kingdom in relation to the idea of “storm troopers”:

Nevertheless, the High Court rejected the multi-billionaire director’s claim and the focus switched to design rights, specifically whether the helmets sold were works of art or merely industrial props.

If Lucasfilm could convince the courts the 3D works were sculptures, they would be protected by copyright for the life of the author plus 70 years.

If not, the copyright protection would be reduced to 15 years from the date they were marketed, meaning it would have expired and Mr Ainsworth would be free to sell them.

The High Court and Court of Appeal found in Mr Ainsworth’s favour, and despite Lucas being backed by directors Steven Spielberg, James Cameron and Peter Jackson, the Supreme Court has now followed suit.

Someone on Twitter quipped that Lucas should be paying royalties to the Germans for the idea of stormtroopers. But I immediately recalled that many of the ideas which set the frame for the Star Wars series are actually lifted wholesale from pre-World War II pulp science fiction, in particular the ideas of E. E. Smith and his Lensman series.

When sociology meets statistical genetics

In Dr. Daniel MacArthur’s post on Roots into the Future Blaine Bettinger left an interesting comment:

It will be interesting to see how 23andMe deals with the pool of people that respond to the 10,000 free kits. Doesn’t seem like they can pre-screen applicants, since African American heritage is sometimes more sociological than genetic (based on previous genetic studies, anyway). In other words, who’s to say who is an African American and who isn’t?

And how will they deal with the unscrupulous people who apply with the full knowledge that they have no recent African ancestry? Certainly they won’t be able to screen those people out, even with surveys or other methods.

My concerns probably won’t apply to the genetic association studies, since they can look for test-takers that have, for example, a certain % of African American ancestry, or can look for African American ancestry in the region of the genome where the association is believed to reside (after it’s predicted to exist).

However, my concerns will certainly apply to any conclusions they might make about African American genetic ancestry. For example, a conclusion such as “XX% of African Americans have less than XX% of African American DNA,” or “XX% of African Americans have European Y-DNA signatures.” These calculations will unfortunately be biased by the “unscrupulous”, even if they ask for surveys or other methods to deter bias. The best they might be able to do is “XX% of African Americans with 5% or more of African American DNA have European Y-DNA,” and conclusions that take the “unscrupulous” bias into account.
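The bias Bettinger worries about is easy to see in a toy simulation (all numbers below are hypothetical, chosen only for illustration): mixing in a minority of enrollees with essentially no African ancestry inflates a summary statistic like “XX% of African Americans have less than 50% African DNA,” while the kind of minimum-ancestry floor he suggests partially corrects it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ancestry-fraction distributions, for illustration only.
genuine = rng.beta(8, 2, size=9500)      # self-identified African Americans
gatecrash = rng.beta(1, 50, size=500)    # enrollees with ~no African ancestry
sample = np.concatenate([genuine, gatecrash])

# Naive statistic over the whole pool of kit recipients.
naive = (sample < 0.5).mean()            # "% with less than 50% African DNA"

# Apply a 5%-ancestry floor before computing the statistic.
screened = sample[sample >= 0.05]
corrected = (screened < 0.5).mean()

print(round(naive, 3), round(corrected, 3))
```

The thresholded estimate is still not unbiased (it also drops genuine low-ancestry individuals), which is Bettinger's point: the best available conclusion is conditional on the floor.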


Ingenuity's flight toward rents

Andrew Oh-Willeke, Esq., observes:

One example of cyclicality that continues to today is the practice of law. The basic principles of Roman private law and the complaints that people made about lawyers and litigation were remarkably similar in the 300s to what they are today.

In the 6th century Justinian the Great sponsored a compilation of the body of law which was being widely practiced in the Roman Empire at the time, what is now known as the Corpus Juris Civilis. This is not an abstract or obscure point in the history of modern law:

The present name of Justinian’s codification was only adopted in the 16th century, when it was printed in 1583 by Dionysius Gothofredus under the title “Corpus Juris Civilis”. The legal thinking behind the Corpus Juris Civilis served as the backbone of the single largest law reform of the modern age, the Napoleonic Code, which marked the abolition of feudalism.

Imagine that the astronomical models of Ptolemy served as a basis for modern astrophysics! There’s only a vague family resemblance in this case. The difference is that law is fundamentally a regulation of human interaction, and the broad outlines of human nature remain the same as they were during the time of Justinian and Theodora. In this way law resembles many of the humanities, which don’t seem to exhibit the same progressive character as science. Our cultures may evolve, but there are constraints imposed by our nature as human beings. Human universals in humanistic enterprises speak to us across the ages. The story of Joseph and his brothers in Genesis speaks to us because it is not too different from our own. The meditations of Arjuna are not incomprehensible to the modern reader, even if they come from the imagination of Indians living thousands of years in the past. The questions and concerns of the good life are fundamentally invariant because of the preconditions of our biology.


Zack Ajmal's public domain genotype

See his announcement: Genome in the Wild. If you don’t know, Zack is the driving force behind the Harappa Ancestry Project. Seeing how the Indian scientific-bureaucratic complex still seems to be retarding rapid progress in human genomics (how many people have heard of the Indian Genome Variation Consortium? Their blog, which is hosted on blogspot, was last updated 1 year ago!), I think it can be argued that Zack has done more for the understanding of the population relationships of Indian populations in the last 6 months than the Indian government has in 60 years! I’m hoping I’ll be proven wrong with a list of awesome and result-rich publications in the comments.

War in Pre-Columbian Sumeria

For most of my life I have had an implicit directional view of Holocene human culture. And that direction was toward more social complexity and cultural proteanism. Ancient Egypt traversed ~2,000 years between the Old Kingdom and the fall of the New Kingdom. But it is rather clear that the cultural distance which separated the Egypt of Ramesses and that of Khufu was smaller than the cultural distance which separates the Italy of Berlusconi and the Italy of Augustus. Not only is the pace of change more rapid, but the change seems to tend toward complexity and scale. For most of history most humans were primary producers (or consumers, as hunter-gatherers). Today primary producers are only a small proportion of the labor force (less than 2% in the USA), and there are whole specialized sectors of secondary producers, service workers, as well as professionals whose duty is to “intermediate” between other sectors and smooth the functioning of society. The machine is more complex than it was, and it has gotten more complex faster and faster.

This is an accurate model as far as it goes, but of late I have started to wonder if simply describing in the most summary terms the transition from point A to Z, and omitting the jumps from B to C to … Y, may hide a great deal of the “action” of human historical process. My post “The punctuated equilibrium of culture” was inspired by my deeper reflection on the somewhat staccato character of cultural evolution. Granting that the perception of discontinuity is a function of the grain at which we examine a phenomenon, I think one can argue that to a great extent imagining the change of cultural forms as analogous to gradualistic evolution, or the smooth descent of a ball toward the center of the earth, is deceptive. The theories of history which many pre-modern peoples espoused can give us a window into perception of changes in the past: history was quite often conceived of as cyclical, rising and falling and rising. And yet even in the days of yore there were changes and increases in complexity. The Roman legions of Theodosius the Great in 390 A.D. were more complex institutions than those of Scipio Africanus in 200 B.C. The perception of stasis, and even decline, is due to the fact that the character and complexity of societies did not seem to exhibit direction toward progress over the short term. And that short term can be evaluated over centuries, far longer than any plausible human lifetime. So while it is all well and fine to focus on the long-term trend line, the details of how the trend emerged matter a great deal when attempting to construct a model of the past which can allow us to make robust and rich inferences. The people of the past made robust inferences over any scale of time which mattered to them. The world was nearly as likely to get less rich as more rich.


Hallucinating neural networks

Hearing voices is a hallmark of schizophrenia and other psychotic disorders, occurring in 60-80% of cases. These voices are typically identified as belonging to other people and may be voicing the person’s thoughts, commenting on their actions or ideas, arguing with each other or telling the person to do something. Importantly, these auditory hallucinations are as subjectively real as any external voices. They may in many cases be critical or abusive and are often highly distressing to the sufferer.

However, many perfectly healthy people also regularly hear voices – as many as 1 in 25 according to some studies, and in most cases these experiences are perfectly benign. In fact, we all hear voices “belonging to other people” when we dream – we can converse with these voices, waiting for their responses as if they were derived from external agents. Of course, these percepts are actually generated by the activity of our own brain, but how?

There is good evidence from neuroimaging studies that the same areas that respond to external speech are active when people are having these kinds of auditory hallucinations. In fact, inhibiting such areas using transcranial magnetic stimulation may reduce the occurrence or intensity of heard voices. But why would the networks that normally process speech suddenly start generating outputs by themselves? Why would these outputs be organised in a way that fits speech patterns, as opposed to random noise? And, most importantly, why does this tend to occur in people with schizophrenia? What is it about the pathology of this disorder that makes these circuits malfunction in this specific way?

An interesting approach to try and get answers to these questions has been to model these circuits in artificial neural networks. If you can generate a network that can process speech inputs and find certain conditions under which it begins to spontaneously generate outputs, then you may have an informative model of auditory hallucinations. Using this approach, a couple of studies from several years ago from the group of Ralph Hoffman have found some interesting clues as to what may be going on, at least on an abstract level.

Their approach was to generate an artificial neural network that could process speech inputs. Artificial neural networks are basically sets of mathematical functions modelled in a computer programme. They are designed to simulate the information-processing functions carried out by individual neurons and, more importantly, the computational functions carried out by an interconnected network of such neurons. They are necessarily highly abstract, but they can recapitulate many of the computational functions of biological neural networks. Their strength lies in revealing unexpected emergent properties of such networks.

The particular network in this case consisted of three layers of neurons – an input layer, an output layer, and a “hidden” layer in between – along with connections between these elements (from input to hidden and from hidden to output, but crucially also between neurons within the hidden layer). “Phonetic” inputs were fed into the input layer – these consisted of models of speech sounds constituting grammatical sentences. The job of the output layer was to report what was heard – representing different sounds by patterns of activation of its forty-three neurons. Seems simple, but it’s not. Deciphering speech sounds is actually very difficult as individual phonetic elements can be both ambiguous and variable. Generally, we use our learned knowledge of the regularities of speech and our working memory of what we have just heard to anticipate and interpret the next phonemes we hear – forcing them into recognisable categories. Mimicking this function of our working memory is the job of the hidden layer in the artificial neural network, which is able to represent the prior inputs by the pattern of activity within this layer, providing a context in which to interpret the next inputs.
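The architecture described above is essentially a simple recurrent (Elman-style) network. The following Python sketch is purely illustrative, not Hoffman's actual implementation: the input size, hidden size, and weights are made up, and only the output layer's forty-three units come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions; only N_OUT = 43 comes from the description.
N_IN, N_HID, N_OUT = 20, 50, 43

# Weights: input->hidden, hidden->hidden (the recurrent "working
# memory" connections within the hidden layer), hidden->output.
W_ih = rng.normal(0, 0.1, (N_HID, N_IN))
W_hh = rng.normal(0, 0.1, (N_HID, N_HID))
W_ho = rng.normal(0, 0.1, (N_OUT, N_HID))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x_t, h_prev):
    """One time step: the hidden layer combines the current phonetic
    input with its own previous state, providing the context in which
    the next phoneme is interpreted; the output layer reports what
    was heard as a pattern over its 43 units."""
    h_t = sigmoid(W_ih @ x_t + W_hh @ h_prev)
    y_t = sigmoid(W_ho @ h_t)
    return h_t, y_t

# Feed a short "sentence" of (random, stand-in) phonetic input vectors.
h = np.zeros(N_HID)
for _ in range(5):
    x = rng.random(N_IN)
    h, y = step(x, h)
print(y.shape)  # (43,)
```

The key design point is the hidden-to-hidden connections: without them, each phoneme would be interpreted in isolation, with no memory of what was just heard.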

The important thing about neural networks is they can learn. Like biological networks, this learning is achieved by altering the strengths of connections between pairs of neurons. In response to a set of inputs representing grammatical sentences, the network weights change in such a way that when something similar to a particular phoneme in an appropriate context is heard again, the pattern of activation of neurons representing that phoneme is preferentially activated over other possible combinations.

The network created by these researchers was an able student and readily learned to recognise a variety of words in grammatical contexts. The next thing was to manipulate the parameters of the network in ways that are thought to model what may be happening to biological neuronal networks in schizophrenia.

There are two major hypotheses that were modelled: the first is that networks in schizophrenia are “over-pruned”. This fits with a lot of observations, including neuroimaging data showing reduced connectivity in the brains of people suffering with schizophrenia. It also fits with the age of onset of the florid expression of this disorder, which is usually in the late teens to early twenties. This corresponds to a period of brain maturation characterised by an intense burst of pruning of synapses – the connections between neurons.

In schizophrenia, the network may have fewer synapses to begin with, but not so few that it doesn’t work well. This may however make it vulnerable to this process of maturation, which may reduce its functionality below a critical threshold. Alternatively, the process of synaptic pruning may be overactive in schizophrenia, damaging a previously normal network. (The evidence favours earlier disruptions).

The second model involves differences in the level of dopamine signalling in these circuits. Dopamine is a neuromodulator – it alters how neurons respond to other signals – and is a key component of active perception. It plays a particular role in signalling whether inputs match top-down expectations derived from our learned experience of the world. There is a wealth of evidence implicating dopamine signalling abnormalities in schizophrenia, particularly in active psychosis. Whether these abnormalities are (i) the primary cause of the disease, (ii) a secondary mechanism causing specific symptoms (like psychosis), or (iii) the brain attempting to compensate for other changes is not clear.
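One common abstraction of dopamine's modulatory role, in this kind of model, is a multiplicative gain term on the neuronal response function; the sketch below is a generic illustration of that idea, not the specific parameterisation used in the studies.

```python
import numpy as np

def response(inputs, gain):
    """Sigmoidal neural response with a multiplicative gain term, a
    common abstraction of dopamine's modulatory effect: dopamine does
    not drive the neuron itself, it changes how steeply the neuron
    responds to its other inputs."""
    return 1.0 / (1.0 + np.exp(-gain * inputs))

x = np.linspace(-3, 3, 7)
low = response(x, 0.5)   # low gain: shallow, graded responses
high = response(x, 2.0)  # high gain: steep, near all-or-none responses

# Higher gain suppresses weak inputs and saturates strong ones,
# sharpening -- or, if excessive, distorting -- the signal.
```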

Both over-pruning and alterations to dopamine signalling could be modelled in the artificial neural network, with intriguing results. First, a modest amount of pruning, starting with the weakest connections in the network, was found to actually improve the performance of the network in recognising speech sounds. This can be understood as an improvement in the recognition and specificity of the network for sounds which it had previously learned and probably reflects the improvements seen in human language learners, along with the concomitant loss in ability to process or distinguish unfamiliar sounds (like “l” and “r” for Japanese speakers).
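Pruning "starting with the weakest connections" can be sketched as magnitude-based thresholding of a weight matrix. The matrix below is a random stand-in, not the trained network from the studies; the pruning fractions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 1, (50, 50))  # a stand-in weight matrix

def prune(W, fraction):
    """Zero out the weakest `fraction` of connections by absolute
    magnitude, mimicking synaptic pruning of the least-used synapses."""
    k = int(W.size * fraction)
    if k == 0:
        return W.copy()
    threshold = np.sort(np.abs(W).ravel())[k - 1]
    Wp = W.copy()
    Wp[np.abs(Wp) <= threshold] = 0.0
    return Wp

mild = prune(W, 0.20)    # modest pruning: sharpened learned categories
severe = prune(W, 0.80)  # over-pruning: performance degrades
print((mild == 0).mean(), (severe == 0).mean())
```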

However, when the network was pruned beyond a certain level, two interesting things happened. First, its performance got noticeably worse, especially when the phonetic inputs were degraded (i.e., the information was incomplete or ambiguous). This corresponds quite well with another symptom of schizophrenia, especially in those who experience auditory hallucinations – sufferers show phonetic processing deficits under challenging conditions, such as a crowded room.

The second effect was even more striking – the network started to hallucinate! It began to produce outputs even in the absence of any inputs (i.e., during “silence”). When not being driven by reliable external sources of information, the network nevertheless settled into a state of activity that represented a word. The reason the output is a word and not just a meaningless pattern of neurons is that the previous learning that the network undergoes means that patterns representing words represent “attractors” – if some random neurons start to fire, the weighted connections representing real words will rapidly come to dominate the overall pattern of activity in the network, resulting in the pattern corresponding to a word.
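The attractor dynamic is easiest to see in a classic Hopfield network, which is not the model Hoffman used but illustrates the same principle: learned patterns become attractors, and once some neurons start to fire, activity settles into the nearest stored pattern. In this sketch the two stored patterns stand in for learned “words”; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64

# Two stored binary (+1/-1) patterns standing in for learned "words".
patterns = rng.choice([-1, 1], size=(2, N))

# Hebbian outer-product weights: stored patterns become attractors.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    """Repeatedly update all units; activity descends into the
    nearest attractor -- a pattern the network has learned."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Start from a corrupted version of word 0: a few "random" neurons
# firing wrongly. The learned connections come to dominate.
corrupt = patterns[0].copy()
flip = rng.choice(N, size=12, replace=False)
corrupt[flip] *= -1

final = settle(corrupt)
overlap = (final @ patterns[0]) / N  # 1.0 means perfect recovery
```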

Modeling alterations in dopamine signalling also produced both a defect in parsing degraded speech inputs and hallucinations. Too much dopamine signalling produced these effects but so did a combination of moderate over-pruning and compensatory reductions in dopamine signalling, highlighting the complex interactions possible.

The conclusion from these simulations is not necessarily that this is exactly how hallucinations emerge. After all, the artificial neural networks are pretty extreme abstractions of real biological networks, which have hundreds of different types of neurons and synaptic connections and which are many orders of magnitude more complex numerically. But these papers do provide at least a conceptual demonstration of how a circuit designed to process speech sounds can fail in such a specific and apparently bizarre way. They show that auditory hallucinations can be viewed as the outputs of malfunctioning speech-processing circuits.

They also suggest that different types of insult to the system can lead to the same type of malfunction. This is important when considering new genetic data indicating that schizophrenia can be caused by mutations in any of a large number of genes affecting how neural circuits develop. One way that so many different genetic changes could lead to the same effect is if the effect is a natural emergent property of the neural networks involved.

Hoffman, R., & McGlashan, T. (2001). Neural Network Models of Schizophrenia. The Neuroscientist, 7(5), 441-454. DOI: 10.1177/107385840100700513

Hoffman, R., & McGlashan, T. (2006). Using a Speech Perception Neural Network Computer Simulation to Contrast Neuroanatomic versus Neuromodulatory Models of Auditory Hallucinations. Pharmacopsychiatry, 39, 54-64. DOI: 10.1055/s-2006-931496

Mirrored from Wiring the Brain

Dominance, the social construct that confuses

A story in The Los Angeles Times seems to point to medical implications of being a sickle cell carrier, Sickle cell trait: The silent killer:

At least 17 high school and college athletes’ deaths have been tied to sickle cell trait during the past 11 years. The group includes Olivier Louis, a player at Wekiva High School near Orlando, who died on Sept. 7, 2010, following his first football practice.

You have surely heard about sickle cell anemia. It is a recessive disease which expresses in those who carry two sickle cell alleles. T-Boz of TLC, for example, has the disease due to her homozygosity. But the allele also famously confers some resistance against malaria, which explains its concentration in regions which have historically been malarial. Sickle cell is arguably the classic case of heterozygote advantage driving the emergence of a recessive disease. The frequency of the allele is balanced at the equipoise between the proportion of people who are more susceptible to malaria if its proportion is too low and those who express sickle cell anemia if its proportion is too high. This advantage is obviously context sensitive. The standard assumption is that in a non-malarial environment selection pressure against anemia will drive the frequency of the allele down over time, as heterozygote advantage no longer imposes a floor on the proportion of the mutant allele. This seems to have occurred among African Americans: they’re ~80% West African in ancestry, but from what I know their frequency of the sickle cell allele is lower than 80% of the West African frequency (remember that the median number of generations which an African American’s black ancestors have been in the USA is probably ~10).
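The dynamic can be made concrete with the standard single-locus diploid selection recursion. The fitness values below are illustrative, not empirical estimates for sickle cell; the point is only the qualitative behaviour: a stable equilibrium when heterozygotes are fittest, and a slow decline when selection acts only against the rare recessive homozygote.

```python
def next_freq(q, w_AA, w_AS, w_SS):
    """One generation of selection on sickle-allele frequency q,
    with genotype fitnesses w_AA (normal), w_AS (carrier), w_SS
    (anemic): q' = (p*q*w_AS + q^2*w_SS) / w_bar."""
    p = 1 - q
    w_bar = p*p*w_AA + 2*p*q*w_AS + q*q*w_SS
    return (p*q*w_AS + q*q*w_SS) / w_bar

# Malarial environment: heterozygote advantage maintains the allele
# at the equilibrium s/(s+t), here 0.15/(0.15+0.8) ~ 0.158.
q = 0.05
for _ in range(500):
    q = next_freq(q, w_AA=0.85, w_AS=1.0, w_SS=0.2)
malarial_eq = q

# Non-malarial environment: selection only against SS homozygotes.
# The allele declines, but slowly, since rare copies hide in
# (unaffected) heterozygotes.
q = 0.10
for _ in range(10):  # ~10 generations, roughly the African American case
    q = next_freq(q, w_AA=1.0, w_AS=1.0, w_SS=0.2)
print(round(malarial_eq, 3), round(q, 3))
```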


~1 month into the new social network order

I am only being added to Google+ “circles” at a clip of half a dozen per day. This is off the peak of nearly 20 or so per day a little over a week ago. I’m now at nearly 500 people in my Google circles, though only 5 were individuals whom I added proactively. I honestly have no idea who 2/3 of these people are, though it seems that most of them know me through my blogs. About 75 people I know rather well, though fewer than 50 are people whom I’ve met in real life (many of these only once or twice). In contrast, on Facebook there are hundreds of people I’ve met and know in real life. Very few of my college or high school friends have “added me” to their circles. In contrast, the people with whom I am currently socially engaged have added me. It’s like Google+ is a vast and shallow circle extending outward into my present social space, both explicit (people I know) and implicit (those who know me through my web presence). In contrast, Facebook has more historical depth. Though it’s been around a lot longer too, so the comparison isn’t fair.


Around the Web – July 25th, 2011

I assume you’re hot?

Killings in Norway Spotlight Anti-Muslim Thought in U.S. I’ve read Gates of Vienna before. Despite my anti-multiculturalist attitudes I generally parted with them over their sloppy marshaling of history. Two wrongs don’t make a right. Ironically, I was introduced to the blog mostly by someone who is now a moderately scary Muslim (they converted; at the time they were very much not Muslim, but now they engage in quasi-apologia for reactionary Muslim behavior, like death threats against blasphemers).

What should evolutionary psychology comprise? Also see John Hawks.

Epigenetic ‘Memory’ Key to Nature Versus Nurture. Epigenetics is trending.

Rival Debt Plans Being Assembled by Party Leaders. No comment.
