The End of Insight – monkeys lost in their own castles
This article relates to the nature of belief: when scientific explanations are beyond the comprehension of intelligent adults, science becomes unsatisfying dogma.
“In my own field of complex systems theory, Stephen Wolfram has emphasized that there are simple computer programs, known as cellular automata, whose dynamics can be so inscrutable that there’s no way to predict how they’ll behave; the best you can do is simulate them on the computer, sit back, and watch how they unfold. Observation replaces insight. Mathematics becomes a spectator sport.
If this is happening in mathematics, the supposed pinnacle of human reasoning, it seems likely to afflict us in science too, first in physics and later in biology and the social sciences (where we’re not even sure what’s true, let alone why).
When the End of Insight comes, the nature of explanation in science will change forever. We’ll be stuck in an age of authoritarianism, except it’ll no longer be coming from politics or religious dogma, but from science itself.”
In the near future Google will determine what we “know” or “believe”. Asking the Internet will be as easy as asking our own memory and the answers will be more reliable. We won’t “know” why; we will only know that the Google answers are right. Google will be the modern oracle.
Perhaps we can avoid this fate by expanding human intelligence and consciousness. Or perhaps a complex universe just won’t fit into a human brain.





Don’t worry, as soon as our complex computers which we are using to model the universe turn sentient we will all be taken care of by a benevolent AI. I look forward to worshiping Omnius.
But, seriously, do you think Mentats are possible?
if you imagine that the evolutionary adaptive landscape has generated cognitive constraints because of the gradualistic stepwise nature of the process, perhaps. evolution has shaped us with an intuitive sense of face recognition. what if we could reorganize the human brain so that we had a gestalt sense of the quickest way to simultaneously solve sets of linear equations? the easiest possible way might be some analogy to GMO, simply transferring characters from other species, though this would be easiest for sensory traits (i'm not sure i'd want a dog's sense of smell though). but even cognitively some species do have tendencies that we might be able to use. for example, there might be neat things we can take from birds, who can discern differences in songs that are extremely subtle and complex, differences that can't just be a function of hearing.
in any case, right now specialists in many fields ‘develop’ reflexive skills through a lifetime’s worth of particular inputs. what if we could simply hardwire it into brains? i mean, in a world without intuitive face recognition there would likely be individuals who specialize in the field.
“Observation replaces insight. Mathematics becomes a spectator sport.”
Sounds like empiricism to me.
We are so used to thinking of consciousness as something attached to a man-sized brain with two eyes, two ears, a nose and touch and taste sensors that I don’t know if it would be possible for us to empathize with or understand the consciousness that could reside within a computer perhaps a million times larger than one man’s brain, with hundreds of sensors of every kind, from telescopic to microscopic, from ultraviolet to infrared. Would we and IT even be able to consider the same question in the same way?
Robert,
I’ve often wondered the same thing. Our sentience rests very heavily on the fact that we are biological beings (animals) with very animalistic drives. An AI would have radically different motives, it wouldn’t care about sex or food or more complex desires of humanity. I wonder if we would even recognize it as sentient and if it would even consider us much different from bacteria.
Not to blogwhore or anything, but last night I posted about this very same topic, also sparked by Strogatz’s & related other entries.
http://akinokure.blogspot.com/2006/01/dangerous-ideas-round-up-i-limits-of.html
We already accept that the spectrometer is the authority for discerning properties of the wavelengths of light that we can’t see — i.e., pretty much all light. We have to trust what it says since we can’t develop even the vaguest speculation of what gamma rays or radio waves look like. Not only is this a good thing, since we’re less ignorant than before and can use this new knowledge to do useful things like broadcast information far away, but it doesn’t rob us of our sense of wonder. Sure, the spectrometer can see those wavelengths, but they’re still perfectly mysterious to us on a gut-satisfying level, and thus still a source of wonder and intrigue. Same would be true of a “science spectrometer” that delineated areas of concern we didn’t even know existed, and then proceeded to tell us what they’re like. We’d be more knowledgeable while still retaining our wonder at the unsensed (if not the unknown).
I find it silly that in the future “Google” will determine our knowledge and beliefs. For that to happen, our innate curiosity would have to take such a giant leap that we’re no longer satisfied by our own memorized knowledge, which would be dwarfed by the available “Google”-knowledge we then constantly crave, and I don’t see that happening anytime soon.
In a sense Big G already “knows” far more than any individual human being about many things.
The problem is rather that the interface is very passive and command driven by the user.
Now imagine in the future that speech recognition and language skills make interacting with G as easy as talking to a human expert.
Kurzweil predicts that individual CPUs will reach human information-processing power around 2030. But obviously large arrays of computers, like the Web, will pass individual human capability some time before that.
Now add portability and wireless access. A bluetooth headset.
You will be able to walk around everywhere with a “guardian angel” that can continuously instruct, advise and entertain you in anything you want to know.
simple computer programs, known as cellular automata
For those who don’t know, he’s probably referring to ‘Life’ (Conway’s Game of Life).
Hmmm, Cosma Shalizi, as I recall, has some unflattering things to say about Wolfram.
There’s always the chance that someone will discover a short and “insightful” solution to the 4 color map problem — though probably with the aid of some new theoretical tools.
I really don’t see what the problem is — most math beyond arithmetic is beyond most people, and the most advanced math is beyond all but a tiny few. Some problems require brute computation, which is what computers (and idiot savants) do; neither is especially “intelligent”.
Conway’s Life is two-dimensional and therefore needlessly complex for Wolfram’s purposes – most of the book deals with one-dimensional, two-state automata, usually presented with successive steps stacked under the seed pattern: http://liinwww.ira.uka.de/~rahn/ca.pics/basic_rules_random/110.html
I recommend picking up A New Kind of Science as a casual read. It works pretty well both as a very thorough text on cellular automata and as a hubris-themed comedy.
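For the curious, here is a minimal sketch (in Python, chosen arbitrarily since the thread names no language) of the kind of one-dimensional, two-state automaton described above, printed with successive steps stacked under the seed row. The rule number, grid width, step count, and single-cell seed are illustrative choices of mine, not anything taken from the book or the linked page.

```python
def step(cells, rule=110):
    """Apply one step of an elementary cellular automaton, wrapping at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a value from 0 to 7
        out.append((rule >> neighborhood) & 1)               # look up that bit of the rule
    return out

width, steps = 64, 32
cells = [0] * width
cells[width // 2] = 1          # single live cell as the seed pattern
for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The point Strogatz attributes to Wolfram shows up immediately: for a rule like 110 there is no known shortcut to, say, row 32; the usual way to see it is to compute the rows before it and watch what unfolds.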
A New Kind of Science is immensely thought-provoking, perhaps mostly because it doesn’t draw any conclusions! Just as space robots can explore where man cannot go and UV sensors can see what human eyes can’t, it strikes me that a superbrain could come up with and consider philosophical questions men couldn’t even dream of. And then perhaps it can tell us those few simple answers that happen to apply to our deprived less-conscious lives, while keeping the good stuff to itself! Now I’m depressed.
Robert Speirs: “And then perhaps it can tell us those few simple answers that happen to apply to our deprived less-conscious lives, while keeping the good stuff to itself! Now I’m depressed.”
That is pretty much how I see the future in fifty years. No matter how much I study, no matter how much I augment myself with advanced biotech or cybertech, I won’t be able to understand the interesting questions, much less the answers.
I suspect the average person in modern society already feels this way. No matter what the topic, there is a scientist happy to explain how little the person really knows about the subject.
Consider two curves. First, a normal curve (with fat tails) that represents the distribution of IQ/education over a populace. Second, another curve that represents the accumulated knowledge of mankind weighted by some measure of importance. In the past the curves greatly overlapped; that is, most people understood most important knowledge. Some might be a little better at weaving or hunting, but everyone knew the basics. Over time the “knowledge” curve shifts to the right, with more important knowledge being understood by fewer people. In recent history the shift is occurring much more rapidly. So rapidly that even the brightest will be left behind.
So far as the raw-processing-power-type problems go, we’re probably already there, no? There was an article in a recent New Yorker about computers and chess. Apparently the tiptop computers can now beat even the greatest grandmasters pretty handily. Oddly, the effect — now that we’ve gotten over feeling dehumanized — has been to revive chess a bit. People are studying the computer-chess games and learning how to be better human players from them.
Michael,
I don’t think humans are trying to become better players by studying how computers play; rather, chess is basically pattern matching on a grand scale, and individual humans have a tendency to follow certain patterns of play of their own. When Grand Master X takes on Grand Master Y, he studies every game Y has played in the past that he can get his hands on. X then deduces patterns of play from this, in that Y usually will play the “Castillian Defense” to counter the “Four Knights” opening, or some such. What Grand Masters have complained about in the past is that they have been asked to compete against computers while having had no chance to study their game playing style ahead of the match. Now increasingly they are getting this chance, and so the odds of them grokking how the computer AI will play are better.
“Many think we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.”
Alfred North Whitehead
“People are studying the computer-chess games and learning how to be better human players from them.”
I don’t understand why one would compete with a machine. That’s what autism is for. Match the machine per task.
Chess is like sewing. You don’t compete with a sewing machine; you use it to create something that a single-task machine wouldn’t think of. Maybe in the future, someone will come up with a use for chess-playing machines that would be fun for everyone.
As to googling answers to fake wisdom, I believe that’s already happening here.
“Kurzweil predicts that individual CPUs will reach human information-processing power around 2030. But obviously large arrays of computers, like the Web, will pass individual human capability some time before that.”
Man, I thought this blog was anti-religious. All reason disappears when the Great Prophet Kurzweil is being quoted.
Liv: “As to googling answers to fake wisdom, I believe that’s already happening here.”
This reminds me of Searle’s Chinese Room thought experiment, http://www.iep.utm.edu/c/chineser.htm. If we are manipulating belief labels without understanding are we truly intelligent?
At some level I believe everyone engages in this. A person evokes the Theory of Relativity with little understanding of what the theory means. A different person does the same but has some familiarity with the concepts and the equations. A third person is actively engaged in gravitational research. All three use the label but their understanding of what the label means is vastly different.
So if “googling answers” is fake wisdom, what is real wisdom?
If you read a novel and remember the author’s name and the characters’ names without sinking yourself into its story, without feeling how the characters felt, you’re just a show-off intellectual (or wannabe). No different than someone who googled the novel’s gist.
eoin,
My personal rough estimates would not be too far from Kurzweil’s. His data is just much more readily available on the web.
Note however: because the growth in computer processing power (usually thought of as a form of Moore’s law) is such rapid exponential growth, even a very large error in the estimate would be taken care of with just a few years’ leeway in the date.
e.g. Even if he were out by an order of magnitude in his power estimates, that means maybe a 4 or 5 year error in the date. So the general picture he is painting about computer power would still be valid IMHO.
After all this is really a very simple calculation. Just multiply the number of neurons in the brain by the typical firing rate and you get an estimate of the maximum information-processing power for the whole brain. Then fit that to the long-established power law trend for CPU processing power to project when we will pass it.
FWIW I am by no means a blind disciple of Ray Kurzweil. I don’t agree with his ideas about “the Singularity” for instance.
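To make the back-of-envelope nature of that estimate concrete, here is a minimal sketch of the neuron-count-times-firing-rate calculation described a couple of paragraphs up. Every figure in it (neuron count, firing rate, 2006-era CPU throughput, doubling time) is an illustrative assumption of mine, not a number from the comment or from Kurzweil.

```python
import math

# Illustrative assumptions only; none of these figures come from the post.
neurons = 1e11             # assumed number of neurons in a human brain
firing_rate_hz = 200       # assumed typical maximum firing rate per neuron
brain_ops_per_sec = neurons * firing_rate_hz       # ~2e13 "operations"/s

cpu_ops_2006 = 1e10        # assumed throughput of a 2006-era desktop CPU (ops/s)
doubling_time_years = 1.5  # assumed Moore's-law doubling time

doublings = math.log2(brain_ops_per_sec / cpu_ops_2006)
crossover_year = 2006 + doublings * doubling_time_years
print(f"Brain estimate: {brain_ops_per_sec:.1e} ops/s")
print(f"Projected crossover: around {crossover_year:.0f}")

# The error-tolerance point made above: a 10x mistake in the brain estimate
# only adds log2(10) doublings, i.e. about five years with this doubling time.
shift_years = math.log2(10) * doubling_time_years
print(f"A 10x error in the brain estimate moves the date by ~{shift_years:.1f} years")
```

With these toy numbers the crossover lands in the early 2020s rather than 2030, which mostly shows how much the answer depends on what you count as an “operation”. The robust part is the last line: an order-of-magnitude error in the brain estimate moves the projected date by only about five years.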
All three use the label but their understanding of what the label means is vastly different.
I agree Fly, we all do this.
It’s an inevitable consequence of specialization.
For instance, I seriously studied QM many years ago. But I still read as much as I can on QM to see if there are any new developments I need to know about.
I definitely don’t know as much as a person working in the field. For instance, much of particle physics is way beyond my competence. I often skim papers rather than work through them in detail. I’m definitely not going to invest the work needed to fully understand all the details in every theory unless it’s something really important to me. Who has the time? In fields which are still in flux it is usually sufficient for me just to have a general picture.
Basically we trust the scientific community to sort out the facts in all those fields where we are not competent to judge on them personally.
Liv,
Hmmm, I can’t say that I’m familiar with people using Google to fake knowledge. I’m more familiar with using Google to refresh my memory of a paper or subject so that I can present more accurate information. Or to quickly explore an unfamiliar topic that seems pertinent to a discussion.
I guess if a person used Google to successfully fake knowledge I wouldn’t mind. (A little like the Turing Test, isn’t it.) I’d rather be given fake knowledge that was correct than mistaken knowledge from an expert (who learned his field thirty years ago and hasn’t kept up).
Since I can’t read the mind of the person posting, I can’t know if the wisdom is real or faked and I can’t know what would motivate the person. What I can do is compare what I read against what I already know and decide whether the new information is wisdom or garbage or unknown.
But that doesn’t really interest me. What does interest me is: what does it mean to “know” something?
Liv,
As to googling answers to fake wisdom, I believe that’s already happening here.
Yes. But a fool who persists in his folly may become wise.
I plead guilty Liv.
I cannot tell you how many times I have caught the faint scent of something interesting on the web and then tracked it to its source with the help of Google.
I regard Google and ArXiv as my two greatest resources on the Web. The field of human knowledge is so vast that no person can know any more than a tiny part of it. The problem is not to miss anything that is important to you. The Blogs help here too.
There is no doubt that cybertechnology already hugely magnifies and extends the power and knowledge of my mind. That is its charm and the main source of its addictiveness. I can leverage the power of my knowledge enormously with the help of the Web.
We are symbionts, my computer and I. We are no longer truly separate things. One day we will be one.
“We are symbionts, my computer and I. We are no longer truly separate things. One day we will be one.”
Yeah. My memory fades so I trust my computer more. The partnership will become more unequal with each passing year. Someday we may be one, but my fraction of the one might be pretty small.
Kidding aside. I have been wondering how much personal data people are storing on the Internet for quick and universal access. Today I’m tied to my DSL connection and my desktop computer. Soon I hope to have a light, portable computer with a high bandwidth wireless connection. I’m curious how my life would change if the Internet were with me wherever I go.
There is so much software on my computer that I have selected, customised and even written, that a large part of me and my life is on my PC. Likewise the PC and the Web have transformed me through the capabilities they have enabled and enhanced.
We are not what we would have been if we had never met. It’s a bit like a marriage I guess.
:-)
Imagine a wearable computer.
Would your life ever be the same?
LOL.
Matrix days are here. Now do I leave the comfort of Google for my typhoon-stricken islands, to know how it is to feel my mom’s hand, or anyone’s hand, when iron sheets are flying around?
To know is more personal, I guess. Thousands of dollars spent on your education is a scam when all you need is internet access. You can look up how to perform surgery, but have you performed one? How to program in C++, but have you debugged a program? I look forward to the days when the teacher would say, “something is wrong with the facts in this website, go figure.” Fact-checkers. But who has the time and enough knowledge to police all sites when everyone grew up with the same standard knowledge? How would you know if what you know is for real? Besides, if you know it and have memorized it, who cares, when everyone knows it and no one really needs to hear your version since it’s the same version anyway.
For example, the Media/Internet says hot women should look like Nicole Kidman. Maureen Dowd said, “What I miss is the unique beauty; why do we have to look all the same?” I was thinking, who says which beauty is unique?
Think about it: Hispanics have thicker lips than Caucasians. Why is it that women are getting botox to have fuller lips? Why are fuller lips attractive? Because it feels better when you kiss them. Or maybe because those lips are attached to human beings who are known to be more passionate.
I’m not sure why fuller lips are considered attractive. Some of the most beautiful women I have admired have had thin lips.
But could she come between me and my computer? Especially when I’m having “fun” debugging it?
(Yes)
Dan, Fly,
I am very attached to my computer also, and would be lost without her. But let me go a step further and reveal that I call my computer “Lola”.
Once at a party an ex said to me, “The problem with you is that you spent too much time with Lola when we were together” – to surprised stares from onlookers. But in a very real sense she felt I was cheating on her by indulging in too much computer time.
But of course it’s always fun when a work colleague asks me if I want to hang out after work, and I say, “No, I have to go home and spend some quality time with Lola!”
I wonder, if a cube-like machine can get an erection and satisfaction from men, do you think there would be a Miss Universe Cube someday?
“But could she come between me and my computer? Especially when I’m having ‘fun’ debugging it?”
When I was obsessively building a long distance telephone network simulator on my Mac II, I worked long hours, seven days a week. My significant other (early twenties, sexy bod, pretty engineer) put on a nightie and climbed into my lap. No way, my mind was several levels down in the code and wasn’t about to surface for food, sleep, or sex.
Liv: “I wonder, if a cube-like machine can get an erection and satisfaction from men, do you think there would be a Miss Universe Cube someday?”
Considering how many boys and young men I’ve seen addicted to video games, I suspect there is already a Miss Universe Cube.
I have wondered if the male/female imbalance in China and India will turn out to be not a big deal because of the many alternative forms of gratification available to males in modern society.
Perhaps in modern society people are less social because there are so many other outlets. Blogs, pets, porn, soap operas, movies, TV, etc. Fantasy life might be more rewarding than real life.
Lips thin out as one ages, so the reasoning is that lip thickness is a reliable cue for fertility, like tight skin. Also, I have to question the whole “the media / internet says hot women should look like Nicole Kidman” argument. For one thing, Nicole Kidman is not what guys are thinking of when they’re… y’know. They’re thinking about females who look like strippers, cocktail waitresses, porn stars, rap video models, and so on. Almost no straight guys read women’s fashion mags, and those who do are interested in the design of the clothes, not the walking coathanger models.
Second, remember this whole discussion was about a non-human machine investigating & telling us the results of some phenomenon that we can’t even perceive, let alone grapple with. However, human hotness and/or beauty are things that we do just fine at ourselves. No need to ask Google; just ask the guy in line outside a strip club. Now, just as a spectrometer can give us even more fine-grained info on the light we *can* see, technology may also be able to abstract things from the set of hot people to tell us about some property we didn’t know about before. But we wouldn’t have to *rely* on it for basic info.
Most importantly, at the end of the day, hotness / beauty really *is* in the eye of the beholder. There’s no objective property “beauty” that we detect, as our intuitive physics detects matter and motion. So, there’s no way the beauty-ometer could tell us: hey, everything you thought you knew was wrong. Whatever we hold to be beautiful is beautiful. Thus, no way that a media campaign (or whatever) could convince us guys that Nicole Kidman is the ideal of female hotness if we didn’t already think that to begin with (unlikely), but also no way for Maureen Dowd to take over and program us to believe she’s anywhere near the top of the hotness pyramid. It would be like a taste-ometer telling us that — no, in fact, dirt tastes yummy and cheesecake is yucky, or that some fleeting fashion among upscale restaurants is ipso facto tastier than cheesecake.
“do you think there would be Miss Universe Cube someday?”
OK Liv, you provoked me into it:
NSFW
Last Year’s Contest. 4 pages of pix
Dan,
I’d give my vote to #7, who looks a little like, or perhaps was modelled on, Monica Bellucci – who IMO is the most beautiful woman in the world.
BTW, years ago, when I had Win95, I downloaded some sound clips that replaced the normal Windows operation sounds with a breathy, hyper-female voice. So for instance, if you launched an application, you would hear, “Ohhhh Yesss” ;)
Probably for me #59 or #24
Some of last year’s (3 pages) were good too.
This is slightly offtopic but I thought those who read this far down the thread would appreciate the following link:
DOUGLAS HOFSTADTER: Sounds Like Bach
http://www.unc.edu/~mumukshu/gandhi/gandhi/hofstadter.htm
Hofstadter was one of my favorite authors and has influenced much of my thinking. I find his present views on AI very interesting.
PS I found this link by reading the comments to this post by Dean Esmay (courtesy of Instapundit):
http://www.deanesmay.com/posts/1136379832.shtml
Fly, thank you so much for the Hofstadter link! He is one of my heroes too, and this text shows why. Who else has spoken so deeply and cogently on the interface between the real and the artificial? Those “thin partitions” as Pope called them.
The content of his text is important to mull on too. I am tempted to get one of those EMI CDs and see what they “say” to me.
Fly,
The article by Hofstadter is good, but seems to be somewhat dated; do you know when he wrote it? I have read his “Gödel, Escher, Bach”, and it’s honest on his part to now express a slightly different opinion on the future of AI.
BTW, like with the Chess playing machines, and now Music, it seems to me – as I’ve often said before – that all thinking is just pattern matching. And as others have commented on here in the past, once we consistently and accurately identify a usual pattern and its outcome, we encode that snippet of “thought” and reuse it without modification for the most part. So that most thoughts and decisions we make are actually scripted and not original on a day-to-day basis. This allows us to free up processing to resolve the new daily patterns we encounter.
The difference between the low IQ (L) individual and the high IQ (H), IMO, is that L works almost exclusively off scripted responses, which almost never change, while H both reevaluates and reprocesses familiar routines and discerns new patterns which need processing more readily. In essence H is always more “curious” about the world, and more questioning of things s/he “knows” already.
This of course is probably due in no small part to neoteny, particularly as related to brain function. In essence H retains a more childlike brain, in that it is more malleable and even in older age, still striving to resolve new patterns, reflected in new ideas, thoughts and perceptions.
“The article by Hofstadter is good, but seems to be somewhat dated; do you know when he wrote it?”
No. I noticed the reference to 1995 so I’m guessing within a few years after that. I’ve read most of Hofstadter’s early books but none in the last decade. I believe that Hofstadter strongly influenced people such as Roger Penrose, who also pushed the idea that digital algorithms couldn’t duplicate human thought.
Re: neoteny, Rushton notes that his blacks – whites – asians rule applies not only to mean IQ in adulthood but also in length of development until full potential is reached: blacks fully develop the earliest, followed by whites, w/ asians developing latest. Also true of puberty. So here mean IQ & markedness of neoteny overlap. Don’t know how long it takes Ashkenazis to reach their +/- final IQ state, though.
But as for the mostly scripted nature of our quotidian thinking, that’s not true for language & linguistic thought, whether reflecting / thinking aloud to oneself or communicating w/ others. The combinatory rules and stored lexical items are +/- fixed (modulo the acquisition of the occasional new word), but the actual sentences generated, and the thoughts these sentences express, are in general novel. Just ask yourself how many times you’ve said any of the sentences you wrote in your post. I think what you mean is more like the topics that one reflects on or communicates about — lower IQ people daily generate novel sentences & thoughts but restricted to more ordinary topics, while higher IQ people also generate novel sentences & thoughts but on topics few discuss or that haven’t been explored yet.
“The difference between the low IQ (L) individual and the high IQ (H), IMO, is that L works almost exclusively off scripted responses, which almost never change, while H both reevaluates and reprocesses familiar routines and discerns new patterns which need processing more readily. In essence H is always more ‘curious’ about the world, and more questioning of things s/he ‘knows’ already.”
(I probably shouldn’t comment since there are people on this blog who are far better informed than I on this topic. But what the hell; just take this with a large grain of salt.)
I feel that differences in “g” are largely differences in degree and not in kind. Based on studies of retinal neurons, synaptic networks are highly adaptable, changing organization within seconds in response to different stimuli. I’d extrapolate from that (and a bunch of other stuff) and conclude that low IQ people also continually adjust their learned patterns and responses. (Unfortunately they tend to re-learn the wrong stuff again and again.)
(I believe there are other components to IQ but I’ll just focus on verbal IQ.) Vocabulary strongly correlates with “g”. I suspect that is because vocabulary is a measure of a person’s ability to remember and differentiate between patterns. A smart Eskimo can remember and understand the difference between more kinds of ice and snow than a dumb Eskimo. So his “snow” vocabulary is greater. “g” measures how many mental balls you can successfully juggle at the same time. (I don’t just mean size of working memory. The number and quality of distinct patterns you can shift in and out of working memory is also important.) More mental balls mean more complex response patterns that may (though at times may not) better match reality.
That leaves open the question of what biological factors determine pattern-matching ability. Total brain size, size of specific brain regions, brain fiber connections, myelination, brain blood flow, brain transmitters and receptors, the cellular molecular biochemistry that supports neuron function, …
Re: neoteny, Rushton notes that his blacks – whites – asians rule applies not only to mean IQ in adulthood but also in length of development until full potential is reached: blacks fully develop the earliest, followed by whites, w/ asians developing latest.
don’t rely on rushton’s rule as a law from which you can derive things. there are plenty of phenotypes where the order is different. specifically, the r vs. K dichotomy is probably weaker than you would think, or at least, most of the variation is not due to genetics. i am not disputing the traits rushton has collected, i’m saying that the contention that the selection of traits matters is probably valid, i could find 100 traits where it was africans – asians – whites if given a day i suspect. part of it is that rushton relies heavily on out-of-africa events 50,000 years BP so that phylogeny can predict traits very accurately (i.e., africans vs. eurasians, then europeans vs. asians).
I think some of the phenomenon Wolfram is referring to has already affected how science is carried out. I have always been very interested in mentally predicting the behavior of systems from the behavior of their component parts (such as how a mechanism will work given how the gears and levers fit together). It seems that (to my disappointment) in some areas of science, particularly biology, many scientists don’t even bother trying to figure out how systems will behave using their own brainpower, but instead rely on computer simulations or search algorithms. Even when design problems (such as inventing new pharmaceuticals) are carried out using experimental methods that have nothing to do with computer science, phrases such as “search space” are commonly used. It is as if systematic computation has replaced insight as the driving force for innovation.
Arosko: “It is as if systematic computation has replaced insight as the driving force for innovation.”
As the cost of computation drops, the mixture of insight/computation should continue to shift toward computation. If the methods of computation weren’t also improving then I’d expect a new balance to eventually be reached where more computation didn’t help. Then knowledge would have to be reformulated so that human insight could be used more effectively.
However, the methods of computation are also rapidly improving. More sophisticated algorithms for pattern matching are being developed. Machine learning algorithms are improving. Computational methods are being developed so that software can restructure the data and the solution approach. So I expect computational methods to become even more successful compared to human insight.
Eventually I believe computers will have more “insight” than humans. I suspect that is what Hofstadter is saying in the “Sounds Like Bach” article. The “special” quality of thought that makes us human will turn out to be sophisticated pattern matching that can be reduced to computation.
My guess is that in the coming decades there will be real (not philosophical) answers that can be implemented in software concerning what it means to “know”, to “believe”, to have “insight”, to “create”. The human brain will be viewed as just another computational device, and one that is poorly adapted to solving many important problems.
My hope is that the knowledge gained will allow me to hitchhike along and expand my consciousness, IQ, and insight. But it is just a hope.
I agree. I believe that essentially all information-containing phenomena in the universe (including human thought, creativity, emotions, etc.) are “computation” in the broadest sense of the word, as in that a universal computer, given the proper set of instructions, could faithfully reproduce them.
The important question, however, is for which problems it is far more efficient for us humans to program a computer to solve (using our intelligence), or to carry out a random search, and which problems we would be better off solving directly using our own brains. I think that most people would agree that keeping track of billions of 15-digit numbers is best accomplished with computers, and understanding a novel is best accomplished by human brains. When it comes to things like determining how best to design an industrial assembly line, what molecule will inhibit a certain enzyme, or even how the behavior of a cell signaling network will change in response to a stimulus, the answer is less clear. Some scientists favor computer-based searching and simulation, some favor “rational” approaches based on human intuition and insight, and some favor methods such as directed evolution that use neither computer nor human “creativity” but implement what is essentially a random computational search in a physical rather than virtual environment.