In the later chapters of Consilience, Edward O. Wilson starts talking about the innate tendencies we have toward certain ways of perceiving the world and organizing socially, much of which he analyzes via sociobiology and evolutionary psychology. I'm trying to piece together what this would look like at the level of the neuron, or of neural organization, and how it might work in conjunction with our innate instincts as biological beings to shape how we operate. I'll skip over the fact that the genetic evidence backing this up is very sparse; with genetics being in its infancy, we can hold out for the evidence. (By the way, "Genes, Mind, and Culture" by Lumsden and Wilson covers this in detail, and I intend to write something about it soon.) The main example I came up with on this count is the innate aversion to incest - an extreme one, I know, but it's well researched and serves as a good template from which to build this idea. The deal is, we're biologically programmed to never develop sexual interest in anyone we spend a great deal of time with before we're 30 months old - a phenomenon called the Westermarck effect. I'm thinking that there's some mechanism that categorizes and isolates all interpersonal connections made prior to that age, and somehow blocks them off from interaction with the later processes governing sexual interest. Imagine that as a result, those connections are coded such that if they're activated at the same time as the networks having to do with sexuality, a chain of events is triggered that produces a strong sense of disgust. This is just one example of the kind of networks and network interactions that are built into our system; these are the things we're hard-wired to do - or rather, the patterns our behavior is hard-wired to follow - even though the actual events, people, concepts, etc. aren't there yet. Basically, we're built with a context, and our life experiences create its content.
And that's where brain plasticity comes into play. So epigenetics dictates the predispositions we have, and all we do is fill and/or modify them to fit our environment. (By the way, I'm aware that this is a very generalized and bare-bones breakdown of how things work, but for the purposes of this discussion, I'm putting that aside.)
As we know, it's nearly impossible at this point to actually understand the interactions that go on within the brain that result in consciousness and intelligence. Not on a neural level, anyway - we're just too far from that technologically. At least, that's the general consensus. But I'm thinking that the key to understanding how the brain works is to understand the patterns that we start with, because the fact is, there's an infinite number of ways in which a person can turn out given the body they're born with.
This brings me to neural nets. The idea behind neural nets is that they simulate a set of nodes, each of which relays information with a certain weight and certain instructions to the next set of nodes. They're built with a particular set of instructions - in this case, having to do with the analysis and identification of input - but they're also built to be flexible and to learn. They have an algorithm such that each time the input is identified correctly, the circuits and patterns that led to that decision are strengthened; if the identification is incorrect, those pathways are weakened. (By strengthened, I mean they're given more weight in relation to the rest of the circuit; vice versa for weakened.) Eventually these networks "learn" to identify the input correctly. We had a really good example of this in class, where a neural net was built to figure out the past tense of various verbs from the infinitive form. (I don't know if that's the correct grammatical term for it, but whatever.) At first, given a small number of trials (and thus a small set of information from which to pick up on patterns and rules), the neural net was able to learn a few basic infinitive-past tense pairings. However, after more trials, the net started making mistakes by over-generalizing the rules. For example, when prompted with "go," the machine would respond with "goed"; "went" with "wented"; and "is" with "ised." After many more trials, and learning from many more errors, the machine eventually got them all right. The uncanny thing is that the performance of this neural net closely mimics the patterns of language acquisition in children.
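To make the strengthen/weaken cycle concrete, here's a minimal sketch in Python: a single node with two weighted inputs, trained by error-driven updates. This isn't the class demo - the data, labels, and learning rate are all made up for illustration - just the same principle in miniature.

```python
# Toy sketch of error-driven learning: correct guesses leave the weights
# alone; wrong guesses nudge the weights of the contributing inputs -- the
# strengthening/weakening described above, in a single node.

weights = [0.0, 0.0]
bias = 0.0

# Hypothetical training data: each item is ((feature1, feature2), label).
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

for _ in range(20):                      # repeated "trials"
    for x, target in data:
        error = target - predict(x)      # 0 when correct, +/-1 when wrong
        # A wrong answer shifts weight toward or away from this pathway.
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in data])     # -> [0, 1, 0, 1] after training
```

After a few passes the node classifies all four examples correctly, and once the error is zero everywhere the weights stop changing - the "circuit" has settled.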
I'm inclined to believe that this is a pretty good model of some aspects of how our nervous system adapts and learns as it receives more stimulation and input over time. Combined with what I said earlier about brain plasticity and epigenetics, you could say that the nerve nets have certain innate patterns programmed into them, and that plasticity is their ability to adapt to new information while staying true to patterns that turn out to allow a large degree of freedom for variation.
The catch with neural networks, however, is that once the network has been fully trained, it's practically impossible to go back and analyze the contribution of any one node in relation to the whole of the network. Yes, you could track the activity given an input and a result, but it would require using every possible type of input to get a complete picture of how the nodes interact. With a small network this is feasible, but not with a large-scale model like a human brain. There is no way to map out or predict the behavior of the whole network based on the rules of the nodes themselves or how they're connected (I'm not sure exactly why this is - I'm going to get more info on this whole idea from my professor and I'll touch on it later), and therefore this approach wouldn't actually leave us with a cohesive idea of how the higher brain functions take place on a nerve-by-nerve level. But it's not impossible to understand how it's wired from the beginning. Okay, it is at the moment, because we don't know enough about the connection between genes and behavior, but at some point in the hopefully not-too-distant future this could be a viable approach.
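Here's a toy illustration of the "every possible input" problem, with entirely made-up weights: even for this tiny fixed network, the only way to fully characterize what it computes is to run all 2^n input combinations, which is exactly what stops scaling.

```python
import itertools

# A tiny hand-wired network: three binary inputs, two threshold "hidden"
# nodes, one output node. The weights are arbitrary choices for the demo.
w_hidden = [[0.6, -0.4, 0.9], [0.3, 0.8, -0.7]]
w_out = [1.0, -1.2]

def step(s):
    return 1 if s > 0 else 0

def network(x):
    hidden = [step(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    return step(sum(w * h for w, h in zip(w_out, hidden)))

# Brute-force audit: enumerate every possible input to build the full
# input/output table. For n binary inputs that's 2**n cases.
table = {x: network(x) for x in itertools.product([0, 1], repeat=3)}
for x, y in table.items():
    print(x, "->", y)
```

With three inputs the audit is eight cases; with the brain's scale of inputs, the exponential blowup makes this kind of exhaustive tracing hopeless.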
I'll put a quick interjection in here about genetic algorithms, mostly because they sound really cool, and also because there has to be some element in the mind that combines existing information to create new concepts and reactions. If we functioned purely like neural networks, we'd be fantastic at learning and processing stimuli, but we'd be unable to produce creative thought. We'd function completely as a result of cause and effect, and our reactions would never change. Genetic algorithms, however, do make it possible to create novel behavior. This occurs by synthesizing hybrid concepts from existing ones, taking the handful of "ideas" produced by that process, and then combining those in turn. Think of it as idea mating, complete with isolation mechanisms so that incongruent ideas don't combine, and with a full complement of heritable information being passed down each "generation." This turns out to be one of the few ways for computers to do open-ended problem-solving: they have a set of parameters to work with, a range of values for those parameters, an objective, and a set of rules for possible ways to achieve the objective. They produce random combinations of the parameters in various shapes and forms, test them out, randomly combine the traits of the best candidates to form new "concepts," and eventually settle on a solution. The main constraint of the approach is that it might overlook an even more effective solution by virtue of random chance, but that's how natural evolution works as well. The disconcerting part about applying this to a model of how we think is that it implies that creativity is completely random, which I'm not entirely convinced of. But then again, much of the world's series of events happens as a result of very particular coincidences, so we might not have the right to dispute that point.
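The loop described above can be sketched in a few lines on a deliberately trivial objective (maximize the number of 1s in a bit string). All the names and parameter values here are my own choices for illustration: keep the best candidates, cross them over ("idea mating"), and add a dash of random mutation for novelty.

```python
import random

random.seed(1)                           # fixed seed so the run is repeatable
GENES, POP, GENERATIONS, MUTATION = 20, 30, 60, 0.02

def fitness(ind):
    return sum(ind)                      # objective: count the 1s

def crossover(a, b):
    cut = random.randrange(1, GENES)     # combine traits of two "parents"
    return a[:cut] + b[cut:]

def mutate(ind):
    # Flip each bit with small probability -- the source of novelty.
    return [g ^ 1 if random.random() < MUTATION else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # the best candidates survive
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), "/", GENES)
```

Because the best half is carried over unchanged each generation, the top score never regresses, and after enough generations the population converges on (or very near) the optimum - without anyone ever specifying the solution directly.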
Back to the main theme. The best way I can think of to actually model the brain's functions is to take this neural net idea, include the capabilities of genetic computation, and make it able to handle every conceivable mental process. Bear in mind that it needs the capability to perform every conceivable process, not the inborn ability to do so. A neural net can't perform accurately at first; it requires learning, and our model of the brain would be the same. (And we'll skim over the part about how it would be immensely difficult to figure out every conceivable mental process, let alone model it computationally.) Given a model with these characteristics, would we then have a system that can accurately model human thought processes? Well, it could, but it would need prolonged interaction with an environment to even venture out of its program infancy. Furthermore, it would need stimulation and guidance providing ample information from which to figure out the rules by which those processes should be governed. Just imagine what that would require, given the amount of experience an infant needs to grow into a competent individual. We'd pretty much have to provide the system with a life and an interactive environment that would attend to its learning process. Beyond that, we'd have to find a way for the machine to interact with the environment, and that would involve apparatuses for vision and hearing at least, plus tactile and motor programs if it's to have motor capabilities. And we'd have to wire all those senses up to the main neural network so the stimuli could be processed... and so on.
To fully simulate a human mind, it seems one needs a human body, since the mind doesn't function independently of the physical substrate that gives rise to it. Our bodies and our physiology came before our language and intellectual abilities, so who are we to think we could isolate the mind and simulate that alone? Furthermore, I am very skeptical of our ability to create a mind that is pre-programmed with all the experience and information it would need to function and problem-solve normally without being allowed to learn and develop in a given environment. The only way I can see this working would be for it to come fully loaded with most of its semantic, episodic, and motor memory, and its learning processes would have to be guided so it could appropriately interact with the world using the information given. And I'm not sure we could provide a computer program to facilitate this learning process at a faster rate. (I'm not sure the time and effort required would outweigh the benefits of the project in the first place if we couldn't speed through this process, or if it had to be performed by a person.) So in essence, we'd have to build a full-sized human with the mind of a baby, and teach it, well, everything. Give it a life, friends, let it make mistakes, teach it, and so on. (Wow, welcome to science fiction.) This is where my thought process on the subject gives way. For all the effort and investment, in terms of resources and the like, you might as well just have a baby. But then again, since when has science not done something flashy and exciting just because it was unnecessary? I'm sure somebody exists who would put down the financial investment to fund such a project, despite the fact that the same result - a person! - can be produced with much less effort, because the hardware has already been designed by Mother Nature.
Hell, if I were rich enough, I'd put the money down, despite my stance that it's utterly frivolous at that point. It's just too cool of a prospect not to.