Author: NHA
01.05.2009 22:05
Tags: NHAR2, brain, mind, basics, mind mapping, update

 

A Brief History on Studying the Brain & Mind

 

(This article is complementary material for Tutorial 1)

 

New discoveries about the brain, the mind and the nature of intelligence are being made daily. If you have never studied brains & minds before, or if you are returning to the field after anything longer than a year, you are in for some surprises. This article will bring you up to date, bust any myths you may have come across, and may save you time by showing which long-held beliefs about the brain and mind have turned out to be false.

...Our story begins around 4000BCE:

The earliest known written account relating to the brain and mind is about six thousand years old. It was drafted by an anonymous Sumerian. S/he wrote down a description of the euphoric mind-altering sensations caused by ingesting the common poppy plant, calling it "hul gil" or "plant of joy."

...Fast forward 6000 years to 1999:

"the potential of the psychedelic drugs to provide access to the interior universe, is, I believe, their most valuable property." (Alex Shulgin)

 

The Pursuit of Intelligence: Diggers and Dreamers

If you are interested in the brain and intelligence as a lay student, a long-term neurohacker, or someone with a DIY approach to learning about the mind, you probably consider yourself 'outside the mainstream' of science and research, or as having nothing to do with it at all. Perhaps you think of 'proper' neuroscientists as being firmly "inside the mainstream", with white coats and labs and funding and tech etc., and in most cases today this may be so.

But it has by no means always been so. Historically, the rise of religio-politics held back research in medicine and brain science for a long time, by making interference with "god's creation" (in the form of autopsies, dissection, anaesthetics, transfusions etc) illegal and often punishable by death (hence being caught doing it was not a heritable trait). This left neuroscience itself, for most of our recorded history, very much outside 'the mainstream', populated by "those who didn't get caught" via sheer natural selection. Back then, neurohacking and even anatomy were definitely not for wimps.

Those of us who learn about the nature of brains and minds today see as far as we do largely by standing on the shoulders of generations of lone rebel investigators or small groups, each contributing to the knowledge of the whole: observant butchers, bold apothecaries, shamans, stoned alchemists, grave robbers, closet anatomists, and technical nerds with good drafting skills, together with their only available specimens (animals, accidents, plague victims, unsuccessful highwaymen etc).

I feel it rather "sticks it to the establishment", as the saying goes, that the entire neuroscience field today (whether academics like it or not) is the proud legacy of outlaws who fought against the odds to understand the realities of brain & mind and save lives, via illegal practices, experiments and investigations largely carried out covertly at night in sheds.

You know -Neurohackers.

So first of all let’s think appreciatively of all those noble dudes who put their lives on the line and their hands in all the yucky stuff, in pursuit of the same truth and the same goals we seek today –finding out how this stuff works and ways to use that information beneficially.

 

Old Map, New Territory

It is a good thing that this attitude did not remain the zeitgeist. If Ms Nightingale, Mister Fleming, Monsieur Pasteur or Mme Curie had refused to interfere with 'god's creation', most of us would have curled up and died of something horrible a long time ago.

We now have some great techniques for exploring the brain, but our censored past has left us with one unfortunate legacy – an old, inaccurate paradigm (set of beliefs) about the brain, and 'maps' of how it works that were drawn up largely before we knew anything about it, after looking only at the outside and observing a few reflex responses. This was admittedly a giant step forward from "God created it so mind your own damned business", but the map it gave us now has about the same relationship to the territory of modern brain physiology as Copernicus' map has to modern astronomy.

 

Outside In

How was that original map of the brain built? Think about how the brain has developed through millions of years of evolution, from fish and reptiles to rodents, apes, and humans. Now imagine an ancient brain, sitting on a table in front of you, making the same evolutionary journey in speeded-up motion. You'd see it evolving upward and outward, starting from the brain stem, then adding structures like the cerebellum, then middle structures and the central system, before the cortex grew on top. Something similar happens in every developing foetus: the brain stem, middle parts and central areas develop each in their turn, and the cortex develops last. The prefrontal cortex and corpus callosum do not develop until after birth, and are not completed for many years.

Neuroscientists of past ages had to make the same journey in reverse as they tried to understand the brain from the outside in, before we had access to scanners and suchlike. The starting point, the most obvious part of the brain, is the cortex, the creased and folded part that sits on top. The cortex and much of the outside of the brain is symmetrical, appearing as a mirror image: the two cerebral hemispheres, connected from left to right by the neural equivalent of a broad parallel bus called the corpus callosum. Anatomically, each hemisphere of the cortex again divides into four regions or lobes: the frontal, temporal (on the side), parietal (in the middle towards the back), and occipital (at the very back). Under the cortex is what used to be called the limbic region, a central area containing parts that play a major role in emotional, imaginative and memory processing and are very important in NH: the amygdala, thalamus, hypothalamus, septum, and hippocampus. Under this central system lie what is now called the midbrain, the pons and medulla, and the brain stem. At the top of the brain stem across the back, tucked under the occipital lobe, is the cerebellum.
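If it helps to keep that layering straight, here is a rough sketch of the same organisation as a nested data structure (Python is used purely for illustration). The groupings and names are simplified from the description above; this is not an authoritative anatomical scheme.

# A rough sketch, not an anatomical authority: the brain's layered
# organisation as a nested dictionary, ordered roughly from the oldest,
# innermost structures to the outermost cortex. Groupings are simplified.
brain_layers = {
    "brain_stem_and_midbrain": ["brain stem", "medulla", "pons", "cerebellum"],
    "central_system": ["thalamus", "hypothalamus", "amygdala", "septum", "hippocampus"],
    "cortex": {
        "left_hemisphere": ["frontal", "temporal", "parietal", "occipital"],
        "right_hemisphere": ["frontal", "temporal", "parietal", "occipital"],
        "connection": "corpus callosum",
    },
}

# Walking the structure "outside in", as early anatomists had to:
for layer in reversed(list(brain_layers)):
    print(layer, "->", brain_layers[layer])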

 

Lumps on the Head, Lumps in the Head

Early thinkers like Aristotle, Galen, and St Augustine supported the idea that the brain’s ventricles (large, open cavities inside the brain that are filled with cerebro-spinal fluid) were the seat of higher mental functions such as thinking and reasoning. This wild (and mistaken) guess persisted, as medieval 'science' was inclined to do, for many centuries.

During the Renaissance, Leonardo da Vinci (1452-1519) and Andreas Vesalius (1514-1564) lifted the lid of the skull (illegally) to make accurate anatomical sketches of what they found inside. Taking the human body apart and numbering the pieces, they were illustrating parts of the brain and speculating that they were dedicated to different mental functions around 500 years before neuroscientists and brain scanners started doing much the same thing.

In the early 1800s, the "science" of phrenology was introduced by Franz Gall, who theorized that studying the skull surfaces of people who had died would allow him to map their personalities onto the brain. He believed that different areas of the brain were responsible for different traits and that certain bumps or depressions on the skull would correlate with the person's characteristics. Unfortunately, he was completely wrong. But to be fair, mapping the mind and its functions was no easy business back then.

Phrenology began to fall out of favor in the mid-1800s, when other discoveries refuted Dr. Gall's theory. Nevertheless, he was vaguely on the right track in thinking that in some way location affected function. We now know the brain is made up of networks of neurons that receive and process input according to their location, and transmit information to other networks and the body, across synapses through chemical means. Processes do correlate with areas, but not in the way that was thought.

 

The Form is not the Function

With surface examinations and basic anatomy, and not a few wild guesses, our predecessors assembled their 'maps of the brain and mind'. Phrenologists had looked at the lumps on the outside, and the new 'neuroanatomy' looked at the lumps on the inside. Because there are quite clearly different lumps, both schools decided that lumps = modules; areas or 'lobes'. This was a fair enough classification anatomically, since the liver and lungs had already been dissected and were found to be full of lobes (one lung has three, the other two, and the liver four), but we should have taken the hint that lobes in most organs often perform shared functions or different parts of the same function, rather than 'separate lobe = separate function'.

Human dissection and especially messing about with the brain remained illegal in many “civilized” god-fearing places until the late 1800s, so not much of the ‘map’ was filled in by the time they were even officially allowed to look at the territory. The ‘module’ idea was indirectly correct, but early explorers automatically assumed gross structure was determinant of modularity (this now seems a bit like concluding that each of the body’s limbs must contain one of its organs).

So the brain lobes were labelled Occipital, Parietal, Temporal, Limbic and Frontal (and most of these labels are still used today, although 'limbic' is dying out), and the functions of intelligence were slowly divined from evidence as follows: "If we stimulate this bit of brain this guy experiences a memory, so that bit there must control memory".

This seems laughable now. The brain is 3D, but back then living brains could only be examined in action from the outermost layers. In reality, looking at an individual brain lobe and asking "is it this part that controls memory?" is a bit like looking at an individual hand and asking "is it this part that plays the guitar?"

...We now know that networks, not lobes, are related to functions. But this was the only map we had. (Stunningly, the “one thing = one function” mistake was made again years later when the human genome came under scrutiny. People would ask, “Is it this gene that causes—“)

 

The Split-Brain Hypothesis: Do People Really have a Left Brain and a Right Brain?

No, they don't, although this is a very popular myth. The idea that different parts of the brain might be devoted to more clearly defined functions persisted, notably in the work of Paul Broca (1824-1880) and Carl Wernicke (1848-1904) on the localization of different aspects of language. It was long believed that the left hemisphere of the cerebral cortex controlled analytical thinking, the use of language and working with numbers, while the right hemisphere controlled artistic creativity. This is now disproven, although there is some lateralization of function (for example, the right temporal area processes most analogical aspects of language and the left temporal area processes most grammatical aspects of language). There are two network 'hubs' in each hemisphere (and two in the middle).

 

Matters of the Mind

The science of mind progressed even more slowly than that of the brain and physiology. Even as recently as the first half of the 20th century, psychiatry was dominated by the sex-obsessed speculations of Sigmund Freud (now regarded as "pre-scientific"). Freud proposed absurdly elaborate and often quite bizarre sexual explanations for mental illnesses that are now much more plausibly explained by the straightforward science of brain chemistry and correlative functions. In retrospect, Freud's psychoanalysis appears as a pause in the evolution of biological approaches to brain and mind.

Many psychologists were also persuaded to view what happened inside the brain as an irrelevance, due to the influence of behaviourism. Behaviourism was based on experiments like Pavlov's classic finding that dogs can be "conditioned" to salivate when someone rings a bell: if you ring a bell every time you give your dog his dinner, and the dog salivates in anticipation of eating, the dog will eventually learn to salivate even if you ring the bell without serving any food.
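For readers who like things concrete, here is a minimal sketch of that conditioning loop in Python. It illustrates the idea only; the learning rate, the threshold and the trial() helper are invented for this example, not taken from Pavlov's work.

# A minimal sketch of Pavlovian (classical) conditioning, not a model of any
# real dog: an association strength grows a little each time the bell and the
# food are paired, until the bell alone triggers the "salivate" response.
# The learning rate and threshold are arbitrary illustrative values.
association = 0.0     # strength of the bell-food link
LEARNING_RATE = 0.2
THRESHOLD = 0.5       # association needed for the bell alone to work

def trial(bell: bool, food: bool) -> str:
    """Run one trial and return the dog's response."""
    global association
    if bell and food:
        # Pairing strengthens the association (bounded at 1.0).
        association = min(1.0, association + LEARNING_RATE)
    salivates = food or (bell and association >= THRESHOLD)
    return "salivates" if salivates else "no response"

# Pair bell and food a few times, then test the bell alone.
for i in range(4):
    print(f"pairing {i + 1}:", trial(bell=True, food=True))
print("bell alone:", trial(bell=True, food=False))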

For behaviourists, every aspect of animal behaviour—from the way infants become bonded to their mothers to the reasons behind deviant human behaviour, such as football hooliganism—can be explained in terms of "stimulus" and "response". There was, according to the behaviourists, no way of knowing what went on in the "black box" of the brain: you certainly couldn’t lift the skull and see what was happening inside (amusingly, many years later, brain imaging techniques would allow psychologists to do exactly that!)

In the 1960s a new way of thinking about thinking emerged: cognitive psychology. Cognitive psychology relies on computer-inspired models to illuminate the brain: where behaviourists assumed the brain was a "black box", cognitive psychologists filled the dark space with flowchart-style boxes, linked with arrows, and tried to see where it got them. Mental functions, such as memory, creativity and intellect, kinesthetics and locomotion (the cognitive control of our bodily movements), were understood in terms of sequential processes that happen with just as much logic as computer programs but also, given the complexity of the brain, somewhat more human fallibility.
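As a concrete illustration of what those "boxes and arrows" look like when written down, here is a small Python sketch of a flowchart-style memory model. The stage names and their behaviour are simplified inventions for this example, not a specific published model.

# An illustrative "box and arrow" model of the kind cognitive psychologists
# drew: each function is a processing stage, each call is the hand-off to
# the next stage. Stages and limits are invented for illustration only.
def sensory_register(stimulus: str) -> str:
    return stimulus.strip().lower()          # raw input, briefly held

def short_term_store(percept: str) -> list[str]:
    return percept.split()[:7]               # limited-capacity buffer (~7 items)

def long_term_store(chunks: list[str], memory: set[str]) -> set[str]:
    return memory | set(chunks)              # rehearsed items are retained

memory: set[str] = set()
for sentence in ["The bell rings", "The dog salivates"]:
    memory = long_term_store(short_term_store(sensory_register(sentence)), memory)
print(sorted(memory))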

No-one has ever gone so far as to suggest the brain actually is a computer; that's way too limited a view. But having established that something like human memory (to cite just one example) has a number of quite distinct components (short-term, working, sensorimotor, spatial, procedural, declarative, eidetic), the temptation, nevertheless, has been to try to find where those components are actually located in the brain—to ground the computer metaphor of cognitive psychology in the real, embodied, living grey matter of networks.

By itself, cognitive psychology could never have achieved this. As the behaviourists realized decades before, there are any number of ways that the "black box" might be wired up to produce a particular response from a given stimulus. The task is a bit like trying to understand the exact coding of an entire operating system by observing input and output! How, then, to bridge the gap between theoretical models of how our minds work and the brains—once memorably described as being "like two fistfuls of porridge"—that actually do the job?

The solution has come from various sciences, a closely linked ensemble of neurological and other relevant disciplines that study different aspects of the brain’s workings in somewhat different ways.

Neuroanatomy, for example, focuses on the structure of the brain; it can be studied by dissecting the brains of animals or people who have died or by looking at brain scans of living people. This has helped us to see what areas of the brain are active when different functions are performed.

Neurochemistry looks at the chemical messengers (neurotransmitters) that carry signals between brain cells, around fairly well-defined "circuits".  This has helped us determine what physiological events and behaviors transmitters correlate with.

Neuropsychology studies patients unlucky enough to have suffered "lesions" (relatively localized brain damage caused by strokes, brain tumours, and head injuries) and tries to find out what kinds of things those people can no longer do. This has helped us understand what networks could be involved in what functions.

Biological psychology studies the links between brain, body and mind, and is one of the more useful fields for NH information, having given us a much clearer understanding of how we learn and the effects of anxiety and lifestyle on body & mind. This is invaluable for personal practice.

Evolutionary psychology studies the links between evolutionary environment, genetic inheritance and behavior, and is a useful field in understanding NH theory.

Genetics, and most especially epigenetics, have given us vital information which, together with Hebbian plasticity, has solved many of the mysteries of intelligence and given us powerful tools in NH. Yet these terms are so recent you may not even have heard of them yet. As I said, you're in for a few surprises.

Importantly, the “nature or nurture” argument is over. The discovery of the brain’s plasticity and the nature of epigenetics have shown us that it is interaction between genes and an appropriate environment that results in the emergence of intelligence.

 

Intelligence is not IQ

This is going to be a tough myth to bust, but the proof is inescapable. During the latter part of the 20th century, neuroscientists made good progress understanding not just the gross structure of the brain but also in separating the functions of intelligence, such as intellect, creativity, memory, imagination and emotion into their interacting components. In the 21st century, we know that several functions are essential for intelligence and IQ is only one of them. Mapping the brain and understanding exactly how the different components and circuits work together has become much easier due to the technology and techniques now available to us.

 

The Latest Part of this History: We Were There, Dudes!

Between 1980 and 2000, six important steps in neuroscience and other areas advanced our understanding of brain dynamics and function by several orders of magnitude:

    • Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) as complementary tools.
    • Fast computers and availability of sophisticated neurocomputing software that accelerated progress in all fields of research.
    • Discoveries about the nature and effects of anxiety.
    • The human genome project and developments in proteomics & epigenetics.
    • Fundamental discoveries in plasticity & memory research.
    • The internet and availability of shared research, with interaction online between persons of all interests and fields sharing their experiences and ideas – and THAT MEANS YOU, so join the forum and take advantage of this, because the more you communicate what you want to know, the more you will find out about it.

These six areas have already [2009] borne much fruit. The rate of the development of technology over the last few decades has affected what science knows about the brain and intelligence massively and fundamentally, and I feel privileged to have lived through a time when after so long with not much more to work on than the methods and ideas and sparse discoveries of past centuries, “a series of fortunate events” finally happened to neuroscience.

The first was computers. WHAM! Admittedly they took up half a room, used tons of reels of tape, necessitated working with People Who Were Not Scientists (eek!) and there were still the perennial fundamentalist old farts and luddites around shouting, “It will all end in tears, I know it!”

Horror movies began to feature computers that wanted to take over the world, and conspiracy theories were rife. But computers were such an obvious boon to research; the new “computer engineers” we encountered around labs and hospitals were familiarly creative and comfortably eccentric, and many of us rubbed our hands and sat up and said, “Oh good!” (much like we had about Crick & Watson cracking DNA without a clue in our youthful nerdy heads where it would all lead and how fast).

We were all just getting the hang of computers’ fantastic applications (in graphic anatomy, CT scanning and biochemical simulation, for a start), and the way in which every new computer was a lot smaller, cool, look you can even have one at home now, when in the early seventies WHAM! Here came MRI.

For the first time ever we suddenly had more tools to play with than we had time to play with them, and again we thought with glee, “Oh Good!” All this was a lot more fun than watching rats run mazes every day for five years and trying to interpret their EEGs.

Everyone in neuroscience was just about catching up with themselves again by the late eighties, although admittedly researchers were already having to juggle time between reading other people's results and analysing their own as the computer-literate community started to exchange much larger amounts of information. A decade earlier there had been maybe 50 neuro papers a year in science journals; suddenly there were 50,000 and growing.

Then WHAM! fMRI,

WHAM! Completion of the human genome project;

WHAM! Multimodal Neuroimaging,

WHAM! Much faster computers and

WHAM! WHAM! WHAM! New relevant research data coming online at around ten new items a day. And suddenly, all of the papers are information ‘biscuit bombs’ (they are hyperlinked to many other, equally relevant papers on similar projects which consequently may land in your files shortly afterwards).

...And at this point many of us started sitting back and saying, “Oh God!” This is a bit like the horn of plenty that won’t stop plentying. The information is now so fast and complex it actually hits a bottleneck going through humans, who can’t read or type fast enough to keep up with the flow and add to it as well.

Confused as a baby on a topless beach, neuroscientists and science writers grab the nearest stuff that looks most relevant and bookmark the rest, hoping that maybe later they may get time to take a look at all of it.

What we try to do for you here is pull out of this forest of information all the latest science related to NH, saving you research time and assisting your progress. From the latest discoveries, techniques and experiments we construct our tutorials (they are revised and updated every year) so that you can put this information to immediate practical use.

This is especially important with regard to disorders and problems. If anything that can improve our mental health exists, it is useful to know about it as soon as possible. We can only make informed decisions about our lives when we are informed.

 

Where are you Getting your Information From?

We do not live in a perfect world where all media reporting is accurate and has your wellbeing as its first directive. Errors and misleading information about the brain and mind occur a lot in the media; for example in popular newspapers, websites or magazines we see stuff like:

“The Amygdala – the part of the brain that controls fear”, or:

“The gene for schizophrenia – a split personality”,

– and in real terms, such statements are simply not true. However, most people remember them (witness how many people still believe "scientists say we only use ten percent of our brains", which is utter nonsense unless you are in a coma). Compounding students' confusion are folk myths like these, and popular books promoting scientific hypotheses (such as the R-L brain hypothesis) that don't bother to impress upon the reader what a hypothesis IS (an unproven idea), and so are believed as facts rather than current speculations.

This is not necessarily the fault of authors; in a society where being truthful about mistakes is viewed as unprofitable, there is no infrastructure for a researcher to announce to the world that their previous ideas were in fact completely wrong (although the internet is helping to address that. Things are getting better, as we have seen.) But unfortunately when a speculation or hypothesis is proved wrong, the public reader (or the next writer inspired by the source) is rarely informed.

Consequently, in the eyes of the general public, newspapers and popular media, the “old map” is even now more or less maintained as current.

Your search for the truth can thus be limited by the restrictions inherent in mainstream science. The very same problem hits those of us who are researchers. Mainstream society does not support the search for scientific truth; it supports the search for money and products. Wherever the search for truth and the possibility of profit coincide, funding will be given for research; wherever they do not, it will not. The hunt for the cure for a disease is always funded by those who want to sell the cure, not by the sufferers or the scientists seeking it. The desire to understand and improve human intelligence will only find its projects financially supported if there are end products or methods that can be sold. Projects that are funded will be covered (advertised) in the media; those which are not, will not.

To give an example of how this affects what you can learn about personally, let me ask you: when did you first learn about the brain's plasticity and the rule "cells that fire together wire together"? Just now? In the 90s from Joseph LeDoux's books? In the early 2000s from New Scientist magazine or Scientific American?

Guess what? –It was discovered by Hebb in 1949! It is plasticity that enables us to both improve our intelligence and prevent dementia. But very little money could be made from public awareness of that discovery; plasticity isn't something you can sell, it's something you can learn to direct for free, as you'll find out when you practise NH. So in fact it’s probably something that advertising executives, drug companies and tyrannical dictators would rather the public didn’t know. It could be very bad for business.
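For the curious, Hebb's rule is simple enough to write down in a few lines. The Python sketch below is a toy illustration only; the learning rate, the decay term and the firing probabilities are arbitrary choices made for this example, not a biological simulation.

import random

# A toy illustration of Hebb's 1949 rule, "cells that fire together wire
# together": when two units are active at the same time, the connection
# between them is strengthened. A mild decay term keeps the weight bounded.
weight = 0.0
LEARNING_RATE = 0.1
DECAY = 0.01

for step in range(1000):
    pre = random.random() < 0.5                                # does the "pre" cell fire?
    post = pre if random.random() < 0.8 else (random.random() < 0.5)  # "post" mostly follows "pre"
    # Hebbian update: strengthen on co-activation, decay otherwise.
    weight += LEARNING_RATE * (1.0 if (pre and post) else 0.0) - DECAY * weight

print(f"connection strength after correlated firing: {weight:.2f}")

Because the two cells fire together most of the time, the connection strength settles at a clearly positive value; if their firing were unrelated, it would settle much lower. That, in miniature, is what "wiring together" means.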

Here's something else pretty fundamental to intelligence that I bet you didn't know yet: Maternal emotional balance determines the relative sizes of an embryo’s rear versus front brain networks in the womb. More anxiety releases the chemicals that build smaller frontal lobes and larger rear ones (less anxiety = larger frontal lobes, smaller rear ones). Larger rear lobes correlate with violent behavior, paranoia and poor mental health. This has been known about since at least the sixties, but you're unlikely to hear about it anytime soon. Any sane society, having this information, would immediately put large resources into maintaining healthy states of mind in pregnant people, but nobody here is claiming that we live in a sane society.

That doesn't mean we're conspiracy theorists; largely humanity is all in the same boat, because all that is missing is intelligence. But our point is, it is largely pointless reading about intelligence or any brain science in popular “daily” newspapers or magazines unless you are only interested in new products, techniques or drug discovery. Academic neuroscience will remain a worthwhile information source only if the material is less than five years old, but will probably be missing ‘since-discovered’ information if more than two years old. Forget any text books older than 5 years; go for the latest publications you can.

Online open source science sites like MIT’s are on the whole good, but remember, not all the contributors will have kept up to date themselves! Being a professor does not mean someone is a latest-information fanatic; in some cases quite the opposite. We should check sources with the usual caution -there are professors out there in ‘respected institutions’ (such as Mensa) lecturing on how god doesn’t like stem cell research, so always make sure you know not just where information is coming from but what point of view it is coming from. Of course I have to recommend our own tutorials, because that’s what being a writer is all about.

So, take a careful look at where you get your information from, who runs the sites it's on, and who's funding the research it's written about. The truth IS out there...but so are lies.

 


Sources:

Alex Ramonsky (2009), I've Made My Mind Up Now, rough draft, author's copy: IPMA.

Chris Woodford (2004), Brain Modularity, Creative Commons License.

John van Wyhe, The History of Phrenology on the Web (http://pages.britishlibrary.net/phrenology/).

Edward Shorter (1997), A History of Psychiatry. New York: John Wiley.

Davidson, Jackson & Kalin (2000), "Emotion, Plasticity, Context and Regulation: Perspectives From Affective Neuroscience", Psychological Bulletin, Vol 126, No 6, 890-909.

Joseph Chilton Pearce, Magical Child. ISBN 0452267897.

Joseph LeDoux's website (information about LeDoux's pioneering research): http://www.cns.nyu.edu/home/ledoux/

Updated: 02.08.2013 13:48