Some people can hold huge amounts of information in their mind and even manipulate it, trying out different ideas, while other people can only hold small amounts. Why do people have the particular capacity they have? How can we investigate these differences between people? It turns out the key to answering these questions is to get people to remember information in only one of their five senses, for example, vision. By doing this we narrow down the field of things to investigate. We can look at the precise brain anatomy related to just that one sense in different people and figure out which parts of their brain allow for greater information capacity. This is exactly what we did in our Cerebral Cortex paper. We found that people with a physically larger visual cortex – the part at the back of the brain that deals with what we see – could hold more temporary information in their memory. This is interesting because it suggests that the physical parameters of our brains set the limits on what we can do with our minds.
The larger your visual cortex, the more visual information it can hold. But the “visual cortex bucket” has to actively hold on to the information. It takes voluntary effort on your part to continually hold this information and then use it.
It is worth noting that size is not everything. Many other brain factors can and will influence your mental life and indeed your working memory capacity.
These factors include the degree of internal connections between different brain areas, the level of neural transmitters, the hormones in your body and brain, and of course the amount of stress you are under.
In our study, we found that both the thickness and the surface area of the visual cortex independently predicted how much people could hold in visual working memory. So indirectly at least, it seems that your parents or ancestors might have passed the size of their visual cortex down to you.
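The claim that thickness and surface area each predict capacity *independently* is the kind of relationship usually tested with multiple regression, where both predictors enter the same model and each gets its own coefficient. Here is a minimal sketch of that analysis; every number below is simulated for illustration and none of it comes from the study's data.

```python
import numpy as np

# Simulated stand-ins for the study's measures (all values invented):
rng = np.random.default_rng(0)
n = 100
thickness = rng.normal(2.5, 0.2, n)   # cortical thickness, mm (simulated)
surface = rng.normal(25.0, 3.0, n)    # surface area, cm^2 (simulated)

# Build capacity so that BOTH predictors contribute independently,
# plus measurement noise.
capacity = 1.0 * thickness + 0.1 * surface + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column: each slope estimates
# one predictor's contribution while holding the other constant.
X = np.column_stack([np.ones(n), thickness, surface])
coef, *_ = np.linalg.lstsq(X, capacity, rcond=None)
print(coef)  # [intercept, thickness slope, surface slope]
```

If both fitted slopes are reliably nonzero, the two anatomical measures carry separate information about capacity, which is what "independently predicted" means here.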
1. We like to think of romantic feelings as spontaneous and indescribable things that come from the heart. But it’s actually your brain running a complex series of calculations within a matter of seconds that’s responsible for determining attraction.
2. Playing an instrument is the brain’s equivalent of a full-body workout. As you play, your brain simultaneously processes different information in intricate, interrelated, and astonishingly fast sequences.
4. As we grow, we install pain detectors in most areas of our body. Just like all nerve cells, these detectors, called nociceptors, conduct electrical signals, sending information from wherever they’re located back to your brain. But, unlike other nerve cells, nociceptors only fire if something happens that could cause, or is causing, damage.
Researchers at the Beckman Institute at the University of Illinois at Urbana-Champaign have developed a new technique that can noninvasively image the pulse pressure and elasticity of the arteries of the brain, revealing correlations between arterial health and aging.
The brain’s network of arteries, which makes up the cerebrovascular system, is crucial for healthy brain aging and for preventing diseases like Alzheimer’s and other forms of dementia.
The researchers, led by Monica Fabiani and Gabriele Gratton, psychology professors in the Cognitive Neuroscience Group, routinely record optical imaging data by shining near-infrared light into the brain to measure neural activity. Their idea to measure pulse pressure through optical imaging came from observing in previous studies that the arterial pulse produced strong signals in the optical data, which they normally do not use to study brain function. Realizing the value in this overlooked data, they launched a new study that focused on data from 53 participants aged 55-87 years.
“When we image the brain using our optical methods, we usually remove the pulse as an artifact—we take it out in order to get to other signals from the brain,” said Fabiani. “But we are interested in aging and how the brain changes with other bodily systems, like the cardiovascular system. When thinking about this, we realized it would be useful to measure the cerebrovascular system as we worry about cognition and brain physiology.”
The initial results using this new technique find that arterial stiffness is inversely related to cardiorespiratory fitness: the more fit people are, the more elastic their arteries. Because arterial stiffening is a cause of reduced brain blood flow, stiff arteries can lead to a faster rate of cognitive decline and an increased chance of stroke, especially in older adults.
Using this method, the researchers were able to collect additional, region-specific data.
“In particular, noninvasive optical methods can provide estimates of arterial elasticity and brain pulse pressure in different regions of the brain, which can give us clues about how different regions of the brain contribute to our overall health,” said Gratton. “For example, if we found that a particular artery was stiff and causing decreased blood flow to and loss of brain cells in a specific area, we might find that the damage to this area is also associated with an increased likelihood of certain psychological and cognitive issues.”
The researchers are investigating ways to use this technique to measure arterial stiffness across different age groups and at specific cardiovascular or stress levels. High levels of stress, especially over a long period of time, may affect arterial health, according to the researchers.
“This is just the beginning of what we’re able to explore with this technique. We’re looking at other age groups, and in the future we intend to study people with varying levels of long-term stress,” said Fabiani. “When people are stressed for long periods of time, like if they’re caring for a sick parent, stress might generate vasoconstriction and higher blood pressure, with significant consequences for arterial function in the brain. We are interested in knowing whether this may be an important factor leading to arterial stiffness.”
The researchers are also able to gather information about pulse transit time, or how long it takes the blood to flow through the brain’s arteries, and visualize large arteries running along the brain surface.
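To make the idea of pulse transit time concrete: the pulse wave arrives at a downstream recording site slightly later than at an upstream one, and the delay between the two recordings can be estimated by cross-correlation. Below is a minimal sketch on synthetic signals; the Gaussian pulse shape, sampling rate, and 30 ms delay are all assumptions for illustration, not the study's optical data.

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
true_delay = 0.030                   # 30 ms transit time (simulated)

def pulse(center):
    """A single Gaussian-shaped pressure pulse (stand-in for a heartbeat)."""
    return np.exp(-((t - center) ** 2) / (2 * 0.02 ** 2))

proximal = pulse(0.50)               # upstream recording site
distal = pulse(0.50 + true_delay)    # downstream site: same pulse, delayed

# Cross-correlate the two recordings and take the lag where they match best;
# that lag, converted to seconds, is the estimated transit time.
corr = np.correlate(distal, proximal, mode="full")
lags = np.arange(-len(t) + 1, len(t))
estimated_delay = lags[np.argmax(corr)] / fs
print(f"estimated transit time: {estimated_delay * 1000:.0f} ms")
```

Real pulse waveforms are noisier and less symmetric than this, but the cross-correlation idea carries over directly.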
“Our goal is to find more information about what causes arterial stiffness, and how regional arterial stiffness can lead to specific health problems. Our findings continue to bolster the idea that an important key to aging well is having good cerebrovascular health,” said Fabiani.
A recent review by Aaron Schurger in Science Magazine pointed me to Michael Graziano’s 2013 book “Consciousness and the Social Brain”, which I immediately downloaded, read, and abstracted. Very engaging and clear writing (although I am dumbfounded that he makes no reference to Thomas Metzinger’s work and ‘ego tunnel’ model, which has common elements with his own.) In Graziano’s theory awareness is information, the brain’s simplified, schematic model of the complicated, data-handling process of attention. A brain can use the construct of awareness to model its own attentional state or to model someone else’s attentional state. An extract from Schurger’s review:
In Consciousness and the Social Brain, Michael Graziano argues that consciousness is a perceptual construct—the brain attributes it to other people in much the same way that the brain attributes speech to the ventriloquist’s puppet. To clarify, imagine being greeted by a very lifelike android version of your best friend with a prerecorded behavioral program that had you genuinely fooled for a few minutes. From your perspective, for those minutes, the android was endowed with consciousness. Thus there need be no truth or falsity to the statement “My friend standing before me is conscious.” Your brain decides that the android–best friend standing in front of you is conscious, and that is what you perceive to be true.
According to Graziano’s “attention schema” theory, our own consciousness is also a perceptual construct—a unique one that emerges when the brain applies the same perceptual attribution recursively to itself. We attribute consciousness to others as part of our perceptual model of what they are paying attention to (an inference particularly useful for predicting their behavior). This model describes the process of attention as a mysterious something extra in the brains of beings that are selectively processing information that guides their behavior. When the brain applies the model to itself, “I” become endowed with this extra something as well—although, as with the android, it was never there in the first place.
According to the theory, consciousness is to attention what the body schema is to the body: it is the brain’s perceptual description of its own process of attention. The two phenomena are thus locked “in a positive feedback loop,” which explains the tight connection between attention and consciousness. In essence, consciousness is a descriptive story about a real physical phenomenon (attention). The ink in which the story is written (neural activity) is real, and the physical phenomenon that the story is “about” (attention) is real. But, like the talking puppet, the story itself need not be real. We say that we have consciousness, and that it seems irreducible to physical phenomena, because that is how the brain describes the process of attention (in ourselves and in others): as something ineffable.
I’ll also give you a clip from my abstracting of the book:
The heart of the theory is that awareness is a schematized, descriptive model of attention. The model is not perfectly accurate, but it is good enough to be useful. It is a rich information set, as rich as a sensory representation. It can be bound to a representation of an object as though it were another sensory attribute like color or motion….the purpose of a model in the brain is to be useful in interacting with the world, not to be accurate.
The body schema and the attention schema may share more than a formal similarity. They may partially overlap. The body schema is an internal model— an organized set of information that represents the shape, structure, and movement of the body, that distinguishes between objects belonging to the body and objects that are foreign.
In the present theory, the attention schema is similar to the body schema. Rather than representing one’s physical body, it models a different aspect of oneself, also a complex dynamical system, the process of attention— the process by which some signals in the brain become enhanced at the expense of others. It is a predictive model of attention, its dynamics, its essential meaning, its potential impact on behavior, what it can and can’t do, what affects it, and how. It is a simulation. The quirky way that attention shifts from place to place, from item to item, its fluctuating intensity, its spatial and temporal dynamics— all of these aspects are incorporated into the model.
In a world-first, a newly published study has captured in detail the brain electrical activity in children as they emerge from anaesthesia, shedding light on why some are distressed and agitated when they wake up.
Researchers from Swinburne University of Technology together with colleagues from the Murdoch Childrens Research Institute (MCRI) were able to collect electroencephalography (EEG) data on children who exhibited emergence delirium.
Emergence delirium is a major risk associated with anaesthesia in children and occurs when patients wake up from anaesthesia in a delirious and dissociated state.
Swinburne Professor David Liley said PhD student Jessica Martin and staff at MCRI were able to record, with unprecedented fidelity, brain electrical activity from 60 children aged 5-15 years as they emerged from anaesthesia, some of whom went on to exhibit emergence delirium.
“This clinical phenomenon is prevalent in children aged six and under, with an estimated 10-30% exhibiting emergence delirium,” said Professor Liley.
Researchers found that the brain activity recorded just after stopping sevoflurane (a form of gas anaesthesia) in children exhibiting emergence delirium was substantially different from that of children who woke up peacefully.
Associate Professor Andrew Davidson from MCRI said they discovered that children who wake up suddenly from a deeper plane of anaesthetic are more likely to develop the delirium.
“In contrast, the children who develop sleep like patterns on their EEG before they wake up are more likely to wake up peacefully.”
“Intriguingly, emergence delirium looks very much like the more severe form of night terror, which occurs when some pre-school children are disturbed during deep sleep.
“Our study suggests the EEG signatures and the mechanisms may indeed be similar between night terror and emergence delirium.
“Allowing children to wake up in a quiet and undisturbed environment should increase the likelihood that they go into a light sleep-like state after the anaesthetic and then wake up peacefully,” said Associate Professor Davidson.
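A note on what "sleep-like patterns on their EEG" means in practice: sleep-like EEG is dominated by slow delta activity (roughly 1-4 Hz), and one simple way to quantify how sleep-like a recording is, is the fraction of spectral power falling in the delta band. The sketch below illustrates that measure on synthetic signals; the sampling rate and waveforms are invented, and no study data is used.

```python
import numpy as np

fs = 250.0                           # sampling rate, Hz (assumed)
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(1)

def delta_fraction(eeg, fs):
    """Fraction of spectral power (above 1 Hz) in the 1-4 Hz delta band."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    band = (freqs >= 1.0) & (freqs <= 4.0)
    return spectrum[band].sum() / spectrum[freqs >= 1.0].sum()

# Synthetic stand-ins: a "sleep-like" trace dominated by a 2 Hz rhythm,
# and an "aroused" trace dominated by a 20 Hz rhythm, both with noise.
sleep_like = np.sin(2 * np.pi * 2.0 * t) + 0.2 * rng.normal(size=t.size)
aroused = np.sin(2 * np.pi * 20.0 * t) + 0.2 * rng.normal(size=t.size)

print(delta_fraction(sleep_like, fs) > delta_fraction(aroused, fs))  # True
```

Real clinical EEG analysis involves many more channels, artifact rejection, and validated scoring criteria; this only shows the band-power idea.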
The findings will have significant implications both for predicting which children will go on to develop emergence delirium and for helping medical professionals better understand its causes in both children and adults.
Consciousness in AI is a topic debated not only by computer and cognitive scientists, but also by philosophers. Philosophers like John Searle and Hubert Dreyfus have argued against the idea that a computer can gain consciousness. For example, arguments like Searle’s Chinese Room have been proposed against the idea of strong AI. But there are also philosophers, like Daniel Dennett and Douglas Hofstadter, who have argued that computers can gain consciousness.
Although there are debates about how to create a conscious machine, in this article I choose to look at the creation of machine consciousness in another way. Do we have to design the AI’s architecture for consciousness from the beginning to make a conscious AI? Will the AI be able to gain consciousness on its own? Or will consciousness emerge from the AI’s architecture once it gains sufficient complexity, through evolution or through self-modification, without human interference?
Consciousness without Human Design
Although consciousness is an important quality, defining it clearly is a difficult task. But we can roughly define it with two main components: awareness (phenomenal awareness) and agency. Awareness is the ability to perceive the external world and also to feel or sense the contents of one’s own mind. Agency is control over the external world and also control over oneself or one’s mental states; that is, control over both behavioral aspects (moving external organs, hands, feet, etc.) and mental aspects. For an action to count as conscious, we should also be aware of this control: we should know, or feel, that we have the control, or that we are the ones doing it. Actions we are not aware of, like the beating of the heart or breathing, or things we do without thinking (for example, walking or driving while concentrating on something else), aren’t taken as conscious actions. So, putting all of this together, we can define consciousness (or at least this is the definition I’m using for this article) as awareness of and control over external objects, together with awareness of one’s own mental content. Another way of putting it is having a sense of selfhood.
According to the above definition of consciousness, we can see that the concept of self is also linked with consciousness. So, what is the self? The self can be defined as the representation of one’s identity, or the subject of experience. In other words, the self is the part that receives the experiences, the part that has the awareness. The self plays an integral part in human motivation, cognition, affect, and social identity.
The concept of self may not be something that we are born with. According to the psychoanalyst Sigmund Freud, the part of the mind which creates the self develops later in the psychological development of the child. In the beginning a child has only the id: a set of desires which the child cannot control and which only seeks pleasure (the pleasure principle). Later in development, a part of the id is transformed into the ego, and it is this ego that creates the concept of self in the child. The question, then, is: can an AI be developed to a stage where it, too, creates something like the ego of the human mind? If the AI has a structure with the necessary similarities to a human mind, or an artificial brain similar to the human brain and nervous system, then it may be able to undergo a process which creates some sort of ego similar to the human one. In humans this ego is created through the interactions a child has with the external world, so perhaps, in the same way, the influences an AI faces could trigger the creation of an ego in the AI.
According to the theory of Jacques Lacan, the process of creating the self of a child happens in what he called the mirror stage. In this stage the child (at 6-18 months of age) sees an external image of his or her body (through a mirror, or as represented to the child by the mother or primary caregiver) and identifies it as a totality. In other words, the child realizes that he or she is not an extension of the world but a separate entity from the rest of it, and the concept of self develops through this process. So, can an AI go through this kind of stage and develop a self? Whether or not the structure of the AI resembles a human mind, realizing for the first time that it is a separate individual would be a new and revolutionary experience for an AI (provided the AI is sophisticated enough to process that kind of realization properly). Such an experience might produce a change in the AI that gives it an idea of self. But if this stage is to resemble the mirror stage, the AI must also have a way of seeing its own reflection. If the AI has a body (a robot, perhaps) and doesn’t extend beyond that body, this won’t be a problem. But if the AI can be copied onto new hardware, or can extend itself through a network, then defining its boundaries becomes difficult, and seeing itself as something unfragmented with clear boundaries will be tricky. If the AI’s architecture allows a different way of defining boundaries, though, so that it can still see itself as an individual, then this could work.
When we consider other animals, we can see that an animal must have a certain complexity to have self-awareness (or consciousness). Methods like the red-spot (mirror) test have shown that some animals, such as certain species of apes and dolphins, display self-awareness, while others do not. So we can assume that an AI must also have an architecture of sufficient complexity to develop consciousness, and that at some point in its evolution it must achieve that complexity in order to become conscious. But if the evolution of AI resembles the process in Darwinian theory, then the AIs which finally achieve consciousness won’t be the ones the process began with, because each new generation of AIs is built by merging the best architectures of the old generation and mutating them. For this merging and mutating process the AI may need human assistance.
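The merge-and-mutate process described above is essentially a genetic algorithm, and a toy version makes the idea concrete. In the sketch below the "architectures" are just bit strings and "fitness" simply counts ones; every detail is invented for illustration and is not a model of any real AI system.

```python
import random

random.seed(42)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(genome):
    """Toy fitness: how many ones the bit string contains."""
    return sum(genome)

def crossover(a, b):
    """Merge two parent 'architectures' at a random split point."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the best half unchanged, then rebuild the population by
    # merging and mutating randomly chosen survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # climbs toward GENOME_LEN over the generations
```

Note that the selection, crossover, and mutation steps here are exactly the human-supplied machinery the text refers to: something outside the individual "architectures" has to run the loop.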
But a single AI could also undergo a sort of evolutionary process of its own. Such a process would be self-improvement, or more precisely recursive self-improvement: the ability of an AI to reprogram its own software or add parts to its structure or architecture (perhaps hardware-wise too). This process, too, could let the AI reach the necessary complexity at some point.
In this way, an AI might be able to produce consciousness through self-modification, or through a stage in its own psychological development, without humans specifically designing it to be conscious from the beginning.
With enough practice, some learners of a second language can process their new language as well as native speakers, research at the University of Kansas shows.
Using brain imaging, a trio of KU researchers was able to examine to the millisecond how the brain processes a second language. They then compared their findings with their previous results for native speakers and saw both followed similar patterns.
The research by Robert Fiorentino and Alison Gabriele, both associate professors in the linguistics department, and José Alemán Bañón, a former KU graduate student who is now a postdoctoral researcher at the University of Reading in the United Kingdom, was published this month in the journal Second Language Research.
For years, linguists have debated whether second-language learners would ever resemble native speakers in their ability to process language properties that differ between the first and second language, such as gender agreement, which is a property of Spanish but not English. In Spanish, all nouns are categorized as masculine or feminine, and various elements in the sentence, such as adjectives, need to carry the gender feature of the noun as well.
Some researchers argued that even those who spoke a second language with a high level of accuracy were using a qualitatively different mechanism than native speakers.
“We realized that these different theories proposing that either second-language learners use the same mechanism, or a different mechanism could actually be teased apart by using brain-imaging techniques,” Gabriele said.
The team studied 26 high-level Spanish speakers who hadn’t learned to speak Spanish until after age 11 and grew up with English as the majority language. The speakers used Spanish on a daily basis and had spent an average of a year and a half in a Spanish-speaking country.
They were compared with 24 native speakers, who were raised in Spanish-speaking countries and stayed in their home country until age 17.
To measure language processing as it happens, the team used a method known as electroencephalography (EEG), which uses an array of electrodes placed on the scalp to detect patterns of brain activity with high accuracy in timing.
Once hooked up to the EEG, the test subjects were asked to read sentences, some of which had grammatical errors in either number agreement or gender agreement.
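A single sentence evokes only a tiny EEG response relative to the brain's background activity, so studies of this kind typically average many trials time-locked to the critical word, yielding an event-related potential (ERP). The sketch below illustrates just the averaging step on synthetic data; the sampling rate, trial count, and the P600-like deflection (a positivity around 600 ms often reported for grammatical violations) are assumptions for illustration, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 500.0                        # sampling rate, Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)  # one epoch: -200 ms to +800 ms
n_trials = 200

# Simulated response to a grammatical violation: a positive deflection
# near 600 ms, buried under noise much larger than the signal itself.
signal = 2.0 * np.exp(-((t - 0.6) ** 2) / (2 * 0.05 ** 2))
trials = signal + rng.normal(0, 5.0, size=(n_trials, t.size))

# Averaging across trials cancels the noise and reveals the deflection.
erp = trials.mean(axis=0)
peak_time = t[np.argmax(erp)]
print(f"peak near {peak_time * 1000:.0f} ms")
```

Comparing such averaged waveforms between learner and native-speaker groups is what lets researchers say the two show "the same patterns of brain activity."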
The researchers then compared the results of the second-language learners to native speakers. They found that the highly proficient second-language speakers showed the same patterns of brain activity as native speakers when processing grammatical violations in sentences.
“We show that the learners’ brain activity looks qualitatively similar to that of native speakers, suggesting that they are using the same mechanisms,” Fiorentino said.
The study highlights the brain’s plasticity and its ability to acquire a new complex system even in adulthood.
“A lot of researchers have argued that there is some sort of language learning mechanism that might atrophy over the life span, particularly before puberty. And, we certainly have a lot of evidence that it is difficult to process your second language at nativelike levels and you have to go through quite a bit of effort to find people who can,” Gabriele said. “But I think what this paper shows is that it is possible.”
Gabriele and Fiorentino are working on a second phase of the research, studying how the brain processes a second language at the initial stages of exposure. Their preliminary results suggest that properties that are shared between the first and second language show patterns of brain activity that are very similar in learners and native speakers. This suggests that learners build on the representation for language that is already in place when learning a second language.