"Let's put these arguments aside and try instead to understand what the vast, unknown mechanisms of the brain may do. Then we'll find more self-respect in knowing what wonderful machines we are." Marvin Minsky, The Society of Mind
Go is an ancient board game that originated in China. Chess has a reputation for complexity, yet each position offers a player only about 40 possible moves. Go's rules are far simpler, but they open the door to a hugely complex game: a Go player faces about 200 possible moves at the game's start, and the number of possibilities grows quickly as the game proceeds.
The complexity of Go alone does not explain why the defeat of the Korean Go champion Lee Se-dol made headline news in the tech sections of newspapers; the fact that the champion was defeated by a computer program called AlphaGo may.
The victory of a machine over a human is noteworthy because it provides a data point in the long controversy over whether it is possible to create machines that are capable of human intelligence.
Since its beginnings in the 1950s, the field of artificial intelligence (AI) has drawn on new understandings of human development (e.g., Piaget's constructivism), new neurological accounts of how the brain actually works, and discoveries in the life sciences, such as how DNA copies itself.
Being able to create machines that exhibit human intelligence requires a model of how the human mind works. Clues for where to begin were found in the scientific explanation that living things are composed of non-living materials: "every living thing was found to be composed of smaller cells, and cells turned out to be composed of complex but comprehensible chemicals...mysteriously pulsing hearts turned out to be no more than mechanical pumps composed of networks of muscle cells." So instead of trying to create a brain, AI investigations proposed "tiny machines" that did the mind's work. AI pioneer Marvin Minsky called these "agents of the mind." (Minsky, 1988)
In the 1970s Minsky and his colleague Seymour Papert began to develop their "The Society of Mind" theory. The theory "proposes that intelligence is not the product of any singular mechanism, but comes from the managed interaction of a diverse variety of resourceful agents." The many diverse "agents" are necessary if we are to be able to account for the huge variety of tasks that make up human behavior. The variety cannot be accounted for by reference to a unitary mind. (MIT-Media-Lab, 2016)
The Society of Mind theory is supported by our everyday experience, as Minsky explains:
You know that everything you think and do is thought and done by you. But what's a you? What kinds of smaller entities cooperate inside your mind to do your work? To start to see how minds are like societies, try this: pick up a cup of tea! Your GRASPING agents want to keep hold of the cup. Your BALANCING agents want to keep the tea from spilling out. Your THIRST agents want you to drink your tea. Your MOVING agents want to get the cup to your lips.
These processes go on with little or no conscious direction from you, because you are engaged in other processes, such as talking with your companion tea-drinkers. (Minsky, 1988) While processes like holding a cup or walking seem very simple, they are very difficult for machines to accomplish, as videos of robots struggling with tasks that are trivial for us make clear.
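Minsky's tea-cup example can be sketched in code. The following is a toy illustration only, not Minsky's actual model: each "agent" is a tiny function with one narrow concern, ignorant of the others' goals, and the coordinated result emerges simply from running them together. All names here (grasping_agent, drink_tea, the state dictionary) are invented for the illustration.

```python
# A toy sketch of Minsky-style "agents": each handles one concern
# and knows nothing about the others' goals.

def grasping_agent(state):
    # Wants only to keep hold of the cup.
    state["cup_held"] = True

def balancing_agent(state):
    # Wants only to keep the tea from spilling.
    if state.get("cup_held"):
        state["tea_spilled"] = False

def moving_agent(state):
    # Wants only to get the cup to the lips.
    if state.get("cup_held"):
        state["cup_at_lips"] = True

def drink_tea():
    state = {}
    # No agent directs the others; the combined behavior
    # nevertheless looks like one deliberate act.
    for agent in (grasping_agent, balancing_agent, moving_agent):
        agent(state)
    return state
```

Running `drink_tea()` yields a state in which the cup is held, nothing is spilled, and the cup reaches the lips, even though no single agent "knows" the overall goal.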
Significantly the Society of Mind theory is supported by what is known about how the brain actually functions. According to cognitive neuroscientist Steven Pinker:
The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. The module's basic logic is specified by our genetic program. Their operation was shaped by natural selection to solve the problems of the hunting and gathering life led by our ancestors. (Pinker, 1997, p. 21)
The AlphaGo program is based on the three features of AI: neural networks, machine learning, and deep learning.
A neural network is a highly simplified version of brain tissue in which neuron-like circuits are connected to form nodes. An individual node has very limited capability, but connected together these nodes exhibit very complex behavior. The AlphaGo neural network has nodes that allow it to learn (machine learning) so that it can improve its performance, and it also has nodes that evaluate potential moves. These evaluation nodes use deep learning, which means they have been trained on thousands of games of Go played by humans; the program can thus judge which moves are most likely to lead to success. (Vincent, 2016) Lee Se-dol was amazed by how well the program played him.
Of course, while I am writing about intelligent machines, I am also wondering about what insights AI might provide about human learning in the context of schools.
One of the powerful ideas about human development is that our personal growth depends on our ability to gain "agency"; that is, "the ability to make choices about and take an active role in one's life path, rather than solely being the product of one's circumstances." (Farrington et al., 2012)
Using Minsky's model, which conceives of the mind as thousands of "resourceful agents," we could redefine education as the activity that results in the acquisition and construction of as large a collection of resourceful agents as possible. Schools, for example, provide most students with only a small collection of the resourceful agents related to reading (decoding agents, a small set of comprehension agents, spelling agents), and there is limited opportunity for students to acquire or develop their own personal collection of resourceful agents.
Schools generally do a poor job of "machine learning," that is, of helping students acquire the skills of self-evaluation so that they can continuously improve the resourcefulness of particular agents.
A more comprehensive collection of agents, ones that connect reading across different disciplines, combined with deep learning (wide learning in a variety of disciplines), would, in AI language, create deeper and wider neural networks that enhance the self-evaluation agents and deepen the deep learning.
The AI concept of intelligence as the result of "resourceful agents" provides a much more comprehensible set of targets for learning than our usual categories: creativity and innovation, critical thinking and problem solving, and so forth. It also provides a conceptual framework for design: neural networks (links between discrete skills), machine learning (feedback loops to optimize practice), and deep learning (interaction with records of human experience in relevant content).
Farrington, C. A., Roderick, M., Allensworth, E., Nagaoka, J. K., Keyes, T. S., Johnson, D. W., & Beechum, N. O. (2012). Teaching Adolescents to Become Learners: The Role of Non-Cognitive Factors in Shaping School Performance: A Critical Literature Review.
MIT-Media-Lab (2016). Brief Academic Biography of Marvin Minsky.
Minsky, M. (1988). The Society of Mind. Simon & Schuster.
Pinker, S. (1997). How the Mind Works. New York: W.W. Norton & Company.
Vincent, J. (2016). What counts as artificially intelligent? AI and deep learning, explained.
Dr. John Holton
Dr. John Holton joined the S²TEM Centers SC in July 2013 as a research associate with an emphasis on the STEM literature, including state and local STEM plans from around the nation.