
LS5. Mental architecture

Transient-items-with-pointers (from the analogy with computer software in the last piece) would be a blatant straw man if it misrepresented existing theories.  But no theory really explains how sentence structure is implemented, so in this piece knocking down the idea is perfectly legitimate.


Mental architecture is being emphasised because, together with the sensorimotor channels available for phonological sending and receiving, it determines how language operates.  Human language is contingent on these characteristics, not vice versa.  The characteristics started evolving to meet other needs in precursor species, not specifically to meet universal needs of language.

Homo sapiens has in effect a single, half-duplex send/receive channel.  This leads to phenomena such as canonical sequencing of predicate and arguments, for example Subject then Verb then Object in English.  An extra-terrestrial might have multiple channels working in parallel.  ET could send or receive S+V+O all at the same instant (assuming S, V and O are universal).

The vocal/auditory or signing/visual channels humans use are well understood and the resulting constraints are obvious.  But mental architecture is still the subject of speculation amongst neuroscientists, and its effect on human language has been largely ignored by linguists.

These thoughts are based on zero knowledge and therefore need some other justification.  The alternative view would be that the physiological infrastructure supporting language evolved specifically for the purpose.  But that leads to a chicken-and-egg paradox and would also fail to constrain speculation about mental architecture.


Theories of human language processing start neatly with the brain as a sort of computing device.  But a theory rarely specifies which of the possible definitions of ‘computing’ it uses.  In most theories the tacit assumption seems to be a device that performs like a stored-program computer.  This is understandable because every academic is now familiar – as user or programmer – with electronic devices.  Understandable, but are they misled by their experience of unlimited richness and variety of function and data?  Is that why there is a rich variety of theories and no clear winner 57 years on from Chomsky (1957)?

A stored program consists of a set of instructions, each specifying the type of operation and the addresses of locations in the computer’s storage.


A location may accommodate a data value on which the operation is performed; or it may be a point within the program itself holding the instruction to be performed next (instead of defaulting to the next in sequence).  This is the underlying architecture enabling electronic computing to be applied in so many ways.
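Purely as an illustration of this architecture (the opcodes, addresses and layout below are invented for the example, one of many possible designs), such a machine can be sketched in a few lines of Python:

```python
# Toy sketch of a stored-program machine: instructions and data share one
# addressable memory; each instruction names an operation and the addresses
# it works on.  Nothing here is taken from a real instruction set.

def run(memory, pc=0):
    """Execute instructions stored in `memory`, starting at address `pc`."""
    while True:
        op, *addrs = memory[pc]
        pc += 1                      # default: next instruction in sequence
        if op == "ADD":              # memory[a] <- memory[a] + memory[b]
            a, b = addrs
            memory[a] += memory[b]
        elif op == "JUMP_IF_ZERO":   # take the next instruction from address t
            a, t = addrs
            if memory[a] == 0:
                pc = t
        elif op == "HALT":
            return memory

# Program and data interleaved in the same addressable store.
mem = {
    0: ("ADD", 10, 11),          # mem[10] += mem[11]
    1: ("JUMP_IF_ZERO", 10, 3),  # branch only if mem[10] is zero
    2: ("ADD", 10, 11),
    3: ("HALT",),
    10: 1,                       # data values
    11: 2,
}
run(mem)
print(mem[10])  # 5
```

The point the sketch makes is that both the data (locations 10 and 11) and the next instruction to execute (location 3, named in the jump) are selected by address.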

If the brain worked in a similar way, human language processing could be as described in LS3 and 4 – including the pointers needed for efficiency.  A pointer in one item would point to the location of another item, permanent or transient.
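Purely as a toy illustration of what such a pointer buys (the addresses and role names below are invented for the example, not taken from LS3 or LS4):

```python
# Hypothetical addressable store: permanent lexical items live at numeric
# addresses, and a transient item holds pointers (addresses) rather than
# copies of the words themselves.
lexicon = {100: "John", 101: "kissed", 102: "Lucy"}

# Transient item for one sentence: three pointers into the lexicon.
transient = {"subject": 100, "verb": 101, "object": 102}

# Following the pointers recovers the stored items.
sentence = [lexicon[transient[role]] for role in ("subject", "verb", "object")]
print(" ".join(sentence))  # John kissed Lucy
```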

For that to be possible the brain’s storage must be addressable, but neuroscience provides no evidence of addressability.

No alternative

Could the issue of addressability be avoided?  An alternative architecture might execute instructions in a small number of special locations, perhaps with particular ones dedicated to particular operations.  These locations would be somewhat like a computer’s ‘registers’ (which are distinct from its much larger RAM).  Perhaps one instruction could involve more than two registers, thus enriching the functionality that can be achieved.
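A register-style machine of that kind might look something like the following sketch (the register names and the three-register ADD are invented for illustration):

```python
# Hypothetical register machine: operations act only on a small fixed set
# of named locations, with no large addressable memory involved.
registers = {"R0": 0, "R1": 0, "R2": 0}

def add(dst, a, b):
    """A three-register instruction: dst <- a + b."""
    registers[dst] = registers[a] + registers[b]

registers["R1"], registers["R2"] = 2, 3
add("R0", "R1", "R2")
print(registers["R0"])  # 5
```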

But each language user stores a large number of words.  Somehow the connection between a vocabulary of 10⁵ stored words and sentence structures has to be explained.  A program cannot operate on items selected from large-volume data without addressability.

Could the assumptions here about how the items in a structure are stored be wrong?  Meaning might be localised rather than distributed.  It might be copied to transient items.  And items in a structure might be gathered into a set of registers.

But there would still be a need for addressability to allow the program to vary the sequence in which its instructions are executed in order to deal with particular situations.  For example, in John kissed Lucy, it becomes necessary to delete the junction between John and kissed when Lucy is encountered; but such deletion doesn’t occur following every incoming word.
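The data-dependent branching involved can be shown with a deliberately naive toy processor (not a proposal about parsing; the representation of junctions is invented for the example):

```python
# Toy incremental processor: the junction-deletion step runs only when an
# object word arrives, not after every incoming word -- control flow that
# depends on the data, which a stored program implements with jumps to
# addressable instructions.

def process(words):
    junctions = []
    pending = None
    for word in words:
        if pending is None:
            pending = word                       # e.g. "John"
        elif not junctions:
            junctions.append((pending, word))    # provisional John-kissed junction
            pending = word
        else:
            # Object encountered: delete the provisional junction and
            # replace it with the full predicate-argument structure.
            subj, verb = junctions.pop()
            junctions.append((verb, subj, word))
    return junctions

print(process(["John", "kissed", "Lucy"]))  # [('kissed', 'John', 'Lucy')]
```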

OK so far?

Alert readers will realise that we could be on the verge of iconoclasm.  Is anyone alert enough to find a flaw in the argumentation thus far?  If no one shouts, the next piece will take you into unfamiliar territory.

Mr Nice-Guy
