
LS32. Can’t see the forest for the trees?

 An eminent prof has risen to the challenge of LanguidSlog.  Lots of emails have gone to and fro.  I’m encouraged that the blog is at least understandable.  This fresh input is quite combative but has not been lethal – or not yet.

By the way, you can write to mrniceguy@languidslog.com.  Nothing will appear in the blog without your permission.

At one point the prof warned that I would need to show that NG can ‘handle the full range of phenomena exhibited by relativization – unboundedness, multiple dependencies, across-the-board extraction from conjunction, adjunct islands, etc.’  OK, I can try.  But the comment shows one basic point has been missed.

LanguidSlog’s primary purpose is to persuade every theoretical linguist to reveal the underlying mechanisms they assume and then to show how those mechanisms achieve the moves that result in observable syntax.  By LS8 I was boldly implying that no one could convincingly explain John kissed Lucy, never mind adjunct islands.  Since that point in the blog I have developed Network Grammar to show what can be done while avoiding the shaky assumptions that are implicit throughout orthodox linguistics.  Rubbishing NG does not answer my question about underlying mechanisms.

This post briefly restates what I believe to be the problem for linguistics.

Mental architecture

Part of what I have been calling ‘the mental architecture’ must be for controlling functions that are automatic or instinctive, i.e. functions with a direct genetic basis.  IT provides metaphors such as ‘hardwired’ or ‘pre-programmed’, which should not be too misleading.

Mental architecture must also include some programmable space.  Knowledge is stored there as a result of the individual’s experience.  The major part – perhaps all – of language is in this category.

The programmable space must consist of elementary units which are essentially undifferentiated.  The same can be said of computer RAM, but a ‘bit’ in RAM has just two states, 0 and 1.  The units of mental storage are known to have variable levels of activation.  We can therefore assume that a unit represents a magnitude, not just 0 and 1, and that the magnitude can vary over time (for example, activation is known to decay).

Neuroscience can show that the units are linked together in a network.  The number of links at a particular unit – i.e. at a network node – is limited, but there must be at least two outward paths to allow the phenomenon known as ‘spreading activation’.
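
To make that concrete, here is a toy sketch in Python (not NG itself, just one way of picturing units whose activation decays and spreads along outward links).  The three-node network, the decay rate and the spread fraction are all invented purely for illustration.

```python
# Toy illustration only: units hold a scalar activation that decays each step,
# and a fixed fraction of it spreads along the unit's outward links.
# DECAY, SPREAD_FRACTION and the three-node network are invented numbers.

DECAY = 0.9            # fraction of activation retained after each time step
SPREAD_FRACTION = 0.5  # fraction of a unit's activation passed to its neighbours

activation = {"a": 1.0, "b": 0.0, "c": 0.0}   # a unit is just a magnitude
links = {"a": ["b", "c"], "b": [], "c": []}   # "a" has two outward paths

def step(activation, links):
    """One time step: spread activation along outward links, then let it decay."""
    incoming = {unit: 0.0 for unit in activation}
    for unit, targets in links.items():
        if targets:
            share = SPREAD_FRACTION * activation[unit] / len(targets)
            for target in targets:
                incoming[target] += share
    return {unit: DECAY * (activation[unit] + incoming[unit]) for unit in activation}

for _ in range(3):
    activation = step(activation, links)
    print(activation)   # watch activation spread out from "a" and decay everywhere
```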

From ‘undifferentiated’, it follows that the representation of anything knowable cannot be localised to a particular node – no more than a particular bit in computer RAM is dedicated to anything externally meaningful.  You may have heard that researchers have detected, for example, a ‘Halle Berry’ neuron and a ‘Jennifer Aniston’ neuron.  That doesn’t conflict with the ideas here.  A particular node is the root of a particular concept, the concept being given by the paths fanning out from that node.
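
In the same toy terms, a ‘concept’ would simply be whatever is reachable along the paths fanning out from its root node.  Again, the node names and links below are invented purely for illustration.

```python
# Toy illustration only: read off a 'concept' as everything reachable by
# following outward links from a root node.  The network is invented.

links = {
    "root": ["x", "y"],
    "x": ["z"],
    "y": [],
    "z": [],
}

def fan_out(links, root):
    """Return every node reachable by following outward links from the root."""
    seen, frontier = set(), [root]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(links.get(node, []))
    return seen

print(fan_out(links, "root"))   # the concept rooted at 'root' is this whole fan-out
```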

Language

For language the crucial point is that decoding from phonological words to conceptual propositions is achieved by this ‘fanning out’.  A proposition is delivered with no delay if it can be formed with sufficient activation.  But a proposition may have to wait a few milliseconds for later phonological material before sufficient activation accumulates.  This facility is what makes syntax so rich in its possibilities.
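
A toy sketch of that timing, with the threshold and the per-word contributions invented purely for illustration: each incoming word adds some activation to a candidate proposition, which is delivered the moment the total is sufficient and otherwise waits for later material.

```python
# Toy illustration only: a proposition is 'delivered' as soon as accumulated
# activation reaches a threshold; otherwise it waits for later words.
# THRESHOLD and the per-word contributions are invented numbers.

THRESHOLD = 1.0

def decode(word_contributions):
    """Accumulate activation word by word; deliver as soon as the threshold is met."""
    total = 0.0
    for position, (word, contribution) in enumerate(word_contributions, start=1):
        total += contribution
        if total >= THRESHOLD:
            return f"proposition delivered after word {position} ({word!r})"
    return "still waiting for later phonological material"

# 'John kissed Lucy': here the proposition only becomes available once the
# later material ('Lucy') has contributed its share.
print(decode([("John", 0.4), ("kissed", 0.4), ("Lucy", 0.4)]))
```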

Network Grammar

The mechanisms used in NG are simple.  There is no separate ‘program’.  There is no need to store intermediate results, no need to remember the storage location, no need to retrieve those results later in the process.  In summary, no need to assume that the mental architecture works like computer RAM, with its addressability.

Language logging

Every theorist should look honestly at their own theory and determine the mechanisms it needs.  Does the process store, remember and retrieve?

If it does, then addressability and a stored program are needed.  The theory cannot represent what actually happens in human language processing.  It can only describe the outcomes.

My belief is that only dependency grammars have a chance of surviving without these unsustainable assumptions – and most DGs are not yet adequately explained.  Yes, this means that a lot of syntacticians are barking up the wrong tree.

How much more rude must I be to provoke a response?

Mr Nice-Guy
