This phenomenon is usually exemplified by nesting a relative clause in another clause, then that one in another, and so on. Wikipedia shows the build-up of an improbable English sentence, but let’s go straight to German, where this sort of thing is apparently more acceptable.
In English the sentence would be:
(153) …that John saw Peter help Mary let the children swim
In (153) the subject-verb junctions are John__saw, Peter__help, Mary__let and children__swim. All are quite local, and the left-to-right sequence gives the hierarchy from main clause downwards. There is no overlap.
In (152) the junctions are essentially the same (Jens__sieht etc) but these are nested one within another. The L-to-R sequence of subjects is the same as in (153) but the L-to-R sequence of verbs is the reverse – from the lowest clause upwards.
A read of LS41 should show how this works. The subject nouns (strictly, heads of noun phrases) are held in accession sequence. Then each of the verbs is linked to one of them in the reverse sequence. Thus the nouns are used on the basis of ‘last in first out’.
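The ‘last in first out’ use of nouns can be sketched as a stack. Sentence (152) is quoted earlier in the post; only the junction Jens__sieht appears in this section, so the other German noun and verb forms below are assumptions made for illustration.

```python
# A minimal sketch of the LIFO pairing described above. The verb forms
# schwimmen, lassen, helfen are assumed; only Jens__sieht is given
# explicitly in the text.

def pair_nested(nouns, verbs):
    """Pair each incoming verb with the most recently stored noun."""
    stack = list(nouns)              # nouns held in accession sequence
    return [f"{stack.pop()}__{verb}" for verb in verbs]

# Centre-embedded order: all nouns first, then the verbs from the
# lowest clause upwards.
junctions = pair_nested(["Jens", "Peter", "Marie", "Kinder"],
                        ["schwimmen", "lassen", "helfen", "sieht"])
print(junctions)
# → ['Kinder__schwimmen', 'Marie__lassen', 'Peter__helfen', 'Jens__sieht']
```

The last noun stored is the first paired, which reproduces the nesting.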
The sequence in which (verb) / AGENT / (noun) propositions are delivered doesn’t matter. Cognition deals with bundles of propositions whose delivery is near-simultaneous.
Why not for English?
So the story in LS41 seems to be true for German. But centre-embedded sentences don’t really occur in English. Perhaps LS41 is not universal.
LanguidSlog has previously suggested that decay of activation can affect processing. If the words in a sentence decay at the same rate then the most recently processed will be the most active – and, all other things being equal, most likely to be paired with the current word.
That still doesn’t rule out centre-embedding. Differences in decay may be sufficient for selecting between active words that are widely spaced, but insufficient for distinguishing between the closely-spaced (and therefore near-equally active) nouns in a centre-embedded sentence. For English, relying on decay could be more efficient; for German, the LS41 mechanism could be more reliable. Is your language a Lotus or a Mercedes?
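The decay argument can be made concrete with a toy calculation. Everything numerical here is invented for illustration: the decay rate, the timings and the words; the point is only the contrast between wide and narrow spacing.

```python
# Toy model of activation decay. The rate and timings are assumptions;
# only the wide-vs-close contrast matters.

DECAY = 0.5  # assumed decay factor per unit of time

def activation(age):
    """Activation of a word 'age' time units after it was processed."""
    return DECAY ** age

# Widely spaced words: the most recent is clearly the most active.
wide = {"earlier": activation(6.0), "recent": activation(1.0)}

# Four nouns in quick succession, as at the start of a centre-embedded
# sentence: their activations are nearly equal, so decay alone cannot
# reliably pick one out.
close = {"Jens": activation(1.3), "Peter": activation(1.2),
         "Marie": activation(1.1), "Kinder": activation(1.0)}

print(wide)
print(close)
```

With these assumed numbers the widely spaced pair differ by more than an order of magnitude, while the closely spaced nouns differ by only a few percent.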
Dutch allows a third possibility:
In (154) the L-to-R sequence of both nouns and verbs gives the hierarchy from main clause downwards. The subject-verb junctions are Jan__zag, Piet__helpen, Marie__laten and kinderen__zwemmen. These junctions must cross one another because each subject noun is separated from its verb. The nouns are used on the basis of ‘first in first out’.
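The crossing pattern can be sketched the same way, with a queue in place of a stack; the junctions listed above then fall out directly.

```python
from collections import deque

# A sketch of the FIFO pairing in (154): each verb takes the earliest
# noun still waiting, so the dependencies cross.

def pair_crossed(nouns, verbs):
    """Pair each incoming verb with the earliest stored noun."""
    queue = deque(nouns)
    return [f"{queue.popleft()}__{verb}" for verb in verbs]

junctions = pair_crossed(["Jan", "Piet", "Marie", "kinderen"],
                         ["zag", "helpen", "laten", "zwemmen"])
print(junctions)
# → ['Jan__zag', 'Piet__helpen', 'Marie__laten', 'kinderen__zwemmen']
```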
As defined in LS41 the mechanism builds a chain for a sequence of phonological words P1, P2, P3 and P4:
If P5 followed, it would replace the null but bring a further null to the right. All this happens regardless of whether any junctions are formed between these Ps. The sequence in which junction possibilities are tried is right-to-left.
To explain (154) we need a variant of this mechanism. Suppose instead the sequence of phonological words built a chain thus:
The difference is very small. The ‘German’ chain is built with links thus:
and the ‘Dutch’ chain is built with links thus:
These could be a lexical property of the words concerned. But once a chain starts with one type of link it must continue with the same type.
With the variant mechanism, if P5 followed, it would replace the null and bring a further null to the left. In this way the chain accumulates in the reverse sequence.
If we stipulate that the sequence in which junction possibilities are tried is still right-to-left, a sentence like (154) can be correctly processed. P1 is Jan, P2 is Piet, P3 is Marie and P4 is kinderen. Then P5 is zag and a junction is identified with the rightmost word in the chain (P1), Jan__zag – exactly as required.
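The two chain types can be sketched together. In this toy version (mine, not the LS41 formalism itself) the only difference is which end of the chain a new noun joins; junction possibilities are always tried from the right, and a successful junction removes the noun. The German verb forms are assumed, as the text quotes only Jens__sieht.

```python
def process(tokens, join_side):
    """Sketch of the chain mechanism. tokens is a list of (word, kind)
    pairs with kind 'N' or 'V'; join_side is 'right' (the 'German'
    chain, arrival order) or 'left' (the 'Dutch' chain, reverse order).
    """
    chain, junctions = [], []
    for word, kind in tokens:
        if kind == "N":
            if join_side == "right":
                chain.append(word)      # replaces the null at the right
            else:
                chain.insert(0, word)   # replaces the null at the left
        else:
            # junction possibilities are tried right-to-left, so the
            # rightmost noun in the chain is taken first
            junctions.append(f"{chain.pop()}__{word}")
    return junctions

german = [("Jens", "N"), ("Peter", "N"), ("Marie", "N"), ("Kinder", "N"),
          ("schwimmen", "V"), ("lassen", "V"), ("helfen", "V"), ("sieht", "V")]
dutch = [("Jan", "N"), ("Piet", "N"), ("Marie", "N"), ("kinderen", "N"),
         ("zag", "V"), ("helpen", "V"), ("laten", "V"), ("zwemmen", "V")]

print(process(german, "right"))  # nested: Kinder__schwimmen ... Jens__sieht
print(process(dutch, "left"))    # crossed: Jan__zag ... kinderen__zwemmen
```

One stipulation (try junctions from the right) with two chain directions yields both the nested German pattern and the cross-serial Dutch one.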
An objection to this account is that the sequence of the chain could be wrong for analysing other parts of sentences. Perhaps LS41-style processing (with the chain in either sequence) applies only to words of types that can be strung together – like the subject nouns in these examples. The position of the chain relative to the other words in the sentence can be determined by activation – because the words in the chain decay like any other.
Inverse cross-serial dependency
Both (152) and (154) have all their subject nouns before all their verbs. In (152) the top-down hierarchy of the sentence is shown by the nouns left-to-right and by the verbs right-to-left. In (154) the hierarchy is shown by both nouns and verbs left-to-right. A question that has been asked in the linguistics literature is: Does any language allow the hierarchy to be shown by both nouns and verbs right-to-left? Such a sentence would be something like:
(155) …that the barn the man the children we paint help let
Sentence (155) has to use bad English because no such language has been found. A fourth possibility (which I haven’t seen mentioned anywhere) is:
(156) …that the barn the man the children we let help paint
Four possibilities are formed from two variables. The arrows in the following table all indicate the left-to-right word sequence; falling means from top to bottom of the clause hierarchy, and rising means bottom to top.
NG easily accounts for a language having ‘falling’ or ‘rising’ verb sequences – by allowing (verb)__(verb) junctions with either the higher or the lower verb as the left-hand member. The languages presumed missing are characterised by ‘rising’ noun sequences.
With its key principle that sentence meaning is a bundle of propositions with no structure or sequence, NG can’t offer a plausible reason for ‘rising’ noun sequences being impossible. But should we have to prove ‘can’t exist’ merely because ‘doesn’t exist’? The only possible change in the evidence is to ‘does exist’ – which would vindicate NG.
Another challenge comes from evidence of a limit on the number of (noun)__(verb) junctions in sentences like (152) and (154): three for written sentences, fewer for spoken. The NG view must be that sentences more complex than this are grammatical and should be comprehensible. Not being a native speaker of any of the relevant languages, I find it difficult to decide how concerned to be. And there are more urgent topics for LanguidSlog to address.