Maybe I'm reading too much into it, but a genealogical analysis of language seems like a strange way to attack Chomsky's overall hypothesis. The fundamental point of the poverty-of-stimulus argument is that there must be strong inductive biases in whatever language-learning apparatus we possess, since we seem to quickly converge on something deemed grammatical in our language, with relatively few examples.
Showing that a particular property varies pretty randomly across languages, correlated only with lineage, is decent evidence that that property isn't universal. That would attack some stronger possible formulations of universal grammar (which maybe Chomsky has actually advanced; I don't recall) that claim not only that we have an inductive bias in language learning, but also that there is a specific list of language features that are universal.
But it doesn't show that there isn't some inductive bias, or even a generative-grammar-style inductive bias, where "human language, in all its possible variations" is a parameterized space significantly smaller than the space of all possible symbol protocols, and relatively easy for us to learn because we're hardwired with the parameterization (but not the specific parameter settings).
Admittedly I am familiar with machine learning and not very familiar with current work in linguistics, so I could be totally missing some reason that makes my field's methods inapplicable to theirs, inconceivable as that might be. ;-)
Chomsky's idea wasn't that there was a strong inductive bias in language learning, it was that language isn't learned at all; it grows in the brain, as his acolyte Pinker claims, "like teeth grow in the mouth." You don't "learn" to have teeth, do you? Pinker asks.
How can this be, when we see so obviously that kids raised by Greek speakers learn Greek while those raised by English speakers learn English?
It's because, acc. to Chomsky and his followers ("Modern Linguists"), Greek and English are really the same language, and kids don't LEARN this language, called "Universal Grammar," they GROW it. What they DO learn is the values of some "parameters" that cause different instantiations of UG to appear superficially different, like Greek and English.
This quickly required the definition of "language" to exclude every aspect of language except syntax, because everything else so obviously was learned and wasn't universal, and even with syntax, they ended up having to exclude the syntax actually used by real speakers really speaking. That syntax became known as "performance" as distinct from "competence", which was the universal language people really knew deep down inside but which got corrupted in real use by various noise-inducing factors.
So "real language" became syntax only, and then, only carefully constructed written examples of proper, uncorrupted "competence" syntax. And with those, all native speakers could use their innate rules plus parameter settings to unanimously agree on which word sequences were valid and which were invalid. You had a rule-based syntax with universal rules, right?
Except that, with each passing year, researchers outside the linguistics departments found more and more examples where native speakers disagreed over validity, and validity decisions that weren't discrete (clearly valid or clearly invalid) but were often shades of gray. (Inside linguistics departments, conformity to Chomskyan orthodoxy was usually enforced, as academia tends to do, so "linguistics research" has supported Chomsky for decades, while cognitive science hasn't.)
There were so many differences among native speakers of a single language, not to mention the ever-growing catalog of diversity between languages and the way that languages gradually diverge instead of in clearly discrete jumps, and so many shades of gray in judgments among natives that the notion of parameters got more and more ridiculous, even with the notion of "language" pared down to almost nothing.
Sure, you could claim that all books are the same universal book, too, as long as each character is a parameter. If so, then all languages are the same language under the surface, too, plus or minus some parameter settings.
Chomsky's ridiculous language ideas would have been thrown out long ago if he hadn't been such a leftist "intellectual icon" in the leftist temples of academia and media. Instead, his theory just changed dramatically from version to version but remained unquestionably true throughout. Every few years he would significantly revamp his "program." In 2002 he seemed to abandon everything about syntax, too, except for recursion. What's unique about human language, as opposed to the general principles found in all human (and some animal) cognition, is just the innate, universal ability to handle recursion in syntax. (That paper apparently infuriated Pinker by cutting his "Language Instinct" position down to just a syntactical recursion instinct.)
What a bunch of nonsense. And this paper is just one more nail in the coffin of "modern linguistics".
This is factually wrong in a number of ways. Modern linguists don't throw out "every aspect of language but syntax" (as you yourself indicate later in your own post). There are falsifiability problems ("all books are the same universal book too, so long as each character is a parameter") with some theories, but you have to start with a theory at some point.
I can relate to your sentiment that he's favored because he's such a lefty, but it's not really an argument.
He revamps his "program" every few years because he recognizes it as wrong. I'm not sure why you cite that as a bad thing?
"He revamps his "program" every few years because he recognizes it as wrong. I'm not sure why you cite that as a bad thing?"
The "bad thing" is not the person who decides that he was wrong; the bad thing is the theory that was wrong and the "modern linguists" who still promote it. The notion of "universal grammar" came from the original version(s) of the theory: Everyone knows a fantastically complex rule system perfectly by some very young age, which can't possibly be learned from noisy, real-world experience in so short a time, so language must be innate, like teeth, and since humans are all the same species, the language must be a universal language, which must have some parameters to explain the illusion that they aren't the same.
By 2002, there were not enough legs of that original theory still standing (still believed by even Chomsky himself) to support the notion of a "universal grammar."
Yet despite the fact that nobody starting fresh with what we know today would ever propose a theory of innate, universal grammar, we still have most people calling themselves "modern linguists" claiming to believe it.
Universal grammar is nonsense, and modern linguists' failure to drop it is the "bad thing."
"...you have to start with a theory at some point."
Yes, and you have to drop it at some point when new evidence keeps making it less and less plausible. That point was years ago.
The alternative to Universal Grammar is Skinner-style behaviorism, which is clearly wrong.
Think of Universal Grammar as a rule for building grammar rules.
It's pretty clear that humans have a distinct innate ability to learn language: no monkey can learn English no matter how much you try to teach it. You can teach animals all kinds of interesting behavior but you can't teach them language.
> Yet despite the fact that nobody starting fresh with what we know today would ever propose a theory of innate, universal grammar, we still have most people calling themselves "modern linguists" claiming to believe it.
Quite the contrary. Anyone starting fresh would probably start with an assumption about some innate ability to learn language.
The alternative to UG is not behaviorism; there are countless alternatives. There are all sorts of learning algorithms that are more plausible than UG or behaviorism.
Yes, it IS clear that humans have an innate ability to LEARN languages, as you insist. Unfortunately, UG denies this, claiming that we CAN'T possibly learn anything as rich and complex as a human language in so short a time with so little, and such messy, input, and since humans have NO innate ability to LEARN human (first) languages, they must instead GROW them "like you grow teeth."
"The alternative to UG" isn't behaviorism, it's that languages are LEARNED.
>"Quite the contrary. Anyone starting fresh would probably start with an assumption about some innate ability to learn language."
You're so right, except that your claim is not contrary to me, it's contrary to UG. Now try to convince the modern linguists of your theory that humans have the innate ability to LEARN first languages and see how that goes.
You are substantially mis-characterizing modern linguistic theory/linguists. The 'grow them like you grow teeth' is meant to indicate that, given certain inputs/environment, a child will develop normal language function. You don't need to 'learn' it in the sense that you do need to learn e.g. how to read. The reason for the contrast (argues a linguist) is that we have some internal cognitive structures that react to certain kinds of input, namely linguistic input, and that that reaction is called 'learning your first language(s)'. They disagree with the 'common sense' approach, that learning your first language is just like learning anything else.
I would characterize the debate between linguists and a certain class of cognitive scientists like this:
CogSci: Hey, you keep talking about UG/Innate mechanisms! We don't like that/it seems implausible. Instead, we should just have general learning algorithms that can be utilized to learn language!
Ling: Cool! Show us! Show us!
CogSci: Well...Here's a machine learning model that can learn English past tense with the following training data.
Ling: Oh. Um. Hmmm. Yeah, the data is more complex than that. How far can you get with this data (unloads data by the truckful). Also: that looks like how adults learn things (the kinds of errors made), not really how kids learn language (they make different kinds of errors). Can you model that?
CogSci: It's a simple model! It can't handle that data. That's for a later paper! Also, we don't care about the error classifications, as long as it looks like learning.
Ling: Ok. Let us know when that paper comes out. Have you seen this Bantu data? It's pretty cool too.
CogSci (later): Ok, look. We didn't get the model to work, but we really think you're multiplying entities. I mean, it's just crazy/biologically-implausible/ugly to postulate this innate knowledge.
Ling: Yup. But here's the deal. We can't manage to actually explain everything we want even if we postulate innate rules/knowledge like crazy. Maybe we have a fundamentally broken model. Maybe machine learning really will come and eat our babies (or maybe the kinds of things we're postulating will turn out to be built on top of machine learning, as explanations at different levels). But so far, it's the best we've got.
Obviously, people write books on these arguments, so some massive simplification was done here. And there is some really cool work being done by general cognitive scientists in the language space.
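For concreteness, here's roughly the flavor of model the CogSci side shows up with in exchanges like this. It's a minimal sketch in Python, not the actual connectionist past-tense models from the literature; the training pairs and the rule-plus-exceptions split are my own toy assumptions:

```python
# A toy English past-tense learner: memorize exceptions, fall back on a
# general "-ed" rule. A deliberately crude stand-in for the kind of general
# learner the CogSci side proposes; real models are far more sophisticated.

TRAINING = [
    ("walk", "walked"), ("play", "played"), ("jump", "jumped"),
    ("go", "went"), ("sing", "sang"), ("take", "took"),
]

def train(pairs):
    """Store any pair the default "-ed" rule cannot produce as an exception."""
    return {stem: past for stem, past in pairs if stem + "ed" != past}

def past_tense(stem, exceptions):
    """Exceptions win; otherwise apply the regular rule."""
    return exceptions.get(stem, stem + "ed")

model = train(TRAINING)
for verb in ["walk", "sing", "blick"]:  # "blick" is a novel verb
    print(verb, "->", past_tense(verb, model))
# Prints walked / sang / blicked. Any irregular the learner hasn't stored yet
# comes out over-regularized ("goed"), which is one of the error-pattern
# questions the dialogue above is arguing about.
```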
I'm not contradicting UG. From what I took out of the linguistics course I took at university, the idea is that part of this innate ability is an innate understanding of grammar, or rather we expect to learn grammar and so when we receive all the "input" our mind tries to make sense of it.
Re Leftist: Many subscribe to the democratic theory of truth: if everyone thinks it, it's probably true. We are conforming creatures, the iconoclast is the exception, and it also explains why paradigms in science last so long.
Re Universal Grammar: But I think parallel evolution is the rule, not the exception; when confronted by the same needs, with the same tools, similar solutions are often arrived at. In the case of human communication, the needs seem to be to describe things, changes and qualities. The tools are probably what amounts to the Universal Grammar: things like the sequence of things being noticeable; that you can flag one thing to relate to another (like cases, tenses, number). The specific rules then just fall out, depending on whatever gets a foothold in the community.
Going out on a limb here: I have doubts that recursion is really fundamental to human language: most people top out very quickly (e.g. 2-3 levels), especially if they actually want to think in terms of it. That is, it only has a (very) finite recursive structure, which you can simulate with a less powerful grammar (e.g. regular instead of CFG). What we do have is the ability to reference other things (which can be cyclic): "Like this sentence."
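To make the "finite recursion can be simulated by a weaker grammar" point concrete, here's a minimal sketch; the N/V abstraction of center-embedding is my own toy encoding, not a claim about any real grammar:

```python
import re

# Center-embedding abstracted as N^k V^k: k subject nouns followed by their k
# verbs ("the cat the dog chased slept" -> N N V V).

def accepts_any_depth(s):
    """Context-free style check: equal runs of N then V, for any depth k."""
    k = len(s) // 2
    return k >= 1 and len(s) == 2 * k and s[:k] == "N" * k and s[k:] == "V" * k

# If real speakers top out around depth 3, a plain regular expression (a
# finite union over the attested depths) already covers actual usage.
BOUNDED = re.compile(r"^(NV|NNVV|NNNVVV)$")

for s in ["NV", "NNVV", "NNNNVVVV"]:
    print(s, accepts_any_depth(s), bool(BOUNDED.match(s)))
# NNNNVVVV is grammatical "in principle" (True) but falls outside the bounded,
# regular approximation (False) -- which is the point being made above.
```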
Chomsky's idea wasn't that there was a strong inductive bias in language learning, it was that language isn't learned at all [...] It's because, acc. to Chomsky and his followers ("Modern Linguists"), Greek and English are really the same language, and kids don't LEARN this language, called "Universal Grammar," they GROW it. What they DO learn is the values of some "parameters" that cause different instantiations of UG to appear superficially different, like Greek and English.
At least from the perspective of machine learning, those seem like similar claims to me! A very common form of strong inductive bias is that the learner has a parameterized model class, and is learning model parameters. For mathematical simplicity the models are usually simpler than anything towards the upper ends of the Chomsky hierarchy, but there's no particular reason the model class can't involve a regular expression or context-free or context-sensitive grammar.
My impression of the UG proponents is that they think the size of the parameter space is quite small relative to the total size of the symbol-protocol space, so the language-learning problem is greatly simplified, perhaps so much that it's not even worth calling it learning (but that seems like a philosophical dispute with a gray area). The way to argue against that afaict would be to somehow show that in fact the minimal size of the parameter space is quite large. Perhaps computational methods don't make that feasible to do currently though, hence the focus on a genealogical analysis of one particular language feature.
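To spell out that machine-learning reading, here's a minimal sketch of "hardwired parameterization, learned parameter settings". The two binary parameters and the toy sentences are invented stand-ins, not anything from the actual principles-and-parameters literature:

```python
from itertools import product

# Hypothesis space: each grammar is fixed by two binary parameters, so there
# are only 2**2 = 4 candidate grammars -- tiny compared with the space of
# arbitrary string sets over the same vocabulary.
PARAMS = list(product(["head-first", "head-last"], ["SVO", "SOV"]))

def generate(head_dir, order):
    """A couple of sentences the parameterized grammar licenses."""
    pp = "in the house" if head_dir == "head-first" else "the house in"
    clause = "I ate the bird" if order == "SVO" else "I the bird ate"
    return {clause, "the man is " + pp}

def learn(observed):
    """Keep only the parameter settings consistent with what was heard."""
    return [p for p in PARAMS if observed <= generate(*p)]

heard = {"I the bird ate", "the man is the house in"}  # a head-last, SOV sample
print(learn(heard))  # -> [('head-last', 'SOV')]
# A handful of examples pins down the grammar precisely because the learner
# only ever has to choose among four candidates.
```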
The fact that native speakers of a single language disagree on the validity of certain utterances does not invalidate the concept of Universal Grammar. No two individuals learn their native languages in exactly the same way. The language that they pick up is shaped by their unique experiences with other people using the language, so it is unlikely that any two individuals have the exact same "instance" of the language (a more technical way to put this is that everyone has their own idiolect), and thus may differ in the judgements on whether particular utterances in that language are valid.
Even though individuals may speak the same language, because the concept of this language in each of their heads may differ slightly due to their individual experiences, one could say that a few of their parameters differ. If groups of individuals are separated from one another, these parameters can end up diverging between populations, leading to differing dialects and even separate languages over longer periods of time.
From reading the Ars Technica piece (I don't have access to the original Nature article), it seems like they've shown that language features are highly dependent on lineage, but I don't see how this result is incompatible with the concept of Universal Grammar. Because a language has to be relearned by every individual born into the language environment, there is a lot of imperfect copying going on, leading to individuals having slightly different parameters of their Universal Grammars. There is a direct biological analogy here: imperfect copying of DNA leads to mutations which can lead to speciation over time. The values of the DNA base pairs are the parameters, but the mechanism by which the DNA is translated into proteins by every individual is still the same. This mechanism is analogous to Universal Grammar translating parameters into the actual spoken language.
It is true that Universal Grammar tends to look at statistical occurrences of features across languages in order to deduce universal statements about them. This is the weakest part of the theory, because the absence of a feature across all languages does not necessarily mean that such a feature cannot arise, nor does the presence of a feature across all languages necessarily mean that it must always be present. Perhaps one of the worst results of language extinction is the fact that there will be fewer data sets to test proposed language universals against.
To address your other assertion, the concept of Universal Grammar does not only apply to syntax. For instance, there are phonological universals, which are deduced based on phonotactics of known languages. For example, all known languages have at the very least the minimal vowel system of three vowels /i a u/ or slight variations thereof.
"...Because a language has to be relearned by every individual born into the language environment, there is a lot imperfect copying going on, leading to individuals having slightly different parameters of their Universal Grammars."
My point with respect to the parameters notion is that the more closely we examine every aspect of language, the finer the distinctions we discover. At some point, the number of necessary "learned parameters" grows so large that it's like claiming that all books are the same except for the character parameters. The characters ARE the book, with some near-universal exceptions such as paper and binding, and the learned language parameters ARE the language, with some near-universal exceptions that are general cognitive properties of human brain architecture that apply to much more than just language.
Just because the parameters may be diverse does not mean the rules need to be just as diverse. If I may use the biological analogy again, despite the myriad combinations of DNA base pairs, they are all still translated by a universal genetic code that's simple enough to describe in one table (http://en.wikipedia.org/wiki/DNA_codon_table). I'm fully aware that a description of Universal Grammar would likely be more complex than this though.
The collection of language parameters in a person's head is not the language itself, just like the DNA of an individual is not the species itself. It's the translation of those parameters by a shared mechanism that gives rise to the spoken language. This is where I think your book analogy falls through.
Thank you for this succinct analysis. Put this way, Chomsky's model sounds downright Platonic in its insistence that the messy things we observe are merely instances of an ideal form.
I studied linguistics in the 90's, including a class and a few seminars with Chomsky, but left grad school after a year. At this point I'm neither a linguist nor an expert but maintain a slightly more than passive interest in the field.
These kinds of crackpot articles surface every couple of years in the mainstream press, always from people who demonstrate no knowledge of Chomsky's work beyond having read the back cover of his 1965 "Aspects of the Theory of Syntax."
I didn't read the original article, but as described in the Ars article, the thesis makes absolutely no sense whatsoever. The fact that we may have a hard time reconciling how languages can be diverse in their handling of subject/object/verb order doesn't disprove the concept of Universal Grammar. In fact, it's an argument for Universal Grammar that we can even have a coherent conversation about the way all languages handle subject/object/verb order - Universal Grammar is not about providing a set of rules describe all languages, it's about explaining what a possible rule is, and you have to approach that with logic rather than statistics.
The Ars article talks about "languages that have been evolving for a minimum of 4,000 years," as if those languages simply sprung into existence a little more than 4 millennia ago with no relation to anything else. This is the linguistic equivalent of creationism.
[Edit: expanded acronym "UG" to "Universal Grammar" for clarity]
According to the article, Chomsky probably didn't invent universal grammar, but he was certainly a strong proponent.
As to the second point, the article claims that languages have been evolving for a minimum of 4,000 years, not a maximum of four thousand years. This is a perfectly accurate statement about all known species of animals. All it really says is that we have records that let us trace certain sets of languages back to common roots at least 4,000 years ago, but we are not sure about their common roots before that.
This sort of argument is very common in genetics; I recall reading a book about how human men can all be traced to a common ancestor 90,000 years back. This does not imply that scientists believe humans sprang into being 90,000 years ago.
Well, I was admitted to a PhD program in linguistics which required more than passing familiarity with the basic concepts of Chomsky's research, of which Universal Grammar is the basic concept.
I think it's very worthwhile to read the original article in this case as it supports something far less strong than what the Ars article does. Its argument is pretty compelling once you get to what its actual argument is/should have been. But it hardly topples what it claims to.
Yeah, I tried to be careful to criticize only the Ars article, since I imagine there's a strong possibility the original may have been misquoted or misinterpreted.
A friend of mine, who is a graduate student in linguistics, told me the following story. (Caveat: this is urban legend-ish; the details could have been exaggerated.)
Once Chomsky was giving a lecture on different features of languages. For some feature (call it X; I forget the technical details), he boldly asserted that no human language could conceivably do X and proceeded to go on and on about why X was absolutely impossible.
There was a hand from the audience. "Estonian does X."
It's impossible to have an intelligent discussion without the original paper.
Chomsky didn't claim that all grammars obey the same word order, so it sounds like they're making bigger claims than they should be, assuming they're just showing that word-order is mostly lineage-dependent.
The original paper is always at the bottom of Ars articles. That's one reason I subscribe.
Anyway, there seems to be a growing body of evidence contrary to Chomsky's universal grammar (see Dan Everett for more examples). It's interesting to see more work in this direction.
EDIT: As pointed out below, you probably can't view this unless you're at a university/college with access to Nature. Sorry about that. Open Access is something you should support if this pisses you off.
I will say: Putting the link at the bottom of the article automatically makes this better than 90% of the "science journalism" I ever see.
Now for problem number two: I don't have thirty-two dollars (!) to spend reading this one paper. Neither do most other people who aren't already in the field.
For so long as the primary literature is behind these enormous paywalls, science will continue to consist of a tiny tiny percentage of humanity talking to themselves.
Oh yeah! Working at a university pays off again. I'm glad you raised that point, I was about to start commenting about the text of the article and now I realize that everyone may not have access.
Or check the local Barnes & Noble. The one in my small town at least has "Nature" in the magazine section, at around $20/copy (but of course you can read it while standing in the magazine section...).
At first I thought "this can't be the well known peer reviewed journal!?", but it was full of full research papers and sure appeared to be a peer reviewed journal. I don't know if it is the exact same edition as universities get, or some special edition aimed at members of the general public who happen to like reading research papers. It did have a section in front full of general science news and articles, and the part with the papers didn't seem to have a table of contents which struck me as odd.
This wouldn't be the first unexpected thing I have found at a "normal" bookstore. At either a small Waldenbooks or B. Dalton in a mall I found a copy of Misner, Thorne, and Wheeler's tome "Gravitation".
There is a user here on HN who makes papers that are behind paywalls freely available for download, but I can't find his account name (I thought I had it bookmarked).
Chomsky didn't claim that all grammars obey the same word order
I didn't get the sense that the article was saying he said that. It does seem pretty clear from the article that the research suggests that the Generative theory is wrong.
I don't get the impression that it's refuting the generative grammar theory as a whole, though, just the assertion by (many? most?) generative linguists that grammars are governed by a set of innate constraints. Granted, I'm not familiar with how much of the basis for acceptance of generative grammar theory hangs on the concept of a universal grammar.
Yeah - so the claim was that the deep structure of the grammar would be persistent - is that right? And this deep structure is something akin to that which is revealed through logical analysis. Is that the gist?
I understand the related language of thought arguments better... and have for a long time thought that their empirical credentials were dubious. A straightforward syntactic analysis (as it seems to be described in the article) isn't going to settle the question.
Facts: I can write sentences that no one has ever spoken before. The rest of you will have opinions whether they are allowed in English. Those opinions will tend to agree.
Conclusion: Our knowledge of English is not purely derived from the sentences we've heard. We must have had some model of a language in common when we started, and we fit the parameters to what we heard as children.
I find that more convincing than anything in the linked article.
Ok, maybe this is why UG always sounds so wrongheaded to me when explained in layman's terms:
Our knowledge of English is not purely derived from the sentences we've heard.
If you were to take the sentences in isolation, devoid of meaning and context, I suppose you might end up being circular. But new sentences are being thrown at you all the time (the training data is ~unlimited). Even a computer can recognize common patterns unsupervised, then apply those patterns to learning other patterns on subsequent data, and finally classify examples it has never seen before -- so this argument seems weak.
And doubly so if you consider that it isn't just a sentence we've heard, but a sentence about a real noun and a real verb that exists in space (at least initially). If I pick up a ball and say "ball" consistently, bam, you now have some kind of tagged input to work with.
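As a toy illustration of that "even a computer can do it" claim, here's a minimal sketch: a bigram model that, after a few sentences of exposure, rates an unseen-but-conventional word order higher than a scrambled one. The tiny corpus is obviously invented, and this isn't offered as a serious acquisition model:

```python
from collections import Counter

CORPUS = [
    "the dog chased the cat",
    "the cat saw the dog",
    "a dog saw a bird",
]

# Count adjacent word pairs across the exposure data.
bigrams = Counter()
for sent in CORPUS:
    words = ["<s>"] + sent.split() + ["</s>"]
    bigrams.update(zip(words, words[1:]))

def score(sentence):
    """Fraction of the sentence's bigrams that were ever observed."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    pairs = list(zip(words, words[1:]))
    return sum(bigrams[p] > 0 for p in pairs) / len(pairs)

print(score("the dog saw a cat"))   # never heard, conventional order -> high score
print(score("cat the dog a saw"))   # same words, scrambled -> low score
```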
We must have had some model of a language in common when we started, and we fit the parameters to what we heard as children.
Even taking the first sentence as given, why does that imply a UG? Perhaps the model is built from observed reality first, then language is built using that model? Or, maybe, to argue Chomsky's side, the "UG" is an extremely simple bias...
Now, Chomsky has probably addressed these things, but to a layman, the typical justifications seem a little weak.
My point was that once you assign the sound "ball" to a ball, you are also simultaneously (or eventually, as you see more examples) adding the sound "ball" to the category of sounds that describe things.
>The rest of you will have opinions whether they are allowed in English. Those opinions will tend to agree.
These agreeing opinions could easily be due to a common cultural heritage and a history of reinforcement. People get reinforced for speaking in a way that is understood (and seen as correct) by others and so they speak that way. It benefits them to conform. Since everyone who speaks English shares a cultural language heritage, there is obviously a shared history of reinforcement. The shared history of reinforcement leads to widespread agreement about rules, and yet there are still plenty of examples of English dialects that violate rules of other dialects.
For instance, Ebonics vs. American English. Both are valid dialects with different rules. Both are English. Are you claiming that Ebonics speakers have a different generative mechanism than American English speakers? Obviously Ebonics has evolved __from__ American English, into something new.
Creative, meaningful responding exists outside of language and the evidence is overwhelming that this creativity is due to reinforcement history rather than some innate quality of the brain.
Anyway I don't think your facts lead to your conclusion at all.
Obviously, feedback is part of learning language. B. F. Skinner argued that it was the whole explanation, as you're claiming. That was the idea Chomsky challenged.
You're not addressing the crucial point. The surprise isn't that people can invent new sentences. It's how well they can tell if other speakers will regard those sentences as valid English (or valid Ebonics). People can evaluate sentences, and forms of sentences, that have never been spoken. They could not have learned that by feedback, and linguists struggle to state the precise rules by which they do it.
Why not? I'm not saying that grammar doesn't exist, I'm asking why it needs to be innate?
The existence of grammar rules is what allows us to determine if a sentence is grammatically correct. These rules could be learned through experience. Why do they need to be innate?
I believe you don't understand Chomsky's argument. I'm not a die-hard behaviourist; I just know the behaviourist position well enough to know that your particular formulation of the "language is special" theory is easily refuted.
Obviously Ebonics has evolved __from__ American English, into something new.
That's not obvious at all. "Ebonics" (or AAVE -- the grammar, as opposed to transient slang) has much in common with Hiberno-Englishes found elsewhere. As a sometime speaker of Newfanese, I find the tense constructions of AAVE very familiar and predictable.
Alternative Conclusion: Humans are very good at spotting patterns and making generalisations.
I can identify that certain plants are trees, or certain animals are dogs, even trees or dog breeds I've never encountered before. Most people would agree with my identification.
People from a similar cultural background to me will generally agree with me as to whether a completely novel piece of music is in tune and rhythmic (i.e. "grammatical", not necessarily "good"). Music from some traditions often sounds discordant (i.e. "ungrammatical") to me, due to the different scales used, yet I am sure that they must be considered grammatical by those who are familiar with such music.
I think Ars and maybe the authors of the original article are way over-stating the significance of this article. I think the evidence presented in the article is convincing but it is not convincing as a refutation of UG. It seems at best to argue that head-firstness or complement-firstness are not actually features of languages.
The main argument I took from Chomsky is that the human brain is predisposed to language. I don't think proving whether language may have (/ may have not) evolved from a single root is so interesting.
The interesting part for me, is that the brain is an organ that has evolved in part due to its capability to process language, which in turn has been able to affect our capacity and capability to think.
I believe humans do have an innate capacity for language - and this capacity has provided us with an evolutionary advantage.
Whether or not all languages are similar or different isn't the point - the fact that the brain is able to efficiently process (and evolve) language can be seen as an example of evolutionary biology whether or not the precise mechanisms are understood.
It's more than that there is just a predisposition for language. Chomsky argues that this predisposition is a result of a universal grammar, or set of grammar rules for organizing language that is a fundamental part of our brains.
So, for this theory, it does matter what differences and similarities there are in grammar for different languages, and how those similarities or differences came about. What this study does is show that at least for one aspect of language, word order, what might seem to be universal fundamental rules of grammar are not, and are shaped by cultural, not genetic, evolution.
But I don't think a suggestion that there's such a thing as 'universal grammar' is the same as a suggestion that syntactic grammar is going to be shared between all languages.
As I understand it, the unifying element of UG could be simplistic or complex - which might make the concept hard to discredit.
I'm pretty certain that syntactic grammar would be the result of cultural and genetic evolution. If both factors are available to influence a language's evolution, I'd imagine that both would be likely to influence the way that language evolves to some degree.
I think the most interesting part of this mystery is how the brain's capacity for language might be innate - and in this respect, looking at the resultant artefacts produced by this processing might be a bit like looking at the remnants scattered around the room after a party the night before.
Perhaps the concept of a shared UG is more relevant to the process than the resulting syntactic structure?
> given that there aren't obvious word order patterns across languages, how does the human brain do so well at learning the rules that are a peculiarity to any one of them?
That simply means there are rules at a deeper level in the brain, and they may crystallize this way, or that way, depending on the particular language.
The study, if correct, simply says the rules are not apparent at the outermost level. Well, dig deeper, go closer to the hardware, and at some level the universal order should appear again.
After all, my neurons don't work any different than your neurons. But I suspect the highest level where universal rules operate is higher than that.
In bird studies, one bird species can be raised completely around a foreign bird species until they learn the foreign songs, yet the native fledgling will sing the new notes in their own species specific patterns (as if there is a neural template).
I'm not a linguistic professional, but I imagine humans have some hardwired language patterns and others that are historical artifacts.
Or it could just be because they are different species - that means different vocal cords, different brains, different ears and so on. It's equivalent to saying "chimpanzees act like baby humans when taught to play with blocks, yet still retain distinctive means of movement".
That study was about the presence of a neural template (and perhaps a linguistic one). Before the study people thought it might be possible for any singing bird to sing any other bird's songs.
An abstracted view of the point of the study is perhaps there is some hardwiring in our communication. Kernels of hardwiring that can't be nurtured away. In humans it would be analogous to the universal laws of linguistics.
It was just something to think about. Next time I'll just link the study.
And all this time, I thought the language hierarchy, and its correspondence with math/logic categories, was the real theory. UG is kind of an irrelevant side point, sort of an argument between "strong" vs "weak".
Type 0 - Recursively enumerable.
Type 1 - Context-sensitive.
Type 2 - Context-free.
Type 3 - Regular.
As all languages fall into one of those 4 categories, and no other (so far), that seems to me to be THE definition of "following the rules" to an extreme.
Once you've identified which category a language is in, the word order and semantics are pretty damned irrelevant, as any experienced computer programmer can attest (Java vs C++, Python vs Ruby, etc).
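To make the hierarchy point concrete, here's a minimal sketch using the standard textbook example (a^n b^n is context-free but not regular) rather than anything about natural languages:

```python
import re

# Regular (Type 3): "any number of a's followed by any number of b's".
REGULAR = re.compile(r"^a*b*$")

def context_free(s):
    """Type 2: a^n b^n -- needs matched counts, which finite automata can't track."""
    n = len(s) // 2
    return len(s) == 2 * n and s[:n] == "a" * n and s[n:] == "b" * n

for s in ["aabb", "aaab"]:
    print(s, bool(REGULAR.match(s)), context_free(s))
# "aaab" fits the regular pattern but not a^n b^n; which class a language sits
# in constrains what machinery you need to recognize it, independently of word
# order or semantics -- the point made above.
```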
I'm writing a robust parser for English using the principles of construction grammar. In contrast with Chomsky, CG has as its chief characteristic a pairing between form and semantics. We learn that "bird" means the concept of bird. We learn idioms the same way: "he's a cool customer" means the concept of a person calm in pressure situations.
One is indeed lucky if born into a culture with patterns between these pairings.
In my opinion there is no universal grammar and if my parser eventually works for English, it will have completely different construction rules for each targeted natural language beyond English.
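For what it's worth, here's a minimal sketch of the form-meaning pairing idea; the tiny inventory is invented for illustration, and the real parser's constructions and semantics are of course far richer:

```python
# Construction grammar, minimally: an inventory of form-meaning pairings,
# where idioms are stored whole, just like single words.

CONSTRUCTIONS = {
    "bird": "the concept BIRD",
    "cool customer": "a person calm in pressure situations",  # idiom stored whole
    "kick the bucket": "to die",
}

def analyze(utterance):
    """Pair each stored form found in the utterance with its meaning,
    preferring longer (more specific) constructions first."""
    meanings = []
    for form in sorted(CONSTRUCTIONS, key=len, reverse=True):
        if form in utterance:
            meanings.append((form, CONSTRUCTIONS[form]))
    return meanings

print(analyze("he's a cool customer"))
# [('cool customer', 'a person calm in pressure situations')]
# Words and idioms are learned the same way: as pairings of form and meaning,
# with no appeal to an innate universal rule set.
```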
For one thing, if we're looking for a biological basis of language, what makes subject-verb order the right level to look on? Why not look at the fact that language is arranged into subjects and verbs as the relevant layer?
Consider a head-first phrase structure rule such as PP := P(Preposition) NP(Noun Phrase). In such a rule, the preposition is the head. There is an assertion that languages have the property head-first or complement-first. In this case, the complement-first rule would instead be
PP := NP(Noun Phrase) P(Preposition)
So in head-first languages, we would see:
PP := P NP
The man is PP(in the house)
VP := V NP
I VP(ate the bird)
And in complement-first languages, we would see:
PP := NP P
The man is PP(the house in)
VP := NP V
I VP(the bird ate)
The article states that if head-first and complement-first were robust, universal features of languages, then we would expect the evolutionary model to show that the appearances of word orders demonstrating these features are dependent on each other across language families. However, it does not show this across the language families. It only shows that certain word orders created by head-firstness or complement-firstness are dependent in certain language families.
I hope this was a decent enough explanation and easy for others to follow. I was a Linguistics major, so it can be hard for me to explain it well and in an easy to understand way.
[edited in an attempt at formatting]
[edited to provide more examples]
I'm admittedly having a bit of trouble parsing the first sentence of your last paragraph.
When you say "if head-first and complement-first were robust, universal features of languages", do you mean "head-first and complement-first cover the whole set of possibilities for the property 'word order'", or "all languages (within a family?) are one or the other", or something more like "a given language cannot contain both head-first and complement-first structures", or something different?
Also, with "appearances of word orders demonstrating these features are dependent on each other across language families", in what sense do you mean that the appearances would be dependent on each other?
I mean something along the lines of: if head-first and complement-first are the only possibilities for phrase rules, and they must apply to all phrase rules for a given language, i.e.
for all X, s.t. X is a grammatical category which can be the head of a phrase: XP := X _
or
for all X, s.t. X is a grammatical category which can be the head of a phrase: XP := _ X
Then, modulo processes which change a sentence from its underlying form to its surface form, we would only see forms characteristic of head-first or complement-first phrase structure.
In the second case, I intend dependent as the article means dependent. That is co-appearing because of the same underlying feature. To use the genetic analogy, co-appearing because one gene (the head firstness gene) determines them.
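To make that "one head-direction setting for every phrase type" claim concrete, here's a minimal sketch; the toy lexicon and the single head_first flag are my own illustrative assumptions:

```python
# A single parameter orders the head and its complement in every rule,
# following the schema above: XP := X Comp (head-first) or XP := Comp X.

def phrase(head, complement, head_first=True):
    """Build a phrase with the head before or after its complement."""
    return f"{head} {complement}" if head_first else f"{complement} {head}"

def sentences(head_first):
    np = "the bird"
    vp = phrase("ate", np, head_first)           # VP := V NP  /  NP V
    pp = phrase("in", "the house", head_first)   # PP := P NP  /  NP P
    return f"I {vp}", f"the man is {pp}"

print(sentences(head_first=True))    # ('I ate the bird', 'the man is in the house')
print(sentences(head_first=False))   # ('I the bird ate', 'the man is the house in')
# If the parameter really were set once per language, VP order and PP order
# should always co-vary; the paper's claim is that, across families, they don't.
```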
However, I'm not seeing how this strikes at Chomsky's theory. It does give us some insight into mental machinery, that languages aren't necessarily parsimonious in reusing the same structures across NPs and VPs. But what's the significance?
The Ars article way over-states the significance. It should really say something like "Chomsky was wrong, languages are not only head-first or complement-first, and they only appear that way because of cultural inheritance, not universal rules" But that's not very exciting, is it?
The phrase "Manufacturing Consent" has nothing to do with universal grammar. It is about the factors that create a news environment that favors specific interests.