In “The Structure of a Semantic Theory,” Katz and Fodor lay out their reasoning for what a semantic theory ought to look like and what it ought to be able to do. One key feature of any semantic theory is a solution to what they call the “projection problem”: how is it possible that people who have heard and uttered only a finite number of sentences can somehow generate and comprehend an infinite variety of other sentences? This is important not only in a purely theoretical sense, but also because the answer (whatever it may be) must be something that all speakers can employ rapidly at nearly all times, for the vast preponderance of sentences being uttered at any given moment have never been uttered before, and yet communication generally achieves a measure of success.

As a computer programmer, I have found that there are few better tests of how well I understand some complex procedure than to try to teach a computer to perform it. Never having heard of Grice's maxims, the infernal machines are completely unforgiving. As far as metalanguage goes, at least they'll complain if what I say is syntactically invalid; semantically, however, the responsibility is all mine. Computers can't guess what I mean; they can only do what I say, and that charge they carry out with the utmost fidelity, even if what I said turns out to have been dangerously wrong. Perhaps for this reason (and perhaps also because of their stereotypically dry humor), programmers like to joke wistfully about how much human time and effort could be saved if only processors or programming languages were outfitted with a DWIM instruction. The acronym expands to “Do What I Mean”; frustrated programmers occasionally even say the phrase aloud to a misbehaving machine, unself-consciously, only half-jokingly. When computer software does do the right thing, it's not only cause for celebration, but also evidence that the programmer has both conceptualized and communicated the procedure accurately. The contrapositive is, of course, equally true: if the programmer's mind is cloudy, his program can't possibly dot all the i's and cross the t's.

On the basis of my experience, and following Katz and Fodor, I propose that there is a “metaprojection problem.” How is it possible that someone who is not fully conscious of his own language generation process — let alone in conscious control of it — could somehow generate a theory capable of explaining the infinite variety of real human language?

Efforts have nonetheless been made (as evidenced by the continued existence of the field of linguistics). In the absence of what I consider a necessary condition, a theory can go only so far. Saussure starts from first principles, observing the general lack of inherent rationale for the correspondences between shapes, sounds, and the meanings assigned to them. He designates the correspondence between “signifier” and “signified” as “arbitrary” in the sense that either could just as easily have been assigned another partner when the assignments were being handed out. Indeed, my intuition agrees that this is the case. I can even easily imagine a reason why this is advantageous (because, combinatorially, it can account for the very large amount of meaning needing to be represented). But on what basis did this arise? Is the arbitrariness of linguistic signs a historical accident, itself arbitrary? Saussure does not give us data, presumably difficult to come by, to support his assertions. Instead, he is the first in a long line of linguistic theorists to extrapolate from his intuition and experience, then make a strong rhetorical case for his conclusions. Jakobson takes this to its logical endpoint, propounding a series of big ideas carried along in part by his charisma.

Near the end of his life, Saussure despaired for the future of linguistics. Doubtless he would have taken heart to see the field grow up in the 1960s, with Katz and Fodor's high-level outline of the requirements for any theory and Chomsky's first attempt at systematizing universal rules of grammar. Modern linguistics became possible in the last few centuries, when the world became sufficiently small for linguists to easily acquire and study data about widely disparate languages, and became scientifically respectable over the last several decades, as the idea of applying scientific rigor to language gained currency. Having passed these two points of inflection, the field began to attract more minds.

Far from jettisoning the role of charisma in linguistics, Chomsky in particular acquired a cadre of followers as he laid the groundwork for the study of syntax. Adhering to the longstanding assumption of universality, his early theory of generative grammar posits that all human languages are related to some abstract “deep structure” by a set of rule-based transformations. (Perhaps there could be a human language for which the set of transformations is null, which would be terribly interesting to identify and study, but for some odd reason this appears not to be the case.) For those of a logical problem-solving bent, this is a satisfying sort of question to study. A syntactic theory's effectiveness can be tested: throw real sentences at it and see how it holds up. The goal of any syntactic theory is not to generate any particular sentence in the same way that a human would, but to generate all the valid sentences in a language and none of the invalid ones, given the language's lexicon and ruleset.
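
To make the flavor of such a test concrete, here is a toy sketch; the single rule, two-noun lexicon, and “observed” sentences are all invented for illustration, and the code stands in for no actual formalism of Chomsky's.

```python
# Toy sketch of testing a ruleset: does it produce every attested sentence
# (no undergeneration) and none of the ungrammatical ones (no overgeneration)?
from itertools import product

lexicon = {
    "Det": ["the"],
    "N": ["cat", "dog"],
    "V": ["chased"],
}

# A single phrase-structure rule: S -> Det N V Det N
rules = {"S": [["Det", "N", "V", "Det", "N"]]}

def generate(symbol="S"):
    """Yield every terminal string the ruleset can produce."""
    if symbol in lexicon:
        for word in lexicon[symbol]:
            yield [word]
        return
    for expansion in rules[symbol]:
        for parts in product(*(list(generate(s)) for s in expansion)):
            yield [word for part in parts for word in part]

generated = {" ".join(words) for words in generate()}

observed_valid = {"the cat chased the dog", "the dog chased the cat"}
observed_invalid = {"cat the dog chased the"}

print("undergeneration:", observed_valid - generated)   # valid sentences the theory missed
print("overgeneration:", generated & observed_invalid)  # invalid sentences the theory produced
```

A real test would of course need thousands of attested sentences and a far richer ruleset; the point is only that the criterion is mechanical: nothing valid missed, nothing invalid produced.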

Chomsky's approach has its idiosyncrasies. Most fundamentally, he suffers from the computer scientist's fetish for solving problems by adding layers of abstraction. An old saw of unclear provenance goes, “Any problem in computer science can be solved with another layer of indirection. But that usually will create another problem.” Two of the basic assumptions in Aspects of the Theory of Syntax are of this form: first, that a theory of grammar should be derived and operate independently from meaning; second, that the significant discrepancies in human languages are mere surface effects of some uniform deep structure. I have long been suspicious of these assumptions. (In a paper about meaning, it goes without saying that calling something deep does not make it so.) I know all too well the instinct to which Chomsky succumbs. Harnessed properly, as part of a larger toolkit of techniques, it's a healthy instinct for a programmer — or a linguist, I imagine — to have. Plus, these assumptions make useful starting points, especially the former. Chomsky lays his motivation bare (Katz and Fodor, 480):

Part of the difficulty with the theory of meaning is that “meaning” tends to be used as a catch-all term to include every aspect of language that we know very little about. Insofar as this is correct, we can expect various aspects of this theory to be claimed by other approaches to language in the course of their development.

If this nebulous “meaning” can be left out of his theory, Chomsky figures, then perhaps the problem of syntax can be circumscribed and made tractable. Better yet, his theory of syntax can later be combined with any of several future theories of meaning as they become available, as long as they make the same clear delineation between grammar and meaning. This orthogonality is very much in the computer science tradition, wherein solutions to smaller problems can be mixed and matched along well-defined interfaces to solve larger problems.

Chomsky's reductionist approach avoids addressing the metaprojection problem by deliberately restricting its scope, concerned neither with human language generation nor with the relation of meaning and structure, and the fate of his theory is written into its foundational assumptions. As tricky details arise, Chomsky plays his “layer of abstraction” card over and over, well past the point of simplicity or elegance, simply because the theory keeps requiring it. Need a third element in the tree? Invent a new parent to keep things binary. For Wh-movement, he creates a new slot near the top of the tree, ready to receive interrogative pronouns when they move, otherwise empty. When some human language exhibits no Wh-movement, such as Chinese, he claims that there really is some and we just can't see it. (Just like a computer scientist, to characterize “null” as a meaningful quantity when it's convenient!) Later, when the mapping between surface structure and deep structure proves insufficient to explain certain language phenomena, he invents a further distinction between logical form and “phonetic form.” For each new problem that crops up, we can guess how Chomsky is going to address it; we almost don't need him anymore. Fascinating at its center, his approach becomes self-parody along its periphery. The software developer Jamie Zawinski writes amusingly of the myopia of the specialist: “To a database person, every nail looks like a thumb. Or something like that.” The shoe fits Chomsky.

Of course, his is not the only theory of syntax. Other theories inherit some of his assumptions and discard others. Head-driven phrase structure grammar, or HPSG, is especially interesting to me for two reasons: it lends itself particularly well to computerized testing, and it does not strive to keep grammar and meaning separate. On the contrary, its lexicon is enriched over and above that of the Chomskian model. For example, verbs are listed not merely as verbs, but carry information about their contextual requirements: how many subjects, how many objects, and of what types? HPSG does not discard universality per se — that verbs have unique semantic requirements does seem plausible as a universal feature of language — but the lexicons of different languages are free to look quite different from one another, and from the simpler lists found in Chomsky-style theories.
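
To illustrate the enriched lexicon (and only to illustrate: real HPSG uses typed feature structures, and the entries below are invented), the idea can be sketched as follows.

```python
# Invented, drastically simplified sketch of HPSG-style lexical entries:
# each verb carries its contextual requirements (its valence) with it.
lexicon = {
    "sleeps":  {"pos": "verb", "subj": ["NP"], "comps": []},            # intransitive
    "devours": {"pos": "verb", "subj": ["NP"], "comps": ["NP"]},        # transitive
    "puts":    {"pos": "verb", "subj": ["NP"], "comps": ["NP", "PP"]},  # needs an object and a location
}

def requirements_satisfied(verb, subject_types, complement_types):
    """Check whether the supplied arguments match the verb's stated valence."""
    entry = lexicon[verb]
    return entry["subj"] == subject_types and entry["comps"] == complement_types

print(requirements_satisfied("devours", ["NP"], ["NP"]))  # True
print(requirements_satisfied("devours", ["NP"], []))      # False: the object is missing
```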

Again, these syntactic theories do not claim to mimic the process by which humans form individual sentences. A full-on semantic theory would need to account for this. In Words and Rules, Pinker takes a stab at it, claiming that irregular verbs are connected with much of the brain's language circuitry and observing the connections in the laboratory wherever possible. Unlike Chomsky, Pinker does seek to explain how an idea gets expressed in words, and he believes that deviations from rules are, rather than outliers to be reconciled, a native part of the process. Informed by his experimental psychologist's sensibility, Pinker's angle of attack is as empirical as it is theoretical. Just as his working hypothesis builds on Chomsky's ideas, so too does his scientific approach add rigor, testability, and plenty of actual testing to that which Chomsky introduced in the 1960s. Pinker's theory — that humans generate and comprehend language through an algorithm combining loose associations and hard abstractions — makes sense, and the data he references seem to support it.
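
The heart of the proposal can be caricatured in a few lines; the tiny memory and the bare “-ed” rule below are my gloss on the idea, not Pinker's actual model.

```python
# Cartoon of Pinker's words-and-rules idea: irregular past tenses come from
# associative memory, and when no stored form exists, a general rule applies.
irregular_memory = {"go": "went", "sing": "sang", "bring": "brought"}

def past_tense(verb):
    # Association first: a stored form blocks the rule ("went", never "goed").
    if verb in irregular_memory:
        return irregular_memory[verb]
    # Otherwise the abstraction applies, even to novel verbs ("blick" -> "blicked").
    if verb.endswith("e"):
        return verb + "d"
    return verb + "ed"

print([past_tense(v) for v in ["go", "walk", "blick", "chide"]])
# ['went', 'walked', 'blicked', 'chided']
```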

It could well be that Pinker has found the next step forward in understanding human language. The first generation of modern linguists issued intuitive pronouncements. The next generation brought mathematical precision to bear. The latest crop takes advantage of advances in functional brain imaging. More research must be done along these lines, but how? Another way of looking at language and the brain could come from neurology. Oliver Sacks' Musicophilia is primarily concerned with music, but intersections with language are unavoidable.

I myself encountered the significance of this pairing earlier this semester, in a music composition course. We spent most of our class time learning about “extended techniques,” ways of coaxing new sounds from old instruments or from found objects. Early on, the professor asked us to describe the music each of us listens to, the music we play (if any), and the music we'd like to compose. After a lot of far-out answers, I explained that I'm a pianist; my favorite music to hear and play is that of Medtner, and to a lesser extent that of his friend Rachmaninov; and that as a first-timer with designs on a lifetime in composition, I would like to try my hand at writing something in sonata form, heavily contrapuntal, maybe even with a fugue, if I could be so lucky as to manage that. “Why?” asked my professor, after a pause, incredulous, then hinted that we could look forward to discussing this later.

A week or so afterwards, when the teaching assistant in a private conversation suggested that I try using some extended techniques in my composition, a metaphor came upon me. “[Extended techniques] are a foreign language to me,” I said, “and as someone fascinated by language, that's a definite plus. But my musical imaginings don't include them. When I want to convey some message in words, even without regard for the addressee, I instinctively reach for English words: they're native to me and the right ones come quickly. Well, the late Romantic idiom is my native musical language. If I used extended techniques in my music, they would sound strange and out of place. I'm interested to learn about them, but right now I don't feel comfortable using them.”

No sooner had I finished than I realized my characterization was as factual as it was metaphorical. Music, like language, is a means of communication. Within it there are related families of music, and within those, identifiably individual styles of expression that nonetheless show their membership in some community of styles. There are genres every bit as real as Bakhtin's speech genres. Synesthesia aside (though Sacks goes into this a bit), there is color in music, and it's every bit as difficult to pin down as color in speech, perhaps more so. There are musical forms which carry a syntax with them, and there are instances of elliptical music that deliberately flout form while still carrying a comprehensible payload. There is experimental music that throws away all the rules in order to exploit sound for its own sake, just as there is experimental poetry that does the same. It is safe to say not only that most of the conceptualizations about language we have encountered apply to music, but also that these conceptualizations could as well have been made about music first, then extended to language.

Some of the later chapters in Musicophilia make just that sort of leap. Where Pinker's window into language is through irregular verbs, Sacks' window into the mind is through music, which he believes is wired into many different parts of the brain at some primal level. Sacks describes cases where music persists even after the loss of language: aphasics who can still sing as articulately as ever, others who have learned to cover for their speech loss through song, even one who was able to regain some measure of speech (and other lost brain function) through music therapy. One chapter is devoted to Williams syndrome, which results from the deletion of a stretch of genes, much as Down's syndrome results from an extra chromosome. While overall intelligence is lowered in Williams, other hallmarks include preternatural gregariousness, strikingly original word choices, and heightened musical sensitivity. It's awfully interesting, even in pathology, that these changes should be clustered together. Sacks is not attempting to explain language in this book, so he gets a free pass for avoiding the metaprojection problem; still, perhaps functional brain imaging can shed more light on what's happening in the minds of those with Williams, and to what extent similar processes are at work in unaffected brains.

The “metaprojection problem” is a nice enough critique, but I of course cannot propose a direct means for someone to achieve the sort of self-awareness required to overcome it. “Expanding one's consciousness” is vague (and perhaps necessarily so). What I can propose, as the linguists I'm critiquing have done, are ideas that may get us closer. Chewing on a variety of tractably small problems may at least help us understand the issues (though Kristeva seems to think that theorizing about language has the potential to do harm). Modern observational and experimental techniques, such as those of psychology and neurology, certainly will move our shared comprehension forward. Most importantly, since language sits at the nexus of a huge variety of disciplines — poetry, literature, pedagogy, culture, psychology, neurology, evolutionary biology, computer science, music, and mathematics, to name a few — the more connections we can find, the better. Surely there are network effects of interdisciplinary intertextuality to be found and exploited.

Let us return to the realm of computer programming. Although each sentence is almost certainly new, imagine if we were to have a complete, searchable map of every sentence ever uttered. Given any new sentence, we could almost certainly find in the map a previous sentence arbitrarily close in meaning. (In mathematical language, we could express this with the epsilon-delta formulation used for, among other things, proving fundamental properties of continuous functions: for any intended meaning and any tolerance, there would be a stored sentence within that tolerance of it.) This giant database of sentences could even be grouped by family resemblance, such that a computer could simulate the varying precision of human speech by pseudo-randomly choosing better or worse exemplars of a particular sentential meaning. This would be overkill for computerized testing of a syntactic theory, but could be a strong advantage when it comes time to subject a semantic theory to the Turing test, wherein a human judge converses electronically with either a human or a machine and attempts to determine, based on linguistic criteria, which of the two he's dealing with.
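
As a sketch of how such a lookup might work, consider the toy below; a real system would need a far better representation of meaning (which is exactly what we lack), so the bag-of-words vectors and cosine similarity here are only stand-ins, and the stored sentences are invented.

```python
# Toy "map of every sentence ever uttered": given a new sentence, find the
# stored sentence nearest to it under a crude approximation of meaning.
from collections import Counter
from math import sqrt

stored_sentences = [
    "the concert begins at eight tonight",
    "my cat refuses to eat in the morning",
    "the recital starts this evening at eight",
]

def vector(sentence):
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[word] * b[word] for word in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest(new_sentence):
    """Return the stored sentence closest in (crudely approximated) meaning."""
    target = vector(new_sentence)
    return max(stored_sentences, key=lambda s: cosine(vector(s), target))

print(closest("the performance starts tonight at eight"))
# -> "the concert begins at eight tonight"
```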

I remarked earlier on the human value of programming a procedure into a computer: when it works, the programmer probably understands the procedure pretty well. It's not proof, however. In computer science — usually unrelated to software development, but relevant in this particular area — a rule of thumb is that a program can either be provably correct or actually try to do something interesting. Proof is hard to come by, but it's often possible to teach the computer to check its own work. In order to achieve this, the programmer has to understand not only the procedure in question, but also something about the nature of his own understanding. I can't think of a better self-test, and I can't think of a better analogy to the origin and function of theories about language and meaning.
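
A trivial instance, assuming nothing beyond the standard library: a program that sorts a list and then verifies that its output really is an ordered rearrangement of its input. Writing the check forces the programmer to state what “correct” means, which is the kind of self-knowledge I have in mind.

```python
# The program both performs the procedure and checks its own work.
from collections import Counter

def my_sort(items):
    # Insertion sort, chosen for transparency rather than speed.
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] <= item:
            i += 1
        result.insert(i, item)
    return result

def check(original, output):
    in_order = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    same_items = Counter(original) == Counter(output)  # nothing lost, nothing invented
    return in_order and same_items

data = [5, 3, 9, 1, 3]
result = my_sort(data)
assert check(data, result), "the program caught its own mistake"
print(result)  # [1, 3, 3, 5, 9]
```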

Meanwhile, we have learned to expect computers to make imperfect interlocutors. Until we achieve linguistic nirvana and a solution to the metaprojection problem, we should continue to expect semantic theories to fall short. After all, we've been making language for millennia, and only recently have we begun generalizing about it in earnest. Until at least one of us becomes fully conscious of the admixture of experience, insight, and rules that generates his own language, we can't hope to impart the process to machines, or to propose a semantic theory that dares to explain the magic of human language once and for all. Unlike Saussure, however, I expect we'll get there — possibly even soon.