I don't agree that natural language processing is at all related to the symbol grounding problem. Symbol grounding, as explained in the provided article, is about associating internal symbols with meanings. In the putative internal language used by humans (something like linguistics' deep structure), symbols--perhaps distributed patterns of brain activity--are somehow tied to meanings.

But computers don't face this problem at all. Logical predicates need not--and possibly cannot--have meaning to computers. The fact father(david,josh). doesn't connote anything to the computer; it implies nothing about love, filial duty, or the inevitable disappointments of coming to know one's parents. Similarly, a computer can determine that Give the book. isn't a grammatical English sentence--not because the meaning of "give" is incompatible with the sentence, but because "give" is presumably known to take two NP arguments, not one.

To reiterate: I don't think NLP and symbol grounding are related, because I don't think NLP has to--or is capable of--associating symbols with meanings. (I hope I haven't misunderstood the symbol grounding problem.)
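To make the subcategorization point concrete, here is a toy sketch of how a program could reject Give the book. with no access to meaning at all. The lexicon, the crude NP counter, and the function names are all invented for illustration; a real parser would use a proper grammar, but the point stands either way: the check is purely formal.

```python
# Toy subcategorization check: the verb's required argument count is
# compared against the number of NPs that follow it. No semantics anywhere.
SUBCAT = {
    "give": 2,  # hypothetical lexicon entry: give [NP] [NP]
    "read": 1,  # read [NP]
}

def np_count(tokens):
    """Very crude NP counter: each determiner is taken to start one NP."""
    return sum(1 for t in tokens if t in {"the", "a", "an"})

def grammatical(sentence):
    """True if the verb's subcategorization frame is satisfied."""
    verb, *rest = sentence.rstrip(".").lower().split()
    return SUBCAT.get(verb, 0) == np_count(rest)

print(grammatical("Give the book."))           # False: give wants two NPs
print(grammatical("Give the child the book.")) # True
```

The program "knows" nothing about giving; it only counts arguments, which is exactly why I think grammaticality checking of this kind sidesteps grounding entirely.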