Semantics is the study of the meaning of linguistic
expressions. The language can be a natural language, such as English or Navajo,
or an artificial language, like a computer programming language. Meaning in
natural languages is mainly studied by linguists. In fact, semantics is one of
the main branches of contemporary linguistics. Theoretical computer scientists
and logicians think about artificial languages. In some areas of computer
science, these divisions are crossed. In machine translation, for instance,
computer scientists may want to relate natural language texts to abstract
representations of their meanings; to do this, they have to design artificial
languages for representing meanings.
There are strong connections to philosophy. Earlier in
the twentieth century, much work in semantics was done by philosophers, and some
important work is still done by philosophers.
Anyone who speaks a language has a truly amazing
capacity to reason about the meanings of texts. Take, for instance, the
sentence
(S) I can't untie that knot with one hand.
Even though you have probably never seen this
sentence, you can easily see things like the following:
1. The sentence is about the abilities of whoever spoke or wrote it. (Call this person the speaker.)
2. It's also about a knot, maybe one that the speaker is pointing at.
3. The sentence denies that the speaker has a certain ability. (This is the contribution of the word ‘can't’.)
4. Untying is a way of making something not tied.
5. The sentence doesn't mean that the knot has one hand; it has to do with how many hands are used to do the untying.
The meaning of a sentence is not just an unordered
heap of the meanings of its words. If that were true, then ‘Cowboys ride
horses’ and ‘Horses ride cowboys’ would mean the same thing. So we need to
think about arrangements of meanings.
Here is an arrangement that seems to bring out the
relationships of the meanings in sentence (S).
Not [ I [ Able [ [ [Make [Not [Tied]]] [That knot] ] [With One Hand] ] ] ]
The unit [Make [Not [Tied]]] here corresponds to the
act of untying; it contains a subunit corresponding to the state of being
untied. Larger units correspond to the act of untying-that-knot and to the act
of untying-that-knot-with-one-hand. Then this act combines with Able to make a
larger unit, corresponding to the state of
being-able-to-untie-that-knot-with-one-hand. This unit combines with I to make
the thought that I have this state -- that is, the thought that
I-am-able-to-untie-that-knot-with-one-hand. Finally, this combines with Not and
we get the denial of that thought.
This idea, that meaningful units combine systematically
to form larger meaningful units and that understanding a sentence is a matter of
working out these combinations, has probably been the most important theme in
contemporary semantics.
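To make this concrete, here is a minimal sketch in Python (an illustration added here, not part of any standard analysis): it builds the arrangement for sentence (S) bottom-up, so that each larger meaningful unit literally contains the smaller units it is made from.

# A rough sketch: the meaning arrangement for sentence (S), built bottom-up.
# Each unit is a plain Python list whose first element names an operator or part.

untied = ["Not", ["Tied"]]                        # the state of being untied
untie = ["Make", untied]                          # the act of making-untied
untie_that_knot = [untie, ["That knot"]]          # untying that knot
untie_with_one_hand = [untie_that_knot, ["With One Hand"]]
able = ["Able", untie_with_one_hand]              # being able to do that act
i_am_able = ["I", able]                           # the thought that I have this ability
sentence_s = ["Not", i_am_able]                   # the denial of that thought

print(sentence_s)   # prints the nested arrangement shown above

Nothing here does any real semantic work yet; the point is only that each larger unit is built out of, and contains, the smaller ones.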
Linguists who study semantics look for general rules
that bring out the relationship between form, which is the observed arrangement
of words in sentences, and meaning. This is interesting and challenging, because
these relationships are so complex.
A semantic rule for English might say that a simple
sentence involving the word ‘can't’ always corresponds to a meaning arrangement
like
Not [ Able ... ],
but never to one like
Able [ Not ... ].
For instance, ‘I can't dance’ means that I'm unable to
dance; it doesn't mean that I'm able not to dance.
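To see how much hangs on the order of Not and Able, here is a small Python sketch with an invented situation (a speaker who happens to be able to dance, and also able to refrain from dancing); the two scopings come out with different truth values.

# Invented toy situation: what the speaker is able to do.
can = {"dance": True, "refrain from dancing": True}

# The arrangement the rule allows for 'can't':  Not [ Able [ dance ] ]
# 'I can't dance' denies the ability to dance.
reading_1 = not can["dance"]                  # False in this situation

# The arrangement the rule excludes for 'can't':  Able [ Not [ dance ] ]
# 'I am able not to dance' asserts the ability to refrain.
reading_2 = can["refrain from dancing"]       # True in this situation

print(reading_1, reading_2)                   # False True -- the two orders differ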
To assign meanings to the sentences of a language, you
need to know what they are. It is the job of another area of linguistics,
called syntax, to answer this question, by providing rules that show how
sentences and other expressions are built up out of smaller parts, and
eventually out of words. The meaning of a sentence depends not only on the
words it contains, but on its syntactic makeup: the sentence
(S′) That can hurt you,
for instance, is ambiguous -- it has two
distinct meanings. These correspond to two distinct syntactic structures.
In one structure ‘That’ is the subject and ‘can’ is an auxiliary verb (meaning
“able”), and in the other ‘That can’ is the subject and ‘can’ is a noun
(indicating a sort of container).
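In the bracket notation used earlier, the two structures might be displayed roughly like this:

[ That ] [ can [ hurt you ] ]      ('That' is the subject; 'can' is an auxiliary verb meaning "able")
[ That can ] [ hurt you ]          ('That can' is the subject; 'can' is a noun for a sort of container)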
Because the meaning of a sentence depends so closely
on its syntactic structure, linguists have given a lot of thought to the
relations between syntactic structure and meaning; in fact, evidence about
ambiguity is one way of testing ideas about syntactic structure.
You would expect an expert in semantics to know a lot
about what meanings are. But linguists haven't directly answered this question
very successfully. This may seem like bad news for semantics, but it is
actually not that uncommon for the basic concepts of a successful science to
remain problematic: a physicist will probably have trouble telling you what
time is. The nature of meaning, and the nature of time, are foundational
questions that are debated by philosophers.
We can simplify the problem a little by saying that,
whatever meanings are, we are interested in literal meaning. Often, much
more than the meaning of a sentence is conveyed when someone uses it. Suppose
that Carol says ‘I have to study’ in answer to ‘Can you go to the movies
tonight?’. She means that she has to study that night, and that this is a
reason why she can't go to the movies. But the sentence she used
literally means only that she has to study. Nonliteral meanings are studied in pragmatics,
an area of linguistics that deals with discourse and contextual effects.
But what is a literal meaning? There are four sorts of
answers: (1) you can dodge the question, (2) appeal to usage, (3) appeal
to psychology, or (4) treat meanings as real objects.
(1) The first idea would involve trying to reconstruct
semantics so that it can be done without actually referring to meanings. It
turns out to be hard to do this -- at least, if you want a theory that does
what linguistic semanticists would like a theory to do. But the idea was
popular earlier in the twentieth century, especially in the 1940s and 1950s,
and has been revived several times since then, because many philosophers would
prefer to do without meanings if at all possible. But these attempts tend to
ignore the linguistic requirements, and for various technical reasons have not
been very successful.
(2) When an English speaker says ‘It's raining’ and a
French speaker says ‘Il pleut’ you can say that there is a common pattern of
usage here. But no one really knows how to characterize what the two utterances
have in common without somehow invoking a common meaning. (In this case, the
meaning that it's raining.) So this idea doesn't seem to really explain what
meanings are.
(3) Here, you would try to explain meanings as ideas.
This is an old idea, and is still popular; nowadays, it takes the form of
developing an artificial language that is supposed to capture the "inner
cognitive representations" of an ideal thinking and speaking agent. The
problem with this approach is that the methods of contemporary psychology don't
provide much help in telling us in general what these inner representations are
like. This idea doesn't seem yet to lead to a methodology that can produce a
workable semantic theory.
(4) If you say that the meaning of ‘Mars’ is a certain
planet, at least you have a meaning relation that you can come to grips with.
There is the word ‘Mars’ on the one hand, and on the other hand there is this
big ball of matter circling around the sun. This clarity is good, but it is
hard to see how you could cover all of language this way. It doesn't help us
very much in saying what sentences mean, for instance. And what about the other
meaning of ‘Mars’? Do we have to believe in the Roman god to say that ‘Mars’ is
meaningful? And what about ‘the largest number’?
The approach that most semanticists endorse is a
combination of (1) and (4). Using techniques similar to those used by
mathematicians, you can build up a complex universe of abstract objects that
can serve as meanings (or denotations) of various sorts of linguistic
expressions. Since sentences can be either true or false, the meanings of
sentences usually involve the two truth values true and false. You can
make up artificial languages for talking about these objects; some semanticists
claim that these languages can be used to capture inner cognitive representations.
If so, this would also incorporate elements of (3), the psychological approach
to meanings.
Finally, by restricting your attention to selected parts of
natural language, you can often avoid hard questions about what meanings in
general are; in this sense, the approach to some extent dodges the general
question of what meanings are. The hope, however, is that as more
linguistic constructions are covered, better and more adequate representations
of meaning will emerge.
Though "truth values" may seem artificial as
components of meaning, they are very handy in talking about the meaning of
things like negation; the semantic rule for negative sentences says that their
meanings are like that of the corresponding positive sentences, except that the
truth value is switched, false for true and true for false. ‘It isn't raining’
is true if ‘It is raining’ is false, and false if ‘It is raining’ is true.
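As a sketch of how such a rule could be written down, here is a tiny Python version, with an invented two-sentence situation supplying the truth values of the positive sentences.

# Invented toy situation: truth values of some positive sentences.
situation = {"It is raining": True, "It is snowing": False}

def negate(sentence):
    # Semantic rule for negation: the negative sentence gets the
    # opposite truth value from the corresponding positive sentence.
    return not situation[sentence]

print(negate("It is raining"))    # False: 'It isn't raining' is false here
print(negate("It is snowing"))    # True:  'It isn't snowing' is true here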
Truth values also provide a connection to validity
and to valid reasoning. (It is valid to infer a sentence S2 from S1 in
case S2 couldn't possibly be false when S1 is true.) This interest in valid
reasoning provides a strong connection to work in the semantics of artificial
languages, since these languages are usually designed with some reasoning task
in mind. Logical languages are designed to model theoretical reasoning such as
mathematical proofs, while computer languages are intended to model a variety
of general and special purpose reasoning tasks. Validity is useful in working
with proofs because it gives us a criterion for correctness. It is useful in
much the same way with computer programs, where it can sometimes be used to
either prove a program correct, or (if the proof fails) to discover flaws in
programs.
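Here is a minimal sketch of that criterion in Python (the sentences and their truth conditions are invented for the example): an inference from S1 to S2 counts as valid when there is no way of assigning truth values that makes S1 true and S2 false.

from itertools import product

# Invented atomic sentences, and two compound sentences built from them.
atoms = ["it is raining", "it is cold"]

def s1(v):    # 'It is raining and it is cold'
    return v["it is raining"] and v["it is cold"]

def s2(v):    # 'It is raining'
    return v["it is raining"]

def valid(premise, conclusion):
    # The inference is valid if no assignment of truth values
    # makes the premise true and the conclusion false.
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if premise(v) and not conclusion(v):
            return False
    return True

print(valid(s1, s2))    # True:  raining-and-cold guarantees raining
print(valid(s2, s1))    # False: raining does not guarantee raining-and-cold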
These ideas (which really come from logic) have proved
to be very powerful in providing a theory of how the meanings of
natural-language sentences depend on the meanings of the words they contain and
their syntactic structure. Over the last forty years or so, there has been a
lot of progress in working this out, not only for English, but for a wide
variety of languages. This is made much easier by the fact that human languages
are very similar in the kinds of rules that are needed for projecting meanings
from words to sentences; they mainly differ in their words, and in the details
of their syntactic rules.
Recently, there has been more interest in lexical
semantics -- that is, in the semantics of words. Lexical semantics is not so
much a matter of trying to write an "ideal dictionary". (Dictionaries
contain a lot of useful information, but don't really provide a theory of
meaning or good representations of meanings.) Rather, lexical semantics is
concerned with systematic relations in the meanings of words, and with recurring
patterns among different meanings of the same word. It is no accident, for
instance, that you can say ‘Sam ate a grape’ and ‘Sam ate’, the former saying
what Sam ate and the latter merely saying that Sam ate something. This same
pattern occurs with many verbs.
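A rough Python sketch of the pattern (the toy facts are invented for the example): when the object is left out, the sentence simply says that there was something or other that the subject ate.

# Invented toy facts about who ate what.
ate_pairs = {("Sam", "a grape"), ("Sam", "some bread")}

def ate(subject, obj):
    # 'Sam ate a grape': the thing eaten is supplied.
    return (subject, obj) in ate_pairs

def ate_something(subject):
    # 'Sam ate': the object is left implicit, and the sentence
    # just says that the subject ate something or other.
    return any(s == subject for (s, _) in ate_pairs)

print(ate("Sam", "a grape"))    # True
print(ate_something("Sam"))     # True
print(ate_something("Jo"))      # False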
Logic is a help in lexical semantics, but lexical
semantics is full of cases in which meanings depend subtly on context, and
there are exceptions to many generalizations. (To undermine something is to
mine under it; but to understand something is not to stand under it.) So logic
doesn't carry us as far here as it seems to carry us in the semantics of
sentences.
Natural-language semantics is important in trying to
make computers better able to deal directly with human languages. In one
typical application, there is a program people need to use. Running the program
requires using an artificial language (usually, a special-purpose command
language or query-language) that tells the computer how to do some useful
reasoning or question-answering task. But it is frustrating and time-consuming
to teach this language to everyone who may want to interact with the program.
So it is often worthwhile to write a second program, a natural language
interface, that mediates between simple commands in a human language and the
artificial language that the computer understands. Here, there is certainly no
confusion about what a meaning is; the meanings you want to attach to natural
language commands are the corresponding expressions of the programming language
that the machine understands. Many computer scientists believe that natural
language semantics is useful in designing programs of this sort. But it is only
part of the picture. It turns out that most English sentences are ambiguous to
a depressing extent. (If a sentence has just five words, and each of these
words has four meanings, this alone gives potentially 4 × 4 × 4 × 4 × 4 = 1,024 possible combined
meanings.)
Generally, only a few of these potential meanings will be at all
plausible. People are very good at focusing on these plausible meanings,
without being swamped by the unintended meanings. But this takes common sense,
and at present we do not have a very good idea of how to get computers to
imitate this sort of common sense. Researchers in the area of computer science
known as Artificial Intelligence are working on that. Meanwhile, in building
natural-language interfaces, you can exploit the fact that a specific
application (like retrieving answers from a database) constrains the things
that a user is likely to say. Using this, and other clever techniques, it is
possible to build special-purpose natural-language interfaces that perform
remarkably well, even though we are still a long way from figuring out how to
get computers to do general-purpose natural-language understanding.
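To make the division of labor concrete, here is a deliberately tiny Python sketch (the request patterns and the query language are invented for illustration; real interfaces are far more sophisticated). The 'meaning' assigned to each English request is just the corresponding expression of the artificial query language, and the narrowness of the application is what makes a small set of patterns workable.

import re

# Invented mapping from English request patterns to expressions of a
# made-up query language; the query strings serve as the 'meanings'.
patterns = [
    (r"how many (\w+) are there\??", "COUNT({0})"),
    (r"list all (\w+)",              "SELECT_ALL({0})"),
    (r"who manages (\w+)\??",        "LOOKUP(manager, {0})"),
]

def interpret(request):
    # Translate an English request into the artificial query language.
    for pattern, template in patterns:
        match = re.fullmatch(pattern, request.lower().strip())
        if match:
            return template.format(*match.groups())
    return None    # the request was not understood

print(interpret("How many employees are there?"))    # COUNT(employees)
print(interpret("List all printers"))                # SELECT_ALL(printers)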
Semantics probably won't help you find out the meaning
of a word you don't understand, though it does have a lot to say about the
patterns of meaningfulness that you find in words. It certainly can't help you
understand the meaning of one of Shakespeare's sonnets, since poetic meaning is
so different from literal meaning. But as we learn more about semantics, we are
finding out a lot about how the world's languages match forms to meanings. And
in doing that, we are learning a lot about ourselves and how we think, as well
as acquiring knowledge that is useful in many different fields and
applications.