A less-technical description of Waleed Kadous' PhD
What is your thesis really about?
A fair enough question -- I mean what does "Extending Classification
models to Temporal Domains" actually mean?
One very interesting area of research right now is machine
learning. This area is really about trying to make machines
learn from their experience -- if a computer observes
something new, it tries to take the new observation into
account. It can also be viewed as trying to make computers
that are more like people: the more experience we have of
something, the better we get at it. Computers aren't like that.
One interesting part of machine learning is concept learning,
sometimes known as classification. Concept learning works
something like this: you are given examples (called
instances) of different types of things (called
classes), and you have to come up with a way to tell
them apart (this is called a classifier). Sometimes,
you are given hints as well about how to tell them apart (this
is called background knowledge).
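If you like code, the vocabulary above can be sketched out. The fruit attributes and the hand-written rules here are made up for illustration (a real learner would induce the rules from the examples itself):

```python
# Each instance is a set of attribute values; each one is
# labelled with a class. (Attribute values are illustrative.)
instances = [
    {"colour": "orange", "shape": "spherical", "skin": "pitted"},
    {"colour": "red",    "shape": "spherical", "skin": "smooth"},
    {"colour": "yellow", "shape": "long",      "skin": "smooth"},
]
classes = ["orange", "apple", "banana"]

# A classifier is just a mapping from an instance to a class.
# These rules are hand-written stand-ins for what a concept
# learner might come up with from the examples above.
def classify(fruit):
    if fruit["shape"] == "long" and fruit["colour"] == "yellow":
        return "banana"
    if fruit["skin"] == "pitted":
        return "orange"
    return "apple"

for instance, true_class in zip(instances, classes):
    print(classify(instance), "-- expected:", true_class)
```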
Sounds complicated? Well, people do it every day. For example,
most people learn to tell fruit apart by seeing lots of
different examples of fruit; not from a dictionary.
I might give you lots of different fruit, and tell you what
kind of fruit it is. I might give you a spherical, orange,
pitted object and tell you it's an orange; a smooth-surfaced,
spherical, red object about hand-size and tell you it's an
apple; a small, green, smooth-skinned spherical object, which
is also an apple; and a long, yellow, smooth-skinned object
called a banana.
After seeing a few more examples of fruit, you'll be able to
guess the kind of fruit without me telling you what kind of
fruit it is. Nobody quite understands how we do this -- there's
a lot of speculation about it. Some people say that we simply
remember every single fruit we've seen and find what looks the
most similar; some people think we try to make rules (like: if
it's long, yellow and bent then it's probably a banana).
Researchers have figured out ways to do this kind of learning
-- telling different kinds of flowers apart, deciding whether
you should get a loan or not, all that sort of stuff.
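The "remember every fruit and find what looks most similar" idea is essentially what's called nearest-neighbour classification. A tiny sketch, using made-up numeric features (length in centimetres, and a "yellowness" score from 0 to 1):

```python
# Every example we've ever seen is simply remembered,
# along with its class. (Feature values are made up.)
training = [
    ((8.0, 0.2), "orange"),
    ((7.5, 0.1), "apple"),
    ((18.0, 0.9), "banana"),
]

def nearest_neighbour(features):
    # Compare the new instance against every remembered example
    # and return the class of the closest one.
    def distance(a, b):
        # Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(training, key=lambda ex: distance(ex[0], features))
    return label

# A long, fairly yellow fruit is closest to the remembered banana.
print(nearest_neighbour((17.0, 0.8)))  # prints "banana"
```

The rule-making idea, by contrast, would throw the examples away after boiling them down to a compact description like "long and yellow means banana".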
Telling apart oranges and apples is one thing, but the world's
a little more complex than that. People seem to have the
ability to recognise patterns that occur over time. Imagine
now that I'm not asking you to classify oranges and apples,
but trying to get you to recognise something more complex -
say, something that varies over time - for example, different
melodies. Even if a melody is played on a different
instrument, slower or faster, or in a different style, we can
tell whether it is the same as or different from another melody.
This is what I am interested in: How do we learn to classify
things that vary over time? And how can we use existing
techniques to solve these kinds of learning problems? In
particular, is it possible to make a computer recognise
temporal patterns in the same general manner that people can?
Note that I said in a general manner. It might be
easy to solve one particular learning problem, but that really
does not tell us how to solve another one.
To test out my theories, I'm looking at three "testbed"
applications, three different learning problems:
- Auslan sign recognition
- Auslan is the language of the Australian Deaf
community. Like most sign languages, it involves movements
of the hands, as well as facial expressions. By capturing
information using a pair of instrumented gloves, we can try
to learn a small subset of Auslan signs.
- Robot control
- Much of what controls robots today relies on a very simple
analysis of sensor readings. Perhaps by adding the ability to
recognise complex patterns, more interesting behaviours
could be developed.
- ECG Analysis
- Electrocardiographs (ECGs) are used in diagnosing heart
problems. Doctors already have rules about what particular
patterns in ECGs mean; what would a computer make of the
data? Would it come up with the same rules as doctors use?