
Demonstrations of VASE Lab projects

A demonstration of VR technology to representatives from a local school and the Mayor of Wivenhoe

We believe that the best way to prove the effectiveness of a piece of work is to produce a working demonstration. Most of our demonstrations can't be shown effectively on the Web; they need to be seen in person. However, a few of them can.

Facial features

There is a great deal of interest these days in finding faces in images and in tracking facial features. We looked into this ourselves during the 1990s in the context of video coding. The particular approach we explored was so-called model-based coding, in which we tracked the motion of facial features in a sequence, inferred what was happening in 3D, and used 3D graphics to animate the result. The original work at Essex was carried out by Munevver Kokuer (the subject of the first sequence below), who produced coded imagery that could be transferred over an analogue modem and lip-read at the far end. Her work was extended by Ali Al-Qayedi in the late 1990s. Ali introduced the idea of animation agents using Tcl, and was able to achieve minuscule data rates. These sequences are from Ali's work; they are all rather short, which reflects what was achievable on the hardware of the time.

Talking sequence
62 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking facial features in 2-D
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

Miss America sequence
109 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking facial features in 2-D
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

Peter sequence
100 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

Eckehard sequence
100 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

VRML modelling

Your web browser needs to have a VRML 1 or VRML 2 (aka VRML97) viewer available in order to view the models described below. If, when you follow a link to a model, it appears in your browser as text rather than a 3D world, your browser isn't set up to handle VRML. These models were produced by Dr Christine Clark as an aside from her main research. They were produced astonishingly quickly — for example, the campus model below took about half a working week. This is partly a testament to Christine's modelling and programming expertise and partly because she wrote code to do much of the grunt work, producing VRML code via a series of procedure calls.

  • A model of the University of Essex campus (VRML 1, VRML 2). This forms an interface to the University's Web-based campus information system. Essex was the first university to produce a VRML campus model anywhere in the world, back in July 1995. (A more detailed discussion is available.)

  • A VRML model of the VASE Lab itself (VRML 1, VRML 2), with elements of the scene acting as interfaces to further information: for example, the computers link to projects taking place in the Lab and the telephone links to Essex's on-line telephone and email directory, implemented via an X.500 information service. (This model is interfaced to the appropriate part of the campus model, of course.)

  • A VRML reconstruction of Colchester's Roman Temple to Claudius (VRML 1, VRML 2), based on archaeological excavations by the Colchester Archaeological Trust.

  • VRML models of a proposed theatre in Wivenhoe, a town near the University. The model is program-generated, so we can easily generate it from the outside (VRML 1, VRML 2, corresponding photograph), with the roof removed (VRML 1, VRML 2), ground floor only (VRML 1, VRML 2), and with pictures on the walls (VRML 1, VRML 2).

    The model was devised principally to visualize how the theatre will look, especially from the inside. However, its use does not necessarily end with the conversion of the existing building into a theatre, as there are other possible uses for the model:

    • To allow people to visualize the view of the stage from any seat in the theatre; this would be useful when booking tickets, for example.

    • To use it as an interface to a booking system. For any performance, vacant and booked seats could be shown in different colours, and clicking the mouse on any seat could bring up a booking form into which the customer enters their name, address and credit card details.

    • To provide a vehicle for designing the lighting (and perhaps sound) for productions.

We started generating VRML models long before there were any tools to help us. When we started work on our first model, of the Essex campus, we wrote a script in the Tcl programming language, which generated VRML using information measured from the architect's drawings. This approach allowed the information to be represented in a much more concise way than writing VRML directly.
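The idea of emitting VRML through procedure calls can be sketched as follows. This is an illustrative Python sketch, not the Lab's actual Tcl scripts; the function names and dimensions are hypothetical, but it shows how a few calls with measured sizes can stand in for many lines of hand-written VRML.

```python
# Illustrative sketch: generate VRML 2 text through procedure calls,
# so a scene is described by a few calls rather than hand-written nodes.

def box(width, height, depth, x=0.0, y=0.0, z=0.0):
    """Return a VRML 2 Transform node containing a Box of the given size."""
    return (
        f"Transform {{\n"
        f"  translation {x} {y} {z}\n"
        f"  children Shape {{\n"
        f"    appearance Appearance {{ material Material {{ }} }}\n"
        f"    geometry Box {{ size {width} {height} {depth} }}\n"
        f"  }}\n"
        f"}}\n"
    )

def world(*nodes):
    """Prepend the VRML 2 file header to a sequence of nodes."""
    return "#VRML V2.0 utf8\n" + "".join(nodes)

# Two hypothetical buildings with 20 m by 8 m footprints, 10 m apart:
scene = world(
    box(20, 10, 8),
    box(20, 10, 8, x=30),
)
print(scene)
```

Each object-generating procedure encapsulates the boilerplate of a VRML node, so changing a measurement on the drawings means changing one argument rather than editing coordinates throughout a file.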

We have subsequently extended and enhanced the scheme as other models were devised. We are now able to make use of a "library" of object-generating scripts, including some quite sophisticated ones such as the column-generator used in the temple model, which can generate columns with any number of flutes to an arbitrary accuracy.
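A column generator of the kind used in the temple model might work along these lines. This is a hedged sketch rather than the actual script: it approximates a fluted cross-section by sampling a scalloped radius function, with the number of sample points per flute setting the accuracy, and the resulting points could feed the crossSection field of a VRML 2 Extrusion node.

```python
import math

def fluted_cross_section(radius, flutes, flute_depth, points_per_flute=8):
    """Return (x, z) points around a column whose surface is carved with
    the given number of flutes; more points per flute = finer polygons."""
    n = flutes * points_per_flute
    points = []
    for i in range(n):
        a = 2 * math.pi * i / n
        # Carve each flute by subtracting a scalloped offset from the radius;
        # abs(sin(flutes * a / 2)) repeats once per flute around the column.
        r = radius - flute_depth * abs(math.sin(flutes * a / 2))
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# 24 flutes, as on many classical columns, sampled at 12 points per flute.
section = fluted_cross_section(radius=1.0, flutes=24, flute_depth=0.05,
                               points_per_flute=12)

# Formatted for the crossSection field of a VRML 2 Extrusion node:
cross_section_field = ", ".join(f"{x:.3f} {z:.3f}" for x, z in section)
```

The accuracy is arbitrary in the sense that `points_per_flute` can be raised until the polygonal approximation is as smooth as required, at the cost of a larger model file.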

Java Applets

Your web browser needs to be able to handle Java applets in order to view the demonstrations below.