The following equation is used in volume rendering by
forward ray casting. Define the terms and state its
significance to the computational expense of the ray casting
process. \marks{7}
\(\alpha_{new} = \alpha_{current} + (1-\alpha_{current})\,\alpha_{accumulated}.\)
Answer: \(\alpha_{new}\) is the new value of alpha for the ray leaving this cell, \(\alpha_{current}\) is the
alpha of the current cell, and \(\alpha_{accumulated}\) is the value of alpha accumulated by the ray so far. [3]
Alpha is opacity; \(\alpha = 1\) means totally opaque. [1]
The significance is that as \(\alpha_{accumulated}\) approaches 1 the ray can be terminated
early, since the ray is so opaque that nothing behind the current cell can be seen. The
contribution the remaining cells can make to the pixel is bounded by \(1-\alpha_{accumulated}\),
so the work of sampling them can be skipped. [3]
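The compositing update and the early-termination test can be sketched as follows; `composite_ray`, the per-cell sample lists and the opacity threshold are illustrative names, not part of any particular renderer.

```python
def composite_ray(cell_alphas, cell_colors, threshold=0.95):
    """Front-to-back compositing along one ray (illustrative sketch).

    cell_alphas / cell_colors hold per-cell opacity and intensity
    samples, ordered front first.
    """
    alpha_acc = 0.0   # alpha accumulated by the ray so far
    color_acc = 0.0
    for alpha, color in zip(cell_alphas, cell_colors):
        # Blend this cell behind what has already accumulated.
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        # Early ray termination: the remaining cells can change the
        # pixel by at most (1 - alpha_acc), so stop once that is tiny.
        if alpha_acc >= threshold:
            break
    return color_acc, alpha_acc
```

A fully opaque cell near the front terminates the loop immediately, which is where the computational saving comes from.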
A brain surgeon obtains a CT scan of a patient's head prior to
performing an operation to remove a tumour. The data obtained are scalar
values corresponding to density, defined on a regular grid of structured
points \(200\times200\times200\). The surgeon wishes to know the position and
size of the tumour with respect to anatomical landmarks such as
the skull, blood vessels and nerves. Discuss the relative merits of
the marching cubes algorithm and volume rendering
by ray casting for this application. \marks{6}
Answer: Marching cubes produces a single 3D surface at a particular value of the
scalar field. If the tumour can be delimited by a particular value, MC would work
very well. [1] There is a danger of getting spurious contours from transitions between materials. [2]
Volume rendering is very slow [1] but more accurate, and shows all the data, not just a surface. [1]
The data size is very large, so volume rendering is not likely to be effective here. [1]
How would illumination based on gradient magnitudes improve the volume-rendered images?
Does it have any disadvantages? \marks{4}
Answer: It would show subtle variations in density, such as might occur at the edges of
the tumour, and hence might be useful. [2] However, the surgeon would not know which
areas are dark due to shadow and which are dark due to no material being there. [2]
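The gradients used for this illumination term can be estimated by central differences over the scalar volume; a minimal NumPy sketch (`gradient_magnitude` is an assumed name, not a standard routine):

```python
import numpy as np

def gradient_magnitude(volume):
    """Central-difference gradient magnitude of a scalar volume.

    Large values mark rapid density changes, e.g. at the boundary of
    the tumour; the result is zero inside homogeneous material.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    return np.sqrt(gx**2 + gy**2 + gz**2)
```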
Interaction is a very desirable characteristic of a visualisation
algorithm. How can volume rendering
be accelerated such that it can be performed at interactive rates? \marks{8}
Answer: Any of the following:
- ray casting with templates [1].
- a shear transformation to allow cells to be blended a plane at a time [2]
- Front-to-back ray casting allows early termination of the ray. [1]
- a final 2D image warp to render the final image [1]
All three of the above give an improvement, but allow orthographic projection only.
- run-length compression of the volumes to reduce work that needs to be done. [1]
Allows larger volumes to be rendered at low memory footprint.
- Texture mapping with blending can be used to render volumes in an object-order fashion.
This is the best solution; it requires texture-mapping hardware, but allows perspective projection. [2]
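As an illustration of the run-length idea, a scanline of voxel values can be stored as (value, count) pairs so that a run of identical (e.g. empty) voxels is skipped in one step. This is a sketch of the principle, not the encoding any particular renderer uses:

```python
def run_length_encode(scanline):
    """Encode one voxel scanline as (value, run-length) pairs.

    A renderer can then skip a whole transparent run in one step
    instead of visiting every voxel, and the volume occupies less
    memory when it contains large homogeneous regions.
    """
    runs = []
    for v in scanline:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((v, 1))              # start a new run
    return runs
```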
%Question 2
Within the context of visualisation, data are thought of as
having structure and value. Using examples, state the meaning of
these two terms. Why is data topology important to the choice of
visualisation algorithm? \marks{5}
Answer: structure - the positions of the points where data are defined
and the relationships between them. Value - the data values themselves. [3]
Any example that illustrates this is valid.
Topology determines the method of interpolation to use on the data. [2]
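For example, knowing that the topology is a structured quadrilateral cell tells us that bilinear interpolation applies; a sketch (the function name and argument order are illustrative):

```python
def bilinear(f00, f10, f01, f11, s, t):
    """Bilinear interpolation inside a quadrilateral cell.

    f00..f11 are the four corner values and (s, t) are the local
    cell coordinates in [0, 1]. A triangle in an unstructured grid
    would instead use barycentric (linear) interpolation.
    """
    return ((1 - s) * (1 - t) * f00 + s * (1 - t) * f10
            + (1 - s) * t * f01 + s * t * f11)
```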
For the following list of users and problems, state how
visualisation may help to address the problem and state the
desirable characteristics of a visualisation algorithm for the
problem.
An aerodynamic engineer wanting to understand the position and
behaviour of vortices over the surface of an aircraft wing from an
aerodynamic simulation.
Answer: A global flow visualisation technique (e.g. streamlines) to find the location
of vortices. Needs to be interactive, or use critical-point analysis.
A police officer looking for a fraudster in bank transaction
records.
Answer: Show relationships linking entities: a Netmap or daisy display.
An engineer studying vibration patterns in a steel beam from
simulation data.
Answer: Need to visualise motion normal to the surface and show nodal points;
warping is probably best, flow glyphs are also possible.
A home buyer searching for a suitable property in lists of
properties for sale.
Answer: Need to see multi-attribute information clearly and show
location of property.
Multi-dimensional glyphs plotted on a map of the region.
[1 mark for the method, 2 marks for desirable features]
A number of methods are possible, marks awarded if it is well reasoned.
Give an example of an appropriate visualisation method for each of the
above. \marks{12}
Visualising multi-attribute data is a common problem in
visualisation. Give two strategies for visualising multi-attribute
data in each of the following applications:
Customer records in a client database containing information
about company age, credit limit, account balance, account duration
and number of employees.
Glyph plotting is probably best; there are a number of ways of scaling glyphs so that
the attributes can be seen. Variations on dimensions and axes are possible.
A 3D regular grid of velocity vectors and scalar values of temperature
and pressure of a gas in a boiler.
Streamtubes can be used and dashed streamlines are possible; a flow method such as
streamlines could be combined with volume rendering. Glyphs would not work
well in 3D.
Briefly compare the methods you have chosen for each application. \marks{8}
%Question 3
Computing the path of a streamline in a vector field is a computationally
expensive process. Outline the main tasks that have to be performed during
streamline computation and comment on how their computational complexity grows
with increasing data size. \marks{7}
- Interpolation - both in space and in time. Does not grow with data size.
- Searching for the next cell - grows with data size (naive point location is linear in the number of cells).
- Integration of the vector field - does not grow with data size.
- Coordinate transformation - does not change with data size.
How does the variable step Euler algorithm reduce these computational
demands? \marks{4}
Always advect the particle to the boundary of the current cell, so the next cell is simply
the neighbouring cell and no searching needs to be done. No interpolation is needed either,
since the vector is assumed to be constant along each step.
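In 2D the idea can be sketched as stepping straight to whichever cell face the (assumed constant) cell vector exits through; the function name and the axis-aligned unit-cell assumption are illustrative:

```python
def advect_to_boundary(pos, cell_vec, cell_size=1.0):
    """One variable-step Euler step inside a single cell.

    pos = (x, y) lies inside the cell [0, cell_size)^2 and cell_vec
    is taken as constant there; returns the exit point on the cell
    boundary, so the next cell is simply the neighbour (no search).
    """
    x, y = pos
    vx, vy = cell_vec
    # Time to reach the exit face along each axis at constant velocity.
    tx = (cell_size - x) / vx if vx > 0 else (x / -vx if vx < 0 else float("inf"))
    ty = (cell_size - y) / vy if vy > 0 else (y / -vy if vy < 0 else float("inf"))
    t = min(tx, ty)  # step exactly to the nearer face
    return (x + t * vx, y + t * vy)
```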
Why is the variable-step Euler method well-suited to the line-integral
convolution algorithm? \marks{2}
The VS Euler method assumes a straight path through each grid box. Since the LIC
algorithm performs flow visualisation at grid-point resolution, we don't need sub-grid
accuracy.
A museum has a unique set of carved stone tablets that they want
to digitally capture and make available to scholars to study at remote
locations via a network.
Explain the key idea behind the following techniques and evaluate
their suitability for this application.
Image based capture.
Very suitable - low bandwidth, ease of capture. Problem is no geometric model
for scholars to analyse, only image information. Good for visualisation, but not
quantitative.
Accessibility shading.
Requires a model of the surface. Very good for showing features difficult to see
with the eye. Ideal for this application.
Image based lighting.
Complex to capture, but can show small details clearly. Doesn't allow movement of the
object. Again, there is no model for quantitative research.
Model based capture. \marks{12}
Great for quantitative research, but the model needs to be acquired from multiple range images
and zipped together. Large bandwidth and rendering requirements, but QSplat can be used to
reduce the demand.
Any well reasoned answer with regards to suitability is acceptable.