Creating interactive, 3D models of MRI sequences in virtual reality
CS faculty seem to require lots of medical attention and, in the process, get access to
images produced by various imaging technologies. These images are analyzed by
software as well as by the human eye. VR opens a new way to explore them.
Project goals:
The output of an MRI is a set of sequences of 2D images. Each image
represents a flat 2D slice of the object, where the density of the object's
voxels is reflected in the grayscale value of the corresponding pixels.
The images in each sequence are taken from a fixed orientation but at
different depths. The software needs to take one sequence and generate
a 3D model.
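One way to think about this step: since every image in a sequence is a slice at a known depth, stacking the slices in depth order already yields a 3D voxel grid, from which a surface can later be extracted (e.g., with a marching-cubes algorithm). A minimal sketch, assuming each slice arrives as an equally sized 2D array; the names and the toy data are hypothetical:

```python
import numpy as np

def build_volume(slices):
    """Stack a sequence of equally sized 2D slices (in depth order)
    into a single 3D voxel volume."""
    return np.stack(slices, axis=0)

# Hypothetical example: 4 slices of 8x8 grayscale data.
slices = [np.full((8, 8), i * 10, dtype=np.uint8) for i in range(4)]
volume = build_volume(slices)
print(volume.shape)  # (4, 8, 8)
```

The resulting array indexes voxels as (depth, row, column); real MRI data would also need the physical slice spacing to scale the depth axis correctly.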
The software needs to blend models from different sequences of the
same MRI together into one composite model.
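If the sequences come from the same MRI and are already co-registered (same shape and orientation), a simple starting point for compositing is a weighted linear blend of the voxel intensities. A sketch under that assumption; the function name and weight scheme are illustrative, not a prescribed design:

```python
import numpy as np

def blend_volumes(vol_a, vol_b, weight=0.5):
    """Linearly blend two co-registered voxel volumes.
    Assumes both volumes have the same shape; weight is the
    contribution of vol_a (1.0 - weight goes to vol_b)."""
    a = vol_a.astype(np.float32)
    b = vol_b.astype(np.float32)
    return weight * a + (1.0 - weight) * b

# Hypothetical example: blend an all-zero and an all-100 volume equally.
vol_a = np.zeros((2, 2, 2))
vol_b = np.full((2, 2, 2), 100.0)
composite = blend_volumes(vol_a, vol_b, 0.5)
```

Sequences that are not voxel-aligned would first need resampling onto a common grid before any per-voxel blend makes sense.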
The software needs to display the model in VR.
The software needs to allow the user to rotate, move, and zoom while
viewing the model in VR.
The software needs to allow the user to peel away layers of the model in
VR in order to see inside the part of the body being imaged.
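"Peeling" can be prototyped directly on the voxel volume: discard the outermost layers so the interior becomes visible. The index-based version below is a deliberately crude sketch (a real tool would erode along the anatomical surface rather than along the array's bounding box); the function name is hypothetical:

```python
import numpy as np

def peel(volume, n=1):
    """Zero out the n outermost voxel layers of a 3D volume,
    exposing the interior. Simple index-based peel; assumes the
    volume is larger than 2n voxels in every dimension."""
    peeled = np.zeros_like(volume)
    peeled[n:-n, n:-n, n:-n] = volume[n:-n, n:-n, n:-n]
    return peeled

# Hypothetical example: peel one layer off a solid 5x5x5 block.
block = np.ones((5, 5, 5))
inner = peel(block, n=1)
```

Repeated peels (or a depth slider in VR) would let the user step progressively deeper into the imaged body part.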
The software should identify structures in the model that are specific to
the part of the body being imaged. For example, the software should be
able to detect that a collection of voxels belongs to the same internal
structure (e.g., "this is a medial meniscus").
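A first approximation of "a collection of voxels belonging to the same structure" is connected-component labeling: threshold the volume and group adjacent above-threshold voxels. This is only a crude stand-in for real anatomical segmentation (it knows nothing about what a meniscus is), and the threshold here is arbitrary:

```python
import numpy as np
from scipy import ndimage

def find_structures(volume, threshold):
    """Group above-threshold voxels into connected components.
    Returns a label volume (0 = background, 1..count = components)
    and the number of components found."""
    mask = volume > threshold
    labels, count = ndimage.label(mask)
    return labels, count

# Hypothetical example: two separate bright blobs in an empty volume.
vol = np.zeros((10, 10, 10))
vol[1:3, 1:3, 1:3] = 100
vol[6:8, 6:8, 6:8] = 100
labels, count = find_structures(vol, threshold=50)
```

Mapping components to named anatomical structures would require an atlas or a learned model on top of this.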
Once internal structures are identified, the software should be able to
highlight them and/or remove them from view in the 3D model.
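Once each structure has an integer label, hiding one from the rendered model is a masking operation on the voxel data; highlighting could similarly boost (rather than zero) the masked intensities. A sketch, assuming a label volume as produced by an earlier segmentation step; the function name is hypothetical:

```python
import numpy as np

def hide_structure(volume, labels, structure_id):
    """Zero out the voxels of one labeled structure so it
    disappears from the rendered model."""
    out = volume.copy()
    out[labels == structure_id] = 0
    return out

# Hypothetical example: two labeled regions; remove structure 1.
volume = np.full((4, 4, 4), 100.0)
labels = np.zeros((4, 4, 4), dtype=int)
labels[0:2] = 1
labels[2:4] = 2
visible = hide_structure(volume, labels, structure_id=1)
```

The same mask drives highlighting: instead of zeroing, assign the selected voxels a distinct color or brightness in the renderer.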
If the MRI is annotated with notes from the radiologist, the software
should highlight those places in the model in some fashion.