
Volume Rendering in Blender with OSL

In my last post, I turned some dental X-rays into a raw ‘brick’ for visualization in ParaView. It was easy to turn on the volume rendering and have a look. Getting volume rendering into Blender from out in the wild is a bit more work.

For the Blender Internal (BI) render engine, there were a handful of quick-to-set-up options such as 8-bit raw, image sequences and a special Blender voxel file format, among others. In Cycles, the options are more limited. Since BI is fully deprecated in Blender 2.8, I’m going to write about how to use the Open Shading Language (OSL). OSL is a shading language developed and open-sourced by Sony Pictures Imageworks. Blender has OSL support built in, but you may have to either get Blender directly from their website or build it yourself, as not all packaged versions of Blender (e.g., Ubuntu’s) have it enabled.

A word of warning: OSL can be a bit unstable and will easily crash Blender. When you’re developing, save frequently. And if Blender does crash on you, start saving even more frequently. Just be aware that this path can be fraught with peril. If you’re having too many problems, downsampling your images (i.e., smaller images with fewer planes) usually helps; there’s a sketch of this after the image-stack script below.

Process

First, we’re going to generate an image stack from the raw brick file in my previous post. Then we’re going to code up some OSL to load the files. And finally, it’ll be time to use the OSL in Blender.

Image Stack

Let’s default back to Julia, as that’s been my go-to language for the past couple of years.

If you recall, the planes are 512×512 16-bit integers (signed or unsigned doesn’t matter, since the sensor was apparently 12-bit). So we’ll need to open the file, load it one plane at a time, and then save each plane as a numbered image.
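
Before looping over the planes, it’s worth a quick check that the brick really is the size we expect. A minimal sketch, assuming the 608-plane brick from the previous post:

# The brick should hold 608 planes of 512x512 16-bit (2-byte) values
expected = 512 * 512 * 608 * 2
actual = filesize("tooth.raw")
actual == expected || @warn "Unexpected brick size" expected actual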

Below is a Julia script that will read through the tooth.raw brick file from the previous post one plane at a time, saving each plane out as a .png image.

# The 'Images' package will need to be installed, possibly along with 'ImageMagick'
using Images, Printf

plane = zeros(Int16, 512, 512) # Each plane is 512x512x16-bit
fid = open("tooth.raw", "r")
for i in 1:608
    read!(fid, plane)          # Reading in a plane from the file
    ofname = @sprintf "img_%04d.png" i
    save(ofname, plane/4096.0) # Normalizing and saving plane as a png.
end
close(fid)
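
If you run into the stability problems mentioned earlier, a smaller stack is the quickest fix. Here’s a minimal variant of the script above that keeps every other pixel and every other plane (the small_*.png filenames are just my choice here):

# Downsampled stack: keep every other pixel in x/y and every other plane in z,
# giving 304 planes of 256x256.
using Images, Printf

plane = zeros(Int16, 512, 512)
fid = open("tooth.raw", "r")
for i in 1:608
    read!(fid, plane)               # Every plane still has to be read to advance through the file
    isodd(i) || continue            # ...but only the odd-numbered planes are kept
    small = plane[1:2:end, 1:2:end] # 256x256 by striding
    ofname = @sprintf("small_%04d.png", div(i + 1, 2))
    save(ofname, small/4096.0)
end
close(fid)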

OSL

I’ll try to make the OSL generic enough that the code can be used by others with (hopefully) little modification.

First, the code:

shader volumeTexture(
    point Vector = P,
    int num_planes = 1,
    output color colorData = 1,
    output float nonColorData = 1 )
{
    float x = Vector[0];
    float y = Vector[1];
    float z = Vector[2];

    int pos = (int)(z * num_planes);
    if (pos < 0 || pos >= num_planes) {
        colorData = -1;
        nonColorData = -1;
        return;
    }

    // Modify the lines below for your path and filenames
    string path  = "/home/sciviz/blog/OSL_Volume/tooth/";
    string file  = format("img_%04d.png", pos + 1);
    string fname = concat(path, file);

    colorData    = texture(fname, x, 1.0 - y);
    nonColorData = texture(fname, x, 1.0 - y);
}

This will need to be in a .osl script that you create in Blender’s internal text editor. Also make sure to turn on Open Shading Language by clicking its checkbox in the Render properties panel.

[Image: Enabling OSL for both Blender 2.79b and 2.80]

In OSL, the function header defines the inputs and outputs of the shader.  When compiled in Blender and used in the node editor, these inputs and outputs will show up on the interface of the script node as connectors.  You’ll need to connect the object’s texture coordinates to the Vector input of the node.

[Image: OSL code and node setup for Blender 2.80]

Every time you make a change to the OSL script, don’t forget to click the ‘Script Node Update’ button to recompile the node.

The OSL node translates the input coordinate to a particular image plane based on the Z coordinate, and then to a specific pixel within that plane from the X and Y coordinates. The code returns -1 for Z locations that fall outside the image stack.
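
As a quick sanity check of that mapping, here it is mirrored in Julia for the 608-plane stack above:

using Printf

num_planes = 608
z = 0.5                                   # object-space Z coordinate in [0, 1]
pos = floor(Int, z * num_planes)          # 304, the zero-based plane index
fname = @sprintf("img_%04d.png", pos + 1) # "img_0305.png", matching the 1-based filenames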

OSL normalizes the dimensions of the image planes to [0.0, 1.0], so our 512×512 images range from 0 to 1 in both the x and y directions. The num_planes input argument is designed to make the z-planes behave the same way. If the Blender object that has the shader applied to it is scaled in any dimension, that scaling will stretch or squish the colors in that dimension. And if a scaled object has its scaling applied, the shader will ‘snap’ back to occupying the volume of a unit cube.

If your application needs to recover some sort of aspect ratio or accurately scale the volume into another space, I find it’s easiest to first create a unit cube with one corner at the origin (NB: not the default Blender cube, as it’s 2x2x2) and then manipulate that cube without ever applying scaling, rotation or translation.

Some of the nice things about doing volume rendering in Blender are that light from the scene interacts with the volume, that multiple volume shaders can be co-mingled, and that emission is available. Additionally, Blender 2.80 has a new ‘Principled Volume’ shader, which acts as a one-stop shop for volumetrics settings.

[Image: Blender volume render of the tooth X-rays using OSL]

7 Comments

    • David

      Thanks. I’m interested, but unfortunately, I let my IEEE membership lapse several years back, so I can’t get to the paper.

      It’s a more difficult problem for graphics, so I’ve been looking into it a bit more lately.

    • David

      Petra,

      I’m sorry I haven’t responded to your previous posts until now. It’s been quite hectic lately.

      I don’t have the data anywhere on the website, and I’m unsure if I want to put it out there since it’s personal medical information. I’ll send you an e-mail privately and maybe we can talk about getting the data to you that way.

  • Partth

    Hi David,

    I have typically used .x3d import for my CFD data, but have found that using Blender to render my surface contour data has often led to blurrier contours. Is there a preferred set of shaders you use to retain the image quality from ParaView (as in my case the visualization looks much crisper in PV than in Blender)? In other words, what tips do you have for improving the image quality of a colored contour in Blender renders?

    Also, have you heard of the recent Blender VTK nodes for use in volume rendering? It may be worth checking out for your efforts. I have tried to make it work but cannot figure out a way to load VTK image data into Blender for isosurface/volume rendering (after I use “resample to image” in ParaView to allow my CFD surface data to be volume rendered in PV). Any thoughts? I have yet to try your OSL approach, as my data set is not raw, so I wasn’t sure if I could apply your approach to another data format.