During undergrad at Penn, I worked on a GPU-powered path tracer. I learned quite a bit along the way, and below I'll outline some of the features I implemented.
The first major part of the project was getting the framework set up properly. The basis of any ray-tracing renderer is determining intersections between the rays you fire and the scene itself. To keep things simple during testing, I ran the renderer and, as soon as a ray hit an object, returned that object's surface color.
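To give a flavor of that first milestone, here's a rough, hypothetical version of the hit test and the "return the surface color" shading. The names and structure are mine for this post, not the actual project code:

```cpp
#include <cmath>
#include <limits>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray    { Vec3 origin, dir; };                      // dir is assumed normalized
struct Sphere { Vec3 center; float radius; Vec3 color; };

// Distance along the ray to the closest hit, or nothing on a miss.
std::optional<float> intersect(const Ray& ray, const Sphere& s) {
    Vec3  oc   = ray.origin - s.center;
    float b    = dot(oc, ray.dir);
    float c    = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;                 // ray misses the sphere entirely
    float t = -b - std::sqrt(disc);                       // nearer of the two roots
    if (t < 1e-4f) return std::nullopt;                   // hit is behind (or on) the origin
    return t;
}

// The "debug" shading from that first milestone: return the color of whatever was hit.
Vec3 shadeFirstHit(const Ray& ray, const std::vector<Sphere>& scene) {
    float bestT = std::numeric_limits<float>::max();
    Vec3  color{0.0f, 0.0f, 0.0f};                        // black background on a miss
    for (const Sphere& s : scene)
        if (auto t = intersect(ray, s); t && *t < bestT) { bestT = *t; color = s.color; }
    return color;
}
```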
My initial test scene was a Cornell box with some spheres placed in it. Once I had the intersection testing finished, I started working on a simple integration scheme: direct lighting.
Since I knew I eventually wanted to have a path tracing renderer, I kept the BRDFs limited to diffuse during this iteration. Looking back, this step wasn't too important in the grand scheme of things, but I was comfortable with simple direct lighting ray tracing, and wanted to get my feet wet in CUDA with something familiar.
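As a sketch of what that direct-lighting step looks like with a diffuse (Lambertian) BRDF (again with made-up names, building on the Vec3, Ray, and Sphere bits from the intersection sketch):

```cpp
#include <algorithm>

// Extra helpers on top of the Vec3, Ray, and Sphere types from the sketch above.
static Vec3  operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3  mul(Vec3 a, Vec3 b)        { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static float length(Vec3 v)             { return std::sqrt(dot(v, v)); }

struct PointLight { Vec3 position, emission; };
struct Hit        { Vec3 position, normal, albedo; };

// True if anything in the scene blocks the segment from the ray origin out to maxDist.
static bool occluded(const Ray& shadowRay, float maxDist, const std::vector<Sphere>& scene) {
    for (const Sphere& s : scene)
        if (auto t = intersect(shadowRay, s); t && *t < maxDist) return true;
    return false;
}

// One-bounce direct lighting with a Lambertian (diffuse) BRDF.
Vec3 shadeDirect(const Hit& hit, const PointLight& light, const std::vector<Sphere>& scene) {
    Vec3  toLight = light.position - hit.position;
    float dist    = length(toLight);
    toLight       = toLight * (1.0f / dist);

    // Shadow ray, nudged off the surface to avoid self-intersection.
    if (occluded(Ray{hit.position + hit.normal * 1e-3f, toLight}, dist, scene))
        return Vec3{0.0f, 0.0f, 0.0f};

    // Lambertian BRDF is albedo / pi; the cosine handles grazing angles and the
    // 1 / dist^2 is the usual point-light falloff.
    float cosTheta = std::max(0.0f, dot(hit.normal, toLight));
    return mul(hit.albedo * (1.0f / 3.14159265f), light.emission) * (cosTheta / (dist * dist));
}
```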
Next, I switched the integration scheme to a full path tracer.
The first thing you'll notice is that the shadows in this image are quite a bit brighter than in the ray-traced image. That's because light rays are no longer limited to a single bounce: rays can bounce around an arbitrary number of times until they hit either a light or nothing at all. This allows surfaces to interact with each other via light transport.
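In code, the integrator turns into a loop over bounces. Here's a rough sketch of the shape of it, building on the helpers above; closestHit() and cosineSampleHemisphere() are assumed utilities for this post, not real project functions:

```cpp
#include <random>

// closestHit() would loop over the scene's intersect() tests and keep the nearest hit;
// cosineSampleHemisphere() would pick a random bounce direction around the surface normal.
struct HitInfo { Vec3 position, normal, albedo, emission; bool isLight; };

bool closestHit(const Ray& ray, const std::vector<Sphere>& scene, HitInfo& out);
Vec3 cosineSampleHemisphere(Vec3 normal, std::mt19937& rng);

// The bounce loop: keep tracing until the path hits a light, escapes, or gets too long.
Vec3 tracePath(Ray ray, const std::vector<Sphere>& scene, std::mt19937& rng, int maxDepth = 8) {
    Vec3 throughput{1.0f, 1.0f, 1.0f};
    for (int depth = 0; depth < maxDepth; ++depth) {
        HitInfo hit;
        if (!closestHit(ray, scene, hit))
            return Vec3{0.0f, 0.0f, 0.0f};            // flew off into nothing: no contribution
        if (hit.isLight)
            return mul(throughput, hit.emission);     // reached a light: done
        // Diffuse bounce. With cosine-weighted sampling the cosine and pdf terms cancel,
        // so the throughput just picks up the surface albedo at each bounce.
        throughput = mul(throughput, hit.albedo);
        ray = Ray{hit.position + hit.normal * 1e-3f, cosineSampleHemisphere(hit.normal, rng)};
    }
    return Vec3{0.0f, 0.0f, 0.0f};                    // path got too long; cut it off
}
```

The real kernel lives on the GPU, but the overall structure is the same.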
With the simple path tracer up and running, I started working on some BRDFs to make the scene more interesting.
The walls still use a diffuse BRDF, while the spheres have some that are a bit more interesting. The red sphere has Cook-Torrance microfacet specular highlights, which come from a model that pretends the surface is covered in tiny microfacets; this gives a rougher highlight than Blinn or Phong shading. The two other spheres have Fresnel reflections applied to them. That BRDF mixes pure reflection and pure refraction based on the incident angle of the incoming ray. At the time, I was extremely happy with this image.
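The Fresnel mix is simpler than it sounds. Schlick's approximation is the usual shortcut, something like this (a scalar sketch, not the project's exact code):

```cpp
#include <cmath>

// Schlick's approximation to the Fresnel reflectance of a dielectric.
// cosTheta is the cosine of the angle between the incoming ray and the surface normal;
// iorOutside and iorInside are the indices of refraction on either side of the surface.
float fresnelSchlick(float cosTheta, float iorOutside, float iorInside) {
    float r0 = (iorOutside - iorInside) / (iorOutside + iorInside);
    r0 *= r0;
    return r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
}

// The BRDF then blends the two pure paths:
//   color = kr * reflectedColor + (1 - kr) * refractedColor
// where kr = fresnelSchlick(...), reflectedColor comes from tracing the mirror direction,
// and refractedColor from tracing the direction given by Snell's law.
```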
In hindsight, though, there are some problems. The caustic on the red wall is much too bright to be physically realistic. This problem was present throughout development, and it took quite a bit of digging to find the root cause: the random seed generator I was using was rolling over if the renderer ran too long. Here's an extreme example of the problem and its fix:

When I got that top render back, needless to say, I was pretty concerned. After some digging I eventually narrowed the problem down to the seed generator and was finally able to fix it.
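The fix boiled down to building each pixel's seed from values that don't repeat across iterations. A hedged sketch of the idea (not my exact code, and the mixing constants are arbitrary):

```cpp
#include <cstdint>

// Wang hash: a cheap integer mixer that's a common way to build per-thread seeds on the GPU.
std::uint32_t wangHash(std::uint32_t seed) {
    seed = (seed ^ 61u) ^ (seed >> 16);
    seed *= 9u;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2du;
    seed = seed ^ (seed >> 15);
    return seed;
}

// Mixing the pixel index with the iteration count gives every pixel a distinct seed each
// frame, instead of a counter that eventually wraps back to values it has already used.
std::uint32_t makeSeed(std::uint32_t pixelIndex, std::uint32_t iteration) {
    return wangHash(pixelIndex * 9781u + iteration * 6271u + 1u);
}
```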
One other utility that I implemented for a performance boost is something called stream compaction. I talk about it more in this blog post. Basically, after each bounce you throw away the rays that have already terminated, so the GPU only keeps working on the ones still bouncing, which really speeds things up.
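Here's the idea in miniature (the real version runs on the GPU, e.g. with thrust; std::partition shows the same thing on the CPU, and PathState is just an illustrative stand-in for the per-ray bookkeeping):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct PathState {
    // Ray origin/direction, accumulated color, and so on would live here too.
    int  pixelIndex;
    bool alive;        // false once the path has hit a light or escaped the scene
};

// After each bounce, compact the path list so the next launch only touches live paths.
std::size_t compactPaths(std::vector<PathState>& paths) {
    auto firstDead = std::partition(paths.begin(), paths.end(),
                                    [](const PathState& p) { return p.alive; });
    return static_cast<std::size_t>(firstDead - paths.begin());   // paths still in flight
}
```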
Overall I am pretty happy with this project. I learned a lot about both rendering and GPU programming, which is what I set out to do. The renderer itself is still woefully simple. The images are extremely noisy, since this is still purely brute force with no importance sampling. Furthermore, the only acceleration structure is simple per-object bounding boxes; adding a kd-tree or BVH would speed the renderer up quite a bit.
I used this project as my senior design requirement for DMD at Penn. I got interested in using the Mitsuba rendering engine during my summer at Cornell, and decided that writing XML scene files by hand was kind of a pain. So I automated that process: I could model as I normally would in Maya, and then generate the data I needed to send to Mitsuba.
With Yingting (Lucy) Xiao and Xiaoyan (Zia) Zhu.
As an undergraduate, this was the final project of our first graphics programming class. Presented as a "mini-Maya", it had us work in groups of three to build a (very) basic polygonal mesh editor. The core of the project is the half-edge data structure, which is a neat way to represent polygons that makes them easy to edit.
I mocked up an example in Maya proper:
As you can see, each triangle has an edge pointing to each of the vertices that define it. This way, the user can select any of the three kinds of mesh components (vertices, edges, or faces).
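A minimal version of the structure looks something like this (the field names are my own shorthand, not the assignment's spec):

```cpp
struct Vertex;
struct Face;

struct HalfEdge {
    HalfEdge* next = nullptr;   // next half-edge around the same face
    HalfEdge* sym  = nullptr;   // twin half-edge on the neighboring face
    Vertex*   vert = nullptr;   // the vertex this half-edge points to
    Face*     face = nullptr;   // the face this half-edge borders
};

struct Vertex { float x = 0, y = 0, z = 0; HalfEdge* edge = nullptr; };  // one half-edge pointing to it
struct Face   { HalfEdge* edge = nullptr; };                             // any half-edge on this face

// Walking a face is just following next pointers until you get back where you started;
// walking the faces around a vertex alternates sym and next.
```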
Beyond the base data structure, there were three distinct roles for this project. I was the "geometer", responsible for editing individual components of the mesh, as well as implementing Catmull-Clark subdivision. That algorithm is a way to "smooth" a polygonal mesh easily. Here is a diagram from Wikipedia:
It's actually a pretty straightforward algorithm, although at the time it seemed absurdly complicated. One neat thing about it is that you can tag edges as "sharp" so that they stay creased as the rest of the mesh smooths.
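The heart of it is three point rules: new face points, new edge points, and moved original vertices. Here's a hedged sketch of just those rules (my own helper names, and it skips all the half-edge bookkeeping):

```cpp
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
static Vec3 operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

static Vec3 average(const std::vector<Vec3>& pts) {
    Vec3 sum;
    for (const Vec3& p : pts) sum = sum + p;
    return sum * (1.0f / static_cast<float>(pts.size()));
}

// Face point: the centroid of the face's vertices.
Vec3 facePoint(const std::vector<Vec3>& faceVerts) { return average(faceVerts); }

// Edge point: the average of the edge's two endpoints and the two neighboring face points.
Vec3 edgePoint(Vec3 v0, Vec3 v1, Vec3 facePt0, Vec3 facePt1) {
    return (v0 + v1 + facePt0 + facePt1) * 0.25f;
}

// Moved original vertex: (F + 2R + (n - 3)P) / n, where F is the average of the adjacent
// face points, R the average of the midpoints of the incident edges, P the old position,
// and n the number of edges touching the vertex.
Vec3 movedVertex(Vec3 P, Vec3 F, Vec3 R, int n) {
    return (F + R * 2.0f + P * static_cast<float>(n - 3)) * (1.0f / static_cast<float>(n));
}
```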
Lucy was the "deformer", which involved implementing both free-form and global deformations. Free-form deformation means creating a low-resolution lattice of points that the user can edit; those lattice points then drive deformations of the underlying mesh.
Global deformations, then, are just a matter of moving the lattice points algorithmically.
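To give a flavor of how the lattice drives the mesh, here's a simplified, single-cell trilinear version of the idea (a real FFD uses Bernstein polynomials over the whole lattice, and these names are mine, not Lucy's code; it reuses the Vec3 helpers from the subdivision sketch):

```cpp
static Vec3 lerp(Vec3 a, Vec3 b, float t) { return a * (1.0f - t) + b * t; }

// corners[i][j][k] holds the (possibly moved) corners of the lattice cell containing the
// point, and (s, t, u) are the point's local coordinates inside that cell, each in [0, 1].
Vec3 deformPoint(const Vec3 corners[2][2][2], float s, float t, float u) {
    Vec3 c00 = lerp(corners[0][0][0], corners[1][0][0], s);
    Vec3 c10 = lerp(corners[0][1][0], corners[1][1][0], s);
    Vec3 c01 = lerp(corners[0][0][1], corners[1][0][1], s);
    Vec3 c11 = lerp(corners[0][1][1], corners[1][1][1], s);
    return lerp(lerp(c00, c10, t), lerp(c01, c11, t), u);
}
```

Moving a corner of the cell drags every point inside it along, which is exactly the "edit a coarse cage, deform a dense mesh" behavior.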
Zia was the "visualizer", which involved implementing a quaternion arcball camera. This type of camera represents rotations as points on a sphere in four dimensions, so I'll spare you the full explanation here. She also implemented a number of simple shading models in GLSL.
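That said, the core of an arcball fits in a few lines: map each mouse position onto a sphere and turn the drag into a quaternion. Here's the textbook construction as a sketch (reusing the Vec3 from above; not necessarily Zia's exact code):

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };   // rotation quaternion

// Project a screen position in [-1, 1]^2 onto the arcball sphere (or clamp to its rim).
static Vec3 toSphere(float px, float py) {
    float d2 = px * px + py * py;
    if (d2 <= 1.0f) return {px, py, std::sqrt(1.0f - d2)};
    float inv = 1.0f / std::sqrt(d2);
    return {px * inv, py * inv, 0.0f};
}

// A mouse drag from one screen point to another becomes the quaternion that rotates
// the first sphere point onto the second (normalize it before use).
Quat arcballDrag(float x0, float y0, float x1, float y1) {
    Vec3 a = toSphere(x0, y0), b = toSphere(x1, y1);
    Vec3 axis{a.y * b.z - a.z * b.y,              // cross(a, b)
              a.z * b.x - a.x * b.z,
              a.x * b.y - a.y * b.x};
    float w = a.x * b.x + a.y * b.y + a.z * b.z;  // dot(a, b)
    return {w, axis.x, axis.y, axis.z};
}
```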
Results of my part, as well as Lucy and Zia's, can be seen in this video:
I actually learned a little bit about rendering from this project. There's a lot of geometry to render, so I had to dig deeper into Mitsuba (the renderer I was using) to keep render times reasonable.
See http://blog.jeremynewlin.info/search/label/garden
With Xiaoyan (Zia) Zhu.
This simulator was Zia's and my final project for Physically Based Animation. We implemented it directly from "A Practical Simulation of Dispersed Bubble Flow" by Doyub Kim and Hyeong-Seok Ko at Seoul National University and Oh-Young Song at Sejong University.
The main idea behind this algorithm is to simulate the gas-fluid flow on a grid, and then project that simulation onto the bubble particles. This lets you skip explicitly calculating the interactions between the two different media. The grid is referred to as a fraction field in the paper: you initialize each cell to a volume fraction of 1 (pure water), and then subtract each bubble's volume from the cell that contains it. Then you advect the velocity field of the grid and project those velocities back onto the bubbles. This advection naturally creates buoyancy and swirl effects due to the fractional densities in each cell.
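Here's a hedged sketch of the fraction-field bookkeeping as I understand it (names and grid layout are illustrative, not the paper's notation):

```cpp
#include <algorithm>
#include <vector>

// Each bubble carries a position and a volume; each grid cell tracks how much of it is water.
struct Bubble { float x, y, z, volume; };

struct FractionField {
    int nx, ny, nz;
    float cellSize;
    std::vector<float> fraction;   // one value per cell, flattened

    FractionField(int nx_, int ny_, int nz_, float h)
        : nx(nx_), ny(ny_), nz(nz_), cellSize(h), fraction(nx_ * ny_ * nz_, 1.0f) {}

    int cellIndex(float x, float y, float z) const {
        int i = static_cast<int>(x / cellSize);    // assumes the point lies inside the grid
        int j = static_cast<int>(y / cellSize);
        int k = static_cast<int>(z / cellSize);
        return (k * ny + j) * nx + i;
    }

    // Reset every cell to pure water (fraction 1), then subtract each bubble's volume
    // from the cell that contains it.
    void rebuild(const std::vector<Bubble>& bubbles) {
        std::fill(fraction.begin(), fraction.end(), 1.0f);
        float cellVolume = cellSize * cellSize * cellSize;
        for (const Bubble& b : bubbles) {
            float& cell = fraction[cellIndex(b.x, b.y, b.z)];
            cell = std::max(0.0f, cell - b.volume / cellVolume);
        }
    }
};
```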
To make things more interesting, you can add stochastic behavior by jittering each bubble's velocity based on some user inputs and the local bubble cluster density. You can also add a "break up" term that splits large bubbles into smaller clusters at some frequency.
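The jitter term is about as simple as it sounds; something in this spirit (the knobs here are illustrative, not the paper's symbols):

```cpp
#include <random>

// Each bubble's velocity gets a small random kick, scaled by a user-set strength and by
// how crowded its local cluster is.
void jitterVelocity(float& vx, float& vy, float& vz,
                    float jitterStrength, float localDensity, std::mt19937& rng) {
    std::normal_distribution<float> kick(0.0f, jitterStrength * localDensity);
    vx += kick(rng);
    vy += kick(rng);
    vz += kick(rng);
}
```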
It's a pretty simple paper, all things considered. Zia also added in geometric sources, which involved converting polygonal meshes to a level set, and then spawning bubbles from within the level set.
We also did some quick temperature comparisons to see how that affected the bubbles' behavior.
As for rendering, I ported the bubbles to Maya and rendered them using Mitsuba (and another one of my projects).
The simulation is pretty slow, and I've been debating porting it to the GPU (as the original authors did), but I can never seem to find the time.
A full video of our results can be seen here: