Wednesday, November 18, 2009

Fixes and a new direction

There apparently was a bug that was preventing the images from converging. It is now gone.
I also implemented a new algorithm. It doesn't seem to improve image quality that much, but it makes the speed/space trade-off more controllable, and it saves much more space anyway.

New algorithm:

Old algorithm:

Both were rendered in approximately 20 minutes. The glossy reflections in the old algorithm look that much better because they were oversampled by spawning 8 rays; this hasn't been done in the new algorithm yet, for simplicity's sake. Note how the darker diffuse areas in the new picture are much cleaner than in the old one.
There is also a boolean-object sphere in the water. I wanted to test whether my implementation of boolean objects still worked, since they are the main reason the renderer is so slow, and they also took me like a month to get working correctly. You can see it best in the old image, on the right side, where the outline of the sphere shows up darker than the rest of the white background.
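For anyone wondering how boolean objects work in a ray tracer: the usual approach is to intersect each primitive as an entry/exit interval along the ray and combine the intervals with set operations. A minimal sketch of that general idea for a sphere-minus-sphere difference (illustrative only, not this renderer's actual code):

```java
// Sketch of CSG ("boolean object") intersection along a ray, where each
// primitive reports an entry/exit parameter interval [tNear, tFar].
public class CsgSketch {
    // Interval of a ray through a sphere; null means "missed". Assumes |d| = 1.
    static double[] sphereInterval(double ox, double oy, double oz,
                                   double dx, double dy, double dz,
                                   double cx, double cy, double cz, double r) {
        double lx = ox - cx, ly = oy - cy, lz = oz - cz;
        double b = lx * dx + ly * dy + lz * dz;
        double c = lx * lx + ly * ly + lz * lz - r * r;
        double disc = b * b - c;
        if (disc < 0) return null;
        double s = Math.sqrt(disc);
        return new double[] { -b - s, -b + s };
    }

    // A \ B: keep the part of A's interval that lies outside B.
    static double[] difference(double[] a, double[] b) {
        if (a == null) return null;
        if (b == null) return a;
        if (b[1] <= a[0] || b[0] >= a[1]) return a;          // no overlap
        if (b[0] <= a[0] && b[1] >= a[1]) return null;       // B swallows A
        if (b[0] <= a[0]) return new double[] { b[1], a[1] }; // B clips the front
        return new double[] { a[0], b[0] };                   // keep the near piece
    }

    public static void main(String[] args) {
        // Unit sphere at the origin minus a small sphere poking in from the front.
        double[] a = sphereInterval(0, 0, -5, 0, 0, 1, 0, 0, 0, 1);
        double[] b = sphereInterval(0, 0, -5, 0, 0, 1, 0, 0, 0.8, 0.5);
        double[] hit = difference(a, b);
        System.out.println(hit[0] + " " + hit[1]);
    }
}
```

The slowness comes from having to track full intervals (and, in general, lists of intervals) per primitive instead of just the first hit.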

Next, I plan on adding a sort of importance sampling to the new one (where all its power really lies).

Saturday, October 31, 2009

Stochastic Progressive Photon Mapping

It has been implemented. So has the hashmap, which doubled my samples per second. The multithreading is still broken by a race condition.
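The hashmap here is presumably a spatial hash over grid cells. A generic sketch of that idea, assuming a fixed cell size no smaller than the gather radius (the actual data structure used here may differ):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

// Spatial hash for photon lookups: bucket photons by grid cell, then gather
// by scanning the 27 cells around the query point. Hash collisions are
// harmless because every candidate is distance-filtered anyway.
public class PhotonHash {
    static class Photon {
        double x, y, z;
        Photon(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    final double cellSize;
    final HashMap<Long, List<Photon>> cells = new HashMap<>();

    PhotonHash(double cellSize) { this.cellSize = cellSize; }

    long key(double x, double y, double z) {
        long ix = (long) Math.floor(x / cellSize);
        long iy = (long) Math.floor(y / cellSize);
        long iz = (long) Math.floor(z / cellSize);
        return (ix * 73856093L) ^ (iy * 19349663L) ^ (iz * 83492791L);
    }

    void add(Photon p) {
        cells.computeIfAbsent(key(p.x, p.y, p.z), k -> new ArrayList<>()).add(p);
    }

    // Gather all photons within radius r (requires r <= cellSize).
    List<Photon> gather(double x, double y, double z, double r) {
        List<Photon> out = new ArrayList<>();
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                for (int dz = -1; dz <= 1; dz++) {
                    List<Photon> cell = cells.get(
                        key(x + dx * cellSize, y + dy * cellSize, z + dz * cellSize));
                    if (cell == null) continue;
                    for (Photon p : cell) {
                        double ex = p.x - x, ey = p.y - y, ez = p.z - z;
                        if (ex * ex + ey * ey + ez * ez <= r * r) out.add(p);
                    }
                }
        return out;
    }
}
```

The speedup over a kd-tree or brute force comes from the O(1) cell lookup; the trade-off is that the cell size has to be tuned to the gather radius.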

Unfortunately, I can't seem to reproduce the paper's result of glossy reflections converging even faster than diffuse surfaces. Mine seem to converge really slowly (though they do converge).

I do have a few new ideas that seem much simpler to implement than my previous Voronoi-cell-based method (which was too complicated for me to bother implementing instead of studying for school), and thus fairly publishable.

Friday, July 3, 2009

I multiplied the speed by 10; now I get 15 million samples within half an hour.
Also some bug fixes.
7 hours, 150 million samples:

Thursday, June 4, 2009

Progressive Photon Mapping 2.

(After about 15 million samples and 7 hours)

The code for progressive photon mapping is finally written. For the same number of samples, the algorithm does look better than path tracing, but the sample speed is far worse. While I'm not doing it quite the way the paper describes, my method has the same, if not better, big-O time per sample; a proof has yet to be done. Right off the bat I can think of a couple of optimizations: the first two nodes on the photon path can be skipped and not added to the map, and the direct lighting can be computed explicitly. I say this because the algorithm seems much slower for direct lighting, where graininess and complex lighting are not a problem, and it fails completely at antialiasing, which is only really important under direct lighting.
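As context, the per-hit-point update published in the progressive photon mapping paper (Hachisuka et al. 2008) shrinks the gather radius as photons accumulate. A sketch of that rule (the paper's version, which may not match the variant used here):

```java
// Progressive photon mapping statistics for one hit point.
// alpha in (0, 1) controls how fast the gather radius shrinks:
//   R^2  <- R^2 * (N + alpha*M) / (N + M)
// and the accumulated flux is scaled by the same ratio.
public class PpmStats {
    double radius2; // squared gather radius
    double n;       // accumulated (fractional) photon count
    double flux;    // accumulated, unnormalized flux

    PpmStats(double initialRadius) { this.radius2 = initialRadius * initialRadius; }

    // m = photons found this pass inside the radius, passFlux = their summed flux.
    void update(double m, double passFlux, double alpha) {
        if (m == 0) return;                 // nothing gathered, nothing changes
        double nNew = n + alpha * m;
        double shrink = nNew / (n + m);
        radius2 *= shrink;                  // radius shrinks toward zero over passes
        flux = (flux + passFlux) * shrink;  // keep flux consistent with new radius
        n = nNew;
    }
}
```

The final radiance estimate divides `flux` by `(pi * radius2 * totalPhotonsEmitted)`; since the radius tends to zero while the count tends to infinity, the estimate converges without the fixed-radius bias of classic photon mapping.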

4000 samples:

17000 samples:

1 million samples:

Wednesday, April 1, 2009

DOF!!! (and new materials)

path traced

Note: checker pattern + improved glossy + glossy refraction

Tuesday, March 10, 2009

Sunday, February 22, 2009

Yay: 4000 spp

I fixed boolean operations, and I have a webpage.

For anybody who can: could y'all try to open tlrcam's jar and see if you can read the source?

Tuesday, February 10, 2009

at 900 spp.

I ran some tests and found that path tracing runs at about 8000 mps on this image, and MLT/this runs at 7000 mps. Not as big a difference over the long run as I thought. It means I can stop trying to optimize it as much.

Problem: MLT, this, and path tracing all start out rendering at about 30000 mps, settle to an average of 20000 mps for about 5 minutes (the warm-up ends after about 1 minute), and then finally drop below 10000 mps. I have no idea why it is doing this.
Ideas: Java's garbage collector acting up...
the MT random number generator doing something
a memory leak.
no clue.

Any ideas?

Monday, February 9, 2009

More from the new MLT algorithm

I improved it a little, and it seems to work fairly well. Now all I need is an importer for Blender or some other program, and I will open source it; well, possibly with a copyright so I can prove that I wrote it, something like PBRT has. All except for the new MLT code, which I'd prefer to keep a secret a bit longer.

Now tell me, is this an improvement worth pursuing? Or should I go ahead and implement the paper on adaptive multidimensional sampling that seems to show such great improvements? (Does anybody know of any renderers that use this?)

Regular path tracing, 300 spp. This was the first time I tried to benchmark it without Eclipse being open and without me doing other stuff. On my 1.9 GHz Core 2 Duo, with 2 threads, on Vista 64, I was getting an average of 10000 mps.

The image after 315 spp. I'm not going to say how long this took, because it was only on one core and was rendered while I was using my computer intensively.

The new algorithm after 20 spp, at about 10 minutes on a dual core. -Lprob .01-.99 -maxRej 10-1000

The regular MLT algorithm with 20 spp at about 10 minutes, with -Lprob .4 and -maxRej 500.

The regular image's Lprob and maxRej were set to what the new algorithm's Lprob and maxRej would average out to, given those parameters and this scene. All the MLT is based on Kelemen et al.'s robust mutation strategy paper and their paper on hybrid MLT.
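As context for the -Lprob flag: in Kelemen et al.'s scheme, each mutation in primary sample space is either a "large step" (a fresh uniform sample, taken with probability Lprob) or a small exponential perturbation of the current sample. A toy 1D sketch with a stand-in target density (not the renderer's code; in the real thing, maxRej additionally caps consecutive rejections):

```java
import java.util.Random;

// Kelemen-style primary-sample-space mutation plus a tiny Metropolis chain.
public class KelemenSketch {
    // With probability lprob take a large step (fresh uniform sample);
    // otherwise perturb by a step drawn log-uniformly in [1/1024, 1/64]
    // (the paper's s1/s2 defaults) and wrap back into [0, 1).
    static double mutate(double u, double lprob, Random rng) {
        if (rng.nextDouble() < lprob) return rng.nextDouble();
        double s = (1.0 / 64.0) * Math.exp(-Math.log(16.0) * rng.nextDouble());
        double v = u + (rng.nextDouble() < 0.5 ? s : -s);
        v -= Math.floor(v);            // wrap into [0, 1)
        return v < 1 ? v : 0;          // guard against FP wrap landing on 1.0
    }

    public static void main(String[] args) {
        // Metropolis chain targeting the toy density f(x) = 2x on [0, 1],
        // whose mean is 2/3; a rough sanity check that the mutation mixes.
        Random rng = new Random(42);
        double u = rng.nextDouble(), fu = 2 * u;
        double sum = 0;
        int n = 200_000;
        for (int i = 0; i < n; i++) {
            double v = mutate(u, 0.3, rng);
            double fv = 2 * v;
            if (rng.nextDouble() < fv / fu) { u = v; fu = fv; }
            sum += u;
        }
        System.out.printf("chain mean = %.3f%n", sum / n);
    }
}
```

The large-step probability trades off exploration (finding new bright paths) against exploitation (refining the current one), which is presumably why sweeping it over a range, as in the -Lprob .01-.99 run above, behaves differently from a single fixed value.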

Sunday, February 8, 2009

Stratified sampling + MLT

I haven't found any papers on exactly this subject, but I had an idea (and hopefully, if there is nothing out there, I can write my own paper). What I seem to have had the most trouble with in MLT was sampling every direction at a bounce as uniformly as possible, where applicable. The trouble is, it is very hard to keep track of where MLT has sampled previously and what it should do next, and how. What I have done is essentially modify the balance heuristic to sample every pixel, and every direction, as evenly as possible. This only required a slight modification to my path tracer, and it does not introduce any bias. There are numerous other subtleties, but I'd rather not go over all of them, because if it is a unique idea/implementation, I could write a paper on it, which would be nice.
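For reference, the standard, unmodified balance heuristic (from Veach's multiple importance sampling work) is the baseline being modified here; the modification itself is deliberately not shown. It weights a sample drawn from strategy i among strategies taking n[k] samples from densities p[k]:

```java
// The standard balance heuristic: w_i(x) = n_i p_i(x) / sum_k n_k p_k(x).
// The weights over all strategies always sum to 1 at any point x.
public class BalanceHeuristic {
    static double weight(int i, int[] n, double[] p) {
        double denom = 0;
        for (int k = 0; k < p.length; k++) denom += n[k] * p[k];
        return n[i] * p[i] / denom;
    }
}
```

Because the weights sum to one, combining strategies this way stays unbiased, which is consistent with the claim above that the modification introduces no bias as long as that property is preserved.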

With the modified balance heuristic, at 51 spp, after about 20 minutes:
Without the modification, at 81 spp, after about 30 minutes:

Saturday, February 7, 2009

High Dynamic Range lighting

I have the preliminary HDRI implemented; I just want to support more than one sort of map (currently lat/long).
Now I have insane amounts of work to do, so I will not be able to post anything else for a while.
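The lat/long map mentioned above is the equirectangular mapping from a direction to (u, v) texture coordinates. A sketch assuming a y-up convention (axis conventions vary between renderers):

```java
// Equirectangular (lat/long) environment map lookup:
// u comes from the azimuth, v from the polar angle, both in [0, 1].
public class LatLongMap {
    static double[] dirToUv(double x, double y, double z) {
        double u = (Math.atan2(x, -z) + Math.PI) / (2 * Math.PI);
        // Clamp guards against |y| drifting past 1 from FP error.
        double v = Math.acos(Math.max(-1, Math.min(1, y))) / Math.PI;
        return new double[] { u, v };
    }
}
```

Other map types (angular/light-probe, cube maps) differ only in this direction-to-(u, v) step, which is presumably why supporting more than one sort of map is mostly plumbing.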

Tuesday, February 3, 2009


That previous pic had a bug in it, a bug that happened to make it look way cooler than it should have (and take longer to render).

Here is the less cool, but bug-free, image.

I still have to reimplement virtually all my materials and SSS, but after noticing that with some fixes it runs at about 1/4 to 1/2 the speed of Indigo, I've decided my renderer is not completely horrible. When I had the bug that made the image look cool, I realized that all I needed was to spend some time making cooler scenes for it to render. Well, I'm not going to do that: I currently have no time, a kinda lame importer, and basically no materials as of now. Instead, what I'm going to do is fix my thin lens camera to work with bidi, add HDRI, and add bloom. Along the way I'll probably add a shader or two, because I've finally decided on a material system and it should be easy to do now.
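A thin lens camera, for what it's worth, just jitters each ray's origin over the lens disc and re-aims it at the plane in focus. A minimal sketch (lensRadius and focalDist are illustrative parameter names, not this renderer's):

```java
import java.util.Random;

// Thin-lens depth of field for a camera at the origin looking down +z.
public class ThinLens {
    // (px, py) is the pinhole ray direction on the z = 1 image plane.
    // Returns {ox, oy, oz, dx, dy, dz} with a unit-length direction.
    static double[] sampleRay(double px, double py, double lensRadius,
                              double focalDist, Random rng) {
        // Where the unperturbed pinhole ray crosses the plane in focus.
        double fx = px * focalDist, fy = py * focalDist, fz = focalDist;
        // Uniform point on the unit disc via rejection sampling.
        double lx, ly;
        do {
            lx = 2 * rng.nextDouble() - 1;
            ly = 2 * rng.nextDouble() - 1;
        } while (lx * lx + ly * ly > 1);
        lx *= lensRadius;
        ly *= lensRadius;
        // New ray: from the lens sample toward the in-focus point.
        double dx = fx - lx, dy = fy - ly, dz = fz;
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return new double[] { lx, ly, 0, dx / len, dy / len, dz / len };
    }
}
```

With lensRadius = 0 this degrades to an ordinary pinhole camera, so the DOF can be toggled off without a separate code path. The bidi complication is that the camera then needs a proper pdf for the lens sample, not just a ray generator.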

Another idea I've been contemplating is writing something that can read user-defined shaders and produce corresponding sampling sets. I'm sure this has been done before; I even remember reading parts of the paper. But I don't have the paper, so I'll just figure it out when I want to.

speed and rings

Well, I did some behind-the-scenes work on the tracing code and took away a linked list I was using (the algorithm was easier to implement that way in the beginning), and now it's about 3-10 times as quick. It's now running at 30000 samps/s.

This image was taken after about 6 hours and 2000 muts/pixel. I feel as though I've made so much progress in the past 2 weeks.

Monday, February 2, 2009

All Better

Fixed a bug in the MLT code; now there are no magic squares on the walls.
I still want to implement Latin hypercube stratified sampling with the MLT, plus Halton sequences and the like. I'm thinking about giving up all hope of simple path tracing and making this a pure MLT + bidirectional renderer. I figure I can use the same geometry, scene, and visual core to add regular path tracing capabilities back one day. The added simplicity would make it much easier to implement lots of MLT-specific optimizations.
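Latin hypercube sampling, for reference, places n points so that each of the n strata in every dimension receives exactly one point, with the strata shuffled independently per dimension. A minimal sketch:

```java
import java.util.Random;

// Latin hypercube sampling: n jittered points in [0,1)^d such that, in every
// dimension, each of the n equal-width strata contains exactly one point.
public class LatinHypercube {
    static double[][] sample(int n, int d, Random rng) {
        double[][] pts = new double[n][d];
        for (int dim = 0; dim < d; dim++) {
            // Shuffle the stratum assignment for this dimension (Fisher-Yates).
            int[] perm = new int[n];
            for (int i = 0; i < n; i++) perm[i] = i;
            for (int i = n - 1; i > 0; i--) {
                int j = rng.nextInt(i + 1);
                int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
            }
            // One jittered sample inside each assigned stratum.
            for (int i = 0; i < n; i++)
                pts[i][dim] = (perm[i] + rng.nextDouble()) / n;
        }
        return pts;
    }
}
```

The appeal over plain stratification is that LHS stratifies every dimension at once with only n samples, instead of needing n^d; the hard part in MLT is that samples arrive as a correlated chain rather than as an up-front batch, which is presumably where the difficulty lies.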

There is also the possibility that I might abandon all hope of this being a useful renderer and make it super unbiased (polarization, spectra, quantum properties of light... double slit...).

In other news, I've been recruited to make my renderer suitable for animations. There are many reasons other than speed, which I'm ignoring, that would make my renderer useless for animations. The reason I agreed was that I had an idea: if I applied MLT directly to the animation sequence, the most relevant parts of the sequence would render first, and each image would be equally rendered. My delta sampling could also be applied with respect to time, so that artifacts make the animation less fuzzy.

I have way too many ideas and way too little time to try them all. It's sad.

Saturday, January 31, 2009

Best CG paper ever

Beard removal techniques:

I begin at the end

So I definitely might exist, and I definitely might have finally completely figured out how to implement metropolis light transport along with my crazy extension.

I still need bidirectional path tracing, textures (even though I am against them), a better scene format, more BRDFs, and an SSS shader to take advantage of my scattering capabilities. (That was the whole point of refactoring the project a couple of months back.)

Only problem is, now I have lots of work to do.