Mandelbulbs: Mandelbrot sets transformed into 3D

Now it works, thanks.

How can OptiX know whether I have a Quadro or a Tesla card?

Checking either the PCI device ID, or string-matching against the device name (on Windows, as given in the driver’s .INF file)? Not sure…
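The string-matching idea can be sketched in a few lines of C++. This is just an illustration; in a real program the name string would come from `cudaDeviceProp::name` (via `cudaGetDeviceProperties`) or an OptiX device attribute query rather than being passed in by hand:

```cpp
#include <cstring>

// Classify a GPU by its marketing name as reported by the driver.
// The classification is purely a substring match, so it depends on
// NVIDIA keeping "Quadro"/"Tesla"/"GeForce" in the device names.
enum class GpuClass { GeForce, Quadro, Tesla, Unknown };

GpuClass classifyDevice(const char* name)
{
    if (std::strstr(name, "Quadro"))  return GpuClass::Quadro;
    if (std::strstr(name, "Tesla"))   return GpuClass::Tesla;
    if (std::strstr(name, "GeForce")) return GpuClass::GeForce;
    return GpuClass::Unknown;
}
```

The PCI device ID route would be more robust against renamed SKUs, but requires a lookup table that has to be maintained per GPU generation.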

Christian

Anaglyph bots are invading. You’ll need Red/Blue or Red/Cyan glasses for this thing to pop out of your screen.
This will be a feature of the next version (due soon)

External Media

Ask NVIDIA for a pair of 3D glasses so you can add that as an option also ;)

My company is likely getting one of these 120 Hz LCDs and the 3DVision goggles and a Quadro card.

Maybe as a side project I can convert the Mandelbulb code to use Quad Buffered stereo. But for sure I’d also take such glasses as a freebie. ;)

Christian

I think it can be a killer addition. At least when testing the glasses at NVISION in 2008, the experience in games was very convincing.

The people with only one eye among you are very sad now :(

Why? With the NVIDIA glasses they will have the same experience as in real-life I guess?

Hi everybody.

I went out today and bought two nVidia GTX 260s just to play with this thing. They are automatically used by OptiX, doubling the performance compared to a single GTX 260. A bit like SLI for raytracing ;)
The performance is now sufficient for me to go inside the mandelbulb and to pan and zoom around at 1920x1200 resolution on my 24 inch LCD. WOW. Insane I tell ya.

External Media


External Media

Oh great, the nVidia performance monitor tool got captured in my screenies. Anyway, you get the idea. If the 3D effect does not work for you:

  1. check whether you have your 3D goggles on and
  2. try to squint a little. I’ve set a pretty evil eye separation distance so the effect is quite intense.

Anyway, here is a new binary with the Anaglyph 3D mode that you can toggle with the “3” key. I even added a means to control eye separation and convergence, so you can choose some eye-friendly settings. The coolest thing to do is to toggle go-inside mode with “g” and then move the camera into the bulb. The intensity of the stereo effect then depends on how close the camera moves towards the center of the bulb. Note that you’re seeing the exterior hull of the bulb wrapped around the camera; it is not truly the inside that you are seeing.
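For the curious, the two ingredients of the anaglyph mode can be sketched in a few lines of C++. This is a generic illustration, not the program’s actual code; the struct and parameter names are made up:

```cpp
// Red/cyan anaglyph compositing: the red channel comes from the
// left-eye render, green and blue from the right-eye render.
struct Rgb { unsigned char r, g, b; };

Rgb anaglyph(Rgb left, Rgb right)
{
    return Rgb{ left.r, right.g, right.b };
}

struct Vec3 { float x, y, z; };

// Eye separation: offset the camera along its "right" axis by half
// the separation distance for each eye. Convergence would then come
// from aiming both eye cameras at a shared look-at point (not shown).
Vec3 eyePosition(Vec3 cam, Vec3 rightAxis, float sep, bool leftEye)
{
    const float s = (leftEye ? -0.5f : 0.5f) * sep;
    return Vec3{ cam.x + s * rightAxis.x,
                 cam.y + s * rightAxis.y,
                 cam.z + s * rightAxis.z };
}
```

A larger `sep` makes the stereo effect more intense, which matches the observation that the effect grows as the camera approaches the center of the bulb, since the separation becomes large relative to the scene.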

You can create some insane interior views with this technique. Also note that while holding the shift key and using the right mouse button you can adjust the field of view, which can be very handy in the interior view. To deepen some of the tunnels you may find, increase the iteration count with “i”. And you can also sharpen the details more with “d”, but watch out for aliasing ;)

As always, the binary is compressed with 7-zip, which is about half the size of what a regular .zip archive would be. Use the 7-zip tool found at www.7-zip.org.

Christian
mandelbulb_win32.zip (2.54 MB)

I love the app…

I was checking out various Mandelbulbs, and I noticed that each time I hit the ‘-’ key, my GPU utilization would go up a bit.

I proceeded to keep hitting ‘-’ until I reached 30. By then my GPU utilization was high and even, and I was still getting 16 FPS on the Mandelbulb I was viewing.

This is my system’s utilization, when I upped it by hitting ‘-’ 30 times:

Outstanding GPU load balancing…

If I fire the program up, looking at the default view, I am almost at 93 FPS running a single 295 in SLI mode, and a 280 in dedicated PhysX mode.

Sweet!

If I look at the default view, @ 1920x1200 using the ‘f’ key, I am getting 38 FPS.

The OptiX libraries are good at distributing the workload among several GT2xx GPUs. Unfortunately G80 and G92 are not supported (lame decision!). That made me buy two GTX 260s just today (well, from a financial standpoint: a good decision for nVidia, duh).

The reason it takes more time to render when you press “-” a lot is that you enter the range of negative exponents. There I use a slower rendering method that needs more trigonometry (sine/cosine expressions and such). The resulting fractals are of the “mandelier” type (hollow objects that resemble chandeliers).
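The trig-heavy path corresponds to the standard spherical-coordinate (“triplex”) form of the Mandelbulb iteration z → z^n + c. A generic C++ sketch (not the actual kernel) shows where the sines and cosines come from; for a fixed positive integer exponent these can be expanded into cheaper polynomial terms, which is why arbitrary (including negative) exponents fall back to the slow path:

```cpp
#include <cmath>

struct Triplex { double x, y, z; };

// One step of z = z^n + c in spherical coordinates:
// r -> r^n, theta -> n*theta, phi -> n*phi.
// Works for any real n, including the negative exponents
// that produce the hollow "mandelier" shapes.
Triplex triplexPow(Triplex v, double n, Triplex c)
{
    const double r     = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    const double theta = std::atan2(std::sqrt(v.x*v.x + v.y*v.y), v.z);
    const double phi   = std::atan2(v.y, v.x);
    const double rn    = std::pow(r, n);
    return Triplex{
        rn * std::sin(n*theta) * std::cos(n*phi) + c.x,
        rn * std::sin(n*theta) * std::sin(n*phi) + c.y,
        rn * std::cos(n*theta)                   + c.z
    };
}
```

For n = 8 (the classic Mandelbulb), sin(8θ) and cos(8θ) can be rewritten as polynomials in sin θ and cos θ via multiple-angle identities, avoiding the `pow`/`atan2` calls entirely.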

Well, the EVGA boys know now… :P

http://www.evga.com/FORUMS/tm.aspx?high=&a…p;mpage=1#62220

I’ll say… Even on GPUs in SLI and dedicated PhysX mode.

Thanks for the info.

Note: I was starting to think that Mandelbulb might only use one CPU core, due to this test:

I did reboot fresh, and only ran the Performance Monitor and Mandelbulb. It looks like that single core of my CPU would probably be my system’s bottleneck…

This, again, is just Mandelbulb’s default screen.

External Media

For trivia, my (3) GPUs’ utilization running Mandelbulb’s default screen, at its default res, is 64% - 70% - 59%.

Once again I did reboot fresh, and only ran the Performance Monitor and Mandelbulb…

I let the default screen come up, and used the ‘f’ key to go to 1920x1200 res. (On my system.)

I guess, like in games, res is important, and more cores would be used if the system deemed it necessary.

My CPU utilization now looks like this with Mandelbulb’s default screen at 1920x1200. (Getting 37.96 FPS.)

External Media

For trivia, my (3) GPUs’ utilization running Mandelbulb’s default screen, at 1920x1200 res, is 72% - 82% - 67%.

I guess I am surprised how high one core’s utilization may go before the second appears to come to the rescue. :P

Looking back at my first shot now, it does appear like a second core was just starting to load up.

Another observation one might make is that in this case, the GPUs are doing a much better job of even workload distribution than my CPU cores are…

Sweet!

Talonman

External Media

You know, a benchmark button might be fun?

Just a thought…

Mandelbulb Performance Trivia:
I disabled my 280 to see how my 295 would do alone.

So for me, default view, at default res: (1) 280 adds 19FPS. (Was 93, now 74, diff=19.)

And the default view, at 1920x1200 res: (1) 280 adds 9FPS. (Was 37, now 28, diff=9.)

Problem is: the default view has already changed between the last two posted binaries. And the actual shader code might receive subtle and not-so-subtle changes as well with every release. And I am releasing at least one version per week. So this is not a good base for a benchmark, really.

Use the other OptiX code samples for benchmarking. These are not changing too often ;)

Christian

A built in script still might be fun…

If you care to share, be sure to post your latest version.

I’d run it!!

Keep up the fine job.

(This is the only OptiX program that I know of, and that I can download.)

If there are any others ready to run on Windows 7, feel free to rub my nose in them.

I just wanted to let you know I educated myself more about the OptiX engine. (Post #25.)

[url=“http://www.evga.com/FORUMS/tm.aspx?m=81450”]http://www.evga.com/FORUMS/tm.aspx?m=81450[/url]

What a fine product!

Also wanted to let you know that I have yet to find another app with better workload distribution. (Been looking too…)
[url=“http://www.evga.com/FORUMS/tm.aspx?m=85501”]http://www.evga.com/FORUMS/tm.aspx?m=85501[/url]

I do think it’s a thing of beauty.

I encourage you to keep tinkering with this app; it may be more unique than you think. ;)

Keep up the fine job, and as always, if you have any updates to share, I would love to test them out.

I still stand firm that a benchmark button on this app would be a gem to help show the value of the GPU revolution to us Windows 7 commoners.
It promotes the idea that more GPUs = more performance, better than any app I know.

In my opinion, OptiX’s multi-GPU scaling is the mark to hit.

Reviving an old thread :)

I couldn’t get the win32 executable to run on my GTX 460 SLI using the current 260.99 drivers. But I love the program and knew my new video cards should provide a vastly improved frame rate compared to the previous 8800 Ultra SLI setup. So I spent a little time modifying the source code posted here, finally getting a successful executable to compile using OptiX 2.0 and CUDA Toolkit 3.1, both x64. Hope someone else can enjoy it!

UPDATE: Recompiled with a newer version of the source code (thanks CBuchner1) which includes 3D and the option to go inside the bulb. I included the source code too. The program will load the backdrop from \Program Files\Nvidia\OptiX SDK 2.0.0\SDK\Data\background.hdr; I included the default CedarCity.hdr renamed as background.hdr, but you can create your own, name it background.hdr, and it will then load. I made a nice one from a 360° panorama of the Grand Canyon. It is 20 MB so I didn’t include it.

Would anyone have a suggestion why I’m getting the following message when running the .exe from post #37 (I’m also using a GTX 460, Win7 x64)?

“The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail.”

Looks like a great app, especially the stereo mode!