Getting started / easy one-shot code gen. for DF evaluators?
Hi MDL developers,

first of all, thanks for your work and for finally making the SDK publicly available! It looks like MDL really got it all right and could solve many of the interoperability issues that exist today with heterogeneous tool pipelines and PBR.

The SDK is quite impressive too (nice lean ABI layer, BTW), and I'm currently reading into the docs / trying to get on top of things, so I hope you don't mind my (at this point possibly naive) questions.

It seems the following recipe would suffice to get some basic rendering up and running (only considering the surface BSDF for simplicity of discussion):

  • Implement the BSDFs that I want my renderer to use for MDL materials as functions that return 'color' in some kind of built-ins module (OSL seems complete enough to express those)
  • Write some code that traverses the material's 'surface.scattering' expression and replaces all BSDFs found with those functions
  • Pin the expression onto some dummy material's dummy BSDF's 'tint' parameter, to generate a single entry point that accepts 'state'
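
For concreteness, the first bullet might boil down to a tiny built-ins module along these lines (a hedged sketch of mine: the function name and the fixed light direction are made up for illustration, and a real evaluator would of course get the light direction from the renderer):

```mdl
mdl 1.4;

import ::math::*;
import ::state::*;

// Hypothetical stand-in for df::diffuse_reflection_bsdf that returns the
// reflected radiance toward a fixed light direction as 'color' instead
// of a 'bsdf'. 'light_dir' is an assumption just to keep this self-contained.
export color eval_diffuse_reflection(
    color tint = color(1.0),
    float3 light_dir = float3(0.0, 1.0, 0.0))
{
    // Lambertian term: albedo / pi * max(0, N.L)
    float n_dot_l = math::max(0.0, math::dot(state::normal(), light_dir));
    return tint * (n_dot_l / math::PI);
}
```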

Yet API-wise the whole thing feels somewhat off-beat. On the other hand, it also seems needlessly complex to write code against several additional compilation APIs (in case of supporting CUDA / GLSL) when I already have compilation abstracted by the MDL SDK...

Questions:
  1. The hack sketched out above would work, right?
  2. Is there a nicer way (there's a 'LinkUnit' and designated built-in modules that support 'native' annotations which hint into that direction, but little detail)?
  3. Is there some way to lock parameters to constants in class compilation mode / or, alternatively a way to mark inputs as varying (even if they default to a constant) for instance-level compilation?

#1
Posted 02/13/2018 03:00 PM   
Hi,

I must admit that I'm not fully understanding the details of the 'hack' you're proposing, so I might not be exactly answering your questions - but I hope I can still help you here.

Implementing BSDF support is currently the biggest hurdle to get MDL support running in a renderer. Right now we don't offer a turnkey solution for that, i.e. the compiler cannot generate ready-to-use BSDF code for you yet. Instead you need to implement all BSDF support inside your renderer, which means you basically need to do the following:

  • Implement functionality for everything in df:: in your renderer.
  • Analyze the compiled material's surface.scattering and build a representation of it for your renderer.
  • Generate code for all expressions connected to df parameters and wire them up such that you can call them from inside your BSDF machinery.


Quite soon, however, we will offer functionality to compile surface.scattering into ready-to-use BSDF evaluation and importance sampling functions (potentially only for the PTX backend initially). This should make the integration into a physically based renderer much simpler. But even with that, I think implementing your own BSDF machinery can be beneficial since it allows to tightly integrate everything with the small details and peculiarities each renderer has.

#2
Posted 02/15/2018 01:47 PM   
Like Matthias said, the hacky solution is not entirely clear to us; we might need some more context. Maybe some comments regarding OSL:

OSL without custom closures cannot faithfully represent MDL's BSDFs. OSL provides many closures that match MDL BSDFs (like GGX, or a Henyey-Greenstein phase function for volumetric rendering), but BSDF layering can only be implemented approximately in OSL. Ideally it would need custom closures (which OSL, imho, begs for, and which many renderers like Arnold provide). For a start, an approximation might be enough, though.
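
To illustrate the layering point: the kind of construction that OSL's fixed closure set can only approximate is trivial to write down in MDL, e.g. a glossy coat Fresnel-layered over a diffuse base (just a minimal sketch; the material name and parameter values are arbitrary):

```mdl
mdl 1.4;

import ::df::*;

// A coated diffuse surface: glossy lobe layered over a diffuse base,
// weighted by a Fresnel term at the coating's IOR.
export material coated_diffuse(
    color base_tint = color(0.8),
    float coat_ior = 1.5)
 = material(
    surface: material_surface(
        scattering: df::fresnel_layer(
            ior: color(coat_ior),
            layer: df::simple_glossy_bsdf(roughness_u: 0.1),
            base: df::diffuse_reflection_bsdf(tint: base_tint)
        )
    )
);
```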

2. "native" is a way to use non-MDL functions inside an MDL material. It's mostly for debugging purposes.

Right now we have only class vs. instance compilation. We are working on an option to customize which parameters to expose after compilation, but I cannot confirm whether it will be part of the next release. It has pretty high priority, though.

#3
Posted 02/15/2018 04:44 PM   
Thanks for your replies!

OSL? Wait... Sorry, I'm switching between the two at the moment, and both have three letters. I meant MDL, of course! :D

> OSL without implementing custom closures cannot faithfully represent MDL's BSDF
> [...] custom closures which OSL imho begs for

Yes, no dependence on 'L' for measured curve / Schlick. Also: No 'eta' or 'other_ior' input anywhere for Fresnel vs. anything but air.

If I wanted to compile MDL to OSL (despite the limitations), I guess it'd be much easier emitting source code while traversing the expression trees produced by the SDK...

> "native" is a way to use non mdl functions inside an MDL material. its mostly for debugging purposes.

Ah! Mostly? Will it generate calls to external code you can link against via LLVM and PTX APIs (like the PTX backend's texture call mode "direct_call")?

> We work on an option to customize which parameters to expose after compilation but i can not confirm whether it will be part of the next release.
> It has pretty high priority though.

> Quite soon, however, we will offer functionality to compile surface.scattering into ready-to-use BSDF evaluation and importance sampling functions
> (potentially only for the PTX backend initially). This should make the integration into a physically based renderer much simpler. But even with that,

Good news!

> I think implementing your own BSDF machinery can be beneficial since it allows to tightly integrate everything with the small details and peculiarities
> each renderer has.

Yes, I agree for the integration into a full-blown, existing renderer, yet that's not the only possible use case out there:

I know, "MDL is not supposed to be a shading language", yet it seems perfectly well-suited to express sampling strategies and distribution functions as you might have in your renderer, preprocessing stage, or whatever tool (say, something new with MDL as its only material representation). MDL is already portable between CPU & GPU, so it would only seem natural to let users apply it however it suits their needs. Please make the SDK allow it. Reference implementations for the DFs would be super useful for direct integration, conformance testing of alternatives, and tightening of the spec.

> Like Matthias said the hacky solution is not entirely clear to us

I was talking about programmatically transforming a material into something crude like this

material entry_evalBsdf(
    float3 direction
) = material(
    ior: // don't care - let me set 'state' and return 'color'
    // [ ... surface.scattering with 'bsdf' constructors replaced with implementing functions ... ]
);


if we're fine with point lights or brute force. With importance sampling, it might look somewhat like this (I guess this case would need a smarter code transform to split up the probability cake):

material entry_sampleBsdf(
    float2 uniformRandom
) = let {
    float3 direction =
        // [ ... surface.scattering with 'bsdf' constructors replaced with implementing expressions ... ]
} in material(
    ior: // don't care - let me set 'state' and return 'color'
    // [ ... surface.scattering with 'bsdf' constructors replaced with implementing expressions ... ]
);


Note that I'm not seriously considering something like this. I was hoping you'd point me to less crazy realms that let me exploit the MDL-SDK as a (more or less) general purpose compilation and linking framework. Seems like linking currently has to happen via LLVM and/or CUDA APIs.

#4
Posted 02/16/2018 01:59 AM   
If you feed in the missing state through the material interface, (i.e. view-direction, lightsources, environment) MDL is in principle powerfull enough to implement necessary evaluation, ignoring shadows or complex light transport. I am hestitant to reccomend something like this, even if it could work. It would be quite some work that, after an initial start, would be wasted. "native" generates calls to external code. If you can implement functionality outside of MDL that you want to call, what is the big step to writing the whole BSDF eval outside and just call MDL for the texturing functions? The compiler does work outside of the MDL material evaluation. in iray we for example use it also to allow programming custom environment functions. So if you just look for a compiler infrastructure that generates code for multiple targets, that should work. You could probably devisean interface for the bsdf and implement the eval and sample functions as MDL functions, compile to your target architectue and call those mdl functions from your renderer as needed. Instead of the full bsdf machinery, another way would be to implement the ue4 material model only for a start and use distilling to simplify any material to ue4. for the paraeters you could call the compiled MDL code like its shown in the samples shipped with the SDK. In many cases the differences will be small/0 and once we ship the the bsdf evaluation sample it would be a smaller step to switch to this. >Seems like linking currently has to happen via LLVM and/or CUDA APIs LLVM is our universal backend driving everything from I86 CPU to CUDA to potentially everything an LLVM backend exists for. The current samples shiped illustrate calling an MDL function from CPU code. The bsdf sample matthias talked about will eventually work on cpu, but there is additional work to adapt it to windows and linux calling conventions.
If you feed the missing state in through the material interface (i.e. view direction, light sources, environment), MDL is in principle powerful enough to implement the necessary evaluation, ignoring shadows or complex light transport.
I am hesitant to recommend something like this, even if it could work. It would be quite some work that, after an initial start, would be wasted.

"native" generates calls to external code. If you can implement functionality outside of MDL that you want to call, what is the big step to writing the whole BSDF eval outside and just calling MDL for the texturing functions?

The compiler does work outside of the MDL material evaluation. In Iray, for example, we also use it to allow programming custom environment functions. So if you are just looking for a compiler infrastructure that generates code for multiple targets, that should work. You could probably devise an interface for the BSDF, implement the eval and sample functions as MDL functions, compile them to your target architecture, and call those MDL functions from your renderer as needed.
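
Such an interface could look roughly like this (just a sketch, all names made up; it hard-codes a cosine-weighted diffuse sample built from the state's tangent frame):

```mdl
mdl 1.4;

import ::math::*;
import ::state::*;

// Renderer-facing sample result; struct and field names are illustrative.
export struct bsdf_sample_result {
    float3 direction;
    color  weight;   // bsdf * cos / pdf
    float  pdf;
};

// Cosine-weighted diffuse sample around the shading normal,
// driven by two uniform random numbers from the renderer.
export bsdf_sample_result sample_diffuse(
    float2 xi,
    color tint = color(1.0))
{
    float phi = 2.0 * math::PI * xi.x;
    float sin_theta = math::sqrt(xi.y);
    float cos_theta = math::sqrt(1.0 - xi.y);

    // Build the world-space direction from the state's tangent frame.
    float3 n = state::normal();
    float3 t = math::normalize(state::texture_tangent_u(0));
    float3 b = math::cross(n, t);
    float3 dir = t * (math::cos(phi) * sin_theta)
               + b * (math::sin(phi) * sin_theta)
               + n * cos_theta;

    // For a Lambertian lobe, bsdf * cos / pdf collapses to the albedo.
    return bsdf_sample_result(dir, tint, cos_theta / math::PI);
}
```

The matching eval function would take the outgoing and incoming directions as parameters and return the same struct, so the renderer only ever calls these two entry points.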

Instead of the full BSDF machinery, another way would be to implement only the UE4 material model for a start and use distilling to simplify any material to UE4. For the parameters you could call the compiled MDL code like it's shown in the samples shipped with the SDK.
In many cases the differences will be small or zero, and once we ship the BSDF evaluation sample it would be a smaller step to switch to this.

>Seems like linking currently has to happen via LLVM and/or CUDA APIs
LLVM is our universal backend, driving everything from x86 CPUs to CUDA to potentially everything an LLVM backend exists for. The current samples shipped illustrate calling an MDL function from CPU code. The BSDF sample Matthias talked about will eventually work on the CPU, but there is additional work to adapt it to Windows and Linux calling conventions.

#5
Posted 02/16/2018 02:06 PM   