Artlantis 6

Artlantis 6.0 & 6.5 Render and Studio

Topics on this page:

HDR/LDR render pipeline
Per pixel transparencies
Global Illumination
Displacement mapping
Physical lighting
Sun light & Sky models
Links
Anisotropic reflectance models
Dielectric materials
Dynamic tone mapping
Caustics
Download Rendersoft principles PDF


The technology behind the Artlantis 6.x render software

 

* HDR/LDR render pipeline

High-dynamic-range rendering (HDRR or HDR rendering), also known as high-dynamic-range lighting, is the rendering of computer graphics scenes by using lighting calculations done in a larger dynamic range. This allows preservation of details that may be lost due to limiting contrast ratios. Video games and computer-generated movies and special effects benefit from this as it creates more realistic scenes than with the more simplistic lighting models used.

Graphics processor company Nvidia summarizes the motivation for HDRR in three points: bright things can be really bright, dark things can be really dark, and details can be seen in both.

Import of HDRI in Artlantis 6

Today, real-time 3D rendering applications are visually far more impressive than before, thanks to hardware evolution that allows developers and artists to produce incredible effects.

High Dynamic Range Rendering is a set of techniques that emerged two years ago in video games such as Half-Life 2: Lost Coast and Oblivion. Real HDR rendering was not possible before due to GPU limitations.

In this document, we will explain what HDR is and how it can be used in real-time rendering. The OpenGL project associated with this article demonstrates the use of various HDR effects.

In order to render a scene with High Dynamic Range, you need to perform every calculation using HDR-capable variables and buffers. Your entire GPU rendering pipeline must support HDR: lighting, textures, post-processing effects, etc.

The normal rendering process deals with 32-bit colors, where each component ranges over [0;1]. In HDR, values can be greater than 1. If your GPU doesn't support float textures and float render targets, all values will be clamped to [0;1]. HDR effects are most easily developed with vertex and fragment programs.

High Dynamic Range Rendering and Imaging make the virtual scene more realistic by controlling the final image with exposure, bloom, or other effects. HDR tends to mimic natural effects because it eliminates clamping and can represent a wide range of intensity levels, as in real scenes.

HDR is still a young technique and has consequently not been used often in the industry. Recent video cards with shader and floating-point buffer support allow HDR rendering to emerge in commercial products such as video games.

Even though current hardware can manage High Dynamic Range images, our screens can only display about 16 million colors. That is why tone mapping must be applied in order to map HDR values to the screen's limited range. HDR screens are still experimental, but we can hope to be equipped with full HDR hardware in the future.
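To make the clamping problem concrete, here is a minimal sketch in Python/NumPy (not Artlantis code; the radiance values are invented for illustration): naive clamping collapses all bright values to 1.0, while a simple Reinhard-style curve keeps them distinct.

# Minimal illustration (NumPy): clamping HDR radiance vs. tone mapping it.
# The radiance values below are invented; real pipelines read them from
# floating-point render buffers.
import numpy as np

hdr = np.array([0.01, 0.5, 1.0, 4.0, 60.0])   # scene radiance, unbounded

ldr_clamped = np.clip(hdr, 0.0, 1.0)          # naive LDR: 4.0 and 60.0 both become 1.0

def reinhard(x, exposure=1.0):
    """Map [0, inf) radiance into [0, 1) while keeping relative differences."""
    x = x * exposure
    return x / (1.0 + x)

ldr_mapped = reinhard(hdr)

print(ldr_clamped)   # bright details lost: the last two values are both 1.0
print(ldr_mapped)    # bright details preserved: values stay distinct below 1.0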

 

* Per pixel transparencies

The technique of alpha blending has a long history in two- and three-dimensional image synthesis. There are many ways to blend colors [10], but the most important issue is that we can only get realistic results if we sort transparent objects by their distance from the camera. Unfortunately, this requirement is not compatible with incremental rendering and z-buffer based visibility determination, which allow the processing of objects in an arbitrary order. Sorting objects or even triangles in a way that occluders follow objects occluded by them is difficult and is usually impossible without further subdivision of objects. The problem is that an object is associated with a depth interval and not with a single distance value, so no direct ordering relation can be established. A possible solution for non-intersecting triangles is the application of the painter's algorithm [1], but this has super-linear complexity and its GPU implementation is prohibitively complicated.

Figure 1: Order matters when the scene contains transparent objects.

If objects may intersect each other, then the situation gets even worse. A typical case of intersecting transparent objects are particle systems, which are tools to discretize, simulate and visualize natural phenomena like fog, smoke, fire, cloud, etc. The simplest way of their visualization applies planar billboards, but this approach results in abrupt changes where particles intersect opaque objects. The solution for this problem is the consideration of the spherical extent of the particle during rendering, as proposed in the concept of spherical billboards [9], also called soft particles. Spherical billboards nicely eliminate billboard clipping and popping artifacts at a negligible additional computational cost, but they may still create artifacts where particles intersect each other. Most importantly, when the z-order of billboards changes due to the motion of the particle system or the camera, popping occurs. This effect is more pronounced if particles have non-identical colors or textures.
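A small Python sketch of why order matters, using a hypothetical scene of two half-transparent layers over a black background; the classic "over" operator is applied back to front after sorting by camera distance.

# Back-to-front alpha compositing ("over" operator) over an opaque background.
# Scene data here is hypothetical; a real renderer sorts per object or per fragment.
def over(src_rgb, src_a, dst_rgb):
    # Composite one transparent RGBA layer over an opaque destination colour.
    return tuple(src_a * s + (1.0 - src_a) * d for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)
# (rgb, alpha, distance to camera) for two transparent layers
layers = [((1.0, 0.0, 0.0), 0.5, 2.0),   # red, nearer
          ((0.0, 0.0, 1.0), 0.5, 5.0)]   # blue, farther

# Correct: sort by distance, farthest first, then composite towards the camera.
colour = background
for rgb, a, _ in sorted(layers, key=lambda layer: -layer[2]):
    colour = over(rgb, a, colour)
print(colour)   # (0.5, 0.0, 0.25): red dominates because it is in front

# Swapping the order gives (0.25, 0.0, 0.5) instead -- a visibly different result,
# which is exactly the popping artefact described above when the z-order changes.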

 

Render Principles, published by Ontmoeting, the Netherlands

 

* Global illumination

Rendering with global illumination. Light is reflected by surfaces, and colored light transfers from one surface to another. Notice how color from the red wall and green wall (not visible) reflects onto other surfaces in the scene. Also notable is the caustic projected onto the red wall from light passing through the glass sphere.

Global illumination (shortened as GI) or indirect illumination is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).

Theoretically reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another object (as opposed to an object being affected only by a direct light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination.

Images rendered using global illumination algorithms often appear more photorealistic than images rendered using only direct illumination algorithms. However, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry, e.g., radiosity. That stored data can then be used to generate images from different viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations repeatedly.

Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient occlusion, photon mapping, and image based lighting are examples of algorithms used in global illumination, some of which may be used together to yield results that are not fast, but accurate.

These algorithms model diffuse inter-reflection which is a very important part of global illumination; however most of these (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene.

The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design.

In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power.
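As a hedged illustration of the "ambient term" cheat described above (not Artlantis's GI engine), the Python sketch below shades a point with direct Lambertian lighting plus a constant indirect contribution; all vectors and colours are invented.

# Direct Lambert lighting plus a constant "ambient" term -- the cheap stand-in
# for global illumination described above. All values below are illustrative.
import numpy as np

def shade(normal, light_dir, albedo, light_colour, ambient_colour):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    direct = max(np.dot(n, l), 0.0) * light_colour        # direct illumination only
    indirect = ambient_colour                              # flat approximation of GI
    return albedo * (direct + indirect)

colour = shade(normal=np.array([0.0, 1.0, 0.0]),
               light_dir=np.array([0.3, 1.0, 0.2]),
               albedo=np.array([0.8, 0.2, 0.2]),           # reddish surface
               light_colour=np.array([1.0, 1.0, 1.0]),
               ambient_colour=np.array([0.1, 0.1, 0.1]))   # note: no colour bleeding
print(colour)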

 

Ray Tracing principle.

 

* Displacement Mapping

Displacement mapping is an alternative computer graphics technique, in contrast to bump mapping, normal mapping, and parallax mapping, that uses a (procedural) texture or height map to cause an effect where the actual geometric positions of points over the textured surface are displaced, often along the local surface normal, according to the value the texture function evaluates to at each point on the surface. It gives surfaces a great sense of depth and detail, permitting in particular self-occlusion, self-shadowing and silhouettes; on the other hand, it is the most costly of this class of techniques owing to the large amount of additional geometry.

For years, displacement mapping was a peculiarity of high-end rendering systems like PhotoRealistic RenderMan, while realtime APIs, like OpenGL and DirectX, were only starting to use this feature. One of the reasons for this is that the original implementation of displacement mapping required an adaptive tessellation of the surface in order to obtain enough micropolygons whose size matched the size of a pixel on the screen.

Meaning of the term in different contexts

Displacement mapping includes the term mapping which refers to a texture map being used to modulate the displacement strength. The displacement direction is usually the local surface normal. Today, many renderers allow programmable shading which can create high quality (multidimensional) procedural textures and patterns at arbitrary high frequencies. The use of the term mapping becomes arguable then, as no texture map is involved anymore. Therefore, the broader term displacement is often used today to refer to a super concept that also includes displacement based on a texture map.

Renderers using the REYES algorithm, or similar approaches based on micropolygons, have allowed displacement mapping at arbitrary high frequencies since they became available almost 20 years ago.

The first commercially available renderer to implement a micropolygon displacement mapping approach through REYES was Pixar's PhotoRealistic RenderMan. Micropolygon renderers commonly tessellate geometry themselves at a granularity suitable for the image being rendered. That is: the modeling application delivers high-level primitives to the renderer. Examples include true NURBS- or subdivision surfaces. The renderer then tessellates this geometry into micropolygons at render time using view-based constraints derived from the image being rendered.

Other renderers that require the modeling application to deliver objects pre-tessellated into arbitrary polygons or even triangles have defined the term displacement mapping as moving the vertices of these polygons. Often the displacement direction is also limited to the surface normal at the vertex. While conceptually similar, those polygons are usually a lot larger than micropolygons. The quality achieved from this approach is thus limited by the geometry's tessellation density a long time before the renderer gets access to it.

This difference between displacement mapping in micropolygon renderers and displacement mapping in non-tessellating (macro)polygon renderers can often lead to confusion in conversations between people whose exposure to each technology or implementation is limited. Even more so as, in recent years, many non-micropolygon renderers have added the ability to do displacement mapping of a quality similar to what a micropolygon renderer is able to deliver naturally. To distinguish it from the crude pre-tessellation-based displacement these renderers did before, the term sub-pixel displacement was introduced to describe this feature.

Sub-pixel displacement commonly refers to finer re-tessellation of geometry that was already tessellated into polygons. This re-tessellation results in micropolygons or often microtriangles. The vertices of these then get moved along their normals to achieve the displacement mapping.

True micropolygon renderers have always been able to do what sub-pixel-displacement achieved only recently, but at a higher quality and in arbitrary displacement directions.

Recent developments seem to indicate that some of the renderers that use sub-pixel displacement move towards supporting higher level geometry too. As the vendors of these renderers are likely to keep using the term sub-pixel displacement, this will probably lead to more obfuscation of what displacement mapping really stands for, in 3D computer graphics.

In reference to Microsoft's proprietary High Level Shader Language, displacement mapping can be interpreted as a kind of "vertex-texture mapping" where the values of the texture map do not alter pixel colors (as is much more common), but instead change the position of vertices. Unlike bump, normal and parallax mapping, all of which can be said to "fake" the behavior of displacement mapping, in this way a genuinely rough surface can be produced from a texture. It has to be used in conjunction with adaptive tessellation techniques (which increase the number of rendered polygons according to the current viewing settings) to produce highly detailed meshes.
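A minimal Python sketch of the core operation described above, on a toy quad with an invented procedural height function: each vertex is moved along its normal by the value the height map returns at its UV coordinate.

# Displacement of mesh vertices along their normals by a height map value.
# The mesh, normals and height function here are toy data for illustration.
import numpy as np

def displace(vertices, normals, uvs, height_map, scale=1.0):
    """Move each vertex along its normal by scale * height_map(u, v)."""
    out = vertices.copy()
    for i, (u, v) in enumerate(uvs):
        out[i] += scale * height_map(u, v) * normals[i]
    return out

# A flat quad in the XY plane, all normals pointing up (+Z).
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
normals  = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
uvs      = [(0, 0), (1, 0), (0, 1), (1, 1)]

# A procedural "texture": a simple analytic function instead of an image lookup.
height = lambda u, v: 0.1 + 0.4 * u * v

print(displace(vertices, normals, uvs, height, scale=1.0))   # z now varies per vertex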

 

 

* Physical lighting

Physically-based shading means leaving behind phenomenological models, like the Phong shading model, which are simply built to "look good" subjectively without being based on physics in any real way, and moving to lighting and shading models that are derived from the laws of physics and/or from actual measurements of the real world, and rigorously obey physical constraints such as energy conservation.

For example, in many older rendering systems, shading models included separate controls for specular highlights from point lights and reflection of the environment via a cubemap. You could create a shader with the specular and the reflection set to wildly different values, even though those are both instances of the same physical process. In addition, you could set the specular to any arbitrary brightness, even if it would cause the surface to reflect more energy than it actually received.

In a physically-based system, both the point light specular and the environment reflection would be controlled by the same parameter, and the system would be set up to automatically adjust the brightness of both the specular and diffuse components to maintain overall energy conservation. Moreover you would want to set the specular brightness to a realistic value for the material you're trying to simulate, based on measurements.
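A minimal sketch (Python, illustrative only and not Artlantis's shading code) of that coupling: a single specular reflectance parameter drives both the highlight and the environment reflection, and the diffuse weight is reduced so the surface never reflects more energy than it receives.

# One specular reflectance parameter feeds both the point-light highlight and the
# environment reflection, and the diffuse term is scaled down so diffuse + specular <= 1.
# Simplified illustration of energy conservation, not an actual production shader.
def energy_conserving_weights(base_colour, specular_reflectance):
    assert 0.0 <= specular_reflectance <= 1.0
    k_s = specular_reflectance                     # used for BOTH the highlight and
                                                   # the cubemap/environment reflection
    k_d = [c * (1.0 - k_s) for c in base_colour]   # diffuse gives up what specular takes
    return k_d, k_s

k_d, k_s = energy_conserving_weights(base_colour=(0.8, 0.3, 0.3),
                                     specular_reflectance=0.04)  # dielectric-like value
print(k_d, k_s)   # total reflected energy can never exceed what the surface receives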

Physically-based lighting or shading includes physically-based BRDFs, which are usually based on microfacet theory, and physically correct light transport, which is based on the rendering equation (although heavily approximated in the case of real-time games).

It also includes the necessary changes in the art process to make use of these features. Switching to a physically-based system can cause some upsets for artists. First of all it requires full HDR lighting with a realistic level of brightness for light sources, the sky, etc. and this can take some getting used to for the lighting artists. It also requires texture/material artists to do some things differently (particularly for specular), and they can be frustrated by the apparent loss of control (e.g. locking together the specular highlight and environment reflection as mentioned above; artists will complain about this). They will need some time and guidance to adapt to the physically-based system.

On the plus side, once artists have adapted and gained trust in the physically-based system, they usually end up liking it better, because there are fewer parameters overall (less work for them to tweak). Also, materials created in one lighting environment generally look fine in other lighting environments too. This is unlike more ad-hoc models, where a set of material parameters might look good during daytime, but it comes out ridiculously glowy at night, or something like that.

Here are some resources to look at for physically-based lighting in games:

SIGGRAPH 2013 Physically Based Shading Course, particularly the background talk by Naty Hoffman at the beginning. You can also check out the previous incarnations of this course for more resources.

Sébastien Lagarde, Adopting a physically-based shading model and Feeding a physically-based shading model.

 

Some links

Best links for Artlantis at the publisher's site 'Ontmoeting' (that's 'meeting' in Dutch), with manuals about rendering.

And of course, I would be remiss if I didn't mention Physically-Based Rendering by Pharr and Humphreys, an amazing reference on this whole subject and well worth your time, although it focuses on offline rather than real-time rendering.

https://www.marmoset.co/toolbag/learn/pbr-theory

There is of course much more to say on the topic of physically-based rendering; this document has served only as a basic introduction. If you haven't already, read Joe Wilson's tutorial on creating PBR artwork. For those wanting more technical information, I can recommend several readings:

* John Hable's excellent blog post: Everything Is Shiny

http://filmicgames.com/archives/547

* John Hable's even better blog post: Everything Has Fresnel

http://filmicgames.com/archives/557

* Sébastien Lagarde's summary of Rendering Remember Me

http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

* Come to think of it, all of Sébastien Lagarde's blog is good stuff

https://seblagarde.wordpress.com/

* The SIGGRAPH 2010 course on PBR

http://renderwonk.com/publications/s2010-shading-course/

* Always worth mentioning: The Importance of Being Linear

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html

NVIDIA Developer zone

Chapter 24. The Importance of Being Linear.

In this chapter, we've tried to keep the descriptions and advice simple, with the intent of making a fairly nontechnical argument for why you should use linear color spaces. In the process, we've oversimplified. For those of you who crave the gory details, an excellent treatment of the gamma problem can be found on Charles Poynton's Web page: http://www.poynton.com/GammaFAQ.html.

The Wikipedia entry on gamma correction is surprisingly good:

http://en.wikipedia.org/wiki/Gamma_correction.
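As a small, hedged illustration of the chapter's point (do lighting math in linear space and encode only for display), here is the standard piecewise sRGB encode/decode pair in Python, plus the classic example of averaging a black and a white texel in the wrong space.

# Convert between linear radiance and sRGB-encoded values (piecewise sRGB curve).
# Lighting math should happen on the linear side; encoding is only for display/storage.
def linear_to_srgb(x):
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1.0 / 2.4) - 0.055

def srgb_to_linear(s):
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

a_lin, b_lin = 0.0, 1.0                                          # black and white texels, linear
wrong = (linear_to_srgb(a_lin) + linear_to_srgb(b_lin)) / 2.0    # averaged after encoding: 0.5
right = linear_to_srgb((a_lin + b_lin) / 2.0)                    # averaged in linear space: ~0.735
print(wrong, right)   # the linear-space average is the physically meaningful one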

For details regarding sRGB hardware formats in OpenGL and DirectX, see these resources:

http://www.nvidia.com/dev_content/nvopenglspecs/GL_
EXT_texture_sRGB.txt

http://www.opengl.org/registry/specs/EXT/framebuffer_sRGB.txt

http://msdn2.microsoft.com/en-us/library/bb173460.aspx

Thanks to Gary King and Mark Kilgard for their expertise on sRGB and their helpful comments regarding this chapter. Special thanks to actor Doug Jones for kindly allowing us to use his likeness. And finally, Figure was highly inspired by a very similar version we found on Wikipedia.

* Sun light & skylight models

Setting up AutoCAD Sun Light is Easy!

http://www.cad-notes.com/setting-up-autocad-sun-light-is-easy/

Do you think setting up lights for AutoCAD rendering is difficult? Or do you think trial and error for setting up AutoCAD lights takes too much time? Not really. In this rendering tutorial, you will learn to set up the sun light easily, along with a little trick to speed up render tests. This is the second part of our rendering tutorial. We defined our camera view in the previous tutorial. Next, we are going to define the AutoCAD lighting. We will discuss sun light (or natural lighting) and artificial lighting separately. Sun light comes first. Lighting is one of the most important things in rendering.

If you already downloaded the DWG file from the previous tutorial, I apologize: I forgot to add a floor to the model. You can download this one to continue with the tutorial. And of course, you can use your own model.

http://www.antoniobosi.com/maya-tutorials-mental-ray-3d/maya-tutorial-c/exterior-render-tutorial-maya-mental-ray

Exterior Render Tutorial for Maya and Mental Ray (physical Sun and Sky techniques)

We are going to take a look at one of the most useful features in Mental Ray: Physical Sun & Sky.

If you enjoy creating exterior renders in Maya, this simple guide will help you understand and use the best tool that Mental Ray has for achieving exterior renders.

 

 

* Anisotropic reflectance models

http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function

The bidirectional reflectance distribution function (BRDF), written f_r(ω_i, ω_r), is a function of four real variables that defines how light is reflected at an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms. The function takes an incoming light direction ω_i and an outgoing direction ω_r (both defined in a coordinate system where the surface normal lies along the z-axis), and returns the ratio of reflected radiance exiting along ω_r to the irradiance incident on the surface from direction ω_i. Each direction ω is itself parameterized by an azimuth angle φ and a zenith angle θ, so the BRDF as a whole is a function of four variables. The BRDF has units sr⁻¹, with steradians (sr) being a unit of solid angle.

A specular highlight is the bright spot of light that appears on shiny objects when illuminated. Specular highlights are important in 3D computer graphics, as they provide a strong visual cue for the shape of an object and its location with respect to light sources in the scene.
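To make the anisotropic case concrete, the Python sketch below evaluates the Ward anisotropic BRDF, a generic textbook model (not necessarily the one Artlantis implements) with separate roughness values along the tangent and bitangent, so the highlight stretches as on brushed metal; all directions and constants are invented.

# Ward anisotropic BRDF f_r(w_i, w_r): separate roughness along the tangent (ax)
# and bitangent (ay) makes the specular highlight stretch, as on brushed metal.
import numpy as np

def ward_brdf(wi, wo, n, t, b, rho_d, rho_s, ax, ay):
    wi, wo = wi / np.linalg.norm(wi), wo / np.linalg.norm(wo)
    cos_i, cos_o = np.dot(n, wi), np.dot(n, wo)
    if cos_i <= 0.0 or cos_o <= 0.0:
        return 0.0                                      # below the surface: no reflection
    h = wi + wo
    h /= np.linalg.norm(h)                              # half-vector between light and view
    # Anisotropic exponent expressed in the local tangent frame (t, b, n).
    expo = -((np.dot(h, t) / ax) ** 2 + (np.dot(h, b) / ay) ** 2) / np.dot(h, n) ** 2
    spec = rho_s * np.exp(expo) / (4.0 * np.pi * ax * ay * np.sqrt(cos_i * cos_o))
    return rho_d / np.pi + spec                         # diffuse lobe + anisotropic lobe

n, t, b = np.eye(3)[2], np.eye(3)[0], np.eye(3)[1]      # normal +Z, tangent +X, bitangent +Y
wi = np.array([0.3, 0.0, 1.0])                          # incoming light direction
wo = np.array([-0.3, 0.0, 1.0])                         # outgoing (view) direction
print(ward_brdf(wi, wo, n, t, b, rho_d=0.2, rho_s=0.3, ax=0.1, ay=0.4))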

 

* Dielectric Materials

http://en.wikipedia.org/wiki/Dielectric_Shader

The Dielectric Physical Phenomenon Shader is a shader used by the LightWave and Mental Ray 3D rendering engines. It is based on a dielectric model of physics, which describes how electromagnetic fields behave inside materials. The dielectric shader is able to realistically render the behavior of light rays passing through materials with differing refractive indices, as well as attenuation of the ray as it passes through a material.

The shader uses Fresnel equations to simulate reflectance and transmittance of light passing through the dielectric interface, as well as using Snell's law to determine the angle of refraction. In addition Beer's law is used to determine absorption of rays passing through dielectric materials.

Two types of dielectric interfaces are supported:

- dielectric-air between a dielectric material and air.

- dielectric-dielectric between two dielectric materials.
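A short Python sketch of the three ingredients named above for a dielectric-air interface: Snell's law for the refraction angle, the unpolarised Fresnel reflectance, and Beer's law for attenuation along the path inside the material. The refractive index and absorption coefficient are illustrative (glass-like values), not taken from the shader itself.

# Dielectric-air interface: Snell's law, unpolarised Fresnel reflectance,
# and Beer's law attenuation. Constants are illustrative, e.g. glass n ~ 1.5.
import math

def snell(theta_i, n1, n2):
    s = n1 / n2 * math.sin(theta_i)
    return None if s > 1.0 else math.asin(s)            # None -> total internal reflection

def fresnel_unpolarised(theta_i, n1, n2):
    theta_t = snell(theta_i, n1, n2)
    if theta_t is None:
        return 1.0                                       # everything is reflected
    rs = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
         (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    rp = (n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) / \
         (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))
    return 0.5 * (rs * rs + rp * rp)

def beer_attenuation(absorption_coeff, path_length):
    return math.exp(-absorption_coeff * path_length)     # fraction transmitted

theta = math.radians(45.0)
R = fresnel_unpolarised(theta, n1=1.0, n2=1.5)           # air -> glass, ~5% reflected
T = (1.0 - R) * beer_attenuation(0.2, path_length=0.01)  # refracted ray through thin glass
print(R, T)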

 

* Dynamic tone mapping

Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high dynamic range images in a medium that has a more limited dynamic range. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range that is inadequate to reproduce the full range of light intensities present in natural scenes. Tone mapping addresses the problem of strong contrast reduction from the scene radiance to the displayable range while preserving the image details and color appearance important to appreciate the original scene content.
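A minimal sketch (Python/NumPy) of what makes tone mapping "dynamic": the exposure is derived from the image's own log-average luminance each frame, so the same operator adapts automatically as scene brightness changes. This is a generic Reinhard-style operator, not a description of Artlantis's internal tone mapper.

# Global Reinhard-style tone mapping with exposure derived from the image itself,
# so the mapping adapts as scene brightness changes ("dynamic" tone mapping).
import numpy as np

def tone_map(luminance, key=0.18, eps=1e-4):
    log_avg = np.exp(np.mean(np.log(luminance + eps)))   # scene's average brightness
    scaled = key / log_avg * luminance                    # expose the scene around mid-grey
    return scaled / (1.0 + scaled)                        # compress into [0, 1)

dark_frame   = np.array([0.02, 0.05, 0.1, 0.4])
bright_frame = dark_frame * 50.0                          # same scene, 50x brighter
print(tone_map(dark_frame))
print(tone_map(bright_frame))   # nearly identical output: exposure adapted automatically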

 

* Caustics

From Wikipedia, the free encyclopedia

Caustics produced by a glass of water


In optics, a caustic or caustic network is the envelope of light rays reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface. The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light. Therefore in the image to the right, the caustics can be the patches of light or their bright edges. These shapes often have cusp singularities.

Nephroid caustic at bottom of tea cup

Caustics made by the surface of water

Concentration of light, especially sunlight, can burn. The word caustic, in fact, comes from the Greek word for 'burnt', via the Latin causticus, 'burning'. A common situation where caustics are visible is when light shines on a drinking glass. The glass casts a shadow, but also produces a curved region of bright light. In ideal circumstances (including perfectly parallel rays, as if from a point source at infinity), a nephroid-shaped patch of light can be produced. Rippling caustics are commonly formed when light shines through waves on a body of water.

Another familiar caustic is the rainbow. Scattering of light by raindrops causes different wavelengths of light to be refracted into arcs of differing radius, producing the bow.

Computer graphics

A computer-generated image of a wine glass ray traced with photon mapping to simulate caustics

In computer graphics, most modern rendering systems support caustics. Some of them even support volumetric caustics. This is accomplished by ray tracing the possible paths of a light beam, accounting for refraction and reflection. Photon mapping is one implementation of this. The focus of most computer graphics systems is aesthetics rather than physical accuracy. Some computer graphics systems work by "forward ray tracing", wherein photons are modeled as coming from a light source and bouncing around the environment according to rules. Caustics are formed in the regions where sufficient photons strike a surface, causing it to be brighter than the average area in the scene. Backward ray tracing works in the reverse manner, beginning at the surface and determining whether there is a direct path to the light source.
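As a toy illustration of the forward-tracing idea (invented geometry, far simpler than real photon mapping), the Python sketch below sends parallel rays through an idealised thin lens and histograms where they land on a receiver line; the bins that collect many rays are the bright caustic.

# Toy forward tracing of a caustic: parallel rays pass a thin converging lens and
# are collected on a receiver plane; bins hit by many rays form the bright caustic.
# Geometry and focal length are invented; real photon mapping stores 3D photon hits.
import numpy as np

focal_length = 1.0
receiver_z   = 0.8                       # receiver slightly in front of the focal plane
rays_x = np.linspace(-0.5, 0.5, 10001)   # parallel rays hitting the lens at height x

# Thin-lens deflection: a ray entering at height x crosses the axis at the focal point,
# so at depth z its height is x * (1 - z / f).
hit_x = rays_x * (1.0 - receiver_z / focal_length)

counts, edges = np.histogram(hit_x, bins=21, range=(-0.5, 0.5))
print(counts)   # strong peak near x = 0: light concentrated by the lens (the caustic)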

Caustic engineering

Researchers have found that they can make use of caustics to create a desired image by shaping transparent material in a particular way. A surface of a panel of transparent material (e.g. acrylic glass) can be shaped such that the panel refracts light in a specific way to form the chosen image whenever the panel is held at a particular angle between a light source and a white wall.


Rotate the iPad and the design rendered in Artlantis rotates along with it in the actual space.
 

• Questions? Feel free to ask; we know the new program.

Call us on weekdays.
