Recently I came across a demanding and intriguing request that derives from a fairly common problem in contemporary shows.
What usually happens, and seems to be needed on many occasions (regarding large-scale stage visuals), is combining projections/videos (in our case I'll stick to generative or 3D-rendered content) with LASERs.
The main issue I will examine here is that most of the ready-made solutions out there don't offer many options if you want to play and tweak things in realtime, especially when it comes to the radius and shape of your LASER beam, or to the interaction of the LASER with the artificial light sources of your 3D video.
To give a quick example, I'll refer back to a recent work I prepared for KNTXT music, commissioned by the well-known Show Director and Stage Designer Rene Van Dijk.
The task was to prepare content that exploits the capability of a LASER beam to follow a 3D-rendered light source (a point light) traveling through a huge “X” (the iconic trademark and logo of KNTXT music productions) made out of LED panels.
The most well-known and simple solution is to use MadMapper & MadLaser in combination with Resolume or your favorite VJ program.
However, this works well only if you have prepared two different sequences in advance: one for the actual texture that will be emitted by the LED screens (or projected by beamers), and another, using just one color channel (black and white), representing the motion of the point light as a dot in the 2D space of the projected image. MadMapper then takes care of reading the per-pixel values of the monochromatic texture and translating them into the LASER's projection space. This technique has the huge advantage of minimizing rendering times (even if you have to press the render button twice), and you don't have to get stressed or anxious about the LASER moving heads and their programming.
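To make the idea concrete, here is a minimal numpy sketch of what "reading the per-pixel values of the monochromatic texture" amounts to. This is my own illustrative function, not MadMapper's actual pipeline: it finds the lit pixels in a black-and-white frame and reduces them to the dot's normalized position and intensity.

```python
import numpy as np

def dot_from_frame(frame):
    """Extract the bright dot from a monochrome frame (2D uint8 array).
    Returns ((x, y) normalized to [0, 1], intensity in [0, 1]),
    or None if the frame is black (beam off)."""
    mask = frame > 16                      # ignore near-black noise
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    w = frame[ys, xs].astype(float)
    cx = (xs * w).sum() / w.sum()          # intensity-weighted centroid
    cy = (ys * w).sum() / w.sum()
    h, width = frame.shape
    return (cx / (width - 1), cy / (h - 1)), w.max() / 255.0

# a 64x64 black frame with a single bright pixel at column 40, row 20
frame = np.zeros((64, 64), dtype=np.uint8)
frame[20, 40] = 255
pos, inten = dot_from_frame(frame)
```

A black frame yields `None`, which is exactly the "beam off" case mentioned further below when the ball is occluded.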
Needless to say, this method has its own limits (as any method does): as you go further you ask for more, and when you achieve more you usually go further.
Take the beam radius, for instance. We want to transcribe the light somehow, even though light is not rendered as an object in 3D visualization; just as in the real world, we perceive light in 3D through the shadows it casts over matter and the way it interacts with different materials. Although the only information we really care about is its position and perhaps its intensity, sometimes we need to be more precise and have better overall control.
So, if we are to render the point light as a 3D object, we may replace it with an emissive, flat-textured ball. We then expect as output a white dot moving over a black background. A quick observation (or a prior assumption) is that the scale of the ball will obviously affect the radius of the rendered dot, and whenever the 3D ball is occluded or goes black, the LASER beam will switch off.
Because the point light is now represented as a 3D object, one more factor affects its scale in the output image (the radius of the rendered dot): its distance from the 3D camera in our scene, along with its position relative to the center of our frame view (see barrel distortion). Nonetheless, we will always need to keep in mind the total number of pixels describing the shape of the dot, doing the respective calculations in advance, as this will consequently affect the size (radius) of the LASER's beam in the physical world.
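Those "respective calculations" can be sketched with the standard pinhole-camera projection. The following is an approximation assuming the ball sits on the optical axis (so barrel distortion and off-center effects are ignored); parameter names are mine.

```python
import math

def dot_radius_px(ball_radius, distance, fov_v_deg, image_height_px):
    """Projected radius in pixels of a ball of world-space radius
    `ball_radius`, at `distance` from a pinhole camera with vertical
    field of view `fov_v_deg`, rendered `image_height_px` pixels tall."""
    # focal length expressed in pixels
    f_px = (image_height_px / 2) / math.tan(math.radians(fov_v_deg) / 2)
    return ball_radius * f_px / distance

# e.g. a 0.1-unit ball, 5 units from a 60-degree camera, 1080p frame
r = dot_radius_px(0.1, 5.0, 60.0, 1080)
```

The inverse relation to `distance` is the whole problem: move the ball twice as far away and the rendered dot, and hence the physical beam radius, halves.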
On the other hand, we have no elegant way to play with the shape or radius of the LASER's beam in realtime. The only two remaining options are either to render multiple sequences, or to process and combine them afterwards using some texture effects (TFX): Erode/Dilate to control the size, feedback loops to create trails, and Hue or ShiftRGB (chromatic aberration) to change colors. In any case, the point is that we are very limited when it comes to rasterized, fixed output images.
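For readers unfamiliar with Erode/Dilate as a size control: a dilation replaces each pixel with the maximum of its neighborhood, so a bright dot grows; an erosion uses the minimum, so it shrinks. A naive numpy sketch (not how a GPU TFX would implement it, but the same operation):

```python
import numpy as np

def dilate(img, r=1):
    """Grow bright regions: each pixel becomes the max of its
    (2r+1) x (2r+1) neighborhood. Erosion is the same with np.minimum."""
    out = img.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = np.maximum(out, shifted)
    return out

frame = np.zeros((9, 9), dtype=np.uint8)
frame[4, 4] = 255              # single-pixel dot
grown = dilate(frame, r=1)     # dot grows into a 3x3 block
```

This is exactly why the method is a workaround: the radius change is baked into pixels after rendering, not driven live by the scene.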
If you have followed me so far, you have probably imagined a few solutions already, such as fixing the scale of the ball by billboarding it, or switching to an orthographic camera (don't do it, you will miss all the fun).
A few paragraphs above I already hinted at the solution when I referred to the properties we care about. Here is a reminder:
Size is a scalar value, Position is a 2D vector, and Intensity can be either a normalized value or a boolean one.
Basically, all we really care about is a point in 2D space and nothing more. However, what we have in our 3D program is a point that must somehow be rendered, remain clearly visible, and preserve its area (in pixels) when needed (I forgot to mention in advance how problematic anti-aliasing is with LASERs).
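Put together, the state we actually want to drive the LASER with is tiny. As a hypothetical illustration (my own naming, not any particular software's API), it fits in a handful of fields:

```python
from dataclasses import dataclass

@dataclass
class LaserDot:
    """Minimal beam state: everything the LASER needs per frame."""
    x: float          # normalized horizontal position, 0..1
    y: float          # normalized vertical position, 0..1
    size: float       # beam radius, in projection-space units
    intensity: float  # 0..1, or effectively a boolean on/off

    @property
    def on(self) -> bool:
        return self.intensity > 0.0

dot = LaserDot(x=0.5, y=0.5, size=0.02, intensity=1.0)
```

Everything else in the rasterized pipeline above exists only to smuggle these few numbers through an image, which is the limitation the next part will address.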
(To be continued)