Dead And Angry

Posted: July 14, 2011 in Computer Graphics, Games

That is the spooky title we gave to a demo game we made at URJC.

This game was made by a team of four people (including me) with no prior knowledge of the Unreal Development Kit.

It features custom models, AI, scenery, UI, cinematics, gameplay and a multiplayer mode.

In my opinion, UDK can be an extremely sweet tool for artists and designers, but it can also be a pain for programmers (keep in mind I’m talking about the free UDK version) due to the lack of consistent, well-organized documentation and examples. On the other hand, the community is very active and responsive.

All I have to say is that we learned a lot about the whole process of making a game, especially about how important preproduction is, and how many features you have to prematurely trash due to deadlines.


Abstract

This was a project for my Masters in Computer Graphics, Games and Virtual Reality at URJC.

We were asked to develop some sort of mine train real-time animation from scratch. It had to feature dynamically generated tunnel bumps using Perlin noise, an on-board camera view and three rendering modes: polygon fill, wireframe and points. We chose OpenSceneGraph for the job.

Design tools

As a big fan of rollercoasters I had spent hours on the NoLimits Rollercoaster Simulator, which has a quite mature coaster editor. There are plenty of coasters made with NoLimits around the net; most of them are reconstructions of real ones.

I thought it would be a good idea to be able to load coaster models in the NoLimits (.nltrack) format, as that would allow us to design the track and the scene visually using the NL Editor.

The .nltrack format is binary and undocumented. It stores the shape of the track as control points of cubic Bézier curves, along with info about the colors, supports, external .3DS objects and the general appearance of the rollercoaster.

Using Hexplorer and the NL editor itself I was able to figure out the control points and the location/scaling/rotation of the external 3D models. Later I discovered that there’s a library called libnltrack, which helped a lot.

My pal Danny modeled a couple of rooms, an outdoor scene and a lot of mine props (barrels, shovels, …). Then he imported them into the editor and laid out a coaster track passing through the whole scene.

Coaster geometry

Correctly generating the rails and crossbeams for the track was a bit of a challenge, and it needed to be efficient!

I came up with a solution based on the concept of a “slider”: a virtual train that can be placed anywhere along the track (by simply specifying its distance from the origin, i.e. the station) and that returns three orthonormal vectors forming a basis, which is then used to transform vertices into the train’s point of view.
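
A minimal sketch of the idea (not the actual project code; the Track interface and its helper names are made up for the example):

```cpp
// Hypothetical track interface: evaluates the Bézier track by arc length.
#include <osg/Vec3>

struct Track {
    osg::Vec3 pointAt(double meters) const;    // position on the centerline
    osg::Vec3 tangentAt(double meters) const;  // direction of travel (not normalized)
    osg::Vec3 upHintAt(double meters) const;   // reference "up" vector, banking included
};

// The three orthonormal vectors returned by a slider.
struct SliderFrame {
    osg::Vec3 position;
    osg::Vec3 front, up, side;
};

SliderFrame slideTo(const Track& track, double metersFromStation)
{
    SliderFrame f;
    f.position = track.pointAt(metersFromStation);
    f.front    = track.tangentAt(metersFromStation);
    f.front.normalize();

    // Re-orthogonalize the reference up vector against the tangent so the
    // basis stays orthonormal even on steep slopes.
    osg::Vec3 up = track.upHintAt(metersFromStation);
    f.side = f.front ^ up;   // cross product
    f.side.normalize();
    f.up = f.side ^ f.front;
    f.up.normalize();
    return f;
}

// Any local vertex (e.g. a point on a rail cross-section) can then be put
// into world space from the slider's point of view:
osg::Vec3 toWorld(const SliderFrame& f, const osg::Vec3& local)
{
    return f.position + f.side * local.x() + f.up * local.y() + f.front * local.z();
}
```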

By using two sliders, one slightly ahead of the other, you can lay down vertices back and forth to form triangle strips and generate perfectly stitched cylinders. I ran into a couple of problems when the track was almost vertical, but I finally managed to solve them.

Upon startup, the geometry for the whole coaster is generated. The engine builds about 15 meters of track per geode; this way OpenSceneGraph can cull out-of-sight track segments efficiently. In addition, two levels of detail are generated and selected based on the distance to the camera.
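
The per-segment culling and LOD setup could look roughly like this in OpenSceneGraph (a simplified sketch; Track and buildSegmentGeometry() are hypothetical stand-ins for the real code):

```cpp
#include <cfloat>
#include <osg/Drawable>
#include <osg/Geode>
#include <osg/Group>
#include <osg/LOD>
#include <osg/ref_ptr>

struct Track;  // hypothetical, see the slider sketch above

// Hypothetical helper: builds the rail cylinders for one stretch of track,
// with a configurable number of sides per cylinder (its level of detail).
osg::Drawable* buildSegmentGeometry(const Track& track, double from, double to, int sides);

osg::ref_ptr<osg::Group> buildTrackGraph(const Track& track, double trackLength)
{
    osg::ref_ptr<osg::Group> root = new osg::Group;
    const double segmentLength = 15.0;  // about 15 m of track per geode

    for (double s = 0.0; s < trackLength; s += segmentLength)
    {
        // Two versions of the same segment: detailed and coarse.
        osg::ref_ptr<osg::Geode> hi = new osg::Geode;
        hi->addDrawable(buildSegmentGeometry(track, s, s + segmentLength, 12));
        osg::ref_ptr<osg::Geode> lo = new osg::Geode;
        lo->addDrawable(buildSegmentGeometry(track, s, s + segmentLength, 4));

        // osg::LOD picks a child based on the distance to the camera, and
        // out-of-sight segments are culled per geode.
        osg::ref_ptr<osg::LOD> lod = new osg::LOD;
        lod->addChild(hi.get(), 0.0f, 80.0f);     // near: full detail
        lod->addChild(lo.get(), 80.0f, FLT_MAX);  // far: coarse geometry
        root->addChild(lod.get());
    }
    return root;
}
```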

As for the crossbeams, they are just a .3ds model that is repeatedly placed along the track.

Tunnels

The program generates a 256×256 grayscale Perlin noise texture, which is then used as a displacement map for a cylinder mesh generated around the track at load time.
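
As an illustration, here’s a small sketch of how such a texture can be generated into an osg::Image. For brevity it uses simple fractal value noise rather than true gradient (Perlin) noise, so take it as an approximation of what the engine actually does:

```cpp
#include <cmath>
#include <osg/Image>

// Deterministic pseudo-random value in [0,1] for an integer lattice point.
static double latticeNoise(int x, int y)
{
    unsigned n = static_cast<unsigned>(x) * 374761393u + static_cast<unsigned>(y) * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return (n & 0xffff) / 65535.0;
}

// Bilinearly interpolated noise at a fractional position.
static double smoothNoise(double x, double y)
{
    int xi = static_cast<int>(std::floor(x)), yi = static_cast<int>(std::floor(y));
    double fx = x - xi, fy = y - yi;
    double a = latticeNoise(xi, yi),     b = latticeNoise(xi + 1, yi);
    double c = latticeNoise(xi, yi + 1), d = latticeNoise(xi + 1, yi + 1);
    double top = a + (b - a) * fx, bottom = c + (d - c) * fx;
    return top + (bottom - top) * fy;
}

osg::Image* makeNoiseImage()
{
    osg::Image* image = new osg::Image;
    image->allocateImage(256, 256, 1, GL_LUMINANCE, GL_UNSIGNED_BYTE);
    unsigned char* data = image->data();
    for (int y = 0; y < 256; ++y)
        for (int x = 0; x < 256; ++x)
        {
            // Sum a few octaves of noise at increasing frequency.
            double v = 0.0, amp = 0.5, freq = 1.0 / 32.0;
            for (int octave = 0; octave < 4; ++octave)
            {
                v += amp * smoothNoise(x * freq, y * freq);
                amp *= 0.5;
                freq *= 2.0;
            }
            data[y * 256 + x] = static_cast<unsigned char>(v * 255.0);
        }
    return image;
}
```

Each texel is then used to push the corresponding tunnel vertex outward along its normal, which gives the cylinder its rocky bumps.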

The editor can mark segments as ‘tunnel’, easily turning tunnels on or off on a per-segment basis.

The tunnel meshes are also segmented for better culling and stitched together, with diffuse rock and floor textures applied.

Train

The train is a .3DS model by Danny. It has a slider assigned to it and is animated following an extremely simple physics scheme based on the train’s potential energy. It has a spotlight on the front, so the track, rooms and tunnels are illuminated as the train goes through. Moreover, the illumination of the train mesh is switched between the sunlight and the spotlight depending on whether it’s in a tunnel or not.
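
The energy-based update boils down to something like this (a toy sketch rather than the real code; the friction constant is invented):

```cpp
#include <algorithm>
#include <cmath>

struct TrainState {
    double distance;  // meters along the track from the station
    double energy;    // total specific energy (J/kg), set at the top of the lift hill
};

void stepTrain(TrainState& s, double heightAtDistance, double dt)
{
    const double g = 9.81;
    // E = g*h + v^2/2  =>  v = sqrt(2*(E - g*h))
    double kinetic = std::max(0.0, s.energy - g * heightAtDistance);
    double v = std::sqrt(2.0 * kinetic);

    s.distance += v * dt;          // advance the slider along the track
    s.energy   -= 0.02 * v * dt;   // crude friction/drag loss (made-up constant)
}
```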

Effects

A skydome, a lens flare (props to Tomás) and OSG’s implementation of shadow mapping were added.

Audio and others

At the last minute before the deadline, supports for the track were generated as regularly placed cylinders, but unfortunately they weren’t in yet when the screenshots and videos were taken.

A white noise audio file is played with a pitch and volume proportional to the train speed.
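
The post doesn’t say which audio library was used, but the mapping is as simple as it sounds; assuming OpenAL and an already-playing looping source, it would be something like:

```cpp
#include <AL/al.h>
#include <algorithm>

void updateTrainAudio(ALuint noiseSource, double speed, double maxSpeed)
{
    float t = static_cast<float>(std::min(1.0, speed / maxSpeed));
    alSourcef(noiseSource, AL_GAIN,  t);                // louder when faster
    alSourcef(noiseSource, AL_PITCH, 0.5f + 1.0f * t);  // higher pitched when faster
}
```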

To be done

Due to the tight time constraints we were subject to, I had to leave a lot of things undone, among them:

– Per-pixel lighting.

– Post-processing effects (vignette and HDR)


Hi everybody!

It’s been quite a while since my last post. I’ve been very busy with my Masters in CG, Games and VR… wait, in fact I’m still very busy!!

Today I’d like to tell you guys about a cool project I finished some months ago while I was working at a research institute.

Intro

We needed a versatile wireless interface for sensing buttons, joysticks and other human-operated transducers while maximizing compatibility and keeping production costs low.

I had previously designed an HID-compliant USB device for sensing digger and crane controls, but the current trend seems to be something like getting rid of wires and filling the environment with radiation :-p.

I wanted the device to be a BT HID, as it has lots of advantages:

  • Many computers and devices are equipped with BT now, so there’s no need to build a USB receiver.
  • Mainstream OSes have built-in drivers for HIDs
  • It can be used out-of-the-box with almost every game/app which supports a joystick/gamepad
  • Can be read directly through DirectInput, etc.
  • Robust (CRC; they can live in range of other BT or 802.x devices…)
  • Multi-platform

And a few drawbacks:

  • More costly than specific point-to-point solutions such as the nRF family
  • Much more power-hungry

Project requirements

  • Wireless
  • Decent stamina
  • Bluetooth (HID profile)
  • Robust
  • Affordable production cost
  • Small enough to be mounted inside control panels

I did a bit of research and couldn’t find a commercial product that fulfilled our requirements. The only Bluetooth HID devices I could find were the Wiimote and the PS3’s Sixaxis/DualShock 3. Both are closed and, in the case of the Wiimote, use a proprietary HID report-based subprotocol. Although we could have used the WiiUse library, I wanted my own solution instead.

Selecting the components

I spent over two weeks surfing the net for the best suited components, and here’s what I came up with:

The Bluetooth transceiver

There are plenty of all-in-one Bluetooth modules specially tailored for embedded designs. These are extremely useful since they integrate all the radio and baseband hardware (and even the antenna!) in a tiny self-contained mini-board, freeing the designer from those heavy-duty RF design tasks. But most of them are hardcoded to the RFCOMM Bluetooth profile (an RS-232 serial port over the Bluetooth link) and don’t allow the user to add or change Bluetooth services.

After trying a few of them and exchanging some e-mails with providers and manufacturers, the best I could find at the time was Bluegiga’s WT12, a Class 2 Bluetooth module for embedded systems. What makes it different from everything else is that it’s a low-cost module running a proprietary but documented firmware called iWrap. iWrap implements the Bluetooth stack from L2CAP down to the baseband, and you communicate with it from an external processor/microcontroller via a baud-rate-programmable UART. It features a documented plain-text command set for configuring and interacting with the stack (creating L2CAP connections, notifying the host when a new connection is waiting to be accepted, etc.). They even offered us some free samples! The drawback was that the iWrap3 firmware didn’t support custom service records, so you were basically stuck with the stock profiles.

The Microcontroller

The microcontroller would be in charge of running fully custom firmware to initialize the iWrap stack, sample its GPIOs for sensor data, manage the battery status and signal the general status to the user via a two-color LED, among other tasks.

Since I had extensive prior experience with Microchip’s PICmicro family of microcontrollers, I decided to go with the PIC18LF4550 in a QFP package. The 18LF4550 is a small yet powerful 8-bit Flash microcontroller that yields up to 12 MIPS at 48 MHz, has built-in USB, timers, PWM and many more peripherals, and enjoys a great software toolchain and libraries. The ‘L’ stands for extended voltage range, meaning it can run at 3.3 V, which is the logic voltage of the WT12 module.

Power management and battery

I thought it would be great to use a USB port for recharging the battery, since the 18LF4550 has built-in USB, and to be able to fall back to the USB link when the battery is nearly dead.

The MAX1811 is a great battery charger/monitor that can charge a single-cell Li-Ion battery from a 100 mA or 500 mA USB port. It signals when the charge has finished, monitors the cell temperature and much more.

For the battery I chose the PS3 controller battery, since it’s inexpensive, available everywhere, and there are extended 1200 mAh versions for just over 8 €!

Finally, for power management I used TI’s BQ2050, a fuel-gauge IC that communicates with an external host via a 1-Wire protocol to report measurements like the remaining battery charge, among many other parameters.

System diagram

The first prototype

After calculating lots of discrete component values from the ICs’ datasheets, I sketched a couple of schematics on a piece of paper and built a hand-wired prototype on a proto-board.

Note the brown wire mess on the external board. That is the WT12 with wires soldered directly to its pads.

The firmware

That was the toughest part of the whole project. When a problem can equally be caused by a line of code or by a loose wire, it always results in lots of fun ;-p

As you can see in the previous photo, I had a Microchip ICD2 (which broke and was replaced by an ICD3) hooked up to the board. That gave me the much-appreciated ability to rebuild the firmware and upload it directly to the on-board uC, as well as to do painful remote debugging.

I’ll spare you the nuts and bolts of the firmware, since it quickly grew into a complex and hard-to-debug piece of software, but I’d like to point out its main features:

  • Implements a lexical analyzer to parse messages from the WT12.
  • Fully interrupt-driven. Active waits are avoided at all costs.
  • It efficiently disconnects or scales the clock from various parts of the chip depending on the current usage to save power.
  • Manages the status of a bi-color LED to let the user be aware of the current connection/charge status
  • Switches between USB and Bluetooth mode transparently to the user by just plugging/unplugging the USB cable
  • Implements the SDP Bluetooth layer (the one in iWrap was feature-incomplete for my goals)
  • Implements the BT-HID layer (for the same reason)
  • Reads the inputs as a scan matrix with up to 32 digital inputs (see the sketch after this list)
  • Reads 8 analog inputs
  • Drives 8 3.3 V CMOS-compatible outputs
  • Warns the user if the battery is almost dead and turns off the device if the voltage drops below the minimum safe level
  • Has a custom HID-based protocol for reading the battery and device status from the PC, or performing other tasks such as remote shutdown
  • Implements the 1-Wire protocol with two CCP modules
  • Wakes up the device from sleep mode on signal changes
  • Implements a Bluetooth pairing PIN code
  • Is bootloader-capable for upgrading the firmware
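
As an illustration of the scan-matrix item above, here’s roughly how 32 digital inputs can be read as a 4×8 matrix. The GPIO helpers are hypothetical placeholders, not the actual PIC18 register accesses from the firmware:

```cpp
#include <cstdint>

void driveRowLow(int row);   // hypothetical: pull one matrix row low
void releaseRow(int row);    // hypothetical: set the row back to its idle level
uint8_t readColumnPort();    // hypothetical: read the 8 column inputs at once

uint32_t scanMatrix()
{
    uint32_t state = 0;
    for (int row = 0; row < 4; ++row)
    {
        driveRowLow(row);
        // The diode arrays mentioned later prevent "ghost" readings when
        // several switches are closed at the same time.
        uint8_t cols = static_cast<uint8_t>(~readColumnPort());  // active-low inputs
        releaseRow(row);
        state |= static_cast<uint32_t>(cols) << (row * 8);
    }
    return state;  // one bit per digital input, 32 in total
}
```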
That took a few months to develop. Tools like a low-cost logic analyzer often came in handy.

The schematic and board layout

I used CadSoft’s Eagle, a good schematic/layout CAD package that lets you design two-sided boards and is free (with some constraints) for non-commercial projects. Of course, I had to create new footprints and symbols for components that were missing from the stock Eagle library. The Eagle library from SparkFun was very helpful though.

The system uses a 3.3 V LDO voltage-regulator-based power supply for both USB and battery operation modes. For the analog part I placed an RL filter before the analog reference voltage input pin to filter power noise, and each analog input has its own low-pass capacitor.

Regarding the digital inputs, 8 diode arrays were used to implement the scan-matrix method.

A 16 MHz low-profile crystal was placed near the uC.

The I/O pins are simply IDC connectors, so another board with real sensors or better connectors can be stacked on top of this one.

There is a special programming port for connecting the MPLAB ICD PIC programmer to the board.

Once I was happy with the schematic and it had been tested on the proto-board, I moved on to board layout. But before doing so, I had to decide where I was going to send the resulting Gerber files for manufacturing. After looking at lots of low-cost prototype PCB manufacturers, I finally settled on Gold Phoenix (which is the backend for SparkFun’s BatchPCB).

Then I studied their prototype board constraints, which affect via and track widths as well as drill sizes. Fortunately, SparkFun has a .dru design rules file for Eagle on their site, which was extremely useful.

The final layout was all carefully placed and routed by hand.

I strictly followed the IC manufacturers’ design guides, and used ground planes and different track widths according to good design practice.

The WT12 has its pads facing the bottom of the board. It’s intended to be soldered in a reflow facility, so I had to figure out how to solder it by hand. My solution was to make the pads slightly larger in the footprint, so I could melt the solder onto its pins by applying heat to the part of the pads sticking out from under the module.

Gold Phoenix sent us 19 boards.

Assembly and test

I still remember how hard my heart was beating when I first connected the battery to the finished and assembled board prototype after a whole evening of tweezers, solder paste and looking through a giant magnifier :D… And it turned out it didn’t work the first time!

While tracking down the problem I discovered that the voltage regulator’s datasheet had the pinout completely wrong! I desoldered the part and replaced it with a TO-92 version with the pins in the right places.

… et voilà!

Fully assembled board (top)

Fully assembled board (bottom)

It turned out that it worked like a charm! However, a bit more debugging and development was still needed on the final thing.

When it’s on, the LED flashes green, indicating that it’s in discoverable mode. Then you pair your PC with it and it asks you for the PIN code. Once that is done, it’s recognized as a standard gamepad with 8 axes and 32 buttons, and it’s ready to use with any application. After 5 minutes of inactivity, or on receipt of the shutdown command, it turns off.

When any of the digital inputs is asserted, it turns on again and tries to re-establish the Bluetooth link. When you plug in the USB cable, the LED turns red and the battery recharges; the BT link is dropped and the device switches to the USB link.

I didn’t have enough time to perform extensive testing but the battery life was more than decent and it works perfectly.

Max, the astronaut

Posted: January 30, 2011 in Animation projects

This is our first animation. We made it as an assignment for two subjects called “Computer animation” and “Character Behavior and Modeling” of the Masters in Computer Graphics, Games and Virtual Reality at URJC (Madrid, Spain).

It’s about a seven year old boy named Max, who dreams about reaching the moon in his spaceship.

The whole animation is very cartoony. We spent almost 3 weeks making it.

It has been modeled in Maya, rendered with Mental Ray, assembled with Adobe Premiere and postprocessed in Adobe After Effects.

We used mocap for most of the animation loops and the dashboard animation was made in Adobe Flash.

It features a beautiful original theme from Roberto Gutierrez called “Reach the moon” in the ending credits.

Here’s the video.

Props to Roberto Gutierrez, Daniel Cachazo and Rosa Maroco.

Thanks for watching! ;-D

P.S.: Yes, there’s wind on our moon :-p

UPDATE (29/5/2011): We have entered Max, the astronaut in the SIGMAD 2011 Animation Festival, in the amateur category. Keep your fingers crossed and wish us luck!!

Intro

This is a project I developed while working at LSyM at the University of Valencia (Spain).
They had recently built a C.A.V.E system mounted on top of a powerful Manesmann 6 DOF Stewart mobile platform.
The system was sitting virtually unused and I wanted to develop some demo to show to visitors until a decent application was finally done.
I wasn’t allowed much time to do so and had to spend lots of my spare time working on it.
This is what I had:
The C.A.V.E platform at IRTIC (University of Valencia)
A C.A.V.E composed of:
  • Four high-performance active-stereoscopic ProjectionDesign projectors.
  • A chair with a little dashboard with two joysticks and some buttons.

As you can see in the picture, the projectors are mounted on board and have wide-angle lenses for rear projection onto the screens through mirrors. Each one has two DVI inputs (left/right field) and a genlock signal output.

A 6-DOF mobile platform:

  • It features surge, sway and heave (translation along the three axes) plus roll, pitch and yaw (rotation around them).
  • It’s electro-hydraulic and its linear actuators are driven by that gray box at the bottom.
  • The box is connected to a dedicated control PC via fiber optic communication.
  • It’s capable of lifting up to 1000 kg (I’m not really sure about this figure).
  • It works with 380 V (industrial range).
  • The control PC is connected via Ethernet to the application machine and runs a proprietary manufacturer control program.
  • It receives frames over UDP with the instantaneous position (the 6 DOF) at a rate of approximately 60 Hz.

A cluster composed of 5 machines:

  • 4 quad-core machines with NVIDIA Quadro cards. Each one renders a wall of the C.A.V.E (both left and right fields, using independent DVI cables connected to the two heads of its Quadro).
  • The cards are synchronized using NVIDIA Quadro G-Sync, and an IR emitter located on board the C.A.V.E is connected to the master projector via genlock to sync up the 3D glasses.
  • The machines are running Windows XP x64.
  • 1 master machine, a bit more underpowered, with a mid-range graphics card.
  • The 5 machines are interconnected via a Gigabit Ethernet hub.

Amazing, huh?

Just do it

I found myself with equipment worth tens of thousands of euros, little time to leverage its full potential, and of course I was pretty much on my own.

I found Rollercoaster 2000 by PlusPlus, a very basic rollercoaster simulator. It takes a plain-text file describing the track as Bézier control points and plays a 3D animation of the coaster in an OpenGL window. The graphics are VERY basic, but it’s fully open source, so it was just what I needed.

Rollercoaster 2K screenshot

Of course this app wasn’t ready for our CAVE and platform off the shelf, and here comes the fun part:

We were using VRJuggler, a middleware created by Carolina Cruz-Neira (the inventor of the CAVE herself!), for rendering in the CAVE.

This middleware is a convenient way to deploy graphics applications across a wide range of VR systems and configurations.

It takes care of intra-cluster communication, camera orientation and frustum settings, I/O device management…

Making things work

Don’t get me wrong: the RC2K app basically works, its math is correct and it’s free. You can’t complain under those conditions, and I’m grateful to its author for it. But the source is not very well commented or structured (lots of global vars, almost no use of structs…). Besides, a lot of comments and symbols were written in French, which I don’t speak, so I had to figure out a lot of things, but it was pretty straightforward.

The first step was to encapsulate the app into a C++ class derived from VRJuggler’s application base class.
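
The skeleton of such a class looks roughly like this (based on VRJuggler’s vrj::GlApp interface as I remember it; the rc2k_* calls are hypothetical stand-ins for the original C code):

```cpp
#include <vrj/Draw/OGL/GlApp.h>
#include <vrj/Kernel/Kernel.h>

// Stand-ins for the original Rollercoaster 2000 C functions (hypothetical names).
void rc2k_loadTrack(const char* path);
void rc2k_initGlState();
void rc2k_drawScene();

class CoasterApp : public vrj::GlApp
{
public:
    virtual void init()        { rc2k_loadTrack("track.txt"); }  // one-time setup
    virtual void contextInit() { rc2k_initGlState(); }           // per-GL-context setup
    virtual void preFrame()    { /* grab the latest physics buffer here */ }
    virtual void draw()        { rc2k_drawScene(); }             // called per wall/eye
};

int main(int argc, char* argv[])
{
    vrj::Kernel* kernel = vrj::Kernel::instance();
    CoasterApp app;
    for (int i = 1; i < argc; ++i)
        kernel->loadConfigFile(argv[i]);  // per-machine/cluster config files
    kernel->start();
    kernel->setApplication(&app);
    kernel->waitForKernelStop();
    return 0;
}
```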

Then I discovered that the physics (dynamics) loop was tightly coupled to the rendering loop. In fact, they were the same! As an immediate result, the coaster physics were not very stable.

The physics thread

The solution was to spawn a thread in charge of the physics. I isolated the variables involved in the physics calculations, and the thread just performs a simulation step at a constant rate. I defined a double buffer holding the dynamic state, protected by a mutex.
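
A minimal sketch of that double-buffering scheme (written with C++11 threads for brevity; the original used the threading facilities we had at the time, and stepSimulation() stands in for the RC2K physics step):

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct CoasterState { double trackPos; double speed; };

void stepSimulation(CoasterState& state, double dt);  // hypothetical RC2K physics step

class PhysicsBuffer
{
public:
    void publish(const CoasterState& s)   // called by the physics thread
    {
        std::lock_guard<std::mutex> lock(mutex_);
        front_ = s;
    }
    CoasterState snapshot() const          // called once per rendered frame
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return front_;
    }
private:
    mutable std::mutex mutex_;
    CoasterState front_ {0.0, 0.0};
};

void physicsThread(PhysicsBuffer& buffer, std::atomic<bool>& running)
{
    CoasterState back = {0.0, 0.0};
    while (running)
    {
        stepSimulation(back, 1.0 / 120.0);   // fixed-step update
        buffer.publish(back);
        std::this_thread::sleep_for(std::chrono::milliseconds(8));  // ~120 Hz
    }
}
```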

Master/Slaves model

In order to fit the architecture of our cluster, I made the master machine the simulation controller. It just calculates the next physics step and draws a view in an OpenGL window for the operator’s delight 😀

Each frame, VRJuggler takes the front physics buffer and broadcasts it to the slaves, the four other machines in charge of rendering the C.A.V.E walls.

The slave machines basically receive the data and draw their corresponding views (both left and right stereoscopic fields).

An XML config file allows VRJuggler to apply different camera configurations on a per-machine basis (based on the machine host name).

So the .EXE and .XML files are the same for all the machines, and VRJuggler takes care of the rest (window setup, stereo camera calculations, …).

It may look simple; however, nobody knows how much pain I went through to get that working :-p

For execution I set up a shared folder on the master machine with read access for all the slaves. I tried to launch them all via RPC, but after hours of research I gave up and ended up using a simple TCP-based remote launcher a co-worker had made.

Feeling accelerated?

The platform was for sure the most exciting part. Here’s how I did it.

The idea with motion platforms is to simulate accelerations and decelerations by tilting the platform. This shifts your center of gravity and tricks your inner ear into thinking that you’re on the go.

The best-known way of dealing with these platforms is the classical washout filter, mostly used in flight simulators. To put it simply, it’s a set of filters that transforms vehicle accelerations into motion cues that can be fed directly to the platform. It also does tilt coordination, whose objective is to reorient the gravity vector so the rider feels a sustained acceleration while the visual display stays the same.
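
Just to give a flavor of the tilt-coordination part (only the basic idea, not the ported filter, which also has high-pass/low-pass channels and rate limiters):

```cpp
#include <algorithm>
#include <cmath>

// Returns the platform pitch angle (radians) that makes the rider feel a
// sustained forward acceleration ax (m/s^2), clamped to the platform's limits.
double tiltCoordinationPitch(double ax, double maxTiltRad)
{
    const double g = 9.81;
    // Gravity is "borrowed": tilting by asin(ax/g) projects g onto the rider's
    // longitudinal axis, faking a constant acceleration.
    double pitch = std::asin(std::min(1.0, std::max(-1.0, ax / g)));
    return std::min(maxTiltRad, std::max(-maxTiltRad, pitch));
}
```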

We already had this filter, and the library that sends packets to the platform, implemented in old Borland C++, which I had to port to Visual Studio. Once the port was done, I had to adjust the filter thresholds (stored in plain-text files) using a software “platform simulator”, and then fine-tune them on the real thing to my taste.

The filter takes the angular velocity and the specific forces of the coaster train as input. For the angular velocities, someone pointed me to the Darboux vector.
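
For reference, the Darboux vector gives the angular velocity of the moving Frenet frame directly from the curve’s curvature and torsion. For a train moving with speed v along the track,

ω = v · (τ·T + κ·B)

where T and B are the tangent and binormal at the train’s position, and κ and τ are the curvature and torsion of the track there.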

Now that all the explanations are done, let’s watch it in action!

Result

Conclusions

As you can see in the video, the platform doesn’t rock you that hard (I wish it did :-D). I’d have to further adjust the washout thresholds to make it a rougher ride.

It would also have been cool to simulate the track shaking and do better graphics (I didn’t rewrite the OpenGL part except for a few details, so that’s pure RC2K graphics).

The whole thing took me a week and a half.

Of course, the platform has mechanical limits and it can’t reproduce a 360° loop, cobra rolls or corkscrews, but it does its best.

I wish I had implemented a run counter, since it quickly became one of our most appreciated demos and a mandatory one for visitors. I’ve spent hours running the demo for large groups of visitors from other universities and elsewhere.

Even the wife of the university’s rector had a ride at a special event last year!! Amazing.

Special thanks

Props to Ignacio Garcia for his advice and to M.A. Gamón for his support on the Washout filter.

The attack of the parallel app!

Posted: January 19, 2011 in Uncategorized

Intro

Over the past decades, traditional silicon-based “sequential” computers have run up against physical limits. From the early 80s until almost a decade ago, the main trend in performance improvement was to increase the clock speed and shrink the size of the elementary transistor. New processor families regularly arrived with higher and higher core frequencies and smaller process sizes.
As you may know, CMOS transistor technology (the one used in most of today’s electronics) consumes power every time it switches from one logic state to the other, and that power is dissipated as heat. So at 3.8 GHz it became too difficult to keep those micro-hells cool inexpensively.

Besides, at higher frequencies the clock period decreases, which means the distance an electrical pulse can travel per cycle shortens. This leads to problems where two functional units are too far from each other (inside the chip) and some sort of latch has to be placed between them to keep them working at that frequency.

Finally, Intel cancelled their plans to produce a 4 GHz Pentium 4 in favor of dual cores.

Today, processors and other computing devices are improved by adding parallel processing units, but this concept is not new at all. In the 70s the first multiprocessor machines began to appear at selected universities and in the 80s the equipment and tools for parallel computing became much more mature and supercomputing companies such as CRAY appeared. During the 90s and the 00s their computing power and electric power consumption grew exponentially.

However, CPUs are not the only parallel processors you can find. Graphics cards (even the oldest fixed-function GPUs) have always had an inherently parallel architecture, since many of the computations needed for rendering a frame can be carried out in parallel (for example, the 3DFX Voodoo2 had two parallel texturing units back in 1998).

Now graphics cards can be programmed in several ways. The first programmable cards allowed the user to replace certain stages of the graphics pipeline (such as vertex and fragment processing) with their own programs (i.e. shaders) running on the GPU in parallel with the CPU.

In those days, people wanting to use the GPU for general non-graphics purposes used to encode their data into textures, run a shader on the GPU and get the results back embedded in another texture. The development of hardware and software architectures allowing the execution of general-purpose programs on the GPU was just a matter of time, and today we can do so with technologies like CUDA and OpenCL. This is called General-Purpose computation on Graphics Processing Units, or GPGPU.

Now, many of the largest supercomputers in the Top500 list (currently headed by the Chinese GPGPU-based Tianhe-1A at 2.57 petaflops) are leveraging these technologies, which provide tons of computing power with lower electrical power consumption than traditional systems.

Saving lives massively

There’s a wide range of problems that would not be feasible to solve without the aid of a supercomputer. But… where do I begin searching?

You might be asking yourself: In which kind of problems are the biggest supercomputers being used?

Supercomputer usage (November ’10)

Take a look at the graph (Top500 supercomputer usage for November ’10); you’ll see a big fat “Not Specified” piece of the pie. I’m not sure what it means, but I guess it must be undisclosed military or government projects, or perhaps secret, commercially-oriented industrial applications.

The next biggest field of application is research. This makes sense, as pharmaceutical companies and universities make extensive use of them. As you can see, supercomputer applications range from aerospace to finance.

Let’s take a look at the paper “Simulation and modeling of synuclein-based ‘protofibril’ structures as a means of understanding the molecular basis of Parkinson’s disease”.

In a nutshell, researchers at the University of California–San Diego conducted a study to determine how alpha-synuclein proteins (whose function is unknown) aggregate and bind to membranes, eventually disrupting the normal cell functions whose failure is associated with Parkinson’s disease.

These membrane-binding surfaces can then be targeted with pharmaceutical intervention for dissolution.

For this purpose, they used molecular modeling and simulation techniques to predict the folding pathway of the protein into structures that bind to membranes.

They first developed a program called MAPAS to assess how strongly the protein interacts with the membrane.


Then they ran a set of molecular dynamics simulations on the IBM BlueGene. This basically identified the membrane regions with the highest binding probability.

With this data they performed further molecular simulations on those regions to simulate the binding process.

As they say in the paper, the results of the MAPAS program matched already known results in a test problem.


In my opinion, the BlueGene isn’t actually better suited to these applications than other machines out there, but I guess it was more accessible to the researchers, since it was conceived for biomolecular research and because of geographical proximity (California).

This project took 1.2 million processor hours.

What’s next?

It turns out that the parallel paradigm is already here and will be the future trend, so learning these technologies is worth it. But this of course doesn’t come for free: parallel applications are harder to write and debug than sequential ones. The programmer faces new difficulties like race conditions, which lead to synchronization problems that, in turn, may end up in deadlocks or incorrect results if not watched closely.

Another important aspect of parallel performance is how a global problem is split into smaller ones that can be solved individually by different processing elements and then put back together to compute the final result. This is, of course, absolutely problem- (and machine-) dependent, and it is an important task that might not be easy. Once again, “a good programmer must know the machine he’s programming for”.

The medium-term goal is reaching 1 exaflop of processing power, and it looks feasible that computer technology will change once more in the meantime. Moore’s law, which held true for decades, is no longer applicable to transistors; however, it can be extrapolated to processors.



As a matter of fact, technology is evolving quickly and continuously. Who knows where it will lead us in the future!

The good old demoscene

Posted: January 16, 2011 in Uncategorized

Last week I was assigned some homework for a subject called Advanced Rendering.

What I have to do is find a tunnel effect from the demoscene that I like and comment on it.

Well, I watched TONS of demos, some of them as old as 1994, which required a bit of black sorcery such as DOSBox or Wine. A lot of great stuff came up, like this state-of-the-art metaballs tunnel featuring equipotential surfaces and advanced lighting.

This is kickass stuff, but since we were asked to avoid repeating the same demo as other students and this one is easy to find, I wanted to dig (a lot) deeper into the demoscene.

During my journey I watched a lot of stuff that was impressive for its era, such as pixel shading on the Amiga or full-scene post-processing FX in early 1998.

But none of them really caught my eye until I found We Cell by Kewlers (2004). Here’s the video:

The demo itself is brilliant for its era, but let’s skip ahead to the tunnel part at 1:03.

It might look like some sort of polygonal mesh, but it isn’t. Unlike classic tunnel effects, everything you see is made of 2D sprites.

The walls of the tunnel are basically cleverly positioned sprites. Their brightness and size decrease with their distance to the camera. The lighting here looks static.

There’s also a depth-of-field effect that applies a Gaussian blur depending on each sprite’s distance to the camera. As we ride the tunnel we see little spheres approaching the camera; these are also sprites.
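
To make the trick a bit more concrete, here’s a toy sketch of the idea (nothing to do with the demo’s actual code): sprites sit on rings along the tunnel axis, get smaller and darker with distance, and get blurred according to their distance from a focal plane:

```cpp
#include <cmath>
#include <vector>

struct Sprite { float x, y, z; float size; float brightness; float blur; };

std::vector<Sprite> buildTunnelSprites(float cameraZ)
{
    const float radius = 2.0f;
    std::vector<Sprite> sprites;
    for (int ring = 0; ring < 64; ++ring)        // rings along the tunnel
    {
        float z = cameraZ + ring * 1.5f;         // depth of this ring
        float dist = z - cameraZ;
        for (int i = 0; i < 16; ++i)             // sprites around the ring
        {
            float a = i * 2.0f * 3.14159265f / 16.0f;
            Sprite s;
            s.x = radius * std::cos(a);
            s.y = radius * std::sin(a);
            s.z = z;
            s.size       = 1.0f / (1.0f + 0.2f * dist);    // smaller with distance
            s.brightness = 1.0f / (1.0f + 0.3f * dist);    // darker with distance
            s.blur       = std::fabs(dist - 6.0f) * 0.1f;  // DOF: sharp near the focal plane
            sprites.push_back(s);
        }
    }
    return sprites;
}
```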

If you download the demo and open the “data” subdirectory, you’ll find all the sprites as JPG files. They’re not dynamically generated.

All through the tunnel we can see a HUD with some text in the top-left corner and some sort of irregular frame wrapping the scene (the HUD can also be seen in the “data” dir). Along with the HUD, a bunch of translucent cells rising or falling can be seen in the left half of the screen.

The HUD, the bubbles and the camera are all sequenced and animated to the music, which gives a lively, organic and rich environment worth watching.

Even more impressive is the fact that they used NO SHADERS at all, so everything must be computed in software. I guess CPU-intensive effects like the DOF (depth of field) must be written in raw assembler, as is usual in the demoscene.

I ran the demo a couple of times to see whether the tunnel shape is dynamically generated or precomputed, and it looks like it’s dynamic.

There’s another tunnel at 4:25 but it uses the very same tricks and techniques.

Here’s the info about the demo:

Release date: August 2004

Party: Assembly 2004

Platform: Windows

Exe size: 5.5 MB (download link at scene.org)

I have no info about the graphics API or the Workflow tools used here, but it looks like pure C/ASM + OpenGL.

Now let’s talk about this demo and its creators:

Kewlers was a Finnish demogroup. Their legacy, their slogan “Kewlers doesn’t suck at all” and their productions marked an era of new-school productions with old-school soul.

We Cell is a demo presented at Assembly ’04, where it took fifth place. Kewlers never won first prize in any of the four Assembly parties they participated in, with productions such as Variform, We Cell, 1995 and “A Significant Deformation Near the Cranium”. But they got 6 scene.org awards in 3 different years, none of them for We Cell, which was nominated in 4 categories: Best Demo, Best Effects, Best Soundtrack and Public Choice.

In a world of pixel shaders, Curly Brace, the main coder, wanted to innovate by making a great production without using the new hardware capabilities everyone else was using. He achieved magnificent scenes without a single polygon or shader (just sprites, particles and software effects).

On the other hand, we have the music of Little Bitchard, one of the most versatile and prolific musicians of recent scene history.

Kewlers brought the best of the old school to the new school: code innovation, simplicity and direct impact.

Unfortunately, after their next great demo, 1995, they quit the demoscene in 2006.

Last active members:

– Actor Doblan (musician, graphic artist)

– Albert Huffman (coder)

– Curly Brace (coder, 3D artist)

– DashNo (graphic artist)

– Fred Funk (musician, graphic artist)

– Little Bitchard (musician)

– Math Daemon (coder)

– Mel Funktion (musician)

– Scootr Lovr (coder, 3D artist)

(sources: escena.org, wikipedia)

I remember the old times when I used to watch demos on my 486DX2 @ 66 MHz!!