Archive for the ‘Software projects’ Category

Howdy!

Today is a great day. The version of Spelunky for PlayStation®3 and PlayStation®Vita that we’ve been working on at BlitWorks has been released in both America and Europe.

Spelunky is a 2D platformer with randomized levels where endless combinations of crazy stuff can happen every time you play, thus redefining the word ‘addictive’.

The game came out last year for the Xbox 360 to great success. Earlier this month a slightly more feature-rich version was released for Steam and GOG, but today it makes its debut on Sony’s home console and (for the first time) on a handheld.

What sets this port apart from the others is:

– Cross-buy & Cross-play: Buy the PS3 version and get the Vita one for free (and vice-versa). Play in one console and your progress will be automatically sync’d in the cloud letting you continue in another console where you left.

– Wireless co-op mode: Up to 3 Vitas can be hooked up via Wi-Fi to a PS3 to play co-op (either adventure or deathmatch). Or use your Vita to play against other Vitas on the go via ad-hoc Wi-Fi.

– Every Vita gets its own camera, so there’s no need for everyone in the game to be constrained to the same frame as in other versions.

– Touch features and accelerometer 3D effects on menus! (Vita only)

– Controller vibration on PS3

There’s a free demo waiting for you on the PlayStation Store.

The full version is priced at $14.99 in America and €14.99 in Europe. If you’re a PS Plus subscriber you’ll get a 20% discount.

http://blog.us.playstation.com/2013/08/27/spelunky-lands-on-ps3-and-ps-vita-today/

The port is getting fairly good reviews so far 🙂

Enjoy!

I’ve just made a new video of the devil’s mine using a relatively recent YouTube feature: the 3D player.

The coolest thing about it is that the video is uploaded side by side (only horizontal, I guess), and the player lets you choose your favorite viewing mode: anaglyph with 3 pairs of colors, interlaced (best for LG 3D Cinema TVs), or the ubiquitous side-by-side.

Click here to view the video in the YT 3D player

The technical term for the so-called 3D is in fact stereoscopy (i.e., two images, one for each eye, are produced, transmitted and rendered instead of one). And contrary to popular belief, it’s pretty simple to implement; it’s not rocket science. In fact, the first stereoscopic (anaglyph) movies were born in the early 50s! (Remember that guy with anaglyph glasses in Back to the Future? XD)

However, it can be very tricky to get right.
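
The classic pitfall is the camera setup: you want two parallel cameras with asymmetric (off-axis) frusta rather than “toed-in” ones. Here’s a minimal C++ sketch of that math; the parameter values are illustrative, not tied to any particular engine:

```cpp
#include <cmath>
#include <cstdio>

// Off-axis stereo: both eyes share one projection plane at the convergence
// distance, and each eye's frustum is shifted sideways so the views line up
// there. This avoids the eyestrain caused by simply toeing the cameras in.
struct Frustum { double left, right, bottom, top, znear, zfar; };

Frustum eyeFrustum(double fovYDeg, double aspect, double znear, double zfar,
                   double convergence, double eyeSeparation,
                   int eye /* -1 = left, +1 = right */)
{
    const double kPi   = 3.14159265358979323846;
    const double top   = znear * std::tan(0.5 * fovYDeg * kPi / 180.0);
    const double halfW = top * aspect;
    // Horizontal shift of the near-plane window, scaled back from the
    // convergence plane to the near plane.
    const double shift = -eye * 0.5 * eyeSeparation * znear / convergence;

    Frustum f;
    f.left  = -halfW + shift;  f.right = halfW + shift;
    f.bottom = -top;           f.top   = top;
    f.znear  = znear;          f.zfar  = zfar;
    return f;   // the eye position itself is also offset by eye * eyeSeparation / 2
}

int main()
{
    // Illustrative values: 60 deg FOV, 16:9, 6.5 cm eye separation, 3 m convergence.
    const Frustum l = eyeFrustum(60.0, 16.0 / 9.0, 0.1, 1000.0, 3.0, 0.065, -1);
    const Frustum r = eyeFrustum(60.0, 16.0 / 9.0, 0.1, 1000.0, 3.0, 0.065, +1);
    std::printf("left eye frustum:  [%.5f, %.5f]\n", l.left, l.right);
    std::printf("right eye frustum: [%.5f, %.5f]\n", r.left, r.right);
    return 0;
}
```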

Enjoy!

Intro

Full-scene antialiasing has become something of a trending topic in these days of inexpensive big flat displays and powerful GPUs.

Traditional AA algorithms used to rely on some sort of supersampling (i.e., rendering the scene to an n-times-bigger buffer and then mapping and averaging more than one supersampled pixel to a single final pixel).
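
As a toy illustration of that resolve step, here is a straight box-filter downsample (a CPU-side sketch; real SSAA resolves happen on the GPU):

```cpp
#include <cstdint>
#include <vector>

// Box-filter downsample of an n*n supersampled grayscale buffer: every output
// pixel is the average of its n*n source pixels.
std::vector<uint8_t> downsample(const std::vector<uint8_t>& src,
                                int srcW, int srcH, int n)
{
    const int dstW = srcW / n, dstH = srcH / n;
    std::vector<uint8_t> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            int sum = 0;
            for (int sy = 0; sy < n; ++sy)
                for (int sx = 0; sx < n; ++sx)
                    sum += src[(y * n + sy) * srcW + (x * n + sx)];
            dst[y * dstW + x] = static_cast<uint8_t>(sum / (n * n));
        }
    return dst;
}
```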

Multisampling AA is the most widespread technique. 4x-8x MSAA can yield good results but can also be computationally expensive.

Morphological antialiasing (MLAA) is a fairly recent technique which has grown in popularity in recent years. In 2009, Alexander Reshetov (Intel) proposed an algorithm to detect shape patterns in aliased edges and then blend the pixels belonging to an edge with their 4-neighborhood, based on the sub-pixel area covered by the mathematical edge line.

Reshetov’s implementation wasn’t practical on the GPU since it was tightly coupled to the CPU, but the concept had a lot of potential. His demo takes a .ppm image as input and outputs an antialiased .ppm.

However, there’s been a lot of activity on this topic since then and a few GPU-accelerated techniques have been presented.

Jimenez’s MLAA

Among them, Jorge Jimenez and Diego Gutierrez’s team at the University of Zaragoza (Spain) has developed a symmetrical 3-pass post-processing technique named Jimenez’s MLAA.

According to the tests conducted by the authors, it can achieve visual results between 4x and 8x MSAA with an average speedup of 11.8x (GeForce 9800 GTX+)! On the other hand, it suffers from classic MLAA problems such as the handling of sub-pixel features, but you can tweak some parameters to get really good results, with virtually no noticeable glitches, at a fraction of the time and memory that MSAA takes!

The algorithm, in a nutshell, works as follows:

In the first step, a luma-based discontinuity test is performed on the rendered-to-texture scene for the current pixel and its 4-neighborhood. The result is encoded in an RGBA edges texture.

One can easily notice that it produces artifacts in zones that are not necessarily edges. The threshold can be tweaked, but converting RGB to luma space has its issues, since two completely different colors can map to similar luma values.
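
For reference, the first pass boils down to something like this CPU-side sketch (the real thing is a fragment shader; the Rec. 601 luma weights and the 0.1 threshold are common defaults, not necessarily the authors’ exact values):

```cpp
#include <cmath>
#include <vector>

struct RGB { float r, g, b; };

// Rec. 601 luma approximation.
inline float luma(const RGB& c) { return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b; }

// First pass (sketch): flag a discontinuity towards the left and/or top
// neighbor when the luma difference exceeds a threshold. The two flags would
// end up in two channels of the edges texture.
struct Edge { bool left, top; };

std::vector<Edge> detectEdges(const std::vector<RGB>& img, int w, int h,
                              float threshold = 0.1f)
{
    std::vector<Edge> edges(w * h, Edge{false, false});
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const float l = luma(img[y * w + x]);
            if (x > 0)
                edges[y * w + x].left = std::fabs(l - luma(img[y * w + x - 1])) > threshold;
            if (y > 0)
                edges[y * w + x].top  = std::fabs(l - luma(img[(y - 1) * w + x])) > threshold;
        }
    return edges;
}
```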

The second step takes the edges texture and, with the help of a precomputed area texture, determines for each edgel (pixel belonging to an edge) the area above and below the mathematical edge crossing the pixel. These areas are encoded into another RGBA texture and used as blending weights. Here an especially smart use of hardware bilinear filtering is made by sampling in between two texels to fetch two values in a single access.

In the last step the original aliased image and the blending weights texture are used to do the actual blending and generate the final image.
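
Conceptually, that last pass is just a weighted average of each pixel with its 4-neighborhood; a rough CPU-side sketch (the weight layout is simplified, not the authors’ exact shader):

```cpp
#include <vector>

struct RGB     { float r, g, b; };
struct Weights { float top, bottom, left, right; };   // per-direction coverage

// Third pass (sketch): blend each pixel with its four neighbors using the
// coverage weights computed in the second pass.
RGB blendPixel(const std::vector<RGB>& img, const std::vector<Weights>& w,
               int x, int y, int width, int height)
{
    const RGB&     c  = img[y * width + x];
    const Weights& wt = w[y * width + x];
    const float keep  = 1.0f - (wt.top + wt.bottom + wt.left + wt.right);

    RGB out = { c.r * keep, c.g * keep, c.b * keep };
    // Accumulate a neighbor's weighted contribution if it exists.
    auto add = [&](int nx, int ny, float weight) {
        const RGB& n = img[ny * width + nx];
        out.r += n.r * weight; out.g += n.g * weight; out.b += n.b * weight;
    };
    if (y > 0)          add(x, y - 1, wt.top);
    if (y < height - 1) add(x, y + 1, wt.bottom);
    if (x > 0)          add(x - 1, y, wt.left);
    if (x < width - 1)  add(x + 1, y, wt.right);
    return out;
}
```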

Here’s the original aliased image (taken from NoLimits)

All of the screenshots here are lossless PNGs, so go ahead and zoom in 😀

Translation into GLSL

You can download the source code for the original demo here.

There are DX9 and DX10 versions. The shaders were obviously written in HLSL, with everything contained in a single .fx file.

So in order to make it work in OpenSceneGraph I first had to translate it into 3 GLSL fragment shaders and 1 vertex shader. It needs at least GLSL 1.30 to work.

Integration into OpenSceneGraph

OSG doesn’t have a programmable post-FX pipeline itself. Instead, there’s a third-party library named OSGPPU which allows you to set up a graph made up of PPUs (Post Processing Units), each of which has an associated shader program, one or more input textures (typically the output of the previous step), and an output texture which can be plugged into the next step, and so on.
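
Each unit’s shader itself is built with the regular osg::Program / osg::Shader machinery; roughly like this sketch (the file names are placeholders of mine, not the actual ones from the project):

```cpp
#include <string>
#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>

// Sketch: wrap one of the translated GLSL 1.30 fragment shaders (plus the
// shared vertex shader) into an osg::Program and attach it to a state set.
void attachPassShader(osg::StateSet* stateSet, const std::string& fragFile)
{
    osg::ref_ptr<osg::Shader> vs = new osg::Shader(osg::Shader::VERTEX);
    osg::ref_ptr<osg::Shader> fs = new osg::Shader(osg::Shader::FRAGMENT);
    vs->loadShaderSourceFromFile("mlaa_common.vert");   // placeholder name
    fs->loadShaderSourceFromFile(fragFile);             // edge / weights / blend pass

    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(vs.get());
    program->addShader(fs.get());
    stateSet->setAttributeAndModes(program.get(), osg::StateAttribute::ON);
}
```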

The construction of the post-FX pipeline for JMLAA was painless; however, there is a detail that I still haven’t been able to figure out: correct stencil buffer usage.

An optimization which may yield a big performance boost is using the stencil buffer as a processing mask. When creating the edges texture in the first step, you also write a 1 to the previously zeroed stencil buffer at the corresponding location. The pixels that don’t satisfy the edge condition are discarded with discard; in the shader. In the subsequent steps the values of the stencil are used as a mask, so pixels not belonging to edges are quickly discarded in the graphics pipeline.
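
In raw OpenGL terms, the intent is roughly the following (a sketch of the state calls; osgPPU abstracts this away, which is exactly where it went wrong for me):

```cpp
#include <GL/gl.h>

// Pass 1: the edge-detection shader discards non-edge fragments, so only
// pixels belonging to an edge ever write a 1 into the (cleared) stencil.
void beginEdgePass()
{
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
}

// Passes 2 and 3: only pixels previously marked as edges pass the stencil
// test; everything else is rejected before the fragment shader runs.
void beginMaskedPass()
{
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}
```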

But for some reason, OSGPPU either doesn’t clear the stencil properly or updates it prematurely, so I couldn’t get this working and had to process every pixel in all three steps without discarding anything. Even so, I noticed no performance hit when loading fairly complex models. Here’s the thread where I asked for help.

Results

I wrote a little demo app which disables OSG’s default MSAA, loads a 3D model (it supports a few different formats) and displays it in a viewer. You can view the intermediate (edges and weights) textures, as well as the original and the antialiased final images. By default it uses a depth-based discontinuity test instead of the luma one.

This is the original aliased image (zoomed in by 16x):

And this one is the filtered final image produced by JMLAA:

You will find more details on JMLAA in the book GPU Pro 2!

Download

Here you can download a VS 2008 project along with the source, a default model, the shaders and precompiled binaries for OSG/OSGPPU. It should compile and run out of the box.

Yet another Sonic clone

Posted: July 15, 2011 in Games

Intro

I’ve been a Sonic series enthusiast since I got my Sega Genesis as a child. To me, it’s the perfect match between platforms and speed, two genres I love. Even though time flies, it never gets old.

First off, I must say that I’m not associated with Sega or Sonic team in any way, and what I’m gonna show you is just a simple fan game I made for fun.

This was one of my first amateur side-scrollers, made some 5 or 6 years ago. I had made a couple of very simple scrollers in Flash in the past, but I wanted to work with old-school tiles.

I used MFC (the Microsoft Foundation Classes) as the rendering API. Since I wanted to use this as a project for a college course, the use of MFC, though not the most efficient option, was mandatory.

It features a couple of badniks, rings, goal billboards, moving platforms, springs, a final boss and a level editor.

I found the tiles, backgrounds and sprites on some website so I only needed to focus on programming.

Got ring?

As you may know, in the old times home consoles and arcade machines mostly used tile-based engines. In a nutshell, everything you saw on the screen was made up of fixed-size (usually square) tiles which were laid out according to some table in memory.

The video signal generator just checked that table and the tiles in video memory to generate the video output.

Of course there’s a lot of nuts and bolts to it (scrolling playfields, mirroring, transparency, overlapping…) but that’s another story.

This app allows you to set up a scrolling playfield made of tiles and specify its absolute position on screen. It determines the tiles visible in the viewport, their offset and how they should be displayed; then the backend MFC renderer does the drawing job.
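
The viewport math is the usual tile-engine arithmetic; schematically (variable names are mine):

```cpp
// Sketch: which tiles of a scrolling playfield are visible, and with what
// pixel offset, given the camera position in world pixels.
struct VisibleRange {
    int firstCol, firstRow;   // index of the top-left visible tile
    int cols, rows;           // how many tiles to draw
    int offsetX, offsetY;     // sub-tile offset of that first tile, in pixels
};

VisibleRange computeVisibleTiles(int cameraX, int cameraY,
                                 int viewW, int viewH, int tileSize)
{
    VisibleRange v;
    v.firstCol = cameraX / tileSize;
    v.firstRow = cameraY / tileSize;
    v.offsetX  = -(cameraX % tileSize);
    v.offsetY  = -(cameraY % tileSize);
    // One extra row/column covers the partially visible tiles at the edges.
    v.cols = viewW / tileSize + 2;
    v.rows = viewH / tileSize + 2;
    return v;
}
```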

As for the animated sprites, there’s a base AnimatedSprite class which implements an interface for rendering and for specifying the status of the animation (playing, stopped,…) as well as its speed and extents.
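
Its interface was along these lines (an illustrative sketch, not the original header):

```cpp
// Sketch of an animated-sprite interface.
class AnimatedSprite {
public:
    enum Status { Playing, Stopped, Paused };

    virtual ~AnimatedSprite() {}

    // Rendering interface: draw the current frame at the given screen position.
    virtual void render(int screenX, int screenY) = 0;

    // Animation status control.
    void play()  { status_ = Playing; }
    void stop()  { status_ = Stopped; frame_ = 0.0f; }
    void pause() { status_ = Paused; }
    Status status() const { return status_; }

    // Speed in animation frames per game tick; extents describe the sprite's
    // bounding box for collision and culling purposes.
    void setSpeed(float framesPerTick) { speed_ = framesPerTick; }
    void setExtents(int w, int h)      { width_ = w; height_ = h; }

    // Advance the animation; called once per game tick.
    void update() { if (status_ == Playing) frame_ += speed_; }

protected:
    Status status_ = Stopped;
    float  frame_  = 0.0f;
    float  speed_  = 1.0f;
    int    width_  = 0, height_ = 0;
};
```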

The physics are simple enough to make the game look almost like the original (though it’s substantially less feature-complete, of course). Collisions with the scenery are handled on a per-tile basis, where each tile has its own collision properties.
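
The collision test itself is the usual world-position-to-tile-index lookup; roughly:

```cpp
#include <vector>

// Sketch: per-tile collision lookup. Each tile stores its own collision
// properties; a moving object is tested against the tiles its bounding box
// overlaps.
enum TileCollision { Empty, Solid, Platform /* solid from above only */ };

struct TileMap {
    int width, height, tileSize;
    std::vector<TileCollision> tiles;   // width * height entries

    TileCollision at(int worldX, int worldY) const
    {
        const int tx = worldX / tileSize;
        const int ty = worldY / tileSize;
        if (tx < 0 || ty < 0 || tx >= width || ty >= height)
            return Solid;               // treat out-of-bounds as solid walls
        return tiles[ty * width + tx];
    }
};
```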

The game logic and automata glue everything else together.

Regarding sound, the Windows multimedia API is used, and there’s a folder with .wav sound cues and BGM in it. I have recently discovered that it freezes on Windows 7 when it tries to play more than one sound simultaneously, but there’s a workaround.

The final boss is a funny inter-company match where Super Mario throws items from his game 😀

The whole game was created from scratch in about three weeks working about 4 hours per day in the evenings.

Unfortunately, due to a hard disk failure, I lost the source code, but I still have the binaries.

And I keep it as a bit of history.

Dead And Angry

Posted: July 14, 2011 in Computer Graphics, Games

That is the spooky title we gave to a demo game we made at URJC.

This game has been made by a team of 4 people (which includes me) with no prior knowledge of Unreal Development Kit.

It features custom models, AI, sceneries, UI, cinematics, gameplay and a multiplayer mode.

In my opinion, UDK can be an extremely sweet tool for artists and designers, but it can also be a pain for programmers (let’s recall that I’m talking about the free UDK version) due to the lack of consistent and well-organized documentation and examples. On the other hand, the community is very active and responsive.

All I have to say is that we’ve learned a lot about the whole process of making a game, especially about how important preproduction is, and about how many features you have to prematurely trash due to deadlines.

Abstract

This was a project for my Masters in Computer Graphics, Games and Virtual Reality at URJC.

We were asked to develop some sort of real-time mine train animation from scratch. It had to feature dynamically generated tunnel bumps with Perlin noise, an on-board camera view and three rendering modes: polygon fill, wireframe and points. We chose OpenSceneGraph for the job.

Design tools

As a big fan of rollercoasters, I had spent hours on the NoLimits Rollercoaster Simulator, which has a quite mature coaster editor. There are plenty of coasters made with NoLimits around the net, most of them reconstructions of real ones.

I thought it could be a good idea to be able to load coaster models in NoLimits (.nltrack) format as it would allow us to design the track and the scene in a visual way using the NL Editor.

The .nltrack format is binary and undocumented. It contains the shape of the track as control points of cubic Bezier curves, along with info about the colors, the supports, external .3DS objects and the general appearance of the rollercoaster.

Using Hexplorer and the NL editor itself I was able to figure out the control points and the location/scaling/rotation of the external 3D models. Later I discovered that there’s a library called libnltrack, which helped a lot.

My pal Danny modeled a couple of rooms, an outdoor scene and a lot of mine props (barrels, shovels, …). Then he imported them into the editor and laid out a coaster track passing through the whole scene.

Coaster geometry

Correct generation of the rails and the crossbeams for the track was a bit of a challenge, and it needed to be efficient!

I came up with a solution based on the concept of a ‘slider’: a virtual train which can be placed anywhere along the track (just by specifying how many kilometers away from the origin, i.e. the station, it is) and which returns three orthonormal vectors forming a base that is then used to transform vertices into the train’s POV.

By using two sliders, one slightly ahead of the other, one can lay out vertices back and forth to form triangle strips and generate perfectly stitched cylinders. I ran into a couple of problems when the track was almost vertical, but I finally managed to solve them.
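
In code, a slider amounts to evaluating the track at a given distance and building an orthonormal frame from the tangent; a simplified sketch (structure names are mine, and banking plus the vertical-track special case are left out):

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
    Vec3 normalized() const {
        const double l = std::sqrt(x * x + y * y + z * z);
        return {x / l, y / l, z / l};
    }
};

// Orthonormal base returned by the "slider": forward along the track,
// side pointing to the right, up completing the frame.
struct SliderFrame { Vec3 position, forward, side, up; };

// trackPointAt is a callback that evaluates the Bezier track at a given
// distance from the station; a small look-ahead gives us the tangent.
SliderFrame frameAt(double distanceMeters, Vec3 (*trackPointAt)(double))
{
    const double eps = 0.05;   // 5 cm look-ahead for the tangent
    SliderFrame f;
    f.position = trackPointAt(distanceMeters);
    f.forward  = (trackPointAt(distanceMeters + eps) - f.position).normalized();
    const Vec3 worldUp{0.0, 1.0, 0.0};
    f.side = worldUp.cross(f.forward).normalized();
    f.up   = f.forward.cross(f.side).normalized();
    return f;
}
```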

Upon startup, the geometry for the whole coaster is generated. The engine generates about 15 meters of track per geode; this way OpenSceneGraph is able to cull the out-of-sight track segments efficiently. Besides, two levels of detail are generated based on the distance to the camera.

As for the crossbeams, it’s just a .3ds model which is repeatedly placed along the track.

Tunnels

The program generates a 256×256 grayscale Perlin noise texture which is then used as a displacement map for a cylinder mesh generated around the track at load time.
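
The displacement itself is simple: each vertex of the tunnel cylinder is pushed along its normal by the noise value sampled at its texture coordinates; something like this sketch (noiseAt stands in for the lookup into the generated texture):

```cpp
#include <cmath>

// Stand-in for a bilinear lookup into the 256x256 grayscale Perlin texture;
// returns a value in [0, 1). Purely illustrative.
static float noiseAt(float u, float v)
{
    const float s = std::sin(u * 41.7f + v * 17.3f) * 43758.5453f;
    return s - std::floor(s);
}

struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

// Displace a tunnel vertex along its outward normal by the (re-centered)
// noise value, so bumps go both into and out of the nominal cylinder.
void displace(Vertex& vert, float amplitudeMeters)
{
    const float d = (noiseAt(vert.u, vert.v) - 0.5f) * amplitudeMeters;
    vert.px += vert.nx * d;
    vert.py += vert.ny * d;
    vert.pz += vert.nz * d;
}
```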

The editor is able to mark segments as ‘tunnel’, easily turning tunnels on or off on a per-segment basis.

The meshes are also segmented for better culling and stitched together. They have diffuse rock and floor textures applied.

Train

The train is a .3DS model by Danny. It has a slider assigned to it and is animated following an extremely simple physics scheme based on the potential energy of the train. It has a spotlight on the front so the track, rooms and tunnels are illuminated as the train goes through. Moreover, the illumination of the train mesh is switched from the sunlight to the spotlight depending on whether it’s in a tunnel or not.
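
The potential-energy scheme is the classic frictionless-coaster approximation: the speed at any point follows from how far the train has dropped below the highest point of the track. A sketch (the numbers are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// v = sqrt(2 * g * (hMax - h)), optionally minus a crude friction loss and
// clamped to a minimum speed so the train never stalls on the lift hill.
double trainSpeed(double heightMeters, double maxHeightMeters,
                  double frictionLossMeters = 0.0)
{
    const double g    = 9.81;
    const double drop = std::max(0.0, maxHeightMeters - heightMeters - frictionLossMeters);
    const double v    = std::sqrt(2.0 * g * drop);
    const double minSpeed = 2.0;   // m/s, illustrative lift/chain speed
    return std::max(v, minSpeed);
}

// Each frame the slider position is advanced along the track:
//     distance += trainSpeed(h, hMax) * dt;
```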

Effects

A skydome, a lens flare (props to Tomás), and OSG’s implementation of shadow mapping were added in.

Audio and others

At the last minute before the deadline, supports for the track were generated as regularly placed cylinders, but unfortunately that wasn’t in yet when the screenshots and the videos were taken.

A white noise audio file is played with a pitch and volume proportional to the train speed.

To be done

Due to the tight timing constraints we were subject to, I was forced to leave a lot of things undone, among them:

– Per-pixel lighting.

– Post-processing effects (vignette and HDR)

 

Intro

This is a project I developed while working at LSyM at the University of Valencia (Spain).
They had recently built a C.A.V.E system mounted on top of a powerful Mannesmann 6-DOF Stewart motion platform.
The system was sitting virtually unused and I wanted to develop some demo to show to visitors until a decent application was finally done.
I wasn’t allowed much time to do so and had to spend lots of my spare time working on it.
This is what I had:
The C.A.V.E platform at IRTIC (University of Valencia)
A C.A.V.E composed of:
  • Four high-performance active-stereoscopic ProjectionDesign projectors.
  • A chair with a little dashboard with two joysticks and some buttons.

As you can see in the picture, the projectors are mounted on board and have wide-angle lenses for rear projection onto the screens through mirrors. Each one has two DVI inputs (left/right field) and a genlock signal output.

A 6-DOF mobile platform:

  • It features X, Y and Z translation plus roll, pitch and yaw (rotation around the three axes).
  • It’s electro-hydraulic and its linear actuators are driven by that gray box at the bottom.
  • The box is connected to a dedicated control PC via fiber optic communication.
  • It’s capable of lifting up to 1000 kg (I’m not really sure about this).
  • It works with 380 V (industrial range).
  • The control PC is connected via Ethernet to the application machine and runs a proprietary manufacturer control program.
  • It receives frames over UDP with the instantaneous position (the 6 DOF) at a rate of approximately 60 Hz.

A cluster composed of 5 machines:

  • 4 quad-core machines with nVidia Quadro cards. Each one renders a wall of the C.A.V.E (both left and right fields, using independent DVI cables connected to the two heads of the Quadro).
  • The cards are synchronized using nVidia Quadro G-Sync, and an IR emitter located on board the C.A.V.E is connected to the master projector via genlock to sync up the 3D glasses.
  • The machines are running Windows XP x64.
  • 1 master machine, a bit more underpowered, with a mid-range graphics card.
  • The 5 machines are interconnected via a Gigabit Ethernet hub.

Amazing, huh?

Just do it

I found myself with equipment worth tens of thousands of euros, little time to leverage its full potential, and of course I was pretty much on my own.

I found Rollercoaster 2000 by PlusPlus, which is a very basic rollercoaster simulator. It takes a plain-text file containing the description of the track as Bezier control points and plays a 3D animation of the coaster in an OpenGL window. The graphics are VERY basic, but it is fully open-sourced, so that’s just what I needed.

Rollercoaster 2K screenshot

Of course this app wasn’t ready for our CAVE & platform off the shelf, and here comes the fun part:

We were using VRJuggler as a middleware for rendering in the CAVE (by Carolina Cruz-Neira, the inventor of the CAVE herself!).

This middleware is a convenient way to deploy graphics applications across a wide range of VR systems and configurations.

It takes care of intra-cluster communication, camera orientation and frustum settings, I/O device management, and so on.

Making things work

Don’t get me wrong: the RC2K app basically works, it’s mathematically correct and it’s free. You can’t complain under those conditions, and I’m grateful to its author for it. But the source is not very well commented or structured (lots of global vars, almost no use of structs, …). Besides, a lot of comments and symbols were written in French, which I don’t speak, so I had to figure out quite a few things; still, it was pretty straightforward.

The first step was to encapsulate the app into a C++ class derived from some VRJuggler stuff.

Then I discovered that the physics (dynamics) loop was tightly coupled to the rendering loop. In fact, they were the same! As an immediate result, the coaster physics were not very stable.

The physics thread

The solution was to spawn a thread in charge of the physics. I isolated the variables involved in the physics calculation, and the thread just performs a simulation step at a constant rate. I defined a double buffer for the dynamic state, protected with a mutex.
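
The structure was roughly the following (a modernized sketch using std::thread; the original used whatever VRJuggler and Win32 provided):

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

// Sketch: fixed-rate physics thread writing into a double-buffered dynamic
// state; the render/sync loop only ever reads a copy of the front buffer.
struct DynamicState { double trackPosition, speed; };

class PhysicsLoop {
public:
    void start()
    {
        running_ = true;
        worker_ = std::thread([this] {
            const auto period = std::chrono::milliseconds(10);   // 100 Hz
            while (running_) {
                back_ = simulateStep(back_, 0.01);
                {   // publish: copy the back buffer into the front under the mutex
                    std::lock_guard<std::mutex> lock(mutex_);
                    front_ = back_;
                }
                std::this_thread::sleep_for(period);
            }
        });
    }
    void stop() { running_ = false; worker_.join(); }

    DynamicState snapshot() const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return front_;                 // per-frame copy for rendering/broadcast
    }

private:
    static DynamicState simulateStep(DynamicState s, double dt)
    {
        s.trackPosition += s.speed * dt;   // placeholder physics step
        return s;
    }

    mutable std::mutex mutex_;
    DynamicState front_{}, back_{};
    std::atomic<bool> running_{false};
    std::thread worker_;
};
```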

Master/Slaves model

In order to fit the architecture of our cluster I defined the master machine as the simulation controller. It only calculates the next physics step, and draws a view in an OpenGL window for the operator’s delight 😀

For each frame, VRJuggler takes the front physics buffer and broadcasts it to the slaves, which are the 4 other machines in charge of rendering the C.A.V.E walls.

The slave machines basically get the data and draw their corresponding views (both left and right stereoscopic fields).

An XML config file allows VRJuggler to apply different camera configurations on a per-machine basis (based on the machine host name).

So the .EXE and .XML files are the same for all the machines and VRJuggler takes care of the rest (windows setup, stereo camera calculations, …).

It may look simple; however, nobody knows how much pain I went through to get that working :-p

For execution I set up a shared folder on the master machine with read access for all the slaves. I tried to launch them all via RPC, but after hours of research I gave up and ended up using a simple TCP-based remote launcher a co-worker had made.

Feeling accelerated?

The platform was for sure the most exciting part. Here’s how I did it.

The idea with mobile platforms is to simulate accelerations and decelerations by tilting the platform. This changes your center of gravity and tricks your inner ear into thinking that you’re on the go.

The best way of dealing with these platforms is the classical washout filter, mostly used in flight simulators. To put it in simple words, it’s a set of filters that transforms the vehicle’s accelerations into motion cues that can be fed directly to the platform. It also does tilt coordination, whose objective is to reorient the gravity vector so the rider feels a sustained acceleration while the visual display remains the same.
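
The tilt-coordination part on its own is easy to picture: a sustained forward acceleration a is faked by pitching the cabin so that gravity supplies it, i.e. θ = asin(a/g), with the pitch rate limited below what the inner ear can detect. A sketch with made-up thresholds (the real washout filter adds high-pass and low-pass stages on top of this):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of tilt coordination: map a sustained longitudinal acceleration to a
// pitch angle (so gravity supplies the "push"), rate-limited so the rotation
// itself goes unnoticed. All thresholds here are illustrative.
double tiltForAcceleration(double accel,         // sustained longitudinal accel, m/s^2
                           double currentPitch,  // current cabin pitch, rad
                           double dt)            // time step, s
{
    const double g   = 9.81;
    const double deg = 3.14159265358979323846 / 180.0;
    const double maxPitch     = 20.0 * deg;      // platform mechanical limit
    const double maxPitchRate =  3.0 * deg;      // rad/s, below what riders notice

    // Pitch angle whose gravity component equals the requested acceleration.
    const double ratio  = std::max(-1.0, std::min(1.0, accel / g));
    const double target = std::max(-maxPitch, std::min(maxPitch, std::asin(ratio)));

    // Move towards the target no faster than the rate limit.
    const double maxStep = maxPitchRate * dt;
    const double delta   = std::max(-maxStep, std::min(maxStep, target - currentPitch));
    return currentPitch + delta;
}
```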

We had this filter, and the library that sends packets to the platform, implemented in old Borland C++, which I had to port to Visual Studio. Once the port was done I had to adjust the filter thresholds (stored in plain text files) using a software ‘platform simulator’, and then fine-tune them on the real thing to my taste.

The filter takes the angular velocities and the specific forces of the coaster train as input. For the angular velocities, someone pointed me to the Darboux vector.

Now that all the explanations are done, let’s watch it in action!

Result

Conclusions

As you can see in the video, the platform doesn’t rock you that hard (I wish it did :-D). I would have to further adjust the washout thresholds to make it a tougher ride.

It would also have been cool to simulate the track shaking and to have better graphics (I didn’t rewrite the OpenGL part except for a few details, so that’s pure RC2K graphics).

It took me 1 and a half weeks.

Of course, the platform has mechanical limits and it can’t reproduce a 360º loop, cobra rolls or corkscrews, but it does its best.

I wish I had implemented a run counter, since it quickly became one of our most appreciated demos and a mandatory stop for visitors. I have spent hours running the demo for large groups of visitors from other universities and elsewhere.

Even the wife of the university’s rector had a ride at a special event last year! Amazing.

Special thanks

Props to Ignacio Garcia for his advice and to M.A. Gamón for his support on the Washout filter.