Deep Learning


To dig even deeper into deep learning, please have a look at the technical report I wrote on my findings (PDF document).

I have had the pleasure of diving into the deep waters of deep learning and learned to swim around.

Deep learning is a topic in the field of artificial intelligence (AI) and a relatively new research area, although it builds on the well-known artificial neural networks that are loosely inspired by how the brain works. Research on artificial neural networks began with Frank Rosenblatt's development of the perceptron in the 1950s and 1960s. To further mimic the architectural depth of the brain, researchers wanted to train deep multi-layer neural networks; this, however, did not succeed until Geoffrey Hinton introduced Deep Belief Networks in 2006.

Recently, the topic of deep learning has gained public interest. Large web companies such as Google and Facebook have focused research efforts on AI and ever-increasing amounts of computing power, which has finally allowed researchers to produce results that are of interest to the general public. In July 2012 Google trained a deep learning network on YouTube videos, with the remarkable result that the network learned to recognize humans as well as cats, and in January this year Google successfully used deep learning on Street View images to automatically recognize house numbers with an accuracy comparable to that of a human operator. In March this year Facebook announced their DeepFace algorithm, which is able to match faces in photos with Facebook users almost as accurately as a human can.

To get some hands-on experience I set up a Deep Belief Network using the Python library Theano and, by showing it examples of human faces, taught it their features well enough that it could generate new, previously unseen samples of human faces.
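
The full details are in the technical report mentioned above; as a rough illustration of the building block involved, here is a minimal sketch in Theano of a single restricted Boltzmann machine (RBM) layer trained with one step of contrastive divergence (CD-1), the layer type that a Deep Belief Network stacks. The image size, number of hidden units and learning rate below are made-up values for illustration, not the settings I actually used.

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

# Illustrative sizes only: 32x32 grayscale face crops, 500 hidden units.
n_visible, n_hidden = 32 * 32, 500
rng = np.random.RandomState(1234)
theano_rng = RandomStreams(rng.randint(2 ** 30))

# Parameters of one RBM layer (a Deep Belief Network stacks several of these).
W = theano.shared(np.asarray(rng.normal(0, 0.01, (n_visible, n_hidden)),
                             dtype=theano.config.floatX), name='W')
b_h = theano.shared(np.zeros(n_hidden, dtype=theano.config.floatX), name='b_h')
b_v = theano.shared(np.zeros(n_visible, dtype=theano.config.floatX), name='b_v')

v0 = T.matrix('v0')  # a mini-batch of flattened face images, values in [0, 1]

# One Gibbs step for contrastive divergence (CD-1).
h0_mean = T.nnet.sigmoid(T.dot(v0, W) + b_h)
h0_sample = theano_rng.binomial(size=h0_mean.shape, n=1, p=h0_mean,
                                dtype=theano.config.floatX)
v1_mean = T.nnet.sigmoid(T.dot(h0_sample, W.T) + b_v)
h1_mean = T.nnet.sigmoid(T.dot(v1_mean, W) + b_h)

lr = np.asarray(0.1, dtype=theano.config.floatX)
batch_size = T.cast(v0.shape[0], theano.config.floatX)

# Update rule: positive-phase statistics minus negative-phase statistics.
updates = [
    (W, W + lr * (T.dot(v0.T, h0_mean) - T.dot(v1_mean.T, h1_mean)) / batch_size),
    (b_v, b_v + lr * T.mean(v0 - v1_mean, axis=0)),
    (b_h, b_h + lr * T.mean(h0_mean - h1_mean, axis=0)),
]

# Compiled training step; returns the reconstruction error for monitoring.
train_step = theano.function([v0], T.mean(T.sqr(v0 - v1_mean)), updates=updates)

Calling train_step repeatedly on mini-batches of face images trains the layer; a full Deep Belief Network trains one such layer at a time, feeding each layer's hidden activations to the next.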

The ORL Database of Faces contains 400 images of the following kind:

[Image: example faces from the ORL Database of Faces]
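
As a side note, loading such a dataset for training is straightforward; the sketch below shows one hypothetical way to do it, assuming the standard ORL folder layout (folders s1 to s40 with ten 92x112 PGM images each) and downscaling to 32x32 to match the illustrative sizes above. The path and image size are assumptions, not necessarily what I used.

import glob
import numpy as np
from PIL import Image

# Assumed local path; the ORL archive unpacks into one folder per subject.
paths = sorted(glob.glob('orl_faces/s*/*.pgm'))

faces = []
for path in paths:
    img = Image.open(path).convert('L').resize((32, 32))  # grayscale, downscale
    faces.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)

X = np.stack(faces)  # shape (400, 1024): one flattened face per row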

By training on these images, the Deep Belief Network generated these examples of what it believes a face looks like:

[Image: face samples generated by the Deep Belief Network trained on the ORL dataset]

The variation in head position and facial expression in the dataset makes the sampled faces a bit blurry, so I wanted to try a more uniform dataset.

The Extended Yale Face Database B consists of images like the following:

[Image: example faces from the Extended Yale Face Database B]

and in the cropped version we have 2414 images that are uniformly cropped to include just the faces.
When trained on this dataset, the Deep Belief Network generated these never-before-seen images that actually look like human faces. In other words: these images are entirely computer generated, as a result of the deep learning algorithm. Based only on the input images, the algorithm has learned how to “draw” the human faces below:

[Image: face samples generated by the Deep Belief Network trained on the Extended Yale Face Database B]
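
To give an idea of how such images can be produced, the sketch below continues from the RBM snippet earlier (reusing W, b_v, b_h, theano_rng and n_hidden): start the hidden units in a random state, run the alternating Gibbs chain for a while, and read the visible probabilities out as an image. In the actual Deep Belief Network the sampling happens in the top layer and is then propagated down through the lower layers; this single-layer version, with an arbitrary chain length, is only meant to show the principle.

# Compile one downward-upward Gibbs step: hidden -> visible -> hidden.
h = T.matrix('h')
v_mean = T.nnet.sigmoid(T.dot(h, W.T) + b_v)
h_mean = T.nnet.sigmoid(T.dot(v_mean, W) + b_h)
h_next = theano_rng.binomial(size=h_mean.shape, n=1, p=h_mean,
                             dtype=theano.config.floatX)
gibbs_step = theano.function([h], [v_mean, h_next])

# Start from random hidden activity and let the chain burn in.
h_state = np.random.binomial(1, 0.5, (1, n_hidden)).astype(theano.config.floatX)
for _ in range(1000):
    face, h_state = gibbs_step(h_state)

face_image = face.reshape(32, 32)  # one generated "face", values in [0, 1]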

Interns: Oculus Rift development challenge

[Image: Oculus VR logo]


During our internship at Hinnerup Net A/S we got a three-day challenge: try out the Oculus Rift SDK and see what we could come up with in such a short amount of time. Anything would be acceptable, as long as the Oculus Rift device was used and a working demo could be presented on it.

You might think something like: “What? Only three days?! With no prior knowledge of 3D programming and/or VR? With no experience with the Oculus Rift? Forget it! It simply can’t be done!” Well, as we will demonstrate, it is far easier than you might expect to get up and running with the Oculus Rift device.

The first thing we did was head to the Oculus Rift homepage and download the Software Development Kit. Then we plugged in the Rift and started calibrating the unit. After putting the Oculus Rift device on and spinning blindly around, we ran into a minor issue: the software kept proclaiming that we weren’t spinning enough. After a few extra tries with the same error, and getting mildly motion-sick, we decided that calibration probably wasn’t essential since the Rift already delivered a satisfactory picture, so we moved on.

We then took a look at how we could make our own “demo” for the Oculus Rift. From the Oculus Developer pages we saw that it supports both Unity and the Unreal Engine. Based on what we had heard about Unity, and our existing curiosity about it, we decided to go the Unity route and see what would happen. We downloaded the Rift SDK for Unity and accepted the offer of a one-month free trial of Unity Pro. The SDK came with a Unity demo project to test out how the integration worked.



After playing around in the Unity SDK for some time, we decided to download a Unity project that did not have Rift support, with the goal of adding Oculus Rift support to it. Using the supplied Unity package, this was pretty straightforward: all we had to do was either replace the standard Unity camera with the supplied Rift camera, or replace the player controller with a Rift controller, depending on the type of project. This was the first time we had worked with Unity, so it took some time to familiarize ourselves with the tools and the API.


We decided to download a demo called “Bootcamp” from the Unity Asset Store and modify it to support the Oculus Rift. We removed the player controller and replaced it with an “OVR player controller”, which was fairly easy to accomplish. After that we thought it could be fun to place a spaceship in the game, so we downloaded a spaceship model and added it to the project.


Scene before:
[Image: the Bootcamp scene without the spaceship]
Scene after:
[Image: the Bootcamp scene with the spaceship]


With that done, we wanted to try building something from scratch, so we inserted some terrain and blocked the edges with mountains. Shamelessly using parts of “Project: Space Shooter” from Unity’s Learn section, we added visible shots using a texture and one of the built-in shaders. To give the Rift controller the ability to shoot, we had to go into the scripts and make some minor modifications to the code.


[Image: excerpt of the modified Unity script]

The Update method is called on each active object for every frame generated. Lines 190 to 194 are what we had to add to let the player shoot. nextShotTime, fireRate, shot and shotSpawn are fields we had to add. Only shot and shotSpawn are of real interest: shot is the actual GameObject that gets spawned on line 192, and shotSpawn is an empty GameObject that we attached to the camera with a relative position. That means the spawner follows the camera view, so shots spawn in the direction the player is looking. The Instantiate method simply creates a new shot and spawns it at the position of shotSpawn.


So, in conclusion and to sum it all up: in a period of three days we went from knowing nothing about 3D programming, Virtual Reality, the Oculus Rift or Unity, to being able to modify an existing Unity project and make it take full advantage of the immersive experience an Oculus Rift Virtual Reality display unit can offer.

Vejdirektoratet and the municipalities launch a new KommuneAtlas

Vejdirektoratet (the Danish Road Directorate) and the municipalities have jointly launched a digital KommuneAtlas with key figures about the Danish road network. The digital KommuneAtlas gives everyone interested the opportunity to create their own fact maps and extract data within selected themes.

The primary target group is employees and managers in the municipalities and Vejdirektoratet. In addition, the political level as well as citizens and road users are reached through the collaboration.

The purpose of the digital KommuneAtlas is to give a holistic picture of the road network. The new digital KommuneAtlas can be found at kommuneatlas.samkom.dk

The application is a further development of the SAMKOM publication “KommuneAtlas – kort om veje og trafik”, which was published in 2003, 2004 and 2010.

At Hinnerup Net we are proud of the result of the technical effort delivered on this occasion, and we also look forward to implementing the extensions and improvements that are already planned.