
Microsoft Azure IoT

The system in schematic form:
azure-iot-nodemcu

The NodeMCU device used:
nodemcu
We have been playing with Microsoft Azure cloud and Internet-of-Things (IoT) data collection, analysis, and reporting technologies.

Quite quickly we ended up with a running demo that could send data from an Internet-connected device, have Microsoft Azure collect, process, and analyze that data, and finally visualize the processed data on a web-based real-time dashboard.

Azure IoT Suite

Azure IoT Suite is a collection of technologies targeting Internet of Things devices and the communication between Azure and the IoT devices.

Here is an excerpt from the Microsoft Azure IoT Suite description:

  • Get started quickly with preconfigured solutions
  • Tailor preconfigured solutions to meet your needs
  • Enhance the security of your IoT solutions
  • Support a broad set of operating systems and protocols
  • Easily connect millions of devices
  • Analyze and visualize large quantities of operational data
  • Integrate with your existing systems and applications
  • Scale from proof of concept to broad deployment

We can recommend these videos for quickly getting an overview of the Azure IoT Suite technologies:

Introducing the Microsoft Azure IoT Suite
(video on azure.microsoft.com)
Introduction to IoT Suite and IoT Hub for developers
(video on azure.microsoft.com)


Walkthrough of the demo based on the system schematic

Here follows a description of the demonstration system we set up. Start by studying the system schematic image. The sections below cover each of the elements in that image, from left to right.

NodeMCU

For the experiment we used the NodeMCU embedded system. The NodeMCU devices stand out by being very small (approx. 5 x 3 cm), easy to program (via the Arduino IDE tool), and by having both built-in GPIO pins and WiFi functionality.

Only a single NodeMCU device was used, but you could easily use many more against Microsoft Azure, e.g. in a production scenario.

This guide describes how to set up the NodeMCU to work against Azure IoT.

Azure Function

The first thing we did was set up an Azure Function as an HTTP WebHook triggered function, i.e. a function that can be called via an ordinary HTTP request against the function's dedicated URL at Azure, with a number of parameters. This shows how an IoT device can call into e.g. a web service, or, as here, directly into an Azure Function, and thereby trigger the execution of a custom piece of business code. This could, for example, result in an e-mail being sent, a particular database row being written, an error-log line being generated, and much more.
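To make the mechanics concrete, here is a minimal Python sketch of the kind of HTTP call a device (or any client) could make against such a function. The function URL, the `code` key placeholder, and the payload fields are hypothetical, not the actual ones from our demo:

```python
import json
import urllib.request

def build_webhook_request(function_url: str, payload: dict) -> urllib.request.Request:
    """Build an HTTP POST request against an Azure Function HTTP trigger.

    The function key is typically passed as a 'code' query parameter on the URL.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        function_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical function URL and alert payload
req = build_webhook_request(
    "https://my-iot-functions.azurewebsites.net/api/DeviceAlert?code=<function-key>",
    {"Dev": "nodemcu-hinnerup", "Alert": "TemperatureHigh", "Celsius": 42.5},
)
# urllib.request.urlopen(req) would actually send the request
```

The same pattern works from any HTTP-capable client, including the NodeMCU's own HTTP library on the device itself.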

Azure IoT Hub

The NodeMCU device is programmed to generate random data every ten seconds. These data are carried in via an Azure IoT Hub. This endpoint acts as the contact surface between the IoT devices and the Azure cloud, and also in the other direction, from Azure to the IoT devices (so-called bi-directional communication). The Azure IoT Hub automatically scales up to many millions of concurrent IoT devices.

Example of JSON data sent from the device to Azure:

{
    "Dev":"nodemcu-hinnerup",
    "Utc":"2016-10-15T22:29:33",
    "Celsius":25.00, "Humidity":50.00, "hPa":1012, "Light":0,
    "WiFi":1, "Mem":21416, "Id":274,
    "Geo":"Aarhus"
}
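For illustration, a message of this shape can be put together in a few lines of Python. The field names are taken from the example above; the fixed `WiFi`, `Mem`, and `Id` values are just sample placeholders, and the value ranges are assumptions:

```python
import json
import random
from datetime import datetime, timezone

def make_telemetry(device_id: str, geo: str, msg_id: int) -> str:
    """Build a telemetry JSON message with the same fields as the demo device."""
    return json.dumps({
        "Dev": device_id,
        "Utc": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
        "Celsius": round(random.uniform(20.0, 30.0), 2),   # randomized, like the demo device
        "Humidity": round(random.uniform(40.0, 60.0), 2),
        "hPa": random.randint(990, 1030),
        "Light": random.randint(0, 1),
        "WiFi": 1,        # WiFi connect attempts (sample value)
        "Mem": 21416,     # free memory in bytes (sample value)
        "Id": msg_id,
        "Geo": geo,
    })

msg = make_telemetry("nodemcu-hinnerup", "Aarhus", 274)
```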

Azure Event Hub (input)

From here, the data flows on to a so-called Azure Event Hub. The Azure Event Hub is an intermediate queue that can handle millions of event messages per second. In this way data is collected and passed on at the pace that the rest of the processing pipeline in the system can keep up with. These input data are named “eventhub-hinnerup-input” in the T-SQL code below.

Azure Stream Analytics

The only data pipeline we used is Azure Stream Analytics. Here the data is collected in real time, and transformation, calculations, and analysis are performed according to a set of business rules. These are implemented in T-SQL targeted at the Stream Analytics domain.

A number of special commands are available that you will not find in standard T-SQL, among them the “TumblingWindow” grouping that we made use of:

-- Input data transformation, calculations and analysis
WITH ProcessedData as (
    SELECT
        -- Telemetry device data
        MAX(Celsius) MaxTemperature,
        MIN(Celsius) MinTemperature,
        AVG(Celsius) AvgTemperature,
        MAX(Humidity) MaxHumidity,
        MIN(Humidity) MinHumidity,
        AVG(Humidity) AvgHumidity,
        MAX(hPa/100) MaxPressure,
        MIN(hPa/100) MinPressure,
        AVG(hPa/100) AvgPressure,
        -- Telemetry monitoring metrics
        MAX(WiFi) WiFiConnectAttempts,
        MAX(Mem) FreeMem,
        -- Telemetry device info
        location,
        deviceId,
        -- Time stamp
        System.Timestamp AS Timestamp
    FROM
        [eventhub-hinnerup-input]
    GROUP BY
        TumblingWindow (second, 60),
        deviceId,
        location
)
-- Output data
SELECT * INTO [eventhub-hinnerup-output] FROM ProcessedData
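To build intuition for what TumblingWindow does, here is a rough Python equivalent of the query above, simplified to temperature only: the stream is chopped into fixed, non-overlapping 60-second buckets, and min/max/avg are computed per device per bucket. Event times are plain epoch seconds here; the real Stream Analytics job uses the event timestamps:

```python
from collections import defaultdict

def tumbling_window(events, window_seconds=60):
    """Group (timestamp, device_id, celsius) events into fixed windows
    and compute min/max/avg per (window, device), like the T-SQL above."""
    buckets = defaultdict(list)
    for ts, device_id, celsius in events:
        window_start = (ts // window_seconds) * window_seconds
        buckets[(window_start, device_id)].append(celsius)
    return {
        key: {
            "MinTemperature": min(vals),
            "MaxTemperature": max(vals),
            "AvgTemperature": sum(vals) / len(vals),
        }
        for key, vals in buckets.items()
    }

events = [
    (0,  "nodemcu-hinnerup", 24.0),
    (10, "nodemcu-hinnerup", 26.0),
    (65, "nodemcu-hinnerup", 30.0),  # falls in the next 60 s window
]
result = tumbling_window(events)
```

Unlike hopping or sliding windows, tumbling windows never overlap, so each event contributes to exactly one output row per device.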

Azure Event Hub (output)

The processed data now flows out via an Azure Event Hub output (“eventhub-hinnerup-output” in the T-SQL code above). This data can be exposed via e.g. WebSockets, which is what we chose. There are many other options; for instance, the data could also be exposed via an Azure API App (so that an external system, such as a web service, could access it as well). Several output forms can be selected at once.

Azure Web App

We found a template for a real-time dashboard that can consume Azure IoT data via WebSockets. The template can be downloaded from GitHub here. We deployed this website to an Azure Web App and were thus up and running in record time with an automatically updating real-time dashboard showing the collected data.

azure-iot-realtime-chart

Admittedly, we only have a single device attached to the system, but data from all attached devices (for which you can specify further criteria in the Azure portal) would have been shown as well.

Wrapping up

Here you can see the overall Azure portal overview that we ended up with:

azure-portal-dashboard

One could easily have added storage, e.g. an MS SQL database and/or a data warehouse, and from there built further with Azure HDInsight big-data analysis, and further still with artificial intelligence processing. One could also have connected other devices to the website (mobile browsers, for instance), just as the processed data could flow into an external system via e.g. an Azure API App solution. However, that would go well beyond the intended scope of this demonstration. Admittedly, being the geeks we are, it was very hard to cut things out.

For enterprise and production use, we recommend taking a look at the Microsoft Power BI tool for real-time dashboard visualizations and further data analysis purposes.

In closing, all that remains to say is that this was fun and relatively easy to play with, and we hope you found the article interesting.

 

Deep Learning

download article

To dig even deeper into deep learning, please have a look at the technical report I wrote on my findings (PDF document).

I have had the pleasure of diving into the deep waters of deep learning and learned to swim around.

Deep learning is a topic in the field of artificial intelligence (AI) and is a relatively new research area, although it is based on the popular artificial neural networks that supposedly mirror brain function. With the development of the perceptron in the 1950s and 1960s by Frank Rosenblatt, research began on artificial neural networks. To further mimic the architectural depth of the brain, researchers wanted to train a deep multi-layer neural network – this, however, did not happen until Geoffrey Hinton in 2006 introduced Deep Belief Networks.

Recently, the topic of deep learning has gained public interest. Large web companies such as Google and Facebook have focused research efforts on AI and an ever increasing amount of compute power, which has led to researchers finally being able to produce results that are of interest to the general public. In July 2012 Google trained a deep learning network on YouTube videos, with the remarkable result that the network learned to recognize humans as well as cats, and in January this year Google successfully used deep learning on Street View images to automatically recognize house numbers with an accuracy comparable to that of a human operator. In March this year Facebook announced their DeepFace algorithm, which is able to match faces in photos with Facebook users almost as accurately as a human can.

To get some hands-on experience I set up a Deep Belief Network using the Python library Theano, and by showing it examples of human faces I managed to teach it their features, such that it could generate new and previously unseen samples of human faces.
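The building block of a Deep Belief Network is the Restricted Boltzmann Machine (RBM); a DBN stacks several RBMs and trains them greedily, one layer at a time. As a rough, simplified illustration (in NumPy, not my actual Theano code), one contrastive-divergence (CD-1) training step for a binary RBM looks roughly like this; all sizes and hyperparameters are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, lr=0.1):
    """One CD-1 update for a binary RBM: propagate up, reconstruct down,
    propagate up again, then nudge the weights toward the data statistics
    and away from the reconstruction statistics."""
    h0 = sigmoid(v0 @ W + b_h)                      # hidden unit probabilities
    h0_sample = (rng.random(h0.shape) < h0) * 1.0   # sample binary hidden states
    v1 = sigmoid(h0_sample @ W.T + b_v)             # reconstructed visible units
    h1 = sigmoid(v1 @ W + b_h)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
    b_h += lr * (h0 - h1).mean(axis=0)
    b_v += lr * (v0 - v1).mean(axis=0)
    return ((v0 - v1) ** 2).mean()                  # reconstruction error

# Toy data: 20 binary "images" with 64 pixels each, 16 hidden units
v0 = (rng.random((20, 64)) < 0.3) * 1.0
W = rng.normal(0, 0.01, (64, 16))
b_h = np.zeros(16)
b_v = np.zeros(64)
errors = [cd1_step(v0, W, b_h, b_v) for _ in range(50)]
```

Once one RBM has been trained, its hidden activations become the training data for the next layer; sampling the stacked model top-down is what produces the generated faces shown below.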

The ORL Database of Faces contains 400 images of the following kind:

facesORL

By training on these images, the Deep Belief Network generated these examples of what it believes a face looks like:

facesORLsamples-2

The variation in the head position and facial expressions of the dataset makes the sampled faces a bit blurry, so I wanted to try out a more uniform dataset.

The Extended Yale Face Database B consists of images like the following

facesYALE

and in the cropped version we have 2414 images that are uniformly cropped to include just the faces.
Training the Deep Belief Network with this dataset, it generated these never before seen images that actually look like human faces. In other words, these images are entirely computer generated, as a result of the deep learning algorithm. Based only on the input images, the algorithm has learned how to “draw” the human faces below:

facesYALEsamples-2

 

Hinnerup Net A/S presentation at NVIDIA GTC 2012

 Photo by NVIDIA (flickr.com)

At the NVIDIA GPU Technology Conference in San Jose, California, 13th to 17th of May 2012, Michael S. Fosgerau from Hinnerup Net A/S and Henrik Høj Madsen from LEGO presented a session about a number of development challenges and learnings from developing a scalable and distributed GPGPU 3D optimization and rendering system (link no longer works – new link here) for LEGO.

The system consists of 16 blade servers of the type NVIDIA Quadro Plex 2200 S4, each containing four NVIDIA Quadro FX 5800 GPUs, summing up to a total of:

  • 64 Quadro FX 5800 GPUs
  • 256 GB GPU memory
  • 15360 CUDA cores

We participated in the full conference and have returned home with new ideas, inspiration, and knowledge for future projects within the field of GPGPU and massively parallel computing.

Graphics cards are not just for games and fancy effects. Vast computational power hides away in many modern GPU cards. During the conference, NVIDIA launched their new flagship model, the GTX 690 card, which contains two Kepler GPUs with a combined computational power of 2 x 2,810.88 GFLOPS (FMA), or approx. 5.6 TFLOPS. As recently as around the year 2000, that was the kind of performance you needed a supercomputer to achieve. You can see the entire keynote by NVIDIA’s CEO Jen-Hsun Huang here: Opening day keynote, GTC 2012.

A detailed article about the presentation can be read on NVIDIA’s blog.

Slides from the presentation will be made public on the NVIDIA website. When this happens, we will update this blog post with links to the slides.