Gunnebo Business Solutions, MQTT, Protocols

MQTT for Dummies

In this article, I will be discussing one of the most trending topics in IoT. I will take you through a beginner-level tutorial on MQTT, which is currently the most used protocol in IoT projects.


MQTT stands for Message Queuing Telemetry Transport. To put MQTT in a nutshell, it is “a lightweight event- and message-oriented protocol allowing devices to asynchronously communicate efficiently across constrained networks to remote systems”. I know that this doesn’t really help much, so let’s try to decode that definition and understand what MQTT is and how to use it.

What is MQTT?

Again, for people who have no idea about MQTT: it is a protocol for machine-to-machine communication that uses a publisher-subscriber model. If you come from a programming background, you probably have some knowledge of the publisher-subscriber model already. We will discuss the publisher-subscriber model and how MQTT works later in the tutorial.

MQTT over HTTP for IoT

Before going on to discuss how MQTT works, let’s first try to understand how it came into existence. MQTT came to exist as a replacement for HTTP, because HTTP could not properly answer the challenges of IoT and M2M projects. Unlike web applications, IoT projects have some peculiar challenges. One of the main concerns is that IoT requires an event-driven paradigm. Some of the features of this paradigm are:

  • Emitting information one-to-many 
  • Listening to events whenever they happen 
  • Distributing minimal packets of data in huge volumes 
  • Pushing information over unreliable networks  

Some other challenges you face in an M2M application are:

  • Volume (cost) of data being transmitted 
  • Power consumption 
  • Responsiveness 
  • Reliable delivery over fragile connections 
  • Security and privacy 
  • Scalability

MQTT successfully copes with these challenges thanks to its features.

Why MQTT is good for M2M and IoT applications

MQTT has unique features you can hardly find in other protocols, like:

  • It’s easy to implement in software, as it is a lightweight protocol.
  • MQTT is based on a messaging technique, which makes data transmission faster than its alternatives.
  • It uses minimized data packets, which results in low network usage.
  • Low power usage, which saves the connected device’s battery.
  • Most importantly, it works in near real time, which makes it ideal for IoT applications.

We learnt earlier that MQTT works through a publisher-subscriber model. In such a system, the publisher sends its messages to a topic, and every subscriber of that topic receives the message. In MQTT, a broker handles the topics and the messaging process, while MQTT clients act as publishers and subscribers.

Components of MQTT

To learn how MQTT works, we have to understand some concepts in MQTT. The fundamental components of the MQTT protocol are explained below.

Broker

The broker is a server that handles the communication and data transmission between the clients. It is responsible for the distribution, management and storage of data sent and retrieved by the clients. The broker acts like a centralized hub that regulates the message exchange. 

If the broker breaks down, the whole communication process breaks down, as there is no way for the clients to communicate with each other directly. Therefore, the broker bridging mechanism was introduced to prevent such cases and build a fail-safe broker network.

There are a number of broker applications available on the internet, including the popular Mosquitto and HiveMQ, or you can use cloud-based brokers from cloud providers such as IBM or Azure.

Clients (Publisher, Subscriber)

These are basically the endpoints that publish and retrieve the data distributed by the broker. Each client is assigned a unique ID to identify itself and its session when connected to the broker. A client can act as a publisher, which publishes messages under a specific topic, or as a subscriber, which receives messages relevant to a topic, or do both.

Message

These are the chunks of data sent and received by the clients. Each message consists of a command and a payload section. The command part determines the message type, and there are 14 message types available in MQTT.

Topic

This is the namespace, or literally the topic, that describes what the message is about. Each message gets assigned to a topic, and clients can publish to a topic, subscribe to it, or do both. Clients can also unsubscribe from a topic if they want to. MQTT topics are just strings with a hierarchical structure.

Assume that there is a topic called “home/kitchen”. We call home and kitchen the levels of the topic, with home being a higher-level topic than kitchen. Topics can also use wildcards such as ‘+’ and ‘#’.
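
As a quick illustration (a minimal sketch using the Paho Python client introduced later in this article; the broker address and topic names are assumptions for the example), ‘+’ matches exactly one topic level while ‘#’ matches any number of trailing levels:

import paho.mqtt.client as mqtt

client = mqtt.Client("wildcardDemo")
client.connect("localhost", 1883, 60)

# "home/+/temperature" matches home/kitchen/temperature and
# home/garage/temperature, but not home/kitchen/fridge/temperature.
client.subscribe("home/+/temperature")

# "home/#" matches home/kitchen, home/kitchen/fridge/temperature, and so on.
client.subscribe("home/#")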

Publish

This is the process of a client (publisher) sending data to the broker under a topic, to be distributed among the clients (subscribers) who have requested data on the same topic.

Subscribe

This is the process of clients (subscribers) receiving data specific to a topic they have previously subscribed to, from the clients (publishers), through the broker.

QOS: Quality of Service

Each message is given an integer value from 0 to 2 to specify the delivery mode. This is known as Quality of Service. There are three QoS levels:

  • 0 (fire and forget) – the message is delivered at most once, with no acknowledgement; the fastest delivery method.
  • 1 (acknowledged) – the message is delivered at least once, resent until an acknowledgement is received.
  • 2 (synchronized) – the message is delivered exactly once; delivery is guaranteed, but comparatively slower.
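
In the Paho Python client used later in this article, the QoS level is just a parameter of publish() and subscribe(). A minimal sketch, assuming a broker on localhost and an example topic:

import paho.mqtt.client as mqtt

client = mqtt.Client("qosDemo")
client.connect("localhost", 1883, 60)

# QoS 0: fire and forget, no acknowledgement
client.publish("home/kitchen/temperature", "21.5", qos=0)

# QoS 1: resent until the broker acknowledges receipt
client.publish("home/kitchen/temperature", "21.5", qos=1)

# QoS 2: exactly-once delivery via a four-packet handshake
client.publish("home/kitchen/temperature", "21.5", qos=2)

# Subscriptions also specify the maximum QoS they accept
client.subscribe("home/kitchen/temperature", qos=1)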

Practical use of MQTT

It’s time to do some practical things here and get used to dealing with the MQTT protocol. There are MQTT clients developed for most programming languages. I will use the Paho Python MQTT client, as I am a fan of Python and it is probably the best MQTT client out there.


First, you need a broker to create an application with MQTT. One of the most popular MQTT brokers is Mosquitto. You can install it with the following command. 

sudo apt-get install mosquitto

We set it up to work on our localhost. By default, Mosquitto listens on port 1883.
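
You can quickly verify the broker from two terminals with the mosquitto_sub and mosquitto_pub command-line tools (shipped separately in the mosquitto-clients package on Debian/Ubuntu); the topic name here is just an example:

sudo apt-get install mosquitto-clients
mosquitto_sub -h localhost -t "TopicLevel1/test"            # terminal 1: listen
mosquitto_pub -h localhost -t "TopicLevel1/test" -m "hello" # terminal 2: publish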

Next, install the Paho MQTT client with the pip command.

pip install paho-mqtt

This command installs the Paho Python MQTT client library on your machine. The core of the client library is the Client class, which provides all of the functions to publish messages and subscribe to topics.

There are several important methods in Paho MQTT client class which you should know:

  • connect()
  • disconnect()
  • subscribe()
  • unsubscribe()
  • publish()

Each of these methods is associated with a callback.
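
For instance, the standard Paho callbacks on_publish and on_disconnect can be attached like this (a minimal sketch; the client ID is made up):

import paho.mqtt.client as mqtt

def on_publish(client, userdata, mid):
    # Fires when a message handed to publish() has been sent
    print("published message", mid)

def on_disconnect(client, userdata, rc):
    # Fires when the connection to the broker is closed
    print("disconnected with code", rc)

client = mqtt.Client("callbackDemo")
client.on_publish = on_publish
client.on_disconnect = on_disconnect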

Publishing a message

One of the main tasks you do with MQTT is publishing messages. A simple program that publishes a message usually has four steps:

  • Import the paho.mqtt.client module
  • Create a client instance with the Client() constructor
  • Connect to the broker with the connect() method
  • Publish messages with the publish() method
import paho.mqtt.client as mqtt

# Create a client with a unique ID and connect to the local broker
clientName = mqtt.Client("uniqueClientId")
clientName.connect("localhost", 1883, 60)
# Publish a message to a topic, then disconnect cleanly
clientName.publish("TopicLevel1/test", "Your Message Here")
clientName.disconnect()

Most of the code is self-explanatory. First, you create an instance of an MQTT client. Then you connect to the broker running on localhost. The client then publishes its message on the “TopicLevel1/test” topic. After that, it disconnects from the broker.

Subscribing to a topic

You know that MQTT is not a one-to-one messaging protocol, as it connects many devices. The trick here is that every message from a device is assigned to a topic, and any device subscribed to that topic will receive the message.

You can subscribe to a topic with the subscribe() method of the Client class. Subscribing to a topic has the same steps as publishing messages. I am not going to repeat them, as you can easily identify these steps in the code.

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe once the broker has acknowledged the connection
    client.subscribe("topic/test")

def on_message(client, userdata, msg):
    # Disconnect when a "Disconnect!" payload arrives
    if msg.payload.decode() == "Disconnect!":
        client.disconnect()

subscriber = mqtt.Client("subscriberId")
subscriber.on_connect = on_connect
subscriber.on_message = on_message
subscriber.connect("localhost", 1883, 60)

subscriber.loop_forever()

In this application, the client works as a subscriber. It subscribes to the topic on the broker, which in this case runs on localhost. Whenever it receives a message, the on_message() callback is invoked. If the received payload is “Disconnect!”, the client immediately disconnects from the broker. This is a very simple use of the subscriber role; you can implement more complicated logic using the same callback functions.

So in this article, you got a concise yet comprehensive idea of MQTT. It’s time to move on to the conclusion and recall the gist of the article.

Conclusion

MQTT is a lightweight, flexible, simple yet very efficient protocol that has a definite advantage over others when it comes to IoT and M2M solutions, considering its low bandwidth and power consumption, response time and versatility. In conclusion, it could be said that MQTT is the best protocol so far when it comes to IoT development.

If you want to know more about MQTT, you can check the links below, and if you have any questions, please reach out to me: bjorn.nostdahl@gunnebo.com

MQTT and ActiveMQ on RPI

MQTT for PIC Microcontrollers


Gunnebo Business Solutions, IBM International Business Machines, Node RED

Node-RED on SIMATIC IoT 2040

With the high pace at which the technology industry is moving, many different fields and areas have become hot zones. These innovations motivate researchers to create and develop better, more helpful devices and technologies. However, the more we advance, the more complicated and sophisticated technology gets. This is observed even more in hardware development, as the number of components used keeps increasing year by year to keep up with demand.

SIMATIC IOT2040 By Siemens

A leading company in innovation and development is Siemens, based in Germany. Siemens specializes in technologies that impact the industry, energy, healthcare, and infrastructure & cities fields. With their many powerful, groundbreaking products in the market, Siemens has taken an interest in the IoT field, releasing their SIMATIC IOT2000 series. This series targets the industrial field, allowing different machines to analyse and utilize data sources from all around the globe.


Current issues include weak communication with overseas machinery, due to differences in languages and source codes. The SIMATIC IOT2040 is the up-to-date version in the SIMATIC series. This version includes the following:

  • Energy-saving processor, with many compatible interfaces including: Intel Quark x1020 (+Secure Boot), 1 GB RAM, 2 Ethernet ports, 2 x RS232/485 interfaces, battery-backed RTC.
  • Supports Yocto Linux.
  • Arduino shields and miniPCIe cards can be used for expansions.
  • Programming with high-level languages
  • Compact design and DIN rail mounting
  • Proven SIMATIC quality offers great ruggedness, reliability and longevity

This version in particular is worth mentioning due to its ability to be used with many different hardware and solutions. This product is mostly used with different other add-ons, which help deliver the target efficiently.

Setup: SIMATIC IOT2040

A very common setup for the SIMATIC IOT2040 is booting it from a microSD card (minimum capacity of 16 GB). Many of the previously stated features of the series can be used this way, making the experience much better. The following is a tutorial guide explaining how to successfully prepare and install the SIMATIC IOT2040:

Preparation:

  1. Download the image from the Siemens support site: https://support.automation.siemens.com
  2. Format your SD card and flash the image onto it with a disk imager.
  3. You can now safely insert the SD card into the SIMATIC IOT2040.
  4. Connect the device to your computer/laptop using an Ethernet cable.
  5. Adjust your computer’s Ethernet IP address to one in the same range as 192.168.200.1, the static IP of the IOT2040 (subnet mask 255.255.255.0).
  6. SSH into 192.168.200.1 as user root and set a password for root, as sketched below.
  7. Now you are ready to run the installation for the SIMATIC IOT2040 successfully.
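
Steps 5 and 6 look roughly like this from a Linux/macOS terminal (a sketch; on the stock image, root initially has no password, and passwd sets one):

ssh root@192.168.200.1
passwd    # set a password for root on first login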

Installation:

  1. Edit the file /etc/opkg/base-feeds.conf and add the required opkg feed lines.
  2. Now run opkg update and install git directly, as shown below.
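
Assuming the feeds in /etc/opkg/base-feeds.conf are configured, step 2 boils down to the following commands:

opkg update       # refresh package lists from the configured feeds
opkg install git  # install git directly on the device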

Node-Red: Flow Based Solution

One of the most modern tools, considered a breakthrough, is Node-RED. Node-RED is a development tool, initially created by IBM, to wire hardware devices together with APIs and online services. The technology is flow-based, inspired by the Internet of Things (IoT). In simple terms, it is browser-based software that helps users develop different tools using flow diagrams. It was created as a means to simplify development, making it available to users with basic knowledge. The tool focuses on ease of use of software and online services, using a direct connection through the internet.


The previously stated technologies come in really handy when used together. Both tools can be used in parallel, aiming towards the same final result. Node-RED is used to ease the use of the SIMATIC through simple flow diagrams. The SIMATIC is generally a very important tool, yet too complicated for normal users. Thus, the use of Node-RED is crucial here, as you will want to control the development process and the wiring of the hardware to online services as smoothly as possible.

Setup: Node-Red

The following is a tutorial guide, explaining how to successfully prepare and install Node-Red:

  1. Through the menu named Software, go to the Manage Packages page.
  2. Set Node-RED to auto-start, together with the Mosquitto MQTT broker.
  3. This is where we integrate the SIMATIC IOT2000 with Node-RED: install the nodes for the SIMATIC.
  4. In the directory /home/root/.node-red, create a folder named nodes, where you will place the nodes installed in the previous step.
  5. Put custom nodes there if needed, for example from a Git repository, as sketched below. Dependencies and nodes published on npm can be installed directly into /home/root/.node-red.
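
A sketch of steps 4 and 5; the repository and package names are hypothetical placeholders:

mkdir -p /home/root/.node-red/nodes
cd /home/root/.node-red/nodes
git clone https://github.com/example/node-red-contrib-simatic.git   # hypothetical node repository

cd /home/root/.node-red
npm install node-red-contrib-example-node   # hypothetical npm-published node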


If you want to know more about IoT and Node-RED, feel free to contact me at: bjorn.nostdahl@gunnebo.com 🙂

Artificial Intelligence (AI), Commercial, Gunnebo Business Solutions, Gunnebo Retail Solution, Machine Learning (ML), USA

Autonomous and Frictionless Stores

Earlier this year, I visited the US for a couple of weeks, and having a genuine interest in retail technology, I visited quite a few retail stores. I went to see classical stores, but also had the chance to preview the future of retail: autonomous and frictionless stores!

Customers in this digital world don’t want to spend too much time shopping. They want everything to happen very fast, and they are looking for a seamless shopping experience all the time. That’s how the concept of frictionless stores came to exist. Frictionless stores are one of the biggest new things in consumer shopping.

Photo: Adobe Stock

What are Frictionless Stores

The concept of frictionless stores started a few years ago, and when I talk to retailers this is one of the topics that always pops up. All major brands are looking for innovative ways to create a better customer experience, and frictionless stores are one way to make that happen. These stores improve the shopping experience to the point where customers don’t have to wait at any point of shopping, such as selecting, receiving and paying for the product. Initially, frictionless stores were only about easy, low-hassle shopping. But as innovations such as mobile wallets, digital receipts, free and fast shipping, and one-click purchasing emerged and began to reshape the consumer shopping experience, the definition began to be reshaped as well. Today, a frictionless experience means more than just less hassle. It means greater speed, personalization, and wow experiences.

How Frictionless Stores work

Let’s try to understand how frictionless stores work. In frictionless stores, buyers and sellers are connected in a way that gives buyers the ability to instantly find, compare and buy the products and services they need, and customers should feel that they have full control. The concept and technology have evolved over time, and nowadays customers expect to have this experience through their smartphones. Retailers and brands are trying to find new ways of modifying the definition of frictionless stores to provide customers the best possible shopping experience; they need that commitment to stay ahead of the competition. As a result, frictionless shopping nowadays means eliminating anything that negatively impacts the customer experience.

Importance of Frictionless Stores

How has frictionless shopping fared according to research? A study by Alliance Data found that customers from all generations are looking for great service and an ideal shopping experience. This is true in all areas of the world; if a brand fails to deliver what they want, customers will find a different one. According to the research, 76 percent of consumers said they give brands only two to three chances before they stop shopping with them. Another 43 percent said their main reason for leaving a brand is a poor shopping experience. What all this means is that if customers encounter friction, they will abandon that brand fast, probably without giving it a second chance.

Amazon Go Stores

Similar to frictionless stores, Amazon introduced Amazon Go stores. What is special about Amazon Go is that you don’t have to wait for checkout, which basically means you no longer have to wait in queues. The first Amazon Go store was a grocery store of 1800 square feet. The concept spread fast; in fact, you can now see a number of Amazon Go stores in the USA.


How is this even possible? What technologies have they used? Amazon had been doing a lot of research in the areas of computer vision, sensor fusion, and deep learning, and Amazon Go is a fruitful result of that. You need the Amazon Go application to shop at Amazon Go stores. All you have to do is open your Go app, choose the products you want, take them and just leave. The application can detect when a product is picked up or returned to the shelf, remembers what you took, and lets you revisit these details in your virtual cart. When you finish shopping, you are charged and you receive a receipt for what you bought.

Buy Awesome Foods at Amazon Go Stores

You may now wonder: what can you buy there? What items are available in Amazon Go stores? I will just quote how one Amazon Go store marketed itself: “We offer all the delicious meals for breakfast, lunch or dinner. We have many fresh snack options made every day by our chefs at our local kitchens and bakeries. You can buy a range of grocery items from milk and locally made chocolates to staples like bread and artisan cheeses. Try us, you will find well-known brands you love in our shops.” By the way, don’t expect to go in there and buy books, tech, clothes or anything else that Amazon sells online. It’s basically quick-and-easy food and other groceries. It’s just that there’s no cashier.


So many people have been attracted to Amazon Go stores that it is quite evident this concept will make a huge impact on the future of retail.

If you want to know more about frictionless stores, feel free to contact me at: bjorn.nostdahl@gunnebo.com or check out these related articles:

Microchip, Microcontroller, MQTT, PIC24, Protocols

MQTT for PIC Microcontrollers

The IoT (internet of things) world is booming: in 2018 there were 23.14 billion connected devices, a number projected to reach 30.73 billion by 2020 (from statista.com).

Embedded systems are at the center of this IoT drive, smart homes, smart cars, etc. all have embedded systems as their backbone.


Microcontrollers are the drivers of embedded systems. They give devices the ability to collect data from the environment, send and receive these data and execute the needed instructions or carry out specified actions. Like turning on the heater when the temperature in the room goes below a specified level.

ARM and PIC microcontrollers are the most common microcontrollers used in embedded systems and IoT. When these devices send and receive information over a network (say, the internet), they do so using transfer and transport protocols that control the transfer process.

The hypertext transfer protocol (HTTP) is the most popular communication protocol used over the internet to send and receive data, and it is still used in most IoT applications. A more efficient protocol is the message queuing telemetry transport (MQTT) protocol, which is optimized for low connectivity and low power requirements. The MQTT protocol finds immediate application in remote locations where batteries are used and need to be conserved.

HTTP transfers data via the request-response paradigm, which requires devices to query other devices directly for data. This increases bandwidth requirements and power consumption. Since devices have to respond to requests one after the other, multiple, asynchronous and simultaneous communications cannot take place. This is a disadvantage for IoT applications where multiple devices communicate at the same time.

The MQTT protocol solves these problems.

What is the MQTT protocol?

I gave a detailed description of what MQTT is in a previous post. But for this post, I’ll reintroduce just the important points.

MQTT is a lightweight, broker-based publish/subscribe messaging protocol designed to be open, simple and easy to implement, and to optimize bandwidth and power consumption. It is a machine-to-machine (M2M) communication paradigm that allows devices to send and receive data faster and more reliably without being connected directly.

MQTT finds immediate use where the network is expensive, unreliable or of low bandwidth, as well as where the embedded devices have limited processor or memory resources.

The MQTT (message queuing telemetry transport) protocol works in direct contrast with the hypertext transfer protocol (HTTP), which is popularly used for sending data and communicating with devices over the internet.

MQTT provides for one-to-many communication and message distribution. It is unconcerned with the sender or the content of the message, and uses TCP/IP to provide network connectivity. It has a small transport overhead (a message sent with this protocol can have a header as small as 2 bytes), and features that ensure messages can still be delivered across lost connections.
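
To make that 2-byte figure concrete, here is a minimal Python sketch of the smallest MQTT 3.1.1 control packets, which consist of nothing but the fixed header (one byte for packet type and flags, one byte for the remaining length):

# The smallest MQTT control packets are just the 2-byte fixed header.
PINGREQ    = bytes([0xC0, 0x00])  # packet type 12 (PINGREQ),    remaining length 0
PINGRESP   = bytes([0xD0, 0x00])  # packet type 13 (PINGRESP),   remaining length 0
DISCONNECT = bytes([0xE0, 0x00])  # packet type 14 (DISCONNECT), remaining length 0

print(PINGREQ.hex())  # "c000" -- two bytes on the wire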

MQTT in microcontroller connectivity

Using the MQTT protocol in microcontrollers improves the efficiency of data transfer, reduces power and bandwidth requirements, and introduces asynchronous communication among devices. All of this comes in handy given the limited memory capacity of microcontrollers, the need for faster and more reliable data transfer among IoT devices, and the growing number of IoT devices in circulation as the technology reaches mainstream adoption.

This protocol guarantees faster, more power-efficient (than HTTP), low-latency communication among devices. This is because the MQTT protocol works on a publish-subscribe paradigm: there is no direct connection and communication between network devices; instead there is a middleman, called the broker.

To use the MQTT protocol for communication with your microcontroller, a broker is required to collect and dispatch data among devices. The broker (also known as the server) facilitates the publish-subscribe model, in a similar fashion to client-server models. The clients (that is, the connected devices) subscribe to virtual channels, known as topics. Other devices that want to send out information (known as a message) publish the information on a specified topic to the broker. The broker then distributes the message to all the clients that subscribe to that topic.


The broker is the core part of the MQTT operation: it is the middleman in data transfer using this protocol and stands at the center of M2M communication. It receives messages (on a particular topic) from devices connected using the protocol and aggregates them for transfer to other devices that subscribed to the topic.

This kind of communication provides continuous availability and distribution of data among devices. The advantage it provides is independence from direct M2M connections (which beset the HTTP communication system). Devices practically work and run on their own, independent of the presence or interruption of other devices. This type of connection provides real-time data, because the broker constantly publishes the received messages to the connected devices. Messages that have no subscribers are discarded, and those that have subscribers are dispatched to the devices. With this, an interruption in the connection of one device does not affect the entire network; instead, the messages sent while it was away can be retained and pushed to it when it comes back on the network. The MQTT protocol is data-centric rather than identity-centric.

The programmable intelligent computer (PIC) is a Harvard-architecture microcontroller that is regarded as an industry standard due to its robust features. It is a more sophisticated microcontroller than alternatives such as the Raspberry Pi boards, as it provides more functionality and features for embedded control.

My previous article talked about deploying the MQTT protocol on the Raspberry Pi; Gunnebo AB’s mikroPascal MQTT library brings the MQTT protocol to the PIC microcontroller.


Our MQTT library for the PIC microcontroller brings faster and better connectivity to the PIC, enabling PIC microcontrollers to communicate using the MQTT protocol. The mikroPascal library implements the MQTT protocol with QoS 0 and is built on the existing TCP/IP stack of the mikroPascal TCP/IP library (lib_enc600_v3_5), adding the MQTT layer on top of it.

The MQTT library is built as a wrapper around the TCP/IP protocol with the purpose of providing features to publish and subscribe to text messages on specific topics, by means of the MQTT protocol.

The library carries out the following core functions:

  • Establishes TCP/IP sockets,
  • Formats MQTT packets and prepares them for transmission,
  • Extracts contents from subscription messages arriving in MQTT packets,
  • Transmits MQTT packets over TCP/IP,
  • Provides test (ping) methods to check the health of the connection,
  • Provides functions for subscribing and publishing to topics, as well as unsubscribing from topics.

The library reduces RAM memory requirements and provides better performance by supplying the library functions with input parameters that are pointers to arrays.


The basic workflow of the library on the PIC microcontroller is as follows: the microcontroller reserves an address for the message data and provides pointers to this address; the MQTT library takes it from there and uses the pointer to send or receive messages on the controller.


To communicate via the MQTT protocol on your PIC project, there are some prerequisites that your project must meet.

With the mikroPascal MQTT library, we implement this lightweight protocol for the PIC microcontroller. The library can be downloaded here.

The library brings all the benefits of the MQTT protocol to PIC users, enabling them to package and send data in their IoT projects seamlessly and faster, with less memory required and wider connectivity with other devices.

The library can be downloaded here at the libstock repo; you can run a demo of the library to see how it works, and also check out our open source code on GitHub.

We welcome you to contribute to this library, and please also fork it for other microcontrollers. If you have any questions, please reach out to me: bjorn.nostdahl@gunnebo.com

Security, TLS/SSL

History of SSL/TLS Attacks and Patches

During the last few days, it has been reported that Yubico is replacing some of their physical security keys due to a firmware problem. This reminds us that IT security is evolving: bugs are found, and you need to keep up to date to keep your systems secure. My previous posts regarding SSL/TLS and x.509 have been quite popular, so here comes another security related post 🙂

When hosting a global Software as a Service platform, it is vital to be in control of Cloud Security. Cloud Security consists of a set of policies, controls, procedures and technologies that work together to protect cloud-based systems, data and infrastructure. These security measures are configured to protect data, support regulatory compliance and protect customers’ privacy as well as setting authentication rules for individual users and devices.


One way of securing these services is SSL/TLS encryption of communication. SSL was first implemented by Netscape in 1994, and this post attempts to provide a historical view of the SSL/TLS protocol as attacks and countermeasures were introduced. If one reads the current TLS v1.2 or v1.3 protocol specifications, there are many aspects of the design which do not have an obvious reason, but whose origin comes from the long list of academic research which has broken previous versions.

The birth of SSL

As SSLv1 was never released, we first mention SSLv2, which was designed and implemented by Netscape in 1995. The SSLv2 protocol is very different from later versions, but has a similar traffic flow. The client connects to a server and sends a “hello” which identifies some aspects of the client’s capabilities. The client and server negotiate which cipher they wish to use, and the client sends a random key encrypted with the server’s RSA public key, which is used to subsequently encrypt the message traffic.

The protocol quickly proved to have numerous flaws, and within a couple of years an effectively new protocol, SSLv3, was designed to replace it. SSLv2 was formally deprecated in 2011, and no modern TLS library supports it anymore.

SSL as we know it

SSLv3 is the first SSL version which is recognizably similar to modern TLS. As in SSLv2 the client connects to a server, a handshake is performed, and subsequent records are encrypted using a key that is shared using public key cryptography. However there are several essential differences.

Another key addition is that in SSLv3 it is possible to use algorithms with forward secrecy. In this mode, instead of decrypting an RSA ciphertext sent by the client, the client and server agree on a key using a Diffie-Hellman key exchange, and the server signs a message which allows the client to verify that it is performing a key exchange with the intended server. However, RSA-based key exchange was still retained, and widely used.

In SSLv3 the entire handshake is hashed together and used with the agreed keys to create two “Finished” messages which the client and server exchange on the encrypted channel. These ensure that an attacker cannot modify traffic between the client and server in such a way as to change the outcome of the handshake. For instance, if a MITM could remove all of the strong ciphersuites from a client hello message and force a downgrade to a weak cipher, the protocol could be easily attacked.

In SSLv3, messages are encrypted using either the stream cipher RC4, or else a block cipher in CBC mode. In CBC mode, the plaintext must be a multiple of the cipher’s block size (typically 8 or 16 bytes), which requires making use of a padding scheme to increase the length of messages which are not correctly sized. In SSLv3, the length of the padding is indicated with a single byte at the end of the record, and the specified number of bytes is discarded by the receiver. The value of the padding bytes is not specified.

The message is authenticated using a slight variant of HMAC (based on an early HMAC design prior to HMAC’s standardization). But, critically, in SSLv3 it is the plaintext (rather than the ciphertext) which is authenticated, and the CBC padding bytes are not authenticated at all. These errors proved to be the source of a number of serious exploits which plagued TLS for years.

eCommerce compels TLS v1.0

After a time it became clear that the SSL protocol would prove crucial for commerce on the early Internet, and eventually the development was moved to the IETF. The name ended up changing due to a political compromise between Netscape and Microsoft, who had a competing PCT protocol. However the actual TLS v1.0 specification is only slightly different from SSLv3.


The most notable changes were the replacement of the SSLv3-specific HMAC variant with the standard version, the replacement of the SSLv3-specific PRF with a new design, and a tightening of the rules for how blocks are padded. In SSLv3 the padding bytes were unspecified, while in TLS v1.0 and later versions the padding must follow a specified format. The block padding change was at the time merely a simplification, but it proved critical when the POODLE attack was developed in 2014.
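
A small Python sketch of the difference (my own illustration of the padding formats described above, for an 8-byte block cipher): in TLS v1.0+ every padding byte must equal the padding length, so a receiver can verify it, whereas an SSLv3 receiver can check nothing beyond the final length byte:

def tls_pad(data, block=8):
    # TLS v1.0+: pad_len bytes of value pad_len, then the length byte
    # (which also has value pad_len)
    pad_len = block - (len(data) + 1) % block
    return data + bytes([pad_len] * (pad_len + 1))

def tls_unpad(padded):
    pad_len = padded[-1]
    # A TLS receiver can verify every padding byte; SSLv3 could not,
    # which is exactly what POODLE later exploited.
    if padded[-(pad_len + 1):] != bytes([pad_len] * (pad_len + 1)):
        raise ValueError("bad padding")
    return padded[:-(pad_len + 1)]

assert tls_unpad(tls_pad(b"hello")) == b"hello"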

At the time the POODLE attack was developed, SSLv3 was already obsolete, but many browsers included a fallback mechanism: if the server rejected a TLS v1.0 or higher connection, the browser would subsequently try to connect using SSLv3. A man-in-the-middle attacker could intercept the TLS v1.0 connection, send an (unauthenticated) TLS alert closing the connection, and then attack the SSLv3 connection. There is no direct protocol fix for POODLE, since it is not possible to retroactively fix the padding bytes in unpatched clients. The main resolutions were the disabling or removal of SSLv3 support on both client and server sides, and the creation of the “fallback SCSV” indicator. The fallback SCSV allows a client to indicate to the server that it is performing a version fallback, which is done by including a special value in the ciphersuite list which cannot actually be negotiated but simply serves as a flag which can be understood by servers who recognize it (SCSV is short for “Signaling Cipher Suite Value”). A special ciphersuite value was chosen because in a TLS v1.0/v1.1 client hello format there is no other way of reliably indicating such information.

If a server sees a connection from a client indicating fallback, but the client is attempting to negotiate an older version than what the server supports, it closes the connection. Then, when a MITM attacker tries to force a downgrade, when the client opens the vulnerable SSLv3 connection, the server will detect the SCSV and close the connection, preventing the attack. It is not possible for the MITM to remove the SCSV, because the contents of the handshake transcript are authenticated by the Finished messages.

Browser Exploit leads to TLS v1.1

TLS v1.1, released in 2006, involves a single small patch to TLS v1.0. In TLS v1.0 and all earlier versions, the CBC state is carried across records; another way of thinking about this is that each packet is encrypted with an IV which is equal to the last ciphertext block of the previous record. Because that IV is predictable to an attacker, TLS v1.1 changed each record to carry its own fresh, explicit IV. This resolved an issue that had been identified in 2006 by a researcher. Later, in 2011, this attack was refined via use of JavaScript and dubbed BEAST, providing a practical break of TLS v1.0 and earlier when used with HTTPS.

At the time BEAST was a substantial issue because many implementations of TLS had not been updated to support TLS v1.1 or v1.2. A workaround was developed for SSLv3/TLS v1.0 connections, commonly termed 1/n-1 record splitting. Each CBC encrypted record would be split into a 1 byte record followed by a record containing the rest of the plaintext. Since the first record included a message authentication code (which could not be predicted by an attacker who does not know the session key), this serves as a way of randomizing the IV.
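
The countermeasure itself is tiny; a conceptual Python sketch (my own illustration, not actual library code):

def one_n_minus_one_split(plaintext):
    # Send a 1-byte record first; its MAC (unpredictable to an attacker
    # without the session key) effectively randomizes the IV seen for
    # the remaining bytes.
    if len(plaintext) <= 1:
        return [plaintext]
    return [plaintext[:1], plaintext[1:]]

# A TLS stack would then encrypt each returned chunk as its own record.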

Another common countermeasure was to favor use of the RC4 stream cipher, which did not have the problems of the CBC ciphersuites. But the RC4 cipher dates back to the 1980s, and by 2013 it had been shown convincingly that biases in the RC4 ciphertext could allow an attacker to recover secrets sent over a TLS channel, albeit in a scenario requiring access to data from many millions of connections.

The next big step with TLS v1.2

TLS v1.2, released in 2008, was the first major enhancement to the protocol since SSLv3. It adds support for negotiating which digest algorithms will be used (instead of hard-coding the use of SHA-1), adds support for modern AEAD ciphers, and adds support for extensions.

Extensions are a critical feature which was long lacking in TLS. Each extension is tagged with a type, and implementations are required to ignore extensions which they do not recognize. This feature proved essential for resolving several protocol-level problems which were discovered in the period between TLS v1.2 and v1.3.

Despite adopting several modern cryptographic features, TLS v1.2 also suffered from a number of high profile attacks. The first of these was the renegotiation attack, discovered in 2009. TLS allows both clients and servers to at any time request that a new session be renegotiated; effectively a new handshake is performed, but instead of being in the clear it occurs over the already encrypted channel. Several HTTP servers, including IIS, make use of this for client authentication. The initial connection is encrypted but not authenticated, and if the client attempts to access a protected resource a renegotiation is performed which includes client certificate authentication. The renegotiation bug breaks this entirely. First an attacker creates a new TLS connection to the server, and sends some arbitrary data (for example, the start of an HTTP request). The attacker then proxies a legitimate client attempting to connect to the server, and sends the handshake data through its own channel. From the perspective of the server, it appears as if the client has sent some encrypted data, then authenticated itself with a certificate, then sent some additional data which was both encrypted and authenticated. Depending on the server logic, this might allow the attacker to insert data which the server would interpret as having come from the authenticated client. The fix was to properly bind the inner and outer negotiations, such that it was not possible for the attacker to proxy. This was done by adding a new extension, which was standardized in RFC 5746. With this extension enabled, renegotiations inside an existing channel are cryptographically bound to the existing channel using the value of the TLS finished message. Since in the attack the client is unaware of being proxied within another TLS channel, the renegotiation will fail, preventing the attack.

The problems with renegotiation did not end there, however. In 2014 a new set of attacks was developed, including the devastating triple handshake attack. In this attack, a client connects to a malicious server. The malicious server opens a new TLS connection as a client with some victim server. It forwards the client’s random value, then sends the victim server’s random back to the client. Upon receiving the client’s encrypted master secret, it forwards the same to the victim server. In the end, there are two TLS connections, one between the client and the attacker, and the other between the attacker and the victim server, and both are using the same session keys. In the next step, the client reconnects to the attacker, resuming its previous session, and in turn the attacker resumes its connection with the victim server.

Due to how session resumptions work, in this case the finished messages in the two handshakes will be identical. Then, the malicious server can attempt to perform some action on the victim server which triggers a request for client certificate authentication (for example, requesting access to a protected resource). It forwards the authentication request to the victim client, who responds. The attack proceeds much like the renegotiation attack of 5 years prior, and since the finished messages of the two connections are in this case identical, the previously devised extension fails to detect the proxying. This was addressed with a new extension, the extended master secret, which ensures the master secret for a session is bound to the entire handshake transcript, instead of just the client and server random fields.

Implementation errors also caused notable problems for TLS v1.2. It has been known since 1998 that the RSA key exchange is vulnerable to an oracle attack, the so-called “million message attack”. In a nutshell, before encrypting the master secret with a server’s RSA public key, the client pads it in a certain way. Upon decryption, the server must reject any invalid padding which does not conform to the standard. But it turns out that, given access to an “oracle” which tells whether a particular RSA ciphertext is or is not correctly formatted, it is possible for an attacker to decrypt any ciphertext encrypted using that key. A TLS server can act as such an oracle, and problems have been repeatedly found in various implementations over the last 20 years, including the recent ROBOT and CAT9 attacks.
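
The padding format in question is PKCS#1 v1.5. A Python sketch of the conformance check whose outcome leaks the oracle bit (my own illustration; real implementations must avoid revealing this result, for example by substituting a random pre-master secret on failure):

def looks_pkcs1_v15_conformant(m):
    # After RSA decryption the block must be:
    #   0x00 0x02 | at least 8 non-zero random pad bytes | 0x00 | message
    return (len(m) >= 11
            and m[0] == 0x00
            and m[1] == 0x02
            and all(b != 0 for b in m[2:10])
            and 0x00 in m[10:])

# If an attacker can distinguish True from False responses, the
# "million message attack" recovers the plaintext step by step.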

Bringing TLS into the future with v1.3

After 10 years and numerous patches, TLS v1.2 was in a state where using it securely required a number of extensions and avoiding a number of known-insecure features such as static RSA key exchange, RC4 ciphersuites, and CBC ciphersuites. TLS v1.3 addresses these issues by omitting them entirely.

In addition, there was a strong desire by many large network players (such as Google, Cloudflare, and Mozilla) to minimize the number of round trips required to handshake, as this directly impacts the user visible performance of web pages. This led to a substantially redesigned handshake protocol which has fewer round trips. I will explore the changes and implications of the TLS v1.3 design in a future post.

If you want to discuss more about SSL/TLS, feel free to contact me at bjorn.nostdahl@nostdahl.com or check out these previous articles on SSL/TLS and x.509:

Thanx to Jack Lloyd for his invaluable input into this post 🙂

Agile, Gunnebo Business Solutions, Methodology, Scrum

Agile and Scrum Methodology Workshop

I recently had the chance to join Henrik Lindberg from Acando for an Agile Scrum workshop. In this post I will write about the workshop and the basics of Agile and Scrum. There is so much to learn and explore in agile, and I hope this introduction will compel further reading.

Agile Methodology

Unless you live offline, you are probably aware of the latest trend in the corporate world: the agile approach. Agile has in recent times grown into a revolutionary movement that is transforming the way professionals work. Agile is a methodology that keeps your priorities in equilibrium; thus, work is done faster, and project requirements are met with great efficiency.

Working agile, people tend to forget the four values of the agile manifesto:

  1. Individuals and interactions over processes and tools
  2. Working software over comprehensive documentation
  3. Customer collaboration over contract negotiation
  4. Responding to change over following a plan

Equally important are the twelve principles behind the agile manifesto:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Major Differences between Waterfall and Agile

  • The waterfall approach is a sequential model of project management. Here the development team can only move to the next stage if the previous step is successfully completed.
  • In the agile approach, the execution of processes is concurrent. This enables effective communication between the client, the manager, and the team.
  • The waterfall approach is not well-suited for large projects, whereas agile lets you manage complicated tasks with great ease.
  • Agile methodology is being embraced by managers worldwide for its greater flexibility.
  • In agile, the development plan is reviewed after each step, while in the waterfall approach this happens only during the test phase.

Agile development is based on iterative functionality, according to which planning, development, prototyping and many other key phases of the development may occur more than once, in line with the project requirements. Agile also adheres to the incremental model, where the product is designed, implemented and tested in increments of ascending complexity. The development is termed finished only when every last specification and requirement is met.

When to Use The Agile Methodology?

  • In a Scenario, When You Require Changes to Be Implemented
  • When the Goal of the Project Isn’t Crystal Clear
  • When You Need to Add a Few New Features to the Software Development
  • When the Cost of the Rework Is Low
  • When Time to Market Is of Greater Importance than the Full Feature Launch
  • When You Want to See the Progress in the Sequential Manner

Scrum Methodology

Scrum is the latest agile framework for product success in small-to-big organizations, and it is creating a lot of buzz in the present IT world. Managers worldwide hold the belief that Scrum is far more than the execution of processes and methods; it plays an integral role in helping teams meet aggressive deadlines and complicated project demands. Scrum is a collaborative agile approach that involves breaking substantial processes down into smaller tasks so that they are done efficiently, in a streamlined manner.

Scrum is a lightweight, agile framework that successfully manages and accelerates project development. The framework has proven to cut down on project complexity and focus largely on building products that are in accordance with client expectations. People sometimes use Agile and Scrum interchangeably, but there is a big difference: the agile approach is a series of steps, while Scrum is a subset of agile.

There are three principles of Scrum:

  • Transparency
  • Inspection
  • Adaptation

Scrum Roles

Are you interested in switching to the Scrum approach of development? Then, you must know the various Scrum roles.


The Product Owner

He/she is responsible for providing the vision of the product. The product owner plays a central role in breaking down the project into smaller tasks and prioritizing them.

Responsibilities

  • Defining the Vision
  • Managing the Product Backlog
  • Prioritizing Needs
  • Overseeing Development Stages
  • Anticipating Client Needs
  • Acting as Primary Liaison
  • Evaluating Product Progress at Each Iteration

The ScrumMaster

He/she is someone with extensive expertise in the framework. The ScrumMaster ascertains that the development team adheres to the Scrum model, and also coaches the team on it.

Responsibilities

  • Coaching the Team
  • Managing and Driving the Agile Process
  • Protect the Team from External Interference
  • Managing the Team
  • Foster Proper Communication
  • Dealing with Impediments
  • Be a Leader

The Development Team

This is a panel of qualified developers who form the core of the project development. Each individual in the team brings his/her own unique skills to the table.

Responsibilities

  • The Entire Team Is Accountable for the Work
  • There Are No Titles and No Sub-Teams
  • Sit Together to Communicate with One Another

Scrum Artifacts

sprint-02

Artifact #1: Product Backlog

The product backlog is a sequence of fundamental requirements in prioritized order, provided by the product owner to the Scrum team. The product backlog emerges and evolves with time, and the product owner is solely responsible for its content and validity.

Artifact #2: Sprint Backlog

It is the subset of the product backlog that the team will put in the hard work to achieve: the “To Do’s.” The work in the sprint backlog is sliced into smaller tasks by the team. All the items of the sprint backlog must be developed, tested, documented and integrated to meet the needs of the clients.

Artifact #3: Product Increment

The product increment is a Scrum artifact of significant importance. The product increment must be in line with the development team’s “Definition of Done,” and it has to be approved by the product owner.

Definition of Done in Scrum Methodology

The Definition of Done varies from one Scrum team to another. It is an acceptance criterion that drives the quality of work when a user story is completed. In other words, the Definition of Done is the development team’s quality checklist.

Burndown Chart

The burndown chart is a means of tracking the progress of a project in Scrum. The ScrumMaster is responsible for updating this chart at the end of each sprint. The horizontal axis of the release burndown chart represents the sprints, while the vertical axis shows the work remaining at the beginning of each sprint.

Backlog Refinement

Backlog refinement is the act of updating/adding estimates, details, and order for the items in the product backlog. This improves story descriptions.

User Story

Commonly known as the “definition of requirement,” a user story in Scrum provides enough information for the development team to give a reasonable estimate for the work. User stories are one or two sentences, backed by a set of conversations, that define the desired functionality.

User Story Acceptance Criteria

In Scrum methodology, acceptance criteria are a set of conditions that the software product must meet in order to be accepted by the user, customer or other stakeholders. In layman’s terms, they are a set of statements that determine the features, requirements or functionality of an application.

User Story Relative Estimation

Relative estimation is the procedure of estimating task completion not in terms of time, but by comparing items to one another in terms of complexity.

Scrum Events

There are five defined Scrum Events.

Sprint Planning

Sprint planning is an event in the Scrum framework where the team collaboratively decides on the tasks they will focus on during that sprint, and discusses their initial plan for meeting those product backlog items.

Sprint Goal

The sprint goal is the objective set for the sprint, to be met via the implementation of the product backlog. Sprint goals are arrived at after thorough discussions between the product owner and the development team.

Daily Scrum

In the Scrum approach, the team meets on each day of a sprint to hold a discussion on a number of aspects; this meeting is known as the Daily Scrum.

Sprint Review

The sprint review is held at the end of each sprint to inspect the product increment.

Sprint Retrospective

The Sprint Retrospective is held between the development team and the ScrumMaster to discuss how the previous sprint went, and what can be done to make the upcoming sprint more productive.

In the end, after reading this entire article, you should have a basic overview of the Scrum approach. If you want to talk about agile and Scrum, feel free to contact me at bjorn.nostdahl@nostdahl.com. You can also read more about agile in this article:

Commercial, Fashion, Gunnebo Business Solutions, Innovation, Microsoft Azure, Reflections, Retail

Microsoft Pivot: Envisioning the Future of Digitalization

On May 29th, Satya Nadella, CEO of Microsoft, invited Nordic customers and partners to a small conference in Sweden, putting forth his ambitions for the future. This was Nadella’s first ever visit to Sweden since stepping into the shoes of the company’s CEO. He touched upon issues of today’s tech world, but most importantly made Swedish people aware of his company’s firm belief in global digitization and described what the future holds in store.

Photo: Brian Smale/Microsoft

Self-discipline and excitement-seeking are two pillars of Satya’s personality, and they have made this Indian-origin techie respected worldwide; from Asia to Europe, he is treated to a warm welcome.

Dinner with Microsoft’s ISV Team

The day before Nadella’s address, I was invited to dinner and socializing hosted by Joanna and Martin from Microsoft. ISV stands for Independent Software Vendor and refers to individuals or companies who develop, market and sell software running on third-party hardware and software platforms such as Microsoft’s.


The term ISV is prominent in the tech world and used by most tech companies, including Microsoft. To understand it in layman’s terms: when Microsoft was in pursuit of developing Windows, it partnered with numerous companies and individuals to take the project forward, on both the technical and non-technical fronts.

The next morning, I had the opportunity to see some of the companies that have implemented their solutions on the Microsoft platforms at Hotel Berns. There we received a pep talk for Microsoft and partners on the future, and on the efforts we need to put in to make sure it is heading in the right direction.

20190529_072852893_iOS_low

The Microsoft tech show commenced in style with the tunes of Sweden’s renowned DJ and saxophone artist Andreas Ferronato. His soul-soothing tunes set the mood 🙂

The Volvo Group Digitizing its Operations

Hillevi Pihlblad from the Volvo Group talked about how employees hate change and how, across the globe, it is not easy to adapt to change. She then illustrated how the Volvo Group has turned change into something positive and made people understand why embracing it can make their lives easier.

20190529_073735681_iOS_low

The H&M Group And The Use of AI To Serve Their Customers The Best

Arti Zeighami, a senior executive and business leader at the H&M Group, talked about how the company is investing in artificial intelligence to tailor store offerings. Heading the Advanced Analytics and AI function, he gave a presentation on how the H&M Group is implementing advanced algorithms to scrutinize sales and returns, which has helped them more efficiently predict the needs and demands of their customers.

20190529_075341909_iOS_low.jpg

Satya Nadella, The Man of the Moment Taking The Center Stage

Then finally came the moment when Helene Barnekow introduced Microsoft’s CEO Satya Nadella, who was greeted with warm applause from the tech people present.

20190529_080634744_iOS_low

Nadella, who took over as CEO from Steve Ballmer in 2014, is globally renowned for his dynamic leadership and true passion for technology innovation. Prior to becoming CEO, Nadella was Microsoft’s EVP of the cloud and enterprise group.

His tenure as CEO has transformed Microsoft’s technology, business model and corporate culture. His emphatic leadership steered Microsoft away from its struggling smartphone strategy and towards other areas such as augmented reality and cloud computing.

He was also responsible for the purchase of LinkedIn, the professional network, for around $26.2 billion. Did you know that since he took over as CEO, the company’s stock has risen by 150%?

20190529_082927443_iOS_low

The theme of Satya Nadella’s address was how communities and companies are uniting for the digitized future of Sweden. The speech was largely about Microsoft’s own digital products and services, and how they can drive the world forward.

In his address to the tech people of Sweden, he shed light on several segments of technology: artificial intelligence, digital transformation and innovation. The American giant was in Stockholm to make a big announcement about setting up data centers in the country.

“We have the ambition that the data centers we build in Sweden should be among the most sustainable in the world, this is another step in strengthening our position as a long-term digitization partner for Swedish businesses”

Key Highlights from Nadella’s Address

“It would be wrong for me not to talk about trust. Because in the end, it is something that will be very important to us – not only to create new technology but to really assure that there is confidence in the technology that we create,” he said on stage, and continued: “We need to create systems that handle personal data and security as a human right.”

Satya Nadella talked about the recent investments his company is making in Sweden. Among them, Microsoft will build two data centers in Gävle and Sandviken, intended to be among the most sustainable in the world.

“We will use one hundred percent renewable energy. They will also be completely free from operational emissions. We set a new standard when it comes to the next generation data center. It starts here in Sweden,” said Satya Nadella.

Apart from the data centers, Satya Nadella also highlighted recent key partnerships during his speech at the China Theater. He talked about the company’s collaboration with Kiruna, a city that uses Microsoft HoloLens and AR to plan its underground infrastructure.

Microsoft in Sweden

Satya Nadella, Microsoft’s CEO, put forth examples of the company’s interest in the country:

”There have been huge breakthroughs in the last three years, regardless of whether we are talking about object identification or voice recognition. This must be translated into infrastructure. Here we invest heavily.”

“Take Spotify who has a new very cool podcast tool. It lets anyone do their own podcast and they use our speech recognition to convert speech into text. The most interesting thing they do is that for anyone who wants to modify their podcast, they can enter and edit in writing and that the podcast then automatically changes. It shows how to use AI to make it more efficient”

Ending the Visit on a High

Later in the day, Nadella visited Samhall Innovation Days, a hackathon with the aim, according to the company’s press release, of “creating the conditions for people with a diagnosis within the autism spectrum to come into operation”.

Last summer, Microsoft announced two data centers in Norway to bring their cloud computing services to all of Europe.

“By building new data center regions in Norway, we facilitate growth, innovation and digital transformation of Norwegian businesses – whether large companies, the public sector or some of the 200,000 small and medium-sized companies that together create the future of Norway,” said CEO Kimberly Lein-Mathisen in Microsoft Norway when the Norwegian plans became known.

Nadella declared that both data centers will run on 100% renewable energy, so the project serves the welfare of the country, creating a wealth of new opportunities for locals. He also talked about his company’s association with tech companies and communities in Sweden, among them the city of Kiruna and the Sandvik company.

The address at the China Theater in Stockholm by Microsoft’s top boss Satya Nadella was like a pep talk. He gave his viewpoint on a variety of technology aspects and, most importantly, announced the company’s plan to build two data centers in this Nordic country.

Artificial Intelligence (AI), Business Intelligence (BI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Machine Learning and Cognitive Services

Machine learning is gradually becoming the driving force of every business. Business organizations, large and small, are seeking machine learning models to predict present and future demand and to support innovation, production, marketing and distribution of their products.

Business value concerns all forms of value that determine the well-being of a business. It is a much broader term than economic value, encompassing factors such as customer satisfaction, employee satisfaction and social values, and it is the key measure of a business’s success. AI helps you accelerate this business value in two ways: by enabling correct decisions and by enabling innovation.

Machine learning technologies. Millennial students teaching a robot to analyse data
nadia_snopek

Remember the days when Yahoo was the major search engine and Internet Explorer the major web browser? One of the main reasons for their downfall was their inability to make correct decisions. Wise decisions are made by analyzing data: the more data you analyze, the better the decisions you make. Machine learning greatly supports this cause.

There was a time when customers accepted whatever companies offered them. Things are different now: customers’ demands for new features are ever increasing. Machine learning has been the decisive factor behind almost every recent innovation, whether face recognition, personal assistants or autonomous vehicles.

Machine Learning in More Detail

Let’s first start with what machine learning is. Machine learning enables systems to learn and make decisions without being explicitly programmed to do so. It is applied in a broad range of fields; nowadays, almost every human activity is being automated with the help of machine learning. One area of study in which machine learning is heavily exploited is data science.

Data science works with data: data must be extracted and analyzed to make the best decisions for a business.

The amount of data a business has to work with is enormous today; social media, for example, produces billions of data points every day. To stay ahead of your competitors, every business must make the best use of this data. That’s where you need machine learning.

Machine learning offers many techniques for making better decisions from large data sets, including neural networks, SVMs, reinforcement learning and many other algorithms.

Among them, neural networks are leading the way. They improve consistently, spawning child technologies such as convolutional and recurrent neural networks that provide better results in different scenarios.
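
To make this less abstract, here is a minimal sketch of one of these techniques in practice: training an SVM classifier on a small built-in dataset. The choice of scikit-learn is mine for illustration; the article does not prescribe any library.

    # A minimal sketch: training an SVM classifier on a toy dataset.
    # scikit-learn is an illustrative choice, not one prescribed by this article.
    from sklearn import datasets
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Load a small built-in dataset of handwritten digits.
    X, y = datasets.load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Train a support vector machine with an RBF kernel.
    model = SVC(kernel="rbf", gamma=0.001, C=10.0)
    model.fit(X_train, y_train)

    # Evaluate on held-out data.
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))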

AdobeStock_178345630_low

Learning machine learning from scratch and trying to develop your own models is not a wise idea: it incurs huge costs and demands a lot of expertise in the subject. That is why you should consider the assistance of a machine learning vendor. Google, Amazon and Microsoft all provide machine learning services. Let’s take Microsoft as an example and review what qualities to look for when selecting a vendor.

Using the Cloud as a Solution for Machine Learning

The cloud simplifies and accelerates the building, training and deployment of machine learning models. It provides a set of APIs to interact with when creating models, hiding all the complexity of devising machine learning algorithms. Azure can identify suitable algorithms and tune hyperparameters faster. Autoscale is a built-in feature of Azure cloud services that automatically scales applications, letting your application perform at its best while keeping costs to a minimum. Azure Machine Learning APIs can be used from any major technology, such as C# or Java.
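
As a rough sketch of what this looks like in code, the snippet below submits a training script to an Azure Machine Learning workspace using the v1 azureml-core Python SDK. The workspace config file, the experiment name and the train.py script are placeholders for illustration, not details taken from this article.

    # A hedged sketch, assuming the v1 azureml-core SDK, a config.json
    # downloaded from the Azure portal, and a local train.py script
    # (all placeholders for illustration).
    from azureml.core import Experiment, ScriptRunConfig, Workspace

    # Connect to an existing workspace described by config.json.
    ws = Workspace.from_config()

    # Group runs under a named experiment.
    experiment = Experiment(workspace=ws, name="demand-forecasting")

    # Describe what to run; Azure ML packages and executes the script.
    config = ScriptRunConfig(source_directory=".", script="train.py")

    run = experiment.submit(config)
    run.wait_for_completion(show_output=True)
    print(run.get_metrics())  # metrics logged by train.py, if any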

There are many other advantages to doing machine learning in the cloud:

  • Flexible pricing: you pay for what you use.
  • High user-friendliness: easier to learn and less restrictive.
  • More accurate predictions based on a wide range of algorithms.
  • Easier fine-tuning of results.
  • The ability to publish your data model as an easy-to-consume web service (see the sketch after this list).
  • Data streaming platforms like Azure Event Hubs can feed it data from thousands of concurrently connected devices.
  • You can publish experiments for data models in just a few minutes, whereas expert data scientists may take days to do the same.
  • Azure security measures manage the security of Azure Machine Learning, protecting data in the cloud and offering security-health monitoring of the environment.
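
To illustrate the web-service point from the list above, here is a hedged sketch of publishing a registered model as an HTTP endpoint on Azure Container Instances, again with the v1 azureml-core SDK; the model name, scoring script and environment file are placeholders.

    # A sketch of publishing a registered model as a web service; the model
    # name, score.py and env.yml are placeholders, not details from the article.
    from azureml.core import Environment, Workspace
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.webservice import AciWebservice

    ws = Workspace.from_config()
    model = Model(ws, name="demand-forecasting-model")  # previously registered model

    # score.py must define init() and run(raw_data) entry points.
    inference_config = InferenceConfig(
        entry_script="score.py",
        environment=Environment.from_conda_specification("scoring-env", "env.yml"),
    )
    deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

    service = Model.deploy(
        ws, "demand-forecasting-svc", [model], inference_config, deployment_config
    )
    service.wait_for_deployment(show_output=True)
    print(service.scoring_uri)  # the HTTP endpoint clients can call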

Using Cognitive Services to power your business applications

Let’s go on to discuss how Azure Cognitive Services can be used to power up a business application. Azure Cognitive Services is a collection of APIs, SDKs and services that allows developers to build intelligent applications without expertise in data science or AI. These applications can see, hear, speak, understand and even reason.

AdobeStock_252431727_low

Azure Cognitive Services was introduced to extend Microsoft’s existing portfolio of APIs.

Services provided by Azure Cognitive Services include:

  • The Computer Vision API, which provides the advanced algorithms needed for image processing
  • The Face API, which enables face detection and recognition
  • The Emotion API, which offers options for recognizing the emotion in a face
  • The Speech service, which adds speech functionality to applications
  • Text Analytics, which can be used for natural language processing

Most of these APIs were built with business applications in mind. Text Analytics can be used to harvest user feedback, allowing businesses to take the actions needed to accelerate their value. Speech services allow organizations to provide better customer service to their clients. All of these APIs have a free trial that can be used to evaluate them. You can use these cognitive services to build many kinds of AI applications that solve complex problems for you, thereby accelerating your business value.
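
As a small example of harvesting feedback this way, the snippet below calls the Text Analytics v3.0 sentiment endpoint over plain REST. The endpoint URL and subscription key are placeholders you would replace with the values from your own Azure resource.

    # A minimal sketch of sentiment analysis over REST against the Text
    # Analytics v3.0 API; endpoint and key are placeholders for your resource.
    import requests

    endpoint = "https://<your-resource>.cognitiveservices.azure.com"
    key = "<your-subscription-key>"

    payload = {"documents": [
        {"id": "1", "language": "en", "text": "The new app update is fantastic!"},
        {"id": "2", "language": "en", "text": "Checkout keeps failing on my phone."},
    ]}

    response = requests.post(
        f"{endpoint}/text/analytics/v3.0/sentiment",
        headers={"Ocp-Apim-Subscription-Key": key},
        json=payload,
    )
    response.raise_for_status()

    # Each document comes back with an overall sentiment and confidence scores.
    for doc in response.json()["documents"]:
        print(doc["id"], doc["sentiment"], doc["confidenceScores"])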

If you want to talk more about ML and AI, feel free to contact me: bjorn.nostdahl@gunnebo.com 🙂

Gunnebo Business Solutions, Innovation, Reflections

InnoTown 2019

Finally the day is here, and it is time for another InnoTown in my beautiful hometown Ålesund. Ålesund’s Art Nouveau architecture is known far and wide. The myriad of turrets, spires and beautiful ornamentation are like something from a fairytale.

61399418_1571518979651758_4081794471233060864_n
Photo: Sponland Foto AS

The InnoTown Conference, arranged for over two decades, has brought in speakers who skilfully entertain the audience with their unique stories. Their talks contain a rich blend of deeply motivational personal journeys and exceptionally practical business cases, and they leave us energized and confident to capitalise on new ideas beyond the tried and tested. Ålesund is also the centre of one of the most innovative and dynamic regions in Norway, with a long history of groundbreaking innovation.

9H6A2687_low
Photo: InnoTown

This year, Monica Parker moderated and narrated the event. Monica is an ex-homicide investigator, organisational behaviourist, speaker, author, workplace nerd, status quo challenger and founder of HATCH Analytics, guiding businesses through change.

9H6A2712_low.jpg
Photo: InnoTown

Monica has fifteen years’ experience in understanding the influence of environment and processes on human behaviour. She specializes in an evidence-based approach to change, using social-scientific methods of data collection grounded in a pragmatic and commercial foundation.

“I have always been interested in human behaviour, my background as a homicide investigator in the U.S. gave me an analytical mind and a broad understanding of behavioral psychology.”

What is Creativity?

Fredrik Haren talks about creativity, innovation and idea generation, and about how to develop a global mindset and build a truly global company. His speeches help the audience understand how valuable it is to think in new ways and how difficult this is to achieve. He also speaks about business creativity and human innovation, about embracing disruption and change, and about the importance of having a global, human mindset. A fun fact about Fredrik: he owns three private islands, and he invited the InnoTown audience to join him there 😎

Creativity is the act of turning new and imaginative ideas into reality. It is characterized by the ability to perceive the world in new ways, to find hidden patterns, to make connections between seemingly unrelated phenomena, and to generate solutions. Creativity involves two processes: thinking, then producing. It is a combinatorial force: our ability to tap into the ‘inner’ pool of resources (knowledge, insight, information, inspiration and all the fragments populating our minds) that we have accumulated over the years just by being present, alive and awake to the world, and to combine them in extraordinary new ways.

9H6A2723_low
Photo: InnoTown

Talking more about creativity, Fredrik spoke about how great nations have remained relevant through amazing ideas. Over the years, the USA has come up with grand ideas and creative innovations such as Tesla, SpaceX, Microsoft and Apple.

For Japan, small creative incremental changes have been key. All of these incremental changes add up and make a significant positive impact on organizations. One approach to continuous incremental improvement in Japan is called kaizen. In business, kaizen refers to activities that continuously improve all functions and involve all employees, from the CEO to the assembly-line workers.

China’s economy found its niche in mass manufacturing, which also gave rise to a huge market for copying. That culture of copying has nonetheless led to a lot of good products and huge revenues for China.

Mongolia’s nomadic mindset has been very important for the development of this small nation. The Mongolian people belong to one of the oldest nomadic cultures in the world, still highly regarded as a major part of the fabric of Mongolia and of its commercial success: they move to where the business is.

Iceland is probably the most improved nation in Europe this century, and they have gone about it by simply getting out and getting inspired in all facets of life.

The Philippines’ creativity became widely known in 2010, when inmates in Cebu prison were recorded dancing to Michael Jackson’s Thriller. The performance took place in a prison on the Filipino island of Cebu, and its purpose was to showcase the island’s proud dance tradition.

He also spoke of the value of travel: let go of parts of your identity and embrace other cultures and mindsets. That way you might become a fountain of ideas.

He concluded by saying that in order to put in the hard work needed to acquire a skill, you need to believe that the activity really is a skill you can learn. If you believe the activity is a talent, you don’t bother to work hard at it, because you attribute any limitations in your performance to your lack of talent.

Sustainable Energy

What are the structural changes needed in the financial industry to align business models with the Paris Climate Agreement and the United Nations Sustainability Goals? Sustainability has become fundamental to resilient business. It is about trust and the ability to provide long-term value creation. Sustainable energy is energy that meets today’s demand without any risk of its sources expiring or being depleted, and it can be used over and over again. Sustainable energy should be widely encouraged, as it does not harm the environment and its sources are freely and widely available.

This energy is replenishable, helps us reduce greenhouse gas emissions and causes no damage to the environment. If we keep using fossil fuels at a steady rate, they will run out and cause adverse effects on our planet.

Thina Margrethe Saltvedt is a senior advisor in Sustainable Finance at Nordea Bank. Before joining the Sustainable Finance group in January 2018, she worked as a chief macro/oil analyst for 10 years. Thina is a member of the Minister of Climate and Environment’s Climate Council.

Sustainable development gained traction after the Brundtland Commission’s definition, created on behalf of the United Nations in 1983 to reflect on ways to save the human environment and natural resources and prevent the deterioration of economic and social development: “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”

The Norwegian Pension Fund also maintains a sustainable investment profile and seeks to reduce the financial risks associated with the environmental and social practices of the companies in its portfolio.

Thina spoke about how sustainable energy management is a key issue for companies today. She said long-term strategic thinkers and energy entrepreneurs must be closely involved in these practices and must play an important role in helping their organizations make more sustainable energy choices. Energy entrepreneurs must support efforts to reduce energy consumption with their in-depth knowledge of energy costs, and they help steer management towards the right decisions on using renewable energy. She reiterated that sustainability is a business approach to creating long-term value by taking into consideration how an organization operates in its ecological, social and economic environment, built on the assumption that developing such strategies fosters company longevity.

As expectations of corporate responsibility increase and transparency becomes more prevalent, companies are recognizing the need to act on sustainability. Professional communications and good intentions are no longer enough. Investors, too, now avoid taking on climate risk: short-term decisions in the wrong direction can carry costs and risks for the investment.

A Missed Opportunity for the Maritime Community

The superyacht industry! How did this Norwegian-born designer end up creating custom motor yachts for an international clientele? What is the superyacht market like, and what does it mean for Norway?

Espen Oeino’s first involvement in yacht design was the popular ECCO yacht, and he hasn’t looked back since.

Photo: Staale Wattø / InnoTown

For over twenty years, Espen Oeino has been hard at work. The world-recognized designer founded his technical and design office in Monaco, specializing in yacht design, naval architecture and related engineering disciplines. With a multicultural, multidisciplinary staff of 20 based in the yachting capital of Monaco, the studio has left a significant imprint on the new-build market and is today considered one of the world’s leading design studios for large bespoke motor yachts.

Espen Oeino talked about the world of superyacht interior design and how it has been largely dominated by a small, specialized group of yacht stylists, interior designers and architects. He also shared his views on the mindset of yacht designers.

Espen Oeino also gave an honest opinion on yacht designers offering both exterior and interior design, explaining that some clients prefer to entrust their yacht’s interior to a different designer.

Initially, your designer of choice will want to find out your ambitions for the yacht: where you’ll sail, with whom, and what you plan to do aboard. This information will inform their designs, and these discussions often involve reference boats: other yachts you have admired, showcase vessels you have seen at boat shows around the world, or examples from the designer’s portfolio. Constructing a single superyacht can employ up to 1,000 people for up to four years.

Finally, he gave an insight into how yachts are constructed, explaining that no yachts are built from A to Z in the Nordics, even though the competence is there.

The Generation Gap

For the first time in history, we have four different generations competing in the workplace and marketplace, all with separate ways of working and communicating and different expectations of these relationships. Forget the outdated emphasis on Millennials: the biggest challenge facing business now is how to manage and maintain a multi-generational, diverse workforce and consumer base. The generation gap is becoming an increasingly difficult issue for business. It is a difference of opinion between one generation and another regarding beliefs, politics or values. In today’s usage, ‘generation gap’ often refers to a perceived gap between younger people and their parents or grandparents.

Dr. Paul Redmond specializes in ‘Generational Intelligence’, helping companies and services navigate the complex generational developments within politics, society and the workplace.

Paul educates companies on long-term generational trends on how business can adapt, not only for Millennials and Gen Z’s needs, but crucially across the generations. His aim is to smash some of the misconceptions and generalisations that come with speaking about the generations.

GenerationsFlow

Dr. Paul Redmond talked about how the generation gap is shaped by collectivism and individualism. Whether in non-Western or Western countries, changes in politics and society affect families and individuals greatly. In many non-Western countries, such as China, Japan and Korea, collectivism is considered one of the most significant cultural values, whereas in Western countries such as the United States, individualism is emphasized. From the middle of the 19th century onwards, however, a great number of Asian people immigrated to the United States.

He mentioned how the generation gap is growing for each generation, and every generation has its own approach to life and work. At the risk of stating the obvious, the interactions between generations have profound effects on families and their businesses. Today, Boomers are mostly in charge and getting hit with the new wave of change brought in by the Millennial generation, born between 1982 and 2004.

Change and How to Embrace It

Monica Parker, the moderator, came back to talk about change and how it can positively impact lives. Change is a big part of being successful. Not only is change good, but it’s accelerating at an increasingly rapid pace. This means that you need to keep adapting; it’s both a survival skill and a success skill.

She talked about how change never stands still in real life. It’s not like the movies, where characters can freeze-frame while the writer takes the viewer on some tangential story. In real life, change happens constantly. You can fight it or welcome it; it’s your choice. Change will occur regardless.

Monica explained how all companies are tech companies today in the age of IoT. The total installed base of IoT connected devices is projected to amount to 75.44 billion worldwide by 2025, a fivefold increase in ten years. 

She told of how China uses AI to monitor people and India uses AI to reunite orphans with their families. By 2018, the Chinese government had installed close to 200 million surveillance cameras across the country, approximately one camera per seven citizens.

Monica also gave an insight into how change is going to affect employment, explaining that kids growing up today will have 17 jobs across 5 different industries, according to research from McCrindle.

She rounded off by explaining how the votes for Trump and Brexit are a backlash against globalization.

What are you Great at?

René Carayol is back, and this time he shares the magic and simplicity of the SPIKE philosophy. Spike is the product of some 30 years of supporting the growth and development of individuals and businesses of all shapes and sizes. The vital and essential ingredient of the Spike methodology is that absolutely everyone has at least one inherent strength. Sometimes it’s hard to see these things in ourselves: our natural abilities, talents and gifts come so naturally to us that we often don’t even notice them! We tend to assume that our gifts come just as naturally to everyone else and therefore aren’t special, unique or particularly useful. But that assumption isn’t accurate, because we all have multiple gifts and something to offer the world; it’s just a matter of getting clear on what those things are so that we feel confident in sharing what we have to offer.

René gave his opinion on humanity. According to him, humanity is plural, not singular. We should embrace the diversity of humanity and all it brings to innovation, because the world works best with everybody in and nobody out.

Finally, he spoke about the concept of “challenge up, support down”. “Challenge up” is when people honestly disagree with ideas and policies that come from top management. Once final decisions are made, they “support down” by standing behind the decisions and making them work in the department and the community. This philosophy is important for greatness.

Carayol is one of the world’s leading business gurus specialising in leadership, culture and transformation, drawing from his own unique experiences on the boards of the biggest British and American organisations. He has had the privilege of working closely with some of the world’s most prominent leaders, from former US President Bill Clinton to the seventh Secretary-General of the United Nations Kofi Annan, former US Secretary of State Colin Powell and Sir Richard Branson.

 

Read more about InnoTown here: InnoTown
Read more about Ålesund here: Visit Norway

 

Gunnebo Business Solutions, Operations, Tactical Meetings

Efficient Technical Support Tactical Meetings

Gunnebo Business Solutions AB is working on establishing an international, dynamic and enthusiastic software development team to build sophisticated security and business applications. Within the new organization, customer support and operations play a vital role.

Global communication network concept

To be able to help our customers effectively, we are implementing and improving our routines around the support process. We started our journey with ITIL version 4 and DevOps, but lately an article on Holacracy’s “Tactical Meetings” caught my eye. Tactical meetings are held regularly, on a weekly basis, with the intention of removing obstacles that prevent the team from achieving their goals for that cycle (the period between two meetings), and of updating the rest of the team on the tasks assigned to each member.

Tactical Meeting Procedure and Expectations

Tactical meetings are usually kept short and to the point. They can be divided into five main parts: “Check-in”, “Checklist, Metrics, Project Updates”, “Agenda Building”, “Triaging Issues” and, finally, the “Closing Round”.

  • Check-in Phase: This is essentially a get-to-know-your-team-members phase. Each member is given a little time to talk about how they are doing or to express how they are feeling at that moment (perhaps they are feeling a little blue, rejoicing about something special, or just not themselves), so that the other members know where that member is coming from.
  • Checklist, Metrics, Project Updates Phase: Here the team members are given the opportunity to provide the rest of the team with some context about the issues they are facing with regard to their assigned tasks. The other team members are encouraged to ask questions or to save them for later in the meeting.
  • Agenda Building Phase: At this point, the facilitator (the person chairing the meeting, usually the team leader, a supervisor or someone from management) asks the members to share the problems they are facing. These problems are known as “tensions”. Each member either gives a short phrase describing a tension or, if they have none, simply says pass.
  • Triaging Issues Phase: Here the team discusses the issues in detail and tries to come up with solutions to the tensions, keeping in mind any limitations on the side of the team member who raised the tension. The facilitator plays a larger role here, keeping the discussion on point and not letting it get derailed. He or she can also add to the agenda any new tensions that may arise from implementing the solutions. Once a tension is crossed off the list, however, it cannot be revisited in that meeting.
  • Closing Round Phase: This is similar to the check-in phase, but here the team members reflect on how they feel about the solutions they have come up with and whether or not they are happy with them.

The process of an Efficient Tactical Meeting

AdobeStock_102381789_low

The efficiency of a tactical meeting rests largely on the shoulders of the facilitator. An efficient facilitator uses a few tricks to keep meetings short and on point. Here are some key techniques a facilitator needs in order to achieve high levels of efficiency.

Recap from the previous cycle

Here the facilitator goes around the table, asking each member to present any updates on the solution(s) to the tension(s) they raised at the previous meeting. A good facilitator keeps a checklist of the tensions and solutions from the previous cycle and crosses them off as they are resolved. Team members may also request to add items to the checklist, as long as they relate to the solution of a tension and are accepted as valid by the other team members.

Keeping track of time

This is where the facilitator allocates a certain amount of time to each task. For example, while building the agenda, the facilitator asks the members to keep their tensions short and sweet, sometimes even asking them to use one or two words to describe a tension; since tensions are elaborated on in the triaging phase, it is not necessary for everyone to understand each tension at this point. When it comes to the triaging phase, however, it is important that the facilitator finds a balance between allocating enough time to each tension on the agenda and keeping the meeting moving forward. It is considered good practice not to discuss minor issues (especially technical support issues) in depth, but to find quick solutions and move on to the next tension.

Processing the Tensions

This is the most important part of the facilitator’s role. The facilitator asks a team member what their tension is and then what they need. The member gives a quick description of the tension and then either proposes a solution or engages the other members of the team to come up with a fruitful one. The secretary captures each tension along with the solution accepted by the team member, which helps the facilitator recap the previous cycle at the next meeting. Finally, the facilitator asks the member whether they are happy with the solution and, if so, moves on to the next tension.

Tasks of a Facilitator

  • While most tensions in a technical support framework are quite straightforward, some solutions require multiple steps. Since tactical meetings are held frequently, there may not always be time to complete all the steps before the next one. Hence, the facilitator asks the team member for a “Next Action”: quite literally, what the member wants to do next towards the solution of their tension. This also helps the facilitator keep track of the checklist for the next cycle’s recap phase.
  • In cases where there is only one step, or the solution is at its final step, the facilitator can instead ask for the outcome of the project. A “Project” is a solution with a definite endpoint.
  • The facilitator can also ask team members to share information on tensions for which there may be no immediate solution.
  • Where a member does not know how to express their tension(s), the facilitator can either ask the other team members to address the tension or offer a possible pathway for the member to address it themselves.
  • Another important task for the facilitator is to make sure that only one tension is discussed at a time. There may be instances where another team member wants to discuss a tension related or similar to the one being discussed. At that point, the facilitator needs to refocus the team’s attention on the tension at hand to keep the meeting efficient.
  • In cases where the team comes up with multiple solutions to the same tension, it is the facilitator’s job to urge the team to reach a consensus on the better solution. If the team member with the tension is not sure they can achieve the solution on their own, they can request the help of other team members to reach the goal.
  • If the solution the team has come up with does not comply with the organization’s policies, or concerns a service the organization does not provide, it is the facilitator’s job to take the matter to management and try to come up with a solution at that level.

In summary, the main objectives of a technical support tactical meeting are to spend more time talking about the important things and to find solutions that help the customer more efficiently and satisfactorily. The purpose of these meetings is not to talk about things beyond the team’s control, nor to discuss strategy or politics; the purpose is to spend less time complaining and to work together as a team so that each and every member can perform their work efficiently and effectively. That is why we have not only implemented weekly tactical meetings at our organization, but also abide by the guidelines put forward in this article.

If you want to talk more about software support and operations, feel free to contact me at bjorn.nostdahl@gunnebo.com