Gunnebo Business Solutions

The demise of Major Retail Chains in the US

The number of retail chain stores closing has dramatically increased over the past few years. In 2019 alone, large retailers announced the closing of almost 7,000 stores, up from roughly 5,000 in 2018. If this continues, it could become a retail apocalypse.

Store closing sign on business store front.

A major reason behind this rapid closing of traditional stores is the rise of e-commerce, which has left brick-and-mortar stores struggling to grab attention. As mentioned in the previous article, online sellers like Amazon have made it really difficult for physical stores to survive in this digital era.

Most of these stores have begun changing their sales strategies, focusing in particular on digital marketing, and have moved from brick-and-mortar stores to e-commerce sites. The trend these days is to purchase online: it saves the time and money of visiting stores, and buying products online is simply more convenient.

Apart from that, bankruptcy driven by private-equity ownership is another major concern, as happened to Toys R Us. Retailers owned by private-equity firms are more likely to go bankrupt than public companies. Although these firms claim they can help grow capital, they are silent about the debt that leveraged buyouts pile on.

What happened to American Apparel?

American Apparel was a major retailer from its launch in 1997. It faced numerous sexual harassment and defamation issues that cost the company $3 million in fines. In 2017, facing bankruptcy for the second time, it had to close most of its stores.

It was purchased by Gildan Activewear at the end of 2017 for $88 million. The major change is that the sexualized imagery is gone; the new American Apparel puts more emphasis on materials and pricing to attract customers.

Toys R Us: the kids’ dream store

Toys R Us had been around for 70 years and was every child’s dream store. The company’s troubles began when it was taken over by private-equity firms in 2005. Saddled with unsustainable debt, it filed for bankruptcy in 2017.

April 20, 2018 San Mateo / CA / USA - Toys R Us logo and "Going out of business" announcement above the entrance to one of the locations in San Francisco bay area; customers going into the store

Under pressure from suppliers, the company decided to liquidate its remaining stores, closing almost 700 locations and selling off its Canadian business. Many say there might be a comeback, but for now Toys R Us will remain a fond memory for many.

Victoria’s Secret – A girl’s best friend

Parent company L Brands announced the closing of almost 53 Victoria’s Secret stores in 2019, after already closing 30 stores in 2018 due to poor sales. One reason was the controversial comments the chief marketing officer made about transgender and plus-size models.

Panoramic view of a Victoria's Secret store front

This incident outraged many and led to a reduction in sales. Apart from that, many people claim that the quality of products has dropped as well.

Abercrombie and Fitch – a new toned-down version

Abercrombie and Fitch plans to close almost 40 stores in the USA in 2019. The company has made plans to change its marketing strategies and become more sales-oriented; getting rid of sexualized ads was another point of focus.

It has also mentioned opening three flagship stores and redesigning others as part of the restructuring. Moreover, the change of store design, moving from dark tones to a brighter, lighter atmosphere, is a key highlight.

GAP – shutting down rapidly

One of the most famous brands, GAP is closing the doors of almost 700 stores worldwide. Its sister brands, like Banana Republic, are struggling to survive as well, as they are not performing well.

August 20, 2019 Palo Alto / CA / USA - Gap store in Stanford Shopping Center in San Francisco bay area

The company is planning to split in two and has decided to change its business model to increase profit: Old Navy will go its own separate way. This is another strategy to increase sales.

Payless – bankruptcy victim

A few years back, Payless went into bankruptcy for the second time, and in 2019 it is finally closing its doors, with plans to shutter almost 2,300 stores within the USA. One reason was that the shoe retailer only had physical stores, and with the rise of online sales, store sales declined.


Just like with Toys R Us, private-equity buyouts are very hard to bear due to the extensive amount of debt involved. Payless also began liquidation sales, which ran until June 2019.

Michael Kors: Restructuring

Michael Kors closed almost 100 stores in 2017. This year it will shut 50 more and has stated that there could be others, because of the low sales the company has encountered. As a luxury brand, it has also found it difficult to appeal to average buyers. However, it has partnered with Amazon and moved into online sales, which is a positive step.

View at Michael Kors shop. Michael Kors  is a New York City-based fashion designer widely known for designing classic American sportswear for women.

Claire’s – The Accessory Chest overcomes Bankruptcy

Claire’s is a very popular store for teens, with an extensive range of accessories. However, Claire’s also had to seek Chapter 11 bankruptcy protection. In the process, it eliminated $1.9 billion of debt, underwent restructuring and eliminated about 2,000 locations.

Gymboree – Children’s clothing 

Gymboree was another victim of bankruptcy and also sought protection under Chapter 11. It was another retailer dependent on brick and mortar, and falling sales left the company in debt. It planned to close all Gymboree and Crazy 8 stores in the US and Canada, about 900 stores in total.

J.C. Penney – 116-year-old outlet

J.C. Penney closed about 130 stores back in 2017 due to a dip in sales and has since decided to close 18 more. The giant retailer was forced to close shops because it has been losing a huge amount of money as sales shrink. At the moment, the company is strapped for cash and its stores are out of date. There are reports that it has been planning to add toy sections to its brick-and-mortar stores.

August 14, 2019 San Jose / CA / USA - JCPenney department store located in a mall in South San Francisco bay area

Conclusion

The list of retail stores calling it quits goes on and on; it is almost like an epidemic spreading from one retail chain to the next. The list here covers just a few of them. More and more companies are in trouble, either through bankruptcy or through sales lost to online shopping. Yet some brands, like Gucci, Louis Vuitton and Claire’s, have risen above these problems and adopted new marketing strategies.

We are living in a new world, with new channels and if you want to know how to make your retail business ready for the future, please reach out to me: bjorn.nostdahl@gunnebo.com

Debounce, Gunnebo Business Solutions, Internet of Things (IoT), Microsoft Azure, Node RED

Debounce Algorithms in IoT

Working in IoT, we sometimes need to handle large streams of data that may or may not be accurate. Streams might contain noise, inaccurate or unreal readings, and other unwanted data.


Switch debouncing

Debouncing can be done in hardware or in software. Hardware debouncing uses either an S-R circuit or an R-C circuit. Two well-known software debouncing algorithms are the vertical counter and the shift register. Despite being well known, in the literature these methods are typically presented as a code dump with little or no explanation. In this article, I will touch on these circuits, methods and other algorithms and their use in IoT debouncing.

Understanding Switch Bounce

When the contacts of a mechanical switch toggle from one position to another, they bounce (or “chatter”) for a brief moment. During the first millisecond, the bounces are closely spaced and irregular, and although all of this happens within milliseconds, high-speed logic will detect these bounces as genuine presses and releases.


A button release produces bounces too, but it is common for a switch release to produce less bounce than for a switch press.

Switches usually become stable after 5–20 ms, depending on the quality, size and electronics of the hardware.

Hardware Debouncing

Debouncing using S-R circuits

Switch debouncing using an S-R circuit is one of the earliest hardware debouncing methods. In this circuit, an S-R latch together with pull-up resistors suppresses the bounces. It is still one of the most effective debouncing approaches.

The figure below depicts a simple digital debouncing circuit which is used quite often.

Figure: digital debouncing circuit built from an S-R latch

The circuit uses two cross-coupled NAND gates forming an S-R latch, an SPDT (Single Pole Double Throw) switch and two pull-up resistors. Each resistor supplies a logic ‘1’ to its gate input, while the switch pulls one of the inputs to ground.

With the switch in the position shown in the figure, the output of the upper gate is ‘1’ regardless of its other input, and the ‘1’ supplied by the bottom pull-up resistor drives the lower NAND gate’s output to ‘0’, which is fed back to the upper gate. If the switch bounces between the contacts, or rests briefly in neither region between the terminals, the latch preserves its state because the ‘0’ from the bottom NAND gate is still fed back. The switch contacts may chatter, but the latch’s output never chatters with them; the output is therefore bounce free.
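The latch behaviour described above can be illustrated with a small logic simulation. The Python sketch below is only a model of the gates (the actual debouncing happens in hardware), and the pin naming is my own:

```python
def nand(a, b):
    """Logical NAND of two bits."""
    return 0 if (a and b) else 1

def sr_latch(s_n, r_n, q, q_n):
    """Settle a cross-coupled NAND S-R latch.
    s_n and r_n are the active-low inputs held high by the pull-up
    resistors; q and q_n are the current gate outputs (the state)."""
    for _ in range(4):  # a few passes let the feedback loop settle
        q, q_n = nand(s_n, q_n), nand(r_n, q)
    return q, q_n

# Switch resting on one contact: S low, R high -> latch sets, Q = 1
q, q_n = sr_latch(0, 1, 0, 1)

# Bounce: the pole leaves the contact, so both inputs float high.
# The latch holds its previous state instead of chattering.
q, q_n = sr_latch(1, 1, q, q_n)
```

Feeding (1, 1), the in-between state during a bounce, leaves the outputs unchanged, which is exactly why the latch output is bounce free.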

R-C Debouncing

Although the S-R circuit is still common, its bulkiness causes problems when you use it frequently, as it needs quite a few hardware components. Another drawback is that SPDT switches are more expensive than SPST switches. Thus, a different debouncing approach emerged using an R-C circuit. The basic principle is to use a capacitor to filter out rapid changes in the switch signal.

The following image demonstrates a basic R-C circuit which is used for debouncing.

Figure: basic R-C debouncing circuit

It is a simple circuit which uses two resistors, a capacitor, a Schmitt-trigger hex inverter and an SPST switch.

  • When the switch opens, the capacitor, initially at zero, charges toward Vcc through R1 and R2. The voltage at Vin rises, so the output of the inverting Schmitt trigger is low (logic 0).
  • When the switch closes, the capacitor discharges to zero; the voltage at Vin is ‘0’ and the output of the inverting Schmitt trigger is high (logic 1).

During bouncing, the capacitor holds the voltage at Vin near either Vcc or Gnd, smoothing out the glitches.

You may wonder why a standard inverter is not used. The problem with a standard inverter gate here is that TTL defines a logic zero input as an applied voltage between 0 and 0.8 V, and within that band the output can be unpredictable. Thus, we must use a Schmitt-trigger hex inverter: its hysteresis keeps the output constant even if the input dithers, and prevents the output from switching spuriously.
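The debounce delay follows from the usual R-C charging equation. The sketch below estimates it for hypothetical component values (none are given in the article, and the 2.9 V threshold is an assumed figure for a Schmitt-trigger inverter; check your part’s datasheet):

```python
import math

# Hypothetical values, chosen only for illustration.
VCC = 5.0            # supply voltage (V)
R1, R2 = 10e3, 10e3  # resistors in the charge path (ohms)
C = 1e-6             # debounce capacitor (farads)

def charge_time(v_threshold, vcc=VCC, r=R1 + R2, c=C):
    """Time for the capacitor to charge from 0 V up to the Schmitt
    trigger threshold: v(t) = vcc * (1 - exp(-t / (r * c)))."""
    return -r * c * math.log(1 - v_threshold / vcc)

# With an assumed rising threshold of 2.9 V, the output cannot flip
# until ~17 ms after the switch opens -- longer than typical bounce.
t = charge_time(2.9)
```

Picking R and C so that this delay exceeds the 5–20 ms bounce window is the whole design exercise.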

Software Debouncing

We can debounce switches in software as well. The basic principle is still to sample the switch signal and filter out any glitches. The most used algorithms for this are counters and shift registers.

Counter Method

The first approach uses a counter to time how long the switch signal has been low. If the signal has been low continuously for a set amount of time, then it is considered pressed and stable. 

Let’s see the steps in the Counter method. 

First, set the count value to zero. Then set up a sampling event with a certain period, say 1 ms; you can use a timer for that. On each sample event, do the following.

If the switch signal is high, reset the counter variable to 0 and set the internal switch state to ‘released’. If the switch signal is low, increment the counter variable by 1 until it reaches 10. Once the counter reaches 10, set the internal switch state to ‘pressed’.
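The steps above can be sketched in Python (the names and the active-low convention are my own; on a microcontroller, sample() would be called from the 1 ms timer interrupt):

```python
SAMPLE_PERIOD_MS = 1   # the 1 ms timer tick from the text
STABLE_COUNT = 10      # 10 consecutive low samples = stable press

class CounterDebouncer:
    """Counter method: the raw signal is assumed active-low
    (0 = contact closed), sampled once per timer event."""

    def __init__(self):
        self.counter = 0
        self.state = "released"

    def sample(self, raw_signal):
        """Call once per sampling event with the raw pin value."""
        if raw_signal:                       # high -> reset
            self.counter = 0
            self.state = "released"
        elif self.counter < STABLE_COUNT:    # low -> keep counting
            self.counter += 1
            if self.counter == STABLE_COUNT:
                self.state = "pressed"
        return self.state

sw = CounterDebouncer()
# A noisy press: two bounces, then a clean run of low samples.
states = [sw.sample(s) for s in [0, 1, 0, 1] + [0] * 10]
# Only the final sample, the 10th clean low in a row, reads 'pressed'.
```

Any single high glitch resets the count, so the state can only flip after 10 ms of uninterrupted contact.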

Shift Register Method

This is similar to the counter method; the only difference is that it uses a shift register. The algorithm assumes an unsigned 8-bit register value, as usually found in microcontrollers.

First, set the shift register variable to 0xFF. Set up a sampling event with a period of 1 ms with the help of a timer. On each sample event, do the following.

First, shift the variable towards the MSB (most significant bit) and set the LSB (least significant bit) to the current switch value. If the shift register value equals 0, set the internal switch state to ‘pressed’; otherwise, set it to ‘released’.
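A sketch of the shift register method under the same assumptions as the counter sketch (active-low raw signal, 1 ms sampling; names are my own):

```python
class ShiftRegisterDebouncer:
    """Shift register method: an 8-bit history of raw samples,
    active-low (0 = contact closed), sampled every 1 ms."""

    def __init__(self):
        self.history = 0xFF          # start in the 'released' state
        self.state = "released"

    def sample(self, raw_signal):
        """Shift toward the MSB and put the raw bit into the LSB."""
        self.history = ((self.history << 1) | (raw_signal & 1)) & 0xFF
        # eight consecutive low samples clear the register to zero
        self.state = "pressed" if self.history == 0x00 else "released"
        return self.state

sw = ShiftRegisterDebouncer()
# A bounce at the second sample delays the 'pressed' verdict until
# eight clean low samples have been shifted in.
states = [sw.sample(s) for s in [0, 1, 0] + [0] * 8]
```

The register is simply a sliding 8-sample window: a single ‘1’ anywhere in the window keeps the state ‘released’, giving the same glitch rejection as the counter at the cost of one byte.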

IoT Sensor Bounce

Recently my team has been working on telemetry involving OCR decoding of license plates. I consider data from an OCR routine, a temperature sensor or a push button to be the same kind of thing, and the telemetry can be debounced in much the same way.

Collection of European license plates from different countries

First of all, we needed to clean up the data stream by filtering out incorrect values. Since there are no check digits on license plates, we chose to trust a result only if the camera returned three matching plates within five iterations.
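Translated into code, that rule might look like the sketch below (class and variable names are hypothetical; the production pipeline is more involved):

```python
from collections import Counter, deque

class PlateDebouncer:
    """Trust an OCR result only when the same plate text appears
    at least `required` times within the last `window` readings."""

    def __init__(self, window=5, required=3):
        self.readings = deque(maxlen=window)
        self.required = required

    def add(self, plate):
        """Feed one OCR reading; return the trusted plate or None."""
        self.readings.append(plate)
        text, count = Counter(self.readings).most_common(1)[0]
        return text if count >= self.required else None

ocr = PlateDebouncer()
# A misread ('A81Z34') is outvoted once 'AB1234' shows up 3 times.
results = [ocr.add(p) for p in ["AB1234", "A81Z34", "AB1234", "AB1234"]]
```

This is the shift-register idea again, just with plate strings instead of bits: a sliding window plus a stability threshold.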

If you want to know more about how to debounce data streams or if you have any questions, please reach out to me: bjorn.nostdahl@gunnebo.com

 

Gunnebo Business Solutions, MQTT, Protocols

MQTT for Dummies

In this article, I will be discussing one of the most trending topics in IoT. I will take you through a beginner-level tutorial on MQTT, which is currently the most used protocol in IoT projects.


MQTT stands for Message Queueing Telemetry Transport Protocol. To put MQTT in a nutshell, it is  “A lightweight event and message-oriented protocol allowing devices to asynchronously communicate efficiently across constrained networks to remote systems”. I know that this doesn’t really help much. So let’s try to decode that definition and understand what MQTT is and how to use it.

What is MQTT?

Again, for people who have no idea about MQTT, it is a protocol for machine-to-machine communication. It uses a publisher-subscriber model for communication. If you are from a programming background, you probably would have some knowledge about the publisher-subscriber model. Anyway, we will discuss the publisher-subscriber model and how MQTT works later in the tutorial.

MQTT over HTTP for IoT

Before discussing how MQTT works, let’s first understand how it came into existence. MQTT emerged as a replacement for HTTP, because HTTP could not properly answer the challenges in IoT and M2M projects. Unlike web applications, IoT projects have some peculiar challenges. One of the main concerns is that IoT requires an event-driven paradigm. Some of the features of this paradigm are:

  • Emitting information one-to-many 
  • Listening to events whenever they happen 
  • Distributing minimal packets of data in huge volumes 
  • Pushing information over unreliable networks  

Some other challenges you face in an M2M application:

  • Volume (cost) of data being transmitted 
  • Power consumption 
  • Responsiveness 
  • Reliable delivery over fragile connections 
  • Security and privacy 
  • Scalability

MQTT successfully copes with these challenges thanks to its features.

Why MQTT is good for M2M and IoT applications

MQTT has unique features you can hardly find in other protocols, like:

  • It’s easy to implement in software, as it is a lightweight protocol.
  • MQTT is based on a messaging technique, which makes data transmission faster than its alternatives.
  • It uses minimal data packets, which results in low network usage.
  • Low power usage; as a result, it saves the connected device’s battery.
  • Most importantly, it works in real time, which makes it ideal for IoT applications.

We learnt earlier that MQTT works through a publisher-subscriber model. In a publish-subscribe system, the publisher sends its messages to a topic, and every subscriber of that topic receives the message. In MQTT, the broker handles the topics and the messaging process, while MQTT clients act as publishers and subscribers.
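The publisher-subscriber model itself can be sketched in a few lines of Python. This is an in-memory toy to illustrate the decoupling, not a real MQTT broker:

```python
class ToyBroker:
    """Minimal publish-subscribe hub: subscribers register callbacks
    per topic, and publishers never see the subscribers directly."""

    def __init__(self):
        self.subscriptions = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # every subscriber of this topic receives the message
        for callback in self.subscriptions.get(topic, []):
            callback(topic, message)

broker = ToyBroker()
received = []
broker.subscribe("home/kitchen", lambda t, m: received.append((t, m)))
broker.publish("home/kitchen", "22.5C")   # delivered to the subscriber
broker.publish("home/garage", "open")     # no subscriber, so dropped
```

Note that the publisher and subscriber never reference each other, only the topic; this decoupling is what the real MQTT broker provides over the network.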

Components of MQTT

To learn how MQTT works, we have to understand some concepts. The fundamental components of the MQTT protocol are explained below.

Broker

The broker is a server that handles the communication and data transmission between the clients. It is responsible for the distribution, management and storage of data sent and retrieved by the clients. The broker acts like a centralized hub that regulates the message exchange. 

If a broker breaks down, the whole communication process breaks down, as there is no way for the clients to communicate with each other directly. Broker bridging was therefore introduced to prevent such cases and build a fail-safe broker network.

There are a number of broker applications available, including the popular Mosquitto and HiveMQ, or you can use cloud-based brokers from providers such as IBM or Azure.

Clients (Publisher, Subscriber)

These are the endpoints that publish or retrieve the data distributed by the broker. Each client is assigned a unique ID to identify itself and its session when connected to the broker. A client can be a publisher, which publishes messages under a specific topic, a subscriber, which receives messages relevant to a topic, or both at once.

Message

These are the chunks of data sent and received by the clients. Each message consists of a command and a payload section. The command part determines the type of message; there are 14 message types in MQTT.

Topic

This is the namespace or literally the topic that describes what the message is about. Each message gets assigned to a topic and clients can publish, subscribe or do both to a topic. The clients can also unsubscribe from a topic if they want to. MQTT topics are just strings with a hierarchical structure. 

Assume there is a topic called “home/kitchen”. We call home and kitchen the levels of the topic, with home being a higher level than kitchen. Topics can also use wildcards such as ‘+’ and ‘#’.
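The wildcard rules can be illustrated with a small matcher. This is a simplified sketch of the matching semantics, not the full MQTT specification:

```python
def topic_matches(pattern, topic):
    """Simplified MQTT topic matching: '+' matches exactly one level,
    '#' (only as the last level) matches the rest of the topic."""
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True               # swallows all remaining levels
        if i >= len(t_levels):
            return False              # topic is shorter than pattern
        if p not in ("+", t_levels[i]):
            return False              # literal level must match
    return len(p_levels) == len(t_levels)

topic_matches("home/+", "home/kitchen")        # True: one level
topic_matches("home/#", "home/kitchen/oven")   # True: any depth
topic_matches("home/+", "home/kitchen/oven")   # False: too deep
```

So a subscription to “home/+” covers every room, while “home/#” also covers everything nested below those rooms.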

Publish

This is the process of a client (publisher) sending data to the broker under a topic, to be distributed among the clients (subscribers) who have subscribed to that topic.

Subscribe

This is the process of Clients (Subscribers) receiving data specific to a topic they have previously subscribed to, from the Clients (Publishers) through the broker.

QOS: Quality of Service

Each message is given an integer value from 0 to 2 to specify the delivery mode. This is known as Quality of Service. There are three different types of QOS.

  • 0 (Fire and forget) – the message is delivered at most once, with no acknowledgement; the fastest delivery mode.
  • 1 (Acknowledgement) – the message is delivered one or more times until an acknowledgement is received.
  • 2 (Synchronized) – the message is delivered exactly once; guaranteed delivery, but comparatively slower.

Practical use of MQTT

It’s time to do something practical and get used to dealing with the MQTT protocol. As you learnt previously, there are MQTT clients for every major programming language. I will use the Paho Python MQTT client, as I am a fan of Python and it is probably the best MQTT client out there.


First, you need a broker to create an application with MQTT. One of the most popular MQTT brokers is Mosquitto. You can install it with the following command. 

sudo apt-get install mosquitto

We set it up to work on localhost; by default, Mosquitto listens on port 1883. Next, install the MQTT client with pip.

pip install paho-mqtt

This command installs the Python MQTT client library on your machine. The core of the library is the Client class, which provides all of the functions to publish messages and subscribe to topics.

There are several important methods in Paho MQTT client class which you should know:

  • connect()
  • disconnect()
  • subscribe()
  • unsubscribe()
  • publish()

Each of these methods is associated with a callback.

Publishing a message

One of the main tasks you do with MQTT is publishing messages. A simple code that publishes a message usually has 4 steps.

  • Import the paho.mqtt.client class
  • Create a client instance with the Client() constructor
  • Connect to the broker with the connect() method
  • Publish messages with the publish() method

import paho.mqtt.client as mqtt

clientName = mqtt.Client("uniqueClientId")
clientName.connect("localhost", 1883, 60)
clientName.publish("TopicLevel1/test", "Your Message Here")
clientName.disconnect()

Most of the code is self-explanatory. First, you create an instance of an MQTT client. Then you connect to the broker running on localhost. The client then publishes its message on the “TopicLevel1/test” topic, and after that disconnects from the broker.

Subscribing to a topic

You know that MQTT is not a one-to-one messaging protocol, as it connects many devices. The trick is that a message from any device is assigned to a topic, and any device subscribed to that topic will receive it.

You can subscribe to a topic with the subscribe() method of the Client class. Subscribing involves the same steps as publishing, so I will not repeat them; you can easily identify them in the code.

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
  client.subscribe("topic/test")

def on_message(client, userdata, msg):
  if msg.payload.decode() == "Disconnect!":
    client.disconnect()

subscriber = mqtt.Client("subscriberId")
subscriber.on_connect = on_connect
subscriber.on_message = on_message
subscriber.connect("localhost", 1883, 60)

subscriber.loop_forever()

In this application, the client works as a subscriber. It subscribes to the topic through the broker, which in this case runs on localhost. Whenever it receives a message, the on_message() callback is called; if the received message is “Disconnect!”, the client immediately disconnects from the broker. This is a very simple use of a subscriber; you can write more complicated logic using the same callback functions.

So in this article, you got a concise yet comprehensive idea of MQTT. It’s time to move on to the conclusion to recap the gist of the article.

Conclusion

MQTT is a lightweight, flexible, simple but very efficient protocol that has a definite advantage over others when it comes to IoT and M2M solutions, considering its low bandwidth, low power consumption, response time and versatility. In conclusion, MQTT is arguably the best protocol so far for IoT development.

If you want to know more about MQTT, check the links below, or if you have any questions, please reach out to me: bjorn.nostdahl@gunnebo.com

MQTT and ActiveMQ on RPI

MQTT for PIC Microcontrollers

 

 

Gunnebo Business Solutions, IBM International Business Machines, Node RED

Node-RED on SIMATIC IoT 2040

With the high pace at which the technology industry is developing, many different fields have become hot zones. New innovations motivate researchers to create better, more helpful devices and technologies. However, the more we advance, the more complicated and sophisticated technology gets. This is seen most clearly in hardware, where the number of components keeps increasing year by year to keep up with demand.

SIMATIC IOT2040 By Siemens

A leading company in innovation and development is Siemens, based in Germany. Siemens specializes in technologies for the industry, energy, healthcare, and infrastructure & cities sectors. With many powerful, groundbreaking products on the market, Siemens has also taken an interest in the IoT field, releasing its SIMATIC IOT2000 series. The series targets industry, allowing different machines to analyse and utilize data sources from all around the globe.


Current issues include weak communication with overseas machinery due to differing languages and source codes. The SIMATIC IOT2040 is the up-to-date version in the SIMATIC IOT2000 series. This version includes the following:

  • Energy-saving processor, with many compatible interfaces including: Intel Quark x1020 (+Secure Boot), 1 GB RAM, 2 Ethernet ports, 2 x RS232/485 interfaces, battery-backed RTC.
  • Supports Yocto Linux.
  • Arduino shields and miniPCIe cards can be used for expansions.
  • Programming with high-level languages
  • Compact design and DIN rail mounting
  • Proven SIMATIC quality offers great ruggedness, reliability and longevity

This version is worth mentioning in particular due to its ability to work with many different types of hardware and solutions. The product is mostly used together with other add-ons, which help deliver the target efficiently.

Setup: SIMATIC IOT2040

The SIMATIC IOT2040 is commonly run from a Micro-SD card (minimum capacity 16 GB); many of the features listed above depend on it. The following is a tutorial guide explaining how to prepare and install the SIMATIC IOT2040:

Preparation:

  1. Download the image from the Siemens support site: https://support.automation.siemens.com
  2. Format your SD card, removing all leftover files, and flash the image to it with a disk imager.
  3. You can now safely insert the SD card into the SIMATIC IOT2040.
  4. Connect the device to your computer/laptop using an Ethernet cable.
  5. A stable network connection should be available at all times. Set your Ethernet adapter to an IP address near 192.168.200.1 (subnet mask 255.255.255.0), since 192.168.200.1 is the static IP of the IOT2040.
  6. SSH into 192.168.200.1 as user root and set a password for root.
  7. Then run:
  8. Now you are ready to run the installation for the SIMATIC IOT2040.

Installation:

  1. Edit the file /etc/opkg/base-feeds.conf and add these lines:
  2. Now run opkg update, after which you can install git directly.

Node-Red: Flow Based Solution

One of the most modern tools considered a breakthrough is Node-RED, a development tool originally created by IBM to wire hardware devices together with APIs and online services. The technology is flow-based, inspired by the Internet of Things (IoT). In simple terms, it is browser-based software that lets users develop applications using flow diagrams. It was created to simplify development, making it accessible to users with basic knowledge, and it focuses on wiring together software and online services directly over the internet.


These technologies come in really handy when used together; both tools can work in parallel toward the same final result. Node-RED eases the use of the SIMATIC through simple flow diagrams. The SIMATIC is generally a very powerful tool, yet too complicated for ordinary users; Node-RED is therefore crucial here, letting you control the development process and the wiring of the hardware to online services as smoothly as possible.

Setup: Node-Red

The following is a tutorial guide explaining how to prepare and install Node-RED:

  1. Through the Software menu, go to the Manage packages page.
  2. Set Node-RED to auto-start together with the Mosquitto MQTT broker.
  3. This is where we integrate the SIMATIC IOT2000 with Node-RED: install the nodes for the SIMATIC.
  4. In the directory /home/root/.node-red, create a folder named nodes and place the nodes installed in the previous step there.
  5. Put custom nodes here if needed, for example from a Git repository.

    Dependencies and nodes under npm can be installed directly to /home/root/.node-red.

 

If you want to know more about IoT and Node-RED, feel free to contact me at: bjorn.nostdahl@gunnebo.com 🙂

Artificial Intelligence (AI), Commercial, Gunnebo Business Solutions, Gunnebo Retail Solution, Machine Learning (ML), USA

Autonomous and Frictionless Stores

Earlier this year, I visited the US for a couple of weeks, and having a genuine interest in retail technology, I visited quite a few retail stores. I went to see classical stores, but also had the chance to preview the future of retail: autonomous and frictionless stores!

Customers in this digital world don’t want to spend too much time shopping; they want everything to happen fast and are looking for a seamless shopping experience. That is how the concept of frictionless stores came to exist. Frictionless stores are one of the biggest new things in consumer shopping.

iot smart retail futuristic technology concept, happy girl try to use smart display with virtual or augmented reality in the shop or retail to choose select ,buy cloths and give a rating of products
Photo: Adobe Stock

What are Frictionless Stores

The concept of frictionless stores started a few years ago, and when I talk to retailers, it is one of the topics that always pops up. All major brands are looking for innovative ways to create a better customer experience, and frictionless stores are one way to make that happen. These stores improve the shopping experience to the point where customers don’t have to wait at any step, whether selecting, receiving or paying for a product. Initially, “frictionless” referred only to easier, low-hassle shopping, but as innovations such as mobile wallets, digital receipts, free and fast shipping, and one-click purchasing emerged and began to reshape the consumer shopping experience, the definition was reshaped as well. Today, a frictionless experience means more than just less hassle: it means greater speed, personalization, and wow experiences.

How Frictionless Stores work

Let’s try to understand how frictionless stores work. They connect buyers and sellers in a way that gives buyers the ability to instantly find, compare and buy the products and services they need, and customers should feel that they are in full control. The concept and technology have evolved over time, and nowadays customers expect to have this experience through their smartphones. Retailers and brands keep finding new ways to redefine the frictionless store to provide the best possible shopping experience; they need that commitment to stay ahead of the competition. As a result, frictionless shopping nowadays means eliminating anything that negatively impacts the customer experience.

Importance of Frictionless Stores

How has frictionless shopping fared according to research? A study by Alliance Data found that customers of all generations are looking for great service and an ideal shopping experience, and this holds true across the world. If a brand fails to deliver what they want, customers will find a different one. According to the research, 76 percent of consumers said they give brands only two to three chances before they stop shopping with them, and another 43 percent said their main reason for leaving a brand is a poor shopping experience. What all this means is that when customers encounter friction, they abandon the brand quickly, probably without giving it a second chance.

Amazon Go Stores

Building on the frictionless concept, Amazon introduced Amazon Go stores. What is special about Amazon Go is that you don’t have to wait at a checkout; you no longer have to stand in queues. The first Amazon Go store was a grocery store of 1,800 square feet. The concept spread fast, and you can now find many Amazon Go stores across the USA.


How is this even possible? What technologies have they used? Amazon has been doing research in computer vision, sensor fusion, and deep learning, and Amazon Go is the fruit of that work. You need the Amazon Go application to shop at Amazon Go stores. All you have to do is open your Go app, pick the products you want and just leave. The store detects when a product is taken or returned to a shelf, and the app keeps track of what you picked up in your virtual cart, where you can revisit the details. When you finish shopping, you will be charged automatically and will receive a receipt for what you bought.

Buy Awesome foods with Amazon Go stores

You may now wonder what you can buy there. What items are available at Amazon Go stores? I will just quote how one Amazon Go store marketed itself: “We offer all the delicious meals for breakfast, lunch or dinner. We have many fresh snack options made every day by our chefs at our local kitchens and bakeries. You can buy a range of grocery items from milk and locally made chocolates to staples like bread and artisan cheeses. Try us, you will find well-known brands you love in our shops.” By the way, don’t expect to go in there and buy books, tech, clothes or anything else that Amazon sells online. It’s basically quick-and-easy food and other groceries. It’s just that there’s no cashier.


Amazon Go stores have attracted so many shoppers that it seems clear this concept will make a huge impact on the future of retail stores.

If you want to know more about frictionless stores, feel free to contact me at: bjorn.nostdahl@gunnebo.com or check out these related articles:

Microchip, Microcontroller, MQTT, PIC24, Protocols

MQTT for PIC Microcontrollers

The IoT (Internet of Things) world is booming: in 2018 there were 23.14 billion connected devices, a number projected to reach 30.73 billion by 2020 (source: statista.com).

Embedded systems are at the center of this IoT drive; smart homes, smart cars and the like all have embedded systems as their backbone.


Microcontrollers are the drivers of embedded systems. They give devices the ability to collect data from the environment, send and receive that data, and execute instructions or carry out specified actions, like turning on the heater when the room temperature drops below a specified level.

ARM and PIC microcontrollers are the most common microcontrollers used in embedded systems and IoT. When these devices send and receive information over a network (say, the internet), they do so using transfer and transport protocols that control the transfer process.

The Hypertext Transfer Protocol (HTTP) is the most popular communication protocol used over the internet to send and receive data, and in IoT communications it is still used in most applications. A more efficient alternative is the Message Queuing Telemetry Transport (MQTT) protocol, which is optimized for low connectivity and low power requirements. The MQTT protocol finds immediate application in remote locations where batteries are used and need to be conserved.

HTTP transfers data via the request-response paradigm: devices query other devices directly for data. This increases bandwidth requirements and power consumption, and since devices have to respond to requests one after the other, being synchronous, HTTP cannot support multiple, asynchronous, simultaneous communications. This is a disadvantage for IoT applications where many devices communicate at the same time.

The MQTT protocol solves these problems.

What is the MQTT protocol?

I gave a detailed description of what MQTT is in a previous post. But for this post, I’ll reintroduce just the important points.

MQTT is a lightweight, broker-based publish/subscribe messaging protocol designed to be open, simple and easy to implement, and to optimize bandwidth and power consumption. It is a machine-to-machine (M2M) communication paradigm that allows devices to send and receive data faster and more reliably without being connected directly.

MQTT is an immediate fit where the network is expensive, unreliable or low-bandwidth, and where the embedded devices have limited processor or memory resources.

The MQTT protocol stands in direct contrast to the Hypertext Transfer Protocol (HTTP), which is popularly used for sending data and communicating with devices over the internet.

MQTT provides one-to-many communication and message distribution. It is unconcerned with the sender or the content of the message, and it uses TCP/IP to provide network connectivity. It has a small transport overhead (a message sent with this protocol can have a header as small as 2 bytes), and it includes features that ensure lost connections or data can be recovered.
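To make that 2-byte figure concrete, here is a minimal Python sketch (not the library’s actual code) that builds an MQTT 3.1.1 PUBLISH packet by hand. For any message whose remaining length fits in 127 bytes, the fixed header really is just two bytes:

```python
def encode_remaining_length(n: int) -> bytes:
    # MQTT's variable-length "remaining length" field: 7 bits per byte,
    # high bit set while more bytes follow (1 byte for lengths up to 127)
    out = bytearray()
    while True:
        digit = n % 128
        n //= 128
        if n > 0:
            digit |= 0x80
        out.append(digit)
        if n == 0:
            return bytes(out)

def publish_packet(topic: str, payload: bytes) -> bytes:
    # QoS 0 PUBLISH: 0x30 = packet type 3 (PUBLISH), no flags set
    t = topic.encode("utf-8")
    variable_header = len(t).to_bytes(2, "big") + t   # length-prefixed topic name
    remaining = variable_header + payload             # QoS 0: no packet identifier
    return bytes([0x30]) + encode_remaining_length(len(remaining)) + remaining

pkt = publish_packet("sensors/temp", b"21.5")
# pkt[0:2] is the entire fixed header for this small message: 2 bytes
```

The topic name and payload are of course illustrative; the point is how little framing MQTT adds compared with a typical HTTP request line plus headers.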

MQTT in microcontroller connectivity

Using the MQTT protocol in microcontrollers will improve the efficiency of data transfer, reduce the power and bandwidth requirements and introduce asynchronous communication among devices. All these come in handy with the limited memory capacity of microcontrollers, need for faster and more reliable data transfer among IoT devices and increase in IoT devices in circulation and mainstream adoption of the technology.

This protocol guarantees faster, more power-efficient (than HTTP), lower-latency and more loosely coupled communication among devices. That is because the MQTT protocol works on a publish-subscribe paradigm: there is no direct connection or communication between network devices; instead there is a middleman, called the broker.

To use the MQTT protocol for communication with your microcontroller, a broker is required to collect and dispatch data among devices. The broker (also known as the server) facilitates the publish-subscribe model, in a similar fashion to client-server models. The clients (that is, the connected devices) subscribe to virtual channels, known as topics. A device that wants to send out information (known as a message) publishes it on a specified topic to the broker, and the broker then distributes the message to all the clients that subscribe to that topic.
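The publish-subscribe flow just described can be sketched as a toy in-memory broker in Python. This is a deliberate simplification for illustration only; a real broker such as Mosquitto adds sessions, QoS levels, retained messages and network transport:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory broker illustrating the publish/subscribe flow."""
    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, message):
        # The broker, not the publisher, fans the message out to subscribers;
        # publisher and subscribers never talk to each other directly.
        for deliver in self.subscriptions[topic]:
            deliver(topic, message)

broker = Broker()
received = []
broker.subscribe("home/temperature", lambda t, m: received.append((t, m)))
broker.publish("home/temperature", "21.5")   # delivered to the subscriber
broker.publish("home/humidity", "40%")       # no subscribers: silently dropped
```

The topic names are made up; the key design point is the decoupling: the publisher of `home/humidity` neither knows nor cares that nobody is listening.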


The broker is the core of the MQTT operation: the middleman in every data transfer, standing at the center of M2M communication. It receives messages (on a particular topic) from devices connected using the protocol and aggregates them for transfer to the other devices that subscribed to that topic.

This kind of communication provides continuous availability and distribution of data among devices. Its big advantage is the lack of dependence on a direct M2M connection (which besets the HTTP communication system): devices work and run on their own, independent of the presence or interruption of other devices. It also provides real-time data, because the broker constantly pushes received messages to the connected devices. Messages with no subscribers are discarded, while those that have subscribers are dispatched to the corresponding devices. With this, an interruption in the connection of one device does not affect the entire network; instead, the messages sent while it was away are retained and pushed to it when it comes back on the network. The MQTT protocol is more data-centric than identity-centric.

The programmable intelligent computer (PIC) is a Harvard-architecture microcontroller regarded as an industry standard due to its robust features. It is a dedicated microcontroller, unlike single-board computers such as the Raspberry Pi, and it provides the functionality and features embedded projects need at a much lower resource cost.

My previous article covered deploying the MQTT protocol on the Raspberry Pi; Gunnebo AB’s mikroPascal MQTT library now brings the MQTT protocol to the PIC microcontroller.


Our MQTT library for the PIC microcontroller brings faster and better connectivity to the PIC, enabling it to communicate using the MQTT protocol. The mikroPascal library implements the MQTT protocol with a QoS of 0, and it is built on the existing TCP/IP stack of the mikroPascal TCP/IP library, lib_enc600_v3_5, adding the MQTT layer on top.

The MQTT library is built as a wrapper around the TCP/IP protocol, providing functions to publish and subscribe to text messages on specific topics by means of the MQTT protocol.

The library carries out the following core functions:

  • Establishes TCP/IP sockets,
  • Formats MQTT packets and prepares them for transmission,
  • Extracts contents from subscription messages arriving in MQTT packets,
  • Transmits MQTT packets over TCP/IP,
  • Provides test (ping) methods to test the health of connection,
  • Provides functions for subscribing to and publishing to topics as well as unsubscribing from topics.

The library reduces RAM requirements and provides better performance by supplying the library functions with input parameters that are pointers to arrays.


The basic workflow of the library on the PIC microcontroller is as follows: the microcontroller reserves an address for the message and provides a pointer to this address; the MQTT library takes it from there and uses the pointer to send messages from, or receive messages to, the controller.


To communicate via the MQTT protocol on your PIC project, there are some prerequisites that your project must meet.

With the mikroPascal MQTT library, we implement this lightweight protocol for the PIC microcontroller. The library can be downloaded here.

The library brings all the benefits of the MQTT protocol to PIC users enabling users to package and send data in their IoT project seamlessly, faster, with less memory requirement and wider connection with other devices.

The library can be downloaded here at the Libstock repo, where you can also run a demo of the library to see how it works, and you can check out our open-source code on GitHub.

We welcome you to contribute to this library, and please also fork it for other microcontrollers. If you have any questions, please reach out to me: bjorn.nostdahl@gunnebo.com

Security, TLS/SSL

History of SSL/TLS Attacks and Patches

During the last few days, it has been reported that Yubico is replacing some of their physical security keys due to a firmware problem. This reminds us that IT security is constantly evolving: bugs are found, and you need to keep up to date to keep your systems secure. My previous posts regarding SSL/TLS and x.509 have been quite popular, so here comes another security-related post 🙂

When hosting a global Software as a Service platform, it is vital to be in control of Cloud Security. Cloud Security consists of a set of policies, controls, procedures and technologies that work together to protect cloud-based systems, data and infrastructure. These security measures are configured to protect data, support regulatory compliance and protect customers’ privacy as well as setting authentication rules for individual users and devices.


One way of securing these services is SSL/TLS encryption of communication. SSL was first implemented by Netscape in 1994, and this post attempts to provide a historical view of the SSL/TLS protocol as attacks and countermeasures were introduced. If one reads the current TLS v1.2 or v1.3 protocol specifications, there are many aspects of the design which have no obvious reason, but whose origin lies in the long list of academic research which has broken previous versions.

The birth of SSL

As SSLv1 was never released, we start with SSLv2, which was designed and implemented by Netscape in 1995. The SSLv2 protocol is very different from later versions, but has a similar traffic flow. The client connects to a server and sends a “hello” which identifies some aspects of the client’s capabilities. The client and server negotiate which cipher they wish to use, and the client sends a random key encrypted with the server’s RSA public key, which is subsequently used to encrypt the message traffic.

The protocol quickly proved to have numerous flaws, and within a couple of years an effectively new protocol, SSLv3, was designed to replace it. SSLv2 was formally deprecated in 2011, and no modern TLS library supports it anymore.

SSL as we know it

SSLv3 is the first SSL version which is recognizably similar to modern TLS. As in SSLv2 the client connects to a server, a handshake is performed, and subsequent records are encrypted using a key that is shared using public key cryptography. However there are several essential differences.

One key addition in SSLv3 is the possibility of using algorithms with forward secrecy. In this mode, instead of decrypting an RSA ciphertext sent by the client, the client and server agree on a key using a Diffie-Hellman key exchange, and the server signs a message which allows the client to verify that it is performing the key exchange with the intended server. However, RSA-based key exchange was still retained, and widely used.

In SSLv3 the entire handshake is hashed together and used with the agreed keys to create two “Finished” messages which the client and server exchange on the encrypted channel. These ensure that an attacker cannot modify traffic between the client and server in such a way as to change the outcome of the handshake. For instance, if a MITM could remove all of the strong ciphersuites from a client hello message and force a downgrade to a weak cipher, the protocol could be easily attacked.

In SSLv3, messages are encrypted using either the stream cipher RC4, or else a block cipher in CBC mode. In CBC mode, the plaintext must be a multiple of the cipher’s block size (typically 8 or 16 bytes), which requires a padding scheme to increase the length of messages that are not correctly sized. In SSLv3, the length of the padding is indicated by a single byte at the end of the record, and that number of bytes is discarded by the receiver. The values of the padding bytes themselves are not specified.

The message is authenticated using a slight variant of HMAC (based on an early HMAC design prior to HMAC’s standardization). But, critically, in SSLv3 it is the plaintext (rather than the ciphertext) which is authenticated, and the CBC padding bytes are not authenticated at all. These errors proved to be the source of a number of serious exploits which plagued TLS for years.

eCommerce compels TLS v1.0

After a time it became clear that the SSL protocol would prove crucial for commerce on the early Internet, and eventually the development was moved to the IETF. The name ended up changing due to a political compromise between Netscape and Microsoft, who had a competing PCT protocol. However the actual TLS v1.0 specification is only slightly different from SSLv3.


The most notable changes were the replacement of the SSLv3-specific HMAC variant with the standard version, the replacement of the SSLv3-specific PRF with a new design, and a tightening of the rules for how blocks are padded. In SSLv3 the padding bytes were unspecified, while in TLS v1.0 and later versions the padding must follow a specified format. The block padding change was at the time merely a simplification, but it proved critical when the POODLE attack was developed in 2014.
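The difference between the two padding rules can be sketched in Python (an illustrative simplification; real TLS records also carry a MAC before the padding):

```python
import os

BLOCK = 16  # e.g. the AES block size

def pad_sslv3(data: bytes) -> bytes:
    # SSLv3: only the final byte (the padding length) is specified;
    # the padding bytes themselves are arbitrary, so a receiver cannot
    # verify them -- the property POODLE exploits
    n = BLOCK - (len(data) + 1) % BLOCK
    return data + os.urandom(n) + bytes([n])

def pad_tls(data: bytes) -> bytes:
    # TLS v1.0+: every padding byte must equal the padding length,
    # so the entire padding block is checkable on receipt
    n = BLOCK - (len(data) + 1) % BLOCK
    return data + bytes([n]) * (n + 1)
```

A TLS receiver can reject a record whose padding bytes do not all match the length byte; an SSLv3 receiver, by design, cannot make that check.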

At the time the POODLE attack was developed, SSLv3 was already obsolete, but many browsers included a fallback mechanism: if the server rejected a TLS v1.0 or higher connection, the browser would subsequently try to connect using SSLv3. A man-in-the-middle attacker could intercept the TLS v1.0 connection, send an (unauthenticated) TLS alert closing the connection, and then attack the SSLv3 connection. There is no direct protocol fix for POODLE, since it is not possible to retroactively fix the padding bytes in unpatched clients. The main resolutions were the disabling or removal of SSLv3 support on both client and server sides, and the creation of the “fallback SCSV” indicator. The fallback SCSV allows a client to indicate to the server that it is performing a version fallback, which is done by including a special value in the ciphersuite list which cannot actually be negotiated but simply serves as a flag understood by servers that recognize it (SCSV is short for “Signaling Cipher Suite Value”). A special ciphersuite value was chosen because in a TLS v1.0/v1.1 client hello there is no other way of reliably indicating such information.

If a server sees a connection from a client indicating fallback, but the client is attempting to negotiate an older version than what the server supports, it closes the connection. Then, when a MITM attacker tries to force a downgrade, when the client opens the vulnerable SSLv3 connection, the server will detect the SCSV and close the connection, preventing the attack. It is not possible for the MITM to remove the SCSV, because the contents of the handshake transcript are authenticated by the Finished messages.

Browser Exploit leads to TLS v1.1

TLS v1.1, released in 2006, involves a single small patch to TLS v1.0: each record now gets its own explicit IV, whereas in TLS v1.0 and all earlier versions the CBC state was carried across records. Another way of thinking about the old behavior is that each packet was encrypted with an IV equal to the last ciphertext block of the previous record, which an attacker can observe. The change resolved an issue that had been identified in 2006 by a researcher. Later, in 2011, the attack was refined via use of JavaScript and dubbed BEAST, providing a practical break of TLS v1.0 and earlier when used with HTTPS.

At the time BEAST was a substantial issue because many implementations of TLS had not been updated to support TLS v1.1 or v1.2. A workaround was developed for SSLv3/TLS v1.0 connections, commonly termed 1/n-1 record splitting. Each CBC encrypted record would be split into a 1 byte record followed by a record containing the rest of the plaintext. Since the first record included a message authentication code (which could not be predicted by an attacker who does not know the session key), this serves as a way of randomizing the IV.
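The record-splitting workaround itself is tiny; sketched in Python:

```python
def split_record(plaintext: bytes) -> list:
    # 1/n-1 splitting: the first byte goes in its own CBC record, so that
    # record's MAC (unpredictable to an attacker without the session key)
    # effectively randomizes the IV used for the remaining n-1 bytes
    if len(plaintext) <= 1:
        return [plaintext]
    return [plaintext[:1], plaintext[1:]]
```

The interesting part is not the splitting but the consequence: because the 1-byte record is encrypted and MACed before the second record, its final ciphertext block, which becomes the next IV, is no longer predictable.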

Another common countermeasure was to favor the RC4 stream cipher, which did not have the problems of the CBC ciphersuites. But RC4 dates back to the 1980s, and by 2013 it had been shown convincingly that biases in RC4 ciphertext could allow an attacker to recover secrets sent over a TLS channel, albeit in a scenario requiring access to data from many millions of connections.

The next big step with TLS v1.2

TLS v1.2, released in 2008, was the first major enhancement to the protocol since SSLv3. It adds support for negotiating which digest algorithms will be used (instead of hard coding use of SHA-1), adds support for modern AEAD ciphers, and adds support for extensions.

Extensions are a critical feature which was long lacking in TLS. Each extension is tagged with a type, and implementations are required to ignore extensions which they do not recognize. This feature proved essential for resolving several protocol-level problems which were discovered in the period between TLS v1.2 and v1.3.

Despite adopting several modern cryptographic features, TLS v1.2 also suffered from a number of high profile attacks. The first of these was the renegotiation attack, discovered in 2009. TLS allows both clients and servers to at any time request a new session be renegotiated; effectively a new handshake is performed, but instead of being in the clear it occurs over the already encrypted channel. Several HTTP servers, including IIS, make use of this for client authentication. The initial connection is encrypted but not authenticated, and if the client attempts to access a protected resource a renegotiation is performed which includes client certificate authentication. The renegotiation bug breaks
this entirely. First an attacker creates a new TLS connection to the server, and sends some arbitrary data (for example, the start of an HTTP request). The attacker then proxies a legitimate client attempting to connect to the server, and sends the handshake data through its own channel. From the perspective of the server, it appears as if the client has sent some encrypted data, then authenticated itself with a certificate, then sent some additional data which was both encrypted and authenticated. Depending on the server logic, this might allow the attacker to insert data which the server would interpret as having come from the authenticated client. The fix was to properly bind the inner and
outer negotiations, such that it was not possible for the attacker to proxy. This was done by adding a new extension, which was standardized in RFC 5746. With this extension enabled, renegotiations inside an existing channel are cryptographically bound to the existing channel using the value of the TLS finished message. Since in the attack the client is unaware of being proxied within another TLS channel, the renegotiation will fail, preventing the attack.

The problems with renegotiation did not end there, however. In 2014 a new set of attacks was developed, including the devastating triple handshake attack. In this attack, a client connects to a malicious server. The malicious server opens a new TLS connection as a client with some victim server. It forwards the client’s random value, then sends back the victim server’s random back to the client. Upon receiving the client’s encrypted master secret, it forwards the same to the victim server. In the end, there are two TLS connections, one between the client and the attacker, and the other between the attacker and the victim server, and both are using the same session keys. In the next step, the client reconnects to the attacker, resuming its previous session, and in turn the attacker resumes its connection with the victim server.

Due to how session resumptions work, in this case the finished messages in the two handshakes will be identical. Then, the malicious server can attempt to perform some action on the victim server which triggers a request for client certificate authentication (for example, requesting access to a protected resource). It forwards the authentication request to the victim client, who responds. The attack proceeds much like the renegotiation attack of 5 years prior, and since the finished messages of the two connections are in this case identical, the previously devised extension fails to detect the proxying. This was addressed with a new extension, the extended master secret, which ensures the master secret for a session is bound to the entire handshake transcript, instead of just the client and server random fields.

Implementation errors also caused notable problems for TLS v1.2. It has been known since 1998 that the RSA key exchange is vulnerable to an oracle attack, the so-called “million message attack”. In a nutshell, before encrypting the master secret with a server’s RSA public key, the client pads it in a certain way. Upon decryption, the server must reject any invalid padding which does not conform to the standard. But it turns out that, given access to an “oracle” which tells whether a particular RSA ciphertext is correctly formatted or not, it is possible for an attacker to decrypt any ciphertext encrypted using that key. A TLS server can act as such an oracle, and problems have been repeatedly found in various implementations over the last 20 years, including the recent ROBOT and CAT attacks.
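The heart of the oracle is the conformance check itself. A Python sketch of the PKCS#1 v1.5 check (simplified; the function name is my own) shows what a server must compute, and whose yes/no result it must never leak through error messages or timing:

```python
def pkcs1_v15_conformant(decrypted: bytes, key_len: int) -> bool:
    # A conformant decrypted RSA block looks like:
    #   00 02 <at least 8 nonzero filler bytes> 00 <secret>
    if len(decrypted) != key_len or decrypted[0] != 0x00 or decrypted[1] != 0x02:
        return False
    try:
        separator = decrypted.index(0x00, 2)   # first zero byte after the filler
    except ValueError:
        return False                           # no separator: malformed
    return separator >= 10                     # filler must span at least 8 bytes

# If a server reveals this boolean for attacker-chosen ciphertexts, it is
# exactly the oracle that Bleichenbacher's million message attack needs.
```

Real implementations must make this check constant-time and, on failure, continue the handshake with a random premaster secret rather than reporting the error.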

Bringing TLS into the future with v1.3

After 10 years and numerous patches, TLS v1.2 was in a state where using it securely required a number of extensions and avoiding a number of known-insecure features such as static RSA key exchange, RC4 ciphersuites, and CBC ciphersuites. TLS v1.3 addresses these issues by omitting them entirely.
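In practice, applications can now enforce this directly. For example, with Python’s ssl module (assuming Python 3.7+ built against OpenSSL 1.1.1 or later), a client context can simply refuse anything older than TLS v1.3:

```python
import ssl

# A client context that refuses anything below TLS v1.3
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # enables hostname and cert checks
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.load_default_certs()
# Static RSA key exchange, RC4 and CBC ciphersuites simply do not exist
# in the TLS v1.3 ciphersuite space, so there is nothing left to disable.
```

Such a context can then be passed to `ctx.wrap_socket(...)` for any outgoing connection.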

In addition, there was a strong desire by many large network players (such as Google, Cloudflare, and Mozilla) to minimize the number of round trips required to handshake, as this directly impacts the user visible performance of web pages. This led to a substantially redesigned handshake protocol which has fewer round trips. I will explore the changes and implications of the TLS v1.3 design in a future post.

If you want to discuss more about SSL/TLS, feel free to contact me at bjorn.nostdahl@nostdahl.com or check out these previous articles on SSL/TLS and x.509:

Thanx to Jack Lloyd for his invaluable input into this post 🙂

Agile, Gunnebo Business Solutions, Methodology, Scrum

Agile and Scrum Methodology Workshop

I recently had the chance to join Henrik Lindberg from Acando for an Agile Scrum workshop. In this post I will write about the workshop and the basics of Agile and Scrum. There is so much to learn and explore in agile, and I hope this introduction will compel further reading.

Agile Methodology

Unless you live offline, you are probably aware of the latest trend in the corporate world: the agile approach. Agile has in recent times grown into a revolutionary movement that is transforming the way professionals work. Agile is a methodology that keeps your priorities in equilibrium: the work is done faster, and project requirements are met with great efficiency.

Working agile, people tend to forget about the four values from the agile manifesto:

  1. Individuals and interactions over processes and tools
  2. Working software over comprehensive documentation
  3. Customer collaboration over contract negotiation
  4. Responding to change over following a plan

Equally important is the twelve principles behind the agile manifesto:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in  development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a  couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work  together daily throughout the project.
  5. Build projects around motivated individuals.  Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Major Differences between Waterfall and Agile

  • The waterfall approach is a sequential model of project management. Here the development team can only move to the next stage if the previous step is successfully completed.
  • In the agile approach, the execution of processes is concurrent. This enables effective communication between the client, the manager, and the team.
  • The waterfall approach is not well-suited for large projects, whereas agile lets you manage complicated tasks with great ease.
  • Agile methodology is being embraced by managers worldwide for its greater flexibility.
  • In agile, the development plan is reviewed after each step, while in the waterfall approach this happens only during the test phase.

Agile development is based on iterative functionality, whereby planning, development, prototyping and many other key phases of the project may occur more than once, in line with the project requirements. Agile also adheres to the incremental model, where the product is designed, implemented and tested in increments, with the complexity of the tasks increasing in ascending order. The development is termed finished only when every last specification and requirement is met.

When to Use The Agile Methodology?

  • In a Scenario, When You Require Changes to Be Implemented
  • When the Goal of the Project Isn’t Crystal Clear
  • When You Need to Add a Few New Features to the Software Development
  • When the Cost of the Rework Is Low
  • When Time to Market Is of Greater Importance than a Full Feature Launch
  • When You Want to See the Progress in the Sequential Manner

Scrum Methodology

Scrum is the leading agile framework for product success in small-to-big organizations, and it is creating a lot of buzz in the present IT world. Managers worldwide hold the belief that Scrum is far more than the execution of processes and methods; it plays an integral role by supporting teams in meeting their aggressive deadlines and complicated project demands. Scrum is a collaborative agile approach that involves breaking substantial processes down into smaller tasks so that they are done efficiently, in a streamlined manner.

Scrum is a lightweight, agile framework that successfully manages and accelerates project development. The framework is proven to cut down on project complexity and to focus largely on building products that are in accordance with client expectations. People sometimes use Agile and Scrum interchangeably, but there is a big difference: agile is a broad approach, while Scrum is a subset of agile.

There are three principles of Scrum:

  • Transparency
  • Inspection
  • Adaptation

Scrum Roles

Are you interested in switching to the Scrum approach of development? Then, you must know the various Scrum roles.

Three-main-scrum-roles-1_low.png

The Product Owner

He/she is responsible for providing the vision of the product. The product owner plays the central role in breaking the project down into smaller tasks and then prioritizing them.

Responsibilities

  • Defining the Vision
  • Managing the Product Backlog
  • Prioritizing Needs
  • Overseeing Development Stages
  • Anticipating Client Needs
  • Acting as Primary Liaison
  • Evaluating Product Progress at Each Iteration

The ScrumMaster

He or she is someone with extensive expertise in the framework. The ScrumMaster ascertains that the development team is adhering to the Scrum model, and also coaches the team in its practice.

Responsibilities

  • Coaching the Team
  • Managing and Driving the Agile Process
  • Protecting the Team from External Interference
  • Managing the Team
  • Fostering Proper Communication
  • Dealing with Impediments
  • Acting as a Leader

The Development Team

This is a group of qualified developers who form the core of the project development. Each individual on the team brings his or her own unique skills to the table.

Responsibilities

  • The Entire Team Is Accountable for the Work
  • There Are No Titles or Sub-Teams
  • The Team Sits Together to Communicate with One Another

Scrum Artifacts


Artifact #1: Product Backlog

The product backlog is a sequence of fundamental requirements in prioritized order. The requirements are provided by the product owner to the Scrum Team. The product backlog emerges and evolves with time, and the product owner is solely responsible for its content and validity.
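To make the idea concrete, here is a minimal sketch of a product backlog as a priority-ordered list. The item names, fields and priorities are illustrative assumptions, not taken from any real backlog tool:

```python
# A minimal sketch of a product backlog kept in prioritized order.
# Item names and fields are illustrative, not from any real tool.
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int   # lower number = higher priority
    estimate: int   # story points

backlog = [
    BacklogItem("Checkout flow", 1, 8),
    BacklogItem("Search filters", 3, 5),
    BacklogItem("User login", 2, 3),
]

# The product owner keeps the backlog ordered by priority,
# so the team always pulls the most valuable work first.
backlog.sort(key=lambda item: item.priority)
print([item.title for item in backlog])
```

The key point is simply that the backlog is one ordered list, and the ordering is the product owner's responsibility.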

Artifact #2: Sprint Backlog

It is the subset of the product backlog that the team commits to delivering: the sprint's "To Do's." The work in the sprint backlog is sliced down into smaller tasks by the team. All items in the sprint backlog must be developed, tested, documented and integrated to meet the needs of the clients.

Artifact #3: Product Increment

The product increment is a Scrum artifact of significant importance. It must be in line with the development team's "Definition of Done," and it has to be approved by the product owner.

Definition of Done in Scrum Methodology

The Definition of Done varies from one Scrum team to another. It is the acceptance criterion that drives the quality of the work that makes a user story complete. In other words, the Definition of Done is the quality checklist maintained by the development team.

Burndown Chart

The burndown chart is a means to track the progress of a project in Scrum. The ScrumMaster is responsible for updating this chart at the end of each sprint. The horizontal axis of the release burndown chart represents the sprints, while the vertical axis shows the work remaining at the beginning of each sprint.
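The data behind such a chart is simple arithmetic: start from the total estimated work and subtract what each sprint completes. The point totals below are made-up illustration values:

```python
# A minimal sketch of the data behind a release burndown chart.
# Sprint count and point values are invented for illustration.
total_points = 100
completed_per_sprint = [20, 15, 25, 10]  # points finished in each sprint

# remaining[i] = work left at the start of sprint i.
remaining = [total_points]
for done in completed_per_sprint:
    remaining.append(remaining[-1] - done)

# Horizontal axis: sprint number; vertical axis: work remaining.
for sprint, points in enumerate(remaining):
    print(f"Sprint {sprint}: {points} points remaining")
```

Plotting `remaining` against the sprint number gives the familiar downward-sloping burndown line.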

Backlog Refinement

Backlog refinement is the act of adding or updating estimates, details, and ordering for the items in the product backlog. This keeps story descriptions clear and current.

User Story

Commonly known as the "definition of requirement," a user story in Scrum gives the development team enough information to produce a reasonable estimate for the work. User stories are one or two sentences each, backed by a set of conversations that define the desired functionality.

User Story Acceptance Criteria

Acceptance criteria, in terms of the Scrum methodology, are the set of conditions a software product must meet in order to be accepted by the user, customer or other stakeholders. In layman's terms, they are a set of statements that specify the user-facing features, requirements or functionality of an application.

User Story Relative Estimation

Relative estimation is the procedure of estimating the effort of task completion. The estimate is made not in units of time, but by comparing each item with other items of similar complexity.
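One common way to do this is to compare a new story against a reference story and snap the result to a Fibonacci-like point scale. The scale and the helper below are a sketch of that practice, not a prescribed formula:

```python
# A minimal sketch of relative estimation: stories are sized by
# comparison on a Fibonacci-like point scale rather than in hours.
FIB_SCALE = [1, 2, 3, 5, 8, 13]

def nearest_points(relative_size: float) -> int:
    """Snap a raw 'this feels about X times our reference story'
    value to the nearest point on the scale."""
    return min(FIB_SCALE, key=lambda p: abs(p - relative_size))

# Reference story = 2 points; a story that feels ~3x as complex:
print(nearest_points(2 * 3))  # -> 5
```

The coarse scale is deliberate: it discourages false precision and keeps the conversation about relative complexity rather than calendar time.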

Scrum Events

There are five defined Scrum Events.

Sprint Planning

Sprint Planning is an event in the Scrum framework in which the team collaboratively decides which product backlog items to focus on during the sprint, and discusses its initial plan for completing them.

Sprint Goal

The sprint goal is defined as the objective set for the sprint that needs to be met via the implementation of the Product Backlog. The sprint goals are obtained after long discussions between the Product Owner and the Development team.

Daily Scrum

In the Scrum approach, on each day of a Sprint the team meets to discuss progress, plans and impediments; this meeting is known as the Daily Scrum.

Sprint Review

The sprint review is held at the end of each sprint to inspect the product increment.

Sprint Retrospective

The Sprint Retrospective is held between the development team and the ScrumMaster to discuss how the previous Sprint went, and what can be done to make the upcoming Sprint more productive.

After reading this article, you should have a basic overview of the Scrum approach. If you want to talk about agile and Scrum, feel free to contact me at bjorn.nostdahl@nostdahl.com. You can also read more about agile in this article:

Commercial, Fashion, Gunnebo Business Solutions, Innovation, Microsoft Azure, Reflections, Retail

Microsoft Pivot: Envisioning the Future of Digitalization

On May 29th, Satya Nadella, CEO of Microsoft, invited Nordic customers and partners to a small conference in Sweden, putting forth his ambitions for the future. This was Nadella's first-ever visit to Sweden since stepping into the shoes of the company's CEO. He touched upon issues of today's tech world but, most importantly, made Swedes aware of his company's firm belief in global digitization and described what the future holds in store.

Photo: Brian Smale/Microsoft

Self-discipline and excitement-seeking are two pillars of Satya's personality, which have made this Indian-born technologist renowned worldwide; from Asia to Europe, he is met with a warm welcome.

Dinner with Microsoft’s ISV Team

A day before Nadella's address, I was invited to a dinner and social hosted by Joanna and Martin from Microsoft. ISV stands for Independent Software Vendor: individuals or companies who develop, market and sell software running on third-party hardware and software platforms such as Microsoft's.


The term ISV is prominent in the tech world and used by most tech companies, including Microsoft. To put it in layman's terms: when Microsoft was developing Windows, it partnered with numerous companies and individuals to take the project forward on both the technical and non-technical fronts.

The next morning, at Hotel Berns, I had the opportunity to see some of the companies that have implemented their solutions on Microsoft platforms. There, Microsoft and its partners received a pep talk on the future and on the efforts we need to put in to make sure it is heading in the right direction.


The Microsoft tech show commenced in style with the tunes of Sweden's renowned DJ and saxophone artist Andreas Ferrronato. His soul-soothing tunes set the mood 🙂

The Volvo Group Digitizing its Operations

Hillevi Pihlblad from the Volvo Group talked about how employees hate change and how, across the globe, it is not easy to adapt to change. She then illustrated how the Volvo Group has translated change into something positive and made people understand why embracing it can make their lives more convenient.


The H&M Group And The Use of AI To Serve Their Customers The Best

Arti Zeighami, a senior executive and business leader of the H&M Group, talked about how the company is investing in Artificial Intelligence technology to tailor store offerings. Heading the Advanced Analytics and AI function, he gave a presentation on how the H&M Group is implementing advanced algorithms to scrutinize sales and returns, which in turn has helped them more efficiently predict the needs and demands of their customers.


Satya Nadella, The Man of the Moment Taking The Center Stage

Then finally came the moment when Helene Barnekow introduced Microsoft's CEO Satya Nadella, who was welcomed with warm applause from the tech crowd present.


Nadella, who took over the role of CEO from Steve Ballmer in 2014, is globally renowned for his dynamic leadership and a true passion for technology innovation. Prior to becoming CEO, Nadella was Microsoft's EVP of the cloud and enterprise group.

His journey as CEO has transformed Microsoft technologically while also reshaping the company's business model and corporate culture. His empathetic leadership steered Microsoft away from its struggling smartphone strategy to focus on other technical areas such as augmented reality and cloud computing.

He was also responsible for the purchase of LinkedIn, a network of professionals, for around $26.2 billion. Did you know that since he took over as CEO, the company's stock has risen by 150%?


The theme of Satya Nadella's address was how communities and companies are uniting for the digitized future of Sweden. The speech was largely about Microsoft's own digital products and services, and how they can drive the world forward.

In his address to the tech people of Sweden, he threw light on various segments of technology: Artificial Intelligence, digital transformation and innovation. The American giant was in Stockholm to make a big announcement about setting up data centers in the country.

“We have the ambition that the data centers we build in Sweden should be among the most sustainable in the world, this is another step in strengthening our position as a long-term digitization partner for Swedish businesses”

Key Highlights from Nadella’s Address

“It would be wrong for me not to talk about trust. Because in the end, it is something that will be very important to us – not only to create new technology but to really assure that there is confidence in the technology that we create” he says on stage and continues “We need to create systems that handle personal data and security as a human right.”

Satya Nadella talked about the recent investments his company is making in Sweden. Most notably, Microsoft will build two data centers, in Gävle and Sandviken, intended to be among the most sustainable in the world.

“We will use one hundred percent renewable energy. They will also be completely free from operational emissions. We set a new standard when it comes to the next generation data center. It starts here in Sweden,” said Satya Nadella.

Apart from the data centers, Satya Nadella also highlighted recent key partnerships during his speech at the China Theater. He talked about the company's collaboration with Kiruna, a city that uses Microsoft HoloLens and AR to plan its underground infrastructure.

Microsoft in Sweden

Satya Nadella, Microsoft's CEO, Put Forth Examples of the Company's Interest in the Country:

”There have been huge breakthroughs in the last three years, regardless of whether we are talking about object identification or voice recognition. This must be translated into infrastructure. Here we invest heavily.”

“Take Spotify who has a new very cool podcast tool. It lets anyone do their own podcast and they use our speech recognition to convert speech into text. The most interesting thing they do is that for anyone who wants to modify their podcast, they can enter and edit in writing and that the podcast then automatically changes. It shows how to use AI to make it more efficient”

Ending the Visit on a High

Later in the day, Nadella visited Samhall Innovation Days, a hackathon with the aim — in the words of the company's press release — of "creating the conditions for people with a diagnosis within the autism spectrum to come into work".

Last summer, Microsoft announced two data centers in Norway to bring its cloud computing services to all of Europe.

“By building new data center regions in Norway, we facilitate growth, innovation and digital transformation of Norwegian businesses – whether large companies, the public sector or some of the 200,000 small and medium-sized companies that together create the future of Norway,” said CEO Kimberly Lein-Mathisen in Microsoft Norway when the Norwegian plans became known.

Nadella declared that both data centers will run on 100% renewable energy, so the project benefits the country, creating an ocean of new opportunities for locals. He also talked about his company's association with tech companies and communities in Sweden, one being the city of Kiruna and the other the Sandvik company.

The address at the China Theater in Stockholm by Microsoft's top boss, Satya Nadella, was like a pep talk. He gave his viewpoint on a variety of technology aspects and, most importantly, announced the company's plan to build two data centers in this Nordic country.

Artificial Intelligence (AI), Business Intelligence (BI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Machine Learning and Cognitive Services

Machine learning is gradually becoming the driving force for every business. Business organizations, large and small, are turning to machine learning models to predict present and future demand and to support innovation, production, marketing, and distribution of their products.

Business value encompasses all forms of value that determine the well-being of a business. It is a much broader term than economic value, taking in factors such as customer satisfaction, employee satisfaction and social values, and it is the key measure of the success of a business. AI helps you accelerate this business value in two ways: by enabling correct decisions and by enabling innovation.

Machine learning technologies. Millennial students teaching a robot to analyse data

Remember the days when Yahoo was the major search engine and Internet Explorer was the major web browser? One of the main reasons for their downfall was an inability to make correct decisions. Wise decisions are made by analyzing data: the more data you analyze, the better the decisions you make. Machine learning greatly supports this cause.

There was a time when customers accepted whatever companies offered them. Things are different now: customer demand for new features is ever-increasing. Machine learning has been the decisive factor behind almost every recent innovation, whether face recognition, personal assistants or autonomous vehicles.

Machine Learning in More Detail

Let's start with what machine learning is: it enables systems to learn and make decisions without being explicitly programmed. Machine learning is applied in a broad range of fields; nowadays, almost every human activity is being automated with its help. One particular area in which machine learning is heavily exploited is data science.

Data science plays with data: insights must be extracted from it to make the best decisions for a business.

The amount of data that a business has to work with is enormous today; social media alone produces billions of data points every day. To stay ahead of its competitors, every business must make the best use of this data. That's where machine learning comes in.

Machine learning offers many techniques for making better decisions from large data sets, including neural networks, SVMs, reinforcement learning and many other algorithms.

Among them, neural networks are leading the way. They improve consistently, spawning child technologies such as convolutional and recurrent neural networks that provide better results in different scenarios.
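To make "learning without explicit programming" concrete, here is a toy single-neuron perceptron — the ancestor of today's neural networks — learning the logical AND function in pure Python. This is a deliberately tiny sketch; real networks stack many such units and train them with backpropagation:

```python
# A toy perceptron: one neuron learns AND from examples, rather than
# being programmed with the rule directly.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum exceeds zero.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights and bias toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])  # -> [0, 0, 0, 1]
```

The same update rule, scaled up to millions of weights and richer activations, is the core of the convolutional and recurrent networks mentioned above.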

AdobeStock_178345630_low

Learning machine learning from the beginning and trying to develop models from scratch is not a wise idea: it incurs huge costs and demands a lot of expertise in the subject. That is why it pays to take the assistance of a machine learning vendor. Google, Amazon and Microsoft all provide machine learning services. Let's take Microsoft as an example and review the qualities we should look for when selecting a vendor.

Using cloud as a solution for machine learning

Azure simplifies and accelerates the building, training, and deployment of machine learning models. It provides a set of APIs to interact with when creating models, hiding all the complexity of devising machine learning algorithms. Azure has the capability to identify suitable algorithms and tune hyperparameters faster. Autoscale is a built-in feature of Azure cloud services which automatically scales applications; this allows your application to perform at its best while keeping costs to a minimum. Azure Machine Learning APIs can be used from any major technology, such as C# or Java.

There are many other advantages to cloud machine learning:

  • Flexible pricing: you pay for what you use.
  • High user-friendliness: easier to learn and less restrictive.
  • More accurate predictions based on a wide range of algorithms.
  • Fine-tuning of results is easier.
  • The ability to publish your data model as a web service, which is easy to consume.
  • Data streaming platforms like Azure Event Hubs can feed in data from thousands of concurrently connected devices.
  • You can publish experiments for data models in just a few minutes, whereas expert data scientists may take days to do the same.
  • Azure security measures protect data in the cloud and offer security-health monitoring of the environment.
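To illustrate the "publish as a web service" point, the sketch below builds the kind of JSON request a client might send to a published scoring endpoint. The endpoint URL, API key and input schema are all illustrative assumptions — check your own service's documentation for the exact request format:

```python
# A hedged sketch of preparing a call to a model published as a web
# service. The URL, key and schema below are hypothetical placeholders.
import json

ENDPOINT = "https://example.azureml.net/score"  # hypothetical endpoint
API_KEY = "<your-api-key>"                      # hypothetical key

def build_request(features: dict) -> tuple[dict, str]:
    """Return (headers, body) for a JSON scoring request."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = json.dumps({"Inputs": {"input1": [features]}})
    return headers, body

headers, body = build_request({"age": 42, "income": 55000})
print(body)
```

In a real application you would POST `body` with `headers` to the endpoint (for example with the `requests` library) and parse the JSON prediction that comes back.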

Using Cognitive Services to power your business applications

We will go on to discuss how Azure Cognitive Services can be used to power up a business application. Azure Cognitive Services is a collection of APIs, SDKs, and services which allows developers to build intelligent applications without expertise in data science or AI. These applications can have the ability to see, hear, speak, understand, or even reason.


Azure Cognitive Services was introduced to extend Microsoft's existing portfolio of APIs.

The services provided by Azure Cognitive Services include:

  • The Computer Vision API, which provides the advanced algorithms necessary for image processing
  • The Face API, which enables face detection and recognition
  • The Emotion API, which offers options to recognize the emotion in a face
  • The Speech service, which adds speech functionality to applications
  • Text Analytics, which can be used for natural language processing

Most of these APIs were built with business applications in mind. Text Analytics can be used to harvest user feedback, allowing businesses to take the actions needed to accelerate their value. Speech services let organizations provide better customer service to their clients. All these APIs have a free trial which can be used to evaluate them. You can use these cognitive services to build various types of AI applications that solve complex problems for you, thus accelerating your business value.
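As a sketch of the feedback-mining idea, the snippet below prepares a sentiment-analysis request for a Text Analytics endpoint. The resource URL and key are placeholders, and the exact path and schema should be taken from your Azure resource's documentation — treat the shapes here as assumptions:

```python
# A hedged sketch of building a sentiment-analysis request over customer
# feedback. Endpoint, key and schema are illustrative assumptions.
import json

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-subscription-key>"                               # placeholder

def build_sentiment_request(feedback: list[str]) -> tuple[str, dict, str]:
    """Return (url, headers, body) for a batch of feedback texts."""
    url = f"{ENDPOINT}/text/analytics/v3.0/sentiment"
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    documents = [
        {"id": str(i), "language": "en", "text": text}
        for i, text in enumerate(feedback, start=1)
    ]
    body = json.dumps({"documents": documents})
    return url, headers, body

url, headers, body = build_sentiment_request(
    ["Great service!", "Delivery was too slow."]
)
print(url)
print(body)
```

POSTing this request would return a sentiment label and confidence scores per document, which a business could aggregate to spot dissatisfied customers early.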

If you want to talk more about ML and AI, feel free to contact me: bjorn.nostdahl@gunnebo.com 🙂