The number of retail chain stores closing has increased dramatically over the past few years. In 2019 alone, large retailers announced the closing of almost 7,000 stores, up from roughly 5,000 in 2018. If this continues, it could become the retail apocalypse.
The main reason behind this rapid closing of traditional stores is the rise of e-commerce, which has left traditional stores struggling for attention. As mentioned in the previous article, online sellers like Amazon have made it really difficult for physical stores to survive in this digital era.
Most of these stores have begun changing their sales strategies, focusing in particular on digital marketing, and have moved from brick-and-mortar stores to e-commerce sites. The trend these days is to purchase online, which saves the time and money needed to visit stores and is simply more convenient.
Apart from that, bankruptcy driven by private-equity ownership is another major concern, as happened to Toys R Us. Companies owned by private equity are more likely to go bankrupt than public companies. Although these firms claim they can help grow capital, they are silent about the debt that leveraged buyouts pile on.
What happened to American Apparel?
American Apparel had been a major retailer since its launch in 1997. It faced a series of sexual harassment and defamation cases that cost the company $3 million in fines. In 2017, facing bankruptcy for the second time, it had to close most of its stores.
It was purchased by Gildan Activewear at the end of 2017 for $88 million. The biggest change is that the sexualized imagery is gone; the new American Apparel emphasizes materials and pricing to attract customers.
Toys R Us: the kids’ dream store
Toys R Us had been around for 70 years and was every child's dream store. The company's troubles began when it was taken over by private equity firms in 2005. Saddled with unsustainable debt, it filed for bankruptcy in 2017.
Under pressure from suppliers, the company decided to liquidate its remaining stores. It closed almost 700 US stores and sold the rest to its Canadian business. Many say there might be a comeback, but Toys R Us will remain a fond memory for many.
Victoria’s Secret – A girl’s best friend
Parent company L Brands announced the closing of almost 53 Victoria's Secret stores in 2019, after already closing 30 stores in 2018 due to poor sales. One reason was the controversial comments the chief marketing officer made about transgender and plus-size models.
The incident outraged many and led to a drop in sales. Apart from that, many customers claim the quality of the products has declined as well.
Abercrombie and Fitch – new toned-down version
Abercrombie and Fitch plans to close almost 40 stores in the USA in 2019. The company intends to change its marketing strategies and become more sales-oriented; getting rid of sexualized ads is another point of focus.
It has also announced the opening of three flagship stores and the redesign of others as part of the restructuring. Moreover, a change of interior design, moving from dark tones to a brighter, lighter atmosphere, is a key highlight.
GAP – shutting down rapidly
One of the most famous brands, GAP is closing almost 700 stores worldwide. Its sister brands, such as Banana Republic, are also struggling to survive as they are not performing well.
The company plans to split in two and change its business model to increase profit, with Old Navy going its separate way. This is also a marketing strategy to increase sales.
Payless- bankruptcy victim
A few years back, Payless went into bankruptcy for the second time, and in 2019 it is finally closing its doors, with plans to shut almost 2,300 stores within the USA. One reason is that the shoe chain has only physical stores, and as online sales have grown, store sales have dropped.
Just like Toys R Us, it found the debt from a private-equity buyout very hard to bear. Its liquidation sales ran until June 2019.
Michael Kors: Restructuring
Michael Kors closed almost 100 stores in 2017. This year it will shut 50 more and has stated that there could be others. The reason is the low sales the company has encountered; as a luxury brand, it has also found it hard to appeal to average buyers. However, it has partnered with Amazon and moved into online sales, which is a positive move.
Claire’s – The Accessory Chest overcomes Bankruptcy
Claire's is a very popular store for teens with an extensive range of accessories. It, too, sought Chapter 11 bankruptcy protection, but it managed to eliminate $1.9 billion of debt. It also underwent restructuring and closed about 2,000 locations.
Gymboree – Children’s clothing
Gymboree was another bankruptcy victim and sought Chapter 11 protection. It was another chain dependent on brick and mortar, and falling sales left the company deep in debt. It plans to close all Gymboree and Crazy 8 stores in the US and Canada, about 900 stores in total.
J.C. Penney – 116-year-old outlet
J.C. Penney closed about 130 stores back in 2017 due to a dip in sales and has decided to close 18 more. This giant retailer was forced to close shops because it has been losing a huge amount of money as sales declined. At the moment the company is strapped for cash and its stores are out of date. There are reports that it has been planning to open toy sections in its brick-and-mortar stores.
The list of retail stores calling it quits goes on and on; it is almost like an epidemic spreading from one retail chain to the next, and the stores mentioned here are just a few of them. More and more companies are in trouble, either through bankruptcy or through sales lost to online shopping. Yet some brands, like Gucci, Louis Vuitton and Claire's, have risen above the problem by adopting new marketing strategies.
We are living in a new world, with new channels and if you want to know how to make your retail business ready for the future, please reach out to me: firstname.lastname@example.org
Working in IoT, we sometimes need to handle large streams of data that may or may not be entirely accurate. Streams can contain noise, inaccurate or unreal readings and other unwanted data.
Debouncing can be done on the hardware itself or in software. Hardware debouncing uses either an S-R circuit or an R-C circuit. Two well-known algorithms for software debouncing are vertical counters and shift registers. Despite being well known, these methods are typically presented in the literature as a code dump with little or no explanation. In this article, I will touch upon these circuits, methods and other algorithms and their use in IoT debouncing.
Understanding Switch Bounce
When the contacts of mechanical switches toggle from one position to another, these contacts bounce (or “chatter”) for a brief moment. During the first millisecond, the bounces are closely spaced and irregular, and although all of it happens in the course of milliseconds, high-speed logic will detect these bounces as genuine presses and releases.
A button release produces bounces too, but it is common for a switch release to produce less bounce than for a switch press.
Switches usually become stable after 5-20ms depending on the quality, size and electronics of the hardware.
Debouncing using S-R circuits
Switch debouncing using an S-R circuit is one of the earliest hardware debouncing methods. In this circuit, an S-R latch, together with pull-up resistors, removes the bounces. It is still one of the most effective debouncing approaches.
The figure below depicts a simple digital debouncing circuit which is used quite often.
The circuit uses two cross-coupled NAND gates forming an S-R latch, an SPDT (Single Pole Double Throw) switch and two pull-up resistors. Each resistor pulls its gate input to logic '1', while the switch pulls one of the inputs to ground.
With the switch in the position shown in the figure, the output of the upper gate is '1' regardless of its other input, and the '1' supplied by the bottom pull-up resistor drives the lower NAND gate's output to '0', which is rapidly fed back to the upper gate. If the switch bounces back and forth between the contacts like a pendulum, or hangs for a while touching neither terminal, the latch preserves its state because the '0' from the bottom NAND gate is fed back. The contacts may chatter, but the latch's output never bangs back, and the switch is therefore bounce-free.
Although the S-R latch is still common, its bulkiness causes problems when you use it often: it requires several hardware components, and SPDT switches are more expensive than SPST switches. Thus, a newer debouncing approach emerged using an R-C circuit. The basic principle is to use a capacitor to filter out swift changes in the switch signal.
The following image demonstrates a basic R-C circuit which is used for debouncing.
It is a simple circuit which uses two resistors, a capacitor, a Schmitt-trigger hex inverter and an SPST switch.
When the switch opens, the voltage across the capacitor, which is initially zero, charges towards Vcc through R1 and R2. The voltage at Vin is high, and hence the output of the inverting Schmitt trigger is low (logic 0).
When the switch is closed, the capacitor discharges to zero; the voltage at Vin is therefore '0' and the output of the inverting Schmitt trigger is high (logic 1).
During bouncing, the capacitor slows the voltage change at Vin, preventing it from snapping back and forth between Vcc and Gnd.
You may wonder why a standard inverter is not used. The problem with a standard inverter gate here is that TTL defines a zero input as an applied voltage between 0 and 0.8 V, and in the transition region the output is unpredictable. Thus, we must use a Schmitt-trigger hex inverter: its hysteresis keeps the output constant even if the input dithers, and prevents the output from switching spuriously.
We can debounce switches in software as well. The basic principle is still to sample the switch signal and filter out any glitches. The most common algorithms for this are counters and shift registers.
The first approach uses a counter to time how long the switch signal has been low. If the signal has been low continuously for a set amount of time, then it is considered pressed and stable.
Let's walk through the steps of the counter method.
First, set the counter variable to zero. Then set up a sampling event with a certain period, say 1 ms; you can use a timer for that. On each sample event, do the following:
If the switch signal is high, reset the counter variable to 0 and set the internal switch state to 'released'. If the switch signal is low, increment the counter variable by 1 until it reaches 10. Once the counter reaches 10, set the internal switch state to 'pressed'.
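The steps above can be sketched in Python as a small, hardware-agnostic simulation. The class name and the 'released'/'pressed' strings are my own illustration; on a real microcontroller, sample() would be called from a 1 ms timer interrupt reading a GPIO pin (active-low switch assumed):

```python
PRESS_THRESHOLD = 10  # consecutive 1 ms low samples required for a stable press

class CounterDebouncer:
    """Debounce an active-low switch by counting stable low samples."""

    def __init__(self):
        self.counter = 0
        self.state = "released"

    def sample(self, signal_high):
        """Call once per 1 ms sampling event with the raw switch level."""
        if signal_high:
            # Any high reading resets the counter and releases the switch.
            self.counter = 0
            self.state = "released"
        elif self.counter < PRESS_THRESHOLD:
            self.counter += 1
            if self.counter == PRESS_THRESHOLD:
                # The signal has been low for 10 samples in a row: stable press.
                self.state = "pressed"
        return self.state
```

A bouncy edge (alternating high/low readings) never accumulates ten consecutive low samples, so only a contact that settles low for a full 10 ms registers as a press.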
Shift Register Method
This is similar to the counter method; the only difference is that it uses a shift register. The algorithm assumes an unsigned 8-bit register value, as usually found on microcontrollers.
First, set the shift register variable to 0xFF. Set up a sampling event with a period of 1 ms with the help of a timer. On each sample event, do the following:
First, shift the variable towards the MSB (most significant bit) and set the LSB (least significant bit) to the current switch value. If the shift register value equals 0, set the internal switch state to 'pressed'; otherwise, set it to 'released'.
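The same simulation style works for the shift-register variant; again the class and state names are illustrative, and the 0xFF initial value reflects an active-low switch that idles high:

```python
class ShiftRegisterDebouncer:
    """Debounce using an 8-bit shift register of the most recent samples."""

    def __init__(self):
        self.reg = 0xFF  # start 'released': an active-low switch idles high
        self.state = "released"

    def sample(self, signal_high):
        """Call once per 1 ms sampling event with the raw switch level."""
        # Shift toward the MSB and put the current level in the LSB,
        # masking to keep the register 8 bits wide.
        self.reg = ((self.reg << 1) | (1 if signal_high else 0)) & 0xFF
        # All of the last 8 samples were low: the press is stable.
        self.state = "pressed" if self.reg == 0x00 else "released"
        return self.state
```

The register is effectively a sliding window of the last eight samples, so the state only changes once the signal has been steady for 8 ms.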
IoT Sensor Bounce
Recently my team has been working on telemetry involving OCR decoding of license plates. I consider data from an OCR routine, a temperature sensor or a push button to be the same thing, and debouncing the telemetry can be done in very much the same way.
First of all, we needed to clean up the data stream by filtering out incorrect values. Since there are no check digits on license plates, we chose to trust a result only if the camera returned three matching plates within five iterations.
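A minimal sketch of that idea in Python: exact string equality stands in for "similar" (a real matcher might tolerate single-character OCR errors), and the class name and constants are illustrative:

```python
from collections import Counter, deque

WINDOW = 5    # look at the last five OCR readings
REQUIRED = 3  # accept a plate once it appears three times in the window

class PlateDebouncer:
    """Accept an OCR reading only after it repeats within a sliding window."""

    def __init__(self):
        self.window = deque(maxlen=WINDOW)  # old readings fall off automatically

    def feed(self, plate):
        """Feed one OCR result; return the accepted plate, or None if unsure."""
        self.window.append(plate)
        best, count = Counter(self.window).most_common(1)[0]
        return best if count >= REQUIRED else None
```

This is exactly the debounce pattern from the button examples, applied to telemetry: a noisy misread appearing once or twice in the window never reaches the acceptance threshold.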
If you want to know more about how to debounce data streams or if you have any questions, please reach out to me: email@example.com
Finally it is vacation time again – and I can spend some precious time with my family 🙂
17th May Celebration
The summer starts (as always) with the celebration of the Norwegian Constitution Day – the national day of Norway – and is an official public holiday observed on May 17th each year. Among Norwegians, the day is referred to simply as syttende mai (lit. “seventeenth May”), Nasjonaldagen (The National Day) or Grunnlovsdagen (The Constitution Day).
The Constitution of Norway was signed at Eidsvoll on May 17th in the year 1814. The constitution declared Norway to be an independent kingdom in an attempt to avoid being ceded to Sweden after Denmark–Norway’s devastating defeat in the Napoleonic Wars.
The unique traditions of the 17th of May celebration bring some complexities; this is why, with all the fuss during the day, it is often confusing and hard to understand for a newcomer to Norway. The children carry flags and march together with bands. Ice cream, hot dogs and other treats are abundant. The buildings are decorated with Norwegian flags, and women and men of all ages dress in their bunad, or national costume. Graduating high-school students wear their russedress and celebrate the approaching end of the school year. Loud music is heard from every corner. Understanding all these 17th of May traditions requires some historical and social background knowledge.
Working in Sweden, I had the opportunity to attend the Sweden Rock festival. Classic rock, hard rock, metal and even some blues – something for all tastes. Clean and neat, safe and secure. Ocean-near location, cozy grass hillsides and real lavatories. But most of all, a really friendly and good-natured atmosphere. Like one big family, with members from over 50 countries. The best audience in the world, quite frankly.
The festival ran from Wednesday 5 June to Saturday 8 June 2019, and I had a really great time at one of the most exhilarating festivals in the world.
Tons of Rock
Legends like Kiss, Def Leppard and Satyricon played at Tons of Rock, and I was overwhelmed to attend this great event!
Dream Theater had a great show, celebrating the 25th anniversary of the masterpiece “Images & Words” with us at Ekebergsletta.
Maintenance and Spring Cleaning
Spring is the time to roll up the sleeves and fix whatever the long winter has torn down or worn out: painting the house, cleaning the back yard and, of course, applying antifouling paint on the boat to keep the seaweed away.
I also took the time to fit out the shed as a hardware lab, so the family and I can work on mechanical repairs outside. The first task at hand was to fix the boat's fuel injection pump, which had corroded during the winter.
Sun and Sand in Spain
This year (as always) we will go to Orihuela Costa in Spain. Orihuela Costa is a coastal region situated on the Costa Blanca in the province of Alicante. It sits close to the border with Murcia, near La Manga and the beautiful Mar Menor. Despite its name it is actually around 20 kilometres away from the main resort of Orihuela.
The region covers several resorts including Punta Prima, Playa Flamenca, La Zenia, Cabo Roig, Villamartin, Campoamor and Mil Palmeras.
Orihuela Costa is an ideal destination for those looking for a beach family holiday, with 16 kilometres of diverse coastline. It is also an excellent choice for golf holidays; in fact, Orihuela Costa is well equipped for all needs, with lots of amenities, facilities and leisure attractions.
Cartagena Roman Theater
The theater was built between 5 and 1 BCE, and for centuries was covered by a cathedral built over the upper part of the theater's “cavea,” or seating area. The first remains were discovered in 1988, and the theater underwent restoration, completed in 2003. Today the ancient arena still holds performances, and there is a museum at the site displaying the finds from a series of archaeological excavations.
Alicante boasts many treasures. And among its most prominent is indisputably the Castle of Santa Bárbara, which stands on Mount Benacantil overlooking the city. Hewn from millenary stone, this fortress has witnessed centuries of history in this ancient city and even played a leading role in several chapters.
Leisure and Activities for the Kids
Not leaving the kids out, there will also be fun events and activities to make them enjoy this holiday. I understand that visiting some of these places can be a bore to them, and some places may even be over-exciting.
Music and Entertainment for the Dad
Every Tuesday and Friday I will go to the lunch jam at El Rincon Del Tio Cali, where the local musicians entertain and involve us in fantastic musical experiences. I will most likely visit on my own, probably after the kids have gone off to other fun places.
Leyendas del Rock
Leyendas del Rock is a rock and metal festival that takes place in the city of Villena, Spain. Now celebrating its fourteenth edition – and its seventh consecutive year in its current location – the festival has become a respected date in the calendar of heavy metal fans thanks to lineups full of the scene's favourites.
With more than 60 performers spread across four days and five stages, the high-octane display of hard-hitting rock sounds will this year be headlined by Australian four-piece Airbourne and legendary Irish rockers Thin Lizzy, among others.
So, have a nice vacation everyone – I will see you again on 26th August 🙂
In this article, I will be discussing one of the most trending topics in IoT: a beginner-level tutorial on MQTT, which is currently the most used protocol in IoT projects.
MQTT stands for Message Queuing Telemetry Transport. To put MQTT in a nutshell, it is “a lightweight event- and message-oriented protocol allowing devices to asynchronously and efficiently communicate across constrained networks to remote systems”. I know that this doesn't really help much, so let's try to decode that definition and understand what MQTT is and how to use it.
What is MQTT?
Again, for people who have no idea about MQTT, it is a protocol for machine-to-machine communication. It uses a publisher-subscriber model for communication. If you are from a programming background, you probably would have some knowledge about the publisher-subscriber model. Anyway, we will discuss the publisher-subscriber model and how MQTT works later in the tutorial.
MQTT over HTTP for IoT
Before discussing how MQTT works, let's first try to understand how it came to exist. MQTT emerged as a replacement for HTTP because HTTP could not properly answer the challenges of IoT and M2M projects. Unlike web applications, IoT projects have some peculiar challenges. One of the main ones is that IoT requires an event-driven paradigm. Some of the features of this paradigm are:
Emitting information one-to-many
Listening to events whenever they happen
Distributing minimal packets of data in huge volumes
Pushing information over unreliable networks
Some other challenges you face in an M2M application are:
Volume (cost) of data being transmitted
Reliable delivery over fragile connections
Security and privacy
MQTT copes with these challenges successfully thanks to its features.
Why MQTT is good for M2M and IoT applications
MQTT has unique features you can hardly find in other protocols, like:
It’s easy to implement in software as it is a lightweight protocol.
MQTT is based on a messaging technique. This makes it faster in data transmission compared to its alternatives.
It uses minimized data packets which results in low network usage.
Low power usage. As a result, it saves the connected device’s battery.
Most importantly, it works in real time, which makes it ideal for IoT applications.
We learnt earlier that MQTT works through a publisher-subscriber model. In a pub-sub system, the publisher sends its messages to a topic, and every subscriber of that topic receives the message. In MQTT, the broker handles the topics and the messaging process, while MQTT clients act as publishers and subscribers.
Components of MQTT
To learn how MQTT works, we have to understand some of its concepts. The fundamental components of the MQTT protocol are explained below.
Broker
The broker is a server that handles the communication and data transmission between the clients. It is responsible for the distribution, management and storage of data sent and retrieved by the clients, and acts as a centralized hub that regulates the message exchange.
If a broker breaks down, the whole communication process breaks down, as there is no way for the clients to communicate with each other directly. The broker-bridging mechanism was introduced to prevent such cases and build a fail-safe broker network.
There are a number of broker applications available, including the popular Mosquitto and HiveMQ, or you can use cloud-based brokers from providers such as IBM or Azure.
Clients (Publisher, Subscriber)
These are the endpoints that publish and retrieve the data distributed by the broker. Each client is assigned a unique ID to identify itself and its session when connected to the broker. A client can be a publisher, which publishes messages under a specific topic, or a subscriber, which receives messages relevant to a topic, or both.
Messages
These are the chunks of data sent and received by the clients. Each message consists of a command and a payload section. The command part determines the type of message; there are 14 message types in MQTT.
Topics
This is the namespace, or literally the topic, that describes what a message is about. Each message is assigned to a topic, and clients can publish to a topic, subscribe to it, or both; they can also unsubscribe from a topic if they want to. MQTT topics are just strings with a hierarchical structure.
Assume there is a topic called “home/kitchen”. We call home and kitchen the levels of the topic, with home being a higher-level topic than kitchen. Topics can also use the wildcards ‘+’, which matches exactly one level, and ‘#’, which matches all remaining levels; for example, home/+ matches home/kitchen but not home/kitchen/temperature.
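To make the wildcard rules concrete, here is a small, illustrative matcher in Python (the Paho client discussed later ships its own helper, topic_matches_sub, so this sketch is for understanding, not production):

```python
def topic_matches(pattern, topic):
    """Check whether an MQTT topic matches a subscription pattern.

    '+' matches exactly one level; '#' matches any number of remaining
    levels and is only valid as the last level of the pattern.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True          # '#' swallows the rest of the topic
        if i >= len(t_levels):
            return False         # topic ran out of levels
        if p != "+" and p != t_levels[i]:
            return False         # literal level must match exactly
    return len(p_levels) == len(t_levels)
```

So topic_matches("home/+", "home/kitchen") is true, while topic_matches("home/+", "home/kitchen/temperature") is false because '+' covers only a single level.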
Publishing
This is the process by which a client (publisher) sends data to the broker under a topic, to be distributed among the clients (subscribers) that have requested data from the same topic.
Subscribing
This is the process by which clients (subscribers) receive data for a topic they have previously subscribed to, from the publishers, through the broker.
QOS: Quality of Service
Each message is given an integer value from 0 to 2 to specify the delivery mode. This is known as Quality of Service. There are three different types of QOS.
0 (Fire and forget) – the message is delivered at most once, with no acknowledgement; the fastest delivery method.
1 (Acknowledgement) – the message is delivered at least once; it is re-sent until an acknowledgement is received.
2 (Synchronized) – the message is delivered exactly once, with guaranteed delivery; comparatively slower.
Practical use of MQTT
It's time to do some practical work and get used to dealing with the MQTT protocol. As you learnt previously, there are MQTT clients for most programming languages. I will use the Paho Python MQTT client, as I am a fan of Python and it is probably the best MQTT client out there.
First, you need a broker to create an application with MQTT. One of the most popular MQTT brokers is Mosquitto. You can install it with the following command.
sudo apt-get install mosquitto
We set it up to work on localhost. By default, Mosquitto listens on port 1883. Next, install the MQTT client with the pip command.
pip install paho-mqtt
This command installs the Python MQTT client library on your machine. The core of the library is the Client class, which provides all of the functions to publish messages and subscribe to topics.
There are several important methods in the Paho MQTT Client class which you should know: connect(), disconnect(), subscribe(), unsubscribe() and publish(). Each of these methods is associated with a callback (for example on_connect and on_message).
Publishing a message
One of the main tasks you do with MQTT is publishing messages. A simple program that publishes a message has 4 steps.
Import the paho.mqtt.client module
Create a client instance with the Client() constructor
Connect to the broker with connect()
Publish the message with publish(), then disconnect()
Most of the code is self-explanatory. First, you create an instance of an MQTT client. Then you connect to the broker running on localhost. The client then publishes its message on the “TopicLevel1/test” topic. After that, it disconnects from the broker.
Subscribing to a topic
You know that MQTT is not a one-to-one messaging protocol, as it connects many devices. The trick is that a message from any device is assigned to a topic, and any device subscribed to that topic will receive the message. Similarly, you can publish messages to topics.
You can subscribe to a topic with the subscribe() method of the Client class. Subscribing to a topic follows the same steps as publishing messages; I will not repeat them, as you can easily identify the steps from the code.
In this application, the client works as a subscriber. It subscribes to the topic at the broker, which in this case runs on localhost. Whenever it receives a message, the on_message() callback is invoked. If the received message is “disconnect”, the client immediately disconnects from the broker. This is a very simple use of the subscribe method; you can write more complicated logic using the same callback functions.
So, in this article you got a concise yet comprehensive idea of MQTT. It's time to move on to the conclusion and recap the gist of the article.
MQTT is a lightweight, flexible and simple yet very efficient protocol that has a definite advantage over others when it comes to IoT and M2M solutions, considering its low bandwidth use, low power consumption, response time and versatility. In conclusion, MQTT is arguably the best protocol so far for IoT development.
If you want to know more about MQTT, you can check the links below, or if you have any questions, please reach out to me: firstname.lastname@example.org
With the high pace at which the technology industry is developing, many different fields and areas have become hot zones. Innovation motivates researchers to create and develop better, more helpful devices and technologies. However, the more we advance, the more complicated and sophisticated technology gets. This is most visible in hardware development, where the number of components used keeps increasing year by year to keep up with demand.
SIMATIC IOT2040 By Siemens
A leading company in innovation and development is Siemens, based in Germany. Siemens specializes in technologies for the industry, energy, healthcare, and infrastructure & cities sectors. With many powerful, groundbreaking products on the market, Siemens has also taken an interest in the IoT field, releasing its SIMATIC IOT2000 series. The series targets industry, allowing different machines to analyse and utilize data sources from all around the globe.
Current issues include weak communication with overseas machinery due to different languages and source codes. The SIMATIC IOT2040 is the up-to-date version in the SIMATIC series. This version includes the following:
Energy-saving processor, with many compatible interfaces including: Intel Quark x1020 (+Secure Boot), 1 GB RAM, 2 Ethernet ports, 2 x RS232/485 interfaces, battery-backed RTC.
Supports Yocto Linux.
Arduino shields and miniPCIe cards can be used for expansions.
Programming with high-level languages
Compact design and DIN rail mounting
Proven SIMATIC quality offers great ruggedness, reliability and longevity
This version in particular is worth mentioning due to its ability to be used with many different hardware and solutions. This product is mostly used with different other add-ons, which help deliver the target efficiently.
Setup: SIMATIC IOT2040
Setting up the SIMATIC IOT2040 commonly requires a Micro-SD card (minimum capacity of 16 GB); many of the previously stated features of the series depend on the card. The following is a tutorial guide explaining how to successfully prepare and install the SIMATIC IOT2040:
Remove all trash and deleted files from your SD card, then flash the image to it with a disk imager.
You can now safely insert the SD-card in the SIMATIC IOT 2040.
You should then connect your devices to the computer/laptop using an Ethernet cable.
A strong internet connection should be available at all times. Set your Ethernet adapter to a static IP address in the same subnet as 192.168.200.1, the static IP of the IOT2040 (subnet mask 255.255.255.0).
Use the Secure Shell Protocol to log into 192.168.200.1 as user root, and set a password for root:
Now you are ready to run the installation for SIMATIC IOT2040 successfully.
Edit /etc/opkg/base-feeds.conf and add these lines:
src all http://iotdk.intel.com/repos/1.1/iotdk/all
src x86 http://iotdk.intel.com/repos/1.1/iotdk/x86
Now run opkg update, and then install git directly:
$ opkg install git
Node-Red: Flow Based Solution
One of the most modern tools, considered a breakthrough, is Node-Red: a development tool created initially by IBM to wire hardware devices together with APIs and online services. The technology is flow-based, inspired by the Internet of Things (IoT). In simple terms, it is browser-based software that lets users develop applications using flow diagrams. It was created to simplify development and make it accessible to users with basic knowledge, focusing on ease of use of software and online services through a direct internet connection.
The previously mentioned technologies come in really handy when used together; both tools can be used in parallel towards the same final result. Node-Red is used to ease the use of the SIMATIC through simple flow diagrams. The SIMATIC is generally a very powerful tool, yet too complicated for ordinary users. Thus, Node-Red is crucial here, letting you control the development process and the wiring of the hardware to online services as smoothly as possible.
The following is a tutorial guide, explaining how to successfully prepare and install Node-Red:
Through the menu named software, you can move forward to the Manage packages page.
Set Node-Red on Auto-start together with Mosquitto MQTT Broker.
Here is where we integrate the SIMATIC IOT2000 with Node-Red. You are expected to install the nodes for the SIMATIC.
In the directory /home/root/.node-red, create a folder named nodes, where you will place the nodes installed in the previous step.
Put custom nodes here if needed, for example from a Git repository:
npm install <git repo url>
Dependencies and nodes under npm can be installed directly to
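The directory steps above can be sketched as a small shell script. This is a hedged sketch, not part of the official guide: the `.node-red` path is taken from the tutorial (on the SIMATIC IOT2000 it would live under /home/root), `NODE_RED_DIR` is a variable introduced here for illustration, and the Git repository URL remains a placeholder.

```shell
# Prepare the nodes directory for custom Node-Red nodes.
# On the SIMATIC IOT2000 this would be /home/root/.node-red.
NODE_RED_DIR="${NODE_RED_DIR:-$HOME/.node-red}"
mkdir -p "$NODE_RED_DIR/nodes"
cd "$NODE_RED_DIR/nodes"
# Install a custom node from a Git repository (placeholder URL from the guide):
# npm install <git repo url>
echo "nodes directory ready: $NODE_RED_DIR/nodes"
```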
Earlier this year, I visited the US for a couple of weeks, and having a genuine interest in retail technology, I visited quite a few retail stores. I went to see classical stores, but also had the chance to preview the future of retail: autonomous and frictionless stores!
Customers in this digital world don’t want to spend too much time shopping. They want everything to happen fast, and they are looking for a seamless shopping experience. That’s how the concept of frictionless stores came to exist. Frictionless stores are one of the biggest new things in consumer shopping.
What are Frictionless Stores
The concept of frictionless stores started a few years ago. When I talk to retailers, this is one of the topics that always pops up. All major brands are looking for innovative ways to create a better customer experience, and frictionless stores are one way to make that happen. These stores improve the shopping experience to the point where customers don’t have to wait at any stage of shopping, such as selecting, receiving and paying for the product. Initially, frictionless stores were confined to easy, low-hassle shopping. But as innovations such as mobile wallets, digital receipts, free and fast shipping, and one-click purchasing emerged and began to reshape the consumer shopping experience, the definition was reshaped as well. Today, a frictionless experience means more than just less hassle. It means greater speed, personalization, and wow experiences.
How Frictionless Stores work
Let’s try to understand how frictionless stores work. In frictionless stores, buyers and sellers are connected in a way that gives buyers the ability to instantly find, compare and buy the products and services they need. Customers should feel that they have full control. The concept and technology have evolved over time, and nowadays customers expect to have this experience through their smartphones. Retailers and brands keep modifying the definition of frictionless stores as they find new ways to provide customers the best possible shopping experience. They need that commitment to stay ahead of the competition. As a result, frictionless shopping nowadays means eliminating anything that negatively impacts the customer experience.
Importance of Frictionless Stores
How has frictionless shopping fared according to research? Alliance Datacenter has done a study and found that customers from all generations are looking for great service and an ideal shopping experience. This is true across all regions of the world. If a brand fails to deliver what they want, customers will find a different one. According to the research, 76 percent of consumers said they give brands only two to three chances before they stop shopping with them. Another 43 percent said their main reason for leaving a brand is a poor shopping experience. What all this means is that if customers encounter friction, they will quickly abandon that brand, probably without giving it a second chance.
Amazon Go Stores
Similar to frictionless stores, Amazon introduced Amazon Go stores. What is special about Amazon Go is that you don’t have to wait for checkout; you no longer have to wait in queues. The first Amazon Go store was a grocery store of 1800 square feet. The concept spread fast; in fact, you can now see a lot of Amazon Go stores in the USA and Europe.
How is this even possible? What technologies have they used? Amazon had been doing research in the areas of computer vision, sensor fusion, and deep learning, and Amazon Go is the fruit of that work. You need the Amazon Go application to shop at Amazon Go stores. All you have to do is open your Go app, choose the products you want, buy them and just leave. The application can detect when a product is picked up or returned to the shelf, remembers what you bought, and lets you review these details in your virtual cart. When you finish shopping, you are charged and you receive a receipt for what you bought.
Buy Awesome foods with Amazon Go stores
You may wonder what you can buy there. What items are available at Amazon Go stores? I will just quote how one Amazon Go store marketed itself: “We offer all the delicious meals for breakfast, lunch or dinner. We have many fresh snack options made every day by our chefs at our local kitchens and bakeries. You can buy a range of grocery items, from milk and locally made chocolates to staples like bread and artisan cheeses. Try us, you will find well-known brands you love in our shops.” By the way, don’t expect to go in there and buy books, tech, clothes or anything else that Amazon sells online. It’s basically quick-and-easy food and other groceries. It’s just that there’s no cashier.
So many people have been attracted to Amazon Go stores that it is quite evident this concept will make a huge impact on the future of retail stores.
If you want to know more about frictionless stores, feel free to contact me at email@example.com or check out these related articles:
The IoT (Internet of Things) world is booming: in 2018 there were 23.14 billion connected devices, and the number is projected to reach 30.73 billion by 2020 (from statista.com).
Embedded systems are at the center of this IoT drive, smart homes, smart cars, etc. all have embedded systems as their backbone.
Microcontrollers are the drivers of embedded systems. They give devices the ability to collect data from the environment, send and receive that data, and execute the needed instructions or carry out specified actions, like turning on the heater when the room temperature drops below a specified level.
ARM and PIC microcontrollers are the most common microcontrollers used in embedded systems and IoT. When these devices send and receive information over a network (say, the internet), they do so using transfer and transport protocols that control the transfer process.
The Hypertext Transfer Protocol (HTTP) is the most popular communication protocol used over the internet to send and receive data, and it is still used in most IoT applications. A more efficient protocol is the Message Queuing Telemetry Transport (MQTT) protocol, which is optimized for low connectivity and low power requirements. The MQTT protocol finds immediate application in remote locations where batteries are used and need to be conserved.
HTTP transfers data via the request-response paradigm: devices query other devices directly for data. This leads to an increase in bandwidth requirements and power consumption. Since devices have to respond to requests one after the other, multiple asynchronous, simultaneous communications cannot be achieved. This is a disadvantage for IoT applications where multiple devices communicate at the same time: being synchronous, HTTP does not allow for multiple simultaneous communication.
The MQTT protocol solves these problems.
What is the MQTT protocol?
I gave a detailed description of what MQTT is in a previous post. But for this post, I’ll reintroduce just the important points.
MQTT is a lightweight, broker-based publish/subscribe messaging protocol designed to be open, simple and easy to implement, and to optimize bandwidth and power consumption. It is a machine-to-machine (M2M) communication paradigm that allows devices to send and receive data faster and more reliably without being connected directly.
MQTT finds immediate use where the network is expensive, unreliable or of low bandwidth, or where the embedded devices have limited processor or memory resources.
The MQTT protocol works in direct contrast to the Hypertext Transfer Protocol (HTTP), which is popularly used for sending data and communicating with devices over the internet.
MQTT provides for one-to-many communication and message distribution. It is agnostic to the sender and the content of the message, and uses TCP/IP to provide network connectivity. It has a small transport overhead (a message sent with this protocol can have a header as small as 2 bytes), along with features that ensure lost connections or data can be recovered.
MQTT in microcontroller connectivity
Using the MQTT protocol in microcontrollers improves the efficiency of data transfer, reduces power and bandwidth requirements, and introduces asynchronous communication among devices. All of this comes in handy given the limited memory capacity of microcontrollers, the need for faster and more reliable data transfer among IoT devices, and the increase in IoT devices in circulation as the technology reaches mainstream adoption.
This protocol guarantees faster, more power-efficient (than HTTP), lower-latency and more dependable communication among devices. This is because the MQTT protocol works on a publish-subscribe paradigm. With this model, there is no direct connection and communication between network devices; instead there is a middleman, called the broker.
To use the MQTT protocol for communication with your microcontroller, a broker is required to collect and dispatch data among devices. The broker (also known as the server) facilitates the publish-subscribe model, in a similar fashion to client-server models. The clients (that is, the connected devices) subscribe to virtual channels, known as topics. A device that wants to send out information (known as a message) publishes it on a specified topic to the broker. The broker then distributes the message to all the clients that subscribe to that topic.
The broker is the core part of the MQTT operation. The broker is the middleman in data transfer using this protocol. The broker/server stands at the center of M2M communication. It receives messages (on a particular topic) from devices connected using the protocol and aggregates them for transfer to other devices that subscribed to the topic.
This kind of communication provides for continuous availability and distribution of data among devices. Its advantage is the lack of dependence on a direct M2M connection (which besets the HTTP communication system). Devices practically work and run on their own, independent of the presence or interruption of other devices. This type of connection provides real-time data, because the broker constantly publishes the received messages to connected devices. Messages that have no subscribers are discarded, and those that have subscribers are dispatched to the devices. With this, an interruption in the connection of one device does not affect the entire network; instead, the messages sent while it was away are retained and pushed to it when it comes back on the network. The MQTT protocol is more data-centric than identity-centric.
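The broker-based routing described above can be sketched in a few lines of Python. This is a toy, in-process model rather than a real MQTT broker (the `ToyBroker` class and topic names are illustrative only): topics map to subscriber callbacks, and messages published on topics with no subscribers are simply dropped, as in the text.

```python
from collections import defaultdict

class ToyBroker:
    """In-process sketch of MQTT-style publish/subscribe routing."""
    def __init__(self):
        # topic -> list of subscriber callbacks
        self.subscriptions = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, message):
        # Messages on topics nobody subscribed to are discarded.
        for callback in self.subscriptions.get(topic, []):
            callback(topic, message)

broker = ToyBroker()
received = []
broker.subscribe("home/temperature", lambda t, m: received.append((t, m)))
broker.publish("home/temperature", "21.5")  # delivered to the subscriber
broker.publish("home/humidity", "40")       # no subscribers: dropped
print(received)  # [('home/temperature', '21.5')]
```

Note that the publisher never learns who (if anyone) received the message; the broker alone holds the routing state, which is what decouples the devices from each other.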
The Programmable Intelligent Computer (PIC) is a Harvard-architecture microcontroller that is regarded as an industry standard due to its robust features, providing more built-in functionality than many comparable microcontrollers.
My previous article talked about the deployment of the MQTT protocol on the Raspberry Pi; Gunnebo AB’s mikroPascal MQTT library brings the MQTT protocol to the PIC microcontroller.
Our MQTT library for the PIC microcontroller brings faster and better connectivity to the PIC, enabling it to communicate using the MQTT protocol. The mikroPascal library implements the MQTT protocol with a QoS of 0, and is built on the existing TCP/IP stack from the mikroPascal TCP/IP library, lib_enc600_v3_5, adding the MQTT layer on top of it.
The MQTT library is built as a wrapper around the TCP/IP protocol, with the purpose of providing features to publish and subscribe text messages to specific topics by means of the MQTT protocol.
The library carries out the following core functions:
Establishes TCP/IP sockets,
Formats MQTT packets and prepares them for transmission,
Extracts contents from subscription messages arriving in MQTT packets,
Transmits MQTT packets over TCP/IP,
Provides test (ping) methods to test the health of connection,
Provides functions for subscribing to and publishing to topics as well as unsubscribing from topics.
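To illustrate the “formats MQTT packets” step, here is a hedged Python sketch (not the mikroPascal library itself) that builds a QoS 0 PUBLISH packet following the MQTT 3.1.1 wire format: a one-byte fixed header, a variable-length “remaining length” field, a length-prefixed UTF-8 topic, and the payload. The function names are illustrative.

```python
import struct

def encode_remaining_length(n):
    # MQTT's variable-length integer: 7 bits per byte,
    # high bit set while more bytes follow.
    out = bytearray()
    while True:
        byte = n % 128
        n //= 128
        if n > 0:
            byte |= 0x80
        out.append(byte)
        if n == 0:
            return bytes(out)

def make_publish_packet(topic, payload):
    # Fixed header 0x30: packet type 3 (PUBLISH) in the high nibble,
    # QoS 0 and no flags in the low nibble.
    topic_bytes = topic.encode("utf-8")
    variable = struct.pack(">H", len(topic_bytes)) + topic_bytes + payload
    return bytes([0x30]) + encode_remaining_length(len(variable)) + variable

pkt = make_publish_packet("sensors/temp", b"21.5")
print(pkt.hex())
```

This also makes the small-overhead claim concrete: for a short topic and payload, the protocol adds only the 2-byte fixed header plus the 2-byte topic length prefix.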
The library reduces RAM requirements and provides better performance by supplying the library functions with input parameters that are pointers to arrays.
The basic workflow of the library on the PIC microcontroller is as follows: the microcontroller reserves an address for the message and provides pointers to this address; the MQTT library takes it from there and uses the pointers to send or receive messages for the controller.
To communicate via the MQTT protocol on your PIC project, there are some prerequisites that your project must meet.
With the mikroPascal MQTT library, we implement this lightweight protocol for the PIC microcontroller. The library can be downloaded here.
The library brings all the benefits of the MQTT protocol to PIC users, enabling them to package and send data in their IoT projects seamlessly and faster, with less memory required and wider connectivity with other devices.
The library can be downloaded at the libstock repo, where you can run a demo of the library to see how it works; also check out our open source code on github.
We welcome you to contribute to this library, and please also fork it for other microcontrollers. If you have any questions, please reach out to me: firstname.lastname@example.org
During the last few days, it has been reported that Yubico is replacing some of their physical security keys due to a firmware problem. This reminds us that IT security is evolving, bugs are found, and you need to keep up to date to keep your systems secure. My previous posts regarding SSL/TLS and x.509 have been quite popular, so here comes another security related post 🙂
When hosting a global Software as a Service platform, it is vital to be in control of Cloud Security. Cloud Security consists of a set of policies, controls, procedures and technologies that work together to protect cloud-based systems, data and infrastructure. These security measures are configured to protect data, support regulatory compliance and protect customers’ privacy, as well as set authentication rules for individual users and devices.
One way of securing these services is SSL/TLS encryption of communication. SSL was first implemented by Netscape in 1994, and this post attempts to provide a historical view of the SSL/TLS protocol as attacks and countermeasures were introduced. If one reads the current TLS v1.2 or v1.3 protocol specifications, there are many aspects of the design which do not have an obvious reason, but whose origin comes from the long list of academic research which has broken previous versions.
The birth of SSL
As SSLv1 was never released, we first mention SSLv2, which was designed and implemented by Netscape in 1995. The SSLv2 protocol is very different from later versions, but has a similar traffic flow. The client connects to a server and sends a “hello” which identifies some aspects of the client’s capabilities. The client and server negotiate which cipher they wish to use, and the client sends a random key, encrypted with the server’s RSA public key, which is used to subsequently encrypt the message traffic.
The protocol quickly proved to have numerous flaws, and within a couple of years an effectively new protocol, SSLv3, was designed to replace it. SSLv2 was formally deprecated in 2011, and no modern TLS library supports it anymore.
SSL as we know it
SSLv3 is the first SSL version which is recognizably similar to modern TLS. As in SSLv2 the client connects to a server, a handshake is performed, and subsequent records are encrypted using a key that is shared using public key cryptography. However there are several essential differences.
Another key addition is that in SSLv3 it is possible to use algorithms with forward secrecy. In this mode, instead of decrypting an RSA ciphertext sent by the client, the client and server agree on a key using a Diffie-Hellman key exchange, and the server signs a message which allows the client to verify that it is performing a key exchange with the intended server. However, RSA-based key exchange was still retained, and widely used.
In SSLv3 the entire handshake is hashed together and used with the agreed keys to create two “Finished” messages which the client and server exchange on the encrypted channel. These ensure that an attacker cannot modify traffic between the client and server in such a way as to change the outcome of the handshake. For instance, if a MITM could remove all of the strong ciphersuites from a client hello message and force a downgrade to a weak cipher, the protocol could be easily attacked.
In SSLv3, messages are encrypted using either the stream cipher RC4, or else a block cipher in CBC mode. In CBC mode, the plaintext must be a multiple of the cipher’s block size (typically 8 or 16 bytes), which requires a padding scheme to increase the length of messages which are not correctly sized. In SSLv3, the length of padding is indicated with a single byte at the end of the record, and the specified number of bytes are discarded by the receiver. The value of the padding bytes is not specified.
The message is authenticated using a slight variant of HMAC (based on an early HMAC design prior to HMAC’s standardization). But, critically, in SSLv3 it is the plaintext (rather than the ciphertext) which is authenticated, and the CBC padding bytes are not authenticated at all. These errors proved to be the source of a number of serious exploits which plagued TLS for years.
eCommerce compels TLS v1.0
After a time it became clear that the SSL protocol would prove crucial for commerce on the early Internet, and eventually the development was moved to the IETF. The name ended up changing due to a political compromise between Netscape and Microsoft, who had a competing PCT protocol. However the actual TLS v1.0 specification is only slightly different from SSLv3.
The most notable changes were the replacement of the SSLv3-specific HMAC variant with the standard version, the replacement of the SSLv3-specific PRF with a new design, and tightening of the rules for how blocks are padded. In SSLv3 the padding bytes were unspecified, while in TLS v1.0 and later versions the padding must follow a specified format. The block padding change was at the time merely a simplification, but it proved critical when the POODLE attack was developed in 2014.
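The padding rule in question is easy to state in code. Below is a sketch (the function name is illustrative) of the TLS v1.0 format: every appended byte, including the final length byte, must equal the pad length, so the receiver can verify it. Under SSLv3, only the final length byte was fixed and the rest could hold arbitrary values, which is exactly the slack that POODLE exploited.

```python
def tls10_cbc_pad(data, block_size=16):
    """Pad per TLS v1.0: pad_len + 1 bytes appended, each equal to pad_len."""
    pad_len = (block_size - (len(data) + 1) % block_size) % block_size
    return data + bytes([pad_len] * (pad_len + 1))

padded = tls10_cbc_pad(b"hello TLS")
# 9 bytes of data + 7 padding bytes (each 0x06) = one 16-byte block
print(len(padded), padded[-1])
```

A TLS receiver checks that all `pad_len + 1` trailing bytes match before accepting the record; an SSLv3 receiver could only check the last byte.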
At the time the POODLE attack was developed, SSLv3 was already obsolete, but many browsers included a fallback mechanism where, if the server rejected a TLS v1.0 or higher connection, the browser would subsequently try to connect using SSLv3. A man-in-the-middle attacker could intercept the TLS v1.0 connection, send an (unauthenticated) TLS alert closing the connection, and then attack the SSLv3 connection. There is no direct protocol fix for POODLE, since it is not possible to retroactively fix the padding bytes in unpatched clients. The main resolutions were the disabling or removal of SSLv3 support on both client and server sides, and the creation of the “fallback SCSV” indicator. The fallback SCSV allows a client to indicate to the server that it is performing a version fallback, which is done by including a special value in the ciphersuite list which cannot actually be negotiated but simply serves as a flag which can be understood by servers who recognize it (SCSV is short for “Signaling Cipher Suite Value”). A special ciphersuite value was chosen because in a TLS v1.0/v1.1 client hello format there is no other way of reliably indicating such information.
If a server sees a connection from a client indicating fallback, but the client is attempting to negotiate an older version than what the server supports, it closes the connection. Then, when a MITM attacker tries to force a downgrade, when the client opens the vulnerable SSLv3 connection, the server will detect the SCSV and close the connection, preventing the attack. It is not possible for the MITM to remove the SCSV, because the contents of the handshake transcript are authenticated by the Finished messages.
Browser Exploit leads to TLS v1.1
TLS v1.1, released in 2006, fixed the weakness later exploited by the BEAST attack: in TLS v1.0, the IV of each CBC record was the last ciphertext block of the previous record, and thus predictable to an attacker, while TLS v1.1 gives each record its own explicit random IV. At the time BEAST was demonstrated in 2011, it was a substantial issue because many implementations of TLS had not been updated to support TLS v1.1 or v1.2. A workaround was developed for SSLv3/TLS v1.0 connections, commonly termed 1/n-1 record splitting: each CBC-encrypted record is split into a 1-byte record followed by a record containing the rest of the plaintext. Since the first record includes a message authentication code (which cannot be predicted by an attacker who does not know the session key), this serves as a way of randomizing the IV.
Another common countermeasure was to favor use of the RC4 stream cipher, which did not have the problems of the CBC ciphersuites. But the RC4 cipher dates back to the 1980s, and by 2013 it had been shown convincingly that biases in the RC4 ciphertext could allow an attacker to recover secrets sent over a TLS channel, albeit in a scenario requiring access to data from many millions of connections.
The next big step with TLS v1.2
TLS v1.2, released in 2008, was the first major enhancement to the protocol since SSLv3. It adds support for negotiating which digest algorithms will be used (instead of hard coding use of SHA-1), adds support for modern AEAD ciphers, and adds support for extensions.
Extensions are a critical feature which was long lacking in TLS. Each extension is tagged with a type, and implementations are required to ignore extensions which they do not recognize. This feature proved essential for resolving several protocol-level problems which were discovered in the period between TLS v1.2 and v1.3.
Despite adopting several modern cryptographic features, TLS v1.2 also suffered from a number of high profile attacks. The first of these was the renegotiation attack, discovered in 2009. TLS allows both clients and servers to at any time request that a new session be renegotiated; effectively a new handshake is performed, but instead of being in the clear it occurs over the already encrypted channel. Several HTTP servers, including IIS, make use of this for client authentication. The initial connection is encrypted but not authenticated, and if the client attempts to access a protected resource a renegotiation is performed which includes client certificate authentication. The renegotiation bug breaks this entirely. First an attacker creates a new TLS connection to the server, and sends some arbitrary data (for example, the start of an HTTP request). The attacker then proxies a legitimate client attempting to connect to the server, and sends the handshake data through its own channel. From the perspective of the server, it appears as if the client has sent some encrypted data, then authenticated itself with a certificate, then sent some additional data which was both encrypted and authenticated. Depending on the server logic, this might allow the attacker to insert data which the server would interpret as having come from the authenticated client. The fix was to properly bind the inner and outer negotiations, such that it was not possible for the attacker to proxy. This was done by adding a new extension, which was standardized in RFC 5746. With this extension enabled, renegotiations inside an existing channel are cryptographically bound to the existing channel using the value of the TLS finished message. Since in the attack the client is unaware of being proxied within another TLS channel, the renegotiation will fail, preventing the attack.
The problems with renegotiation did not end there, however. In 2014 a new set of attacks were developed, including the devastating triple handshake attack. In this attack, a client connects to a malicious server. The malicious server opens a new TLS connection as a client with some victim server. It forwards the client’s random value, then sends back the victim server’s random back to the client. Upon receiving the client’s encrypted master secret, it forwards the same to the victim server. In the end, there are two TLS connections, one between the client and the attacker, and the other between the attacker and the victim server, and both are using the same session keys. In the next step, the client reconnects to the attacker, resuming its previous session, and in turn the attacker resumes its connection with the victim server.
Due to how session resumptions work, in this case the finished messages in the two handshakes will be identical. Then, the malicious server can attempt to perform some action on the victim server which triggers a request for client certificate authentication (for example, requesting access to a protected resource). It forwards the authentication request to the victim client, who responds. The attack proceeds much like the renegotiation attack of 5 years prior, and since the finished messages of the two connections are in this case identical, the previously devised extension fails to detect the proxying. This was addressed with a new extension, the extended master secret, which ensures the master secret for a session is bound to the entire handshake transcript, instead of just the client and server random fields.
Implementation errors also caused notable problems for TLS v1.2. It has been known since 1998 that the RSA key exchange is vulnerable to an oracle attack, the so-called “million message attack”. In a nutshell, before encrypting the master secret with a server’s RSA public key, the client pads it in a certain way. Upon decryption, the server must reject any invalid padding which does not conform to the standard. But, it turns out that given access to an “oracle” which tells whether a particular RSA ciphertext is or is not correctly formatted, it is possible for an attacker to decrypt any ciphertext encrypted using that key. A TLS server can act as such an oracle, and problems have been repeatedly found in various implementations over the last 20 years, including the recent ROBOT and CAT9 attacks.
Bringing TLS into the future with v1.3
After 10 years and numerous patches, TLS v1.2 was in a state where using it securely required a number of extensions and avoiding a number of known-insecure features such as static RSA key exchange, RC4 ciphersuites, and CBC ciphersuites. TLS v1.3 addresses these issues by omitting them entirely.
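In application code, "omitting them entirely" translates into configuration like the following sketch, which uses Python's standard ssl module to refuse anything older than TLS v1.2 (and so prefer v1.3 where both sides support it). The module and attribute names are the stdlib's; the context is not tied to any particular server.

```python
import ssl

# Create a client context with sane defaults: certificate verification on,
# SSLv2/SSLv3 and known-insecure ciphers already disabled.
ctx = ssl.create_default_context()

# Refuse anything older than TLS v1.2; TLS v1.3 is negotiated when available.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)
```

Wrapping a socket with this context (`ctx.wrap_socket(sock, server_hostname=...)`) then performs a handshake that simply cannot fall back to the legacy protocol versions discussed above.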
In addition, there was a strong desire by many large network players (such as Google, Cloudflare, and Mozilla) to minimize the number of round trips required to handshake, as this directly impacts the user visible performance of web pages. This led to a substantially redesigned handshake protocol which has fewer round trips. I will explore the changes and implications of the TLS v1.3 design in a future post.
If you want to discuss more about SSL/TLS, feel free to contact me at email@example.com or check out these previous articles on SSL/TLS and x.509:
I recently had the chance to join Henrik Lindberg from Acando for an Agile Scrum workshop. In this post I will write about the workshop and the basics of Agile and Scrum. There is so much to learn and explore in agile, and I hope this introduction will compel further reading.
Unless you live offline, you are probably aware of the latest trend in the corporate world: the agile approach. Agile has in recent times grown into a revolutionary movement that is transforming the way professionals work. Agile is a methodology that keeps your priorities in equilibrium: work is done faster, and project requirements are met with great efficiency.
Working agile, people tend to forget the four values from the agile manifesto:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Equally important are the twelve principles behind the agile manifesto:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity–the art of maximizing the amount of work not done–is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Major Differences between Waterfall and Agile
The waterfall approach is a sequential model of project management. Here the development team can only move to the next stage if the previous step is successfully completed.
In the agile approach, the execution of processes is concurrent. This enables effective communication between the client, the manager, and the team.
Waterfall’s assumptions are not well-suited to large projects, whereas agile lets you manage complicated tasks with great ease.
Agile methodology is being embraced by managers worldwide for its greater flexibility.
The development plan is reviewed after each step in agile, while in the waterfall approach it is reviewed only during the test phase.
Agile development is iterative: planning, development, prototyping and many other key phases of development may occur more than once, in line with the project requirements. Agile also adheres to the incremental model, where the product is designed, implemented and tested in increments (the complexity of the task increases in ascending order). Development is termed finished only when every minute specification and requirement is met.
When to Use The Agile Methodology?
In a Scenario, When You Require Changes to Be Implemented
When the Goal of the Project Isn’t Crystal Clear
When You Need to Add a Few New Features to the Software Development
When the Cost of the Rework Is Low
When Time to Market Is of Greater Importance than the Full Feature Launch
When You Want to See the Progress in the Sequential Manner
Scrum is the latest agile framework for product success in small-to-large organizations, and it is creating a lot of buzz in the present IT world. Managers worldwide hold the belief that Scrum is far more than the execution of processes and methods; it plays an integral role by helping teams meet their aggressive deadlines and complicated project demands. Scrum is a collaborative agile approach that involves breaking substantial processes down into smaller tasks so that they are done efficiently, in a streamlined manner.
Scrum is a lightweight, agile framework that successfully manages and accelerates project development. This framework has been proven to cut down on project complexity and focus largely on building products in accordance with client expectations. People sometimes use Agile and Scrum interchangeably, but there is a big difference: agile is the broader approach, while Scrum is a subset of agile.
There are three principles of Scrum: transparency, inspection, and adaptation.
Are you interested in switching to the Scrum approach of development? Then, you must know the various Scrum roles.
The Product Owner
The product owner is responsible for providing the vision of the product. He or she plays the central role in breaking the project down into smaller tasks and prioritizing them.
Defining the Vision
Managing the Product Backlog
Overseeing Development Stages
Anticipating Client Needs
Acting as Primary Liaison
Evaluating Product Progress at Each Iteration
The ScrumMaster
The ScrumMaster is someone with extensive expertise in the framework. He or she ascertains that the development team is adhering to the Scrum model, and also coaches the team on it.
Coaching the Team
Managing and Driving the Agile Process
Protecting the Team from External Interference
Managing the Team
Fostering Proper Communication
Dealing with Impediments
Acting as a Leader
The Development Team
This is a panel of qualified developers who form the core of the project development. Each individual on the team brings his or her own unique skills to the table.
The Entire Team Is Accountable for the Work
There Are No Titles or Sub-Teams
They Sit Together to Communicate with One Another
Artifact #1: Product Backlog
The product backlog is a sequence of fundamental requirements in prioritized order, provided by the product owner to the Scrum team. The product backlog emerges and evolves over time, and the product owner is solely responsible for its content and validity.
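As a minimal sketch of the idea (the item names and point values below are hypothetical), a product backlog can be modeled as a list of requirements that the product owner keeps sorted by priority:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int  # lower number = higher priority, set by the product owner
    estimate: int  # relative size in story points

# A product backlog: an ordered list of requirements that the
# product owner re-sorts as priorities evolve.
backlog = [
    BacklogItem("User login", priority=1, estimate=3),
    BacklogItem("Password reset", priority=3, estimate=2),
    BacklogItem("Profile page", priority=2, estimate=5),
]

def prioritized(items):
    """Return the backlog ordered by the product owner's priority."""
    return sorted(items, key=lambda item: item.priority)

for item in prioritized(backlog):
    print(item.title)
```

The point is only that ordering, not the items themselves, is the product owner's main lever: re-prioritizing is a cheap re-sort rather than a re-plan.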
Artifact #2: Sprint Backlog
It is the subset of the product backlog that the team commits to completing: its “To Do’s.” The work in the sprint backlog is sliced into smaller tasks by the team. All items in the sprint backlog must be developed, tested, documented, and integrated to meet the needs of the clients.
Artifact #3: Product Increment
The product increment is a Scrum artifact of significant importance. The product increment must be in line with the development team’s “Definition of Done,” and it has to be approved by the product owner.
Definition of Done in Scrum Methodology
The Definition of Done varies from one Scrum team to another. It is an acceptance criterion that drives the quality of work when a user story is complete. In other words, the Definition of Done is the quality checklist maintained by the development team.
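Since the Definition of Done is effectively a checklist, it can be sketched in a few lines. The checklist entries here are hypothetical; every team defines its own:

```python
# A hypothetical Definition of Done: a user story only counts as
# "done" when every item on the team's quality checklist is satisfied.
definition_of_done = {
    "code peer-reviewed": True,
    "unit tests passing": True,
    "documentation updated": False,
}

def is_done(checklist):
    """A story is done only if every checklist item is ticked."""
    return all(checklist.values())

print(is_done(definition_of_done))  # False: documentation still pending
```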
The Burndown chart is a means to track the progress of a project in Scrum. The ScrumMaster is responsible for updating this chart at the end of each sprint. The horizontal axis of the release Burndown chart represents the sprints, while the vertical axis shows the work remaining at the beginning of each sprint.
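To illustrate, here is a minimal sketch (with made-up numbers) of how the remaining-work values plotted on a release Burndown chart are derived:

```python
# Hypothetical release: 100 story points total, with the points
# completed in each of five sprints.
total_points = 100
completed_per_sprint = [20, 15, 25, 20, 20]

# Work remaining at the start of each sprint: this is what the
# vertical axis of the Burndown chart plots against sprint number.
remaining = [total_points]
for done in completed_per_sprint:
    remaining.append(remaining[-1] - done)

print(remaining)  # [100, 80, 65, 40, 20, 0]
```

A line through these points trending toward zero is what tells the team the release is on track.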
Backlog refinement is the act of updating/adding estimates, details, and order for the items in the product backlog. This improves story descriptions.
Commonly known as the “definition of requirement,” the user story in Scrum gives the development team enough information to provide a reasonable estimate for the work. User stories are one or two sentences long: a set of conversations that define the desired functionality.
User Story Acceptance Criteria
Acceptance criteria, in Scrum methodology, are the set of conditions a software product must meet in order to be accepted by the user, customer, or other stakeholders. In layman’s terms, they are a set of statements that determine the user features, requirements, or functionality of an application.
User Story Relative Estimation
Relative estimation is the procedure of estimating task completion not in terms of time, but by comparing items to one another in terms of complexity.
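As a rough sketch of the idea (the point scale and the reference story are hypothetical), teams often judge a story's size relative to a known reference story and snap the result onto a Fibonacci-like scale:

```python
# Fibonacci-like story-point scale commonly used for relative estimation.
SCALE = [1, 2, 3, 5, 8, 13]

def nearest_points(raw_size):
    """Snap a raw relative-size judgment to the nearest scale value."""
    return min(SCALE, key=lambda p: abs(p - raw_size))

# If the reference story is worth 2 points and a new story is judged
# roughly 2.5x as complex, it lands on 5 points.
print(nearest_points(2 * 2.5))  # 5
```

The coarse scale is deliberate: it discourages false precision, since the team is comparing complexity, not forecasting hours.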
There are five defined Scrum Events.
Sprint Planning is an event in the Scrum framework in which the team collaboratively decides which tasks to focus on during the sprint and discusses an initial plan for completing those product backlog items.
The sprint goal is defined as the objective set for the sprint that needs to be met via the implementation of the Product Backlog. The sprint goals are obtained after long discussions between the Product Owner and the Development team.
In Scrum, the team meets on each day of a Sprint to discuss a number of aspects; this meeting is known as the Daily Scrum.
The sprint review is held at the end of each sprint to inspect the product increment.
The Sprint Retrospective is held between the development team and the ScrumMaster to discuss how the previous Sprint went and what can be done to make the upcoming Sprint more productive.
After reading this article, you should have a basic overview of the Scrum approach. If you want to talk about agile and Scrum, feel free to contact me at firstname.lastname@example.org. You can also read more about agile in this article:
On May 29th, Satya Nadella, CEO of Microsoft, invited Nordic customers and partners to a small conference in Sweden, putting forth his ambitions for the future. This was Nadella’s first ever visit to Sweden since stepping into the shoes of the company’s CEO. He touched upon the issues of today’s tech world but, most importantly, made the Swedish audience aware of his company’s firm belief in global digitization and described what the future holds in store.
Self-discipline and excitement-seeking are two pillars of Satya’s personality, which have earned this Indian-origin technologist recognition worldwide; from Asia to Europe, he is treated to a warm welcome.
Dinner with Microsoft’s ISV Team
A day before Nadella’s address, I was invited to a dinner and social event hosted by Joanna and Martin from Microsoft. ISV stands for Independent Software Vendor and refers to individuals or companies who develop, market, and sell software running on third-party hardware and software platforms, such as Microsoft’s.
The term ISV is prominent in the tech world and used by most tech companies, including Microsoft. To put it in layman’s terms: when Microsoft was developing Windows, it partnered with numerous companies and individuals to take the project forward on both the technical and non-technical fronts.
The next morning, I had the opportunity to see some of the companies that have implemented their solutions on Microsoft platforms at Hotel Berns. There, Microsoft and its partners received a pep talk on the future and on what efforts we need to make to ensure it is heading in the right direction.
The Microsoft tech show commenced in style with the tunes of Sweden’s renowned DJ and saxophone artist Andreas Ferronato. His soul-soothing music set the mood 🙂
The Volvo Group Digitizing its Operations
Hillevi Pihlblad from the Volvo Group talked about how employees hate change and how, across the globe, it is not easy to adapt to changes. Further, she illustrated how the Volvo Group has translated change into something positive and made people understand why embracing it can make their lives more convenient.
The H&M Group And The Use of AI To Serve Their Customers The Best
Arti Zeighami, a senior executive and business leader of the H&M Group, talked about how the company is investing in Artificial Intelligence technology to tailor store offerings. Heading the Advanced Analytics and AI function, he gave a presentation on how the H&M Group is implementing advanced algorithms to scrutinize sales and returns, which has helped them more efficiently predict the needs and demands of their customers.
Satya Nadella, The Man of the Moment Taking The Center Stage
Then finally came the moment when Helene Barnekow introduced Microsoft’s CEO Satya Nadella. He was greeted with warm applause from the tech crowd present.
Nadella, who took over the role of CEO from Steve Ballmer in 2014, is globally renowned for his dynamic leadership and true passion for technology innovation. Prior to becoming the company’s CEO, Nadella was Microsoft’s EVP of the cloud and enterprise group.
His journey as CEO has transformed Microsoft’s technology as well as the company’s business model and corporate culture. His empathetic leadership steered Microsoft away from its struggling smartphone strategy to focus on other technical areas such as Augmented Reality and Cloud Computing.
He was also responsible for the purchase of LinkedIn, the professional network, for around $26.2 billion. Did you know that since he took over as CEO, the company’s stock has risen by around 150%?
The theme of the address by Satya Nadella was how communities and companies are uniting together for the digitized future of Sweden. This speech was largely about Microsoft’s own digital products and services, and how they can drive the world forward.
In his address to the tech people of Sweden, he threw light on various segments of technology: Artificial Intelligence, Digital Transformation, and Innovation. The American giant was in Stockholm to make a big announcement about setting up data centers in the country.
“We have the ambition that the data centers we build in Sweden should be among the most sustainable in the world, this is another step in strengthening our position as a long-term digitization partner for Swedish businesses”
Key Highlights from Nadella’s Address
“It would be wrong for me not to talk about trust. Because in the end, it is something that will be very important to us – not only to create new technology but to really assure that there is confidence in the technology that we create” he says on stage and continues “We need to create systems that handle personal data and security as a human right.”
Satya Nadella talked about the recent investments his company is making in Sweden. Among them, Microsoft announced two data centers to be built in Gävle and Sandviken, intended to be among the most sustainable in the world.
“We will use one hundred percent renewable energy. They will also be completely free from operational emissions. We set a new standard when it comes to the next generation data center. It starts here in Sweden,” said Satya Nadella.
Apart from the data centers, Satya Nadella also highlighted recent key partnerships during his speech at the China Theater. He talked about the company’s collaboration with Kiruna, a city that uses Microsoft HoloLens and AR to plan its underground infrastructure.
Microsoft in Sweden
Satya Nadella, Microsoft’s CEO, Put Forth Examples of the Company’s Interest in the Country:
“There have been huge breakthroughs in the last three years, regardless of whether we are talking about object identification or voice recognition. This must be translated into infrastructure. Here we invest heavily.”
“Take Spotify who has a new very cool podcast tool. It lets anyone do their own podcast and they use our speech recognition to convert speech into text. The most interesting thing they do is that for anyone who wants to modify their podcast, they can enter and edit in writing and that the podcast then automatically changes. It shows how to use AI to make it more efficient”
Ending the Visit on a High
Later in the day, Nadella visited Samhall Innovation Days, a hackathon with the aim of “creating the conditions for people with a diagnosis within the autism spectrum to come into employment,” according to the company’s press release.
Last summer, Microsoft announced two data centers in Norway to bring its cloud computing services to all of Europe.
“By building new data center regions in Norway, we facilitate growth, innovation and digital transformation of Norwegian businesses – whether large companies, the public sector or some of the 200,000 small and medium-sized companies that together create the future of Norway,” said CEO Kimberly Lein-Mathisen in Microsoft Norway when the Norwegian plans became known.
Nadella declared that both data centers will run on 100% renewable energy, so the project benefits the country, creating an ocean of new opportunities for locals. He also talked about his company’s association with tech companies and communities in Sweden: one being the city of Kiruna and the other the Sandvik company.
The address at the China Theater in Stockholm by Microsoft’s top boss, Satya Nadella, was like a pep talk. He gave his viewpoint on a variety of technology topics and, most importantly, announced the company’s plan to build two data centers in this Nordic country.