Gunnebo Business Solutions, Technical

Getting started with Kafka

What is a Database? 

A database is an organized collection of information (data). You can store data in a database and retrieve it when needed. Retrieving data from a database is much faster than with traditional storage methods (information written on paper, for example). Data in a database is often stored as key-value pairs. 

Modern databases follow the ACID standard, which consists of: 

Atomicity: A method for handling errors. It guarantees that a transaction is either committed fully or aborted/failed. If any statement in a transaction fails, the whole transaction is considered failed and the operation is aborted. Atomicity guarantees that the database is never left with partially applied updates or modifications. 

Consistency: Guarantees data integrity. A transaction can alter the database state only if it is valid and follows all defined rules. For example, a database of bank account information cannot assign the same account number to two people.  

Isolation: Transactions do not depend on one another. In the real world, transactions are executed concurrently (multiple transactions read/write to the database at the same time). Isolation guarantees that concurrent transactions produce the same result as if they had been executed sequentially (one transaction completing before the next one starts). 

Durability: Guarantees that once a transaction is committed, it remains committed even in the case of a system failure. This means committed/completed transactions are saved to permanent, non-volatile storage (e.g. a disk drive). 

What is Kafka? 

According to the main developer of Apache Kafka, Jay Kreps, Kafka is "a system optimized for writing". Apache Kafka is an event streaming platform: a fast, scalable, fault-tolerant, publish-subscribe messaging system. Kafka is written in Scala and Java. 

Why the need for yet another system? 

When Kafka was born in 2011, SSDs were not yet common, and disks were the first bottleneck for database transactions. LinkedIn wanted a system that was faster, scalable, and highly available. Kafka was created to provide a solution for big data and the limitations of vertical scaling.  

A system that needs to manage a large number of operations, message exchanges, monitoring, alarms, and alerts has to be fast and reliable. Vertical scaling means upgrading physical hardware (RAM, CPU, etc.); it is not always ideal and may not be cost-effective. Kafka, coupled with a proper architecture, lets the system scale horizontally. 

LinkedIn later open-sourced Kafka, which has helped it improve over time. Before Kafka, LinkedIn used to ingest 1 billion messages per day; now it handles 7 trillion messages per day, with 100 clusters, 4,000 brokers, and 100K topics over 7 million partitions. 

Strengths of Kafka 

  • Decouple producers and consumers by using a push-pull model. (Decouple sender and consumer of data). 
  • Provide persistence for message data within the messaging system to allow multiple consumers. 
  • Optimize for high throughput of messages. 
  • Allow for horizontal scaling. 

What is the Publish/Subscribe model? 

A messaging pattern that decouples senders from receivers. 

All the messages are split into classes. Those classes are called Topics in Kafka.  

A receiver (subscriber) can register for one or more topics, and the receiver will then be notified about messages on those topics asynchronously. The topics are generated by senders/publishers. For example, suppose there is a notification for new bank account holders: when someone creates a new bank account, they automatically get that notification without any prior action from the bank. 

The publish/subscribe model is used to broadcast a message to many subscribers. All subscribed receivers get the message asynchronously. 

Core concepts in Kafka 

Message: The unit of data within Kafka is called a message. You can think of a message as a single row in a database. For an online purchase transaction, for example, the item purchased, the bank account used to pay, and the delivery information could together form one message. Since Kafka doesn't impose a predefined schema, a message can be anything; it is simply an array of bytes. Messages can be grouped into batches, so we don't have to wait for one message to be delivered before sending another. A message can also contain metadata, such as a 'key', which is used in partitioning.  

Producer: The application that sends data to Kafka is called the Producer. The Producer pushes data and the Consumer pulls it; they don't need to know one another, they only need to agree on where messages are written and what they are called. We should always make sure that the Producer is properly configured, because Kafka is only responsible for delivering messages, not for how they are produced. 
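
The Kafka distribution ships with a simple console producer that can play this role. As a minimal sketch (the topic name, broker address, and key separator below are just examples), the following reads lines from the terminal and sends each one as a message, using the text before the colon as the message key:

bin/kafka-console-producer.sh --topic purchases --bootstrap-server localhost:9092 --property parse.key=true --property key.separator=:
>order-1001:book purchase for 12.50
>order-1002:pen purchase for 2.75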

Topic: When the Producer writes, it writes to a Topic; data sent by the Producer is stored in a Topic. The Producer sets the Topic name. You can think of a Topic as a file: when writing to a Topic, data is appended at the end, and a reader reads it from top to bottom. Topics whose names start with an underscore (_) are for internal use. Unlike in a database, existing data cannot be modified. 

Broker: A Kafka instance is called a Broker. It is in charge of storing the Topics sent by Producers and serving that data to Consumers. Each Broker handles its assigned Topics, and ZooKeeper keeps track of which Broker is in charge of each Topic. A Kafka cluster is a group of such instances (Brokers). The Broker itself is lightweight and very fast, without much of the usual overhead of Java applications such as garbage collection and page and memory management. The Broker also handles replication. 

Consumer: Reads data from the Topic of its choice. To access the data, a Consumer needs to subscribe to the Topic. Multiple Consumers can read from the same Topic. 
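
The matching console consumer can act as a simple subscriber. A minimal sketch (topic name and broker address are the same examples as above; --from-beginning reads the whole Topic instead of only new messages):

bin/kafka-console-consumer.sh --topic purchases --from-beginning --bootstrap-server localhost:9092

Running the same command in a second terminal shows that multiple Consumers can read the same Topic independently.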

Kafka and Zookeeper 

Kafka works together with a configuration server. The server of choice is ZooKeeper, which is a centralized service. Kafka is in charge of holding the messages, and ZooKeeper is in charge of the configuration and metadata of those messages, so everything about the configuration of Kafka ends up in ZooKeeper. Together they ensure high availability. There are other options available, but ZooKeeper seems to be the industry choice. Both are open source and they work well together.  

ZooKeeper is very resource-efficient and designed to be highly available. It maintains configuration information and naming, and provides distributed synchronization. ZooKeeper runs as a cluster, and the cluster size must be an odd number (1, 3, 5, ...). Three or five instances give high availability; a single instance does not, because if that one instance goes down we lose ZooKeeper. ZooKeeper is commonly installed on machines separate from Kafka, because it needs to be highly available. There are official plans to remove the ZooKeeper dependency so that, in the future, Kafka can run on its own. 
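
The Broker finds its ZooKeeper ensemble through its configuration file, config/server.properties in the Kafka distribution. A minimal sketch of the relevant settings (the values below are defaults/placeholders; adjust them for your environment):

# Unique id of this broker within the cluster
broker.id=0
# Where the broker stores its log segments (the actual message data)
log.dirs=/tmp/kafka-logs
# ZooKeeper connection string; for a 3-node ensemble use e.g. zk1:2181,zk2:2181,zk3:2181
zookeeper.connect=localhost:2181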

Kafka and Zookeeper Installation 

On GNU/Linux, you must have Java installed (version 8 or newer). Download Kafka from https://downloads.apache.org/kafka/.  

Run the following commands in your terminal (assuming the downloaded file is named kafka_2.13-2.6.0.tgz). 

Extract the tarball: 

tar -xzf kafka_2.13-2.6.0.tgz 

Change into the directory with the Kafka binaries: 

cd kafka_2.13-2.6.0 

Run ZooKeeper: 

bin/zookeeper-server-start.sh config/zookeeper.properties 

Start the Kafka broker service: 

bin/kafka-server-start.sh config/server.properties 

The downloaded tarball comes with all the binaries, configuration files, and some utilities. It does not include systemd unit files, PATH adjustments, or configuration tailored to your requirements. 
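
To verify that the Broker and ZooKeeper are running, you can create a Topic and list it with the bundled tools. A minimal smoke test (topic name, partition count, and broker address are just examples):

bin/kafka-topics.sh --create --topic test-topic --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092
bin/kafka-topics.sh --list --bootstrap-server localhost:9092

You can then send a few messages with the console producer and read them back with the console consumer shown earlier.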

Kafka Overview 


Kafka stores data as Topics. Topics get partitioned and replicated across multiple Brokers in a cluster. Producers send data to Topics so that Consumers can read it. 

What is a Kafka Partition? 

Topics are split into multiple Partitions, which parallelizes the work. New writes to a Partition are appended at the end of its segment. By utilizing Partitions, we can write/read data with multiple Brokers, which speeds up the process, reduces bottlenecks, and adds scalability to the system. 

Topic overview 

A Topic, here named "topic name", is divided into four Partitions, and each Partition can be written/read using a different Broker. New data is written at the end of each Partition by its respective Broker. 

Multiple Consumers can read from the same Partition simultaneously. When a Consumer reads, it reads data from an offset. The offset is essentially the position of a message within the Partition, a bit like a timestamp, stored in the message metadata. Consumers can either read from the beginning or start from a certain offset. 
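
The console consumer exposes this directly. A minimal sketch that starts reading partition 0 of an example Topic at offset 5 instead of from the beginning:

bin/kafka-console-consumer.sh --topic test-topic --partition 0 --offset 5 --bootstrap-server localhost:9092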

Each Partition has one server called the Leader (the Kafka Broker in charge of serving that specific data), and sometimes more servers acting as Followers. The Leader handles all read/write requests for the Partition, while the Followers passively replicate the Leader. If a Leader fails for some reason, one of the Followers automatically becomes the Leader, so Leaders can change over time. Every Broker can serve as the Leader for some data: each Broker is a Leader for some Partitions and acts as a Follower for other Partitions, providing load balancing within the cluster. ZooKeeper provides the information on which Broker is the Leader for a certain piece of data. 
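
You can see the Leader and Follower assignment for each Partition with the describe command; the output line below is only illustrative (broker ids and topic name are examples):

bin/kafka-topics.sh --describe --topic test-topic --bootstrap-server localhost:9092
Topic: test-topic  Partition: 0  Leader: 2  Replicas: 2,0,1  Isr: 2,0,1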

Commercial, Gunnebo Business Solutions, Innovation, Microcontroller, Microsoft Azure, Reflections

Everyday Safe and IoT for the Consumer

The whole world has advanced into the digital age, and a lot of appliances, gadgets, and accessories now depend on the internet for their operation. These devices are designed with state-of-the-art technology so they can communicate smoothly at any time, and they have become so popular that they outnumber the human population. There are approximately 7.62 billion people around the world, but surprisingly we have about 20 billion IoT devices, all connected to the internet.

New IoT devices emerge every day; we see home automation systems, smartwatches, smart gadgets, smart vehicles and a long list of other things that make your life easier and more fun in today's world.

Through my work in Innovation at Gunnebo Business Solutions, I get to work on quite a few cutting-edge projects to bring Gunnebo into the connected future. The GBS team strives to develop a scalable collaboration platform that supports each business unit's digitalization and software offering. Our main focus is to lead Gunnebo's business units into the digital future of software services and enable product-as-a-service sales.


I am currently working on a really exciting project with our Safe Storage Business Unit. We are working on a brand new smart safe, which can easily be integrated into different parts of the home (kitchen, bedroom or bathroom) to store your valuables. The safe is designed to suit everyday needs and can be used for storing valuables such as car keys, jewelry, credit cards, visas, passports or anything else important to you.

The safe is designed to be a simple and convenient solution that can be accessed by customers around the world. Anyone interested in getting the best security for their valuables should try out this option. Not only does the safe keep your valuables secure, it is also aesthetically appealing and built with the best technology, which makes it even more attractive.

Like any smart device, this safe can of course easily be connected to the owner's mobile phone and send telemetry to the cloud. This is where I come in. I am working with our team in Markersdorf on securely merging the classic, mechanical parts of a safe with modern IoT technology.


To make sure that our new IoT device delivers on its potential, it is developed with state-of-the-art technology, both physically and on the firmware and software side, which makes it reliable and easy to use.

To ensure the efficiency of our operations, we work with agile partners like Microsoft, 3H, Polytech Software and others, who help fuse entrepreneurial spirit with professional development of the product. Through their involvement, we have been able to achieve optimal results.


As mentioned earlier, the Internet of things (IoT) is a system of interrelated computing devices, mechanical and digital machines. This means that it can be just anything from your television to your wristwatch. Over time, the scope of IoT devices has changed from what it used to be due to the convergence of multiple technologies, real-time analytics, machine learning, commodity sensors, and embedded systems.

An IoT device offers its users a number of impressive benefits, including increased interaction between devices, greater automation and control, easier operation, and savings in time and money through increased efficiency. But it still has a few drawbacks of its own: systems can easily become highly complex, they may be affected by privacy and security breaches, and this can reduce safety for users.

The market for IoT devices is expanding every day and becoming more popular as its number of users also increases. This might be the first IoT device from Gunnebo, but it is definitely not the last.

If you want to know how Gunnebo works with IoT in the consumer market, feel free to contact me at bjorn.nostdahl@gunnebo.com

Agile, Gunnebo Business Solutions, Methodology, Scrum, USA

Certified Scrum Product Owner

Having worked as a product owner for years, I finally decided to take things to the next level with a certification training known as Certified Scrum Product Owner.

The CSPO course is an interactive course that lasts two 8-hour days. During the course, we learned the basics of the scope of Scrum and the functions of a Scrum Product Owner. We were taught using case studies, exercises, and discussions. More importantly, the topics treated included how to identify user needs, the backlog, how to manage stakeholders, an overview of sizing in Scrum, and how to create, maintain and order a Product Backlog.

The CSPO training was conducted by Chris Sims. He is a certified scrum product owner, agile coach and C++ expert who helps companies run efficiently and happily. He is also the founder of Agile Learning Labs and a co-author of two best-sellers: The Elements of Scrum and Scrum: A Breathtakingly Brief and Agile Introduction.


The CSPO training session was held in Silicon Valley, midway between San Francisco and San Jose, at the Seaport Conference Center. The facilities here were perfect for the setting of the training, and as a bonus, we got to see the towing of a drug houseboat (that was our theory at least).


A Scrum Master works to help an inexperienced team get familiar with the operations and effects of Scrum. In comparison, a Product Owner's priority is to make sure that customers are satisfied with the quality of service they get, and they usually help create the product vision and order the Product Backlog.

At the end of the training, a CSPO is equipped with the skills to serve as a product owner in a Scrum team. The role of the product owner is vital in ensuring that the product offers optimal value to the customer in a timely manner. The product owner can achieve this in a number of ways, given the resources at their disposal: the team, the business stakeholders, and the development process adopted by the organization.

The responsibilities of a CSPO

The first is the development and writing of the product vision. To do this, the product owner has to work with a clear idea of the functions and benefits of the product for the consumer. It also includes writing a list of product features. Basically, product features are product requirements written from the user's perspective. These features are usually written as a detailed description of the capability of the product in the hands of the customer.

The CSPO also helps to compile a list of features into the Product Backlog. It’s important that the product owner has the ability to make the team understand the scope of the project and work together to get things done. He also reviews, tests, and assesses the final product. A CSPO can also request changes to the product if there are any issues with it.

Getting a Certified Scrum Product Owner® (CSPO®) certification exposes anyone to a lot of benefits. Firstly, the CSPO certification opens up more career opportunities and makes it easier to work in the different industry sectors that have adopted Agile. This exposes an expert to different companies and occupational positions. Also, it shows that you are an expert in Scrum, making it easier to let your employers and team members know about your capabilities.

On another note, the certification will teach you the history of the Scrum Foundation and the role of a Product Owner. The classes to train you for the certification will orientate you on the roles and duties of a product owner. It also takes you into close contact with Agile practitioners who want to improve their skill level. A CSPO certification is a sign of a product owner’s reliability.

Scrum teams operate at a level of efficiency and speed that may be a problem for traditional product management. Learn about the skills adopted by product owners to lead their team and achieve optimal results. Anyone who takes part in a CSPO training will take part in exercises and simulations related to business value estimation, product strategy, an overview of the product owner role, release planning, effective communication with stakeholders, story splitting, acceptance criteria, user stories, lean product discovery, and artifacts including burn charts.


Working with Scrum for quite a few years now, I have assembled a set of methodologies and syntaxes on how to write good requirements for your team. Below I will share the requirement formats and lifecycles I use in my daily work, and I hope they will help you too when working in an Agile team.

Epic

Software development teams work on very complicated projects. It is crucial to understand every requirement and feature required by the customer. 

An epic is a large body of work broken down into several tasks or small user stories. It denotes a high-level, descriptive version of the client's requirements. As an epic is a description of the user's needs, its scope is expected to change over time; hence, epics are delivered across several sprints and teams. Epics often encompass multiple teams on multiple projects and can even be tracked on numerous boards. Moreover, epics help the team break a main project's work down into shippable pieces without disturbing delivery of the main project to the customer.

Format

For a <persona> who <has a pain point>, the <product or solution> is a <type of solution> that <solves an issue in a certain way>. Unlike <the old solution or competitor>, our solution <has certain advantages>.

Acceptance Criteria

Success criteria: <>
Acceptance criteria: <>
In scope: <>
Out of scope: <>

Lifecycle

An Epic can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the Epic can be resolved. When the functionality of the Epic is delivered to the end customer, the Epic will be Closed. It is a complicated task to create an Epic. The following steps should be followed to develop an agile epic. 

It starts with the Recording/Reporting, which includes drafting the epic for project managers and the team. Second comes the Description, where the process of achieving the proposed project is described. Next is the Epic Culture, which denotes the epic team's size based on the company culture. Finally, the most important one is the Timeline or Time Frame, where the team decides how long it will take to complete the project.

Feature

When a development team builds one extensive software system, lots of requirements are gathered from the customer to understand precisely what the customer needs. The customer might not have an understanding of how the gathered requirements are used, but the development team knows that these requirements ultimately become the features of the system being developed.

A feature is a small, distinguishing characteristic of a software item, which is also a client-valued function. Features are small and typically can be implemented within a sprint. When we describe a feature, we use the same format as a User Story, but with a broader scope. 

Format

As a <particular class of user>, I want to <be able to perform/do something> so that <I get some form of value or benefit>

Lifecycle

A Feature can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the Feature can be resolved. When the functionality of the Feature is delivered to the end customer, the Feature will be Closed.

A feature can be added to a system as per the customer's requirement, even after development is completed or during the development phase. The user creates a feature, and the features are added to the features inbox. The product team sorts the features and adds them to a feature list for the feature team to elaborate. The feature manager contacts the appointed teams to start inspections. After the engineering team implements the feature, it is added to the release tracking page, and once it is completed, the QA team carries out the final testing. The feedback team starts gathering feedback, and the feature moves to Aurora and Beta. Finally, the feature is released.

User Story

When working on a complex project, the development team must ensure that they have fully understood the customer’s requirements. 

In software development and product management, a user story is an informal, natural-language description of a software system's features. User stories are often written from the perspective of an end user of the system. Furthermore, user stories break the big picture of an epic down into more user-focused pieces, in a way that lets the engineering team clearly understand the product requirements.

Format

As a <particular class of user>, I want to <be able to perform/do something> so that <I get some form of value or benefit>

Acceptance Criteria

Given <some context> When <some action is carried out> Then <a particular set of observable consequences should obtain>

Lifecycle

A User Story can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the User Story can be resolved. When the functionality of the User Story is delivered to the end customer, the User Story will be Closed.

The stakeholder provides the idea in the form of a change request or new functionality; it is captured by the product owner as a business request, and the product owner creates the user story. Then the user story is added to the backlog and, with the help of the sprint team, it is groomed by the product owner. The user story is then broken down into acceptance criteria for prioritization, and whether the owner accepts or rejects the story depends on those acceptance criteria. Finally, the user story is recognized as complete and closed, or returned to the backlog for future iterations.

Task Story

The Task Story work item is more technical than an agile User Story. Instead of forcing the User Story format, it is better to use a Feature-Driven Development (FDD) approach, describing what is expected in more technical terms. FDD blends several industry-recognized best practices into a cohesive whole. These practices are driven from a client-valued functionality perspective, and their primary purpose is to deliver tangible, working software repeatedly and on time.

Format

<action> the <result> by/for/of/to a(n) <object>

Example: Send the Push Notification to a Phone

Acceptance Criteria

Given <some context> When <some action is carried out> Then <a particular set of observable consequences should obtain>

Lifecycle

A Task Story can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the Task Story can be resolved. When the functionality of the Task Story is delivered to the end customer, the Task Story will be Closed.

Bug

Any software development team can come across faults in the product they are working on, and these faults are identified in the testing phase. 

An error, flaw, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways, is called a software bug. The process of finding and fixing bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs; since the 1950s, some computer systems have also been designed to deter, detect or auto-correct various computer bugs during operation.

Format

Found in: <module>
Summary: <short description>
Reproduced by: <reproduction steps>
Result: <what happened>
Expected: <what was expected to happen>

Lifecycle

The Bug work item can be created by anyone but is usually made by QA or Operations via a customer. When the bug is fixed, it should not be closed until confirmed by the creator.

There are six stages in the bug life cycle. When the bug is created and yet to be approved, it is in its New stage. Next, it is Assigned to a development team, which starts working on fixing the defect. When the developer fixes the bug by making the necessary changes to the code and verifying them, it can be marked as Fixed. The fixed code is then given to a tester; until the tester retests it, the bug stays in the Pending Retest state. Once the tester is testing the code to see if the developer has successfully fixed the defect, the status is changed to Retest.

Spike

Although we have epics and user stories to break down complex projects and make it understandable to the engineers, there can still be confusion.

A Spike aims at gathering information to sort out the unclear sections the team comes across in the user stories. A spike can be a research, architectural, or refactoring spike. When the team comes across such confusing situations, it has to create a functional or technical experiment to evaluate the options. It can be any type of research the team does; the final goal is to resolve unclear requirements.

Format

In order to <achieve some goal> a <system or persona> needs to <perform some action>

Example: In order to estimate the "push notification" story, a developer needs to research whether Azure services meet the requirements.

Lifecycle

A Spike can be created by anyone, but can only be moved into the backlog by the Product Owner. The sprint team has the responsibility to create acceptance criteria. When the Spike's goal is met, it can be Resolved or Closed, depending on the owner's decision.

Task

Stories are written in a way that is easy to understand by the customer, and there are no technical terms or instructions related to development. Now the story has to be converted to a detailed instruction list that is easy to understand by the developer.

A Task is a piece of work for the developers or any other team member. It gives the developer an idea of what should be done during development, such as creating tests, designing something, adding code, automating features, etc.

Format

There is no specific format for a task; it can be written in the form of a note or a to-do list.

Lifecycle

A task can be created by anyone, but it is typically created by a developer as a child to a User Story or a Task Story.

A new Task is Created as a user action or as part of process execution, and Candidates are set to groups of people. Next, individuals are directly Assigned as part of process execution or when requested through the API. Sometimes an assignee might want to Delegate a part of the work; once the delegated work is resolved, the assignee passes the work back to the original owner. Finally, the task is Completed.

Issue

An Issue is a description of an idea or a problem. It also can be outlined as an improvement that should take place in the product. If resolved, it would increase the value of the final product or reduce waste in development time.

Format

There is no specific format for an issue, it is more like a note and can be written in the format of a User Story or Spike.

Lifecycle

Anyone can create an Issue, but only the Product Owner can convert it into a User Story or a Spike and put it into the backlog. The life cycle of work can be defined by setting an issue workflow as follows:

When an issue is created, the time needed to resolve it is decided depending on the issue's size. A newly created issue is in its Open state. Usually, a QA engineer will create an issue and assign it to a developer who can solve it. While the programmer is working on resolving the issue, it is in its In Progress state. After the issue is solved, it goes to the Resolved state. An issue can go to its Closed state only if the creator is happy with it. However, when an issue goes to its Closed stage, it does not mean that it is solved for good; there is a chance it will arise again. Then the issue is Reopened, and the same process takes place to figure out the issue and fix it.

Concluding this post, I want to say that Chris's training skills were at the top level, and his stories about Silicon Valley, how he started Agile Learning Labs, and his career as a product owner, engineering manager, scrum master, software engineer, musician, and auto mechanic made for impressive lunchtime discussions.

To learn more about the role of a product owner, you can contact me at bjorn.nostdahl@nostdahl.com.

There’s more information about agile in my articles on Social Agility and Agile and Scrum Methodology Workshop.

DevOps, Gunnebo Business Solutions, Microsoft, Microsoft Azure

Microsoft LEAP: Design with Best Practices

All good things come to an end and LEAP is no exception. It was a great week full of interesting and enlightening sessions. Day 5 was a fitting end to the week with its focus on Design with best practices.


Let’s get to the sessions; the day began with a keynote by Derek Martin on the topic Design for failure. Derek is a Principal Program Manager and spoke about what not to do when designing a product. He spoke about building Azure and how the lessons learned can be used to understand and anticipate challenges.


The focus was given to managing unexpected incidents not only in the application environment but also in the cloud as a whole.

Brian Moore took over with his keynote on Design for Idempotency – DevOps and the Ultimate ARM Template. He is a Principal Program Manager for Azure. The focus of the session was on creating reusable Azure Resource Manager Templates and language techniques to optimize deployments on Azure. The intention of these reusable templates is to introduce a “Config as code” approach to DevOps.

He took his time to explain "the Ultimate ARM Template" and other key points about it. Brian Moore explained that the Ultimate ARM Template utilizes language constructs to increase the impact of minimal code. The template simply aims to simplify all of your work, and it offers a variety of benefits for all of its users. To guarantee the efficiency of ARM, he also explained the practices to avoid. It is a template that provides you with the best options for the most effective results and lacks nothing essential.


After the morning coffee break, Alexander Frankel, Joseph Chan, and Liz Kim conducted their joint keynote on architecting well-governed environments using Azure Policy and Governance.

They illustrated real-life examples of how large enterprises scale their Azure applications with Azure Governance services like Azure Policy, Blueprints, Management Groups, Resource Graph and Change History.

The next session was on Monitor & Optimize your cloud spend with Azure Cost Management and was conducted by Raphael Chacko. Raphael is a Principal Program Manager at Azure Cost Management.

The keynote’s main focus was optimizing expenditure on Azure and AWS through cost analysis, budgeting, cost allocation, optimization, and purchase recommendations. The main features of Azure Cost management were highlighted.


It was right back to business after a quick lunch break. Stephen Cohen took over with his session on Decomposing your most complex architecture problems.

Most of the session was spent on analyzing and coming up with answers to complex architecture-related problems raised by participants. It was a very practical session and addressed many commonly faced issues.


The next session was conducted by Mark Russinovich, the CTO of Microsoft Azure.


Day 5 had a shorter agenda and was concluded with Derek Martin returning for another keynote on Networking Fundamentals. Derek spoke about Azure networking primitives and how they can be used to strengthen the networking security of any type of organization using Azure environments. Azure networking primitives can be used in a flexible manner, so that newer, modern approaches to governance and security protocols can be adopted easily.

And that was it. The completion of a great week of LEAP. I hope all of you enjoyed this series of articles and that they gave you some level of understanding about the innovations being done in the Azure ecosystem.

DevOps, Gunnebo Business Solutions, Microsoft, Microsoft Azure

Microsoft LEAP: Design for Efficiency, Operations and DevOps

I just left Microsoft Headquarters after another interesting day at LEAP. Today’s topics were quite interesting, especially DevOps, because of all the innovations that are being made. I’m actually a little emotional that there’s just one more day remaining.

Jason Warner began the day’s session with his keynote on From Impossible to Possible: Modern Software Development Workflows. As the CTO of Github, Jason shared much of his experience regarding the topic.

The underlying theme of the keynote was creating an optimal workflow that leads to the success of both the development process and the team. He pointed out the inevitable nature of modernization and said it is important that the company does not become mediocre or get worse.


Before he went on to the topic of the day, Jason spoke about himself and didn't hesitate to share some valuable history and information about his life. He then gave the audience some brief insight into the capabilities of GitHub and the success it has managed to achieve so far.

According to Jason, proper modernisation requires a workflow that consists of the following: automation, intelligence and open source. Next, he identified GitHub's ability to produce the best workflows to improve company efficiency. It didn't end there, as he continued by talking about the benefits of workflow inflation.

Abel Wang continued with the next session and his keynote was on Real World DevOps. Abel is a Principal Cloud Advocate for Azure.
This session was truly valuable as it covered the full process of a production SDLC and many other important areas such as infrastructure, DNS, web front ends, mobile apps, and Kubernetes APIs.

At the start of his presentation, Abel Wang introduced us to his team and gave a rundown of some vital information about DevOps. Why do you need DevOps? Well, they are solution providers, support any language and boast a three-stage conversation process for results.

After a much-needed coffee break, we embarked on the next session on Visual Studio and Azure, the peanut butter and jelly of cloud app devs. The speaker, Christos Matskas is a Product Marketing Manager at Microsoft.

The session focused on explaining how well Azure and Visual Studio support development, live debugging, and zero downtime deployments. Christos also spoke about leveraging integrated Azure tools to modernize .Net applications.

The team behind Visual Studio is committed to providing developers with the best tools available. It supports all types of developers and redefines their coding experience. The great thing about Visual Studio is that the team doesn't rest on its laurels and is constantly in search of innovation. It even comes with Visual Studio Live Share, a feature that allows developers to share content with each other in real time.

Evgeny Ternovsky and Shiva Sivakumar jointly conducted the next session on full stack monitoring across your applications, services, and infrastructure with Azure Monitor. Many demonstrations were performed to give an overview of the capabilities of Azure Monitor.
The demos included monitoring VMs, Containers, other Azure services, and applications. In addition, setting up predictive monitoring for detecting anomalies and forecasting was also discussed.

Azure has a full set of services to oversee all your security and management needs. The tools you need are built into the platform, reducing the need for third-party integration. On top of that, Azure has developed a set of newer features: partner integration, monitoring containers everywhere, new pricing options, and troubleshooting of network issues.


Subsequent to lunch, I joined the alternative session, which was on Artificial Intelligence and Machine Learning. The session was on the use of Azure Cognitive Services with optimized scaling, in order to improve the customer care services provided by organizations such as telecoms and telemarketers.
Then we were back at another joint session by Satya Srinivas Gogula and Vivek Garudi and the keynote was on the topic Secure DevOps for Apps and Infrastructure @ Microsoft Services.


The speakers spoke about the wide adoption of DevOps practices and Open Source Software (OSS) and the vulnerabilities they introduce. The latter part of the session focused on best practices for secure DevOps with Azure.

The next keynote was on Transforming IT and Business operations with real-time analytics: From Cloud to the intelligent edge. It was jointly delivered by Jean-Sébastien Brunner and Krishna Mamidipaka and focussed on the challenges faced by IT and Business teams trying to understand the behavior of applications.
The speakers explained the benefits of Azure Stream Analytics to ingest, process, and analyze streaming data in order to enable better analytics.

A good example of when Azure is at its best is that it can be used for earthquake and storm predictions.

Taylor Rockey concluded the day with his keynote on MLOps: Taking machine learning from experimentation to production. MLOps is an integration of machine learning and DevOps. MLOps has proven to have numerous benefits, including scalability, monitoring, repeatability, accountability, traceability and so on. The platform has impressive features that make it a first choice for many developers.
The problem that many organizations face is the lack of proper understanding and tooling to use Machine Learning for production applications. The session focussed on the use of Machine Learning for production applications with Azure Machine Learning and Azure DevOps.

And that’s a wrap. Don’t forget to tune into tomorrow’s article.

DevOps, Gunnebo Business Solutions

Microsoft LEAP: Design for Availability and Recoverability

Day 3 of Microsoft LEAP was just completed. It was a day packed with many interesting keynotes regarding improving the availability and recoverability of Azure applications. By now, you know the drill, check out my notes on Day 2 here.

Mark Fussell and Sudhanva Huruli co-hosted the opening keynote on the topic Open Application Model (OAM) and Distributed Application Runtime (Dapr). Mark has been with Microsoft for nearly 2 decades and is now a Principal PM Lead. Sudhanva is a Program Manager. Both of them work on the Azure Service Fabric platform.
The open application model was discussed in detail and the focus was on separating operational needs from development concerns.


Mark Fussell started by describing the application topologies that many users deploy. He also stated that developers write each application to interact with different services. Then Mark spoke about the reason behind the creation of Dapr: it was designed as a solution to tackle the problems of microservice development. Dapr allows building apps using any language, on any framework, and Microsoft is already on board to tap into the benefits it offers, such as stateful microservices in any language.

Sudhanva Huruli's talk on OAM was intriguing and revealing. According to him, OAM is a platform-agnostic specification that helps define cloud-native applications. Users can trust its quality because it was built by large teams at Microsoft and Alibaba. It can be applied in a number of ways. Its benefits include encapsulating application code, offering discretionary runtime overlays and discretionary application boundaries, and defining application instances.

The program is fully managed by Azure, so that you can focus on applications.

The opening session was followed by another joint session by Muzzammil Imam and Craig Wilhite, who hold the positions of Senior PM and PM respectively.
This keynote was on the topic of Windows Containers on AKS and it detailed the process of converting a legacy application into a cloud application and hosting it on a Windows container on an Azure Kubernetes service.

Their presentation showed that a lot of on-premises workloads run on Windows; about 72%. There seems to be a light at the end of the tunnel, as there have been numerous good reviews of Windows containers. Adoption is growing steadily and there is room for more improvement. Windows containers will keep getting better with continuous innovation.

Kubernetes is a great option on Azure. It is a vanguard of the future of app development and management, and it can help you ship faster, operate more easily and scale confidently. Azure Kubernetes Service will help you handle all the hard parts and give you room for a better future.


After the coffee break, we were back for the next session conducted by Brendan Burns on Securing software from end-to-end using Kubernetes and Azure. Brendan is a Distinguished Engineer at Microsoft. This session focussed on continuous delivery with Kubernetes. Some of the sub-themes were continuous integration with GitHub Actions, Access Control in Kubernetes, and Gatekeeper for Kubernetes.

The last session before lunch was conducted by Jeff Hollan, a Principal PM Manager for Microsoft Azure Functions. The keynote was on serverless and event-driven functions for Kubernetes and beyond. Simply put, these functions fit naturally alongside the features of Kubernetes.


The focus was on stateless event-driven serverless computing which is enabled by Azure functions. Many new hosting and programming models that enable new event-driven scenarios were discussed.

When used with serverless, it allows developers to focus on what really matters: their code. It can be used for a variety of applications, and Kubernetes also does well when dealing with event-driven applications.

Next to speak was Kirpa Singh, a manager, whose session covered microservices and performance tuning. He spoke about what makes microservices a better option and went on to describe the benefits of a microservice architecture for projects. It is designed for large applications that require a high release velocity, complex applications that need to be highly scalable, applications with rich domains or subdomains, and so on. It offers users agility, focus, technology choice and isolation.

After lunch, we saw more of the Microsoft campus. Then it was back to the next session.
The session after the lunch break was the OSS Architecture Workshop conducted by Jeff Dailey, Patrick Flynn, and Terry Cook. One of the core themes of the workshop was Open Source stacks. They spoke about building Hybrid resilient data pipelines and infrastructure using open source. This was done through a breakout session at which the attendees were separated into groups and drafted architectures that supported both on-premise and cloud deployments.

During this session, they discussed Open Source. But why open source? It allows easier migration, delivers poly-cloud options via APIs, drives Azure consumption, and so on.

Mark Brown conducted the next session on Building high-performance distributed applications using Azure Cosmos DB. He is a Principal PM in the Azure Cosmos DB Team.
The session’s key theme was building globally distributed cloud applications with high availability while ensuring extreme low latency. Many real-world demos were explored during the session and these will help us, developers, to tackle these issues in our own projects.

Hans Olav Norheim, a Principal Software Engineer, concluded the sessions for the day with a keynote on Designing for 99.999% – Lessons and stories from inside Azure SQL DB.
The session focussed on building applications with almost 100% uptime while covering design choices, principles, and lessons learned that can be used in our own projects to overcome uptime issues.

Thus were the proceedings of Day 3. I conclude my note while looking forward to the next set of sessions with the theme Design for efficiency & Operations & DevOps.
I’ll be publishing another article tomorrow.

Gunnebo Business Solutions, Microservices, Microsoft, Microsoft Azure, Technical

Microsoft LEAP: Design for Performance and Scalability

I’m at Microsoft for LEAP and we just wrapped up another day of interesting discussions. If you missed my update regarding day 1, make sure to have a look at it here.

Today’s theme was Design for Performance and Scalability. Many legacy applications are being replaced because they are not performance-oriented and scalable at their core. This is something that has to be introduced right from the design stage. Today’s speakers covered many of the core areas which need to be optimized to enable both performance and scalability.


Vamshidhar Kommineni took us right from breakfast to using Azure Storage for the data storage needs of Azure applications and how it can be used to enhance performance. Vamshidhar spoke about the innovations made in the storage services layer in 2019. He also briefly shared the plans for 2020. 

Corey Newton-Smith was next and focused on IoT applications. Corey has been with Microsoft since 2003 and currently functions as the Principal Group PM for IoT Central. She shared the current state of IoT and Microsoft's plans for the near future, highlighting their vision.

Corey explained that Azure IoT represents a new era of digitization among industries, an innovation that allows brands to do so much more. The objective behind the platform is to enable a digital feedback loop. She discussed how much Microsoft has done to make IoT better: it is now capable of bidirectional communication, can be scaled to suit enterprises of any size, and provides end-to-end security. Microsoft is planning improvements that would allow it to support scenarios that are not currently cloud-feasible. What's more, everything can be tailored specifically to the exact solution that you need.

The next session began after some light mingling during the coffee break. It was back to business with Jose Contreras and his keynote on decomposing Monoliths into Microservices.


Enterprise applications have made a gradual transition from being monolithic to being Microservice based. Jose explained strategies that can help with this process focussing on Memory, Computing, and Schema. He then discussed migrating existing monolith applications into Microservices without affecting ongoing operations. He focussed on the design, execution, and DevOps aspects.

Jose spoke about a number of factors that prove the usefulness of transforming a monolith into microservices. As part of his talk, he highlighted the factors to consider when adopting this approach, the differences between a private and a shared cache, and considerations for using a cache.

Interestingly, he then moved on to Azure Compute. He listed all of the available services and gave detailed information on their hosting models, DevOps criteria, scalability criteria, and other criteria.

Clemens Vasters’s keynote focussed on how messaging is shaping enterprise applications. Importantly, he spoke on how Microsoft Azure could make all of it better.
He is a Product Architect at Microsoft and highlighted how open standards and messaging can be used to move applications to the cloud. Some of the areas he touched on were Event Hubs, Service Bus, Event Grid, CNCF CloudEvents, and Relay with WebSockets.

According to him, users can choose from a series of options to connect a range of devices. Ease of connectivity is ensured through the intelligent edge or the intelligent cloud. It can be applied at varying scales and still works well with telco 4G/5G. On top of this, cloud services can be applied to create automotive and smart-city solutions, support industrial automation and speed up processes.

Clemens continued by clearing the air on the standards the cloud services operate on. Everything is built according to standards and designed to be secure. Such was the level of quality on display.

After a quick lunch break, an alternative session was conducted for those who were already familiar with the campus. This session on Messaging Guidance was conducted by Francis Cheung and was related to session 4. However, Francis focused more on how we could assess if some of those tools were a good fit for our projects. He also touched on managing and versioning message schemas.

Next was David Barkol’s session focusing on Designing an Event-driven Architecture on Azure through a workshop approach. He challenged attendees to solve problems related to messaging in a group setting. As a Principal Technical Specialist for Azure, David used his vast experience to reinforce the existing knowledge of attendees about Azure messaging services. He really had a lot of interesting things to say.

Using a few simple statements, he was able to highlight the problems of the customer, identify their needs, and show how to solve them with the use of an event-driven architecture. As a platform, an event-driven architecture eliminates bottlenecks and allows for easier transmission of information. Azure messaging services address the demands identified by the customer. He also mentioned that Event Hubs Geo-DR provides a backup or secondary region.


Derek Li conducted his keynote next. He focussed on Serverless platforms based on Azure Functions and Logic Apps. Derek is a Senior Program Manager. His keynote focused on how serverless technologies have impacted how applications are built. He also spoke on how Azure Functions and Logic Apps can be used to speed up delivery.

The last session was after a very welcome Cola Zero break. It refreshed us for Rahul Kalaya’s keynote on deriving insights from IoT Data with Azure Time Series Insights.
Rahul spoke about design choices, principles and lessons learned with regards to maintaining the highest possible uptime of cloud databases and servers. Many stories from his experiences with Azure SQL made the keynote even more interesting.
And that was it. The completion of a day of meaningful sessions.

I look forward to sharing my next article on Day 3: Designing for Availability and Recoverability.

Gunnebo Business Solutions, Internet of Things (IoT), Microsoft, Microsoft Azure, OpenID Connect, Security, TLS/SSL

Microsoft LEAP: Design for Security

This year is already off to a fantastic start! I am so excited to be here at the LEAP conference at the Microsoft Headquarters in Redmond Seattle. LEAP is a perfect way for me to keep up to date with new technology and how to apply it here at Gunnebo.


The focus of the day was Design for Security. The threat of cyber attacks and hackers is as pressing as ever, so the need for cloud security is crucial. Although technological advancement has triggered an evolution in cloud security over the years, keeping the right level of visibility and control over applications is still a challenge for many organizations. This means that finding a balance between cloud security and ease of use is a hard nut to crack. Today's program discussed how Azure copes with this issue, and the speakers introduced new and updated features Azure has brought in recently to improve the security of cloud applications.


The highlight of today’s program consists of five great keynotes. The first on the list was Scott Guthrie, the executive vice president for Microsoft’s Cloud. He is an incredible orator and kept the audience thrilled with his in-depth explanations on how Azure helps organizations to deliver product innovation and better customer experience securely. It was frankly impossible to have been there without taking away more than a few vital points and a better understanding of Azure.


Then Stuart Kwan, a principal program manager at Microsoft, was next in line. He backed up Scott Guthrie with a great keynote on how authentication works in today's applications. Stuart has a wealth of experience under his belt, having worked on identity and security-related technologies since joining Microsoft in 1996. Few people have more experience in that field. He is the guy to listen to on topics like Active Directory Federation Services and Windows Identity Foundation. The main focus was on OAuth, OpenID Connect, and SAML. OpenID Connect is a simple identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines mechanisms to obtain and use access tokens to access protected resources, but it does not define standard methods to provide identity information. OpenID Connect implements authentication as an extension to the OAuth 2.0 authorization process. It includes information about the end user in the form of an id_token that verifies the identity of the user and provides basic profile information about the user.

When Yuri Diogenes took control of the stage, everyone knew that his talk would be primarily based on how cloud security is evolving and becoming more mature. Yuri is a Senior Program Manager at Microsoft for Cloud and AI Security.


Before Yuri moved on to talk about Azure security, he provided some insights into the problematic scenarios that many companies find themselves in. According to him, security hygiene has to be taken seriously or any cloud-based infrastructure will suffer. Basically, organizations have to protect themselves against modern-day threats. He carefully explained that Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud (whether they are in Azure or not) as well as on-premises. In simple terms, Azure security is the new security hygiene that you need.

Yuri went further to explain the benefits of Azure Security Center and Azure Sentinel. They provide all-round security and also afford a degree of customizability. According to him, Azure is capable of protecting Linux and Windows VMs from threats, protecting cloud-native workloads from threats, detecting fileless attacks, providing cloud workload protection for containers, and so on.


The next person on stage was Nicholas DiCola, a Security Jedi at Microsoft. He thrilled the audience with his discussion of Azure Sentinel, explaining how Sentinel functions as a cloud-native SIEM for intelligent security analytics across an entire organization. It offers limitless cloud speed and can be used at any scale. It also provides its users with faster threat protection and integrates easily with all existing tools.

According to him, Azure Sentinel was designed to collect data for visibility, detect threats through analytics and hunting, investigate incidents, and respond to them automatically. Azure Sentinel gets the data it works on from numerous sources such as the Linux agent, the Windows agent, cloud services, custom apps, appliances, Azure services, and so on. After collating all the necessary data, its analytics scan for possible threats, and you can then monitor your data and activity.

Last but not least, we had a session with Sumedh Barde and Narayan Annamalai. They opened a fascinating discussion on how to secure certificates, connection strings, and encryption keys, as well as on the new networking capabilities of Azure. Sumedh Barde is a Program Manager on the Azure Security team, and Narayan leads the SDN product management group in Microsoft Azure, which focuses on virtual networks, load balancing, and network security.

These two gave us great insight into Azure Key Vault. They explained how it functions as a tool for securely storing and accessing secrets. From what I learned at the conference, the secret to tightly controlling and securing access to things like API keys, passwords, and certificates is to use a vault. A vault is your very own logical group of secrets.
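As a rough illustration of what that looks like in code, the sketch below uses the Azure SDK for Python to read a single secret from a vault. The vault URL and secret name are made up for the example; in practice you would point it at your own Key Vault and run it under an identity that has been granted access.

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # Pick up whatever identity is available (Azure CLI login, managed identity, ...)
    credential = DefaultAzureCredential()

    # Hypothetical vault URL – replace with the address of your own Key Vault
    client = SecretClient(vault_url="https://my-example-vault.vault.azure.net",
                          credential=credential)

    # Fetch one secret (for example a database connection string) by name
    secret = client.get_secret("example-connection-string")
    print(secret.value)

The point is that the application never stores the connection string itself; it only needs permission to read it from the vault.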

It was a great day here in Redmond and an excellent opportunity to brush up on my knowledge of cloud security. I’m looking forward to tomorrow.

Gunnebo Business Solutions, Holiday, Reflections, Travel, USA

California, here I come!

Traveling to the US for the second time, I had the chance to take a weekend off to do some sightseeing. Even though I had just a few days, I wanted to cover as much ground as possible.

When the wintry winds start blowing
And the snow is starting to fall
Then my eyes turn westward knowing
That’s the place that I love best of all

California is one of the most well-known places in the world for numerous reasons, but chief among them is its identity as a pivotal place in world entertainment and technological innovation. It is the birthplace of popular figures like Jessica Alba, Tom Brady, and Clint Eastwood. Ahead of my trip to California, I did a little research on its key areas, and here is my summary.


San Francisco is officially referred to as the City and County of San Francisco and colloquially called SF, San Fran, Frisco, or The City. With a population of about 883,305 residents in 2018, SF is the center of commerce and finance in Northern California. Geographically, SF spans 46.89 square miles, making it the second most densely populated large city in the US, and, among counties, only four of the New York City boroughs are more densely populated. Its metropolitan statistical area is home to about 4.7 million people, the 12th largest in the country. The economic significance of the city is further buttressed by the fact that in 2018 SF had a GDP of $549 billion, the fourth-highest economic output in the US. Combined with San Jose and Oakland, it forms the fifth-largest combined statistical area in America (the San Jose–San Francisco–Oakland, CA Combined Statistical Area), with about 9.67 million people.

Golden Gate Bridge panorama, San Francisco California

The Hollywood neighborhood, located at the heart of Los Angeles, California, is world-renowned as the home of film and entertainment in the US, and it houses numerous iconic movie studios. Its impact on the industry is so strong that the name “Hollywood” has become an unofficial way of referring to the movie industry and the people involved in it. Hollywood began its rise to prominence when it was officially declared a municipality in 1903; this was followed by its consolidation with Los Angeles seven years later, which is when Hollywood began to grow as a film industry. Today, Hollywood is the largest movie industry in the world.

Hollywood Hills in Los Angeles, California.

Las Vegas, popularly referred to as the world’s gambling capital, is officially known as the City of Las Vegas, though the most common name is simply “Vegas”. The name “Las Vegas” is derived from the Spanish for “The Meadows”. Vegas is the most populated city in the state of Nevada and ranks 28th on the list of most populated cities in the US. It covers an area of 135.8 square miles, making it the largest city in the greater Mojave Desert. Vegas is at the forefront of Nevada’s financial, commercial, and cultural scene, and it is renowned worldwide as the capital of gambling, nightlife, fine dining, and casinos.


On my way back from Vegas I drove through the Mojave Desert, an arid rain-shadow desert that spans 47,877 square miles across southeastern California and southern Nevada. Of all the deserts in North America, the Mojave is the driest.

Thousands of wind turbines at sunset

Parts of the Mojave Desert also extend into Utah and Arizona, and one of its most distinctive features is the presence of Joshua trees, native only to the Mojave, within its boundaries. The region is believed to be home to over 1,900 plant species. Although the Mojave Desert contains densely populated communities like Las Vegas, Palmdale, and Victorville, its central region is sparsely populated.


With that, my three-day road trip was complete and it was back to work 🙂


Gunnebo Business Solutions, Gunnebo Retail Solution, Reflections, Retail, Sustainability

NRF: Sustainable retail, a vital next step

I came to New York in a jacket I bought second hand for EUR 25 in my hometown Ålesund. It is nice, warm and comfortable – and this is what is important to me. If 60-70% of your closet is not used, why not sell it so someone else can benefit from it?

It is a constant source of pleasure to me to discover that some of my ideals are being put forward as visionary in the retail industry, which I am passionate about. At this year’s NRF, sustainability is one of the key points being discussed, and an aspect of it is the promotion of re-sale/second-hand goods. It’s quite exciting to be a part of this, and I will be sharing it all with you.

Young woman is browsing a rail of clothes at mall store


Visionary Voices of Ambition, Purpose and Inclusion

The keynote that caught my attention was with four strong female top managers, Mercedes Abramo from Cartier, Shannon Schuyler from PwC, Shawn Outler from Macy’s and Tammy Sheffer from Rent the Runway.

Revolutionizing businesses (ranging from retail to other industries) requires more than just motivational talk, calculable action is needed to unlock potentials and push the edges of possibility. And when it comes to leading the way, some of the biggest moves are coming from female executives. Progressive voices from the biggest names in retail teamed up with CEO Action leaders to share time-honed tactics and winning strategies that are practical.

Couple working in a bulk food store

There has always been a disparity in the treatment of female employees compared to their male colleagues. Although informal moments are important, it is quite sad that top female managers are treated differently from their male counterparts. This is obviously not an issue found only in the retail industry; it’s a humanity issue.

What makes this difference in treatment more worrisome is that women actually seem to thrive in the retail industry, since the work is flexible. Retail requires a lot of contact with clients and other entities (suppliers, producers, etc.), and women on average excel in such roles.

Issues like this are some of the reasons why top management and CEOs are taking political stands more than ever. CEOs are realizing how powerful their voices are and the influence they wield. Progressive views are embraced by many top executives, and they can greatly influence employees.

Retail 2020: The dawn of sustainability

The role of technology in retail has started to change the entire market landscape. It’s easy to imagine the stores of the future with magic mirrors or robotic sales assistants. Yes, there have truly been advancements in the technology used in the retail industry, and the dawn of 2020 seems like the time for sustainability. Is sustainability growing to the point that it becomes crucial to your transformation journey? How will brands look to adopt technology and sustainability into their operations? If you are constantly plagued by these questions, Microsoft Corporate Vice President for Retail and Consumer Goods Shelley Bransten’s talk on sustainability and the impact of technology on the industry answered them all. Joining Shelley was Arti Zeighami, Head of AI & Sustainability at H&M, who shared H&M’s sustainability journey and how Microsoft technology equipped them to achieve their sustainability goals. It was good to see Arti again; it was truly an inspiration, as always.

Arti noted that retailers are often faced with questions of quantifying product requirements, optimizing the supply chain, and deciding how much to produce. He also explained that AI at H&M helps perform these computations with fantastic results. He went further to speak about some key tools employed at H&M.

Value chain analysis is a strategy tool used to monitor the internal activities of a business – retailers in this case. Its primary function is to identify the most valuable activities and those that can be modified to enhance competitive advantage. Simply put, looking into internal activities through the analysis shows the areas where a retailer’s competitive advantages or disadvantages lie. This analysis can shed light on whether a firm needs to improve its activities or reduce their operational costs.

Sometimes having the right products in the right place at the right time is what makes all the difference. Proper strategy and planning of the logistics chain can greatly improve convenience and lead to higher ROI. Customers are more likely to make purchases when products are close to them. Taking data from multiple systems offers you totally new insights, while employing various data-mining techniques and technologies diversifies your approach and gives you a broader perspective.

The road to sustainability is bound to have a few bumps; so test, fail, learn, pivot – then repeat. Making mistakes is inevitable in retail, and turning those mistakes into learning experiences will greatly influence your growth rate. Taking on experiments boldly (within reason) and trying out new technology is highly encouraged.

RE-purpose revolution: Upcycling retail’s future

Sustainability has become a top trend. Let’s face it, nobody wants to be labeled as the savage who’s ruining the earth. One of the impacts of environmentalists is that they have successfully made people feel that being environmentally conscious is the new cool. A similar scenario is playing out in the cosmetics industry, where there has been a big shift towards natural products. Tech and industrial giants have obviously noticed this change and adapted to become more environmentally friendly. It’s almost impossible to ignore the fact that retail is shifting towards sustainability.

A lot of retail leaders are doing their best to ensure that they remain relevant and continue to offer their customers sustainable solutions. Customers now base their purchase decisions on the values of the brand: personal, social, and environmental values. It is now clear that to get the best results, brands have to make purpose-driven decisions that help transform the world into a better place. Brands can’t just go around doing whatever they like. They have to continuously keep the demands of the customer in mind. It’s only through this re-purpose revolution that a company can reach its peak and keep all of its customers happy. It may seem like a relatively new concept, but adopting it is a great choice. So, how do you apply this to your own retail business? The best way is to learn from the best.


Hear from some of the most innovative leaders from nimble retailers and break-through digital-first brands who are meeting consumer needs and disrupting the industry.

A fact to always have handy is that consumers are becoming more progressive and conscious; they want to buy second hand, but they also need to see something fresh. Customers will willingly buy second hand if it looks appealing enough and satisfies their needs; it’s only a matter of time before retail leaders set the trend. The growing consumer awareness of sustainability and environmental consciousness creates perfect conditions for retailers to slowly introduce the concept of resale. Although a good percentage of this “awareness” among consumers stems from the urge to look cool and follow trends, it’s still solid enough for retailers to leverage.

At some point, retailers should understand that re-sale does not devalue the brand. Among the elite, minimalism and other such ideologies are being adopted, and these ideologies embraced by high-end customers make them more receptive to brands that re-sell.

Celebrities sell or give away unused items from their closets; this on its own should be enough to shed a more positive light on re-sale among retailers and customers, shouldn’t it? Resale does not discriminate; the world is ready for second-hand sale, and that is the message that should be preached. The narrative that second-hand goods are for the less privileged should be discarded – there simply isn’t any logic to it. On the other hand, wide acceptance of second-hand sale will boost the effectiveness of retail and the logistics industries associated with it.

A similar idea to re-sale is clothing rental services for luxury items. Brands like Rent The Runway Unlimited, Gwynnie Bee, and New York And Company Closet are already creating significant traction in that field. They work by offering members monthly subscriptions and allowing them to rent 4-6 items at a time – brilliant, isn’t it?

Similarly, ReBag is an online store that offers second-hand luxury goods. Considering that 92% of luxury items are bought offline, ReBag is switching from an online to an omnichannel approach to sales. It will be quite interesting to observe the positive effects re-sale will have on brands.

Curious. Distinctive. Uncompromising. Conversations with Recode

Kara Swisher entered the stage with the confidence only she can radiate. New this year, NRF partnered with Recode to bring its straight-to-the-point editorial interview style to NRF 2020 Vision’s arena. In this live interview, Recode co-founder and editor-at-large Kara Swisher leveraged her background in tech, media, and commerce to investigate some of the industry’s most pressing challenges, eliciting thought-provoking insights from her interview subject. Swisher has led insightful conversations with the likes of Bill Gates, Steve Jobs, Mark Zuckerberg, Sheryl Sandberg, Hillary Clinton, Katrina Lake, Jeff Bezos, Tim Cook, Jack Dorsey, and many other leading players impacting a broad swath of industries, including retail.

Nothing like a panel of four men talking about women’s clothing

Her “victim for the day” was Ben Silbermann, an American billionaire internet entrepreneur. He is the co-founder and CEO of Pinterest, a visual discovery engine that lets users organize images, links, recipes, and other things.

Ben gave the audience insight into how Pinterest was created for inspiration in how to dress, decorate, and behave. However, Kara pointed out the problems with internet platforms, as they easily become playgrounds for fraud and illegal activities like child pornography. Ben insisted that Pinterest has always had a clear etiquette on content and is continuously working on enforcing it.

Not wanting to talk too much about the future, Ben did reveal that personalization, ensuring the consumer sees what they want to see, will be important, as will maintaining the confidence of the merchants connected to the platform.

Closing keynote: The Goop Lab

The Goop Lab is an upcoming American documentary series about the lifestyle and wellness company Goop, founded by American actress Gwyneth Paltrow. The series premieres on Netflix on January 24, 2020.

Goop started as a newsletter from Gwyneth at a point in her life when she wanted to move from acting into another business. Trying to fill the “whitespace” in her life, she started looking into what she calls Contextual Commerce, a concept where you talk about, manufacture, and sell what you like and what gives you pleasure.

Gwyneth has been a hot topic in the last few weeks with her “Smells Like My Vagina” candle, a topic that amused the American audience here at NRF as well as her customers, as the candle sold out on the first day.

The partnership with Netflix generated criticism of the streaming giant for letting Gwyneth Paltrow use its platform to promote her company, which has been under fire for making health claims with no supporting evidence. Several critics termed the affair a “win for pseudoscience”. When the first trailer was released, the show received even more notable criticism from the scientific community and evidence-based critics. Gwyneth feels that she is often treated unfairly, but it is not as if she wants to take her critics to court; as you know, all PR is good PR.

This concludes my visit to NRF, but I will visit EuroShop soon and look forward to keeping you up to date on what goes on in the world of consumer retail!