Agile, Methodology, Scrum, Software Development Insights, USA

Certified Scrum Product Owner

Having worked as a product owner for years, I finally decided to take things to the next level with a certification training known as Certified Scrum Product Owner (CSPO).

The CSPO course is an interactive course lasting two 8-hour days. During the course, we learned the basics of the scope of Scrum and the functions of a Scrum Product Owner, taught through case studies, exercises, and discussions. Most importantly, the topics covered included how to identify user needs, how to manage the backlog and stakeholders, an overview of sizing in Scrum, and how to create, maintain and order a Product Backlog.

The CSPO training was conducted by Chris Sims. He’s a certified scrum product owner, agile coach and C++ expert who helps companies run efficiently and happily. He’s also the founder of Agile Learning Labs and co-author of two best-sellers: The Elements of Scrum and Scrum: a Breathtakingly Brief and Agile Introduction.


The CSPO training session was held in Silicon Valley, midway between San Francisco and San Jose, at the Seaport Conference Center. The facilities were a perfect setting for the training, and as a bonus, we got to see the towing of a drug houseboat (that was our theory, at least).


A Scrum Master works to help an inexperienced team get familiar with the operations and effects of Scrum. In comparison, a Product Owner’s priority is to make sure that customers are satisfied with the quality of service they get; he or she usually helps create the product vision and orders the Product Backlog.

At the end of the training, a CSPO is equipped to serve as the product owner on a Scrum team. The product owner is vital in ensuring that the product offers optimal value to the customer in a timely manner. He or she can achieve this in a number of ways, drawing on the resources available: the team, the business stakeholders, and the development process adopted by the organization.

The responsibilities of a CSPO

The first is developing and writing the product vision. To do this, he or she needs a clear picture of the functions and benefits of the product for the consumer. It also includes writing a list of product features. Basically, product features are product requirements written from the user’s perspective, usually as a detailed description of what the product enables the customer to do.

The CSPO also helps to compile a list of features into the Product Backlog. It’s important that the product owner has the ability to make the team understand the scope of the project and work together to get things done. He also reviews, tests, and assesses the final product. A CSPO can also request changes to the product if there are any issues with it.

Getting a Certified Scrum Product Owner® (CSPO®) certification brings a lot of benefits. Firstly, the CSPO certification opens up more career opportunities and makes it easier to work in the many industry sectors that have adopted Agile. Also, it shows that you’re an expert in Scrum, making it easier for employers and team members to know your capabilities.

On another note, the certification will teach you the history and foundations of Scrum and the role of a Product Owner. The classes that prepare you for the certification will orient you on the roles and duties of a product owner. They also bring you into close contact with Agile practitioners who want to improve their skills. A CSPO certification is a sign of a product owner’s reliability.

Scrum teams operate at a level of efficiency and speed that can be a challenge for traditional product management, so it pays to learn the skills product owners use to lead their teams and achieve optimal results. Anyone who takes part in CSPO training will take part in exercises and simulations covering business value estimation, product strategy, an overview of the product owner role, release planning, effective communication with stakeholders, story splitting, acceptance criteria, user stories, lean product discovery, and artifacts including burn charts.


Having worked with Scrum for quite a few years now, I have assembled a set of methods and syntaxes for writing good requirements for your team. Below I will share the requirement formats and lifecycles I use in my daily work, and I hope they will help you too when working in an Agile team.

Epic

Software development teams work on very complicated projects. It is crucial to understand every requirement and feature required by the customer. 

An epic is a large body of work broken down into several tasks or smaller user stories. It denotes a high-level, descriptive version of the client’s requirements. Because an epic describes the user’s needs, its scope is expected to change over time, so epics are delivered incrementally across sprints. Epics often encompass multiple teams on multiple projects and can even be tracked on numerous boards. Moreover, epics help the team break a main project’s work into shippable pieces without disturbing the delivery of the main project to the customer.

Format

For a <persona> who <has a pain point>, the <product or solution> is a <type of solution> that <solves an issue in a certain way>; unlike <the old solution or competitor>, our solution <has certain advantages>
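
Example (hypothetical): For a sales manager who loses track of leads spread across spreadsheets, LeadBoard is a CRM dashboard that gathers every lead into one pipeline view; unlike manual spreadsheet tracking, our solution updates itself in real time.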

Acceptance Criteria

Success criteria: <> Acceptance criteria: <> In scope: <> Out of scope: <>

Lifecycle

An Epic can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the Epic can be resolved. When the functionality of the Epic is delivered to the end customer, the Epic will be Closed. It is a complicated task to create an Epic. The following steps should be followed to develop an agile epic. 

It starts with Recording/Reporting, which includes drafting the epic for the project managers and the team. Second comes the Description, where the process of achieving the proposed project is described. Next is the Epic Culture, which denotes the size of the epic team based on the company culture. Finally, and most importantly, comes the Timeline or Time Frame, where the team decides how long it will take to complete the project.

Feature

When a development team builds one extensive software system, many requirements are gathered from the customer to understand precisely what the customer needs. The customer might not know how the gathered requirements are used, but the development team knows that these requirements ultimately become the features of the system being developed.

A feature is a small, distinguishing characteristic of a software item, which is also a client-valued function. Features are small and typically can be implemented within a sprint. When we describe a feature, we use the same format as a User Story, but with a broader scope. 

Format

As a <particular class of user>, I want to <be able to perform/do something> so that <I get some form of value or benefit>
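
Example (hypothetical): As a webshop administrator, I want to be able to export all orders so that I can analyze sales in my own reporting tools.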

Lifecycle

A Feature can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the Feature can be resolved. When the functionality of the Feature is delivered to the end customer, the Feature will be Closed.

A feature can be added to a system at the customer’s request during the development phase or even after development is completed. The user creates a feature, and the feature is added to the features inbox. The product team sorts the features and adds them to a feature list for the feature team to elaborate. The feature manager contacts the appointed teams to start inspections. After the engineering team implements the feature, it is added to the release tracking page, and once it is completed, the QA team carries out the final testing. The feedback team starts gathering feedback, and the feature moves to Aurora and Beta. Finally, the feature is released.

User Story

When working on a complex project, the development team must ensure that they have fully understood the customer’s requirements. 

In software development and product management, a user story is an informal, natural-language description of a software system’s features. User stories are often written from the perspective of an end user of the system. Furthermore, user stories break epics down into smaller, user-focused pieces, written so that the engineering team clearly understands the product requirements.

Format

As a <particular class of user>, I want to <be able to perform/do something> so that <I get some form of value or benefit>
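
Example (hypothetical): As a returning customer, I want to save my delivery address so that I don’t have to retype it on every order.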

Acceptance Criteria

Given <some context> When <some action is carried out> Then <a particular set of observable consequences should obtain>
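
Example (hypothetical): Given a logged-in customer with a saved address, when they proceed to checkout, then the saved address is pre-filled in the delivery form.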

Lifecycle

A User Story can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the User Story can be resolved. When the functionality of the User Story is delivered to the end customer, the User Story will be Closed.

The stakeholder submits an idea as a change request or new functionality; the product owner captures it as a business request and creates the user story. The user story is then added to the backlog, where the product owner grooms it with the help of the sprint team. The user story is broken down into acceptance criteria for prioritization, and the owner accepts or rejects the story based on those criteria. Finally, the user story is recognized as complete and closed, or returned to the backlog for future iterations.

Task Story

The Task Story work item is more technical than an agile User Story. Instead of forcing the User Story format, it is better to use a Feature-Driven Development (FDD) style, describing what is expected in more technical terms. FDD blends several industry-recognized best practices into a cohesive whole, driven from a client-valued functionality perspective, with the primary purpose of delivering tangible, working software repeatedly, on time.

Format

<action> the <result> by/for/of/to a(n) <object>

Example: Send the Push Notification to a Phone

Acceptance Criteria

Given <some context> When <some action is carried out> Then <a particular set of observable consequences should obtain>

Lifecycle

A Task Story can only be created and moved into the backlog by the Product Owner. When all sub-tasks are Resolved, the Task Story can be resolved. When the functionality of the Task Story is delivered to the end customer, the Task Story will be Closed.

Bug

Any software development team can come across faults in the product they are working on, and these faults are identified in the testing phase. 

An error, flaw, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways, is called a software bug. The process of finding and fixing bugs is termed “debugging” and often uses formal techniques or tools to pinpoint bugs; since the 1950s, some computer systems have even been designed to deter, detect or auto-correct various bugs during operation.

Format

Found in: <module>, summary: <short description>, reproduced by: <reproduction steps>, result: <what happened>, expected: <what was expected to happen>
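
Example (hypothetical): Found in: checkout module, summary: discount code not applied, reproduced by: adding an item to the cart and entering a valid code, result: full price was charged, expected: a 10% discount applied to the total.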

Lifecycle

The Bug work item can be created by anyone, but is usually created by QA or Operations on behalf of a customer. When the bug is fixed, it should not be closed until confirmed by the creator.

There are six stages in the bug life cycle. When the bug has been created but not yet approved, it is in the New stage. Next, it is Assigned to the development team, which starts working on a fix. When the developer has fixed the bug by making the necessary code changes and verifying them, it is marked Fixed. The fix then waits for a tester to pick it up, a state called Pending Retest. While the tester is testing the code to confirm that the developer has successfully fixed the defect, the status is Retest, and once the creator confirms the fix, the bug is Closed.
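
To make these transitions concrete, here is a minimal sketch in Python. The state names mirror the stages above; the transition table itself is an assumption for illustration, not a prescribed Scrum artifact:

```python
from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    PENDING_RETEST = "Pending Retest"
    RETEST = "Retest"
    CLOSED = "Closed"

# Allowed moves between the six stages described above.
# RETEST loops back to ASSIGNED if the fix does not hold.
TRANSITIONS = {
    BugStatus.NEW: {BugStatus.ASSIGNED},
    BugStatus.ASSIGNED: {BugStatus.FIXED},
    BugStatus.FIXED: {BugStatus.PENDING_RETEST},
    BugStatus.PENDING_RETEST: {BugStatus.RETEST},
    BugStatus.RETEST: {BugStatus.CLOSED, BugStatus.ASSIGNED},
    BugStatus.CLOSED: set(),
}

def move(current: BugStatus, target: BugStatus) -> BugStatus:
    """Return the new status, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target
```

A tracker configured this way refuses illegal jumps (for example, straight from New to Closed), which is exactly the discipline the six-stage life cycle is meant to enforce.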

Spike

Although we have epics and user stories to break down complex projects and make them understandable to the engineers, there can still be confusion.

A Spike aims at gathering information to sort out the unclear sections the team comes across in user stories. A spike may be a research, architectural, or refactoring spike. When the group comes across such a confusing situation, they create a functional or technical experiment to evaluate it. It can be any type of research the team does; the final goal is to resolve the unclear requirements.

Format

In order to <achieve some goal>, a <system or persona> needs to <perform some action>

Example: In order to estimate the “push notification” story, a developer needs to research whether Azure services meet the requirements.

Lifecycle

A Spike can be created by anyone, but can only be moved into the backlog by the Product Owner. The sprint team is responsible for creating the acceptance criteria. When the Spike’s goal is met, it can be Resolved or Closed, depending on the owner’s decision.

Task

Stories are written in a way that is easy for the customer to understand, with no technical terms or development instructions. The story then has to be converted into a detailed instruction list that is easy for the developer to understand.

A Task is a piece of work for the developers or any other team member. It gives the developer an idea of what should be done during development, such as creating tests, designing something, writing code, automating features, and so on.

Format

There is no specific format for a task; it can be written as a note or a to-do list.

Lifecycle

A task can be created by anyone, but it is typically created by a developer as a child to a User Story or a Task Story.

A New task is Created as a user action or as part of process execution, and Candidates are set to groups of people. Next, individuals are directly Assigned, either as part of process execution or on request through the API. Sometimes an assignee might want to Delegate part of the work; once the delegated work is resolved, it is passed back to the original owner. Finally, the task is Completed.

Issue

An Issue is a description of an idea or a problem. It also can be outlined as an improvement that should take place in the product. If resolved, it would increase the value of the final product or reduce waste in development time.

Format

There is no specific format for an issue; it is more like a note and can be written in the format of a User Story or a Spike.

Lifecycle

Anyone can create an Issue, but only the Product Owner can convert it into a User Story or a Spike and put it into the backlog. The life cycle of work can be defined by setting an issue workflow as follows:

When an issue is created, the time it will take to resolve is estimated based on the issue’s size. A newly created issue starts in the Open state. Usually, a QA engineer creates an issue and assigns it to a developer who can solve it. While the developer is working on resolving it, the issue is In Progress. After the issue is solved, it moves to the Resolved state. An issue reaches the Closed state only when its creator is happy with the result. However, a closed issue is not necessarily solved for good: it can arise again. In that case the issue is Reopened, and the same process takes place to figure out and fix it.

Concluding this post, I want to say that Chris’ training skills were top-notch, and I enjoyed all his stories about Silicon Valley, how he started Agile Learning Labs, and his career as a product owner, engineering manager, agile coach, software engineer, musician, and auto mechanic. The lunchtime discussions were impressive too.

To learn more about the role of a product owner, you can contact me at bjorn.nostdahl@nostdahl.com.

There’s more information about agile in my articles on Social Agility and Agile and Scrum Methodology Workshop.

DevOps, Microsoft, Microsoft Azure, Software Development Insights

Microsoft LEAP: Design with Best Practices

All good things come to an end, and LEAP is no exception. It was a great week full of interesting and enlightening sessions. Day 5 was a fitting end to the week, with its focus on designing with best practices.


Let’s get to the sessions; the day began with a keynote by Derek Martin on the topic Design for failure. Derek is a Principal Program Manager and spoke about what not to do when designing a product. He spoke about building Azure and how the lessons learned can be used to understand and anticipate challenges.


The focus was given to managing unexpected incidents not only in the application environment but also in the cloud as a whole.

Brian Moore took over with his keynote on Design for Idempotency – DevOps and the Ultimate ARM Template. He is a Principal Program Manager for Azure. The focus of the session was on creating reusable Azure Resource Manager Templates and language techniques to optimize deployments on Azure. The intention of these reusable templates is to introduce a “Config as code” approach to DevOps.

He took his time explaining “the Ultimate ARM Template” and other key points about it. Brian explained that the Ultimate ARM Template utilizes any available language construct to increase the impact of minimal code. The template simply aims to simplify your work, and it offers a variety of benefits to all of its users. To guarantee the efficiency of ARM, he also explained the practices to avoid. It is a template that provides the best options for the most effective results and lacks nothing essential.


Alexander Frankel, Joseph Chan, and Liz Kim conducted their joint keynote on Architecting a well-governed environment using Azure Policy and Governance after the morning coffee break.

They illustrated real-life examples of how large enterprises scale their Azure applications with Azure Governance services like Azure Policy, Blueprints, Management Groups, Resource Graph and Change History.

The next session was on Monitor & Optimize your cloud spend with Azure Cost Management and was conducted by Raphael Chacko. Raphael is a Principal Program Manager at Azure Cost Management.

The keynote’s main focus was optimizing expenditure on Azure and AWS through cost analysis, budgeting, cost allocation, optimization, and purchase recommendations. The main features of Azure Cost management were highlighted.


It was right back to business after a quick lunch break. Stephen Cohen took over with his session on Decomposing your most complex architecture problems.

Most of the session was spent on analyzing and coming up with answers to complex architecture-related problems raised by participants. It was a very practical session and addressed many commonly faced issues.


The next session was conducted by Mark Russinovich, the CTO of Microsoft Azure.


Day 5 had a shorter agenda and was concluded with Derek Martin returning for another keynote on Networking Fundamentals. Derek spoke about Azure Networking Primitives and how they can be used to strengthen the network security of any organization using Azure environments. These primitives can be used flexibly, so that newer, modern approaches to governance and security protocols can be adopted easily.

And that was it. The completion of a great week of LEAP. I hope all of you enjoyed this series of articles and that they gave you some level of understanding about the innovations being done in the Azure ecosystem.

DevOps, Microsoft, Microsoft Azure, Software Development Insights

Microsoft LEAP: Design for Efficiency, Operations and DevOps

I just left Microsoft Headquarters after another interesting day at LEAP. Today’s topics were quite interesting, especially DevOps, because of all the innovations that are being made. I’m actually a little emotional that there’s just one more day remaining.

Jason Warner began the day’s session with his keynote on From Impossible to Possible: Modern Software Development Workflows. As the CTO of Github, Jason shared much of his experience regarding the topic.

The underlying theme of the keynote was creating an optimal workflow that leads to the success of both the development process and the team. He pointed out the inevitable nature of modernization and said it’s important that a company does not become mediocre or worse.


Before he went on to the topic of the day, Jason spoke about himself, sharing some valuable history and information about his life, and gave the audience brief insight into the capabilities of GitHub and the success it has achieved so far.

According to Jason, proper modernization requires a workflow consisting of the following: automation, intelligence, and open source. Next, he pointed to GitHub’s ability to produce the best workflows to improve company efficiency, and he went on to talk about the benefits of improved workflows.

Abel Wang continued with the next session and his keynote was on Real World DevOps. Abel is a Principal Cloud Advocate for Azure.
This session was truly valuable as it covered the full process of a production SDLC and many other important areas such as infrastructure, DNS, web front ends, mobile apps, and Kubernetes APIs.

At the start of his presentation, Abel Wang introduced us to his team and gave a rundown of some vital information about DevOps. Why do you need DevOps? It provides solutions, supports any language, and boasts a three-stage process for delivering results.

After a much-needed coffee break, we embarked on the next session on Visual Studio and Azure, the peanut butter and jelly of cloud app devs. The speaker, Christos Matskas is a Product Marketing Manager at Microsoft.

The session focused on explaining how well Azure and Visual Studio support development, live debugging, and zero downtime deployments. Christos also spoke about leveraging integrated Azure tools to modernize .Net applications.

The Visual Studio team is committed to providing developers with the best tools available. Visual Studio supports all types of developers and redefines their coding experience. The great thing about the team is that they don’t rest on their laurels and are constantly in search of innovation. Visual Studio even offers Live Share, a feature that lets developers share content with each other in real time.

Evgeny Ternovsky and Shiva Sivakumar jointly conducted the next session on Full stack monitoring across your applications, services, and infrastructure with Azure Monitor. Many demonstrations were performed to showcase the capabilities of Azure Monitor.
The demos included monitoring VMs, Containers, other Azure services, and applications. In addition, setting up predictive monitoring for detecting anomalies and forecasting was also discussed.

Azure has a full set of services to oversee all your security and management needs. They provide all the tools you need, built into the platform, reducing the need for third-party integration. On top of that, Azure has developed a set of newer features: partner integrations, container monitoring everywhere, new pricing options, and network issue troubleshooting.


Subsequent to lunch, I joined the alternative session, which was on Artificial Intelligence and Machine Learning. The session covered the use of Azure Cognitive Services with optimized scaling to improve the customer care services provided by organizations such as telecoms and telemarketers.
Then we were back at another joint session by Satya Srinivas Gogula and Vivek Garudi and the keynote was on the topic Secure DevOps for Apps and Infrastructure @ Microsoft Services.


The speakers spoke about the wide adoption of DevOps practices and Open Source Software (OSS), and the vulnerabilities they introduce. The latter part of the session focused on best practices for secure DevOps with Azure.

The next keynote was on Transforming IT and Business operations with real-time analytics: From Cloud to the intelligent edge. It was jointly delivered by Jean-Sébastien Brunner and Krishna Mamidipaka and focussed on the challenges faced by IT and Business teams trying to understand the behavior of applications.
The speakers explained the benefits of Azure Stream Analytics to ingest, process, and analyze streaming data in order to enable better analytics.

A good example of this at its best: Azure Stream Analytics can be used for earthquake and storm prediction.

Taylor Rockey concluded the day with his keynote on MLOps: Taking machine learning from experimentation to production. MLOps is the integration of machine learning and DevOps. MLOps has proven to have numerous benefits, including scalability, monitoring, repeatability, accountability and traceability, and the platform has impressive features that make it a first choice for many developers.
The problem that many organizations face is a lack of proper understanding and tooling for using Machine Learning in production applications. The session focussed on the use of Machine Learning in production applications with Azure Machine Learning and Azure DevOps.

And that’s a wrap. Don’t forget to tune into tomorrow’s article.

DevOps, Software Development Insights

Microsoft LEAP: Design for Availability and Recoverability

Day 3 of Microsoft LEAP was just completed. It was a day packed with many interesting keynotes regarding improving the availability and recoverability of Azure applications. By now, you know the drill, check out my notes on Day 2 here.

Mark Fussell and Sudhanva Huruli co-hosted the opening keynote on the topic Open Application Model (OAM) and Distributed Application Runtime (Dapr). Mark has been with Microsoft for nearly 2 decades and is now a Principal PM Lead. Sudhanva is a Program Manager. Both of them work on the Azure Service Fabric platform.
The open application model was discussed in detail and the focus was on separating operational needs from development concerns.


Mark Fussell started by describing the application topologies that many users employ, noting that developers write each application to interact with different services. Then Mark spoke about the reason behind the creation of Dapr: it was designed as a solution to the problems of microservice development. Dapr allows building apps in any language, on any framework, and Microsoft is already on board to tap into the benefits it offers, such as stateful microservices in any language.

Sudhanva Huruli’s talk on OAM was intriguing and revealing. According to him, OAM is a platform-agnostic specification for defining cloud-native applications. Users can trust its quality because it was built by large teams at Microsoft and Alibaba. It can be applied in a number of ways. Its benefits include encapsulating application code, offering discretionary runtime overlays and discretionary application boundaries, and defining application instances.

The program is fully managed by Azure, so that you can focus on applications.

The opening session was followed by another joint session by Muzzammil Imam and Craig Wilhite, who hold the positions of Senior PM and PM respectively.
This keynote was on the topic of Windows Containers on AKS and it detailed the process of converting a legacy application into a cloud application and hosting it on a Windows container on an Azure Kubernetes service.

Their presentation showed that a lot of on-premises workload runs on Windows: about 72%. There seems to be a light at the end of the tunnel, as there have been numerous good reviews of Windows Containers. Adoption is growing steadily, and there is room for more improvement. Microsoft containers will keep getting better with continuous innovation.

Kubernetes is a great option in Azure. It is a vanguard of the future of app development and management, and it can help you ship faster, operate easily and scale confidently. Azure Kubernetes Service handles the hard parts for you and clears the way for a better future.


After the coffee break, we were back for the next session conducted by Brendan Burns on Securing software from end-to-end using Kubernetes and Azure. Brendan is a Distinguished Engineer at Microsoft. This session focussed on continuous delivery with Kubernetes. Some of the sub-themes were continuous integration with GitHub Actions, Access Control in Kubernetes, and Gatekeeper for Kubernetes.

The last session before lunch was conducted by Jeff Hollan, a Principal PM Manager for Microsoft Azure Functions. The keynote was on Serverless and Event-Driven functions for Kubernetes and beyond; in short, bringing serverless-style, event-driven functions to Kubernetes.


The focus was on stateless event-driven serverless computing which is enabled by Azure functions. Many new hosting and programming models that enable new event-driven scenarios were discussed.

Serverless lets developers focus on what really matters: their code. It can be applied to a wide variety of applications, and Kubernetes also does well with event-driven applications.

Next to speak was Kirpa Singh, whose session covered microservices and performance tuning. He spoke about what makes microservices a better option and about the benefits of a microservice architecture for projects: it is designed for large applications that require a high release velocity, complex applications that need to be highly scalable, applications with rich domains or subdomains, and so on. It offers users agility, focus, technology and isolation.

After lunch, we saw more of the Microsoft campus. Then it was back to the next session.
The session after the lunch break was the OSS Architecture Workshop conducted by Jeff Dailey, Patrick Flynn, and Terry Cook. One of the core themes of the workshop was Open Source stacks. They spoke about building Hybrid resilient data pipelines and infrastructure using open source. This was done through a breakout session at which the attendees were separated into groups and drafted architectures that supported both on-premise and cloud deployments.

During this session, they discussed Open Source. But why open source? It allows easier migration, delivers poly-cloud options via APIs, drives Azure consumption, and so on.

Mark Brown conducted the next session on Building high-performance distributed applications using Azure Cosmos DB. He is a Principal PM in the Azure Cosmos DB Team.
The session’s key theme was building globally distributed cloud applications with high availability while ensuring extremely low latency. Many real-world demos were explored during the session, and these will help us developers tackle these issues in our own projects.

Hans Olav Norheim, a Principal Software Engineer, concluded the sessions for the day with a keynote on Designing for 99.999% – Lessons and stories from inside Azure SQL DB.
The session focussed on building applications with almost 100% uptime while covering design choices, principles, and lessons learned that can be used in our own projects to overcome uptime issues.

Thus were the proceedings of Day 3. I conclude my note while looking forward to the next set of sessions with the theme Design for efficiency & Operations & DevOps.
I’ll be publishing another article tomorrow.

Agile, Customer Journey, Innovation, Methodology, Reflections, Software Development Insights, User Experience (UX)

Customer Experience and Customer Journey Mapping

After working in Sweden for 5-6 years I took the train from Gothenburg to Stockholm for the first time to attend a workshop on Customer Experience and Customer Journey Mapping. Since commuting across the country was a new experience for me, I chose to be 30min early for the train only to find the doors of the coach locked. Standing outside freezing, I took the time to find my ticket. I had received an SMS with my details a few days earlier, but to my surprise it said: “This is not your ticket”. So my customer experience with SJ prior to arriving at the Customer Experience and Customer Journey Mapping workshop wasn’t really a superb one.

Railway tracks and trains in Stockholm, Sweden.

While puffing and rumbling through the Swedish countryside, I had some time to prepare for the days to come. As a Product Owner it is of course imperative for me to understand everything about our product in order to deliver impeccable features and functionality to our customers. However, in a complex market the complete customer journey has become ever more important in recent years, and my expectation for the next two days was to gain insight into processes and tools for improving my work on Customer Experience and the full Customer Journey when customers purchase our services.

The workshop kicked off with an introduction of the participants and their roles, followed by a thorough explanation of what Customer Experience and Customer Journey Mapping are. The workshop was divided into six parts, all aimed at making customers and employees content: Strategy, Insights, Design, Measurement, Management and Culture.

The way we interact with customers and the focus on customer experience have changed significantly over the last 100 years. Back in 1913, when Woodrow Wilson was president and the USA was going through the Progressive Era, the focus was mostly on the product itself. In the 1950s the focus moved towards the American dream and very strong branding, transitioning into stronger customer relations through the 1970s and 1980s.

Pin up girl drinking cola in hip cafe

The focus on Customer Experience as we know it today started for real around the millennium, when new technology became available to both understand and interact with the customer. Consumers also matured and expected more than just a product, introducing terminology like retailtainment and entertainmerce.

At its core, Customer Journey Mapping is a methodology that enables insight into and understanding of customers, with the aim of developing products or services that support innovation and business development by earning the satisfaction and loyalty of your customers. In a nutshell, I would call it a form of result-oriented customer service.

The workshop was tailored to serve a wide range of people: some were responsible for customer experience, others were business managers, business developers, support and customer service managers, marketing managers and marketers, strategists, and product owners. Learning about Customer Experience together gave us the opportunity to create connections and offer each other insights from our various fields.


Customer Experience Management is a strategy, methodology and process for managing a customer’s exposure, interaction and transactions with a corporation, a product or service, and a brand. It is the discipline of developing service and business models that prioritize the customer in all of the company’s business processes, thus creating favorable conditions for growth. One of the fundamental aspects of CX is taking the time to understand the customer’s experience and contact points with your company. It involves careful documentation and recording of all forms of contact between a customer and the company, to the extent that the customer’s journey can be visualized and understood at a glance: a Customer Journey Map, if you will, which is a mapping of all the customer’s contacts with you as a company.


During the course of the two-day workshop, we created a Customer Journey Map for a fictitious company and integrated the results into our respective businesses. We started off by understanding Customer Experience as a business discipline and its concepts: what makes up a customer experience and how to understand it. We then proceeded to Customer Journey Mapping, creating models with the aid of working methods, practical steps and guidance. A large part of the workshop was focused on learning by doing.

We moved on to Persona and Empathy mapping, gathering customer insights, customer needs and behavior, contact points and channels. This part was more about how to appropriately gauge a customer’s response and feelings when they make contact with our business. We were also taught how to record these effectively to facilitate business growth modeling. Happy customer = better business!

Our knowledge from the workshop was then put to the test with practical exercises on creating a Customer Journey Map and on measuring the progress of our services linked to Customer Journey Mapping.


The final part of the learning curve was the most delicate: business development and business management with CX, and how to introduce change to your own business. We were made to understand that without actual measurable growth, the aim of the workshop would not be met. So in essence, all CX and Customer Journey Mapping should lead to measurable growth.

Attending the two-day workshop armed me with a lot of new skills for dealing with customers, and gave me a basic understanding of Customer Experience as a business discipline. What I consider the most vital lesson learnt is understanding how an empathetic approach to customers can create an atmosphere that encourages sustainability and profitability for your company.

With all these exciting business models, I’m quite ready and looking forward to implementing our Customer Experience work plan!

Thank you to Camilla Lif and Johan Sjöström for a great workshop. If you enjoyed reading about the Customer Journey or have any questions or great ideas, feel free to reach out at bjorn.nostdahl@gunnebo.com 🙂

Agile, Methodology, Scrum, Software Development Insights

Agile and Scrum Methodology Workshop

I recently had the chance to join Henrik Lindberg from Acando for an Agile Scrum workshop. In this post I will write about the workshop and the basics of Agile and Scrum. There is so much to learn and explore in agile, and I hope this introduction will compel further reading.

Agile Methodology

Unless you live offline, you are probably aware of the latest trend in the corporate world: the agile approach. Agile has in recent times grown into a revolutionary movement that is transforming the way professionals work. Agile is a methodology that keeps your priorities in equilibrium: work is done faster, and project requirements are met with great efficiency.

Working agile, people tend to forget about the four values from the agile manifesto:

  1. Individuals and interactions over processes and tools
  2. Working software over comprehensive documentation
  3. Customer collaboration over contract negotiation
  4. Responding to change over following a plan

Equally important are the twelve principles behind the agile manifesto:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in  development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a  couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work  together daily throughout the project.
  5. Build projects around motivated individuals.  Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Major Differences between Waterfall and Agile

  • The waterfall approach is a sequential model of project management. Here the development team can only move to the next stage if the previous step is successfully completed.
  • In the agile approach, the execution of processes is concurrent. This enables effective communication between the client, the manager, and the team.
  • Waterfall assumptions are not well-suited for large-sized projects, whereas agile lets you manage complicated tasks with great ease.
  • Agile methodology is being embraced by managers worldwide for its greater flexibility.
  • In agile, the development plan is reviewed after each step, while in the Waterfall approach it is reviewed only during the test phase.

Agile development is based on an iterative approach, in which planning, development, prototyping and other key phases may occur more than once, in line with the project requirements. Agile also adheres to the incremental model, where the product is designed, implemented and tested incrementally, with the complexity of the work increasing as the product grows. Development is considered finished only when every last specification and requirement is met.

When to Use The Agile Methodology?

  • In a Scenario, When You Require Changes to Be Implemented
  • When the Goal of the Project Isn’t Crystal Clear
  • When You Need to Add a Few New Features to the Software Development
  • When the Cost of the Rework Is Low
  • When Time to Market Is of Greater Importance than a Full Feature Launch
  • When You Want to See the Progress in the Sequential Manner

Scrum Methodology

Scrum is an agile framework for product success in organizations small and big, and it is creating a lot of buzz in the present IT world. Managers worldwide hold the belief that Scrum is far more than the execution of processes and methods; it plays an integral role in supporting teams to meet aggressive deadlines and complicated project demands. Scrum is a collaborative agile approach that involves breaking substantial processes down into smaller tasks so that they are done efficiently in a streamlined manner.

Scrum is a lightweight, agile framework that successfully manages and accelerates project development. It is proven to cut down on project complexity and focus largely on building products in accordance with client expectations. People sometimes use Agile and Scrum interchangeably, but there is a big difference: agile is the broader approach, while Scrum is a subset of agile.

There are three principles of Scrum:

  • Transparency
  • Inspection
  • Adaptation

Scrum Roles

Are you interested in switching to the Scrum approach of development? Then, you must know the various Scrum roles.


The Product Owner

He/she is responsible for providing the vision of the product. The product owner will play the central role in breaking down the project into smaller tasks and then prioritize them.

Responsibilities

  • Defining the Vision
  • Managing the Product Backlog
  • Prioritizing Needs
  • Overseeing Development Stages
  • Anticipating Client Needs
  • Acting as Primary Liaison
  • Evaluating Product Progress at Each Iteration

The ScrumMaster

He/she is someone with extensive expertise in the framework. The ScrumMaster will ascertain that the development team is adhering to the Scrum model, and will also coach the team on it.

Responsibilities

  • Coaching the Team
  • Managing and Driving the Agile Process
  • Protect the Team from External Interference
  • Managing the Team
  • Foster Proper Communication
  • Dealing with Impediments
  • Be a Leader

The Development Team

This is a panel of qualified developers who form the core of the project development. Each individual in the team brings his/her own unique skills to the table.

Responsibilities

  • The Entire Team Is Accountable for the Work
  • There Are No Titles or Sub-teams
  • Sit Together to Communicate with One Another

Scrum Artifacts


Artifact #1: Product Backlog

The product backlog is a sequence of fundamental requirements in prioritized order. The requirements are provided by the product owner to the Scrum Team. The product backlog emerges and evolves with time, and the product owner is solely responsible for its content and validity.

Artifact #2: Sprint Backlog

It is the subset of the product backlog that the team will put in the hard work to achieve: the “To Dos”. The team slices the work in the sprint backlog into smaller tasks. All items of the sprint backlog must be developed, tested, documented and integrated to meet the clients’ needs.

Artifact #3: Product Increment

The product increment is a Scrum artifact of significant importance. The product increment must be in line with the development team’s “Definition of Done”, and it has to be approved by the product owner.

Definition of Done in Scrum Methodology

The Definition of Done varies from one scrum team to another. It is the acceptance criterion that drives the quality of work when a user story is completed. In other words, the Definition of Done is the development team’s quality checklist.

Burndown Chart

The Burndown chart is a means of tracking the progress of a project in Scrum. The ScrumMaster is responsible for updating this chart at the end of each sprint. The horizontal axis of the release Burndown chart represents the sprints, while the vertical axis shows the work remaining at the beginning of each sprint.
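
As a simple illustration, the chart can be generated from nothing more than the remaining story points recorded at the start of each sprint. Here is a minimal sketch in Python using matplotlib, with made-up numbers:

```python
import matplotlib.pyplot as plt

# Hypothetical release data: remaining story points at the start of each sprint.
sprints = [1, 2, 3, 4, 5, 6]
remaining = [120, 100, 85, 60, 30, 0]

plt.plot(sprints, remaining, marker="o")
plt.xlabel("Sprint")
plt.ylabel("Remaining work (story points)")
plt.title("Release burndown")
plt.grid(True)
plt.show()
```

The downward slope gives the team an at-a-glance answer to the question “are we on track for the release?”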

Backlog Refinement

Backlog refinement is the act of updating/adding estimates, details, and order for the items in the product backlog. This improves story descriptions.

User Story

Commonly known as the “Definition of Requirement”, the user story in Scrum provides enough information for the development team to give a reasonable estimate for the work. User stories are one or two sentences, backed by a set of conversations, that define the desired functionality.

User Story Acceptance Criteria

Acceptance criteria, in terms of Scrum methodology, are a set of conditions that the software product must meet in order to be accepted by the user, customer or other stakeholders. In layman’s terms, they are a set of statements that determine the user features, requirements or functionality of an application.

User Story Relative Estimation

Relative estimation is the procedure of estimating task completion not in absolute units of time, but by comparing items with one another in terms of complexity. For example, a story estimated at four points is expected to take roughly twice the effort of a two-point story.

Scrum Events

There are five defined Scrum Events.

Sprint Planning

Sprint Planning is an event in the Scrum framework. Here the team collaboratively decides on the tasks they will focus on during that sprint, and discusses their initial plan for meeting those product backlog items.

Sprint Goal

The sprint goal is the objective set for the sprint, to be met through the implementation of the Product Backlog. Sprint goals are reached after long discussions between the Product Owner and the Development Team.

Daily Scrum

In the Scrum approach, the team meets on each day of a Sprint to discuss a number of aspects; this meeting is known as the Daily Scrum.

Sprint Review

The sprint review is held at the end of each sprint to inspect the product increment.

Sprint Retrospective

The Sprint Retrospective is held between the development team and the ScrumMaster to discuss how the previous Sprint went, and what can be done to make the upcoming Sprint more productive.

In the end, after reading this entire article, you should have a basic overview of the Scrum approach. If you want to talk about agile and scrum, feel free to contact me at bjorn.nostdahl@nostdahl.com. You can also read more about agile in my other articles.

Artificial Intelligence (AI), Business Intellegence (BI), Machine Learning (ML), Microsoft Azure, Software Development Insights

Machine Learning and Cognitive Services

Machine learning is gradually becoming the driving force for every business. Business organizations, large or small, are seeking machine learning models to predict present and future demand and to support innovation, production, marketing, and distribution of their products.

Business value encompasses all forms of value that decide the well-being of a business. It is a much broader term than economic value, covering factors such as customer satisfaction, employee satisfaction and social values. It is the key measurement of the success of a business. AI helps you accelerate this business value in two ways: by enabling correct decisions, and by enabling innovation.

Machine learning technologies. Millennial students teaching a robot to analyse data

Remember the days when Yahoo was the major search engine and Internet Explorer was the major web browser? One of the main reasons for their downfall was their inability to make correct decisions. Wise decisions are made by analyzing data: the more data you analyze, the better the decisions you make. Machine Learning greatly supports this cause.

There was a time when customers accepted what companies offered them. Things are different now: customers’ demands for new features are ever increasing. Machine Learning has been the decisive factor behind almost every new innovation, whether face recognition, personal assistants or autonomous vehicles.

Machine Learning in more details

Let’s start with what machine learning is. Machine learning enables systems to learn and make decisions without being explicitly programmed. It is applied in a broad range of fields; nowadays, almost every human activity is being automated with the help of machine learning. A particular area of study where machine learning is heavily exploited is data science.

Data science plays with data. Data must be extracted to make the best decisions for a business.

The amount of data that a business has to work with is enormous today; social media, for example, produces billions of data points every day. To stay ahead of your competitors, your business must make the best use of this data. That’s where machine learning comes in.

Machine learning provides many techniques for making better decisions from large data sets. These include neural networks, SVMs, reinforcement learning and many other algorithms.

Among them, neural networks are leading the way. They improve consistently, spawning child technologies such as convolutional and recurrent neural networks that provide better results in different scenarios.
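
To show how little code a first experiment takes, here is a minimal sketch in Python using scikit-learn’s small neural network classifier on its bundled iris dataset (the layer size and iteration count are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load a small labelled dataset and split off a held-out test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tiny feed-forward neural network.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```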


Learning machine learning from the beginning and trying to develop models from scratch is not a wise idea: it incurs huge costs and demands a lot of expertise in the subject. That is why you should consider the assistance of a machine learning vendor. Google, Amazon and Microsoft all provide machine learning services. Let’s take Microsoft as an example and review what qualities we should look for when selecting a vendor.

Using cloud as a solution for machine learning

The cloud simplifies and accelerates the building, training, and deployment of machine learning models. It provides a set of APIs to interact with when creating models, hiding all the complexity of devising machine learning algorithms. Azure has the capability to identify suitable algorithms and tune hyperparameters faster. Autoscale is a built-in feature of Azure cloud services that automatically scales applications; it allows your application to perform at its best while keeping costs to a minimum. Azure Machine Learning APIs can be used with any major technology, such as C# or Java.
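
For a flavor of what “a set of APIs to interact with” looks like in practice, here is a minimal sketch using the azureml-core Python SDK. It assumes you have a workspace config.json downloaded from the portal, a train.py script, and a compute cluster named cpu-cluster; all of these names are placeholders:

```python
from azureml.core import Workspace, Experiment, ScriptRunConfig

# Connect to the workspace described by config.json.
ws = Workspace.from_config()

# Submit train.py to run on a (hypothetical) compute cluster.
experiment = Experiment(workspace=ws, name="demand-forecast")
config = ScriptRunConfig(
    source_directory="./src",
    script="train.py",
    compute_target="cpu-cluster",
)
run = experiment.submit(config)
run.wait_for_completion(show_output=True)
```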

There are many other advantages you will have with cloud Machine Learning

  • Flexible pricing: you pay for what you use.
  • High user-friendliness: easier to learn and less restrictive.
  • More accurate predictions based on a wide range of algorithms.
  • Fine-tuning results is easier.
  • Ability to publish your data model as a web service that is easy to consume.
  • The tool allows data streaming platforms like Azure Event Hubs to consume data from thousands of concurrently connected devices.
  • You can publish experiments for data models in just a few minutes, whereas expert data scientists may take days to do the same.
  • Azure security measures manage the security of Azure Machine Learning, protecting data in the cloud and offering security-health monitoring of the environment.

Using Cognitive Services to power your business applications

We will go on to discuss how Azure Cognitive Services can be used to power up a business application. Azure Cognitive Services is a collection of APIs, SDKs and services that allows developers to build intelligent applications without having expertise in data science or AI. These applications can have the ability to see, hear, speak, understand or even reason.


Azure Cognitive Services were introduced to extend Microsoft’s existing portfolio of APIs.

New services provided by Azure Cognitive Services include:

  • the Computer Vision API, which provides the advanced algorithms necessary for image processing
  • the Face API, which enables face detection and recognition
  • the Emotion API, which offers options to recognize the emotion in a face
  • the Speech service, which adds speech functionality to applications
  • Text Analytics, which can be used for natural language processing

Most of these APIs were built with business applications in mind. Text Analytics can be used to harvest user feedback, allowing businesses to take the actions necessary to accelerate their value. Speech services allow business organizations to provide better customer service to their clients. All these APIs have a free trial that can be used to evaluate them. You can use these cognitive services to build various types of AI applications that solve complex problems for you, thus accelerating your business value.
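
As an illustration of the feedback-harvesting idea, here is a minimal sketch in Python using the azure-ai-textanalytics package to score the sentiment of customer comments; the endpoint and key are placeholders for your own Text Analytics resource:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

feedback = [
    "The new dashboard is fantastic and saves me hours every week.",
    "Support took three days to answer my ticket.",
]

# Each result carries an overall label plus per-class confidence scores.
for doc in client.analyze_sentiment(feedback):
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```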

If you want to talk more about ML and AI, feel free to contact me: bjorn.nostdahl@gunnebo.com 🙂

Operations, Software Development Insights, Tactical Meetings

Efficient Technical Support Tactical Meetings

Gunnebo Business Solutions AB is establishing an international, dynamic and enthusiastic software development team to build sophisticated security and business applications. Within the new organization, customer support and operations play a vital role.

Global communication network concept

To be able to help our customers effectively, we are implementing and improving our routines around the support process. We started our journey with ITIL version 4 and DevOps, but lately an article from Holacracy about “Tactical Meetings” caught my eye. Tactical meetings are held regularly, on a weekly basis, with the intention of removing any obstacles that prevent the team from achieving their goals for that cycle (the duration between two meetings), and of updating the rest of the team on the tasks assigned to each member.

Tactical Meeting Procedure and Expectations

Tactical meetings are kept short and to the point. They can usually be divided into five main parts: “Check-in”; “Checklist, Metrics, Project Updates”; “Agenda Building”; “Triaging Issues”; and finally the “Closing Round”.

  • Check-in Phase: A get-to-know-your-team-members phase. Team members are given a little time to say how they are doing or how they are feeling at that moment (perhaps they are feeling a little blue, rejoicing about something special, or just not themselves), so that the other members know where each person is coming from.
  • Checklist, Metrics, Project Updates Phase: Here team members have the opportunity to give the rest of the team some context about the issues they are facing with the tasks they have been assigned. The other team members are encouraged to ask questions, or to save them for later in the meeting.
  • Agenda Building Phase: At this point, the facilitator (the person chairing the meeting, usually the team leader, a supervisor, or someone from management) asks the members to share the problems they are facing, known as “tensions”. Each team member either gives a short phrase describing a tension or, if they have none, simply says “pass”.
  • Triaging Issues Phase: This is where the team discusses the issues in detail and tries to come up with solutions to the tensions, keeping in mind any limitations on the side of the team member facing the tension. The facilitator plays a larger role here, keeping the discussion on point and not letting it get derailed. He or she can also add to the agenda any tensions that may arise from implementing the solutions. Once a tension is crossed off the list, however, it cannot be revisited in that meeting.
  • Closing Round Phase: Very similar to the check-in phase, except that here the team members reflect on how they feel about the solutions they have come up with, and whether or not they are happy with them.

The process of an Efficient Tactical Meeting


The efficiency of a tactical meeting rests largely on the shoulders of the facilitator. An efficient facilitator uses a few tricks to keep meetings short and on point. Here are some key techniques a facilitator can use to achieve high levels of efficiency.

Recap from the previous cycle

Here the facilitator goes around the table, asking each member to present any updates on the solution(s) to the tension(s) they raised at the previous meeting. A good facilitator keeps a checklist of the tensions and their solutions from the previous cycle and crosses them off as they are resolved. Team members are also allowed to request that items be added to the checklist, as long as they are in keeping with the solution to the tension and accepted as valid by the other team members.

Keeping up with the time

This is where the facilitator allocates a certain amount of time to each task. For example, while building the agenda, the facilitator asks the members to keep their tensions short and sweet, sometimes even to describe them in one or two words; since tensions can be elaborated in the triaging phase, it is not necessary for everyone to understand a tension fully at this point. When it comes to the triaging phase, however, it is important that the facilitator strikes a balance between allocating enough time to each tension on the agenda and keeping the meeting moving forward. It is considered good practice not to discuss minor issues (especially technical support issues) in depth, but to find quick solutions and move on to the next tension.

Processing the Tensions

This is the most important part of the facilitator’s role. The facilitator asks the team member what their tension is, and then what they need. The team member gives a quick description of the tension and then either offers the team a solution or engages the other members in coming up with one. The secretary captures each tension along with the solution accepted by the team member, which helps the facilitator recap the previous cycle at the next meeting. Finally, the facilitator asks the member whether they are happy with the solution, and if so, moves on to the next tension.

Tasks of a Facilitator

  • While most tensions in a technical support framework are quite straightforward, there are instances where multiple steps are needed to reach a solution. Since tactical meetings are held frequently, there may not always be time to complete all of those steps between meetings. Hence, the facilitator asks the team member for a “Next Action”: quite literally, what the team member wants to do next to reach the solution to his or her tension. This also helps the facilitator keep track of the checklist for the next cycle’s recap phase.
  • In cases where there is only one step, or the work is at its final step, the facilitator can instead ask for the outcome of the project. A “project” is a solution with a definite endpoint.
  • The facilitator can also ask team members to share information on tensions for which there may not be an immediate solution.
  • Where a member does not know how to express their tension(s), the facilitator can either ask the other team members to help address the tension or offer a possible pathway for the member to address it themselves.
  • Another important task for the facilitator is to make sure that only one tension is discussed at a time. There may be instances where another team member wants to discuss a tension related or similar to the one being discussed. At this point the facilitator must refocus the team’s attention on the tension at hand, to keep the meeting efficient.
  • In cases where the team comes up with multiple solutions to the same tension, it is the facilitator’s job to urge the team to reach a consensus on the better solution. If the team member with the tension is not sure whether he or she can achieve the solution alone, they can request the help of other team members to reach the goal.
  • If the solution the team has come up with is not in keeping with the organization’s policies, or is not a service the organization provides, it is the facilitator’s job to take the matter to management and try to resolve it at that level.

In summary, the main objectives of a technical support tactical meeting are to spend more time talking about the important things and to find solutions that help the customer more efficiently and satisfactorily. The purpose of these meetings is not to talk about things beyond the team’s control, or to discuss strategy or politics; it is to spend less time complaining and to work together as a team so that each and every member can perform their work efficiently and effectively. That is why we have not only implemented weekly tactical meetings at our organization but also abide by the guidelines put forward in this article.

If you want to talk more about software support and operations, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Business Intelligence (BI), Machine Learning (ML), Microsoft Azure, Software Development Insights

Microsoft LEAP: Looking into the future

Cloud computing has become one of the most profitable industries in the world, and the cloud will remain a hot topic for the foreseeable future. There is huge competition among cloud service providers to win customers by providing the best services. Providers invest a lot of money in innovation, so cloud services set most of the trends in the IT industry. Microsoft Azure and Amazon AWS are the leaders in innovation in this field.

Data centers around the world

As the demand for cloud services increases rapidly in all parts of the world, establishing data centers around the globe becomes a necessity. Azure has understood this well and expects to expand its service by constructing data center regions in many parts of the world.

From news.microsoft.com article about Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. Photo by Frank Betermin

The world is divided into geographies defined by geopolitical boundaries or country borders. These geographies define the data-residency boundaries for customer data. Azure geographies respect the requirements within geographical boundaries, ensuring data residency, compliance, sovereignty, and resiliency. Azure regions are organized into geographies, and a region is defined by a bandwidth and latency envelope. Azure has the greatest number of global regions among cloud providers, which is a great benefit for businesses seeking to bring their applications closer to users around the world while protecting data residency.

The Two Major Azure’s Global Expansion of Cloud Services

Two of the most important expansions Microsoft Azure has incorporated to improve its services are the following:

Expansion of Virtual Network and Virtual Machine Support

With utility virtual machines like A8 and A9, which offer faster processors and interconnection between more virtual cores, virtual networks can now be configured seamlessly for specific geographical locations and regions.

This feature gives more room for optimal operations: cloud services, complex engineering design, video encoding and a lot more.

Incorporation of Azure Mobile Services, and its Expansion to Offline Features

Even as a disconnected service, this makes it possible for applications to operate effectively using offline features. Furthermore, it extends Azure cloud services to apps on various platforms, including Android and iOS mobile phones.

Then there are Availability Zones, the third level in the Azure network hierarchy.

Availability zones are physically separate locations inside a region, each made up of one or more data centers. Constructing availability zones is not easy: they are not just data centers, but need advanced networking, independent power, cooling and so on. The primary purpose of availability zones is to help customers run mission-critical applications.

You get the following benefits with Azure availability zones:

  • Better protection for your data – you won’t lose your data due to the destruction of a data center.
  • High availability, better performance, and more resources for business continuity.
  • A 99.99% SLA on virtual machines.

Open source technology

Microsoft took some time to understand the value of open source technologies, but now they are doing really well. With .NET Core and .NET Standard, Microsoft has made a major commitment to open source. Looking at GitHub alone, Microsoft is one of the largest contributors to open source.

Redmond, Washington, USA, 4 June 2018: Microsoft confirms it is acquiring GitHub.
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft.

With .NET Core 3.0, Microsoft introduced many features that enable developers to create secure, fast, and productive web and cloud applications. .NET Core 3 is a major update that adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). ASP.NET Core 3 enables client-side development with Razor Components. EF Core 3 has support for Azure Cosmos DB, and the release also includes support for C# 8 and .NET Standard 2.1 and much more.
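
A couple of the C# 8 features mentioned above are easy to show in a few lines. This sketch combines nullable reference types and async streams, both of which ship with .NET Core 3:

```csharp
#nullable enable
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Csharp8Demo
{
    // C# 8 async streams: an iterator that yields values as they become
    // available instead of buffering the whole sequence.
    static async IAsyncEnumerable<int> ReadValuesAsync()
    {
        for (var i = 1; i <= 3; i++)
        {
            await Task.Delay(100);  // simulate an asynchronous source
            yield return i;
        }
    }

    static async Task Main()
    {
        // C# 8 nullable reference types: 'string?' tells the compiler that
        // null is expected here, so it can warn about unsafe dereferences.
        string? label = null;
        Console.WriteLine(label ?? "(no label)");

        await foreach (var value in ReadValuesAsync())
            Console.WriteLine($"value: {value}");
    }
}
```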

Mixed reality and AI perceptions

Mixed reality tries to reduce the gap between our imagination and reality. With AI, it is about to change the way we see the world, and it looks set to become a primary source of entertainment. Although mixed reality first became popular in the gaming industry, you can now see its applications in other industries as well. The global mixed reality market is booming, which is why the biggest names in tech are battling to capture it. All the major tech players have introduced MR devices, such as the Meta 2 headset, Google Glass 2.0, and Microsoft HoloLens.

Mixed reality and AI perception are the result of many advanced technologies working together. This technology stack includes natural language interaction, object recognition, real-world perception, real-world visualization, contextual data access, cross-device collaboration, and cloud streaming.


As I said earlier, although the gaming industry was the first to adopt mixed reality, MR applications are now widely used in other industries. Let’s look at some of those industries and see how mixed reality has transformed them and what benefits they get from mixed reality and AI perception.

You can see tech giants such as SAAB, NETSCAPE, and DataMesh using mixed reality in the manufacturing industry. According to research, mixed reality helps increase worker productivity by 84%, improve collaboration among cross-functional teams by 80%, and improve customer service interaction by 80%. You may wonder how mixed reality achieves this and what it offers the manufacturing industry. There are many applications of mixed reality in manufacturing; the following is a small list of them.

  • Enhanced predictive maintenance
  • Onsite contextual data visualization
  • Intuitive IoT digital twin monitoring
  • Remote collaboration and assistance
  • Accelerated 3D modeling and product design
  • Responsive simulation training

Retail, healthcare, engineering, and architecture are some other industries that use mixed reality heavily.

Quantum revolution

Quantum computing could be the biggest thing in the future: a giant leap forward from today’s technology, with the potential to alter our industrial, academic, societal, and economic landscapes forever. You will see its massive implications in nearly every industry, including energy, healthcare, smart materials, and environmental systems. Microsoft is taking a unique, revolutionary approach to quantum with its Quantum Development Kit.

Picture from cloudblogs.microsoft.com article about the potential of quantum computing

Microsoft can be considered one of the few players who have taken quantum computing seriously in the commercial world. They have a quantum dream team formed from the greatest minds in physics, mathematics, computer science, and engineering to provide cutting-edge quantum innovation, and their quantum solution integrates seamlessly with Azure. They have taken a scalable, topological approach to quantum computing that helps harness superior qubits, which can perform complex computations with high accuracy at a lower cost.

There are three important features in the Quantum Development Kit that make it the go-to quantum computing solution.

First, it introduces its own language, Q#, created specifically for quantum programming. Q# has general programming features such as operators, native types, and other abstractions. It integrates easily with Visual Studio and VS Code, which makes it feature-rich, and it is interoperable with the Python programming language. With the support of enterprise-grade tools, you can work easily on any OS: Windows, macOS, or Linux.

Second, the Quantum Development Kit provides a simulated environment that greatly helps in optimizing your code. This is very different from other quantum computing platforms, which are still at a rather crude level. The simulation environment also helps you debug your code, set breakpoints, estimate costs, and much more.
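
As a rough sketch of how that looks in practice, the snippet below shows a classical C# host driving the kit’s local simulator. The Q# operation SampleRandomBit is hypothetical and assumed to be defined in an accompanying .qs file in the same project; the kit generates a C# class with a Run method for each such operation.

```csharp
using System;
using Microsoft.Quantum.Simulation.Simulators;

class Driver
{
    static void Main()
    {
        // The full-state simulator executes Q# operations on a classical
        // machine, which is what enables local debugging and breakpoints.
        using (var simulator = new QuantumSimulator())
        {
            // 'SampleRandomBit' is a hypothetical Q# operation assumed to
            // be defined in a .qs file alongside this host program.
            var result = SampleRandomBit.Run(simulator).Result;
            Console.WriteLine($"Measured: {result}");
        }
    }
}
```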

Third, as we discussed earlier, Microsoft has become a main contributor to the open source world. The Quantum Development Kit’s libraries and samples are published under an open source license, and plenty of training material is available to attract developers into the quantum programming realm. The open source license is a great encouragement for developers to use the kit in their applications while contributing to the Q# community.

Cloud services will shape the future of the IT industry, and quantum computing, open source technologies, and mixed reality will play a great role in it.

This is my last day in Redmond, but I really look forward to coming again next year! If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

DevOps, Microservices, Operations, Software Development Insights, Technical

Microsoft LEAP: Accelerating Business Value

This is my third article from Microsoft LEAP, and today’s focus is the use of microservices and Kubernetes.

Containers Are Crucial for Microservices

A very important topic discussed throughout the conference agenda was the use of microservices and how essential they are for most business applications. Approaching the topic from several angles, Brendan Burns, one of the co-founders of Kubernetes, gave a session focusing on the use of containers for microservices. He focused on his product, Kubernetes, one of the best and most recommended open-source platforms for running containers under policies. Microservices are important because of their agility and their architecture, which enables a faster digital offering.


However, many of today’s microservices run directly on physical servers, which leads to many problems. This is why containers are a breakthrough: they give the user a lightweight runtime environment that can run on physical or virtual servers, a huge improvement compared to older technologies.

Containers also provide better isolation while running many workloads on a single operating system, helping developers minimize the number of different VMs they need. Brendan also discussed domain-driven development versus test-driven development, in terms of which is more relatable for businesses and how to pick the right method. The final conclusion reflected the scaling levels that can be reached by using Kubernetes to run containers while building your business on microservices.
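
To make the container discussion a little more concrete, here is a minimal sketch using the official C# client for Kubernetes (the KubernetesClient NuGet package) to list the pods in a namespace. Method placement has shifted between client versions, so treat this as an illustration rather than canonical API usage.

```csharp
using System;
using k8s;

class PodLister
{
    static void Main()
    {
        // Load connection details from the local kubeconfig file,
        // the same file kubectl uses.
        var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
        IKubernetes client = new Kubernetes(config);

        // List every pod in the 'default' namespace with its name and phase.
        var pods = client.ListNamespacedPod("default");
        foreach (var pod in pods.Items)
            Console.WriteLine($"{pod.Metadata.Name}: {pod.Status.Phase}");
    }
}
```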

The Use of Service Fabric Mesh

One popular session in the program was given by Mark Fussell and Vaclav Turecek. The talk introduced the much-anticipated Service Fabric Mesh, with a full comparison against the current cloud service. Many different points were covered to describe Service Fabric fully, but the audience got most excited when they heard about the benefits of using this new service.


Mark spoke about the time taken to create VM instances and the hassle of the whole process. This is where Service Fabric shines: it creates the VMs only once, and they can then be used throughout the platform. More packages can be added to the cluster later without additional time cost. The second point, tackled by Vaclav, was the high-density hosting that Service Fabric offers, which explains its lower cost: applications are not tied to particular VMs, so more than one application can run on a single VM.

Last but not least, they discussed the flexibility of Service Fabric Mesh to be used with different servers and environments, regardless of the existing infrastructure, and added that Service Fabric helps control the machine lifecycle. Developers came away better educated on the differences between the cloud technologies and on whether or not to migrate.

The Touch Point: ACI and AKS

When it comes to Azure Container Instances (ACI), Justin Luk, product manager for Azure and Kubernetes, was the best pick for the content. Developers were glad to learn that AKS containers can be used with their ACIs. Containers can be spun up quickly when needed, without any preparation, saving time and effort, and instances can be deleted as soon as the work is done. AKS is used in these on-demand moments to monitor the work and control the creation and deletion process. This helps developers provide new servers instantly when needed, without hassle and without any extra services or products.

An Environment of AKS: Best Practices

Another session that stood out among the Kubernetes sessions was conducted by Saurya Das, another product manager in Azure. This session reflected the success stories of developers who use AKS in their platforms. Developers were happy to learn about multi-tenancy through cluster isolation, and about the different network designs that can be used with the service. These networks can also be implemented using policies, which make development easier and more secure. Overall, everyone in the session was pleased to hear about the scaling opportunities to expect and the strong monitoring and management controls AKS possesses.

Monitoring Your Procedures Using Azure Monitoring

Ralph Squillace, on the other hand, gave a wider picture and a better understanding of multi-tenancy and its use with AKS. He discussed how it is commonly, and mistakenly, implemented through the AKS product itself, whereas it is actually recommended to handle it directly in the application. Ralph emphasized these points by relating them to best practices, mainly from SaaS products, and gave tips and tricks on how your service should handle security, design, policies and much more in order to integrate and manage multi-tenancy directly and easily through the application.


Kubernetes: A Guide to Its Tools

The final part of the container track introduced the different operational tools that assist developers using Kubernetes services. Bridget Kromhout introduced developers to tools such as Terraform, Helm, Draft, Brigade, Kashti, and many others. How to use these tools was discussed thoroughly, both for configuration and for app development; they are also helpful for scripting event-driven operations and for managing an app fully. Developers were happy to learn how to use Kubernetes and containers efficiently with their existing architectures and structures.

All in all, a very on-topic and interesting day at Microsoft LEAP 2019. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@nostdahl.com