Cosmos DB, Gunnebo Business Solutions, Microservices, Microsoft Azure, Mongo DB, Technical

Microsoft LEAP: Designing for the Cloud

Microsoft LEAP is an event for developers worldwide who are looking for first-hand training from Microsoft. It takes place annually at Microsoft's headquarters in Redmond, WA. The five-day conference helps attendees understand how Microsoft products can be used and how they can solve companies' problems. This time, the participants learned how to design for the cloud in an up-to-date fashion.

 


The following piece will give you a glimpse of the Microsoft LEAP program. The sections below are the highlights with the greatest impact on the developer community.

Deep Dive into Cloud Computing: Azure Strategy

On January 28, Microsoft kicked off the LEAP program for software architects and engineers. There were plenty of speakers on the agenda, and Scott Guthrie was one of the strongest. Scott is in charge of Microsoft's cloud infrastructure, servers, CRM and many other products, and he led the team that created Microsoft Azure. In his keynote, "Designed for Developers", he discussed cloud computing technology, aiming to help developers of all skill levels reach one goal: sustainable development and use of cloud computing.


Scott focused on how to develop cloud solutions and maintain them. The session concluded with a presentation of Microsoft's plan to offer quantum computing on Azure.

The Strong Impact of Microservice Architecture

On this topic, the most memorable session was the one given by Francis Cheung and Masashi Narumoto. They talked about microservices and the strengths of the architecture. Microservice architecture is considered a paragon in the world of cloud computing, as it has raised the bar.


The speakers mentioned several traits of companies with the potential to succeed, and it was made clear that the success of a microservice implementation depends mostly on a well-developed team with a strong strategy (preferably domain-driven).

 

No matter how beneficial microservices can be, they are not necessarily the right choice for your business. You need to be well aware of your products and the level of complexity your business needs; adopting extra, unneeded tools will set you back rather than take you anywhere.

SQL Hyperscale as a Cloud-Based Data Solution

This session was different, as it celebrated two decades of PASS and 25 years of SQL Server. The speaker, Raghu Ramakrishnan, has been Microsoft's CTO for Data since he moved from Yahoo in 2012. With his strong background and experience, Raghu was the best candidate to discuss SQL Hyperscale and how groundbreaking this technology has been.


The Hyperscale service tier is a major update to the existing offerings. According to Ramakrishnan, it is the most modern SQL service tier, with the largest storage and the highest compute performance. It supports databases of up to 100 TB.

 

This technology is positioned to replace traditional cloud database architectures, as it is more reliable and accessible than the alternatives. Microsoft has added many features to SQL Hyperscale, making it one of the leading database solutions on the market. With the features discussed in the talk, it was well worth a separate session.

The Commercial Database: Cosmos Database

Deborah Chen, a Cosmos DB program manager at Microsoft, took the time to discuss one of the most popular commercial databases out there. Many modern implementations use non-relational databases, and Cosmos DB is one of the most widely used of them.


As Deborah mentioned, Cosmos DB is a very versatile and responsive tool. With numerous transactions taking place every second, response time matters to applications, especially real-time ones. Since it is a non-relational database, storing and retrieving data is easier and faster. This is where Cosmos DB stands out: it was intentionally designed with an architecture aimed at handling such workloads.
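
To give a sense of what this looks like from application code, here is a minimal sketch using the .NET SDK (Microsoft.Azure.Cosmos); the endpoint, key, database, container and item names are placeholders of mine, not values from the session.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Device
{
    public string id { get; set; }          // Cosmos DB requires an "id" property
    public string siteId { get; set; }      // used here as the partition key
    public string status { get; set; }
}

public static class CosmosSample
{
    public static async Task RunAsync()
    {
        // Placeholder endpoint and key - replace with your own account values.
        var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<account-key>");

        // Create (or reuse) a database and a container partitioned by /siteId.
        Database database = await client.CreateDatabaseIfNotExistsAsync("telemetry");
        Container container = await database.CreateContainerIfNotExistsAsync("devices", "/siteId");

        // Write a single item; the partition key must match the item's siteId.
        var item = new Device { id = Guid.NewGuid().ToString(), siteId = "site-001", status = "online" };
        await container.CreateItemAsync(item, new PartitionKey(item.siteId));

        // Point-read it back - the cheapest and fastest operation in Cosmos DB.
        ItemResponse<Device> read = await container.ReadItemAsync<Device>(item.id, new PartitionKey(item.siteId));
        Console.WriteLine($"Read item {read.Resource.id} with status {read.Resource.status}");
    }
}
```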

 

She also discussed Cosmos DB's Service Level Agreements (SLAs), which provide guarantees on throughput, availability, latency, and consistency for all users and set Cosmos DB apart from competing products.

Monitoring Your Processes with Azure Monitor

Rahul Bagaria, a product manager for Azure Monitor, joined later on to talk about the importance of monitoring your workflows and operations. Monitoring is not limited to single tasks; it covers connections, the workflow, and the final output. Monitoring all the steps of a procedure is important for maintaining efficient delivery and quality assurance as a whole, and it also helps pick out errors and problems in the cycle as they arise.


This is where Azure Monitor kicks in, with strong capabilities such as Log Analytics and Application Insights. Rahul emphasized the importance of this tool and all the features it provides. His team has worked hard to provide a service that can help with multiple tasks, milestones, and services. This session helped developers learn why and how to monitor their work processes.
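
As a concrete illustration (my own, not from the session), here is a minimal sketch of how an application might emit custom telemetry to Application Insights with the Microsoft.ApplicationInsights package; the instrumentation key and event names are placeholders.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class MonitoringSample
{
    public static void Main()
    {
        // Placeholder instrumentation key taken from your Application Insights resource.
        var configuration = TelemetryConfiguration.CreateDefault();
        configuration.InstrumentationKey = "<instrumentation-key>";

        var telemetry = new TelemetryClient(configuration);

        // Custom event: something meaningful happened in the workflow.
        telemetry.TrackEvent("CashOperationCompleted");

        // Custom metric and trace give extra context for Log Analytics queries.
        telemetry.GetMetric("CashOperationDurationMs").TrackValue(1234);
        telemetry.TrackTrace("Cash operation finished without errors");

        try
        {
            // ... business logic ...
        }
        catch (Exception ex)
        {
            // Exceptions show up under Failures in the portal.
            telemetry.TrackException(ex);
            throw;
        }
        finally
        {
            telemetry.Flush();   // make sure buffered telemetry is sent before exit
        }
    }
}
```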

 

All in all, the first day at Microsoft LEAP 2019 was very on-topic and interesting. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Business Intelligence (BI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Microsoft LEAP: Looking into the future

Cloud computing has become one of the most profitable industries in the world, and the cloud will remain a very hot topic for the foreseeable future. There is huge competition among cloud service providers to win customers by offering the best services. Providers invest a lot of money in innovation, so cloud services set most of the trends in the IT industry. Microsoft Azure and Amazon AWS are among the leaders in innovation in this field.

Data centers around the world

As the demand for cloud services is rapidly increasing in all parts of the world, establishing data centers around the globe has become a necessity. Azure has understood this well and expects to expand its service by constructing data center regions in many parts of the world.

From news.microsoft.com article about Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. Photo by Frank Betermin

The world is divided into geographies defined by geopolitical boundaries or country borders. These geographies define the data-residency boundaries for customer data, and Azure geographies respect the requirements within those boundaries, ensuring data residency, compliance, sovereignty, and resiliency. Azure regions are organized into geographies; a region is defined by a bandwidth and latency envelope. Azure offers the greatest number of global regions among cloud providers, which is a great benefit for businesses that want to bring their applications closer to users around the world while protecting data residency.

Two Major Expansions of Azure's Global Cloud Services

Two of the most important expansions that Microsoft Azure has incorporated to improve its services are the following:

Expansion of Virtual Networks and Virtual Machines Support.

With compute-intensive virtual machines such as A8 and A9, which provide fast processors and high-speed interconnects across more virtual cores, virtual networks can now be configured seamlessly for specific geographical locations and regions.

This gives more room for demanding workloads such as cloud services, complex engineering design, video encoding and a lot more.

Incorporation of Azure Mobile Services, and its Expansion to Offline Features

Even when disconnected, this makes it possible for applications to operate effectively using offline features. Furthermore, it extends Azure cloud services to apps on various platforms, including Android and iOS mobile phones.

Then there are Availability Zones, the third level in the Azure network hierarchy.

Availability Zones are physically separate locations within a region, each made up of one or more data centers. Constructing Availability Zones is not easy: they are not just data centers, they also need advanced networking, independent power, cooling and so on. Their primary purpose is to help customers run mission-critical applications.

You get the following benefits with Azure Availability Zones:

  • Better protection for your data – you won't lose your data due to the destruction of a single data center
  • High availability, better performance, and more resources for business continuity
  • A 99.99% uptime SLA on virtual machines

Open source technology

Microsoft took some time to understand the value of open source technologies, but now they are doing really well. With .NET Core and .NET Standard, Microsoft has made a major commitment to open source, and looking at GitHub alone, Microsoft is one of the largest contributors to open source.

Redmond, Washington, USA, 4 June 2018: Microsoft confirms it is acquiring GitHub
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft.

With .NET Core 3.0, Microsoft introduced many features that enable developers to create secure, fast and productive web and cloud applications. .NET Core 3 is a major update that adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). ASP.NET Core 3 enables client-side development with Razor Components. EF Core 3 will have support for Azure Cosmos DB, and the release also includes support for C# 8 and .NET Standard 2.1 and much more.
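
To make the language side concrete, here is a small self-contained sketch (my own example, not from the keynote) showing two of the C# 8 features that ship with .NET Core 3.0: switch expressions and nullable reference types.

```csharp
#nullable enable
using System;

public enum DeviceState { Online, Offline, Maintenance }

public static class CSharp8Sample
{
    // Switch expression: a compact replacement for a switch statement.
    public static string Describe(DeviceState state) => state switch
    {
        DeviceState.Online      => "Device is up and reporting",
        DeviceState.Offline     => "Device is unreachable",
        DeviceState.Maintenance => "Device is in service mode",
        _                       => "Unknown state"
    };

    // Nullable reference types: the compiler warns if 'name' could be null when dereferenced.
    public static int NameLength(string? name) => name?.Length ?? 0;

    public static void Main()
    {
        Console.WriteLine(Describe(DeviceState.Online));
        Console.WriteLine(NameLength(null));
    }
}
```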

Mixed reality and AI perceptions

Mixed reality tries to reduce the gap between our imagination and reality, and together with AI it is about to change the way we see the world. It may well become a primary source of entertainment. Although mixed reality first got popular in the gaming industry, you can now see its applications in other industries as well. The global mixed reality market is booming, which is why the biggest names in tech are battling it out to capture the MR market, introducing devices such as the Meta 2 headset, Google Glass 2.0, and Microsoft HoloLens.

Mixed reality and AI perception are the result of many advanced technologies working together. The technology stack includes natural-language interaction, object recognition, real-world perception, real-world visualization, contextual data access, cross-device collaboration, and cloud streaming.

Factory Chief Engineer Wearing VR Headset Designs Engine Turbine on the Holographic Projection Table. Futuristic Design of Virtual Mixed Reality Application

As I said earlier, although the gaming industry was the first to adopt mixed reality, MR applications are now widely used in other industries. Let's visit some of them and see how mixed reality has transformed them and what benefits they get from mixed reality and AI perception.

You can see companies such as SAAB, NETSCAPE, and DataMesh using mixed reality in the manufacturing industry. According to research, mixed reality helps increase worker productivity by 84%, improve collaboration among cross-functional teams by 80%, and improve customer service interactions by 80%. You may wonder how mixed reality achieves this and what it offers the manufacturing industry. There are many applications of mixed reality in manufacturing; the following is a small list of them.

  • Enhanced Predictive Maintenance
  • Onsite Contextual Data Visualization
  • Intuitive IOT Digital Twin Monitoring
  • Remote collaboration and assistance
  • Accelerated 3D modeling and product design
  • Responsive Simulation training

Retail, healthcare, engineering, and architecture are some other industries that use mixed reality heavily.

Quantum revolution

Quantum computing could be the biggest thing of the future. It is a giant leap forward from today's technology, with the potential to alter our industrial, academic, societal, and economic landscapes forever. You will see massive implications in nearly every industry, including energy, healthcare, smart materials, and environmental systems. Microsoft is taking a unique, revolutionary approach to quantum with its Quantum Development Kit.

Picture from cloudblogs.microsoft.com article about the potential of quantum computing

Microsoft can be considered one of the few companies taking quantum computing seriously in the commercial world. They have a quantum dream team formed from some of the greatest minds in physics, mathematics, computer science, and engineering to provide cutting-edge quantum innovation, and their quantum solution integrates seamlessly with Azure. They have taken a scalable, topological approach to quantum computing, which helps harness superior qubits that can perform complex computations with high accuracy at a lower cost.

There are three important features of the Quantum Development Kit that make it the go-to quantum computing solution.

First, it introduces its own language, Q#, created specifically for quantum programming. It has general programming features such as operators, native types and other abstractions. Q# integrates easily with Visual Studio and VS Code, which makes it feature-rich, and it is interoperable with the Python programming language. With the support of enterprise-grade tools, you can easily work on any OS: Windows, macOS, or Linux.

Second, the Quantum Development Kit provides a simulation environment that greatly helps you optimize your code. This is very different from other quantum computing platforms, which are still at a rather crude level. The simulation environment also lets you debug your code, set breakpoints, estimate resource costs, and much more.
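
To show what running against the local simulator looks like in practice, here is a rough sketch of a C# host program using the QDK's QuantumSimulator. It assumes a Q# operation named MeasureSuperposition has been compiled into the same project; the operation name and namespace are placeholders of mine, not part of the QDK.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Quantum.Simulation.Simulators;
// The Q# compiler generates a C# class for each operation in the project,
// e.g. Quantum.Demo.MeasureSuperposition for a (hypothetical) operation of that name.
using Quantum.Demo;

public static class QuantumHost
{
    public static async Task Main()
    {
        // The full-state simulator runs the Q# code on the local machine.
        using var simulator = new QuantumSimulator();

        // Run the operation a few times and print the measured results.
        for (var i = 0; i < 5; i++)
        {
            var result = await MeasureSuperposition.Run(simulator);
            Console.WriteLine($"Shot {i}: measured {result}");
        }
    }
}
```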

Third, as we discussed earlier, Microsoft has become a major contributor to the open source world. The QDK libraries and samples are available under an open source license, and Microsoft has worked hard to make quantum computing easier, offering plenty of training material to attract developers into the quantum programming realm. The open source license is a great encouragement for developers to use the Quantum Development Kit in their applications while contributing to the Q# community.

Cloud services will shape the future of the IT industry, and quantum computing, open source technologies, and mixed reality will play a great role in it.

This is my last day in Redmond, but I really look forward to coming again next year! If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Microsoft LEAP: Adding Business Value and Intelligence

Adding Business Value and Intelligence

The concept of business value and intelligence is about becoming more productive through the use of various technical applications and analytical tools to assess raw data. Business intelligence makes use of activities like data mining, analytical processing, querying and reporting. Companies take advantage of it to improve their operations and accelerate their decision making. Business intelligence is also useful for reducing costs and identifying new business opportunities.

Machine learning technologies. Millennial students teaching a robot to analyse data

A lot of experts shared their ideas and spoke on various aspects of business value and intelligence relating to AI in Redmond. Notable speakers included Jennifer Marsman, Maxim Lukiyanov, Martin Wahl, and Noelle LaCharite. They spoke extensively on machine learning fundamentals, the new Azure Machine Learning service, using Cognitive Services to power your business applications, and how to solve business problems using AI, respectively.

Machine Learning Fundamentals

The fundamentals of machine learning have to do with understanding both the theoretical and the programming aspects. It is also important to stay up to date with the latest algorithms and technologies implemented by the various machine learning tools. The simplest explanation of the term machine learning is operating a machine in such a way that it learns to perform various tasks.


Algorithms can learn how to perform these tasks in various ways, and this brings us to the different types of machine learning. They include supervised learning, which is carried out to enable the machine to identify and differentiate between various kinds of data. Unsupervised learning, on the other hand, does not prescribe a specific output or structure that the machine is supposed to produce. A third type of machine learning is reinforcement learning.

The importance of a model's accuracy cannot be overstated. Accuracy is what really determines how effective a model can be for a company's operations. Models are evaluated mainly by making predictions and putting them to work in the real world. In the business world, a model cannot be accepted until it has been tested against the real world and the results are satisfactory. How a model is measured depends on the characteristics of that particular model and the real-world circumstances in which it is needed.

Two vital building blocks of machine learning are CNNs and RNNs: convolutional neural networks and recurrent neural networks. CNNs work on fixed-size inputs and outputs and require minimal preprocessing, while RNNs can handle inputs and outputs of arbitrary length and are used for processing sequences. So in basic terms, CNNs are built to recognize images, while RNNs recognize sequences.

Presentation about machine learning technology, scientist touching screen, artificial intelligence-1

Furthermore, Jennifer Marsman described various methods related to artificial intelligence, including the following.

  • Search and Optimization

Search and optimization are classic AI techniques: the machine explores a space of candidate solutions and ranks them to find the best one. Explaining the role of AI in search and optimization can get very technical, but machines can also be taught how to work with these ranking algorithms.

  • Logic

Logic also plays a major role in AI. It can be applied as an analytical tool, as a knowledge-representation formalism, and as a method of reasoning. Logic can also be used as the basis of a programming language, and through this it exposes both the prospects and the problems of AI.

  • Probabilistic Methods for Uncertain Reasoning

One of the most widely used methods for representing uncertainty is probability. Over the years, certainty factors and alternative numerical schemes have also been used to quantify uncertainty.

  • Classifier and Statistical Learning Methods

Classifiers associated with AI include Naive Bayes, decision trees, and the perceptron, among a host of others. There are also various statistical learning methods and theories that are used to evaluate the uncertainties of AI. However, these statistical models have limitations, and this is where logic comes in.

  • Artificial Neural Networks

This is where the earlier-mentioned RNNs and CNNs fit into AI. A typical example of an ANN is a natural-language-processing model used to interpret human speech.

  • Evaluating Progress in AI

Evaluating progress is imperative for estimating how AI is advancing across all sectors, including business. Three evaluation types are human discrimination, peer confrontation, and problem benchmarks.

An Introduction to New Azure Machine Learning Service

Maxim Lukiyanov spoke about how the new Azure Machine Learning service works. The service helps simplify and accelerate building, training, and deploying machine learning models. Furthermore, automated machine learning can be used to identify suitable algorithms and tune hyperparameters faster.

The new Azure Machine Learning service also helps improve productivity and reduce costs with auto-scaling compute for machine learning workflows, and it makes it easy to store data in the cloud. Working with the latest tools is also seamless, with support for open source frameworks like PyTorch, TensorFlow, and scikit-learn.

Maxim also spoke further about some benefits of the new Azure Machine Learning service:

  • Easy and flexible pricing, as you pay only for the features you actually use.
  • The service is easy to understand, and the tools that come with it are not restrictive.
  • With the various data and algorithms available, predictions become more accurate.
  • The tooling makes it very easy to import data and fine-tune results.
  • Many other devices can be connected to the platform easily with the aid of the tools.
  • Data models can easily be published as a web service.
  • Experiments can be published in a matter of minutes, a major upgrade compared to manual work that takes expert data scientists days.
  • Azure's security measures provide adequate protection, which is very useful when storing data in the cloud.

Using Cognitive Services to Power your Business Applications: An Overview and Look at Different AI Use Cases

Martin Wahl explained that with Azure Cognitive Services, customers and developers are set to benefit from AI without even needing a data scientist, which is a major advantage in saving both time and cost. This is achieved by packaging the machine learning models, pipelines and infrastructure needed for important activities such as vision, speech, search, text processing, language understanding and many more operations into ready-made cognitive services. This means that anyone who can write a program at all can use machine learning to improve their application.


Customers who have adopted the service are already benefiting from cognitive services such as the Face container, the Text Analytics container, Custom Vision support for logo detection, language detection, in-depth text analysis and many more.
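
As an example of how little code such a pre-built service needs, here is a small sketch using the Text Analytics client from the Azure.AI.TextAnalytics package to detect the language of a string; the endpoint and key are placeholders for your own Cognitive Services resource.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

public static class LanguageDetectionSample
{
    public static void Main()
    {
        // Placeholder endpoint and key for a Cognitive Services / Text Analytics resource.
        var client = new TextAnalyticsClient(
            new Uri("https://<resource-name>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<subscription-key>"));

        // One call is enough - no model training or data science required.
        DetectedLanguage language = client.DetectLanguage("Ce document est rédigé en français.");

        Console.WriteLine($"Detected language: {language.Name} " +
                          $"(ISO {language.Iso6391Name}, confidence {language.ConfidenceScore:P0})");
    }
}
```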

Martin finally explained that with these Azure services, more value is added to the business, and implementing artificial intelligence is easier than ever.

How to Solve Complex Business Problems Using AI Without Needing a Data Scientist or Machine Learning Expert.

With basic skills like Python coding, data visualization, the Hadoop platform, Apache Spark, etc., complex business problems can be solved even without being a machine learning expert or a data scientist. All of this is made possible with the help of AI, and all that is needed is dedication and willingness. Some steps to go about this include:

  • Understanding the basics: This has to do with acquiring general knowledge on the basics, both theoretically and practically.
  • Learning statistics: statistics is core to solving business problems, and some of the aspects to look at include sampling, data structures, variables, correlation, regression, etc.
  • Learning Python
  • Making attempts on an explanatory data analysis project
  • Creation of learning models
  • Understanding the technologies that are related to big data
  • Exploring deeper models
  • Completing a complex business problem.

Finally, Noelle LaCharite gave a vivid explanation of how a PoC was built, and I made one myself in Delphi in 30 minutes with the aid of Azure AI.

DevOps, Gunnebo Business Solutions, Microservices, Operations, Technical

Microsoft LEAP: Accelerating Business Value

This is my third article from Microsoft LEAP, and today's focus is the use of microservices and Kubernetes.

Containers Are Crucial for Microservices

A very important topic discussed throughout the agenda was the use of microservices and how essential they are for most business applications. Approaching the topic from several angles, Brendan Burns, one of the Kubernetes co-founders, gave a session focused on the use of containers for microservices. He focused on his product, Kubernetes, which is one of the best and most recommended open-source systems for running containers governed by policies. Microservices are important because of their agility and their architecture, which helps deliver digital offerings faster.

Conceptual business illustration with the words microservices-1

However, many existing microservices run directly on physical servers, which leads to many problems. This is why containers are a breakthrough: they give the user a lightweight runtime environment and can be used on physical or virtual servers, which is a huge improvement compared to older technologies.

Containers also help provide better isolation while running many workloads on a single operating system, which helps developers minimize the number of separate VMs. Brendan also discussed domain-driven development versus test-driven development, which approach is more relatable for businesses, and how to pick the right method. Overall, the conclusion reflected the scale that can be reached by using Kubernetes as a service to run containers while building your business on microservices.

The Use of Service Fabric Mesh

One popular session in the program was given by Mark Fussell and Vaclav Turecek. The talk introduced the anticipated new product called Service Fabric Mesh, with a full comparison against the currently used cloud services. Many different points were discussed to describe Service Fabric fully, but the audience got most excited when they heard about the benefits of this new service.


Mark spoke about the time it takes to create VM instances and the hassle of the whole process. This is where Service Fabric shines, as it creates the VMs only once and then reuses them across the platform; more packages can be added to the cluster later without additional delay. The second point, tackled by Vaclav, was the high-density hosting that Service Fabric offers, which explains why the cost is lower: applications are not tied to particular VMs, so more than one application can share a single VM.

Last but not least, they both discussed the flexibility of Service Fabric Mesh to be used with different servers or environments, regardless of the existing infrastructure, and added that Service Fabric helps control the machine lifecycle. Developers came away better educated about the differences between cloud technologies and whether to migrate or not.

The Touch Point: ACI and AKS

When it comes to Azure Container Instances (ACI), Justin Luk, a product manager for Azure Kubernetes Service (AKS), was the best pick for the content. Developers were glad to learn that AKS can schedule containers onto ACI. The containers can be started quickly when needed, without any preparation, saving time and effort, and instances can easily be deleted as soon as the work is done. AKS is used in these on-demand moments to monitor the work and control the creation and deletion process. This helps developers provide new servers instantly when needed, without any hassle: when a spike in demand arrives, AKS delivers the needed output without any extra services or products.

An Environment of AKS: Best Practices

Another session that stood out among the Kubernetes sessions was the one conducted by Saurya Das, another product manager in Azure. This session reflected success stories from developers who have used AKS in their platforms. Developers were happy to learn about multi-tenancy through cluster isolation, as well as the different network designs that can be used with the service. These networks can also be governed by policies, which make development easier and more secure. Overall, everyone in the session was satisfied to learn about the scaling opportunities to expect and the strong monitoring and control AKS possesses.

A Wider View of Multi-Tenancy

On the other hand, Ralph Squillace gave a wider picture and a better understanding of multi-tenancy and its use with AKS. He discussed how it is commonly, and mistakenly, implemented through the AKS product itself, whereas it is actually recommended to handle it in the application directly. Ralph emphasized these points by relating them to best practices, mainly from SaaS products, and gave a few tips and tricks on how your service should look in terms of security, design, policies and much more in order to integrate and handle multi-tenancy directly and easily in the application.


Kubernetes: A Guide to Its Tools

The final part of the container track introduced the different operational tools that assist developers when using Kubernetes services. Bridget Kromhout introduced the developers to tools such as Terraform, Helm, Draft, Brigade, Kashti and many others. These tools were discussed thoroughly, covering how to use them for configuration and app development; they also help with scripting event-driven operations and managing the app fully. Developers were happy to learn how to use Kubernetes and containers efficiently with their existing architectures.

All in all, a very on-topic and interesting day at Microsoft LEAP 2019. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@nostdahl.com

Gunnebo Business Solutions, Technical

Using Lottie to Enliven your App

What is the main feature of a good app? It enhances customers' lives through a set of well-conceived steps in user experience (UX) design. Proper UX speeds up interactions and keeps activities simple and orderly. On the surface, the easiest way to give users a clear picture of the product's functionality is to provide comprehensive guidelines. But the more complex the tasks an app performs, the more time you spend learning how to use it properly, and complex manuals create tension and distract users.

Smartphone - User Manual

That's why, when creating the Gunnebo Security Solution app, we paid special attention to developing user-friendly instructions for our clients. The Gunnebo app is aimed at remote management of our security products. Since it has many functions, including alarm control, cash operations, data analysis, and device coordination, it takes some time for users to learn them all. How can one make this boring and complicated task fun? We set our sights on animations as a nice way to entertain, attract attention and make instructions illustrative.

Our next concern was the practical implementation of this decision. Everybody who has ever dealt with animations knows that they can take a lot of time and effort to create; even behind a small and seemingly simple animation there may hide long lines of code. So, we decided to try Lottie, a relatively new animation library created by Airbnb, and it turned out to be the right choice.

Lottie is an excellent library for rendering Adobe After Effects animations on Android, iOS, macOS, tvOS and UWP. It uses animation data exported as JSON files from the Bodymovin extension and renders the animations in real time, so engineers don't need to re-create them by hand and can work directly with animations exactly as designers created them. Another good thing is that animations stay small no matter how complex they are.

Lottie supports numerous After Effects features, like solids, masks, shape layers, etc., and it allows various manipulations of an animation (resize, loop, reverse, scrub, change color, and more). You can play just a fragment of an animation, loop it if you need to, and do lots of other things.
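
For a rough idea of how that looks from a Xamarin.Android activity via the LottieXamarin binding, here is a sketch; the layout, view id and JSON file name are placeholders, and exact member names may differ slightly between binding versions, so treat this as an outline rather than exact API.

```csharp
using Android.App;
using Android.OS;
using Com.Airbnb.Lottie;   // namespace of the LottieXamarin Android binding

[Activity(Label = "OnboardingActivity")]
public class OnboardingActivity : Activity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        SetContentView(Resource.Layout.onboarding);   // layout containing a LottieAnimationView

        // Placeholder view id and JSON file exported from After Effects via Bodymovin.
        var animationView = FindViewById<LottieAnimationView>(Resource.Id.lottie_view);
        animationView.SetAnimation("dashboard_intro.json");

        // Loop forever and play only the first half of the animation (progress 0.0 - 0.5).
        animationView.RepeatCount = -1;   // equivalent to LottieDrawable.INFINITE
        animationView.SetMinAndMaxProgress(0.0f, 0.5f);
        animationView.PlayAnimation();
    }
}
```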

For the Gunnebo app, we have developed a set of animations that familiarize users with the app's interface and functions. These include dashboard use, calling attention, data processing, etc.

Animations created with Lottie have a lot of perks. Slides show up only if the user hasn't seen them before; thus, we don't nag users with directions, and they are only shown on the first use of a specific function.

The slides load from a solution folder, so the person who adds or edits slides doesn't need to be a developer and doesn't have to touch the code. Files are added to Git in Azure DevOps, and the folder structure and folder names determine where slides will be shown. Slides load into a Telerik SlideView, so users can swipe or tap to go through them.

The text is stored in HTML, so the style is easily localized and modified. Gunnebo uses Crowdin for localization.

The picture below represents the structure of the project in general:

Slideshow_UWP_iOS_Android

Gunnebo developers have used the following libraries for the implementation:

https://github.com/martijn00/LottieXamarin

https://github.com/azchohfi/LottieUWP

https://docs.telerik.com/devtools/xamarin/controls/slideview/slideview-overview

https://github.com/zzzprojects/html-agility-pack

Gunnebo Business Solutions, IBM International Business Machines, Microsoft Azure, Node RED, Technical

Node-RED deployment on Azure

Today I would like to talk about the process of deploying Node-RED instances on the Azure platform.

The initial tasks were:

  1. Deploy a Node-RED instance to the Azure cloud and give it a public IP address/DNS name.
  2. Secure access to the Node-RED instance with user credentials.
  3. Update the instance with the required set of nodes and provide a way to keep them up to date.

Let’s discuss all steps one by one.

Azure deployment

The most common and convenient way to deploy your application on the Azure platform is by using Azure Resource Manager. It lets you treat all application resources as a group and deploy, manage or delete them in a single operation. With Resource Manager, you can create a template (an Azure Resource Manager template) that defines the infrastructure and configuration of your Azure solution. This allows you to deploy your solution repeatedly throughout its lifecycle, confident that your resources are deployed in a consistent state.

A Resource Manager template is a JSON file that defines the resources you need to deploy to a resource group. Resource Manager analyzes the template and converts its syntax into REST API operations for the appropriate resource providers. For resources to be deployed in the correct order, you can set dependencies between them; this is done when one resource relies on a value from another, for example a virtual machine that needs a storage account for its disks.

You may wonder, "What are resources and why do we need them? We just want to deploy a Node.js application (Node-RED) on Azure." Well, a resource is a manageable item available on Azure. Some common resources are a virtual machine, a storage account and a virtual network, but there are many more. To start Node-RED in the cloud, we need to create a VM and deploy a Docker container (image) with Node-RED inside. Since one resource relates to another, we create a bunch of resources in our resource group (a container holding related resources for our Azure solution). It includes:

  • Storage account
  • Public IP address
  • Virtual Network
  • Network interface
  • Network security group
  • Virtual Machine
  • Extensions

Resource Manager provides extensions for scenarios where you need additional operations, such as installing particular software that is not included in the setup. We used the Docker extension to set up the Docker container on the VM.

Ok, so now we are ready to create a template. The detailed description can be found here.

Here I would like to talk only about the extension section:

At this stage, we define a DockerExtension resource that depends on our virtual machine resource and specify the "nodered/node-red-docker" image from Docker Hub.

We also need to enable the Docker Remote API for later use:

Since we need access to the API, we also expose the corresponding port in the network security group:

We also need to map VM port 80 to port 1880 (the default port for Node-RED):

After defining the template, we are ready to deploy the resources to Azure. There are several ways to do that: PowerShell, Azure CLI, Azure Portal, REST API or Azure SDK.

Since we want to develop an automation solution for application deployment, the REST API and the Azure SDK seem to be the most suitable for us. The reason I want to highlight the Azure SDK for .NET is that it is much easier to build an application using existing wrapper classes for the API than to create your own REST wrappers and methods.

Take these four steps to deploy your template with C# SDK:

1. To be able to make any requests to the API, first we need to authenticate and authorize our request. Let’s create the management client:

azureauth.properties is the authorization file. Before you can deploy a template, you need to acquire a token for authenticating requests to Azure Resource Manager. You should also record the application ID, the authentication key, and the tenant ID, which you need in the authorization file.
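
The original snippet is not reproduced here, but the step corresponds roughly to the following with the Microsoft.Azure.Management.Fluent packages, which can read exactly this kind of azureauth.properties file. Treat it as an approximate reconstruction, not the original code.

```csharp
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

// Read the service principal credentials (application id, key, tenant) from the auth file.
var credentials = SdkContext.AzureCredentialsFactory.FromFile("azureauth.properties");

// The entry point for all subsequent management operations.
var azure = Azure.Configure()
    .Authenticate(credentials)
    .WithDefaultSubscription();
```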

2. Create resource group and storage account:

3. Upload your template file to Azure:

4. Deploy template:
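
Again as a hedged reconstruction rather than the original code, steps 2-4 could look roughly like this with the same Fluent library, continuing with the azure client created in step 1. Here the template JSON is passed inline instead of being uploaded to a storage account first, and the names and region are placeholders.

```csharp
// Additional namespaces: System.IO and Microsoft.Azure.Management.ResourceManager.Fluent.Models (DeploymentMode).

// 2. Create the resource group that will hold every resource of the deployment.
var resourceGroup = azure.ResourceGroups
    .Define("node-red-rg")
    .WithRegion(Region.EuropeWest)
    .Create();

// 3./4. Read the ARM template and its parameters and start the deployment.
var templateJson = File.ReadAllText("nodered-template.json");
var parametersJson = File.ReadAllText("nodered-parameters.json");

azure.Deployments
    .Define("node-red-deployment")
    .WithExistingResourceGroup(resourceGroup)
    .WithTemplate(templateJson)
    .WithParameters(parametersJson)
    .WithMode(DeploymentMode.Incremental)
    .Create();
```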

That’s it. On the whole, the deployment process in our case takes about 3-5 mins.

To retrieve the public IP address our Docker container is available on:
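
One possible way to do that, continuing with the azure client from step 1 (the resource group and IP names are placeholders that must match what the template created):

```csharp
// Look up the public IP address resource created by the template and print its address.
var publicIp = azure.PublicIPAddresses
    .GetByResourceGroup("node-red-rg", "node-red-ip");

Console.WriteLine($"Node-RED editor available at http://{publicIp.IPAddress}/");
```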

So now we have a Node-RED instance up and running on the Azure cloud, accessible via a public IP/DNS name. Let's proceed to the next step.

Secure Node-RED instance

Node-RED Editor supports two types of authentication:

  • username/password credential based authentication
  • starting from Node-RED 0.17, authentication against any OAuth/OpenID provider, for example GitHub or Twitter

If we choose the first option, we need to add the following to our settings.js file:

Since we want to make these credentials customizable for each deployment, we can't embed this configuration in the Dockerfile. So we need a way to execute commands inside the Docker container after deployment. That's why we use the Docker Remote API to adjust the credential settings, and this is the reason we exposed an additional port in our template, as mentioned above.

Here is an example command to set up credentials for Node-RED:

We used the .NET client for the Docker Remote API as a wrapper around the REST API:
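
The snippet itself is not shown here, but with the Docker.DotNet package the call could be sketched roughly as follows. The host address, container id and command are placeholders; the real command would, for example, write the adminAuth section into settings.js and restart Node-RED.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

public static class DockerExecSample
{
    public static async Task RunAsync()
    {
        // Connect to the Docker Remote API exposed by our ARM template (placeholder host/port).
        var client = new DockerClientConfiguration(new Uri("http://<vm-public-ip>:2375"))
            .CreateClient();

        // Create an exec instance inside the running Node-RED container.
        var execCreate = await client.Exec.ExecCreateContainerAsync("<container-id>",
            new ContainerExecCreateParameters
            {
                // Placeholder command - in our case it adjusts the credential settings.
                Cmd = new[] { "sh", "-c", "echo 'update settings.js here'" },
                AttachStdout = true,
                AttachStderr = true
            });

        // Start the exec instance; the command now runs inside the container.
        await client.Exec.StartContainerExecAsync(execCreate.ID, CancellationToken.None);
    }
}
```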

Now we have secured our Node-RED editor with a custom username and password.

Keeping nodes and flows up to date

Now we need a way to provide our cloud Node-RED instance with a custom node set and keep it up to date. We already have all the tools for that. Custom nodes are stored in a separate Git repository. A few options are available:

  1. Execute npm install <git repo url> inside the Node-RED userDir (/data for the "nodered/node-red-docker" container)
  2. Copy the custom nodes to /data/nodes inside the container.

Node-RED flows can be synchronized in a similar way. By default, the Node-RED Docker container stores flow data in /data/flows.json. The flows configuration file is set using an environment variable (FLOWS), which can be changed in the docker-compose configuration section:

Using this approach, we can put the nodes and the flows file under version control inside the container and synchronize them with a remote repository.

All commands can be executed via Docker Remote API in the same way, as described in the previous section.

Each time we need to update our nodes, we just call the Docker API and pull updates from the repository. We can also back up our flows.json by committing and pushing it to the repository.

As an improvement, we could create a Git hook to update our Node-RED instances whenever changes are pushed to the node repository, but that is out of the scope of this post.

Summary

This was a short overview of how to automate your deployments on the Azure cloud with Azure Resource Manager and the Azure SDK for .NET. In our example, we set up a Node-RED Docker container in the cloud, but all the steps mentioned are applicable to any similar Docker deployment.

Gunnebo Business Solutions, Methodology, Technical

Continuous Translation with Crowdin

The Gunnebo Business Solutions software consists of many sub-projects, and with customers spread across the world, localization and language are very important. To keep the process effective, all sub-projects use Crowdin for translations, integrated into our Continuous Integration pipelines. Continuous Integration is a development practice that allows developers to integrate code into a shared repository several times a day. After check-in, the latest translations are pulled from Crowdin, and the committed code is verified by automated builds, allowing teams to detect problems early.

Introduction to Crowdin

The Crowdin service (https://crowdin.com) enables merging the translation process into a continuous integration pipeline for projects based on any kind of resource files, regardless of localization format.


Continue reading “Continuous Translation with Crowdin”

ENC424J600, PIC24, PIC24FJ256GB206, Security, Technical, TLS/SSL

TLS “simplified”

SSL/TLS Library for PIC24 is a mikro-Pascal library developed by and for the open source community. The aim of open source projects is to provide developers with opportunities to share and learn through collaboration.


We have some of the greatest minds working on this, and we hope to attract as many developers from the open source community as possible to contribute to the development of the  library and to use it. Today’s post is prepared with support from Jack Lloyd, a TLS security and cryptography expert.

Continue reading “TLS “simplified””

Gunnebo Business Solutions, MQTT, Protocols, Raspberry PI, Technical

MQTT and ActiveMQ on RPI

What is the Message Queue Telemetry Transport Protocol?

MQTT is an ISO-standard, publish-subscribe-based "lightweight" messaging transport protocol for use on top of the TCP/IP protocol. It is designed for connections with remote locations where a "small code footprint" is required or where networks have high latency or low bandwidth. Andy Stanford-Clark and Arlen Nipper of Cirrus Link authored the first version of the protocol in 1999.

Publish / Subscribe

The publish-subscribe messaging pattern requires a message client and a broker. The client can be any device, from a microcontroller to a server, that runs an MQTT library and is connected to the MQTT broker over a network. The broker is responsible for receiving all messages, filtering them, and distributing them to subscribed clients based on the topic of each message. Since MQTT is based on TCP/IP, both the client and the broker are expected to have a TCP/IP stack.

Continue reading “MQTT and ActiveMQ on RPI”

DMX, Gunnebo Business Solutions, Protocols, Technical

Prolight node for Node-RED

In the previous article on Node-RED, I talked about the Art-Net node for communicating with DMX devices. In this article, I will build on that and look into a specific device implementation.

During the planning of EuroShop 2017, one requirement from the product owner was to simulate sunrise and sunset with a Prolights PixieWash. It sounds quite easy, but when we started looking into the details, we found that we needed to work out the math to build an arc-transitioning algorithm.

Continue reading “Prolight node for Node-RED”