Gunnebo Business Solutions, Microsoft, PowerBI, Technical

Microsoft Power BI: Dashboard in a Day

What better way to start the week than crunching and visualizing data? I joined the Dashboard in a Day workshop by Microsoft and Random Forest, a one-day hands-on workshop designed for business analysts.


I have to say that I really enjoy digging into new tools and learning about new technology. Even if I am probably not going to work directly with Power BI, understanding what is possible and where the tool's boundaries lie makes my job easier.

Kicking off with a short introduction (and the mandatory Microsoft advertisements), we dove straight into the PowerBI Desktop application itself, learning how to connect to, import and transform data from various sources. After preparing the data, reformatting and splitting fields, we moved on to exploring data with powerful visualization tools.


The exercises were quite comprehensive, and at some point I went rogue and chose to implement some of my own visualizations instead.

The event was well organized and planned, and the assets were categorized in a way that made it easy to identify the ones that suited our needs. The attendee content consisted of lab manuals and datasets that were available for download on MPN without requiring an MPN ID.

Towards the end of the day, the guys from Random Forest made sure we had a good working knowledge of and familiarity with Power BI, and were able to answer any question about the workshop or Power BI in general. It was a tremendous learning experience, and I couldn’t wait to try out those awesome new technologies! They even spent the last couple of hours of the day supporting and guiding us through our own datasets. I brought some statistics from one of our Business Units, and it was quite impressive how I could visualize and interactively navigate through the data.

All in all an exciting workshop, and I look forward to playing more with PowerBI in the future. If you have any questions or great ideas, feel free to contact me at bjorn.nostdahl@gunnebo.com 🙂

 

Artificial Intelligence (AI), Commercial, Gunnebo Business Solutions, Gunnebo Retail Solution, Machine Learning (ML), USA

Autonomous and Frictionless Stores

Earlier this year, I visited the US for a couple of weeks, and having a genuine interest in retail technology, I visited quite a few retail stores. I went to see classical stores, but also had the chance to preview the future of retail: autonomous and frictionless stores!

Customers in this digital world don’t want to spend too much time shopping. They want everything to happen fast, and they are looking for a seamless shopping experience at all times. That’s how the concept of frictionless stores came to exist. Frictionless stores are one of the biggest new things in consumer shopping.

Photo: Adobe Stock

What Are Frictionless Stores?

The concept of frictionless stores started a few years ago, and when I talk to retailers this is one of the topics that always pops up. All major brands are looking for innovative ways to create a better customer experience, and frictionless stores are one way to make that happen. These stores improve the shopping experience to the point where customers don’t have to wait at any step of shopping, such as selecting, receiving and paying for products. Initially, frictionless stores were confined to easy, low-hassle shopping. But as innovations such as mobile wallets, digital receipts, free and fast shipping, and one-click purchasing emerged and began to reshape the consumer shopping experience, the definition was reshaped as well. Today, a frictionless experience means more than just less hassle: it means greater speed, personalization, and wow experiences.

How Frictionless Stores Work

Let’s try to understand how frictionless stores work. In frictionless stores, buyers and sellers are connected in a way that gives buyers the ability to instantly find, compare and buy the products and services they need, and customers should feel that they are in full control. The concept and technology have evolved over time, and nowadays customers expect to have this experience through their smartphones. Retailers and brands keep finding new ways to refine the definition of frictionless stores and give customers the best possible shopping experience; they need that commitment to stay ahead of the competition. As a result, frictionless shopping nowadays means eliminating anything that negatively impacts the customer experience.

Importance of Frictionless Stores

How has frictionless shopping fared according to research? Alliance Data has done a study and found that customers from all generations are looking for great service and an ideal shopping experience, and this holds in all areas of the world. If a brand fails to deliver what they want, customers will find a different one. According to the research, 76 percent of consumers said they give brands only two to three chances before they stop shopping with them. Another 43 percent said their main reason for leaving a brand is a poor shopping experience. What all this means is that if customers encounter friction, they will abandon that brand fast, probably without giving it a second chance.

Amazon Go Stores

Similar to frictionless stores, Amazon introduced Amazon Go stores. What is special about Amazon Go is that you don’t have to wait at a checkout, which basically means you no longer have to wait in queues. The first Amazon Go store was a grocery store of 1,800 square feet. The concept spread fast; in fact, you can now see a lot of Amazon Go stores in the USA and Europe.


How is this even possible? What technologies have they used? Amazon was doing a lot of research in the areas of computer vision, sensor fusion, and deep learning, and Amazon Go is the fruitful result. You need the Amazon Go application to shop at Amazon Go stores. All you have to do is open your Go app, pick the products you want, and then just leave. The application detects when a product is taken or returned to the shelf and remembers what you took, and you can revisit these details in your virtual cart. When you finish shopping, you are charged and receive a receipt for what you bought.

Buy Awesome Foods at Amazon Go Stores

You may wonder what you can buy there. What items are available in Amazon Go stores? I will just point out how one Amazon Go store marketed their shop: “We offer all the delicious meals for breakfast, lunch or dinner. We have many fresh snack options made every day by our chefs at our local kitchens and bakeries. You can buy a range of grocery items from milk and locally made chocolates to staples like bread and artisan cheeses. Try us, you will find well-known brands you love in our shops.” By the way, don’t expect to go in there and buy books, tech, clothes or anything else that Amazon sells online. It’s basically quick-and-easy food and other groceries. It’s just that there’s no cashier.


So many people have been attracted to Amazon Go stores that it is quite evident this concept will make a huge impact on the future of retail.

If you want to know more about frictionless stores, feel free to contact me at bjorn.nostdahl@gunnebo.com.

Artificial Intelligence (AI), Business Intelligence (BI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Machine Learning and Cognitive Services

Machine learning is gradually becoming the driving force of every business. Business organizations, large or small, are seeking machine learning models to predict present and future demand and to innovate, produce, market, and distribute their products.

Business value encompasses all the forms of value that decide the well-being of a business. It is a much broader term than economic value, covering many other factors such as customer satisfaction, employee satisfaction and social values, and it is the key measurement of the success of a business. AI helps you accelerate this business value in two ways: by enabling correct decisions and by enabling innovation.


Remember the days when Yahoo was the major search engine and Internet Explorer was the major web browser? One of the main reasons for their downfall was their inability to make correct decisions. Wise decisions are made by analyzing data: the more data you analyze, the better the decisions you make. Machine learning greatly supports this cause.

There was a time when customers accepted whatever companies offered them. Things are different now: customers’ demands for new features are ever increasing. Machine learning has been the decisive factor behind almost every new innovation, whether it be face recognition, personal assistants or autonomous vehicles.

Machine Learning in More Detail

Let’s start with what machine learning is. Machine learning enables systems to learn and make decisions without being explicitly programmed. It is applied in a broad range of fields; nowadays, almost every human activity is being automated with the help of machine learning. A particular area of study where machine learning is heavily exploited is data science.

Data science plays with data, and data must be analyzed to make the best decisions for a business.

The amount of data that a business has to work with is enormous today; social media alone produces billions of data points every day. To stay ahead of your competitors, every business must make the best use of this data. That’s where you need machine learning.

Machine learning offers many techniques for making better decisions from large data sets, including neural networks, SVMs, reinforcement learning and many other algorithms.

Among them, neural networks are leading the way. They improve consistently, spawning child technologies such as convolutional and recurrent neural networks that provide better results in different scenarios.


Learning machine learning from scratch and trying to develop your own models is not a wise idea: it incurs huge costs and demands a lot of expertise in the subject. That is why you should consider the assistance of a machine learning vendor. Google, Amazon and Microsoft all provide machine learning services. Let’s take Microsoft as an example and review what qualities we should look for when selecting a vendor.

Using the Cloud as a Solution for Machine Learning

Azure Machine Learning simplifies and accelerates the building, training, and deployment of machine learning models. It provides a set of APIs to interact with when creating models, hiding all the complexity of devising machine learning algorithms. Azure has the capability to identify suitable algorithms and tune hyperparameters faster. Autoscale is a built-in feature of Azure cloud services which automatically scales applications; it allows your application to perform at its best while keeping cost to a minimum. Azure Machine Learning APIs can be used with any major technology, such as C# or Java.

There are many other advantages you gain with cloud machine learning:

  • Flexible pricing: you pay for what you use.
  • High user-friendliness: easier to learn and less restrictive.
  • More accurate predictions based on a wide range of algorithms.
  • Fine-tuning results is easier.
  • The ability to publish your data model as a web service, which is easy to consume.
  • The tool allows data streaming platforms like Azure Event Hubs to consume data from thousands of concurrently connected devices.
  • You can publish experiments for data models in just a few minutes, whereas expert data scientists may take days to do the same.
  • Azure security measures manage the security of Azure Machine Learning, protecting data in the cloud and offering security-health monitoring of the environment.

Using Cognitive Services to power your business applications

We will go on to discuss how Azure Cognitive Services can be used to power up a business application. Azure Cognitive Services are a combination of APIs, SDKs, and services which allow developers to build intelligent applications without having expertise in data science or AI. These applications can have the ability to see, hear, speak, understand or even reason.


Azure Cognitive Services were introduced to extend Microsoft’s existing portfolio of APIs.

New services provided by Azure Cognitive Services include:

  • Computer Vision API, which provides the advanced algorithms needed for image processing
  • Face API, which enables face detection and recognition
  • Emotion API, which recognizes the emotion in a face
  • Speech service, which adds speech functionality to applications
  • Text Analytics, which can be used for natural language processing

Most of these APIs were built targeting business applications. Text Analytics can be used to harvest user feedback, allowing businesses to take the actions necessary to accelerate their value. Speech services allow business organizations to provide better customer service to their clients. All these APIs have a free trial which can be used to evaluate them. You can use these cognitive services to build various types of AI applications that solve complex problems for you, thus accelerating your business value.
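As a small illustration of how one of these APIs can be consumed, here is a hedged C# sketch of calling the Text Analytics sentiment endpoint over plain HTTP (the region, API version and key below are placeholder assumptions; check the current Azure documentation for your own resource):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SentimentDemo
{
    static async Task Main()
    {
        // Placeholder values - substitute your own Cognitive Services region and key.
        var endpoint = "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment";
        var key = "<your-subscription-key>";

        // One document to score; the service accepts a batch of documents.
        var body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\"The support team was fantastic!\"}]}";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
            var response = await client.PostAsync(endpoint,
                new StringContent(body, Encoding.UTF8, "application/json"));

            // The response JSON contains a sentiment score between 0 (negative) and 1 (positive).
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```

The same request shape works from any language with an HTTP client, which is what makes these services easy to drop into an existing business application.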

If you want to talk more about ML and AI, feel free to contact me: bjorn.nostdahl@gunnebo.com 🙂

Gunnebo Business Solutions, Milestone, Technical

Extending Milestone Smart Client with Bing Maps

The utilization of Milestone’s XProtect Smart Client gets more limitless with every version. It supports hardware-accelerated video decoding, which means you can view very high resolution streams about 5 times more efficiently on a low-end CPU with the aid of an external graphics card. Among other things, its magnificent SDK has allowed the Gunnebo team to make a great plugin for the XProtect Smart Client 2018 that now integrates Microsoft’s Bing Maps.

This is possible because XProtect Smart Client is a very powerful, adaptable and easy-to-use client application for the daily operations of security installations. Using the Milestone Integration Platform and its unique application plug-in architecture, various types of security and business applications can be seamlessly integrated into XProtect Smart Client.

Bing Maps has abilities such as:

  • Buildings can be created with a number of levels, easily navigated through a pane that becomes available after selecting a building.
  • Cameras can be added and attached to different levels, so the cameras shown shift with the level selected.
  • You get a complete geographical overview of all the cameras from different sites on your smart map, letting you bring up live feeds and monitor recordings directly from the map.
  • You can jump to cameras or custom overlays seamlessly rather than having to navigate to them manually.

Bing Maps can be easily embedded into the XProtect Smart Client with the aid of the Gunnebo map plugin, which allows for seamless operation and shows all camera locations on the map.


Windows Presentation Foundation (WPF), a graphical subsystem from Microsoft, was used by the Gunnebo team to host the Bing map, with the Bing Maps software development kit providing the basic programming pattern.

The system requirements this program is compatible with include: Windows 7, Windows 8, Windows Server 2008, Windows Vista, Windows 2000 Service Pack, Windows Server 2003, etc. Make sure you verify that your operating system is compatible with this programming reference before downloading or running the application.

For its integration with Milestone’s XProtect, the plugin automatically generates location names from Milestone camera groups/folders and groups cameras according to the Milestone grouping.

The Milestone cameras and their location entity (parent folder) can be retrieved through the MIP SDK with the following call:

var items = Configuration.Instance.GetItemsByKind(Kind.Camera);

where

  • if FQID.Kind == Kind.Folder, the item is a location (parent folder)
  • if FQID.Kind == Kind.Camera, the item is a camera
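Putting those two cases together, here is a minimal sketch of how the returned items could be walked to group cameras under their parent folder (the GetChildren() traversal and item shapes are assumptions based on the MIP SDK item model, not code taken from the plugin):

```csharp
using System.Collections.Generic;
using System.Linq;
using VideoOS.Platform; // MIP SDK: Configuration, Item, Kind

// Sketch: build a location (parent folder) -> cameras mapping.
var camerasByLocation = new Dictionary<string, List<Item>>();

foreach (Item item in Configuration.Instance.GetItemsByKind(Kind.Camera))
{
    if (item.FQID.Kind == Kind.Folder)
    {
        // A folder item represents a location; its children hold the cameras.
        camerasByLocation[item.Name] = item.GetChildren()
            .Where(child => child.FQID.Kind == Kind.Camera)
            .ToList();
    }
}
```

A mapping like this is all the plugin needs in order to pin each location on the map and list the cameras belonging to it.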

The SmartClient administrator can drag and drop each location onto the map or specify the location’s address and comments.

You can also search across views for the various cameras and view item types. For example, you may want to see all views containing PTZ cameras, cameras from a particular manufacturer, or views that contain these view item types:

  • Map
  • Alarm
  • Matrix
  • HTML
  • Name of camera in view
  • Add on products

With this, you can search using the available keywords.

Location on the map

You can also create locations at the points of the map that interest you; for example, a location for the home office or a satellite office. Besides giving you a full picture of your environment, locations can also be used to navigate the map.

However, be aware that an XProtect Smart Client location can only be added depending on your configuration. With this, it becomes very easy to go to the general overview of the map when you are zoomed out.

Location data is stored on a central server in the XProtect configuration. The administrator needs to set it up only once, and it is then shared between all SmartClients. Likewise, once the administrator edits it in any SmartClient instance, the change is shared among all of them.

public static Guid DefaultLocationGuid = new Guid("AA2BB85A-B965-448f-BBA9-CC4DCE129411");

public static void SaveLocationCoordinates(this IList<MapLocationConfigItem> locations)
{
    // Project each location into a serializable config item.
    var coordinatesConfig = locations.Select(item => new MapLocationConfigItem
    {
        Latitude = item.Latitude,
        Longitude = item.Longitude,
        Name = item.Name,
        Address = item.Address
    }).ToArray();

    // Persist the configuration on the XProtect central server.
    var node = coordinatesConfig.ToXmlNode();
    Configuration.Instance.SaveOptionsConfiguration(DefaultLocationGuid, false, node);
}

To retrieve the stored coordinates from the XProtect central server, the following code can be used:

var coordinatesConfigXmlNode = Configuration.Instance.GetOptionsConfiguration(DefaultLocationGuid, false);
var coordinatesConfig = coordinatesConfigXmlNode.ToInstance<MapLocationConfigItem[]>();

The key element here is DefaultLocationGuid, which contains the GUID of our custom configuration entry.

Camera Navigator

The camera navigator is a feature that lets you view all the cameras in relation to one another, as they are laid out on a floor plan or map. With the camera navigator, you can move from one camera to another in a single view.


The plugin checks the status of each camera and marks every location on the map with a colored icon:

  • green if all cameras in that particular location are online,
  • yellow if fewer than half of the cameras are offline,
  • red if more than half of the cameras are offline
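The coloring rule above boils down to a small pure function; here is a sketch of it (the behavior at exactly half offline is our own assumption, since the rule leaves it unspecified):

```csharp
// Maps a location's camera counts to a marker color for the map icon.
static string LocationColor(int totalCameras, int offlineCameras)
{
    if (offlineCameras == 0)
        return "green";  // all cameras online
    if (offlineCameras * 2 < totalCameras)
        return "yellow"; // fewer than half offline
    return "red";        // half or more offline
}
```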

To get the camera statuses, we use the MessageCommunication mechanism provided by Milestone and implemented in the MIP SDK.


Here we initialize the MessageCommunication API and register a callback (ProvideCurrentStateResponseHandler) which is called once we get data with camera statuses:

MessageCommunicationManager.Start(EnvironmentManager.Instance.MasterSite.ServerId);
messageCommunication = MessageCommunicationManager.Get(EnvironmentManager.Instance.MasterSite.ServerId);
communicationObject = messageCommunication.RegisterCommunicationFilter(ProvideCurrentStateResponseHandler,
                new CommunicationIdFilter(MessageCommunication.ProvideCurrentStateResponse));

To get the callback invoked and receive the status (online/offline) of each particular camera, we perform this call:

messageCommunication.TransmitMessage(
               new Message(MessageCommunication.ProvideCurrentStateRequest,
                    cameras.Select(camera => camera.CameraId.ToString()).ToArray()), null, null, null);

Below is an example implementation of the ProvideCurrentStateResponseHandler callback:

private object ProvideCurrentStateResponseHandler(Message message, FQID dest, FQID source)
{
    Collection<ItemState> result = message.Data as Collection<ItemState>;
    if (result != null)
    {
        foreach (ItemState itemState in result)
        {
            // itemState.FQID.ObjectId - camera id
            // itemState.State - camera state (online/offline)
        }
    }
    return null;
}

The plugin also supports a “Camera Monitoring” mode where all cameras are displayed regardless of location. Each camera has a status indication there and displays how long it has been offline.


If you are interested in knowing more about the Milestone SDK or the plugins we can offer, feel free to contact me at bjorn.nostdahl@gunnebo.com 🙂

Gunnebo Business Solutions, IBM International Business Machines, Mender, Node RED, Technical

Mender IoT Device Management

As humanity progresses, innovation is moving towards the digital. The vast majority of human data is stored using digital methods, involving computers, cloud computing and the Internet of Things (IoT), one of the latest technological disruptions. These technologies connect devices through digital channels, which are used to transfer data back and forth. At the same time, the digital world is in constant need of updates. These updates are essential to cope with the increase in data and overall customer requirements.


Why Are Software Updates Essential?

  • Bugs: One of the main problems in computing is the number of bugs that arise due to weak development practices or high volumes of data that were not accounted for. Updating your software acts as a means to fix and overstep such bugs.
  • Security: Unfortunately, cyber security is a huge issue in this era. With many threats rising in the field, updates are released with better security settings in hopes of reducing and eliminating threats.
  • Features: The most common reason for software updates is to release new competitive features that cope with customers’ requirements.

However, with the high number of devices invading our planet, it is impossible to provide these software updates through physical means. This is why over-the-air (OTA) methods are the most efficient way to deliver software updates; in some cases, where physical means can’t be used, OTA is the only available method. Updating software over the air is a complex process in which the data is delivered over networks and digital channels to reach your device. It is a delicate process where you have to ensure proper connectivity and power to avoid any errors.

Mender: Your New Solution

With the intensity of such transactions, you should always look for the best service out there to implement the process as efficiently as possible. Trying to build your own infrastructure to achieve efficient OTA updates can be a real hassle: the amount of time and work spent on the process is more than you can handle, and so is the cost.


This is why companies should look at the different software update solution providers out there. Here is where Mender kicks in: it is an end-to-end open source software update solution for connected devices and IoT. You can consider Mender a ready-made infrastructure that will solve all your software update issues.

Why Do We Use Mender?

No vendor lock-in: One thing to look forward to while using Mender is the fact that we won’t face any vendor lock-in. Mender is open source, licensed under Apache 2.0. This gives customers complete freedom to use it without interference from vendors or other third parties. With Mender, you no longer need to worry about getting locked in.


Reduction in customer support issues: Mender focuses on making its customers’ experience as smooth as possible. This is achieved through strong security protocols during the update process, which is made as efficient and optimal as possible, compensating for any pitfalls in the connection. Mender uses image-based updates, which act as a safety net when connectivity problems arise. It ensures full device connectivity at all times, leading to fewer system failures and device recalls.

Features and Functionality

The developers of Mender are very aware of common software update issues and the hassle customers go through. This has helped them equip Mender with more features than other software update solution providers, which surely helps make the process simpler and more effective for users. The following is a list of some of the features you can enjoy while using Mender:

  • Intuitive UI
  • Deployment Reports
  • Custom checks via scripting support
  • Code signing

Anticipated Progress and Updates

With Mender, there is still much more to look forward to. Gunnebo, a multinational business specialized in security services, is of course interested in contributing to Mender, possibly helping to implement the features that we and other companies like us need.

Our first project will be updating Node-RED flows from the Mender v.2 update module. If you are interested in contributing or want to know more – feel free to contact me at: bjorn.nostdahl@gunnebo.com

Cosmos DB, Gunnebo Business Solutions, Microservices, Microsoft Azure, Mongo DB, Technical

Microsoft LEAP: Designing for the Cloud

Microsoft LEAP is an event for developers worldwide who are looking for original training from Microsoft. It takes place annually at Microsoft headquarters in Redmond, WA. The five-day conference helps attendees fully understand how Microsoft products can be used and how they can solve companies’ problems. This time, the participants learned how to design for the cloud in an up-to-date fashion.

 


The following piece will give you a glimpse of the Microsoft LEAP program. The sections below are the highlights with the greatest impact on the developer community.

Deep Dive into Cloud Computing: Azure Strategy

On January 28, Microsoft kicked off the LEAP program for software architects and engineers. There were loads of speakers on the agenda, and among them Scott Guthrie was one of the strongest. Scott is in charge of Microsoft’s cloud infrastructure, servers, CRM and many more tools, and he led the team that created Microsoft Azure. In his keynote, “Designed for Developers”, he discussed cloud computing technology. His aim was to help developers with different levels of skill reach one goal: sustainable development and use of cloud computing.


Scott focused on how to develop cloud solutions and maintain them. The session concluded with a presentation of Microsoft’s anticipated plan to provide quantum computing in their Azure technology.

The Strong Impact of Microservice Architecture

On this topic, the most memorable session was the one featuring Francis Cheung and Masashi Narumoto. They talked about microservices and the strong architecture behind them, which is considered a paragon in the world of cloud computing as it has raised the bar.


The speakers mentioned several important features of a strong company that has the potential to succeed, and it was well established that the success of a microservice implementation depends mostly on a well-developed team with a strong strategy (preferably domain-driven).

 

No matter how beneficial microservices can be, they are not necessarily the right choice for your business. You need to be well aware of your products and the level of complexity your business needs; having extra, unrequired tools will set you back rather than take you anywhere.

SQL Hyperscale as a Cloud-Based Data Solution

This session was different, as it celebrated two decades of PASS and 25 years of SQL technology in use. The speaker, Raghu Ramakrishnan, has been Microsoft’s CTO since he moved from Yahoo in 2012. With his strong background and experience, Raghu was the best candidate to discuss the use of SQL Hyperscale and how groundbreaking this technology has been.


The Hyperscale service has become a crucial addition to the existing services. According to Ramakrishnan, this is the most modern SQL service tier, offering the highest storage capacity together with the best computing performance; this model supports databases of up to 100 TB.

 

This technology is generally used to replace cloud database structures, as it is more reliable and accessible than the alternatives. Microsoft has added many features to SQL Hyperscale, making it a leading database solution in the market. With the amazing features discussed in the talk, it was well worth a separate session.

The Commercial Database: Cosmos Database

Deborah Chen, the Cosmos DB program manager at Microsoft, took the time to discuss one of the most popular commercial databases out there. Most current implementations use non-relational databases, and Cosmos DB is one of the most widely used of them.


As Deborah mentioned, Cosmos DB is a very versatile and responsive tool. With numerous transactions taking place every second, response time (especially for real-time applications) is a very sensitive matter. Since Cosmos DB is a non-relational database, retrieving and storing data is easier and faster. This is where Cosmos DB stands out, as it was intentionally created with an architecture aimed at handling such tasks.

 

She also discussed the use of Service Level Agreements (SLAs). These agreements provide guarantees on availability and latency for all users, making Cosmos DB one of the most popular products out there.

Monitoring Your Procedures Using Azure Monitoring

Rahul Bagaria, a product manager for Azure Monitor, joined later on to talk about the importance of monitoring your work, flow, and operations. The monitoring process is not limited to single tasks; it covers connections, workflow, and the final output. Monitoring all the steps of a procedure is important for maintaining efficient delivery and quality assurance as a whole. It is also helpful for picking out errors and problems in the cycle, should they arise.


This is where Azure monitoring kicks in, with many strong capabilities like Log Analytics and Application Insights. Rahul emphasized the importance of this tool and all the features it provides. His team has worked hard to provide a service that can help with multiple tasks, milestones, and services. This session helped the developers learn why and how to monitor their work processes.

 

All in all, the first day at Microsoft LEAP 2019 was very on-topic and interesting. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Business Intelligence (BI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Microsoft LEAP: Looking into the future

Cloud computing has become one of the most profitable industries in the world, and cloud will remain a very hot topic for the foreseeable future. There is huge competition among cloud service providers to win customers by providing the best services. Cloud service providers invest a lot of money in innovation, and thus cloud services set most of the trends in the future IT industry. Microsoft Azure and Amazon AWS are among the leaders in innovation in this field.

Data centers around the world

As the demand for cloud services is rapidly increasing in all parts of the world, establishing data centers around the globe becomes a necessity. Azure has understood this well and is expanding its service by constructing data center regions in many parts of the world.

Microsoft-navalgroup_Brest
From news.microsoft.com article about Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. Photo by Frank Betermin

The world is divided into geographies defined by geopolitical boundaries or country borders. These geographies define the data residency boundaries for customer data. Azure geographies respect the requirements within geographical boundaries, ensuring data residency, compliance, sovereignty, and resiliency. Azure regions are organized into geographies, and a region is defined by a bandwidth and latency envelope. Azure has the greatest number of global regions among cloud providers. This is a great benefit for businesses that seek to bring their applications closer to users around the world while protecting data residency.

The Two Major Azure’s Global Expansion of Cloud Services

Two of the most important expansions that Microsoft Azure has incorporated to improve its services are the following:

Expansion of Virtual Networks and Virtual Machines Support.

With utility virtual machines like A8 and A9, which provide advantages such as faster processors and interconnects between more virtual cores, virtual networks can now be seamlessly configured for specific geographical locations and regions.

This feature gives more room for optimal operations, cloud services, complex engineering design, video encoding, and a lot more.

Incorporation of Azure Mobile Services, and its Expansion to Offline Features

This makes it possible for applications to keep operating effectively even when disconnected from the service. Furthermore, it extends Azure cloud services to apps on various platforms, including Android and iOS mobile phones.

Then there are Availability Zones, the third level in the Azure network hierarchy.

Availability zones are physically separated locations inside regions, made up of one or more data centers. Constructing availability zones is not easy: they are not just data centers, they also need advanced networking, independent power, cooling, etc. The primary purpose of availability zones is to help customers run mission-critical applications.

Azure availability zones give you the following benefits:

  • Better protection for your data – you won't lose your data due to the destruction of a single data center
  • High availability, better performance, and more resources for business continuity
  • A 99.99% SLA on virtual machines deployed across two or more availability zones

Open source technology

It took Microsoft some time to understand the value of open source technologies, but now they are doing really well. With .NET Core and .NET Standard, Microsoft has made a major commitment to open source. Looking at GitHub alone, Microsoft is one of the largest contributors to open source.

Redmond, Washington USA - 4th June 2018 Microsoft confirms its acquiring GitHub
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft.

With .NET Core 3.0, Microsoft introduced many features that enable developers to create secure, fast, and productive web and cloud applications. .NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). ASP.NET Core 3 enables client-side development with Razor Components, EF Core 3 has support for Azure Cosmos DB, and the release also includes support for C# 8 and .NET Standard 2.1, and much more.

Mixed reality and AI perceptions

Mixed reality tries to reduce the gap between our imagination and reality, and combined with AI it is about to change the way we see the world. It seems set to become a primary source of entertainment: although mixed reality got popular in the gaming industry, you can now see its applications in other industries as well. The global mixed reality market is booming. That's why the biggest names in tech are battling it out to capture the MR market, and all major tech players have introduced MR devices such as the Meta 2 headset, Google Glass 2.0, and Microsoft HoloLens.

Mixed reality and AI perception are the result of the cooperation of many advanced technologies. This technology stack includes natural language interaction, object recognition, real-world perception, real-world visualization, contextual data access, cross-device collaboration, and cloud streaming.

Factory Chief Engineer Wearing VR Headset Designs Engine Turbine on the Holographic Projection Table. Futuristic Design of Virtual Mixed Reality Application

As I said earlier, although the gaming industry was the first to adopt mixed reality, MR applications are now widely used in other industries too. Let's visit some of these industries and see how mixed reality has transformed them and what benefits they get from mixed reality and AI perception.

You can see tech giants such as SAAB, NETSCAPE, and DataMesh using mixed reality in the manufacturing industry. According to research, mixed reality helps increase worker productivity by 84%, improve collaboration among cross-functional teams by 80%, and improve customer service interaction by 80%. You may wonder how mixed reality achieves this and what it offers the manufacturing industry. There are many applications of mixed reality in manufacturing; the following is a small list of them.

  • Enhanced Predictive Maintenance
  • Onsite Contextual Data Visualization
  • Intuitive IOT Digital Twin Monitoring
  • Remote collaboration and assistance
  • Accelerated 3D modeling and product design
  • Responsive Simulation training

Retail, Healthcare, Engineering, Architecture are some other industries that use mixed reality heavily.

Quantum revolution

Quantum computing could be the biggest thing in the future. It is a giant leap forward from today's technology, and it has the potential to alter our industrial, academic, societal, and economic landscapes forever. You will see massive implications in nearly every industry, including energy, healthcare, smart materials, and environmental systems. Microsoft is taking a unique, revolutionary approach to quantum with its Quantum Development Kit.

QPR18_Copenhagen_57022000x1108
Picture from cloudblogs.microsoft.com article about the potential of quantum computing

Microsoft can be considered one of the few who took quantum computing seriously in the commercial world. They have a quantum dream team formed by some of the greatest minds in physics, mathematics, computer science, and engineering to provide cutting-edge quantum innovation. Their quantum solution integrates seamlessly with Azure. They have taken a scalable, topological approach towards quantum computing, which helps to harness superior qubits. These superior qubits can perform complex computations with high accuracy at a lower cost.

There are three important features in the Quantum Development Kit that make it the go-to quantum computing solution.

It introduces its own language, Q#, created specifically for quantum programming. It has general programming features such as operators, native types, and other abstractions. Q# integrates easily with Visual Studio and VS Code, which makes it feature rich, and it is interoperable with the Python programming language. With the support of enterprise-grade tools, you can easily work on any OS: Windows, macOS, or Linux.

The Quantum Development Kit provides a simulated environment that greatly supports optimizing your code. This sets it apart from other quantum computing platforms, which still exist at a rather crude level. The simulation environment also helps you debug your code, set breakpoints, estimate costs, and many other things.
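To give a feel for what such a simulator does under the hood (the real tooling is Q# and its full state simulator; this is only a plain-Python sketch), a single qubit is a two-entry state vector and a gate is a matrix applied to it:

```python
import math

# Sketch of single-qubit simulation: states are 2-entry vectors, gates are
# 2x2 matrices. A real simulator generalizes this to 2^n entries for n qubits.

def apply_gate(gate, state):
    return [sum(gate[r][c] * state[c] for c in range(2)) for r in range(2)]

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

zero = [1.0, 0.0]                     # the |0> state
superposed = apply_gate(H, zero)

# Measurement probabilities are the squared amplitudes: 50/50 after Hadamard.
probs = [a * a for a in superposed]
print(probs)
```

The exponential growth of the state vector with the number of qubits is exactly why simulators hit a wall and real quantum hardware becomes interesting.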

As we discussed earlier, Microsoft has become a main contributor to the open source world. They provide an open source license for the libraries and samples, and they have worked hard to make quantum computing easier. A lot of training material is available to attract developers into the quantum programming realm. The open source license is a great encouragement for developers to use the Quantum Development Kit in their applications while contributing to the Q# community.

Cloud services will shape the future of the IT industry, and quantum computing, open source technologies, and mixed reality will all play a great role in it.

This is my last day in Redmond, but I really look forward to coming again next year! If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Gunnebo Business Solutions, Machine Learning (ML), Microsoft Azure

Microsoft LEAP: Adding Business Value and Intelligence

Adding Business Value and Intelligence

The concept of business value and intelligence is aimed at more productive measures through the utilization of various tech applications and analytical tools for the assessment of raw data. Business intelligence makes use of activities like data mining, analytical processing, querying, and reporting. Companies take advantage of it to improve their operations as well as accelerate their decision making. Business intelligence is also useful for reducing costs and expenses and for identifying new business opportunities.

Machine learning technologies. Millennial students teaching a robot to analyse data

A lot of experts shared their ideas and spoke on various aspects of business value and intelligence relating to AI in Redmond. Notable speakers included Jennifer Marsman, Maxim Lukiyanov, Martin Wahl, and Noelle LaCharite. They spoke extensively on machine learning fundamentals, an introduction to the new Azure Machine Learning service, using Cognitive Services to power your business applications, and how to solve business problems using AI, respectively.

Machine Learning Fundamentals

The fundamentals of machine learning have to do with understanding both the theoretical and the programming aspects. It is also important to be up to date with the latest algorithms and technology implemented by the various programming tools for machine learning. The simplest explanation of the term machine learning is training a machine in such a way that it learns to perform various tasks.

20190131_081910

Algorithms can learn how to perform these tasks in various ways, and this brings us to the different types of machine learning. They include supervised learning, which is carried out on labeled data to enable the machine to identify and differentiate between various inputs. Unsupervised learning, on the other hand, does not rely on labels or a predefined structure that the machine is supposed to reproduce; the machine finds patterns on its own. A third type of machine learning is reinforcement learning, where the machine learns from rewards through trial and error.
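A tiny sketch makes the supervised case concrete. The toy classifier below does nothing more than remember labeled examples and copy the label of the closest one, but it shows the defining trait of supervised learning: training on labeled data.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier in pure Python.
# "Training" is just storing labeled examples; prediction copies the label
# of the closest stored example.

def predict(train, point):
    # find the labeled example closest (squared Euclidean distance) to the query
    nearest = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)))
    return nearest[1]

# labeled training data: (features, label)
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.0, 8.5), "large")]

print(predict(train, (1.1, 0.9)))  # small
print(predict(train, (8.5, 9.0)))  # large
```

An unsupervised algorithm would receive only the feature tuples, without the "small"/"large" labels, and would have to discover the two clusters by itself.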

The importance of a machine learning model's accuracy cannot be overstated. Accuracy is what really determines how effective a model can be for the operations of a company. Models are estimated or measured mainly by making predictions and putting them to work in the real world. In the business world, a model cannot be accepted until it has been tested against the real world and the results are satisfactory. How a model is measured depends on the characteristics of that particular model and the circumstances in which the model is needed in the real world.

Two vital kinds of neural networks in machine learning are CNNs and RNNs. CNN stands for convolutional neural network, while RNN stands for recurrent neural network. CNNs exploit the spatial structure of their input and require only minimal preprocessing, while RNNs operate on inputs and outputs of variable length and can process arbitrary sequences. In basic terms, CNNs are built to recognize images, while RNNs recognize sequences.
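The core operation of a CNN can be shown in a few lines. This illustrative sketch slides a small kernel over a 1-D signal; the kernel responds exactly where its pattern (here, an edge) occurs, which is the mechanism CNNs scale up to images.

```python
# Minimal 1-D convolution, the building block of a CNN: a small kernel
# slides over the input and produces a strong response where its pattern is.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [-1, 1] kernel is a simple edge detector: it fires at the 0 -> 1 jump.
signal = [0, 0, 0, 1, 1, 1]
kernel = [-1, 1]
print(conv1d(signal, kernel))  # [0, 0, 1, 0, 0]
```

A real CNN learns the kernel values from data instead of hand-picking them, and stacks many such layers in two dimensions.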

Presentation about machine learning technology, scientist touching screen, artificial intelligence-1

Furthermore, Jennifer Marsman described various methods related to artificial intelligence, including the following.

  • Search and Optimization

Search and optimization help AI systems find good solutions: an algorithm explores a space of candidates and ranks them against an objective. Explaining the role of search and optimization in AI can get very technical, but machines are taught to use these techniques to rank candidate solutions.

  • Logic

Logic also plays a major role in AI. Logic can be applied as an analytical tool, as a knowledge representation formalism, and as a method of reasoning. Logic can also be used as a programming language paradigm. With this, it can explore both the prospects and the problems of the success of AI.

  • Probabilistic Methods for Uncertain Reasoning

Probability is one of the most widely used methods for representing uncertainty in AI. Over the years, certainty factors and alternative numerical schemes have also been utilized for quantifying uncertainty.

  • Classifier and Statistical Learning Methods

Classifiers associated with AI include Naive Bayes, decision trees, and the perceptron, amidst a host of others. There are also various statistical learning methods and theories that are used to evaluate the uncertainties of AI. However, there are limitations to these statistical models, and this is where logic comes in.

  • Artificial Neural Networks

This covers the impact of the earlier mentioned RNNs and CNNs on the concept of AI. A typical example of an ANN is a natural language processing system which can be used in the interpretation of human speech.

  • Evaluating Progress in AI

This is imperative for estimating the progress of AI across all sectors, including business models. Three evaluation types are human discrimination, peer confrontation, and problem benchmarks.

An Introduction to New Azure Machine Learning Service

Maxim Lukiyanov spoke about the working principles of the new Azure Machine Learning service. The service helps to simplify and accelerate the building, training, and deployment of various machine learning models. Furthermore, automated machine learning can be utilized in such a way that the algorithms that are needed are easily identified and the hyperparameters are tuned faster.

The new Azure Machine Learning service also helps to improve productivity and reduce costs with auto-scaling compute for the machine learning procedure. It additionally has the advantage of storing data easily in the cloud. Using the latest tooling is also a seamless operation, with support for open source frameworks like PyTorch, TensorFlow, and scikit-learn.
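Conceptually, the automated tuning works like the sketch below. This is not the Azure ML SDK: the scoring function is a stand-in for a full train-and-validate cycle, and the "best degree" of 3 is a made-up assumption for illustration.

```python
# Conceptual sketch of automated hyperparameter tuning: try candidates,
# score each one, keep the best. Real automated ML also searches over
# algorithms and featurization, not just one hyperparameter.

def train_and_score(degree, data):
    # Hypothetical scoring: pretend models of complexity 3 validate best.
    # In the real service this would be a full train/validate run.
    best_degree = 3
    return 1.0 / (1 + abs(degree - best_degree))

def auto_tune(candidates, data):
    # pick the candidate whose model scores highest on validation
    return max(candidates, key=lambda d: train_and_score(d, data))

print(auto_tune([1, 2, 3, 4, 5], data=None))  # 3
```

The value of the managed service is that each `train_and_score` call is expensive in practice, so it parallelizes the runs on auto-scaling compute and tracks every experiment for you.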

Maxim also spoke further on some benefits of the New Azure Machine Learning:

  • Easy and flexible pricing: you pay for only the features that you use.
  • The machine learning service is very easy to understand, and the tools that come with it are not in any way restrictive.
  • With the various data and algorithms of the tool, there will be more accurate predictions.
  • The tools from the service make it very easy to import data, as well as fine-tune the results.
  • A lot of other devices can be connected easily to the platform with the aid of the tools.
  • Data models can be easily published as a web service.
  • Experiments can be published in a matter of minutes. This is a very major upgrade compared to the days it can take expert data scientists.
  • There is adequate security from the Azure security measures, which is very useful for storing data in the cloud.

Using Cognitive Services to Power your Business Applications: An Overview and Look at Different AI Use Cases

Martin Wahl explained that with Azure Cognitive Services, customers benefit from AI through their own developers. They will not even need the services of a data scientist, which is a major advantage in saving both time and cost. This is done by packaging up the machine learning models, pipelines, and infrastructure as Cognitive Services for important activities such as vision, speech, search, text processing, language understanding, and many more operations. This means that anyone who is capable of writing a program at all can make use of machine learning to improve their application.
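Calling a cognitive service is typically just an authenticated REST request. The sketch below builds (but does not send) a language detection request in the style of the Text Analytics API; treat the endpoint path and payload shape as illustrative rather than authoritative.

```python
import json

# Builds a Cognitive Services-style REST request without sending it.
# Endpoint path and payload shape follow the Text Analytics language
# detection API, but treat them as illustrative.

def build_request(endpoint, api_key, documents):
    url = endpoint.rstrip("/") + "/text/analytics/v3.0/languages"
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,  # standard Azure API key header
        "Content-Type": "application/json",
    }
    body = {"documents": [{"id": str(i), "text": t}
                          for i, t in enumerate(documents, 1)]}
    return url, headers, json.dumps(body)

url, headers, body = build_request("https://example.cognitiveservices.azure.com",
                                   "<your-key>", ["Hello world", "Hej världen"])
print(url)
```

The point Martin made holds here: there is no model training anywhere in this code; the intelligence lives behind the endpoint, and the developer only shapes JSON.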

20190131_110410.jpg

Customers who have adopted this service are already benefiting from cognitive services such as the Face container, the text container, Custom Vision support for logo detection, language detection, in-depth analysis, and many more.

Martin Wahl finally explained that with the Azure service, more value is added to the business, and the implementation of artificial intelligence is easier than ever.

How to Solve Complex Business Problems Using AI Without Needing a Data Scientist or Machine Learning Expert

With the possession of basic skills like Python coding, data visualization, the Hadoop platform, Apache Spark, etc., complex business problems can be solved even without being a machine learning expert or a data scientist. All of this is made possible through the help of AI, and all that is needed is dedication and willingness. Some procedures to go about this include:

  • Understanding the basics: acquiring general knowledge of the fundamentals, both theoretical and practical.
  • Learning statistics: statistics is core to solving business problems; some of the aspects to look at include sampling, data structures, variables, correlation, regression, etc.
  • Learning Python
  • Attempting an exploratory data analysis project
  • Creating learning models
  • Understanding the technologies related to big data
  • Exploring deeper models
  • Completing a complex business problem
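The "learning statistics" step quickly pays off in practice. For example, correlation, one of the concepts listed above, can be computed in plain Python; the sample figures below are made up for illustration.

```python
# Pearson correlation coefficient in plain Python: measures how strongly
# two variables move together, from -1 (opposite) through 0 (none) to +1.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Illustrative numbers: ad spend vs. sales moving almost in lockstep.
ad_spend = [10, 20, 30, 40, 50]
sales = [12, 24, 33, 45, 54]
print(round(pearson(ad_spend, sales), 3))  # strongly positive, close to 1
```

A high correlation like this is where an exploratory data analysis project would start asking the business questions: is the relationship causal, and does it hold on new data?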

Finally, Noelle LaCharite gave a vivid explanation of how a PoC is made, and with the aid of Azure AI I built one myself in Delphi in 30 minutes.

DevOps, Gunnebo Business Solutions, Microservices, Operations, Technical

Microsoft LEAP: Accelerating Business Value

This is my third article from Microsoft LEAP, and today's focus is the use of microservices and Kubernetes.

Containers Are Crucial for Microservices

A very important topic discussed throughout the agenda of the conference was the use of microservices and how essential they are for most applications in the business sector. Approaching this topic from different angles, Brendan Burns, one of the Kubernetes co-founders, gave a session focused on the use of containers for microservices. He focused on his product, Kubernetes, one of the best and most recommended open-source systems for managing containers with the use of policies. Microservices are important due to their agility and their sophisticated architecture, which helps in delivering digital offerings faster.

Conceptual business illustration with the words microservices-1

However, today's microservices often run directly on physical servers, which leads to many problems. This is why the use of containers is a breakthrough: it gives the user a lightweight runtime environment. Containers can also be used on physical or virtual servers, which is a huge development compared to older technologies.

The use of containers also helps in providing better isolation while running many workloads on only one operating system. Such an opportunity will aid developers in minimizing the use of many different VMs. Brendan also discussed the use of domain-driven development versus test-driven development, in terms of which is more relatable for businesses and how to pick the right method. Overall, the final conclusion was to reflect on the scaling levels that could be reached by using Kubernetes as a service to provide containers while using microservices for your business.
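For a flavour of what this looks like in practice, a minimal Kubernetes Deployment manifest asks the cluster to keep several replicas of a containerized microservice running; all names and the image below are placeholders, not from the session.

```yaml
# Minimal Kubernetes Deployment: three replicas of a containerized
# microservice. The service name and image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.azurecr.io/orders:1.0
          ports:
            - containerPort: 8080
```

Scaling the service then becomes a one-line change to `replicas`, which is exactly the kind of operational agility the session highlighted.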

The Use of Service Fabric Mesh

One popular session in the program was by Mark Fussell and Vaclav Turecek. This talk introduced the anticipated future product called Service Fabric Mesh, with a full comparison with the currently used cloud services. Many different points were discussed to describe Service Fabric fully, but the audience got most excited when they heard about the different benefits of using this new service.

1Azure20Service20Fabric20Mesh-1532006671541.png

Mark spoke about the time taken to create instances of VMs and the hassle in the whole process. This is where Service Fabric shines, as it creates the VMs only once, allowing them to be used throughout the platform; more packages can be added to the cluster later without much time cost. The second point, tackled by Vaclav, was the high-density hosting opportunities with Service Fabric. This explains why the cost is lower for Service Fabric: applications are not tied to particular VMs, leaving space to connect more than one application to a single VM.

Last but not least, they both discussed the flexibility of Service Fabric Mesh to be used with different servers or environments, regardless of the currently existing infrastructure. They added that Service Fabric helps in controlling the machine lifecycle. Developers came away better educated on the differences between the cloud technologies and on whether to migrate or not.

The Touch Point: ACI and AKS

When it comes to the use of Azure Container Instances (ACI), Justin Luk, the product manager for Azure Kubernetes, was the best pick for such content. Developers were glad to know that containers managed by AKS can be run on their ACIs. The containers can be spun up quickly when needed without any preparation, saving time and effort, and instances are easily deleted directly after the needed work is done. AKS is used in these on-demand moments to monitor the work and control the creation and deletion process. This can help developers provide new servers instantly when needed without any hassle: when a certain problem or demand arises, AKS is used to reach the needed output without any extra services or products.

An Environment of AKS: Best Practices

Another session that stood out from all the Kubernetes sessions was the one conducted by Saurya Das, another product manager in Azure. This session reflected success stories from developers who have used AKS in their platforms. Developers were happy to learn about multi-tenancy using cluster isolation, and about the different network designs that could be used with the new service. These networks can also be secured using policies, which make development easier and more secure. Overall, everyone in the session was satisfied to learn about the scaling opportunities to expect and the strong monitoring and management controls it possesses.

A Wider Image of Multi-Tenancy with AKS

On the other hand, Ralph Squillace gave a wider image and a better understanding of multi-tenancy and its use with AKS. He discussed how it is commonly, and mistakenly, implemented through the AKS product itself, whereas it is actually recommended to handle it directly in the application. Ralph emphasized these points by relating them to some best practices, mainly from SaaS products. He gave a few tips and tricks on how your service should look in terms of security, design, policies, and much more in order to integrate and handle multi-tenancy directly and easily through the application.

dashboard

Kubernetes: Guide for its Tools

The end of this section on containers was devoted to introducing the different operating tools that will assist developers using Kubernetes services. Bridget Kromhout introduced the developers to tools such as Terraform, Helm, Draft, Brigade, Kashti, and many others. These different tools were discussed thoroughly, covering how to use them for configuration and app development. They are also helpful for scripting event-driven operations and for managing the app fully. Developers were happy to learn how to efficiently use Kubernetes and containers with their currently existing architectures and structures.

All in all, a very on-topic and interesting day at Microsoft LEAP 2019. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@nostdahl.com

Gunnebo Business Solutions, Technical

Using Lottie to Enliven your App

What is the main feature of a good app? It enhances customers' lives through a set of well-conceived steps in user experience (UX) design. Proper UX speeds up interactions and keeps activities simple and orderly. On the surface, the easiest way to arm users with a clear vision of the product's functionality is to give comprehensive guidelines. But the more complex the tasks the app performs, the more time you spend on learning how to use it properly. Complex manuals create tension and distract users.

Smartphone - User Manual

That's why, when creating the Gunnebo Security Solution app, we paid special attention to the development of user-friendly instructions for our clients. The Gunnebo app is aimed at remote management of our security products. Since it has many functions, including alarm control, cash operations, data analysis, and device coordination, it takes some time for users to study them all. How can one make this boring and complicated task fun? We set our sights on animations as a nice way to entertain, attract attention, and make instructions illustrative.

Our next concern was the practical implementation of this decision. Everybody who has ever dealt with animations knows that creating them may take a lot of time and effort; even a small and seemingly simple animation may hide long lines of code behind it. So, we decided to try Lottie, a relatively new animation library created by Airbnb. And it turned out to be the right choice.

Lottie is an excellent library for rendering Adobe After Effects animations on Android, iOS, macOS, tvOS, and UWP. It uses animation data exported as JSON files from the Bodymovin extension and renders the animations in real time. So, engineers don't need to re-create animations by hand and can work directly with animations as they are created by designers. Another good thing is that the size of animations stays small no matter how complex they are.

Lottie supports numerous After Effects features, like solids, masks, shape layers, etc. And it allows various manipulations of an animation (resize, loop, reverse, scrub, change colors, and more). You can play just a fragment of an animation or loop it if you need, and do lots of other things.

For the Gunnebo app, we have developed a set of animations which familiarize users with the app's interface and functions. These include dashboard use, calling attention, data processing, etc.

lottie

Animations created with Lottie have a lot of perks. Slides show up only if the user hasn't seen them before. Thus, we don't nag users with directions; they are only shown on the first use of a specific function.

The slides load from a solution folder. That is, the person who adds or edits slides doesn't need to be a developer and doesn't have to edit the code. Files are added to Git in Azure DevOps, and the folder structure and folder names in Azure DevOps determine where slides will be shown. Slides load into a Telerik SlideView, so users can swipe or tap to go through the slides.

The text is stored in HTML, so the style is easily localized and modified. Gunnebo uses Crowdin for localization.

The picture below represents the structure of the project in general:

Slideshow_UWP_iOS_Android

Gunnebo developers have used the following libraries for the implementation:

https://github.com/martijn00/LottieXamarin

https://github.com/azchohfi/LottieUWP

https://docs.telerik.com/devtools/xamarin/controls/slideview/slideview-overview

https://github.com/zzzprojects/html-agility-pack