Software Development Insights, Technical

Getting started with Kafka

What is a Database? 

A database is an organized collection of information (data). You can store data in a database and retrieve it when needed. Retrieving data from a database is much faster than traditional storage methods (information written on paper, etc.). Many databases store data as key-value pairs. 

Modern databases follow the ACID standard, which consists of: 

Atomicity: A method for handling errors. It guarantees that transactions are either committed fully or aborted/failed. If any statement in a transaction fails, the whole transaction is considered failed and the operation is aborted. Atomicity thus guarantees that the database is never left with partially applied updates or modifications. 

Consistency: Guarantees data integrity. It ensures that a transaction can alter the database state only if the transaction is valid and follows all defined rules. For example, a database of bank account information cannot assign the same account number to two people.  

Isolation: Transactions do not depend on one another. In the real world, transactions are executed concurrently (multiple transactions read/write to the database at the same time). Isolation guarantees that concurrent transactions produce the same result as if they had happened sequentially (one transaction completing before the next one executes). 

Durability: Guarantees that once a transaction is committed, it remains committed even in the case of a system failure. This means committed/completed transactions are saved to permanent, non-volatile storage (e.g. a disk drive). 

What is Kafka? 

According to the main developer of Apache Kafka, Jay Kreps, Kafka is “a system optimized for writing”. Apache Kafka is an event streaming platform: a fast, scalable, fault-tolerant, publish-subscribe messaging system. Kafka is written in Scala and Java. 

Why the need for yet another system? 

At the time Kafka was born, in 2011, SSDs were not common, and when it came to database transactions, disks were the first bottleneck. LinkedIn wanted a system that was faster, scalable, and highly available. Kafka was born to provide a solution for big data without relying on vertical scaling.  

When the need arises to manage a large number of operations, message exchanges, monitoring, alarms, and alerts, the system has to be fast and reliable. Vertical scaling means upgrading physical hardware (RAM, CPU, etc.); it is not always ideal and may not be cost-effective. Kafka, coupled with a proper architecture, lets the system scale horizontally. 

LinkedIn later open-sourced Kafka, which has resulted in continuous improvements over time. When Kafka was introduced, LinkedIn ingested about 1 billion messages per day; today it handles 7 trillion messages per day across 100 clusters, 4,000 brokers, and 100K topics over 7 million partitions. 

Strengths of Kafka 

  • Decouple producers and consumers by using a push-pull model. (Decouple sender and consumer of data). 
  • Provide persistence for message data within the messaging system to allow multiple consumers. 
  • Optimize for high throughput of messages. 
  • Allow for horizontal scaling. 

What is the Publish/Subscribe model? 

A messaging pattern that decouples senders from receivers. 

All messages are split into classes. In Kafka, those classes are called Topics.  

A receiver (subscriber) can register for a particular topic (or topics), and it will be notified about messages on that topic asynchronously. Those topics are produced by senders/publishers. For example, consider a notification for new bank account holders: when someone creates a new bank account, they automatically get that notification without any prior action from the bank. 

The publish/subscribe model is used to broadcast a message to many subscribers. All subscribed receivers get the message asynchronously. 
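
To make the pattern concrete, here is a minimal in-memory sketch in C# (purely illustrative and not how Kafka itself is implemented): publishers and subscribers only agree on a topic name and never hold references to each other.

using System;
using System.Collections.Generic;

// Minimal illustration of publish/subscribe: a "bus" keeps a list of
// subscriber callbacks per topic and forwards every published message.
public class MessageBus
{
    private readonly Dictionary<string, List<Action<string>>> _subscribers =
        new Dictionary<string, List<Action<string>>>();

    // A receiver registers interest in a topic by providing a callback.
    public void Subscribe(string topic, Action<string> handler)
    {
        if (!_subscribers.TryGetValue(topic, out var handlers))
            _subscribers[topic] = handlers = new List<Action<string>>();
        handlers.Add(handler);
    }

    // A publisher broadcasts a message to every subscriber of the topic.
    public void Publish(string topic, string message)
    {
        if (_subscribers.TryGetValue(topic, out var handlers))
            foreach (var handler in handlers)
                handler(message);
    }
}

public static class PubSubDemo
{
    public static void Main()
    {
        var bus = new MessageBus();
        bus.Subscribe("new-accounts", msg => Console.WriteLine("SMS service got: " + msg));
        bus.Subscribe("new-accounts", msg => Console.WriteLine("Email service got: " + msg));

        // Both subscribers receive the message; the publisher knows nothing about them.
        bus.Publish("new-accounts", "Welcome, new account holder!");
    }
}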

Core concepts in Kafka 

Message: The unit of data within Kafka is called a message. You can think of a message as a single row in a database. For example, take an online purchase transaction: the item purchased, the bank account used to pay, and the delivery information together can be called a message. Since Kafka doesn’t have a predefined schema, a message can be anything. A message is composed of an array of bytes, and messages can be grouped into batches, so we don’t have to wait for one message to be delivered before sending another. A message can also contain metadata, like the ‘key’ value, which is used in partitioning.  

Producer: The application that sends data to Kafka is called the Producer. A Producer sends and a Consumer pulls data, and they don’t need to know one another; they only have to agree on where to put the messages and what to call them. We should always make sure that the Producer is properly configured, because Kafka isn’t responsible for sending messages, only for delivering them once they arrive. 
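
As a rough sketch of what a Producer looks like in code, the snippet below uses the Confluent.Kafka .NET client (an assumption on my part; the post doesn’t prescribe a client library, and the broker address and topic name are placeholders).

using System;
using Confluent.Kafka;

class ProducerExample
{
    static void Main()
    {
        // Placeholder broker address; point this at your own cluster.
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using var producer = new ProducerBuilder<string, string>(config).Build();

        // The key is optional, but messages with the same key always end up
        // in the same partition, which preserves their relative order.
        var result = producer.ProduceAsync("purchases",
                new Message<string, string> { Key = "account-42", Value = "item=book;amount=1" })
            .GetAwaiter().GetResult();

        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}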

Topic: When the Producer writes, it writes to a Topic, and the data sent by the Producer is stored in that Topic. The Producer sets the topic name. You can see a Topic as a file: when writing to a Topic, data is appended at the end, and when someone reads, they read from top to bottom. Topics starting with _ are for internal use. Existing data cannot be modified as it can in a database. 

Broker: A Kafka instance is called a Broker. It is in charge of storing the Topics sent from the Producer and serving that data to the Consumer. Each Broker is in charge of its assigned Topics, and ZooKeeper keeps track of who is in charge of every Topic. A Kafka cluster is a multitude of instances (Brokers). The Broker itself is lightweight and very fast, without the usual overheads of Java such as garbage collection and page and memory management getting in the way. The Broker also handles replication. 

Consumer: Reads data from the Topic of its choice. To access the data, a Consumer needs to subscribe to the Topic. Multiple Consumers can read from the same Topic. 
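
A matching Consumer sketch, again assuming the Confluent.Kafka .NET client and placeholder names, could look like this; consumers that share a group id split the Topic’s partitions between them.

using System;
using Confluent.Kafka;

class ConsumerExample
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",        // placeholder broker address
            GroupId = "purchase-processors",            // consumers in the same group share the work
            AutoOffsetReset = AutoOffsetReset.Earliest  // start from the beginning if no offset is stored
        };

        using var consumer = new ConsumerBuilder<string, string>(config).Build();
        consumer.Subscribe("purchases");

        while (true)
        {
            // Poll for the next record; returns null if nothing arrives within the timeout.
            var record = consumer.Consume(TimeSpan.FromSeconds(1));
            if (record != null)
                Console.WriteLine($"{record.Message.Key}: {record.Message.Value} (offset {record.Offset})");
        }
    }
}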

Kafka and Zookeeper 

Kafka works together with a configuration server. The server of choice is ZooKeeper, a centralized coordination service. Kafka is in charge of holding the messages, while ZooKeeper is in charge of the configuration and metadata of those messages, so everything about the configuration of Kafka ends up in ZooKeeper. Together they ensure high availability. There are other options available, but ZooKeeper seems to be the industry choice. Both are open source and they work well together.  

ZooKeeper is very resource-efficient and designed to be highly available. ZooKeeper maintains configuration information and naming, and provides distributed synchronization. ZooKeeper runs as a cluster, and the cluster must have an odd number of nodes (1, 3, 5, ...). Three or five nodes guarantee high availability; a single instance does not, because if that one instance goes down, we lose ZooKeeper. ZooKeeper is commonly installed on separate machines from the Kafka brokers because it needs to be highly available. There are official plans to remove the ZooKeeper dependency and, in the future, use only Kafka. 

Kafka and Zookeeper Installation 

On GNU/Linux, you must have Java installed (version 8 or newer). Download Kafka from https://downloads.apache.org/kafka/.  

Run the following commands in your terminal (assuming the downloaded file is named kafka_2.13-2.6.0.tgz). 

Extract the tarball: 

tar -xzf kafka_2.13-2.6.0.tgz 

Change directory into the one with the Kafka binaries: 

cd kafka_2.13-2.6.0 

Run ZooKeeper: 

bin/zookeeper-server-start.sh config/zookeeper.properties 

Start the Kafka broker service: 

bin/kafka-server-start.sh config/server.properties 

The downloaded tarball comes with all the binaries, configuration files, and some utilities. systemd unit files, PATH adjustments, and configuration tailored to your requirements are not included. 

Kafka Overview 


Kafka stores data as Topics. Topics get partitioned and replicated across multiple Brokers in a cluster. Producers send data to Topics so that Consumers can read them. 

What is a Kafka Partition? 

Topics are split into multiple Partitions to parallelize the work. New writes to a Partition are appended at the end of its current segment. By utilizing Partitions, we can write and read data through multiple Brokers, which speeds up the process, reduces bottlenecks, and adds scalability to the system. 

Topic overview 

A Topic named “topic name” is divided into four Partitions, and each Partition can be written to and read from by a different Broker. New data is written at the end of each Partition by its respective Broker. 

Multiple Consumers can read from the same Partition simultaneously. When a Consumer reads, it reads data from an offset; the offset is essentially the position of a message within a Partition, stored in the message metadata. Consumers can either read from the beginning or start from a certain offset (or from a timestamp that is mapped to an offset). 
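
As a small sketch of that last point (again assuming the Confluent.Kafka client, with topic, partition, and offset values chosen purely for illustration), a Consumer can be assigned an explicit starting position instead of relying on its stored group offset:

using System;
using Confluent.Kafka;

class ReplayExample
{
    static void Main()
    {
        var config = new ConsumerConfig { BootstrapServers = "localhost:9092", GroupId = "replayer" };
        using var consumer = new ConsumerBuilder<string, string>(config).Build();

        // Re-read the "purchases" topic, partition 0, starting at offset 100.
        // Offset.Beginning could be used instead to read from the very start.
        consumer.Assign(new TopicPartitionOffset("purchases", new Partition(0), new Offset(100)));

        var record = consumer.Consume(TimeSpan.FromSeconds(5));
        if (record != null)
            Console.WriteLine($"First replayed message at offset {record.Offset}: {record.Message.Value}");
    }
}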

Each Partition has one server called the Leader (that is, the Kafka Broker in charge of serving that specific data), and sometimes more servers acting as Followers. The Leader handles all read/write requests for the Partition while the Followers passively replicate the Leader. If a Leader fails for some reason, one of the Followers automatically becomes the Leader, so Leaders can change over time. Every Broker can serve as the Leader for some data: each Broker is a Leader for some Partitions and acts as a Follower for other Partitions, providing load balancing within the cluster. ZooKeeper provides the information on which Broker is the Leader for a certain piece of data. 

Microservices, Microsoft, Microsoft Azure, Software Development Insights, Technical

Microsoft LEAP: Design for Performance and Scalability

I’m at Microsoft for LEAP and we just wrapped up another day of interesting discussions. If you missed my update regarding day 1, make sure to have a look at it here.

Today’s theme was Design for Performance and Scalability. Many legacy applications are being replaced because they are not performance-oriented and scalable at their core. This is something that has to be introduced right from the design stage. Today’s speakers covered many of the core areas which need to be optimized to enable both performance and scalability.


Vamshidhar Kommineni took us right from breakfast into using Azure Storage for the data storage needs of Azure applications and how it can be used to enhance performance. Vamshidhar spoke about the innovations made in the storage services layer in 2019 and also briefly shared the plans for 2020. 

Corey Newton-Smith was next and focused on IoT applications. Corey has been with Microsoft since 2003 and currently functions as the Principal Group PM for IoT Central. She shared the current state of IoT and Microsoft’s plans for the near future, highlighting their vision.

Corey explained that Azure IoT represents a new era of digitization among industries, an innovation that allows brands to do so much more. The objective behind the platform is enabling a digital feedback loop. She discussed how much Microsoft has done to make IoT better: it is now capable of bidirectional communication, can be scaled to suit enterprises of any size, and provides end-to-end security. Microsoft is planning improvements that would allow it to support scenarios that are not currently cloud-feasible. What’s more, everything can be tailored specifically to the exact solutions that you need.

The next session began after some light mingling during the coffee break. It was back to business with Jose Contreras and his keynote on decomposing Monoliths into Microservices.


Enterprise applications have made a gradual transition from being monolithic to being microservice-based. Jose explained strategies that can help with this process, focusing on memory, computing, and schema. He then discussed migrating existing monolithic applications into microservices without affecting ongoing operations, focusing on the design, execution, and DevOps aspects.

Jose spoke on a number of factors that prove the usefulness of transforming monoliths into microservices. As part of his talk, he highlighted the factors to consider when adopting this approach, the differences between private and shared caches, and considerations for using a cache.

Interestingly, he then moved on to Azure Compute. He listed all of the available services and gave detailed information on their hosting models, DevOps criteria, scalability criteria, and other criteria.

Clemens Vasters’s keynote focused on how messaging is shaping enterprise applications and, importantly, on how Microsoft Azure can make all of it better.
He is a Product Architect at Microsoft and highlighted how open standards and messaging can be used to move applications to the cloud. Some of the areas he touched on were Event Hubs, Service Bus, Event Grid, CNCF CloudEvents, and Relay with WebSockets.

According to him, users can use a series of options to connect a range of devices. Ease of connectivity is guaranteed by the use of the intelligent edge or the intelligent cloud. It can be applied at varying scales and still works well with Telco 4G/5G. Beyond that, cloud services can be applied to automotive and smart-city scenarios, support industrial automation, and speed up processes.

Clemens continued by clearing the air on the standards the cloud services operate on. Everything is built according to standards and designed to be secure. Such was the level of quality on display.

After a quick lunch break, an alternative session was conducted for those who were already familiar with the campus. This session on messaging guidance was conducted by Francis Cheung and was related to session 4. However, Francis focused more on how we can assess whether some of those tools are a good fit for our projects. He also touched on managing and versioning message schemas.

Next was David Barkol’s session focusing on Designing an Event-driven Architecture on Azure through a workshop approach. He challenged attendees to solve problems related to messaging in a group setting. As a Principal Technical Specialist for Azure, David used his vast experience to reinforce the existing knowledge of attendees about Azure messaging services. He really had a lot of interesting things to say.

Using a few simple statements, he was able to highlight the problems of the customer, identify their needs, and show how to solve them with an event-driven architecture. An event-driven architecture eliminates bottlenecks and allows for easier transmission of information, and Azure messaging services address the demands identified by the customer. He also mentioned that Event Hubs Geo-DR also provides a backup or secondary region.


Derek Li conducted his keynote next, focusing on serverless platforms based on Azure Functions and Logic Apps. Derek is a Senior Program Manager. His keynote focused on how serverless technologies have impacted how applications are built, and on how Azure Functions and Logic Apps can be used to speed up delivery.

The last session came after a very welcome Cola Zero break, which refreshed us for Rahul Kalaya’s keynote on deriving insights from IoT data with Azure Time Series Insights.
Rahul spoke about design choices, principles and lessons learned with regard to maintaining the highest possible uptime of cloud databases and servers. Many stories from his experiences with Azure SQL made the keynote even more interesting.
And that was it: the completion of a day of meaningful sessions.

I look forward to sharing my next article on Day 3: Designing for Availability and Recoverability.

Microsoft, PowerBI, Software Development Insights, Technical

Microsoft Power BI: Dashboard in a Day

What better way to start the week than crunching and visualizing data? I joined the Dashboard in a Day workshop by Microsoft and Random Forest, a one-day hands-on workshop designed for business analysts.


I have to say that I really enjoy digging into new tools and learning about new technology. Even if I am probably not going to work directly with Power BI, it makes my job easier to understand what is possible and where the boundaries of the tool are.

Kicking off with a short introduction (and the mandatory Microsoft advertisements), we dove straight into the Power BI Desktop application itself, learning how to connect to, import, and transform data from various sources. After preparing the data, reformatting and splitting fields, we moved on to exploring the data with powerful visualization tools.


The exercises were quite comprehensive, and at some point I went rogue and chose to implement some of my own visualizations instead.

The event was well organized and planned, and the assets were categorized in a way that made it easy to identify the specific assets that suited our needs. The attendee content consisted of lab manuals and datasets that were available for download on MPN without requiring an MPN ID.

Towards the end of the day, the guys from Random Forest ensured that we had a good working knowledge of and familiarity with Power BI and could answer questions about the workshop or Power BI in general. It was a tremendous learning experience, and I can’t wait to try out these awesome new technologies! They even spent the last couple of hours of the day supporting and guiding us through our own datasets. I brought some statistics from one of our business units, and it was quite impressive how I could visualize and interactively navigate through the data.

All in all an exciting workshop, and I look forward to playing more with PowerBI in the future. If you have any questions or great ideas, feel free to contact me at bjorn.nostdahl@gunnebo.com 🙂

 

Artificial Intelligence (AI), Commercial, Gunnebo Retail Solution, Machine Learning (ML), Software Development Insights, USA

Autonomous and Frictionless Stores

Earlier this year, I visited the US for a couple of weeks, and having a genuine interest in retail technology, I visited quite a few retail stores. I went to see classical stores, but also had the chance to preview the future of retail: autonomous and frictionless stores!

Customers in this digital world don’t want to spend too much time shopping. They want everything to happen very fast and are looking for a seamless shopping experience all the time. That’s how the concept of frictionless stores came to exist. Frictionless stores are one of the biggest new things in consumer shopping.

Photo: Adobe Stock

What are Frictionless Stores

The concept of frictionless stores started a few years ago. When I talk to retailers, this is one of the topics that always pops up. All major brands are looking for innovative ways to create a better customer experience, and frictionless stores are one way to make that happen. These stores improve the shopping experience to the point where customers don’t have to wait at any step of shopping, such as selecting, receiving and paying for the product. Initially, frictionless stores were only about easier, less hassle shopping. But as innovations such as mobile wallets, digital receipts, free and fast shipping, and one-click purchasing emerged and began to reshape the consumer shopping experience, the definition began to be reshaped as well. Today, a frictionless experience means more than just less hassle: it means greater speed, personalization, and wow experiences.

How Frictionless Stores work

Let’s try to understand how frictionless stores work. In frictionless stores, buyers and sellers are connected in a way that gives buyers the ability to instantly find, compare and buy the products and services they need. In frictionless stores, customers should feel that they have full control. The concept and technology have evolved over time, and nowadays customers expect to have this experience through their smartphones. Retailers and brands are trying to find new ways of extending the definition of frictionless stores to provide customers with the best possible shopping experience. They need that commitment to stay ahead of the competition. As a result, frictionless shopping nowadays means eliminating anything that negatively impacts the customer experience.

Importance of Frictionless Stores

How has frictionless shopping fared according to research? Alliance Data has done a study and found that customers from all generations are looking for great service and an ideal shopping experience. This holds true for all areas of the world. If a brand fails to deliver what they want, customers will find a different one. According to the research, 76 percent of consumers said they give brands only two to three chances before they stop shopping with them. Another 43 percent said their main reason for leaving a brand is a poor shopping experience. What all this means is that if customers encounter friction, they will run away from that brand fast, probably without giving it a second chance.

Amazon Go Stores

Similar to frictionless stores, Amazon introduced Amazon Go stores. What is special about Amazon Go is that you don’t have to wait for checkout, which basically means you no longer have to wait in queues. The first Amazon Go store was a grocery store of 1,800 square feet. The concept spread fast; in fact, you can now see a lot of Amazon Go stores in the USA and Europe.


How is this even possible? What technologies have they used? Amazon has been doing a lot of research in the areas of computer vision, sensor fusion, and deep learning, and Amazon Go is the fruit of that work. You need the Amazon Go application to shop at Amazon Go stores. All you have to do is open your Go app, choose the products you want, take them, and then just leave. The application can detect when a product is picked up or returned to the shelf. It remembers what you took, and you can review these details in your virtual cart. When you finish shopping, you will be charged and you will receive a receipt for what you bought.

Buy awesome food at Amazon Go stores 

You may now wonder what you can buy there. What items are available in Amazon Go stores? I will just quote how one Amazon Go store markets itself: “We offer all the delicious meals for breakfast, lunch or dinner. We have many fresh snack options made every day by our chefs at our local kitchens and bakeries. You can buy a range of grocery items from milk and locally made chocolates to staples like bread and artisan cheeses. Try us, you will find well-known brands you love in our shops.” By the way, don’t expect to go in there and buy books, tech, clothes or anything else that Amazon sells online. It’s basically quick-and-easy food and other groceries. It’s just that there’s no cashier.


So many people have been attracted to Amazon Go stores that it is quite evident this concept will make a huge impact on the future of retail.

If you want to know more about frictionless stores, feel free to contact me at: bjorn.nostdahl@gunnebo.com or check out these related articles:

Artificial Intelligence (AI), Business Intellegence (BI), Machine Learning (ML), Microsoft Azure, Software Development Insights

Machine Learning and Cognitive Services

Machine learning is gradually becoming the driving force for every business. Business organizations, large or small, are seeking machine learning models to predict present and future demand and to support innovation, production, marketing, and distribution of their products.

Business value encompasses all forms of value that determine the well-being of a business. It’s a much broader term than economic value, encompassing many other factors such as customer satisfaction, employee satisfaction and social values. It’s the key measurement of the success of a business. AI helps accelerate this business value in two ways: by enabling correct decisions and by enabling innovation.


Remember the days when Yahoo was the major search engine and Internet Explorer was the major web browser? One of the main reasons for their downfall was their inability to make correct decisions. Wise decisions are made by analyzing data: the more data you analyze, the better decisions you make, and machine learning greatly supports this cause.

There was a time when customers accepted whatever companies offered them. Things are different now. Customers’ demands for new features are ever increasing, and machine learning has been the decisive factor behind almost every new innovation, whether it be face recognition, personal assistants or autonomous vehicles.

Machine Learning in more details

Let’s start with what machine learning is. Machine learning enables systems to learn and make decisions without being explicitly programmed for it. Machine learning is applied in a broad range of fields, and nowadays almost every human activity is getting automated with its help. A particular area of study where machine learning is heavily exploited is data science.

Data science plays with data. Insights must be extracted from that data to make the best decisions for a business.

The amount of data that a business has to work with is enormous today. For example, social media produces billions of data points every day. To stay ahead of its competitors, every business must make the best use of this data. That’s where machine learning comes in.

Machine learning offers many techniques to make better decisions from large data sets, including neural networks, SVMs, reinforcement learning and many other algorithms.

Among them, neural networks are leading the way. They improve consistently, spawning child technologies such as convolutional and recurrent neural networks that provide better results in different scenarios.


Learning machine learning from the beginning and trying to develop models from scratch is not a wise idea: it carries a huge cost and demands a lot of expertise in the subject. That’s why you should consider the assistance of a machine learning vendor. Google, Amazon and Microsoft all provide machine learning services. Let’s take Microsoft as an example and review what qualities we should look for when selecting a vendor.

Using cloud as a solution for machine learning

It simplifies and accelerates the building, training, and deployment of machine learning models. It provides a set of APIs to interact with when creating models, hiding all the complexity of devising machine learning algorithms. Azure has the capability to identify suitable algorithms and tune hyperparameters faster. Autoscale is a built-in feature of Azure cloud services that automatically scales applications. This autoscaling feature has many advantages: it allows your application to perform at its best while keeping costs to a minimum. Azure Machine Learning APIs can be used with major technologies such as C# and Java.

There are many other advantages to cloud machine learning:

  • Flexible pricing: you pay for what you use.
  • High user-friendliness: easier to learn and less restrictive.
  • More accurate predictions based on a wide range of algorithms.
  • Fine-tuning results is easier.
  • The ability to publish your data model as a web service which is easy to consume.
  • The tool allows data streaming platforms like Azure Event Hubs to consume data from thousands of concurrently connected devices.
  • You can publish experiments for data models in just a few minutes, whereas expert data scientists may take days to do the same.
  • Azure security measures manage the security of Azure Machine Learning, protecting data in the cloud and offering security-health monitoring of the environment.

Using Cognitive Services to power your business applications

We will go on to discuss how Azure Cognitive Services can be used to power up a business application. Azure Cognitive Services is a combination of APIs, SDKs, and services which allows developers to build intelligent applications without having expertise in data science or AI. These applications can have the ability to see, hear, speak, understand or even reason.


Azure Cognitive Services were introduced to extend Microsoft’s existing portfolio of APIs.

New services provided by Azure Cognitive Services include:

  • The Computer Vision API, which provides advanced algorithms for image processing
  • The Face API, which enables face detection and recognition
  • The Emotion API, which provides options to recognize the emotion of a face
  • The Speech service, which adds speech functionality to applications
  • Text Analytics, which can be used for natural language processing

Most of these APIs were built targeting business applications. Text Analytics can be used to harvest user feedback, allowing businesses to take the necessary actions to accelerate their value. Speech services allow business organizations to provide better customer service to their clients. All these APIs have a free trial which can be used to evaluate them. You can use these Cognitive Services to build various types of AI applications that will solve complex problems for you, thus accelerating your business value.
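
As a rough sketch of how one of these APIs might be called from a business application, the snippet below uses the Azure.AI.TextAnalytics .NET SDK to score a piece of customer feedback; the endpoint, key, and package choice are my own assumptions, not something prescribed in this post.

using System;
using Azure;
using Azure.AI.TextAnalytics;

class FeedbackSentiment
{
    static void Main()
    {
        // Placeholder endpoint and key from your own Cognitive Services resource.
        var client = new TextAnalyticsClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<your-key>"));

        // Score a piece of customer feedback so the business can react to it.
        DocumentSentiment sentiment = client.AnalyzeSentiment(
            "The checkout was quick, but the delivery took far too long.").Value;

        Console.WriteLine($"Overall: {sentiment.Sentiment}, " +
                          $"positive={sentiment.ConfidenceScores.Positive:0.00}, " +
                          $"negative={sentiment.ConfidenceScores.Negative:0.00}");
    }
}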

If you want to talk more about ML and AI, feel free to contact me: bjorn.nostdahl@gunnebo.com 🙂

Milestone, Software Development Insights, Technical

Extending Milestone Smart Client with Bing Maps

The possibilities of Milestone’s XProtect Smart Client grow with every version. It supports hardware-accelerated video decoding, which means you can view very high-resolution streams with roughly five times lower CPU usage with the aid of an external graphics card. Above all, its magnificent SDK has allowed the Gunnebo team to make a great plugin for XProtect Smart Client 2018 that now integrates Microsoft’s Bing Maps.

This is possible because XProtect Smart Client is a very powerful, adaptable and easy-to-use client application for the daily operations of security installations. Using the Milestone Integration Platform and the unique application plug-in architecture, various types of security and business systems applications can be seamlessly integrated into XProtect Smart Client.

Bing Maps has abilities such as:

  • Buildings can be created with a number of levels, which are easily navigated through a pane that becomes available after selecting a building.
  • Cameras can be added and attached to different levels. With this, you can switch between the cameras on the different levels available.
  • You get a complete geographical overview of all the cameras from different sites on your smart map. With this, you can bring up live and current feeds and monitor recordings from your smart map cameras.
  • You can seamlessly jump to cameras or custom overlays rather than having to navigate to them manually.

Bing Maps can be easily embedded into XProtect Smart Client with the aid of the Gunnebo map plugin, which allows for seamless operation and shows all camera locations on the map.


Windows Presentation Foundation (WPF), a graphical subsystem from Microsoft, was used by the Gunnebo team to work with Bing Maps; the Bing Maps WPF software development kit provides the basic programming pattern for the map integration.

The system requirements this program is compatible with include: Windows 7, Windows 8, Windows Server 2008, Windows Vista, Windows 2000 Service Pack, Windows Server 2003, etc. Make sure you verify that your operating system is compatible with this programming reference before downloading the application or running it.

For its integration with Milestone’s XProtect, the plugin automatically generates location names from Milestone camera groups/folders and groups cameras according to the Milestone grouping.

The Milestone cameras and their location entity (parent folder) can be retrieved through the MIP SDK with the following call:

var items = Configuration.Instance.GetItemsByKind(Kind.Camera);

where

  • if FQID.Kind==Kind.Folder, the item is a location (parent folder)
  • if FQID.Kind==Kind.Camera, the item is a camera

The SmartClient administrator can drag and drop each location onto the map or specify the location address and comments.

You can search for different views for the various cameras available by view item type. For example, you may want to see all of the views with PTZ cameras, those from a particular manufacturer, or those that contain certain view item types:

  • Map
  • Alarm
  • Matrix
  • HTML
  • Name of camera in view
  • Add on products

With this, you can search for the available keywords.

Location on the map

The SmartClient administrator can drag and drop each location onto the map or specify the location address and comments. You can also create locations at the points of the map that interest you, for example a location for the home office or a satellite office. Aside from the fact that the locations give you a full picture of your environment, you can also use them to navigate the map.

However, you should know that an XProtect Smart Client location can only be added depending on your configuration. With this, it becomes very easy to go back to the general overview of the map when you are zoomed out.

Location data is stored on a central server in the XProtect configuration. The administrator needs to set it up only once, and then it is shared between all Smart Clients. Likewise, once the administrator edits it in any Smart Client instance, the change is shared between all of them.

public static Guid DefaultLocationGuid = new Guid("AA2BB85A-B965-448f-BBA9-CC4DCE129411");

public static void SaveLocationCoordinates(this IList locations)
{
    // Project each location into a serializable config item.
    var coordinatesConfig = locations.Select(item =>
        new MapLocationConfigItem()
        {
            Latitude = item.Latitude,
            Longitude = item.Longitude,
            Name = item.Name,
            Address = item.Address
        }).ToArray();

    // Serialize the items and persist them in the XProtect options configuration.
    var node = coordinatesConfig.ToXmlNode();
    Configuration.Instance.SaveOptionsConfiguration(DefaultLocationGuid, false, node);
}

To retrieve the stored coordinates from the XProtect central server, the following code can be used:

var coordinatesConfigXmlNode = Configuration.Instance.GetOptionsConfiguration(DefaultLocationGuid, false);
var coordinatesConfig = coordinatesConfigXmlNode.ToInstance<MapLocationConfigItem[]>();

The key element is DefaultLocationGuid, which contains the GUID of our custom configuration entry.

Camera Navigator

The camera navigator is a feature that lets you view all the cameras in relation to one another, as they are laid out on a floor plan or map. With the camera navigator, you can move from one camera to another in a single view.



The plugin checks the status of each camera and marks every location on the map with a colored icon:

  • green if all cameras at that particular location are online,
  • yellow if less than half of the cameras are offline,
  • red if more than half of the cameras are offline

To get the camera statuses we use the MessageCommunication mechanism provided by Milestone and implemented in the MIP SDK:

Here we initialize the MessageCommunication API and register a callback (ProvideCurrentStateResponseHandler) that will be called once we get the data with camera statuses:

MessageCommunicationManager.Start(EnvironmentManager.Instance.MasterSite.ServerId);
messageCommunication = MessageCommunicationManager.Get(EnvironmentManager.Instance.MasterSite.ServerId);
communicationObject = messageCommunication.RegisterCommunicationFilter(ProvideCurrentStateResponseHandler,
                new CommunicationIdFilter(MessageCommunication.ProvideCurrentStateResponse));

To get the callback invoked and receive the status (online/offline) of each particular camera, we perform this call:

messageCommunication.TransmitMessage(
               new Message(MessageCommunication.ProvideCurrentStateRequest,
                    cameras.Select(camera => camera.CameraId.ToString()).ToArray()), null, null, null);

Below is an example implementation of the ProvideCurrentStateResponseHandler callback:

private object ProvideCurrentStateResponseHandler(Message message, FQID dest, FQID source)
{
    // The response data is a collection of ItemState entries, one per camera.
    Collection<ItemState> result = message.Data as Collection<ItemState>;
    if (result != null)
    {
        foreach (ItemState itemState in result)
        {
            // itemState.FQID.ObjectId - Camera Id
            // itemState.State - Camera State
        }
    }
    return null;
}

The plugin also supports a “Camera Monitoring” mode where all cameras are displayed regardless of location. Each camera has a status indication there and displays how long it has been offline.


If you are interested in knowing more about the Milestone SDK or the plugins we can offer, feel free to contact me at bjorn.nostdahl@gunnebo.com 🙂

IBM International Business Machines, Mender, Node RED, Software Development Insights, Technical

Mender IoT Device Management

As humanity progresses, innovation is moving towards digitalization. The vast majority of all human data is stored using digital methods. This involves the use of computers, cloud computing and the Internet of Things (IoT), one of the latest technological disruptions. These technologies are used to connect devices through digital channels, which are used to transfer data back and forth. At the same time, the digital world is in constant need of updates. These updates are essential to cope with the increase in data and overall customer requirements.


Why is Software Update Essential?

  • Bugs: One of the main problems in computing is the number of bugs that arise from weak development practices or from high amounts of data that were not accounted for. Updating your software acts as a means to fix and step past such bugs.
  • Security: Unfortunately, cyber security is a huge issue in this era. With many threats rising in the field, updates are continuously released with better security settings in the hope of reducing and eliminating threats.
  • Features: The most common reason for software updates is to release new competitive features that cope with customers’ requirements.

However, with the high number of devices invading our planet, it is impossible to provide these software updates through physical means. This is why over-the-air (OTA) methods are the most efficient way to deliver software updates; in some cases, where physical means can’t be used, OTA is the only available method. Delivering software updates OTA is a complex process where the data travels over networks and digital channels to reach your device. It is a very delicate process where you have to ensure proper connectivity and power to avoid any errors along the way.

Mender: Your New Solution

With the intensity of such transactions, you should always look for the best service out there to implement the process as efficiently as possible. Trying to build your own infrastructure for reliable OTA updates can be a real hassle: the amount of time and work spent on the process is more than you can handle, and so is the cost you would have to pay.


This is why companies should look at the different software update solution providers out there. Here is where Mender kicks in: it is an end-to-end open source software update solution for connected devices and IoT. You can consider Mender a ready-made infrastructure that will solve all your software update issues.

Why we use Mender?

No vendor lock-in: One thing to look forward to while using Mender is the fact that you won’t face any vendor lock-in. Mender is open source, licensed under Apache 2.0. This gives customers complete freedom to use it without interference from vendors or other third parties. With Mender, you no longer need to worry about getting locked in.


Reduction in customer support issues: Mender focuses on the customer experience, making it as smooth as possible. This is achieved through strong security protocols during the update process. The process is designed to be as efficient and robust as possible, compensating for any pitfalls in connectivity. Mender uses image-based updates, which act as a safety net when connectivity problems arise. It helps ensure devices stay connected and functional at all times, reducing system failures and device recalls.

Features and Functionality

The developers of Mender are very aware of the common software update issues and the hassle customers go through. This has helped them develop Mender with more features than other software update solution providers, features that help make the process simpler and more effective for users. The following is a list of some of the features you can enjoy while using Mender:

  • Intuitive UI
  • Deployment Reports
  • Custom checks via scripting support
  • Code signing

Anticipated Progress and Updates

With Mender, there is still much more to look forward to. Gunnebo, a multinational business specializing in security services, is of course interested in contributing to Mender, possibly to help implement the features we and other companies like us need.

Our first project will be updating Node-RED flows from the Mender v.2 update module. If you are interested in contributing or want to know more – feel free to contact me at: bjorn.nostdahl@gunnebo.com

Cosmos DB, Microservices, Microsoft Azure, Mongo DB, Software Development Insights, Technical

Microsoft LEAP: Designing for the Cloud

Microsoft LEAP is an event for developers worldwide who are looking for training directly from Microsoft. It takes place annually at Microsoft headquarters in Redmond, WA. The five-day conference helps attendees fully understand how Microsoft products can be used and how they can solve companies’ problems. This time, the participants learned how to design for the cloud in an up-to-date fashion.

 


The following piece will give you a glimpse of the Microsoft LEAP program. The sections below are the highlights with the greatest impact on the developer community.

Deep Dive into Cloud Computing: Azure Strategy

On January 28, Microsoft kicked off the LEAP program for software architects and engineers. There were loads of speakers on the agenda, and among them, Scott Guthrie was one of the strongest. Scott is in charge of Microsoft’s cloud infrastructure, servers, CRM and many more tools, and he led the team that created Microsoft Azure. In his keynote, “Designed for Developers”, he discussed cloud computing technology. His aim was to help developers with different levels of skill reach one goal: sustainable development and use of cloud computing.


Scott focused on how to develop cloud solutions and maintain them. The session was concluded with a presentation of Microsoft’s anticipated plan to provide quantum computing in their Azure technology.

The Strong Impact of Microservice Architecture

On this topic, the most memorable session was presented by Francis Cheung and Masashi Narumoto. They talked about microservices and the strengths of the architecture, which has raised the bar in the world of cloud computing.


The speakers mentioned several important features of a strong company that has the potential to succeed, and it was well established that the success of a microservice implementation depends mostly on a well-developed team with a strong strategy (preferably domain-driven).

 

No matter how beneficial microservices can be, they are not necessarily the right choice for your business. You need to be well aware of your products and the level of complexity your business needs. Having extra, unrequired tools will set you back rather than take you anywhere.

SQL Hyperscale as a Cloud-Based Data Solution

This session was different as it celebrated two decades of PASS and 25 years of SQL technology being used. The speaker, Raghu Ramakrishnan, has been Microsoft’s CTO for Data since he moved from Yahoo in 2012. With his strong background and experience, Raghu was the best candidate to discuss SQL Hyperscale and how groundbreaking this technology has been.


The Hyperscale service has become a crucial addition to the currently existing services. According to Ramakrishnan, this is the most modern SQL service tier, with the highest storage capacity and the best compute performance. This model supports databases of up to 100 TB.

 

This technology is generally used to replace traditional cloud database structures as it is more reliable and accessible than other alternatives. Microsoft has added many features to SQL Hyperscale, making it a leading database solution in the market. With the amazing features discussed in the talk, it was really worth a separate session.

The Commercial Database: Cosmos Database

Deborah Chen, the Cosmos DB program manager at Microsoft, took the time to discuss one of the most popular commercial databases out there. Many current implementations use non-relational databases, and Cosmos DB is one of the most widely used of them.


As Deborah mentioned, Cosmos DB is a very versatile and responsive tool. With numerous transactions taking place every second, response time is a very sensitive matter for applications (especially real-time ones). Since it is a non-relational database, retrieving and storing data is easier and faster. This is where Cosmos DB stands out, as it was intentionally created with an architecture aimed at handling such tasks.

 

She also discussed the use of Service Level Agreements (SLAs). These agreements provide guarantees on availability and latency for all users, making Cosmos DB one of the most attractive products out there.

Monitoring Your Procedures Using Azure Monitoring

Rahul Bagaria, a product manager for Azure Monitor, joined later on to talk about the importance of monitoring your work, flow, and operations. The monitoring process is not limited to single tasks; it covers the connections, the workflow, and the final output. Monitoring all the steps of a procedure is important for maintaining efficient delivery and quality assurance as a whole. It is also useful for picking out errors and problems in the cycle, should they arise.


This is where Azure Monitor kicks in, with strong capabilities like Log Analytics and Application Insights. Rahul emphasized the importance of this tool and all the features it provides. His team has worked hard to provide a service that can help with multiple tasks, milestones, and services. This session helped developers learn why and how to monitor their work processes.

 

All in all, the first day at Microsoft LEAP 2019 was very on-topic and interesting. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Business Intellegence (BI), Machine Learning (ML), Microsoft Azure, Software Development Insights

Microsoft LEAP: Looking into the future

Cloud computing has become one of the most profitable industries in the world, and the cloud will remain a very hot topic for the foreseeable future. There is huge competition among cloud service providers to win customers by providing the best services. Cloud service providers invest a lot of money in innovation, and thus cloud services set most of the trends in the future IT industry. Microsoft Azure and Amazon AWS are among the leaders in innovation in their respective fields.

Data centers around the world

As the demand for cloud services is rapidly increasing in all parts of the world, establishing data centers around the globe becomes a necessity. Azure has understood this well and expects to expand its services by constructing data center regions in many parts of the world.

From news.microsoft.com article about Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. Photo by Frank Betermin

The world is divided into geographies defined by geopolitical boundaries or country borders. These geographies define the data residency boundaries for customer data. Azure geographies respect the requirements within geographical boundaries, ensuring data residency, compliance, sovereignty, and resiliency. Azure regions are organized into geographies; a region is defined by a bandwidth and latency envelope. Azure has the greatest number of global regions among cloud providers. This is a great benefit for businesses that seek to bring their applications closer to users around the world while protecting data residency.

Azure’s Two Major Global Expansions of Cloud Services

Two of the most important expansions Microsoft Azure has made to improve its services include the following:

Expansion of Virtual Networks and Virtual Machines Support.

With utility virtual machines like A8 and A9, which provide advantages such as fast processors and interconnection between more virtual cores, virtual networks can now be seamlessly configured for specific geographical locations and regions.

This feature gives more room for optimal operations, cloud services, complex engineering design, video encoding and a lot more.

Incorporation of Azure Mobile Services, and its Expansion to Offline Features

Even with a disconnected service, this makes it possible for applications to operate effectively using offline features. Furthermore, it extends Azure cloud services to apps on various platforms, including Android and iOS on mobile phones.

Then there are Availability Zones, the third level in the Azure network hierarchy.

Availability Zones are physically separated locations inside regions, made up of one or more data centers. Constructing Availability Zones is not easy: they are not just data centers, they also need advanced networking, independent power, cooling, etc. The primary purpose of Availability Zones is to help customers run mission-critical applications.

You get the following benefits with Azure Availability Zones:

  • Better protection for your data – you won’t lose your data due to the destruction of a single data center
  • High availability, better performance, and better support for business continuity
  • A 99.99% SLA on virtual machines

Open source technology

Microsoft took some time to understand the value of open source technologies, but now they are doing really well. With .NET Core and .NET Standard, Microsoft has made a major commitment to open source. Looking at GitHub alone, Microsoft is one of the largest contributors to open source.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft.

With .NET Core 3.0, Microsoft introduced many features that enable developers to create secure, fast and productive web and cloud applications. .NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). ASP.NET Core 3 enables client-side development with Razor Components. EF Core 3 has support for Azure Cosmos DB, and the release also includes support for C# 8 and .NET Standard 2.1 and much more.

Mixed reality and AI perceptions

Mixed reality tries to reduce the gap between our imagination and reality, and together with AI, it is about to change the way we see the world. It may well become a primary source of entertainment. Although mixed reality became popular in the gaming industry, you can now see its applications in other industries as well. The global mixed reality market is booming, which is why the biggest names in tech are battling to capture the MR market. All the major tech players have introduced MR devices, such as the Meta 2 headset, Google Glass 2.0, and Microsoft HoloLens.

Mixed reality and AI perception are the result of the cooperation of many advanced technologies. This technology stack includes natural language interaction, object recognition, real-world perception, real-world visualization, contextual data access, cross-device collaboration, and cloud streaming.


As I said earlier, although the gaming industry was the first to adopt mixed reality, MR applications are now increasingly used in other industries. Let’s visit some of these industries and see how mixed reality has transformed them and what benefits they get from mixed reality and AI perception.

You can see tech giants such as SAAB, NETSCAPE, and DataMesh using mixed reality in the manufacturing industry. According to research, mixed reality helps increase worker productivity by 84%, improve collaboration among cross-functional teams by 80% and improve customer service interaction by 80%. You may wonder how mixed reality is able to achieve this and what it offers the manufacturing industry. There are many applications of mixed reality in manufacturing; the following is a small list of them.

  • Enhanced Predictive Maintenance
  • Onsite Contextual Data Visualization
  • Intuitive IOT Digital Twin Monitoring
  • Remote collaboration and assistance
  • Accelerated 3D modeling and product design
  • Responsive Simulation training

Retail, healthcare, engineering, and architecture are some other industries that use mixed reality heavily.

Quantum revolution

Quantum computing could be the biggest thing in the future. It is a giant leap forward from today’s technology, with the potential to alter our industrial, academic, societal and economic landscapes forever. You will see these massive implications in nearly every industry, including energy, healthcare, smart materials, and environmental systems. Microsoft is taking a unique, revolutionary approach to quantum with its Quantum Development Kit.

Picture from cloudblogs.microsoft.com article about the potential of quantum computing

Microsoft can be considered one of the companies taking quantum computing most seriously in the commercial world. They have a quantum dream team formed by some of the greatest minds in physics, mathematics, computer science, and engineering to provide cutting-edge quantum innovation. Their quantum solution integrates seamlessly with Azure. They have taken a scalable, topological approach towards quantum computing, which helps harness superior qubits. These superior qubits can perform complex computations with high accuracy at a lower cost.

There are three important features in the Quantum Development Kit which make it a go-to quantum computing solution.

It introduces its own language, Q#, created specifically for quantum programming. It has general programming features such as operators, native types and other abstractions. Q# integrates easily with Visual Studio and VS Code, which makes it feature-rich, and it is interoperable with the Python programming language. With the support of enterprise-grade tools, you can easily work on any OS: Windows, macOS, or Linux.

The Quantum Development Kit provides a simulated environment which greatly helps in optimizing your code. This is very different from other quantum computing platforms, which are still at a rather crude level. This simulation environment also helps you debug your code, set breakpoints, estimate costs, and many other things.

Third, as we discussed earlier, Microsoft has become a major contributor to the open source world. The Quantum Development Kit's libraries and samples are released under an open source license, and Microsoft has worked hard to make quantum computing more approachable, with plenty of training material to attract developers into the quantum programming realm. The open source license is a great encouragement for developers to use the Quantum Development Kit in their applications while contributing to the Q# community.

Cloud services will shape the future of the IT industry, and quantum computing, open source technologies, and mixed reality will play a great role in it.

This is my last day in Redmond, but I really look forward to coming again next year! If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com

Artificial Intelligence (AI), Machine Learning (ML), Microsoft Azure, Software Development Insights

Microsoft LEAP: Adding Business Value and Intelligence


The concept of business value and intelligence is about becoming more productive through the use of various technical applications and analytical tools for assessing raw data. Business intelligence makes use of activities like data mining, analytical processing, querying, and reporting. Companies take advantage of it to improve their operations and accelerate their decision making. Business intelligence is also useful for reducing costs and expenses and for identifying new business opportunities.


A lot of experts shared their ideas and spoke on various aspects of business value and intelligence relating to AI in Redmond. Notable speakers include Jennifer Marsman, Maxim Lukiyanov, Martin Wahl, and Noelle LaCharite. The topics they spoke on were machine learning fundamentals, an introduction to the new Azure Machine Learning service, using cognitive services to power your business applications, and how to solve business problems using AI, respectively.

Machine Learning Fundamentals

The fundamentals of machine learning have to do with understanding both the theoretical and the programming aspects. It is also important to stay up to date with the latest algorithms and technologies implemented by the various programming tools for machine learning. The simplest explanation of the term machine learning is training a machine in such a way that it becomes able to perform various tasks on its own.


Algorithms can learn how to perform these tasks in various ways, and this brings us to the different types of machine learning. They include supervised learning, which trains the machine on labeled examples so it can identify and differentiate between various kinds of data. Unsupervised learning, on the other hand, does not rely on labels or a specific structure that the machine is supposed to produce; the machine has to find patterns in the data by itself. Another type of machine learning is reinforcement learning, where an agent learns by trial and error from rewards. A minimal sketch of the first two follows.
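Here is a rough illustration of the difference between supervised and unsupervised learning using scikit-learn, one of the frameworks mentioned later in this post. The dataset and model choices are illustrative assumptions only.

```python
# Supervised vs. unsupervised learning in a few lines of scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: the model is trained on features X together with labels y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted class for first sample:", clf.predict(X[:1]))

# Unsupervised: the model only sees X and must find structure (clusters) itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments for first five samples:", km.labels_[:5])
```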

The importance of a machine learning model's accuracy cannot be overstated. The accuracy is what really determines how effective a model can be for a company's operations. Models are evaluated mainly by making predictions and putting them to work in a real-world sense. In the business world, a model cannot be accepted until it has been tested against the real world and the results are satisfactory. How a model is measured depends on the characteristics of that particular model and the circumstances in which it is needed in the real world; the small sketch below shows the basic idea.
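A minimal sketch of what that measurement usually looks like in practice: hold back part of the data, make predictions on it, and compare them with the known outcomes. The bundled Iris dataset below is only a stand-in for real business data.

```python
# Estimating model accuracy on data the model has never seen.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```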

Two vital kinds of neural networks in machine learning are CNNs and RNNs. CNN stands for convolutional neural network, while RNN stands for recurrent neural network. CNNs typically take fixed-size inputs and generate fixed-size outputs, and they require minimal amounts of preprocessing. RNNs, on the other hand, can work with inputs and outputs of arbitrary length, which makes them suited to processing sequences. So in basic terms, CNNs are built to recognize images, while RNNs recognize sequences.
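To make the distinction a little more tangible, here is a hedged sketch in PyTorch (one of the frameworks mentioned later in this post): the CNN slides filters over an image-shaped grid, while the RNN consumes a sequence step by step. All shapes and layer sizes are made-up illustrations, not a recipe for a real model.

```python
import torch
import torch.nn as nn

# CNN: a single convolution over a batch of 1-channel 28x28 "images".
cnn = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),  # e.g. 10 image classes
)
images = torch.randn(4, 1, 28, 28)
print("CNN output shape:", cnn(images).shape)  # (4, 10)

# RNN: processes a sequence of feature vectors; the sequence length can vary.
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
sequence = torch.randn(4, 50, 16)  # 4 sequences, 50 steps, 16 features each
output, hidden = rnn(sequence)
print("RNN output shape:", output.shape)  # (4, 50, 32)
```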


Furthermore, Jennifer Marsman described various methods related to artificial intelligence, which include the following.

  • Search and Optimization

Search and optimization techniques help AI algorithms rank results, for example in search engines. Explaining the role of AI in search and optimization can get very technical, but machines can be taught how to work with these techniques to rank results.

  • Logic

Logic also plays a major role in AI. Logic can be applied as an analytical tool, as a knowledge representation formalism, and as a method of reasoning. It can also be used as the basis of a programming language. With this, one can explore both the prospects and the problems of AI.

  • Probabilistic Methods for Uncertain Reasoning

One of the most widely used methods for representing uncertainty in AI is probability. Over the years, certainty factors and alternative numerical schemes have also been used to quantify uncertainty.

  • Classifier and Statistical Learning Methods

Classifiers associated with AI include Naive Bayes, decision trees, and the perceptron, among a host of others. Various statistical learning methods and theories are used to evaluate the uncertainties in AI. However, these statistical models have limitations, and this is where logic comes in. (A minimal classifier sketch follows this list.)

  • Artificial Neural Networks

This is where the earlier-mentioned RNNs and CNNs come into the concept of AI. A typical example of an ANN is a natural language processing model that can be used to interpret human speech.

  • Evaluating Progress in AI

Evaluating progress is imperative for estimating how far the concept of AI has come across all sectors, including business. Three evaluation types are human discrimination, peer confrontation, and problem benchmarks.
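Here is the minimal classifier sketch referred to above, comparing two of the classical classifiers from the list on a small synthetic dataset. The dataset and parameters are arbitrary illustrations.

```python
# Comparing Naive Bayes and a decision tree with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

for model in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, "mean CV accuracy:", round(scores.mean(), 3))
```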

An Introduction to New Azure Machine Learning Service

Maxim Lukiyanov spoke about the working principles of the new Azure Machine Learning service. The service helps to simplify and accelerate the building, training, and deployment of various machine learning models. Furthermore, automated machine learning can be used to identify the algorithms that are needed and to tune hyperparameters faster.

The new Azure Machine Learning service also helps to improve productivity and reduce costs with auto-scaling compute and streamlined workflows for the machine learning process. It has the added advantage of storing data easily in the cloud, and working with the latest tooling is a seamless operation, with open source frameworks like PyTorch, TensorFlow, and scikit-learn.
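As a hedged sketch of what working with the service can look like from Python, the snippet below tracks a training run with the Azure Machine Learning SDK (azureml-core). It assumes you already have an Azure ML workspace and a downloaded config.json; the experiment name and logged metric are placeholders, not anything prescribed by the service.

```python
# Minimal run tracking with the Azure Machine Learning Python SDK.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()                      # reads the local config.json
exp = Experiment(workspace=ws, name="demo-experiment")  # hypothetical name

run = exp.start_logging()                         # interactive run from a script/notebook
run.log("accuracy", 0.91)                         # metric shows up in Azure ML Studio
run.complete()
```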

Maxim also spoke further on some benefits of the new Azure Machine Learning service:

  • Easy and flexible pricing, as you pay only for the features that you use.
  • The service is easy to understand, and the tools that come with it are not in any way restrictive.
  • With the various data and algorithms available, predictions become more accurate.
  • The tooling makes it very easy to import data as well as fine-tune the results.
  • A lot of other devices can be connected easily to the platform with the aid of the tools.
  • Models can easily be published as a web service.
  • Publishing an experiment takes only a matter of minutes, a major upgrade compared to manual workflows that can take expert data scientists days.
  • Azure's security measures provide adequate protection, which is very useful for storing data in the cloud.

Using Cognitive Services to Power your Business Applications: An Overview and Look at Different AI Use Cases

Martin Wahl explained that with Azure Cognitive Services, customers and developers are set to benefit from AI without even needing the services of a data scientist, which is a major advantage in saving both time and cost. This is achieved by packaging the machine learning models, pipelines, and infrastructure needed for important activities such as vision, speech, search, text processing, language understanding, and many more operations into ready-to-use cognitive services. This means that anyone who is capable of writing a program at all can make use of machine learning to improve their application.


Customers who have adopted the service are already benefiting from capabilities such as the Face container, the Text Analytics container, Custom Vision support for logo detection, language detection, in-depth analysis, and many more.

Martin Wahl finally explained that with these Azure services, more value is added to the business, and implementing artificial intelligence is easier than ever.
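To give a feel for how little code one of these packaged capabilities requires, here is a hedged sketch calling language detection through the Text Analytics REST API. The endpoint and key are placeholders for your own Cognitive Services resource, and the URL reflects the v3.0 contract of that era; check the current documentation before relying on it.

```python
# Detecting the language of a short text with the Text Analytics REST API.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-subscription-key>"                                   # placeholder

documents = {"documents": [{"id": "1", "text": "Ett meddelande från Gunnebo."}]}

response = requests.post(
    f"{endpoint}/text/analytics/v3.0/languages",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=documents,
)
print(response.json())  # includes the detected language and a confidence score
```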

How to Solve Complex Business Problems Using AI Without Needing a Data Scientist or Machine Learning Expert

With basic skills like Python coding, data visualization, the Hadoop platform, Apache Spark, etc., complex business problems can be solved, even without being a machine learning expert or a data scientist. All of this is made possible through the help of AI, and all that is needed is dedication and willingness. Some steps to go about this include the following (a small illustration follows the list):

  • Understanding the basics: This has to do with acquiring general knowledge on the basics, both theoretically and practically.
  • Learning statistics: Statistics is core to solving business problems, and some of the aspects to look at include sampling, data structures, variables, correlation, regression, etc.
  • Learning Python
  • Making attempts on an exploratory data analysis project
  • Creation of learning models
  • Understanding the technologies that are related to big data
  • Exploring deeper models
  • Completing a complex business problem.
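
The small illustration promised above: a toy exploratory-data-analysis step (summary statistics and correlation) followed by a simple regression, using pandas and scikit-learn. The CSV path and column names are made-up placeholders for your own data.

```python
# Exploratory data analysis plus a first regression model.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("sales.csv")              # hypothetical dataset
print(df.describe())                       # basic statistics: mean, std, quartiles
print(df.corr(numeric_only=True))          # correlations between numeric columns

X = df[["ad_spend", "store_visits"]]       # hypothetical feature columns
y = df["revenue"]                          # hypothetical target column
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```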

Finally, Noelle LaCharite gave a vivid explanation of how a PoC can be made, and I built one myself in Delphi in 30 minutes with the aid of Azure AI.