The whole world has advanced into the digital age, and a lot of appliances, gadgets, and accessories now depend on the internet for their operation. These devices are designed with state-of-the-art technology so they can communicate smoothly at any time, and they have become so popular that they outnumber the human population. There are approximately 7.62 billion people in the world, but surprisingly there are around 20 billion IoT devices connected to the internet.
New IoT devices emerge every day: home automation systems, smartwatches, smart gadgets, smart vehicles and a long list of other things that make your life easier and more fun in today’s world.
Through my work in Innovation at Gunnebo Business Solutions, I get to work on quite a few cutting-edge projects to bring Gunnebo into the connected future. The GBS team strives to develop a scalable collaboration platform that supports each business unit’s digitalization and software offering. Our main focus is to lead Gunnebo’s business units into the digital future of software services and enable product-as-a-service sales.
I am currently working on a really exciting project with our Safe Storage Business Unit. We are working on a brand new smart safe, which can easily be integrated into different parts of the home – kitchen, bedroom or bathroom – to store your valuables. The safe is designed to suit everyday needs and can be used for storing valuables such as car keys, jewelry, credit cards, visas, passports or anything else important to you.
The safe is designed to be a simple and convenient solution that can be accessed by customers around the world. Anyone interested in the best security for their valuables should try out this option. Not only does the safe keep your valuables secure, it is also aesthetically appealing and built with the best technology, which makes it only more attractive.
Like any smart device, this safe can of course easily be connected to the owner’s mobile phone and send telemetry to the cloud. This is where I come in. I am working with our team in Markersdorf on securely merging the classic, mechanical parts of a safe with modern IoT technology.
To make sure that our new IoT device delivers on its potential, it is developed with state-of-the-art technology, both physically and on the firmware and software side, which makes it reliable and easy to use.
To ensure the efficiency of our operations, we work with agile partners like Microsoft, 3H, Polytech Software and others to help fuse entrepreneurial spirit with professional development of the product. Through their involvement, we have been able to achieve optimal results.
As mentioned earlier, the Internet of Things (IoT) is a system of interrelated computing devices and mechanical and digital machines. This means it can be just about anything, from your television to your wristwatch. Over time, the scope of IoT devices has changed from what it used to be due to the convergence of multiple technologies: real-time analytics, machine learning, commodity sensors, and embedded systems.
IoT devices offer their users a number of impressive benefits: increased interaction between devices, greater automation and control, easier operation, time and money savings, increased efficiency, and so on. But they also have a few drawbacks of their own: they can easily become highly complex, they may be affected by privacy and security breaches, they can reduce safety for users, and so on.
The market for IoT devices is expanding every day and becoming more popular as its number of users also increases. This might be the first IoT device from Gunnebo, but it is definitely not the last.
If you want to know how Gunnebo works with IoT in the consumer market, feel free to contact me at bjorn.nostdahl@gunnebo.com
All good things come to an end and LEAP is no exception. It was a great week full of interesting and enlightening sessions. Day 5 was a fitting end to the week with its focus on Design with best practices.
Let’s get to the sessions; the day began with a keynote by Derek Martin on the topic Design for failure. Derek is a Principal Program Manager and spoke about what not to do when designing a product. He spoke about building Azure and how the lessons learned can be used to understand and anticipate challenges.
The focus was given to managing unexpected incidents not only in the application environment but also in the cloud as a whole.
Brian Moore took over with his keynote on Design for Idempotency – DevOps and the Ultimate ARM Template. He is a Principal Program Manager for Azure. The focus of the session was on creating reusable Azure Resource Manager Templates and language techniques to optimize deployments on Azure. The intention of these reusable templates is to introduce a “Config as code” approach to DevOps.
He took his time to explain “the Ultimate ARM Template” and other key points about it. Brian explained that the Ultimate ARM Template uses any available language constructs to increase the impact of minimal code. The template simply aims to simplify your work, and it offers a variety of benefits for its users. To guarantee the efficiency of ARM, he also explained the practices to avoid. It is a template that provides you with the best options for the most effective results and lacks nothing essential.
Alexander Frankel, Joseph Chan, and Liz Kim conducted their joint keynote on Architecting a well-governed environment using Azure Policy and Governance after the morning coffee break.
They illustrated real-life examples of how large enterprises scale their Azure applications with Azure Governance services like Azure Policy, Blueprints, Management Groups, Resource Graph and Change History.
The next session was on Monitor & Optimize your cloud spend with Azure Cost Management and was conducted by Raphael Chacko. Raphael is a Principal Program Manager at Azure Cost Management.
The keynote’s main focus was optimizing expenditure on Azure and AWS through cost analysis, budgeting, cost allocation, optimization, and purchase recommendations. The main features of Azure Cost management were highlighted.
It was right back to business after a quick lunch break. Stephen Cohen took over with his session on Decomposing your most complex architecture problems.
Most of the session was spent on analyzing and coming up with answers to complex architecture-related problems raised by participants. It was a very practical session and addressed many commonly faced issues.
The next session was conducted by Mark Russinovich, the CTO of Microsoft Azure.
Day 5 had a shorter agenda and was concluded with Derek Martin returning for another keynote on Networking Fundamentals. Derek spoke about Azure Networking Primitives and how they can be used to strengthen the network security of any type of organization using Azure environments. Azure Networking Primitives can be used in a flexible manner, so that newer, more modern approaches to governance and security protocols can be adopted easily.
And that was it. The completion of a great week of LEAP. I hope all of you enjoyed this series of articles and that they gave you some level of understanding about the innovations being done in the Azure ecosystem.
I just left Microsoft Headquarters after another interesting day at LEAP. Today’s topics were quite interesting, especially DevOps, because of all the innovations that are being made. I’m actually a little emotional that there’s just one more day remaining.
Jason Warner began the day’s session with his keynote on From Impossible to Possible: Modern Software Development Workflows. As the CTO of GitHub, Jason shared much of his experience regarding the topic.
The underlying theme of the keynote was creating an optimal workflow that leads to the success of both the development process and the team. He pointed out the inevitable nature of modernization and said it is important that a company does not become mediocre or fall behind.
Before he went on to the topic of the day, Jason spoke about himself. He also didn’t hesitate to share some valuable history and information about his life. Jason Warner introduced the audience to some brief insight into the capabilities of GitHub and the success they have managed to achieve so far.
According to Jason, proper modernization requires a workflow built on automation, intelligence and open source. Next, he identified GitHub’s ability to produce the best workflows to improve company efficiency. It didn’t end there, as he continued by talking about the benefits of scaling these workflows.
Abel Wang continued with the next session and his keynote was on Real World DevOps. Abel is a Principal Cloud Advocate for Azure. This session was truly valuable as it covered the full process of a production SDLC and many other important areas such as infrastructure, DNS, web front ends, mobile apps, and Kubernetes API’s.
At the start of his presentation, Abel Wang introduced us to his team and gave a rundown of some vital information about DevOps. Why do you need DevOps? Well, it is a solution provider, supports any language, and boasts a three-stage conversation process for results.
After a much-needed coffee break, we embarked on the next session on Visual Studio and Azure, the peanut butter and jelly of cloud app devs. The speaker, Christos Matskas is a Product Marketing Manager at Microsoft.
The session focused on explaining how well Azure and Visual Studio support development, live debugging, and zero downtime deployments. Christos also spoke about leveraging integrated Azure tools to modernize .Net applications.
The Visual Studio team is committed to providing developers with the best tools available. Visual Studio supports all types of developers and redefines their coding experience. The great thing about the team is that they don’t rest on their laurels and are constantly in search of innovation. There is even a Visual Studio Live Share feature that allows developers to share content with each other in real time.
Evgeny Ternovsky and Shiva Sivakumar jointly conducted the next session on Full stack monitoring across your applications, services, and infrastructure with Azure Monitor. Many demonstrations were performed to give an overview of the capabilities of Azure Monitor. The demos included monitoring VMs, containers, other Azure services, and applications. In addition, setting up predictive monitoring for detecting anomalies and forecasting was also discussed.
Azure has a full set of services it uses to oversee all your security and management needs. All the tools you need are built into the platform, reducing the need for 3rd-party integration. On top of that, Azure has developed a set of newer features: partner integrations, container monitoring everywhere, new pricing options, and network troubleshooting.
After lunch, I joined the alternative session, which was on Artificial Intelligence and Machine Learning. The session covered the use of Azure Cognitive Services with optimized scaling in order to improve the customer care services provided by organizations such as telecoms and telemarketers. Then we were back at another joint session, by Satya Srinivas Gogula and Vivek Garudi, on the topic Secure DevOps for Apps and Infrastructure @ Microsoft Services.
The speaker spoke about the wide adoption of DevOps practices and Open Source Software (OSS) and the vulnerabilities they introduce. The latter part of the session focused on best practices for secure DevOps with Azure.
The next keynote was on Transforming IT and Business operations with real-time analytics: From Cloud to the intelligent edge. It was jointly delivered by Jean-Sébastien Brunner and Krishna Mamidipaka and focussed on the challenges faced by IT and Business teams trying to understand the behavior of applications. The speakers explained the benefits of Azure Stream Analytics to ingest, process, and analyze streaming data in order to enable better analytics.
A good example of Azure Stream Analytics at its best is that it can be used for earthquake and storm predictions.
Taylor Rockey concluded the day with his keynote on MLOps: Taking machine learning from experimentation to production. MLOps is the integration of machine learning and DevOps. MLOps has proven to have numerous benefits, including scalability, monitoring, repeatability, accountability, traceability and so on. The platform has impressive features that make it a first choice for many developers. The problem many organizations face is the lack of proper understanding and tooling to use machine learning for production applications. The session focused on the use of machine learning in production applications with Azure Machine Learning and Azure DevOps.
And that’s a wrap. Don’t forget to tune into tomorrow’s article.
I’m at Microsoft for LEAP and we just wrapped up another day of interesting discussions. If you missed my update regarding day 1, make sure to have a look at it here.
Today’s theme was Design for Performance and Scalability. Many legacy applications are being replaced because they are not performance-oriented and scalable at their core. This is something that has to be introduced right from the design stage. Today’s speakers covered many of the core areas which need to be optimized to enable both performance and scalability.
Vamshidhar Kommineni took us right from breakfast to using Azure Storage for the data storage needs of Azure applications and how it can be used to enhance performance. Vamshidhar spoke about the innovations made in the storage services layer in 2019. He also briefly shared the plans for 2020.
Corey Newton-Smith was next and focused on IoT applications. Corey has been with Microsoft since 2003 and currently functions as the Principal Group PM for IoT Central. She shared the current state of IoT and Microsoft’s plans for the near future, highlighting their vision.
Corey explained that Azure IoT represents a new era of digitization among industries, an innovation that allows brands to do so much more. The objective behind the platform is to enable a digital feedback loop. She discussed how much Microsoft has done to make IoT better: it is now capable of bidirectional communication, can be scaled to suit enterprises of any size, and provides end-to-end security. Microsoft is also planning improvements that would support scenarios that are not currently cloud-feasible. What’s more, everything can be tailored specifically to the exact solution that you need.
The next session began after some light mingling during the coffee break. It was back to business with Jose Contreras and his keynote on decomposing Monoliths into Microservices.
Enterprise applications have made a gradual transition from being monolithic to being microservice-based. Jose explained strategies that can help with this process, focusing on memory, computing, and schema. He then discussed migrating existing monolith applications into microservices without affecting ongoing operations, focusing on the design, execution, and DevOps aspects.
Jose spoke about a number of factors that really prove the usefulness of transforming monoliths into microservices. As part of his talk, he highlighted the factors to consider when making use of this approach, the differences between private and shared caches, and considerations for using a cache.
Interestingly, he then moved on to Azure Compute. He listed all of the available services and gave detailed information on their hosting models, DevOps support, scalability, and other criteria.
Clemens Vasters’ keynote focused on how messaging is shaping enterprise applications and, importantly, on how Microsoft Azure can make all of it better. He is a Product Architect at Microsoft and highlighted how open standards and messaging can be used to move applications to the cloud. Some of the areas he touched on were Event Hubs, Service Bus, Event Grid, CNCF CloudEvents, and Relay with WebSockets.
According to him, users can choose from a series of options to connect a wide range of devices. Ease of connectivity is provided by the intelligent edge or the intelligent cloud, and it works at varying scales and plays well with Telco 4G/5G. On top of this, cloud services can be applied to create automotive and smart-city solutions, support industrial automation, and speed up processes.
Clemens continued by clearing the air on the standards the cloud services operate on. Everything is built according to standards and designed to be secure. Such was the level of quality on display.
After a quick lunch break, an alternative session was conducted for those who were already familiar with the campus. This session on Messaging Guidance was conducted by Francis Cheung and was related to session 4. However, Francis focused more on how we could assess if some of those tools were a good fit for our projects. He also touched on managing and versioning message schemas.
Next was David Barkol’s session focusing on Designing an Event-driven Architecture on Azure through a workshop approach. He challenged attendees to solve problems related to messaging in a group setting. As a Principal Technical Specialist for Azure, David used his vast experience to reinforce the existing knowledge of attendees about Azure messaging services. He really had a lot of interesting things to say.
Using a few simple statements, he was able to highlight the problems of the customer, identify their needs, and show how to solve them with an event-driven architecture. An event-driven architecture can eliminate bottlenecks and allow for easier transmission of information, and Azure messaging services can address the demands identified by the consumer. He also mentioned that Event Hubs Geo-DR can provide a backup or secondary region.
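To make the messaging side concrete, here is a minimal sketch (my own, not from the session) of publishing telemetry to Azure Event Hubs with the Python SDK; the connection string, hub name and payloads are placeholders you would replace with your own.

```python
# pip install azure-eventhub
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details -- use your own namespace, key and hub name.
CONNECTION_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>"
EVENTHUB_NAME = "telemetry"

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

with producer:
    batch = producer.create_batch()        # events are grouped into batches
    batch.add(EventData('{"deviceId": "safe-001", "doorOpen": false}'))
    batch.add(EventData('{"deviceId": "safe-001", "temperature": 21.5}'))
    producer.send_batch(batch)             # one call sends the whole batch
```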
Derek Li conducted his keynote next. He focussed on Serverless platforms based on Azure Functions and Logic Apps. Derek is a Senior Program Manager. His keynote focused on how serverless technologies have impacted how applications are built. He also spoke on how Azure Functions and Logic Apps can be used to speed up delivery.
The last session was after a very welcome Cola Zero break. It refreshed us for Rahul Kalaya’s keynote on deriving insights from IoT Data with Azure Time Series Insights. Rahul spoke about design choices, principles and lessons learned with regards to maintaining the highest possible uptime of cloud databases and servers. Many stories from his experiences with Azure SQL made the keynote even more interesting. And that was it. The completion of a day of meaningful sessions.
I look forward to sharing my next article on Day 3: Designing for Availability and Recoverability.
This year is already off to a fantastic start! I am so excited to be here at the LEAP conference at the Microsoft Headquarters in Redmond, just outside Seattle. LEAP is a perfect way for me to keep up to date with new technology and how to apply it here at Gunnebo.
The focus of the day was Design for Security. The threat of cyber attacks and hackers is as pressing as ever, so the need for cloud security is crucial. Although technological advancement has driven an evolution in cloud security over the years, keeping the right level of visibility and control over their applications is still a challenge for many organizations. This means that finding a balance between cloud security and ease of use is a hard nut to crack. Today’s program discussed how Azure copes with this issue, and speakers introduced new and updated features recently brought to Azure to improve the security of cloud applications.
The highlight of today’s program consists of five great keynotes. The first on the list was Scott Guthrie, the executive vice president for Microsoft’s Cloud. He is an incredible orator and kept the audience thrilled with his in-depth explanations on how Azure helps organizations to deliver product innovation and better customer experience securely. It was frankly impossible to have been there without taking away more than a few vital points and a better understanding of Azure.
Then Stuart Kwan, who is a principal program manager at Microsoft, was next in line. He backed up Scott Guthrie with a great keynote on how authentication works in today’s applications. Stuart has a wealth of experience under his belt, having worked on identity and security-related technologies since joining Microsoft in 1996. Few people have more experience in that field. He is the guy to listen to on topics like Active Directory Federation Services and Windows Identity Foundation. The main focus was on OAuth, OpenID Connect, and SAML. OpenID Connect is a simple identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines mechanisms to obtain and use access tokens to access protected resources, but it does not define standard methods to provide identity information. OpenID Connect implements authentication as an extension to the OAuth 2.0 authorization process. It includes information about the end user in the form of an id_token that verifies the identity of the user and provides necessary profile information about the user.
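As a small, hedged illustration of that last point (the JWKS URL and client ID below are placeholders, and the exact endpoints depend on your identity provider), validating an id_token in Python with the PyJWT library looks roughly like this:

```python
# pip install "pyjwt[crypto]"
import jwt
from jwt import PyJWKClient

# Placeholder values -- use your provider's metadata and your app's client ID.
JWKS_URL = "https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys"
CLIENT_ID = "<application-client-id>"

def validate_id_token(id_token: str) -> dict:
    """Verify the id_token's signature and audience, then return its claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],     # OpenID Connect tokens are typically RS256-signed
        audience=CLIENT_ID,       # the token must be issued for this application
    )
    return claims                 # e.g. claims["sub"], claims["name"]
```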
When Yuri Diogenes took control of the stage, everyone knew that his talk would be primarily based on how cloud security is evolving and becoming more mature. Yuri is a Senior Program Manager at Microsoft for Cloud and AI Security.
Before Yuri moved on to talk about Azure security, he provided some insights into the problematic scenarios that many companies find themselves in. According to him, security hygiene has to be taken seriously or any cloud-based infrastructure will suffer; organizations simply have to protect themselves against modern-day threats. He carefully explained that Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud – whether they’re in Azure or not – as well as on-premises. In simple terms, Azure Security Center is the new security hygiene you need.
Yuri went further to explain the benefits of Azure Security Center and Azure Sentinel. They provide all-round security and also afford a degree of customizability. According to him, Azure is capable of protecting Linux and Windows VMs from threats, protecting cloud-native workloads from threats, detecting fileless attacks, providing cloud workload protection for containers, and so on.
The next person on stage was Nicholas DiCola, a Security Jedi at Microsoft. He thrilled the audience with his discussion of Azure Sentinel. He explained how Sentinel functions as a cloud-native SIEM for intelligent security analytics across an entire organization. It offers limitless cloud speed, can be used at any scale, provides its users with faster threat protection, and easily integrates with all existing tools.
According to him, Azure Sentinel was designed to collect data for visibility, detect threats through analytics and hunting, investigate incidents, and respond to them automatically. Azure Sentinel gets its data from numerous sources such as the Linux agent, the Windows agent, cloud services, custom apps, appliances, Azure services and so on. After collating all the necessary data, its analytics scan for any possible threats, and you are then able to monitor your data and activity.
Last but not least we had a session with Sumedh Barde and Narayan Annamalai. They opened a fascinating discussion on how to secure certificates, connection strings, or encryption keys and new networking capabilities of Azure. Sumedh Barde is Program Manager on the Azure Security team, and Narayan is the leader of the SDN product management group in Microsoft Azure that focuses on virtual networks, load balancing, and network security.
These two gave us great insight into Azure Key Vault. They explained how it functions as a tool for securely storing and accessing secrets. From what I learned at the conference, the secret to tightly controlling and securing access to things like API keys, passwords, or certificates is to use a vault. A vault is your very own logical group of secrets.
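As a minimal sketch of that idea (the vault URL and secret name are placeholders), storing and reading a secret with the Azure Key Vault Python SDK looks roughly like this:

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-vault-name>.vault.azure.net"   # placeholder vault URL

client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

# Store a secret once...
client.set_secret("storage-connection-string", "<connection-string-value>")

# ...and read it back wherever the application needs it, instead of hard-coding it.
secret = client.get_secret("storage-connection-string")
print(secret.name, "retrieved, value length:", len(secret.value))
```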
It was a great day here in Redmond and an excellent opportunity to brush up my knowledge of cloud security. I’m actively looking forward to tomorrow.
Working in IoT, we sometimes need to handle large data streams that may or may not be totally accurate. Streams might contain noise, inaccurate or unreal readings, and other unwanted data.
Switch debouncing
Debouncing can be done in the hardware itself, or in software. Hardware debouncing can be done using either an S-R circuit or an R-C circuit. Two well-known algorithms for software debouncing are vertical counters and shift registers. Despite being well known, in the literature these methods are typically presented as a code dump with little or no explanation. In this article, I will touch upon these circuits, methods and other algorithms and their use in IoT debouncing.
Understanding Switch Bounce
When the contacts of mechanical switches toggle from one position to another, these contacts bounce (or “chatter”) for a brief moment. During the first millisecond, the bounces are closely spaced and irregular, and although all of it happens in the course of milliseconds, high-speed logic will detect these bounces as genuine presses and releases.
A button release produces bounces too, but it is common for a switch release to produce less bounce than for a switch press.
Switches usually become stable after 5-20ms depending on the quality, size and electronics of the hardware.
Hardware Debouncing
Debouncing using S-R circuits
Switch debouncing using an S-R circuit is one of the earliest hardware debouncing methods. In this circuit, an S-R latch, together with pull-up resistors, removes the bounces. It is still the most effective debouncing approach.
The figure below depicts a simple digital debouncing circuit which is used quite often.
The circuit uses two cross-coupled NAND gates forming an S-R latch, an SPDT (Single Pole Double Throw) switch, and two pull-up resistors. Each resistor generates a logic ‘one’ for its gate input, while the switch pulls one of the inputs to ground.
With the switch in the position shown in the figure, the output of the upper gate is ‘1’ regardless of the other input. That ‘1’, together with the logic ‘one’ created by the bottom pull-up resistor, drives the lower NAND gate’s output to zero, which is quickly fed back to the upper gate. If the switch bounces back and forth between the contacts, or rests for a while in the region between the terminals touching neither, the latch preserves its state because the ‘0’ from the bottom NAND gate is fed back. The contacts may chatter, but the latch’s output never bangs back, and therefore the switch is bounce free.
R-C Debouncing
Although the S-R approach is still common, its bulkiness causes problems when you need to debounce many switches, since it uses quite a few hardware components. Another drawback of S-R circuits is that SPDT switches are more expensive than SPST switches. Thus, a different debouncing approach emerged using an R-C circuit. The basic principle is to use a capacitor to filter out rapid changes in the switch signal.
The following image demonstrates a basic R-C circuit which is used for debouncing.
It is a simple circuit which uses two resistors, a capacitor, a Schmitt-trigger hex inverter and an SPST switch.
When the switch opens, the voltage across the capacitor, which is initially zero, begins to charge toward Vcc through R1 and R2. The voltage at Vin rises, and hence the output of the inverting Schmitt trigger is low (logic 0).
When the switch is closed, the capacitor discharges to zero; subsequently the voltage at Vin is ‘0’ and the output of the inverting Schmitt trigger is high (logic 1).
During bouncing, the capacitor prevents the voltage at Vin from instantly reaching either Vcc or ground.
You may wonder why a standard inverter is not used. There is a problem with using a standard inverter gate here: TTL defines a logic zero input as any voltage between 0 and 0.8 V, and while the slowly changing capacitor voltage passes through the undefined region the output can be very unpredictable. Thus, we must use a Schmitt-trigger hex inverter. Thanks to its hysteresis, the output remains constant even if the input varies or dithers, and it prevents the output from switching back and forth.
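As a rough worked example (all component values and thresholds here are assumptions, not taken from the figure): suppose R1 = R2 = 10 kΩ, C = 1 µF, Vcc = 5 V and a positive-going Schmitt threshold of about 2.5 V. On a switch release the capacitor charges with time constant τ = (R1 + R2) · C = 20 ms, and the time needed to reach the threshold is

t = τ · ln( Vcc / (Vcc − VT) ) = 20 ms · ln(5 / 2.5) ≈ 14 ms

which is comfortably longer than a typical 1–10 ms bounce burst, so the chatter never reaches the inverter’s switching point.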
Software Debouncing
We can debounce switches in software as well. The basic principle is still to sample the switch signal and filter out any glitches. The most commonly used algorithms for this are counters and shift registers.
Counter Method
The first approach uses a counter to time how long the switch signal has been low. If the signal has been low continuously for a set amount of time, then it is considered pressed and stable.
Let’s see the steps in the Counter method.
First, set the count value to zero. Then set up a sampling event with a certain period, say 1 ms; you can use a timer for that. On each sample event, do the following:
If the switch signal is high, reset the counter variable to 0 and set the internal switch state to ‘released’. If the switch signal is low, increment the counter variable by 1 until it reaches 10. Once the counter reaches 10, set the internal switch state to ‘pressed’.
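A minimal sketch of the counter method in Python (the pin-reading callback and the 1 ms tick are assumptions; on a real microcontroller this logic would typically live in a timer interrupt):

```python
DEBOUNCE_SAMPLES = 10              # 10 samples x 1 ms = 10 ms of stable signal required

counter = 0
switch_state = "released"

def on_sample_event(read_pin):
    """Called every 1 ms; read_pin() returns the raw switch level (high = released)."""
    global counter, switch_state
    if read_pin():                 # signal high -> switch released
        counter = 0
        switch_state = "released"
    else:                          # signal low -> possible press
        if counter < DEBOUNCE_SAMPLES:
            counter += 1
        if counter == DEBOUNCE_SAMPLES:
            switch_state = "pressed"   # stable low for 10 consecutive samples
```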
Shift Register Method
This method is similar to the counter method; the only difference is that it uses a shift register. The algorithm assumes an unsigned 8-bit register value, as usually found in microcontrollers.
First, set the shift register variable to 0xFF. Set up a sampling event with a period of 1 ms with the help of a timer. On each sample event, do the following:
First, shift the variable towards the MSB, the most significant bit. Set the LSB, the least significant bit, to the current switch value. If the shift register value is equal to 0, set the internal switch state to ‘pressed’; otherwise, set the internal switch state to ‘released’.
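The same idea in a short Python sketch (again assuming a 1 ms sampling tick and a pin-reading callback):

```python
shift_reg = 0xFF                   # 8-bit history of the last eight samples, 1 = released
switch_state = "released"

def on_sample_event(read_pin):
    """Called every 1 ms; shifts the newest raw sample into the 8-bit history."""
    global shift_reg, switch_state
    shift_reg = ((shift_reg << 1) | (1 if read_pin() else 0)) & 0xFF
    if shift_reg == 0x00:          # eight consecutive low samples -> stable press
        switch_state = "pressed"
    else:
        switch_state = "released"
```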
IoT Sensor Bounce
Recently my team has been working on telemetry involving OCR decoding of license plates. I consider data from an OCR routine, a temperature sensor or a push button the same kind of signal, and debouncing the telemetry can be done in much the same way.
First of all, we needed to clean up the data stream by filtering out incorrect values. Since there are no check digits on license plates, we chose to trust a result only if the camera returned three similar plates within five iterations.
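Here is a minimal sketch of that rule in Python (treating “similar” as exact string equality for simplicity; the window of five readings and the threshold of three follow the description above, everything else is illustrative):

```python
from collections import Counter, deque
from typing import Optional

WINDOW = 5        # look at the last five OCR readings
REQUIRED = 3      # accept a plate seen at least three times in that window

recent = deque(maxlen=WINDOW)

def debounce_plate(raw_reading: str) -> Optional[str]:
    """Return a trusted plate number, or None while the stream is still 'bouncing'."""
    recent.append(raw_reading)
    plate, count = Counter(recent).most_common(1)[0]
    return plate if count >= REQUIRED else None

# Example: a noisy OCR stream for the same physical plate
for reading in ["AB1234C", "AB1Z34C", "AB1234C", "A81234C", "AB1234C"]:
    print(reading, "->", debounce_plate(reading))
```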
If you want to know more about how to debounce data streams or if you have any questions, please reach out to me: bjorn.nostdahl@gunnebo.com
On May 29th, Satya Nadella, CEO of Microsoft, invited Nordic customers and partners to a small conference in Sweden, putting forth his ambitions for the future. This was Nadella’s first ever visit to Sweden since stepping into the shoes of the company’s CEO. He touched upon issues of today’s tech world, but most importantly made Swedish people aware of his company’s firm belief in global digitization and described what the future holds in store.
Self-discipline and excitement-seeking are two pillars of Satya’s personality, which have made this Indian-born technologist respected worldwide; from Asia to Europe, he is given a warm welcome.
Dinner with Microsoft’s ISV Team
A day before Nadella’s address, I was invited to a dinner and socializing hosted by Joanna and Martin from Microsoft. ISV stands for Independent Software Vendor and refers to individuals or companies who develop, market and sell software running on third-party hardware and software platforms such as Microsoft’s.
The term ISV is prominent in the tech world and is used by most tech companies, including Microsoft. To put it in layman’s terms: when Microsoft was developing Windows, it partnered with numerous companies and individuals to take the project forward, on both the technical and non-technical fronts.
The next morning, I had the opportunity to see some of the companies that have implemented their solutions on the Microsoft platforms at Hotel Berns. There we received a pep talk to Microsoft and partners on the future and what efforts we need to put in to make sure it is heading in the right direction.
The Microsoft tech show commenced in style with the tunes of Sweden’s most renowned DJ and saxophone artist Andreas Ferronato. His soul-soothing set got everyone in the mood 🙂
The Volvo Group Digitizing its Operations
Hillevi Pihlblad from Volvo Group talked about how employees hate change and how, across the globe, it is not easy to adapt to changes. Further, she illustrated how the Volvo Group has translated the changes into something positive and made people understand why embracing change can make their lives more convenient.
The H&M Group And The Use of AI To Serve Their Customers The Best
Arti Zeighami, a senior executive and business leader of the H&M Group, talked about how the company is investing in artificial intelligence technology to tailor store offerings. Heading the Advanced Analytics and AI function, he gave a presentation on how H&M Group is implementing advanced algorithms to scrutinize sales and returns, which has helped them more efficiently predict the needs and demands of their customers.
Satya Nadella, The Man of the Moment Taking The Center Stage
Then finally came the moment when Helene Barnekow introduced Microsoft’s CEO Satya Nadella. He was greeted with warm applause from the tech people present.
Nadella, who took over the role of CEO from Steve Ballmer in 2014, is globally renowned for his dynamic leadership and a true passion for technology innovation. Prior to becoming the company’s CEO, Nadella was Microsoft’s EVP of the cloud and enterprise group.
His journey as CEO has transformed Microsoft in terms of technology, while also accentuating the company’s business model and corporate culture. His emphatic leadership abilities steered Microsoft away from its struggling smartphone strategy to focus on other technical areas such as augmented reality and cloud computing.
He was also responsible for the purchase of LinkedIn, a network of professionals, for around $26.2 billion. Did you know that since he took over as CEO, the company’s stock has increased by 150%?
The theme of the address by Satya Nadella was how communities and companies are uniting together for the digitized future of Sweden. This speech was largely about Microsoft’s own digital products and services, and how they can drive the world forward.
In his address to the tech people of Sweden, he threw light on various segments of technology: artificial intelligence, digital transformation and innovation. The American giant was in Stockholm to make a big announcement about setting up data centers in the country.
“We have the ambition that the data centers we build in Sweden should be among the most sustainable in the world, this is another step in strengthening our position as a long-term digitization partner for Swedish businesses”
Key Highlights from Nadella’s Address
“It would be wrong for me not to talk about trust. Because in the end, it is something that will be very important to us – not only to create new technology but to really assure that there is confidence in the technology that we create” he says on stage and continues “We need to create systems that handle personal data and security as a human right.”
Satya Nadella talked about the recent investments his company is making in Sweden. Among other things, Microsoft will build two data centers in Gävle and Sandviken, intended to be among the most sustainable in the world.
“We will use one hundred percent renewable energy. They will also be completely free from operational emissions. We set a new standard when it comes to the next generation data center. It starts here in Sweden,” said Satya Nadella.
Apart from the data centers, Satya Nadella also highlighted recent key partnerships during his speech at the China Theater. He talked about the company’s collaboration with Kiruna, a city that uses Microsoft HoloLens and AR to plan its underground infrastructure.
Microsoft in Sweden
Satya Nadella, Microsoft’s CEO, put forth examples of the company’s interest in the country:
”There have been huge breakthroughs in the last three years, regardless of whether we are talking about object identification or voice recognition. This must be translated into infrastructure. Here we invest heavily.”
“Take Spotify who has a new very cool podcast tool. It lets anyone do their own podcast and they use our speech recognition to convert speech into text. The most interesting thing they do is that for anyone who wants to modify their podcast, they can enter and edit in writing and that the podcast then automatically changes. It shows how to use AI to make it more efficient”
Ending the Visit on a High
Later in the day, Nadella visited Samhall Innovation Days, a hackathon with the aim, as stated in the company’s press release, of “creating the conditions for people with a diagnosis within the autism spectrum to come into work”.
Last summer, Microsoft announced two data centers in Norway to bring their cloud computing services to all of Europe.
“By building new data center regions in Norway, we facilitate growth, innovation and digital transformation of Norwegian businesses – whether large companies, the public sector or some of the 200,000 small and medium-sized companies that together create the future of Norway,” said CEO Kimberly Lein-Mathisen in Microsoft Norway when the Norwegian plans became known.
Nadella declared that both data centers will run on 100% renewable energy, so the project benefits the country, creating an ocean of new opportunities for the locals. He also talked about his company’s association with tech companies and communities in Sweden, one being the city of Kiruna and another the Sandvik company.
The address at the China Theater in Stockholm by Microsoft’s top boss, Satya Nadella, was like a pep talk. He gave his viewpoint on a variety of technology aspects. Most importantly, he announced the company’s plans to build two data centers in this Nordic country.
Machine learning is gradually becoming the driving force for every business. Business organizations, large or small, are looking to machine learning models to predict present and future demand and to improve innovation, production, marketing, and distribution of their products.
Business value concerns all forms of value that decide the well-being of a business. It’s a much broader term than economic value, encompassing many other factors such as customer satisfaction, employee satisfaction and social values. It’s the key measurement of the success of a business. AI helps you accelerate this business value in two ways: by enabling correct decisions and by enabling innovation.
Remember the days when Yahoo was the major search engine and Internet Explorer was the major web browser? One of the main reasons for their downfall was their inability to make correct decisions. Wise decisions are made by analyzing data: the more data you analyze, the better decisions you make. Machine learning greatly supports this cause.
There was a time when customers accepted whatever companies offered them. Things are different now; customer demand for new features keeps increasing. Machine learning has been the decisive factor behind almost every new innovation, whether it be face recognition, personal assistants or autonomous vehicles.
Machine Learning in more detail
Let’s start with what machine learning is. Machine learning enables systems to learn and make decisions without being explicitly programmed for it. Machine learning is applied in a broad range of fields, and nowadays almost every human activity is getting automated with its help. A particular area of study where machine learning is heavily exploited is data science.
Data science plays with data: insights must be extracted from it to make the best decisions for a business.
The amount of data that a business has to work with is enormous today. For example, social media produces billions of data points every day. To stay ahead of your competitors, every business must make the best use of this data. That’s where you need machine learning.
Machine learning provides many techniques for making better decisions out of large data sets. These include neural networks, SVMs, reinforcement learning and many other algorithms.
Among them, neural networks are leading the way. They improve consistently, spawning child technologies such as convolutional and recurrent neural networks to provide better results in different scenarios.
Learning machine learning from the beginning and trying to develop models from scratch is not a wise idea. That yields huge costs and demands a lot of expertise in the subject. That is why you should consider taking the assistance of a machine learning vendor. Google, Amazon and Microsoft all provide machine learning services. Let’s take Microsoft as an example and review what qualities we should look for when selecting a vendor.
Using cloud as a solution for machine learning
It simplifies and accelerates the building, training, and deployment of machine learning models. It provides a set of APIs to interact with when creating models, hiding all the complexity of devising machine learning algorithms. Azure has the capability to identify suitable algorithms and tune hyperparameters faster. Autoscale is a built-in feature of Azure cloud services which automatically scales applications; this allows your application to perform at its best while keeping the cost to a minimum. Azure machine learning APIs can be used with any major technology, such as C# and Java.
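As a hedged, minimal sketch of how this looks in practice (using the v1 Azure Machine Learning Python SDK; the workspace config file, experiment name and metric are assumptions, and the newer v2 SDK has a different surface):

```python
# pip install azureml-core        (Azure ML SDK v1)
from azureml.core import Workspace, Experiment

# Assumes a config.json downloaded from the Azure portal describing your workspace.
ws = Workspace.from_config()

exp = Experiment(workspace=ws, name="demand-forecasting")
run = exp.start_logging()

# ... train your model here ...
run.log("rmse", 4.2)               # illustrative metric value
run.complete()                     # the run and its metrics show up in Azure ML studio
```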
There are many other advantages you will have with cloud Machine Learning
Flexible pricing. You pay for what you use.
High user-friendliness. Easier to learn and less restrictive.
More accurate predictions based on a wide range of algorithms.
Fine tuning results are easier.
Ability to publish your data model as a web service, which is easy to consume.
The tool allows data streaming platforms like Azure Event Hubs to consume data from thousands of concurrently connected devices.
You can publish experiments for data models in just a few minutes whereas expert data scientists may take days to do the same.
Azure security measures manage the security of Azure Machine Learning that protects data in the cloud and offers security-health monitoring of the environment
Using Cognitive Services to power your business applications
We will go on to discuss how Azure Cognitive Services can be used to power up a business application. Azure Cognitive Services are a combination of APIs, SDKs, and services which allow developers to build intelligent applications without having expertise in data science or AI. These applications can have the ability to see, hear, speak, understand or even reason.
Azure cognitive services were introduced to extend the Microsoft existing portfolio of APIs.
New services provided by Azure Cognitive Services include:
Computer Vision API, which provides advanced algorithms necessary to implement image processing
Face API to enable face detection and recognition
Emotion API gives options to recognize the emotion of a face
Speech service adds speech functionalities to applications
Text analytics, which can be used for natural language processing
Most of these APIs were built targeting business applications. Text analytics can be used to harvest user feedback, thus allowing businesses to take the necessary actions to accelerate their value. Speech services allow business organizations to provide better customer service to their clients. All of these APIs have a free trial which can be used to evaluate them. You can use these cognitive services to build various types of AI applications that will solve complex problems for you, thus accelerating your business value.
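As a hedged example of how little code this takes (the endpoint, key and feedback texts are placeholders), running sentiment analysis over customer feedback with the Text Analytics Python SDK looks roughly like this:

```python
# pip install azure-ai-textanalytics
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key from a Cognitive Services / Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

feedback = [
    "The new smart safe is easy to install and feels very solid.",
    "Setup took far too long and the app kept crashing.",
]

for doc in client.analyze_sentiment(feedback):
    # Each result carries an overall label plus per-class confidence scores.
    print(doc.sentiment, doc.confidence_scores.positive, doc.confidence_scores.negative)
```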
If you want to talk more about ML and AI, feel free to contact me: bjorn.nostdahl@gunnebo.com 🙂
Microsoft LEAP is an event for developers worldwide who are looking for first-hand training from Microsoft. It takes place annually at Microsoft headquarters in Redmond, WA. The five-day conference helps attendees fully understand how Microsoft products can be used and how they can solve the problems of their companies. This time, the participants learned how to design a cloud in an up-to-date fashion.
The following piece will provide you with a glimpse into the Microsoft LEAP program. The sections below are the highlights with the greatest impact on the developer community.
Deep Dive into Cloud Computing: Azure Strategy
On January 28, Microsoft kicked off the LEAP program for software architects and engineers. There were loads of speakers on the agenda, and Scott Guthrie was among the strongest. Scott is in charge of Microsoft’s cloud infrastructure, servers, CRM and many more tools; he was the leader of the team that created Microsoft Azure. In his keynote, “Designed for Developers”, he discussed cloud computing technology. His aim was to help developers with different levels of skill reach one goal: sustainable development and use of cloud computing.
Scott focused on how to develop clouds and maintain them. The session was concluded with the presentation of Microsoft’s anticipated plan of providing Quantum Computing in their Azure technology.
The Strong Impact of Microservice Architecture
On this issue, the most memorable session was presented by Francis Cheung and Masashi Narumoto. They talked about microservices and the strengths of the architecture, which is considered a paragon in the world of cloud computing as it has raised the bar.
The speakers mentioned several important features of a strong company that has the potential to succeed, and it was well established that the success of a microservice implementation depends mostly on a well-developed team with a strong strategy (preferably domain-driven).
No matter how beneficial microservices could be, it is not necessarily the right choice for your business. You need to be well aware of your products and the level of complexity your business needs. Having extra unrequired tools will set you back rather than take you anywhere.
SQL Hyperscale as a Cloud-Based Data Solution
This session was different as it celebrated two decades of PASS and 25 years of SQL technology being used. The speaker, Raghu Ramakrishnan, has been a CTO at Microsoft since he moved from Yahoo in 2012. With his strong background and experience, Raghu was the best candidate to discuss the use of SQL Hyperscale and how groundbreaking this technology has been.
The Hyperscale service has become a crucial update to the currently existing services. According to Ramakrishnan, this is the most modern tier of the SQL services, with the highest storage and the most computing performance; it supports databases of up to 100 TB.
This technology is generally used to replace cloud database structures as it is more reliable and accessible than other alternatives. Microsoft has added many features to SQL Hyperscale, making it a leading database solution in the market. With the amazing features discussed in the talk, it was really worth a separate session.
The Commercial Database: Cosmos Database
Deborah Chen, the Cosmos DB program manager at Microsoft, took the time to discuss one of the most popular commercial databases out there. Many current implementations use non-relational databases, and Cosmos DB is one of the most widely used of them.
As Deborah mentioned, Cosmos DB is a very versatile and responsive tool. With numerous transactions taking place every second, response time is a very sensitive thing for applications (especially real-time ones). Since it is a non-relational database, retrieving and storing data is easier and faster. This is where Cosmos stands out, as it was intentionally created with an architecture aimed at handling such tasks.
She also discussed the use of Service Level Agreements (SLAs). These agreements provide guarantees on availability and latency for all users, making Cosmos DB one of the most popular products out there.
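To give a feel for that, here is a minimal, hedged sketch of writing and point-reading an item with the Cosmos DB Python SDK (the account URL, key, database, container and the assumption that /id is the partition key are all placeholders):

```python
# pip install azure-cosmos
from azure.cosmos import CosmosClient

# Placeholder account URL and key.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

container = client.get_database_client("retail").get_container_client("orders")

# Schema-free writes and fast point reads are what Cosmos DB is built for.
container.upsert_item({"id": "order-1001", "customer": "alice", "total": 129.50})
item = container.read_item(item="order-1001", partition_key="order-1001")
print(item["total"])
```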
Monitoring Your Procedures Using Azure Monitoring
Rahul Bagaria, a product manager for Azure Monitor, joined later on to talk about the importance of monitoring your work, workflow, and operations. The monitoring process is not limited to single tasks; it also covers the connections, the workflow, and the final output. Monitoring all the steps taken through a procedure is important for maintaining efficient delivery and quality assurance as a whole, and it is also beneficial for picking out errors and problems in the cycle should they arise.
This is where Azure monitoring kicks in, with many strong details like log analytics and application insights. Rahul emphasized the importance of this tool and all the features it provides. His team has worked hard to provide a service that can help with multiple tasks, milestones, and services. This session helped the developers to learn why and how to monitor their work processes.
All in all, the first day at Microsoft LEAP 2019 was very on-topic and interesting. I look forward to the next sessions. If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com
Cloud computing has become one of the most profitable industries in the world, and cloud will remain a very hot topic for the foreseeable future. There is huge competition among cloud service providers to win customers by providing the best services, and they invest a lot of money in innovation. Thus, cloud services set most of the trends in the future IT industry. Microsoft Azure and Amazon AWS are among the leaders in innovation in their respective fields.
Data centers around the world
As the demand for cloud services is rapidly increasing in all parts of the world, establishing data centers around the globe becomes a necessity. Azure has understood this well and expects to expand its service by constructing data center regions in many parts of the world.
From news.microsoft.com article about Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. Photo by Frank Betermin
The world is divided into geographies defined by geopolitical boundaries or country borders. These geographies define the data residency boundaries for customer data. Azure geographies respect the requirements within geographical boundaries. It ensures data residency, compliance, sovereignty, and resiliency. Azure regions are organized into geographies. A region is defined by a bandwidth and latency envelope. Azure owns the greatest number of global regions among cloud providers. This is a great benefit for businesses who seek to bring their applications closer to users around the world while protecting data residency.
Azure’s Two Major Global Expansions of Cloud Services
Two of the most important expansions that Microsoft Azure has incorporated to improve its services include the following:
Expansion of Virtual Networks and Virtual Machines Support.
With utility virtual machines like A8 and A9, which provide advantages such as faster processors and interconnection between more virtual cores, virtual networks can now be seamlessly configured for specific geographical locations and regions.
This feature gives more room for optimal operations, cloud services, complex engineering design, video encoding and a lot more.
Incorporation of Azure Mobile Services, and its Expansion to Offline Features
Even with a disconnected service, this makes it possible for applications to operate effectively using offline features. Furthermore, it extends Azure cloud services to apps on various platforms, including Android and iOS mobile phones.
Then there are Availability Zones, the third level in the Azure network hierarchy.
Availability Zones are physically separated locations that exist inside regions. They are made up of one or more data centers. Constructing Availability Zones is not easy: they are not just data centers, they need advanced networking, independent power, cooling and so on. The primary purpose of Availability Zones is to help customers run mission-critical applications.
You will have the following benefits with Azure Availability Zones:
Better protection for your data – you won’t lose your data due to the destruction of a data center
High availability, better performance, and more resources for business continuity.
99.99% SLA on virtual machines
Open source technology
Microsoft took some time to understand the value of open source technologies, but now they are doing really well. With .NET Core and the .NET Standard, Microsoft has made a major commitment to open source. Looking at GitHub alone, Microsoft is one of the largest contributors to open source.
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft.
With .NET Core 3.0, Microsoft introduced many features that enable developers to create secure, fast and productive web and cloud applications. .NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). ASP.NET Core 3 enables client-side development with Razor Components. EF Core 3 has support for Azure Cosmos DB. It also includes support for C# 8 and .NET Standard 2.1 and much more.
Mixed reality and AI perceptions
Mixed reality tries to reduce the gap between our imagination and reality, and with AI it is about to change the way we see the world. It seems set to become a primary source of entertainment. Although mixed reality first got popular in the gaming industry, you can now see its applications in other industries as well. The global mixed reality market is booming, which is why the biggest names in tech are battling it out to capture the MR market. All major tech players have introduced MR devices, such as the Meta 2 headset, Google Glass 2.0 and Microsoft HoloLens.
Mixed reality and AI perception are the result of the cooperation of many advanced technologies. This technology stack includes natural language interaction, object recognition, real-world perception, real-world visualization, contextual data access, cross-device collaboration, and cloud streaming.
As I said earlier, although the gaming industry was the first to adopt mixed reality, MR applications are now used more in other industries. Let’s visit some of those industries and see how mixed reality has transformed them and what benefits they get from mixed reality and AI perception.
You can see tech giants such as SAAB, NETSCAPE and DataMesh using mixed reality in the manufacturing industry. According to research, mixed reality helps to increase worker productivity by 84%, improve collaboration among cross-functional teams by 80% and improve customer service interaction by 80%. You may wonder how mixed reality is able to achieve this and what it offers to the manufacturing industry. There are many applications of mixed reality in manufacturing; the following is a small list of them.
Enhanced Predictive Maintenance
Onsite Contextual Data Visualization
Intuitive IOT Digital Twin Monitoring
Remote collaboration and assistance
Accelerated 3D modeling and product design
Responsive Simulation training
Retail, Healthcare, Engineering, Architecture are some other industries that use mixed reality heavily.
Quantum revolution
Quantum computing could be the biggest thing in the future. It is a giant leap forward from today’s technology, with the potential to alter our industrial, academic, societal and economic landscapes forever. You will see these massive implications in nearly every industry, including energy, healthcare, smart materials, and environmental systems. Microsoft is taking a unique, revolutionary approach to quantum with its Quantum Development Kit.
Picture from cloudblogs.microsoft.com article about the potential of quantum computing
Microsoft can be considered one of the few who have taken quantum computing seriously in the commercial world. They have a quantum dream team formed from some of the greatest minds in physics, mathematics, computer science and engineering to provide cutting-edge quantum innovation. Their quantum solution integrates seamlessly with Azure. They have taken a scalable, topological approach towards quantum computing, which helps to harness superior qubits. These superior qubits can perform complex computations with high accuracy at a lower cost.
There are three important features in the Quantum Development Kit which make it the go-to quantum computing solution.
It introduces its own language, Q#, created specifically for quantum programming. It has general programming features such as operators, native types and other abstractions. Q# integrates easily with Visual Studio and VS Code, which makes it feature rich, and it is interoperable with the Python programming language. With the support of enterprise-grade tools, you can easily work on any OS: Windows, macOS, or Linux.
The Quantum Development Kit provides a simulated environment which greatly supports optimizing your code. This is very different from other quantum computing platforms, which are still at a rather crude level. The simulation environment also helps you to debug your code, set breakpoints, estimate costs, and much more.
As we discussed earlier, Microsoft has become a major contributor to the open source world. They provide an open source license for libraries and samples, and they have worked hard to make quantum computing easier. A lot of training material is available to attract developers into the quantum programming realm. The open source license is a great encouragement for developers to use the Quantum Development Kit in their applications while contributing to the Q# community.
Cloud services will shape the future of the IT industry, and quantum computing, open source technologies and mixed reality will play a great role in it.
This is my last day in Redmond, but I really look forward to coming again next year! If you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com