This year is already off to a fantastic start! I am so excited to be here at the LEAP conference at Microsoft's headquarters in Redmond, just outside Seattle. LEAP is a perfect way for me to keep up to date with new technology and how to apply it here at Gunnebo.
The focus of the day was Designing for Security. The threat of cyber attacks and hackers is as pressing as ever, so the need for cloud security is crucial. Although technological advancement has driven an evolution in cloud security over the years, keeping the right level of visibility and control over their applications is still a challenge for many organizations, which means that finding a balance between cloud security and ease of use remains a hard nut to crack. Today's program discussed how Azure copes with this issue, and the speakers introduced new and updated features Azure has recently added to improve the security of cloud applications.
The highlight of today's program was five great keynotes. First on the list was Scott Guthrie, executive vice president for Microsoft's cloud business. He is an incredible speaker and kept the audience engaged with his in-depth explanations of how Azure helps organizations deliver product innovation and better customer experiences securely. It was frankly impossible to be there without taking away more than a few vital points and a better understanding of Azure.
Next in line was Stuart Kwan, a principal program manager at Microsoft, who backed up Scott Guthrie with a great keynote on how authentication works in today's applications. Stuart has a wealth of experience under his belt, having worked on identity and security-related technologies since joining Microsoft in 1996; few people have more experience in that field. He is the guy to listen to on topics like Active Directory Federation Services and Windows Identity Foundation. The main focus was on OAuth, OpenID Connect, and SAML. OpenID Connect is a simple identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines mechanisms to obtain and use access tokens to access protected resources, but it does not define a standard way to provide identity information. OpenID Connect implements authentication as an extension of the OAuth 2.0 authorization process: it includes information about the end-user in the form of an id_token that verifies the user's identity and provides basic profile information about them.
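To make that a bit more concrete, here is a minimal sketch of how an ASP.NET Core application might be wired up for OpenID Connect on top of OAuth 2.0. This is my own illustration, not code from the talk; the authority and client ID are placeholders.

```csharp
// Minimal ASP.NET Core OpenID Connect setup (sketch; authority/client values are placeholders).
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.DependencyInjection;

public static class AuthSetup
{
    public static void ConfigureAuth(IServiceCollection services)
    {
        services.AddAuthentication(options =>
        {
            options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
        })
        .AddCookie()
        .AddOpenIdConnect(options =>
        {
            // The identity provider that issues the id_token (placeholder tenant).
            options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
            options.ClientId = "{client-id}";
            // OAuth 2.0 authorization code flow, extended by OpenID Connect with an id_token.
            options.ResponseType = "code";
            options.SaveTokens = true; // keep the tokens for later calls
            // The "openid" and "profile" scopes (requested by default) are what produce
            // the id_token and the basic profile claims about the end-user.
        });
    }
}
```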
When Yuri Diogenes took the stage, everyone knew his talk would focus on how cloud security is evolving and maturing. Yuri is a Senior Program Manager at Microsoft for Cloud and AI Security.
Before Yuri moved on to Azure security, he provided some insights into the problematic scenarios that many companies find themselves in. According to him, security hygiene has to be taken seriously or any cloud-based infrastructure will suffer; put simply, organizations have to protect themselves against modern-day threats. He carefully explained that Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud, whether they are in Azure or not, as well as on-premises. In simple terms, Azure Security Center is the new security hygiene you need.
Yuri went on to explain the benefits of Azure Security Center and Azure Sentinel: they provide all-round security and also afford a degree of customizability. According to him, Azure is capable of protecting Linux and Windows VMs from threats, protecting cloud-native workloads, detecting fileless attacks, providing workload protection for containers, and so on.
The next person on stage was Nicholas DiCola, a Security Jedi at Microsoft. He thrilled the audience with his discussion of Azure Sentinel, explaining how Sentinel functions as a cloud-native SIEM providing intelligent security analytics for an entire organization. It offers limitless cloud speed, can be used at any scale, provides faster threat protection, and integrates easily with existing tools.
According to him, Azure Sentinel was designed to collect data for visibility, detect threats through analytics and hunting, investigate incidents, and respond to them automatically. Sentinel ingests data from numerous sources such as Linux and Windows agents, cloud services, custom apps, appliances, Azure services, and so on. After collecting the necessary data, its analytics scan for possible threats, and from there you can monitor your data and activity.
Last but not least, we had a session with Sumedh Barde and Narayan Annamalai. They opened a fascinating discussion on how to secure certificates, connection strings, and encryption keys, as well as the new networking capabilities of Azure. Sumedh Barde is a Program Manager on the Azure Security team, and Narayan leads the SDN product management group in Microsoft Azure, which focuses on virtual networks, load balancing, and network security.
These two gave us great insight into Azure Key Vault, explaining how it functions as a tool for securely storing and accessing secrets. From what I learned at the conference, the secret to tightly controlling and securing access to things like API keys, passwords, and certificates is to use a vault: your very own logical group of secrets.
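As a rough illustration of the idea, this is how an application might read a secret from Key Vault with the Azure SDK for .NET. It is a minimal sketch of my own; the vault URL and secret name are made up for the example.

```csharp
// Sketch: reading a secret from Azure Key Vault (vault URL and secret name are placeholders).
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class KeyVaultExample
{
    public static async Task<string> GetConnectionStringAsync()
    {
        // DefaultAzureCredential picks up a managed identity, environment variables,
        // or developer credentials, so no secret is needed to reach the vault itself.
        var client = new SecretClient(
            new Uri("https://my-example-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        // Each secret lives in the vault as a named, versioned value.
        KeyVaultSecret secret = await client.GetSecretAsync("Database-ConnectionString");
        return secret.Value;
    }
}
```

The application never stores the connection string itself; access to the vault is governed by Azure AD and access policies instead.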
It was a great day here in Redmond and an excellent opportunity to brush up on my knowledge of cloud security. I'm looking forward to tomorrow.
During the last few days it has been reported that Yubico is replacing some of its physical security keys due to a firmware problem. This is a reminder that IT security keeps evolving: bugs are found, and you need to stay up to date to keep your systems secure. My previous posts on SSL/TLS and x.509 have been quite popular, so here comes another security-related post 🙂
When hosting a global Software as a Service platform, it is vital to be in control of cloud security. Cloud security consists of a set of policies, controls, procedures, and technologies that work together to protect cloud-based systems, data, and infrastructure. These measures are configured to protect data, support regulatory compliance, and protect customers' privacy, as well as to set authentication rules for individual users and devices.
One way of securing these services is SSL/TLS encryption of communication. SSL was first implemented by Netscape in 1994, and this post attempts to provide a historical view of the SSL/TLS protocol as attacks and countermeasures were introduced. If one reads the current TLS v1.2 or v1.3 specifications, many aspects of the design have no obvious rationale; their origin lies in the long list of academic research that has broken previous versions.
The birth of SSL
As SSLv1 was never released, we start with SSLv2, which was designed and implemented by Netscape in 1995. The SSLv2 protocol is very different from later versions, but has a similar traffic flow. The client connects to a server and sends a "hello" which identifies some aspects of the client's capabilities. The client and server negotiate which cipher they wish to use, and the client sends a random key, encrypted with the server's RSA public key, which is used to encrypt the subsequent message traffic.
The protocol quickly proved to have numerous flaws, and within a couple of years an effectively new protocol, SSLv3, was designed to replace it. SSLv2 was formally deprecated in 2011, and no modern TLS library supports it anymore.
SSL as we know it
SSLv3 is the first SSL version which is recognizably similar to modern TLS. As in SSLv2, the client connects to a server, a handshake is performed, and subsequent records are encrypted using a key that is shared using public key cryptography. However, there are several essential differences.
One key addition is that in SSLv3 it is possible to use algorithms with forward secrecy. In this mode, instead of decrypting an RSA ciphertext sent by the client, the client and server agree on a key using a Diffie-Hellman key exchange, and the server signs a message which allows the client to verify that it is performing the key exchange with the intended server. However, RSA-based key exchange was still retained and widely used.
In SSLv3 the entire handshake is hashed together and used with the agreed keys to create two “Finished” messages which the client and server exchange on the encrypted channel. These ensure that an attacker cannot modify traffic between the client and server in such a way as to change the outcome of the handshake. For instance, if a MITM could remove all of the strong ciphersuites from a client hello message and force a downgrade to a weak cipher, the protocol could be easily attacked.
In SSLv3, messages are encrypted using either the stream cipher RC4, or else a block cipher in CBC mode. In CBC mode, the plaintext must be a multiple of the cipher's block size (typically 8 or 16 bytes), which requires making use of a padding scheme to increase the length of messages which are not correctly sized. In SSLv3, the length of padding is indicated with a single byte at the end of the record, and the specified number of bytes are discarded by the receiver. The value of the padding bytes is not specified.
The message is authenticated using a slight variant of HMAC (based on an early HMAC design prior to HMAC’s standardization). But, critically, in SSLv3 it is the plaintext (rather than the ciphertext) which is authenticated, and the CBC padding bytes are not authenticated at all. These errors proved to be the source of a number of serious exploits which plagued TLS for years.
eCommerce compels TLS v1.0
After a time it became clear that the SSL protocol would prove crucial for commerce on the early Internet, and eventually the development was moved to the IETF. The name ended up changing due to a political compromise between Netscape and Microsoft, who had a competing PCT protocol. However the actual TLS v1.0 specification is only slightly different from SSLv3.
The most notable changes were the replacement of the SSLv3-specific HMAC variant with the standard version, the replacement of the SSLv3-specific PRF with a new design, and tightened rules for how blocks are padded. In SSLv3 the padding bytes were unspecified, while in TLS v1.0 and later versions the padding must follow a specified format. The block padding change was at the time merely a simplification, but it proved critical when the POODLE attack was developed in 2014.
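To see why the tightened padding rules matter, here is a small illustrative sketch of my own (not part of the specification text) of the two padding styles for an 8-byte block cipher: SSLv3 only fixes the final length byte, while TLS requires every padding byte to carry the padding length, so a receiver can check it.

```csharp
// Illustration of CBC padding differences, assuming 8-byte blocks.
// "message" stands for the record plaintext (which would include the MAC in a real stack).
// SSLv3: only the last byte (the padding length) is specified; the rest may be anything.
// TLS 1.0+: every padding byte must equal the padding length value.
using System;
using System.Security.Cryptography;

public static class PaddingDemo
{
    public static byte[] PadTls(byte[] message, int blockSize = 8)
    {
        int padTotal = blockSize - (message.Length % blockSize); // 1..blockSize extra bytes
        var padded = new byte[message.Length + padTotal];
        Buffer.BlockCopy(message, 0, padded, 0, message.Length);
        for (int i = message.Length; i < padded.Length; i++)
            padded[i] = (byte)(padTotal - 1); // every padding byte carries the padding length
        return padded;
    }

    public static byte[] PadSslv3(byte[] message, int blockSize = 8)
    {
        int padTotal = blockSize - (message.Length % blockSize);
        var padded = new byte[message.Length + padTotal];
        Buffer.BlockCopy(message, 0, padded, 0, message.Length);
        RandomNumberGenerator.Fill(padded.AsSpan(message.Length, padTotal - 1)); // unspecified bytes
        padded[^1] = (byte)(padTotal - 1); // only the final length byte is fixed
        return padded;
    }
}
```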
At the time the POODLE attack was developed, SSLv3 was already obsolete, but many browsers included a fallback mechanism: if the server rejected a TLS v1.0 or higher connection, the browser would subsequently try to connect using SSLv3. A man-in-the-middle attacker could intercept the TLS v1.0 connection, send an (unauthenticated) TLS alert closing the connection, and then attack the SSLv3 connection. There is no direct protocol fix for POODLE, since it is not possible to retroactively fix the padding bytes in unpatched clients. The main resolutions were the disabling or removal of SSLv3 support on both client and server sides, and the creation of the "fallback SCSV" indicator. The fallback SCSV allows a client to indicate to the server that it is performing a version fallback, which is done by including a special value in the ciphersuite list; this value cannot actually be negotiated but simply serves as a flag understood by servers that recognize it (SCSV is short for "Signaling Cipher Suite Value"). A special ciphersuite value was chosen because in a TLS v1.0/v1.1 client hello there is no other way of reliably indicating such information.
If a server sees a connection from a client indicating fallback, but the client is attempting to negotiate an older version than what the server supports, it closes the connection. Then, when a MITM attacker tries to force a downgrade, when the client opens the vulnerable SSLv3 connection, the server will detect the SCSV and close the connection, preventing the attack. It is not possible for the MITM to remove the SCSV, because the contents of the handshake transcript are authenticated by the Finished messages.
Browser Exploit leads to TLS v1.1
TLS v1.1, released in 2006, involves a single small patch to TLS v1.0. In TLS v1.0 and all earlier versions, the CBC state is carried across records; another way of thinking about this is that each record is encrypted with an IV equal to the last ciphertext block of the previous record. TLS v1.1 instead gives each record its own explicit, randomly generated IV, resolving an issue that had been identified in 2006 by a researcher. Later, in 2011, this attack was refined via use of JavaScript and dubbed BEAST, providing a practical break of TLS v1.0 and earlier when used with HTTPS.
At the time BEAST was a substantial issue because many implementations of TLS had not been updated to support TLS v1.1 or v1.2. A workaround was developed for SSLv3/TLS v1.0 connections, commonly termed 1/n-1 record splitting. Each CBC encrypted record would be split into a 1 byte record followed by a record containing the rest of the plaintext. Since the first record included a message authentication code (which could not be predicted by an attacker who does not know the session key), this serves as a way of randomizing the IV.
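As a rough sketch of my own (not code from any TLS library), 1/n-1 splitting simply means emitting the first plaintext byte as its own record before sending the rest:

```csharp
// Sketch of the 1/n-1 record splitting countermeasure against BEAST.
// encryptRecord stands in for a real CBC record encryption routine (MAC-then-encrypt).
using System;
using System.Collections.Generic;

public static class RecordSplitting
{
    public static List<byte[]> SendWithSplitting(byte[] plaintext, Func<byte[], byte[]> encryptRecord)
    {
        var records = new List<byte[]>();
        if (plaintext.Length <= 1)
        {
            records.Add(encryptRecord(plaintext));
            return records;
        }

        // First record: a single byte. Its MAC (unknown to the attacker) ends up in the
        // last ciphertext block, which effectively randomizes the IV of the next record.
        records.Add(encryptRecord(plaintext[..1]));

        // Second record: the remaining n-1 bytes.
        records.Add(encryptRecord(plaintext[1..]));
        return records;
    }
}
```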
Another common countermeasure was to favor use of the RC4 stream cipher, which did not have the problems of the CBC ciphersuites. But the RC4 cipher dates back to the 1980s, and by 2013 it had been shown convincingly that biases in the RC4 ciphertext could allow an attacker to recover secrets sent over a TLS channel, albeit in a scenario requiring access to data from many millions of connections.
The next big step with TLS v1.2
TLS v1.2, released in 2008, was the first major enhancement to the protocol since SSLv3. It adds support for negotiating which digest algorithms will be used (instead of hard coding use of SHA-1), adds support for modern AEAD ciphers, and adds support for extensions.
Extensions are a critical feature which had long been lacking in TLS. Each extension is tagged with a type, and implementations are required to ignore extensions which they do not recognize. This feature proved essential for resolving several protocol-level problems which were discovered in the period between TLS v1.2 and v1.3.
Despite adopting several modern cryptographic features, TLS v1.2 also suffered from a number of high-profile attacks. The first of these was the renegotiation attack, discovered in 2009. TLS allows both clients and servers to request at any time that a new session be renegotiated: effectively a new handshake is performed, but instead of happening in the clear it occurs over the already encrypted channel. Several HTTP servers, including IIS, make use of this for client authentication. The initial connection is encrypted but not authenticated, and if the client attempts to access a protected resource a renegotiation is performed which includes client certificate authentication. The renegotiation bug breaks this entirely. First an attacker creates a new TLS connection to the server and sends some arbitrary data (for example, the start of an HTTP request). The attacker then proxies a legitimate client attempting to connect to the server, and sends the handshake data through its own channel. From the perspective of the server, it appears as if the client has sent some encrypted data, then authenticated itself with a certificate, then sent some additional data which was both encrypted and authenticated. Depending on the server logic, this might allow the attacker to insert data which the server would interpret as having come from the authenticated client.
The fix was to properly bind the inner and outer negotiations, so that it is not possible for the attacker to proxy. This was done by adding a new extension, standardized in RFC 5746. With this extension enabled, renegotiations inside an existing channel are cryptographically bound to that channel using the value of the TLS Finished message. Since in the attack the client is unaware of being proxied within another TLS channel, the renegotiation will fail, preventing the attack.
The problems with renegotiation did not end there, however. In 2014 a new set of attacks was developed, including the devastating triple handshake attack. In this attack, a client connects to a malicious server. The malicious server opens a new TLS connection, acting as a client, with some victim server. It forwards the client's random value, then sends the victim server's random back to the client. Upon receiving the client's encrypted master secret, it forwards the same to the victim server. In the end there are two TLS connections, one between the client and the attacker and the other between the attacker and the victim server, and both are using the same session keys. In the next step, the client reconnects to the attacker, resuming its previous session, and in turn the attacker resumes its connection with the victim server.
Due to how session resumptions work, in this case the finished messages in the two handshakes will be identical. Then, the malicious server can attempt to perform some action on the victim server which triggers a request for client certificate authentication (for example, requesting access to a protected resource). It forwards the authentication request to the victim client, who responds. The attack proceeds much like the renegotiation attack of 5 years prior, and since the finished messages of the two connections are in this case identical, the previously devised extension fails to detect the proxying. This was addressed with a new extension, the extended master secret, which ensures the master secret for a session is bound to the entire handshake transcript, instead of just the client and server random fields.
Implementation errors also caused notable problems for TLS v1.2. It has been known since 1998 that the RSA key exchange is vulnerable to an oracle attack, the so-called "million message attack". In a nutshell, before encrypting the master secret with a server's RSA public key, the client pads it in a certain way. Upon decryption, the server must reject any invalid padding which does not conform to the standard. But it turns out that, given access to an "oracle" which tells whether a particular RSA ciphertext is or is not correctly formatted, it is possible for an attacker to decrypt any ciphertext encrypted with that key. A TLS server can act as such an oracle, and problems have been repeatedly found in various implementations over the last 20 years, including the recent ROBOT and CAT9 attacks.
Bringing TLS into the future with v1.3
After 10 years and numerous patches, TLS v1.2 was in a state where using it securely required a number of extensions and avoiding a number of known-insecure features such as static RSA key exchange, RC4 ciphersuites, and CBC ciphersuites. TLS v1.3 addresses these issues by omitting them entirely.
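In practice the same clean-up happens on the application side by simply refusing to negotiate the older versions. A minimal .NET sketch of my own (the endpoint is a placeholder) might look like this:

```csharp
// Sketch: restricting an HttpClient to TLS 1.2 and TLS 1.3 only (endpoint is a placeholder).
using System;
using System.Net.Http;
using System.Security.Authentication;
using System.Threading.Tasks;

public static class TlsVersionExample
{
    public static async Task<string> FetchAsync()
    {
        var handler = new HttpClientHandler
        {
            // Refuse SSLv3 / TLS 1.0 / TLS 1.1 outright; only the modern versions remain.
            SslProtocols = SslProtocols.Tls12 | SslProtocols.Tls13
        };

        using var client = new HttpClient(handler);
        return await client.GetStringAsync("https://example.com/");
    }
}
```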
In addition, there was a strong desire by many large network players (such as Google, Cloudflare, and Mozilla) to minimize the number of round trips required for the handshake, as this directly impacts the user-visible performance of web pages. This led to a substantially redesigned handshake protocol with fewer round trips. I will explore the changes and implications of the TLS v1.3 design in a future post.
If you want to discuss more about SSL/TLS, feel free to contact me at bjorn.nostdahl@nostdahl.com or check out these previous articles on SSL/TLS and x.509:
On the second day of Microsoft LEAP we focused on Deploying for the Cloud. Deploying applications in the cloud and offering Software, Platform, and Infrastructure as a Service are hot topics at the moment (well, they have been for some time now). Choosing a good cloud provider is a very important decision in this process. Microsoft Azure and Amazon AWS are two of the leading cloud service providers. Since this article is based on my visit to Redmond and the LEAP keynotes, I will be looking at some of the services Azure provides.
DevOps with Microsoft Azure
Jessica Deen, the Deen of DevOps, had a great session on how DevOps is about people, process, and products. Getting it all right requires effort, but the benefits to your organization and customers can be huge. The aim of DevOps is to merge Development, Operations, and Quality Assurance into continuous delivery. DevOps is not a process or a job role; it is a culture. You live in it. Your application lives in it.
Why was DevOps created? How does DevOps increase a company's profit? DevOps focuses on three main areas: reducing human error, reducing downtime, and increasing productivity. With proper DevOps processes you can reduce costs and increase productivity.
There are three main practices in DevOps: Continuous Integration, Continuous Deployment, and continuous learning and monitoring. Azure has a very broad ecosystem to support them, built around five main tools:
Azure Boards
Azure Pipelines
Azure Repos
Azure Test Plans
Azure Artifacts
You can track all development stages, from idea to release, with Azure Boards. Azure Boards gives you Kanban boards, backlogs, team dashboards, and custom reporting to track all work, which helps keep your team aligned with the code changes throughout the development life cycle. Azure Pipelines is available for Linux, Windows, and macOS and supports any language: you can build, test, and deploy apps written in Java, .NET, PHP, Node.js, C/C++, Ruby, Android, iOS, and so on. It is also easily extensible, and with Azure Pipelines you can build and push images to container registries like Docker Hub and Azure Container Registry.
Azure is integrated with GitHub now, and with Azure Repos you get unlimited private Git repository hosting and support for TFVC that scales from a hobby project to the world's largest Git repositories. Azure Test Plans gives you end-to-end traceability: you can run tests and log defects from your browser, and track and assess quality throughout your testing lifecycle. Finally, with Azure Artifacts, you can create and share Maven, npm, and NuGet package feeds from public and private sources, fully integrated into CI/CD pipelines.
Vulnerabilities and Azure Monitor
Barry Dorrans, author of "Beginning ASP.NET Security", had a great session on application vulnerabilities. OWASP illustrates that developers keep making the same mistakes over and over again, but what about more esoteric vulnerabilities? Microsoft actually publishes reports, known as Microsoft Security Bulletins, about vulnerabilities in its products, and Microsoft has highlighted eight such vulnerabilities, along with the actions and processes used to fix them, at one of its recent conferences. Some of them are:
Hash DoS
Padding Oracle
SharePoint ViewState RCE
Exchange RCE
Infinite Regex DoS
This isn't the place for a detailed study of all of them, but it's good to have a basic understanding, so let's discuss a couple.
Hash DoS is a denial-of-service attack triggered by parsing form inputs. Let's take an example to understand it. Assume a hash table where all form fields whose names hash to A go into slot A. To get a value back, you go to the slot and look through everything in it; the more fields that land in slot A, the longer the lookup takes. If an attacker can force every field into a single slot, lookups consume more and more CPU, which leads to a denial of service. Microsoft's advice is not to use user input as dictionary keys unless the hash code computed for that input is strong, for example keyed with a per-session secret. A Padding Oracle, on the other hand, is a cryptographic attack used to disclose information; you can avoid it by not exposing padding oracles and not returning detailed error messages.
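To make the Hash DoS point above concrete, here is a tiny sketch of my own (not Barry's code) showing how keys that all collide degrade a dictionary from constant-time to linear-time work:

```csharp
// Sketch of the Hash DoS idea: if every key hashes to the same bucket,
// dictionary operations degrade from O(1) to O(n) and burn CPU.
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class HashDosDemo
{
    // Simulates attacker-chosen keys that all collide ("everything goes into slot A").
    private sealed class CollidingComparer : IEqualityComparer<string>
    {
        public bool Equals(string x, string y) => string.Equals(x, y, StringComparison.Ordinal);
        public int GetHashCode(string obj) => 42; // constant hash: everything lands in one bucket
    }

    public static void Main()
    {
        var sw = Stopwatch.StartNew();
        var table = new Dictionary<string, int>(new CollidingComparer());
        for (int i = 0; i < 10_000; i++)
            table["field" + i] = i; // every insert walks the single shared bucket: O(n^2) overall
        sw.Stop();
        Console.WriteLine($"Inserting 10,000 colliding keys took {sw.ElapsedMilliseconds} ms");
    }
}
```

Modern .NET mitigates this for ordinary string keys by randomizing string hash codes per process, which is exactly the kind of keyed hashing the advice points at.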
The Azure Monitor service gives you full visibility into the health of your applications and infrastructure. Catherine Wang and Michael Milirud took us through how Azure Monitor helps us discover and fix issues with its diagnostics and analytics tools. That's not all: it also tracks KPIs and proactively optimizes the end-user experience. It is built around three concepts (a small telemetry sketch follows the list below):
Unified Monitoring – a common platform for all metrics, logs, and other monitoring telemetry.
Data-Driven Insights – advanced diagnostics and analytics powered by machine learning capabilities.
Partner Integration – a rich ecosystem of popular DevOps, issue management, SIEM, and ITSM tools.
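As a small, hedged sketch of the "unified monitoring" idea, this is roughly how an application can push custom metrics and events into Azure Monitor through the Application Insights SDK (assuming a recent SDK version; the connection string is a placeholder):

```csharp
// Sketch: sending custom telemetry to Azure Monitor via Application Insights.
// The connection string below is a placeholder, not a real resource.
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class MonitoringExample
{
    public static void ReportCheckout(double durationMs, bool succeeded)
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000";
        var telemetry = new TelemetryClient(config);

        // Custom metrics and events land in the same Azure Monitor back end as platform logs,
        // so they can be queried and alerted on alongside infrastructure telemetry.
        telemetry.TrackMetric("CheckoutDurationMs", durationMs);
        telemetry.TrackEvent(succeeded ? "CheckoutSucceeded" : "CheckoutFailed");
        telemetry.Flush();
    }
}
```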
Power BI
Sergei Gundorov took us through a great keynote on business decisions and reminded us that most decisions are made from data: if you have better ways to analyze and present data, you can make better decisions. Microsoft introduced Power BI for business analytics, intended for both small and big businesses, but Power BI is not just a self-service tool for business analysts. It offers many tools which enable you to analyze and visualize data very quickly, and a striking feature is its ability to create and share reports.
With the introduction of Power BI Embedded in Azure, you can now integrate Power BI capabilities into your cloud application very smoothly. It drastically simplifies creating reports, visuals, and dashboards in your app, while the Power BI Embedded API allows developers to customize how intelligence is added to their applications.
Let's see how Power BI Embedded has made integration easier. Power BI content can be embedded in any application: it relies on web standards such as HTML5 and JavaScript, and works in web applications, mobile applications, and even thick-client applications. SDK resources support many development platforms, such as C#, JavaScript, and TypeScript.
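As a minimal sketch of the server-side piece, an application typically asks the public Power BI REST API for an embed token that the browser-side JavaScript library then consumes. The workspace/report IDs and the Azure AD access token below are placeholders, and error handling is omitted.

```csharp
// Sketch: requesting an embed token from the Power BI REST API (IDs and AAD token are placeholders).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class PowerBiEmbedExample
{
    public static async Task<string> GetEmbedTokenJsonAsync(string aadAccessToken, Guid groupId, Guid reportId)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", aadAccessToken);

        // GenerateToken returns a short-lived embed token scoped to this one report.
        var url = $"https://api.powerbi.com/v1.0/myorg/groups/{groupId}/reports/{reportId}/GenerateToken";
        var body = new StringContent("{\"accessLevel\":\"View\"}", Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.PostAsync(url, body);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON containing the embed token
    }
}
```

The returned token, together with the report's embed URL, is what the HTML5/JavaScript embed library uses in the browser to render the report inside your own application.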
We discussed DevOps, monitoring, and Power BI here, and there is more to add. Deploying your application with a good cloud service provider will make life easier. This article was intended to give you an overview of current trends in the cloud world, particularly how Microsoft has improved its cloud business. Other cloud providers offer similar functionality; Google and Amazon, for example, also have a huge set of services. I look forward to the next LEAP sessions, and as always, if you have any questions, feel free to contact me at bjorn.nostdahl@gunnebo.com