
GPUs vs CPUs for Deployment of Deep Learning Models


This blog post was co-authored by Mathew Salvaris, Senior Data Scientist, Azure CAT and Daniel Grecoe, Senior Software Engineer, Azure CAT

Choosing the right hardware for deep learning tasks is a widely discussed topic. The obvious conclusion is that the decision depends on the task at hand and on factors such as throughput requirements and cost. It is widely accepted that GPUs should be used for deep learning training because they are significantly faster than CPUs. However, for tasks like inference, which are not as resource heavy as training, CPUs are usually believed to be sufficient and more attractive because of their lower cost. Yet when inference speed is a bottleneck, using GPUs provides considerable gains in both time and money. In a previous tutorial and blog, Deploying Deep Learning Models on Kubernetes with GPUs, we provide step-by-step instructions for going from loading a pre-trained convolutional neural network model to creating a containerized web application hosted on a Kubernetes cluster with GPUs using Azure Container Service (AKS).

Expanding on this previous work, here we provide a detailed comparison of deployments of several deep learning models, highlighting the striking differences in throughput between GPU and CPU deployments. The results provide evidence that, at least in the scenarios tested, GPUs offer better throughput and stability at a lower cost.

In our tests, we use two frameworks, TensorFlow (1.8) and Keras (2.1.6) with a TensorFlow (1.6) backend, across five models whose network sizes range from small to large as follows:

  • MobileNetV2 (3.4M parameters)
  • NasNetMobile (4.2M parameters)
  • ResNet50 (23.5M parameters)
  • ResNet152 (58.1M parameters)
  • NasNetLarge (84.7M parameters)

We selected these models to cover a wide range of networks, from small, parameter-efficient models such as MobileNetV2 to large networks such as NasNetLarge.

For each of these models, a Docker image with an API for scoring images was prepared and deployed on four different AKS cluster configurations:

  • 1 node GPU cluster with 1 pod
  • 2 node GPU cluster with 2 pods
  • 3 node GPU cluster with 3 pods
  • 5 node CPU cluster with 35 pods

The GPU clusters were created using Azure NC6 series virtual machines with K80 GPUs, while the CPU cluster was created using D4 v2 virtual machines with 8 cores. The CPU cluster was deliberately configured to approximately match the cost of the largest GPU cluster, so that a fair throughput-per-dollar comparison could be made between the 3 node GPU cluster and the 5 node CPU cluster, which was close in price but slightly more expensive at the time of these tests. For more recent pricing, please use the Azure Virtual Machine pricing calculator.

All clusters were set up in the East US region, and the clients were run from test harnesses in East US and West Europe to determine whether testing from different regions had any effect on throughput. As it turned out, the client region had some, but very little, influence, so the results in this analysis include only data from the East US test client, covering a total of 40 different cluster configurations.

The tests were conducted by running an application on an Azure Windows virtual machine in the same region as the deployed scoring service. Using 20 to 50 concurrent threads, 1,000 images were scored, and the recorded result was the average throughput over the entire set. Actual sustained throughput is expected to be higher in an operationalized service due to the cyclical nature of the tests. The results reported below use the averages from the 50-thread set in the test cycle; the application used to test these configurations can be found on GitHub.
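A test harness like the one described above can be sketched in a few lines. This is a minimal illustration, not the actual GitHub application; the endpoint URL is hypothetical and the HTTP call is stubbed out:

```python
import concurrent.futures
import time

def score_image(endpoint, image_bytes):
    """Send one image to the scoring API and return the elapsed seconds.

    The HTTP call is stubbed out here; a real harness would POST the
    image bytes to the deployed service endpoint.
    """
    start = time.perf_counter()
    # response = requests.post(endpoint, data=image_bytes)  # real call
    return time.perf_counter() - start

def measure_throughput(endpoint, images, concurrency=50):
    """Score every image with a fixed-size thread pool; return images/second."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(lambda img: score_image(endpoint, img), images))
    elapsed = time.perf_counter() - start
    return len(images) / max(elapsed, 1e-9)
```

Averaging the result over repeated runs at each concurrency level gives the per-configuration throughput figures compared below.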

Scaling of GPU Clusters

The general throughput trend for GPU clusters differs from that of CPU clusters: adding GPU nodes to the cluster increases throughput nearly linearly. The following graph illustrates this growth in throughput as more GPUs are added to the clusters, for each framework and model tested.

image

Although adding GPUs significantly increases throughput, management overhead in the cluster means the increase is not fully proportional to the number of GPUs added: each additional GPU contributes somewhat less than 100 percent of a single GPU's throughput.

GPU vs CPU results

As stated before, the purpose of the tests is to determine whether deep learning deployments perform significantly better on GPUs, which would translate into lower hosting costs for the model. In the figure below, the GPU clusters are compared to the 5 node CPU cluster with 35 pods for every model and framework. Note that the 3 node GPU cluster and the 5 node CPU cluster had roughly equal dollar costs per month at the time of these tests.

image

The results show that throughput from the GPU clusters exceeded CPU throughput for all models and frameworks, suggesting that GPUs are the economical choice for deep learning inference. In all cases, the 35 pod CPU cluster was outperformed by the single node GPU cluster by at least 186 percent, and by the similarly priced 3 node GPU cluster by at least 415 percent. The gap is even more pronounced for smaller networks: for MobileNetV2 on TensorFlow, the single node GPU cluster performed 392 percent better, and the 3 node GPU cluster 804 percent better, than the CPU cluster.
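The throughput-per-dollar arithmetic behind such comparisons is straightforward. The numbers below are purely illustrative, not measurements from the tests above:

```python
def throughput_per_dollar(images_per_sec, monthly_cost_usd):
    """Normalize throughput by cluster cost for a fair comparison."""
    return images_per_sec / monthly_cost_usd

# Hypothetical numbers: a CPU cluster and a GPU cluster at the same monthly cost.
cpu_tpd = throughput_per_dollar(20.0, 3000.0)
gpu_tpd = throughput_per_dollar(103.0, 3000.0)

# Percent improvement of GPU over CPU, the form of comparison used above.
improvement_pct = (gpu_tpd - cpu_tpd) / cpu_tpd * 100
```

Because the two clusters were priced to match, comparing raw throughput and comparing throughput per dollar yield the same percentage improvement.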

It is important to note that for standard machine learning models, where the number of parameters is not as high as in deep learning models, CPUs should still be considered more effective and cost efficient. There are also methods to optimize CPU performance, such as MKL-DNN and NNPACK; however, similar methods exist for GPUs, such as TensorRT. We also found that performance on the GPU clusters was far more consistent than on the CPU cluster.

We hypothesize that this is because, unlike in the CPU-only deployment, there is no contention for resources between the model and the web service. We conclude that for deep learning inference tasks using models with a high number of parameters, GPU-based deployments benefit from this lack of resource contention and provide significantly higher throughput than a CPU cluster of similar cost.

We hope you find this comparison useful for your next deployment decision, and let us know if you have any questions or comments.


Five options to help actuarial and account processes meet IFRS-17 deadlines


Actuarial compute solutions are what keep insurance companies in business. Legally required reporting can only be done at scale by employing thousands of server cores for workloads like monthly and quarterly valuation and production reporting. With these compute-intensive workloads, actuaries need the elasticity of the cloud. Azure offers a variety of compute options, from large machines with hundreds of cores to thousands of smaller standard machines with fewer cores. This scalability means you can use as much compute power as needed to finish your production runs with enough time to spare to correct an error and rerun before a deadline.

image

Microsoft supports a range of actuarial compute solutions running on Azure, including popular life actuarial partner platforms from Milliman, Willis Towers Watson, FIS, and Moody’s. All of these companies are experienced in both policy reserving and projection modeling. Many life insurance companies today use at least one of these platforms; some of the largest use parts of all of them. Partner solutions built on Azure provide different options for model creation and modification, and they are flexible enough to support customization of both the model and the cloud environment.

The new regulation, IFRS-17, adds new levels of reporting and controls to the actuarial and accounting processes of all insurance companies it covers. These changes require many companies to re-evaluate and upgrade their actuarial applications and platforms. By working with an Azure partner, an insurance company gets an industry-leading cloud platform teamed with top solution providers.

The following is a set of solutions and resources to help you work towards IFRS-17 rollout, as well as moving your actuarial compute to the cloud:

  • Integrate, a solution from Milliman. For companies that fall under IFRS-17, the new international accounting standard for insurance, Milliman anticipates large changes in calculation needs throughout all phases of work: product development, valuation, cash flow testing, and other projection work.
  • RiskAgility FM, from Willis Towers Watson (WTW).  WTW believes that most insurance companies will build onto their existing actuarial modeling system to solve for key values needed to meet the IFRS requirements. RiskAgility from WTW offers the flexibility to wrap around valuation and modeling solutions from other software providers, while adding all needed controls and governance. It also manages the additional data storage and calculation loops required under IFRS-17.
  • Prophet Managed Cloud Service from FIS. The Prophet IFRS-17 suite of solutions is built on existing Prophet functionality and allows users to meet IFRS requirements while using existing Prophet models and data feeds. The new components add the needed end-to-end control and reporting functionality, with all additional looping and data feeds.
  • RiskIntegrity, a solution from Moody’s Analytics. RiskIntegrity IFRS 17 offers insurers an automated, secure, and flexible solution for managing a complex, data-intensive process and performing key calculations, helping to bridge actuarial and financial reporting functions. It integrates with the Moody’s Analytics AXIS™ actuarial system and is fully compatible with other actuarial systems.
  • Want to learn more about options for moving your actuarial compute and modeling to the cloud, as well as meeting IFRS-17 requirements? Access the actuarial risk modelling use case, solution guide, and more resources.

I post regularly about new developments on social media. If you would like to follow me, you can find me on LinkedIn and Twitter.

Reduce false positives, become more efficient by automating anti-money laundering detection


This blog post was created in partnership with André Burrell who is the Banking & Capital Markets Strategy Leader on the Worldwide Industry team at Microsoft.

image

In our last blog post, Anti-money laundering – Microsoft Azure helping banks reduce false positives, we alluded to Microsoft’s high-level approach to a solution that automates the end-to-end handling of anti-money laundering (AML) detection and management.

  • AML ≠ Anti-fraud. Anti-fraud is immediate identification and halting of transactions.
  • AML pursues the identification of suspected money laundering or other crimes.
  • Failure to have an “adequate Transaction Monitoring System” can result in substantial fines.

Due to the growing number of fines issued, there is now an increased drive to hold compliance officers, senior executives, and board members personally liable for failing to have an adequate AML program and transaction monitoring system (TMS). Any alert the TMS generates and does not close must be reviewed by a human. Current technologies cannot assess a transaction in context, and without human intervention it is difficult, almost impossible, to adapt to the rapidly evolving patterns used by money launderers and terrorists. We have many partners that address banks’ fraud challenges; among that elite group, the Behavioral Biometrics solution from BioCatch and the Onfido Identity Verification Solution help automate fraud detection through frictionless detection.

Yes or no? The false positive problem

The rules use fixed risk scores for the customer, product, and geography involved. Based on the risk score, different dollar thresholds are applied within the same rule. Aggravating these limitations, data silos across banks require significant data manipulation to format the data for use in a TMS, so only limited contextual information enters the TMS. The result: TMS systems generate huge numbers of false positive alerts, and each alert must be reviewed by a human investigator within strict timeframes. Most banks experience a false positive rate of about 95 to 99 percent, meaning that only 1 to 5 percent of all alerts result in the actual filing of a Suspicious Activity Report (SAR). Yikes!

Banks have generally grown through mergers and acquisitions. As they grow, promises are made to consolidate the different systems; however, in reality, those promises are rarely kept. As banks continue to grow, the problems with siloed systems are exacerbated.

Regulators require each alert to be reviewed within 60 days of generation, and a final determination must be made within 90 days. Within 30 days of determining that a transaction is suspicious, a SAR must be filed. Failure to regularly meet these timeframes can be grounds for regulatory action. This time pressure causes additional problems, as the bank must make arbitrary decisions to meet the deadlines.
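The timeframes above chain together into concrete deadlines. A minimal sketch (a hypothetical helper, not part of any product described here):

```python
from datetime import date, timedelta

def aml_deadlines(alert_date, determination_date=None):
    """Compute the regulatory deadlines described above: review within 60
    days of the alert, final determination within 90 days, and a SAR filed
    within 30 days of the determination."""
    deadlines = {
        "review_by": alert_date + timedelta(days=60),
        "determine_by": alert_date + timedelta(days=90),
    }
    if determination_date is not None:
        deadlines["file_sar_by"] = determination_date + timedelta(days=30)
    return deadlines

# An alert generated September 1 and determined suspicious October 15.
d = aml_deadlines(date(2018, 9, 1), determination_date=date(2018, 10, 15))
```

With thousands of open alerts, each carrying its own clock, the scale of the tracking problem becomes clear.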

You’re missing the context

Even in cases where the bank makes the deadline and the rules are working correctly, there is another problem: investigators have inadequate information because they are detecting suspicious activity without additional context.

One type of missing context is the customer’s location: for example, a customer attends a meeting in Germany while the transaction originates in London. Another is social data: for example, social media information can reveal that a customer owns a small business where his or her presence can be confirmed.

Filing a SAR does not mean the transaction is illegal. However, to avoid huge fines and loss-related costs, banks start over-filing SARs. The result: legitimate customers are incorrectly flagged as having engaged in suspicious activity and are thus subject to closer monitoring. More recently, such customers have become subject to de-risking, i.e., having an account closed due to a perception of high risk. Siloed systems require investigators to access multiple systems to gather information on a customer and their transaction history to determine whether a transaction is suspicious. Additionally, the investigator must create a written record of why the alert was closed.

Banks over-file out of an overabundance of caution, which weakens the value of SARs to law enforcement, who are also overwhelmed with false positive SARs. These technological and data limitations are well known in the industry, and money launderers and terrorists are known to exploit them.

Figure 1 is a simple illustration of the problem: 10 cash transactions are logged, each for $9,500, and only one of the ten is illegal. A rule requires the reporting of all cash transactions over $10,000. To catch customers who break up their cash transactions (“structuring”) to avoid that reporting requirement, the rule in Figure 1 flags all of a customer’s cash transactions between $9,500 and $9,999 over some period.

image

Figure 1

When viewed in full context, only 1 of the ten transactions is truly suspicious. Existing rule-based systems, lacking that context, generate alerts on all ten, and a human must review every one. In this example, the false positive rate is 90 percent. The size of the problem becomes apparent when you consider that the TMS in a small to medium-sized bank would have 20+ rules, with at least three variations per rule to address different risk-based thresholds.
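The rule illustrated in Figure 1 can be sketched in a few lines; the transaction records and field names here are hypothetical:

```python
def flag_structuring(transactions, low=9_500, high=9_999):
    """The Figure 1 rule: alert on every cash transaction in [low, high]."""
    return [t for t in transactions if low <= t["amount"] <= high]

# Ten $9,500 cash transactions, only one of which is actually illegal.
txns = [{"customer": i, "amount": 9_500} for i in range(10)]
alerts = flag_structuring(txns)

# Every transaction trips the rule, so with one true positive the
# false positive rate is 9 out of 10.
false_positive_rate = (len(alerts) - 1) / len(alerts)
```

Because the rule sees only the amount, it cannot distinguish the one suspicious transaction from the nine legitimate ones; that is precisely the missing-context problem.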

The more complex a bank’s products and services become, the more rules it will have. Can the rules in a TMS be fine-tuned? Yes, from a technical perspective it is possible to refine the rules, and regulators expect banks to validate the effectiveness of their rules periodically. However, with limited contextual information, banks are reduced to conducting simple above-the-line and below-the-line testing. Each time a bank tunes its rules to reduce the number of false positives, it runs the risk of missing truly suspicious activity and being subject to regulatory action. The result: the blunt-force approach is the status quo. Banks are forced to process the high number of false positives, and the usefulness of SARs to law enforcement is reduced. This makes TMS technology ripe for disruption through the application of machine learning and artificial intelligence (ML/AI).

Reducing false positives demands a 360 degree view for more context

Banks will eventually embrace the cloud across the majority of their technology, but right now they need a true 360 degree view of customers across all the bank’s platforms and systems, based on the bank’s full data sets. When that 360 degree view exists, true customer insights and context become more readily available; risk and compliance can be made more robust while driving down costs; and deep analytics on a bank’s dark data, customer base, and products becomes available. It also lays a strong foundation for building a Know Your Customer (KYC) solution, and for utilizing Microsoft Dynamics (a customer relationship management system) and Microsoft’s mixed reality product, HoloLens, to improve employee effectiveness and speed to resolution.

Currently, AML investigators typically access three to five or more systems to gather the information needed to investigate an alert. Imagine, through HoloLens, AML investigators seeing visualizations of networks of transactions and drilling down into the data by reaching out and selecting a node; applying predefined Power BI dashboard templates to further interrogate the selected data; keeping multiple windows open into other systems; or two or more investigators simultaneously viewing an investigation and its related data to reach a collaborative decision. Such a system would help banks overcome their resistance to keeping customer information in the cloud while meeting regulators’ demand for a complete 360 degree view of customers across all of a bank’s platforms, enabling better customer insights, compliance, and risk management.

Recommended next steps

To cut through the overwhelming amount of information and provide a path for your company, we just released the Detecting Online and Mobile Fraud with AI use case. There you can learn more about new types of attack vectors, the types of fraud emerging every day, and how detection systems need to respond faster than ever by leveraging Azure machine learning and AI solutions.

We would love for you to engage with the author and guest contributor on this topic. Feel free to reach out via the comments below.

Retail brands: gain a competitive advantage with modern data management


Retailers today are responsible for a significant amount of data. As a customer, I expect that the data I provide to a retailer is handled properly following the Payment Card Industry Data Security Standard (PCI DSS), General Data Protection Regulation (GDPR) and other compliance guidelines.

I also expect that the data I give to a retailer is being leveraged in a way that improves my shopping experience. For example, I want better recommendations given my purchase history, my “likes,” and my browsing. I expect the retailer to know my email and payment information for a single click checkout. These are small and trivial frictions that can be eliminated, as long as the data is handled properly with the right permissions. And the capture and analysis of big data (general information aggregated from all customers) is an opportunity for the retailer to fine-tune the overall customer experience.

However, much of the data collected goes unused, because the infrastructure within an organization is unable to make the data accessible or searchable, so it can’t be used to improve decision-making across the retail value chain. The unused data comes from many sources: mobile devices, digital and physical store shopping, and IoT. Vast amounts of data points are ripe for collection, but the investment and modernization required to effectively manage and leverage the data haven’t caught up. Today’s barriers to gaining value from data are:

  • Data silos
  • Incongruent data types
  • Performance constraints
  • Rising costs

A healthy data environment is one that enables value to be generated by moving the organization from “what happened?” to predictive, action-oriented insights: “what will happen?” This in turn leads to the question “what should I do?”, at which point predictions can be automated, moving from insight to action and generating new value.

Modern practices allow data to be leveraged more efficiently and effectively in a flexible, extensible, and secure way. Deriving value from your data also means that your organization has achieved:

  • Data integration
  • Support for diverse data models
  • Unlimited scale
  • Fully-managed infrastructure
  • Lower total cost of ownership

Chart: Ideal characteristics of a data ecosystem

Today: Barriers to gaining value from data → Tomorrow: Deriving value from your data

  • Data silos → Data integration
  • Incongruent data types → Support for diverse data models
  • Performance constraints → Unlimited scale
  • Rising costs → Fully-managed infrastructure and lower total cost of ownership (TCO)

In conjunction with leveraging new technology capabilities such as the cloud and AI, organizations must put effort into effective data management. In a way, it should be a prerequisite for modernization and transformation efforts.

A data management model consists of these stages: ingest, prepare, store, analyze, and action. Data can be categorized into three types of sources: purchased, public, and proprietary. In legacy environments, data is often siloed and constrained by a variety of factors, such as channel, system, or utility. A major goal should be to remove these traditional constraints so that the right data can be leveraged securely across an organization at scale. Nurturing a healthy data environment is a competitive advantage, especially as an organization’s data science capabilities grow.

Great data management provides a significant strategic advantage and enables brand differentiation when serving customers. In retail, it doesn’t matter whether you are data rich or data poor: if you can’t action and operationalize insights to power and automate decision making, your data management strategy needs to be reexamined.

Recommended next steps

Windows 10 SDK Preview Build 17754 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17754 or greater). The Preview SDK Build 17754 contains bug fixes and changes to the API surface area that are still under development.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit to the Store apps that target Windows 10, version 1803 or earlier.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
  • To assist with script access to the SDK, the ISO will also be accessible through the following URL: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=17754 once the static URL is published.

C++/WinRT Update for build 17709 and beyond:

This update introduces many improvements and fixes for C++/WinRT. Notably, it introduces the ability to build C++/WinRT without any dependency on the Windows SDK. This isn’t particularly interesting to the OS developer, but even in the OS repo it provides benefits, because C++/WinRT itself no longer includes any Windows headers; a developer will typically pull in fewer dependencies inadvertently. This also means a dramatic reduction in the number of macros a C++/WinRT developer must guard against. Removing the dependency on the Windows headers makes C++/WinRT more portable and standards compliant and furthers our efforts to make it a cross-compiler and cross-platform library. It also means the C++/WinRT headers will never be mangled by macros. If you previously relied on C++/WinRT to include various Windows headers, you will now have to include them yourself. It has always been good practice to explicitly include any headers you depend on and not rely on another library to include them for you.

Highlights

Support get_strong and get_weak to create delegates: This update allows a developer to use either get_strong or get_weak instead of a raw this pointer when creating a delegate pointing to a member function.

Add async cancellation callback: The most frequently requested feature for C++/WinRT’s coroutine support has been the addition of a cancellation callback.

Simplify the use of APIs expecting IBuffer parameters: Although most APIs prefer collections or arrays, enough APIs rely on IBuffer that it should be easier to use such APIs from C++. This update provides direct access to the data behind an IBuffer implementation using the same data naming convention used by the C++ standard library containers. This also avoids colliding with metadata names that conventionally begin with an uppercase letter.

Conformance: Improved support for Clang and Visual C++’s stricter conformance modes.

Improved code gen: Various improvements to reduce code size, improve inlining, and optimize factory caching.

Remove unnecessary recursion: When the command line refers to a folder rather than a specific winmd, cppwinrt will no longer search recursively for winmd files. Recursive search causes performance problems in the OS build and can lead to hard-to-diagnose usage errors when developers inadvertently cause cppwinrt to consume more winmds than expected. The cppwinrt compiler also now handles duplicates more intelligently, making it more resilient to user error and poorly formed winmd files.

Declare both WINRT_CanUnloadNow and WINRT_GetActivationFactory in base.h: Callers no longer need to declare them directly. Their signatures have also changed, which amounts to a breaking change, but the declarations in base.h alleviate most of the pain. The change is necessitated by the fact that C++/WinRT no longer depends on the Windows headers; it removes the dependency on types from those headers.

Harden smart pointers: The event revokers didn’t revoke when move-assigned a new value. This led me to take a closer look at the smart pointer classes, and I noticed that they were not reliably handling self-assignment. The problem is rooted in the com_ptr class template that most of the others rely on. I fixed com_ptr and updated the event revokers to handle move semantics correctly, ensuring that they revoke upon assignment. The handle class template has also been hardened by removing the implicit constructor that made it easy to write incorrect code. This also turned bugs in the OS into compiler errors, which were fixed in this PR.

Breaking Changes

Support for non-WinRT interfaces is disabled by default. To enable, simply #include <unknwn.h> before any C++/WinRT headers.

winrt::get_abi(winrt::hstring) now returns void* instead of HSTRING. Code requiring the HSTRING ABI can simply use a static_cast.

winrt::put_abi(winrt::hstring) returns void** instead of HSTRING*. Code requiring the HSTRING ABI can simply use a reinterpret_cast.

HRESULT is now projected as winrt::hresult. Code requiring an HRESULT can simply static_cast when type checking or type-trait support is needed; winrt::hresult is otherwise convertible to HRESULT as long as <unknwn.h> is included first.

GUID is now projected as winrt::guid. Code implementing APIs with GUID parameters must use winrt::guid instead, but it is otherwise convertible as long as <unknwn.h> is included first.

The signatures of WINRT_CanUnloadNow and WINRT_GetActivationFactory have changed. Code must not declare these functions at all; instead, include winrt/base.h for their declarations.

The winrt::handle constructor is now explicit. Code assigning a raw handle value must call the attach method instead.

winrt::clock::from_FILETIME has been deprecated. Code should use winrt::clock::from_file_time instead.

What’s New

MSIX Support

It’s finally here! You can now package your applications as MSIX! These applications can be installed and run on any device with build 17682 or later.

To package your application as MSIX, use the MakeAppx tool. To install the application, just click on the MSIX file. To understand more about MSIX, watch this introductory video: link

Feedback and comments are welcome on our MSIX community: http://aka.ms/MSIXCommunity

MSIX is not currently supported by the App Certification Kit or the Microsoft Store.

MC.EXE

We’ve made some important changes to the C/C++ ETW code generation of mc.exe (Message Compiler):

The “-mof” parameter is deprecated. This parameter instructs MC.exe to generate ETW code that is compatible with Windows XP and earlier. Support for the “-mof” parameter will be removed in a future version of mc.exe.

As long as the “-mof” parameter is not used, the generated C/C++ header is now compatible with both kernel-mode and user-mode, regardless of whether “-km” or “-um” was specified on the command line. The header will use the _ETW_KM_ macro to automatically determine whether it is being compiled for kernel-mode or user-mode and will call the appropriate ETW APIs for each mode.

  • The only remaining difference between “-km” and “-um” is that the EventWrite[EventName] macros generated with “-km” have an Activity ID parameter while the EventWrite[EventName] macros generated with “-um” do not have an Activity ID parameter.

The EventWrite[EventName] macros now default to calling EventWriteTransfer (user mode) or EtwWriteTransfer (kernel mode). Previously, the EventWrite[EventName] macros defaulted to calling EventWrite (user mode) or EtwWrite (kernel mode).

  • The generated header now supports several customization macros. For example, you can set the MCGEN_EVENTWRITETRANSFER macro if you need the generated macros to call something other than EventWriteTransfer.
  • The manifest supports new attributes.
    • Event “name”: non-localized event name.
    • Event “attributes”: additional key-value metadata for an event such as filename, line number, component name, function name.
    • Event “tags”: 28-bit value with user-defined semantics (per-event).
    • Field “tags”: 28-bit value with user-defined semantics (per-field – can be applied to “data” or “struct” elements).
  • You can now define “provider traits” in the manifest (e.g. provider group). If provider traits are used in the manifest, the EventRegister[ProviderName] macro will automatically register them.
  • MC will now report an error if a localized message file is missing a string. (Previously MC would silently generate a corrupt message resource.)
  • MC can now generate Unicode (utf-8 or utf-16) output with the “-cp utf-8” or “-cp utf-16” parameters.

Known Issues

The SDK headers are generated with types in the “ABI” namespace. This is done to avoid conflicts with C++/CX and C++/WinRT clients that need to consume types directly at the ABI layer[1]. By default, types emitted by MIDL are *not* placed in the ABI namespace; this can introduce conflicts when code consumes ABI types from both Windows WinRT MIDL-generated headers and non-Windows WinRT MIDL-generated headers (especially challenging when the non-Windows header references Windows types).

To ensure that developers have a consistent view of the WinRT API surface, validation has been added to the generated headers to ensure that the ABI prefix is consistent between the Windows headers and user generated headers. If you encounter an error like:

5>c:\program files (x86)\windows kits\10\include\10.0.17687.0\winrt\windows.foundation.h(83): error C2220: warning treated as error - no 'object' file generated

5>c:\program files (x86)\windows kits\10\include\10.0.17687.0\winrt\windows.foundation.h(83): warning C4005: 'CHECK_NS_PREFIX_STATE': macro redefinition

5>g:\<PATH TO YOUR HEADER HERE>(41): note: see previous definition of 'CHECK_NS_PREFIX_STATE'

it means that some of your MIDL-generated headers are inconsistent with the system-generated headers.

There are two ways to fix this:

  • Preferred: Compile your IDL file with the /ns_prefix MIDL command-line switch. This moves all of your types into the ABI namespace, consistent with the Windows headers, although it may require changes to your code.
  • Alternate: Add #define DISABLE_NS_PREFIX_CHECKS before including the Windows headers. This suppresses the validation.

API Updates, Additions and Removals

When targeting new APIs, consider writing your app to be adaptive in order to run correctly on the widest number of Windows 10 devices. Please see Dynamically detecting features with API contracts for more information.

The following APIs have been added to the platform since the release of build 17134; APIs that have been removed since then are also called out below.

In addition, please note the following removal since build 17744:


namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackagePropertySetView : IIterable<IKeyValuePair<string, object>>, IMapView<string, object> {
    string SourceDisplayName { get; }
  }
}

Additions:

namespace Windows.AI.MachineLearning {
  public interface ILearningModelFeatureDescriptor
  public interface ILearningModelFeatureValue
  public interface ILearningModelOperatorProvider
  public sealed class ImageFeatureDescriptor : ILearningModelFeatureDescriptor
  public sealed class ImageFeatureValue : ILearningModelFeatureValue
  public interface ITensor : ILearningModelFeatureValue
  public sealed class LearningModel : IClosable
  public sealed class LearningModelBinding : IIterable<IKeyValuePair<string, object>>, IMapView<string, object>
  public sealed class LearningModelDevice
  public enum LearningModelDeviceKind
  public sealed class LearningModelEvaluationResult
  public enum LearningModelFeatureKind
  public sealed class LearningModelSession : IClosable
  public struct MachineLearningContract
  public sealed class MapFeatureDescriptor : ILearningModelFeatureDescriptor
  public sealed class SequenceFeatureDescriptor : ILearningModelFeatureDescriptor
  public sealed class TensorBoolean : ILearningModelFeatureValue, ITensor
  public sealed class TensorDouble : ILearningModelFeatureValue, ITensor
  public sealed class TensorFeatureDescriptor : ILearningModelFeatureDescriptor
  public sealed class TensorFloat : ILearningModelFeatureValue, ITensor
  public sealed class TensorFloat16Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorInt16Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorInt32Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorInt64Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorInt8Bit : ILearningModelFeatureValue, ITensor
  public enum TensorKind
  public sealed class TensorString : ILearningModelFeatureValue, ITensor
  public sealed class TensorUInt16Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorUInt32Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorUInt64Bit : ILearningModelFeatureValue, ITensor
  public sealed class TensorUInt8Bit : ILearningModelFeatureValue, ITensor
}
namespace Windows.ApplicationModel {
  public sealed class AppInstallerInfo
  public sealed class LimitedAccessFeatureRequestResult
  public static class LimitedAccessFeatures
  public enum LimitedAccessFeatureStatus
  public sealed class Package {
    IAsyncOperation<PackageUpdateAvailabilityResult> CheckUpdateAvailabilityAsync();
    AppInstallerInfo GetAppInstallerInfo();
  }
  public enum PackageUpdateAvailability
  public sealed class PackageUpdateAvailabilityResult
}
namespace Windows.ApplicationModel.Calls {
  public sealed class VoipCallCoordinator {
    IAsyncOperation<VoipPhoneCallResourceReservationStatus> ReserveCallResourcesAsync();
  }
}
namespace Windows.ApplicationModel.Chat {
  public static class ChatCapabilitiesManager {
    public static IAsyncOperation<ChatCapabilities> GetCachedCapabilitiesAsync(string address, string transportId);
    public static IAsyncOperation<ChatCapabilities> GetCapabilitiesFromNetworkAsync(string address, string transportId);
  }
  public static class RcsManager {
    public static event EventHandler<object> TransportListChanged;
  }
}
namespace Windows.ApplicationModel.DataTransfer {
  public static class Clipboard {
    public static event EventHandler<ClipboardHistoryChangedEventArgs> HistoryChanged;
    public static event EventHandler<object> HistoryEnabledChanged;
    public static event EventHandler<object> RoamingEnabledChanged;
    public static bool ClearHistory();
    public static bool DeleteItemFromHistory(ClipboardHistoryItem item);
    public static IAsyncOperation<ClipboardHistoryItemsResult> GetHistoryItemsAsync();
    public static bool IsHistoryEnabled();
    public static bool IsRoamingEnabled();
    public static bool SetContentWithOptions(DataPackage content, ClipboardContentOptions options);
    public static SetHistoryItemAsContentStatus SetHistoryItemAsContent(ClipboardHistoryItem item);
  }
  public sealed class ClipboardContentOptions
  public sealed class ClipboardHistoryChangedEventArgs
  public sealed class ClipboardHistoryItem
  public sealed class ClipboardHistoryItemsResult
  public enum ClipboardHistoryItemsResultStatus
  public sealed class DataPackagePropertySetView : IIterable<IKeyValuePair<string, object>>, IMapView<string, object> {
    bool IsFromRoamingClipboard { get; }
  }
  public enum SetHistoryItemAsContentStatus
}
namespace Windows.ApplicationModel.Store.Preview {
  public enum DeliveryOptimizationDownloadMode
  public enum DeliveryOptimizationDownloadModeSource
  public sealed class DeliveryOptimizationSettings
  public static class StoreConfiguration {
    public static bool IsPinToDesktopSupported();
    public static bool IsPinToStartSupported();
    public static bool IsPinToTaskbarSupported();
    public static void PinToDesktop(string appPackageFamilyName);
    public static void PinToDesktopForUser(User user, string appPackageFamilyName);
  }
}
namespace Windows.ApplicationModel.Store.Preview.InstallControl {
  public enum AppInstallationToastNotificationMode
  public sealed class AppInstallItem {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
  }
  public sealed class AppInstallManager {
    bool CanInstallForAllUsers { get; }
  }
  public sealed class AppInstallOptions {
    string CampaignId { get; set; }
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    string ExtendedCampaignId { get; set; }
    bool InstallForAllUsers { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
    bool StageButDoNotInstall { get; set; }
  }
  public sealed class AppUpdateOptions {
    bool AutomaticallyDownloadAndInstallUpdateIfFound { get; set; }
  }
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivity {
    bool IsRoamable { get; set; }
  }
}
namespace Windows.Data.Text {
  public sealed class TextPredictionGenerator {
    CoreTextInputScope InputScope { get; set; }
    IAsyncOperation<IVectorView<string>> GetCandidatesAsync(string input, uint maxCandidates, TextPredictionOptions predictionOptions, IIterable<string> previousStrings);
    IAsyncOperation<IVectorView<string>> GetNextWordCandidatesAsync(uint maxCandidates, IIterable<string> previousStrings);
  }
  public enum TextPredictionOptions : uint
}
namespace Windows.Devices.Display.Core {
  public sealed class DisplayAdapter
  public enum DisplayBitsPerChannel : uint
  public sealed class DisplayDevice
  public enum DisplayDeviceCapability
  public sealed class DisplayFence
  public sealed class DisplayManager : IClosable
  public sealed class DisplayManagerChangedEventArgs
  public sealed class DisplayManagerDisabledEventArgs
  public sealed class DisplayManagerEnabledEventArgs
  public enum DisplayManagerOptions : uint
  public sealed class DisplayManagerPathsFailedOrInvalidatedEventArgs
  public enum DisplayManagerResult
  public sealed class DisplayManagerResultWithState
  public sealed class DisplayModeInfo
  public enum DisplayModeQueryOptions : uint
  public sealed class DisplayPath
  public enum DisplayPathScaling
  public enum DisplayPathStatus
  public struct DisplayPresentationRate
  public sealed class DisplayPrimaryDescription
  public enum DisplayRotation
  public sealed class DisplayScanout
  public sealed class DisplaySource
  public sealed class DisplayState
  public enum DisplayStateApplyOptions : uint
  public enum DisplayStateFunctionalizeOptions : uint
  public sealed class DisplayStateOperationResult
  public enum DisplayStateOperationStatus
  public sealed class DisplaySurface
  public sealed class DisplayTarget
  public enum DisplayTargetPersistence
  public sealed class DisplayTask
  public sealed class DisplayTaskPool
  public enum DisplayTaskSignalKind
  public sealed class DisplayView
  public sealed class DisplayWireFormat
  public enum DisplayWireFormatColorSpace
  public enum DisplayWireFormatEotf
  public enum DisplayWireFormatHdrMetadata
  public enum DisplayWireFormatPixelEncoding
}
namespace Windows.Devices.Enumeration {
  public enum DeviceInformationKind {
    DevicePanel = 8,
  }
  public sealed class DeviceInformationPairing {
    public static bool TryRegisterForAllInboundPairingRequestsWithProtectionLevel(DevicePairingKinds pairingKindsSupported, DevicePairingProtectionLevel minProtectionLevel);
  }
}
namespace Windows.Devices.Enumeration.Pnp {
  public enum PnpObjectType {
    DevicePanel = 8,
  }
}
namespace Windows.Devices.Lights {
  public sealed class LampArray
  public enum LampArrayKind
  public sealed class LampInfo
  public enum LampPurposes : uint
}
namespace Windows.Devices.Lights.Effects {
  public interface ILampArrayEffect
  public sealed class LampArrayBitmapEffect : ILampArrayEffect
  public sealed class LampArrayBitmapRequestedEventArgs
  public sealed class LampArrayBlinkEffect : ILampArrayEffect
  public sealed class LampArrayColorRampEffect : ILampArrayEffect
  public sealed class LampArrayCustomEffect : ILampArrayEffect
  public enum LampArrayEffectCompletionBehavior
  public sealed class LampArrayEffectPlaylist : IIterable<ILampArrayEffect>, IVectorView<ILampArrayEffect>
  public enum LampArrayEffectStartMode
  public enum LampArrayRepetitionMode
  public sealed class LampArraySolidEffect : ILampArrayEffect
  public sealed class LampArrayUpdateRequestedEventArgs
}
namespace Windows.Devices.PointOfService {
  public sealed class BarcodeScannerCapabilities {
    bool IsVideoPreviewSupported { get; }
  }
  public sealed class ClaimedBarcodeScanner : IClosable {
    event TypedEventHandler<ClaimedBarcodeScanner, ClaimedBarcodeScannerClosedEventArgs> Closed;
  }
  public sealed class ClaimedBarcodeScannerClosedEventArgs
  public sealed class ClaimedCashDrawer : IClosable {
    event TypedEventHandler<ClaimedCashDrawer, ClaimedCashDrawerClosedEventArgs> Closed;
  }
  public sealed class ClaimedCashDrawerClosedEventArgs
  public sealed class ClaimedLineDisplay : IClosable {
    event TypedEventHandler<ClaimedLineDisplay, ClaimedLineDisplayClosedEventArgs> Closed;
  }
  public sealed class ClaimedLineDisplayClosedEventArgs
  public sealed class ClaimedMagneticStripeReader : IClosable {
    event TypedEventHandler<ClaimedMagneticStripeReader, ClaimedMagneticStripeReaderClosedEventArgs> Closed;
  }
  public sealed class ClaimedMagneticStripeReaderClosedEventArgs
  public sealed class ClaimedPosPrinter : IClosable {
    event TypedEventHandler<ClaimedPosPrinter, ClaimedPosPrinterClosedEventArgs> Closed;
  }
  public sealed class ClaimedPosPrinterClosedEventArgs
}
namespace Windows.Devices.PointOfService.Provider {
  public sealed class BarcodeScannerDisableScannerRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerEnableScannerRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerFrameReader : IClosable
  public sealed class BarcodeScannerFrameReaderFrameArrivedEventArgs
  public sealed class BarcodeScannerGetSymbologyAttributesRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerHideVideoPreviewRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerProviderConnection : IClosable {
    IAsyncOperation<BarcodeScannerFrameReader> CreateFrameReaderAsync();
    IAsyncOperation<BarcodeScannerFrameReader> CreateFrameReaderAsync(BitmapPixelFormat preferredFormat);
    IAsyncOperation<BarcodeScannerFrameReader> CreateFrameReaderAsync(BitmapPixelFormat preferredFormat, BitmapSize preferredSize);
  }
  public sealed class BarcodeScannerSetActiveSymbologiesRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerSetSymbologyAttributesRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerStartSoftwareTriggerRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerStopSoftwareTriggerRequest {
    IAsyncAction ReportFailedAsync(int reason);
    IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription);
  }
  public sealed class BarcodeScannerVideoFrame : IClosable
}
namespace Windows.Devices.Sensors {
  public sealed class HingeAngleReading
  public sealed class HingeAngleSensor
  public sealed class HingeAngleSensorReadingChangedEventArgs
  public sealed class SimpleOrientationSensor {
    public static IAsyncOperation<SimpleOrientationSensor> FromIdAsync(string deviceId);
    public static string GetDeviceSelector();
  }
}
namespace Windows.Devices.SmartCards {
  public static class KnownSmartCardAppletIds
  public sealed class SmartCardAppletIdGroup {
    string Description { get; set; }
    IRandomAccessStreamReference Logo { get; set; }
    ValueSet Properties { get; }
    bool SecureUserAuthenticationRequired { get; set; }
  }
  public sealed class SmartCardAppletIdGroupRegistration {
    string SmartCardReaderId { get; }
    IAsyncAction SetPropertiesAsync(ValueSet props);
  }
}
namespace Windows.Devices.WiFi {
  public enum WiFiPhyKind {
    HE = 10,
  }
}
namespace Windows.Foundation {
  public static class GuidHelper
}
namespace Windows.Globalization {
  public static class CurrencyIdentifiers {
    public static string MRU { get; }
    public static string SSP { get; }
    public static string STN { get; }
    public static string VES { get; }
  }
}
namespace Windows.Graphics.Capture {
  public sealed class Direct3D11CaptureFramePool : IClosable {
    public static Direct3D11CaptureFramePool CreateFreeThreaded(IDirect3DDevice device, DirectXPixelFormat pixelFormat, int numberOfBuffers, SizeInt32 size);
  }
  public sealed class GraphicsCaptureItem {
    public static GraphicsCaptureItem CreateFromVisual(Visual visual);
  }
}
namespace Windows.Graphics.Display.Core {
  public enum HdmiDisplayHdrOption {
    DolbyVisionLowLatency = 3,
  }
  public sealed class HdmiDisplayMode {
    bool IsDolbyVisionLowLatencySupported { get; }
  }
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicCamera {
    bool IsHardwareContentProtectionEnabled { get; set; }
    bool IsHardwareContentProtectionSupported { get; }
  }
  public sealed class HolographicQuadLayerUpdateParameters {
    bool CanAcquireWithHardwareProtection { get; }
    IDirect3DSurface AcquireBufferToUpdateContentWithHardwareProtection();
  }
}
namespace Windows.Graphics.Imaging {
  public sealed class BitmapDecoder : IBitmapFrame, IBitmapFrameWithSoftwareBitmap {
    public static Guid HeifDecoderId { get; }
    public static Guid WebpDecoderId { get; }
  }
  public sealed class BitmapEncoder {
    public static Guid HeifEncoderId { get; }
  }
}
namespace Windows.Management.Deployment {
  public enum DeploymentOptions : uint {
    ForceUpdateFromAnyVersion = (uint)262144,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> DeprovisionPackageForAllUsersAsync(string packageFamilyName);
  }
  public enum RemovalOptions : uint {
    RemoveForAllUsers = (uint)524288,
  }
}
namespace Windows.Media.Audio {
  public sealed class CreateAudioDeviceInputNodeResult {
    HResult ExtendedError { get; }
  }
  public sealed class CreateAudioDeviceOutputNodeResult {
    HResult ExtendedError { get; }
  }
  public sealed class CreateAudioFileInputNodeResult {
    HResult ExtendedError { get; }
  }
  public sealed class CreateAudioFileOutputNodeResult {
    HResult ExtendedError { get; }
  }
  public sealed class CreateAudioGraphResult {
    HResult ExtendedError { get; }
  }
  public sealed class CreateMediaSourceAudioInputNodeResult {
    HResult ExtendedError { get; }
  }
  public enum MixedRealitySpatialAudioFormatPolicy
  public sealed class SetDefaultSpatialAudioFormatResult
  public enum SetDefaultSpatialAudioFormatStatus
  public sealed class SpatialAudioDeviceConfiguration
  public sealed class SpatialAudioFormatConfiguration
  public static class SpatialAudioFormatSubtype
}
namespace Windows.Media.Control {
  public sealed class CurrentSessionChangedEventArgs
  public sealed class GlobalSystemMediaTransportControlsSession
  public sealed class GlobalSystemMediaTransportControlsSessionManager
  public sealed class GlobalSystemMediaTransportControlsSessionMediaProperties
  public sealed class GlobalSystemMediaTransportControlsSessionPlaybackControls
  public sealed class GlobalSystemMediaTransportControlsSessionPlaybackInfo
  public enum GlobalSystemMediaTransportControlsSessionPlaybackStatus
  public sealed class GlobalSystemMediaTransportControlsSessionTimelineProperties
  public sealed class MediaPropertiesChangedEventArgs
  public sealed class PlaybackInfoChangedEventArgs
  public sealed class SessionsChangedEventArgs
  public sealed class TimelinePropertiesChangedEventArgs
}
namespace Windows.Media.Core {
  public sealed class MediaStreamSample {
    IDirect3DSurface Direct3D11Surface { get; }
    public static MediaStreamSample CreateFromDirect3D11Surface(IDirect3DSurface surface, TimeSpan timestamp);
  }
}
namespace Windows.Media.Devices.Core {
  public sealed class CameraIntrinsics {
    public CameraIntrinsics(Vector2 focalLength, Vector2 principalPoint, Vector3 radialDistortion, Vector2 tangentialDistortion, uint imageWidth, uint imageHeight);
  }
}
namespace Windows.Media.Import {
  public enum PhotoImportContentTypeFilter {
    ImagesAndVideosFromCameraRoll = 3,
  }
  public sealed class PhotoImportItem {
    string Path { get; }
  }
}
namespace Windows.Media.MediaProperties {
  public sealed class ImageEncodingProperties : IMediaEncodingProperties {
    public static ImageEncodingProperties CreateHeif();
  }
  public static class MediaEncodingSubtypes {
    public static string Heif { get; }
  }
}
namespace Windows.Media.Protection.PlayReady {
  public static class PlayReadyStatics {
    public static IReference<DateTime> HardwareDRMDisabledAtTime { get; }
    public static IReference<DateTime> HardwareDRMDisabledUntilTime { get; }
    public static void ResetHardwareDRMDisabled();
  }
}
namespace Windows.Media.Streaming.Adaptive {
  public enum AdaptiveMediaSourceResourceType {
    MediaSegmentIndex = 5,
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public enum BackgroundTransferPriority {
    Low = 2,
  }
}
namespace Windows.Networking.Connectivity {
  public sealed class ConnectionProfile {
    bool CanDelete { get; }
    IAsyncOperation<ConnectionProfileDeleteStatus> TryDeleteAsync();
  }
  public enum ConnectionProfileDeleteStatus
}
namespace Windows.Networking.NetworkOperators {
  public enum ESimOperationStatus {
    CardGeneralFailure = 13,
    ConfirmationCodeMissing = 14,
    EidMismatch = 18,
    InvalidMatchingId = 15,
    NoCorrespondingRequest = 23,
    NoEligibleProfileForThisDevice = 16,
    OperationAborted = 17,
    OperationProhibitedByProfileClass = 21,
    ProfileNotAvailableForNewBinding = 19,
    ProfileNotPresent = 22,
    ProfileNotReleasedByOperator = 20,
  }
}
namespace Windows.Perception {
  public sealed class PerceptionTimestamp {
    TimeSpan SystemRelativeTargetTime { get; }
  }
  public static class PerceptionTimestampHelper {
    public static PerceptionTimestamp FromSystemRelativeTargetTime(TimeSpan targetTime);
  }
}
namespace Windows.Perception.Spatial {
  public sealed class SpatialAnchorExporter
  public enum SpatialAnchorExportPurpose
  public sealed class SpatialAnchorExportSufficiency
  public sealed class SpatialLocation {
    Vector3 AbsoluteAngularAccelerationAxisAngle { get; }
    Vector3 AbsoluteAngularVelocityAxisAngle { get; }
  }
}
namespace Windows.Perception.Spatial.Preview {
  public static class SpatialGraphInteropPreview
}
namespace Windows.Services.Cortana {
  public sealed class CortanaActionableInsights
  public sealed class CortanaActionableInsightsOptions
}
namespace Windows.Services.Store {
  public sealed class StoreAppLicense {
    bool IsDiscLicense { get; }
  }
  public sealed class StoreContext {
    IAsyncOperation<StoreRateAndReviewResult> RequestRateAndReviewAppAsync();
    IAsyncOperation<IVectorView<StoreQueueItem>> SetInstallOrderForAssociatedStoreQueueItemsAsync(IIterable<StoreQueueItem> items);
  }
  public sealed class StoreQueueItem {
    IAsyncAction CancelInstallAsync();
    IAsyncAction PauseInstallAsync();
    IAsyncAction ResumeInstallAsync();
  }
  public sealed class StoreRateAndReviewResult
  public enum StoreRateAndReviewStatus
}
namespace Windows.Storage.Provider {
  public enum StorageProviderHydrationPolicyModifier : uint {
    AutoDehydrationAllowed = (uint)4,
  }
  public sealed class StorageProviderSyncRootInfo {
    Guid ProviderId { get; set; }
  }
}
namespace Windows.System {
  public sealed class AppUriHandlerHost
  public sealed class AppUriHandlerRegistration
  public sealed class AppUriHandlerRegistrationManager
  public static class Launcher {
    public static IAsyncOperation<bool> LaunchFolderPathAsync(string path);
    public static IAsyncOperation<bool> LaunchFolderPathAsync(string path, FolderLauncherOptions options);
    public static IAsyncOperation<bool> LaunchFolderPathForUserAsync(User user, string path);
    public static IAsyncOperation<bool> LaunchFolderPathForUserAsync(User user, string path, FolderLauncherOptions options);
  }
}
namespace Windows.System.Preview {
  public enum HingeState
  public sealed class TwoPanelHingedDevicePosturePreview
  public sealed class TwoPanelHingedDevicePosturePreviewReading
  public sealed class TwoPanelHingedDevicePosturePreviewReadingChangedEventArgs
}
namespace Windows.System.Profile {
  public enum SystemOutOfBoxExperienceState
  public static class SystemSetupInfo
  public static class WindowsIntegrityPolicy
}
namespace Windows.System.Profile.SystemManufacturers {
  public sealed class SystemSupportDeviceInfo
  public static class SystemSupportInfo {
    public static SystemSupportDeviceInfo LocalDeviceInfo { get; }
  }
}
namespace Windows.System.RemoteSystems {
  public sealed class RemoteSystem {
    IVectorView<RemoteSystemApp> Apps { get; }
  }
  public sealed class RemoteSystemApp
  public sealed class RemoteSystemAppRegistration
  public sealed class RemoteSystemConnectionInfo
  public sealed class RemoteSystemConnectionRequest {
    RemoteSystemApp RemoteSystemApp { get; }
    public static RemoteSystemConnectionRequest CreateForApp(RemoteSystemApp remoteSystemApp);
  }
  public sealed class RemoteSystemWebAccountFilter : IRemoteSystemFilter
}
namespace Windows.System.Update {
  public enum SystemUpdateAttentionRequiredReason
  public sealed class SystemUpdateItem
  public enum SystemUpdateItemState
  public sealed class SystemUpdateLastErrorInfo
  public static class SystemUpdateManager
  public enum SystemUpdateManagerState
  public enum SystemUpdateStartInstallAction
}
namespace Windows.System.UserProfile {
  public sealed class AssignedAccessSettings
}
namespace Windows.UI.Accessibility {
  public sealed class ScreenReaderPositionChangedEventArgs
  public sealed class ScreenReaderService
}
namespace Windows.UI.Composition {
  public enum AnimationPropertyAccessMode
  public sealed class AnimationPropertyInfo : CompositionObject
  public sealed class BooleanKeyFrameAnimation : KeyFrameAnimation
  public class CompositionAnimation : CompositionObject, ICompositionAnimationBase {
    void SetExpressionReferenceParameter(string parameterName, IAnimationObject source);
  }
  public enum CompositionBatchTypes : uint {
    AllAnimations = (uint)5,
    InfiniteAnimation = (uint)4,
  }
  public sealed class CompositionGeometricClip : CompositionClip
  public class CompositionGradientBrush : CompositionBrush {
    CompositionMappingMode MappingMode { get; set; }
  }
  public enum CompositionMappingMode
  public class CompositionObject : IAnimationObject, IClosable {
    void PopulatePropertyInfo(string propertyName, AnimationPropertyInfo propertyInfo);
    public static void StartAnimationGroupWithIAnimationObject(IAnimationObject target, ICompositionAnimationBase animation);
    public static void StartAnimationWithIAnimationObject(IAnimationObject target, string propertyName, CompositionAnimation animation);
  }
  public sealed class Compositor : IClosable {
    BooleanKeyFrameAnimation CreateBooleanKeyFrameAnimation();
    CompositionGeometricClip CreateGeometricClip();
    CompositionGeometricClip CreateGeometricClip(CompositionGeometry geometry);
    RedirectVisual CreateRedirectVisual();
    RedirectVisual CreateRedirectVisual(Visual source);
  }
  public interface IAnimationObject
  public sealed class RedirectVisual : ContainerVisual
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionSourceConfiguration : CompositionObject
  public enum InteractionSourceRedirectionMode
  public sealed class InteractionTracker : CompositionObject {
    bool IsInertiaFromImpulse { get; }
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option);
    int TryUpdatePositionBy(Vector3 amount, InteractionTrackerClampingOption option);
  }
  public enum InteractionTrackerClampingOption
  public sealed class InteractionTrackerInertiaStateEnteredArgs {
    bool IsInertiaFromImpulse { get; }
  }
  public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
    InteractionSourceConfiguration PointerWheelConfig { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public enum HandwritingLineHeight
  public sealed class PenAndInkSettings
  public enum PenHandedness
}
namespace Windows.UI.Input.Inking.Preview {
  public sealed class PalmRejectionDelayZonePreview : IClosable
}
namespace Windows.UI.Notifications {
  public sealed class ScheduledToastNotificationShowingEventArgs
  public sealed class ToastNotifier {
    event TypedEventHandler<ToastNotifier, ScheduledToastNotificationShowingEventArgs> ScheduledToastNotificationShowing;
  }
}
namespace Windows.UI.Shell {
  public enum SecurityAppKind
  public sealed class SecurityAppManager
  public struct SecurityAppManagerContract
  public enum SecurityAppState
  public enum SecurityAppSubstatus
  public sealed class TaskbarManager {
    IAsyncOperation<bool> IsSecondaryTilePinnedAsync(string tileId);
    IAsyncOperation<bool> RequestPinSecondaryTileAsync(SecondaryTile secondaryTile);
    IAsyncOperation<bool> TryUnpinSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.StartScreen {
  public sealed class StartScreenManager {
    IAsyncOperation<bool> ContainsSecondaryTileAsync(string tileId);
    IAsyncOperation<bool> TryRemoveSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.Text {
  public sealed class RichEditTextDocument : ITextDocument {
    void ClearUndoRedoHistory();
  }
}
namespace Windows.UI.Text.Core {
  public sealed class CoreTextLayoutRequest {
    CoreTextLayoutBounds LayoutBoundsVisualPixels { get; }
  }
}
namespace Windows.UI.ViewManagement {
  public enum ApplicationViewWindowingMode {
    CompactOverlay = 3,
    Maximized = 4,
  }
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    bool TryHide();
    bool TryShow();
    bool TryShow(CoreInputViewKind type);
  }
  public enum CoreInputViewKind
}
namespace Windows.UI.WebUI {
  public sealed class BackgroundActivatedEventArgs : IBackgroundActivatedEventArgs
  public delegate void BackgroundActivatedEventHandler(object sender, IBackgroundActivatedEventArgs eventArgs);
  public sealed class NewWebUIViewCreatedEventArgs
  public static class WebUIApplication {
    public static event BackgroundActivatedEventHandler BackgroundActivated;
    public static event EventHandler<NewWebUIViewCreatedEventArgs> NewWebUIViewCreated;
  }
  public sealed class WebUIView : IWebViewControl, IWebViewControl2
}
namespace Windows.UI.Xaml {
  public class BrushTransition
  public class ColorPaletteResources : ResourceDictionary
  public class DataTemplate : FrameworkTemplate, IElementFactory {
    UIElement GetElement(ElementFactoryGetArgs args);
    void RecycleElement(ElementFactoryRecycleArgs args);
  }
  public sealed class DebugSettings {
    bool FailFastOnErrors { get; set; }
  }
  public sealed class EffectiveViewportChangedEventArgs
  public class ElementFactoryGetArgs
  public class ElementFactoryRecycleArgs
  public class FrameworkElement : UIElement {
    bool IsLoaded { get; }
    event TypedEventHandler<FrameworkElement, EffectiveViewportChangedEventArgs> EffectiveViewportChanged;
    void InvalidateViewport();
  }
  public interface IElementFactory
  public class ScalarTransition
  public class UIElement : DependencyObject, IAnimationObject {
    bool CanBeScrollAnchor { get; set; }
    public static DependencyProperty CanBeScrollAnchorProperty { get; }
    Vector3 CenterPoint { get; set; }
    ScalarTransition OpacityTransition { get; set; }
    float Rotation { get; set; }
    Vector3 RotationAxis { get; set; }
    ScalarTransition RotationTransition { get; set; }
    Vector3 Scale { get; set; }
    Vector3Transition ScaleTransition { get; set; }
    Matrix4x4 TransformMatrix { get; set; }
    Vector3 Translation { get; set; }
    Vector3Transition TranslationTransition { get; set; }
    void PopulatePropertyInfo(string propertyName, AnimationPropertyInfo propertyInfo);
    virtual void PopulatePropertyInfoOverride(string propertyName, AnimationPropertyInfo animationPropertyInfo);
    void StartAnimation(ICompositionAnimationBase animation);
    void StopAnimation(ICompositionAnimationBase animation);
  }
  public class Vector3Transition
  public enum Vector3TransitionComponents : uint
}
namespace Windows.UI.Xaml.Automation {
  public sealed class AutomationElementIdentifiers {
    public static AutomationProperty IsDialogProperty { get; }
  }
  public sealed class AutomationProperties {
    public static DependencyProperty IsDialogProperty { get; }
    public static bool GetIsDialog(DependencyObject element);
    public static void SetIsDialog(DependencyObject element, bool value);
  }
}
namespace Windows.UI.Xaml.Automation.Peers {
  public class AppBarButtonAutomationPeer : ButtonAutomationPeer, IExpandCollapseProvider {
    ExpandCollapseState ExpandCollapseState { get; }
    void Collapse();
    void Expand();
  }
  public class AutomationPeer : DependencyObject {
    bool IsDialog();
    virtual bool IsDialogCore();
  }
  public class MenuBarAutomationPeer : FrameworkElementAutomationPeer
  public class MenuBarItemAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider, IInvokeProvider
}
namespace Windows.UI.Xaml.Controls {
  public sealed class AnchorRequestedEventArgs
  public class AppBarElementContainer : ContentControl, ICommandBarElement, ICommandBarElement2
  public sealed class AutoSuggestBox : ItemsControl {
    object Description { get; set; }
    public static DependencyProperty DescriptionProperty { get; }
  }
  public enum BackgroundSizing
  public sealed class Border : FrameworkElement {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
    BrushTransition BackgroundTransition { get; set; }
  }
  public class CalendarDatePicker : Control {
    object Description { get; set; }
    public static DependencyProperty DescriptionProperty { get; }
  }
  public class ComboBox : Selector {
    object Description { get; set; }
    public static DependencyProperty DescriptionProperty { get; }
    bool IsEditable { get; set; }
    public static DependencyProperty IsEditableProperty { get; }
    string Text { get; set; }
    Style TextBoxStyle { get; set; }
    public static DependencyProperty TextBoxStyleProperty { get; }
    public static DependencyProperty TextProperty { get; }
    event TypedEventHandler<ComboBox, ComboBoxTextSubmittedEventArgs> TextSubmitted;
  }
  public sealed class ComboBoxTextSubmittedEventArgs
  public class CommandBarFlyout : FlyoutBase
  public class ContentPresenter : FrameworkElement {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
    BrushTransition BackgroundTransition { get; set; }
  }
  public class Control : FrameworkElement {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
    CornerRadius CornerRadius { get; set; }
    public static DependencyProperty CornerRadiusProperty { get; }
  }
  public class DataTemplateSelector : IElementFactory {
    UIElement GetElement(ElementFactoryGetArgs args);
    void RecycleElement(ElementFactoryRecycleArgs args);
  }
  public class DatePicker : Control {
    IReference<DateTime> SelectedDate { get; set; }
    public static DependencyProperty SelectedDateProperty { get; }
    event TypedEventHandler<DatePicker, DatePickerSelectedValueChangedEventArgs> SelectedDateChanged;
  }
  public sealed class DatePickerSelectedValueChangedEventArgs
  public class DropDownButton : Button
  public class DropDownButtonAutomationPeer : ButtonAutomationPeer, IExpandCollapseProvider
  public class Frame : ContentControl, INavigate {
    bool IsNavigationStackEnabled { get; set; }
    public static DependencyProperty IsNavigationStackEnabledProperty { get; }
    bool NavigateToType(TypeName sourcePageType, object parameter, FrameNavigationOptions navigationOptions);
  }
  public class Grid : Panel {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public class IconSourceElement : IconElement
  public interface IScrollAnchorProvider
  public class MenuBar : Control
  public class MenuBarItem : Control
  public class MenuBarItemFlyout : MenuFlyout
  public class NavigationView : ContentControl {
    UIElement ContentOverlay { get; set; }
    public static DependencyProperty ContentOverlayProperty { get; }
    bool IsPaneVisible { get; set; }
    public static DependencyProperty IsPaneVisibleProperty { get; }
    NavigationViewOverflowLabelMode OverflowLabelMode { get; set; }
    public static DependencyProperty OverflowLabelModeProperty { get; }
    UIElement PaneCustomContent { get; set; }
    public static DependencyProperty PaneCustomContentProperty { get; }
    NavigationViewPaneDisplayMode PaneDisplayMode { get; set; }
    public static DependencyProperty PaneDisplayModeProperty { get; }
    UIElement PaneHeader { get; set; }
    public static DependencyProperty PaneHeaderProperty { get; }
    NavigationViewSelectionFollowsFocus SelectionFollowsFocus { get; set; }
    public static DependencyProperty SelectionFollowsFocusProperty { get; }
    NavigationViewShoulderNavigationEnabled ShoulderNavigationEnabled { get; set; }
    public static DependencyProperty ShoulderNavigationEnabledProperty { get; }
    NavigationViewTemplateSettings TemplateSettings { get; }
    public static DependencyProperty TemplateSettingsProperty { get; }
  }
  public class NavigationViewItem : NavigationViewItemBase {
    bool SelectsOnInvoked { get; set; }
    public static DependencyProperty SelectsOnInvokedProperty { get; }
  }
  public sealed class NavigationViewItemInvokedEventArgs {
    NavigationViewItemBase InvokedItemContainer { get; }
    NavigationTransitionInfo RecommendedNavigationTransitionInfo { get; }
  }
  public enum NavigationViewOverflowLabelMode
  public enum NavigationViewPaneDisplayMode
  public sealed class NavigationViewSelectionChangedEventArgs {
    NavigationTransitionInfo RecommendedNavigationTransitionInfo { get; }
    NavigationViewItemBase SelectedItemContainer { get; }
  }
  public enum NavigationViewSelectionFollowsFocus
  public enum NavigationViewShoulderNavigationEnabled
  public class NavigationViewTemplateSettings : DependencyObject
  public class Panel : FrameworkElement {
    BrushTransition BackgroundTransition { get; set; }
  }
  public sealed class PasswordBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    object Description { get; set; }
    public static DependencyProperty DescriptionProperty { get; }
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    void PasteFromClipboard();
  }
  public class RelativePanel : Panel {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public class RichEditBox : Control {
    object Description { get; set; }
    public static DependencyProperty DescriptionProperty { get; }
    FlyoutBase ProofingMenuFlyout { get; }
    public static DependencyProperty ProofingMenuFlyoutProperty { get; }
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    RichEditTextDocument TextDocument { get; }
    event TypedEventHandler<RichEditBox, RichEditBoxSelectionChangingEventArgs> SelectionChanging;
  }
  public sealed class RichEditBoxSelectionChangingEventArgs
  public sealed class RichTextBlock : FrameworkElement {
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    void CopySelectionToClipboard();
  }
  public sealed class ScrollContentPresenter : ContentPresenter {
    bool CanContentRenderOutsideBounds { get; set; }
    public static DependencyProperty CanContentRenderOutsideBoundsProperty { get; }
    bool SizesContentToTemplatedParent { get; set; }
    public static DependencyProperty SizesContentToTemplatedParentProperty { get; }
  }
  public sealed class ScrollViewer : ContentControl, IScrollAnchorProvider {
    bool CanContentRenderOutsideBounds { get; set; }
    public static DependencyProperty CanContentRenderOutsideBoundsProperty { get; }
    UIElement CurrentAnchor { get; }
    double HorizontalAnchorRatio { get; set; }
    public static DependencyProperty HorizontalAnchorRatioProperty { get; }
    bool ReduceViewportForCoreInputViewOcclusions { get; set; }
    public static DependencyProperty ReduceViewportForCoreInputViewOcclusionsProperty { get; }
    double VerticalAnchorRatio { get; set; }
    public static DependencyProperty VerticalAnchorRatioProperty { get; }
    event TypedEventHandler<ScrollViewer, AnchorRequestedEventArgs> AnchorRequested;
    public static bool GetCanContentRenderOutsideBounds(DependencyObject element);
    void RegisterAnchorCandidate(UIElement element);
    public static void SetCanContentRenderOutsideBounds(DependencyObject element, bool canContentRenderOutsideBounds);
    void UnregisterAnchorCandidate(UIElement element);
  }
  public class SplitButton : ContentControl
  public class SplitButtonAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider, IInvokeProvider
  public sealed class SplitButtonClickEventArgs
  public class StackPanel : Panel, IInsertionPanel, IScrollSnapPointsInfo {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public sealed class TextBlock : FrameworkElement {
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    void CopySelectionToClipboard();
  }
  public class TextBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    bool CanRedo { get; }
    public static DependencyProperty CanRedoProperty { get; }
    bool CanUndo { get; }
    public static DependencyProperty CanUndoProperty { get; }
    object Description { get; set; }
    public static DependencyProperty DescriptionProperty { get; }
    FlyoutBase ProofingMenuFlyout { get; }
    public static DependencyProperty ProofingMenuFlyoutProperty { get; }
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    event TypedEventHandler<TextBox, TextBoxSelectionChangingEventArgs> SelectionChanging;
    void ClearUndoRedoHistory();
    void CopySelectionToClipboard();
    void CutSelectionToClipboard();
    void PasteFromClipboard();
    void Redo();
    void Undo();
  }
  public sealed class TextBoxSelectionChangingEventArgs
  public class TextCommandBarFlyout : CommandBarFlyout
  public class TimePicker : Control {
    IReference<TimeSpan> SelectedTime { get; set; }
    public static DependencyProperty SelectedTimeProperty { get; }
    event TypedEventHandler<TimePicker, TimePickerSelectedValueChangedEventArgs> SelectedTimeChanged;
  }
  public sealed class TimePickerSelectedValueChangedEventArgs
  public class ToggleSplitButton : SplitButton
  public class ToggleSplitButtonAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider, IToggleProvider
  public sealed class ToggleSplitButtonIsCheckedChangedEventArgs
  public class ToolTip : ContentControl {
    IReference<Rect> PlacementRect { get; set; }
    public static DependencyProperty PlacementRectProperty { get; }
  }
  public class TreeView : Control {
    bool CanDragItems { get; set; }
    public static DependencyProperty CanDragItemsProperty { get; }
    bool CanReorderItems { get; set; }
    public static DependencyProperty CanReorderItemsProperty { get; }
    Style ItemContainerStyle { get; set; }
    public static DependencyProperty ItemContainerStyleProperty { get; }
    StyleSelector ItemContainerStyleSelector { get; set; }
    public static DependencyProperty ItemContainerStyleSelectorProperty { get; }
    TransitionCollection ItemContainerTransitions { get; set; }
    public static DependencyProperty ItemContainerTransitionsProperty { get; }
    object ItemsSource { get; set; }
    public static DependencyProperty ItemsSourceProperty { get; }
    DataTemplate ItemTemplate { get; set; }
    public static DependencyProperty ItemTemplateProperty { get; }
    DataTemplateSelector ItemTemplateSelector { get; set; }
    public static DependencyProperty ItemTemplateSelectorProperty { get; }
    event TypedEventHandler<TreeView, TreeViewDragItemsCompletedEventArgs> DragItemsCompleted;
    event TypedEventHandler<TreeView, TreeViewDragItemsStartingEventArgs> DragItemsStarting;
    DependencyObject ContainerFromItem(object item);
    DependencyObject ContainerFromNode(TreeViewNode node);
    object ItemFromContainer(DependencyObject container);
    TreeViewNode NodeFromContainer(DependencyObject container);
  }
  public sealed class TreeViewCollapsedEventArgs {
    object Item { get; }
  }
  public sealed class TreeViewDragItemsCompletedEventArgs
  public sealed class TreeViewDragItemsStartingEventArgs
  public sealed class TreeViewExpandingEventArgs {
    object Item { get; }
  }
  public class TreeViewItem : ListViewItem {
    bool HasUnrealizedChildren { get; set; }
    public static DependencyProperty HasUnrealizedChildrenProperty { get; }
    object ItemsSource { get; set; }
    public static DependencyProperty ItemsSourceProperty { get; }
  }
  public sealed class WebView : FrameworkElement {
    event TypedEventHandler<WebView, WebViewWebResourceRequestedEventArgs> WebResourceRequested;
  }
  public sealed class WebViewWebResourceRequestedEventArgs
}
namespace Windows.UI.Xaml.Controls.Maps {
  public enum MapTileAnimationState
  public sealed class MapTileBitmapRequestedEventArgs {
    int FrameIndex { get; }
  }
  public class MapTileSource : DependencyObject {
    MapTileAnimationState AnimationState { get; }
    public static DependencyProperty AnimationStateProperty { get; }
    bool AutoPlay { get; set; }
    public static DependencyProperty AutoPlayProperty { get; }
    int FrameCount { get; set; }
    public static DependencyProperty FrameCountProperty { get; }
    TimeSpan FrameDuration { get; set; }
    public static DependencyProperty FrameDurationProperty { get; }
    void Pause();
    void Play();
    void Stop();
  }
  public sealed class MapTileUriRequestedEventArgs {
    int FrameIndex { get; }
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public class CommandBarFlyoutCommandBar : CommandBar
  public sealed class CommandBarFlyoutCommandBarTemplateSettings : DependencyObject
  public class FlyoutBase : DependencyObject {
    bool AreOpenCloseAnimationsEnabled { get; set; }
    public static DependencyProperty AreOpenCloseAnimationsEnabledProperty { get; }
    bool InputDevicePrefersPrimaryCommands { get; }
    public static DependencyProperty InputDevicePrefersPrimaryCommandsProperty { get; }
    bool IsOpen { get; }
    public static DependencyProperty IsOpenProperty { get; }
    FlyoutShowMode ShowMode { get; set; }
    public static DependencyProperty ShowModeProperty { get; }
    public static DependencyProperty TargetProperty { get; }
    void ShowAt(DependencyObject placementTarget, FlyoutShowOptions showOptions);
  }
  public enum FlyoutPlacementMode {
    Auto = 13,
    BottomEdgeAlignedLeft = 7,
    BottomEdgeAlignedRight = 8,
    LeftEdgeAlignedBottom = 10,
    LeftEdgeAlignedTop = 9,
    RightEdgeAlignedBottom = 12,
    RightEdgeAlignedTop = 11,
    TopEdgeAlignedLeft = 5,
    TopEdgeAlignedRight = 6,
  }
  public enum FlyoutShowMode
  public class FlyoutShowOptions
  public class NavigationViewItemPresenter : ContentControl
}
namespace Windows.UI.Xaml.Core.Direct {
  public interface IXamlDirectObject
  public sealed class XamlDirect
  public struct XamlDirectContract
  public enum XamlEventIndex
  public enum XamlPropertyIndex
  public enum XamlTypeIndex
}
namespace Windows.UI.Xaml.Hosting {
  public class DesktopWindowXamlSource : IClosable
  public sealed class DesktopWindowXamlSourceGotFocusEventArgs
  public sealed class DesktopWindowXamlSourceTakeFocusRequestedEventArgs
  public sealed class WindowsXamlManager : IClosable
  public enum XamlSourceFocusNavigationReason
  public sealed class XamlSourceFocusNavigationRequest
  public sealed class XamlSourceFocusNavigationResult
}
namespace Windows.UI.Xaml.Input {
  public sealed class CanExecuteRequestedEventArgs
  public sealed class ExecuteRequestedEventArgs
  public sealed class FocusManager {
    public static event EventHandler<GettingFocusEventArgs> GettingFocus;
    public static event EventHandler<FocusManagerGotFocusEventArgs> GotFocus;
    public static event EventHandler<LosingFocusEventArgs> LosingFocus;
    public static event EventHandler<FocusManagerLostFocusEventArgs> LostFocus;
  }
  public sealed class FocusManagerGotFocusEventArgs
  public sealed class FocusManagerLostFocusEventArgs
  public sealed class GettingFocusEventArgs : RoutedEventArgs {
    Guid CorrelationId { get; }
  }
  public sealed class LosingFocusEventArgs : RoutedEventArgs {
    Guid CorrelationId { get; }
  }
  public class StandardUICommand : XamlUICommand
  public enum StandardUICommandKind
  public class XamlUICommand : DependencyObject, ICommand
}
namespace Windows.UI.Xaml.Markup {
  public sealed class FullXamlMetadataProviderAttribute : Attribute
  public interface IXamlBindScopeDiagnostics
  public interface IXamlType2 : IXamlType
}
namespace Windows.UI.Xaml.Media {
  public class Brush : DependencyObject, IAnimationObject {
    void PopulatePropertyInfo(string propertyName, AnimationPropertyInfo propertyInfo);
    virtual void PopulatePropertyInfoOverride(string propertyName, AnimationPropertyInfo animationPropertyInfo);
  }
}
namespace Windows.UI.Xaml.Media.Animation {
  public class BasicConnectedAnimationConfiguration : ConnectedAnimationConfiguration
  public sealed class ConnectedAnimation {
    ConnectedAnimationConfiguration Configuration { get; set; }
  }
  public class ConnectedAnimationConfiguration
  public class DirectConnectedAnimationConfiguration : ConnectedAnimationConfiguration
  public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration
  public enum SlideNavigationTransitionEffect
  public sealed class SlideNavigationTransitionInfo : NavigationTransitionInfo {
    SlideNavigationTransitionEffect Effect { get; set; }
    public static DependencyProperty EffectProperty { get; }
  }
}
namespace Windows.UI.Xaml.Navigation {
  public class FrameNavigationOptions
}
namespace Windows.Web.UI {
  public interface IWebViewControl2
  public sealed class WebViewControlNewWindowRequestedEventArgs {
    IWebViewControl NewWindow { get; set; }
    Deferral GetDeferral();
  }
  public enum WebViewControlPermissionType {
    ImmersiveView = 6,
  }
}
namespace Windows.Web.UI.Interop {
  public sealed class WebViewControl : IWebViewControl, IWebViewControl2 {
    event TypedEventHandler<WebViewControl, object> GotFocus;
    event TypedEventHandler<WebViewControl, object> LostFocus;
    void AddInitializeScript(string script);
  }
}

Removals:


namespace Windows.Gaming.UI {
  public sealed class GameMonitor
  public enum GameMonitoringPermission
}

Additional Store Win32 APIs:

The following APIs have been added to the Application Certification Kit list.

  • CM_Get_Device_Interface_List_SizeA
  • CM_Get_Device_Interface_List_SizeW
  • CM_Get_Device_Interface_ListA
  • CM_Get_Device_Interface_ListW
  • CM_MapCrToWin32Err
  • CM_Register_Notification
  • CM_Unregister_Notification
  • CoDecrementMTAUsage
  • CoIncrementMTAUsage
  • ColorAdapterGetCurrentProfileCalibration
  • ColorAdapterGetDisplayCurrentStateID
  • ColorAdapterGetDisplayProfile
  • ColorAdapterGetDisplayTargetWhitePoint
  • ColorAdapterGetDisplayTransformData
  • ColorAdapterGetSystemModifyWhitePointCaps
  • ColorAdapterRegisterOEMColorService
  • ColorAdapterUnregisterOEMColorService
  • ColorProfileAddDisplayAssociation
  • ColorProfileGetDisplayDefault
  • ColorProfileGetDisplayList
  • ColorProfileGetDisplayUserScope
  • ColorProfileRemoveDisplayAssociation
  • ColorProfileSetDisplayDefaultAssociation
  • DeviceIoControl
  • EventEnabled
  • EventProviderEnabled
  • ExitProcess
  • GetProductInfo
  • GetSystemTimes
  • InstallColorProfileA
  • InstallColorProfileW
  • LocalFileTimeToLocalSystemTime
  • LocalSystemTimeToLocalFileTime
  • MLCreateOperatorRegistry
  • RoIsApiContractMajorVersionPresent
  • RoIsApiContractPresent
  • RtlVirtualUnwind
  • SetProcessValidCallTargetsForMappedView
  • sqlite3_value_nochange
  • sqlite3_vtab_collation
  • sqlite3_vtab_nochange
  • sqlite3_win32_set_directory
  • sqlite3_win32_set_directory16
  • sqlite3_win32_set_directory8
  • ubiditransform_close
  • ubiditransform_open
  • ubiditransform_transform
  • ubrk_getBinaryRules
  • ubrk_openBinaryRules
  • unum_formatDoubleForFields
  • uplrules_getKeywords
  • uspoof_check2
  • uspoof_check2UTF8
  • uspoof_closeCheckResult
  • uspoof_getCheckResultChecks
  • uspoof_getCheckResultNumerics
  • uspoof_getCheckResultRestrictionLevel
  • uspoof_openCheckResult

The post Windows 10 SDK Preview Build 17754 available now! appeared first on Windows Developer Blog.

.NET Framework September 2018 Security and Quality Rollup


Today, we are releasing the September 2018 Security and Quality Rollup.

Security

CVE-2018-8421 – Windows Remote Code Execution Vulnerability

This security update resolves a vulnerability in Microsoft .NET Framework that could allow remote code execution when .NET Framework processes untrusted input. An attacker who successfully exploits this vulnerability in software by using .NET Framework could take control of an affected system. The attacker could then install programs; view, change, or delete data; or create new accounts that have full user rights. Users whose accounts are configured to have fewer user rights on the system could be less affected than users who operate with administrative user rights.

To exploit the vulnerability, an attacker would first have to convince the user to open a malicious document or application.

This security update addresses the vulnerability by correcting how .NET Framework validates untrusted input.

CVE-2018-8421

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, Microsoft Update Catalog, and Docker.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup.

The following table is for Windows 10 and Windows Server 2016+.

Product Version                                             Security and Quality Rollup KB
Windows 10 1803 (April 2018 Update)                         Catalog: 4457128
    .NET Framework 3.5                                      4457128
    .NET Framework 4.7.2                                    4457128
Windows 10 1709 (Fall Creators Update)                      Catalog: 4457142
    .NET Framework 3.5                                      4457142
    .NET Framework 4.7.1, 4.7.2                             4457142
Windows 10 1703 (Creators Update)                           Catalog: 4457138
    .NET Framework 3.5                                      4457138
    .NET Framework 4.7, 4.7.1, 4.7.2                        4457138
Windows 10 1607 (Anniversary Update) / Windows Server 2016  Catalog: 4457131
    .NET Framework 3.5                                      4457131
    .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2                 4457131
Windows 10 1507                                             Catalog: 4457132
    .NET Framework 3.5                                      4457132
    .NET Framework 4.6, 4.6.1, 4.6.2                        4457132

The following table is for earlier Windows and Windows Server versions.

Product Version                                            Security and Quality Rollup KB  Security Only Update KB
Windows 8.1 / Windows RT 8.1 / Windows Server 2012 R2      Catalog: 4457920                Catalog: 4457916
    .NET Framework 3.5                                     4457045                         4457056
    .NET Framework 4.5.2                                   4457036                         4457028
    .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2    4457034                         4457026
Windows Server 2012                                        Catalog: 4457919                Catalog: 4457915
    .NET Framework 3.5                                     4457042                         4457053
    .NET Framework 4.5.2                                   4457037                         4457029
    .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2    4457033                         4457025
Windows 7 / Windows Server 2008 R2                         Catalog: 4457918                Catalog: 4457914
    .NET Framework 3.5.1                                   4457044                         4457055
    .NET Framework 4.5.2                                   4457038                         4457030
    .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2    4457035                         4457027
Windows Server 2008                                        Catalog: 4457921                Catalog: 4457917
    .NET Framework 2.0, 3.0                                4457043                         4457054
    .NET Framework 4.5.2                                   4457038                         4457030
    .NET Framework 4.6                                     4457035                         4457027

Docker Images

We are updating the following .NET Framework Docker images for today’s release:

Note: Look at the “Tags” view in each repository to see the updated Docker image tags.
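To pick up the patched runtime, you rebase your own image on an updated tag. As a minimal sketch (the repository name matches the .NET Framework images on Docker Hub at the time of this release, but the exact tag and the application name below are illustrative; check the "Tags" view mentioned above for the current list):

```dockerfile
# Rebase on an updated .NET Framework runtime image to pick up this
# month's security fixes; tag is illustrative -- check the repo's Tags view.
FROM microsoft/dotnet-framework:4.7.2-runtime
WORKDIR /app
COPY ./publish .
# MyApp.exe is a placeholder for your application's entry point.
ENTRYPOINT ["MyApp.exe"]
```

Rebuilding and redeploying the container is then enough; no change to the application itself is required.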

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Video: R and Python in Azure HDInsight


Azure HDInsight was recently updated with version 9.3 of ML Services in HDInsight, which provides integration with R and Python. In particular, it makes it possible to run R and Python within HDInsight's managed Spark instance. The integration provides:

In the episode of Data Exposed embedded below, you can learn more about ML Services 9.3 in HDInsight and see some demos of the R and Python integration.

Channel 9: Introducing ML Services 9.3 in Azure HDInsight

Visual Studio 2017 version 15.9 Preview 2


Today, we are releasing the second preview of Visual Studio 2017 version 15.9, and it can be downloaded here. This latest preview contains new features and improvements to Universal Windows Platform development, C++ debugging, and exporting installation settings. Read more in the feature highlight summary below and check out the Visual Studio 2017 version 15.9 Preview 2 release notes for more details on all the new, exciting features contained in this Preview.

Universal Windows Platform Development

In Visual Studio 2017 version 15.9 preview 2, we have a combination of new features and product enhancements for Universal Windows Platform developers. Read below to see what’s new!

Latest Windows 10 Insider Preview SDK: The most recent Windows 10 Insider Preview SDK (build 17754) is now included as an optional component for the Universal Windows Platform development workload. UWP developers can install this SDK to access the latest APIs available in the upcoming release of Windows 10.

Visual Studio Installer showing Windows 10 Preview SDK as optional

MSIX Package Creation: You can now create .MSIX packages in Visual Studio either through the Universal Windows Platform packaging tooling or through the Windows Application Packaging Project template. To generate a .MSIX package, set your Minimum Version to the latest Windows 10 Insider Preview SDK (build 17754), and then follow the standard package creation workflow in Visual Studio. To learn more about .MSIX packaging, check out this Build talk.

F5 Performance Improvements: We have optimized our F5 (build and deploy) tooling for faster developer productivity with UWP development. These improvements will be most noticeable for developers who target a remote PC using Windows authentication, but all other deployment types should see improved performance as well.

UWP XAML Designer Reliability: For developers building UWP applications with a target version of Fall Creators Update (build 16299) or higher, you should notice fewer XAML designer crashes. Specifically, when the designer tries to render controls that throw catchable exceptions, the rendered control will be replaced with a fallback control rather than crashing the design surface. These fallback controls will have a yellow border (shown below) to indicate to developers that the control has been replaced.

Fallback controls shown with yellow border

Step Back in the C++ Debugger

In Preview 2, we’ve added “Step Back” for C++ developers targeting Windows. With this feature, you can now return to a previous state while debugging without having to restart the entire process. It’s installed, but set to “off” by default as part of the C++ workload. To enable it, go to Tools -> Options -> IntelliTrace and select the “IntelliTrace snapshots” option. This will enable snapshots for both Managed and Native code.

Tools Options Dialog showing IntelliTrace snapshots option

Once “Step Back” is enabled, you will see snapshots appear in the Events tab of the Diagnostic Tools Window when you are stepping through C++ code.

Diagnostic Tools Window showing snapshots in the Events tab when Step Back is enabled while stepping through C++ code

Clicking on any event will take you back to its respective snapshot. Or, you can simply use the Step Backward button on the debug command bar to go back in time. You can see “Step Back” alongside “Step Over” in the gif below.

Gif showing Step Over and Step Backward

Acquisition

We’ve made it easier to keep your installation settings consistent across multiple installations of Visual Studio. You can now use the Visual Studio Installer to export a .vsconfig file for a given instance of Visual Studio. This file will contain information about the workloads and components you have installed. You can then import this file to add your workload and component selections to a new or existing installation of Visual Studio.

Dialog showing option to export a vsconfig file and also import it to add your workload configuration to a new or existing Visual Studio installation
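The exported file itself is a small JSON document listing workload and component IDs. As a rough sketch of what one looks like (the IDs below are plausible examples, not necessarily what your export will contain; the exact set depends on what you have installed):

```json
{
  "version": "1.0",
  "components": [
    "Microsoft.VisualStudio.Workload.ManagedDesktop",
    "Microsoft.VisualStudio.Workload.NetCoreTools",
    "Microsoft.VisualStudio.Component.Git"
  ]
}
```

Because it is plain JSON, a .vsconfig file can be checked into source control so a whole team installs the same workloads.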

Try out the Preview

If you’re not familiar with Visual Studio Previews, take a moment to read the Visual Studio 2017 Release Rhythm. Remember that Visual Studio 2017 Previews can be installed side-by-side with other versions of Visual Studio and other installs of Visual Studio 2017 without adversely affecting either your machine or your productivity. Previews provide an opportunity for you to receive fixes faster and try out upcoming functionality before they become mainstream. Similarly, the Previews enable the Visual Studio engineering team to validate usage, incorporate suggestions, and detect flaws earlier in the development process. We are highly responsive to feedback coming in through the Previews and look forward to hearing from you.

Please get the Visual Studio Preview today, exercise your favorite workloads, and tell us what you think. If you have an Azure subscription, you can provision a virtual machine with this preview. You can report issues to us via the Report a Problem tool in Visual Studio or you can share a suggestion on UserVoice. You’ll be able to track your issues in the Visual Studio Developer Community where you can ask questions and find answers. You can also engage with us and other Visual Studio developers through our Visual Studio conversation in the Gitter community (requires GitHub account). Thank you for using the Visual Studio Previews.

Angel Zhou Program Manager, Visual Studio

Angel Zhou is a program manager on the Visual Studio release engineering team, which is responsible for making Visual Studio releases available to our customers around the world.


How do you use System.Drawing in .NET Core?


I've been doing .NET image processing since the beginning. In fact I wrote about it over 13 years ago on this blog when I talked about Compositing two images into one from the ASP.NET Server Side and in it I used System.Drawing to do the work. For over a decade folks using System.Drawing were just using it as a thin wrapper over GDI (Graphics Device Interface) which were very old Win32 (Windows) unmanaged drawing APIs. We use them because they work fine.

.NET Conf: Join us this week! September 12-14, 2018 for .NET Conf! It's a FREE, 3 day virtual developer event co-organized by the .NET Community and Microsoft. Watch all the sessions here. Join a virtual attendee party after the last session ends on Day 1 where you can win prizes! Check out the schedule here and attend a local event in your area organized by .NET community influencers all over the world.

For a while there was a package called CoreCompat.System.Drawing that was a .NET Core port of a Mono version of System.Drawing.

However, since then Microsoft has released System.Drawing.Common to provide access to GDI+ graphics functionality cross-platform.

There is a lot of existing code - mine included - that makes assumptions that .NET would only ever run on Windows. Using System.Drawing was one of those things. The "Windows Compatibility Pack" is a package meant for developers that need to port existing .NET Framework code to .NET Core. Some of the APIs remain Windows only but others will allow you to take existing code and make it cross-platform with a minimum of trouble.

Here's a super simple app that resizes a PNG to 128x128. However, it's a .NET Core app, and it runs on both Windows and Linux (Ubuntu!):

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

namespace imageresize
{
    class Program
    {
        static void Main(string[] args)
        {
            int width = 128;
            int height = 128;
            var file = args[0];
            Console.WriteLine($"Loading {file}");
            using (FileStream pngStream = new FileStream(file, FileMode.Open, FileAccess.Read))
            using (var image = new Bitmap(pngStream))
            {
                var resized = new Bitmap(width, height);
                using (var graphics = Graphics.FromImage(resized))
                {
                    graphics.CompositingQuality = CompositingQuality.HighSpeed;
                    graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    graphics.CompositingMode = CompositingMode.SourceCopy;
                    graphics.DrawImage(image, 0, 0, width, height);
                    resized.Save($"resized-{file}", ImageFormat.Png);
                    Console.WriteLine($"Saving resized-{file} thumbnail");
                }
            }
        }
    }
}

Here it is running on Ubuntu:

Resizing Images on Ubuntu

NOTE that on Ubuntu (and other Linuxes) you may need to install some native dependencies, as System.Drawing sits on top of native libraries:

sudo apt install libc6-dev 

sudo apt install libgdiplus
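If you're running in a Linux container, those same native dependencies have to be baked into the image. A minimal Dockerfile sketch (the base image tag here is illustrative and assumes a .NET Core 2.1 runtime image):

FROM microsoft/dotnet:2.1-runtime
# libgdiplus is the native library System.Drawing.Common calls into on Linux
RUN apt-get update \
    && apt-get install -y --no-install-recommends libc6-dev libgdiplus \
    && rm -rf /var/lib/apt/lists/*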

There are lots of great options for image processing on .NET Core now! It's important to understand that this System.Drawing layer is great for existing System.Drawing code, but you probably shouldn't write NEW image management code with it. Instead, consider one of the great other open source options.

  • ImageSharp - A cross-platform library for the processing of image files; written in C#
    • Compared to System.Drawing ImageSharp has been able to develop something much more flexible, easier to code against, and much, much less prone to memory leaks. Gone are system-wide process-locks; ImageSharp images are thread-safe and fully supported in web environments.

Here's how you'd resize something with ImageSharp:

using (Image<Rgba32> image = Image.Load("foo.jpg"))
{
    image.Mutate(x => x
        .Resize(image.Width / 2, image.Height / 2)
        .Grayscale());
    image.Save("bar.jpg"); // Automatic encoder selected based on extension.
}
  • Magick.NET -A .NET library on top of ImageMagick
  • SkiaSharp - A .NET wrapper on top of Google's cross-platform Skia library

It's awesome that there are so many choices with .NET Core now!


Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.



© 2018 Scott Hanselman. All rights reserved.

Improved Engineering with Azure Pipelines

Use AI to streamline healthcare operations


The profound impact of machine learning (ML) and artificial intelligence (AI) is changing the way heath organizations think about many of the challenges they face. Making data-informed decisions based on actionable insights is improving many aspects of healthcare from patient diagnosis and outcomes to operational efficiencies.

Data-informed decision making

While making decisions informed by deep insight into relevant data, healthcare organizations must be especially mindful of how they implement such solutions. Regulations and frameworks like HIPAA and HITRUST require that data be kept secure, private, and anonymized from those who don't require access to patient data.

Further, IT staff are often unprepared or understaffed to implement such solutions. This is why the Azure Healthcare AI Blueprint was created, to bootstrap AI solutions for healthcare organizations using Microsoft Azure Platform as a Service (PaaS). After installing the blueprint, organizations can learn from the reference implementation and better understand the components of a complete solution built with Azure services.

The Azure healthcare AI blueprint

The blueprint is installed to Azure via PowerShell scripts, and creates a complete environment that can run Azure Machine Learning Studio (MLS) experiments right away. In fact, there is a simple patient length of stay (LOS) experiment built right in.

Other demo scripts are also available if you want to delve further. There is a script to admit 10 new patients and one to discharge two patients. Running these scripts is covered in the blueprint documentation.

Operational process flow

The care line manager enters a newly admitted patient's data into the patient registration system. The system invokes an Azure function with the data in the Fast Healthcare Interoperability Resources (FHIR) format. The Admit Patient Azure function stores the data in an Azure SQL database and sends it to MLS, where it is used to predict the patient's LOS. When patients are admitted to or discharged from the facility, another Azure function writes the data to the Azure SQL database. The submitted data is immediately available via a Power BI dashboard and is also processed by the LOS machine learning experiment.


There are many other services in the overall solution such as monitoring and security. But MLS is where the magic of the LOS experiment happens, driving operational decisions such as staffing and projected bed counts.

Wrapping up

There are many components and services in the blueprint, and we’ve examined just a few that are used for a single LOS experiment. More sophisticated algorithms and models are used for more complex scenarios.

Recommended next steps

  1. Read about the Azure Healthcare AI Blueprint to see if it’s a good fit for your organization. This could serve as the catalyst for your organization’s change to data-informed decision making.
  2. Download the scripts, install instructions, and other artifacts in the blueprint on GitHub.

ASP.NET Core 2.2.0-preview2 now available


Today we’re very happy to announce that the second preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. We’ve been working hard on this release over the past months, along with many folks from the community, and it’s now ready for a wider audience to try it out and provide the feedback that will continue to shape the release.

How do I get it?

You can download the new .NET Core SDK for 2.2.0-preview2 (which includes ASP.NET Core 2.2.0-preview2) from https://www.microsoft.com/net/download/dotnet-core/sdk-2.2.0-preview2

Visual Studio requirements

Customers using Visual Studio should also install and use the Preview channel of Visual Studio 2017 (15.9 Preview 2) in addition to the SDK when working with .NET Core 2.2 and ASP.NET Core 2.2 projects. Please note that the Visual Studio preview channel can be installed side-by-side with an existing Visual Studio installation without disrupting your current development environment.

Azure App Service Requirements

If you are hosting your application on Azure App Service, you can follow these instructions to install the required site extension for hosting your 2.2.0-preview2 applications.

Impact to machines

Please note that this is a preview release, so there are likely to be known issues and as-yet-undiscovered bugs. While the .NET Core SDK and runtime installs are side-by-side, your default SDK will become the latest one. If you run into issues working on existing projects using earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file as documented here. Please log an issue if you run into such cases, as SDK releases are intended to be backwards compatible.

What’s new in Preview 2

For a full list of changes, bug fixes, and known issues you can read the release notes.

SignalR Java Client updated to support Azure SignalR Service

The SignalR Java Client, first introduced in preview 1, now has support for the Azure SignalR Service. You can now develop Java and Android applications that connect to a SignalR server using the Azure SignalR Service. To get this new functionality, just update your Maven or Gradle file to reference version 0.1.0-preview2-35174 of the SignalR Client package.

Problem Details support

In 2.1.0, MVC introduced ProblemDetails, based on the RFC 7807 specification for carrying details of an error in an HTTP response. In preview2, we're standardizing around using ProblemDetails for client error codes in controllers attributed with ApiControllerAttribute. An IActionResult returning a client error status code (4xx) will now return a ProblemDetails body. The result additionally includes a correlation ID that can be used to correlate the error with request logs. Lastly, ProducesResponseType for client errors defaults to using ProblemDetails as the response type. This will be documented in Open API / Swagger output generated using NSwag or Swashbuckle.AspNetCore. Documentation for configuring the ProblemDetails response can be found here – https://aka.ms/AA2k4zg.

ASP.NET Core Module Improvements

We've introduced a new module (aspNetCoreModuleV2) for hosting ASP.NET Core applications in IIS in 2.2.0-preview1. This new module adds the ability to host your .NET Core application within the IIS worker process and avoids the additional cost of reverse-proxying your requests over to a separate dotnet process.

ASP.NET Core 2.2.0-preview2 or newer projects default to the new in-process hosting model. If you are upgrading from preview1, you will need to add a new project property to your .csproj file.

<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <AspNetCoreHostingModel>inprocess</AspNetCoreHostingModel>
</PropertyGroup>

Visual Studio 15.9-preview2 adds the ability to switch your hosting model as part of your development-time experience.

Hosting in IIS

To deploy applications targeting ASP.NET Core 2.2.0-preview2 on servers with IIS, you require a new version of the 2.2 Runtime & Hosting Bundle on the target server. The bundle is available at https://www.microsoft.com/net/download/dotnet-core/2.2.

Caveats

There are a couple of caveats with the new in-process hosting model:

  • You are limited to one application per IIS Application Pool.
  • There is no support for .NET Framework. The new module is only capable of hosting .NET Core in the IIS process.

If you have an ASP.NET Core 2.2 app that's using the in-process hosting model, you can turn it off by setting the <AspNetCoreHostingModel> element to outofprocess in your .csproj file.
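Concretely, that's the same property group shown earlier with the hosting model value flipped:

<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <AspNetCoreHostingModel>outofprocess</AspNetCoreHostingModel>
</PropertyGroup>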

Template Updates

We've cleaned up the Bootstrap 4 project template work that we started in Preview 1. We've also added support to the default Identity UI for using both Bootstrap 3 and 4. For compatibility with existing apps, the default Bootstrap version for the default UI is now Bootstrap 3, but you can select which version of Bootstrap you want to use when calling AddDefaultUI.

HealthCheck Improvements

There are a few small, but important, changes to health checks in preview2.

You can now call AddCheck<T> where T is a type of IHealthCheck:

services.AddHealthChecks()
        .AddCheck<MyHealthCheck>();

This will register your health check as a transient service, meaning that each time the health check service is called a new instance will be created. We allow you to register IHealthCheck implementations with any service lifetime when you register them manually:

services.AddHealthChecks();
services.AddSingleton<IHealthCheck, MySingletonCheck>();

A scope is created for each invocation of the HealthCheckService. As with all DI lifetimes, you should be careful when creating singleton objects that depend on services with other lifetimes, as described here.

You can filter which checks you want to execute when using the middleware or the HealthCheckService directly. In this example we are executing all our health checks when a request is made on the ready path, but just returning a 200 OK when the live path is hit:

// The readiness check uses all of the registered health checks (default)
app.UseHealthChecks("/health/ready");

// The liveness check uses an 'identity' health check that always returns healthy
app.UseHealthChecks("/health/live", new HealthCheckOptions()
{
    // Exclude all checks, just return a 200.
    Predicate = (check) => false,
});

You might do this if, for example, you are using Kubernetes and want to run a comprehensive set of checks before traffic is sent to your application but otherwise are OK as long as you are reachable and still running.
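In Kubernetes terms, those two endpoints map naturally onto liveness and readiness probes. A sketch of the corresponding pod spec fragment (the port and timing values are illustrative, not prescribed by ASP.NET Core):

livenessProbe:
  httpGet:
    path: /health/live    # cheap "still running" check
    port: 80
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready   # full set of registered health checks
    port: 80
  periodSeconds: 10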

What’s still to come?

We are investigating adding a tags mechanism to checks so that they can be set and filtered on. We also want to provide an Entity Framework-specific check that will check whatever database has been configured for use with your DbContext.

Migrating an ASP.NET Core 2.1 project to 2.2

To migrate an ASP.NET Core project from 2.1.x to 2.2.0-preview2, open the project's .csproj file and change the value of the <TargetFramework> element to netcoreapp2.2. You do not need to do this if you're targeting .NET Framework 4.x.

Giving Feedback

The main purpose of providing previews is to solicit feedback so we can refine and improve the product in time for the final release. Please help provide us feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. We look forward to receiving your feedback!

How Security Center and Log Analytics can be used for Threat Hunting


Organizations today are constantly under attack. Azure Security Center (ASC) uses advanced analytics and global threat intelligence to detect malicious threats, and the new capabilities that our product team is adding everyday empower our customers to respond quickly to these threats.

However, just having great tools that alert on threats and attacks is not enough. The reality is that no security tool can detect 100 percent of attacks. In addition, many of the tools that raise alerts are optimized for low false positive rates. Hence, they might miss some suspicious outlier activity in your environment which could have been flagged and investigated. This is something that Security Center and the Azure Log Analytics team understand. The product has built-in features that you can use to launch your investigations and hunting campaigns in addition to responding to the alerts that it triggers.

In the real world, threat hunting involves several considerations. You not only need a good analyst team; you need an even larger team of service engineers and administrators to deploy an agent that collects the investigation-related data, parse it into a format that can be queried, build tools that help query this data, and index the data so that your queries run fast and actually return results. ASC and Log Analytics take care of all of this and make hunting for threats much easier. What organizations need is a change in mindset: instead of being just alert driven, they should also incorporate active threat hunting into their overall security program.

What is Threat Hunting?

Loosely defined, it is the process of proactively and iteratively searching through your varied log data with the goal of detecting threats that evade existing security solutions. If you think about it, threat hunting is a mindset: instead of just reacting to alerts, you are proactive about securing your organization's environment and are looking for signs of malicious activity within your enterprise, without prior knowledge of those signs. Threat hunting involves hypothesizing about attackers' behavior, researching these hypotheses and techniques to determine the artifacts that would be left in the logs, checking whether your organization collects and stores those logs, and then verifying the hypotheses against your environment's logs.

Hunting teaches you how to find data, how to distinguish between normal activity and an outlier, gives a better picture of your network and shows you your detection gaps. Security analysts who do regular hunting are better trained to respond and triage during an actual incident.

Today, we are going to look at some examples of these simple hunts that an analyst can start with. In our previous posts, we have already touched a little bit about this. You can read more about detecting malicious activity and finding hidden techniques commonly deployed by attackers and how Azure Security Center helps analyze attacks using Investigation and Log Search.

Scenario 1

A lot of security tools look for abnormally large data transfers to an external destination. To evade these security tools and to reduce the amount of data sent over the network, attackers often compress the collected data prior to exfiltration. The popular tools of choice for compression are typically 7zip/Winzip/WinRar etc. Attackers have also been known to use their own custom programs for compressing data.

For example, when using WinRar to compress data, a few of the switches that seem to be most commonly used are "a -m5 -hp". While the "a" switch specifies adding the file to an archive, the "-m5" switch specifies the level of compression, where "5" is the maximum compression level. The "-hp" switch is used to encrypt content and header information. With knowledge of these command line switches, we may detect some of this activity.

Below is a simple query to run this logic, where we are looking for these command line switches. In this example, if we look at the results we can see that all the command line data looks benign except where Svchost.exe is using the command line switches associated with WinRAR. In addition, the binary Svchost.exe is running from a temp directory, when ideally it should be running from the %windir%\System32 folder. Threat actors have been known to rename their tools to well-known process names to hide in plain sight. This is considered suspicious and a good starting point for an investigation. An analyst can take one of many approaches from here to uncover the entire attack sequence. They can pivot into logon data to find what happened in this logon session, or focus on what user account was used. They can also investigate what IP addresses may have connected to this machine, or what IP addresses this machine connected to, during the given time frame.

SecurityEvent
| where TimeGenerated >= ago(2d)
| search CommandLine : " -m5 " and CommandLine : " a "
| project NewProcessName , CommandLine


Another good example is the use of popular Nirsoft tools, like its mail password viewer or IE password viewer, being used maliciously by attackers to gather passwords from email clients as well as passwords stored in a browser. Knowing the command lines for these tools, one may find interesting log entries by searching for command line parameters such as /stext or /scomma, which allows discovery of potentially malicious activity without needing to know the process name. To extend the previous example, seeing a command line like "notepad.exe /stext output.txt" is a good indication that notepad might be a renamed Nirsoft tool and likely malicious activity.
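Following the same pattern as the WinRAR query above, a hunt for those switches might look like the sketch below (the two-day window and projected columns are illustrative choices, not requirements):

SecurityEvent
| where TimeGenerated >= ago(2d)
| search CommandLine : " /stext " or CommandLine : " /scomma "
| project NewProcessName , CommandLine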

Scenario 2

Building on the earlier example where we saw Rar.exe being renamed to svchost.exe. Malware writers often use windows system process names for their malicious process names to make them blend in with other legitimate commands that the Windows system executes. If an analyst is familiar with the well-known Windows processes they can easily spot the bad ones. For example, Svchost.exe is a system process that hosts many Windows services and is generally the most abused by attackers. For the svchost.exe process, it is common knowledge that:

  • It runs from %windir%\System32 or %windir%\SysWOW64.
  • It runs under the NT AUTHORITY\SYSTEM, LOCAL SERVICE, or NETWORK SERVICE accounts.

Based on this knowledge, an analyst can create a simple query looking for a process named Svchost.exe. It is recommended to filter out well-known security identifiers (SIDs) that are used to launch the legitimate svchost.exe process. The query also filters out the legitimate locations from which svchost.exe is launched.

SecurityEvent
| where TimeGenerated >= ago(2d)
| where ProcessName contains "svchost.exe"
| where SubjectUserSid != "S-1-5-18"
| where SubjectUserSid != "S-1-5-19"
| where SubjectUserSid != "S-1-5-20"
| where NewProcessName !contains @"C:\Windows\System32"
| where NewProcessName !contains @"C:\Windows\Syswow64"


Additionally, from the returned results we also check whether svchost.exe is a child of services.exe, and whether it was launched with a command line that has the -k switch (e.g., svchost.exe -k defragsvc). Filtering on these conditions will often give interesting results that an analyst can dig into further to determine whether this is normal activity or part of a compromise or attack.
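One way to express that extra filtering as a query sketch, assuming the ParentProcessName column is populated in your workspace (rows surviving these filters are the ones worth a closer look, since legitimate svchost.exe is a child of services.exe launched with -k):

SecurityEvent
| where TimeGenerated >= ago(2d)
| where NewProcessName contains "svchost.exe"
| where ParentProcessName !contains "services.exe" or CommandLine !contains " -k "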

There is nothing new or novel here. Security analysts know about it. In fact, a lot of security tools, including ASC, will detect this. However, the goal here is a change in mindset: not only responding to alerts, but proactively looking for anomalies and outliers in your environment.

Scenario 3

After initial compromise, whether through a brute force attack, spear phishing, or other methods, attackers often move to the next step, which can loosely be called the network propagation stage. The goal of the network propagation phase is to identify and move to desired systems within the target environment with the intention of discovering credentials and sensitive data. As part of this, one might see one account being used to log in on an unusually high number of machines in the environment, or a lot of different account authentication requests coming from one machine. For the second scenario, where we want to find machines that have been used to authenticate more accounts than our desired threshold, we could write a query like the one below:

SecurityEvent
    | where EventID == 4624
    | where AccountType == "User"
    | where TimeGenerated >= ago(1d)
    | summarize IndividualAccounts = dcount(Account) by Computer
    | where IndividualAccounts > 4
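For the first pattern mentioned above, a single account logging in to an unusually high number of machines, a symmetric sketch (the threshold of 4 is illustrative and should be tuned to your environment):

SecurityEvent
    | where EventID == 4624
    | where AccountType == "User"
    | where TimeGenerated >= ago(1d)
    | summarize DistinctComputers = dcount(Computer) by Account
    | where DistinctComputers > 4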

If we also wanted to see what alerts fired on these machines, we could extend the above query and join it with the SecurityAlert table.

SecurityEvent
    | where EventID == 4624
    | where AccountType == "User"
    | where TimeGenerated >= ago(1d)
    | extend Computer = toupper(Computer)
    | summarize IndividualAccounts = dcount(Account) by Computer
    | where IndividualAccounts > 4
    | join (
        SecurityAlert
        | extend ExtProps = parsejson(ExtendedProperties)
        | extend Computer = toupper(tostring(ExtProps["Compromised Host"]))
    ) on Computer

These are just a few examples. The possibilities are endless. With practice, a good analyst knows when to dig deeper and when to move on to the next item on their hunting journey. Nothing in the world of cyber gets better unless victims start defending themselves more holistically. Enabling our customers on this journey and providing them with the tools to protect themselves when they move to Azure is what drives us every day.

Happy Hunting!

Real-time data analytics and Azure Data Lake Storage Gen2


It's been a little more than two months since we launched Azure Data Lake Storage Gen2, and we're thrilled and overwhelmed by the response we've received from customers and partners alike. We built Azure Data Lake Storage to deliver a no-compromises data lake, and the high level of customer engagement in Gen2's public preview confirms our approach. We have heard from customers both large and small, across a broad range of markets and industries, that Gen2's ability to provide object storage scale and cost effectiveness with a world-class data lake experience is exceeding their expectations, and we couldn't be happier to hear it!

Partner enablement in Gen2

We are actively partnering with leading ISVs across the big data spectrum (platform providers, data movement and ETL, governance and data lifecycle management (DLM), analysis, presentation, and beyond) to ensure seamless integration between Gen2 and their solutions.

Over the next few months you will hear more about the exciting work these partners are doing with ADLS Gen2. We’ll do blog posts, events, and webinars that highlight these industry-leading solutions.

In fact, I am happy to announce our first joint Gen2 engineering-ISV webinar with Attunity on September 18th, Real-time Big Data Analytics in the Cloud 101: Expert Advice from the Attunity and Azure Data Lake Storage Gen2 Teams.

Real-time analytics and ADLS Gen2

Azure Data Lake Storage Gen2 is at the core of Azure analytics workflows. One of the workflows that has generated significant interest is real-time analytics. With the explosive growth of data generated from sensors, social media, and business apps, many organizations are looking for ways to drive real-time insights and orchestrate immediate action using cloud analytics services. As the diagram below indicates, multiple Azure services combine to provide an end-to-end solution for real-time analytics workflows.

(Diagram: Azure services composing an end-to-end real-time analytics workflow)

Many of our customers are just getting started with building their real-time analytics cloud architectures. We’re excited to partner with Attunity to help customers learn more about real-time analytics, data lakes and how you can quickly move from evaluation to execution.

Attunity is a recognized leader in data integration and real-time data capture with deep skills in real-time data management. They are also a Microsoft Gold partner. In our webinar, we’ll explore the following questions:

  • Why is real-time data important for driving business insights?
  • What’s a data lake and why would you use one to store your real-time data?
  • How can you use change data capture (CDC) technology to efficiently transfer data to the cloud?
  • How can you build sophisticated analytic workflows quickly?
  • Why is Azure Data Lake Storage Gen2 the best data lake for real-time analytics?

Next steps

We work with a number of amazing partners. We’re excited about the prospect of showcasing Attunity solutions and helping customers get to insights faster in their big data analytics workflows using Azure and partner solutions.

We hope you join us on September 18th for our webinar. To register for this webinar, please visit the event signup page. To sign up for the Azure Data Lake Storage Gen2 preview please visit our product page.

Announcing ML.NET 0.5


Today, coinciding with .NET Conf 2018, we're announcing the release of ML.NET 0.5. It's been a few months already since we released ML.NET 0.1, a cross-platform, open source machine learning framework for .NET developers, at //Build 2018. While we're evolving through new preview releases, we are getting great feedback and would like to thank the community for your engagement as we continue to develop ML.NET together in the open.

In this 0.5 release, we are adding TensorFlow model scoring as a transform to ML.NET. This enables using an existing TensorFlow model within an ML.NET experiment. In addition, we are addressing a variety of issues and feedback we received from the community. We welcome feedback and contributions to the conversation; relevant issues can be found here.

As part of the upcoming ML.NET roadmap, we really want your feedback on making ML.NET easier to use. We are working on a new ML.NET API which improves flexibility and ease of use. When the new API is ready and good enough, we plan to deprecate the current LearningPipeline API. Because this will be a significant change, we are sharing our proposals for the multiple API options and comparisons at the end of this blog post. We also want an open discussion where you can provide feedback and help shape the long-term API for ML.NET.

This blog post provides details about the following topics in ML.NET:

Added a TensorFlow model scoring transform (TensorFlowTransform)

TensorFlow is a popular deep learning and machine learning toolkit that enables training deep neural networks (and general numeric computations).

Deep learning is a subset of AI and machine learning that teaches programs to do what comes naturally to humans: learn by example.
Its main differentiator compared to traditional machine learning is that a deep learning model can learn to perform object detection and classification tasks directly from images, sound or text, or even deliver tasks such as speech recognition and language translation, whereas traditional ML approaches relied heavily on feature engineering and data processing.
Deep learning models need to be trained using very large sets of labeled data and neural networks that contain multiple layers. Deep learning's current popularity has several causes. First, it simply performs better on some tasks, like computer vision; second, it can take advantage of the huge amounts of data that are now becoming available (and requires that volume in order to perform well).

With ML.NET 0.5 we are starting to add support for deep learning in ML.NET. Today we are introducing the first level of integration with TensorFlow in ML.NET through the new TensorFlowTransform, which enables taking an existing TensorFlow model, either trained by you or downloaded from somewhere else, and getting scores from that TensorFlow model in ML.NET.

This new TensorFlow scoring capability doesn’t require you to have a working knowledge of TensorFlow internal details. Longer term we will be working on making the experience for performing Deep Learning with ML.NET even easier.

The implementation of this transform is based on code from TensorFlowSharp.

As shown in the following diagram, you simply add a reference to the ML.NET NuGet packages in your .NET Core or .NET Framework apps. Under the covers, ML.NET includes and references the native TensorFlow library which allows you to write code that loads an existing trained TensorFlow model file for scoring.

TensorFlow-ML.NET application diagram

The following code snippet shows how to use the TensorFlow transform in the ML.NET pipeline:

// ... Additional transformations in the pipeline code

pipeline.Add(new TensorFlowScorer()
{
    ModelFile = "model/tensorflow_inception_graph.pb",   // Example using the Inception v3 TensorFlow model
    InputColumns = new[] { "input" },                    // Name of input in the TensorFlow model
    OutputColumn = "softmax2_pre_activation"             // Name of output in the TensorFlow model
});

// ... Additional code specifying a learner and training process for the ML.NET model

The code example above uses the pre-trained TensorFlow model named Inception v3, that you can download from here. The Inception v3 is a very popular image recognition model trained on the ImageNet dataset where the TensorFlow model tries to classify entire images into a thousand classes, like “Umbrella”, “Jersey”, and “Dishwasher”.

The Inception v3 model can be classified as a deep convolutional neural network and can achieve reasonable performance on hard visual recognition tasks, matching or exceeding human performance in some domains. The model/algorithm was developed by multiple researchers and is based on the original paper: “Rethinking the Inception Architecture for Computer Vision” by Szegedy et al.

In the next ML.NET releases, we will add functionality to enable identifying the expected inputs and outputs of TensorFlow models. For now, use the TensorFlow APIs or a tool like Netron to explore the TensorFlow model.

If you open the previous sample TensorFlow model file (tensorflow_inception_graph.pb) with Netron and explore the model’s graph, you can see how it correlates the InputColumn with the node’s input at the beginning of the graph:

TensorFlow model's input in graph

And how the OutputColumn correlates with softmax2_pre_activation node’s output almost at the end of the graph.

TensorFlow model's input in graph

Limitations: We are currently updating the ML.NET APIs for improved flexibility, as there are a few limitations to using TensorFlow in ML.NET today. For now (when using the LearningPipeline API), these scores can only be used within a LearningPipeline as inputs (numeric vectors) to a learner such as a classifier. However, with the upcoming new ML.NET APIs, the TensorFlow model scores will be directly accessible, so you can score with the TensorFlow model without the current need to add an additional learner and its related training process, as implemented in this sample. That sample creates a multi-class classification ML.NET model based on a StochasticDualCoordinateAscentClassifier, using a label (object name) together with a numeric vector feature generated/scored per image file by the TensorFlow model.

Note that the TensorFlow code examples above use the current LearningPipeline API available in v0.5. Moving forward, the ML.NET API for using TensorFlow will be slightly different and not based on the “pipeline”. This relates to the next section of this blog post, which focuses on the new upcoming API for ML.NET.

Finally, we also want to highlight that the ML.NET framework is currently surfacing TensorFlow, but in the future we might look into additional Deep Learning library integrations, such as Torch and CNTK.

You can find an additional code example using the TensorFlowTransform with the existing LearningPipeline API here.

Explore the upcoming new ML.NET API and provide feedback

As mentioned at the beginning of this blog post, we are really looking forward to getting your feedback as we create the new ML.NET API while crafting ML.NET. This evolution in ML.NET offers more flexible capabilities than what the current LearningPipeline API offers. The LearningPipeline API will be deprecated when this new API is ready and mature enough.

The following are links to some example feedback we received, in the form of GitHub issues, about the limitations of the LearningPipeline API:

Therefore, based on feedback on the LearningPipeline API, quite a few weeks ago we decided to switch to a new ML.NET API that would address most of the limitations the LearningPipeline API currently has.

Design principles for this new ML.NET API

We are designing this new API based on the following principles:

  • Using terminology parallel to other well-known frameworks like Scikit-Learn, TensorFlow, and Spark, and trying to be consistent in terms of naming and concepts, making it easier for developers to understand and learn ML.NET.

  • Keeping simple ML scenarios, such as a simple train and predict, simple and concise.

  • Allowing advanced ML scenarios (not possible with the current LearningPipeline API as explained in the next section).

We have also explored API approaches like Fluent API, declarative, and imperative.
For additional deeper discussion on principles and required scenarios, check out this issue in GitHub.

Why ML.NET is switching from the LearningPipeline API to a new API?

As part of the preview version crafting process (remember that ML.NET is still in early previews), we’ve been getting LearningPipeline API feedback and discovered quite a few limitations we need to address by creating a more flexible API.

Specifically, the new ML.NET API offers attractive features which aren’t possible with the current LearningPipeline API:

  • Strongly-typed API: This new strongly-typed API takes advantage of C# capabilities so errors can be discovered at compile time, along with improved IntelliSense in the editors.

  • Better flexibility: This API provides a decomposable train and predict process, eliminating rigid and linear pipeline execution. With the new API, you can execute a certain code path and then fork the execution so multiple paths can re-use the initial common execution. For example, you can share a given transform's execution and transformed data with multiple learners and trainers, or decompose pipelines and add multiple learners.

This new API is based on concepts such as Estimators, Transforms and DataView, shown in the following code in this blog post.

  • Improved usability: You call the APIs directly from your code. No more scaffolding or isolation layer creating an obscure separation between what the user/developer writes and the internal APIs. Entrypoints are no longer mandatory.

  • Ability to simply score with TensorFlow models: Thanks to the mentioned flexibility in the API, you can also simply load a TensorFlow model and score by using it, without needing to add any additional learner or training process, as explained in the previous “Limitations” topic within the TensorFlow section.

  • Better visibility of the transformed data: You have better visibility of the data while applying transformers.

Comparison of strongly-typed API vs. LearningPipeline API

Another important comparison is related to the Strongly Typed API feature in the new API.
As an example of issues you can get when you don’t have a strongly typed API, the LearningPipeline API (as illustrated in the following code) provides access to data columns by specifying the column’s name as a string, so if you make a typo (i.e. you wrote “Descrption” without the ‘i’ instead of “Description”, as the typo in the sample code), you will get a run-time exception:

pipeline.Add(new TextFeaturizer("Description", "Descrption"));       

However, the new ML.NET API is strongly typed, so if you make a typo it will be caught at compile time, and you can also take advantage of IntelliSense in the editor.

var estimator = reader.MakeEstimator()
                .Append(row => (                    
                    description: row.description.FeaturizeText()))          

Details on decomposable train and predict API

The following code snippet shows how the transforms and training process of the “GitHub issues labeler” sample app can be implemented with the new API in ML.NET.

This is our current proposal and based on your feedback this API will probably evolve accordingly.

New ML.NET API code example:

public static async Task BuildAndTrainModelToClassifyGithubIssues()
{
    var env = new MLEnvironment();

    string trainDataPath = @"Data\issues_train.tsv";

    // Create reader
    var reader = TextLoader.CreateReader(env, ctx =>
                                    (area: ctx.LoadText(1),
                                    title: ctx.LoadText(2),
                                    description: ctx.LoadText(3)),
                                    new MultiFileSource(trainDataPath), 
                                    hasHeader : true);

    var loss = new HingeLoss(new HingeLoss.Arguments() { Margin = 1 });

    var estimator = reader.MakeNewEstimator()
        .Append(row => (
            // Convert string label to key. 
            label: row.area.ToKey(),
            // Featurize 'description'
            description: row.description.FeaturizeText(),
            // Featurize 'title'
            title: row.title.FeaturizeText()))
        .Append(row => (
            // Concatenate the two features into a vector and normalize.
            features: row.description.ConcatWith(row.title).Normalize(),
            // Preserve the label - otherwise it will be dropped
            label: row.label))
        .Append(row => (
            // Preserve the label (for evaluation)
            row.label,
            // Train the linear predictor (SDCA)
            score: row.label.PredictSdcaClassification(row.features, loss: loss)))
        .Append(row => (
            // Want the prediction, as well as label and score which are needed for evaluation
            predictedLabel: row.score.predictedLabel.ToValue(),
            row.label,
            row.score));

    // Read the data
    var data = reader.Read(new MultiFileSource(trainDataPath));

    // Fit the data to get a model
    var model = estimator.Fit(data);

    // Use the model to get predictions on the test dataset and evaluate the accuracy of the model
    var scores = model.Transform(reader.Read(new MultiFileSource(@"Data\issues_test.tsv")));
    var metrics = MultiClassClassifierEvaluator.Evaluate(scores, r => r.label, r => r.score);

    Console.WriteLine("Micro-accuracy is: " + metrics.AccuracyMicro);

    // Save the ML.NET model into a .ZIP file
    await model.WriteAsync("github-Model.zip");
}

public static async Task PredictLabelForGithubIssueAsync()
{
    // Read model from an ML.NET .ZIP model file
    var env = new MLEnvironment();
    var model = await PredictionModel.ReadAsync("github-Model.zip");

    // Create a prediction function that can be used to score incoming issues
    var predictor = model.AsDynamic.MakePredictionFunction<GitHubIssue, IssuePrediction>(env);

    // This prediction will classify this particular issue in a type such as "EF and Database access"
    var prediction = predictor.Predict(new GitHubIssue
    {
        title = "Sample issue related to Entity Framework",
        description = @"When using Entity Framework Core I'm experiencing database connection failures when running queries or transactions. Looks like it could be related to transient faults in network communication against the Azure SQL Database."
    });

    Console.WriteLine("Predicted label is: " + prediction.predictedLabel);
}

Compare with the following old LearningPipeline API code snippet that lacks flexibility because the pipeline execution is not decomposable but linear:

Old LearningPipeline API code example:

public static async Task BuildAndTrainModelToClassifyGithubIssuesAsync()
{
        // Create the pipeline
    var pipeline = new LearningPipeline();

    // Read the data
    pipeline.Add(new TextLoader(DataPath).CreateFrom<GitHubIssue>(useHeader: true));

    // Dictionarize the "Area" column
    pipeline.Add(new Dictionarizer(("Area", "Label")));

    // Featurize the "Title" column
    pipeline.Add(new TextFeaturizer("Title", "Title"));

    // Featurize the "Description" column
    pipeline.Add(new TextFeaturizer("Description", "Description"));
    
    // Concatenate the provided columns
    pipeline.Add(new ColumnConcatenator("Features", "Title", "Description"));

    // Set the algorithm/learner to use when training
    pipeline.Add(new StochasticDualCoordinateAscentClassifier());

    // Specify the column to predict when scoring
    pipeline.Add(new PredictedLabelColumnOriginalValueConverter() { PredictedLabelColumn = "PredictedLabel" });

    Console.WriteLine("=============== Training model ===============");

    // Train the model
    var model = pipeline.Train<GitHubIssue, GitHubIssuePrediction>();

    // Save the model to a .zip file
    await model.WriteAsync(ModelPath);

    Console.WriteLine("=============== End training ===============");
    Console.WriteLine("The model is saved to {0}", ModelPath);
}

public static async Task<string> PredictLabelForGitHubIssueAsync()
{
    // Read model from an ML.NET .ZIP model file
    _model = await PredictionModel.ReadAsync<GitHubIssue, GitHubIssuePrediction>(ModelPath);
    
    // This prediction will classify this particular issue in a type such as "EF and Database access"
    var prediction = _model.Predict(new GitHubIssue
        {
            Title = "Sample issue related to Entity Framework", 
            Description = "When using Entity Framework Core I'm experiencing database connection failures when running queries or transactions. Looks like it could be related to transient faults in network communication against the Azure SQL Database..."
        });

    return prediction.Area;
}

The old LearningPipeline API is a fully linear code path, so you can’t decompose it into multiple pieces.
For instance, the BikeSharing ML.NET sample (available at the machine-learning-samples GitHub repo) is using the current LearningPipeline API.

This sample compares the regression learner accuracy using the evaluators API by:

  • Performing several data transforms to the original dataset
  • Training and creating seven different ML.NET models based on seven different regression trainers/algorithms (such as FastTreeRegressor, FastTreeTweedieRegressor, StochasticDualCoordinateAscentRegressor, etc.)

The intent is to help you compare the regression learners for a given problem.

Since the data transformations are the same for those models, you might want to re-use the code execution related to transforms. However, because the LearningPipeline API only provides a single linear execution, you need to run the same data transformation steps for every model you create/train, as shown in the following code excerpt from the BikeSharing ML.NET sample.

var fastTreeModel = new ModelBuilder(trainingDataLocation, new FastTreeRegressor()).BuildAndTrain();
var fastTreeMetrics = modelEvaluator.Evaluate(fastTreeModel, testDataLocation);
PrintMetrics("Fast Tree", fastTreeMetrics);

var fastForestModel = new ModelBuilder(trainingDataLocation, new FastForestRegressor()).BuildAndTrain();
var fastForestMetrics = modelEvaluator.Evaluate(fastForestModel, testDataLocation);
PrintMetrics("Fast Forest", fastForestMetrics);

var poissonModel = new ModelBuilder(trainingDataLocation, new PoissonRegressor()).BuildAndTrain();
var poissonMetrics = modelEvaluator.Evaluate(poissonModel, testDataLocation);
PrintMetrics("Poisson", poissonMetrics);

//Other learners/algorithms
//...

The BuildAndTrain() method needs to include both the data transforms and the particular algorithm for each case, as shown in the following code:

public PredictionModel<BikeSharingDemandSample, BikeSharingDemandPrediction> BuildAndTrain()
{
    var pipeline = new LearningPipeline();
    pipeline.Add(new TextLoader(_trainingDataLocation).CreateFrom<BikeSharingDemandSample>(useHeader: true, separator: ','));
    pipeline.Add(new ColumnCopier(("Count", "Label")));
    pipeline.Add(new ColumnConcatenator("Features", 
                                        "Season", 
                                        "Year", 
                                        "Month", 
                                        "Hour", 
                                        "Weekday", 
                                        "Weather", 
                                        "Temperature", 
                                        "NormalizedTemperature",
                                        "Humidity",
                                        "Windspeed"));
    pipeline.Add(_algorythm);

    return pipeline.Train<BikeSharingDemandSample, BikeSharingDemandPrediction>();
}            

With the old LearningPipeline API, for every training run using a different algorithm you need to repeat the same process, performing the following steps again and again:

  • Load dataset from file
  • Make column transformations (concat, copy, or additional featurizers or dictionarizers, if needed)

But with the new ML.NET API, based on Estimators and DataView, you will be able to re-use parts of the execution; in this case, re-using the data transforms execution as the base for multiple models using different algorithms.
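The payoff of that decomposition can be sketched in plain C#. The pipeline types below are deliberately simplified stand-ins, not the actual ML.NET API: a shared transform runs once, and several independent "trainers" consume its cached output instead of each re-running the transform.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DecomposablePipelineSketch
{
    public static void Main()
    {
        // Hypothetical raw dataset.
        int[] rawData = { 3, 1, 4, 1, 5 };

        // Shared transform: executed once, output cached for re-use.
        int[] transformed = rawData.Select(v => v * 2).ToArray();

        // Multiple "trainers" consume the same transformed data,
        // instead of re-running the transform once per model.
        var trainers = new Dictionary<string, Func<int[], int>>
        {
            ["Sum model"] = d => d.Sum(),
            ["Max model"] = d => d.Max()
        };

        foreach (var trainer in trainers)
            Console.WriteLine($"{trainer.Key}: {trainer.Value(transformed)}");
    }
}
```

In the real API the cached intermediate plays the role of a transformed DataView shared across estimators, but the principle is the same: the expensive common prefix of the pipeline executes only once.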

You can also explore other “aspirational code examples” with the new API here.

Because this will be a significant change in ML.NET we want to share our proposals and start an open discussion with you where you can provide your feedback and help shape the long-term API for ML.NET.

Provide your feedback on the new API

Provide feedback image with two people and a swimlane

Want to get involved? Start by providing feedback at this blog post comments below or through issues at the ML.NET GitHub repo: https://github.com/dotnet/machinelearning/issues

Get started!

If you haven’t already, get started with ML.NET here!

Next, explore some other great resources:

We look forward to your feedback and welcome you to file issues with any suggestions or enhancements in the ML.NET GitHub repo.

This blog was authored by Cesar de la Torre, Gal Oshri, John Alexander, and Ankit Asthana

Thanks,

The ML.NET Team


A (Belated) Welcome to C# 7.3


Better late than never! Some of you may have noticed that C# 7.3 already shipped, back in Visual Studio 2017 update 15.7. Some of you may even be using the features already.

C# 7.3 is the newest point release in the 7.0 family, and it continues the themes of performance-focused safe code while bringing some small "quality of life" improvements to both new and old features.

For performance, we have a few features that improve ref variables, pointers, and stackalloc. ref variables can now be reassigned, letting you treat ref variables more like traditional variables. stackalloc now has an optional initializer syntax, letting you easily and safely initialize stack allocated buffers. For struct fixed-size buffers, you can now index into the buffer without using a pinning statement. And when you do need to pin, we’ve made the fixed statement more flexible by allowing it to operate on any type that has a suitable GetPinnableReference method.
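A minimal, self-contained sketch of two of these features, stackalloc initializers and ref local reassignment (the buffer contents and variable names here are illustrative, not from any official sample):

```csharp
using System;

public static class CSharp73PerfDemo
{
    public static void Main()
    {
        // stackalloc with an initializer syntax (new in C# 7.3).
        Span<int> buffer = stackalloc int[3] { 10, 20, 30 };

        // ref locals can now be reassigned to refer to different storage.
        ref int r = ref buffer[0];
        r = ref buffer[2];
        r = 99; // writes through the ref into buffer[2]

        Console.WriteLine(string.Join(",", buffer.ToArray()));
    }
}
```

Because the stackalloc result is assigned to a Span&lt;int&gt;, no unsafe context is needed, which is the "safe code" half of the performance story.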

For feature improvements, we’ve removed some long-standing restrictions on constraints, so System.Enum and System.Delegate can now be used as generic constraints, and we’ve added a new unmanaged constraint that allows you to take a pointer to a generic type parameter. We’ve also improved overload resolution (again!), allowed out and pattern variables in more places, enabled tuples to be compared using == and !=, and fixed the [field: ] attribute target for auto-implemented properties to target the property’s backing field.
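Two of these improvements, the System.Enum constraint and tuple equality, can be sketched as follows (the Describe helper is a hypothetical example, not a framework API):

```csharp
using System;

public static class CSharp73QolDemo
{
    // A generic method restricted to enum types via the System.Enum
    // constraint (new in C# 7.3).
    static string Describe<T>(T value) where T : struct, Enum
        => $"{typeof(T).Name}.{value}";

    public static void Main()
    {
        // Tuples can now be compared with == and != (element-wise).
        var a = (id: 1, name: "x");
        var b = (id: 1, name: "x");
        Console.WriteLine(a == b);

        Console.WriteLine(Describe(DayOfWeek.Friday));
    }
}
```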

All of these features are small additions to the language, but they should make each of these parts of the language a little easier and more pleasant. If you want more details, you can see the 15.7 release notes or check out the documentation for What’s new in C# 7.3.

Andy Gocke
C#/VB Compiler Team

Announcing Entity Framework Core 2.2 Preview 2 and the preview of the Cosmos DB provider and spatial extensions for EF Core


Today we are making EF Core 2.2 Preview 2 available, together with a preview of our data provider for Cosmos DB and new spatial extensions for our SQL Server and in-memory providers.

Obtaining the preview

The preview bits are available on NuGet, and also as part of ASP.NET Core 2.2 Preview 2 and the .NET Core SDK 2.2 Preview 2, also releasing today.

If you are working on an application based on ASP.NET Core, we recommend you upgrade to ASP.NET Core 2.2 Preview 2 following the instructions in the announcement.

The SQL Server and the in-memory providers are included in ASP.NET Core, but for other providers and any other type of application, you will need to install the corresponding NuGet package. For example, to add the 2.2 Preview 2 version of the SQL Server provider in a .NET Core library or application from the command line, use:

$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer -v 2.2.0-preview2-35157

Or from the Package Manager Console in Visual Studio:

PM> Install-Package Microsoft.EntityFrameworkCore.SqlServer -Version 2.2.0-preview2-35157

For more details on how to add EF Core to your projects see our documentation on Installing Entity Framework Core.

The Cosmos DB provider and the spatial extensions ship as new separate NuGet packages. We’ll explain how to get started with them in the corresponding feature descriptions.

What is new in this preview?

As we explained in our roadmap announcement back in June, there will be a large number of bug fixes (you can see the list of issues we have fixed so far here), but only a relatively small number of new features in EF Core 2.2.

Here are the most salient new features:

New EF Core provider for Cosmos DB

This new provider enables developers familiar with the EF programming model to easily target Azure Cosmos DB as an application database, with all the advantages that come with it, including global distribution, elastic scalability, “always on” availability, very low latency, and automatic indexing.

The provider targets the SQL API in Cosmos DB, and can be installed in an application by issuing the following command from the command line:

$ dotnet add package Microsoft.EntityFrameworkCore.Cosmos.Sql -v 2.2.0-preview2-35157

Or from the Package Manager Console in Visual Studio:

PM> Install-Package Microsoft.EntityFrameworkCore.Cosmos.Sql -Version 2.2.0-preview2-35157

To configure a DbContext to connect to Cosmos DB, you call the UseCosmosSql() extension method. For example, the following DbContext connects to a database called “MyDocuments” on the Cosmos DB local emulator to store a simple blogging model:

public class BloggingContext : DbContext
{
  public DbSet<Blog> Blogs { get; set; }
  public DbSet<Post> Posts { get; set; }

  protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
  {
    optionsBuilder.UseCosmosSql(
      "https://localhost:8081",
      "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
      "MyDocuments");
  }
}

public class Blog
{
  public int BlogId { get; set; }
  public string Name { get; set; }
  public string Url { get; set; }
  public List<Post> Posts { get; set; }
}

public class Post
{
  public int PostId { get; set; }
  public string Title { get; set; }
  public string Content { get; set; }
}

If you want, you can create the database programmatically, using EF Core APIs:

using (var context = new BloggingContext())
{
  await context.Database.EnsureCreatedAsync();
}

Once you have connected to an existing database and you have defined your entities, you can start storing data in the database, for example:

using (var context = new BloggingContext())
{
  context.Blogs.Add(
    new Blog
    {
        BlogId = 1,
        Name = ".NET Blog",
        Url = "https://blogs.msdn.microsoft.com/dotnet/",
        Posts = new List<Post>
        {
            new Post
            {
                PostId = 2,
                Title = "Welcome to this blog!"
            }
        }
    });

  await context.SaveChangesAsync();
}

And you can write queries using LINQ:

var dotNetBlog = context.Blogs.Single(b => b.Name == ".NET Blog");

Current capabilities and limitations of the Cosmos DB provider

Around a year ago, we started showing similar functionality in demos, using a Cosmos DB provider prototype we put together as a proof of concept. This helped us get some great feedback:

  • Most customers we talked to confirmed that they could see a lot of value in being able to use the EF APIs they were already familiar with to target Cosmos DB, and potentially other NoSQL databases.
  • There were specific details about how the prototype worked that we needed to fix.
    For example, our prototype mapped entities in each inheritance hierarchy to their own separate Cosmos DB collections, but because of the way Cosmos DB pricing works, this could become unnecessarily expensive. Based on this feedback, we decided to implement a new mapping convention that by default stores all entity types defined in the DbContext in the same Cosmos DB collection, and uses a discriminator property to identify the entity type.

The preview we are releasing today, although limited in many ways, is no longer a prototype, but the actual code that we plan to keep working on and eventually ship. Our hope is that by releasing it early in development, we will enable many developers to play with it and provide more valuable feedback.

Here are some of the known limitations we are working on overcoming for Preview 3 and RTM.

  • No asynchronous query support: Currently, LINQ queries can only be executed synchronously.
  • Only some of the LINQ operators translatable to Cosmos DB’s SQL dialect are currently translated.
  • No synchronous API support for SaveChanges(), EnsureCreated() or EnsureDeleted(): you can use the asynchronous versions.
  • No auto-generated unique keys: Since entities of all types share the same collection, each entity needs to have a globally unique key value, but in Preview 2, if you use an integer Id key, you will need to set it explicitly to unique values on each added entity. This has been addressed, and in our nightly builds we now automatically generate GUID values.
  • No nesting of owned entities in documents: We are planning to use entity ownership to decide when an entity should be serialized as part of the same JSON document as its owner. In fact, we are extending the ability to specify ownership to collections in 2.2. However, this behavior hasn’t been implemented yet, and each entity is stored as its own document.

You can track in more detail our progress overcoming these and other limitations in this issue on GitHub.

For anything else that you find, please report it as a new issue.

Spatial extensions for SQL Server and in-memory

Support for exposing the spatial capabilities of databases through the mapping of spatial columns and functions is a long-standing and popular feature request for EF Core. In fact, some of this functionality has been available to you for some time if you use the EF Core provider for PostgreSQL, Npgsql. In EF Core 2.2, we are finally attempting to address this for the providers that we ship.

Our implementation picks the same NetTopologySuite library that the PostgreSQL provider uses as the source of spatial .NET types you can use in your entity properties. NetTopologySuite is a database agnostic spatial library that implements standard spatial functionality using .NET idioms like properties and indexers.

The extension then adds the ability to map and convert instances of these types to the column types supported by the underlying database, and usage of methods defined on these types in LINQ queries, to SQL functions supported by the underlying database.

You can install the spatial extension using the following command from the command line:

$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite -v 2.2.0-preview2-35157

Or from the Package Manager Console in Visual Studio:

PM> Install-Package Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite -Version 2.2.0-preview2-35157

Once you have installed this extension, you can enable it in your DbContext by calling the UseNetTopologySuite() method inside UseSqlServer() either in OnConfiguring() or AddDbContext().

For example:

public class SensorContext : DbContext
{
  public DbSet<Measurement> Measurements { get; set; }

  protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
  {
    optionsBuilder
      .UseSqlServer(
        @"Server=(localdb)\mssqllocaldb;Database=SensorDatabase;Trusted_Connection=True;ConnectRetryCount=0",
        sqlOptions => sqlOptions.UseNetTopologySuite());
  }
}

Then you can start using spatial types in your model definition. In this case, we will use NetTopologySuite.Geometries.Point to represent the location of a measurement:

using NetTopologySuite.Geometries;
...
  public class Measurement
  {
      public int Id { get; set; }
      public DateTime Time { get; set; }
      public Point Location { get; set; }
      public double Temperature { get; set; }
  }

Once you have configured the DbContext and the model in this way, you can create the database, and start persisting spatial data:

using (var context = new SensorContext())
{
  context.Database.EnsureCreated();
  context.AddRange(
    new Measurement { Time = DateTime.Now, Location = new Point(0, 0), Temperature = 0.0},
    new Measurement { Time = DateTime.Now, Location = new Point(1, 1), Temperature = 0.1},
    new Measurement { Time = DateTime.Now, Location = new Point(1, 2), Temperature = 0.2},
    new Measurement { Time = DateTime.Now, Location = new Point(2, 1), Temperature = 0.3},
    new Measurement { Time = DateTime.Now, Location = new Point(2, 2), Temperature = 0.4});
  context.SaveChanges();
}

And once you have a database containing spatial data, you can start executing queries:

var currentLocation = new Point(0, 0);

var nearestMeasurements =
  from m in context.Measurements
  where m.Location.Distance(currentLocation) < 2
  orderby m.Location.Distance(currentLocation) descending
  select m;

foreach (var m in nearestMeasurements)
{
    Console.WriteLine($"A temperature of {m.Temperature} was detected on {m.Time} at {m.Location}.");
}

This will result in the following SQL query being executed:

SELECT [m].[Id], [m].[Location], [m].[Temperature], [m].[Time]
FROM [Measurements] AS [m]
WHERE [m].[Location].STDistance(@__currentLocation_0) < 2.0E0
ORDER BY [m].[Location].STDistance(@__currentLocation_0) DESC

Current capabilities and limitations of the spatial extensions

  • It is possible to map properties of concrete types from NetTopologySuite.Geometries such as Geometry, Point, or Polygon, or interfaces from GeoAPI.Geometries, such as IGeometry, IPoint, IPolygon, etc.
  • Only SQL Server and in-memory database are supported: For in-memory it is not necessary to call UseNetTopologySuite(). SQLite will be enabled in Preview 3.
  • EF Core Migrations does not scaffold spatial types correctly, so you currently cannot use Migrations to create the database schema or apply seed data without workarounds.
  • Mapping to Geography columns isn’t fully tested and may have limitations. If you attempt this, make sure that you configure the underlying column type in OnModelCreating():
    modelBuilder.Entity<Measurement>().Property(b => b.Location).HasColumnType("Geography");

    And that you specify an SRID, for example 4326, in all spatial instances you persist or use in queries:

    var currentLocation = new Point(0, 0) { SRID = 4326 };

For anything else that you find, please report it as a new issue.

Collections of owned entities

EF Core 2.2 extends the ability to express ownership relationships to one-to-many associations. This helps constrain how entities in an owned collection can be manipulated (for example, they cannot be used without an owner) and triggers automatic behaviors such as implicit eager loading. In the case of relational databases, owned collections are mapped to separate tables from the owner, just like regular one-to-many associations, but in the case of a document-oriented database such as Cosmos DB, we plan to nest owned entities (in owned collections or references) within the same JSON document as the owner.

You can use the feature by invoking the new OwnsMany() API:

modelBuilder.Entity<Customer>().OwnsMany(c => c.Addresses);
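For context, the shape of a model that an OwnsMany() call like the one above might configure could look like the following (the Customer and Address types are hypothetical, not from the EF Core samples):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical owner type: a Customer owns a collection of Addresses.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Address> Addresses { get; set; } = new List<Address>();
}

// Hypothetical owned type: an Address has no identity or meaning
// outside of its owning Customer.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public static class OwnedCollectionDemo
{
    public static void Main()
    {
        var customer = new Customer { Id = 1, Name = "Contoso" };
        customer.Addresses.Add(new Address { Street = "1 Main St", City = "Redmond" });
        Console.WriteLine($"{customer.Name} has {customer.Addresses.Count} address(es)");
    }
}
```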

Query tags

This feature is designed to facilitate the correlation of LINQ queries in code with the corresponding generated SQL output captured in logs.

To take advantage of the feature, you annotate a LINQ query using the new WithTag() API. Using the spatial query from the previous example:

var nearestMeasurements =
    from m in context.Measurements.WithTag(@"This is my spatial query!")
    where m.Location.Distance(currentLocation) < 2.5
    orderby m.Location.Distance(currentLocation) descending
    select m;

This will generate the following SQL output:

-- EFCore: (#This is my spatial query!)
SELECT [m].[Id], [m].[Location], [m].[Temperature], [m].[Time]
FROM [Measurements] AS [m]
WHERE [m].[Location].STDistance(@__currentLocation_0) < 2.5E0
ORDER BY [m].[Location].STDistance(@__currentLocation_0) DESC

Provider compatibility

Although we have set up testing to make sure that existing providers will continue to work with EF Core 2.2, there might be unexpected problems, and we welcome users and provider writers to report compatibility issues on our issue tracker.

What comes next?

We are still working on some additional features we would like to include in EF Core 2.2, such as reverse engineering of database views into query types, support for spatial types with SQLite, and additional bug fixes. We plan to release EF Core 2.2 in the last calendar quarter of 2018.

In the meantime, our team has started working on our next major release, EF Core 3.0, which will include, among other improvements, a significant overhaul of our LINQ implementation.

We will also soon start the work to make Entity Framework 6 compatible with .NET Core 3.0, which was announced last May.

Your feedback is really needed!

We encourage you to play with the new features, and we thank you in advance for posting any feedback to our issue tracker.

The spatial extensions and the Cosmos DB provider in particular are very large features that expose many new capabilities and APIs. Whether we can ship these features as part of EF Core 2.2 RTM will depend on your valuable feedback and on our ability to use it to iterate over the design in the next few months.

If not Notebooks, then what? Look to Literate Programming


Author and research engineer Joel Grus kicked off an important conversation about Jupyter Notebooks in his recent presentation at JupyterCon.

(Image: Joel Grus's title slide)

There's no video yet available of Joel's talk, but you can guess the theme from that opening slide, and walking through the slides conveys the message well, I think. Yihui Xie, author and creator of the rmarkdown package, provides a detailed summary and response to Joel's talk, where he lists Joel's main critiques of Notebooks:

  1. Hidden state and out-of-order execution
  2. Notebooks are difficult for beginners
  3. Notebooks encourage bad habits
  4. Notebooks discourage modularity and testing
  5. Jupyter’s autocomplete, linting, and way of looking up the help are awkward
  6. Notebooks encourage bad processes
  7. Notebooks hinder reproducible + extensible science
  8. Notebooks make it hard to copy and paste into Slack/Github issues
  9. Errors will always halt execution
  10. Notebooks make it easy to teach poorly
  11. Notebooks make it hard to teach well 

Yihui suggests that many of these shortcomings of Notebooks could be addressed through literate programming systems, where the document you edit is plain text (and so easy to edit, manage, and track), and computations are strictly processed from the beginning of the document to the end. I use the R Markdown system myself, and find it a delightful way of combining code, output and graphics in a single document, which can in turn be rendered in a variety of formats including HTML, PDF, Word and even PowerPoint.

Yihui expands on these themes in greater detail in his excellent book (with JJ Allaire and Garrett Grolemund), R Markdown: The Definitive Guide, published by CRC Press. Incidentally, the book itself is a fine example of literate programming; you can find the R Markdown source here, and you can read the book in its entirety here. As Joel mentions in his talk, an automatically-generated document of that length and complexity simply wouldn't be possible with Notebooks.

All that being said, RMarkdown is (for now) a strictly R-based system. Are there equivalent literate programming systems for Python? That's a genuine question — I don't know the Python ecosystem well enough to answer — but if you have suggestions please leave them in the comments.

Yihui Xie: The First Notebook War

Azure preparedness for Hurricane Florence


As Hurricane Florence continues its journey to the mainland, our thoughts are with those in its path. Please stay safe. We’re actively monitoring Azure infrastructure in the region. We at Microsoft have taken all precautions to protect our customers and our people.

Our datacenters (US East, US East 2, and US Gov Virginia) have been reviewed internally and externally to ensure that we are prepared for this weather event. Our onsite teams are prepared to switch to generators if utility power is unavailable or unreliable. All our emergency operating procedures have been reviewed by our team members across the datacenters, and we are ensuring that our personnel have all necessary supplies throughout the event.

As a best practice, all customers should consider their disaster recovery plans and all mission-critical applications should be taking advantage of geo-replication.

Rest assured that Microsoft is focused on the readiness and safety of our teams, as well as our customers’ business interests that rely on our datacenters. 

You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

If there is any change to the situation, we will keep customers informed of Microsoft’s actions through this announcement.

For guidance on disaster recovery best practices, see the references below:

Five habits of highly effective Azure users


There’s a lot you can do with Azure. But whether you’re modernizing your IT environment, building next-generation apps, harnessing the power of artificial intelligence, or deploying any of a million other solutions, there are foundational habits that can help you succeed.

On the Azure team, we spend a lot of time listening to our customers and understanding what makes them successful. We’ve started to compile a handful of routine activities that can help you get the most out of Azure.

Stay on top of proven practice recommendations

Highly effective Azure users understand that optimization is never really finished. To stay on top of the latest proven practices, they regularly review Azure Advisor.

Advisor is a free tool that analyzes your Azure resource configurations and usage and offers recommendations to optimize your workloads for high availability, security, performance, and cost. Examples of popular Advisor recommendations include rightsizing or shutting down underutilized Virtual Machines (VMs) and configuring backup for your VMs.


You can think of Advisor as your single pane of glass for recommendations across Azure. Advisor integrates with companion tools like Azure Security Center and Azure Cost Management, which makes it easier for you to discover and act on all your optimizations. We’re constantly adding more recommendations, and you’re constantly doing new things with Azure, so make it a habit to check Advisor regularly to stay on top of Azure proven practices.

Review your personalized Advisor proven practice recommendations.

Check out Azure Advisor documentation to help you get started.

Stay in control of your resources on the go

We live in an always-on world, and highly effective Azure users recognize that they need to stay in control of their Azure resources anytime, anywhere, not just from their desktop. They use the Azure mobile app to monitor and manage their Azure resources on the go.


With the Azure mobile app, you can check for alerts, review metrics, and take corrective actions to fix common issues, right from your Android or iOS phone or tablet. You can also tap the Cloud Shell button and run your choice of shell environment (PowerShell or Bash) for full access to your Azure services in a command-line experience.

Download the Azure mobile app to stay in control on the go.

Explore the Azure mobile app in this video demo.

Stay informed during issues and maintenance

Highly effective Azure users recognize how important it is to stay informed about Azure maintenance and service issues. They turn to Azure Service Health, a free service that provides personalized alerts and guidance for everything from planned maintenance and service changes to health advisories like service transitions. Service Health can notify you, help you understand the impact to your resources, and keep you updated as the issue is resolved.


Highly effective Azure users configure Service Health to inform their teams of service issues by setting up alerts for the subscriptions, services, and regions that are relevant to them. For example, they might create an alert to:

  • Send an email to a dev team when a resource in a dev/test subscription is impacted.
  • Update ServiceNow or PagerDuty via webhook to alert your on-duty operations team when a resource in production is impacted.
  • Send an SMS to a regional IT operations manager when resources in a given region are impacted.

Stay informed during issues and maintenance by setting up your Azure Service Health alerts.

For more information, read documentation on creating Service Health alerts.

Stay up-to-date with the latest announcements

The pace of change in the cloud is rapid. Highly effective Azure users understand that they need all the help they can get to stay up-to-date with the latest releases, announcements, and innovations.


The Azure updates page on Azure.com is the central place to get all your updates about Azure, from pricing changes to SDK updates. Highly effective Azure users go beyond simply bookmarking the Azure updates page — they subscribe to products and features that are relevant to them, so they’ll receive proactive notifications whenever there’s an announcement or change.

Subscribe for your Azure updates.

Explore product availability by region to find specific release dates for your preferred regions.

Stay engaged with your peers – share and learn

This list of habits of highly effective Azure users is by no means exhaustive. There are many more, which is why this fifth habit is so important. Time and again, we see highly effective Azure users staying engaged with their peers to share good habits they’ve discovered and learn new ones from the community.

There are as many ways to stay engaged with your peers as there are good habits to share and learn. Here are a few ideas to get started:

  • Get involved with the Azure Community.
  • Create or join a practice-sharing community within your own organization.
  • Attend industry conferences and trade groups.
  • Participate in Meet Ups in your area.
  • Reach out to your peers and Azure on Twitter.

Join us at Microsoft Ignite in-person and online

Finally, another great way to share and learn strong Azure habits is to join us at Microsoft Ignite in Orlando, Florida from September 24 to September 28. You’ll gain new skills, meet other experts, and discover the latest technology.

Look for the Five Habits of Highly Effective Azure Users booth on the Ignite Expo floor to get started fast with these habits and become an even more effective Azure user. It’s also never too soon to start planning which Ignite sessions you’d like to attend, either in-person or streaming live and on-demand.

Regardless of how you’re using Azure, start putting these five habits of highly effective Azure users into practice today.
