Top Stories from the Microsoft DevOps Community – 2018.08.31
Guide to a high-performance, powerful R installation
R is an amazing language with 25 years of development behind it, but you can make the most of R with additional components. An IDE makes developing in R more convenient; packages extend R's capabilities; and multi-threaded libraries make computations faster.
Since these additional components aren't included on the official R website, getting the ideal R environment set up can be a bit tricky. Fortunately, there's a handy R installation guide by Mauricio Vargas that explains how to get everything you need set up on Windows, Mac and Ubuntu Linux. On each platform, the guide describes how to install:
- The R language engine
- The RStudio IDE
- The tidyverse suite of packages
- Multi-threaded math libraries (BLAS). On Windows, Mauricio recommends Microsoft R Open ("what made my R and Windows experience amazing"). For Mac and Unix he suggests installing OpenBLAS, but I'll add that Microsoft R Open provides BLAS acceleration on those platforms as well. It's easy to configure RStudio to use Microsoft R Open, too.
Find all the details in the installation guide, linked below.
DataCamp Community: How to install R on Windows, Mac OS X and Ubuntu
Because it’s Friday: The Curiosity Show
This one's all about nostalgia. When I was about 7 or 8 years old back in Australia, my favorite TV show by far was The Curiosity Show, a weekly series hosted by Rob (Morrison) and Deane (Hutton) — they didn't use their last names — featuring segments about science and practical experiments you could do at home. My favorite part was the experiments: I remember pestering my parents for all manner of straws, construction paper, watch batteries and wire to follow along with the TV show (and also the companion books, themed around Earth, Air, Fire and Water). So I was thrilled to find this week that the entire series is now available on YouTube. I still remember the thrill of building my own electric motor:
If you have a young child with a burgeoning interest in science, you might want to check this out. As for me, I always assumed this was a local show (it was filmed in the town where I grew up, and I'm pretty sure my Mum took me to see Rob and Deane do outdoor shows once or twice), so I was pleased to learn that it was also shown in Asia and Europe.
That's all from us for this week. We'll be back after the long weekend — see you then!
Azure.Source – Volume 47
Now generally available
Announcing general availability of Azure IoT Hub’s integration with Azure Event Grid - IoT Hub integration with Azure Event Grid is now generally available, making it even easier to turn the data that billions of devices send to the cloud into action by simplifying the architecture of IoT solutions. You can now easily integrate with modern serverless architectures, such as Azure Functions and Azure Logic Apps, to automate workflows and downstream processes; enable alerting that reacts quickly to the creation, deletion, connection, and disconnection of devices; and eliminate the complexity and expense of polling services while integrating events with third-party applications (such as ticketing, billing, and database systems) using webhooks.
The IoT Show | Azure IoT Hub and Azure Event Grid - The Azure IoT Hub and Event Grid integration is now generally available and comes with support for new types of events coming out of IoT Hub. You can now easily integrate your IoT workflows with other Azure services using the Azure pub-sub platform as demonstrated by Ashita Rastogi, PM in the Azure IoT team.
Microsoft Azure, the cloud for high performance computing - Azure CycleCloud, a tool for creating, managing, operating, and optimizing HPC clusters of any scale in Azure, is now generally available. Now it's easier for everyone to deploy, use, and optimize HPC burst, hybrid, or cloud-only clusters. Microsoft Azure provides a variety of virtual machines enabled with NVIDIA GPUs, which provide outstanding performance for AI and HPC. Azure users and cloud developers now have a way to accelerate their AI and HPC workflows with powerful GPU-optimized software that takes full advantage of supported NVIDIA GPUs on Azure. Together, Azure CycleCloud and NVIDIA GPUs make it easier to integrate, manage, and scale HPC workloads.
Azure Friday
Azure Friday | Azure Automation - Jenny Hunter joins Scott Hanselman to discuss Azure Automation, which has seen several new improvements including support for Python 2, the ability to run against local Linux machines, the ability to run against any Azure VM with no setup (Run Command), and native Azure alert integration.
News and updates
Helping Go developers build better cloud apps faster - Last week, GopherCon took place in Denver. Since making the Azure SDK for Go generally available earlier this year, our teams have been hard at work building Go tools and services for Azure, Visual Studio Code, and Visual Studio Team Services. We continue to strive to help Go developers build better cloud apps, faster, with an expanding range of services covering the cloud-native workflow. We recorded a series of Azure Friday episodes that demonstrate how to build and run Go apps with Azure, Visual Studio Code, and Visual Studio Team Services. Check out the series at Go on Azure.
Announcing Project Athens and GopherSource for the Go community - In February, engineers from Microsoft and other collaborators began working on Project Athens, an open source project released under the MIT license and hosted on GitHub, to create the first proxy server for Go modules. Along with the Athens community, we are currently focusing on improving the modules experience, ensuring that Go modules work seamlessly with all proxy servers, and working to set up a federated, organizationally diverse proxy network. GopherSource is an initiative to strengthen and diversify the Go ecosystem through building up more contributors to upstream Go and key Go projects, such as Project Athens, from within the community. By encouraging the Go community’s own talented developers to contribute to upstream Go, we ensure that the Go ecosystem will meet the needs of the entire community.
A quick take on the State of Hybrid Cloud survey - Microsoft conducted a survey with research firm Kantar TNS in January 2018, asking more than 1700 respondents to chime in. Surveys were collected from IT professionals, developers, and business decision makers to identify how they perceive hybrid cloud, what motivates adoption, and what features they see as most important. Check out this post for a summary infographic of the survey results, and watch the on-demand webinar for a more complete review.
Monitor all Azure Backup protected workloads using Log Analytics - Azure Backup now lets you monitor all of the workloads it protects by leveraging the power of Log Analytics, which enables you to monitor key backup parameters across Recovery Services vaults and subscriptions irrespective of which Azure Backup solution you are using. This solution now covers all workloads protected by Azure Backup, including Azure VMs, SQL in Azure VM backups, System Center Data Protection Manager connected to Azure (DPM-A), Microsoft Azure Backup Server (MABS), and file-folder backup from the Azure Backup agent.
Protecting privacy in Microsoft Azure: GDPR, Azure Policy Updates - Microsoft designed Azure with industry-leading security controls, compliance tools, and privacy policies to safeguard your data in the cloud, including the categories of personal data identified by the GDPR. These also help you comply with other important global and regional privacy standards such as ISO/IEC 27018, EU-U.S. Privacy Shield, EU Model Clauses, HIPAA/HITECH, and HITRUST. Check out the video in this post to learn how you can accelerate your move to the cloud by achieving compliance more readily, allowing you to enable privacy-sensitive cloud scenarios, such as financial and health services, with confidence.
Additional news and updates
- Cross-subscription disaster recovery for Azure virtual machines
- OMS portal moving to Azure portal
- Azure API Management update August 28
- Updates to file support and Windows services tracking in Change Tracking and Inventory
- New log experience in the Azure portal
- Troubleshoot connectivity issues in a virtual network
- Azure Security Center update August 29
Azure tips & tricks
How to generate SSH public key to log into Linux VM | Azure Tips and Tricks - Learn how to generate SSH public keys to log into a Linux VM with Cloud Shell and BASH on Windows 10. Watch how you can automatically log into a VM without having to put in a password if you SSH into the server.
How the Azure Cloud Shell uses storage | Azure Tips and Tricks - We'll demystify storage in the Azure Cloud Shell. When you create an Azure storage account, Cloud Shell uses it to persist any information you might need when interacting with your resources.
Technical content and training
Turn your whiteboard sketches to working code in seconds with Sketch2Code - As covered in an earlier episode of The AI Show, Sketch2Code is a web-based solution that uses AI to transform a picture of a hand-drawn user interface design into valid HTML markup. Check out this post to understand the process of transforming a hand-drawn image into HTML using Sketch2Code in more detail. You can find the code, solution development process, and all other details on GitHub. Sketch2Code is developed in collaboration with Kabel and Spike Techniques.
Monitoring environmental conditions near underwater datacenters using Deep Learning - Project Natick seeks to understand the benefits and difficulties in deploying subsea datacenters worldwide; it is the world's first deployed underwater datacenter and it was designed with an emphasis on sustainability. Phase 2 extends the research accomplished in Phase 1 by deploying a full-scale datacenter module in the North Sea, powered by renewable energy. Project Natick uses AI to monitor the servers and other equipment for signs of failure and to identify any correlations between the environment and server longevity.
Sharing a self-hosted Integration Runtime infrastructure with multiple Data Factories - The Integration Runtime is the compute infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments. If you need to perform data integration and orchestration securely in a private network environment, which does not have a direct line-of-sight from the public cloud environment, you can install a self-hosted Integration Runtime on premises behind your corporate firewall, or inside a virtual private network. Thanks to self-hosted Integration Runtime sharing, you can share the same self-hosted Integration Runtime infrastructure across data factories, which enables you to reuse the same highly available and scalable self-hosted Integration Runtime infrastructure from different data factories within the same Azure Active Directory tenant.
Azure Content Spotlight – Migrate Apps to Azure - This week's spotlight shines a light on a set of resources that can help software architects and developers gain insights into the different services and resources Azure offers for migrating their applications to the cloud, including patterns and best practices.
The Azure Podcast
The Azure Podcast: Episode 244 - Azure Container Instances - Justin Luk, a PM on the Azure Compute team, breaks down ACI for us, giving advice on why and when you should consider this new service for your container workloads.
Events
Accelerating Artificial Intelligence (AI) in healthcare using Microsoft Azure blueprints - To rapidly acquire new capabilities (like AI) and implement new solutions, healthcare IT and developers can now take advantage of industry-specific Azure Blueprints. Blueprints include resources such as example code, test data, security, and compliance support. Using these blueprints, your healthcare IT team can quickly implement a new type of solution in your secure Azure cloud. With a blueprint, you get 50 to 90 percent of the end solution already in place, so you can devote your efforts and resources to customizing the blueprint solution. You can learn more by watching the Accelerating Artificial Intelligence (AI) in Healthcare Using Microsoft Azure Blueprints Webcast with David Starr, Principal Systems Architect, Azure.
Microsoft Azure Data welcomes attendees to VLDB 2018 - While gophers were in Denver for GopherCon, the data crowd ventured to Rio de Janeiro for the 44th International Conference on Very Large Data Bases (VLDB) 2018. VLDB is a premier annual international forum for data management and database researchers, vendors, practitioners, application developers, and users. Rohan Kumar, Corporate Vice President, Azure Data, took this opportunity to catalog the many services and ongoing work in the Azure Data group at Microsoft. As Kumar states, "The emergence of the cloud and edge as the new frontiers for computing, and thus data management, is an exciting direction—data is now dispersed within and beyond the enterprise, on-premises, on-cloud, and on edge devices, and we must enable intelligent analysis, transactions, and responsible governance for all data everywhere, from the moment it is created to the moment it is deleted, through the entire life-cycle of ingestion, updates, exploration, data prep, analysis, serving, and archival."
Tuesdays with Corey
Azure Site Recovery between Regions - Corey Sanders, Corporate VP, Microsoft Azure Compute team, sat down with Sujay Talasila, Senior PM Manager on the Azure Compute team, to talk about new changes to Azure Site Recovery.
Customers and partners
Azure Marketplace new offers: July 16-31 - The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. In the second half of July we published 50 new offers, including: Akamai Enterprise Application Access, Fedora Linux, Carbonite's Migrate to Azure Stack - Windows 1Tb (4-Wks), and more.
Securing Kubernetes workloads in hybrid settings with Aporeto - Aporeto, a Zero Trust security solution for microservices, containers and cloud, provides centralized visibility and security for applications distributed on Azure Kubernetes Service (AKS) and private clouds. Fundamental to Aporeto’s approach is the principle that everything in an application is accessible to everyone and could be compromised at any time. Because Aporeto untethers application security from the network and infrastructure, one key benefit of Aporeto’s approach for protecting your containers, microservices and cloud applications is that you can have a consistent security approach even in a hybrid or multi-cloud setting.
Industries
Keeping shelves stocked and consumers happy (and shopping!) - Mariya Zorotovich, Worldwide Retail and CG Industry Leader, covers how advanced analytics can help retailers solve out-of-stock challenges and gain an estimated 5-10 percent increase in sales. To determine how to solve out-of-stocks with a SKU assortment optimization solution, read the Inventory optimization through SKU Assortment + machine learning use case. This document provides options for solving the out-of-stock challenge and a solution overview leveraging Azure services, including an approach by Neal Analytics, Microsoft 2017 Global Partner of the Year for business analytics.
Extracting actionable insights from IoT data to drive more efficient manufacturing - Thanks to the explosion of IoT we now have millions of devices, machines, products, and assets connected and streaming terabytes of data. But connecting to devices, ingesting and storing their sensor data is just the first step. The whole point of collecting this data is to extract actionable insights — insights that will trigger some sort of action that will result in business value. To take the first actionable step in using an IoT solution, we just released the Extracting insights from IoT data overview. Here you can learn more about the value of IoT data, the anatomy of an IoT solution, available options for IoT data analytics, how to act on the insights you extract, and build vs. buy considerations.
Four trends powering healthcare AI, and what you can do about it - A tsunami of healthcare data, healthcare professional burnout, redlining healthcare costs, and ransomware and breaches that disrupt healthcare and erode patient trust. Download the new, free AI in Healthcare guide to find out more about how these trends are driving a new set of transformational use cases and workloads powered by AI and delivered from the Microsoft Azure cloud. Don't start from scratch. See how you can accelerate your healthcare AI initiative with free blueprints that include not only documentation and security and compliance support, but also example code, test data, and automated deployment to rapidly establish an AI solution reference starting point that you can then customize to achieve your target AI solution faster. Key partners are also available to help you further accelerate your AI initiatives.
Two seconds to take a bite out of mobile bank fraud with Artificial Intelligence - As mobile banking grows, so does the one aspect of it that can be wrenching for customers and banks: mobile fraud. Artificial Intelligence (AI) models have the potential to dramatically improve fraud detection rates and detection times. One approach is described in the Mobile bank fraud solution guide. It’s a behavioral-based AI approach and can be much more responsive to changing fraud patterns than rules-based or other approaches. The guide explains the logic and concepts and gets you to the next stage in implementing a mobile bank fraud detection solution.
A Cloud Guru's Azure This Week
Azure This Week - 31 August 2018 - This week: Dean Bryen takes a look at the introduction of Google Identities in Azure AD B2B, how you can turn your whiteboard wireframes into HTML with the Sketch2Code tool, and the public preview of confidence scores for alerts in Azure Security Center.
Transparent data encryption or always encrypted?
How to choose the right encryption technology for Azure SQL Database or SQL Server
Transparent Data Encryption (TDE) and Always Encrypted are two different encryption technologies offered by SQL Server and Azure SQL Database, each protecting data from unauthorized access in different scenarios. They are complementary features, and this blog post gives a side-by-side comparison to help you decide which technology to choose and how to combine them to provide a layered security approach.
Transparent Data Encryption
TDE is intended to add a layer of security that protects data at rest from offline access to raw files or backups. Common scenarios include datacenter theft or unsecured disposal of hardware or media such as disk drives and backup tapes. For a deeper look at how TDE protects against malicious parties trying to recover stolen database files (data and log files, snapshots, copies, or backups), and for a review of TDE best practices, see Feature Spotlight: Transparent Data Encryption (TDE).
Enabling TDE on databases helps you comply with many laws, regulations, and security guidelines established across various industries that require data to be encrypted at rest. Unless data stored in a SQL database has no protection requirements at all, there should be no reason to disable TDE. It is, however, important to recognize that TDE only adds one layer of protection for data at rest; remaining risks must be addressed at the OS file system and hardware layer (see the BitLocker documentation to learn more).
TDE encrypts the entire database using an AES encryption algorithm, which doesn’t require application developers to make any changes to existing applications. TDE performs real-time I/O encryption and decryption of the data and log files. Once the database is loaded into memory, the data is accessible to SQL Server administrators (sysadmin, db_owner) and Azure SQL Database (cloud) administrators, and it can be used by all roles and applications that have access to the database. If a database contains sensitive data in specific columns that needs to be protected from administrator roles and remain encrypted in memory, Always Encrypted should be evaluated in addition to TDE.
TDE offers two options for encryption key management: service-managed keys and customer-managed keys. For service-managed keys, the TDE protector is a certificate stored in the master database of the server. For customer-managed keys, the TDE protector is an asymmetric key protected by an EKM module or Azure Key Vault. When using Azure SQL Database TDE with Bring Your Own Key (BYOK), the TDE protector never leaves the key vault, but Azure SQL Database needs to access the key vault to decrypt and encrypt the Database Encryption Key (DEK) used to decrypt and encrypt the data. After the encrypted DEK is sent to the key vault for decryption, the unencrypted DEK is kept in memory until the TDE protector in the key vault is deleted or access to SQL Database has been revoked; in that case, the database goes offline within 24 hours and the DEK is removed from memory.
Always Encrypted
Always Encrypted is a feature designed to protect sensitive data stored in Azure SQL Database or SQL Server databases from access by database administrators (e.g. the members of the SQL Server sysadmin or db_owner roles), administrators of machines hosting SQL Server instances, and Azure SQL Database (cloud) administrators. Data stored in the database is protected even if the entire machine is compromised, for example by malware. Always Encrypted leverages client-side encryption: a database driver inside an application transparently encrypts data before sending it to the database, and similarly decrypts encrypted data retrieved in query results.
With Always Encrypted, cryptographic operations on the client-side use keys that are never revealed to the Database Engine (SQL Database or SQL Server). There are two types of keys in Always Encrypted:
- Column encryption keys are used to encrypt data in the database. These keys are stored in the database in the encrypted form (never in plaintext).
- Column master keys are used to encrypt column encryption keys. These keys are stored in an external key store, such as Windows Certificate Store, Azure Key Vault or hardware security modules. For keys stored in Azure Key Vault, only the client application has access to the keys, but not the database, unlike TDE.
The unique security benefit of Always Encrypted is the protection of data “in use”: the data used in computations remains encrypted in the memory of the SQL Server process. As a result, Always Encrypted protects the data from attacks that involve scanning the memory of the SQL Server process or extracting the data from a memory dump file.
By protecting data from high-privilege users who have no “need-to-know,” Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access). This enables customers to confidently store sensitive data in the cloud, delegate on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
Unlike TDE, Always Encrypted is only partially transparent to applications. Although the client driver transparently encrypts and decrypts data, the application may need to be changed to adhere to the requirements and limitations of Always Encrypted. For example, Always Encrypted supports only very limited operations on encrypted database columns: columns using deterministic encryption allow equality comparisons (point lookups, equality joins, grouping), while columns using randomized encryption support no computations at all. This is one of the reasons why we recommend you use Always Encrypted to protect truly sensitive data in selected database columns.
One thing to call out is that by encrypting data on the client side, Always Encrypted also protects the data stored in encrypted columns at rest and in transit. However, unless your goal is to protect sensitive data in use, TDE is the recommended choice for encryption at rest, and we recommend TLS for protecting data in transit. In fact, it is often advised to use Always Encrypted, TDE, and TLS together:
- TDE as the first line of defense (and to meet common compliance requirements) to encrypt the entire database at rest.
- TLS to protect all traffic to the database.
- Always Encrypted to protect highly sensitive data from high-privilege users and malware in the database environment.
Following is a side-by-side comparison of the capabilities discussed above:
| | Always Encrypted | TDE |
|---|---|---|
| SQL Server version | SQL Server 2016 and above; Azure SQL Database | SQL Server 2008 and above; Azure SQL Database |
| Requires SQL Server Enterprise Edition | No (starting with SQL Server 2016 SP1) | Yes |
| Free in Azure SQL Database | Yes | Yes |
| Protects data at rest | Yes | Yes |
| Protects data in use | Yes | No |
| Protects data from SQL and Azure (cloud) administrators | Yes | No |
| Data is encrypted/decrypted on the client side | Yes | No |
| Data is encrypted/decrypted on the server side | No | Yes |
| Encrypts at the column level | Yes | No (encrypts the entire database) |
| Transparent to application | Partially | Yes |
| Encryption options | Yes | No |
| Encryption key management | Customer-managed keys | Service- or customer-managed keys |
| Protects keys in use | Yes | No |
| Driver required | Yes | No |
More Information:
- Editions and supported features of SQL Server 2016
- For a Channel 9 presentation that includes Always Encrypted, see Keeping Sensitive Data Secure with Always Encrypted
- SQLSweet16!, Episode 1: Backup Compression for TDE-enabled Databases
Save money on actuarial compute by retiring your on-premises HPC grids
No insurance company should keep on-premises compute grids for actuarial computing. In the past, resistance to the cloud went along these lines: the cloud lacks data security, the cloud is expensive, and no one has experience with it. But those arguments are out of date. I have worked in, and supported, compute grids at many different insurance companies. Just before joining Microsoft, I led a project to move workloads to Azure and to decommission on-premises grids globally. At this point, all insurance companies see increasing demand from growth in the number of policies processed and from new regulations that require changes to the actuarial and accounting systems. IFRS 17 requires changes to workflows, reporting, and controls throughout the actuarial and accounting process. Now is the time to move to a cloud-based solution on Azure.
Why wait to move to a cloud-based compute solution?
Over the years, I’ve worked in IT departments supporting actuaries, and in an actuarial department working with IT teams. I have seen three main blockers when moving to an all cloud-based solution. It always starts with the Business Information Security Officer (BISO), who has security and business continuity questions. Then the accounting, legal, and procurement departments need to be satisfied. Finally, the mid-level IT manager becomes concerned; it is not uncommon to hear “I built this grid, and I want to keep it.” These blockers can be addressed quickly, and your company can realize the cost savings and increased performance of an all cloud-based compute environment.
Confidence in security and continuity
Microsoft leads the industry in meeting compliance requirements. Azure satisfies a broad set of international and industry-specific compliance standards, such as General Data Protection Regulation (GDPR), ISO 27001, HIPAA, FedRAMP, SOC 1 and SOC 2, as well as country-specific standards including Australia IRAP, UK G-Cloud, and Singapore MTCS.
Rigorous third-party audits, such as those done by the British Standards Institute, verify Azure’s adherence to the strict security controls these standards mandate. For more information, visit these links where your security team can get more documentation.
Certifications such as ISO 22301 highlight Azure's business continuity and disaster recovery preparedness, and provide assurance that Azure can be trusted with mission-critical applications through an extensive, independent third-party audit of Azure’s business continuity. Learn more about Azure’s ISO 22301 certification, and download a copy of the certification.
Nothing motivates accountants as much as expense reduction
It has been my experience that moving from a fixed-size on-premises actuarial compute grid to an on-demand, variable-size cloud compute solution can reduce costs significantly. Consider a fixed 10,000-core grid running continuously for quarterly runs: it can be replaced with an Azure 10,000-core grid that runs for only 15 days a quarter. According to the Microsoft Azure TCO calculator, in some cases that could result in as much as 80 percent savings for the same actuarial compute runs. The key is to decommission old equipment, even if it is still in use, as soon as possible. Even if the current grid is not at the end of its contract, the net savings can still be significant.
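To put rough numbers on that (assuming a 90-day quarter): 10,000 cores running around the clock consume about 10,000 × 90 × 24 ≈ 21.6 million core-hours per quarter, while the same 10,000 cores running for 15 days consume about 3.6 million core-hours, roughly one sixth of the compute, before even counting power, datacenter, and hardware-refresh costs.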
Time to move
Many years ago, the cloud felt new to IT professionals and support staff at insurance companies. But now, with so many business applications running as cloud services such as Office 365, it is the new normal. Resistance to turning off and getting rid of servers is natural: the hardware has been amortized and the software is already installed. But all equipment and software becomes less cost-efficient over time.
For many companies, the simplest solution is to use a SaaS offering from an existing vendor running on the Azure cloud. The change is as simple as copying the existing data to the SaaS location and then reconnecting the input and output data flows. Some providers even have a transition utility that will do this and verify that the results did not change because of the conversion. The only real issue occurs if the current models are running on old and unsupported versions of the software; in that case, the models will need to be upgraded before the move. How you move depends on your business model and requirements, but the time to consider the options is now.
Next steps
Read the Actuarial risk compute and modeling overview to understand your options for moving your compute workload to Azure.
I post regularly about new developments on social media. If you would like to follow me, you can find me on LinkedIn and Twitter.
Finish your insurance actuarial modeling in hours, not days
Because of existing and upcoming regulations, insurers perform quite a bit of analysis of their assets and liabilities. Actuaries need time to review and correct results before reviewing the reports with regulators. Today, it is common for quarterly reporting to require thousands of hours of compute time. Companies that offer variable annuity products must follow Actuarial Guideline XLIII, which requires several compute-intensive tasks, including nested stochastic modeling. Solvency II requires quite a bit of computational analysis to understand the Solvency Capital Requirement and the Minimum Capital Requirement. International Financial Reporting Standard 17 requires analysis of each policy, reviews of overall profitability, and more. Actuarial departments everywhere work to make sure that their financial and other models produce results that can be used to evaluate their business for regulatory and internal needs.
With all this reporting, actuaries get pinched for time. They need time for things like:
- Development: Actuaries code the models in their favorite software or in custom solutions. Anything they can do to reduce the cycle of Code-Test-Review helps deliver the actuarial results sooner.
- Data preparation: Much of the source data is initially entered by hand. Errors need to be identified and fixed. If the errors can be found and fixed more quickly, the actuaries can be more confident in their results.
- Result verification (and re-run when needed): Once the math and other algorithms have run, the actuaries review the resulting reports. Actuaries regularly review intermediate results during a longer modeling run. A few results will look wrong, necessitating review of the corresponding inputs and intermediate results. Upon review, a few inputs get fixed and parts of the model are run again.
Most actuarial packages on the market support in-house grids with the ability to either extend the grid to Azure, or to use Azure as the primary environment. As part of the move to Azure, actuarial teams review their models and processes to maximize the scale benefits of the cloud. For example, to reduce the amount of clock time to run a model, the insurer will continuously update data stored in Azure. The insurer will also work with their software vendors to review the model for changes that would allow more of the tasks to run in parallel. When running thousands of scenarios, it is frequently possible to run the data as thousands of independent processes with an aggregation step performed at the end.
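As a toy illustration of that last point, the sketch below (hypothetical C++ code, not any particular actuarial package) fans independent scenario valuations out as parallel tasks and aggregates the results at the end; a real deployment would distribute this work across a grid of Azure VMs rather than local threads.

#include <future>
#include <iostream>
#include <vector>

// Hypothetical stand-in for one stochastic scenario valuation.
double run_scenario(int scenario_id) {
    return 1000.0 + scenario_id * 0.01; // placeholder for a real model result
}

int main() {
    const int scenario_count = 100; // a real run might have thousands
    std::vector<std::future<double>> runs;
    runs.reserve(scenario_count);

    // Scenarios are independent, so fan the work out (a grid scheduler would
    // spread these across many machines rather than local threads)...
    for (int i = 0; i < scenario_count; ++i) {
        runs.push_back(std::async(std::launch::async, run_scenario, i));
    }

    // ...then aggregate once everything has finished.
    double total = 0.0;
    for (auto& f : runs) {
        total += f.get();
    }
    std::cout << "Mean scenario value: " << total / scenario_count << '\n';
}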
Recommended next steps
To get more time as an actuary to review results and build models, read the Actuarial risk analysis and financial modeling solution guide. The solution guide will teach you how to prepare your data, run your models, and review your results. The advice works with both your purchased software and the software that your development team wrote.
Additional resources
Avere vFXT for Microsoft Azure now in public preview
The Avere vFXT for Azure delivers new possibilities for running high-performance computing (HPC) workloads in the cloud. With this new software solution, you can now easily connect on-premises infrastructure to Azure compute, or "lift and shift" file-based workloads to run against Azure Blob storage in the same manner.
Organizations have long struggled with how they can leverage the cloud for high-performance, file-based workloads. Moving petabytes of data to the cloud is not a quick or simple operation, and organizational buy-in for wholesale moves of these workloads can be challenging. As a result, the majority of these often mission-critical applications have remained in local data centers even as demands for more and faster compute power continue — a problem easily solved with access to limitless cloud compute resources.
The new Avere vFXT for Azure solves the problem of running these enterprise HPC applications in the cloud while leveraging data stored on-premises or in Azure Blob. The Avere vFXT facilitates Edge computing as a virtual high-performance file caching appliance that runs in Azure Compute. With Avere, these critical workloads can access thousands of cores on-demand, increasing business agility by processing smarter and faster without adding extra cost.
For organizations planning a hybrid infrastructure with on-premises storage and cloud computing, HPC applications can “burst” into Azure using data stored in NAS devices and spin up virtual CPUs on an as-needed basis. The data set itself never completely moves into the cloud. The requested bytes are temporarily cached in an Avere cluster while processing.
For lift and shift deployments, organizations can take advantage of minimizing operational and capital costs by using Azure Blob as primary storage. The Avere vFXT tiers data stored in Azure Blob to an SSD-based cache in Azure Compute while the job is running. Once processing is complete, the data is written back into Blob. With this flexibility to support hybrid deployments and edge computing, the Avere vFXT becomes an important part of a cloud migration strategy for file-based applications in HPC environments.
Many industries can take advantage of the Avere vFXT for read-heavy, cacheable workloads. From media rendering in post-production studios to genomics processing in pharma or research, Avere hides latency between compute and storage to optimize productivity while providing access to Azure’s unlimited capacity. In manufacturing, energy, and government, companies leverage Avere to support ingest of large amounts of data for timely analytics and research. In hedge funds and investment banking, systems engineers make it easy for analysts to decide on strategies with efficient backtesting and quant analytics in the cloud. After years of working with competing cloud providers, we are extremely excited to bring the Avere vFXT to the Microsoft Azure portfolio.
We’re starting now with the public preview of Avere vFXT for Azure, and we’ve set our sights on general availability this fall. The tips below will help you match Avere to the right use cases.
- High-performance file caching: Avere vFXTs will support several common brands of network-attached storage (NAS) as well as the Azure Blob API.
- Large core counts: Compute farms with roughly 1,000 to 40,000 cores work well with Avere vFXTs. These farms usually face fluctuating demand as well as growth.
- Read-heavy workloads: Workloads running with Avere vFXT clusters are read-heavy with infrequent writes (at least 70:30) and demand high performance from clustered network-attached storage.
- Hybrid or all-in Azure support: Organizations in any stage of a cloud strategy can take advantage of Azure for HPC by using the Avere vFXT.
- Multi-cloud support: Many customers request solutions that will support multiple cloud providers. The Avere vFXT is also available for both Amazon Web Services and Google Cloud Platform. However, when used in Azure, there is no charge for the Avere vFXT nodes!
Just the beginning
Since Avere Systems joined Microsoft earlier this year, we are well on our way to delivering the Avere vFXT for Azure as a generally available Azure Marketplace product this year. And this is just the beginning! Customers will continue to see exciting enhancements and developments that provide even better support of high-performance hybrid cloud environments. The Avere vFXT is just one more way Microsoft is deepening the commitment to making Azure the cloud of choice for HPC.
Getting started
To join the preview, visit the Avere vFXT for Azure public preview page, or simply find Avere in the Azure Marketplace. Then get started by reviewing the installation documentation and launching an Avere vFXT cluster.
Pricing
The Avere vFXT for Azure has no charge associated with licensing. However, costs associated with consumption do apply at normal rates.
Share your experiences!
We would love to hear about your public preview experiences! Email us at averevfxt@microsoft.com with suggestions, comments, and stories. You can also share your thoughts in the Azure Storage Advisors forum.
Anti-money laundering – Microsoft Azure helping banks reduce false positives
This blog post was created in partnership with André Burrell, who is the Banking & Capital Markets Strategy Leader on the Worldwide Industry team at Microsoft.
One of the biggest compliance challenges facing financial institutions today is the high rate of false positives being generated by their Anti-Money Laundering (AML) Transactions Monitoring Systems (TMS). These systems are designed to identify suspicious transactions that may involve illicit proceeds or legitimate proceeds used for illegal purposes. The predominant TMS technologies are antiquated batch rule-based systems that have not fundamentally changed since the late ‘90s, and have not kept pace with increasing regulatory expectations and continually evolving money laundering techniques. More specifically, the technology lacks:
- The ability to assess a transaction in the context of the customer, and of similar customers.
- The agility to react and adapt to rapidly evolving patterns used by money launderers or terrorists, with minimal human intervention.
- The ability to clearly understand why a transaction is identified as possibly suspicious.
Additionally, banks continue to struggle with a 360-degree view of their customers: they have difficulty accessing the rich transactional data sources held in multiple data silos, aka “dark data”. Without that 360-degree view, banks are unable to quickly, easily, and reliably conduct deep data analysis, e.g. identifying truly suspicious activity, assessing customer profitability, or spotting negative trends early. And they cannot establish compliance and risk monitoring systems that adapt to changing regulatory requirements.
Outrageous levels of regulation and cost
What’s been the main driver for even having this discussion? Regulation. The heavy legal requirements for AML arise from the Bank Secrecy Act (BSA), as modified by numerous other regulations such as the USA PATRIOT Act. These regulations are intended to stop the conversion of illegal proceeds (or value) into seemingly legitimate money, or to prevent the use of legitimately obtained value to fund illicit activity (money laundering). The scope of what must be identified is broad and ever growing. Banks are required to investigate and report suspected money laundering, terrorist financing, or any violation of law to the Financial Crimes Enforcement Network (FinCEN) and other agencies. Unfortunately, the antiquated TMS technology typically generates false positive alerts in the range of 95 to 98 percent. Why should such an inefficient use of resources continue to be accepted?
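To put that in concrete terms: at a 97 percent false positive rate, a TMS that raises 1,000 alerts in a month forces investigators to work through all 1,000 of them, only to find roughly 30 that genuinely merit a report.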
This regulatory blunt-force approach has resulted in massive inefficiency and high operating costs for banks across the industry. It is commonly reported that the costs banks incur for transaction monitoring and operations are escalating due to inefficiencies and resource capacity issues within AML departments. AML can be the most costly area of compliance in a bank and one of the most expensive operational areas. On the other hand, the cost of non-compliance can easily exceed the cost of operating an AML department, so banks are left operating inefficient and costly systems. This makes TMS technology ripe for disruption through the application of machine learning (ML) and artificial intelligence (AI).
Over time, as algorithms improve, the false positive rate will continue to decline, driving significant cost reductions to operations. ML and AI can quickly flag changes in patterns of activity, whether caused by new product offerings or money laundering. Such savings and improvements in the ability to meet increasing regulatory demands should provide banks with the incentive to move AML regulatory workloads to the cloud and accrue additional value. It should also make banks more interested in moving other workloads to the cloud.
An end to the annuity of the dominant players
It's not all doom and gloom for banks. These seemingly insurmountable problems can be addressed with cloud technologies. We believe this will cause a huge disruption in the existing AML risk and compliance industry. Lots of players feel they have the answer to reducing false positives, but we’ll let the future speak for itself.
So, where can you get started to cut through the overwhelming amount of information, and to provide a path for your company to start? Identity proofing with the behavioral biometrics solution from BioCatch can be the place to start. By mapping these behaviors in real time throughout the initiation process, behavioral biometrics picks up on suspicious behaviors, alerts the company that possible fraud is in the works, and provides immediate recourse to prevent a potentially fraudulent application from going through. Stopping fraud means shutting cybercriminals down at the application stage. Identity proofing with behavioral biometrics eliminates our overreliance on easily acquired personal credentials.
Also, stay tuned for the next episode in our blog series that will provide more robust discussions about solving for financial crime. In the meantime, if you would like to engage with the authors on this topic, feel free to reach out via the comment section.
Describe, diagnose, and predict with IoT Analytics
In a previous blog article, Extracting actionable insights from IoT data, I discussed the value of collecting IoT data from machines, assets, and products. The whole point of collecting IoT data is to extract actionable insights: insights that trigger some sort of action that results in business value, such as optimized factory operations, improved product quality, better understanding of customer demand, new sources of revenue, and improved customer experience.
In this blog, I discuss the extraction of value out of IoT data by focusing on the analytics part of the story.
Generally speaking, data analytics comes in four types (Figure 1):
- Descriptive, to answer the question: What’s happening?
- Diagnostic, to answer the question: Why is it happening?
- Predictive, to answer the question: What will happen?
- Prescriptive, to answer the question: What actions should we take?
Figure 1: IoT Analytics Flavors
Since IoT analytics is a subcase of data analytics, these types map nicely onto IoT analytics as follows:
| Data Analytics Type | Focus | IoT Application | Representative Questions | Implementation |
|---|---|---|---|---|
| Descriptive | What’s happening? | Monitor the status of machines, devices, products, and assets. Assess if things are going according to plan, or alert people if anomalies arise. | What’s the throughput and utilization of this machine? Are there any anomalies that require immediate attention? How much energy is this machine consuming? How many parts are we producing with this tool? How are customers using our products? Where are my assets? | Usually in the form of dashboards that display current and historical sensor data, statistics, KPIs, and alerts. |
| Diagnostic | Why is something happening? | Examine data from multiple angles to understand why something is happening. The goal is to find the root cause of a problem in order to fix or improve something (a process, a service, or a product). | Why is the OEE of this machine so low? Why is this machine producing more defective parts than the others? Why is this machine consuming so much energy? Why are we producing so few parts with this tool? Why are we getting so many returns of this product? Why are we getting so many product returns from our European customers? | Diagnostic capabilities are often extensions to dashboards that allow users to drill into the data, pivot it in multiple ways, compare it, and visualize trends and correlations in an ad-hoc way. The users performing diagnostics are normally domain experts (experts on the specific machine, process, device, or product) rather than pure data scientists; data scientists play a supporting role enabling domain experts to extract insights. |
| Predictive | What will happen? | Calculate the probability that something will happen within a specific timeframe, based on historical data. The goal is to proactively take corrective action before something (usually bad) happens, mitigate risk, or identify opportunities to gain a competitive advantage. | What’s the probability of this machine failing in the next 24 hours? What is the expected remaining useful life of this tool? When should I plan service for this machine? What will be the demand for this product or feature? | Usually implemented through machine learning models that are trained with historical data and deployed to the cloud so that they can be used by end-user applications. |
| Prescriptive | What actions should I take? | Recommend actions as a result of a diagnosis or a prediction, or at least provide some visibility into the reasoning behind a diagnosis or prediction. Often the recommendations are about how to fix or optimize something. | This machine is 90 percent likely to fail in the next 24 hours. What should I do to prevent it? The OEE of this machine is low. What can I do to improve it? This machine is producing too many defective parts. What should I do to avoid this? This design is causing many manufacturing issues. How can I improve the design to reduce them? | Prescriptive analytics is still in its early stages. It is often an extension of predictive analytics, where the user is presented with the steps a machine learning model took to reach a conclusion or prediction. While this is not quite a recommendation, it may provide enough insight into the reasoning of the ML algorithm to hint at a recommendation. |
The role of Machine Learning in IoT Analytics
Machine learning (ML) is playing an increasingly important role in IoT analytics. One could argue that the recent emergence of real-world applications of ML in manufacturing is thanks to the explosion of data, most of which we can attribute to the IoT. Other factors include the availability of better ML algorithms, and the compute power of the cloud.
A deeper discussion of ML for manufacturing will be the focus of future blogs. For now, the important thing is that there are many ML algorithms to choose from and selecting the right one depends on:
- What are you trying to do? (Predict values, predict categories, find unusual data points, etc.)
- What is the specific scenario?
- What is the desired accuracy?
- What is the desired training time?
- How linear is the data?
- How many parameters are there?
- How many features are there?
And more; see the Azure Machine Learning Algorithm Cheat Sheet to get a better idea.
Even for a specific use case, you may have multiple algorithm choices. For example, predictive maintenance:
| If you want to… | You would likely choose the following ML algorithm |
|---|---|
| Predict the probability that a piece of equipment will fail within a future time period. | Binary classification |
| Compute the remaining useful life of an asset. | Regression |
| Predict a range of time to failure for an asset. Calculate the likelihood of failure in a future period due to one of multiple root causes. Predict the most likely root cause of a given failure. | Multi-class classification |
One important point is that ML is not only for predictive and prescriptive analytics, although it is commonly associated with those types. ML can also be used for descriptive and diagnostic analytics. For example, raising alerts when something is abnormal with a machine is a descriptive analytics scenario, and it can be enabled with anomaly detection ML algorithms.
IoT Analytics solution overview
At a high level, an IoT analytics solution contains the following components (Figure 2):
- Data ingestion: where we connect to the devices or field gateways to collect the data records they stream. In IoT, data records can be events, messages, alerts, or telemetry (sensor measurements). They are usually timestamped and may come at different frequencies. They may come in different formats and use different communication protocols. This data is ingested into the cloud through a cloud gateway.
- Stream processing: in some applications, data is analyzed live as it is streamed. This is usually done with a combination of visualization, live queries, and actions. For example, a dashboard can display the temperature of a device over a period of time. Live queries may constantly check that the average temperature over the last 5 minutes doesn’t exceed 90 F, and these queries can execute some action (for example, shut down the machine and/or send an alert) if it does (a minimal code sketch of this kind of check appears after Figure 2 below). Stream processing data may or may not be stored.
- Data storage: where we store the IoT data records being ingested. IoT solutions can generate significant amounts of data depending on how many devices are in the solution, how often they send data, and the size of payload in the data records sent from devices. Data is often time stamped and required to be stored where it can be accessed for further processing and used in visualization and reporting. It is common to have IoT data split into “warm” and “cold” data stores:
- Warm storage: for data that needs to be available for reporting and visualization immediately. Warm storage holds recent data that needs to be accessed with low latency, high throughput, and full query capabilities.
- Cold storage: for data that is stored longer term and used for batch processing. Data stored in cold storage is typically historical data that needs to be accessed in the future for reporting, analysis, training machine learning models, etc. Most often the cold storage database technology chosen will be cheaper in cost but offer fewer query and reporting features than the warm database solution.
A common implementation for storage is to keep a recent range (e.g. the last day, week, or month) of telemetry data in warm storage and to store historical data in cold storage. With this implementation, the application has access to the most recent data and can quickly observe recent telemetry data and trends. Retrieving historical information for devices can be accomplished using cold storage, generally with higher latency than if the data were in warm storage.
- Data transformation: involves manipulation or aggregation of the telemetry stream either before or after it is received by the cloud gateway service. Manipulation can include protocol transformation, combining data points, and more.
- Machine learning: highly structured data stored in cold storage is normally used to train the ML models. Once created, the ML models are deployed and can be consumed (used) by applications.
- User interface & reporting tools: the end-user applications used to visualize and analyze the IoT data.
- Integrations to other systems: the insight extracted from the IoT solution will normally result in some sort of action. Often, this action takes place in an external line-of-business system such as CRM, PLM, or ERP. For example, when a predictive maintenance machine learning model predicts that a machine will fail, it can trigger an action in the CRM system to schedule its service.
Figure 2: Components of an IoT Analytics Solution
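To make the live-query idea from the stream-processing step concrete, here is a minimal, self-contained C++ sketch of a five-minute rolling-average temperature check. It is illustrative only and not tied to any particular Azure service; in practice this logic would usually be expressed as a streaming query in a stream-processing engine.

#include <chrono>
#include <deque>
#include <iostream>
#include <numeric>

using Clock = std::chrono::steady_clock;

struct Reading {
    Clock::time_point when;
    double temperature_f;
};

class RollingAverageAlert {
    std::deque<Reading> window_;      // readings from the last five minutes
    std::chrono::minutes span_{5};
    double threshold_f_;
public:
    explicit RollingAverageAlert(double threshold_f) : threshold_f_(threshold_f) {}

    // Feed one telemetry record; returns true if the alert condition fires.
    bool on_reading(double temperature_f, Clock::time_point now = Clock::now()) {
        window_.push_back({now, temperature_f});
        while (!window_.empty() && now - window_.front().when > span_) {
            window_.pop_front();      // evict readings older than the window
        }
        double sum = std::accumulate(window_.begin(), window_.end(), 0.0,
            [](double acc, const Reading& r) { return acc + r.temperature_f; });
        double avg = sum / window_.size();
        return avg > threshold_f_;    // caller decides the action (shut down, notify, ...)
    }
};

int main() {
    RollingAverageAlert alert(90.0);
    if (alert.on_reading(95.5)) {
        std::cout << "Average temperature over the window exceeds 90 F - raise an alert\n";
    }
}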
Conclusion
This blog article focused on the analytics portion of an IoT solution. It discussed the four types of analytics (descriptive, diagnostic, predictive, and prescriptive) and how they map to IoT, as well as the role of machine learning in IoT analytics. Finally, it presented a high-level solution architecture to highlight the components of an IoT solution that come into play for analytics.
Recommended next steps
Learn more about IoT analytics by exploring Extracting actionable insights from IoT data.
std::optional: How, when, and why
C++17 adds several new “vocabulary types” – types intended to be used in the interfaces between components from different sources – to the standard library. MSVC has been shipping implementations of std::optional, std::any, and std::variant since the Visual Studio 2017 release, but we haven’t provided any guidelines on how and when these vocabulary types should be used. This article on std::optional is the first of a series that will examine each of the vocabulary types in turn.
The need for “sometimes-a-thing”
How do you write a function that optionally accepts or returns an object? The traditional solution is to choose one of the potential values as a sentinel to indicate the absence of a value:
void maybe_take_an_int(int value = -1); // an argument of -1 means "no value"
int maybe_return_an_int();              // a return value of -1 means "no value"
This works reasonably well when one of the representable values of the type never occurs in practice. It’s less great when there’s no obvious choice of sentinel and you want to be able to pass all representable values. If that’s the case, the typical approach is to use a separate boolean to indicate whether the optional parameter holds a valid value:
void maybe_take_an_int(int value = -1, bool is_valid = false);
void or_even_better(pair<int, bool> param = std::make_pair(-1, false));
pair<int, bool> maybe_return_an_int();
This is also feasible, but awkward. The “two distinct parameters” technique of maybe_take_an_int requires the caller to pass two things instead of one to represent a single notion, and fails silently when the caller forgets the bool and simply calls maybe_take_an_int(42). The use of pair in the other two functions avoids those problems, but it’s possible for the user of the pair to forget to check the bool and potentially use a garbage value in the int. Passing std::make_pair(42, true) or std::make_pair(whatever, false) is also hugely different than passing 42 or nothing – we’ve made the interface hard to use.
The need for “not-yet-a-thing”
How do you write a class with a member object whose initialization is delayed, i.e., optionally contains an object? For whatever reason, you do not want to initialize this member in a constructor. The initialization may happen in a later mandatory call, or it may happen only on request. When the object is destroyed the member must be destroyed only if it has been initialized. It’s possible to achieve this by allocating raw storage for the member object, using a bool to track its initialization status, and doing horrible placement new tricks:
using T = /* some object type */;

struct S {
    bool is_initialized = false;
    alignas(T) unsigned char maybe_T[sizeof(T)];

    void construct_the_T(int arg) {
        assert(!is_initialized);
        new (&maybe_T) T(arg);
        is_initialized = true;
    }

    T& get_the_T() {
        assert(is_initialized);
        return reinterpret_cast<T&>(maybe_T);
    }

    ~S() {
        if (is_initialized) {
            get_the_T().~T(); // destroy the T
        }
    }

    // ... lots of code ...
};
The "lots of code"
comment in the body of S
is where you write copy/move constructors/assignment operators that do the right thing depending on whether the source and target objects contain an initialized T
. If this all seems horribly messy and fragile to you, then give yourself a pat on the back – your instincts are right. We’re walking right along the cliff’s edge where small mistakes will send us tumbling into undefined behavior.
Another possible solution to many of the above problems is to dynamically allocate the “optional” value and pass it via pointer – ideally std::unique_ptr. Given that we C++ programmers are accustomed to using pointers, this solution has good usability: a null pointer indicates the no-value condition, * is used to access the value, std::make_unique<int>(42) is only slightly awkward compared to return 42, and unique_ptr handles the deallocation for us automatically. Of course, usability is not the only concern; readers accustomed to C++’s zero-overhead abstractions will immediately pounce upon this solution and complain that dynamic allocation is orders of magnitude more expensive than simply returning an integer. We’d like to solve this class of problem without requiring dynamic allocation.
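For illustration only (this snippet is not from the original article), the heap-allocating approach might look like:

#include <memory>

// Sketch: nullptr stands in for "no value"; every real value costs a dynamic
// allocation, and callers must dereference the pointer to reach the int.
std::unique_ptr<int> maybe_return_an_int_via_pointer(bool have_value) {
    if (have_value) {
        return std::make_unique<int>(42);
    }
    return nullptr; // the no-value case
}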
optional is mandatory
C++17’s solution to the above problems is std::optional. optional<T> directly addresses the issues that arise when passing or storing what may-or-may-not-currently-be an object. optional<T> provides interfaces to determine if it contains a T and to query the stored value. You can initialize an optional with an actual T value, or default-initialize it (or initialize with std::nullopt) to put it in the “empty” state. optional<T> even extends T’s ordering operations <, >, <=, >= – where an empty optional compares as less than any optional that contains a T – so you can use it in some contexts exactly as if it were a T. optional<T> stores the T object internally, so dynamic allocation is not necessary and in fact explicitly forbidden by the C++ Standard.
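As a quick illustrative sketch of those ordering semantics (not from the original article):

#include <cassert>
#include <optional>

int main() {
    std::optional<int> empty;     // disengaged: holds no value
    std::optional<int> one = 1;
    std::optional<int> two = 2;

    assert(empty < one);  // an empty optional is less than any engaged optional
    assert(one < two);    // engaged optionals compare by their contained values
    assert(one < 2);      // an optional<T> can also be compared directly with a T
}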
Our functions that need to optionally pass a T would be declared as:
void maybe_take_an_int(optional<int> potential_value = nullopt); // or equivalently, "potential_value = {}"
optional<int> maybe_return_an_int();
Since optional<T> can be initialized from a T value, callers of maybe_take_an_int need not change unless they were explicitly passing -1 to indicate “not-a-value.” Similarly, the implementation of maybe_return_an_int need only change the places that return -1 for “not-a-value” to instead return nullopt (or equivalently {}).
Callers of maybe_return_an_int and the implementation of maybe_take_an_int require more substantial changes. You can ask explicitly whether an instance of optional holds a value, using either the has_value member or contextual conversion to bool:
optional<int> o = maybe_return_an_int();
if (o.has_value()) { /* ... */ }
if (o) { /* ... */ } // "if" converts its condition to bool
Once you know that the optional contains a value, you can extract it with the * operator:
if (o) { cout << "The value is: " << *o << '\n'; }
or you can use the value member function, which returns the stored value or throws a bad_optional_access exception if there is none, so you need not check first:
cout << "The value is: " << o.value() << 'n';
or the value_or member function if you’d rather get a fallback value than an exception from an empty optional:
cout << "The value might be: " << o.value_or(42) << 'n';
All of which together means we cannot inadvertently use a garbage value as was the case for the “traditional” solutions. Attempting to access the contained value of an empty optional results in an exception when using the value() member, or in undefined behavior when using the * operator (behavior that debug libraries and static analysis tools can often catch). Updating the “old” code is probably as simple as replacing validity tests like value == not_a_value_sentinel and if (is_valid) with opt_value.has_value() and if (opt_value), and replacing uses with *opt_value.
Returning to the concrete example, your function that looks up a string given an integer can simply return optional<string> (a short sketch follows the list below). This avoids the problems of the suggested solutions; we can
- easily discern the no-value case from the value-found case, unlike for the “return a default value” solution,
- report the no-value case without using exception handling machinery, which is likely too expensive if such cases are frequent rather than exceptional,
- avoid leaking implementation details to the caller as would be necessary to expose an “end” iterator with which they could compare a returned iterator.
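A minimal sketch of such a lookup, assuming a std::map as the underlying container (the container and function names here are illustrative, not from the article):

#include <map>
#include <optional>
#include <string>

std::optional<std::string> find_name(const std::map<int, std::string>& names, int id) {
    auto it = names.find(id);
    if (it == names.end()) {
        return std::nullopt;  // no-value case: no sentinel, no exception, no leaked iterator
    }
    return it->second;        // value-found case
}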
Solving the delayed initialization problem is straightforward: we simply add an optional<T> member to our class. The standard library implementer is responsible for getting the placement new handling correct, and std::optional already handles all of the special cases for the copy/move constructors/assignment operators:
using T = /* some object type */;

struct S {
    optional<T> maybe_T;

    void construct_the_T(int arg) {
        // We need not guard against repeat initialization;
        // optional's emplace member will destroy any
        // contained object and make a fresh one.
        maybe_T.emplace(arg);
    }

    T& get_the_T() {
        assert(maybe_T);
        return *maybe_T;
        // Or, if we prefer an exception when maybe_T is not initialized:
        // return maybe_T.value();
    }

    // ... No error-prone handwritten special member functions! ...
};
optional is particularly well-suited to the delayed initialization problem because it is itself an instance of delayed initialization. The contained T may be initialized at construction, or sometime later, or never. Any contained T must be destroyed when the optional is destroyed. The designers of optional have already answered most of the questions that arise in this context.
Conclusions
Any time you need a tool to express “value-or-not-value”, or “possibly an answer”, or “object with delayed initialization”, you should reach into your toolbox for std::optional. Using a vocabulary type for these cases raises the level of abstraction, making it easier for others to understand what your code is doing. The declarations optional<T> f(); and void g(optional<T>); express intent more clearly and concisely than do pair<T, bool> f(); or void g(T t, bool is_valid);. Just as is the case with words, adding to our vocabulary of types increases our capacity to describe complex problems simply – it makes us more efficient.
If you have any questions, please feel free to post in the comments below. You can also send any comments and suggestions directly to the author via e-mail at cacarter@microsoft.com, or Twitter @CoderCasey. Thank you!
How to get started with Azure and .NET
Azure is a big cloud with lots of services, and for even the most experienced user it can be intimidating to know which service will best meet your needs. This blog post is intended to provide a short overview of the most common concepts and services .NET developers need to get started, and to provide resources to help you learn more.
Key Concepts
Azure Account: Your Azure account is the set of credentials that you sign into Azure with (e.g. what you would use to log into the Azure Portal). If you do not yet have an Azure account, you can create one for free.
Azure Subscription: A subscription is the billing plan that Azure resources are created inside. These can either be individual or managed by your company. Your account can be associated with multiple subscriptions. If this is the case, you need to make sure you are selecting the correct one when creating resources. For more info see understanding Azure accounts, subscriptions and billing. Did you know, if you have a Visual Studio Subscription you have monthly Azure credits just waiting to be activated?
Resource Group: Resource groups are one of the most fundamental primitives you’ll deal with in Azure. At a high level, you can think of a resource group like a folder on your computer. Any resource or service you create in Azure will be stored in a resource group (just like when you save a file on your computer, you choose where on disk it is saved).
Hosting: When you want code you’ve written to run in Azure, it needs to be hosted in a service that supports executing user provided code.
Managed Services: Azure provides many services where you provide data or information to Azure, and Azure’s implementation takes the appropriate action. A common example of this is Azure Blob Storage, where you provide files, and Azure handles reading, writing, and persisting them for you.
Choosing a Hosting Option
Hosting in Azure can be divided into three main categories:
- Infrastructure-as-a-Service (IaaS): With IaaS, you provision the VMs that you need, along with associated network and storage components. Then you deploy whatever software and applications you want onto those VMs. This model is the closest to a traditional on-premises environment, except that Microsoft manages the infrastructure. You still manage the individual VMs (e.g. deciding what operating system to run, installing custom software, and choosing when to apply security updates).
- Platform-as-a-Service (PaaS): Provides a managed hosting environment, where you can deploy your application without needing to manage VMs or networking resources. For example, instead of creating individual VMs, you specify an instance count, and the service will provision, configure, and manage the necessary resources. Azure App Service is an example of a PaaS service.
- Functions-as-a-Service (FaaS): Often called “Serverless” computing, FaaS goes even further than PaaS in removing the need to worry about the hosting environment. Instead of creating compute instances and deploying code to those instances, you simply deploy your code, and the service automatically runs it. You don’t need to administer the compute resources; the platform seamlessly scales your code up or down to whatever level is necessary to handle the traffic, and you pay only when your code is running. Azure Functions is a FaaS service.
In general, the further towards Serverless you can host your application, the more benefits you’ll see from running in the cloud. Below is a short cheat sheet for getting started with three common hosting choices in Azure and when to choose them (for a more complete list, see Overview of Azure compute options).
- Azure App Service: If you are looking to host a web application or service we recommend you look at App Service first. To get started with App Service, see the ASP.NET Quickstart (instructions other than project type are applicable to ASP.NET, WCF, and ASP.NET Core apps).
- Azure Functions: Azure Functions are great for event driven workflows. Examples include responding to Webhooks, processing items placed into Queues or Blob Storage, Timers, etc. To get started with Azure Functions see the Create your first function quickstart.
- Azure Virtual Machines: If App Service doesn’t meet your needs for hosting an existing application (e.g. you need to install custom software on the machine, or you need access to operating system APIs that are not available in App Service’s environment), Virtual Machines will be the easiest place to start. To get started with Virtual Machines, see our Deploy an ASP.NET app to an Azure virtual machine tutorial (applies equally to WCF).
If you need more help deciding on which hosting/compute option is best for you, see the Decision tree for Azure compute services.
Choosing a Data Storage Service
Azure offers many services for storing your data depending on your needs. The most common data services for .NET developers are:
- Azure SQL Database: If you are looking to migrate an application that is already using SQL Server to the cloud, then Azure SQL Database is a natural place to start. To get started see the Build an ASP.NET app in Azure with SQL Database tutorial.
- Azure Cosmos DB: Is a modern database designed for the cloud. If you are looking to start a new application that doesn’t yet have a specific database dependency, you should look at Azure Cosmos DB as a starting point. Cosmos DB is a good choice for new web, mobile, gaming, and IoT applications where automatic scale, predictable performance, fast response times, and the ability to query over schema-free data are important (common use cases for Azure Cosmos DB). To get started, see the Build a .NET web app with Azure Cosmos DB quickstart.
- Blob Storage: Is optimized for storing and retrieving large binary objects (images, files, video and audio streams, large application data objects and documents, etc.). Object stores enable the management of extremely large amounts of unstructured data. To get started see the upload, download, and list blobs using .NET quickstart.
For more help deciding on which data storage service will best meet your needs, see choose the right data store.
Diagnosing Problems in the Cloud
Once you deploy your application to Azure, you’ll likely run into cases where it worked locally but doesn’t in Azure. Below are two good places to start:
- Remote debug from Visual Studio: Most of the Azure compute services (including all three covered above) support remote debugging with Visual Studio and acquiring logs. To explore Visual Studio’s capabilities for your application type, open the Cloud Explorer tool window (type “Cloud Explorer” into Visual Studio’s Quick Launch box in the top-right corner) and locate your application in the tree. For details, see diagnosing errors in your cloud apps.
- Enable Application Insights: Application Insights is a complete application performance monitoring (APM) solution that captures diagnostic data, telemetry, and performance data from the application automatically. To get started collecting diagnostic data for your app, see find and diagnose run-time exceptions with Azure Application Insights.
Conclusion and Resources
The above is a short list of what to look at when you’re first starting with Azure. If you are interested in learning more here are a few resources:
- Visit our .NET Azure Developer Center
- Read the Azure Quick Start Guide for .NET Developers free e-book
As always, we want to see you successful, so if you run into any issues, let me know in the comment section below, or via Twitter and I’ll do my best to get your question answered.
Windows 10 SDK Preview Build 17749 available now!
Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17749 or greater). The Preview SDK Build 17749 contains bug fixes and under development changes to the API surface area.
The Preview SDK can be downloaded from the developer section on Windows Insider.
For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.
Things to note:
- This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target Windows 10 build 1803 or earlier to the Store.
- The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
- This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
- To assist with script access to the SDK, the ISO can also be accessed through the following URL: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=17749 once the static URL is published.
C++/WinRT Update for build 17709 and beyond:
This update introduces many improvements and fixes for C++/WinRT. Notably, it introduces the ability to build C++/WinRT without any dependency on the Windows SDK. This isn’t particularly interesting to the OS developer, but even in the OS repo it provides benefits because it doesn’t include any Windows headers; a developer will typically pull in fewer (or no) dependencies inadvertently. This also means a dramatic reduction in the number of macros that a C++/WinRT developer must guard against. Removing the dependency on the Windows headers makes C++/WinRT more portable and standards compliant, and furthers our efforts to make it a cross-compiler and cross-platform library. It also means that the C++/WinRT headers will never be mangled by macros. If you previously relied on C++/WinRT to include various Windows headers, you will now have to include them yourself. It has always been good practice to explicitly include any headers you depend on and not rely on another library to include them for you.
Highlights
Support get_strong and get_weak to create delegates: This update allows a developer to use either get_strong or get_weak instead of a raw this pointer when creating a delegate pointing to a member function.
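As a rough sketch (the page type, event, and handler names here are invented for illustration), that means a delegate capture can change like this:

// Before: a raw this pointer gives the delegate no reference to the object.
//     button.Click({ this, &MyPage::OnClick });
// After: capture a strong or a weak reference instead.
//     button.Click({ get_strong(), &MyPage::OnClick });
//     button.Click({ get_weak(), &MyPage::OnClick });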
Add async cancellation callback: The most frequently requested feature for C++/WinRT’s coroutine support has been the addition of a cancellation callback.
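A rough sketch of wiring up such a callback inside a coroutine (assuming the usual winrt::get_cancellation_token pattern; the work itself is elided):

#include <winrt/Windows.Foundation.h>

winrt::Windows::Foundation::IAsyncAction DoWorkAsync()
{
    auto cancellation = co_await winrt::get_cancellation_token();
    cancellation.callback([]
    {
        // Runs promptly when the caller cancels the async operation.
    });

    co_await winrt::resume_background();
    // ... long-running, cancellable work ...
}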
Simplify the use of APIs expecting IBuffer parameters: Although most APIs prefer collections or arrays, enough APIs rely on IBuffer that it should be easier to use such APIs from C++. This update provides direct access to the data behind an IBuffer implementation using the same data naming convention used by the C++ standard library containers. This also avoids colliding with metadata names that conventionally begin with an uppercase letter.
Conformance: Improved support for Clang and Visual C++’s stricter conformance modes.
Improved code gen: Various improvements to reduce code size, improve inlining, and optimize factory caching.
Remove unnecessary recursion: When the command line refers to a folder rather than a specific winmd, cppwinrt will no longer search recursively for winmd files. It causes performance problems in the OS build and can lead to usage errors that are hard to diagnose when developers inadvertently cause cppwinrt to consume more winmds than expected. The cppwinrt compiler also now handles duplicates more intelligently, making it more resilient to user error and poorly-formed winmd files.
Declare both WINRT_CanUnloadNow and WINRT_GetActivationFactory in base.h: Callers don’t need to declare them directly. Their signatures have also changed, amounting to a breaking change, but the built-in declarations alleviate most of the pain of that change. The change is necessitated by the fact that C++/WinRT no longer depends on the Windows headers, so these functions can no longer use types from the Windows headers.
Harden smart pointers: The event revokers didn’t revoke when move-assigned a new value. This led us to take a closer look at the smart pointer classes, and we noticed that they were not reliably handling self-assignment. This is rooted in the com_ptr class template that most of the others rely on. We fixed com_ptr and updated the event revokers to handle move semantics correctly to ensure that they revoke upon assignment. The handle class template has also been hardened by the removal of the implicit constructor that made it easy to write incorrect code. This also turned bugs in the OS into compiler errors, which were fixed in this PR.
Breaking Changes
Support for non-WinRT interfaces is disabled by default. To enable, simply #include <unknwn.h> before any C++/WinRT headers.
winrt::get_abi(winrt::hstring) now returns void* instead of HSTRING. Code requiring the HSTRING ABI can simply use a static_cast.
winrt::put_abi(winrt::hstring) returns void** instead of HSTRING*. Code requiring the HSTRING ABI can simply use a reinterpret_cast.
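For example, a minimal sketch of adapting existing code to the new hstring ABI signatures (the headers shown are assumed to be the usual ones):

#include <hstring.h>
#include <unknwn.h>
#include <winrt/base.h>

void adapt_to_new_abi(winrt::hstring& s)
{
    // get_abi now returns void*; cast when the HSTRING ABI is required.
    HSTRING h = static_cast<HSTRING>(winrt::get_abi(s));

    // put_abi now returns void**; reinterpret_cast when an HSTRING* is required.
    HSTRING* ph = reinterpret_cast<HSTRING*>(winrt::put_abi(s));

    (void)h;
    (void)ph;
}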
HRESULT is now projected as winrt::hresult. Code requiring an HRESULT can simply static_cast if you need to do type checking or support type traits, but it is otherwise convertible as long as <unknwn.h> is included first.
GUID is now projected as winrt::guid. Code implementing APIs with GUID parameters must use winrt::guid instead, but it is otherwise convertible as long as <unknwn.h> is included first.
The signatures of WINRT_CanUnloadNow and WINRT_GetActivationFactory has changed. Code must not declare these functions at all and instead include winrt/base.h to include their declarations.
The winrt::handle constructor is now explicit. Code assigning a raw handle value must call the attach method instead.
winrt::clock::from_FILETIME has been deprecated. Code should use winrt::clock::from_file_time instead.
What’s New:
MSIX Support
It’s finally here! You can now package your applications as MSIX. These applications can be installed and run on any device with build 17682 or later.
To package your application with MSIX, use the MakeAppx tool. To install the application – just click on the MSIX file. To understand more about MSIX, watch this introductory video: link
Feedback and comments are welcome on our MSIX community: http://aka.ms/MSIXCommunity
MSIX is not currently supported by the App Certification Kit or the Microsoft Store.
MC.EXE
We’ve made some important changes to the C/C++ ETW code generation of mc.exe (Message Compiler):
- The “-mof” parameter is deprecated. This parameter instructs MC.exe to generate ETW code that is compatible with Windows XP and earlier. Support for the “-mof” parameter will be removed in a future version of mc.exe.
- As long as the “-mof” parameter is not used, the generated C/C++ header is now compatible with both kernel-mode and user-mode, regardless of whether “-km” or “-um” was specified on the command line. The header will use the _ETW_KM_ macro to automatically determine whether it is being compiled for kernel-mode or user-mode and will call the appropriate ETW APIs for each mode.
- The only remaining difference between “-km” and “-um” is that the EventWrite[EventName] macros generated with “-km” have an Activity ID parameter while the EventWrite[EventName] macros generated with “-um” do not have an Activity ID parameter.
- The EventWrite[EventName] macros now default to calling EventWriteTransfer (user mode) or EtwWriteTransfer (kernel mode). Previously, the EventWrite[EventName] macros defaulted to calling EventWrite (user mode) or EtwWrite (kernel mode).
- The generated header now supports several customization macros. For example, you can set the MCGEN_EVENTWRITETRANSFER macro if you need the generated macros to call something other than EventWriteTransfer.
- The manifest supports new attributes.
- Event “name”: non-localized event name.
- Event “attributes”: additional key-value metadata for an event such as filename, line number, component name, function name.
- Event “tags”: 28-bit value with user-defined semantics (per-event).
- Field “tags”: 28-bit value with user-defined semantics (per-field – can be applied to “data” or “struct” elements).
- You can now define “provider traits” in the manifest (e.g. provider group). If provider traits are used in the manifest, the EventRegister[ProviderName] macro will automatically register them.
- MC will now report an error if a localized message file is missing a string. (Previously MC would silently generate a corrupt message resource.)
- MC can now generate Unicode (utf-8 or utf-16) output with the “-cp utf-8” or “-cp utf-16” parameters.
Known Issues:
The SDK headers are generated with types in the “ABI” namespace. This is done to avoid conflicts with C++/CX and C++/WinRT clients that need to consume types directly at the ABI layer. By default, types emitted by MIDL are *not* put in the ABI namespace; however, this has the potential to introduce conflicts from teams attempting to consume ABI types from Windows WinRT MIDL generated headers and non-Windows WinRT MIDL generated headers (this is especially challenging if the non-Windows header references Windows types).
To ensure that developers have a consistent view of the WinRT API surface, validation has been added to the generated headers to ensure that the ABI prefix is consistent between the Windows headers and user generated headers. If you encounter an error like:
5>c:\program files (x86)\windows kits\10\include\10.0.17687.0\winrt\windows.foundation.h(83): error C2220: warning treated as error – no ‘object’ file generated
5>c:\program files (x86)\windows kits\10\include\10.0.17687.0\winrt\windows.foundation.h(83): warning C4005: ‘CHECK_NS_PREFIX_STATE’: macro redefinition
5>g:\<PATH TO YOUR HEADER HERE>(41): note: see previous definition of ‘CHECK_NS_PREFIX_STATE’
It means that some of your MIDL generated headers are inconsistent with the system generated headers.
There are two ways to fix this:
- Preferred: Compile your IDL file with the /ns_prefix MIDL command line switch. This will cause all your types to be moved to the ABI namespace, consistent with the Windows headers. This may require changes in your code, however.
- Alternate: Add #define DISABLE_NS_PREFIX_CHECKS before including the Windows headers. This will suppress the validation, as sketched below.
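For instance, the alternate workaround is simply (a minimal sketch; the particular Windows header shown is just an example):

// Suppress the namespace-prefix validation before pulling in any Windows headers.
#define DISABLE_NS_PREFIX_CHECKS
#include <windows.foundation.h>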
Windows App Certification Kit Crashes
The Windows App Certification Kit will crash when running. To avoid this failure we recommend not installing the Windows App Certification Kit. You can uncheck this option during setup.
API Updates, Additions and Removals
When targeting new APIs, consider writing your app to be adaptive in order to run correctly on the widest number of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.
The following APIs have been added to the platform since the release of 17134, with removals noted separately below.
In addition, please note the following removal since build 17744:
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackagePropertySetView : IIterable<IKeyValuePair<string, object>>, IMapView<string, object> {
    string SourceDisplayName { get; }
  }
}
Additions:
namespace Windows.AI.MachineLearning { public interface ILearningModelFeatureDescriptor public interface ILearningModelFeatureValue public interface ILearningModelOperatorProvider public sealed class ImageFeatureDescriptor : ILearningModelFeatureDescriptor public sealed class ImageFeatureValue : ILearningModelFeatureValue public interface ITensor : ILearningModelFeatureValue public sealed class LearningModel : IClosable public sealed class LearningModelBinding : IIterable<IKeyValuePair<string, object>>, IMapView<string, object> public sealed class LearningModelDevice public enum LearningModelDeviceKind public sealed class LearningModelEvaluationResult public enum LearningModelFeatureKind public sealed class LearningModelSession : IClosable public struct MachineLearningContract public sealed class MapFeatureDescriptor : ILearningModelFeatureDescriptor public sealed class SequenceFeatureDescriptor : ILearningModelFeatureDescriptor public sealed class TensorBoolean : ILearningModelFeatureValue, ITensor public sealed class TensorDouble : ILearningModelFeatureValue, ITensor public sealed class TensorFeatureDescriptor : ILearningModelFeatureDescriptor public sealed class TensorFloat : ILearningModelFeatureValue, ITensor public sealed class TensorFloat16Bit : ILearningModelFeatureValue, ITensor public sealed class TensorInt16Bit : ILearningModelFeatureValue, ITensor public sealed class TensorInt32Bit : ILearningModelFeatureValue, ITensor public sealed class TensorInt64Bit : ILearningModelFeatureValue, ITensor public sealed class TensorInt8Bit : ILearningModelFeatureValue, ITensor public enum TensorKind public sealed class TensorString : ILearningModelFeatureValue, ITensor public sealed class TensorUInt16Bit : ILearningModelFeatureValue, ITensor public sealed class TensorUInt32Bit : ILearningModelFeatureValue, ITensor public sealed class TensorUInt64Bit : ILearningModelFeatureValue, ITensor public sealed class TensorUInt8Bit : ILearningModelFeatureValue, ITensor } namespace Windows.ApplicationModel { public sealed class AppInstallerInfo public sealed class LimitedAccessFeatureRequestResult public static class LimitedAccessFeatures public enum LimitedAccessFeatureStatus public sealed class Package { IAsyncOperation<PackageUpdateAvailabilityResult> CheckUpdateAvailabilityAsync(); AppInstallerInfo GetAppInstallerInfo(); } public enum PackageUpdateAvailability public sealed class PackageUpdateAvailabilityResult } namespace Windows.ApplicationModel.Calls { public sealed class VoipCallCoordinator { IAsyncOperation<VoipPhoneCallResourceReservationStatus> ReserveCallResourcesAsync(); } } namespace Windows.ApplicationModel.Chat { public static class ChatCapabilitiesManager { public static IAsyncOperation<ChatCapabilities> GetCachedCapabilitiesAsync(string address, string transportId); public static IAsyncOperation<ChatCapabilities> GetCapabilitiesFromNetworkAsync(string address, string transportId); } public static class RcsManager { public static event EventHandler<object> TransportListChanged; } } namespace Windows.ApplicationModel.DataTransfer { public static class Clipboard { public static event EventHandler<ClipboardHistoryChangedEventArgs> HistoryChanged; public static event EventHandler<object> HistoryEnabledChanged; public static event EventHandler<object> RoamingEnabledChanged; public static bool ClearHistory(); public static bool DeleteItemFromHistory(ClipboardHistoryItem item); public static IAsyncOperation<ClipboardHistoryItemsResult> GetHistoryItemsAsync(); public static bool 
IsHistoryEnabled(); public static bool IsRoamingEnabled(); public static bool SetContentWithOptions(DataPackage content, ClipboardContentOptions options); public static SetHistoryItemAsContentStatus SetHistoryItemAsContent(ClipboardHistoryItem item); } public sealed class ClipboardContentOptions public sealed class ClipboardHistoryChangedEventArgs public sealed class ClipboardHistoryItem public sealed class ClipboardHistoryItemsResult public enum ClipboardHistoryItemsResultStatus public sealed class DataPackagePropertySetView : IIterable<IKeyValuePair<string, object>>, IMapView<string, object> { bool IsFromRoamingClipboard { get; } } public enum SetHistoryItemAsContentStatus } namespace Windows.ApplicationModel.Store.Preview { public enum DeliveryOptimizationDownloadMode public enum DeliveryOptimizationDownloadModeSource public sealed class DeliveryOptimizationSettings public static class StoreConfiguration { public static bool IsPinToDesktopSupported(); public static bool IsPinToStartSupported(); public static bool IsPinToTaskbarSupported(); public static void PinToDesktop(string appPackageFamilyName); public static void PinToDesktopForUser(User user, string appPackageFamilyName); } } namespace Windows.ApplicationModel.Store.Preview.InstallControl { public enum AppInstallationToastNotificationMode public sealed class AppInstallItem { AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; } AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; } bool PinToDesktopAfterInstall { get; set; } bool PinToStartAfterInstall { get; set; } bool PinToTaskbarAfterInstall { get; set; } } public sealed class AppInstallManager { bool CanInstallForAllUsers { get; } } public sealed class AppInstallOptions { string CampaignId { get; set; } AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; } string ExtendedCampaignId { get; set; } bool InstallForAllUsers { get; set; } AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; } bool PinToDesktopAfterInstall { get; set; } bool PinToStartAfterInstall { get; set; } bool PinToTaskbarAfterInstall { get; set; } bool StageButDoNotInstall { get; set; } } public sealed class AppUpdateOptions { bool AutomaticallyDownloadAndInstallUpdateIfFound { get; set; } } } namespace Windows.ApplicationModel.UserActivities { public sealed class UserActivity { bool IsRoamable { get; set; } } } namespace Windows.Data.Text { public sealed class TextPredictionGenerator { CoreTextInputScope InputScope { get; set; } IAsyncOperation<IVectorView<string>> GetCandidatesAsync(string input, uint maxCandidates, TextPredictionOptions predictionOptions, IIterable<string> previousStrings); IAsyncOperation<IVectorView<string>> GetNextWordCandidatesAsync(uint maxCandidates, IIterable<string> previousStrings); } public enum TextPredictionOptions : uint } namespace Windows.Devices.Display.Core { public sealed class DisplayAdapter public enum DisplayBitsPerChannel : uint public sealed class DisplayDevice public enum DisplayDeviceCapability public sealed class DisplayFence public sealed class DisplayManager : IClosable public sealed class DisplayManagerChangedEventArgs public sealed class DisplayManagerDisabledEventArgs public sealed class DisplayManagerEnabledEventArgs public enum DisplayManagerOptions : uint public sealed class DisplayManagerPathsFailedOrInvalidatedEventArgs public enum DisplayManagerResult public sealed class DisplayManagerResultWithState public sealed 
class DisplayModeInfo public enum DisplayModeQueryOptions : uint public sealed class DisplayPath public enum DisplayPathScaling public enum DisplayPathStatus public struct DisplayPresentationRate public sealed class DisplayPrimaryDescription public enum DisplayRotation public sealed class DisplayScanout public sealed class DisplaySource public sealed class DisplayState public enum DisplayStateApplyOptions : uint public enum DisplayStateFunctionalizeOptions : uint public sealed class DisplayStateOperationResult public enum DisplayStateOperationStatus public sealed class DisplaySurface public sealed class DisplayTarget public enum DisplayTargetPersistence public sealed class DisplayTask public sealed class DisplayTaskPool public enum DisplayTaskSignalKind public sealed class DisplayView public sealed class DisplayWireFormat public enum DisplayWireFormatColorSpace public enum DisplayWireFormatEotf public enum DisplayWireFormatHdrMetadata public enum DisplayWireFormatPixelEncoding } namespace Windows.Devices.Enumeration { public enum DeviceInformationKind { DevicePanel = 8, } public sealed class DeviceInformationPairing { public static bool TryRegisterForAllInboundPairingRequestsWithProtectionLevel(DevicePairingKinds pairingKindsSupported, DevicePairingProtectionLevel minProtectionLevel); } } namespace Windows.Devices.Enumeration.Pnp { public enum PnpObjectType { DevicePanel = 8, } } namespace Windows.Devices.Lights { public sealed class LampArray public enum LampArrayKind public sealed class LampInfo public enum LampPurposes : uint } namespace Windows.Devices.Lights.Effects { public interface ILampArrayEffect public sealed class LampArrayBitmapEffect : ILampArrayEffect public sealed class LampArrayBitmapRequestedEventArgs public sealed class LampArrayBlinkEffect : ILampArrayEffect public sealed class LampArrayColorRampEffect : ILampArrayEffect public sealed class LampArrayCustomEffect : ILampArrayEffect public enum LampArrayEffectCompletionBehavior public sealed class LampArrayEffectPlaylist : IIterable<ILampArrayEffect>, IVectorView<ILampArrayEffect> public enum LampArrayEffectStartMode public enum LampArrayRepetitionMode public sealed class LampArraySolidEffect : ILampArrayEffect public sealed class LampArrayUpdateRequestedEventArgs } namespace Windows.Devices.PointOfService { public sealed class BarcodeScannerCapabilities { bool IsVideoPreviewSupported { get; } } public sealed class ClaimedBarcodeScanner : IClosable { event TypedEventHandler<ClaimedBarcodeScanner, ClaimedBarcodeScannerClosedEventArgs> Closed; } public sealed class ClaimedBarcodeScannerClosedEventArgs public sealed class ClaimedCashDrawer : IClosable { event TypedEventHandler<ClaimedCashDrawer, ClaimedCashDrawerClosedEventArgs> Closed; } public sealed class ClaimedCashDrawerClosedEventArgs public sealed class ClaimedLineDisplay : IClosable { event TypedEventHandler<ClaimedLineDisplay, ClaimedLineDisplayClosedEventArgs> Closed; } public sealed class ClaimedLineDisplayClosedEventArgs public sealed class ClaimedMagneticStripeReader : IClosable { event TypedEventHandler<ClaimedMagneticStripeReader, ClaimedMagneticStripeReaderClosedEventArgs> Closed; } public sealed class ClaimedMagneticStripeReaderClosedEventArgs public sealed class ClaimedPosPrinter : IClosable { event TypedEventHandler<ClaimedPosPrinter, ClaimedPosPrinterClosedEventArgs> Closed; } public sealed class ClaimedPosPrinterClosedEventArgs } namespace Windows.Devices.PointOfService.Provider { public sealed class BarcodeScannerDisableScannerRequest { IAsyncAction 
ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerEnableScannerRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerFrameReader : IClosable public sealed class BarcodeScannerFrameReaderFrameArrivedEventArgs public sealed class BarcodeScannerGetSymbologyAttributesRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerHideVideoPreviewRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerProviderConnection : IClosable { IAsyncOperation<BarcodeScannerFrameReader> CreateFrameReaderAsync(); IAsyncOperation<BarcodeScannerFrameReader> CreateFrameReaderAsync(BitmapPixelFormat preferredFormat); IAsyncOperation<BarcodeScannerFrameReader> CreateFrameReaderAsync(BitmapPixelFormat preferredFormat, BitmapSize preferredSize); } public sealed class BarcodeScannerSetActiveSymbologiesRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerSetSymbologyAttributesRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerStartSoftwareTriggerRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerStopSoftwareTriggerRequest { IAsyncAction ReportFailedAsync(int reason); IAsyncAction ReportFailedAsync(int reason, string failedReasonDescription); } public sealed class BarcodeScannerVideoFrame : IClosable } namespace Windows.Devices.Sensors { public sealed class HingeAngleReading public sealed class HingeAngleSensor public sealed class HingeAngleSensorReadingChangedEventArgs public sealed class SimpleOrientationSensor { public static IAsyncOperation<SimpleOrientationSensor> FromIdAsync(string deviceId); public static string GetDeviceSelector(); } } namespace Windows.Devices.SmartCards { public static class KnownSmartCardAppletIds public sealed class SmartCardAppletIdGroup { string Description { get; set; } IRandomAccessStreamReference Logo { get; set; } ValueSet Properties { get; } bool SecureUserAuthenticationRequired { get; set; } } public sealed class SmartCardAppletIdGroupRegistration { string SmartCardReaderId { get; } IAsyncAction SetPropertiesAsync(ValueSet props); } } namespace Windows.Devices.WiFi { public enum WiFiPhyKind { HE = 10, } } namespace Windows.Foundation { public static class GuidHelper } namespace Windows.Globalization { public static class CurrencyIdentifiers { public static string MRU { get; } public static string SSP { get; } public static string STN { get; } public static string VES { get; } } } namespace Windows.Graphics.Capture { public sealed class Direct3D11CaptureFramePool : IClosable { public static Direct3D11CaptureFramePool CreateFreeThreaded(IDirect3DDevice device, DirectXPixelFormat pixelFormat, int numberOfBuffers, SizeInt32 size); } public sealed class GraphicsCaptureItem { public static GraphicsCaptureItem CreateFromVisual(Visual visual); } } namespace Windows.Graphics.Display.Core { public enum HdmiDisplayHdrOption { 
DolbyVisionLowLatency = 3, } public sealed class HdmiDisplayMode { bool IsDolbyVisionLowLatencySupported { get; } } } namespace Windows.Graphics.Holographic { public sealed class HolographicCamera { bool IsHardwareContentProtectionEnabled { get; set; } bool IsHardwareContentProtectionSupported { get; } } public sealed class HolographicQuadLayerUpdateParameters { bool CanAcquireWithHardwareProtection { get; } IDirect3DSurface AcquireBufferToUpdateContentWithHardwareProtection(); } } namespace Windows.Graphics.Imaging { public sealed class BitmapDecoder : IBitmapFrame, IBitmapFrameWithSoftwareBitmap { public static Guid HeifDecoderId { get; } public static Guid WebpDecoderId { get; } } public sealed class BitmapEncoder { public static Guid HeifEncoderId { get; } } } namespace Windows.Management.Deployment { public enum DeploymentOptions : uint { ForceUpdateFromAnyVersion = (uint)262144, } public sealed class PackageManager { IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> DeprovisionPackageForAllUsersAsync(string packageFamilyName); } public enum RemovalOptions : uint { RemoveForAllUsers = (uint)524288, } } namespace Windows.Media.Audio { public sealed class CreateAudioDeviceInputNodeResult { HResult ExtendedError { get; } } public sealed class CreateAudioDeviceOutputNodeResult { HResult ExtendedError { get; } } public sealed class CreateAudioFileInputNodeResult { HResult ExtendedError { get; } } public sealed class CreateAudioFileOutputNodeResult { HResult ExtendedError { get; } } public sealed class CreateAudioGraphResult { HResult ExtendedError { get; } } public sealed class CreateMediaSourceAudioInputNodeResult { HResult ExtendedError { get; } } public enum MixedRealitySpatialAudioFormatPolicy public sealed class SetDefaultSpatialAudioFormatResult public enum SetDefaultSpatialAudioFormatStatus public sealed class SpatialAudioDeviceConfiguration public sealed class SpatialAudioFormatConfiguration public static class SpatialAudioFormatSubtype } namespace Windows.Media.Control { public sealed class CurrentSessionChangedEventArgs public sealed class GlobalSystemMediaTransportControlsSession public sealed class GlobalSystemMediaTransportControlsSessionManager public sealed class GlobalSystemMediaTransportControlsSessionMediaProperties public sealed class GlobalSystemMediaTransportControlsSessionPlaybackControls public sealed class GlobalSystemMediaTransportControlsSessionPlaybackInfo public enum GlobalSystemMediaTransportControlsSessionPlaybackStatus public sealed class GlobalSystemMediaTransportControlsSessionTimelineProperties public sealed class MediaPropertiesChangedEventArgs public sealed class PlaybackInfoChangedEventArgs public sealed class SessionsChangedEventArgs public sealed class TimelinePropertiesChangedEventArgs } namespace Windows.Media.Core { public sealed class MediaStreamSample { IDirect3DSurface Direct3D11Surface { get; } public static MediaStreamSample CreateFromDirect3D11Surface(IDirect3DSurface surface, TimeSpan timestamp); } } namespace Windows.Media.Devices.Core { public sealed class CameraIntrinsics { public CameraIntrinsics(Vector2 focalLength, Vector2 principalPoint, Vector3 radialDistortion, Vector2 tangentialDistortion, uint imageWidth, uint imageHeight); } } namespace Windows.Media.Import { public enum PhotoImportContentTypeFilter { ImagesAndVideosFromCameraRoll = 3, } public sealed class PhotoImportItem { string Path { get; } } } namespace Windows.Media.MediaProperties { public sealed class ImageEncodingProperties : IMediaEncodingProperties { 
public static ImageEncodingProperties CreateHeif(); } public static class MediaEncodingSubtypes { public static string Heif { get; } } } namespace Windows.Media.Protection.PlayReady { public static class PlayReadyStatics { public static IReference<DateTime> HardwareDRMDisabledAtTime { get; } public static IReference<DateTime> HardwareDRMDisabledUntilTime { get; } public static void ResetHardwareDRMDisabled(); } } namespace Windows.Media.Streaming.Adaptive { public enum AdaptiveMediaSourceResourceType { MediaSegmentIndex = 5, } } namespace Windows.Networking.BackgroundTransfer { public enum BackgroundTransferPriority { Low = 2, } } namespace Windows.Networking.Connectivity { public sealed class ConnectionProfile { bool CanDelete { get; } IAsyncOperation<ConnectionProfileDeleteStatus> TryDeleteAsync(); } public enum ConnectionProfileDeleteStatus } namespace Windows.Networking.NetworkOperators { public enum ESimOperationStatus { CardGeneralFailure = 13, ConfirmationCodeMissing = 14, EidMismatch = 18, InvalidMatchingId = 15, NoCorrespondingRequest = 23, NoEligibleProfileForThisDevice = 16, OperationAborted = 17, OperationProhibitedByProfileClass = 21, ProfileNotAvailableForNewBinding = 19, ProfileNotPresent = 22, ProfileNotReleasedByOperator = 20, } } namespace Windows.Perception { public sealed class PerceptionTimestamp { TimeSpan SystemRelativeTargetTime { get; } } public static class PerceptionTimestampHelper { public static PerceptionTimestamp FromSystemRelativeTargetTime(TimeSpan targetTime); } } namespace Windows.Perception.Spatial { public sealed class SpatialAnchorExporter public enum SpatialAnchorExportPurpose public sealed class SpatialAnchorExportSufficiency public sealed class SpatialLocation { Vector3 AbsoluteAngularAccelerationAxisAngle { get; } Vector3 AbsoluteAngularVelocityAxisAngle { get; } } } namespace Windows.Perception.Spatial.Preview { public static class SpatialGraphInteropPreview } namespace Windows.Services.Cortana { public sealed class CortanaActionableInsights public sealed class CortanaActionableInsightsOptions } namespace Windows.Services.Store { public sealed class StoreAppLicense { bool IsDiscLicense { get; } } public sealed class StoreContext { IAsyncOperation<StoreRateAndReviewResult> RequestRateAndReviewAppAsync(); IAsyncOperation<IVectorView<StoreQueueItem>> SetInstallOrderForAssociatedStoreQueueItemsAsync(IIterable<StoreQueueItem> items); } public sealed class StoreQueueItem { IAsyncAction CancelInstallAsync(); IAsyncAction PauseInstallAsync(); IAsyncAction ResumeInstallAsync(); } public sealed class StoreRateAndReviewResult public enum StoreRateAndReviewStatus } namespace Windows.Storage.Provider { public enum StorageProviderHydrationPolicyModifier : uint { AutoDehydrationAllowed = (uint)4, } public sealed class StorageProviderSyncRootInfo { Guid ProviderId { get; set; } } } namespace Windows.System { public sealed class AppUriHandlerHost public sealed class AppUriHandlerRegistration public sealed class AppUriHandlerRegistrationManager public static class Launcher { public static IAsyncOperation<bool> LaunchFolderPathAsync(string path); public static IAsyncOperation<bool> LaunchFolderPathAsync(string path, FolderLauncherOptions options); public static IAsyncOperation<bool> LaunchFolderPathForUserAsync(User user, string path); public static IAsyncOperation<bool> LaunchFolderPathForUserAsync(User user, string path, FolderLauncherOptions options); } } namespace Windows.System.Preview { public enum HingeState public sealed class 
TwoPanelHingedDevicePosturePreview public sealed class TwoPanelHingedDevicePosturePreviewReading public sealed class TwoPanelHingedDevicePosturePreviewReadingChangedEventArgs } namespace Windows.System.Profile { public enum SystemOutOfBoxExperienceState public static class SystemSetupInfo public static class WindowsIntegrityPolicy } namespace Windows.System.Profile.SystemManufacturers { public sealed class SystemSupportDeviceInfo public static class SystemSupportInfo { public static SystemSupportDeviceInfo LocalDeviceInfo { get; } } } namespace Windows.System.RemoteSystems { public sealed class RemoteSystem { IVectorView<RemoteSystemApp> Apps { get; } } public sealed class RemoteSystemApp public sealed class RemoteSystemAppRegistration public sealed class RemoteSystemConnectionInfo public sealed class RemoteSystemConnectionRequest { RemoteSystemApp RemoteSystemApp { get; } public static RemoteSystemConnectionRequest CreateForApp(RemoteSystemApp remoteSystemApp); } public sealed class RemoteSystemWebAccountFilter : IRemoteSystemFilter } namespace Windows.System.Update { public enum SystemUpdateAttentionRequiredReason public sealed class SystemUpdateItem public enum SystemUpdateItemState public sealed class SystemUpdateLastErrorInfo public static class SystemUpdateManager public enum SystemUpdateManagerState public enum SystemUpdateStartInstallAction } namespace Windows.System.UserProfile { public sealed class AssignedAccessSettings } namespace Windows.UI.Accessibility { public sealed class ScreenReaderPositionChangedEventArgs public sealed class ScreenReaderService } namespace Windows.UI.Composition { public enum AnimationPropertyAccessMode public sealed class AnimationPropertyInfo : CompositionObject public sealed class BooleanKeyFrameAnimation : KeyFrameAnimation public class CompositionAnimation : CompositionObject, ICompositionAnimationBase { void SetExpressionReferenceParameter(string parameterName, IAnimationObject source); } public enum CompositionBatchTypes : uint { AllAnimations = (uint)5, InfiniteAnimation = (uint)4, } public sealed class CompositionGeometricClip : CompositionClip public class CompositionGradientBrush : CompositionBrush { CompositionMappingMode MappingMode { get; set; } } public enum CompositionMappingMode public class CompositionObject : IAnimationObject, IClosable { void PopulatePropertyInfo(string propertyName, AnimationPropertyInfo propertyInfo); public static void StartAnimationGroupWithIAnimationObject(IAnimationObject target, ICompositionAnimationBase animation); public static void StartAnimationWithIAnimationObject(IAnimationObject target, string propertyName, CompositionAnimation animation); } public sealed class Compositor : IClosable { BooleanKeyFrameAnimation CreateBooleanKeyFrameAnimation(); CompositionGeometricClip CreateGeometricClip(); CompositionGeometricClip CreateGeometricClip(CompositionGeometry geometry); RedirectVisual CreateRedirectVisual(); RedirectVisual CreateRedirectVisual(Visual source); } public interface IAnimationObject public sealed class RedirectVisual : ContainerVisual } namespace Windows.UI.Composition.Interactions { public sealed class InteractionSourceConfiguration : CompositionObject public enum InteractionSourceRedirectionMode public sealed class InteractionTracker : CompositionObject { bool IsInertiaFromImpulse { get; } int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option); int TryUpdatePositionBy(Vector3 amount, InteractionTrackerClampingOption option); } public enum InteractionTrackerClampingOption 
public sealed class InteractionTrackerInertiaStateEnteredArgs { bool IsInertiaFromImpulse { get; } } public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource { InteractionSourceConfiguration PointerWheelConfig { get; } } } namespace Windows.UI.Input.Inking { public enum HandwritingLineHeight public sealed class PenAndInkSettings public enum PenHandedness } namespace Windows.UI.Input.Inking.Preview { public sealed class PalmRejectionDelayZonePreview : IClosable } namespace Windows.UI.Notifications { public sealed class ScheduledToastNotificationShowingEventArgs public sealed class ToastNotifier { event TypedEventHandler<ToastNotifier, ScheduledToastNotificationShowingEventArgs> ScheduledToastNotificationShowing; } } namespace Windows.UI.Shell { public enum SecurityAppKind public sealed class SecurityAppManager public struct SecurityAppManagerContract public enum SecurityAppState public enum SecurityAppSubstatus public sealed class TaskbarManager { IAsyncOperation<bool> IsSecondaryTilePinnedAsync(string tileId); IAsyncOperation<bool> RequestPinSecondaryTileAsync(SecondaryTile secondaryTile); IAsyncOperation<bool> TryUnpinSecondaryTileAsync(string tileId); } } namespace Windows.UI.StartScreen { public sealed class StartScreenManager { IAsyncOperation<bool> ContainsSecondaryTileAsync(string tileId); IAsyncOperation<bool> TryRemoveSecondaryTileAsync(string tileId); } } namespace Windows.UI.Text { public sealed class RichEditTextDocument : ITextDocument { void ClearUndoRedoHistory(); } } namespace Windows.UI.Text.Core { public sealed class CoreTextLayoutRequest { CoreTextLayoutBounds LayoutBoundsVisualPixels { get; } } } namespace Windows.UI.ViewManagement { public enum ApplicationViewWindowingMode { CompactOverlay = 3, Maximized = 4, } } namespace Windows.UI.ViewManagement.Core { public sealed class CoreInputView { bool TryHide(); bool TryShow(); bool TryShow(CoreInputViewKind type); } public enum CoreInputViewKind } namespace Windows.UI.WebUI { public sealed class BackgroundActivatedEventArgs : IBackgroundActivatedEventArgs public delegate void BackgroundActivatedEventHandler(object sender, IBackgroundActivatedEventArgs eventArgs); public sealed class NewWebUIViewCreatedEventArgs public static class WebUIApplication { public static event BackgroundActivatedEventHandler BackgroundActivated; public static event EventHandler<NewWebUIViewCreatedEventArgs> NewWebUIViewCreated; } public sealed class WebUIView : IWebViewControl, IWebViewControl2 } namespace Windows.UI.Xaml { public class BrushTransition public class ColorPaletteResources : ResourceDictionary public class DataTemplate : FrameworkTemplate, IElementFactory { UIElement GetElement(ElementFactoryGetArgs args); void RecycleElement(ElementFactoryRecycleArgs args); } public sealed class DebugSettings { bool FailFastOnErrors { get; set; } } public sealed class EffectiveViewportChangedEventArgs public class ElementFactoryGetArgs public class ElementFactoryRecycleArgs public class FrameworkElement : UIElement { bool IsLoaded { get; } event TypedEventHandler<FrameworkElement, EffectiveViewportChangedEventArgs> EffectiveViewportChanged; void InvalidateViewport(); } public interface IElementFactory public class ScalarTransition public class UIElement : DependencyObject, IAnimationObject { bool CanBeScrollAnchor { get; set; } public static DependencyProperty CanBeScrollAnchorProperty { get; } Vector3 CenterPoint { get; set; } ScalarTransition OpacityTransition { get; set; } float Rotation { get; set; } Vector3 
RotationAxis { get; set; } ScalarTransition RotationTransition { get; set; } Vector3 Scale { get; set; } Vector3Transition ScaleTransition { get; set; } Matrix4x4 TransformMatrix { get; set; } Vector3 Translation { get; set; } Vector3Transition TranslationTransition { get; set; } void PopulatePropertyInfo(string propertyName, AnimationPropertyInfo propertyInfo); virtual void PopulatePropertyInfoOverride(string propertyName, AnimationPropertyInfo animationPropertyInfo); void StartAnimation(ICompositionAnimationBase animation); void StopAnimation(ICompositionAnimationBase animation); } public class Vector3Transition public enum Vector3TransitionComponents : uint } namespace Windows.UI.Xaml.Automation { public sealed class AutomationElementIdentifiers { public static AutomationProperty IsDialogProperty { get; } } public sealed class AutomationProperties { public static DependencyProperty IsDialogProperty { get; } public static bool GetIsDialog(DependencyObject element); public static void SetIsDialog(DependencyObject element, bool value); } } namespace Windows.UI.Xaml.Automation.Peers { public class AppBarButtonAutomationPeer : ButtonAutomationPeer, IExpandCollapseProvider { ExpandCollapseState ExpandCollapseState { get; } void Collapse(); void Expand(); } public class AutomationPeer : DependencyObject { bool IsDialog(); virtual bool IsDialogCore(); } public class MenuBarAutomationPeer : FrameworkElementAutomationPeer public class MenuBarItemAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider, IInvokeProvider } namespace Windows.UI.Xaml.Controls { public sealed class AnchorRequestedEventArgs public class AppBarElementContainer : ContentControl, ICommandBarElement, ICommandBarElement2 public sealed class AutoSuggestBox : ItemsControl { object Description { get; set; } public static DependencyProperty DescriptionProperty { get; } } public enum BackgroundSizing public sealed class Border : FrameworkElement { BackgroundSizing BackgroundSizing { get; set; } public static DependencyProperty BackgroundSizingProperty { get; } BrushTransition BackgroundTransition { get; set; } } public class CalendarDatePicker : Control { object Description { get; set; } public static DependencyProperty DescriptionProperty { get; } } public class ComboBox : Selector { object Description { get; set; } public static DependencyProperty DescriptionProperty { get; } bool IsEditable { get; set; } public static DependencyProperty IsEditableProperty { get; } string Text { get; set; } Style TextBoxStyle { get; set; } public static DependencyProperty TextBoxStyleProperty { get; } public static DependencyProperty TextProperty { get; } event TypedEventHandler<ComboBox, ComboBoxTextSubmittedEventArgs> TextSubmitted; } public sealed class ComboBoxTextSubmittedEventArgs public class CommandBarFlyout : FlyoutBase public class ContentPresenter : FrameworkElement { BackgroundSizing BackgroundSizing { get; set; } public static DependencyProperty BackgroundSizingProperty { get; } BrushTransition BackgroundTransition { get; set; } } public class Control : FrameworkElement { BackgroundSizing BackgroundSizing { get; set; } public static DependencyProperty BackgroundSizingProperty { get; } CornerRadius CornerRadius { get; set; } public static DependencyProperty CornerRadiusProperty { get; } } public class DataTemplateSelector : IElementFactory { UIElement GetElement(ElementFactoryGetArgs args); void RecycleElement(ElementFactoryRecycleArgs args); } public class DatePicker : Control { IReference<DateTime> SelectedDate { 
get; set; } public static DependencyProperty SelectedDateProperty { get; } event TypedEventHandler<DatePicker, DatePickerSelectedValueChangedEventArgs> SelectedDateChanged; } public sealed class DatePickerSelectedValueChangedEventArgs public class DropDownButton : Button public class DropDownButtonAutomationPeer : ButtonAutomationPeer, IExpandCollapseProvider public class Frame : ContentControl, INavigate { bool IsNavigationStackEnabled { get; set; } public static DependencyProperty IsNavigationStackEnabledProperty { get; } bool NavigateToType(TypeName sourcePageType, object parameter, FrameNavigationOptions navigationOptions); } public class Grid : Panel { BackgroundSizing BackgroundSizing { get; set; } public static DependencyProperty BackgroundSizingProperty { get; } } public class IconSourceElement : IconElement public interface IScrollAnchorProvider public class MenuBar : Control public class MenuBarItem : Control public class MenuBarItemFlyout : MenuFlyout public class NavigationView : ContentControl { UIElement ContentOverlay { get; set; } public static DependencyProperty ContentOverlayProperty { get; } bool IsPaneVisible { get; set; } public static DependencyProperty IsPaneVisibleProperty { get; } NavigationViewOverflowLabelMode OverflowLabelMode { get; set; } public static DependencyProperty OverflowLabelModeProperty { get; } UIElement PaneCustomContent { get; set; } public static DependencyProperty PaneCustomContentProperty { get; } NavigationViewPaneDisplayMode PaneDisplayMode { get; set; } public static DependencyProperty PaneDisplayModeProperty { get; } UIElement PaneHeader { get; set; } public static DependencyProperty PaneHeaderProperty { get; } NavigationViewSelectionFollowsFocus SelectionFollowsFocus { get; set; } public static DependencyProperty SelectionFollowsFocusProperty { get; } NavigationViewShoulderNavigationEnabled ShoulderNavigationEnabled { get; set; } public static DependencyProperty ShoulderNavigationEnabledProperty { get; } NavigationViewTemplateSettings TemplateSettings { get; } public static DependencyProperty TemplateSettingsProperty { get; } } public class NavigationViewItem : NavigationViewItemBase { bool SelectsOnInvoked { get; set; } public static DependencyProperty SelectsOnInvokedProperty { get; } } public sealed class NavigationViewItemInvokedEventArgs { NavigationViewItemBase InvokedItemContainer { get; } NavigationTransitionInfo RecommendedNavigationTransitionInfo { get; } } public enum NavigationViewOverflowLabelMode public enum NavigationViewPaneDisplayMode public sealed class NavigationViewSelectionChangedEventArgs { NavigationTransitionInfo RecommendedNavigationTransitionInfo { get; } NavigationViewItemBase SelectedItemContainer { get; } } public enum NavigationViewSelectionFollowsFocus public enum NavigationViewShoulderNavigationEnabled public class NavigationViewTemplateSettings : DependencyObject public class Panel : FrameworkElement { BrushTransition BackgroundTransition { get; set; } } public sealed class PasswordBox : Control { bool CanPasteClipboardContent { get; } public static DependencyProperty CanPasteClipboardContentProperty { get; } object Description { get; set; } public static DependencyProperty DescriptionProperty { get; } FlyoutBase SelectionFlyout { get; set; } public static DependencyProperty SelectionFlyoutProperty { get; } void PasteFromClipboard(); } public class RelativePanel : Panel { BackgroundSizing BackgroundSizing { get; set; } public static DependencyProperty BackgroundSizingProperty { get; } } public class 
RichEditBox : Control { object Description { get; set; } public static DependencyProperty DescriptionProperty { get; } FlyoutBase ProofingMenuFlyout { get; } public static DependencyProperty ProofingMenuFlyoutProperty { get; } FlyoutBase SelectionFlyout { get; set; } public static DependencyProperty SelectionFlyoutProperty { get; } RichEditTextDocument TextDocument { get; } event TypedEventHandler<RichEditBox, RichEditBoxSelectionChangingEventArgs> SelectionChanging; } public sealed class RichEditBoxSelectionChangingEventArgs public sealed class RichTextBlock : FrameworkElement { FlyoutBase SelectionFlyout { get; set; } public static DependencyProperty SelectionFlyoutProperty { get; } void CopySelectionToClipboard(); } public sealed class ScrollContentPresenter : ContentPresenter { bool CanContentRenderOutsideBounds { get; set; } public static DependencyProperty CanContentRenderOutsideBoundsProperty { get; } bool SizesContentToTemplatedParent { get; set; } public static DependencyProperty SizesContentToTemplatedParentProperty { get; } } public sealed class ScrollViewer : ContentControl, IScrollAnchorProvider { bool CanContentRenderOutsideBounds { get; set; } public static DependencyProperty CanContentRenderOutsideBoundsProperty { get; } UIElement CurrentAnchor { get; } double HorizontalAnchorRatio { get; set; } public static DependencyProperty HorizontalAnchorRatioProperty { get; } bool ReduceViewportForCoreInputViewOcclusions { get; set; } public static DependencyProperty ReduceViewportForCoreInputViewOcclusionsProperty { get; } double VerticalAnchorRatio { get; set; } public static DependencyProperty VerticalAnchorRatioProperty { get; } event TypedEventHandler<ScrollViewer, AnchorRequestedEventArgs> AnchorRequested; public static bool GetCanContentRenderOutsideBounds(DependencyObject element); void RegisterAnchorCandidate(UIElement element); public static void SetCanContentRenderOutsideBounds(DependencyObject element, bool canContentRenderOutsideBounds); void UnregisterAnchorCandidate(UIElement element); } public class SplitButton : ContentControl public class SplitButtonAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider, IInvokeProvider public sealed class SplitButtonClickEventArgs public class StackPanel : Panel, IInsertionPanel, IScrollSnapPointsInfo { BackgroundSizing BackgroundSizing { get; set; } public static DependencyProperty BackgroundSizingProperty { get; } } public sealed class TextBlock : FrameworkElement { FlyoutBase SelectionFlyout { get; set; } public static DependencyProperty SelectionFlyoutProperty { get; } void CopySelectionToClipboard(); } public class TextBox : Control { bool CanPasteClipboardContent { get; } public static DependencyProperty CanPasteClipboardContentProperty { get; } bool CanRedo { get; } public static DependencyProperty CanRedoProperty { get; } bool CanUndo { get; } public static DependencyProperty CanUndoProperty { get; } object Description { get; set; } public static DependencyProperty DescriptionProperty { get; } FlyoutBase ProofingMenuFlyout { get; } public static DependencyProperty ProofingMenuFlyoutProperty { get; } FlyoutBase SelectionFlyout { get; set; } public static DependencyProperty SelectionFlyoutProperty { get; } event TypedEventHandler<TextBox, TextBoxSelectionChangingEventArgs> SelectionChanging; void ClearUndoRedoHistory(); void CopySelectionToClipboard(); void CutSelectionToClipboard(); void PasteFromClipboard(); void Redo(); void Undo(); } public sealed class TextBoxSelectionChangingEventArgs public class 
TextCommandBarFlyout : CommandBarFlyout public class TimePicker : Control { IReference<TimeSpan> SelectedTime { get; set; } public static DependencyProperty SelectedTimeProperty { get; } event TypedEventHandler<TimePicker, TimePickerSelectedValueChangedEventArgs> SelectedTimeChanged; } public sealed class TimePickerSelectedValueChangedEventArgs public class ToggleSplitButton : SplitButton public class ToggleSplitButtonAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider, IToggleProvider public sealed class ToggleSplitButtonIsCheckedChangedEventArgs public class ToolTip : ContentControl { IReference<Rect> PlacementRect { get; set; } public static DependencyProperty PlacementRectProperty { get; } } public class TreeView : Control { bool CanDragItems { get; set; } public static DependencyProperty CanDragItemsProperty { get; } bool CanReorderItems { get; set; } public static DependencyProperty CanReorderItemsProperty { get; } Style ItemContainerStyle { get; set; } public static DependencyProperty ItemContainerStyleProperty { get; } StyleSelector ItemContainerStyleSelector { get; set; } public static DependencyProperty ItemContainerStyleSelectorProperty { get; } TransitionCollection ItemContainerTransitions { get; set; } public static DependencyProperty ItemContainerTransitionsProperty { get; } object ItemsSource { get; set; } public static DependencyProperty ItemsSourceProperty { get; } DataTemplate ItemTemplate { get; set; } public static DependencyProperty ItemTemplateProperty { get; } DataTemplateSelector ItemTemplateSelector { get; set; } public static DependencyProperty ItemTemplateSelectorProperty { get; } event TypedEventHandler<TreeView, TreeViewDragItemsCompletedEventArgs> DragItemsCompleted; event TypedEventHandler<TreeView, TreeViewDragItemsStartingEventArgs> DragItemsStarting; DependencyObject ContainerFromItem(object item); DependencyObject ContainerFromNode(TreeViewNode node); object ItemFromContainer(DependencyObject container); TreeViewNode NodeFromContainer(DependencyObject container); } public sealed class TreeViewCollapsedEventArgs { object Item { get; } } public sealed class TreeViewDragItemsCompletedEventArgs public sealed class TreeViewDragItemsStartingEventArgs public sealed class TreeViewExpandingEventArgs { object Item { get; } } public class TreeViewItem : ListViewItem { bool HasUnrealizedChildren { get; set; } public static DependencyProperty HasUnrealizedChildrenProperty { get; } object ItemsSource { get; set; } public static DependencyProperty ItemsSourceProperty { get; } } public sealed class WebView : FrameworkElement { event TypedEventHandler<WebView, WebViewWebResourceRequestedEventArgs> WebResourceRequested; } public sealed class WebViewWebResourceRequestedEventArgs } namespace Windows.UI.Xaml.Controls.Maps { public enum MapTileAnimationState public sealed class MapTileBitmapRequestedEventArgs { int FrameIndex { get; } } public class MapTileSource : DependencyObject { MapTileAnimationState AnimationState { get; } public static DependencyProperty AnimationStateProperty { get; } bool AutoPlay { get; set; } public static DependencyProperty AutoPlayProperty { get; } int FrameCount { get; set; } public static DependencyProperty FrameCountProperty { get; } TimeSpan FrameDuration { get; set; } public static DependencyProperty FrameDurationProperty { get; } void Pause(); void Play(); void Stop(); } public sealed class MapTileUriRequestedEventArgs { int FrameIndex { get; } } } namespace Windows.UI.Xaml.Controls.Primitives { public class 
CommandBarFlyoutCommandBar : CommandBar public sealed class CommandBarFlyoutCommandBarTemplateSettings : DependencyObject public class FlyoutBase : DependencyObject { bool AreOpenCloseAnimationsEnabled { get; set; } public static DependencyProperty AreOpenCloseAnimationsEnabledProperty { get; } bool InputDevicePrefersPrimaryCommands { get; } public static DependencyProperty InputDevicePrefersPrimaryCommandsProperty { get; } bool IsOpen { get; } public static DependencyProperty IsOpenProperty { get; } FlyoutShowMode ShowMode { get; set; } public static DependencyProperty ShowModeProperty { get; } public static DependencyProperty TargetProperty { get; } void ShowAt(DependencyObject placementTarget, FlyoutShowOptions showOptions); } public enum FlyoutPlacementMode { Auto = 13, BottomEdgeAlignedLeft = 7, BottomEdgeAlignedRight = 8, LeftEdgeAlignedBottom = 10, LeftEdgeAlignedTop = 9, RightEdgeAlignedBottom = 12, RightEdgeAlignedTop = 11, TopEdgeAlignedLeft = 5, TopEdgeAlignedRight = 6, } public enum FlyoutShowMode public class FlyoutShowOptions public class NavigationViewItemPresenter : ContentControl } namespace Windows.UI.Xaml.Core.Direct { public interface IXamlDirectObject public sealed class XamlDirect public struct XamlDirectContract public enum XamlEventIndex public enum XamlPropertyIndex public enum XamlTypeIndex } namespace Windows.UI.Xaml.Hosting { public class DesktopWindowXamlSource : IClosable public sealed class DesktopWindowXamlSourceGotFocusEventArgs public sealed class DesktopWindowXamlSourceTakeFocusRequestedEventArgs public sealed class WindowsXamlManager : IClosable public enum XamlSourceFocusNavigationReason public sealed class XamlSourceFocusNavigationRequest public sealed class XamlSourceFocusNavigationResult } namespace Windows.UI.Xaml.Input { public sealed class CanExecuteRequestedEventArgs public sealed class ExecuteRequestedEventArgs public sealed class FocusManager { public static event EventHandler<GettingFocusEventArgs> GettingFocus; public static event EventHandler<FocusManagerGotFocusEventArgs> GotFocus; public static event EventHandler<LosingFocusEventArgs> LosingFocus; public static event EventHandler<FocusManagerLostFocusEventArgs> LostFocus; } public sealed class FocusManagerGotFocusEventArgs public sealed class FocusManagerLostFocusEventArgs public sealed class GettingFocusEventArgs : RoutedEventArgs { Guid CorrelationId { get; } } public sealed class LosingFocusEventArgs : RoutedEventArgs { Guid CorrelationId { get; } } public class StandardUICommand : XamlUICommand public enum StandardUICommandKind public class XamlUICommand : DependencyObject, ICommand } namespace Windows.UI.Xaml.Markup { public sealed class FullXamlMetadataProviderAttribute : Attribute public interface IXamlBindScopeDiagnostics public interface IXamlType2 : IXamlType } namespace Windows.UI.Xaml.Media { public class Brush : DependencyObject, IAnimationObject { void PopulatePropertyInfo(string propertyName, AnimationPropertyInfo propertyInfo); virtual void PopulatePropertyInfoOverride(string propertyName, AnimationPropertyInfo animationPropertyInfo); } } namespace Windows.UI.Xaml.Media.Animation { public class BasicConnectedAnimationConfiguration : ConnectedAnimationConfiguration public sealed class ConnectedAnimation { ConnectedAnimationConfiguration Configuration { get; set; } } public class ConnectedAnimationConfiguration public class DirectConnectedAnimationConfiguration : ConnectedAnimationConfiguration public class GravityConnectedAnimationConfiguration : 
ConnectedAnimationConfiguration public enum SlideNavigationTransitionEffect public sealed class SlideNavigationTransitionInfo : NavigationTransitionInfo { SlideNavigationTransitionEffect Effect { get; set; } public static DependencyProperty EffectProperty { get; } } } namespace Windows.UI.Xaml.Navigation { public class FrameNavigationOptions } namespace Windows.Web.UI { public interface IWebViewControl2 public sealed class WebViewControlNewWindowRequestedEventArgs { IWebViewControl NewWindow { get; set; } Deferral GetDeferral(); } public enum WebViewControlPermissionType { ImmersiveView = 6, } } namespace Windows.Web.UI.Interop { public sealed class WebViewControl : IWebViewControl, IWebViewControl2 { event TypedEventHandler<WebViewControl, object> GotFocus; event TypedEventHandler<WebViewControl, object> LostFocus; void AddInitializeScript(string script); } }
Removals:
namespace Windows.Gaming.UI { public sealed class GameMonitor public enum GameMonitoringPermission }
The post Windows 10 SDK Preview Build 17749 available now! appeared first on Windows Developer Blog.
Book review: SQL Server 2017 Machine Learning Services with R
If you want to do statistical analysis or machine learning with data in SQL Server, you can of course extract the data from SQL Server and then analyze it in R or Python. But a better way is to run R or Python within the database, using Microsoft ML Services in SQL Server 2017. Why?
- It's faster. Not only do you get to use the SQL Server instance (which is likely to be faster than your local machine), but it also means you no longer have to transport the data over a network, which is likely to be the biggest factor, particularly when working with large data sets.
- It operationalizes your analysis. Rather than running an ad-hoc analysis on your desktop, you can publish it as a stored procedure and make your analysis available on demand as a SQL query (see the sketch after this list).
- It's more powerful. Not only can you use all of open-source R and Python within SQL Server, but you also get access to dedicated Microsoft libraries in R/Python including RevoScaleR/revoscalepy and MicrosoftML/microsoftml, which provide algorithms optimized for large data sets in SQL Server.
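To make the operationalization point concrete, here is a minimal, hypothetical sketch of a .NET client calling such a stored procedure. It is not from the book: the procedure name (dbo.uspPredictChurn), its parameter, and the connection string are assumptions. The idea is simply that the procedure wraps an R model via sp_execute_external_script inside SQL Server, and the application consumes it like any other query.
using System;
using System.Data;
using System.Data.SqlClient;

class ChurnScoringClient
{
    static void Main()
    {
        // Connection string and stored procedure name are illustrative only.
        var connectionString = "Server=myserver;Database=retail;Integrated Security=true";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.uspPredictChurn", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", 42);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The procedure runs the R model in-database (via sp_execute_external_script)
                    // and returns an ordinary result set to the caller.
                    Console.WriteLine($"Churn probability: {reader["ChurnProbability"]}");
                }
            }
        }
    }
}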
If you'd like to learn how to run R within SQL Server, a great resource is the book SQL Server 2017 Machine Learning Services with R, by Tomaž Kaštrun and Julie Koesmarno. At over 300 pages, the book covers every detail, including installation and configuration, an overview of data analysis in R, operationalizing R code in SQL Server and via Power BI, and using the aforementioned extension libraries for R. It's mostly aimed at R-using data scientists, but it does include a chapter that provides an introduction to R for database administrators as well. In short, it contains everything you need to know to build production applications with R and SQL Server 2017 (or 2016), backed up with a wealth of examples from industry.
You can purchase a paperback copy from your favorite bookseller, and you can also get a discounted digital edition from the publisher at the link below.
Packt Publishing: SQL Server 2017 Machine Learning Services with R
Introduction to Azure Durable Functions
Azure Durable Functions is a new programming model built on Microsoft's serverless platform, Azure Functions. It allows you to write a workflow as code and have it execute with the scalability and reliability of serverless, at high throughput.
Scenario
Initially, I wanted to index data from GitHub repositories. I explained it all in a previous post, but I’ll drill down into the concepts today.
To summarize the post, I wanted to retrieve a list of repositories that I wanted to index. Then, query each repository for more detailed information. Finally, save the data into a database.
Today, we’ll do something similar but to keep things simple, I’ll save the content to Azure Blob Storage.
If you want to follow along, I’ve created a sample that contains all the code in this post.
Let’s get started.
Concepts
Besides the common serverless concepts, Azure Durable Functions introduces additional concepts that make developing complex workflows easy.
Let’s start with the basic concepts and work our way up.
Definition of a Function
A serverless function on Azure is a unit of work that is, in C#, a single method from a static class. Azure Functions lets you pay only for what you use and scales that piece of code for you.
Generous free quotas are also given to allow you to experiment quickly and cheaply in the cloud.
In C#, we can write a simple Function with the following code.
public static class MyFunction
{
[FunctionName("MyFirstFunction")]
public static void MyFirstFunction()
{
//...
}
}
Definition of a trigger
A trigger is an event to which a piece of code responds.
It is part of what is called a binding, which hooks a method, a variable, or a parameter of your code to external systems.
Bindings come in three types, but we're only interested in the last one. For more information on bindings, you can read the concepts page.
- Input
- Output
- Trigger
By looking at the list of supported bindings, we see a Trigger column. Triggers include storage events (Blob Storage, Cosmos DB, and others) as well as eventing systems such as Event Grid and Event Hubs.
The most common triggers, however, are Http, which responds to an HTTP request from a user, and Timer, which lets you run code on a specified schedule.
Those triggers are applied directly on a parameter of a function.
public static class MyFunction
{
// The timer trigger is based on a CRON expression.
// {second} {minute} {hour} {day} {month} {day-of-week}
// In this case, the timer is every 5 minutes.
[FunctionName("MyTimerTrigger")]
public static void MyTimerTrigger([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer) { /* ... */ }
[FunctionName("HttpTriggerCSharp")]
public static async Task Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req) { /* ... */}
}
Those are traditional functions that you find in a serverless application deployed on Azure. They are simple and respond to events.
What if you need something more complicated? What about calling functions from other functions? For example, invoking one function for every element of a list, then reconciling the results and saving them to a file.
Writing a complex workflow with simple code is where Durable Functions comes in.
Definition of an Orchestrator Function
An Orchestrator is an Azure Function with specific behaviors attached to it. For example, it automatically sets checkpoints during its execution, shuts itself down while waiting for other functions to finish executing, and then replays through those checkpoints to resume execution.
They are at the very heart of how Durable Functions works.
It is important to remember how an Orchestrator function operates differently than a standard function. While a normal function only executes once per event, the orchestrator is restarted many times while waiting for other functions to complete.
This kind of behavior means that this function needs to be deterministic. It must return the same result each time. It is crucial, then, not to use DateTime.Now, Guid.NewGuid(), or anything else that generates a different result in this method. More information is available on the checkpoint and replay pattern that Orchestrators use internally.
An Orchestrator function is a function with an OrchestrationTrigger on a DurableOrchestrationContext object parameter.
[FunctionName("Orchestrator")]
public static async Task<string> RunOrchestrator([OrchestrationTrigger] DurableOrchestrationContext context)
{
return context.InstanceId;
}
It is invoked through a DurableOrchestrationClient parameter decorated with an OrchestrationClientAttribute, injected into an ordinary function with any trigger. Let's take a Timer trigger as an example.
[FunctionName("MyTimerTrigger")]
public static async Task MyTimerTrigger([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, [OrchestrationClient]DurableOrchestrationClient starter)
{
// Start a new instance of the Orchestrator function.
string instanceId = await starter.StartNewAsync("Orchestrator", null);
/* ... */
}
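The orchestrator in the GitHub sample is actually started from an HTTP-triggered function named Orchestrator_HttpStart rather than a timer. Here's a rough sketch of what such a starter might look like; the route, authorization level, and body handling are assumptions rather than the sample's exact code, and the usual System.Net.Http and Durable Functions usings are assumed:
public static class OrchestratorHttpStart
{
    [FunctionName("Orchestrator_HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient starter)
    {
        // Read the organization name from the request body and pass it to the orchestrator.
        string organizationName = await req.Content.ReadAsStringAsync();
        string instanceId = await starter.StartNewAsync("Orchestrator", organizationName);

        // Return URLs the caller can poll to check the status of the running instance.
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}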
At this point, we have an orchestrator that is ready to run other functions.
Definition of an Activity Function
Not every function can be used with an orchestrator. A function becomes an Activity Function when an ActivityTriggerAttribute is applied to a DurableActivityContext parameter.
[FunctionName("MyActivity")]
public static async Task MyActivity([ActivityTrigger] DurableActivityContext context)
{
/* ... */
}
Activities are part of the checkpoint and replay pattern described previously. Among the differences from other functions, they do not respond to any trigger other than an orchestrator calling them.
An activity is a perfect place to have code accessing a database, sending emails, or calling any other external system. Activities that have been called and returned have their results cached. Meaning that returning 1,000 rows from a database is done only once, not multiple times.
As an example, our sample uses an activity function to retrieve the list of repositories from which we want to extract more data.
[FunctionName("GetAllRepositoriesForOrganization")]
public static async Task<List<(long Id, string Name)>> GetAllRepositoriesForOrganization([ActivityTrigger] DurableActivityContext context)
{
// retrieves the organization name from the Orchestrator function
var organizationName = context.GetInput<string>();
// invoke the GitHub API to retrieve the list of repositories of a specific organization
// (github is an Octokit client created elsewhere in the sample)
var repositories = (await github.Repository.GetAllForOrg(organizationName)).Select(x => (x.Id, x.Name)).ToList();
return repositories;
}
An Activity Function is not re-executed for the same Orchestrator instance. In this scenario, we preserve the list of repositories without invoking the API again.
This caching mechanism gives us significant leverage while orchestrating complex workflows. First, it allows us to keep the list of data on which we operate fixed. Second, since the orchestrator may call that function multiple times, it’s essential to have the result cached to avoid crashing external systems. Finally, it becomes possible to debug what we send/return to activities since every execution is logged transactionally by the Durable Functions component.
Code as orchestration
With those concepts clarified, let’s look at Code as Orchestration.
Workflows in the past were traditionally built with either a graphical designer or a data persistence format (XML, JSON, YAML, or other). Those are easy to create and understand for small workflows. However, they become tough to debug, and it's hard to pinpoint where a failure occurs.
On top of that, they were never meant to run on high-scale infrastructure where one part of the workflow needs to scale differently from another.
Let’s take a simple example.
I want to retrieve the list of opened issues for all repositories within a GitHub organization and save the count somewhere in the cloud.
Here’s how it looks in pseudo-code.
repositories = getAllRepositories(organizationName);
foreach( repository in repositories)
repository.getOpenedIssuesCount()
cloud.save(repository)
Here’s how you would represent it with Azure Durable Functions.
It is the same code found in the sample on GitHub.
[FunctionName("Orchestrator")]
public static async Task<string> RunOrchestrator(
[OrchestrationTrigger] DurableOrchestrationContext context)
{
// retrieves the organization name from the Orchestrator_HttpStart function
var organizationName = context.GetInput<string>();
// retrieves the list of repositories for an organization by invoking a separate Activity Function.
var repositories = await context.CallActivityAsync<List<(long Id, string Name)>>("GetAllRepositoriesForOrganization", organizationName);
// Creates an array of tasks to store the result of each function
// (the element type is reconstructed here: each GetOpenedIssues call returns the repository plus its open-issue count)
var tasks = new Task<(long Id, string Name, int OpenedIssues)>[repositories.Count];
for (int i = 0; i < repositories.Count; i++)
{
// Starting a `GetOpenedIssues` activity WITHOUT `await`
// This starts the Activity Functions in parallel instead of sequentially.
tasks[i] = context.CallActivityAsync<(long Id, string Name, int OpenedIssues)>("GetOpenedIssues", repositories[i]);
}
// Wait for all Activity Functions to complete execution
await Task.WhenAll(tasks);
// Retrieve the result of each Activity Function and return them in a list
var openedIssues = tasks.Select(x => x.Result).ToList();
// Send the list to an Activity Function to save them to Blob Storage.
await context.CallActivityAsync("SaveRepositories", openedIssues);
return context.InstanceId;
}
Let’s walk through this orchestrator line by line. I retrieve an organization name from the function that triggered the Orchestrator. The input mechanism allows me to reuse the code for more than a single purpose. Then, I invoke a function that returns me a list of all repositories.
Once the list is returned, and not before, we invoke as many GetOpenedIssues functions as necessary to complete our task. Note that we are not using the await keyword here, meaning that they are executing in parallel instead of sequentially.
Finally, we wait for all the functions to complete their execution before sending the compiled results to a SaveRepositories function to save them to Blob Storage.
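Neither of those two activity functions is reproduced in this post, so here is a rough sketch of what they might look like. It reuses the Octokit client (github) from earlier and the Azure Storage SDK; the return types, container name, and blob name are assumptions rather than the sample's exact code:
[FunctionName("GetOpenedIssues")]
public static async Task<(long Id, string Name, int OpenedIssues)> GetOpenedIssues(
    [ActivityTrigger] DurableActivityContext context)
{
    var repository = context.GetInput<(long Id, string Name)>();

    // Count the open issues for this repository via the GitHub API.
    var request = new RepositoryIssueRequest { State = ItemStateFilter.Open };
    var issues = await github.Issue.GetAllForRepository(repository.Id, request);

    return (repository.Id, repository.Name, issues.Count);
}

[FunctionName("SaveRepositories")]
public static async Task SaveRepositories([ActivityTrigger] DurableActivityContext context)
{
    var repositories = context.GetInput<List<(long Id, string Name, int OpenedIssues)>>();

    // Write the results as a JSON blob; the storage connection comes from app settings.
    var account = CloudStorageAccount.Parse(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
    var container = account.CreateCloudBlobClient().GetContainerReference("repositories");
    await container.CreateIfNotExistsAsync();

    var blob = container.GetBlockBlobReference("opened-issues.json");
    await blob.UploadTextAsync(JsonConvert.SerializeObject(repositories));
}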
With as few as eight lines of code, we managed to build a complex workflow, without any XML or JSON, that your team can understand quickly.
Give it a try now by downloading the sample and running it! If you want to run it in the cloud, get a free Azure subscription and take advantage of free Azure Functions.
Resources
To get more information on Azure Durable Functions, here's some reading material.
Visual Studio Code August 2018
Always Be Closing…Pull Requests
I was looking at a Well Known Open Source Project on GitHub today. It had like 978 Pull Requests. A "PR" means "hey here's some code I did for your project, you can PULL it from here and merge it into your code!"
But these were Open Pull Requests. Pending. Limbo Pull Requests. Dating back to 2015.
Why do Pull Requests stay open?
Why do projects keep Pull Requests open? What's a reasonable amount of time? Here are a few thoughts.
- PR as Call to Action
- PRs are a shout. They are HERE IS SOME CODE and they create work for the maintainer. They are needy things and require review and merging, but even worse, sometimes manual merging. Plus for folks new to Git and Open Source, asking them to "rebase on top of latest" may be enough for them to just give up.
- Fear of Closing
- If you close a PR without merging it, it's a rejection. It's a statement that this work isn't going to be used, and there's always a chance that the person who did the work will feel pretty bad about it.
- Abandoned
- Sometimes the originator of the PR disappears. The PR is effectively abandoned. These should be closed after a time.
- Opened so long they can't be merged
- The problem with PRs that are open for long is that they become impossible to merge. The cost of understanding whether they are still relevant plus resolving the merge conflicts might be higher than the value of the PR itself.
- Incorrectly created
- A PR originator may intend to change a single word (a misspelling), but if their PR also changes CRs to LFs or tabs to spaces, it's a hassle.
- Formatting
- It's generally considered poor form to send a PR out of the blue where one just ran a linter or formatter. If the project wanted that done they'd ask for it.
- Totally not aligned with Roadmap
- If a PR shows up without context or communication, it may not be aligned with the direction of the project.
- Surprise PR
- Unfortunately some PRs show up out of the blue with major changes, file moves, or no context. If a PR wasn't asked for, or wasn't borne of an Issue, you'll likely have trouble pushing it through.
Thanks to Jon and Immo for their thoughts on this (likely incomplete) list. Jess Frazelle has a great post on "The Art of Closing" that I just found, and it includes a glorious gif from Glengarry Glen Ross where Always Be Closing comes from (warning, clip has dated and offensive language).
Jess suggests a few ways to Always Be Closing.
Two things that can help make your open source project successful AND stay tidy!
- including a CONTRIBUTING.md
- GitHub has some good guidance https://help.github.com/articles/setting-up-your-project-for-healthy-contributions/ and most of the dotnet repos have some decent contribution guidelines.
- I LOVE Gina Häußge's Contributing.md on her open source project "OctoPrint."
- using Pull Request templates that give clear guidance on how to submit a successful pull request.
- Use bots to test and build PRs, sign CLAs (Contributor License Agreements) and move the ball forward.
What do you think? Why do PRs stay open?
Python in Visual Studio Code – August 2018 Release
We are pleased to announce that the August 2018 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the VS Code documentation.
In this release we have closed a total of 38 issues including the stable release of our ptvsd 4 debugger, improvements to the language server preview, and other fixes.
Faster, more reliable debugging with ptvsd 4
In this release we are updating all users to the ptvsd 4.1.1 version of our Python debugger, providing a significant improvement to debugging performance and stability over the previous ptvsd 3.0 version. We originally announced an opt-in preview of ptvsd 4 in the February release of the Python extension, and have been continuing to improve on it based on user feedback.
The new debug engine is built on top of the open source pydevd, which has allowed us to take advantage of its superior performance and support for third party libraries.
The following features are also available with the new debugger:
- Support for the breakpoint() built-in added in Python 3.7: you can now break execution by adding a breakpoint() call to your code
- Python support for the Logpoints feature added in the March iteration of VS Code
Remote debugging is easier to use and improved; previously you had to install the exact version of ptvsd used in VS Code on the remote server. Now the version on the remote machine just needs to be a 4.x version.
You can also now start remote debugging from the command line:
pip install --upgrade ptvsd
python3 -m ptvsd --host 1.2.3.4 --port 3000 -m myproject
Once the server starts you can attach to it from VS Code. We are continuing to make improvements to the debugger, so stay tuned in our future releases.
Improvements to the Language Server Preview
Last month we released the Python Language Server, our Python analysis engine from Visual Studio hosted inside of VS Code. This allowed us to provide faster & richer completions including support for typeshed definitions. We have made the following improvements in this release:
- Language server now populates document outline with all symbols instead of just top-level ones. (#2050)
- Fixed issue in the language server when documentation for a function always produced "Documentation is still being calculated, please try again soon". (#2179)
- Fix null reference exception in the language server causing server initialization to fail. The exception happened when search paths contained a folder that did not exist. (#2017)
- Fixed language server issue when it could enter infinite loop reloading modules. (#2207)
- Language server now correctly handles the with statement when __enter__ is declared in a base class. (#2240)
- Fixed issue in the language server when typing dot under certain conditions produced a null reference exception. (#2262)
- Language server now correctly merges data from typeshed and the Python library. (#2345)
- Support for switching virtual environments
- Code lenses for unit tests when using the language server
Various Fixes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. The full list of improvements is listed in our changelog, some notable improvements are:
- Ensure test count values in the status bar represent the correct number of tests that were discovered and run. (#2143)
- Ensure workspace pipenv environment is not labeled as a virtual env. (#2223)
- Fix visualstudio_py_testLauncher to stop breaking out of test discovery too soon. (#2241)
- Fix error when switching from the new language server to the old Jedi language server. (#2281)
- Ensure stepping out of debugged code does not take the user into PTVSD debugger code. (#767)
Be sure to download the Python extension for VS Code now to try out the above improvements. If you run into any issues be sure to file an issue on the Python VS Code GitHub page.
Current use cases for machine learning in retail and consumer goods
This blog was co-authored by Marty Donovan.
Retail and consumer goods companies are seeing the applicability of machine learning (ML) to drive improvements in customer service and operational efficiency. For example, the Azure cloud is helping retail and consumer brands improve the shopping experience by ensuring shelves are stocked and product is always available when, where and how the consumer wants to shop. Learn more by reading Retail and consumer goods use case: Inventory optimization through SKU assortment + machine learning.
Here are common use cases for ML in retail and consumer goods, along with resources for getting started with ML in Azure.
8 ML use cases to improve service and provide benefits of optimization, automation and scale
- Inventory optimization through SKU assortment + machine learning ensures shelves are stocked and the best products are always available for purchase.
- Recommendation Engine - Train Matchbox Recommender to modernize engine capabilities for relevant product and service offerings which can generate incremental revenue.
- Visual Search capitalizes on mobile-first, content rich, customer-centric search capabilities.
- Sentiment Analysis can help companies improve their products and services by better understanding how their offerings impact customers.
- Fraud Detection to detect anomalies and other errors that signal dishonest behavior.
- Demand forecasting and price optimization to meet consumer demand by creating a demand forecast at various price points, within business constraints, to maximize potential profit.
- Personalized offers improve the customer experience by offering relevant information which in turn provides retailers with improved data about the customer’s brand engagement.
- Customer Churn Prediction to improve strategic decision making regarding customer engagement and lifetime value.
All of these use cases can be addressed using machine learning.
Machine learning on Azure
Customers can build artificial intelligence (AI) applications that intelligently process and act on data, often in near real time. This helps organizations achieve more through increased speed and efficiency. Here are some resources to help you get started.
- Azure Machine Learning services enable building, deploying, and managing machine learning and AI models using any Python tools and libraries.
- Azure Data Science Virtual Machines are customized VM images on Azure, loaded with data science tools used to build intelligent applications for advanced analytics.
- Azure Machine Learning Studio, which comes with many algorithms out of the box.
- Azure AI Gallery, which showcases AI and ML algorithms and use cases for them.
Recommended next steps
- Complete the Azure Machine Learning services quickstart.
- Read Retail and consumer goods use case: Inventory optimization through SKU assortment + machine learning. This explains how consumer brands can leverage Azure to prevent stock-outs, ensuring shelves are stocked and products are always available.
Additional resources
- Download Machine learning algorithm cheat sheet to help choose an algorithm.
- What machine learning algorithm should I use? Read How to choose algorithms for Microsoft Azure Machine Learning for help answering this question.
- Machine learning at scale: AI/ML expert Paige Bailey takes you on a tour of the services available on Azure. See how to take your predictive model to production, dynamically train it online with streaming updates, and add real-time data to your models from IoT sources. Paige covers Azure Databricks, Batch AI, KubeFlow + AKS, Stream Analytics, and Event Hubs.
- Read Welcome to Machine Learning Server for an introduction to the Microsoft Machine Learning Server (formerly named “R Server”).
- Review the Solution templates for Machine Learning Server for industry-specific templates, including one for retail.
I post regularly about new developments on social media. If you would like to follow me, you can find me on LinkedIn.
Helping customers shift to a modern desktop
IT is complex. And that means it can be difficult to keep up with the day-to-day demands of your organization, let alone deliver technological innovation that drives the business forward. In desktop management, this is especially true: the process of creating standard images, deploying devices, testing updates, and providing end user support hasn’t changed much in years. It can be tedious, manual, and time consuming. We’re determined to change that with our vision for a modern desktop powered by Windows 10 and Office 365 ProPlus. A modern desktop not only offers end users the most productive, most secure computing experience—it also saves IT time and money so you can focus on driving business results.
Today, we’re pleased to make three announcements that help you make the shift to a modern desktop:
- Cloud-based analytics tools to make modern desktop deployment even easier.
- A program to ensure app compatibility for upgrades and updates of Windows and Office.
- Servicing and support changes to give you additional deployment flexibility.
Analytics to make modern desktop deployment easier
Collectively, you’ve told us that one of your biggest upgrade and update challenges is application testing. A critical part of any desktop deployment plan is analysis of existing applications—and the process of testing apps and remediating issues has historically been very manual and very time consuming. Microsoft 365 offers incredible tools today to help customers shift to a modern desktop, including System Center Configuration Manager, Microsoft Intune, Windows Analytics, and Office Readiness Toolkit. But we’ve felt like there’s even more we could do.
Today, we’re announcing that Windows Analytics is being expanded to Desktop Analytics—a new cloud-based service integrated with ConfigMgr and designed to create an inventory of apps running in the organization, assess app compatibility with the latest feature updates of Windows 10 and Office 365 ProPlus, and create pilot groups that represent the entire application and driver estate across a minimal set of devices.
The new Desktop Analytics service will provide insight and intelligence for you to make more informed decisions about the update readiness of your Windows and Office clients. You can then optimize pilot and production deployments with ConfigMgr. Combining data from your own organization with data aggregated from millions of devices connected to our cloud services, you can take the guesswork out of testing and focus your attention on key blockers. We'll share more information about Desktop Analytics and other modern desktop deployment tools at Ignite.
Standing behind our app compatibility promise
We’re also pleased to announce Desktop App Assure—a new service from Microsoft FastTrack designed to address issues with Windows 10 and Office 365 ProPlus app compatibility. Windows 10 is the most compatible Windows operating system ever, and using millions of data points from customer diagnostic data and the Windows Insider validation process, we’ve found that 99 percent of apps are compatible with new Windows updates. So you should generally expect that apps that work on Windows 7 will continue to work on Windows 10 and subsequent feature updates. But if you find any app compatibility issues after a Windows 10 or Office 365 ProPlus update, Desktop App Assure is designed to help you get a fix. Simply let us know by filing a ticket through FastTrack, and a Microsoft engineer will follow up to work with you until the issue is resolved. In short, Desktop App Assure operationalizes our Windows 10 and Office 365 ProPlus compatibility promise: We’ve got your back on app compatibility and are committed to removing it entirely as a blocker.
Desktop App Assure will be offered at no additional cost to Windows 10 Enterprise and Windows 10 Education customers. We’ll share more details on this new service at Ignite and will begin to preview this service in North America on October 1, 2018, with worldwide availability by February 1, 2019.
Servicing and support flexibility
Longer Windows 10 servicing for enterprises and educational institutions
In April 2017, we aligned the Windows 10 and Office 365 ProPlus update cadence to a predictable semi-annual schedule, targeting September and March. While many customers—including Mars and Accenture—have shifted to a modern desktop and are using the semi-annual channel to take updates regularly with great success, we’ve also heard feedback from some of you that you need more time and flexibility in the Windows 10 update cycle.
Based on that feedback, we’re announcing four changes:
- All currently supported feature updates of Windows 10 Enterprise and Education editions (versions 1607, 1703, 1709, and 1803) will be supported for 30 months from their original release date. This will give customers on those versions more time for change management as they move to a faster update cycle.
- All future feature updates of Windows 10 Enterprise and Education editions with a targeted release month of September (starting with 1809) will be supported for 30 months from their release date. This will give customers with longer deployment cycles the time they need to plan, test, and deploy.
- All future feature updates of Windows 10 Enterprise and Education editions with a targeted release month of March (starting with 1903) will continue to be supported for 18 months from their release date. This maintains the semi-annual update cadence as our north star and retains the option for customers that want to update twice a year.
- All feature releases of Windows 10 Home, Windows 10 Pro, and Office 365 ProPlus will continue to be supported for 18 months (this applies to feature updates targeting both March and September).
In summary, these new modern desktop support policies take effect starting in September 2018.
Windows 7 Extended Security Updates
As previously announced, Windows 7 extended support is ending January 14, 2020. While many of you are already well on your way in deploying Windows 10, we understand that everyone is at a different point in the upgrade process.
With that in mind, today we are announcing that we will offer paid Windows 7 Extended Security Updates (ESU) through January 2023. The Windows 7 ESU will be sold on a per-device basis and the price will increase each year. Windows 7 ESUs will be available to all Windows 7 Professional and Windows 7 Enterprise customers in Volume Licensing, with a discount to customers with Windows software assurance, Windows 10 Enterprise or Windows 10 Education subscriptions. In addition, Office 365 ProPlus will be supported on devices with active Windows 7 Extended Security Updates (ESU) through January 2023. This means that customers who purchase the Windows 7 ESU will be able to continue to run Office 365 ProPlus.
Please reach out to your partner or Microsoft account team for further details.
Support for Office 365 ProPlus on Windows 8.1 and Windows Server 2016
Office 365 ProPlus delivers cloud-connected and always up-to-date versions of the Office desktop apps. To support customers already on Office 365 ProPlus through their operating system transitions, we are updating the Windows system requirements for Office 365 ProPlus and revising some announcements that were made in February. We are pleased to announce the following updates to our Office 365 ProPlus system requirements:
- Office 365 ProPlus will continue to be supported on Windows 8.1 through January 2023, which is the end of support date for Windows 8.1.
- Office 365 ProPlus will also continue to be supported on Windows Server 2016 until October 2025.
Office 2016 connectivity support for Office 365 services
In addition, we are modifying the Office 365 services system requirements related to service connectivity. In February, we announced that starting October 13, 2020, customers will need Office 365 ProPlus or Office 2019 clients in mainstream support to connect to Office 365 services. To give you more time to transition fully to the cloud, we are now modifying that policy and will continue to support Office 2016 connections with the Office 365 services through October 2023.
Shift to a modern desktop
You’ve been talking, and we’ve been listening. Specifically, we’ve heard your feedback on desktop deployment, and we’re working hard to introduce new capabilities, services, and policies to help you on your way. The combination of Windows 10 and Office 365 ProPlus delivers the most productive, most secure end user computing experience available. But we recognize that it takes time to both upgrade devices and operationalize new update processes. Today’s announcements are designed to respond to your feedback and make it easier, faster, and cheaper to deploy a modern desktop. We know that there is still a lot of work to do. But we’re committed to working with you and systematically resolving any issues. We’d love to hear your thoughts and look forward to seeing you and discussing in more detail in the keynotes and sessions at Ignite in a few weeks!
The post Helping customers shift to a modern desktop appeared first on Microsoft 365 Blog.