
Get started with Collections in Microsoft Edge


We’re excited to announce that Collections is now enabled by default for all Microsoft Edge Insiders in the Canary and Dev channels (build 80.0.338.0 or later). Following our initial preview behind a feature flag two months ago, we have been adding in new features and functionality. For those who enabled the feature flag – thank you! We have been listening to your feedback and are excited to share the improvements we’ve made.

We designed Collections based on what you do on the web. If you’re a shopper, it will help you collect and compare items. If you’re an event or trip organizer, Collections will help pull together all your trip or event information as well as ideas to make your event or trip a success. If you’re a teacher or student, it will help you organize your web research and create your lesson plans or reports. Whatever you are doing on the web, Collections can help.

Recent improvements to Collections

We’ve been working hard to add more functionality and refine the feature over the last couple of months – some of these changes were directly informed by your feedback.

Here are some of the improvements we made, based on your input:

Access your collections across your devices: We’ve added sync to Collections. We know some of you have seen issues around sync; your feedback has been helping us improve. We know this is an important scenario and are ready for you to try it. When you are signed into Microsoft Edge preview builds with the same profile on different computers, Collections will sync between them.

Open all links in a collection into a new window: We’ve heard you’d like an easy way to open all sites saved in a collection. Try out “Open all” from the “Sharing and more” menu to open tabs in a new window, or from the context menu on a collection to open them as tabs in the current window so you can easily pick up where you left off. We’ve also heard that you want an easy way to save a group of tabs to a collection. This is something that we are actively working on and are excited to share when it is ready.

Edit card titles: You’ve been asking for the ability to rename the titles of items in collections, so they are easier for you to understand. Now you can. To edit a title, right click and choose “Edit” from the context menu. A dialog will appear giving you the ability to rename the title.

Dark theme in Collections: We know you love dark theme, and we want to make sure we provide a great experience in Collections. We’ve heard some feedback on notes which we’ve addressed. Try it out and let us know what you think.

“Try Collections” flyout: We heard that we were showing active users the “Try Collections” flyout even though they had previously used the feature. We’ve now tuned the flyout to be quieter.

Sharing a collection: You’ve told us that once you’ve collected content you want to share it with others. We have lots of work planned to better support sharing scenarios. One way you can share today is through the “Copy all” option added to the “Sharing and more” menu, or by selecting individual items and copying them via the “Copy” button in the toolbar.

Screenshot of Collections pane with “Sharing and more” menu opened with “Copy all” selected

Once you’ve copied items from your collection, you can then paste them into your favorite apps, like OneNote or email. If you are pasting into an app that supports HTML, you will get a rich copy of the content.

Screenshot of pasting a shopping list from Collections into Outlook

Try out Collections

You can get started by opening the Collections pane from the button next to the address bar.

When you open the Collections pane, select Start new collection and give it a name. As you browse, you can start to add content related to your collection.

Screenshot showing the Collections pane in Microsoft Edge

Send Feedback

Now that Collections is on by default, we hope that more of you will give it a try. Thank you again to all of you who have been using the feature and sending us feedback. If you think something’s not working right, or if there’s a capability you’d like to see added, please send us feedback using the smiley face icon in the top-right corner of the browser.

Screenshot highlighting the Send Feedback button in Microsoft Edge

Thanks for continuing to be a part of this preview!

The post Get started with Collections in Microsoft Edge appeared first on Microsoft Edge Blog.


We made Windows Server Core container images >40% smaller


Over the past year, we’ve been working with the Windows Server team to make Windows Server Core container images a lot smaller. They are now >40% smaller! The Windows Server team has already published the new images in the Server Core Insider Docker repo, and will eventually publish them to their stable repo with their 20H1 release. You can check them out for yourself. I’ll tell you how we did it and what you need to know to take advantage of the improvements.

Let’s start with the numbers:

  • Insider images are >40% smaller than the latest (patched) 1903 images.
  • Container startup into Windows PowerShell is 30-45% faster.

These measurements are based on images in the Windows Server Core insiders Docker repo. We used PowerShell as a proxy for any .NET Framework application, but also because we expect that PowerShell is used a lot in containers.

The improvements should apply to any scenario where you use Windows Server Core container images. They should be most beneficial and noticeable for scaling applications in production, CI/CD, and any other workflow that pulls images without the benefit of a Docker image cache or that needs faster startup.

A case of playing poorly with container layers

We started this project with the hypothesis that the way .NET Framework is packaged and installed does not play nicely with the way Docker layers work. We found that this was the case based on an investigation we did over a year ago.

As background, Docker creates a read-only layer for each command in a Dockerfile, like FROM, RUN and even ENV. If files are updated in multiple layers, you will end up carrying multiple copies of that file in the image even though there is only one copy in the final image layer (the one you see and use). We found that this situation was common with .NET Framework container images. This is similar to the way Git works with binaries, if you are familiar with that model. If this makes you cringe, then you are following along perfectly well.
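To make the layering effect concrete, here is a minimal, hypothetical Dockerfile (not from the .NET Framework repo) that stores the same 100 MB file twice, once per RUN layer. Running docker history on the resulting image would show each of the two RUN layers carrying roughly the full file size, even though the final filesystem contains a single copy:

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Layer 1: create a 100 MB file; it is stored in full in this layer
RUN fsutil file createnew C:\big.bin 104857600

# Layer 2: "updating" the same file stores a second full copy in a new layer
RUN del C:\big.bin && fsutil file createnew C:\big.bin 104857600
```

This is exactly the pattern that installing and then patching the .NET Framework in separate layers produces, just at a much larger scale.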

.NET Framework Dockerfiles are open source, so I will use them as examples in the rest of the post. They are also a good source of Docker-related techniques if you want to customize your own Dockerfiles further.

The Dockerfile for .NET Framework 4.8 on Windows Server Core 2019 demonstrates the anti-pattern we want to fix, updating files multiple times in different layers, as follows:

# escape=`

FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Install .NET 4.8
RUN curl -fSLo dotnet-framework-installer.exe https://download.visualstudio.microsoft.com/download/pr/7afca223-55d2-470a-8edc-6a1739ae3252/abd170b4b0ec15ad0222a809b761a036/ndp48-x86-x64-allos-enu.exe `
    && .\dotnet-framework-installer.exe /q `
    && del .\dotnet-framework-installer.exe `
    && powershell Remove-Item -Force -Recurse ${Env:TEMP}\*

# Apply latest patch
RUN curl -fSLo patch.msu http://download.windowsupdate.com/c/msdownload/update/software/updt/2019/09/windows10.0-kb4515843-x64_181da0224818b03254ff48178c3cd7f73501c9db.msu `
    && mkdir patch `
    && expand patch.msu patch -F:* `
    && del /F /Q patch.msu `
    && DISM /Online /Quiet /Add-Package /PackagePath:C:\patch\Windows10.0-kb4515843-x64.cab `
    && rmdir /S /Q patch

# ngen .NET Fx
ENV COMPLUS_NGenProtectedProcess_FeatureEnabled 0
RUN \Windows\Microsoft.NET\Framework64\v4.0.30319\ngen uninstall "Microsoft.Tpm.Commands, Version=10.0.0.0, Culture=Neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=amd64" `
    && \Windows\Microsoft.NET\Framework64\v4.0.30319\ngen update `
    && \Windows\Microsoft.NET\Framework\v4.0.30319\ngen update

Let’s dive into what the Dockerfile is actually doing. The first FROM line pulls Windows Server Core 2019, which includes .NET Framework 4.7.2. The following RUN line then installs .NET Framework 4.8 on top. The middle RUN line services .NET Framework 4.8 with the latest patches. The last RUN line runs the ngen tool to create or update NGEN images, if needed. Many files are updated multiple times by this series of commands, and each time a file is updated, the size of the image increases by the size of the new “duplicate” file.

In the worst case scenario, four copies of many files are created, and that doesn’t account for the fact that each file has IL and NGEN variants, for x86 and x64. The size explosion starts to become apparent and is hard to fully grasp without a full accounting in a spreadsheet.

Stepping back, not all file updates are equal. For example, you can (in theory) update a 1 KB text file 500 times before it will have the same impact as updating a 500 KB file once. We found that NGEN image files were the worst offender. NGEN images are generated by ngen.exe (which you see used in the Dockerfile) to improve startup performance. They are also big, typically 3x larger than their associated IL files. It quickly became clear that NGEN images were going to be a primary target for a solution.

Designing a container-friendly approach

Architecturally, we had three design characteristics that we wanted in a solution:

  • There should be a single copy of each file in the .NET Framework, across all container image layers published by Microsoft.
  • NGEN images that are created by default should align with default use cases.
  • Maintain startup performance as container image size is reduced.

The biggest risk was the last characteristic, about maintaining startup performance, given that our primary startup performance lever — NGEN — was the primary target for reducing container image size. You already know how the story ends from the introduction, but let’s keep digging, and look at what we did in preparation for Windows Server Core 20H1 images (what is in Insiders now).

Here’s what we did in the Windows Server Core base image layer:

  • Include a serviced copy of .NET Framework 4.8.
  • Remove all NGEN images, except for the three most critical ones, for both x86 and x64. These images are for mscorlib.dll, System.dll and System.Core.dll.

Here’s what we did in the .NET Framework runtime image layer:

  • NGEN assemblies used by Windows PowerShell and ASP.NET (and no more).
  • NGEN only 64-bit assemblies. The only 32-bit NGEN images are the three included in the Server Core base image.

You can see these changes in the Dockerfile for .NET Framework 4.8 on the Windows Server Core Insider image, as follows:

# escape=`

FROM mcr.microsoft.com/windows/servercore/insider:10.0.19023.1

# Enable detection of running in a container
ENV DOTNET_RUNNING_IN_CONTAINER=true

RUN `
    # Ngen top of assembly graph to optimize a set of frequently used assemblies
    \Windows\Microsoft.NET\Framework64\v4.0.30319\ngen install "Microsoft.PowerShell.Utility.Activities, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
    # To optimize 32-bit assemblies, uncomment the next line and add an escape character (`) to the end of the previous line
    # && \Windows\Microsoft.NET\Framework\v4.0.30319\ngen install "Microsoft.PowerShell.Utility.Activities, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"

This Dockerfile is much simpler, but we’ll still take a deeper look. The FROM statement pulls the Windows Server Core Insider base image layer, which already contains the (serviced) version of the .NET Framework we want. This is why there are no subsequent RUN statements that download and install later or serviced .NET Framework versions. The single RUN statement uses ngen to pre-compile a curated set of assemblies that we expect will benefit most .NET applications, but only for the 64-bit version of .NET Framework.

This much more streamlined approach has the following key benefits:

  • The Windows Server Core base image is now a lot smaller, which is a massive benefit for Windows applications that don’t use the .NET Framework. It still contains the .NET Framework, but with a much smaller set of NGEN images than the 1903 base image.
  • The .NET Framework container image is also significantly smaller because it is now constructed in a way that aligns much better with how Docker layers and images work, and it contains a much smaller, curated set of NGEN images.

In terms of guidance, this new approach means you should strongly prefer the .NET Framework runtime (or SDK) image if you are using Windows PowerShell or containerizing a .NET Framework application. It also means that it makes more sense to customize NGEN in your own Dockerfiles, since the images Microsoft produces contain far fewer NGEN images to start with.

Looking back at the new .NET Framework runtime Dockerfile, you can see that the last line is commented, which would otherwise NGEN assemblies for the 32-bit .NET Framework. You should consider uncommenting this line if you run 32-bit .NET Framework applications. You would need to either copy this line to your application Dockerfile (typically as the first line after the FROM statement) or use this Dockerfile as an alternative to using the .NET Framework runtime image.

If you use your own version of this Dockerfile, then you can customize it further. For example, you could target a smaller or different set of assemblies that are specifically chosen for only your application.
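For example, a custom Dockerfile along these lines could pre-compile only your own application instead of a broad profile. This is a sketch: the application name and layout are hypothetical, and the runtime image tag is just one of the published variants:

```dockerfile
# escape=`
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019

# Copy the application into the image (hypothetical layout)
COPY app C:\app

# Pre-compile just this application's entry assembly
RUN \Windows\Microsoft.NET\Framework64\v4.0.30319\ngen install C:\app\MyApp.exe

ENTRYPOINT ["C:\\app\\MyApp.exe"]
```

Targeting only your own assemblies keeps the NGEN layer small while still removing JIT cost from your application’s startup path.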

Performance of the new approach

I’m finally going to share the performance numbers! I’ll explain a few more things first, to make sure you’ve got the right context.

Like I said earlier, our primary goal wasn’t to improve the startup time of PowerShell or ASP.NET, but to maintain it as we reduced the size of container images. It turns out we did better than that, but let me ignore our achievements for a moment to make a point. If you are not familiar with containers, it may not be obvious how valuable achieving that goal really is. I’ll explain.

For many scenarios, image size ends up being a dominant startup cost because images need to be pulled across a network as part of docker run. This is the case for scenarios where the container environment doesn’t maintain an image cache (more common than you might think).

If the value here still isn’t popping, I’ll try an analogy (I really love analogies). Imagine you and I are racing two cars on a track. I’m racing a white one with a red maple leaf … OK, OK, the color doesn’t matter! The gun goes off and fans are expecting us to start tearing down the track. Instead of hitting the gas pedals, we jump out of the cars and first fill up our gas tanks, then jump back in and finally start moving forward to do the job we were paid to do (race the cars!). That’s basically what docker run has to do if you don’t have a local copy of an image.

With this improvement in place, we still have to jump out of the cars when the gun goes off, but your car’s tank is now half the size it was before, so filling it is much quicker. HOWEVER, the car still goes the same speed and distance. Unfortunately for me, you win the race because I’m stuck with the older version of the car! Unlucky me.

I’m going to stretch this analogy a little further. I said that your tank fills up in half the time now, but still goes the same speed and distance. It turns out that we managed to make the car go faster, too, and it can still go just as far. Sounds awesome! That’s basically the same thing we achieved.

OK, back to reality … let’s look at the actual results we saw, as measured in our performance lab. Performance scenarios are on the left and the different container images in which we measured them are on top.

                          1903    1903-FX   Insider   Insider-FX
Size compressed (GB)      2.11    2.18      1.13      1.19
Size uncompressed (GB)    5.03    5.29      2.63      2.89
Container launch (s)      6.7     6.67      4.68      3.61
PowerShell launch (s)     0.64    0.13      0.73      0.15

Note: The 1903 image is the latest version of 1903, with nearly a year of patches (which increase the size of the image).

Legend:

  • 1903: mcr.microsoft.com/windows/servercore:1903
  • 1903-FX: mcr.microsoft.com/dotnet/framework/runtime:4.8-20191008-windowsservercore-1903
  • Insider: mcr.microsoft.com/windows/servercore/insider:10.0.19023.1
  • Insider-FX: image built from runtime Dockerfile
  • Size compressed (GB) — this is the size of an image, in gigabytes, within a registry and when you pull it (AKA “over the wire”).
  • Size uncompressed (GB) — this is the size of an image, in gigabytes, on disk after you pull it. It is uncompressed so that it is fast to run.
  • Container launch (s) — This is the time it takes, in seconds, to launch a container, into PowerShell. It is equivalent to: docker run --rm DOCKERIMAGE powershell exit.
  • PowerShell launch (s) — This is the time it takes, in seconds, to launch PowerShell within a running container. It is equivalent to: powershell exit.

I’ll give you the value-oriented summary of what those numbers are actually telling us.

For the Windows Server Core Insider base image:

  • The compressed Insider image is 46% smaller than the 1903 base image.
  • The uncompressed Insider image is 47% smaller than the 1903 base image.
  • Container startup into Windows PowerShell is 30% faster, when using the Insider image compared to the 1903 base image.
  • Windows PowerShell startup within a running container is slower with the Insider image than the 1903 base image, by 100ms (15%) on our hardware.
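Those percentages follow directly from the measurement table above. As a quick sanity check, a small shell function (with the numbers copied from the table) reproduces them:

```shell
# Reproduce the "% smaller / % faster" figures for the base image
# (1903 vs. Insider; values taken from the measurement table).
pct() {
  awk -v old="$1" -v new="$2" 'BEGIN { printf "%d\n", int((old - new) / old * 100) }'
}

pct 2.11 1.13   # compressed size reduction   -> 46
pct 5.03 2.63   # uncompressed size reduction -> 47
pct 6.70 4.68   # container launch speedup    -> 30
```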

For the .NET Framework runtime image, based on the new Windows Server Core Insider base image:

  • The compressed .NET Framework runtime image is 45% smaller than the 1903 runtime image.
  • The uncompressed .NET Framework runtime image is 45% smaller than the 1903 runtime image.
  • Container startup into Windows PowerShell is 45% faster, using the .NET Framework runtime image compared to the 1903 runtime image.
  • Windows PowerShell startup within a running container is slower with the Insider-based runtime image than the 1903 runtime image, by 20ms (15%) on our hardware. We are investigating why startup is slower in this scenario. It shouldn’t be.
  • We specifically measured the benefit of not including 32-bit NGEN images in the runtime image. The savings are 70 MB in the compressed image and 300 MB in the uncompressed image.

Note: The drop in size is probably closer to 40% in practice. We are comparing an Insider image to a serviced 1903 image (nearly a year of patches that increase its size). Still, the measurements are in the right ballpark and a big win. We also expect these numbers to change before the Windows Server 20H1 release, either a little better or a little worse, but not far off what I’ve described here.

If you are interested in the details or reproducing these numbers yourself, the following list details the measurements we made and some of our methodology.

  • Size compressed: Retrieving Docker Image Sizes
  • Size uncompressed: docker images
  • Container launch (run from the host, in PowerShell):
    $a = @(); 1..5 | % { $a += (measure-command { docker run --rm DOCKERIMAGE powershell exit }).TotalSeconds }; $a
  • PowerShell launch (run from inside the container, in PowerShell):
    $a = @(); 1..5 | % { $a += (measure-command { powershell exit }).TotalSeconds }; $a

Note: All launch measurements listed are the average of the middle 3 of 5 test runs.

PowerShell launch is run from within PowerShell. This approach could be viewed as a weak test methodology. Instead, it is a practical pattern for what we are measuring, which is the reduction of JIT time. The second PowerShell instance will be in a second process. There is some benefit from launching PowerShell from PowerShell because read-only pages will be shared across processes. JITed code is written to read-write pages, which are not shared across process boundaries, such that the actual code execution of PowerShell will be unique in both processes and sensitive to the need to JIT at startup. As a result, the difference in startup numbers is primarily due to the reduction in JIT compilation required during startup. That also explains why we are only measuring powershell exit (we are only targeting startup for the scenario). Feel free to measure this and other scenarios and give us your feedback. We’d appreciate that.

We haven’t yet started measuring the performance improvement to the .NET Framework SDK image. We expect to see size and container startup improvements for that image, too. An early version of the .NET Framework SDK image Dockerfile is available for you to look at and test.

Forward-looking Guidance

Starting with the next version of Windows Server, we have the following guidance for Windows container users:

  • If you are using .NET Framework applications with Windows containers, including Windows PowerShell, use a .NET Framework image.
  • If you are not using .NET, use the Windows Server Core base image, or another image derived from it.
  • If you need better startup performance than the .NET Framework runtime image has to offer, we recommend creating your own images with your own profile of NGEN images. This is considered a supported scenario, and doesn’t disqualify you from getting support from Microsoft.

Closing

A lot of our effort on Docker containers has been focused on .NET Core; however, we have been looking for opportunities to improve the experience for .NET Framework users as well. This post describes such an improvement. Please tell us about other pain points for using .NET Framework in containers. We’d be interested in talking with you if you are using .NET Framework containers in production to learn more about what is working well and what isn’t.

Please give us feedback as you start adopting the new Windows Server Core container images. We intend to produce .NET Framework images for the next version of Windows Server Core as soon as 20H1 images are available in the Windows Docker repo.

The post We made Windows Server Core container images >40% smaller appeared first on .NET Blog.

What’s new in Azure DevOps Sprint 161


Sprint 161 has just finished rolling out to all organizations and you can check out all the new features in the release notes. Here are some of the features that you can start using today.

Create bulk subscriptions in Azure Pipelines app for Slack and Microsoft Teams

Users of the Azure Pipelines app for Slack and Microsoft Teams can now bulk subscribe to all of the pipelines in a project. You can use filters to manage what gets posted in the Slack or Teams channels. You can continue to subscribe to individual pipelines too.

Checkout multiple repositories in Azure Pipelines

Pipelines often rely on multiple repositories. You can have different repositories with source, tools, scripts, or other items that you need to build your code. Previously, you had to add these repositories as submodules or as manual scripts to run git checkout. Now you can fetch and check out other repositories, in addition to the one you use to store your YAML pipeline.
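In YAML, this looks roughly like the following sketch (the repository name and service connection are hypothetical; each checked-out repo lands in its own folder under $(Build.SourcesDirectory)):

```yaml
resources:
  repositories:
  - repository: tools              # alias used by the checkout step below
    type: github
    name: contoso/build-tools      # hypothetical GitHub repo
    endpoint: my-github-connection # hypothetical service connection

steps:
- checkout: self   # the repo that contains this YAML pipeline
- checkout: tools  # the additional repo, fetched alongside it
```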

Use GitHub Actions to trigger a run in Azure Pipelines

GitHub Actions make it easy to build, test, and deploy your code right from GitHub. We now have GitHub Actions for Azure Pipelines (Azure/pipelines). You can use Azure/pipelines to trigger a run in Azure Pipelines as part of your GitHub Actions workflow.

You can use this action to trigger a specific pipeline (YAML or classic release pipeline) in Azure DevOps. GitHub Actions will take the Project URL, pipeline name, and a Personal Access Token (PAT) for your Azure DevOps organization as inputs.
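As a sketch, a workflow step using the action might look like this (the organization, project, pipeline name, and secret name are hypothetical; the input names follow the Azure/pipelines action’s documentation):

```yaml
# .github/workflows/trigger-pipeline.yml
name: Trigger Azure Pipeline
on: [push]

jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
    - uses: Azure/pipelines@v1
      with:
        azure-devops-project-url: 'https://dev.azure.com/myorg/myproject'
        azure-pipeline-name: 'my-build-pipeline'
        azure-devops-token: ${{ secrets.AZURE_DEVOPS_PAT }}
```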

These are just the tip of the iceberg, and there are plenty more features that we’ve released in Sprint 161. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 161 appeared first on Azure DevOps Blog.

Introducing maintenance control for platform updates


Today we are announcing the preview of a maintenance control feature for Azure Virtual Machines that gives more control to customers with highly sensitive workloads for platform maintenance. Using this feature, customers can control all impactful host updates, including rebootless updates, for up to 35 days.

Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Almost all updates have zero impact on your Azure virtual machines (VMs). When updates do have an effect, Azure chooses the least impactful method for updates:

  • If the update does not require a reboot, the VM is briefly paused while the host is updated, or it's live migrated to an already updated host. These rebootless maintenance operations are applied fault domain by fault domain, and progress is stopped if any warning health signals are received.
  • In the extremely rare scenario when the maintenance requires a reboot, the customer is notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you.

Typically, rebootless updates do not impact the overall customer experience. However, certain very sensitive workloads may require full control of all maintenance activities. This new feature will benefit those customers who deploy this type of workload.

Who is this for?

The ability to control the maintenance window is particularly useful when you deploy workloads that are extremely sensitive to interruptions running on an Azure Dedicated Host or an Isolated VM, where the underlying physical server runs a single customer’s workload. This feature is not supported for VMs deployed in hosts shared with other customers.

The typical customer who should consider this feature requires full control over updates: they need the latest updates in place, but their business requires that at least some of their cloud resources be updated on their own schedule, with zero impact.

Customers like financial services providers, gaming companies, or media streaming services using Azure Dedicated Hosts or Isolated VMs will benefit by being able to manage necessary updates without any impact on their most critical Azure resources.

How does it work?

A diagram showing how this feature works.

You can enable the maintenance control feature for platform updates by adding a custom maintenance configuration to a resource (either an Azure Dedicated Host or an Isolated VM). When the Azure updater sees this custom configuration, it will skip all non-zero-impact updates, including rebootless updates. For as long as the maintenance configuration is applied to the resource, it will be your responsibility to determine when to initiate updates for that resource. You can check for pending updates on the resource and apply updates within the 35-day window. When you initiate an update on the resource, Azure applies all pending host updates. A new 35-day window starts after another update becomes pending on the resource. If you choose not to apply the updates within the 35-day window, Azure will automatically apply all pending updates for you, to ensure that your resources remain secure and get other fixes and features.
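As a sketch of that flow with the Azure CLI’s maintenance extension (resource names are placeholders, and exact parameters may change while the feature is in preview):

```shell
# Create a maintenance configuration scoped to host-level updates
az maintenance configuration create \
    --resource-group myRG --resource-name myConfig \
    --location eastus2 --maintenance-scope host

# Assign it to an isolated VM; Azure then defers impactful updates to you
az maintenance assignment create \
    --resource-group myRG --provider-name Microsoft.Compute \
    --resource-type virtualMachines --resource-name myVM \
    --configuration-assignment-name myConfig \
    --maintenance-configuration-id <configuration-resource-id>

# Check for pending updates, then apply them on your own schedule
az maintenance update list --resource-group myRG \
    --provider-name Microsoft.Compute --resource-type virtualMachines \
    --resource-name myVM
az maintenance applyupdate create --resource-group myRG \
    --provider-name Microsoft.Compute --resource-type virtualMachines \
    --resource-name myVM
```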

Things to consider

  • You can automate platform updates for your maintenance window by calling “apply pending update” commands through your automation scripts. This can be batched with your application maintenance. You can also make use of Azure Functions and schedule updates at regular intervals.
  • Maintenance configurations are supported across subscriptions and resource groups, so you can manage all maintenance configurations in one place and use them anywhere they're needed.

Getting started

The maintenance control feature for platform updates is available in preview now. You can get started using the CLI, PowerShell, REST APIs, or .NET SDK. Azure portal support will follow.

For more information, please refer to the documentation: Maintenance for virtual machines in Azure.

FAQ

Q: Are there cases where I can’t control certain updates? 

A: In the case of a high-severity security issue that may endanger the Azure platform or our customers, Azure may need to override customer control of the maintenance window and push the change. This would happen only rarely and in extreme cases, as a last resort to protect you from critical security issues.

Q: If I don’t self-update within 35 days, what action will Azure take?

A: If you don’t apply a platform update within 35 days, Azure will apply the pending updates fault domain by fault domain. This is done to maintain security and performance, and to fix any defects.

Q: Is this feature supported in all regions?

A: Maintenance control is supported in all public cloud regions. Government cloud regions are not currently supported, but support will come later.

Microsoft partner ANSYS extends ability of Azure Digital Twins platform


Digital twins have moved from an exciting concept to reality. More companies than ever are connecting assets and production networks with sensors and using analytics to optimize operations across machinery, plants, and industrial networks. As exact virtual representations of the physical environment, digital twins incorporate historical and real-time data to enable sophisticated spatial analysis of key relationships. Teams can use digital twins to model the impact of process changes before putting them into production, reducing time, cost, and risk.

For the second year in a row, Gartner has identified digital twins as one of the top 10 strategic technology trends. According to Gartner, while 13 percent of organizations that are implementing IoT have already adopted digital twins, 62 percent are in the process or plan to do so. Gartner predicts a tipping point in 2022 when two out of three companies will have deployed at least one digital twin to optimize some facet of their business processes.

This is why we’re excited by the great work of ANSYS, a Microsoft partner working to extend the value of the Microsoft Azure Digital Twins platform for our joint customers. The ANSYS Twin Builder combines the power of physics-based simulations and analytics-driven digital twins to provide real-time data transfer, reusable components, ultrafast modeling, and other tools that enable teams to perform myriad “what-if” analyses, and build, validate, and deploy complex systems more easily.

“Collaborating with ANSYS to create an advanced IoT digital twins framework provides our customers with an unprecedented understanding of their deployed assets’ performance by leveraging physics and simulation-based analytics.” — Sam George, corporate vice president of Azure IoT, Microsoft

Digital twins model key relationships, simplifying design

Digital twins will be first and most widely adopted in manufacturing, as industrial companies invest millions to build, maintain, and track the performance of remotely deployed IoT-enabled assets, machinery, and vehicles. Operators depend on near-continuous asset uptime to achieve production goals, meaning supply-chain bottlenecks, machine failures, or other unexpected downtime can hamper production output and reduce revenue recognition for the company and its customers. The use of digital twins, analytics, business rules, and automation helps companies avoid many of these issues by guiding decision-making and enabling instant informed action.

Digital twins can also simulate a multidimensional view of asset performance that can be endlessly manipulated and perfected prior to producing new systems or devices, ending not just the guesswork of manually predicting new processes, but also the cost of developing multiple prototypes. Digital twins, analytics-based tools, and automation also equip companies to avoid unnecessary costs by prioritizing issues for investment and resolution.

Digital twins can optimize production across networks

Longer-term, companies can more easily operate global supply chains, production networks, and digital ecosystems through the use of IoT, digital twins, and other tools. Enterprise teams and their partners will be able to pivot from sensing and reacting to changes to predicting them and responding immediately based on predetermined business rules. Utilities will be better prepared to predict and prevent accidents, companies poised to address infrastructure issues before customers complain, and stores more strategically set up to maintain adequate inventories.

Simulations increase digital twins’ effectiveness

ANSYS’ engineering simulation software enables customers to model the design of nearly every physical product or process. The simulations are then compiled into runtime modules that can execute in a docker container and integrate automatically into IoT processing systems, reducing the heavy lift of IoT customization.

With the combined Microsoft Azure Digital Twins-ANSYS physics-based simulation capabilities, customers can now:

  • Simulate baseline and failure data resulting in accurate, physics-based digital twins models.
  • Use physics-based predictive models to increase accuracy and improve ROI from predictive maintenance programs.
  • Leverage “what-if analyses” to simulate different solutions before selecting the best one.
  • Use virtual sensors to estimate critical quantities through simulation.

Engineering software

In addition, companies can use physics-based simulations within the Microsoft-ANSYS platform to pursue high-value use cases such as these:

  •  Optimize asset performance: Teams can use digital twins to model asset performance to evaluate current performance versus targets, identifying, resolving, and prioritizing issues for resolution based on the value they create.
  •  Manage systems across their lifecycle: Teams can take a systems approach to managing complex and costly assets, driving throughput and retiring systems at the ideal time to avoid over-investing in market-lagging capabilities.
  •  Perform predictive maintenance: Teams can use analytics to determine and schedule maintenance, reduce unplanned downtime and costly break-fix repairs, and perform repairs in order of importance, which frees team members from unnecessary work.
  •  Orchestrate systems: Companies will eventually create systems of intelligence by linking their equipment, systems, and networks to orchestrate production across plants, campuses, and regions, attaining new levels of visibility and efficiency.
  •  Fuel product innovation: With rapid virtual prototyping, teams will be able to explore myriad product versions, reducing the time and cost required to innovate products, decreasing product failures, and enabling the development of customized products.
  •  Enhance employee training: Companies can use digital twins to conduct training with employees, improving their effectiveness on the job while reducing production design errors due to human error.
  •  Eliminate physical constraints: Digital twins eliminate the physical barriers to experimentation, meaning users can simulate tests and conditions for remote assets, such as equipment in other plants, regions, or space.

Opening up new opportunities for partners

According to Gartner, more than 20 billion connected devices are projected by 2020, and adoption of IoT and digital twins is only going to accelerate—in fact, MarketsandMarkets™ estimates that the digital twins market will reach a value of $3.8 billion in 2019 and grow to $35.8 billion by 2025. Our recent IoT Signals research found that 85 percent of decision-makers have already adopted IoT, 74 percent have projects in the “use” phase, and businesses expect to achieve 30 percent ROI on their IoT projects going forward. The top use case participants want to pursue is operations optimization (56 percent), to reap more value from the assets and processes they already possess. That’s why digital twins matter so much right now: they provide a framework to accomplish this goal with greater accuracy than was possible before.

“As industrial companies require comprehensive field data and actionable insights to further optimize deployed asset performance, ecosystem partners must collaborate to form business solutions. ANSYS Twins Builder’s complementary simulation data stream augments Azure IoT Services and greatly enhances its customers’ understanding of asset performance.”—Eric Bantegnie, vice president and general manager at ANSYS

Thanks to Microsoft partners like ANSYS, companies are better equipped to unlock productivity and efficiency gains by removing critical constraints, including physical barriers, from process modeling. With tools like digital twins, companies will be limited only by their own creativity, creating a more intelligent and connected world where all have more opportunities to flourish.

Learn more about Microsoft Azure Digital Twins and ANSYS Twin Builder.

Azure Stack HCI now running on HPE Edgeline EL8000


Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure's hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance? 

Well, Microsoft and HPE have teamed up to validate the HPE Edgeline EL8000 Converged Edge system for Microsoft's Azure Stack HCI program. Designed specifically for space-constrained environments, the HPE Edgeline EL8000 Converged Edge system has a unique 17-inch depth form factor that fits into limited infrastructures too small for other x86 systems. The chassis has an 8.7-inch width which brings additional flexibility for deploying at the deep edge, whether it is in a telco environment, a mobile vehicle, or a manufacturing floor. This Network Equipment-Building System (NEBs) compliant system delivers secure scalability.

The HPE Edgeline EL8000 Converged Edge system provides:

  • Traditional x86 compute optimized for edge deployments, far from the traditional data center without the sacrifice of compute performance.
  • Edge-optimized remote system management with wireless capabilities based on Redfish industry standard.
  • Compact form factor, with short-depth and half-width options.
  • Rugged, modular form factor for secure scalability and serviceability in edge and hostile environments including NEBs level three and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) level three/four compliance.
  • Broad accelerator support for emerging edge artificial intelligence (AI) use cases, for field programmable gate arrays or graphics processing units.
  • Up to four independent compute nodes, which are cluster-ready with embedded networks.

Modular design providing broad configuration possibilities

The HPE Edgeline EL8000 Converged Edge system offers flexibility of choice for compute density or for input/output expansion. These compact, ruggedized systems offer high-performance capacity to support the use cases that matter most, including media streaming, IoT, AI, and video analytics. The HPE Edgeline EL8000 is a versatile platform that enables edge compute transformation; as use case requirements change, the system's flexible and modular architecture can scale to meet them.

Seamless management and security features with HPE Edgeline Chassis Manager

The HPE Edgeline EL8000 Converged Edge system features the HPE Edgeline Chassis Manager, which limits downtime by providing system-level health monitoring and alerts. It increases efficiency and reliability by managing the chassis fan speeds for each installed server blade and by monitoring the health and status of the power supply, and it simplifies firmware upgrade management and implementation.

Microsoft Azure Stack HCI

Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote-direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network microsegmentation, and native encryption.

You can take advantage of cloud and on-premises working together with a hyperconverged infrastructure platform in the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services, including:

  • Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

  • Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.

  • Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.

  • Azure Backup for offsite data protection and to protect against ransomware.

  • Azure Update Management for update assessment and update deployments for Windows virtual machines (VMs) running in Azure and on-premises.

  • Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site virtual private network (VPN).

  • Azure File Sync, to sync your file server with the cloud.

  • Azure Arc for Servers to manage role-based access control, governance, and compliance policy from Azure Portal.

By deploying the Microsoft and HPE HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI Branch office and edge solution.  

Windows 10 SDK Preview Build 19035 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 19035 or greater). The Preview SDK Build 19035 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO can also be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_19035_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).
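The encoding-detection order described above can be sketched in Python. This is a hypothetical illustration of the rules as stated, not mc.exe's actual implementation; the `"cp_acp"` return value stands in for the current ANSI code page.

```python
def detect_mc_encoding(raw: bytes, unicode_flag: bool = False) -> str:
    """Apply the .mc encoding rules in the order described above."""
    if raw.startswith(b"\xef\xbb\xbf"):      # UTF-8 BOM
        return "utf-8"
    if raw.startswith(b"\xff\xfe"):          # UTF-16LE BOM
        return "utf-16-le"
    if unicode_flag:                          # -u parameter specified
        return "utf-16-le"
    return "cp_acp"                           # current code page (CP_ACP)
```

Note that a BOM, when present, takes precedence over the -u parameter; -u only affects files with no BOM.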

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.
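The tracewpp input rules above follow a similar shape, with the added -UnicodeIgnore twist. The Python sketch below is a hypothetical illustration of the stated behavior (covering only the UTF-16LE BOM case, and using ASCII as a stand-in for the current code page), not tracewpp's actual implementation.

```python
def read_tracewpp_input(raw: bytes, unicode_ignore: bool = False) -> str:
    """Decode a tracewpp input file per the rules described above."""
    if raw.startswith(b"\xff\xfe"):  # UTF-16LE BOM
        # With -UnicodeIgnore, files starting with a UTF-16 BOM are treated as empty.
        return "" if unicode_ignore else raw[2:].decode("utf-16-le")
    if raw.startswith(b"\xef\xbb\xbf"):  # UTF-8 BOM
        return raw[3:].decode("utf-8")
    # No BOM: fall back to the current code page (ASCII stands in here).
    return raw.decode("ascii", errors="replace")
```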

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

Windows SDK Flight NuGet Feed

We have stood up a NuGet feed for the flighted builds of the SDK. You can now test preliminary builds of the Windows 10 WinRT API Pack, as well as a microsoft.windows.sdk.headless.contracts NuGet package.

We use the following feed to flight our NuGet packages.

Microsoft.Windows.SDK.Contracts can be used to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

The Windows 10 WinRT API Pack enables you to add the latest Windows Runtime APIs support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

Microsoft.Windows.SDK.Headless.Contracts provides a subset of the Windows Runtime APIs for console apps that excludes the APIs associated with a graphical user interface. This NuGet package is used in conjunction with Windows ML container development. Check out the Getting Started guide for more information.

Breaking Changes

Removal of api-ms-win-net-isolation-l1-1-0.lib

In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.
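For projects that specify the library in source rather than in project settings, the swap described above is a one-line change. This is a minimal MSVC build-configuration fragment illustrating the migration, not a complete project:

```cpp
// Before (library removed from this SDK release):
// #pragma comment(lib, "api-ms-win-net-isolation-l1-1-0.lib")

// After: link OneCoreUAP.lib as the replacement.
#pragma comment(lib, "OneCoreUAP.lib")
```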

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

Removal of WUAPICommon.H and WUAPICommon.IDL

In this release we have moved the enum tagServerSelection from WUAPICommon.H to wuapi.h and removed the header. If you would like to use the enum tagServerSelection, you will need to include wuapi.h or wuapi.idl.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 

namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSessionOptions {
    bool CloseModelOnSessionCreation { get; set; }
  }
}
namespace Windows.ApplicationModel {
  public sealed class AppInfo {
    public static AppInfo Current { get; }
    Package Package { get; }
    public static AppInfo GetFromAppUserModelId(string appUserModelId);
    public static AppInfo GetFromAppUserModelIdForUser(User user, string appUserModelId);
  }
  public interface IAppInfoStatics
  public sealed class Package {
    StorageFolder EffectiveExternalLocation { get; }
    string EffectiveExternalPath { get; }
    string EffectivePath { get; }
    string InstalledPath { get; }
    bool IsStub { get; }
    StorageFolder MachineExternalLocation { get; }
    string MachineExternalPath { get; }
    string MutablePath { get; }
    StorageFolder UserExternalLocation { get; }
    string UserExternalPath { get; }
    IVectorView<AppListEntry> GetAppListEntries();
    RandomAccessStreamReference GetLogoAsRandomAccessStreamReference(Size size);
  }
}
namespace Windows.ApplicationModel.AppService {
  public enum AppServiceConnectionStatus {
    AuthenticationError = 8,
    DisabledByPolicy = 10,
    NetworkNotAvailable = 9,
    WebServiceUnavailable = 11,
  }
  public enum AppServiceResponseStatus {
    AppUnavailable = 6,
    AuthenticationError = 7,
    DisabledByPolicy = 9,
    NetworkNotAvailable = 8,
    WebServiceUnavailable = 10,
  }
  public enum StatelessAppServiceResponseStatus {
    AuthenticationError = 11,
    DisabledByPolicy = 13,
    NetworkNotAvailable = 12,
    WebServiceUnavailable = 14,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class BackgroundTaskBuilder {
    void SetTaskEntryPointClsid(Guid TaskEntryPoint);
  }
  public sealed class BluetoothLEAdvertisementPublisherTrigger : IBackgroundTrigger {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedFormat { get; set; }
  }
  public sealed class BluetoothLEAdvertisementWatcherTrigger : IBackgroundTrigger {
    bool AllowExtendedAdvertisements { get; set; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ActivationSignalDetectionConfiguration
  public enum ActivationSignalDetectionTrainingDataFormat
  public sealed class ActivationSignalDetector
  public enum ActivationSignalDetectorKind
  public enum ActivationSignalDetectorPowerState
  public sealed class ConversationalAgentDetectorManager
  public sealed class DetectionConfigurationAvailabilityChangedEventArgs
  public enum DetectionConfigurationAvailabilityChangeKind
  public sealed class DetectionConfigurationAvailabilityInfo
  public enum DetectionConfigurationTrainingStatus
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackage {
    event TypedEventHandler<DataPackage, object> ShareCanceled;
  }
}
namespace Windows.Devices.Bluetooth {
  public sealed class BluetoothAdapter {
    bool IsExtendedAdvertisingSupported { get; }
    uint MaxAdvertisementDataLength { get; }
  }
}
namespace Windows.Devices.Bluetooth.Advertisement {
  public sealed class BluetoothLEAdvertisementPublisher {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedAdvertisement { get; set; }
  }
  public sealed class BluetoothLEAdvertisementPublisherStatusChangedEventArgs {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
  public sealed class BluetoothLEAdvertisementReceivedEventArgs {
    BluetoothAddressType BluetoothAddressType { get; }
    bool IsAnonymous { get; }
    bool IsConnectable { get; }
    bool IsDirected { get; }
    bool IsScannable { get; }
    bool IsScanResponse { get; }
    IReference<short> TransmitPowerLevelInDBm { get; }
  }
  public enum BluetoothLEAdvertisementType {
    Extended = 5,
  }
  public sealed class BluetoothLEAdvertisementWatcher {
    bool AllowExtendedAdvertisements { get; set; }
  }
  public enum BluetoothLEScanningMode {
    None = 2,
  }
}
namespace Windows.Devices.Bluetooth.Background {
  public sealed class BluetoothLEAdvertisementPublisherTriggerDetails {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
}
namespace Windows.Devices.Display {
  public sealed class DisplayMonitor {
    bool IsDolbyVisionSupportedInHdrMode { get; }
  }
}
namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Globalization {
  public sealed class Language {
    string AbbreviatedName { get; }
    public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPixelFormat {
    SamplerFeedbackMinMipOpaque = 189,
    SamplerFeedbackMipRegionUsedOpaque = 190,
  }
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicFrame {
    HolographicFrameId Id { get; }
  }
  public struct HolographicFrameId
  public sealed class HolographicFrameRenderingReport
  public sealed class HolographicFrameScanoutMonitor : IClosable
  public sealed class HolographicFrameScanoutReport
  public sealed class HolographicSpace {
    HolographicFrameScanoutMonitor CreateFrameScanoutMonitor(uint maxQueuedReports);
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IVector<Package> FindProvisionedPackages();
    PackageStubPreference GetPackageStubPreference(string packageFamilyName);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, RegisterPackageOptions options);
    void SetPackageStubPreference(string packageFamilyName, PackageStubPreference useStub);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageStubPreference
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
  public enum StubPackageOption
}
namespace Windows.Media.Audio {
  public sealed class AudioPlaybackConnection : IClosable
  public sealed class AudioPlaybackConnectionOpenResult
  public enum AudioPlaybackConnectionOpenResultStatus
  public enum AudioPlaybackConnectionState
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
  public sealed class VideoDeviceController : IMediaDeviceController {
    PanelBasedOptimizationControl PanelBasedOptimizationControl { get; }
  }
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Owe = 12,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class NetworkOperatorTetheringAccessPointConfiguration {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableNoConnectionsTimeout();
    public static IAsyncAction DisableNoConnectionsTimeoutAsync();
    public static void EnableNoConnectionsTimeout();
    public static IAsyncAction EnableNoConnectionsTimeoutAsync();
    public static bool IsNoConnectionsTimeoutEnabled();
  }
  public enum TetheringWiFiBand
}
namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
  public sealed class RawNotification {
    IBuffer ContentBytes { get; }
  }
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Security.Isolation {
  public sealed class IsolatedWindowsEnvironment
  public enum IsolatedWindowsEnvironmentActivator
  public enum IsolatedWindowsEnvironmentAllowedClipboardFormats : uint
  public enum IsolatedWindowsEnvironmentAvailablePrinters : uint
  public enum IsolatedWindowsEnvironmentClipboardCopyPasteDirections : uint
  public struct IsolatedWindowsEnvironmentContract
  public struct IsolatedWindowsEnvironmentCreateProgress
  public sealed class IsolatedWindowsEnvironmentCreateResult
  public enum IsolatedWindowsEnvironmentCreateStatus
  public sealed class IsolatedWindowsEnvironmentFile
  public static class IsolatedWindowsEnvironmentHost
  public enum IsolatedWindowsEnvironmentHostError
  public sealed class IsolatedWindowsEnvironmentLaunchFileResult
  public enum IsolatedWindowsEnvironmentLaunchFileStatus
  public sealed class IsolatedWindowsEnvironmentOptions
  public static class IsolatedWindowsEnvironmentOwnerRegistration
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationData
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationResult
  public enum IsolatedWindowsEnvironmentOwnerRegistrationStatus
  public sealed class IsolatedWindowsEnvironmentProcess
  public enum IsolatedWindowsEnvironmentProcessState
  public enum IsolatedWindowsEnvironmentProgressState
  public sealed class IsolatedWindowsEnvironmentShareFolderRequestOptions
  public sealed class IsolatedWindowsEnvironmentShareFolderResult
  public enum IsolatedWindowsEnvironmentShareFolderStatus
  public sealed class IsolatedWindowsEnvironmentStartProcessResult
  public enum IsolatedWindowsEnvironmentStartProcessStatus
  public sealed class IsolatedWindowsEnvironmentTelemetryParameters
  public static class IsolatedWindowsHostMessenger
  public delegate void MessageReceivedCallback(Guid receiverId, IVectorView<object> message);
}
namespace Windows.Storage {
  public static class KnownFolders {
    public static IAsyncOperation<StorageFolder> GetFolderAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessForUserAsync(User user, KnownFolderId folderId);
  }
  public enum KnownFoldersAccessStatus
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public sealed class StorageProviderFileTypeInfo
  public sealed class StorageProviderSyncRootInfo {
    IVector<StorageProviderFileTypeInfo> FallbackFileTypeInfo { get; }
  }
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text {
  public enum RichEditMathMode
  public sealed class RichEditTextDocument : ITextDocument {
    void GetMath(out string value);
    void SetMath(string value);
    void SetMathMode(RichEditMathMode mode);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}

The post Windows 10 SDK Preview Build 19035 available now! appeared first on Windows Developer Blog.

Now available: Azure DevOps Server 2019 Update 1.1 RTW


Today, we are releasing Azure DevOps Server 2019 Update 1.1 RTW. Azure DevOps Server offers all the services of Azure DevOps, including Pipelines, Boards, Repos, Artifacts and Test Plans, as a self-hosted product that can be installed in your on-premises datacenter or in a virtual machine on the cloud.

Azure DevOps Server 2019 Update 1.1 includes bug fixes for Azure DevOps Server 2019 Update 1. You can find the details of the fixes in our release notes.

You can upgrade to Azure DevOps Server 2019 Update 1.1 from previous versions of Azure DevOps Server 2019 or Team Foundation Server 2012 or later. You can also install Azure DevOps Server 2019 Update 1.1 without first installing Azure DevOps Server 2019.

Here are some key links:


.NET Framework December 2019 Security and Quality Rollup


Today, we are releasing the December 2019 Security and Quality Rollup Updates for .NET Framework.

Quality and Reliability

This release contains the following quality and reliability improvements.

ASP.NET

  • ASP.NET will now emit a SameSite cookie header when HttpCookie.SameSite value is “None” to accommodate upcoming changes to SameSite cookie handling in Chrome. As part of this change, FormsAuth and SessionState cookies will also be issued with SameSite = ‘Lax’ instead of the previous default of ‘None’, though these values can be overridden in web.config. For more information, refer to Work with SameSite cookies in ASP.NET.
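As a sketch, those web.config overrides look roughly like this (the cookieSameSite attribute is available on these elements starting with .NET Framework 4.7.2; the values shown are illustrative, not recommendations):

```xml
<!-- Illustrative web.config fragment: explicitly set SameSite for
     FormsAuth and SessionState cookies instead of relying on the new defaults -->
<system.web>
  <authentication mode="Forms">
    <forms cookieSameSite="None" requireSSL="true" />
  </authentication>
  <sessionState cookieSameSite="None" />
</system.web>
```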

CLR1

  • Addresses an issue where some ClickOnce applications, or applications creating the default AppDomain with a restricted permission set, may observe application launch or runtime failures, or unexpected behaviors. The observable issue was that System.AppDomainSetup.TargetFrameworkName was null, causing any quirks-enabled behavior to revert back to .NET Framework 4.0 behaviors.

WPF2

  • Addresses an issue where some Per-Monitor Aware WPF applications that host System-Aware or Unaware child windows, and run on .NET 4.8, may occasionally encounter a crash with the exception System.Collections.Generic.KeyNotFoundException.

1 Common Language Runtime (CLR)
2 Windows Presentation Foundation (WPF)

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

Note: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please review the .NET Framework version applicability carefully to ensure that you only install updates on systems where they apply.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version / Cumulative Update

  • Windows 10 1909 and Windows Server, version 1909
      • .NET Framework 3.5, 4.8: Catalog 4533002
  • Windows 10 1903 and Windows Server, version 1903
      • .NET Framework 3.5, 4.8: Catalog 4533002
  • Windows 10 1809 (October 2018 Update) and Windows Server 2019: 4533094
      • .NET Framework 3.5, 4.7.2: Catalog 4533013
      • .NET Framework 3.5, 4.8: Catalog 4533001
  • Windows 10 1803 (April 2018 Update)
      • .NET Framework 3.5, 4.7.2: Catalog 4530717
      • .NET Framework 3.5, 4.8: Catalog 4533000
  • Windows 10 1709 (Fall Creators Update)
      • .NET Framework 3.5, 4.7.1, 4.7.2: Catalog 4530714
      • .NET Framework 3.5, 4.8: Catalog 4532999
  • Windows 10 1703 (Creators Update)
      • .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: Catalog 4530711
      • .NET Framework 3.5, 4.8: Catalog 4532998
  • Windows 10 1607 (Anniversary Update) and Windows Server 2016
      • .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4530689
      • .NET Framework 3.5, 4.8: Catalog 4532997

The following table is for earlier Windows and Windows Server versions.

Product Version / Security and Quality Rollup

  • Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2: 4533097
      • .NET Framework 3.5: Catalog 4514371
      • .NET Framework 4.5.2: Catalog 4514367
      • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4533011
      • .NET Framework 4.8: Catalog 4533004
  • Windows Server 2012: 4533096
      • .NET Framework 3.5: Catalog 4514370
      • .NET Framework 4.5.2: Catalog 4514368
      • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4533010
      • .NET Framework 4.8: Catalog 4533003
  • Windows 7 SP1 and Windows Server 2008 R2 SP1: 4533095
      • .NET Framework 3.5.1: Catalog 4507004
      • .NET Framework 4.5.2: Catalog 4507001
      • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4533012
      • .NET Framework 4.8: Catalog 4533005
  • Windows Server 2008: 4533098
      • .NET Framework 2.0, 3.0: Catalog 4507003
      • .NET Framework 4.5.2: Catalog 4507001
      • .NET Framework 4.6: Catalog 4533012

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:
* November 2019 Preview of Quality Rollup
* October 2019 Preview of Quality Rollup
* October 2019 Security and Quality Rollup

The post .NET Framework December 2019 Security and Quality Rollup appeared first on .NET Blog.

Updating an ASP.NET Core 2.2 Web Site to .NET Core 3.1 LTS


Now that .NET Core 3.1 is out just this last week and it is an "LTS" or Long Term Support version, I thought it'd be a good time to update my main site and my podcast to .NET Core 3.1. You can read about what LTS means, but quite simply it's that "LTS releases are supported for three years after the initial release."

I'm not sure about you, but for me, when I don't look at some code for a few months - in this case because it's working just fine - it takes some time for the context switch back in. For my podcast site and main site I honestly have forgotten what version of .NET they are running on.

Updating my site to .NET Core 3.1

First, it seems my main homepage is .NET Core 2.2. I can tell because the csproj has a "TargetFramework" of netcoreapp2.2. So I'll start at the migration docs here to go from 2.2 to 3.0. .NET Core 2.2 reaches "end of life" (support) this month, so it's a good time to update to the 3.1 version that will be supported for three years.

Here's my original csproj

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>
    <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
    <RootNamespace>hanselman_core</RootNamespace>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.2.0" PrivateAssets="All" />
    <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.2.3" />
  </ItemGroup>
  <ItemGroup>
    <None Update="IISUrlRewrite.xml">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>
</Project>

and here's my updated csproj. You'll note that most of it is deletions. Also note that I have a custom IISUrlRewrite.xml that I want to make sure gets to a specific place. You'll likely not have anything like this, but be aware.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <RootNamespace>hanselman_core</RootNamespace>
  </PropertyGroup>
  <ItemGroup>
    <None Update="IISUrlRewrite.xml">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>
</Project>

Some folks are a little more methodical about this, upgrading first to 3.0 and then to 3.1. You can feel free to jump all the way if you want. In this case the main breaking changes are from 2.x to 3.x, so I'll upgrade the whole thing in one step.

I compile and run and get an error: "InvalidOperationException: Endpoint Routing does not support 'IApplicationBuilder.UseMvc(...)'. To use 'IApplicationBuilder.UseMvc' set 'MvcOptions.EnableEndpointRouting = false' inside 'ConfigureServices(...)'." So I'll keep moving through the migration guide, as things change in major versions.

Per the docs, I can remove using Microsoft.AspNetCore.Mvc; and add using Microsoft.Extensions.Hosting; as IHostingEnvironment becomes IWebHostEnvironment. Since my app is a Razor Pages app, I'll add a call to services.AddRazorPages(); as well as calls to UseRouting and UseAuthorization (if needed) and, most importantly, move to endpoint routing like this in my Configure() call.

app.UseRouting();

app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
});

I also decide that I wanted to see what version I was running on, on the page, so I'd be able to better remember it. I added this call in my _layout.cshtml to output the version of .NET Core I'm using at runtime.

 <div class="copyright">&copy; Copyright @DateTime.Now.Year, Powered by @System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription</div> 

In older versions of .NET, you couldn't get exactly what you wanted from RuntimeInformation.FrameworkDescription, but it works fine in 3.x, so it's perfect for my needs.

Finally, I notice that I was using my 15 year old IIS Rewrite Rules (because they work great) but I was configuring them like this:

using (StreamReader iisUrlRewriteStreamReader
    = File.OpenText(Path.Combine(env.ContentRootPath, "IISUrlRewrite.xml")))
{
    var options = new RewriteOptions()
        .AddIISUrlRewrite(iisUrlRewriteStreamReader);
    app.UseRewriter(options);
}

And that smells weird to me. Turns out there's an overload on AddIISUrlRewrite that might be better. I don't want to be manually opening up a text file and streaming it like that, so I'll use an IFileProvider instead. This is a lot cleaner, and I can remove the using System.IO; directive as well.

var options = new RewriteOptions()
    .AddIISUrlRewrite(env.ContentRootFileProvider, "IISUrlRewrite.xml");

app.UseRewriter(options);

I also did a little "Remove and Sort Usings" refactoring and tidied up both Program.cs and Startup.cs to the minimum and here's my final complete Startup.cs.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Rewrite;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
namespace hanselman_core
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }
        public IConfiguration Configuration { get; }
        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddHealthChecks();
            services.AddRazorPages().AddRazorPagesOptions(options =>
            {
                options.Conventions.AddPageRoute("/robotstxt", "/Robots.Txt");
            });
            services.AddMemoryCache();
        }
        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Error");
                app.UseHsts();
            }
            app.UseHealthChecks("/healthcheck");
            var options = new RewriteOptions()
                .AddIISUrlRewrite(env.ContentRootFileProvider, "IISUrlRewrite.xml");
            app.UseRewriter(options);
            app.UseHttpsRedirection();
            app.UseDefaultFiles();
            app.UseStaticFiles();
            app.UseRouting();
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapRazorPages();
            });
        }
    }
}

And that's it. Followed the migration, changed a few methods and interfaces, and ended up removing a half dozen lines of code and in fact ended up with a simpler system. Here's the modified files for my update:

❯ git status

On branch main
Your branch is up to date with 'origin/main'.

Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: Pages/Index.cshtml.cs
modified: Pages/Shared/_Layout.cshtml
modified: Program.cs
modified: Startup.cs
modified: hanselman-core.csproj

Updating the Web Site in Azure App Service and Azure DevOps

That all works locally, so I'll check it in and double-check my Azure App Service Plan and Azure DevOps Pipeline to make sure that the staging - and then production - sites are updated.

ASP.NET Core apps can rely on a runtime that is already installed in the Azure App Service, or one can do a "self-contained" install. My web site needs .NET Core 3.1 (LTS), so ideally I'd change this dropdown in General Settings to get LTS and get 3.1. However, this only works if the latest stuff is installed on Azure App Service. At some point soon .NET Core 3.1 will be on Azure App Service for Linux, but it might be a week or so. At the time of this writing LTS is still 2.2.7, so I'll do a self-contained install, which will take up more disk space but will be more reliable for my needs and will allow me full control over versions.

Updating to .NET Core 3.1 LTS

I am running this on Azure App Service for Linux, so it's running in a container. It didn't start up, so I checked the logs at startup via the Log Stream, and it says that the app isn't listening on port 8080 - or at least it didn't answer an HTTP GET ping.

App Service Log

I wonder why? Well, I scrolled up higher in the logs and noted this error:

2019-12-10T18:21:25.138713683Z The specified framework 'Microsoft.AspNetCore.App', version '3.0.0' was not found.

Oops! Did I make sure that my csproj was 3.1? Turns out I put in netcoreapp3.0 even though I was thinking 3.1! I updated and redeployed.

It's important to make sure that your SDK - the thing that builds - lines up with the runtime version. I have an Azure DevOps pipeline that is doing the building, so I added a "Use .NET Core SDK" task that asked for 3.1.100 explicitly.

Using .NET Core 3.1 SDK
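For reference, a minimal sketch of that pipeline step in YAML, using the UseDotNet task (the version pin mirrors the one mentioned above):

```yaml
# Pin the .NET Core SDK on the build agent so the SDK matches the target runtime
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '3.1.100'
```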

Again, I need to make sure that my Pipeline includes that self-contained publish with a -r linux-x64 parameter indicating this is the runtime needed for a self-contained install.

dotnet publish -r linux-x64

Now my CI/CD pipeline is building for 3.1 and I've set my App Service to run on 3.1 by shipping 3.1 with my publish artifact. When .NET Core 3.1 LTS is released on App Service I can remove this extra argument and rely on the Azure App Service to manage the runtime.

powered by .NET Core 3.1

All in all, this took about an hour and a half. Figure a day for your larger apps. Now I'll spend another hour (likely less) to update my podcast site.


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.
     

Azure Sphere guardian module simplifies & secures brownfield IoT


One of the toughest IoT quandaries is figuring out how to bake IoT into existing hardware in a secure, cost-effective way. For many customers, scrapping existing hardware investments for new IoT-enabled devices (“greenfield” installations) isn’t feasible. And retrofitting mission-critical devices that are already in service with IoT (“brownfield” installations) is often deemed too risky, too complicated, and too expensive.

This is why we’re thrilled about a major advancement for Azure Sphere that opens up the brownfield opportunity, helping make IoT retrofits more secure, substantially easier, and more cost effective than ever before. The guardian module with Azure Sphere simplifies the transformation of brownfield devices into locked-down, internet-connected, data-wielding, intelligent devices that can transform business.

For an in-depth exploration of the guardian module and how it’s being used at major corporations like Starbucks, sign up for the upcoming Azure Sphere Guardian Module webinar.


The guardian module with Azure Sphere offers some key advantages

Like all Microsoft products, Azure Sphere is loaded with robust security features at every turn—from silicon to cloud. For brownfield installations, the guardian module with Azure Sphere physically plugs into existing equipment ports without the need for any hardware redesign.

Azure Sphere, rather than the device itself, talks to the cloud. The guardian module processes data and controls the device without exposing existing equipment to the potential dangers of the internet. The module shields brownfield equipment from attack by restricting the flow of data to only trusted cloud and device communication partners while also protecting module and equipment software.

Using the Azure Sphere guardian module, enterprises can enable any number of secure operations between the device and the cloud. The device can even use the Azure Sphere Security Service for certificate-based authentication, failure reporting, and software updates.

Opportunities abound for the Microsoft partner ecosystem

Given the massive scale of connectable equipment already in use in retail, industrial, and commercial settings, the new guardian module presents a lucrative opportunity for Microsoft partners. Azure Sphere can connect an enormous range of devices of all types, leading the way for a multitude of practical applications that can pay off through increased productivity, predictive maintenance, cost savings, new revenue opportunities, and more.

Fulfilling demand for such a diverse set of use cases is only possible thanks to Azure Sphere's expanding partner ecosystem. Recent examples of this growth include our partnership with NXP to deliver a new Azure Sphere-certified chip that is an extension of their i.MX 8 high-performance applications processor series and brings greater compute capabilities to support advanced workloads, as well as our collaboration with Qualcomm Technologies, Inc. to deliver the first cellular-enabled Azure Sphere chip, which gives our customers the ability to securely connect anytime, anywhere.

Starbucks uses Azure Sphere guardian module to connect coffee machines

If you saw Satya Nadella’s Vision Keynote at Build 2019, you probably recall the demonstration of Starbucks’ IoT-connected coffee machines. But what you may not know is the Azure Sphere guardian module is behind the scenes, enabling Starbucks to connect these existing machines to the cloud.

As customers wait for their double-shot, no-whip mochas to brew, these IoT-enabled machines are doing more than meets the eye. They’re collecting more than a dozen data points for each precious shot, like the types of beans used, water temperature, and water quality. The solution enables Starbucks to proactively identify any issues with their machines in order to smooth their customers’ paths to caffeinated bliss.

Beyond predictive maintenance, Azure Sphere will enable Starbucks to transmit new recipes directly to machines in 30,000 stores rather than manually uploading recipes via thumb drives, saving Starbucks lots of time, money, and thumb drives. Watch this Microsoft Ignite session to see how Starbucks is tackling IoT at scale in pursuit of the perfect pour.

As an ecosystem, we have a tremendous opportunity to meet demand for brownfield installations and help our customers quickly bring their existing investments online without taking on risk and jeopardizing mission-critical equipment. The first guardian modules are available today from Avnet and AI-Link, with more expected soon.

Discover the value of adding secured connectivity to existing mission-critical equipment by registering for our upcoming Azure Sphere Guardian Modules webinar. You will experience a guided tour of the guardian module, including a deep dive into its architecture and the opportunity this open-source offering presents to our partner community. We’ll also hear from Starbucks around what they’ve learned since implementing the guardian module with Azure Sphere.


Combine the Power of Video Indexer and Computer Vision


We are pleased to introduce the ability to export high-resolution keyframes from Azure Media Service's Video Indexer. Whereas keyframes were previously exported in reduced resolution compared to the source video, high-resolution keyframe extraction gives you original-quality images and allows you to make use of the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video. This unlocks a wealth of pre-trained and custom model capabilities. You can use the keyframes extracted from Video Indexer, for example, to identify logos for monetization and brand safety needs, to add scene descriptions for accessibility needs, or to accurately identify very specific objects relevant to your organization, like a type of car or a place.

Let’s look at some of the use cases we can enable with this new introduction.

Using keyframes to get image description automatically

You can automate the process of “captioning” different visual shots of your video through the image description model within Computer Vision, in order to make the content more accessible to people with visual impairments. This model provides multiple description suggestions along with confidence values for an image. You can take the descriptions of each high-resolution keyframe and stitch them together to create an audio description track for your video.
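As a rough illustration of that stitching step, here is a minimal sketch in Python; the caption data is made up and stands in for what you would get back from Computer Vision's image description model for each keyframe:

```python
# Sketch: stitch per-keyframe captions into a simple timed description track.
# The (timestamp, caption, confidence) tuples are hypothetical stand-ins for
# Computer Vision image-description results on Video Indexer keyframes.
def stitch_descriptions(keyframes, min_confidence=0.5):
    lines = []
    for start, caption, confidence in sorted(keyframes):
        if confidence >= min_confidence:  # drop low-confidence captions
            minutes, seconds = divmod(int(start), 60)
            lines.append(f"[{minutes:02d}:{seconds:02d}] {caption}")
    return "\n".join(lines)

track = stitch_descriptions([
    (12.0, "a person holding a cup of coffee", 0.91),
    (3.0, "a city street at night", 0.87),
    (45.0, "text", 0.20),  # too uncertain to narrate, dropped
])
print(track)
```

The same ordered list of captions could then be fed to a text-to-speech service to produce the audio track itself.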

Image description within Computer Vision

Using Keyframes to get logo detection

While Video Indexer detects brands in speech and visual text, it does not support brands detection from logos yet. Instead, you can run your keyframes through Computer Vision’s logo-based brands detection model to detect instances of logos in your content.

This can also help you with brand safety as you now know and can control the brands showing up in your content. For example, you might not want to showcase the logo of a company directly competing with yours. Also, you can now monetize on the brands showing up in your content through sponsorship agreements or contextual ads.

Furthermore, you can cross-reference the results of this model for your keyframe with the timestamp of your keyframe to determine exactly when a logo is shown in your video and for how long. For example, if you have a sponsorship agreement with a content creator to show your logo for a certain period of time in their video, this can help determine whether the terms of the agreement have been upheld.
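A small sketch of that cross-referencing idea (plain Python; the detection data and the fixed keyframe interval are hypothetical):

```python
# Sketch: estimate per-logo screen time by attributing a fixed interval of
# video to each keyframe in which the logo was detected (hypothetical data).
def logo_screen_time(detections, interval_seconds=2.0):
    per_logo = {}
    for timestamp, logos in detections:
        for logo in logos:
            per_logo[logo] = per_logo.get(logo, 0.0) + interval_seconds
    return per_logo

times = logo_screen_time([
    (0.0, {"Contoso"}),
    (2.0, {"Contoso", "Fabrikam"}),
    (4.0, {"Fabrikam"}),
])
print(times)
```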

Computer Vision’s logo detection model can detect and recognize thousands of different brands out of the box. However, if you are working with logos that are specific to your use case or otherwise might not be a part of the out of the box logos database, you can also use Custom Vision to build a custom object detector and essentially train your own database of logos by uploading and correctly labeling instances of the logos relevant to you.

Computer Vision's logo detector, detecting the Microsoft logo.

Using keyframes with other Computer Vision and Custom Vision offerings

The Computer Vision APIs provide different insights in addition to image description and logo detection, such as object detection, image categorization, and more. The possibilities are endless when you use high-resolution keyframes in conjunction with these offerings.

For example, the object detection model in Computer Vision gives bounding boxes for common out of the box objects that are already detected as part of Video Indexer today. You can use these bounding boxes to blur out certain objects that don’t meet your standards.

Object detection model

High-resolution keyframes in conjunction with Custom Vision can be leveraged to achieve many different custom use cases. For example, you can train a model to determine what type of car (or even what breed of cat) is showing in a shot. Maybe you want to identify the location or the set where a scene was filmed for editing purposes. If you have objects of interest that may be unique to your use case, use Custom Vision to build a custom classifier to tag visuals or a custom object detector to tag and provide bounding boxes for visual objects.

Try it for yourself

These are just a few of the new opportunities enabled by the availability of high-resolution keyframes in Video Indexer. Now, it is up to you to get additional insights from your video by taking the keyframes from Video Indexer and running additional image processing using any of the Vision models we have just discussed. You can start doing this by first uploading your video to Video Indexer and taking the high-resolution keyframes after the indexing job is complete and second creating an account and getting started with the Computer Vision API and Custom Vision.

Have questions or feedback? We would love to hear from you. Use our UserVoice page to help us prioritize features, leave a comment below or email VISupport@Microsoft.com for any questions.

Try out WebView2 with the new interactive API sample


Over the past few years, we have seen increased demand for the development of applications that leverage both web and native technologies to modernize native applications, iterate faster with web technologies, and more easily develop cross-platform.

At this year’s Build conference in May, we introduced the Win32 preview of the WebView2 control, powered by the new Chromium-based Microsoft Edge browser. A WebView is a control that is embedded within a native application and renders web content (HTML/CSS/JavaScript) powered by the browser. Since launching our Win32 WebView2 preview, we have been engaging with the community and partners to collect a great deal of feedback, and delivering SDK updates every six weeks.

To learn more about WebViews, how they work, and more about options like Evergreen (WebView content is rendered by the Microsoft Edge browser instance on the user’s computer) vs. Bring Your Own (WebView content is rendered by a separate instance of the Microsoft Edge browser downloaded with the application) check out our developer documentation.

WebView2 API Sample

Recently, we built and launched a sample application (we call it WebView2 API Sample) using the WebView2 APIs to create an interactive application that demonstrates WebView2’s functionalities. The WebView2 API Sample is intended to be the most comprehensive guide available and will be updated regularly as we add more features to our SDK.

Notable features in our WebView2 API Sample are Navigation, Web Messaging (communication between the Win32 Host and the WebView), and Native Object Injection (accessing Win32 Objects directly from JavaScript).

Screen capture showing a WebView2 sample browser

You can build and play around with the WebView2 API Sample by downloading or cloning it from our WebView2 Samples repository. To learn more about the sample’s source code and functionality, read our WebView2 API Sample guide. As you develop your own applications, we recommend referencing the source code for suggested API patterns for WebView2 workflows.

Build your own WebView2 application

You can learn more about WebView2 through our documentation, get started using our getting-started guide, and check out more examples in our samples repository.

Tell us what you plan to build with WebView2 and please reach out with any thoughts or feedback through our feedback repo.

– Palak Goel, Program Manager, WebView

The post Try out WebView2 with the new interactive API sample appeared first on Microsoft Edge Blog.

Visual Studio 2019 for Mac version 8.4 Preview 4 is now available


Today, we released Visual Studio 2019 for Mac version 8.4 Preview 4. This preview version of Visual Studio for Mac brings support for the latest stable version of .NET Core, Scaffolding support for ASP.NET Core projects, and additional improvements to overall product accessibility. Developers using Xamarin Pair to Mac should also look at the additional information in this blog post related to our release schedule.

To try out the preview, you’ll need to download and install the latest version of Visual Studio 2019 for Mac, then switch to the Preview channel in the IDE.

For more information on the other changes in this release, look at our release notes.

Stay on the latest and greatest with support for .NET Core 3.1

With this release, Visual Studio for Mac adds official support for the newly released .NET Core 3.1. While this release of .NET Core brings with it a small series of improvements over .NET Core 3.0, it’s important to note that .NET Core 3.1 is a long-term supported (LTS) release. This means it will be supported for three years.

Updating to Preview 4 will install the .NET Core 3.1 SDK. If you previously installed Visual Studio for Mac without selecting the .NET Core target in the installer, you’ll need to take the following steps to get started developing .NET Core in Visual Studio for Mac:

Demonstration of the .NET Core target being checked in the Visual Studio for Mac installer

The .NET Core 3.1 release notes contain a full list of changes introduced by this update.

Use assistive technology more reliably

We’re committed to empowering all Mac developers with the ability to bring their thoughts to life using Visual Studio for Mac. In order to do so, we realize the need to support various assistive technologies. We’ve continued to make improvements to accessibility over the entire surface area of the IDE. Some of these efforts include:

  • Refining focus order when navigating with assistive technologies
  • Increasing color contrast ratios for text and icons
  • Eliminating keyboard traps that hinder navigation of the IDE
  • More accurate VoiceOver reading and navigation
  • Rewriting inaccessible components of the IDE with accessibility in mind

Despite the work we’re doing to make Visual Studio for Mac accessible to all, we know there’s still a long journey ahead of us and no end of the road when it comes to making the IDE a delightful experience for all. This has been and will continue to be a top priority for our team and we welcome any and all feedback from our users that will assist in guiding this work. Please reach out directly to me via dominicn@microsoft.com if you’d like to engage with us directly on our accessibility work. I’d look forward to learning from those of you who reach out.

Speaking about feedback from our community, let’s move on to ASP.NET Core Scaffolding…

Speed up your web app development with ASP.NET Core Scaffolding

A top ask from our community has been to add ASP.NET Core Scaffolding to Visual Studio for Mac. We’ve taken that feedback and have now enabled Scaffolding for ASP.NET Core projects in Visual Studio for Mac. Scaffolding makes ASP.NET Core app development easier and faster by generating boilerplate code for common scenarios.

To use the new Scaffolding feature in Visual Studio for Mac, click on the New Scaffolding entry in the Add flyout of the project context menu. The node on which you opened the right-click context menu will be the location where the generated files will be placed.

You’ll then see a Scaffolding wizard to help you generate code into your project. In the image below, I’m using one of our ASP.NET Core sample projects – a movie database app – to demonstrate scaffolding in action. I’ve used the tool to make pages for Create, Read, Update, and Delete operations (CRUD) and a Details page for the movie model.

Scaffolding wizard for ASP.NET Core project in Visual Studio for Mac

Once the wizard closes, it will add required NuGet packages to your project and create additional pages, based on the scaffolder you chose.

If you’re new to Scaffolding ASP.NET Core projects, take a look at our documentation for more information.

Xamarin Pair to Mac considerations

Developers using Visual Studio 2019 for Mac version 8.3 with Visual Studio 2019 version 16.4 for iOS development with Xamarin will see the following warnings in Windows:

Xamarin Pair to Mac warning messages

If you agree to continue, the Mono and Xamarin.iOS SDKs on your Mac will be updated to the latest versions. While we recommend updating to Visual Studio 2019 for Mac 8.4 Preview 4 to avoid version mismatches when working with Xamarin on Windows, updating by clicking through the warnings shown above will allow you to continue to work without moving from the Stable channel on Mac.

We plan to release Visual Studio for Mac version 8.4 to Stable in early January and appreciate your patience with this experience and the workaround until then.

Give it a try today!

Now that we’ve discussed the major additions to Visual Studio for Mac version 8.4 Preview 4, it’s time to download and install the release! To do so, make sure you’ve downloaded Visual Studio 2019 for Mac, then switch to the Preview channel.

As always, if you have any feedback on this, or any, version of Visual Studio for Mac, we invite you to leave them in the comments below this post or to reach out to us on Twitter at @VisualStudioMac. If you run into issues while using Visual Studio for Mac, you can use Report a Problem to notify the team. In addition to product issues, we also welcome your feature suggestions on the Visual Studio Developer Community website.

The post Visual Studio 2019 for Mac version 8.4 Preview 4 is now available appeared first on Visual Studio Blog.


An Introduction to System.Threading.Channels


“Producer/consumer” problems are everywhere, in all facets of our lives. A line cook at a fast food restaurant, slicing tomatoes that are handed off to another cook to assemble a burger, which is handed off to a register worker to fulfill your order, which you happily gobble down. Postal drivers delivering mail all along their routes, and you either seeing a truck arrive and going out to the mailbox to retrieve your deliveries or just checking later in the day when you get home from work. An airline employee offloading suitcases from a cargo hold of a jetliner, placing them onto a conveyer belt, where they’re shuttled down to another employee who transfers bags to a van and drives them to yet another conveyer that will take them to you. And a happy engaged couple preparing to send out invites to their wedding, with one partner addressing an envelope and handing it off to the other who stuffs and licks it.

As software developers, we routinely see happenings from our everyday lives make their way into our software, and “producer/consumer” problems are no exception. Anyone who’s piped together commands at a command-line has utilized producer/consumer, with the stdout from one program being fed as the stdin to another. Anyone who’s launched multiple workers to compute discrete values or to download data from multiple sites has utilized producer/consumer, with a consumer aggregating results for display or further processing. Anyone who’s tried to parallelize a pipeline has very explicitly employed producer/consumer. And so on.

All of these scenarios, whether in our real-world or software lives, have something in common: there is some vehicle for handing off the results from the producer to the consumer. The fast food employee places the completed burgers in a stand that the register worker pulls from to fill the customer’s bag. The postal worker places mail into a mailbox. The engaged couple’s hands meet to transfer the materials from one to the other. In software, such a hand-off requires a data structure of some kind to facilitate the transaction, storage that can be used by the producer to transfer a result and potentially buffer more, while also enabling the consumer to be notified that one or more results are available. Enter System.Threading.Channels.

What is a Channel?

I often find it easiest to understand some technology by implementing a simple version myself. In doing so, I learn about various problems implementers of that technology may have had to overcome, trade-offs they may have had to make, and the best way to utilize the functionality. To that end, let’s start learning about System.Threading.Channels by implementing a “channel” from scratch.

A channel is simply a data structure that’s used to store produced data for a consumer to retrieve, and an appropriate synchronization to enable that to happen safely, while also enabling appropriate notifications in both directions. There is a multitude of possible design decisions involved. Should a channel be able to hold an unbounded number of items? If not, what should happen when it fills up? How critical is performance? Do we need to try to minimize synchronization? Can we make any assumptions about how many producers and consumers are allowed concurrently? For the purposes of quickly writing a simple channel, let’s make simplifying assumptions that we don’t need to enforce any particular bound and that we don’t need to be overly concerned about overheads. We’ll also make up a simple API.

To start, we need our type, to which we’ll add a few simple methods:

public sealed class Channel<T>
{
    public void Write(T value);
    public ValueTask<T> ReadAsync(CancellationToken cancellationToken = default);
}

Our Write method gives us a method we can use to produce data into the channel, and our ReadAsync method gives us a method to consume from it. Since we decided our channel is unbounded, producing data into it will always complete successfully and synchronously, just as does calling Add on a List<T>, hence we’ve made it non-asynchronous and void-returning. In contrast, our method for consuming is ReadAsync, which is asynchronous because the data we want to consume may not be available yet, and thus we’ll need to wait for it to arrive if nothing is available to consume at the time we try. And while in our getting-started design we’re not overly concerned with performance, we also don’t want to have lots of unnecessary overheads. Since we expect to be reading frequently, and for us to often be reading when data is already available to be consumed, our ReadAsync method returns a ValueTask<T> rather than a Task<T>, so that we can make it allocation-free when it completes synchronously.

Now we just need to implement these two methods. To start, we’ll add two fields to our type: one to serve as the storage mechanism, and one to coordinate between the producers and consumers:

private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();
private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(0);

We use a ConcurrentQueue<T> to store the data, freeing us from needing to do our own locking to protect the buffering data structure, as ConcurrentQueue<T> is already thread-safe for any number of producers and any number of consumers to access concurrently. And we use a SemaphoreSlim to help coordinate between producers and consumers and to notify consumers that might be waiting for additional data to arrive.

Our Write method is simple. It just needs to store the data into the queue and increment the SemaphoreSlim‘s count by “release”ing it:

public void Write(T value)
{
    _queue.Enqueue(value); // store the data
    _semaphore.Release(); // notify any consumers that more data is available
}

And our ReadAsync method is almost just as simple. It needs to wait for data to be available and then take it out.

public async ValueTask<T> ReadAsync(CancellationToken cancellationToken = default)
{
    await _semaphore.WaitAsync(cancellationToken).ConfigureAwait(false); // wait
    bool gotOne = _queue.TryDequeue(out T item); // retrieve the data
    Debug.Assert(gotOne);
    return item;
}

Note that because no other code could be manipulating the semaphore or the queue, we know that once we’ve successfully waited on the semaphore, the queue will have data to give us, which is why we can just assert that the TryDequeue method successfully returned one. If those assumptions ever changed, this implementation would need to become more complicated.

And that’s it: we have our basic channel. If all you need are the basic features assumed here, such an implementation is perfectly reasonable. Of course, the requirements are often more significant, both on performance and on APIs necessary to enable more scenarios.
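As a quick sanity check, here’s a minimal sketch exercising the channel we just built. The type is repeated inline (under the assumed name SimpleChannel, just to avoid colliding with the real Channel type discussed next) so the snippet compiles on its own:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

// The channel from above, assembled into one compilable type.
public sealed class SimpleChannel<T>
{
    private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(0);

    public void Write(T value)
    {
        _queue.Enqueue(value);  // store the data
        _semaphore.Release();   // notify any waiting consumers
    }

    public async ValueTask<T> ReadAsync(CancellationToken cancellationToken = default)
    {
        await _semaphore.WaitAsync(cancellationToken).ConfigureAwait(false);
        bool gotOne = _queue.TryDequeue(out T item);
        Debug.Assert(gotOne);
        return item;
    }
}

public class Program
{
    public static async Task Main()
    {
        var channel = new SimpleChannel<int>();

        // Producer: writes always complete synchronously.
        Task producer = Task.Run(() =>
        {
            for (int i = 0; i < 10; i++)
                channel.Write(i);
        });

        // Consumer: awaits whenever the channel is momentarily empty.
        int sum = 0;
        for (int i = 0; i < 10; i++)
            sum += await channel.ReadAsync();

        await producer;
        Console.WriteLine(sum); // prints 45
    }
}
```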

Now that we understand the basics of what a channel provides, we can switch to looking at the actual System.Threading.Channels APIs.

Introducing System.Threading.Channels

The core abstractions exposed from the System.Threading.Channels library are a writer:

public abstract class ChannelWriter<T>
{
    public abstract bool TryWrite(T item);
    public virtual ValueTask WriteAsync(T item, CancellationToken cancellationToken = default);
    public abstract ValueTask<bool> WaitToWriteAsync(CancellationToken cancellationToken = default);
    public void Complete(Exception error = null);
    public virtual bool TryComplete(Exception error = null);
}

and a reader:

public abstract class ChannelReader<T>
{
    public abstract bool TryRead(out T item);
    public virtual ValueTask<T> ReadAsync(CancellationToken cancellationToken = default);
    public abstract ValueTask<bool> WaitToReadAsync(CancellationToken cancellationToken = default);
    public virtual IAsyncEnumerable<T> ReadAllAsync([EnumeratorCancellation] CancellationToken cancellationToken = default);
    public virtual Task Completion { get; }
}

Having just completed our own simple channel design and implementation, most of this API surface area should feel familiar. ChannelWriter<T> provides a TryWrite method that’s very similar to our Write method; however, it’s abstract and a Try method that returns a Boolean, to account for the fact that some implementations may be bounded in how many items they can physically store, and if the channel was full such that writing couldn’t complete synchronously, TryWrite would need to return false to indicate that writing was unsuccessful. However, ChannelWriter<T> also provides the WriteAsync method; in such a case where the channel is full and writing would need to wait (often referred to as “back pressure”), WriteAsync can be used, with the producer awaiting the result of WriteAsync and only being allowed to continue when room becomes available.

Of course, there are situations where code may not want to produce a value immediately; if producing a value is expensive or if a value represents an expensive resource (maybe it’s a big object that would take up a lot of memory, or maybe it stores a bunch of open files) and if there’s a reasonable chance the producer is running faster than the consumer, the producer may want to delay producing a value until it knows a write will be immediately successful. For that, and related scenarios, there’s WaitToWriteAsync. A producer can wait for WaitToWriteAsync to return true, and only then choose to produce a value that it then TryWrites or WriteAsyncs to the channel.
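That “wait, then produce” pattern might be sketched like this; createExpensiveItem here is a hypothetical stand-in for whatever costly production step the scenario involves:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class DelayedProducer
{
    // Delay producing an expensive value until the channel signals that a
    // write is likely to succeed. If the channel is marked complete while
    // waiting, WaitToWriteAsync returns false and we stop producing.
    public static async Task ProduceAsync<T>(
        ChannelWriter<T> writer, Func<T> createExpensiveItem, int count)
    {
        for (int i = 0; i < count; i++)
        {
            while (await writer.WaitToWriteAsync())
            {
                // Only now pay the cost of producing the value.
                if (writer.TryWrite(createExpensiveItem()))
                    break; // wrote successfully; move on to the next item

                // Lost a race with another producer; wait for space again.
            }
        }
    }
}

public class Program
{
    public static async Task Main()
    {
        Channel<int> channel = Channel.CreateUnbounded<int>();
        await DelayedProducer.ProduceAsync(channel.Writer, () => 7, 3);
        channel.Writer.Complete();

        int total = 0;
        while (channel.Reader.TryRead(out int value))
            total += value;
        Console.WriteLine(total); // prints 21
    }
}
```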

Note that WriteAsync is virtual. Some implementations may choose to provide a more optimized implementation, but with abstract TryWrite and WaitToWriteAsync, the base type can provide a reasonable implementation, which is only slightly more sophisticated than this:

public async ValueTask WriteAsync(T item, CancellationToken cancellationToken)
{
    while (await WaitToWriteAsync(cancellationToken).ConfigureAwait(false))
        if (TryWrite(item))
            return;

    throw new ChannelClosedException();
}

In addition to showing how WaitToWriteAsync and TryWrite can be used, this highlights a few additional interesting things. First, the while loop is present here because channels by default can be used by any number of producers and any number of consumers concurrently. If a channel had an upper bound on how many items it could store, and if multiple threads are racing to write to the buffer, it’s possible for two threads to be told “yes, there’s space” via WaitToWriteAsync, but then for one of them to lose the race and have TryWrite return false, hence the need to loop around and try again. This example also highlights why WaitToWriteAsync returns a ValueTask<bool> instead of just ValueTask, as well as situations beyond a full buffer in which TryWrite may return false. Channels support the notion of completion, where a producer can signal to a consumer that there won’t be any further items produced, enabling the consumer to gracefully stop trying to consume. This is done via the Complete or TryComplete methods previously shown on ChannelWriter<T> (Complete is just implemented to call TryComplete and throw if it returns false). But if one producer marks the channel as complete, other producers need to know they’re no longer welcome to write into the channel; in that case, TryWrite returns false, WaitToWriteAsync also returns false, and WriteAsync throws a ChannelClosedException.

Most of the members on ChannelReader<T> are likely self-explanatory as well. TryRead will try to synchronously extract the next element from the channel, returning whether it was successful in doing so. ReadAsync will also extract the next element from the channel, but if an element can’t be retrieved synchronously, it will return a task for that element. And WaitToReadAsync returns a ValueTask<bool> that serves as a notification for when an element is available to be consumed. Just as with ChannelWriter<T>‘s WriteAsync, ReadAsync is virtual, with the base implementation implementable in terms of the abstract TryRead and WaitToReadAsync; this isn’t the exact implementation in the base class, but it’s close:

public async ValueTask<T> ReadAsync(CancellationToken cancellationToken)
{
    while (true)
    {
        if (!await WaitToReadAsync(cancellationToken).ConfigureAwait(false))
            throw new ChannelClosedException();

        if (TryRead(out T item))
            return item;
    }
}

There are a variety of typical patterns for how one consumes from a ChannelReader<T>. If a channel represents an unending stream of values, one approach is simply to sit in an infinite loop consuming via ReadAsync:

while (true)
{
    T item = await channelReader.ReadAsync();
    Use(item);
}

Of course, if the stream of values isn’t infinite and the channel will be marked completed at some point, then once consumers have emptied the channel of all its data, subsequent attempts to ReadAsync from it will throw. TryRead, in contrast, will return false, as will WaitToReadAsync. So, a more common consumption pattern is via a nested loop:

while (await channelReader.WaitToReadAsync())
    while (channelReader.TryRead(out T item))
        Use(item);

The inner “while” could have instead been a simple “if”, but having the tight inner loop enables a cost-conscious developer to avoid the small additional overheads of WaitToReadAsync when an item is already available such that TryRead will successfully consume an item. In fact, this is the exact pattern employed by the ReadAllAsync method. ReadAllAsync was introduced in .NET Core 3.0, and returns an IAsyncEnumerable<T>. It enables all of the data to be read from a channel using familiar language constructs:

await foreach (T item in channelReader.ReadAllAsync())
    Use(item);

And the base implementation of the virtual method employs the exact nested-loop pattern shown previously with WaitToReadAsync and TryRead:

public virtual async IAsyncEnumerable<T> ReadAllAsync(
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    while (await WaitToReadAsync(cancellationToken).ConfigureAwait(false))
        while (TryRead(out T item))
            yield return item;
}

The final member of ChannelReader<T> is Completion. This simply returns a Task that will complete when the channel reader is completed, meaning the channel was marked for completion by a writer and all data has been consumed.
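A small sketch of completion end to end: once the writer calls Complete and the reader drains the remaining items, the Completion task finishes:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        Channel<int> channel = Channel.CreateUnbounded<int>();

        for (int i = 1; i <= 3; i++)
            channel.Writer.TryWrite(i);
        channel.Writer.Complete(); // signal: no further items will be produced

        // ReadAllAsync ends gracefully once the channel is complete and empty.
        int sum = 0;
        await foreach (int item in channel.Reader.ReadAllAsync())
            sum += item;

        // The channel was marked complete and all data has been consumed,
        // so awaiting Completion finishes promptly.
        await channel.Reader.Completion;
        Console.WriteLine(sum); // prints 6
    }
}
```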

Built-In Channel Implementations

Ok, so we know how to write to writers and read from readers… but from where do we get those writers and readers?

The Channel<TWrite, TRead> type exposes a Writer property and a Reader property that return a ChannelWriter<TWrite> and a ChannelReader<TRead>, respectively:

public abstract class Channel<TWrite, TRead>
{
    public ChannelReader<TRead> Reader { get;  }
    public ChannelWriter<TWrite> Writer { get; }
}

This base abstract class is available for the niche use cases where a channel may itself transform written data into a different type for consumption, but in the vast majority of use cases TWrite and TRead are the same, which is why most usage happens via the derived Channel<T> type, which is nothing more than:

public abstract class Channel<T> : Channel<T, T> { }

The non-generic Channel type then provides factories for several implementations of Channel<T>:

public static class Channel
{
    public static Channel<T> CreateUnbounded<T>();
    public static Channel<T> CreateUnbounded<T>(UnboundedChannelOptions options);

    public static Channel<T> CreateBounded<T>(int capacity);
    public static Channel<T> CreateBounded<T>(BoundedChannelOptions options);
}

The CreateUnbounded method creates a channel with no imposed limit on the number of items that can be stored (of course, at some point it might hit the limits of memory, just as with List<T> and any other collection), very much like the simple Channel-like type we implemented at the beginning of this post. Its TryWrite will always return true, and both its WriteAsync and its WaitToWriteAsync will always complete synchronously.

In contrast, the CreateBounded method creates a channel with an explicit limit maintained by the implementation. Prior to reaching this capacity, just as with CreateUnbounded, TryWrite will return true and both WriteAsync and WaitToWriteAsync will complete synchronously. But once the channel fills up, TryWrite will return false, and both WriteAsync and WaitToWriteAsync will complete asynchronously, only completing their returned tasks when space is available, or another producer signals the channel’s completion. (It should go without saying that all of these APIs that accept a CancellationToken can also be interrupted by cancellation being requested).
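Here’s a small sketch of that back pressure in action with a bounded channel of capacity 2:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        Channel<int> channel = Channel.CreateBounded<int>(2);

        Console.WriteLine(channel.Writer.TryWrite(1)); // True
        Console.WriteLine(channel.Writer.TryWrite(2)); // True
        Console.WriteLine(channel.Writer.TryWrite(3)); // False: the channel is full

        // WriteAsync applies back pressure: its task won't complete until space frees up.
        ValueTask pending = channel.Writer.WriteAsync(3);
        Console.WriteLine(pending.IsCompleted); // False

        await channel.Reader.ReadAsync(); // consume an item, making room
        await pending;                    // the pending write can now complete

        Console.WriteLine(await channel.Reader.ReadAsync()); // 2
        Console.WriteLine(await channel.Reader.ReadAsync()); // 3
    }
}
```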

Both CreateUnbounded and CreateBounded have overloads that accept a ChannelOptions-derived type. This base ChannelOptions provides options that can control any channel’s behavior. For example, it exposes SingleWriter and SingleReader properties, which allow the creator to indicate constraints they’re willing to accept; a creator sets SingleWriter to true to indicate that at most one producer will be accessing the writer at a time, and similarly sets SingleReader to true to indicate that at most one consumer will be accessing the reader at a time. This allows the factory methods to specialize the implementation that’s created, optimizing it based on the supplied options; for example, if the options passed to CreateUnbounded specify SingleReader as true, it returns an implementation that not only avoids locks when reading, it also avoids interlocked operations when reading, significantly reducing the overheads involved in consuming from the channel.

The base ChannelOptions also exposes an AllowSynchronousContinuations property. As with SingleReader and SingleWriter, this defaults to false, and a creator setting it to true is signing up for some optimizations that also have strong implications for how producing and consuming code is written. Specifically, AllowSynchronousContinuations in a sense allows a producer to temporarily become a consumer. Let’s say there’s no data in a channel and a consumer comes along and calls ReadAsync. By awaiting the task returned from ReadAsync, that consumer is effectively hooking up a callback to be invoked when data is written to the channel. By default, that callback will be invoked asynchronously, with the producer writing the data to the channel and then queueing the invocation of that callback, which allows the producer to concurrently go on its merry way while the consumer is processed by some other thread.

However, in some situations it may be advantageous for performance to allow the producer writing the data to also process the callback itself, e.g. rather than TryWrite queueing the invocation of the callback, it simply invokes the callback directly. This can significantly cut down on overheads, but it also requires a good understanding of the environment: for example, if you were holding a lock while calling TryWrite, then with AllowSynchronousContinuations set to true, you might end up invoking the callback while holding your lock, which (depending on what the callback tried to do) could end up observing some broken invariants your lock was trying to maintain.
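As a sketch, these options are supplied at creation time via UnboundedChannelOptions (or BoundedChannelOptions for bounded channels):

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        // Promise at most one concurrent reader and one concurrent writer so
        // the factory can choose a lower-overhead implementation.
        var options = new UnboundedChannelOptions
        {
            SingleReader = true,
            SingleWriter = true,
            // Leave this at its default of false unless you understand the
            // reentrancy implications of synchronous continuations.
            AllowSynchronousContinuations = false
        };
        Channel<int> channel = Channel.CreateUnbounded<int>(options);

        channel.Writer.TryWrite(42);
        Console.WriteLine(await channel.Reader.ReadAsync()); // prints 42
    }
}
```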

The BoundedChannelOptions passed to CreateBounded layers on additional options specific to bounding. In addition to the maximum capacity supported by the channel, it exposes a BoundedChannelFullMode enum that indicates the behavior writes should experience when the channel is full:

public enum BoundedChannelFullMode
{
    Wait,
    DropNewest,
    DropOldest,
    DropWrite
}

The default is Wait, which has the semantics already discussed: TryWrite on a full channel returns false, WriteAsync will return a task that will only complete when space becomes available and the write can complete successfully, and similarly WaitToWriteAsync will only complete when space becomes available. The other three modes instead enable writes to always complete synchronously, dropping an element if the channel is full rather than introducing back pressure. DropOldest will remove the “oldest” item (wall-clock time) from the queue, meaning whichever element would next be dequeued by a consumer. Conversely, DropNewest will remove the newest item, whichever element was most recently written to the channel. And DropWrite drops the item currently being written, meaning for example TryWrite will return true but the item it added will immediately be removed.
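For example, a sketch of DropOldest: with a capacity of 2, a third write “succeeds” by evicting the item a consumer would have dequeued next:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        Channel<int> channel = Channel.CreateBounded<int>(new BoundedChannelOptions(2)
        {
            FullMode = BoundedChannelFullMode.DropOldest
        });

        channel.Writer.TryWrite(1);
        channel.Writer.TryWrite(2);
        channel.Writer.TryWrite(3); // full: evicts the oldest item (1) and stores 3

        Console.WriteLine(await channel.Reader.ReadAsync()); // prints 2
        Console.WriteLine(await channel.Reader.ReadAsync()); // prints 3
    }
}
```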

Performance

From an API perspective, that’s pretty much it. The abstractions exposed are relatively simple, which is a large part of where the power of the library comes from. Simple abstractions and a few concrete implementations that should meet 99.9% of developers’ needs. Of course, the surface area of the library might suggest that the implementation is also simple. In truth, there’s a decent amount of complexity in the implementation, mostly focused on enabling great throughput while keeping the consumption patterns in consuming code simple. The implementation, for example, goes to great pains to minimize allocations. You may have noticed that many of the methods in the surface area return ValueTask and ValueTask<T> rather than Task and Task<T>. As we saw in our trivial example implementation at the beginning of this article, we can utilize ValueTask<T> to avoid allocations when methods complete synchronously, but the System.Threading.Channels implementation also takes advantage of the advanced IValueTaskSource and IValueTaskSource<T> interfaces to avoid allocations even when the various methods complete asynchronously and need to return tasks.

Consider this benchmark:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Threading.Channels;
using System.Threading.Tasks;

[MemoryDiagnoser]
public class Program
{
    static void Main() => BenchmarkRunner.Run<Program>();

    private readonly Channel<int> s_channel = Channel.CreateUnbounded<int>();

    [Benchmark]
    public async Task WriteThenRead()
    {
        ChannelWriter<int> writer = s_channel.Writer;
        ChannelReader<int> reader = s_channel.Reader;
        for (int i = 0; i < 10_000_000; i++)
        {
            writer.TryWrite(i);
            await reader.ReadAsync();
        }
    }
}

Here we’re just testing the throughput and memory allocation on an unbounded channel when writing an element and then reading out that element 10 million times, which means an element will always be available for the read to consume and thus the read will always complete synchronously, yielding the following results on my machine (the 72 bytes shown in the Allocated column is for the single Task returned from WriteThenRead):

Method        | Mean     | Error   | StdDev  | Gen 0 | Gen 1 | Gen 2 | Allocated
------------- | -------- | ------- | ------- | ----- | ----- | ----- | ---------
WriteThenRead | 527.8 ms | 2.03 ms | 1.90 ms | -     | -     | -     | 72 B

But now let’s change it slightly, first issuing the read and only then writing the element that will satisfy it. In this case, reads will always complete asynchronously because the data to complete them will never be available:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Threading.Channels;
using System.Threading.Tasks;

[MemoryDiagnoser]
public class Program
{
    static void Main() => BenchmarkRunner.Run<Program>();

    private readonly Channel<int> s_channel = Channel.CreateUnbounded<int>();

    [Benchmark]
    public async Task ReadThenWrite()
    {
        ChannelWriter<int> writer = s_channel.Writer;
        ChannelReader<int> reader = s_channel.Reader;
        for (int i = 0; i < 10_000_000; i++)
        {
            ValueTask<int> vt = reader.ReadAsync();
            writer.TryWrite(i);
            await vt;
        }
    }
}

which on my machine for 10 million writes and reads yields results like this:

Method        | Mean     | Error   | StdDev  | Gen 0 | Gen 1 | Gen 2 | Allocated
------------- | -------- | ------- | ------- | ----- | ----- | ----- | ---------
ReadThenWrite | 881.2 ms | 4.60 ms | 4.30 ms | -     | -     | -     | 72 B

So, there’s some more overhead when every read completes asynchronously, but even here we see zero allocations for the 10 million asynchronously-completing reads (again, the 72 bytes shown in the Allocated column is for the Task returned from ReadThenWrite)!

Combinators

Generally consumption of channels is simple, using one of the approaches shown earlier. But as with IEnumerables, it’s also possible to implement various kinds of operations over channels to accomplish a specific purpose. For example, let’s say I want to wait for the first element to arrive from either of two supplied readers; I could write something like this:

public static async ValueTask<ChannelReader<T>> WhenAny<T>(
    ChannelReader<T> reader1, ChannelReader<T> reader2)
{
    var cts = new CancellationTokenSource();
    Task<bool> t1 = reader1.WaitToReadAsync(cts.Token).AsTask();
    Task<bool> t2 = reader2.WaitToReadAsync(cts.Token).AsTask();
    Task<bool> completed = await Task.WhenAny(t1, t2);
    cts.Cancel();
    return completed == t1 ? reader1 : reader2;
}

Here we’re just calling WaitToReadAsync on both channels, and returning the reader for whichever one completes first. One of the interesting things to note about this example is that, while ChannelReader<T> bears many similarities to IEnumerator<T>, this example can’t be implemented well on top of IEnumerator<T> (or IAsyncEnumerator<T>). I{Async}Enumerator<T> exposes a MoveNext{Async} method, which moves the cursor ahead to the next item, which is then exposed from Current. If we tried to implement such a WhenAny on top of IAsyncEnumerator<T>, we would need to invoke MoveNextAsync on each. In doing so, we would potentially move both ahead to their next item. If we then used that method in a loop, we would likely end up missing items from one or both enumerators, because we would potentially have advanced the enumerator that we didn’t return from the method.
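A quick sketch of using that combinator (the combinator is repeated so the snippet is self-contained, and the result is deterministic here because only one of the two channels has data available):

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public class Program
{
    // The WhenAny combinator from above.
    public static async ValueTask<ChannelReader<T>> WhenAny<T>(
        ChannelReader<T> reader1, ChannelReader<T> reader2)
    {
        var cts = new CancellationTokenSource();
        Task<bool> t1 = reader1.WaitToReadAsync(cts.Token).AsTask();
        Task<bool> t2 = reader2.WaitToReadAsync(cts.Token).AsTask();
        Task<bool> completed = await Task.WhenAny(t1, t2);
        cts.Cancel(); // stop waiting on the channel that lost the race
        return completed == t1 ? reader1 : reader2;
    }

    public static async Task Main()
    {
        Channel<string> a = Channel.CreateUnbounded<string>();
        Channel<string> b = Channel.CreateUnbounded<string>();

        b.Writer.TryWrite("from b"); // only b has an item

        ChannelReader<string> ready = await WhenAny(a.Reader, b.Reader);
        ready.TryRead(out string item);
        Console.WriteLine(item); // prints "from b"
    }
}
```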

Relationship to the rest of .NET Core

System.Threading.Channels is part of the .NET Core shared framework, meaning a .NET Core app can start using it without installing anything additional. It’s also available as a separate NuGet package, though the separate implementation doesn’t have all of the optimizations that the built-in implementation has, in large part because the built-in implementation is able to take advantage of additional runtime and library support in .NET Core.

It’s also used by a variety of other systems in .NET. For example, ASP.NET uses channels as part of SignalR as well as in its Libuv-based Kestrel transport. Channels are also used by the upcoming QUIC implementation currently being developed for .NET 5.

If you squint, the System.Threading.Channels library also looks a bit similar to the System.Threading.Tasks.Dataflow library that’s been available with .NET for years. In some ways, the dataflow library is a superset of the channels library; in particular, the BufferBlock<T> type from the dataflow library exposes much of the same functionality. However, the dataflow library is also focused on a different programming model, one where blocks are linked together such that data flows automatically from one to the next. It also includes advanced functionality that supports, for example, a form of two-phase commit, with multiple blocks linked to the same consumers, and those consumers able to atomically take from multiple blocks without deadlocking. The mechanisms required to enable that are much more involved, and while more powerful they’re also more expensive. This is evident just by writing the same benchmark for BufferBlock<T> as we did earlier for channels:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Threading.Channels;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

[MemoryDiagnoser]
public class Program
{
    static void Main() => BenchmarkRunner.Run<Program>();

    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();
    private readonly BufferBlock<int> _bufferBlock = new BufferBlock<int>();

    [Benchmark]
    public async Task Channel_ReadThenWrite()
    {
        ChannelWriter<int> writer = _channel.Writer;
        ChannelReader<int> reader = _channel.Reader;
        for (int i = 0; i < 10_000_000; i++)
        {
            ValueTask<int> vt = reader.ReadAsync();
            writer.TryWrite(i);
            await vt;
        }
    }

    [Benchmark]
    public async Task BufferBlock_ReadThenWrite()
    {
        for (int i = 0; i < 10_000_000; i++)
        {
            Task<int> t = _bufferBlock.ReceiveAsync();
            _bufferBlock.Post(i);
            await t;
        }
    }
}
Method                    |        Mean |     Error |    StdDev |        Gen 0 |     Gen 1 | Gen 2 |    Allocated
--------------------------|-------------|-----------|-----------|--------------|-----------|-------|-------------
Channel_ReadThenWrite     |    878.9 ms |   0.68 ms |   0.60 ms |            - |         - |     - |         72 B
BufferBlock_ReadThenWrite | 20,116.4 ms | 192.82 ms | 180.37 ms | 1184000.0000 | 2000.0000 |     - | 7360000232 B

This is in no way meant to suggest that the System.Threading.Tasks.Dataflow library shouldn’t be used. It enables developers to express succinctly a large number of concepts, and it can exhibit very good performance when applied to the problems it suits best. However, when all you need is a hand-off data structure between one or more producers and one or more consumers that you’ve implemented yourself, System.Threading.Channels is a much simpler, leaner bet.

What’s Next?

Hopefully at this point you have a better understanding of the System.Threading.Channels library, enough to see how it might fit into and help improve your applications. Give it a try, and we’d love your feedback, suggestions, issues, and PRs to improve it further at https://github.com/dotnet/runtime. Thanks!

The post An Introduction to System.Threading.Channels appeared first on .NET Blog.

ConfigureAwait FAQ


.NET added async/await to the languages and libraries over seven years ago. In that time, it’s caught on like wildfire, not only across the .NET ecosystem, but also being replicated in a myriad of other languages and frameworks. It’s also seen a ton of improvements in .NET, in terms of additional language constructs that utilize asynchrony, APIs offering async support, and fundamental improvements in the infrastructure that makes async/await tick (in particular performance and diagnostic-enabling improvements in .NET Core).

However, one aspect of async/await that continues to draw questions is ConfigureAwait. In this post, I hope to answer many of them. I intend for this post to be both readable from start to finish as well as being a list of Frequently Asked Questions (FAQ) that can be used as future reference.

To really understand ConfigureAwait, we need to start a bit earlier…

What is a SynchronizationContext?

The System.Threading.SynchronizationContext docs state that it “Provides the basic functionality for propagating a synchronization context in various synchronization models.” Not an entirely obvious description.

For the 99.9% use case, SynchronizationContext is just a type that provides a virtual Post method, which takes a delegate to be executed asynchronously (there are a variety of other virtual members on SynchronizationContext, but they’re much less used and are irrelevant for this discussion). The base type’s Post literally just calls ThreadPool.QueueUserWorkItem to asynchronously invoke the supplied delegate. However, derived types override Post to enable that delegate to be executed in the most appropriate place and at the most appropriate time.

For example, Windows Forms has a SynchronizationContext-derived type that overrides Post to do the equivalent of Control.BeginInvoke; that means any calls to its Post method will cause the delegate to be invoked at some later point on the thread associated with that relevant Control, aka “the UI thread”. Windows Forms relies on Win32 message handling and has a “message loop” running on the UI thread, which simply sits waiting for new messages to arrive to process. Those messages could be for mouse movements and clicks, for keyboard typing, for system events, for delegates being available to invoke, etc. So, given a SynchronizationContext instance for the UI thread of a Windows Forms application, to get a delegate to execute on that UI thread, one simply needs to pass it to Post.

The same goes for Windows Presentation Foundation (WPF). It has its own SynchronizationContext-derived type with a Post override that similarly “marshals” a delegate to the UI thread (via Dispatcher.BeginInvoke), in this case managed by a WPF Dispatcher rather than a Windows Forms Control.

The same goes for the Windows Runtime (WinRT). It has its own SynchronizationContext-derived type with a Post override that also queues the delegate to the UI thread via its CoreDispatcher.

This goes beyond just “run this delegate on the UI thread”. Anyone can implement a SynchronizationContext with a Post that does anything. For example, I may not care what thread a delegate runs on, but I want to make sure that any delegates Post‘d to my SynchronizationContext are executed with some limited degree of concurrency. I can achieve that with a custom SynchronizationContext like this:

internal sealed class MaxConcurrencySynchronizationContext : SynchronizationContext
{
    private readonly SemaphoreSlim _semaphore;

    public MaxConcurrencySynchronizationContext(int maxConcurrencyLevel) =>
        _semaphore = new SemaphoreSlim(maxConcurrencyLevel);

    public override void Post(SendOrPostCallback d, object state) =>
        _semaphore.WaitAsync().ContinueWith(delegate
        {
            try { d(state); } finally { _semaphore.Release(); }
        }, default, TaskContinuationOptions.None, TaskScheduler.Default);

    public override void Send(SendOrPostCallback d, object state)
    {
        _semaphore.Wait();
        try { d(state); } finally { _semaphore.Release(); }
    }
}

In fact, the unit testing framework xunit provides a SynchronizationContext very similar to this, which it uses to limit the amount of code associated with tests that can be run concurrently.

The benefit of all of this is the same as with any abstraction: it provides a single API that can be used to queue a delegate for handling however the creator of the implementation desires, without needing to know the details of that implementation. So, if I’m writing a library, and I want to go off and do some work, and then queue a delegate back to the original location’s “context”, I just need to grab their SynchronizationContext, hold on to it, and then when I’m done with my work, call Post on that context to hand off the delegate I want invoked. I don’t need to know that for Windows Forms I should grab a Control and use its BeginInvoke, or for WPF I should grab a Dispatcher and use its BeginInvoke, or for xunit I should somehow acquire its context and queue to it; I simply need to grab the current SynchronizationContext and use that later on. To achieve that, SynchronizationContext provides a Current property, such that to achieve the aforementioned objective I might write code like this:

public void DoWork(Action worker, Action completion)
{
    SynchronizationContext sc = SynchronizationContext.Current;
    ThreadPool.QueueUserWorkItem(_ =>
    {
        try { worker(); }
        finally { sc.Post(_ => completion(), null); }
    });
}

A framework that wants to expose a custom context from Current uses the SynchronizationContext.SetSynchronizationContext method.
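For illustration, a minimal sketch of installing a context, using the instantiable base SynchronizationContext type in place of a real app model’s derived type:

```csharp
using System;
using System.Threading;

var prev = SynchronizationContext.Current;
var ctx = new SynchronizationContext(); // an app model would use its own derived type
SynchronizationContext.SetSynchronizationContext(ctx);
bool installed;
try
{
    // Library code running here that reads SynchronizationContext.Current now sees ctx.
    installed = ReferenceEquals(SynchronizationContext.Current, ctx);
    Console.WriteLine(installed); // True
}
finally
{
    // App models are responsible for restoring the previous context.
    SynchronizationContext.SetSynchronizationContext(prev);
}
```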

What is a TaskScheduler?

SynchronizationContext is a general abstraction for a “scheduler”. Individual frameworks sometimes have their own abstractions for a scheduler, and System.Threading.Tasks is no exception. When Tasks are backed by a delegate such that they can be queued and executed, they’re associated with a System.Threading.Tasks.TaskScheduler. Just as SynchronizationContext provides a virtual Post method to queue a delegate’s invocation (with the implementation later invoking the delegate via typical delegate invocation mechanisms), TaskScheduler provides an abstract QueueTask method (with the implementation later invoking that Task via the ExecuteTask method).

The default scheduler as returned by TaskScheduler.Default is the thread pool, but it’s possible to derive from TaskScheduler and override the relevant methods to achieve arbitrary behaviors for when and where a Task is invoked. For example, the core libraries include the System.Threading.Tasks.ConcurrentExclusiveSchedulerPair type. An instance of this class exposes two TaskScheduler properties, one called ExclusiveScheduler and one called ConcurrentScheduler. Tasks scheduled to the ConcurrentScheduler may run concurrently, but subject to a limit supplied to ConcurrentExclusiveSchedulerPair when it was constructed (similar to the MaxConcurrencySynchronizationContext shown earlier), and no ConcurrentScheduler Tasks will run when a Task scheduled to ExclusiveScheduler is running, with only one exclusive Task allowed to run at a time… in this way, it behaves very much like a reader/writer-lock.

Like SynchronizationContext, TaskScheduler also has a Current property, which returns the “current” TaskScheduler. Unlike SynchronizationContext, however, there’s no method for setting the current scheduler. Instead, the current scheduler is the one associated with the currently running Task, and a scheduler is provided to the system as part of starting a Task. So, for example, a program that uses StartNew to schedule a lambda onto a ConcurrentExclusiveSchedulerPair’s ExclusiveScheduler will, inside that lambda, see TaskScheduler.Current set to that exclusive scheduler.
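For example, a minimal sketch of such a program prints True:

```csharp
using System;
using System.Threading.Tasks;

var cesp = new ConcurrentExclusiveSchedulerPair();
bool onExclusive = false;
await Task.Factory.StartNew(() =>
{
    // The lambda executes on the ExclusiveScheduler, so Current returns that scheduler.
    onExclusive = TaskScheduler.Current == cesp.ExclusiveScheduler;
}, default, TaskCreationOptions.None, cesp.ExclusiveScheduler);
Console.WriteLine(onExclusive); // True
```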

Interestingly, TaskScheduler provides a static FromCurrentSynchronizationContext method, which creates a new TaskScheduler that queues Tasks to run on whatever SynchronizationContext.Current returned, using its Post method for queueing tasks.

How do SynchronizationContext and TaskScheduler relate to await?

Consider writing a UI app with a Button. Upon clicking the Button, we want to download the current date and time from a web site and set the resulting text as the Button’s Content. The Button should only be accessed from the UI thread that owns it, so when we’ve successfully downloaded the new date and time text and want to store it back into the Button’s Content, we need to do so from the thread that owns the control. If we don’t, we get an exception like:

System.InvalidOperationException: 'The calling thread cannot access this object because a different thread owns it.'

If we were writing this out manually, we could use SynchronizationContext as shown earlier to marshal the setting of the Content back to the original context, such as via a TaskScheduler:

private static readonly HttpClient s_httpClient = new HttpClient();

private void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    s_httpClient.GetStringAsync("http://example.com/currenttime").ContinueWith(downloadTask =>
    {
        downloadBtn.Content = downloadTask.Result;
    }, TaskScheduler.FromCurrentSynchronizationContext());
}

or using SynchronizationContext directly:

private static readonly HttpClient s_httpClient = new HttpClient();

private void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    SynchronizationContext sc = SynchronizationContext.Current;
    s_httpClient.GetStringAsync("http://example.com/currenttime").ContinueWith(downloadTask =>
    {
        sc.Post(delegate
        {
            downloadBtn.Content = downloadTask.Result;
        }, null);
    });
}

Both of these approaches, though, explicitly use callbacks. We would instead like to write the code naturally with async/await:

private static readonly HttpClient s_httpClient = new HttpClient();

private async void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime");
    downloadBtn.Content = text;
}

This “just works”, successfully setting Content on the UI thread, because just as with the manually implemented version above, awaiting a Task pays attention by default to SynchronizationContext.Current, as well as to TaskScheduler.Current. When you await anything in C#, the compiler transforms the code to ask (via calling GetAwaiter) the “awaitable” (in this case, the Task) for an “awaiter” (in this case, a TaskAwaiter<string>). That awaiter is responsible for hooking up the callback (often referred to as the “continuation”) that will call back into the state machine when the awaited object completes, and it does so using whatever context/scheduler it captured at the time the callback was registered. While not exactly the code used (there are additional optimizations and tweaks employed), it’s something like this:

object scheduler = SynchronizationContext.Current;
if (scheduler is null && TaskScheduler.Current != TaskScheduler.Default)
{
    scheduler = TaskScheduler.Current;
}

In other words, it first checks whether there’s a SynchronizationContext set, and if there isn’t, whether there’s a non-default TaskScheduler in play. If it finds one, when the callback is ready to be invoked, it’ll use the captured scheduler; otherwise, it’ll generally just execute the callback as part of the operation completing the awaited task.

What does ConfigureAwait(false) do?

The ConfigureAwait method isn’t special: it’s not recognized in any special way by the compiler or by the runtime. It is simply a method that returns a struct (a ConfiguredTaskAwaitable) that wraps the original task it was called on as well as the specified Boolean value. Remember that await can be used with any type that exposes the right pattern. By returning a different type, when the compiler accesses the instance’s GetAwaiter method (part of the pattern), it does so off of the type returned from ConfigureAwait rather than off of the task directly, and that provides a hook to change how the await behaves via this custom awaiter.

Specifically, awaiting the type returned from ConfigureAwait(continueOnCapturedContext: false) instead of awaiting the Task directly ends up impacting the logic shown earlier for how the target context/scheduler is captured. It effectively makes the previously shown logic more like this:

object scheduler = null;
if (continueOnCapturedContext)
{
    scheduler = SynchronizationContext.Current;
    if (scheduler is null && TaskScheduler.Current != TaskScheduler.Default)
    {
        scheduler = TaskScheduler.Current;
    }
}

In other words, by specifying false, even if there is a current context or scheduler to call back to, it pretends as if there isn’t.

Why would I want to use ConfigureAwait(false)?

ConfigureAwait(continueOnCapturedContext: false) is used to avoid forcing the callback to be invoked on the original context or scheduler. This has a few benefits:

Improving performance. There is a cost to queueing the callback rather than just invoking it, both because there’s extra work (and typically extra allocation) involved, but also because it means certain optimizations we’d otherwise like to employ in the runtime can’t be used (we can do more optimization when we know exactly how the callback will be invoked, but if it’s handed off to an arbitrary implementation of an abstraction, we can sometimes be limited). For very hot paths, even the extra costs of checking for the current SynchronizationContext and the current TaskScheduler (both of which involve accessing thread statics) can add measurable overhead. If the code after an await doesn’t actually require running in the original context, using ConfigureAwait(false) can avoid all these costs: it won’t need to queue unnecessarily, it can utilize all the optimizations it can muster, and it can avoid the unnecessary thread static accesses.

Avoiding deadlocks. Consider a library method that uses await on the result of some network download. You invoke this method and synchronously block waiting for it to complete, such as by using .Wait() or .Result or .GetAwaiter().GetResult() off of the returned Task object. Now consider what happens if your invocation of it happens when the current SynchronizationContext is one that limits the number of operations that can be running on it to 1, whether explicitly via something like the MaxConcurrencySynchronizationContext shown earlier, or implicitly by this being a context that only has one thread that can be used, e.g. a UI thread. So you invoke the method on that one thread and then block it waiting for the operation to complete. The operation kicks off the network download and awaits it. Since by default awaiting a Task will capture the current SynchronizationContext, it does so, and when the network download completes, it queues back to the SynchronizationContext the callback that will invoke the remainder of the operation. But the only thread that can process the queued callback is currently blocked by your code blocking waiting on the operation to complete. And that operation won’t complete until the callback is processed. Deadlock!

This can apply even when the context doesn’t limit the concurrency to just 1, but when the resources are limited in any fashion. Imagine the same situation, except using the MaxConcurrencySynchronizationContext with a limit of 4. And instead of making just one call to the operation, we queue to that context 4 invocations, each of which makes the call and blocks waiting for it to complete. We’ve now still blocked all of the resources while waiting for the async methods to complete, and the only thing that will allow those async methods to complete is if their callbacks can be processed by this context that’s already entirely consumed. Again, deadlock!
If instead the library method had used ConfigureAwait(false), it would not queue the callback back to the original context, avoiding the deadlock scenarios.
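The same deadlock can be reproduced without a UI framework. The sketch below uses the single-slot ExclusiveScheduler of a ConcurrentExclusiveSchedulerPair (the TaskScheduler flavor of the capture logic shown earlier, rather than a SynchronizationContext); CapturingAsync and NonCapturingAsync are hypothetical stand-ins for library methods:

```csharp
using System;
using System.Threading.Tasks;

var cesp = new ConcurrentExclusiveSchedulerPair();
bool deadlocked = false, completed = false;

// Run on the ExclusiveScheduler, which allows only one task at a time.
// Inside, TaskScheduler.Current is that non-default scheduler, so awaits
// that don't use ConfigureAwait(false) capture it for their continuations.
await Task.Factory.StartNew(() =>
{
    // The continuation of the await inside CapturingAsync is queued back to
    // the ExclusiveScheduler, but its single slot is held by this blocked
    // task, so the method can never complete: Wait times out.
    deadlocked = !CapturingAsync().Wait(millisecondsTimeout: 1000);

    // With ConfigureAwait(false), the continuation runs on the thread pool
    // instead, so the method completes even while we occupy the scheduler.
    completed = NonCapturingAsync().Wait(millisecondsTimeout: 1000);
}, default, TaskCreationOptions.None, cesp.ExclusiveScheduler);

Console.WriteLine((deadlocked, completed)); // (True, True)

static async Task CapturingAsync() => await Task.Delay(100);
static async Task NonCapturingAsync() => await Task.Delay(100).ConfigureAwait(false);
```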

Why would I want to use ConfigureAwait(true)?

You wouldn’t. ConfigureAwait(true) does nothing meaningful. When comparing await task with await task.ConfigureAwait(true), they’re functionally identical. If you see ConfigureAwait(true) in production code, you can delete it without changing behavior.

The ConfigureAwait method accepts a Boolean because there are some niche situations in which you want to pass in a variable to control the configuration. But the 99% use case is with a hardcoded false argument value, ConfigureAwait(false).

When should I use ConfigureAwait(false)?

It depends: are you implementing application-level code or general-purpose library code?

When writing applications, you generally want the default behavior (which is why it is the default behavior). If an app model / environment (e.g. Windows Forms, WPF, ASP.NET Core, etc.) publishes a custom SynchronizationContext, there’s almost certainly a really good reason it does: it’s providing a way for code that cares about synchronization context to interact with the app model / environment appropriately. So if you’re writing an event handler in a Windows Forms app, writing a unit test in xunit, writing code in an ASP.NET MVC controller, whether or not the app model did in fact publish a SynchronizationContext, you want to use that SynchronizationContext if it exists. And that means the default / ConfigureAwait(true). You make simple use of await, and the right things happen with regards to callbacks/continuations being posted back to the original context if one existed. This leads to the general guidance of: if you’re writing app-level code, do not use ConfigureAwait(false). If you think back to the Click event handler code example earlier in this post:

private static readonly HttpClient s_httpClient = new HttpClient();

private async void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime");
    downloadBtn.Content = text;
}

the setting of downloadBtn.Content = text needs to be done back in the original context. If the code had violated this guideline and instead used ConfigureAwait(false) when it shouldn’t have:

private static readonly HttpClient s_httpClient = new HttpClient();

private async void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime").ConfigureAwait(false); // bug
    downloadBtn.Content = text;
}

bad behavior will result. The same would go for code in a classic ASP.NET app reliant on HttpContext.Current; using ConfigureAwait(false) and then trying to use HttpContext.Current is likely going to result in problems.

In contrast, general-purpose libraries are “general purpose” in part because they don’t care about the environment in which they’re used. You can use them from a web app or from a client app or from a test, it doesn’t matter, as the library code is agnostic to the app model it might be used in. Being agnostic then also means that it’s not going to be doing anything that needs to interact with the app model in a particular way, e.g. it won’t be accessing UI controls, because a general-purpose library knows nothing about UI controls. Since we then don’t need to be running the code in any particular environment, we can avoid forcing continuations/callbacks back to the original context, and we do that by using ConfigureAwait(false) and gaining both the performance and reliability benefits it brings. This leads to the general guidance of: if you’re writing general-purpose library code, use ConfigureAwait(false). This is why, for example, you’ll see every (or almost every) await in the .NET Core runtime libraries using ConfigureAwait(false); with a few exceptions, in cases where it doesn’t it’s very likely a bug to be fixed. For example, this PR fixed a missing ConfigureAwait(false) call in HttpClient.

As with all guidance, of course, there can be exceptions, places where it doesn’t make sense. For example, one of the larger exemptions (or at least categories that requires thought) in general-purpose libraries is when those libraries have APIs that take delegates to be invoked. In such cases, the caller of the library is passing potentially app-level code to be invoked by the library, which then effectively renders those “general purpose” assumptions of the library moot. Consider, for example, an asynchronous version of LINQ’s Where method, e.g. public static async IAsyncEnumerable<T> WhereAsync(this IAsyncEnumerable<T> source, Func<T, bool> predicate). Does predicate here need to be invoked back on the original SynchronizationContext of the caller? That’s up to the implementation of WhereAsync to decide, and it’s a reason it may choose not to use ConfigureAwait(false).

Even with these special cases, the general guidance stands and is a very good starting point: use ConfigureAwait(false) if you’re writing general-purpose library / app-model-agnostic code, and otherwise don’t.

Does ConfigureAwait(false) guarantee the callback won’t be run in the original context?

No. It guarantees it won’t be queued back to the original context… but that doesn’t mean the code after an await task.ConfigureAwait(false) won’t still run in the original context. That’s because awaits on already-completed awaitables just keep running past the await synchronously rather than forcing anything to be queued back. So, if you await a task that’s already completed by the time it’s awaited, regardless of whether you used ConfigureAwait(false), the code immediately after this will continue to execute on the current thread in whatever context is still current.
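A minimal sketch demonstrating this: awaiting an already-completed task continues synchronously on the same thread, even with ConfigureAwait(false):

```csharp
using System;
using System.Threading.Tasks;

int before = 0, after = 0;
await Demo();
Console.WriteLine(before == after); // True

async Task Demo()
{
    before = Environment.CurrentManagedThreadId;
    // The awaited task is already complete, so execution continues
    // synchronously on the same thread, ConfigureAwait(false) or not.
    await Task.CompletedTask.ConfigureAwait(false);
    after = Environment.CurrentManagedThreadId;
}
```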

Is it ok to use ConfigureAwait(false) only on the first await in my method and not on the rest?

In general, no. See the previous FAQ. If the await task.ConfigureAwait(false) involves a task that’s already completed by the time it’s awaited (which is actually incredibly common), then the ConfigureAwait(false) will be meaningless, as the thread continues to execute code in the method after this and still in the same context that was there previously.

One notable exception to this is if you know that the first await will always complete asynchronously and the thing being awaited will invoke its callback in an environment free of a custom SynchronizationContext or a TaskScheduler. For example, CryptoStream in the .NET runtime libraries wants to ensure that its potentially computationally-intensive code doesn’t run as part of the caller’s synchronous invocation, so it uses a custom awaiter to ensure that everything after the first await runs on a thread pool thread. However, even in that case you’ll notice that the next await still uses ConfigureAwait(false); technically that’s not necessary, but it makes code review a lot easier, as otherwise every time this code is looked at it doesn’t require an analysis to understand why ConfigureAwait(false) was left off.

Can I use Task.Run to avoid using ConfigureAwait(false)?

Yes. If you write:

Task.Run(async delegate
{
    await SomethingAsync(); // won't see the original context
});

then a ConfigureAwait(false) on that SomethingAsync() call will be a nop, because the delegate passed to Task.Run is going to be executed on a thread pool thread, with no user code higher on the stack, such that SynchronizationContext.Current will return null. Further, Task.Run implicitly uses TaskScheduler.Default, which means querying TaskScheduler.Current inside of the delegate will also return Default. That means the await will exhibit the same behavior regardless of whether ConfigureAwait(false) was used. However, this approach doesn’t make any guarantees about what code inside of the lambda might do. If you have the code:

Task.Run(async delegate
{
    SynchronizationContext.SetSynchronizationContext(new SomeCoolSyncCtx());
    await SomethingAsync(); // will target SomeCoolSyncCtx
});

then the code inside SomethingAsync will in fact see SynchronizationContext.Current as that SomeCoolSyncCtx instance, and both this await and any non-configured awaits inside SomethingAsync will post back to it. So to use this approach, you need to understand what all of the code you’re queueing may or may not do and whether its actions could thwart yours.

This approach also comes at the expense of needing to create/queue an additional task object. That may or may not matter to your app or library depending on your performance sensitivity.

Also keep in mind that such tricks may cause more problems than they’re worth and have other unintended consequences. For example, static analysis tools (e.g. Roslyn analyzers) have been written to flag awaits that don’t use ConfigureAwait(false), such as CA2007. If you enable such an analyzer but then employ a trick like this just to avoid using ConfigureAwait, there’s a good chance the analyzer will flag it, and actually cause more work for you. So maybe you then disable the analyzer because of its noisiness, and now you end up missing other places in the codebase where you actually should have been using ConfigureAwait(false).

Can I use SynchronizationContext.SetSynchronizationContext to avoid using ConfigureAwait(false)?

No. Well, maybe. It depends on the involved code.

Some developers write code like this:

Task t;
SynchronizationContext old = SynchronizationContext.Current;
SynchronizationContext.SetSynchronizationContext(null);
try
{
    t = CallCodeThatUsesAwaitAsync(); // awaits in here won't see the original context
}
finally { SynchronizationContext.SetSynchronizationContext(old); }
await t; // will still target the original context

in hopes that it’ll make the code inside CallCodeThatUsesAwaitAsync see the current context as null. And it will. However, the above will do nothing to affect what the await sees for TaskScheduler.Current, so if this code is running on some custom TaskScheduler, awaits inside CallCodeThatUsesAwaitAsync (and that don’t use ConfigureAwait(false)) will still see and queue back to that custom TaskScheduler.

All of the same caveats also apply as in the previous Task.Run-related FAQ: there are perf implications of such a workaround, and the code inside the try could also thwart these attempts by setting a different context (or invoking code with a non-default TaskScheduler).

With such a pattern, you also need to be careful about a slight variation:

SynchronizationContext old = SynchronizationContext.Current;
SynchronizationContext.SetSynchronizationContext(null);
try
{
    await t;
}
finally { SynchronizationContext.SetSynchronizationContext(old); }

See the problem? It’s a bit hard to see but also potentially very impactful. There’s no guarantee that the await will end up invoking the callback/continuation on the original thread, which means the resetting of the SynchronizationContext back to the original may not actually happen on the original thread, which could lead subsequent work items on that thread to see the wrong context (to counteract this, well-written app models that set a custom context generally add code to manually reset it before invoking any further user code). And even if it does happen to run on the same thread, it may be a while before it does, such that the context won’t be appropriately restored for a while. And if it runs on a different thread, it could end up setting the wrong context onto that thread. And so on. Very far from ideal.

I’m using GetAwaiter().GetResult(). Do I need to use ConfigureAwait(false)?

No. ConfigureAwait only affects the callbacks. Specifically, the awaiter pattern requires awaiters to expose an IsCompleted property, a GetResult method, and an OnCompleted method (optionally with an UnsafeOnCompleted method). ConfigureAwait only affects the behavior of {Unsafe}OnCompleted, so if you’re just directly calling the awaiter’s GetResult() method, whether you’re doing it on the TaskAwaiter or the ConfiguredTaskAwaitable.ConfiguredTaskAwaiter makes zero behavioral difference. So, if you see task.ConfigureAwait(false).GetAwaiter().GetResult() in code, you can replace it with task.GetAwaiter().GetResult() (and also consider whether you really want to be blocking like that).
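A quick sketch showing the equivalence on an already-completed task:

```csharp
using System;
using System.Threading.Tasks;

Task<int> t = Task.FromResult(42);

// ConfigureAwait only changes how a continuation would be scheduled;
// a direct, blocking GetResult is identical either way.
int a = t.GetAwaiter().GetResult();
int b = t.ConfigureAwait(false).GetAwaiter().GetResult();
Console.WriteLine(a == b); // True
```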

I know I’m running in an environment that will never have a custom SynchronizationContext or custom TaskScheduler. Can I skip using ConfigureAwait(false)?

Maybe. It depends on how sure you are of the “never” part. As mentioned in previous FAQs, just because the app model you’re working in doesn’t set a custom SynchronizationContext and doesn’t invoke your code on a custom TaskScheduler doesn’t mean that some other user or library code doesn’t. So you need to be sure that’s not the case, or at least recognize the risk if it may be.

I’ve heard ConfigureAwait(false) is no longer necessary in .NET Core. True?

False. It’s needed when running on .NET Core for exactly the same reasons it’s needed when running on .NET Framework. Nothing’s changed in that regard.

What has changed, however, is whether certain environments publish their own SynchronizationContext. In particular, whereas the classic ASP.NET on .NET Framework has its own SynchronizationContext, in contrast ASP.NET Core does not. That means that code running in an ASP.NET Core app by default won’t see a custom SynchronizationContext, which lessens the need for ConfigureAwait(false) running in such an environment.

It doesn’t mean, however, that there will never be a custom SynchronizationContext or TaskScheduler present. If some user code (or other library code your app is using) sets a custom context and calls your code, or invokes your code in a Task scheduled to a custom TaskScheduler, then even in ASP.NET Core your awaits may see a non-default context or scheduler that would lead you to want to use ConfigureAwait(false). Of course, in such situations, if you avoid synchronously blocking (which you should avoid doing in web apps regardless) and if you don’t mind the small performance overheads in such limited occurrences, you can probably get away without using ConfigureAwait(false).

Can I use ConfigureAwait when ‘await foreach’ing an IAsyncEnumerable?

Yes. See this MSDN Magazine article for an example.

await foreach binds to a pattern, and so while it can be used to enumerate an IAsyncEnumerable<T>, it can also be used to enumerate something that exposes the right API surface area. The .NET runtime libraries include a ConfigureAwait extension method on IAsyncEnumerable<T> that returns a custom type that wraps the IAsyncEnumerable<T> and a Boolean and exposes the right pattern. When the compiler generates calls to the enumerator’s MoveNextAsync and DisposeAsync methods, those calls are to the returned configured enumerator struct type, and it in turn performs the awaits in the desired configured way.
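For example (the `RangeAsync` iterator is illustrative), applying ConfigureAwait(false) to the enumerable configures every await the compiler emits for the loop:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    // A simple async iterator; producing each element involves a real await.
    static async IAsyncEnumerable<int> RangeAsync(int count)
    {
        for (int i = 0; i < count; i++)
        {
            await Task.Yield();
            yield return i;
        }
    }

    static async Task<int> SumAsync(int count)
    {
        int sum = 0;
        // ConfigureAwait(false) here configures the awaits the compiler
        // generates for the enumerator's MoveNextAsync and DisposeAsync calls.
        await foreach (int i in RangeAsync(count).ConfigureAwait(false))
        {
            sum += i;
        }
        return sum;
    }

    static async Task Main()
    {
        Console.WriteLine(await SumAsync(3)); // prints "3"
    }
}
```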

Can I use ConfigureAwait when ‘await using’ an IAsyncDisposable?

Yes, though with a minor complication.

As with IAsyncEnumerable<T> described in the previous FAQ, the .NET runtime libraries expose a ConfigureAwait extension method on IAsyncDisposable, and await using will happily work with this as it implements the appropriate pattern (namely exposing an appropriate DisposeAsync method):

await using (var c = new MyAsyncDisposableClass().ConfigureAwait(false))
{
    ...
}

The problem here is that the type of c is now not MyAsyncDisposableClass but rather a System.Runtime.CompilerServices.ConfiguredAsyncDisposable, which is the type returned from that ConfigureAwait extension method on IAsyncDisposable.

To get around that, you need to write one extra line:

var c = new MyAsyncDisposableClass();
await using (c.ConfigureAwait(false))
{
    ...
}

Now the type of c is again the desired MyAsyncDisposableClass. This also has the effect of increasing the scope of c; if that’s impactful, you can wrap the whole thing in braces.

I used ConfigureAwait(false), but my AsyncLocal still flowed to code after the await. Is that a bug?

No, that is expected. AsyncLocal<T> data flows as part of ExecutionContext, which is separate from SynchronizationContext. Unless you’ve explicitly disabled ExecutionContext flow with ExecutionContext.SuppressFlow(), ExecutionContext (and thus AsyncLocal<T> data) will always flow across awaits, regardless of whether ConfigureAwait is used to avoid capturing the original SynchronizationContext. For more information, see this blog post.
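A quick demonstration of that separation (method names are illustrative): the AsyncLocal<T> value survives an await configured with ConfigureAwait(false), because ExecutionContext flows regardless:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static readonly AsyncLocal<string> Local = new AsyncLocal<string>();

    static async Task<string> ReadAfterAwaitAsync()
    {
        Local.Value = "flows";
        // ConfigureAwait(false) avoids capturing the SynchronizationContext,
        // but ExecutionContext (which carries AsyncLocal<T> data) still flows.
        await Task.Delay(10).ConfigureAwait(false);
        return Local.Value;
    }

    static async Task Main()
    {
        Console.WriteLine(await ReadAfterAwaitAsync()); // prints "flows"
    }
}
```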

Could the language help me avoid needing to use ConfigureAwait(false) explicitly in my library?

Library developers sometimes express their frustration with needing to use ConfigureAwait(false) and ask for less invasive alternatives.

Currently there aren’t any, at least not built into the language / compiler / runtime. There are however numerous proposals for what such a solution might look like, e.g. https://github.com/dotnet/csharplang/issues/645, https://github.com/dotnet/csharplang/issues/2542, https://github.com/dotnet/csharplang/issues/2649, and https://github.com/dotnet/csharplang/issues/2746.

If this is important to you, or if you feel like you have new and interesting ideas here, I encourage you to contribute your thoughts to those or new discussions.

The post ConfigureAwait FAQ appeared first on .NET Blog.


Modernizing Find in Files


Find in Files is one of the most commonly used features in Visual Studio. It’s also a feature that gets a substantial amount of feedback, and due to the age of the code, has been very costly to improve. Earlier this year, we decided to reimplement the feature from the ground up in order to realize significant performance and usability improvements.

We’ve released the new find in files experience in Visual Studio 2019 version 16.5 Preview 1 and we’re looking for feedback from the community. We expect this experience to be the one our developers will use and love in the future, so we want to make sure we’ve prioritized the right features. We still have more improvements coming that we’re not quite ready to talk about yet, but before we deprecate the old experience, we want to make sure the new version is meeting the needs of our users.

A screen capture of the new Find in Files dialog.

The new experience is available by searching for “Find in Files” or “Replace in Files” in Visual Studio search (Ctrl+Q by default). You can also get to these commands with Ctrl+Shift+F and Ctrl+Shift+H respectively. The new experience is pictured above and should be easily recognized by the more modern look and consistent color theming.

If you’re not seeing the new version, you can search for “Preview Features” in Visual Studio search (Again, Ctrl+Q by default). On that page, make sure “Use previous Find in Files” is unchecked. Conversely, if you’re having problems with the new experience, you can toggle this option to enable the old one. If you do find that you need the old Find in Files experience, we’d love to hear why. Please feel free to supply any feedback you might have over in Developer Community.

Performance

We took the previous implementation of Find in Files and reimplemented it completely in managed C#. This allows us to avoid unnecessary interop calls and gives us much more room for improving the experience. The memory consumption is smaller, and our performance is much faster.

In our internal testing on directories containing 100k+ files, searches that took over 4 minutes with the old implementation completed in 26 seconds. The biggest gains are in searches that use regular expressions, but even searches without regular expressions generally cut the search time in half.

Specifying Paths

Using the new experience should feel comfortable for most folks since we’ve gone with an experience that matches many other common find experiences. There are a few nuances that are worth calling out.

A screen capture of the Find in Files dialog that is cropped to only show the Look in and File types fields along with the options to include miscellaneous and external items.

The “Look in” box has a new option, “Current Directory”, which will search the folder that contains the currently open document. When searching a solution, there are checkboxes to include miscellaneous files (files that you’ve opened but aren’t part of the solution) as well as external items (files like “windows.h” that you might reference but aren’t part of the solution).

The three dots button next to the “Look in” box works like any other browse option to specify a directory to look in, but if you’ve already specified a directory, this button will append the new directory instead of replacing it. For instance, if your “Look in” value was “.\Code”, you could click the three dots button and navigate to a folder named “Shared Code”. The “Look in” box would then show “.\Code;.\Shared Code”, and when the Find command is executed, it will search both of those folders.

The File types field can now also exclude files. Any path or file type prefixed with the “!” character will be excluded from the search. For instance, you can add “!*node_modules*” to the file types list to exclude any files in a node_modules folder.

Multiple Searches

One of the more frequent requests we’ve gotten is the ability to keep the results from one search while doing other searches. This makes it easy to compare results and see them side-by-side. This feature has been in Visual Studio for a while, and the new experience still supports it.

In the screenshot above, the Keep Results button has been enabled. Now, when a new search is executed, the results will be shown in a new tab. The screenshot above shows three searches that have already completed. Currently, this feature supports up to five searches. If you’ve already got five search results showing, the next search will reuse the oldest search result tab.

The Keep Results button is available for Find in Files as well as the Find All References feature.

Regular Expression Builder

A screen capture of the Find in Files dialog with a regular expression being used.

With Visual Studio 2019 version 16.5 Preview 2, the Regular Expression builder will be available. The “Use regular expressions” checkbox enables you to specify a regular expression as the match pattern. Checking this box in Visual Studio 2019 version 16.5 Preview 2 (or later) will also bring up the Regular Expression builder, which is useful for creating regular expressions. Regular expressions allow searches for strings that span multiple lines. For instance, the expression “.*Hello.*\r\n.*World.*” will match any occurrence of the string “Hello” that has an occurrence of the string “World” anywhere on the next line.
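To see what that pattern matches, here is a small .NET regex sketch (the sample text is made up). The explicit \r\n in the pattern consumes the line break, while each “.*” stays on its own line, since “.” does not match a newline by default:

```csharp
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        string text = "say Hello there\r\nto the World today";
        // "Hello" on one line, "World" anywhere on the next line.
        bool match = Regex.IsMatch(text, ".*Hello.*\\r\\n.*World.*");
        Console.WriteLine(match); // prints "True"
    }
}
```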

When the “Use regular expressions” checkbox is checked, the regular expression builder will appear next to the Find field. Clicking this will give some examples for building regular expressions as well as a link to the documentation.

What’s Next

Now that the Find in Files experience has been reimplemented to use the newer patterns of Visual Studio, we’re going to be able to provide more of the features we get asked for. We’d love to hear your experiences with the new dialog. We’re always watching Developer Community, and we’ve got a survey specifically for collecting feedback on the new experience that you can answer here. We know there are features that aren’t available today and your feedback is how we’ll prioritize the rest of the features. If you’re running into problems or you think the new dialog isn’t working correctly, please send us feedback with the Give Feedback button in Visual Studio.

The post Modernizing Find in Files appeared first on Visual Studio Blog.
