
Python in Visual Studio 15.7 Preview 4


Today we have released the first preview of our next update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

In this post, we're going to look at some of the new features we have added for Python developers: faster debugging, Conda environments, type hints and MyPy support. As always, the preview is a way for us to get features into your hands early, so you can provide feedback and we can identify issues with a smaller audience. If you encounter any trouble, please use the Report a Problem tool to let us know.

Faster Debugging

This release includes a new version of our ptvsd debug engine based on PyDevD, which we expect to be much faster than the previous version.

Most of the features you used in the previous version of the debugger are still available, but the following features are not yet supported:

  • Set Next Statement
  • Just My Code

If you want to use these features, you can revert back to the previous debugger version by unchecking “Use experimental debugger” in Tools > Options > Python > Experimental.

In the following cases, debugging still works, but we fall back to the previous version of the debugger:

  • IronPython debugging
  • Attach to process
  • Debug unit tests
  • Mixed-mode debugging

For remote debugging you will need to start ptvsd on the remote server using the command:

py -3 -m ptvsd --server --port <port_num> --file main.py

We have also made a preview of the new ptvsd available in the Python extension for Visual Studio Code, which continues to keep our debugging capabilities consistent across Visual Studio and Visual Studio Code.

IntelliSense for Type Hints

As type hints in Python continue to gain popularity, we want to make sure you have easy access to the best tools to take advantage of them. These will help you improve your code quality and help make refactoring safer.

In this release we have added support for type hints in our IntelliSense. When you add type hints to parameters or variables they’ll be shown in hover tooltips.

In the example below, a Vector type is declared as a list of floats, and the scale() method is annotated with types to indicate its parameters and return type. Hovering over the scale() method when calling it shows the expected parameters and return type:

Using type hints you can also declare class attributes and their types, which is handy if those attributes are dynamically added later. Below we declare that the Employee class has a name and an id, and that information is then available in IntelliSense when using variables of type Employee:

In the next section, we’ll see how you can run MyPy to use these type hints to validate your code and detect errors.

Using MyPy with Type Hints

To fully validate your code against type hints, we recommend using MyPy. MyPy is the industry standard tool for validating type hints throughout your entire project. As a separate tool, you can easily configure it to run in your build system as well as the development environment, but it is also useful to have it be easily accessible while developing.

To run MyPy against your project, right-click on the project in Solution Explorer and find it under the Python menu.

This will install MyPy if necessary and run it against every file included in your project. Warnings will be displayed in the Error List window, and selecting an item will take you directly to the location in your sources.

By default, you may see many more warnings than you are prepared to fix straight away. To configure the files and warnings you see, use the Add New Item command and select the MyPy Configuration file. This file is automatically detected by MyPy and contains various filters and settings. See the MyPy documentation for full information about configuration options.

Conda Environments

You can now create and use Conda environments as well as manage packages for your Conda environments using pip or Conda.

To manage or use Conda environments from Visual Studio, you'll need Anaconda or Miniconda. You can install Anaconda directly from the Visual Studio installer or get it separately if you'd rather manage the installation yourself.

Note: There are known issues when using older versions of the conda package (4.4.8 or later is recommended). The latest distributions of Anaconda/Miniconda have the necessary version of the conda package.

You can create a new Conda environment using the Python Environments window, using a base Python from version 2.6 to 3.6.

Any environments created using Visual Studio or the Conda tool will be detected and listed in the Python Environments window automatically. You can open interactive windows for these environments, assign them in projects or make them your default environment. You can also delete them using Visual Studio.

To manage packages, you can use either the Conda or pip package manager from the Python Environments window.

The user interface for both package managers is the same: it displays the list of installed packages and lets you update or uninstall them.

You can also search for available Conda packages.

Solution Explorer also displays packages for each environment referenced in your project. It lists either Conda or pip packages, picking the most appropriate for the type of environment.

Give Feedback

Be sure to download the latest preview of Visual Studio and try out the above improvements. If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our GitHub page. Follow our Python blog to make sure you hear about our updates first, and thank you for using Visual Studio!

 


Announcing Visual Studio 2017 15.7 Preview 4


As you know we continue to incrementally improve Visual Studio 2017 (version 15), and our 7th significant update is currently well under way with the 4th preview shipping today. As we’re winding down the preview, we’d like to stop and take the time to tell you about all of the great things that are coming in 15.7 and ask you to try it and give us any feedback you might have while we still have time to correct things before we ship the final version.

From a .NET tools perspective, 15.7 brings a lot of great enhancements including:

  • Support for .NET Core 2.1 projects
  • Improvements to Unit Testing
  • Improvements to .NET productivity tools
  • C# 7.3
  • Updates to F# tools
  • Azure Key Vault support in Connected Services
  • Library Manager for working with client-side libraries in web projects
  • More capabilities when publishing projects

In this post we’ll take a brief tour of all these features and talk about how you can try them out (download 15.7 Preview). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

.NET Core 2.1 Support

.NET Core 2.1 and ASP.NET Core 2.1 bring a list of great new features including performance improvements, global tools, a Windows compatibility pack, minor version roll-forward, and security improvements, to name a few. For full details see the .NET Core 2.1 Roadmap and the ASP.NET Core 2.1 Roadmap respectively.

Visual Studio 15.7 is the recommended version of Visual Studio for working with .NET Core 2.1 projects. To get started building .NET Core 2.1 projects in Visual Studio, create a new ASP.NET Core project: you’ll now see ASP.NET Core 2.1 as an option in the One ASP.NET dialog.


If you are working with a Console Application or Class Library, you’ll need to create the project and then open the project’s property page and change the Target framework to “.NET Core 2.1”.


Unit Testing Improvements

  • The Test Explorer has undergone more performance improvements, which results in smoother scrolling and faster updating of the test list for large solutions.
  • We’ve also improved the ability to understand what is happening during test runs. When a test run is in progress, a progress ring appears next to tests that are currently executing, and a clock icon appears for tests that are pending execution.


Productivity Improvements

Each release we’ve been working to add more and more refactorings and code fixes to make you productive. In 15.7 Preview 4, invoke Quick Actions and Refactorings (Ctrl+. or Alt+Enter) to use:

  • Convert for-loop-to-foreach (and vice versa)
  • Make private field readonly
  • Toggle between var and the explicit type (without code style enforcement)


To learn more about productivity features see our Visual Studio 2017 Productivity Guide for .NET Developers.

C# 7.3

15.7 also brings C# 7.3, the newest incremental update to the language.

To use C# 7.3 features in your project:

  • Open your project’s property page (Project -> [Project Name] Properties…)
  • Choose the “Build” tab
  • Click the “Advanced…” button on the bottom right
  • Change the “Language version” dropdown to “C# latest minor version (latest)”. This setting will enable your project to use the latest C# features available in the version of Visual Studio you are using, without needing to change it again in the future. If you prefer, you can pick a specific version from the list.

F# improvements

15.7 also includes several improvements to F# and F# tooling in Visual Studio.

  • Type Providers are now enabled for .NET Core 2.1. To try it out, we recommend using FSharp.Data version 3.0.0-beta, which has been updated to use the new Type Provider infrastructure.
  • .NET SDK projects can now generate an F# AssemblyInfo file from project properties.
  • Various smaller bugs in file ordering for .NET SDK projects have been fixed, including initial ordering when pasting a file into a folder.
  • Toggles for outlining and Structured Guidelines are now available in the Text Editor > F# > Advanced options page.
  • Improvements in editor responsiveness have been made, including ensuring that error diagnostics always appear before other diagnostic information (e.g., unused value analysis).
  • Efforts to reduce memory usage of the F# tools have been made in partnership with the open source community, with many of the improvements available in this release.

Finally, templates for ASP.NET Core projects in F# are coming soon, targeted for the RTW release of VS 2017 15.7.

Azure Key Vault support in Connected Services

We have simplified the process to manage your project’s secrets with the ability to create and add a Key Vault to your project as a connected service. The Azure Key Vault provides a secure location to safeguard keys and other secrets used by applications so that they do not get shared unintentionally. Adding a Key Vault through Connected Services will:

  • Provide Key Vault support for ASP.NET and ASP.NET Core applications
  • Automatically add configuration to access your Key Vault through your project
  • Add the required NuGet packages to your project
  • Allow you to access, add, edit, and remove your secrets and permissions through the Azure portal

To get started:

  • Double-click the “Connected Services” node in Solution Explorer in your ASP.NET or ASP.NET Core application.
  • Click on “Secure Secrets with Azure Key Vault”.
  • When the Key Vault tab opens, select the Subscription that you would like your Key Vault to be associated with and click the “Add” button on the bottom left. By default Visual Studio will create a Key Vault with a unique name.
    Tip: If you would like to use an existing Key Vault, or change the location, resource group, or pricing tier from the preselected values, you can click the ‘Edit’ link next to Key Vault.
  • Once the Key Vault has been added, you will be able to manage secrets and permissions with the links on the right.


Library Manager

Library Manager (“LibMan” for short) is Microsoft’s new client-side static content management system for web projects. Designed as a replacement for Bower and npm, LibMan helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalogue.

To get started, right-click a web project from Solution Explorer and choose “Manage Client-side Libraries…”. This creates and opens the LibMan configuration file (libman.json) with some default content. Update the “libraries” section to add library files to your project. This example adds some jQuery files to the wwwroot/lib directory.


For more details, see Library Manager: Client-side content management for web apps.

Azure Publishing Improvements

We also made several improvements to publishing applications from Visual Studio, including:

  • The ability to configure publish settings before you publish or create a publish profile
  • The ability to create Azure Storage Accounts and automatically store the connection string in App Service settings
  • Automatic enablement of Managed Service Identity in new App Services

For more details, see our Publish improvements in Visual Studio 2017 15.7 post on the Web Developer blog.

Conclusion

If you haven’t installed a Visual Studio preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. So, we hope that you’ll take the time to install the Visual Studio 2017 15.7 Preview 4 update and let us know what you think. You can either use the built-in feedback tools in Visual Studio 2017 or let us know what you think below in the comments section.

Publish Improvements in Visual Studio 2017 15.7


Today we released Visual Studio 2017 15.7 Preview 4. Our 15.7 update brings some exciting updates for publishing applications from Visual Studio that we’re excited to tell you about, including:

  • Ability to configure publish settings before you publish or create a publish profile
  • Create Azure Storage Accounts and automatically store the connection string for App Service
  • Automatic enablement of Managed Service Identity in App Service

If you haven’t installed a Visual Studio Preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. We’d be very appreciative if you’d try Visual Studio 2017 15.7 Preview 4 and give us any feedback you might have while we still have time to change or fix things before we ship the final version (download now). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

Configure settings before publishing

When publishing your ASP.NET Core applications to either a folder or Azure App Service, you can now configure several publish settings prior to creating your publish profile.

To configure this prior to creating your profile, click the “Advanced…” link on the publish target page to open the Advanced Settings dialog.

Advanced link on 'Pick a publish target' dialog

Create Azure Storage Accounts and automatically store the connection string in App Settings

When creating a new Azure App Service, we’ve always offered the ability to create a new SQL Azure database and automatically store its connection string in your app’s App Service Settings. With 15.7, we now offer the ability to create a new Azure Storage Account while you are creating your App Service, and automatically place the connection string in the App Service settings as well. To create a new storage account:

  • Click the “Create a storage account” link in the top right of the “Create App Service” dialog
  • Provide the connection string key name your app uses to access the storage account in the “(Optional) Connection String Name” field at the bottom of the Storage Account dialog
  • Once your application is published, it will be able to talk to the storage account

Optional Connection String Name field on Storage Account dialog

Managed Service Identity enabled for new App Services

A common challenge when building cloud applications is how to manage the credentials that need to be in your code for authenticating to other services. Ideally, credentials never appear on developer workstations or get checked into source control. Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.

Starting in Visual Studio 2017 15.7 Preview 4, when you publish an application to Azure App Service (not Linux), Visual Studio automatically enables MSI for your application. You can then give your app permission to communicate with any service that supports MSI authentication by logging into that service’s page in the Azure Portal and granting access to your App Service. For example, to create a Key Vault and give your App Service access:

  1. In the Azure Portal, select Create a resource > Security + Identity > Key Vault.
  2. Provide a Name for the new Key Vault.
  3. Locate the Key Vault in the same subscription and resource group as the App Service you created from Visual Studio.
  4. Select Access policies and click Add new.
  5. In Configure from template, select Secret Management.
  6. Choose Select Principal, and in the search field enter the name of the App Service.
  7. Select the App Service’s name in the result list and click Select.
  8. Click OK to finish adding the new access policy, and OK to finish access policy selection.
  9. Click Create to finish creating the Key Vault.

Azure portal dialog: Create a Key Vault and give your App Service access

Once you publish your application, it will have access to the Key Vault without the need for you to take any additional steps.
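
For example, here is a sketch of how an app might read a secret from that Key Vault using its Managed Service Identity, with the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages (one common approach at the time; the vault URL and secret name below are placeholders):

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public static class SecretReader
{
    public static async Task<string> GetSecretAsync()
    {
        // The token provider authenticates using the App Service's Managed Service Identity,
        // so no credentials appear in code or configuration.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // Placeholder vault and secret names.
        var secret = await keyVaultClient.GetSecretAsync("https://myvault.vault.azure.net/secrets/MySecret");
        return secret.Value;
    }
}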

Conclusion

If you’re interested in the many other great things that Visual Studio 2017 15.7 brings for .NET development, check out our .NET tool updates in Visual Studio 15.7 post on the .NET blog.

We hope that you’ll give 15.7 a try and let us know how it works for you. If you run into any issues, or have any feedback, please report them to us using Visual Studio’s features for sending feedback, or let us know what you think in the comments below or via Twitter.

Performance Improvements in .NET Core 2.1


Back before .NET Core 2.0 shipped, I wrote a post highlighting various performance improvements in .NET Core 2.0 when compared with .NET Core 1.1 and the .NET Framework. As .NET Core 2.1 is in its final stages of being released, I thought it would be a good time to have some fun and take a tour through some of the myriad of performance improvements that have found their way into this release.

Performance improvements show up in .NET Core 2.1 in a variety of ways. One of the big focuses of the release has been on the new System.Span<T> type, which, along with its friends like System.Memory<T>, is now at the heart of the runtime and core libraries (see this MSDN Magazine article for an introduction). New libraries have been added in this release, like System.Memory.dll, System.Threading.Channels.dll, and System.IO.Pipelines.dll, each targeted at specific scenarios. And many new members have been added to existing types, for example ~250 new members across existing types in the framework that accept or return the new span and memory types, and counting members on new types focusing on working with span and memory more than doubles that (e.g. the new BinaryPrimitives and Utf8Formatter types). All such improvements are worthy of their own focused blog posts, but they’re not what I’m focusing on here. Rather, I’m going to walk through some of the myriad of improvements that have been made to existing functionality, to existing types and methods, places where you upgrade a library or app from .NET Core 2.0 to 2.1 and performance just gets better. For the purposes of this post, I’m focused primarily on the runtime and the core libraries, but there have also been substantial performance improvements higher in the stack, as well as in tooling.

Setup

In my post on .NET Core 2.0 performance, I demonstrated improvements using simple console apps with custom measurement loops, and I got feedback that readers would have preferred it if I’d used a standard benchmarking tool. While I explicitly opted not to do so then (the reason being to make it trivial for developers to follow along by copying and pasting code samples into their own console apps), this time around I decided to experiment with the approach, made easier by tooling improvements in the interim. So, to actually run the complete code samples shown in this post, you’ll need a few things. In my setup, I have both .NET Core 2.0 and a preview of .NET Core 2.1 installed. I then did dotnet new console, and modified the resulting .csproj to specify both releases as target frameworks and to include a package reference for Benchmark.NET, used to do the actual benchmarking.

Then I have the following scaffolding code in my Program.cs:
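
(The scaffolding below is a minimal sketch rather than the post's original file: it omits the Benchmark.NET job configuration that selects the netcoreapp2.0 and netcoreapp2.1 toolchains, and the class name is illustrative.)

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class Benchmarks
{
    public static void Main(string[] args) => BenchmarkRunner.Run<Benchmarks>();

    // Copy-and-paste the benchmark members from each section below here.
}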

For each benchmark shown in this post, you should be able to simply copy-and-paste the relevant code to where commented in this .cs file, and then use dotnet run -c Release -f netcoreapp2.0 to see the results. That will run the app using .NET Core 2.0, but the app itself is just the Benchmark.NET host, and the Benchmark.NET library will in turn create, build, and run .NET Core 2.0 and 2.1 apps for comparison. Note that in each results section, I’ve removed superfluous columns, to keep things tidy. I’ve also generally only shown results from running on Windows, when there’s no meaningful difference to highlight between platforms.

With that, let’s explore.

JIT

A lot of work has gone into improving the Just-In-Time (JIT) compiler in .NET Core 2.1, with many optimizations that enhance a wide range of libraries and applications. Many of these improvements were sought based on the needs of the core libraries themselves, giving these improvements both targeted and broad impact.

Let’s start with an example of a JIT improvement that can have broad impact across many types, but in particular for collection classes. .NET Core 2.1 has improvements around “devirtualization”, where the JIT is able to statically determine the target of some virtual invocations and as a result avoid virtual dispatch costs and enable potential inlining. In particular, PR dotnet/coreclr#14125 taught the JIT about the EqualityComparer<T>.Default member, extending the JIT’s intrinsic recognizer to recognize this getter. When a method then does EqualityComparer<T>.Default.Equals, for example, the JIT is able to both devirtualize and often inline the callee, which for certain T types makes a huge difference in throughput. Before this improvement, if T were Int32, the JIT would end up emitting code to make a virtual call to the underlying GenericEqualityComparer<T>.Equals method, but with this change, the JIT is able to inline what ends up being a call to Int32.Equals, which itself is inlineable, and EqualityComparer<int>.Default.Equals becomes as efficient as directly comparing two Int32s for equality. The impact of this is obvious with the following benchmark:
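
(A minimal illustrative sketch of the kind of method being measured, meant to be pasted into the benchmark class from the Setup section; the field values are arbitrary. Uses System.Collections.Generic.)

private int _a = 8, _b = 8;

[Benchmark]
public bool EqualityComparerInt32() => EqualityComparer<int>.Default.Equals(_a, _b);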

On my machine, I get output like the following, showcasing an ~2.5x speedup over .NET Core 2.0:

Method Toolchain Mean
EqualityComparerInt32 .NET Core 2.0 2.2106 ns
EqualityComparerInt32 .NET Core 2.1 0.8725 ns

Such improvements show up in indirect usage of EqualityComparer<T>.Default, as well. Many of the collection types in .NET, including Dictionary<TKey, TValue>, utilize EqualityComparer<T>.Default, and we can see the impact this improvement has on various operations employed by such collections. For example, PR dotnet/coreclr#15419 from @benaadams tweaked Dictionary<TKey, TValue>‘s ContainsValue implementation to better take advantage of this devirtualization and inlining, such that running this benchmark:
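
(A minimal illustrative sketch, not the original code; the dictionary size and contents are assumptions. Uses System.Collections.Generic and System.Linq.)

private Dictionary<int, int> _dictionary = Enumerable.Range(0, 1_000).ToDictionary(i => i, i => i);

[Benchmark]
public bool DictionaryContainsValue() => _dictionary.ContainsValue(-1);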

produces on my machine results like the following, showcasing an ~2.25x speedup:

Method Toolchain Mean
DictionaryContainsValue .NET Core 2.0 3.419 us
DictionaryContainsValue .NET Core 2.1 1.519 us

In many situations, improvements like this in the JIT implicitly show up as improvements in higher-level code. In this specific case, though, it required the aforementioned change, which updated code like:
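
(The snippet below paraphrases the old shape of the code rather than quoting the coreclr source: the comparer was hoisted out of the loop as a micro-optimization.)

static bool ContainsValueOld<TValue>(TValue[] values, TValue value)
{
    EqualityComparer<TValue> comparer = EqualityComparer<TValue>.Default;
    foreach (TValue candidate in values)
    {
        if (comparer.Equals(candidate, value))
            return true;
    }
    return false;
}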

to instead be like:
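
(Again a paraphrase of the pattern rather than the exact source: EqualityComparer<TValue>.Default is now fetched at each call site, which is the shape the JIT can devirtualize and inline.)

static bool ContainsValueNew<TValue>(TValue[] values, TValue value)
{
    foreach (TValue candidate in values)
    {
        if (EqualityComparer<TValue>.Default.Equals(candidate, value))
            return true;
    }
    return false;
}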

In other words, previously this code had been optimized to avoid the overheads associated with using EqualityComparer<TValue>.Default on each iteration of the loop. But that micro-optimization then defeated the JIT’s devirtualization logic, such that what used to be an optimization is now a deoptimization, and the code had to be changed back to a pattern the JIT could recognize to make it as efficient as possible. A similar change was made in PR dotnet/corefx#25097, in order to benefit from this improvement in LINQ’s Enumerable.Contains. However, there are many places where this JIT improvement does simply improve existing code, without any required changes. (There are also places where there are known further improvements to be made, e.g. dotnet/coreclr#17273.)

In the previous discussion, I mentioned “intrinsics” and the ability for the JIT to recognize and special-case certain methods in order to help it better optimize for specific uses. .NET Core 2.1 sees additional intrinsic work, including for some long-standing but rather poor performing methods in .NET. A key example is Enum.HasFlag. This method should be simple, just doing a bit flag test to see whether a given enum value contains another, but because of how this API is defined, it’s relatively expensive to use. No more. In .NET Core 2.1 Enum.HasFlag is now a JIT intrinsic, such that the JIT generates the same quality code you would write by hand if you were doing manual bit flag testing. The evidence of this is in a simple benchmark:
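
(A minimal illustrative sketch; FileAccess is just a convenient [Flags] enum from System.IO, not necessarily the enum used in the original benchmark.)

private FileAccess _value = FileAccess.ReadWrite;

[Benchmark]
public bool EnumHasFlag() => _value.HasFlag(FileAccess.Read);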

On this test, I get results like the following, showing a 100% reduction in allocation (from 48 bytes per call to 0 bytes per call) and an ~50x improvement in throughput:

Method Toolchain Mean Allocated
EnumHasFlag .NET Core 2.0 14.9214 ns 48 B
EnumHasFlag .NET Core 2.1 0.2932 ns 0 B

This is an example where developers that cared about performance had to avoid writing code a certain way and can now write code that’s both maintainable and efficient, and also helps unaware developers fall into a “pit of success”. (Incidentally, this is also a case where Mono already had this optimization.)

Another example of this isn’t specific to a given API, but rather applies to the general shape of code. Consider the following implementation of string equality:
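
(A sketch of the general shape being discussed, not the exact snippet from the post.)

static bool AreEqual(string a, string b)
{
    if (a.Length != b.Length)
        return false;

    for (int i = 0; i < a.Length; i++)
    {
        if (a[i] != b[i])
            return false; // early exit from within the loop
    }

    return true;
}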

Unfortunately, on previous releases of .NET, the code generated here was suboptimal, in particular due to the early exit from within the loop. Developers that cared about performance had to write this kind of loop in a specialized way, using gotos, for example as seen in the .NET Core 2.0 implementation of String‘s CompareOrdinalIgnoreCaseHelper method. In .NET Core 2.1, PR dotnet/coreclr#13314 rearranges basic blocks in loops to avoid needing such workarounds. You can see in .NET Core 2.1 that goto in CompareOrdinalIgnoreCaseHelper is now gone, and the shown benchmark is almost double the throughput of what it was in the previous release:

Method Toolchain Mean
LoopBodyLayout .NET Core 2.0 56.30 ns
LoopBodyLayout .NET Core 2.1 30.49 ns

Of course, folks contributing to the JIT don’t just care about such macro-level enhancements to the JIT, but also to improvements as low-level as tuning what instructions are generated for specific operations. For example, PR dotnet/coreclr#13626 from @mikedn enables the JIT to generate the more efficient BT instruction in some situations where TEST and LSH were otherwise being used. The impact of that can be seen on this benchmark extracted from that PR’s comments:
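
(An illustrative bit-test sketch rather than the exact benchmark from the PR discussion; the field values are arbitrary.)

private int _flags = 0x0F0F;
private int _bit = 9;

[Benchmark]
public bool LoweringTESTtoBT() => (_flags & (1 << _bit)) != 0;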

where with this change, .NET Core 2.1 executes this benchmark 40% faster than it did in .NET Core 2.0:

Method Toolchain Mean
LoweringTESTtoBT .NET Core 2.0 1.414 ns
LoweringTESTtoBT .NET Core 2.1 1.057 ns

The JIT also saw a variety of improvements in .NET Core 2.1 around boxing. One of my personal favorites (because of the impact it has on async methods, to be discussed later in this post) is PR dotnet/coreclr#14698 (and a follow-up PR dotnet/coreclr#17006), which enables writing code that would have previously allocated and now doesn’t. Consider this benchmark:
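
(A minimal sketch matching the description in the next paragraph; everything other than the IAnimal and MakeSound names is illustrative.)

interface IAnimal { void MakeSound(); }

struct Dog : IAnimal { public void MakeSound() { } }

static void MakeSoundIfAnimal<T>(T thing)
{
    if (thing is IAnimal)                    // on .NET Core 2.0 this test and the cast below box the struct; 2.1 avoids it
        ((IAnimal)(object)thing).MakeSound();
}

[Benchmark]
public void BoxingAllocations() => MakeSoundIfAnimal(new Dog());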

In it, we’ve got an IAnimal with a MakeSound method, and we’ve got a method that wants to accept an arbitrary T, test to see whether it’s an IAnimal (it might be something else), and if it is, call its MakeSound method. Prior to .NET Core 2.1, this allocates, because in order to get the T as an IAnimal on which I can call MakeSound, the T needs to be cast to the interface, which for a value type results in it being boxed, and therefore allocates. In .NET Core 2.1, though, this pattern is recognized, and the JIT is able not only to undo the boxing, but also then devirtualize and inline the callee. The impact of this can be substantial when this kind of pattern shows up on hot paths. Here are the benchmark results, highlighting a significant improvement in throughput and an elimination of the boxing allocations:

Method Toolchain Mean Allocated
BoxingAllocations .NET Core 2.0 12.444 ns 48 B
BoxingAllocations .NET Core 2.1 1.391 ns 0 B

This highlights just some of the improvements that have gone into the JIT in .NET Core 2.1. And while each is impressive in its own right, the whole is greater than the sum of the parts, as work was done to ensure that all of these optimizations, from devirtualization, to boxing removal, to invocation of the unboxed entry, to inlining, to struct promotion, to copy prop through promoted fields, to cleaning up after unused struct locals, and so on, all play nicely together. Consider this example provided by @AndyAyersMS:

In .NET Core 2.0, this resulted in the following assembly code generated:

In contrast, in .NET Core 2.1, that’s all consolidated to this being generated for Main:

Very nice.

Threading

Improvements to the JIT are an example of changes that can have very broad impact over large swaths of code. So, too, are changes to the runtime, and one key area where the runtime has seen significant improvements is in the area of threading. These improvements have come in a variety of forms, whether in reducing the overhead of low-level operations, or reducing lock contention in commonly used threading primitives, or reducing allocation, or generally improving the infrastructure behind async methods. Let’s look at a few examples.

A key need in writing scalable code is taking advantage of thread statics, which are fields unique to each thread. The overhead involved in accessing a thread static is greater than that for normal statics, and it’s important that this be as low as possible as lots of functionality, in the runtime, in the core libraries, and in user code, depends on them, often on hot paths (for example, Int32.Parse(string) looks up the current culture, which is stored in a thread static). PRs dotnet/coreclr#14398 and dotnet/coreclr#14560 significantly reduced this overhead involved in accessing thread statics. So, for example, this benchmark:
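
(A minimal illustrative sketch: a single [ThreadStatic] field accessed from the benchmark method.)

[ThreadStatic]
private static int t_value;

[Benchmark]
public int ThreadStatics() => ++t_value;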

yields these results on my machine:

Method Toolchain Mean
ThreadStatics .NET Core 2.0 7.322 ns
ThreadStatics .NET Core 2.1 5.269 ns

Whereas these thread statics changes were focused on improving the throughput of an individual piece of code, other changes focused on scalability and minimizing contention between pieces of code, in various ways. For example, PR dotnet/coreclr#14216 focused on costs involved in Monitor (what’s used under the covers by lock in C#) when there’s contention, PR dotnet/coreclr#13243 focused on the scalability of ReaderWriterLockSlim, and PR dotnet/coreclr#14527 focused on reducing the contention in Timers. Let’s take the last one as an example. Whenever a System.Threading.Timer is created, modified, fired, or removed, in .NET Core 2.0 that required taking a global timers lock; that meant that code which created lots of timers quickly would often end up serializing on this lock. To address this, .NET Core 2.1 partitions the timers across multiple locks, so that different threads running on different cores are less likely to contend with each other. The impact of that is visible in a benchmark like the following:
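
(An illustrative sketch along the lines described in the next sentence; the task and iteration counts are arbitrary. Uses System.Threading and System.Threading.Tasks.)

[Benchmark]
public void TimerContention()
{
    Task[] tasks = new Task[Environment.ProcessorCount];
    for (int i = 0; i < tasks.Length; i++)
    {
        tasks[i] = Task.Run(() =>
        {
            for (int j = 0; j < 50_000; j++)
            {
                using (var timer = new Timer(_ => { }, null, Timeout.Infinite, Timeout.Infinite))
                {
                    // a bit of work would go here while the timer exists
                }
            }
        });
    }
    Task.WaitAll(tasks);
}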

This spawns multiple tasks, each of which creates a timer, does a bit of work, and then deletes the timer, and it yields the following results on my quad-core:

Method Toolchain Mean
TimerContention .NET Core 2.0 332.8 ms
TimerContention .NET Core 2.1 135.6 ms

Another significant improvement came in the form of both throughput improvement and allocation reduction, in CancellationTokenSource. CancellationTokens have become ubiquitous throughout the framework, in particular in asynchronous methods. It’s often the case that a single token will be created for the lifetime of some composite operation (e.g the handling of a web request), and over its lifetime, it’ll be passed in and out of many sub-operations, each of which will Register a callback with the token for the duration of that sub-operation. In .NET Core 2.0 and previous .NET releases, the implementation was heavily focused on getting as much scalability as possible, achieved via a set of lock-free algorithms that were scalable but that incurred non-trivial costs in both throughput and allocation, so much so that it overshadowed the benefits of the lock-freedom. The associated level of scalability is also generally unnecessary, as the primary use case for a single CancellationToken does not involve many parallel operations, but instead many serialized operations one after the other. In .NET Core 2.1, PR dotnet/coreclr#12819 changed the implementation to prioritize the more common scenarios; it’s still very scalable, but by switching away from a lock-free algorithm to one that instead employed striped locking (as in the Timer case), we significantly reduced allocations and improved throughput while still meeting scalability goals. These improvements can be seen from the following single-threaded benchmark:
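
(A minimal illustrative sketch: registering and then disposing a callback on a single token, over and over on one thread. Uses System.Threading.)

private CancellationTokenSource _cts = new CancellationTokenSource();

[Benchmark]
public void SerialCancellationTokenRegistration()
{
    _cts.Token.Register(() => { }).Dispose();
}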

Method Toolchain Mean Allocated
SerialCancellationTokenRegistration .NET Core 2.0 95.29 ns 48 B
SerialCancellationTokenRegistration .NET Core 2.1 62.45 ns 0 B

and also from this multi-threaded one (run on a quad-core):

Method Toolchain Mean
ParallelCancellationTokenRegistration .NET Core 2.0 31.31 ns
ParallelCancellationTokenRegistration .NET Core 2.1 18.19 ns

These improvements to CancellationToken are just a piece of a larger set of improvements that have gone into async methods in .NET Core 2.1. As more and more code is written to be asynchronous and to use C#’s async/await features, it becomes more and more important that async methods introduce as little overhead as possible. Some significant strides in that regard have been taken in .NET Core 2.1, on a variety of fronts.

For example, on very hot paths that invoke asynchronous methods, one cost that shows up is simply the overhead involved in invoking an async method and awaiting it, in particular when it completes quickly and synchronously. In part due to the aforementioned JIT and thread static changes, and in part due to PRs like dotnet/coreclr#15629 from @benaadams, this overhead has been cut by ~30%:
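
(An illustrative sketch: an async method awaiting another async method that completes synchronously, measured from a synchronous benchmark method. Uses System.Threading.Tasks.)

[Benchmark]
public void AsyncMethodAwaitInvocation() => OuterAsync().GetAwaiter().GetResult();

private static async Task OuterAsync() => await InnerAsync();

private static async Task InnerAsync() => await Task.CompletedTask;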

Method Toolchain Mean Allocated
AsyncMethodAwaitInvocation .NET Core 2.0 20.36 ns 0 B
AsyncMethodAwaitInvocation .NET Core 2.1 13.48 ns 0 B

Bigger improvements, however, have come in the form of allocation reduction. In previous releases of .NET, the synchronous completion path for async methods was optimized for allocations, meaning that if an async method completed without ever suspending, it either wouldn’t allocate at all or at most would allocate one object (for the returned Task<T> if an internally cached one wasn’t available). However, asynchronous completion (where it suspends at least once) would incur multiple allocations. The first allocation would be for the returned Task/Task<T> object, as the caller needs some object to hold onto to be able to know when the asynchronous operation has completed and to extract its result or exception. The second allocation is the boxing of the compiler-generated state machine: the “locals” for the async method start out on the stack, but when the method suspends, the state machine that contains these “locals” as fields gets boxed to the heap so that the data can survive across the await point. The third allocation is the Action delegate that’s passed to an awaiter and that’s used to move the state machine forward when the awaited object completes. And the fourth is a “runner” that stores additional context (e.g. ExecutionContext). These allocations can be seen by looking at a memory trace. For example, if we run this code:
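
(A standalone illustrative program, not the post's original snippet: the inner async method actually suspends at its await, so each call exercises the async-infrastructure allocations being described.)

using System.Threading.Tasks;

class Program
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        for (int i = 0; i < 100_000; i++)
            await YieldingMethodAsync();
    }

    static async Task YieldingMethodAsync() => await Task.Yield();
}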

and look at the results from the Visual Studio allocation profiler, in .NET Core 2.0 we see these allocations associated with the async infrastructure:

Due to PRs like dotnet/coreclr#13105, dotnet/coreclr#14178 and dotnet/coreclr#13907, the previous trace when run with .NET Core 2.1 instead looks like this:

The four allocations have been reduced to one, and the total bytes allocated has shrunk by half. When async methods are used heavily in an application, that savings adds up quickly. There have also been side benefits to the architectural changes that enabled these savings, including improved debuggability.

String

Moving up the stack, another area that’s seen a lot of performance love in .NET Core 2.1 is in commonly used primitive types, in particular System.String. Whether from vectorization, or using System.Span<T> and its optimizations internally, or adding fast paths for common scenarios, or reducing allocations, or simply trimming some fat, a bunch of functionality related to strings has gotten faster in 2.1. Let’s look at a few.

String.Equals is a workhorse of .NET applications, used for all manner of purposes, and thus it’s an ideal target for optimization. PR dotnet/coreclr#16994 improved the performance of String.Equals by vectorizing it, utilizing the already vectorized implementation of Span<T>.SequenceEqual as its core implementation. The effect can be seen here, in the comparison of two strings that differ only in their last character:
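
(A minimal illustrative sketch; the strings are arbitrary, differing only in their final character.)

private string _a = "abcdefghijklmnopqrstuvwxyz0";
private string _b = "abcdefghijklmnopqrstuvwxyz1";

[Benchmark]
public bool StringEquals() => _a.Equals(_b);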

Method Toolchain Mean
StringEquals .NET Core 2.0 16.16 ns
StringEquals .NET Core 2.1 10.20 ns

String.IndexOf and String.LastIndexOf are similarly vectorized with PR dotnet/coreclr#16392:

Method Toolchain Mean
StringIndexOf .NET Core 2.0 41.13 ns
StringIndexOf .NET Core 2.1 15.94 ns

String.IndexOfAny was also optimized. In contrast to the previous PRs that improved performance via vectorization, PR dotnet/coreclr#13219 from @bbowyersmyth improves the performance of IndexOfAny by special-casing the most commonly-used lengths of the anyOf characters array and adding fast-paths for them:

Method Toolchain Mean
IndexOfAny .NET Core 2.0 94.66 ns
IndexOfAny .NET Core 2.1 38.27 ns

String.ToLower and ToUpper (as well as the ToLower/UpperInvariant varieties) were improved in PR dotnet/coreclr#17391. As with the previous PR, these were improved by adding fast-paths for common cases. First, if the string passed in is entirely ASCII, then it does all of the computation in managed code and avoids calling out to the native globalization library to do the casing. This in and of itself yields a significant throughput improvement, e.g.

Method Toolchain Mean Allocated
StringToLowerChangesNeeded .NET Core 2.0 187.00 ns 144 B
StringToLowerChangesNeeded .NET Core 2.1 96.29 ns 144 B

But things look even better when the string is already in the target casing:

Method Toolchain Mean Allocated
StringToLowerAlreadyCased .NET Core 2.0 197.21 ns 144 B
StringToLowerAlreadyCased .NET Core 2.1 68.81 ns 0 B

In particular, note that all allocation has been eliminated.

Another very common String API was improved to reduce allocation while also improving throughput. In .NET Core 2.0, String.Split allocates an Int32[] to track split locations in the string; PR dotnet/coreclr#15435 from @cod7alex removed that and replaced it with either stack allocation or usage of ArrayPool<int>.Shared, depending on the input string’s length. Further, PR dotnet/coreclr#15322 took advantage of span internally to improve the throughput of several common cases. The results of both of these can be seen in this benchmark:
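
(A minimal illustrative sketch; the input string and separator are assumptions.)

private string _sentence = "the quick brown fox jumps over the lazy dog and then naps in the warm afternoon sun";

[Benchmark]
public string[] StringSplit() => _sentence.Split(' ');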

Method Toolchain Mean Allocated
StringSplit .NET Core 2.0 459.5 ns 1216 B
StringSplit .NET Core 2.1 305.2 ns 480 B

Even some corner cases of String usage saw improvements. For example, some developers use String.Concat(IEnumerable<char>) as a way to compose characters into strings. PR dotnet/coreclr#14298 special-cased T == char in this overload, yielding some nice throughput and allocation wins:
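
(A minimal illustrative sketch of concatenating an IEnumerable<char>; the sequence length is arbitrary. Uses System.Collections.Generic and System.Linq.)

private IEnumerable<char> _chars = Enumerable.Range(0, 10_000).Select(i => (char)('a' + (i % 26)));

[Benchmark]
public string StringConcatCharEnumerable() => string.Concat(_chars);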

Method Toolchain Mean Allocated
StringConcatCharEnumerable .NET Core 2.0 22.05 us 35.82 KB
StringConcatCharEnumerable .NET Core 2.1 15.56 us 4.57 KB

Formatting and Parsing

The work done around strings also extends into the broad area of formatting and parsing, work that’s the bread-and-butter of many applications.

As noted at the beginning of this post, many Span<T>-based methods were added across the framework, and while I’m not going to focus on those here from a new API perspective, the act of adding these APIs helped to improve existing APIs. Some existing APIs were improved by taking advantage of the new Span<T>-based methods. For example, PR dotnet/coreclr#15110 from @justinvp utilizes the new Span<T>-based TryFormat in StringBuilder.AppendFormat, which is itself used internally by String.Format. The usage of Span<T> enables the implementation internally to format directly into existing buffers rather than first formatting into allocated strings and then copying those strings to the destination buffer.

Method Toolchain Mean Allocated
StringFormat .NET Core 2.0 196.1 ns 128 B
StringFormat .NET Core 2.1 151.3 ns 80 B

Similarly, PR dotnet/coreclr#15069 takes advantage of the Span<T>-based methods in various StringBuilder.Append overloads, to format the provided value directly into the StringBuilder‘s buffer rather than going through a String:
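
(A minimal illustrative sketch: appending many Int32 values to a StringBuilder. Uses System.Text.)

[Benchmark]
public StringBuilder StringBuilderAppend()
{
    var builder = new StringBuilder();
    for (int i = 0; i < 100_000; i++)
        builder.Append(i);
    return builder;
}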

Method Toolchain Mean Allocated
StringBuilderAppend .NET Core 2.0 6.523 ms 3992000 B
StringBuilderAppend .NET Core 2.1 3.268 ms 0 B

Another way the new Span<T>-based methods helped was as a motivational forcing function. In the .NET Framework and .NET Core 2.0 and earlier, most of the numeric parsing and formatting code in .NET was implemented in native code. Having that code as C++ made it a lot more difficult to add the new Span<T>-based methods, which would ideally share most of their implementation with their String-based forebears. However, all of that C++ was previously ported to C# as part of enabling .NET Native, and all of that code then found its way into corert, which also shares code with coreclr. For the .NET Core 2.1 release, we thus deleted most of the native parsing/formatting code in coreclr and replaced it with the managed port, which is now shared between coreclr and corert. With the implementation in managed code, it was then also easier to iterate and experiment with optimizations, so not only did the code move to managed and not only is it now used for both the String-based and Span<T>-based implementations, many aspects of it also got faster.

For example, via PRs like dotnet/coreclr#15069 and dotnet/coreclr#17432, throughput of Int32.ToString() approximately doubled:

Method Toolchain Mean Allocated
Int32Formatting .NET Core 2.0 65.27 ns 48 B
Int32Formatting .NET Core 2.1 34.88 ns 48 B

while via PRs like dotnet/coreclr#13389, Int32 parsing improved by over 20%:

Method Toolchain Mean
Int32Parsing .NET Core 2.0 96.95 ns
Int32Parsing .NET Core 2.1 76.99 ns

These improvements aren’t limited to just integral types like Int32, UInt32, Int64, and UInt64. Single.ToString() and Double.ToString() improved as well, in particular on Unix where PR dotnet/coreclr#12894 from @mazong1123 provided an entirely new implementation for some very nice wins over the rather slow implementation that was there previously:

Windows:

Method Toolchain Mean Allocated
DoubleFormatting .NET Core 2.0 448.7 ns 48 B
DoubleFormatting .NET Core 2.1 186.8 ns 48 B

Linux (note that my Windows and Linux installations are running on very different setups, so the values shouldn’t be compared across OSes):

Method Toolchain Mean Allocated
DoubleFormatting .NET Core 2.0 2,018.2 ns 48 B
DoubleFormatting .NET Core 2.1 258.1 ns 48 B

The improvements in 2.1 also apply to less commonly used but still important numerical types, such as via PR dotnet/corefx#25353 for BigInteger:

Method Toolchain Mean Allocated
BigIntegerFormatting .NET Core 2.0 36.677 us 34.73 KB
BigIntegerFormatting .NET Core 2.1 3.119 us 3.27 KB

Note both the 10x improvement in throughput and 10x reduction in allocation.

These improvements continue with other parsing and formatting routines. For example, in services in particular, DateTime and DateTimeOffset are often formatted using either the "r" or "o" formats, both of which have been optimized in .NET Core 2.1, via PR dotnet/coreclr#17092:

Method Toolchain Mean Allocated
DateTimeOffsetFormatR .NET Core 2.0 220.89 ns 88 B
DateTimeOffsetFormatR .NET Core 2.1 64.60 ns 88 B
DateTimeOffsetFormatO .NET Core 2.0 263.45 ns 96 B
DateTimeOffsetFormatO .NET Core 2.1 104.66 ns 96 B

Even System.Convert has gotten in on the formatting and parsing performance fun, with parsing from Base64 via FromBase64Chars and FromBase64String getting significant speedups, thanks to PR dotnet/coreclr#17033:

Method Toolchain Mean Allocated
ConvertFromBase64String .NET Core 2.0 45.99 us 9.79 KB
ConvertFromBase64String .NET Core 2.1 29.86 us 9.79 KB
ConvertFromBase64Chars .NET Core 2.0 46.34 us 9.79 KB
ConvertFromBase64Chars .NET Core 2.1 29.51 us 9.79 KB

Networking

The System.Net libraries received some good performance attention in .NET Core 2.0, but significantly more so in .NET Core 2.1.

There have been some nice improvements throughout the libraries, such as PR dotnet/corefx#26850 from @JeffCyr improving Dns.GetHostAddressAsync on Windows with a true asynchronous implementation, or PR dotnet/corefx#26303 providing an optimized endian-reversing routine which was then used by PR dotnet/corefx#26329 from @justinvp to optimize IPAddress.HostToNetworkOrder/NetworkToHostOrder:

Method Toolchain Mean
NetworkToHostOrder .NET Core 2.0 10.760 ns
NetworkToHostOrder .NET Core 2.1 1.461 ns

or in PRs like dotnet/corefx#28086, dotnet/corefx#28084, and dotnet/corefx#22872 avoiding allocations in Uri:

Method Toolchain Mean Allocated
UriAllocations .NET Core 2.0 997.6 ns 1168 B
UriAllocations .NET Core 2.1 650.6 ns 672 B

But the most impactful changes have come in higher-level types, in particular in Socket, SslStream, and HttpClient.

At the sockets layer, there have been a variety of improvements, but the impact is most noticeable on Unix, where PRs like dotnet/corefx#23115 and dotnet/corefx#25402 overhauled how socket operations are processed and the allocations they incur. This is visible in the following benchmark that repeatedly does receives that will always complete asynchronously, followed by sends to satisfy them, and which sees a 2x improvement in throughput:

Method Toolchain Mean
SocketReceiveThenSend .NET Core 2.0 102.82 ms
SocketReceiveThenSend .NET Core 2.1 48.95 ms

Often used on top of sockets and NetworkStream, SslStream was improved significantly in .NET Core 2.1, as well, in a few ways. First, PRs like dotnet/corefx#24497 and dotnet/corefx#23715 from @Drawaes, as well as dotnet/corefx#22304 and dotnet/corefx#29031 helped to clean up the SslStream codebase, making it easier to improve in the future but also removing a bunch of allocations (above and beyond the significant allocation reductions that were seen in .NET Core 2.0). Second, though, a significant scalability bottleneck in SslStream on Unix was fixed in PR dotnet/corefx#25646 from @Drawaes, such that SslStream now scales well on Unix as concurrent usage increases. This, in concert with the sockets improvements and other lower-level improvements, contributes to the managed implementation beneath HttpClient.

HttpClient is a thin wrapper around an HttpMessageHandler, a public abstract class that represents an implementation of an HTTP client. A general-purpose implementation of HttpMessageHandler is provided in the form of the derived HttpClientHandler class, and while it’s possible to construct and pass a handler like HttpClientHandler to an HttpClient constructor (generally done to be able to configure the handler via its properties), HttpClient also provides a parameterless constructor that uses HttpClientHandler implicitly. In .NET Core 2.0 and earlier, HttpClientHandler was implemented on Windows on top of the native WinHTTP library, and it was implemented on Unix on top of the libcurl library. That dependency on the underlying external library has led to a variety of problems, including different behaviors across platforms and OS distributions as well as limited functionality on some platforms. In .NET Core 2.1, HttpClientHandler has a new default implementation implemented from scratch entirely in C# on top of the other System.Net libraries, e.g. System.Net.Sockets, System.Net.Security, etc. Not only does this address the aforementioned behavioral issues, it provides a significant boost in performance (the implementation is also exposed publicly as SocketsHttpHandler, which can be used directly instead of via HttpClientHandler in order to configure SocketsHttpHandler-specific properties).

Here’s an example benchmark making a bunch of concurrent HTTPS calls to an in-process socket server:

On an 8-core Windows machine, here are my results:

Method Toolchain Mean Gen 0 Gen 1
ConcurrentHttpsGets .NET Core 2.0 228.03 ms 1250.0000 312.5000
ConcurrentHttpsGets .NET Core 2.1 17.93 ms 656.2500

That’s a 12.7x improvement in throughput and a huge reduction in garbage collections, even though the .NET Core 2.0 implementation has most of the logic in native rather than managed code! Similarly, on an 8-core Linux machine, here are my results:

Method Toolchain Mean Gen 0 Gen 1
ConcurrentHttpsGets .NET Core 2.0 135.46 ms 750.0000 250.0000
ConcurrentHttpsGets .NET Core 2.1 21.83 ms 343.7500

Again, huge improvement!

And More

Through this post I aimed to categorize and group various performance changes to highlight areas of concentrated improvement, but performance work has also happened across the breadth of the runtime and libraries, beyond this limited categorization. I’ve picked a few other examples to show some of the changes elsewhere in the stack.

One particularly nice set of improvements came to file system enumeration support, in PRs dotnet/corefx#26806 and dotnet/corefx#25426. This work has made enumerating directories and files not only faster but also much lighter on allocation, leaving significantly less garbage in its wake. Here’s an example enumerating all of the files in my System.IO.FileSystem library folder from my corefx repo clone (obviously if you try this one out locally, you’ll need to update the path to whatever works on your machine):
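
(A minimal illustrative sketch; the path below is a placeholder for wherever your corefx clone, or any reasonably large directory tree, lives. Uses System.IO and System.Linq.)

private string _path = @"d:\repos\corefx\src\System.IO.FileSystem";

[Benchmark]
public int EnumerateFiles() => Directory.EnumerateFiles(_path, "*", SearchOption.AllDirectories).Count();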

The improvements are particularly stark on Windows, where this benchmark shows a 3x improvement in throughput and a 50% reduction in allocation:

Method Toolchain Mean Allocated
EnumerateFiles .NET Core 2.0 1,982.6 us 71.65 KB
EnumerateFiles .NET Core 2.1 650.1 us 35.24 KB

but also on Unix, where this benchmark (with the path fixed up appropriately) on Linux shows a 15% improvement in throughput and a 45% reduction in allocation:

Method Toolchain Mean Allocated
EnumerateFiles .NET Core 2.0 638.0 us 56.09 KB
EnumerateFiles .NET Core 2.1 539.5 us 38.6 KB

This change internally benefited from the Span<T>-related work done throughout the framework, as did, for example, an improvement to Rfc2898DeriveBytes in System.Security.Cryptography. Rfc2898DeriveBytes computes cryptographic hash codes over and over as part of implementing password-based key derivation functionality. In previous releases, each iteration of that algorithm would result in at least one byte[] allocation, but now with Span<T>-based methods like HashAlgorithm.TryComputeHash, due to PR dotnet/corefx#23269 those allocations are entirely avoided. And that results in dramatic savings, especially for longer iteration counts:
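
(A minimal illustrative sketch; the password, salt, and iteration count are arbitrary. Uses System.Security.Cryptography.)

[Benchmark]
public byte[] DeriveBytes()
{
    using (var pbkdf2 = new Rfc2898DeriveBytes("a not very good password", new byte[8], 10_000))
    {
        return pbkdf2.GetBytes(32);
    }
}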

Method Toolchain Mean Allocated
DeriveBytes .NET Core 2.0 9.199 ms 1120120 B
DeriveBytes .NET Core 2.1 8.084 ms 176 B

Effort has also been put into improving places where one platform is more deficient than others. For example, Guid.NewGuid() on Unix is considerably slower than it is on Windows. And while the gap hasn’t been entirely closed, as part of removing a dependency on the libuuid library, PR dotnet/coreclr#16643 did significantly improve the throughput of Guid.NewGuid() on Unix:
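
(A minimal sketch; there is not much to it.)

[Benchmark]
public Guid GuidNewGuid() => Guid.NewGuid();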

Method Toolchain Mean
GuidNewGuid .NET Core 2.0 7.179 us
GuidNewGuid .NET Core 2.1 1.770 us

The list goes on: improvements to array processing (e.g. dotnet/coreclr#13962), improvements to LINQ (e.g. dotnet/corefx#23368 from @dnickless), improvements to Environment (e.g. dotnet/coreclr#14502 from @justinvp), improvements to collections (e.g. dotnet/corefx#26087 from @gfoidl), improvements to globalization (e.g. dotnet/coreclr#17399), improvements around pooling (e.g. dotnet/coreclr#17078), improvements to SqlClient (e.g. dotnet/corefx#27758), improvements to StreamWriter and StreamReader (e.g. dotnet/corefx#22147), and so on.

Finally, all of the examples shown throughout this post were already at least as good in .NET Core 2.0 (if not significantly better) as in the .NET Framework 4.7, and then .NET Core 2.1 just made things even better. However, there are a few places where features were missing in .NET Core 2.0 and have been brought back in 2.1, including for performance. One notable such improvement is in Regex, where the RegexOptions.Compiled option was exposed but ignored in .NET Core 2.0. PR dotnet/corefx#24158 brought back the in-memory compilation support for Regex, enabling the same kinds of throughput improvements here previously available in the .NET Framework:
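
(A minimal illustrative sketch; the pattern and input are arbitrary. Uses System.Text.RegularExpressions.)

private Regex _regex = new Regex(@"^[a-z0-9_.-]+@[a-z0-9.-]+\.[a-z]{2,6}$", RegexOptions.Compiled);

[Benchmark]
public bool RegexCompiled() => _regex.IsMatch("someone@example.com");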

Method Toolchain Mean
RegexCompiled .NET Core 2.0 473.7 ns
RegexCompiled .NET Core 2.1 295.2 ns

What’s Next?

Huge “thank you”s to everyone who has contributed to this release. As is obvious from this tour, there’s a lot to look forward to in .NET Core 2.1, and this post only scratched the surface of the improvements coming. We look forward to hearing your feedback and to your future contributions in the coreclr, corefx, and other dotnet and ASP.NET repos!

Windows Community Standup – Improvements for Web and Backend Developers in the next update to Windows 10


During our April Windows Community Standup, we discussed a few recent Windows 10 features that will help improve your development experience.

Without further ado, let’s dive into the features we spoke about:

  • Improvements to the Windows Subsystem for Linux
  • In-box command line tools
  • Virtualization in Hyper-V

Windows Subsystem for Linux

The Windows Subsystem for Linux (WSL) lets developers run Linux environments – including most command-line tools, utilities, and applications – directly on Windows, unmodified, without the overhead of a virtual machine.

Today, we showed various workflows with WSL and Node.js in Visual Studio Code (VS Code). Many of the integrations we showed come from community contributions and asks to improve development workflows. The demo is a culmination of all that – launching your project in VS Code from WSL, using WSL in the integrated terminal, using WSL in the debugger, and using the new inbox curl from CMD.

We went over just some of the recent improvements in WSL. As we receive community feedback, we plan to continue addressing our top asks. Currently this includes improving the interop between Windows and WSL and adding more of your favorite tools inbox.

Hyper-V

Hyper-V is a virtualization technology that makes working with Linux VMs a better experience on Windows.

Today, we showed RDP session improvements and the VM gallery. The RDP improvements include the mouse experience, clipboard, and drive sharing. We showed a preview of the VM Quick Create Gallery with a built-in Windows image and loading our own custom image. You can expect to see further improvements in both of these areas in the near future, such as better windowing and additional built-in image templates.

Thank You & Feedback

A big thanks to our developer community for helping provide feedback on WSL, inbox tools, and Hyper-V. Be sure to let us know about your WSL experience by submitting issues on our sample GitHub repo and tweeting us at #WSL – @tara_msft.

The post Windows Community Standup – Improvements for Web and Backend Developers in the next update to Windows 10 appeared first on Windows Developer Blog.

How to setup Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows


This commit was signed with a verified signature.

This week in obscure blog titles, I bring you the nightmare that is setting up Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows. This is one of those "it's good for you" things like diet and exercise and setting up 2 Factor Authentication. I just want to be able to sign my code commits to GitHub so I might avoid people impersonating my Git Commits (happens more than you'd think and has happened recently.) However, I was also hoping to make it more secure by using a YubiKey NEO security key. They're happy to tell you that it supports a BUNCH of stuff that you have never heard of like Yubico OTP, OATH-TOTP, OATH-HOTP, FIDO U2F, OpenPGP, Challenge-Response. I am most concerned with it acting like a Smart Card that holds a PGP (Pretty Good Privacy) key since the YubiKey can look like a "PIV (Personal Identity Verification) Smart Card."

NOTE: I am not a security expert. Let me know if something here is wrong (be nice) and I'll update it. Note also that there are a LOT of guides out there. Some are complete and encyclopedic, some include recommendations and details that are "too much," but this one was my experience. This isn't The Bible On The Topic but rather  what happened with me and what I ran into and how I got past it. Until this is Super Easy (TM) on Windows, there's gonna be guides like this.

As with all things security, there is a balance between Capital-S Secure with offline air-gapped what-nots, and Ease Of Use with tools like Keybase. It depends on your tolerance, patience, technical ability, and if you trust any online services. I like Keybase and trust them so I'm starting there with a Private Key. You can feel free to get/generate your key from wherever makes you happy and secure.

Welcome to Keybase.io

I use Windows and I like it, so if you want to use a Mac or Linux this blog post likely isn't for you. I love and support you and your choice though. ;)

Make sure you have a private PGP key that has your Git Commit Email Address associated with it

I download and installed (and optionally donated) a copy of Gpg4Win here.

Take your private key - either the one you got from Keybase or one you generated locally - and make sure that your UID (your email address that you use on GitHub) is a part of it. Here you can see mine is not, yet. That could be the main email or might be an alias or "uid" that you'll add.

Certs in Kleopatra

If not, as in my case since I'm using a key from keybase, you'll need to add a new uid to your private key. You will know you got it right when you run this command and see your email address inside it.

> gpg --list-secret-keys --keyid-format LONG


------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]

uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>

You can run adduid from the gpg command line or you can add it in the Kleopatra GUI; a rough sketch of the command-line flow is below.
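Here's roughly what that adduid interaction looks like at the gpg prompt (substitute your own name and email address):

> gpg --edit-key shanselman@keybase.io

gpg> adduid
Real name: Scott Hanselman
Email address: scott@hanselman.com
Comment:
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
gpg> save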


After adding the uid, run the same command again and you should now see both addresses listed:

> gpg --list-secret-keys --keyid-format LONG


------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]
uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>
uid [ unknown] Scott Hanselman <scott@hanselman.com>

Then, when you make changes like this, you can export your public key and update it in Keybase.io (again, if you're using Keybase).


Plugin your YubiKey

I installed the YubiKey Smart card mini-driver from here.  Some people have said this driver is optional but I needed it on my main machine. Can anyone confirm?

When you plug your YubiKey in (assuming it's newer than 2015) it should get auto-detected and show up like this "Yubikey NEO OTP+U2F+CCID." You want it to show up as this kind of "combo" or composite device. If it's older or not in this combo mode, you may need to download the YubiKey NEO Manager and switch modes.

Setting up a YubiKey on Windows

Test that your YubiKey can be seen as a Smart Card

Go to the command line and run this to confirm that your YubiKey can be seen as a smart card by the GPG command line.

> gpg --card-status

Reader ...........: Yubico Yubikey NEO OTP U2F CCID 0
Version ..........: 2.0
....

IMPORTANT: Sometimes Windows machines and Corporate Laptops have multiple smart card readers, especially if they have Windows Hello installed like my SurfaceBook2! If you hit this, you'll want to create a text file at %APPDATA%\gnupg\scdaemon.conf (that is, under C:\Users\<you>\AppData\Roaming\gnupg) and include a reader-port that points to your YubiKey. Mine is a NEO, yours might be a 4, etc, so be aware. You may need to reboot or at least restart/kill the GPG services/background apps for it to notice you made a change.
If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:

reader-port "Yubico Yubikey NEO OTP+U2F+CCID 0"

Yubico Yubikey NEO OTP+U2F+CCID 0

Yubikey NEO can hold keys up to 2048 bits and the Yubikey 4 can hold up to 4096 bits - that's MOAR bits! However, you might find yourself with a 4096 bit key that is too big for the Yubikey NEO. Lots of folks believe this is a limitation of the NEO that sucks and is unacceptable. Since I'm using Keybase and starting with a 4096 bit key, one solution is to make separate 2048 bit subkeys for Authentication and Signing, etc.

From the command line, edit your keys then "addkey"

> gpg --edit-key <scott@hanselman.com>

You'll make a 2048 bit Signing key and you'll want to decide if it ever expires. If it never does, also make a revocation certificate so you can revoke it at some future point.

gpg> addkey

Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all

Save your changes, and then export the keys. You can do that with Kleopatra or with the command line:

> gpg --export-secret-keys --armor KEYID
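If you'd also like the public half and a revocation certificate from the command line, the companion commands look roughly like this (KEYID is a placeholder for your own key id):

> gpg --export --armor KEYID > public.asc

> gpg --gen-revoke --armor --output revoke.asc KEYID

Keep the exported private key and revoke.asc somewhere safe and offline.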

Here's a GUI view. I have my main 4096 bit key and some 2048 bit subkeys for Signing or Encryption, etc. Make as many as you like.


LEVEL SET - It will be the public version of the 2048 bit Signing Key that we'll tell GitHub about and we'll put the private part on the YubiKey, acting as a Smart Card.

Move the signing subkey over to the YubiKey

Now I'm going to take my keychain here, select the signing subkey (note the asterisk that appears after I type "key 1"), then use "keytocard" to move/store it in the YubiKey's SmartCard Signature slot. I'm using my email as a way to get to my key, but if your email is used in multiple keys you'll want to use the unique Key Id/Signature.

> gpg --edit-key scott@hanselman.com


gpg (GnuPG) 2.2.6; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>
gpg> toggle
gpg> key 1

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb* rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>

gpg> keytocard
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1

If you're storing things on your Smart Card, it should have a PIN to protect it. Also, make sure you have a backup of your primary key (if you like) because keytocard is a destructive action.

Have you set up PIN numbers for your Smart Card?

There's a PIN and an Admin PIN. The Admin PIN is the longer one. The default admin PIN is usually ‘12345678’ and the default PIN is usually ‘123456’. You'll want to set these up with either the Kleopatra GUI "Tools | Manage Smart Cards" or the gpg command line:

>gpg --card-edit

gpg/card> admin
Admin commands are allowed
gpg/card> passwd
*FOLLOW THE PROMPTS TO SET PINS, BOTH ADMIN AND STANDARD*

Tell Git about your Signing Key Globally

Be sure to tell Git on your machine some important configuration info like your signing key, but also WHERE the gpg.exe is. This is important because git ships its own older local copy of gpg.exe and you installed a newer one!

git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"

git config --global commit.gpgsign true
git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

If you don't want to set ALL commits to signed, you can skip the commit.gpgsign=true and just include -S as you commit your code:

git commit -S -m "your commit message"

Test that you can sign things

If you are running Kleopatra (the noob Windows GUI), you'll notice that when you run gpg --card-status the cert will turn boldface and get marked as certified.

The goal here is for you to make sure GPG for Windows knows that there's a private key on the smart card, and associates a signing Key ID with that private key so when Git wants to sign a commit, you'll get a Smart Card PIN Prompt.

Advanced: you can make SubKeys for individual things so that they might also be later revoked without torching your main private key. Using the Kleopatra tool from GPG for Windows you can explore the keys and get their IDs. You'll use those Subkey IDs in your git config to refer to your signingkey.

At this point things should look kinda like this in the Kleopatra GUI:

Multiple PGP Sub keys

Make sure to prove you can sign something by making a text file and signing it. If you get a Smart Card prompt (assuming a YubiKey) and a larger .gpg file appears, you're cool.

> gpg --sign .\quicktest.txt

> dir quic*

Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 4/18/2018 3:29 PM 9 quicktest.txt
-a---- 4/18/2018 3:38 PM 360 quicktest.txt.gpg

Now, go to https://github.com/settings/keys on GitHub; the GPG keys section is at the bottom. Remember that's GPG Keys, not SSH Keys. Make a new one and paste in your public signing key or subkey.

Note the KeyID (or the SubKey ID) and remember that one of them (either the signing one or the primary one) should be the ID you used when you set up user.signingkey in git above.

GPG Keys in GitHub

The most important thing is that:

  • the email address associated with the GPG Key
  • is the same as the email address GitHub has verified for you
  • is the same as the email in the Git Commit
    • git config --global user.email "email@example.com"

If not, double check your email addresses and make sure they are the same everywhere.

Try a signed commit

If pressing enter pops a PIN Dialog then you're getting somewhere!

Please unlock the card

Commit and push and go over to GitHub and see if your commit is Verified or Unverified. Unverified means that the commit was signed but either had an email GitHub had never seen OR that you forgot to tell GitHub about your signing public key.
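You can also sanity-check a signature locally before pushing; a quick sketch:

> git log --show-signature -1

> git verify-commit HEAD

Both should show the signature details and the key that made it.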

Signed Verified Git Commits

Yay!

Setting up on a second (or third) machine

Once you've told Git about your signing key and you've got your signing key stored in your YubiKey, you'll likely want to set up on another machine.

  • Install the Yubikey SmartCard Mini Driver (may be optional)
  • Install GPG for Windows
    • gpg --card-status
    • Import your public key. If I'm setting up signing on another machine, I can import my PUBLIC certificates like this or graphically in Kleopatra.
      >gpg --import "keybase public key.asc"
      
      gpg: key *KEYID*: "keybase.io/shanselman <shanselman@keybase.io>" not changed
      gpg: Total number processed: 1
      gpg: unchanged: 1

      You may also want to run gpg --expert --edit-key *KEYID* and type "trust" to certify your key as someone (yourself) that you trust.

  • Install Git (I assume you did this) and configure GPG
    • git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
    • git config --global commit.gpgsign true
    • git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY
  • Sign something with "gpg --sign" to test
  • Do a test commit.

Finally, feel superior for 8 minutes, then realize you're really just lucky because you just followed the blog post of someone who ALSO has no clue, then go help a co-worker because this is TOO HARD.


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Updated documentation for Visual Studio Build Tools container


I’ve updated the documentation for building a Docker container image for Visual Studio Build tools based on recent feedback that managed code may fail to run. In the case of MSBuild, you might see an error like,

C:\BuildTools\MSBuild\15.0\bin\Roslyn\Microsoft.CSharp.Core.targets(84,5): error MSB6003: The specified task executable “csc.exe” could not be run. Could not load file or assembly ‘System.IO.FileSystem, Version=4.0.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The system cannot find the file specified.

To resolve this, base your image on microsoft/dotnet-framework:4.7.1 so that .NET Framework 4.7.1 does not need to be installed separately. The image documentation has more details about how its various tags relate to microsoft/windowsservercore if you intend to target a specific version of Windows Server Core.
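As a rough sketch (the switches and workload names below follow the documented example from memory, so double-check them against the Build Tools container documentation), the important change is simply the base image in the FROM line:

# escape=`
FROM microsoft/dotnet-framework:4.7.1

# Download the Build Tools bootstrapper and install the MSBuild workload into C:\BuildTools
ADD https://aka.ms/vs/15/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.MSBuildTools `
 || IF "%ERRORLEVEL%"=="3010" EXIT 0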

The examples have also been updated to better collect and copy setup logs from a container should an error occur, using a separate script file to handle install failures.

Azure Backup now supports storage accounts secured with Azure Storage Firewalls and Virtual Networks


We are happy to announce Azure IaaS VM backup support for network-restricted storage accounts. With storage firewalls and virtual networks, you can allow traffic only from selected virtual networks and subnets, creating a secure network boundary for your unmanaged disks in storage accounts. You can also grant access to on-premises networks and other trusted internet traffic by using network rules based on IP address ranges. With this announcement, you can perform and continue scheduled and ad-hoc IaaS VM backups and restores for these VNET-configured storage accounts.


Getting Started

After you configure firewall and virtual network settings for your storage account, select Allow trusted Microsoft services to access this storage account as an exception to enable Azure Backup service to access the network restricted storage account.
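If you prefer to script that network configuration, something along these lines with the Azure CLI should do it (the account and resource group names are placeholders):

# Deny traffic by default, but keep trusted Microsoft services (including Azure Backup) allowed
az storage account update --name mystorageaccount --resource-group myresourcegroup --default-action Deny --bypass AzureServices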


This network-focused feature gives customers a seamless experience while enforcing network access-based security: only requests coming from approved Azure VNETs or specified public IP ranges are allowed to reach a specific storage account, making it more secure and helping fulfill an organization's compliance requirements.

Related links and additional content


Introducing sonarwhal v1: The linting tool for the web


Just over one year ago, we started working on a best practices tool for the web called sonarwhal—a customizable, open-source linting tool, built for modern web developer workflows. Today, we are announcing the release of its first major version. With today's launch, we'd like to look back at how sonarwhal started and at the journey to v1 over the past few months.

It all started with feedback from web developers, partners, and from our own experiences building for the web. The web platform is becoming richer at a faster pace than ever before: we now have web experiences we couldn’t even imagine a few years back. Sites can work offline, send push notifications (even when the user is not visiting the site), and even run code at native speeds with WebAssembly. On top of keeping up to date, developers also need to know about many other things like accessibility, security, performance, bundling, transpiling, etc.

Every day, we see sites that have a great architecture, built with the latest libraries and tools, but that don’t use the right cache policy for their static assets, or that don’t compress everything they should, or with common security flaws—and that’s just scratching the surface. The web is complex and it can be easy to miss something at any point during the development process.

Being a web developer can be overwhelming

What could we do to help web developers make their sites faster, more secure, and accessible while at the same time making sure they remain interoperable cross browser? That's how it all started. Picking the name was the easiest part: the team loves narwhals and they have one of the best echolocation beams in nature (or sonars) so it was an obvious choice.

We knew from the beginning that we wanted sonarwhal to be built by and for the web community. We didn't want to bake our personal opinions into sonarwhal's rules. We wanted sonarwhal to be backed by deep research, remain neutral, and allow contributions from any individual or company. Thus, we decided to make sonarwhal an open source project as part of the JS Foundation.

Since then, we’ve kept ourselves busy listening to your feedback and implementing many of the goals we had back then, while adding some new ones.

When creating a new rule, we follow the 80/20 principle:

80% of the time is research and 20% is coding

If there’s one thing we are most proud of, it’s the extensive research we do on each subject and how deep the rules go to check that everything is as it should be.

Just to give you some examples:

  • The http-compression rule will perform several requests for each resource with different headers and check the content to validate the server is actually respecting them. E.g.: When resources are requested uncompressed, does the server actually respect what was requested and serve them uncompressed? Is the server doing User-Agent sniffing instead of relying on the Accept-Encoding header of the request? Is the server compressing resources using Zopfli when requests are made advertising support for gzip compression?
  • The web manifest rules are also interesting. Does the web manifest point to an image? Does that image exist? Does the image meet the recommended resolution and file size? Does it have the right format to be used by any browser? Is the name of the web application short enough to be displayed on all platforms?
  • The web is full of lies (starting with the user-agent string). Just because a file ends with .png and has content-type: image/png doesn’t mean it’s a PNG. It could very well be a JPEG file, or something completely different. And the same goes for every downloaded resource. The content-type rule will look at the bytes of the resources and verify that the server is actually serving what it says it is and, where applicable, that it is specifying the proper charset.

And the list goes on…

More than 30 rules in 6 categories (and counting!)

sonarwhal validates many different things: from accessibility and content types, to verifying your JavaScript libraries don’t have any known vulnerabilities and that you are using SRI to validate that no one has tampered with the code.

Some of the issues require developers to change their code, but others require tweaking to the server configurations. Changing the configurations might not be obvious, especially when targeting only certain resource types, or newer tools and techniques such as Brotli compression, which may not be as thoroughly documented. To make the developer experience easier, we’ve also added examples for Apache and IIS for the rules that require it.

Get started testing with our online scanner

sonarwhal runs on top of Node.js and is distributed via npm. But what happens if you want to check a site using your mobile phone? Or maybe your IT administrator doesn’t allow you to install any tool.

We needed a way to scale and make sonarwhal available anywhere with an internet connection. In November of last year we launched the online version. Since then, more than 160,000 URLs have been analyzed.

Each result page has its own permalink to allow you to go back to it later, or share it with anyone.

The code for the online scanner is available on GitHub.

Screen capture showing the sonarwhal.com online scanner results for example.com

Scan results for https://example.com

Configure sonarwhal to your needs

We think tools should be helpful and should stay out of your way. We can tell you what we think is important, but at the end of the day, you are the one that best understands what you are building and what requirements you have.

We build sonarwhal with strong defaults, but with the flexibility to let you decide what rules are relevant to your project, what URLs should be ignored, what browser you want to use—essentially, we want everything to be configurable.

To make it easier to reuse configurations, you can now extend from one or more and tweak the properties you want.

For example, to use the configuration “web-recommended” you just have to:

npm install @sonarwhal/configuration-web-recommended

And tell your .sonarwhalrc file to extend from it:

{
   "extends": ["web-recommended"]
}

If you want to tweak it, you can do this:

{
   "extends": ["web-recommended"],
   "ignoredUrls": [{
     "domain": ".*\.domain1\.com/.*",
     "rules": ["*"]
   }],
   "rules": {
     "highest-available-document-mode": ["error", {
       "requireMetaTag": true
     }]
   }
}

The above snippet will use the defaults of “web-recommended”, ignore all resources that match the regular expression .*\.domain1\.com/.* and enforce the X-UA-Compatible meta tag instead of the header.
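With a configuration like that in place, running a scan locally is then just a matter of installing and invoking the CLI (a quick sketch; check the sonarwhal documentation for the current package name and options):

npm install -g sonarwhal
sonarwhal https://example.com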

Without doubt, one of our favorite features is the adaptability of the rules. Depending on the browsers you want to support, some rules will adapt their feedback telling you the best approach for your specific case. We believe this is really important because not everybody gets to develop for the latest browser versions.

Easily extend sonarwhal with parsers to analyze files such as config files

Catching issues early in the development cycle is usually better than when the project is already shipped. To help with this we created the “local connector” that allows you to validate the files you are working with.

Building a website these days usually requires more than just writing HTML, JavaScript and CSS files. Developers use task managers, bundlers, transpilers, and compilers to generate their code. And each one of these needs to be configured, which in some cases is not easy. To tackle this problem, we came up with the concept of a parser. A parser understands a resource format and is capable of emitting information about it so rules can use it.

Parsers are a powerful concept. They allow sonarwhal to be expanded to support new scenarios we couldn’t imagine when we first started the project. In our case, we’ve started creating parsers for config files of the most popular tools used during the build process (tsconfig.json, .babelrc, and webpack.config.js so far) and rules related to them:

  • By default TypeScript output is ES3 compatible, but maybe you don’t need to go all the way down and could be using ES5, ES6, etc. Using the information of your browserslist, we’ll tell you what target you should have in your tsconfig.json file.
  •  If you are using webpack, you should have "modules": false in your .babelrc file to make sure you have better tree shaking and thus, less generated code.

These are just some relatively basic examples of what’s possible. Parsers allow you to create rules for virtually anything. For example, someone could create a parser that understands the metadata of image files and then a rule that checks that all the images have a valid “copyright status.”

sonarwhal analyzing configuration files for webpack and TypeScript

sonarwhal v1 is now available! Go get it while it’s fresh!

As you can see, we’ve been busy! After all that, we’re finally ready to announce the first major version of sonarwhal!

While this is a big milestone for us, it doesn’t mean we are going to remain idle. Indeed, now that GitHub organization projects can be public, we’ve opened up ours so you can see the project’s priorities and what we are working on.

Some of the things we are more excited about are:

  • New rules: you can expect more rules around security, performance, PWA, and development very soon.
  • User actions for the browser: sometimes the page or scenario you need to test requires a bit of interaction to get to it. We are looking in ways to allow you to control the browser before a scan.
  • Custom configuration for the online scanner: the user should be capable of deciding everything, even when using the online scanner.
  • Notifications for the online scanner: some websites take longer than others to scan, and it is easy to forget you have something running in a background tab. We will add opt-in notifications to let you know when all the results are gathered.

sonarwhal is completely open source and community driven. Your feedback is really appreciated and helps us prioritize this list. And if you want to help developers all around the world, join us!

Antón Molleda, Senior Program Manager, Microsoft Edge

The post Introducing sonarwhal v1: The linting tool for the web appeared first on Microsoft Edge Dev Blog.

Azure Service Fabric – announcing Reliable Services on Linux and RHEL support


Many customers are using Azure Service Fabric to build and operate always-on, highly scalable, microservice applications. Recently, we open sourced Service Fabric with the MIT license to increase opportunities for customers to participate in the development and direction of the product. Today, we are excited to announce the release of Service Fabric runtime v6.2 and corresponding SDK and tooling updates.

This release includes:

  • The general availability of Java and .NET Core Reliable Services and Actors on Linux
  • Public preview of Red Hat Enterprise clusters
  • Enhanced container support
  • Improved monitoring and backup/restore capabilities

The updates will be available in all regions over the next few days, and details can be found in the release notes.

Reliable Services and Reliable Actors on Linux is generally available

Reliable Services and Reliable Actors are programming models to help developers build stateless and stateful microservices for new applications and for adding new microservices to existing applications. Now you can use your preferred language to build Reliable Services and Actors with the Service Fabric API using .NET Core 2.0 and Java 8 SDKs on Linux. 

You can learn more about this capability through Java Quickstarts and .NET Core Samples.

Red Hat Enterprise clusters in public preview

Support for Azure Service Fabric clusters running on Red Hat Enterprise Linux (RHEL) is now in public preview. Using the Azure portal or CLI, you can now create and deploy Service Fabric clusters on RHEL, a capability that was previously available only for Windows and Ubuntu platforms.

Learn how to set up a Red Hat Enterprise Linux-based machine for Service Fabric application development.

Enhanced container support

To make it easier for you to deploy and run container-based apps and services on Service Fabric, we've added several features: 

  • You can now auto scale services and container instances in your Service Fabric cluster based on resource consumption metrics that you define. Service Fabric will monitor the load on a service and will dynamically scale in or out. For example, you can set memory thresholds to automatically scale your service by defined increments.
  • Building on the container log views from the v6.1 release, Service Fabric Explorer (SFX) now shows more descriptive error messages to help you quickly identify and resolve container issues such as the inability to find an image, or if the image fails to start. 
  • Support for queries using the Docker APIs provides more insights into your containers. 
  • Specify custom parameters that will be used when Service Fabric launches the Docker daemon. 
  • With Visual Studio 2017 3.1 (preview), more container tooling and debugging support is now available for Service Fabric as well. 

To learn more about container support in Azure Service Fabric, please visit the Service Fabric and containers documentation.

Improvements to monitoring, backup, and restore

The Application Insights SDK for Service Fabric now supports remoting V2 in public preview. More Service Fabric events are also available via the operational channel for you to track important cluster operations. Furthermore, you can now directly query the cluster to monitor any changes in your cluster through the public preview of EventStore APIs. EventStore APIs can be used to monitor workloads in test or staging clusters, and for on-demand diagnostics of production clusters. Finally, with the public preview of the Backup and Restore service for Windows, you can now easily back up and restore data in your Reliable Stateful or Reliable Actor services.

To learn more about the comprehensive set of improvements made to the platform with this release, please refer to the Azure Service Fabric 6.2 release notes.

Release Flow: How We Do Branching on the VSTS Team

Whenever I talk to somebody about Git and version control, one question always comes up: How do you do your branching at Microsoft? And there’s no one answer to this question. Although we’ve been moving everybody in the company into one engineering system, standardizing on Git hosted in Visual Studio Team Services, what we haven’t... Read More

AI, Machine Learning and Data Science Roundup: April 2018


A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications I've noted over the past month or so.

Open Source AI, ML & Data Science News

An interface between R and Python: reticulate.

TensorFlow Hub: A library for reusable machine learning modules.

TensorFlow.js: Browser-based machine learning with WebGL acceleration. 

Download data from Kaggle with the Kaggle API.

Industry News

Tensorflow 1.7 supports the TensorRT library for faster computation on NVIDIA GPUs.

RStudio now provides a Tensorflow template in Paperspace for computation with NVIDIA GPUs.

Google Cloud Text-to-Speech provides natural speech in 32 voices and 12 languages.

Amazon Translate is now generally available.

Microsoft News

ZDNet reviews The Future Computed: "do read it to remind yourself how much preparation is required for the impact of AI".

Microsoft edges closer to quantum computer based on Majorana fermions (Bloomberg).

Microsoft’s Shared Innovation Initiative.

Azure Sphere: a new chip, Linux-based OS, and cloud services to secure IoT devices.

Microsoft’s Brainwave makes Bing’s AI over 10 times faster (Venturebeat).

Improvements for Python developers in the March 2018 release of Visual Studio Code.

A review of the Azure Data Science Virtual Machine with a focus on deep learning with GPUs.

Azure Media Analytics services: motion, face and text detection and semantic tagging for videos.

Learning resources

Training SqueezeNet in Azure with MXNet and the Data Science Virtual Machine.

Microsoft's Professional Program in AI, now available to the public as an EdX course

Run Python scripts on demand with Azure Container Instances.

How to train multiple models simultaneously with Azure Batch AI.

Scaling models to Kubernetes clusters with Azure ML Workbench.

A Beginner’s Guide to Quantum Computing and Q#.

A "weird" introduction to Deep Learning, by  Favio Vázquez.

Berkeley's Foundation of Data Science course now available online, for free.

Find previous editions of the monthly AI roundup here.

Automating Industrial IoT Security


Industrial IoT is the largest IoT opportunity. At Microsoft, we serve this vertical by offering an Industrial IoT Cloud Platform Reference Architecture, which we have conveniently bundled into an open-source Azure IoT Suite solution called Connected Factory and launched it at HMI 2017 a year ago.

Since then, we continued our collaboration with the OPC Foundation, the non-profit organization developing the OPC UA Industrial Interoperability Standard, and added many new open-source contributions to their GitHub page, further extending our lead as the largest contributor of open-source software to the OPC Foundation by a factor of 10. We have also successfully certified the open-source, cross-platform .Net Standard OPC UA reference stack for compliance. This was a crucial step in our open-source OPC UA journey as Connected Factory uses this stack internally. We also managed to reduce the monthly Azure consumption cost of Connected Factory thanks to the recently announced new pricing structure of Azure IoT Hub.

Although Connected Factory is extremely popular with both machine builders and manufacturers, we hear from time to time that it is still difficult to connect real machines to it and at the same time make these machines secure for IoT applications. Therefore, we have added several new modules and services to Connected Factory, which make connectivity and security fully automatic! We are pleased to announce that we are launching these new modules and services at HMI this year.

In detail, we added:

  • An automatic machine/asset discovery Azure IoT Edge module called OPC Twin, which detects OPC UA Servers on the OT network and automatically registers them with Azure IoT Hub. The OPC Twin can also be controlled and managed from the cloud using a companion OPC Twin microservice running on Azure.
  • The OPC Twin also creates a Device Twin for each OPC UA server, complete with OPC UA metadata. This allows interaction with each individual OPC UA server from the cloud using native IoT Hub APIs.
  • Also, the OPC Twin performs an automatic security assessment for each individual OPC UA server and highlights security weaknesses to the user.
  • Furthermore, we added a cloud-assisted OPC UA Global Discovery Server, again packaged as an IoT Edge module. It handles automatic security configuration of OPC Servers and like the OPC Twin, has a companion Azure-based micro service for control and management from the cloud. The micro service also interacts with Azure Key Vault and is called GDS Vault. GDS Vault uses Key Vault for securely storing and managing X.509 certificates and private keys used by OPC UA. Through its cloud-based interface, operators can manage the security settings of OPC UA servers on a global scale for the first time and no longer need to manually exchange OPC UA certificates for each OPC UA client and server on each factory floor they are responsible for. The GDS therefore represents the world’s first truly global GDS.

Needless to say, we will provide an open-source integration of all of the above in Connected Factory on GitHub shortly.

We are proud that both the OPC Foundation and the Platform Industry 4.0 recommend the use of a Global Discovery Server for security management of OPC UA servers. Both organizations have released whitepapers (1) (2) describing the details.

Here is a diagram of our new Connected Factory architecture:
  [Diagram: Connected Factory architecture]

I would also like to quote from an article by John Rinaldi from Real Time Automation, written in response to our announcement that we contributed an open-source Global Discovery Server to the OPC Foundation:

Global Discovery Server will make all the difference in adoption. GDS implementations have been lacking for OPC UA and it has been one of the weak points of the architecture. Microsoft products vastly improves the OPC UA technology and solidifies Microsoft’s position on the factory floor. Most, if not all, factory floor implementations will need the GDS to handle the certificate management and that means that nearly all OPC UA installations will have a Microsoft presence.

But wait, there’s more!

We are also proud to announce that we have extended Azure IoT Central with the power of OPC UA, making a non-intrusive connection to on-premises machines also possible for IoT Central customers.

As you can see, we continue to invest to support you with the right Industrial IoT Cloud Platform at every step of your digital transformation journey in manufacturing.

We can’t wait to see what new products you will build using it!

Top stories from the VSTS community – 2018.04.20

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics, listed in no specific order: TOP STORIES VSTS Gems – Marketplace, the one stop location for added functionalityRui Melo highlights the place to find widgets and extensions to enhance your VSTS experience. Opps, I made... Read More

Help us plan the future of .NET!


We’re currently planning our next major .NET releases and would love to hear your feedback on how you interact with .NET Framework and .NET Core today. Please fill out the survey below and help shape our next release by telling us about your biggest challenges and needs. It should only take 5 minutes to complete!

Take the survey now!

We appreciate your contribution!


Gartner recognizes Microsoft as a leader in enterprise integration


This blog post was authored by Rodrigo de Carvalho, Product Marketing Manager, Microsoft Azure.

Gartner’s Magic Quadrant for Enterprise Integration Platform as a Service (eiPaaS), 2018 positions Microsoft as a leader, reflecting Microsoft’s ability to execute and completeness of vision.

Microsoft’s global presence, strong growth, and platform versatility provide customers the confidence to choose Azure as the cloud platform to automate, integrate, and optimize their business processes by connecting on-premises applications, SaaS, and data, and by API-enabling applications with managed integration services.

Integration Platform-as-a-Service at enterprise-scale

By integrating applications and data with partners, suppliers or customers, organizations optimize trade and information exchange driving business agility. Process automation, Enterprise Application Integration (EAI), Business-to-Business (B2B) transactions, Electronic Data Interchange (EDI), and Application Programing Interface (API) management are all areas in which organizations leverage Azure’s integration capabilities.

With Azure, organizations enhance productivity with business processes automation, SaaS, and on-premises application integration leveraging the most common out-of-the-box connectors for Azure services, Office 365, Dynamics CRM, among others.

“To build the highest quality product causing the least amount of harm.”

- Drew Story, Solution Architect, Patagonia

Also, organizations use Azure to optimize the exchange of electronic messages in business-to-business transactions, even when using different protocols and formats with trade partners such as suppliers or customers.

“When you have and already know Microsoft technology, connecting with Microsoft’s cloud makes things much easier.”

- William Vélez, CIO, Intermex Wire Transfer

In addition, Azure enables organizations to open new channels with customers and deliver faster value by publishing, managing, securing, and analyzing APIs, with a first-rate developer experience, connecting to back-end services anywhere, protecting and optimizing your APIs in a single place.

“It was essential for us to maintain business momentum while we scaled up. We were able to do both by working with the expert Microsoft cloud team in Norway and the mature, flexible, and powerful Azure platform.”

- Thomas Wold Johansen, Chief Technology Officer, Vipps

Learn how a mobile payment app (Vipps), a global outdoor gear and provisions clothing company (Patagonia), and a financial company that enables wire transfers from the United States to 17 Latin American countries (Intermex) use Azure as an integration platform to connect, exchange, and trade more efficiently and in new ways with partners, suppliers, and customers.

Download Gartner’s Magic Quadrant for Enterprise Integration Platform as a Service (eiPaaS), 2018 today.

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Spectre diagnostic in Visual Studio 2017 Version 15.7 Preview 4


Visual Studio 2017 version 15.7 Preview 4 adds a new capability to our Spectre mitigation: the ability to see where the compiler would have inserted a mitigation and what data led to that action. A new warning, C5045, lets you see what patterns in your code would have caused a mitigation, such as an LFENCE, to be inserted.

This change builds upon our existing Spectre mitigation support, including the changes introduced in Preview 3. Complete details are available in context in the original MSVC Spectre mitigation post on VCBlog. The new warning is also discussed below.

Enabling C5045

The C5045 warning is off by default. You can enable it in one of two ways:

  1. Set the warning level to EnableAllWarnings in Project Properties > C/C++ > General > Warning Level. This option enables all warnings so it can be a bit noisy. Even the VC++ libraries don’t attempt to be clean for all warnings (/Wall).
  2. Set the warning level of C5045 to a level specified in the “Warning Level” setting. The default level for C++ projects is /W3, so you can set the warning level of C5045 to level 3. To do this, put /w35045 on the command line: it says to treat warning number 5045 as level 3. You can do this in the text box in Project Properties > C/C++ > Command Line > Additional Options (a command-line equivalent is sketched after this list).
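For a plain command-line build, the equivalent looks roughly like this (source.cpp is a placeholder for your own file):

REM Surface the C5045 diagnostic by treating warning 5045 as level 3
cl /c /W3 /w35045 source.cpp

REM Same, but also insert the mitigations the warning describes
cl /c /W3 /w35045 /Qspectre source.cpp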

Using C5045

Once you’ve enabled the warning, just compile your code to see where mitigations would be inserted. This code sample contains one vulnerability:

int G, G1, G2;

__forceinline
int * bar(int **p, int i)
{
	return p[i];
}

__forceinline
void bar1(int ** p, int i)
{
	if (i < G1) {
		auto x = p[i]; // mitigation here
		G = *x;
	}
}

__forceinline
void foo(int * p)
{
	G = *p;
}
 
void baz(int ** p, int i)
{
	if (i < G1) {
		foo(bar(p, i + G2));
	}
	bar1(p, i);
}

int main() { }

Compiling the code above shows that a mitigation would have been inserted on line 13. It also notes that the mitigation is needed because the index i on line 12 feeds the memory load on line 14. The speculation is done across bar and bar1 but the mitigation is effective when placed at line 12.

1>------ Rebuild All started: Project: Spectre, Configuration: Debug Win32 ------ 
1>Source.cpp 
1>c:\users\apardoe\source\repos\spectre\spectre\source.cpp(13): warning C5045: Compiler will insert Spectre mitigation for memory load if /Qspectre switch specified 
1>c:\users\apardoe\source\repos\spectre\spectre\source.cpp(12) : note: index 'i' range checked by comparison on this line 
1>c:\users\apardoe\source\repos\spectre\spectre\source.cpp(14) : note: feeds memory load on this line 
1>Spectre.vcxproj -> c:\Users\apardoe\source\repos\Spectre\Debug\Spectre.exe 
1>Done building project "Spectre.vcxproj". 
========== Rebuild All: 1 succeeded, 0 failed, 0 skipped ========== 

Note that this warning is purely informational: the mitigation is not inserted until you recompile using the /Qspectre switch. The functionality of C5045 is independent of the /Qspectre switch so you can use them both in the same compilation.

In closing

We on the MSVC team are committed to the continuous improvement and security of your Windows software. We’ll continue to develop technologies that help developers mitigate speculative execution side channel vulnerabilities and other security issues.

We encourage you to recompile and redeploy your vulnerable software as soon as possible. Continue watching this blog and the @visualc Twitter feed for updates on this topic.

If you have any questions, please feel free to ask us below. You can also send us your comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp. Thank you.

Publish Improvements in Visual Studio 2017 15.7


Today we released Visual Studio 2017 15.7 Preview 4. Our 15.7 update brings some exciting updates for publishing applications from Visual Studio that we’re excited to tell you about, including:

  • Ability to configure publish settings before you publish or create a publish profile
  • Create Azure Storage Accounts and automatically store the connection string for App Service
  • Automatic enablement of Managed Service Identity in App Service

If you haven’t installed a Visual Studio Preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. We’d be very appreciative if you’d try Visual Studio 2017 15.7 Preview 4 and give us any feedback you might have while we still have time to change or fix things before we ship the final version (download now). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

Configure settings before publishing

When publishing your ASP.NET Core applications to either a folder or Azure App Service, you can now configure a number of settings before creating your publish profile.

To configure this prior to creating your profile, click the “Advanced…” link on the publish target page to open the Advanced Settings dialog.

Advanced link on 'Pick a publish target' dialog

Create Azure Storage Accounts and automatically store the connection string in App Settings

When creating a new Azure App Service, we’ve always offered the ability to create a new SQL Azure database and automatically store its connection string in your app’s App Service Settings. With 15.7, we now offer the ability to create a new Azure Storage Account while you are creating your App Service, and automatically place the connection string in the App Service settings as well. To create a new storage account:

  • Click the “Create a storage account” link in the top right of the “Create App Service” dialog
  • Provide the connection string key name your app uses to access the storage account in the “(Optional) Connection String Name” field at the bottom of the Storage Account dialog
  • Your application will now be able to talk to the storage account once your application is published

Optional Connection String Name field on Storage Account dialog

Managed Service Identity enabled for new App Services

A common challenge when building cloud applications is how to manage the credentials that need to be in your code for authenticating to other services. Ideally, credentials never appear on developer workstations or get checked into source control. Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.

Starting in Visual Studio 2017 15.7 Preview 4, when you publish an application to Azure App Service (not Linux) Visual Studio automatically enables MSI for your application. You can then give your app permission to communicate with any service that supports MSI authentication by logging into that service’s page in the Azure Portal and granting access to your App Service. For example, to create a Key Vault and give your App Service access (a CLI sketch follows the steps below):

  1. In the Azure Portal, select Create a resource > Security + Identity > Key Vault.
  2. Provide a Name for the new Key Vault.
  3. Locate the Key Vault in the same subscription and resource group as the App Service you created from Visual Studio.
  4. Select Access policies and click Add new.
  5. In Configure from template, select Secret Management.
  6. Choose Select Principal, and in the search field enter the name of the App Service.
  7. Select the App Service’s name in the result list and click Select.
  8. Click OK to finish adding the new access policy, and OK to finish access policy selection.
  9. Click Create to finish creating the Key Vault.

Azure portal dialog: Create a Key Vault and give your App Service access

Once you publish your application, it will have access to the Key Vault without the need for you to take any additional steps.
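If you'd rather script those portal steps, a rough Azure CLI equivalent looks like this (names are placeholders, and the exact secret permissions are up to you):

# Find the App Service's managed identity principal id
az webapp identity show --name my-app-service --resource-group my-resource-group --query principalId -o tsv

# Create the Key Vault in the same subscription and resource group
az keyvault create --name my-key-vault --resource-group my-resource-group

# Grant that identity access to secrets (use the principal id returned above)
az keyvault set-policy --name my-key-vault --object-id <principal-id-from-above> --secret-permissions get list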

Conclusion

If you’re interested in the many other great things that Visual Studio 2017 15.7 brings for .NET development, check out our .NET tool updates in Visual Studio 15.7 post on the .NET blog.

We hope that you’ll give 15.7 a try and let us know how it works for you. If you run into any issues, or have any feedback, please report them to us using Visual Studio’s features for sending feedback, or let us know what you think below or via Twitter.

Announcing Visual Studio 2017 15.7 Preview 4


As you know we continue to incrementally improve Visual Studio 2017 (version 15), and our 7th significant update is currently well under way with the 4th preview shipping today. As we’re winding down the preview, we’d like to stop and take the time to tell you about all of the great things that are coming in 15.7 and ask you to try it and give us any feedback you might have while we still have time to correct things before we ship the final version.

From a .NET tools perspective, 15.7 brings a lot of great enhancements including:

  • Support for .NET Core 2.1 projects
  • Improvements to Unit Testing
  • Improvements to .NET productivity tools
  • C# 7.3
  • Updates to F# tools
  • Azure Key Vault support in Connected Services
  • Library Manager for working with client-side libraries in web projects
  • More capabilities when publishing projects

In this post we’ll take a brief tour of all these features and talk about how you can try them out (download 15.7 Preview). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

.NET Core 2.1 Support

.NET Core 2.1 and ASP.NET Core 2.1 brings a list of great new features including performance improvements, global tools, a Windows compatibility pack, minor version roll-forward, and security improvements to name a few. For full details see the .NET Core 2.1 Roadmap and the ASP.NET Core 2.1 Roadmap respectively.

Visual Studio 15.7 is the recommended version of Visual Studio for working with .NET Core 2.1 projects. Once you have the .NET Core 2.1 SDK installed, getting started in Visual Studio is straightforward.

You’ll now see ASP.NET Core 2.1 as an option in the One ASP.NET dialog

clip_image002[4]

If you are working with a Console Application or Class Library, you’ll need to create the project and then open the project’s property page and change the Target framework to “.NET Core 2.1”

clip_image004[4]
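In the project file itself, that setting corresponds to a property along these lines (a minimal sketch of the relevant element):

<PropertyGroup>
  <TargetFramework>netcoreapp2.1</TargetFramework>
</PropertyGroup>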

Unit Testing Improvements

  • The Test Explorer has undergone more performance improvements, which results in smoother scrolling and faster updating of the test list for large solutions.
  • We’ve also improved the ability to understand what is happening during test runs. When a test run is in progress, a progress ring appears next to tests that are currently executing, and a clock icon appears for tests that are pending execution.

clip_image006[4]

Productivity Improvements

Each release we’ve been working to add more and more refactorings and code fixes to make you productive. In 15.7 Preview 4, invoke Quick Actions and Refactorings (Ctrl+. or Alt+Enter) to use:

  • Convert for-loop-to-foreach (and vice versa)
  • Make private field readonly
  • Toggle between var and the explicit type (without code style enforcement)

clip_image008[4]

To learn more about productivity features see our Visual Studio 2017 Productivity Guide for .NET Developers.

C# 7.3

15.7 also brings the newest incremental update to C#, version 7.3.

To use C# 7.3 features in your project:

  • Open your project’s property page (Project -> [Project Name] Properties…)
  • Choose the “Build” tab
  • Click the “Advanced…” button on the bottom right
  • Change the “Language version” dropdown to “C# latest minor version (latest)”. This setting will enable your project to use the latest C# features available to the version of Visual Studio you are in without needing to change it again in the future. If you prefer, you can pick a specific version from the list. The equivalent project-file property is sketched just after this list.
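If you prefer editing the project file directly, the same setting maps to a property roughly like this:

<PropertyGroup>
  <LangVersion>latest</LangVersion>
</PropertyGroup>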

F# improvements

15.7 also includes several improvements to F# and F# tooling in Visual Studio.

  • Type Providers are now enabled for .NET Core 2.1. To try it out, we recommend using FSharp.Data version 3.0.0-beta, which has been updated to use the new Type Provider infrastructure.
  • .NET SDK projects can now generate an F# AssemblyInfo file from project properties.
  • Various smaller bugs in file ordering for .NET SDK projects have been fixed, including initial ordering when pasting a file into a folder.
  • Toggles for outlining and Structured Guidelines are now available in the Text Editor > F# > Advanced options page.
  • Improvements in editor responsiveness have been made, including ensuring that error diagnostics always appear before other diagnostic information (e.g., unused value analysis)
  • Efforts to reduce memory usage of the F# tools have been made in partnership with the open source community, with much of the improvements available in this release.

Finally, templates for ASP.NET Core projects in F# are coming soon, targeted for the RTW release of VS 2017 15.7.

Azure Key Vault support in Connected Services

We have simplified the process to manage your project’s secrets with the ability to create and add a Key Vault to your project as a connected service. The Azure Key Vault provides a secure location to safeguard keys and other secrets used by applications so that they do not get shared unintentionally. Adding a Key Vault through Connected Services will:

  • Provide Key Vault support for ASP.NET and ASP.NET Core applications
  • Automatically add configuration to access your Key Vault through your project
  • Add the required NuGet packages to your project
  • Allow you to access, add, edit, and remove your secrets and permissions through the Azure portal

To get started:

  • Double click on the “Connected Services” node in Solution Explorer in your ASP.NET or ASP.NET Core application.
  • Click on “Secure Secrets with Azure Key Vault”.
  • When the Key Vault tab opens, select the Subscription that you would like your Key Vault to be associated with and click the “Add” button on the bottom left. By default Visual Studio will create a Key Vault with a unique name.
    Tip: If you would like to use an existing Key Vault, or change the location, resource group, or pricing tier from the preselected values, you can click on the ‘Edit’ link next to Key Vault.
  • Once the Key Vault has been added, you will be able to manage secrets and permissions with the links on the right.

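Once the configuration and packages are in place, your code can read secrets through the standard configuration APIs. A minimal sketch, assuming the Microsoft.Extensions.Configuration.AzureKeyVault package; the vault URL, client credentials, and secret name below are placeholders, not values the Connected Service generates:

using System;
using Microsoft.Extensions.Configuration;

class KeyVaultConfigSample
{
    static void Main()
    {
        // Build a configuration that pulls secrets from the Key Vault.
        var config = new ConfigurationBuilder()
            .AddAzureKeyVault(
                "https://my-sample-vault.vault.azure.net/", // hypothetical vault URL
                "<azure-ad-client-id>",                     // app registration used to authenticate
                "<azure-ad-client-secret>")
            .Build();

        // Each Key Vault secret surfaces as an ordinary configuration key.
        Console.WriteLine(config["MySecretName"]);
    }
}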

Library Manager

Library Manager (“LibMan” for short) is Microsoft’s new client-side static content management system for web projects. Designed as a replacement for Bower and npm, LibMan helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalogue.

To get started, right-click a web project from Solution Explorer and choose “Manage Client-side Libraries…”. This creates and opens the LibMan configuration file (libman.json) with some default content. Update the “libraries” section to add library files to your project. This example adds some jQuery files to the wwwroot/lib directory.

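A sketch of what that configuration might look like (the provider, library version, and file names here are illustrative, not the exact defaults Visual Studio writes):

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery",
      "files": [ "jquery.js", "jquery.min.js" ]
    }
  ]
}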

For more details, see Library Manager: Client-side content management for web apps.

Azure Publishing Improvements

We also made several improvements for publishing applications from Visual Studio, including the ability to configure publish settings before creating a publish profile, the ability to create Azure Storage Accounts and automatically store their connection strings for App Service, and automatic enablement of Managed Service Identity in App Service.

For more details, see our Publish improvements in Visual Studio 2017 15.7 post on the Web Developer blog.

Conclusion

If you haven’t installed a Visual Studio preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. So, we hope that you’ll take the time to install the Visual Studio 2017 15.7 Preview 4 update and let us know what you think. You can either use the built-in feedback tools in Visual Studio 2017 or let us know what you think below in the comments section.

Publish Improvements in Visual Studio 2017 15.7


Today we released Visual Studio 2017 15.7 Preview 4. Our 15.7 update brings some exciting improvements for publishing applications from Visual Studio, including:

  • Ability to configure publish settings before you publish or create a publish profile
  • Create Azure Storage Accounts and automatically store the connection string for App Service
  • Automatic enablement of Managed Service Identity in App Service

If you haven’t installed a Visual Studio Preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. We’d be very appreciative if you’d try Visual Studio 2017 15.7 Preview 4 and give us any feedback you might have while we still have time to change or fix things before we ship the final version (download now). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

Configure settings before publishing

When publishing your ASP.NET Core applications to either a folder or Azure App Service, you can now configure a number of publish settings before creating your publish profile.

To configure these settings, click the “Advanced…” link on the publish target page to open the Advanced Settings dialog.

[Screenshot: “Advanced…” link on the “Pick a publish target” dialog]

Create Azure Storage Accounts and automatically store the connection string in App Settings

When creating a new Azure App Service, we’ve always offered the ability to create a new SQL Azure database and automatically store its connection string in your app’s App Service Settings. With 15.7, we now offer the ability to create a new Azure Storage Account while you are creating your App Service, and automatically place the connection string in the App Service settings as well. To create a new storage account:

  • Click the “Create a storage account” link in the top right of the “Create App Service” dialog
  • Provide the connection string key name your app uses to access the storage account in the “(Optional) Connection String Name” field at the bottom of the Storage Account dialog
  • Once your application is published, it will be able to talk to the storage account

[Screenshot: “(Optional) Connection String Name” field on the Storage Account dialog]
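
Once published, the connection string is available to your code through configuration. A minimal sketch, assuming the WindowsAzure.Storage and Microsoft.Extensions.Configuration packages; “MyStorageConnection” stands in for whatever key name you entered in the dialog:

using System;
using Microsoft.Extensions.Configuration;
using Microsoft.WindowsAzure.Storage;

class StorageConnectionSample
{
    static void Main()
    {
        // App Service exposes its configured connection strings to the app as
        // environment variables, which the environment-variables provider maps
        // into the ConnectionStrings configuration section.
        IConfiguration config = new ConfigurationBuilder()
            .AddEnvironmentVariables()
            .Build();

        var account = CloudStorageAccount.Parse(
            config.GetConnectionString("MyStorageConnection"));
        Console.WriteLine(account.BlobEndpoint);
    }
}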

Managed Service Identity enabled for new App Services

A common challenge when building cloud applications is how to manage the credentials that need to be in your code for authenticating to other services. Ideally, credentials never appear on developer workstations or get checked into source control. Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.

Starting in Visual Studio 2017 15.7 Preview 4, when you publish an application to Azure App Service (not Linux), Visual Studio automatically enables MSI for your application. You can then give your app permission to communicate with any service that supports MSI authentication by logging into that service’s page in the Azure Portal and granting access to your App Service. For example, to create a Key Vault and give your App Service access:

  1. In the Azure Portal, select Create a resource > Security + Identity > Key Vault.
  2. Provide a Name for the new Key Vault.
  3. Place the Key Vault in the same subscription and resource group as the App Service you created from Visual Studio.
  4. Select Access policies and click Add new.
  5. In Configure from template, select Secret Management.
  6. Choose Select Principal, and in the search field enter the name of the App Service.
  7. Select the App Service’s name in the result list and click Select.
  8. Click OK to finish adding the new access policy, and OK to finish access policy selection.
  9. Click Create to finish creating the Key Vault.

[Screenshot: Azure portal – creating a Key Vault and giving your App Service access]

Once you publish your application, it will have access to the Key Vault without the need for you to take any additional steps.
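
For instance, code running in the App Service can use its managed identity to read a secret without any stored credentials. A minimal sketch, assuming the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages; the vault URL and secret name are placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

class MsiKeyVaultSample
{
    static async Task Main()
    {
        // The token provider uses the App Service's managed identity when running in Azure.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        var secret = await keyVaultClient.GetSecretAsync(
            "https://my-sample-vault.vault.azure.net/secrets/MySecret");
        Console.WriteLine(secret.Value);
    }
}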

Conclusion

If you’re interested in the many other great things that Visual Studio 2017 15.7 brings for .NET development, check out our .NET tool updates in Visual Studio 15.7 post on the .NET blog.

We hope that you’ll give 15.7 a try and let us know how it works for you. If you run into any issues, or have any feedback, please report them to us using Visual Studio’s features for sending feedback, or let us know what you think in the comments below or via Twitter.
