
ASP.NET Core Logging

This guest post was written by Mike Rousos

ASP.NET Core supports diagnostic logging through the Microsoft.Extensions.Logging package. This logging solution (which is used throughout ASP.NET Core, including internally by the Kestrel host) is highly extensible. There’s already documentation available to help developers get started with ASP.NET Core logging, so I’d like to use this post to highlight how custom log providers (like Microsoft.Extensions.Logging.AzureAppServices and Serilog) make it easy to log to a wide variety of destinations.

It’s also worth mentioning that nothing in Microsoft.Extensions.Logging requires ASP.NET Core, so these same logging solutions can work in any .NET Standard environment.
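
Since the abstractions live in Microsoft.Extensions.Logging, a plain console app can use them too. Here's a minimal sketch, assuming references to the Microsoft.Extensions.Logging and Microsoft.Extensions.Logging.Console packages (1.x-era APIs):

using Microsoft.Extensions.Logging;

class Program
{
    static void Main()
    {
        // Create a logger factory and register the console provider,
        // just as the ASP.NET Core templates do in Startup.Configure.
        var loggerFactory = new LoggerFactory();
        loggerFactory.AddConsole();

        // Loggers created from the factory write to all registered providers.
        var logger = loggerFactory.CreateLogger("Sample");
        logger.LogInformation("Hello from a plain .NET app");
    }
}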

A Quick Overview

Setting up logging in an ASP.NET Core app doesn’t require much code. ASP.NET Core’s new project templates already set up some basic logging providers with this code in the Startup.Configure method:

loggerFactory.AddConsole(Configuration.GetSection("Logging"));
loggerFactory.AddDebug();

These methods register logging providers on an instance of the ILoggerFactory interface which is provided to the Startup.Configure method via dependency injection. The AddConsole and AddDebug methods are just extension methods which wrap calls to ILoggerFactory.AddProvider.

Once these providers are registered, the application can log to them using an ILogger<T> (retrieved, again, via dependency injection). The generic parameter in ILogger<T> will be used as the logger’s category. By convention, ASP.NET Core apps use the class name of the code logging an event as the event’s category. This makes it easy to know where events came from when reviewing them later.

It’s also possible to retrieve an ILoggerFactory and use the CreateLogger method to generate an ILogger with a custom category.

ILogger’s log APIs send diagnostic messages to the logging providers you have registered.
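
For example, a controller (the class and category names here are hypothetical) could use either approach:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class OrdersController : Controller
{
    private readonly ILogger<OrdersController> _logger;
    private readonly ILogger _checkoutLogger;

    public OrdersController(ILogger<OrdersController> logger, ILoggerFactory loggerFactory)
    {
        // The category defaults to the full name of OrdersController.
        _logger = logger;

        // Or create a logger with an explicit, custom category.
        _checkoutLogger = loggerFactory.CreateLogger("Orders.Checkout");
    }

    [HttpGet]
    public IActionResult Get()
    {
        _logger.LogInformation("Listing orders");
        return Ok();
    }
}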

Structured Logging

One useful characteristic of ILogger logging APIs (LogInformation, LogWarning, etc.) is that they take both a message string and an object[] of arguments to be formatted into the message. This is useful because, in addition to passing the formatted message to logging providers, the individual arguments are also made available so that logging providers can record them in a structured format. This makes it easy to query for events based on those arguments.

So, make sure to take advantage of the args parameter when logging messages with an ILogger. Instead of calling

Logger.LogInformation("Retrieved " + records.Count + " records for user " + user.Id)

Consider calling

Logger.LogInformation("Retrieved {recordsCount} records for user {user}", records.Count, user.Id)

so that later you can easily query to see how many records are returned on average, or query only for events relating to a particular user or with more than a specific number of records.

Azure App Service Logging

If you will be deploying your ASP.NET Core app as an Azure app service web app or API, make sure to try out the Microsoft.Extensions.Logging.AzureAppServices logging provider.

Like other logging providers, the Azure app service provider can be registered on an ILoggerFactory instance:

loggerFactory.AddAzureWebAppDiagnostics(
  new AzureAppServicesDiagnosticsSettings
  {
    OutputTemplate = "{Timestamp:yyyy-MM-dd HH:mm:ss zzz} [{Level}] {RequestId}-{SourceContext}: {Message}{NewLine}{Exception}"
  }
);

The AzureAppServicesDiagnosticsSettings argument is optional, but allows you to specify the format of logged messages (as shown in the sample, above), or customize how Azure will store diagnostic messages.

Notice that the output format string can include common Microsoft.Extensions.Logging parameters (like Level and Message) or ASP.NET Core-specific scopes like RequestId and SourceContext.

To log messages, application logging must be enabled for the Azure app service. Application logging can be enabled in the Azure portal under the app service’s ‘Diagnostic logs’ page. Logging can be sent either to the file system or blob storage. Blob storage is a better option for longer-term diagnostic storage, but logging to the file system allows logs to be streamed. Note that file system application logging should only be turned on temporarily, as needed. The setting will automatically turn itself back off after 12 hours.

(Image: enabling application logging for the Azure app service in the portal)

Logging can also be enabled with the Azure CLI:

az appservice web log config --application-logging true --level information -n [Web App Name] -g [Resource Group]

Once logging has been enabled, the Azure app service logging provider will automatically begin recording messages. Logs can be downloaded via FTP (see information in the diagnostics log pane in the Azure portal) or streamed live to a console. This can be done either through the Azure portal or with the Azure CLI. Notice the streamed messages use the output format specified in the code snippet above.

(Image: streaming App Service logs)

The Azure app service logging provider is one example of a useful logging extension available for ASP.NET Core. Of course, if your app is not run as an Azure app service (perhaps it’s run as a microservice in Azure Container Service, for example), you will need other logging providers. Fortunately, ASP.NET Core has many to choose from.

Serilog

ASP.NET Core logging documentation lists the many built-in providers available. In addition to the providers already seen (console, debug, and Azure app service), these include useful providers for writing to ETW, the Windows EventLog, or .NET trace sources.

There are also many third-party providers available. One of these is the Serilog provider. Serilog is a notable logging technology both because it is a structured logging solution and because of the wide variety of custom sinks it supports. Most Serilog sinks now support .NET Standard.

I’ve recently worked with customers interested in logging diagnostic information to custom data stores like Azure Table Storage, Application Insights, Amazon CloudWatch, or Elasticsearch. One approach might be to just use the default console logger or another built-in provider and capture the events from those output streams and redirect them. The problem with that approach is that it’s not suitable for production environments since the console log provider is slow and redirecting from other destinations involves unnecessary extra work. It would be much better to log batches of messages to the desired data store directly.

Fortunately, Serilog sinks exist for all of these data stores that do exactly that. Let’s take a quick look at how to set those up.

First, we need to reference the Serilog.Extensions.Logging package. Then, register the Serilog provider in Startup.Configure:

loggerFactory.AddSerilog();

AddSerilog registers a Serilog ILogger to receive logging events. There are two different overloads of AddSerilog that you may call depending on how you want to provide an ILogger. If no parameters are passed, then the global Log.Logger Serilog instance will be registered to receive events. Alternatively, if you wish to provide the ILogger via dependency injection, you can use the AddSerilog overload which takes an ILogger as a parameter. Regardless of which AddSerilog overload you choose, you’ll need to make sure that your ILogger is set up (typically in the Startup.ConfigureServices method).

To create an ILogger, you will first create a new LoggerConfiguration object, then configure it (more on this below), and call LoggerConfiguration.CreateLogger(). If you will be registering the static Log.Logger, then just assign the logger you have created to that property. If, on the other hand, you will be retrieving an ILogger via dependency injection, then you can use services.AddSingleton<Serilog.ILogger> to register it.

Configuring Serilog Sinks

There are a few ways to configure Serilog sinks. One good approach is to use the LoggerConfiguration.ReadFrom.Configuration method which accepts an IConfiguration as an input parameter and reads sink information from the configuration. This IConfiguration is the same configuration interface that is used elsewhere for ASP.NET Core configuration, so your app’s Startup.cs probably already creates one.

The ability to configure Serilog from IConfiguration is contained in the Serilog.Settings.Configuration package. Make sure to add a reference to that package (as well as any packages containing sinks you intend to use).

The complete call to create an ILogger from configuration would look like this:

var logger = new LoggerConfiguration()
  .ReadFrom.Configuration(Configuration)
  .CreateLogger();

Log.Logger = logger;
// or: services.AddSingleton<Serilog.ILogger>(logger);

Then, in a configuration file (like appsettings.json), you can specify your desired Serilog configuration. Serilog expects to find a configuration section named ‘Serilog’. In that section, you can specify a minimum event level to log and a ‘WriteTo’ element that is an array of sinks. Each sink needs a ‘Name’ property identifying the kind of sink it is and, optionally, an ‘Args’ object to configure the sink.

As an example, here is an appsettings.json file that sets the minimum logging level to ‘Information’ and adds two sinks – one for Elasticsearch and one for LiterateConsole (a nifty color-coded structured logging sink that writes to the console):
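
(The file below is a sketch, assuming the LiterateConsole and Elasticsearch sink packages are referenced; the Elasticsearch node URI is a placeholder.)

{
  "Serilog": {
    "MinimumLevel": "Information",
    "WriteTo": [
      { "Name": "LiterateConsole" },
      {
        "Name": "Elasticsearch",
        "Args": { "nodeUris": "http://localhost:9200" }
      }
    ]
  }
}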

Another option for configuring Serilog sinks is to add them programmatically when creating the ILogger. For example, here is an updated version of our previous ILogger creation logic which loads Serilog settings from configuration and adds additional sinks programmatically using the WriteTo property:

var logger = new LoggerConfiguration()
  .ReadFrom.Configuration(Configuration)
  .WriteTo.AzureTableStorage(connectionString, LogEventLevel.Information)
  .WriteTo.AmazonCloudWatch(new CloudWatchSinkOptions
    {
      LogGroupName = "MyLogGroupName",
      MinimumLogEventLevel = LogEventLevel.Warning
    }, new AmazonCloudWatchLogsClient(new InstanceProfileAWSCredentials(), RegionEndpoint.APNortheast1))
  .CreateLogger();

In this example, we’re using Azure credentials from a connection string and AWS credentials from the current instance profile (assuming that this code will run on an EC2 instance). We could just as easily use a different AWSCredentials class if we wanted to load credentials in some other way.
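
For instance, a sketch of that swap using static keys (placeholder values, shown only for illustration) might look like this:

using Amazon;
using Amazon.CloudWatchLogs;
using Amazon.Runtime;

// Placeholder keys for illustration only; in practice, load them from secure configuration.
var credentials = new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY");
var cloudWatchClient = new AmazonCloudWatchLogsClient(credentials, RegionEndpoint.APNortheast1);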

Once Serilog is set up and registered with your application’s ILoggerFactory, you will start seeing events (both those you log with an ILogger and those logged internally by Kestrel) in all appropriate sinks!

Here is a screenshot of events logged by the LiterateConsole sink and a screenshot of an Elasticsearch event viewed in Kibana:

(Images: LiterateConsole sink output, and an Elasticsearch event viewed in Kibana)

Notice that in the Elasticsearch event there are a number of fields available besides the message. Some (like taskID) I defined through my message format string. Others (like RequestPath or RequestId) are automatically included by ASP.NET Core. This is the power of structured logging – in addition to searching just on the message, I can also query based on these fields. The Azure table storage sink preserves these additional data points as well, in a JSON blob in its ‘data’ column.

Conclusion

Thanks to the Microsoft.Extensions.Logging package, ASP.NET Core apps can easily log to a wide variety of endpoints. Built-in logging providers cover many scenarios, and third-party providers like Serilog add even more options. Hopefully this post has helped give an overview of the ASP.NET Core (and .NET Standard) logging ecosystem. Please check out the official docs and the Serilog.Extensions.Logging readme for more information.

Resources


dv01 uses R to bring greater transparency to the consumer lending market


The founder of the NYC-based startup dv01 watched the 2008 financial crisis and was inspired to bring greater transparency to institutional investors in the consumer lending market. Despite being an open-source shop, they switched their data services to Microsoft SQL Server to provide better performance (reducing latency for queries from tens of seconds to under two seconds). They also use R for modeling, as you can see in the video below:

(Video: dv01)

Principal Data Scientist Wei Wei Lu also mentions Microsoft Machine Learning in the video. The MicrosoftML package, included in SQL Server R Services and Microsoft R Server 9, provides parallel implementations of many popular algorithms as R functions. The video below (presented by Gleb Krivosheev and Yunling Wang) provides a brief overview of some of the new capabilities in Microsoft ML, including the new pre-trained models for sentiment analysis and image featurization.

To try out the MicrosoftML package, download SQL Server 2016 or Microsoft R Server 9.

Bing intelligence now available on the Interbot channel


Bootstrap your bots with the help of Bing. Developers can leverage Bing’s vast knowledge and intelligence, using our APIs to build intelligent bots more quickly than ever.

Gupshup, one of the leading bot platforms with 30k+ developers, recently launched a bot-to-bot communication channel called Interbot. With Interbot, developers can compose new bots by simply combining component bots - like putting lego blocks together.

Interbot is now enriched with intelligence from Bing that enables developers to build smart conversational interfaces. Multiple services using Bing Knowledge and Search APIs are set up as pre-built utility bots, unleashing new possibilities on Interbot.

“We chose to work with Bing APIs because they offer the same vast knowledge and intelligence embedded in its powerful search engine. Developers can use it to build incredibly advanced bots quickly and easily, especially through Interbot communication. Also, Microsoft has been very good at embracing and working with the broader developer ecosystem,” says Beerud Sheth, Co-founder and CEO of Gupshup.

Bing APIs help empower developers. They benefit from the security, scalability, relevance and ranking improvements that hundreds of millions of monthly users rely on. Below is a list of the utility bots now available on Interbot along with the Bing and Microsoft Cognitive Services APIs that power them:

Using these utility bots, developers can quickly and easily compose powerful new bots, plug-and-play style. For example, you can combine Bing intelligence bots with Microsoft Cognitive Services bots to create an entirely new bot. Check out the video below to see how it’s done:

*Bing Knowledge Graph is still in preview and is offered to white glove partners only

Learn more about Bing APIs at http://www.bing.com/partners.

- The Bing Team

The MVP Show Learns about ASP.NET, Identity Server, and Heidelberg


In the second episode of the MVP Show, intrepid host Seth Juarez traveled to Heidelberg, Germany to meet with Visual Studio and Development Technologies MVP Dominick Baier.  Dominick is one of the developers responsible for the Identity Server open source project.

Seth and Dominick shared several videos about all things API, Germany, and Security.  Check them out below.

Part 1: An Interview with Dominick Baier

 

Part 2: Modern Authentication Architecture

 

Part 3: IdentityServer 4 in action

You can learn more about Identity Server and ASP.NET at the following locations:

IdentityServer website: https://identityserver.io

 

 

Azure IoT Gateway SDK packages now available


Back in November, we announced the general availability of the Azure IoT Gateway SDK. We’ve already heard from a number of customers who are leveraging the open source Gateway SDK to connect their legacy devices or run analytics at the edge of their network. It’s great to see quick adoption! With the Gateway SDK’s modular architecture, developers can also program their own custom modules to perform specific actions. Thanks to its flexible design, you can create these modules in your preferred language – Node.js, Java, C#, or C.

We want to further simplify the experience of getting started with writing modules for the Gateway SDK. Today, we are announcing availability of packages to streamline the developer experience, enabling you to get started in minutes!

What packages are available?

  1. NPM
    • azure-iot-gateway: With this you will be able to run the Gateway sample app and start writing Node.js modules. This package contains the Gateway runtime core and auto-installs the module dependencies’ packages for Linux or Windows.
    • generator-az-iot-gw-module: This provides Gateway module project scaffolding with Yeoman.
  2. Maven
    • com.microsoft.azure.gateway/gateway-module-base: With this you will be able to run the Gateway sample app and start writing Java modules. You only need this package and its dependencies to run the Gateway app locally, but you do not need to include them when you publish your gateway. This package contains the Gateway runtime core and links to the module dependencies’ packages for Linux or Windows.
    • com.microsoft.azure.gateway/gateway-java-binding: This package contains the Java binding or interface. This package is required for both runtime and publishing.
  3. NuGet

What does this mean for you?

The primary benefit of these packages is time saved. They significantly reduce the number of steps required to start writing a module. You no longer have to clone and build the whole Gateway project. In addition, the packages include all the dependencies for you to mix modules written in different languages.

What’s next?

.NET Core NuGet packages are coming soon. We are looking for further ways to improve the Gateway SDK developer experience. For more information on getting started with these packages, check out our GitHub sample apps.

We’re delighted to see developers contributing their modules to the Gateway SDK community. Our team looks forward to seeing further activity in this area and learning more about your gateway scenarios. So take the new packages for a spin, and let us know what you think!

Azure Management Libraries for .NET generally available now


Today, we are announcing the general availability of the new, simplified Azure management libraries for .NET for Compute, Storage, SQL Database, Networking, Resource Manager, Key Vault, Redis, CDN and Batch services.
 
Azure Management Libraries for .NET are open source – https://github.com/Azure/azure-sdk-for-net/tree/Fluent

Availability by service:

Compute
  • Generally available: Virtual machines and VM extensions, Virtual machine scale sets, Managed disks
  • Coming soon: Azure container services, Azure container registry

Storage
  • Generally available: Storage accounts
  • Coming soon: Encryption

SQL Database
  • Generally available: Databases, Firewalls, Elastic pools

Networking
  • Generally available: Virtual networks, Network interfaces, IP addresses, Routing table, Network security groups, DNS, Traffic managers
  • Available as preview: Load balancers, Application gateways

More services
  • Generally available: Resource Manager, Key Vault, Redis, CDN, Batch
  • Available as preview: App service - Web apps, Functions, Service bus
  • Coming soon: Monitor, Graph RBAC, DocumentDB, Scheduler

Fundamentals
  • Generally available: Authentication – core
  • Available as preview: Async methods

Generally available means that developers can use these libraries in production with full support by Microsoft through GitHub or Azure support channels. Preview features are flagged in documentation comments in libraries.

In Spring 2016, based on .NET developer feedback, we started a journey to simplify the Azure management libraries for .NET. Our goal is to improve the developer experience by providing a higher-level, object-oriented API optimized for readability and writability. These libraries are built on the lower-level, request-response style auto-generated clients and can run side-by-side with them.

We announced multiple previews of the libraries. During the preview period, early adopters provided us with valuable feedback and helped us prioritize features and Azure services to be supported. For example, we added support for asynchronous methods and Azure Service Bus.

You can download the generally available libraries from NuGet.

Working with the Azure Management Libraries for .NET

One C# statement to authenticate. One statement to create a virtual machine. One statement to modify an existing virtual network ... No more guessing about what is required vs. optional vs. non-modifiable.

Azure Authentication

One statement to authenticate and choose a subscription. The Azure class is the simplest entry point for creating and interacting with Azure resources.

IAzure azure = Azure.Authenticate(credFile).WithDefaultSubscription();

Create a Virtual Machine

You can create a virtual machine instance by using the Define() … Create() method chain.

var windowsVM = azure.VirtualMachines.Define("myWindowsVM")
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithNewPrimaryNetwork("10.0.0.0/28")
    .WithPrimaryPrivateIPAddressDynamic()
    .WithNewPrimaryPublicIPAddress("mywindowsvmdns")
    .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2012R2Datacenter)
    .WithAdminUsername("tirekicker")
    .WithAdminPassword(password)
    .WithSize(VirtualMachineSizeTypes.StandardD3V2)
    .Create();

Update a Virtual Machine

You can update a virtual machine instance by using an Update() … Apply() method chain.

windowsVM.Update()
    .WithNewDataDisk(20, lun, CachingTypes.ReadWrite)
    .Apply();

Create a Virtual Machine Scale Set

You can create a virtual machine scale set instance by using another Define() … Create() method chain.

var virtualMachineScaleSet = azure.VirtualMachineScaleSets.Define(vmssName)
    .WithRegion(Region.USEast)
    .WithExistingResourceGroup(rgName)
    .WithSku(VirtualMachineScaleSetSkuTypes.StandardD3v2)
    .WithExistingPrimaryNetworkSubnet(network, "Front-end")
    .WithExistingPrimaryInternetFacingLoadBalancer(loadBalancer1)
    .WithPrimaryInternetFacingLoadBalancerBackends(backendPoolName1, backendPoolName2)
    .WithPrimaryInternetFacingLoadBalancerInboundNatPools(natPool50XXto22, natPool60XXto23)
    .WithoutPrimaryInternalLoadBalancer()
    .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
    .WithRootUsername(userName)
    .WithSsh(sshKey)
    .WithNewDataDisk(100)
    .WithNewDataDisk(100, 1, CachingTypes.ReadWrite)
    .WithNewDataDisk(100, 2, CachingTypes.ReadWrite, StorageAccountTypes.StandardLRS)
    .WithCapacity(3)
    .Create();

Create a Network Security Group

You can create a network security group instance by using another Define() … Create() method chain.

var frontEndNSG = azure.NetworkSecurityGroups.Define(frontEndNSGName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .DefineRule("ALLOW-SSH")
        .AllowInbound()
        .FromAnyAddress()
        .FromAnyPort()
        .ToAnyAddress()
        .ToPort(22)
        .WithProtocol(SecurityRuleProtocol.Tcp)
        .WithPriority(100)
        .WithDescription("Allow SSH")
        .Attach()
    .DefineRule("ALLOW-HTTP")
        .AllowInbound()
        .FromAnyAddress()
        .FromAnyPort()
        .ToAnyAddress()
        .ToPort(80)
        .WithProtocol(SecurityRuleProtocol.Tcp)
        .WithPriority(101)
        .WithDescription("Allow HTTP")
        .Attach()
    .Create();

Create a Web App

You can create a Web App instance by using another Define() … Create() method chain.

var webApp = azure.WebApps.Define(appName)
    .WithRegion(Region.USWest)
    .WithNewResourceGroup(rgName)
    .WithNewFreeAppServicePlan()
    .Create();

Create a SQL Database

You can create a SQL server instance by using another Define() … Create() method chain.

var sqlServer = azure.SqlServers.Define(sqlServerName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithAdministratorLogin(administratorLogin)
    .WithAdministratorPassword(administratorPassword)
    .WithNewFirewallRule(firewallRuleIpAddress)
    .WithNewFirewallRule(firewallRuleStartIpAddress, firewallRuleEndIpAddress)
    .Create();

Then, you can create a SQL database instance by using another Define() … Create() method chain.

var database = sqlServer.Databases.Define(databaseName)
    .Create();

Sample Code

You can find plenty of sample code that illustrates management scenarios (69+ end-to-end scenarios) for Azure.

Sample scenarios are available for the following services:

  • Virtual Machines
  • Virtual Machines - parallel execution
  • Virtual Machine Scale Sets
  • Storage
  • Networking
  • Networking - DNS
  • Traffic Manager
  • Application Gateway
  • SQL Database
  • Redis Cache
  • App Service - Web Apps on Windows
  • App Service - Web Apps on Linux
  • Functions
  • Service Bus
  • Resource Groups
  • Key Vault
  • CDN
  • Batch

 

Start using Azure Management Libraries for .NET today!

Start using these libraries today; getting started is easy, and you can run the samples above.

As always, we would like to hear your feedback via comments on this blog, by opening issues in GitHub, or via e-mail to az-libs-for-net@microsoft.com.

JWT Validation and Authorization in ASP.NET Core

This is a guest post by Mike Rousos

In a couple of previous posts, I discussed a customer scenario I ran into recently that required issuing bearer tokens from an ASP.NET Core authentication server and then validating those tokens in a separate ASP.NET Core web service which may not have access to the authentication server. The previous posts covered how to set up an authentication server for issuing bearer tokens in ASP.NET Core using libraries like OpenIddict or IdentityServer4. In this post, I’m going to cover the other end of token use on ASP.NET Core – how to validate JWT tokens and use them to authenticate users.

Although this post focuses on .NET Core scenarios, there are also many options for using and validating bearer tokens in the .NET Framework, including the code shown here (which works on both .NET Core and the .NET Framework) and Azure Active Directory packages like Microsoft.Owin.Security.ActiveDirectory, which are covered in detail in Azure documentation.

JWT Authentication

The good news is that authenticating with JWT tokens in ASP.NET Core is straightforward. Middleware exists in the Microsoft.AspNetCore.Authentication.JwtBearer package that does most of the work for us!

To test this out, let’s create a new ASP.NET Core web API project. Unlike the web app in my previous post, you don’t need to add any authentication to this web app when creating the project. No identity or user information is managed by the app directly. Instead, it will get all the user information it needs directly from the JWT token that authenticates a caller.

Once the web API is created, decorate some of its actions (like the default Values controller) with [Authorize] attributes. This will cause ASP.NET Core to only allow calls to the attributed APIs if the user is authenticated and logged in.
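
As a sketch, protecting the template’s ValuesController (which matches the sample request and response shown later in this post) looks something like this:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    // Unauthenticated callers receive a 401 for this action.
    [Authorize]
    [HttpGet("{id}")]
    public string Get(int id)
    {
        return "value";
    }
}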

To actually support JWT bearer authentication as a means of proving identity, all that’s needed is a call to the UseJwtBearerAuthentication extension method (from the Microsoft.AspNetCore.Authentication.JwtBearer package) in the app’s Startup.Configure method. Because ASP.NET Core middleware executes in the order it is added in Startup, it’s important that the UseJwtBearerAuthentication call comes before UseMvc.

UseJwtBearerAuthentication takes a JwtBearerOptions parameter which specifies how to handle incoming tokens. A typical, simple use of UseJwtBearerAuthentication might look like this:

app.UseJwtBearerAuthentication(new JwtBearerOptions()
{
  Audience = "http://localhost:5001/",
  Authority = "http://localhost:5000/",
  AutomaticAuthenticate = true
});

The parameters in such a usage are:

  • Audience represents the intended recipient of the incoming token or the resource that the token grants access to. If the value specified in this parameter doesn’t match the aud parameter in the token, the token will be rejected because it was meant to be used for accessing a different resource. Note that different security token providers have different behaviors regarding what is used as the ‘aud’ claim (some use the URI of a resource a user wants to access, others use scope names). Be sure to use an audience that makes sense given the tokens you plan to accept.
  • Authority is the address of the token-issuing authentication server. The JWT bearer authentication middleware will use this URI to find and retrieve the public key that can be used to validate the token’s signature. It will also confirm that the iss parameter in the token matches this URI.
  • AutomaticAuthenticate is a boolean value indicating whether the user defined by the token should be automatically logged in.
  • RequireHttpsMetadata is not used in the code snippet above, but is useful for testing purposes. In real-world deployments, JWT bearer tokens should always be passed only over HTTPS.

The scenario I worked on with a customer recently, though, was a little different than this typical JWT scenario. The customer wanted to be able to validate tokens without access to the issuing server. Instead, they wanted to use a public key that was already present locally to validate incoming tokens. Fortunately, UseJWTBearerAuthentication supports this use-case. It just requires a few adjustments to the parameters passed in.

  1. First, the Authority property should not be set on the JwtBearerOptions. If it’s set, the middleware assumes that it can go to that URI to get token validation information. In this scenario, the authority URI may not be available.
  2. A new property (TokenValidationParameters) must be set on the JwtBearerOptions. This object allows the caller to specify more advanced options for how JWT tokens will be validated.

There are a number of interesting properties that can be set in a TokenValidationParameters object, but the ones that matter for this scenario are shown in this updated version of the previous code snippet:

var tokenValidationParameters = new TokenValidationParameters
{
  ValidateIssuerSigningKey = true,
  ValidateIssuer = true,
  ValidIssuer = "http://localhost:5000/",
  IssuerSigningKey = new X509SecurityKey(new X509Certificate2(certLocation))
};

app.UseJwtBearerAuthentication(new JwtBearerOptions()
{
  Audience = "http://localhost:5001/",
  AutomaticAuthenticate = true,
  TokenValidationParameters = tokenValidationParameters
});

The ValidateIssuerSigningKey and ValidateIssuer properties indicate that the token’s signature should be validated and that the token’s issuer must match an expected value. This is an alternate way to make sure the issuer is validated, since we’re not using an Authority parameter in our JwtBearerOptions (which would have implicitly checked that the JWT’s issuer matched the authority). Instead, the JWT’s issuer is matched against custom values that are provided by the ValidIssuer or ValidIssuers properties of the TokenValidationParameters object.

The IssuerSigningKey is the public key used for validating incoming JWT tokens. By specifying a key here, the token can be validated without any need for the issuing server. What is needed, instead, is the location of the public key. The certLocation parameter in the sample above is a string pointing to a .cer certificate file containing the public key corresponding to the private key used by the issuing authentication server. Of course, this certificate could just as easily (and more likely) come from a certificate store instead of a file.

In my previous posts on the topic of issuing authentication tokens with ASP.NET Core, it was necessary to generate a certificate to use for token signing. As part of that process, a .cer file was generated which contained the public (but not private) key of the certificate. That certificate is what needs to be made available to apps (like this sample) that will be consuming the generated tokens.

With UseJwtBearerAuthentication called in Startup.Configure, our web app should now respect identities sent as JWT bearer tokens in a request’s Authorization header.

Authorizing with Custom Values from JWT

To make the web app consuming tokens a little more interesting, we can also add some custom authorization that only allows access to APIs depending on specific claims in the JWT bearer token.

Role-based Authorization

Authorizing based on roles is available out-of-the-box with ASP.NET Identity. As long as the bearer token used for authentication contains a roles element, ASP.NET Core’s JWT bearer authentication middleware will use that data to populate roles for the user.

So, a roles-based authorization attribute (like [Authorize(Roles = "Manager,Administrator")] to limit access to managers and admins) can be added to APIs and work immediately.

Custom Authorization Policies

Custom authorization in ASP.NET Core is done through custom authorization requirements and handlers. ASP.NET Core documentation has an excellent write-up on how to use requirements and handlers to customize authorization. For a more in-depth look at ASP.NET Core authorization, check out this ASP.NET Authorization Workshop.

The important thing to know when working with JWT tokens is that in your AuthorizationHandler’s HandleRequirementAsync method, all the elements from the incoming token are available as claims on the AuthorizationHandlerContext.User. So, to validate that a custom claim is present from the JWT, you might confirm that the element exists in the JWT with a call to context.User.HasClaim and then confirm that the claim is valid by checking its value.

Again, details on custom authorization policies can be found in ASP.NET Core documentation, but here’s a code snippet demonstrating claim validation in an AuthorizationHandler that authorizes users based on the (admittedly strange) requirement that their office number claim be lower than some specified value. Notice that it’s necessary to parse the office number claim’s value from a string since (as mentioned in my previous post), ASP.NET Identity stores all claim values as strings.
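
Here is a sketch of what such a requirement and handler might look like, assuming the claim is named "office" (as in the sample token shown later in this post):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

// Requirement carrying the maximum office number allowed by the policy.
public class MaximumOfficeNumberRequirement : IAuthorizationRequirement
{
    public MaximumOfficeNumberRequirement(int officeNumber)
    {
        MaximumOfficeNumber = officeNumber;
    }

    public int MaximumOfficeNumber { get; }
}

public class MaximumOfficeNumberAuthorizationHandler : AuthorizationHandler<MaximumOfficeNumberRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        MaximumOfficeNumberRequirement requirement)
    {
        // Confirm the claim exists in the incoming token before using it.
        if (context.User.HasClaim(c => c.Type == "office"))
        {
            // Claim values are stored as strings, so parse before comparing.
            int officeNumber;
            if (int.TryParse(context.User.FindFirst("office").Value, out officeNumber) &&
                officeNumber <= requirement.MaximumOfficeNumber)
            {
                context.Succeed(requirement);
            }
        }

        return Task.CompletedTask;
    }
}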

This authorization requirement can be registered in Startup.ConfigureServices with a call to AddAuthorization to add a requirement that an office number not exceed a particular value (200, in this example), and by adding the handler with a call to AddSingleton:

// Add custom authorization handlers
services.AddAuthorization(options =>
{
  options.AddPolicy("OfficeNumberUnder200", policy => policy.Requirements.Add(new MaximumOfficeNumberRequirement(200)));
});

services.AddSingleton<IAuthorizationHandler, MaximumOfficeNumberAuthorizationHandler>();

Finally, this custom authorization policy can protect APIs by decorating actions (or controllers) with appropriate Authorize attributes with their policy argument set to the name used when defining the custom authorization requirement in startup.cs:

[Authorize(Policy = "OfficeNumberUnder200")]

Testing it All Together

Now that we have a simple web API that can authenticate and authorize based on tokens, we can try out JWT bearer token authentication in ASP.NET Core end-to-end.

The first step is to login with the authentication server we created in my previous post. Once that’s done, copy the token out of the server’s response.

Now, shut down the authentication server just to be sure that our web API can authenticate without it being online.

Then, launch our test web API and using a tool like Postman or Fiddler, create a request to the web API. Initially, the request should fail with a 401 error because the APIs are protected with an [Authorize] attribute. To make the calls work, add an Authorization header with the value “bearer X” where “X” is the JWT bearer token returned from the authentication server. As long as the token hasn’t expired, its audience and authority match the expected values for this web API, and the user indicated by the token satisfies any custom authorization policies on the action called, a valid response should be served from our web API.

Here are a sample request and response from testing out the sample created in this post:

Request:

GET /api/values/1 HTTP/1.1
Host: localhost:5001
Authorization: bearer
eyJhbGciOiJSUzI1NiIsImtpZCI6IkU1N0RBRTRBMzU5NDhGODhBQTg2NThFQkExMUZFOUIxMkI5Qzk5NjIiLCJ0eXAiOiJKV1QifQ
.eyJ1bmlxdWVfbmFtZSI6IkJvYkBDb250b3NvLmNvbSIsIkFzcE5ldC5JZGVudGl0eS5TZWN1cml0eVN0YW1wIjoiM2M4OWIzZjYtN
zE5Ni00NWM2LWE4ZWYtZjlmMzQyN2QxMGYyIiwib2ZmaWNlIjoiMjAiLCJqdGkiOiI0NTZjMzc4Ny00MDQwLTQ2NTMtODYxZi02MWJ
iM2FkZTdlOTUiLCJ1c2FnZSI6ImFjY2Vzc190b2tlbiIsInNjb3BlIjpbImVtYWlsIiwicHJvZmlsZSIsInJvbGVzIl0sInN1YiI6I
jExODBhZjQ4LWU1M2ItNGFhNC1hZmZlLWNmZTZkMjU4YWU2MiIsImF1ZCI6Imh0dHA6Ly9sb2NhbGhvc3Q6NTAwMS8iLCJuYmYiOjE
0Nzc1MDkyNTQsImV4cCI6MTQ3NzUxMTA1NCwiaWF0IjoxNDc3NTA5MjU0LCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjUwMDAvIn0.L
mx6A3jhwoyZ8KAIkjriwHIOAYkgXYOf1zBbPbFeIiU2b-2-nxlwAf_yMFx3b1Ouh0Bp7UaPXsPZ9g2S0JLkKD4ukUa1qW6CzIDJHEf
e4qwhQSR7xQn5luxSEfLyT_LENVCvOGfdw0VmsUO6XT4wjhBNEArFKMNiqOzBnSnlvX_1VMx1Tdm4AV5iHM9YzmLDMT65_fBeiekxQ
NPKcXkv3z5tchcu_nVEr1srAk6HpRDLmkbYc6h4S4zo4aPcLeljFrCLpZP-IEikXkKIGD1oohvp2dpXyS_WFby-dl8YQUHTBFHqRHi
k2wbqTA_gabIeQy-Kon9aheVxyf8x6h2_FA

Response:

HTTP/1.1 200 OK
Date: Thu, 15 Sep 2016 21:53:10 GMT
Transfer-Encoding: chunked
Content-Type: text/plain; charset=utf-8
Server: Kestrel

value

Conclusion

As shown here, authenticating using JWT bearer tokens is straightforward in ASP.NET Core, even in less common scenarios (such as the authentication server not being available). What’s more, ASP.NET Core’s flexible authorization policy makes it easy to have fine-grained control over access to APIs. Combined with my previous posts on issuing bearer tokens, you should have a good overview of how to use this technology for authentication in ASP.NET Core web apps.

Resources

Announcing TypeScript 2.3


Today we’re excited to bring you our latest release with TypeScript 2.3!

For those who aren’t familiar, TypeScript is a superset of JavaScript that brings users optional static types and solid tooling. Using TypeScript can help avoid painful bugs people commonly run into when writing JavaScript by type-checking your code. TypeScript can actually report issues without you even saving your file, and leverage the type system to help you write code even faster. This leads to a truly awesome editing experience, giving you time to think about and test the things that really matter.

To get started with the latest stable version of TypeScript, you can grab it through NuGet, or use the following command with npm:

npm install -g typescript

Visual Studio 2015 users (who have Update 3) as well as Visual Studio 2017 users using Update 2 Preview will be able to get TypeScript by simply installing it from here. To elaborate a bit, this also means that Visual Studio 2017 Update 2 will be able to use any newer version of TypeScript side-by-side with older ones.

For other editors, default support for 2.3 should be coming soon, but you can configure Visual Studio Code and our Sublime Text plugin to pick up whatever version you need.

While our What’s New in TypeScript Page will give the full scoop, let’s dive in to see some of the great new features TypeScript 2.3 brings!

Type checking in JavaScript files with // @ts-check and --checkJs

TypeScript has long had an option for gradually migrating your files from JavaScript to TypeScript using the --allowJs flag; however, one of the common pain-points we heard from JavaScript users was that migrating JavaScript codebases and getting early benefits from TypeScript was difficult. That’s why in TypeScript 2.3, we’re experimenting with a new “soft” form of checking in .js files, which brings many of the advantages of writing TypeScript without actually writing .ts files.

This new checking mode uses comments to specify types on regular JavaScript declarations. Just like in TypeScript, these annotations are completely optional, and inference will usually pick up the slack from there. But in this mode, your code is still runnable and doesn’t need to go through any new transformations.

You can try this all out without even needing to touch your current build tools. If you’ve already got TypeScript installed (npm install -g typescript), getting started is easy! First create a tsconfig.json in your project’s root directory:

{
    "compilerOptions": {
        "noEmit": true,
        "allowJs": true
    },
    "include": [
        "./src/"
    ]
}

Note: We’re assuming our files are in src. Your folder names might be different.

Now all you need to do to type-check a file is to add a // @ts-check comment at the top. Then run tsc from the same folder as your tsconfig.json and that’s it.

// @ts-check

/**
 * @param {string} input
 */
function foo(input) {
    input.tolowercase()
    //    ~~~~~~~~~~~ Error! Should be toLowerCase
}

We just assumed you didn’t want to bring TypeScript into your build pipeline at all, but TypeScript is very flexible about how you want to set up your project. Maybe you wanted to have all JavaScript files in your project checked with the checkJs flag instead of using // @ts-check comments. Maybe you wanted TypeScript to also compile down your ES2015+ code while checking it. Here’s a tsconfig.json that does just that:

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "allowJs": true,
        "checkJs": true,
        "outDir": "./lib"
    },
    "include": [
        "./src/**/*"
    ]
}

Note: Since TypeScript is creating new files, we had to set outDir to another folder like lib. That might not be necessary if you use tools like Webpack, Gulp, etc.

Next, you can start using TypeScript declaration files (.d.ts files) for any of your favorite libraries and benefit from any new code you start writing. We think you’ll be especially happy getting code completion and error checking based on these definitions, and chances are, you may’ve already tried it.

This JavaScript checking mode also allows for two other comments in .js files:

  1. // @ts-nocheck to disable a file from being checked when --checkJs is on
  2. // @ts-ignore to ignore errors on the following line.

You might already be thinking of this experience as something similar to linting; however, that doesn’t mean we’re trying to replace your linter! We see this as something complementary that can run side-by-side with existing tools like ESLint on your JavaScript. Each tool can play to its strengths.

If you’re already using TypeScript, we’re sure you have a JavaScript codebase lying around in which you can turn this on and quickly catch some real bugs. But if you’re new to TypeScript, we think that this mode will really help show you what TypeScript has to offer without needing to jump straight in or commit.

Language server plugin support

One of TypeScript’s goals is to deliver a state-of-the-art editing experience to the JavaScript world. This experience is something our users have long enjoyed, whether it came to using traditional JavaScript constructs, newer ECMAScript features, or even JSX.

However, in the spirit of separation of concerns, TypeScript avoided special-casing certain content such as templates. This was a problem that we’d discussed deeply with the Angular team – we wanted to be able to deliver the same level of tooling to Angular users for their templates as we did in other parts of their code. That included completions, go-to-definition, error reporting, etc.

After working closely with the Angular team, we’re happy to announce that TypeScript 2.3 officially makes a language server plugin API available. This API allows plugins to augment the regular editing experience that TypeScript already delivers. What all of this means is that you can get an enhanced editing experience for many different workloads.

You can see the progress of Angular’s Visual Studio Code plugin here, which can greatly enhance the experience for Angular users. Importantly, note that this is a general API – that means a plugin can be written for many different types of content. As an example, there’s already a TSLint language server plugin, as well as a TypeScript GraphQL language server plugin! The Vetur plugin has also delivered a better TypeScript and JavaScript editing experience within .vue files for Vue.js through our language server plugin model.

We hope that TypeScript will continue to empower users of different toolsets. If you’re interested in providing an enhanced experience for your users, you can check out this example plugin.

Default type arguments

To explain this feature, let’s take a simplified look at React’s component API. A React component will have props and potentially some state. You could encode this like follows:

class Component<P, S> {
    // ...
}

Here P is the type of props and S is the type of state.

However, much of the time, state is never used in a component. In those cases, we can just write the type as object or {}, but we have to do so explicitly:

class FooComponent extends React.Component<FooProps, object> {
    // ...
}

This may not be surprising. It’s fairly common for APIs to have some concept of default values for information you don’t care about.

Enter default type arguments. Default type arguments allow us to free ourselves from thinking of unused generics. In TypeScript 2.3, we can declare React.Component as follows:

class Component<P, S = object> {
    // ...
}

And now whenever we write Component<FooProps>, we’re implicitly writing Component<FooProps, object>.

Keep in mind that a type parameter’s default isn’t necessarily the same as its constraint (though the default has to be assignable to the constraint).

Generator and async generator support

Previously, TypeScript didn’t support compiling generators or working with iterators when targeting ES5 and earlier. With TypeScript 2.3, it not only supports both, it also brings support for ECMAScript’s new async generators and async iterators.

This is an opt-in feature when using the --downlevelIteration flag. You can read more about this on our RC blog post.

This functionality means TypeScript more thoroughly supports the latest version of ECMAScript when targeting older versions. It also means that TypeScript can now work well with libraries like redux-saga.

Easier startup with better help, richer init, and quicker strictness

Another one of the common pain-points we heard from our users was around the difficulty of getting started in general, and of the discoverability of new options. We had found that people were often unaware that TypeScript could work on JavaScript files, or that it could catch nullability errors. We wanted to make TypeScript more approachable, easier to explore, and more helpful at getting you to the most ideal experience.

First, TypeScript’s --help output has been improved so that options are grouped by their topics, and more involved/less common options are skipped by default. To get a complete list of options, you can type in tsc --help --all.

Second, because users are often unaware of the sorts of options that TypeScript does make available, we’ve decided to take advantage of TypeScript’s --init output so that potential options are explicitly listed out in comments. As an example, tsconfig.json output will look something like the following:

{
  "compilerOptions": {
    /*  Basic  Options */
    "target": "es5",              /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', or 'ESNEXT'. */
    "module": "commonjs",         /* Specify module code generation: 'commonjs', 'amd', 'system', 'umd' or 'es2015'. */
    // "lib": [],                 /* Specify library files to be included in the compilation:  */
    // "allowJs": true,           /* Allow javascript files to be compiled. */
    // "checkJs": true,           /* Report errors in .js files. */
    // "jsx": "preserve",         /* Specify JSX code generation: 'preserve', 'react-native', or 'react'. */
    // "declaration": true,       /* Generates corresponding '.d.ts' file. */
    // "sourceMap": true,         /* Generates corresponding '.map' file. */
    // ...
  }
}

We believe that changing our --init output will make it easier to make changes to your tsconfig.json down the line, and will make it quicker to discover TypeScript’s capabilities.

Finally, we’ve added the --strict flag, which enables the following settings

  • --noImplicitAny
  • --strictNullChecks
  • --noImplicitThis
  • --alwaysStrict (which enforces JavaScript strict mode in all files)

This --strict flag represents a set of flags that the TypeScript team believes will lead to the most optimal developer experience in using the language. In fact, all new projects started with tsc --init will have --strict turned on by default.

Enjoy!

You can read the full What’s New in TypeScript page on our wiki, and read our Roadmap to get a deeper look at what we’ve done and what’s coming in the future!

We always appreciate constructive feedback to improve TypeScript however we can. In fact, TypeScript 2.3 was especially driven by feedback from both existing TypeScript users as well as people in the general JavaScript community. If we reached out to you at any point, we’d like to especially thank you for being helpful and willing to make TypeScript a better project for all JavaScript developers.

We hope you enjoy TypeScript 2.3, and we hope that it makes coding even easier for you. If it does, consider dropping us a line in the comments below, or spreading the love on Twitter.

Thanks for reading up on this new release, and happy hacking!


Where Europe lives, in 14 lines of R Code


Via Max Galka, always a great source of interesting data visualizations, we have this lovely visualization of population density in Europe in 2011, created by Henrik Lindberg:

(Image: population density map of Europe, 2011)

Impressively, the chart was created with just 14 lines of R code.

(To recreate it yourself, download the GEOSTAT-grid-POP-1K-2011-V2-0-1.zip file from Eurostat, and place the two .csv files it contains alongside your R script.) The code parses the latitude/longitude of population centers listed in the CSV file, arranges them into a 0.01 by 0.01 degree grid, and plots each row as a horizontal line with population as the vertical axis. Grid cells with zero population cause breaks in the line and leave white gaps in the map. It's quite an elegant effect!

Make pleasingly parallel R code with rxExecBy


Some things are easy to convert from a long-running sequential process to a system where each part runs at the same time, thus reducing the required time overall. We often call these "embarrassingly parallel" problems, but given how easy it is to reduce the time it takes to execute them by converting them into a parallel process, "pleasingly parallel" may well be a more appropriate name.

Using the foreach package (available on CRAN) is one simple way of speeding up pleasingly parallel problems using R. A foreach loop is much like a regular for loop in R, and by default will run each iteration in sequence (again, just like a for loop). But by registering a parallel "backend" for foreach, you can run many (or maybe even all) iterations at the same time, using multiple processors on the same machine, or even multiple machines in the cloud.

For many applications, though, you need to provide a different chunk of data to each iteration to process. (For example, you may need to fit a statistical model within each country — each iteration will then only need the subset for one country.) You could just pass the entire data set into each iteration and subset it there, but that's inefficient and may even be impractical when dealing with very large datasets sitting in a remote repository. A better idea would be to leave the data where it is, and run R within the data repository, in parallel.

Microsoft R 9.1 introduces a new function, rxExecBy, for exactly this purpose. When your data is sitting in SQL Server or Spark, you can specify a set of keys to partition the data by, and an R function (any R function, built-in or user-defined) to apply to the partitions. The data doesn't actually move: R runs directly on the data platform. You can also run it on local data in various formats.


The rxExecBy function is included in Microsoft R Client (available free) and Microsoft R Server. For some examples of using rxExecBy, take a look at the Microsoft R Blog post linked below.

Microsoft R Blog: Running Pleasingly Parallel workloads using rxExecBy on Spark, SQL, Local and Localpar compute contexts

Because it’s Friday: Powerpoint Punchcards


A "Turing Machine" — a conceptual data processing machine that processes instructions encoded on a linear tape — is capable of performing the complete range of computations of any modern computer (though not necessarily in a useful amount of time). Tom Wildenhain demonstrates the Turing-competeness of Powerpoint, where the "tape" is a series of punch-cards controlled by the animations feature:

Of the many things you can do with PowerPoint but probably shouldn't, this ranks right up there.

That's all from us for this week. See you back here on Monday, and have a great weekend!

Team Services Extensions Roundup – April


A six-month high of 30 new Visual Studio Team Services extensions was added to the Marketplace in April. It was really hard to pick only two from such a big set, so I encourage everyone to check them all out in the ‘Recently Added’ section of our Marketplace. There are two extensions I want to highlight this month. One is from a well-known Visual Studio IDE publisher; the other is the first step our ecosystem has organically taken to fill the AWS integration gap.


NDepend Extension for TFS 2017 and VSTS 

You may recognize this publisher from their successful Visual Studio extension, NDepend. NDepend has excelled in helping you estimate technical debt and manage .NET code quality in Visual Studio. Now they bring all of that, and more, to your continuous integration processes in Visual Studio Team Services. NDepend is a static analysis tool for .NET managed code, and comes with a large library of code metrics as well as a very rich dashboard and dependency graphs.

(Image: the Dashboard Hub added by the NDepend extension. It includes a rich set of information, and every item is drillable.)

What you can expect from the extension:

  1. A new build task that does the code analysis and code coverage data analysis.
  2. A rich Hub Dashboard that shows the latest diff-able data set for your code quality metrics, with each item allowing drill-down.
  3. Quality Gates are code quality checks that must pass before committing and releasing is allowed. NDepend comes with 12 default suggested Quality Gates.
  4. Over 150 default Rules that check your code against best practices. Write additional custom rules using Code Query over LINQ.
  5. Technical Debt and Issues offers a rich interactive drill-down view of your issues and the rules defining them. Group and sort your issues on a varying set of pivots.
  6. Trends charts are provided displaying your tracked Trends for each build. The extension comes with 70 default trend metrics, with the ability to add new ones.
  7. Code Metrics are displayed in a panel for each assembly, namespace, class or method.
  8. Build Summary Recaps are included in each build showing the analysis recap.
  9. Support is provided by a publisher that is fantastic to work with!

 


AWS S3 Upload

This extension comes as advertised. It adds a useful build task that lets you upload a file to an S3 bucket in AWS. This extension has quickly become a Trending item and has gotten great early reviews. There is a big hunger in our ecosystem for more Amazon integration, and this is a good step in the right direction. There are a few setup steps you’ll need to take care of, but it’s worth it.

(Image: the build task added by this extension. You need to designate the bucket name, the file to upload, and the S3 object.)

Requirements

  1. AWS Tools for Windows PowerShell installed on build machine and script execution enabled.
    All Windows Amazon Machine Images (AMIs) have the AWS Tools for Windows PowerShell pre-installed.
    https://aws.amazon.com/powershell/
  2. Profile containing keys on build machine (if role is not configured):
    Run aws configure and set Access Key and Secret Key
    http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-using-examples

Are you using (or building) an extension you think should be featured here?

I’ll be on the lookout for extensions to feature in the future, so if you’d like to see yours (or someone else’s) here, then let me know on Twitter!

@JoeB_in_NC

Setup continuous deployment to MS Azure Government using Visual Studio Team Services


Azure Government Clouds provide private and semi-isolated locations for specific Government or other services, separate from the normal Azure services. Highest levels of privacy have been adopted for these clouds, including restricted data access policies.

MS Azure Government (MAG) is a completely isolated environment and as such requires unique Azure endpoints to manage the services offered there. MAG supports authentication using management certificate, user credentials or service principal for requests to the service management APIs.

Visual Studio Team Services (VSTS) enables requests to MAG environments with a CD process using service endpoints (Azure classic service endpoint for requests using management certificate or credentials, Azure resource manager service endpoint for requests using service principal authentication).

VSTS is currently not available in MS Azure Government.

In this article, we’ll look at how you can configure continuous deployment for an Azure web site in MAG with a VSTS account outside MAG. We’ll authenticate using service principal authentication.

Note that this process involves orchestrating builds & deployments, and storing the build artifacts, outside MAG. In case you require stricter data restrictions for your application, you can configure a private agent on a VM in MAG. Refer to this for more details.

Get set up

Begin with a CI build

Before you begin, you’ll need a CI build that publishes your Web Deploy package. To set up CI for your specific type of app, see:

Create the Azure app service

An Azure App Service is where we’ll deploy the Web App. Create a new web app in your subscription from the MAG portal.

Generate a service principal

Download & run this PowerShell script in an Azure PowerShell window to generate the data required for a Service Principal-based Azure service connection. Running the script will prompt you for:

  • The name of your Azure subscription
  • A password that you would like to set for the Service Principal that is going to be created
  • The MAG environment name for your subscription, passed in the environmentName parameter

Once it completes successfully, the script will output the following details for the Azure service endpoint.

  • Connection Name
  • Subscription Id
  • Subscription Name
  • Service Principal Client Id
  • Service Principal key
  • Tenant Id

Configure a VSTS endpoint

    • From your Visual Studio account, navigate to your Team Project and click on the gear icon.
vsts_admin
    • Click the Services tab and click on ‘New Service Endpoint’ in the left pane.
vsts_endpoints
    • From the drop-down, select the ‘Azure Resource Manager’ option.

vsts_newarm_endpoint_auto

    • In the dialog, click the link at the end of the text “If your subscription is not listed or to specify an existing service principal, click here”, which will switch to manual entry mode.

azure-gov-vsts-arm-endpoint

  • Give the endpoint a friendly name, choose the MAG environment name, and enter the details output by the service principal script you ran earlier.

Setup release

  1. Open the Releases tab of the Build & Release hub, open the + drop-down in the list of release definitions, and choose Create release definition
  2. Select the Azure App Service Deployment template and choose Next.

createrd

  3. In Source…, make sure your CI build definition for the Web Deploy package is selected as the artifact source.

createrd2

  4. Select the Continuous deployment check box, and then choose Create.
  5. Select the Deploy Azure App Service task and configure it as follows:
Task step: Deploy: Azure App Service Deploy (Deploy the app to Azure App Services)
Parameters:
  • Azure Subscription: select the endpoint configured earlier
  • App Service Name: the name of the web app (the part of the URL without .azurewebsites.net)
  • Deploy to Slot: make sure this is cleared (the default)
  • Virtual Application: leave blank
  • Web Deploy Package: $(System.DefaultWorkingDirectory)/**/*.zip (the default)
  • Advanced > Take App Offline: if you run into locked .DLL problems when deploying, try selecting this check box.
  6. Edit the name of the release definition, choose Save, and choose OK. Note that the default environment is named Environment1, which you can edit by clicking directly on the name.

You’re now ready to create a release, which means to start the process of running the release definition with the artifacts produced by a specific build.

References

 

Optimization tips and tricks on Azure SQL Server for Machine Learning Services


Summary

SQL Server 2016 introduced a new feature called R Services. Microsoft recently announced a preview of the next version of SQL Server, which extends this advanced analytical capability to Python. Running R or Python in-database at scale keeps the analytics services close to the data, eliminates the burden of data movement, and simplifies the development and deployment of intelligent applications. To get the most out of SQL Server, however, knowing how to fine-tune the model itself is far from sufficient and may still fail to meet the performance requirements. There are quite a few optimization tips and tricks that can help boost performance significantly. In this post, we apply a few of them to a resume-matching scenario that mimics a large-volume prediction workflow, to show how these techniques can make data analytics more efficient and powerful. The three main optimization techniques introduced in this blog are as follows:

  • Full durable memory-optimized tables
  • CPU affinity and memory allocation
  • Resource governance and concurrent execution

This blog post is a short summary of how the above optimization tips and tricks work with R Services on Azure SQL Server. Those optimization techniques not only work for R Services, but for any Machine Learning Services integrated with SQL Server. Please refer to the full tutorial for sample code and step-by-step walkthroughs.

Description of the Sample Use Case

The sample use case for both this blog and its associated tutorial is resume matching. Finding the best candidate for a job position has long been an art that is labor intensive and requires a lot of manual effort from search agents. Finding candidates with certain technical or specialized qualities from the massive amount of information collected from diverse sources has become a new challenge. We developed a model to search for good matches among millions of resumes for a given position. Formulated as a binary classification problem, the machine learning model takes both the resume and the job description as inputs and produces the probability of being a good match for each resume-job pair. A user-defined probability threshold is then used to select the good matches.

A key challenge in this use case is that for each new job, we will need to match it with millions of resumes within a reasonable time frame. The feature engineering step, which produces thousands of features (2600 in this case), is a significant performance bottleneck during scoring. Hence, achieving a low matching (scoring) latency is the main objective in this use case.

Optimizations

There are many different types of optimization techniques, and we are going to discuss a few of them using the resume-matching scenario. In this blog, we explain at a high level why and how these optimization techniques work. For more detailed explanations and background knowledge, please refer to the included reference links. The results in the tutorial should be reproducible using a similar hardware configuration and the provided SQL scripts.

Memory-optimized table

For a modern machine, memory is no longer a constraint in terms of either size or speed; hardware advances have made large amounts of fast RAM affordable. In the meantime, data is being produced far more quickly than ever before, and some tasks need to process that data with low latency. Memory-optimized tables take advantage of this hardware to tackle the problem: they mainly reside in memory, so data is read from and written to memory [1]. For durability purposes, a second copy of the table is maintained on disk, and data is only read from disk during database recovery. This design delivers high scalability and low latency, especially when we need to read from and write to tables very frequently [2]. You can find a detailed introduction to memory-optimized tables in this blog [1], and this video [3] explains the performance benefits of using In-Memory OLTP.

In the resume-matching scenario, we need to read all the resume features from the database and match them against a new job opening. By storing the resume features in memory-optimized tables, they stay in main memory and disk IO can be significantly reduced. In addition, since we write all the predictions back to the database concurrently from different batches, memory-optimized tables provide an extra performance gain there as well. With this support in SQL Server, we achieved low latency when reading from and writing to tables, and a seamless development experience: the full durable memory-optimized tables were created along with the database, and the rest of the development is exactly the same as before, regardless of where the data is stored.
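To make this concrete, here is a minimal sketch of creating a full durable memory-optimized table. It assumes a SQL Server connection via pyodbc; the database name, filegroup path, and table/column names are hypothetical stand-ins for the tutorial's actual resume-features table (the tutorial's own scripts are plain T-SQL):

import pyodbc

# Hypothetical connection string and object names – adjust to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=ResumeDb;Trusted_Connection=yes;", autocommit=True)
cur = conn.cursor()

# A memory-optimized filegroup is required once per database.
cur.execute("""
IF NOT EXISTS (SELECT 1 FROM sys.filegroups WHERE type = 'FX')
BEGIN
    ALTER DATABASE ResumeDb ADD FILEGROUP ResumeDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE ResumeDb ADD FILE (NAME = 'ResumeDb_mod',
        FILENAME = 'C:\\Data\\ResumeDb_mod') TO FILEGROUP ResumeDb_mod;
END
""")

# DURABILITY = SCHEMA_AND_DATA makes the table fully durable: data lives in
# memory, with a second copy persisted to disk for recovery.
cur.execute("""
CREATE TABLE dbo.ResumeFeatures (
    ResumeId   INT            NOT NULL PRIMARY KEY NONCLUSTERED,
    FeatureVec VARBINARY(MAX) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
""")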

CPU affinity and memory allocation

With SQL Server 2014 SP2 and later versions, soft-NUMA is automatically enabled at the database-instance level when the SQL Server service starts [4, 5, 6]. If the database engine detects more than 8 physical cores per NUMA node or socket, it automatically creates soft-NUMA nodes that ideally contain 8 cores, although a node can go down to 5 or up to 9 logical cores. SQL Server writes a message to the log when it detects more than 8 physical cores in each socket.

SQL log of auto Soft-NUMA

Figure 1: SQL log of auto Soft-NUMA, 4 soft NUMA nodes were created

As shown in Figure 1, our test machine had 20 physical cores, and 4 soft-NUMA nodes were created automatically so that each node contained 5 cores. Soft-NUMA makes it possible to partition service threads per node, which generally increases scalability and performance by reducing IO and lazy writer bottlenecks. We then created 4 SQL resource pools and 4 external resource pools [7], setting the CPU affinity of each pair to the same set of CPUs in one node. By doing this, both SQL Server and the R processes avoid foreign memory access, since the processes stay within the same NUMA node, and memory access latency is reduced. Those resource pools are then assigned to different workload groups to make better use of the hardware.

Soft-NUMA and CPU affinity cannot divide the physical memory within a physical NUMA node: all the soft-NUMA nodes in the same physical NUMA node receive memory from the same OS memory block, and there is no memory-to-processor affinity. However, we should pay attention to the memory allocation between SQL Server and the R processes. By default, only 20% of memory is allocated to R Services, which is not enough for most data analytics tasks; please see How To: Create a Resource Pool for R [7] for more information. We need to fine-tune the memory allocation between the two, and the best configuration varies case by case. In the resume-matching use case, increasing the external memory resource allocation to 70% turned out to be the best configuration.
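As a rough sketch of how such a CPU/memory pairing can be expressed (the pool and workload group names are made up, the 0 TO 4 range corresponds to one 5-core soft-NUMA node as in our setup, and the 70% external memory allocation mirrors the figure described above):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;", autocommit=True)

# Pair a SQL resource pool and an external (R) resource pool on the same CPUs,
# give the external runtime 70% of memory, and route work through a workload
# group. Repeat per soft-NUMA node with the matching CPU/scheduler range.
conn.cursor().execute("""
CREATE RESOURCE POOL sql_pool_0
    WITH (AFFINITY SCHEDULER = (0 TO 4), MAX_MEMORY_PERCENT = 30);

CREATE EXTERNAL RESOURCE POOL r_pool_0
    WITH (AFFINITY CPU = (0 TO 4), MAX_MEMORY_PERCENT = 70);

CREATE WORKLOAD GROUP batch_group_0
    USING sql_pool_0, EXTERNAL r_pool_0;

ALTER RESOURCE GOVERNOR RECONFIGURE;
""")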

Resource governance and concurrent scoring

To scale up the scoring problem, a good practice is to adopt a map-reduce approach in which we split millions of resumes into multiple batches and then execute the scoring of those batches concurrently. The parallel processing framework is illustrated in Figure 2.

Parallel processing

Figure 2: Illustration of parallel processing in multiple batches

Those batches are processed on different CPU sets, and the results are collected and written back to the database. Resource governance in SQL Server is designed to implement this idea: we can set up resource governance for R Services on SQL Server [8] by routing the scoring batches into different workload groups (Figure 3). More information about Resource Governor can be found in this blog [9].

Resource governor

Figure 3: Resource governor (from: https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor)

Resource Governor helps divide the available resources (CPU and memory) on a SQL Server instance to minimize workload competition, using a classifier function [10, 11]. It provides multitenancy and resource isolation on SQL Server for different tasks, which can improve execution and deliver more predictable performance.
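A conceptual sketch of the batching side is shown below. The stored procedure name and the use of the connection's application name for classification are illustrative assumptions – the tutorial drives its batches from SQL scripts – but the shape is the same: eight batches, each scored on its own connection so that a classifier function can route it to a dedicated workload group.

from concurrent.futures import ThreadPoolExecutor
import pyodbc

N_BATCHES = 8
TOTAL_RESUMES = 1_100_000

def score_batch(batch_id):
    size = TOTAL_RESUMES // N_BATCHES + 1
    first_id = batch_id * size
    last_id = min(first_id + size, TOTAL_RESUMES) - 1
    # APP=batchN sets the session's application name, which a Resource
    # Governor classifier function (e.g. one inspecting APP_NAME()) can use
    # to route this session to workload group N.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=ResumeDb;Trusted_Connection=yes;APP=batch%d;" % batch_id,
        autocommit=True)
    # dbo.ScoreResumeBatch is a hypothetical scoring procedure.
    conn.cursor().execute(
        "EXEC dbo.ScoreResumeBatch @FirstId = ?, @LastId = ?", first_id, last_id)
    return batch_id

with ThreadPoolExecutor(max_workers=N_BATCHES) as pool:
    for done in pool.map(score_batch, range(N_BATCHES)):
        print("batch %d scored" % done)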

Other Tricks

One pain point with R is that feature engineering is usually processed on a single CPU, which is a major performance bottleneck for most data analysis tasks. In our resume-matching use case, we need to produce 2,500 cross-product features that are then combined with the original 100 features (Figure 4). This whole process would take a significant amount of time if everything were done on a single CPU.

Feature engineering

Figure 4: Feature engineering of our resume-matching use case

One trick here is to write an R function for the feature engineering and pass it as the rxTransform function during training. The machine learning algorithm is implemented with parallel processing, so the feature engineering is also processed on multiple CPUs as part of training. Compared with the regular approach, in which feature engineering is conducted before training and scoring, we observed a 16% performance improvement in scoring time.
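To illustrate what the cross-product step computes, here is a plain NumPy sketch. The 50/50 split of the original 100 features into resume and job features is an assumption for illustration; in the tutorial this logic lives in the R transform function passed to the training call, so it runs inside the parallel training and scoring processes rather than as a separate single-CPU pass.

import numpy as np

def add_cross_features(resume_feats, job_feats):
    # resume_feats: (n, 50), job_feats: (n, 50) -> (n, 2600) feature matrix:
    # 100 original features plus 50 x 50 = 2,500 pairwise products.
    cross = np.einsum("ni,nj->nij", resume_feats, job_feats)
    cross = cross.reshape(len(resume_feats), -1)
    return np.hstack([resume_feats, job_feats, cross])

# Tiny usage example with random data.
rng = np.random.default_rng(0)
X = add_cross_features(rng.random((4, 50)), rng.random((4, 50)))
print(X.shape)  # (4, 2600)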

Another trick that can potentially improve performance is to use the SQL compute context within R [12]. Since we have isolated resources for different batch executions, we need to isolate the SQL query for each batch as well. By using the SQL compute context, we can parallelize the SQL queries that extract data from the tables and constrain each batch's data to its own workload group.

Results and Conclusion

To fully illustrate those tips and tricks, we have published a very detailed step-by-step tutorial, along with a few benchmark tests for scoring 1.1 million rows of data. We used the RevoScaleR and MicrosoftML packages to train prediction models separately, and then compared the scoring time with and without the optimizations. Figures 5 and 6 summarize the best performance results using the RevoScaleR and MicrosoftML packages. The tests were conducted on the same Azure SQL Server VM using the same SQL queries and R code, with eight batches per matching job in all tests.

RevoScaleR results

Figure 5: RevoScaleR scoring results

MicrosoftML results

Figure 6: MicrosoftML scoring results

The results suggested that the number of features had a significant impact on the scoring time. Also, using those optimization tips and tricks could significantly improve the performance in terms of scoring time. The improvement was even more prominent if more features were used in the prediction model.

Acknowledgement

Lastly, we would like to express our thanks to Umachandar Jayachandran, Amit Banerjee, Ramkumar Chandrasekaran, Wee Hyong Tok, Xinwei Xue, James Ren, Lixin Gong, Ivan Popivanov, Costin Eseanu, Mario Bourgoin, Katherine Lin and Yiyu Chen for the great discussions, proofreading and test-driving the tutorial accompanying this blog post.

References


[1] Introduction to Memory-Optimized Tables

[2] Demonstration: Performance Improvement of In-Memory OLTP

[3] 17-minute video explaining In-Memory OLTP and demonstrating performance benefits

[4] Understanding Non-uniform Memory Access

[5] How SQL Server Supports NUMA

[6] Soft-NUMA (SQL Server)

[7] How To: Create a Resource Pool for R

[8] Resource Governance for R Services

[9] Resource Governor

[10] Introducing Resource Governor

[11] SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

[12] Define and Use Compute Contexts

Enabling Azure CDN from Azure web app and storage account portal extension


Enabling CDN for your Azure workflow becomes easier than ever with this new integration. You can now enable and manage CDN for your Azure web app service or Azure storage account without leaving the portal experience.

When you have a website, a storage account for downloads, or a streaming endpoint for your media event, you may want to add CDN to your solution for scalability and better performance. To make CDN enablement easy for these Azure workflows, the Azure portal CDN extension lets you choose an "origin type" when creating a CDN endpoint, listing all the available Azure web apps, storage accounts, and cloud services within your subscription. To enhance the integration, we started with CDN integration for Azure Media Services: from the Azure Media Services portal extension, you can enable CDN for your streaming endpoint with one click. Now we have extended this integration to Web Apps and storage accounts.

Go to the Azure portal web app service or storage account extension, select your resource, then search for "CDN" from the menu and enable CDN! Very little information is required for CDN enablement. After enabling CDN, click the endpoint to manage configuration directly from this extension.

From Azure storage account portal extension:

From Azure web app service portal extension:

Directly manage CDN from Azure web app or storage portal extension:

More information

Is there a feature you'd like to see in Azure CDN? Give us feedback!


HDInsight tools for IntelliJ & Eclipse April updates


We are pleased to announce the April updates of HDInsight Tools for IntelliJ & Eclipse. This is a quality milestone in which we focused primarily on refactoring components and fixing bugs. We also added Azure Data Lake Store support and Eclipse local emulator support in this release. The HDInsight Tools for IntelliJ & Eclipse serve the open source community and are of interest to HDInsight Spark developers. The tools run smoothly on Linux, Mac, and Windows.

Summary of key updates

Azure Data Lake Store support

  • The HDInsight Visual Studio, Eclipse, and IntelliJ plugins now support Azure Data Lake Store (ADLS). Users can view ADLS entities in the service explorer, add an ADLS namespace/path when authoring, and submit Hive/Spark jobs that read from and write to ADLS on an HDInsight cluster.

  • To use Azure Data Lake Store, users first need to create an Azure HDInsight cluster with Data Lake Store as storage. Follow the instructions in Create an HDInsight cluster with Data Lake Store using Azure Portal.

  • As shown below, ADLS entities can be viewed in the service explorer.

1

  • By clicking “Explorer” above, users can explore data stored in ADLS, as shown below:

2

  • Users can read/write ADLS data in their Hive/Spark jobs, as shown below (see the PySpark sketch after the screenshot for the two URI forms).
    • If Data Lake Store is the primary storage for the cluster, use adl:///. This is the root of the cluster storage in Azure Data Lake and may translate to a path of /clusters/CLUSTERNAME in the Data Lake Store account.
    • If Data Lake Store is additional storage for the cluster, use adl://DATALAKEACCOUNT.azuredatalakestore.net/. The URI specifies the Data Lake Store account the data is written to, and data is written starting at the root of the Data Lake Store.

 3
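Here is a small PySpark sketch of the two URI forms described above; the account name and file paths are placeholders, and the same paths work from Hive or Scala Spark jobs submitted through the plugins.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-demo").getOrCreate()

# Data Lake Store as the cluster's primary storage: adl:/// is the storage root.
primary_df = spark.read.text("adl:///example/data/sample.log")

# Data Lake Store as additional storage: name the account explicitly.
extra_df = spark.read.text(
    "adl://DATALAKEACCOUNT.azuredatalakestore.net/example/data/sample.log")

# Write results back to ADLS.
primary_df.union(extra_df).write.mode("overwrite").text("adl:///example/output/")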

Local emulator for Eclipse plugin

  • The local emulator was previously supported in the IntelliJ plugin.

  • Now the local emulator is also supported in the Eclipse plugin, with similar functionality and user experience to the local emulator in IntelliJ.

  • Get more details about local emulator support.

Quality improvement

The major improvements are code refactoring and telemetry enhancements. More than forty bugs around job authoring, submission, and job view were fixed to improve the quality of the tools in this release.

Installation

If you already have HDInsight Tools for Visual Studio/Eclipse/IntelliJ installed, the new bits can be updated directly in the IDE. Otherwise, please refer to the pages below to download the latest bits or share the information with your customers:

Upcoming releases

The following features are planned for upcoming releases:

  • Debuggability: Remote debugging support for Spark application
  • Monitoring: Improve Spark application view, job view and job graph
  • Usability: Improve installation experience; Integrate into IntelliJ run menu
  • Enable Mooncake support

Feedback

We look forward to your comments and feedback. If there is any feature request, customer ask, or suggestion, please do email us at hdivstool@microsoft.com. For bug submission, please submit using the template provided.

Data driven troubleshooting of Azure Stream Analytics jobs


Today, we are announcing the public preview of metrics in the Job Diagram for data driven monitoring and troubleshooting of Azure Stream Analytics jobs. This new functionality enables the quick and easy isolation of issues to specific components and query steps using metrics on the state of the inputs, outputs, and each step of the processing logic.

If, for example, an Azure Stream Analytics job is not producing the expected output, metrics in the Job Diagram can be used to isolate the issue to the query steps that are receiving inputs but not producing any output. Additionally, when one or more inputs to the Stream Analytics job stop producing events, the new capabilities can help identify the inputs and outputs that have pending diagnosis messages associated with them.

To access this capability, click the “Job diagram” button in the “Settings” blade of the Stream Analytics job. For existing jobs, it is necessary to restart the job first.

Job Diagrams Setting

Every input and output is color coded to indicate the current state of that component.

Color coded input output

When you want to look at intermediate query steps to understand the data flow patterns inside Stream Analytics, the visualization tool provides a view of the breakdown of the query into its component steps and the flow sequence. Each logical node shows the number of partitions it has.

Visualization tool

Clicking on each query step will show the corresponding section in a query editing pane as illustrated. A metrics chart for the step is also displayed in a lower pane.

Metrics chart

Clicking the … button pops up a context menu that allows expanding partitions, showing the partitions of the Event Hub input in addition to the input merger.

Expand partitions

 

Clicking a single partition node will show the metrics chart only for that partition on the bottom.

Single partition node

 

Selecting the merger node will show the metrics chart for the merger. The chart below shows that no events got dropped or adjusted.

No events got dropped or adjusted

 

Hovering the mouse pointer over the chart shows details of the metric value and time.

Metric value and time

 

We are very excited to hear your feedback. Please give it a try and let us know what you think.

Using Microsoft R with Alteryx


Alteryx Designer, the self-service analytics workflow tool, recently added integration with Microsoft R. This allows you to train models provided by Microsoft R, and create predictions from them, without needing to write R code — you simply drag-and-drop to create a workflow.

Alteryx-detail

In a recent post at the Microsoft R blog, Bharath Sankaranarayan walks through the process of building a model to predict response rates in a marketing campaign. In the post, you'll see how to create a workflow that connects to data loaded in SQL Server, and trains a logistic regression model on the data in-database. For the complete walk through, check out the blog post linked below.

3.67 Million Square Kilometers of new imagery for Turkey, Greece and Argentina


We are excited to announce the release of new imagery covering a total of 3.67 million square kilometers in Turkey, Greece, and Argentina in partnership with DigitalGlobe.

Below are a few examples of the imagery now available:

Antalya, Turkey

Located on the Mediterranean coast of southwest Turkey, Antalya is a popular tourist destination, attracting millions of tourists each year. Visitors enjoy the natural beauty and the ancient cities of Antalya, including Xanthos, recognized as a UNESCO World Heritage Site in 1988.

Antalya, Turkey

Satellite image ©2017 DigitalGlobe
 

Samos, Greece

The island of Samos is located in the eastern Aegean Sea. Heralded as the birthplace of several Greek philosophers, including Pythagoras of Pythagorean Theorem fame, it is also known for its vineyards and wine, from ancient times to the present day.

Samos, Greece

Satellite image ©2017 DigitalGlobe
 

Cordoba, Argentina

Cordoba is the second largest city in Argentina and home to the National University of Cordoba, the oldest university in the country. As seen in the aerial image below, it is also home to a piece of art that can only be viewed from the sky – a forest in the shape of a guitar. The guitar formed out of trees was created by a resident in memory of his late wife.

Cordoba, Argentina

Satellite image ©2017 DigitalGlobe
 

Explore many more interesting places around the world with Bing Maps.

- Bing Maps Team

