
Calling All Desktop Developers: How do you interact with data?


Connecting to databases and services is an important part of desktop application development for many of our customers. Visual Studio provides a variety of tools and technologies that can help you connect to and interact with your different data sources. We’d love your help in shaping our future offerings in this space!

Please fill out the survey below so we can learn more about what you currently use, your biggest challenges, and your future plans. It should only take 5 minutes to complete!

Take the survey now!

We appreciate your contribution!


ASP.NET Core 2.1.0-preview1 now available


Today we’re very happy to announce that the first preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. We’ve been working hard on this release over the past months, along with many folks from the community, and it’s now ready for a wider audience to try it out and provide the feedback that will continue to shape the release.

You can read about .NET Core 2.1.0-preview1 over on their blog.

You can also read about Entity Framework Core 2.1.0-preview1 on their blog.

How do I get it?

You can download the new .NET Core SDK for 2.1.0-preview1 (which includes ASP.NET Core 2.1.0-preview1) from https://www.microsoft.com/net/download/dotnet-core/sdk-2.1.300-preview1

Visual Studio 2017 version requirements

Customers using Visual Studio 2017 should also install and use the Preview channel (15.6 Preview 6 at the time of writing), in addition to the SDK above, when working with .NET Core and ASP.NET Core 2.1 projects. .NET Core 2.1 projects require Visual Studio 2017 15.6 or greater.

Impact to machines

Please note that given this is a preview release there are likely to be known issues and as-yet-to-be-discovered bugs. While .NET Core SDK and runtime installs are side-by-side on your machine, your default SDK will become the latest version, which in this case will be the preview. If you run into issues working on existing projects using earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file as documented here. Please log an issue if you run into such cases as SDK releases are intended to be backwards compatible.
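
For example, a global.json at the root of a solution pins every project under it to an earlier SDK (the version shown here is illustrative; use one you have installed):

{
  "sdk": {
    "version": "2.0.3"
  }
}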

Already published applications running on earlier versions of .NET Core and ASP.NET Core shouldn’t be impacted by installing the preview. That said, we don’t recommend installing previews on machines running critical workloads.

New features

You can see a summary of the new features in 2.1 in the roadmap post we published previously.

Furthermore, we’re publishing a series of posts here that go over the new feature areas in detail. We’ll update this post with links to these posts as they go live over the coming days:

  • Using ASP.NET Core previews in Azure App Service
  • Introducing HttpClientFactory
  • Improvements for using HTTPS
  • Improvements for building Web APIs
  • Introducing compatibility version in MVC
  • Getting started with SignalR
  • Introducing global tools
  • Using Razor UI in class libraries
  • Improvements for GDPR
  • Improvements to the Kestrel HTTP server
  • Improvements to IIS hosting
  • Functional testing of MVC applications
  • Introducing Identity UI as a library
  • Hosting non-server apps with GenericHostBuilder

Announcements and release notes

You can see all the announcements published pertaining to this release at https://github.com/aspnet/Announcements/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.1.0

Release notes will be available shortly at https://github.com/aspnet/Home/releases/tag/2.1.0-preview1

Giving feedback

The main purpose of providing previews like this is to solicit feedback from customers such that we can refine and improve the changes in time for the final release. We intend to release a second preview within the next couple of months, followed by a single RC release (with “go-live” license and support) before the final RTW release.

Please provide feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. The posts on specific topics above will provide direct links to the most appropriate place to log issues for the features detailed.

Migrating an ASP.NET Core 2.0.x project to 2.1.0-preview1

Follow these steps to migrate an existing ASP.NET Core 2.0.x project to 2.1.0-preview1:

  1. Open the project’s CSPROJ file and change the value of the <TargetFramework> element to netcoreapp2.1
    • Projects targeting .NET Framework rather than .NET Core, e.g. net471, don’t need to do this
  2. In the same file, update the versions of the various <PackageReference> elements for any Microsoft.AspNetCore, Microsoft.Extensions, and Microsoft.EntityFrameworkCore packages to 2.1.0-preview1-final
  3. In the same file, update the versions of the various <DotNetCliToolReference> elements for any Microsoft.VisualStudio and Microsoft.EntityFrameworkCore packages to 2.1.0-preview1-final
  4. In the same file, remove the <DotNetCliToolReference> elements for any Microsoft.AspNetCore packages. These have been replaced by global tools.

That should be enough to get the project building and running against 2.1.0-preview1. The following steps will change your project to use the new code-based idioms that are recommended in 2.1:

  1. Open the Program.cs file
  2. Rename the BuildWebHost method to CreateWebHostBuilder, change its return type to IWebHostBuilder, and remove the call to .Build() in its body
  3. Update the call in Main to call the renamed CreateWebHostBuilder method like so: CreateWebHostBuilder(args).Build().Run(); (a full Program.cs sketch follows this list)
  4. Open the Startup.cs file
  5. In the ConfigureServices method, change the call to add MVC services to set the compatibility version to 2.1 like so: services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
  6. In the Configure method, add a call to add the HSTS middleware after the exception handler middleware: app.UseHsts();
  7. Staying in the Configure method, add a call to add the HTTPS redirection middleware before the static files middleware: app.UseHttpsRedirection();
  8. Open the project property pages (right-click on the project in Visual Studio Solution Explorer and select “Properties”)
  9. Open the “Debug” tab and in the IIS Express profile, check the “Enable SSL” checkbox and save the changes
  10. Open the Properties/launchSettings.json file
  11. In the "iisSettings"/"iisExpress" section, note the new property added to define HTTPS port for IIS Express to use, e.g. "sslPort": 44374
  12. In the "profiles/IIS Express/environmentVariables" section, add a new property to flow the configured HTTPS port through to the application like so: "ASPNETCORE_HTTPS_PORT": "44374"
    • This configuration value will be read by the HTTPS redirect middleware you added above to ensure non-HTTPS requests are redirected to the correct port. Make sure it matches the value configured for IIS Express.
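
Putting the Program.cs changes from steps 1–3 together, the result looks like this (a minimal sketch, assuming a typical template-generated project):

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}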

Note that some projects might require more steps depending on the options selected when the project was created, or packages added since. You might like to try creating a new project targeting 2.1.0-preview1 (in Visual Studio or using dotnet new at the cmd line) with the same options to see what other things have changed.

ASP.NET Core 2.1.0-preview1: Improvements for using HTTPS


Securing web apps with HTTPS is more important than ever before. Browser enforcement of HTTPS is becoming increasingly strict. Sites that don’t use HTTPS are increasingly labeled as insecure. Browsers are also starting to enforce that new and existing web features must only be used from a secure context (Chromium, Mozilla). New privacy requirements like the General Data Protection Regulation (GDPR) require the use of HTTPS to protect user data. Using HTTPS during development also helps catch HTTPS-related issues before deployment, like insecure links.

ASP.NET Core 2.1 makes it easy to both develop your app with HTTPS enabled and to configure HTTPS once your app is deployed. The ASP.NET Core 2.1 project templates have been updated to enable HTTPS by default. To enable HTTPS in production simply configure the correct server certificate. ASP.NET Core 2.1 also adds support for HTTP Strict Transport Security (HSTS) to enforce HTTPS usage in production and adds improved support for redirecting HTTP traffic to HTTPS endpoints.

HTTPS in development

To get started with ASP.NET Core 2.1.0-preview1 and HTTPS, install the .NET Core SDK for 2.1.0-preview1. The SDK will create an HTTPS development certificate for you as part of the first-run experience. For example, when you run dotnet new razor for the first time you should see the following console output:

ASP.NET Core
------------
Successfully installed the ASP.NET Core HTTPS Development Certificate.
To trust the certificate (Windows and macOS only) first install the dev-certs tool by running 'dotnet install tool dotnet-dev-certs -g --version 2.1.0-preview1-final' and then run 'dotnet-dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.

The ASP.NET Core HTTPS Development Certificate has now been installed into the local user certificate store, but it still needs to be trusted. To trust the certificate you need to perform a one-time step to install and run the new dotnet dev-certs tool as instructed:

C:\WebApplication1>dotnet install tool dotnet-dev-certs -g --version 2.1.0-preview1-final

The installation succeeded. If there are no further instructions, you can type the following command in shell directly to invoke: dotnet-dev-certs

C:\WebApplication1>dotnet dev-certs https --trust
Trusting the HTTPS development certificate was requested. A confirmation prompt will be displayed if the certificate was not previously trusted. Click yes on the prompt to trust the certificate.
A valid HTTPS certificate is already present.

To run the dev-certs tool both dotnet-dev-certs and dotnet dev-certs (without the extra hyphen) will work. Note: If you get an error that the tool was not found you may need to open a new command prompt if the current command prompt was open when the SDK was installed.

Trust certificate dialog

Click Yes to trust the certificate.

On macOS the certificate will get added to your keychain as a trusted certificate.

On Linux there isn't a standard way across distros to trust the certificate, so you'll need to follow the distro-specific guidance for trusting the development certificate.

Run the app by running dotnet run. The ASP.NET Core 2.1 runtime will detect that the development certificate is installed and use the certificate to listen on both http://localhost:5000 and https://localhost:5001:

C:\WebApplication1>dotnet run
Using launch settings from C:\WebApplication1\Properties\launchSettings.json...
Hosting environment: Development
Content root path: C:\WebApplication1
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Close any open browsers and then in a new browser window browse to https://localhost:5001 to access the app via HTTPS.

Razor Pages with HTTPS

If you didn't trust the ASP.NET Core development certificate then the browser will display a security warning:

Untrusted certificate warning

You can still click on "Details" to ignore the warning and browse to the site, but you're better off running dotnet dev-certs https --trust to trust the certificate. Just run the tool once and you should be all set.

HTTPS redirection

If you browse to the app via http://localhost:5000 you get redirected to the HTTPS endpoint:

HTTPS redirect

This is thanks to the new HTTPS redirection middleware that redirects all HTTP traffic to HTTPS. The middleware detects available HTTPS server addresses at runtime and redirects accordingly. If no HTTPS address can be determined, it redirects to port 443 by default.

The HTTPS redirection middleware is added in the app's Configure method:

app.UseHttpsRedirection();

You can configure the HTTPS port explicitly in your ConfigureServices method:

services.AddHttpsRedirection(options => options.HttpsPort = 5002);

Alternatively you can specify the HTTPS port to redirect to using configuration or the ASPNETCORE_HTTPS_PORT environment variable. This is useful for when HTTPS is being handled externally from the app, like when the app is hosted behind IIS. For example, the project template adds the ASPNETCORE_HTTPS_PORT environment variable to the IIS Express launch profile so that it matches the HTTPS port setup for IIS Express:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:51667",
      "sslPort": 44370
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_HTTPS_PORT": "44370"
      }
    }
  }
}

HTTP Strict Transport Security (HSTS)

HSTS is a protocol that instructs browsers to access the site via HTTPS. The protocol has allowances for specifying how long the policy should be enforced (max age) and whether the policy applies to subdomains or not. You can also enable support for your domain to be added to the HSTS preload list.
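
For example, a response header enforcing the policy for one year, covering subdomains, and allowing preload registration looks like this (the values are illustrative):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload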

The ASP.NET Core 2.1 project templates enable support for HSTS by adding the new HSTS middleware in the app's Configure method:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

Note that HSTS is only enabled when running in a non-development environment. This is to prevent setting an HSTS policy for localhost when in development.

You can configure your HSTS policy (max age, include subdomains, exclude specific domains, support preload) in your ConfigureServices method:

services.AddHsts(options =>
{
    options.MaxAge = TimeSpan.FromDays(100);
    options.IncludeSubDomains = true;
    options.Preload = true;
});

Configuring HTTPS in production

The ASP.NET Core HTTPS development certificate is only for development purposes. In production you need to configure your app for HTTPS including the production certificate that you want to use. Often this is handled externally from the app using a reverse proxy like IIS or NGINX. ASP.NET Core 2.1 adds support to Kestrel for configuring endpoints and HTTPS certificates.

You can still configure server URLs (including HTTPS URLs) using the ASPNETCORE_URLS environment variable. To configure the HTTPS certificate for any HTTPS server URLs, you configure a default HTTPS certificate.

The default HTTPS certificate can be loaded from a certificate store:

{
  "Certificates": {
    "Default": {
      "Subject": "mysite",
      "Store": "User",
      "Location": "Local",
      "AllowInvalid": "false" // Set to "true" to allow invalid certificates (e.g. self-signed)
    }
  }
}

Or from a password protected PFX file:

{
  "Certificates": {
    "Default": {
      "Path": "cert.pfx",
      "Password": "<password>"
    }
  }
}

You can also configure named endpoints for Kestrel that include both the URL for the endpoint and the HTTPS certificate:

{
  "Kestrel": {
    "EndPoints": {
      "Http": {
        "Url": "http://localhost:5005"
      },

      "HttpsInlineCertFile": {
        "Url": "https://localhost:5006",
        "Certificate": {
          "Path": "cert.pfx",
          "Password": "<cert password>"
        }
      },

      "HttpsInlineCertStore": {
        "Url": "https://localhost:5007",
        "Certificate": {
          "Subject": "mysite",
          "Store": "My",
          "Location": "CurrentUser",
          "AllowInvalid": "false" // Set to true to allow invalid certificates (e.g. self-signed)
        }
      }
    }
  }
}

Summary

We hope these new features will make it much easier to use HTTPS during development and in production. Please give the new HTTPS support a try and let us know what you think!

Announcing .NET Core 2.1 Preview 1


Today, we are announcing .NET Core 2.1 Preview 1. It is the first public release of .NET Core 2.1. We have great improvements that we want to share and that we would love to get your feedback on.

ASP.NET Core 2.1 Preview 1 and Entity Framework Core 2.1 Preview 1 are also releasing today.

You can download and get started with .NET Core 2.1 Preview 1 on Windows, macOS, and Linux.

You can develop .NET Core 2.1 apps with Visual Studio 2017 15.6 Preview 6 or later, or Visual Studio Code. We expect that Visual Studio for Mac support will be added with Visual Studio 2017 15.7.

ASP.NET Core 2.1 Previews will not be auto-deployed to Azure App Service. Instead, you can opt in to using previews with just a little bit of configuration.

Visual Studio Team Services support for .NET Core 2.1 will come at RTM.

You can see complete details of the release in the .NET Core 2.1 Preview 1 release notes. Known issues and workarounds are included in the release notes.

To everyone that helped with the release, thank you very much. We couldn’t have gotten to this spot without you and we’ll continue to need your help as we work together towards .NET Core 2.1 RTM.

Let’s look at the many improvements that are part of the .NET Core 2.1 Preview 1 release. As big a release as .NET Core 2.0 was, you should find multiple improvements that will make you want to upgrade.

Themes

The .NET Core 2.1 release, across .NET Core, ASP.NET Core and EF Core is intended to improve the product across the following themes:

  • Faster Build Performance
  • Close gaps in ASP.NET Core and EF Core
  • Improve compatibility with .NET Framework
  • GDPR and Security
  • Microservices and Azure
  • More capable Engineering System

Some improvements for these themes are not yet in Preview 1. These themes are guiding our investments across the .NET Core 2.1 release.

Global Tools

.NET Core now has a new deployment and extensibility mechanism for tools. This new experience is very similar to and was inspired by NPM global tools.

You can try out .NET Core Global Tools for yourself (after you’ve installed .NET Core 2.1 Preview 1) with a sample tool called dotnetsay that we published, using the following commands:

dotnet install tool -g dotnetsay
dotnetsay

Once you install dotnetsay, you can then use it directly by just typing dotnetsay in your command prompt or terminal. You can close terminal sessions, switch drives in the terminal, or reboot your machine and the command will still be there. dotnetsay is now in your path. Check %USERPROFILE%\.dotnet\tools or ~/.dotnet/tools (as appropriate) to see the tools installation location on your machine.

You can create your own global tools by looking at the dotnetsay tools sample.

.NET Core tools are .NET Core console apps that are packaged and acquired as NuGet packages. By default, these tools are framework-dependent applications and include all of their NuGet dependencies. This means that a given global tool will run on any operating system or chip architecture by default. You might need an existing tool on a new version of Linux. As long as .NET Core works there, you should be able to run the tool.

At present, .NET Core Tools only support global install and require the -g argument during installation. We’re working on various forms of local install, too, but those are not ready in Preview 1.

We expect a whole new ecosystem of tools to establish itself for .NET. Some of these tools will be specific to .NET Core development and many of them will be general in nature. The tools are deployed to NuGet feeds. By default, the dotnet tool install command looks for tools on NuGet.org.

If you are curious about dotnetsay, it is modeled after docker/whalesay, which is modeled after cowsay. dotnetsay is really just dotnet-bot, who is one of our busiest developers!

Build Performance Improvements

Build-time performance is much improved in .NET Core 2.1, particularly for incremental builds. These improvements apply to both dotnet build on the command line and to builds in Visual Studio. We’ve made improvements in the CLI tools and in MSBuild in order to deliver a much faster experience.

The following chart provides concrete numbers on the improvements that you can expect from the new release. You can see two different workloads with numbers for .NET Core 2.0, .NET Core 2.1 Preview 1 and where we aspire to land for .NET Core RTW.

You can try building the same code yourself at mikeharder/dotnet-cli-perf. We’d be curious if you get similar results. Note that the improvements are for incremental builds, so you’ll only see benefits after the 2nd build.

Minor-Version Roll-forward

You will now be able to run .NET Core applications on later runtime versions, within the same major version range. For example, you will be able to run .NET Core 2.0 applications on .NET Core 2.1 or .NET Core 2.1 applications on .NET Core 2.5 (if we ever ship such a version). The roll-forward behavior is for minor versions only. For example, a .NET Core 2.x application will never automatically roll forward to .NET Core 3.0 or later.

If the expected .NET Core version is available, it is used. The roll-forward behavior is only relevant if the expected .NET Core version is not available in a given environment.

You can disable minor-version roll-forward with:

  • Environment variable: DOTNET_ROLL_FORWARD_ON_NO_CANDIDATE_FX=0
  • runtimeconfig.json: rollForwardOnNoCandidateFx=0
  • CLI: --roll-forward-on-no-candidate-fx=0
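
For example, the runtimeconfig.json setting looks like this (a minimal sketch showing just the relevant property):

{
  "runtimeOptions": {
    "rollForwardOnNoCandidateFx": 0
  }
}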

Sockets Performance and HTTP Managed Handler

We made major improvements to sockets in .NET Core 2.1. Sockets are the basis of both outgoing and incoming networking communication. In .NET Core 2.0, the ASP.NET Kestrel Web Server and HttpClient use native sockets, not the .NET Socket class. We are in the process of changing that, instead basing our higher-level networking APIs on .NET sockets.

We made three significant performance improvements for sockets (and other smaller ones) in Preview 1:

  • Support for Span<T>/Memory<T> in Socket/NetworkStream
  • Improved performance on both Windows and Linux, due to a variety of fixes (e.g. reusing PreAllocatedOverlapped, caching Operation objects on Linux, improved dispatch of Linux epoll notifications, etc.)
  • Improved perf for SslStream, as well as supporting ALPN (required for HTTP2) and streamlining SslStream settings.

For HttpClient, we built a new from-the-ground-up managed HttpClientHandler called SocketsHttpHandler. As you can likely guess, it’s a C# implementation of HttpClient based on .NET sockets and Span<T>.

The biggest win of SocketsHttpHandler is performance. It is a lot faster than the existing implementation. There are other benefits:

  • Elimination of platform dependencies on libcurl (for Linux) and WinHTTP (for Windows) – simplifies development, deployment, and servicing
  • Consistent behavior across platforms and platform/dependency versions.

You can opt in to using SocketsHttpHandler in one of the following ways with Preview 1:

  • Environment variable: COMPlus_UseManagedHttpClientHandler=true
  • AppContext: System.Net.Http.UseManagedHttpClientHandler=true
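
For example, setting the AppContext switch from code (a sketch; the switch must be set before the first HttpClient is created):

AppContext.SetSwitch("System.Net.Http.UseManagedHttpClientHandler", true);

var client = new HttpClient(); // now backed by the managed, socket-based handler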

In Preview 2 (or the GitHub master branch), you’ll be able to new up the handler, as you’d expect: new HttpClient(new SocketsHttpHandler()). We are also thinking about making the new handler the default, and enabling the existing native handler (based on libcurl and WinHTTP) as an option.

We are still working on moving Kestrel to use sockets. We don’t have anything to share on that yet.

You might wonder how sockets improvements can be shared between incoming and outgoing scenarios, because they feel so different. It’s actually pretty simple. If you’re a client, you connect to a server. If you’re a server, you listen and wait for connections. Once a connection is established, data can flow both ways, and that’s critical from a perf point of view. As a result, these socket improvements should improve both scenarios.

Span<T>, Memory<T> and friends

We are on the verge of introducing a new set of types for working with arrays and other types of memory that are much more efficient. They are included in Preview 1. Today, if you want to pass the first 1,000 elements of a 10,000-element array, you need to make a copy of those 1,000 elements and pass that copy along. That operation is expensive in both time and space. The new Span<T> type enables you to provide a virtual view of that array without the time or space cost. It’s also a struct, which means that there is no allocation cost either.

Span<T> and related types offer a uniform representation of memory from a multitude of different sources, such as arrays, stack allocation, and native code. With its slicing capabilities, it obviates the need for expensive copying and allocation in many scenarios, such as string manipulation, buffer management, etc, and provides a safe alternative to unsafe code. We expect usage of these types to start off in performance critical scenarios, but then transition to replacing arrays as the primary way of managing large blocks of data in .NET.

In terms of usage, you can create a Span<T> from an array. From there, you can easily and efficiently create a span to represent or point to just a subset of this array, using an overload of the span’s Slice method, and then index into the resulting span to write and read data in the relevant portion of the original array.
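
A minimal sketch of both operations (variable names are illustrative):

var array = new byte[10];
Span<byte> bytes = array;  // implicit conversion from T[] to Span<T>

Span<byte> slice = bytes.Slice(start: 5, length: 2); // view over elements 5 and 6, no copy
slice[0] = 42;                                       // writes through to array[5]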

Jared Parsons gives a great introduction in his Channel 9 video C# 7.2: Understanding Span. Stephen Toub goes into even more detail in C# – All About Span: Exploring a New .NET Mainstay.

Windows Compatibility Pack

When you port existing code from the .NET Framework to .NET Core, you can use the new Windows Compatibility Pack. It provides access to an additional 20,000 APIs, compared to what is available in .NET Core. This includes System.Drawing, EventLog, WMI, Performance Counters, and Windows Services.

The following example demonstrates accessing the Windows registry with APIs provided by the Windows Compatibility Pack.
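
A sketch of such access, assuming the compatibility pack package is referenced (the registry path is illustrative):

// using Microsoft.Win32; -- provided by the Windows Compatibility Pack
using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Microsoft"))
{
    foreach (var subKeyName in key.GetSubKeyNames())
    {
        Console.WriteLine(subKeyName);
    }
}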

Platform Support

We expect to support the following operating system versions for .NET Core 2.1:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2008 R2 SP1+
  • macOS: 10.12+
  • RHEL: 7+
  • Fedora: 26+
  • openSUSE: 42.3+
  • Debian: 8+
  • Ubuntu: 14.04+
  • SLES: 12+
  • Alpine: 3.6+

Docker

For Docker, we made a few changes relative to .NET Core 2.0 images at microsoft/dotnet:

  • Added Alpine runtime and SDK images (x64).
  • Added Ubuntu Bionic/18.04, for runtime and SDK images (x64 and ARM32).
  • Switched from debian:stretch to debian:stretch-slim for runtime images (x64 and ARM32).
  • Dropped debian:jessie (runtime and SDK).

These changes were based, in part, on two repeated pieces of feedback that we heard over the last six months.

  • Provide smaller images, particularly for the runtime.
  • Provide images with less surface area and/or more frequently updated (than Debian), from a vulnerability standpoint.

We also rewrote the .NET Core Docker Samples instructions and code. The samples provide better instructions and implement more scenarios like unit testing and using docker registries. We hope you find the samples helpful and we plan to continue to expand them. Tell us what you’d like to see us add to make .NET Core and Docker work better together.

Closing

Thanks for trying out .NET Core 2.1 Preview 1. Please try out your existing applications with .NET Core 2.1 Preview 1. And please try out the new features that are introduced in this post. We’ve put a lot of effort into making them right but we need your feedback to take them across the finish line, for our final release.

Once again, thanks to everyone that contributed to the release. We appreciate all of the issues and PRs that you’ve contributed that have helped make this first preview release available.

Announcing Entity Framework Core 2.1 Preview 1


Today we are releasing the first preview of EF Core 2.1, alongside .NET Core 2.1 Preview 1 and ASP.NET Core 2.1 Preview 1.

The new bits are available in NuGet as part of the individual packages, and as part of the ASP.NET Core meta-packages (both Microsoft.AspNetCore.All and the new Microsoft.AspNetCore.App), and included in the .NET Core SDK.

The majority of the new features of EF Core 2.1 are present and ready to try in Preview 1. We want to maximize both the exposure of the new features to you, our customers, and the amount of time we have to address your feedback before the final release. We encourage you to try this preview release and let us know what you think.

Obtaining the bits

Depending on your development environment, you can install using NuGet or the dotnet command-line interface.

If you are using one of the database providers developed as part of the Entity Framework Core project (e.g. SQL Server, SQLite or In-Memory), you can install EF Core 2.1 Preview 1 by installing the latest version of the provider. For example, using dotnet on the command line:

$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer -v 2.1.0-preview1-final

If you are using another EF Core 2.0-compatible relational database provider, you can get the new EF Core bits by adding the base relational provider, e.g.:

$ dotnet add package Microsoft.EntityFrameworkCore.Relational -v 2.1.0-preview1-final

Note that although there are a few features, such as value conversions, that require an updated database provider, existing providers developed for EF Core 2.0 should be compatible with EF Core 2.1. If there is any incompatibility, that is a bug we want to hear about!

Some additional providers with support for EF Core 2.1 Preview, like Npgsql.EntityFrameworkCore.PostgreSQL, will be available soon.

New features

Besides numerous small improvements and more than a hundred product bug fixes, EF Core 2.1 includes several frequently requested new features:

Lazy loading

EF Core now contains the necessary building blocks for anyone to author entity classes that can load their navigation properties on demand. We have also created a new package, Microsoft.EntityFrameworkCore.Proxies, that leverages those building blocks to produce lazy loading proxy classes based on minimally modified entity classes (e.g. classes with virtual navigation properties).
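
For example, with the proxies package referenced, lazy loading can be enabled like this (a sketch; Blog and Post are illustrative entity types, and connectionString is assumed):

public class Blog
{
    public int Id { get; set; }

    // virtual navigation properties are what the generated proxies override to load on demand
    public virtual ICollection<Post> Posts { get; set; }
}

// In the DbContext:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder
        .UseLazyLoadingProxies()
        .UseSqlServer(connectionString);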

Read the section on lazy loading for more information about this topic.

Parameters in entity constructors

As one of the required building blocks for lazy loading, we enabled the creation of entities that take parameters in their constructors. You can use parameters to inject property values, lazy loading delegates, and services.
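
For example, EF Core can bind column values through constructor parameters whose names match properties (an illustrative sketch):

public class Post
{
    public Post(string text)  // 'text' is matched to the Text property by convention
    {
        Text = text;
    }

    public int Id { get; set; }
    public string Text { get; }
}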

Read the section on entity constructor with parameters for more information about this topic.

Value conversions

Until now, EF Core could only map properties of types natively supported by the underlying database provider. Values were copied back and forth between columns and properties without any transformation. Starting with EF Core 2.1, value conversions can be applied to transform the values obtained from columns before they are applied to properties, and vice versa. We have a number of conversions that can be applied by convention as necessary, as well as an explicit configuration API that allows registering custom conversions between columns and properties. Some of the applications of this feature are:

  • Storing enums as strings
  • Mapping unsigned integers with SQL Server
  • Automatic encryption and decryption of property values
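
For example, the first of these – storing an enum as a string – can be configured explicitly in OnModelCreating (a sketch; Order and OrderStatus are illustrative types):

modelBuilder.Entity<Order>()
    .Property(o => o.Status)
    .HasConversion(
        status => status.ToString(),                                   // to the database column
        value => (OrderStatus)Enum.Parse(typeof(OrderStatus), value)); // back to the property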

Read the section on value conversions for more information about this topic.

LINQ GroupBy translation

Before version 2.1, the GroupBy LINQ operator in EF Core was always evaluated in memory. We now support translating it to the SQL GROUP BY clause in most common cases.

This example shows a query with GroupBy used to compute various aggregate functions:

var query = context.Orders
    .GroupBy(o => new { o.CustomerId, o.EmployeeId })
    .Select(g => new
        {
          g.Key.CustomerId,
          g.Key.EmployeeId,
          Sum = g.Sum(o => o.Amount),
          Min = g.Min(o => o.Amount),
          Max = g.Max(o => o.Amount),
          Avg = g.Average(o => o.Amount)
        });

The corresponding SQL translation looks like this:

SELECT [o].[CustomerId], [o].[EmployeeId],
    SUM([o].[Amount]), MIN([o].[Amount]), MAX([o].[Amount]), AVG([o].[Amount])
FROM [Orders] AS [o]
GROUP BY [o].[CustomerId], [o].[EmployeeId];

Data Seeding

With the new release it will be possible to provide initial data to populate a database. Unlike in EF6, seed data is associated with an entity type as part of the model configuration. EF Core migrations can then automatically compute what insert, update, or delete operations need to be applied when upgrading the database to a new version of the model.

As an example, you can use this to configure seed data for a Post in OnModelCreating:

modelBuilder.Entity<Post>().SeedData(new Post{ Id = 1, Text = "Hello World!" });

Read the section on data seeding for more information about this topic.

Query types

An EF Core model can now include query types. Unlike entity types, query types do not have keys defined on them and cannot be inserted, deleted or updated (i.e. they are read-only), but they can be returned directly by queries. Some of the usage scenarios for query types are:

  • Mapping to views without primary keys
  • Mapping to tables without primary keys
  • Mapping to queries defined in the model
  • Serving as the return type for FromSql() queries

Read the section on query types for more information about this topic.

Include for derived types

It is now possible to specify navigation properties that are only defined on derived types when writing expressions for the Include method. For the strongly typed version of Include, we support using either an explicit cast or the as operator. We also now support referencing the names of navigation properties defined on derived types in the string version of Include:

var option1 = context.People.Include(p => ((Student)p).School);
var option2 = context.People.Include(p => (p as Student).School);
var option3 = context.People.Include("School");

System.Transactions support

We have added the ability to work with System.Transactions features such as TransactionScope. This will work on both .NET Framework and .NET Core when using database providers that support it.
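
A minimal sketch of what this enables (OrdersContext is an illustrative context type):

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    using (var context = new OrdersContext())
    {
        context.Orders.Add(new Order { Amount = 150 });
        context.SaveChanges(); // enlists in the ambient System.Transactions transaction
    }

    scope.Complete(); // commit
}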

Read the section on System.Transactions for more information about this topic.

Better column ordering in initial migration

Based on customer feedback, we have updated migrations to initially generate columns for tables in the same order as the properties are declared in classes. Note that EF Core cannot change the order when new members are added after the initial table creation.

Optimization of correlated subqueries

We have improved our query translation to avoid executing “N + 1” SQL queries in many common scenarios in which the usage of a navigation property in the projection leads to joining data from the root query with data from a correlated subquery. The optimization requires buffering the results from the subquery, and we require that you modify the query to opt in to the new behavior.

As an example, the following query normally gets translated into one query for Customers, plus N (where “N” is the number of customers returned) separate queries for Orders:

var query = context.Customers.Select(
    c => c.Orders.Where(o => o.Amount  > 100).Select(o => o.Amount));

By including ToList() in the right place, you indicate that buffering is appropriate for the Orders, which enables the optimization:

var query = context.Customers.Select(
    c => c.Orders.Where(o => o.Amount  > 100).Select(o => o.Amount).ToList());

Note that this query will be translated to only two SQL queries: One for Customers and the next one for Orders.

OwnedAttribute

It is now possible to configure owned entity types by simply annotating the type with [Owned] and then making sure the owner entity is added to the model:

[Owned]
public class StreetAddress
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public StreetAddress ShippingAddress { get; set; }
}

The new attribute is defined in a new package: Microsoft.EntityFrameworkCore.Attributes. Our current thinking is that this package will host other EF Core-specific attributes.

Read the section on owned entity types for more information about this topic.

What’s next

As mentioned in our roadmap post earlier this month, we intend to release additional previews monthly, and a final release in the first half of 2018.

In the meantime, the team has also been busy with other projects.

Thank you!

As always, we want to express our gratitude to everyone who has helped make the 2.1 release better by providing feedback, reporting bugs, and contributing code.

Please try the preview bits, and keep the feedback coming!

ASP.NET Core 2.1.0-preview1: Using ASP.NET Core Previews on Azure App Service


There are three options for getting ASP.NET Core 2.1 Preview applications running on Azure App Service:

  1. Installing the Preview1 site extension
  2. Deploying your app self-contained
  3. Using Web Apps for Containers

Installing the site extension

Starting with 2.1-preview1 we are producing an Azure App Service site extension that contains everything you need to build and run your ASP.NET Core 2.1-preview1 app. You can install this site extension as follows:

  1. Go to the Extensions blade
    Azure App Service Site Extension UI

  2. Click ‘Add’ at the top of the screen and choose the ‘ASP.NET Core Runtime Extension’ from the list of available extensions.
    Choose the ASP.NET Core Runtime Extensions

  3. Then agree to the license terms by clicking ‘OK’ on the ‘Accept Legal Terms’ screen. Finally, click ‘OK’ at the bottom of the Add Extension screen.
    Accept Agreement

Once the add operation has completed you will have .NET Core 2.1 Preview 1 installed. You can verify this by going to the Console and running ‘dotnet --info’. It should look like this:

dotnet Info output

You can see the path to the site extension where Preview 1 has been installed, showing that you are running from the site extension instead of from the default Program Files location. If you see Program Files instead, try restarting your site and running the info command again.

Using an ARM template

If you are using an ARM template to create and deploy applications you can use the ‘siteextensions’ resource type to add the site extension to a Web App. For example:
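
A sketch of such a resource definition (the extension name and apiVersion here are assumptions; check the site extension gallery for the exact id):

{
  "apiVersion": "2015-08-01",
  "name": "AspNetCoreRuntime",
  "type": "siteextensions",
  "properties": { },
  "dependsOn": [
    "[resourceId('Microsoft.Web/Sites', parameters('siteName'))]"
  ]
}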

You can add and edit this snippet in your own ARM template to add the site extension to your web app. Make sure that this resource definition is in the resources collection of your site resource.

Deploy a self-contained app

You can deploy a self-contained app that carries the preview1 runtime with it when being deployed. This option means that you don’t need to prepare your site, but it does require you to publish your application differently than you would for a framework-dependent deployment to a server with the runtime pre-installed.

Self-contained apps are an option for all .NET Core applications, and some of you may be deploying your applications this way already.
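
For example, a self-contained publish for Azure App Service, which runs 32-bit by default (hence the win-x86 runtime identifier; an illustrative command):

dotnet publish -c Release -r win-x86 --self-contained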

Use Docker

We have 2.1 preview1 Docker images available on Docker Hub for use. You can use them as your base image and deploy to Web Apps for Containers as you normally would.

Conclusion

This is the first time that we are using site extensions instead of pre-installing previews globally on Azure App Service. If you have any problems getting it to work then log an issue on GitHub.

Transform your industry with new Microsoft IoT in Action webinar series


The Internet of Things (IoT) is changing the way industries around the world do business. And with recent advances in sensor platforms and intelligent analytic capabilities available from the cloud, the use of IoT has become even more mainstream. As we will highlight in Microsoft’s IoT in Action webinar series, there are many new and exciting ways that IoT solutions are being used across industries.


Using IoT solutions to deliver impactful experiences

In industries where there is so much data available, but not always the means to process and interpret it all, IoT solutions can have a huge impact. To start, IoT plays a crucial role in delivering a positive customer experience.

Consider the healthcare industry. While being able to better track and predict illnesses can benefit hospitals and clinics, it also results in a more seamless journey for the individual. For example, individuals can use wearable technology to monitor their health and well-being beyond their heart rate and physicians can use this data to monitor patients remotely. This means each patient receives a more personalized, rewarding experience.

Retail IoT solutions also revolve around using data to customize each experience to the individual shopper. While the insights gleaned from shopper data can help retailers improve their capabilities while reducing costs, it more importantly lets them create an engaging, frictionless shopping journey for their customers. Retailers can accomplish this using integrated sensor platforms, data fusion, and people and product interaction analytics. Attend the IoT in Action Retail Webinar and learn how to deliver your own personal, seamless, and differentiated customer experiences.

Transforming business operations and improving efficiency through IoT solutions

The advances made with IoT solutions don’t stop there. By embracing a data-driven mindset, companies across all industries can also form valuable insights and quickly take action.

From manufacturing and energy to healthcare, the benefits of using IoT-gathered data to drive your business are enormous. Companies can use real-time data to optimize production, predict and minimize downtime, boost product quality, and ultimately drive profit. Let’s go back to the previous healthcare example. Using IoT solutions, doctors can better gather patient data and track patterns that help them identify illnesses earlier. Not only does this improve the health of patients, but it can also help reduce long-term healthcare costs.

Then there’s the manufacturing industry, where IoT-enabled predictive maintenance assists companies like Rolls-Royce in anticipating maintenance needs and avoiding expensive, unscheduled delays. Such IoT solutions allow companies to look at wider sets of operating data and use machine learning and analytics to spot correlations. They can then optimize their models based on this data and provide insights that might help improve overall efficiency or reduce costs.  Companies can learn more about implementing smart manufacturing with IoT solutions during the IoT in Action Manufacturing Webinar.

It doesn’t stop at just businesses, though; entire communities can be positively affected. Tune in to the IoT in Action Security and Surveillance Webinar to see how IoT is impacting the digital transformation of public safety initiatives.

How to seize the opportunities provided by IoT

Even with increases in IoT investment, there are still companies across multiple industries such as agriculture, energy, healthcare, manufacturing, government, retail, smart buildings, and smart cities that aren’t sure where to go next. Their focus is determining how to tap new sources of data, how to drive insights and action, and how to implement IoT in a way that supports their customers and advances their bottom line.

To aid in this learning process, we are providing partners and customers access to a new Microsoft IoT in Action webinar series. The aim of these live and on-demand webinars is to show what’s possible with IoT while also educating on how to do it. Filled with IoT thought leadership and technical know-how shared by industry pioneers from Microsoft and our partners, each webinar is crafted to inspire development across the IoT partner ecosystem and help you better meet customers’ needs.

The IoT in Action webinar series includes industry spotlights, which are 60-minute webinars focused on a target industry. IoT experts from Microsoft and our leading partner companies will lead the webinars, highlighting partner solution examples to help you understand the evolving technologies, innovative solutions, and changing business models in the dynamic IoT space.

Register for these live webinars so you can get inspired by industry leaders and begin implementing new IoT strategies in your business today.

ASP.NET Core 2.1.0-preview1: Getting started with SignalR


Since 2013, ASP.NET developers have been using SignalR to build real-time web applications. Now, with ASP.NET Core 2.1 Preview 1, we’re bringing SignalR over to ASP.NET Core so you can build real-time web applications with all the benefits of ASP.NET Core. We released an alpha version of this new SignalR back in October that worked with ASP.NET Core 2.0, but now it’s ready for a broader preview and built-in to ASP.NET Core 2.1 (no additional NuGet packages required!). This new version of SignalR gave us a chance to significantly redesign some elements and learn from the lessons of the past, but the core APIs you work with should be very similar. The new design gives us a much more flexible platform on which to build the future of real-time .NET server applications. For now, though, let’s walk through a simple Chat demo to see how it works in ASP.NET Core SignalR.

Prerequisites

In order to complete this tutorial you need the following tools:

  1. .NET Core SDK version 2.1.300-preview1 or higher.
  2. Node.js (needed only for NPM, to download the SignalR JavaScript library; we strongly recommend using at least version 8.9.4 of Node).
  3. Your IDE/Editor of choice.

Building the UI

Let’s start by building a simple UI for a simple chat app. First, create a new Razor pages application using dotnet new:
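
A command along these lines works (SignalRTutorial matches the namespace used later in this tutorial, and --auth Individual enables the ASP.NET Core Identity authentication the chat page relies on):

dotnet new razor --auth Individual -o SignalRTutorial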

Add a new page for the chat UI:
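
For example, using the Razor Page item template (an illustrative command, run from the project directory):

dotnet new page -n Chat -o Pages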

You should now have Pages/Chat.cshtml and Pages/Chat.cshtml.cs files in your project. First, open Pages/Chat.cshtml.cs, change the namespace name to match your other page models and add the Authorize attribute to ensure only authenticated users can access the Chat page.

Next, open Pages/Chat.cshtml and add some UI:
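
The original markup isn't reproduced in this extract; a minimal sketch consistent with the script wiring used later (element ids are illustrative) could be:

<form id="send-form">
    <input type="text" id="message-box" placeholder="Type a message..." disabled />
    <button type="submit" id="send-button" disabled>Send</button>
</form>

<ul id="messages-list"></ul>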

The UI we’ve added is fairly simple. We’re going to use ASP.NET Core Identity for authentication, which means the user will be authenticated and will have a username when we get here. To try it out, use dotnet run to launch the site and register as a new user. Then navigate to the /Chat endpoint; you should see the following UI:


The Chat UI

Writing the server code

In SignalR, you put server-side code in a “Hub”. Hubs contain methods that the SignalR Client allows you to invoke from the browser, much like how an MVC controller has actions that are invoked by issuing HTTP requests. However, unlike an MVC Controller Action, SignalR allows the server to invoke methods on the client as well, allowing you to develop real-time applications that notify users of new content. So, first, we need to build a hub. Back in the root of the project, create a Hubs directory and add a new file to that directory called ChatHub.cs:
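
The hub's code isn't reproduced in this extract; a sketch consistent with the walkthrough below (message names and behavior as described) is:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;

namespace SignalRTutorial.Hubs
{
    [Authorize]
    public class ChatHub : Hub
    {
        public override async Task OnConnectedAsync()
        {
            // Tell everyone that a user joined
            await Clients.All.SendAsync("SendAction", Context.User.Identity.Name, "joined");
        }

        public override async Task OnDisconnectedAsync(Exception exception)
        {
            // Tell everyone that a user left
            await Clients.All.SendAsync("SendAction", Context.User.Identity.Name, "left");
        }

        public async Task Send(string message)
        {
            // Relay the message to every connected client, including the sender
            await Clients.All.SendAsync("SendMessage", Context.User.Identity.Name, message);
        }
    }
}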

Let’s go back over that code a little bit and look at what it does.

First, we have a class inheriting from Hub, which is the base class required for all SignalR Hubs. We apply the [Authorize] attribute to it which restricts access to the Hub to registered users and ensures that Context.User is available for us in the Hub methods. Inside Hub methods, you can use the Clients property to access the clients connected to the hub. We use the .All property, which gives us an object that can be used to send messages to every client connected to the Hub.

When a new client connects, the OnConnectedAsync method will be invoked. We override that method to send the SendAction message to every client, and provide two arguments: the name of the user, and the action that occurred (in this case, that they “joined” the chat session). We do the same for OnDisconnectedAsync, which is invoked when a client disconnects.

When a client invokes the Send method, we send the SendMessage message to every client, again providing two arguments: The name of the user sending the message and the message itself. Every client will receive this message, including the sending client itself.

To finish off the server-side, we need to add SignalR to our application. We do that in the Startup.cs file. First, add the following to the end of the ConfigureServices method to register the necessary SignalR services into the DI container:
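
That registration is a one-liner:

services.AddSignalR();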

Then, we need to put SignalR into the middleware pipeline, and give our ChatHub hub a URL that the client can reference. We do that by adding these lines to the end of the Configure method:
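
A sketch of those lines (the /hubs/chat URL matches the client code below):

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("/hubs/chat");
});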

This configures the hub so that it is available at the URL /hubs/chat. You can use any URL you want, but it can’t match an existing MVC action or Razor Page.

NOTE: You’ll need to add a using directive for SignalRTutorial.Hubs in order to use ChatHub in your MapHub call.

Building the client-side

Now that we have the server hub up and running, we need to add code to the Chat.cshtml page to use the client. First, however, we need to get the SignalR JavaScript client and add it to our application. There are many ways you can do this, such as using a bundling tool like Webpack, but here we’re going to go with a fairly simple approach of copying and pasting. First, install the SignalR client using NPM:
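
The package is @aspnet/signalr:

npm install @aspnet/signalr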

You can find the version of the client designed for use in Browsers in node_modules/@aspnet/signalr/dist/browser. There are minified files there as well. For now, let’s just copy the signalr.js file out of that directory and into wwwroot/lib/signalr in the project:


SignalR JS file in the wwwroot/lib/signalr folder

Now, we can add JavaScript to our Chat.cshtml page to wire everything up. At the end of the file (after the closing </ul> tag), add the following:

We put our scripts in the Scripts Razor section, in order to ensure they end up at the very bottom of the Layout page. First, we load the signalr.js library we just copied in:
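
That reference (assuming the wwwroot/lib/signalr location used above):

<script src="~/lib/signalr/signalr.js"></script>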

Then, we add a script block for our own code. In that code, we first get references to some DOM elements, and define a helper function to add a new item to the messages-list list. Then, we create a new connection, connecting to the URL we specified back in the Configure method.
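
A sketch of that setup (element ids match the markup sketch earlier; the connection is constructed with the preview-era client API, which later releases replaced with a HubConnectionBuilder):

const messagesList = document.getElementById("messages-list");
const messageBox = document.getElementById("message-box");
const sendButton = document.getElementById("send-button");
const sendForm = document.getElementById("send-form");

// Helper to append a new item to the messages list
function appendMessage(content) {
    const li = document.createElement("li");
    li.textContent = content;
    messagesList.appendChild(li);
}

const connection = new signalR.HubConnection("/hubs/chat");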

At this point, the connection has not yet been opened. We need to call connection.start() to open the connection. However, before we do that we have some set-up to do. First, let’s wire up the “submit” handler for the <form>. When the “Send” button is pressed, this handler will be fired and we want to grab the content of the message text box and send the Send message to the server, passing the message as an argument (we also clear the text box so that the user can enter a new message):
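
A sketch of that handler (ids as above):

sendForm.addEventListener("submit", event => {
    event.preventDefault();                      // stay on the page
    connection.invoke("Send", messageBox.value); // invoke the hub's Send method
    messageBox.value = "";
});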

Then, we wire up handlers for the SendMessage and SendAction messages (remember back in the Hub we use the SendAsync method to send those messages, so we need a handler on the client for them):
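
A sketch of those handlers:

connection.on("SendMessage", (name, message) => {
    appendMessage(`${name}: ${message}`);
});

connection.on("SendAction", (name, action) => {
    appendMessage(`${name} ${action}`);
});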

Finally, we start the connection. The .start method returns a JavaScript Promise object that completes when the connection has been established. Once it’s established, we want to enable the text box and button:
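
A sketch of that final step:

connection.start().then(() => {
    messageBox.disabled = false;
    sendButton.disabled = false;
});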

Testing it out

With all that code in place, it should be ready to go. Use dotnet run to launch the app and give it a try! Then, use a Private Browsing window and log in as a different user. You should be able to chat back and forth between the browser windows.

Conclusion

This has been a brief overview of how to get started with SignalR in ASP.NET Core 2.1 Preview 1. Check out the full code for this tutorial if you’d like to see more details. If you need help, post questions on StackOverflow using the signalr-core tag. Finally, if you think you’ve found a bug, file it on our GitHub repository.


New releases: Microsoft R Client 3.4.3, Microsoft ML Server 9.3


An update to Microsoft R Client, Microsoft's distribution of open source R with additional proprietary packages — including RevoScaleR (for data analysis at scale) and MicrosoftML (for machine learning) — is now available. Microsoft R Client 3.4.3 updates the R engine to R 3.4.3, and (on Linux) now supports deploying computations to a remote SQL Server with the sqlrutils package.

Microsoft R Client 3.4.3 is free to download and use, and is designed for developing analytical applications that will be deployed to production servers. It works with Microsoft Machine Learning Server 9.3, also released today. You can use Microsoft ML Server to scale your R analysis to data sets of any size or workloads of any intensity: as a server or cluster of servers on premises or in the Azure cloud, or as part of a hybrid architecture with Azure Stack.

For more information on the new capabilities in Microsoft ML Server and Microsoft R Client, take a look at the announcement linked below.

Machine Learning Blog: Introducing the Microsoft Machine Learning Server 9.3 Release

 

ASP.NET Core 2.1.0-preview1: Introducing compatibility version in MVC


This post was written by Ryan Nowak

In 2.1 we’re adding a feature to address a long-standing problem for maintaining MVC – how do we make improvements to framework code without making it too hard for developers to upgrade to the latest version? This is not an easy concern to solve – and with 7 major releases of MVC (dating back to 2009) there are a few things we’d like to leave in the past.

Unlike most other parts of ASP.NET Core, MVC is a framework – our code calls your code in lots of idiosyncratic ways. If we change what methods we call, in what order, or how we handle exceptions, it’s very easy for working code to become non-working code. In our experience, it’s also not good enough for the team to simply expect developers to rely on the documented behavior and punish those who don’t.

This last bit is summed up with Hyrum’s Law, or if you prefer, the XKCD version. We make decisions with the assumption that some developers have built working applications that rely on our bugs.

Despite these challenges, we think it’s worthwhile to keep moving forward. We’re disappointed too when we get a good piece of feedback that we can’t act upon because it’s incompatible with our legacy behavior.

What we’re doing

Our plan is to continue to make improvements to framework behaviors – where we think we’ve made a mistake – or where we can update a feature to be unequivocally better. However, we’re going to make these changes opt-in, and make it easy to opt-in. New applications created from our templates will opt-in to the current release’s behaviors by default.

When we reach the next major release (3.0 – not any time soon) – we will remove the old behaviors.

Opt-in means that updating your package references doesn’t give you different behavior. You have to choose the new version and the new behavior.

Right now this looks like:
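
The original snippet isn't reproduced in this extract; a sketch of the all-at-once opt-in, matching the SetCompatibilityVersion call shown in the migration steps earlier:

services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);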

OR
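
Alternatively, you can opt in to individual behaviors by setting the corresponding options directly (a sketch; the option shown is one of the compatibility switches listed below):

services.AddMvc();
services.Configure<MvcOptions>(options =>
{
    // Each compatibility behavior has its own switch on the relevant options type
    options.AllowCombiningAuthorizeFilters = true;
});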

What this means

I think this does a few things that are valuable. Consider all of the below as goals or success criteria. We still have to do a good job understanding your feedback and communicating for these things to happen.

For you and for us: We can continue to invest in new ideas and adapt to a changing web landscape.

For you: It’s easy to adopt new versions in small steps.

For us: Streamlines things that require a lot of effort to support, document, and respond to feedback.

For us: Simplifies the decision process of how to make and communicate a change.

What we’re not doing

While we’re giving you fine-grained control over which new behaviors you get, we don’t intend on keeping old things forever. This is not a license to live in the past. As stated above, our plan is to update things that are broken and keep moving forward by removing old behaviors over time.

We’re also not treating this new capability as open season on breaking changes. Making any change that impacts developers on our platform has to be justified in providing enough value, and needs to be comprehensible and actionable by those that are impacted – because we expect all developers to deal with it eventually.

A good candidate change is one that:

  • adds a feature, but with a small break risk for a minority of users (areas for Razor Pages)
  • fixes a big problem, but with a comprehensible impact (exception handling for input formatters)
  • never worked the way we thought (bug), and streamlines something complicated (combining authorization filters)

Note that in all of the cases above, the new behaviors are easier for us to explain and document. We would recommend that everyone choose the new behaviors; it’s not a matter of preference.

Give us feedback about this. If you think this plan leaves you out in the cold, let us know how and why.

What’s happening now?

Most of the known work for us has already happened. We’ve made about five design changes to features inside MVC during the 2.1 milestone that deserved a compatibility switch.

You can find a summary of these changes below. My hope is that the documentation added to the specific options and types explains what is changing when you opt in to each setting and why we feel it’s important.

General MVC

Combine Authorization Filters

Smarter exception handling for formatters

Smarter validation for enums

Allow non-string types with HeaderModelBinder (2.1.0-preview-2)

JSON Formatter

Better error messages

Razor Pages

Areas for Pages

Appendix A: an auspicious example

I think exception handling for input formatters is probably the best illustrative example of how this philosophy works.

The best starting place is probably to look at the docs that I added in this PR. We have a problem in the 1.X and 2.0 family of MVC releases where any exception thrown by an IInputFormatter will be swallowed by the infrastructure and turned into a model state error. This includes TypeLoadException, NullReferenceException, ThreadAbortException and all other kinds of esoterica.

This is the case because we didn’t have an exception type that says “I failed to process the input, report an error to the client”. We added this in 2.1 and we’ve updated our formatters to use it in the appropriate cases (the XML serializers throw exceptions). However this can’t help formatters we didn’t write.
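As an illustration, here is a sketch of how a custom formatter written against 2.1 can signal a client error; the naive CSV formatter itself is hypothetical:

public class NaiveCsvInputFormatter : TextInputFormatter
{
    public NaiveCsvInputFormatter()
    {
        SupportedMediaTypes.Add("text/csv");
        SupportedEncodings.Add(Encoding.UTF8);
    }

    protected override bool CanReadType(Type type) => type == typeof(string[]);

    public override async Task<InputFormatterResult> ReadRequestBodyAsync(
        InputFormatterContext context, Encoding encoding)
    {
        using (var reader = new StreamReader(context.HttpContext.Request.Body, encoding))
        {
            var text = await reader.ReadToEndAsync();
            if (string.IsNullOrWhiteSpace(text))
            {
                // "I failed to process the input, report an error to the client" -
                // MVC turns this into a model state error instead of a 500.
                throw new InputFormatterException("The request body was empty.");
            }

            return InputFormatterResult.Success(text.Split(','));
        }
    }
}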

This leads to the need for a switch. If you need to use a formatter written against 1.0 that throws an exception and expects MVC to handle it, that will still work until you opt-in to the new behavior. We do plan on removing the old way in 3.0, but this eases the pressure – instead of this problem blocking you from adopting 2.1, you have time to figure out a solution before 3.0 (a long time away).

——

I hope this example provides a little insight into what our process is like. See the relevant links for the in-code documentation about the other changes. We are looking forward to feedback on this, either on GitHub or as comments on this post.

ASP.NET Core 2.1.0-preview1: Improvements for building Web APIs


ASP.NET Core 2.1 adds a number of features that make it easier and more convenient to build Web APIs. These features include Web API controller specific conventions, more robust input processing and error handling, and JSON patch improvements.

Please note that some of these features require enabling MVC compatibility with 2.1, so be sure to check out the post on MVC compatibility versions as well.

[ApiController] and ActionResult<T>

ASP.NET Core 2.1 introduces new Web API controller specific conventions that make Web API development more convenient. These conventions can be applied to a controller using the new [ApiController] attribute:

  • Automatically respond with a 400 when validation errors occur – no need to check the model state in your action method
  • Infer smarter defaults for action parameters: [FromBody] for complex types, [FromRoute] when possible, otherwise [FromQuery]
  • Require attribute routing – actions are not accessible by convention-based routes

You can also now return ActionResult<T> from your Web API actions, which allows you to return arbitrary action results or a specific return type (thanks to some clever use of implicit cast operators). Most Web API action methods have a specific return type, but also need to be able to return multiple different action results.

Here’s an example Web API controller that uses these new enhancements:

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly ProductsRepository _repository;

    public ProductsController(ProductsRepository repository)
    {
        _repository = repository;
    }

    [HttpGet]
    public IEnumerable<Product> Get()
    {
        return _repository.GetProducts();
    }

    [HttpGet("{id}")]
    public ActionResult<Product> Get(int id)
    {
        if (!_repository.TryGetProduct(id, out var product))
        {
            return NotFound();
        }
        return product;
    }

    [HttpPost]
    [ProducesResponseType(201)]
    public ActionResult<Product> Post(Product product)
    {
        _repository.AddProduct(product);
        return CreatedAtAction(nameof(Get), new { id = product.Id }, product);
    }
}

Because these conventions are more descriptive, tools like Swashbuckle or NSwag can do a better job of generating an OpenAPI specification for this Web API that includes information like return types, parameter sources, and possible error responses, without needing additional attributes.

Better input processing

ASP.NET Core 2.1 does a much better job of providing appropriate error information when the request body fails to deserialize or the JSON is invalid.

For example, in ASP.NET Core 2.0, if your Web API received a request with a JSON property that had the wrong type (like a string instead of an int), you got a generic error message like this:

{
  "count": [
    "The input was not valid."
  ]
}

In 2.1 we provide more detailed error information about what was wrong with the request including path and line number information:

{
  "count": [
    "Could not convert string to integer: abc. Path 'count', line 1, position 16."
  ]
}

Similarly, if the request is syntactically invalid (ex. missing a curly brace) then 2.1 will let you know:

{
  "": [
    "Unexpected end when reading JSON. Path '', line 1, position 1."
  ]
}

You can also now add validation attributes to top level parameters of your action method. For example, you can mark a query string parameter as required like this:

[HttpGet("test/{testId}")]
public ActionResult<TestResult> Get(string testId, [Required]string name)

Problem Details

In this release we added support for RFC 7807 – Problem Details for HTTP APIs – as a standardized format for returning machine-readable error responses from HTTP APIs.

To update your Web API controllers to return Problem Details responses for invalid requests you can add the following code to your ConfigureServices method:

services.Configure<ApiBehaviorOptions>(options =>
{
    options.InvalidModelStateResponseFactory = context =>
    {
        var problemDetails = new ValidationProblemDetails(context.ModelState)
        {
            Instance = context.HttpContext.Request.Path,
            Status = StatusCodes.Status400BadRequest,
            Type = "https://asp.net/core",
            Detail = "Please refer to the errors property for additional details."
        };
        return new BadRequestObjectResult(problemDetails)
        {
            ContentTypes = { "application/problem+json", "application/problem+xml" }
        };
    };
});

You can also return a Problem Details response from your API action for an invalid request using the ValidationProblem() helper method.
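For example, a sketch of an action that short-circuits with a Problem Details response:

[HttpPost]
public IActionResult Post(Value value)
{
    if (!ModelState.IsValid)
    {
        // Returns a 400 response with a ValidationProblemDetails body.
        return ValidationProblem(ModelState);
    }

    // ... save the value ...
    return Ok(value);
}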

An example Problem Details response for an invalid request looks like this (where the content type is application/problem+json):

{
  "errors": {
    "Text": [
      "The Text field is required."
    ]
  },
  "type": "https://asp.net/core",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "detail": "Please refer to the errors property for additional details.",
  "instance": "/api/values"
}

JSON Patch improvements

JSON Patch defines a JSON document structure for implementing HTTP PATCH semantics. A JSON Patch document defines a sequence of operations (add, remove, replace, copy, etc.) that can be applied to a JSON resource.

ASP.NET Core has supported JSON Patch since it first shipped, but in 2.1 we've added support for the test operation. The test operation allows you to check for specific values before applying the patch. If any test operation fails, the whole patch fails.

A Web API controller action that supports JSON Patch looks like this:

[HttpPatch("{id}")]
public ActionResult<Value> Patch(int id, JsonPatchDocument<Value> patch)
{
    var value = new Value { ID = id, Text = "Do" };

    patch.ApplyTo(value, ModelState);

    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    return value;
}

Where the Value type is defined as follows:

public class Value
{
    public int ID { get; set; }

    public string Text { get; set; }

    public IDictionary<int, string> Status { get; } = new Dictionary<int, string>();
}

The following JSON Patch request successfully adds a value to the Status dictionary (note that we've also added support for non-string dictionary keys, like int, Guid, etc.):

Successful request

[
  { "op": "test", "path": "/text", "value": "Do" },
  { "op": "add", "path": "/status/1", "value": "Done!" }
]

Successful response

{
  "id": 123,
  "text": "Do",
  "status": {
    "1": "Done!"
  }
}

Conversely the following JSON Patch request fails because the value of the text property doesn't match:

Failed request

[
  { "op": "test", "path": "/text", "value": "Do not" },
  { "op": "add", "path": "/status/1", "value": "Done!" }
]

Failed response

{
  "Value": [
    "The current value 'Do' at path 'text' is not equal to the test value 'Do not'."
  ]
}

Summary

We hope you enjoy these Web API improvements. Please give them a try and let us know what you think. If you hit any issues or have feedback please file issues on GitHub.

VSTS and TFS roadmap update


This week we updated the roadmap (Feature Timeline) for Visual Studio Team Services and Team Foundation Server.  Check it out to see many of the significant improvements we are working on/planning.  If you have feedback or suggestions for other improvements that are important to you, our User Voice forum is the best place to provide it.

When we first envisioned the feature timeline idea, the thinking was to update it quarterly. However, over time, it turned out that didn’t work well with our planning rhythm, which involves cross-team backlog reviews every 3 sprints – or every 9 weeks. It made sense to do roadmap updates when each planning wave was completed, and the result is that the updates became unaligned with the quarterly roadmap schedule. So, here we are two-thirds of the way through Q1 with a roadmap update that still talks about Q1 deliverables. I thought that might seem a bit weird to you, so I wanted to explain why.

We should be doing another update to the feature timeline in several weeks to identify which of the “future” items will show up in TFS 2018.2 – we generally do that as soon as the first RC for an Update ships, and that’s the rough timeframe.

Thank you,

Brian

Confidently plan your cloud migration: Azure Migrate is now generally available!


A few months ago, we announced Azure Migrate – a new service that provides guidance and insights to help you migrate to Azure. Today, we're excited to announce that Azure Migrate is generally available.

Azure Migrate is offered at no additional charge and provides appliance-based, agentless discovery of your on-premises environments. It enables discovery of VMware-virtualized Windows and Linux VMs today and will enable discovery of Hyper-V environments in the future. It also provides an optional, agent-based discovery for visualizing interdependencies between machines to identify multi-tier applications. This enables you to plan your migration across three dimensions:

  • Readiness: Are the machines that host my multi-tier application suitable for running in Azure?
  • Rightsizing: What size will my Azure VM be, based on my machine’s configuration or utilization?
  • Cost: How much will my recurring Azure costs be, taking into account discounts like Azure Hybrid Benefit?

Many of you are already using Azure Migrate in production to accelerate your migration journey. Thank you for using the preview service, and for providing us with valuable feedback. Here are some new features added after the preview:

  • Configuration-based sizing: Size your machine as-is, based on configuration settings such as number of CPU cores and size of memory, in addition to already supported sizing based on utilization of CPU, memory, disk, etc.
  • Confidence rating for assessments: Use a star rating to differentiate assessments that are based on more versus less utilization data points.
  • No charge for dependency visualization: Visualize network dependencies of your multi-tier application without getting charged for Service Map.

  • More target regions: Assess your machines for target regions in China, Germany, and India. You can create migration projects in two regions – West Central US and East US. However, you can plan migrations to any of the 30 supported target regions.

As the saying goes, “If you fail to plan, you plan to fail.” Azure Migrate can help you do a great job of migration planning. We're listening to your feedback and are continuing to add more features to help you plan migrations. However, we don’t want to stop there. We also want to provide a streamlined experience to perform migrations. Today, you can use services like Azure Site Recovery and Azure Database Migration Service to do this. Going forward, you can expect to see all that goodness integrated into Azure Migrate. That way, you'll have a true single-stop shop for all your Azure migration needs.

You can get started by creating a migration project in the Azure portal. In addition...

  • Get and stay informed with our documentation.
  • Seek help by posting a question on our forum or contacting Microsoft Support.
  • Provide feedback by posting or voting for an idea in our user voice.

Happy migrating!

- Shon

The UWP Community Toolkit v2.2


I am extremely excited to announce the latest update of the UWP Community Toolkit, v2.2. The credit for this release, as always, goes to the community, who have continued to support and improve the toolkit for each release. V2.2 introduces a new Parsers package, new controls and helpers, and many improvements and bug fixes to existing APIs.

Below is a quick list of the highlights of this release. Make sure to visit the release notes for the complete list of what is new in v2.2.

Microsoft.Toolkit.Parsers and MarkdownTextBlock

V2.0 of the UWP Community Toolkit introduced several new .NET Standard packages, with a commitment to support more cross platform APIs. Building on top of that commitment, V2.2 introduces a new .NET Standard package: Microsoft.Toolkit.Parsers. This package includes parsers for markdown and RSS that can be used across UWP and other platforms that support .NET Standard 1.4 or above.

In addition, the MarkdownTextBlock control now leverages the new parser and supports:

  • Code syntax highlighting
  • SVG images and image width/height syntax
  • Relative URIs for images and links
  • Comments and more

Staggered panel

A new panel has been added to enable staggered layout where items are added to columns with the least amount of space.

XAML Brushes

V2.2 introduces a new namespace (Microsoft.Toolkit.Uwp.UI.Media) and adds seven composition-based brushes, including a RadialGradientBrush. The backdrop brushes apply the effect to whatever is behind the element in the app.

MSAL support and cross-platform Microsoft Graph and OneDrive service

A .NET Standard version of both the Graph and OneDrive services has been introduced and the old OneDrive service has been marked obsolete. The .NET Standard versions of each service now support Microsoft Authentication Library (MSAL) and consumption outside of purely UWP apps. The new service can be found in the Microsoft.Toolkit.Services package.

Notifications package support for My People shoulder taps

With the latest update, the notifications package now includes new toast features for My People shoulder taps, so developers can easily enable this feature in their apps.

Built by the Community

This update would not have been possible if it wasn’t for the community support and participation. If you are interested in participating in the development, but don’t know how to get started, check out our “help wanted” issues on GitHub.

As a reminder, although most of the development efforts and usage of the UWP Community Toolkit is for Desktop apps, it also works great on Xbox One, Mobile, HoloLens, IoT and Surface Hub devices. You can get started by following this tutorial, or preview the latest features by installing the UWP Community Toolkit Sample App from the Microsoft Store.

To join the conversation on Twitter, use the #uwptoolkit hashtag.

Happy coding!

The post The UWP Community Toolkit v2.2 appeared first on Windows Developer Blog.

The R Consortium has funded half a million dollars to R projects


The R Consortium passed a significant milestone this month: since its inception, the non-profit body has provided more than US$500,000 in grant funding to projects proposed by the R community. The R Consortium uses the dues from its member organizations to fund grant proposals, which are reviewed twice a year by its Infrastructure Steering Committee. (If you'd like to propose a project, proposals for the next round are being accepted through April 1.)

New projects funded in this round include:

  • Creating a new data type for R to unify the "units" and "errors" packages
  • Updating the R module for the Simplified Wrapper and Interface Generator (SWIG) to support modern R programming practices like reference classes
  • Providing an API and test framework to underlie the "future" package
  • A package to process spatiotemporal data held on servers with a dplyr-like syntax

With these new grants, the R Consortium has funded 21 projects in total, from R packages to community events to developer tools. You can read more about the new projects in the announcement linked below.

R Consortium: Announcing the second round of ISC Funded Projects for 2017


Bing Maps Dev APIs Winter Update 2018


The Bing Maps team has been working hard this winter and has added some very useful features. So far this year the team has announced many improvements in geocoding and routing, as well as an open source Fleet Tracking solution. Here are some other exciting things the Bing Maps team has been working on.

Bing Maps V8 Web Control Update

After being on a holiday code freeze, the Bing Maps V8 Control saw two updates, one in January and the other in February. The January update added a new feature called Configuration Driven Maps. Configuration driven maps allow you to quickly and easily create a map using your data with little to no coding required. Instead, create a JSON configuration file that specifies the data sets you want to render along with some map options, and then easily generate a map from this. Besides providing a minimal coding option for creating map apps, configuration driven maps are great for creating reusable map apps which are data driven. Documentation | Try it now

The February update added support for truck routing into the directions module. This allows you to calculate routes which take truck attributes such as weight, cargo type and vehicle dimensions into consideration, while also providing most of the existing features in the directions module such as an input panel and nicely formatted turn by turn directions. Draggable routes are not currently supported for truck routing but are planned. Try it now

Bing Maps V8 Code Samples Project

Bing Maps V8 has a very useful interactive SDK which is a great place to learn the basics of how to develop with the API. In addition to this, last year the team released an open source project which includes more in-depth code samples. Nearly any sample we create to help a developer is added to this project. There are now over 240 code samples. Check out the GitHub Project or view the Live Code Samples site. Here are a few of the new samples:

Select Pushpins with an Isochrone

This sample combines the new REST Isochrone API with the Spatial Math module in Bing Maps V8 to show how data can be easily filtered based on drive time. Try it now

Generate Convex and Concave Hulls

This sample shows how to use the Spatial Math module to generate convex and concave hulls around data. Convex hulls generate a polygon which is similar to wrapping your data with an elastic band. A concave hull is like wrapping your data with shrink wrap and vacuuming out the air to pull in the sides, creating a polygon which better represents the area the data covers. Try it now

Snap Drawing to Shape

This sample loads a map with a polygon on it; you then draw a polygon whose edges are near the existing polygon, and when you are done it snaps the edges together to create a seamless edge between the two polygons. This is very useful if you want to be able to draw territories on the map. Try it now

New REST Service APIs Update

In December the Bing Maps team launched three new REST APIs: Truck Routing, Isochrone, and Snap to Road. Since then, the team has added support for asynchronous truck routing and isochrone requests, thus allowing longer routes and larger isochrones to be calculated. Additionally, the origin coordinate used to calculate an isochrone is now also returned in the response. Find out more about the new APIs here.

Bing Maps .NET REST Toolkit Update

Just over a year ago, the Bing Maps team released the Bing Maps .NET REST Toolkit as an open source project on GitHub. This project makes it very easy to use the Bing Maps REST services in .NET and can significantly reduce development time while also ensuring that best practices are used. In addition, it also provides some extended capabilities. Some of the recently added features include:

  • Support for the new Truck Routing, Isochrone, Distance Matrix and Snap to Road APIs.
  • An extension that adds support for travelling salesman waypoint optimization has been added. It can be used on its own to optimize the order of waypoints. It is also integrated into the route request options, thus allowing you to calculate routes with optimized waypoints with only one additional line of code in the request. Find out more here.
  • Added options for setting proxy settings and QPS limits.
  • Support for .NET Standard 2.

A NuGet package is available, making it easy to add this library to your project. Find out more on the GitHub project page.


ASP.NET Core 2.1-preview1: Introducing HTTPClient factory


HttpClient factory is an opinionated factory for creating HttpClient instances to be used in your applications. It is designed to:

  1. Provide a central location for naming and configuring logical HttpClients. For example, you may configure a client that is pre-configured to access the GitHub API.
  2. Codify the concept of outgoing middleware via delegating handlers in HttpClient and implement Polly-based middleware to take advantage of that.
    1. HttpClient already has the concept of delegating handlers that can be linked together for outgoing HTTP requests. The factory will make registering these per named client more intuitive, as well as implement a Polly handler that allows Polly policies to be used for Retry, Circuit Breakers, etc.
  3. Manage the lifetime of HttpMessageHandlers to avoid common problems that can occur when managing HttpClient lifetimes yourself.

Usage

There are several ways that you can use HttpClient factory in your application. For the sake of brevity we will only show you one of the ways to use it here, but all options are being documented and are currently listed in the HttpClientFactory repo wiki.

In the rest of this section we will use HttpClient factory to create a HttpClient to call the default API template from Visual Studio, the ValuesController API.

1. Create a typed client

A typed client is a class that accepts a HttpClient and optionally uses it to call some HTTP service. For example:
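Here's a minimal sketch of such a client; the ValuesClient name and the api/values route are illustrative:

public class ValuesClient
{
    private readonly HttpClient _client;

    public ValuesClient(HttpClient client)
    {
        _client = client;
    }

    public async Task<IEnumerable<string>> GetValues()
    {
        var response = await _client.GetAsync("api/values");
        response.EnsureSuccessStatusCode();

        // ReadAsAsync comes from Microsoft.AspNet.WebApi.Client (see the note below).
        return await response.Content.ReadAsAsync<IEnumerable<string>>();
    }
}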

NOTE: The Content.ReadAsAsync method comes from the Microsoft.AspNet.WebApi.Client package. You will need to add that to your application if you want to use it.

The typed client is activated by DI, meaning that it can accept any registered service in its constructor.

2. Register the typed client

Once you have a type that accepts a HttpClient you can register it with the following:
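A minimal sketch, assuming the base address lives in configuration under an illustrative key:

services.AddHttpClient<ValuesClient>(client =>
{
    client.BaseAddress = new Uri(Configuration["ValuesClient:BaseAddress"]);
});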

The function here will execute to configure your HttpClient instance before it is passed to the ValuesClient. A typed client is, effectively, a transient service, meaning that a new instance is created each time one is needed and it will receive a new HttpClient instance each time it is constructed. This means that your configuration func, in this case retrieving the URI from configuration, will run every time something needs a ValuesClient.

3. Use the client

Now that you have registered your client you can use it anywhere that can have services injected by DI. For example, I could have a Razor Pages page model like this:
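A sketch; the IndexModel name and Values property are illustrative:

public class IndexModel : PageModel
{
    private readonly ValuesClient _client;

    public IndexModel(ValuesClient client)
    {
        _client = client;
    }

    public IEnumerable<string> Values { get; private set; }

    public async Task OnGetAsync()
    {
        Values = await _client.GetValues();
    }
}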

or perhaps like this:
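Again a sketch, this time returning an IActionResult from the handler:

public class IndexModel : PageModel
{
    private readonly ValuesClient _client;

    public IndexModel(ValuesClient client)
    {
        _client = client;
    }

    public IEnumerable<string> Values { get; private set; }

    public async Task<IActionResult> OnGetAsync()
    {
        Values = await _client.GetValues();
        return Page();
    }
}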

Diagnostics

By default, when you use a HttpClient created by HttpClient factory, you will see logs like the following appear:

Log of outgoing HTTP requests

The log messages about starting and processing a HTTP request are being logged because we are using a HttpClient created by the HttpClient factory. From these 7 log messages you can see:

  1. An incoming request to localhost:5001; in this case, the browser navigating to my Razor Pages page.
  2. MVC selecting a handler for the request, the OnGetAsync method of my PageModel.
  3. The beginning of an outgoing HTTP request; this marks the start of the outgoing pipeline that we will discuss in the next section.
  4. We send a HTTP request with the given verb.
  5. Receive the response back in 439.6606 ms, with a status of OK.
  6. End the outgoing HTTP pipeline.
  7. End and return from our handler.

If you set the LogLevel to at least Debug then we will also log header information. In the following screenshot I added an accept header to my request, and you can see the response headers:

Debug logs showing outgoing headers

The outgoing middleware pipeline

For some time now ASP.NET has had the concept of middleware that operates on an incoming request. With HttpClientFactory we are going to bring a similar concept to outgoing HTTP requests, using the DelegatingHandler type that has long existed in .NET. As an example of how this works, we will look at how we generate the log messages shown in the previous section:
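A simplified sketch of such a logging handler; the message text and timing approach are illustrative:

public class LoggingHttpMessageHandler : DelegatingHandler
{
    private readonly ILogger _logger;

    public LoggingHttpMessageHandler(ILogger<LoggingHttpMessageHandler> logger)
    {
        _logger = logger;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Log the start of the outgoing request.
        _logger.LogInformation("Sending HTTP request {Method} {Uri}",
            request.Method, request.RequestUri);

        var stopwatch = Stopwatch.StartNew();
        var response = await base.SendAsync(request, cancellationToken);
        stopwatch.Stop();

        // Log the end of the request with elapsed time and status code.
        _logger.LogInformation("Received HTTP response after {Elapsed}ms - {StatusCode}",
            stopwatch.ElapsedMilliseconds, response.StatusCode);

        return response;
    }
}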

NOTE: This code is simplified for the sake of brevity and ease of understanding, the actual class can be found here

Let’s look at another example that isn’t already built in. When using client-based service discovery systems, you will ask another service for the host/port combination that you should use to communicate with a given service type. For example, you could be using the HTTP API of Consul.io to resolve the name ‘values’ to an IP and port combination. In the following handler we will replace the incoming host name with the result of a request to an IServiceRegistry type that would be implemented to communicate with whatever service discovery system you used. In this way we could make a request to ‘http://values/api/values’ and it would actually connect to ‘http://<resolved-ip>:<port>/api/values’.
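A sketch; IServiceRegistry and its LookupAsync method are assumed abstractions over whatever service discovery system you use:

public interface IServiceRegistry
{
    // Resolves a logical service name like "values" to a host and port.
    Task<(string Host, int Port)> LookupAsync(string serviceName);
}

public class ServiceDiscoveryHandler : DelegatingHandler
{
    private readonly IServiceRegistry _registry;

    public ServiceDiscoveryHandler(IServiceRegistry registry)
    {
        _registry = registry;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Replace the logical host name with the resolved host and port.
        var (host, port) = await _registry.LookupAsync(request.RequestUri.Host);
        var builder = new UriBuilder(request.RequestUri)
        {
            Host = host,
            Port = port
        };
        request.RequestUri = builder.Uri;

        return await base.SendAsync(request, cancellationToken);
    }
}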

NOTE: This sample is inspired by the CondensorDotNet project. Which has a HttpClientHandler that works the same way.

We can then register this with the following:
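A sketch, reusing the typed client from earlier; ConsulServiceRegistry is a hypothetical implementation of IServiceRegistry:

services.AddTransient<ServiceDiscoveryHandler>();
services.AddSingleton<IServiceRegistry, ConsulServiceRegistry>();

services.AddHttpClient<ValuesClient>(client =>
{
    client.BaseAddress = new Uri("http://values");
})
.AddHttpMessageHandler<ServiceDiscoveryHandler>();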

The type being given to AddHttpMessageHandler must be registered as a transient service. However, because we have the IServiceRegistry as its own service, it can have a different lifetime from the handler, allowing caching and other features to be implemented in the service registry instead of the handler itself.

Now that we’ve registered the handler, all requests will have their Host and Port set to whatever is returned from the IServiceRegistry type. If we continued our example, we would implement IServiceRegistry to call the Consul.io HTTP endpoint to resolve the URI from the requested host name.

HttpClient lifetimes

In general you should get a HttpClient from the factory per unit of work. In the case of MVC this means you would generally accept a typed client in the constructor of your controller and let it be garbage collected when the controller does. If you are using IHttpClientFactory directly, which we don’t talk about in this post but can be done, then the equivalent would be to create a HttpClient in the constructor and let it be collected the same way.

Disposing of the client is not mandatory, but doing so will cancel any ongoing requests and ensure the given instance of HttpClient cannot be used after Dispose is called. The factory takes care of tracking and disposing of the important resources that instances of HttpClient use, which means that HttpClient instances can generally be treated as .NET objects that don’t require disposing.

One effect of this is that some common patterns that people use today to handle HttpClient instances, such as keeping a single HttpClient instance alive for a long time, are no longer required. Documentation about what exactly the factory does and what patterns it resolves will be available, but hasn’t been completed yet.

In the future we hope that a new HttpClientHandler will mean that HttpClient instances created without the factory will also be able to be treated this way. We are working on this in the corefx GitHub repositories now.

Future

Before 2.1 is released

  • Polly integration.
  • Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner. We will be building a package that allows easy integration of Polly policies with HttpClients created by the HttpClient factory.

Post 2.1
  • Auth Handlers.
  • The ability to have auth headers automatically added to outgoing HTTP requests.

Conclusion

The HttpClient factory is available in 2.1 Preview 1 apps. You can ask questions and file feedback in the HttpClientFactory GitHub repository.

ASP.NET Core 2.1.0-preview1: Improvements to IIS hosting

The ASP.NET Core Module (ANCM) is a global IIS module that has been responsible for proxying requests from IIS to your backend ASP.NET Core application running on Kestrel. Since 2.0, we have been hard at work to bring two major improvements to ANCM: version agility and performance. In the 2.1.0-preview1 release, we have chosen not to update the global module to avoid impacting any existing 1.x/2.0 applications. This blog post details the changes in ANCM and how you can try out these changes today.

Version agility

It has been hard to iterate on ANCM since we’ve had to ensure forward and backward compatibility between every version of ASP.NET Core and ANCM that has shipped thus far. To mitigate this problem going forward, we’ve refactored our code into two separate components: the ASP.NET Core Shim (shim) and the ASP.NET Core Request Handler (request handler). The shim (aspnetcore.dll), as the name suggests, is a lightweight shim, whereas the request handler (aspnetcorerh.dll) does all the request processing. Going forward, the shim will ship globally and will continue to be installed via the Server Hosting Bundle. The request handler will now ship via a NuGet package, Microsoft.AspNetCore.Server.IIS, which you can directly reference in your application or consume via the ASP.NET metapackage or shared runtime. As a consequence, two different ASP.NET Core applications running on the same server can use different versions of the request handler.

Performance

In addition to the packaging changes, ANCM also adds support for an in-process hosting model. Instead of serving as a reverse proxy, ANCM can now boot the CoreCLR and host your application inside the IIS worker process. Our preliminary performance tests have shown that this model delivers 4.4x the request throughput compared to hosting your dotnet application out-of-process and proxying the requests.

How do I try it?

If you have already installed the 2.1.0-preview1 Server Hosting Bundle, you can install the latest ANCM by running this script.

Alternatively, you can deploy an Azure VM which is already setup with the latest ANCM by clicking the Deploy to Azure button below.
 

Create a new project or update your existing project

Now that we have an environment to publish to, let’s create a new application that targets 2.1.0-preview1 of ASP.NET Core.
Alternatively, you can upgrade an existing project by following the instructions on this blog post.

Modify your project

Let’s go ahead and modify our project by setting a project property to indicate that we want our published application to run in-process.
Add this to your csproj file.
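A sketch of the property (this is the AspNetCoreModuleHostingModel property that the Web SDK reads at publish time, as described below):

<PropertyGroup>
  <AspNetCoreModuleHostingModel>inprocess</AspNetCoreModuleHostingModel>
</PropertyGroup>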

Publish your project

Create a new publish profile targeting the Azure VM that you just created. If you’re using Visual Studio, in Solution Explorer right-click the project and select Publish to open the Publish wizard, where you can choose to publish to that Azure VM.
You may need to allow WebDeploy to publish to a server using an untrusted certificate. This can be accomplished by adding the following property to your publish profile (.pubxml file):
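For example, inside the PropertyGroup of your .pubxml (a sketch):

<PropertyGroup>
  <AllowUntrustedCertificate>True</AllowUntrustedCertificate>
</PropertyGroup>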
If you’re running elsewhere, go ahead publish your app to a Folder and copy over your artifacts or publish directly via WebDeploy.

web.config

As part of the publish process, the WebSDK will read the AspNetCoreModuleHostingModel property and transform your web.config to look something like this. (Observe the new hostingModel attribute)
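A sketch of the transformed section; the processPath and arguments values depend on your app:

<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
  </system.webServer>
</configuration>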

Debugging

To view the Cloud Explorer, select View > Cloud Explorer on the menu bar
If you’ve been following along using an Azure VM, you can enable remote debugging on your Azure VM via the cloud explorer. In the Actions tab associated with your VM, you should be able to Enable Debugging.
Once you’ve enabled remote debugging, you should be able to attach directly to the w3wp.exe process. If you don’t see the process listed, you may need to send a request to your server to force IIS to start the worker process.
If you’ve been following along locally, you can use Visual Studio to attach directly to your IIS worker process and debug your application code running in the IIS worker process as shown below. (You may be prompted to restart Visual Studio as an Administrator for this.)
We don’t yet have an experience for debugging with IIS Express. At the moment, you will have to publish to IIS and then attach a debugger.

Switching between in-process and out-of-process

Switching hosting models can be a deployment-time decision. To change between hosting models, all you have to do is change the hostingModel attribute in your web.config from inprocess to outofprocess.
You can see this easily in a simple app that prints either Hello World from dotnet or Hello World from w3wp, depending on your hosting model.

Announcing new milestones for Microsoft Cognitive Services vision and search services in Azure


Artificial Intelligence (AI) has emerged as one of the most disruptive forces behind the digital transformation of business. At Microsoft, we believe everyone—developers, data scientists and enterprises—should have access to the benefits of AI to augment human ingenuity in unique and differentiated ways. We’ve been conducting research in AI for more than two decades and infusing it into our products and services. Now we’re bringing it to everyone through simple, yet powerful tools. One of those tools is Microsoft Cognitive Services, a collection of cloud-hosted APIs that let developers easily add AI capabilities for vision, speech, language, knowledge and search into applications, across devices and platforms such as iOS, Android and Windows.

To date, more than a million developers have already discovered and tried our Cognitive Services, and many new customers are harnessing the power of AI, such as major auto insurance provider Progressive, best known for Flo, its iconic spokesperson. The company wanted to take advantage of customers’ increasing use of mobile channels to interact with its brand. Progressive used Microsoft Azure Bot Service and Cognitive Services to quickly and easily build the Flo Chatbot—currently available on Facebook Messenger—which answers customer questions, provides quotes and even offers a bit of witty banter in Flo’s well-known style.

Today, we’re announcing new milestones for Cognitive Services vision and search services in Azure.

Bringing vision capabilities to every developer

For years, Microsoft researchers have been pushing the boundaries of computer vision, making systems able to more accurately identify images. The following milestones are among the many examples of how we’re integrating our research advances into our enterprise services.

Today, I’m pleased to announce the public preview of Custom Vision service on the Azure Portal (Figure 1). Microsoft Custom Vision service makes it possible for developers to easily train a classifier with their own data, export the models and embed these custom classifiers directly in their applications, and run them offline in real time on iOS, Android and many other edge devices. We built Custom Vision with state-of-the-art machine learning that offers developers the ability to train their own classifier to recognize what matters in their scenarios.

With a couple of clicks, Custom Vision service can be used for a multiplicity of scenarios: retailers can easily create models that can auto-classify images from their catalogs (dresses vs shoes, etc.), social sites can more effectively filter and classify images of specific products, or national parks can detect whether images from cameras include wild animals or not. Last month, we also announced Custom Vision Service is able to export models to the CoreML format for iOS 11 and to the TensorFlow format for Android. The exported models are optimized for the constraints of a mobile device, so classification on device happens in real time.

Figure 1: Custom Vision service, now available in Azure preview

The Face API is a generally available cloud-based service that provides face and emotion recognition. It detects the location and attributes of human faces and emotions in an image, which can be used to personalize user experiences. With the Face API, developers can help determine if two faces belong to the same person, identify previously tagged people, find similar-looking faces in a collection, and find or group photos of the same person from a collection.

Starting today, the Face API now integrates several improvements, including million-scale recognition to better help customers for their vision scenarios (Figure 2). The million-scale recognition capabilities represent a new type of person group now with up to a million people, and a new type of face list with up to a million faces. With this update, developers can now teach the Face API to recognize up to 1 million people and still get lightning-fast responses.

Figure 2: The Face API now integrates several improvements, including million-scale recognition

Harnessing search capabilities for every developer

Another key area of AI investment has been search: everyone around the globe can gather rich information from Bing Search to query the web, and we’re also empowering developers to leverage it through multiple search APIs, embedding search into any app with a few lines of code to help users find the right information among the knowledge across the planet.

Part of the search capabilities of Cognitive Services, Bing Entity Search brings rich context about people, places, things and local businesses to any app, blog or website for a more engaging user experience. I’m also pleased to announce that Bing Entity Search is now generally available today on the Azure Portal.

With Bing Entity Search, developers can now identify the most relevant entity based on searched terms and provide primary details about those entities (Figure 3). Entities span multiple international markets and market types, including information about famous people, places, movies, TV shows, video games and books.

Many scenarios can be covered with Bing Entity Search: for instance, a messaging app could provide an entity snapshot of a restaurant, making it easier for a group to plan an evening. A social media app could augment users’ photos with information about the locations of each photo. A news app could provide entity snapshots for entities in the article.

Figure 3: Augmenting content with entity search results

Today’s milestones illustrate our commitment to making our AI Platform suitable for every business scenario, with enterprise-grade tools that make application development easier while respecting customers’ data.

To learn more and start building vision and search intelligent apps, please visit the Cognitive Services site in Azure and our documentation pages.

I invite you to visit www.azure.com/ai to learn more about how AI can augment and empower your digital transformation efforts. We’ve also launched the AI School to help developers get up to speed with these AI technologies.

Joseph

@josephsirosh

Azure Data Lake launches in the West Europe region


Azure Data Lake Store and Azure Data Lake Analytics are now generally available in the West Europe region, in addition to the previously announced regions of East US 2, Central US, and North Europe.

Azure Data Lake Store is a hyperscale enterprise data lake in the cloud that is secure, massively scalable, and built to the open HDFS standard. Data from disparate data sources can be brought together into a single data lake so that all your analytics can run in one place. From first-class integration with AAD to fine-grained access control, built-in enterprise-grade security makes managing security easy for even the largest organizations. With no limits to the size of data and the ability to run massively parallel analytics, you can now unlock value from all your analytics data at ultra-fast speeds.

Azure Data Lake Analytics is a distributed on-demand analytics job service that dynamically scales so you can focus on achieving your business goals, not on managing distributed infrastructure. The analytics service can handle jobs of any scale by letting you select how many parallel compute resources a job can scale to. You only pay for your job when it is running, making it cost-effective. The service offers U-SQL as a simple, extensible, and highly scalable language framework that uses the declarative nature of SQL to scale out your custom code written in the language of your choice, be it C#, Python, or R. 

To start using these services, visit the Azure Data Lake Store and Azure Data Lake Analytics getting started pages.

For details on regional pricing, see the Azure Data Lake Store and Azure Data Lake Analytics pricing pages.

For more information on migrating your existing Azure Data Lake Store to the West Europe region, visit the migration guidance page.  
