
Windows Template Studio 1.7 released!


We’re extremely excited to announce Windows Template Studio 1.7! In this release, our two big items are Visual Basic support and the Prism framework! We love how the community is helping. If you’re interested, please head over to WTS’s Github.

What’s new:

For the full list of adjustments in the 1.7 release, see the full changelog on WTS’s Github.

New Programming Language Support:

  • Visual Basic

New Framework Support:

  • Prism framework

Updated Feature:

  • URI Schema launching for Caliburn.Micro added.

Template improvements:

  • Tweaks for consistency across all frameworks and languages
  • Fix for a crash involving a toast notification and the first-time prompt.
  • Fix for JSON Helper (#1659)

Improvements to the Wizard:

  • Lots of under the hood bug fixes and code improvements
  • Changed how we handle verifying new templates. This process should be much faster now.

Improvements to Process and Tooling:

  • Improved unit testing for verifying templates

How to get the update:

There are two paths to update to the newest build.

  • Already installed: Visual Studio should auto-update the extension. To force an update, go to Tools->Extensions and Updates, open the Updates expander on the left, select Windows Template Studio, and click “Update.”
  • Not installed: Head to https://aka.ms/wtsinstall, click “download” and double click the VSIX installer.

Known issue

We are tracking an issue (#1532) when uninstalling / upgrading where you may get an error of “A value for ‘Component’ needs to be specified in the catalog.”

If you get this error, we need logs so we can track it down with the help of the Visual Studio team. We don’t know how to reproduce it, but we know a few people have hit this scenario.

Instructions on how to capture these logs are in the tracking issue on GitHub.

What else is cooking for next versions?

We love all the community support and participation. In addition, here are just a few of the things we are currently building out that will appear in future builds:

  • Improved user interface in-line with Visual Studio
  • Continued refinement with Fluent design in the templates
  • Work for supporting multiple projects in a single solution
  • Ink templates
  • Improved Right-click->add support for existing projects

In partnership with the community, we will continue cranking out and iterating on new features and functionality. We’re always looking for additional people to help out; if you’re interested, please head to our GitHub at https://aka.ms/wts. If you have an idea or feature request, please make the request!

The post Windows Template Studio 1.7 released! appeared first on Windows Developer Blog.


Python in Visual Studio Code – Jan 2018 Release


We’re pleased to announce that the January 2018 release of the Microsoft Python extension for Visual Studio Code is now available. You can download the Microsoft Python extension for VS Code from the marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the VS Code documentation.

In this release we closed a total of 72 issues focusing on linting improvements, support for virtual environments, and other general improvements. Keep on reading for some of the highlights.

Improved Default Linter Rules

The default Pylint rules used in the extension have been improved to reduce noise in the editor. By default, only errors and warnings are shown that are useful for catching coding mistakes and are likely to result in runtime exceptions. We have disabled the style rules and most of the warnings, resulting in less noise in the editor when using the default settings. You can view the default ruleset definition in our Linting Python in VS Code documentation.

Pylint’s default options activate all linter rules (also known as checkers), including many style warnings. Previously, this meant that many warnings complaining about the names of variables and functions were shown by default. While these style warnings can be useful for some code bases, they can also add a lot of unnecessary noise for others. For example, the below warning tells me that I should be using uppercase for the “urlpatterns” variable, which goes against the coding style being used in this codebase:

You can customize which rules are enabled to match the style used in your codebase by:

  • Adding a .pylintrc file in your workspace, e.g. in a specific folder with .py files or in the workspace root.
  • Modifying the python.linting.pylintArgs setting.
  • Disabling our ruleset by setting python.linting.pylintUseMinimalCheckers to false, which will enable Pylint's own default rules.

Commands for Configuring Linting

There are new linting commands that allow you to configure some linter settings without having to manually edit configuration files. The Python: Select Linter command allows you to select your linter of choice, and the Python: Enable Linting command allows you to enable and disable linting.

Selecting a linter allows you to easily switch from Pylint to Flake8 or other supported linters. Note that using this command will only enable a single linter to be run; if you want to configure multiple linters, you can do so by manually editing the configuration options.

Terminal Now Uses Conda and Virtual Environments

Creating a new Python terminal will now automatically activate the currently selected virtual or conda environment.

Below is a simple “hello world” flask web app. Flask is not installed globally on this machine, but it is installed in the virtual environment in the app’s current folder.

You can activate the virtual environment using the Python: Select Interpreter command:

You can create a new terminal using the Python: Create Terminal command, which will open a command window and activate the selected virtual or conda environment:

You can then run the app using python helloflask.py, and the app will run successfully because flask is available in the activated virtual environment:

In previous versions this would fail because it would run the file using the global Python interpreter instead of the virtual environment’s interpreter, which does not have flask installed.

The Python: Run File in Terminal command will also activate the selected environment before running the Python script. If for any reason you want to disable virtual environment terminal activation, set python.terminal.activateEnvironments to false.

Try it out today

Try out the Python VS Code extension today, and be sure to file an issue on our Microsoft/vscode-python GitHub page if you run into any problems.

We are continuing to improve the extension and are releasing a new version every month, so be sure to tell us what you think!

Virtual Network Service Endpoints and Firewalls for Azure Storage now generally available


This blog post was co-authored by Anitha Adusumilli, Principal Program Manager, Azure Networking.

Today we are announcing the general availability of Firewalls and Virtual Networks (VNets) for Azure Storage along with Virtual Network Service Endpoints. Azure Storage Firewalls and Virtual Networks uses Virtual Network Service Endpoints to allow administrators to create network rules that allow traffic only from selected VNets and subnets, creating a secure network boundary for their data. These features are now available in all Azure public cloud regions and Azure Government. As part of moving to general availability it is now backed by the standard SLAs. There is no additional billing for virtual network access through service endpoints. The current pricing model for Azure Storage applies as is today.

Customers often prefer multiple layers of security to help protect their data. This includes network-based access control protections as well as authentication and authorization-based protections. As part of the general availability of Firewalls and Virtual Networks for Storage and VNet Service Endpoints, we enable network-based access control. These new network-focused features allow customers to define network access-based security, ensuring that only requests coming from approved Azure VNets or specified public IP ranges are allowed to a specific storage account. Customers can combine existing authorization mechanisms with the new network boundaries to better secure their data.

To enable VNet protection, first enable service endpoints for storage in the VNet. Virtual Network Service Endpoints allow you to secure your critical Azure service resource to only your virtual network. Service endpoints also provide optimal routing for Azure traffic over the Azure backbone in scenarios where Internet traffic is routed through virtual appliances or on-premises.

On the storage account, you can allow access from one or more VNets. You can also allow access from one or more public IP ranges. A detailed explanation of how to enable the network functionality can be found at Configure Azure Storage Firewalls and Virtual Networks.

Next steps

To get started, refer to the documentation Virtual Network Service Endpoints and Configure Azure Storage Firewalls and Virtual Networks.

To allow access from on-premises networks and support for various Azure services to your secured storage accounts, refer to our documentation.

For feature details and scenarios please watch the Microsoft Ignite session, “Network security for applications in Azure”.

Top stories from the VSTS community – 2018.02.02

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

Work can flow across the Sprint boundary – Martin Hinshelwood
There is nothing in the Scrum Guide that says that you can’t have workflow across the Sprint boundary. I’m going to suggest that not... Read More

Enhancements to Azure Budgets API supporting Resource Groups and Usage Budgets


Last month we announced a preview release of subscription level budgets for enterprise customers; that was only the first step. Today we're announcing the release of additional features that support the scoping of more granular budgets with filters as well as support for usage and cost budgets. We've heard from our customers that multiple teams share a subscription and that resource groups serve as cost boundaries. Today's updates will support resource group and resource level budgets in addition to the subscription level budgets. The budgets API is now generally available and we welcome your feedback.

The preview release of budgets only supported cost based budgets. In this release we are also adding support for usage budgets. Additionally, support for filters enables you to define the scope at which a budget applies.

Here are a few common scenarios that the budgets API addresses:

  • A budget for the subscription with no constraints.
  • A resource group budget with no constraints.
  • A budget for multiple resource groups within a subscription.
  • A budget for multiple resources within a resource group or a subscription.
  • Budgets based on usage on a subscription or resource group.

 

This enables most common scenarios where resource groups or specific resources within resource groups need to be budgeted. The table describes the specific filters available for crafting a budget of different types and scopes.

Support for Azure Action Groups

One of the most requested features from the preview was the ability to orchestrate services based on cost thresholds. Integrating with Action Groups allows us to support multiple notification channels including webhooks. With webhooks, you can trigger an Azure Logic App and/or an Azure Automation script to take any desired action. For instance, you might want to notify a developer via email on the first threshold, but start to spin down resources on the next.

Limitations

The budgets API has a few limitations:

  • Budgets are currently only supported for Enterprise customers.
  • Calls to the budgets API require a user context, so you will need to call the budgets API in the context of a user and not a service principal. For customers using MFA, the ARM client is an option for getting a user token to use with the request.
  • Usage-based budgets require a meter; the budgets API enforces a single unit of measure for all meters within a budget. For instance, if you are budgeting your compute hours, you cannot include networking meters that measure GB transferred in the same budget.

Enhancements to Cost Management APIs

Support for New Data Indicator (E-Tags)

Customers who are heavy users of our APIs have requested a new data indicator. This will help them avoid polling the API and getting a full response back that has not changed since the last call. To support this need, we are releasing an updated version of the usage details API that will use E-Tags to let callers know when data has been refreshed. Each call to the API returns an E-Tag. In subsequent calls to the same API, pass the captured E-Tag with the key If-None-Match in the request header; the API will not return any data if there has been no change since the E-Tag was generated.
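
As a rough illustration of the calling pattern (the endpoint URI is a placeholder, and exactly how the service signals “no change” is an assumption here; check the API reference for the exact behavior):

    // Sketch only: requestUri stands in for the usage details endpoint URL.
    static async Task<(string etag, string body)> GetUsageDetailsAsync(
        HttpClient client, string requestUri, string cachedETag)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, requestUri);
        if (!string.IsNullOrEmpty(cachedETag))
        {
            // Send the E-Tag captured from the previous response.
            request.Headers.TryAddWithoutValidation("If-None-Match", cachedETag);
        }

        var response = await client.SendAsync(request);

        // Assumption: "no data returned" surfaces as a 304 Not Modified / empty body.
        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            return (cachedETag, null);
        }

        var newETag = response.Headers.ETag?.Tag ?? cachedETag;  // capture for the next poll
        var body = await response.Content.ReadAsStringAsync();
        return (newETag, body);
    }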

ARM APIs for Pricesheet and Marketplace Charges

One ongoing initiative for the team has been moving all our APIs from the legacy key-based APIs to ARM (Azure Resource Manager) based APIs. With the release of the Marketplace Charges and Pricesheet APIs this month, we will have converted all the major APIs to ARM.

We continue to work on improving our budgets feature and have a few releases planned over the next few months. As always, we welcome your feedback on how we could make the experience better.

Enhancements to Cost Management APIs


In December we announced the release of ARM APIs for usage details. We continue that transformation to ARM APIs with this release of the Marketplace charges API for Enterprise and Web Direct customers with a few exceptions documented in the limitations and a Price Sheet API for Enterprise customers.

The updated ARM API offers the benefits of:

  • Migrating from a key-based authorization model to ARM-based authentication, resulting in an improved security posture and the ability to utilize ARM RBAC for authorization.
  • Adding support for Web Direct subscriptions, with a few exceptions documented below.

Limitations on subscriptions

The following subscription types are currently not supported with the Marketplace Charges API:

  • MS-AZR-0145P
  • MS-AZR-0146P
  • MS-AZR-159P
  • MS-AZR-0036P
  • MS-AZR-0143P
  • MS-AZR-0015P
  • MS-AZR-0144P

Support for E-Tags

Customers who are heavy users of our APIs have requested a new data indicator. This helps them avoid polling the API and getting a full response back that has not changed since the last call. To address this request, we are releasing an updated version of the legacy usage details API that will use E-Tags to let callers know when data has been refreshed. Each call to the API returns an E-Tag. In subsequent calls to the same API, pass the captured E-Tag with the key If-None-Match in the request header. The API will not return any data if there has been no change since the E-Tag was generated. Currently the refresh frequency is 4 times a day, as we work on continually lowering the latency between billing events and usage details.

Azure #CosmosDB and Microsoft’s Project Olympus honored in InfoWorld’s 2018 Technology of the Year Awards


The word is out, and the industry is taking notice. Azure Cosmos DB is the world’s first globally distributed, multi-model database service with native NoSQL support. Designed for the cloud, Azure Cosmos DB enables you to build planet-scale applications that bring data to where your users are, with SLA guarantees for low latency, throughput, and 99.99% availability.

The experts at IDG's InfoWorld recently recognized Azure Cosmos DB in the InfoWorld Technology of the Year Awards, zeroing in on its “innovative approach to the complexities of building and managing distributed systems,” which includes recognition for leveraging the work of Turing Award winner Leslie Lamport to deliver multiple consistency models. Azure Cosmos DB was also recognized for delivering a globally distributed system where users anywhere in the world can see the same version of data, no matter their location.

In addition, InfoWorld complimented the flexibility and variety of use cases with Azure Cosmos DB, from JSON-based document stores to support for MongoDB APIs and a SQL query option for Azure’s Table Storage.

 

“Do you need a distributed NoSQL database with a choice of APIs and consistency models? That would be Microsoft’s Azure Cosmos DB.”—InfoWorld, Technology of the Year 2018: The best hardware, software, and cloud services

 

InfoWorld noted that 2017 was “the year when you could pick a database without making huge compromises,” exactly the advantage of the multiple consistency models available in Azure Cosmos DB. With five distinct options, you no longer have to choose between slow-but-accurate and fast-but-inaccurate data.

Learn more in our free e-book, go hands-on with real-time personalization scenarios, get $200 in credit to try Azure Cosmos DB with a free Azure account, or simply try Azure Cosmos DB right now.

Along with Azure Cosmos DB, InfoWorld also honored Microsoft’s Project Olympus in their 2018 awards, calling out the open hardware design from Microsoft for helping the Open Compute Project push forward the development of cloud-scale hardware. Complex workloads are driving datacenters to diversify hardware, and Project Olympus designs are flexible with multiple compute configurations and a new open-source standard available to any manufacturer.

Learn more about Project Olympus deployment on Azure.

How AI works … and how it fails


There's been a lot written about AI in recent years, but it's rare to find an article that explains the basics in non-technical language, without dumbing down the concepts. It's definitely worth the time to read this article by Yonatan Zunger: Asking the Right Questions About AI. It explains the processes used to build AI systems, and how the technology and — most importantly — the data used to build them can make AI-enabled applications and devices do unexpected or ethically dubious things.

For example: why image searches in 2016 for “three black teens” and “three white teens” produced the following results:

Kabir Alli’s (in)famous results

The bias displayed here comes from the data:

What happened here wasn’t a bias in Google’s algorithms: it was a bias in the underlying data. This particular bias was a combination of “invisible whiteness” and media bias in reporting: if three white teenagers are arrested for a crime, not only are news media much less likely to show their mug shots, but they’re less likely to refer to them as “white teenagers.” In fact, nearly the only time groups of teenagers were explicitly labeled as being “white” was in stock photography catalogues. But if three black teenagers are arrested, you can count on that phrase showing up a lot in the press coverage.

Read the full article at the link below. There's an audio version available too.

Medium: Asking the Right Questions About AI (via Thomas Lumley)


Hot Code Replacement for Java comes to Visual Studio Code


Hot code replacement (HCR), which doesn’t require a restart, is a fast debugging technique in which the Java debugger transmits new class files over the debugging channel to another JVM. With this new feature in Visual Studio Code (VS Code), you can start a debugging session and change a Java file in your development environment, and the debugger will replace the code in the JVM running your code. This is a faster and easier way in Java to facilitate experimental development and to foster iterative trial-and-error coding. Below is an illustration of how you can use HCR with Debugger for Java in Visual Studio Code.

Hot Code Replacement for Java in Visual Studio Code

HCR only works when the class signature does not change; you cannot remove or add fields to existing classes when using HCR. However, HCR can be used to change the body of a method.

Since announcing several new extensions for Java on VS Code in our last blog post, we’ve continued to enhance those extensions to provide a better editing experience for Java developers using VS Code.

Java Test Runner

We have added a few new features to better support JUnit in VS Code.

  1. Added a test explorer so you can now view and locate all tests from within it.
  2. Added a status bar item to show test status and statistics.
  3. Added a command to show the test output window; by default it won’t be opened while running tests.

Java Test Runner

Tomcat

With the updated Tomcat extension, you can now create a new Tomcat server from the server explorer using the newly added “Add” button and run a war package on it. You can also create the server during deployment.

Create new Tomcat Server

Checkstyle

In our latest release of the Checkstyle extension, properties in the Checkstyle configuration file get automatically resolved, which makes it easier for you to edit the configuration.

Properties in Checkstyle configuration file get automatically resolved

Please also refer to the Checkstyle extension home page to view all the convenient commands that you can use to configure Checkstyle.

Maven

We’ve also released our latest Maven extension under Microsoft. The new release adds support for using the Maven wrapper as the executable. The Maven wrapper is widely used to provide a fully encapsulated build setup for the project.

Try it out

Please don’t hesitate to try VS Code, a lightweight code editor, for your Java development, and let us know your feedback!

Xiaokai He, Program Manager
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He’s currently focusing on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.

#ifdef WINDOWS – Progressive Web Apps


Jeff Burtoft from the Web Apps team at Microsoft dropped by to share how web apps on Windows have evolved, all the way from regular web sites, to packaged web apps in Windows 8, Hosted Web Apps in Windows 10, and finally adopting Progressive Web Apps with support for Service Workers and native APIs.

We also covered the top 3 components needed to build a PWA and the top 4 things developers can do to make sure their PWAs are successful on any platform. Check out the full video above and feel free to reach out on Twitter or in the comments below for questions or comments.

Happy coding!

The post #ifdef WINDOWS – Progressive Web Apps appeared first on Windows Developer Blog.

ASP.NET Core 2.1 roadmap


Five months ago, we shipped ASP.NET Core 2.0 as a foundational release for our high performance, cross-platform web framework for .NET and .NET Core. Since then we have been hard at work to deliver the next wave of features in ASP.NET Core 2.1. Below is an outline of the features and improvements that are planned for this release, which is targeted for mid-year 2018.


MVC

Razor Pages improvements

In ASP.NET Core 2.0 we introduced Razor Pages as a new page-based model for building Web UI. In 2.1 we are making a variety of improvements to Razor Pages to make it even more productive.

Razor Pages in an area

Areas provide a way to partition a large MVC app into smaller functional groupings each with their own controllers and views. In 2.1 we will add support for areas to Razor Pages so that areas can have their own pages directory.

Support for /Pages/Shared

In 2.1 Razor Pages will fall back to finding Razor assets such as layouts and partials in /[pages root]/Shared before falling back to /Views/Shared. In addition to this, pages themselves can now be in the /[pages root]/Shared path and they will be routable as if they existed directly at /[pages root]/, unless a page actually exists at that location, in which case it will be served instead.

Bind all properties on a page or controller

Starting in 2.0 you could use the BindPropertyAttribute to specify that a property on a page model or controller should be bound to data from the request. If you have lots of properties that you want to bind, then this can get tedious and verbose. In 2.1 we will add support for specifying that all properties on a page or controller should be bound by putting the BindPropertyAttribute on the class.
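
A rough sketch of the intended usage (the page model and property names here are made up, and the exact attribute shape could still change before release):

    // Placing the attribute at the class level binds every public property on the
    // page model from the request, instead of attributing each property individually.
    [BindProperty]
    public class CheckoutModel : PageModel
    {
        public string ShippingAddress { get; set; }

        public string PromoCode { get; set; }

        public void OnPost()
        {
            // ShippingAddress and PromoCode have already been bound from the posted form.
        }
    }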

Implement IPageFilter on page models

We will implement IPageFilter on page models, so that you can run logic before or after page handlers run for a given request, much the same way that you can implement IActionFilter on a controller.

Functional testing infrastructure

Writing functional tests for an MVC app allows you to test handling of a request end-to-end including running routing, filters, controllers, actions, views and pages. While writing in-memory functional tests for MVC apps is possible with ASP.NET Core 2.0, it requires significant setup.

For 2.1 we will provide a test fixture implementation that handles the typical pitfalls when trying to test MVC applications using TestServer:

  • Copy the .deps file from your project into the test assembly bin folder
  • Specify the content root of the application’s project root so that static files and views can be found
  • Streamline setting up your app on TestServer

A sample test that uses the new test fixture with xUnit looks like this:
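
The original code sample isn’t reproduced here; the following is an illustrative sketch of the general shape, using a hypothetical WebApplicationTestFixture<TStartup> type (the real fixture and package names may differ in the shipped bits):

    public class HomePageTests : IClassFixture<WebApplicationTestFixture<Startup>>
    {
        private readonly HttpClient _client;

        // The fixture bootstraps the app on TestServer, taking care of the .deps file
        // and content root issues listed above, and hands out a configured HttpClient.
        public HomePageTests(WebApplicationTestFixture<Startup> fixture)
        {
            _client = fixture.CreateClient();
        }

        [Fact]
        public async Task Get_HomePage_ReturnsSuccess()
        {
            var response = await _client.GetAsync("/");
            response.EnsureSuccessStatusCode();
        }
    }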

See https://github.com/aspnet/announcements/issues/275 for additional details.

Web API improvements

ASP.NET Core gives you a single unified framework for building both Web UI and Web APIs. In 2.1 we are making various improvements to the framework for building Web APIs.

Better Input Processing

We want the experience around invalid input to be more automatic and more consistent. More concretely we’re going to:

  • Create a programming model where your action code isn’t called when a request has validation errors (see “Enhanced Web API controller conventions” below)
  • Improve the fidelity of error responses when the request body fails to deserialize or the JSON is invalid
  • Enable placing validation attributes directly on action parameters

Support for Problem Details

We are adding support for RFC 7807 – Problem Details for HTTP APIs as a standardized format for returning machine readable error responses from HTTP APIs. You can return a Problem Details response from your API action using the ValidationProblem() helper method.
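
For example (a small sketch; OrderDto is a made-up model and the exact helper overloads may differ), an action could return a Problem Details response for invalid input like this:

    [HttpPost]
    public IActionResult Create(OrderDto order)
    {
        if (!ModelState.IsValid)
        {
            // Produces an RFC 7807 problem details payload describing the validation errors.
            return ValidationProblem(ModelState);
        }

        // ... save the order ...
        return Ok(order);
    }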

Improved OpenAPI specification support

We want to embrace the OpenAPI specification (previously called “Swagger”) and make Web APIs built with ASP.NET Core more descriptive. Today you need a lot of “attribute soup” to get a reasonable OpenAPI spec from ASP.NET Core. We plan to introduce an opinionated layer that infers the possible responses based on what you’re likely to have done with your actions (attributes still win when you want to be explicit).

For example, actions that return IActionResult need to be attributed to indicate the return type so that the schema of the response body can be determined. Actions that return the response type directly don’t need to be attributed, but then you lose the flexibility to return any action result.

We will introduce a new ActionResult<T> type that allows you to return either the response type or any action result, while still indicating the response type.

Enhanced Web API controller conventions and ActionResult<T>

We are adding the [ApiController] attribute as the way to opt-in to Web API specific conventions and behaviors. These behaviors include:

  • Automatically responding with a 400 when validation errors occur
  • Infer smarter defaults for action parameters: [FromBody] for complex types, [FromRoute] when possible, otherwise [FromQuery]
  • Requires attribute routing – actions are not accessible by convention-based routes

Here’s an example Web API controller that uses these new enhancements:
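
The original sample isn’t included here; the following sketch illustrates the idea (PetsController, Pet, and IPetRepository are made-up names):

    [ApiController]
    [Route("api/[controller]")]
    public class PetsController : ControllerBase
    {
        private readonly IPetRepository _repository;   // hypothetical data service

        public PetsController(IPetRepository repository) => _repository = repository;

        // [FromBody] is inferred for the complex Pet parameter, and invalid input
        // automatically produces a 400 response before the action runs.
        [HttpPost]
        public ActionResult<Pet> Create(Pet pet)
        {
            var created = _repository.Add(pet);   // hypothetical repository assigns the Id
            return CreatedAtAction(nameof(GetById), new { id = created.Id }, created);
        }

        // ActionResult<Pet> lets the action return either the Pet or any action result.
        [HttpGet("{id}")]
        public ActionResult<Pet> GetById(int id)
        {
            var pet = _repository.Find(id);
            if (pet == null)
            {
                return NotFound();
            }
            return pet;
        }
    }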

Here’s what the Web API would look like if you were to implement it with 2.0:
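
And a rough 2.0-style equivalent (same made-up names), where binding, validation, and response metadata all have to be spelled out by hand:

    [Route("api/[controller]")]
    public class PetsController : Controller
    {
        private readonly IPetRepository _repository;

        public PetsController(IPetRepository repository) => _repository = repository;

        [HttpPost]
        [ProducesResponseType(typeof(Pet), 201)]
        [ProducesResponseType(400)]
        public IActionResult Create([FromBody] Pet pet)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            var created = _repository.Add(pet);
            return CreatedAtAction(nameof(GetById), new { id = created.Id }, created);
        }

        [HttpGet("{id}")]
        [ProducesResponseType(typeof(Pet), 200)]
        [ProducesResponseType(404)]
        public IActionResult GetById(int id)
        {
            var pet = _repository.Find(id);
            if (pet == null)
            {
                return NotFound();
            }
            return Ok(pet);
        }
    }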

JSON Patch improvements

For JSON Patch we will add support for the test operator and for patching dictionaries with non-string keys.

Partial Tag Helper

Razor partial views are a convenient way to include some Razor content into a view or page. Today there are four different methods for rendering a partial on a page that have different trade-offs and limitations (Html.Partial vs Html.RenderPartial, sync vs async). Rendering partials also suffers from a limitation: the prefix generated for rendered form elements, based on the given model, must be handled manually for each partial rendering.

The new partial Tag Helper makes rendering a partial straightforward and elegant. You can specify the model using model expression syntax and the partial Tag Helper will handle setting up the correct HTML field prefix for you:

Razor UI in a class library

ASP.NET Core 2.1 will make it easier to build and include Razor based UI in a library and share it across multiple projects. A new Razor SDK will enable building Razor files into a class library project that can then be packaged into a NuGet package. Views and pages in libraries will automatically be discovered and can be overridden by the application. By integrating Razor compilation into the build, the app startup time is also significantly faster, while still allowing for fast updates to your Razor views and pages at runtime as part of an iterative development workflow.

SignalR

For ASP.NET Core 2.1 we are porting ASP.NET SignalR to ASP.NET Core to support real-time web scenarios. As previously announced, ASP.NET Core SignalR will also include a number of improvements, including a simplified scale-out model, a new JavaScript client with no jQuery dependency, a new compact binary protocol based on MessagePack, support for custom protocols, a new streaming response model, and support for clients based on bare WebSockets. You can start trying out ASP.NET Core SignalR today by checking out the samples.

WebHooks

WebHooks are a lightweight HTTP pattern for event notification across the web. WebHooks enable services to send event notifications over HTTP to registered subscribers. For 2.1 we are porting a subset of the ASP.NET WebHooks receivers to ASP.NET Core in a way that integrates with the ASP.NET Core idioms.

For 2.1 we plan to port the following receivers:

  • Microsoft Azure alerts
  • Microsoft Azure Kudu notifications
  • Microsoft Dynamics CRM
  • Bitbucket
  • Dropbox
  • GitHub
  • MailChimp
  • Pusher
  • Salesforce
  • Slack
  • Stripe
  • Trello
  • WordPress

To use a WebHook receiver in ASP.NET Core WebHooks, you add an attribute to the controller action that you want to handle the notification. For example, here’s how you can handle an Azure alert:
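
The original snippet isn’t included here; below is a sketch of the general shape, assuming an AzureAlertWebHook attribute along the lines of the existing ASP.NET WebHooks receivers (the exact attribute and payload types may differ in the final packages):

    public class AzureAlertController : ControllerBase
    {
        // The attribute wires this action up as the receiver for Azure alert notifications.
        [AzureAlertWebHook]
        public IActionResult AzureAlert(string id, JObject data)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            // Inspect the alert payload (raw JSON) and react to it here.
            return Ok();
        }
    }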

Improvements for GDPR

The ASP.NET Core 2.1 project templates will include some extension points to help you meet some of your EU General Data Protection Regulation (GDPR) requirements.

A new cookie consent feature will allow you to ask for (and track) consent from your users for storing personal information. This can be combined with a new cookie feature where cookies can be marked as essential or non-essential. If a user has not consented to data collection, non-essential cookies will not be sent to the browser. You will still need to create the wording on the UI prompt and a suitable privacy policy which matches the GDPR analysis you or your company have performed, along with implementing the logic for determining under what conditions a given user should be asked for consent before writing non-essential cookies (the templates simply default to asking all users).

Additionally, the ASP.NET Core Identity templates for individual authentication now have a UI to allow users to download their personal data, along with the ability to delete their account entirely. By default, these UI areas only return personal information from ASP.NET Core identity, and perform a delete on the identity tables. As you add your own information into your database you should extend these features to also include that data according to your GDPR analysis.

Finally, we are considering extension points to allow you to apply your own encryption of ASP.NET Core identity data. We recommend that you examine the encryption features of your database to see if they match your GDPR requirements before attempting to layer on your own encryption mechanisms. Both Microsoft SQL and SQL Azure, as well as Azure table storage, offer transparent encryption of data at rest, which does not require any changes to your application and is managed for you.

Security

HTTPS

With the increased focus on security and privacy, enabling HTTPS for web apps is more important than ever before. HTTPS enforcement is becoming increasingly strict on the web, and sites that don’t use it are considered, and increasingly labeled as, not secure. GDPR requires the use of HTTPS to protect user privacy. While using HTTPS in production is critical, using HTTPS during development can also help prevent related issues before deployment, like insecure links.

On by default

To facilitate secure website development, we are enabling HTTPS in ASP.NET Core 2.1 by default. Starting in 2.1, in addition to listening on http://localhost:5000, Kestrel will listen on https://localhost:5001 when a local development certificate is present. A suitable certificate will be created when the .NET Core SDK is installed or can be manually set up using the new ‘dev-certs’ tool. We will also update our project templates to run on HTTPS by default and include HTTPS redirection and HSTS support.

HTTPS redirection and enforcement

Web apps typically need to listen on both HTTP and HTTPS, but then redirect all HTTP traffic to HTTPS. ASP.NET Core 2.0 has URL rewrite middleware that can be used for this purpose, but it could be tricky to configure correctly. In 2.1 we are introducing specialized HTTPS redirection middleware that intelligently redirects based on the presence of configuration or bound server ports.

Use of HTTPS can be further enforced using HTTP Strict Transport Security Protocol (HSTS), which instructs browsers to always access the site via HTTPS. ASP.NET Core 2.1 adds HSTS middleware that supports options for max age, subdomains, and the HSTS preload list.
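
Taken together, a Startup.Configure sketch wiring up both pieces of middleware might look roughly like this (based on the 2.1 previews; names and defaults may change):

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (!env.IsDevelopment())
        {
            // Instructs browsers to only access the site via HTTPS; max age, subdomains,
            // and preload are configurable via options.
            app.UseHsts();
        }

        // Redirects HTTP requests to the configured HTTPS endpoint.
        app.UseHttpsRedirection();

        app.UseMvc();
    }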

Configuration for production

In production, HTTPS must be explicitly configured. In 2.1 we are introducing default configuration schema for configuring HTTPS for Kestrel that is simple and straightforward. You can configure multiple endpoints including the URLs and the certificate to use for HTTPS either from a file on disk or from a certificate store:

Virtual authentication schemes

We’re adding something tentatively called “Virtual Schemes” to address two main scenarios:

  1. Making it easier to mix authentication schemes, like bearer tokens and cookie authentication in the same app (sample). Virtual schemes allow you to configure a dynamic authentication scheme that will use bearer authentication only for requests starting with /api, and cookie authentication otherwise
  2. Compose (mix/match) different authentication verbs (Challenge/SignIn/SignOut/Authenticate) across different handlers. For example, combining OAuth + Cookies, where you would have Challenge = OAuth, and everything else handled by cookies.

Identity

Identity as a library

ASP.NET Core Identity gives you a framework for setting up authentication and identity concerns for your site, including user registration, managing passwords, two-factor authentication, social logins and much more. However, setting up a site to use ASP.NET Core Identity requires quite a bit of code. While project templates help with generating this code, they don’t help with adding identity to an existing application and the code can’t easily be updated.

For 2.1 we will provide a default identity UI implementation as a library. You can add the default identity UI to your application by installing a NuGet package and then enabling it in your Startup class:
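
The original snippet isn’t shown here; roughly, after referencing the identity UI package, enabling it might look like the following (extension-method names were still in flux at the time, and ApplicationDbContext is a made-up context):

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

        // Registers ASP.NET Core Identity together with the default UI shipped in the library.
        services.AddDefaultIdentity<IdentityUser>()
            .AddEntityFrameworkStores<ApplicationDbContext>();

        services.AddMvc();
    }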

Identity scaffolder

If you want all the identity code to be in your application so that you can change it however you want, you can use the new identity scaffolder to add the identity code to your application. All the scaffolded identity code is generated in an identity specific area folder so that it remains nicely separated from your application code.

Options improvements

To configure options with the help of configured services, you can today implement IConfigureOptions<T>. In 2.1 we’re adding convenience overloads to the Configure method that allow you to configure options using services without having to implement a separate class:
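
A small sketch of the idea (MyOptions and its ContentPath property are made up; the final overload shapes may differ):

    services.AddOptions<MyOptions>()
        // The dependency (here IHostingEnvironment) is resolved from DI and passed into
        // the delegate, so no separate IConfigureOptions<T> class is needed.
        .Configure<IHostingEnvironment>((options, env) =>
        {
            options.ContentPath = env.ContentRootPath;
        });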

Also, the new ConfigureOptions<TSetup> method lets you register a single class that configures multiple options (by implementing IConfigureOptions<T> multiple times):
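
And a sketch of the single-setup-class pattern (MyOptions, OtherOptions, and MySetup are made up):

    // MySetup configures several different options types in one place.
    public class MySetup : IConfigureOptions<MyOptions>, IConfigureOptions<OtherOptions>
    {
        public void Configure(MyOptions options) => options.Enabled = true;

        public void Configure(OtherOptions options) => options.Name = "default";
    }

    // Registers MySetup once for every IConfigureOptions<T> it implements.
    services.ConfigureOptions<MySetup>();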

HttpClientFactory

The new HttpClientFactory type can be registered and used to configure and consume instances of HttpClient in your application. It provides several benefits:

  1. Provide a central location for naming and configuring logical instances of HttpClient. For example, you may configure a “github” client that is pre-configured to access GitHub and a default client for other purposes.
  2. Codify the concept of outgoing middleware via delegating handlers in HttpClient and implementing Polly based middleware to take advantage of that.
  3. Manage the lifetime of HttpClientMessageHandlers to avoid common problems that can be hit when managing HttpClient lifetimes yourself.

HttpClient already has the concept of delegating handlers that could be linked together for outgoing HTTP requests. The factory will make registering of these per named client more intuitive as well as implement a Polly handler that allows Polly policies to be used for Retry, CircuitBreakers, etc. Other “middleware” could also be implemented in the future but we don’t yet know when that will be.

In this first example we will configure two logical HttpClient configurations, a default one with no name and a named “github” client.

Registration in Startup.cs:
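
The original sample isn’t reproduced here; a sketch of what the registration might look like (the header values are illustrative):

    public void ConfigureServices(IServiceCollection services)
    {
        // A default, unnamed client.
        services.AddHttpClient();

        // A named "github" client with a pre-configured base address and headers.
        services.AddHttpClient("github", c =>
        {
            c.BaseAddress = new Uri("https://api.github.com/");
            c.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
            c.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
        });

        services.AddMvc();
    }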

Consumption in a controller:
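
A sketch of consuming the named client (the controller name and request path are illustrative):

    public class ValuesController : Controller
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public ValuesController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<IActionResult> Index()
        {
            // Ask the factory for the named "github" client configured in Startup.
            var client = _httpClientFactory.CreateClient("github");
            var result = await client.GetStringAsync("repos/aspnet/Home/branches");
            return Ok(result);
        }
    }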

In addition to using strings to differentiate configurations of HttpClient, you can also leverage the DI system using what we are calling a typed client:

A class called GitHubService:
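
A sketch of such a class (the GitHub path is illustrative):

    public class GitHubService
    {
        public GitHubService(HttpClient client)
        {
            Client = client;   // the factory injects a configured HttpClient
        }

        public HttpClient Client { get; }

        // Optionally encapsulate GitHub access behind a strongly typed API.
        public Task<string> GetBranchesAsync(string owner, string repo) =>
            Client.GetStringAsync($"repos/{owner}/{repo}/branches");
    }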

This type can have behavior and completely encapsulate HttpClient access if you wish, or just be used as a strongly typed way of naming an HttpClient as shown here.

Registration in Startup.cs:
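
A sketch of the typed-client registration; the Polly piece is deliberately left as a comment, per the NOTE below:

    public void ConfigureServices(IServiceCollection services)
    {
        // Registers GitHubService as a typed client; the configured HttpClient
        // is injected into its constructor.
        services.AddHttpClient<GitHubService>(c =>
        {
            c.BaseAddress = new Uri("https://api.github.com/");
            c.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
        });
        // A Polly-based retry/circuit-breaker handler would be chained on here;
        // as the NOTE below explains, that part of the API had not been built yet,
        // so it is intentionally omitted from this sketch.

        services.AddMvc();
    }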

NOTE: The Polly section of this code sample should be considered pseudocode at best. We haven’t built this yet and as such are not sure of the final shape of the API.

Consumption in a Razor Page:
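
A sketch of a page model consuming the typed client:

    public class IndexModel : PageModel
    {
        private readonly GitHubService _gitHubService;

        public IndexModel(GitHubService gitHubService)
        {
            _gitHubService = gitHubService;   // resolved from DI like any other service
        }

        public string Branches { get; private set; }

        public async Task OnGetAsync()
        {
            Branches = await _gitHubService.GetBranchesAsync("aspnet", "Home");
        }
    }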

Kestrel

Transport Extensibility

The current implementation of the underlying libuv connection semantics has been decoupled from the rest of Kestrel and abstracted away into a new Transport abstraction. While we continue to ship with libuv as the default transport, we are also adding support for a new transport based on the socket types included in .NET.

Socket Transport

We are continuing to invest in a new socket transport for Kestrel as we believe it has the potential to be more performant than the existing libuv transport. While we aren’t quite there yet, you can still easily switch to the new socket transport and try it out today.

Default configuration

We are adding support to Kestrel for configuring endpoints and HTTPS settings (see HTTPS: Configuration for production).

ASP.NET Core Module

The ASP.NET Core Module (ANCM) is a global IIS module that acts as a reverse proxy from IIS to your Kestrel backend.

Version agility

Since ANCM is a global singleton, it can’t version or ship with the same agility as the rest of ASP.NET Core. In 2.1, we’ve refactored ANCM into two pieces: the shim and the request handler. The shim will continue to be installed as a global singleton, but the request handler will ship as part of the new Microsoft.AspNetCore.Server.IIS package, which can be referenced directly by your application. This will allow you to use different versions of ANCM with different app deployments.

In-process hosting

In 2.1, we’re adding a new in-process mode to ANCM for .NET Core based apps where the runtime and your app are both loaded inside the IIS worker process (w3wp.exe). This removes the performance penalty of proxying requests over the loopback adapter. Our preliminary tests show performance improvements of around ~4.4x compared to running out-of-process. Configuring your app to use the in-process model can be done using `web.config`, and it will eventually be the default for new applications targeting 2.1:

Alternatively, you can set a project property in your project file:

New Microsoft.AspNetCore.App package

ASP.NET Core 2.1 will introduce a new meta-package for use by applications: Microsoft.AspNetCore.App. The new meta-package differs from the existing meta-package in that it reduces the number of dependencies of packages not owned or supported by the ASP.NET or .NET teams to just those deemed necessary to ensure the major framework features function. We will update project templates to use the new meta-package. The existing Microsoft.AspNetCore.All meta-package will continue to be made available throughout the 2.x lifecycle. For additional details see https://github.com/aspnet/Announcements/issues/287.

In conclusion

We hope you are as excited about these features and improvements as we are! Of course, it is still early in the release and these plans are subject to change, but you can follow along with the latest status of these features by tracking the action on GitHub. Major updates and changes will be posted on the Announcements repo. You can also get live updates and participate in the conversation by watching the weekly ASP.NET Community Standup at https://live.asp.net. You can also read about the roadmaps for .NET Core 2.1 and EF Core 2.1 on the .NET team blog. Your feedback is welcome and appreciated!

.NET Core 2.1 Roadmap


The .NET team has been working on the .NET Core 2.1 release for the last several months on GitHub. We know that many of you have been using .NET Core 2.0 since it shipped in August of last year and want to know what is coming next. The team has done enough work now that we know the overall shape of the next release. Like past releases, the .NET community has contributed many important improvements. Thanks so much!

We have been thinking of .NET Core 2.1 as a feedback-oriented release after the more foundational .NET Core 2.0 release. The following improvements are based on some of the most common feedback.

  • [CLI] Build-time performance improvements.
  • [CLI] Global tools; replaces CliReferenceTool.
  • [CoreCLR] Minor-version roll-forward.
  • [CoreCLR] No-copy array slicing with Span<T>.
  • [CoreFX] HttpClient performance improvements.
  • [CoreFX] Windows Compatibility Pack.
  • [ASP.NET] SignalR is available for .NET Core.
  • [ASP.NET] HTTPS is on by default for ASP.NET.
  • [EF] Basic lazy loading support.
  • [EF] Support for Azure Cosmos DB.

This post will focus on CoreCLR, CoreFX and CLI improvements. Please see ASP.NET Core 2.1 Roadmap and EF Core 2.1 Roadmap for more information on ASP.NET Core and EF Core.

Thanks to everyone that has used .NET Core 2.0. There are now over half a million active users of .NET Core 2.0 across Windows, macOS and Linux. Product usage is growing fast and we expect that the .NET Core 2.1 improvements will only increase that.

A few of us recorded an On.NET show to introduce .NET Core 2.1 in the Channel 9 studios, in two parts (roadmap, demos). The roadmap part is what you see below.

Build-time Performance

Build-time performance is much improved in .NET Core 2.1, particularly for incremental build. These improvements apply to both dotnet build on the commandline and to builds in Visual Studio. We’ve made improvements in the CLI tools and in MSBuild in order to make the tools deliver a much faster experience.

The following chart provides concrete numbers on the improvements that you can expect from the new release. You can see two different workloads with numbers for .NET Core 2.0, the upcoming .NET Core 2.1 preview and where we expect to land for .NET Core RTW.

Incremental Build Improvements for .NET Core 2.1 SDK

.NET Core Global Tools

.NET Core will include a new deployment and extensibility mechanism for tools. This new experience is very similar to and was inspired by Node global tools. We are re-using the same syntax and much of the experience.

.NET Core tools are .NET Core console apps that are packaged and acquired as NuGet packages. By default, these tools are framework-dependent applications and include all of their NuGet dependencies. This means that a given global tool will run on any operating system or chip architecture by default. You might need an existing tool on a new version of Linux. As long as .NET Core works there, you should be able to run the tool.

At present, .NET Core Tools only support global install. We’re working on various forms of local install, too. The current working syntax looks like the following:

dotnet tool install -g awesome-tool
awesome-tool

Yes, once you install a tool (in this case, awesome-tool), you can then use it directly. You don’t need to type dotnet awesome-tool but can just type awesome-tool, and the tool will run. You can close terminal sessions, switch drives in the terminal, or reboot your machine and the command will still be there.

We expect a whole new ecosystem of tools to establish itself for .NET. Some of these tools will be specific to .NET Core development and many of them will be quite general in nature. The tools are deployed to NuGet feeds. By default, the dotnet tool install command looks for tools on NuGet.org.

Span<T>, Memory<T> and friends

We are on the verge of introducing a new set of types for using arrays and other types of memory that is much more efficient. Today, if you want to pass the first 1000 elements of a 10,000 element array, you need to make a copy of those 1000 elements and pass that copy to your caller. That operation is expensive in both time and space. The new Span<T> type enables you to provide a virtual view of that array without the time or space cost. It’s also a struct, which means that there is no allocation cost either.

Jared Parsons gives a great introduction in his Channel 9 video C# 7.2: Understanding Span. Stephen Toub goes into even more detail in C# – All About Span: Exploring a New .NET Mainstay.

Span<T> and related types offer a uniform representation of memory from a multitude of different sources, such as arrays, stack allocation and native code. With its slicing capabilities, it obviates the need for expensive copying and allocation in many scenarios, such as string manipulation, buffer management, etc, and provides a safe alternative to unsafe code. We expect usage of these types to start off in performance critical scenarios, but then transition to replacing arrays as the primary way of managing large blocks of data in .NET.

In terms of usage, you can create a Span<T> from an array:
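
For example (a small sketch):

    var array = new byte[10_000];

    // A span over the entire array; no data is copied.
    Span<byte> span = array;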

From there, you can easily and efficiently create a span to represent/point to just a subset of this array, utilizing an overload of the span’s Slice method. From there you can index into the resulting span to write and read data in the relevant portion of the original array:
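
For example, continuing the sketch above:

    // A span over elements 1000..2999 of the original array -- still no copying.
    Span<byte> slice = span.Slice(start: 1000, length: 2000);

    slice[0] = 42;            // writes through to array[1000]
    byte value = slice[0];    // reads the same element back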

HttpClient Performance

Outgoing network requests are a critical part of application performance for microservices and other types of applications. .NET Core 2.1 includes a new HttpClient handler that has been rewritten for high performance. We’ve heard from some early adopters that the new handler has greatly improved their applications.

We are also shipping a new IHttpClientFactory feature that provides circuit breaker and other services for HttpClient calls. This new feature builds on top of this new HttpClient handler. You can learn more about IHttpClientFactory in the ASP.NET Core 2.1 Roadmap post.

Minor Version Roll-forward

You will now be able to run .NET Core applications on later runtime versions, within the same major version range. For example, you will be able to run .NET Core 2.0 applications on .NET Core 2.1 or .NET Core 2.1 applications on .NET Core 2.5 (if we ever ship such a version). The roll-forward behavior is for minor versions only. For example, a .NET Core 2.x application will never roll forward to .NET Core 3.0 or later.

If the expected .NET Core version is available, it will be used. The roll-forward behavior is only relevant if the expected .NET Core version is not available in a given environment.

You will be able to configure applications to disable this roll-forward behavior. We will document the exact configuration settings for that in an upcoming post (we’re still working on it).

Windows Compatibility Pack

When you port existing code from the .NET Framework to .NET Core, you can use the new Windows Compatibility Pack. It provides access to an additional 20,000 APIs, compared to what is available in .NET Core. This includes System.Drawing, EventLog, WMI, Performance Counters, and Windows Services.

If you plan to make your code cross-platform, use the new API Analyzer to ensure you don’t accidentally depend on Windows-only APIs.

Availability

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

Again, thanks to everyone that has installed and used .NET Core 2.0. We’ve heard great feedback from many developers on their experience so far. We’re hoping that these .NET Core 2.1 improvements make .NET Core development easier and make your apps run faster while using less memory. Preview builds will be available soon!

Entity Framework Core 2.1 Roadmap


As mentioned in the announcement of the .NET Core 2.1 roadmap earlier today, at this point we know the overall shape of our next release and we have decided on a general schedule for it. As we approach the release of our first preview later this month, we also wanted to expand on what we have planned for Entity Framework Core 2.1.

New features

Although EF Core 2.1 is a minor release that builds on the foundational 2.0, it introduces significant new capabilities:

  • Lazy loading: EF Core now contains the necessary building blocks for anyone to write entity classes that can load their navigation properties on demand. We have also created a new package, Microsoft.EntityFrameworkCore.Proxies, that leverages those building blocks to produce lazy loading proxy classes based on minimally modified entity classes. In order to use these lazy loading proxies, you only need navigation properties in your entities to be virtual.
  • Parameters in entity constructors: as one of the required building blocks for lazy loading, we enabled the creation of entities that take parameters in their constructor. You can use parameters to inject property values, lazy loading delegates, and services.
  • Value conversions: Until now, EF Core could only map properties of types natively supported by the underlying database provider. Values were copied back and forth between columns and properties without any transformation. Starting with EF Core 2.1, value conversions can be applied to transform the values obtained from columns before they are applied to properties, and vice versa. We have a number of conversions that can be applied by convention as necessary, as well as an explicit configuration API that allows registering delegates for the conversions between columns and properties. Some of the applications of this feature (a small sketch follows after this list) are:
    • Storing enums as strings
    • Mapping unsigned integers with SQL Server
    • Transparent encryption and decryption of property values
  • LINQ GroupBy translation: Before EF Core 2.1, the GroupBy LINQ operator would always be evaluated in memory. We now support translating it to the SQL GROUP BY clause in most common cases.
  • Data Seeding: With the new release it will be possible to provide initial data to populate a database. Unlike in EF6, in EF Core, seeding data is associated with an entity type as part of the model configuration. Then EF Core migrations can automatically compute what insert, update or delete operations need to be applied when upgrading the database to a new version of the model.
  • Query types: An EF Core model can now include query types. Unlike entity types, query types do not have keys defined on them and cannot be inserted, deleted or updated (i.e. they are read-only), but they can be returned directly by queries. Some of the usage scenarios for query types are:
    • Mapping to views without primary keys
    • Mapping to tables without primary keys
    • Mapping to queries defined in the model
    • Serving as the return type for FromSql() queries
  • Include for derived types: It will now be possible to specify navigation properties defined only in derived types when writing expressions for the Include() methods. The syntax looks like this:
    var query = context.People.Include(p => ((Student)p).School);
  • System.Transactions support: We have added the ability to work with System.Transactions features such as TransactionScope. This will work on both .NET Framework and .NET Core when using database providers that support it.
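
As an illustration of the value conversions item above (Order and OrderStatus are made-up types; see the EF Core 2.1 docs for the final API), storing an enum as a string might be configured roughly like this:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>()
            .Property(o => o.Status)
            // Store the OrderStatus enum as a string column and convert back when reading.
            .HasConversion(
                status => status.ToString(),
                value => (OrderStatus)Enum.Parse(typeof(OrderStatus), value));
    }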

Other improvements and new initiatives

Besides the major new features included in 2.1, we have made numerous smaller improvements and we have fixed more than a hundred product bugs. We also made progress on the following areas:

  • Optimization of correlated subqueries: We have improved our query translation to avoid executing N + 1 SQL queries in many common scenarios in which a root query is joined with a correlated subquery.
  • Column ordering in migrations: Based on customer feedback, we have updated migrations to initially generate columns for tables in the same order as properties are declared in classes.
  • Cosmos DB provider preview: We have been developing an EF Core provider for the DocumentDB API in Cosmos DB. This is the first document database provider we have produced, and the learnings from this exercise are going to inform improvements in the design of the subsequent release after 2.1. The current plan is to publish an early preview of the Cosmos DB provider in the 2.1 timeframe.
  • Sample Oracle provider for EF Core: We have produced a sample EF Core provider for Oracle databases. The purpose of the project is not to produce an EF Core provider owned by Microsoft, but to:
    1. Help us identify gaps in EF Core’s relational and base functionality which we need to address in order to better support Oracle.
    2. Help jumpstart the development of other Oracle providers for EF Core either by Oracle or third parties.

    Note that currently our sample is based on the latest available ADO.NET provider from Oracle, which only supports .NET Framework. As soon as an ADO.NET provider for .NET Core is made available, we will consider updating the sample to use it.

What’s next

We will be releasing the first preview of EF Core 2.1, including all the features mentioned above, later this month. After that, we intend to release additional previews monthly, and a final release in the first half of 2018.

A big thank you to everyone that uses EF Core, and to everyone who has helped make the 2.1 release better by providing feedback, reporting bugs, and contributing code.

Because it’s Friday: Time for Sushi


Why not end your week with a strangely calming dose of the bizarre: the short film Time for Sushi, by David Lewandowski.

That's all for us for this week. Have a great weekend, and we'll be back next week with more for the blog. Enjoy!

Build email notifications for SQL Database Automatic tuning recommendations


After reading this blog post, you will be able to build your own custom email notifications for SQL Database Automatic tuning recommendations. We have listened to our customers requesting this functionality and have created a custom solution based on readily available technologies on Azure.

SQL Database performance tuning recommendations are generated by Azure SQL Database Automatic tuning. This solution provides peak performance and stable workloads through continuous database performance tuning utilizing Artificial Intelligence (AI).

Tuning recommendations are provided for each individual SQL database on an Azure subscription for which Automatic tuning is enabled. Recommendations are related to index creation, index deletion, and optimization of query execution plans. Tuning recommendations are provided only in cases when AI considers them beneficial to database performance.

Email notifications for Automatic tuning

Some of our customers have asked to receive automated email notifications with suggested SQL Database Automatic tuning recommendations, so they can review them and build automated alerts. For example, when the solution recommends that an index be dropped to improve database performance, some customers would prefer to be notified of such an event. Another scenario is emailing tuning recommendations to the different database administrators in charge of the different database assets they look after.

To build automated email notifications for automatic tuning recommendations, follow our step-by-step instructions: see how to build custom SQL Database Automatic tuning email notifications.

The solution we have devised consists of a PowerShell script that retrieves tuning recommendations, executed on a schedule with Azure Automation, and a recurring email delivery job scheduled with Microsoft Flow.

Azure Automation enables you to schedule scripts on Azure and can be used in many ways. In our example, it is used to schedule the retrieval of automatic tuning recommendations from the SQL databases in your Azure subscription.

Below is a screenshot of the automated PowerShell script that retrieves SQL Database Automatic tuning recommendations running in Azure Automation. The automation gives users an on-screen display of script inputs, outputs, log files, errors, and warnings for monitoring and troubleshooting purposes.


Microsoft Flow is used as a readily available, out-of-the-box solution to schedule and automate the email delivery job that forwards the retrieved database tuning recommendations using Office 365 integration. The schedule can be set to run at increments anywhere from a minute to an hour, a day, or a week, depending on your needs and preferences.

By further customizing the provided PowerShell script and Microsoft Flow workflows, you can adapt the solution to automate emailing of tuning recommendations to various individuals and for different SQL databases.

Microsoft Flow provides on-screen statistics on the execution of the automated jobs, for example showing whether email notifications were sent successfully. See the example from our solution in the screenshot below.


Microsoft Flow analytics is helpful both for monitoring and for troubleshooting the automation flows. When troubleshooting, you might also want to examine the PowerShell script execution log, accessible through the Azure Automation app.

The final output of the automated email notification will look similar to the following email received after building and running this solution:

Final Output

By further customizing the sample PowerShell script provided, you can adjust the output and formatting of the automated email to suit your needs.

The solution we have provided is a starting point that you can build on and customize for your own scenarios, for example creating notifications based on the type of tuning recommendation received, sending emails to multiple recipients, or sending them to different database owners.

Summary

With the solution provided, you can automate the sending of email notifications for Azure SQL Database Automatic tuning recommendations. The solution uses a PowerShell script to retrieve tuning recommendations and Azure Automation to run it; the recurring email delivery job that forwards the script's output was built using Microsoft Flow.

You can further customize the solution to build email notifications based on a specific tuning event, for multiple recipients, or for multiple subscriptions or databases, depending on your scenarios.

If you are inclined to programming, note that there are also alternative ways to retrieve automatic tuning recommendations from SQL Database, for example through REST API calls or T-SQL, alongside the PowerShell commands.

Please let us know how you are using this solution, and share examples of how you have customized it for your needs. Please leave your feedback and questions in the comments.


Go serverless for your IoT needs


If you are building an IoT solution in the cloud, chances are your focus is on the devices and what you can accomplish with them. You might want to process data coming from a network of devices in real time, analyze the data to gain insights, get alerted for special conditions, manage the devices themselves, and so on. What is less interesting to you is setting up and managing the infrastructure in the cloud, which will enable you to do the above. This is where serverless comes in.

Serverless technologies, like Azure Functions, take away the burden of managing infrastructure and enable you to focus on your IoT-powered business logic. IoT projects usually have variable traffic, which means accumulating infrastructure to account for peak loads isn't the best strategy. Adopting serverless allows your solutions to scale dynamically while keeping costs low.

This video shows a great application of a serverless architecture to receive data from a device, transform it in real time using machine learning, and send it back to the device. It is based on the DevKit Translator IoT project.

Here we describe a few scenarios in which the combination of Azure IoT Hub, Azure Event Grid, and Azure Functions provides a potent solution to your IoT needs.

Process incoming device data in real time

Most IoT devices generate data that needs to be ingested and processed. 

  • This data could be regular telemetry data that needs to go through some custom processing before being visualized or archived for later analysis. In this case, triggering a serverless function from the built-in Azure Event Hubs endpoint in IoT Hub is a perfect solution for the custom processing part.
  • It could be a special condition alert that needs to be acted upon with slightly different (and urgent) custom processing. In this case, triggering a serverless function using the message routing functionality of IoT Hub, for instance with Azure Service Bus queues, is a perfect solution.
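For the first case, a minimal sketch of a C# script (run.csx) function triggered from the IoT Hub built-in Event Hubs-compatible endpoint might look like the following; the trigger binding lives in function.json, and the parameter name myIoTHubMessage is an assumption rather than anything prescribed:

    using System;

    public static void Run(string myIoTHubMessage, TraceWriter log)
    {
        // Custom processing of each telemetry message goes here:
        // parse the payload, enrich it, archive it, or raise an alert.
        log.Info($"Telemetry received: {myIoTHubMessage}");
    }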

Sending messages from cloud to device

Sending messages from your cloud applications to IoT devices is also a common scenario. IoT Hub facilitates this through APIs that can be readily consumed from any application. Since sending such data is usually done as part of a workflow, serverless functions again provide a quick, easy, and scalable mechanism to run the code meant to perform this specific part (firing messages off to devices), without worrying about where this code is running.
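As a rough sketch of that specific part, the service SDK in the Microsoft.Azure.Devices package can be called from a function to fire a cloud-to-device message; the connection string, device id, and payload below are placeholders:

    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.Devices;

    public static class CloudToDevice
    {
        public static async Task SendCommandAsync(string iotHubConnectionString, string deviceId)
        {
            // The service client talks to IoT Hub on behalf of the back end.
            var serviceClient = ServiceClient.CreateFromConnectionString(iotHubConnectionString);

            // Placeholder payload; in practice this would carry your command.
            var message = new Message(Encoding.UTF8.GetBytes("turn-fan-on"));
            await serviceClient.SendAsync(deviceId, message);
        }
    }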

Manage and auto-scale your IoT solution

While a serverless backend removes any scale worries for the data processing part of your solution, you still want to ensure that you are appropriately scaling the capacity of IoT Hub itself to manage how much traffic can actually flow through to your serverless backend. This is where the Durable Functions feature of Azure Functions can provide a serverless method of monitoring and scaling IoT Hub so that your entire IoT project can effectively auto-scale.
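One way to frame this is the Durable Functions monitor pattern: an orchestration that periodically checks load, scales the hub if needed, and then restarts itself. The sketch below assumes hypothetical activity functions named CheckIoTHubLoad and ScaleIoTHubUp that you would implement against the IoT Hub management APIs:

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;

    public static class IoTHubScaleMonitor
    {
        [FunctionName("IoTHubScaleMonitor")]
        public static async Task Run([OrchestrationTrigger] DurableOrchestrationContext context)
        {
            // Compare current throughput with the provisioned IoT Hub units (hypothetical activity).
            bool needsScaleUp = await context.CallActivityAsync<bool>("CheckIoTHubLoad", null);
            if (needsScaleUp)
            {
                await context.CallActivityAsync("ScaleIoTHubUp", null);
            }

            // Sleep, then continue as a new instance (an "eternal" orchestration).
            var nextCheck = context.CurrentUtcDateTime.AddMinutes(5);
            await context.CreateTimer(nextCheck, CancellationToken.None);
            context.ContinueAsNew(null);
        }
    }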

Additionally, using Azure Event Grid and Azure IoT Hub, you can also respond to device updates, like additions and deletions, in a serverless manner using Functions or Logic Apps.

Custom processing on IoT Edge

Businesses are increasingly demanding low-latency output from their IoT solutions. Solutions like Azure IoT Edge allow customers to bring the capabilities of cloud technologies to their own devices. There are multiple scenarios in which some custom processing needs to be performed on the device itself without communicating with the cloud. Azure Functions on IoT Edge provides a powerful mechanism through which you can use the productive programming model of Azure Functions on your own devices. This allows consistency of architecture across your cloud and Edge applications.

These are just some examples of how Azure provides ways of building IoT applications using serverless technologies. The ease of use and low cost of Azure Functions make it a viable option for experimenting with many other IoT scenarios. We are looking forward to what you will build.

Resources

Bing Speech API extends its text to speech support to 34 languages


Voice is becoming more and more prevalent as a mode of interaction with all kinds of devices and services. The ability to provide not only voice input but also voice output, or text-to-speech (TTS), is becoming a critical technology that supports AI. Whether you need to interact on a device, over the phone, in a vehicle, through a building PA system, or even with a translated input, TTS is a crucial part of your end-to-end solution. It is also a necessity for all applications that enable accessibility.

We are excited to announce that the Speech API, a Microsoft Cognitive Service, now offers six new TTS languages to all developers, bringing the total number of available languages to 34:

  • Bulgarian (language code: bg-BG)
  • Croatian (hr-HR)
  • Malay (ms-MY)
  • Slovenian (sl-SI)
  • Tamil (ta-IN)
  • Vietnamese (vi-VN)

Powered by the latest AI technology, these 34 languages are available across 48 locales and 78 voice fonts. Through a single API, developers can access the latest-generation of speech recognition and TTS models.

This Text-to-Speech API can be integrated by developers for a broad set of use cases. It can be used on its own for accessibility, hands-free communication, media consumption, or any other machine-to-human interaction. It can also be combined with other Cognitive Services APIs, such as Speech to Text and Language Understanding, to create comprehensive voice-driven solutions, online or on device.

In addition, these new TTS languages will become available through the Microsoft Translator Speech API and the Microsoft Translator apps by the end of February 2018, making text-to-speech output in these new languages available to developers integrating the Translator Speech API as well as to end users of the Microsoft Translator apps and the Translator live feature.

Try these new TTS languages today 

For the complete list of our 34 languages, 48 locales, and 78 voice fonts, please refer to this documentation.

Last week in Azure: Event Grid GA, Ansible in Cloud Shell, and more


Last week in Azure, Azure Event Grid, a fully managed, scalable and flexible event routing service became generally available. Event Grid enables you to build reactive, event-driven apps with a fully managed event routing service using a publish-subscribe model to connect data sources and event handlers, automate operations, and integrate applications.

In addition, Ansible is now pre-installed in Bash in Cloud Shell. Ansible is an open-source product that automates cloud provisioning, configuration management, and application deployments. By using Ansible, you can provision virtual machines, containers, and cloud infrastructures. You can also use Ansible to automate the deployment and configuration of resources in your environment.

Now in preview

Integrate Azure Security Center alerts into SIEM solutions - Announces the public preview of a new feature, SIEM Export, which enables you to export Azure Security Center alerts into popular SIEM solutions such as Splunk and IBM QRadar.

Now generally available

Announcing the general availability of Azure Event Grid - Azure Event Grid, a fully managed event routing service that simplifies the development of reactive, event-driven applications, is now generally available.

Network Watcher Connection Troubleshoot now generally available - Azure Network Watcher Connection Troubleshoot, previously in preview as Connectivity Check, is now generally available in the Network Watcher suite of networking tools and capabilities, which enables you to troubleshoot network performance and connectivity issues in Azure.

Azure Storage SDKs for Python, Ruby and PHP now generally available - The Azure Storage SDKs for Python, Ruby and PHP are now generally available. To reduce the footprint of the libraries and enable developers to use just packages in which they are interested, each of the Storage SDKs were split into four packages, one each for Blob, Table, Queue, and File.

New in Stream Analytics: General availability of sub-streams, query compatibility, and more - Announces the general availability of several features in Azure Stream Analytics: sub-streams support, egress to Azure Functions, and query compatibility levels.

Three new reasons to love the TSI explorer - Announces three improvements to the Time Series Insights (TSI) explorer: the TSI explorer is now generally available, accessibility has been improved, and it's now easier to export aggregate event data to other analytics tools (e.g., Microsoft Excel).

Virtual Network Service Endpoints and Firewalls for Azure Storage now generally available - Announces the general availability of Firewalls and Virtual Networks (VNets) for Azure Storage along with Virtual Network Service Endpoints with network-based access control in all Azure public cloud regions and Azure Government.

Headlines & updates

A great developer experience for Ansible - Ansible is now available, pre-installed and ready to use for every Azure user in the Azure Cloud Shell. We also released an Ansible extension for Visual Studio Code that allows for faster development and testing of Ansible playbooks.

Azure ExpressRoute updates – New partnerships, monitoring and simplification - Get the latest news on an extended partnership between Cisco and Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute, new monitoring options for ExpressRoute that will be generally available later this month, simplified ExpressRoute peering, and new ExpressRoute locations.

Full MeitY accreditation enables Indian public sector to deploy on Azure - Microsoft recently became one of the first global cloud service providers to achieve full accreditation by the Ministry of Electronics and Information Technology (MeitY) for the Government of India. The full MeitY accreditation adds to the Azure compliance portfolio of more than 70 offerings, the largest in the industry, with more than 30 offerings specific to regions and countries around the globe.

New Azure Data Factory self-paced hands-on lab for UI - Learn how to build a scale-out data integration project using Azure Data Factory and how to build data integration patterns using ADF V2 with new hands-on labs.

Customer success stories with Azure Backup: Russell Reynolds - Having moved to Azure to reduce their IT and datacenter costs, global leadership and executive search firm Russell Reynolds adopts Azure Backup as an alternative to their tape backups, which were proving both cumbersome and expensive.

Jenkins on Azure: from zero to hero - Microsoft published an update of the Jenkins Master on a Linux (Ubuntu) VM, which brings the next phase of our support for Jenkins on Microsoft Azure with the launch of a secure, stable and production ready version of Jenkins.

Enhancements to Azure Budgets API supporting Resource Groups and Usage Budgets - Announces the release of additional features in the budgets API that support scoping more granular budgets with filters, as well as support for usage and cost budgets, including resource group and resource-level budgets in addition to subscription-level budgets.

Enhancements to Cost Management APIs - ARM APIs now include a Marketplace charges API for Enterprise and Web Direct customers, with a few exceptions documented in the limitations, and a Price Sheet API for Enterprise customers.

Azure #CosmosDB and Microsoft’s Project Olympus honored in InfoWorld’s 2018 Technology of the Year Awards - IDG's InfoWorld recently recognized Azure Cosmos DB in the InfoWorld Technology of the Year Awards, which includes recognition for leveraging the work of Turing Award winner Leslie Lamport to deliver multiple consistency models. Azure Cosmos DB was also recognized for delivering a globally distributed system where users anywhere in the world can see the same version of data, no matter their location.

Technical articles

Using EXPLAIN to profile slow queries in Azure Database for MySQL - Learn how you can use the EXPLAIN statement with Azure Database for MySQL to profile client queries and thus help you identify the root cause of a slow query. You can use an EXPLAIN statement to get information about how SQL statements are executed. With this information, you can profile which queries are running slow and why.

Using the MySQL sys schema to optimize and maintain a database - Learn how to use the MySQL sys schema with Azure Database for MySQL to troubleshoot performance issues and manage resources efficiently.

Managing Azure Secrets on GitHub Repositories - Azure runs Credential Scanner (aka, CredScan) to monitor all incoming commits on GitHub, which checks for specific Azure tenant secrets such as Azure subscription management certificates and Azure SQL connection strings. The Continuous Delivery Tools for Visual Studio extension in early preview provides developers an inline experience for detecting potential secrets in their code.

Lambda Architecture using Azure #CosmosDB: Faster performance, Low TCO, Low DevOps - Learn how Azure Cosmos DB provides a scalable database solution that can handle both batch and real-time ingestion and querying and enables developers to implement lambda architectures that enable efficient data processing of massive data sets.

How is AI for video different from AI for images - Learn how Azure Video Indexer implements several video-specific algorithms, and the shortcomings of taking an approach of just processing individual video frames as you would with images.

Developer Spotlight: Serverless

Azure - Event-Driven Architecture in the Cloud with Azure Event Grid - This MSDN Magazine article explores the flexibility of Azure Event Grid and shows how it can be used to resolve familiar challenges in enterprise applications.

Azure Serverless Computing Cookbook - Download the free, 325-page serverless computing e-book and get access to dozens of step-by-step recipes for quickly building serverless apps.

Monitoring issues on StackOverflow with serverless, CosmosDB, and Teams - In this blog post, Shayne Boyer, a developer advocate for Azure, covers a set of serverless functions (code on GitHub) his team uses to help monitor Azure developer questions on Stack Overflow.

Logic Apps Introduction - Get an overview of Azure Logic Apps in this article from the Azure docs.

Create a function triggered by Azure Cosmos DB - Learn how to create a function triggered when data is added to or changed in Azure Cosmos DB.

Service updates

Azure shows

Running Ansible on Azure - Kylie Liang shows Donovan Brown how to run Ansible playbooks on Azure using Cloud Shell, a browser-based shell experience hosted in the cloud. She also demonstrates how to use the Ansible extension for VS Code to accelerate Ansible playbook development using auto-completion and code snippets, and then run it inside Docker or Cloud Shell.

Azure Functions and Logic Apps - Corey Sanders, Director of Program Management on the Microsoft Azure Compute team sat down with Azure Functions / Azure Logic Apps PM Jeff Hollan to see what's new in the serverless space on Azure.

Cloud Tech 10 - 29th January 2018 - Zone Redundant Scale Sets and Storage and more! - Each week, Mark Whitby, a Cloud Solution Architect at Microsoft UK, covers what's happening with Microsoft Azure in just 10 minutes, or less.

Cloud Tech 10 - 5th February 2018 - Event Grid, Jenkins, Ansible and more! - Each week, Mark Whitby, a Cloud Solution Architect at Microsoft UK, covers what's happening with Microsoft Azure in just 10 minutes, or less.

The Azure Podcast: Episode 213 - Azure Dev using Windows Subsystem for Linux - Tara Raj, a PM who has been on the show before, talks to us about her latest endeavor, the Windows Subsystem for Linux and how it can be used for various tasks including Azure Development!

Learn how to do Image Recognition with Cognitive Services and ASP.NET


With all the talk about artificial intelligence (AI) and machine learning (ML) doing crazy things, it’s easy to be left wondering, “what are practical ways I can use this today?” It turns out there are some extremely easy ways to try this today.

In this post, I’ll walk through how to detect faces and estimate gender, age, and hair color in photos, by adding only a few lines of code to an ASP.NET app. Images will be uploaded and shown in an image gallery built with ASP.NET, images will be hosted in Azure Storage, and Azure Cognitive Services will be used to analyze the images. The full application is available on GitHub. To begin, clone the repository on your machine.

What we’ll build

Here’s what the recognized photos can look like when displayed in a web browser. Note how the metadata generated by Azure Cognitive Services is displayed alongside the image.

A sample image of the application running, showing a woman whose age and gender have been estimated by Cognitive Services.

Set up prerequisites with Visual Studio and Azure

To begin, make sure you’ve installed Visual Studio 2017 with the ASP.NET and web workload. This will provide everything you need to build and run the app yourself.

Next, set up the Azure prerequisites.

First, ensure you have an Azure account. If not, you can sign up for an Azure free account, which will give you a $200 credit towards anything.

Next, create a Storage account, through the Azure Portal:

You’ll need to create the Storage resource:

An image of the Azure Portal, showing how to create a Storage resource.

After creating the resource, you’ll need to create the storage account for your resource with the default settings:

An image of the Create Storage Account page in the Azure Portal with default settings selected.

Finally, create a Cognitive Services resource through the Azure portal:

An image showing how to create a Cognitive Services resource.

Once you’ve set that up, you’re ready to start hacking away at the sample app!

Explore the codebase

Open the project in Visual Studio 2017 if you haven’t already. The application is an ASP.NET MVC app. It does three major things:

The first major operation is uploading an image to Azure Blob storage, analyzing the image using Azure Cognitive Services, and uploading image metadata generated from Cognitive Services back to Blob Storage.
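To give a concrete sense of the analysis step, a minimal sketch of calling the Face API directly over REST is shown below; the regional endpoint, the attribute list, and the method name are assumptions for illustration, and the sample app on GitHub wraps this differently:

    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    public static async Task<string> AnalyzeImageAsync(byte[] imageBytes, string apiKey)
    {
        using (var client = new HttpClient())
        {
            // The subscription key comes from your Cognitive Services resource.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

            // The region in the URL must match where the resource was created.
            var uri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect" +
                      "?returnFaceAttributes=age,gender,hair";

            using (var content = new ByteArrayContent(imageBytes))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                var response = await client.PostAsync(uri, content);
                return await response.Content.ReadAsStringAsync(); // JSON describing detected faces
            }
        }
    }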

The second major operation is to snag images and their associated metadata from Blob Storage.

The UI simply wires up these images to a page with an upload button.

Add your API keys

Modify the Web.config file to include your Cognitive Services URL and Cognitive Services API key. Look for this file:

Your Cognitive Services URL and API keys can be found in the dashboard for your Cognitive Services resource in the Azure Portal here:

The connection strings for your Azure Storage resource can be found in the Azure Portal under Access Keys:

An image showing how to access the Azure Storage Access Keys in the Azure Portal.

Once you have entered your information in your Web.Config file, you’ll be good to go!

To learn more about how to best work with keys and other sensitive information in development, see Best Practices for Deploying Passwords and other Sensitive Data to ASP.NET and Azure.
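For reference, values added to Web.config are typically read back in code through ConfigurationManager. In the sketch below the key names are assumptions, so use whatever names the sample project defines in its appSettings section:

    using System.Configuration;

    // Hypothetical key names; match them to the entries in your Web.config.
    string faceApiRoot = ConfigurationManager.AppSettings["CognitiveServicesApiUrl"];
    string faceApiKey = ConfigurationManager.AppSettings["CognitiveServicesApiKey"];
    string storageConnection = ConfigurationManager.AppSettings["StorageConnectionString"];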

Run the application and add some images

Now that everything is set up and configured locally, you can run the application on your machine!

Press F5 to debug and see how everything works. I recommend that you set a breakpoint in the Upload controller action (HomeController.cs, line 32), so that you can step through each operation as you upload a new image. In the opened browser, upload an image to see what happens!

If you want to see images show up in Azure blobs when running the app, you can do so with Cloud Explorer (View -> Cloud Explorer). You may need to log in first, but after that, you can navigate to your created Storage Account and see all of your Blobs under Blob Containers:

An image showing Visual Studio Cloud Explorer and browsing live Blobs in Azure.

In this example, I’ve uploaded three images to my container called “images”. The web app also uploaded a JSON file with image metadata for each image.

Publish to Azure and impress your friends with your use of AI

You can publish the entire application to Azure App Service. Right-click on your project and select “Publish”. Next, select App Service and continue. You can create one right in the Visual Studio UI:

An image showing how you can publish to Azure from Visual Studio 2017.

Finally, click “Create” and it will create all the Azure resources you need and publish your app! After that process completes (it should take a minute or two), your browser will open with your application running entirely in Azure.

Next steps

And that’s it! Try exploring other interesting things you can do with Cognitive Services. Some fun things to try, without needing to add support for any other services or read other tutorials:

  • Modify the web app to replace someone’s face with an emoji that matches their measured emotion (try the System.Drawing API!)
  • Group faces by similarity, age, or if they have makeup on
  • Try it out on pictures of animals instead of humans

Additionally, check out these tutorials to learn more about what you can do with .NET and Cognitive Services:

Cheers, and happy coding!

Announcing .NET Framework 4.7.2 Early Access build 3052!


Today, we are happy to share the .NET Framework 4.7.2 Early Access build 3052 for your feedback. .NET Framework 4.7.2 is the next version of the .NET Framework; it is currently feature-complete and in the testing phase, but is not supported for production use. We would love your help to ensure this is a high-quality and compatible release.

Next steps:

This pre-release build 3052 includes improvements in several areas. Here’s a representative set; we will share further information on these features in the coming weeks.

  • [ASP.NET] Support for SameSite cookies in ASP.NET (see the sketch after this list)
  • [ASP.NET] Support for ASP.NET Dependency Injection
  • [ClickOnce] Per-monitor support for WPF and HDPI-aware VSTO apps deployed via ClickOnce
  • [SQL] Always Encrypted enhancements in SQL Connectivity
  • [Networking & BCL] Enhanced .NET Framework support for .NET Standard 2.0
  • [BCL] Cryptography improvements
  • [WPF] Diagnostic enhancements
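To illustrate the SameSite item above, here is a minimal sketch of setting the attribute from ASP.NET code; it assumes the feature surfaces as a SameSite property on HttpCookie in this build, the cookie name and value are placeholders, and the snippet would run inside a controller action or page:

    using System.Web;

    var cookie = new HttpCookie("user-pref", "compact-view")
    {
        Secure = true,
        HttpOnly = true,
        SameSite = SameSiteMode.Lax   // None, Lax, or Strict
    };
    Response.Cookies.Add(cookie);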

You can see the complete list of improvements in this build in the release notes.

The .NET Framework build 3052 will replace any existing .NET Framework 4 and later installation on your machine. This means all .NET Framework 4 and later applications on your machine will run on the .NET Framework early access builds upon installation. The .NET Framework build 3052 installs on Windows 10 Fall Creators Update, Windows 10 Creators Update, Windows 10 Anniversary Update, Windows 8.1, Windows 7 SP1, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012 and Windows Server 2008 R2 SP1 OS platforms.

.NET Framework build 3052 is also included in the next update for Windows 10. You can sign up for Windows Insiders to validate that your applications work great on the latest .NET Framework included in the latest Windows 10 releases.

Going forward, we will share early builds of the next release of the .NET Framework via the Early Access Program on a regular basis for your feedback. As a member of the .NET Framework Early Access community you will have an active role in helping us build new .NET Framework products. We will do our best to ensure these early access builds are stable and compatible, but you may see bugs or issues from time to time. We’d appreciate you taking the time to report these to us on Github so we can address these issues before the official release. Thank you.
