Statistics from R-bloggers

Tal Galili's R-bloggers.com has been syndicating blog posts about R for quite a while — from memory I'd say about 8 years, but I couldn't find the exact date it started aggregating. Anyway, it contains a wealth of information about activity in the R ecosystem, but without any easy way to access that information other than the blog post feed. Bob Rudis figured it out though — by using the Feedly API and the RSS feeds of R-bloggers, you can extract quite a bit of data about the posts of the 750+ bloggers syndicated on the site. Among the insights:

  • Since 2014, there have been more than 160 posts per month (as many as 250 in a single month) aggregated on R-bloggers.
  • Most posts appear on Mondays and Tuesdays. Sunday is the quietest day for R bloggers.
  • More than 1,000 authors have been published on the site.
  • And (with more than a modicum of personal pride here), the top 10 authors on the site, measured by total engagement, are shown in a chart in Bob's post.

You can find more analysis, including charts and the detailed R code used to produce them and a link to the extracted data file, in Bob Rudis's blog post at the link below:

rud.is: Exploring R-Bloggers Posts with the Feedly API

 


Updating jQuery-based Lazy Image Loading to IntersectionObserver

The Hanselminutes Tech Podcast

Five years ago I implemented "lazy loading" of the 600+ images on my podcast's archives page (I don't like paging, as a rule) over here https://www.hanselminutes.com/episodes. I did it with jQuery and a jQuery Plugin. It was kind of messy and gross from a purist's perspective, but it totally worked and has easily saved me (and you) hundreds of dollars in bandwidth over the years. The page is like 9 or 10 megs if you load 600 images, not to mention you're loading 600 freaking images.

Fast-forward to 2018, and there's the "Intersection Observer API" that's supported everywhere but Safari and IE, well, because, Safari and IE, sigh. We will return to that issue in a moment.

Following Dean Hume's blog post on the topic, I start with my images like this. I don't populate src="", but instead hold the image URL in the HTML5 data- bucket of data-src. For src, I can use a simple grey.gif placeholder, or just style and color the image grey.

<a href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~https://www.hanselman.com/626/christine-spangs-open-source-journey-from-teen-oss-contributor-to-cto-of-nylas" class="showCard">
    <img data-src="https://images.hanselminutes.com/images/626.jpg" 
         class="lazy" src="https://www.hanselman.com/images/grey.gif" width="212" height="212" alt="Christine Spang's Open Source Journey from Teen OSS Contributor to CTO of Nylas" />
    <span class="shownumber">626</span>                
    <div class="overlay title">Christine Spang's Open Source Journey from Teen OSS Contributor to CTO of Nylas</div>
</a>
<a href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~https://www.hanselman.com/625/a-new-sega-megadrivegenesis-game-in-2018-with-1995-tools-with-tanglewoods-matt-phillips" class="showCard">
    <img data-src="https://images.hanselminutes.com/images/625.jpg" 
         class="lazy" src="https://www.hanselman.com/images/grey.gif" width="212" height="212" alt="A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood's Matt Phillips" />
    <span class="shownumber">625</span>                
    <div class="overlay title">A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood's Matt Phillips</div>
</a>

Then, if an image gets within 50px of intersecting the viewport (as I'm scrolling down), I load it:

// Get images of class lazy
const images = document.querySelectorAll('.lazy');
const config = {
  // If image gets within 50px go get it
  rootMargin: '50px 0px',
  threshold: 0.01
};
let observer = new IntersectionObserver(onIntersection, config);
images.forEach(image => {
  observer.observe(image);
});

Now that we are watching it, we need to do something when it's observed.

function onIntersection(entries) {
  // Loop through the entries
  entries.forEach(entry => {
    // Are we in viewport?
    if (entry.intersectionRatio > 0) {
      // Stop watching and load the image
      observer.unobserve(entry.target);
      preloadImage(entry.target);
    }
  });
}

If the browser (IE, Safari, Mobile Safari) doesn't support IntersectionObserver, we can do a few things. I *could* fall back to my old jQuery technique, although it would involve loading a bunch of extra scripts for those browsers, or I could just load all the images in a loop, regardless, like:

if (!('IntersectionObserver' in window)) {
    loadImagesImmediately(images);
} else {...}

Dean's examples are all "Vanilla JS" and require no jQuery, no plugins, and no polyfills when the browser has native support. There are also some IntersectionObserver helper libraries out there like Cory Dowdy's IOLazy. Cory's is a nice simple wrapper and is super easy to implement. Given I want to support iOS Safari as well, I am using a polyfill to get the support I want from browsers that don't have it natively.

<!-- intersection observer polyfill -->
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=IntersectionObserver"></script>

Polyfill.io is a lovely site that gives you just the fills you need (or those you need AND request) tailored to your browser. Try GETting the URL above in Chrome. You'll see it's basically empty as you don't need it. Then hit it in IE, and you'll get the polyfill. The official IntersectionObserver polyfill is at the w3c.

At this point I've removed jQuery entirely from my site and I'm just using an optional polyfill plus browser support that didn't exist when I started my podcast site. Fewer moving parts means a cleaner, leaner, simpler site!

Go subscribe to the Hanselminutes Podcast today! We're on iTunes, Spotify, Google Play, and even Twitter!


Sponsor: Announcing Raygun APM! Now you can monitor your entire application stack, with your whole team, all in one place. Learn more!



© 2018 Scott Hanselman. All rights reserved.
     

Announcing Improvements to Maps Related Searches for the UK

The Bing Maps team is happy to announce several improvements we have made to maps related searches on Bing.com for the UK market. Over the last few months we have been hard at work lighting up features to improve our users' experience when searching for addresses, places, maps, directions, etc.

New Features

Here's an overview of just some of the features we've recently enabled for our UK users:

Address Queries. Looking for an address? Search for an address on Bing.com and see where it is on the map. You can also search for addresses with high-precision postcodes. On large screen devices like laptops or tablets, the map is fully dynamic and interactive. This also works for queries for cities, countries, postcodes, and other kinds of places.

Business at Address. Search for an address and see what businesses (if any) are associated with that address. This is great for confirming that the address your friend sent you for the restaurant you're meeting at is actually the right one. Here’s an example for you to try.

Business at Address

Enriched Experience for Simple Maps Queries. Simple queries like 'map' and 'maps' on Bing PC will now trigger a map answer experience centered on your user location, along with autosuggest for places, businesses, and all your favorite locations if you're a signed-in user! Here’s an example for you to try. If you search ‘aerial maps’ you’ll get the same experience, but this time with satellite imagery turned on.

Simple Maps Queries

Improved Experience for Directions Queries. Want driving directions to a place? We've made several improvements to our answer for users. We now list out alternate driving routes and provide helpful information like the amount of time you can expect to be stuck in traffic. We've also added 'Search along the route', so if you're taking a road trip, you can see what great attractions and restaurants are along the way. Here’s an example for you to try.

Directions Queries

Improved Experience for Latitude & Longitude Queries. This is one for the geeks! Search for lat/long queries on Bing.com and we show the associated address location and list out any businesses nearby. We have also enabled Streetside imagery. Here’s an example for you to try out.

We hope you’re as excited about these features as we are and would love to hear your feedback!

- The Bing Maps Team

Announcing .NET Core 2.1 Preview 2

Today, we are announcing .NET Core 2.1 Preview 2. The release is now ready for broad testing, as we get closer to a final build within the next two to three months. We’d appreciate any feedback that you have.

ASP.NET Core 2.1 Preview 2 and Entity Framework Core 2.1 Preview 2 are also releasing today.

You can download and get started with .NET Core 2.1 Preview 2, on Windows, macOS, and Linux:

You can see complete details of the release in the .NET Core 2.1 Preview 2 release notes. Known issues and workarounds are included in the release notes.

You can develop .NET Core 2.1 apps with Visual Studio 2017 15.7 Preview 1 or later, or Visual Studio Code. We expect that Visual Studio for Mac support will be added by .NET Core 2.1 RTM.

Thank you very much! We couldn’t have gotten to this spot without you, and we’ll continue to need your help as we work together towards .NET Core 2.1 RTM.

Build Performance Improvements

Build-time performance is greatly improved in .NET Core 2.1, particularly for incremental builds. These improvements apply to both dotnet build on the command line and to builds in Visual Studio. We’ve made improvements in the CLI and in MSBuild in order to make the tools deliver a much faster experience.

The following image shows the improvements that we’ve made since .NET Core 2.0. We have focused on large projects.

.NET Core incremental build performance improvements

These improvements are the result of many individual changes to the CLI and MSBuild.

We’re happy to look at your project if you don’t see significant improvements from using .NET Core 2.1 Preview 2.

Long-running SDK build servers

We added long-running servers to the .NET Core SDK to improve the performance of common development operations. The servers are just additional processes that run for longer than a single dotnet build invocation. Some of these are ports from the .NET Framework and others are new.

The following SDK build servers have been added:

  • VBCSCompiler
  • MSBuild worker processes
  • Razor server

The primary benefit of these servers is that they skip the need to JIT compile large blocks of code on every dotnet build invocation. They auto-terminate after a period of time.

You can manually terminate the build server processes via the following command:

dotnet buildserver shutdown

This command can be used in CI scripts in order to terminate worker processes after builds are completed. You can also run builds with dotnet build -nodeReuse:false to prevent MSBuild worker processes from being created.

New SDK Commands

The following tools have been added to the SDK:

  • dotnet watch
  • dotnet dev-certs
  • dotnet user-secrets
  • dotnet sql-cache
  • dotnet ef

We found that these tools were so popular that having to add them to individual projects didn’t seem like the right design, so we made them part of the SDK.

These tools were previously DotNetCliToolReference tools. They are no longer delivered that way. You can delete the DotNetCliToolReference entries in your project file when you adopt .NET Core 2.1.

Global Tools

.NET Core now has a new deployment and extensibility story for tools. This new experience is inspired by npm global tools.

With Preview 2, the syntax for global tools has changed, as you can see in the following example:

dotnet tool install -g dotnetsay
dotnetsay

You can try out .NET Core Global Tools for yourself (after you’ve installed .NET Core 2.1 Preview 2) with a sample tool called dotnetsay.

New Tools Arguments

All tools operations now use the dotnet tool command. The following new functionality has been added in Preview 2:

  • dotnet tool install — installs a tool
  • dotnet tool update — uninstalls and reinstalls a tool, effectively updating it
  • dotnet tool uninstall — uninstalls a tool
  • dotnet tool list — lists currently installed tools
  • --tool-path — specifies a specific location to (un)install and list tools, per invocation

Preview releases and Roll-forward

.NET Core Applications, starting with 2.0, roll forward to minor versions. You can learn more about that behavior in the .NET Core 2.1 Preview 1 post.

However, .NET Core has the opposite behavior for previews. Applications, including global tools, do not roll forward from one preview to another or from preview to RTM. This means that you need to publish new versions of Global Tools to support later previews and for RTM.

The policy for previews is a bit controversial. The premise behind it is that we may make breaking changes between a given preview and the final RTM build. This policy enables us to do that while minimizing breakage in the ecosystem. There is also the likely case that software built for previews was not tested with RTM builds; however, this rationale is less compelling.

This policy has been in place since the start of the .NET Core project. Global Tools makes it a bit more challenging. We’d appreciate your feedback and insight on it.

Sockets Performance and SocketsHttpHandler

We made major improvements to sockets in .NET Core 2.1. Sockets are the basis of both outgoing and incoming networking communication. The higher-level networking APIs in .NET Core 2.1, including HttpClient and Kestrel, are now based on .NET sockets. In earlier versions, these higher-level APIs were based on native networking implementations.

We built a new from-the-ground-up managed HttpMessageHandler called SocketsHttpHandler. It’s an implementation of HttpMessageHandler based on .NET sockets and Span<T>.

SocketsHttpHandler is now the default implementation for HttpClient. The biggest win of SocketsHttpHandler is performance. It is a lot faster than the existing implementation. There are other benefits, such as:

  • Elimination of platform dependencies on libcurl (for Linux and macOS) and WinHTTP (for Windows) – simplifying development, deployment, and servicing.
  • Consistent behavior across platforms and platform/dependency versions.

You can use one of the following mechanisms to configure a process to use the older HttpClientHandler:

From code, use the AppContext class:

AppContext.SetSwitch("System.Net.Http.UseSocketsHttpHandler", false);

The AppContext switch can also be set by config file.

The same can be achieved via the environment variable DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER. To opt out, set the value to either false or 0.

On Windows, you can choose to use WinHttpHandler or SocketsHttpHandler on a call-by-call basis. To do that, instantiate one of those types and then pass it to HttpClient when you instantiate it.
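
As an illustration of that call-by-call approach, here is a minimal sketch; the URL is a placeholder, and the WinHTTP variant assumes the separate System.Net.Http.WinHttpHandler package:

using System;
using System.Net.Http;
using System.Threading.Tasks;

static class HandlerSelectionSample
{
    public static async Task RunAsync()
    {
        // Explicitly back this HttpClient with the new managed handler.
        using (var socketsClient = new HttpClient(new SocketsHttpHandler()))
        {
            var body = await socketsClient.GetStringAsync("https://example.com/");
            Console.WriteLine(body.Length);
        }

        // On Windows, a specific client can instead be backed by WinHTTP
        // (requires the System.Net.Http.WinHttpHandler package):
        // using (var winHttpClient = new HttpClient(new WinHttpHandler())) { ... }
    }
}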

On Linux and macOS, you can only configure HttpClient on a process-basis. On Linux, you need to deploy libcurl yourself if you want to use the old HttpClient implementation. If you have .NET Core 2.0 working on your machine, then libcurl is already installed.

Self-contained Application Servicing

dotnet publish now publishes self-contained applications with a serviced runtime version. When you publish a self-contained application with the new SDK, your application will include the latest serviced runtime version known by that SDK. When you upgrade to the latest SDK, you’ll publish with the latest .NET Core runtime version. This applies for .NET Core 1.0 runtimes and later.

Self-contained publishing relies on runtime versions on NuGet.org. You do not need to have the serviced runtime on your machine.

Using the .NET Core 2.0 SDK, self-contained applications are published with the .NET Core 2.0.0 Runtime unless a different version is specified via the RuntimeFrameworkVersion property. With this new behavior, you’ll no longer need to set this property to select a higher runtime version for self-contained applications. The easiest approach going forward is to always publish with the latest SDK.

Docker

We are consolidating the set of Docker Hub repositories that we use for .NET Core and ASP.NET Core. We will use microsoft/dotnet as our only .NET Core repository going forward.

The publicly available statistics suggest that most users are already using the dotnet repo, as you can see from the docker pull counts for the following repos:

  • microsoft/dotnet
  • microsoft/aspnetcore
  • microsoft/aspnetcore-build

You can learn more about this change and how to adapt at aspnet/announcements #298.

We also added a set of environment variables to .NET Core docker images, for 2.0 and later. These environment variables enable more scenarios to work without additional configuration, such as developing ASP.NET Core applications in a container.

  • To sdk images (example)
    • ASPNETCORE_URLS=http://+:80
    • DOTNET_RUNNING_IN_CONTAINER=true
    • DOTNET_USE_POLLING_FILE_WATCHER=true
  • To Linux runtime-deps images (example)
    • ASPNETCORE_URLS=http://+:80
    • DOTNET_RUNNING_IN_CONTAINER=true
  • To Windows runtime images (example)
    • ASPNETCORE_URLS=http://+:80
    • DOTNET_RUNNING_IN_CONTAINER=true

Note: These environment variables will be added to the 2.0 images later this month.

Supported Operating Systems and Chip Architectures

The biggest additions are supporting Ubuntu 18.04 and adding official ARM32 support.

We will support the following operating system versions for .NET Core 2.1:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2008 R2 SP1+
  • macOS: 10.12+
  • RHEL: 7+
  • Fedora: 26+
  • openSUSE: 42.3+
  • Debian: 8+
  • Ubuntu: 14.04+
  • SLES: 12+

Alpine support is still in preview.

We will support the following chip architectures:

  • On Windows: x64 and x86
  • On Linux: x64 and ARM32
  • On macOS: x64

Azure App Service and VSTS Deployment

ASP.NET Core 2.1 Previews will not be automatically deployed to Azure App Service. Instead, you can opt in to using .NET Core previews with just a little bit of configuration. See Using ASP.NET Core Previews on Azure App Service for more information.

Visual Studio Team Services support for .NET Core 2.1 will come closer to RTM.

Key Improvements in .NET Core 2.1 Preview 1

There are some key improvements that are important to recap from Preview 1. For more details, take a look at the .NET Core 2.1 Preview 1 Announcement.

  • Minor-version roll-forward
  • Span<T> and Memory<T> and friends
  • Windows Compatibility Pack

Closing

Please test your existing applications with .NET Core 2.1 Preview 2. Thanks in advance for trying it out. We need your feedback to take these new features across the finish line for the final 2.1 release.

.NET Core 2.1 is a big step forward from .NET Core 2.0. We hope that you find multiple improvements that make you want to upgrade.

Once again, thanks to everyone that contributed to the release. We appreciate all of the issues and PRs that you’ve contributed that have helped make this preview release available.

Announcing Entity Framework Core 2.1 Preview 2

Today we’re releasing the second preview of EF Core 2.1, alongside .NET Core 2.1 Preview 2 and ASP.NET Core 2.1 Preview 2.

Thank you so much to everyone who has tried our early builds and has helped shape this release with their feedback and code contributions!

The new preview bits are now available on NuGet as individual packages, as part of the ASP.NET Core 2.1 Preview 2 metapackage, and in the .NET Core 2.1 Preview 2 SDK, also released today.

Changes since Preview 1

As we announced in February, the first preview contained the initial implementation of all the major features we have planned for the 2.1 release: GroupBy translation, lazy loading, parameters in entity constructors, value conversion, query types, data seeding, System.Transactions support, among others.

For a more complete description of the new features in 2.1, see the What’s New section in our documentation. We will keep this section up to date as we make progress.

For the second preview, we have made dozens of bug fixes, functional and performance improvements. For the complete set of issues addressed in Preview 2, see this query on our issue tracker.

Among all the improvements, the following are worth mentioning here:

Refinement of the new 2.1 features

  • The GroupBy LINQ operator is now translated to SQL for more query patterns, including:
    • Grouping by constants or variables.
    • Grouping by scalar properties on reference navigations.
    • Grouping in combination with ordering or filtering on the grouping key or an aggregate.
    • Grouping in combination with projecting into an unmapped nominal type.

    For example, the following query can now be translated to SQL using GROUP BY and HAVING clauses:

    var activeCustomers = db.Orders
      .GroupBy(o => o.CustomerID)
      .Where(g => g.Count() > 0)
      .Select(g => new { g.Key, Count = g.Count() });
  • Query types can now be used to create explicitly compiled queries and can be configured in a separate class that implements IQueryTypeConfiguration<TQuery>.
  • Data seeding now works with in-memory databases.
  • In general, new features like value conversions, data seeding and query types now work better in many more scenarios, and in combination with each other and with other existing features.
  • New APIs have been reviewed for consistency and in some cases renamed. For example, the SeedData method was renamed to HasData.
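
For instance, a minimal sketch of the renamed data seeding API might look like the following (the Blog entity and database name are hypothetical; the in-memory provider is used only because seeding now works with it, as noted above):

using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseInMemoryDatabase("Blogging");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Previously SeedData, now HasData: seed values must include the key.
        modelBuilder.Entity<Blog>().HasData(
            new Blog { BlogId = 1, Url = "http://sample.com/blog1" },
            new Blog { BlogId = 2, Url = "http://sample.com/blog2" });
    }
}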

Minor new features and enhancements

  • The dotnet-ef commands now ship as part of the .NET Core SDK, so it will no longer be necessary to use DotNetCliToolReference in the project to be able to use migrations or to scaffold a DbContext from an existing database.
  • A new Microsoft.EntityFrameworkCore.Abstractions package contains attributes and interfaces that you can use in your projects to light up EF Core features without taking a dependency on EF Core as a whole. For example, the [Owned] attribute introduced in Preview 1 was moved here.
  • New Tracked and StateChanged events on ChangeTracker can be used to write logic that reacts to entities entering the DbContext or changing their state (see the sketch after this list).
  • A new code analyzer is included with EF Core that detects possibly unsafe usages of our raw-SQL APIs in which parameters are not generated.
  • The dotnet ef dbcontext scaffold and the Scaffold-DbContext commands can now use named connection strings (that is, connection strings of the form Name=some_name) that reference connection strings defined in configuration (including user secrets).
  • Table splitting can now be used for relationships declared on derived types. This is useful for owned types.
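
Here is a minimal sketch of wiring up the new ChangeTracker events mentioned above (the console logging is illustrative, not from the post):

using System;
using Microsoft.EntityFrameworkCore;

public static class ChangeTrackerEventsSample
{
    public static void WireUp(DbContext context)
    {
        // Raised whenever an entity starts being tracked (from a query, Add, Attach, ...).
        context.ChangeTracker.Tracked += (sender, e) =>
            Console.WriteLine($"Tracked {e.Entry.Entity.GetType().Name} (from query: {e.FromQuery})");

        // Raised whenever a tracked entity moves between states (Added, Modified, Deleted, ...).
        context.ChangeTracker.StateChanged += (sender, e) =>
            Console.WriteLine($"{e.Entry.Entity.GetType().Name}: {e.OldState} -> {e.NewState}");
    }
}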

Obtaining the bits

EF Core 2.1 Preview 2 and the corresponding versions of the SQL Server and in-memory database providers are included in the ASP.NET Core metapackage Microsoft.AspNetCore.App 2.1 Preview 2. Therefore, if your application references the metapackage and uses these database providers, you will not need any additional installation steps.

Otherwise, depending on your development environment, in order to install the new bits in your application, you can use NuGet or the dotnet command-line interface.

If you’re using one of the database providers developed as part of the Entity Framework Core project (for example, SQL Server, SQLite or In-Memory), you can install EF Core 2.1 Preview 2 by installing the latest version of the provider. For example, using dotnet on the command-line:

$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer -v 2.1.0-preview2-final

If you’re using another EF Core 2.0-compatible relational database provider, it’s recommended that in order to obtain all the newest EF Core bits, you add a direct reference to the base relational provider in your application, for example:

$ dotnet add package Microsoft.EntityFrameworkCore.Relational -v 2.1.0-preview2-final

Provider compatibility

Some of the new features in 2.1, such as value conversions, require an updated database provider. However, it’s our goal that existing providers developed for EF Core 2.0 will be compatible with EF Core 2.1, or at most require minimal updates.

We have been working and will continue to work with provider writers to make sure we identify and address any breaking changes.

If you experience any incompatibility, please report it by creating an issue in our GitHub repository.

Providers with support for new 2.1 features that are also compatible with Preview 2 will be available soon.

What’s next

As mentioned in our roadmap post in February, we intend to release additional previews on a monthly cadence, and a final release within the next two or three months.

Regarding other ongoing projects, we still plan to release a preview of our Cosmos DB provider in the 2.1 timeframe, and a final release around the 2.2 timeframe.

Thank you!

Again, we want to express our deep gratitude to everyone who has helped make the 2.1 release better by trying early builds, providing feedback, reporting bugs, and contributing code.

Please try the preview bits, and keep posting any new feedback to our issue tracker!

Enhanced capabilities to monitor, manage, and integrate SQL Data Warehouse in the Azure Portal

Azure SQL Data Warehouse (SQL DW) continues to introduce updates to the Azure portal to provide a seamless user experience when monitoring, managing, and integrating your data warehouse.

Support for Azure Monitor metrics

SQL DW now supports Azure Monitor, a built-in monitoring service that consumes performance and health telemetry for your data warehouse. Azure Monitor not only enables you to monitor your data warehouse within the Azure portal, but its tight integration with other Azure services also enables you to monitor your entire data analytics solution within a single interface. For this release, the following data warehouse metrics have been enabled to help you identify performance bottlenecks and user activity:

  • Successful/Failed/Blocked by firewall connections
  • CPU
  • IO
  • DWU Limit
  • DWU Percentage
  • DWU used

These metrics now have a one-minute frequency for near real-time visibility into resource bottlenecks of your data warehouse. There is a default retention period of 90 days for all data warehouse metrics with Azure Monitor.

Configure metric charts in the Azure Monitor service through the Azure portal, or programmatically query for metrics via PowerShell or REST.

Pin configured charts for your data warehouse through Azure dashboards.

Safely manage costs by pausing

The pause feature for SQL DW enables you to reduce and manage operating costs for your data warehouse by turning off compute during times of little to no activity. We have enhanced this feature within the Azure portal by detecting active running queries and providing a warning before issuing the pause command. Pausing cancels all sessions to immediately quiesce your data warehouse before shutting it down, which can sometimes interrupt your end-user applications. Now, with a simple click of the pause button in the Azure portal, you can see the number of running queries so you can make an informed decision on when to pause.

You can also leverage the “dataWarehouseUserActivities” REST API to programmatically integrate query detection in your applications.

Integrate with Azure Analysis Services

SQL DW is tightly integrated across many Azure services enabling you to develop an advanced modern data analytics solution. One common pattern is to leverage Azure Analysis Services (AAS) with SQL DW in a hub and spoke pattern to optimize for cost, performance, and concurrency. To enable seamless access to AAS, a tighter integration point within the Azure Portal has been enabled. Click on the “Model and Cache Data” button within the Task panel to immediately begin building and hosting semantic models of your data.

If you need help for a POC, contact us directly. Stay up-to-date on the latest Azure SQL DW news and features by following us on Twitter @AzureSQLDW.

Bing Maps United Kingdom Quality Update 2018

The Bing Maps team has been continuously improving its services with new data, quality updates and bug fixes. The March update added support for rooftop address geocoding with high-precision postcodes in the United Kingdom. It significantly boosts geo-accuracy in multiple services and improves the customer experience across Bing Maps clients. Here are a few samples of the improvements:

Better Geo Accuracy with Higher Granularity

With millions of rooftop addresses added to the Bing Maps platform, our geocoding service now offers the most accurate rooftop-level geocoder for the United Kingdom on the market. Instead of interpolated results, addresses are now resolved to the latitude/longitude coordinate at the center of the address parcel (property boundary), along with a high-precision postcode. This is critical for a country like the United Kingdom with high-density addresses. This example shows the accurate address coordinate returned by the geocoder:

Geo Accuracy and Higher Granularity

Bing Maps result for the query: 411 Malden Road Worcester Park, KT4 7NY

Better User Experience with Richer Information on the Map

Improved geocoding accuracy leads to improved driving directions. With the geo-accuracy improvement, users can see granular buildings and business landmarks shown on Bing Maps. If you’re using our services to find the businesses near a point of interest, you won’t miss any that you might be interested in. This example shows an accurate direction result returned by Bing Maps:

Accurate Direction Results

The example below shows buildings and business landmarks returned by Bing Maps:

Businesses and Landmarks

Bing Maps nearby result for the query: London bridge

- The Bing Maps Team

ASP.NET Core 2.1.0-preview2 now available

Today we’re very happy to announce that the second preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. This second preview includes many refinements based on feedback we received from the first preview we released back in February.

You can read about .NET Core 2.1.0-preview2 over on their blog.

You can also read about Entity Framework Core 2.1.0-preview2 on their blog.

How do I get it?

You can download the new .NET Core SDK for 2.1.0-preview2 (which includes ASP.NET Core 2.1.0-preview2) from https://www.microsoft.com/net/download/dotnet-core/sdk-2.1.300-preview2

Visual Studio 2017 version requirements

Customers using Visual Studio 2017 should also install, in addition to the SDK above, the Preview channel of Visual Studio 2017 (15.7 Preview 3 at the time of writing) and use it when working with .NET Core and ASP.NET Core 2.1 projects. .NET Core 2.1 projects require Visual Studio 2017 15.7 or greater.

Impact to machines

Please note that, given this is a preview release, there are likely to be known issues and as-yet-undiscovered bugs. While .NET Core SDK and runtime installs are side by side on your machine, your default SDK will become the latest version, which in this case will be the preview. If you run into issues working on existing projects that use earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file, as documented here. Please log an issue if you run into such cases, as SDK releases are intended to be backwards compatible.

Already published applications running on earlier versions of .NET Core and ASP.NET Core shouldn’t be impacted by installing the preview. That said, we don’t recommend installing previews on machines running critical workloads.

Announcements and release notes

You can see all the announcements published pertaining to this release at https://github.com/aspnet/Announcements/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.1.0-preview2

Release notes, including known issues, are available at https://github.com/aspnet/Home/releases/tag/2.1.0-preview2

Giving feedback

The main purpose of providing previews like this is to solicit feedback from customers such that we can refine and improve the changes in time for the final release. We intend to ship a release candidate in about a month (with “go-live” license and support) before the final RTW release.

Please provide feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. The posts on specific topics above will provide direct links to the most appropriate place to log issues for the features detailed.

New features

You can see a summary of the new features planned in 2.1 in the roadmap post we published previously.

Following are details of additions and changes in preview2 itself.

Improvements to Razor UI libraries

New in ASP.NET Core 2.1 is support for building Razor UI in class libraries. In Preview 2 we’ve made various improvements to simplify authoring Razor UI in class libraries through the introduction of the new Razor SDK.

To create a Razor UI class library, start with a .NET Standard class library and then update the SDK in the .csproj file to be Microsoft.NET.Sdk.Razor. The Razor SDK adds the necessary build targets and properties so that Razor files can be included in the build.

To create your own Razor UI class library:

  1. Create a .NET Standard class library
    dotnet new classlib -o ClassLibrary1
    
  2. Add a reference from the class library to Microsoft.AspNetCore.Mvc
    dotnet add ClassLibrary1 package Microsoft.AspNetCore.Mvc -v 2.1.0-preview2-final
    
  3. Open ClassLibrary1.csproj and change the SDK to be Microsoft.NET.Sdk.Razor
    <Project Sdk="Microsoft.NET.Sdk.Razor">
    
      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
      </PropertyGroup>
    
      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.0-preview2-final" />
      </ItemGroup>
    
    </Project>
  4. Add a Razor page and a view imports file to the class library
    dotnet new page -n Test -na ClassLibrary1.Pages -o ClassLibrary1/Pages
    dotnet new viewimports -na ClassLibrary1.Pages -o ClassLibrary1/Pages
    
  5. Update the Razor page to add some markup
    @page
    
    <h1>Hello from a Razor UI class library!</h1>
  6. Build the class library to ensure there are no build errors
    dotnet build ClassLibrary1
    

    In the build output you should see both ClassLibrary1.dll and ClassLibrary1.Views.dll, where the latter contains the compiled Razor content.

Now let’s use our Razor UI library from an ASP.NET Core web app.

  1. Create an ASP.NET Core Web Application
    dotnet new razor -o WebApplication1
    
  2. Create a solution file and add both projects to the solution
    dotnet new sln
    dotnet sln add WebApplication1
    dotnet sln add ClassLibrary1
    
  3. Add a reference from the web application to the class library
    dotnet add WebApplication1 reference ClassLibrary1
    
  4. Build and run the web app
    cd WebApplication1
    dotnet run
    
  5. Browse to /test to see your page from your Razor UI class library.

Looks great! Now we can package up our Razor UI class library and share it with others.

  1. Create a package for the Razor UI class library
    cd ..
    dotnet pack ClassLibrary1
    
  2. Create a new web app and add a package reference to our Razor UI class library package
    dotnet new razor -o WebApplication2
    dotnet add WebApplication2 package ClassLibrary1 --source <current path>/ClassLibrary1/bin/Debug
    
  3. Run the new app with the package reference
    cd WebApplication2
    dotnet run
    
  4. Browse to /test for the new app to see that your package is getting used.

Publish your package to NuGet to share your handiwork with everyone.

Razor compilation on build

Razor compilation is now part of every build. This means that Razor compilation issues are caught at design time instead of when the app is first run. Compiling the Razor views and pages also significantly speeds up startup time. And even though your view and pages are built up front, you can still modify your Razor files at runtime and see them updated without having to restart the app.

Scaffold identity into an existing project

The latest preview of Visual Studio 2017 (15.7 Preview 3) supports scaffolding identity into an existing application and overriding specific pages from the default identity UI.

To scaffold identity into an existing application:

  1. Right-click on the project in the solution explorer and select Add -> New Scaffolded Item...
  2. Select the Identity scaffolder and click Add.
  3. The Add Identity dialog appears. Leave the layout unspecified. Check the checkbox in the override file list for “LoginPartial”. Also click the “+” button to create a new data context class and a custom identity user class. Click Add to run the identity scaffolder.

    The scaffolder will add an Identity area to your application that configures identity and also will update the layout to include the login partial.

  4. Update the Configure method in Startup.cs to add the database error page when in development and also the authentication middleware before invoking MVC.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();
        }
    
        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseCookiePolicy();
    
        app.UseAuthentication();
    
        app.UseMvc();
    }
  5. The generated _ViewStart.cshtml in this preview release contains an unfortunate typo in the specified layout path. Fix up the layout path to be /Pages/Shared/_Layout.cshtml. This will be fixed in the next release.
  6. Select Tools -> NuGet Package Manager -> Package Manager Console and run the following commands to add an EF migration and create the database.
    Add-Migration Initial
    Update-Database
    
  7. Build and run the application. You should now be able to register and log in users.

Customize default Identity UI

The identity scaffolder can also scaffold individual pages to override the default identity UI. For example, you can use a custom user type and update the identity UI to add additional user properties.

  1. In the solution explorer right-click on the project you added Identity to in the previous section and select Add -> New Scaffolded Item...
  2. Select the Identity scaffolder and click Add.
  3. The Add Identity dialog appears. Again leave the layout unspecified. Check the checkbox for the Account/Manage/Index file. For the data context, select the data context we created in the previous section. Click Add to run the identity scaffolder.
  4. Fix up the layout path in _ViewStart.cshtml as we did previously.
  5. Open the generated /Areas/Identity/Pages/Account/Manage/Index.cshtml.cs file and replace references to IdentityUser with your custom user type (ScaffoldIdentityWebAppUser). This manual edit is necessary in this preview, but will be handled by the identity scaffolder in a future update.
  6. Update ScaffoldIdentityWebAppUser to add an Age property.
        public class ScaffoldIdentityWebAppUser : IdentityUser
        {
            public int Age { get; set; }
        }
  7. Update the InputModel in /Areas/Identity/Pages/Account/Manage/Index.cshtml.cs to add a new Age property.
    public class InputModel
    {
        [Required]
        [EmailAddress]
        public string Email { get; set; }
    
        [Phone]
        [Display(Name = "Phone number")]
        public string PhoneNumber { get; set; }
        
        [Range(0, 120)]
        public int Age { get; set; }
    }
  8. Update /Areas/Identity/Pages/Account/Manage/Index.cshtml to add a field for setting the Age property. You can add the field below the existing phone number field.
    <div class="form-group">
        <label asp-for="Input.Age"></label>
        <input asp-for="Input.Age" class="form-control" />
        <span asp-validation-for="Input.Age" class="text-danger"></span>
    </div>
  9. Update the OnPostAsync method in /Areas/Identity/Pages/Account/Manage/Index.cshtml.cs to save the user’s age to the database:
    if (Input.Age >= 0 && Input.Age < 120)
    {
        user.Age = Input.Age;
        await _userManager.UpdateAsync(user);
    }
  10. Update the OnGetAsync method to initialize the InputModel with the user’s age from the database:
    Input = new InputModel
    {
        Email = user.Email,
        PhoneNumber = user.PhoneNumber,
        Age = user.Age
    };
  11. Select Tools -> NuGet Package Manager -> Package Manager Console and run the following commands to add an EF migration and update the database.
    Add-Migration UserAge
    Update-Database
    
  12. Build and run the app. Register a user and then set the user’s age on the manage page.

Improvements to [ApiController] parameter binding

Applying [ApiController] to your controller sets up convenient conventions for how parameters get bound to request data. In Preview 2 we’ve made a number of improvements to how these conventions work based on feedback:

  • [FromBody] will no longer be inferred for complex types with specific semantics, like CancellationToken
  • Multiple [FromBody] parameters will result in an exception
  • When there are multiple routes for an action, parameters that match any route value will be considered [FromRoute]
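
To make these conventions concrete, here is a minimal sketch of a controller relying on the inference rules above (the Order type and routes are made up for illustration):

using System.Threading;
using Microsoft.AspNetCore.Mvc;

public class Order
{
    public int Id { get; set; }
    public string Product { get; set; }
}

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    // 'id' matches the route template, so it is treated as [FromRoute].
    [HttpGet("{id}")]
    public ActionResult<Order> GetById(int id)
        => new Order { Id = id, Product = "sample" };

    // 'order' is a complex type, so it is inferred as [FromBody];
    // 'cancellationToken' has special semantics and is no longer
    // inferred as [FromBody] in Preview 2.
    [HttpPost]
    public ActionResult<Order> Create(Order order, CancellationToken cancellationToken)
        => CreatedAtAction(nameof(GetById), new { id = order.Id }, order);
}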

Provide constructed model type to the partial tag helper

The partial tag helper now supports passing a model instance through the new model attribute.

<partial name="MovieView" model='new Movie() { Name="Ghostsmashers" }' />

The asp-for attribute was also renamed to for.

Analyzer to warn about Html.Partial usage

Starting in this preview, calls to Html.Partial will result in an analyzer warning due to the potential for deadlocks.

Html.Partial warning

Calls to @Html.Partial(...) should be replaced by @await Html.PartialAsync(...) or use the partial tag helper instead (<partial name="..." />).

Option to opt-out of HTTPS

HTTPS is enabled by default in ASP.NET Core 2.1, and the out-of-the-box templates include support for handling HTTPS redirects and HSTS. But in some backend services, where HTTPS is handled externally at the edge, using HTTPS at each node is not needed.

In Preview 2 you can disable HTTPS when creating new ASP.NET Core projects by passing the --no-https option at the command-line. In Visual Studio this option is surfaced from the new ASP.NET Core Web Application dialog.

Disable HTTPS

Razor Pages handling of HEAD requests

Razor Pages will now fall back to calling a matching GET page handler if no HEAD page handler is defined.
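
As a rough sketch of what that means for a page model (the page itself is hypothetical; handler names follow the standard Razor Pages conventions):

using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    // A HEAD request to this page now falls back to OnGet
    // when no OnHead handler is defined.
    public void OnGet()
    {
    }

    // Defining an explicit HEAD handler still takes precedence:
    // public void OnHead() { ... }
}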

Updated to Json.NET 11

We’ve updated to Json.NET 11 to benefit from the latest Json.NET features and improvements.

Added Web API Client to ASP.NET Core meta-packages

The Web API Client is now included in the ASP.NET Core meta-packages for your convenience. This package provides convenient methods for handling formatting and deserialization when calling web APIs.
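
For example, a minimal sketch of the kind of call this enables, assuming the ReadAsAsync<T> extension from the Web API Client formatting package and a made-up TodoItem type and URL:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class TodoItem
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class WebApiClientSample
{
    public static async Task<TodoItem> GetTodoAsync(int id)
    {
        using (var client = new HttpClient { BaseAddress = new Uri("https://localhost:5001/") })
        {
            var response = await client.GetAsync($"api/todos/{id}");
            response.EnsureSuccessStatusCode();

            // ReadAsAsync<T> handles content negotiation and deserialization.
            return await response.Content.ReadAsAsync<TodoItem>();
        }
    }
}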

ViewData backed properties

Properties decorated with [ViewData] on controllers, page models, and Razor Pages provide a convenient way to add data that can be read by views and pages.

For example, to specify the title for a page and have it show up in the page layout you can define a property on your page model that is decorated with [ViewData]:

public class AboutModel : PageModel
{
    [ViewData]
    public string Title { get; } = "About";
}

The title can be accessed from the about page as a model property:

@page
@model AboutModel

<h2>@Model.Title</h2>

Then, in the layout, the title can be read from the ViewData dictionary:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>@ViewData["Title"] - WebApplication2</title>
...

Prebuilt UI libraries for Azure AD and Azure AD B2C

The UI and components required for setting up authentication with Azure AD or Azure AD B2C are now available in this preview as prebuilt packages:

These packages can be used to setup authentication with Azure AD or Azure AD B2C in any project.

Updates to launchSettings.json

The applicationUrl property in launchSettings.json can now be used to specify a semicolon separated list of server URLs.

"WebApplication1": {
  "commandName": "Project",
  "launchBrowser": true,
  "applicationUrl": "https://localhost:5001;http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}

Deprecating aspnetcore and aspnetcore-build Docker images

Starting with .NET Core 2.1.0-preview2, we intend to migrate from using the microsoft/aspnetcore-build and microsoft/aspnetcore Docker repos to the microsoft/dotnet Docker repo. We will continue to ship patches and security fixes for the existing aspnetcore images but any new images for 2.1 and higher will be pushed to microsoft/dotnet.

Dockerfiles using microsoft/aspnetcore:<version> should change to microsoft/dotnet:<version>-aspnetcore-runtime.

Dockerfiles using microsoft/aspnetcore-build that do not require Node should change to microsoft/dotnet:<version>-sdk.

Dockerfiles using microsoft/aspnetcore-build that require Node will need to handle that in their own images, either with a multi-stage build or by installing Node themselves.

For more details on the change, including some example Dockerfiles and a link to a discussion issue, you can see the announcement here: https://github.com/aspnet/Announcements/issues/298

Migrating an ASP.NET Core 2.0.x project to 2.1.0-preview2

Follow these steps to migrate an existing ASP.NET Core 2.0.x project to 2.1.0-preview2:

  1. Open the project’s CSPROJ file and change the value of the <TargetFramework> element to netcoreapp2.1
    • Projects targeting .NET Framework rather than .NET Core, e.g. net471, don’t need to do this
  2. In the same file, update the versions of the various <PackageReference> elements for any Microsoft.AspNetCore, Microsoft.Extensions, and Microsoft.EntityFrameworkCore packages to 2.1.0-preview2-final
  3. In the same file, remove any <DotNetCliToolReference> elements for any Microsoft.AspNetCore, Microsoft.VisualStudio, and Microsoft.EntityFrameworkCore packages. These tools are now deprecated and have been replaced by global tools.

That should be enough to get the project building and running against 2.1.0-preview2. The following steps will change your project to use the new code-based idioms that are recommended in 2.1 (a sketch of the resulting Program.cs and Startup.cs appears after the steps).

  1. Open the Program.cs file
  2. Rename the BuildWebHost method to CreateWebHostBuilder, change its return type to IWebHostBuilder, and remove the call to .Build() in its body
  3. Update the call in Main to call the renamed CreateWebHostBuilder method like so: CreateWebHostBuilder(args).Build().Run();
  4. Open the Startup.cs file
  5. In the ConfigureServices method, change the call to add MVC services to set the compatibility version to 2.1 like so: services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
  6. In the Configure method, add a call to add the HSTS middleware after the exception handler middleware: app.UseHsts();
  7. Staying in the Configure method, add a call to add the HTTPS redirection middleware before the static files middleware: app.UseHttpsRedirection();
  8. Open the project property pages (right-click on the project in Visual Studio Solution Explorer and select “Properties”)
  9. Open the “Debug” tab and in the IIS Express profile, check the “Enable SSL” checkbox and save the changes

    Note that some projects might require more steps depending on the options selected when the project was created, or packages added since. You might like to try creating a new project targeting 2.1.0-preview2 (in Visual Studio or using dotnet new at the cmd line) with the same options to see what other things have changed.
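
For reference, here is a sketch of what the resulting Program.cs and Startup.cs could look like after applying the code-based idiom steps above (class and namespace names are illustrative):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static void Main(string[] args) =>
        CreateWebHostBuilder(args).Build().Run();

    // Renamed from BuildWebHost; returns IWebHostBuilder and no longer calls Build().
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Opt in to the 2.1 behaviors.
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();               // HSTS after the exception handler
        }

        app.UseHttpsRedirection();       // HTTPS redirection before static files
        app.UseStaticFiles();
        app.UseMvc();
    }
}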


    Rubikloud leverages Azure SQL Data Warehouse to disrupt retail market with accessible AI

    In the modern retail environment, consumers are well-informed and expect intuitive, engaging, and informative experiences when they shop. To keep up, retailers need solutions that can help them delight their customers with personalized experiences, empower their workforce to provide differentiated customer experiences, optimize their supply chain with intelligent operations and transform their products and services.

    With global scale and intelligence built into key services, Azure is the perfect platform to build powerful apps that delight retail customers; the possibilities are endless. With a single photo, retailers can create new access points for the customer on a device of their choice. Take a look at this example of what’s possible using Microsoft’s big data and advanced analytics products.

    AI can be complex; this is where Rubikloud comes in. Rubikloud is focused on accessible AI products for retailers and on delivering on the promise of “intelligent decision automation”. They offer a set of SaaS products, Promotion Manager and Customer Lifecycle Manager, that help retailers automate and optimize mass promotional planning and loyalty marketing. These products help retailers reduce the complexities of promotion planning and store allocations and better predict their customers’ intentions and behavior throughout their retail life cycle.

    As Rubikloud democratizes AI, there was a need to make data accessible across business user profiles. To build a secure, high performance data platform that underpins all its SaaS products, Rubikloud chose Azure. The Rubikloud data platform lets retailers easily ingest most common data sources like Dynamics 365, SAP Hybris, NCR and Demandware seamlessly and at any scale.

    Waleed Ayoub, Rubikloud’s CTO, says, “In our mission to democratize AI, the need to process massive volumes of data for near real time decision-making, made Microsoft Azure the preferred platform choice. Azure’s global footprint and elastic compute fabric enabled us to connect with retailers’ legacy systems and onboard our products in a matter of weeks”.

    With Azure SQL DW, Rubikloud is able to make data and insights accessible to business users securely and effortlessly. Ayoub adds, “SQL DW has allowed us to offer our users another powerful access point in order to meet their growing analytic needs”.

    Rubikloud has seen impact. A.S. Watson, one of the largest health and beauty retailers in the world, is deploying Rubikloud products across its network of 13,300 retail stores.

    For retailers, combining data that they own and control with data from external sources such as vendors and research firms can be transformative. Azure SQL DW provides a fast, flexible and trusted analytics platform that lets the largest retailers in the world securely manage their data and allows trailblazers like Rubikloud to accelerate their journey to intelligent and accessible products.

    You can find out more about Rubikloud’s AI products, how SQL DW accelerates your journey to a cloud data warehouse solution and how Rubikloud uses Azure.

    GDPR offers one more reason to focus on your disaster recovery strategy

    Has your organization failed to devise a business continuity and disaster recovery plan because of the perception that it’s complex or expensive? Or perhaps you have a disaster recovery plan, but maybe you're not testing it frequently enough because of concerns about impacting production systems.

    If you’re in either category, you’ll want to start developing a plan now, especially if your company does business in the European Union (EU) or might have any data on EU citizens. The General Data Protection Regulation (GDPR), which goes into effect May 25, 2018, is the EU’s new data protection regulation. While it doesn’t explicitly require that you back up data or implement a site recovery solution, the GDPR requirements provide additional reasons to stop waiting and fine-tune your DR plan:

    • Under the GDPR, data controllers and data processors must “provide a copy of the personal data undergoing processing”. (Article 15)
    • According to the GDPR, companies must also have "the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident”. (Article 32)
    • The GDPR also grants EU citizens the “right to data portability” (Article 20), which you can’t grant if you lose your datacenter and don’t have a backup.
    • Companies found to be non-compliant with these or other requirements could be on the hook for a variety of penalties, including administrative fines “up to 20,000,000 EUR … or 4% of the total worldwide annual turnover [revenue] of the preceding financial year, whichever is higher”. (Article 83)

    In the past, for organizations not using the cloud, backups consisted of copying content from drives to offline media, such as tape, and storing it in an offsite location. In addition to the costs associated with such a solution, it could prove challenging to comply with some GDPR requirements. Microsoft Azure provides a better alternative, with the enormous storage capacity of the cloud, built-in security, and the high availability of datacenters around the world.

    With Azure Site Recovery you’ll benefit from a cost-effective solution that replicates your workloads to a secondary location in the cloud. And the Azure Backup service keeps your data safe and recoverable in the cloud. So if a disaster should take place, you can quickly and easily restore access to personal data and workloads, making it easier to comply with the GDPR data restoration requirement.

    Learn more about common questions businesses have about GDPR compliance. Explore Azure Site Recovery.

    The case for R, for AI developers

    I had a great time this week at the Qcon.ai conference in San Francisco, where I had the pleasure of presenting to an audience of mostly Java and Python developers. It's unfortunate that videos won't be available for a while, because there were some amazing presentations: those by Matt Ranney, Mike Williams and Rachel Thomas were particular standouts.

    My goal for the presentation I gave was to encourage developers to take a look at R (and its community) for developing AI applications, and in particular to bring a statistical perspective to data, inference and prediction as used by AI applications:

    I also delivered a workshop on using R to interface with a couple of the Cognitive Services vision APIs, to generate captions from random images in Wikipedia, and to train a custom image recognizer with images of hotdogs. The workshop is hosted as a Jupyter Notebook, so it's easy to try out yourself — all you need is a browser. You can find all the files and instructions at the link below.

    Azure Notebooks: AI for R users 

    ASP.NET Core 2.1.0-preview2: Improvements to the Kestrel HTTP server

    Change default transport to Sockets

    Building off the improvements to the managed sockets implementation in .NET Core we have changed the default transport in Kestrel from libuv to sockets. As a consequence, the Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv package is no longer part of the Microsoft.AspNetCore.App metapackage.

    How to switch back to libuv

    To continue using libuv as your transport, you will need to add a reference to the libuv package and modify your application to use libuv as its transport. Alternatively, you can reference the Microsoft.AspNetCore.All metapackage, which includes a transitive dependency on the libuv package.

dotnet add package Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv -v 2.1.0-preview2-final

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseLibuv()
        .UseStartup<Startup>();

    SNI support

Server Name Indication (SNI) is an extension to the TLS protocol that allows clients to send the desired hostname unencrypted as part of the TLS handshake. As a consequence, you can host multiple domains on the same IP/port and use the hostname to respond with the correct certificate. In 2.1.0-preview2, Kestrel has added a ServerCertificateSelector callback, which is invoked once per connection to allow you to select the right certificate. If you specify a ServerCertificateSelector, the selector will always take precedence over any specified server certificate.

SNI support requires running on netcoreapp2.1. On netcoreapp2.0 and net461, the callback will be invoked, but the name will always be null. The name will also be null if the client did not provide this optional parameter.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel((context, options) =>
        {
            options.ListenAnyIP(5005, listenOptions =>
            {
                listenOptions.UseHttps(httpsOptions =>
                {
                    var localhostCert = CertificateLoader.LoadFromStoreCert("localhost", "My", StoreLocation.CurrentUser, allowInvalid: true);
                    var exampleCert = CertificateLoader.LoadFromStoreCert("example.com", "My", StoreLocation.CurrentUser, allowInvalid: true);
                    var subExampleCert = CertificateLoader.LoadFromStoreCert("sub.example.com", "My", StoreLocation.CurrentUser, allowInvalid: true);
                    var certs = new Dictionary<string, X509Certificate2>(StringComparer.OrdinalIgnoreCase);
                    certs["localhost"] = localhostCert;
                    certs["example.com"] = exampleCert;
                    certs["sub.example.com"] = subExampleCert;

                    httpsOptions.ServerCertificateSelector = (features, name) =>
                    {
                        // Pick the certificate matching the SNI host name; fall back to a default certificate.
                        if (name != null && certs.TryGetValue(name, out var cert))
                        {
                            return cert;
                        }

                        return exampleCert;
                    };
                });
            });
        })
        .UseStartup<Startup>();

    Host filtering middleware

While Kestrel supports configuration based on prefixes such as https://contoso.com:5000, it largely ignores the host name. Localhost is a special case used for binding to loopback addresses. Any host other than an explicit IP address binds to all public IP addresses. None of this information is used to validate request Host headers. In 2.1.0-preview2, we introduced a new HostFiltering middleware (Microsoft.AspNetCore.HostFiltering) that we recommend you use in conjunction with Kestrel to validate Host headers. The host filtering middleware is already included as part of the default WebHost. To configure the middleware, use a semicolon-separated list of hostnames in your appSettings.json file.

    {
        "AllowedHosts": "localhost;127.0.0.1;[::1]"
    }

    Alternatively, you can configure it directly from code.

services.AddHostFiltering(options =>
{
    var allowedHosts = new List<string>
    {
        "localhost",
        "127.0.0.1",
        "[::1]"
    };
    options.AllowedHosts = allowedHosts;
});

    Top stories from the VSTS community – 2018.04.13

Here are the top stories we found in our streams this week related to DevOps, VSTS, TFS, and other interesting topics, listed in no specific order: TOP STORIES: Is there a place for DevOps in desktop software development? Andrei Marukovich raises the question of whether the DevOps principles widely accepted by cloud and web projects may also benefit desktop development...

    Because it’s Friday: The borders have changed. Film at 11.

    There's a lot of stupidity in the US news these days, but at least these reviews of bad maps on TV are amusing rather than infuriating. (Click through to see the entire thread.) That's Iran in the first image by the way, although I confess I did have to check a map to be sure. 

    As a connoisseur of hilariously wrong TV news maps, that CBS “Syria” is nothing. Kids’ stuff.

    Friends, follow along on a tour of the world according to Cable TV News. pic.twitter.com/XumduDwyJM

    — Max Fisher (@Max_Fisher) April 11, 2018

    That's all from us for this week. Have a great weekend, and we'll be back next week with more for the blog.

    Retrogaming on original consoles in HDMI on a budget

Just a few of my consoles. There's a LOT off screen.

My sons (10 and 12) and I have been enjoying Retrogaming as a hobby of late. Sure, there's a lot of talk of 4k 60fps this and that, but there are amazing stories in classic video games. From The Legend of Zelda (all of them) to Ico and Shadow of the Colossus, we are enjoying playing games across every platform. Over the years we've assembled quite the collection of consoles, most purchased at thrift stores.

Initially I started out as a purist, wanting to play each game on the original console unmodified. I'm not a fan of emulators for a number of reasons. I don't particularly like the legal gray area around downloaded ROMs, and I'd like to support the original game creators. Additionally, if I can support a small business by purchasing original game cartridges or CDs, I prefer to do that as well. However, the kids and I have come up with somewhat of a balance in our console selection.

For example, we enjoy the Hyperkin Retron 5 in that it lets us play NES, Famicom, SNES, Super Famicom, Genesis, Mega Drive, Game Boy, Game Boy Color, & Game Boy Advance over 5 category ports. With one additional adapter, it adds Game Gear, Master System, and Master System Cards. It uses emulators at its heart, but it requires the use of the original game cartridges. However, the Hyperkin supports all the original controllers - many of which we've found at our local thrift store - which strikes a nice balance between the old and the new. Best of all, it uses HDMI as its output plug, which makes it super easy to hook up to our TV.

The prevalence of HDMI as THE standard for getting stuff onto our Living Room TV has caused me to dig into finding HDMI solutions for as many of my systems as possible. Certainly you CAN use a Composite Video to HDMI adapter to go from the classic Yellow/White/Red connectors to HDMI, but prepare for disappointment. By the time it gets to your 4k flat panel it's gonna be muddy and gross. These aren't upscalers. They can't clean an analog signal. More on that in a moment because there are LAYERS to these solutions.

    Some are simple, and I recommend these (cheap products, but they work great) adapters:

    • Wii to HDMI Adapter - The Wii is a very under-respected console and has a TON of great games. In the US you can find a Wii at a thrift store for $20 and there's tens of millions of them out there. This simple little adapter will get you very clean 480i or 480p HDMI with audio. Combine that with the Wii's easily soft-modded operating system and you've got the potential for a multi-system emulator as well.
• PS2 to HDMI Adapter - This little (cheap) adapter will get you HDMI output as well, although it's converted off the component Y Cb/Pb Cr/Pr signal coming out. It also needs USB power, so you may end up leeching that off the PS2 itself. One note - even though every PS2 can also play PS1 games, those games output 240p and this adapter won't pick it up, so be prepared to downgrade depending on the game. But, if you use a Progressive Scan 16:9 Widescreen game like God of War, you'll be very pleased with the result.
    • Nintendo N64 - THIS is the most difficult console so far to get HDMI output from. There ARE solutions but they are few and far between and often out of stock. There's an RGB mod that will get you clean Red/Green/Blue outputs but not HDMI. You'll need to get the mod and then either do the soldering yourself or find a shop to do it for you. The holy grail is the UltraHDMI Mod but I have yet to find one and I'm not sure I want to pay $150 for it if I do.
  • The cheapest and easiest thing you can and should do with an N64 is get a Composite & S-Video to HDMI converter box. This box will also do basic up-scaling as well, but remember, this isn't going to create pixels that aren't already there.
    • Dreamcast - There is an adapter from Akura that will get you all the way to HDMI but it's $85 and it's just for Dreamcast. I chose instead to use a Dreamcast to VGA cable, as the Dreamcast can do VGA natively, then a powered VGA to HDMI box. It doesn't upscale, but rather passes the original video resolution to your panel for upscaling. In my experience this is a solid budget compromise.

    If you're ever in or around Portland/Beaverton, Oregon, I highly recommend you stop by Retro Game Trader. Their selection and quality is truly unmatched. One of THE great retro game stores on the west coast of the US.

The games and systems at Retro Game Trader are amazing. Retro Game Trader has shelves upon shelves of classic games.

    For legal retrogames on a budget, I also enjoy the new "mini consoles" you've likely heard a lot about, all of which support HDMI output natively!

    • Super NES Classic (USA or Europe have different styles) - 21 classic games, works with HDMI, includes controllers
    • NES Classic - Harder to find but they are out there. 30 classic games, plus controllers. Tiny!
    • Atari Flashback 8 - 120 games, 2 controllers AND 2 paddles!
    • C64 Mini - Includes Joystick and 64 games AND supports a USB Keyboard so you can program in C64 Basic

In the vein of retrogaming, but not directly related, I wanted to give a shout-out to EVERYTHING that the 8BitDo company does. I have three of their controllers and they are amazing. They get constant firmware updates, and particularly the 8Bitdo SF30 Pro Controller is amazing as it works on Windows, Mac, Android, and Nintendo Switch. It pairs perfectly with the Switch, I use it on the road with my laptop as an "Xbox" style controller, and it always Just Works. Amazing product.

If you want the inverse - the ability to use your favorite console controllers with your Windows PC, Mac, or Raspberry Pi - check out their Wireless Adapter. You'll be able to pair your Xbox One S/X Bluetooth, PS4, PS3, Wii Mote, and Wii U Pro controllers and use them wirelessly on your PC, or on a Nintendo Switch with DS4 motion and rumble features! NOTE: I am NOT affiliated with 8BitDo at all, I just love their products.

We are having a ton of fun doing this. You'll find yourself always on the lookout for old and classic games at swap meets, garage sales, and friends' houses. There are retrogaming conventions and arcades (like Ground Kontrol in Portland) and an ever-growing group of new friends and enthusiasts.

This post uses Amazon Referral Links, and I'll use the few dollars I get when you use them to buy retro games for the kids! Also, go subscribe to the Hanselminutes Podcast today! We're on iTunes, Spotify, Google Play, and even Twitter! Check out the episode where Matt Phillips from Tanglewood Games uses a 1995 PC to create a NEW Sega Megadrive/Genesis game in 2018!


    Iterative development and debugging using Data Factory

    Data Integration is becoming more and more complex as customer requirements and expectations are continuously changing. There is increasingly a need among users to develop and debug their Extract Transform/Load (ETL) and Extract Load/Transform (ELT) workflows iteratively. Now, Azure Data Factory (ADF) visual tools allow you to do iterative development and debugging.

    You can create your pipelines and do test runs using the Debug capability in the pipeline canvas without writing a single line of code. You can view the results of your test runs in the Output window of your pipeline canvas. Once your test run succeeds, you can add more activities to your pipeline and continue debugging in an iterative manner. You can also Cancel your test runs once they are in-progress. You are not required to publish your changes to the data factory service before clicking Debug. This is helpful in scenarios where you want to make sure that the new additions or changes work as expected before you update your data factory workflows in dev, test or prod environments.


Data Factory visual tools also allow you to debug up to a particular activity in your pipeline canvas. Simply put a breakpoint on the activity up to which you want to test and click Debug. Data Factory guarantees that the test run will only execute up to the breakpoint activity in your pipeline canvas.


Get started today by clicking the Author & Monitor tile in your provisioned v2 data factory blade. Build and debug your data factory pipelines iteratively, and let us know your feedback.


    Our goal is to continue adding features to continuously improve the usability of data factory tools. Get more information and detailed steps for doing iterative development and debugging using Data Factory UX.

    Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

    Mobilizing Existing .NET Apps

    Since the first release of the .NET Framework in 2002, developers have been building large-scale apps with client-server architectures. These apps frequently adopt a layered approach, with business logic to solve diverse and complex problems, accessed via desktop or web front-ends.

Today, C# and .NET are available across a diverse set of platforms in addition to Windows, including Android and iOS with Xamarin, as well as wearables like Apple Watch and Android Wear, consumer electronics via Samsung Tizen, and even HoloLens.

    In this blog post, I’ll show how to port business logic from WPF and build a phone- and tablet-friendly mobile app for Android, iOS, and UWP. Existing C# code can often be re-used with minimal changes, and the user interfaces can be built using Xamarin.Forms XAML to run across platforms.

    Porting a Desktop app with a Mobile Mindset

Mobile apps, unlike desktop or server apps, run on limited resources and get only a small share of the user's attention. Although this post focuses mainly on porting the existing code, you should also consider changes to the app architecture or user interface to work better on phones and tablets. A good place to start is our mobile development principles.

    WPF-to-mobile Example

Almost any .NET codebase, including Windows Forms, WPF, ASP.NET, and Silverlight, has sharable code that can be ported to Xamarin.iOS, Xamarin.Android, and UWP projects. By moving the platform-agnostic shared code to a .NET Standard Library (or a Shared Project), you can easily reference it in your mobile projects.

    For this example, I am mobilizing an Expenses sample written a few years ago for a cloud-enabled WPF app demo. The functionality works great on mobile, as you can see here:

    Cloud enabled WPF app working on mobile device

The original Expenses app is a thick client written for the desktop in WPF. The app helps users manage their charges, create expense reports, and submit them for manager approval. It connects to a WCF backend and SQL Server for data storage, and looks like this:

    Expenses app thick client written for desktop in WPF

    The following sections detail how the legacy app code was analyzed, re-used, and adapted for mobile deployment. You can download the original code and new mobile solution from my GitHub repo.

    Analyze code for Mobilization

In general, any non-platform-dependent code, i.e. your Business Layer, Data Layer, Data Access Layer, Service Access Layer, etc., can be shared across all platforms. To help you identify what code is sharable, use the .NET Portability Analyzer tool. The .NET Portability Analyzer provides you with a detailed report on how portable your program is across .NET platforms by analyzing assemblies. The Portability Analyzer is offered as a Visual Studio extension and as a console app. Once you install the extension, be sure to check the platforms that you want to analyze in the settings, then right-click the project you want to analyze and click “Analyze Project Portability”.

    Analyze Project Portability option when you right click on a project in Solution Explorer

    It will generate a report like the one below from the Expenses WPF app.

    Demo report table from Expenses WPF app

The consolidated report shown above contains the analysis of two libraries: Expenses.WPF and Expenses.Data. From the report, Expenses.Data (the data layer) is 100% shareable across all platforms, and, as expected, Expenses.WPF is about 80% shareable. These reports are available in my GitHub repo; refer to the detailed sheet within the workbook to understand which libraries aren't shareable.

    Porting WCF backend to Azure Mobile Apps

    I could retain the WCF backend as-is and host it in the cloud for mobile apps to access. However, I decided to port this to Azure Mobile Apps to take advantage of offline sync support that is an essential feature in creating excellent mobile user experiences. In a mobile world, devices are always moving, connectivity varies, and network outages happen. Apps need to be intelligent by falling back on locally stored data and transferring the data on demand when a better network connection is established with the server.  Luckily, Azure Mobile Apps provides a simple API for developers through its SDK to support both online and offline scenarios for data storage, including automatically syncing data between a device and server.

The project MyExpenses.MobileAppService has controllers that inherit from TableController, which provides a RESTful endpoint and nicely abstracts away the code that supports offline data sync.
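
To make that concrete, here is a minimal sketch of what such a table controller looks like, following the standard Azure Mobile Apps .NET server SDK pattern. The Charge model and the Entity Framework MobileServiceContext used below are stand-ins for the sample's actual types, so treat this as an illustration rather than the exact code in the repo.

using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.OData;
using Microsoft.Azure.Mobile.Server;

public class ChargeController : TableController<Charge>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        var context = new MobileServiceContext();
        // EntityDomainManager wires the table to the backing store and maintains
        // the versioning metadata that offline sync relies on.
        DomainManager = new EntityDomainManager<Charge>(context, Request);
    }

    // GET tables/Charge
    public IQueryable<Charge> GetAllCharges() => Query();

    // PATCH tables/Charge/{id}
    public Task<Charge> PatchCharge(string id, Delta<Charge> patch) => UpdateAsync(id, patch);

    // POST tables/Charge
    public async Task<IHttpActionResult> PostCharge(Charge item)
    {
        Charge current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }

    // DELETE tables/Charge/{id}
    public Task DeleteCharge(string id) => DeleteAsync(id);
}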

    If you’re new to Azure Mobile Apps, these docs will help you get started quickly. However, if you wish to retain the WCF service as-is on Azure, watch this video on Channel 9.

    Porting the WPF client app to Xamarin.Forms

Xamarin.Forms is a cross-platform UI toolkit that helps you create native user interfaces that can be shared across iOS, Android, and Universal Windows Platform apps. Since the UI is rendered using the native controls of the target platform, Xamarin.Forms retains the appropriate look and feel on every platform. Just like in WPF, UI in Xamarin.Forms is created either in XAML or entirely in C# code. However, it is best to take advantage of its data binding support and embrace the MVVM (Model-View-ViewModel) pattern. An important point to note is that the controls used in WPF are different from the ones in Xamarin.Forms. Although similar controls are available, they are often named differently to suit the mobile user interface guidelines, and hence the WPF XAML cannot be reused in Xamarin.Forms projects. Read our documentation on WPF vs. Xamarin.Forms: Similarities & Differences.
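
As a rough illustration of that MVVM and data-binding approach, here is a minimal sketch of a Xamarin.Forms page built in C# (rather than XAML) and bound to a view model. ChargesViewModel, its Charges collection, and the Description property are hypothetical names used only for this example, not types taken from the sample.

using Xamarin.Forms;

public class ChargesPage : ContentPage
{
    public ChargesPage(ChargesViewModel viewModel)
    {
        // The page binds to the view model instead of manipulating controls directly (MVVM).
        BindingContext = viewModel;

        var list = new ListView
        {
            ItemTemplate = new DataTemplate(() =>
            {
                var cell = new TextCell();
                // Each cell binds to a property of the item it displays.
                cell.SetBinding(TextCell.TextProperty, "Description");
                return cell;
            })
        };

        // The list's items come from a collection exposed by the view model.
        list.SetBinding(ListView.ItemsSourceProperty, nameof(ChargesViewModel.Charges));

        Content = list;
    }
}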

The interesting thing about porting this WPF app is that 100% of the ViewModels, Helpers, Converters, Models, Services, and any code that does not have platform-specific references can be reused in the Xamarin.Forms app. Hence even the entire UI logic (the ViewModels) is retained, and only the UI (XAML) is freshly created to suit the multi-device mobile form factors. In this sample project, I have created folders called “Legacy” to help you understand what code was reused.

    Only UI XAML is freshly created to suit the multi device mobile form factors while retaining entire UI logic ViewModels

For details on how I built the UI for this sample, refer to the Views folder within the shared project. All the ViewModels from the WPF project have been reused without much modification. Although, compared to the WPF thick client, the mobile implementation could be much simpler, I have retained the logic as-is to demonstrate the maximum possible code reuse.

    Getting started with Xamarin.Forms

    To get started quickly using Xamarin.Forms with Azure Mobile Apps, there is a template in Visual Studio 2017 and Visual Studio for Mac that automatically sets up everything for you. All that you need to do is add your code in the right places.

    Create solution in Visual Studio

Open Visual Studio 2017 and click File > New Project > Visual C# > Cross-Platform > Mobile App (Xamarin.Forms). In the next dialog, be sure to select the Master-Detail template, choose Xamarin.Forms as the UI Technology, and check the “Include Azure Mobile Apps backend project” option. Complete the process by selecting an appropriate Azure Subscription and hosting on a preferred Resource Group.

    New Project Dialog window showing Include Azure Mobile Apps backend project option under Cross Plat Mobile App

For detailed guidance on building cross-platform apps using Xamarin.Forms, check out our Getting Started guide.

    Consuming Azure Mobile Apps backend

The backend code that was ported from WCF to Azure Mobile Apps can be found in the MobileAppService project. An explanation of the Mobile Apps backend code is beyond the scope of this article. You can download the source code from my GitHub repo and publish it directly to your Azure portal.

By default, the template creates a helper class called AzureDataStore that abstracts away the code for offline sync support. I further modified it to fit the Expenses project. In this project, there are three scenarios:

    • Manage Charges (Add/Edit/Delete Charges)
    • Manage Expense Report (Attach charges, Create Expense reports and Submit for approval)
    • Manage Employees

To support offline sync for each of the tables, a corresponding Data Store is created that implements the IDataStore<T> interface, with T being the model object.
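
For reference, here is a minimal sketch of that pattern. The IDataStore<T> shape below matches what the Visual Studio template generates, and the ChargeDataStore shows roughly how a store backed by an Azure Mobile Apps offline sync table might look; the Charge model and the "allCharges" sync query name are assumptions made for the sake of the example.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

public interface IDataStore<T>
{
    Task<bool> AddItemAsync(T item);
    Task<bool> UpdateItemAsync(T item);
    Task<bool> DeleteItemAsync(string id);
    Task<T> GetItemAsync(string id);
    Task<IEnumerable<T>> GetItemsAsync(bool forceRefresh = false);
}

public class ChargeDataStore : IDataStore<Charge>
{
    private readonly IMobileServiceSyncTable<Charge> table;

    public ChargeDataStore(MobileServiceClient client)
    {
        // The sync table writes to a local store and syncs with the backend on demand.
        table = client.GetSyncTable<Charge>();
    }

    public async Task<IEnumerable<Charge>> GetItemsAsync(bool forceRefresh = false)
    {
        if (forceRefresh)
        {
            // Pull the latest server state into the local store; pending local changes are pushed first.
            await table.PullAsync("allCharges", table.CreateQuery());
        }
        return await table.CreateQuery().ToEnumerableAsync();
    }

    public async Task<bool> AddItemAsync(Charge item)
    {
        await table.InsertAsync(item);   // queued locally until the next sync
        return true;
    }

    public async Task<bool> UpdateItemAsync(Charge item)
    {
        await table.UpdateAsync(item);
        return true;
    }

    public async Task<bool> DeleteItemAsync(string id)
    {
        var item = await table.LookupAsync(id);
        await table.DeleteAsync(item);
        return true;
    }

    public Task<Charge> GetItemAsync(string id) => table.LookupAsync(id);
}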

    Source Code and Wrap up

You can download the entire source code from my GitHub repo. Refer to the _before folder for the original WPF source and the Port-Report folder for the Portability Analyzer reports. For detailed documentation on porting existing Windows Forms or WPF apps to Android, iOS, macOS, or UWP, refer to our Porting Guidance.

    Your existing .NET code is more mobile than you think!

    Nish Anil Senior Program Manager, Visual Studio
    @nishanil

    Nish is a Senior Program Manager on the Xamarin team. He is a C# fanatic and enjoys writing Mobile Apps in Xamarin and Visual Studio. Working out of Bangalore, India, he’s passionate about spreading C# and Xamarin love among .NET developers across the world.

    Introducing Microsoft Azure Sphere: Secure and power the intelligent edge

In the next decade, nearly every consumer gadget, every household appliance, and every industrial device will be connected to the Internet. These connected devices will also become more intelligent with the ability to predict, talk, listen, and more. The companies who manufacture these devices will have an opportunity to reimagine everything, fundamentally transform their businesses with new product offerings and new customer experiences, and differentiate against the competition with new business models.

    All these everyday devices have in common a tiny chip, often smaller than the size of your thumbnail, called a microcontroller (MCU). The MCU functions as the brain of the device, hosting the compute, storage, memory, and an operating system right on the device. Over 9 billion of these MCU-powered devices are built and deployed every year. For perspective, that’s more devices shipping every single year than the world’s entire human population. While few of these devices are connected to the Internet today, within just a few years, this entire industry, all 9 billion or more devices per year, is on path to include connected MCUs.

Internet connectivity is a two-way street. With these devices becoming a gateway to our homes, workplaces, and sensitive data, they also become targets for attacks. Look around a typical household and consider what could happen when even the most mundane devices are compromised: a weaponized stove, baby monitors that spy, the contents of your refrigerator being held for ransom. We also need to consider that when a device becomes compromised, it’s not just a problem for the owner, it can also become a problem for society. A device can disrupt and do damage on a larger scale. This is what happened with the 2016 Mirai botnet attack where roughly 100,000 compromised IoT devices were repurposed by hackers into a botnet that effectively knocked the U.S. East Coast off the Internet for a day. It’s of paramount importance that we proactively address this emerging threat landscape with solutions that can keep pace as connected MCUs ship in billions of new devices every year.

    In 2015 a small team of us within Microsoft Research began exploring how to secure this vast number of MCU-powered devices yet to come online. Leveraging years of security experience at Microsoft, and learnings from across the tech industry, we identified The Seven Properties of Highly-Secure Devices. We identified the need for a hardware root of trust to protect and defend the software on a device. We identified the need for multiple layers of defense-in-depth, both in hardware and in software, to repel hackers even if they fully breach one layer of security. We identified the critical need for hardware, software, and cloud to work together to secure a device. Over time the Seven Properties gained traction and became the foundation for a movement within Microsoft – which ultimately brings us to today.

    Securing the billions of MCU powered devices

Today at RSA 2018, we announced the preview of Microsoft Azure Sphere, a new solution for creating highly-secured, Internet-connected microcontroller (MCU) devices. Azure Sphere includes three components that work together to protect and power devices at the intelligent edge.

    • Azure Sphere certified microcontrollers (MCUs): A new cross-over class of MCUs that combines both real-time and application processors with built-in Microsoft security technology and connectivity. Each chip includes custom silicon security technology from Microsoft, inspired by 15 years of experience and learnings from Xbox, to secure this new class of MCUs and the devices they power.
    • Azure Sphere OS: This OS is purpose-built to offer unequalled security and agility. Unlike the RTOSes common to MCUs today, our defense-in-depth IoT OS offers multiple layers of security. It combines security innovations pioneered in Windows, a security monitor, and a custom Linux kernel to create a highly-secured software environment and a trustworthy platform for new IoT experiences.
    • Azure Sphere Security Service: A turnkey, cloud service that guards every Azure Sphere device; brokering trust for device-to-device and device-to-cloud communication through certificate-based authentication, detecting emerging security threats across the entire Azure Sphere ecosystem through online failure reporting, and renewing security through software updates. It brings the rigor and scale Microsoft has built over decades protecting our own devices and data in the cloud to MCU powered devices.

These capabilities come together to enable Azure Sphere to meet all 7 properties of a highly secured device, making it a first-of-its-kind solution.

    What device manufacturers are saying

    “Sub-Zero and Wolf have had a legacy of innovation in food preservation and preparation for over 70 years and we see significant opportunity in the connected devices market to create new and unique customer experiences. As our homes become more connected, we place significant value on the security of connected devices, so we can focus on continuing to deliver an exceptional customer experience. Microsoft’s approach with Azure Sphere is unique in that it addresses security holistically at every layer.”

    – Brian Jones, Director of Product Strategy and Marketing, Sub-Zero

    “Glen Dimplex is a leader in development of intelligent heating, renewable energy solutions and domestic appliances. We recognize that addressing security at every layer of connected devices is critical to shipping connected devices with confidence. The work Microsoft is doing with Azure Sphere uniquely addresses the security challenges of the connected microcontrollers shipping in billions of devices every year. We look forward to integrating Azure Sphere into our product lines later this year.”

    – Neil Naughton, Deputy Chairman, Glen Dimplex

     

    We’ve been sharing our plans for Azure Sphere with device manufacturers across multiple verticals including whitegoods, agriculture, energy, and infrastructure and their enthusiasm has been consistently centered around three core benefits:

    Security

    Our device manufacturing partners consider security a pre-requisite for creating connected experiences, and they know that single line-of-defense and second-best solutions are not enough. Azure Sphere provides security that starts in the hardware and extends to the cloud, delivering holistic security that protects, detects, and responds to threats – so they’re always prepared. And they love the fact that our solution is turnkey, eliminating the need to invest in additional infrastructure and staff to secure these devices.

    Productivity

    As device manufacturers look to transform their products, they are also looking for ways to lower overhead and increase team efficiency. Azure Sphere’s software delivery model and Visual Studio development tools deliver productivity and dramatically optimize the process of developing and maintaining apps on their devices. This means our device manufacturing partners can bring products to market faster and they can focus their efforts on creating their unique value.

    Opportunity

    The real magic begins when device manufacturers start imagining the possibilities that open with Azure Sphere. The built-in connectivity and additional headroom included in Azure Sphere certified MCUs changes everything. Our device manufacturing partners are re-thinking business models, product experiences, the way they service customers, and the way they predict the needs of their customers. It’s been incredible to watch them design next generation experiences with Azure Sphere.

    Our silicon ecosystem

Having the right set of silicon partners has been an important part of our journey in bringing Azure Sphere to market. We’ve been working directly with leaders in the MCU space to build a broad ecosystem of silicon partners who will be combining our silicon security technologies with their unique capabilities to deliver Azure Sphere certified chips. With our silicon partners, we’ve created a revolutionary new generation of MCUs. These chips have network connectivity, unequalled security, and advanced processing power to enable new customer experiences. Each Azure Sphere chip will include our Microsoft Pluton security subsystem, run the Azure Sphere OS, and connect to the Azure Sphere Security Service for simple and secure updates, failure reporting, and authentication.

    The first Azure Sphere chip, the MediaTek MT3620, will come to market in volume this year. Over time we will see other silicon partners introducing their own Azure Sphere chips to the market. To ensure our ecosystem of partners expands rapidly, we’re licensing our silicon security technologies to them royalty-free. This enables any silicon manufacturer to build Azure Sphere chips while keeping costs down and prices affordable to device manufacturers.

    We can’t wait to see what you build with Azure Sphere

    Today, Azure Sphere is in private preview. We’re working closely with select device manufacturers to build future products powered by Azure Sphere. We expect the first wave of Azure Sphere devices to be on shelves by the end of 2018. Dev kits will be universally available in mid-2018. We fully expect to be surprised by the innovative ideas that you invent for the world and for your customers. We can’t wait to see what you will build!

    For more details, please visit the Azure Sphere website.

    Azure.Source – Volume 27

Welcome to Azure.Source #27! Last week in Azure, we made quite a few announcements about updates to Azure Stream Analytics, which you'll find captured below. In addition, Microsoft was at the 2018 NAB Show in Las Vegas, which provided an opportunity to reflect on how far Azure's media services have come over the past year and the exciting future that lies ahead. For more information on that, see the Events section below.

    Now in preview

    Public preview: Integration of Stream Analytics with Azure Monitor - To improve the self-service troubleshooting experience, integration of Azure Stream Analytics with Azure Monitor is in preview. This integration provides a systematic way to deal with lost, late, or malformed data, while enabling efficient mechanisms to investigate errors caused by bad data.

    Now generally available

    General availability: Stream Analytics tools for Visual Studio - To help maximize end-to-end developer productivity across authoring, testing, and debugging Stream Analytics jobs, Azure Stream Analytics tools for Visual Studio are now generally available.

    News & updates

    Seamlessly upgrade Azure SQL Data Warehouse for greater performance and scalability - Upgrade from the Optimized for Elasticity tier to the new Optimized for Compute performance tier with a simple click in the Azure portal. The Optimized for Compute performance tier offers the latest platform capabilities, providing an enhanced storage architecture that has shown an average fivefold increase in performance for query workloads. On this new tier, Azure SQL Data Warehouse now supports unlimited columnar storage with a fivefold increase in compute scalability.

    Offline media import for Azure - Nobody wants to spend hours and hours inserting tapes, connecting older hard disks, or figuring out how to digitize and upload film to the cloud. Together with our partners, Microsoft Azure is making it easy with our Offline Media Import Program. Learn how this partner-enabled service makes it easy to move data into Azure from almost any media, such as tapes, optical drives, hard disks, or film.

    Enhanced capabilities to monitor, manage, and integrate SQL Data Warehouse in the Azure Portal - New portal capabilities are available to help monitor, manage, and integrate your Azure SQL Data Warehouse. Monitor it using Azure Monitor for metrics, view the number of active running queries that will be canceled before pausing your data warehouse, and launch the Azure Analysis Services web designer directly within the data warehouse task panel.

    GDPR offers one more reason to focus on your disaster recovery strategy - Whether your company does business in the European Union (EU) or might have any data on EU citizens, you should have a disaster recovery plan to help comply with GDPR data restoration requirements. The General Data Protection Regulation (GDPR), which goes into effect 25 May 2018, is the EU’s new data protection regulation. While it doesn’t explicitly require that you back up data or implement a site recovery solution, the GDPR requirements provide additional reasons to stop waiting and fine-tune your disaster recovery plan. Read this post to learn how Azure Site Recovery and Azure Backup can help.

    Azure Stream Analytics updates

    Azure Stream Analytics supports extracting information from incoming data streams (e.g., devices, sensors, web sites, and social media feeds), identifying patterns and relationships, and then using that information to trigger other actions downstream (e.g., alerts, reporting, or storage).

    Stream Analytics pipeline

    See below for the latest updates to Azure Stream Analytics, which is an event-processing engine for examining high volumes of data streaming from devices.

    Additional news & updates

    Azure Friday

    Azure Friday | Azure Security Center - Kelly Anderson joins Scott Hanselman to discuss Azure Security Center, which offers built-in security management and threat protection for your cloud workloads. Azure Security Center helps you find & fix vulnerabilities, aids in blocking malicious access and alerts you when your resources are under attack. 

    Azure Friday | Common design patterns with Azure Cosmos DB - Aravind Krishna stops by to chat with Scott Hanselman and take a look at common design patterns for building highly scalable solutions with Azure Cosmos DB. We will talk a little bit about modeling data and how to choose an appropriate partition key. We then look at a few patterns like event sourcing, time series data, and patterns for addressing bottlenecks/hot spots for reads, writes, and storage.

    Technical content & training

    Continuous integration and deployment using Data Factory - Learn how you can follow industry leading best practices and Azure Data Factory's visual tools to do continuous integration and deployment for your Extract Transform/Load (ETL) and Extract Load/Transform (ELT) workflows to multiple environments such as Dev, Test, Prod, and more.

How to configure Azure SQL Database Geo-DR with Azure Key Vault - Learn how to avoid creating a single point of failure in active geo-replicated instances or SQL failover groups by configuring redundant Azure Key Vaults. Each geo-replicated server requires a separate key vault that must be co-located with the server in the same Azure region. Should a primary database become inaccessible due to an outage in one region and a failover is triggered, the secondary database is able to take over using the secondary key vault.

Three common analytics use cases with Microsoft Azure Databricks - Watch this on-demand webinar to learn how your organization can improve and scale your analytics solutions with Azure Databricks, a high-performance processing engine optimized for Azure. Recommendation engines, churn analysis, and intrusion detection are common scenarios that many organizations are solving across multiple industries. They require machine learning, streaming analytics, and massive amounts of data processing that can be difficult to scale without the right tools.

    Achieving GDPR compliance in the cloud with Microsoft Azure - Prepare yourself for GDPR's arrival next month with a free, four-part video series, Countdown: Preparing for GDPR. Topics include: GDPR and Compliance Documentation, What to expect under GDPR from your cloud provider and insights from Microsoft on GDPR, Accelerating GDPR compliance with Compliance Manager, and last but not least: GDPR & Azure.

    New Disaster Recovery tutorials for Wingtip Tickets sample SaaS application - Continuing in our series of tutorials showcasing features of Azure SQL database that enable SaaS app management, we are introducing two new tutorials that explore disaster recovery strategies for recovering an app and its resources in the event of an outage: Disaster recovery using geo-restore and Disaster recovery using geo-replication.

    The Azure Podcast

The Azure Podcast: Episode 224 - The AI Platform - Senior Product Marketing Manager Sonya Koptyev talks to us about the Azure AI Platform and gives us some great pointers on how to get started with AI.

    Events

    Last week, Microsoft was at the 2018 NAB Show, the ultimate event for the media, entertainment and technology industry.

A year’s worth of cloud, AI and partner innovation. Welcome to NAB 2018! - Sudheer Sirivara, Partner Director, Azure Media and Azure CDN Services, reflects on the key advancements Microsoft made – in media services, distribution and our partner ecosystem – since last year’s IBC.

From Microsoft Azure to everyone attending NAB Show 2018 — Welcome! - Get insights from Tad Brockway, General Manager, Azure Storage & Azure Stack, on how Microsoft Azure is ready to help modernize your media workflows and your business.

Customers and partners

Rubikloud leverages Azure SQL Data Warehouse to disrupt retail market with accessible AI - Learn how Rubikloud makes accessible AI products for retailers and delivers on the promise of “intelligent decision automation” with a set of SaaS products, Promotion Manager and Customer Lifecycle Manager, to help retailers automate and optimize mass promotional planning and loyalty marketing. These products help retailers reduce the complexities of promotion planning and store allocations and better predict their customers’ intentions and behavior throughout their retail life cycle. And with Azure SQL Data Warehouse, Rubikloud is able to make data and insights accessible to business users securely and effortlessly.

    Azure tips & tricks

    Access Cloud Shell from within Microsoft Docs

    Demystifying storage in Cloud Shell

    Developer spotlight

Updated: The Developer’s Guide to Azure - I just received word this morning from Michael Crump that he updated The Developer's Guide to Azure, which now includes content on Azure Batch. Go download the latest. Updates to localized versions are coming soon.

    Explore Azure Cosmos DB with .NET Core and MongoDB - It is important that developers choose the right tool for the job. There is no rule that states you cannot use multiple data platforms in your applications. Every .NET developer should be aware of the NoSQL option. In many cases, it can simplify your application development by eliminating the need for an ORM. You also can store documents exactly as they are modeled in your application. Azure Cosmos DB makes it possible, and easier than ever, to create a highly available and reliable cloud database for your apps.
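
As a rough sketch of storing a document exactly as it is modeled in your application (the Hero type, database, and collection names here are purely illustrative, not taken from the article), you can point the standard MongoDB .NET driver at a Cosmos DB MongoDB API connection string:

using System;
using System.Threading.Tasks;
using MongoDB.Driver;

public class Hero
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string[] Powers { get; set; }
}

public static class CosmosMongoSample
{
    public static async Task SaveHeroAsync(string connectionString)
    {
        // The connection string comes from the Cosmos DB account's Connection String blade.
        var client = new MongoClient(connectionString);
        var heroes = client.GetDatabase("herodb").GetCollection<Hero>("heroes");

        // The C# object is persisted as-is; no ORM or mapping layer is required.
        await heroes.InsertOneAsync(new Hero
        {
            Id = Guid.NewGuid().ToString(),
            Name = "Storm",
            Powers = new[] { "weather control", "flight" }
        });
    }
}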

Create a MongoDB app with React and Azure Cosmos DB - This multi-part video tutorial demonstrates how to create a hero tracking app with a React front-end. The app uses Node and Express for the server, connects to Azure Cosmos DB with the MongoDB API, and then connects the React front-end to the server portion of the app. The tutorial also demonstrates how to do point-and-click scaling of Azure Cosmos DB in the Azure portal and how to deploy the app to the internet so everyone can track their favorite heroes.

    Azure Cosmos DB query cheat sheets - The Azure Cosmos DB query cheat sheets help you quickly write queries for your data by displaying common database queries, operations, functions, and operators in easy-to-print PDF reference sheets. The cheat sheets include reference information for the SQL API, MongoDB API, Table API, and Gremlin/Graph API.

    Rendering in Azure - Leverage the power of Azure to produce renders and process changes, without locking up your machine, with access from anywhere on any device. You pay only for what you use without any sign-up or subscription fees.

    Containers, Clusters and the Cloud for Gaming - Azure offers a variety of options for deploying your services, dependencies and even servers within the cloud. Elastic scale allows you to minimize cost, whilst ensuring you'll always meet player demand. Join us for a slide-free, live demo and learn how easy it is to adopt containers and clusters in the cloud for your gaming applications, regardless of your operating system or dev stack.

    Fluffy Fairy’s Lean Approach to Game Development: How an Indie Studio Grew a Hit Game After Only 8 Weeks - Fluffy Fairy launched Idle Miner Tycoon with few game features, no app-store promotions, no marketing budget -- and no players. One year later their game is a certified hit, with more than 40 million installs and $100K of revenue a day. This tell-all session from co-founder and CTO Oliver Löffler will peel back the curtain to show how Fluffy Fairy launched a bare-bones game leveraging backend cloud services and drew on analytics to find, grow, and retain loyal players.

    News from the R Consortium

    The R Consortium has been quite busy lately, so I thought I'd take a moment to bring you up to speed on some recent news. (Disclosure: Microsoft is a member of the R Consortium, and I am a member of its board of directors.)

    You can keep up with news from the R Consortium by following @RConsortium on Twitter, or at the R Consortium blog.

     
