October 2017 Leaderboard of Database Systems contributors on MSDN

Congratulations to our October top-10 contributors! Alberto Morillo maintains the first position in the cloud ranking while Visakh Murukesan climbs to the top in the All Databases ranking.

[Image: October Top-10 leaderboard]

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

[Image: leaderboard points hierarchy]


Spend more time working on the interesting stuff

There’s a reason that each day thousands of developers take advantage of the rich set of extensions offered by our growing family of VS and VSTS Partners and the broader VS community. Collectively these offerings can save you and your team time in many different ways, from helping find bugs faster, to making it easier to work with data, to helping create great UX.

I’m excited to be here at Connect(); this week where our partners are announcing a slew of exciting new products and updates for developers in tandem with our own developer productivity announcements.

Here’s a quick overview of these updated and new additions to the over 13,000 free and for-pay offerings in the Visual Studio Marketplace that you can take advantage of today:

Enhancing Cross-Platform Development

  • PreEmptive Solutions released the latest edition of Dotfuscator Community Edition, containing many new features to bring developers GDPR compliance relief: a new user-friendly interface for configuring Checks, obfuscation support for Unity, .NET Core 2.0 and .NET Standard 2.0, improved support for Xamarin and XAML, and more.
  • DevExpress announced their second major release of the DevExpress Universal Subscription, containing UI Controls and Libraries for the .NET Framework, .NET Core, and HTML5/JavaScript developers.

Effortless Database Development

  • CData released CosmosDB Drivers, allowing access to Cosmos DB databases from BI, ETL, Analytics, and reporting tools through bi-directional data drivers.
  • Alachisoft issued a major upgrade release of NCache 4.8, improving ease-of-use and integrating .NET Core Client, ASP.NET Core Sessions, Entity Framework Core 2.0, Docker Support, and more, so that you can remove data storage and database bottlenecks related to app performance and scale .NET and Java applications to extreme transaction processing.

DevOps at Your Fingertips

  • Supercharge your C# debugging experience with OzCode v3.5 Release Candidate. With this release, you can bake quality into your continuous deployment pipeline and utilize features like Conditional Search, which helps developers root-cause complicated bugs in minutes.
  • The most downloaded partner extension for Visual Studio Team Services, 7pace Timetracker, released Timetracker4, including a completely new client architecture for HTML and Windows, and compatibility with Team Foundation Server 2018.
  • Redgate released a new ReadyRoll Visual Studio Team Services extension, which works with Team Foundation Server 2018 and ReadyRoll Core (included in Visual Studio Enterprise 2017). This release includes a template that makes database CI/CD setup quick and easy, and it has better support for deploying to Azure SQL Database. In Redgate’s internal user tests, this took database CI/CD setup from hours down to a few minutes.
  • GitHub will contribute to the GVFS.io open source project and plans to add GVFS support to GitHub.com.

Innovating with Visual Studio

  • The Progress Telerik UI for Xamarin team is launching Telerik Tagit – a Xamarin.Forms mobile app designed to turn the photo collection on your phone into a searchable database, powered by Microsoft’s Cognitive Services Computer Vision API.
  • We welcome CloudRail to the Visual Studio ecosystem. Through unified APIs, CloudRail has made over 50 components available allowing mobile developers to easily integrate with best-of-breed mobile services like OneDrive, Stripe, and Facebook.
  • In addition to a new release of the IP*Works! collection of components for internet communications, payment processing, EDI integration, and cloud storage, /n Software announced beta support for Windows IoT and Bluetooth LE.
  • Combit Software GmbH delivered massive performance gains in the latest release of their List & Label report generator, making the report designer fast enough for projects of all sizes.

You can learn about these Connect() partner announcements and many more partner technologies in our Post-Connect Webinar Series, launching on December 4th on Channel 9.

And if you haven’t had a chance yet I’d suggest taking a few minutes to check out the Visual Studio Marketplace to see what you might be missing!

Shawn Nandi, Senior Director – Developer Programs, Partnerships and Planning – Cloud App Dev, Data, and AI Product Marketing
@ShawnNandi

Shawn drives partnerships and business planning for the developer business at Microsoft as well as product marketing for developer programs and subscriptions including Visual Studio Subscriptions and Dev Essentials.

Bing Maps V8 Web Control SDK November 2017 Update

The Bing Maps V8 Web Control has been updated in November with one key feature: you can now provide your Bing Maps key as a URL parameter of the map script URL rather than as a map option when loading the map. This provides two key benefits:

  • Load multiple map instances under a single session on a page. Normally, when loading two or more maps on a page, each map instance generates a billable transaction and its own session. By specifying your Bing Maps key in the map script URL, all map instances on a page will use the same map session, and thus reduce the number of billable transactions your application generates.
  • Faster live site issue mitigation. If an issue occurs due to a release, apps which use this new way of specifying a Bing Maps key can have the issue resolved more quickly.

Before:

In the past a Bing Maps key would be specified in the map options when loading the map. This will continue to work, but is no longer recommended.

function GetMap()
{
	var map = new Microsoft.Maps.Map('#myMap', {
		credentials: '[YOUR_BING_MAPS_KEY]',
		zoom: 1
	});
}

<script type='text/javascript' src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap' async defer></script>

After:

It is now recommended to specify your Bing Maps key as part of the map script URL.

function GetMap()
{
	var map = new Microsoft.Maps.Map('#myMap', {
		zoom: 1
	});
}

<script type='text/javascript' src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap&key=[YOUR_BING_MAPS_KEY]' async defer></script>

Seasonal Code Freeze

The main release branch of the Bing Maps V8 Web Control will be in a code freeze until early January 2018 to reduce the chances of a service interruption during the holiday season. We will, however, continue to push regular updates to the experimental branch of Bing Maps V8.

For more information about Bing Maps V8 and our other APIs, go to https://www.microsoft.com/en-us/maps.

- Bing Maps Team

Test Experience Improvements

There have been several significant improvements to the test experience that range across Visual Studio and Visual Studio Team Services. These efforts involved frameworks and tooling for both .NET and C++, but all had a common goal: make testing with our developer tools a great experience.

.NET

Side-by-side Performance Comparison

These improvements are best shown in a side-by-side comparison of Visual Studio 2017 15.4 and Visual Studio 2017 15.5 Preview 2 with the Real Time Test Discovery feature flag turned on.

Wow! How did you get that performance?!

The performance improvements are thanks to work in a few different areas, namely, Test Platform and Test Adapter improvements and a new feature called Real Time Test Discovery.

Test Platform and Test Adapter Improvements

Each of the top testing frameworks has made great strides that make the whole experience better. In Visual Studio 2017, unit test projects reference the MSTest Version 2 test framework by default. MSTest v2 has seen great adoption, reaching 1.8 million downloads before even becoming the default option in MSTest projects. xUnit 2.3.0 has gone RTM, bringing great performance improvements. NUnit has also markedly improved performance.
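
For reference, here’s a minimal MSTest v2 test of the kind a new Visual Studio 2017 unit test project references by default; the class and assertion are illustrative (the framework ships as the MSTest.TestFramework NuGet package):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        // Discovered and run by the MSTest v2 adapter in Test Explorer.
        Assert.AreEqual(4, 2 + 2);
    }
}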

In the table below, you can see the before and after times of the most popular test frameworks and the percentage improvement per test runner. These were timed using benchmark solutions of 10,000 tests. You can find the links to the solutions in the table.

Test Discovery
Test Framework   Before (secs)   After (secs)   Improvement (%)
xUnit            114             6.4            94%
NUnit            9.54            5.5            42%
MSTest V2        4.76            5.2            -9% *

Test Execution
Test Framework   Before (secs)   After (secs)   Improvement (%)
xUnit            366.5           68             81%
NUnit            316.1           16.7           95%
MSTest V2        16.48           11.74          29%

Before = VS 15.2
After = VS 15.5 Preview 2 + NUnit Adapter v3.9.0, published on 11 Oct 2017 + xUnit v2.3.1 published on 27 Oct 2017
Test Discovery performance improvements include the gains due to the new Source Information Provider implementation. This is presently shipped behind a feature flag. The intent is to make that the default from 15.5 Preview 4.

Much of this was a collaborative effort of developers contributing to the open source test frameworks NUnit, xUnit, and MSTest, often working on opposite sides of the world from each other. Luckily, we can share the good vibes on social media.

Real Time Test Discovery

Real time test discovery is a new Visual Studio feature that uses a Roslyn analyzer to discover tests and populate the test explorer in real time without requiring you to build your project. This feature has been introduced in Visual Studio 2017 15.5 Preview 2 behind a feature flag. Learn how to turn it on in the Real Time Test Discovery blog post. This not only makes test discovery significantly faster, it also keeps the test explorer in sync with code changes such as adding or removing tests. Since real time discovery is powered by the Roslyn compiler, it is only available for C# and Visual Basic projects.

[Animation: Real Time Test Discovery]

Live Unit Testing Improvements

Live Unit Testing is a feature introduced in Visual Studio 2017 Enterprise that automatically runs any impacted unit tests in the background and presents the results and code coverage live in the editor. It now supports more of your projects including projects targeting .NET Core (starting in Visual Studio 2017 15.3) as well as MSTest v1. Users with projects eligible for Live Unit Testing will now be prompted to switch it on with a gold bar appearing at the top of Visual Studio. Don’t worry, you can select not to show it again or to learn more if you aren’t sure Live Unit Testing is right for you.

[Image: notification bar offering to enable Live Unit Testing]

We have also introduced a few usability enhancements, including test icons that appear next to test methods in the code editor when Live Unit Testing is on. This addition was requested to help test methods stand out from the other glyphs in the margin. Clicking a Live Unit Testing icon now pops out a menu from which you can run or debug that test.

[Image: test icon identifying a test method]

Visual Studio also has a new feature called the Task Center Notification. If you’re curious what processes Live Unit Testing is currently executing, you can investigate by clicking on the Task Center. This is particularly helpful when one of your tests is taking longer to give results. The Task Center Notification will tell you if Live Unit Testing is discovering, building, or executing your test.

[Image: Task Center Notification showing whether Live Unit Testing is discovering, building, or executing tests]

Live Unit Testing is now easier to configure with the addition of a Tools > Options page. This page lets you automatically pause Live Unit Testing if, for example, you need to save battery life when you’re on the go. You can also make sure Live Unit Testing skips some tests by adding a skip category (sketched below). For a short period of focused development, you can include a specific set of tests by right-clicking on the solution, project, or class in the Solution Explorer and selecting Live Unit Testing > Include (or Exclude). This will only run Live Unit Testing on those tests. You can also include and exclude individual tests by right-clicking on them in the code editor.

[Image: Live Unit Testing settings in Tools > Options]
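
As a sketch of the skip-category approach, here’s a test tagged with a category; the name “SkipLiveUnitTesting” is hypothetical and must match whatever category you configure on the Live Unit Testing options page:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SlowTests
{
    // Hypothetical category name; configure the same name in
    // Tools > Options > Live Unit Testing.
    [TestMethod]
    [TestCategory("SkipLiveUnitTesting")]
    public void LongRunningIntegrationTest()
    {
        // Still runs in explicit test runs; Live Unit Testing skips it.
    }
}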

C++ Testing Improvements

We’ve improved the C++ unit testing experience by adding built-in support for more unit testing frameworks. In addition to Microsoft’s native testing framework, Visual Studio now includes support for Google Test and Boost.Test. New installations of the Visual Studio “Desktop development with C++” workload will acquire Google Test and Boost.Test support by default, but if you’re upgrading from a previous version you’ll need to add these adapters manually via the Visual Studio Installer.

[Image: Visual Studio Installer showing default support for Google Test and Boost.Test]

Whether you prefer the Microsoft, Google Test, or Boost.Test framework, you can immediately use all of Visual Studio’s testing tools to write, discover, and run your unit tests. In the next image, you can see tests from all three frameworks discovered and run via the Test Explorer window.

[Image: Test Explorer window]

To quickly add a Microsoft Native Test or Google Test project to your Solution, just choose the desired template in the New Project Wizard.

[Image: Microsoft Native Test and Google Test project templates in the New Project Wizard]

Share Your Feedback

As always, we welcome your thoughts and concerns. Please install Visual Studio 2017 today, exercise your favorite workloads, and tell us what you think.

For issues, let us know via the Report a Problem tool in Visual Studio. You’ll be able to track your issues in the Visual Studio Developer Community where you can ask questions and find answers. You can also engage with us and other Visual Studio developers through our new Gitter community (requires GitHub account).

Kendra Havens, Program Manager, Visual Studio Testing Tools Team
@gotheap
Kendra is a Program Manager on the .NET team focused on the testing experience.

Nick Uhlenhuth, Program Manager, Visual Studio Testing Tools Team
@nickuhlenhuth
Nick is a Visual C++ Program Manager focused on making the unit testing experience great!

Pratap Lakshman, Senior Program Manager, Visual Studio Testing Tools Team
@pvlakshm
Pratap works on Testing tools in the areas of IntelliTest, Fakes, Unit Testing, and Code Coverage.

Xbox analytics report update in Dev Center

We are happy to announce that we’ve updated our Xbox analytics report with a new Xbox Live service health tab to provide visibility into Xbox Live’s service responses.

Xbox Live service health

The new Xbox Live service health tab helps you understand the impact of any Xbox Live client errors, including rate limiting. You can now drill down by endpoint and status code to fix issues more effectively. The view includes exempted rate-limited requests, so you can see the potential impact when these requests occur at high volume.

The report also provides a view on Xbox Live’s service availability for your title, so you can quickly determine whether an issue is due to your title’s code or a service outage. The report includes sandbox filters, so you can be more proactive and mitigate issues before your game is released.

Xbox analytics overview

The Xbox analytics overview tab shows who your players are and how they’re engaging with Xbox Live features, so you can make key business decisions for your Xbox titles. For many of these statistics, we also show the Xbox average, so you can easily see how your customers interact with Xbox compared to the average Xbox customer. You can view the following data in the overview tab:

  • Concurrent usage
  • Gamerscore distribution
  • Achievement unlocks
  • Game statistics
  • Friends and followers
  • Accessory usage
  • Connection type

You can learn more about the Xbox analytics report here. We look forward to the positive impact it will have on your players!

Improving the debugging experience for std::function

We received a Visual Studio User Voice suggestion to make “Step Into” go directly to user code, skipping past standard library (std::function) implementation details. We recently worked on this suggestion and implemented it in the latest version of Visual C++.

The issue:

Single-stepping through a call to an instance of std::function was a particular pain point. When debugging, you are not usually interested in the internal implementation details of the standard library. Instead, you’d like the debugger to view this as “system code” that may be skipped, taking you directly to the code you wrote.

It took no less than *22* presses of F11 to single step through the STL to actually get to user code, and 9 more to get back out after the user’s function completes in the code below:

#include "stdafx.h"
#include <functional>
#include <iostream>

void display() {
	std::cout << "Hello" << std::endl; // step into should go here
}

int main() {
	std::function<void()> f = &display;
	f(); // Step into here should go directly to "display()"
}

The solution:

To fix this behavior, we used a debugger feature to control single-stepping behavior by annotating the STL code with hints about the calls to user code. So a single keypress steps from an expression that calls a std::function object (and for calls to std::invoke as well) directly to the user’s function, without stopping in any of the intervening STL machinery. Likewise, a single keypress at the end of the user’s function steps directly back out to the invocation expression without stopping in intervening STL machinery. This vastly improves users’ debugging experience.

[Animation: Step Into going directly from a std::function call to user code]

In Visual Studio version 15.5 Preview 4 we’ve annotated our standard library code with hints to the debugger that enable this behavior. We know that this feature could be useful for other libraries as well and are working on a way to generalize this feature and make it available to developers.

As we often mention in our blog posts, your feedback is important for improving the Visual Studio experience; we spend a lot of time reviewing your suggestions and incorporating them into our planning for future releases.

We look forward to your feedback!

— Visual C++ Team

UPDATE – Web Applications with ASP.NET Core Architecture and Patterns guidance (Updated for ASP.NET Core 2)

Updated for ASP.NET Core 2.0 (Nov. 15th 2017)

Earlier this year, we published an eBook/Guide and sample application offering guidance named Architecting Modern Web Applications with ASP.NET Core and Microsoft Azure.

We have recently published updates to the eBook (2nd edition) and sample application to bring them in line with the latest releases of ASP.NET Core 2.0 and Entity Framework Core 2.0.

Check it out in the updated blog post and download the new 2nd edition eBook from the link below:

 

UPDATE – Microservices and Docker containers: Architecture, Patterns and Development guidance (Updated for .NET Core 2.0)

Updated for .NET Core 2.0 “wave” of technologies (Nov. 15th 2017)

Earlier this year, we published this eBook/guide and sample application offering guidance for architecting microservices and Docker containers based applications.

We have recently published updates to the eBook (2nd edition) and sample application to bring them in line with the latest releases of .NET Core 2.0 and many other updates coming along as part of the same “wave” of technologies.

See the list of many updates in the updated blog post and download the new 2nd edition eBook from the link below:

 


The City of Chicago uses R to issue beach safety alerts

Among the many interesting talks I saw at the Domino Data Science Pop-Up in Chicago earlier this week was the presentation by Gene Lynes and Nick Lucius from the City of Chicago. The City of Chicago Tech Plan encourages smart communities and open government, and as part of that initiative the city has undertaken dozens of open-source, open-data projects in areas such as food safety inspections, preventing the spread of West Nile virus, and keeping sidewalks clear of snow.

This talk was on the Clear Water initiative, a project to monitor the water quality of Chicago's many public beaches on Lake Michigan, and to issue safety alerts (or, in serious cases, beach closures) when E. coli levels in the water get too high. The problem is that E. coli levels can change rapidly: levels can be normal for weeks, and then spike for a single day. But traditional culture tests take many days to return results, and while rapid DNA-based tests do exist, it's not possible to conduct these tests daily at every beach.

The solution is to build a predictive model, which uses meteorological data and rapid DNA tests for some beaches, combined with historical (culture-based) evaluations of water quality, to predict E. coli levels at all beaches every day. The analysis is performed using R (you can find the R code in this GitHub repository).

The analysis was developed in conjunction with citizen scientists at Chi Hack Night and statisticians from DePaul University. In 2017, the model was piloted in production in Chicago to issue beach safety alerts and to create a live map of beach water quality. This new R-based model predicted 60 additional occurrences of poor water quality, compared with the process used in prior years.

Still, water quality is hard to predict: compared against the slower culture-based results, the model has an accuracy rate of 38%, with fewer than 2% false alarms. (The city plans to use clustering techniques to further improve that number.) The model uses rapid DNA testing at five beaches to predict conditions at all beaches along Lake Michigan. A Shiny app (linked below) lets you explore how testing at a different set of beaches, and adjusting the false positive rate, affects the overall accuracy of the model.

[Image: Chicago beaches Shiny app]

You can find more details about the City of Chicago Clear Water initiative at the link below.

City of Chicago: Clear Water

Announcing the Windows Compatibility Pack for .NET Core

Porting existing code to .NET Core used to be quite hard because the available API set was very small. In .NET Core 2.0, we already made this much easier, thanks to .NET Standard 2.0. Today, we’re happy to announce that we made it even easier with the Windows Compatibility Pack, which provides access to an additional 20,000 APIs via a single NuGet package.

Who is this package for?

This package is meant for developers who need to port existing .NET Framework code to .NET Core. But before you start porting, you should understand what you want to accomplish with the migration. Just porting to .NET Core because it’s a new .NET implementation isn’t a good enough reason (unless you’re a True Fan).

.NET Core is optimized for building highly scalable web applications, running on Windows, macOS or Linux. If you’re building Windows desktop applications, then the .NET Framework is the best choice for you. Take a look at our documentation for more details on how to choose between .NET Core and .NET Framework.

Demo

For a demo, take a look at this video:

Using the Windows Compatibility Pack

We highly recommend that you plan your migrations as a series of steps instead of assuming you can port the existing code base all at once. If you’re planning to migrate an ASP.NET MVC application running on a local Windows server to an ASP.NET Core application running on Linux in Azure, we’d recommend you perform these steps:

  1. Migrate to ASP.NET Core (while still targeting the .NET Framework)
  2. Migrate to .NET Core (while staying on Windows)
  3. Migrate to Linux
  4. Migrate to Azure

The order of steps might vary, depending on your business goals and what value you need to accomplish first. For example, you might need to deploy to Azure before you perform the other migration steps. The primary point is that you perform one step at a time to ensure your application stays operational along the way. This reduces the complexity and churn you have to reason about at once. It also allows you to learn more about your code base and adjust your plans as you discover issues.

The Porting to .NET Core from .NET Framework documentation provides more details on the recommended process and which tools you can use.

Before bringing existing .NET Framework code to a .NET Core project, we recommend you first add the Windows Compatibility Pack by installing the NuGet package Microsoft.Windows.Compatibility. This maximizes the number of APIs you have at your disposal.

The Windows Compatibility Pack is currently in preview because it’s still a work in progress. The following table describes the APIs that are already part of the Windows Compatibility Pack or are coming in a subsequent update:

Component                                    Status     Windows-Only
Microsoft.Win32.Registry                     Available  Yes
Microsoft.Win32.Registry.AccessControl       Available  Yes
System.CodeDom                               Available
System.ComponentModel.Composition            Coming
System.Configuration.ConfigurationManager    Available
System.Data.DatasetExtensions                Coming
System.Data.Odbc                             Coming
System.Data.SqlClient                        Available
System.Diagnostics.EventLog                  Coming     Yes
System.Diagnostics.PerformanceCounter        Coming     Yes
System.DirectoryServices                     Coming     Yes
System.DirectoryServices.AccountManagement   Coming     Yes
System.DirectoryServices.Protocols           Coming
System.Drawing                               Coming
System.Drawing.Common                        Available
System.IO.FileSystem.AccessControl           Available  Yes
System.IO.Packaging                          Available
System.IO.Pipes.AccessControl                Available  Yes
System.IO.Ports                              Available  Yes
System.Management                            Coming     Yes
System.Runtime.Caching                       Coming
System.Security.AccessControl                Available  Yes
System.Security.Cryptography.Cng             Available  Yes
System.Security.Cryptography.Pkcs            Available  Yes
System.Security.Cryptography.ProtectedData   Available  Yes
System.Security.Cryptography.Xml             Available  Yes
System.Security.Permissions                  Available
System.Security.Principal.Windows            Available  Yes
System.ServiceModel.Duplex                   Available
System.ServiceModel.Http                     Available
System.ServiceModel.NetTcp                   Available
System.ServiceModel.Primitives               Available
System.ServiceModel.Security                 Available
System.ServiceModel.Syndication              Coming
System.ServiceProcess.ServiceBase            Coming     Yes
System.ServiceProcess.ServiceController      Available  Yes
System.Text.Encoding.CodePages               Available  Yes
System.Threading.AccessControl               Available  Yes

Handling Windows-only APIs

If you plan to run your .NET Core application on Windows only, then you don’t have to worry about whether an API is cross-platform or not. However, if you plan to migrate your application to Linux or macOS, you need to take the platform support into account.

As you can see in the previous table, about half of the components in the Windows Compatibility Pack are Windows-only; the other half works on any platform. Your code can always assume that all the APIs exist across all platforms, but if they are Windows-only they throw PlatformNotSupportedException. This allows you to write code that calls Windows-only APIs after doing a platform check at runtime, rather than having to use conditional compilation with #if. We recommend using RuntimeInformation.IsOSPlatform() for platform checks:

// Requires: using Microsoft.Win32; using System; using System.IO;
//           using System.Runtime.InteropServices;
private static string GetLoggingPath()
{
    // Verify the code is running on Windows.
    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
    {
        using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Fabrikam\AssetManagement"))
        {
            if (key?.GetValue("LoggingDirectoryPath") is string configuredPath)
                return configuredPath;
        }
    }

    // This is either not running on Windows or no logging path was configured,
    // so just use the path for non-roaming user-specific data files.
    var appDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
    return Path.Combine(appDataPath, "Fabrikam", "AssetManagement", "Logging");
}

You might wonder how you’re supposed to know which APIs are Windows-only. The obvious answer would be documentation, but that’s not very convenient. This is one of the reasons why we introduced the API Analyzer tool two weeks ago. It’s a Roslyn-based analyzer that flags usages of Windows-only APIs when you’re targeting .NET Core and .NET Standard. For the previous sample, this looks as follows:

[Image: API Analyzer flagging the Windows-only registry calls]

You have three options to deal with Windows-only API usages:

  • Remove. Sometimes you might get away with simply deleting the code as you don’t plan to migrate certain features to the .NET Core-based version of your application.
  • Replace. Usually, you’ll want to preserve the general feature so you might have to replace the technology with one that is cross-platform. For example, instead of saving configuration state in the registry, you’d use text-based configuration files you can read from all platforms.
  • Guard. In some cases, you may want to call the Windows-only API when you’re running on Windows and simply do nothing (or call a Linux-specific API) when you’re running on Linux.

In the previous example, the code is already written in such a way that it provides a default configuration when the setting isn’t found in the registry. So the easiest solution is to guard the call to registry APIs behind a platform check.
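
The Replace option can be sketched in the same style; here is a minimal, hypothetical cross-platform version of the lookup above that swaps the registry for a plain key=value text file (the file name and key are illustrative, not part of the Compatibility Pack):

using System;
using System.IO;
using System.Linq;

static class PortableSettings
{
    public static string GetLoggingPath()
    {
        // Hypothetical settings file with one key=value pair per line.
        var settingsFile = Path.Combine(AppContext.BaseDirectory, "app.settings");
        if (File.Exists(settingsFile))
        {
            var pair = File.ReadLines(settingsFile)
                .Select(line => line.Split(new[] { '=' }, 2))
                .FirstOrDefault(p => p.Length == 2 && p[0].Trim() == "LoggingDirectoryPath");
            if (pair != null)
                return pair[1].Trim();
        }

        // Same cross-platform fallback as the registry version above.
        var appDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
        return Path.Combine(appDataPath, "Fabrikam", "AssetManagement", "Logging");
    }
}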

The Windows Compatibility Pack is designed as a metapackage, meaning it doesn’t directly contain any libraries but references other packages. This allows you to quickly bring in all the technologies without having to hunt down various packages. But as your port progresses, you may find it useful to reference individual packages instead. This allows you to remove dependencies and ensure newly written code in that project doesn’t take a dependency on it again.

Summary

When you port existing code from the .NET Framework to .NET Core, install the new Windows Compatibility Pack. It provides access to an additional 20,000 APIs, compared to what is available in .NET Core. This includes drawing, EventLog, WMI, Performance Counters, and Windows Services.

If you plan to make your code cross-platform, use the new API Analyzer to ensure you don’t accidentally depend on Windows-only APIs.

But remember that the .NET Framework is still the best choice for building desktop applications as well as Web Forms-based web applications. If you’re happy on the .NET Framework, there is also no reason to port to .NET Core.

Let us know what you think!

Managing Secrets Securely in the Cloud

You’ve probably heard some version of the story about a developer who mistakenly checked his AWS S3 key into GitHub. He pulled the key within five minutes but still racked up a multi-thousand-dollar bill from bots that crawl open source sites looking for secrets. As developers, we all understand and care about keeping dev and production secrets safe, but managing those secrets on your own, and especially in a team, can be cumbersome. We are pleased to announce several new features that together will make detecting secrets in code and working with secrets stored securely on Azure easier than ever before.

Safeguarding Secrets while building for Azure

Most of us know it’s a best practice to keep secret settings like connection strings, domain passwords, or other credentials as runtime configuration, outside the source code. Azure Key Vault provides a secure location to safeguard keys and other secrets used by cloud apps. Azure App Service recently added support for Managed Service Identity, which means apps running on App Service can easily be authorized to access a Key Vault and other AAD-protected resources, so you no longer need to store secrets visibly in environment variables.

If you do this, though, getting your local dev environment set up with the right secrets can be a pain, especially if you work on a team. We hear that many developers distribute secrets for shared dev services through email or just check them into source code. So we created the App Authentication Extension to make it easy to develop apps locally while keeping your secrets in Key Vault. With the extension installed, your locally running app uses the identity signed into Visual Studio to get secrets you are authorized to access directly from Key Vault. This works great in a team environment where you might have a security group for the dev team with access to a dev-environment Key Vault.
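
Under the hood, the pattern looks roughly like the following sketch, assuming the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault NuGet packages and a placeholder vault URL:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

class Program
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        // Locally this authenticates as the account signed into Visual Studio;
        // in Azure it uses the app's Managed Service Identity, so the same
        // code runs unchanged in both places.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // Placeholder vault and secret names.
        var secret = await keyVaultClient.GetSecretAsync(
            "https://my-dev-vault.vault.azure.net/secrets/DbConnectionString");
        Console.WriteLine(secret.Value);
    }
}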

[Image: Azure Key Vault]

[Image: Azure Service Authentication account selection in Tools > Options]

For ASP.NET applications, the ASP.NET Key Vault and User Secret configuration builders for .NET 4.7.1 are available as a NuGet package, allowing secret app settings to be saved in secure configuration stores instead of in web.config as plaintext, without changing application source code. In ASP.NET Core applications there is a small code change to load Key Vault as a configuration provider; once you do this, you are set. This change isn’t done yet, but we’re hoping to eliminate it soon.
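
For ASP.NET Core 2.0, that small code change is along these lines; a sketch assuming the Microsoft.Extensions.Configuration.AzureKeyVault package, the project's usual Startup class, and placeholder setting names:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // Read the vault coordinates from the configuration sources
                // registered so far (appsettings.json, user secrets, env vars).
                var built = config.Build();
                config.AddAzureKeyVault(
                    $"https://{built["KeyVaultName"]}.vault.azure.net/",
                    built["AzureAd:ClientId"],
                    built["AzureAd:ClientSecret"]);
            })
            .UseStartup<Startup>()
            .Build();
}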

[Image: App Settings]

Here are a couple of walkthroughs that show you how everything works:

Credential Scanner (CredScan) Code Analyzer Preview

We also wanted to make it easier for devs to find secrets in their code, to encourage moving secrets to more secure locations like User Secrets or Azure Key Vault. The Credential Scan Code Analyzer is a very early preview that can detect Storage access keys, SAS tokens, API Management keys, Cosmos DB access keys, AAD Service Principal keys, connection strings for SQL, Azure SQL, Service Bus, Azure Logic Apps, and BizTalk Server, and various other credential types. As you edit your code, the analyzer scans your code and immediately warns you about secrets it finds in any open documents, surfacing warnings in the Error List and during Build and Code Analysis at commit time. It’s something we’ve been developing, utilizing, and improving within Microsoft for some time now.

The Credential Scan Code Analyzer is a preview and ships in the experimental DevLabs extension, Continuous Delivery Tools for Visual Studio. This is because we know this is an important area that goes beyond open documents and can stretch all the way into your CI environment. Rather than waiting, we released an experimental version now because we think it’s useful and we want your feedback on how you would use this in your environment.

Please install these extensions and give the walkthroughs a try to let us know what you think.

Catherine Wang, Program Manager, Azure Developer Experience Team
@cawa_cathy

Catherine is a Program Manager for Azure Developer Experience team in Microsoft. I worked on Azure security tooling, Azure diagnostics, Storage Explorer, Service Fabric and Docker tools. Interested in making development experience simple, smooth and productive.

Announcing .NET 4.7.1 Tools for the Cloud

Today we are releasing a set of providers for ASP.NET 4.7.1 that make it easier than ever to deploy your applications to cloud services and take advantage of cloud-scale features. This release includes a new Cosmos DB provider for session state and a collection of configuration builders.

A Package-First Approach

With previous versions of the .NET Framework, new features were provided “in the box,” shipping with Windows and with new versions of the entire framework. This meant you could be assured that your providers and capabilities were available on every current version of Windows. It also meant that you had to wait for a new version of Windows to get new .NET Framework features. Starting with .NET Framework 4.7, we have adopted a strategy of delivering more abstract features with the framework and deploying concrete providers through the NuGet package manager service. There are no concrete ConfigurationBuilder classes in the .NET Framework 4.7.1, and we are now making several available for your use from the NuGet.org repository. In this way, we can update and deploy new ConfigurationBuilders without requiring a fresh install of Windows or the .NET Framework.

ConfigurationBuilders Simplify Application Management

In .NET Framework 4.7.1 we introduced the concept of ConfigurationBuilders.  ConfigurationBuilders are objects that allow you to inject application configuration into your .NET Framework 4.7.1 application and continue to use the familiar ConfigurationManager interface to read those values.  Sure, you could always write your configuration files to read other config files from disk, but what if you wanted to apply configuration from environment variables?  What if you wanted to read configuration from a service, like Azure Key Vault?  To work with those configuration sources, you would need to rewrite some non-trivial amount of your application to consume these services.

With ConfigurationBuilders, no code changes are necessary in your application. You simply add references from your web.config or app.config file to the ConfigurationBuilders you want to use, and your application will start consuming those sources without updating your configuration files on disk. One form of ConfigurationBuilder is the KeyValueConfigBuilder, which matches a key to a value from an external source and adds that pair to your configuration. All of the ConfigurationBuilders we are releasing today support this key-value approach to configuration. Let’s take a look at using one of these new ConfigurationBuilders, the EnvironmentConfigBuilder.

When you install any of our new ConfigurationBuilders into your application, we automatically allocate the appropriate new configSections in your app.config or web.config file as shown below:

The new “builders” section contains information about the ConfigurationBuilders you wish to use in your application. You can declare any number of ConfigurationBuilders and apply the settings they retrieve to any section of your configuration. Let’s look at applying our environment variables to the appSettings of this configuration. You specify which ConfigurationBuilders to apply to a section by adding the configBuilders attribute to that section, indicating the name of the defined ConfigurationBuilder to apply, in this case “Environment”:

<appSettings configBuilders="Environment">
  <add key="COMPUTERNAME" value="VisualStudio" />
</appSettings>

The COMPUTERNAME is a common environment variable set by the Windows operating system, which we can use to replace the VisualStudio value defined here. With a simple ASPX page in our project that displays the application settings, we can run our application and see the following results.

[Image: appSettings reported in the browser]
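
The ASPX page itself isn’t reproduced here; as a minimal stand-in, any code that reads settings through the familiar ConfigurationManager API (with a reference to System.Configuration) sees the substituted values:

using System;
using System.Configuration;

class ShowSettings
{
    static void Main()
    {
        // The EnvironmentConfigBuilder runs when the appSettings section is
        // first read, so this code doesn't know or care where values come from.
        foreach (var key in ConfigurationManager.AppSettings.AllKeys)
        {
            Console.WriteLine($"{key} = {ConfigurationManager.AppSettings[key]}");
        }
    }
}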

The COMPUTERNAME setting is overwritten by the environment variable.  That’s a nice start, but what if I want to read ALL the environment variables and add them as application settings?  You can specify Greedy Mode for the ConfigurationBuilder and it will read all environment variables and add them to your appSettings:

<add name="Environment" mode="Greedy"
  type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

There are several Modes that you can apply to each of the ConfigurationBuilders we are releasing today:

  • Greedy – Read all settings and add them to the section the ConfigurationBuilder is applied to
  • Strict – (default) Update only those settings where the key matches the configuration source’s key
  • Expand – Operate on the raw XML of the configuration section and do a string replace where the configuration source’s key is found.

The Greedy and Strict options only apply when operating on AppSettings or ConnectionStrings sections.  Expand can perform its string replacement on any section of your config file.

You can also specify prefixes for your settings to be handled by adding the prefix attribute.  This allows you to only read settings that start with a known prefix.  Perhaps you only want to add environment variables that start with “APPSETTING_”, you can update your config file like this:

<add name="Environment"
     mode="Greedy" prefix="APPSETTING_"
     type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

Finally, even though the APPSETTING_ prefix is a nice way to catch only those settings, you may not want the setting to actually be called “APPSETTING_Setting” in code. You can use the stripPrefix attribute (default value is false) to omit the prefix when the value is added to your configuration:

[Image: greedy appSettings with prefixes stripped]

Notice that the COMPUTERNAME was not replaced in this mode. You can add a second EnvironmentConfigBuilder to read and apply those settings by adding another add statement to the builders section and another entry in the configBuilders attribute on the appSettings section.

Try using the EnvironmentConfigBuilder from inside a Docker container to inject configuration specific to the running instances of your application. We’ve found that this significantly improves the ability to deploy existing applications in containers without having to rewrite your code to read from alternate configuration sources.

Secure Configuration with Azure Key Vault

We are happy to include a ConfigurationBuilder for Azure Key Vault in this initial collection of providers.  This ConfigurationBuilder allows you to secure your application using the Azure Key Vault service, without any required login information to access the vault.  Add this ConfigurationBuilder to your config file and build an add statement like the following:

<add name="AzureKeyVault"
     mode="Strict"
     vaultName="MyVaultName"
     type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />

If your application is running on an Azure service that has Managed Service Identity (MSI) enabled, this is all you need to read configuration from the vault and add it to your application. Conversely, if you are not running on a service with MSI, you can still use the vault by adding the following attributes:

  • clientId – the Azure Active Directory application key that has access to your key vault
  • clientSecret – the Azure Active Directory application secret that corresponds to the clientId

The same mode, prefix, and stripPrefix features described previously are available for use with the AzureKeyVaultConfigBuilder.  You can now configure your application to grab that secret database connection string from the keyvault “conn_mydb” setting with a config file that looks like this:

You can use other vaults by using the uri attribute instead of the vaultName attribute, and providing the URI of the vault you wish to connect to.  More information about getting started configuring key vault is available online.

Other Configuration Builders Available

Today we are introducing five configuration builders as a preview for you to use and extend.

This new collection of ConfigurationBuilders should help make it easier than ever to secure your applications with Azure Key Vault, or orchestrate your applications when you add them to a container by no longer embedding configuration or writing extra code to handle deployment settings.

We plan to fully release the source code and make these providers open source prior to removing the preview tag from them.

Store SessionState in CosmosDb

Today we are also releasing a session state provider for Azure Cosmos DB. The globally distributed Cosmos DB service means that you can geographically load-balance your ASP.NET application and your users will always maintain the same session state no matter which server they are connected to. This async provider is available as a NuGet package and can be added to your project by installing the package and updating the session state provider in your web.config as follows:

<connectionStrings>
  <add name="myCosmosConnString"
       connectionString="- YOUR CONNECTION STRING -"/>
</connectionStrings>
<sessionState mode="Custom" customProvider="cosmos">
  <providers>
    <add name="cosmos"
         type="Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync, Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync"
         connectionStringName="myCosmosConnString"/>
  </providers>
</sessionState>
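
Application code doesn’t change beyond the configuration above; session reads and writes go through the regular ASP.NET Session API while the provider persists state to Cosmos DB. A minimal, illustrative page handler:

using System;
using System.Web.UI;

public partial class Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Reads and writes flow through the configured Cosmos DB provider,
        // so any server in any region sees the same session state.
        Session["LastVisit"] = DateTime.UtcNow.ToString("o");
        var lastVisit = (string)Session["LastVisit"];
        Response.Write($"Last visit: {lastVisit}");
    }
}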

Summary

We’re continuing to innovate in and update the .NET Framework and ASP.NET. These new providers should make it easier than ever to deploy your applications to Azure or make use of containers without having to rewrite your application. Update your applications to .NET 4.7.1 and start using these new providers to make your configuration more secure, and to use Cosmos DB for your session state.

Top stories from the VSTS community – 2017.11.17

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

First Look at VSTS Release Management Approval Gates – Mike Douglas
When approval gates were first announced my mind started thinking about all of the ways this could be used. There were several questions... Read More

Highlights from the Connect(); conference

Connect();, the annual Microsoft developer conference, is wrapping up now in New York. The conference was the venue for a number of major announcements and talks. Here are some highlights related to data science, machine learning, and artificial intelligence:

Lastly, I wanted to share this video presented at the conference from Stack Overflow. Keep an eye out for R community luminary David Robinson programming in R!

You can find more from the Connect conference, including on-demand replays of the talks and keynotes, at the link below.

Microsoft: Connect(); November 15-17, 2017


Because it’s Friday: Better living through chemistry

This video is a compilation of some spectacular chemical reactions, with a few physics demonstrations thrown in for good measure. (But hey, chemistry is just applied physics, right?).

That's all from us here at the blog for this week. Have a great weekend, and we'll be back on Monday!

Docker and Linux Containers on Windows, with or without Hyper-V Virtual Machines

Containers are lovely, in case you haven't heard. They are a nice and clean way to get a reliable and guaranteed deployment, no matter the host system.

If I want to run my ASP.NET Core application, I can just type "docker run -p 5000:80 shanselman/demos" at the command line, and it'll start up! I don't have any concerns that it won't run. It'll run, and run well.

Some container naysayers say, sure, we could do the same thing with Virtual Machines, but even today a VHD (virtual hard drive) is rather an unruly thing that includes a ton of overhead a container doesn't have. Containers are happening, and you should be looking hard at them for your deployments.

docker run shanselman/demos

Historically on Windows, however, Linux Containers run inside a Hyper-V virtual machine. This can be a good thing or a bad thing, depending on what your goals are. Running Containers inside a VM gives you significant isolation with some overhead. This is nice for Servers but less so for my laptop. Docker for Windows hides the VM for the most part, but it's there. Your Container runs inside a Linux VM that runs within Hyper-V on Windows proper.

[Image: Hyper-V on Windows]

With the latest version of Windows 10 (or Windows Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want one), so Linux Containers run on Windows itself using Windows 10's built-in container support.

For now you have to switch "modes" between Hyper-V and native Containers, and you can't (yet) run Linux and Windows Containers side by side. The word on the street is that this is just a point-in-time limitation and that Docker will at some point support running Linux and Windows Containers in parallel. That's pretty sweet, because it opens up all kinds of cool hybrid scenarios. I could run a Windows Server container with a .NET Framework ASP.NET app that talks to a Linux Container running Redis or Postgres. I could then put them all up into Kubernetes in Azure, for example.

Once I've enabled Linux Containers on Windows within Docker, everything just works and there's one less moving part.

[Image: Linux Containers mode enabled in Docker for Windows]

I can easily and quickly run busybox or real Ubuntu (although Windows 10 already supports Ubuntu natively with WSL):

docker run -ti busybox sh

Even more useful is running the Azure Command Line with no install! Just "docker run -it microsoft/azure-cli" and it's running in a Linux Container.

[Image: the Azure CLI running in a Linux container]

I can even run nyancat! (Thanks Thomas!)

docker run -it supertest2014/nyan

nyancat!

Speculating: I look forward to the day I can run something like "minikube start --vm-driver=windows" and easily set up a local Kubernetes development system using Windows' native Linux Container support rather than Hyper-V Virtual Machines, if I choose to.


© 2017 Scott Hanselman. All rights reserved.

Package Management adds nuget.org upstream source

Until now, we’ve focused on making Package Management in Visual Studio Team Services and Team Foundation Server the best place to store your private NuGet and npm packages, but we haven’t focused as much on the packages you use from public sources like NuGet.org. We’ve had basic support for npmjs.com as an “upstream source”, but that’s... Read More

#AzureSQLDW cost savings with optimized for elasticity and Azure Functions – part 1

Azure SQL Data Warehouse is Microsoft’s SQL analytics platform, the backbone of your Enterprise Data Warehouse (EDW). The service is designed to allow customers to elastically, and independently, scale compute and storage with massively parallel processing. SQL DW integrates seamlessly with big data stores and acts as a hub to your data marts and cubes for an optimized and tailored performance of your EDW. Azure SQL DW offers guaranteed 99.9% high availability, compliance, advanced security, and tight integration with upstream and downstream services so you can build a data warehouse that fits your needs. Azure SQL DW is the first and only service enabling enterprises to replicate their data everywhere with global availability in more than 30 regions. Today, we will show you how to save money by automatically scaling compute to match your workload.

One of the key features of Azure SQL Data Warehouse is our decoupled compute and storage model. For the best price and performance, it is important that you choose the performance level that matches your current workload, not just the maximum needed. This blog post is the first of a two-part series covering how to use an Azure Function App to save money by automating compute levels.

The first solution we consider is a schedule-based set of Azure Functions, which allows you to set times during which you would like to have your data warehouse scaled up, scaled down, or paused. Many data warehouse users have periods of activity and inactivity, usually corresponding with the workday. Shown below is a real customer data warehouse instance with its corresponding query usage over several days. Observing this daily workload, the query patterns display clear spikes during the working hours of each day with a drop-off during the night.

[Image: DWU usage compared to query activity, with scaling recommendation]

Currently this customer is not changing its DWU level regardless of usage, and is constantly running at DW600. While DW600 may be necessary during the workday, DW200 may be more appropriate during off-hours. Over a month of scaling between these two values at a 16:8 hour ratio, one could potentially save around 32%. Looking at this usage, we would recommend scaling up before the working spike at 12 AM UTC and scaling back down at 5 PM UTC daily. Using Azure TimerTrigger functions, we could reduce this customer’s monthly spend by 25%!
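
As a rough sketch of such a timer-triggered function (C#, Functions v1 style; the database name, schedule, and app-setting name are placeholders), scaling can be issued as a T-SQL command against the logical server’s master database:

using System;
using System.Data.SqlClient;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class ScaleDownWarehouse
{
    // Fires at 17:00 UTC daily, after the workday spike described above.
    [FunctionName("ScaleDownWarehouse")]
    public static void Run([TimerTrigger("0 0 17 * * *")] TimerInfo timer, TraceWriter log)
    {
        // App setting holding a connection string to the server's master
        // database, from which scale commands are issued.
        var connectionString = Environment.GetEnvironmentVariable("SqlDwMasterConnection");

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // DW200 matches the off-hours level discussed above.
            using (var cmd = new SqlCommand(
                "ALTER DATABASE MyDataWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW200');", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }

        log.Info("Requested scale-down of MyDataWarehouse to DW200.");
    }
}

A matching function with a morning schedule would scale back up to DW600 before the daily spike.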

To learn more about SQL Data Warehouse schedule-based scaling with Azure Functions, check out our documentation and the GitHub repository, or deploy it to your instance now! This schedule-based function is a proof of concept (POC), and we look forward to community contributions around logging, retry logic in the case of failures, and other features that enhance the user experience. The function defaults to using a consumption plan with blob storage. Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.

If you need our help for a POC, contact us directly. Stay up-to-date on the latest Azure SQL DW news and features by following us on Twitter @AzureSQLDW.

R3 on Azure: Strengthening our partnership

Blockchain is increasingly prevalent as a topic of interest in our conversations with business leaders. A growing number of our customers and partners are experimenting with the technology as a secure and transparent way to digitally track ownership of assets, and are partnering with Microsoft to build a new class of distributed applications on Azure.

As financial institutions and fintechs move their blockchain projects from innovation labs to lines of business, they require a seamless integration between their preferred ledger stack and infrastructure, and a robust set of platform components and tooling to develop, manage, and optimize blockchain applications. To meet these growing needs, Microsoft is expanding its strategic partnership with R3 to more deeply integrate R3’s distributed ledger platform, Corda and R3Net, with Azure.

Our partnership with R3 goes back to 2016, when we announced plans to accelerate the adoption of distributed ledger technologies among R3 member banks and make Azure the preferred cloud services provider for the R3 Lab and Research Center. Since then, scores of customers across banking and capital markets have used our Corda templates as the foundation for distributed applications.

Today, we are extending our partnership to more deeply integrate Corda’s capabilities with Azure services. This will enable developers to design and build apps on Corda, known as CorDapps, using enterprise tools they’re already familiar with, such as Azure Active Directory for identity management, Key Vault for key management, and Azure SQL Database for off-chain storage and analytics. Azure will also provide critical enterprise capabilities for running blockchain infrastructure securely and at scale, including Azure ExpressRoute for secure, high-performance hybrid topologies and Azure management and security capabilities for turnkey operations management.

We are excited for the outcomes this partnership will enable for our customers. R3 has tailored its platform to meet the specific needs of financial institutions and has a deep bench of industry experts focused on building a product uniquely suited for banking scenarios. Partnering with R3, we aim to develop a rich portfolio of offerings that can help our customers accelerate the most ambitious blockchain projects.
