
The week in .NET – Mitch Muenster – Stateless


To read last week’s post, see The week in .NET – On .NET on CoreRT and .NET Native – Enums.NET – Ylands – Markdown Monster.

On .NET

Last week, we hosted the MVP Summit, and instead of having a big one-hour show, we did several mini-interviews with MVPs. The first one was published on Monday. Mitch Muenster spent 25 minutes with us talking about being a developer with autism:

This week, we’ll publish the other interviews that we recorded during the summit.

Package of the week: Stateless

Almost all applications implement processes that can be represented as workflows or state machines. Stateless is a library that enables the representation of state machine-based workflows directly in .NET code.

Version 3.0 of Stateless just came out, with support for .NET Core.
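The core idea can be sketched in a few lines of plain C# (a hand-rolled sketch, not the Stateless API): a state machine is essentially a table mapping (state, trigger) pairs to next states, plus a Fire method that rejects invalid transitions.

```csharp
using System;
using System.Collections.Generic;

// Hand-rolled sketch of a workflow as a state machine (not the Stateless API):
// a table maps (state, trigger) pairs to the next state.
var transitions = new Dictionary<(string State, string Trigger), string>
{
    [("Draft", "Submit")] = "Submitted",
    [("Submitted", "Ship")] = "Shipped",
};

var state = "Draft";
void Fire(string trigger)
{
    if (!transitions.TryGetValue((state, trigger), out var next))
        throw new InvalidOperationException($"'{trigger}' is not valid in state '{state}'");
    state = next;
}

Fire("Submit"); // Draft -> Submitted
Fire("Ship");   // Submitted -> Shipped
Console.WriteLine(state); // Shipped
```

Stateless builds on this idea and layers on guard clauses, entry/exit actions, substates, and more.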

User group meeting of the week: Introduction to TPL Dataflow in Boulder, CO

The Boulder .NET User Group holds a meeting on Tuesday, November 15 at 5:45 on TPL Dataflow, a pattern that allows for lock-free multitasking.
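For a flavor of what TPL Dataflow looks like, here is a minimal pipeline sketch (assuming the System.Threading.Tasks.Dataflow library): a TransformBlock feeds an ActionBlock, with completion propagated down the chain.

```csharp
using System;
using System.Threading.Tasks.Dataflow;

// Minimal dataflow pipeline: square each input, then accumulate the results.
var square = new TransformBlock<int, int>(x => x * x);
var total = 0;
var sum = new ActionBlock<int>(x => total += x);

// Link the blocks and propagate completion downstream.
square.LinkTo(sum, new DataflowLinkOptions { PropagateCompletion = true });

for (var i = 1; i <= 4; i++) square.Post(i);
square.Complete();
sum.Completion.Wait();

Console.WriteLine(total); // 1 + 4 + 9 + 16 = 30
```

Blocks coordinate by passing messages rather than sharing state under explicit locks, which is the "lock-free multitasking" angle of the talk.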

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and to feature your contributions in future posts.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# Weekly, and on Chris Alcock’s The Morning Brew.


The expanding Visual Studio family of products


The core of our vision is “Any Developer, Any App, Any Platform.” With our Visual Studio family of products, we are committed to bringing you the most powerful and productive development tools and services for any developer to build mobile-first and cloud-first apps across Windows, iOS, Android, and Linux.

Our existing Visual Studio family of products includes the most comprehensive set of development and application lifecycle tools on the market today: an industry-leading IDE; a lightweight code editor, Visual Studio Code; and on-premises and cloud-based team collaboration services with Visual Studio Team Foundation Server and Visual Studio Team Services. In addition, we offer a free developer program with Visual Studio Dev Essentials and a commercial program with Visual Studio Subscriptions.

Today, at the Connect(); 2016 event in New York City, we announced the release candidate of Visual Studio 2017 and Team Foundation Server 2017 RTM. I am also excited to see our Visual Studio family continue to grow with the introduction of Visual Studio for Mac and Visual Studio Mobile Center.

Visual Studio 2017 RC focuses on improved productivity, refined fundamentals (performance improvements across all areas in VS 2017), streamlined cloud development, and great mobile development. To learn more, read the details in John Montgomery’s post announcing Visual Studio 2017 RC. The download is available here.

Visual Studio for Mac is a new Visual Studio IDE. It’s built from the ground up for the Mac and focuses on full-stack, client-to-cloud native mobile development, using Xamarin for Visual Studio, ASP.NET Core, and Azure. To learn more, read Miguel de Icaza’s blog post introducing Visual Studio for Mac. The download is available here.

Visual Studio Mobile Center is “mission control for mobile apps.” It brings together multiple services commonly used by mobile developers into a single, integrated service that allows you to build, test, deploy, and monitor cloud attached apps in one place. To learn more, please read Nat Friedman’s blog post elaborating on Visual Studio Mobile Center.

Team Foundation Server 2017 RTM and Visual Studio Team Services bring general availability of Application Insights, the Package Management service, Code Search, and third-party commerce for on-premises extensions. To learn more, please read Brian Harry’s blog post. Get started here.

We hope you join us for Connect(); and enjoy 100+ on-demand videos throughout the day. If you miss the event, check back for recordings of the sessions as well as the live Q&A. Enjoy the Connect() event!

Julia Liuson, Corporate Vice President, Visual Studio

Julia is responsible for developer tools and services, including the programming languages and runtimes designed for a broad base of software developers and development teams, as well as for the Visual Studio, Visual Studio Code, and .NET Framework lines of products and services. Julia joined Microsoft in 1992, and has held a variety of technical and management positions, including General Manager for Visual Studio Business Applications, General Manager for Server and Tools in Shanghai, and development manager for Visual Basic.

Windows Application Driver for PC integrates with Appium


The most time consuming and expensive part of developing an app is often the testing phase. Manually testing an app on hundreds of different devices is impractical, and existing automation solutions run into a number of platform and tooling limitations. Appium is designed to simplify testing by supporting multiple platforms, and it’s our goal at Microsoft with Windows Application Driver (WinAppDriver) to enable you to use Appium to test Windows apps. The recent release of Appium v1.6 integrates WinAppDriver so developers can run tests targeting Windows 10 PC applications through Appium!

What is WinAppDriver?

WinAppDriver is a modern, standards-based UI test automation service that aligns with the Selenium WebDriver protocol. WinAppDriver enables a “write a test once, run anywhere” approach: no longer forced to choose a specific test language and runner, developers gain flexibility and no longer need to rewrite tests for each platform.

What is Selenium/Appium?

Selenium is the industry standard for automated UI testing of websites/browser applications. Selenium works off of the WebDriver protocol, which is an open API for browser automation. Realizing that this same protocol could be leveraged for mobile app UI testing, the Appium project was created, and it extended the WebDriver API to allow for app-specific automation endpoints. WinAppDriver was created in the spirit of the Selenium/Appium projects to conform to the industry standards for UI testing and bring those standards to the Universal Windows Platform.

How it works

[Diagram: test scripts and runners connecting to WinAppDriver through the Appium server]

With Appium’s integration of WinAppDriver, developers have full customization of their preferred testing language and test runner, as shown in the diagram above, and they can reuse their tests if their app also runs on iOS and Android. Each UWP developer might prefer a different test script language or test runner for their UI tests; because Appium uses the WebDriver protocol, developers have that flexibility when authoring tests.

What about CodedUI?

The current UI test automation solution for Windows app testing is CodedUI; however, CodedUI only works for apps running on the Windows platform. Developers who write cross-platform apps therefore have to write custom tests for each platform they target.

With Appium supporting multiple platforms like Android and iOS, Microsoft encourages customers to use Selenium and Appium for Functional UI testing.

How can I get started?

To download Appium with Windows 10 PC support, make sure you have Node version >=6.0 and npm version >=3.5. Then use the following steps:

  1. In your command prompt, run npm install -g appium
  2. Then, run the command appium from an elevated command prompt
    1. Make sure developer mode is on as well
  3. Choose a test runner (Visual Studio, IntelliJ, Sublime Text etc.) and a language to test in (C#, Ruby, Python, etc.)
  4. Create a test targeting a Windows application of your choice.
    1. Set the URL to target your Appium server, and set the appId capability to the app ID of the app you are testing.
    2. The platformName capability should be set to “Windows” and the deviceName capability set to “WindowsPC” in the test script.
  5. Run your test from the test runner targeting the Appium server URL
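Put together, the capabilities from steps 4 and 5 amount to a small JSON payload sent when a session is created. A sketch (the WinAppDriver samples use the app capability for the app ID; the Calculator ID below is the one used in those samples):

```json
{
  "platformName": "Windows",
  "deviceName": "WindowsPC",
  "app": "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App"
}
```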

Here is a screenshot of what the install process looks like from the command line:

[Screenshot: installing Appium with npm]

As part of the install, you should see that WinAppDriver is downloaded and successfully installed:

[Screenshot: WinAppDriver downloaded and installed]

Then, just run Appium from the command line:

[Screenshot: the Appium server running]

Now that the Appium server is running, you can run a test from your choice of test runner pointing to the Appium endpoint. In this example, we’ll use a test targeting the built-in Calculator app on Windows 10.

[Screenshot: a Calculator test script]

The key components (shown in the red boxes) are setting the URL to target the Appium server, as well as setting the app ID, platformName and deviceName as explained in the earlier instructions.
Once you run the test, you should see results in the test runner.

[Screenshot: test results in the test runner]

To see sample tests, check out the sample apps/tests on the WinAppDriver Github page or in the Appium samples repo. For more information about WinAppDriver + Appium, visit Appium’s website or their GitHub, or check out these videos talking about how Appium and UI test automation works.

  • Panel with Jonathan Lipps from SauceLabs
  • UI Test Automation for Browsers and Apps Using the WebDriver Standard

Test result storage improvements and impact on upgrading to Team Foundation Server 2017


With Team Foundation Server 2017 now available, TFS administrators will be planning to upgrade their existing TFS installations to this new version. As admins plan this activity, we wanted to discuss an important TFS database schema improvement that is rolling out with TFS 2017.

What is the change?
With TFS 2017, the test results generated from automated and manual testing will be stored in a more compact and efficient format, resulting in reduced storage footprint for TFS collection databases. With testing in Continuous Integration (CI) and Continuous Deployment (CD) workflows gathering momentum, this change will translate to meaningful SQL storage savings to customers whose automated test environments generate thousands of test results each day.

What is the impact of this change?
A new schema for test results in TFS 2017 means existing test result data must be migrated when you upgrade your TFS server to the new version. Given the scale of this data migration, you may encounter longer-than-normal upgrade times depending on the amount of test data in your TFS collections. For most small to medium sized TFS collections (under 100 GB), the impact will be modest: your upgrade may take a few hours longer. However, if the test result data in your TFS collection is more than 100 GB, you should plan for a longer-than-usual upgrade window.

How do we reduce the time taken to upgrade to TFS 2017?
Here are the guidelines that will help you reduce the TFS upgrade window time:

  • The less data there is to migrate, the faster the upgrade. We recommend cleaning up old test results in your system by configuring a test retention policy. Details about the retention policy are available in this blog: https://blogs.msdn.microsoft.com/visualstudioalm/2015/10/08/test-result-data-retention-with-team-foundation-server-2015 and the steps to configure retention are available in the documentation. Note that retention does not clean up test results instantaneously: the retention policy is designed to gradually free up space by deleting test results in batches, to prevent any performance impact on your TFS instances. As such, make sure you configure retention right away, and allow a buffer period of a few weeks with retention enabled before you upgrade.
  • If you cannot wait for the test retention policy to gradually clean up test results, you have a second option of cleaning up test results just before the upgrade. You need to install TFS (TFS 2017 or later) and then run the TFSConfig.exe tool to clean up test results. Note that you need to run the tool against TFS collections while they are offline, during the window after installing TFS but before starting the upgrade wizard. Most importantly, remember to configure the test retention policy even after cleaning up test results with the TFSConfig.exe tool, to prevent unbounded growth of test result data in the future.
  • It’s recommended to try out the upgrade on a pre-production environment before upgrading your production TFS instances. Given that pre-production environments typically have lower hardware capacity than production environments, the upgrade may take longer there than in production. Make sure you have a backup of your TFS collection databases before the production instances are upgraded.
  • If you still see prolonged upgrade times, reach out to us either via customer support or drop a mail to devops_tools@microsoft.com and we’ll be glad to help.

What kind of gains can we expect with the test result schema improvements?
The gain varies depending on your mix of automated vs. manual testing: the more test results you generate (that is, the higher your frequency of test execution), the higher the gains. We observed a 5x-8x reduction in storage used by test results with the new schema across the various TFS collections we tested. For the Visual Studio Team Services account used by the TFS development team itself, the test result footprint dropped from 80 GB to 10 GB after upgrading to the new schema. Owing to the reduced data footprint, we have also achieved modest performance gains with this new schema.

What is the impact for teams using Visual Studio Team Services?
For Visual Studio Team Services accounts, the data will be migrated to the new schema in a phased manner. The migration will be transparent to users without any interruptions or down time. Basically, you won’t notice any change in the way you run tests or analyze test results.

What are the improvements that make the new schema for test results more efficient?
The existing test result storage employed a flat schema design that was motivated by manual testing scenarios with Microsoft Test Manager. This design was extended to automated testing as we added capabilities like Lab BDT with XAML and on-demand automated testing with MTM. As adoption of automated testing in Build (Continuous Integration) and Release (Continuous Deployment) grew, we witnessed sizable growth in test result data. In many TFS collection databases where customers have invested heavily in automated testing, we observed that test result storage was, by far, the largest consumer of storage space. With this update we are optimizing the schema for automated testing by moving from a flat schema to a normalized one. The new schema has an automated test case reference object that stores all of the test metadata that does not change with each test result, such as test method name, container, priority, and owner. The test results table contains only the fields that change with each test result, such as outcome, start date time, duration, and machine name, and points to the automated test case reference for metadata. With these redesigned tables, we have significantly reduced data duplication and eliminated the numerous indexes that existed in the flat schema, making the new schema 5x-8x more efficient in terms of storage space.

If you have any questions or need any help, please drop us a mail at devops_tools@microsoft.com.

Thank you,
Manoj Bableshwar – Visual Studio Testing Tools team

Announcing Entity Framework Core 1.1


Entity Framework Core (EF Core) is a lightweight, extensible, and cross-platform version of Entity Framework. Today we are making Entity Framework Core 1.1 available.

EF Core follows the same release cycle as .NET Core: continuous improvements every two months, and new features released every six months. This is the first feature release since 1.0.

Be sure to read the Upgrading to 1.1 section, at the end of this post, for important information about upgrading to the 1.1 release.

What’s in 1.1

The 1.1 release is focused on addressing issues that prevent folks from adopting EF Core. This includes fixing bugs and adding some of the critical features that are not yet implemented in EF Core. While we’ve made some good progress on this, we do want to acknowledge that EF Core still isn’t going to be the right choice for everyone. For more detailed info on what is implemented, see our EF Core and EF6.x comparison.

Bug fixes

There are over 100 bug fixes included in the 1.1 release. See the EF Core 1.1 release notes for details.

Improved LINQ translation

In the 1.1 release we have made good progress improving the EF Core LINQ provider. This enables more queries to successfully execute, with more logic being evaluated in the database (rather than in memory).

DbSet.Find

DbSet.Find(…) is an API that is present in EF6.x and has been one of the more common requests for EF Core. It allows you to easily query for an entity based on its primary key value. If the entity is already loaded into the context, then it is returned without querying the database.

using (var db = new BloggingContext())
{
    var blog = db.Blogs.Find(1);
}

Mapping to fields

The new HasField(…) method in the fluent API allows you to configure a backing field for a property. This can be useful for read-only properties, or data that has Get/Set methods rather than a property. For detailed guidance, see the Backing Fields article in our documentation. You can also see a demo in our Entity Framework Connect(); // 2016 video.

public class BloggingContext : DbContext
{
    ...

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>()
            .Property(b => b.Url)
            .HasField("_validatedUrl");
    }
}

Explicit Loading

Explicit loading allows you to load the contents of a navigation property for an entity that is tracked by the context. For more information, see the Loading Related Data article in our documentation.

using (var db = new BloggingContext())
{
    var blog = db.Blogs.Find(1);

    db.Entry(blog).Collection(b => b.Posts).Load();
    db.Entry(blog).Reference(b => b.Author).Load();
}

Additional EntityEntry APIs from EF6.x

We’ve added the remaining EntityEntry APIs that were available in EF6.x. This includes Reload(), GetModifiedProperties(), GetDatabaseValues() etc. These APIs are most commonly accessed by calling the DbContext.Entry(object entity) method.

Connection resiliency

Connection resiliency automatically retries failed database commands. The SQL Server provider includes an execution strategy that is specifically tailored to SQL Server (including SQL Azure). It is aware of the exception types that can be retried and has sensible defaults for maximum retries, delay between retries, etc. For more information, see the Connection Resiliency article in our documentation.

An execution strategy is specified when configuring the options for your context. This is typically in the OnConfiguring method of your derived context, or in Startup.cs for an ASP.NET Core application.

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer(
            "connection string",
            options => options.EnableRetryOnFailure());
}

SQL Server memory-optimized table support

Memory-Optimized Tables are a feature of SQL Server. You can now specify that the table an entity is mapped to is memory-optimized. For more information, see the Memory-Optimized Tables article in our documentation.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .ForSqlServerIsMemoryOptimized();
}

Simplified service replacement

In EF Core 1.0 it is possible to replace internal services that EF uses, but this is complicated and requires you to take control of the dependency injection container that EF uses. In 1.1 we have made this much simpler, with a ReplaceService(…) method that can be used when configuring the context. This is typically in the OnConfiguring(…) method of your derived context, or in Startup.cs for an ASP.NET Core application.

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer("connection string");

    // Type arguments were stripped from the original post; shown here replacing
    // the SQL Server type mapper with a hypothetical custom implementation.
    optionsBuilder.ReplaceService<SqlServerTypeMapper, MyTypeMapper>();
}

Upgrading to 1.1

If you are using one of the database providers shipped by the EF Team (SQL Server, SQLite, and InMemory), then just upgrade your provider package.

PM> Update-Package Microsoft.EntityFrameworkCore.SqlServer

If you are using a third party database provider, then check to see if they have released an update that depends on 1.1.0. If they have, then just upgrade to the new version. If not, then you should be able to upgrade just the EF Core relational components that they depend on. Most of the new features in 1.1 do not require changes to the database provider. We’ve done some testing to ensure database providers that depend on 1.0 continue to work with 1.1, but this testing has not been exhaustive.

PM> Update-Package Microsoft.EntityFrameworkCore.Relational

Upgrading tooling packages

If you are using the tooling package, then be sure to upgrade that too. Note that tooling is versioned as 1.0.0-preview4 because tooling has not reached its initial stable release (this is true of tooling across .NET Core, ASP.NET Core, and EF Core).

PM> Update-Package Microsoft.EntityFrameworkCore.Tools -Pre

If you are using ASP.NET Core and the dotnet ef commands, then you need to update the tools section of project.json to use the new Microsoft.EntityFrameworkCore.Tools.DotNet package in place of the Microsoft.EntityFrameworkCore.Tools package from 1.0. As the design of .NET CLI Tools has progressed it has become necessary for us to separate the dotnet ef tools into this separate package.

"tools": {
  "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.1.0-preview4"
},

Announcing .NET Core 1.1


We are excited to announce the release of .NET Core 1.1 RTM. You can start creating .NET Core 1.1 apps, today, in Visual Studio 2015, Visual Studio 2017 RC, Visual Studio Code, and Visual Studio for Mac.

We used the 1.1 release to achieve the following improvements:

  • .NET Core: Add distros and improve performance.
  • ASP.NET Core: Improve Kestrel, Azure support and productivity.
  • EF Core: Azure and SQL 2016 support.

News flash: ASP.NET Core 1.1 with Kestrel was ranked as the fastest mainstream fullstack web framework in the TechEmpower plaintext benchmark.

News flash: Google Cloud is joining the .NET Foundation Technical Steering Group. Welcome, Google!

You can see all the .NET Core changes in detail in the .NET Core 1.1 release notes. It’s a small delta on the .NET Core 1.1 Preview 1 release that we shipped 3 weeks ago.

Distributions

Support for the following distributions was added:

  • Linux Mint 18
  • OpenSUSE 42.1
  • macOS 10.12 (also added to .NET Core 1.0)
  • Windows Server 2016 (also added to .NET Core 1.0)

You can see the full list of supported distributions in the .NET Core 1.1 release notes.

Documentation

.NET Core documentation has been updated for the release and will continue to be updated. We are also in the process of making visual and content updates to the .NET Core docs to make the docs easier and more compelling to use.

The ASP.NET Core, Entity Framework, C#, and VB docs were moved to docs.microsoft.com as part of this release. F# documentation was added a few months ago.

Documentation on docs.microsoft.com is open source. You can help us make it better by filing issues and making contributions on GitHub. The best places to start are dotnet/docs and aspnet/docs.

Performance

We were recently informed by the fine folks at TechEmpower that ASP.NET Core 1.1 with Kestrel was ranked as the fastest mainstream fullstack web framework in the TechEmpower plaintext benchmark. That’s a great result, and the result of significant engineering effort.

We adopted a performance optimization for the CoreCLR runtime called Profile-Guided Optimization (PGO), for the .NET Core 1.1 Windows version. We’ve used this technique with the .NET Framework for many years, but had not yet used it for .NET Core. This improvement was not included in the earlier .NET Core 1.1 Preview 1 release.

PGO optimizes C++ compiler-generated binaries with information it records from apps it observes in our lab. We call this process “training”. It’s about as exciting as 6AM runs in the dark during the Winter. PGO records info about which codepaths are used in a binary and in what order. For this release, we used a simple “Hello World” app for training.

We saw a 15% improvement with the ASP.NET MusicStore app running on a PGO-optimized CoreCLR in our lab, and believe that these improvements will be representative of other Web applications. We hope to see greater improvements in the future as we increase the pool of apps we train with.

For Linux and macOS, we compile CoreCLR with Clang/LLVM. We intend to use the Clang version of PGO in the next release. Very preliminary scouting of Clang PGO suggests that we will see similar benefits.

APIs

There are 1380 new APIs in .NET Core 1.1. Many of the new APIs were added to support the product itself, including reading portable PDBs. .NET Core 1.1 supports .NET Standard 1.6.

.NET Standard 2.0 support is coming in an upcoming release (in 2017). It is not part of .NET Core 1.1.

Using .NET Core 1.1

You can start by installing .NET Core 1.1. You can either install it globally using the .NET Core 1.1 installer or your operating system’s package manager, or try it in an isolated (and easily removable) environment by downloading .NET Core as a zip.

Safe side-by-side install

You can safely globally install .NET Core 1.1 on a machine that already has .NET Core 1.0.

The dotnet new command creates new projects that reference the latest runtime installed on the machine. This may not be desired; if not, you can hand-edit the versions in the resulting project.json to earlier version numbers. Based on feedback, we will be changing this behavior in the new version of the tools, at the same time we release the final version of Visual Studio 2017. If you do not use dotnet new to create new projects, but rely on Visual Studio, then you are not affected.

Try it out

You can try .NET Core out with the command line tools, using these commands in your command prompt or terminal.

dotnet new
dotnet restore
dotnet run

You can also try out .NET Core 1.1 with a dotnet-bot sample we created for using .NET Core with Docker (although you don’t have to use Docker).

Upgrading Existing .NET Core 1.0 Projects

You can upgrade existing .NET Core 1.0 projects to .NET Core 1.1. I will show you the new project.json file that the updated dotnet new now produces. It’s the best way to see the new version values that you need to copy/paste into your existing project.json files. There are no automated tools to upgrade projects to later .NET Core versions.

The default .NET Core 1.1 project.json file follows:
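A minimal project.json along these lines (a sketch; your dependencies and build options will vary):

```json
{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    }
  },
  "frameworks": {
    "netcoreapp1.1": {
      "imports": "dnxcore50"
    }
  }
}
```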

This project.json file is very similar to what your .NET Core 1.0 project.json looks like, with the exception of the netcoreapp1.1 and 1.1.0 target framework and meta-package version strings, respectively.

You can use the following substitutions to help you update project.json files that you want to move temporarily or permanently to .NET Core 1.1.

  • Update the netcoreapp1.0 target framework to netcoreapp1.1.
  • Update the Microsoft.NETCore.App package version from 1.0.x (for example, 1.0.0 or 1.0.1) to 1.1.0.

Upgrading .NET Standard Library Projects

There is no need to update .NET Standard Library projects.

We did publish a NETStandard.Library 1.6.1 metapackage; however, there is no benefit in referencing it when producing libraries. The updated package is provided as a dependency of the updated Microsoft.NETCore.App 1.1 metapackage.

Using .NET Core 1.1 Docker Images

You can use .NET Core 1.1 with Docker. You can find updated images at microsoft/dotnet.

The latest tag has been updated to point to the .NET Core 1.1 SDK. This is a departure from our earlier plan, as discussed in the 1.1 Preview 1 post. We looked at other platforms that have Current and LTS and saw that latest does indeed point to the latest version. Makes sense.

There are two new Runtime tags for .NET Core 1.1:

  • Linux: 1.1.0-runtime
  • Windows: 1.1.0-runtime-nanoserver

There are two new SDK tags for .NET Core 1.1:

  • Preview 2-based SDK, using project.json: 1.1.0-sdk-projectjson
  • Preview 3-based SDK, using CSProj: 1.1.0-sdk-msbuild

You can try .NET Core 1.1 with the dotnetapp-current sample in the .NET Core Docker Samples repository. The other samples can easily be modified to depend on the .NET Core 1.1 images by updating both the project.json and Dockerfile files with the appropriate version strings (all of which are provided above).

Current Release

In the earlier .NET Core 1.1 blog post, I described how we have adopted the industry practice of differentiated releases, which we’ve called “Long-term Support (LTS)” and “Current”. .NET Core 1.1 is the first Current release. Once a given Current release has shipped, we expect very few updates, hopefully only security updates.

We recommend that most developers adopt LTS releases. It’s also the default experience we’ll include in Visual Studio. We do hope that some of you adopt Current releases to give us feedback, as well. It’s hard to quantify, but we think an 80/20 split between LTS and Current across the entire .NET Core developer base would be about right.

Closing

Please try the new .NET Core release and give us feedback. There are a lot of key improvements across .NET Core, ASP.NET Core, and EF Core that will make your apps better and faster. As the first Current release, it gets features to you faster, provided you are happy with updating .NET Core more quickly than with the multi-year supported LTS releases.

To recap, the biggest changes are:

  • Performance improvements, enough to make a very positive first entry on the TechEmpower benchmarks.
  • Addition of four OS distros.
  • Tens of new features and hundreds of bug fixes.
  • Updated documentation.

Thanks to everyone who adopted .NET Core 1.0 and .NET Core 1.1 Preview 1 and gave us feedback. We appreciate all of the contributions and engagement! Please tell us what you think of the latest release.

You can start creating .NET Core 1.1 apps, today, in Visual Studio 2015, Visual Studio 2017 RC, Visual Studio Code, and Visual Studio for Mac.

Package Management is generally available: NuGet, npm, and more

Today, I’m proud to announce that Package Management is generally available for Team Services and TFS 2017! If you haven’t already, install it from the Visual Studio Marketplace.

Best-in-class support for NuGet 3

NuGet support in Package Management enables continuous delivery workflows by hosting your packages and making them available to your team, your builds, and your releases. With best-in-class support for the latest NuGet 3.x clients, Package Management is an easy addition to your .NET ecosystem. If you’re still hosting a private copy of NuGet.Server or putting your packages on a file share, Package Management can remove that burden and even help you migrate.
To get started with NuGet in Package Management, check out the docs.
NuGet packages in Package Management

npm

Package Management was never just about NuGet. Accordingly, the team has been hard at work over the last few months adding support for npm packages. If you’re a developer working with node.js, JavaScript, or any of its variants, you can now use Team Services to host private npm packages right alongside your NuGet packages.
npm packages in Package Management
npm is available to every user with a Package Management license. To enable it, simply install Package Management from the Marketplace, if you haven’t already, then check out the get started docs.
npm support will also be available in an update to TFS 2017. Keep an eye on the features timeline for the latest updates.

GA updates: pricing, regions, and more

If you’ve been using Package Management during the preview period, you’ll now need to purchase a license in the Marketplace to continue using it. Your account has automatically been converted to a 60-day trial to allow ample time to do so. Look for the notice bar in the Package Management hub or go directly to the Users hub in your account to buy licenses.
The pricing for Package Management is:
  • First 5 users: Free
  • Users 6 through 100: $4 each
  • Users 101 through 1000: $1.50 each
  • Users 1001 and above: $0.50 each

Although the first 5 users are free, licenses for these users must still be acquired through the Marketplace.

Package Management is now also available in the India South and Brazil South regions.

What’s next?

With the launch of Package Management in TFS 2017, the team is now fully focused on adding additional value to the extension. Over the next year, we’ll be investing in a few key areas:
  • Package lifecycle: we want Package Management to serve not just as a repository for bits, but also as a service that helps you manage the production and release of your components. Accordingly, we’ll continue to invest in features that more closely integrate packages with Team Build and with Release Management, including more investments in versioning and more metadata about how your packages were produced.
  • Dependency management: packages come from everywhere: NuGet.org, teams across your enterprise, and teams in your group. In a world where there’s always pressure to release faster and innovate more, it makes sense to re-use as much code as possible. To enable that re-use, we’ll invest in tooling that helps you understand where your dependencies are coming from, how they’re licensed, if they’re secure, and more.
  • Refreshed experience: when we launched Package Management last November, we shipped a simple UX that worked well for the few scenarios we supported. However, as we expand the service with these new investments, we’ll be transitioning to an expanded UX that more closely matches the rest of Team Services, provides canvases for partners to extend Package Management with their own data and functionality, and gives us room to grow.
  • Maven/Ivy: as the rest of the product builds ever-better support for the Java ecosystem, it follows that Package Management should serve as a repository for the packages Java developers use most. So, we’ll be building support for Maven packages into Package Management feeds.

Announcing Code Search on Team Foundation Server 2017


Code Search is the most downloaded Team Services extension in the Marketplace! And it is now available on Team Foundation Server 2017!

Code Search provides fast, flexible, and accurate search across your code in TFS. As your code base expands and is divided across multiple projects and repositories, finding what you need becomes increasingly difficult. To maximize cross-team collaboration and code sharing, Code Search can quickly and efficiently locate relevant information across all your projects in a collection.

Read more about the capabilities of Code Search here.

Understand the hardware requirements and software dependencies for Code Search on Team Foundation Server 2017 here.

Configuring your TFS 2017 server for Code Search

1. You can configure Code Search as part of your production upgrade via the TFS Server Configuration wizard:

configuresearchdetails

2. Or you can complete your production upgrade first and subsequently configure Code Search through the dedicated Search Configuration Wizard:

searchwizard

3. To try out Code Search, you can use a pre-production TFS instance and carry out a pre-production upgrade. In this case, configure Code Search after the pre-production upgrade is complete. See step 2 above.

4. You can even configure Code Search on a separate server dedicated to Search. In fact, we recommend this approach if you have more than 250 users or if average CPU utilization on your TFS server is higher than 50%.

remoteinstall

 

Got feedback?

How can we make Code Search better for you? Here is how you can get in touch with us

 

Thanks,
Search team


Announcing Public Preview for Work Item Search


Today, we are excited to announce the public preview of Work Item Search in Visual Studio Team Services. Work Item Search provides fast and flexible search across all your work items.

With Work Item Search you can quickly and easily find relevant work items by searching across all work item fields over all projects in an account. You can perform full text searches across all fields to efficiently locate relevant work items. Use in-line search filters, on any work item field, to quickly narrow down to a list of work items.

Enabling Work Item Search for your Team Services account

Work Item Search is available as a free extension on the Visual Studio Team Services Marketplace. Click the install button on the extension description page and follow the instructions displayed to enable the feature for your account.
Note that you need to be an account admin to install the feature. If you are not, the install experience will allow you to request that your account admin install it. Work Item Search can be added to any Team Services account for free. By installing this extension through the Visual Studio Marketplace, any user with access to work items can take advantage of Work Item Search.

You can start searching for work items using the search box in the top right corner. Once in the search results page, you can easily switch between Code and Work Item Search.
workitem-search

Search across one or more projects

Work Item Search enables you to search across all projects, so you can focus on the results that matter most to you. You can scope search and drill down into an area path of choice.
search-across-all-projects

Full text search across all fields

You can easily search across all work item fields, including custom fields, which enables more natural searches. The snippet view indicates where matches were found.

Now you need not specify a target work item field to search against. Type the terms you recall and Work Item Search will match them against every work item field, including title, description, tags, repro steps, etc. Matching terms across all work item fields enables more natural searches.
Search across all fields

Quick Filters

Quick inline search filters let you refine work items in seconds. The dropdown list of suggestions helps complete your search faster. You can filter work items by specific criteria on any work item field. For example, a search such as “AssignedTo: Chris WorkItemType: Bug State: Active” finds all active bugs assigned to a user named Chris.
Quick Filters

Rich integration with work item tracking

The Work Item Search interface integrates with familiar controls in the Work hub, giving you the ability to view, edit, comment, share and much more.
integration-with-work-item-tracking

Got feedback?

How can we make Work Item Search better for you? Here is how you can get in touch with us

 
Thanks,
Search team

WinAppDriver - Test any app with Appium's Selenium-like tests on Windows

WinAppDriver - Appium testing Windows Apps

I've found blog posts on my site where I'm using the Selenium Web Testing Framework as far back as 2007! Today there's Selenium Drivers for every web browser including Microsoft Edge. You can write Selenium tests in nearly any language these days including Ruby, Python, Java, and C#.

I'm a big Selenium fan. I like using it with systems like BrowserStack to automate across many different browsers on many operating systems.

"Appium" is a great Selenium-like testing framework that implements the "WebDriver" protocol - formerly JsonWireProtocol.

WebDriver is a remote control interface that enables introspection and control of user agents. It provides a platform- and language-neutral wire protocol as a way for out-of-process programs to remotely instruct the behavior of web browsers.

From the Appium website, "Appium is 'cross-platform': it allows you to write tests against multiple platforms (iOS, Android, Windows), using the same API. This enables code reuse between iOS, Android, and Windows testsuites"

Appium is a webserver that exposes a REST API. The WinAppDriver enables Appium by using new APIs that were added in Windows 10 Anniversary Edition that allow you to test any Windows app. That means ANY Windows App. Win32, VB6, WPF, UWP, anything. Not only can you put any app in the Windows Store, you can do full and complete UI testing of those apps with a tool that is already familiar to Web Developers like myself.

Your preferred language, your preferred test runner, the Appium Server, and your app

You can write tests in C# and run them from Visual Studio's Test Runner. You can press any button and basically totally control your apps.

// Launch the calculator app
DesiredCapabilities appCapabilities = new DesiredCapabilities();
appCapabilities.SetCapability("app", "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App");
CalculatorSession = new RemoteWebDriver(new Uri(WindowsApplicationDriverUrl), appCapabilities);
Assert.IsNotNull(CalculatorSession);
CalculatorSession.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(2));
// Make sure we're in standard mode
CalculatorSession.FindElementByXPath("//Button[starts-with(@Name, \"Menu\")]").Click();
OriginalCalculatorMode = CalculatorSession.FindElementByXPath("//List[@AutomationId=\"FlyoutNav\"]//ListItem[@IsSelected=\"True\"]").Text;
CalculatorSession.FindElementByXPath("//ListItem[@Name=\"Standard Calculator\"]").Click();

It's surprisingly easy once you get started.

public void Addition()
{
CalculatorSession.FindElementByName("One").Click();
CalculatorSession.FindElementByName("Plus").Click();
CalculatorSession.FindElementByName("Seven").Click();
CalculatorSession.FindElementByName("Equals").Click();
Assert.AreEqual("Display is 8 ", CalculatorResult.Text);
}

You can automate any part of Windows, even the Start Menu or Cortana.

var searchBox = CortanaSession.FindElementByAccessibilityId("SearchTextBox");
Assert.IsNotNull(searchBox);
searchBox.SendKeys("What is eight times eleven");

var bingPane = CortanaSession.FindElementByName("Bing");
Assert.IsNotNull(bingPane);

var bingResult = bingPane.FindElementByName("88");
Assert.IsNotNull(bingResult);

If you use "AccessibilityIds" and refer to native controls in a non-locale-specific way, you can even reuse test code across platforms. For example, you could write sign-in code for Windows, iOS, your web app, and even a VB6 Win32 app. ;)

Testing a VB6 app with WinAppDriver

Appium and WinAppDriver are a nice alternative to "CodedUI Tests." CodedUI tests are great, but they're just for Windows apps. If you're a web developer or you're writing cross-platform or mobile apps, you should check it out.


Sponsor: Help your team write better, shareable SQL faster! Discover how your whole team can write better, shareable SQL faster with a free trial of SQL Prompt. Write, refactor and share SQL effortlessly, try it now.



© 2016 Scott Hanselman. All rights reserved.
     

Announcing .NET Core Tools MSBuild “alpha”


We are excited to announce the first “alpha” release of the new MSBuild-based .NET Core Tools. You can try out the new .NET Core Tools in Visual Studio 2017 RC, Visual Studio for Mac, Visual Studio Code and at the commandline. The new Tools release can be used with both the .NET Core 1.0 and .NET Core 1.1 runtimes.

When we started building .NET Core and ASP.NET Core, it was important to have a project system that worked across Windows, Mac and Linux and worked in editors other than Visual Studio. The new project.json project format was created to facilitate this. Feedback from customers was that they loved the new project.json model, but they wanted their projects to work with the existing .NET code they already had. To enable this, we are making .NET Core .csproj/MSBuild based so it can interoperate with existing .NET projects, and we are taking the best features of project.json and moving them into .csproj/MSBuild.

There are now four experiences that you can take advantage of for .NET Core development, across Windows, macOS and Linux: Visual Studio 2017 RC, Visual Studio for Mac, Visual Studio Code and the .NET Core CLI tools.

Yes! There is a new member of the Visual Studio family, dedicated to the Mac. Visual Studio for Mac supports Xamarin and .NET Core projects. Visual Studio for Mac is currently in preview. You can read more about how you can use .NET Core in Visual Studio for Mac.

You can download the new MSBuild-based .NET Core Tools preview and learn more about the new experiences in .NET Core Documentation.

Overview

If you’ve been following along, you’ll know that the new Preview 3 release includes support for the MSBuild build system and the csproj project format. We adopted MSBuild for .NET Core for the following reasons:

  • One .NET tools ecosystem— MSBuild is a key component of the .NET tools ecosystem. Tools, scripts and VS extensions that target MSBuild should now extend to working with .NET Core.
  • Project to project references– MSBuild enables project to project references between .NET projects. All other .NET projects use MSBuild, so switching to MSBuild enables you to reference Portable Class Libraries (PCL) from .NET Core projects and .NET Standard libraries from .NET Framework projects, for example.
  • Proven scalability– MSBuild has been proven to be capable of building large projects. As .NET Core adoption increases, it is important to have a build system we can all count on. Updates to MSBuild will improve the experience for all project types, not just .NET Core.

The transition from project.json to csproj is an important one, and one where we have received a lot of feedback. Let’s start with what’s not changing:

  • One project file– Your project file contains dependency and target framework information, all in one file. No source files are listed by default.
  • Targets and dependencies— .NET Core target frameworks and metapackage dependencies remain the same and are declared in a similar way in the new csproj format.
  • .NET Core CLI Tools– The dotnet tool continues to expose the same commands, such as dotnet build and dotnet run.
  • .NET Core Templates– You can continue to rely on dotnet new for templates (for example, dotnet new -t library).
  • Supports multiple .NET Core versions— The new tools can be used to target .NET Core 1.0 and 1.1. The tools themselves run on .NET Core 1.0 by default.

There are many of you that have already adopted .NET Core with the existing project.json project format and build system. Us, too! We built a migration tool that migrates project.json project files to csproj. We’ve been using those on our own projects with good success. The migration tool is integrated into Visual Studio and Visual Studio for Mac. It is also available at the command line, with dotnet migrate. We will continue to improve the migration tool based on feedback to ensure that it’s ready to run at scale by the final release.

Now that we’ve moved .NET Core to use MSBuild and the csproj project format, there is an opportunity to share improvements that we’re making with other projects types. In particular, we intend to standardize on package reference within the csproj format for other .NET project types.

Let’s look at the .NET Core support for each of the four supported experiences.

Visual Studio 2017 RC

Visual Studio 2017 RC includes support for the new .NET Core Tools, as a Preview workload. You will notice the following set of improvements over the experience in Visual Studio 2015.

  • Project to project references now work.
  • Project and NuGet references are declared similarly, both in csproj.
  • csproj project files can be manually edited while the project is open.

Installation

You can install Visual Studio 2017 from the Visual Studio site.

You can install the .NET Core Tools in Visual Studio 2017 RC by selecting the “.NET Core and Docker Tools (Preview)” workload, under the “Web and Cloud” workload as you can see below. The overall installation process for Visual Studio has changed! You can read more about that in the Visual Studio 2017 RC blog post.

.NET Core workload

Creating new Projects

The .NET Core project templates are available under the “.NET Core” project node in Visual Studio. You will see a familiar set of projects.

.NET Core templates

Project to project references

You can now reference .NET Standard projects from .NET Framework, Xamarin or UWP projects. You can see two app projects relying on a .NET Standard Library in the image below.

project to project references

Editing CSProj files

You can now edit csproj files while the project is open, with IntelliSense. It’s not something we expect most of you to do every day, but it is still a major improvement. It also does a good job of showing the similarity between NuGet and project references.

Editing csproj files

Dynamic Project system

The new csproj format adds all source files by default. You do not need to list each .cs file. You can see this in action by adding a .cs file to your project directory from outside Visual Studio. You should see the .cs file added to Solution Explorer within 1s.

A more minimal project file has a lot of benefits, including readability. It also helps with source control by reducing a whole category of changes and the potential merge conflicts that have historically come with it.

Opening and upgrading project.json Projects

You will be prompted to upgrade project.json-based xproj projects to csproj when you open them in Visual Studio 2017. You can see that experience below. The migration is one-way. There is no supported way to go back to project.json other than via source control or backups.

.NET Core migration

Visual Studio for Mac

Visual Studio for Mac is a new member of the Visual Studio family, focused on cross-platform mobile and cloud development on the Mac. It includes support for .NET Core and Xamarin projects. In fact, Visual Studio for Mac is an evolution of Xamarin Studio.

Visual Studio for Mac is intended to provide a very similar .NET Core development experience as what was described above for Visual Studio 2017 RC. We’ll continue to improve both experiences together as we get closer to shipping .NET Core Tools, Visual Studio for Mac and Visual Studio 2017 next year.

Installation

You can install Visual Studio for Mac from the Visual Studio site. Support for .NET Core and ASP.NET Core projects is included.

Creating new Projects

The .NET Core project templates are available under the “.NET Core” project node in Visual Studio. You will see a familiar set of projects.

.NET Core templates

You can see a new ASP.NET Core project, below.

ASP.NET Core New Project

Other experiences

Visual Studio for Mac does not yet support xproj migration. That experience will be added before release.

Visual Studio for Mac has existing support for editing csproj files while the project is loaded. You can open the csproj file by right-clicking on the project file, selecting Tools and then Edit File.

Visual Studio Code

The Visual Studio Code C# extension has also been updated to support the new .NET Core Tools release. At present, the extension has been updated to support building and debugging your projects. The extension commands (in the command palette) have not yet been updated.

Installation

You can install VS Code from visualstudio.com. You can add .NET Core support by installing the C# extension. You can install it via the Extensions tab or wait to be prompted when you open a C# file.

Debugging a .NET Core Project

You can build and debug csproj .NET Core projects.

VS Code Debugging

.NET Core CLI Tools

The .NET Core CLI Tools have also been updated. They are now built on top of MSBuild (just like Visual Studio) and expect csproj project files. All of the logic that once processed project.json files has been removed. The CLI tools are now much simpler (from an implementation perspective), relying heavily on MSBuild, but no less useful or needed.

When we started the project to update the CLI tools, we had to consider the ongoing purpose of the CLI tools, particularly since MSBuild is itself a commandline tool with its own command line syntax, ecosystem and history. We came to the conclusion that it was important to provide a set of simple and intuitive tools that made adopting .NET Core (and other .NET platforms) easy and provided a uniform interface for both MSBuild and non-MSBuild tools. This vision will become more valuable as we focus more on .NET Core tools extensibility in future releases.

Installing

You can install the new .NET Core Tools by installing the Preview 3 .NET Core SDK. The SDK comes with .NET Core 1.0. You can also use it with .NET Core 1.1, which you can install separately.

We recommend installing the zips rather than the MSI/PKG installers if you are doing project.json-based development outside of VS.

Side by side install

By installing the new SDK, you will update the default behavior of the dotnet command. It will use MSBuild and process csproj projects instead of project.json. Similarly, dotnet new will create a csproj project file.

In order to continue using the earlier project.json-based tools on a per-project basis, create a global.json file in your project directory and add the “sdk” property to it. The following example shows a global.json that constrains dotnet to use the project.json-based tools:
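The example referenced here did not survive extraction; a minimal global.json of the intended shape follows (the preview SDK version string is illustrative — use the version of the project.json-based SDK you actually have installed):

```json
{
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}
```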

Templates

You can use the dotnet new command for creating a new project. It continues to support multiple project types with the -t argument (for example, dotnet new -t lib). The complete list of supported templates follows:

  • console
  • web
  • lib
  • xunittest

We intend to extend the set of templates in the future and make it easier for the community to extend the set of templates. In fact, we’d like to enable acquisition of full samples via dotnet new.

Upgrading project.json projects

You can use the dotnet migrate command to migrate a project.json project to the csproj format. This command will also migrate any project-to-project references in your project.json file automatically. You can check the dotnet migrate command documentation for more information.

You can see an example below of what a default project file looks like after migration from project.json to csproj. We are continuing to look for opportunities to simplify and reduce the size of the csproj format.
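The migrated project file itself did not survive extraction; a csproj of roughly the shape the alpha tooling produced looks like the following (exact Sdk wiring and package versions varied across alpha builds, so treat this as a sketch):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NETCore.App">
      <Version>1.0.1</Version>
    </PackageReference>
  </ItemGroup>
</Project>
```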

Existing .NET csproj files, for other project types, include GUIDs and file references. Those are (intentionally) missing from .NET Core csproj project files.

Adding project references

Adding a project reference in csproj is done using a <ProjectReference> element within an <ItemGroup> element. You can see an example below.

<ItemGroup>
  <ProjectReference Include="..\ClassLibrary1\ClassLibrary1.csproj" />
</ItemGroup>

After this operation, you still need to call dotnet restore to generate the “assets file” (the replacement for the project.lock.json file).

Adding NuGet references

We made another improvement to the overall csproj experience by integrating NuGet package information into the csproj file. This is done through a new <PackageReference> element. You can see an example below.

<PackageReference Include="Newtonsoft.Json">
  <Version>9.0.1</Version>
</PackageReference>

Upgrading your project to use .NET Core 1.1

The dotnet new command produces projects that depend on .NET Core 1.0. You can update your project file to depend on .NET Core 1.1 instead, as you can see in the example below.

The project file has been updated in two places:

  • The target framework has been updated from netcoreapp1.0 to netcoreapp1.1
  • The Microsoft.NETCore.App version has been updated from ‘1.0.1’ to ‘1.1.0’
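A fragment showing the two updated values (surrounding project file elements elided; the 1.1.0 package version assumes the final .NET Core 1.1 release):

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp1.1</TargetFramework>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="Microsoft.NETCore.App">
    <Version>1.1.0</Version>
  </PackageReference>
</ItemGroup>
```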

.NET Core Tooling for Production Apps

We shipped .NET Core 1.0 and the project.json-based .NET Core tools back in June. Many of you are using that release every day on your desktop to build your app and in production on your server/cloud. We shipped .NET Core 1.1 today, and you can start using it the same way.

Today’s .NET Core Tools release is considered alpha and is not recommended for use in production. You are recommended to use the existing project.json-based .NET Core tools (this is the preview 2 version) for production use, including with Visual Studio 2015.

When we ship the new msbuild-based .NET Core Tools, you will be able to open your projects in Visual Studio 2017 and Visual Studio for Mac and go through a quick migration.

For now, we recommend that you try out today’s Tools alpha release and the .NET Core Tools Preview workload in Visual Studio 2017 RC with sample projects or projects that are under source control.

Closing

Please try the new .NET Core Tools release and give us feedback. You can try out the new csproj/MSBuild support in Visual Studio 2017 RC, Visual Studio for Mac, Visual Studio Code and at the command line. You’ve got great options for .NET Core development on Windows, macOS and Linux.

To recap, the biggest changes are:

  • .NET Core csproj support is now available as an alpha release.
  • .NET Core is integrated into Visual Studio 2017 RC and Visual Studio for Mac. It can be added to Visual Studio Code by the C# extension.
  • .NET Core tools are now based on the same technology as other .NET projects.

Thanks to everyone who has given us feedback about both project.json and csproj. Please keep it coming and please do try the new release.

Announcing general availability of Release Management


Today we are excited to announce the general availability of Release Management in Visual Studio Team Services. Release Management is available for Team Foundation Server 2017 as well.

Since we announced the Public Preview of Release Management, we have been adding new features continuously and the service has been used by thousands of customers whose valuable feedback has helped us improve the product.

Release Management is an essential element of DevOps that helps your team continuously deliver software to your customers at a faster pace and with high quality. Using Release Management, you can automate the deployment and testing of your application to different environments like dev, test, staging and production. You can use it to deploy to any app platform and target on-premises or cloud environments.

Continuous delivery Automation flow

Release Management works cross-platform and supports different application types, from Java to ASP.NET and Node.js. Release Management has also been designed to integrate with different ALM tools and to let you customize the release process. For example, you can integrate Release Management with Jenkins and TeamCity builds, or you can use Node.js sources from GitHub as artifacts to deploy directly. You can also customize deployments by using the automation tasks that are available out of the box, or write a custom automation task/extension to meet your requirements.

Automated deployments

You can design and automate release pipelines across your environments to target any platform and any application by using Visual Studio Release Management. You can trigger a release as soon as the build is available, or even schedule it. An automated pipeline helps you get to market faster and respond with greater agility to customer feedback.

release-summary

Manual or automated gates for approval workflows

You can easily configure deployments using pre- or post-deployment approvals: completely automated for dev/test environments, and manual approvals for production environments. Automatic notifications ensure collaboration and release visibility among team members. You get full auditability of releases and approvals.

RM approvals

Raise the quality bar with every release

Testing is essential for any release. You can ship with confidence by configuring testing tasks for all of your release checkpoints: performance, A/B, functional, security, beta testing and more. Using “Manual Intervention” you can even track and perform manual testing within the automated flow.

Release Quality

Deploying to Azure is easy

Release Management makes it very easy to configure your release with built in tasks and easy configuration for deploying to Azure. You can deploy to Azure Web Apps, Docker containers, Virtual Machines and more. You can also deploy to a range of other targets like VMware, System Center Virtual Machine Manager or servers managed through some other virtualization platform.

End to end traceability

Traceability is critical in releases: you can track the status of releases and deployments, including commits and work items, in each environment.

Refer to documentation to learn more about Release Management.

Try out Release Management in Visual Studio Team Services.

For any questions, comments and feedback – please reach out to Gopinath.ch AT microsoft DOT com.

Thanks

Gopinath

Release Management Team

Twitter: @gopinach

 

Give Visual C++ a Switch to Standard Conformance


This post was written by Gabriel Dos Reis, Phil Christensen, and Andrew Pardoe

The Visual C++ Team is excited to announce that the compiler in Visual Studio 2017 RC will feature a mode much closer to ISO C++ standards conformance than any time in its history. This represents a major milestone in our journey to full ISO C++ conformance.  The mode is available now as an opt-in via the /permissive- switch but will one day become the default mode for the Visual C++ compiler.

Got a minute? Try Visual C++ conformance mode

The Visual C++ Team is previewing a compiler mode whereby longstanding non-conforming C++ constructs are rejected.  This includes fixes to pre-C++11 non-conformance bugs that affect a significant amount of existing code.  Examples are as simple as

typedef int default;  // error: ‘default’ is a keyword, not permitted here

or as advanced as the following.  Consider:

template <typename T>
struct B {
    int f();
};

template <typename T>
struct D : B<T> {
    int g();
};

template <typename T>
int D<T>::g() {
    return f();  // error: should be ‘this->f()’
}

In the definition of D<T>::g, the symbol f is from the dependent base class B<T>, but standard C++ does not permit examining dependent base classes when looking for declarations that satisfy the use of f. That is an error in the source code that Visual C++ has long failed to diagnose. The fix is to prefix the call with this->. This fix works in both non-conformance and conformance modes.

The issue in this code sample might sound like a “two-phase name lookup” problem, but it is not quite that. In fact, two phase name lookup is the big missing piece from the first iteration of this standard conformance mode.  Visual C++ does not support two-phase name lookup in VS 2017 RC but we expect to complete it early next year. When we do, the definition of /permissive- will change to include two-phase name lookup. This is an important point: /permissive- is not a completed feature in VS 2017 RC. You will have to keep your codebase clean as we complete our conformance work in Visual C++.

Opting into the conforming mode, /permissive-, during the series of VS 2017 updates is a commitment to keeping your code base clean and to fixing non-conforming constructs as we fix conformance issues in Visual C++. If you make your code compile with /permissive- today, you may still have to make changes when VS 2017 releases, and again with updates to VS 2017, because we may have added more bug fixes for non-conforming behaviors.

The good news is this first release contains most of the big issues. With the exception of two-phase name lookup we don’t expect any major changes. See How to fix your code for a complete list of non-conforming constructs implemented under /permissive- today and how to fix them. We’ll update this list as we complete the changes.

From the Visual Studio IDE

In Visual Studio 2017 RC there is no setting in the project properties page for /permissive-. You’ll need to enter the switch manually under “Configuration -> C/C++ -> Additional Options”.

[Screenshot: the /permissive- switch entered under Additional Options]

When compiling with /permissive-, the IDE compiler disables non-conforming Visual C++-specific behaviors and interprets your code following the C++ standard. You’ll notice that these language conformance features are now reflected in areas of the IDE such as IntelliSense and browsing features. In this example, setting /permissive- causes IntelliSense to flag default as an illegal identifier.

[Screenshot: IntelliSense flagging ‘default’ as an illegal identifier]

There are currently a few places where the IntelliSense in the IDE will conform better to the C++ standard than the Visual C++ compiler.  For example, when relying on two-phase name lookup during overload resolution the compiler will emit an error.  IntelliSense, however, will show that the code is conforming and will not give any red squiggles in the editor.  You can see that happening in the screenshot below:

[Screenshot: IntelliSense accepting code that the compiler rejects]

Why is that? You may know that Visual C++ uses a separate compiler for IDE productivity features. Since around 2009 we’ve used a compiler front-end from the Edison Design Group (EDG) to drive IntelliSense and many other features. EDG does a fantastic job at emulating other compilers: if you use Clang in Visual Studio (either through Clang/C2 or when targeting Android or iOS), we put EDG into “Clang mode” where it emulates Clang’s bugs and vendor-specific behaviors. Likewise, when using Visual C++, we put EDG into “Visual C++” mode.

While EDG emulates most Visual C++ features and bugs faithfully, there still might be some gaps, due either to the implementation being separate or to how we integrated their compiler into the IDE. For example, we’ve implemented some conformance fixes, such as ignoring names from dependent base classes, but we haven’t yet implemented others, such as two-phase name lookup (“two-phase overload resolution” on this page), that EDG already supports. Thus EDG shows the “complete” results in IntelliSense even though the Visual C++ compiler doesn’t yet compile the code correctly.

Any differences you see in the IDE and the compiler are temporary. As we complete our conformance work the IDE and the compiler behaviors will match on all code.

Is there any relation to /Za?

The compiler switch /Za was an effort started decades ago to carve out a strictly portable behavior across several C++ compilers. The effort has stalled and we no longer recommend it for new projects. The switch /Za does not support certain key Microsoft SDK header files. By contrast, /permissive- offers a useful conformance mode where input C++ code is interpreted according to ISO C++ rules but conforming extensions necessary to compile C++ on the targets supported by Visual C++ are still allowed. For example, you can use /permissive- with C++/CLI. The compiler switch /Za rejects a few non-conforming constructs; however, the compiler switch /permissive- is our recommendation going forward for conforming code.

The road ahead

We know that conformance changes can be impactful to your code. In order to provide the best experience, we’re splitting the development of /permissive- into three stages: the development stage, the adoption stage, and on-by-default.

Development stage

The initial preview of /permissive- allows you to try out the switch and make most of the changes that will be required in your code. At this stage there will be some inconsistencies in the experience: the feature set under /permissive- will grow slightly, and editor features like IntelliSense may not match exactly. But all the changes you make to your code in this stage will work with and without /permissive-.

Adoption stage

The Visual C++ team will continue to add new conformance behavior under the /permissive- option throughout the Visual Studio 2017 release cycle. But we know that you rely on libraries and that those libraries also need to work with /permissive- in order for you to opt-in. Our first priority is to ensure that all public headers from our team, and those from Microsoft as a whole, compile cleanly with the /permissive- option. For example, the standard library headers compile correctly both with /permissive- and without. We are also working with the community to make changes to existing open source C++ projects (this typically involves removal of Visual C++-specific workarounds.)

On by default

When developers have had time to migrate their code to conform more closely to the C++ standard, we will flip the default so that the compiler applies full ISO C++ standard rules when interpreting your source code. This won’t happen immediately; it’s a long-term goal for our conformance work. Until we make this switch, and at least for the entire Visual Studio 2017 cycle, you will have to opt in to /permissive-. When we do switch it on by default, your existing projects won’t change, but VS will make new projects opt in to conformance.

Call to action

We encourage you to try the new option /permissive- in your projects. Please add this option to your build settings–either in the IDE or in your build scripts. If your code is already used cross-platform (i.e., it compiles with multiple compilers on many platforms) and does not contain Visual C++ specific workarounds, the chances are that the code will compile just fine with /permissive-!

If you do use Visual C++ specific, non-conforming constructs in your code, chances are that your code won’t compile the first time with /permissive-. There’s a list of patterns below of non-conforming C++ code, the typical error message/number you’ll see, and how to fix the code.  The good news is that the fixes work both in the conforming mode and in the permissive mode. Therefore, if you try to compile with /permissive- and you correct issues, the corrected code will be good for the permissive mode too. This enables you to migrate your code to /permissive- incrementally, rather than making all of the changes at once.

How to fix your code

Here’s a list of behaviors that are affected by the permissive option today. Note that this list is incomplete because the /permissive- switch is incomplete. As we add new conforming behaviors the potential breaking changes protected by /permissive- will expand. This means even if you fix your code to work with /permissive- today you may still have to make fixes in the future as we make Visual C++ conform better to the standard. Again, these changes will only affect your code if you opt-in by using the /permissive- switch.

Lookup members in dependent base

template <typename T> struct B {
  void f();
};
template <typename T> struct D
    : public B<T> // B<T> is a dependent base because its type depends on the type of T.
{
    // One possible fix is to uncomment the following line.  If this were a type, don't forget the 'typename' keyword.
    // using B<T>::f;
    void g() {
      f(); // error C3861: 'f': identifier not found
           // change it to 'this->f();'
    }
};
template <typename T> struct C
    : public B<T>
{
    C()
      : B() // error C2955: 'B': use of class template requires template argument list
            // Change to B<T>() to fix
    {}
};
void h()
{
   D<int> d;
   d.g();
   C<int> c;
}

Use of qualified names in member declarations

struct A {
    void A::f() { } // error C4596: illegal qualified name in member declaration
                    // remove redundant 'A::' to fix
};

Initializing multiple union members in a member initializer

union U
{
  U() 
    : i(1), j(1) // error C3442: Initializing multiple members of union: 'U::i' and 'U::j'
                 // Remove all but one of the initializations
  {}
  int i;
  int j;
};

Hidden friend name lookup rules

// Example 1
struct S {
friend void f(S *);
};
// Uncomment this declaration to make the hidden friend visible
//void f(S *); // this declaration makes the hidden friend visible
using type = void (*)(S *);
type p = &f; //error C2065: 'f': undeclared identifier

/*--------------------------------------------------------*/

// Example 2
struct S {
friend void f(S *);
};
 
void g() { 
    // using nullptr instead of an S* variable prevents argument-dependent
    // lookup in S
    f(nullptr); // error C3861: 'f': identifier not found
    
    S *p = nullptr;
    f(p); // OK: the hidden friend can be found via argument-dependent lookup.
} 

Using scoped enums in array bounds

enum class Color 
{ 
    Red, Green, Blue 
};
int data[Color::Blue];// error C3411: 'Color' is not valid as the size of an array as it is not an integer type
                      // Cast to type size_t or int

Using ‘default’ as an identifier in native code

void func(int default); // Error C2321: 'default' is a keyword, and cannot be used in this context

‘for each’ in native code

void func()
{
   int array[] = {1, 2, 30, 40};
   for each( int i in array) // error C4496: nonstandard extension 'for each' used: replace with ranged-for statement
                             // for( int i: array)
   {
       // ... 
   }
}

Defaulting conformance switches

The compiler switches /Zc:strictStrings and /Zc:rvalueCast are currently off by default, allowing non-conforming behavior. The switch /permissive- turns them on by default. You can pass in the /Zc flags after /permissive- to override this behavior if needed.

See these MSDN pages for more information:

  • /Zc:strictStrings https://msdn.microsoft.com/en-us/library/dn449508.aspx
  • /Zc:rvalueCast https://msdn.microsoft.com/en-us/library/dn449507.aspx

ATL attributes

We started to deprecate attributed ATL support in VS 2008. The /permissive- switch removes support for attributed ATL.

// Example 1
[uuid("594382D9-44B0-461A-8DE3-E06A3E73C5EB")]
class A {};
// Fix for example 1
class __declspec(uuid("594382D9-44B0-461A-8DE3-E06A3E73C5EB")) B {};
// Example 2
[emitidl];
[module(name="Foo")];
[object, local, uuid("9e66a290-4365-11d2-a997-00c04fa37ddb")]
__interface ICustom {
HRESULT Custom([in] long l, [out, retval] long *pLong);
[local] HRESULT CustomLocal([in] long l, [out, retval] long *pLong);
};
[coclass, appobject, uuid("9e66a294-4365-11d2-a997-00c04fa37ddb")]
class CFoo : public ICustom
{};
// Fix for example 2
// First, create the *.idl file. The generated vc140.idl file can be used to automatically obtain the *.idl file for the interfaces with annotations. 
// Second, add a MIDL step to your build system to make sure that the C++ interface definitions are output. 
// Lastly, adjust your existing code to use ATL directly, as shown in the ATL implementation section below.

-- IDL  FILE -- 
import "docobj.idl";
 
[object, local, uuid(9e66a290-4365-11d2-a997-00c04fa37ddb)] 
interface ICustom : IUnknown {
HRESULT  Custom([in] long l, [out,retval] long *pLong);
[local] HRESULT  CustomLocal([in] long l, [out,retval] long *pLong);
};
 
[ version(1.0), uuid(29079a2c-5f3f-3325-99a1-3ec9c40988bb) ]
library Foo {
importlib("stdole2.tlb");
importlib("olepro32.dll");
 
[version(1.0), appobject, uuid(9e66a294-4365-11d2-a997-00c04fa37ddb)] 
 
coclass CFoo { interface ICustom; };
}
 
-- ATL IMPLEMENTATION--
#include <atlbase.h>
#include <atlcom.h>
class ATL_NO_VTABLE CFooImpl : public ICustom, public ATL::CComObjectRootEx<ATL::CComSingleThreadModel> {
public:
BEGIN_COM_MAP(CFooImpl)
COM_INTERFACE_ENTRY(ICustom)
END_COM_MAP()
}; 

Changes not present in VS 2017 RC

Here are some changes to /permissive- that we expect to enable in future releases and updates of VS 2017. This list may not be complete.

  • Two-phase name lookup
  • Error when binding a non-const reference to a temporary
  • Not treating copy init as direct init
  • Not allowing multiple user-defined conversions in initialization.
  • Alternative tokens for logical operators (‘and’, ‘or’, etc…)

In closing

As always, we welcome your feedback. Please give us feedback about the /permissive- feature in the comments below or through e-mail at visualcpp@microsoft.com.

If you encounter other problems with Visual C++ in VS 2017 RC please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Updates to Expression SFINAE in VS 2017 RC


Throughout the VS 2015 cycle we’ve been focusing on the quality of our expression SFINAE implementation. Because expression SFINAE issues can be subtle and complex, we’ve been using popular libraries such as Boost and Microsoft’s fork of Range-v3 to validate our implementation and find remaining bugs. As we shift the compiler team’s focus to the Visual Studio 2017 release, we’re excited to tell you about the improvements we’ve made in correctly parsing expression SFINAE.

We’ve been tracking the changes and improvements to our parsing of expression SFINAE throughout the Visual Studio 2015 and 2017 cycle. Please see this blog post for a list of expression SFINAE improvements in Visual Studio 2017.

CMake support in Visual Studio – the Visual Studio 2017 RC update


Visual Studio 2017 RC is an important release when it comes to its support for CMake. The “Tools for CMake” VS component is now ready for public preview and we’d like to invite all of you to bring your CMake projects into VS and give us feedback on your experience.

For an overview of the general Visual Studio CMake experience, head over to the announcement post for CMake support in Visual Studio that has been updated to include all the capabilities discussed in this post. Additionally, if you’re interested in the “Open Folder” capability for C++ projects that are not using CMake or MSBuild, check out the Open Folder for C++ announcement blog.

The RC release brings support for:

Editing CMake projects

Default CMake configurations. As soon as you open a folder containing a CMake project, Solution Explorer will display the files in that folder and you can open any one of them in the editor. In the background, VS will start indexing the C++ sources in your folder. It will also run CMake.exe to collect more information about your CMake project (CMake cache will be generated in the process). CMake is invoked with a specific set of switches that are defined as part of a default CMake configuration that VS creates under the name “Visual Studio 15 x86”.

[Screenshot: CMake generation notification bar in the editor]

CMake configuration switch. You can switch between CMake configurations from the C++ Configuration dropdown in the General tab. If a configuration does not have the needed information for CMake to correctly create its cache, you can further customize it – how to configure CMake is explained later in the post.

[Screenshot: the CMake configuration dropdown]

Auto-update CMake cache. If you make changes to the CMakeLists.txt files or change the active configuration, the CMake generation step will automatically rerun. You can track its progress in the CMake output pane of the Output Window.

[Screenshot: the CMake output pane tracking cache generation]

When the generation step completes, the notification bar in editors is dismissed, the Startup Item dropdown will contain the updated list of CMake targets and C++ IntelliSense will incrementally update with the latest changes you made (e.g. adding new files, changing compiler switches, etc.)

[Screenshot: the Startup Item dropdown listing CMake targets]

Configure CMake projects

Configure CMake via CMakeSettings.json. If your CMake project requires additional settings to configure the CMake cache correctly, you can customize these settings by creating a CMakeSettings.json file in the same folder with the root CMakeLists.txt. In this file you can specify as many CMake configurations as you need – you will be able to switch between them at any time.

You can create the CMakeSettings.json file by selecting the Project>Edit Settings>path-to-CMakeLists (configuration-name) menu entry.

[Screenshot: the Edit Settings menu entry]

CMakeSettings.json example

{
  "configurations": [
   {
    "name": "my-config",
    "generator": "Visual Studio 15 2017",
    "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
    "cmakeCommandArgs": "",
    "variables": [
     {
      "name": "VARIABLE",
      "value": "value"
     }
    ]
  }
 ]
}

If you already have CMake.exe working on the command line, creating a new CMake configuration in the CMakeSettings.json should be trivial:

  • name: the configuration name that will show up in the C++ configuration dropdown. This value can also be used as the macro ${name} to specify other property values, e.g. see the “buildRoot” definition
  • generator: maps to the -G switch and specifies the generator to be used. This property can also be used as the macro ${generator} to help specify other property values. VS currently supports the following CMake generators:
    • “Visual Studio 14 2015”
    • “Visual Studio 14 2015 ARM”
    • “Visual Studio 14 2015 Win64”
    • “Visual Studio 15 2017”
    • “Visual Studio 15 2017 ARM”
    • “Visual Studio 15 2017 Win64”
  • buildRoot: maps to the -DCMAKE_BINARY_DIR switch and specifies where the CMake cache will be created. If the folder does not exist, it will be created
  • variables: contains name/value pairs of CMake variables that will get passed as -Dname=value to CMake. If your CMake project build instructions specify adding any variables directly to the CMake cache file, it is recommended that you add them here instead
  • cmakeCommandArgs: specifies any additional switches you want to pass to CMake.exe

CMakeSettings.json file IntelliSense. When you have the JSON editor installed (it comes with the Web Development workload), JSON IntelliSense will assist you while making changes to the CMakeSettings.json file.

[Screenshot: JSON IntelliSense in CMakeSettings.json]

Environment variable support and macros. CMakeSettings.json supports consuming environment variables for any of the configuration properties. The syntax to use is ${env.FOO} to expand the environment variable %FOO%.

You also have access to built-in macros inside this file:

  • ${workspaceRoot} – provides the full path to the workspace folder
  • ${workspaceHash} – hash of the workspace location; useful for creating a unique identifier for the current workspace (e.g. to use in folder paths)
  • ${projectFile} – the full path of the root CMakeLists.txt file
  • ${projectDir} – the full path to the folder of the root CMakeLists.txt file
  • ${thisFile} – the full path to the CMakeSettings.json file
  • ${name} – the name of the configuration
  • ${generator} – the name of the CMake generator used in this configuration

Building and debugging CMake projects

Customize build command. By default, VS invokes MSBuild with the following switches: -m -v:minimal. You can customize this command by changing the “buildCommandArgs” configuration property in CMakeSettings.json.

CMakeSettings.json

{
  "configurations": [
   {
     "name": "x86",
     "generator": "Visual Studio 15 2017",
     "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
     "cmakeCommandArgs": "",
     "buildCommandArgs": "-m:8 -v:minimal -p:PreferredToolArchitecture=x64"
   }
 ]
}

Call to action

Download Visual Studio 2017 RC today and try the “Open Folder” experience for CMake projects. For an overview of the CMake experience, also check out the CMake support in Visual Studio blog post.
If you’re using CMake when developing your C++ projects, we would love to hear from you! Please share your feedback in the comments below or through the “Send Feedback” icon in VS.


Open any folder with C++ sources in Visual Studio 2017 RC


With the Visual Studio 2017 RC release, we’re continuing to improve the “Open Folder” capabilities for C++ source code. In this release, we’re adding support for building as well as easier configuration for the debugger and the C++ language services.

If you are just getting started with “Open Folder” or want to read about these capabilities in more depth, head over to the Open Folder for C++ introductory post that has been updated with the content below. If you are using CMake, head over to our blog post introducing the CMake support in Visual Studio.

Here are the improvements for the “Open Folder” C++ experience in the new RC release for Visual Studio 2017:

Reading and editing C++ Code

Environment variables and macros support. The CppProperties.json file, which aids in configuring C++ IntelliSense and browsing, now supports environment variable expansion for include paths and other property values. The syntax is ${env.FOODIR} to expand the environment variable %FOODIR%.

CppProperties.json:

{
  "configurations": [
    {
      "name": "Windows",
      "includePath": [ // include UCRT and CRT headers
        "${env.WindowsSdkDir}include\\${env.WindowsSDKVersion}\\ucrt",
        "${env.VCToolsInstallDir}include"
      ]
    }
  ]
}

Note: %WindowsSdkDir% and %VCToolsInstallDir% are not set as global environment variables so make sure you start devenv.exe from a “Developer Command Prompt for VS 2017” that defines these variables.

You also have access to built-in macros inside this file:

  • ${workspaceRoot} – provides the full path to the workspace folder
  • ${projectRoot} – the full path to the folder where CppProperties.json is placed
  • ${vsInstallDir} – the full path to the folder where the running instance of VS 2017 is installed

CppProperties.json IntelliSense. Get assistance while editing CppProperties.json via JSON IntelliSense when you have the full-fledged JSON editor installed (it comes with the Web Development Workload)

[Screenshot: JSON IntelliSense in CppProperties.json]

C++ Configuration dropdown. You can create as many configurations as you want in CppProperties.json and easily switch between them from the C++ configuration dropdown in the Standard toolbar

CppProperties.json

{
  "configurations": [
    {
      "name": "Windows",
      ...
    },
    {
      "name": "with EXTERNAL_CODECS",
      ...
    }
  ]
}

[Screenshot: the C++ configuration dropdown]

CppProperties.json is now optional. By default, when you open a folder with C++ source code, VS creates two default C++ configurations: Debug and Release. These configurations are consistent with the configurations provided by the Single File IntelliSense we introduced in VS 2015.

[Screenshot: the default Debug and Release configurations]

Building C++ projects

Integrate external tools via tasks. You can now automate build scripts or any other external operations on the files in your current workspace by running them as tasks directly in the IDE. You can configure a new task by right-clicking on a file or folder and selecting “Customize Task Settings”.

[Screenshot: the Customize Task Settings menu entry]

This will create a new file tasks.vs.json under the hidden .vs folder in your workspace and a new task that you can customize. JSON IntelliSense is available if you have the JSON editor installed (it comes with the Web Development workload)

[Screenshot: JSON IntelliSense in tasks.vs.json]

By default, a task can be executed from the context menu of the file in Solution Explorer. For each task, you will find a new entry at the bottom of the context menu.

Tasks.vs.json

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "Echo filename",
      "appliesTo": "makefile",
      "type": "command",
      "command": "${env.COMSPEC}",
      "args": ["echo ${file}"]
    }
  ]
}

[Screenshot: a custom task in the file context menu]

Environment variables support and macros. Just like CppProperties.json, in tasks.vs.json you can consume environment variables by using the syntax ${env.VARIABLE}.

Additionally, you can use built-in macros inside your tasks properties:

  • ${workspaceRoot} – provides the full path to the workspace folder
  • ${file} – provides the full path to the file or folder selected to run this task against

You can also specify additional user macros yourself that you can use in the tasks properties e.g. ${outDir} in the example below:

Tasks.vs.json

{
  "version": "0.2.1",
  "outDir": "${workspaceRoot}\\bin",
  "tasks": [
    {
      "taskName": "List outputs",
      "appliesTo": "*",
      "type": "command",
      "command": "${env.COMSPEC}",
      "args": [ "dir ${outDir}" ]
    }
  ]
}

Building projects. By setting the “contextType” of a given task to “build”, “clean” or “rebuild”, you can wire up the built-in VS commands for Build, Clean and Rebuild so that they can be invoked from the context menu.

Tasks.vs.json

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "makefile-build",
      "appliesTo": "makefile",
      "type": "command",
      "contextType": "build",
      "command": "nmake"
    },
    {
      "taskName": "makefile-clean",
      "appliesTo": "makefile",
      "type": "command",
      "contextType": "clean",
      "command": "nmake",
      "args": ["clean"]
    }
  ]
}

[Screenshot: Build and Clean commands in the context menu]

File and folder masks. You can create tasks for any file or folder by specifying its name in the “appliesTo” field. But to create more generic tasks you can use file masks. For example:

  • "appliesTo": "*" – task is available to all files and folders in the workspace
  • "appliesTo": "*/" – task is available to all folders in the workspace
  • "appliesTo": "*.cpp" – task is available to all files with the extension .cpp in the workspace
  • "appliesTo": "/*.cpp" – task is available to all files with the extension .cpp in the root of the workspace
  • "appliesTo": "src/*/" – task is available to all subfolders of the “src” folder
  • "appliesTo": "makefile" – task is available to all makefile files in the workspace
  • "appliesTo": "/makefile" – task is available only for the makefile in the root of the workspace

Debugging C++ binaries

Debug task outputs. If you specify an output binary in your task definition (via “output”), this binary will be automatically launched under the debugger if you select the source file as a startup item or just right click on the source file and choose “Debug”. E.g.

Tasks.vs.json

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "makefile-build",
      "appliesTo": "makefile",
      "type": "command",
      "contextType": "build",
      "command": "nmake",
      "output": "${workspaceRoot}\\bin\\hellomake.exe"
    }
  ]
}

[Screenshot: debugging the output binary of a task]

What’s next

Download Visual Studio 2017 RC today and please try the “Open Folder” experience. For an overview of the “Open Folder” experience, also check out the “Open Folder” for C++ overview blog post.

As we’re continuing to evolve the “Open Folder” support, we want your input to make sure the experience meets your needs when bringing C++ codebases that use non-MSBuild build systems into Visual Studio, so don’t hesitate to contact us. We look forward to hearing from you!

Introducing Go To, the successor to Navigate To


Visual Studio 2017 comes packed with several major changes to the core developer productivity experience. It is our goal to maximize your efficiency as you develop applications, and this requires us to constantly refine our features and improve on them over time. For Visual Studio 2017, we wanted to improve code navigation, particularly for larger solutions which produce many search results. One big focus for us was Navigate To (now known as Go To). The other was Find All References, described in a separate blog post.

We rebranded our Navigate To feature to Go To, an umbrella term for a set of filtered navigation experiences around specific kinds of results. We recognized that large searches sometimes produced cases where the desired search term is quite far down the list. With our new filters, it is easier to narrow down on the desired result before the search process has even begun.

[Screenshot: Go To user interface] The new Go To experience with added filters

You can open Go To with Ctrl + , – this creates a search box over the document you are editing. “Go To” is an umbrella term encompassing the following features:

  1. Go To Line (Ctrl + G) – quickly jump to a different line in your current document
  2. Go To All (Ctrl + ,) or (Ctrl + T) – similar to the old Navigate To experience; search results include everything below
  3. Go To File (Ctrl + 1, F) – search for files in your solution
  4. Go To Type (Ctrl + 1, T) – search results include:
    • Classes, Structs, Enums
    • Interfaces & Delegates (managed code only)
  5. Go To Member (Ctrl + 1, M) – search results include:
    • Global variables and global functions
    • Class member variables and member functions
    • Constants
    • Enum Items
    • Properties and Events
  6. Go To Symbol (Ctrl + 1, S) – search results include:
    • Results from Go To Types and Go To Members
    • All remaining C++ language constructs, including macros

When you first invoke Go To with Ctrl + , the Go To All filter is active (no filters on search results). You can then select your desired filter using the buttons near the search textbox. Alternatively, you can invoke a specific Go To filter using its corresponding keyboard shortcut. Doing so opens the Go To search box with that filter pre-selected. All keyboard shortcuts are configurable, so feel free to experiment!

You also have the option of using text filters to activate different Go To filters. To do so, simply start your search query with the filter’s corresponding character followed by a space. Go To Line can optionally omit the space. These are the available text filters:

  • Go To All – (no text filter)
  • Go To Line Number – :
  • Go To File – f
  • Go To Type – t
  • Go To Member – m
  • Go To Symbol – #

If you forget these text filters, just type a ? followed by a space to see the full list.

Another way to access the Go To commands is via the Edit menu. This is also a good way to remind yourself of the main Go To keyboard shortcuts.

[Screenshot: Go To commands in the Edit menu]

Other notable changes to the old Navigate To (now Go To) experience:

  • Two toggle buttons were added to the right of the filters:
    • A new button that limits searches to the current active document in the IDE.
    • A new button that expands searches to include results from external dependencies (previously a checkbox setting).
  • The settings for Go To have been moved from the arrow beside the textbox to their own “gear icon” button. The arrow still displays a history of search results. A new setting was added that lets you center the Go To search box in your editor window.

We hope the new Go To feature and its set of filters provide a more advanced and tailored code navigation experience for you. If you’re interested in other productivity-related enhancements in Visual Studio 2017, check out the Find All References post mentioned above.

Send us your feedback!

We thrive on your feedback. Use the Report a Problem feature in the IDE to share feedback on Visual Studio, and track your feedback on the Developer Community portal. If you are not using the Visual Studio IDE, report issues using the Connect form. Share your product improvement suggestions on UserVoice.

Download Visual Studio 2017 RC to try out this feature for yourself!

Find All References re-designed for larger searches


Visual Studio 2017 comes packed with several major changes to the core developer productivity experience. It is our goal to maximize your efficiency as you develop applications, and this requires us to constantly refine our features and improve on them over time. For Visual Studio 2017, we wanted to improve code navigation, particularly for larger solutions which produce many search results. One big focus for us was Find All References. The other was Navigate To, described in a separate blog post.

Find All References is intended to provide an efficient way to find all usages of a particular code symbol in your codebase. In Visual Studio 2017, you can now filter, sort, or group results in many different ways. Results also populate incrementally, and are classified as Reads or Writes to help you get more context on what you are looking at.

[Screenshot: the redesigned Find All References window]

Grouping Results

A new dropdown list has been made available that lets you group results by the following categories:

  • Project then Definition
  • Definition Only
  • Definition then Project
  • Definition then Path
  • Definition, Project then Path

Filtering Results

Most columns now support filtering of results. Simply hover over a column and click the filtering icon that pops up. Most notably, you can filter results from the first column to hide things like string and comment references (or choose to display them, if you prefer).

[Screenshot: column filtering options in Find All References]

The difference between Confirmed, Disconfirmed, and Unprocessed results is described below:

  • Confirmed Results – Actual code references to the symbol being searched for. For example, searching for a member function called Size will return all references to Size that match the scope of the class defining Size.
  • Disconfirmed Results – Results that have the same name as the symbol being searched for but have been proven not to be actual references to that symbol. For example, if you have two classes that each define a member function called Size, and you run a search for Size on a reference from an object of the first class, any references to Size from the second class appear as disconfirmed. Since you usually won’t care about these results, this filter is off by default and they are hidden from view.
  • Unprocessed Results – Find All References operations can take some time to fully execute on larger codebases, so results that match the name of the symbol but have not yet been confirmed or disconfirmed as actual code references by the IntelliSense engine are classified as unprocessed. You can turn on this filter if you want to see results show up even faster in the list, and don’t mind occasionally getting results that aren’t actual references.

Sorting Results

You can sort results by a particular column by simply clicking on that column. You can swap between ascending/descending order by clicking the column again.

Read/Write Status

We added a new column (far right in the UI) that classifies entries as Read, Write, or Other (where applicable). You can use the new filters to limit results to just one of these categories if you prefer.

We hope the changes to Find All References help you manage complex searches. If you’re interested in other productivity-related enhancements in Visual Studio 2017, check out this additional content:

Send us your feedback!

We thrive on your feedback. Use the Report a Problem feature in the IDE to share feedback on Visual Studio, and check out the Developer Community portal to view and vote on issues. If you are not using the Visual Studio IDE, report issues using the Connect form. Share your product improvement suggestions on UserVoice.

Download Visual Studio 2017 RC to try out this feature for yourself!

Introducing the Visual Studio Build Tools

Recap of the Visual C++ Build Tools

Last year we introduced the Visual C++ Build Tools to enable a streamlined build-lab experience for getting the required Visual C++ tools without the additional overhead of installing the Visual Studio IDE.  We expanded the options to include tools like ATL and MFC, .NET tools for C++/CLI development, and various Windows SDKs.

[Screenshot: the Visual C++ Build Tools 2015 installer]

There was also an MSBuild standalone installer for installing the tools needed for building .NET applications called the Microsoft Build Tools.

The new Visual Studio Build Tools

For Visual Studio 2017 RC, we are introducing the new Visual Studio Build Tools, which uses the new installer experience to provide access to MSBuild tools for both managed and native applications. This installer replaces both the Visual C++ Build Tools and the Microsoft Build Tools as your one-stop shop for build tools. By default, all of the necessary MSBuild prerequisites for both managed and native builds are installed with the Visual Studio Build Tools, including the MSBuild command prompt, which you can use to build your applications. On top of that, there is also an optional “Visual C++ Build Tools” workload that provides an additional set of options native C++ developers can install on top of the core MSBuild components.

[Screenshot: the Visual C++ Build Tools workload in the new installer]

These options are very similar to those found in the Visual Studio 2017 RC “Desktop development with C++” workload, which provides a comparable set of options to those available in the Visual C++ Build Tools 2015. Note that we also include CMake support in the Visual Studio Build Tools.

[Screenshot: optional components of the Visual C++ Build Tools workload]

Just like the installer for Visual Studio 2017 RC, there is also an area for installing individual components to allow for more granular control over your installation.

[Screenshot: the individual components view in the installer]

Command-line “Silent” Installs

The build tools can be installed using the installer from the command line without needing to launch the installer UI. Navigate to the installer’s directory using an elevated command prompt and run one of the following commands. Pass the “--quiet” argument to invoke a silent install, as shown below:

  • To install just the MSBuild tools

vs_buildtools.exe --quiet

  • To install the MSBuild tools and required VC++ tools

vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools

  • To install the MSBuild tools and recommended (default) VC++ tools

vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools;includeRecommended

  • To install the MSBuild tools and all of the optional VC++ tools

vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools;includeOptional

The --help command will be coming in a future release.

Closing Remarks

Give the new Visual Studio Build Tools a try and let us know what you think.  We plan to evolve this installer to continue to meet your needs, both native and beyond.  Your input will help guide us down this path.  Thanks!

Visual Studio 2017 RC Now Available

Visual Studio 2017 RC (previously known as Dev “15”) is now available. There is a lot for C++ developers to love in this release:

For more details, visit What’s New for Visual C++ in Visual Studio 2017 RC. Going Native over on Channel 9 also has a good overview including a look at VCPkg.

We thrive on your feedback. Use the Report a Problem feature in the IDE to share feedback on Visual Studio, and check out the Developer Community portal to view and vote on issues. If you are not using the Visual Studio IDE, report issues using the Connect form. Share your product improvement suggestions on UserVoice.

Thank you.
