
Team Foundation Server (TFS) 15 RC2 is available and ready for production use


We have now released TFS “15” RC2. We are using it in production internally. It is fully supported for production use. You can upgrade from TFS 2012 or newer to RC2. You can also upgrade the RC1 release to RC2, and you will be able to upgrade from RC2 to RTM (that should be a very fast upgrade, since there will be very few changes between RC2 and RTM).

You can find the full release notes on the Getting Started page, which also lists out the very long feature list.

Here are the requirements links.

Here are direct links to the downloads.

I’d love to get as many servers using it as possible for folks to give us feedback. We have a bunch of great new features.

  • Follow a work item – makes it super easy to track the progress of work that you care about.
  • Code search – makes it easy to find code anywhere across your project.
  • Package management – improves your ability to reuse components across your projects (both OSS and internally produced).
  • Improved Git web experiences, including revamped pull requests and branches pages – pull requests include a new auto-complete feature.
  • Docker support – in build and release management.
  • Release management – the number of significant improvements and new capabilities is very large.
  • Parity between MTM and web TCM – we are close to parity now, so that everything you are used to doing in MTM can be done in the web experience.
  • Paid extensions – you can now purchase and install extensions in TFS.

RTM is coming soon, so we’d love to get your feedback on the RC2 release!


Bing Enhances the Copyright Infringement Claims Process with New, Easy-to-Use Dashboard


Lack of communication is a leading cause of divorce. Communication is vital, and sharing the status of a copyright infringement notice is no exception – which is why Bing just made this easier.

A new online dashboard provides insight into the status of a copyright removal request, as well as providing overall historical submission statistics. This dashboard is now available for users who submit DMCA notices via our online form or API.


 

How Bing Receives Copyright Notices

Bing typically receives requests to remove links due to copyright infringement claims, also known as DMCA notices, through three different channels: email, an online form, and for certain users, an API.

Email is the least efficient channel and is prone to errors such as missing or incomplete information. When submitters use the online form or the API, they decrease the chance of rejection due to incomplete or incorrect information.

Bing’s online form solves the problems of email submissions by providing submitters a fill-in form with guidance for all of the required information. We recommend that most submitters use the online form.

 


After hitting the submit button, an email will arrive with a submission reference number, for example, 604ab644-2a38-4bbc-a839-2034471731c1. Individual submissions such as this, as well as overall historical statistics, are viewable through the dashboard.


For rights owners who submit high volumes of DMCA notices, Bing’s API program is the most efficient method for requesting link removals due to copyright infringement. The API program is reserved for frequent submitters with a demonstrated history of valid submissions.

Submitter Dashboard

The dashboard’s top table shows submission statistics for all notices received from a Copyright Owner or their authorized agent. A submission is accepted if it contains all of the information required by the DMCA. That does not mean, however, that the (alleged) infringing URLs specified within the notice are automatically removed. Finally, submissions in the pending state indicate that Bing is currently processing the notice.



The next table shows statistics for all alleged infringing URLs within all notices sent by a Copyright Owner. The table depicts the overall number of URLs accepted, rejected or still being processed.


The final table shows the status of individual submissions and current status for the URLs contained within each submission.


 

Clicking on an individual submission ID will display the details for that specific submission.

In Conclusion

Bing wants to ensure that copyright owners send valid DMCA notices and that those notices are acted upon promptly. The online form and API help accomplish this. Having insight into the status of these notices helps copyright owners stay better informed and, in turn, promotes the use of such tools to help Bing respond in an expeditious manner.
 
Chad Foster
Bing Program Manager


Implementing Seeding, Custom Conventions and Interceptors in EF Core 1.0


This post was written by Alina Popa, a software engineer on the .NET team.

Introduction

Entity Framework Core (EF Core) is a lightweight and extensible version of the Entity Framework (EF) data access technology which is cross-platform and supports multiple database providers. You can find a comparison of EF Core vs. EF6 under the Entity Framework documentation.

When moving an application from EF6 to EF Core, you may encounter features that existed in EF6 but either are not present or are not yet implemented in EF Core. For many of those features, however, you can implement equivalent functionality.

Seeding

With EF6 you can seed a database with initial data by overriding a Seed() method, either on DbMigrationsConfiguration&lt;TContext&gt; or on one of the database initializers such as DropCreateDatabaseIfModelChanges&lt;TContext&gt;.
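For reference, the migrations-based Seed() override in EF6 looks roughly like this (the BloggingContext and Blog types are illustrative, not from the original post):

```csharp
using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<BloggingContext>
{
    protected override void Seed(BloggingContext context)
    {
        // AddOrUpdate keeps the seed idempotent: rows are matched on the
        // given key expression and inserted or updated as needed.
        context.Blogs.AddOrUpdate(
            b => b.Url,
            new Blog { Url = "http://sample.com/blog" },
            new Blog { Url = "http://sample.com/another-blog" });
    }
}
```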

EF Core does not provide similar APIs, and database initializers also no longer exist in EF Core. To seed the database, you would put the database initialization code in the application startup. If you are using migrations, call context.Database.Migrate(), otherwise use context.Database.EnsureCreated()/EnsureDeleted().

The patterns for seeding the database are discussed in issue 3070 in the Entity Framework Core repository on GitHub. The recommended approach is to run the seeding code within a service scope in Startup.Configure():
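A minimal sketch of that pattern, assuming an ASP.NET Core app whose BloggingContext is registered with the built-in dependency injection container (the context and entity names are illustrative):

```csharp
public void Configure(IApplicationBuilder app)
{
    // Create a scope so the scoped DbContext is resolved and disposed cleanly.
    using (var scope = app.ApplicationServices
        .GetRequiredService<IServiceScopeFactory>()
        .CreateScope())
    {
        var context = scope.ServiceProvider.GetRequiredService<BloggingContext>();

        context.Database.Migrate(); // or EnsureCreated() if you don't use migrations

        if (!context.Blogs.Any())
        {
            context.Blogs.Add(new Blog { Url = "http://sample.com/blog" });
            context.SaveChanges();
        }
    }

    // ...the rest of the usual pipeline configuration follows here.
}
```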

You can find an example of database initialization that uses migrations here. The MusicStore sample also uses this pattern for seeding.

Custom Conventions

In Entity Framework 6 we can create custom configurations of properties and tables by using model-based conventions. For example, the following code in EF6 creates a convention to throw an exception when the column name is longer than 30 characters:
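That EF6 convention is implemented with IStoreModelConvention&lt;EdmProperty&gt;; a sketch:

```csharp
using System;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.ModelConfiguration.Conventions;

public class ColumnNameLengthConvention : IStoreModelConvention<EdmProperty>
{
    // Called for every store-model property (i.e. every column).
    public void Apply(EdmProperty property, DbModel model)
    {
        if (property.Name.Length > 30)
        {
            throw new InvalidOperationException(
                "Column name '" + property.Name + "' is longer than 30 characters.");
        }
    }
}

// Registered in DbContext.OnModelCreating:
//   modelBuilder.Conventions.Add(new ColumnNameLengthConvention());
```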

EF Core does not provide the IStoreModelConvention interface; however, we can create this convention by accessing internal services (extending lower-level components in EF Core). In the following example we implement a model validator which checks for overly long table and column names:
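A rough sketch of such a validator is below. Be aware that the validator base type is one of EF Core's internal services, so the class you would derive from and the exact override signature are assumptions that may not match your EF Core version; only the IModel traversal calls are public API:

```csharp
using System;
using Microsoft.EntityFrameworkCore.Metadata;

// Assumption: in a real implementation this would derive from EF Core's
// internal ModelValidator service and override its validation entry point.
public class NameLengthModelValidator
{
    public void Validate(IModel model)
    {
        foreach (var entityType in model.GetEntityTypes())
        {
            foreach (var property in entityType.GetProperties())
            {
                if (property.Name.Length > 30)
                {
                    throw new InvalidOperationException(
                        "Property name '" + property.Name + "' is longer than 30 characters.");
                }
            }
        }
    }
}
```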

Registering and using the ModelValidator created here is explained later in this article.

Interceptors

Entity Framework 6 provides the ability to intercept a context using IDbCommandInterceptor. Interceptors let you get into the pipeline just before and just after a query or command is sent to the database. Entity Framework Core doesn’t have any interceptors yet. The functionality can be achieved by accessing internal services, in a similar way to the model validator example described above. The following example implements IEntityStateListener to modify an entity just before it is added to the database:

To use the StateListener and the ModelValidator in your context, create a ServiceProvider and use it in OptionsBuilder:
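The wiring looks roughly like this, assuming SQL Server and the StateListener and ModelValidator classes discussed in this article. UseInternalServiceProvider is the public hook; the service interfaces being replaced are internal and may change between releases:

```csharp
public class BloggingContext : DbContext
{
    private static readonly IServiceProvider InternalServices =
        new ServiceCollection()
            .AddEntityFrameworkSqlServer()                    // register EF's own services first
            .AddScoped<IEntityStateListener, StateListener>() // internal service (assumption)
            .AddScoped<IModelValidator, ModelValidator>()     // internal service (assumption)
            .BuildServiceProvider();

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder
            .UseInternalServiceProvider(InternalServices)
            .UseSqlServer("<your connection string>");
}
```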

Notes

The APIs for accessing internal services may change in future releases, and there is a risk that the application will break when updated to a new version of Entity Framework Core. The approaches described above should not be considered long-term solutions, but rather workarounds until we have a first-class way of achieving the functionality.

Interceptors and seeding are high on the feature backlog and the Entity Framework team plans to address them in the near future.

Useful Links

Extend your reach with offline licensing in Windows Store for Business


Windows Store for Business was launched in November 2015 to extend opportunities for developers to offer their apps in volume to business and education organizations running Windows 10. Developers can enable organizations to license and distribute their apps with these two licensing and distribution types (also described on MSDN):

  1. Store-managed (online) is your default option, in which the Store service installs your app and issues entitlements for every user in the organization who installs the app.
  2. Organization-managed (offline) is an optional additional choice, in which the organization can download your app binary with proper entitlements, distribute the apps using their own management tools and keep track of the number of licenses used.

The organization-managed (offline) option broadens the distribution flexibility for organizations, and can extend your opportunity to increase usage of your apps and revenue of your paid apps. This option gives organizations more control over distribution of apps within the organization – including preloading apps to Windows 10 devices, distribution to devices never or rarely connected to the internet or distributing via private intranet networks. Typical organizations asking for organization-managed (offline) apps include multinational manufacturers, city departments and the largest school districts with tens of thousands of devices.

Developers have already enabled many apps for organization-managed (offline) licensing and distribution in Store for Business. Examples include many Microsoft apps like Office Mobile, OneDrive, Mail, Calendar and Fresh Paint—plus apps from other developers such as Twitter, TeamViewer, JT2Go from Siemens, Corinth’s Classroom app collection and many others. Thank you, developers!

Enable offline licensing today

Enabling Organization-managed (offline) licensing today can help more organizations acquire your apps and deploy them to their users.

image1

  1. Log in to your Windows Dev Center dashboard.
  2. Open the app you would like to enable for offline licensing and create a new submission.
  3. In Pricing and availability, go to the Organizational licensing section.
  4. Make sure the box for Make my app available to organizations with Store-managed (online) volume licensing is checked.
  5. Also check the box for Allow organization-managed (offline) licensing and distribution for organizations.
  6. Make any other desired changes to your submission, then submit the app to the Store.

Frequently asked questions

Does organization-managed (offline) licensing and distribution mean more work for developers? No—all you have to do is select a checkbox during submission in Dev Center, as shown above.

How do app updates work for organization-managed (offline) apps? Apps distributed with offline licensing will work on a user’s device just like any other Store apps, and will by default be updated to the latest version of your app when the device is connected to the internet. Organizations also may choose to manage app updates with management tools if desired.

Are both free and paid organization-managed (offline) apps supported? Yes, organization-managed (offline) licensing and distribution works for both free and paid apps (in developer markets that support paid apps in Windows Store for Business).

How can organizations purchase organization-managed (offline) apps? Organizations wishing to acquire offline paid apps need to pass additional credit validation by Microsoft. All app binaries (both free and paid) distributed for offline licensing are watermarked with the ID of the organization that acquired your app.

Resources

Create a Dev Center account if you don’t have one!

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a bug to report, please use the Windows Feedback tool built directly into Windows 10.

Cloud-load testing service is hyper-scale ready: lessons from generating 1M concurrent user load


Every now and then we hear of a business-critical app failing during major promotional or seasonal events such as holiday sales. More often than not it turns out that the app is not ready for the massive demand created on such occasions – causing the servers to fail and resulting in dissatisfied customers and lost opportunity. To ensure that your app doesn’t make the headlines for the wrong reasons, we recommend that you use the cloud-load testing (CLT) service to validate that your app can handle massive spikes. 🙂

It also means that the tool or service that you use for load testing must be able to generate load at the scale you need. Here in the cloud-based load testing world, we recently made some infrastructure changes and are happy to announce that the CLT service is hyper-scale ready!  As part of validating our readiness, we successfully ran tests generating concurrent user load of as many as 1 million (1M) users. Note the concurrent part – this means that all 1 million users were active at the same time! Woohoo! Our tests covered both cases – automatically provisioned agents as well as the ‘bring your own subscription’ scenario using Azure IaaS VMs. This means that you can run hyper-scale tests regardless of whether you choose the auto-provisioned agents or bring your own machines.

The infrastructure changes we made have brought additional benefits. Two of them are worth calling out here:

  • You can now pack up to ~2.5x more virtual users per agent core than before. Our earlier guidance when using declarative webtests was that 250–1,000 virtual users could be generated from an agent core. That number is now 600–2,500.
  • Overall resource acquisition time has been reduced by more than 50%. This means that load tests now spend much less time getting the agents ready and will start faster. If for some large runs you saw a wait time of 15 minutes earlier, that would come down to 7 minutes now. Of course, the resource retention feature continues to be available to help you reduce that wait time further if you want to run load tests in a test->fix->test loop in a short period of time.

The rest of this blog post covers some of the questions that come up when running massive load tests. We will also look at the test settings that impact how load is generated, so you can tweak them appropriately to generate the desired load.

First, here’s a screenshot of one of the tests we ran:

1MTest

 

Did you see that – while simulating 1M concurrent users, 43.7M total requests were generated at a massive 71.1K RPS, with 0 failures! Ain’t that cool? Let us now look at the details.

App Setup: Since our intent was to validate that 1M concurrent users could be generated, we wanted the app to be one that would scale easily. This was so that we could focus on the core validation and on fixing any issues we might find in the service, and not have to worry about how to get the app to scale. We used a simple ASP.NET Web API comments app and deployed it to a set of Azure IaaS VMs behind a load balancer. As we ran the tests with increasing loads, whenever we found the app had reached its limits, we added more VM instances to beef it up. To get the app to serve 1M concurrent users’ worth of load, we used 12 instances of Standard_D4_V2 VMs in Azure.

Test: Using Visual Studio, we created a webtest that made requests to the homepage. The load test was set to use the constant load pattern, simulating a million concurrent users at once. In real life, the load usually ramps up over a period of time. But since we wanted to validate that our agents could simulate even an extreme condition with ease, we decided to go with the massive constant load. If you are interested in looking at the tests, they can be found on GitHub, here.

One of the most frequently asked questions is “how many virtual users can be generated by a single agent and how many agents are needed for my load test?”. Like most interesting questions, the answer to this one too is ‘it depends’. Since each app is different, the corresponding test that simulates the user scenario is different too. Your test could be a declarative webtest, or you could be writing code in a unit test or a coded webtest to most accurately mimic the scenario you are looking to validate. Tests can even use custom plugins. As the test runs, resources such as CPU, memory, disk, etc. are consumed on the load-generating agent. Performance counters for these resources on the test agents are collected during a load test and can help you determine whether more vusers can be packed onto the agent. The recommendation is to start with a small load to figure out the agent capacity for your particular test before doing a massive load test run.

TCPV4

 

The above screenshot shows various performance counters that can be collected on the agents. These are useful for determining the agent capacity for your test, based on the various resources your test consumes. Apart from the usual metrics such as CPU, memory, etc., we also collect data for the TCPv4 counter category. This one is interesting to look at when you run hyper-scale tests. When a load test runs and makes requests to the app, sockets are used for the connections. If requests fail with SocketExceptions, you want an easy way to determine whether the connection failed because the app couldn’t handle the load and closed the connection, or whether the agent is reaching its socket capacity. Every agent machine can make up to 64K connections.

You may now be wondering what factors impact how many connections are made for a virtual user, and whether there are any settings you can tweak to get the most out of a load agent. The answer lies in the “webtest connection model” setting. This setting controls how connections are used when a load test containing a webtest runs.

The Connection Per User model closely simulates the behavior of a user who is using a browser. The first connection is established when the first request in the Web performance test is issued. Additional connections may be used when a page contains dependent requests. These requests are issued in parallel by using additional connections. Up to 6 concurrent connections may be used to closely simulate browser behavior. A drawback to generating load using this model is that the number of connections held open on the agent will be high (up to 6 times the user load), which limits the user load that can be generated. Additional memory (associated with the connections) is also consumed, and extra processing time is required to close and reopen the connections as web tests complete and new web tests are started. For high user loads, we recommend that you use the “Connection Pool” model.

The Connection Pool model conserves the resources on the load test agent by sharing connections to the Web server among multiple virtual users. If the user load is larger than the connection pool size, the Web performance tests that are run by different virtual users will share a connection. This could mean that one webtest might have to wait before it issues a request when another webtest is using the connection. The average time that a Web performance test waits before it submits a request is tracked by the load test performance counter Average Connection Wait Time. This number should be less than the average response time for a page. If it is not, the connection pool size is probably too small and will likely limit the throughput of your page. An appropriate connection pool size can be determined using this formula:

Connection Pool Size = 64,000 / (Max Parallel Connections × Number of Server URI Hosts)

where Max Parallel Connections = 6 if there are dependent requests in your test, and 1 otherwise.

If requests are served through a CDN, you will need to adjust the number of server URI hosts accordingly.
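As a quick sanity check, the formula can be wrapped in a tiny helper (hypothetical, using the constants from the guidance above):

```csharp
// 64,000 is the per-agent connection ceiling mentioned earlier.
static int ConnectionPoolSize(bool hasDependentRequests, int serverUriHosts)
{
    int maxParallelConnections = hasDependentRequests ? 6 : 1;
    return 64000 / (maxParallelConnections * serverUriHosts);
}

// ConnectionPoolSize(false, 1) -> 64000  (single request, single host)
// ConnectionPoolSize(true, 1)  -> 10666  (integer division; rounding up gives 10667)
```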

To determine the pool size, run the webtest you want to use in your load test locally and use the result to determine whether you have dependent requests and the number of Server URI hosts. This is because most dependent requests, such as JavaScript, CSS, images, etc., are not recorded when the test is authored. Instead, the content of the response is parsed and the additional requests for this content are made during execution. So, even if dependent requests don’t show in the authored test, they may be present, and that can be determined by looking at the result of the execution. Let’s take some examples.

bingtestresult

 

This screenshot shows the result of a webtest that makes a request to bing.com. As you can see in the results, there are no additional requests being generated (the second request to bing.com is the result of a 302 redirect, not a dependent request). Also, the only server URI being reached is bing.com. Since I don’t have any other requests in the test that could have dependent requests needing examination, I arrive at Max Parallel Connections = 1 (no dependent requests) and Number of Server URI Hosts = 1 (bing.com). This gives me a pool size of the entire 64K range.

dependentreqresult

In this second example, again I have a single top-level request to my Azure website. The results show that there are several dependent requests (images, CSS, etc.), but all content is being served by a single server host. This means my Max Parallel Connections = 6 and Number of Server URI Hosts = 1, which gives me a pool size of 10667.

Similarly, if I had some dependent requests that got served via a CDN, my Server URI Host count would increase and my pool size would need to be adjusted accordingly.

What does all this translate to when running load tests? When running large tests, if the pool size calculation is smaller, you will likely need more agents to generate a higher load. In the above test, we reached the desired load with just 20 agents of 8 cores each, that is, a total of 160 cores. Every VSTS account gets a default allocation of 200 cores – for the most part, we have found that this meets the needs of our customers. Every once in a while though, if you are running a massive load test and need more agent cores for your tests, you can reach out to us at vsoloadtest@microsoft.com

And last, but not least, monitoring the app while the load test runs is also important and helps with troubleshooting when any issues occur. Use app monitoring tools such as Application Insights.

Hope you enjoyed this post – we look forward to hearing about your app’s success with CLT. The effort to validate the hyper-scale readiness of the CLT service was a joint effort between the Visual Studio load test product team and the Microsoft Testing Services team (Dindo Sabale, Shyamjith Pillai and Dennis Bass – thank you for the great run on this one!). If you need consulting services for testing your applications, you can reach them at srginfo@microsoft.com.  The Testing Services practice is part of Microsoft Enterprise Services and focuses on assisting customers with all of their testing needs including functional, performance, test strategy, knowledge transfer, and general consulting on all aspects of testing.

As always, for any feedback or queries related to load testing, reach us at vsoloadtest@microsoft.com

Just released – Windows developer virtual machines – September 2016 build


Today I’m happy to announce the first release of developer-ready, non-expiring (licensed) virtual machines. Last year, we released the evaluation VMs, and we took to heart the feedback that you wanted a fully configured Windows 10 development environment that won’t expire.

With these new VMs, all you need to do is insert your Windows 10 Pro license key and you can instantly start developing without having to worry about installing all the tooling. The VMs come in Hyper-V, Parallels, VirtualBox, and VMware flavors.
These installs contain:

If you don’t currently have a Windows 10 Pro license, you can get one from the Microsoft Store. If you just want to try out Windows 10 and UWP, use the free evaluation version of the VMs. The evaluation copies will expire after a pre-determined amount of time.

The Azure portal also has virtual machines you can spin up with the Windows Developer tooling installed as well!

If you have feedback on the VMs, please provide it over at the Windows Developer Feedback UserVoice site.

UWP Hosted Web App on Xbox One (App Dev on Xbox series)


For the fourth installment in the series, we are open sourcing yet another sample app: South Ridge Video, an open source video app developed as a hosted web application built with React.js and hosted on a web server. South Ridge can easily be converted to a UWP application that takes advantage of native platform capabilities and can be distributed through the Windows Store as any other UWP app. The source code is available on GitHub right now, so make sure to check it out.

If you missed the previous blog post from last week on Background Audio and Cross Platform Development with Xamarin, make sure to check it out. We covered how to build a cross-device music experience using Xamarin and how to support background audio on the Universal Windows Platform, including the Xbox One. To read the other blog posts and watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

South Ridge Video

image1

What if you could extend the investment put into your web application and make it available as an Xbox One Windows Store app? What if you could also continue to use your existing web frameworks, CDN and server backend, yet still be able to use native Windows 10 APIs?

You can! A Hosted Web App (HWA) is a web app that can be submitted to the Store just like any other Universal Windows Platform (UWP) app. Since you’ve already invested in the development of the web app, your transformation into a HWA can be done rather quickly and easily.

To do this, you’ll reuse your existing code and frameworks as if you were developing for the browser. Let’s take a look at the South Ridge Video example app on GitHub. This is an app built with React.js and uses a web server back-end.  The application is delivered by the server when the app is launched and the UWP app will host that web content.

However, the HWA is not just a simple wrapper; you can also call into native Windows 10 APIs, such as adding an event to the user’s calendar. You can even use APIs such as toast notifications and camera capture or media transport controls with background audio. Note that when accessing sensitive APIs, make sure you declare the required App Capability in the appxmanifest.
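For example, a toast notification can be raised from the web app's own JavaScript; a sketch, guarded so it is a no-op when the page runs in a plain browser rather than the UWP host:

```javascript
// The Windows namespace is injected only when the page runs inside the
// UWP host, so feature-detect it before touching any WinRT API.
function showToast(message) {
    if (typeof Windows === 'undefined') {
        return false; // plain browser: nothing to do
    }
    var notifications = Windows.UI.Notifications;
    var template = notifications.ToastTemplateType.toastText01;
    var toastXml = notifications.ToastNotificationManager.getTemplateContent(template);
    toastXml.getElementsByTagName('text')[0]
        .appendChild(toastXml.createTextNode(message));
    var toast = new notifications.ToastNotification(toastXml);
    notifications.ToastNotificationManager.createToastNotifier().show(toast);
    return true;
}
```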

Now that your web app is ready and you want to turn it into a HWA, let’s take a look at the things you need to do to get it ready for Windows Store distribution to Xbox One.

Local Testing on a retail Xbox One

First, you need to put your retail Xbox One into developer mode so you can deploy and test the app. This is pretty straightforward; it involves installing an app from the Windows Store. Go here to see the steps on how to activate dev mode and how it works.

Now that you have your Xbox ready for deployment, let’s review the ways you can generate the app. There are three main options:

  • Using a web browser-based tool, such as Windows App Studio, to generate it for you
  • On a Mac or PC using Manifold.js
  • On a PC using Visual Studio

Let’s drill down into each option to see which best fits your needs.

Windows App Studio

Let’s start with the browser-based option, using Windows App Studio to create the HWA. Windows App Studio is a free online app creation tool that allows you to quickly build Windows 10 apps.

  1. Open Windows App Studio in your web browser.
  2. Click Start now!
  3. Under Web app templates, click Hosted Web App.
  4. Follow the on-screen instructions to generate a package ready for publishing to the Windows Store.
  5. You can then download the generated package that you’ll publish to the Store.

On a Mac using Manifold.js

What you need:

  • A web browser
  • A command prompt

ManifoldJS is a Node.js app that easily installs from NPM. It takes the metadata about your web site and generates native hosted apps across Android, iOS and Windows. If your site does not have a web app manifest, one will be automatically generated for you. For example, take a look at the web app manifest for South Ridge Video.

  1. Install NodeJS, which includes NPM (Node Package Manager)
  2. Open a command prompt and install it from NPM: npm install -g manifoldjs
  3. Run the manifoldjs command on your web site URL: manifoldjs http://southridge.azurewebsites.net
  4. Follow the steps in the video below to complete the packaging (and publish your Hosted Web App to the Windows Store)

image2

A couple notes about using Manifold.js

  • If you have a W3C manifest, it will use that for app info; otherwise, it will be created automatically.
  • Use the -windows10 platform flag to generate only for UWP.

On a Windows PC using Visual Studio

What you need:

  • Visual Studio 2015. The free, full-featured Visual Studio Community 2015 includes Windows 10 developer tools, universal app templates, a code editor, a powerful debugger, Windows Mobile emulators, rich language support and much more—all ready to use in production. The same is true for Professional or Enterprise variants of VS 2015.
  • (Optional) Windows Standalone SDK for Windows 10. If you are using a development environment other than Visual Studio 2015, you can download a standalone Windows SDK for Windows 10 installer. Note that you do not need to install this SDK if you’re using Visual Studio 2015 Update 3; it is already included.

Steps to create the HWA (go here to see all these steps with screenshots):

  • Pick the website URL and copy it into your clipboard.
  • Launch Visual Studio 2015.
    1. Click File.
    2. Click New Project.
    3. Under JavaScript, then Windows Universal, click Blank App (Windows Universal).
  • Delete the VS project template generated code.
    1. Since this is a hosted web app where the content is served from a remote server, you will not need most of the local app files that come with the JavaScript template by default. Delete any local HTML, JavaScript, or CSS resources. All that should remain is the appxmanifest file, where you configure the app, and the image resources.
  • Open the appxmanifest file.
    1. Under the Application tab, find the Start page text field.
    2. Replace html with your website URL.
  • Set the boundaries of your app.
    1. Application Content URI Rules (ACURs) specify which remote URLs are allowed access to your app and to the Universal Windows APIs. At the very minimum, you will need to add an ACUR for your start page and any web resources utilized by that page. For more information on ACURs, click here.
    2. Open the appxmanifest file.
    3. Click the Content URIs.
    4. Add any necessary URIs for your start page.

For example:

    1. http://southridge.azurewebsites.net
    2. http://*.azurewebsites.net
  • Set the WinRT Access to All (for each URI you added).
  • At this point, you have a fully functioning Windows 10 app capable of accessing Universal Windows APIs! You can now deploy to the Xbox One using the remote debugger as if it were any Windows 10 remote device, or you can use the Device Portal (covered in the next section).

Installing your HWA on the Xbox One for local testing

At this point, you now have an app package (APPX) or a manifest folder (Manifold.js) containing the files you need to install or publish your application. You can “side-load” your HWA onto the Xbox by using the Device Portal (go here for more information about the Device Portal for Xbox One). Once you’ve logged into the portal, you can then deploy your app.

APPX Deployment (Visual Studio / App Studio)

Here are the steps (go here to see these steps below with screenshots):

  1. Go to the Apps tab on the portal.
  2. You’ll see two buttons to upload items to the device: App Package and Dependencies.
  3. Tap the App Package button and navigate to the package folder you generated for your app. In there, you’ll find an appx file. Select that file and upload it.
  4. Now tap the Dependencies button, navigate again to your package folder and drill down to the dependencies subfolder. This contains the dependencies that your app needs to run – upload each one in the folder. (Note that you only need to do this when deploying via the Portal. Visual Studio delivers dependencies when remote debugging, and the end user’s machine will already have these installed when delivered via the Store.)
  5. With the app package and the dependencies uploaded, click the Go button under the “Deploy” section and the app will be installed.
  6. Now go to the Xbox and launch the app!

Loose Files Deployment (Manifold.js)

The only difference when deploying with Manifold.js is that you have “loose” files instead of an APPX package. In the steps above, instead of uploading an APPX and dependencies, choose the “upload loose files” option and select the manifest folder. The portal will look in that folder for the manifest file and gather all the required pieces to complete the installation.

Design and User Experience Considerations for Xbox One apps

Designing for the “10-Foot Experience”

Xbox One is considered a “10-foot experience”, meaning that your users will likely be sitting a minimum of 10 feet away from the screen. You’ll want to consider the app’s ability to be used at that distance as opposed to being accessed within a web browser, from two feet away, with a mouse and keyboard. This article explains how to make sure you get the best user experience for the 10-foot experience scenario.

Designing for TV and understanding the “TV SafeZone”

Television manufacturers will apply a “safe-zone” around the content that can clip your app. By default, we apply a safe border around your app to account for this; however, you can ensure that your app takes the full screen size using the following code:


// Opt out of the default TV-safe border and draw to the full window
var applicationView = Windows.UI.ViewManagement.ApplicationView.getForCurrentView();
applicationView.setDesiredBoundsMode(Windows.UI.ViewManagement.ApplicationViewBoundsMode.useCoreWindow);

Understanding and managing XY focus and navigation

You’ll want to consider your app’s ability to handle XY navigation by the user and disable the “Mouse Mode” that’s turned on by default for UWP apps. This is important because users’ main input method for Xbox One is a handheld controller. See here for more on how to work with XY focus and navigation. Use the following code to enable XY navigation using JavaScript:


// Disable mouse mode: emulate keyboard events (arrow keys) from the gamepad
navigator.gamepadInputEmulation = "keyboard";

To enable directional navigation, you can use the TVJS library which is discussed below.

Considering your app’s appearance when another app is snapped

When users run an app on Xbox One, a second app may be ‘snapped’ to the right of the main app. When this is the case, the main app is considered to be in Fill Mode. While testing your app, open Cortana or another ‘snappable’ app to see how your app appears. You want to make sure your UI is still usable and has a graceful transition between Full Screen and Fill Mode. Implement an adaptive UI to make sure the user has the best experience for this scenario.
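When testing how your app behaves next to a snapped app, one lightweight approach is to compute the layout mode from the window width and switch a CSS class accordingly. This is only a sketch: the 1920-pixel breakpoint and the class names are assumptions for illustration, not values from official guidance.

```javascript
// Pure helper: classify the layout from the current width.
// 1920 is an assumed width for a full-screen 1080p Xbox One app;
// anything narrower is treated as "fill" (another app is snapped).
function classifyLayout(width) {
    return width >= 1920 ? "fullscreen" : "fill";
}

// Wire it up in the app (guarded so the helper above can also be
// exercised outside a browser environment).
if (typeof window !== "undefined") {
    window.addEventListener("resize", function () {
        document.body.className = classifyLayout(window.innerWidth);
    });
}
```

Your stylesheet can then scope rules under the fill class to keep the UI usable in the narrower view.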

Integrate with the System Media Controls

If your app is a media app, it is important that your app responds to the media controls initiated by the user via the on-screen buttons, Cortana (typically through speech), the System Media Transport Controls in the nav pane or the Xbox and SmartGlass apps on other devices. Take a look at the MediaPlayer control from TVJS which automatically integrates with these controls or check out how to manually integrate with the System Media Transport Controls.
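Wiring the controls up manually looks roughly like the sketch below. The WinRT calls (getForCurrentView, the buttonpressed event) exist in the JavaScript projection, but the isSupportedCommand helper and the placeholder player hooks are illustrative assumptions, not part of the API.

```javascript
// Pure helper: commands this sample player chooses to support.
function isSupportedCommand(name) {
    return ["play", "pause"].indexOf(name) !== -1;
}

// Only runs inside a Windows app host where WinRT is projected.
if (typeof Windows !== "undefined") {
    var smtc = Windows.Media.SystemMediaTransportControls.getForCurrentView();
    smtc.isPlayEnabled = isSupportedCommand("play");
    smtc.isPauseEnabled = isSupportedCommand("pause");
    smtc.addEventListener("buttonpressed", function (e) {
        var button = Windows.Media.SystemMediaTransportControlsButton;
        if (e.button === button.play) {
            // resume your player here
        } else if (e.button === button.pause) {
            // pause your player here
        }
    });
}
```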

TVJS

TVJS is a collection of helper libraries that make it easier to build web applications for the TV. If you are building a hosted web app that will also run on the Xbox, TVJS can help add support for Directional navigation, as well as provide several controls that make it easier to interact with content on the TV.

DirectionalNavigation

Directional navigation is a feature that provides automatic two-dimensional navigation within the pages of your TV app. Apps won’t need to trap and handle navigation within their pages, or to explicitly specify all the valid focus targets for each element in the UI. With automatic focus handling, users can navigate around in an intuitive and robust way.

When users enter directional input to move from one element to another, the automatic focus algorithm looks at the set of potential focus targets, determines the next element to move to and then automatically sets the focus to that element. To determine the element to move to, the algorithm combines directional input, past focus history, and the physical layout of UI elements on the page.
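To make that selection step concrete, here is a heavily simplified sketch. The real TVJS algorithm also weighs focus history and element overlap; this version scores candidates purely by squared distance in the requested direction, and every name in it is illustrative.

```javascript
// Given the focused element's center point, a direction, and the
// center points of candidate elements, pick the nearest candidate
// lying in that direction (or null if none qualifies).
function pickNextFocus(current, direction, candidates) {
    var best = null;
    var bestDist = Infinity;
    candidates.forEach(function (c) {
        var dx = c.x - current.x;
        var dy = c.y - current.y;
        var inDirection =
            (direction === "right" && dx > 0) ||
            (direction === "left" && dx < 0) ||
            (direction === "down" && dy > 0) ||
            (direction === "up" && dy < 0);
        if (!inDirection) {
            return; // candidate is not in the requested direction
        }
        var dist = dx * dx + dy * dy;
        if (dist < bestDist) {
            bestDist = dist;
            best = c;
        }
    });
    return best;
}
```

For example, from an element centered at (0, 0), moving "right" among candidates at x 10 and x 50 selects the one at x 10.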

To enable directional navigation, include the TVJS DirectionalNavigation script reference in your page.

By default, only a, button, input, select, and textarea elements are considered focusable. To make other elements focusable, set a valid tab index on the element:

<div tabindex="0">This div is eligible for focus</div>

Make sure to check out the documentation to learn how to change the root element, set initial focus, how to override next focus, best ways to optimize controls for focus and how to customize the input. Don’t miss the samples as well that go through a lot of great examples.

Submitting your app to the Windows Store

Once you’re happy with the app after testing it on Xbox, it’s time to publish it to the Windows Store!

Depending on which route you took to build the app, the process is a little different on how you build the package for Store submission.

Xbox One and Hosted Web Apps are a great way to deliver your web application to millions of Xbox One users and enter the living room experience with minimal effort.

Wrap up

Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, watch the event if you missed it and let us know what you think through the comments below or on twitter.

Next time we will release another app experience and go in-depth on how to build great Internet of Things experiences using the Universal Windows Platform and how to make them shine on the Xbox One.

Until then, happy coding!

Resources for Hosted Web Apps

Download Visual Studio to get started!

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Lonely Coding


It's official. I'm a better programmer when I'm pairing with someone. Pair Programming (two people, one keyboard) has been around for at least 20+ years, if not much longer. Usually one person types while another person (paces around and) thinks. It is kind of a "driver and navigator" model.

Everyone is different, to be clear, so it's very possible that you are the kind of person who can disappear into a closet for 8 hours and emerge with code, triumphant. I've done this before. Some of my best projects have involved me coding alone and that's fine.

However, just as we know that "diverse teams make for better projects," the same is true in my experience when coding on specific problems. Diversity isn't just color and gender; it's as much background, age, personal history, work experience, expertise, programming language of choice, heck, it's even google-ability, and more!

How many times have you banged your head against a wall while coding only to have a friend or co-worker find the answer on their first web search?

Good pair programming is like that. Those ah-ha moments happen more often and you'll feel more than twice as productive in a pair.

In fact, I'm trying to pair for an hour every week remotely. Mark Downie and I have been pairing on DasBlog on and off for a year or so now in fits and starts. It's great. Just last week he and I were trying to crack one problem using regular expressions (yes, then we had two problems) and because there were two of us looking at the code it was solved!

Why is pair programming better?

Here's a few reasons why I think Pair Programming is very often better.

  • Focus and Discipline - We set aside specific times and we sprint. We don't chat, we don't delete email, we code. And we also code with a specific goal or endpoint in mind.
  • Collective ownership - I feel like we own the code together. I feel less ego about the code. Our hacks are our hacks, and our successes are shared.
  • Personal growth - We mentor each other. We learn and we watch how the other moves around the code. I've learned new techniques, new hotkeys, and new algorithms.

Let's talk about the remote aspect of things. I'm remote. I also like to poke around on non-work-related tech on the side, as do many of us. Can I pair program remotely as well? Absolutely. I start with Skype, but I also use Google Hangouts, Join.me, TeamViewer, whatever works that day.

If you're a remote person on a larger team, consider remote pair programming. If you're a consultant, or perhaps you've left a big corporate job to strike off on your own, you might be lonely. Seriously, ask yourself that hard question. It's no fun to realize or have to declare you're a lonely coder, but I am and I will. I love my job and I love my team, but if I go a day or two without seeing another human or spending some serious time on Skype I get really tense. Remote pair programming can really reduce that feeling of lonely coding.

I was at a small tech get together in Atlanta a few days ago and I knew that one person there was a singular coder at their small business while another at the table was an emerging college student with an emerging talent. I made a gentle suggestion that maybe they consider pairing up on some side projects and they both lit up.

Consider your networks. Are there people you've met at conferences or at local user groups or meetups that might be good remote pairing partners? This might be the missing link for you. It was for me!

Do you pair? Do you pair remotely? Let us all know in the comments.

* Stock photo purchased from ColorStock - Your customers are diverse, why aren't your stock photos?


Sponsor: Big thanks to Telerik for sponsoring the feed this week. Try Kendo UI by Progress: The most complete set of HTML5 UI widgets and JavaScript app tools helping you cut development time.



© 2016 Scott Hanselman. All rights reserved.
     

Work Item Visualization is one of many productivity extensions on the marketplace


We are pleased to announce the latest update of the Work Item Visualization extension. It’s one of our first productivity extensions, enabling you to easily visualize work item relationships and traceability from requirements to code, to test cases, to releases.

The update contains tons of bug fixes and these new features: annotations, saving visualizations, and find on visualizations.


What’s next?

Here’s what the team has planned:

  • Currently we are focused on converting code to TypeScript and upgrading necessary base libraries. Once complete, we are ready for another wave of features and improvements.
  • Some of the ideas from the team and community that are waiting in the backlog are:
    • Allowing to filter what link types are shown on visualizations
    • Offering full sharing and favorites functionality for visualizations on personal and project level, including being able to add / remove (/ send and share)
    • Having context menu on the nodes which allows different actions such as collapse / expand / open in new tab / …
    • Having ordered and link type aware visualizations (e.g. Parent -> Child relationships)

A bit of history

By Jeff Levinson, VSTS Customer Success and product owner for the work item visualization extension:

In 2010 Microsoft released the Architecture Explorer tool for visualizing code. It worked by reading an XML file and displayed nodes and edges. At the time I was facing a particular problem – how do I show an auditor that A is linked to B is linked to C and who worked on everything? Sure, I could use a complex set of work item queries but auditors don’t want to wade through hundreds of queries and hundreds of links – they want areas where there are potential problems to jump out at them. This started me down what turned out to be a multi-year project to bring work item visualization to Team Foundation Server and eventually Visual Studio Team Services.

The first version was a fairly robust but manually generated map of relationships that could take a long, long time to generate:


In 2012, Microsoft opened up the Architecture Explorer so that it was extensible, and another revision was made to allow users to drill into the nodes before plotting them. The tool started gaining in popularity (to date there have been 6,657 downloads on Visual Studio Gallery). But it was only available to people with the very highest version of Visual Studio – I wanted to bring it to the masses. Which led to the first version of the Work Item Visualization Web, which I built in conjunction with Ahmed Al-Asaad (Canada) and Vinicius Scardazzi (Brazil):


This extension turned out to be not as popular because it had to be hosted on a separate web server and it was more complex to set up because it used the TFS Object Model (the REST APIs didn’t exist at that time). Then Microsoft introduced the VSTS/TFS Extensions SDK and an opportunity presented itself: could the visualization web be integrated directly into VSTS/TFS so that anyone could get this view, and could it be done in a way where nodes are added dynamically so as not to overload the viewer and the user with too much information?

At that point I did not have the time to do it so I approached the Rangers. They had two big things going for them: a) practically all of the rangers are better web developers than I am and b) they had a drive and passion to produce items of value to the community. Taavi Koosaar jumped in as the lead with help from Mattias Sköld. They re-wrote it almost from scratch to provide capabilities that I was unable to provide and truly added to the user value.

To date they have made numerous updates (I especially like the annotations and the find on visualization) that I could not have envisioned and they have kept the needs of the users in mind by responding to the feedback, making it available in TFS and a host of bug fixes. I think more can come of this in the future and we welcome your feedback to continue to drive the development of this extension. There are lots of great suggestions such as adding Pull Requests to the visualization. We want to hear from you on how you are using it, how it is providing you benefit and if it is saving you time.

At last look, this appears to be the most popular non-Microsoft extension in the store which means that the Rangers are fulfilling their mission of providing benefit to the community. They sacrifice a lot of free time to keep up on the latest capabilities of the product and to build these extensions that potentially benefit all of the users of VSTS/TFS. Enjoy this extension and keep the feedback coming – critical or not, this feedback helps drive our passion.

Looking for more?

Our list of Ranger DevLabs extensions has grown from four, back in November 2015, to twelve on the Marketplace.

Microsoft DevLabs is an outlet for experiments from Microsoft, experiments that represent some of the latest ideas around developer tools. Solutions in this category are designed for broad usage, and you are encouraged to use and provide feedback on them; however, these extensions are not supported nor are any commitments made as to their longevity.

Here’s a little bit about them:

  • Branch Visualization – Visualize your Team Foundation Version Control (TFVC) branches for your current project.
  • Build Usage – Show how many build minutes are being used within an account on your dashboard.
  • Countdown Widget – Your team has important dates to remember; make them visible on your dashboard.
  • File Owner – Allows users to quickly and easily determine ownership of a file.
  • Folder Management – Quickly create a folder right from the web.
  • Print Cards – Print cards from your Kanban board for planning exercises with your team, or on a physical scrum board.
  • Roll-up Board – Displays an aggregated view of your backlog boards on your dashboards.
  • Sample Data – Lets you create and remove sample data in your project.
  • Show Area Path Dependencies – Provides a lightweight way to manage dependencies on other teams.
  • Work Item Details – View details of work item(s) on your dashboard.
  • Test Case Explorer – Helps you manage your test cases better.
  • Work Item Visualization – Visualize work items from within the work item form.

Many have been open sourced to be shared as sample code and to foster community collaboration. See our Library of tooling and guidance solutions and Samples Overview | Extensions for Visual Studio Team Services for more information.


Feedback

We look forward to hearing from you.

Sharing Authorization Cookies between ASP.NET 4.x and ASP.NET Core 1.0


ASP.NET Core 1.0 is out, as is .NET Core 1.0, and lots of folks are making great cross-platform web apps. These are web apps that are built on .NET Core 1.0 and run on Windows, Mac, or Linux.

However, some people don't realize that ASP.NET Core 1.0 (that's the web framework bit) runs on either .NET Core or .NET Framework 4.6, aka "Full Framework."

Once you realize that it can be somewhat liberating. If you want to check out the new ASP.NET Core 1.0 and use the unified controllers to make web apis or MVC apps with Razor you can...even if you don't need or care about cross-platform support. Maybe your libraries use COM objects or Windows-specific stuff. ASP.NET Core 1.0 works on .NET Framework 4.6 just fine.

Another option that folks don't consider when talk of "porting" their apps comes up at work is: why not have two apps? There's no reason to start a big porting exercise if your app works great now. Consider that you can have a section of your site on ASP.NET Core 1.0 and another on ASP.NET 4.x, and the two apps could share authentication cookies. The user would never know the difference.

Barry Dorrans from our team looked into this, and here's what he found. He's interested in your feedback, so be sure to file issues on his GitHub Repo with your thoughts, bugs, and comments. This is a work in progress and at some point will be updated into the official documentation.

Sharing Authorization Cookies between ASP.NET 4.x and .NET Core

Barry is building a GitHub repro here with two sample apps and a markdown file to illustrate clearly how to accomplish cookie sharing.

When you want to share logins between an existing ASP.NET 4.x app and an ASP.NET Core 1.0 app, you'll be creating a login cookie that can be read by both applications. It's certainly possible for you, Dear Reader, to "hack something together" with sessions and your own custom cookies, but please let this blog post and Barry's project be a warning. Don't roll your own crypto. You don't want to accidentally open up one or both of your apps to hacking because you tried to extend auth/auth in a naïve way.

First, you'll need to make sure each application has the right NuGet packages to interop with the security tokens you'll be using in your cookies.

Install the interop packages into your applications.

  1. ASP.NET 4.5

    Open the nuget package manager, or the nuget console and add a reference to Microsoft.Owin.Security.Interop.

  2. ASP.NET Core

    Open the nuget package manager, or the nuget console and add a reference to Microsoft.AspNetCore.DataProtection.Extensions.

Make sure the Cookie Names are identical in each application

Barry is using CookieName = ".AspNet.SharedCookie" in the example, but you just need to make sure they match.

services.AddIdentity<ApplicationUser, IdentityRole>(options =>
{
    // Generic arguments were stripped by formatting; substitute your
    // own user, role, and DbContext types if they differ from these
    // default template names.
    options.Cookies = new Microsoft.AspNetCore.Identity.IdentityCookieOptions
    {
        ApplicationCookie = new CookieAuthenticationOptions
        {
            AuthenticationScheme = "Cookie",
            LoginPath = new PathString("/Account/Login/"),
            AccessDeniedPath = new PathString("/Account/Forbidden/"),
            AutomaticAuthenticate = true,
            AutomaticChallenge = true,
            CookieName = ".AspNet.SharedCookie"
        }
    };
})
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();

Remember: the CookieName property must have the same value in each application, and the AuthenticationType (ASP.NET 4.5) and AuthenticationScheme (ASP.NET Core) properties must likewise match across applications.

Be aware of your cookie domains if you use them

Browsers naturally share cookies between the same domain name. For example if both your sites run in subdirectories under https://contoso.com then cookies will automatically be shared.

However if your sites run on subdomains a cookie issued to a subdomain will not automatically be sent by the browser to a different subdomain, for example, https://site1.contoso.com would not share cookies with https://site2.contoso.com.

If your sites run on subdomains you can configure the issued cookies to be shared by setting the CookieDomain property in CookieAuthenticationOptions to be the parent domain.

Try to do everything over HTTPS and be aware that if a Cookie has its Secure flag set it won't flow to an insecure HTTP URL.

Select a common data protection repository location accessible by both applications

From Barry's instructions, his sample will use a shared DP folder, but you have options:

This sample will use a shared directory (C:\keyring). If your applications aren't on the same server, or can't access the same NTFS share you can use other keyring repositories.

.NET Core 1.0 includes key ring repositories for shared directories and the registry.

.NET Core 1.1 will add support for Redis, Azure Blob Storage and Azure Key Vault.

You can develop your own key ring repository by implementing the IXmlRepository interface.

Configure your applications to use the same cookie format

You'll configure each app - ASP.NET 4.5 and ASP.NET Core - to use the AspNetTicketDataFormat for their cookies.

Cookie Sharing with ASP.NET Core and ASP.NET Full Framework

According to his repo, this gets us started with cookie sharing for Identity, but there still needs to be clearer guidance on how to share the Identity 3.0 database between the two frameworks.

The interop shim does not enable the sharing of identity databases between applications. ASP.NET 4.5 uses Identity 1.0 or 2.0, ASP.NET Core uses Identity 3.0. If you want to share databases, you must update the ASP.NET Identity 2.0 applications to use the ASP.NET Identity 3.0 schemas. If you are upgrading from Identity 1.0, you should migrate to Identity 2.0 first, rather than try to go directly to 3.0.

Sound off in the Issues over on GitHub if you would like to see this sample (or another) expanded to show more Identity DB sharing. It looks to be very promising work.


Sponsor: Big thanks to Telerik for sponsoring the blog this week! 60+ ASP.NET Core controls for every need. The most complete UI toolset for x-platform responsive web and cloud development.Try now 30 days for free!



© 2016 Scott Hanselman. All rights reserved.
     

Bing Predicts: eSport’s League of Legend™ winner

Bing delivers search experiences that match people’s passions, like sports and entertainment. And many of our users are passionate about one area in particular: the fascinating world of eSports. If this describes you, Bing has you covered.
 
For those not familiar, Electronic Sports, or eSports, is a form of competitive video gaming that engages millions of passionate players and viewers around the world. There are multiple events and competitions happening every year, and the prizes for the winning teams can be in the millions of dollars.
 
One of the major eSports events of the year, the 2016 League of Legends™ World Championship1 (aka Worlds), kicked off on September 29 and runs through October 29. With competitions happening in cities across the U.S., there can be a lot to keep track of, and Bing is ready to help fans stay on top of the action.
 

The 2016 League of Legends™ World Championship:

Find team groupings, match schedules, live streams, and replays

 
Search Bing for “league of legends 2016” or “lol worlds 2016” and get information on when matches will take place, what the group standings are, and how different teams are ranking within their group. For a quick view on how the competition will map out, click on the ‘Knockout’ tab of the answer and get a snapshot of the dates and times for the quarterfinal, semifinal, and final games. 

Group standings, Knockout
Bing Predicts Group Standings 
 
Wondering if a team (e.g., SK Telecom T1) will be Worlds Champion again this year? Click on that team to get its roster, or click over to the ‘Matches’ tab to see the matches the team is scheduled to play. But will it win? Bing’s predictions are indicated by a blue label next to the team predicted to win, along with its likelihood of winning.
 
You can also click on an individual player to get detailed championship stats.

Team roster, Matches Player card
Bing Predicts Team Roster Bing Predicts Matches Player Cards   
 
If you want a refresher on the results of last year’s championship, you can find 2015 group standings and the quarterfinal, semifinal, and final results by searching for “lol worlds 2015” and clicking on the ‘Knockout’ tab.
 
To see detailed results and watch the match replay, click on any of the match tiles and you’ll be greeted with our detailed match answer.
 
Knockout, Match detail
Bing Predicts Knockout Bing Predicts Match Details  
 
 

Counter-Strike: Global Offensive™–ESL One New York, plus more

 
If you’re a fan of Counter-Strike: Global Offensive™, the Bing team’s got your number, too.
 
Let’s say you want to know about the most exciting tournaments and gameplay. Simply search for “csgo events” to get information on dates of past, ongoing, or future events, their locations, and prizes. To get even more details on an event, simply click on the event row.

Events, Event details
Bing Predicts Events Bing Predicts Event Details 
 
One of the most popular searches is for top teams among the 100+ teams competing in events all around the world, and Bing makes it easy to find the information. Simply search for “csgo top teams” to see who’s on top. And if you want to jump straight to the best matches across all the various events just search “csgo matches” to find featured matches that are live, just completed, or scheduled to start soon.
 
Top teams, Featured matches
Bing Predicts Top Teams Bing Predicts Featured Matches
 
Just like with League of Legends™, you can click on a match to find additional statistics, stream info, and game highlights. Or search for team rosters and find player statistics along with the matches a given team is scheduled to play by simply clicking on a team or searching for your favorite team, as in “nip csgo”.

Detailed match view, Team roster
Bing Predicts Detailed Match View Bing Predicts Team Roster
 
Hope you enjoy these eSports answers and stay tuned for more updates coming soon. If you have feedback or ideas, please reach out to us on Bing Listens. And don’t forget that Bing predicts results during group stages and the knockout rounds, so let us know if you agree with our predictions or have your own by using #bingpredictsworlds2016 on social media.
 
-The Bing Team
 
 





 
Footnotes
1. League of Legends™ is a trademark owned by Riot Games, Inc., which is also the administrator of the League of Legends™ World Championship.
2. Counter-Strike: Global Offensive™ is a trademark owned by Valve Corporation, which is also the administrator of Counter-Strike: Global Offensive™–ESL One New York.

Visual Studio “15” Preview Compilers Feedback


There is a new Connect Form for reporting issues with Visual Studio “15” Preview 5 compilers. If you are not using the Visual Studio IDE, you can report issues using this form.

If you are using the IDE, we prefer you use the Report-a-Problem feedback system.

Thank you.

Storing and using secrets in Azure


Most applications need access to secret information in order to function: it could be an API key, database credentials, or something else. In this post, we’ll create a simple service that will compare the temperatures in Seattle and Paris using the OpenWeatherMap API, for which we’ll need a secret API key. I’ll walk you through the usage of Azure’s Key Vault for storing the key, then I’ll show how to retrieve and use it in a simple Azure function.
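The comparison at the heart of the service is trivial; the interesting work is fetching the readings securely. As a sketch of where we'll end up (the helper name and the shape of its inputs are assumptions for illustration, not code from the finished function):

```javascript
// Pure comparison: given two readings like { city, temp } (in °C),
// report which city is warmer, or a tie.
function warmerCity(a, b) {
    if (a.temp === b.temp) {
        return "It's a tie at " + a.temp + "\u00B0C";
    }
    var warmer = a.temp > b.temp ? a : b;
    return warmer.city + " is warmer at " + warmer.temp + "\u00B0C";
}
```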

Prerequisites

In order to be able to follow along, you’ll need an Azure subscription. It’s very easy to get a trial subscription, and get started for free.

Where to store secrets?

There are of course many different places where people store such secrets. From worst to best, one could think of the following: in your source code repository on GitHub (of course, nobody should ever do that), in configuration files (encrypted or not), in environment variables, or in specialized secret vaults.

Which one you choose depends on the level of security your application requires. Oftentimes, storing an API key in an environment variable will be adequate (what is never adequate is hard-coded values in code or configuration files checked into source control). If you require a higher level of security, however, you’ll need a specialized vault such as Azure Key Vault.

Azure Key Vault is a service that stores and retrieves secrets in a secure fashion. Once stored, your secrets can only be accessed by applications you authorize, and only on an encrypted channel. Each secret can be managed in a single secure place, while multiple applications can use it.

Setting up Key Vault

First, we’re going to set-up Key Vault. This requires a few steps, but only steps 4 and 5 have to be repeated for new secrets, the others being the one-time building of the vault.

  1. Open the Azure portal and click on Resource groups. Choose an existing group, or create a new one. This is a matter of preference, but is otherwise inconsequential from a security standpoint. For this tutorial, we’ll create a new one called “sample-weather-group”. After clicking the Create button, you may have to wait a few seconds and refresh the list of resource groups.

    Creating a new resource group

  2. Select the newly created group, then click the Add button over its property page. Enter “Key Vault” in the search box, select the Key Vault service, then click Create.

    Adding the Key Vault service to the resource group

  3. Enter “sample-weather-vault” as the name of the new vault. Select the right subscription and location, and leave the “sample-weather-group” resource group selected. Click Create.

    Creating a new vault

  4. If you refresh the resource group property page, you’ll see the new vault appear. We’re now ready to add a key to it. Get an API key from OpenWeatherMap. Select the vault in the list of resources under the resource group, then select Secrets. You can now click Add to add a new secret. Under Upload options, select Manual. Enter “open-weather-map-key” as the name of the secret, and paste the API key from OpenWeatherMap into the value field. Click Create.

    Storing the secret in Key Vault

  5. We will later need the URL for the secret we just created. This can be found under Secret Identifier on the property page of the current version of the secret, which can be reached by navigating to Secrets under the key vault, then clicking the secret, and then its latest version.

    Getting the URL for the secret

And this is it for now for Key Vault: we now have a vault containing our secret. Next, we’ll need to set up access so that we can securely retrieve the key from our application.

Preparing Active Directory authentication

The application will need to securely connect to the vault, for which it will have to prove its identity, using some form of master secret. This is similar to the master password that a password vault uses, and makes sure the identity of our application can be managed independently from the secret, which may be shared with more than one application. We’ll use Active Directory for this.

In this post, we’ll authenticate using a secret key, but it’s important to note that it is possible to use a certificate instead for added security. Please refer to Authenticate with a Certificate instead of a Client Secret for more information.

  1. To access Active Directory, in the Azure portal, select More Services and choose Azure Active Directory (currently in preview). In the next menu that will appear, click App registrations. Click the Add button above the list of applications. You’ll be asked for a name for the application. We’ll choose “sample-weather-ad”. For the application type, leave the default Web app / API selected. We also have to provide a sign-on URL. For our purposes, this doesn’t need to actually exist, but only to be unique. Click the Create button.

    Naming the AD application

  2. Now that the application has been created, select it in the application list, so that you can see your application’s security principal ID, which can be found under Managed Application In Local Directory in the application’s property page. This principal ID will represent the authenticated identity of our application. You’ll need that and a key.

    Viewing the application's principal ID

    Currently, this principal ID does not get automatically generated until the Active Directory application is logged into for the first time. This is a temporary issue that is being looked into. In the meantime, it can be worked around by browsing to `<authorization endpoint>?client_id=<application ID>`, where the authorization endpoint can be found by clicking the Endpoints button above the list of registered apps, and the application ID can be found on the property page for the application.

    Getting the authorization URL

    Navigating to the URL we composed will require you to authenticate with the credentials of a user that has admin rights on the subscription, and then it will yield an error page that can be safely ignored. If all went well, you should now be able to see your application’s principal ID in the property page.

    Getting the AD application's principal ID

  3. We can now proceed to create credentials that our Azure Function will be able to use to authenticate to Active Directory as the application we created earlier. Click on All settings, then select Keys. We can add a new key by entering a description, selecting an expiration, and hitting the Save button. If you do choose to have the key expire, you should also take the time to create a reminder on the schedule of the team in charge of managing this application to renew it.

    Adding a new key

    Note that using a different key and id for each application that will use the secrets makes it possible to revoke access to the whole vault for a specific application in one operation.

  4. Once you’ve saved, the key can be viewed and copied to a safe place. Do it now, because this is the last time the Azure portal is going to show it.

    The generated key

  5. We’ll also need the URL of the Active Directory endpoint. This is not the URL that we manually entered earlier when we created the AD application. It can be obtained by clicking the Endpoints button above the list of applications.


    The URL we want to copy for later use is the one under OAuth 2.0 Token Endpoint.

  6. We’re now ready to authorize the application to access the vault and get values out of it. Navigate back to the key vault’s property page, and select the vault we created earlier, then Access policies. Click Add new, then Select principal. Paste the principal ID into the text box. After a few seconds, the principal should appear checked. Click the Select button. Then click on Secret permissions and check Get, then click OK.

    Adding permissions for AD

We now have a set of Active Directory credentials that our Azure Function will be able to use, that will enable read access to the secrets in the key vault. Our last remaining step is to create the actual code.

Creating the Azure function

We’re going to use Azure Functions to implement the actual service, because it’s the easiest way to write code on Azure, but roughly the same steps would apply to any other kind of application.

  1. From the resource group’s property page, click Add, and type “Function App” in the filter box. Select Function App, then click Create. Name your new function app “sample-weather”. Select the relevant subscription, resource group, plan, and location.

    Setting up the new function app

  2. Refresh the resource group’s property page, then select the new “sample-weather” app service. Create a new function under the function app. Name it “ParisSeattleWeatherComparison” and choose the empty C# template:

    Creating the function

    Now we have an empty function. Let’s add some code.

  3. Click View Files under the code editor. This shows the list of files in the function’s directory. Click the + icon to add a new file, and name it “weather.csx”. Enter the following code in that file:

    Those classes will help deserialize the response from the weather server. The structure of the types WeatherList, Weather, and WeatherData reflects the schema of the JSON documents that the weather service will return. It does not, and does not need to, reflect the entirety of the schema returned by the API: the Microsoft.AspNet.WebApi.Client library will figure out which parts to deserialize and how to map them onto the provided object model.
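    The code block from the original post is not preserved in this excerpt. Here is a plausible sketch of weather.csx; the property names are assumptions modeled on the OpenWeatherMap group-query response, not necessarily the exact code from the post:

```csharp
// weather.csx — data model for deserializing OpenWeatherMap responses.
// Property names follow the OpenWeatherMap "group" API JSON schema
// (an assumption; the original file is not preserved in this excerpt).
using System.Collections.Generic;

public class WeatherList
{
    // The "list" array of the /data/2.5/group response.
    public List<WeatherData> List { get; set; }
}

public class WeatherData
{
    // City name, e.g. "Paris" or "Seattle".
    public string Name { get; set; }

    // The "weather" array: condition summaries for that city.
    public List<Weather> Weather { get; set; }
}

public class Weather
{
    // Short condition, e.g. "Clouds".
    public string Main { get; set; }

    // Longer text, e.g. "broken clouds".
    public string Description { get; set; }
}
```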

  4. Enter the following code as the body of the function:
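    The function body itself is missing from this excerpt; the following is a reconstruction under stated assumptions. The configuration key names match the settings described later in this post; the city IDs, endpoint URL, and message formatting are illustrative guesses, not the original code:

```csharp
// run.csx — sketch of the function body. The Key Vault and ADAL calls use
// the 2016-era Microsoft.Azure.KeyVault and ActiveDirectory client libraries;
// the OpenWeatherMap city IDs (Paris, Seattle) and output format are assumptions.
using System.Configuration;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Authenticate to Azure AD with the client ID and key from app settings,
    // then use the resulting token to read the API key from Key Vault.
    var kv = new KeyVaultClient(async (authority, resource, scope) =>
    {
        var context = new AuthenticationContext(ConfigurationManager.AppSettings["WeatherADURL"]);
        var credential = new ClientCredential(
            ConfigurationManager.AppSettings["WeatherADClientID"],
            ConfigurationManager.AppSettings["WeatherADKey"]);
        var token = await context.AcquireTokenAsync(resource, credential);
        return token.AccessToken;
    });
    var apiKey = (await kv.GetSecretAsync(ConfigurationManager.AppSettings["WeatherKeyUrl"])).Value;

    // Query OpenWeatherMap for Paris (id 2988507) and Seattle (id 5809844) in one call.
    using (var client = new HttpClient())
    {
        var response = await client.GetAsync(
            $"http://api.openweathermap.org/data/2.5/group?id=2988507,5809844&units=metric&appid={apiKey}");
        var weather = await response.Content.ReadAsAsync<WeatherList>();
        var summary = string.Join(" / ",
            weather.List.Select(w => $"{w.Name}: {w.Weather[0].Description}"));
        return req.CreateResponse(HttpStatusCode.OK, summary);
    }
}
```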

    We also need to define the bindings for the function. Open function.json and enter the following as its content.
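    The contents of function.json are also missing from this excerpt. Since the function signature uses HttpRequestMessage, a minimal HTTP-triggered binding file might look like this (a sketch; the exact bindings in the original post are an assumption):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ],
  "disabled": false
}
```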

    This tells Azure Functions what objects to inject into the function.

  5. In the code above, you’ll notice that we’re reading the AD URL, client ID and key from configuration, because of course we haven’t done all this to end up storing secrets in code. For the code to function, we’ll have to enter that information into the function’s Azure configuration. This can be done by clicking Function app settings on the bottom-left of the function editing screen.

    The function app settings button

    In the screen this brings up, you’ll want to select the Configure app settings option. Once there, you’ll see a few general settings, but what we’re interested in is the table of custom App settings. We’ll add new key-value pairs in there with the names we used in the code:

    • “WeatherADURL” with the Active Directory OAuth 2.0 Token Endpoint URL.
    • “WeatherADClientID” with the Active Directory application ID we got in the previous section.
    • “WeatherADKey” with the Active Directory application key.
    • “WeatherKeyUrl” with the URL for the secret API key we stored in the vault earlier.

    Don’t forget to hit Save on top of the panel.

    Setting the Active Directory URL, master id and key in the app settings

  6. Go back to the function editor and add a project.json file to import the NuGet packages we need.

    Adding a project.json file

    Enter the following code into the file.
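    The project.json contents are not preserved in this excerpt. Given the libraries used above, it would need to look something like this (package versions are illustrative, 2016-era values):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.AspNet.WebApi.Client": "5.2.3",
        "Microsoft.Azure.KeyVault": "2.0.6",
        "Microsoft.IdentityModel.Clients.ActiveDirectory": "3.13.8"
      }
    }
  }
}
```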

And this is it. Once you’ve saved all the files, you should see the trace of the package restoration and compilation of the function in the logs window. After that’s completed, you should be able to click the Run button and figure out where the weather is nicest, between Paris and Seattle.

Where is the weather the nicest?

Summary

We now have a function that connects to an Azure Key Vault using Azure Active Directory authentication, and then uses a secret stored in the vault to query a remote service.

Wait, what about .NET Core?

Before I conclude, and now that we’ve made this work in Azure Functions, how about re-using the knowledge we’ve gained in a different kind of application, such as a .NET Core console application? We actually wouldn’t have to change much. First, we’d need to check our dependencies to make sure that everything we need is available on .NET Core, then we’d have to modify the code so that it reads configuration using the new ConfigurationBuilder. Finally, we’d just change req.CreateResponse(HttpStatusCode.OK, $"..."); into simple Console.WriteLine calls.

Here’s what the code looks like once ported to a .NET Core console app.
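Since the ported code is not preserved in this excerpt, here is a sketch of what the console entry point might look like, assuming settings move to an appsettings.json file (the Key Vault and weather logic would be the same as in the function version above):

```csharp
// Program.cs — sketch of the .NET Core console port. The appsettings.json
// file name and overall structure are assumptions; only the configuration
// reading and console output differ from the Azure Function version.
using System;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args)
    {
        // The new ConfigurationBuilder replaces ConfigurationManager.AppSettings.
        var config = new ConfigurationBuilder()
            .SetBasePath(AppContext.BaseDirectory)
            .AddJsonFile("appsettings.json")
            .Build();

        string adUrl = config["WeatherADURL"];
        string clientId = config["WeatherADClientID"];
        string key = config["WeatherADKey"];
        string keyUrl = config["WeatherKeyUrl"];

        // ...authenticate to AD, read the secret from Key Vault, and query
        // the weather API exactly as in the function, then write the result
        // to the console instead of returning an HTTP response:
        Console.WriteLine("Paris vs. Seattle weather summary goes here");
    }
}
```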

And here’s the project.json that enables it to restore the right packages.
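The project.json itself is missing from this excerpt; a plausible .NET Core 1.0 version (package versions illustrative) would be:

```json
{
  "version": "1.0.0-*",
  "buildOptions": { "emitEntryPoint": true },
  "dependencies": {
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Azure.KeyVault": "2.0.6",
    "Microsoft.IdentityModel.Clients.ActiveDirectory": "3.13.8",
    "System.Net.Http": "4.1.0"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": { "type": "platform", "version": "1.0.0" }
      }
    }
  }
}
```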

The weather data model remains unchanged from the Azure Functions version.

And that’s it, this console application will, like the Azure Function, authenticate to Active Directory, get the API key from Key Vault, and then query the API and tell you about the weather.

What’s next?

Azure services are evolving constantly, and so is .NET support for them. While I was writing this article, I was able to transfer some of the steps that previously required command-line operations, to using the portal, making them a lot easier, as well as more discoverable. One of the things to look forward to is a new ASP.NET configuration provider that will enable developers to get rid of much of the code I had to write to access the key vault.

Please let us know if tutorials like these are helpful. Happy programming!

References

  1. Manage Key Vault using CLI
  2. Azure Key Vault .NET Samples
  3. Azure Functions C# Reference
  4. Safe storage of app secrets during development

Feature flags: How we control exposure in VS Team Services


One question that I often get from customers is how we manage exposing features in the service. Features may not be complete or need to be revealed at a particular time. We may want to get early feedback. With the team working in master and deploying every three-week sprint, let’s take a look at how we do this for Team Services.

Goals

Our first goal is decoupling deployment and exposure. We want to be able to control when a feature is available to users without having to time when the code is committed. This allows engineering the freedom to implement the feature based on our needs while also allowing control for the business on when a feature is announced. Next we want to be able to change the setting at any scope from globally to particular scale units to accounts to individual users. This granularity gives us a great deal of flexibility. We can deploy a feature and then expose it to select users and accounts. That allows us to get feedback early, which includes not only what users tell us but also how the feature is used based on aggregated telemetry. Additionally, we want to be able to react quickly if a feature causes issues and be able to turn it off quickly.

To make all of this work well, we need to be able to change a feature flag’s state without re-deploying any of our services. We need each service to react automatically to the change to minimize the propagation delay.

As a result, we have the following goals.

  • Decouple deployment and exposure
  • Control down to an individual user
  • Get feedback early
  • Turn off quickly
  • Change without redeployment

Feature flags

Feature flags, sometimes called feature switches, allow us to achieve our goals. At the core, a feature flag is nothing more than an input to an if statement in the code: if the flag is enabled, execute a new code path; if not, execute the existing code path.
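In its simplest form, that if statement looks like the sketch below. The FeatureFlags class here is a toy illustration standing in for the real VSTS feature flag service, whose API is not shown in this post:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a toy in-memory flag store, not the actual
// VSTS feature flag service described in this post.
public class FeatureFlags
{
    private readonly HashSet<string> enabled = new HashSet<string>();
    public void Enable(string flag) => enabled.Add(flag);
    public bool IsEnabled(string flag) => enabled.Contains(flag);
}

public class Program
{
    public static void Main()
    {
        var flags = new FeatureFlags();
        flags.Enable("PullRequest.Revert");

        // At the core, a feature flag is just an input to an if statement.
        if (flags.IsEnabled("PullRequest.Revert"))
            Console.WriteLine("Show the Revert button");   // new code path
        else
            Console.WriteLine("Hide the Revert button");   // existing code path
    }
}
```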

Let’s look at an actual example. In this case I want to control whether a new feature to revert a pull request is available to the user. I’ve highlighted the Revert button in the screen shot.

Continue reading…

The week in .NET – On .NET on Cecil – NAudio – SpeechCentral – Hand of Fate


To read last week’s post, see The week in .NET: On .NET on Orchard 2 – Mocking on Core – StoryTeller – Armello.

On .NET

Last week, JB Evain was on the show:

This week, we’ll speak with Immo Landwerth from the .NET team about NetStandard 2.0. The show begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

Package of the week: NAudio

NAudio is a library for reading, writing, decoding, encoding, converting, and playing audio files.

The following code concatenates audio files.
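The snippet itself is not preserved in this excerpt; the following sketch shows the standard way to concatenate WAV files with NAudio’s WaveFileReader and WaveFileWriter (method and buffer size are illustrative):

```csharp
// Sketch: concatenate WAV files with NAudio. All inputs must share the
// same wave format, since the output header is taken from the first file.
using System;
using System.Collections.Generic;
using NAudio.Wave;

public static class AudioConcat
{
    public static void Concatenate(string outputFile, IEnumerable<string> sourceFiles)
    {
        byte[] buffer = new byte[4096];
        WaveFileWriter writer = null;
        try
        {
            foreach (string sourceFile in sourceFiles)
            {
                using (var reader = new WaveFileReader(sourceFile))
                {
                    if (writer == null)
                    {
                        // The first file determines the output format.
                        writer = new WaveFileWriter(outputFile, reader.WaveFormat);
                    }
                    else if (!reader.WaveFormat.Equals(writer.WaveFormat))
                    {
                        throw new InvalidOperationException(
                            "Can't concatenate WAV files that don't share the same format");
                    }

                    int read;
                    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        writer.Write(buffer, 0, read);
                    }
                }
            }
        }
        finally
        {
            writer?.Dispose();
        }
    }
}
```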

Xamarin App of the week: Speech Central

Speech Central is an iPhone app that lets you enjoy the Internet with the screen off, using vocal commands and speech. You can keep up with the news while you perform another activity, saving significantly on your battery and data plan.

Speech Central was built in C# using Xamarin.

Speech Central will read any page with the screen off through a simple share option

User group meeting of the week: Real World Examples of Azure Functions in Seattle

.netda is hosting a meeting tonight at 7:00PM on Real World Examples of Azure Functions.

Game of the week: Hand of Fate

Hand of Fate is a cross between action, RPG, and deck-building gameplay. Challenge the Dealer, a mysterious game master, as you battle your way beyond the thirteen gates at the end of the world. In Hand of Fate, you must make strategic decisions when building your deck and see the consequences of those decisions play out in traditional RPG/action combat. Hand of Fate features unique deck-building mechanics, hundreds of encounters, items, armor, weapons and mysteries.

hof_screen_combat11

Hand of Fate was created by Defiant Development using Unity and C#. It is available on Xbox One, PlayStation 4 and Windows, Mac and Linux on Steam.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Gaming

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.


Go to Bing to Livestream the Vice Presidential Debate

On October 4, candidates Tim Kaine and Mike Pence take center stage for the first and only vice presidential debate happening at Longwood University in Farmville, Virginia.
 
If you can’t watch on TV, or prefer to use a second screen so you can multitask, Bing has you covered. Just search for “presidential debate video” to reach our debate page with two different channels of video streaming options.
 

 
You can also find the full schedule of debates on Bing.

Bing US presidential debates

And remember, you can search for ways to register to vote with Bing too! Just search “register to vote” on Bing and we can help you get ready for election day in your state.

Stay tuned for more election updates in the coming weeks!
 
- The Bing Team
 

Join us this November for Connect(); 2016


Today, I am excited to announce that our popular developer event Connect(); is back for a third year on November 16th and 17th and will be live-streamed globally from New York City. I encourage you to save the date for what promises to be our best Connect(); yet.

At Connect(); 2016, Executive Vice President Scott Guthrie and Principal Program Manager Scott Hanselman, alongside leading industry innovators, will share the latest innovations from Visual Studio, .NET, Xamarin, Azure, SQL, Windows, Office and more.  Over the two days, you’ll have the opportunity to engage in live, interactive Q&A sessions with our engineering teams, customers and partners and learn how it’s now easier than ever before to build and manage breakthrough intelligent apps that work across Android, iOS, Linux and Windows.

Al Hilwa of IDC recently said, “Microsoft’s Connect(); developer event has grown into a significant milestone for developers building modern apps for cloud, mobile and DevOps deployment scenarios, in the process helping drive positive business impact at their companies.”

Developers inspired us to create the Connect(); event. You are at the center of incredible business transformation and the disruption of entire industries through the development of powerful apps that change the world.  For the past two years, we’ve unveiled new innovative technologies and solutions at Connect(); that represent our relentless drive to meet the needs of any developer, building any application on any platform.  Connect(); 2016 is the next step on our cloud-first, mobile-first world journey.

We’re looking forward to welcoming you on November 16th and 17th.

Mitra Azizirad, Corporate Vice President, Cloud Application Development & Data Marketing

With an expansive technical, business and marketing background, Mitra has led multiple and varied businesses across Microsoft for over two decades. She leads product marketing for C+E’s developer and data platform offerings spanning SQL Server, Azure data services, Visual Studio, .NET, Xamarin and associated developer services.

Announcing UWP Community Toolkit 1.1


Today we are releasing the first update to the UWP Community Toolkit. To see the updates, first:

In under a month since the first release, we are humbled by the positive feedback we have received so far and are excited to see all the contributions the community has made, including:

  • 39 community contributors
  • 188 accepted pull requests
  • 173 issues closed
  • 678 stars
  • 159 forks

Thanks to all the contributors that were involved with this release!

Here’s a summary of what’s new in V1.1:

  1. .NET Foundation. We are excited to announce that the UWP Community Toolkit has joined the .NET Foundation, a vibrant community of open-source projects focused on the future of the .NET ecosystem.
  2. Updates and new features. The focus of this release is to improve the quality of the toolkit by addressing feedback we received through GitHub and the Store Sample App. Full list available in the Release Notes, including:
    1. Services: added LinkedIn service (i.e. read user profile and share activity), Microsoft Graph service (i.e. send and read emails from UWP via Office 365 or explore Azure Active Directory graph) and updates to the Facebook and Bing services
    2. Controls: added Blade, GridSplitter and DropShadowPanel controls
    3. Animations: new FadeHeaderBehavior
  3. Sample app. The UWP Community Toolkit Sample App has been updated to include the new features of this release. The Sample App is the best way to preview the features of the toolkit.
  4. Documentation. As the project joins the .NET Foundation, we moved the documentation to a new location, directly connected to GitHub.

imagenew

If you want to use it in your projects, visit the Getting Started page. If you are already using the toolkit, we recommend updating to the latest release.

You can find the roadmap of next release here.

If you have any feedback or are interested in contributing, see you on GitHub!

Download Visual Studio to get started!

Announcing Visual Studio “15” Preview 5


Today we released Visual Studio “15” Preview 5. With this Preview, I want to focus mostly on performance improvements, and in the coming days we’ll have some follow-up posts about the performance gains we’ve seen. I’m also going to point out some of the productivity enhancements we’ve made.

So kick off the installer here and read the rest of the post. You can also grab the release notes here.

A big step forward in performance and memory efficiency

I’d like to start with a side-by-side video that will give you a sense of all the performance improvements in one look. This video compares starting Visual Studio and loading the solution for the entire .NET Compiler Platform “Roslyn”: about 30 seconds with Visual Studio ‘15’, compared to 60 seconds with Visual Studio 2015:

The faster load time is a result of a couple of the improvements we’ve made – lightweight project load and on-demand loading of extensions. Here are some of the key performance gains in Preview 5:

    • Shorter solution load time with lightweight project load: Working on solutions that contain upwards of 100 projects doesn’t mean you need to work with all the files or projects at a given time. VS “15” provides editing and debugging functionality without waiting for Visual Studio to load every project. You can try out this capability with managed projects in Preview 5 by turning on “Lightweight Solution Load” from Tools -> Options -> Projects and Solutions.
    • Faster startup with on-demand loading of extensions: The idea is simple: load extensions when they’re needed, rather than when VS starts. In Preview 5 we started this effort by moving our Python and Xamarin extensions to load on demand and are working on moving all extensions we ship with Visual Studio and extensions shipped by third party extension vendors to this model. Curious about which extensions impact startup, solution load, and typing performance? You can see this information in Help -> Manage Visual Studio Performance. Do you develop an extension? We will be publishing guidance to help extension developers move to on-demand loading.
    • Moving subsystems from the main VS process to separate processes: We moved some memory-intensive tasks, such as Git source control and our JavaScript and TypeScript language services, to separate processes. This makes it less likely for you to experience delays caused by code running in the main Visual Studio process, or Visual Studio becoming sluggish or even crashing as the main process approaches the 4GB memory limit of 32-bit processes. We will continue to move components out of process in coming releases.
    • Faster project load, coding, and debugging for C++: We have made loading C++ projects faster. Check out this video showing the improvement. You can enable this by setting “Enable Faster Project Load” to True from Tools -> Options -> Text Editor -> C/C++ -> Experimental. We have also made improvements to our linker and PDB loading libraries to make incremental builds and launching the debugger much faster while significantly reducing memory consumption while debugging.
    • Improved speed of Git source control operations by using git.exe.
    • Faster debugging and XAML editing: We have improved debugging performance by optimizing initialization and other costs related to IntelliTrace and the Diagnostic Tools window, and removed several delays that occur when editing and switching between XAML files.

This is just the start and we are dedicated to making improvements like these to make Visual Studio start faster, be more responsive, and use less memory. Keep an eye out for more posts on the Visual Studio blog over the coming days where we’ll go deep into the technical details behind these improvements.

We rigorously test these changes to anticipate issues and deliver the best performance but there is no substitute for real world code. We need your help! So please install Preview 5, try it out with your large solutions, and tell us what you think by using the Report-a-problem tool within the IDE.

Improvements in productivity

Visual Studio “15” also has a lot of features aimed at keeping productivity high.

Editing Code

IntelliSense filtering is now available in C#, VB and C++. While exploring complex APIs, you can narrow to just the type you need (for example, just methods, properties, or events). In C# and Visual Basic we determine the “target type” required at a position and preselect items in the list matching that type. This speeds up your typing flow and removes the burden of having to figure out the expected type at a given location.

In C++, an experimental Predictive IntelliSense feature shows a filtered list of IntelliSense results so you don’t have to scroll through a long list. Only the items most likely to be needed, based on the expected type, are listed. You can turn on this feature in Tools > Options > Text Editor > C/C++ > Experimental.

In XAML, we have added IntelliSense completion for x:Bind which provides a completion list when you attempt to bind to properties and events. Namespace completion offers to auto-complete the prefix if the reference to the namespace already exists. XAML IntelliSense has also been updated to filter out types and properties that do not match. The closest match is selected, so you only see relevant results and don’t have to scroll through a long list of types.

In JavaScript, we have completely revamped the language service that powers IntelliSense. Previously, as you typed, a JavaScript engine continuously executed your code to provide runtime-like completion lists and signature help. This was great for dynamic JavaScript code, however it often provided an inconsistent editing experience. The new language service uses static analysis powered by TypeScript to provide more detailed IntelliSense, full ES6/ES7 coverage, and a more consistent editing experience.

Quick Fixes and Refactorings

To help you maintain a readable codebase and catalyze your development workflow, we’ve added more Quick Actions and Refactorings for C# and Visual Basic. Move Type to Matching File moves a type into a new file that has the same name and Sync File and Type Name gives you the option to rename your type to match your file name (and vice versa). Lastly, Convert to Interpolated String lets you embrace C# 6.0 and VB14 goodness by transforming your `string.Format` expressions into interpolated strings.

Navigating Code

Getting around, and knowing where you are in a large codebase can be challenging; we’ve added several new navigation features to help with this. Go To: (Ctrl + , or Ctrl + T) lets you quickly find files, types, methods, and other kinds of objects in your code.

Find All References (Shift+F12) now helps you get around easily, even in complex codebases. It provides advanced grouping, filtering, sorting, searching within results, and (for some languages) colorization, so you can get a clear understanding of your references.

Debugging

In Preview 5 we have introduced an experimental feature: Run to Click. You no longer need to set a temporary breakpoint to skip ahead and stop on the line you desire. When stopped in the debugger, simply click the icon that appears next to the line of code your mouse is over. Your code will run and stop on that line the next time it is hit. You can turn on this feature in Debug > Options > Enable Run to Click.

The New Exception Helper: See what you need more quickly with the new Exception Helper. View the most useful exception information at a glance, including seeing what variable was null, in a compact non-modal dialog with instant access to inner exceptions.

Try it out

For the complete list of everything in this release, along with some known issues, look at the Visual Studio “15” Preview 5 Release Notes page.

A couple of important caveats about Preview 5. First, this is an unsupported preview so don’t install it on machines that you rely on for critical production work. Second, Preview 5 should work side by side with previous versions of Visual Studio, but you must remove any previous Visual Studio “15” Preview installations before beginning the setup process. Check out this Preview 5 FAQ for other common questions.

As always, we welcome your feedback. For problems, let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

Last but not least, check out Mitra’s post from earlier today to learn more about the upcoming developer conference Connect(); 2016.

John Montgomery, Director of Program Management for Visual Studio
@JohnMont

John is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, JavaScript, and .NET. John has been at Microsoft for 17 years, working in developer technologies the whole time.

Bring your C++ codebase to Visual Studio with “Open Folder”


Welcome to Visual Studio ’15’ Preview 5! Starting with this release, Visual Studio supports opening folders containing source code without the need to create any solutions or projects. This makes it a lot simpler to get started with Visual Studio even if your project is not an MSBuild-based project. The new functionality, “Open Folder”, also offers more natural source file management as well as access to the powerful code understanding, editing, building and debugging capabilities that Visual Studio already provides for MSBuild projects.

This post describes the “Open Folder” support for C++ codebases. You will learn how to use “Open Folder” to easily:

  • read C++ code
  • edit C++ code
  • build and debug C++

All this without having to create and maintain any .vcxproj or .sln files.

Getting started with “Open Folder” for C++

With the release of Visual Studio ’15’ Preview 5, we’re announcing the availability of C++ support for “Open Folder“. When you install the product, make sure to install one of the C++ workloads e.g. “Desktop development with C++” or “Game development with C++”.

If you have a CMake-based project, also read the blogpost describing our progress on the Visual Studio’s streamlined “Open Folder” experience for CMake. If your project is using another build system, read on.

To get the best experience in Preview 5, you need to create a small file called CppProperties.json in the folder you plan to open, with the content below. Note: in future previews, creating this file will not be necessary (it will only be needed when you want to customize the default IntelliSense experience).

CppProperties.json:

{
  "configurations": [
    {
      "name": "Windows",
      "includePath": [],
      "defines": [
        "_DEBUG"
      ],
      "compilerSwitches": "/W3 /TP"
    }
  ]
}

After that, all you have to do is run the “Open Folder” command and select the folder you want to browse (either from File > Open > Folder or through Quick Launch)

openfolder

Reading C++ code

As soon as you open the folder, Solution Explorer will immediately display the files in that folder and you can open any files in the editor. In the background, Visual Studio will start indexing the C++ sources in your folder.

You now have access to all the Visual Studio capabilities of reading and browsing C++ code (e.g. Find all references, Go to symbol, Peek definition, Semantic colorization and highlighting, Class View, Call Hierarchy, to name a few).

navigation

Depending on the project, you may need to update the CppProperties.json file with more information about your source code e.g. additional include paths, additional defines or compiler switches. For example, if your project includes windows.h and friends from the Windows SDK (which is common), you may want to update your configuration file with these includes (using default installation paths for Preview 5):

CppProperties.json:

{
  "configurations": [
    {
      "name": "Windows",
      "includePath": [
        "C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.10240.0\\ucrt",
        "C:\\Program Files (x86)\\Windows Kits\\NETFXSDK\\4.6.1\\include\\um",
        "C:\\Program Files (x86)\\Windows Kits\\8.1\\Include\\um",
        "C:\\Program Files (x86)\\Windows Kits\\8.1\\Include\\shared",
        "C:\\Program Files (x86)\\Microsoft Visual Studio\\VS15Preview\\Common7\\IDE\\VisualCpp\\Tools\\MSVC\\14.10.24516.00\\include"
      ],
      "defines": [
        "_DEBUG",
        "_WIN32"
      ],
      "compilerSwitches": "/W3"
    }
  ]
}

In general, the Error List window is a good starting point for reviewing any IntelliSense errors caused by missing includes – filter its content to "IntelliSense only" and error code E1696:

errorlist-filtering

Editing C++ code

All of these C++ browsing and navigation services work without you having to create any Visual C++ projects, as previous Visual Studio releases required (through the "Create Project from Existing Code" wizard).

As you create, rename, or remove source files in your project, you no longer have to worry about updating the Visual C++ projects as well – Visual Studio relies on the folder structure and monitors changes on disk as needed. And as you edit code, Visual Studio's IntelliSense keeps updating, assisting you with the latest information from your sources.

isense-update

While editing your code, you can also use all of the refactoring features that Visual Studio supports for C++ e.g. Rename symbol, Extract function, Move definition location, Change signature, Convert to raw string literals, etc.

rename-refactor

Building C++ projects

Building under "Open Folder" is not yet supported; for now, you will have to continue building your projects from a command prompt. Stay tuned for one of our upcoming releases for more information.

Debugging C++ binaries

To get started with debugging in Visual Studio, navigate to your executable in Solution Explorer, right-click it, and select "Debug" – this will immediately start a debugging session for that executable.

If you want to customize your program's arguments, select "Debug and Launch Settings" instead. This creates a new launch.json file pre-populated with information about the program you selected.

debug-launchsettings

To specify additional arguments, just add them to the "args" JSON array, as in the example below:

launch.json:

{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "CPP\\7zip\\Bundles\\Alone\\O\\7za.exe",
      "name": "7za.exe list content of helloworld.zip",
      "args": [ "l", "d:\\sources\\helloworld.zip" ]
    }
  ]
}

launch-json-debug-target

As soon as you save this file, a new entry appears in the Debug Target dropdown, and you can select it to start the debugger. By editing the launch.json file, you can create as many debug configurations as you like, for any number of executables. If you press F5 now, the debugger will launch and hit any breakpoints you may have already set. All the familiar debugger windows and functionality are now available.
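For instance, the "configurations" array could hold a second entry for the same executable with different arguments – here a sketch that extracts the archive ("x" is 7-Zip's extract command) rather than listing it; the entry itself is illustrative:

```json
{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "CPP\\7zip\\Bundles\\Alone\\O\\7za.exe",
      "name": "7za.exe list content of helloworld.zip",
      "args": [ "l", "d:\\sources\\helloworld.zip" ]
    },
    {
      "type": "default",
      "project": "CPP\\7zip\\Bundles\\Alone\\O\\7za.exe",
      "name": "7za.exe extract helloworld.zip",
      "args": [ "x", "d:\\sources\\helloworld.zip" ]
    }
  ]
}
```

Each entry shows up as its own item in the Debug Target dropdown, so you can switch between targets without editing the file again.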

debugging

What’s next

Download Visual Studio '15' Preview 5 today and give "Open Folder" a try – there is no more need to create VS projects and solution files to become productive in VS.

This is an early preview, and we're continuing to work on making this experience even better. In upcoming releases, you will see us enhance the way C++ browsing and navigation work, as well as support more debugger types, eventually reaching parity with our MSBuild-based project functionality. Your feedback is really important in informing our next steps, so don't hesitate to share it. We're listening!

 
