
Recommendations to speed C++ builds in Visual Studio


In this blog post, I will discuss features, techniques and tools you can use to reduce build time for C++ projects. The primary focus of this post is to improve developer build time for the Debug configuration as part of your Edit/Build/Debug cycle (the inner development loop). These recommendations are a result of investigating build issues across several projects.

Developers invoke build frequently while writing and debugging code, so improvements here can have a large impact on productivity. Many of the recommendations focus on this stage, but others will carry over to build lab scenarios and to clean, optimized builds for end-to-end functional and performance testing and release.


Before We Get Started

First, I want to highlight the search feature in project properties, which makes it easy to locate and modify project settings.

  1. Bring up the project properties and expand sub groups for the tool you are interested in.
  2. Select the “All options” sub group and search for the setting by name or by its command line switch, e.g. Multi-processor or /MP.
  3. If you cannot find the setting through search, select the “Command Line” sub group and specify the switch in Additional Options.

Recommendations

Specific recommendations include:

  • DO USE PCH for projects
  • DO include commonly used system, runtime and third party headers in PCH
  • DO include rarely changing project specific headers in PCH
  • DO NOT include headers that change frequently
  • DO audit PCH regularly to keep it up to date with product churn
  • DO USE /MP
  • DO remove /Gm in favor of /MP
  • DO resolve conflict with #import and use /MP
  • DO USE linker switch /incremental
  • DO USE linker switch /debug:fastlink
  • DO consider using a third party build accelerator

Precompiled Header

Precompiled headers (PCH) reduce build time significantly, but require effort to set up and maintain for the best results. I have investigated several projects that either didn’t have a PCH or had one that was out of date. Once a PCH was added or updated to reflect the current state of the project, compile time for individual source files in the project dropped by 4-8x (~4s to <1s).

An ideal PCH is one that includes headers that meet the following criteria:

  • Headers that don’t change often.
  • Headers included across a large number of source files in the project.

System (SDK), runtime, and third-party library headers generally meet the first requirement and are good candidates to include in the PCH. Creating a PCH with just these files can significantly improve build times. In addition, you can include your project-specific headers in the PCH if they don’t change often.

The Wikipedia article on the topic, or a search for ‘precompiled headers’, is a good starting point to learn about PCH. In a future blog post I will talk about PCH in more detail, as well as tools to help maintain PCH files.

Recommendation:
  • DO USE PCH for projects
  • DO include commonly used system, runtime and third party headers in PCH
  • DO include rarely changing project specific headers in PCH
  • DO NOT include headers that change frequently
  • DO audit PCH regularly to keep it up to date with product churn

/MP – Parallelize compilation of source files

The /MP switch invokes multiple instances of cl.exe to compile project source files in parallel. See the documentation for /MP for a detailed discussion of the switch, including conflicts with other compiler features. In addition to the documentation, this blog post has good information about the switch.

Resolving conflicts with other compiler features
  • /Gm (enable minimal rebuild): I recommend using /MP over /Gm to reduce build time.
  • #import: The documentation for /MP discusses one option to resolve this conflict. Another option is to move all #import directives to the precompiled header.
  • /Yc (create precompiled header): /MP does not help with creating the precompiled header, so this is not an issue.
  • /EP, /E, /showIncludes: These switches are typically used to diagnose issues, so they should not conflict in practice.
Recommendation:
  • DO USE /MP
  • DO remove /Gm in favor of /MP
  • DO resolve conflict with #import and use /MP

/incremental – Incremental link

Incremental linking enables the linker to significantly speed up link times. With this feature turned on, the linker can process just the diffs between two links to generate the image, speeding up link times by 4-10x in most cases after the first build. In VS2015 this feature was enhanced to handle additional common scenarios that were previously not supported.

Recommendation:
  • DO USE linker switch /incremental

/debug:fastlink – Faster generation of debug information

The linker spends significant time collecting and merging debug information into one PDB. With the /debug:fastlink switch, debug information stays distributed across the input object and library files. Link time for medium and large projects can speed up by as much as 2x. Blog posts from the Visual C++ team discuss this feature in detail.

Recommendation:
  • DO USE linker switch /debug:fastlink

Third party build accelerators

Build accelerators analyze MSBuild projects and create a build plan that optimizes resource usage. They can optionally distribute builds across machines. Following are a couple of build accelerators that you may find beneficial.

  • IncrediBuild: A link to install the VS extension is available under New project > Build accelerators. Visit their website for more information.
  • Electric Cloud: Visit their website for a download link and more information.

In addition to improving build time, these accelerators help you identify build bottlenecks through build visualization and analysis tools.

Recommendation:
  • DO consider using a third party build accelerator

Sign up to get help

After you have tried out the recommendations and need further help from the Microsoft C++ team, you can sign up here. Our product team will get in touch with you.

If you run into any problems like crashes, let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. You can also email us your query or feedback if you choose to interact with us directly! For new feature suggestions, let us know through User Voice.


Bing helps developers connect more naturally with people

Artificial Intelligence (AI) is central to our ambitions as a company. Through this intelligence, we want to empower everyone to achieve more. Central to that mission is leveraging the knowledge of Bing and the services that Bing has created to make knowledge accessible and usable by all.
 
In this first post on how Bing technology is being used by developers across the industry, we touch upon our work to enable more natural and conversational experiences that are designed around people.
 
Today we are witnessing a shift toward more natural, conversational computing. While this shift is still early and evolving, many groups are already seeing opportunities – from developers who view this as the new platform to create on with bots, to businesses who see a new way to connect with customers, to individuals who are eager for more natural ways to discover, access and interact digitally.
 
Our services continue to evolve through our direct experience delivering new intelligent experiences at scale. Our popular chatbots Xiaoice in China and Rinna in Japan already have more than 90 million users. Built using Bing technology, Xiaoice has formed strong emotional connections with its users. Cortana, our personal digital assistant, again built upon Bing services, is used globally by more than 133 million people each month.
 
Beyond Microsoft’s first party experiences, we focus on applying Bing technology to create building blocks for developers to understand the user (intents, contexts, disambiguation), extract knowledge (insights, facts, information) and intelligence (natural language, safe search). Below are just a couple of examples.
 
A search engine is traditionally known for its vast index of the web, a digital map of the planet, a complete index of public images and videos, and all the news you ever wanted. Ask it anything, and you typically get back useful information that answers your query across all these domains.
 
Bing additionally has a multi-domain knowledge graph. To put our knowledge graph into perspective, think about a person, place, or thing—say a sports team. That sports team has a name, upcoming game schedule, a team roster with individual player statistics, news, pictures, videos, maps to the venues they play, and weather forecasts for the days they will be playing. All this information is associated with that one sports team, which, in our knowledge graph, is one node/entity. And we have billions—all interlinked to each other so they can be conversationally traversed to bring knowledge into experiences, including conversations.
 

   

 

The Bing Sportscaster bot on Facebook Messenger taps into Bing’s knowledge and intelligence to keep users up-to-date with news, facts, scores, schedules and more about their favorite teams.
 
When we launched Bing, we recognized the value of integrating deep smarts and controls into our search stack, to enable experiences which work better for people.
 
One example is image analysis and the safe search settings that are valued by parents and educators. We use AI to detect and filter out inappropriate search results, both explicit adult and racy content. We offer this capability to developers as part of our Bing Search API.
 


This is just a taste of the capabilities that we make available to developers. Developers and businesses are using Bing services right now to build new natural and conversational experiences for people.
 
It is an exciting time to be working to enable this changing technology landscape.
 
In our next post for this series, we will take a deeper look at how Bing Search APIs help power our partners’ solutions.
 
For more information on the services we make available to power chatbots, apps and new business opportunities, please visit our partner website or contact us. We are always happy to connect!
 
Check out some of our posts from the past for more information about Bing services:

- The Bing Team
 

 

Kevin Gallo gives the developer perspective on today’s Windows 10 Event


Did you see the Microsoft Windows 10 Event this morning?  Satya, Terry, and Panos talked about some of the exciting new features coming in the Windows 10 Creators Update and announced some amazing new additions to our Surface family of devices. If you missed the event, be sure to check it out here.

As a developer, my first question when I see new features or new hardware is “What can I do with that?” We want to take advantage of the latest and coolest platform capabilities to make our apps more useful and engaging.

There were several announcements today that offer exciting opportunities for Windows developers.  Three of these that I want to tell you about are:

  • 3D in Windows 10, along with the first VR headsets capable of mixed reality, coming with the Windows 10 Creators Update.
  • The ability to put the people you care about most at the center of your experience—right where they belong—with Windows MyPeople.
  • Surface Dial, a new input peripheral designed for the creative process that integrates with Windows and is complementary to other input devices like the pen. It gives developers the ability to create unique multi-modal experiences that can be customized based on context. The APIs work in both Universal Windows Platform (UWP) and Win32 apps.

Rather than write a long blog post, I decided to go down to our Channel 9 studios and record a video that gives my thoughts and provides what I hope will be a useful developer perspective on today’s announcements.  Here’s my conversation with Seth Juarez from Channel 9:

My team and I are working hard to finish the platform work that will fully support the Windows 10 Creators Update, but you can start experimenting with many of the things we talked about today. Windows Insiders can download the latest flight of the SDK and get started right away.

If you want to dig deeper on the Surface Dial, check out the following links:

Stay tuned to this space for more information in the coming weeks as we get closer to the release of the Windows 10 Creators Update.  In the meantime, we always love to hear from you and welcome your feedback at the Windows Developer Feedback site.

Free ASP.NET Core 1.0 Training on Microsoft Virtual Academy


This time last year we did a Microsoft Virtual Academy class on what was then called "ASP.NET 5." It made sense to call it 5 since 5 > 4.6, right? But since then ASP.NET 5 has become .NET Core 1.0 and ASP.NET Core 1.0. It's 1.0 because it's smaller, newer, and different. As the .NET "full" framework marches on, on Windows, .NET Core is cross-platform and for the cloud.

Command line concepts like dnx, dnu, and dnvm have been unified into a single "dotnet" driver. You can download .NET Core at http://dot.net and along with http://code.visualstudio.com you can get a web site up and running in 10 minutes on Windows, Mac, or many flavors of Linux.

So, we've decided to update and refresh our Microsoft Virtual Academy training. In fact, we've done three days of training: Introduction, Intermediate, and Cross-Platform. The introduction day is out and it's free! We'll be releasing the next two days of training very soon.

NOTE: There's a LOT of quality free courseware for learning .NET Core and ASP.NET Core. We've put the best at http://asp.net/free-courses and I encourage you to check them out!

Head over to Microsoft Virtual Academy and watch our new, free "Introduction to ASP.NET Core 1.0." It's a great relaxed pace if you've been out of the game for a bit, or you're a seasoned .NET "Full" developer who has avoided learning .NET Core thus far. If you don't know the C# language yet, check out our online C# tutorial first, then watch the video.


And help me out by adding a few stars there under Ratings. We're new. ;)


Sponsor: Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release!



© 2016 Scott Hanselman. All rights reserved.
     

Going social: Project Rome, Maps & social network integration (App Dev on Xbox series)


The Universal Windows Platform is filled with powerful and unique capabilities that allow the creation of some remarkable experiences on any device form factor. This week we are looking at an experience that builds on top of the Adventure Works sample we released last week by adding a social experience with the capability (1) to extend the experience to other devices that the user owns through the Project “Rome” APIs, (2) to be location aware using the powerful Maps API, and (3) to integrate with third-party social networks. As always, you can get the latest source code of the app right now on GitHub and follow along.

And if you missed last week’s article on how to enable great camera experiences, we covered how to build UWP apps that take advantage of camera APIs on the device and in the cloud through the Cognitive Services APIs to capture, modify, and understand images. To read last week’s blog post or any of the other blog posts in the series, or to watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works (v2)


To give you a quick recap of the sample app, we released the Adventure Works source code last week and discussed how we used a combination of client and cloud APIs to create a camera app capable of understanding images, faces and emotion, as well as modifying images by applying some basic effects. Building on top of that, the goal for Adventure Works is to create a larger sample app that extends the experience by adding more social features, so users can share photos and albums of their adventures with friends and family across multiple devices. Therefore, we’ve extended the sample app by:

  1. Adding the ability to have shared second screen experiences through Project Rome
  2. Adding location and proximal information for sharing with the location and Maps APIs
  3. Integrating with Facebook and Twitter for sharing by using the UWP Toolkit.

Project Rome

Most people have multiple devices, and often begin an activity on one device but end up finishing it on another. To accommodate this, apps need to span devices and platforms.

The Remote Systems APIs, also known as Project Rome, enable you to write apps that let your users start a task on one device and continue it on another. The task remains the central focus, and users can do their work on the device that is most convenient for them. For example, you might be listening to the radio on your phone in the car, but when you get home you may want to transfer playback to the Xbox One that is hooked up to your home stereo system.

The Adventure Works app takes advantage of Project Rome in order to create a second screen experience. It uses the Remote System APIs to connect to companion devices for a remote control scenario. Specifically, it uses the app messaging APIs to create an app channel between two devices to send and receive custom messages. Devices can be connected proximally through Bluetooth and local network or remotely through the cloud, and are connected by the Microsoft account of the person using them.

In Adventure Works, you can use a tablet, phone or even your desktop as a second experience for a slideshow displayed on your TV through the Xbox One. The slideshow images can be controlled easily on the Xbox through the remote or controller, and the second screen experience allows the same. However, with the second device, the user has the ability to view all photos at once, select which one to show on the big screen, and even take advantage of capabilities of the smaller device otherwise not available to the Xbox, such as inking on images for a collaborative experience.


Adventure Works uses Project Rome in two places to start the second screen experience. First, when a user navigates to a collection of photos, they can click on Connect at the top to see available systems and connect to one of them. Or, if the Xbox is already showing a slideshow, a companion device will prompt the user to start controlling the experience.


For these scenarios to work, the app needs to be aware of other devices, and that is where Project Rome comes in. To start the discovery of devices, use the RemoteSystem.CreateWatcher method to create a remote system watcher and subscribe to the appropriate events before calling the Start method (see code on GitHub):


_remoteSystemWatcher = RemoteSystem.CreateWatcher(BuildFilters());
_remoteSystemWatcher.RemoteSystemAdded += RemoteSystemWatcher_RemoteSystemAdded;
_remoteSystemWatcher.RemoteSystemRemoved += RemoteSystemWatcher_RemoteSystemRemoved;
_remoteSystemWatcher.RemoteSystemUpdated += RemoteSystemWatcher_RemoteSystemUpdated;
_remoteSystemWatcher.Start();

The BuildFilters method simply creates a list of filters for the watcher. For the purposes of Adventure Works we chose to limit the discovery to only Xbox and Desktop devices that are available in proximity.
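The BuildFilters method isn’t reproduced in this post, but a minimal sketch of what it might look like with the Remote Systems filter APIs is below. The exact filter set is an assumption; see the GitHub source for the real implementation.

// using System.Collections.Generic;
// using Windows.System.RemoteSystems;
private List<IRemoteSystemFilter> BuildFilters()
{
    return new List<IRemoteSystemFilter>
    {
        // discover only devices in proximity (local network or Bluetooth)
        new RemoteSystemDiscoveryTypeFilter(RemoteSystemDiscoveryType.Proximal),

        // limit discovery to Xbox and Desktop device kinds
        new RemoteSystemKindFilter(new[] { RemoteSystemKinds.Xbox, RemoteSystemKinds.Desktop }),

        // skip devices that aren't currently available
        new RemoteSystemStatusTypeFilter(RemoteSystemStatusType.Available)
    };
}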

We wanted to be able to launch the app on the Xbox from any other device and go directly to the slideshow. We first declared a protocol in the app manifest and implemented the OnActivated method in App.xaml.cs to launch the app directly to the slideshow. Once this was done, we were able to use the RemoteLauncher.LaunchUriAsync command to launch the slideshow on the remote app if it wasn’t already running (see code on GitHub).

var launchUriStatus =
    await RemoteLauncher.LaunchUriAsync(
        new RemoteSystemConnectionRequest(system.RemoteSystem),
        new Uri("adventure:" + deepLink)).AsTask().ConfigureAwait(false);

To control the slideshow, we needed to be able to send and receive messages between the two devices. We covered AppServiceConnection in a previous blog post, but it can also be used to create a messaging channel between apps on different devices using the OpenRemoteAsync method (see code on GitHub).


var appService = new AppServiceConnection()
{
    AppServiceName = "com.adventure",
    PackageFamilyName = Windows.ApplicationModel.Package.Current.Id.FamilyName
};

RemoteSystemConnectionRequest connectionRequest = new RemoteSystemConnectionRequest(remoteSystem);
var status = await appService.OpenRemoteAsync(connectionRequest);

if (status == AppServiceConnectionStatus.Success)
{
    var message = new ValueSet();
    message.Add("ping", "");
    var response = await appService.SendMessageAsync(message);
}

Once the app is running, both the client and the host can send messages to communicate status and control the slideshow. Messages are not limited to simple strings; arbitrary binary data can be sent over, such as inking information. (This messaging code happens in SlideshowClientPage and SlideshowPage, and the messaging events are all implemented in the ConnectedService source file.)

For example, in the client, the code to send ink strokes looks like this:


var message = new ValueSet();
message.Add("stroke_data", data); // data is a byte array
message.Add("index", index);
var response = await ConnectedService.Instance.SendMessageFromClientAsync(message, SlideshowMessageTypeEnum.UpdateStrokes);

The message is sent over using ValueSet objects and the host handles the stroke messages (along with other messages) in the ReceivedMessageFromClient handler:


private void Instance_ReceivedMessageFromClient(object sender, SlideshowMessageReceivedEventArgs e)
{
    switch (e.QueryType)
    {
        case SlideshowMessageTypeEnum.Status:
            e.ResponseMessage.Add("index", PhotoTimeline.CurrentItemIndex);
            e.ResponseMessage.Add("adventure_id", _adventure.Id.ToString());
            break;
        case SlideshowMessageTypeEnum.UpdateIndex:
            if (e.Message.ContainsKey("index"))
            {
                var index = (int)e.Message["index"];
                PhotoTimeline.CurrentItemIndex = index;
            }
            break;
        case SlideshowMessageTypeEnum.UpdateStrokes:
            if (e.Message.ContainsKey("stroke_data"))
            {
                var data = (byte[])e.Message["stroke_data"];
                var index = (int)e.Message["index"];
                HandleStrokeData(data, index);
            }
            break;
        default:
            break;
    }
}

As mentioned above, the user should be able to directly jump into an ongoing slideshow. As soon as MainPage is loaded, we try to find out if there are any devices already presenting a slideshow. If we find one, we prompt the user to start controlling the slideshow remotely. The code to search for other devices, below (and on GitHub), returns a list of AdventureRemoteSystem objects.


public async Task<List<AdventureRemoteSystem>> FindAllRemoteSystemsHostingAsync()
{
    List<AdventureRemoteSystem> systems = new List<AdventureRemoteSystem>();
    var message = new ValueSet();
    message.Add("query", ConnectedServiceQuery.CheckStatus.ToString());

    foreach (var system in Rome.AvailableRemoteSystems)
    {
        var response = await system.SendMessage(message);
        if (response != null && response.ContainsKey("status"))
        {
            var status = (ConnectedServiceStatus)Enum.Parse(typeof(ConnectedServiceStatus), (String)response["status"]);
            if (status == ConnectedServiceStatus.HostingConnected || status == ConnectedServiceStatus.HostingNotConnected)
            {
                systems.Add(system);
            }
        }
    }

    return systems;
}

An AdventureRemoteSystem is really just a wrapper around the base RemoteSystem class from Project Rome, used to identify instances of the Adventure Works app running on other devices like Surface tablets, Xbox One and Windows 10 phones.
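To make the wrapper’s role concrete, here is a rough sketch of the shape such a class could take, reusing the remote app service channel shown earlier. The body is an assumption; refer to the GitHub source for the real class.

public class AdventureRemoteSystem
{
    public RemoteSystem RemoteSystem { get; }

    public AdventureRemoteSystem(RemoteSystem remoteSystem)
    {
        RemoteSystem = remoteSystem;
    }

    // Open an app service channel to the same app on the remote device,
    // send one message and hand back the response (or null on failure).
    public async Task<ValueSet> SendMessage(ValueSet message)
    {
        var appService = new AppServiceConnection()
        {
            AppServiceName = "com.adventure",
            PackageFamilyName = Windows.ApplicationModel.Package.Current.Id.FamilyName
        };

        var status = await appService.OpenRemoteAsync(new RemoteSystemConnectionRequest(RemoteSystem));
        if (status != AppServiceConnectionStatus.Success)
        {
            return null;
        }

        var response = await appService.SendMessageAsync(message);
        return response.Status == AppServiceResponseStatus.Success ? response.Message : null;
    }
}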

Make sure to check out the full source code and try it on your own devices. And if you want to learn even more, make sure to check out the Cross-device experiences with Project Rome blog post.

Maps and location

As part of building out Adventure Works, we knew that we wanted to develop an app that showed a more social experience, so we added a way to see the adventures of our fictional friends and the locations of those adventures. UWP supports rich map experiences by providing controls to display maps with 2D, 3D or Streetside views using APIs from the Windows.UI.Xaml.Controls.Maps namespace. You can mark points of interest (POI) on the map by using pushpins, images, shapes or XAML UI elements. You can use location services with your map to find notable places, and you can even overlay tiled images or replace the map images altogether.

The UWP Maps APIs provide powerful yet simple tools for working with and customizing location data. For instance, in order to get the user’s current location, you use the Geolocator class to request the current geoposition of the device:


var accessStatus = await Geolocator.RequestAccessAsync();
switch (accessStatus)
{
    case GeolocationAccessStatus.Allowed:

        // Get the current location.
        Geolocator geolocator = new Geolocator();
        Geoposition pos = await geolocator.GetGeopositionAsync();
        return pos.Coordinate.Point;

    default:
        // Handle the case if  an unspecified error occurs
        return null;
}

With this location information in hand, you can then create a MapIcon object based on it and add it to your map control.


if (currentLocation != null)
{
    var icon = new MapIcon();
    icon.Location = currentLocation;
    icon.NormalizedAnchorPoint = new Point(0.5, 0.5);
    icon.Image = RandomAccessStreamReference.CreateFromUri(new Uri("ms-appx:///Assets/Square44x44Logo.targetsize-30.png"));
                    
    Map.MapElements.Add(icon);
}

Adding the friends on the map is similar, but we used XAML elements instead of a MapIcon, giving us the ability to move focus through each one using the controller or remote on the Xbox.


Map.Children.Add(button);
MapControl.SetLocation(button, point);
MapControl.SetNormalizedAnchorPoint(button, new Point(0.5, 0.5));

Directional navigation works best when focusable elements are laid out in a grid. Because the friends can be laid out randomly on the map, we wanted to make sure that the focus experience works great with the controller. We used the XYFocus properties of the buttons to specify how the focus should move from one to the other. We used the longitude to specify the order, so the user can move through each friend left and right, and pressing down will bring the focus to the main controls. To see the full implementation, take a look at the project on GitHub.


foreach (var button in orderedButtons)
{
    button.XYFocusUp = button;
    button.XYFocusRight = button;
    button.XYFocusLeft = previousBtn != null ? previousBtn : button;
    button.XYFocusDown = MainControlsViewOldAdventuresButton;

    if (previousBtn != null)
    {
        previousBtn.XYFocusRight = button;
    }

    previousBtn = button;
}
if (orderedButtons.Count() > 1)
{
    orderedButtons.Last().XYFocusRight = orderedButtons.First();
    orderedButtons.First().XYFocusLeft = orderedButtons.Last();
}

While the Adventure Works app only uses geolocation for the current device, you can easily extend it to do things like finding nearby friends. You should also consider lighting up additional features depending on which device the app is running on. For example, finding great nearby places to take photos is really more of a mobile experience than a living room experience, so you could add that feature but only enable it when the app is installed on a phone, as sketched below.
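One simple way to make that check is the device family string. This is a hypothetical snippet, not code from the sample, and the button name is made up for illustration:

// using Windows.System.Profile;
bool isPhone = AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Mobile";

// FindNearbyPhotoSpotsButton is a hypothetical control
FindNearbyPhotoSpotsButton.Visibility = isPhone ? Visibility.Visible : Visibility.Collapsed;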

Facebook and Twitter integration (and the UWP Community Toolkit)

What’s more social than being able to share adventures and photos to your favorite social networks? The UWP Community Toolkit includes service integration for both Facebook and Twitter, simplifying OAuth authentication along with your most common social tasks.

The open-source toolkit includes new helper functions, animations, tile and toast notifications, custom controls and app services that simplify or demonstrate common developer tasks, and it has been used extensively throughout Adventure Works. It can be used with any new or existing UWP app written in C# or VB.NET, and the app can be deployed to any Windows 10 device, including the Xbox One. Because the toolkit is strongly aligned with the Windows SDK for Windows 10, feedback about it will be incorporated in future SDK releases. And it just makes common tasks easy and simple!


For instance, logging in and posting to Twitter can be accomplished in only three lines of code.


// Initialize service, login, and tweet
TwitterService.Instance.Initialize("ConsumerKey", "ConsumerSecret", "CallbackUri");
await TwitterService.Instance.LoginAsync();
await TwitterService.Instance.TweetStatusAsync("Hello UWP!", imageStream);

The Adventure Works app lets users authenticate with either their Twitter account or Facebook account. The standard UWP Toolkit code for authenticating with Twitter is shown above. Doing the same thing with Facebook is just as easy.


FacebookService.Instance.Initialize(Keys.FacebookAppId);
success = await FacebookService.Instance.LoginAsync();
await FacebookService.Instance.PostPictureToFeedAsync("Shared from Adventure Works", "my photo", stream);

Take a look at the Identity.cs source file on GitHub for the full implementation in Adventure Works, and make sure to visit the UWP Community Toolkit GitHub page to learn more. The toolkit is written for the community and fully welcomes the developer community’s input. It is intended to be a repository of best practices and tools for those of us who love working with XAML platforms. You can also preview the capabilities of the toolkit by downloading the UWP Community Toolkit Sample App in the Windows Store.

That’s all for now

Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, watch the event if you missed it and let us know what you think through the comments below or on Twitter @WindowsDev.

Don’t miss the last blog post of the series next week, where we’ll share the finished Adventure Works sample app and discuss how to take advantage of more personal computing APIs such as speech and inking.

Until then, happy coding!


 

Bringing 3D to everyone through open standards


Earlier this week at the Microsoft Windows 10 Event in New York, we shared our vision around 3D for everyone (read more about it from Terry Myerson and Megan Saunders). As part of achieving that vision, we are delighted to share that Microsoft is joining the 3D Formats working group at Khronos to collaborate on its GL Transmission Format (glTF).

At Microsoft, we are committed to an open and interoperable 3D content development ecosystem.  As 3D content becomes more pervasive, there is a need for a common, open and interoperable language to describe, edit, and share 3D assets between different applications. glTF fills this need as an expressive and capable open standard.

We look forward to collaborating with the community and our industry partners to help glTF deliver on its objectives and achieve broad support across many devices and applications. To further the openness goal, we will continue our open source contributions including further development of glTF support in the open source frameworks such as BabylonJS.

As the working group starts thinking about the next version, we are especially interested in joining discussions about some of the subjects that have seen the biggest community momentum in the public forums. The Physically Based Rendering (PBR) materials proposal is one of those topics. PBR materials are a flexible way for 3D content creators to specify the rendering characteristics of their surfaces. Industry-standard implementations can ensure that any PBR content will look consistent irrespective of the scene lighting and environment. Additionally, because the PBR material definition is a high-level abstraction that is not tied to any specific platform, 3D assets with PBR materials can be rendered consistently across platforms.

This kind of cross-platform, cross-application power is what will ultimately make glTF truly ubiquitous and Microsoft is proud to be part of this journey.

Visit Bing’s Haunted Graveyard

Getting an early start on this year’s Halloween festivities? You’re not alone. The Bing team is excited to share an early treat with all of you. Head to Bing.com to see what eerie sights and sounds await in our haunted graveyard.


 
You can also search Bing for last-minute costume ideas for adults, kids or our personal favorite, Star Wars dog costumes.




When you’re done searching, don’t forget to wish Cortana a “happy Halloween.” She may just share some costume ideas she’s considering.

       
   
Have a safe and happy Halloween!
 
- The Bing Team
 
 

Just released – Windows developer evaluation virtual machines – October 2016 build


We’re releasing the October 2016 edition of our evaluation Windows developer virtual machines (VMs) on Windows Dev Center. The VMs come in Hyper-V, Parallels, VirtualBox and VMware flavors and will expire on 01/17/17.

These installs contain:

If you want a non-evaluation version, we offer licensed virtual machines as well, but you’ll need a Windows 10 Pro key.  The Azure portal also has virtual machines you can spin up with the Windows developer tooling installed!

If you have feedback on the VMs, please provide it over at the Windows Developer Feedback UserVoice site.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


In Case You Missed It – This Week in Windows Developer


This week in the world of Windows – announcements and tutorials big and small ushered in next-level capabilities for all developers, from mixed reality support with the new Windows 10 Creators Update to a “how-to” on location sharing and more with Project Rome.

On to the recap!

High DPI support means beautiful apps at any size

Users today work on and with all types of devices, with all sizes of screen: wearables with minuscule displays, desktops with major resolution, large-format screens with millions of pixels. How do you make your app look good on any device? Windows now has the support you need, with improvements in high DPI (dots per inch) scaling. Get the latest – click through below.

 

Project Rome helps apps go social in the latest App Dev on Xbox post

Our App Dev on Xbox series continued with a tutorial on the capabilities of Project Rome, location sharing and more. Through the sample app Adventure Works, our team brought you the ability to have shared second screen experiences through Project Rome; location and proximal information for sharing with the location and Maps APIs; and integration with Facebook and Twitter for sharing via the UWP Toolkit.

Try it out for yourself – follow the tutorial linked below:

 

Surface Studio, Surface Dial and the Windows 10 Creators Update – oh my!

We’re a bit biased – but perhaps the most exciting news for our dev community this week came from Wednesday’s Microsoft Windows 10 Event. From Surface Studio and Surface Dial to the Windows 10 Creators Update, there will soon be an even greater wealth of opportunity for developers: new input methods to incorporate into their apps, support for mixed reality experiences and much, much more.

If you didn’t watch the Microsoft Windows 10 Event livestream, check out some highlights from the event below:


 

And read Kevin Gallo’s take on what these announcements mean for developers:


 

Microsoft joins the 3D Formats working group at Khronos

We’re delighted to share that Microsoft is joining the 3D Formats working group at Khronos to collaborate on its GL Transmission Format (glTF). Among the areas we’re excited to dig into – with such clear enthusiasm and interest from the developer community – is the Physically Based Rendering (PBR) materials proposal. Read more about next steps for the working group here.

October virtual machine updates are live!

And last, but certainly not least – the latest virtual machine updates have gone live. Read all about the installs by clicking through below:

 

Happy coding to all!

Using dotnet watch test for continuous testing with .NET Core and XUnit.net


When teaching .NET Core I do a lot of "dotnet new" Hello World demos to folks who've never seen it before. That has its place, but I also wanted to show how easy it is to get set up with unit testing on .NET Core.

For this blog post I'm going to use the command line so you know there's nothing hidden, but you can also use Visual Studio or Visual Studio Code, of course. I'll start at the command prompt, then briefly move to Code.

Starting from an empty folder, I'll make a SomeApp folder and a SomeTests folder.

C:\example\someapp> dotnet new
C:\example\someapp> md ..\sometests && cd ..\sometests
C:\example\sometests> dotnet new -t xunittest

At this point I've got a HelloWorld app and a basic test, but the two aren't related - they aren't attached and nothing real is being tested.

Tests are run with dotnet test, not dotnet run. Tests are libraries and don't have an entry point, so dotnet run isn't what you want.

c:\example>dotnet test SomeTests
Project SomeTests (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
xUnit.net .NET CLI test runner (64-bit win10-x64)
Discovering: SomeTests
Discovered: SomeTests
Starting: SomeTests
Finished: SomeTests
=== TEST EXECUTION SUMMARY ===
SomeTests Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.197s
SUMMARY: Total: 1 targets, Passed: 1, Failed: 0.

I'll open my test project's project.json and add a reference to my other project.

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable"
  },
  "dependencies": {
    "System.Runtime.Serialization.Primitives": "4.1.1",
    "xunit": "2.1.0",
    "dotnet-test-xunit": "1.0.0-rc2-*"
  },
  "testRunner": "xunit",
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        },
        "SomeApp": "1.0.0-*"
      },
      "imports": [
        "dotnet5.4",
        "portable-net451+win8"
      ]
    }
  }
}

I'll make a little thing to test in my App.

public class Calc
{
    public int Add(int x, int y) => x + y;
}

And add some tests.

public class Tests
{
    [Fact]
    public void TwoAndTwoIsFour()
    {
        var c = new Calc();
        Assert.Equal(4, c.Add(2, 2));
    }

    [Fact]
    public void TwoAndThreeIsFive()
    {
        var c = new Calc();
        Assert.Equal(4, c.Add(2, 3));
    }
}

Because the Test app references the other app/library, I can just make changes and run "dotnet test" from the command line. It will build both dependencies and run the tests all at once.

Here's the full output including both build and test.

c:\example> dotnet test SomeTests
Project SomeApp (.NETCoreApp,Version=v1.0) will be compiled because inputs were modified
Compiling SomeApp for .NETCoreApp,Version=v1.0

Compilation succeeded.
0 Warning(s)
0 Error(s)

Time elapsed 00:00:00.9814887
Project SomeTests (.NETCoreApp,Version=v1.0) will be compiled because dependencies changed
Compiling SomeTests for .NETCoreApp,Version=v1.0

Compilation succeeded.
0 Warning(s)
0 Error(s)

Time elapsed 00:00:01.0266293


xUnit.net .NET CLI test runner (64-bit win10-x64)
Discovering: SomeTests
Discovered: SomeTests
Starting: SomeTests
Tests.Tests.TwoAndThreeIsFive [FAIL]
Assert.Equal() Failure
Expected: 4
Actual: 5
Stack Trace:
c:\Users\scott\Desktop\testtest\SomeTests\Tests.cs(20,0): at Tests.Tests.TwoAndThreeIsFive()
Finished: SomeTests
=== TEST EXECUTION SUMMARY ===
SomeTests Total: 2, Errors: 0, Failed: 1, Skipped: 0, Time: 0.177s
SUMMARY: Total: 1 targets, Passed: 0, Failed: 1.

Oops, I made a mistake. I'll fix that test and run "dotnet test" again.

c:\example> dotnet test SomeTests
xUnit.net .NET CLI test runner (64-bit .NET Core win10-x64)
Discovering: SomeTests
Discovered: SomeTests
Starting: SomeTests
Finished: SomeTests
=== TEST EXECUTION SUMMARY ===
SomeTests Total: 2, Errors: 0, Failed: 0, Skipped: 0, Time: 0.145s
SUMMARY: Total: 1 targets, Passed: 1, Failed: 0.

I can keep changing code and running "dotnet test" but that's tedious. I'll add dotnet watch as a tool in my Test project's project.json.

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable"
  },
  "dependencies": {
    "System.Runtime.Serialization.Primitives": "4.1.1",
    "xunit": "2.1.0",
    "dotnet-test-xunit": "1.0.0-rc2-*"
  },
  "tools": {
    "Microsoft.DotNet.Watcher.Tools": "1.0.0-preview2-final"
  },
  "testRunner": "xunit",
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        },
        "SomeApp": "1.0.0-*"
      },
      "imports": [
        "dotnet5.4",
        "portable-net451+win8"
      ]
    }
  }
}

Then I'll go back and rather than typing  "dotnet test" I'll type "dotnet watch test."

c:\example> dotnet watch test
[DotNetWatcher] info: Running dotnet with the following arguments: test
[DotNetWatcher] info: dotnet process id: 14064
Project SomeApp (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
Project SomeTests (.NETCoreApp,Version=v1.0) will be compiled because inputs were modified
Compiling SomeTests for .NETCoreApp,Version=v1.0
Compilation succeeded.
0 Warning(s)
0 Error(s)
Time elapsed 00:00:01.1479348

xUnit.net .NET CLI test runner (64-bit .NET Core win10-x64)
Discovering: SomeTests
Discovered: SomeTests
Starting: SomeTests
Finished: SomeTests
=== TEST EXECUTION SUMMARY ===
SomeTests Total: 2, Errors: 0, Failed: 0, Skipped: 0, Time: 0.146s
SUMMARY: Total: 1 targets, Passed: 1, Failed: 0.
[DotNetWatcher] info: dotnet exit code: 0
[DotNetWatcher] info: Waiting for a file to change before restarting dotnet...

Now if I make a change to either the Tests or the projects under test it will automatically recompile and run the tests!

[DotNetWatcher] info: File changed: c:\example\SomeApp\Program.cs
[DotNetWatcher] info: Running dotnet with the following arguments: test
[DotNetWatcher] info: dotnet process id: 5492
Project SomeApp (.NETCoreApp,Version=v1.0) will be compiled because inputs were modified
Compiling SomeApp for .NETCoreApp,Version=v1.0

I'm able to do all of this with any text editor and a command prompt.

How do YOU test?


Sponsor: Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release!



© 2016 Scott Hanselman. All rights reserved.
     

The “Internet of Stranger Things” Wall, Part 1 – Introduction and Remote Wiring


Overview

I am a child of the 80s. Raiders of the Lost Ark was the first movie I saw by myself in the movie theater. The original Star Wars trilogy was an obsession. And WarGames inspired me more than anything else to become a programmer.

But it was movies like The Goonies that I would watch over and over again because they spoke to me in a language that reflected what it was like to be a kid at that time. They took kids on a grand adventure, while still allowing them to be kids in a way that so few movies can pull off.

So, of course when a friend pointed out the Netflix series Stranger Things, I dove right in, and while sitting down at my PC I binge-watched every episode over a weekend. It had a treatment of 80s childhood that was recognizable, without being a painful cliché. It referenced movies like The Goonies, ET, and The X-Files in a really fun way.

If you haven’t yet watched the series, go ahead and watch it now. This blog post will still be here when you finish up. 🙂

One of the most iconic scenes in the series is when Winona Ryder, herself a star of some of my favorite 80s and 90s movies, uses an alphabet wall made of Christmas lights to communicate with her son Will, who is stuck in the Upside Down.

While not physically there, Will could still hear her. So, she would ask him a question and he would respond by lighting up the individual Christmas light associated with each letter on the wall. In the show, the alphabet wall takes up one whole wall in her living room.

I won’t go into more detail than that because I don’t want to spoil the show for those who have not yet seen it or for those who didn’t take my advice to stop and watch it now.

Here’s my smaller (approximately 4’ x 4’) version of the alphabet wall as used during my keynote at the TechBash 2016 conference in Pennsylvania:

[Photo: my approximately 4’ x 4’ version of the alphabet wall on stage at TechBash 2016]

“Will? Will? Are you there?”

At the events where I used it, I put on a wig that sort of resembled Winona’s frazzled hair in the series (but also made me look like part of a Cure cover band), and had my version of the theme/opening music playing on an Elektron Analog Four synthesizer/sequencer in the background. I then triggered the wall with a question and let it spell out the answer with the Christmas lights on the board.

Here’s a block diagram of the demo structure. You can see it involves a few different pieces, all of which are things I enjoy playing with.

[Figure: block diagram of the demo structure]

In this three-part series, I’ll describe how I built the wall, what products I used, how I built the app, how I built and communicated with the Bot Framework-based service, and how I made the music. In the end, you should have enough information to be able to create your own version of the wall. You’ll learn about:

  • Windows Remote Wiring
  • LED sink ICs
  • Constructing the Wall
  • Wiring the LED Christmas lights
  • Adding UWP voice recognition
  • Setting up a natural language model in LUIS
  • Building a Bot Framework-based bot
  • Music and MIDI
  • And more

There will be plenty of code and both maker and developer-focused technical details along the way.

This first post will cover:

  • Creating the UWP app
  • Windows Remote Wiring
  • Using the MBI5026 LED sink driver

If you’re unfamiliar with the show or the wall and want to see a quick online-only version of a Stranger Things alphabet wall, you can see one at http://StrangerThingsGIFGenerator.com.


The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring (this post)
  • Part 2 – Constructing the wall and adding music
  • Part 3 – Adding voice recognition and intelligence

Creating the basic UWP app

This app is something I used for demonstrating at a couple conferences. As such, it has an event-optimized UI — meaning big text that will show up well even on low contrast projectors. Additionally, it means I need a button to test the board (“Send Alphabet”), test MIDI (“Toggle MIDI”), echo back in case the network is down, and also submit some canned questions in case the network or bot service can’t be reached. When you do live demos, it’s always good to have backups and alternate paths so that a single point of failure doesn’t kill the entire demo. From experience, I can tell you that networks at venues, even speaker and keynote networks, are the single most common killer of cool demos.

This is the UI I put together.

[Screenshot: the demo app UI]

The microphone button starts voice recognition. In case of microphone failure (backups!) I can simply type in the text box — the message icon to the right submits the message. In the case of echo, the app simply lights up the text on the wall, bypassing the online portion of the demo. In the case of the “Ask a question” field, it sends the message to a Bot Framework bot to be processed.

Despite the technologies I’m using, everything here starts with the standard C#/XAML UWP Blank App template in Visual Studio. I don’t need to use any specific IoT or bot-centric templates for the Windows 10 app.

I am on the latest public SDK version at the time of this post. This is important to note, because the NuGet MIDI library only supports that version (or higher) of the Windows 10 Anniversary Update SDK. (If you need to use an earlier version like 10586, you can compile the library from source.)

I use the Segoe MDL2 Assets font for the icons on the screen. That font is the current Windows standard iconography font. There are a few ways to do this in XAML. In this case, I just set the font and pasted in the correct Unicode value for the icon (you can use Character Map or another app if you wish). One very helpful resource that I use when working with this font is the ModernIcons.io Segoe MDL2 Assets – Cheatsheet site. It gives you the Unicode values in a markup-ready format, making it super easy to use in your XAML or HTML app.


There’s also a free app which you may prefer over the site.
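As an example of the approach described above, here is a small hypothetical snippet that assigns the Segoe MDL2 Assets microphone glyph to a button from code-behind. The glyph value comes from the cheatsheet; the button name is made up for illustration.

// MicButton is a hypothetical Button defined in XAML
MicButton.FontFamily = new FontFamily("Segoe MDL2 Assets");
MicButton.Content = "\uE720"; // “Microphone” glyph in Segoe MDL2 Assets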

The rest of the UI is standard C# and XAML stuff (I’m not doing anything fancy). In fact, when it comes to program structure you’ll find this demo wanting. Why? When I share this source code, I want you to focus on what’s required to use any of these technologies rather than taking a cognitive hit trying to grok whatever design pattern I used to structure the app. Unless specifically trying to demonstrate a design pattern, I find over-engineered demo apps cumbersome to trod through when looking for a chunk of code to solve a specific problem.

Windows Remote Wiring Basics

When I built this, I wanted to use it as a way to demonstrate how to use Windows Remote Wiring (also called Windows Remote Arduino). Windows Remote Wiring makes it possible to use the IO on an Arduino from a Windows Store app. It does this by connecting to the Arduino through a USB or Bluetooth serial connection, and then using the Firmata protocol (which is itself built on MIDI) to transfer the pin values and other commands back and forth.

Typically used with a PC or phone, you can even use this approach with a Windows 10 IoT Core device and an Arduino. That’s a quick way to add additional IO or other capabilities to an IoT project.

For a primer on Remote Wiring, check the link above, or take a look at this video to learn a bit more about why we decided to make this possible:

Remoting in this way has slower IO than doing the work directly on the Arduino, but as an example this is just fine. If you were going to do something production-ready using this approach, I’d recommend bringing the calls up to a higher level and remoting commands (like “Show A”) to the Arduino instead of remoting the pin values and states.

The reason the PC is involved at all is because we need the higher-level capabilities offered by a Windows 10 PC to communicate with the bot, do voice recognition, etc. You could also do these on a higher level IoT Core device like the Intel Joule.

Remote wiring is an excellent way to prototype a solution from the comfort of your PC. It’s also very useful when you’re trying to decide what capabilities you’ll ultimately need in the final target IoT board. The API is very similar to the Windows.Devices.Gpio APIs, so moving to Windows 10 IoT Core when moving to production is not very difficult at all.
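For a sense of that similarity, here is a sketch (not from this project) of the equivalent output-pin setup written against Windows.Devices.Gpio on IoT Core; the pin number is arbitrary for illustration:

// using Windows.Devices.Gpio;
var gpio = GpioController.GetDefault();
var sdiPin = gpio.OpenPin(18);                 // illustrative pin number
sdiPin.SetDriveMode(GpioPinDriveMode.Output);  // same idea as pinMode(..., OUTPUT)
sdiPin.Write(GpioPinValue.High);               // same idea as digitalWrite(..., HIGH)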

For my project, I used a very long USB cable. I didn’t want to mess around with Bluetooth at a live event.

To initialize the Arduino connection in this project, I used this code in my C# standard Windows 10 UWP app:


RemoteDevice _arduino;
UsbSerial _serial;

private const string _vid = "VID_2341";
private const string _pid = "PID_0043";


private void InitializeWiring()
{
    _serial = new UsbSerial(_vid, _pid);
    _arduino = new RemoteDevice(_serial);

    _serial.ConnectionEstablished += OnSerialConnectionEstablished;

    _serial.begin(57600, SerialConfig.SERIAL_8N1);
}

I got the VID and PID from looking in the Device Manager properties for the connected Arduino. Super simple, right? I found everything I needed in our tutorial files and documentation.

The final step for Arduino setup is to set the pin modes. This is done in the handler for the ConnectionEstablished event.


private void OnSerialConnectionEstablished()
{

    //_arduino.pinMode(_sdiPin, PinMode.I2C);
    _arduino.pinMode(_sdiPin, PinMode.OUTPUT);
    _arduino.pinMode(_clockPin, PinMode.OUTPUT);
    _arduino.pinMode(_latchPin, PinMode.OUTPUT);
    _arduino.pinMode(_outputEnablePin, PinMode.OUTPUT);

    _arduino.digitalWrite(_outputEnablePin, PinState.HIGH); // turn off all LEDs

    ClearBoard(); // clear out the registers
}

private const UInt32 _clearValue = 0x0;        
private async void ClearBoard()
{
    // clear it out
    await SendUInt32Async(_clearValue, 0);

}

The SendUInt32Async method will be explained in a bit. For now, it’s sufficient to know that it is what lights up the LEDs. Now to work on the electronics part of the project.

Arduino connection to the LED sink ICs

There are a number of good ways to drive the LEDs using everything from specialized drivers to transistors to various types of two dimensional arrays (a 5×6 array would do it, and require 11 IO pins). I decided to make it super simple and dev board-agnostic and use the MBI5026GN LED driver chip, purchased from Evil Mad Scientist. A single MBI5026 will sink current from 16 LEDs. To do a full alphabet of 26 letters, I used two of these.

The MBI5026 is very simple to use. It’s basically a souped-up shift register with above-average constant current sinking abilities. I connected the LED cathodes (negative side) to the pins and the anode (positive side) to positive voltage. To turn on an LED, just send a high value (1) for that pin.

So for 16 pins with pins 0 through 5 and 12 and 15 turned on, that means that we would send a set of high/low values that looks like this:

[Figure: the sixteen high/low values sent to the driver, with positions 0 through 5, 12 and 15 high]

The MBI5026 data sheet explains how to pulse the clock signal so it knows when to read each value. There are a couple other pins involved in the transfer, which are also documented in the data sheet.

The IC also includes a pin for shifting out bits that are overflowing from its 16 positions. In this way, you can chain as many of these together as you want. In my case, I chained together two and always passed in 32 bits of data. That’s why I used a UInt32 in the above code.

In this app, I’ll only ever turn on a single LED at a time. So every value sent over will be a single bit turned on with the other thirty-one bits turned off. (This also makes it easier to get away with not worrying about the amp draw from the LEDs.)

To make mapping letters to the 32-bit value easier, I created an array of 32-bit numbers in the app and stored them as the character table for the wall. Although I followed alphabetical order when connecting them, this table approach also supports arbitrary connections of the LEDs, as long as the values in the array stay in alphabetical order.


private UInt32[] _letterTable = new UInt32[]
{
    0x80000000, // A 10000000000000000000000000000000 binary
    0x40000000, // B 01000000000000000000000000000000
    0x20000000, // C 00100000000000000000000000000000
    0x10000000, // D ...
    0x08000000, // E
    0x04000000, // F
    0x02000000, // G
    0x01000000, // H
    0x00800000, // I
    0x00400000, // J
    0x00200000, // K
    0x00100000, // L
    0x00080000, // M
    0x00040000, // N
    0x00020000, // O
    0x00010000, // P
    0x00008000, // Q
    0x00004000, // R
    0x00002000, // S
    0x00001000, // T
    0x00000800, // U
    0x00000400, // V
    0x00000200, // W ...
    0x00000100, // X 00000000000000000000000100000000
    0x00000080, // Y 00000000000000000000000010000000
    0x00000040, // Z 00000000000000000000000001000000
};

These numbers will be sent to the LED sink ICs, LSB (Least Significant Bit) first. In the case of the letter A, that means the bit to turn on the letter A will be the very last bit sent over in the message. That bit maps to the first pin on the first IC.

LEDs require resistors to limit current and keep from burning out. There are a number of scientifically valid approaches to testing the LED lights and figuring out which resistor size to use. I didn’t use any of them, and instead opted to burn out LEDs until I found a reasonable value. 🙂

In reality, with the low voltage we’re using, you can get close using any online resistor value calculator and the default values. We’re not trying to maximize output here and the values would normally be different from color to color (especially blue and white vs. orange and red), in any case. A few hundred ohms works well enough.

Do note that the way the MBI5026 handles the resistor and sets the constant current is slightly different from what you might normally use. A single resistor is shared across all 16 outputs, and the driver regulates a constant current. The formula is given on page 9 of the datasheet:

IOUT = (VR-EXT / REXT) × 15

But again, we’re only lighting one LED at a time and we’re not looking to maximize performance or brightness here. Additionally, we’re not using 16 LEDs at once. And, as said above, we also don’t know the actual forward current or forward voltage of the LEDs we’re using. If you want to be completely correct, you could have a different sink driver for each unique LED color, figure out the forward voltage and the correct resistor value, and then plug that into the appropriate driver.
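To put rough numbers on it: assuming the datasheet's typical reference voltage of about 1.26 V for VR-EXT (double-check that against your copy of the datasheet), a 680 Ω external resistor would set each output to roughly (1.26 / 680) × 15 ≈ 28 mA, comfortably in the range of a typical indicator LED.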

With that information at hand, it’s time to wire up the breadboard. Assuming I didn’t forget any, here’s the list of all the connections.

image7

Or if you prefer something more visual:

image8

I handled the wiring in two stages. In stage one, I wired the MBI5026 breadboard to the individual posts for each letter. This let me do all that fiddly work at my desk instead of directly on the wall. I used simple construction screws (which I had tested for conductivity) as posts to wire to.

You can see the result here, mounted on the back of the wall.

image9

You can see the individual brown wires going from each of the output pins on the pair of MBI5026 ICs directly to the letter posts. I simply wrapped the wire around the post; there is no solder or hot glue involved. If you decide to solder the wires, use caution: the screws will sink a lot of the heat, and you'll likely end up scorching the paper label and burning down all your hard work. The wire-wrapped approach is easier and also easily repaired. It also avoids fire. Fire = bad.

The board I put everything on ended up being a bit large to fit between the rows on the back of the wall, so I took the whole thing over to the table saw. I’m the first person I know to take an Arduino, breadboard and wired circuit, and run it across a saw. It survived. 🙂

image10

In the Windows app, I wanted to make sure the code would allow taking an arbitrary string as input and would light up the LEDs in the right order. First, the code that processes the string:


public async Task RenderTextAsync(string message, 
             int onDurationMs = 500, int delayMs = 0, 
             int whitespacePauseMs = 500)
{
    message = message.ToUpper().Trim();

    byte[] asciiValues = Encoding.ASCII.GetBytes(message);

    int asciiA = Encoding.ASCII.GetBytes("A")[0];

    for (int i = 0; i < message.Length; i++)
    {
        char ch = message[i];

        if (char.IsWhiteSpace(ch))
        {
            // pause
            if (whitespacePauseMs > 0)
                await Task.Delay(whitespacePauseMs);
        }
        else if (char.IsLetter(ch))
        {
            byte val = asciiValues[i];
            int ledIndex = val - asciiA;

            UInt32 bitmap = _letterTable[ledIndex];

            // send the letter
            await SendUInt32Async(bitmap, onDurationMs);

            // clear it out
            await SendUInt32Async(_clearValue, 0);

            if (delayMs > 0)
                await Task.Delay(delayMs);

        }
        else
        {
            // unsupported character. Ignore
        }
    }
}

The code first gets the ASCII value for each character in the string. Then, for each character in the string, it checks to see if it’s whitespace or a letter. If neither, it is ignored. If whitespace, we delay for a specified period of time. If a letter, we look up the appropriate letter 32-bit value (a bitmap with a single bit turned on), and then send that bitmap to the LEDs, LSB first.

The code to send the 32-bit map is shown here:


private const int _latchPin = 7;            // LE
private const int _outputEnablePin = 8;     // OE
private const int _sdiPin = 3;              // SDI
private const int _clockPin = 4;            // CLK

// send 32 bits out by bit-banging them with a software clock
private async Task SendUInt32Async(UInt32 bitmap, int outputDurationMs)
{
    for (int i = 0; i < 32; i++)
    {
        // clock low
        _arduino.digitalWrite(_clockPin, PinState.LOW);

        // get the next bit to send
        var b = bitmap & 0x01;

        if (b > 0)
        {
            // send 1 value

            _arduino.digitalWrite(_sdiPin, PinState.HIGH);
        }
        else
        {
            // send 0 value
            _arduino.digitalWrite(_sdiPin, PinState.LOW);
        }

        // clock high
        _arduino.digitalWrite(_clockPin, PinState.HIGH);

        await Task.Delay(1);    // this is an enormous amount of time, 
                                // of course. There are faster timers/delays 
                                // you can use.

        // shift the bitmap to prep for getting the next bit
        bitmap >>= 1;
    }

    // latch
    _arduino.digitalWrite(_latchPin, PinState.HIGH);
    await Task.Delay(1);
    _arduino.digitalWrite(_latchPin, PinState.LOW);
            
    // turn on LEDs
    _arduino.digitalWrite(_outputEnablePin, PinState.LOW);

    // keep the LEDs on for the specified duration
    if (outputDurationMs > 0)
        await Task.Delay(outputDurationMs);

    // turn the LEDs off
    _arduino.digitalWrite(_outputEnablePin, PinState.HIGH);
}

This is bit-banging a shift register over USB, to an Arduino. No, it’s not fast, but it doesn’t matter at all for our use here.

The MBI5026 Data Sheet includes the timing diagram I used when figuring out how to send the clock signals and data. Note that the actual period of these clock pulses isn’t important, it’s the relative timing/order of the signals that counts. The MBI5026 can be clocked at up to 25MHz.

image11

Using that information, I was able to prototype using regular old LEDs on a breadboard. I didn’t do all 26, but I did a couple at the beginning and a couple at the end to ensure I didn’t have any off-by-one errors or similar.

Next, I needed to scale it up to a real wall. We’ll cover that in the next post, before we finish with some speech recognition and natural language processing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on twitter @pete_brown

Most of all, thanks for reading!

The Top Inquiries on Bing after each Debate

During the debates, and in the moments and days afterward, debate-related searches surged on Bing. But what are the most-asked questions on Bing after each debate? Filtering out common, heavy-traffic searches, we isolated the peaking queries that triggered our election-fact answers.
 
The First Presidential Debate - 9/26 – 9/28
What is the second continental congress? – We were happy to see Bing users interested in brushing up on their American history as the 2016 elections hit their stride. A fun additional, if obvious, fact: the Second Continental Congress succeeded...the First Continental Congress, which began meeting in late 1774.


 
The Vice-Presidential Debate 10/4 – 10/6
Who won the Vice Presidential debate in 2012? – The Tim Kaine vs. Mike Pence VP debate triggered memories of 2012, when Joe Biden and Paul Ryan faced off at Kentucky’s Centre College.

The Second Presidential Debate 10/9 – 10/11
Why is election day always on a Tuesday? – After the second debate, Election Day started to loom larger in our minds. Bing users wanted to know why Tuesday is always the day we exercise our democratic rights.
 


The Final Presidential Debate 10/19 – 10/21
What is the Electoral College? – Now, with the stakes as high as they can get, it appears voters want to learn more about this electoral body. Remember, the popular vote does not elect the president in the United States.
 
Keep searching. Keep questioning. And don’t forget to vote!
 
- The Bing team


 

The week in .NET – .NET Foundation – Serilog – Super Dungeon Bros


To read last week’s post, see The week in .NET – .NET, ASP.NET, EF Core 1.1 Preview 1 – On .NET on EF Core 1.1 – Changelog – FluentValidation – Reverse: Time Collapse.

On .NET: Martin Woodward on the .NET Foundation

Last week, Martin Woodward was on the show to talk about the .NET Foundation:

This week, we’ll speak with Mei-Chin Tsai and Jan Kotas about CoreRT, .NET Native, and .NET. The show is on Thursdays and begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

Package of the week: Serilog

Modern applications can be complex, busy, asynchronous and distributed. This adds up to make understanding behavior and finding bugs a significant challenge. While tools for monitoring and debugging apps are always improving, Serilog helps by capturing log data in a form that’s substantially easier for tooling to work with.

On the surface, Serilog looks like most logging libraries:
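(A minimal sketch for illustration – the console sink and the message here are my placeholders, not code from the original announcement.)

using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()    // assumes the Serilog console sink package
    .CreateLogger();

Log.Information("Application starting up");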

While messages can be formatted into text, Serilog uses named placeholders to capture and preserve parameters like Elapsed as first-class event properties:
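(Again an illustrative sketch; Elapsed and the hypothetical ProcessOrder call are mine.)

var sw = System.Diagnostics.Stopwatch.StartNew();
ProcessOrder();   // hypothetical unit of work being timed
Log.Information("Order processed in {Elapsed} ms", sw.ElapsedMilliseconds);

Rendered as text this reads like an ordinary log line, but Elapsed also travels with the event as a named property that structured sinks can index.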

Many of the Serilog sinks accept data in structured formats like JSON, so searches like Elapsed > 10 can be answered directly, without the need for regular expressions or log parsing.

Serilog is built from the ground up for distributed logging, and comes with a rich set of features for grouping, enriching and correlating log events. The project is open source and developed by a dedicated community on GitHub.

Game of the Week: Super Dungeon Bros

Super Dungeon Bros is a fast-paced dungeon brawler where you can play with up to four friends. Complete quests from the Gods of Rock with heavy metal heroes Axl, Lars, Freddie and Ozzie (get it?). You and your friends must explore and fight your way through the deepest, darkest dungeons of Rökheim, searching for epic loot and the legends of fabled rock stars as you solve puzzles and destroy undead monsters. Super Dungeon Bros features cross-platform multiplayer, multiple worlds, randomly generated dungeons and a series of daily and weekly dungeon challenges.

Super Dungeon Bros

Super Dungeon Bros is being developed by React Games using Unity and C#. It is available for Xbox One, PlayStation 4 and Steam.

User group meeting of the week: Intro to Azure DocumentDB in Tallahassee, FL

On Thursday, November 3, at 6:00PM, the Capital City .NET User Group will give an intro to Azure DocumentDB for .NET and SQL Server developers with Santosh Hari. Santosh will build a simple ASP.NET MVC web app that uses C# and DocumentDB for storing data. Then he’ll walk through writing queries for DocumentDB by leveraging SQL and LINQ querying skills.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Microsoft Teams integration with Visual Studio Team Services


VSTS + Teams

Earlier today, Microsoft Teams was announced. Microsoft Teams is a new chat-based workspace in Office 365 that makes collaborating on software projects with Team Services a breeze. Customers often tell us that there is a need for better chat integration in Team Services. With Microsoft Teams, we aim to provide a comprehensive chat and collaboration experience across your Agile and development work.

Starting today, Team Services users can stay up to date with alerts for work items, pull requests, commits, and builds using the Connectors within Microsoft Teams. Each Connector event is its own conversation, allowing users to be notified of events they care about and discuss them with their team.

VSTS Connectors

We are also bringing the Team Services Kanban boards right into Microsoft Teams, allowing your team to track and create new work items without leaving your team’s channel. The board integration will be available starting next week on November 9. Each board also comes with its own conversation.

teams-kanbanboard

Instructions on how to set up these integrations can be found on the Team Services marketplace.

We’re still early in our collaboration with Microsoft Teams. I am looking for your feedback on the current integrations as well as feedback on new integrations you’d like to see between Team Services and Microsoft Teams – just leave a comment or send me an email.

Vcpkg updates: Static linking is now available


One month ago, we announced the availability of Vcpkg, a command line tool to easily acquire and build open source C++ libraries and consume them in Visual Studio 2015. The initial release provided only dynamic link libraries, but we heard your feedback, and we are pleased to announce static linking support in Vcpkg.

To generate static libraries, use one of the static triplets: x86-windows-static or x64-windows-static.

For example, to build zlib statically for x86 use: 

vcpkg install zlib:x86-windows-static

The library will be installed in the following folder:  vcpkg\installed\x86-windows-static
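The x64 flavor works the same way, just with the other triplet:

vcpkg install zlib:x64-windows-static

and the output lands in vcpkg\installed\x64-windows-static.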

Community contributions

We really want to say thanks to the community. We started with 20 libraries in the catalog, and the community has contributed 60+ more, for a total of more than 80 libraries [see the complete list here]. The tempo is amazing, with almost one new library added each day: you guys really rock! We collected very good feedback and many interesting suggestions, as well as many requests to add more libraries. Thanks for every contribution and comment; that’s the way we want this project to succeed: by being a real community-driven effort.

If you need a specific library, please create an issue identifying the library you want; don’t hesitate to be precise about the version you need, the source location… To see the list of libraries requested so far, see the issues tagged with “new port request”. Once the issue is created, the community can jump on your request and create the right port file. Or, if you are already familiar with building the library, please make a Pull Request with your port file and the associated patch file if needed.

We have updated the documentation

We improved the port file creation topics (see example #2, “Package a remote project”, in example.md) and added a patch file example to help you create and maintain the port file collection more easily.

With static linking we have now reached an important milestone. We are currently planning the next milestone in our roadmap, and this is the right time to share your suggestions and hopes for this project. Create an issue on GitHub [ https://github.com/Microsoft/vcpkg ] and start the conversation.

See you soon on the GitHub repo. For any questions, you can contact us at vcpkg@microsoft.com.


ASP.NET Core RESTful Web API versioning made easy


There are a LOT of interesting and intense arguments that have been made around how you should version your Web API. As soon as you say RESTful, it turns into a religious argument where folks may just as well quote from the original text. ;)

Regardless of how you personally version your Web APIs, and side-stepping any arguments one way or the other, there's a great new repository by Chris Martinez that Jon Galloway turned me on to at https://github.com/Microsoft/aspnet-api-versioning. There's ASP.NET 4.x Web API, OData with ASP.NET Web APIs, and now ASP.NET Core 1.x. Fortunately Chris has assembled a nicely factored set of libraries called "ASP.NET API Versioning" that add service API versioning in a very convenient way.

As Chris points out:

The default API versioning configuration is compliant with the versioning semantics outlined by the Microsoft REST Guidelines. There are also a number of customization and extension points available to support transitioning services that may not have supported API versioning in the past or supported API versioning with semantics that are different from the Microsoft REST versioning guidelines.

It's also worth pointing out how great the documentation is given it's mostly a one-contributor project. I'm sure Chris would appreciate your help though, even if you're a first timer.

Chris has NuGet packages for three flavors of Web APIs on ASP.NET:

But you should really clone the repo and check out his excellent samples.

When versioning services there's a few schools of thought and with ASP.NET Core it's super easy to get started:

public void ConfigureServices( IServiceCollection services )
{
    services.AddMvc();
    services.AddApiVersioning();

    // remaining other stuff omitted for brevity
}

Oh, but you already have an API that's not versioned yet?

services.AddApiVersioning(
    o =>
    {
        o.AssumeDefaultVersionWhenUnspecified = true;
        o.DefaultApiVersion = new ApiVersion( new DateTime( 2016, 7, 1 ) );
    } );

Your versions can look however you'd like them to:

  • /api/foo?api-version=1.0
  • /api/foo?api-version=2.0-Alpha
  • /api/foo?api-version=2015-05-01.3.0
  • /api/v1/foo
  • /api/v2.0-Alpha/foo
  • /api/v2015-05-01.3.0/foo

QueryString Parameter Versioning

I'm not a fan of this one, but here's the general idea:

[ApiVersion( "2.0" )]
[Route( "api/helloworld" )]
public class HelloWorld2Controller : Controller {
[HttpGet]
public string Get() => "Hello world!";
}

So this means to get 2.0 over 1.0 in another Controller with the same route, you'd go here:

/api/helloworld?api-version=2.0

Also, don't worry, you can use namespaces to have multiple HelloWorldControllers without having to have any numbers in the class names. ;)
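For instance, something along these lines (my own sketch, not one of the repo's samples):

namespace MyApi.V1 {
    [ApiVersion( "1.0" )]
    [Route( "api/helloworld" )]
    public class HelloWorldController : Controller {
        [HttpGet]
        public string Get() => "Hello world!";
    }
}

namespace MyApi.V2 {
    [ApiVersion( "2.0" )]
    [Route( "api/helloworld" )]
    public class HelloWorldController : Controller {
        [HttpGet]
        public string Get() => "Hello world v2!";
    }
}

Same class name, same route; the api-version query string parameter picks the winner.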

URL Path Segment Versioning

This happens to be my first choice (yes I know Headers are "better," more on that later). You put the version in the route like this.

Here we're throwing in a little curveball. There are three versions but just two controllers.

[ApiVersion( "1.0" )]
[Route( "api/v{version:apiVersion}/[controller]" )]
public class HelloWorldController : Controller {
public string Get() => "Hello world!";
}

[ApiVersion( "2.0" )]
[ApiVersion( "3.0" )]
[Route( "api/v{version:apiVersion}/helloworld" )]
public class HelloWorld2Controller : Controller {
[HttpGet]
public string Get() => "Hello world v2!";

[HttpGet, MapToApiVersion( "3.0" )]
public string GetV3() => "Hello world v3!";
}

To be clear, you have total control, but the result from the outside is quite clean with /api/v[1|2|3]/helloworld. In fact, you can see with this example where this is more sophisticated than what you can do with routing out of the box. (I know some of you are thinking "meh, I'll just make a route table." I think the semantics are much clearer and cleaner this way.)

Header Versioning

Or, the hardest way (and the one that a lot of people think is best, but I disagree) is HTTP Headers. Set this up in ConfigureServices for ASP.NET Core:

public void ConfigureServices( IServiceCollection services )
{
    services.AddMvc();
    services.AddApiVersioning( o => o.ApiVersionReader = new HeaderApiVersionReader( "api-version" ) );
}

When you do HeaderApiVersioning you won't be able to just do a GET in your browser, so I'll use Postman to add the header (or I could use Curl, or WGet, or PowerShell, or a Unit Test):

image

Deprecating

Speaking of semantics, here's a nice one. Let's say an API is going away in the next 6 months.

[ApiVersion( "2.0" )]
[ApiVersion( "1.0", Deprecated = true )]

This advertises that 1.0 is going away soon and that folks should consider 2.0. Where does it advertise this fact? The response headers!

api-supported-versions: 2.0, api-deprecated-versions: 1.0

All in all, this is very useful stuff and I'm happy to add it to my personal toolbox. Should this be built in? I don't know but I sure appreciate that it exists.

SIDE NOTE: There is/was the start of a VersionRoute over in the AspNet.Mvc repo. Maybe these folks need to join forces?

How do YOU version your Web APIs and Services?


Sponsor: Big thanks to Telerik! They recently launched their UI toolset for ASP.NET Core so feel free to check it out or learn more about ASP.NET Core development in their recent whitepaper.



© 2016 Scott Hanselman. All rights reserved.
     

Git perf and scale


New features and UI changes naturally get a lot of attention. Today, I want to spotlight the less visible work that we do on Team Services: ensuring our performance and scale meet our customers’ needs now and in the future. We are constantly working behind the scenes profiling, benchmarking, measuring, and iterating to make every action faster. In this post, I’ll share 3 of the dozens of improvements we’ve made recently.


First up, we’ve sped up pull request merges significantly. We have an enormous “torture test repo” (tens of GBs across millions of files and 100K+ folders) we use for perf and scale testing. Median merge time for this repo went from 92 seconds to 33 seconds, a 64% reduction. We also saw improvements for normal-sized repos, but it’s harder to generalize their numbers in a meaningful way.

Several changes contributed to this gain. One was adopting a newer version of LibGit2. Another was altering LibGit2’s caching strategy – its default wasn’t ideal for the way we run merges. As a customer, you’ll notice the faster merges when completing PRs. For our service, it means we can serve more users with fewer resources.


An engineer on a sister team noticed that one of our ref lookups exhibited O(N) behavior. Refs are the data structure behind branches in Git. We have to look up refs to display branch names on the web. If you’re familiar with time complexity of algorithms, you’ll recall that O(N) behavior means that the work done by a program scales linearly with the size of the input.

The work done in this particular lookup scaled linearly with the number of branches in a repository. Up to several hundred refs, this lookup was “fast enough” from a human’s point of view. Humans are quite slow compared to computers 😉

Every millisecond counts in web performance, and there’s no reason to do excess work. We were able to rewrite that lookup to be constant with respect to the number of branches.
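As a generic illustration of the before/after shape (not the actual service code), think linear scan versus keyed lookup:

using System.Collections.Generic;

class GitRef
{
    public string Name;
    public string CommitId;
}

static class RefLookup
{
    // Before: O(N) -- walk every ref to resolve one branch name.
    public static string FindRefLinear(List<GitRef> refs, string name)
    {
        foreach (var r in refs)
            if (r.Name == name)
                return r.CommitId;
        return null;
    }

    // After: constant with respect to the number of branches --
    // index by name once, then resolve directly.
    public static string FindRefByKey(Dictionary<string, string> refsByName, string name)
    {
        string commitId;
        return refsByName.TryGetValue(name, out commitId) ? commitId : null;
    }
}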


The last improvement requires a bit more explanation. At various points in our system, we need to track the history of a file: which commits touched this file? Our initial implementation (which served us well for several years) was to track each commit in a SQL table which we could query by file path or by commit.

Fast forward several years. One of the oldest repos on our service is the one which holds the code for VSTS itself. The SQL table tracking its commits had grown to 90GB (many, many times the size of the repo itself). Even after the usual tricks like schema changes and SQL page compression, we weren’t able to get the table size down to an acceptable level. We needed to rethink the problem.

The team spent 3+ months designing and implementing a fast, compact representation of the Git graph. This representation is small enough to keep in memory on the application tier machines, which themselves are cheaper to operate than SQL machines. The change was carefully designed and implemented to be 100% transparent to end customers. Across a variety of measurements, we found no noticeable performance regressions and in many cases saw improvements.

We were able to completely drop the commit change tracking table, freeing up dozens of gigabytes on every scale unit’s database tier. We finished migrating to the new system over 2 months ago. Besides a handful of incidents during early dogfooding, we have not received complaints about either its performance or correctness. (I’m flirting with chaos making such claims, of course. If you have a scenario where performance regressed since the beginning of September, email me so we can investigate.)

This explanation leaves out a lot of details in favor of brevity. If there’s interest, we’re thinking of doing a series of blog articles on how our Git service works under the hood. Let me know in the comments what you want to hear more about.

Thanks to the VC First Party team [Wil, Jiange, Congyi, Stolee, Garima, Saeed, and others] for their insights on this topic. All remaining errors are mine alone.

Microsoft Teams integration, repo favorites, and new package management and release management regions – Nov 2


Note: The improvements discussed in this post will be rolling out throughout the next week.

There are some exciting new features this sprint.

Package Management in India and Brazil

Package Management is now available to Team Services accounts hosted in the South India and Brazil South Azure regions. To get started, install the extension from the Marketplace.

Microsoft Teams integration

Microsoft Teams is a new chat-based workspace in Office 365 that makes collaborating on software projects with Team Services a breeze. Team Services users can stay up to date with alerts for work items, pull requests, commits, and builds using the Connectors within Microsoft Teams. Starting November 9, users will also be able to bring their Kanban boards right into Microsoft Teams. For more information, see our blog.

Microsoft Teams

Repo favorites

You can now favorite the repos you work with most frequently. In the repo picker, you will see tabs for All repositories and your Favorites. Click the star to add a repository to your list of Favorites.

repo favorites

Rollback build definitions

You can roll a build definition back to a previous version by going to the History tab when editing a build definition.

Disable the sync and checkout of sources in a build

Starting with the 2.108 agent, you can optionally disable the automatic source sync and checkout for Git. This will enable you to handle the source operations in a task or script instead of relying on the agent’s built-in behavior. All standard source-related variables like Source.Version, Source.Branch and Build.SourcesDirectory are set.

Docker extension enhancements

There have been a number of enhancements to the Docker extension in the marketplace:

  • Docker Compose run action
  • Restart policy for Docker run
  • Ensuring the registry host is specified on logout
  • Bug fixes

.NET Core build task

We added support for building, testing, and publishing .NET Core applications with a dedicated .NET Core task for project.json templates.

.net core task

Build and release management templates

We have added support for new templates in Build and Release for building ASP.NET/ASP.NET core and deploying to Azure web applications.

build template

release template

ASP.NET Core and Node.js deployments

The Azure Web App task now supports ASP.NET Core and Node.js applications. You just specify the folder with all application contents for deployment. This task can run on Linux platforms for deploying ASP.NET Core or Node-based applications.

Azure Web App Service manage task

We added a new task for managing Azure Web App services. Currently, this task has support for swapping any slot to production.

Release Management available in multiple regions

The Release Management service is now available in Europe, Australia, Brazil, and India regions in addition to the US. All of your Release Management data is now co-located with the rest of your Team Services account data.

REST client helpers for Test Step operations

Users will now be able to create, modify and delete test steps and test step attachments in Test Case work items using the helper classes we have added to the REST client (see the RestApi-Sample).

Test case description in Web runner

Customers often use the test case description field for capturing the prerequisites that must be met before the test case execution can start. With this update, users will now be able to view the test case description information in the Web runner by using the Show description option.

test case description

As always, if you have ideas on things you’d like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.

Thanks,

Jamie Cool

How to use Test Step using REST Client Helper?


Test Case is the backbone of all manual testing scenarios. You can create test cases using the web client from the Test or Work hubs, or from Microsoft Test Manager (MTM); they are then stored in Team Foundation Server or Visual Studio Team Services. Using these clients you can create test artifacts such as test cases with test steps, test step attachments, shared steps, parameters, and shared parameters. A test case is also a work item, and using the Work Item REST API you can create a work item of type Test Case; see Create a work item.

Problem

Currently there is no support for modifying or updating test steps in a test case work item. The work item stores test steps, associated test step attachments, and expected results in a custom XML document, so a helper is needed to create that custom XML when updating test steps.

Solution

With the current deployment, we have added support to create/read/update/delete test steps (action and expected result) and test step attachments. The ITestBase interface exposes the key methods – LoadActions and SaveActions – which provide helpers in both C# and JS for the operations mentioned above.

Requirement

C# Client (Microsoft.TeamFoundationServer.Client), as released in the previous deployment.
OR
JS Client (vss-sdk-extension). (Note: JS changes will be available only after the current deployment completes.)

Walk through using new helper in C# client

Here, let’s walk through step-by-step on how to consume these newly added helper classes. We have also added GitHub sample for the same with some more operations (link given at the bottom of the post).

  1. Create an instance of TestBaseHelper class and generate ITestBase object using that.
    TestBaseHelper helper = new TestBaseHelper();
    ITestBase testBase = helper.Create();
  2. ITestBase exposes methods to create test steps, generate XML, save actions, and load actions. You can assign a title, set the expected result and description for each test step, and associate an attachment using the attachment URL. In the end, all test steps are added to the actions collection of the testBase object (see below).
    ITestStep testStep1 = testBase.CreateTestStep();
    testStep1.Title = "title1";
    testStep1.ExpectedResult = "expected1";
    testStep1.Description = "description1";
    testStep1.Attachments.Add(testStep1.CreateAttachment(attachmentObject.Url, "attachment1"));
    
    testBase.Actions.Add(testStep1);
    
  3. A call to SaveActions uses the helper classes to set the appropriate test case field – Test Steps – saving the newly added steps, expected results, and attachment links. The JSON patch document created using SaveActions is then passed to CreateWorkItemAsync as shown below.
    JsonPatchDocument json = new JsonPatchDocument();
    
    // create a title field
    JsonPatchOperation patchDocument1 = new JsonPatchOperation();
    patchDocument1.Operation = Operation.Add;
    patchDocument1.Path = "/fields/System.Title";
    patchDocument1.Value = "New Test Case";
    json.Add(patchDocument1);
    
    // add test steps in json
    // it will update json document based on test steps and attachments
    json = testBase.SaveActions(json);
    
    // create a test case
    var testCaseObject = _witClient.CreateWorkItemAsync(json, projectName, "Test Case").Result;
    
  4. To modify a test case and its steps, you need to get the test case and call LoadActions, which internally uses the helper class to parse the given XML and attachment links as shown below. This populates the testBase object with all the details.
    testCaseObject = _witClient.GetWorkItemAsync(testCaseId, null, null, WorkItemExpand.Relations).Result;
    
    // initiate testbase object again
    testBase = helper.Create();
    
    // fetch xml from testcase object
    var xml = testCaseObject.Fields["Microsoft.VSTS.TCM.Steps"].ToString();
    
    // create TestAttachmentLink objects from the work item relations; the test step helper will use these
    IList<TestAttachmentLink> tcmlinks = new List<TestAttachmentLink>();
    foreach (WorkItemRelation rel in testCaseObject.Relations)
    {
        TestAttachmentLink tcmlink = new TestAttachmentLink();
        tcmlink.Url = rel.Url;
        tcmlink.Attributes = rel.Attributes;
        tcmlink.Rel = rel.Rel;
        tcmlinks.Add(tcmlink);
    }
    
    // load test step xml and attachment links
    testBase.LoadActions(xml, tcmlinks);
    
  5. Once testBase object has been loaded with test case information, you can update test steps and attachments in the test case object.
    ITestStep testStep;
    //updating 1st test step
    testStep = (ITestStep)testBase.Actions[0];
    testStep.Title = "New Title";
    testStep.ExpectedResult = "New expected result";
    
    //removing 2nd test step
    testBase.Actions.RemoveAt(1);
    
    //adding new test step
    ITestStep testStep3 = testBase.CreateTestStep();
    testStep3.Title = "Title 3";
    testStep3.ExpectedResult = "Expected 3";
    testBase.Actions.Add(testStep3);
    
  6. Update test case object using new changes in the test steps and attachments.
    JsonPatchDocument json2 = new JsonPatchDocument();
    json2 = testBase.SaveActions(json2);
    // update testcase wit using new json
    testCaseObject = _witClient.UpdateWorkItemAsync(json2, testCaseId).Result;

As shown above, you can now use the helper classes provided to update test case steps, and still use the existing Work Item REST APIs for test case work item. You can find comprehensive samples for both C# and JS here on GitHub project: RESTApi-Sample.

– Test Management Team

The mystery of dotnet watch and 'Microsoft.NETCore.App', version '1.1.0-preview1-001100-00' was not found


WARNING: This post is full of internal technical stuff. I think it's interesting and useful. You may not.

I had an interesting Error/Warning happen when showing some folks .NET Core recently and I thought I'd deconstruct it here for you, Dear Reader, because it's somewhat multi-layered and it'll likely help you. It's not just about Core, but also NuGet, Versioning, Package Management in general, version pinning, "Tools" in .NET Core, as well as how .NET Runtimes work and version. That's a lot! All that from this little warning. Let's see what's up.

First, let's say you have .NET Core installed. You likely got it from http://dot.net and you have either 1.0.0 or the 1.0.1 update.

Then say you have a website, or any app at all. I made one with "dotnet new -t web" in an empty folder.

I added "dotnet watch" as a tool in the project.json like this. NOTE the "1.0.0-*" there.

"tools": {
"Microsoft.DotNet.Watcher.Tools": "1.0.0-*"
}

dotnet watch is nice because it watches the source code underneath it while running your app. If you change your code files, dotnet-watch will notice, and exit out, then launch "dotnet run" (or whatever, even test, etc) and your app will pick up the changes. It's a nice developer convenience.

I tested this out last weekend and it worked great. Then I went to show some folks that Monday and got this error when I typed "dotnet watch."

C:\Users\scott\Desktop\foofoo>dotnet watch
The specified framework 'Microsoft.NETCore.App', version '1.1.0-preview1-001100-00' was not found.
- Check application dependencies and target a framework version installed at:
C:\Program Files\dotnet\shared\Microsoft.NETCore.App
- The following versions are installed:
1.0.0
1.0.1
- Alternatively, install the framework version '1.1.0-preview1-001100-00'.

Let's really look at this. It says "the specified framework...1.1.0" was not found. That's weird, I'm not using that one. I check my project.json and I see:

"Microsoft.NETCore.App": {
"version": "1.0.1",
"type": "platform"
},

So who wants 1.1.0? I typed "dotnet watch." Can I "dotnet run?"

C:\Users\scott\Desktop\foofoo>dotnet run
Project foofoo (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing
Compiling foofoo for .NETCoreApp,Version=v1.0
Hosting environment: Production
Content root path: C:\Users\scott\Desktop\foofoo
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Hey, my app runs fine. But if I "dotnet watch" I get an error.

Remember that dotnet watch and other "tools" like it are not dependencies per se, but helpful sidecar apps. Tools can watch, squish css and js, precompile views, and do general administrivia that isn't appropriate at runtime.

It seems it's dotnet watch that wants something I don't have.

Now, I could go install the framework 1.1.0 that it's asking for, and the error would disappear, but would I know why? That would mean dotnet watch would use .NET Core 1.1.0 but my app (dotnet run) would use 1.0.1. That's likely fine, but is it intentional? Is it deterministic and what I wanted?

I'll open my generated project.lock.json. That's the calculated tree of what we ended up with after dotnet restore. It's a big calculated file but I can easily search it. I see two things. The internal details aren't interesting but version strings are.

First, I search for "dotnet.watcher" and I see this:

"projectFileToolGroups": {
".NETCoreApp,Version=v1.0": [
"Microsoft.AspNetCore.Razor.Tools >= 1.0.0-preview2-final",
"Microsoft.AspNetCore.Server.IISIntegration.Tools >= 1.0.0-preview2-final",
"Microsoft.DotNet.Watcher.Tools >= 1.0.0-*",
"Microsoft.EntityFrameworkCore.Tools >= 1.0.0-preview2-final",
"Microsoft.Extensions.SecretManager.Tools >= 1.0.0-preview2-final",
"Microsoft.VisualStudio.Web.CodeGeneration.Tools >= 1.0.0-preview2-final"
]

Ah, that's a reminder that I asked for 1.0.0-*. I asked for STAR for dotnet-watch but everything else was very clear. They were specific versions. I said "I don't care about the stuff after 1.0.0 for watch, gimme whatever's good."

It seems that a new version of dotnet-watch and other tools came out between the weekend and my demo.

Search more in project.lock.json and I can see what all it asked for...I can see my dotnet-watch's dependency tree.

"tools": {
".NETCoreApp,Version=v1.0": {
"Microsoft.DotNet.Watcher.Tools/1.0.0-preview3-final": {
"type": "package",
"dependencies": {
"Microsoft.DotNet.Cli.Utils": "1.0.0-preview2-003121",
"Microsoft.Extensions.CommandLineUtils": "1.1.0-preview1-final",
"Microsoft.Extensions.Logging": "1.1.0-preview1-final",
"Microsoft.Extensions.Logging.Console": "1.1.0-preview1-final",
"Microsoft.NETCore.App": "1.1.0-preview1-001100-00"
},

Hey now. I said "1.0.0-*" and I ended up with "1.0.0-preview3-final".

Looks like dotnet-watch is trying to bring in a whole new .NET Core. It wants 1.1.0. This new dotnet-watch is part of the wave of new preview stuff from 1.1.0.

But I want to stay on the released and supported "LTS" (long term support) stuff, not the new fancy builds.

I shouldn't have used 1.0.0-* as it was ambiguous. That might be great for my local versions or when I intend to chase the latest but not in this case.

I updated my version in my project.json to this and did a restore.

"Microsoft.DotNet.Watcher.Tools": "1.0.0-preview2-final",

Now I can reliably run dotnet restore and get what I want, and both dotnet watch and dotnet run use the same underlying runtime.


Sponsor: Big thanks to Telerik! They recently launched their UI toolset for ASP.NET Core so feel free to check it out or learn more about ASP.NET Core development in their recent whitepaper.


© 2016 Scott Hanselman. All rights reserved.
     