
Announcing The MVP Reconnect Program & The 2016 MVP Global Summit


Once again, it is the time of the year when we have the privilege to host thousands of passionate technical leaders for our MVP Global Summit. This is far from a traditional technical conference. Next week our whole Redmond campus is dedicated to over 2,000 MVPs. Across 16 buildings and 55 of our largest meeting spaces, MVPs will attend over 620 deeply technical sessions, roundtables and networking opportunities with our Product Groups.

This year we have added some new, great workshops. The content will feature topics ranging from community leadership (led by Jono Bacon, the author of the critically acclaimed The Art of Community) and personal expression through storytelling (hosted by distinguished Microsoft engineer James Whittaker) to technical hands-on sessions hosted by our product teams. We’ll also host a Diversity and Inclusion event to welcome and feature diverse members of the MVP community. Our MVPs will have the opportunity to hack side by side with our engineers to create solutions to real-world problems. And over the course of the entire week, many of our most senior leaders will be speaking with the MVPs. You can learn more about these featured speakers on the MVP Summit website.

One of the best things about the MVP Summit is the community spirit among our MVPs. They always cherish the opportunity to reconnect and learn from each other. Over the years, we have received feedback from former MVPs that they were looking for ways to stay in touch with the program and their peers. We want to make sure the valuable expertise these folks contribute to the community is still recognized and supported.

We are therefore thrilled to announce a new program called MVP Reconnect. This is our way of reconnecting former MVPs and keeping them in touch with Microsoft and this select community of technical experts. The program is available to join starting today. All former MVPs, regardless of their technical expertise or award category, are invited to join the program.

Given all the benefits, access and recognition, lots of people want to know more about how to become an MVP.  I get this question all the time!  While there is no numerical equation for becoming an MVP, the basic formula comes down to distinguishing yourself in both your depth of expertise and your level of contributions that strengthen the technical community. We have highlighted some great examples of real MVPs that showcase the passion, community spirit, and leadership demonstrated by them. I invite you to check out those examples on our new webpage called “What it takes to be an MVP.”

To that end, we are also introducing a new show in our Channel 9 community called The MVP Show. This new format gives everyone the opportunity to see the lives of MVPs around the world and how they do great things for their communities while having fun at the same time. In the first episode, we meet Shih-Ming, who is doing some cool bot development in Taiwan. We plan to feature a different MVP from a different place in each episode. Who knows? One day, it might be you!

About a year ago, I announced the new generation of the Microsoft MVP Award Program. This was the first step of a longer journey towards a broader vision: empowering community leaders to achieve more, while providing the special recognition they deserve. As you can see, we are fully committed to the MVP Award Program and its continuous improvement, and providing the best experience for MVPs is what drives us.

Thank you as always for your contributions and feedback. I am looking forward to talking with you in the next few days during the MVP Summit!

Cheers,

Guggs

@stevenguggs


The “Internet of Stranger Things” Wall, Part 2 – Wall Construction and Music


Overview

I do a lot of woodworking and carpentry at home. Much to my family’s chagrin, our house is in a constant state of flux. I tend to subscribe to the Norm Abram school of woodworking, where there are tools and jigs for everything. Because of this, I have a lot of woodworking and carpentry tools around. It’s not often I get to use them for my day job, but this project turned out to be the perfect excuse.

In part one of this series, I covered how to use Windows Remote Wiring to wire up LEDs to an Arduino, and control them from a Windows 10 UWP app. In this post, we’ll get to constructing the actual wall.

This post covers:

  • Constructing the Wall
  • Music and UWP MIDI
  • Fonts and Title Style

The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring
  • Part 2 – Constructing the wall and adding Music (this post)
  • Part 3 – Adding voice recognition and intelligence

If you’re not familiar with the wall, please go back and read part 1 now. In that, I described the inspiration for this project, as well as the electronics required.

Constructing the Wall

In the show Stranger Things, “the wall” that’s talked about is an actual wall in a living room. For this version, I considered a few different sizes for the wall. It had to be large enough to be easily visible during a keynote and other larger-room presentations, but small enough that I could fit it in the back of the van, or pack in a special box to (expensively) ship across the country. That meant it couldn’t be completely spread out like the one in the TV show. But at the same time, the letters still had to be large enough so that they looked ok next to the full-size Christmas lights.

Finally, I didn’t want any visible seams in the letter field, or anything that would need to be rewired or otherwise modified to set it up. Seams are almost impossible to hide well once a board has traveled a bit. Plus, demo and device-heavy keynote setup is always very time-constrained, so I needed to make sure I could have the whole thing set up in just a few minutes. Whenever I come to an event, the people running it are stunned by the amount of stuff I put on a table. I typically fill a 2×8 table with laptops, devices, cameras, and more.

I settled on using a 4’ x 4’ sheet of ½” plywood as the base, with poplar from the local home store as reinforcement around the edges. I cut the plywood sheet into 32” and 16” sections to make it easier to ship and also so it would easily fit in the back of the family van for the first event we drove to.

The wallpapered portion of the wall ended up being 48” wide and 32” tall. The remaining paneled portion is just under 16” tall. The removable bottom part turned out to be quite heavy, so I left it off when shipping to Las Vegas for DEVintersection.

To build the bottom panel, I considered getting a classic faux wood panel from the local Home Depot and cutting it to size for this. But I really didn’t want a whole 4×8 sheet of fake wood paneling laying around an already messy shop. So instead I used left-over laminate flooring from my laundry room remodel project and cut it to length. Rather than snap the pieces tight together, I left a gap, and then painted the gaps black to give it that old 70s/80s paneling look.

picture1

picture2

The size of this version of the wall does constrain the design a bit. I didn’t try to match the same layout that the letters had in the show, except for having the right letters on the right row. The wall in the show is spaced out enough that you could easily fill a full 4×8 sheet and still look a bit cramped.

The most time-consuming part of constructing the wall was finding appropriately ugly wallpaper. Not surprisingly, a search for “ugly wallpaper” doesn’t generally bring up items for sale :). In the end, I settled for something that was in roughly the same ugliness class as the show wallpaper, but nowhere near an actual match. If you use the wallpaper I did, I suggest darkening it a bit with a tea stain or something similar. As-is, it’s a bit too bright.

Note that the price has gone up significantly since I bought it (perhaps I started an ugly wallpaper demand trend?), so I encourage you to look for other sources. If you find a source for the exact wallpaper, please do post it in the comments below!

Another option, of course, is to use your art skills and paint the “wallpaper” manually. It might actually be easier than hanging wallpaper on plywood, which as it turns out, is not as easy as it sounds. In any case, do the hanging in your basement or some other place that will be ok with getting wet and glued-up.

Here it is with my non-professional wallpaper job. It may look like I’m hanging some ugly sheets out to dry, but this is wallpaper on plywood.

picture3

When painting the letters on the board, I divided the area into three sections vertically and used a leftover piece of flooring as a straight edge. That helped there, but didn’t do anything for my letter spacing / kerning.

To keep the paint looking messy, I used a cheap 1” chip brush as the paint brush. I dabbed on a bit extra in a few places to add drips, and went back over any areas that didn’t come out quite the way I wanted, like the letter “G.”

picture4

Despite measuring things out, I ran out of room when I got to WXYZ and had to squish things together a bit. I blame all the white space around the “V”. There’s a bit of a “Castle of uuggggggh” thing going on at the end of the painted alphabet.

picture5

Once the painting was complete, I used some pre-colored corner and edge trim to cover the top and bottom and make it look a bit more like the show. I attached most trim with construction glue and narrow crown staples (and cleaned up the glue after I took the above photo). If you want to be more accurate and have the time, use dark stained pine chair rail on the bottom edge, between the wallpapered section and the paneled section.

Here you can see the poplar one-by support around the edges of the plywood. I used a combination of 1×3 and 1×4 that I had around my shop. Plywood, especially plywood soaked with wallpaper paste, doesn’t like to stay flat. For that reason, as well as for shipping reasons, the addition of the poplar was necessary.

picture6

You can see some of the wiring in this photo, so let’s talk about that.

Preparing and Wiring the Christmas lights

There are two important things to know about the Christmas lights:

  1. They are LEDs, not incandescent lamps.
  2. They are not actually wired in a string, but are instead individually wired to the control board.

I used normal 120v AC LED light strings. Once cut free of the string, each lamp is just an ordinary low-voltage LED, so it’s easy enough to find lights to repurpose for this project. I just had to pick ones which didn’t have a separate transformer or anything odd like that. Direct 120v plug-in only.

The LED lights I sacrificed for this project are Sylvania Stay-Lit Platinum LED Indoor/Outdoor C9 Multi-Colored Christmas Lights. They had the right general size and look. I purchased two packs for this because I was only going to use the colors actually used on the show and also because I wanted to have some spares for when the C9 housings were damaged in transit, or when I blew out an LED or two.

There are almost certainly other brands that will work, as long as they are LED C9 lamps and the wires are wrapped in a way that you can unravel.

When preparing the lamps, I cut the wires approximately halfway between each pair of lamps. I also discarded any lamps which had three wires going into them, as I didn’t want to bother trying to wire those up. Additionally, I discarded any of the lumps in the wires where fuses or resistors were housed.

picture7

For one evening, my desk was completely covered in severed LED Christmas lamps.

Next, I figured out the polarity of the LED leads and marked them with black marker. It’s important to know the anode from the cathode here because wiring in reverse will both fail to work, and likely burn out the LED, making subsequent trials also fail. Through trial and error, I found the little notch on the inside of the lamp always pointed in the same way, and that it was in the same position relative to the outside clip.

Once marked, I took note of the colors used on the show and, following the same letter/color pairings, drilled an approximately ¼” hole above each letter and inserted both wires for the appropriately colored lamp through to the back. Friction held them in place until I could come through with the hot glue gun and permanently stick them there.

From there, I linked each positive (anode) wire on the LEDs together by twisting the wires together with additional lengths of wire and taping over them with electrical tape. The wire I used here was spare wire from the light string. This formed one continuous string connecting all the LED anodes together.

Next, I connected the end of that string to the +3.3v output on the Arduino. 3.3v is plenty to run these LEDs. The connection is not obvious in the photos, but I used a screw on the side of the electronics board and wired one end to the Arduino and the other end to the light string.

Finally, I wired the negative (cathode) wires to their individual terminals on the electronics board. I used a spool of heavier stranded wire here that would hold up to twisting around the screw terminals. For speed, I used wire nuts to connect those wires to the cathode wire on the LED. That’s all the black wire you see in this photo.

picture8

To make it look like one string of lights, I ran a twisted length of the Christmas light wire pairs (from the same light kit) through the clips on each lamp. I didn’t use hot glue here, but just let it go where it wanted. The effect is such that it looks like one continuous strand of Christmas lights; you only see the wires going into the wall if you look closely.

picture9

I attached the top and bottom together using 1×3 maple boards that I simply screwed to both the top and bottom, and then disassembled when I wanted to tear it down.

gif1

The visuals were all done at that point. I could have stopped there, but one of my favorite things about Stranger Things is the soundtrack. Given that a big part of my job at Microsoft is working with musicians and music app developers, and with the team which created the UWP MIDI API, I knew I had to incorporate that into this project.

Music / MIDI

A big part of the appeal of Stranger Things is the John Carpenter-style mostly analog synthesizer soundtrack by the band Survive (with some cameos by Tangerine Dream). John Carpenter, Klaus Schulze and Tangerine Dream have always been favorites of mine, and I can’t help but feel a shiver when I hear a good fat synth-driven soundtrack. They have remained my inspiration when recording my own music.

So, it would have been just wrong of me to do the demo of the wall without at least some synthesizer work in the background. Playing it live was not an option and I wasn’t about to bring a huge rig, so I sequenced the main arpeggio and kick drum in my very portable Elektron Analog Four using some reasonable stand-ins for the sounds.

At the events, I would start and clock the Analog Four using a button on the app and my Windows 10 UWP MIDI Library clock generator. The only lengthy part of this code is where I check for the Analog Four each time. That’s a workaround because my MIDI library, at the time of this writing, doesn’t expose the hardware add/remove event. I will fix that soon.


private void StartMidiClock()
{
    // I do this every time rather than listen for device add/remove
    // because my library didn't raise the add/remove event in this version
    SelectMidiOutputDevices();

    _midiClock.Start();

    System.Diagnostics.Debug.WriteLine("MIDI started");
}

private void StopMidiClock()
{
    _midiClock.Stop();

    System.Diagnostics.Debug.WriteLine("MIDI stopped");
}


private const string _midiDeviceName = "Analog Four";
private async void SelectMidiOutputDevices()
{
    _midiClock.OutputPorts.Clear();

    IMidiOutPort port = null;

    foreach (var descriptor in _midiWatcher.OutputPortDescriptors)
    {
        if (descriptor.Name.Contains(_midiDeviceName))
        {
            port = await MidiOutPort.FromIdAsync(descriptor.Id);

            break;
        }
    }

    if (port != null)
    {
        _midiClock.OutputPorts.Add(port);
    }
}

For this code to work, I just set the Analog Four to receive MIDI clock and MIDI start/stop messages on the USB port. The sequence itself is already programmed in by me, so all I need to do is kick it off.

If you want to create a version of the sequence yourself, the main riff is a super simple up/down arpeggio of these notes:

picture10

You can vamp on top of that to bring in more of the sound from what S U R V I V E made. I left it as it was and simply played the filter knob a bit to bring it in. A short version of that may be found on my personal SoundCloud profile here.

There are many other components to the music, including a muted kick drum type of sound, a bass line, some additional melody and some other interesting effects, but I hope this helps get you started.

If you’re interested in the synthesizers behind the music, and a place to hear the music itself, check out this tour of S U R V I V E ’s studio.

The final thing that I needed to include here was a nod to the visual style of the opening sequence of the show.

Fonts and Title Style

If you want to create your own title card in a style similar to the show, the font ITC Benguiat is either the same one used, or a very close match. It’s readily available to anyone who wants to license it. I licensed it from Fonts.com for $35 for my own project. The version I ended up using was the regular book font, but I think the Condensed Bold is probably a closer fit.

Even though there are tons of pages, sites, videos, etc. using the title style, be careful about what you do here, as you don’t want to infringe on the show’s trademarks or other IP. When in doubt, consult your lawyer. I did.

picture11

That’s using just the outline and glow text effects. You can do even better in Adobe Photoshop, especially if you add in some lighting effects, adjust the character spacing and height, and use large descending capital letters, like I did at the first event. But I was able to put together the above mockup quickly in PowerPoint using the ITC Benguiat font.

If you don’t want to license a font and then work with the red glow in Adobe Photoshop, you can also create simple versions of the title card at http://makeitstranger.com/

None of that is required for the wall itself, but can help tie things together if you are presenting several related and themed demos like I did. Consider it a bit of polish.

With that, we have the visuals and sound all wrapped up. You could use the wall as-is at this point, simply giving it text to display. That’s not quite enough for what I wanted to show, though. Next up, we need to give the bot a little intelligence, and save on some typing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown.

Most of all, thanks for reading!

Getting personal – speech and inking (App Dev on Xbox series)


The way users interact with apps on different devices has gotten much more personal lately, thanks to a variety of new Natural User Interface features in the Universal Windows Platform. These UWP patterns and APIs are available for developers to easily bring in capabilities for their apps that enable more human technologies. For the final blog post in the series, we have extended the Adventure Works sample to add support for Ink on devices that support it, and to add support for speech interaction where it makes sense (including both synthesis and recognition). Make sure to get the updated code for the Adventure Works Sample from the GitHub repository so you can refer to it as you read on.

And in case you missed the blog post from last week on how to enable great social experiences, we covered how to connect your app to social networks such as Facebook and Twitter, how to enable second screen experiences through Project “Rome”, and how to take advantage of the UWP Maps control and make your app location aware. To read last week’s blog post or any of the other blog posts in the series, or to watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works (v3)

picture1

We are continuing to build on top of the Adventure Works sample app we worked with in the previous two blog posts. If you missed those, make sure to check them out here and here. As a reminder, Adventure Works is a social photo app that allows the user to:

  • Capture, edit, and store photos for a specific trip
  • Auto analyze and auto tag friends using Cognitive Services vision APIs
  • View albums from friends on an interactive map
  • Share albums on social networks like Facebook and Twitter
  • Use one device to remote control slideshows running on another device using Project “Rome”
  • And more…

There is always more to be done, and for this final round of improvements we will focus on two sets of features:

  1. Ink support to annotate images, enable natural text input, as well as the ability to use inking as a presentation tool in connected slideshow mode.
  2. Speech Synthesis and Speech Recognition (with a little help from cognitive services for language understanding) to create a way to quickly access information using speech.

More Personal Computing with Ink

Inking in Windows 10 allows users with Inking capable devices to draw and annotate directly on the screen with a device like the Surface Pen – and if you don’t have a pen handy, you can use your finger or a mouse instead. Windows 10 built-in apps like Sticky Notes, Sketchpad and Screen sketch support inking, as do many Office products. Besides preserving drawings and annotations, inking also uses machine learning to recognize and convert ink to text. OneNote goes a step further by recognizing shapes and equations in addition to text.

picture2

Best of all, you can easily add Inking functionality into your own apps, as we did for Adventure Works, with one line of XAML markup to create an InkCanvas. With just one more line, you can add an InkToolbar to your canvas that provides a color selector as well as buttons for drawing, erasing, highlighting, and displaying a ruler. (In case you have the Adventure Works project open, the InkCanvas and InkToolbar implementation can be found in PhotoPreviewView.)
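
If you want finger and mouse input in addition to the pen, the InkPresenter behind the canvas can be configured in code-behind. Here is a minimal sketch (assuming the InkCanvas is named inker, as it is in the view code):

// requires Windows.UI.Core and Windows.UI.Input.Inking
// accept pen, mouse, and touch input on the ink canvas
inker.InkPresenter.InputDeviceTypes =
    CoreInputDeviceTypes.Pen |
    CoreInputDeviceTypes.Mouse |
    CoreInputDeviceTypes.Touch;

// optional: set default drawing attributes (color and stroke size)
var attributes = new InkDrawingAttributes
{
    Color = Windows.UI.Colors.DarkBlue,
    Size = new Windows.Foundation.Size(3, 3)
};
inker.InkPresenter.UpdateDefaultDrawingAttributes(attributes);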

The InkCanvas allows users to annotate their Adventure Works slideshow photos. This can be done both directly as well as remotely through the Project “Rome” code highlighted in the previous post. When done on the same device, the ink strokes are saved off to a GIF file which is then associated with the original slideshow image.

picture3

When the image is displayed again during later viewings, the strokes are extracted from the GIF file, as shown in the code below, and inserted back into a canvas layered on top of the image in PhotoPreviewView. The code for saving and extracting ink strokes is found in the InkHelpers class.


var file = await StorageFile.GetFileFromPathAsync(filename);
if (file != null)
{
    using (var stream = await file.OpenReadAsync())
    {
        inker.InkPresenter.StrokeContainer.Clear();
        await inker.InkPresenter.StrokeContainer.LoadAsync(stream);
    }
}
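
Saving the strokes is essentially the mirror image of the load above. Here is a minimal sketch, assuming the same inker control and a writable target file:

// requires Windows.Storage and Windows.UI.Input.Inking
var file = await StorageFile.GetFileFromPathAsync(filename);
if (file != null && inker.InkPresenter.StrokeContainer.GetStrokes().Count > 0)
{
    using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
    {
        // writes the strokes as ISF data embedded in a GIF
        await inker.InkPresenter.StrokeContainer.SaveAsync(stream);
    }
}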

Ink strokes can also be drawn on one device (like a Surface device) and displayed on another one (an Xbox One). In order to do this, the Adventure Works code actually collects the user’s pen strokes using the underlying InkPresenter object that powers the InkCanvas. It then converts the strokes into a byte array and serializes them over to the remote instance of the app. You can find out more about how this is implemented in Adventure Works by looking through the GetStrokeData method in SlideshowSlideView control and the SendStrokeUpdates method in SlideshowClientPage.
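
The general idea behind GetStrokeData is to serialize the stroke container into a byte array that can be sent across the connection. Here is a rough sketch of that serialization step (the method name is illustrative, not the sample’s):

// requires Windows.Storage.Streams and Windows.UI.Input.Inking
private async Task<byte[]> SerializeStrokesAsync(InkStrokeContainer container)
{
    using (var stream = new InMemoryRandomAccessStream())
    {
        // write the strokes (ISF data) into the in-memory stream
        await container.SaveAsync(stream);

        var bytes = new byte[stream.Size];
        using (var reader = new DataReader(stream.GetInputStreamAt(0)))
        {
            await reader.LoadAsync((uint)stream.Size);
            reader.ReadBytes(bytes);
        }

        return bytes;
    }
}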

It is sometimes useful to save the ink strokes and original image in a new file. In Adventure Works, this is done to create a thumbnail version of an annotated slide for quick display as well as for uploading to Facebook. You can find the code used to combine an image file with an ink stroke annotation in the RenderImageWithInkToFIleAsync method in the InkHelpers class. It uses the Win2D DrawImage and DrawInk methods of a CanvasDrawingSession object to blend the two together, as shown in the snippet below.


CanvasDevice device = CanvasDevice.GetSharedDevice();
CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, (int)inker.ActualWidth, (int)inker.ActualHeight, 96);

var image = await CanvasBitmap.LoadAsync(device, imageStream);
using (var ds = renderTarget.CreateDrawingSession())
{
    var imageBounds = image.GetBounds(device);
                
    //...

    ds.Clear(Colors.White);
    ds.DrawImage(image, new Rect(0, 0, inker.ActualWidth, inker.ActualHeight), imageBounds);
    ds.DrawInk(inker.InkPresenter.StrokeContainer.GetStrokes());
}

Ink Text Recognition

picture4

Adventure Works also takes advantage of Inking’s text recognition feature to let users handwrite the name of their newly created Adventures. This capability is extremely useful if someone is running your app in tablet mode with a pen and doesn’t want to bother with the onscreen keyboard. Converting ink to text relies on the InkRecognizer class. Adventure Works encapsulates this functionality in a templated control called InkOverlay which you can reuse in your own code. The core implementation of ink to text really just requires instantiating an InkRecognizerContainer and then calling its RecognizeAsync method.


var inkRecognizer = new InkRecognizerContainer();
var recognitionResults = await inkRecognizer.RecognizeAsync(_inker.InkPresenter.StrokeContainer, InkRecognitionTarget.All);

You can imagine this being very powerful when the user has a large form to fill out on a tablet device and they don’t have to use the onscreen keyboard.
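
Pulling the text out of those results is just a matter of walking the recognition results and taking a candidate from each block; a quick sketch:

// each InkRecognitionResult covers a block of strokes; take its best text candidate
var builder = new System.Text.StringBuilder();
foreach (var result in recognitionResults)
{
    var candidates = result.GetTextCandidates();
    if (candidates.Count > 0)
    {
        builder.Append(candidates[0]);
        builder.Append(" ");
    }
}
string recognizedText = builder.ToString().Trim();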

More Personal Computing with Speech

There are two sets of APIs that are used in Adventure Works that enable a great natural experience using speech. First, UWP speech APIs allow developers to integrate speech-to-text (recognition) and text-to-speech (synthesis) into their UWP apps. Speech recognition converts words spoken by the user into text for form input, for text dictation, to specify an action or command, and to accomplish tasks. Both free-text dictation and custom grammars authored using Speech Recognition Grammar Specification are supported.

Second, Language Understanding Intelligent Service (LUIS) is a Microsoft Cognitive Services API that uses machine learning to help your app figure out what people are trying to say. For instance, if someone wants to order food, they might say “find me a restaurant” or “I’m hungry” or “feed me”. You might try a brute force approach to recognize the intent to order food, listing out all the variations on the concept “order food” that you can think of – but of course you’re going to come up short. LUIS lets you set up a model for the “order food” intent that learns, over time, what people are trying to say.

In Adventure Works, these features are combined to create a variety of speech related functionalities. For instance, the app can listen for an utterance like “Adventure Works, start my latest slideshow” and it will naturally open a slideshow for you when it hears this command. It can also respond using speech when appropriate to answer a question. LUIS, in turn, augments this speech recognition with language understanding to improve the recognition of natural language phrases.

picture5

The speech capabilities for our app are wrapped in a simple assistant called Adventure Works Aide (look for AdventureWorksAideView.xaml). Saying the phrase “Adventure Works…” will invoke it. It will then listen for spoken patterns such as:

  • “What adventures are in [location].”
  • “Show me [user]’s adventure.”
  • “Who is closest to me.”

Adventure Works Aide is powered by a custom SpeechService class. There are two SpeechRecognizer instances that are used at different times, first to recognize the “Adventure Works” phrase at any time:


_continousSpeechRecognizer = new SpeechRecognizer();
_continousSpeechRecognizer.Constraints.Add(new SpeechRecognitionListConstraint(new List<string>() { "Adventure Works" }, "start"));
var result = await _continousSpeechRecognizer.CompileConstraintsAsync();
//...
await _continousSpeechRecognizer.ContinuousRecognitionSession.StartAsync(SpeechContinuousRecognitionMode.Default);

and then to understand free form natural language and convert it to text:

_speechRecognizer = new SpeechRecognizer();
var result = await _speechRecognizer.CompileConstraintsAsync();
SpeechRecognitionResult speechRecognitionResult = await _speechRecognizer.RecognizeAsync();
if (speechRecognitionResult.Status == SpeechRecognitionResultStatus.Success)
{
    string str = speechRecognitionResult.Text;
}

As you can see, the SpeechRecognizer API is used for both listening continuously for specific constraints throughout the lifetime of the app, or to convert any free-form speech to text at a specific time. The continuous recognition session can be set to recognize phrases from a list of strings, or it can even use a more structured SRGS grammar file which provides the greatest control over the speech recognition by allowing for multiple semantic meanings to be recognized at once. However, because we want to understand every variation the user might say and use LUIS for our semantic understanding, we can use the free-form speech recognition with the default constraints.
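
For completeness, here is roughly what reacting to the continuous session looks like with the list constraint above (the handler body is illustrative rather than a copy of the SpeechService code):

_continousSpeechRecognizer.ContinuousRecognitionSession.ResultGenerated +=
    (session, args) =>
    {
        // only react when the recognizer is reasonably confident it heard the wake phrase
        if (args.Result.Confidence == SpeechRecognitionConfidence.High ||
            args.Result.Confidence == SpeechRecognitionConfidence.Medium)
        {
            if (args.Result.Text.Equals("Adventure Works", StringComparison.OrdinalIgnoreCase))
            {
                // hand off to the free-form recognizer (see WakeUpAndListen)
            }
        }
    };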

Note: before using any of the speech APIs on Xbox, the user must give permission to your application to access the microphone. Not all APIs automatically show the dialog currently, so you will need to invoke it yourself. Check out the CheckForMicrophonePermission function in SpeechService.cs to see how this is done in Adventure Works.
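
That check essentially boils down to attempting to initialize audio capture and catching the failure if the user declines. This is a common pattern rather than the sample’s exact implementation:

// requires Windows.Media.Capture
private static async Task<bool> RequestMicrophonePermissionAsync()
{
    try
    {
        var settings = new MediaCaptureInitializationSettings
        {
            StreamingCaptureMode = StreamingCaptureMode.Audio,
            MediaCategory = MediaCategory.Speech
        };

        // initializing audio capture triggers the consent prompt if needed
        var capture = new MediaCapture();
        await capture.InitializeAsync(settings);
        return true;
    }
    catch (UnauthorizedAccessException)
    {
        // the user denied microphone access
        return false;
    }
}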

When the continuous speech recognizer recognizes the key phrase, it immediately stops listening, shows the UI for the AdventureWorksAide to let the user know that it’s listening, and starts listening for natural language.


await _continousSpeechRecognizer.ContinuousRecognitionSession.CancelAsync();
ShowUI();
SpeakAsync("hey!");
var spokenText = await ListenForText();

Subsequent utterances are passed on to LUIS which uses training data we have provided to create a machine learning model to identify specific intents. For this app, we have three different intents that can be recognized: showuser, showmap, and whoisclosest (but you can always add more). We have also defined an entity for username for LUIS to provide us with the name of the user when the showuser intent has been recognized. LUIS also provides several pre-built entities that have been trained for specific types of data; in this case, we are using an entity for geography locations in the showmap intent.

picture6

To use LUIS in the app, we used the official NuGet library, which allowed us to register specific handlers for each intent when we send over a phrase.


var handlers = new LUISIntentHandlers();
_router = IntentRouter.Setup(Keys.LUISAppId, Keys.LUISAzureSubscriptionKey, handlers, false);
var handled = await _router.Route(text, null);

Take a look at the HandleIntent method in the LUISAPI.cs file and the LUISIntentHandlers class, which handles each intent defined in the LUIS portal and is a useful reference for future LUIS implementations.

Finally, once the text has been processed by LUIS and the intent has been processed by the app, the AdventureWorksAide might need to respond back to the user using speech, and for that, the SpeechService uses the SpeechSynthesizer API:


_speechSynthesizer = new SpeechSynthesizer();
var syntStream = await _speechSynthesizer.SynthesizeTextToStreamAsync(toSpeak);
_player = new MediaPlayer();
_player.Source = MediaSource.CreateFromStream(syntStream, syntStream.ContentType);
_player.Play();

The SpeechSynthesizer API can specify a specific voice to use for the generation based on voices installed on the system, and it can even use SSML (speech synthesis markup language) to control how the speech is generated, including volume, pronunciation, and pitch.
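
For example, here is a sketch of selecting an installed voice and using SSML to slow the delivery down (the voice name and markup are illustrative):

// requires System.Linq and Windows.Media.SpeechSynthesis
// pick an installed voice by display name, if present
var voice = SpeechSynthesizer.AllVoices
    .FirstOrDefault(v => v.DisplayName.Contains("David"));
if (voice != null)
{
    _speechSynthesizer.Voice = voice;
}

// SSML controls rate, pitch, and emphasis
string ssml =
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
    "<prosody rate='slow' pitch='low'>Right behind you.</prosody>" +
    "</speak>";

var stream = await _speechSynthesizer.SynthesizeSsmlToStreamAsync(ssml);
_player.Source = MediaSource.CreateFromStream(stream, stream.ContentType);
_player.Play();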

The entire flow, from invoking the Adventure Works Aide to sending the spoken text to LUIS, and finally responding to the user is handled in the WakeUpAndListen method.

There’s more

Though not used in the current version of the project, there are other APIs that you can take advantage of for your apps, both as part of the UWP platform and as part of Cognitive Services.

For example, on desktop and mobile devices, Cortana can recognize speech or text directly from the Cortana canvas and activate your app or initiate an action on behalf of your app. It can also expose actions to the user based on insights about them, and with user permission it can even complete the action for them. Using a Voice Command Definition (VCD) file, developers have the option to add commands directly to the Cortana command set (commands like: “Hey Cortana, show adventure in Europe in Adventure Works”). Cortana app integration is also part of our long-term plans for voice support on Xbox, even though it is not supported today. Visit the Cortana portal for more info.

In addition, there are several speech and language related Cognitive Services APIs that are simply too cool not to mention:

  • Custom Recognition Service – Overcomes speech recognition barriers like speaking style, background noise, and vocabulary.
  • Speaker Recognition – Identify individual speakers or use speech as a means of authentication with the Speaker Recognition API.
  • Linguistic Analysis – Simplify complex language concepts and parse text with the Linguistic Analysis API.
  • Translator – Translate speech and text with a simple REST API call.
  • Bing Spell Check – Detect and correct spelling mistakes within your app.

The more personal computing features provided through Cognitive Services are constantly being refreshed, so be sure to check back often to see what new machine learning capabilities have been made available to you.

That’s all folks

This was the last blog post (and sample app) in the App Dev on Xbox series, but if you have a great idea that we should cover, please let us know; we are always looking for cool app ideas to build and features to implement. Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, read through some of the other blog posts or watch the event if you missed it, and let us know what you think through the comments below or on Twitter.

Happy coding!

Resources

Previous Xbox Series Posts

The “Internet of Stranger Things” Wall, Part 3 – Voice Recognition and Intelligence


Overview

I called this project the “Internet of Stranger Things,” but so far, there hasn’t been an internet piece. In addition, there really hasn’t been anything that couldn’t be easily accomplished on an Arduino or a Raspberry Pi. I wanted this demo to have more moving parts to improve the experience and also demonstrate some cool technology.

First is voice recognition. Proper voice recognition typically takes a pretty decent computer and a good OS. This isn’t something you’d generally do on an Arduino alone; it’s simply not designed for that kind of workload.

Next, I wanted to wire it up to the cloud, specifically to a bot. The interaction in the show is a conversation between two people, so this was a natural fit. Speaking of “natural,” I wanted the bot to understand many different forms of the questions, not just a few hard-coded questions. For that, I wanted to use the Language Understanding Intelligent Service (LUIS) to handle the parsing.

This third and final post covers:

  • Adding Windows Voice Recognition to the UWP app
  • Creating the natural language model in LUIS
  • Building the Bot Framework Bot
  • Tying it all together

You can find the other posts here:

If you’re not familiar with the wall, please go back and read part one now. In that, I describe the inspiration for this project, as well as the electronics required.

Adding Voice Recognition

In the TV show, Joyce doesn’t type her queries into a 1980s era terminal to speak with her son; she speaks aloud in her living room. I wanted to have something similar for this app, and the built-in voice recognition was a natural fit.

Voice recognition in Windows 10 UWP apps is super-simple to use. You have the option of using the built-in UI, which is nice but may not fit your app style, or simply letting the recognition happen while you handle events.
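
If you do want the built-in UI, the only real change is which method you call. Here’s a minimal sketch, reusing the EchoText box from the code below:

var recognizer = new SpeechRecognizer();
await recognizer.CompileConstraintsAsync();

// shows the system "Listening..." popup instead of leaving the UI to you
SpeechRecognitionResult result = await recognizer.RecognizeWithUIAsync();

if (result.Status == SpeechRecognitionResultStatus.Success)
{
    EchoText.Text = result.Text;
}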

There are good samples for this in the Windows 10 UWP Samples repo, so I won’t go into great detail here. But I do want to show you the code.

To keep the code simple, I used two recognizers. One is for basic local echo testing, especially useful if connectivity in a venue is unreliable. The second is for sending to the bot. You could use a single recognizer and then just check some sort of app state in the events to decide if you were doing something for local echo or for the bot.

First, I initialized the two recognizers and wired up the two events that I care about in this scenario.


SpeechRecognizer _echoSpeechRecognizer;
SpeechRecognizer _questionSpeechRecognizer;

private async void SetupSpeechRecognizer()
{
    _echoSpeechRecognizer = new SpeechRecognizer();
    _questionSpeechRecognizer = new SpeechRecognizer();

    await _echoSpeechRecognizer.CompileConstraintsAsync();
    await _questionSpeechRecognizer.CompileConstraintsAsync();

    _echoSpeechRecognizer.HypothesisGenerated +=
                   OnEchoSpeechRecognizerHypothesisGenerated;
    _echoSpeechRecognizer.StateChanged += 
                   OnEchoSpeechRecognizerStateChanged;

    _questionSpeechRecognizer.HypothesisGenerated +=
                   OnQuestionSpeechRecognizerHypothesisGenerated;
    _questionSpeechRecognizer.StateChanged += 
                   OnQuestionSpeechRecognizerStateChanged;

}

The HypothesisGenerated event lets me show real-time recognition results, much like when you use Cortana voice recognition on your PC or phone. In that event handler, I just display the results. The only real purpose of this is to show that some recognition is happening in a way similar to how Cortana shows that she’s listening and parsing your words. Note that the hypothesis and the state events come back on a non-UI thread, so you’ll need to dispatch them like I did here.


private async void OnEchoSpeechRecognizerHypothesisGenerated(
        SpeechRecognizer sender,
        SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        EchoText.Text = args.Hypothesis.Text;
    });
}

The next is the StateChanged event. This lets me alter the UI based on what is happening. There are lots of good practices here, but I took an expedient route and simply changed the background color of the text box. You might consider running an animation on the microphone or something when recognition is happening.


private SolidColorBrush _micListeningBrush = 
                     new SolidColorBrush(Colors.SkyBlue);
private SolidColorBrush _micIdleBrush = 
                     new SolidColorBrush(Colors.White);

private async void OnEchoSpeechRecognizerStateChanged(
        SpeechRecognizer sender, 
        SpeechRecognizerStateChangedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        switch (args.State)
        {
            case SpeechRecognizerState.Idle:
                EchoText.Background = _micIdleBrush;
                break;

            default:
                EchoText.Background = _micListeningBrush;
                break;
        }
    });
}

I have equivalent handlers for the two events for the “ask a question” speech recognizer as well.

Finally, some easy code in the button click handler kicks off recognition.


private async void DictateEcho_Click(object sender, RoutedEventArgs e)
{
    var result = await _echoSpeechRecognizer.RecognizeAsync();

    EchoText.Text = result.Text;
}

The end result looks and behaves well. The voice recognition is really good.

gif1

So now we can talk to the board from the UWP PC app, and we can talk to the app using voice. Time to add just a little intelligence behind it all.

Creating the Natural Language Model in LUIS

The backing for the wall is a bot in the cloud. I wanted the bot to be able to answer questions, but I didn’t want to have the exact text of the question hard-coded in the bot. If I wanted to hard-code them, a simple web service or even local code would do.

What I really want is the ability to ask questions using natural language, and map those questions (or Utterances, as they’re called in LUIS) to specific master questions (or Intents in LUIS). In that way, I can ask the questions a few different ways, but still get back an answer that makes sense. My colleague, Ryan Volum, helped me figure out how LUIS worked. You should check out his Getting Started with Bots Microsoft Virtual Academy course.

So I started thinking about the types of questions I wanted answered, and the various ways I might ask them.

For example, when I want to know the location of where Will is, I could ask, “Where are you hiding?” or “Tell me where you are!” or “Where can I find you?” When checking to see if someone is listening, I might ask, “Are you there?” or “Can you hear me?” As you can imagine, hard-coding all these variations would be tedious, and would certainly miss out on ways someone else might ask the question.

I then created those in LUIS with each master question as an Intent, and each way I could think of asking that question then trained as an utterance mapped to that intent. Generally, the more utterances I add, the better the model becomes.

picture1

The above screen shot is not the entire list of Intents; I added a number of other Intents and continued to train the model.

For a scenario such as this, training LUIS is straightforward. My particular requirements didn’t include any entities or Regex, or any connections to a document database or Azure search. If you have a more complex dialog, there’s a ton of power in LUIS to be able to make the model as robust as you need, and to also train it with errors and utterances found in actual use. If you want to learn more about LUIS, I recommend watching Module 5 in the Getting Started with Bots MVA.

Once my LUIS model was set up and working, I needed to connect it to the bot.

Building the Bot Framework Bot

The bot itself was the last thing I added to the wall. In fact, in my first demo of the wall, I had to type the messages into the app instead of sending them out to a bot. Interesting, but not exactly what I was looking for.

I used the generic Bot Framework template and instructions from the Bot Framework developer site. This creates a generic bot, a simple C# web service controller, which echoes back anything you send it.

Next, following the Bot Framework documentation, I integrated LUIS into the bot. First, I created the class which derived from LuisDialog, and added in code to handle the different intents. Note that this model is changing over time; there are other ways to handle the intents using recognizers. For my use, however, this approach worked just fine.

The answers from the bot are very short, and I keep no context. Responses from the Upside Down need to be short enough to light up on the wall without putting everyone to sleep reading a long dissertation letter by letter.


namespace TheUpsideDown
{
    // Reference: 
    // https://docs.botframework.com/en-us/csharp/builder/sdkreference/dialogs.html

    // Partial class is excluded from project. It contains keys:
    // 
    // [Serializable]
    // [LuisModel("model id", "subscription key")]
    // public partial class UpsideDownDialog
    // {
    // }
    // 
    public partial class UpsideDownDialog : LuisDialog<object>
    {
        // None
        [LuisIntent("")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            string message = $"Eh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }


        [LuisIntent("CheckPresence")]
        public async Task CheckPresence(IDialogContext context, LuisResult result)
        {
            string message = $"Yes";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("AskName")]
        public async Task AskName(IDialogContext context, LuisResult result)
        {
            string message = $"Will";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("FavoriteColor")]
        public async Task FavoriteColor(IDialogContext context, LuisResult result)
        {
            string message = $"Blue ... no Gr..ahhhhh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("WhatIShouldDoNow")]
        public async Task WhatIShouldDoNow(IDialogContext context, LuisResult result)
        {
            string message = $"Run";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        ...

    }
}
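
For reference, incoming activities reach this dialog through the template’s MessagesController. This is roughly the standard Bot Builder v3 routing pattern rather than a copy of my exact controller:

// requires Microsoft.Bot.Builder.Dialogs, Microsoft.Bot.Connector,
// System.Net, System.Net.Http, and System.Web.Http
public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        // hand the incoming message to the LUIS-backed dialog
        await Conversation.SendAsync(activity, () => new UpsideDownDialog());
    }

    return Request.CreateResponse(HttpStatusCode.OK);
}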

Once I had that in place, it was time to test. The easiest way to test before deployment is to use the Bot Framework Channel Emulator.

First, I started the bot in my browser from Visual Studio. Then, I opened the emulator and plugged in the URL from the project properties, and cleared out the credentials fields. Next, I started typing in questions that I figured the bot should be able to handle.

picture2

It worked great! I was pretty excited, because this was the first bot I had ever created, and not only did it work, but it also had natural language processing. Very cool stuff.

Now, if you notice in the picture, there are red circles on every reply. It took a while to figure out what was up. As it turns out, the template for the bot includes an older version of the NuGet bot builder library. Once I updated that to the latest version (3.3 at this time), the “Invalid Token” error that local IIS was throwing went away.

Be sure to update the bot builder library NuGet package to the latest version.

Publishing and Registering the Bot

Next, it was time to publish it to my Azure account so I could use the Direct Line API from my client app, and also so I could make the bot available via other channels. I used the built-in Visual Studio publish (right click the project, click “Publish”) to put it up there. I had created the Azure Web App in advance.

picture3

Next, I registered the bot on the Bot Framework site. This step is necessary to be able to use the Direct Line API and make the bot visible to other channels. I had some issues getting it to work at first, because I didn’t realize I needed to update the credential information in the web.config of the bot service. The BotId field in the web.config can be most anything. Most tutorials skip telling you what to put in that field, and it doesn’t match up with anything on the portal.

picture4

As you can see, there are a few steps involved in getting the bot published and registered. For the Azure piece, follow the same steps as you would for any Web App. For the bot registration, be sure to follow the instructions carefully, and keep track of your keys, app IDs, and passwords. Take your time the first time you go through the process.

You can see in the previous screen shot that I have a number of errors shown. Those errors were because of that NuGet package version issue mentioned previously. It wasn’t until I had the bot published that I realized there was an error, and went back and debugged it locally.

Testing the Published Bot in Skype

I published and registered the bot primarily to be able to use the Direct Line channel. But it’s a bot, so it makes sense to test it using a few different channels. Skype is a pretty obvious one, and is enabled by default, so I hit that first.

picture5

Through Skype, I was able to verify that it was published and worked as expected.

Using the Direct Line API

When you want to communicate to a bot from code, a good way to do it is using the Direct Line API. This REST API provides an additional layer of authentication and keeps everything within a structured bot framework. Without it, you might as well just make direct REST calls.

First, I needed to enable the Direct Line channel in the bot framework portal. Once I did that, I was able to configure it and get the super-secret key which enables me to connect to the bot. (The disabled field was a pain to try and copy/paste, so I just did a view source, and grabbed the key from the HTML.)

picture6

That’s all I needed to do in the portal. Next, I needed to set up the client to speak to the Direct Line API.

First, I added the Microsoft.Bot.Connector.DirectLine NuGet package to the UWP app. After that, I wrote a pretty small amount of code for the actual communication. Thanks to my colleague, Shen Chauhan (@shenchauhan on Twitter), for providing the boilerplate in his Hunt the Wumpus app.


private const string _botBaseUrl = "(the url to the bot /api/messages)";
private const string _directLineSecret = "(secret from direct line config)";


private DirectLineClient _directLine;
private string _conversationId;


public async Task ConnectAsync()
{
    _directLine = new DirectLineClient(_directLineSecret);

    var conversation = await _directLine.Conversations
            .NewConversationWithHttpMessagesAsync();
    _conversationId = conversation.Body.ConversationId;

    System.Diagnostics.Debug.WriteLine("Bot connection set up.");
}

private async Task<string> GetResponse()
{
    var httpMessages = await _directLine.Conversations
                  .GetMessagesWithHttpMessagesAsync(_conversationId);

    var messages = httpMessages.Body.Messages;

    // our bot only returns a single response, so we won't loop through
    // First message is the question, second message is the response
    if (messages?.Count > 1)
    {
        // select latest message -- the response
        var text = messages[messages.Count-1].Text;
        System.Diagnostics.Debug.WriteLine("Response from bot was: " + text);

        return text;
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("Response from bot was empty.");
        return string.Empty;
    }
}


public async Task<string> TalkToTheUpsideDownAsync(string message)
{
    System.Diagnostics.Debug.WriteLine("Sending bot message");

    var msg = new Message();
    msg.Text = message;


    await _directLine.Conversations.PostMessageAsync(_conversationId, msg);

    return await GetResponse();
}

The client code calls the TalkToTheUpsideDownAsync method, passing in the question. That method fires off the message to the bot, via the Direct Line connection, and then waits for a response.

Because the bot sends only a single message, and only in response to a question, the response comes back as two messages: the first is the message sent from the client, the second is the response from the service. This helps to provide context.

Finally, I wired it to the SendQuestion button on the UI. I also wrapped it in calls to start and stop the MIDI clock, giving us a bit of Stranger Things thinking music while the call is being made and the result displayed on the LEDs.


private async void SendQuestion_Click(object sender, RoutedEventArgs e)
{
    // start music
    StartMidiClock();

    // send question to service
    var response = await _botInterface.TalkToTheUpsideDownAsync(QuestionText.Text);

    // display answer
    await RenderTextAsync(response);

    // stop music
    StopMidiClock();
}

With that, it is 100% complete and ready for demos!

What would I change?

If I were to start this project anew today and had a bit more time, there are a few things I might change.

I like the voice recognition, Bot Framework, and LUIS stuff. Although I could certainly make the conversation more interactive, there’s really nothing I would change there.

On the electronics, I would use a breadboard-friendly Arduino, not hot-glue an Arduino to the back. It pains me to have hot-glued the Arduino to the board, but I was in a hurry and had the glue gun at hand.

I would also use a separate power supply for LEDs. This is especially important if you wish to light more than one LED at a time, as eventually, the Arduino will not be able to support the current draw required by many LED lights.

If I had several weeks, I would have my friends at DF Robot spin a board that I design, rather than use a regular breadboard, or even a solder breadboard. I generally prefer to get boards spun for projects, as they are more robust, and DF Robot can do this for very little cost.

Finally, I would spend more time to find even uglier wallpaper.

Here’s a photo of the wall, packaged up and ready for shipment to Las Vegas (at the time of this writing, it’s in transit), waiting in my driveway. The box was 55” tall, around 42” wide and 7” thick, but only about 25 lbs. It has ¼” plywood on both faces, as well as narrower pieces along the sides. In between the plywood is 2” thick rigid insulating foam. Finally, the corners are protected with the spongier corner foam that came with the box.

It costs a stupid amount of money to ship something like that around, but it’s worth it for events. 🙂

picture7

After this, it’s going to Redmond where I’ll record a video walkthrough with Channel 9 during the second week of November.

What Next?

Windows Remote Wiring made this project quite simple to do. I was able to use the tools and languages I love to use (like Visual Studio and C#), but still get the IO of a device like the Arduino Uno. I was also able to use facilities available to a UWP app, and call into a simple bot of my own design. In addition to all that, I was able to use voice recognition and MIDI all in the same app, in a way that made sense.

The Bot Framework and LUIS stuff was all brand new to me, but was really fun to do. Now that I know how to connect app logic to a bot, there will certainly be more interactive projects in the future.

This was a fun project for me. It’s probably my last real maker project of the fall/winter, as I settle into the fall home renovation work and also gear up for the NAMM music event in January. But luckily, there have been many other posts here about Windows 10 IoT Core and our maker and IoT-focused technology. If this topic is interesting to you, I encourage you to take a spin through the archives and check them out.

Whatever gift-giving and receiving holiday you celebrate this winter, be sure to add a few Raspberry Pi 3 devices and some Arduino Uno boards on your list, because there are few things more enjoyable than cozying up to a microcontroller or some IoT code on a cold winter’s day. Oh, and if you steal a strand or two of lights from the tree, I won’t tell. 🙂

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown

Most of all, thanks for reading!

Kinect demo code and new driver for UWP now available


Here’s a little memory test: Do you recall this blog, which posted back in May and promised to soon begin integrating Kinect for Windows into the Universal Windows Platform? Of course you do! Now we are pleased to announce two important developments in the quest to make Kinect functionality available to UWP apps.

First, by popular demand, the code that Alex Turner used during his Channel 9 video (above) is now available on GitHub as part of the Windows universal samples. With this sample, you can use Windows.Media.Capture.Frames APIs to enumerate the Kinect sensor’s RGB/IR/depth cameras and then use MediaFrameReader to stream frames. This API lets you access pixels of each individual frame directly in a highly efficient way.
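
If you haven’t used these APIs yet, the basic flow is to find the source group that exposes the streams you want, initialize a MediaCapture against it, and then create a frame reader. Here is a sketch of the general pattern (not the sample’s exact code):

// requires Windows.Media.Capture, Windows.Media.Capture.Frames, and System.Linq
var groups = await MediaFrameSourceGroup.FindAllAsync();

// pick a group that exposes a depth stream (e.g., the Kinect sensor)
var group = groups.FirstOrDefault(g =>
    g.SourceInfos.Any(i => i.SourceKind == MediaFrameSourceKind.Depth));

var mediaCapture = new MediaCapture();
await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
{
    SourceGroup = group,
    SharingMode = MediaCaptureSharingMode.SharedReadOnly,
    MemoryPreference = MediaCaptureMemoryPreference.Cpu,
    StreamingCaptureMode = StreamingCaptureMode.Video
});

var depthSource = mediaCapture.FrameSources.Values
    .First(s => s.Info.SourceKind == MediaFrameSourceKind.Depth);

var reader = await mediaCapture.CreateFrameReaderAsync(depthSource);
reader.FrameArrived += (sender, args) =>
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        // frame?.VideoMediaFrame?.SoftwareBitmap gives direct access to the pixels
    }
};
await reader.StartAsync();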

These new functionalities debuted in the Windows 10 Anniversary Update, and the structure of the APIs should be familiar to those who’ve been using the Kinect SDK for years. But these new APIs are designed to work not only with the Kinect sensor but with any other sensor capable of delivering rich data streams—provided you have a matching device driver.

Which brings us to our second announcement: We have now enabled the Kinect driver on Windows Update. So if you’d like to try out this new functionality now, simply go to the Device Manager and update the driver for the Kinect sensor. In addition to enabling the new UWP APIs described above, the new driver also lets you use the Kinect color camera as a normal webcam. This means that apps which use a webcam, such as Skype, can now employ the Kinect sensor as their source. It also means that you can use the Kinect sensor to enable Windows Hello for authentication via facial recognition.


Another GitHub sample demonstrates how to use new spatial-correlation APIs, such as CameraIntrinsics or DepthCorrelatedCoordinateMapper, to process RGB and depth camera frames for background removal. These APIs take advantage of the fact that the Kinect sensor’s color and depth cameras are spatially correlated by calibration and depth frame data. This sample also shows how to access the Kinect sensor’s skeletal tracking data through a custom media stream in UWP apps with newly introduced APIs.

Finally, we should note that the Xbox Summer Update also enables these Kinect features through Windows.Media.Capture.Frames for UWP apps. Thus, apps that use the Kinect sensor’s RGB, infrared, and/or depth cameras will run on Xbox with the same code, and Xbox can also use the Kinect RGB camera as a normal webcam for Skype-like scenarios.

Judging from requests, we’re confident that many of you are eager to explore the demo code and download the new driver. When you do, we want to hear about your experiences—what you liked, what you didn’t, and what enhancements you want to see. So send us your feedback!

Please note that, if you have technical questions about this post or would like to discuss Kinect with other developers and Microsoft engineers, we hope you will join the conversation on the Kinect for Windows v2 SDK forum. You can browse existing topics or ask a new question by clicking the Ask a question button on the forum webpage.

The Kinect for Windows Team

Key links

In Case You Missed it – This Week in Windows Developer


Happy (belated) Halloween, Windows Devs! This past week gave ’80s kids, pop culture fans and Windows Devs alike a chance to celebrate the internet of things.

Our very own IoT master, Pete Brown, created a series on IoT, remote wiring, voice recognition and AI inspired by the Netflix hit, Stranger Things. Check it out below!


Internet of Stranger Things Part 1

TL;DR – go ahead and binge watch the series before getting started.

Internet of Stranger Things Part 2

Pete Brown builds a wall. But it’s more than that – Pete adds to the Internet of Stranger Things project by constructing a wall that integrates music and UWP MIDI capabilities. Learn how to cue up your very own haunting 80s synth soundtrack with part 2!

Internet of Stranger Things Part 3

The final installment of the series covers voice recognition and intelligence – two things most IoT devices don’t necessarily support. Lo and behold, Pete Brown works his IoT magic in this post.

#XboxAppDev – Adding natural inputs

This post gets personal (with input methods). Learn how to add natural, intuitive input methods to your Xbox and UWP apps.


UWP Integrations for Kinect

Grab your demo hat and get ready for the new drivers and integrations now available for Kinect and UWP. Read more in this blog:

Windows 10 Insider Preview Build 14959 for Mobile and PC

Last, but certainly not least, we released a new build for Windows Insiders in the Fast Ring. There are quite a few updates here, most notably the new ‘Unified Update Platform’, which helps streamline updates across your Windows 10 devices.

And that’s the week in Windows Dev! Feel free to tweet us with any questions, comments or suggestions for Pete Brown’s next example of IoT wizardry.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

 

Issue with using Application Insights with load tests


If you use Application Insights to collect app side metrics during load tests, you will find that it currently doesn’t work as expected.

  1. When configuring applications to collect app side metrics in the load test editor in Visual Studio, you will see an error similar to the one below.
  2. If you have already configured Application Insights earlier for your load tests, then when running such load tests, you will not see any app metrics being collected and the ‘Status Messages’ in your load test will show a message that ‘Application counter collection failed due to an internal error and will be disabled for this run’.

This is due to an infrastructure issue. We are working with the Application Insights (AI) team to understand and resolve the issue. At this time, we don’t have an ETA for a resolution. In the meantime, to work around this issue, you can view the application metrics using the Azure portal or use the APIs documented at https://dev.applicationinsights.io/quickstart/

If you have any questions related to load testing, please reach out to us at vsoloadtest@microsoft.com.

Developing Linux C++ applications with Azure Docker containers


In recent years, Visual Studio has added new features and experiences to allow developers to develop their C++ applications for non-Windows platforms such as Android, iOS and, more recently, Linux.  One of the challenges that Linux development brings to the table is making sure your native application works across the wide set of Linux distributions available. We have experienced this first-hand when developing the Visual Studio Code C/C++ extension, where we needed to test the extension across a wide variety of Linux distributions. We typically build and test our C++ code on different versions of Ubuntu, Debian, Fedora, Arch, openSUSE and more. Furthermore, there are different versions of the standard C library and compilers which bring their own set of issues. To keep costs low for Linux development, we made use of Docker containers on Azure.

This blog provides a walkthrough on how to use Azure VMs, Docker containers, and Visual Studio to author your multi-distro Linux C++ application with the help of the following sections:

Prerequisites

Over the course of this walkthrough you will need to set up the following, so let’s go ahead and do this up front.

Docker Containers and Images

A Docker Container is a “stripped-to-basics” version of your operating system. A Docker Image is a read-only snapshot of your software that can be “run” in a Docker Container. Docker containers can allow you to pack a lot more applications into a single physical server than a Virtual Machine can.

Virtual machines run a full copy of the operating system and a virtual copy of all the hardware that operating systems need to run. In contrast, containers only require a stripped to basic version of your operating system, supporting libraries and programs and system resources required to run a specific program.

Combine this with the additional benefit that Docker containers provide a consistent environment for development, testing and deployment, and it’s clear that Docker is here to stay!

All right, with that very short overview of Docker, let’s go ahead and set up an Azure Docker VM.

Step 1: Creating an Azure Docker VM

The easiest way to create an Azure VM is by using the cross-platform Azure command line tools. Once installed and connected to your Azure subscription, you can manage many Azure resources right from the command prompt.

Log in to your subscription using the ‘azure login’ command. You will go through the series of steps shown in the picture below.


Once you have logged in successfully, find a suitable image by running the azure vm image list command and providing additional details: the ‘location’ where you would like your VM to be hosted and the publisher of the VM images. All Ubuntu images on Azure are shipped by the ‘Canonical’ publisher.

azure vm image list
info:    Executing command vm image list
Location:  westus
Publisher:  Canonical

This will print a list of Ubuntu images. For this walkthrough, I will pick the popular ‘Canonical:UbuntuServer:14.04.5-LTS:14.04.201609190’ image. Alternatively, you can pick others from the 16.04 series as well.

The Docker Installation documentation provides step-by-step instructions on how to install the Docker Toolbox, which will in turn install Docker Machine, Engine, Compose, Kitematic and a shell for running the Docker CLI.  For this tutorial you will install this on your Windows box where you have setup Visual Studio.

Once Docker is installed and running, we can go to the next step, which is to create our Azure Docker Ubuntu VM using the docker-machine Azure driver. You will need to replace the subscription ID below with your own subscription ID and pick your own VM name, e.g. hello-azure-docker-cpp.

docker-machine create --driver azure --azure-subscription-id b5e010e5-3207-4660-b5fa-e6d311457635 --azure-image Canonical:UbuntuServer:14.04.5-LTS:14.04.201609190 hello-azure-docker-cpp

This will run through a series of commands, setting up the VM and installing the necessary Docker tools. If you get stuck, you can follow this guide here.


Next, set up your shell for the machine we created by running the following command, where <machine-name> is the name of the machine you created.

docker-machine env <machine-name>


Step 2: Running a Docker container

The easiest way to get started with a Docker container is to make use of an existing image. For this purpose, we use a pre-existing Debian image by executing the following command.

docker run -p 5000:22 -t -i --restart=always debian /bin/bash

This will download the latest image for Debian from Docker and start a new container with it. You should see the following command window as you go through this step. You can replace ‘debian’ with ‘ubuntu’, ‘fedora’ or ‘opensuse’ for creating containers for other distros.

If this step was successful, you should see your container running when you execute the ‘docker ps’ command as shown below:


Step 3: Setting up SSH for your container

To build your C++ application on this newly created Linux container using Visual Studio, you need to enable SSH and install necessary build tools (gdb, g++ etc.). Setting up SSH  is generally not recommended for Docker containers, but it is required by the Visual Studio C++ Linux development experience today.

Attach to your running container by using the ‘docker attach’ command and run the following commands to set up SSH.

apt-get update
apt-get install openssh-server
apt-get install  g++ gdb gdbserver
mkdir /var/run/sshd
chmod 0755 /var/run/sshd
/usr/sbin/sshd

Next, create a user account to use with the SSH connection to the Docker container we just created. We can do this by running the following commands, replacing <user-name> with the username you would like.

useradd -m -d /home/<user-name>/ -s /bin/bash -G sudo <user-name>
passwd <user-name>

All right, we are almost there. The last thing we need to do is make sure the port we are using (5000) is allowed by the inbound security rules of our Docker resource group’s firewall. The easiest way to do this is to use the Azure portal: bring up the network security group for the VM we created on Azure and navigate to the inbound security rules. For the VM created in this walkthrough, the resource is shown below:


As part of the inbound security rules, add and allow an additional custom TCP security rule with the port you chose for your SSH connection, as shown in the figure below.


You should now be able to SSH into your Linux container using your favorite SSH client application. The <port-number>, <user-name> and <host-ip> in the command below will need to be replaced based on your settings.

ssh -p <port-number> <user-name>@<host-ip>

Step 4: Developing your Linux C++ application from Visual Studio

To set up Visual Studio for Linux C++ development, you can read this walkthrough, which we keep current. This walkthrough covers installation, project setup and other usage tips, but to summarize, you need to do two things:

First, run the following command on your Linux containers which downloads dependencies required to build and debug.

sudo apt-get install  g++ gdb gdbserver

Second, download the Visual C++ for Linux development extension or get it from the Extension Manager in Visual Studio. Please note the Linux extension is only supported for Visual Studio 2015 and above.

Once Visual Studio is set up, go ahead and set up connection properties for all your containers in the Connection Manager. The Connection Manager can be launched from Tools->Options as shown in the figure below:


Notice how, by using Docker containers, you can now develop your application on Debian, different versions of Ubuntu, and Red Hat simultaneously, using one virtual machine from within Visual Studio.

All right, with everything else set up, we can finally start building and debugging Linux C++ code on our containers. To get started, you can choose from any of the following simple templates in the File -> New Project -> C++ -> Cross Platform -> Linux section, as shown in the figure below:

For this exercise, choose the simple Console Application template. If you want to start with something richer, you can use this simple tictactoe project.

Next, pick the Linux distro / Docker container you would like to compile and debug this on. You can choose between them by selecting the one you would like in the Remote settings section:


You can now start debugging (F5) which will copy your sources remotely, build your application and finally allow you to debug your application.


Great! You are now successfully debugging a C++ Linux application running in a container inside an Azure VM.

Step 5: Using Dockerfiles to automate building of images

So far you’ve used very basic Docker commands to create your Docker containers in the previous sections. The real power of Docker comes from not only enabling you to instantiate different versions of Linux distros on one virtual machine in a cheaper, more productive manner, but Docker also provides a way to create a consistent development environment. This is because Docker lets you use a Docker file with a series of commands to set up the environment on the virtual machine.

A Docker file is similar in concept to the recipes and manifests found in infrastructure automation (IA) tools like Chef and Puppet. You can bring up your favorite text editor and create a file called ‘Dockerfile’ with the following content.

FROM debian
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

RUN apt-get install -y openssh-server g++ gdb gdbserver

You can now run the following commands to build your docker container with this docker file and get it running!

"C:\Program Files\Docker\Docker\resources\bin\docker.exe" build -t debiandockercontainer . 
"C:\Program Files\Docker\Docker\resources\bin\docker.exe" run -d -P --name debiancontainer debiandockercontainer 
"C:\Program Files\Docker\Docker\resources\bin\docker.exe" port debiancontainer

Running the ‘docker ps’ command will list your newly created container and you can start with Linux C++ development in Visual Studio.

Wrap up

As always, we welcome your feedback and we would love to learn from your experiences as you try this out. This blog is focused on Linux containers; in a future post I will also talk about how you can extend your story with Docker containers for your Windows development.

If you run into any problems following these steps, you can email me your query or feedback if you choose to interact directly! Otherwise, we are happy to interact with you here through the comments. For general Visual Studio product suggestions, you can let us know through User Voice.


TypeScript 2.1 RC: Better Inference, Async Functions, and More


Today we’re happy to announce our release candidate for TypeScript 2.1! If you aren’t familiar with it, TypeScript is a language that adds optional static types to JavaScript, and brings new features from ES6 and later to whatever JavaScript runtime you’re using.

As usual you can get the RC through NuGet, or just by running

npm install -g typescript@rc

You can then easily use the RC release with Visual Studio Code or our Sublime Text Plugin.

While TypeScript 2.1 has a lot of great features coming up, we’d like to highlight how much more powerful TypeScript 2.1’s inference will be, as well as how much easier it will be to write asynchronous code in all runtimes.

Smarter Inference

TypeScript 2.1 now makes it easier to model scenarios where you might incrementally initialize variables. Since a lot of code is written like this in JavaScript, this makes it even easier to migrate existing codebases to TypeScript.

To understand better, let’s start off talking about the any type.

Most of the time, if TypeScript can’t figure out the type of a variable, it will choose the any type to be as flexible as possible without troubling you. We often call this an implicit any (as opposed to an explicit one, where you would have written out the type).

let x;      // implicitly 'any'
let y = []; // implicitly 'any[]'
let z: any; // explicitly 'any'.

From that point on, you can do anything you want with those values. For many people, that behavior was too loose, which is why the --noImplicitAny option will warn you whenever a type couldn’t be inferred.

With TypeScript 2.0 we built out a foundation of using control flow analysis to track the flow of types throughout your program. Because that analysis examines the assignments of every variable, we’ve leveraged that same foundation in TypeScript 2.1 to more deeply examine the type of any variable that seems like it’s destined for a better type. Instead of just choosing any, TypeScript will infer types based on what you end up assigning later on.

Let’s take the following example.

let x;

// We can still assign anything we want to 'x'.
x = () => 42;

// After that last assignment, TypeScript 2.1 knows that 'x'
// has the type '() => number', so it can be called.
x();

// But now we'll get an error that we can't add a number to a function!
console.log(x + 42);
//          ~~~~~~
// error: Operator '+' cannot be applied to types '() => number' and 'number'.

// TypeScript still allows us to assign anything we want to 'x'.
x = "Hello world!";

// But now it also knows that now 'x' is a 'string'!
x.toLowerCase();

When it comes to assignments, TypeScript will still trust you and allow you to assign anything you want to x. However, for any other uses, the type checker will know better by climbing up and looking at whatever you’ve actually done with x.

The same sort of tracking is now also done for empty arrays! This means better completions:

let puppies = [];
puppies.push(new Puppy());

for (let pup of puppies) {
    pup.bark();
    //  ^^^^ Get completion on 'bark'
}

And it also means that TypeScript can catch more obvious errors:

puppies[1] = new Kitty();

for (let pup of puppies) {
    pup.bark();
    //  ~~~~ error: 'bark' does not exist on type 'Kitty'
}

The end result of all this is that you’ll see way fewer implicit any errors in the future, and get much better tooling support.

Downlevel Async Functions

Support for down-level asynchronous functions (or async/await) is coming in 2.1, and you can use it in today’s release candidate! async/await is a new feature in ECMAScript 2017 that allows users to write code around promises without needing to use callbacks. async functions can be written in a style that looks synchronous, but acts asynchronously, using the await keyword.

This feature was supported before TypeScript 2.1, but only when targeting ES6/ES2015. TypeScript 2.1 brings the capability to ES3 and ES5 runtimes, meaning you’ll be free to take advantage of it no matter what environment you’re using.

For example, let’s take the following function named delay, which returns a promise and waits for a certain amount of time before resolving:

function delay(milliseconds: number) {
    return new Promise<void>(resolve => {
        setTimeout(resolve, milliseconds);
    });
}

Let’s try to work on a simple-sounding task. We want to write a program that prints "Hello", three dots, and then "World!".

function welcome() {
    console.log("Hello");
    for (let i = 0; i < 3; i++) {
        console.log(".");
    }
    console.log("World!");
}

This turned out to be about as simple as it sounded.

Now let’s say we want to use our delay function to pause before each dot.

Without async/await, we’d have to write something like the following:

function dramaticWelcome() {
    console.log("Hello");

    (function loop(i) {
        if (i < 3) {
            delay(500).then(() => {
                console.log(".");
                loop(i + 1);
            });
        }
        else {
            console.log("World!");
        }
    })(0);
}

This doesn’t look quite as simple any more! What about if we tried using async functions to make this code more readable?

First, we need to make sure our runtime has an ECMAScript-compliant Promise available globally. That might involve grabbing a polyfill for Promise, or relying on one that you might have in the runtime that you’re targeting. We also need to make sure that TypeScript knows Promise exists by setting our lib flag to something like "dom", "es2015" or "dom", "es2015.promise", "es5":

{"compilerOptions": {"lib": ["dom", "es2015.promise", "es5"]
    }
}

Now we can rewrite our code to use async and await:

async function dramaticWelcome() {
    console.log("Hello");
    for (let i = 0; i < 3; i++) {
        await delay(500);
        console.log(".");
    }
    console.log("World!");
}

Notice how similar this is to our synchronous version! Despite its looks, this function is actually asynchronous, and won’t block other code from running in between each pause. In fact, the two versions of dramaticWelcome basically boil down to the same code, but with async & await, TypeScript does the heavy lifting for us.

Next Steps

TypeScript 2.1 RC has plenty of other features, and we’ll have even more coming for 2.1 proper. You can take a look at our roadmap to see what else is in store. We hope you give it a try and enjoy it!

The week in .NET – On .NET on CoreRT & .NET Native – Enums.NET – Ylands – Markdown Monster


To read last week’s post, see The week in .NET – .NET Foundation – Serilog – Super Dungeon Bros.

On .NET

Last week, Mei-Chin Tsai and Jan Kotas were on the show:

This week, we won’t be streaming live, but we’ll be taking advantage of the presence of many MVPs on campus for the MVP Summit to speak with as many of them as possible in the form of short 10-15 minute interviews.

Package of the week: Enums.NET

Enums.NET is a high-performance type-safe .NET enum utility library which caches enum members’ name, value, and attributes and provides many operations as C# extension methods for ease of use. It is available as a NuGet Package and is compatible with .NET Framework 2.0+ and .NET Standard 1.0+.

Comparing the performance of Enums.NET with System.Enum

Game of the Week: Ylands

Ylands is a low-poly sandbox game that gives players the tools to create their own environment and scenarios. When first jumping into the world of Ylands, you pick a completely modifiable island to build and play your own adventures. Using the Scenario Editor, you are able to make anything happen – talking chests, teleportation, castles in desperate need of sieging and even large scale scenarios that you can challenge your friends with.

Ylands

Ylands is being developed by Bohemia Interactive using Unity and C#. It is currently in early alpha development for Windows and has a free trial available.

App of the week: Markdown Monster

Rick Strahl doesn’t just blog a lot, he also writes some quality tools. This week, he’s introducing Markdown Monster – a new Markdown Editor by Rick Strahl. Markdown Monster is a WPF application, and I’m using it to write this post.

Markdown Monster

User group meeting of the week: Intro to HoloLens Development with Unity and UWP in Sterling, VA

Microsoft Maniacs are holding a meeting in Sterling, VA on Wednesday, November 9 about HoloLens development using Unity and UWP.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET? We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Visual C++ docs: the future is… soon!


We on the Visual C++ documentation team are pleased to announce some changes to the API reference content in the following Visual C++ libraries: STL, MFC, ATL, AMP, and ConcRT.

Since the beginning of MSDN online, the Visual C++ libraries have documented each class member, free function, macro, enum, and property on a separate web page. While this model works reasonably well if you know exactly what you are looking for, it doesn’t support easy browsing or searching through multiple class members. We have heard from many developers that it is painful (sometimes literally) to click between multiple pages when exploring or searching for something at the class level.

Therefore we have re-deployed the above-mentioned reference content as follows:

For STL:

Each header will have a top level topic with the same overview that it currently has, with links to subtopics, which will consist of:

  • one topic each for all the functions, operators, enums and typedefs in the header
  • one topic for each class or struct, which includes the complete content for each member

For MFC/ATL/AMP/ConcRT:

  • one topic for each class or struct
  • one topic for each category of macros and functions, according to how these are currently grouped on MSDN

We strongly believe this change will make it much easier to read and search the documentation. You will be able to use Ctrl-F to search all instances of a term on the page, you can navigate between methods without leaving the page, and you can browse the entire class documentation just by scrolling.

Non-impacts

1. Reference pages for the CRT and the C/C++ languages are not impacted.

2. No content is being deprecated or removed as a result of this change. We are only changing the way the content is organized.

3. None of your bookmarks will break. The top level header and class topics all retain their current URLs. Links to subtopics such as class members and free functions will automatically redirect to the correct anchor link in the new location.

4. F1 on members, for now, will be slightly less convenient. It will take you to the class page, where you will have to navigate to the member either by Ctrl-F or by clicking the link in the member table. We hope to improve F1 in the coming months to support anchor links.

Why now?

Documentation at Microsoft is changing! Over the next few months, much content that is now on MSDN will migrate to docs.microsoft.com. You can read more about docs.microsoft.com here at Jeff Sandquist’s blog. On the backend, the source content will be stored in markdown format on public GitHub repos where anyone can contribute by making pull requests. Visual C++ has not moved to the new site just yet, but we are taking the first step by converting our source files from XML to markdown. This is the logical time to make the needed changes. By consolidating content, we have the additional advantage of more manageable repo sizes (in terms of the number of individual files). More content in fewer files should make it easier for contributors to find the content they want to modify. 

Get Real-time Election Day Results with Bing

Today is the day!
 
A year of campaigning candidates, debates and a whirlwind of news stories all comes to a head today as we learn who will be the next President of the United States.
 
Knowing millions across the nation will want to track today’s progress, Bing has made it possible to follow the presidential results in real-time at a national and state level, and to track the popular vote tally and the electoral votes (270 is the magic number). You can also drill down into US Congressional races, state Senate and House races, gubernatorial contests, as well as keep track of ballot measures. Closely watched swing states are highlighted in yellow, and you can track the balance of power in the US House and Senate. And all day and night, there will be fresh news covering all the headlines.
 

 
Come to Bing to get live real-time results of this unforgettable election.
 
- The Bing Team


 

VS Team Services Update – Nov 7


This week we are rolling out our sprint 108 work to Team Services.  You can read the release notes for details.

It’s another comparatively “light” sprint.  It’s the last sprint that should be significantly affected by our work to wrap up shipping TFS 15.  We’re pretty much done, have incorporated the feedback we got from RC2 and feel we are about ready.  Looking forward to getting TFS 15 shipped.

One of the biggest things we’ve delivered this sprint is integration with the new Microsoft Teams collaboration tool.  We’ve enabled chat integration and hosting of our Kanban board in a team context.  It’s still in preview and has some rough edges.  We’ll keep refining it, but we’re pretty excited about it.

We also released a lot of improvements to our build and release pipeline – including long awaited support for .NET Core, Docker improvements and much more.

A little “peeking ahead” thought… Our next deployment (sprint 109) will start in just over 2 weeks.  That will be the last deployment for the year – we’re going to skip 110 because it falls right in the middle of the Christmas holidays.  Our next sprint deployment (after 109) will be 111 in early January.

We also have our annual Connect(); event next week in New York.  We’ll be announcing a bunch of cool new stuff.  I encourage you to tune in if you get the chance.

Brian

.NET Core Data Access


.NET Core was released a few months ago, and data access libraries for most databases, both relational and NoSQL, are now available. In this post, I’ll detail what client libraries are available, as well as show code samples for each of them.

ORM

EF Core

Entity Framework is Microsoft’s Object-Relational Mapper for .NET, and as such is one of the most-used data access technologies for .NET. EF Core, released simultaneously with .NET Core, is a lightweight and extensible version of Entity Framework that works on both .NET Core and .NET Framework. It has support for Microsoft SQL Server, SQLite, PostgreSQL, MySQL, Microsoft SQL Server Compact Edition, with more, such as DB2 and Oracle, to come.

What follows is an example of EF Core code accessing a blog’s database. The full tutorial can be found on the EF documentation site.
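
For a feel of the shape of that code without leaving this post, here is a minimal sketch along the lines of the getting-started tutorial; the Blog/Post model, the SQLite provider, and the connection string are illustrative assumptions rather than the tutorial verbatim.

using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
    public List<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public int BlogId { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Assumes the Microsoft.EntityFrameworkCore.Sqlite provider is referenced.
        optionsBuilder.UseSqlite("Data Source=blogging.db");
    }
}

// ... typical usage inside a method: add a blog and save it.
using (var db = new BloggingContext())
{
    db.Blogs.Add(new Blog { Url = "http://blogs.msdn.microsoft.com/dotnet" });
    db.SaveChanges();
}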

Dapper

Dapper is a micro-ORM built and maintained by StackExchange engineers. It focuses on performance, and can map the results of a query to a strongly-typed list, or to dynamic objects. .NET Core support is currently in beta.
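
As a rough illustration of the micro-ORM style, here is a hedged sketch of mapping a query to a strongly-typed list with Dapper; the Post type, connection string, and table/column names are made up for the example.

using System.Data.SqlClient;
using Dapper;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// ... inside a method:
var connectionString = "Server=.;Database=Blog;Trusted_Connection=True;";
using (var connection = new SqlConnection(connectionString))
{
    // Dapper's Query<T> extension method maps each row of the result set to a Post.
    var posts = connection.Query<Post>(
        "SELECT Id, Title FROM Posts WHERE AuthorId = @AuthorId",
        new { AuthorId = 42 });
}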

Relational databases

SQL Server

The Microsoft SQL Server client library is built into .NET Core. You don’t have to use an ORM, and can instead go directly to the metal and talk to a SQL Server instance or to an Azure SQL database using the same APIs from the System.Data.SqlClient package.
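
For example, a bare-bones query with System.Data.SqlClient might look like the sketch below, assuming it runs inside an async method; the connection string and the Posts table are placeholders.

using System;
using System.Data.SqlClient;

// ... inside an async method:
var connectionString = "Server=.;Database=Blog;Trusted_Connection=True;";
using (var connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();

    using (var command = new SqlCommand("SELECT Id, Title FROM Posts", connection))
    using (var reader = await command.ExecuteReaderAsync())
    {
        while (await reader.ReadAsync())
        {
            // Read columns by ordinal from each row.
            Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
        }
    }
}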

PostgreSQL

PostgreSQL is an open source relational database with a devoted following. The Npgsql client library supports .NET Core.

Another interesting library for PostgreSQL that is compatible with .NET Core is Marten. Marten uses PostgreSQL storage to implement a document database.

MySQL

MySQL is one of the most commonly used relational databases on the market and is open source. Support for .NET Core is now available, both through EF Core and directly through the MySQL Connector for .NET Core.

SQLite

SQLite is a self-contained, embedded relational database that is released in the public domain. SQLite is lightweight (less than 1MB), cross-platform, and is extremely easy to embed and deploy with an application, which explains how it quietly became the most widely deployed database in the world. It’s commonly used as an application file format.

You can use SQLite with EF Core, or you can talk to a SQLite database directly using the Microsoft.Data.Sqlite library that is maintained by the ASP.NET team.
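
Talking to SQLite directly through Microsoft.Data.Sqlite is similarly compact; this is a small sketch, with the database file name and table schema invented for the example.

using Microsoft.Data.Sqlite;

using (var connection = new SqliteConnection("Data Source=notes.db"))
{
    connection.Open();

    // Create a table if it doesn't exist yet.
    var create = connection.CreateCommand();
    create.CommandText = "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)";
    create.ExecuteNonQuery();

    // Insert a row using a named parameter.
    var insert = connection.CreateCommand();
    insert.CommandText = "INSERT INTO notes (body) VALUES ($body)";
    insert.Parameters.AddWithValue("$body", "Hello from .NET Core");
    insert.ExecuteNonQuery();
}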

There’s another SQLite package that’s compatible with .NET Core called SQLitePCL.raw.

NoSQL

Azure DocumentDB

Azure DocumentDB is a NoSQL database service built for fast and predictable performance, high availability, automatic scaling, and ease of development. Its flexible data model, consistent low latencies, and rich query capabilities make it a great fit for web, mobile, gaming, IoT, and many other applications that need seamless scale. Read more in the DocumentDB introduction. DocumentDB databases can now be used as the data store for apps written for MongoDB. Using existing drivers for MongoDB, applications can easily and transparently communicate with DocumentDB, in many cases by simply changing a connection string. The next version of the DocumentDB client library, which will be available around the Connect event, supports .NET Core.

MongoDB

MongoDB is a document database with an official .NET driver that supports .NET Core.
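
A quick, hedged sketch of the official driver’s API, assuming a local MongoDB instance and an illustrative ‘blog’ database, run from an async method:

using MongoDB.Bson;
using MongoDB.Driver;

// ... inside an async method:
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("blog");
var posts = database.GetCollection<BsonDocument>("posts");

// Insert a document and read it back with a simple filter.
await posts.InsertOneAsync(new BsonDocument { { "title", "Hello .NET Core" } });
var found = await posts.Find(new BsonDocument("title", "Hello .NET Core")).FirstOrDefaultAsync();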

RavenDB

RavenDB is a document database that is not only compatible with .NET Core, it’s also built with it.

Redis

Redis is one of the most popular key-value stores.

StackExchange.Redis is a high performance Redis client that is maintained by the StackExchange team.
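
Usage is straightforward; here is a minimal sketch assuming a Redis server running on localhost.

using StackExchange.Redis;

// ... inside a method:
var redis = ConnectionMultiplexer.Connect("localhost");
IDatabase db = redis.GetDatabase();

// Simple string set/get against the default database.
db.StringSet("greeting", "hello from .NET Core");
string greeting = db.StringGet("greeting");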

ServiceStack has its own Redis client library that is compatible with .NET Core, like the rest of ServiceStack.

Cassandra

Apache Cassandra is a highly scalable and fault-tolerant NoSQL database. DataStax provides a C# driver for Cassandra with built-in support for mapping Cassandra data to CLR objects. The latest version is compatible with .NET Core.

CouchBase

CouchBase is an open source document database that is popular in mobile applications. The official Couchbase client library is compatible with .NET Core.

CouchDB

CouchDB is a document database that I personally like a lot for its simplicity. It can scale from small devices such as a Raspberry Pi to cloud applications. It uses a very simple HTTP and JSON-based API, which limits the need for a client library. C# client libraries do exist, but none of them support .NET Core today as far as I can tell, except for Kanapa, which hasn’t been updated for a while. It’s very easy to interact with the database through its REST API nonetheless.

YesSql

YesSql is an interesting library that implements a transactional document database on top of relational stores such as SQL Server.

Lucene.NET

Finally, I want to mention Lucene.NET. It’s not technically a database, but it’s so useful in setting up full-text search on a data-driven project that a post on data access wouldn’t be complete without it. The team has put a lot of work into the new version of Lucene to implement new features and major improvements, and they also made it compatible with .NET Core. It’s still early, but prerelease packages will soon be available.

What about OLE DB?

OLE DB has been a great way to access various data sources in a uniform manner, but it was based on COM, which is a Windows-only technology, and as such was not the best fit for a cross-platform technology such as .NET Core. It is also unsupported in SQL Server versions 2014 and later. For those reasons, OLE DB won’t be supported by .NET Core.

Keeping track

More database support for .NET Core will no doubt become available in the future, and we’ll make sure to highlight new client libraries in our Week in .NET posts as they get announced. In the meantime, I hope this post helps get you started with .NET Core application development.

Extensibility in Visual Studio “15”: Increasing Reliability and Performance


If you’ve been following this blog, you know that in Visual Studio “15” we’ve been focused on making our developer tools easier to install, increasing performance, and enhancing developer productivity. We’ve been doing the same for extensions, and it’s time to talk a bit more about the implications of these changes both on extension authors and on customers who are using extensions.

A quick summary of the changes we’re making:

  • We’ve added a performance monitoring system for extensions. Customers will now see a gold notification bar when an extension is slowing load time or typing speed;
  • We now batch extension updates and installs to make it easier to install or update multiple extensions;
  • We’ve made it possible for extensions to detect and install dependent components, now that the default installation footprint of Visual Studio is much smaller;
  • We’ve made several performance improvements that have an impact on extension authors, such as lightweight solution load and NGEN support for extensions;
  • Lastly, we’ve updated the Visual Studio marketplace to make it easier to find and install extensions.

The rest of the post goes into more detail on each of these areas.

If you’re an extension author, you’ll want to read this post carefully, since we’re asking you to do some work in this release to support the performance and installation work we’re making to Visual Studio. There is some inevitable friction here, and so we want to give you a heads-up so that you can make the appropriate changes to your extension before we go live.

Performance

One of the top extensibility requests in UserVoice is for tools that let extension users identify slow extensions and disable them if necessary. We designed the extensibility model in Visual Studio to offer great power – authors largely have access to the same APIs that we use internally – so it’s important that extensions do not degrade overall product performance.

In Visual Studio “15”, we have focused on addressing three specific performance bottlenecks in particular: (i) Visual Studio startup performance; (ii) solution load performance; (iii) overall responsiveness. And in addition to improving our own performance, we’re now also adding features to help you measure and manage extension performance in these areas.

If we detect a slow-performing extension that falls into one of these categories, we will display a message that identifies the extension and gives the user the opportunity to disable it. Here’s an example with a malicious extension that sleeps for six seconds during initialization:

You can see the performance of extensions on your system at any time by selecting the Help / Manage Visual Studio Performance menu item.

We trigger this notification by measuring the total initialization time spent by the extension on the main thread. If it exceeds a lower time threshold, we will warn users that the extension is loading slowly; if it exceeds a second, still higher time threshold, we will offer to disable the extension for the user.

We use the same dialog to also track extensions that introduce delays to typing responsiveness. Studies have shown that input latency greatly impacts the perception of an application’s performance. So, Visual Studio now measures the duration of calls within extensions to extension command filters and editor event handlers that are triggered by typing characters, and if those calls regularly take longer than a predefined threshold, we display an infobar to warn users of the impact of that extension on typing responsiveness:

If you’re an extension author, we recommend these best practices to ensure your extension performs well in Visual Studio:

  1. Use rule-based contexts to specify precise conditions when your extension should be loaded. For example, you can ensure that an extension is only loaded when a C# or Visual Basic project is active, rather than at Visual Studio startup.
  2. Use the AsyncPackage support in Visual Studio 2015 and above to allow packages to be loaded on a background thread (see the sketch after this list).
  3. Review your extension’s command filters and editor event handlers (such as ITextBuffer.Changed) and perform any operations longer than 50ms asynchronously.
  4. Minimize work performed during package initialization, deferring it instead to occur on invocation of a user action.
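
To make the AsyncPackage recommendation concrete, here is a minimal, hypothetical package sketch; the GUID, the class name, and the work done in InitializeAsync are placeholders, not a prescription.

using System;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.VisualStudio.Shell;

// AllowsBackgroundLoading opts the package into asynchronous, background-thread loading.
[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
[Guid("00000000-0000-0000-0000-000000000000")]
public sealed class MyExtensionPackage : AsyncPackage
{
    protected override async Task InitializeAsync(
        CancellationToken cancellationToken, IProgress<ServiceProgressData> progress)
    {
        // Do expensive initialization off the UI thread...
        await Task.Yield();

        // ...and switch to the main thread only for the work that truly needs it.
        await JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);
    }
}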

Batched Extension Installations

Another common feature request from Visual Studio 2015 was to make it easier to install, update and remove multiple Visual Studio extensions at once. Visual Studio “15” now lets you batch up extension change operations to occur at once.

Here’s a screenshot:

Size

With the new Visual Studio installer, we have refactored Visual Studio to be more modular, installing just the features you need for the work you’re doing. The smallest configuration of Visual Studio installs in just a couple of minutes on typical machines, and the increasing number of Visual Studio developers who are using it for new platforms and languages like Python and R no longer need to install support for things like C#, MSBuild and ASP.NET to get a great Visual Studio working environment.

We’ve therefore created a way for extension authors to express their Visual Studio component dependencies by extending the VSIX manifest. The model is designed so that if dependencies are missing, the extension installer can acquire and install the missing components automatically. With this approach, for example, an extension can hint to the installer that it requires the managed language debugger to be available for the extension to run successfully.

We’re updating the extensibility tools to automatically emit these changes for new projects. By default, we set a dependency on just the Core Editor, but you can add additional components by selecting them from the Prerequisites tab. Here’s a screenshot:

The next public build of Visual Studio “15” will contain the updated manifest designer, and we’re planning to ship an update to the tools in Visual Studio 2015 to support the new manifest format before we complete Visual Studio “15”.

New Extension Capabilities

We’re taking advantage of the changes to the standard VSIX extension format to add some powerful new capabilities for extension authors.

  • Extension assemblies can now be compiled into native images (through NGEN) at install time, for added performance. You can specify NGEN options from the Properties toolwindow:

  • Extensions may now install files in certain locations outside of the extensions folder. This enables new scenarios like MSBuild tasks to be installed from a VSIX folder.

One caveat: these capabilities are only available for “all user” extensions because they require elevation. We hope to encourage extension authors who have until now been forced to wrap their VSIX in an MSI to accomplish these needs to be able to transition to a pure VSIX, enabling better manageability, roaming and auto-update capabilities for users of their extensions.

Lightweight Solution Load

In Visual Studio “15”, solutions will load faster due to support for lightweight solution load. When lightweight solution load is enabled, Visual Studio will delay fully loading projects until you start working on them. Visual Studio will still preserve common functionality, such as allowing you to navigate through your codebase, edit code, and build projects without fully loading projects. We’ve seen this speed up solution load by a factor of two or more.

This feature is still “experimental”, so to try out this feature in Visual Studio “15”, you’ll need to manually enable the lightweight solution load checkbox in Tools / Options / Projects and Solutions.

If you’re an extension author, lightweight solution load may impact your extension if you depend on a project to be fully loaded. Our team is putting together steps and guidance on how extensions can know when projects are not yet fully loaded, and how to respond accordingly. We will share this guidance as soon as possible.

Detecting Installations

Some tools need to get more specific information about a Visual Studio installation, for example an external utility that needs the location of the C++ compiler toolset. Along with the changes to Visual Studio to reduce system impact, we’ve added some new setup configuration APIs to make it easier to discover instances of Visual Studio “15”. Heath Stewart’s blog includes more detail about these APIs and samples for managed and unmanaged code.

Marketplace

One last significant change: we’re transitioning to the Visual Studio Marketplace as a place for discovering and installing extensions, again in response to your feedback. The marketplace is a modern website that supports extensions for our developer tools family, including Visual Studio itself, Visual Studio Code and Visual Studio Team Services.

In the next months, we’ll begin the process of retiring the old Visual Studio Gallery. You don’t have any work to do if you’re an extension author – we’ll migrate all the data across automatically. We are excited by some of the new features we’ll be able to offer extension authors and users alike once we are live on the new site.

More Information

If you’re building an extension for Visual Studio “15”, we’re glad to hear from you. In particular, the editor and extensibility engineering teams hang out on our Gitter team room along with many other extension authors. You can also submit or vote on feature requests in the extensibility area of our UserVoice site– we’re listening!

Tim Sneath, Principal Lead Program Manager, Visual Studio

Tim leads a team focused on Visual Studio acquisition and extensibility. His mission is to see developers create stunning applications built on the Microsoft platform, and to persuade his mother that her computer is not an enemy. Amongst other strange obsessions, Tim collects vintage releases of Windows, and has a near-complete set of shrink-wrapped copies that date back to the late 80s.


Best of Both Worlds


Back in February of 2015, I wrote a blog asking a very simple question: how many vendors does it take to implement DevOps? At the time I wrote the post, I felt the answer was one. Almost two years later, I believe that now more than ever. So why do companies insist on manually building a pipeline instead of using a unified solution?

Fear of Vendor Lock In

Despite the fact some vendors offer a complete solution, many still attempt to build DevOps pipelines using as many vendors as possible.

Historically, putting all your eggs in one basket has proved to be risky. Because the systems only provided an all-or-nothing approach, users would lose the flexibility to adopt new technology as it was released. The customer was forced to wait for the solution provider to offer an equivalent feature or, worse, had to start over again with another solution. Customers started to give up the benefits of a unified solution in exchange for flexibility.

This allowed the customer to adopt the hot new technology and be on the bleeding edge with their pipeline. They could evaluate each offering and select the best of breed in each area. On the surface this seemed like a great idea, until they realized the products did not play well together. By this point, they had convinced themselves the cost of integration was unavoidable and just a cost of doing business.

This change in customer mindset had vendors focusing on having the best CI system or source control instead of an integrated system. With vendors only focusing on a part of the pipeline, there were great advances in each area. However, the effort to integrate continued to increase at an alarming rate. Eventually the cost of maintaining the pipeline became too great and actually started to have a noticeable impact on developer productivity.

Even when all the products play nice with each other, it can be difficult to enable good traceability from a code change all the way to production. This is the reason more and more vendors are starting to expand their offerings to reduce the cost and risk of integration.

Calculate True Cost of Ownership

When building your DevOps pipeline, you have to consider the true cost of ownership. The cost is much more than what you paid for the products. The cost includes the amount of time and effort to integrate and maintain them. Time spent on integration and maintenance is time not spent innovating on the software that makes your company money. To try and reduce the cost of ownership, vendors have begun to join forces (http://www.businesswire.com/news/home/20160914005298/en/DevOps-Leaders-Announce-DevOps-Express-Industry-Initiative). This should help mitigate the cost of building your own DevOps pipeline. Nevertheless, with each new tool and vendor, you incur a cost of integration. Someone on your team is now responsible for maintaining that pipeline. Making sure all products are upgraded and that the integration is still intact. This is time much better spent delivering value to your end users instead of maintaining your pipeline.

Adding vendors also complicates your billing, as you are paying multiple vendors instead of one. The opportunity for bundle or volume discounts is also reduced.

Best of Breed

I have met many customers that claim they want the best of breed products. However, when I asked what made one product better than another, I often found that they did not even use that feature. They were complicating their pipeline out of vanity: everyone else said this was the best, so we wanted to be on the best. You need to find the best product for you, which might not be the best of breed for that area. Just because Product A does not have all the bells and whistles of Product B does not mean Product B is the right one for you.

Best of Both Worlds

Today, customers want the ease of a unified solution with the ability to select best of breed. Solutions like Team Services offer you both. Even though Team Services offers everything you need to build a DevOps pipeline from source control and continuous integration to package management and continuous delivery, you are free to replace each piece with the product of your choice. If you already have an investment in a continuous integration system, you can continue to use it along with everything else Team Services has to offer. This can go a long way towards reducing the number of vendors in your pipeline.

We have taken a new approach with Team Services. It is an approach that tries to appeal to both types of customers: those that want a unified solution and those that want Best of Breed. We have teams dedicated to Agile planning, source control, continuous integration, package management, and continuous delivery. These teams work to make sure we stack up against all the offerings of each category. However, they never lose sight of the power of a unified system.

This approach reduces the complexity of building and maintaining your pipeline while retaining your flexibility to select products that are the best fit for your organization.

Stateless 3.0 - A State Machine library for .NET Core


State machines and business processes that describe a series of states seem like they'll be easy to code, but you'll eventually regret trying to do it yourself. Sure, you'll start with a boolean, then two, then you'll need to manage three states and there will be an invalid state to avoid, then you'll just consider quitting altogether. ;)

"Stateless" is a simple library for creating state machines in C# code. It's recently been updated to support .NET Core 1.0. They achieved this not by targeting .NET Core but by writing to the .NET Standard. Just like API levels in Android abstract away the many underlying versions of Android, .NET Standard is a set of APIs that all .NET platforms have to implement. Even better, the folks who wrote Stateless 3.0 targeted .NET Standard 1.0, which is the broadest and most compatible standard - it basically works everywhere and is portable across the .NET Framework on Windows, .NET Core on Windows, Mac, and LInux, as well as Windows Store apps and all phones.

Sure, there's Windows Workflow, but it may be overkill for some projects. In Nicholas Blumhardt's words:

...over time, the logic that decided which actions were allowed in each state, and what the state resulting from an action should be, grew into a tangle of if and switch. Inspired by Simple State Machine, I eventually refactored this out into a little state machine class that was configured declaratively: in this state, allow this trigger, transition to this other state, and so-on.

You can use state machines for anything. You can certainly describe high-level business state machines, but you can also easily model IoT device state, user interfaces, and more.

Even better, Stateless can also serialize your state machine to a standard text-based "DOT Graph" format that can then be rendered into an SVG or PNG like this with http://www.webgraphviz.com. It's super nice to be able to visualize state machines at runtime.

Modeling a Simple State Machine with Stateless

Let's look at a few code examples. You start by describing some finite states as an enum, and some finite "triggers" that cause a state to change. Like a switch could have On and Off as states and Toggle as a trigger.
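
As a warm-up before the Bug example, here is roughly what that light-switch idea might look like in Stateless; the enum and variable names are made up for the illustration.

using System;
using Stateless;

enum SwitchState { Off, On }
enum SwitchTrigger { Toggle }

// ... inside a method:
var light = new StateMachine<SwitchState, SwitchTrigger>(SwitchState.Off);

// In either state, Toggle is permitted and flips to the other state.
light.Configure(SwitchState.Off)
    .Permit(SwitchTrigger.Toggle, SwitchState.On);

light.Configure(SwitchState.On)
    .Permit(SwitchTrigger.Toggle, SwitchState.Off);

light.Fire(SwitchTrigger.Toggle);
Console.WriteLine(light.State); // On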

A more useful example is the Bug Tracker included in the Stateless source on GitHub. To start with here are the states of a Bug and the Triggers that cause state to change:

enum State { Open, Assigned, Deferred, Resolved, Closed }
enum Trigger { Assign, Defer, Resolve, Close }

You then have your initial state, define your StateMachine, and if you like, you can pass Parameters when a state is trigger. For example, if a Bug is triggered with Assign you can pass in "Scott" so the bug goes into the Assigned state - assigned to Scott.

State _state = State.Open;
StateMachine<State, Trigger> _machine;
StateMachine<State, Trigger>.TriggerWithParameters<string> _assignTrigger;

string _title;
string _assignee;

Then, in this example, the Bug constructor describes the state machine using a fluent interface that reads rather nicely.

public Bug(string title)
{
_title = title;

_machine = new StateMachine<State, Trigger>(() => _state, s => _state = s);

_assignTrigger = _machine.SetTriggerParameters<string>(Trigger.Assign);

_machine.Configure(State.Open)
.Permit(Trigger.Assign, State.Assigned);

_machine.Configure(State.Assigned)
.SubstateOf(State.Open)
.OnEntryFrom(_assignTrigger, assignee => OnAssigned(assignee))
.PermitReentry(Trigger.Assign)
.Permit(Trigger.Close, State.Closed)
.Permit(Trigger.Defer, State.Deferred)
.OnExit(() => OnDeassigned());

_machine.Configure(State.Deferred)
.OnEntry(() => _assignee = null)
.Permit(Trigger.Assign, State.Assigned);
}

For example, when the State is Open, it can be Assigned. But as this is written (you can change it) you can't close a Bug that is Open but not Assigned. Make sense?

When the Bug is Assigned, you can Close it, Defer it, or Assign it again. That's PermitReentry(). Also, notice that Assigned is a Substate of Open.

You can have events that are fired as states change. Those events can take actions as you like.

void OnAssigned(string assignee)
{
if (_assignee != null && assignee != _assignee)
SendEmailToAssignee("Don't forget to help the new employee.");

_assignee = assignee;
SendEmailToAssignee("You own it.");
}

void OnDeassigned()
{
SendEmailToAssignee("You're off the hook.");
}

void SendEmailToAssignee(string message)
{
Console.WriteLine("{0}, RE {1}: {2}", _assignee, _title, message);
}

With a nice State Machine library like Stateless you can quickly model states that you'd ordinarily do with a "big ol' switch statement."

What have you used for state machines like this in your projects?


Sponsor: Big thanks to Telerik! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is still a viable language. Check it out!



© 2016 Scott Hanselman. All rights reserved.
     

In Case You Missed It – This Week in Windows Developer


While most of the world froze in place to follow the endless stream of U.S. presidential election coverage, we continued to push forward in the world of Windows Developer. And by push forward, we humbly admit that we just kept geeking out over the new Surface Dial and its recently released APIs. (Check out the Surface Dial and more updates from our event here.)

What Devs Need to Know about the Windows 10 Creators Update & New Surface Devices

We recently learned that you can tweak the Surface Dial to be the ultimate debugging tool. Check it out here:

And while the politicians duked it out in the electoral college, one particular MVP found himself in a higher stakes conflict – battling aliens in a mall.

Insider Preview Build 14965

TL;DR – A bunch of updates and improvements across the board. Check out Dona’s post by clicking above.

MVP Summit

And, on a high note, we had a great time hosting our Microsoft MVPs in Redmond this week. Thank you to everyone who attended and helped organize the event. Here’s a quick recap from Day One:

Overall, regardless of what happens politically, there will always be more bugs to squash and even more code to write. So, on that note, have a great weekend; we'll be right here waiting for you on Monday morning!

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a bug to report, please use the Windows Feedback tool built directly into Windows 10.

Connect(); // 2016 starts Nov 16th


As hundreds of people across Microsoft head towards New York City we wanted to take this opportunity to write a short blog post to remind our community that we’re almost ready to unveil Connect(); // 2016, Microsoft’s big fall developer event, streaming live and totally for free from November 16th through the 18th.

You might be reading this and asking "why should I watch?" or "what exactly is on the agenda this year?", so let us walk you through the details so you can decide.

Day 1: November 16th (6:45am – 1:30pm PST)

Keynotes: November 16th we will be live streaming from 9:45am through 4:30pm EST. Join the live stream to see our keynotes by Scott Guthrie and Scott Hanselman, along with many other guest speakers. You don’t want to miss this live stream; we will have lots of exciting news and announcements.

Live Q&A: After the keynotes, starting at around 1:00pm EST, we will begin our live Q&A with various keynote speakers, executives and some very special guests. This will be your chance to engage and ask questions, as we will be taking them live through Channel 9.

Day 2: November 17th (8:00am – 5:00pm PST)

Live Sessions: On November 17th we're once again live streaming, from 8:00am until 5:00pm PST. Over the course of nine hours you'll have the opportunity to dive deeper with the product teams that made the day 1 announcements possible. This will include sessions on Visual Studio, .NET, Mobile, ALM & DevOps, Azure, Intelligent Apps and Data, Windows and Office development.

Not only will our product teams go deeper into the announcements from day 1, but they will also take your questions. We will also show some demos that go beyond the keynote.

Day 3: November 18th (9:00am – 4:00pm PST)

Free Live Training: Day 3 of Connect(); is new this year: a training day where you will have the chance to participate in totally free, live training from both Xamarin University and Microsoft Virtual Academy. These training providers will cover the following topics in their agendas:

  • Xamarin University: Mobile and Cloud Application Development
  • Microsoft Virtual Academy: Web, Cloud and Data Application Development

All the training content will also be available on-demand, so don't worry if you miss the live stream. Come join us when you have time after the November 18th live stream is over and you'll have full access to the recorded versions of these trainings for free on the respective sites.

Joining live does give you one advantage, and that's the ability to ask questions, so we hope you'll consider doing so!

On-demand Video & MSDN Magazine

On-demand Videos: On November 16th we’ll be publishing over 110 brand new on-demand videos. These relatively short (typically around 8-15min) videos provide you another way to learn about the topics or scenarios you’re most interested in without the fluff.

In addition, all the live-streamed keynotes, live Q&As and day 2 sessions will be recorded and published to Channel 9 after the event, so you can view them on your own schedule.

MSDN Magazine: We will also be shipping a special edition of MSDN Magazine, available to both print subscribers and anyone who wishes to read it online (for free). We will be publishing over 12 articles that cover a wide variety of topics from the keynotes, and if you know MSDN Magazine then you know these articles will be deep technical content from some of our best writers.

So that’s it! It has been a lot of work to get here, and I hope this post helps you understand exactly where you can join us live or find the content on-demand whenever it suits you. We really hope you will join us - thank you so much.

Dmitry Lyalin has been building software for 18+ years and has worked in various industries including payroll, education, banking, media and more recently Microsoft Consulting & Premier Support. Currently Dmitry works as a Senior Product Manager for the Cloud App, Developer and Data team out of Redmond. In his spare time Dmitry loves writing code and is a very passionate PC gamer.

Visual Studio Code just keeps getting better - with extensions

Visual Studio Code

I've been a fan of Visual Studio Code (the free code editor) since it was released. But even though it continues to update itself as I use it, I didn't really grok how much cool stuff has been going on under the hood.

As of this writing, VS Code is at version 1.7.1. Here are the highlights of this new version:

But the REAL star and the REAL magic in VS Code - IMHO - is the growing VS Code Extension Gallery/Marketplace. Go check it out, but here's just a taste of the cool stuff that plugs nicely into Visual Studio Code.

Great Visual Studio Code Extensions

What are your favorite VS Code extensions?


Sponsor: Big thanks to Telerik! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is still a viable language. Check it out!



© 2016 Scott Hanselman. All rights reserved.
     