
MIDI Enhancements in Windows 10


As Windows 10 evolves, we are continuing to build in support for musician-focused technologies.

Let’s take a look at MIDI. Windows has had built-in MIDI support going back to the 16-bit days. Since then, most MIDI interfaces have moved to USB and our in-box support has kept pace, with a class driver and APIs that support those new interfaces.

Those unfamiliar with music technology may think of MIDI as just .mid music files. But that’s only a tiny part of what MIDI really is. Since its standardization in 1983, MIDI has remained the most used and arguably most important communications protocol in music production. It’s used for everything from controlling synthesizers and sequencers and changing patches for set lists, to synchronizing mixers and even switching cameras on podcasts using a MIDI control surface. Even the Arduino Firmata protocol is based on MIDI.

In this post, we’ll talk about several new things we’ve created to make MIDI even more useful in your apps:

  • UWP MIDI Basics – using MIDI in Windows Store apps
  • New Bluetooth LE MIDI support in Windows 10 Anniversary Update
  • The Win32 wrapper for UWP MIDI (making the API accessible to desktop apps)
  • MIDI Helper libraries for C# and PowerShell

In addition, we included a number of audio-focused enhancements when Windows 10 was released last summer. These enhancements included: low-latency improvements to WASAPI; additional driver work with partners to opt in to smaller buffers for lower latency on modern Windows 10 devices like Surface and phones like the 950/950xl; tweaks to enable raw audio processing without any latency-adding DSP; a new low-latency UWP Audio and effects API named AudioGraph; and, of course, a new UWP MIDI API.

We’ve also recently added support for spatial audio for immersive experiences. This past fall, in the 1511 update, we enabled very forward-looking OS support for Thunderbolt 3 Audio devices, to ensure we’re there when manufacturers begin creating these devices and their high performance audio drivers. Cumulatively, this was a lot of great work by Windows engineering, all targeting musicians and music creation apps.

UWP MIDI Basics

In Windows 10 RTM last year we introduced a new MIDI API, accessible to UWP Apps on virtually all Windows 10 devices, which provides a modern way to access these MIDI interfaces. We created this API to provide a high performance and flexible base upon which we can build support for new MIDI interfaces.

We originally put this API out for comment as a NuGet package in Windows 8.1 and received a lot of feedback from app developers. What you see in Windows 10 is a direct result of that feedback and our testing.

The API plugs in nicely with the device enumeration and watcher APIs in UWP, making it easy to detect hot plug/unplug of devices while your app is running.

Here’s a simple way to get a list of MIDI devices and their IDs, using C#:


using Windows.Devices.Midi;
using Windows.Devices.Enumeration;
...
private async void ListMidiDevices()
{
    // Enumerate Input devices

    var deviceList = await DeviceInformation.FindAllAsync(
             MidiInPort.GetDeviceSelector());

    foreach (var deviceInfo in deviceList)
    {
        System.Diagnostics.Debug.WriteLine(deviceInfo.Id);
        System.Diagnostics.Debug.WriteLine(deviceInfo.Name);
        System.Diagnostics.Debug.WriteLine("----------");
    }

    // Output devices are enumerated the same way, but 
    // using MidiOutPort.GetDeviceSelector()
}

And here’s how to set up a watcher and handle enumeration/watcher events, and also get the list of connected interfaces. This is a bit more code, but it’s a more appropriate approach for most apps:


private void StartWatchingInputDevices()
{
    var watcher = DeviceInformation.CreateWatcher(
                     MidiInPort.GetDeviceSelector());

    watcher.Added += OnMidiInputDeviceAdded;
    watcher.Removed += OnMidiInputDeviceRemoved;
    watcher.EnumerationCompleted += OnMidiInputDeviceEnumerationCompleted;

    watcher.Start();
}

private void OnMidiInputDeviceEnumerationCompleted(
    DeviceWatcher sender, object args)
{
    // Initial enumeration is complete. This is when
    // you might present a list of interfaces to the
    // user of your application.
}

private void OnMidiInputDeviceRemoved(
    DeviceWatcher sender, DeviceInformationUpdate args)
{
    // handle the removal of a MIDI input device
}

private void OnMidiInputDeviceAdded(
    DeviceWatcher sender, DeviceInformation args)
{
    // handle the addition of a new MIDI input device
}

Using a watcher for listing devices and handling add/remove is a best practice to follow in your apps. No one wants to restart their app just because they forgot to plug in or turn on their MIDI controller. Using the watcher makes it easy for your app to appropriately handle those additions/removals at runtime.

The API is simple to use, with strongly typed classes for all standard messages, as well as support for SysEx and buffer-based operations. This C# example shows how to open input and output ports, and respond to specific MIDI messages.


using Windows.Devices.Midi;
using Windows.Devices.Enumeration;
...
private async void MidiExample()
{
    string outPortId = "id you get through device enumeration";
    string inPortId = "id you get through device enumeration";

    // open output port and send a message
    var outPort = await MidiOutPort.FromIdAsync(outPortId);
    var noteOnMessage = new MidiNoteOnMessage(0, 110, 127);
    outPort.SendMessage(noteOnMessage);

    // open an input port and listen for messages
    var inPort = await MidiInPort.FromIdAsync(inPortId);
    inPort.MessageReceived += OnMidiMessageReceived;
}

private void OnMidiMessageReceived(MidiInPort sender, 
                         MidiMessageReceivedEventArgs args)
{
    switch (args.Message.Type)
    {
        case MidiMessageType.NoteOn:
            break;
        case MidiMessageType.PolyphonicKeyPressure:
            break;
        // etc.
    }
}

In most cases, you would inspect the type of the message, and then cast the IMidiMessage to one of the strongly-typed messages defined in the Windows.Devices.Midi namespace, such as MidiNoteOnMessage or MidiPitchBendChangeMessage. You’re not required to do this, however; you can always work from the raw data bytes if you prefer.
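
For example, a minimal sketch of that pattern inside the NoteOn case of the handler above (the property names are those of MidiNoteOnMessage in Windows.Devices.Midi; the debug output is just for illustration):


        case MidiMessageType.NoteOn:
            // Cast the IMidiMessage to the strongly typed message
            // to read its channel, note and velocity.
            var noteOn = (MidiNoteOnMessage)args.Message;

            System.Diagnostics.Debug.WriteLine(
                $"NoteOn channel={noteOn.Channel} note={noteOn.Note} velocity={noteOn.Velocity}");
            break;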

The Windows 10 UWP MIDI API is suitable for creating all kinds of music-focused Windows Store apps. You can create control surfaces, sequencers, synthesizers, utility apps, patch librarians, lighting controllers, High Voltage Tesla Coil Synthesizers and much more.

Just like the older MIDI APIs, the Windows 10 UWP MIDI API works well with third-party add-ons such as Tobias Erichsen’s great rtpMIDI driver, providing support for MIDI over wired and Wi-Fi networking.

One great feature of the new API is that it is multi-client. As long as all apps with the port open are using the Windows 10 UWP MIDI API and not the older Win32 MME or DirectMusic APIs, they can share the same device. This is something the older APIs don’t handle without custom drivers and was a common request from our partners and customers.

Finally, it’s important to note that the Windows 10 UWP MIDI API works with all recognized MIDI devices, whether they use class drivers or their own custom drivers. This includes many software-based MIDI utilities implemented as drivers on Windows 10.

New Bluetooth LE MIDI support in UWP MIDI

In addition to multi-client support and the improvements we’ve made in performance and stability, a good reason to use the Windows 10 UWP MIDI API is because of its support for new standards and transports.

Microsoft actively participates in the MIDI standards process and has representatives in the working groups. There are several of us inside Microsoft who participate directly in the creation, vetting and voting of standards for MIDI, and for audio in general.

One exciting and relatively new MIDI standard which has been quickly gaining popularity is Bluetooth LE MIDI. Microsoft voted to ratify the standard based upon the pioneering work that Apple did in this space; as a result, Apple, Microsoft and others are compatible with a standard that is seeing real traction in the musician community, and already has a number of compatible peripherals.

In Windows 10 Anniversary Edition, we’ve included in-box support for Bluetooth LE MIDI for any app using the Windows 10 UWP MIDI API. This is an addition which requires no changes to your code, as the interface itself is simply another transparent transport surfaced by the MIDI API.

This type of MIDI interface uses the Bluetooth radio already in your PC, Phone, IoT device or other Windows 10 device to talk to Bluetooth MIDI peripherals such as keyboards, pedals and controllers. Currently the PC itself can’t be a peripheral, but we’re looking at that for the future. Although there are some great DIN MIDI to Bluetooth LE MIDI and similar adapters out there, no additional hardware is required for Bluetooth LE MIDI in Windows 10 as long as your PC has a Bluetooth LE capable radio available.

We know latency is important to musicians, so we made sure our implementation is competitive with other platforms. Of course, Bluetooth has higher latency than a wired USB connection, but that tradeoff can be worth it to eliminate the cable clutter.

When paired, the Bluetooth LE MIDI peripheral will show up as a MIDI device in the device explorer, and will be automatically included in the UWP MIDI device enumeration. This is completely transparent to your application.

For more information on how to discover and pair devices, including Bluetooth LE MIDI devices, please see the Device Enumeration and Pairing example on GitHub.

We added this capability in Windows 10 Anniversary Edition as a direct result of partner and customer feedback. I’m really excited about Bluetooth LE MIDI in Windows 10 and the devices which can now be used on Windows 10.


Desktop application support for the UWP MIDI API

We know that the majority of musicians use desktop Win32 DAWs and utilities when making music. The UWP MIDI API is accessible to desktop applications, but we know that accessing UWP APIs from different languages and build environments can be challenging.

To help desktop app developers with the new API and to reduce friction, my colleague Dale Stammen on our WDG/PAX Spark team put together a Win32 wrapper for the Windows 10 UWP MIDI API.

The work our team does, including this API wrapper, is mostly partner-driven. That means that as a result of requests and feedback, we create things to enable partners to be successful on Windows. One of the partners we worked with when creating this is Cakewalk, makers of the popular SONAR desktop DAW application.

This is what their developers had to say about the Win32 wrapper for the UWP MIDI API, and our support for Bluetooth LE MIDI:

“We’re happy to see Microsoft supporting the Bluetooth MIDI spec and exposing it to Windows developers through a simplified API. Using the new Win32 wrapper for the UWP MIDI API, we were able to prototype Bluetooth MIDI support very quickly. At Cakewalk we’re looking ahead to support wireless peripherals, so this is a very welcome addition from Microsoft.”

—  Noel Borthwick, CTO, Cakewalk

 


We love working with great partners like Cakewalk, knowing that the result will directly benefit our mutual customers.

This Win32 wrapper makes it simple to use the API just like any flat Win32 API. It surfaces all the capabilities of the Windows 10 UWP MIDI API, and removes the requirement for your Win32 application to be UWP-aware. Additionally, there’s no requirement to use C++/CX or otherwise change your build tools and processes. Here’s a C++ Win32 console app example:


// open midi out port 0
result = gMidiOutPortOpenFunc(midiPtr, 0, &gMidiOutPort);
if (result != WINRT_NO_ERROR)
{
	cout << "Unable to create Midi Out port" << endl;
	goto cleanup;
}

// send a note on message to the midi out port
unsigned char buffer[3] = { 144, 60, 127 };
cout << "Sending Note On to midi output port 0" << endl;
gMidiOutPortSendFunc(gMidiOutPort, buffer, 3);

Sleep(500);

// send a note off message to the midi out port
cout << "Sending Note Off to midi output port 0" << endl;
buffer[0] = 128;
gMidiOutPortSendFunc(gMidiOutPort, buffer, 3); 

This API is optimized for working with existing Win32 applications, so we forgo strongly typed MIDI messages and work instead with byte arrays, just like Win32 music app developers are used to.

We’re still getting feedback from partners and developers on the API wrapper, and would love yours. You can find the source code on GitHub. We may change the location later, so the aka.ms link ( http://aka.ms/win10midiwin32nuget ) is the one you want to keep handy.

For developers using recent versions of Visual Studio, we’ve also made available a handy NuGet package.

We’re already working with desktop app partners to incorporate this API into their applications using this wrapper, as well as other audio and user experience enhancements in Windows 10. If you have a desktop app targeting pro musicians and have questions, please contact me at @pete_brown on Twitter, or pete dot brown at Microsoft dot com.

MIDI Helper libraries for Windows Store apps

In addition to the Win32 API wrapper, we also have some smaller helper libraries for store app developers and PowerShell users.

The first is my Windows 10 UWP MIDI API helper, for C#, VB, and C++ Windows Store apps. This is designed to make it easier to enumerate MIDI devices, bind to the results in XAML and respond to any hot plug/unplug changes. It’s available both as source and as a compiled NuGet package.

It includes a watcher class with XAML-friendly bindable / observable collections for the device information instances.

RPN and NRPN Messages

Additionally, the helper library contains code to assist with RPN (Registered Parameter Number) and NRPN (Non-Registered Parameter Number) messages. These can be more challenging for new developers to work with because they are logical messages composed of several different messages sent in succession.
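
For context, here is a rough sketch (not taken from the helper library itself) of what sending a single RPN, pitch bend sensitivity (RPN 0/0), looks like when composed by hand from Control Change messages using the UWP MIDI types; outPort is assumed to be an IMidiOutPort opened as in the earlier example. The helper library wraps this kind of sequence into a single strongly typed message.


    // Sketch: sending a pitch bend sensitivity RPN "by hand" with raw
    // Control Change messages; outPort is an IMidiOutPort opened earlier.
    byte channel = 0;

    // CC 101/100 select the registered parameter number (MSB/LSB)
    outPort.SendMessage(new MidiControlChangeMessage(channel, 101, 0));
    outPort.SendMessage(new MidiControlChangeMessage(channel, 100, 0));

    // CC 6/38 carry the data value (MSB/LSB): here, a range of 2 semitones
    outPort.SendMessage(new MidiControlChangeMessage(channel, 6, 2));
    outPort.SendMessage(new MidiControlChangeMessage(channel, 38, 0));

    // Setting RPN 127/127 "closes" the parameter so later CC 6 messages are ignored
    outPort.SendMessage(new MidiControlChangeMessage(channel, 101, 127));
    outPort.SendMessage(new MidiControlChangeMessage(channel, 100, 127));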


Because we exposed the Windows.Devices.Midi.IMidiMessage interface in UWP, and the underlying MIDI output code sends whatever is in the buffer, creating strongly typed aggregate message classes was quite easy. When sending messages, you use these classes just like any other strongly typed MIDI message.

I’m investigating incorporating support for the proposed MPE (Multidimensional Polyphonic Expression), as well as for parsing and aggregating incoming RPN and NRPN messages. If these features would be useful to you in your own apps, please contact me and let me know.

MIDI Clock Generator

One other piece the library includes is a MIDI clock generator. If you need a MIDI clock generator (not for a sequencer control loop, but just to produce outgoing clock messages), the library contains an implementation that you will find useful. Here’s how you use it from C#:


private MidiClockGenerator _clock = new MidiClockGenerator();

...
_clock.SendMidiStartMessage = true;
_clock.SendMidiStopMessage = true;

...


foreach (DeviceInformation info in deviceWatcher.OutputPortDescriptors)
{
    var port = (MidiOutPort)await MidiOutPort.FromIdAsync(info.Id);

    if (port != null)
        _clock.OutputPorts.Add(port);
}

...

public void StartClock()
{
    _clock.Start();
}

public void StopClock()
{
    _clock.Stop();
}

public double ClockTempo
{
    get { return _clock.Tempo; }
    set
    {
        _clock.Tempo = value;
    }
}

My GitHub repo includes the C++/CX source and a C#/XAML client app. As an aside: This was my first C++/CX project. Although I still find C# easier for most tasks, I found C++/CX here quite approachable. If you’re a C# developer who has thought about using C++/CX, give it a whirl. You may find it more familiar than you expect!

This library will help developers follow best practices for MIDI apps in the Windows Store. Just like with desktop apps, if you’re building a musician-focused app here and have questions, please contact me at @pete_brown on Twitter, or pete dot brown at Microsoft dot com.

The second helper library is a set of PowerShell commands for using the Windows 10 UWP MIDI API. I’ve talked with individuals who are using this to automate scripting of MIDI updates to synchronize various mixers in a large installation and others who are using it as “glue” for translating messages between different devices. There’s a lot you can do with PowerShell in Windows 10, and now MIDI is part of that. The repo includes usage examples, so I won’t repeat that here.

Conclusion

I’m really excited about the work we continue to do in the audio space to help musicians and music app developers on Windows.

Altogether, the UWP MIDI API, the Win32 wrapper for the UWP MIDI API, and the helper libraries for Windows Store apps and for PowerShell scripting make it possible for apps and scripts to take advantage of the latest MIDI tech in Windows 10, including Bluetooth MIDI.

I’m really looking forward to the upcoming desktop and Windows Store apps which will support this API, and technologies like Bluetooth LE MIDI. And, as I mentioned above, please contact me directly if you’re building a pro musician-targeted app and need guidance or otherwise have questions.

Resources

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


Reusing Configuration Files in ASP.NET Core


Introduction

The release of ASP.NET Core 1.0 has enticed existing ASP.NET customers to migrate their projects to this new platform. While working with a customer to move their assets to ASP.NET Core, we encountered an obstacle. Our customer had hundreds of configuration files (*.config) in regular use, and transforming them into a format that could be consumed by the existing configuration providers would take time. In this post, we’ll show how to tackle this hurdle by utilizing ASP.NET Core’s configuration extensibility model. We will write our own configuration provider to reuse the existing *.config files.

The new configuration model handles configuration values as a series of name-value pairs. There are built-in configuration providers to parse (XML, JSON, INI) files. It also enables developers to create their own providers if the current providers do not suit their needs.
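
For example, wiring up several of the built-in providers with the ConfigurationBuilder looks roughly like this (a sketch; the file names are placeholders, and the JSON, XML and INI provider packages from Microsoft.Extensions.Configuration are assumed to be referenced):


    using System.IO;
    using Microsoft.Extensions.Configuration;
    ...
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json", optional: true)
        .AddXmlFile("settings.xml", optional: true)
        .AddIniFile("settings.ini", optional: true)
        .Build();

    // All providers flatten their content into name-value pairs,
    // addressed with ':' separators, e.g. "Logging:LogLevel:Default".
    var value = config["AppSettings:SomeKey"];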

Creating the ConfigurationProvider

By following the documentation outlined in Writing custom providers, we created a class that inherited from ConfigurationProvider. Then, it was a matter of overriding the Load() method to parse the data we wanted from the configuration files! Here’s a little snippet of that code below.
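
The full provider is in the sample repository on GitHub; as a minimal sketch, a provider that pulls the appSettings key/value pairs out of a *.config file could look something like this (class names here are illustrative, not the customer’s actual code):


    using System.Xml.Linq;
    using Microsoft.Extensions.Configuration;

    public class ConfigFileConfigurationProvider : ConfigurationProvider
    {
        private readonly string _path;

        public ConfigFileConfigurationProvider(string path)
        {
            _path = path;
        }

        public override void Load()
        {
            // Read <appSettings><add key="..." value="..."/></appSettings>
            // entries into the Data dictionary exposed by the base class.
            var doc = XDocument.Load(_path);

            foreach (var add in doc.Descendants("appSettings").Descendants("add"))
            {
                Data[(string)add.Attribute("key")] = (string)add.Attribute("value");
            }
        }
    }

    // The provider is handed to the builder through an IConfigurationSource.
    public class ConfigFileConfigurationSource : IConfigurationSource
    {
        private readonly string _path;

        public ConfigFileConfigurationSource(string path)
        {
            _path = path;
        }

        public IConfigurationProvider Build(IConfigurationBuilder builder)
        {
            return new ConfigFileConfigurationProvider(_path);
        }
    }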

The ConfigurationProvider in Action

Consider the following Web.config:

To reuse this file, all we have to do is use our new provider as shown below and run dotnet run to see the code in action.
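
A sketch of that usage, assuming the ConfigFileConfigurationSource class from the sketch above (the key name is a placeholder):


    var config = new ConfigurationBuilder()
        .Add(new ConfigFileConfigurationSource("Web.config"))
        .Build();

    // appSettings entries now read like any other configuration value.
    System.Console.WriteLine(config["SomeAppSettingKey"]);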

`dotnet run` console output

Our configuration provider code is on GitHub so you can check it out yourself and see how easy it is to use the ASP.NET configuration model.


Import Git repos and view work item attachments – Sept 21


We have some exciting new features this sprint!

View work item attachments

Continuing to improve the work item attachment experience, you can now view attached images without leaving the work item. When you click an image in the attachment grid, we will create a lightbox to view the image.

Work item type layout improvements

Our layout admin page for work item types has been improved so that group and page contributions can be managed at a work item type level. You can now hide group and page contributions from particular work items and control their positioning in the new form to optimize your process.

wit layout admin

Another improvement in this area is the ability to add custom controls to your work item form. Developers can target this contribution and you will be able to add and configure them for your different work item types.

custom controls

Disable work item types

With this release, you can disable both inherited and custom work item types. Disabling a work item type will prevent creation of new work items of that type. Existing work items will continue to appear on the backlog/board and query results, and can still be modified.

To disable a work item type, go to the work item type’s Overview tab on the Process administration page and check the disable box.

Import repository

Customers can now import a Git repository from GitHub, BitBucket, GitLab, or other locations. You can import into either a new repository or an existing empty repository.

Import into a new repository

From the repository selector drop-down, click Import repository.

import new repo

If the source repository is a publicly available repository, then simply provide the clone URL of the source repository and you are good to go.

import git repo

If the source repository is a private repository and can be accessed using basic authentication (username-password, personal access token, etc.), then you will need to select the repository visibility as Private and provide the corresponding credentials.

Import into an existing empty repository

On the Explorer page, click the Import repository button and provide the clone URL. You will need to provide credentials if the source repository is private and requires authentication.

import git repo

Markdown preview button

When viewing a diff of a markdown file in a commit, push, or pull request, you can now easily toggle to see the resulting rendered view.

Confirmation for deleting repos

To prevent accidental repository deletions, you now have to type the name of the repository that you wish to delete to confirm the action.

Add .gitignore during repo creation

While creating a new Git repository, customers can now add and associate a .gitignore file with their repository. A .gitignore file specifies files that Git should ignore while performing a commit.

The dialog allows users to select one of the many available .gitignore templates.

gitignore

Verify bugs from work item

You can now verify a bug by re-running the tests which identified the bug. You can invoke the Verify option from the bug work item form context menu to launch the relevant test case in the web runner. Perform your validation using the web runner and update the bug work item directly within the web runner.

verify bugs

Xcode Build task xcpretty formatting

You can now format your xcodebuild output with xcpretty. You can also publish JUnit test results to Team Services with xcodebuild. Previously, xctool had to be used as the build tool to publish test results. Now, to enable xcpretty, check Use xcpretty and uncheck Use xctool in the Advanced section of the Xcode task.

xcpretty

Publish Jenkins test and code coverage results

The Jenkins Queue Job build and release task can now retrieve test and code coverage results from a Jenkins job or pipeline. This requires installation of the TFS Plugin for Jenkins 5.2.0 or later on your Jenkins server and configuring the post-build action Collect Results for TFS/Team Services. After results are retrieved from Jenkins, they can be published to Team Services with the Publish Test Results or Publish Code Coverage build tasks.

Build summary for Maven and Gradle tasks

When you enable the Run SonarQube Analysis option in the Maven or Gradle build tasks, you get a link on the SonarQube project. You can request a full analysis to see the quality gates details, and choose to break the build if they are not met.

maven and gradle

FindBugs and CheckStyle in Maven build tasks

You can now request FindBugs and CheckStyle standalone static analysis in the Maven build task (in addition to the PMD analysis). The results of the static analysis appear in the build summary, and resulting files are available from the Artifact tab of the build result.

FindBugs

Deployment status widget

A build can be deployed and tested in different release environments across multiple release definitions. In such a scenario, the Deployment status widget shows you a consolidated status of the deployment and test pass rate across multiple environments for a recent set of builds from a build definition.

Deployment widget

If you have ideas on things you’d like to see, head over to UserVoice to add your idea or vote for an existing one.

Thanks,

Jamie Cool

Check Out this Fall’s New TV Shows, Movies & More

As summer fades and the weather cools, people begin to spend more time indoors, in front of their screens and devices, looking for entertainment. To help you navigate the deluge of offerings, the Bing team has curated carousels for this fall’s TV series, movies and books. We’ve also included predictions for top reality voting shows and the 2016 Primetime Emmy Awards®*, airing this Sunday, September 18.
 
Check out the list of Bing 2016 fall entertainment experiences, and stay tuned for new releases and updates coming this October and November.
 
TV
• Fall TV premieres: Search Bing for ‘Fall TV shows’ to see a list of this season’s lineup. Use the filters within the carousel to narrow your search by month of release, genre, or new versus returning shows.
 
 
• Cancelled TV shows: Many people search for a list of shows that won’t make it back to the small screen. If you’re among this group, Bing has you covered. Just search for ‘cancelled TV shows’ to see the list of the unfortunate programs.
• Primetime Emmy Awards: See the list of nominees and who Bing predicts will take home the statuette for each major category. For example, Game of Thrones is likely to take home an Emmy for Outstanding Drama Series for the second year in a row, and Veep is also predicted to repeat its Outstanding Comedy Series win.

 
• Reality voting shows: This season, we’re keeping a close eye on the two most searched-for reality voting shows: Dancing with the Stars (DWTS) and The Voice. DWTS just started its 23rd season, and Bing already has its first predictions: Ryan Lochte is safe while former Texas Governor Rick Perry is in danger of elimination. For more predictions, check bing.com. The Voice returns on September 19th and predictions will be available after the first episode.
 
MOVIES
• Movies in theaters: Itching to get out of the house? How about a movie? Use Bing’s carousel of ‘fall movies’ to see what’s in theaters now, and coming soon. Click on each title to watch the trailer, read reviews, see the cast, check local show times (available starting three days prior to theater release), buy tickets, and more.
• Movies at home: Not a new feature to Bing, but always a great reminder that you can search what’s new on Netflix or Amazon to see what movies are available to stream.

 
BOOKS
• Predicted must-read books: With tens of thousands of new books published each season, it can be difficult to choose your next read. To help, we’ve enlisted our Bing Predicts technology that factors in the search, web and social buzz around the books, early reviews and more. From there, we used the data to showcase the top 50 books predicted to be the best sellers this autumn. Just search for ‘fall books’ to see which made Bing’s top list. From there you can use the filters in the carousel to search by release date and by paperback or hardback.
 
 
As you can see, it’s a jam-packed season for entertainment, but we’re just getting started. We’ll be back next month to share more. Until then, enjoy!
 
- The Bing Team


 
* The Emmy name and the Emmy statuette are the trademark property of The Academy of Television Arts & Sciences and the National Academy of Television Arts & Sciences. Additionally, the Emmy name, logo and statuette are licensed to the International Academy of Television Arts & Sciences for International Emmys.

Announcing TypeScript 2.0


Today we’re excited to announce the final release of TypeScript 2.0!

TypeScript 2.0 has been a great journey for the team, with several contributions from the community and partners along the way. It brings several new features that enhance developer productivity, advances TypeScript’s alignment with ECMAScript’s evolution, provides wide support for JavaScript libraries and tools, and augments the language service that powers a first class editing experience across tools.

To get started, you can download TypeScript 2.0 for Visual Studio 2015 (which needs Update 3), grab it with NuGet, start using TypeScript 2.0 in Visual Studio Code, or install it with npm:

npm install -g typescript@2.0

For Visual Studio “15” Preview users, TypeScript 2.0 will be included in the next Preview release.

The 2.0 Journey

A couple of years ago we set out on this journey to version 2.0. TypeScript 1.0 had successfully shown developers the potential of JavaScript when combined with static types. Compile-time error checking saved countless hours of bug hunting, and TypeScript’s editor tools gave developers a huge productivity boost as they began building larger and larger JavaScript apps. However, to be a full superset of the most popular and widespread language in the world, TypeScript still had some growing to do.

TypeScript 1.1 brought a new, completely rewritten compiler that delivered a 4x performance boost. This new compiler core allowed more flexibility, faster iteration, and provided a performance baseline for future releases. Around the same time, the TypeScript repository migrated to GitHub to encourage community engagement and provide a better platform for collaboration.

TS 1.4 & 1.5 introduced a large amount of support for ES2015/ES6 in order to align with the future of the JavaScript language. In addition, TypeScript 1.5 introduced support for modules and decorators, allowing Angular 2 to adopt TypeScript and partner with us in the evolution of TypeScript for their needs.

TypeScript 1.6-1.8 delivered substantial type system improvements, with each new release lighting up additional JavaScript patterns and providing support for major JavaScript libraries. These releases also rounded out ES* support and buffed up the compiler with more advanced out-of-the-box error checking.

Today we’re thrilled to release version 2.0 of the TypeScript language. With this release, TypeScript delivers close ECMAScript spec alignment, wide support for JavaScript libraries and tools, and a language service that powers a first class editing experience in all major editors; all of which come together to provide an even more productive and scalable JavaScript development experience.

The TypeScript Community

Since 1.0, TypeScript has grown not only as a language but also as a community. Last month alone, TypeScript had over 2 million npm downloads compared to just 275K in the same month last year. In addition, we’ve had tremendous adoption of the TypeScript nightly builds with over 2000 users participating in discussion on GitHub and 1500 users logging issues. We’ve also accepted PRs from over 150 users, ranging from bug fixes to prototypes and major features.

DefinitelyTyped is another example of our community going above and beyond. Starting out as a small repository of declaration files (files that describe the shape of your JS libraries to TypeScript), it now contains over 2,000 libraries that have been written by hand by over 2,500 individual contributors. It is currently the largest formal description of JavaScript libraries that we know of. By building up DefinitelyTyped, the TypeScript community has not only supported the usage of TypeScript with existing JavaScript libraries but also better defined our understanding of all JavaScript code.

The TypeScript and greater JavaScript communities have played a major role in the success that TypeScript has achieved thus far, and whether you’ve contributed, tweeted, tested, filed issues, or used TypeScript in your projects, we’re grateful for your continued support!

What’s New in TypeScript 2.0?

TypeScript 2.0 brings several new features over the 1.8 release, some of which we detailed in the 2.0 Beta and Release Candidate blog posts. Below are highlights of the biggest features that are now available in TypeScript, but you can read about tagged unions, the new never type, this types for functions, glob support in tsconfig, and all the other new features on our wiki.

Simplified Declaration File (.d.ts) Acquisition

Typings and tsd have been fantastic tools for the TypeScript ecosystem. Up until now, these package managers helped users get .d.ts files from DefinitelyTyped to their projects as fast as possible. Despite these tools, one of the biggest pain points for new users has been learning how to acquire and manage declaration file dependencies from these package managers.

Getting and using declaration files in 2.0 is much easier. To get declarations for a library like lodash, all you need is npm:

npm install --save @types/lodash

The above command installs the scoped package @types/lodash which TypeScript 2.0 will automatically reference when importing lodash anywhere in your program. This means you don’t need any additional tools and your .d.ts files can travel with the rest of your dependencies in your package.json.

It’s worth noting that both Typings and tsd will continue to work for existing projects, however 2.0-compatible declaration files may not be available through these tools. As such, we strongly recommend upgrading to the new npm workflow for TypeScript 2.0 and beyond.

We’d like to thank Blake Embrey for his work on Typings and helping us bring this solution forward.

Non-nullable Types

JavaScript has two values for “emptiness” – null and undefined. If null is the billion dollar mistake, undefined only doubles our losses. These two values are a huge source of errors in the JavaScript world because users often forget to account for null or undefined being returned from APIs.

TypeScript originally started out with the idea that types were always nullable. This meant that something with the type number could also have a value of null or undefined. Unfortunately, this didn’t provide any protection from null/undefined issues.

In TypeScript 2.0, null and undefined have their own types which allows developers to explicitly express when null/undefined values are acceptable. Now, when something can be either a number or null, you can describe it with the union type number | null (which reads as “number or null”).

Because this is a breaking change, we’ve added a --strictNullChecks mode to opt into this behavior. However, going forward it will be a general best practice to turn this flag on as it will help catch a wide range of null/undefined errors. To read more about non-nullable types, check out the PR on GitHub.

Control Flow Analyzed Types

TypeScript has had control flow analysis since 1.8, but starting in 2.0 we’ve expanded it to analyze even more control flows to produce the most specific type possible at any given point. When combined with non-nullable types, TypeScript can now do much more complex checks, like definite assignment analysis.

function f(condition: boolean) {
    let result: number;
    if (condition) {
        result = computeImportantStuff();
    }
    // Whoops! 'result' might never have been initialized!
    return result;
}

We’d like to thank Ivo Gabe de Wolff for contributing the initial work and providing substantial feedback on this feature. You can read more about control flow analysis on the PR itself.

The readonly Modifier

Immutable programming in TypeScript just got easier. Starting with TypeScript 2.0, you can declare properties as read-only.

class Person {
    readonly name: string;

    constructor(name: string) {
        if (name.length < 1) {
            throw new Error("Empty name!");
        }
        this.name = name;
    }
}

// Error! 'name' is read-only.
new Person("Daniel").name = "Dan";

Any get-accessor without a set-accessor is also now considered read-only.

What’s Next

TypeScript is JavaScript that scales. Starting from the same syntax and semantics that millions of JavaScript developers know today, TypeScript allows developers to use existing JavaScript code, incorporate popular JavaScript libraries, and call TypeScript code from JavaScript. TypeScript’s optional static types enable JavaScript developers to use highly-productive development tools and practices like static checking and code refactoring when developing JavaScript applications.

Going forward, we will continue to work with our partners and the community to evolve TypeScript’s type system to allow users to further express JavaScript in a statically typed fashion. In addition, we will focus on enhancing the TypeScript language service and set of tooling features so that developer tools become smarter and further boost developer productivity.

To each and every one of you who has been a part of the journey to 2.0: thank you! Your feedback and enthusiasm have brought the TypeScript language and ecosystem to where it is today. We hope you’re as excited for 2.0 and beyond as we are.

If you still haven’t used TypeScript, give it a try! We’d love to hear from you.

Happy hacking!

The TypeScript Team

Bing App joins the AMP open-source effort

From the start, the Bing App (voted #5 of top 100 iPhone apps of 2016 by PC Magazine) has been designed to help you “find” and “do” faster, wherever the information is needed. With best-in-class news search and browse experiences, the Bing app helps you search for the latest news topics, accessible on the Bing homepage, on the search results page, and on the news vertical page.
 
Building on this great experience, we’re excited to share that the Bing App (for both iOS and Android) now supports AMP, an open-source initiative that makes searching, browsing and reading news even faster.
 
  
 
AMP (short for Accelerated Mobile Pages) pages are just like any other HTML pages, but take a mobile-first approach that is defined and governed by the open-source AMP specification. AMP files take advantage of various technical and architectural approaches that prioritize speed to provide a faster experience for users. The AMP architecture allows for the development of web pages that are rich in content and yet very optimized and tailored for the mobile space.
 
As the results are being loaded, the Bing App detects whether the news articles have corresponding AMP pages associated with them. This all happens in the background. If an article has an AMP page, we give preference to downloading it from the servers closest to the end user, preferably via an AMP cache, for a faster experience. If no AMP page is detected, the non-AMP news article is presented to the user; for non-AMP pages we apply a number of other performance techniques to download and render those pages optimally too. AMP does not impact our ranking algorithms in any way. Users will be able to tell which articles have corresponding AMP pages whenever they see the AMP icon in our iOS app:
 

 
“We started experimenting with AMP in our Bing App last May and have noticed that AMP pages load, on average, approximately 80% faster than non-AMP pages” says Marcelo De Barros, Group Engineering Manager in charge of the AMP integration at Bing. “Lighter pages also translate into less data being transferred over the network, requiring less network bandwidth to be downloaded.” Our data has shown a significant increase in AMP adoption by several news publishers, and we’re happy to be collaborating in this open-source effort. 
 
-The Bing Team
 
 

Background Audio and Cross Platform Development with Xamarin (App Dev on Xbox series)


We are back with yet another app, Backdrop, an open source sample media app developed to showcase a cross-device music experience. In this blog post, we will dive deep into the new background model in the Universal Windows Platform, and specifically focus on how to enable your application to continue playing audio in the background across Windows 10 devices, including Xbox One, delivering a stellar customer experience. In addition, we will show you how we built Backdrop to be a cross platform application by using Xamarin to share code between Windows 10 and Apple TV. The source code for the application is available on GitHub right now so make sure to check it out.

If you missed the previous blog post from last week on Unity interop and app extensibility, make sure to check it out. We covered how to get started building great 2D and 3D experiences with Unity and XAML as well as how to make your apps work great with other apps on the platform. To read the other blog posts and watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Backdrop


Figure 1. Xbox One view

Backdrop is a sample music application that lets a group of friends collaborate on the music selection process and share their experience and music choices. A device is first chosen to be the host device where the music will be played. Then anyone is able to add and vote on tracks on their own device. Each friend can see the current playlist in real time and the progress of the current track on their own device. Each can vote on different tracks to set the order in which they will be played as well as suggest other tracks to be played.

The application has been written for the Universal Windows Platform (UWP) and tvOS by using Xamarin to share the majority of business logic, view models, playlist management, cloud and device communication. The UI, in turn, is written using the native controls and affordances of each platform. Using the shared project, additional platforms such as Android and iOS can easily be added.


Figure 2. architectural diagram

Background audio

With the Windows Anniversary Update, a new single process model for playing background audio was introduced to simplify UWP development. Using MediaPlayer or MediaPlayerElement with the new model should make implementing background audio much easier than it was before.

Previously, your app was required to manage a background process in addition to your foreground app and then manually communicate state changes between the two processes. Under the new model, you simply add the background audio capability to your app manifest and your app will automatically continue playing audio when it moves to the background. You will use the two new application life cycle events, EnteredBackground and LeavingBackground, to find out when your app is entering and leaving the background.

Background audio in Backdrop


Figure 3. Playing audio in background and System Media Transport Controls on Xbox One

Here is the step-by-step process for implementing background audio:

  • Add the Background Media Playback capability to your app manifest.
  • If your app disables the automatic integration of MediaPlayer with the System Media Transport Controls (SMTC), such as by setting the IsEnabled property to false, then you must implement manual integration with the SMTC in order to enable background media playback. You must also manually integrate with SMTC if you are using an API other than MediaPlayer, such as AudioGraph, to play audio if you want to have the audio continue to play when your app moves to the background.
  • While your app is in the background, you must stay under the memory usage limits set by the system for background apps.

Modifying the app manifest

To enable background audio, you must add the background media playback capability to the app manifest file, Package.appxmanifest. You can modify the app manifest file either by using the designer or manually.

To do this through the Microsoft Visual Studio designer, in Solution Explorer, open the designer for the application manifest by double-clicking the package.appxmanifest item.

  1. Select the Capabilities tab.
  2. Select the Background Media Playback check box.

To set the capability by manually editing the app manifest xml, first make sure that the uap3 namespace prefix is defined in the Package element. If not, add it as shown below. [see code on GitHub]

Then add the backgroundMediaPlayback capability to the Capabilities element:

System Media Transport Controls


Figure 4. System Media Transport Controls on mobile

The Backdrop client uses a MediaPlayer to play audio in the PlaybackService class [see code on GitHub].


        public PlaybackService()
        {
            // Create the player instance
            _player = new MediaPlayer();
            _player.PlaybackSession.PositionChanged += PlaybackSession_PositionChanged;
            _player.PlaybackSession.PlaybackStateChanged += PlaybackSession_PlaybackStateChanged;

            _playlist = new MediaPlaybackList();
            _playlist.CurrentItemChanged += _playlist_CurrentItemChanged;
            _player.Source = _playlist;
            _player.Play();

            Dispatcher = new DispatcherWrapper(CoreApplication.MainView.CoreWindow.Dispatcher);

When using a MediaPlayer, it is necessary to implement MediaTransportControls. The media transport controls let users interact with their media by providing a default playback experience made up of various buttons including play, pause, closed captions and others.

When adding audio tracks to the player, you also need to create a MediaPlaybackItem from the audio source and set its display properties in order to play it. Here is the code from Backdrop [see code on GitHub]:


    var source = MediaSource.CreateFromUri(new Uri(song.StreamUrl));

    var playbackItem = new MediaPlaybackItem(source);
    var displayProperties = playbackItem.GetDisplayProperties();
    displayProperties.Type = Windows.Media.MediaPlaybackType.Music;
    displayProperties.MusicProperties.Title = song.Title;
    displayProperties.MusicProperties.Artist = song.Artist;
    displayProperties.Thumbnail = RandomAccessStreamReference.CreateFromUri(new Uri(song.AlbumArt));

    playbackItem.ApplyDisplayProperties(displayProperties);

You can also override the default SMTC controls and even use manual control if you are using your own media playback. MediaTransportControls is ultimately just a composite control made up of several other XAML controls, which are all contained within a root Grid element. Because of this, you can re-template the control to change its appearance and functionality. For more info, see Create custom transport controls and the Media transport controls sample.

Managing resources in background

When your app moves from the foreground to the background, the EnteredBackground event is raised. And when your app returns to the foreground, the LeavingBackground event is raised. Because these are app life cycle events, you should register handlers for these events when your app is created. In the default project template, this means adding it to the App class constructor in App.xaml.cs.

Because running in the background will reduce the memory resources your app is allowed to retain by the system, you should also register for the AppMemoryUsageIncreased and AppMemoryUsageLimitChanging events, which will be used to check your app’s current memory usage and the current limit. The handlers for these events are shown in the following examples. For more information about the application lifecycle for UWP apps, see App life cycle. You can see the code from Backdrop here [see code on GitHub]:


private void App_LeavingBackground(object sender, LeavingBackgroundEventArgs e)
        {
            _isInBackgroundMode = false;

            // Restore view content if it was previously unloaded.
            if (Window.Current.Content == null)
            {
                CreateRootFrame(ApplicationExecutionState.Running, string.Empty);
            }
        }

        private void App_EnteredBackground(object sender, EnteredBackgroundEventArgs e)
        {
            _isInBackgroundMode = true;
            ReduceMemoryUsage(0);
        }

When your app transitions to the background, the memory limit for the app is reduced by the system in order to ensure that the current foreground app has sufficient resources to provide a responsive user experience. The AppMemoryUsageLimitChanging event handler lets your app know that its allotted memory has been reduced and provides the new limit in the event args passed into the handler. [see code on GitHub]


        private void MemoryManager_AppMemoryUsageIncreased(object sender, object e)
        {
            var level = MemoryManager.AppMemoryUsageLevel;

            if (level == AppMemoryUsageLevel.OverLimit || level == AppMemoryUsageLevel.High)
            {
                ReduceMemoryUsage(MemoryManager.AppMemoryUsageLimit);
            }
        }

        private void MemoryManager_AppMemoryUsageLimitChanging(object sender, AppMemoryUsageLimitChangingEventArgs e)
        {
            if (MemoryManager.AppMemoryUsage >= e.NewLimit)
            {
                ReduceMemoryUsage(e.NewLimit);
            }
        }
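
For reference, registering these handlers in the App constructor might look roughly like this (a sketch rather than the exact Backdrop code; the MemoryManager events live in Windows.System):


        public App()
        {
            this.InitializeComponent();

            // App life cycle events for moving between foreground and background
            this.EnteredBackground += App_EnteredBackground;
            this.LeavingBackground += App_LeavingBackground;

            // Memory events used to stay under the background memory limits
            MemoryManager.AppMemoryUsageIncreased += MemoryManager_AppMemoryUsageIncreased;
            MemoryManager.AppMemoryUsageLimitChanging += MemoryManager_AppMemoryUsageLimitChanging;
        }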

Xamarin

Xamarin is a free Microsoft tool that allows developers to write fully native Android and iOS apps using C# and programming models already familiar to .NET developers. With Xamarin, you can build native Android and iOS apps without needing to know Java or Objective-C and best of all it is included with all editions of Visual Studio 2015. For Android development, Visual Studio has one of the most performant Android emulators built in so you can test your Xamarin.Android apps from your Windows development environment. For iOS development, Xamarin provides a remote iOS simulator for Visual Studio to test your applications without having to switch to a Mac. It even supports touch and multi-touch; something that is missing from testing iOS apps on the iOS simulator on a Mac. You will still need to connect to a physical Mac in order to build the application, but the entire development process is completely done in Visual Studio, including editing storyboards.

Apps built using Xamarin compile to native code on their respective platforms. Xamarin wraps low level native APIs so you are able to access sensors and other platform specific features. Xamarin.iOS projects compile to .app files that can be deployed to Apple mobile devices. Xamarin.Android projects compile to .apk files, the application package used for deploying to Android devices.


Because Xamarin lets you use Visual Studio projects to organize your Xamarin.Android and Xamarin.iOS code, you can include different platform versions of your app side-by-side under a common solution. When you do this, it is important to clearly separate your UI code from your business layer code. It is also important to separate out any code you intend to share between platforms from code that is platform-specific.

Sharing code with Xamarin

Xamarin provides two ways of sharing business layer code between your cross-platform projects. You can use either a Shared Project or a Portable Class Library (PCL). Each has its advantages and disadvantages.

A Shared Project is generally the easier way to create common code. The .cs files in a Shared Project simply get included as part of any project that references it. On build, the Shared Project is not compiled into a DLL. Instead, the Shared Project’s code files are compiled into the output for the referencing project. Shared Projects become tricky when they include platform specific code. You will need to use conditional compilation directives like #if and #endif in these situations to test for the appropriate target platform, as sketched below.
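
For example, a file in a Shared Project might fork platform-specific behavior like this (a sketch; __ANDROID__, __IOS__ and WINDOWS_UWP are the compilation symbols defined by the respective project types):


    public static class PlatformInfo
    {
        public static string Name
        {
            get
            {
    #if __ANDROID__
                return "Android";
    #elif __IOS__
                return "iOS";
    #elif WINDOWS_UWP
                return "UWP";
    #else
                return "Unknown";
    #endif
            }
        }
    }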

Unlike Shared Projects, Portable Class Library projects are compiled into DLLs. You decide which platforms your PCL can run on by setting its Profile identifier. For instance, you might have your PCL target all of the following: .NET Framework 4.5, ASP.NET Core 1.0, Xamarin.Android, Xamarin.iOS and Windows 8. Because a PCL is compiled once for all of its target platforms, it cannot use conditional compilation directives to fork platform-specific code. Instead, you may want to use a Dependency Injection pattern with multiple PCLs for this.

The Backdrop reference app uses a PCL for sharing common code between a UWP app, a tvOS app for the Apple TV, and music_appService—a RESTful web service consumed by both.


Building the UI

Each platform needs to have its own UI project. You can choose, however, to share common visual elements using the Xamarin.Forms cross-platform UI library, or create custom UIs for each platform. Both options will create fully-native UIs that will be completely familiar to end-users, but Xamarin.Forms is more restrictive. Developers seeking the greatest amount of code-sharing and efficiency will gravitate toward Xamarin.Forms, while those who require a more granular level of control should choose Xamarin.Android and Xamarin.iOS for a custom UI. The Backdrop reference app uses custom UIs.

You are probably already familiar with how to create a native UWP UI, either in XAML or with the Visual Studio designer. You will be glad to learn that Visual Studio comes with a Xamarin Designer for iOS that provides a similarly easy way for you to work with native iOS controls. The iOS Designer maintains full compatibility with the Storyboard and .xib formats, so that files can be edited in either Visual Studio or Xcode’s Interface Builder.


Wrap up

Adding other platforms is straightforward and can be an exercise for the reader. We implemented tvOS and UWP apps, but the same shared code can be used in iOS and Android projects just as easily. The cross-platform architecture demonstrated in the Backdrop source code provides a great way to achieve the greatest reach with a single, maintainable cross-platform code base. It also shows us how the cross-platform story for UWP and the cross-platform story for Xamarin actually dovetail to create apps with much greater reach than we have ever seen before. Consider that Backdrop’s architectural design ultimately allows friends on an Android phone, an iPhone, a Windows 10 phone, Apple TV and Xbox One to all work together on a common music playlist. How’s that for a build-once solution?

Until next time…

…check out the app source on our official GitHub repository, read through some of the resources provided, watch the event if you missed it, and let us know what you think through the comments below or on twitter.

Next week we will release another app experience and go in-depth on how to build hosted web experiences that take advantage of native platform functionality and different input modalities across UWP and other native platforms.

Until then, happy coding!

Resources

In the meantime, below are some additional resources to help you understand background audio as well as the architectural underpinnings of modern cross-platform apps built with Xamarin.

Download Visual Studio to get started!

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Introducing .NET Standard


In my last post, I talked about how we want to make porting to .NET Core easier. In this post, I’ll focus on how we’re making this plan a reality with .NET Standard. We’ll cover which APIs we plan to include, how cross-framework compatibility will work, and what all of this means for .NET Core.

If you’re interested in details, this post is for you. But don’t worry if you don’t have time or you’re not interested in details: you can just read the TL;DR section.

For the impatient: TL;DR

.NET Standard solves the code sharing problem for .NET developers across all platforms by bringing all the APIs that you expect and love across the environments that you need: desktop applications, mobile apps & games, and cloud services:

  • .NET Standard is a set of APIs that all .NET platforms have to implement. This unifies the .NET platforms and prevents future fragmentation.
  • .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the existing APIs that have been requested.
  • .NET Standard 2.0 includes a compatibility shim for .NET Framework binaries, significantly increasing the set of libraries that you can reference from your .NET Standard libraries.
  • .NET Standard will replace Portable Class Libraries (PCLs) as the tooling story for building multi-platform .NET libraries.
  • You can see the .NET Standard API definition in the dotnet/standard repo on GitHub.

Why do we need a standard?

As explained in detail in the post Introducing .NET Core, the .NET platform was forked quite a bit over the years. On the one hand, this is actually a really good thing. It allowed tailoring .NET to fit needs that a single platform couldn’t have addressed. For example, the .NET Compact Framework was created to fit into the (fairly) restrictive footprint of phones in the early 2000s. The same is true today: Unity (a fork of Mono) runs on more than 20 platforms. Being able to fork and customize is an important capability for any technology that requires reach.

But on the other hand, this forking poses a massive problem for developers writing code for multiple .NET platforms because there isn’t a unified class library to target:

dotnet-today

There are currently three major flavors of .NET, which means you have to master three different base class libraries in order to write code that works across all of them. Since the industry is much more diverse now than when .NET was originally created, it’s safe to assume that we’re not done with creating new .NET platforms. Either Microsoft or someone else will build new flavors of .NET in order to support new operating systems or to tailor it for specific device capabilities.

This is where the .NET Standard comes in:

dotnet-tomorrow

For developers, this means they only have to master one base class library. Libraries targeting .NET Standard will be able to run on all .NET platforms. And platform providers don’t have to guess which APIs they need to offer in order to consume the libraries available on NuGet.

Applications. In the context of applications you don’t use .NET Standard directly. However, you still benefit indirectly. First of all, .NET Standard makes sure that all .NET platforms share the same API shape for the base class library. Once you learn how to use it in your desktop application you know how to use it in your mobile application or your cloud service. Secondly, with .NET Standard most class libraries will become available everywhere, which means the consistency at the base layer will also apply to the larger .NET library ecosystem.

Portable Class Libraries. Let’s contrast this with how Portable Class Libraries (PCL) work today. With PCLs, you select the platforms you want to run on and the tooling presents you with the resulting API set you can use. So while the tooling helps you to produce binaries that work on multiple platforms, it still forces you to think about different base class libraries. With .NET Standard you have a single base class library. Everything in it will be supported across all .NET platforms — current ones as well as future ones. Another key aspect is that the API availability in .NET Standard is very predictable: higher version equals more APIs. With PCLs, that’s not necessarily the case: the set of available APIs is the result of the intersection between the selected platforms, which doesn’t always produce an API surface you can easily predict.

Consistency in APIs. If you compare .NET Framework, .NET Core, and Xamarin/Mono, you’ll notice that .NET Core offers the smallest API surface (excluding OS-specific APIs). The first inconsistency is having drastic differences in the availability of foundational APIs (such as networking- and crypto APIs). The second problem .NET Core introduced was having differences in the API shape of core pieces, especially in reflection. Both inconsistencies are the primary reason why porting code to .NET Core is much harder than it should be. By creating the .NET Standard we’re codifying the requirement of having consistent APIs across all .NET platforms, and this includes availability as well as the shape of the APIs.
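To illustrate the reflection shape difference, consider the following sketch. On .NET Framework (and on any platform that implements .NET Standard 2.0) the familiar members hang directly off System.Type, while on .NET Core 1.x / .NET Standard 1.x you had to go through TypeInfo instead.


using System;
using System.Reflection;

class ReflectionShapeSample
{
    static void Main()
    {
        // Works on .NET Framework and on platforms implementing .NET Standard 2.0:
        MethodInfo[] methods = typeof(string).GetMethods();
        Console.WriteLine(methods.Length);

        // On .NET Core 1.x / .NET Standard 1.x the same information required TypeInfo,
        // which is exactly the kind of API-shape difference the standard removes:
        TypeInfo info = typeof(string).GetTypeInfo();
        foreach (MethodInfo method in info.DeclaredMethods)
        {
            Console.WriteLine(method.Name);
        }
    }
}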

Versioning and Tooling. As I mentioned in Introducing .NET Core, our goal with .NET Core was to lay the foundation for a portable .NET platform that can unify APIs in shape and implementation. We intended it to be the next version of portable class libraries. Unfortunately, it didn’t result in a great tooling experience. Since our goal was to represent any .NET platform, we had to break it up into smaller NuGet packages. This works reasonably well if all these components can be deployed with the application because you can update them independently. However, when you target an abstract specification, such as PCLs or the .NET Standard, this story doesn’t work so well because there is a very specific combination of versions that will allow you to run on the right set of platforms. In order to avoid that issue, we’ve defined .NET Standard as a single NuGet package. Since it only represents the set of required APIs, there is no need to break it up any further because all .NET platforms have to support it in its entirety anyway. The only important dimension is its version, which acts like an API level: the higher the version, the more APIs you have, but the lower the version, the more .NET platforms have already implemented it.

To summarize, we need .NET Standard for two reasons:

  1. Driving force for consistency. We want to have an agreed upon set of required APIs that all .NET platforms have to implement in order to gain access to the .NET library ecosystem.
  2. Foundation for great cross-platform tooling. We want a simplified tooling experience that allows you to target the commonality of all .NET platforms by choosing a single version number.

What’s new in .NET Standard 2.0?

When we shipped .NET Core 1.0, we also introduced .NET Standard. There are multiple versions of the .NET Standard in order to represent the API availability across all current platforms. The following table shows which version of an existing platform is compatible with a given version of .NET Standard:

| .NET Platform | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 2.0 |
| .NET Core | → | → | → | → | → | → | 1.0 | vNext |
| .NET Framework | → | 4.5 | 4.5.1 | 4.6 | 4.6.1 | 4.6.2 | vNext | 4.6.1 |
| Xamarin.iOS | → | → | → | → | → | → | → | vNext |
| Xamarin.Android | → | → | → | → | → | → | → | vNext |
| Universal Windows Platform | → | → | → | → | 10.0 | → | → | vNext |
| Windows | → | 8.0 | 8.1 | | | | | |
| Windows Phone | → | → | 8.1 | | | | | |
| Windows Phone Silverlight | 8.0 | | | | | | | |

The arrows indicate that the platform supports a higher version of .NET Standard. For instance, .NET Core 1.0 supports the .NET Standard version 1.6, which is why there are arrows pointing to the right for the lower versions 1.0 – 1.5.

You can use this table to understand what the highest version of .NET Standard is that you can target, based on which .NET platforms you intend to run on. For instance, if you want to run on .NET Framework 4.5 and .NET Core 1.0, you can at most target .NET Standard 1.1.

You can also see which platforms will support .NET Standard 2.0:

  • We’ll ship updated versions of .NET Core, Xamarin, and UWP that will add all the necessary APIs for supporting .NET Standard 2.0.
  • .NET Framework 4.6.1 already implements all the APIs that are part of .NET Standard 2.0. Note that this version appears twice; I’ll cover later why that is and how it works.

.NET Standard is also compatible with Portable Class Libraries. The mapping from PCL profiles to .NET Standard versions is listed in our documentation.

From a library targeting .NET Standard you’ll be able to reference two kinds of other libraries:

  • .NET Standard, if their version is lower or equal to the version you’re targeting.
  • Portable Class Libraries, if their profile can be mapped to a .NET Standard version and that version is lower or equal to the version you’re targeting.

Graphically, this looks as follows:

netstandard-refs-today

Unfortunately, the adoption of PCLs and .NET Standard on NuGet isn’t as high as it would need to be in order to provide a friction-free experience. This is how many times a given target occurs in packages on NuGet.org:

| Target | Occurrences |
| .NET Framework | 46,894 |
| .NET Standard | 1,886 |
| Portable | 4,501 |

As you can see, it’s quite clear that the vast majority of class libraries on NuGet are targeting .NET Framework. However, we know that a large number of these libraries are only using APIs we’ll expose in .NET Standard 2.0.

In .NET Standard 2.0, we’ll make it possible for libraries that target .NET Standard to also reference existing .NET Framework binaries through a compatibility shim:

netstandard-refs-tomorrow

Of course, this will only work for cases where the .NET Framework library uses APIs that are available for .NET Standard. That’s why this isn’t the preferred way of building libraries you intend to use across different .NET platforms. However, this compatibility shim provides a bridge that enables you to convert your libraries to .NET Standard without having to give up referencing existing libraries that haven’t been converted yet.

If you want to learn more about how the compatibility shim works, take a look at the specification for .NET Standard 2.0.

.NET Standard 2.0 breaking change: adding .NET Framework 4.6.1 compatibility

A standard is only as useful as there are platforms implementing it. At the same time, we want to make the .NET Standard meaningful and useful in and of itself, because that’s the API surface that is available to libraries targeting the standard:

  • .NET Framework. .NET Framework 4.6.1 has the highest adoption, which makes it the most attractive version of .NET Framework to target. Hence, we want to make sure that it can implement .NET Standard 2.0.
  • .NET Core. As mentioned above, .NET Core has a much smaller API set than .NET Framework or Xamarin. Supporting .NET Standard 2.0 means that we need to extend the surface area significantly. Since .NET Core doesn’t ship with the OS but with the app, supporting .NET Standard 2.0 only requires updates to the SDK and our NuGet packages.
  • Xamarin. Xamarin already supports most of the APIs that are part of .NET Standard. Updating works similarly to .NET Core — we hope we can update Xamarin to include all APIs that are currently missing. In fact, the majority of them were already added to the stable Cycle 8 release/Mono 4.6.0.

The table listed earlier shows which versions of .NET Framework supports which version of .NET Standard:

| | 1.4 | 1.5 | 1.6 | 2.0 |
| .NET Framework | 4.6.1 | 4.6.2 | vNext | 4.6.1 |

Following normal versioning rules, one would expect that .NET Standard 2.0 would only be supported by a newer version of .NET Framework, given that the latest version of .NET Framework (4.6.2) only supports .NET Standard 1.5. This would mean that libraries compiled against .NET Standard 2.0 would not run on the vast majority of .NET Framework installations.

In order to allow .NET Framework 4.6.1 to support .NET Standard 2.0, we had to remove all the APIs from .NET Standard that were introduced in .NET Standard 1.5 and 1.6.

You may wonder what the impact of that decision is. We ran an analysis of all packages on NuGet.org that target .NET Standard 1.5 or later and use any of these APIs. At the time of this writing we only found six non-Microsoft owned packages that do. We’ll reach out to those package owners and work with them to mitigate the issue. From looking at their usages, it’s clear that their calls can be replaced with APIs that are coming with .NET Standard 2.0.

In order for these package owners to support .NET Standard 1.5, 1.6 and 2.0, they will need to cross-compile to target these versions specifically. Alternatively, they can choose to target .NET Standard 2.0 and higher, given the broad set of platforms that support it.

What’s in .NET Standard?

In order to decide which APIs will be part of .NET Standard we used the following process:

  • Input. We start with all the APIs that are available in both .NET Framework and in Xamarin.
  • Assessment. We classify all these APIs into one of two buckets:
    1. Required. APIs that we want all platforms to provide and we believe can be implemented cross-platform, we label as required.
    2. Optional. APIs that are platform-specific or are part of legacy technologies we label as optional.

Optional APIs aren’t part of .NET Standard but are available as separate NuGet packages. We try to build these as libraries targeting .NET Standard so that their implementation can be consumed from any platform, but that might not always be feasible for platform-specific APIs (e.g. the Windows registry).

In order to make some APIs optional we may have to remove other APIs that are part of the required API set. For example, we decided that AppDomain is in .NET Standard while Code Access Security (CAS) is a legacy component. This requires us to remove all members from AppDomain that use types that are part of CAS, such as overloads on CreateDomain that accept Evidence.
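As a rough illustration (not the exact API list), the members that stay are the ones that don’t drag CAS types along, while the Evidence-based overloads are the kind that go:


using System;
using System.Reflection;

class AppDomainSample
{
    static void Main()
    {
        // These AppDomain members have nothing to do with CAS and can remain in .NET Standard:
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            Console.WriteLine(assembly.FullName);
        }

        AppDomain.CurrentDomain.UnhandledException +=
            (sender, e) => Console.WriteLine("Unhandled: " + e.ExceptionObject);

        // Overloads such as AppDomain.CreateDomain(name, securityInfo, ...) take an
        // Evidence parameter and are therefore removed along with Code Access Security.
        // (Shown only as a comment here, since the point is that they are not available.)
    }
}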

The .NET Standard API set, as well as our proposal for optional APIs will be reviewed by the .NET Standard’s review body.

Here is the high-level summary of the API surface of .NET Standard 2.0:

netstandard-apis

If you want to look at the specific API set of .NET Standard 2.0, you can take a look at the .NET Standard GitHub repository. Please note that .NET Standard 2.0 is a work in progress, which means some APIs might be added, while some might be removed.

Can I still use platform-specific APIs?

One of the biggest challenges in creating an experience for multi-platform class libraries is to avoid only having the lowest-common denominator while also making sure you don’t accidentally create libraries that are much less portable than you intend to.

In PCLs we’ve solved the problem by having multiple profiles, each representing the intersection of a set of platforms. The benefit is that this allows you to max out the API surface between a set of targets. The .NET Standard represents the set of APIs that all .NET platforms have to implement.

This brings up the question of how we model APIs that cannot be implemented on all platforms:

  • Runtime specific APIs. For example, the ability to generate and run code on the fly using reflection emit. This cannot work on .NET platforms that do not have a JIT compiler, such as .NET Native on UWP or Xamarin’s iOS tool chain.
  • Operating system specific APIs. In .NET we’ve exposed many APIs from Win32 in order to make them easier to consume. A good example is the Windows registry. The implementation depends on the underlying Win32 APIs that don’t have equivalents on other operating systems.

We have a couple of options for these APIs:

  1. Make the API unavailable. You cannot use APIs that do not work across all .NET platforms.
  2. Make the API available but throw PlatformNotSupportedException. This would mean that we expose all APIs regardless of whether they are supported everywhere or not. Platforms that do not support them provide the APIs but throw PlatformNotSupportedException.
  3. Emulate the API. Mono implements the registry as an API over .ini files. While that doesn’t work for apps that use the registry to read information about the OS, it works quite well for the cases where the application simply uses the registry to store its own state and user settings.

We believe the best option is a combination. As mentioned above we want the .NET Standard to represent the set of APIs that all .NET platforms are required to implement. We want to make this set sensible to implement while ensuring popular APIs are present so that writing cross-platform libraries is easy and intuitive.

Our general strategy for dealing with technologies that are only available on some .NET platforms is to make them NuGet packages that sit above the .NET Standard. So if you create a .NET Standard-based library, it’ll not reference these APIs by default. You’ll have to add a NuGet package that brings them in.

This strategy works well for APIs that are self-contained and thus can be moved into a separate package. For cases where individual members on types cannot be implemented everywhere, we’ll use the second and third approach: platforms have to have these members but they can decide to throw or emulate them.

Let’s look at a few examples and how we plan on modelling them:

  • Registry. The Windows registry is a self-contained component that will be provided as a separate NuGet package (e.g. Microsoft.Win32.Registry). You’ll be able to consume it from .NET Core, but it will only work on Windows. Calling registry APIs from any other OS will result in PlatformNotSupportedException. You’re expected to guard your calls appropriately or make sure your code will only ever run on Windows. We’re considering improving our tooling to help you detect these cases. A guarded call following this pattern is sketched right after this list.
  • AppDomain. The AppDomain type has many APIs that aren’t tied to creating app domains, such as getting the list of loaded assemblies or registering an unhandled exception handler. These APIs are heavily used throughout the .NET library ecosystem. For this case, we decided it’s much better to add this type to .NET Standard and let the few APIs that deal with app domain creation throw exceptions on platforms that don’t support that, such as .NET Core.
  • Reflection Emit. Reflection emit is reasonably self-contained, so we plan on following the same model as Registry, above. There are other APIs that logically depend on being able to emit code, such as the expression tree Compile method or the ability to compile regexes. In some cases we’ll emulate their behavior (e.g. interpreting expression trees instead of compiling them) while in other cases we’ll throw (e.g. when compiling regexes).
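Here is a minimal sketch of the guarded-call pattern mentioned for the Registry package above. The subkey path and fallback behavior are made up for the example, and the code assumes a reference to a registry package such as Microsoft.Win32.Registry.


using System;
using System.Runtime.InteropServices;
using Microsoft.Win32;

static class SettingsReader
{
    public static string ReadSetting(string name)
    {
        // Guard the call so the registry is only touched where the Win32 API exists.
        if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        {
            return null; // fall back to a file, environment variable, etc.
        }

        try
        {
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Software\MyApp"))
            {
                return key?.GetValue(name) as string;
            }
        }
        catch (PlatformNotSupportedException)
        {
            // Defense in depth: the package throws this on platforms without a registry.
            return null;
        }
    }
}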

In general, you can always work around APIs that are unavailable in .NET Standard by targeting specific .NET platforms, like you do today. We’re thinking about ways to improve our tooling so that the transition between being platform-specific and being platform-agnostic is more fluid, letting you always choose the best option for your situation without being cornered by earlier design choices.

To summarize:

  • We’ll expose concepts that might not be available on all .NET platforms.
  • We generally make them individual packages that you have to explicitly reference.
  • In rare cases, individual members might throw exceptions.

The goal is to make .NET Standard-based libraries as powerful and as expressive as possible while making sure you’re aware of cases where you take dependencies on technologies that might not work everywhere.

What does this mean for .NET Core?

We designed .NET Core so that its reference assemblies are the .NET portability story. This made it harder to add new APIs because adding them in .NET Core preempts the decision on whether these APIs are made available everywhere. Worse, due to versioning rules, it also means we have to decide which combination of APIs are made available in which order.

Out-of-band delivery. We’ve tried to work around this by making those APIs available “out of band”, which means shipping them as new components that sit on top of the existing APIs. For technologies where this is easily possible, that’s the preferred way because it also means any .NET developer can play with the APIs and give us feedback. We’ve done that for immutable collections with great success.

Implications for runtime features. However, for features that require runtime work, this is much harder because we can’t just give you a NuGet package that will work. We also have to give you a way to get an updated runtime. That’s harder on platforms that have a system-wide runtime (such as .NET Framework) but is also harder in general because we have multiple runtimes for different purposes (e.g. JIT vs AOT). It’s not practical to innovate across all these spectrums at once. The nice thing about .NET Core is that this platform is designed to be fully self-contained. So for the future, we’re more likely to leverage this capability for experimentation and previewing.

Splitting .NET Standard from .NET Core. In order to be able to evolve .NET Core independently from other .NET platforms we’ve divorced the portability mechanism (which I referred to earlier) from .NET Core. .NET Standard is defined as an independent reference assembly that is satisfied by all .NET platforms. Each of the .NET platforms uses a different set of reference assemblies and thus can freely add new APIs in whatever cadence they choose. We can then, after the fact, make decisions around which of these APIs are added to .NET Standard and thus should become universally available.

Separating portability from .NET Core helps us to speed up development of .NET Core and makes experimentation of newer features much simpler. Instead of artificially trying to design features to sit on top of existing platforms, we can simply modify the layer that needs to be modified in order to support the feature. We can also add the APIs on the types they logically belong to instead of having to worry about whether that type has already shipped in other platforms.

Adding new APIs in .NET Core isn’t a statement about whether they will go into the .NET Standard, but our goal for .NET Standard is to create and maintain consistency between the .NET platforms. So new members on types that are already part of the standard will be automatically considered when the standard is updated.

As a library author, what should I do now?

As a library author, you should consider switching to .NET Standard because it will replace Portable Class Libraries for targeting multiple .NET platforms.

In the case of .NET Standard 1.x, the set of available APIs is very similar to PCLs. But .NET Standard 2.x will have a significantly bigger API set and will also allow you to depend on libraries targeting .NET Framework.

The key differences between PCLs and .NET Standard are:

  • Platform tie-in. One challenge with PCLs is that while you target multiple platforms, it’s still a specific set. This is especially true for NuGet packages as you have to list the platforms in the lib folder name, e.g. portable-net45+win8. This causes issues when new platforms show up that support the same APIs. .NET Standard doesn’t have this problem because you target a version of the standard, which doesn’t include any platform information, e.g. netstandard1.4.
  • Platform availability. PCLs currently support a wider range of platforms and not all profiles have a corresponding .NET Standard version. Take a look at the documentation for more details.
  • Library availability. PCLs are designed to enforce that you cannot take dependencies on APIs and libraries that the selected platforms will not be able to run. Thus, PCL projects will only allow you to reference other PCLs that target a superset of the platforms your PCL is targeting. .NET Standard is similar, but it additionally allows referencing .NET Framework binaries, which are the de facto exchange currency in the library ecosystem. Thus, with .NET Standard 2.0 you’ll have access to a much larger set of libraries.

In order to make an informed decision, I suggest you:

  1. Use API Port to see how compatible your code base is with the various versions of .NET Standard.
  2. Look at the .NET Standard documentation to ensure you can reach the platforms that are important to you.

For example, if you want to know whether you should wait for .NET Standard 2.0 you can check against both, .NET Standard 1.6 and .NET Standard 2.0 by downloading the API Port command line tool and run it against your libraries like so:

> apiport analyze -f C:\src\mylibs\ -t ".NET Standard,Version=1.6"^
                                    -t ".NET Standard,Version=2.0"

Note: .NET Standard 2.0 is still a work in progress and therefore API availability is subject to change. I also suggest that you watch out for the APIs that are available in .NET Standard 1.6 but are removed from .NET Standard 2.0.

Summary

We’ve created .NET Standard so that sharing and re-using code between multiple .NET platforms becomes much easier.

With .NET Standard 2.0, we’re focusing on compatibility. In order to support .NET Standard 2.0 in .NET Core and UWP, we’ll be extending these platforms to include many more of the existing APIs. This also includes a compatibility shim that allows referencing binaries that were compiled against the .NET Framework.

Moving forward, we recommend that you use .NET Standard instead of Portable Class Libraries. The tooling for targeting .NET Standard 2.0 will ship in the same timeframe as the upcoming release of Visual Studio, code-named “Dev 15”. You’ll reference .NET Standard as a NuGet package. It will have first class support from Visual Studio, VS Code as well as Xamarin Studio.

You can follow our progress via our new dotnet/standard GitHub repository.

Please let us know what you think!


Team Foundation Server “15” RC 2 available


Today we released RC2 of TFS “15”.  The key links are here:

You can check out the release notes for details of what’s new.  The release notes are a union of everything new in TFS “15” but you can identify changes since RC1 by looking for items tagged “New in TFS “15” RC2“.  As you can tell from the length of the release notes – many, many pages – there is a *ton* of new stuff in TFS “15”.  Among my favorite enhancements since RC1 are:

  1. An improved pull request experience.  This has been getting rave reviews from internal users.
  2. Pull request auto-complete – a great way to fire and forget pull requests.
  3. The ability to purchase and install paid extensions in TFS “15” (the release notes don’t list this one yet but I’m getting them updated).

This release is fully go-live and supported.  You can upgrade from RC1, TFS 2015 with any update, TFS 2013 with any update and TFS 2012.  You will, of course, also be able to upgrade from this release to the final RTM.

Here are the requirements for this release.

If you have a TFS server with a whole lot of test results in it, this release can take a long time to install.  We did a major overhaul of the test results schema – yielding about an 8X reduction in size.  But, during upgrade, the data has to be migrated to the new schema.  The setup should warn you if it looks like your server is going to take a long time.  Of course, a pre-prod test upgrade is never a bad idea for a large mission critical server.  If your server is going to take a long time, you can contact customer support and they can help you with a script that will do the test schema migration before you do the upgrade, while the TFS server is still online and functional.

We’ve installed this on a bunch of internal production TFS servers and so far everything looks good.  Please let us know what you think or if you have any trouble.

Thanks,

Brian

 

Fire!


When you have a farm like I do, with a lot of pastures, a lot of fence lines and a lot of trees, you inevitably spend a lot of time clearing dead trees, fallen limbs, etc. from your fences and fields.  It’s a never ending job.  Sometimes I collect the wood for firewood (I heat my house with that in the winter).  Sometimes I just throw it back in the woods to rot.  Sometimes I collect it up into a pile and burn it.  Over the past 7 or 8 years, I have had hundreds of fires and burnt countless tons of debris.

Three or four years ago I decided to clear a bunch of pine trees out of one of my pastures – they were kind of in the way when I wanted to cut hay and they kept dropping limbs.  They were very large – ~16″-18″ in diameter and there were over 40 of them so I decided not to try to do it myself.  I hired my neighbor who has a landscaping business and the right equipment to come and cut them down, haul away the trunks to a lumber mill and remove the stumps.  He left all the limbs in huge piles.  I thought “Great, I’ll get a wood chipper and chip them up for mulch – heck I use a lot of that in my orchard anyway.”  After several days of chipping all day and only clearing 6 or 7 of the 40+ piles, I gave up on that idea.  Wow, that was time consuming and tiring.  Oh, and I blew out the rear window of my pickup truck in the process but that’s another story.

After the remaining wood sat there for a year, chipping it was no longer viable – it dries out and gets too hard.  So I spent several weekends hauling piles in my pickup truck to my main “burn pile” and burning it.  Another ~4 piles gone, heck, only ~30 to go.  So it sat for a couple more years.  I decided to take a run at it this Fall.  But this time I wanted to be more efficient.  I decided to burn the piles “in place”.  They are out in the middle of a huge field and, of course by now, they are way overgrown with weeds and in some cases, saplings.  I waited until after a good solid rain so that the ground was good and damp.  I used my mower to cut about a 12 foot ring of very short grass around the piles to reduce the chance of the fire escaping into the grass.  I also got a stock tank with about 40 gallons of water and a bucket so I could douse any areas where the fire started to get away.

I lit one pile and tended it.  For the first hour or so everything went well.  The fire was contained within the circle of grass I had cut and I used a little water here and there to stop it advancing.  At some point it started to escape on multiple sides of the fire.  I was running back and forth with buckets of water as fast as I could but it wasn’t working.  Sloshing a bucket just wasn’t an effective enough method and eventually the fire got away from me.  I frantically called my wife and asked her to call 911 and come help me while I kept trying to keep the fire back.

About 10 minutes later the fire department showed up – 2 trucks and countless firemen.  They had the fire out in about another 15 minutes.  All told, it burned about 1/3rd of an acre.  It had reached 2 of the other wood piles – a good 30 or 40 feet away and lit them on fire.  Man, was it scary.  Another 10 minutes or so and the fire would have made it to the woods and that would have really been a disaster.

I have to say I felt like an idiot.  The forest ranger who came told me they probably get 100 fires like that every year, just in our area.  Doesn’t make me feel a lot better though.  Burning a fire in the middle of a field is hard.  Grass just catches fire too quickly.  I think, if I had it to do again, I’d have taken a trimmer and cut the grass around the fire all the way down to the dirt and then raked the remains away.

Here’s a picture I took after it was all over.  I was too busy/stressed to take any pictures while it was happening.

Fire

So much for a quiet and peaceful weekend.  Thankfully no one was hurt and no property was damaged – just some burnt grass.  But it’s a reminder to be super careful with fire.

Brian

Pricing for Release Management in TFS “15”


Since the new version of Release Management was introduced in TFS 2015 Update 2, it has been in “trial mode”. Any user with Basic access level was able to access all features of Release Management. For the last few months, we have been hard at work to finalize the pricing model for Release Management in time for the release of TFS “15” RTM. We wanted a model that:

  • makes Release Management available to all Basic users in a team
  • is free for small teams, and is competitive as the complexity in an organization increases
  • is equally applicable to both TFS and VSTS
  • is uniform across Build and Release Management in VSTS
  • provides value to Visual Studio Enterprise subscriptions

Based on all of these, here is a summary of the pricing model for Release Management in TFS “15”:

Per-user charge
  1. All Release Management features, including authoring of release definitions, are included in TFS CAL.
Release pipelines
  1. Run one release pipeline at a time for free per Team Foundation Server.
  2. Each Visual Studio Enterprise subscription in your TFS can add one more concurrent release pipeline.
  3. Buy additional concurrent pipelines for $15/concurrency each month.

Let us now look at what this model means in more detail.

  • No additional per-user charge: You do not pay per user any more for Release Management. Earlier versions of Release Management (Release Management Server 2013) required Visual Studio Test Professional or Enterprise subscriptions for users in order for them to author release definitions. That is no longer the case. Just like Build, Release Management can be used by all users in your TFS as long as they have a Basic access level or TFS CAL. Just like before, Stakeholders can continue to approve or reject releases even without a Basic access level or TFS CAL.
  • No charge for agents or target servers: You do not pay for agents or target servers for Release Management. Register any number of agents with your TFS or deploy to any number of target servers using Release Management in TFS “15”.
  • Charge for concurrent pipelines: The primary metered entity for Release Management is the number of pipelines you can run at a time. A pipeline is just a single release. By default, you can always run one pipeline at a time for free. Additional releases that you create will be queued automatically. When you deploy a release to several environments in parallel, all the deployments still count as one pipeline, since they are part of a single release.
  • Visual Studio Enterprise subscription benefits: You can now pool the Visual Studio Enterprise subscription benefits to increase the number of concurrent pipelines for your entire team. Every Visual Studio Enterprise subscription added to your server contributes one additional concurrent pipeline, provided you did not use the benefit from that subscription in a different Team Services account or server.
  • Buy à la carte: You can also buy additional release pipeline concurrency from the Visual Studio Marketplace without having to buy an entire Visual Studio Enterprise subscription.

This new pricing model is in effect starting from TFS “15” RC2. The only option that is still not available in RC2 is the ability to buy à la carte extensions from the Marketplace. This work is in progress, and is expected to complete by TFS “15” RTM.

When you upgrade to TFS “15” RC2 or above, you will notice that:

  • Release Management is not in “trial mode” any more.
  • All Basic users in your server can access all Release Management features.
  • The number of concurrent pipelines that you can run is set to the free limit of “1” per server.

Your administrator (anyone with permission to modify project collection level information) can further increase the concurrency to as many Visual Studio Enterprise subscriptions as are added to that server. Just make sure that you do not use the same subscription benefit in another Team Foundation Server or Team Services account. For instance, let us say that you have the same 50 Visual Studio Enterprise subscription users in two TFS servers. You can choose to take the benefit of 25 of those subscriptions in one TFS, and another 25 in the second TFS. This will increase the concurrent pipelines that you can run in each of them to 26.

To understand your true cost of Release Management, there is one key question that you need to answer – How many pipelines do I need to run at the same time? We believe that the one free concurrent pipeline gets a small sized team started for free. A rule of thumb is to count one pipeline for every 10 users in your server or account. Even for large accounts or installations (with around 200-500 users), it is unlikely that more than 20-50 pipelines run at a time. And, in such large accounts, the use of Visual Studio Enterprise subscriptions is quite common, and the benefits of these subscriptions usually cover the concurrency needs, thereby making Release Management costs be included in what you are already paying for.

We plan to complete the official documentation for this pricing model in the next few weeks before the release of TFS “15” RTM. We will also include the pricing model for Release Management in Team Services as part of that documentation. This blog is intended to provide guidance to users of Release Management as the above pricing features are being released in TFS “15” RC2.

Release Management Team

Use Bing to Livestream Presidential Debates and Register to Vote

The U.S. presidential election kicks into high gear tonight as the nation watches Clinton and Trump face off in their first official debate. This falls just one day prior to National Voter Registration Day and, with a lively encounter expected, voters will be eager to ensure their voices are heard. Given the importance of these moments, Bing has updated its elections experience to include livestreaming of the debates, as well as information on registering to vote.
 
Livestream the Debates

Millions have been waiting to see the two presidential candidates take the stage tonight in what’s sure to be a heated he-said, she-said debate. But if you can’t watch on your TV, or prefer to use a second screen like many, Bing has you covered. Just search for “presidential debate video” to reach our debate page with two video streaming options.
 
 
You can also find the full schedule of debates on Bing.
 
 
Register to Vote

Tomorrow, September 27, is National Voter Registration Day, and Bing has all the information you need to register to vote in your state, including important deadlines and state requirements. Special thanks to our voter registration partner, Rockthevote.com.
 
 
Stay tuned for more election updates in the coming weeks!
 
- The Bing Team


 

Run cloud-based load tests using your own machines (a.k.a. Bring your own subscription)


When you run a cloud-based load test, the load testing service automatically provisions the necessary machines (load agents) for generating the load to your application. Once the load test run has completed, these resources are torn down. This works well for the most part for a large set of customers. However, some customers want to be able to run load tests using their own machines – be it virtual machines in Azure that they provision in their own subscription or other machines, virtual or physical, that may be living on-premises.

This blog looks at the two primary scenarios where such a configuration may be useful. Before we begin, let’s get familiar with some terms that we will use through the rest of the blog, for simplicity.

  • Auto-provisioned machines: These are load generating machines that are automatically provisioned by the CLT service when a load test run request is received and are also automatically torn down when the load test run has completed execution. When these machines are used, you are charged VUMs, as applicable for your load test run.
  • Self-provisioned machines: These are load generating machines that you can provision on your own (in your Azure subscription or on-premises). These machines can be configured to register themselves against your VSTS account and then they can run a cloud-based load test. This is the focus of our discussion in this blog.
  • Cloud-load agent: This is the agent that can work with the CLT service. This agent will be installed when you use self-provisioned machines and needs to be configured for your VSTS account. Once configured, it can then be used for running a load test. The cloud-load agent is NOT the same as the Test Controller/Test Agent that you may have used earlier for running load tests or automated tests. Towards the end of the blog, we will see the differences.

Now, let’s look at the scenarios:

  • More control over agent machines – Sometimes, you may need more control over the agent machines – e.g., to install custom software that is used during a load test run. While simple configuration is easily achieved using deployment items and setup scripts on auto-provisioned agents, if you are installing some bulky software or doing some time-consuming operation as part of the setup, you may want to do that only once and reuse the machines over and over again, in order to save time and effort. Since the auto-provisioned agents are torn down automatically, using your own machines that you can set up and tear down at will can help.
  • Testing private apps / apps behind the firewall – The basic requirement of the cloud-load testing service is that the application endpoint be public or reachable from the cloud. Often, that isn’t the case. The app that you want to load test may live entirely on-premises behind the firewall or in a private VNet in Azure, or you may be developing new features that will only become publicly accessible when they are released. How do you then load test such an app? If you could provision agents in the same network as your app (so that they can reach the app) and have them work with CLT, you could load test your app easily.

The question that then remains is “How”:

  • Using machines on-premises: You can provision as many machines as you need on-premises and run a PowerShell script that will install the cloud-load agents and configure these machines against your VSTS account. For more details and the PowerShell script, refer to this blog post. The agents communicate with the CLT service only using HTTP(S), so you don’t need to open any other ports.
  • Using virtual machines in Azure: While you can certainly adopt the same approach as (1) where you provision the machines and then run the powershell script to install and configure the load agents, we have simplified this process using ARM templates. You need to specify just a few inputs such as your VSTS account, a PAT token for authentication and the number of agent machines you need and you will be on your way to creating and configuring the machines in a single shot. The machines will be provisioned in your Azure subscription (hence the “Bring your own subscription”) and you will have complete control over these machines. We have provided 2 ARM templates in the Azure quick-start templates repository on github that you can use. Let’s see which template to use when:
  • Simple template with dynamic IP option: This template will provision the number of machines you specify and assign them dynamic IPs. With this configuration, the application will still need to be publicly reachable. You can install any software you may need after the machines are provisioned and ready or you can customize the ARM template to install the necessary software as part of the provisioning process itself.
  • Simple template with static IP option: This template will provision the number of machines you specify and assign them static IPs. The machines will get the same IPs even after you shut them down and restart at a later point in time. With this configuration, you can allow traffic from these known IPs through the firewall to reach an application behind the firewall. The agents communicate with the CLT service only using HTTP(S), so you don’t need to open any other ports.
  • VNet based ARM template: This template will provision the number of machines you specify in a specific VNet that you have already setup in Azure. This VNet would be where your private app is also hosted and thus the load agents have a line of sight to the app and can thus reach your app.

For more information on using these ARM templates, please refer to this blog post.

If you have been using Visual Studio load testing for some time now, you may already be familiar with Test Controller/Test Agent (TC/TA) that can be used to run load tests. And you may be wondering how that is different than the cloud-load agent usage described above. Here are the differences:

  • Cloud-load agent does not need a controller. The CLT service acts as a lightweight controller instead. The Test Agent, on the other hand, needs a separate Test Controller.
  • Cloud-load agent on self-provisioned machines uses the CLT service to store results and benefits from any enhancements we make to the CLT service. For example, you can view the load test report in the browser. The Test Controller/Test Agent uses SQL Server to store results.
  • Cloud-load agent uses HTTP(S) for communicating with the CLT service and is quite resilient to network glitches whereas the communication between Test Controller and Test Agent uses .NET remoting, which makes it quite susceptible to network glitches. The Test Agent is also a lot more chatty than the cloud-load agent.

Hope you enjoyed this one! Try it out and reach us at vsoloadtest@microsoft.com with any questions or feedback.

FAQ

  • How do I run JMeter tests using the self-provisioned option?

>> The cloud-load agent has been designed to run a JMeter test too. Enabling the experience to use self-provisioned agents for a JMeter load test instead of the default auto-provisioned path that exists today is on the backlog.

  • What is the cost of using self-provisioned agents? Will I be charged VUMs?

>> If you use self-provisioned agents on-premises, you bear the hardware cost, of course. If you use self-provisioned agents in Azure, the machines will be provisioned in your Azure subscription and you will be charged the applicable cost by Azure, based on the number of machines and the duration for which these VMs are running. CLT will not levy any VUM charges in this scenario.

  • Can I use the self-provisioned option when running load tests in a CI pipeline?

>> Yes, certainly. You can use the existing Azure RG tasks from the build / RM catalog with either of the ARM templates to provision, start or stop machines. The CLT tasks have also been updated so that you can specify whether to use the auto-provisioned agents or self-provisioned agents. A build template that will help you with this configuration is in the works and will be made available soon. Stay tuned!

  • How do I automate the process of provisioning machines and shutting them down?

>> You can do so with using a build definition. In this context, think of the build (CI pipeline) as an automation orchestrator rather than a system that does build. We will have a blog post outlining this one, stay tuned.

The week in .NET: On .NET on Orchard 2 – Mocking on Core – StoryTeller – Armello


To read last week’s post, see The week in .NET: On .NET with Steeltoe – C# Functional Extensions – Firewatch.

On .NET

Last week, Sébastien Ros was on the show to talk about Orchard 2:

This week, we’ll speak with JB Evain about his work on the Visual Studio 2015 Tools for Unity, and maybe also Cecil. The show begins at 12PM Pacific Time (note that’s 2 hours later than usual) on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

Mocking on .NET Core

Three major .NET mocking frameworks now have official pre-releases with .NET Standard support (a minimal Moq usage sketch follows the list):

  • FakeItEasy (nuget feed from AppVeyor CI builds).
  • Moq: Note that there is another “moq.netcore” package from the ASP.NET team’s MyGet feed. It is an obsolete private fork meant to unblock testing in the early days before Moq had releases that support .NET Standard. Consumers of the “moq.netcore” package should switch to use the latest official Moq package.
  • NSubstitute.
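As a quick sketch of what this looks like in practice, here is a minimal Moq-based test (using xUnit assertions); the interface and values are invented for the example.


using Moq;
using Xunit;

public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingTests
{
    [Fact]
    public void Greet_returns_the_configured_value()
    {
        // Arrange: create a mock and configure its behavior.
        var mock = new Mock<IGreetingService>();
        mock.Setup(s => s.Greet("world")).Returns("Hello, world");

        // Act: use the mocked instance like any other implementation.
        string result = mock.Object.Greet("world");

        // Assert: verify both the returned value and the recorded interaction.
        Assert.Equal("Hello, world", result);
        mock.Verify(s => s.Greet("world"), Times.Once());
    }
}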

Package of the week: Storyteller

BDD enables you to focus on the functional behavior your code should have. Its products are runnable code expressed in plain English, and are thus easy to validate by non-technical stakeholders, converging specification and testing. The natural language used in BDD also opens some really interesting scenarios, such as documentation that lives on with the code.

StoryTeller is such a BDD package for .NET (soon on .NET Core), that is perfect for integration testing, executable specifications, and living documentation. StoryTeller 3.0 was just released, and it’s used by StructureMap, Marten, and of course StoryTeller itself.

StoryTeller

Game of the Week: Armello

Armello is a visually stunning digital board game that combines tactical card game elements with tabletop strategy and roleplaying. When entering the world of Armello, you become one of eight heroes, each of which has their own set of unique traits. Explore, quest, scheme and vanquish monsters while you attempt to overthrow the current ruler and take your rightful place on the throne. Armello features both single player and multiplayer games, dynamically generated levels, and over 120 beautifully animated cards.

Armello

Armello was created by League of Geeks using Unity and C#. It is available on Xbox One, PlayStation 4 and Steam for Windows, Mac and Linux.

Blogger of the week: Muhammad Rehan Saeed

Muhammad Rehan Saeed appears in Week in .NET almost weekly, with long-form, detailed posts that are absolutely outstanding. We are featuring two of his posts this week. Check them out!

User group meeting of the week: Deep Dive to Azure IoT Hub in Edmonton, Alberta

On Wednesday, September 28, in Edmonton, Alberta, Canada, Sergii Baidachnyi is taking you on a deep dive into Azure’s IoT hub with the Edmonton .NET User Group.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, on ASP.NET Weekly, and on Chris Alcock’s The Morning Brew.

Learn and Explore with Bing

As part of our commitment to make learning more interactive and fun, the Bing team has released a new set of experiences that will help students as they settle into the new school year.
 
These experiences cover a wide range of topics—from helping students learn science and history, to giving them tools to explore the English dictionary and expand their vocabulary. This adds to the work we have been doing in the last year to help students learn math and science concepts in a fun and interactive way. Learn more about each new answer below.
 
Constellations
 
Searching for "constellations" or a specific constellation name (e.g. “Cassiopeia constellation”) on Bing will bring up our interactive constellations viewer. Bing uses location identification technology to show which stars and constellations are above or below the horizon if you were to look at the sky at night. You can click on any constellation to learn more, such as Perseus in the example below, or hover over individual stars to identify them.
 
Constellation Perseus

Molecules
 
The molecule answer creates 2D models of any molecule you type into the Bing search bar, such as methane, c4h2 molecule or h2o molar mass. Highlighting the structure and properties of a molecule and its elements, this interactive experience is great for any budding chemist. You can hover over individual elements and electron pairs to learn more about how chemical elements bond to form molecules, and click and drag elements to "play" with the molecule on the screen.
 
Methane molecule CH4
 
Family tree
 
This interactive family tree enables you to trace the lineage of royal families, or individual kings and queens, by scrolling through time while learning about each family member. In addition to showing a person's birth and death dates alongside their picture or portrait, we often also present a "Did you know" section at the bottom of the answer that mentions notable people related by blood or marriage. Search "Queen Elizabeth II family tree", “house of Tudor”, or “Romanov dynasty” to try it out for yourself.
 
Queen Elizabeth II family tree
 
Words
 
While the primary purpose of the "World of words" answer is to help Bing users enrich their vocabulary, it can also be valuable to people who engage in their favorite activities such as poetry, crosswords, and many types of word games. Presented in the form of a word cloud, the results display the top 10 words that start with, end with, or contain letters that you choose. The top 10 are determined by frequency in text, popularity in search, length in letters, and even Scrabble score, depending on what you're interested in most. For example, the screenshot below displays the most frequently-occurring words that end with the letters b-i-n-g. Search for queries such as "words that start with a" or a more complex “words containing b and end with k” to trigger the experience.
 
World of words
 
Citation
 
One of the most important tasks in academic research is citing sources properly. However, there are several standards for citation and many different types of sources out there, so properly citing something can get confusing. With this in mind, Bing has built a simple, useful citation tool that provides example citation text for each of the major types of source citations.  Search for “apa citation book” to pull up this tool.
 
How to cite a book (APA)

And finally, we also have something for those of you that need a quick break from studying or whatever it is that you are doing. 
 
Rubik's Cube solver
 
Have you ever fumbled around with a Rubik's Cube only to leave it more scrambled than the way you found it? Finally solve one with Bing's virtual Rubik's Cube solver! Rotate and drag the cube with your mouse (or use the tool in the top right) to move the cube on your own or use our instant solver tool to advance the cube through an easy-to-follow solving algorithm. Our solver shows you how to reach the end goal step-by-step and can run at any speed so you can follow along at your own pace or watch the puzzle get solved blazingly fast! Search "rubik's cube" in Bing and try it for yourself.
 
Rubik's Cube solver

Hope you enjoy these new answers and if you do have feedback or ideas, please reach out to us on Bing Listens. You can always find Bing’s full educational experience portfolio at Bing in the Classroom.
 
- The Bing Team
 
 

C++ code analysis: tell us what you think!


We’d love to hear more about what you would like to see in C++ code analysis. We’re running a short survey–just 20 questions–to help us understand how to make C++ code analysis and Visual C++ better.

Please take a couple of minutes to fill out our C++ Code Analysis survey and let us know your thoughts.

Survey link: https://www.surveymonkey.com/r/DX2CHKG

Thank you!

Andrew Pardoe

Get the most out of your PRs with Branch Policies


Pull requests have been widely accepted as a best practice for teams using Git to peer-review code changes. Peer reviews are a great practice for discussing how to improve code and for spreading knowledge about a codebase amongst team members. Contrary to popular belief, code reviews are not particularly good at finding bugs even if that’s what developers expect from their code reviews.

So then, how can you ensure you are finding bugs before they’re introduced into your codebase while still ensuring you have the right people reviewing? Branch policies can go a long way to helping.

Require peer reviews

The first step to protecting the quality of your code base is to require peer reviews.  Direct contributions to the mainline that aren’t reviewed can result in costly build breaks and other bugs. You can protect your mainline with the branch policy to require a minimum number of reviewers.

Start on the Branches page, and find your mainline branch (e.g. master, develop). On the context menu, you’ll see an option to configure Branch policies.

Configure branch policies from the branches page.

Clicking this option will take you to the policy configuration UI for the selected branch. Under the Code review requirements, check the first box to require that all changes going into the branch be made through a PR. You’ll be able to specify how many reviewers you want to require (2 is a good number) and whether or not developers can approve their own reviews.

Require two code reviewers per pull request.

Clicking save will apply the policy. Once the branch has the policy applied, you’ll see that it gets marked with a badge in the Branches page.

Branches with policies are marked with a badge.

Anyone that tries to push directly to the branch will be blocked and will see a message informing them about the requirement to use pull requests.

! [remote rejected] master -> master (TF402455: Pushes to this branch are not permitted; 
you must use a pull request to update this branch.)
error: failed to push some refs to 'https://account.visualstudio.com/DefaultCollection/_git/MyRepo'

Require specific reviewers

To go even further with the code review requirements, you can add policies to automatically include and require approval from specific reviewers when specific files change. On the policy configuration UI, click on the Add a new path link to add a reviewer requirement for a path.

Add a new path to require specific code reviewers.

Enter the path for which you want to require specific reviewers. Note that you can add multiple paths delimited by semicolons, or even file types using wildcards (e.g. *.cs). Under reviewers, you can add the users and/or groups you want to require.

Configure the required reviewers for the specified path.

When adding groups, only one member of that group will be required to approve on behalf of the group. If there are multiple users and/or groups listed, each of the listed reviewers will be required to approve. In the above example, Alvin and someone from the Fabrikam team will be required to approve any PR that modifies a file under /FabrikamWeb. In general, using groups is probably a better practice than listing specific individuals – that way, people won’t be blocked when a reviewer is out sick or on vacation.

As you probably expect, you can add as many paths as needed, and any PR that changes files that are identified in policy will require signoff from all of the required reviewers.
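
For example, a single path entry might combine a folder with a wildcard filter, something like the following (these particular paths are purely illustrative):

/FabrikamWeb; /FabrikamWeb/database/*.sql

Any PR touching files under that folder, or matching that wildcard, would then pull in the configured reviewers automatically.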

You might also wonder about the Required and Enabled checkboxes. Unchecking Required turns the list of reviewers into a suggestion rather than a requirement: those reviewers are still added automatically, they just won’t block completion of the PR. In practice, this is a good way to encourage the right reviewers while taking the burden of finding them off the author. Enabled is just a quick way to turn a policy on and off without deleting it, which is very useful when testing out configurations.

Automatically build pull requests

Reviewers are great for spreading knowledge and generally improving code, but what about preventing build breaks? That’s where the build policy comes in.

The build policy doesn’t just build the new code being added; it builds the result of the merge with the target before the merge is actually created. So not only will you know that the changes build on their own, you’ll know that they build on top of the latest changes in the target. I think a picture always helps explain what’s going on here.

The merge commit between the head of the source and target branches is the commit built by the build policy.
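
You can approximate the same check locally before pushing by building the merge of your topic branch with the latest target yourself. A rough sketch, assuming master is the protected target branch and my-topic is your source branch (both names are placeholders):

# Get the latest state of the target branch from the server
git fetch origin

# Merge the target into your topic branch; the resulting tree matches what the policy would build
git checkout my-topic
git merge origin/master

# Now run your normal build and tests against the merged result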

Like the other policies, the build policy is configured from the policy page.

Build policies are configured from the branch policies page.

Check the box to enable the build policy. This will require that you choose an existing build definition to validate the changes. In most cases, the build definition you want to use for your PR builds is the same as the CI build for the protected branch. Optimizations can certainly be made; for example, omitting steps that publish build drops, symbols, and other build artifacts. Running automated tests in the PR build is also a good idea, but it might be necessary to pare down the test suite to ensure high throughput. Focus on breadth of test coverage rather than depth.

Handling updates to the source and target branches

Once the build definition is chosen, the conditions for triggering builds must be set. Any PR into a branch protected with a build policy will trigger a build when the PR is created or the source branch is updated with new code. But what happens when the target branch changes? The build policy has several options for how to handle these additional PR updates.

  • Always require a new build. This is the strictest policy option, requiring every update to pass the build. Using this option essentially eliminates the possibility of build breaks in the target, since every change is guaranteed to have built before being merged. In terms of developer workflow, it can also cause a lot of frustration. Consider three developers that are ready to merge their PR changes, all of which have successful builds. One of the three will merge their changes first, leaving the other two to rebuild their changes against the new changes in the target (i.e. the first developer’s changes). When those builds complete, one of the remaining two will merge first, leaving the last developer to rebuild yet again.
  • Require a new build if older than X hours. This option is a good fit for teams that want to ensure PR changes are being built, but are willing to accept some risk to minimize developer friction when merging PRs. When the target branch is updated, the build status is preserved for the specified time window (instead of expiring immediately). This way, developers aren’t forced to rebuild when new changes are made to the target, but they also can’t accidentally merge in a PR that passed its build a month prior.
  • Don’t require a new build. This is the most flexible option, requiring that a PR only ever builds when the source branch changes. This option also has the most risk of introducing build breaks, since a stale PR could be meeting policy despite not having been built for a long time.

If you’re not sure where to start, the second option is a good middle ground. If build breaks are a problem, decrease the time limit. If builds are expiring too soon, increase the time limit. This option also works well with the auto-complete setting.

Use the auto-complete option to complete pull requests when policies are passing.

Finally, if you leave the option to block PR completion unchecked, a passing build becomes a suggestion rather than a requirement for merging PRs.  This can be a good option when initially enabling the build policy (so as not to block throughput if there is a problem with the build itself).

Learn more about build policies and how to configure them in the Team Services docs.

Team Services Update – Sept 28


This week we are rolling out our sprint 106 work.  Here are the release notes.  By now, the update has made it out to most customers and it should finish by the end of the week.

Overall, sprint 106 was a modest sprint.  It has lots of nice, relatively small improvements and nothing particularly earth-shattering.  Probably the biggest thing is a new capability to import repos into Team Services – from anywhere Team Services can access a repo.

A lot of the effort over the past couple of sprints has gone into getting TFS 15 ready and, I think, that’s showing a bit in this sprint.  With RC2 out, I’m already starting to see some post TFS 15 work starting to spin up.  I won’t be surprised if 107 is a “soft” sprint too but, by 108, we should start seeing some significant new stuff coming out.

Stay tuned for more.

Brian

One more farm story for this week


Sorry for the back-to-back farm stories, but I was relating the Fire! story to someone the other day and remembered another one I hadn’t shared.

As a cattle farmer, I go through a lot of hay – about 300,000 lbs per winter.  Spring and late Summer are the seasons for cutting, baling and storing hay.  In the past I’ve had to buy a lot of hay because I didn’t have enough land to make enough.  A little over a year ago I bought another 35 acres of hay fields and this year I was able to make all the hay I needed.

The new hay fields are about 6 miles from my “main farm” where I keep the cows and store the hay.

In August I cut and baled hay on those hay fields – got almost 100 rolls (100,000 lbs).  I haul the hay back to the farm using a Ford F-350/550 dump truck and a trailer.  I can fit 11 rolls on the trailer and 2 in the dump truck for a total of 13,000 lbs.

I hauled the hay over the period of a few days and, on and off, was having trouble with my dump truck.  The battery died a couple of times over night and I had to recharge it.  I was hoping to get all this hauling done before I had to deal with a more permanent fix to the battery problem.

Unfortunately, as I was driving the last load back from the hay fields, the truck died (just plain cut off in the middle of driving it) about a mile from my farm.  I was going up hill at about 45 miles per hour and managed to coast to the top of the hill and then down the other side.  By the time the truck coasted to a stop, I was about half a mile from the farm and stuck in the middle of the road (there were no shoulders to pull off on).

I called my house and got my 16 year old son.  I asked him to get the farm utility vehicle and come meet me on the road.  While I waited, cars kept coming up and I was waving them around this huge truck, trailer and mountain of hay.  Once my son showed up, I left him on truck duty to go get my John Deere 4320 tractor and some chains.

My thinking was that I was going to pull the truck and trailer the remaining half mile with the tractor.  I wasn’t sure it was going to work.  People think of tractors as incredibly powerful machines and, in a way, they are – but it’s all about gearing.  The engine in this tractor is only about 45 horsepower.  By comparison, the engine in my little Hyundai Sonata is 190 horsepower and my dump truck is about 330 horsepower.  So you can imagine why I might be a little concerned 😉

I brought the tractor back and used the tractor to take the two hay rolls out of the back of the dump truck and set them in someone’s yard 🙂  I wanted to lighten the load but I didn’t want to unstrap all the hay on the trailer.  I hooked up the tractor.  The chains I have are huge and I’ve used them to pull all kinds of things.  I think the steel in the links is about 3/8″ and I used 2 chains.

My son rode in the truck to steer and to operate the brake so that the truck didn’t run into the back of the tractor in the event that I had to stop.

I put the tractor in low gear, revved up the engine to about 2,000 rpms and proceeded to pull the truck and trailer 1/2 a mile at 1.8 miles per hour (full speed in low gear – did I mention tractors are all about gearing? :))  To my pleasant surprise, it worked like a charm.

I went back and picked up the two rolls I left behind and afterwards headed to Autozone to get 2 new batteries (big diesel trucks have 2 batteries, not 1 – they pull a heck of a lot of current when they are starting).

In the end everything worked out.  It was an adventure and I learned one more thing my little tractor can do.  I have a bigger 95 horsepower tractor that I was sure would have been able to pull this with no trouble, but it was still back at the hay fields, 6 miles away, and with a top speed of less than 20 miles per hour it would have taken a long time and been quite a nuisance to everyone else on the road :(.  Yay for my 4320 – the little tractor that could.

Brian

FIXED: Xbox One losing TV signal error message with DirectTV


Your TV signal was lost

I've got an Xbox One that I love that is connected to a DirectTV HDTV Receiver that I love somewhat less. The setup is quite simple. Since I can control the DirectTV with the Xbox One, and we like to switch between Netflix, Hulu, and DirectTV, we use the Xbox One to control everything.

The basic idea is this, which is quite typical with an Xbox One. In theory, it's amazing.

I fixed my Xbox One losing Signal with an HDMI powered splitter

However, this doesn't always work. Often you'll turn on the whole system and the Xbox will say "Your TV Signal was lost. Make sure your cable or satellite box is on and plugged into the Xbox." This got so bad in our house that my non-technical spouse was ready to "buy a whole new TV." I was personally blaming the Xbox.

It turns out that's an issue of HDMI compliance. The DirectTV and other older cable boxes aren't super awesome about doing things the exact way HDMI likes it, and the Xbox is rather picky about HDMI being totally legit. So how do I "clean" or "fix" my HDMI signal from my cable/satellite receiver?

I took a chance and asked on Reddit, and this very helpful user (thanks!) suggested an HDMI splitter. I was surprised, but I was ready to try anything so I ordered this 2 port HDMI powered splitter from Amazon for just US$20.

IT WORKS

It totally works. The Xbox One now does its "negotiations" with the compliant splitter, not with the receiver directly, and we haven't seen a single problem since.

I fixed my Xbox One losing Signal with an HDMI powered splitter

If you have had this problem with your Xbox One, then pick up a 2 port HDMI powered splitter and rejoice. This is a high quality splitter that doesn't change the audio signal and still works with HDCP if needed. Thanks internets!


Sponsor: Big thanks to Telerik for sponsoring the feed this week. Try Kendo UI by Progress: The most complete set of HTML5 UI widgets and JavaScript app tools helping you cut development time.



© 2016 Scott Hanselman. All rights reserved.
     