With the latest deployment to VSTS, you can now create your own custom work item types (WITs) and place them on the backlog and board level of your choice. Read on for a walkthrough of the process.
Create a custom work item type
If you do not already have one, you’ll need to create an inherited process and migrate your projects to use it. Once you have an inherited process, you’ll notice a new button above the work item types list:
Click the New work item type button to bring up the dialog. Here, you need to provide a name and, optionally, a description and color for your custom WIT. Currently, you can’t rename a WIT, but you can update the description and color at a later time.
Once you’ve created your WIT, you’ll land on the layout page where you can customize your form layout and add new fields. Your WIT will have default states that you can also modify. Note: the existing limitations for states in the completed category still exist. See this topic to learn more: https://www.visualstudio.com/en-us/docs/work/process/customize-process
Add a custom work item type to a backlog and board level
Each work item type now has a new “Backlogs” tab that allows the process admin to choose where their custom WITs appear in the Agile experiences. You can’t change the backlog level of the inherited (system) WITs.
Choosing a new backlog level will give you details on the changes that will be made:
Commit your change by clicking the “Save” button. Your custom WIT will then begin appearing on the selected backlog and board. You may have to refresh your web browser to see the changes.
Destroy a custom work item type
From the overview page of a custom WIT, you can destroy the WIT. Destroying a WIT will permanently delete all work items of that type as well as all references to the WIT. It also frees up the WIT name for reuse.
Closing
In the next few weeks, we will be adding the ability to disable any WIT. Disabling a WIT will prevent new work items of that type from being created while still preserving existing work items on your backlog, board and queries. It will provide a way to get rid of unwanted inherited WITs and deprecate WITs.
Please comment or email with any feedback or questions.
It has been a busy time since my last post. There have been nine public releases of Git for Windows in the meantime. And a lot has happened.
Most importantly, Git for Windows v2.10.0 has been released. Download it here. Or look at its homepage.
Let me take this opportunity to mention a couple of highlights:
The interactive rebase is now much faster
One of Git’s most powerful commands lets the user reorder commits, edit commit messages and split/join commits. It is called the interactive rebase, or rebase -i.
Originally intended as a simple side project to help myself contribute changes to the Git project itself, it evolved into a very powerful tool that helps with refining topic branches until they are ready to be merged. Essentially, it generates a list of commits called an “edit script” (or “todo script”), lets the user edit that script, and then re-applies the code changes accordingly.
The initial version was a very simple shell script, and here lies the rub: shell scripting is good for prototyping quick and dirty, but it lacks direct access to Git’s data structures, proper error handling, and most importantly: speed. This matters especially on Windows, where it is more expensive to spawn processes than, say, on Linux. Also, Git for Windows has to resort to a POSIX emulation layer to run shell scripts, which means an enormous performance impact.
Being a power user of the interactive rebase myself, I hence set out to address this problem. As rebase -i had turned into a monster of a shell script in the meantime, my idea was to go for an incremental route: re-implement some parts of the interactive rebase in C and switch the shell script over to use those parts, one by one.
The end result is an interactive rebase that, according to a benchmark included in Git’s source code, runs ~5x faster on Windows, ~4x faster on MacOSX and still ~3x faster on Linux.
Git for Windows v2.10.0 includes this new and improved code, and MacOSX and Linux will soon benefit, too.
Technical details, for the curious
The initial step of this incremental route was still a very, very big step: to process the “todo script” in a “builtin”, i.e. in a Git command implemented in C and using Git’s internal API.
This idea was not new: there was a plan to back the interactive rebase by a “sequencer”, a to-be-written low-level command that could also be used by other applications directly. The sequencer was indeed introduced, and it was similar in design to the interactive rebase, but it was only used to implement git cherry-pick. It still took quite a bit of work to modify that sequencer to enable it to run interactive rebases.
In addition to this work, I had to touch other code parts, too, such as fixing some regression tests, introducing a performance benchmark for rebase -i, modifying many code paths not to exit uncleanly, and bits and pieces that could be cleaned up “while at it”. To make things more manageable, I decided to split up the 100 or so patches into several patch series. These are the patch series that have already been accepted into “upstream” Git (i.e. the Git project of which Git for Windows is a friendly fork):
(The hex commit names as well as the date indicate when the respective patch series has been accepted into Git’s source code by the Git maintainer.)
There were a few bits and pieces missing, of course, mostly the part where the sequencer actually learns to perform the grunt work of interactive rebases. Those bits and pieces have been contributed to the Git project over the last two weeks, and I will work on them until they get accepted:
the require-clean-work-tree branch that refactors out useful code from the git pull command,
the libify-sequencer branch to allow the sequencer to handle errors other than simply exiting,
the prepare-sequencer branch to rearrange the sequencer code and make it easier to extend,
the sequencer-i branch that teaches the sequencer to understand interactive rebase’s edit scripts,
the rebase--helper branch to add a new low-level command to actually call the sequencer in rebase -i mode, and
the rebase-i-extra branch that re-implements complex processing of the edit scripts in C.
These patches have been developed since early February, and they finally get to benefit the users!
Bonus track: cross-validating the interactive rebases
Made it this far without falling asleep? Congratulations. So now for the fun part: how can I be so certain that this code is ready for prime time?
The answer: I verified it. Inspired by GitHub’s blog post on their Scientist library, I taught my personal Git version to cross-validate each and every interactive rebase that I performed since the middle of May. That is, each and every interactive rebase I ran was first performed using the original shell script, then using the git rebase--helper, and then the results were confirmed to be identical (modulo time stamps).
Of course, that means that I did not benefit from the speed improvements until this past week, when I finally turned off the cross-validation. But it added enormously to the confidence in the correctness of the new code.
Full disclosure: the cross-validation did find three regressions that were not caught by the regression test suite (which I have subsequently adjusted to test for those issues, of course). So it was worth the effort.
MinGit: Git for Windows applications
Another big new feature since Git for Windows 2.8 is MinGit: with every new Git for Windows version, we now offer .zip archives of “Git for Windows applications”.
Let’s look at the motivation for this new feature first: Visual Studio’s Team Explorer, as well as GitHub for Windows and many other applications working on Git repositories and work trees, access Git functionality by calling Git for Windows’ git.exe, providing input and processing output. This requires Git for Windows to be installed separately, as it is a separate software package, and it results in the usual dependency problem: how do you tell whether the installed Git version provides all the functionality the application requires? An alternative to relying on the installed Git for Windows would be to bundle a complete, known-good Git for Windows version, but that would require an additional ~200MB on disk.
Enter MinGit.
The idea is to provide a version of Git for Windows that does not provide an interactive user interface, that does not provide localisations, that does not provide GUIs and that omits git svn (which would require an entire Perl infrastructure). Essentially, it is a Git for Windows that was stripped down as much as possible without sacrificing the functionality in which 3rd-party software may be interested.
It currently requires only ~45MB on disk.
In the same spirit, my excellent colleague Jeff Hostetler worked on a new, enhanced low-level mode of the git status command that gives applications a quick, complete picture of a Git work tree’s state, using a single git.exe invocation. This feature has been contributed to the Git project and is already available as an experimental option in Git for Windows.
Other highlights in v2.10.0
In particular when rebasing onto fast-moving branches, git rebase is much, much faster now. This feature has been contributed by my colleague Kevin Willford.
The browser to use to display help pages can now be configured via the help.browser setting; this used to be disabled on Windows for years. This fix was already available in Git for Windows and is now also part of the Git project’s source code.
The git mv dir non-existing-dir/ command now works as expected in Bash on Ubuntu on Windows (previously, it relied on a Linux-specific deviation from the POSIX specs). While not strictly a Git for Windows issue, this came up in my testing of Git in Bash on Ubuntu on Windows.
To help develop cross-platform projects, files can be marked as executable on Windows via the git add --chmod=+x option.
The initial phase of a git fetch is much faster now when the remote repository contains a lot of branches and/or tags.
The git grep command already knew to ignore the case via the -i option. This mode now respects non-ASCII locales, too.
When merging text files in a complicated commit history, Git no longer gets confused by line endings.
Git no longer passes on open handles of temporary files to child processes. This could previously result in locking problems, where the child processes prevented the parent process from deleting the temporary files.
Git’s build process now ensures that no files are added to Git’s source code whose names are illegal on Windows.
These improvements are part of the “upstream” Git v2.10.0 (i.e. not only for Windows). See the full release notes here.
Fun facts: The making of Git for Windows v2.10.0
Every Git for Windows release starts with rebasing the Windows-specific patches of Git for Windows’ master branch to the released version of upstream Git. It is not a simple rebase, though! It retains the branch structure of currently 48 topic branches, using the special-purpose tool called “Git garden shears”.
After that, I run Git’s entire regression test suite. If that is not passing, I investigate it and fix bugs before going on with the release. Happily, this was not necessary this time, also because I had performed a Git garden shears run three days earlier, in preparation for v2.10.0. Running the regression test suite on my (rather beefy) work laptop took 47m6.638s.
Once the regression tests all pass, the “real” release engineering begins, in a dedicated virtual machine. This entails the very same steps, each and every time, so I automated them. This not only avoids mistakes in the release engineering process, it also saves tons of my time. Essentially, I run a series of steps by calling please.sh [], in order, doing other things while the computer does the hard work:
Update the packages (such as bash, curl, gcc, etc) in the 32-bit and the 64-bit Git for Windows SDK: 0m25.141s
Finalize the release notes (verifying the date, the upstream Git version, etc): 0m34.739s
Tag v2.10.0.windows.1 (again verifying a couple of things and rendering the release notes into the tag): 0m8.335s
Build the Git packages (32-bit and 64-bit, including HTML and man help pages): 40m25.803s
Install the Git packages into the SDKs: 1m55.946s
Upload the Git packages to Git for Windows’ Pacman repository: 6m51.918s
Build the installers, portable Git, MinGit, NuPkg: 18m56.284s
Upload the installers, portable Git, etc: 23m16.976s
That means that the computer worked for 2 hours 19 minutes and 41.78 seconds to prepare Git for Windows v2.10.0. During this time, I wrote this blog post.
A few sprints ago we enabled SonarQube and PMD analysis on the Maven and Gradle tasks. We are continuing to add code analysis tooling to the main Java build tasks, this time with Checkstyle support for Gradle.
Checkstyle Analysis with Gradle
Checkstyle is the analyzer of choice for enforcing a coding standard. It is a highly configurable analyzer, but if you want to get going fast and use the “Java Sun Checks”, simply enable the “Run Checkstyle” box available in the Code Analysis section.
The build summary then reports the number of issues found by Checkstyle. Detailed issue logs are available under the build Artifact tab of the build summary.
Customizing Checkstyle
If you wish to configure the Checkstyle analysis, have a look at the official documentation on applying the Checkstyle plugin in the Gradle build. We will detect that you have applied the plugin and will not intervene, but we will still try to find the Checkstyle reports and produce a build summary.
Feedback
We’d like to hear from you. Please raise issues and suggestions on the issues tab of the VSTS task repository in GitHub: https://github.com/microsoft/vsts-tasks/issues and add the label “Area: Analysis”.
In this blog post you’ll learn about IoT Core Blockly, a new UWP application that allows you to program a Raspberry Pi 2 or 3 and a Raspberry Pi Sense Hat using a “block” editor from your browser:
You create a program with interlocking blocks, which will run on the Raspberry Pi. For example, you can control the LED matrix on the Sense Hat and react to the inputs (buttons, sensors like temperature, humidity, etc.).
IoT Core Blockly was inspired by these other super interesting projects:
In this blog post, we will show you how to set up your Raspberry Pi with IoT Core Blockly and get coding with blocks.
Also, we’ll open the hood and look at how we implemented IoT Core Blockly leveraging Windows 10 IoT Core, Google Blockly, the Sense Hat library and the Chakra JavaScript engine.
Set up IoT Core Blockly on your Raspberry Pi
What you will need:
A Raspberry Pi 2 or 3
A Raspberry Pi Sense Hat
A clean SD card (at least 8 GB) to install the Windows 10 IoT Core Insider Preview
A computer or laptop running Windows 10, to install the IoT Dashboard
First, unpack your Sense Hat and connect it on top of the Raspberry Pi (having four small spacers is handy, but not required):
Now you will need to install the latest Windows 10 IoT Core Insider Preview on your SD card. Follow the instructions at www.windowsondevices.com in the “Get Started” section:
At this point, you should have the IoT Dashboard up and running on your Windows 10 desktop (or laptop) and your Raspberry Pi connected to the network (either Ethernet or Wireless). You should be able to see your Raspberry Pi on the IoT Dashboard “My devices” section:
In IoT Dashboard, go to the section called “Try some samples.” You will see the IoT Core Blockly sample:
Click on it and follow the instructions to install the UWP application onto your Raspberry Pi. After a few seconds, IoT Dashboard will open your favorite browser and connect to the IoT Core Blockly application running on your Raspberry Pi:
Press the “Run” button and the IoT Core Blockly application will start the “Heartbeat” program, and you should see a blinking red heart on your Sense Hat!
Try some other samples (the green buttons on the top). Select a sample, inspect the “blocks” in the editor and press the “Start” button to start this new program.
Try modifying an example: maybe a different image, color or message. IoT Core Blockly remembers the last program you ran on the Raspberry Pi and will reload it when you start the Raspberry Pi again.
Under the hood
How does IoT Core Blockly work? How did we build it?
The main app starts a web server which serves the Blockly editor page on port 8000.
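To give a rough idea of how such a server can be put together on IoT Core, here is a minimal sketch using the Windows.Networking.Sockets.StreamSocketListener API (the port and page content are illustrative; the actual SimpleWebServer project in the sample is more complete):

using System.Threading.Tasks;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;

// Minimal sketch: listen on port 8000 and answer every request with a static page.
async Task StartWebServerAsync()
{
    var listener = new StreamSocketListener();
    listener.ConnectionReceived += async (sender, args) =>
    {
        var body = "<html><body>The Blockly editor page would be served here.</body></html>";
        using (var writer = new DataWriter(args.Socket.OutputStream))
        {
            writer.WriteString($"HTTP/1.1 200 OK\r\nContent-Length: {body.Length}\r\n\r\n{body}");
            await writer.StoreAsync(); // flush the response bytes to the socket
        }
    };
    await listener.BindServiceNameAsync("8000");
}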
At this point, you can browse to port 8000 on your Raspberry Pi from a browser and access the Blockly editor.
We created custom blocks for specific Sense Hat functionalities (e.g. the LED display, the joystick, the sensors, etc.) and added them to specific “categories” (e.g. Basic, Input, LED, Images, Pin, etc.)
Blockly makes it simple to translate blocks to JavaScript, so we could generate a runnable JavaScript snippet. You can see what your block program translates to in JavaScript by pressing the blue button “Convert to JavaScript” – note: to enable “events” like “on joystick button pressed” we have a few helper JavaScript functions and we pay special attention to the order of the various functions.
At this point, we have a block editor that can generate a runnable JavaScript snippet: We need something that can execute this JavaScript snippet on a different thread without interfering with the web server.
To run the snippet, we instantiate the Chakra JavaScript engine (which is part of every Windows 10 edition) and start the snippet. Chakra makes it easy to stop the snippet at will.
Many of the blocks interact directly with the Sense Hat. We could have written a bunch of JavaScript code to control the Sense Hat, but instead we leveraged the complete and easy-to-use C# SenseHat library from EmmellSoft. Bridging between JavaScript and C# was extremely easy using a wrapper UWP library.
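To sketch what that bridge can look like (a hypothetical example, not the actual IoTBlocklyHelper code): a sealed, WinRT-visible C# class is projected into the Chakra-hosted JavaScript, so the generated snippets can call straight into C#, which in turn talks to the SenseHat library.

namespace IoTBlocklyHelper
{
    // Hypothetical wrapper class. WinRT-activatable classes must be sealed,
    // and public members may only use WinRT-compatible types.
    public sealed class SenseHatBridge
    {
        public void ShowMessage(string text)
        {
            // Forward to the EmmellSoft SenseHat library here (omitted in this sketch).
        }

        public double ReadTemperature()
        {
            // Return the latest temperature reading from the Sense Hat (stubbed out here).
            return 0.0;
        }
    }
}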
Last, we added some machinery to make sure the last “run” snippet is saved on the Raspberry Pi (both the blocks layout and the JavaScript snippet are cached) and run again the next time the IoT Core Blockly app starts (e.g. when you restart your device).
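One straightforward way to implement that kind of caching (an assumption for illustration, not necessarily how the sample does it) is to write the generated snippet to the app’s local folder and read it back at startup:

using System.Threading.Tasks;
using Windows.Storage;

// Save the last-run JavaScript snippet so it survives an app restart or reboot.
async Task SaveSnippetAsync(string snippet)
{
    var file = await ApplicationData.Current.LocalFolder.CreateFileAsync(
        "lastProgram.js", CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, snippet);
}

// Load it again when the app starts; returns null if nothing has been cached yet.
async Task<string> LoadSnippetAsync()
{
    var item = await ApplicationData.Current.LocalFolder.TryGetItemAsync("lastProgram.js");
    return item is StorageFile file ? await FileIO.ReadTextAsync(file) : null;
}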
If you inspect the IoTBlockly solution from GitHub, you can see 4 different projects:
IoTBlocklyBackgroundApp: The main app that orchestrates the web server and the Chakra JavaScript engine. The Google Blockly code is part of this project.
IoTBlocklyHelper: A simple UWP wrapper library to bridge between C# code and the JavaScript snippet. The SenseHat library from EmmellSoft is referenced in this project.
SimpleWebServer: A rudimentary web server based on Windows.Networking.Sockets.StreamSocketListener.
Let us know if you want more details about how this works. We can definitely geek out more about this!
Next steps
IoT Core Blockly is a functional sample, but we think we can do more. For example:
add more support for colors,
add support for sounds,
add support for voice (speech and speech recognition),
add more blocks for GPIO, ADC, PWM, SPI, I2C,
add blocks for other “hats” (for example, the GrovePi),
send/receive data to/from the cloud, over Bluetooth, etc.
Sarah Mei had a great series of tweets last week. She's the Founder of RailsBridge, Director of Ruby Central, and the Chief Consultant of DevMynd, so she's experienced with work both "on the job" and "on the side." Like me, she organizes OSS projects and conferences, but she also has a life, as do I.
If you're reading this blog, it's likely that you have gone to a User Group or Conference, or in some way did some "on the side" tech activity. It could be that you have a blog, or you tweet, or you do videos, or you volunteer at a school.
With Sarah's permission, I want to take a moment and call out some of these tweets and share my thoughts about them. I think this is an important conversation to have.
My career has had a bunch of long cycles (months or years in length) of involvement & non-involvement in tech stuff outside of work.
This is vital. Life is cyclical. You aren't required or expected to be ON 130 hours a week your entire working life. It's unreasonable to expect that of yourself. Many of you have emailed me about this in the past. "How do you do _____, Scott?" How do you deal with balance, hang with your kids, do your work, do videos, etc.
I don't.
Sometimes I just chill. Sometimes I play video games. Last week I was in bed before 10pm two nights. I totally didn't answer email that night either. Balls were dropped and the world kept spinning.
Sometimes you need to be told it's OK to stop, Dear Reader. Slow down, breathe. Take a knee. Hell, take a day.
When we pathologize the non-involvement side of the cycle as "burnout," we imply that the involvement side is the positive, natural state.
Here's where it gets really real. We hear a lot about "burnout." Are you REALLY burnt? Maybe you just need to chill. Maybe going to three User Groups a month (or a week!) is too much? Maybe you're just not that into the tech today/this week/this month. Sometimes I'm so amped on 3D printing and sometimes I'm just...not.
Am I burned out? Nah. Just taking a break.
But you know what? Your kids are only babies once (thank goodness). Those rocks won't climb themselves. Etc. And tech will still be here.
I realize that not everyone with children in their lives can get/afford a sitter but I do also want to point out that if you can, REST. RESET. My wife and I have Date Night. Not once a month, not occasionally. Every week. As we tell our kids: We were here before you and we'll be here after you leave, so this is our time to talk to each other. See ya!
Date Night, y'all. Every week. Self care. Take the time, schedule it, pay the sitter. You'll thank yourself.
This is the second regular monthly update to the Bing Maps Version 8 control (V8) since its release in late June. In last month’s update, we saw many new features such as draggable pushpins and spatial math geometry. Not to be outdone, we have added many new and exciting features in our August update.
Three New Road Map Styles
One common scenario where we see Bing Maps used a lot is in business intelligence applications. A key piece of feedback we have had from developers in this space is that, as nice as the road maps are in Bing Maps, the colors of the map can sometimes interfere with the data being overlaid and result in false positives. Take, for example, toll roads. In Bing Maps, toll roads are green, a common color used for indicating positive information in business intelligence apps. Now, if you were to color-code sections of roads based on some metric and green is used as one of the colors, a toll road may look like part of the data set. With this in mind, we decided to add three additional map styles that are better suited for business intelligence applications. These new map styles are called canvasDark, canvasLight, and grayscale.
As you can see in the following image, these new map styles help your data stand out. On the left is the canvasDark map style with a heat map displayed on top. On the right is the grayscale map style with a weather radar tile layer displayed.
High Contrast Support
To make Bing Maps more accessible, high contrast support has been added. When the user’s computer is in high contrast mode, a high contrast version of the road maps will be displayed.
Animated Tile Layers
Bing Maps has supported overlaying image data as a tile layer in nearly every single version of our map controls. One scenario we have seen some developers try to accomplish is animating an array of tile layers. Weather applications often do this to animate weather radar data. Unfortunately, in older versions of Bing Maps it was difficult to make this animation smooth and often the tile layers would appear to flash between each frame of the animation. To address this, we have created an AnimatedTileLayer class, which makes it easy to smoothly animate through an array of tile layers. Try it now. Additional code samples can be found here.
TypeScript Definitions
In last month’s update, we announced TypeScript definitions for Bing Maps V8. These definitions have been updated to include all the new features from the August update. In addition to being available through NuGet, we have also made these definitions available through npm.
Additional Improvements
There have been many bug fixes and incremental improvements made to the existing features in Bing Maps V8. Most of these were based on feedback from developers. Please keep the feedback coming.
A complete list of new features added in this release can be found on the What’s New page in the documentation. We have many other features and functionalities on the road map for Bing Maps V8. If you have any questions or feedback about V8, please let us know on the Bing Maps forums or visit the Bing Maps website to learn more about our V8 web control features.
This week, we’ll speak with Benjamin Fistein and Jakub Míšek from Peachpie to get an update on their PHP compiler for .NET, which now works on .NET Core and Docker, and can consume NuGet packages. The show begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.
Tool of the week: Shaderlab VS
Shaders are extremely important to game developers, but as they mostly run on graphics cards, they can be challenging to work with. Shaderlab VS provides very welcome features for Unity developers, with syntax highlighting, tooltips, and code completion for shader code.
Resource of the week: Awesome Domain-Driven Design
Awesome Domain-Driven Design is a treasure trove of links about DDD, CQRS, event sourcing, and event storming. In there, you’ll find blog posts, podcasts, user groups, courses, books, samples, and mailing-lists. It’s curated, and managed as a GitHub repository, so you can contribute.
User group meeting of the week: Creating VR/AR Apps: A Taste of Unity 3D in Lawrenceville
As I’m writing this post every week, it’s easy to notice the bloggers who consistently put out great content every week, or, in the case of Andrew Lock, several times a week. It takes real dedication and passion for the community to write that much great content so frequently. We’re talking about long-form, very detailed posts, too. So kudos to Andrew for the quality work he’s producing. Two of his posts are featured this week. Check them out!
As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.
You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:
Professional football is finally back and, as in previous seasons, the Bing Predicts team has you covered. We will provide weekly predictions for your favorite team as well as the other 31 teams, plus playoff and Super Bowl predictions. (Our models predict Carolina over New England, in case you wondered. Read below for the full playoff predictions.)
We also have weekly fantasy projections to help your fantasy team get the edge in that high-stakes office league. Find all the information you need each week on Bing: schedules, scores, standings, teams, and players.
Weekly Power Rankings
Bing Predicts’ machine learning and deep knowledge take power rankings to a new level, giving you a glimpse into your team’s future. Every Tuesday at noon Pacific Time, we will update our power rankings with predictions for which teams the Predicts model forecasts to win their respective divisions, and which teams are on pace to earn those elusive wild-card spots. Here are our projected pre-season playoff rankings:
AFC:
1. New England (AFC East),
2. Cincinnati (AFC North),
3. Kansas City (AFC West),
4. Houston (AFC South),
5. New York Jets (first wild-card),
6. Denver (second wild-card).
NFC:
1. Carolina (NFC South),
2. Green Bay (NFC North),
3. Arizona (NFC West),
4. Philadelphia (NFC East),
5. Minnesota (first wild-card),
6. Seattle (second wild-card).
These rankings will update dynamically as the season unfolds, so check Bing.com or the Bing mobile app weekly for our full divisional and wild card predictions, and find out whether your team is among those top 12 playing deeper into January.
Weekly Game-by-Game Predictions
Search for your favorite team to find its schedule and chance of winning its weekly matchup: Just look for the blue flag (see the table below), or search for NFL predictions to get the coming week’s predictions for all games. This feature gives you the information you need heading into Sunday, whether you’re a survivor-pool fanatic or simply want to have a point of view at the coffee machine at work.
For week one, Bing predicts a 57 percent chance for a Carolina win over Denver in the September 8 season opener. On September 11, we predict an Atlanta win over Tampa Bay (58 percent chance); Green Bay over Jacksonville (55 percent); and Arizona over a New England team without Tom Brady (67 percent), to name a few. Bing predicts three likely upsets: Buffalo over Baltimore, San Francisco over Los Angeles, and New York’s Gang Green over Cincinnati.
Fantasy Football Features
For some of you, the success of your fantasy football team is as important as the record of your hometown franchise. Get a competitive advantage from Bing in your fantasy league, with projections of player stats and scores by position (see table below) as well as all-up top players. Search for “fantasy football predictions” on Bing when deciding who to start each week, and which free agents to pick up in order to help you optimize your weekly lineup.
Which running back should you start on week one? Curious about how many fantasy points your wide receiver is going to score? Check Bing Predicts’ latest fantasy football projections.
Before, During and After the Game
Bing has the basics down, too. Before the game we’ll help you locate each team’s stat leaders and predicted game-changers, as well as game location, time and broadcast network. During the game, you can search for up-to-date scoring drives.
After the game, we know how important it is to help you find results and the postgame highlights, so make it part of your Sunday-night routine to visit Bing.com or the Bing mobile app. Check back with us at the start of the season as these features become available.
Whether you’re looking for information to help you dominate your fantasy league or just enjoy great football, don’t make your moves without checking in with Bing first. Just like you, we’re counting the hours till the first whistle blows on September 8. Good luck out there this season!
Developers targeting Windows 10 IoT Core can use a programming language of their choice. Today, this includes C#, C++, Visual Basic, JavaScript, Python and Node.js, but we’re not stopping there. Arduino Wiring is the latest addition to IoT Core.
Arduino is one of the most popular platforms among makers. Its community has produced a large number of libraries to interface with peripherals such as LED displays, sensors, RFID readers and breakout boards. One of the main drivers for Arduino’s adoption is its simplicity. With Windows 10 IoT Core, you can now create or port Arduino Wiring sketches that will run on supported IoT Core devices, including Raspberry Pi 2, 3 and Minnowboard Max.
Creating your first Arduino Wiring sketch for IoT Core
You can find the detailed steps for creating Arduino wiring sketches in the Arduino Wiring Project Guide. In addition, here is a quick summary:
Start Visual Studio and create a new Arduino Wiring project. The project template can be found under Templates | Visual C++ | Windows | Windows IoT Core.
Upon project creation, a sketch file with the same name and the extension ‘.ino’ will be created and opened by default. The sketch includes a template for Arduino Wiring using a classic hello world LED blinking code. More details about the Arduino Wiring template and required device setup can be found at the Hello Blinky (Wiring) page.
Before deploying, make sure to change the driver on the target device to DMAP. For more details, follow the instructions in the Lightning Setup Guide.
Finally, select your remote machine and press F5 to build and deploy your app to your device. Go back to the Arduino Wiring Project Guide for additional guidance.
How does an Arduino Wiring sketch run on an IoT Core device
On an IoT Core device, an Arduino Wiring sketch runs as a background application (headless); i.e. there is no UI visible on an attached monitor. This is similar to running a sketch on an Arduino device such as the ArduinoUno for example. However, you can still debug the sketch as you would any other app under Visual Studio by inserting breakpoints and/or stepping through the code. Additionally, to enable verbose output logging, Serial.print() and Serial.println() have been updated to send their output to Visual Studio’s debug pane when the sketch is running under the debugger (the output will also be sent to the serial pins). The Log() function can be used to output to debug as well. Check the Arduino Wiring Porting guide for more details on using Serial.print*() on IoT Core devices.
Porting an Arduino wiring sketch or library to Windows 10 IoT Core
For many Arduino Wiring sketches, simply copying the source code from the existing sketch into a new Arduino Wiring project in the Visual Studio editor and replacing the pin numbers with the ones on your device will suffice.
However, while we have tried to make our implementation as close as possible to other Arduino-based devices, in some cases, a few modifications need to be made to your sketch to make it compatible with the Windows IoT Core OS, the C++ compiler or the board the sketch is running on. The Arduino Wiring Porting Guide covers many of those modifications needed and solutions to problems you may face while porting your sketch.
To use an existing Arduino library, you simply need to copy and add the header (.h) and source file (.cpp) for the library to your project, then reference the header file. Again, most libraries should work with no or very few modifications. However, if you face any issues, please check the Arduino Wiring Porting Guide.
Writing Windows 10 IoT Core specific code
For your scenario, you may want to write code parts that target Windows 10 IoT Core only, but at the same time maintain compatibility with other devices. For that, you can use the _WIN_IOT macro; e.g.:
#ifdef _WIN_IOT
// Windows IoT code only
#else
// Other platforms code
#endif
Arduino Wiring and the Universal Windows Platform (UWP)
The runtime behavior of Arduino Wiring sketches on IoT Core devices is similar to that on other Arduino devices, with every function implemented to work the same way when interacting with the various controllers. However, Arduino Wiring sketches are also Windows UWP apps and thus can be extended to call any UWP APIs such as networking, storage, Bluetooth, media, security and others; for example, you can communicate with Bluetooth devices, connect to a Wi-Fi network or even create a OneNote app using Arduino Wiring.
Additionally, you can wrap your sketch inside a WinRT library and use it in any UWP app, including UI apps. The Building Arduino Wiring Libraries guide details the steps needed to create Arduino Wiring based libraries.
Additional Resources:
Here are additional resources to assist with creating/porting Arduino Wiring apps running on Windows 10 IoT Core:
Visual Studio Code is the first code editor and first cross-platform development tool – supporting OS X, Linux, and Windows – in the Visual Studio family.
Using Visual Studio Code and the new Windows IoT Core Extension for VS Code, you can write a Node.js script, deploy it to a Windows IoT Core device and then run it from the development machine of your choice, whether it runs OS X, Linux or Windows.
Enter the username and password to log into the device with. The defaults are “Administrator” and “p@ssw0rd.” If you prefer not to have your username and/or password in a plain text file, delete these lines from the generated .json file and you will be prompted each time they are needed.
Verify that settings.json is correct by pressing F1 and then typing “iot: Get Device Info.” If the command succeeds you should see output similar to the following:
Change LaunchBrowserPageNo in settings.json to LaunchBrowserPage
Add a new file to the workspace by clicking the icon found here. Name it index.js or whatever filename you provided during npm init.
To enable logging of your project set the start script in package.json file. The log file will be written to C:\data\users\DefaultAccount\AppData\Local\Packages\NodeScriptHost_dnsz84vs3g3zp\LocalState\nodeuwp.log on the device.
// Copyright (c) Microsoft. All rights reserved.
var http = require('http');

// Project the Windows Runtime (UWP) APIs into Node.js.
var uwp = require("uwp");
uwp.projectNamespace("Windows");

// Open GPIO pin 5 and configure it as an output, starting high.
var gpioController = Windows.Devices.Gpio.GpioController.getDefault();
var pin = gpioController.openPin(5);
var currentValue = Windows.Devices.Gpio.GpioPinValue.high;
pin.write(currentValue);
pin.setDriveMode(Windows.Devices.Gpio.GpioPinDriveMode.output);

// Toggle the pin every 500 ms to blink the LED.
setTimeout(flipLed, 500);

function flipLed() {
    if (currentValue == Windows.Devices.Gpio.GpioPinValue.high) {
        currentValue = Windows.Devices.Gpio.GpioPinValue.low;
    } else {
        currentValue = Windows.Devices.Gpio.GpioPinValue.high;
    }
    pin.write(currentValue);
    setTimeout(flipLed, 500);
}
Press F1, then run “iot: Run Remote Script”.
You should see the LED blinking.
Quick Tour of IoT Commands
All of the commands contributed by this extension are prefixed with “iot:”. To see a list of commands that are contributed by this extension, press F1 and then type iot. You should see something like this:
iot: Get APPX Process Info // get information about UWP applications running on the device
iot: Get Device Info // get information about the Windows IoT Core device
iot: Get Device Name // get the Windows IoT Core device name
iot: Get Extension Info // get information about this extension
iot: Get Installed Packages // get a list of APPX packages installed on the device
iot: Get Process Info // get a list of processes running on the device
iot: Get Workspace Info // get information about the workspace currently open in VS Code
iot: Initialize settings.json // initialize the project settings with iot extension defaults
iot: List Devices // List Windows IoT Core devices on the network
iot: Restart Device // Restart the Windows IoT Core device.
iot: Run Command Prompt // Prompt for a command and then run it on the device
iot: Run Command List // Show a list of commands from settings.json and run the one that is picked
iot: Run Remote Script // Run the current workspace Node.js scripts on the Windows IoT Core device
iot: Set Device Name // Set the device name using the IP Address and name provided
iot: Start Node Script Host // Start the Node.js script host process that works with this extension
iot: Stop Node Script Host // Stop the Node.js script host process that works with this extension
iot: Upload Workspace Files // Upload the files for this workspace but don’t run the project
We’ve already seen how to get your devices to send data to the cloud. In most cases, the data is then processed by various Azure services. However, in some cases the cloud is nothing more than a means of communication between two devices.
Imagine a Raspberry Pi connected to a garage door. Telling the Raspberry Pi to open or close the door from a phone should only require a minimal amount of cloud programming. There is no data processing; the cloud simply relays a message from the phone to the Pi. Or perhaps the Raspberry Pi is connected to a motion sensor and you’d like an alert on your phone when it detects movement.
In this blog post, we will experiment with implementing device-to-device communication with as little cloud-side programming as possible. A common pipeline for device-to-device communication involves device A sending a message to the cloud, the cloud processing the message and sending it to device B, and device B receiving this message. By minimizing that middle step, you can create a functional app that only requires the free tier of Azure IoT Hub. It’s a cheap and effective way to design device-to-device communication. So, can two devices talk to each other with almost no cloud-based programming?
The answer, of course, is yes. In order to do this, it is important to understand how Azure IoT Hub and the Azure IoT messaging APIs work. Currently, Azure IoT messaging involves two different APIs – Microsoft.Azure.Devices.Client is used in the app running on the device (it can send device-to-cloud messages and receive cloud-to-device messages), while the Microsoft.Azure.Devices SDK and the ServiceBus SDK are used on the service side (they can send cloud-to-device messages and receive device-to-cloud messages). However, our design proposes something slightly unorthodox. We will run the service SDK on the device receiving messages, so less code goes into the cloud.
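For illustration, the sending side (device A, for example the phone) needs only a few lines with Microsoft.Azure.Devices.Client – a sketch, with the connection string and payload as placeholders:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

// Device A sends a device-to-cloud message through IoT Hub.
async Task SendCommandAsync(string deviceConnectionString)
{
    var client = DeviceClient.CreateFromConnectionString(deviceConnectionString);
    var message = new Message(Encoding.UTF8.GetBytes("open-garage-door"));
    await client.SendEventAsync(message);
}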
To take advantage of the latest advances in security, we will provision our device to securely connect to Azure with the help of the TPM (see our earlier blog post that introduced TPM).
This approach uses a many-to-one messaging model. It allows for a simple design, but limits our capabilities. While many devices can send messages, only one can receive. In order to only accept messages from a specific device, the receiver will filter the messages by the device id.
How does all this work?
For a full sample, see the code here. There are two solutions within this project, as described above. The use of the SDKs in each solution remains mostly unchanged from the standard design outlined here. We decided to run the service-side SDK on the receiving device – however, there is one roadblock. One of the two service-side SDKs, ServiceBus, does not support UWP. Fortunately, another library called AMQPNetLite offers a UWP-compatible alternative that can be used to send and receive messages on the service side. This requires a little more work: we needed to connect to the Event Hub-compatible endpoint that IoT Hub exposes, create a session and build the receiver link.
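A rough sketch of those steps with AMQPNetLite looks like this (the endpoint, credentials and partition are placeholders; the real values come from your IoT Hub instance):

using System.Threading.Tasks;
using Amqp;

// Device B connects to the Event Hub-compatible endpoint of IoT Hub and reads messages.
async Task<Message> ReceiveMessageAsync()
{
    var address = new Address("<event-hub-compatible-endpoint>", 5671, "iothubowner", "<shared-access-key>");
    var connection = await Connection.Factory.CreateAsync(address);
    var session = new Session(connection);

    // Each partition of the endpoint is exposed as a separate AMQP node.
    var receiver = new ReceiverLink(session, "receiver-0",
        "messages/events/ConsumerGroups/$Default/Partitions/0");

    var message = await receiver.ReceiveAsync();
    if (message != null)
    {
        receiver.Accept(message); // acknowledge so the message is not redelivered
    }
    return message;
}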
All the connection information needed to set up a receiver with AMQPNetLite can be found in your instance of IoT Hub. You can also use this library to filter incoming messages by device id. See this sample for further details.
What next?
This experiment intentionally keeps the amount of cloud-based programming to a minimum (zero, really). Even so, this opens up a set of new opportunities. With this, an IoT device can be remote controlled by any Windows device. However, this system has limitations. Any complex message filtering is currently not supported. Extending this solution to be cross-platform (using Android or iOS devices) also proves to be difficult, as AMQPNetLite is not compatible with Xamarin.
If you’re willing to do more and utilize more cloud services (including paid ones), advanced messaging patterns, sophisticated data analysis and long-term storage become possible. In particular, Azure Functions allow you to run the receiving code in the cloud, which obviates the need to run AMQPNetLite on the client device.
This blog post focused on the simplest pipeline for device-to-device communication, but what we have built is by no means the only solution. We’re eager to hear your feedback and welcome your ideas on what more can be done.
Divya Mahadevan, a software engineer intern, contributed to this piece (thanks, Divya!)
David Ku, CVP IPG
Melissa Du, Intern
Shannon Phu, Intern
Grace O'Brien, Intern
Derrick Connell, CVP @ Bing, Cortana
Road to Rio: Our Summer at Bing
“We’re in full Olympics swing and are so excited for you to join our Road to Rio!”
When we received that message from our soon-to-be mentor, Kelly Freed, we couldn’t believe it. As interns with the Microsoft Explorer program, we would get the opportunity to work on the Rio Games experience this summer at Microsoft. There were still a few weeks of school left, but already we couldn’t sit still: we wanted our summer at Microsoft to start already.
Along with about 300 other Explorer interns, we were divided into groups, otherwise known as “pods”. Our pod had three people: Melissa, a rising junior studying computer science at Stanford University; Grace, a rising sophomore studying product design at Stanford; and Shannon, a rising junior studying computer science at the University of California, Los Angeles. We first met each other during intern orientation, and over the course of our internship, bonded over many sushi outings and debugging sessions.
Our main project was to help build the Rio Games experience:
Bing has many experiences like these, to help users find all sorts of information, such as the 2016 Academy Awards and even a Rubik’s cube solver! We were each given ownership of a specific module for the Rio Games Experience.
Melissa would be in charge of building the Daily Predicts module, which used past Olympic data and the Bing Predicts team's machine-learning algorithms to generate an athlete or country's chances of winning a certain event.
Grace would be in charge of building the Social module, which displayed the five most relevant and popular tweets from Rio 2016 at any given time.
Shannon would be in charge of building the Events to Watch module, which used the Bing Predicts team's predictions, based on data about sport and athlete popularity, to display the top five events to watch every day.
Actually building one of these experiences required a strong understanding of what happens throughout the whole Bing system, from data ingestion to user interfaces. We onboarded by getting an overview of the connection between the various major components that are involved in building experiences like these on Bing—which include natural language processing, data storage, and UX rendering.
We then used tools that allowed us to emulate Bing on our local machine and develop our standalone modules in C# before they were merged into the whole Rio Games experience. Our backend development work consisted of obtaining data through layers of caching and then dispatching calls to update data stores so that we could render the new UX. Our frontend work consisted of using Bing's internal frontend component library to build the view of our module.
One of the biggest challenges we faced throughout our project was dealing with the intricacies of the data pipeline. For example, because the Bing Rio Games experience was displayed globally, we had to make sure that each of our modules was localized for the markets it had to support. Another challenge was learning how to communicate effectively across different teams. Throughout the entire development cycle, we were in close contact with the design team, the editorial team, the Predicts team, and our own dev team, just to name a few.
To get more of an idea of the Rio Games experience that we helped contribute to, you can check out the Bing blog post about it here. We were so excited to see something we had been working on the whole summer actually ship to the entire world while we were still here, and even more excited to see that millions of fans worldwide enjoyed the Rio Games experience we helped build.
Of course, none of the above would have been possible without the help of our incredible mentors Kelly Freed, Zachary Garcia, and Barry Lumpkin, who were always there to answer our barrage of questions and help us with whatever challenges came our way. In addition, we'd also like to thank the rest of the Bing Engineering Team for providing us with support throughout our internship. We really couldn't have asked for a better summer and thank our team for providing us with such an amazing experience.
- Grace, Melissa, and Shannon, Bing summer interns
Have you ever tried to remember the title of a paper you read in an academic journal? You know the author’s name, but not how to spell it? Or you’re sure that a certain actor starred in a movie directed by a particular director, but can’t remember the title? Wouldn’t it be great if your favorite search engine could help with that? Now it can.
One of the major challenges in search is to be able to help users express their intent in the form of a query that will find the correct information the user is looking for. Previously Bing shipped autocomplete, which helps complete queries, and PageZero, which provides instant answers as you type. Now the Bing team has shipped a couple of features that build on that technology and experience.
Through the integration of technology built by TNR (Microsoft’s Technology and Research team) and the Bing semantic graph, these features allow the user to construct highly structured queries exposing Bing’s deep knowledge of specific topic areas.
It's All Academic
The first feature is academic suggestions, which we shipped earlier this year and which was built jointly by the Cognitive Services and Academic Search teams. The feature allows users to explore the relationships between papers, authors, topics and publications through a large object graph. There are many scenarios that the user can explore, for example:
• Find all papers by an author
• Find a paper written by particular co-authors
• Find a paper about a specific topic presented at a conference
• Suggest titles or authors
We have built on that foundation to provide more intelligent autocomplete functionality. The graph relationships are explored in real-time and the most relevant suggestions are generated for the user, even if Bing has never seen the query before.
More Movies
The second upgrade allows users to find movies much more easily. If you were looking for that movie from 1982 directed by Steven Spielberg starring Drew Barrymore, it would be nice if your search engine could help you formulate the query. Now with this feature that’s exactly what happens:
Like the academic suggestions described above, the feature allows users to formulate natural language queries about the domain through autocomplete. Here are some of the kinds of queries that the user can formulate:
• Movies by director
• Movies starring an actor in a particular genre
• Movies from a particular year starring a certain actor
• Movies starring a pair of actors
How It Works
The features are made possible by technologies that allow for extremely efficient representations of semantic graphs as well as a lightning-fast runtime component that evaluates the user’s natural language input. The runtime component analyses the user’s input, determines the intent, extracts any recognized intents, and then generates the most likely interpretations. Uniquely, this system can generate extensions to the query even if no user has ever typed them in before, allowing additional, never-seen-before suggestions to be generated. Traditional autocomplete generally depends on having seen users issue the queries before.
The Data
For both academic searches and movies, the understanding of the underlying domain is represented by a graph. The data is derived from the semantic graph that Bing uses to understand the world. This is stored in a format that allows us to look up information at runtime within milliseconds, thousands of times per second as the user types. This graph store allows us to look up exact matches, such as ‘tom cruise.’ It is also able to support the notion that both ‘tom c’ and ‘tom connor cruise’ refer to the same person.
The Understanding
The final part of the system is the really interesting part. This is where we work out the meaning of the user’s input, and what the best possible completions of that potentially partial intent are. Let’s illustrate this with an example.
1) Imagine a user Jane is looking for machine learning papers by Andrew Ng presented at NIPS. As the user starts typing, Bing’s existing completions technology does a good job of generating suggestions and even showing that we understand what machine learning is.
Even at this stage the new system knows that it could start offering suggestions, but for now it detects that the intent is still quite ambiguous, so it doesn’t trigger yet.
2) As Jane continues typing, the new understanding system kicks in. The system recognizes the intent, and starts evaluating the graph for the next best possible completions.
At this stage it starts exploring potential paths, and starts generating completions.
3) The more Jane types the more constrained the evaluations of the possible completions become, and Jane is able to select the correct suggestion to issue her query.
The compelling detail about this process is that as we generate the candidates for suggestions or interpret the user input, we don’t just work with simple string representations. Instead, we developed a set of rich objects capturing an extremely detailed semantic interpretation of the query: its intent, domain information, parts-of-speech mapping, and more.
For the query that was constructed for this example, “machine learning papers by andrew ng in nips” we have the following information representation:
As you can see, this allows us to produce truly relevant results on the SERP, related not just to textual matches from the query, but also to a deeper semantic understanding of the user’s intent. An additional benefit is that, since Bing fully understands the query that is constructed, we know that results will be returned. This avoids suggesting ‘dead-end’ queries with no results, or grammatically incorrect or misspelled queries, which can be the case with more generalized language-model-based synthetic queries.
This is an interesting example of how different technologies developed by Bing and TNR can be brought together in new ways to add value for our users. Please explore the feature and let us know what you think via Bing Listens so we can continue to improve our product.
As promised, we are continuing the wave of sharing what we kicked off last week during the “App Dev on Xbox” event. Every week for the next two months, we will be releasing a new blog post where we will focus on a specific topic around UWP app development where Xbox One will be the hero device. With each post, we will be sharing a demo app experience around that topic, and open sourcing the source code on GitHub so anyone can download it and learn from it.
For the first blog post, we thought we’d continue the conversation around premium experiences, or better said, how to take your beautiful app and tailor it for the TV in order to delight your users.
The Premium Experience
Apps built with the Anniversary Update SDK “just work” on Xbox, and your users are able to download your apps from the Store the same way they do on the desktop and the phone. Even if you have not tested your app on Xbox, the experience in most cases is acceptable. Yet acceptable is not always enough, and as developers we want our apps to have the best experience possible: the premium experience. In that spirit, let’s cover seven (7) things to consider when adapting your app for the TV premium experience.
Fourth Coffee is a sample news application that works across the desktop, phone, and Xbox One and offers a premium experience that takes advantage of each device’s strengths. We will use Fourth Coffee as the example to illustrate the default experience and what it takes to tailor it to get the best experience. All code below comes directly from the app and each code snippet is linked directly to the source in the repository which is open to anyone.
1. Optimize for the Gamepad
The first time you launch your app on Xbox, you will notice a pointer that you can move freely with the controller and use to click on buttons, doing much of what you can do with a mouse. This is called mouse mode, and it is the default experience for any UWP app on Xbox. This mode works with virtually any layout, but it’s not always the experience your users expect, as they are used to directional focus navigation. Luckily, switching to directional navigation is as simple as one line of code:
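(A sketch of that line, set in the App constructor in App.xaml.cs; the property and value are part of the Anniversary Update SDK.)

// Turn off mouse mode and let directional (XY) focus navigation take over,
// while still allowing individual controls to request a pointer when needed.
this.RequiresPointerMode = Windows.UI.Xaml.ApplicationRequiresPointerMode.WhenRequested;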
Note: WhenRequested allows a control to request mouse mode if needed when it is focused or engaged to provide the best experience.
In general, if your app is usable through the keyboard, it should work reasonably well with the controller. The great thing about the UWP platform is that the work that enables directional navigation and gamepad support on Xbox also enables the same on other UWP devices running the Anniversary Update. So you can just plug in a controller to your desktop machine and try it out immediately. Try it out with some of the default applications such as the Store or even the Start Menu. It’s really handy on the go; no need to bring your Xbox on the plane just for development.
Navigating the UI with a controller is intuitive and the platform has already mapped existing keyboard input behaviors to gamepad and remote control input. However, there are buttons on the controller and the remote that have not been mapped by default and you might want to consider using them to accelerate navigation. For example, a lot of developers are mapping search to the Y button and users have already started to identify Y as search. As an example, here is the code from Fourth Coffee where we’ve mapped the X button to jump to the bottom of the page, accelerating navigation for the user:
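The linked snippet is not reproduced here, so here is a rough sketch of the idea (MainScrollViewer is a placeholder name, not the actual Fourth Coffee element):

// Rough sketch: treat the gamepad X button as an accelerator that jumps to the bottom of the page.
private void Page_KeyDown(object sender, KeyRoutedEventArgs e)
{
    if (e.Key == Windows.System.VirtualKey.GamepadX)
    {
        // Scroll the page's main scrolling container all the way down.
        MainScrollViewer.ChangeView(null, MainScrollViewer.ScrollableHeight, null);
        e.Handled = true;
    }
}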
Even though apps on Xbox One generally run on much larger screens than the desktop, the users are much farther away, so everything needs to be a bit bigger with plenty of room to breathe. The effective working size of every app on the Xbox is 960 x 540 (1920 x 1080 with 200% scaling applied) in a XAML app (150% scaling is applied in HTML apps). In general, if elements are appropriately sized for other devices, they will be appropriate for the TV.
Note: you can disable the automatic scaling if you so choose by following these instructions.
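The linked instructions are not reproduced here; as a sketch, disabling the scaling is a single call made early in the app’s lifetime:

// Sketch: opt the app out of the automatic TV layout scaling (call during app startup).
bool succeeded = Windows.UI.ViewManagement.ApplicationViewScaling.TrySetDisableLayoutScaling(true);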
In some cases, you will find that the existing layout for your app does not work as well with directional navigation as it does with mouse or touch. The user might run into a situation where they are scrolling through an entire 500-item list just to get to the button at the bottom. In those cases, it’s best to change the layout of items or to use focus engagement on the ListView so it acts as one focusable item until the user engages it.
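Enabling focus engagement is a single property on the control; a minimal sketch (ItemsList is a placeholder name, not the actual Fourth Coffee control):

// Sketch: the list acts as one focusable item until the user engages it (e.g. presses A).
ItemsList.IsFocusEngagementEnabled = true;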
In cases where the focus algorithm does not prioritize the correct element, the developer can override the default behavior by using the XYFocus properties on focusable elements. For example, in the following code from Fourth Coffee:
RelatedGridView.XYFocusUp = PlayButton;
when the user presses Up while the RelatedGridView has focus, the PlayButton will always receive focus next, no matter where it is positioned.
Debugging focus
When first working with directional navigation, you might run into unexpected focus behavior. In those cases, it’s always helpful to take advantage of the FocusManager API and keep track of which element has focus. For example, you can log focus information to the Output window while debugging with a snippet like the one below:
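A possible version of such a snippet (a sketch, not necessarily the original code):

// Sketch: log the currently focused element whenever focus moves anywhere on the page.
this.GotFocus += (object sender, RoutedEventArgs e) =>
{
    var focused = Windows.UI.Xaml.Input.FocusManager.GetFocusedElement() as FrameworkElement;
    System.Diagnostics.Debug.WriteLine(
        "Focus moved to: " + focused?.Name + " (" + focused?.GetType().Name + ")");
};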
When users are interacting with your app using the keyboard or controller, they need to be able to easily identify which element currently has focus. With the Anniversary Update SDK, the existing focus visuals (the border around an element when it is in focus) have been updated to be more prominent. In the majority of cases, developers won’t need to do anything and the focus visual will look great.
Of course, in other cases you might want to modify the focus visual to apply your own color or style, or even completely change its shape or behavior. For example, the default focus visuals for the VideoButton on the DetailsPage are not as visible against the bright background, so we changed them:
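One way to make such a change is with the focus visual properties introduced in the Anniversary Update (a sketch; the actual Fourth Coffee change may differ):

// Sketch: make the focus visual stand out against a bright backdrop.
VideoButton.FocusVisualPrimaryBrush = new SolidColorBrush(Colors.Black);
VideoButton.FocusVisualSecondaryBrush = new SolidColorBrush(Colors.White);
VideoButton.FocusVisualPrimaryThickness = new Thickness(2);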
To take it even further, in MainPage the default focus visual for the navigation pane items has been turned off and replaced with a custom template.
For a more in-depth look at focus visuals, check out the guidelines.
4. TV Safe Area
Unlike computer monitors, some TVs cut off the edge of the display, which can cause content at the edge to be hidden. When you run your app on Xbox, you might notice obvious margins from the edge of the screen by default.
In many cases, using a dark page background with the dark theme (or a white page background on white theme) is exactly what is needed to get the optimal experience as the page background always extends to the edge. However, if you are using full bleed images as in Fourth Coffee, you will notice obvious borders around your app. Luckily, there is a way to disable this experience by using one line of code:
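That one line tells the framework to extend the app to the full window; a sketch (called early, for example in the page constructor):

// Sketch: use the full window instead of the default TV-safe region so full-bleed
// images can draw all the way to the edge of the screen.
var view = Windows.UI.ViewManagement.ApplicationView.GetForCurrentView();
view.SetDesiredBoundsMode(Windows.UI.ViewManagement.ApplicationViewBoundsMode.UseCoreWindow);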
Once you have your content drawing to the edge of the screen, you will need to make sure that any focusable content is not extending outside of the TV-safe area, which in general is 48 pixels from the sides and 27 pixels from the top and bottom.
To learn more about TV safe area, make sure to check out the guidelines.
5. TV Safe Colors
Not all TVs show color the same way, and that can impact the way an app looks. In general, RGB values in the 16–235 range are considered safe for the majority of TVs; if your app depends heavily on subtle differences in color or on high intensities, colors could look washed out or have unexpected effects. To optimize the color palette for TV, the recommendation is to clamp the colors to the TV-safe range. In some cases, clamping alone could cause colors to collide; if that happens, scaling the colors after clamping will give the best results.
The default colors for XAML controls are designed around black or white and are not automatically adjusted for the TV. One way to make sure the default colors are TV safe on Xbox is to use a resources dictionary that overwrites the default colors when running on the Xbox. For example, we use this code in Fourth Coffee to apply the resource dictionary when the app is running on the Xbox:
if (App.IsXbox())
{
    // use TV colorsafe values
    this.Resources.MergedDictionaries.Add(new ResourceDictionary
    {
        Source = new Uri("ms-appx:///TvSafeColors.xaml")
    });
}
There is a resource dictionary that you can use in your app here; also make sure to check out the guidance on TV-safe colors.
6. Snapped system apps
UWP apps by definition should be adaptive, or responsive, to provide the best experience at any size and on any supported device. Just like on any UWP device, apps on Xbox will need to account for changes in size. The majority of the time, apps on Xbox will be running in full screen, but the user can at any moment snap a system app (such as Cortana) next to your app, effectively changing its width.
One of the methods for responding to size changes is using AdaptiveTriggers directly in your XAML, which apply changes to the XAML depending on the current size. Check out the use of AdaptiveTriggers in Fourth Coffee on GitHub.
For resources around adapting your UI for various screen sizes, check out this blog post and take a look at the getting started guide.
7. System Media Controls
OK, so this one is not relevant for all apps, but seeing how the majority of apps on Xbox are media apps, it is absolutely worth including in this list. Media applications should respond to media controls initiated by the user no matter what method is used, and there are several ways a user can initiate media commands:
Via buttons in your application
Via Cortana (typically through speech)
Via the System Media Transport Controls in the multi-tasking tab even if your app is in the background (if implemented)
Via the Xbox app on other devices
If your application is already using the MediaPlayer class for media playback, this integration is automatic. But there are a few scenarios where you may need to implement manual control, and implementing the System Media Transport Controls allows you to respond to all the ways the user could be interacting with your app. If you use your own media controls, make sure to integrate with the SystemMediaTransportControls, as in the sketch below.
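A minimal sketch of that hookup for a hypothetical custom player (the handler bodies are placeholders):

// Sketch: wire up the System Media Transport Controls for a custom player.
var smtc = Windows.Media.SystemMediaTransportControls.GetForCurrentView();
smtc.IsPlayEnabled = true;
smtc.IsPauseEnabled = true;
smtc.ButtonPressed += (sender, args) =>
{
    // Note: this event can fire off the UI thread; marshal to the dispatcher if you touch UI.
    switch (args.Button)
    {
        case Windows.Media.SystemMediaTransportControlsButton.Play:
            // Resume playback in your own player here (placeholder).
            break;
        case Windows.Media.SystemMediaTransportControlsButton.Pause:
            // Pause playback in your own player here (placeholder).
            break;
    }
};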
Now that you are familiar with what it takes to optimize your app experience for the TV, it’s time to get your hands dirty and try it out. Check out the app source on our official GitHub page, watch the event if you missed it, and let us know what you think through the comments below or on Twitter.
This is just the beginning. Next week we will release another app experience and go in depth on how to create even richer app experiences with XAML and Unity. We will also cover how to create app services and extensions for your apps so other developers can build experiences that complement your apps.
Until then, happy coding and get started with Visual Studio!
Yes, I’m a bit late on this one. I was traveling to India and Israel in early September and didn’t have a chance to post about this.
A little over a week ago we deployed the sprint 104 work to Team Services. Check out the release notes for details.
We continued our journey this sprint to bring full process customization to Team Services with the ability to add custom work item types. Eventually this will lead to the full replacement of the Power Tools Process Template Editor with the new web based experience in TFS on-prem too – likely to land in TFS “15” Update 1 or 2.
One of my favorite improvements this sprint is the new work item history experience. The one we’ve had for a long time was kind of dull, noisy and hard to read. The new one is super nice. Note, it’s not quite finished yet – there are a few more improvements coming in the next sprint deployment.
Another change that’s much bigger than it looks is the new “manual intervention” release management task. The significant thing about this task is that it’s the first incarnation of automation pipeline tasks that can release the agent and then later reacquire a new one and continue. This sets us up to enable a variety of long-running workflows without keeping agents reserved.
Enjoy and we’ll see you again in a week or two for the sprint 105 deployment.
The Composition APIs allow you to enhance the appeal of your Universal Windows Platform (UWP) app with a wide range of beautiful and interesting visual effects. Because they are applied at a low level, these effects are highly efficient. They run at 60 frames per second, providing smooth visuals whether your app is running on an IoT device, a smartphone, or a high-end gaming PC.
Many visual effects implemented through the Composition APIs, such as blur, use the CompositionEffectBrush class in order to apply effects. Additional examples of Composition effects include 2D affine transforms, arithmetic composites, blends, color source, composite, contrast, exposure, grayscale, gamma transfer, hue rotate, invert, saturate, sepia, temperature and tint. A few very special effects go beyond the core capabilities of the effect brush and use a slightly different programming model. In this post, we’re going to look at an effect brush-based effect, as well as two instances of visual flair that go beyond the basic brush: drop shadow and scene lighting.
Blur
The blur effect is one of the subtlest and most useful visual effects in your tool chest. While many visual effects are designed to draw in the user’s attention, the blur’s purpose in user experience design is to do the opposite – basically saying to the user, “Move along, there’s nothing to see here.” By making portions of your UI a little fuzzier, you can direct the user’s attention toward other areas of the screen that they should pay more attention to instead. Aesthetically, blur has a secondary quality of transforming objects into abstract shapes that simply look beautiful on the screen.
Until you get used to the programming model, implementing effects can seem a little daunting. For the most part though, all effects follow a few basic recipes. We’ll use the following in order to apply a blur effect:
Prepare needed objects such as the compositor
Describe your effect with Win2D
Compile your effect
Apply your effect to your image
The compositor is a factory object to create the classes you need to play in the Visual Layer. One of the easiest ways to get an instance of the compositor is to grab it from the backing visual for the current UWP page.
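For example, a minimal sketch of grabbing the compositor in a page’s code-behind (assuming a _compositor field):

// Sketch: get the Compositor from the page's backing visual.
_compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;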
To use composition effects, you also need to include the Win2D.uwp NuGet package in your Visual Studio project. To promote consistency across UWP, the Composition effects pipeline was designed to reuse the effect description classes in Win2D rather than create a parallel set of classes.
Once the prep work is done, you will need to describe your (Win2D) Gaussian blur effect. The following code is a simplified version of the source for the BlurPlayground reference app found in the Windows UI Dev Labs GitHub repository, should you want to dig deeper later.
var graphicsEffect = new GaussianBlurEffect()
{
    Name = "Blur",
    Source = new CompositionEffectSourceParameter("Backdrop"),
    BlurAmount = (float)BlurAmount.Value,
    BorderMode = EffectBorderMode.Hard,
};
In this code, you are basically creating a hook into the definition with the Backdrop parameter. We’ll come back to this later. Though it isn’t obvious from the code, you are also initializing the BlurAmount property – which determines how blurry the effect is – of the GaussianBlurEffect to the value property of a Slider control with the name BlurAmount. This isn’t really binding, but rather simply setting a starting value.
After you’ve defined your effect, you need to compile it like this:
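The compilation step, sketched here (the exact code in the BlurPlayground sample may differ slightly):

// Sketch: compile the effect description, keeping Blur.BlurAmount adjustable later,
// and create an effect brush from the compiled factory.
var effectFactory = _compositor.CreateEffectFactory(graphicsEffect, new[] { "Blur.BlurAmount" });
_brush = effectFactory.CreateBrush();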
Compiling your blur effect probably seems like an odd notion here. There’s basically a lot of magic being done behind the scenes on your behalf. For one thing, the blur effect is being run out of process on a thread that has nothing to do with your app. Even if your app hangs, the effect you compile is going to keep running on that other thread at 60 frames per second.
This is also why there are a lot of apparently magic strings in your Composition effect code; things are being hooked up at the system level for you. “Blur.BlurAmount” lets the compiler know that you want to keep that property of the “Blur” object accessible in case you need to change its value later. The following sample handler for the Slider control will dynamically reset the blur amount on your compiled effect, allowing your users to change the blur by simply moving the slider back and forth.
private void BlurAmount_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    // Get slider value
    var blur_amount = (float)e.NewValue;

    // Set new BlurAmount
    _brush.Properties.InsertScalar("Blur.BlurAmount", blur_amount);
}
The last step in implementing the blur is to apply it to an image. In this sample code, the Image control hosts a picture of a red flower and is named “BackgroundImage.”
To apply the blur to your image control you need to be able to apply your compiled blur to a SpriteVisual, which is a special Composition class that can actually be rendered in your display. To do that, in turn, you have to create a CompositionBackdropBrush instance, which is a class whose main purpose is to let you apply an effect brush to a sprite visual.
var destinationBrush = _compositor.CreateBackdropBrush();
_brush.SetSourceParameter("Backdrop", destinationBrush);
var blurSprite = _compositor.CreateSpriteVisual();
blurSprite.Brush = _brush;
ElementCompositionPreview.SetElementChildVisual(BackgroundImage, blurSprite);
Once everything is hooked up the way you want and the blur has been applied to a new sprite visual, you call the SetElementChildVisual method to insert the sprite visual into the BackgroundImage control’s visual tree. Because it is the last element in the tree, it gets placed, visually, on top of everything else. And voila, you have a blur effect.
Effect brushes can also be animated over time by using the Composition animation system in concert with the effects pipeline. The animation system supports keyframe and expression animations, of which keyframe is generally better known. In a keyframe animation, you typically set some property values you want to change over time and set the duration for the change: in this case a start value, a middle value and then an ending value. The animation system will take care of tweening your animation – in other words, generating all the values between the ones you have explicitly specified.
You’ll notice that when it comes time to apply the animation to a property, you once again need to refer to the magic string “Blur.BlurAmount” in order to access that property since it is running in a different process.
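As a sketch, a keyframe animation targeting that property might look like this (the values and duration are arbitrary):

// Sketch: animate the compiled blur from sharp to fully blurred and back.
var blurAnimation = _compositor.CreateScalarKeyFrameAnimation();
blurAnimation.InsertKeyFrame(0.0f, 0.0f);    // start sharp
blurAnimation.InsertKeyFrame(0.5f, 20.0f);   // fully blurred at the midpoint
blurAnimation.InsertKeyFrame(1.0f, 0.0f);    // back to sharp
blurAnimation.Duration = TimeSpan.FromSeconds(3);
// The magic string routes the animation to the compiled effect's property.
_brush.StartAnimation("Blur.BlurAmount", blurAnimation);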
By animating your effects and chaining them together with other effects, you can really start to unlock the massive power behind the Composition effects pipeline to create beautiful transitions like the one below.
Here the blur effect is combined with a scaling animation and an opacity animation in order to effortlessly draw the user’s attention to the most significant information. In addition, effective and restrained use of effects and animations, as in this sample, creates a sense of pleasure and surprise as you use the app.
Drop Shadow
A drop shadow is a common and effective way to draw attention to a screen element by making it appear to pop off of the screen.
The simplest (and probably most helpful) way to show how to implement a drop shadow is to apply one to a SpriteVisual you create yourself, in this case a red square with a blue shadow.
You create the actual drop shadow by calling the CreateDropShadow method on the compositor instance. You set the amount of offset you want, as well as the color of the shadow, then attach it to the main element. Finally, you add the SpriteVisual “myVisual” to the current page so it can be rendered.
_compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;
// create a red sprite visual
var myVisual = _compositor.CreateSpriteVisual();
myVisual.Brush = _compositor.CreateColorBrush(Colors.Red);
myVisual.Size = new System.Numerics.Vector2(100, 100);
// create a blue drop shadow
var shadow = _compositor.CreateDropShadow();
shadow.Offset = new System.Numerics.Vector3(30, 30, 0);
shadow.Color = Colors.Blue;
myVisual.Shadow = shadow;
// render on page
ElementCompositionPreview.SetElementChildVisual(this, myVisual);
A shadow effect can be attached to basic shapes, as well as to images and text.
The Windows UI Dev Labs sample gallery even has an interop XAML control called CompositionShadow that does most of the work for you.
Scene Lighting
One of the coolest effects to come with the Composition APIs is the new scene lighting effect. In the animated gif below, this effect is applied to a collection of images being displayed in a ListView.
While there isn’t space to go into the detailed implementation here, a very general recipe for creating this effect looks like this:
Create various lights and place them in coordinate space.
Identify objects to be lit by targeting the lights at the root visual or any other visuals in the visual tree.
Use SceneLightingEffect in the EffectBrush to customize displaying the SpriteVisual.
Wrapping up
As mentioned, the best place to go in order to learn more about these and other effects is the repository for the Windows UI Dev Labs samples. You should also view these two short but excellent videos on the topic:
This month the roundup focus is on app stores. Whether you’re building apps for the Windows Store, the iOS App Store, or Google Play, these extensions provide build & release tasks to automate many facets of publishing your app. Whether you’re releasing updates to a production app, upgrading from alpha to beta, or managing your rollout, we’ve got you covered here.
With this extension, you’ll need to do a one time manual publish of your app to the store and follow the instructions to set up a new service endpoint with your App Store Publisher credentials. Once that’s done, configure the two build/release tasks this extension adds and let them do the work!
App Store Release – This will be your bread-and-butter task. It supports automating the release of updates to existing iOS TestFlight beta apps or production apps in the App Store. It’s the perfect final step in any CI environment for an iOS developer.
App Store Promote – Do you have an iTunes Connect app you’re looking to promote? The App Store Promote task automates the promotion of a previously submitted app from iTunes Connect to the App Store.
With this extension, you’ll need to do a one-time manual publish of your app and create a service account with permissions to manage your app; this can all be done from the Google Developers Console. Once you complete the service endpoint configuration to give Team Services your publisher credentials, you’re ready to try out the three tasks this extension brings to the table.
Google Play – Release – This is your bread-and-butter task; it automates the release of a new Android app version to the Google Play store.
Google Play – Promote – Got apps on different release tracks? This task automates the promotion of a previously released Android app update from one track to another (e.g. alpha -> beta).
Google Play – Increase Rollout – If you are leveraging the Google Play rollout track, this task automates increasing the rollout percentage of a previously released app update. Manage how many people receive your app directly from your CI environment!
Published by Dave Smits, this extension provides tasks to release or update your Windows Store application from your CI environment, and is similar to the one released by Microsoft. With his extension, Dave has gone the extra mile to support release flighting and automatic version rev’ing with the addition of two new tasks:
Update Appx Version – Add this to your build definition to automate version increments to your appx package.
Publish to Windows Store – This is your bread-and-butter task that automates publishing or updating your app. It gets really powerful when you combine it with the Team Services release environments, as the task also allows you to target a specific flight (ex. alpha, beta, new_ui_test).
Are you using an extension you think should be featured here?
I’ll be on the lookout for extensions to feature in the future, so if you’d like to see yours (or someone else’s) here, then let me know on Twitter!
We recently posted new bits for our 1.0.5 release of the Visual C++ for Linux extension for Visual Studio 2015. This release has some major performance improvements, featuring incremental copy and build and a considerable reduction in the number of connections to the remote Linux machine. We’ve also made significant improvements in IntelliSense since our last post here.
Makefile project template
Remote source copy management
Overridable C/C++ compiler path
New debugging options
Makefile Project Template
A Makefile project template has been added that supports using external build systems on your remote machine (make, gmake, CMake, bash script, etc.). This works as you would expect: under the C++ project property pages you can set your local IntelliSense paths; then, on the remote build property page, you add the commands, semicolon separated, that trigger your build on the remote machine. In the example here, I’m switching to my remote directory with my build script and then executing it.
I’ve thrown together some bash scripts that can generate our makefile project with your sources based on the directory structure. These scripts do assume that the source code on the Linux machine is in a directory that has been mapped to Windows. They do set the new flag in the project properties to not copy files remotely. These are unlikely to meet all needs but should give you a good starting point if you have a large project.
Remote file copy management
We also made it possible to specify at the file and project level whether or not a file should be remotely copied. This means you can use your existing build mechanisms just by mapping your existing sources locally and adding them to your project for editing and debugging.
Overridable C/C++ Compiler Path
You can now override the compiler commands used on the remote machine in the Property Pages. That will enable you to point to specific versions of GCC if needed or even point to an alternate compiler like clang. You can use either full paths or a command available on your path.
Under the Build Events node of the project properties there are also new pre-build and pre-link remote build events, as well as options for arbitrary file copy in all build events, to provide greater flexibility.
New Debugging Options
There have also been improvements in debugging. We added a new gdb mode to improve compatibility where we may not have the correct client gdb bits on Windows for the remote target.
You can also override the debugger command itself; this is useful for debugging external programs compiled outside of VS.
The Debugging Property Page now supports additional gdb commands to be passed to the debugger and run when debugging starts. One example of where this can come in handy is Azure IoT projects. If you’ve used the Azure IoT C SDK on the Raspberry Pi, you may have run into an illegal instruction signal (SIGILL) being reported when you start debugging. This is caused by some interactions between libcrypto and gdb; you can find some discussion of this issue here. While debugging can continue, you can avoid this error by passing an instruction to the debugger before it starts, in this case “handle SIGILL nostop noprint”.
We want to hear from you!
In addition to our support email alias, vcpplinux-support, VC++ for Linux has a public issue list on GitHub. This is a great option for having public discussion or reporting bugs. We have added our backlog for the extension here as well. You can see what we are targeting for a release by looking at the milestone tagged to an issue. There are a few we’ve committed for 1.0.6 already.
So please do follow our GitHub issue list and use it, either by submitting feedback or by +1’ing existing feedback there. We love hearing about how and where our extension is being used. Feel free to use our support email alias if you want to share that with us even if you don’t have any issues. You can also find me on Twitter @robotdad.
We look forward to hearing from you.
Change lists for release since last post
8/19/2016 v1.0.5
Major performance improvements with incremental copy and build, and a considerable reduction in the number of connections
Makefile project support for external build systems (make, CMake, etc.)
Overridable C/C++ compiler path in Property Pages
Remote source copy management per file/project: no copy, and/or override destination path
Debug command override, useful for debugging external programs
Pre-build and pre-link remote build events
Arbitrary file copy as part of the build events
Overridable timeouts through the project for command execution, compile, link and archiver.
Add elements to the project file to override the default value of 30 minutes. Values are:
RemoteExecuteTimeout, RemoteCompileCommandTimeout, RemoteLdCommmandTimeout, or RemoteArCommmandTimeout
Debugging Property Page support for additional gdb commands for the debugger to run before starting debugging (debugger bootstrap commands)
Accessibility is about making your app usable to the largest possible audience. For some apps, accessibility is required by law. For others, it’s part of the service you are offering to a specific audience and a way to make your app more generally appealing.
Choosing to incorporate accessibility features is a good idea no matter what your motivation. Thinking about accessibility, in turn, will help you to become a better designer because you will be considering the user experience much more broadly for a greater variety of users.
Be accessible
Accessibility options include features relating to mobility, vision, color perception, hearing, speech, cognition and literacy. However, you can address most requirements by providing:
support for keyboard interactions and screen readers
support for user customization, such as font, zoom setting (magnification), color, and high-contrast settings
alternatives or supplements for parts of your UI, such as audio descriptions of text for those who are visually impaired
Standard Windows controls already have Microsoft UI Automation support and are accessible by default, so they require fewer app-specific accessibility attributes. If you want to create a custom control, you can add similar support by using custom automation peers.
In the UI design, here are some steps you should take to ensure your app works well in the following scenarios:
Screen reading
Users of this feature rely on a screen reader like MS Narrator to help them create a mental model of your UI. To help them interact with your app, you need to provide information about its UI elements, such as name, role, description, state and value. Learn more about exposing basic accessibility information.
Keyboard accessibility
Allow users to interact with all UI elements by keyboard only. This enables them to:
– navigate the app by using Tab and arrow keys
– activate UI elements by using the Spacebar and Enter keys
– access commands and controls by using keyboard shortcuts
Learn more about implementing keyboard accessibility.
Accessible visual experience
Some visually impaired users prefer text displayed with a high contrast ratio. They likely also need a UI that looks good in high-contrast mode and scales properly after changing settings in the Ease of Access control panel. Where color is used to convey information, users with color blindness need color alternatives like text, shapes and icons. Learn more about supporting high-contrast themes and about meeting accessible text requirements. Important: To prevent seizures, avoid elements that flash. If you must include flashing elements, do not let them flash more than three times per second.
Reduce UI button pressing
There are other design-related ideas you can employ to make your app more accessible. For example, try to reduce UI gestures to make common tasks require less button pressing.
Provide multiple ways to interact with the same control. For example, allow UI elements to be activated with the keyboard, rather than just by touch or click. A control that only supports mouse interactions would be extremely difficult, if not impossible, for a visually impaired person to use. To make a control accessible to the visually impaired, you must provide keyboard shortcuts (see the sketch below). Also, let users navigate your app using the Tab and arrow keys.
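As a minimal sketch for a hypothetical custom, clickable element (the control and helper names are placeholders):

// Sketch: let keyboard users activate a custom, clickable element with Enter or Space.
private void CustomTile_KeyDown(object sender, KeyRoutedEventArgs e)
{
    if (e.Key == Windows.System.VirtualKey.Enter ||
        e.Key == Windows.System.VirtualKey.Space)
    {
        ActivateTile();   // placeholder: the same code path the pointer click handler uses
        e.Handled = true;
    }
}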
Note: Include only interactive elements in the tab order.
Consider creating a tab flow diagram to help you plan how the user would progress through common tasks using only the keyboard. The tab order is also important because it indicates what order screen readers should present the text to the user. For more information, refer to the eBook Engineering Software for Accessibility.
Not everyone has perfect vision
Design your text and UI to support a high-contrast theme. While color is important, it must not be the only channel of communicating information. For example, users who are color blind would not be able to distinguish some color status indicators from their surroundings. Include other visual cues, preferably text, to ensure that information is accessible.
Figure 1: The screen on the right represents, for those viewing this blog without colorblindness, what those who are colorblind would most likely see – reds and greens are indistinguishable.
Figure 2: By adding text indicators, users who are colorblind are able to recognize the various color options even though they cannot distinguish the colors themselves.
Imagine you had to work with a black and white printout of your app’s screen. The values of the colors – how dark or light they are on a grayscale – help you tell them apart. This is important to keep in mind when you are selecting colors for highlights and calls to action, which you read about in our blog on visual communication and visual cues. You can also add textures to help distinguish the colors – but try to keep them subtle.
Optimize your UWP app design for touch input and get basic support for mouse, pen and touchpad by default. Visit MSDN to learn more about designing interactions that are easy to use.
Scale
Allowing users to zoom and resize elements can be helpful to people with visual impairment, especially for images that include words. Ensure that text and UI scale appropriately when Ease of Access settings are changed. However, take care not to start with a font size that is too small in general for many users. Everyone’s vision deteriorates as they age; your app should be available to users of any age.
To allow for differences in vision, provide scaling options for your users in your app settings. They might change the font size for easier reading or, if you include the option, may shrink or enlarge the UI as well.
Note: Using vector images (SVG) rather than raster images makes scaling easier. Raster images can become pixelated when enlarged or distorted when shrunk. Vector images look proportional and clear at any scale.
Figure 3: When zoomed in on a section of the illustration, the vector image produces clear, sharp edges compared to the pixelation and blurriness of the raster image.
Once you believe you have your design working, be sure to test it on your target device(s). The Windows Software Development Kit (SDK) includes accessibility testing tools such as AccScope, Inspect and UI Accessibility Checker. Testing the design should help you identify any areas that need correcting, as well as any opportunities presented on different screen sizes. For more information about these tools, see the article about accessibility testing on MSDN.
If you want your app to look good on very large screens, you may want to include optional larger images, add more whitespace, add rows or columns or incorporate more navigation options without using submenus. You could also take advantage of the extra space by adding something like an overlay on part of the screen to provide more information, such as details about a selected item or a view of the user’s cart.
Figure 4: As the app scales up, the images’ size increases and more spacing appears between elements. The navigation also expands and now includes text indicators. The extra space allows more information to be included on the app’s canvas, as seen here with the inclusion of the map.
Wrapping up
Accessibility design is really just usability design for a larger audience. While the Universal Windows Platform (UWP) can take care of some of this for you, you will find that a bit of thoughtfulness in incorporating built-in controls will actually go a long way toward making your app more accessible and usable for everyone. To learn more about UWP and accessibility, refer to the following MSDN articles:
The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.
Today, we are releasing a set of reliability and quality updates for .NET Core 1.0. The quickest way to get the updates is to head over to dot.net/core and follow the updated install instructions for your operating system. You can download and install the update as an MSI for Windows, a PKG for macOS and updated zips and packages for Linux OSes. If you don’t yet have .NET Core, you can start with this release. It contains everything you need to get started.
We are calling this release .NET Core 1.0.1. It is the first .NET Core 1.0 Long Term Support (LTS) update. We recommend that everyone move to it immediately. You need to be on the latest LTS release to get support from Microsoft.
F# template has been updated for .NET Core 1.0 – cli 3789
Update ASP.NET Core templates to reference ASP.NET Core 1.0.1 – cli 3948
Update ASP.NET Core templates to correctly publish CSHTML files – cli 3950
The following is a summary of updates/fixes in the ASP.NET Core and Entity Framework Core 1.0.1 releases, which you can get by referencing updated NuGet packages.
Microsoft Security Advisory 3181759 : Vulnerabilities in ASP.NET Core View Components Could Allow Elevation of Privilege – aspnet 203
We’ve talked about future updates to .NET Core, including adopting MSBuild and establishing .NET Standard 2.0. These changes will be included in an upcoming feature release and not in a patch version like the one we are releasing today.
Updating your Machine to use .NET Core 1.0.1
.NET Core versions are installed side-by-side. After installing .NET Core 1.0.1, you will have two .NET Core versions on your machine, assuming that you already had .NET Core 1.0.0 installed. The installation of .NET Core 1.0.1 does not delete or otherwise update your .NET Core 1.0.0 installation.
.NET Core has a roll-forward policy for patch versions (the 3rd part of the version number, per semver). We only include important quality and reliability updates in patch versions. We do not include new features in patch versions. After you install .NET Core 1.0.1, apps that previously used .NET Core 1.0.0 will automatically use .NET Core 1.0.1. This policy results in apps running on a more secure and reliable version of .NET Core. The roll-forward policy only applies to patch versions. For example, an app built for .NET Core 1.0.0 will roll forward to 1.0.100, but not to 1.1.0 or 2.0.0.
Updating your Application to use .NET Core 1.0.1
You don’t need to do anything to get your application to use .NET Core 1.0.1. It will roll-forward to .NET Core 1.0.1 by default, as discussed above. You can set .NET Core 1.0.1 as the minimum version for your app by updating your .NET Core metapackage reference to Microsoft.NETCore.App 1.0.1. An app configured this way will no longer run on .NET Core 1.0.0, even if it’s the only .NET Core version on a machine.
You do need to make a change to update to ASP.NET Core 1.0.1 and Entity Framework 1.0.1, by referencing a newer NuGet package. You can see examples of that in the ASP.NET Core 1.0.1 blog post.
What you can do if you have trouble with .NET Core 1.0.1
First, if you are having trouble, we want to know about it. Please report issues on GitHub issue – core 267.
There are a few options for working around trouble with .NET Core 1.0.1.
You can opt specific apps out of the roll-forward policy. This approach is only encouraged as a short-term option, to determine why an app isn’t working correctly and to fix it. Apps that are configured this way for an extended period of time may run in an insecure state without anyone realizing it. You can configure an app to not roll forward by setting applyPatches to false in your app’s runtimeconfig.json file. Again, it’s not recommended to use this approach for an extended period of time.
The other approach is to uninstall .NET Core 1.0.1. Your machine will return to using .NET Core 1.0.0, assuming that you already had it installed. You can uninstall .NET Core 1.0.1 using the installer or delete the 1.0.1 Microsoft.NETCore.App directory manually. You can find out where .NET Core is installed on your machine by using where on Windows and which on macOS. Example: where dotnet.
On Linux, you can uninstall .NET Core 1.0.1 with your package manager if you installed it that way. If you installed it via a .zip or .tar.gz then you can use simple file operations to add or remove .NET Core versions. If you don’t know how .NET Core was installed and don’t know how to find it, you can use the following command to find it:
Please upgrade to today’s release. We’ve included security and reliability updates that are important to ensure that your apps are secure, reliable and have great performance. Today’s release is the first update for .NET Core. As we make future updates, the approach to updating your .NET Core installation and your apps will get more familiar and straightforward.
Thanks to everyone who has installed and used .NET Core, ASP.NET Core and Entity Framework Core. We appreciate all of the feedback that we’ve received from you and please keep it coming.