
Code Search is now Java friendly


In addition to C#, C, C++, and Visual Basic code, you can now run semantic searches across Java code. Building on our Java feature set, we recently enabled contextual search for Java files in the Code Search extension for Visual Studio Team Services and Team Foundation Server starting with TFS “15”. You can apply code type filters to search for specific kinds of Java code such as definitions, references, functions, comments, strings, namespaces, and more.

Semantic search for Java enables Code Search to provide more relevant search results. For instance, a file with a match in a definition is ranked above a file with a match in a method reference. Similarly, matches in comments are ranked lower than references, and so on.

Code Search - Ranking Results

You can use Code Search to narrow down your results to exact code type matches. Navigate quickly to a method definition to understand its implementation simply by applying the definition filter, or scope the search to references in order to view calls and maximize code reuse. You can filter your search to basetype instances to locate a list of derived classes or scope a search to interface instances.

As you type in the search box, select functions and keywords from the drop-down list to quickly create your query. Use the Show more link to display all the available functions and keywords. Mix and match the functions as required.

Code Search - Filter Helper Dropdown

Alternatively, you can select one or a combination of filters from the list in the left column.

Code Search - Code Type Filters

You can type the functions and parameters directly into the search box. The following table shows the full list of functions for selecting specific types or members in your Java code.

To find code where “term” appears as a     Search for

argument                                   arg: term
base type                                  basetype: term
class definition or declaration            class: term
class declaration                          classdecl: term
class definition                           classdef: term
comment                                    comment: term
constructor                                ctor: term
declaration                                decl: term
definition                                 def: term
enumerator                                 enum: term
field                                      field: term
function                                   func: term
function declaration                       funcdecl: term
function definition                        funcdef: term
global                                     global: term
header                                     header: term
interface                                  interface: term
method                                     method: term
method declaration                         methoddecl: term
method definition                          methoddef: term
namespace                                  namespace: term
reference                                  ref: term
string literal                             strlit: term
type                                       type: term
typedef                                    typedef: term
union                                      union: term
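
Filters can also be combined in a single query. For example, using a hypothetical Java class name, a query like

def: OrderProcessor comment: deprecated

would match files where OrderProcessor is defined and “deprecated” appears in a comment.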

Code Search is available as a free extension on the Visual Studio Team Services Marketplace and on Team Foundation Server starting with TFS “15”. Click the install button on the extension description page and follow the instructions to enable Code Search for your account. For installation on TFS, see Administer Search.

You can learn more about the Java integration within Visual Studio Team Services at java.visualstudio.com.

Thanks,
Search team


The week in .NET – Bond – The Gallery


To read last week’s post, see The week in .NET – On .NET on Net Standard 2.0 – Nancy – Satellite Reign.

On .NET

We didn’t have a show last week, but we’re back this week with Rowan Miller to chat about Entity Framework Core 1.1 and .NET. The show is on Thursdays and begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

Package of the week: Bond

Bond is a battle-tested binary serialization format and library, similar to Google’s Protocol Buffers. Bond works on Linux, macOS, and Windows, and supports C++, C#, and Python.

To work with Bond, you start by defining your schema using an IDL-like specification.
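
For illustration, a minimal schema might look like the following (the Example struct and its fields are hypothetical):

namespace Examples

struct Example
{
    0: string Name;
    1: vector<double> Constants;
}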

Then, you codegen a C# library for the schema:
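
Assuming the schema above is saved as example.bond, Bond’s compiler, gbc, generates the C# types:

gbc c# example.bond

This produces an example_types.cs file with an Example class you can use from your project.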

You may now use the generated library in your C# code to instantiate objects of the types defined, as well as serialize and deserialize them:
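
Here is a minimal sketch using Bond’s C# API and the hypothetical Example type generated above:

using Bond;
using Bond.IO.Safe;
using Bond.Protocols;

var src = new Example { Name = "FooBar", Constants = { 3.14, 6.28 } };

// Serialize the object to a Compact Binary payload
var output = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(output);
Serialize.To(writer, src);

// Deserialize the payload back into a new object
var input = new InputBuffer(output.Data);
var reader = new CompactBinaryReader<InputBuffer>(input);
var dst = Deserialize<Example>.From(reader);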

Bond also offers deep cloning and comparison for objects of compatible types defined from Bond specifications.

Game of the Week: The Gallery – Episode 1: Call of the Starseed

The Gallery – Episode 1: Call of the Starseed is a four-part episodic fantasy adventure game designed for virtual reality. Meet mysterious and bizarre characters while you follow clues in search of your missing sister, Elsie. The Gallery – Episode 1: Call of the Starseed features full room-scale VR with interactions that will have you sitting, standing, crouching, and crawling around.

The Gallery - Episode 1: Call of the Starseed

The Gallery – Episode 1: Call of the Starseed was created by Cloudhead Games using Unity and C#. It is available for the HTC Vive on Steam and will be available in December on Oculus Home for the Oculus Touch.

Conference of the week: .NET DeveloperDays October 20-21 in Warsaw

.NET DeveloperDays is the biggest event in Central and Eastern Europe dedicated exclusively to application development on the .NET platform. It is designed for architects, developers, testers, and project managers using .NET in their work, and for those who want to improve their knowledge and skills in this field. The conference content is 100% in English, making it easy for an international audience to attend. The speaker lineup includes Jon Skeet, Dino Esposito, and Ted Neward.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Use Bing's Campaign Landscape Before You Vote

In roughly three weeks, America will elect the 45th President of the United States of America. However, this unprecedented campaign year has left many voters undecided and with unanswered questions when it comes to Trump vs. Clinton.
 
To help you get answers, stay informed on breaking news, and more, we’ve once again updated the Bing elections experience based on your changing needs and where we are in the campaign timeline. We’ve added a new tab called Campaign Landscape, which provides a view of winning predictions and candidate spending. Here’s what we have in store:

Winning predictions by state: Go to the Campaign Landscape tab to see who Bing predicts is going to win this election and who is going to win each state, both updated in real-time. Soon we’re adding forecasts for each congressional race and which party will control the balance of power in Congress. And if you want to hear what the polls have to say too, Bing will have you covered. You can rely on us to check the heartbeat of where this campaign season stands.


 
A view of candidate spending: Also under the Campaign Landscape tab is a map view of how much money each candidate is spending in each state, and how much each candidate has raised from different sources, industries, and Super PACs at the national level.


 
Your state’s voting preference by demographic: Curious who’s voting for Clinton and who’s a Trump fan? Use the map view by state to see search volume for each candidate in relation to age and gender.

 
The latest headlines: The homepage of our elections experience is a news hub which helps you search for the most salient, trending news stories in the election from across the web from a diverse set of publications.


Stream the final debate through Bing: Last but not least, you can stream the final presidential debate through Bing on your desktop or mobile device. Just search “presidential debate” tomorrow on Bing and we will direct you to a live stream of the final debate between the two candidates before election day. You can also find more information about the upcoming debate here.
 
Thank you.
 
-The Bing Team
 
 
 
 

New and updated Microsoft IoT Kits


Earlier this month, we released to customers around the world a new Windows Insider version of Windows 10 IoT Core that supports the brand new Intel® Joule™. We’ve been working hard on Windows 10 IoT Core; we’re proud of the quality and capability of the IoT Core Insider releases, and we’re humbled by the enthusiasm that you’ve shown in using it to build innovative devices and downright cool Maker projects.

We’ve spoken to thousands of you around the world at both commercial IoT events and Maker Faires, and in many of these conversations you have asked for better ways to get started – how to find the quickest path to device experimentation using Windows 10 and Azure. We’ve heard your feedback, and today I’d like to talk about how this is manifesting in two new IoT starter kits from our partners: the Microsoft Internet of Things Pack for Raspberry Pi 3 by Adafruit, and the brand new Seeed Grove Starter Kit for IoT based on Raspberry Pi by Seeed Studio.

Back in September of 2015 we partnered with Adafruit to make a Raspberry Pi 2 based Windows 10 IoT Core Starter Kit available. This kit was designed to get you started quickly and easily on your path to learning electronics, Windows 10 IoT Core, and the Raspberry Pi 2. Adafruit had tremendous success with this kit, and we’re happy to announce that they are releasing a new version of it.


This new kit keeps its focus on helping you get started quickly and easily in the world of IoT, but includes an upgrade to the new Raspberry Pi 3.

The best thing about this update? The price is the same as before.

The newest kit, the Grove Starter Kit for IoT based on Raspberry Pi from Seeed Studio, builds on the great design work that Seeed and their partner Dexter Industries have done around the Grove connector. It uses a common connector across a large array of available sensors to simplify the task of connecting to the device platform, helping you focus on being creative instead of soldering electrical connections.


The selection of compatible modular devices extends way beyond those included in the kit, making it applicable to beginners, Makers, and Maker Pros alike. The Seeed kit can be ordered from the Microsoft Store, Seeed Studio, or Digi-Key.

We’re excited about how these kits help enable everyone, from those with no experience to those who prototype for a living, to quickly get started making new devices with Windows 10 IoT Core, Azure IoT and the Raspberry Pi 3.

We can’t wait to see what you make!

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a bug to report, please use the Windows Feedback tool built directly into Windows 10.

Exploring Application Insights for disconnected or connected deep telemetry in ASP.NET Apps


Today on the ASP.NET Community Standup we learned about how you can use Application Insights in a disconnected scenario to get some cool - ahem - insights into your application.

Typically when someone sees Application Insights in the File | New Project dialog they assume it's a feature that only works in Azure, that will create some account for you and send your data to the cloud. While App Insights totally does do a lot of cool stuff when you have a cloud-hosted app and it does add a lot of value, it also supports a very useful "SDK only" mode that's totally offline.

Click "Add Application Insights to project" and then under "Send telemetry to" you click "Install SDK only" and no data gets sent to the cloud.

Application Insights dropdown - Install SDK only

Once you make your new project, you can learn more about AppInsights here.

For ASP.NET Core apps, Application Insights will include this package in your project.json - "Microsoft.ApplicationInsights.AspNetCore": "1.0.0" and it'll add itself into your middleware pipeline and register services in your Startup.cs. Remember, nothing is hidden in ASP.NET Core, so you can modify all this to your heart's content.

if (env.IsDevelopment())
{
    // This will push telemetry data through the Application Insights pipeline faster, allowing you to view results immediately.
    builder.AddApplicationInsightsSettings(developerMode: true);
}

Request telemetry and Exception telemetry are added separately, as you like.
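
As a sketch, that registration in Startup.cs looks something like this with the 1.0.0 package (your generated code may differ slightly):

public void ConfigureServices(IServiceCollection services)
{
    // Registers the TelemetryClient and related Application Insights services
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Tracks incoming HTTP requests; add it early in the pipeline
    app.UseApplicationInsightsRequestTelemetry();
    // Tracks unhandled exceptions thrown by downstream middleware
    app.UseApplicationInsightsExceptionTelemetry();
    app.UseMvc();
}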

Make sure you show the Application Insights Toolbar by right-clicking your toolbars and ensuring it's checked.

Application Insights Dropdown Menu

The button it adds is actually pretty useful.

Application Insights toolbar button

Run your app and click the Application Insights button.

NOTE: I'm using Visual Studio Community. That's the free version of VS you can get at http://visualstudio.com/free. I use it exclusively and I think it's pretty cool that this feature works just great on VS Community.

You'll see the Search window open up in VS. You can keep it running while you debug and it'll fill with Requests, Traces, Exceptions, etc.


I added an exceptional case to /about, and here's what I see:

Searching for the last hour's traces

I can dig into each issue, filter, search, and explore deeper:

Unhandled Exception

And once I've found something interesting, I can explore around it with the full details of the HTTP Request. I find the "telemetry 5 minutes before and after" query to be very powerful.

Track Operations

Notice where it says "dependencies for this operation"? That's not dependencies like "Dependency Injection" - that's larger system-wide dependencies like "my app depends on this web service."

You can custom instrument your application with the TrackDependency API if you like, and that will cause your system's dependencies to light up in AppInsights charts and reports. Here's a dumb example as I pretend that putting data in ViewData is a dependency. It should be calling a WebAPI or a Database or something.

var telemetry = new TelemetryClient();

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    ViewData["Message"] = "Your application description page.";
    success = true; // mark the operation successful so it isn't reported as a failed dependency
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("ViewDataAsDependancy", "CallSomeStuff", startTime, timer.Elapsed, success);
}

Once I'm tracking external dependencies I can search for outliers and long durations, categorize them, and they'll affect the generated charts and graphs if/when you do connect your App Insights to the cloud. Here's what I see after making this one code change. I could build this kind of stuff into all my external calls AND instrument the JavaScript as well. (Note Client and Server in this chart.)

Application Insight Maps

And once it's all there, I can query all the insights like this:

Querying Data Live

To be clear, though, you don't have to host your app in the cloud. You can just send the telemetry to the cloud for analysis. Your existing on-premises IIS servers can run a "Status Monitor" app for instrumentation.

Application Insights Charts

There's a TON of good data here and it's REALLY easy to get started either:

  • Totally offline (no cloud) and just query within Visual Studio
  • Somewhat online - Host your app locally and send telemetry to the cloud
  • Totally online - Host your app and telemetry in the cloud

All in all, I am pretty impressed. There's SDKs for Java, Node, Docker, and ASP.NET - There's a LOT here. I'm going to dig deeper.


Sponsor: Big thanks to Telerik! 60+ ASP.NET Core controls for every need. The most complete UI toolset for x-platform responsive web and cloud development. Try now 30 days for free!



© 2016 Scott Hanselman. All rights reserved.
     

Test & Feedback – Collaborate with your team


In the previous blogs, we have gone through the first two steps – Capture your findings and Create artifacts. In this blog, we will take you through the third step, Collaborate. The Test & Feedback extension provides many ways in which teams can collaborate with one another to drive quality. You can use the extension to share your findings in the form of a simple session report, or to gather additional feedback where necessary. Additionally, you can connect to your Visual Studio Team Services account or Team Foundation Server “15” to view all the completed sessions in one place and measure the effectiveness of your bug bashes and exploratory testing sessions using the rich insights provided. These collaboration techniques are available to users based on their access levels and the mode in which the extension is used.

Collaborate using Standalone mode

As described in the Overview blog, one of the modes supported by the extension is the Standalone mode. No connection to Visual Studio Team Services or Team Foundation Server is needed to use the extension in this mode. As you explore the application, you can capture your findings and create bugs offline. All the captured findings – screenshots, notes, and bugs created – are stored locally. While using standalone mode, you can use the session report feature to share your captured findings and reported issues with the rest of the team.

Session Report

The session report is generated either on demand, using the “Export” capability, or automatically at the end of the session. This HTML report can then be easily shared with others as a mail attachment, via OneNote or SharePoint, or in any other way as appropriate. The session report consists of two parts:

  1. Summary of bugs filed
    The first part of the session report provides a list of all the bugs filed while testing along with the details of screenshots and notes that were captured as a part of these bugs.
  2. Session attachments
    This part of the report contains, in chronological order, the screenshots and notes that were captured while testing the application. If you don’t want to file bugs and are simply capturing your findings, or if some captures (screenshots and notes) in the session are not included as part of any bug, this part of the report helps you keep track of them.


 

Collaborate using connected mode with stakeholder access

The new feedback flow enabled in Visual Studio Team Services and Team Foundation Server “15” allows teams to use the web access to send feedback requests to stakeholders. Stakeholders can use the Test & Feedback extension to respond to these feedback requests. The feedback response work items (bug, task, or feedback response work items) get automatically linked to the feedback request. This built-in traceability allows teams to easily track in one place all the feedback received from different stakeholders. Stakeholders, on the other hand, can leverage the capabilities provided in the extension to manage all the different feedback requests they receive.

Note: Feedback flow is supported only in Team Services and Team Foundation Server “15”.

Request feedback from stakeholders on Features/User Stories

Team members with basic access can now directly request feedback from stakeholders on the features/stories being worked on, using the “Request Feedback” option in the work item form’s context menu. You only need to fill out a simple feedback form, which sends individual mails to all the selected stakeholders along with the instructions provided in the form.


Respond to feedback requests

Stakeholders can easily respond to the feedback request by clicking the “Provide feedback” link in the mail, which automatically configures the Test & Feedback extension with the selected feedback request. Stakeholders can then use the full capture capabilities of the extension to capture their findings and submit their feedback in the form of feedback response, bug, or task work items.


To see the list of feedback requests assigned to you, click the feedback requests icon in the extension. From the list, you can select the feedback request you want to provide feedback on and quickly start providing feedback. From this page, you can also manage your “Pending feedback requests” by marking them as complete or by declining them, and you can switch between different types of feedback requests by clicking the desired radio button.


In addition to the above flow, stakeholders can also use the extension to provide voluntary feedback. In “Connected mode”, connect to the team project you want to provide feedback on. You can then use the extension to capture your findings and submit feedback in the form of feedback response, bug, or task work items.

Collaborate using connected mode with basic access

Users with basic access can connect to their Team Services account or Team Foundation Server “15” to view the “Session Insights” page. This page allows users to view all completed sessions, at an individual or team level, in one place, allowing them to collaborate with one another as a team. The page provides important summary-level data such as the total work items explored and created, the total time spent across all sessions, and the total number of session owners. Users can scope the data down by selecting the “period” they are interested in and by grouping the data on various pivots such as sessions, explored work items, and session owners. Depending on their needs, teams can use the session insights page to derive various kinds of insights.

Note: Click “Recent exploratory sessions” in the Runs tab under the Test hub to view the “Session Insights” page. Alternatively, you can navigate directly to the insights page from the extension by clicking the insights icon in the Timeline.

As mentioned in the Overview blog, one of the major scenarios that the extension supports is the bug bash. The Session Insights page enables users to run the end-to-end bug bash scenario, which includes running the bug bash, triaging the bugs filed, and finally measuring the effectiveness of the bug bashes conducted.


To run the bug bash, team leaders can specify the features and user stories they want to bash. Team members can bash the user story assigned to them by associating it with their session and exploring the application based on the user acceptance criteria provided, if any. Users can also explore multiple work items in the same session. Once the bug bash is complete, the team can view all the completed sessions in the “Recent exploratory sessions” page on the Test > Runs hub by changing the pivot to “Sessions”. Using the inline details page, you can easily triage the bugs found during the bug bash and assign them owners and an appropriate priority. Finally, team leaders can measure the effectiveness of the bug bashes by viewing the amount and quality of exploratory testing done for each of the features and user stories. They can also leverage the “Query” support to identify the user stories and features not explored. This data helps team leaders identify gaps in testing and can inform decisions about the quality of the features being shipped.


.NET Core Tooling in Visual Studio “15”


This post was co-authored by David Carmona, a Principal Program Manager Lead on the .NET Team, and Joe Morris, a Senior Program Manager on the .NET Team.

A couple of weeks back, we dedicated a blog post to introducing .NET Standard 2.0, which will significantly extend your ability to share code by unifying .NET APIs across all application types and platforms.

Today, we are going to focus on how we are unifying the project system and the build infrastructure with MSBuild. These changes were announced back in May and will be available aligned with the next version of Visual Studio (Visual Studio “15”). These tools will provide you with a development experience in Visual Studio, Visual Studio Code and the command line.

For the impatient: TL;DR

We released .NET Core 1.0 back in June. This included the RTM runtime and a preview of tools components. The final release of .NET Core tools will provide a build and project system that are unified with the rest of .NET project types, moving away from the project.json infrastructure that was specific to .NET Core. That makes .NET Core application and .NET Standard library projects just another kind of .NET project you can use in conjunction with other project types, such as Xamarin, WPF, Unity or UWP.

This new set of tools and improvements provides a big step forward in the experience. We’ve preserved key project.json characteristics that many of you have told us you value while enabling new cross-project scenarios not possible before. We are also planning to bring those benefits to all the project types over time and not just for .NET Core or .NET Standard. And because we will support full migration of existing project.json files, you can continue to work with them safely.

Here are the key improved experiences you will see in this unified Build system:

  • Project references work: You can reference .NET Core and .NET Standard library projects from existing .NET projects (WPF, ASP.NET, Xamarin, Unity etc.) and the opposite direction also, as explained in the .NET Standard post.
  • Package references are integrated: NuGet package references are now part of the csproj format, not a special file using its own format.
  • Cross-targeting support: You can cross-target multiple target frameworks in one project.
  • Simplified csproj format: The csproj format has been made as minimal as possible to make it easier to edit by hand. Hand-editing is optional, common gestures in Visual Studio will take care of updating the file and we will also provide command line options in the CLI for the most common actions.
  • Support for file wildcards: No requirement to list individual files in the project file. This enables folder-based projects that don’t require every file to be added manually and dramatically improve team collaboration, as the project file doesn’t need to be modified every time a new file is added.
  • Migration of project.json/xproj to csproj : You can seamlessly migrate your existing .NET Core projects from project.json to csproj without any loss at any time, in Visual Studio or at the command line.

Why do we need a standard Build System?

We’ve been talking recently about .NET Standard 2.0. It will give you access to many more APIs and can be used to share code across all the apps you are working on. That sounds great! It turns out that a key enabler of this outcome is a standard build system; in the absence of one, the .NET Standard 2.0 vision is not fully realized. .NET Standard requires a standard API and standard project types acting as currencies within a standard build system. With those in place, you can flow code to all the places you want it, enabling every potential combination of project-to-project and NuGet references.

.NET Core is the only one that isn’t using MSBuild today, so it’s the only one that has to change. This includes .NET Standard Library projects. With all project types using the same build system and project formats, it’s easy and intuitive to re-use libraries across different project types.

The New Tools Experience at the Command line

The updated tools experience will have similar ease-of-use as the existing project.json system, with a better experience if you want to switch back and forth with Visual Studio. The following walkthrough is intended to demonstrate that. Today’s post focuses on the command line experience. We will publish another post at a later date that walks through the same experiences in Visual Studio “15”.

Note: Today we will show manual editing of csproj files. We also plan to add dotnet commands that will update csproj and sln files for common tasks, such as adding a NuGet package, when working outside of Visual Studio.

New Template

dotnet new is the command for creating a new project from a template with the .NET Core command line tools. It will generate a csproj project file and a Program.cs file. The csproj file is given the same name as the directory by default. You can see the new experience in the image below.

dotnet-new now uses csproj

csproj Format

The csproj file format has been significantly simplified to make it friendlier for the command line experience. If you are familiar with project.json, you can see that it contains very similar information. It also supports a wildcard syntax to avoid the need to list individual source files.

This is the default csproj that dotnet new creates. It provides you with access to all of the assemblies that are part of the .NET Core runtime install, such as System.Collections. It also provides access to all of the tools and targets that come with the .NET Core SDK.
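
As a sketch, the default csproj looks roughly like this (paraphrasing the simplified SDK-style format; exact element names and versions in the preview tooling may differ):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Versions are illustrative -->
    <PackageReference Include="Microsoft.NETCore.App" Version="1.0.1" />
    <PackageReference Include="Microsoft.NET.Sdk" Version="1.0.0-*" PrivateAssets="All" />
  </ItemGroup>
</Project>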

As you can see, the resulting project file definition is in fact quite simple, avoiding the use of complex values such as GUIDs. A detailed mapping of project.json to .csproj elements is listed here.

NuGet Package references

You can add NuGet package references within the csproj format, instead of needing to specify them in a separate file with a special format. NuGet package references take the following form:

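As a sketch (the package id and version are placeholders):

<PackageReference Include="PackageName" Version="1.2.3" />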

For example, if you want to add a reference in the project above to WindowsAzure.Storage you just need to add the following line to the other two package references:
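
The version shown here is illustrative:

<PackageReference Include="WindowsAzure.Storage" Version="7.2.1" />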

Cross-targeting

In most cases, you will target a single .NET Core, .NET Framework or .NET Standard target with your library. Sometimes you need more flexibility and have to produce multiple assets. MSBuild now supports cross-targeting as a key scenario. You can see the syntax below to specify the set of targets that you want to build for, as a semicolon-separated list.
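
A sketch of the cross-targeting syntax (the target framework monikers are examples):

<PropertyGroup>
  <TargetFrameworks>netcoreapp1.0;net451</TargetFrameworks>
</PropertyGroup>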

This will also automatically set the right #defines that you can use in your code to enable or disable specific blocks of code depending on the target frameworks.

Project Migration

You will be able to migrate existing project.json projects to csproj very easily using the new dotnet migrate command. The following project.json generates, after applying dotnet migrate, the exact same csproj file shown above.
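
As a sketch, a project.json along these lines (contents illustrative):

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.1"
    }
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}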

.NET CLI commands

There are a set of useful commands exposed by the .NET CLI tools. dotnet restore, dotnet build, dotnet publish and dotnet pack are good examples. These commands will continue to be included with the .NET CLI and do largely the same thing as before, with the exception that they will be implemented on top of MSBuild, as appropriate.

The only difference is that the .NET CLI will provide a much thinner layer, as it will rely on MSBuild for most of the work. The primary role for the .NET CLI is to provide a user-friendly experience for executing MSBuild commands and also as a single tool host for commands that do not use MSBuild, such as dotnet new.

Closing

We are in the process of building the new tools and related experiences that you’ve seen in this post. We’re excited to ship a preview update to you later this year and the final version aligned with Visual Studio “15”.

In the meantime, you can safely continue to use the existing project.json format, which will carry forward as shown above.

We’d love to hear your feedback on this work. We think this is a big step forward for a unified .NET platform that will make your life easier by bringing your new and existing code to any application type and platform.

And because we develop in the open, you are welcome to join the GitHub repos where this work is taking place:

Thank you!

Microsoft NYC Reactor Opening


panorama-entrance01-low-res

Microsoft on Wednesday celebrated the grand opening of its latest Microsoft Reactor, based in New York City’s iconic Grand Central Terminal. This is the third Reactor where Microsoft is supporting the business, university, government, and entrepreneur communities.

The NYC Reactor is co-located with the Hub @ Grand Central Tech, and occupies 4,000 square feet of space. This demonstrates how Microsoft partners with other accelerators, incubators, and innovators in the startup space to support this community and provide them with key resources for success.

Read more here on the Microsoft NYC blog.

Cheers,

Guggs

@stevenguggs


Announcing the October 2016 Update for .NET Core 1.0


The title may be a bit grand for what’s included in this month’s update, but it is important for folks encountering this specific issue. Look for more next month.

We are releasing an update today which addresses an issue with installing on a clean macOS Sierra system. The change is limited to the macOS installer. There are no changes in the runtime or tools; .NET Core 1.0.1 remains the latest release for Windows and Linux.

You can download the updated .NET Core 1.0.2 macOS SDK installer now.

For more information on the change, please see the .NET Core 1.0.2 release notes.

If you are having trouble, we want to know about it. Please report issues on GitHub issue – core 294.

Thanks to everyone who reported this issue.

Answers to your top TACO questions


Last month I had the pleasure of participating in a panel discussion at the Microsoft Ignite conference where we discussed mobile app development. I spoke about Visual Studio’s Tools for Apache Cordova (a.k.a. “TACO” for short) side-by-side with James Montemagno of Xamarin fame, Ankit Asthana from the Visual Studio C++ team, and Daniel Jacobson from the UWP team. I heard a lot of really good questions from the audience. Some of these questions are so common that I figure many of our Visual Studio blog readers may find them interesting. In this post I’m going to share the answers to our most common TACO questions. (Sorry, no definitive answer to the most pressing question of all: hard or soft shell tacos?)

Feel free to ask new questions via the comments section at the bottom of this post! If I get a strong response, I may even look at writing a follow-up post to answer them.

What is TACO?

The Tools for Apache Cordova – or “TACO” for short – constitute a set of utilities that make it easier, friendlier and faster to develop mobile applications using web technologies (HTML, JavaScript, CSS). You can use these tools to build apps for Android, iOS, or Windows devices. TACO is a suite of products built by Microsoft, including:

I have both .NET and web development skills, should I use Xamarin or Cordova to go mobile?

Many of our blog readers, myself included, have experience building with both .NET and web-based technologies. Consequently, both Xamarin (which lets you take your .NET skills mobile) and Apache Cordova (taking your web skills mobile) may be appealing for mobile development. That’s why this has been our #1 most asked question for years, and even more so now that Xamarin has joined Microsoft and is included for free in Visual Studio!

The short answer is that “it depends” – not all apps or development teams are the same, so you want to look at where the two technologies excel and consider the skills of your existing team (which may just be you). Both products make it possible for you to share nearly 100% of your code across iOS, Android, and Windows applications.

Cordova is great if you:

  • Prefer working with JavaScript, HTML, CSS, and libraries built on top of that tech.
  • Already have web sites/content that you’d like to re-use in a mobile app.
  • Plan to use the most common device features, like the camera.
  • Want to take advantage of services like CodePush, which give you the ability to publish bug fixes and incremental updates to your app without resubmitting to the stores.

While you could build just about any mobile app using Cordova, I’d say you generally wouldn’t use Cordova to build a graphics- or data-processing-intensive application like a game; nor would you want to use it to build an app with the richest native-app user experience and animations (though there are frameworks you can use to build an app that feels just about as good as native). Among the customers we’ve spoken with, a common use for Cordova is to take existing “line of business” or data entry/forms-based web applications and make them mobile. These can be apps like expense and time tracking, retail inventory management, or investment portfolio tracking.

Both Xamarin and Cordova provide you a way to get at native device features, but there’s a big difference in how they work. Xamarin has built-in support for all native APIs on devices, but in Cordova you have to navigate the ecosystem of open source plugins (read more about plugins, below). Plugins can vary in quality and may not be updated as quickly as Xamarin. Companies like Adobe and Microsoft are vigilantly maintaining the plugins most often used by businesses/enterprises to make sure they work great, but other plugins are the domain of the larger community.

Xamarin is great if you:

  • Want to have full access to native API features.
  • Need to build apps using the latest user interface guidelines.
  • Prefer working with C#, .NET, XAML (In the case of Xamarin.Forms) and frameworks built on top of that tech.
  • Already have .NET libraries (like JSON.NET) or other .NET assets you’d like to re-use in a mobile app.
  • Want to take advantage of the full performance of a device.

Xamarin offers the ability to use existing .NET ecosystem technologies, such as NuGet, to build a fully native application that runs with the same performance that is expected from a native app. As Xamarin utilizes the APIs for each platform, Xamarin also offers the ability to use the latest and greatest of what each platform has to offer. Any app that you could build using the native platforms, you can build using Xamarin.

Regardless of your choice

No matter which way you go, I do recommend trying out the tools first and building a prototype or two to see what comfortably suits your development style and application needs.

I have an existing web app; how can I use Cordova to make it mobile?

The simplest thing you can do to take your existing web application to mobile is to build something called a hosted web application. This is a Cordova application that has all its content hosted on a web server, instead of stored locally on the device. It follows the traditional web server hosted model that web developers know and love today. This means you can leverage your existing web assets and create an app that can be loaded to an app store. You can learn more about this technique in the Create a hosted web app using Apache Cordova tutorial on the TACO documentation site. The hosted web app model goes a step further than just hosting a website in the application; it also makes it possible for the hosted web application to access native device capabilities by leveraging the Cordova plugin model, as the sketch below shows.
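
As a rough sketch, pointing a Cordova app at hosted content is mostly a config.xml change (the URL below is a placeholder for your site):

<content src="https://your-site.example.com/" />
<allow-navigation href="https://your-site.example.com/*" />

The <allow-navigation> element (from the whitelist plugin) permits the app’s WebView to navigate to your hosted pages.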

With this model you do have some downsides – you still must do extra work if you want an application that works offline or with no network connectivity (because you wouldn’t be able to get the app’s content from a web server when there’s no network connection).

You might also consider building a separate version of your application tailored just to mobile devices, but still share some common code – see below for some suggestions there.

How does Cordova compare to the new Progressive Web Apps model?

Progressive Web Apps, or “PWA” for short, allow you to build a mobile version of your website that end users can add to their device’s home screen. This PWA app will run from a web server, but has added capabilities to handle offline caching, send push notifications, and make background content updates. Using web standard APIs already available today, such as Geolocation, you can pretty simply make a web experience that functions like a native app without having to go through an app store (take a look at this article from our friends on the Ionic team to learn more).

I’d say the three key differences between PWAs in their current form, and Apache Cordova, are:

  1. PWAs do not provide you access to full native device features; only those supported by standard web APIs today. Cordova makes it possible to get at all device capabilities, as long as a plugin has been created to provide that functionality.
  2. A PWA cannot be discovered through app stores; if you want to build an app that can be discovered through an app store, you’d want to use Cordova to create that app. Note: The Microsoft Edge team is exploring how PWAs could be listed through the Windows Store.
  3. Mobile platform support for PWAs is currently limited; neither iOS nor Windows devices currently support the PWA model. As of this writing, you can use PWAs with the Chrome browser on recent versions of Android, and as an experimental feature in the Firefox and Opera web browsers. Microsoft Edge has also announced work to support PWAs going forward. If you want to reach the broadest set of end users across all the major device manufacturers and form factors, you’ll want to stick with Cordova for now.

Using Apache Cordova, can I use native features such as push notifications?

Yes! Using plugins with Apache Cordova, you can use a variety of native device features. Plugins exist for device features such as the camera, battery status, and push notifications. You can find a wide variety of plugins for the most common device features by searching the Cordova plugin repository. To work with push notifications, specifically, you can read about how to add push notifications to your Cordova app using Azure App Services.

In creating a plugin, the author provides:

  • A single JavaScript API that can be used across all supported platforms.
  • The native code implementation for each supported platform (e.g. Swift code for iOS, Java for Android, and C# for Windows).

Our team at Microsoft is helping to make sure that the plugins most important to businesses are working well, while an active community of Cordova developers works on building out other features. We make sure that the core, most important plugins work well on Android, iOS, and Windows devices, so long as those platforms support the technology.

How can I make a mobile app that will live for 5-10 years?

I know there are projects I’ve worked on in the past where we were building a solution that was expected to last 5 or more years with relatively little maintenance required from a developer team. I’ve heard from many of you who have had similar experiences, and the question has come up whether you can make a mobile app that would last this long. If you’ve built a mobile web application that can last this long, then you can certainly build that same app as a mobile application using Cordova.

My answer is different if you’re building a more complex application with multiple screens that accesses native device features and works with 3rd party services for features like push notifications. For this type of application, you’d need to not only design for the devices available today, but have an eye toward where you think devices will be in those 5-10 years. How will the UI behave on these future devices, and what device features will still be supported? You’d also need to select services that you know can still be relied on in the future. I think for many of us, these future requirements would be too hard to predict.

During our mobile panel discussion at the Microsoft Ignite conference, we generally agreed that it’s not possible to predict the future of mobile apps this far out, whether you’re using Cordova, Xamarin, Objective-C, Swift, Java, or something else. Instead, you should focus on building a service layer that can scale to support future requirements and mobile device changes. For example, instead of coding business logic into the application directly, create a backend service layer that is called by the application (e.g. via RESTful APIs) to handle that same logic. Then, as your application needs change over time, or new mobile apps are created, your existing service layer can still be used by those apps without having to modify that code.

In support of the service layer of your app, you may want to consider the following Microsoft services to see how well they’d work for you:

Have more questions? Share them with us!

While these are the most common questions I hear from developers, I suspect you have some more. Feel free to ask them in the comments below, or send us a direct email. If you have questions about Cordova issues or best practices, our visual-studio-cordova tag on Stack Overflow is also a great place to ask them. Also, be sure to check out our documentation site to learn more about TACO!

Jordan Matthiesen (@JMatthiesen)

Program Manager, Tools for Apache Cordova

Jordan works at Microsoft on JavaScript tooling for web and mobile application developers. He’s been a developer for over 18 years, and currently focuses on talking to as many awesome mobile developers as possible. When not working on dev tools, you’ll find him enjoying quality time with his wife, 4 kids, 2 cats, 1 dog, and a really big cup of coffee.

Search Bing to find radio stations to stream

There are many formats for listening to music, yet radio remains the number one platform. In fact, Nielsen reported earlier this year that 93 percent of adults listen to the radio every week.
 
Given this preference for radio, the Bing team has partnered with TuneIn to improve our radio-related search results, allowing users to more easily locate radio station information and find radio stations available for streaming.
 
Enhancing our radio search experience is a natural extension of Bing’s existing music search experience, which already includes the ability to find videos, read lyrics and locate where to listen to your favorite songs and podcasts.
 


Search for ‘online radio stations’ to see a sample of some of the most searched-for radio stations. Click the play button to listen to a radio station from the web. On a PC you’ll notice a smaller tab redirecting you to TuneIn’s website, allowing you to listen to music from the radio station while you continue to search or move on to another task. On mobile, TuneIn’s app will open right to the station you searched for.

Popular online radio stations
 
This is just the beginning. Today, with the help of TuneIn, we have identified over 10,000 stations and formats for you to discover and listen to, and will continue to add stations over the coming months.
 
We hope you enjoy this new feature on Bing and look forward to hearing your feedback. If you have other ideas for how we can make your music experience even better, go to Bing Listens and share your thoughts.
 
- The Bing Team
 

Camera APIs with a dash of cloud intelligence in a UWP app (App Dev on Xbox series)


Apps should be able to see, and with that, they should be able to understand the world. In the sixth blog post in the series, we will cover exactly that: how to build UWP apps that take advantage of the camera found on the majority of devices (including the Xbox One with the Kinect) and build a compelling and intelligent experience for the phone, desktop, and Xbox One. As with the previous blog posts, we are also open sourcing Adventure Works, a photo capture UWP sample app that uses native and cloud APIs to capture, modify, and understand images. The source code is available on GitHub right now, so make sure to check it out.

If you missed the previous blog post from last week on Internet of Things, make sure to check it out. We covered how to build a cross-device IoT fitness experience that shines on all device form factors and how to use client and cloud APIs to make a real time connected IoT experience. To read the other blog posts and watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works


Adventure Works is a photo capture UWP sample app that takes advantage of the built-in UWP camera APIs for capturing and previewing the camera stream. Using Win2D, an open source library for 2D graphics rendering with GPU acceleration, the app can enhance any photo by applying rich effects or filters, and by using the intelligent Cognitive Services APIs it can analyze photos to auto-tag and caption them appropriately and, more importantly, detect people and emotion.

Camera APIs

Camera and MediaCapture API

The first thing we need to implement is a way to get images into the app. This can be done via a variety of devices: a phone's front-facing camera, a laptop's integrated webcam, a USB webcam, and even the Kinect's camera. Fortunately, when using the Universal Windows Platform we don't have to worry about the low-level details of a camera because of the MediaCapture API. Let's dig into some code on how to get the live camera stream regardless of the Windows 10 device you're using.

To get started, we’ll need to check what cameras are available to the application and check if any of them are front facing cameras:


var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);

var desiredDevice = allVideoDevices.FirstOrDefault(device => device.EnclosureLocation != null && device.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front);

var cameraDevice = desiredDevice ?? allVideoDevices.FirstOrDefault();

We can query for devices using DeviceInformation.FindAllAsync to get a list of all devices that support video capture. What you get back from that Task is a DeviceInformationCollection object. From there you can use LINQ to get the first device in the list that reports being in the front panel.

The next line of code covers the scenario where the device doesn't have a front facing camera; in that case it just gets the first camera in the list. This is a good fallback for cameras that don't report being in a panel, or for devices that simply don't have a front facing camera.

Now it’s time to initialize MediaCapture APIs using the selected camera.


_mediaCapture = new MediaCapture();

var settings = new MediaCaptureInitializationSettings { VideoDeviceId = _cameraDevice.Id };
await _mediaCapture.InitializeAsync(settings);

To start this stage, instantiate a MediaCapture object (be sure to keep the MediaCapture reference as a class field because you must Dispose when you’re done using it later on). Now we create a MediaCaptureInitializationSettings object and use the camera’s Id to set the VideoDeviceId property. Finally, we can initialize the MediaCapture by passing the settings to the InitializeAsync method.

At this point we can start previewing the camera, but before we do, we’ll need a place for the video stream to be shown in the UI. This is done with a CaptureElement:
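
In XAML, that can be as simple as the following sketch (the PreviewControl name matches the code below):

<CaptureElement x:Name="PreviewControl" Stretch="Uniform" />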

The CaptureElement has a Source property; we set that using the MediaCapture and then start the preview:


PreviewControl.Source = _mediaCapture;

await _mediaCapture.StartPreviewAsync();

There are other considerations, like device rotation and resolution, for which the MediaCapture has easy-to-use APIs to access and modify those properties of the device and stream. Take a look at the Camera class in Adventure Works for a full implementation.

Effects

Now that we have a video stream, we can do a number of things above and beyond just taking a photo or recording video. Today, we'll discuss a few possibilities: applying a photo effect with Win2D, applying real-time video effects using Win2D, and real-time face detection.

Win2D

Win2D is an easy-to-use Windows Runtime API for immediate mode 2D graphics rendering with GPU acceleration. It can be used to apply effects to photos, which is what we do in the Adventure Works demo application after a photo is taken. Let’s take a look at how we accomplish this.

At this point in the app, the user has already taken a photo, the photo is saved in the app’s LocalFolder, and the PhotoPreviewView is shown. The user has chosen to apply some filters by clicking the “Filters” AppBarButton, which shows a GridView with a list of photo effects they can apply.

Okay, now let's get to the code (note that the code is summarized; check out the sample app for the full code in context). The PhotoPreviewView has a Win2D CanvasControl in the main section of the view:
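
As a sketch, the control is declared in XAML roughly like this, assuming xmlns:canvas="using:Microsoft.Graphics.Canvas.UI.Xaml" is declared on the page (the names match the code below):

<canvas:CanvasControl x:Name="ImageCanvas" Draw="ImageCanvas_Draw" />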

When the preview is initially shown, we load the image from the file into that canvas. Take note that Invalidate() forces the bitmap to be redrawn:


_file = await StorageFile.GetFileFromPathAsync(photo.Uri);

var stream = await _file.OpenReadAsync();
_canvasImage = await CanvasBitmap.LoadAsync(ImageCanvas, stream);

ImageCanvas.Invalidate();

Now that the UI shows the photo, the user can select an effect from the list. This fires the GridView’s SelectionChanged event and in the event handler we take the user’s selection and set it to a _selectedEffectType field:


private void Collection_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    _selectedEffectType = (EffectType)e.AddedItems.FirstOrDefault();
    ImageCanvas.Invalidate();
}

Since calling Invalidate forces a redraw, it will hit the following event handler and use the selected effect:


private void ImageCanvas_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    var ds = args.DrawingSession;
    var size = sender.Size;
    ds.DrawImageWithEffect(_canvasImage, new Rect(0, 0, size.Width, size.Height),
                           _canvasImage.GetBounds(sender), _selectedEffectType);
}

The DrawImageWithEffect method is an extension method found in EffectsGenerator.cs that takes in a specific EffectType (also defined in EffectsGenerator.cs) and draws the image to the canvas with that effect.


public static void DrawImageWithEffect(this CanvasDrawingSession ds, 
                                       ICanvasImage canvasImage, 
                                       Rect destinationRect, 
                                       Rect sourceRect, 
                                       EffectType effectType)
{
    ICanvasImage effect = canvasImage;

    switch (effectType)
    {
        case EffectType.none:
            effect = canvasImage;
            break;
        case EffectType.amet:
            effect = CreateGrayscaleEffect(canvasImage);
            break;
	 // ...
    }

    ds.DrawImage(effect, destinationRect, sourceRect);
}
private static ICanvasImage CreateGrayscaleEffect(ICanvasImage canvasImage)
{
    var ef = new GrayscaleEffect();
    ef.Source = canvasImage;
    return ef;
}

Win2D provides many different effects that can be applied as input to the built-in Draw methods. A simple example is the GrayscaleEffect, which simply changes the color of each pixel, but there are also effects that can do transforms and much more.

Win2D Video Effects

You can do a lot with Win2D and the camera. One more advanced scenario is to use Win2D to apply real-time video effects to any video stream, including the camera preview stream, so that the user can see what the effect looks like before they take the photo. We don't do this in Adventure Works, but it's worth touching on. Let's take a quick look.

Applying a video effect on a video stream starts with a VideoEffectDefinition object. This is passed to the MediaCapture by calling mediaCapture.AddVideoEffectAsync() and passing in that VideoEffectDefinition. Let’s take a simple example, applying a grayscale effect.

First, create a class in a UWP Windows Runtime Component project and add a public sealed class GrayscaleVideoEffect that implements IBasicVideoEffect.


public sealed class GrayscaleVideoEffect : IBasicVideoEffect

The interface requires several methods (you can see all of them here); the one we’ll focus on now is ProcessFrame() where each frame is passed and an output frame is expected. This is where you can use Win2D to apply the same effects to each frame (or analyze the frame for information).

Here’s the code:


public void ProcessFrame(ProcessVideoFrameContext context)
{
    // _canvasDevice is a CanvasDevice field, initialized when the effect is set up
    // (for example, in the IBasicVideoEffect.SetEncodingProperties callback).
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(_canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(_canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        // Wrap the input frame in a grayscale effect and draw it to the output frame.
        var grayscale = new GrayscaleEffect() { Source = inputBitmap };
        ds.DrawImage(grayscale);
    }
}

Back to the MediaCapture element: to add this effect to the camera preview stream, you need to call AddVideoEffectAsync:


await _mediaCapture.AddVideoEffectAsync(
    new VideoEffectDefinition(typeof(GrayscaleVideoEffect).FullName),
    MediaStreamType.VideoPreview);

That’s all there is to the effect. You can see a more complete demo of applying Win2D video effects here in the official Win2D samples on GitHub, and you can install the Win2D demo app from the Windows Store here.

Face Detection

The VideoEffectDefinition can be used for much more than just applying beautiful image effects. You can also use it to process the frame for information. You can even detect faces using one! Luckily, this VideoEffectDefinition has already been created for you: the FaceDetectionEffectDefinition!

Here’s how to use it (see the full implementation here):


var definition = new Windows.Media.Core.FaceDetectionEffectDefinition();
definition.SynchronousDetectionEnabled = false;
definition.DetectionMode = FaceDetectionMode.HighPerformance;

_faceDetectionEffect = (await _mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview)) as FaceDetectionEffect;

You only need to instantiate the FaceDetectionEffectDefinition, set some of its properties to suit your needs and then add it to the initialized MediaCapture. The reason we’re taking the extra step of setting the _faceDetectionEffect private field is so that we can spice it up a little more by hooking into the FaceDetected event:


_faceDetectionEffect.FaceDetected += FaceDetectionEffect_FaceDetected;
_faceDetectionEffect.DesiredDetectionInterval = TimeSpan.FromMilliseconds(100);
_faceDetectionEffect.Enabled = true;

Now, whenever that event handler is fired, we can, for example, snap a photo, start recording, or even process the video for more information, like detecting when someone is smiling! We can use the Microsoft Cognitive Services Emotion API to detect a smile; let’s take a look at this a little further.

Cognitive Services

Microsoft Cognitive Services let you build apps with powerful algorithms based on Machine Learning using just a few lines of code. To use these APIs, you could use the official NuGet packages, or call the REST endpoints directly. In the Adventure Works demo we use three of these to analyze photos: the Emotion API, Face API and Computer Vision API.

Emotion API

Let’s take a look at how we can detect a smile using the Microsoft Cognitive Services Emotion API. As mentioned above, where we showed how to use the FaceDetectionEffectDefinition, we hooked into the FaceDetected event. This is a good spot to check whether the people in the preview are smiling in real time and then take the photo at just the right moment.

When the FaceDetected event is fired it is passed two parameters: a FaceDetectionEffect sender and a FaceDetectedEventArgs args. We can determine if there is a face available by checking the ResultFrame.DetectedFaces property in the args.

In Adventure Works, when the handler is called (see here for the full event handler), we first check if there are any DetectedFaces in the image, and if so, we can grab the location of each face within the frame and call the Emotion API through our custom method, CheckIfEveryoneIsSmiling:


public async Task<bool> CheckIfEveryoneIsSmiling(IRandomAccessStream stream,
    IEnumerable<DetectedFace> faces, double scale)
{
    List<Rectangle> rectangles = new List<Rectangle>();

    foreach (var face in faces)
    {
        var box = face.FaceBox;
        rectangles.Add(new Rectangle()
        {
            Top = (int)((double)box.Y * scale),
            Left = (int)((double)box.X * scale),
            Height = (int)((double)box.Height * scale),
            Width = (int)((double)box.Width * scale)
        });
    }

    var emotions = await _client.RecognizeAsync(stream.AsStream(), rectangles.ToArray());

    return emotions.Where(emotion => GetEmotionType(emotion) == EmotionType.Happiness).Count() == emotions.Count();
}

We use the RecognizeAsync method of the EmotionServiceClient to analyze the emotion of each face in the preview frame. We make the assumption that if everyone is happy in the photo they must be smiling.

Face API

Microsoft Cognitive Services Face API allows you to detect, identify, analyze, organize, and tag faces in photos. More specifically, it allows you to detect one or more human faces in an image and get back face rectangles for where in the image the faces are.

We use the API to identify faces in the photo so we can tag each person. When the photo is captured, we analyze the faces by calling our own FindPeople method and passing it the photo file stream:


public async Task<List<PhotoFace>> FindPeople(IRandomAccessStream stream)
{
    Face[] faces = null;
    IdentifyResult[] results = null;
    List<PhotoFace> photoFaces = new List<PhotoFace>();

    try
    {
        // find all faces
        faces = await _client.DetectAsync(stream.AsStream());

        results = await _client.IdentifyAsync(_groupId, faces.Select(f => f.FaceId).ToArray());

        for (var i = 0; i < faces.Length; i++)
        {
            var face = faces[i];
            var photoFace = new PhotoFace()
            {
                Rect = face.FaceRectangle,
                Identified = false
            };

            if (results != null)
            {
                var result = results[i];
                if (result.Candidates.Length > 0)
                {
                    photoFace.PersonId = result.Candidates[0].PersonId;
                    photoFace.Name = _personList.Where(p => p.PersonId == result.Candidates[0].PersonId).FirstOrDefault()?.Name;
                    photoFace.Identified = true;
                }
            }

            photoFaces.Add(photoFace);
        }
    }
    catch (FaceAPIException)
    {
        // swallow Face API errors in this sample and return whatever was found
    }

    return photoFaces;
}

The FaceServiceClient API contains several methods that allow us to easily call into the Face API in Cognitive Services. DetectAsync allows us to see if there are any faces in the captured frame, as well as their bounding box within the image. This is great for locating the face of a person in the image so you can draw their name (or something else more fun). The IdentifyAsync method can use the faces found in the DetectAsync method to identify known faces and get their name (or id for more unique identification).

Not shown here is the AddPersonFaceAsync method of the FaceServiceClient API which can be used to improve the recognition of a specific person by sending another image for that person to train the model better. And to create a new person if that person has not been added to the model, we can use the CreatePersonAsync method. To see how all of these methods work together in the Adventure Works sample, take a look at FaceAPI.cs on Github.

Computer Vision API

You can take this much further by implementing the Microsoft Cognitive Services Computer Vision API to get information from the photo. Again, let’s go back to PhotoPreviewView in the Adventure Works demo app. If the user clicks on the Details button, we call the AnalyzeImage method, where we pass the photo’s file stream to the VisionServiceClient AnalyzeImageAsync method and specify the VisualFeatures that we expect in return. It will analyze the image and return a list of tags describing what the API detected in the photo, a short description of the image, detected faces, and more (see the full implementation on GitHub).


private async Task AnalyzeImage()
{
    var stream = await _file.OpenReadAsync();

    var imageResults = await _visionServiceClient.AnalyzeImageAsync(stream.AsStream(),
                new[] { VisualFeature.Tags, VisualFeature.Description,
                        VisualFeature.Faces, VisualFeature.ImageType }); 
    foreach (var tag in imageResults.Tags)
    {
        // Take first item and use it as the main photo description
        // and add the rest to a list to show in the UI
    }
}

Wrap up

Now that you are familiar with the general use of the APIs, make sure to check out the app source on our official GitHub repository, read through some of the resources provided, watch the event if you missed it, and let us know what you think through the comments below or on Twitter.

And come back next week for another blog post in the series where we will extend the Adventure Works example with some social features through enabling Facebook and Twitter login and sharing, integrating project Rome, and adding Maps and location.

Until then, happy coding!

Resources

Previous Xbox Series Posts

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

CppRestSDK 2.9.0 is available on GitHub


We are delighted to announce a new version of CppRestSDK (Casablanca), 2.9.0. This new version, available on GitHub, introduces new features and fixes issues reported on the 2.8.0 version. The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design. This project aims to help C++ developers connect to and interact with services.
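If you haven’t used the library before, here is a minimal sketch of a client issuing a GET request; the endpoint URL is purely illustrative and error handling is omitted for brevity:

#include <cpprest/http_client.h>
#include <iostream>

using namespace web;                // json::value
using namespace web::http;          // http_response, methods
using namespace web::http::client;  // http_client

int main()
{
    // Endpoint chosen only for illustration
    http_client client(U("https://api.github.com"));

    // Issue a GET request and process the response asynchronously
    client.request(methods::GET, U("/repos/Microsoft/cpprestsdk"))
        .then([](http_response response) { return response.extract_json(); })
        .then([](json::value body) { ucout << body.serialize() << std::endl; })
        .wait();  // block here until the continuation chain completes

    return 0;
}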

We added
  • support for basic authentication on Linux
  • static library support for Windows XP
  • a project for compiling as a static lib on Windows
  • a websocket_client_config option for SSL verify mode
  • a host-based connection pool map on non-Windows http_clients

We fixed issues in the Linux, OS X and Android versions. Here is the set of changes going into this release:

Linux
  • Merged #70 and #65, which should fix building on CentOS/RedHat.
  • #143 Work around SSL compression methods memory leak in ASIO.
  • #82 Fixed ambiguous call to begin when using with boost library.
  • #117 Fix header reading on linux listener using HTTPS.
  • #97 Add support for basic authentication.
  • #206 remove warnings-errors for system-headers under linux; honour http_proxy env-variable.
OSX
  • #114 Removed redundant std::move() that was causing errors on Xcode 7.3 gcc.
  • #140 Fix returning std::move causing build failure on osx.
Android
  • #137 Fix android build script for linux, remove libiconv dependency.
  • Use Nuget packages built with Clang 3.8 (VS 2015 Update3) and Android NDK 11rc. Update built scripts for the same.
Windows
  • #150 Add static library for windows xp.
  • #115 Added projects which target v140_xp to resolve Issue#113.
  • #71 Add a project for compiling as a static lib.
WebSockets
  • #102 Added websocket_client_config option for ssl verify mode.
  • #217 Fixed race condition in Casablanca WinRT Websocket client.
http_client
  • #131 Update to include access control allow origin.
  • #156 add host based connection pool map on non windows http_clients.
  • #161 Header parsing assumes whitespace after colon.
  • #146 Fix ambiguous reference to ‘credentials’
Uri
  • #149 Some perf improvements for uri related code.
Json
  • #86 Fix obtaining raw string_t pointer from temporary.
  • #96 Fix typo hexidecimal/hexadecimal.
  • #116 Fixing latin1 to UTF-16 conversion.
pplx
  • #47 Fixing .then to work with movable-only types.

As always, we trust the community to inform our next steps, so let us know what you need and how we can improve Casablanca by continuing to create issues and pull requests on https://github.com/Microsoft/cpprestsdk

In Case You Missed It – This Week in Windows Developer


You got the power this week with Windows Developer. From flight control and IoT kit updates, to the magic of camera APIs, read on to learn about the new capabilities to deliver more control over your apps.

Package rollout power

You might have streamlined your app management with the rollout of the Windows Submission API earlier this year. Now, we’ve released two new features for the API that give you more power over package rollouts. Click below to learn more.

IoT for you and me

Adafruit and Seeed are our latest partners bringing IoT to all developers with easy-to-use kits. Adafruit’s Windows 10 IoT Core Starter Kit not only gets you started quickly, but the new version also includes an upgrade to the new Raspberry Pi 3. Seeed Studio’s Grove Starter Kit for IoT is also based on Raspberry Pi, and it builds on the great design work that Seeed and their partner Dexter Industries have done around the Grove connector. Click through to get the latest on the new kits.

Magic with camera APIs

Almost any device you pick up these days has a camera in it, unlocking real time opportunities like never before. So take advantage! Open up a world of opportunities for your app and end users with the magic of camera API development skills, outlined in our latest App Dev on Xbox blog post. Click through for the walk-through.

Download Visual Studio to get started!

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Exploring ServiceStack's simple and fast web services on .NET Core


[Screenshot: Northwind - ServiceStack style]

I've been doing .NET Open Source since the beginning. Trying to get patches into log4net was hard without things like GitHub and Twitter. We emailed .patch files around and hoped for the best. It was a good time.

There's been a lot of feelings around .NET Open Source over the last decade or so - some positive, some negative. There's been some shining lights though and I'm going to do a few blog posts to call them out. I think having .NET Core be cross platform and open source will be a boon for the .NET Community. However, the community needs to also help out by using non-Microsoft OSS, supporting it, doing PRs, helping with docs, giving talks on new tech and spreading the word.

While some OSS projects are purely volunteer projects, ServiceStack has found some balance with a per-developer pricing model. They also support free usage for small projects. They've got deep integration with all major IDEs and support everything from VS, Xcode, and IntelliJ to the command line.

ServiceStack Logo

One major announcement in the last few days has been ServiceStack 4.5.2 on .NET Core! Effectively one year to the day from the feature request, and they did it! Their announcement paragraph says it best, emphasis mine.

Whilst the development and tooling experience is still in a transitionary period we believe .NET Core puts .NET Web and Server App development on the cusp of an exciting future - the kind .NET hasn’t seen before. The existing Windows hosting and VS.NET restraints have been freed, now anyone can develop using .NET’s productive expertly-designed and statically-typed mainstream C#/F# languages in their preferred editor and host it on the most popular server Operating Systems, in either an all-Linux, all-Windows or mixed ecosystem. Not only does this flexibility increase the value of existing .NET investments but it also makes .NET appeal to the wider and highly productive developer ecosystem who’ve previously disregarded .NET as an option.

Many folks ran (and run) ServiceStack on Mono, but it's time to move forward. While Mono is still a fantastic stack on many platforms that .NET Core doesn't support, for mainstream Linux, .NET Core is likely the better choice.

If you’re currently running ServiceStack on Mono, we strongly recommend upgrading to .NET Core to take advantage of its superior performance, stability and its top-to-bottom supported Technology Stack.

I also want to call out ServiceStack's amazing Release Notes. Frankly, we could all learn from Release Notes this good - Microsoft absolutely included. These release notes are now the Gold Standard as far as I'm concerned. Additionally, ServiceStack's Live Demos are unmatched.

Enough gushing. What IS ServiceStack? It's a different .NET way for creating web services. I say you should give it a hard look if you're making Web Services today. They say this:

Service Stack provides an alternate, cleaner POCO-driven way of creating web services.

  • Simplicity
  • Speed
  • Best Practices
  • Model-driven, code-first, friction-free development
  • No XML config, no code-gen, conventional defaults
  • Smart - Infers intelligence from strongly typed DTOs
  • .NET and Mono
  • Highly testable - services are completely decoupled from HTTP
  • Mature - over 5+ years of development
  • Commercially supported and Continually Improved

They've plugged into .NET Core and ASP.NET Core exactly as it was designed. They've got sophisticated middleware that fits in cleanly and feels natural. Even more, if you have existing ServiceStack code running on .NET 4.x, they've designed their "AppHost" such that moving over to .NET Core is extremely simple.

ServiceStack has the standard "Todo" application running in both the .NET Full Framework and .NET Core. Here are two sites, both .NET and both ServiceStack, but look what's underneath them:

Getting Started with Service Stack

There's a million great demos as I mentioned above with source at https://github.com/NetCoreApps, but I love that ServiceStack has a Northwind Database demo here https://github.com/NetCoreApps/Northwind. It even includes a Dockerfile. Let's check it out. I was able to get it running in Docker in seconds.

>git clone https://github.com/NetCoreApps/Northwind
>cd Northwind
>docker build -t "northwindss/latest" .
>docker run northwindss/latest
Project Northwind.ServiceModel (.NETStandard,Version=v1.6) was previously compiled. Skipping compilation.
Project Northwind.ServiceInterface (.NETStandard,Version=v1.6) was previously compiled. Skipping compilation.
Project Northwind (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
Hosting environment: Production
Content root path: /app/Northwind
Now listening on: https://*:5000
Application started. Press Ctrl+C to shut down.

Let's briefly look at the code, though. It is a great sample and showcases a couple cool features and also is nicely RESTful.

There are some cool techniques in here. It uses SQLite for the database, and the database itself is created with this unit test. Here's the ServiceStack AppHost (AppHost is their concept):

public class AppHost : AppHostBase
{
    public AppHost() : base("Northwind Web Services", typeof(CustomersService).GetAssembly()) { }

    public override void Configure(Container container)
    {
        container.Register<IDbConnectionFactory>(
            new OrmLiteConnectionFactory(MapProjectPath("~/App_Data/Northwind.sqlite"), SqliteDialect.Provider));

        //Use Redis Cache
        //container.Register<IRedisClientsManager>(new PooledRedisClientManager());

        VCardFormat.Register(this);

        Plugins.Add(new AutoQueryFeature { MaxLimit = 100 });
        Plugins.Add(new AdminFeature());
        Plugins.Add(new CorsFeature());
    }
}

Note how the AppHost base constructor references the assembly that contains the CustomersService type. That's the assembly that is the ServiceInterface. There are a number of Services in there - CustomersService just happens to be a simple one:

public class CustomersService : Service
{
    public object Get(Customers request) =>
        new CustomersResponse { Customers = Db.Select<Customer>() };
}

The response for /customers is just a response status and a list of Customers:

[DataContract]
[Route("/customers")]
public class Customers : IReturn<CustomersResponse> {}

[DataContract]
public class CustomersResponse : IHasResponseStatus
{
    public CustomersResponse()
    {
        this.ResponseStatus = new ResponseStatus();
        this.Customers = new List<Customer>();
    }

    [DataMember]
    public List<Customer> Customers { get; set; }

    [DataMember]
    public ResponseStatus ResponseStatus { get; set; }
}

Customers has a lovely clean GET that you can see live here: http://northwind.netcore.io/customers. Compare its timestamp to the cached one at http://northwind.netcore.io/cached/customers.

[CacheResponse(Duration = 60 * 60, MaxAge = 30 * 60)]
public class CachedServices : Service
{
    public object Get(CachedCustomers request) =>
        Gateway.Send(new Customers());

    public object Get(CachedCustomerDetails request) =>
        Gateway.Send(new CustomerDetails { Id = request.Id });

    public object Get(CachedOrders request) =>
        Gateway.Send(new Orders { CustomerId = request.CustomerId, Page = request.Page });
}

You may find yourself looking at the source for the Northwind sample and wondering "where's the rest?" (no pun intended!) Turns out ServiceStack will do a LOT for you if you just let it!

The Northwind project is also an example of how much can be achieved with a minimal amount of effort and code. This entire website literally just consists of these three classes. Everything else seen here is automatically provided by ServiceStack using a code-first, convention-based approach. ServiceStack can infer a richer intelligence about your services to be better able to provide more generic and re-usable functionality for free!

ServiceStack is an alternative to ASP.NET's Web API. It's a different perspective and a different architecture than what Microsoft provides out of the box. It's important and useful to explore other points of view when designing your systems. It's especially nice when the systems are so thoughtfully factored, well-documented and designed as ServiceStack. In fact, years ago I wrote their tagline: "Thoughtfully architected, obscenely fast, thoroughly enjoyable web services for all."

Have you used ServiceStack? Have you used other open source .NET Web Service/API frameworks? Share your experience in the comments!


Sponsor: Big thanks to Telerik! 60+ ASP.NET Core controls for every need. The most complete UI toolset for x-platform responsive web and cloud development. Try now 30 days for free!



© 2016 Scott Hanselman. All rights reserved.
     

High DPI Scaling Improvements for Desktop Applications and “Mixed Mode” DPI Scaling in the Windows 10 Anniversary Update


As display technology has improved over time, the cutting edge has moved towards having more pixels packed into each physical square inch, and away from simply making displays physically larger. This trend has increased the dots per inch (DPI) of the displays on the market today. The Surface Pro 4, for example, has roughly 192 DPI (while legacy displays have 96 DPI). Although having more pixels packed into each physical square inch of a display can give you extremely sharp graphics and text, it can also cause problems for desktop application developers. Many desktop applications display blurry, incorrectly sized UI (too big or too small), or are unusable when using high DPI displays in combination with standard-DPI displays. Many desktop UI frameworks that developers rely on to create Windows desktop applications do not natively handle high DPI displays, and work is required on the part of the developer to address resizing application UI on these displays. This can be a very expensive and time-consuming process for developers. In this post, I discuss some of the improvements introduced in the Windows 10 Anniversary Update that make it less expensive for desktop application developers to develop applications that handle high-DPI displays properly.

Note that applications built upon the Universal Windows Platform (UWP) handle display scaling very well and that the content discussed in this post does not apply to UWP. If you’re creating a new Windows application or are in a position where migrating is possible, consider UWP to avoid the problems discussed in this post.

Some Background on DPI Scaling

Steve Wright has written on this topic extensively, but I thought I’d summarize some of the complexities around display scaling for desktop applications here. Many desktop applications (applications written in raw Win32, MFC, WPF, WinForms or other UI frameworks) can often become blurry, incorrectly sized or a combination of both, whenever the display scale factor, or DPI, of the display that they’re on is different than what it was when the Windows session was first started. This can happen under many circumstances:

  • The application window is moved to a display that has a different display scale factor
  • The user changes the display scale factor manually
  • A remote-desktop connection is established from a device with a different scale factor

When the display scale factor changes, the application may be sized incorrectly for the new scale factor and therefore, Windows often jumps in and does a bitmap stretch of the application UI. This causes the application UI to be physically sized correctly, but it can also lead to the UI being blurry.

In the past, Windows offered no support for DPI scaling to applications at the platform level. When these types of “DPI Unaware” applications are run on Windows 10, they are almost always bitmap scaled by Windows when display scaling is > 100%. Later, Windows introduced a DPI-awareness mode called “System DPI Awareness.” System DPI Awareness provides information to applications about the display scale factor, the size of the screen, information on the correct fonts to use, etc., such that developers can have their applications scaled correctly for a high DPI display. Unfortunately, System DPI Awareness was not designed for dynamic-scaling scenarios such as docking/undocking, moving an application window to a display with a different display scale factor, etc. In other words: The model for system-DPI-awareness is one that assumes that only one display will be in use during the lifecycle of the application and that the scale factor will not change.

In dynamic-scale-factor scenarios applications will be bitmap stretched by Windows when the display-scale-factor changed (this even applies to system-DPI-aware processes). Windows 8.1 introduced support for “Per-Monitor-DPI Awareness” to enable developers to write applications that could resize on a per-DPI basis. Applications that register themselves as being Per-Monitor-DPI Aware are informed when the display scale factor changes and are expected to respond accordingly.

So… everything was good, right? Not quite.

Unfortunately, there were three big gaps with our implementation of Per-Monitor-DPI Awareness in the platform:

  • There wasn’t enough platform support for desktop application developers to actually make their applications do the right thing when the display-scale-factor changed.
  • It was very expensive to update application UI to respond correctly to a display-scale factor changes, if it was even possible to do at all.
  • There was no way to directly disable Windows’ bitmap scaling of application UI. Some applications would register themselves as being Per-Monitor-DPI Aware not because they actually were DPI aware, but because they didn’t want Windows to bitmap stretch them.

These problems resulted in very few applications handling dynamic display scaling correctly. Many applications that registered themselves as being Per-Monitor-DPI Aware don’t scale at all and can render extremely large or extremely small on secondary displays.

Background on Explorer

As I mentioned in another blog post, during the development cycle for the first release of Windows 10 we decided to start improving the way Windows handled dynamic display scaling by updating some in-box UI components, such as the Windows File Explorer, to scale correctly.

This was a great learning experience for us because it taught us about the problems developers face when trying to update their applications to dynamically scale and where Windows was limited in this regard. One of the main lessons learned was that, even for simple applications, the model of registering an application as being either System DPI Aware or Per-Monitor-DPI Aware was too rigid of a requirement because it meant that if a developer decided to mark their application as conforming to one of these DPI-awareness modes, they would have had to update every top-level window in their application or live with some top-level windows being sized incorrectly. Any application that hosts third-party content, such as plugins or extensions, may not even have access to the source code for this content and therefore would not be able to validate that it handled display scaling properly. Furthermore, there were many system components (ComDlg32, for example) that didn’t scale on a per-DPI basis.

When we updated File Explorer (a codebase that’s been around and been added to for some time), we kept finding more and more UI that had to be updated to handle scaling correctly, even after we reached the point in the development process when the primary UI scaled correctly. At that point we faced the same choice other developers faced: we had to touch old code to implement dynamic scaling (which came with application-compatibility risks) or live with these UI components being sized incorrectly. This helped us feel the pain that developers face when trying to adhere to the rigid model that Windows required of them.

Mixed-Mode DPI Scaling and the DPI-Awareness Context

Lesson learned. It was clear to us that we needed to break apart this rigid, process-wide, model for display scaling that Windows required. Our goal was to make it easier for developers to update their desktop applications to handle dynamic display scaling so that more desktop applications would scale gracefully on Windows 10. The idea we came up with was to move the process-level constraint on display scaling to the top-level window level. The idea was that instead of requiring every single top-level window in a desktop application to be updated to scale using a single mode, we could instead enable developers to ease-in, so to speak, to the dynamic-DPI world by letting them choose the scaling mode for each top-level window. For an application with a main window and secondary UI, such as a CAD or illustration application, for example, developers can focus their time and energy updating the main UI while letting Windows handle scaling the less-important UI, possibly with bitmap stretching. While this would not be a perfect solution, it would enable application developers to update their UI at their own pace instead of requiring them to update every component of their UI at once, or suffer the consequences previously mentioned.

The Windows 10 Anniversary Update introduced the concept of “Mixed-Mode” DPI scaling, also known as sub-process DPI scaling, via the concept of the DPI-awareness context (DPI_AWARENESS_CONTEXT) and the SetThreadDpiAwarenessContext API. You can think of a DPI-awareness context as a mode that a thread can be in which can impact the DPI-behavior of API calls that are made by the thread (while in one of these modes). A thread’s mode, or context, can be changed via calls to SetThreadDpiAwarenessContext at any time. Here are some key points to consider:

  • A thread can have its DPI Awareness Context changed at any time.
  • Any API calls that are made after the context is changed will run in the corresponding DPI context (and may be virtualized).
  • When a thread that is running with a given context creates a new top-level window, the new top-level window will be assigned the same context that the thread that created it had, at the time of creation.

Let’s discuss the first point: With SetThreadDpiAwarenessContext the context of a thread can be switched at will. Threads can also be switched in and out of different contexts multiple times.

Many Windows API calls in Windows will return different information to applications depending on the DPI awareness mode that the calling process is running in. For example, if an application is DPI-unaware (which means that it didn’t specify a DPI-Awareness mode) and is running on a display scale factor greater than 100%, and if this application queries Windows for the display size, Windows will return the display size scaled to the coordinate space of the application. This process is referred to as virtualization. Prior to the availability of Mixed-Mode DPI, this virtualization only took place at the process level. Now it can be done at the thread level.

Mixed-Mode DPI scaling should significantly reduce the barrier to entry for DPI support for desktop applications.

Making Notepad Per-Monitor DPI Aware

Now that I’ve introduced the concept of Mixed-Mode, let’s talk about how we applied it to an actual application. While we were working on Mixed Mode we decided to try it out on some in-box Windows applications. The first application we started with was Notepad. Notepad is essentially a single-window application with a single edit control. It also has several “level 2” UI such as the font dialog, print dialog and the find/replace dialog. Before the Windows 10 Anniversary Update, Notepad was a System-DPI-Aware process (crisp on the primary display, blurry on others or if the display scale factor changed). Our goal was to make it a first-class Per-Monitor-DPI-Aware process so that it would render crisply at any scale factor.

One of the first things we did was to change the application manifest for Notepad so that it would run in per-monitor mode. Once an application is running as per-monitor and the DPI changes, the process is sent a WM_DPICHANGED message. This message contains a suggested rectangle to size the application to using SetWindowPos. Once we did this and moved Notepad to a second display (a display with a different scale factor), we saw that the non-client area of the window wasn’t scaling automatically. The non-client area can be described as all of the window chrome that is drawn by the OS such as the min/max/close button, window borders, system menu, caption bar, etc.

Here is a picture of Notepad with its non-client area properly DPI scaling next to another per-monitor application that has non-client area that isn’t scaling. Notice how the non-client area of the second application is smaller. This is because the display that its image was captured on used 200% display scaling, while the non-client area was initialized at 100% (system) display scaling.

[Screenshot: Notepad with its non-client area scaling correctly, next to a per-monitor application whose non-client area does not scale]

During the first Windows 10 release we developed functionality that would enable non-client area to scale dynamically, but it wasn’t ready for prime-time and wasn’t released publicly until we released the Anniversary Update.

We were able to use the EnableNonClientDpiScaling API to get Notepad’s non-client area to automatically DPI scale properly.

Using EnableNonClientDpiScaling will enable automatic DPI scaling of the non-client area for a window when the following conditions are satisfied:

  • The API is called from the WM_NCCREATE handler for the window
  • The process or window is running in per-monitor-DPI awareness
  • The window passed to the API is a top-level window (only top-level windows are supported)
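For illustration, a minimal sketch of the call site could look like this (window-procedure boilerplate abbreviated; requires the Windows 10 Anniversary Update SDK):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    case WM_NCCREATE:
        // Opt this top-level window in to automatic non-client-area DPI scaling.
        // This only has an effect when the window runs with per-monitor-DPI awareness.
        EnableNonClientDpiScaling(hwnd);
        break;
    // ... other message handling elided ...
    }
    return DefWindowProc(hwnd, message, wParam, lParam);
}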

Font Size & the ChooseFont Dialog

The next thing that had to be done was to resize the font on a DPI change. Notepad uses an edit control for its primary UI and it needs to have a font size specified. After a DPI change, the previous font size would be either too large or too small for the new scale factor, so it had to be recalculated. We used GetDpiForWindow as the basis of the calculation for the new font size:

// The point size is stored in tenths of a point, hence the 720 (72 points per inch x 10)
FontStruct.lfHeight = -MulDiv(iPointSize, GetDpiForWindow(hwndNP), 720);

This gave us a font size that was appropriate for the display-scale factor of the current display, but we next ran into an interesting problem: when choosing the font we ran up against the fact that the ChooseFont dialog was not per-monitor DPI aware. This meant that this dialog could be either too large or too small, depending on the display configuration at runtime. Notice in the image below that the ChooseFont dialog is twice as large as it should be:

[Screenshot: the ChooseFont dialog rendering at twice its intended size]

To address this, we used mixed-mode to have the ChooseFont dialog run with a system-DPI-awareness context. This meant that this dialog would scale to the system DPI on the primary display and be bitmap stretched any time the display scale factor changed:

DPI_AWARENESS_CONTEXT previousDpiContext = SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_SYSTEM_AWARE);
BOOL cfResult = ChooseFont(cf);
SetThreadDpiAwarenessContext(previousDpiContext);

This code stores the DPI_AWARENESS_CONTEXT of the thread and then temporarily changes the context while the ChooseFont dialog is created. This ensures that the ChooseFont dialog will run with a system-DPI-awareness context. Immediately after the call to create the window, the thread’s awareness context is restored because we didn’t want the thread to have its awareness changed permanently.

We knew that the ChooseFont dialog did support system-DPI awareness so we chose DPI_AWARENESS_CONTEXT_SYSTEM_AWARE, otherwise we could have used DPI_AWARENESS_CONTEXT_UNAWARE to at least ensure that this dialog would have been bitmap stretched to the correct physical size.

Now we had the ChooseFont dialog scaling properly without touching any of the ChooseFont dialog’s code, but this led to our next challenge… and this is one of the most important concepts that developers should understand about the use of mixed-mode DPI scaling: data shared across DPI-awareness contexts can use different scaling/coordinate spaces and can have different interpretations in each context. In the case of the ChooseFont dialog, this function returns a font size based off of the user’s input, but the font size returned is relative to the scale factor that the dialog is running in. When the main Notepad window is running at a scale factor that is different from the system scale factor, the values from the ChooseFont dialog must be translated to be meaningful for the main window’s scale factor. Here we scaled the font point size to the DPI of the display that the Notepad window was running on, again using GetDpiForWindow:

// cf.iPointSize is also expressed in tenths of a point
FontStruct.lfHeight = -MulDiv(cf.iPointSize, GetDpiForWindow(hwndNP), 720);

Windows Placement

Another place where we had to deal with handling data across coordinate spaces was with the way Notepad stores and reuses its window placement (position and dimensions). When Notepad is closed, it will store its window placement. The next time it’s launched, it reads this information in an attempt to restore the previous position. Once we started running the main Notepad thread in per-monitor-DPI awareness we ran into a problem: the Notepad window was opening in strange sizes when launched.

What was happening was that in some cases we would store Notepad’s size at one scale factor and then restore it at a different scale factor. If the display configuration of the PC that Notepad was run on hadn’t changed between when the information was stored and when Notepad was launched again, theoretically this wouldn’t have been a problem. However, Windows supports changing scale factors, connecting/disconnecting and rotating displays at will. This meant that we needed Notepad to handle these situations more gracefully.

The solution was again to use mixed-mode scaling, but this time not to leverage Windows’ bitmap-stretching functionality and instead to normalize the coordinates that Notepad used to set and restore its window placement. This involved changing the thread to a DPI-unaware context when saving window placement, and doing the same when restoring. This effectively normalized the coordinate space across all displays and display scale factors, so that Notepad would be restored to approximately the same placement regardless of display-topology changes:

DPI_AWARENESS_CONTEXT previousDpiContext = SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_UNAWARE);
BOOL ret = SetWindowPlacement(hwnd, wp);
SetThreadDpiAwarenessContext(previousDpiContext);

Once all of these changes were made, we had Notepad scaling nicely whenever the DPI would change and the document text rendering natively for each DPI, which was a big improvement over having Windows bitmap stretch the application on DPI change.

Useful DPI Utilities

While working on Mixed Mode display scaling, we ran into the need to have DPI-aware variants of some commonly used APIs, such as GetDpiForWindow and GetDpiForSystem.

A note about GetDpiForSystem: Calling GetDpiForSystem is more efficient than calling GetDC and GetDeviceCaps to obtain the system DPI.

Also, any component that could be running in an application that uses sub-process DPI awareness should not assume that the system DPI is static during the lifecycle of the process. For example, if a thread that is running under DPI_AWARENESS_CONTEXT_UNAWARE awareness context queries the system DPI, the answer will be 96. However, if that same thread switched to DPI_AWARENESS_CONTEXT_SYSTEM and queried the system DPI again, the answer could be different. To avoid the use of a cached — and possibly stale — system-DPI value, use GetDpiForSystem() to retrieve the system DPI relative to the DPI-awareness mode of the calling thread.
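The following sketch illustrates the point (the 144 value is just an example for a 150% system scale factor; requires the Windows 10 Anniversary Update SDK):

#include <windows.h>

// The same query can return different values depending on the calling
// thread's DPI-awareness context.
DPI_AWARENESS_CONTEXT previous =
    SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_UNAWARE);
UINT dpiWhileUnaware = GetDpiForSystem();     // always 96 in an unaware context

SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_SYSTEM_AWARE);
UINT dpiWhileSystemAware = GetDpiForSystem(); // the real system DPI, e.g. 144

SetThreadDpiAwarenessContext(previous);       // restore the original context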

What We Didn’t Get To

The Windows 10 Anniversary Update delivers useful APIs for developers that want to update desktop applications to support dynamic DPI scaling, in particular EnableNonClientDpiScaling and SetThreadDpiAwarenessContext (also known as “mixed-mode”), but there is still some missing functionality that we weren’t able to deliver. Windows common controls (comctl32.dll) do not support per-monitor DPI scaling, and non-client-area DPI scaling is only supported for top-level windows; child-window non-client area, such as child-window scroll bars, does not automatically scale for DPI, even in the Anniversary Update.

We recognize that these, and many other, platform features are going to be needed by developers before they’re fully unblocked from updating their desktop applications to handle display scaling well.

As mentioned in my other post, WPF now offers per-monitor DPI-awareness support as well.

Sample Mixed-Mode Application:

We put together a sample that shows the basics of how to use mixed-mode DPI awareness. The project linked below creates a top-level window that is per-monitor DPI aware and has its non-client area automatically scaled. From the menu you can create a secondary window that uses the DPI_AWARENESS_CONTEXT_SYSTEM_AWARE context so that Windows will bitmap stretch the content when it’s rendered at a different DPI.

https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DPIAwarenessPerWindow

Conclusion

Our aim was to reduce the cost for developers to update their desktop applications to be per-monitor DPI aware. We recognize that there are still gaps in the DPI-scaling functionality that Windows offers desktop application developers and the importance of fully unblocking developers in this space. Stay tuned for more goodness to come.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Building your C++ application with Visual Studio Code


Over the last few months, we have heard a lot of requests with respect to adding capability to Visual Studio Code to allow developers to build their C/C++ application. The task extensibility in Visual Studio Code exists to automate tasks like building, packaging, testing and deploying. This post is going to demonstrate how using task extensibility in Visual Studio Code you can call compilers, build systems and other external tasks through the help of the following sections:

Installing C/C++ build tools

In order to build your C++ code you need to make sure you have C/C++ build tools (compilers, linkers and build systems) installed on your box. If you can already build outside Visual Studio Code you already have these tools setup, so you can move on to the next section.

To obtain your set of C/C++ compilers on Windows you can grab the Visual C++ Build Tools SKU. By default these tools are installed at ‘C:\Program Files (x86)\Microsoft Visual C++ Build Tools’. You only need to do this if you don’t have Visual Studio installed. If you already have Visual Studio installed, you have everything you need already.

If you are on a Linux platform which supports apt-get you can run the following commands to make sure you grab the right set of tools for building your C/C++ code.

sudo apt-get install g++
sudo apt-get install clang

On OS X, the easiest way to install the C++ build tools would be to install Xcode command line tools. You can follow this article on the apple developer forum. I would recommend this instead of installing clang directly as Apple adds special goodies to their version of the clang toolset. Once installed you can run these commands in a terminal window to determine where the compiler and build tools you need were installed.

xcodebuild -find make
xcodebuild -find gcc
xcodebuild -find g++
xcodebuild -find clang
xcodebuild -find clang++

Creating a simple Visual Studio Code task for building C/C++ code

To follow this specific section you can go ahead and download this helloworld C++ source folder. If you run into any issues you can always cheat and download the same C++ source folder with a task pre-configured.

In Visual Studio Code tasks are defined for a workspace and Visual Studio Code comes pre-installed with a list of common task runners. In the command palette (Ctrl+Shift+P (Win, Linux), ⇧⌘P (Mac)) you can type tasks and look at all the various task related commands.

[Screenshot: task-related commands in the command palette]

On executing the ‘Configure Task Runner’ option from the command palette you will see a list of pre-installed tasks, as shown below. In the future we will grow the list of task runners for popular build systems, but for now go ahead and pick the ‘Others’ template from this list.

[Screenshot: list of pre-installed task runner templates]

This will create a tasks.json file in your .vscode folder with the following content:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "echo",
    "isShellCommand": true,
    "args": ["Hello World"],
    "showOutput": "always"
}
Setting it up for Windows

The easiest way to setup Visual Studio Code on Windows for C/C++ building is to create a batch file called ‘build.bat’ with the following commands:

@echo off
call "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x64     
set compilerflags=/Od /Zi /EHsc
set linkerflags=/OUT:hello.exe
cl.exe %compilerflags% helloworld.cpp /link %linkerflags%

Please note that the location of the vcvarsall.bat file, which sets up the right environment for building, could be different on your machine. Also, if you are using the Visual C++ Build Tools SKU, you will need to call the following command instead:

call "C:\Program Files (x86)\Microsoft Visual C++ Build Tools\vcbuildtools.bat" x64

Once the build script is ready you can then modify your tasks.json to directly call your batch file on Windows by making the following changes to the automatically generated tasks.json file.

{  
   // See https://go.microsoft.com/fwlink/?LinkId=733558
   // for the documentation about the tasks.json format
   "version": "0.1.0",
   "windows": {
      "command": "build.bat",
      "isShellCommand": true,
      "showOutput": "always"
   }
}

Initiate a build by bringing up the command palette again and executing the ‘Run Build Task’ command.

[Screenshot: the ‘Run Build Task’ command in the command palette]

This should initiate the build for our C++ application and you should be able to monitor the build progress in the output window.

[Screenshot: build progress in the output window]

Now, even though this is a Windows-specific example, you should be able to re-use the same series of steps to call a build script on other platforms as well.

Calling Clang and GCC from Visual Studio Code task for building C/C++ code

Alright, let us now see how we can build our C/C++ application by calling popular toolsets like GCC and Clang directly, without an external batch file and without a build system in play.

To follow this specific section you can go ahead and download this helloworld C++ source folder. If you run into any issues you can always cheat and download the same C++ source folder with a task pre-configured.

Tasks.json allows you to specify qualifiers like the one below for ‘OS X’. These qualifiers will allow you to create specific build configurations for your different build targets or, as shown in this case, for different platforms.

  "OS X": {
        "command": "clang++",
        "args": [
            "-Wall",
            "helloWorld.cpp",
            "-v"
          ],
        "isShellCommand": true,
        "showOutput": "always",
        "problemMatcher": {
            "owner": "cpp",
            "fileLocation": [
                "relative",
                "${workspaceRoot}"
            ],
            "pattern": {
                "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
                "file": 1,
                "line": 2,
                "column": 3,
                "severity": 4,
                "message": 5
            }
        }
  }

Another thing to highlight in this snippet is the ‘problemMatcher’ section. Visual Studio Code ships with some of the most common problem matchers out of the box, but many compilers and other tools define their own style of errors and warnings. Not to worry: you can create your own custom problem matcher as well with Visual Studio Code. This site, which helps test out regexes online, might also come in handy.

The pattern matcher here will work well for the Clang and GCC toolsets, so just go ahead and use them. The figure below shows them in effect when you initiate the show problems command in Visual Studio Code (Ctrl+Shift+M (Win, Linux), ⇧⌘M (Mac)).

[Screenshot: errors and warnings surfaced by the problem matcher]

Calling Makefiles using Visual Studio Code task extensibility

Similar to the manner in which you configure tasks.json to call the compiler, you can do the same for makefiles. Take a look at the sample tasks.json below. The one new concept in this tasks.json file is the nesting of tasks: both ‘hello’ and ‘clean’ are tasks in the makefile, whereas ‘compile w/o makefile’ is a separate task, but this example should show you how you can set up tasks.json in cases where there are multiple build systems at play. You can find the entire sample here.

Note this is an OS X- and Linux-specific example, but to obtain the same behavior on Windows you can replace ‘bash’ with ‘cmd’ and the ‘-c’ argument with ‘/C’.

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
   "version": "0.1.0",
    "osx": {
        "command": "bash",
        "args": ["-c"],
        "isShellCommand": true,
        "showOutput": "always",
        "suppressTaskName": true,
        "options": {
            "cwd": "${workspaceRoot}"
        },
        "tasks": [
            {
                "taskName": "hello",
                "args": [
                    "make hello"
                ],
                "isBuildCommand": true
            },
            {
                "taskName": "clean",
                "args": [
                    "make clean"
                ]
            },
            {
                "taskName": "compile w/o makefile",
                "args": [
                    "clang++ -Wall -g helloworld.cpp -o hello"
                ],
                "echoCommand": true
            }
        ]
    }
}
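For reference, a makefile backing the ‘hello’ and ‘clean’ tasks above could be as small as the following sketch; the actual makefile in the linked sample may differ. (Recipe lines must be indented with tabs.)

# hello: build the helloworld binary with clang++
hello: helloworld.cpp
	clang++ -Wall -g helloworld.cpp -o hello

# clean: remove build output
clean:
	rm -f hello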

Two more things to mention here. First, whichever task you associate ‘isBuildCommand’ with becomes your default build task in Visual Studio Code; in this case that would be the ‘hello’ task. If you would like to run the other tasks, bring up the command palette and choose the ‘Run Task’ option.

[Screenshot: the ‘Run Task’ command in the command palette]

[Screenshot: choosing an individual task to run]

Then choose the individual task to run, e.g. the ‘clean’ task. Alternatively, you can also wire the build task to a different key binding. To do so, bring up File -> Preferences -> Keyboard Shortcuts and add the following key binding for your task. Bindings currently only exist for build and test tasks, but an upcoming fix in the October release will allow bindings for individual tasks as well.

[
    {
        "key": "f7",
        "command": "workbench.action.tasks.build"
    }
]

Calling MSBuild using Visual Studio Code task extensibility

MSBuild is already a pre-installed task that Visual Studio Code comes with. Bring up the command palette and choose MSBuild; this will create the following tasks.json. It should then be easy to add your MSBuild solution or project name to the ‘args’ section and get going.

{
 // See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
  "version": "0.1.0",
  "command": "msbuild",
  "args": [
        // Ask msbuild to generate full paths for file names.
        "/property:GenerateFullPaths=true"
    ],
    "taskSelector": "/t:",
    "showOutput": "silent",
    "tasks": [
    {
            "taskName": "build",

            // Show the output window only if unrecognized errors occur.
            "showOutput": "silent",
           
            // Use the standard MS compiler pattern to detect errors, warnings and infos
           "problemMatcher": "$msCompile"
        }
    ]
}
Wrap Up

This post provides some guidance, with examples, on how to use Visual Studio Code task extensibility to build your C/C++ application. If you would like us to provide more guidance on this or any other aspect of Visual Studio Code, by all means reach out to us by continuing to file issues on our GitHub page. Keep trying out this experience, and if you would like to shape the future of this extension, please join our Cross-Platform C++ Insiders group, where you can speak with us directly and help make this product the best for your needs.

Announcing .NET Core 1.1 Preview 1


We’re excited to announce the .NET Core 1.1 Preview 1 release today. It includes support for additional Linux distributions, has many updates and is the first Current release. I will describe all of these changes below. The release is a preview release and is intended as an early look at the .NET Core 1.1 release. It is not “Go Live” and is not yet recommended for production workloads.

ASP.NET Core 1.1 Preview 1 and Entity Framework Core 1.1 Preview 1 are also shipping today. Please check out those releases, too.

You can download the release now:

You can see the full set of .NET Core 1.1 downloads on the .NET Core Preview Download page.

.NET Core 1.1 Preview 1 Docker image will be posted shortly.

You can find the existing .NET Core 1.0 releases on the dot.net/core page. .NET Core 1.1 will also be listed on that page once it is shipped as a stable release.

Improvements

The .NET Core 1.1 release is the first 1.x minor update. Its primary product theme is adding support for new operating system distributions.

Operating System Distributions

Support for the following distributions was added:

  • Fedora 24
  • Linux Mint 18
  • OpenSUSE 42.1
  • macOS 10.12
  • Windows Server 2016

You can see the full set of supported distributions in the .NET Core 1.1 Preview 1 release notes.

APIs

1380 APIs were added in this release. You can see the complete set in the API Difference .NET Core App 1.0 (ref) vs .NET Core App 1.1 (ref) document.

APIs were added to enable specific scenarios. There was no specific theme to the API additions.

No new .NET Standard version was created. .NET Standard 2.0 support is still coming.

Fixes

Many specific product changes were made. You can look at the full set of .NET Core 1.1 Preview 1 Commits to learn more.

The previously announced MSBuild and CSProj changes are not part of this release, but are still coming.

Adopting .NET Core 1.1 Preview 1

.NET Core 1.1 Preview 1 is a safe and easy install. It works the same way as .NET Core 1.0. There are a few things you will want to know about using it.

Side-by-side Install with .NET Core 1.0

.NET Core 1.1 Preview 1 installs side-by-side with .NET Core 1.0. .NET Core 1.0 applications will continue to use the .NET Core 1.0 runtime. The .NET Core 1.0 environment is designed to be almost completely unaware that a later minor or major release is also installed.

There is only one command, dotnet new, that will change as a result of installing .NET Core 1.1. dotnet new will create new projects that require .NET Core 1.1 Preview 1, as opposed to .NET Core 1.0. As a result, you may want to avoid installing it on a machine where you are doing .NET Core 1.0-based development with the command line tools. If you are on Windows and use Visual Studio for creating new projects, and not dotnet new, then installing .NET Core 1.1 is fine to do.

We would appreciate feedback on this design choice. The current design is that dotnet new will create new projects for the latest .NET Core version installed. If you don’t think that’s the right choice, tell us what you would like to see.

Trying it out

You can start by installing .NET Core 1.1 Preview. After that, you can use the .NET Core tools just like you have with .NET Core 1.0. Try the following set of commands to create, build and run a .NET Core 1.1 Preview 1 application:

dotnet new
dotnet restore
dotnet run

You can take a look at the dotnetapp-preview sample to try a .NET Core 1.1 Preview 1 application, with or without Docker.

Upgrading Existing Project

You can upgrade existing .NET Core projects from using .NET Core 1.0 to .NET Core 1.1 Preview 1. I will show you the new project.json file that the updated dotnet new now produces. It’s the best way to see the new version values that you need to copy/paste into your existing project.json files. There are no automated tools to upgrade existing projects to later .NET Core versions.

The default .NET Core 1.1 Preview 1 project.json file follows:
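(Reconstructed here as a representative example: the netcoreapp1.1 and 1.1.0-preview1-001100-00 values are as described below, while the remaining fields assume the standard dotnet new template of the time.)

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0-preview1-001100-00"
        }
      },
      "imports": "dnxcore50"
    }
  }
}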

This project.json file is very similar to what your .NET Core 1.0 project.json looks like, with the exception of the netcoreapp1.1 and 1.1.0-preview1-001100-00 target framework and meta-package version strings, respectively.

You can use the following substitutions to help you update project.json files that you want to move temporarily or permanently to .NET Core 1.1.

  • Update the netcoreapp1.0 target framework to netcoreapp1.1.
  • Update the Microsoft.NETCore.App package version from 1.0.x (for example, 1.0.0 or 1.0.1) to 1.1.0-preview1-001100-00.

You can also just write 1.1.0-preview1 as a short-hand, skipping the build-specific information. It works and enables you to more easily move forward with .NET Core 1.1 nightly builds if you adopt those. You will want to change the metapackage version to 1.1.0 when .NET Core 1.1 ships as a stable release. The target framework version will not change. It is set for the lifetime of .NET Core 1.1.
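As a minimal sketch of that short-hand, the meta-package dependency entry in project.json would read:

"Microsoft.NETCore.App": {
  "type": "platform",
  "version": "1.1.0-preview1"
}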

Upgrading to .NET Core 1.1 Preview 1 Docker Images

.NET Core 1.1 Preview 1 images will soon be published to the microsoft/dotnet repo. The two new tags for .NET Core 1.1, for the .NET Core 1.1 Preview 1 SDK and Runtime images respectively, will be 1.0.0-preview2.1-sdk and 1.1.0-core.
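Once the images are published, pulling them by tag should look like the following (a sketch, assuming the tag names above ship as stated):

docker pull microsoft/dotnet:1.0.0-preview2.1-sdk
docker pull microsoft/dotnet:1.1.0-core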

The latest and other versionless tags have not been updated to point to .NET Core 1.1; they still point to .NET Core 1.0. As a note, we are still deciding whether the versionless tags should always point to LTS releases (see explanation below) or whether it is OK for them to point to Current releases. Our thinking is that they should only ever point to LTS releases, leaving Current as opt-in. We’d appreciate your feedback on that.

You can try the new Docker images with the dotnetapp-preview sample in the .NET Core Docker Samples repository. The other samples can be easily modified to also exercise the .NET Core 1.1 Preview 1 images, following the project.json upgrade instructions I gave you above.

“Current” Release

We announced in July that we would be adopting a dual-train strategy for .NET Core releases. At the time, we called the two different product trains “LTS” and “FTS”. Those release terms have since been renamed to “Long Term Support (LTS)” and “Current Release”. This is similar to what other platforms do, like Red Hat Enterprise Linux, Ubuntu and Node.js. In fact, we adopted “Current” since the term was already in use and already had the meaning that we wanted.

We call different releases “trains” since it is easy to apply the train (the long vehicles on metal tracks) analogy to software releases. You can make references to “trains running on schedule” and there being an opportunity to “catch the next train”. I’m sure you can come up with more of these in the comments.

There is more to it, though. The LTS (slow) and Current (fast) trains define different release cadences, different expectations about the kinds of changes that are acceptable in updates, and different support timeframes. Based on our experience with the .NET Framework, where we only ever had one train, we wanted more flexibility in releases and the ability to better serve different customers with different expectations of us.

We ship LTS releases after in-depth and lengthy testing, significant customer adoption (before being named LTS) and a high degree of stability. Once released, the goal is to update LTS releases as little as possible: only for security issues, significant reliability or performance issues, and the rare important feature addition. They are supported for up to three years. Our more conservative customers tell us “I love this plan!”. They would love zero changes if we could make that happen, although they realize that’s not quite realistic.

Current releases are the ones we are actively working on now. .NET Core 1.1 is such a release. We do our major feature work in these releases and also add support for new operating system distributions. These releases are stable but move much faster, so they require more testing when you adopt them. They are also only supported for three months after the next Current release ships; to stay on a supported version, you need to move to the next Current release before those three months pass. With Current, you get new features much faster, but you have to stay on that release train.

Support for some new operating system distributions will get added in LTS releases too, but that will be done on an exception basis. Windows Server 2016 and macOS Sierra are examples where that happened.

Once we’re happy with a series of Current releases and have had enough feedback, we label the next release as LTS and then repeat the whole process. This could happen after a few or many Current releases in a row; it depends a lot on the feedback we are hearing.

The transition of a Current release to LTS is a good opportunity to “switch trains”. We expect that some developers will choose Current releases during development of longer projects to get the latest features and a broader set of fixes, and then switch to LTS later in the project (assuming the timing works out), getting ready for their production rollout.

Please take a look at the .NET Support and Versioning blog post for more information.

Versioning, Filenames and Docker Tags

If you’ve worked on a significant project with lots of users and releases, you’ll probably know that product naming and versioning is surprisingly hard. The .NET Core project doesn’t escape this problem. In fact, it seems to embrace it, having chosen version strings that are not nearly as intuitive as we would like. This section of the blog post hopefully provides you with a decoder ring for those versions, which we really should have shared earlier.

There are two distributions of .NET Core: a Runtime, and an SDK that includes the Runtime and some Tools. Easy so far. The primary issue is that the SDK distribution is the most popular distribution, but doesn’t share the same versioning scheme as the Runtime. The challenge is that we primarily talk about the product in terms of Runtime versioning (including this blog post), while the SDK is versioned in terms of the Tools it carries. There are a variety of reasons why we chose to do that. That’s the context.

.NET Core installers, Docker images and project.json files carry version numbers that you need to use and reason about. It can be challenging to select and/or write the right thing because some of these strings look surprisingly similar but mean different things.

Here are the key versions and what they mean, in prose English.

  • 1.0.0-preview2-sdk – Refers to the .NET Core 1.0 SDK, which includes a stable 1.0 Runtime and preview 1.0 Tools. This is the second preview release of the .NET Core Tools.
  • 1.0.0-preview2.1-sdk – Refers to the .NET Core 1.1 SDK, which includes a preview 1.1 Runtime and preview 1.0 Tools. It’s called preview2.1 because it’s a dot release for the Tools relative to preview2, even though it comes with a new Runtime.
  • 1.1.0-preview1 – Refers to the first preview of the .NET Core 1.1 Runtime.

We intend to ship the final 1.0 version of the .NET Core Tools next year, and this situation should get better. It will enable us to ship a 1.0.0-sdk release, with no preview string. The SDK and Runtime versions still won’t match; we’re discussing what to do about that. We’d like the Tools to be able to version faster than the Runtime; however, we may opt to artificially align the version numbers from time to time to make Runtimes and SDKs easier to match up, including in the branding terms we use in blog posts.

Closing

Please try out the .NET Core 1.1 Preview 1 release. We want to hear your .NET Core 1.1 Preview 1 feedback as we get ready to ship the final version of .NET Core 1.1. For those of you installing .NET Core on one of the newly supported operating system distributions, we want your feedback even more to help us scout out those releases more deeply.

Thanks to everyone for trying out and adopting .NET Core. We appreciate all of the feedback, product contributions and the general energy around the project. Thanks!

Announcing Entity Framework Core 1.1 Preview 1


Entity Framework Core (EF Core) is a lightweight, extensible, and cross-platform version of Entity Framework. Today we are making Entity Framework Core 1.1 Preview 1 available.

Upgrading to 1.1 Preview 1

If you are using one of the database providers shipped by the EF Team (SQL Server, SQLite, and InMemory), then just upgrade your provider package.

PM> Update-Package Microsoft.EntityFrameworkCore.SqlServer -Pre

If you are using a third-party database provider, then check to see if they have released an update that depends on 1.1.0-preview1-final. If they have, then just upgrade to the new version. If not, then you should be able to upgrade just the EF Core relational components that they depend on. Most of the new features in 1.1 do not require changes to the database provider. We’ve done some testing to ensure database providers that depend on 1.0 continue to work with 1.1 Preview 1, but this testing has not been exhaustive.

PM> Update-Package Microsoft.EntityFrameworkCore.Relational -Pre

Upgrading tooling packages

If you are using the tooling package, then be sure to upgrade that too. Note that tooling is versioned as 1.0.0-preview3-final because tooling has not reached its initial stable release (this is true of tooling across .NET Core, ASP.NET Core, and EF Core).

PM> Update-Package Microsoft.EntityFrameworkCore.Tools -Pre

If you are using ASP.NET Core, then you need to update the tools section of project.json to use the new Microsoft.EntityFrameworkCore.Tools.DotNet package. As the design of .NET CLI Tools has progressed, it has become necessary for us to separate the dotnet ef tools into this separate package.

"tools": {
  "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.0.0-preview3-final"
},

What’s in 1.1 Preview 1

The 1.1 release is focused on addressing issues that prevent folks from adopting EF Core. This includes fixing bugs and adding some of the critical features that are not yet implemented in EF Core. While we’ve made some good progress on this, we do want to acknowledge that EF Core still isn’t going to be the right choice for everyone. For more detailed info of what is implemented, see our EF Core and EF6.x comparison.

Improved LINQ translation

In the 1.1 release we have made good progress improving the EF Core LINQ provider. This enables more queries to successfully execute, with more logic being evaluated in the database (rather than in memory).
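As an illustration only (a hypothetical query, not one called out by the EF team), this is the kind of LINQ query where more of the filtering, ordering and paging can now be translated to SQL instead of being evaluated in memory:

using (var db = new BloggingContext())
{
    // Hypothetical Blog/Post model: Post.Blog is a navigation property and
    // PublishedOn is an assumed column. With improved translation, the Where,
    // OrderByDescending and Take below execute in the database as SQL.
    var recentPosts = db.Posts
        .Where(p => p.Blog.Url.StartsWith("http://blogs"))
        .OrderByDescending(p => p.PublishedOn)
        .Take(10)
        .ToList();
}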

DbSet.Find

DbSet.Find(…) is an API that is present in EF6.x and has been one of the more common requests for EF Core. It allows you to easily query for an entity based on its primary key value. If the entity is already loaded into the context, then it is returned without querying the database.

using (var db = new BloggingContext())
{
    var blog = db.Blogs.Find(1);
}

Mapping to fields

The new HasField(…) method in the fluent API allows you to configure a backing field for a property. This is most commonly done when a property does not have a setter.

public class BloggingContext : DbContext
{
    ...

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>()
            .Property(b => b.Url)
            .HasField("_theUrl");
    }
}

By default, EF will use the field when constructing instances of your entity during a query, or when it can’t use the property (i.e. it needs to set the value but there is no property setter). You can change this via the new UsePropertyAccessMode(…) API.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .Property(b => b.Url)
        .HasField("_theUrl")
        .UsePropertyAccessMode(PropertyAccessMode.Field);
}

You can also create a property in your model that does not have a corresponding property in the entity class, but uses a field to store the data in the entity. This is different from Shadow Properties, where the data is stored in the change tracker. This would typically be used if the entity class uses methods to get/set values.

You can give EF the name of the field in the Property(…) API. If there is no property with the given name, then EF will look for a field.

public class BloggingContext : DbContext
{
    ...

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>()
            .Property<string>("_theUrl");
    }
}

You can also choose to give the property a name other than the field name. This name is then used when creating the model; most notably, it will be used for the column name that the property is mapped to in the database.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .Property<string>("Url")
        .HasField("_theUrl");
}

You can use the EF.Property(…) method to refer to these properties in a LINQ query.

var blogs = db.Blogs
    .OrderBy(b => EF.Property<string>(b, "Url"))
    .ToList();

Explicit Loading

Explicit loading allows you to load the contents of a navigation property for an entity that is tracked by the context.

using (var db = new BloggingContext())
{
    var blog = db.Blogs.Find(1);

    db.Entry(blog).Collection(b => b.Posts).Load();
    db.Entry(blog).Reference(b => b.Author).Load();
}

Additional EntityEntry APIs from EF6.x

We’ve added the remaining EntityEntry APIs that were available in EF6.x, including Reload(), GetModifiedProperties(), and GetDatabaseValues(). These APIs are most commonly accessed by calling the DbContext.Entry(object entity) method.
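A minimal sketch of two of those APIs, assuming the same BloggingContext used throughout this post:

using (var db = new BloggingContext())
{
    var blog = db.Blogs.Find(1);
    var entry = db.Entry(blog);

    // Query the database for the entity's current values without
    // overwriting the values being tracked by the context.
    var databaseValues = entry.GetDatabaseValues();

    // Discard local changes and refresh the entity from the database.
    entry.Reload();
}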

Connection resiliency

Connection resiliency automatically retries failed database commands. This release includes an execution strategy that is specifically tailored to SQL Server (including SQL Azure). This execution strategy is included in our SQL Server provider. It is aware of the exception types that can be retried and has sensible defaults for maximum retries, delay between retries, etc.

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
        "",
        options => options.EnableRetryOnFailure());
}

Other database providers may choose to add retry strategies that are tailored to their database. There is also a mechanism to register a custom execution strategy of your own.

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseMyProvider(
        "",
        options => options.ExecutionStrategy(...));
}

SQL Server memory-optimized table support

Memory-Optimized Tables are a feature of SQL Server. You can now specify that the table an entity is mapped to is memory-optimized. When using EF Core to create and maintain a database based on your model (either with migrations or Database.EnsureCreated), a memory-optimized table will be created for these entities.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .ForSqlServerIsMemoryOptimized();
}

Simplified service replacement

In EF Core 1.0 it is possible to replace internal services that EF uses, but this is complicated and requires you to take control of the dependency injection container that EF uses. In 1.1 we have made this much simpler, with a ReplaceService(…) method that can be used when configuring the context.

public class BloggingContext : DbContext
{
    ...

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        ...

        // Illustrative type arguments: the generic parameters were lost here,
        // so this shows swapping in a hypothetical custom type mapper.
        optionsBuilder.ReplaceService<SqlServerTypeMapper, MyCustomSqlServerTypeMapper>();
    }
}

What’s happening after 1.1 Preview 1

The stable 1.1 release will be available later this year. We aren’t planning any new features between preview1 and the stable release. We will just be working on fixing bugs that are reported.

Our team is now turning its attention to the EF Core 1.2 and EF6.2 releases. We will share details of these releases in the near future.

The week in .NET – .NET, ASP.NET, EF Core 1.1 Preview 1 – On .NET on EF Core 1.1 – Changelog – FluentValidation – Reverse: Time Collapse


To read last week’s post, see The week in .NET – Bond – The Gallery.

Preview 1 of .NET Core 1.1, ASP.NET Core 1.1, and EF Core 1.1 announced

Preview 1 versions of .NET Core 1.1, ASP.NET Core 1.1, and Entity Framework Core 1.1 were released today. Check out the blog posts to discover the new features!

On .NET

Last week, Rowan Miller was on the show to talk about Entity Framework Core 1.1:

This week, we’ll speak with Martin Woodward about the .NET Foundation. The show is on Thursdays and begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

The Changelog podcast

I hope you will forgive me for the self-promotion… I had the chance to be interviewed on the Changelog podcast and you might want to check it out.

Package of the week: FluentValidation

FluentValidation is a lightweight validation library that uses a fluent interface and Lambda expressions for building validation rules. It’s written by Jeremy Skinner and is compatible with .NET Standard 1.0.

Game of the week: Reverse: Time Collapse

Reverse: Time Collapse is an action-adventure game that features a unique time travel story, in that time travels backwards. Take on the role of a scientist, a journalist and a secret agent who are forced to time travel as a result of a laboratory accident. Use each of the characters to solve puzzles across time while avoiding the deadly attacks of the Guardians of Time and Secret Service agents. Reverse: Time Collapse explores historical events such as WikiLeaks (2010), the Kennedy Assassination (1963) and Roswell (1947).

Reverse: Time Collapse

Reverse: Time Collapse is under active development by Meangrip using Unity and C#.

User group meeting of the week: VS 2015 with .NET Core Tooling in Raleigh, NC

TRINUG.NET holds a meeting on Wednesday, October 26 in Raleigh, NC, to talk about .NET Core tooling in VS 2015.

Blogger of the week: Rick Strahl

Rick’s been blogging for as long as I can remember, and his posts are always very detailed and carefully researched. He’s a problem solver, and likes to share his findings. It’s fair to say that anyone who has been working with .NET for a few years has saved some time thanks to one of Rick’s posts at least once. This week’s issue features his latest post.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and we’d love to feature your contributions in future posts.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.
