
End of Support for Visual Studio 2008 – in One Year


In line with our ten-year support policy, Visual Studio 2008, its associated products, runtimes, and components will cease to be supported on April 10, 2018. Though your Visual Studio 2008 applications will continue to work, we encourage you to port, migrate, and upgrade your Visual Studio projects over the next year to ensure you continue to receive support. Visit visualstudio.com to get the most up-to-date version of Visual Studio.

Microsoft will no longer provide security updates, technical support, or hotfixes when support ends on April 10, 2018, for the following products:

  • Microsoft Visual Studio 2008 (All editions)
  • Microsoft Visual C# 2008 (All editions)
  • Microsoft Visual C++ 2008 (All editions)
  • Microsoft Visual Basic 2008 (All editions)
  • Microsoft Visual Studio Team System 2008 (All editions)
  • Microsoft Visual Studio Team System 2008 Team Explorer
  • Microsoft Visual Studio Team System 2008 Team Foundation Server
  • Microsoft Visual Studio Team System 2008 Team Suite
  • Microsoft Visual Studio Team System 2008 Test Load Agent
  • Microsoft Visual Web Developer 2008 Express Edition

All later versions of Visual Studio products will continue to be supported for the duration of their established support lifecycles. More information on these products is available on the Servicing for Visual Studio and Team Foundation Server products page.

You can also check out the lifecycle information for .NET, C++ and Windows components on the Microsoft Support Lifecycle site.

Lastly, Microsoft Visual J# Version 2.0 Redistributable Package Second Edition will also cease to be supported on October 10, 2017.

The best way to continue to get full support for Visual Studio products is to upgrade to the latest versions. Visit VisualStudio.com for information on the latest Visual Studio products.

Deniz Duncan, Program Manager, Visual Studio

Deniz is a program manager in the Visual Studio release engineering team, responsible for making Visual Studio available around the world. Prior to joining Microsoft in Redmond, Deniz worked with Microsoft’s enterprise customers in Australia. She is passionate about the customer experience and ensuring we release tools & features developers need, want and love to use.



Import repositories from TFVC to Git


You can now migrate code from an existing TFVC repository to a new Git repository within the same account. To start migration, select Import Repository from the repository selector drop-down.

[Screenshot: Import Repository option in the repository selector]

Individual folders or branches can be imported to the Git repository, or the entire TFVC repository can be imported (minus the branches). Users can also import up to 180 days of history.

[Screenshot: Import repository dialog for a TFVC source]

We strongly recommend reading our whitepapers – Centralized version control to Git and TFVC to Git before starting the migration. For more details, please see the feature documentation. Give it a try and let me know if you have questions in the comments below. Thanks!

C++ Debugging and Diagnostics


Debugging is one of the cornerstones of software development, and it can consume a significant portion of a developer’s day.  The Visual Studio native debugger provides a powerful and feature-rich experience for finding and fixing problems that arise in your applications, no matter the type of problem or how difficult it is to solve.  In fact, there are so many debugging features and tools inside Visual Studio that it can be a bit overwhelming for new users.  This blog is meant to give you a quick tour of the Visual Studio native debugger and how it can help you in all areas of your C++ development.


Breakpoints and control flow

After you have built your application in Visual Studio, you can start the debugger simply by pressing F5.  When you start debugging, there are several commands that can help you to navigate the breakpoints in your application so that you can control the state of the program and the current context of the debugger.  These commands give you flexible control over the debugger’s scope and what lines and functions of code you want to investigate.

  • Continue with [F5]: Run to the next breakpoint.
  • Step over [F10]: Run the next line of code and then break.
  • Step into [F11]: Step into the function called on the current line of code.
  • Step out [Shift+F11]: Step out of the current function and break at the next executable line after the function call.

When hovering over a breakpoint in your code, you will see two icons appear.  The icon on the right with two circles allows you to quickly toggle the current breakpoint on or off without losing the breakpoint marker at this line of code:

[Screenshot: breakpoint toggle icons in the editor margin]

The icon on the left will launch the list of breakpoint options. Here you can add conditions or actions to a breakpoint.

[Screenshot: breakpoint options menu]

Sometimes you want a breakpoint to be hit only when a certain condition is satisfied, like x<=5 is true where x is a variable in the debugger scope.  Conditional breakpoints can easily be set in Visual Studio using the inline breakpoint settings window, which allows you to conveniently add conditional breakpoints to your code directly in the source viewer without requiring a modal window.  Notice that conditional breakpoints contain a “+” sign to indicate at least one condition has been added to the breakpoint.

[Screenshot: inline breakpoint settings window with a condition]
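For instance (a hypothetical snippet, not from the original post), a loop like the following is a natural place for the x<=5 condition mentioned above:

#include <iostream>

int main() {
    for (int x = 0; x < 100; ++x) {
        // Set a breakpoint on the next line with the condition "x <= 5":
        // the debugger will then break only during the first six iterations.
        std::cout << "iteration " << x << '\n';
    }
    return 0;
}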

There is also a set of actions that can be performed when a breakpoint is hit, like printing the process ID or the call stack; Visual Studio refers to breakpoints with such actions as “tracepoints”.  The inline breakpoint settings window allows you to set a variety of these actions, such as printing the call stack or PID.  Notice that when at least one action is assigned to a breakpoint, the breakpoint appears as a diamond shape.  In the example below, we have added both a condition and an action to the breakpoint; this makes it appear as a diamond with a “+” sign inside.

[Screenshot: breakpoint with both a condition and an action]

Function breakpoints will activate when a specified function is encountered by the debugger.  Use the Debug menu and select New Breakpoint to add a function breakpoint.

[Screenshot: New Function Breakpoint dialog]

Data breakpoints will stop the debugger when the value stored at a specific memory address changes.  Use the Debug menu and select New Breakpoint to add a data breakpoint.

Data inspection and visualization

When you are stopped at a breakpoint, the debugger has access to the variable names and values that are currently stored in memory.  There are several windows that allow you to view the contents of these objects.

  • Locals: The locals window lists all variables currently within the debugger scope, which typically includes all static and dynamic allocations made so far in the current function.
  • Autos: This window provides a list of the variables in memory that originate from:
    • The current line at which the breakpoint is set. Note that in the example below, line 79 has yet to execute; the variable is not yet initialized, so there is no value for the Autos window to display.
    • The previous three lines of code. As you can see below, when we are at the breakpoint on line 79, the previous three lines are shown, and the current line awaiting execution has been detected, but its value is not available until the line executes.

[Screenshot: source code at the breakpoint on line 79]

[Screenshot: Autos window]

  • Watch: These windows allow you to track variables of interest as you debug your application. Values are only available when the listed variables are in the scope of the debugger.
  • Quick Watch is designed for viewing the variable contents without storing it in the Watch window for later viewing. Since the dialog is modal, it is not the best choice for tracking a variable over the entire debugging session: for cases like this the Watch window is preferable.

[Screenshot: Quick Watch dialog]

  • Memory windows: These provide a more direct view of system memory and are not restricted to what is currently shown in the debugger. They provide the ability to arrange values by bit count, for example 16, 32, and 64. This window is intended primarily for viewing raw unformatted memory contents. Viewing custom data types is not supported here.

[Screenshot: Memory window]

Custom Views of Memory

Visual Studio provides the Natvis framework, which enables you to customize the way in which non-primitive native data types are displayed in the variable windows (Locals, Autos, Watches).  We ship Natvis visualizers for our libraries, including the Visual C++ STL, ATL, and MFC.  It is also easy to create your own Natvis visualizer to customize the way a variable’s contents are displayed in the debugger windows mentioned above.

Creating a Natvis File

You can add natvis files to a project or as a top-level solution item for .exe projects.  The debugger consumes natvis files that are in a project/solution.  We provide a built-in template under the Visual C++ > Utility folder for creating a .natvis file.

[Screenshot: new .natvis file template in the Add New Item dialog]

This will add the visualizer to your project for easier tracking and storage via source control.

[Screenshot: .natvis file in Solution Explorer]

For more information on how to write .natvis visualizers, consult the Natvis documentation.

Modifying Natvis Visualizers While Debugging

The following animation shows how editing a natvis for the Volcano type changes the debugger display in the variable windows.  The top-level display string for the object is changed to show the m_nativeName instead of the m_EnglishName.  Notice how the changes to the .natvis file are immediately picked up by the debugger and the difference is shown in red text.

[Animation: editing a .natvis file during a debugging session]
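As a rough sketch (the member names are taken from the animation described above; this is not the exact file from the post), a .natvis entry for the Volcano type could look like this:

<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Show the native name as the top-level display string -->
  <Type Name="Volcano">
    <DisplayString>{m_nativeName}</DisplayString>
    <Expand>
      <Item Name="English name">m_EnglishName</Item>
      <Item Name="Native name">m_nativeName</Item>
    </Expand>
  </Type>
</AutoVisualizer>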

Diagnostic tools and performance profiling

Most profiling tools run in a special mode that is separate from the debugger itself.  In Visual Studio, we have added a set of performance and diagnostics tools that can run during debugging and provide more insight into the performance and state of your apps.  You can control the flow of the application to get to a problem area and then activate more powerful tools as you drill down into the problem.  Instead of waiting around for the problem to happen, you have full control of the program and can decide what information you want to analyze, whether it’s how much time a function spends on the CPU or the memory usage of each allocation by type.  The live CPU and memory usage of your application are displayed in the graph, and debugger events are indicated along the timeline.  There is a tab for using each of the included diagnostic tools: CPU Usage and Memory Usage.

[Screenshot: Diagnostic Tools window]

CPU Usage

This tool allows you to view the CPU usage for each function called in a selected time range on the CPU graph.  You must enable the tools by clicking the “CPU Profiling” button on the left of this tab in order to select a time range for analysis.

[Screenshot: CPU Usage tab]

Memory Usage

This tool enables the memory profiler; for native profiling, it must be enabled using the Heap Profiling button so that heap snapshots can be captured.  The button on the left takes a snapshot, and you can view the contents of each snapshot by clicking the blue links in the snapshot table.

[Screenshot: memory snapshot table]

The Types View shows the types that were resolved from the memory snapshot including the count and total memory footprint.  You can navigate to the Instances View by double-clicking a line in this view.

[Screenshot: Types View]

The Instances View shows the individual instances of the type selected in the Types View, along with their memory footprints.  You can navigate back to the Types View using the back arrow to the left of the type name.

[Screenshot: Instances View]

The Stacks View shows the call stack for your program and allows you to navigate through the call path of each captured allocation.  You can navigate to the Stacks View from the Types View by selecting Stacks View in the View Mode dropdown.  The top section of this page shows the full execution call stack and can be sorted by callee or caller (in-order or reverse) with the control at the top right called Aggregate call stack by.  The lower section lists all allocations attributable to the selected part of the call stack.  Expanding these allocations shows their allocation call stacks.

[Screenshot: Stacks View]

Debugging processes and devices

Attaching to Process

Any process running on your Windows machine can be debugged using Visual Studio.  If you want to view the variable types, make sure to have the debug symbols loaded for the process that you are attaching to.

[Screenshot: Attach to Process dialog]

Remote Debugging

To debug remotely on another machine, enable the remote debugger via the debugger dropdown.  As long as you can connect to the machine over a network, you can debug it no matter how far away it is.  You can also easily debug applications running on external devices such as a Surface tablet.

[Screenshot: remote debugger selection dropdown]

The IP address and connection details can be managed in the debugger property page, accessed using either Alt+Enter or right-clicking the project in the Solution Explorer.

[Screenshot: debugger property page]

Multi-threaded debugging

Visual Studio provides several powerful windows to help debug multi-threaded applications.  The Parallel Stacks window is useful when you are debugging multithreaded applications. Its Threads View shows call stack information for all the threads in your application. It lets you navigate between threads and stack frames on those threads. In native code, the Tasks View shows call stacks of task groups, parallel algorithms, asynchronous agents, and lightweight tasks.

[Screenshot: Parallel Stacks window]

There is also a Parallel Watch window designed specifically for tracking variables across different threads, showing each thread as a row and each watch (object) as a column.  You can also evaluate Boolean expressions on the data and export it to a spreadsheet (.csv or Excel) for further analysis.

[Screenshot: Parallel Watch window]
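As an illustration (a made-up snippet, not from the post), this is the kind of multithreaded code where watching a per-thread variable in the Parallel Watch window pays off:

#include <cstdio>
#include <thread>
#include <vector>

void worker(int id) {
    int progress = id * 100;  // add "progress" to the Parallel Watch window:
                              // one row per thread, one column per watch
    std::printf("thread %d: %d\n", id, progress);
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(worker, i);
    for (auto& t : pool)
        t.join();
    return 0;
}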

Edit and continue

Edit and continue allows you to edit some sections of your code during a debugging session without rebuilding, potentially saving a lot of development time.  This is enabled by default, and can be toggled or customized using the Debugging options, accessible via the Debug menu and selecting Options.

[Screenshot: Edit and Continue options in the Debugging options page]

Other resources

If you are interested in some more content and videos about debugging in Visual Studio, check out these links:

Blog posts

Related documentation

Videos

Azure #DocumentDB Service Level Agreements


Why enterprises trust us for their globally distributed applications.

Enterprise applications and massive-scale applications need a data store that is globally distributed, offers limitless scale and geographical reach, and is fast and performant. Along with enterprise-grade security and compliance, a major criterion is the level of service guarantees the database provides in terms of availability, performance, and durability. Azure DocumentDB is Microsoft’s globally distributed database service designed to enable you to build planet-scale applications, letting you elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed single-digit-millisecond latency at the 99th percentile, 99.99% high availability, predictable throughput, and multiple well-defined consistency models.

We recently updated our Service Level Agreements (SLAs) to make them comprehensive, covering latency, availability, throughput, and consistency. By virtue of its schema-agnostic and write-optimized database engine, DocumentDB is, by default, capable of automatically indexing all the data it ingests and serves across SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As one of the foundational services of Azure, DocumentDB has been used virtually ubiquitously as a backend for first-party Microsoft services for many years. Since its general availability in 2015, DocumentDB has been one of the fastest-growing services on Azure.

[Image: DocumentDB SLA overview]

Industry leading comprehensive SLA

Since its inception, Azure DocumentDB has always offered the best SLA in the industry, with a 99.99% availability guarantee. Now, we are the only cloud service offering a comprehensive SLA for:

  • Availability: The most classical SLA. Your system will be available more than 99.99% of the time, or you get a refund.
  • Throughput: At a collection level, we guarantee that your requests are served according to the maximum throughput you provisioned for the collection.
  • Latency: Since speed is important, we guarantee that 99% of your requests will have a latency below 10ms for document read or 15ms for document write operations.
  • Consistency: We ensure that we will honor the consistency guarantees in accordance with the consistency levels chosen for your requests.

While everyone is familiar with the notion of an SLA on availability or uptime, providing financial guarantees on throughput, latency, and consistency is an industry-leading first. These guarantees are not only difficult to implement, but also hard to make transparent to users. Thanks to the Azure portal, we provide full transparency on uptime, latency, throughput, and the number of requests and failures. In the rare case that we are unable to honor any of these SLAs, we will provide credits from 10% to 25% of your monthly bill as a refund.

Availability SLA – 99.99%

[Screenshot: availability SLA in the Azure portal]

The following equation shows the SLA formula for availability, given a month with 744 hours:

Monthly Availability % = 100% − Average Error Rate, where the Error Rate for a given hour is Failed Requests / Total Requests, and the Average Error Rate is the sum of the hourly Error Rates divided by 744.

[Screenshot: availability metrics in the Azure portal]

A failed request has the HTTP code 5xx or 408 (for document Read/Write/Query operations) as shown in the portal.

Throughput SLA – 99.99%

The following equation shows the SLA formula for throughput, given a month with 744 hours:

Monthly Throughput % = 100% − Average Throughput Error Rate, where the Throughput Error Rate for a given hour is Throughput Failed Requests / Total Requests, averaged over the 744 hours in the month.

[Screenshot: throughput metrics in the Azure portal]

“Throughput Failed Requests” are requests that are throttled by the DocumentDB collection, resulting in an error code, before the consumed RUs have exceeded the provisioned RUs for a partition in the collection in a given second. To avoid being throttled due to misuse, we highly recommend reviewing the best practices for partitioning and scaling DocumentDB.

Consistency SLA – 99.99%

"Consistency Level" is the setting for a particular read request that supports consistency guarantees. You can monitor the consistency SLA through Azure portal:

[Screenshot: consistency metrics in the Azure portal, for eventual consistency]

Note: In this screenshot SLA = Actual

The following table captures the guarantees associated with the Consistency Levels. Please note:

  • "K" is the number of versions of a given document for which the reads lag behind the writes.
  • "T" is a given time interval.

 

CONSISTENCY LEVEL   CONSISTENCY GUARANTEES
Strong              Strong
Session             Read Your Own Write; Monotonic Read; Consistent Prefix
Bounded Staleness   Read Your Own Write (Within Write Region); Monotonic Read (Within a Region); Consistent Prefix; Staleness Bound < K,T
Consistent Prefix   Consistent Prefix
Eventual            Eventual

If a month has 744 hours, the SLA formula for consistency is:

Monthly Consistency % = 100% − Average Consistency Violation Rate, where the Consistency Violation Rate for a given hour is the fraction of Successful Requests that did not meet their configured consistency guarantee, averaged over the 744 hours in the month.

[Screenshot: consistency metrics in the Azure portal]

Latency SLA – P99

[Screenshot: observed read latency in the Azure portal]

For a given application deployed within a local Azure region, in a month, we sum the number of one-hour intervals during which Successful Requests submitted by an Application resulted in a P99 latency greater than or equal to 10ms for document read or 15ms for document write operations. We call these hours “Excessive Latency Hours.”

Monthly P99 Latency Attainment % = ((744 − Excessive Latency Hours) / 744) × 100%

If Monthly P99 Latency Attainment % is below 99%, we consider it a violation of the SLA and we will refund you up to 25% of your monthly bill.

We hope that this short post helped you understand the broad coverage of our enterprise SLAs.

Azure DocumentDB, home for Mission Critical Applications

Azure DocumentDB hosts a growing number of customer mission-critical apps. Our customers come from diverse verticals such as banking and capital markets, professional services, discrete manufacturing, startups, and health solutions. However, they share a common characteristic: the need to scale out globally without compromising on speed and availability. Thanks to its architecture, Azure DocumentDB can deliver on these promises at a very low cost.

Build your first globally distributed application

Our vision is to be the database for all modern applications. We want to enable developers to truly transform the world we are living in through the apps they are building, which is even more important than the individual features we are putting into DocumentDB. Developing applications is hard; developing distributed applications at planet scale that are fast, scalable, elastic, always available, and yet simple is even harder. Yet it is a fundamental prerequisite for reaching people globally in our modern world. We spend countless hours talking to customers every day and adapting DocumentDB to make the experience truly stellar and fluid.

So what are the next steps you should take? Here are a few that come to mind:

If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow. Stay up-to-date on the latest DocumentDB news and features by following us on Twitter (@DocumentDB) and join our LinkedIn Group.

Announcing public preview of Instance Metadata Service


We are excited to announce the public preview of the Instance Metadata Service in Azure’s West Central US region. The Instance Metadata Service is a RESTful endpoint that allows virtual machine instances to get information about their compute and network configuration, as well as upcoming maintenance events. The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from within the VM. The data from the Instance Metadata Service can help with your cluster setup, replica placement, supportability, telemetry, or other cluster bootstrap or runtime needs.

Previews are made available to you on the condition that you agree to the terms of use. For more information, see Microsoft Azure Supplemental Terms of Use for Microsoft Azure Previews.

Service Availability

The service is currently available to all VMs created with Azure Resource Manager in the West Central US region. As we add more regions, we will update this post and the documentation with the details.

Regions where Instance Metadata Service is available
West Central US

Detailed documentation

Learn more about Azure Instance Metadata Service

Retrieving instance metadata

The Instance Metadata Service is available for running VMs created/managed using Azure Resource Manager. To access all data categories for an instance, use the following sample commands for Linux or Windows.

Linux

curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-03-01"

Windows

curl -H @{'Metadata'='true'} "http://169.254.169.254/metadata/instance?api-version=2017-03-01"

The default output for all instance metadata is JSON (content type application/json).
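Individual properties can also be requested directly by appending their path, and returned as plain text. For example (based on the documented behavior of the service; the exact paths supported may vary by API version):

curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-03-01&format=text"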

Instance Metadata data categories

The following table lists all data categories available via Instance Metadata:

Data                          Description
location                      Azure region in which the VM is running
name                          Name of the VM
offer                         Offer information for the VM image; present only for images deployed from the Azure image gallery
publisher                     Publisher of the VM image
sku                           Specific SKU for the VM image
version                       Version of the VM image
osType                        Linux or Windows
platformUpdateDomain          Update domain the VM is running in
platformFaultDomain           Fault domain the VM is running in
vmId                          Unique identifier for the VM
vmSize                        VM size
ipv4/ipaddress                Local IP address of the VM
ipv4/publicip                 Public IP address for the instance
subnet/address                Address of the subnet
subnet/dnsservers/ipaddress1  Primary DNS server
subnet/dnsservers/ipaddress2  Secondary DNS server
subnet/prefix                 Subnet prefix, for example 24
ipv6/ipaddress                IPv6 address for the VM
mac                           VM MAC address
scheduledevents               See the Scheduled Events documentation

 

FAQs

  • I am getting “Bad request. Required metadata header not specified.” What does this mean?

    The Instance Metadata Service requires the Metadata:true header to be passed in the request. Passing this header allows access.

  • Why am I not getting compute information for my VM?

    Currently the Instance Metadata Service supports only instances created with Azure Resource Manager. Support for Cloud Services VMs will be added in the future.

  • I created my virtual machine through ARM a while back. Why am I not seeing compute metadata information?

    For any VMs created after September 2016, simply add a new tag to start seeing compute metadata. For older VMs (created before September 2016), add or remove an extension on the VM to refresh its metadata.

  • Why am I getting error 500 - Internal server error?

    Currently the Instance Metadata preview is available only in the West Central US region; please deploy your VMs there.

  • Where do I share additional questions/comments?

    Send your comments via http://feedback.azure.com.

Networking to and within the Azure Cloud, part 1

$
0
0

Hybrid networking is a nice thing, but how do we define it? For me, in the context of connectivity to virtual networks over ExpressRoute private peering or VPN, it is the ability to connect cross-premises resources to one or more Virtual Networks (VNets). While this all works nicely, and we know how to connect to the cloud, how do we network within the cloud? There are at least three built-in ways of doing this in Azure. In this series of three blog posts, my intent is to briefly explain:

  1. Hybrid networking connectivity options
  2. Intra-cloud connectivity options
  3. Putting all these concepts together

Hybrid Networking Connectivity Options

What are the options? Basically, there are 4 options:

  1. Internet connectivity
  2. Point-to-site VPN (P2S VPN)
  3. Site-to-Site VPN (S2S VPN)
  4. ExpressRoute

Internet Connectivity

As its name suggests, internet connectivity makes your workloads accessible from the internet, by having you expose different public endpoints to workloads that live inside the virtual network. These workloads could be exposed using an internet-facing Load Balancer, or by simply assigning a public IP address to the ipconfig object, a child of the NIC, which is in turn a child of the VM. This way, it becomes possible for anything on the internet to reach that virtual machine, provided the host firewall (if applicable), network security groups (NSGs), and user-defined routes allow that to happen.

So in that scenario, you could expose an application that needs to be public to the internet and be able to connect to it from anywhere, or from specific locations depending on the configuration of your workloads (NSGs, etc.).

Point-to-Site VPN or Site-to-Site VPN

These two fall into the same category: they both require your VNet to have a VPN gateway. You can connect to it either by using a VPN client on your workstation as part of the Point-to-Site configuration, or by configuring your on-premises VPN device to terminate a Site-to-Site VPN. This way, on-premises devices are able to connect to resources within the VNet. The next blog post in the series will touch on intra-cloud connectivity options.

ExpressRoute

This connectivity is well described in the ExpressRoute technical overview. Suffice it to say that, as with the Site-to-Site VPN option, ExpressRoute also allows you to connect to resources that are not necessarily in only one VNet. In fact, depending on the SKU, it can allow connections to more than one VNet: up to 10, or up to 100 with the premium add-on, depending on bandwidth. This is also going to be described in greater detail in the next post, on intra-cloud connectivity options.

Announcing Azure SDK for Node.js 2.0 preview


Today we're excited to announce the 2.0 preview of the Azure SDK for Node.js. This update is packed with features to help you be more productive and we've added 20 new modules for services such as SQL and DocumentDB management.

As usage of the Azure SDK for Node.js continues to grow, we've received a lot of feedback from the community on how the SDK helps Node.js developers build on Azure and how some changes could make them more productive. With that feedback in mind, we set out to make some significant improvements to the developer experience, which includes enhancements to the modules themselves as well as some updates to make working in Visual Studio Code better.

Keep in mind that, as this is a preview release, it's incredibly important to share your feedback with us on any issues or delightful experiences you face. Please open an issue on GitHub or connect with us directly on the Azure Developers Slack team with any questions or feedback.

Promises

Looking at the way the modules are used, we've seen a lot of opportunities to improve the code that people write and maintain. In the 2.0 preview, client methods return promises when the callback is omitted:

[Screenshot: promise-based code with the Azure SDK]

This also makes it possible to use async and await in TypeScript or ES2017 environments.

[Screenshot: async/await with the Azure SDK in TypeScript]
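Here is a rough sketch of both styles (the resource-management client is used for illustration; any client generated for the 2.0 preview behaves the same way):

const msRestAzure = require('ms-rest-azure');
const ResourceManagement = require('azure-arm-resource');

const subscriptionId = process.env['AZURE_SUBSCRIPTION_ID'];

// Omitting the trailing callback makes SDK methods return a promise.
msRestAzure.interactiveLogin()
  .then((credentials) => {
    const client = new ResourceManagement.ResourceManagementClient(credentials, subscriptionId);
    return client.resourceGroups.list();
  })
  .then((groups) => console.log(groups))
  .catch((err) => console.error(err));

// The same flow with async/await (TypeScript or ES2017):
// async function listGroups(credentials) {
//   const client = new ResourceManagement.ResourceManagementClient(credentials, subscriptionId);
//   return client.resourceGroups.list();
// }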

Updated typings

Visual Studio Code's rich IntelliSense support makes building apps much quicker and more intuitive. Building with the Azure SDK is no exception, and the typings have been updated and improved to provide you with the best possible experience.

[Screenshot: updated typings driving IntelliSense in Visual Studio Code]

New modules

Aside from the updates to the existing modules, this update includes the preview release of 20 new modules.

Moving to the preview

Migrating to the preview is low in complexity and should be a direct replacement without changes to your code. Note, however, that because the new implementation uses ES6 features, Node.js version 6.x or later is required. All of the existing callback-based methods will continue to work; omitting the final parameter will result in the method returning a promise.

Sending custom requests

With new updates to the runtime ms-rest and ms-rest-azure, you can make generic requests to Azure with the authenticated client. This is useful when debugging an issue or for making custom requests to the Azure API.

The following example makes a custom, long-running request to get all resource groups in a subscription, then writes the result to standard out. Detailed documentation is available on GitHub.

const msrest = require('ms-rest');
const msRestAzure = require('ms-rest-azure');

const clientId = process.env['CLIENT_ID'];
const secret = process.env['APPLICATION_SECRET'];
const domain = process.env['DOMAIN']; // Also known as tenantId.
const subscriptionId = process.env['AZURE_SUBSCRIPTION_ID'];

msRestAzure.loginWithServicePrincipalSecret(clientId, secret, domain).then((creds) => {
  let client = new msRestAzure.AzureServiceClient(creds);

  let options = {
    method: 'GET',
    url: `https://management.azure.com/subscriptions/${subscriptionId}/resourcegroups?api-version=2016-09-01`,
    headers: {
      'user-agent': 'MyTestApp/1.0'
    }
  };

  return client.sendLongRunningRequest(options);
})
.then(console.dir.bind(console))
.catch(console.error.bind(console));

What’s new in Microsoft Edge in the Windows 10 Creators Update


Today, the Windows 10 Creators Update began rolling out to over 400 million Windows 10 devices. With the Creators Update, we’re upgrading Microsoft Edge with dozens of new features and under-the-hood improvements to make the best browser on Windows 10 faster, leaner, and more capable than ever.

This release updates the Windows web platform to EdgeHTML 15, the fourth release of EdgeHTML and a major step forward in terms of the browser user experience, web platform capabilities, and fundamentals like performance, efficiency, and accessibility. In this post, we’ll share a quick overview of what’s new in each area, for both users and web developers. Stay tuned over the coming weeks, as we’ll be sharing a deeper look at many of these topics individually.

Web developers can start testing EdgeHTML 15 today by updating their Windows 10 device, or by downloading a free virtual machine from Microsoft Edge Dev. You can also test Microsoft Edge for free in BrowserStack, which offers instant cloud-based testing from a Mac or PC, including WebDriver automation. BrowserStack will be updated to include the final release of EdgeHTML 15 in the coming weeks.

Introducing Microsoft Edge in the Windows 10 Creators Update

Over the last eight months, the Microsoft Edge team has been focused on exciting new features to make the browsing experience better than ever:

Organize your web with new tab management experiences

Windows users spend more than half of their time on the web, and it’s all too easy to get tangled up in the chaos of search results, sites, and other content that can build up over hours, days, or weeks of browsing. In this update, we’ve introduced two new features to take the pain out of tab management.

Microsoft Edge now lets you set your tabs aside for later, sweeping them aside and organizing them neatly in a special section for easy access when you’re ready.


Set your tabs aside for later

Simply click the new “Set these tabs aside” button next to your row of tabs, and they are moved out of sight. When you’re ready to come back to them, just click the “Tabs you’ve set aside” icon, and you get a tidy, visual view of previous sessions. Restore one tab, or restore the full set!

If you have a lot of tabs open, it can be daunting to tell them apart, or to find a specific page in the sea of tiny icons and titles. Microsoft Edge now includes the ability to preview all your open tabs at once, so you can get back to what you’re looking for in a snap.


Show tab previews to scan your tabs more easily

Simply click the “Show tab previews” arrow to the right of your new tab button, and your tabs will expand to show a preview of the full page. You can scroll through this list to see as many tabs as you have open – when you find what you want, just click it to get back to browsing!

New reading experiences in Microsoft Edge

Microsoft Edge now lets you read books from right inside the browser, putting your favorite e-books from the Windows Store or from EPUBs on the web alongside your reading list and other content you expect to find in your browser.


Find ebooks in the Microsoft Edge Hub

You can find books in the new “Books” section of the Microsoft Edge Hub, and a wide selection of books for every taste in the Windows Store.

That’s just the beginning – you’ll find new features and extensions, and improvements to performance, usability, and more, all throughout Microsoft Edge. You can find tips on what’s new and how to get the most out of Microsoft Edge at Microsoft Edge Tips.

More efficient, more responsive, and more secure

We’ve made no secret of our ongoing obsession with making Microsoft Edge get more out of your battery, run the web faster, and keep you safer. We’ve been busy on these fronts, and EdgeHTML 15 is better than ever by any measure.

Pushing the frontier of energy efficiency

In the Creators Update, we’re taking the longest-lasting browser on Windows and supercharging it yet again. Thanks to major improvements in Microsoft Edge, like encouraging HTML5 content over Flash, improving the efficiency of iframes, and optimizing hit testing, Microsoft Edge on the Creators Update uses 31% less power than Google Chrome 57, and 44% less power than Mozilla Firefox 52, as measured by our open-source efficiency test that simulates real-world browsing.

These improvements translate into hours more browsing time for our customers – time to finish a crucial report while you’re at a coffee shop with no power, or to watch an extra movie on a long flight. In a head-to-head video rundown test, Microsoft Edge outlasted Google Chrome by more than three hours when streaming video!

There are countless enhancements to improve efficiency in the Creators Update, and we’re methodical about measuring the impact of each fix or new feature to make sure you get the most out of your browser. Watch this space for a detailed update on the engineering work that enables our greater power efficiency, and more on how we measure power consumption, coming early next week.

Responsiveness that puts the user first

In the past, we’ve been happy to share our leadership in synthetic JavaScript benchmarks like Google’s Octane benchmark, Apple’s JetStream, and others. Microsoft Edge continues to lead by those measures, but ultimately any single benchmark can only tell part of the story. In EdgeHTML 15, we’ve focused on making Microsoft Edge feel faster and more responsive, even when the page may be busy or hung, by prioritizing the user’s input above other activity and optimizing rendering for real-world scenarios.


Comparing scrolling on a busy page, before and after the input responsiveness improvements in EdgeHTML 15.

These improvements dramatically reduce input blocking on busy sites – put simply, the browser responds much more quickly to user input like clicking links or scrolling with the keyboard, even when a page may be busy loading or executing JavaScript in the background.

That just scratches the surface – for example, over the past two releases, we’ve been working on an ongoing, multi-year refactoring of the Microsoft Edge DOM tree, which is now substantially complete. Together with a number of performance optimizations, this has resulted in a more than twofold improvement in performance in many real-world scenarios, as measured by the Speedometer benchmark, which simulates real-world app patterns using common frameworks.

[Chart: Microsoft Edge scores on the Speedometer benchmark over the past four releases. Edge 12: 5.44. Edge 13: 37.83. Edge 14: 53.92. Edge 15: 82.67.]

We’ll be exploring these performance and responsiveness improvements in more detail over the coming weeks – stay tuned!

Safer than ever

Microsoft Edge in the Creators Update includes two broad categories of security improvements which make the browser more resilient to typical attack strategies.

First, we’ve introduced a series of mitigations to prevent arbitrary native code execution: Code Integrity Guard and Arbitrary Code Guard. These mitigations make it much more difficult to load harmful code into memory, making it less likely and less economical for attackers to build a complete exploit. You can read more about this work in Mitigating arbitrary native code execution in Microsoft Edge.

Second, we’ve dramatically improved the resiliency of the Microsoft Edge sandbox. Microsoft Edge has always been sandboxed in a series of app containers on Windows 10 – in the Creators Update, we’ve tuned these app containers by reducing their access scope to only the capabilities that are directly necessary for Microsoft Edge to work properly. This work dramatically reduces Microsoft Edge’s attack surface area (including a 90% reduction in access to WinRT and DCOM APIs), and when combined with the exploit mitigations that apply to Microsoft Edge and its brokers, increases the difficulty of exploiting any remaining vulnerabilities. You can read more about this work in Strengthening the Microsoft Edge Sandbox.

Modern capabilities for web developers

The Windows 10 Creators Update upgrades the Windows web platform to EdgeHTML 15, which introduces a number of new, modern capabilities for web developers. A few of these are highlighted below – you can find the full list of changes on the Microsoft Edge Dev Guide.

Simpler web payments with the Payment Request API

The new W3C Payment Request API enables simpler checkouts and payments on Windows 10 PCs and phones. In Microsoft Edge, the Payment Request API connects to the user’s Microsoft Account (with the user’s permission), allowing easy access to payment information. Because payment information is securely saved in a digital wallet, shoppers don’t have to navigate through traditional checkout flows and repeatedly enter the same payment and shipping address information.

[Screen capture: a Microsoft Wallet dialog box with shipping and payment information]

This can provide a faster and more consistent experience across websites, which saves shoppers time and effort by allowing them to securely share saved payment information. Learn more about the Payment Request API in our blog post, Simpler web payments: Introducing the Payment Request API, or see the Payment Request API samples on Microsoft Edge Dev.
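Here is a minimal sketch of the API in action (the amounts and method data are illustrative; see the linked samples for complete flows):

// The 2017-era API takes an array of supported payment methods plus order details.
const methods = [{ supportedMethods: ['basic-card'] }];
const details = {
  total: { label: 'Order total', amount: { currency: 'USD', value: '19.99' } }
};

const request = new PaymentRequest(methods, details);
request.show()  // displays the browser-provided payment sheet
  .then((response) => response.complete('success'))
  .catch((err) => console.error(err));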

CSS Custom Properties

CSS Custom Properties (formerly called CSS Variables) are a new primitive value type that fully cascades variables across CSS properties. Custom Properties enable the same fundamental use cases as variables in CSS pre-processors, but with the additional benefits of being fully cascaded, being accessible from JavaScript, and not requiring an additional build step. Learn more about CSS Custom Properties in our blog post, CSS Custom Properties in Microsoft Edge, or see Custom Properties Pooch: a Microsoft Edge demo on Microsoft Edge Dev.
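For example, a property defined once on :root cascades to every rule that references it with var():

:root {
  --brand-color: #0078d7;                     /* define the custom property once */
}

.button {
  background: var(--brand-color);             /* reference it anywhere in the cascade */
  border: 1px solid var(--brand-color, #000); /* with an optional fallback value */
}

Because the values live in the cascade, JavaScript can also update them at runtime, e.g. document.documentElement.style.setProperty('--brand-color', '#107c10').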

WebVR Developer Preview

Microsoft Edge now supports the WebVR 1.1 draft specification, which has been collaboratively authored by Mozilla, Google, Samsung, Microsoft, Oculus and others. Developers can now use this API to create immersive VR experiences on the web with the recently available Windows Mixed Reality dev kits. You can even get started without a headset using the Windows Mixed Reality Simulator. Acer, ASUS, Dell, HP, and Lenovo will ship the world’s first Windows Mixed Reality-enabled headsets later this year, starting at just $299 USD. Note that while WebVR is enabled by default in Microsoft Edge, using the Windows Mixed Reality Portal or Mixed Reality dev kits currently requires Developer Mode to be turned on in Windows settings.
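A quick sketch of feature detection with the WebVR 1.1 API (illustrative; see the spec for the full interface):

// WebVR 1.1 exposes connected headsets through navigator.getVRDisplays().
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then((displays) => {
    if (displays.length > 0) {
      console.log('VR display found: ' + displays[0].displayName);
    } else {
      console.log('WebVR is supported, but no display is connected.');
    }
  });
}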

Brotli

Brotli is a compression format that achieves up to 20% better compression ratios with similar compression and decompression speeds. This ultimately results in substantially reduced page weight for users, improving load times without substantially impacting client-side CPU costs. Compared to existing algorithms, like Deflate, Brotli compression is more efficient in terms of both file size and CPU time. Learn more about Brotli in our blog post, Introducing Brotli compression in Microsoft Edge.
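At the protocol level, the negotiation is ordinary HTTP content coding (a generic exchange, not specific to Edge): the client advertises Brotli with the br token, and the server may respond with Brotli-encoded content:

GET /index.html HTTP/1.1
Accept-Encoding: br, gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: br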

And lots more…

There’s simply too much to list in one post – the list goes on with features like WebRTC, TCP Fast Open, Intersection Observer, experimental support for WebAssembly, and more. You can find the full list of what’s new in the Microsoft Edge Dev Guide, or a comprehensive view of which standards are supported, planned, or in preview at Microsoft Edge Platform Status.

Built in the open

We’re proud to continue building Microsoft Edge in the open, using the voice of our community to drive product planning, and sharing our roadmap transparently. Windows itself is on an exciting journey with 10 million Insiders. These initiatives are better together – Windows Insiders are essential to building Microsoft Edge faster and with better quality, and Windows itself has been able to leverage tools like Microsoft Edge Platform Status and Microsoft Edge Platform issues – for the first time launching an open backlog and bug tracker for the Windows platform.

The voice of our community is helping us chart the course for 2017 and beyond. Nolan Lawson recently shared a look at the top highest-rated CSS features on the Microsoft Edge UserVoice:

In case you think Edge UserVoice doesn't matter… here are the top 4 highest-voted CSS features https://t.co/NBPYTJNfUg pic.twitter.com/y0feYULEd2

— Nolan Lawson (@nolanlawson) March 19, 2017

At An Event Apart Seattle, we recently announced that development has begun on our updated CSS Grid implementation. With this announcement, every one of the features pictured is in development (or, in the case of CSS Custom Properties, shipping today!).

Beyond CSS, our roadmap for preview releases over the rest of 2017 is focused on three areas: doubling down on fundamentals like performance and reliability, delivering Progressive Web Apps on Windows, and continuing to innovate in the user experience of Microsoft Edge. We’re excited to share more about what the future holds soon!

Get started today on Windows 10, or test for free via BrowserStack

You can try Microsoft Edge on the Windows 10 Creators Update today! If you’re on a Windows 10 device, simply check for updates – see the instructions here for more details. If you’re on another platform, you can test EdgeHTML 15 instantly for free via BrowserStack, or download a free virtual machine from Microsoft Edge Dev.

Kyle Pflug, Senior Program Manager, Microsoft Edge


.NET Framework April 2017 Monthly Rollup


Today, we are releasing a new Security and Quality Rollup and Security Only Update for the .NET Framework. You can read the April 2017 Security Updates Release Notes to learn about all changes being released today.

Security

Microsoft Common Vulnerabilities and Exposures CVE-2017-0160

A remote code execution vulnerability exists when the Microsoft .NET Framework fails to properly validate input before loading libraries. An attacker who successfully exploited this vulnerability could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights. To exploit the vulnerability, an attacker would first need to access the local system with the ability to execute a malicious application. The security update addresses the vulnerability by correcting how .NET validates input on library load.

Note: You can also search for the security update at Security TechCenter. Search for “CVE-2017-0160”.

Quality and Reliability

There are no quality and reliability changes this month.

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services and Microsoft Update Catalog. The Security Only Update is available via Windows Server Update Services and Microsoft Update Catalog. The Windows 10 updates are integrated with the Windows 10 Monthly Update.

See .NET Framework Deployment tables for detailed deployment information on the release.

See the table below to learn about version applicability and more detailed release-specific information.

Windows Version .NET Version Rollup KB Security-only KB
Windows 10 Creators Update .NET Framework 3.5 and 4.7 4015583 N/A
Windows 10 Anniversary Update and Windows Server 2016 .NET Framework 3.5 and 4.6.2 4015217 N/A
Windows 10 1511 Update .NET Framework 3.5 and 4.6.1 4015219 N/A
Windows 10 RTM .NET Framework 3.5 and 4.6 4015221 N/A
Windows 8.1 and Windows Server 2012 R2 .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, and 4.6.2 4014983 4014987
Windows Server 2012 .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, and 4.6.2 4014982 4014986
Windows 7 and Windows Server 2008 R2 .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, and 4.6.2 4014981 4014985
Windows Vista SP2 and Windows Server 2008 SP2 .NET Framework 3.5, 4.5.2, and 4.6 4014984 4014988

Docker Images

The Windows ServerCore and .NET Framework Docker images are being updated today. Pulling the latest image will update your local Docker image cache.

Previous Monthly Rollups

The last couple of .NET Framework rollup updates are listed below for your convenience:

Note: Previously released security and quality updates are included in today’s release.

More Information

You can read the .NET Framework Monthly Rollups Explained to learn more about how the .NET Framework is updated.

The week in .NET – .NET Framework 4.7, reference documentation, On .NET on modular ASP.NET, Happy birthday .NET with Immo Landwerth, JustAssembly


Previous posts:

.NET Framework 4.7

This week, we announced the release of the .NET Framework 4.7. We’ve added support for targeting the .NET Framework 4.7 in Visual Studio 2017, also updated today.

The .NET Framework 4.7 includes improvements in several areas:

  • High DPI support for Windows Forms applications on Windows 10
  • Touch support for WPF applications on Windows 10
  • Enhanced cryptography support
  • Performance and reliability improvements

You can see the complete list of improvements and the API diff in the .NET Framework 4.7 release notes.

Read the blog post: Announcing the .NET Framework 4.7 by Rich Lander.

New .NET reference documentation

Almost a year ago, we piloted the .NET Core reference documentation on docs.microsoft.com. Today we are happy to announce our unified .NET API reference experience. We understand that developer productivity is key – from a hobbyist developer, to a startup, to an enterprise. With that in mind, we partnered closely with the Xamarin team to standardize how we document, discover, and navigate .NET APIs at Microsoft.

On .NET

Last week, Sébastien Ros was back on the show to demo the fantastic support for modularity that was built for Orchard Core, which can now be used in any ASP.NET Core application:

Happy birthday .NET with Immo Landwerth

Back in February we threw a party for the 15th anniversary of .NET. We caught up with Immo Landwerth, a program manager on the .NET team at Microsoft, who joined Microsoft in 2010. He tells us about his journey from being a customer using .NET to an employee and the cultural changes he’s witnessed as .NET has moved to open source.

Tool of the week: JustAssembly

This week, Telerik introduced JustAssembly, a free utility that compares two .NET assemblies and shows the differences in their code line by line.

[Screenshot: JustAssembly assembly comparison]

Read Stefan Stefanov’s blog post introducing the tool.

Meetups of the week: VS 2017, AppInsights, and IoT in Adelaide

The Adelaide .NET User Group is holding a Visual Studio 2017 launch event on April 12 at 5:30 PM, with a talk from Paul Usher on AppInsights and another on IoT from Jack Ni.

.NET

ASP.NET

C#

F#

New F# Language Suggestions:

Check out F# Weekly for more great content from the F# community.

VB

Xamarin

Azure

UWP

Data

Game Development

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on The Morning Brew.

Prepare real-world data for analysis with the vtreat package


As anyone who's tried to analyze real-world data knows, there are any number of problems that may be lurking in the data that can prevent you from being able to fit a useful predictive model:

  • Categorical variables can include infrequently-used levels, which will cause problems if sampling leaves them unrepresented in the training set.
  • Numerical variables can be in wildly different scales, which can cause instability when fitting models.
  • The data set may include several highly-correlated columns, some of which could be pruned from the data without sacrificing predictive power.
  • The data set may include missing values that need to be dealt with before analysis can begin.
  • ... and many others

The vtreat package is designed to counter common data problems like these in a statistically sound manner. It's a data frame preprocessor which applies a number of data cleaning processes to the input data before analysis, using techniques such as impact coding and categorical variable encoding (the methods are described in detail in this paper). Further details can be found on the vtreat github page, where authors John Mount and Nina Zumel note:

Even with modern machine learning techniques (random forests, support vector machines, neural nets, gradient boosted trees, and so on) or standard statistical methods (regression, generalized regression, generalized additive models) there are common data issues that can cause modeling to fail. vtreat deals with a number of these in a principled and automated fashion.


One final note: the main function in the package, prepare, is a little like model.matrix in that categorical variables are converted into numeric variables using contrast codings. This means that the output is suitable for many machine-learning functions (like xgboost) that don't accept categorical variables.
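A minimal sketch of the typical flow (the frame and variable names here are made up; see the worked example linked below for real usage):

library(vtreat)

# Design a treatment plan on training data for a binary outcome...
treatments <- designTreatmentsC(dTrain,
                                varlist = c("x1", "x2"),
                                outcomename = "y",
                                outcometarget = TRUE)

# ...then 'prepare' produces clean, all-numeric frames safe to hand to a modeler.
dTrainTreated <- prepare(treatments, dTrain)
dTestTreated  <- prepare(treatments, dTest)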

The vtreat package is available on CRAN now, and you can find a worked example using vtreat in the blog post linked below.

Win-Vector Blog: vtreat: prepare data

Linux development with C++ in Visual Studio


C++ is a versatile language that is used in many domains and on many platforms. Visual Studio enables you, as a C++ developer, to target Windows desktop, Windows Store, Linux, Mobile (Android and iOS), and also gaming platforms like DirectX, Unreal, and Cocos. We call all these “workloads”, and those of you using other programming languages like Python or C# can target other workloads in Visual Studio. We know C++ developers have other tooling options when it comes to targeting any of those workloads, so why do so many choose Visual Studio? In terms of the number of users, Visual Studio has long been the leading IDE on Windows for C++ developers targeting any platform. That is because many of you enjoy the quality of its excellent editing environment, market-leading debugging and diagnostic tools, great testing tools, and useful team collaboration tools, as well as the ability to bring your own compiler or build system.

You can click any of the links above to learn about the corresponding capabilities in Visual Studio. A great starting point is our quick guide to Getting Started with Visual Studio.

In this blog post we will dive into the Linux Development with C++ workload. You will learn

  • how to acquire this as part of installing Visual Studio 2017,
  • how to create a Linux C++ project,
  • how to establish your first connection to a Linux machine from Visual Studio,
  • how sources are managed between Visual Studio and Linux,
  • what capabilities the Linux project system provides,
  • and how to use Visual Studio diagnostic tools to find and resolve issues.

Install Workload for Linux development with C++

Visual Studio 2017 introduces the C/C++ Linux Development workload. To install it, start the Visual Studio installer and choose to either install or modify an existing installation. Scroll to the bottom. Under the section “Other Toolsets” you will find Linux Development with C++. The workload installs in under 10 minutes.

[Screenshot: Linux development with C++ workload in the Visual Studio installer]

Opening projects

You will need a Linux machine, of course, or you can use the Windows Subsystem for Linux with Visual Studio. You can use any Linux distribution that has SSH, gdbserver, and a compiler installed. In your Linux environment, this is as easy as:

sudo apt install -y openssh-server build-essential gdbserver

To create a new Linux Console Application in Visual Studio, select that project type under New Project > Visual C++ > Cross Platform > Linux.

[Screenshot: Linux project templates in the New Project dialog]

This project will open a readme with some instructions about its capabilities. There is also a main.cpp file which outputs text to the console. Go ahead and compile it using the menu Build > Build Solution. Since this is your first Linux project you will be prompted by the connection manager dialog to add a connection. You can add new connections with either password or private key authentication.

[Screenshot: Linux connection manager dialog]
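For reference, the main.cpp the template generates is essentially a console hello-world, along these lines (a reconstruction, not the verbatim template):

#include <cstdio>

int main() {
    printf("hello from Linux!\n");
    return 0;
}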

After you enter your information, Visual Studio manages the connection to your Linux system where builds are performed.  If there are any problems, the build output points you directly to issues found in your code.

The project system synchronizes your sources between Windows and Linux, and provides you with extensive control to manage this yourself if you need it. Right-click the project in Solution Explorer and choose Properties:

  • The General property page allows you to set options like which folders to use on the remote system and what the Configuration Type is for your output (an executable, static library, or dynamic library).
  • The Debugging page provides additional control over execution; for example, you can run additional commands before launching a program, such as exporting the display to debug desktop apps.
  • The VC++ Directories page provides options for controlling IntelliSense by providing additional directories to include or exclude.
  • The Copy Sources property page allows you to specify whether to copy sources to the remote Linux system. You may not want to copy sources if you are working with a share or are managing your own synchronization through other means.
  • The C/C++ and Linker property page groups provide many options for controlling what flags are passed to the compiler. They also enable you to override the default of g++ and specify your own compiler.
  • Finally, the Build Events property page group provides the capability to run additional commands locally or remotely as part of the build process.

Linux Project Properties

Of course, you probably already have some existing sources, and probably a build system as well. If that is the case, our makefile project is what you want. You can create one from the New Project dialog, import your sources, and specify the build commands to use on the remote Linux system. If your project is particularly large, you may find it easier to auto-generate the project files instead of configuring everything manually with the property pages. You can find example scripts on GitHub that show how to generate Linux makefile projects for Visual Studio based on an existing source base. These examples should be easy to modify for your own requirements.

Use the full power of Visual Studio productivity features with your Linux C++ code

IntelliSense is provided out of the box for GCC and libstdc++. It is easy to configure your project to use headers from your own Linux system to enable IntelliSense for everything you need. Once you get going with your own code you can really see Visual Studio’s productivity features in action.

Member list and Quick Info, shown in the screenshot below, are just two examples of the powerful IntelliSense features that make writing code easier and faster. Member list shows you a list of valid members from a type or namespace. Typing in “->” following an object instance in the C++ code will display a list of members, and you can insert the selected member into your code by pressing TAB, or by typing a space or a period. Quick Info displays the complete declaration for any identifier in your code. In the example in the screenshot, Visual Studio is showing the list of accessible members of the SimpleServer object and the declaration of the open method, making writing code a lot easier.

Linux Code Editing

Refactoring, autocomplete, squiggles, reference highlighting, syntax colorization, and code snippets are some of the other productivity features that help when you are writing and editing your code.

Navigating in large codebases and jumping between multiple code files can be a tiring task. Visual Studio offers many great code navigation features, including Go To Definition, Go To Line/Symbols/Members/Types, Find All References, View Call Hierarchy, Object Browser, and many more, to boost your productivity.

The Peek Definition feature, as shown in the screenshot below, shows the definition inline without switching away from the code that you’re currently editing. You can find Peek Definition by placing the insertion point on a method that you want to explore and then right-clicking or pressing Alt+F12. In the screenshot below, the definition of the OpenCV face detection detectMultiScale method, in objdetect.hpp, is shown in an embedded window in the current cpp file, making reading and writing OpenCV code more efficient.

Linux Peek Definition
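For reference, here is a hedged sketch of a typical detectMultiScale call (the cascade file name and helper function are illustrative; the post’s actual sample isn’t reproduced here):

#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <vector>

std::vector<cv::Rect> DetectFaces(const cv::Mat& frame)
{
    // Load a pre-trained face cascade; this file ships with OpenCV.
    cv::CascadeClassifier faceCascade;
    faceCascade.load("haarcascade_frontalface_default.xml");

    // detectMultiScale expects a grayscale image.
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1 /*scaleFactor*/, 3 /*minNeighbors*/);
    return faces;
}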

You can learn much more about editing and navigating C++ code in Visual Studio here.

Debugging and diagnosing issues

Visual Studio excels at helping you solve your development problems, and now you can use those capabilities with your C++ code on Linux. You can set breakpoints in your C++ code and press F5 to launch the debugger, which will run your code on your Linux machine. When a breakpoint is hit, you can watch the value of variables and complex expressions in the Autos and Watch tool windows, as well as in data tips on mouse hover, view the call stack in the Call Stack window, and step into and out of your code easily. You can use conditions on your breakpoints to narrow in on specific problems, set actions (for example, to record variable values to the Output window), inspect application threads, or view disassembly; a small sketch of the conditional-breakpoint workflow follows.
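As an illustration only (this snippet is hypothetical, not from the post), consider a loop that misbehaves on a single iteration; a conditional breakpoint saves you from stepping through every pass:

#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> values(1000, 1);
    values[500] = -1; // the bad element we are hunting for

    int sum = 0;
    for (size_t i = 0; i < values.size(); ++i)
    {
        // Set a breakpoint on the next line with the condition: values[i] < 0
        // (or i == 500), and optionally an action that logs i to the Output window.
        sum += values[i];
    }
    printf("sum = %d\n", sum);
    return 0;
}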

If you need to interact with your programs on Linux you can use the Linux Console window from within Visual Studio. To activate this window, use the menu Debug > Linux Console. In the screenshot below you can see input being provided to the scanf call on line 24.

Linux Console Window
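The sample from the screenshot isn’t reproduced in this post, but a minimal stand-in that exercises the Linux Console the same way (the prompt text is made up) looks like this:

#include <cstdio>

int main()
{
    int age = 0;
    printf("Enter your age: ");
    // Execution waits here until a value is typed into the Linux Console window.
    scanf("%d", &age);
    printf("You entered: %d\n", age);
    return 0;
}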

You can even attach to processes on your Linux machines to debug problems live. Open the Debug menu and select Attach to Process. As shown in the screenshot below, select the SSH connection type; the target drop-down will then enumerate the remote connections you have previously created, letting you pick a Linux machine.

Linux Attach to Process

See our debugging and diagnostics with C++ page to learn more about our general capabilities.

Working with others

Developing an application usually involves working with others. When it comes to storing and sharing source code and building in the cloud, Visual Studio Team Services has you covered. Collaborating with other team members is as easy as signing up for a free Visual Studio Team Services account, checking in your source code, and specifying who has access to it. Now everyone on your team can check out source code, edit it, and check it back in.

Visual Studio Team Services also simplifies continuous integration for your code. Create and manage build processes that automatically compile and test your applications in the cloud. Wondering if a bug was fixed in this build? By associating work items with code, the work items are listed in the build summary along with code changes and test results.

We have more information available about developing C++ applications in a team.

Get started with Visual Studio Linux C/C++ Development today

Learn in depth how to use Visual Studio for C/C++ Linux development in our announcement post and on docs.microsoft.com. Please provide feedback from within Visual Studio by selecting Help > Send Feedback. If you have any questions, you can send us email at VC++ Linux Support.

DirectX game development with C++ in Visual Studio


Leverage the full power of C++ to build high-end games powered by DirectX to run on a variety of devices in the Windows family, including desktops, tablets, and phones. In this blog post we will dive into DirectX development with C++ in Visual Studio. First we’ll look at how to acquire the tools needed for DirectX desktop and Universal Windows Platform (UWP) development, then we will get started with a built-in project template, followed by writing C++ code and HLSL (High Level Shader Language) shaders for the DirectX game. Next we will use the world-class Visual Studio debugger and the Visual Studio DirectX graphics debugger and profiler to catch and fix issues in the code. Finally, we will talk about how to test your DirectX game and collaborate with your team members using Visual Studio.

Install Visual Studio for DirectX development

First, download Visual Studio 2017 and launch the Visual Studio installer.

To build DirectX desktop games, choose the “Game development with C++” workload under the “Mobile & Gaming” category. This workload gives you the core tools to build DirectX games for desktop, which includes the Visual Studio core editor, Visual C++ compiler, Windows Universal C Runtime, and Visual Studio debugger.

1-install-vs2017

The pre-selected components are highly recommended. Here are those components, along with the optional components that are useful for building DirectX games:

  • C++ profiling tools: includes Graphics Diagnostics for DirectX (a.k.a. Visual Studio graphics debugger) and a set of profiling tools for memory, CPU and GPU. Selected by default.
  • Windows 10 SDKs: The latest Windows 10 SDK is selected by default.
  • Windows 8.1 SDK and UCRT (Universal C Runtime) SDK
  • IncrediBuild: installs IncrediBuild from incredibuild.com, a distributed computing solution for code builds, data builds and development tasks.
  • Cocos: installs Cocos Creator from cocos2d-x.org, the editor for building Cocos2d games.
  • Unreal Engine installer: installs Epic Games Launcher from unrealengine.com, which you can use to download and install the Unreal Engine.

If you’re also interested in building DirectX games for UWP to run on a variety of devices in the Windows family, you can install the tools by checking the “Universal Windows Platform development” workload under the “Windows” category with the “C++ Universal Windows Platform tools” option selected. The C++ UWP component adds the core C++ UWP support and 3 DirectX project templates for DirectX11 and DirectX12 to get you started quickly. The “Graphics debugger and GPU profiler” component is highly recommended for DirectX development, as it brings in the Graphics Diagnostics feature for DirectX graphics debugging and the GPU Usage feature for profiling GPU and CPU usage in DirectX games.

2-install-uwp

Getting started

DirectX game for UWP

The UWP workload comes with 3 DirectX project templates. Use the menu item New->Project to launch the New Project dialog and then type “DirectX” in the search box in the upper right corner to find the project templates for DirectX: DirectX11 App, DirectX12 App, DirectX11 and XAML App. Select one template and click OK.

3-templates

After the project is created, you’re all set to run the DirectX app right away by pressing F5 or clicking Debug->Start Debugging from the menu. You should see a colored 3D cube spinning on your screen.

DirectX game for desktop

To build a DirectX desktop app, you can either start with the Win32 Project template in the New Project dialog or download a sample from DirectX11 samples or DirectX12 samples as a starting point.

Write C++ code with the full power of the Visual Studio IDE

Now that we have a basic 3D app running, it’s time to add game logic in C++. Use the full power of Visual Studio productivity features, including IntelliSense and code navigation, to write your game code in C++.

Member list and Quick Info, as shown in the following screenshot, are just two examples of the IntelliSense features Visual Studio offers to make code writing easier and faster. Member list shows you a list of valid members from a type or namespace. Typing in “->” following an object instance in the C++ code will display a list of members, and you can insert the selected member into your code by pressing TAB, or by typing a space or a period. Quick Info displays the complete declaration for any identifier in your code. In the following screenshot, Visual Studio is showing the list of members of an instance of the DX::DeviceResources object and the declaration of the GetBackBufferRenderTargetView method, making writing DirectX code a lot easier.

4-editing
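To make the DeviceResources pattern concrete, here is a short sketch in the style of the DirectX11 App template (member names such as m_deviceResources follow the template’s conventions; verify against your generated project):

// Bind the swap chain's back buffer and the depth-stencil view as render targets.
auto context = m_deviceResources->GetD3DDeviceContext();

ID3D11RenderTargetView* const targets[1] =
    { m_deviceResources->GetBackBufferRenderTargetView() };
context->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());

// Clear the back buffer before drawing the spinning cube.
// DirectX::Colors lives in <DirectXColors.h>.
context->ClearRenderTargetView(
    m_deviceResources->GetBackBufferRenderTargetView(),
    DirectX::Colors::CornflowerBlue);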

Refactoring, auto-complete, squiggles, reference highlighting, syntax colorization, and code snippets are some of the other productivity features that are of great assistance when writing and editing code.

Navigating in large codebases and jumping between multiple code files can be a tiring task. Visual Studio offers many great code navigation features, including Go To Definition, Go To Line/Symbols/Members/Types, Find All References, View Call Hierarchy, Object Browser, and many more, to boost your productivity.

The Peek Definition feature, as shown in the following screenshot, shows the definition inline, allowing you to view and edit code without switching away from the code that you’re writing. You can invoke Peek Definition from the right-click context menu, or with the shortcut Alt+F12, on a method that you want to explore. In the example in the screenshot, Visual Studio brings the definition of the CreateInputLayout method, which lives in d3d11.h, into an embedded window in the current cpp file, making reading and writing DirectX code more efficient.

5-navigation
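For context, CreateInputLayout is the ID3D11Device method being peeked at; a typical call, close to what the UWP template generates (m_inputLayout and fileData are template conventions, shown here as assumptions), looks roughly like this:

// Describe a vertex layout matching the vertex shader's input signature.
static const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// fileData holds the compiled vertex shader bytecode (the .cso contents).
DX::ThrowIfFailed(
    device->CreateInputLayout(
        vertexDesc, ARRAYSIZE(vertexDesc),
        fileData.data(), fileData.size(),
        &m_inputLayout));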

Write and debug shaders

Besides C++ code, writing shader code is another big part of building DirectX games. The Visual Studio shader editor recognizes HLSL, FX, and other types of shader files, and provides syntax highlighting and brace auto-completion, making it easier to read and write shader code. Debugging shader code from a captured frame is another great way to pinpoint the source of rendering problems. Simply set a breakpoint in your shader code and press F5 to debug it. You can inspect variables and expressions in the Locals and Autos windows. Learn more about the HLSL Shader Debugger.

6-shader

Debug C++ code with the world-class Visual Studio debugger

Troubleshooting issues in code can be time-consuming. Use the Visual Studio debugger to help find and fix issues faster. Set breakpoints in your C++ code and press F5 to launch the debugger. When the breakpoint is hit, you can watch the value of variables and complex expressions in the Autos and Watch windows, as well as in data tips on mouse hover, view the call stack in the Call Stack window, and step into and out of functions easily. In the example in the screenshot below, the Autos window is showing us the data in the constant buffer and the value of each member of the device resource object instance, making stepping through DirectX code easy and efficient.

7-debugging

But that is not all the Visual Studio debugger can do. For example, the Edit and Continue capability allows you to edit your C++ code during a debugging session and see the impact right away without having to rebuild the application, saving a huge amount of development time.

You can find more details in this blog post C++ Debugging and Diagnostics.

Visual Studio Graphics Diagnostics

Debugging rendering issues

Rendering problems can be very tricky to troubleshoot. Whether it’s a position offset, color incorrectness, or a flickering problem, Visual Studio Graphics Diagnostics, a.k.a. the Visual Studio graphics debugger, provides an easy way to capture and analyze frames from your DirectX 10, 11, or 12 games locally or remotely. You can inspect each DirectX event, graphics object, pixel history, and the graphics pipeline to understand exactly what occurred during the frame. This tool also captures call stacks for each graphics event, making it easy to navigate back to your C++ code in Visual Studio. Learn more about Visual Studio Graphics Diagnostics.

8-graphics-debugging

Analyze frame performance

If you are looking for ways to increase the frame rate for your DirectX games, Visual Studio Frame Analysis can be very helpful. It analyzes captured frames to look for expensive draw calls and performs experiments on them to explore performance optimization opportunities for you. The results are presented in a useful report, which you can save and inspect later or share with your team members. For more information on how to use this tool, see the blog post Visual Studio Graphics Frame Analysis in action!

9-frame-analysis

Analyze GPU Usage

While the Frame Analysis tool can help pinpoint expensive draw calls, understanding how your game performs on the CPU and the GPU in real time is essential as well. The Visual Studio GPU Usage tool collects CPU and GPU performance data in real time, complementing Frame Analysis, which is performed offline on captured frames, to give you a complete view of your game’s performance. By reading the GPU Usage detailed report, you can easily identify where the performance bottleneck is, whether on the CPU or the GPU, and locate the potentially problematic code in the app. The GPU Usage tool in Visual Studio blog post includes a more detailed introduction to the tool.

10-gpu-usage

Unit testing

Shipping high-quality games requires good testing. Visual Studio ships with a native C++ unit test framework that you can use to write your unit tests. Add a new unit test project to your solution by clicking on menu New->Project and selecting the Native Unit Test Project template. This automatically adds a test project to your solution. In the created unittest1.cpp file in the unit test project, find TEST_METHOD(TestMethod1) to start adding your test logic code; a minimal sketch appears below the screenshot. You can then open the Test Explorer window by clicking on menu Test->Window->Test Explorer to run your tests. Also, take advantage of the built-in code coverage tool (menu Test->Analyze Code Coverage) to understand how much of your code is covered by your unit tests. This gives you confidence in shipping high-quality games.

11-unit-testing
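A minimal test with the native framework looks roughly like this (the score arithmetic is a placeholder; TEST_METHOD(TestMethod1) is the stub the template generates):

#include "CppUnitTest.h"

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

namespace GameUnitTests
{
    TEST_CLASS(UnitTest1)
    {
    public:
        TEST_METHOD(TestMethod1)
        {
            // Replace with real game logic, e.g. verifying a score calculation.
            int expectedScore = 40;
            int actualScore = 4 * 10;
            Assert::AreEqual(expectedScore, actualScore);
        }
    };
}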

Collaborate with your team members

Building a great game usually involves more than one developer. When it comes to source code storing and sharing and cloud build, Visual Studio Team Services has you covered.

Simply sign up for a free Visual Studio Team Services account, then check in the source code of your DirectX game into Visual Studio Team Services. With your code successfully synced, everyone on your team can now check out, edit, and check in source code, making collaboration with other team members super-efficient.

Visual Studio Team Services also simplifies continuous integration for your games. Create and manage build processes that automatically compile and test your games in the cloud. Wondering if a bug was fixed in this build? By associating work items with code, the work items are listed in the build summary along with code changes and test results.

12-vsts

You can find more details in this blog post Visual Studio for Teams of C++ Developers.

Try out Visual Studio 2017 for C++ game development

Download Visual Studio 2017, try it out, and share your feedback. For problems, let us know via the Report a Problem option in the upper right corner of the Visual Studio title bar. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.



Hybrid Cloud just got easier: New Azure Migration resources and tools available


Most customers we talk with are using a Hybrid Cloud approach to take advantage of the cloud and their existing applications and infrastructure. Whether you’re considering migrating some or all your applications to the cloud, the transition from on-premises requires careful planning. You need to understand how much it will cost, how to size your environment, what virtual machine options to choose, and more – and you want to do all this in the smartest and most cost-effective way possible.

With this in mind, today we are offering new tools and resources to help you tap into the power of the hybrid cloud to optimize your business:

  • A free Cloud Migration Assessment, which helps you discover the servers across your IT environment, analyze their hardware configurations, and get a detailed report including the estimated cost benefits of moving to Microsoft Azure.
  • Starting today, you can activate your Azure Hybrid Use Benefit directly in the Azure Management Portal, simplifying your path to the cloud in the most cost-effective way possible. With the Azure Hybrid Use Benefit you can save up to 40% with Windows Server licenses that include Software Assurance. All customers can use this easy provisioning experience to save money on Windows Server virtual machines in Azure.
  • Azure Site Recovery is another tool to make the journey to the cloud as easy as possible. This is a tool you can use to migrate virtual machines to Azure, and it’s a great way to move applications whether they are running on AWS, VMware, Hyper-V or on physical servers. You can already configure ASR to use your Hybrid Use Benefit with PowerShell, and today we’re announcing a new experience that will be available in Azure Site Recovery in the coming weeks that will allow you to tag virtual machines within the Azure portal itself. This capability will make it easier than ever to migrate your Windows Server virtual machines. 

With Azure, you get truly consistent hybrid capabilities across cloud and on-premises environments, offering you the flexibility to choose the optimal location for each application, based on your business requirements and reducing the complexity of moving to the cloud. Migrating virtual machines to the cloud is often one of the first steps organizations take in their cloud journey and is a natural part of any hybrid cloud strategy.

Learn more about the tools and resources available today by visiting the Azure Migration page. We’d love to hear from you on how we can continue making your path to the cloud easy and effective.

Azure Data Factory March new features update


Hello, everyone! In March, we added a lot of great new capabilities to Azure Data Factory, including highly demanded features like loading data from SAP HANA, SAP Business Warehouse (BW), and SFTP; a performance enhancement for loading directly from Data Lake Store into SQL Data Warehouse; data movement support for the first region in the UK (UK South); and a new Spark activity for rich data transformation. We can’t wait to share more details with you. Following is a complete list of Azure Data Factory March new features:

  • Support data loading from SAP HANA and SAP Business Warehouse (BW)
  • Support data loading from SFTP
  • Performance enhancement of direct loading from Data Lake Store to Azure SQL Data Warehouse via PolyBase
  • Spark activity for rich data transformation
  • Max allowed cloud Data Movement Units increase
  • UK data center now available for data movement

Support data loading from SAP HANA and SAP Business Warehouse

SAP is one of the most widely used enterprise software systems in the world. We hear you that it’s crucial for Microsoft to empower customers to integrate their existing SAP systems with Azure to unlock business insights. We are happy to announce that we have enabled loading data from SAP HANA and SAP Business Warehouse (BW) into various Azure data stores for advanced analytics and reporting, including Azure Blob, Azure Data Lake, and Azure SQL DW.

SAP HANA and SAP BW connectors in Copy Wizard

For more information about connecting to SAP HANA and SAP BW, refer to Azure Data Factory offers SAP HANA and Business Warehouse data integration.

Support data loading from SFTP

You can now use Azure Data Factory to copy data from SFTP servers into various data stores in Azure or on-premises environments, including Azure Blob, Azure Data Lake, and Azure SQL DW. A full support matrix can be found in Supported data stores and formats. You can author the copy activity using the intuitive Copy Wizard (screenshot below) or JSON scripting. Refer to the SFTP connector documentation for more details.

SFTP connector in Copy Wizard

Performance enhancement of direct data loading from Data Lake Store to Azure SQL Data Warehouse via PolyBase

Data Factory Copy Activity now supports loading data from Data Lake Store to Azure SQL Data Warehouse directly via PolyBase. When using the Copy Wizard, PolyBase is by default turned on and your source file compatibility will be automatically checked. You can monitor whether PolyBase is used in the activity run details.

If you are currently not using PolyBase or staged copy plus PolyBase for copying data from Data Lake Store to Azure SQL Data Warehouse, we suggest checking your source data format and updating the pipeline to enable PolyBase and remove staging settings for performance improvement. For more detailed information, refer to Use PolyBase to load data into Azure SQL Data Warehouse and Azure Data Factory makes it even easier and convenient to uncover insights from data when using Data Lake Store with SQL Data Warehouse.

Spark activity for rich data transformation

Apache Spark for Azure HDInsight is built on an in-memory compute engine, which enables high performance querying on big data. Azure Data Factory now supports Spark Activity against Bring-Your-Own HDInsight clusters. Users can now operationalize Spark job executions through Spark Activity in Azure Data Factory.

Since a Spark job may have multiple dependencies, such as jar packages (placed in the Java CLASSPATH) and Python files (placed on the PYTHONPATH), you will need to follow a predefined folder structure for your Spark script files. For more detailed information about JSON scripting of the Spark activity, refer to Invoke Spark programs from Azure Data Factory pipelines; a rough sketch of such an activity definition follows.
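As an illustration only (the names and values below are placeholders; see the linked documentation for the authoritative schema), a Spark activity definition looks roughly like this:

{
  "name": "MySparkActivity",
  "type": "HDInsightSpark",
  "linkedServiceName": "MyHDInsightLinkedService",
  "typeProperties": {
    "rootPath": "adfspark",
    "entryFilePath": "test.py",
    "getDebugInfo": "Always"
  }
}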

Max allowed cloud Data Movement Units increase

Cloud Data Movement Units (DMUs) reflect the power of the copy executor used to drive your cloud-to-cloud copy. When copying many large files from Blob storage, Data Lake Store, Amazon S3, or cloud FTP/SFTP into Blob storage, Data Lake Store, or Azure SQL Database, higher DMUs usually give you better throughput. Now you can specify up to 32 DMUs for large copy runs. Learn more from cloud data movement units and parallel copy.

UK data center now available for data movement

Azure Data Factory data movement service is now available in the UK, in addition to the existing 16 data centers. With that, you can leverage Data Factory to copy data from cloud and on-premises data sources into various supported Azure data stores located in the UK. Learn more about globally available data movement and how it works from Globally available data movement, and the Azure Data Factory’s Data Movement is now available in the UK blog post.

Above are the new features we introduced in March. Have more feedback or questions? Share your thoughts with us on the Azure Data Factory forum or feedback site; we’d love to hear more from you.

Integrating Application Insights into a modular CMS and a multi-tenant public SaaS


The Orchard CMS Application Insights module and DotNest case study

Application Insights has an active ecosystem with our partners developing integrations using our Open Source SDKs and public endpoints. We recently had Lombiq (one of our partners) integrate Application Insights into Orchard CMS and a multi-tenant public SaaS version of the same.

Here is a case study of their experience in their own words, by Zoltán Lehóczky, co-founder of Lombiq, Orchard CMS developer.

We have integrated Application Insights into a multi-tenant service in such a way that each tenant gets its own separate performance and usage monitoring. At the same time, we, the providers of the service, get overall monitoring of the whole platform. The code we wrote is open-source.

Adding Application Insights telemetry to an ASP.NET web app is easy with just a few clicks in Visual Studio. But the complexity of monitoring needs increases when the web app is a rich-featured multi-tenant content management system (CMS) that can be self-hosted or offered as CMS as a Service. So you need to build an integration that feels native to the platform by extending the Application Insights libraries. The aim is to give people the great analytical and monitoring capabilities of Application Insights, tailored to the CMS platform and just as easy to enable. This blog post explains some of the techniques and practices used in the Orchard CMS Application Insights module.

We at Lombiq Technologies are a .NET software services company from Hungary. We have international clients like Microsoft itself. Orchard, an open source ASP.NET MVC CMS started and still supported by Microsoft, is what we mainly work with, having also built the public multi-tenant Orchard as a Service called DotNest. As long-time Azure users, we learned about Application Insights when it was still very early in development and started to build an easy-to-use Orchard integration that could be utilized on DotNest. So, what are our experiences worth sharing?

The Application Insights Orchard module we developed is open source, so make sure to check it out on GitHub if you want to see more code! Everything discussed here is implemented there.

Using Application Insights in a modular multi-tenant CMS

Application Insights, as it is delivered “out of the box”, works easily for single-tenant applications, where needing some root-level XML config files is no issue. However, if your code is a module that will be integrated into other people’s applications, as with Orchard CMS, then you want your code, including all the monitoring extensions, to be self-contained. We don’t want our clients to be exposed to configuration files at the application level. In short, we need to integrate Application Insights into our code to make a single, independently distributable MVC project. The distributed form might be a source repository or a zip file.

To package Application Insights into our code, we must:

  • Move Application Insights configuration to code—that is, do the same in C# that would normally be done in the XML config file.
  • Manage the lifetime of telemetry modules in code. Each module handles a different type of telemetry—requests, exceptions, dependencies, and so on. Normally, these modules are instantiated when the .config file is read, and have parameters set in the config file. (Learn more. Our code).
  • Instead of relying on static singletons, manage TelemetryClient and TelemetryConfiguration objects in a custom way. This allows the telemetry for separate tenants to be kept separate. (See for example this code.) A minimal sketch follows this list.
  • Orchard uses log4net for logging. We can collect this data in Application Insights, but again we need to write code to configure ApplicationInsightsAppender instead of relying on the config files. (Code)
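To illustrate the idea (a minimal sketch, not the module’s actual code; see the linked sources for the real implementation), configuring Application Insights purely from C# looks roughly like this:

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Build a configuration in code instead of relying on ApplicationInsights.config.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "00000000-0000-0000-0000-000000000000"; // placeholder key

// Each tenant gets its own TelemetryClient wrapping its own configuration.
var telemetryClient = new TelemetryClient(configuration);
telemetryClient.TrackTrace("Telemetry configured from code for this tenant.");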

All good, so now we’ve gotten rid of app-level XML configs. But what if we have multiple tenants in the same app? The default setup of Application Insights only has single-tenancy in mind, so we need to dig a bit deeper. (For the purpose of this post, “tenant” will mean a sub-application: a component within the application that maintains a high level of data isolation from other tenants.)

  • We can’t utilize the HttpModule that ships with Application Insights for request tracking, since that would require changes to a global config file (the Web.config) and wouldn’t allow us to easily switch request tracking on or off per tenant. Time to implement an Owin middleware and do request tracking with some custom code! Such middlewares can be registered entirely from code and can be enabled on a per tenant basis.
  • Since request tracking is done in our own way, we also need to add an operation ID from code for each request. In Application Insights, the operation ID is used to correlate telemetry events that occur as part of servicing the same request.
  • Let’s also add an ITelemetryInitializer that will record which tenant a piece of telemetry originates from. (Learn more. Code.) A sketch follows this list.
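A minimal initializer along those lines (the “TenantName” property is illustrative; the module’s real code is linked above):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with the tenant it originated from.
public class TenantTelemetryInitializer : ITelemetryInitializer
{
    private readonly string _tenantName;

    public TenantTelemetryInitializer(string tenantName)
    {
        _tenantName = tenantName;
    }

    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Properties["TenantName"] = _tenantName;
    }
}

// Registered per tenant, e.g.:
// configuration.TelemetryInitializers.Add(new TenantTelemetryInitializer("Tenant1"));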

If everything is done, we’ll end up with an Application Insights plugin that can be enabled and disabled from the Orchard admin site, separately for each tenant:

Lombiq Modules

Adding some Orchardyness

So far so good, but the result still needs some more work to really be part of the CMS: there’s no place to configure it yet!

In Orchard, the site settings can be used for that. It’s easy to add configuration options that admins can change from the web UI; these settings are on the level of a tenant. We’ve added a settings screen like this:

Enable application wide collection

Note that calls to dependencies, like SQL queries, storage operations, or HTTP requests to remote resources, are tracked. However, since this generates a lot of data, it’s possible to switch dependency tracking off.

Also note that some settings either can’t be configured on a tenant level (and thus need to be app-level) or it wouldn’t make sense to do so. For example, since log entries might not be tied to a tenant (but rather to the whole application), logs are only available for app-wide collection in our module (though an additional tenant-level log collection would be possible). What you see above is the full config, which is only available on the “main” tenant.

Furthermore, we added several extension points for developers to hook into. So if you’re a fellow Orchard developer, you can override the Application Insights configuration, add your own context to telemetry data, or utilize event handlers (and Orchard-style events, for that matter).

Making Application Insights available in a public SaaS

What we’ve seen so far is the fundamental functionality needed for a self-contained component monitored by Application Insights. However, in DotNest, where everyone can sign up, we need two distinct layers of monitoring by Application Insights:

  • We want detailed telemetry about the whole application, for our own use.
  • Users of DotNest tenants want to separately configure Application Insights and collect telemetry that they’re allowed to see, just for their tenants.

Users of DotNest thus don’t even see the original Application Insights configuration options, as those are managed on the level of the whole platform. However, they get another site settings screen where they can configure their own instrumentation key:

Azure Applications Telemetry Settings

When such a key is provided, a second Application Insights configuration will be created on the tenant and used together with the platform-level one, providing server-side and client-side request tracking and error reporting. Thus, while we at Lombiq, the owners of the service, see all data under our own Application Insights account, each user will also be able to see just their own tenant’s data in the Azure Portal as usual.

This tenant configuration is created and managed in the same way as the original one, from code.

Seeing the results

Once all of this is set up, we want to see what kind of data we’ve gathered, and this happens as usual in the Azure Portal.

Live Metrics Stream

Live Metrics Stream provides real-time monitoring. We included the appropriate telemetry processor in our initialization chain; a sketch of that wiring follows the screenshot below. It includes system metrics like memory and CPU usage as well, and as of recently you don’t even need to install the Application Insights extension for an App Service to see these:

Incoming requests outgoing requests
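The wiring we use is along these lines (a hedged sketch; the QuickPulse types come from the Application Insights SDK, and configuration is the per-tenant TelemetryConfiguration built earlier):

using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;

public static void EnableLiveMetrics(TelemetryConfiguration configuration)
{
    QuickPulseTelemetryProcessor quickPulseProcessor = null;

    // Insert the QuickPulse processor into the telemetry processor chain.
    configuration.TelemetryProcessorChainBuilder
        .Use(next =>
        {
            quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
            return quickPulseProcessor;
        })
        .Build();

    // The module streams the collected samples to the Live Metrics endpoint.
    var quickPulseModule = new QuickPulseTelemetryModule();
    quickPulseModule.Initialize(configuration);
    quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
}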

     

Tracing errors

But what if something goes wrong? Log entries are visible as Traces (standard log entries) or Exceptions (when exceptions are caught and logged) in the Azure Portal:

Something terrible happened

But remember that we’ve implemented an operation ID? The great thing once we have that is that events, exceptions, requests, any data points are not just visible alone, but in context: using the operation ID, Application Insights can correlate telemetry data with other data points, for example to tell you the request in which an exception happened.

Hosting Suite Modules

This makes it easier to find out how you can reproduce a problem that just happened in production.

Wrapping it up

All in all, if you need more than just adding Application Insights to your application with a single configuration, and you can’t redistribute app-level config files with your integration, then you need to dig into the Application Insights libraries’ API. Now that the libraries are open source this is not much of an issue, and you can fully configure and utilize them just by writing C#. With the Azure Application Insights Orchard module you even have a documented example of doing it.

So, don’t be afraid: code some awesome Application Insights integration! And if you just want to play with fancy graphs on the Azure Portal, you can quickly create a free DotNest site and start gathering some data right away!

New search analytics for Azure Search


One of the most important aspects of any search application is the ability to show relevant content that satisfies the needs of your users. Measuring relevance requires combining search results with the user interactions on the app side, and it can be hard to decide what to collect and how to do it. This is why we are excited to announce our new version of Search Traffic Analytics: a pattern for structuring, instrumenting, and monitoring search queries and clicks that will provide you with actionable insights about your search application. You’ll be able to answer common questions, like which documents are clicked most or which common queries result in no clicks, as well as gather evidence for other decisions, like judging the effectiveness of a new UI layout or of tweaks to the search index. Overall, this new tool will provide valuable insights that let you make more informed decisions.

Let’s expand on a scoring profile example. Say you have a movies site and you think your users usually look for the newest releases, so you add a scoring profile with a freshness function to boost the most recent movies. How can you tell this scoring profile is helping your users find the right movies? You will need information on what your users are searching for, the content that is being displayed, and the content that your users select. Once you have the data on what your users are clicking, you can create metrics to measure effectiveness and relevance.

Our solution


To obtain rich search quality metrics, it’s not enough to log the search requests; it’s also necessary to log data on what users choose as the relevant documents. This means that you need to add telemetry to your search application that logs what a user searches for and what a user selects. This is the only way you can have information on what users are really interested in and whether they are finding what they are looking for. There are many telemetry solutions available, and we didn’t invent yet another one. We decided to partner with Application Insights, a mature and robust telemetry solution available for multiple platforms. You can use any telemetry solution to follow the pattern that we describe, but using Application Insights lets you take advantage of the Power BI template created by Azure Search.

The telemetry and data pattern consists of 4 steps:

1.    Enabling Application Insights
2.    Logging search request data
3.    Logging users’ clicks data
4.    Monitoring in Power BI desktop

Because it’s not easy to decide what to log and how to use that information to produce interesting metrics, we created a clear schema to follow that immediately produces commonly requested charts and tables out of the box in Power BI desktop. Starting today, you can access the easy-to-follow instructions in the Azure Portal and the official documentation; a rough sketch of the instrumentation calls follows the screenshot below.

Azure Search traffic analytics in the Azure portal
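To give a flavor of the instrumentation (the property names here are illustrative; use the exact schema from the portal instructions so the Power BI template can read your events), the app logs one custom event per search and one per click, correlated by a shared ID:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public static class SearchTelemetry
{
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public static string LogSearch(string query, int resultCount)
    {
        // One ID ties a search request to the clicks it later produces.
        var searchId = Guid.NewGuid().ToString();
        Telemetry.TrackEvent("Search", new Dictionary<string, string>
        {
            ["SearchServiceName"] = "my-search-service", // illustrative value
            ["SearchId"] = searchId,
            ["IndexName"] = "movies",
            ["QueryTerms"] = query,
            ["ResultCount"] = resultCount.ToString()
        });
        return searchId;
    }

    public static void LogClick(string searchId, string docId, int rank)
    {
        // Logged when the user selects a result from the list.
        Telemetry.TrackEvent("Click", new Dictionary<string, string>
        {
            ["SearchServiceName"] = "my-search-service", // illustrative value
            ["SearchId"] = searchId,
            ["ClickedDocId"] = docId,
            ["Rank"] = rank.ToString()
        });
    }
}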

Once you instrument your application and start sending the data to your instance of Application Insights, you will be able to use Power BI to monitor the search quality metrics. Upon opening the Power BI desktop file, you’ll find the following metrics and charts:
•    Clickthrough Rate (CTR): ratio of users who click on a document to the number of total searches.
•    Searches without clicks: terms for top queries that register no clicks.
•    Most clicked documents: most clicked documents by ID in the last 24 hours, 7 days and 30 days.
•    Popular term-document pairs: terms that result in the same document clicked, ordered by clicks.
•    Time to click: clicks bucketed by time since the search query.

Azure Search template for Power BI

 

Operational Logs and Metrics


Monitoring metrics and logs are still available. You can enable and manage them in the Azure Portal under the Monitoring section.

Enable Monitoring to copy operation logs and/or metrics to a storage account of your choosing. This option lets you integrate with the Power BI content pack for Azure Search as well as your own custom integrations.

If you are only interested in Metrics, you don’t need to enable monitoring as metrics are available for all search services since the launch of Azure Monitor, a platform service that lets you monitor all your resources in one place.

Next steps


Follow the instructions in the portal or in the documentation to instrument your app and start getting detailed and insightful search metrics.

You can find more information on Application Insights here.  Please visit Application Insights pricing page to learn more about their different service tiers.

TFS 2015.4 released


Yesterday, we released TFS 2015 Update 4. This will likely be the last release in the TFS 2015 line: TFS 2017 shipped almost six months ago, and we are already hard at work on TFS 2017 Update 2. The TFS 2015 Update 4 release contains only fixes for commonly reported customer issues – about 30 in total. Read the release notes for more details.

Brian
