
RStudio Server on Azure


RStudio Server Pro is now available on the Azure Marketplace, the company announced on the RStudio Blog earlier this month. This means you can launch RStudio Server Pro on a virtual machine with the memory, disk, and CPU configuration of your choice, and pay by the minute for the VM instance plus the RStudio software charge. Then, you can use a browser to access the remote RStudio Server (the interface is nigh-indistinguishable from the desktop version), with access to the commercial features of RStudio, including support for multiple R versions and concurrent R sessions, load balancing and high-availability instances, and enhanced security.

If you don't want or need those capabilities, you can use the open-source version of RStudio Server as well. The easiest way is simply to launch an instance of the Azure Data Science Virtual Machine, which comes pre-installed with the open-source RStudio Server. It also comes pre-installed with dozens of data science tools you can use with R and RStudio, including Jupyter Notebooks, Python (check out the reticulate package), Keras and Tensorflow, and much more. All of the tools are free and/or open-source, so you only pay the base virtual machine charge (which depends on your region and the size and power of the instance you select). Once you've launched the Ubuntu DSVM, simply log in and use systemctl to start the RStudio Server service, and then use your browser to access RStudio remotely.
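
For example, once you're logged in to the Ubuntu DSVM over SSH, starting the service looks something like this (the service name below assumes the standard rstudio-server installation, which listens on port 8787 by default):

sudo systemctl start rstudio-server     # start the service
sudo systemctl status rstudio-server    # confirm it is running before browsing to port 8787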

If you don't need a server with all the tools the DSVM provides, you can configure your own VM with just RStudio Server and anything else you need, too. This handy blog post by Scott James Bell takes you through the steps. This includes setting up an Azure account (if you don't have one yet, click here for $200 in Azure Credits to get started), creating a virtual machine instance, and installing R, RStudio, and Shiny Server (for hosting Shiny applications) on it. Follow the link below for the complete details, and enjoy using RStudio on Azure!

My Year in Data: Running a Shiny Server on Microsoft Azure


Announcing TypeScript 3.3 RC

Today we’re happy to announce the availability of our release candidate (RC) of TypeScript 3.3. Our hope is to collect feedback and early issues to ensure our final release is simple to pick up and use right away.

To get started using the RC, you can get it through NuGet, or use npm with the following command:

npm install -g typescript@rc

You can also get editor support for the RC in Visual Studio and in other editors that support TypeScript, such as Visual Studio Code and Sublime Text.

TypeScript 3.3 should be a smooth release to adopt, and contains no breaking changes. Let’s explore what’s new in 3.3.

Improved behavior for calling union types

When TypeScript has a union type A | B, it allows you to access all the properties common to both A and B (i.e. the intersection of members).

interface A {
    aProp: string;
    commonProp: string;
}

interface B {
    bProp: number;
    commonProp: number
}

type Union = A | B;

declare let x: Union;

x.aProp; // error - 'B' doesn't have the property 'aProp'
x.bProp; // error - 'A' doesn't have the property 'bProp'
x.commonProp; // okay! Both 'A' and 'B' have a property named `commonProp`.

This behavior should feel intuitive – you can only get a property off of a union type if it’s known to be in every type of the union.

What if, instead of accessing properties, we're calling the union? Well, when every type in the union has exactly one signature with identical parameters, things just work and you can call these types.

type CallableA = (x: boolean) => string;
type CallableB = (x: boolean) => number;

type CallableUnion = CallableA | CallableB;

declare let f: CallableUnion;

let x = f(true); // Okay! Returns a 'string | number'.

However, this restriction was sometimes, well, overly restrictive.

type Fruit = "apple" | "orange";
type Color = "red" | "orange";

type FruitEater = (fruit: Fruit) => number;     // eats and ranks the fruit
type ColorConsumer = (color: Color) => string;  // consumes and describes the colors

declare let f: FruitEater | ColorConsumer;

// Cannot invoke an expression whose type lacks a call signature.
//   Type 'FruitEater | ColorConsumer' has no compatible call signatures.ts(2349)
f("orange");

Silly example and poor error message aside, both FruitEaters and ColorConsumers should be able to take the string "orange", and return either a number or a string.

In TypeScript 3.3, this is no longer an error.

type Fruit = "apple" | "orange";
type Color = "red" | "orange";

type FruitEater = (fruit: Fruit) => number;     // eats and ranks the fruit
type ColorConsumer = (color: Color) => string;  // consumes and describes the colors

declare let f: FruitEater | ColorConsumer;

f("orange"); // It works! Returns a 'number | string'.

f("apple");  // error - Argument of type '"apple"' is not assignable to parameter of type '"orange"'.

f("red");    // error - Argument of type '"red"' is not assignable to parameter of type '"orange"'.

In TypeScript 3.3, the parameters of these signatures are intersected together to create a new signature. In the example above, the parameters fruit and color are intersected together to a new parameter of type Fruit & Color. Fruit & Color is really the same as ("apple" | "orange") & ("red" | "orange") which is equivalent to ("apple" & "red") | ("apple" & "orange") | ("orange" & "red") | ("orange" & "orange"). Each of those impossible intersections evaporates, and we’re left with "orange" & "orange" which is just "orange".

There are still some restrictions though. This new behavior only kicks in when at most one type in the union has multiple overloads, and at most one type in the union has a generic signature. That means methods on number[] | string[] like map (which is generic) still won’t be callable.
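
For example, here's a sketch of what that restriction looks like in practice:

declare let arr: number[] | string[];

// Still an error in TypeScript 3.3: 'map' is generic, so its signatures on
// 'number[]' and 'string[]' are not combined into a single callable signature.
arr.map(x => x.toString());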

On the other hand, methods like forEach will now be callable, but under noImplicitAny there may be some issues.

interface Dog {
    kind: "pupper"
    dogProp: any;
}
interface Cat {
    kind: "kittyface"
    catProp: any;
}

const catOrDogArray: Dog[] | Cat[] = [];

catOrDogArray.forEach(animal => {
    //                ~~~~~~ error!
    // Parameter 'animal' implicitly has an 'any' type.
});

While we’ll continue to improve the experience here, this is strictly more capable in TypeScript 3.3, and adding an explicit type annotation will work.

interface Dog {
    kind: "pupper"
    dogProp: any;
}
interface Cat {
    kind: "kittyface"
    catProp: any;
}

const catOrDogArray: Dog[] | Cat[] = [];
catOrDogArray.forEach((animal: Dog | Cat) => {
    if (animal.kind === "pupper") {
        animal.dogProp;
        // ...
    }
    else if (animal.kind === "kittyface") {
        animal.catProp;
        // ...
    }
});

Incremental file watching for composite projects in --build --watch

In TypeScript 3.0, we introduced a new feature for structuring builds called “composite projects”. Part of the goal here was to ensure users could break up large projects into smaller parts that build quickly and preserve project structure, without compromising the existing TypeScript experience. Thanks to composite projects, TypeScript can use --build mode to recompile only the set of projects and dependencies that actually need to be rebuilt. You can think of this as optimizing inter-project builds.

However, over the past year our team also shipped optimized --watch mode builds via a new incremental “builder” API. In a similar vein, the idea is that this mode only re-checks and re-emits changed files or files whose dependencies might impact type-checking. You can think of this as optimizing intra-project builds.

Perhaps ironically, building composite projects using --build --watch actually didn’t use this infrastructure. An update in one project under --build --watch mode would force a full build of that project, rather than determining which files within that project were affected.

In TypeScript 3.3, --build mode’s --watch flag does leverage incremental file watching as well. That can mean significantly faster builds under --build --watch. In our testing, this functionality has resulted in build times 50% to 75% lower than the original --build --watch times. You can read more on the original pull request for the change to see specific numbers, but we believe most composite project users will see significant wins here.
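
If you already have a solution set up with composite project references, trying this out is just a matter of running the compiler in build-watch mode:

tsc --build --watch
# or, using the short flags:
tsc -b -w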

What’s next?

Beyond 3.3, you can keep an eye on our Roadmap page for any upcoming work.

But right now we’re looking forward to hearing about your experience with the RC, so give it a shot now and let us know your thoughts!

– Daniel Rosenwasser and the TypeScript team

Announcing the preview of OpenAPI Specification v3 support in Azure API Management


Azure API Management has just introduced preview support of OpenAPI Specification v3 – the latest version of the broadly used open-source standard of describing APIs. Implementation of the feature is based on the OpenAPI.NET SDK.

In this blog post we will explore:

  • The benefits of using OpenAPI Specification for your APIs.
  • How you can create APIs from OpenAPI Specification documents in Azure API Management.
  • How you can export your APIs as OpenAPI Specification documents.
  • The remaining work for the general availability release.

Why you should use OpenAPI Specification for your APIs

OpenAPI Specification is a widely adopted industry standard. The OpenAPI Initiative has been backed by over 30 companies, including large corporations such as Microsoft.

OpenAPI Specification lets you abstract your APIs from their implementation. The API definitions are language-agnostic.

They are also easy to understand, yet precise. Your APIs are represented through YAML or JSON files, readable for humans as well as machines.

The wide adoption of OpenAPI Specification has resulted in an extensive tooling ecosystem. Functionality of the tools ranges from facilitating collaborative process of designing APIs, to automatically generating client SDKs and server implementations in popular programming languages.

How to import OpenAPI Specification v3 definitions in Azure API Management

If your APIs are defined in an OpenAPI Specification file, you can easily import them in Azure API Management. The Azure portal will automatically recognize the right version of your OpenAPI Specification files. You can learn how to import your APIs through the visual interface by following our tutorial, “Import and publish your first API.”

Importing APIs in Microsoft Azure

Alternatively, you can import APIs using the REST API call, with the contentFormat payload parameter set to openapi, openapi+json, or openapi-link.
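
For reference, a hedged sketch of such a call follows; the api-version and the exact payload field names are assumptions here, so check the API Management REST API reference for the precise shape:

PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement/service/{serviceName}/apis/{apiId}?api-version=2018-06-01-preview

{
  "properties": {
    "contentFormat": "openapi-link",
    "contentValue": "https://example.org/petstore/openapi.yaml",
    "path": "petstore"
  }
}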

During import, if the servers field of the specification contains multiple entries, API Management will select the first HTTPS URL. If there aren't any HTTPS URLs, the first HTTP URL will be selected. If there aren't any HTTP URLs, the backend service URL will be empty.
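
For example, given a hypothetical specification fragment like the one below, the import would select https://api.contoso.example as the backend service URL, since it is the first HTTPS entry in the servers list:

openapi: 3.0.0
info:
  title: Sample API
  version: "1.0"
servers:
  - url: http://legacy.contoso.example
  - url: https://api.contoso.example
  - url: https://backup.contoso.example
paths: {}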

The import functionality has a few restrictions. For example, it does not support examples and multipart/form-data fields.

How to export OpenAPI Specification v3 definitions in Azure API Management

With Azure API Management, you can also easily export your APIs in the OpenAPI Specification v3 format.

API Management portal

API specifications can be downloaded from your developer portal as JSON or YAML files. The developer portal is an automatically generated, fully customizable website, where visitors can discover APIs, learn how to use them, try them out interactively, and finally sign up to acquire API keys.

You can also export the specifications through the visual interface of the Azure portal or a REST API call, with the format query parameter set to openapi-link.

Exporting APIs

How to get started and what’s coming next

You can try the current functionality in a matter of minutes by importing your APIs from OpenAPI Specification files. Before the feature becomes generally available, we will implement export in a JSON format through a REST API call. In the coming months, we will also add OpenAPI Specification v3 import and export support in the PowerShell SDK.

Let us know what you think

Introducing IoT Hub device streams in public preview


In today's security-first digital age, ensuring secure connectivity to IoT devices is of paramount importance. A wide range of operational and maintenance scenarios in the IoT space rely on end-to-end device connectivity to enable users and services to interact with, log in to, troubleshoot, and send or receive data from devices. Security and compliance with the organization's policies are therefore an essential ingredient across all these scenarios.

Azure IoT Hub device streams is a new PaaS service that addresses these needs by providing a foundation for secure end-to-end connectivity to IoT devices. Customers, partners, application developers, and third-party platform providers can leverage device streams to communicate securely with IoT devices that reside behind firewalls or are deployed inside of private networks. Furthermore, built-in compatibility with the TCP/IP stack makes device streams applicable to a wide range of applications involving both custom proprietary protocols as well as standards-based protocols such as remote shell, web, file transfer, and video streaming, among others.

At its core, an IoT Hub device stream is a data transfer tunnel that provides connectivity between two TCP/IP-enabled endpoints: one side of the tunnel is an IoT device and the other side is a customer endpoint that intends to communicate with the device (the latter is referred to here as the service endpoint). We have seen many setups where direct connectivity to a device is prohibited based on the organization's security policies and connectivity restrictions placed on its networks. These restrictions, while justified, frequently impact various legitimate scenarios that require connectivity to an IoT device.

Examples of these scenarios include:

  • An operator wishes to log in to a device for inspection or maintenance. This scenario commonly involves logging in to the device using Secure Shell (SSH) for Linux and Remote Desktop Protocol (RDP) for Windows. The device or network firewall configurations often block the operator's workstation from reaching the device.
  • An operator needs to remotely access a device's diagnostics portal for troubleshooting. Diagnostic portals are typically in the form of a web server hosted on the device. A device's private IP or its firewall configuration may similarly block the user from interacting with the device's web server.
  • An application developer needs to remotely retrieve logs and other runtime diagnostic information from a device's file system. Protocols commonly used for this purpose may include File Transfer Protocol (FTP) or Secure Copy (SCP), among others. Again, the firewall configurations typically restrict these types of traffic.

IoT Hub device streams address the end-to-end connectivity needs of the above scenarios by leveraging an IoT Hub cloud endpoint that acts as a proxy for application traffic exchanged between the device and service. This setup is depicted in the figure below and works as follows.

IoT Hub device streams overview

  • Device and service endpoints each create separate outbound connections to an IoT Hub endpoint that acts as a proxy for the traffic being transmitted between them.
  • The IoT Hub endpoint relays traffic packets sent from device to service and vice versa. This establishes an end-to-end bidirectional tunnel through which device and service applications can communicate.
  • The established tunnel through IoT Hub provides reliable and ordered packet delivery guarantees. Furthermore, the transfer of traffic through IoT Hub as an intermediary is masked from the applications, giving them the seamless experience of direct bidirectional communication that is on par with TCP.

Benefits

IoT Hub device streams provide the following benefits:

  • Firewall-friendly secure connectivity: IoT devices can be reached from service endpoints without opening an inbound firewall port at the device or network perimeter. All that is needed is the ability to create outbound connections to IoT Hub cloud endpoints over port 443 (devices that use the IoT Hub SDK already maintain such a connection).
  • Authentication enforcement: To establish a stream, both device and service endpoints need to authenticate with IoT Hub using their corresponding credentials. This enhances security of the device communication layer, by ensuring that the identity of each side of the tunnel is verified prior to any communication taking place between them.
  • Encryption: By default, IoT Hub device streams use TLS-enabled connections. This ensures that the application traffic is encrypted regardless of whether the application uses encryption or not.
  • Simplicity of connectivity: The use of device streams eliminates the need for complex setup of Virtual Private Networks (VPNs) to enable connectivity to IoT devices. Furthermore, unlike a VPN, which gives broad access to the entire network, device streams are point-to-point, involving a single device and a single service on each side of the tunnel.
  • Compatibility with the TCP/IP stack: IoT Hub device streams can accommodate TCP/IP application traffic. This means that a wide range of proprietary as well as standards-based protocols can leverage this feature. This includes well established protocols such as Remote Desktop Protocol (RDP), Secure Shell (SSH), File Transfer Protocol (FTP), and HTTP/REST, among many others.
  • Ease of use in private network setups: Devices that are deployed inside of private networks can be reached without the need to assign publicly routable IP addresses to each device. Another similar case involves devices with dynamic IP assignment which might not be known by the service at all times. In both cases, device streams enable connectivity to a target device using its device ID (rather than IP address) as identifier.

As outlined above, IoT Hub device streams are particularly helpful when devices are placed behind a firewall or inside a private network (with no publicly reachable IP address). Next, we review one such setup as a case study where direct connectivity to the device is restricted.

A case study: Remote device access in a manufacturing setup

To further illustrate the applicability of device streams in real-world IoT scenarios, consider a setup involving equipment and machinery (i.e., IoT devices) on a factory floor that are connected to the factory's local area network. The LAN typically is connected to the Internet through a network gateway or an HTTP proxy and is protected by a firewall at the network boundary. In this setup, the firewall is configured based on the organization's security policies, which may prohibit opening certain firewall ports. For example, port 3389, used by Remote Desktop Protocol, is often blocked, so users from outside of the network cannot access devices over this port.

While such a network setup is in widespread use, it introduces challenges to many common IoT scenarios. For example, if operators need to access equipment from outside of the LAN, the firewall may need to allow inbound connectivity on arbitrary ports used by the application. In the case of a Windows machine that uses the RDP protocol, this is at odds with the security policies that block port 3389.

Using device streams, the RDP traffic to target devices is tunneled through IoT Hub. Specifically, this tunnel is established over port 443 using outbound connections originating from the device and the service. As a result, there is no need to relax firewall policies in the factory network. In our quickstart guides, available in C, C#, and Node.js, we have included instructions on how to leverage IoT Hub device streams to enable the RDP scenario. Other protocols can use a similar approach by simply configuring their corresponding communication port.

Next steps

We are excited about the possibilities that can be enabled to communicate with IoT devices securely via IoT Hub device streams. Use the following links to learn more about this feature:

Azure Service Bus and Azure Event Hubs expand availability


The Azure Messaging team is continually working to enhance the resiliency and availability of our service offerings – Azure Service Bus, Azure Event Hubs, and Azure Event Grid. As part of this effort, in June 2018 we previewed Availability Zones support for the Azure Service Bus Premium tier and the Azure Event Hubs Standard tier in three regions – Central US, East US 2, and France Central.

Today, we’re happy to announce that we’ve added Availability Zones support for Azure Service Bus Premium and Azure Event Hubs Standard in the following regions:

  • East US 2
  • West US 2
  • West Europe
  • North Europe
  • France Central
  • Southeast Asia

Availability Zones is a high availability offering by Azure that protects applications and data from datacenter failures. Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there’s a minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a region protects applications and data from datacenter failures. Zone-redundant services replicate your applications and data across Availability Zones to protect from single-points-of-failure.

With this, Availability Zones support for Azure Service Bus Premium and Azure Event Hubs Standard is generally available in every Azure region that has zone-redundant datacenters.

How do you enable Availability Zones on your Azure Service Bus Premium or Azure Event Hubs Standard namespace?

You can enable Availability Zones on new namespaces only. Migration of existing namespaces is not supported.

If using an ARM template to create a Service Bus Premium namespace, it is as simple as specifying an AZ supported region and setting the zoneRedundant property to true in the template.

For Azure Service Bus Premium namespace:

"resources": [{
       "apiVersion": "2018-01-01-preview",
       "name": "[parameters('serviceBusNamespaceName')]",
       "type": "Microsoft.ServiceBus/namespaces",
       "location": "[parameters('location')]",
       "sku": {
             "name": "Premium"
       },
       "properties": {
             "zoneRedundant": true
       }
}],

For Azure Event Hubs Standard namespace:

"resources": [{
       "apiVersion": "2018-01-01-preview",
       "name": "[parameters('eventHubNamespaceName')]",
       "type": "Microsoft.EventHub/namespaces",
       "location": "[parameters('location')]",
       "sku": {
             "name": "Standard"
       },
       "properties": {
             "zoneRedundant": true
       }
}],

You can also enable zone-redundancy by creating a new namespace in the Azure portal as shown below. It is important to note that you cannot disable zone redundancy after enabling it on your namespace.

Azure Service Bus Premium:

Azure Service Bus Premium in the Azure portal

Azure Event Hubs Standard:

Azure Event Hubs Standard in the Azure portal

General availability of Service Bus and Event Hubs Availability Zones

In addition to the announcements regarding Availability Zones, we’re happy to announce that we’ve added support for Azure Service Bus Premium tier in the following regions:

  • China North 2
  • China East 2
  • Australia Central
  • Australia Central 2
  • France Central
  • France South

Built on the successful and reliable foundation of Azure Service Bus messaging, we introduced Azure Service Bus Premium in 2015. The Premium tier allows our customers to provision dedicated resources for the Azure Service Bus namespace so that they can ensure greater predictability and performance for the most demanding workloads paired with an equally predictable pricing model. With Service Bus Premium Messaging, our customers benefit from the economics and operational flexibility of a multi-tenant public cloud system, while getting single-tenant reliability and predictability.

Azure Service Bus Premium also provides access to advanced enterprise features such as Availability Zones, Geo-Disaster recovery, and Virtual Network Service Endpoints along with Firewall rules. These additional features make the Premium tier tremendously valuable for customers looking for a highly reliable, resilient, and secure enterprise messaging solution.

For more information on Availability Zones:

For more information on Service Bus Premium:

HDInsight Tools for Visual Studio Code now generally available


We are pleased to announce the general availability of Azure HDInsight Tools for Visual Studio Code (VSCode). HDInsight Tools for VSCode give developers a cross-platform lightweight code editor for developing HDInsight PySpark and Hive batch jobs and interactive queries.

For PySpark developers who value the productivity Python enables, HDInsight Tools for VSCode offer a quick Python editor with simple getting started experiences, and allow you to submit PySpark statements to HDInsight clusters with interactive responses. This interactivity brings the best properties of Python and Spark to developers and empowers you to gain faster insights.

For Hive developers, HDInsight tools for VSCode offer great data warehouse query experiences for big data and helpful features in querying log files and gaining insights. 

Key customer benefits   

  • Integration with Azure worldwide environments for Azure sign-in and HDInsight cluster management 
  • HDInsight Hive and Spark job submission, integrated with the Spark UI and Yarn UI
  • Interactive responses with the flexibility to execute one or multiple selected Hive and Python scripts
  • Preview and export your interactive query results to CSV, JSON, and Excel formats
  • Built-in Hive language services such as IntelliSense auto-suggest, autocomplete, and error markers, among others
  • Support for HDInsight ESP clusters and Ambari connections
  • Simplified cluster and Spark job configuration management

Latest improvements

Since public preview, we have worked closely with customers to address feedback, implement new functionality, and constantly improve user experiences. Some key improvements include:

How to get started

First, install Visual Studio Code and download Mono 4.2.x (for Linux and Mac). Then, get the latest HDInsight Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching for HDInsight Tools for VSCode.

 HDInsight Tools for VSCode

For more information about HDInsight Tools for VSCode, please see the following resources:

If you have questions, feedback, comments, or bug reports, please send a note to hdivstool@microsoft.com.

Using containerized services in your pipeline


Azure Pipelines has supported container jobs for a while now. You craft a container with exactly the versions of exactly the tools you need, and we’ll run your pipeline steps inside that container. Recently we expanded our container support to include service containers: additional, helper containers accessible to your pipeline.

Service containers let you define the set of services you need available, containerize them, and then have them automatically available while your pipeline is running. Azure Pipelines manages the starting up and tearing down of the containers, so you don’t have to think about cleaning them up or resetting their state. In many cases, you don’t even have to build and maintain the containers – Docker Hub has many popular options ready to go. You simply point your app at the services, typically by name, and everything else is taken care of.

So how does it work? You tell Azure Pipelines what containers to pull, what to call them, and what ports & volumes to map. The pipelines agent manages everything else. Let’s look at two examples showing how to use the feature.

Basic use of service containers

First, suppose you need memory cache and proxy servers for your integration tests. How do you make sure those servers get reset to a clean state each time you build your app? With service containers, of course:

resources:
  containers:
  - container: my_container
    image: ubuntu:16.04
  - container: nginx
    image: nginx
  - container: redis
    image: redis

pool:
  vmImage: 'ubuntu-16.04'

container: my_container

services:
  nginx: nginx
  redis: redis

steps:
- script: |
    apt install -y curl
    curl nginx
    apt install -y redis-tools
    redis-cli -h redis ping

When the pipeline runs, Azure Pipelines pulls three containers: Ubuntu 16.04 to run the build tasks in, nginx for a proxy server, and Redis for a cache server. The agent spins up all three containers and networks them together. Since everything is running on the same container network, you can access the services by hostname: that’s what the curl nginx and redis-cli -h redis ping  lines are doing. Of course, in your app, you’d do more than just ping the service – you’d configure your app to use the services. When the job is complete, all three containers will be spun down.

Combining service containers with a matrix of jobs

Suppose you’re building an app that supports multiple different database backends. How do you easily test against each database, without maintaining a bunch of infrastructure or installing multiple server runtimes? You can use a matrix with service containers, like this:

resources:
  containers:
  - container: my_container
    image: ubuntu:16.04
  - container: pg11
    image: postgres:11
  - container: pg10
    image: postgres:10

pool:
  vmImage: 'ubuntu-16.04'

strategy:
  matrix:
    postgres11:
      postgresService: pg11
    postgres10:
      postgresService: pg10

container: my_container

services:
  postgres: $[ variables['postgresService'] ]

steps:
- script: |
    apt install -y postgresql-client
    psql --host=postgres --username=postgres --command="SELECT 1;"

In this case, the listed steps will be duplicated into two jobs, one against Postgres 10 and the other against Postgres 11.

Service containers work with non-container jobs, where tasks are running directly on the host. They also support advanced scenarios such as defining your own port and volume mappings; see the documentation for more details. Like container jobs, service containers are available in YAML-based pipelines.
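
As a rough sketch of what those port and volume mappings can look like (the image, port, and host path here are illustrative):

resources:
  containers:
  - container: redis
    image: redis
    ports:
    - 6379:6379        # host:container port mapping, handy for non-container jobs
    volumes:
    - /opt/cache:/data # mount a host directory into the service container

services:
  redis: redis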

Python in Visual Studio 2019 Preview 2


Today we are releasing Visual Studio 2019 Preview 2, which contains new features for Python developers to improve the experience for managing Python environments and enable you to work with Python code without having to create a Python project. We’ve also enabled Python support for Visual Studio Live Share.

We’ll take a closer look at these new features in the rest of this post.

Creating Python Environments

To make it easier for you to create virtual or conda Python environments for your project, the ability to create Python environments has been moved from the Python environments window to a new Add environment dialog that can be opened from various parts of Visual Studio. This improves discoverability and enables new capabilities such as the ability to create conda environments on-demand and support for Open Folder described later in this post.

For example, when opening a project that contains a requirements.txt file or environment.yml but no virtual environment or conda environment is found, you will be prompted to create an environment with a notification:

In this case clicking on ‘Create virtual environment’ will show a new Add environment dialog box, pre-configured to create a new virtual environment using the provided requirements.txt file:

You can also use the Add environment dialog to create a conda environment, using an environment.yml file, or by specifying a list of packages to install:

The dialog also allows you to add existing virtual environments on your machine, or to install new versions of Python.

After clicking on Create, you will see a progress notification in the status bar, and you can click the blue link to view progress in the Output window:

You can continue working while the environment is being created.

Switching from Anaconda to Miniconda

Previous versions of Visual Studio allowed you to install Anaconda through the Visual Studio Installer, and while this enabled you to easily acquire Python data science packages, it resulted in large Visual Studio installation times and sometimes caused reliability issues when upgrading versions of Visual Studio. To address these problems, Anaconda has been removed in favor of the much smaller Miniconda, which is now installed as a default optional component of the Python workload:

Miniconda allows you to create conda environments on-demand using the Add environment dialog. If you still want to continue using a full install of Anaconda, you can install Anaconda yourself and continue working with Anaconda by selecting your Anaconda install as the active Python environment.  Note that if both the Visual Studio bundled Miniconda and Anaconda are installed, we will use Miniconda to create conda environments. If you prefer to use your own conda version you can specify the path to conda.exe in Tools > Options > Python > Conda.

Python Toolbar and Open Folder Support

In previous versions of Visual Studio we required you to create a Python project in order to work with Python code. We have added a new Python toolbar that allows you to work with Python code without having to create or open a Python project. The toolbar allows you to switch between Python environments, add new Python environments, or manage Python packages installed in the current environment:

The Python toolbar will appear whenever a Python file is open and allows you to select your Python interpreter when working with files in Open Folder workspaces or Python files included in C++ or C# projects.

In the case of Open Folder, your selection is stored in the .vs/PythonSettings.json file so that the same environment is selected the next time you open Visual Studio. By default, the debugger will debug the currently opened Python file, and will run the script using the currently selected Python environment:

To customize your debug settings, you can right-click a Python file and select “Debug and Launch Settings”:

This will generate a launch.vs.json file with Python settings which can be used to customize debug settings:

Note that in Preview 2, the editor and interactive window do not use the currently selected Python environment when using Open Folder. This functionality will be added in future previews of Visual Studio 2019.

Live Share Support for Python

In this release you can now use Visual Studio Live Share with Python files. Previously, you could only use Live Share with Python by hosting a session with Visual Studio Code. You can initiate a Live Share session by clicking the Live Share button on the upper right corner of Visual Studio:

Users who join your live share session will be able to see your Python files, see IntelliSense from your selected Python environment, and collaboratively debug through Python code:

Try it out!

Be sure to download the Visual Studio 2019 Preview 2, install the Python Workload, and give feedback on Visual Studio Developer Community.


Lifetime Profile Update in Visual Studio 2019 Preview 2


The Lifetime Profile, which is part of the C++ Core Guidelines, aims to detect lifetime problems, like dangling pointers and references, in C++ code. It uses the type information already present in the source along with some simple contracts between functions to detect defects at compile time with minimal annotation.

These are the basic contracts that the profile expects code to follow:

  1. Don’t use a potentially dangling pointer.
  2. Don’t pass a potentially dangling pointer to another function.
  3. Don’t return a potentially dangling pointer from any function.

For more information on the history and goals of the profile, check out Herb Sutter’s blog post about version 1.0.

What’s New in Visual Studio 2019 Preview 2

In Preview 2, we’ve shipped a preview release of the Lifetime Profile Checker, which implements the published version of the Lifetime Profile. This checker is part of the C++ Core Checkers in Visual Studio. Highlights of this release include:

  • Support for iterators, string_views, and spans.
  • Better detection of custom Owner and Pointer types, which allows custom types that behave like Containers, Owning Pointers, or Non-Owning Pointers to participate in the analysis.
  • Type-aware default rules for function call preconditions and postconditions help reduce false positives and improve accuracy.
  • Better support for aggregate types.
  • General correctness and performance improvements.
  • Some simple nullptr analysis.

Enabling the Lifetime Profile Checker Rules

The checker rules are not enabled by default. If you want to try out the new rules, you’ll have to update the code analysis ruleset selected for your project. You can either select the “C++ Core Check Lifetime Rules” – which enables only the Lifetime Profile rules – or you can modify your existing ruleset to enable warnings 26486 through 26489.

Screenshot of the Code Analysis properties page that shows the C++ Core Check Lifetime Rules ruleset selected.

Warnings will appear in the Error List when code analysis is run (Analyze > Run Code Analysis), or if you have Background Code Analysis enabled, lifetime errors will show up in the editor with green squiggles.

Screenshot showing a Lifetime Profile Checker warning with a green squiggle in source code.

Examples

Dangling Pointer

The simplest example – using a dangling pointer – is the best place to start. Here px points to x and then x leaves scope leaving px dangling. When px is used, a warning is issued.

void simple_test()
{
    int* px;
    {
        int x = 0;
        px = &x;
    }
    *px = 1; // error, dangling pointer to 'x'
}

Dangling Output Pointer

Returning dangling pointers is also not allowed. Here, the parameter ppx is presumed to be an output parameter; it's set to point to x, which goes out of scope at the end of the function. This leaves *ppx dangling.

void out_parameter(int x, int** ppx)  // *ppx points to 'x' which is invalid
{
    *ppx = &x;
}

Dangling String View

The last two examples were obvious, but temporary instances can introduce subtle bugs. Can you find the bug in the following code?

std::string get_string();
void dangling_string_view()
{
    std::string_view sv = get_string();
    auto c = sv.at(0);
}

In this case, the string view sv is constructed with the temporary string instance returned from get_string(). The temporary string is then destroyed which leaves the string view referencing an invalid object.
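
One way to fix it is to keep an owning string alive for at least as long as the view (a sketch, not part of the original example):

void fixed_string_view()
{
    std::string s = get_string();  // a named variable now owns the string
    std::string_view sv = s;       // the view refers to a live object
    auto c = sv.at(0);             // ok
}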

Dangling Iterator

Another hard-to-spot lifetime issue happens when using an invalidated iterator into a container. In the case below, the call to push_back may cause the vector to reallocate its underlying storage, which invalidates the iterator it.

void dangling_iterator()
{
    std::vector<int> v = { 1, 2, 3 };
    auto it = v.begin();
    *it = 0; // ok, iterator is valid
    v.push_back(4);
    *it = 0; // error, using an invalid iterator
}

One thing to note about this example is that there is no special handling for ‘std::vector::push_back’. This behavior falls out of the default profile rules. One rule classifies containers as an ‘Owner’. Then, when a non-const method is called on the Owner, its owned memory is assumed invalidated and iterators that point at the owned memory are also considered invalid.

Modified Owner

The profile is prescriptive in its guidance. It expects that your code uses the type system idiomatically when defining function parameters. In this next example, std::unique_ptr, an ‘Owner’ type, is passed to another function by non-const reference. According to the rules of the profile, Owners that are passed by non-const reference are assumed to be modified by the callee.

void use_unique_ptr(std::unique_ptr<int>& upRef);
void assumes_modification()
{
    auto unique = std::make_unique<int>(0); // Line A
    auto ptr = unique.get();
    *ptr = 10; // ok, ptr is valid
    use_unique_ptr(unique);
    *ptr = 10; // error, dangling pointer to the memory held by 'unique' at Line A
}

In this example, we get a raw pointer, ptr, to the memory owned by unique. Then unique is passed to the function use_unique_ptr by non-const reference. Because this is a non-const use of unique where the function could do anything, the analysis assumes that unique is invalidated somehow (e.g. unique_ptr::reset), which would cause ptr to dangle.

More Examples

There are many other cases that the analysis can detect. Try it out in Visual Studio on your own code and see what you find. Also check out Herb’s blog for more examples and, if you’re curious, read through the Lifetime Profile paper.

Known Issues

The current implementation doesn’t fully support the analysis as described in the Lifetime Profile paper. Here are the broad categories that are not implemented in this release.

  • Annotations – The paper introduces annotations (i.e. [[gsl::lifetime-const]]) which are not supported. Practically this means that if the default analysis rules aren’t working for your code, there’s not much you can do other than suppressing false positives.
  • Exceptions – Exception handling paths, including the contents of catch blocks, are not currently analyzed.
  • Default Rules for STL Types – In lieu of a lifetime-const annotation, the paper recommends that for the rare STL container member functions where we want to override the defaults, we treat them as if they were annotated. For example, one overload of std::vector::at is not const because it can return a non-const reference – however we know that calling it is lifetime-const because it doesn’t invalidate the vector’s memory. We haven’t completed the work to do this implicit annotation of all the STL container types.
  • Lambda Captures – If a stack variable is captured by reference in a lambda, we don’t currently detect if the lambda leaves the scope of the captured variable.
    auto lambda_test()
    {
        int x;
        auto captures_x = [&x] { return x; };
        return captures_x; // returns a dangling reference to 'x'
    }

Wrap Up

Try out the Lifetime Profile Checker in Visual Studio 2019 Preview 2. We hope that it will help identify lifetime problems in your projects. If you find false positives or false negatives, please report them so we can prioritize the scenarios that are important to you. If you have suggestions or problems with this check — or any Visual Studio feature — either Report a Problem or post on Developer Community and let us know. We’re also on Twitter at @VisualC.

MSVC Backend Updates in Visual Studio 2019 Preview 2: New Optimizations, OpenMP, and Build Throughput improvements


In Visual Studio 2019 Preview 2 we have continued to improve the C++ backend with new features, new and improved optimizations, build throughput improvements, and quality of life changes.

New Features

  • Added a new inlining command line switch: -Ob3. -Ob3 is a more aggressive version of -Ob2. -O2 (optimize the binary for speed) still implies -Ob2 by default, but this may change in the future. If you find the compiler is under-inlining, consider passing -O2 -Ob3.
  • Added basic support for OpenMP SIMD vectorization, which is the most widely used OpenMP feature in machine learning (ML) libraries. Our case study is the Intel MKL-DNN library, which is used as a building block for other well-known open source ML libraries including TensorFlow. This can be turned on with a new CL switch, -openmp:experimental. This allows loops annotated with “#pragma omp simd” to potentially be vectorized (see the example after this list). The vectorization is not guaranteed, and loops annotated but not vectorized will get a warning reported. No SIMD clauses are supported; they will simply be ignored with a warning reported.
  • Added a new C++ exception handler __CxxFrameHandler4 that reduces exception handling metadata overhead by 66%. This provides up to a 15% total binary size improvement on binaries that use large amounts of C++ exception handling. Currently default off, try it out by passing “/d2FH4” when compiling with cl.exe. Note that /d2FH4 is otherwise undocumented and unsupported long term. This is not currently supported on UWP apps as the UWP runtime does not have this feature yet.
  • To support hand vectorization of loops containing calls to math library functions and certain other operations like integer division, MSVC now supports Short Vector Math Library (SVML) intrinsic functions that compute the vector equivalents. Support for 128-bit, 256-bit and 512-bit vectors is available for most functions, with the exceptions listed below. Note that these functions do not set errno. See the Intel Intrinsic Guide for definitions of the supported functions.
    Exceptions include:
    • Vector integer combined division and remainder is only available for 32-bit elements and 128-bit and 256-bit vector lengths. Use separate division and remainder functions for other element sizes and vector lengths.
    • SVML square-root is only available in 128-bit and 256-bit vector lengths. You can use _mm512_sqrt_pd or _mm512_sqrt_ps functions for 512-bit vectors.
    • Only 512-bit vector versions of rint and nearbyint functions are available. In many cases you can use round functions instead, e.g. use _mm256_round_ps(x, _MM_FROUND_CUR_DIRECTION) as a 256-bit vector version of rint, or _mm256_round_ps(x, _MM_FROUND_TO_NEAREST_INT) for nearbyint.
    • Only 512-bit reciprocal is provided. You can compute the equivalent using set1 and div functions, e.g. 256-bit reciprocal could be computed as _mm256_div_ps(_mm256_set1_ps(1.0f), (x)).
    • There are SVML functions for single-precision complex square-root, logarithm and exponentiation only in 128-bit and 256-bit vector lengths.
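
As a small illustration of the OpenMP SIMD support described above, a loop annotated for vectorization looks like this (the function itself is just an example; compile with something like cl -O2 -openmp:experimental):

// Candidate for SIMD vectorization; if the compiler cannot vectorize the
// loop, it reports a warning instead.
void scale_add(float* dst, const float* src, float factor, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * factor + dst[i];
}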

New and Improved Optimizations

  • Unrolled memsets and block initializations will now use SSE2 instructions (or AVX instructions if allowed). The size threshold for what will be unrolled has increased accordingly (compile for size with SSE2: unroll threshold moves from 31 to 63 bytes, compile for speed with SSE2: threshold moves from 79 to 159 bytes).
  • Optimized the code-gen for small memsets, primarily targeted to initall-protected functions.
  • Improvements to the SSA Optimizer’s redundant store elimination: better escape analysis and handling of loops.
  • The compiler recognizes memmove() as an intrinsic function and optimizes accordingly. This improves code generation for operations built on memmove(), including std::copy() and other higher-level library code such as std::vector and std::string construction.
  • The optimizer does a better job of optimizing short, fixed-length memmove(), memcpy(), and memcmp() operations.
  • Implemented switch duplication optimization for better performance of switches inside hot loops. We duplicated the switch jumps to help improve branch prediction accuracy and consequently, run time performance.
  • Added constant-folding and arithmetic simplifications for expressions using SIMD (vector) intrinsic, for both float and integer forms. Most of the usual expression optimizations now handle SSE2 and AVX2 intrinsics, either from user code or a result of automatic vectorization.
  • Several new scalar fused multiply-add (FMA) patterns are identified with /arch:AVX2 /fp:fast. These include the following common expressions: (x + 1.0) * y; (x - 1.0) * y; (1.0 - x) * y; (-1.0 - x) * y
  • Sequences of code that initialize a __m128 SIMD (vector) value element-by-element are identified and replaced by a _mm_set_ps intrinsic. This allows the new SIMD optimizations to consider the value as part of expressions, useful especially if the value has only constant elements. A future update will support more value types.
  • Common sub-expression elimination (CSE) is more effective in the presence of variables which may be modified in indirect ways because they have their address taken.
  • Useless struct/class copies are being removed in several more cases, including copies to output parameters and functions returning an object. This optimization is especially effective in C++ programs that pass objects by value.
  • Added a more powerful analysis for extracting information about variables from control flow (if/else/switch statements), used to remove branches that can be proven to be always true or false and to improve the variable range estimation. Code using gsl::span sees improvements, some range checks that are unnecessary being now removed.
  • The devirtualization optimization will now have additional opportunities, such as when classes are defined in anonymous namespaces.

Build Throughput Improvements

  • Filter debug information during compilation based on referenced symbols and types to reduce debug section size and improve linker throughput. Updating from 15.9 to 16.0 can reduce the input size to the linker by up to 40%.
  • Link time improvements in PDB type merging and creation.
  • Updating to 16.0 from 15.9 can improve link times by up to a 2X speedup. For example, linking Chrome resulted in a 1.75X link time speedup when using /DEBUG:full, and a 1.4X link time speedup when using /DEBUG:fastlink.

Quality of Life Improvements

  • The compiler displays file names and paths using user-provided casing where previously the compiler displayed lower-cased file names and paths.
  • The new linker will now report potentially matched symbol(s) for unresolved symbols, like:
        main.obj : error LNK2019: unresolved external symbol _foo referenced in function _main
          Hint on symbols that are defined and could potentially match:
            "int __cdecl foo(int)" (?foo@@YAHH@Z)
            "bool __cdecl foo(double)" (?foo@@YA_NN@Z)
            @foo@0
            foo@@4
        main.exe : fatal error LNK1120: 1 unresolved externals
  • When generating a static library, it is no longer required to pass the /LTCG flag to LIB.exe.
  • Added a linker option /LINKREPROTARGET:[binary_name] to only generate a link repro for the specified binary. This allows %LINK_REPRO% or /LINKREPRO:[directory_name] to be set in a large build with multiple linkings, and the linker will only generate the repro for the binary specified in /linkreprotarget.

We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

Office 365 for Mac is available on the Mac App Store

Do more with patterns in C# 8.0


Visual Studio 2019 Preview 2 is out! And with it, a couple more C# 8.0 features are ready for you to try. It’s mostly about pattern matching, though I’ll touch on a few other new features and changes at the end.

More patterns in more places

When C# 7.0 introduced pattern matching we said that we expected to add more patterns in more places in the future. That time has come! We’re adding what we call recursive patterns, as well as a more compact expression form of switch statements called (you guessed it!) switch expressions.

Here’s a simple C# 7.0 example of patterns to start us out:

class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);
    public void Deconstruct(out int x, out int y) => (x, y) = (X, Y);
}

static string Display(object o)
{
    switch (o)
    {
        case Point p when p.X == 0 && p.Y == 0:
            return "origin";
        case Point p:
            return $"({p.X}, {p.Y})";
        default:
            return "unknown";
    }
}

Switch expressions

First, let’s observe that many switch statements really don’t do much interesting work within the case bodies. Often they all just produce a value, either by assigning it to a variable or by returning it (as above). In all those situations, the switch statement is frankly rather clunky. It feels like the 5-decades-old language feature it is, with lots of ceremony.

We decided it was time to add an expression form of switch. Here it is, applied to the above example:

static string Display(object o)
{
    return o switch
    {
        Point p when p.X == 0 && p.Y == 0 => "origin",
        Point p                           => $"({p.X}, {p.Y})",
        _                                 => "unknown"
    };
}

There are several things here that changed from switch statements. Let’s list them out:

  • The switch keyword is "infix" between the tested value and the {...} list of cases. That makes it more compositional with other expressions, and also easier to tell apart visually from a switch statement.
  • The case keyword and the : have been replaced with a lambda arrow => for brevity.
  • default has been replaced with the _ discard pattern for brevity.
  • The bodies are expressions! The result of the selected body becomes the result of the switch expression.

Since an expression needs to either have a value or throw an exception, a switch expression that reaches the end without a match will throw an exception. The compiler does a great job of warning you when this may be the case, but will not force you to end all switch expressions with a catch-all: you may know better!

Of course, since our Display method now consists of a single return statement, we can simplify it to be expression-bodied:

    static string Display(object o) => o switch
    {
        Point p when p.X == 0 && p.Y == 0 => "origin",
        Point p                           => $"({p.X}, {p.Y})",
        _                                 => "unknown"
    };

To be honest, I am not sure what formatting guidance we will give here, but it should be clear that this is a lot terser and clearer, especially because the brevity typically allows you to format the switch in a "tabular" fashion, as above, with patterns and bodies on the same line, and the =>s lined up under each other.

By the way, we plan to allow a trailing comma , after the last case in keeping with all the other "comma-separated lists in curly braces" in C#, but Preview 2 doesn’t yet allow that.

Property patterns

Speaking of brevity, the patterns are all of a sudden becoming the heaviest elements of the switch expression above! Let’s do something about that.

Note that the switch expression uses the type pattern Point p (twice), as well as a when clause to add additional conditions for the first case.

In C# 8.0 we’re adding more optional elements to the type pattern, which allows the pattern itself to dig further into the value that’s being pattern matched. You can make it a property pattern by adding {...}‘s containing nested patterns to apply to the value’s accessible properties or fields. This lets us rewrite the switch expression as follows:

static string Display(object o) => o switch
{
    Point { X: 0, Y: 0 }         p => "origin",
    Point { X: var x, Y: var y } p => $"({x}, {y})",
    _                              => "unknown"
};

Both cases still check that o is a Point. The first case then applies the constant pattern 0 recursively to the X and Y properties of p, checking whether they have that value. Thus we can eliminate the when clause in this and many common cases.

The second case applies the var pattern to each of X and Y. Recall that the var pattern in C# 7.0 always succeeds, and simply declares a fresh variable to hold the value. Thus x and y get to contain the int values of p.X and p.Y.

We never use p, and can in fact omit it here:

    Point { X: 0, Y: 0 }         => "origin",
    Point { X: var x, Y: var y } => $"({x}, {y})",
    _                            => "unknown"

One thing that remains true of all type patterns including property patterns, is that they require the value to be non-null. That opens the possibility of the "empty" property pattern {} being used as a compact "not-null" pattern. E.g. we could replace the fallback case with the following two cases:

    {}                           => o.ToString(),
    null                         => "null"

The {} deals with remaining nonnull objects, and null gets the nulls, so the switch is exhaustive and the compiler won’t complain about values falling through.
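Putting the property patterns and the null handling together, the whole method might look like this (a sketch that simply combines the cases shown above):

    static string Display(object o) => o switch
    {
        Point { X: 0, Y: 0 }         => "origin",
        Point { X: var x, Y: var y } => $"({x}, {y})",
        {}                           => o.ToString(),
        null                         => "null"
    };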

Positional patterns

The property pattern didn’t exactly make the second Point case shorter, and doesn’t seem worth the trouble there, but there’s more that can be done.

Note that the Point class has a Deconstruct method, a so-called deconstructor. In C# 7.0, deconstructors allowed a value to be deconstructed on assignment, so that you could write e.g.:

(int x, int y) = GetPoint(); // split up the Point according to its deconstructor

C# 7.0 did not integrate deconstruction with patterns. That changes with positional patterns which are an additional way that we are extending type patterns in C# 8.0. If the matched type is a tuple type or has a deconstructor, we can use positional patterns as a compact way of applying recursive patterns without having to name properties:

static string Display(object o) => o switch
{
    Point(0, 0)         => "origin",
    Point(var x, var y) => $"({x}, {y})",
    _                   => "unknown"
};

Once the object has been matched as a Point, the deconstructor is applied, and the nested patterns are applied to the resulting values.
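For reference, a Point class with a deconstructor might look roughly like this (a sketch; the exact Point used earlier in this post may differ in detail):

    public class Point
    {
        public int X { get; }
        public int Y { get; }

        public Point(int x, int y) => (X, Y) = (x, y);

        // The deconstructor: enables (int x, int y) = p; as well as positional patterns.
        public void Deconstruct(out int x, out int y) => (x, y) = (X, Y);
    }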

Deconstructors aren’t always appropriate. They should only be added to types where it’s really clear which of the values is which. For a Point class, for instance, it’s safe and intuitive to assume that the first value is X and the second is Y, so the above switch expression is intuitive and easy to read.

Tuple patterns

A very useful special case of positional patterns is when they are applied to tuples. If a switch statement is applied to a tuple expression directly, we even allow the extra set of parentheses to be omitted, as in switch (x, y, z) instead of switch ((x, y, z)).

Tuple patterns are great for testing multiple pieces of input at the same time. Here is a simple implementation of a state machine:

static State ChangeState(State current, Transition transition, bool hasKey) =>
    (current, transition) switch
    {
        (Opened, Close)              => Closed,
        (Closed, Open)               => Opened,
        (Closed, Lock)   when hasKey => Locked,
        (Locked, Unlock) when hasKey => Closed,
        _ => throw new InvalidOperationException($"Invalid transition")
    };

Of course we could opt to include hasKey in the switched-on tuple instead of using when clauses – it is really a matter of taste:

static State ChangeState(State current, Transition transition, bool hasKey) =>
    (current, transition, hasKey) switch
    {
        (Opened, Close,  _)    => Closed,
        (Closed, Open,   _)    => Opened,
        (Closed, Lock,   true) => Locked,
        (Locked, Unlock, true) => Closed,
        _ => throw new InvalidOperationException($"Invalid transition")
    };

All in all I hope you can see that recursive patterns and switch expressions can lead to clearer and more declarative program logic.

Other C# 8.0 features in Preview 2

While the pattern features are the major ones to come online in VS 2019 Preview 2, there are a few smaller ones that I hope you will also find useful and fun. I won’t go into details here, but just give you a brief description of each.

Using declarations

In C#, using statements always cause a level of nesting, which can be highly annoying and hurt readability. For the simple cases where you just want a resource to be cleaned up at the end of a scope, you now have using declarations instead. Using declarations are simply local variable declarations with a using keyword in front, and their contents are disposed at the end of the current statement block. So instead of:

static void Main(string[] args)
{
    using (var options = Parse(args))
    {
        if (options["verbose"]) { WriteLine("Logging..."); }
        ...
    } // options disposed here
}

You can simply write

static void Main(string[] args)
{
    using var options = Parse(args);
    if (options["verbose"]) { WriteLine("Logging..."); }

} // options disposed here

Disposable ref structs

Ref structs were introduced in C# 7.2, and this is not the place to reiterate their usefulness, but in return they come with some severe limitations, such as not being able to implement interfaces. Ref structs can now be disposable without implementing the IDisposable interface, simply by having a Dispose method in them.
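Here is a minimal sketch of what that looks like (the type and its buffer are made up for illustration):

    using System;

    ref struct TempBuffer
    {
        private Span<byte> _span;

        public TempBuffer(Span<byte> span) => _span = span;

        // No IDisposable here - ref structs can't implement interfaces.
        // A public Dispose method is enough for 'using' to work in C# 8.0.
        public void Dispose() => _span = default;
    }

    // Usage (this also shows a using declaration):
    // using var buffer = new TempBuffer(stackalloc byte[256]);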

Static local functions

If you want to make sure your local function doesn’t incur the runtime costs associated with "capturing" (referencing) variables from the enclosing scope, you can declare it as static. The compiler will then prevent it from referencing anything declared in enclosing functions – except other static local functions!
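A quick sketch (with a hypothetical method) of what that looks like:

    static int SumOfSquares(int[] values)
    {
        int total = 0;
        foreach (var v in values)
            total += Square(v);
        return total;

        // 'static' guarantees Square captures nothing from SumOfSquares;
        // referencing 'values' or 'total' in here would be a compile-time error.
        static int Square(int x) => x * x;
    }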

Changes since Preview 1

The main features of Preview 1 were nullable reference types and async streams. Both have evolved a bit in Preview 2, so if you’ve started using them, the following is good to be aware of.

Nullable reference types

We’ve added more options to control nullable warnings both in source (through #nullable and #pragma warning directives) and at the project level. We also changed the project file opt-in to <NullableContextOptions>enable</NullableContextOptions>.
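For example, the source-level directives let you scope the nullable behavior to a region of code. A small sketch, assuming the project has opted in:

    #nullable enable
    class NullableDemo
    {
        static void M()
        {
            string? maybeNull = null;    // fine: explicitly nullable
            string notNull = maybeNull;  // warning: possible null assignment to non-nullable
    #nullable disable
            string legacy = null;        // oblivious again: no warning here
        }
    }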

Async streams

We changed the shape of the IAsyncEnumerable<T> interface the compiler expects! This brings the compiler out of sync with the interface provided in .NET Core 3.0 Preview 1, which can cause you some amount of trouble. However, .NET Core 3.0 Preview 2 is due out shortly, and that brings the interfaces back in sync.
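If you are just picking async streams up, a minimal example looks like this (a sketch; it requires a runtime whose IAsyncEnumerable<T> matches what the compiler expects, such as .NET Core 3.0 Preview 2):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class AsyncStreamDemo
    {
        static async IAsyncEnumerable<int> SlowCounter(int count)
        {
            for (int i = 0; i < count; i++)
            {
                await Task.Delay(100);  // simulate asynchronous work
                yield return i;
            }
        }

        static async Task Main()
        {
            await foreach (var n in SlowCounter(5))
                Console.WriteLine(n);
        }
    }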

Have at it!

As always, we are keen for your feedback! Please play around with the new pattern features in particular. Do you run into brick walls? Is something annoying? What are some cool and useful scenarios you find for them? Hit the feedback button and let us know!

Happy hacking,

Mads Torgersen, design lead for C#

Template IntelliSense Improvements for Visual Studio 2019 Preview 2


In the first version of Template IntelliSense, we introduced the Template Bar which allowed you to provide sample arguments for your template in order to get a richer IntelliSense experience within the template body. Since then, we’ve received a lot of great feedback and suggestions which have led to significant improvements. Our latest iteration includes the following:

  • Peek Window UI
  • Live Edits
  • Nested Template support
  • Default Argument watermarks

Peek Window UI and Live Edits

Clicking the edit button on the Template Bar no longer brings up a modal dialog; instead, it opens a Peek Window. The benefit of the Peek Window UI is that it integrates more smoothly into your workflow and allows you to perform live edits. As you type your sample template arguments, the IntelliSense in the template body will update in real-time to reflect your changes. This lets you quickly see how various arguments may affect your code. In the example below, we see that we get Member List completion for std::string, but we get a red squiggle when we change the sample argument to double.

Nested Template Support and Default Argument Watermarks

We’ve also improved our Template Bar support for nested templates. Previously, the Template Bar would only appear at the top-level parent. Now, it appears at the template header of the innermost template relative to the cursor. Note that even from within the member function template you will be able to modify the sample argument of the containing class template:

You’ll also notice that we auto-populate the Peek Window textbox with a watermark if there is a default argument (as in the case of V above). Keeping that textbox as-is will use the default value for IntelliSense; otherwise, you can specify a different sample argument.

Other Productivity Features in Preview 2

C++ Productivity Improvements in Visual Studio 2019 Preview 2

Talk to Us!

We’d love for you to download Visual Studio and give Template IntelliSense a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with MSVC or have a suggestion for Visual Studio, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the IDE, or via Developer Community. You can also find us on Twitter (@VisualC).

C++ Productivity Improvements in Visual Studio 2019 Preview 2


Visual Studio 2019 Preview 2 contains a host of productivity features, including some new quick fixes and code navigation improvements:

The Quick Actions menu can be used to select the quick fixes referenced below. You can hover over a squiggle and click the lightbulb that appears or open the menu with Alt + Enter.

Quick Fix: Add missing #include

Have you ever forgotten which header to reference from the C++ Standard Library to use a particular function or symbol? Now, Visual Studio will figure that out for you and offer to fix it:
When you use a type or function defined in a different header, and that header was not #included in the file, Visual Studio squiggles the symbol in red. Now, when you hover over the squiggle, you will be shown an option to automatically add the missing #include.

But this feature doesn’t just find standard library headers. It can tell you about missing headers from your codebase too:
The add missing #include quick fix even works for non-STL headers included in your codebase, so you don't have to remember where you declared everything all the time.

Quick Fix: NULL to nullptr

An automatic quick fix for the NULL->nullptr code analysis warning (C26477: USE_NULLPTR_NOT_CONSTANT) is available via the lightbulb menu on relevant lines, enabled by default in the “C++ Core Check Type Rules,” “C++ Core Check Rules,” and “Microsoft All Rules” rulesets.
It is generally bad practice to use the "NULL" macro in modern C++ code, so Visual Studio will place a squiggle under NULLs. When you hover over such a squiggle, Visual Studio will offer to replace it with nullptr for you.
You’ll be able to see a preview of the change for the fix and can choose to confirm it if it looks good. The code will be fixed automatically and the green squiggle removed.

Quick Fix: Add missing semicolon

A common pitfall for students learning C++ is remembering to add the semicolon at the end of a statement. Visual Studio will now identify this issue and offer to fix it.
Visual Studio offers to add missing semicolons to your code where needed. Just hover over the squiggle that appears and choose the option to fix it in the menu.

Quick Fix: Resolve missing namespace or scope

Visual Studio will offer to add a “using namespace” statement to your code if one is missing, or alternatively, offer to qualify the symbol’s scope directly with the scope operator:
When you forget to qualify a type or function with the namespace it comes from, Visual Studio will offer to fill in the missing namespace, along with the scope operator, in the code. Alternatively, you can let Visual Studio insert a "using namespace" statement above the code.
Note: we are currently tracking a bug with the quick fix to add “using namespace” that may cause it to not work correctly – we expect to resolve it in a future update.

Quick Fix: Replace bad indirection operands (* to & and & to *)

Did you ever forget to dereference a pointer and manage to reference it directly instead? Or perhaps you meant to refer to the pointer and not what it points to? This quick action offers to fix such issues:
Visual Studio will offer to correct errors arising from using a * instead of a & in your code (and vice versa). Hover over the squiggle and choose the corresponding fix to resolve the issue.

Quick Info on closing brace

You can now see a Quick Info tooltip when you hover over a closing brace, giving you some information about the starting line of the code block.
When you hover over a closing brace in Visual Studio, context will be provided about the starting line of the code block.

Peek Header / Code File

Visual Studio already had a feature to toggle between a header and a source C++ file, commonly invoked via Ctrl + K, Ctrl + O, or the right-click context menu in the editor. Now, you can also peek at the other file without leaving your current one with Peek Header / Code File (Ctrl + K, Ctrl + J):
You can peek at the header of a C++ source file with Ctrl + K, Ctrl + J. You can also peek at the source file of a header the same way.

Go to Document on #include

F12 is often used to go to the definition of a code symbol. Now you can also do it on #include directives to open the corresponding file. In the right-click context menu, this is referred to as Go to Document:
You can now press F12 on a #include directive to go to that file in Visual Studio.

Other productivity features to check out

We have several more C++ productivity improvements in Preview 2, covered in separate blog posts:

We want your feedback

We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

In-editor code analysis in Visual Studio 2019 Preview 2


The C++ team has been working to refresh the Code Analysis experience inside Visual Studio. Last year, we blogged about some in-progress features in this area. We’re happy to announce that in Visual Studio 2019 Preview 2, we’ve integrated code analysis directly into the editor, improved upon previously experimental features, and enabled this as the default experience.

In-editor warnings & background analysis

Code analysis now runs automatically in the background, and warnings display as green squiggles in-editor. Analysis re-runs every time you open a file in the editor and when you save your changes.

If you wish to disable – or re-enable – this feature, you can do so via the Tools > Options > Text Editor > C++ > Experimental > Code Analysis menu, where you’ll also be able to toggle squiggles displaying in-editor or the entire new C++ Code Analysis/Error List experience.

Squiggle display improvements

We’ve also made a few improvements to the display style of in-editor warnings. Squiggles are now only displayed underneath the code segment that is relevant to the warning. If we cannot find the appropriate code segment, we fall back to the Visual Studio 2017 behavior of showing the squiggle for the entire line.

Visual Studio 2017

Visual Studio 2019

We’ve also made performance improvements, especially for source files with many C++ code analysis warnings. Latency from when the file is analyzed until green squiggles appear has been greatly improved, and we’ve also enhanced the overall UI performance during code analysis squiggle display.

Light bulb suggestions & fix-its

We’ve begun adding light bulb suggestions to provide automatic fixes for warnings. Please see the C++ Productivity Improvements in Visual Studio 2019 Preview 2 blog post for more information.

Send us feedback

Thank you to everyone who helps make Visual Studio a better experience for all. Your feedback is critical in ensuring we can deliver the best Code Analysis experience. We’d love for you to download Visual Studio 2019 Preview 2, give it a try, and let us know how it’s working for you in the comments below or via email (visualcpp@microsoft.com). If you encounter problems or have a suggestion, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion or via Visual Studio Developer Community. You can also find us on Twitter @VisualC.


Introducing the New CMake Project Settings UI


Visual Studio 2019 Preview 2 introduces a new CMake Project Settings Editor to help you more easily configure your CMake projects in Visual Studio. The editor provides an alternative to modifying the CMakeSettings.json file directly and allows you to create and manage your CMake configurations.  

If you’re just getting started with CMake in Visual Studio, head over to our CMake Support in Visual Studio introductory page

The new CMake Project Settings Editor

The goal of this editor is to simplify the experience of configuring a CMake project by grouping and promoting commonly used settings, hiding advanced settings, and making it easier to edit CMake variables. This is the first preview of this new UI so we will continue to improve it based on your feedback.  

Open the editor

The CMake Project Settings Editor opens by default when you select “Manage Configurations…” from the configuration drop-down menu at the top of the screen.  

Open the CMake Project Settings Editor by selecting "Manage Configurations..." from the configuration drop-down menu at the top of the screen.

You can also right-click on CMakeSettings.json in the Solution Explorer and select “Edit CMake Settings” from the context menu. If you prefer to manage your configurations directly from the CMakeSettings.json file, you can click the link to “Edit JSON” in the top right-hand corner of the editor.  

Configurations sidebar

The left side of the editor contains a configurations sidebar where you can easily toggle between your existing configurations, add a new configuration, and remove configurations. You can also now clone an existing configuration so that the new configuration inherits all properties set by the original. 

The configurations sidebar is on the left side of the editor.

Sections of the editor

The editor contains four sections: General, Command Arguments, CMake Variables and Cache, and Advanced. The General, Command Arguments, and Advanced sections provide a user interface for properties exposed in the CMakeSettings.json file. The Advanced section is hidden by default and can be expanded by clicking the link to “Show advanced settings” at the bottom of the editor.  

The CMake Variables and Cache section provides a new way for you to edit CMake variables. You can click “Save and Generate CMake Cache to Load Variables” to generate the CMake cache and populate a table with all the CMake cache variables available for you to edit. Advanced variables (per the CMake GUI) are hidden by default. You can check “Show Advanced Variables” to show all cache variables or use the search functionality to filter CMake variables by name. 

The CMake Variables and Cache section provides a new way for you to edit CMake variables.

You can change the value of any CMake variable by editing the “Value” column of the table. Modified variables are automatically saved to CMakeSettings.json.

Linux configurations

The CMake Project Settings Editor also provides support for Linux configurations. If you are targeting a remote Linux machine, the editor will expose properties specific to a remote build and link to the Connection Manager, where you can add and remove connections to remote machines.  

CMake Settings support for Linux configurations.

Give us your feedback!

We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp). 

.NET and TypeScript at FOSDEM 2019


The schedule for the .NET and TypeScript Developer Room at FOSDEM has now been published!

FOSDEM is one of the longest running Free and Open Source conferences, and we’re excited to have a .NET and TypeScript Developer Room this year, with lots of great speakers and sessions.

The conference is Saturday 2nd to Sunday 3rd February at the ULB Solbosch Campus in Brussels. Attendance is free, and there’s no need to register. The .NET and TypeScript Developer Room will run all day Saturday from 10:30am to 7pm in room K.3.201.

Additionally, Scott Hanselman will give a Main Track talk, Open Source C#, .NET and Blazor, at 3pm on Sunday in room Janson. FOSDEM has posted an interview with Scott where you can learn more about his hopes for the talk.

If you missed the call for participation, don’t fret — you can bring a lightning talk on the day.

Automating Releases in GitHub through Azure Pipelines


Do you own a GitHub repository? Do you create releases on GitHub to distribute software packages? Do you manually compile a list of changes to be included in release notes? If yes, you will be excited to know that you can now automate creation and modification of GitHub Releases directly from Azure Pipelines. This can be done through the GitHub Release task that is now rolled out to all users.

Here is a simple YAML syntax of the task for you to get started:

steps:
- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  inputs:
    githubConnection: zenithworks
    repositoryName: zenithworks/simplehtml

You can also use the Visual Editor if you prefer.

create github release from azure pipelines

Actions:

There are 3 actions possible using this task:

  • Create a GitHub release
  • Edit a GitHub release
  • Delete a GitHub release

Create a GitHub release: This action is useful when you want to create a new release using the assets generated from successful CI builds. You can do it for all CI runs or only for specific ones. By default, the task will create a release only when a tag is found associated with the commit for which the CI is triggered. A common way to use this task would be to include it as the last step of your CI pipeline. At the end of each pipeline run, this task will check whether a tag exists for the triggering commit; if so, a release will be created in the GitHub repository, uploading the current build's binaries as assets and appending the changelog to the release notes.
You can further restrict release creation to certain tag patterns. For this, use a custom condition in the task's control options and specify the required tag pattern. The task will then run only when it finds a matching tag.

run the github release task only for specific tag patterns

 

Edit a GitHub release: This action is especially useful when you want to continuously update an existing draft release with the latest built assets. With each CI run this task can edit the draft release (identified by tag) and upload the latest build assets, release notes etc.

Here is a sample YAML that edits a release:

steps:
- task: GitHubRelease@0
  displayName: 'Edit GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/simplehtml
    action: edit
    tag: $(draftReleaseTag)
    assets: $(Build.ArtifactStagingDirectory)/distributableBinaries/*

This action can also be used in conjunction with approvals. A draft release can be automatically published publicly once the release notes and assets have been manually verified and approved by one or more stakeholders.

Delete a GitHub release: This action can be used to clean up older releases, specifically draft ones. It deletes all releases matching the specified tag.

Compiling Release Notes:

Another exciting feature included in this task is the ability to automatically compile a changelog. The task can compute the changes made in this release compared to the last published release and append them to the release notes. The list of changes includes the commit SHA, message, and linked issues.

Automatically create changelog for GitHub Release using Azure Pipelines

We are excited to have rolled out this feature and want to hear your feedback. Also, all of our built-in tasks, including GitHub Release, are open source and available on GitHub for anyone to contribute. If you find any bugs or have any suggestions, please feel free to report them there.

Resources:
1. Task code in GitHub
2. YAML syntax and documentation

    Top 5 Open Source Features in Azure Pipelines


    When I became a Program Manager, I gave up writing software for a living. So I did what many programmers do when faced with such a dilemma: I started working on open source software in my spare time. One of the projects that I work on is called libgit2.

    You might not have heard of libgit2; despite that, you’ve almost certainly used it. libgit2 is the Git repository management library that powers many graphical Git clients, dozens of tools, and all the major Git hosting providers. Whether you host your code on GitHub, Azure Repos, or somewhere else, it’s libgit2 that merges your pull requests.

    This is a big responsibility for a small open source project: keeping our code working and well-tested is critical to making sure that we keep your code working and well-tested. Bugs in our code would mean that your code doesn’t get merged. Keeping our continuous integration setup efficient and operational is critical to keeping our development velocity high while maintaining confidence in our code.

    That’s why libgit2 relies on Azure Pipelines for our builds.

    Here’s my five favorite things about Azure Pipelines for open source projects. Some of these are unique to Azure Pipelines, some are not, but all of them help me maintain my projects.

    One Service, Every Platform

    libgit2 tries to support many architectures and operating systems, but we have three “core” platforms that we want our master branch to always build on: Linux, Windows and macOS. This has always been a challenge for continuous integration build systems.

    The libgit2 project originally did its CI on some Jenkins servers that were donated by GitHub from their internal build farm. The build machine was called “janky” and it’s hard to imagine a more appropriate name since we could only get a green checkmark on success or a red X on failure. Since this was an internal build server, open source contributors weren’t authorized to get the full build logs.

    Eventually we moved over to Travis so that we could get more insight into build and test runs. This was a big step up, but it was Linux only. Inevitably someone would check in some code that worked great on Linux but called some function that didn’t exist in Win32. So then we added AppVeyor to the mix for our Windows builds, and this also seemed like a big step up.

    But it didn’t take long before we realized that having two different systems meant doing more than three times as much work on our infrastructure trying to coordinate the configuration and communication between them. And we were paying two different bills, trying to get the fastest builds that we could afford on donations into our shoestring budget. Over time, our CI configuration became incredibly frustrating and not really what I wanted to be working on in my free time hacking on open source. So when we were finally given the option to standardize on a single build system in Azure Pipelines, we jumped at it.

    When we moved over to Azure Pipelines, we got a single source of truth for all our platforms. Linux, Windows and macOS, all hosted in the cloud, all offered in a single service. It simplified everything. There’s one configuration. One output to look at. One set of artifacts produced. One bill at the end of the month. Well, actually, there’s not, because…

    Unlimited Build Minutes; 10 Parallel Pipelines; Zero charge

    Azure Pipelines has a generous offer for open source projects: unlimited build minutes across 10 parallel build pipelines. All for free.

    Having 10 concurrent build pipelines is incredible for libgit2, since we want to build on so many different platforms. Although we only have those three “core” targets that we want to target on every pull request build, there are actually small variances that we want to cover. For example, we want to target both x86 and amd64 architectures on Windows. We want to make sure that we build in gcc, clang, MSVC and mingw. And we want to build with both OpenSSL and mbedTLS.

    Ultimately that means that we run nine builds to validate every pull request. This is actually more validation builds than we used to run with our old Travis and AppVeyor, and thanks to all that parallelism it’s much, much faster. We used to get just a few parallel builds running and long queue times, so when many contributors were working on their pull requests, tweaking and responding to feedback, it took an achingly long time to get validation builds done. And we were paying for that privilege.

    Now we get almost instant start times and all nine builds running in parallel. And it’s free.

    Scheduling Builds

    If you were thinking that running nine builds on every pull request was a lot… I’m afraid that I have to disagree with you. Those nine builds just cover those core platforms: several Linux builds, a handful of Windows builds, and one for macOS. And those give us a reasonably high confidence in the quality of pull requests and the current state of the master branch.

    But always, we want more. Otherwise you might forget that on ARM platforms, chars are signed by default, unlike most other processors. Or that SPARC is aggressive about enforcing alignment. There are too many crazy little variances in other platforms that are hard to remember, that only show up when you run a build and test pass. So build and test we must, or else we accidentally ship a release that fails on those platforms.

    But these platforms are sufficiently uncommon, and their idiosyncrasies mild enough that we don’t need to build every single pull request; it’s sufficient for us to build daily. So every night we queue up another fourteen builds, running on even more platforms, executing the long-running tests, and performing static code analysis.

    This setup gives us a good balance between getting a quick result when validating the core platforms all the time, but still making sure we are validating all the platforms daily.

    Publishing Test Results

    When contributors push up a pull request, they often want more than just a simple result telling them whether the build succeeded or failed. When tests fail, especially, it’s good to get more insight into which ones passed and which didn’t.

    Tests Failed

    If your test framework outputs a test results file, you can probably upload it to Azure Pipelines, where it can provide you a visualization of your tests. Azure Pipelines supports a bunch of formats: JUnit, NUnit (versions 2 and 3), Visual Studio Test (TRX), and xUnit 2. JUnit, in particular, is very commonly used across multiple build tools and languages.

    All you have to do is add an “upload test results” task to your pipeline. You can either do that with the visual designer, or if you use YAML:

    - task: PublishTestResults@2
      displayName: Publish Test Results
      condition: succeededOrFailed()
      inputs:
        testResultsFiles: 'results_*.xml'
        searchFolder: '$(Build.BinariesDirectory)'
        mergeTestResults: true

    Now when you view a build, you can click on the Tests tab to see the test results. If there are any failures, you can click through to get the details.

    Build Badges

    Publishing test results give contributors great visibility into the low-level, nitty-gritty details of the test passes. But users of the project and would-be new contributors want a much higher-level view of the project’s health. They don’t care if the ssh tests are being skipped, they usually care about a much more binary proposition: does it build? Yes or no?

    Build badges give a simple view of whether your build pipeline is running or not. And you can set up a different badge for each pipeline. If you have builds set up on maintenance branches, you can show each of them. If you’re running a scheduled or nightly build, you can show a badge for that.

    It’s simple to add a build badge to your project’s README. In your build pipeline in Azure Pipelines, just click the ellipses (“…”) menu, and select Status Badge. From there, you’ll get the markdown that you can place directly in your README.

    Want to tweak it? You can change the label that gets displayed in the build badge. This is especially helpful if you have multiple builds and multiple badges. Just add ?label=MyLabel to the end of the build badge URL.

    Getting Started

    These five helpful tips will help you use Azure Pipelines to maintain your project. If you’re not using Azure Pipelines yet, it’s easy – and free – for open source projects. To get started, just visit https://azure.com/pipelines.

    Debugging .NET Apps with Time Travel Debugging (TTD)


    When you are debugging an application, there are many tools and techniques you can use, like logs, memory dumps and Event Tracing for Windows (ETW). In this post, we will talk about Time Travel Debugging, a tool used by Microsoft Support, product teams, and more advanced users, but I encourage everyone to try this approach when diagnosing hard-to-find bugs.

    Time Travel Debugging

    Time Travel Debugging, or TTD, is the process of recording and then replaying the execution of a process, both forwards and backwards, to understand what is happening during that execution. It is vital for fixing bugs when the root cause is not clear and the symptoms only appear moments later, when the source of the problem is gone. In a way it’s similar to IntelliTrace (available in Visual Studio 2017 Enterprise), but while IntelliTrace records specific events and associated data – call stack, function parameters and return value – TTD takes a more general approach and lets you move at the single-instruction level and access any process data (heap, registers, stack).

    Just as you can live-debug either a native or a managed process, you can use TTD with both, including .NET Core, but it is limited to Windows.
    In the following sections, we’ll describe the particularities of debugging a managed process.

    TTD also lets you rewind and replay as many times as you want, helping you isolate the moment the problem happened, and you can set breakpoints just as you usually do in WinDbg. Even when you don’t need to rewind, TTD has advantages over live debugging: it doesn’t interrupt the process, and you can create a trace and analyze it offline. Be aware, though, that TTD is very intrusive; ideally you shouldn’t record for more than a few minutes, or the trace files can become very large (>5 GB).

    Demo Lab

    The demo application is a simple, deliberately buggy Windows Forms application that writes a log file to disk. You can download the code from GitHub and compile it, or download the binaries. The machine where we’ll record the traces must be running Windows 10 version 1809, Windows Server 2019, or newer.

    Open the application and click Save several times; after a few seconds it stops working:

    Recording a trace

    You can use WinDbg Preview to record the trace, but for this Lab, we’ll use the built-in TTTracer.exe available on Windows 10 (1809 or newer) and Windows Server 2019, because sometimes it’s impossible to install tools in a production environment.

    1. Open LabWindbgTTD.exe on the target machine and take note of its PID:

    2. Open a Command Line as admin and enter the following command, replacing <PID> with the PID of the process:

    TTTracer -attach <PID>

    Now, the TTD is recording the process execution.

    3. Go to the LabWindbgTTD application and click on Save several times until you receive the error. After the error appears, in a different Command Prompt (with admin privileges), execute the following command to stop the trace:

    TTTracer -stop <PID>

    4. You are going to be notified about the trace stop in the first Command Prompt:

    5. TTTracer will generate two files: LabWindbgTTD01.out and LabWindbgTTD01.run

    Replay and Analyze

    Copy both files to your machine. You can use WinDbg or WinDbg Preview to analyze them; don’t forget to set up the symbols so the function names can be resolved. We’ll use WinDbg Preview, but the steps are similar in WinDbg: click File -> Open Trace File and select the “.run” file:

    When debugging a TTD file, you can Step Into (t), Step Out and Step Over (p) like you do when Live Debugging:

    You can see the current Time Travel Position when stepping through the application, like in the image above.

    Using the !tt <POSITION> command you can navigate to a specific position:

    The !positions command show the positions for all threads:

    But it gets interesting when you step in reverse: instead of p, t, and g, you can execute p- (Step Back), t- (Trace Back), and g- (Go Back):

    Loading Data Access Component and SOS debugging extension

    When debugging .NET applications in a different machine from where the dump or trace was created, you need to copy the Data Access and SOS dlls:

    • For .NET Framework: mscordacwks.dll and SOS.dll in C:\Windows\Microsoft.NET\Framework\v4.0.30319 (or C:\Windows\Microsoft.NET\Framework64\v4.0.30319 for 64-bit processes)
    • For .NET Core: mscordaccore.dll and SOS.dll in C:\Program Files\dotnet\shared\Microsoft.NETCore.App\X.X.X

    And execute this command in WinDbg:

    .cordll -ve -u -lp <PATH>

    .NET Framework

    .NET Core

    Finding the problem

    Now that we know how to navigate the TTD trace, how can we find the bug?

    Let’s try to stop when the error occurs. To do this you could use “sxe clr;g” like you would in a live debug, but TTD also extends the Session and Process data model objects, exposing events such as exceptions. To see them all, execute “dx @$curprocess.TTD” and “dx @$cursession.TTD”:

    dx Command

    We can take advantage of this feature and filter the Exception events, “dx -r2 @$curprocess.TTD.Events.Where(t => t.Type == “Exception”).Select(e => e.Exception)“:

    dx Exception Command

    Click on [Time Travel] to navigate to the moment when the Exception was thrown and execute !pe to see the System.ObjectDisposedException.

    pe

    You can now execute !clrstack to see exactly which method is throwing the exception, but it isn’t very helpful here, since BtnSave_Click just calls StreamWriter.WriteAsync and does not dispose the object.

    ClrStack

    In this case, a log containing the stack trace, or a dump file, wouldn’t help. The application is small and simple enough that you could easily find the problem just by reading the code, but let’s continue the analysis using WinDbg.

    Execute !dso (or !DumpStackObjects) to see the objects in the current stack and click on StreamWriter address.

    dso Exception

    Click the address to execute !do (or !DumpObj), which shows the details of the object, including its fields. There we can see the stream field is null, which means the writer has been disposed.

    do streamwriter

    We know that at this point the StreamWriter.Dispose method has been called, and we need to find out who called it. Set a breakpoint on this method and continue the execution in reverse: “!bpmd mscorlib.dll System.IO.StreamWriter.Dispose;g-”:

    bpmd

    You’ll stop at the Dispose method of StreamWriter. Execute !dso again; we can see a StreamWriter at the same address as before. Let’s inspect the object and the underlying stream to find more details about it.

    dso

    The object address may be different because a Garbage Collection happened or just because we are looking at a different instance of StreamWriter. In this case, you would need to check if the object is the same.

    Another option to check whether it is the same object is to use !GCRoot to find references to it; this way we can see whether, at both points in time, the StreamWriter object is the LogFile field in Form1.

    GCRoot

    If it is not the object you are looking for, execute g- again until you find it, then execute !clrstack to show the Stack Trace and find the method that is Disposing the StreamWriter:

    ClrStack Dispose
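    The underlying pattern is the classic shared-writer bug: one code path disposes a StreamWriter field while another still writes to it. A hypothetical sketch of the kind of code TTD just uncovered (not the actual LabWindbgTTD source):

        // Hypothetical sketch of the bug pattern - not the real LabWindbgTTD code.
        using System;
        using System.IO;
        using System.Windows.Forms;

        public partial class Form1 : Form
        {
            private StreamWriter LogFile = new StreamWriter("log.txt");

            private async void BtnSave_Click(object sender, EventArgs e)
            {
                // Throws ObjectDisposedException once LogFile has been disposed elsewhere.
                await LogFile.WriteAsync("saved" + Environment.NewLine);
            }

            private void CleanupLogs()
            {
                LogFile.Dispose(); // the culprit the reverse breakpoint leads you to
            }
        }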

    Conclusion

    TTD makes it viable to analyze many scenarios where it would otherwise be extremely difficult to reproduce the problem or collect the right data. The ability to go back and forth is powerful and has the potential to greatly reduce troubleshooting time.

    Microsoft Docs has many more details about TTTracer and Windbg Preview.
