
Top Stories from the Microsoft DevOps Community – 2019.01.25


Whew, in addition to the day job, I’ve been hacking on some open source this week, so I’ve been thinking a lot about my favorite things about Azure Pipelines that help out with OSS. It’s been a busy week, and I’m looking forward to the weekend. But first, I wanted to share some of this week’s great stories from the community.

A Better Way of Deploying a Dockerized Application to Azure Kubernetes Service Using Azure Pipelines
Last year, Graham Smith put together a series of blog posts about deploying a containerized ASP.NET Core application to AKS. Now he’s revisiting it to modernize it, break down a single monolithic pipeline into several smaller ones, and improve some of the “rough edges” in his original solution.

Import Work Item from external system to Azure DevOps
If you’ve got several software projects, then you’ve probably got a lot of work items in various places. There are great solutions if you’re bringing data in from Team Foundation Server, but what if you need something more general? Gian Maria Ricci shows how to use the Azure DevOps REST APIs to import data from another system.
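Gian Maria’s post has the full details; as a rough sketch of the core call (the organization, project, and PAT below are hypothetical placeholders), creating a single work item through the REST API looks something like this in Python:

import base64
import json
import urllib.request

# Hypothetical values -- substitute your own organization, project, and PAT.
ORG, PROJECT, PAT = "myorg", "MyProject", "<personal-access-token>"

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/"
       "$Bug?api-version=5.0")
# The body is a JSON Patch document that sets individual work item fields.
body = json.dumps([
    {"op": "add", "path": "/fields/System.Title", "value": "Imported bug"},
    {"op": "add", "path": "/fields/System.Description", "value": "From legacy tracker"},
]).encode("utf-8")

req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Content-Type", "application/json-patch+json")
token = base64.b64encode(f":{PAT}".encode()).decode()  # PAT as basic auth password
req.add_header("Authorization", f"Basic {token}")

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["id"])  # ID of the newly created work item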

Azure DevOps: Deploying Secrets from Azure Pipelines
Surely you’re not checking in your password, right? Right?! If you need to use some passwords or secret keys in your build or deployment, the best practice is to store them in a secret store. Jayendran Arumugam shows you how to pull secrets out of Azure Key Vault.
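Jayendran’s walkthrough covers the pipeline side; as a minimal sketch of the retrieval step, here’s one way to read a secret with the Azure SDK for Python (packages azure-identity and azure-keyvault-secrets; the vault URL and secret name are placeholders):

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault and secret names -- substitute your own.
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),  # resolves credentials from the environment
)
secret = client.get_secret("deployment-password")
print(secret.name)  # never log secret.value in a real pipeline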

How To Set Up Continuous Integration Pipelines with Azure Pipelines and PCF
Azure DevOps is bringing our development tools to engineers building software in any language on any platform. So if you’re deploying to Pivotal Cloud Foundry? No problem. Ronak Banka walks you through how to set up an Azure Pipelines release to Cloud Foundry.

DevOps and Culture
My buddy Donovan famously said that DevOps is “the union of people, process, and products to enable continuous delivery of value to our end users”. And in my mind, it’s the people that are the hard part – in other words, the culture. Ron Vincent unpacks why changing the culture is so challenging and how we can think about change across the organization.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.


NuGet’s fancy older sibling FuGet gives you a whole new view of the .NET packaging ecosystem


I remember when we announced NuGet (almost 10 years ago). Today you can get your NuGet packages (that contain .NET libraries) from NuGet.exe, from within Visual Studio, from the .NET CLI (command line interface), and from Paket. Choice is good!

Most folks are familiar with NuGet.org but have you used FuGet?

FuGet is "pro nuget package browsing!" Created by the amazing Frank A. Krueger - of whom I am an immense fan - FuGet offers a different view on the NuGet package library. NuGet is a repository of nearly 150,000 open source libraries and the NuGet Gallery does a decent job of letting one browse around. However, https://github.com/praeclarum/FuGetGallery is an alternative web UI with a lot more depth.

FuGet is "advanced mode" for NuGet. It's a package browser combined with an API browser that helps you explore the XML documentation and metadata of a package's assemblies to help you explore and learn. And it's a JOY.

For example, if I look at https://www.fuget.org/packages/Newtonsoft.Json I can also see who depends on the package! https://www.fuget.org/packages/Newtonsoft.Json/dependents Who has taken a public dependency on your package? I can see supported frameworks, namespaces, as well as internal types. For example, I can explore JToken within Newtonsoft.Json and its embedded docs!

You can even do API diffs across versions! Check out https://www.fuget.org/packages/Serilog/2.8.0-dev-01042/lib/netstandard2.0/diff/2.6.0/ for example. This is an API Diff between 2.8.0-dev-01042 and 2.6.0 for Serilog. This could be useful for users or package maintainers when deciding how big a version bump is required depending on how much of the API has changed. It also gives you a view (as the downstream consumer) of what's coming at you in pre-release versions!

From Frank's blog:

Have you ever wondered if the library you're using has been customized for a certain platform? Have you wondered if it will work on your platform at all?

This doubt is removed by displaying - in full technicolor - all the frameworks that the library supports.

 Supported Frameworks

They’re color coded so you can see at a glance:

  • Green libraries are .NET Standard and will work everywhere
  • Dark blue libraries are platform specific
  • Light blue libraries are for full .NET and Mono only
  • Yellow libraries are old PCLs that we’re all trying to forget

FuGet.org is a fantastic addition to the .NET ecosystem and I'd encourage you to bookmark it, use it, support it, and get involved!

If you're interested in stuff like this (and the code that runs stuff like this) also check out Stephen Cleary's useful http://dotnetapis.com/ and its associated code on GitHub https://github.com/StephenClearyApps/DotNetApis.


Sponsor: Your code is bad, but that’s ok thanks to Sentry’s full stack error monitoring that enables you to track and fix application errors in real time. Stop garbage code from becoming garbage fires.



© 2018 Scott Hanselman. All rights reserved.
     

Azure.Source – Volume 67


Now in preview

Introducing IoT Hub device streams in public preview

Azure IoT Hub device streams is a new PaaS service that addresses the need for security and organization policy compliance by providing a foundation for secure end-to-end connectivity to IoT devices. At its core, an IoT Hub device stream is a data transfer tunnel that provides connectivity between two TCP/IP-enabled endpoints: one side of the tunnel is an IoT device and the other side is a customer endpoint that intends to communicate with the device. IoT Hub device streams address end-to-end connectivity needs by leveraging an IoT Hub cloud endpoint that acts as a proxy for application traffic exchanged between the device and service. IoT Hub device streams are particularly helpful when devices are placed behind a firewall or inside a private network.

Azure IoT Hub Device Streams

Announcing the preview of OpenAPI Specification v3 support in Azure API Management

Azure API Management has just introduced preview support of OpenAPI Specification v3 – the latest version of the broadly used open-source standard for describing APIs. We based the implementation of this feature on the OpenAPI.NET SDK. OpenAPI Specification is a widely adopted industry standard that enables you to abstract your APIs from their implementation in a language-agnostic, easy-to-understand format. The wide adoption of OpenAPI Specification (formerly known as Swagger) has resulted in an extensive tooling ecosystem. If your APIs are defined in an OpenAPI Specification file, you can easily import them into Azure API Management (APIM). APIM helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. Once the backend API is imported into APIM, the APIM API becomes a façade for the backend API.

Regulatory compliance dashboard in Azure Security Center now available

The regulatory compliance dashboard in Azure Security Center (ASC) provides insight into your compliance posture for a set of supported standards and regulations, based on continuous assessments of your Azure environment. The ASC regulatory compliance dashboard is designed to help you improve your compliance posture by resolving recommendations directly within the dashboard. Click through to each recommendation to discover its details, including the resources for which the recommendation should be implemented. The regulatory compliance dashboard preview is available within the standard pricing tier of Azure Security Center, and you can try it for free for the first 30 days.

Screenshot of the regulatory compliance dashboard in the Azure Security Center

Public preview: Read replicas in Azure Database for PostgreSQL

You can now replicate data from a single Azure Database for PostgreSQL server (master) to up to five read-only servers (read replicas) within the same Azure region. This feature uses PostgreSQL's native asynchronous replication. With read replicas, you can scale out your read-intensive workloads. Read replicas can also be used for BI and reporting scenarios. You can choose to stop replication to a replica, in which case it becomes a normal read/write server. Replicas are new servers that can be managed in similar ways as normal standalone Azure Database for PostgreSQL servers. For each read replica, you are billed for the provisioned compute in vCores and provisioned storage in GB/month.

Now generally available

HDInsight Tools for Visual Studio Code now generally available

The Azure HDInsight Tools for Visual Studio Code are now generally available on Windows, Linux and Mac. These tools provide best-in-class authoring experiences for Apache Hive batch jobs, interactive Hive queries, and PySpark jobs. The tools feature a cross-platform, lightweight, keyboard-focused code editor which removes constraints and dependencies on a platform. Azure HDInsight Tools for Visual Studio Code is available for download from Visual Studio Marketplace.

Animated GIF showing Azure HDInsight Tools for Visual Studio Code

Azure Service Bus and Azure Event Hubs expand availability

Availability Zones is a high availability offering that protects applications and data from datacenter failures. Availability Zones support is now generally available for Azure Service Bus premium and Azure Event Hubs standard in every Azure region that has zone redundant datacenters. Note that this feature won’t work with existing namespaces—you will need to provision new namespaces to use this feature.  Availability Zones support for Azure Service Bus Premium and Azure Event Hubs Standard is available in the following regions: East US 2, West US 2, West Europe, North Europe, France Central, and Southeast Asia.

Azure Cognitive Services adds important certifications, greater availability, and new unified key

Over the past six months, we added 31 certifications across services in Cognitive Services and will continue to add more in 2019. With these certifications, hundreds of healthcare, manufacturing, and financial use cases are now supported. In addition, Cognitive Services now offers more assurances for where customer data is stored at rest. These assurances have been enabled by graduating several Cognitive Services to Microsoft Azure Core Services. Also, the global footprint for Cognitive Services has expanded over the past several months — going from 15 to 25 Azure data center regions. Recently, we launched a new bundle of multiple services, enabling the use of a single API key for most of our generally available services: Computer Vision, Content Moderator, Face, Text Analytics, Language Understanding, and Translator Text.
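To make the unified key concrete, here is a hedged sketch of the same subscription key authenticating against two of the bundled services in one region (the region, key, and inputs are placeholders; endpoints are as they existed at the time of writing):

import requests

# Hypothetical region and unified Cognitive Services key.
REGION, KEY = "westus", "<unified-cognitive-services-key>"
HEADERS = {"Ocp-Apim-Subscription-Key": KEY}

# Same key, two different services:
vision = requests.post(
    f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0/analyze",
    params={"visualFeatures": "Description"},
    headers=HEADERS,
    json={"url": "https://example.com/photo.jpg"},
)
sentiment = requests.post(
    f"https://{REGION}.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment",
    headers=HEADERS,
    json={"documents": [{"id": "1", "language": "en", "text": "Azure is great."}]},
)
print(vision.json(), sentiment.json())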

Also generally available

Access generally available functionality in Azure Database Migration Service to migrate Amazon RDS for SQL Server, PostgreSQL, and MySQL to Azure while the source database remains online during migration.

News and updates

Microsoft and Citus Data: Providing the best PostgreSQL service in the cloud

On Thursday, Microsoft announced the acquisition of Citus Data, creator of an innovative open source extension that scales out PostgreSQL databases without the need to re-architect existing applications. Citus Data delivers unparalleled performance and scalability by intelligently distributing data and queries across multiple nodes, which makes sharding simple. Because Citus Data is packaged as an extension (not a fork) to PostgreSQL, customers can take advantage of all the innovations in community PostgreSQL with queries that are significantly faster compared to proprietary implementations of PostgreSQL. More information is available in this post by Rohan Kumar, Corporate Vice President, Azure Data: Microsoft acquires Citus Data, re-affirming its commitment to Open Source and accelerating Azure PostgreSQL performance and scale.

Export data in near real-time from Azure IoT Central

You can now export data in near real-time from your Azure IoT Central app to your Azure Event Hubs and Azure Service Bus instances. Use the new features in Continuous Data Export to export data to your own Azure Event Hubs, Azure Service Bus, and Azure Blob Storage instances for custom warm path and cold path processing, and analytics on your IoT data. Watch this episode of the Internet of Things Show to learn how to export device data to your Azure Blob storage, Azure Event Hub, and Azure Service Bus using continuous data export in IoT Central. You’ll also learn how to set up continuous export to export measurements, devices, and device template data to your destination and how to use this data.

Export data from your IoT Central app to Azure Event Hubs and Azure Service Bus

HDInsight Metastore Migration Tool open source release now available

Microsoft Azure HDInsight Metastore Migration Tool (HMMT) is an open-source shell script that you can use for applying bulk edits to the Hive metastore. HMMT is a low-latency, no-installation solution for challenges related to data migrations in Azure HDInsight. The blog post explains how HMMT relates to the Hive metastore and Hive storage patterns, describes the tool’s design and initial setup steps, and finally walks through some sample migrations to demonstrate its usage and value.

Azure Backup now supports PowerShell and ACLs for Azure Files

Azure Backup now supports preserving and restoring new technology file system (NTFS) access control lists (ACLs) for Azure files in preview. You can now script your backups for Azure file shares using PowerShell. Use the PowerShell commands to configure backups, take on-demand backups, or even restore files from your file shares protected by Azure Backup. Using the “Manage backups” capability in the Azure Files portal, you can take on-demand backups, restore file shares or individual files and folders, and even change the policy used for scheduling backups. You can also go to the Recovery Services vault that backs up the file share and edit the policies used to back up Azure file shares. Backup alerts for the backup and restore jobs of Azure file shares are enabled, which lets you configure notifications of job failures to chosen email addresses.

Screenshot from the Azure portal showing the Azure Files portal and highlighting the Manage backups button

Analyze data in Azure Data Explorer using KQL magic for Jupyter Notebook

Jupyter Notebooks enable you to create and share documents that contain live code, equations, visualizations, and explanatory text. Common uses include data cleaning and transformation, numerical simulation, statistical modeling, and machine learning. KQL magic commands extend the functionality of the Python kernel in Jupyter Notebook, enabling you to write KQL queries natively and query data from Microsoft Azure Data Explorer. You can easily interchange between Python and KQL, and visualize data using the rich Plot.ly library integrated with KQL render commands. KQL magic supports Azure Data Explorer, Application Insights, and Log Analytics as data sources to run queries against. KQL magic also works with Azure Notebooks, Jupyter Lab, and the Visual Studio Code Jupyter extension.
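As a quick taste of the workflow, here is a minimal sketch of a Kqlmagic session in a notebook (the cluster and database are the public Kqlmagic samples; exact magic syntax may vary by version):

# In a Jupyter notebook cell -- a minimal Kqlmagic session against the
# public 'help' demo cluster (sign-in is via device code).
%reload_ext Kqlmagic
%kql AzureDataExplorer://code;cluster='help';database='Samples'

# Queries are written natively in KQL, render inline, and can be pulled
# back into Python as a pandas DataFrame for further work.
%kql StormEvents | summarize events = count() by State | render columnchart
df = _kql_raw_result_.to_dataframe()
print(df.head())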

Hyperledger Fabric updates now available

Hyperledger Fabric is an enterprise-grade distributed ledger that provides modular components, enabling customization of components to fit various scenarios. You can now download from the Azure Marketplace an updated template for Hyperledger Fabric that supports Hyperledger Fabric version 1.3. The automation provided by this solution is designed to make it easier to deploy, configure and govern a multi-member consortium using the Hyperledger Fabric software stack. This episode of Block Talk walks through the Hyperledger Fabric ledger and discusses the core features you can use to customize the deployment of Hyperledger Fabric in your environment. 

Hyperledger Fabric on Azure

Additional news and updates

Technical content

Connecting Node-RED to Azure IoT Central

In this post, Peter Provost, Principal PM Manager, Azure IoT, shows how simple it is to connect a temperature/humidity sensor to Azure IoT Central using a Raspberry Pi and Node-RED. Node-RED is a flow-based, drag and drop programming tool designed for IoT. It enables the creation of robust automation flows in a web browser, simplifying IoT project development.

Screenshot of the Node-RED flow for this example

Getting started with Azure Blueprints

Azure Blueprints (currently in preview) helps you define which policies (including policy initiatives), RBAC settings, and ARM templates to apply on a subscription basis, making it easy to set configurations at scale, knowing that any resources created in those subscriptions will comply with those settings (or will show as non-compliant in the case of audit policies). Sonia provides an intro to the service, showing how blueprints group configuration controls, like Azure Policy and RBAC, and then uses an example scenario to demonstrate how and why to use Blueprints to simplify compliance and governance.

RStudio Server on Azure

RStudio Server Pro, the premier IDE for the R programming language, is now available on the Azure Marketplace, letting you launch it on a virtual machine of your choice. David details the benefits of this new offering and also lists alternative solutions for developers interested in running a self-managed instance of RStudio Server.

Sneak Peek: Making Petabyte Scale Data Actionable with ADX Part 2

To celebrate the recent announcement of free private repos on GitHub, Ari released a sneak peek of what he's working on for Part 2 of his "Making Petabyte Scale Data Actionable with Azure Data Explorer" series.

Azure shows

The Azure Podcast | Episode 263 - Partner Spotlight - Aqua Security

Liz Rice, Technical Evangelist at Aqua Security and master of all things security in Kubernetes, talks to us about her philosophy on security and gives us some great tips-n-tricks on how to secure your container workloads in Azure, on-prem, or in any cloud.

Azure Friday | An intro to Azure Cosmos DB JavaScript SDK 2.0

Chris Anderson joins Donovan Brown to discuss Azure Cosmos DB JavaScript SDK 2.0, which adds support for multi-region writes, a new fluent-style object model—making it easier to reference Azure Cosmos DB resources without an explicit URL—and support for promises and other modern JavaScript features. It is also written in TypeScript and supports the latest TypeScript 3.0.

AI Show | Learn by Doing: A Look at Samples

Gain an understanding of the landscape of sample projects available for Cognitive Services.

Five Things | Five Reasons Why You Should Check Out Cosmos DB

What does a giant Jenga tower have in common with NoSQL databases? NOTHING. But we're giving you both anyway. In this episode, Burke and Jasmine Greenaway bring you five reasons that you should check out Cosmos DB today. They also play a dangerous game of Jenga with an oversized tower made out of 2x4's, and someone nearly gets crushed.

The DevOps Lab | Verifying your Database Deployment with Azure DevOps

While at Microsoft Ignite | The Tour in Berlin, Damian speaks to Microsoft MVP Houssem Dellai about some options for deploying your database alongside your application. Houssem shows a few different ways to deploy database changes, including a clever pre-production verification process for ensuring your production deployment will succeed. Database upgrades are often the scariest part of your deployment process, so having a robust check before getting to production is very important.

Overview of Managed Identities on Azure Government

In this episode of the Azure Government video series, Steve Michelotti talks with Mohit Dewan, of the Azure Government Engineering team, about Managed Identities on Azure Government. Whether you’re storing certificates, connection strings, keys, or any other secrets, Managed Identities is a valuable tool to have in your toolbox. Watch this video to see how quick and easy it is to get up and running with Managed Identities in Azure Government.

Thumbnail from the video, Overview of Managed Identities on Azure Government
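Under the hood, a VM with a managed identity can obtain a token from the local instance metadata endpoint with no stored credentials at all. A minimal sketch (the Azure Government resource URI below is an assumption for illustration; public Azure would use https://management.azure.com/):

import json
import urllib.request

# From inside an Azure VM with a managed identity enabled, a token is fetched
# from the local instance metadata service -- no credentials in code or config.
url = ("http://169.254.169.254/metadata/identity/oauth2/token"
       "?api-version=2018-02-01&resource=https://management.usgovcloudapi.net/")
req = urllib.request.Request(url, headers={"Metadata": "true"})

with urllib.request.urlopen(req) as resp:
    token = json.load(resp)["access_token"]
# 'token' is a bearer token usable against the resource it was issued for.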

Azure Tips and Tricks | How to create a container image with Docker

In this edition of Azure Tips and Tricks, learn how to create a container image to run applications with Docker. You’ll see how to create a folder inside a container and create a script to execute it.


Azure Tips and Tricks | How to manage multiple accounts, directories, and subscriptions in Azure

Discover how to easily manage multiple accounts, directories, and subscriptions in the Microsoft Azure portal. In this video, you'll learn how to log in to the portal and manage multiple accounts, establish the contexts between accounts and directories, and filter and scope the portal at a few different levels down to billable subscriptions.

Thumbnail from a video, How to manage multiple accounts, directories, and subscriptions in Azure

The Azure DevOps Podcast | Paul Hacker on DevOps Processes and Migrations - Episode 020

In this episode, Paul Hacker is joining the Azure DevOps Podcast to discuss DevOps processes and migrations. Paul has some really interesting perspectives on today’s topic and provides some valuable insights on patterns that are emerging in the space, steps to migrating to Azure DevOps, and common challenges (and how to overcome them). Listen to his insight on migrations, DevOps processes, and more.

Events

Microsoft Ignite | The Tour

Learn new ways to code, optimize your cloud infrastructure, and modernize your organization with deep technical training. Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Find a city near you and register today. In February, the tour visits London, Sydney, Hong Kong, and Washington, DC.

Graphic for Microsoft Ignite | The Tour

Customers, partners, and industries

Security for healthcare through cloud agents and virtual patching

For a healthcare organization, security and protection of data is a primary value, but solutions can be attacked from a variety of vectors such as malware, ransomware, and other exploits. The attack surface of an organization can be complex; email and web browsers are immediate targets of sophisticated hackers. One Microsoft Azure partner, XentIT (pronounced ex-ent-it), is devoted to protecting healthcare organizations despite the complexity of the attack surface. XentIT leverages two other security services with deep capabilities and adds its own expertise to create a dashboard-driven security solution that lets healthcare organizations better monitor and protect all assets.

Solution diagram showing the XentIT Cloud Security Stack for Azure - Standard

AI & IoT Insider Labs: Helping transform smallholder farming

Microsoft’s AI & IoT Insider Labs was created to help all types of organizations accelerate their digital transformation. Learn how AI & IoT Insider Labs is helping one partner, SunCulture, leverage new technology to provide solar-powered water pumping and irrigation systems for smallholder farmers in Kenya. SunCulture, a 2017 Airband Grant Fund winner, believed sustainable technology could make irrigation affordable enough that even the poorest farmers could use it without further aggravating water shortages. The company set out to build an IoT platform to support a pay-as-you-grow payment model that would make solar-powered precision irrigation financially accessible for smallholders across Kenya.


A Cloud Guru | Azure This Week - 25 January 2019

This time on Azure This Week, Lars talks about Azure Monitor logs for Grafana in public preview, the new Azure portal landing page, and why it is time to move on from Windows Server 2008.

Thumbnail image from A Cloud Guru's Azure This Week for 25 January 2019

Microsoft joins the SciKit-learn Consortium


As part of our ongoing commitment to open and interoperable artificial intelligence, Microsoft has joined the SciKit-learn consortium as a platinum member and released tools to enable increased usage of SciKit-learn pipelines.

Initially launched in 2007 by members of the Python scientific community, SciKit-learn has attracted a large community of active developers who have turned it into a first class, open source library used by many companies and individuals around the world for scenarios ranging from fraud detection to process optimization. Following SciKit-learn’s remarkable success, the SciKit-learn consortium was launched in September 2018 by Inria, the French national institute for research in computer science, to foster growth and sustainability of the library, employing central contributors to maintain high standards and develop new features. We are extremely supportive of what the SciKit-learn community has accomplished so far and want to see it continue to thrive and expand. By joining the newly formed SciKit-learn consortium, we will support central contributors to ensure that SciKit-learn remains a high-quality project while also tackling new features in conjunction with the fabulous community of users and developers.

In addition to supporting SciKit-learn development, we are committed to helping Scikit-learn users in training and production scenarios through our own services and open source projects. We released support for using SciKit-learn in inference scenarios through the high performance, cross platform ONNX Runtime. The SKlearn-ONNX converter exports common SciKit-learn pipelines directly to the ONNX-ML standard format. In doing so, these models can now be used on Linux, Windows, or Mac with ONNX Runtime for improved performance and portability.
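As a minimal sketch of that round trip (package and API names as published at the time of writing; the tiny pipeline below is purely illustrative):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as rt

X, y = load_iris(return_X_y=True)
pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Export the whole pipeline (scaler + model) to the ONNX-ML format.
onx = convert_sklearn(pipe, initial_types=[("input", FloatTensorType([None, 4]))])

# Score with ONNX Runtime -- no scikit-learn needed at inference time.
sess = rt.InferenceSession(onx.SerializeToString())
preds = sess.run(None, {"input": X[:3].astype(np.float32)})[0]
print(preds)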

We also provide strong support for SciKit-learn training in Azure Machine Learning. Using the service, you can take existing Scikit-learn training scripts and scale up your training to the cloud, automatically iterate through hyperparameters to tune your model, and log experiments with minimal effort. Furthermore, the automated machine learning capability can automatically generate the best SciKit-learn pipeline according to your training data and problem scenario.

At Microsoft we believe bringing AI advances to all developers, on any platform, using any language, in an open and interoperable AI ecosystem, will help ensure AI is more accessible and valuable to all. We are excited to be part of the SciKit-learn consortium and supporting a fantastic community of Scikit-learn developers and users.

Read Replicas for Azure Database for PostgreSQL now in Preview


This blog is co-authored by Parikshit Savjani, Senior Program Manager and Rachel Agyemang, Program Manager, Microsoft Azure.

We are excited to announce Public Preview availability of Read Replicas for Azure Database for PostgreSQL.

Azure Database for PostgreSQL now supports continuous, asynchronous replication of data from one Azure Database for PostgreSQL server (the “master”) to up to five Azure Database for PostgreSQL servers (the “read replicas”) in the same region. This allows read-heavy workloads to scale out horizontally and be balanced across replica servers according to users' preferences. Replica servers are read-only except for writes replicated from data changes on the master. Stopping replication to a replica server causes it to become a standalone server that accepts reads and writes.

Key features associated with this functionality are:

  • Turn-key addition and deletion of replicas.
  • Support for up to five read replicas in the same region.
  • The ability to stop replication to any replica to make it a stand-alone, read-write server.
  • The ability to monitor replication performance using two metrics, Replica Lag and Max lag across Replicas.

For more information and instructions on how to create and manage read replicas, see the read replica articles in the Azure Database for PostgreSQL documentation.

Below we share some of the application patterns and reference architectures that our customers and partners have implemented, leveraging read replicas to scale out their workloads.

Microservices Pattern with Read scale Replicas

In this architecture pattern, the application is broken into multiple microservices, with data modification APIs connecting to the master server and reporting APIs connecting to the read replicas. The data modification APIs are prefixed with “Set-” while reporting APIs are prefixed with “Get-”. A load balancer routes the traffic based on the API prefix.
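A minimal sketch of that routing rule expressed in application code, with hypothetical server names and credentials (in the pattern above, a load balancer does this at the network level instead):

import psycopg2

# "Set-" APIs write to the master; "Get-" APIs read from a replica.
MASTER_DSN  = "host=mypg.postgres.database.azure.com dbname=app user=admin@mypg password=<placeholder>"
REPLICA_DSN = "host=mypg-replica1.postgres.database.azure.com dbname=app user=admin@mypg-replica1 password=<placeholder>"

def handle_api(name, sql, params=()):
    """Route by API prefix: Set-* goes to the master, Get-* to a read replica."""
    dsn = MASTER_DSN if name.startswith("Set-") else REPLICA_DSN
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall() if name.startswith("Get-") else None

handle_api("Set-Order", "INSERT INTO orders(customer) VALUES (%s)", ("contoso",))
rows = handle_api("Get-Orders", "SELECT customer FROM orders")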


BI Reporting

For a BI reporting workload, data from disparate data sources is processed every few minutes and loaded into the master server. The master server is dedicated to loading and processing; it is not directly exposed to BI users for reporting or analytics, which ensures predictable performance. The reporting workload is scaled out across multiple read replicas to handle high user concurrency with low latency.


IoT scenario

In the IoT scenario, high-speed streaming data is first loaded into the master server, which acts as the persistent layer and is dedicated to high-speed data ingestion. The read replicas are leveraged for reporting and for downstream data processing to take data-driven actions.


We hope that you enjoy working with the latest features and functionality available in our Azure database service for PostgreSQL. Be sure to share your impressions via User Voice for PostgreSQL.

Additional resources for Azure Database for PostgreSQL

Development, source control, and CI/CD for Azure Stream Analytics jobs


Do you know how to develop and source control your Microsoft Azure Stream Analytics (ASA) jobs? Do you know how to set up automated processes to build, test, and deploy these jobs to multiple environments? Stream Analytics Visual Studio tools together with Azure Pipelines provide an integrated environment that helps you accomplish all these scenarios. This article will show you how, and point you to the right places to get started with these tools.

In the past it was difficult to use Azure Data Lake Store Gen1 as the output sink for ASA jobs, and to set up the related automated CI/CD process. This was because the OAuth model did not allow automated authentication for this kind of storage. The tools being released in January 2019 support Managed Identities for Azure Data Lake Storage Gen1 output sink and now enable this important scenario.

This article covers the end-to-end development and CI/CD process using Stream Analytics Visual Studio tools, Stream Analytics CI.CD NuGet package, and Azure Pipelines. Currently Visual Studio 2019, 2017, and 2015 are all supported. If you haven’t tried the tools, follow the installation instructions to get started!

Job development

Let’s get started by creating a job. Stream Analytics Visual Studio tools let you manage your jobs using a project. Each project consists of an ASA query script, a job configuration, and several input and output configurations. Query editing is very efficient when using all the IntelliSense features like error markers, auto completion, and syntax highlighting. For more details about how to author a job, see the quick start guide with Visual Studio.

Screenshot of Azure Stream Analytics query script

If you have existing jobs and want to develop them in Visual Studio or add source control, just export them to local projects first. You can do this from the server explorer context menu for a given ASA job. This feature can also be used to easily copy a job across regions without authoring everything from scratch.

Screenshot of exporting to new Stream Analytics project

Developing in Visual Studio also provides you with the best native authoring and debugging experience when you are writing JavaScript user defined functions in cloud jobs or C# user defined functions in Edge jobs.

Source control

When created as projects, the query and other artifacts sit on the local disk of your development computer. You can use Azure DevOps, formerly Visual Studio Team Services, for version control, or commit code directly to any repository you want. By doing this you can save different versions of the .asaql query as well as inputs, outputs, and job configurations, while easily reverting to previous versions when needed.

Screenshot of home screen on Team Explorer

Testing locally

During development, use local testing to iteratively run and fix code with local sample data or live streaming input data. Running locally starts the query in seconds and makes the testing cycle much shorter.

Testing in the cloud

Once the query works well on your local machine, it’s time to submit to the cloud for performance and scalability testing. Select “Submit to Azure” in the query editor to upload the query and start the job running. You can then view the job metrics and job flow diagram from within Visual Studio.

Screenshot of submitting query to Azure

Screenshot of Stream Analytics job summary

Set up CI/CD pipelines

When your query testing is complete, the next step is to setup your CI/CD pipelines for production environments. ASA jobs are deployed as Azure resources using Azure Resource Manager (ARM) templates. Each job is defined by an ARM template definition file and a parameter file.

There are two ways to generate the two files mentioned above:

  1. In Visual Studio, right click your project name and select “Build.”
  2. On an arbitrary build machine, install the Stream Analytics CI.CD NuGet package and run the “build” command (only supported on Windows at this time). This is needed for an automated build process.

Performing a “build” generates the two files under the “bin” folder and lets you save them wherever you want.

Screenshot of the deployment folder

The default values in the parameter file are the ones from the inputs/outputs job configuration files in your Visual Studio project. To deploy to multiple environments, use a simple PowerShell script to replace the values in the parameter file, generating a different version of the file for each target environment. In this way you can deploy into dev, test, and eventually production environments.

Example PowerShell script for parameter replacement
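The screenshot above shows the PowerShell version; for illustration, here is the same substitution step sketched in Python (the file names and parameter keys are hypothetical, not the tool's actual output names):

import json

# Load the parameter file that "build" generated under the bin folder.
with open("JobTemplate.parameters.json") as f:
    params = json.load(f)

# Per-environment overrides; keys must match parameters in the template.
overrides = {"Input_EventHub_ConsumerGroup": "prod", "StreamingUnits": 6}
for name, value in overrides.items():
    if name in params["parameters"]:
        params["parameters"][name]["value"] = value

# Write an environment-specific copy for the release pipeline to deploy.
with open("JobTemplate.parameters.prod.json", "w") as f:
    json.dump(params, f, indent=2)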

As stated above, the Stream Analytics CI.CD NuGet package can be used independently or in CI/CD systems such as Azure Pipelines to automate the build and test process of your Stream Analytics Visual Studio project. Check out “Continuously integrate and develop with Stream Analytics tools” and “Deploy Stream Analytics jobs with CI/CD using Azure Pipelines Tutorial” for more details.

Providing feedback and ideas

The Azure Stream Analytics team is highly committed to listening to your feedback. We welcome you to join the conversation and make your voice heard via our UserVoice. For tools feedback, you can also reach out to ASAToolsFeedback@microsoft.com.

Did you know we have more than ten new features in public preview? Sign up for our preview programs to try them out. Also, follow us on Twitter @AzureStreaming to stay updated on the latest features.

Azure Marketplace new offers – Volume 30


We continue to expand the Azure Marketplace ecosystem. From December 16 to December 31, 2018, 46 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

A10 vThunder ADC for Microsoft Azure

A10 vThunder ADC for Microsoft Azure: A10 Networks' vThunder for Microsoft Azure is purpose-built for high performance, flexibility, and easy-to-deploy application delivery and server load balancing. It's optimized to run natively within Azure.

Appzillon Consumer Banking

Appzillon Consumer Banking: Appzillon Consumer Banking from i-exceed offers users a convenient and simplified omnichannel approach to digital banking and ensures that user journeys are delightful and engaging.

IIS on Windows Server 2016

IIS on Windows Server 2016: Each version of Microsoft Windows Server brings a new version of Internet Information Services (IIS). With the recent release of Windows Server 2016 comes IIS version 10, also known as version 10.0.17763.

MySQL Server 8.0 On Windows 2016

MySQL Server 8.0 On Windows 2016: Quickly deploy MySQL server into your Azure tenant. MySQL provides you with a suite of tools to develop and manage MySQL-based business-critical applications on Windows.

RStudio Server Pro for Azure

RStudio Server Pro for Azure: RStudio Server Pro lets you access RStudio from anywhere using a web browser and delivers the productivity, security, centralized resource management, and metrics that professional data science teams need to develop in the R programming language.

SkyLIGHT PVX

SkyLIGHT PVX: Get the SkyLIGHT PVX virtual appliance by Accedian here in the Azure Marketplace.

Teradata Unity (IntelliSphere)

Teradata Unity (IntelliSphere): Teradata Unity (IntelliSphere) is key to Teradata's workload routing and synchronization, providing active-active database availability with near-zero recovery time objectives/recovery point objectives. This is a bring-your-own-license offering.

Toad Intelligence Central 4.3

Toad Intelligence Central 4.3: Improve productivity, collaboration, and data provisioning. Share all Toad artifacts – including entity relationship diagrams, query files, automation scripts, and SQL files – with other Toad users.

WhereScape RED automation software

WhereScape® RED automation software: WhereScape RED is an integrated development environment that provides teams with the automation to streamline workflows, eliminate coding by hand, and cut the time to develop, deploy, and operate data infrastructure.

WorkflowGen

WorkflowGen: Automate complex human-centric processes by leveraging the Azure ecosystem. WorkflowGen can enhance your software or application offerings with a high-performance competitive process automation component.

Web applications

Azure Sandbox as a Service

Azure Sandbox as a Service: The Sandbox enables your organization to provide access to cloud subscriptions on-demand that are isolated from your production environment. It's a great service for implementing proofs of concept, DevTest environments, or even hackathons.

eDocStore

eDocStore: The eDocStore is a centralized, scalable, structured file storage solution with enterprise content management (ECM)-grade interoperability based on open OASIS standard Content Management Interoperability Services (CMIS) v1.1.

HashiCorp Consul Enterprise

HashiCorp Consul Enterprise: The world is moving from static infrastructure to dynamic infrastructure. HashiCorp Consul is a distributed service networking layer to connect, secure, and configure applications across dynamic distributed infrastructure.

HashiCorp Vault Enterprise

HashiCorp Vault Enterprise: HashiCorp Vault protects sensitive data. It's designed to help security teams secure, store, and tightly control access to tokens, passwords, certificates, and encryption keys.

Identity Suite

Identity Suite: To meet the strict compliance requirements of the GDPR and of highly regulated industries, Omada offers a solution that governs access to privacy data and other sensitive information.

PowerERM HRMS APP

PowerERM HRMS APP: The PowerERM is a company-wide employee relationship management software package used to coordinate all employee functions from the time of hiring to separation, including recruitment and training functionalities.

RSA Authentication Manager 8.4

RSA® Authentication Manager 8.4: RSA Authentication Manager is the on-premises platform behind RSA SecurID Access that allows for centralized management of the environment, which includes authentication methods, users, applications, and agents across physical sites.

Secure File Exchange

Secure File Exchange: Use this solution template to simply and securely exchange files with your teammates, partners, and customers. All files are encrypted with a password in your storage account. You and the recipient get notifications with detailed download instructions.

SentinelDB

SentinelDB: SentinelDB is a cloud-based, privacy-by-design database that complies with GDPR and HIPAA regulations. It offers per-record encryption, a blockchain-backed audit trail, horizontal scalability, and zero maintenance for customers.

Teradata Unity with IntelliSphere

Teradata Unity with IntelliSphere: Teradata Unity with IntelliSphere is key enabling technology for synchronization and workload routing between Teradata Vantage systems.

WordPress with Azure Database for MariaDB

WordPress Multi-Tier: This solution uses a virtual machine for the application front end and Azure Database for MariaDB services for the application data. The Azure Database service provides automatic patching, automatic backups, and built-in monitoring and security.

Container solutions

Nginx Secured Ubuntu Container with Antivirus

Nginx Secured Ubuntu Container with Antivirus: Deploy an enterprise-ready container for Nginx on Ubuntu. Nginx can be deployed to serve dynamic HTTP content on a network using FastCGI, SCGI handlers for scripts, WSGI application servers, or Phusion Passenger modules.

Node 8 Secured Alpine Container with Antivirus

Node 8 Secured Alpine Container with Antivirus: Deploy an enterprise-ready container for Node 8 on Alpine. Node.js is an open-source, cross-platform JavaScript runtime environment for developing tools and applications.

Sestek SR REST Server

Sestek SR REST Server: You can use this REST API for speech recognition. Supported languages include Turkish, English, Spanish, French, Azerbaijani, Arabic, German, Russian, Urdu, Flemish, and Dutch. Note that this image contains only Turkish, English, and Arabic.

Consulting services

Azure Blockchain Workbench- 8-Week Implementation

Azure Blockchain Workbench: 8-Week Implementation: Azure Blockchain Workbench by Akvelon provides the infrastructure scaffolding for building end-to-end blockchain applications on top of the Microsoft Azure platform.

Azure Cloud Migration- 2 Day Assessment

Azure Cloud Migration: 2 Day Assessment: Take advantage of FMT Consultants' thorough Azure migration assessment. Gain valuable insights for what it will take to migrate your on-premises applications.

Azure Consulting- 2 Day Workshop

Azure Consulting: 2 Day Workshop: BUI’s Azure Discovery Workshop follows a defined and repeatable process to share the value of Microsoft Azure. BUI will showcase different Azure capabilities, considering security, identity, and more.

Azure Data Services Accelerator 4wk implementation

Azure Data Services Accelerator 4wk implementation: ANS accelerates your path to achieving your defined business outcome, as will be uncovered during the preassessment solution stage, utilizing services such as Azure Data Lake and Microsoft Power BI.

Azure eDiscovery Compliance 2 day workshop

Azure eDiscovery/Compliance 2 day workshop: This workshop by Controle is for legal and technical users and is conducted at the client's site. Participants will gain an understanding of eDiscovery capabilities available through Azure.

Azure Managed Svc plus ANS Glass 10wk implementation

Azure Managed Svc + ANS Glass 10wk implementation: Migrate to Azure and utilize new Azure services, such as IoT and AI, with the expertise of ANS cloud experts, underpinned by the governance, automation, and financial insights of ANS Glass.

Azure Migration Assessment- 1 Day Assessment

Azure Migration Assessment: 1 Day Assessment: Are you considering moving your infrastructure to Microsoft Azure? This one-day assessment by Executech will outline the time, effort, and cost required to make the shift and move to the cloud.

Cloud Backup Service- 10 Weeks Implementation

Cloud Backup Service: 10 Weeks Implementation: BUI's Cloud Backup managed services offering is an affordable, fully managed Cloud Backup-as-a-Service (CBaaS) solution. We eliminate your need for costly storage, backup management, and maintenance.

Cloud Data Platform- 3-Day Workshop

Cloud Data Platform: 3-Day Workshop: In this workshop, Siili Solutions Oyj will bring together design, data, and tech, outlining the benefits of Azure and unlocking the value of data through Microsoft Power BI visualization.

Cloud Discovery- 5 Days Workshop

Cloud Discovery: 5 Days Workshop: BUI's Cloud Discovery offering is an affordable and structured engagement, consisting of workshops to determine whether your people, environment, and systems are cloud-ready.

Cloud Executive Readiness- 3 Days Workshop

Cloud Executive Readiness: 3 Days Workshop: Meylah's Cloud Executive Readiness workshop will identify and review the platforms, processes, and resources required for a successful transition to the cloud.

Cloud Migration Discovery- 6-Wk Assessment

Cloud Migration Discovery: 6-Wk Assessment: Discover how the scale, flexibility, agility, and consumption-based pricing of cloud services can be used as part of a program to re-platform and re-architect your business systems.

Cloud Readiness Assessment- 4 Week Assessment

Cloud Readiness Assessment: 4 Week Assessment: Crimson Line’s assessment will provide a detailed report, giving you insight into project costs and complexities to enable you to make accurate cloud decisions.

Data Platform Modernisation - 4 week Assessment

Data Platform Modernisation - 4 week Assessment: This four-week architecture assessment by Ascent Technology helps management, line-of-business, and IT teams design their data platform deployment and identify potential areas of improvement.

Disaster Recovery Accelerator - 6wk Implementation

Disaster Recovery Accelerator - 6wk Implementation: ANS, in partnership with Microsoft, has developed a disaster recovery solution, the Azure Disaster Recovery Accelerator, designed to minimize disruption in the event of a failure or disaster.

DR - ASR Assessment Plan- 5-day assessment

DR / ASR Assessment Plan: 5-day assessment: Crimson Line will install software to assess a client’s on-premises environment. The outcome will provide a detailed Azure Site Recovery deployment planning report that can be used as part of the Crimson Line DRaaS offering.

Dynamic Customer Profile Creation- 4-week PoC

Dynamic Customer Profile Creation: 4-week PoC: This pilot is intended to help the client create a single enriched view of customers using transaction and interaction data by leveraging Azure data services and TheDataTeam's Cadence framework.

FREE 1-Hr Briefing- Azure Cloud Migration

FREE 1-Hr Briefing: Azure Cloud Migration: In this briefing, FMT Consultants will provide you with a report of our discussion, budgetary pricing for an assessment, and the next steps to kick off your cloud migration.

Microsoft Azure Readiness- 5-day Workshop

Microsoft Azure Readiness: 5-day Workshop: A team of experts at Opsgility will help guide your leadership and learning teams by creating and delivering a detailed learning plan and courses tailored to your organizational learning goals.

Oracle Migration to Azure- 1-Week Assessment (Canada)

Oracle Migration to Azure: 1-Week Assessment (Canada): Pythian can help you migrate from Oracle to the technologies for your needs, including Azure SQL Database, Azure SQL Data Warehouse, and SQL Server (cloud and on-premises). This offer is for customers in Canada.

Oracle Migration to Azure- 1-Week Assessment (UK)

Oracle Migration to Azure: 1-Week Assessment (UK): Pythian can help you migrate from Oracle to the technologies for your needs, including Azure SQL Database, Azure SQL Data Warehouse, and SQL Server (cloud and on-premises). This offer is for U.K. customers.

Oracle Migration to Azure- 1-Week Assessment (US)

Oracle Migration to Azure: 1-Week Assessment (US): Pythian can help you migrate from Oracle to the technologies for your needs, including Azure SQL Database, Azure SQL Data Warehouse, and SQL Server (cloud and on-premises). This offer is for U.S. customers.

Concurrency Code Analysis in Visual Studio 2019



The battle against concurrency bugs poses a serious challenge to C++ developers. The problem is exacerbated by the advent of multi-core and many-core architectures. To cope with the increasing complexity of multithreaded software, it is essential to employ better tools and processes to help developers adhere to proper locking discipline. In this blog post, we’ll walk through a completely rejuvenated Concurrency Code Analysis toolset we are shipping with Visual Studio 2019 Preview 2.

A perilous landscape

The most popular concurrent programming paradigm in use today is based on threads and locks. Many developers in the industry have invested heavily in multithreaded software. Unfortunately, developing and maintaining multithreaded software is difficult due to a lack of mature multi-threaded software development kits.

Developers routinely face a dilemma in how to use synchronization. Too much may result in deadlocks and sluggish performance. Too little may lead to race conditions and data inconsistency. Worse yet, the Win32 threading model introduces additional pitfalls. Unlike in managed code, lock acquire and lock release operations are not required to be syntactically scoped in C/C++. Applications therefore are vulnerable to mismatched locking errors. Some Win32 APIs have subtle synchronization side effects. For example, the popular “SendMessage” call may induce hangs among UI threads if not used carefully. Because concurrency errors are intermittent, they are among the hardest to catch during testing. When encountered, they are difficult to reproduce and diagnose. Therefore, it is highly beneficial to apply effective multi-threading programming guidelines and tools as early as possible in the engineering process.

Importance of locking disciplines

Because one generally cannot control all corner cases induced by thread interleaving, it is essential to adhere to certain locking disciplines when writing multithreaded programs. For example, following a lock order while acquiring multiple locks or using std::lock() consistently helps to avoid deadlocks; acquiring the proper guarding lock before accessing a shared resource helps to prevent race conditions. However, these seemingly simple locking rules are surprisingly hard to follow in practice.

A fundamental limitation in today’s programming languages is that they do not directly support specifications for concurrency requirements. Programmers can only rely on informal documentation to express their intention regarding lock usage. Thus, developers have a clear need for pragmatic tools that help them confidently apply locking rules.

Concurrency Toolset

To address the deficiency of C/C++ in concurrency support, we shipped an initial version of the concurrency analyzer back in Visual Studio 2012, with promising initial results. This tool had a basic understanding of the most common Win32 locking APIs and concurrency related annotations.

In Visual Studio 2019 Preview 2, we are excited to announce a completely rejuvenated set of concurrency checks to meet the needs of modern C++ programmers. The toolset comprises a local intra-procedural lock analyzer with built-in understanding of common Win32 locking primitives and APIs, RAII locking patterns, and STL locks.

Getting started

The concurrency checks are integrated as part of the code analysis toolset in Visual Studio. The default “Microsoft Native Recommended Ruleset” for the project comprises the following rules from the concurrency analyzer. This means that whenever you run code analysis in your project, these checks are automatically executed. These checks are also automatically executed as part of the background code analysis runs for your project. For each rule, you can click on the link to learn more about the rule and its enforcement with clear examples.

  • C26100: Race condition. Variable should be protected by a lock.
  • C26101: Failing to use interlocked operation properly for a variable.
  • C26110: Caller failing to acquire a lock before calling a function which expects the lock to be acquired prior to being called.
  • C26111: Caller failing to release a lock before calling a function which expects the lock to be released prior to being called.
  • C26112: Caller cannot hold any lock before calling a function which expects no locks be held prior to being called.
  • C26115: Failing to release a lock in a function. This introduces an orphaned lock.
  • C26116: Failing to acquire a lock in a function, which is expected to acquire the lock.
  • C26117: Releasing an unheld lock in a function.
  • C26140: Undefined lock kind specified on the lock.

If you want to try out the full set of rules from this checker, there’s a ruleset just for that: right-click on Project > Properties > Code Analysis > General > Rulesets, and select “Concurrency Check Rules”.

Screenshot of the Code Analysis properties page that shows the ConcurrencyCheck Rules ruleset selected.

You can learn more about each rule enforced by the checker by searching for rule numbers in the ranges C26100 – C26199 in our Code Analysis for C/C++ warning document.

Concurrency toolset in action

The initial version of concurrency toolset was capable of finding concurrency related issues like race conditions, locking side effects, and potential deadlocks in mostly C-like code.

The tool had in-built understanding of standard Win32 locking APIs. For custom functions with locking side effects, the tool understood a number of concurrency related annotations. These annotations allowed the programmer to express locking behavior. Here are some examples of concurrency related annotations.

  • _Acquires_lock_(lock): Function acquires the lock object “lock”.
  • _Releases_lock_(lock): Function releases the lock object “lock”.
  • _Requires_lock_held_(lock): Lock object “lock” must be acquired before entering this function.
  • _Guarded_by_(lock) data: “data” must always be protected by lock object “lock”.
  • _Post_same_lock_(lock1, lock2): “lock1” and “lock2” are aliases.

For a complete set of concurrency related annotations, please see this article Annotating Locking Behavior.

The rejuvenated version of the toolset builds on the strengths of the initial version by extending its analysis capabilities to modern C++ code. For example, it now understands STL locks and RAII patterns without having to add any annotations.

Now that we have talked about how the checker works and how you can enable it in your project, let’s look at some real-world examples.

Example 1

Can you spot an issue with this code?¹

struct RequestProcessor {
    CRITICAL_SECTION cs_;
    std::map<int, Request*> cache_;

    bool HandleRequest(int Id, Request* request) {
        EnterCriticalSection(&cs_);
        if (cache_.find(Id) != cache_.end())
            return false;
        cache_[Id] = request;
        LeaveCriticalSection(&cs_);
        return true;
    }
    void DumpRequestStatistics() {
        for (auto& r : cache_)
            std::cout << "name: " << r.second->name << std::endl;
    }
};

¹ If you have seen the talk given by Anna Gringauze at CppCon 2018, this code may seem familiar to you.

Let’s summarize what’s going on here:

  1. In function HandleRequest, we acquire the lock cs_ on line 6. However, we return early on line 8 without ever releasing the lock.
  2. In function HandleRequest, we see that cache_ access must be protected by the lock cs_. However, in a different function, DumpRequestStatistics, we access cache_ without acquiring any lock.

If you run code analysis on this example, you’ll get a warning in method HandleRequest, where it will complain about the leaked critical section (issue #1):

This shows the leaked critical section warning from the concurrency analyzer.

Next, if you add the _Guarded_by_ annotation on the field cache_ and select ruleset “Concurrency Check Rules”, you’ll get an additional warning in method DumpRequestStatistics for the possible race condition (issue #2):

This shows the potential race condition warning from the concurrency analyzer.

Example 2

Let’s look at a more modern example. Can you spot an issue with this code?¹

struct RequestProcessor2 {
    std::mutex m_;
    std::map<int, Request*> cache_;

    void HandleRequest(int Id, Request* request) {
        std::lock_guard grab(m_);
        if (cache_.find(Id) != cache_.end())
            return;
        cache_[Id] = request;
    }
    void DumpRequestStatistics() {
        for (auto& r : cache_)
            std::cout << "name: " << r.second->name << std::endl;
    }
};

As expected, we don’t get any warning in HandleRequest in the above implementation using std::lock_guard. However, we still get a warning in the DumpRequestStatistics function:

This shows the potential race condition warning from the concurrency analyzer.

There are a couple of interesting things going on behind the scenes here. First, the checker understands the locking nature of std::mutex. Second, it understands that std::lock_guard holds the mutex and releases it during destruction when its scope ends.

This example demonstrates some of the capabilities of the rejuvenated concurrency checker and its understanding of STL locks and RAII patterns.
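
If we wanted the remaining warning to go away as well, a minimal fix (again our own sketch, not from the original post) is to guard the iteration with the same mutex:

void DumpRequestStatistics() {
    std::lock_guard grab(m_);  // RAII: the mutex is released automatically at scope exit
    for (auto& r : cache_)
        std::cout << "name: " << r.second->name << std::endl;
}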

Give us feedback

We’d love to hear about your experience of using the new concurrency checks. Remember to switch to the “Concurrency Check Rules” ruleset for your project to explore the full capabilities of the toolset. If there are specific concurrency patterns you’d like us to detect at compile time, please let us know.

If you have suggestions or problems with this check — or any Visual Studio feature — either Report a Problem or post on Developer Community. We’re also on Twitter at @VisualC.


Enhanced in Visual Studio 2019: Search for Objects and Properties in the Watch, Autos, and Locals Windows


Are you inspecting many variables at once in the Locals window? Tired of constantly scrolling through the Watch window to locate the object you are currently interested in? New to Visual Studio 2019 for most languages (with some exclusions such as Xamarin, Unity, and SQL), you can now find your variables and their properties faster using the new search feature found in the Watch, Autos, and Locals windows!

With the new search feature, you will be able to highlight and navigate to specified values contained within the name, value, and type columns of each watch window.

Find your keywords faster using search and highlighting

If you are a fan of scrolling to the items you want, highlighting will help you find what you want more easily. As you start typing in the search bar, matches that are currently expanded on screen are highlighted, giving you a faster alternative to performing a full search.

Navigate between your specified keywords quickly

You can execute a search query using ENTER or the right and left arrow icons (“find next” (F3) and “find previous” (Shift+F3), respectively) shown below. If you are not a fan of scrolling, you can also click the arrows to navigate through each match that is found. We based the search navigation on a depth-first search model, meaning that matches are found by diving as far into the selected variable as specified before looking for matches within the next variable. You don’t have to sit through the full search if you don’t want to: a search can be cleared or canceled at any time, whether it is still ongoing or not.

Search for items deeply nested in your code

Unable to find what you’re looking for on your initial search? We’ve provided a “Search Depth” drop-down to find matches nested up to a chosen number of levels deep into your objects, where levels are defined as in a tree data structure. This option gives you the power to choose how thoroughly you want to search inside your objects (up to 10 levels), letting you decide how long or short the search process takes.

When you are searching for items that are already expanded and visible on your screen, these items will always be returned as matches no matter what search depth you have specified. Having to loop back to an item after passing it can be a pain, so setting the search depth to 1 lets you navigate back to previous matches using the “find previous” arrow icon.

Excited to start searching in the Watch, Autos, and Locals windows? Let us know in the comments!

For any issues or suggestions, please let us know via Help > Send Feedback > Report a Problem in the IDE. If you have any additional feedback about this feature, feel free to complete this brief survey.

Leslie Richardson, Program Manager, Visual Studio Debugging & Diagnostics
@lyrichardson01

Leslie is a Program Manager on the Visual Studio Debugging and Diagnostics team, focusing primarily on improving the overall debugging experience and feature set.

Location, location, location!


When it comes to building contextually relevant applications, it’s all about location. By enabling functionality tied to location, an application can move from being just a convenience to a necessity! Your app can move to the “must have” category by helping with real world tasks, answering questions like “Where should I buy a house?”, “Is this a good area for our new store location?” and more.

The Bing Maps team has been hard at work releasing three new REST APIs that bring the power of location-based search to maps scenarios – Bing Maps Location Recognition, Bing Maps Local Search API, and Bing Maps Local Insights API. Let’s go into detail about each of these APIs and see examples of how they can be used to light up new, location-related possibilities.

Location Recognition API

Get details about points of interest at a specific location with Bing Maps Location Recognition API.

Bing Maps Location Recognition API

Much like an in-car mapping solution, when given location coordinates (latitude/longitude), the Location Recognition API returns a comprehensive description of the location along with points of interest. Your application can easily receive the data in the form of a JSON or XML response. The response includes:

  • Business entities at the location, with a wide range of types supported (e.g., restaurants, hotels, parks, gyms, shopping malls, and more)
  • Natural entities at the location (e.g., beaches, islands, lakes, and 9 more)
  • Reverse geocoded address of the location
  • Type of property at the location (e.g., residential, commercial, etc.)

The Bing Maps Location Recognition API helps answer questions like, "What are the businesses and points of interest near a real estate property that I am interested in buying?" or "What is the address associated with a given latitude/longitude? Is it a private residence? What neighborhood am I in?"

For more details about the API parameters, entity types and examples, check out the documentation.

Local Search API

Need to know what businesses are nearby? As an area-based search by business name, category, or free text, the Local Search API could be the answer.

Bing is well known for its vast local business data, aggregated from providers around the world, and Bing Maps now exposes that rich, location-based data through this API. With the Bing Maps Local Search API, you can leverage the same dataset that powers local search on bing.com within your own applications and services.

Bing Maps Local Search API

With responses supported in JSON and XML, the output can be used for retail planning and site selection, but more commonly for finding a business that offers a specific service. For example, “Where is the closest vegan restaurant?”

For more details about the API parameters and examples, check out the documentation.

Local Insights API

The Bing Maps Local Insights API provides insights into businesses and entities within a given area that is reachable by driving, walking, or public transit within a given time or distance.

Local Insights can help score the attractiveness of a location based on the proximity to points of interest, making it easy for you to determine how close you are to things that are important to you. Find out how many restaurants, bars, movie theaters, parks or other kinds of places are nearby and also take into consideration the predicted traffic at a specific time of day. For example, “How many schools are within walking distance from this location?” or “How many restaurants are within a 10 minute drive at 5pm?”

Bing Maps Local Insights

For more details about the API parameters and examples, check out the documentation.

Note: Both Local Search API and Local Insights API are available in the United States with plans to make them more broadly available in the near future.

We are excited to see what you build with these new APIs. To learn more about the full suite of Microsoft Mapping options, Bing Maps APIs and how to get licensed, go to https://www.microsoft.com/maps.

- Bing Maps Team

New Code Analysis Checks in Visual Studio 2019: use-after-move and coroutine


Visual Studio 2019 Preview 2 is an exciting release for the C++ code analysis team. In this release, we shipped a new set of experimental rules that help you catch bugs in your codebase, namely: use-after-move and coroutine checks. This article provides an overview of the new rules and how you can enable them in your project.

Use-after-move check

C++11 introduced move semantics to help write performant code by replacing some expensive copy operations with cheaper move operations. With the new capabilities of the language, however, we have new ways to make mistakes. It’s important to have the tools to help find and fix these errors.

To understand what these errors are, let’s look at the following code example:

MyType m;
consume(std::move(m));
m.method();

Calling consume will move the internal representation of m. According to the standard, the move constructor must leave m in a valid state so it can be safely destroyed. However, we can’t rely on what that state is. We shouldn’t call any methods on m that have preconditions, but we can safely reassign m, since the assignment operator does not have a precondition on the left-hand side. Therefore, the code above is likely to contain latent bugs. The use-after-move check is intended to find exactly such code: cases where we use a moved-from object in a possibly unintended way.

There are several interesting things happening in the above example:

  • std::move does not actually move m; it only casts m to an rvalue reference. The actual move happens inside the function consume.
  • The analysis is not inter-procedural, so we will flag the code above even if consume is not actually moving m. This is intentional, since we shouldn’t be using rvalue references when moving is not involved – it’s plain confusing. We recommend rewriting such code in a cleaner way.
  • The check is path sensitive, so it will follow the flow of execution and avoid warning on code like the one below.
    Y y;
    if (condition)
      consume(std::move(y));
    if (!condition)
      y.method();

In our analysis, we track what is happening to the objects; the sketch after this list illustrates the first two rules.

  • If we reassign a moved-from object it is no longer moved from.
  • Calling a clear function on a container will also cleanse the “moved-from”ness from the container.
  • We even understand what “swap” does, and the code example below works as intended:
    Y y1, y2;
    consume(std::move(y1));
    std::swap(y1, y2);
    y1.method();   // No warning, this is a valid object due to the swap above.
    y2.method();   // Warning, y2 is moved-from.
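
Here is a short sketch of our own illustrating the first two rules, assuming consume takes an rvalue reference (as in the earlier examples) and use is a hypothetical function taking a const reference:

std::string s1 = "hello", s2 = "world";

consume(std::move(s1));
s1 = "hello again";   // reassignment: s1 is no longer considered moved-from
use(s1);              // OK, no warning

consume(std::move(s2));
s2.clear();           // clear() cleanses the moved-from state of a container
use(s2);              // OK, no warning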

Coroutine related checks

Coroutines are not standardized yet, but they are well on track to become standard. They are generalizations of procedures and provide a useful tool for dealing with some concurrency-related problems.

In C++, we need to think about the lifetimes of our objects. While this can be a challenging problem on its own, in concurrent programs, it becomes even harder.

The code example below is error prone. Can you spot the problem?

std::future<void> async_coro(int &counter)
{
  Data d = co_await get_data();
  ++counter;
}

This code is safe on its own, but it’s extremely easy to misuse. Let’s look at a potential caller of this function:

int c;
async_coro(c);

The source of the problem is that async_coro is suspended when get_data is called. While it is suspended, the flow of control will return to the caller and the lifetime of the variable c will end. By the time async_coro is resumed the argument reference will point to dangling memory.

To solve this problem, we should either take the argument by value or allocate the integer on the heap and use a shared pointer so its lifetime will not end too early.
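
For instance, a sketch of the shared-pointer variant might look like this (our own illustration, reusing the article’s Data and get_data):

std::future<void> async_coro_safe(std::shared_ptr<int> counter)
{
  Data d = co_await get_data();
  ++*counter;  // the shared_ptr keeps the integer alive across the suspension
}

// Caller side: the shared ownership outlives the suspension point.
auto c = std::make_shared<int>(0);
async_coro_safe(c);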

A slightly modified version of the code is safe, and we will not warn:

std::future<void> async_coro(int &counter)
{
  ++counter;
  Data d = co_await get_data();
}

Here, we only use the counter before suspending the coroutine, so there are no lifetime issues in this code. While we don’t warn for the above snippet, we recommend against writing clever code that relies on this behavior, since it is more prone to errors as the code evolves: someone might introduce a new use of the argument after the coroutine is suspended.

Let’s look at a more involved example:

int x = 5;
auto bad = [x]() -> std::future<void> {
  co_await coroutine();
  printf("%d\n", x);
};
bad();
In the code above, we capture a variable by value. However, the closure object which contains the captured variable is allocated on the stack. When we call the lambda bad, it will eventually be suspended. At that time, the control flow will return to the caller and the lifetime of captured x will end. By the time the body of the lambda is resumed, the closure object is already gone. Usually, it’s error prone to use captures and coroutines together. We will warn for such usages.
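
One way out (our own sketch) is to pass the value as a coroutine parameter instead of capturing it; parameters are copied into the coroutine frame, which outlives the caller’s stack frame:

int x = 5;
auto good = [](int arg) -> std::future<void> {
  co_await coroutine();
  printf("%d\n", arg);  // arg lives in the coroutine frame, not in the destroyed closure
};
good(x);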

Since coroutines are not part of the standard yet, the semantics of these examples might change in the future. However, the currently implemented version in both Clang and MSVC follows the model described above.

Finally, consider the following code:

generator<int> mutex_acquiring_generator(std::mutex& m) {
  std::lock_guard grab(m);
  co_yield 1;
}

In this snippet, we yield a value while holding a lock. Yielding a value will suspend the coroutine. We can’t be sure how long the coroutine will remain suspended. There’s a chance we will hold the lock for a very long time. To have good performance and avoid deadlocks, we want to keep our critical sections short. We will warn for the code above to help with potential concurrency related problems.
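
A safer shape (again a sketch of our own) keeps the critical section short by finishing the protected work before the suspension point:

generator<int> mutex_acquiring_generator_safe(std::mutex& m) {
  int value;
  {
    std::lock_guard grab(m);
    value = 1;  // do the protected work inside a short critical section
  }             // the lock is released here, before the coroutine suspends
  co_yield value;
}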

Enabling the new checks in the IDE

Now that we have talked about the new checks, it’s time to see them in action. The section below provides step-by-step instructions for enabling the new checks in your project in Preview 2 builds.

To enable these checks, we go through two basic steps. First, we select the appropriate ruleset and second, we run code analysis on our file/project.

Use-after-move check

  1. Select: Project > Properties > Code Analysis > General > C++ Core Check Experimental Rules.
    Screenshot of the Code Analysis properties page that shows the C++ Core Check Experimental Rules ruleset selected.
  2. Run code analysis on the source code by right clicking on File > Analyze > Run code analysis on file.
  3. Observe warning C26800 in the code snippet below:
    Screenshot showing "use of a moved from object" warning

Coroutine related checks

  1. Select: Project > Properties > Code Analysis > General > Concurrency Rules.
    Screenshot of the Code Analysis properties page that shows the Concurrency Rules ruleset selected.
  2. Run code analysis on the source code by right clicking on File > Analyze > Run code analysis on file.
  3. Observe warning C26810 in the code snippet below:
    Screenshot showing a warning that the lifetime of a captured variable might end before a coroutine is resumed.
  4. Observe warning C26811 in the code snippet below:
    Screenshot showing a warning that the lifetime of a variable might end before the coroutine is resumed.
  5. Observe warning C26138 in the code snippet below:
    Screenshot showing a warning that we are suspending a coroutine while holding a lock.

Wrap Up

We’d love to hear about your experience of using these new checks in your codebase, and also what sorts of checks you’d like to see from us in future releases of VS. If you have suggestions or problems with these checks — or any Visual Studio feature — either Report a Problem or post on Developer Community and let us know. We’re also on Twitter at @VisualC.

Help us plan the future of .NET and Big Data


We’re currently looking into how we can make .NET great for Big Data scenarios.   

Please fill out the survey below and help shape how we can improve .NET for Big Data by sharing your experiences, challenges, and needs. It should take 10 minutes or less to complete!

Take the survey now! 

Thanks, 
.NET Team  

Announcing .NET Framework 4.8 Early Access Build 3734


We are getting closer to the final version now! This release includes several accessibility, performance, and reliability fixes across the major framework libraries. We will continue to stabilize this release and take more fixes over the coming weeks, and we would greatly appreciate it if you could help us ensure Build 3734 is a high-quality release by trying it out and providing feedback on the new features via the .NET Framework Early Access GitHub repository.

Supported Windows Client versions: Windows 10 version 1809, Windows 10 version 1803, Windows 10 version 1709, Windows 10 version 1703, Windows 10 version 1607, Windows 8.1, Windows 7 SP1

Supported Windows Server versions: Windows Server 2019, Windows Server version 1803, Windows Server version 1709, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2 SP1

This build includes an updated .NET 4.8 runtime as well as the .NET 4.8 Developer Pack (a single package that bundles the .NET Framework 4.8 runtime, the .NET 4.8 Targeting Pack and the .NET Framework 4.8 SDK). Please note: this build is not supported for production use.

Next steps:
To explore the new build, download the .NET 4.8 Developer Pack. If you want to try just the .NET 4.8 runtime instead, you can download it on its own.

You can check out the fixes included in this preview build, or, if you would like to see the complete list of improvements in 4.8 so far, go here.

.NET Framework build 3734 is also included in the next update for Windows 10. You can sign up for Windows Insiders to validate that your applications work great on the latest .NET Framework included in the latest Windows 10 releases.

Thanks!

Debug your live apps running in Azure Virtual Machines and Azure Kubernetes


We are excited to announce that, in our Visual Studio Enterprise 2019 preview, we are expanding Snapshot Debugger support beyond Azure App Services hosting ASP.NET Core and ASP.NET applications to now also include Azure Virtual Machines (VM), Azure Virtual Machine scale sets (VMSS), and Azure Kubernetes Services (AKS)!

When Visual Studio 2017 Enterprise 15.5 became generally available, we introduced the Snapshot Debugger, an innovative diagnostic tool that enables you to quickly and accurately evaluate problems in your Azure production environments without stopping the process and with minimal performance impact.

When an unanticipated issue occurs in production, it can be difficult to replicate the exact conditions in your testing environment and almost impossible to do so on your local development machine. You might consider asking your DevOps team to “turn up” production logging, but this relies on you having already anticipated where issues might occur prior to deployment. You may also request that a process dump be taken, but that requires perfect timing and some luck to capture the most important details, and you must also gauge how your collection strategy might negatively impact performance.

The Snapshot Debugger provides a familiar and powerful debugging experience, allowing developers to set Snappoints and Logpoints in code, similar to debugger breakpoints and tracepoints. When a Snappoint is hit in your production environment, a snapshot is dynamically created without stopping the process. Developers can then attach to these snapshots using Visual Studio and see what’s going on with variables, Locals, Watches and Call Stack windows, all this while the live site continues to serve your customers.

Azure Virtual Machines/Azure Virtual Machine scale sets

For most PaaS scenarios, Azure App Service is more than capable of encapsulating a complete end-to-end experience. However, for developers and organizations that require greater control over their platform and environment, VMs remain a critical option, and Snapshot Debugger supports them in the latest preview of Visual Studio.

Once your VM/VMSS has been set up to host your ASP.NET or ASP.NET Core web application you can open your project in Visual Studio 2019, and click on the “Debug->Attach to Snapshot Debugger…” menu item, where you will now be able to select VM/VMSS as shown.

The UI experience remains almost identical but now you will be required to select an Azure Storage account to collect your snapshot logs and to share the snapshot collection plan (App Services will also require Azure Storage in Preview 2).

Selecting the “Install Remote Debugger Extension” option will prompt Visual Studio to install the extensions in Azure, which is necessary to view snapshots. This process also opens a specific set of ports (30398, 31398, 31399, 32398) to facilitate communication to your local machine; these ports are not required for retrieving and viewing logpoints.

Azure Kubernetes Services (AKS)

Azure provides an incredible cross platform experience and our debugging and diagnostics tools now provide feature parity in our Kubernetes service offerings.

Before attempting to use any of the Snapshot Debugger features in AKS it is vital that your Docker images include ASP.NET Core 2.2+ installed in a global location, as well as the correctly configured Snapshot Debugger and the requisite environment variables.

To help you enable support for Snapshot Debugger in AKS we have provided a repo containing a set of Dockerfiles that demonstrate the setup on Docker images. We support three variants of Linux (Debian, Alpine and Ubuntu) and they are organized according to the ASP.NET Core version, the OS platform, and the platform architecture.

For example, the ASP.NET Core 2.2 Debian 9 (Stretch) x64 Dockerfile is located at /2.2/stretch-slim/amd64/Dockerfile. This Dockerfile produces an image with Debian 9 x64 as the base and the ASP.NET Core 2.2 Runtime; it includes the latest supported Snapshot Debugger backend package and sets the environment variables to load the debugger into your .NET Core application.

Try the preview

The latest Snapshot Debugger experiences are now in preview, download and try it out here.

This preview supports the following scenarios:

  • Azure App Services on the Windows OS running ASP.NET Core (2.0+) or ASP.NET (4.6.1+).
  • Virtual Machines on the Windows OS running ASP.NET Core (2.0+) or ASP.NET (4.6.1+).
  • Azure Kubernetes Services (Linux Docker Containers) running ASP.NET Core (2.2+).

If you have any issues using Snapshot Debugger, please review this guide on Troubleshooting and known issues for snapshot debugging in Visual Studio.

We would love to hear your feedback. To report issues, use the Report a Problem tool in Visual Studio. You’ll be able to track your issues on the Visual Studio Developer Community site where you can also ask questions and find answers.

Mark Downie, Program Manager, Visual Studio Diagnostics
@poppastring

Mark is a program manager on the Visual Studio Diagnostics team, working on Snapshot Debugger.

Azure Security Center can detect emerging vulnerabilities in Linux


Recently, a new flaw was discovered in PolKit, a component which controls system-wide privileges in Unix-like operating systems. This vulnerability potentially allows an unprivileged account to gain root permissions. In this blog post, we will focus on the recent vulnerability, demonstrate how an attacker can easily abuse and weaponize it, and present how Azure Security Center can help you detect related threats and provide recommendations for mitigation.

The PolKit vulnerability

PolKit (previously known as PolicyKit) is a component that provides a centralized way to define and handle policies and that controls system-wide privileges in Unix-like operating systems. The vulnerability, CVE-2018-19788, was caused by improper validation of permission requests. It allows a non-privileged user with a user ID greater than the maximum integer value to successfully execute arbitrary code under the root context.

The vulnerability exists in PolKit versions earlier than 0.115, which come pre-installed on some of the most popular Linux distributions. A patch has been released, but it requires manual installation via the relevant package manager.
You can check whether your machine is vulnerable by running the command “pkttyagent --version” and verifying that your PolKit version is not one of the vulnerable ones.

How an attacker can exploit this vulnerability to gain access to your environment

We are going to demonstrate a simple exploitation inspired by a previously published proof of concept (POC). The exploit shows how an attacker could leverage this vulnerability to achieve privilege escalation and access restricted files. For this demonstration, we will use one of the most popular Linux distributions today.

First, we verify that we are on a vulnerable machine by checking the PolKit version. Then, we verify that the user ID is greater than the maximum integer value.

Code verification that a user ID is greater than the maximal integer value screenshot

Now that we know we are on a vulnerable machine, we can leverage this flaw by using another pre-installed tool, systemctl, which uses PolKit as its permission policy enforcer and has the ability to execute arbitrary code. If you take a closer look at CVE-2018-19788, you will find that systemctl is impacted by the vulnerability. Systemctl is one of the systemd utilities, and systemd is the system manager that is becoming the new foundation for building with Linux.

Using systemctl, we will be able to create a new service in order to execute our malicious command in the root context. Because of the flaw in PolKit, we can bypass the permission checks and run systemctl operations. Let’s take a look at how we can do that.

Bash script content:

#!/bin/bash
# Write a new systemd service unit whose start command copies the
# protected sudoers file into a world-readable location.
cat <<EOF >> /tmp/polKitVuln.service
[Unit]
Description= Abusing PolKit Vulnerability
[Service]
ExecStart=/bin/bash -c 'cat /etc/sudoers > /tmp/sudoersList.txt'
Restart=on-failure
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
Alias= polKitVuln.service
EOF

# PolKit fails to validate our (too large) user ID, so these privileged
# systemctl operations succeed from an unprivileged account.
systemctl enable /tmp/polKitVuln.service
systemctl start polKitVuln.service

First, we define a new service, writing the required unit definition to “/tmp/polKitVuln.service”. The ExecStart directive contains our command, which accesses the sudoers file and copies its content to a shared folder that can be accessed by unprivileged users. The sudoers file is one of the most important files in the system, as it contains the user and group privilege information for the machine. In the last part of the script, we make the actual calls to the systemctl tool to create and start our new service.

Execute the script:

Screenshot of code showing errors regarding Polkit failing to handle uid field

Notice the errors regarding PolKit failing to handle the uid field. Since the sudoers file has been copied by the exploit, we can now read its content.

Screenshot of code proving Sudoers file is copied using the exploitation

With this vulnerability, attackers can bypass permission checks and gain root access to your environment. In another blog post, “How Azure Security Center helps detect attacks against your Linux machines,” we showed how attackers can exploit hosts to install crypto miners or attack other resources.

Protect against and respond to threats with Azure Security Center

Azure Security Center can help detect threats, such as the PolKit vulnerability, and help you quickly mitigate these risks. Azure Security Center consolidates your security alerts into a single dashboard, making it easier for you to see the threats in your environment and prioritize your response to threats. Each alert gives you a detailed description of the incident as well as steps on how to remediate the issue.

While investigating the impact on hosts monitored by Azure Security Center, we were able to determine how frequently machines come under attack and, using behavioral detection techniques, inform customers when they have been attacked. Below is the security alert, based on our previous activity, as you would see it in Security Center.

Screenshot of security alert in Azure Security Center

In addition, Azure Security Center provides a set of steps that enable customers to quickly remediate the problem:

  • System administrators should not allow negative user IDs or user IDs greater than 2147483646.
    • Verify the user ID maximum and minimum values under “/etc/login.defs.”
  • Upgrade your PolicyKit package via the package manager as soon as possible.

Get started with Azure Security Center

Start using Azure Security Center’s Standard Tier for free today.


Azure Site Recovery: Disaster Recovery as a Service (DRaaS) for Azure, by Azure


This blog was co-authored by Sujay Talasila, Senior Program Manager, Cloud + Enterprise.

Microsoft Azure is the first public cloud to offer a native disaster recovery (DR) solution for applications running on IaaS virtual machines (VMs). Six months ago, we announced the general availability of DR for Azure VMs using Azure Site Recovery (ASR). Since then, we have been heavily invested in ensuring that the customer experience of using this DR capability is nothing less than the best. As a service used by thousands of customers, two key principles guide our decisions about which features and updates to invest in.

  • Continue to stand as a fully integrated offering.

Customers can enable the DR capability and carry out all related operations. These operations include testing the DR drill or performing the actual failover, all from the Azure portal with minimal clicks. This also translates to ensuring that ASR is updated with support for the newest Azure features as they’re released.

  • Make continued improvements and democratize DR.

We are constantly working to improve the Site Recovery service, absorbing the complexities of setting up DR into the service so that, as a customer, you need to make minimal decisions. This involves continuously listening to feedback and making enhancements that help customers scale effortlessly while supporting different types of workloads. We also adhere to a once-a-month service update rhythm so that customers can start using the latest features as soon as possible. Any improvements needed in the service are likewise made available as soon as we can.

    We want to start this year with a recap of all the new capabilities that we enabled in the last few months that customers have loved.

    • Support for zone pinned Azure VMs: For IaaS applications running on Azure VMs, you can build high availability into your business continuity strategy by deploying multiple VMs across multiple zones within a region. As announced in December 2018, customers can replicate and failover zone pinned virtual machines to other regions within a geographic cluster using ASR. This new capability is generally available in all regions supporting Availability Zones. Along with Availability Sets and Availability Zones, ASR completes the resiliency continuum for applications running on Azure VMs.

    • DR of Azure Disk Encryption-enabled VMs: We now support DR for Azure Disk Encryption-enabled VMs to safeguard data according to your company’s security and compliance needs. You can replicate VMs, enabled for encryption through the Azure Active Directory app, from one Azure region to another region. For more details, see the Azure Service Update, “Disaster recovery for Azure Disk Encryption–enabled virtual machines.”

    • Support for firewall-enabled storage accounts: Support for firewall-enabled storage accounts was recently enabled. You can replicate VMs with unmanaged disks on firewall-enabled storage accounts to another Azure region for disaster recovery scenarios. You can also select firewall-enabled storage accounts in a target region as target storage accounts for unmanaged disks. You can restrict access to the cache storage account by allowing only the source Azure VM's virtual network to write to it. When you're using firewall-enabled storage accounts, ensure that you enable the “allow trusted Microsoft services” exception.

    • Accelerated networking support: Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. ASR enables you to utilize the benefits of accelerated networking, for Azure VMs that are failed over to a different Azure region. The documentation, “Accelerated Networking with Azure virtual machine disaster recovery,” describes how you can enable accelerated networking for Azure VMs replicated with ASR.

    • Automatic Site Recovery extension updates: One of the biggest hassles faced by administrators using cloud services is the need to catch up with the latest release and stay up to date. This involves downloading the latest software and performing frequent upgrades. In an enterprise scenario, it means going through various levels of approval and wait time before you can make any changes, and it becomes even more cumbersome if downtime is involved. With ASR automatic updates, you no longer need to plan for deploying new versions with every release; automatic updates do not require a reboot of your Azure VMs, nor do they affect ongoing replication. Enable ASR automatic updates to ensure your VMs get the latest updates as soon as they are released.

    • Support for Standard SSD Disks: The Azure Standard SSD Disks are a cost-effective storage option optimized for workloads that need consistent performance at lower IOPS level. Standard SSD disks deliver better availability and latency compared to HDD Disks. ASR supports replication and failover of Azure VMs using Standard SSD disks to another region. By default, ASR retains the original disk type in the target region. The customer can choose a different disk type while configuring disaster recovery.

    • Support for Linux OS: ASR supports a wide variety of Linux OS versions. We added support for the below Linux flavors in the last six months.

    • ASR started supporting Red Hat Enterprise Linux (RHEL) and CentOS versions 7.5, 6.10, and 7.6 in July 2018, August 2018, and January 2019, respectively.

    • Support for SUSE Linux Enterprise Server 12 (up to SP3) was added in July 2018.

    • ASR started supporting CentOS 6.10 from August 2018 onward.

    • The latest versions of Oracle Enterprise Linux (6.8, 6.9, and 7.0 to 7.5, plus UEK Release 5) were added for support in November 2018, followed by OEL versions 6.10 and 7.6 in January 2019.

    • For Ubuntu, SUSE Linux Enterprise Server 12, and Debian OS versions, we release frequent updates to certify and support the latest kernel versions and avoid any breaking changes with the ASR mobility service extension. We certify support for the latest kernels within 15 to 30 days of their release by the Linux distribution vendor.

    • Protect Azure VMs using Storage Spaces Direct: You can now use ASR to protect IaaS applications that use Storage Spaces Direct (S2D) for high availability. Storage Spaces Direct and ASR together provide comprehensive protection of your IaaS workload on Azure. S2D lets you host a guest cluster on Microsoft Azure, which is especially useful in scenarios where a VM hosts a critical application such as an SAP ASCS layer, SQL Server, or a scale-out file server.
    • Pricing calculator for Azure Virtual Machine DR: You can use the sample cost calculator for estimating DR costs for your applications running Azure VMs. To see how the pricing would change for your particular use case, change the appropriate variables to estimate the cost. You can key in the number of VMs for the ASR license cost. You can use the number of managed disks, along with type, and the total data change rate expected across all the VMs to get the estimated storage costs in DR region. Additionally, you can use the total data change rate in a month after applying the compression factor of 0.4 to get the bandwidth costs incurred for transferring data between regions. For more details, refer to the related blog post, “Know exactly how much it will cost for enabling DR to your Azure VMs.”

    While we already have a bunch of new exciting features planned for the next few months, do let us know how we can make your DR experiences even better via User Voice.

    Azure natively provides high availability and reliability for your mission-critical workloads, and you can choose to level up your protection and meet compliance requirements using the disaster recovery capabilities provided by ASR. Getting started with Azure Site Recovery is easy – simply check out the pricing information, and sign up for a free Azure trial. You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers.

    Related links and additional content

    Disaster Recovery support for Linux on VMware


    Over the last five years, a gradual shift has been observed toward open source environments, driven by a number of advantages over boxed commercial software. Factors such as lower cost, flexibility, security, performance, and community support for open source operating systems, primarily Linux distros, have largely been driving this shift across organizations. Microsoft has embraced this industry trend and has continuously worked hand in hand with providers to contribute to and strengthen the community. All major Linux platform providers have also been shipping frequent release upgrades, assuring developers of continued support. With the ever-increasing adoption of Linux worldwide, a large number of organizations are moving their mission-critical workloads to Linux-based server machines.

    Azure Site Recovery (ASR) has always supported disaster recovery for all major Linux server versions on VMware and/or physical machines. Over the last six months, it has also kept a keen focus on extending support to the latest OS version releases from multiple providers such as Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu, Debian, SUSE, and Oracle.

    • ASR started supporting RHEL 7.5, 6.10, and 7.6 in July 2018, August 2018, and January 2019, respectively.
    • Support for SUSE Linux Enterprise Server 12 (up to SP3) was added in July 2018 after the success of SP2 and SP3 releases and wide usage for critical workloads.
    • ASR started supporting CentOS 6.10 from August 2018 onward.
    • The latest versions of Oracle Enterprise Linux (6.8, 6.9, and 7.0 to 7.5, plus UEK Release 5) were added for support in November 2018, followed by OEL versions 6.10 and 7.6 in January 2019.

    In addition to the above release updates from providers, Linux file systems and partitioning methods have also been enhanced. ASR has been watching these enhancements and their industry adoption on VMware and physical Linux machines.

    • In 2018, a large number of implementations moved to the GUID Partition Table (GPT), which allows a nearly unlimited number of partitions and stores multiple copies of boot data, making the system more robust. ASR started supporting the GPT partition style in legacy BIOS compatibility mode in August 2018.
    • Custom usage of Linux has also evolved a variety of system structures. Some specific scenarios include having /boot on a disk partition (and on LVM volumes), and having the /(root), /boot, /usr, /usr/local, /var, and /etc directories on separate file systems and on separate partitions that are not on the same system disk. ASR added support for these customizations in November 2018.

    The timeline below captures the Linux support extended by ASR since July 2018 for VMware and physical machines.

    Linux support extended by Azure Site Recovery since July 2018


    Related Links and additional content
