
Because it’s Friday: Bombshell


I've been getting a lot more into podcasts lately, and one of my favorites (other than Not So Standard Deviations, of course), is Bombshell. Hosted by Radha Iyengar Plumb, Loren DeJonge Schulman, and Erin Simpson, it's a whip-smart, approachable and entertaining look at current events in foreign policy, national security and military affairs. Now, those aren't topics I typically immerse myself in, but between the in-depth knowledge of the hosts and their choice of guests, I'm finding a different perspective that makes it a fascinating subject area. (Interesting side note: the podcast recently entered its second year, and it wasn't until the 1-year anniversary episode that the first male guests appeared on the show.) 

Also, as a statistician, I love that the standard questions asked of guests include "What's your favorite statistical distribution?" and "What's your favorite use of statistics or data?"

Bombshell

If you haven't listened before, the December 19, 2017 episode is a great place to start (other than going all the way back to the first episode). It introduces the three hosts, and is an interesting and educational review of the events of 2017. It's also a live episode, with questions from the audience at the Maxwell Air Force Base.

New Bombshell episodes are released every 2 weeks, and you can listen online or via your preferred podcast app. As for us, that's all from the blog here until next week. Have a great weekend!


Upgrading my podcast site to ASP.NET Core 2.1 in Azure plus some Best Practices


I am continuing to upgrade the podcast's site. Today I upgraded it to .NET Core 2.1, continuing the work from last week's upgrade off "Web Matrix WebPages." I upgraded to actually running ASP.NET Core 2.1's preview in Azure by following this blog post.

Pro Tip: Be aware that you can get up to 10x faster local builds while still keeping your site's runtime at 2.0 to lower risk. So there's little reason not to download the .NET Core 2.1 Preview and test your build speeds.

At this point the podcast site is live in Azure at https://hanselminutes.com. Now that I've moved off of the (very old) site, I've quickly set up some best practices in just a few hours. I should have taken the time to upgrade this site - and its "devops" - a long time ago.

Here's a few things I was able to get done just this evening while the boys did homework. Each of these tasks took between 5 and 15 minutes. So not a big investment, but they represented real value I'd been wanting to add to the site.

Git Deploy for Production

The podcast site's code now lives in GitHub and deployment to production is a git push to master.

Deploying from GitHub

A "deployment slot" for staging

Some people like to have the master branch be Production, then they make a branch called Staging for a secondary staging site. Since Azure App Services (WebSites) has "deployment slots," I chose to do it differently. I deploy to Production from GitHub, sure, but I prefer to push manually to staging rather than litter my commits with little stuff (and clean them up or squash commits later - it's just my preference).

I hooked up Git Deployment, but the git repo is in Azure and just for deploy. Then "git remote add azure ..." so when I want to deploy to staging it's:

git push staging

I use it for testing, so yes, it could have been called test/dev, etc., but you get the idea. Plus the Deployment Slot/Staging Site is free as it's on the same Azure App Service Plan.

A more sophisticated - but just as easy - plan would be to push to staging, get it perfect then do a "hot swap" with a single button click.

Deployment Slots can have their own independent settings if you click "Slot Setting." Here I've set ASPNETCORE_ENVIRONMENT to "Staging" for this slot, while the main one is "Production."

Staging Slots in Azure

The ASP.NET Core runtime picks up that environment variable and I can conditionally run code based on Environment. I run as "Development" on my local machine. For example:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Error");
}

Don't let Google Index the Staging Site - No Robots

You should be careful to not let Google/Bing/DuckDuckGo index your staging site if it's public. Since I have an environment set on my slot, I can just add this Meta Robots element to the site's main layout. Note also that I use minified CSS when I'm not in Development.

<environment include="Development">
    <link rel="stylesheet" href="~/css/site.css" />
</environment>
<environment exclude="Development">
    <link rel="stylesheet" href="~/css/site.min.css" />
</environment>
<environment include="Staging">
    <meta name="robots" content="noindex, follow">
</environment>

Require SSL

Making the whole ASP.NET Core site use SSL has been on my list as well. I added my SSL certs in the Azure Portal, then added RequireHttps in my Startup.cs pretty easily.

I could have also added it to the existing IISRewriteUrls.xml legacy file, but this was easier and faster.

var options = new RewriteOptions().AddRedirectToHttps();
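For context, here's a minimal sketch (not the site's actual Startup.cs) of how a RewriteOptions like this typically gets wired into the pipeline in Configure, assuming the Microsoft.AspNetCore.Rewrite package:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
    // Redirect any plain HTTP request to HTTPS before the rest of the pipeline runs.
    var options = new RewriteOptions().AddRedirectToHttps();
    app.UseRewriter(options);

    // ... static files, MVC/Razor Pages, etc. follow here ...
}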

Here's how I'd do it via the IIS Rewrite middleware, FYI:

<rule name="HTTP to HTTPS redirect" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
       <add input="{HTTPS}" pattern="off" ignoreCase="true" />
    </conditions>
    <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>

Application Insights for ASP.NET Core

Next post I'll talk about Application Insights. I was able to set it up both client- and server-side and get a TON of info in about 15 minutes.

Application Insights

How are you?


Sponsor: Unleash a faster Python! Supercharge your application's performance on future-forward Intel platforms with The Intel Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!



Microsoft and Azure at the Game Developers Conference


Start Your Visit at the Azure Booth

We invite you to come to the South Hall Lower Lobby of the Moscone Center and stop by our dedicated Azure booth to learn how you can create a gaming empire by building on the gaming cloud that has powered Xbox for years.

The Azure booth will have a wide range of activities for game developers:

  • Come talk to experts about Azure, PlayFab, Visual Studio, App Center, Mixer, Xbox Live, Mixed Reality and more.
  • Visit four stations to claim a custom, free Xbox controller, and enter to win an Xbox One X daily.
  • Play our mixed reality game Pinball Lizard and take home the sample source code – and get the high score of the day to win an Xbox One X.
  • Play Darwin Project, a 10-player battle royale that offers audience interaction through Mixer, with a shoutcaster moderating and streaming the action live.
  • Attend one of the many in-depth theater sessions or watch one-on-one interviews with product experts across a variety of topics, directly in the booth.

The entire Azure expo experience will show you what’s possible, while giving you the tools and code on how to build it.

Continue Your Learning of Our Cloud Offerings at the PlayFab Booth

Building on our efforts in the gaming cloud with Azure, in January we welcomed PlayFab to the growing slate of cloud offerings. PlayFab is a backend platform for games, delivering powerful real-time tools and services for LiveOps. With Azure and PlayFab services you concentrate on making a great game, not creating the backend. “You make it fun, we’ll make it run” we like to say.

We will also host three Azure sponsored sessions at GDC that show how Next Games, Fluffy Fairy, and others build to scale with Azure and PlayFab. Come ask about our special offer of $2,500 worth of PlayFab, App Center and Azure services that are free for a year.

To learn more about Microsoft at GDC, visit the Xbox Wire.

Azure cloud data and AI services training roundup


Looking to transform your business by improving your on-premises environments, accelerating your move to the cloud, and gaining transformative insights from your data? Here’s your opportunity to learn from the experts and ask the questions that help your organization move forward.

Join us for one or all of these training sessions to take a deep dive into a variety of topics, including products like Azure Cosmos DB, along with Microsoft innovations in artificial intelligence, advanced analytics, and big data.

Azure Cosmos DB

Engineering experts are leading a seven-part training series on Azure Cosmos DB, complete with interactive Q&As. In addition to a high-level technical deep dive, this series covers a wide array of topics, including:

By the end of this series, you’ll be able to build serverless applications and conduct real-time analytics using Azure Cosmos DB, Azure Functions, and Spark. Register to attend the whole Azure Cosmos DB series, or register for the sessions that interest you.

Artificial Intelligence (AI)

Learn to create the next generation of applications spanning an intelligent cloud as well as an intelligent edge powered by AI. Microsoft offers a comprehensive set of flexible AI services for any scenario and enterprise grade AI infrastructure that runs AI workloads anywhere at scale. Modern AI tools designed for developers and data scientists help you create AI solutions easily, with maximum productivity.

Unlock deeper learning with the new Microsoft Cognitive Toolkit

Data is powerful, but navigating it can be slow, unreliable, and overly complex. Join us to learn about the Microsoft Cognitive Toolkit which offers deep learning capabilities that allow you to enable intelligence within massive datasets. In this session, you’ll learn:

  • What’s new with the Microsoft Cognitive Toolkit.
  • How to maximize the programming languages and algorithms you already use.
  • Cognitive Toolkit features, including support for ONNX, C#/.NET API, and model simplification/compression.

Filter content at scale with Cognitive Services' Content Moderator

Learn how Azure Cognitive Services' Content Moderator filters out offensive and unwanted content from text, images, and videos at scale. By combining intelligent machine assisted technology with an intuitive human review system, Content Moderator enables quick and reliable content scanning. In this session, you’ll learn:

  • Content Moderator platform basics.
  • How to use the machine learning-based APIs.
  • How to easily integrate human review tools with just a few lines of code.

Advanced analytics and big data

Data volumes are exploding. Deliver better experiences and make better decisions by analyzing massive amounts of data in real time. By including diverse datasets from the start, you’ll make more informed decisions that are predictive and holistic rather than reactive and disconnected. 

Accelerate innovation with Microsoft Azure Databricks

Learn how your organization can accelerate data-driven innovation with Azure Databricks, a fast, easy-to-use, and collaborative Apache Spark based analytics platform. Designed in collaboration with the creators of Apache Spark, it combines the best of Databricks and Azure to help you accelerate innovation with one-click set up, streamlined workflows, and an interactive workspace that enables collaboration among data scientists, data engineers, and business analysts. In this session, you’ll learn how to:

  • Use Databricks Notebooks to unify your processes and instantly deploy to production.
  • Launch your new Spark environment with a single click.
  • Integrate effortlessly with a wide variety of data stores.
  • Improve and scale your analytics with a high-performance processing engine optimized for the comprehensive, trusted Azure platform.

Interested in training on other Azure related topics? Take a look at the wide variety of live and on-demand training sessions now available on Azure.com.

New isolated VM sizes now available


Today we are pleased to announce two new Virtual Machine (VM) sizes, E64i_v3 and E64is_v3, which are isolated to hardware and dedicated to a single customer. These VMs are best suited for workloads that require a high degree of isolation from other customers for compliance and regulatory requirements. You can also choose to further subdivide the resources by using Azure support for nested VMs.

The E64i_v3 and E64is_v3 will have the exact same performance and pricing structure as their cousins E64_v3 and E64s_v3. These size additions will be available in each of the regions where E64_v3 and E64s_v3 are available today. The small letter ‘i’ in the VM name denotes that they are isolated sizes.

Unlike the E64_v3 and E64s_v3, the two new sizes E64i_v3 and E64is_v3 are hardware bound sizes. They will live and operate on our Intel® Xeon® Processor E5-2673 v4 2.3GHz hardware only and will be available until at least December 2021. We will provide reminders 12 months in advance of the official decommissioning of these sizes and will offer updated isolated sizes on our next hardware version.

These two new E64i_v3 and E64is_v3 sizes will be available in the on-demand portal. Starting on May 1st, 2018 they will also be made available for purchase as one-year Reserved VM Instances.

The E64i_v3 and E64is_v3 are joining our other Isolated VM sizes in the Azure family:

We encourage you to use these VMs for workloads needing a high degree of isolation.

Five webinars to catch you up on hybrid cloud


If you need to get up to speed on how to work best in a hybrid environment, we’ve collected five of our most-viewed webinars from the past year for your binge-watching pleasure. These on-demand sessions cover a variety of topics essential to individuals who find themselves needing to work on a mixed on-premises and cloud strategy:

1. Delivering innovation with the right hybrid cloud solution

If you don’t yet have a plan to take control of your mixed on-premises and cloud infrastructure, simplifying each user’s identity to a single credential can help you easily manage and secure services and create a data platform that simplifies security.

The on-demand delivering innovation with the right hybrid cloud solution webinar covers how to:

  • Consolidate identities and create a consistent data infrastructure.
  • Unify development with help from Azure Stack application patterns.
  • Select and use the best data platform, no matter your cloud model.

2. Migrating to a hybrid cloud environment: An insider’s walkthrough of 3 key methods

Your company has created a cloud strategy, or maybe you are playing catch-up to an employee-driven move to the cloud. Completing that move in an orderly way is a top priority.

Migrating to a hybrid cloud environment: An insider’s walkthrough of 3 key methods explains how to:

  • Move your sites to the cloud with Azure Site Recovery.
  • Extend individual applications into the cloud using Azure Active Directory.
  • Use containers and Windows Server 2016 to run your applications in the cloud.

3. Strengthen your security—starting at the operating system

Most businesses will get compromised at some point. Preventing a compromise from turning into a breach depends on quick detection and response while slowing down the attacker. One way to do this is to make each hop through your network more difficult by hardening the operating systems.

Windows Server 2016 has a large number of security features, which when used appropriately can hobble attackers’ ability to move throughout your network. In the strengthen your security—starting at the operating system webinar, we’ve covered how to:

  • Help harden the most vulnerable systems to slow attackers’ advance through your network.
  • Add more protection to privileged identities with just-in-time and just enough administration.
  • Help protect your Windows systems with tools such as Control Flow Guard and Device Guard.

4. Protect corporate information on-premises and in the cloud

Passwords have a lot of weaknesses. Workers often use insecure passwords, reuse them on multiple services, and are vulnerable to phishing, all putting your company at risk. Phishing, brute-force password attacks, and password-spray attacks all can pose a threat to your business.

With the cloud comes a number of solutions for replacing or strengthening your identity and access infrastructure. In the protect corporate information on-premises and in the cloud webinar, you will learn how to:

  • Ditch passwords for technologies that offer more security and more manageability.
  • Protect access to services that require passwords with additional security.
  • Set up Extranet Lockout to help secure against brute-force password attacks.

5. Take your cloud security to the next level: How to protect yourself from cloud attacks

Clearly, security is top-of-mind for our hybrid cloud audiences. In another of our most popular webinar sessions, take your cloud security to the next level: How to protect yourself from cloud attacks, you will learn how to:

  • Gain visibility into your security posture.
  • Turn analytics from virtual machines, databases, and cloud networks into actionable intelligence.
  • Gain greater ability to detect and respond with the Azure Security Center.

New machine-assisted text classification on Content Moderator now in public preview


This blog post is co-authored by Ashish Jhanwar, Data Scientist, Microsoft

Content Moderator is part of Microsoft Cognitive Services, allowing businesses to use machine-assisted moderation of text, images, and videos that augments human review.

The text moderation capability now includes a new machine learning-based text classification feature, which uses a trained model to identify possibly abusive, derogatory, or discriminatory language (such as slang, abbreviated words, and offensive or intentionally misspelled words) for review.

In contrast to the existing text moderation service that flags profanity terms, the text classification feature helps detect potentially undesired content that may be deemed inappropriate depending on context. In addition to conveying the likelihood of each category, it may recommend a human review of the content.

The text classification feature is in preview and supports the English language.

How to use

Content Moderator consists of a set of REST APIs. The text moderation API adds an additional request parameter in the form of classify=True. If you specify the parameter as true, and the auto-detected language of your input text is English, the API will output the additional classification insights as shown in the following sections.

If you specify the language as English for non-English text, the API assumes the language as English, and outputs the additional insights, but they may not be relevant or useful.

The following code sample shows how to invoke the new feature by using the text moderation API.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

namespace TextClassifier
{
    class Program
    {
        //Content Moderator Key, API endpoint, and the new parameter
        public const string CONTENTMODERATOR_APIKEY = "YOUR API KEY";
        public const string APIURI = "https://[REGIONNAME].api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen";
        public const string CLASSIFYPARAMETER = "classify=True";

        static void Main(string[] args)
        {
            string ResponseJSON;
            string Message = "This is crap!";

            HttpClient client = new HttpClient();
            client.BaseAddress = new Uri(APIURI);

            string FullUri = APIURI + "?" + CLASSIFYPARAMETER;

            // Add an Accept header.
            client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("text/plain"));

            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", CONTENTMODERATOR_APIKEY);

            HttpResponseMessage response = null;
            response = client.PostAsync(FullUri, new StringContent(
                                   Message, System.Text.Encoding.UTF8, "text/plain")).Result;

            if (response.StatusCode == System.Net.HttpStatusCode.OK)
            {
                Console.WriteLine("Message insights:");
                Console.WriteLine();
                ResponseJSON = response.Content.ReadAsStringAsync().Result;
                Console.Write(ResponseJSON);
                Console.ReadKey();
            }
        }
    }
}

Sample response

If you run the preceding sample console application, the resulting output shows the following classification insights. The ReviewRecommended value is set to true because the score for a classification was greater than the internal thresholds. Customers can use either the ReviewRecommended flag or custom thresholds based on their content policies to determine when content is flagged for human review. The scores are in the range from 0 to 1.

 "Classification": {
    "ReviewRecommended": true,
    "Category1": { "Score": 0.0746903046965599 },
    "Category2": { "Score": 0.23644307255744934 },
    "Category3": { "Score": 0.98799997568130493 }
  }

Explanation of the response

  • Category1: Represents the potential presence of language that may be considered sexually explicit or adult in certain situations.
  • Category2: Represents the potential presence of language that may be considered sexually suggestive or mature in certain situations.
  • Category3: Represents the potential presence of language that may be considered offensive in certain situations.
  • Score: The score range is between 0 and 1. The higher the score, the more likely the model considers the category to apply. This preview relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
  • ReviewRecommended: ReviewRecommended is either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies, as in the sketch below.
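For illustration only, here is a minimal sketch of applying custom per-category thresholds to the Classification section of the response JSON shown above. It assumes Json.NET (Newtonsoft.Json) is available for parsing, and the threshold values are hypothetical placeholders, not recommendations:

using Newtonsoft.Json.Linq;

static class ClassificationPolicy
{
    // Hypothetical thresholds for illustration; tune these to your own content policy.
    const double Category1Threshold = 0.5;  // sexually explicit
    const double Category2Threshold = 0.5;  // sexually suggestive
    const double Category3Threshold = 0.8;  // offensive

    public static bool NeedsHumanReview(string responseJson)
    {
        JObject response = JObject.Parse(responseJson);
        JToken classification = response["Classification"];

        // No classification block (e.g. classify=True not set, or language not English).
        if (classification == null) return false;

        double category1 = (double)classification["Category1"]["Score"];
        double category2 = (double)classification["Category2"]["Score"];
        double category3 = (double)classification["Category3"]["Score"];

        return category1 > Category1Threshold
            || category2 > Category2Threshold
            || category3 > Category3Threshold;
    }
}

You could call this with the ResponseJSON string from the sample above and route flagged items to the human review tool.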

Benefits of machine-assisted text moderation

The text classification feature is powered by a blend of advanced machine learning and Natural Language Processing (NLP) techniques. It is designed to work in different text domains such as chats, comments, and paragraphs.

Businesses use the text moderation service to either block, approve or review the content based on their policies and thresholds. The text moderation service can be used to augment human moderation of environments that require partners, employees and consumers to generate text content. These include chat rooms, discussion boards, chatbots, eCommerce catalogs, documents, and more.

Next steps

Sign up for Content Moderator by using either the Azure portal or the Content Moderator human review tool. Get the API key and your region as explained in the Credentials article.

Use the text moderation API console to test drive the capability online. Get started on your integration by either using the REST API samples or the .NET SDK samples.

Last Week in Azure #22: Week of March 5


Last week, it became even more evident that Azure is the best place for all of your applications and data with numerous announcements about investments that dramatically expand the choice and ROI of moving your SQL Server and open source applications to Azure. But these investments aren't just about Azure. They deliver deeper platform consistency across on-premises and cloud with a rich open source application framework and database support, and expanded cost-savings for Microsoft customers.

Now in preview

Making Azure the best place for all your applications and data - SQL Server customers can now try the preview for SQL Database Managed Instance (see: Migrate your databases to a fully managed service with Azure SQL Database Managed Instance), the Azure Hybrid Benefit for SQL Server, the Azure Database Migration Service preview for Managed Instance, and the preview for Apache Tomcat® support in Azure App Service. This post also forecasts the general availability of Azure Database for MySQL and PostgreSQL in the coming weeks, making it even easier to bring your open source powered applications to Azure.

Introducing SQL Information Protection for Azure SQL Database and on-premises SQL Server! - SQL Information Protection (SQL IP) introduces a set of advanced services and new SQL capabilities, forming a new information protection paradigm in SQL aimed at protecting the data, not just the database. Data Discovery & Classification (currently in preview) provides advanced capabilities built into Azure SQL Database for discovering, classifying, labeling & protecting the sensitive data in your databases. Discovering and classifying your most sensitive data (business, financial, healthcare, PII, etc.) can play a pivotal role in your organizational information protection stature.

Faster Metric Alerts for Logs now in limited public preview - The new Metric Alerts for Logs capability, now in limited public preview, brings down the time it takes to generate a log alert to less than 5 minutes. The Metric Alerts on Logs preview currently supports the following log types on OMS Log Analytics: heartbeat, perf counters (including those from SCOM), and update.

Public preview of Java on App Service, built-in support for Tomcat and OpenJDK - The public preview of Java apps on App Service includes built-in support for Apache Tomcat 8.5/9.0 and OpenJDK 8, making it easy for Java developers to deploy web or API apps to Azure. Just bring your .jar or .war file to Azure App Service and we’ll take care of the capacity provisioning, server maintenance, and load balancing.

Visibility into network activity with Traffic Analytics - now in public preview - Traffic Analytics is a cloud-based solution that provides visibility into user and application traffic on your cloud networks. Traffic Analytics analyzes NSG Flow Logs across Azure regions and equips you with actionable information to optimize workload performance, secure applications and data, audit your organization’s network activity and stay compliant.

Azure SQL Database now offers zone redundant Premium databases and elastic pools - SQL Database now offers built-in support of Availability Zones in its Premium service tier. By placing individual database replicas in different availability zones, it makes Premium databases resilient to a much larger set of failures, including catastrophic datacenter outages. The built-in support of Availability Zones further enhances the High Availability (HA) solutions in Azure SQL Database. The preview of zone redundant Premium databases and pools is currently available in Central US, West Europe, and France Central, with additional regions added over time.

Now generally available

Update management, inventory, and change tracking in Azure Automation now generally available - Azure Automation delivers a cloud-based automation and configuration service that provides consistent management across your Azure and non-Azure environments. It consists of process automation, update management, and configuration features. Azure Automation provides complete control during deployment, operations, and decommissioning of workloads and resources.

Announcing Storage Service Encryption with customer managed keys general availability - Storage Service Encryption with customer managed keys uses Azure Key Vault that provides highly available and scalable secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated Hardware Security Modules (HSMs). Key Vault streamlines the key management process and enables customers to maintain full control of keys used to encrypt data, manage, and audit their key usage.

Just-in-Time VM Access is generally available - Azure Security Center provides several threat prevention mechanisms to help you reduce surface areas susceptible to attack. One of those mechanisms is Just-in-Time (JIT) VM Access, which reduces your exposure to network volumetric attacks by enabling you to deny persistent access while providing controlled access to VMs when needed.

NCv3 VMs generally available, other GPUs expanding regions - NCv3 brings NVIDIA’s latest GPU – the Tesla V100 – to our best-in-class HPC, machine learning, and AI products to bring huge amounts of value across a variety of industries. The NCv3 virtual machines are now generally available in the US East region. We'll be adding NCv3 to EU West and US South Central later this month, AP Southeast in April, and UK South & IN Central in May. We’re also expanding our NV and ND series VMs into additional regions.

News & updates

Get Reserved Instance purchase recommendations based on usage pattern - If you have Virtual Machines (VMs) running in Azure, you can take advantage of discounted pricing on Reserved Instances (RIs) and pre-pay for your Virtual Machines. The Microsoft Consumption recommendation APIs look at your usage for seven, 30, or 60 days and recommend optimum configurations of Reserved Instances. They calculate the cost you would pay without RIs and the cost you will pay with RIs, optimizing your savings.

Build Spring Boot 2.0 apps with Azure Starters and new VSCode extensions - the Spring Boot Starters for Azure now include support for Spring Boot 2.0, which is already available on Spring Initializr. Plus, with new Java and Spring extensions for Visual Studio Code you can build production-ready apps and easily deploy them to the cloud. Visual Studio Code is a free, open source editor for macOS, Linux and Windows.

Microsoft and Esri launch Geospatial AI on Azure - Microsoft and Esri now offer the GeoAI Data Science Virtual Machine (DSVM) as part of our Data Science Virtual Machine/Deep Learning Virtual Machine family of products on Azure. This is a result of a collaboration between the two companies and will bring AI, cloud technology and infrastructure, geospatial analytics and visualization together to help create more powerful and intelligent applications.

Azure Data Lake tools for VS Code now supports job view and job monitoring - Azure Data Lake Tools for Visual Studio Code now includes job monitoring and job view, which enables you to perform real-time monitoring for the jobs you submit. You can also view job summary and job details for historical jobs as well as download any of the input or output data and resources files associated with the job.

Microsoft releases automation for HIPAA/HITRUST compliance - The Azure Security and Compliance Blueprint - HIPAA/HITRUST Health Data and AI offers a turn-key deployment of an Azure PaaS solution to demonstrate how to securely ingest, store, analyze, and interact with health data while being able to meet industry compliance requirements. The blueprint helps accelerate cloud adoption and utilization for customers with data that is regulated.

New app usage monitoring capabilities in Application Insights - Recent improvements to the usage analytics tools in Application Insights can help your team better understand overall usage, dive deep into the impact of performance on customer experience, and give more visibility into user flows. Learn more about the new Impact tool that analyzes performance impact and looks for correlations between any property or measurement in your telemetry and conversion rates. And see how the User Flows tool can analyze what users did before they visited some page or custom event in your site, in addition to what they did afterward.

What’s brewing in Visual Studio Team Services: March 2018 Digest - Buck Hodges provides a comprehensive overview of what's new in Visual Studio Team Services, which releases new features every three weeks to help teams in all areas of their Azure workflow, for apps written in any language and deployed to any OS.

Azure Government – technology innovation shaping the future - At the Microsoft Government Tech Summit in Washington D.C. last week, we announced Azure Government Secret regions for classified data, new and unique hybrid approaches to IT modernization with Azure Stack, and growing connectivity options with new ExpressRoute locations. We’re also releasing new capabilities to accelerate developer and IT productivity and efficiency with DevTest Labs, Logic Apps and Power BI embedded.

Gartner reaffirms Microsoft as a leader in Data Management Solutions for Analytics - Microsoft has once again been positioned as a leader in Gartner's 2018 Magic Quadrant for Data Management Solutions for Analytics (DMSA). Gartner has also positioned Microsoft as a leader in the Magic Quadrant for Analytics and Business Intelligence Platforms, and in the Magic Quadrant for Operational Database Management Systems.

Technical content

A digital transformation Journey featuring Contoso Manufacturing and Azure IoT - Learn how Azure IoT can help you drive digital transformation in your business in this story-based approach. Instead of listing a portfolio of products and services, this post tells the story of how Contoso HVAC's journey to introduce Azure IoT transformed its business.

Azure’s layered approach to physical security - In this first in a series of blog posts about the secure foundation Microsoft provides to host your infrastructure, applications, and data, learn about how Microsoft designs, builds and operates datacenters in a way that strictly controls physical access to the areas where customer data is stored.

Events

Join Microsoft at the Rice Oil and Gas HPC conference - We’ll be at the Rice Oil and Gas HPC conference in Houston, Texas on March 12-13, 2018. Here we will talk about Microsoft's commitment to providing the resources that HPC users need.

Join Microsoft at Supercomputing Frontiers Europe - If you’re in Warsaw, Poland March 12-15, you’ll want to come and join Microsoft at Supercomputing Frontiers Europe. This conference is a great opportunity to come together with high-performance computing leaders and practitioners. Microsoft will be there to talk about how Azure enables our customers to run true HPC workloads in the cloud.

Developer spotlight

The Xamarin Show | Episode 28: Azure Functions for Mobile Apps with Donna Malayeri - Donna Malayeri, Program Manager at Microsoft in Azure Functions, introduces us to serverless compute with Azure Functions. We discuss what Azure Functions is, how they work, and why they matter for mobile developers. Donna walks us through several mobile focused scenarios that Azure Functions are ideal for.

The Xamarin Show | Azure Functions for Mobile Apps with Laurent Bugnion - Laurent Bugnion shows how Azure Functions (serverless programming) can be used to create a backend for your Xamarin applications. We'll see a super simple project first and then move on to a more "real life" application.

Quora stays laser-focused on dev velocity and high-quality UX with Visual Studio App Center - Quora empowers its developers to move quickly - deploying up to 100x per day - and a reliable testing framework is critical to their successful, rapid iteration. With Visual Studio App Center, Quora runs automated UI tests on hundreds of real devices, spending less time testing and trying to reproduce errors and more time focusing on what matters: developing and releasing high-quality apps with confidence.

Get Your Azure Mobile Badge from Xamarin University! - Unlock this badge by completing the Xamarin University Azure courses, all of which are available as Self-Guided Learning courses. Once earned, the badge will appear on your Xamarin University profile.

Azure Mobile Apps Quickstarts - If you are new to Mobile Apps, you can get started by following our tutorials for connecting your Mobile Apps cloud backend to Windows Store apps, iOS apps, and Android apps. Tutorials are also available for Xamarin Android, iOS, and Forms apps.

Service updates

Azure shows

Azure Friday: Episode 392 - Using PowerShell Core with Azure - Joey Aiello joins Donovan Brown to discuss PowerShell Core 6.0, a cross-platform, open-source edition of PowerShell based on .NET Core built for heterogeneous environments and the hybrid cloud. You'll also learn about how the upcoming release of OpenSSH for Windows and Windows Server will enable new ways to remotely manage your environments, as well as how PowerShell Core integrates with OpenSSH.

The Azure Podcast: Episode 218 - Ramping up on ARM templates - Based on feedback from our listeners, we chat with Richard Cheney, a Cloud Solution Architect at Microsoft. He gives us loads of advice to help our listeners get ramped up on developing ARM templates, including 'Project Citadel' - a whole set of resources to help you craft the best templates.


Azure Batch for oil and gas


There is a new urgency for reaching oil more efficiently in a capital and risk intensive environment, especially with narrow margins around non-traditional exploration. The cost of offshore drilling for oil can be several hundred million dollars, with no guarantee of finding oil at all. On top of that, the high cost of data acquisition, drilling, and production reduces average profit margins to less than ten percent. Also, the expense and strict time limits of petroleum licenses impose a fixed window for exploration. This limit means data acquisition, data processing, and interpretation of 3-D images must all fit within a tight solution-time envelope.

High performance computing (HPC) helps oil and gas companies accelerate ROI and minimize risk. It does this by providing engineers and geoscientists engaged in identifying and analyzing resources with the insight needed to make crucial project decisions. Azure provides true HPC on the cloud for customers in the oil and gas industry, with a broad range of compute resources to meet the needs of oil and gas workloads. This ranges from single-node jobs that use our compute optimized F-series virtual machines to tightly coupled many-node jobs that run on the H-series virtual machines, and all the way up to a dedicated Cray supercomputer.

Of course, compute resources are only useful to the degree you can access them. Many key line-of-business applications and workflows rely on in-house developed applications optimized for HPC resources. This can pose a dilemma in the cloud age, where significant customization and re-work are sometimes required to modernize applications in order to take advantage of cloud scalability and elasticity. This is where Azure Batch comes in to assist.

Azure Batch lets companies integrate their existing applications with Azure compute resources to execute applications in parallel, and at scale. There's no need to manually create, configure, and manage an HPC cluster, individual virtual machines, virtual networks, or a complex job and task scheduling infrastructure. Batch automates these tasks to create cloud native applications. Best of all, Batch itself is a free service; you only pay for the resources used as part of the Batch workflow.

Right now, an independent software vendor (ISV) in the oil and gas industry is using Batch to enable its code to run as a Software-as-a-Service (SaaS) offering. This offering uses containers to provide an identical environment in Azure and on-premises. The standard application workflow is implemented through web hooks, which run the models and upload data to a storage account. This allows the vendor to make the software available to customers in a scalable way without having to run its own infrastructure.

If you’re at the Rice Oil and Gas HPC conference this week, stop by the Microsoft booth to learn more.

How VSTS is Accelerating the Engineering Group Behind Windows

As part of our engineering processes in Microsoft, we often share best practices and stories of change across the engineering teams in the company. At our latest internal engineering conference as I listened in to sessions, I was struck by the sheer scale of the effort the Windows and Devices Group (WDG) undertook and the... Read More

Giving feedback


Six months ago I wrote a post on Taking Feedback.  Several people asked me to write a follow up on giving feedback.  Amazing how time flies and somehow I just haven’t gotten around to it – so I’m doing it now.

Here’s a key snippet from the Taking Feedback post if you don’t want to go read the whole thing…

At some level, all feedback is valid. It is the perception of another person based on some interaction with us. As such it’s important that we listen, understand and think about how we can improve. Yet, not all feedback is to be taken as given – meaning the person giving the feedback may have heard something that wasn’t true, misinterpreted something, or may simply not have the perspective we have. In the end we are the ones to decide what to do with the feedback. We may decide that the feedback is valid and provides clear ideas for improvement. Or we may decide that we disagree with the feedback but it provides insights into how we could do things differently to prevent misperceptions. Or we may decide that we simply don’t agree with the feedback and we are going to file it away and keep an eye out for future feedback that might make us revisit that conclusion.

Giving someone feedback is a wonderful thing but it’s also a very hard thing – partly because taking feedback can be so difficult that it makes giving it very stressful.  There are some things I’ve learned over the years about giving feedback that have made it a little bit easier.

There are two kinds of feedback

This is probably the one I fail the most on.  We usually think of feedback as a negative thing – here’s something you can do better.  But positive feedback is equally important – here’s something you did particularly well.  I tend to be so focused on how I and the people around me can do better that I, too often, forget to point out when someone has done something well – or they have some attribute that I really admire.  It’s not that I don’t know it at some subconscious level; it’s just that I’m caught up in the next challenge to tackle and it just doesn’t occur to me to say anything about it.

So, my first piece of advice is to try to be very conscious about positive feedback.  When you see something you like, say so.  Be on the lookout for things to compliment people for.  Do it privately; do it publicly.  Thank people for something you appreciate.  Whether they admit it to themselves or not, everyone likes appreciation and they tend to gravitate to doing things that will earn them more appreciation.  Developing a pattern of recognizing good things will encourage people to do more good things.

At the same time, be careful not to overdo it.  There can be too much of a good thing.  By that, I mean, don’t compliment people for superficial things or things they didn’t really do.  A compliment is most valued when a person feels like they invested energy.  If you compliment people for just anything, then you “cheapen” the feedback and make it mean less when it’s really deserved.

If you are good at giving positive feedback, negative feedback is also easier to give.  People are more likely to respond well to negative feedback if it’s given in an environment where, overall, they feel valued than if they feel like they are just always criticized for everything and not valued for anything.

There’s a time and a place for everything

When and where you give feedback is *super* important.  There’s a saying “Public praise and private criticism.”  It’s a good rule to follow.  People really appreciate having their successes publicly celebrated and no one likes being publicly berated.  Beyond that, some other important rules, particularly for negative feedback, are:

  1. Find a time when they are ready to hear it – Unless the feedback is urgently required to avoid a disaster, don’t try to give it when someone is under a great deal of stress (maybe rushing to meet a deadline), frustrated, angry, etc.  Feedback is going to be heard and processed best when the person is relaxed and reflective.  Make sure you have enough uninterrupted time to fully discuss the feedback.  It’s a good idea to ask them if they are ready for you to give feedback.
  2. Make sure you are ready to give it – Similar to #1, don’t try to give feedback when you are angry or frustrated.  Take the time to digest what you need to say – to separate your frustration from an objective assessment of what happened.  Have a calm conversation about what you observed and what could be done differently.
  3. If at all possible, give it in person – Feedback is generally best processed face to face.  It is very easy to read unintended tone in written feedback.  By giving it in person, you can watch for body language to see if the person is hearing something you aren’t intending to say.  Sometimes, of course, it isn’t possible and when it isn’t, you have to be doubly thoughtful about how you say it.  Sometimes I give some initial, very light feedback in writing, with an offer to discuss it at length in person (or via video conference, for remote people).
  4. Give it to the person – It’s amazing to me how often someone will “give feedback” to someone else.  By that, I mean, complain about what someone did to a third person without ever following up with the person themselves.  That’s never going to work and will, in the long run, only create a hostile environment.  Always focus your feedback on the person or people directly involved.  Sometimes it’s necessary and appropriate to share feedback with a broader audience so that everyone can learn from something.  Be careful how you do that because, done wrong, it looks a lot like public criticism - and never do it without talking to the people directly involved first.

Focus on what you can directly observe

It’s very important to focus on what you can directly observe.  Try very hard to avoid “I’ve heard…” or even “Susan told me…”.  The problem with relaying feedback from someone else is that you don’t really know what has happened and it’s very hard for you to be constructive.  That said, you will, particularly as a manager, get feedback from 3rd parties and it’s not irrelevant.  I generally try to use it, carefully, as supporting evidence when giving my own feedback.  It helps me understand when things I’ve observed are a pattern vs an anomaly.  If someone comes to you with feedback about someone else, try as hard as you can to find a way to facilitate the feedback being given directly between the people involved, even if you need to participate in the discussion to facilitate it.

I’ve observed that humans have an inherent tendency to want to ascribe motive – to determine why someone did something.  “Joe left me out of that important conversation because he was trying to undermine me.”  Any time you find yourself filling in the because clause, stop.  You don’t know why someone does anything.  That is locked up securely in their head.  When filling in that blank, people tend to insert some negative reason that’s worse than reality.  So, when giving feedback, stick to what you can see.  “Joe, you left me out of that important conversation.  I felt undermined by that.  Why did you do it?”  In this example, I articulate exactly what I saw happen and how it made me feel, and I ask Joe to explain to me why.  Joe may dispute that he left me out – that’s fairly factual and we can discuss evidence – but Joe can’t dispute how I felt, at least not credibly.  Try as hard as you can to stick to things you personally observed and stay away from asserting motive.  Have a genuine conversation designed to help you better understand each other’s perspective and what each of you can do better in the future.

Consider your relationship

Your relationship with the recipient of your feedback can make a big difference.  You need to be careful about how it colors what you say.  For instance, as a manager, I always try to be one who is connected to what’s going on in the team and give feedback to anyone and everyone on what I see.  Early in my career, I found this can go terribly wrong.  An offhand comment to someone several levels below me in the company can be interpreted as a directive to be followed.  I may have been musing out loud and somehow, accidentally, countermanded several levels of managers.  Try that and see how fast a manager shows up at your door to complain 😊.  Now, I try to be clear when I’m just giving an offhand opinion and when I’m giving direction.  I also tell them to go talk with their manager before acting on what I told them and, often, go tell the manager myself what I said.

This is just one example of how a relationship can affect how feedback is taken.  Feedback from a spouse is different than that from a friend, which is different than that from a parent, which is different than that from a co-worker, etc.

Acknowledge your role

Often, when giving feedback, it’s about some interaction you were party to – and, as they say, it takes two to tango.  There may have been things you did that contributed to whatever happened.  Be prepared to acknowledge them and to talk about them.  Don’t refuse to acknowledge that you may have had a role.  At the same time, don’t allow the person to make it all about you.  You have feedback for them.  Don’t let the conversation become only about you.  Make sure you are able to deliver your feedback too.  You may need to offer to set aside time in the future for the other person to give you feedback so that, for now, you can focus on your feedback.

Retrospectives can be powerful

While most of what I’ve written here focuses on how to give feedback to someone, a great technique to drive improvement is to create an environment where people can critique themselves.  Retrospectives are an awesome tool to get one or more people to reflect on something and make their own suggestions for improvements.  Done right, it is a non-threatening and collaborative environment where ideas and alternate ways of handling things can be explored.  Retrospectives, like all feedback, should focus on what happened and what can be better and avoid accusations, blame, and recrimination.  You can participate in it and contribute your feedback or you can discuss the outcome and help process it for future actions.

Beware of feedback landmines

  1. The feedback sandwich – This is probably one of the hardest ones to get right and depends a lot on you and the person you are talking to.  A feedback sandwich is when you tell someone how good they are, then you tell them something you think they need to improve, then you tell them how good they are again.  There are legitimate reasons to mix both positive and negative feedback, for example, it helps establish the scope of the feedback.  If you only give negative feedback, people can read more into it than you mean.  I often use a mix of positive and negative feedback so that I am clear about the scope of the negative feedback.  “I’m not talking about everything you do, I’m just talking about this specific issue.”  Or, “Here’s an example of where you handled something similar well.”  However, when it is primarily used to blunt the emotional impact of the feedback, it is dangerous.  Taken too far, it can completely dilute your point and make your feedback irrelevant.
  2. Examples – When giving feedback, it’s often useful to use examples.  Examples help make the feedback concrete.  But, don’t allow the conversation to turn into a refutation of every example.  I’ve been in conversations where the person I’m talking with wants to go through every example I have and explain why my interpretation is wrong.  Be open to being wrong but don’t let it turn into point/counter point.  Examples are only examples to support your feedback.
  3. Comparisons – Be *very* careful about comparing one person to others.  While it’s often useful to suggest better ways of handling something, it’s very dangerous to do it by saying “You should just do it like Sam.”  It creates resentment, among other things.  Sometimes it is appropriate to talk about examples of how you’ve seen something handled well before but don’t let it become a “Sam is better than you” discussion.

Summary

Ironically, just this last weekend, I was having dinner with a friend that I used to work with (she was on my team).  We haven’t worked together in many years but we’ve stayed in touch.  While we were having dinner, she told her husband a story about me.  She said she remembered a time when she had done a review of her project for me and it had not gone well.  After the review, I approached her and asked if she was feeling bad about the review.  She said “Yes” and I said “Good, you should be”.  We then went on to discuss what was bad about it and what she could do to improve it.  On the retelling, it sounded harsh.  While I remember the discussion, I don’t remember many details but it got me thinking.  On the positive side, it was good for me to approach her separately after the meeting.  It was good for me to start with a question of how she was feeling about it.  I probably could have come up with a better reply than “Good, you should be”.  And I do recall we had a good conversation afterwards about how to improve.  If nothing else, this example is proof of how much emotional impact feedback, particularly when not done carefully enough, can have – she has remembered this incident for almost 10 years and I have long forgotten it.

Giving feedback is hard.  There’s no simple rule for it.  It is stressful and can lead to conflict.  The best advice I can give you is:

  1. Give feedback regularly – both positive and negative.
  2. Be careful about when and where you give feedback so you can have a calm and thoughtful conversation.
  3. Focus on things you directly observe and the effects they had on you.  Don’t ascribe motives and make it a personal attack.
  4. Consider your relationship and how it will affect how feedback is heard.
  5. Be aware of your own role and be prepared to discuss it appropriately.
  6. Use retrospectives as a tool for collecting/processing feedback in a non-threatening way.

Lastly, I’ll say, always remember that the purpose of feedback is to help the other person.  If you are giving feedback to make yourself feel better (for example feeling vindicated or superior), you are doomed.  Stop and rethink what you are doing.

As always, I hope this is helpful and feedback is welcome 😊

Brian

Setting up Application Insights took 10 minutes. It created two days of work for me.


I've been upgrading my podcast site from a 10 year old WebMatrix site to modern, open source ASP.NET Core with Razor Pages. It's off IIS and now running cross-platform in Azure.

I added Application Insights to the site in about 10 minutes just a few days ago. It was super easy to set up and basically automatic in Visual Studio 2017 Community. I left the defaults, installed a bit of script on the client, and enabled the server side, and AppInsights already found a few interesting things.
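For reference, the server-side piece in an ASP.NET Core app is essentially a single registration (a minimal sketch; the Visual Studio tooling wires up the equivalent for you, and the instrumentation key comes from configuration):

using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Registers the Application Insights telemetry pipeline (requests, dependencies, exceptions).
    // The instrumentation key is read from configuration (appsettings.json or the Azure portal).
    services.AddApplicationInsightsTelemetry();

    services.AddMvc();
}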

It took 10 minutes to set up App Insights. It took two days (and work continues) to fix what it found. I love it. This tool has already given me a deeper insight into how my code runs and how it's behaving - and I'm just scratching the surface. I'll need to do some videos and/or more blog posts to dig deeper. Truly, you need to try it.

Slow performance in other countries

I could fill this blog post with dozens of awesome screenshots of the useful charts, graphs, and filters that I got by just turning on AppInsights. But the most interesting part is that I turned it on really expecting nothing. I figured I'd get some "Google Analytics"-type behavior.

Then I got this email:

Browser Time is slow in Bangladesh

Huh. I had set up the Azure CDN at images.hanselminutes.com to handle all the faces for each episode. I then added lazy loading so that the website only loads the images that enter the browser's viewport. I figured I was pretty much done.

However I didn't really think about the page itself as it loads for folks from around the world - given that it's hosted on Azure in the West US.

18.4 secs to load the page in Bangladesh

Ideally I'd want the site to load in less than a second, but this is my archives page with 600 shows so it's pretty heavy.

That's some long load times

Yuck. I have a few options. I could pay and load up another copy of the site in South Asia and then do some global load balancing. However, I'm hosting this on a single small App Service Plan (along with a dozen other sites) so I don't want to really pay much to fix this.

I ended up signing up for a free account at CloudFlare and set up caching for my HTML. The images stay the same, served by the Azure CDN.

Lots of requests from Cloudflare

Fixing Random and regular Server 500 errors

I left the site up for a while and came back later to a warning. You can see my site availability is just 93%. Note that there's "2 Servers?" That's because one is my local machine! Very cool that AppInsights also (optionally) tracks your local development server as well.

1 Alert!

When I dig in I see a VERY interesting sawtooth pattern.

Pro Tip - Recognizing that a Sawtooth Pattern is a Bad Thing (tm) is an important DevOps thing. Why is this happening regularly? Is it exactly regularly (like every 4 hours on a schedule?) or somewhat regularly (like a garbage collection issue?)

What do these operations have in common? Look closely.

scarygraph

It's not a GET, it's a HEAD. Remember that HTTP verbs are more than GET, POST, PUT, and DELETE. There's also HEAD. It literally is a HEADer call: like a GET, but with no body.

HTTP HEAD - The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.

I installed HTTPie - which is like curl or wget for humans - and issued a HEAD command from my local machine while under the debugger.

C:\>http --verify=no HEAD https://localhost:5001

HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=utf-8
Date: Tue, 13 Mar 2018 03:41:51 GMT
Server: Kestrel

Ok that is bad. See the 500? I check out AppInsights and see it has the full call stack. See it's getting a NullReferenceException as it tries to Render() the Razor page?

Null Reference Exception

It turns out that since I'm using Razor Pages, I have implemented "OnGet" where I do my database work and then pass a model to the page to generate HTML. However, if someone issues a HEAD, the page still runs but the data work never happens (I have no OnHead() handler). I have a few options here. I could handle HEAD myself. I could no-op it, but that'd be a lie.

THOUGHT: I think this behavior is sub-optimal. While GET and POST are distinct and it makes sense to require an OnGet() and OnPost(), I think that HEAD is special. It's basically a GET with a "don't return the body" flag set. So why not have Razor Pages automatically delegate OnHead to OnGet, unless there's an explicit OnHead() declared? I'll file an issue on GitHub because I don't like this behavior and I find it counter-intuitive. I could also register a global IPageFilter to make this work for all my site's pages.

The simplest thing to do is just to delegate OnHead to the OnGet handler.

public Task OnHeadAsync(int? id, string path) => OnGetAsync(id, path);
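
Here's a minimal sketch of what that delegation looks like in the context of a full Razor Pages PageModel. The model name, parameters, and data-access helper are hypothetical stand-ins, not the actual podcast site code:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class ShowsModel : PageModel
{
    public string Title { get; private set; }

    // Normal GET: do the data work, then let the Razor page render.
    public async Task OnGetAsync(int? id, string path)
    {
        Title = await LoadTitleAsync(id, path); // hypothetical data call
    }

    // HEAD is a GET without a response body, so reuse the same handler
    // rather than letting the page render against an unpopulated model.
    public Task OnHeadAsync(int? id, string path) => OnGetAsync(id, path);

    // Stand-in for the real database/API work.
    private Task<string> LoadTitleAsync(int? id, string path) =>
        Task.FromResult($"Show {id} at {path}");
}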

Then double check and test it with HTTPie:

C:\>http --verify=no HEAD https://localhost:5001

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Tue, 13 Mar 2018 03:53:55 GMT
Request-Context: appId=cid-v1:e310025f-88e9-4133-bc15-e775513c67ac
Server: Kestrel

Bonus - Application Map

Since I have AppInsights enabled on both the client and the server, I can see this cool live Application Map. I'll check again in a few days to see if I have fewer errors. You can see where my Podcast Site calls into the backend data service at Simplecast.

An application map that shows all the components, both client and server

I saw a few failures in my call to SimpleCast's API as I was failing to consistently set my API key. Everything in this map can be drilled down into.

Bonus - Web Performance Testing

I figured while I was in the Azure Portal I would also take advantage of the free performance testing. I did a simulated aggressive 250 users beating on the site. Average response time is 1.22 seconds and I was doing over 600 req/second.

38097 successful calls

I am learning a ton of stuff. I have more things to fix, more improvements to make, and more insights to dig into. I LOVE that it's creating all this work for me because it's giving me a better application/website!

You can get a free Azure account at http://azure.com/free or check out Azure for Startups https://azure.microsoft.com/overview/startups/ and get a bunch of free Azure time. AppInsights works with Node, Docker, Java, ASP.NET, ASP.NET Core, and other platforms. It even supports telemetry in Electron or Windows Apps.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Announcing backup and restore performance improvements and support for large disk backup


Today, we are excited to announce support for backup of large disk VMs, along with a set of improvements aimed at reducing the time taken for backup and restore. These improvements and large disk support are based on a new VM backup stack and are available for both managed and unmanaged disks. You can seamlessly upgrade to this new stack without any impact on your ongoing backup jobs, and there is no change to how you set up backup or restore.

This announcement combines multiple feature improvements:

  • Large disk support – Now you can back up VMs with disk sizes up to 4 TB (4,095 GB), both managed and unmanaged.
  • Instant recovery point – A recovery point is available as soon as the snapshot is taken as part of the backup job. This eliminates the need to wait until the data transfer phase of the backup is complete before triggering a restore. This is particularly useful when you want to apply a patch: you can go ahead with the patch once the snapshot phase is done, and use the local snapshot to revert if the patch goes bad. This is analogous to the checkpoint solutions offered by Hyper-V or VMware, with the added advantage that the snapshot is also securely stored in the backup vault.
  • Backup and restore performance improvements – As part of this announcement, we now retain the snapshots taken during the backup job for seven days. This helps us compute the changes between two backup jobs more efficiently, reducing backup time. These snapshots can also be used to trigger restore. Since they are available locally, the restore process no longer needs to transfer data back from the vault to the storage account, reducing restore time from hours to minutes. The ability to configure the retention of locally stored snapshots will be available in upcoming releases.
  • Distribute the disks of the restored VM – If you are using a VM with unmanaged disks, you might have noticed that during restore we restore all disks to the same storage account. We are adding a capability that lets you distribute those disks across the same set of storage accounts as the original VM, reducing the reconfiguration needed post-restore.

Getting Started:

You can use either the Azure portal or PowerShell to update the subscription to the new stack. It is a one-directional change and will retain all your existing policies and recovery points as they were. 

Portal:

We are enabling this experience starting today and rolling it out region by region. You will see this across all regions by the end of this week. 

You will see a banner on the Recovery Services vault dashboard. Clicking on the banner will open a screen where you can upgrade the stack and get the feature improvements above. You can upgrade the complete subscription from any of the vaults. Once upgraded, all VM backups in the subscription will be backed up using the new stack.

VM backup stack V2 enable screen

PowerShell:

Please execute following cmdlets from an elevated PowerShell terminal:

1. Login to Azure Account.

PS C:\> Login-AzureRmAccount

2. Select the subscription which you want to register for the preview:

PS C:\> Get-AzureRmSubscription -SubscriptionName "Subscription Name" | Select-AzureRmSubscription

3. Register this subscription for the private preview:

PS C:\> Register-AzureRmProviderFeature -FeatureName "InstantBackupandRecovery" -ProviderNamespace Microsoft.RecoveryServices


Azure Strategy and Implementation Guide – free download


This blog post is co-authored by Joachim Hafner, Cloud Solution Architect 

We’re pleased to offer a free e-book for those of you who are new to Azure or in the beginning stages of planning a cloud migration: the Azure Strategy and Implementation Guide for IT Organizations. As cloud solution architects, we hear a lot of the same questions from customers who are starting to think about their cloud implementation process. This motivated us to write this e-book, which provides guidance in the form of a blueprint which customers can follow to help form their cloud strategy.


Whether you are starting with Azure or doing more general research regarding how IT teams navigate cloud implementation, this guide offers a balance of broadly applicable advice and Azure specifics for you to consider. Here is an overview of what is covered:

  • Chapter 1: Governance – This chapter covers the starting points, from the aspirational “digital transformation” to the important tactical steps of administration and resource naming conventions. Get an overview of topics such as envisioning, to cloud readiness, administration, and security standards and policy.
  • Chapter 2: Architecture – This section takes a longer look at security, touches on cloud design patterns, and provides several visual representations to help you understand network design.
  • Chapter 3: Application development and operations – Here, we cover backup and disaster recovery, as well as application development from an IT operations and management perspective. You’ll learn about the culture of DevOps as well as monitoring and Infrastructure as Code (IaC).
  • Chapter 4: Service management – No, IT does not become obsolete when an organization moves to the cloud! This chapter focuses on service management and optimization, along with the day-to-day details of how to stay informed of the Azure roadmap, updates, and where to go when you need support.

The target audiences for this guide are enterprise architects, project managers of cloud roll-out initiatives, solution architects, and IT team leads. We hope you find it helpful!

Get your copy of the free e-book today.

    Python in Visual Studio 15.7 Preview 1


    Today we have released the first preview of our next update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

    In this post, we're going to look at some of the new features we have added for Python developers. As always, the preview is a way for us to get features into your hands early, so you can provide feedback and we can identify issues with a smaller audience. If you encounter any trouble, please use the Report a Problem tool to let us know.

    The two major new features are a preview of the ptvsd 4.0 debugger, and IntelliSense for type hints.

    IntelliSense for Type Hints

    In this release we have added support for type hints in our IntelliSense. When you add type hints to parameters or variables they’ll be shown in hover tooltips.

    For example, in the example below a Vector type is declared as a list of floats, and the scale() method is decorated with types to indicate the parameters and return types. Hovering over the scale() method when calling it shows the expected parameters and return type:

    Using type hints you can also declare class attributes and their types, which is handy if those attributes are dynamically added later. Below we declare that the Employee class has a name and an id, and that information is then available in IntelliSense when using variables of type Employee:

    Right now you will see type hint information being combined with the automatic analysis we have always performed, but we are not using them to provide warnings about mismatched types. Let us know in the comments on this post how you would like to see this information being used in Visual Studio.

    Preview of ptvsd 4.0 debugger

    In this release we are experimenting with a new version of our ptvsd debug engine based on PyDevD, with a basic set of debugging features and some early performance improvements over the previous version that we want to make available for you to try out. A preview of the latest ptvsd has also been released in the Python extension for Visual Studio Code.

    As one example of a new benefit, Django apps run faster with the new debugger attached (compared to the previous version). With our sample stackoverflow-django app we observed up to a 3x speedup in loading pages.

    You can try it out by taking the following steps:

    1. Check “Use experimental debugger” in Tools > Options > Python > Experimental
    2. Install the latest version of ptvsd by right clicking on your environment in solution explorer, selecting “Install Packages” and typing ptvsd --pre in the text box and clicking on the install link.
    3. Click Start debugging!

    Features supported include:

    • Launching applications
    • Pause/Continue
    • Basic breakpoints and stepping over/into/out
    • Break on exception
    • View/change call stack frames
    • View local and global variables
    • Conditional breakpoints

    Some features that we are still working on include:

    • Tracepoints
    • Set Next Statement
    • Local and Remote Attach
    • Django template debugging

    While we are working hard on this new debugger, we will be publishing updates frequently. If you find a problem, try updating your ptvsd install as we may have already released a fix.

    Be sure to download the latest preview of Visual Studio and try out the above improvements. If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our GitHub page. Follow our Python blog to make sure you hear about our updates first, and thank you for using Visual Studio!

     

     

     


    Four IT skills with sky-high prospects


    Staying on top of new technologies is always important, but what skills will help an IT professional become more influential in the workplace? And what role does certification play?

    Recent findings from a Microsoft-sponsored IDC white paper, “Cloud Skills and Organizational Influence: How Cloud Skills Are Accelerating the Careers of IT Professionals,” found that even though 70 percent of CIOs surveyed identify themselves as having a “cloud-first IT strategy,” only 16 percent of companies have the IT skills to carry out this strategy. This translates into demand for people with the expertise to thrive in a cloud-first environment.

    IT Strategy

    As companies continue moving to the cloud, certain skills will be especially relevant:

    • Business intelligence: Businesses thrive on data, and there is a continued demand for IT pros who can make it easier to turn that data into actionable intelligence. 
    • DevOps: Companies value the IT operations specialist who can balance the need for agile design and development and the protection of its intellectual property and business goals.
    • Identity and access management: Security threats continue to evolve and there’s a constant need for individuals who understand the threat landscape and who can develop and deploy solutions to protect assets, while maintaining employee productivity and engagement.
    • Software architecture: As companies look for more complex and distributed solutions, software architecture is an essential skill in designing and developing solutions, and ensuring that they meet agreed-upon requirements.

    IDC also found that IT professionals who have earned certification in cloud-based development or IT operations benefit further. A survey of 500 IT pros revealed that individuals with either certification are 35 percent more influential in decisions related to cloud deployments.

    Cloud computing will drive changes throughout the IT profession. However, individuals in search of rapid career advancement should consider developing skills in one of these areas, and would do well to complete the extra work toward getting certified.

    Read the Cloud Skills and Organizational Influence: How Cloud Skills Are Accelerating the Careers of IT Professionals white paper for more information, then explore certification and free training options through Azure Essentials.

    Heuristic DNS detections in Azure Security Center


    We have heard from many customers about their challenges with detecting highly evasive threats. To help provide guidance, we published Windows DNS server logging for network forensics and the introduction of the Azure DNS Analytics solution. Today, we are discussing some of our more complex, heuristic techniques to detect malicious use of this vital protocol and how these detect key components of common real-world attacks.

    These analytics focus on behavior that is common to a variety of attacks, ranging from advanced targeted intrusions to the more mundane worms, botnets and ransomware. Such techniques are designed to complement more concrete signature-based detection, giving the opportunity to identify such behavior prior to the deployment of analyst driven rules. This is especially important in the case of targeted attacks, where time to detection of such activity is typically measured in months. The longer an attacker has access to a network, the more expensive the eventual clean-up and removal process becomes. Similarly, while rule-based detection of ransomware is normally available within a few days of an outbreak, this is often too late to avoid significant brand and financial damage for many organizations.

    These analytics, along with many more, are enabled through Azure Security Center upon enabling the collection of DNS logs on Azure based servers. While this logging requires Windows DNS servers, the detections themselves are largely platform agnostic, so they can run across any client operating system configured to use an enabled server.

    A typical attack scenario

    A bad guy seeking to gain access to a cloud server starts a script attempting to log in by brute-force guessing of the local administrator password. With no limit on the number of incorrect login attempts, after several days of effort the attacker eventually guesses the seemingly strong password St@1w@rt.

    Upon successful login, the intruder immediately proceeds to download and install a malicious remote administration tool. This enables a raft of useful functions, such as the automated stealing of user passwords, detection of credit card or banking details, and assistance in subsequent brute force or Denial-of-Service attacks. Once running, this tool begins periodically beaconing over HTTP to a pre-configured command and control server, awaiting further instruction.

    This type of attack, while seemingly trivial to detect, is not always easy to prevent. For instance, limiting incorrect login attempts appears to be a sensible precaution, but doing so introduces a severe risk of denial of service through lockouts. Likewise, although it is simple to detect large numbers of failed logins, it is not always easy to differentiate legitimate user activity from the almost continual background noise of often distributed brute force attempts.

    Detection opportunities

    For many of our analytics, we are not specifically looking for the initial infection vector. While our above example could potentially have been detected from its brute force activity, in practice, this could just as easily have been a single malicious login using a known password, as might be the case following exploitation of a legitimate administrator’s desktop or successful social engineering effort. The following techniques are therefore looking to detect the subsequent behavior or the downloading and running of the malicious service.

    Network artifacts

    Attacks, such as the one outlined above, have many possible avenues of detection over the network, but a consistent feature of almost all attacks is their usage of DNS. Regardless of transport protocol used, the odds are that a given server will be contacted by its domain name. This necessitates usage of DNS to resolve this hostname to an IP address. Therefore, by analyzing only DNS interactions, you get a useful view of outbound communication channels from a given network. An additional benefit to running analytics over DNS, rather than the underlying protocols, is local caching of common domains. This reduces their prevalence on the network, reducing both storage and computational expense of any analytic framework.


    WannaCry Ransomware detected by Random Domain analytic.


    Malware report listing hard-coded domains enumerated by WannaCry ransomware.

    Random domains

    Malicious software has a tendency towards randomly generated domains. This may be for many reasons: simple language issues (avoiding the need to tailor domains to each victim’s native language), easier automation of registering large numbers of such names, and a reduced chance of accidental reuse or collision. This tendency is highlighted by techniques such as Domain Generation Algorithms (DGAs), but randomly generated names are also frequently used for static download sites and command and control servers, as in the WannaCry example above.

    Detecting these “random” names is not always straightforward. Standard tests tend to only work on relatively large amounts of data. Entropy, for instance, requires a minimum of several times the size of the character set or at least hundreds of bytes. Domain names, on the other hand, are a maximum of 63 characters in length. To address this issue, we have used basic language modelling, calculating the probabilities of various n-grams occurring in legitimate domain names. We also use these to detect the occurrence of highly unlikely combinations of characters in a given name.
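
    To make that concrete, here is a hedged sketch of the bigram idea (not the production analytic): estimate bigram probabilities from a corpus of known-legitimate domain labels, then flag labels whose average bigram log-probability is unusually low. The corpus, smoothing, and scoring choices below are illustrative assumptions.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class DomainRandomnessScorer
    {
        static Dictionary<string, double> _bigramLogProb = new Dictionary<string, double>();
        static readonly double UnseenLogProb = Math.Log(1e-6); // crude floor for unseen bigrams

        // Learn bigram log-probabilities from legitimate domain labels.
        public static void Train(IEnumerable<string> legitimateLabels)
        {
            var counts = new Dictionary<string, int>();
            long total = 0;
            foreach (var label in legitimateLabels)
            {
                var s = label.ToLowerInvariant();
                for (int i = 0; i < s.Length - 1; i++)
                {
                    var bigram = s.Substring(i, 2);
                    counts[bigram] = counts.TryGetValue(bigram, out var c) ? c + 1 : 1;
                    total++;
                }
            }
            _bigramLogProb = counts.ToDictionary(
                kv => kv.Key,
                kv => Math.Log((double)kv.Value / total));
        }

        // Average bigram log-probability; very negative scores look "random".
        public static double Score(string label)
        {
            var s = label.ToLowerInvariant();
            if (s.Length < 2) return 0;
            double sum = 0;
            for (int i = 0; i < s.Length - 1; i++)
                sum += _bigramLogProb.TryGetValue(s.Substring(i, 2), out var lp) ? lp : UnseenLogProb;
            return sum / (s.Length - 1);
        }
    }

    With a model trained on legitimate names, Score("microsoft") would typically come out well above Score("xq9vprkz3t"), the sort of low-probability label this kind of analytic is designed to surface.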


    Malware report detailing use of randomly generated domain names by ShadowPad trojan.

    Periodicity

    As mentioned, this attack involved periodic beaconing to a command and control server. For the sake of argument, let’s assume this is an hourly HTTP request. When attempting to make this request, the HTTP client will first attempt to resolve the server’s domain name through the local DNS resolver. This resolver will tend to keep a local cache of such resolutions, meaning that you cannot guarantee you will see a DNS request on every beacon. However, the requests you do see will fall on some multiple of an hour.

    In attempting to find such periodic activity, we use a version of Euclid’s algorithm to keep track of an approximate greatest common divisor (GCD) of the time between lookups of each specific domain. Once a domain’s GCD falls within the permitted error (in the exact case, to one), it is added to a Bloom filter of domains to be ignored in further calculations. Assuming a GCD greater than this error, we take the current GCD (our estimate of the beacon period) and the number of observations, and calculate the probability of observing this many lookups all landing on multiples of this period. For example, the chance of randomly seeing three lookups to some domain, all on multiples of two seconds, is 1/2^3, or 1 in 8. On the other hand, as with our example, the probability of seeing three random lookups, precise to the nearest second, all on multiples of one hour is 1/3600^3, or 1 in 46,656,000,000. Thus, the longer the time delta, the fewer observations we need before we are confident the behavior is periodic.
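
    As a rough illustration of that calculation, here is a simplified sketch - not the production analytic, which uses an approximate GCD with a permitted error and a Bloom filter of ignored domains:

    using System;
    using System.Collections.Generic;

    static class PeriodicityDetector
    {
        // Classic Euclidean GCD on whole seconds.
        static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

        // timestampsSeconds: lookup times for one domain, in whole seconds.
        // Returns the estimated beacon period and the probability that the
        // observed alignment is pure chance: (1 / period) ^ lookupCount.
        public static (long period, double chance) Analyze(IReadOnlyList<long> timestampsSeconds)
        {
            if (timestampsSeconds.Count < 2) return (0, 1.0);
            long gcd = 0;
            for (int i = 1; i < timestampsSeconds.Count; i++)
                gcd = Gcd(gcd, timestampsSeconds[i] - timestampsSeconds[i - 1]);
            if (gcd <= 1) return (gcd, 1.0); // no meaningful shared period
            double chance = Math.Pow(1.0 / gcd, timestampsSeconds.Count);
            return (gcd, chance);
        }
    }

    For three lookups exactly an hour apart, Analyze returns a period of 3600 seconds and a chance of 1/3600^3, matching the 1-in-46,656,000,000 figure above; the smaller that number, the more confident we can be that the domain is being beaconed on a schedule.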

    Conclusion

    As demonstrated in the above scenario, analyzing network artifacts can be extremely useful in detecting malicious activity on endpoints. While the ideal situation is the analysis of all protocols from every machine on a network, in practice this is too expensive to collect and process. Choosing a single protocol to give the highest chance of detecting malicious communications while minimizing the volume of data collected results in a choice between HTTP and DNS. By choosing DNS, you lose the ability to detect direct IP connections. In practice, these are rare, due to the relative scarcity of static IP addresses and the potential to block such connections at firewalls. The benefit of examining DNS is the ability to observe connections across all possible network protocols, from all client operating systems, in a relatively small dataset. The compactness of this data is further aided by the default behavior of on-host caching of common domains.

     

    R rises to #12 in Redmonk language rankings


    In the latest Redmonk language rankings, R has risen to the #12 position, up from #14 in the June 2017 rankings. (Python remains steady in the #3 position.) The Redmonk rankings are based on activity in StackOverflow (as a proxy for user engagement) and Github (as a proxy for developer engagement). Here's the chart from January 2018 of Github popularity ranking versus StackOverflow popularity ranking.

    RedmonkJan2018

    Here's what Redmonk analyst Stephen O'Grady had to say about PowerShell, R, and TypeScript. (R isn't a Microsoft property, but Microsoft is a founding member of the R Consortium and incorporates R into several products including Microsoft ML Server, SQL Server, and Power BI.)

    Powershell (+1) / R (+2) / TypeScript (+3): Of all of the vendors represented on this list, Microsoft has by a fair margin the most to crow about. Its ops-oriented language Powershell continues its steady rise, and R had a bounceback from earlier slight declines. TypeScript, meanwhile, pulled off a contextually impressive three spot jump from #17 to #14. Given that growth in the top twenty comes at a premium, hitting the ranking that a widespread language like R enjoyed in our last rankings is an impressive achievement. From a macro perspective, it’s also worth noting that Microsoft is seeing growth across three distinct categories in operations, analytics/data science and application development. More on this later, but it’s a strong indication that Microsoft’s multi-language approach to the broader market is paying dividends.

    You can find the complete top 20 rankings and analysis of the other languages in the list by following the link below.

    Tecosystems: The RedMonk Programming Language Rankings: January 2018

    Bringing expressive, performant typography to Microsoft Edge with Variable Fonts


    For years, rich typography has been the envy of many web designers, who long for the typographic variety, texture, and precision available in print media. Now, with the recent innovations of OpenType Variable Fonts (pioneered in collaboration between Microsoft, Adobe, Apple, Google, and others) and new standards allowing developers to control font variations in CSS, things are about to change.

    Animation of the phrase "A single font" cycling through different styles, sizes, and weights.

    In this example from our Variable Fonts demo, the Decovar font is animated along an astounding 15 axes using variable fonts.

    Full support for Variable Fonts (including CSS font-variation-settings and font-optical-sizing) is coming to Microsoft Edge starting with EdgeHTML 17, available to preview in Windows Insider Preview builds as of Build 17120. To demonstrate how variable fonts enable expressive, performant experiences, we’ve built a new immersive developer guide on Test Drive: Variable Fonts.

    Image of an iceberg labelled "Variable fonts: An exploration of expressive, performant typography."


    Join us on an expedition to learn about what variable fonts provide web developers and designers, and how to use them on your site. For the best experience, visit the Test Drive in a modern browser that supports font-variation-settings and font-optical-sizing, such as Microsoft Edge on Windows Insider Preview build 17120 or higher.

    Greg, Melanie, and Francois

    The post Bringing expressive, performant typography to Microsoft Edge with Variable Fonts appeared first on Microsoft Edge Dev Blog.

    Visual Studio 2017 Version 15.7 Preview 1


    Last week we released Visual Studio 2017 version 15.6 and Visual Studio for Mac version 7.4, and today we are releasing the first preview of the next minor update: Visual Studio 2017 version 15.7.   We hope that you will use this Preview and share your feedback with us.  To install, you can get it fresh from here or, if you already have a prior Preview installed, you can either click on the in-product notification or check for an update directly.  Alternatively, if you have an Azure subscription, you can provision a virtual machine with this latest preview (starting tomorrow).

    The top highlights of this Preview are described below and include productivity enhancements, better diagnostics, additional C++ development improvements, better management of Android and iOS environments, updated tooling for Universal Windows Platform and .NET Core projects, and an improved update experience.  Please note that this is the first set of version 15.7 features; more goodness awaits in the next Preview. And as always, you can view the complete list of new features and details on how to enable them in the Visual Studio 2017 version 15.7 Preview release notes.

    We appreciate your early adoption and feedback as it helps us ship the most high-quality tools to everyone in the Visual Studio community. Thank you for engaging in Visual Studio previews!

    Productivity

    UI Responsiveness: Performance and productivity are two areas we continually work on improving in Visual Studio.  In Visual Studio 2017 version 15.7, we’re making some of the debugging windows asynchronous, which means that they will no longer block Visual Studio as they do work.  This change will allow for faster stepping because you can continue interacting with Visual Studio without interruption. In this Preview, you should see the first of these improvements in the Threads, Callstack and Parallel Stacks windows, with more window improvements in future releases. As always, we’d love to hear your feedback on the Visual Studio debugger: email vsdbgfb@microsoft.com.

    Diagnostics

    Snapshot Debugging: The Visual Studio Snapshot Debugger can now be launched from the Debug -> Attach Snapshot Debugger menu item. The snapshot debugger in Visual Studio enables you to diagnose and debug issues in your Azure Web Applications without impacting the availability of the application. You can learn more about the Visual Studio Snapshot Debugger through our docs.

    Snapshot Debug Menu

    IntelliTrace Events and Snapshots for .NET Core: IntelliTrace’s new step-back debugging feature, first shipped in Visual Studio 2017 version 15.5, is now supported for debugging .NET Core applications. The feature automatically takes a snapshot of your application on each breakpoint and debugger step, enabling you to go back to a previous breakpoint or step and view the state of the application as it was in the past. To enable this feature, go to Tools > Options > IntelliTrace settings > and select ‘IntelliTrace events and snapshots’. IntelliTrace step-back is a Visual Studio Enterprise feature available on Windows 10 Anniversary Update or above for debugging .NET applications.

    Step Back Forward Buttons

    C++ Development

    C++ Standards Conformance:  In this Preview, we extended template argument deduction for functions to constructors of template classes – when you construct a class template you no longer have to specify the arguments.

    Code Analysis: C++ Core Check is now part of the default toolset for native code analysis. Whenever code analysis is executed over a project, a subset of rules is enabled from C++ Core Check in addition to default recommended rules.

    Linux project properties: We added parallel compilation support for Linux projects, which may significantly improve build times; this can be enabled via Property Pages > C/C++ > Max Number of Parallel Compilation Jobs.  We also added the “Public Project Include Directories” Linux project property to improve consumption of includes from project-to-project references in Linux solutions.

    Compiler throughput: We’ve made changes in fastlink PDBs to reduce in-heap memory consumption by 30%. These changes reduce the time to hit the first breakpoint when debugging and make single-stepping significantly faster. They also eliminate a major cause of out-of-memory crashes when debugging large projects.

    ClangFormat: ClangFormat support for C++ developers was added to the IDE. Like with EditorConfig, you can use ClangFormat to automatically style and format your code as you type, in a way that can be enforced across your development team.

    Clang Format

    Universal Windows Platform Development

    We have included the Windows 10 Insider Preview SDK, Build 17110 as an optional component associated with the Universal Windows Platform workload. We have also added support for generating Windows Machine Learning class wrappers for an ONNX file that is added to a UWP project. Currently, both components are optional but will be included by default in future preview releases.

    .NET Mobile Development

    Figuring out what Android SDKs to install for mobile development can be time consuming. Visual Studio 2017 version 15.7 adds a new Android SDK manager that takes the guesswork out of managing Android SDK installations. When opening a project for which you don’t have SDKs installed to build it, a notice will appear to help you download the required SDKs. After hitting “Download & Install” and accepting the relevant license agreement, the correct SDKs will automatically be installed in the background for you.

    We are also making provisioning iOS devices for development easier. In Visual Studio 2017 version 15.7, we’re replacing the separate steps of requesting a development certificate, generating a signing key, adding a device in the Developer Center, and creating a provisioning profile with a single button. All the heavy lifting of provisioning an iOS device is handled for you in less than 30 seconds.

    ASP.Net and Web Development

    Visual Studio 2017 version 15.7, currently in preview, is the recommended version of Visual Studio to use with .NET Core 2.1, which is also currently in preview. .NET Core 2.1 has a bunch of new features like a new managed socket implementation and various improvements like better support for HTTPS.  It doesn’t matter which you install first, either way Visual Studio 15.7 will find .NET Core 2.1 and offer it as an option.

    Acquisition

    The update experience is now more streamlined than ever in Visual Studio 2017. The yellow notification flag in the upper right-hand corner of the IDE will still notify you when an update is available, but you can now also initiate your own update check by going to Help -> Check for Updates. After you save your work and choose "Update Now", Visual Studio will automatically apply the update and then reopen right where you left off.

    Acquisition

    This new update experience is just the beginning of our investments in this area, and we’d love to hear your feedback in this space, particularly about how and when you want to apply updates. Please share your ideas with us via the Update Survey.

    Try out the Preview today!

    If you’re not familiar with Visual Studio Previews, take a moment to read the Visual Studio 2017 Release Rhythm. Remember that Visual Studio 2017 Previews can be installed side-by-side with other versions of Visual Studio and other installs of Visual Studio 2017 without adversely affecting either your machine or your productivity.  Previews provide an opportunity for you to receive fixes faster and try out upcoming functionality before they become mainstream. Similarly, the Previews enable the Visual Studio engineering team to validate usage, incorporate suggestions, and detect flaws earlier in the development process. We are highly responsive to feedback coming in through the Previews and look forward to hearing from you.

    Please get the Visual Studio Preview today, exercise your favorite workloads, and tell us what you think.  If you have an Azure subscription, you can provision a virtual machine of this preview (starting tomorrow).  You can report issues to us via the Report a Problem tool in Visual Studio or you can share a suggestion on UserVoice. You’ll be able to track your issues in the Visual Studio Developer Community where you can ask questions and find answers. You can also engage with us and other Visual Studio developers through our Visual Studio conversation in the Gitter community (requires GitHub account).

    Christine Ruana, Principal Program Manager, Visual Studio

    Christine is on the Visual Studio release engineering team and is responsible for making Visual Studio releases available to our customers around the world.
