
Kickstart your artificial intelligence/machine learning journey with the Healthcare Blueprint


Azure blueprints are far more than models drawn on paper or solution descriptions in a document. They are packages of scripts, data, and other artifacts needed to install and exercise a reference implementation solution on Azure. The Azure Security and Compliance Blueprint - HIPAA/HITRUST Health Data and AI is one such blueprint targeting a specific scenario common in healthcare.


The healthcare blueprint

The healthcare blueprint includes a real healthcare scenario and an associated machine learning experiment for predicting patient length of stay (LOS). This use case is valuable to healthcare organizations because LOS predictions help forecast bed counts, operational needs, and staffing requirements, which can add up to considerable savings for the organization.

Blueprint solution guide

A blueprint, like the one for AI in healthcare, consists of multiple components along with documentation. That said, there may be some areas that lack clarity and cause trouble in using the blueprint services after installation. To help with any pain points in the installation and usage of the Healthcare AI blueprint, we’ve developed a solution guidance document, Implementing the Azure blueprint for AI.

The article introduces the blueprint and walks through tips for installation and running the AI/ML experiments. For those just getting started with this blueprint, the document gives some insight into the solution. It also provides guidance that those unfamiliar with Azure will find helpful.

Next steps


How to extract building footprints from satellite images using deep learning


As part of the AI for Earth team, I work with our partners and other researchers inside Microsoft to develop new ways to use machine learning and other AI approaches to solve global environmental challenges. In this post, we highlight a sample project of using Azure infrastructure for training a deep learning model to gain insight from geospatial data. Such tools will finally enable us to accurately monitor and measure the impact of our solutions to problems such as deforestation and human-wildlife conflict, helping us to invest in the most effective conservation efforts.


Applying machine learning to geospatial data

When we looked at the most widely-used tools and datasets in the environmental space, remote sensing data in the form of satellite images jumped out.

Today, subject matter experts working on geospatial data go through these image collections manually with the assistance of traditional software, performing tasks such as locating, counting, and outlining objects of interest to obtain measurements and trends. As high-resolution satellite images become readily available on a weekly or daily basis, it becomes essential to engage AI in this effort so that we can take advantage of the data to make more informed decisions.

Geospatial data and computer vision, an active field in AI, are natural partners: tasks involving visual data that cannot be automated by traditional algorithms, an abundance of labeled data, and even more unlabeled data waiting to be understood in a timely manner. The geospatial data and machine learning communities have joined efforts on this front, publishing several datasets, such as Functional Map of the World (fMoW) and the xView Dataset, for people to create computer vision solutions on overhead imagery.

An example of infusing geospatial data and AI into applications that we use every day is using satellite images to add street map annotations of buildings. In June 2018, our colleagues at Bing announced the release of 124 million building footprints in the United States in support of the OpenStreetMap project, an open data initiative that powers many location-based services and applications. The Bing team was able to create so many building footprints from satellite images by training and applying a deep neural network model that classifies each pixel as building or non-building. Now you can do exactly that on your own!

With the sample project that accompanies this blog post, we walk you through how to train such a model on an Azure Deep Learning Virtual Machine (DLVM). We use labeled data made available by the SpaceNet initiative to demonstrate how you can extract information from visual environmental data using deep learning. For those eager to get started, you can head over to our repo on GitHub to read about the dataset, storage options and instructions on running the code or modifying it for your own dataset.

Semantic segmentation

In computer vision, the task of masking out pixels belonging to different classes of objects, such as background or people, is referred to as semantic segmentation. The semantic segmentation model we are training (a U-Net implemented in PyTorch, different from what the Bing team used) can be used for other tasks in analyzing satellite, aerial, or drone imagery. You can use the same method to extract roads from satellite imagery, infer land use, and monitor sustainable farming practices, as well as for applications in a wide range of other domains, such as locating lungs in CT scans for lung disease prediction or evaluating a street scene.

Illustration from slides by Tingwu Wang, University of Toronto (source).
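To make the task concrete, below is a minimal sketch of a tiny U-Net-style network in PyTorch. It is an illustration only, not the model from the sample project (which is much deeper), but it shows the shape of the problem: a 3-channel image goes in, and per-pixel scores over the three classes used here (background, building boundary, building interior) come out.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # A drastically simplified U-Net: one downsampling step, one upsampling
    # step, and a single skip connection.
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, num_classes, 1))

    def forward(self, x):
        e = self.enc(x)                            # (N, 32, H, W)
        m = self.mid(self.down(e))                 # (N, 64, H/2, W/2)
        u = self.up(m)                             # (N, 32, H, W) for even H, W
        return self.dec(torch.cat([u, e], dim=1))  # (N, num_classes, H, W)

scores = TinyUNet()(torch.randn(1, 3, 256, 256))
pred = scores.argmax(dim=1)  # per-pixel class labels, shape (1, 256, 256)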

Satellite imagery data

The data from SpaceNet is 3-channel, high-resolution (31 cm) satellite imagery over four cities where buildings are abundant: Paris, Shanghai, Khartoum, and Vegas. In the sample code we make use of the Vegas subset, consisting of 3854 images of size 650 x 650 pixels. About 17.37 percent of the training images contain no buildings. Since this is a reasonably small percentage of the data, we did not exclude or resample images. In addition, 76.9 percent of all pixels in the training data are background, 15.8 percent are interior of buildings, and 7.3 percent are border pixels.

Original images are cropped into nine smaller chips with some overlap using utility functions provided by SpaceNet (details in our repo). The labels are released as polygon shapes defined using well-known text (WKT), a markup language for representing vector geometry objects on maps. These are transformed to 2D labels of the same dimensions as the input images, where each pixel is labeled as one of background, boundary of building, or interior of building.
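As a rough sketch of that transformation, the snippet below rasterizes WKT polygons (assumed to already be in pixel coordinates) into such a label image using shapely and rasterio. The function name and the 2-pixel border width are illustrative; the sample repo relies on SpaceNet's own utilities instead.

import numpy as np
from rasterio import features
from shapely import wkt

def polygons_to_label(wkt_strings, height, width):
    # 0 = background, 1 = building interior, 2 = building boundary.
    polygons = [wkt.loads(s) for s in wkt_strings]
    label = np.zeros((height, width), dtype=np.uint8)
    if polygons:
        # Burn interiors first, then overwrite a thin band around each
        # outline so that boundary pixels win over interior pixels.
        features.rasterize([(p, 1) for p in polygons], out=label)
        features.rasterize([(p.boundary.buffer(2), 2) for p in polygons], out=label)
    return label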

(Figure: a sample input image and its corresponding label mask.)

Some chips are partially or completely empty, like the examples below; this is an artifact of the original satellite images, and the model should be robust enough not to propose building footprints in empty regions.

(Figure: examples of partially or completely empty chips.)

Training and applying the model

The sample code contains a walkthrough of carrying out the training and evaluation pipeline on a DLVM. The following segmentation results are produced by the model at various epochs during training for the input image and label pair shown above. This image features buildings with roofs of different colors, roads, pavements, trees, and yards. We observe that initially the network learns to identify edges of building blocks and buildings with red roofs (different from the color of roads), followed by buildings of all roof colors after epoch 5. After epoch 7, the network has learned that building pixels are enclosed by border pixels, separating them from road pixels. After epoch 10, smaller, noisy clusters of building pixels begin to disappear as the shape of buildings becomes more defined.

(Figure: segmentation results produced at successive training epochs.)

A final step is to produce the polygons: all pixels predicted to be building boundary are reassigned to background, isolating blobs of building pixels. Blobs of connected building pixels are then described in polygon format, subject to a minimum polygon area threshold, a parameter you can tune to reduce false positive proposals.
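A minimal sketch of this polygonization step, again using rasterio and shapely (the names are illustrative, and the default threshold of 200 square pixels matches the tuning discussed in the next section):

import numpy as np
from rasterio import features
from shapely.geometry import shape

def mask_to_polygons(pred, min_area=200):
    # pred holds per-pixel labels: 0 = background, 1 = interior, 2 = boundary.
    # Treating boundary pixels as background isolates blobs of building pixels.
    building = (pred == 1).astype(np.uint8)
    blobs = [shape(geom) for geom, value in features.shapes(building) if value == 1]
    # Discard tiny blobs to reduce false positive proposals.
    return [poly for poly in blobs if poly.area >= min_area]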

Training and model parameters

There are a number of parameters for the training process, the model architecture and the polygonization step that you can tune. We chose a learning rate of 0.0005 for the Adam optimizer (default settings for other parameters) and a batch size of 10 chips, which worked reasonably well.

Another parameter, unrelated to the CNN part of the procedure, is the minimum polygon area threshold below which blobs of building pixels are discarded. Increasing this threshold from 0 to 300 square pixels causes the false positive count to decrease rapidly as noisy false segments are excluded. The optimum threshold is about 200 square pixels.

The weights for the three classes (background, boundary of building, interior of building) in computing the total loss during training are another parameter to experiment with. We found that giving more weight to the interior-of-building class helps the model detect significantly more small buildings (see the figure below).

(Figure: histograms of building polygons by area in the validation set, for loss weights 1:1:1 vs. 1:8:1.)

Each plot in the figure is a histogram of building polygons in the validation set by area, from 300 square pixels to 6000. The count of true positive detections in orange is based on the area of the ground truth polygon to which the proposed polygon was matched. The top histogram is for weights in ratio 1:1:1 in the loss function for background : building interior : building boundary; the bottom histogram is for weights in ratio 1:8:1. We can see that towards the left of the histogram where small buildings are represented, the bars for true positive proposals in orange are much taller in the bottom plot.
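Putting these training choices together, here is a hedged sketch of the setup described above: Adam at learning rate 0.0005, batches of 10 chips, and a class-weighted cross-entropy loss. It reuses the TinyUNet sketch from the semantic segmentation section, the random stand-in data is purely illustrative, and the weight vector assumes class indices 0 = background, 1 = building interior, 2 = building boundary.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 20 random 3-channel chips with per-pixel labels in {0, 1, 2}.
images = torch.randn(20, 3, 256, 256)
labels = torch.randint(0, 3, (20, 256, 256))
loader = DataLoader(TensorDataset(images, labels), batch_size=10, shuffle=True)

model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)  # other params default

# Weight building-interior pixels 8x, i.e. the 1:8:1 ratio for
# background : interior : boundary that detected more small buildings.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 8.0, 1.0]))

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)  # scores (N, 3, H, W) vs. labels (N, H, W)
        loss.backward()
        optimizer.step()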

Last thoughts

Building footprint information generated this way could be used to document the spatial distribution of settlements, allowing researchers to quantify trends in urbanization and perhaps developments driven by climate change, such as climate migration. The techniques here can be applied in many different situations, and we hope this concrete example serves as a guide to tackling your specific problem.

Another piece of good news for those dealing with geospatial data is that Azure already offers a Geo Artificial Intelligence Data Science Virtual Machine (Geo-DSVM), equipped with ESRI’s ArcGIS Pro Geographic Information System. We also created a tutorial on how to use the Geo-DSVM for training deep learning models and integrating them with ArcGIS Pro to help you get started.

Finally, if your organization is working on solutions to address environmental challenges using data and machine learning, we encourage you to apply for an AI for Earth grant so that you can be better supported in leveraging Azure resources and become a part of this purposeful community.

Acknowledgement

I would like to thank Victor Liang, Software Engineer at Microsoft, who worked on the original version of this project with me as part of the coursework for Stanford's CS231n in Spring 2018, and Wee Hyong Tok, Principal Data Scientist Manager at Microsoft, for his help in drafting this blog post.

Announcing .NET Core 2.2 Preview 2


Today, we are announcing .NET Core 2.2 Preview 2. We have great improvements that we want to share and that we would love to get your feedback on, either in the comments or at dotnet/core #1938.

ASP.NET Core 2.2 Preview 2 and Entity Framework Core 2.2 Preview 2 are also releasing today. We are also announcing C# 7.3 and ML.NET 0.5.

You can see complete details of the release in the .NET Core 2.2 Preview 2 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #1938.

Thanks to everyone who contributed to .NET Core 2.2. You've helped make .NET Core a better product!

Download .NET Core 2.2

You can download and get started with .NET Core 2.2 Preview 2 on Windows, macOS, and Linux:

Docker images are available at microsoft/dotnet for .NET Core and ASP.NET Core.

.NET Core 2.2 Preview 2 can be used with Visual Studio 15.8, Visual Studio for Mac and Visual Studio Code.

Tiered Compilation Enabled

The biggest change in .NET Core 2.2 Preview 2 is that tiered compilation is enabled by default. We announced that tiered compilation was available as part of the .NET Core 2.1 release. At that time, you had to enable tiered compilation via application configuration or an environment variable. It is now enabled by default and can be disabled as needed.
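If you need to opt out, tiered compilation can be controlled through configuration. The snippet below sketches the commonly documented knobs, an MSBuild property in the project file and a per-process environment variable; check the release notes for the exact names supported by your version.

<!-- In your .csproj, inside a PropertyGroup: -->
<TieredCompilation>false</TieredCompilation>

# Or per process, via an environment variable (0 = disabled, 1 = enabled):
COMPlus_TieredCompilation=0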

You can see the benefit of tiered compilation in the chart below. The baseline is .NET Core 2.1 RTM, running in a default configuration with tiered compilation disabled. The second scenario has tiered compilation enabled. You can see a significant requests-per-second (RPS) throughput benefit with tiered compilation enabled.

The numbers in the chart are scaled so that the baseline always measures 1.0, which makes it easy to calculate performance changes as a percentage. The first two tests are TechEmpower benchmarks and the last one is Music Store, our frequently used sample ASP.NET app.

Platform Support

.NET Core 2.2 is supported on the following operating systems:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2008 R2 SP1+
  • macOS: 10.12+
  • RHEL: 6+
  • Fedora: 27+
  • Ubuntu: 14.04+
  • Debian: 8+
  • SLES: 12+
  • openSUSE: 42.3+
  • Alpine: 3.7+

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Linux (Ubuntu 18.04+, Debian 9+)

Closing

Please download and test .NET Core 2.2 Preview 2. We’re looking for feedback on the release with the intent of shipping the final version later this year.

We recently shared how Bing.com runs on .NET Core 2.1. The Bing.com site experienced significant benefits when it moved to .NET Core 2.1. Please do check out that post if you are interested in a case study of running .NET Core in production. You may also want to take a look at the .NET Customers site if you are interested in a broader set of customer stories.

Deep dive into Azure Boards


Azure Boards is a service for managing the work for your software projects. Teams need tools that flex and grow. Azure Boards does just that, bringing you a rich set of capabilities including native support for Scrum and Kanban, customizable dashboards, and integrated reporting.


In this post I'll walk through a few core features in Azure Boards and give some insight into how you can make them work for your teams and projects.

Work items

All work in Azure Boards is tracked through an artifact called a work item. Work items are where you and your team describe the details of what's needed. Each work item uses a state model to track and communicate progress. For example, a common state model might be: New > Active > Closed. As work progresses, items are updated accordingly, allowing everyone who works on the project to have a complete picture of where things stand. The work items hub is the home for all work items and provides quick filters so you can find the items you need.


Opening a work item brings you to a much richer view, including the history of all changes, any related discussion, and links to development artifacts including branches, pull requests, commits, and builds. Work items are customizable, supporting the ability to add new fields, create rules, and modify aspects of the layout. For more information, visit the work items documentation page.


Boards, Backlogs, and Sprints

Azure Boards provides a variety of choices for planning and managing work. Let’s look at a few of the core experiences.

Boards

Each project comes with a pre-configured Kanban board perfect for managing the flow of your work. Boards are highly customizable allowing you to add the columns you need for each team and project. Boards support swim lanes, card customization, conditional formatting, filtering, and even WIP limits. For more information, visit the Kanban boards documentation page.


Backlogs

Backlogs help you keep things in order of priority, and to understand the relationships between your work. Drag and drop items to adjust the order, or quickly assign work to an upcoming sprint. For more information, visit the backlogs documentation page.


Sprints

Finally, sprints give you the ability to create increments of work for your team to accomplish together. Each sprint comes equipped with a backlog, taskboard, burndown chart, and capacity planning view to help you and your team deliver your work on time. For more information, visit the sprints documentation page.


Dashboards

In any project, it’s critical that you have a clear view of what’s happening. Azure Boards comes complete with a rich canvas for creating dashboards. Add widgets as needed to track progress and direction. For more information, visit the dashboards documentation page.


Queries

And finally, one of the most powerful features in Azure Boards is the query engine. Queries let you tailor exactly what you're tracking, creating easy-to-monitor KPIs. It's simple to create new queries and pin them to dashboards for quick monitoring and status. For more information, visit the queries documentation page.


Getting started

If you're new to Azure Boards, it's easy to get started: just head over to the Azure DevOps homepage and click Start free to create your first Azure DevOps project. If you've got feedback to share, or questions that need answering, please reach out on Twitter at @AzureDevOps.

Thanks,

Aaron Bjork

HDInsight Tools for VSCode: Integrations with Azure Account and HDInsight Explorer


Making it easy for developers to get started on coding has always been our top priority. We are happy to announce that HDInsight Tools for VS Code now integrates with VS Code Azure Account. This new feature makes your Azure HDInsight sign-in experience much easier. For first-time users, the tools put the required sign-in code into the copy buffer and automatically open the Azure sign-in portal, where you can paste the code and complete the authentication process. For returning users, the tools sign you in automatically. You can quickly start authoring PySpark or Hive jobs, performing data queries, or navigating your Azure resources.

We are also excited to introduce a graphical tree view for the HDInsight Explorer within VS Code. With HDInsight Explorer, data scientists and data developers can navigate HDInsight Hive and Spark clusters across subscriptions and tenants, and browse Azure Data Lake Storage and Blob Storage connected to these HDInsight clusters. Moreover, you can inspect your Hive metadata database and table schema.

Key Customer Benefits

  • Support Azure auto sign-in and improve sign-in experiences via integration with Azure Account extension.
  • Enable multi-tenant support so you can manage your Azure subscription resources across tenants.
  • Gain insights into available HDInsight Spark, Hadoop and HBase clusters across environments, subscriptions, and tenants.
  • Facilitate Spark and Hive programming by exposing Hive metadata tables and schema in HDInsight Explorer, as well as displaying Blob Storage and Azure Data Lake Storage.


How to install or update

First, install Visual Studio Code and download Mono 4.2.x (for Linux and Mac). Then get the latest HDInsight Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching for HDInsight Tools for VSCode.


For more information about HDInsight Tools for VSCode, please use the following resources:

Learn more about today’s announcements on the Azure blog and Big Data blog. Discover more on the Azure service updates page.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.

Azure Marketplace new offers – Volume 19


We continue to expand the Azure Marketplace ecosystem. From August 1, 2018 to August 15, 2018, 50 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual Machine

AudioCodes IP Phone Manager Express

AudioCodes IP Phone Manager Express: AudioCodes IP Phone Manager enables administrators to offer a reliable desktop phone service within their organization. Deploy and monitor AudioCodes IP phones to increase productivity and lower IT expenses.


Balabit Privileged Session Management (PSM)

Balabit Privileged Session Management (PSM): Balabit Privileged Session Management (PSM) controls privileged access to remote IT systems; records activities in searchable, movie-like audit trails; and prevents malicious actions.

BOSH Stemcell for Windows Server 1803

BOSH Stemcell for Windows Server 1803: BOSH Stemcell for Windows Server 1803 by Pivotal Software Inc.

Consul Certified by Bitnami

Consul Certified by Bitnami: Consul is a tool for discovering and configuring services in your infrastructure. Bitnami certifies that our images are secure, up-to-date, and packaged using industry best practices.

etcd Certified by Bitnami

etcd Certified by Bitnami: etcd is a distributed key-value store designed to securely store data across a cluster. etcd is widely used in production due to its reliability, fault tolerance, and ease of use.

F5 BIG-IP Virtual Edition (BYOL)

F5 BIG-IP Virtual Edition (BYOL): This is F5's application delivery services platform for Azure. From traffic management and service offloading to application access, acceleration, and security, the BIG-IP Virtual Edition ensures your apps are fast, available, and secure.

F5 Per-App Virtual Edition (PAYG)

F5 Per-App Virtual Edition (PAYG): F5 Per-App Virtual Editions (VEs) provide application delivery controller (ADC) and web application firewall (WAF) functionality for Azure-hosted applications, delivering intelligent traffic management and security services on a per-app basis.

GigaSECURE Cloud 5.4.00 - Hourly (100 pack)

GigaSECURE Cloud 5.4.00: GigaSECURE Cloud delivers intelligent network traffic visibility for workloads running in Azure and enables increased security, operational efficiency, and scale across virtual networks (VNets).

Informix

Informix: Informix features a cloud-delivered, ready-to-run database system. Informix is configured for OLTP workloads and includes entitlement to the Informix Warehouse Accelerator, delivering incredible query acceleration.

Intellicus BI Server (25 Users - Linux)

Intellicus BI Server (25 Users - Linux): Intellicus BI Server is an enterprise reporting and business intelligence platform with all the features needed to create a comprehensive data analytics platform.

Intellicus BI Server (50 Users - Linux)

Intellicus BI Server (50 Users - Linux): Intellicus BI Server is an enterprise reporting and business intelligence platform with all the features needed to create a comprehensive data analytics platform.

Intellicus BI Server (100 Users - Linux)

Intellicus BI Server (100 Users - Linux): Intellicus BI Server is an enterprise reporting and business intelligence platform with all the features needed to create a comprehensive data analytics platform.

Intellicus BI Server (100 Users)

Intellicus BI Server (100 Users): Intellicus BI Server is an enterprise reporting and business intelligence platform with all the features needed to create a comprehensive data analytics platform.

NATS Certified by Bitnami

NATS Certified by Bitnami: NATS is an open-source, lightweight, high-performance messaging system. It is ideal for distributed systems and supports modern cloud architectures and pub-sub, request-reply, and queuing models.

Neo4j Certified by Bitnami

Neo4j Certified by Bitnami: Neo4j is a high-performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions.

ZooKeeper Certified by Bitnami

ZooKeeper Certified by Bitnami: ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications. Bitnami certifies that our images are secure, up-to-date, and packaged using industry best practices.

Web Applications

AccessData Lab 6.4 for Azure

AccessData Lab 6.4 for Azure: Manage digital forensic investigations in the cloud with AccessData Lab 6.4 for Azure. Power through massive data sets, handle various data types, and run multiple cases at the same time, all within a collaborative, scalable environment.

Axians myOperations Family

Axians myOperations Family: The myOperations family by Axians opens up an era of new freedom and control for IT managers and users. Developed by experienced consultants, it is the product of years of listening to customers’ voices and needs.

Advanced Threat Protection for Office365

BitDam: BitDam protects email from advanced content-borne attacks. BitDam couples deep application learning with alien application code flow detection, preventing illegal attack code hidden in URLs and documents from being run by enterprise applications.

etcd Cluster

etcd Cluster: etcd is a distributed key-value store designed to securely store data across a cluster. This solution provisions a configurable number of etcd nodes to create a fault-tolerant, distributed, reliable key-value store.

HPCBOX Cluster for OpenFOAM

HPCBOX Cluster for OpenFOAM: HPCBOX provides intelligent workflow capability that lets you plug cloud infrastructure into your application pipeline, giving you granular control of your HPC cloud resources and applications.

NATS Cluster

NATS Cluster: NATS is an open-source, lightweight, high-performance messaging system. This solution provisions a configurable number of NATS nodes to create a high-performance distributed messaging system.

NeuVector Container Security Platform

NeuVector Container Security Platform: This multi-vector container security platform is integrated into Docker and Kubernetes and deploys easily on any Azure instance running Docker and/or Kubernetes.

StealthMail Email Security

StealthMail Email Security: StealthMail makes your emails secure and invisible to email relays, hackers, or public internet. StealthMail gives you exclusive control over your encryption keys, data, and access rights so your email communication is fully protected.

Veritas Resiliency Platform (express install)

Veritas™ Resiliency Platform (express install): This Bring Your Own License (BYOL) version of Veritas Resiliency Platform (VRP) provides single-click disaster recovery and migration for any source workload into Azure. Meet your recovery time objectives with confidence.

Vertica Analytics Platform

Vertica Analytics Platform: With Vertica Analytics Platform for Azure, you can tap into core enterprise capabilities with the deployment model that makes sense for your business. Vertica Analytics Platform runs on-premises, on industry-standard hardware, and in the cloud.

ZooKeeper Cluster

ZooKeeper Cluster: ZooKeeper gives you a reliable, centralized register of configuration data and services for distributed applications. This solution provides scalable data storage and provisions a configurable number of nodes that form a fault-tolerant ZooKeeper cluster.

Consulting Services 

AI in business - 1-Day Assessment

AI in business: 1-Day Assessment: Discover which AI technologies can bring business value to your company in this one-day assessment. Topics will include cognitive services, machine learning, and bots.

Azure Architecture 1-Day Workshop

Azure Architecture: 1-Day Workshop: igroup will hold an on-site technical workshop with your IT team and a stakeholder and will conduct a deep dive into your business objectives and software needs, helping you gather requirements and prioritize your goals.

Azure for Data Management & IoT Half-Day Briefing

Azure for Data Management & IoT: Half-Day Briefing: In this free half-day briefing, TwoConnect will discuss how to take your business management and IoT project to the next level in a fast, flexible, and affordable manner.

Azure Governance 1 Day Workshop

Azure Governance: 1 Day Workshop: ClearPointe will hold an Azure governance workshop and consultation to evaluate your current policies and procedures and align with the pillars of a strong Azure governance model.

Azure IaaS Jumpstart 1-Wk Proof of Concept

Azure IaaS Jumpstart - Proof of Concept: T4SPartners' Azure Jumpstart is a fixed-scope services offering designed to help you quickly plan and deploy a hybrid infrastructure spanning your datacenters and the cloud.

Azure IoT 8-Wk Initial Deployment

Azure IoT: 8-Wk Initial Deployment: Work with Lixar to implement an IoT initial deployment for remote monitoring that leverages Azure IoT Central and Lixar’s experience in deploying large-scale IoT solutions.

Azure Optimization 5-Day Assessment (USA)

Azure Optimization: 5-Day Assessment (USA): In this free assessment, our Azure experts personally review (no tools) every aspect of your tenant and produce recommendations to improve performance, lower costs, add availability, and strengthen security.

Big Data Platform 8-Wk PoC

Big Data Platform: 8-Wk PoC: Work with Lixar to implement a Big Data proof of concept that leverages Azure Data Lake and Lixar’s methodology for Azure-based data platform solutions.

Blockchain 5-Wk PoC

Blockchain: 5-Wk PoC: Leveraging Lixar’s approach to implementing blockchain solutions, companies can quickly turn out end-to-end prototypes on Azure using Blockchain-as-a-Service and Azure App Service components.

Cloud Migration - 1 Hour Briefing

Cloud Migration - 1 Hour Briefing: This briefing will provide a high-level view of the Azure platform and how it can transform your datacenter. T4SPartners will demo the solution to show the capabilities that will help potential customers with an end-to-end migration method.

Current State & Solution Design 3-Wk Assessment

Current State & Solution Design: 3-Wk Assessment: Clientek will define a set of minimal marketable features, create a release plan, and outline an architectural approach. At the end of the assessment, you will receive a full project proposal.

Data Intelligence+AI & Machine Learning 4 Wk PoC

Data Intelligence+AI & Machine Learning: 4 Wk PoC: Lixar has a proven methodology for developing machine learning models that work with numeric data to provide hindsight analysis and deeper insight into the data, along with foresight and predictions.

DevOps Strategy and PoC

DevOps Strategy and PoC: Leveraging Azure DevOps, organizations can focus on building applications while automating processes and maintaining insights into the environment and the health of the application.

Essentials for Hands-On Labs 1-Hr Briefing

Essentials for Hands-On Labs: 1-Hr Briefing: This briefing will include an overview and demo of using preconfigured, extended Microsoft Azure environments and/or virtual machines for hands-on labs.

Intercept Managed Security 2-Hr Implementation

Intercept Managed Security: 2-Hr Implementation: Gain control over the security and compliance of your IT environment by monitoring behavior and taking preventive automated actions. You receive a dashboard that displays the latest security status of the components.

Introductions & Technical Deep-Dive 3-Hr Briefing

Introductions & Technical Deep-Dive: 3-Hr Briefing: Learn how Clientek's agile approach to custom software development will provide your organization with the technical advancements needed to reach the next level.

Lift & Shift to Azure Cloud 3-Wk PoC

Lift & Shift to Azure Cloud: 3-Wk PoC: Lixar is offering a lift-and-shift or a digital transformation, giving you a boost to help you move your web-based application to the cloud in a matter of weeks.

Migrate HL7 & HIPAA Apps to Azure 1/2 Day Briefing

Migrate HL7 & HIPAA Apps to Azure 1/2 Day Briefing: In this free briefing, we will discuss how to take your healthcare apps to the next level in a fast, flexible, and affordable manner by leveraging Microsoft Azure and modern technologies.

Sage on Azure - 5-Day Implementation

Sage on Azure: 5-Day Implementation: Move your on-premises Sage accounting application to Microsoft Azure for centralized access anytime, anywhere. This lift-and-shift implementation is for technical and business leaders and is delivered remotely.

SharePoint Add-in development 5-Wk Implementation

SharePoint Add-in development: 5-Wk Implementation: SharePoint Add-ins let you customize your SharePoint sites’ behavior to your specific business needs. Add-ins will extend boundaries and improve your SharePoint experience.

SQL Management Studio Add-in 6-Wk Implementation

SQL Management Studio Add-in: 6-Wk Implementation: SQL Server Management Studio Add-ins let you safely and effectively customize your SSMS behavior to your specific business needs. Add-ins will extend boundaries and improve your SQL Database management experience.

Supply Chain Logistics to Azure - 1/2 Day Briefing

Supply Chain Logistics to Azure - 1/2 Day Briefing: In this free briefing, we will discuss how to take your supply chain logistics apps to the next level in a fast, flexible, and affordable manner by leveraging modern Microsoft technologies.

Use Azure to Connect Everything Half-Day Briefing

Use Azure to Connect Everything: Half-Day Briefing: This free half-day briefing will cover how TwoConnect can help you use Azure and related Microsoft technologies to seamlessly connect all your apps to one another.

Search MSRC fix for TFS 2017 Update 3

Issue description: The Service Endpoints feature was introduced in TFS 2018. With that feature, an Elasticsearch URL can be configured as an endpoint by any team member (Contributor). As a result, Elasticsearch index data (which serves as a backend for the search feature) can be accessed or modified by server-side tasks running on the TFS server. This would mean... Read More

How can I pause my code in Visual Studio?: Breakpoints FAQ


Have you ever found a bug in your code and wanted to pause code execution to inspect the problem? If you are a developer, there’s a strong chance you have experienced or will experience this issue many, many times. While the short and sweet answer to this problem is to use a breakpoint, the longer answer is that Visual Studio actually provides multiple kinds of breakpoints and methods that let you pause your code depending on the context! Based on the different scenarios you may experience while debugging, here are some of the various ways to pause your code and set or manage a breakpoint in Visual Studio 2017:

While my app is running, how can I pause to inspect a line of code that may contain a bug?

The easiest way to pause or “break” execution to inspect a line of code is to use a breakpoint, a tool that allows you to run your code up to a specified line before stopping. Breakpoints are an essential aspect of debugging, which is the process of detecting and removing errors and bugs from your code.

  1. Click in the left margin next to the line of code you would like to stop at, or select the line and press F9.
  2. Run your code or hit Continue (F5) and your program will pause prior to execution at the location you marked.


Where can I manage and keep track of all my breakpoints?

If you have set multiple breakpoints located in different areas or files of your project, it can be hard to find and keep track of them. The Breakpoints Window is a central location where you can view, add, delete, and label your breakpoints. If it's not already visible, this window can be accessed by navigating to the top toolbar in Visual Studio and selecting Debug –> Windows –> Breakpoints (or CTRL + ALT + B).


How can I stop execution only when my application reaches a specific state?

Conditional Breakpoints are an extended feature of regular breakpoints that allow you to control where and when a breakpoint executes by using conditional logic. If it’s difficult or time-consuming to manually recreate a particular state in your application to inspect a bug, conditional breakpoints are a good way to mitigate that process. Conditional breakpoints are also useful for determining the state in your application where a variable is storing incorrect data. To create a conditional breakpoint:

  1. Set a breakpoint on the desired line.
  2. Hover over the breakpoint and select the Settings gear icon that appears.
  3. Check the Conditions option. Make sure the first dropdown is set to Conditional Statement.
  4. Input valid conditional logic for when you want the break to occur and hit enter to save the breakpoint.


How can I break a loop at a certain iteration when debugging?

You can select the Hit Count option when creating a conditional breakpoint (see above) to specify a specific loop iteration where you want to halt your code. Instead of having to manually step through each iteration, you can use hit count to break at the relevant iteration where your code starts misbehaving.


How can I break at the start of a function that I know the name of but not its location in my code?

Though a standard breakpoint can be used here, function breakpoints can also be used to break at the start of a function call. Function breakpoints can be used over other breakpoints when you know the function’s name but not its location in code. If you have multiple overloaded methods or a function contained within several different projects, function breakpoints are a good way to avoid having to manually set a breakpoint at each function call location. To create a function breakpoint:

  1. Select Debug –> New Breakpoint –> Break at Function.
  2. Input the desired function name and hit enter. These breakpoints can also be created and viewed via the Breakpoints Window.


How can I break only when a specific object’s property or value changes?

If you are debugging in C++, data breakpoints can be used to stop execution when a particular variable stored at a specific memory address changes. Exclusive to C++, these can be set via the Watch Window or the Breakpoints Window. For more info on data breakpoints, check out this blog post.


If you are debugging managed code, a current workaround and equivalent alternative to data breakpoints is to use an Object ID with a conditional breakpoint. To perform this task:

  1. In break mode, right click on the desired object and select Make Object ID, which will give you a handle to that object in memory.
  2. Add a conditional breakpoint to the desired setter where the conditional statement is “this == $[insert handle here].”
  3. Press Continue (F5) and you will now break in the setter when that particular property value changes for the desired instance.
  4. In the Call Stack, double click on the previous frame to view the line of code that is changing the specific object’s property.


How can I break when a handled or unhandled exception is thrown?

When exceptions are thrown at runtime, you are typically given a message about it in the console window and/or browser, but you would then have to set your own breakpoints to debug the issue. However, Visual Studio also allows you to break when a specified exception is thrown automatically, regardless of whether it is being handled or not.

You can configure which thrown exceptions will break execution in the Exception Settings window.


Can I set a breakpoint in the call stack?

If you are using the call stack to examine your application’s execution flow or view function calls currently on the stack, you may want to use call stack breakpoints to pause execution at the line where a calling function returns.

  1. Open the call stack (Debug –> Windows –> Call Stack, or CTRL + ALT + C)
  2. In the call stack, right-click on the calling function and select Breakpoint –> Insert Breakpoint (F9).


How can I pause execution at a specific assembly instruction?

If you are examining the disassembly window to inspect method efficiency or unexplained debugger behavior, or you just want to study how your code works behind the scenes when translated into assembly code, disassembly breakpoints may be useful to you. Disassembly breakpoints can be used to break at a specific line of assembly code, accessible only when code execution is already paused. To place a disassembly breakpoint:

  1. Open the disassembly window (Debug –> Windows –> Disassembly, or Ctrl + Alt + D)
  2. Click in the left margin at the line you want to break at (or press F9).


Excited to try out any of these breakpoints? Let us know in the comments!

For more info on Visual Studio 2017 breakpoints, check out the official documentation. For any issues or suggestions, please let us know via Help > Send Feedback > Report a Problem in the IDE.

Leslie Richardson Program Manager, Visual Studio Debugging & Diagnostics

Leslie is a Program Manager on the Visual Studio Debugging and Diagnostics team, focusing primarily on improving the overall debugging experience and feature set.


Announcing TypeScript 3.1 RC

Today we’re happy to announce the availability of the release candidate (RC) of TypeScript 3.1. Our intent with the RC is to gather any and all feedback so that we can ensure our final release is as pleasant as possible.

If you’d like to give it a shot now, you can get the RC through NuGet, or use npm with the following command:

npm install -g typescript@rc

You can also get editor support for the RC in Visual Studio, Visual Studio Code, and Sublime Text.

Let’s look at what’s coming in TypeScript 3.1!

Mappable tuple and array types

Mapping over values in a list is one of the most common patterns in programming. As an example, let’s take a look at the following JavaScript code:

function stringifyAll(...elements) {
    return elements.map(x => String(x));
}

The stringifyAll function takes any number of values, converts each element to a string, places each result in a new array, and returns that array. If we want to have the most general type for stringifyAll, we’d declare it as so:

declare function stringifyAll(...elements: unknown[]): Array<string>;

That basically says, “this thing takes any number of elements, and returns an array of strings”; however, we’ve lost a bit of information about elements in that transformation.

Specifically, the type system doesn't remember the number of elements the user passed in, so our output type doesn't have a known length either. We can do something like that with overloads:

declare function stringifyAll(...elements: []): string[];
declare function stringifyAll(...elements: [unknown]): [string];
declare function stringifyAll(...elements: [unknown, unknown]): [string, string];
declare function stringifyAll(...elements: [unknown, unknown, unknown]): [string, string, string];
// ... etc

Ugh. And we didn’t even cover taking four elements yet. You end up special-casing all of these possible overloads, and you end up with what we like to call the “death by a thousand overloads” problem. Sure, we could use conditional types instead of overloads, but then you’d have a bunch of nested conditional types.

If only there was a way to uniformly map over each of the types here…

Well, TypeScript already has something that sort of does that. TypeScript has a concept called a mapped object type which can generate new types out of existing ones. For example, given the following Person type,

interface Person {
    name: string;
    age: number;
    isHappy: boolean;
}

we might want to convert each property to a string as above:

interface StringyPerson {
    name: string;
    age: string;
    isHappy: string;
}

function stringifyPerson(p: Person) {
    const result = {} as StringyPerson;
    for (const prop in p) {
        result[prop] = String(p[prop]);
    }
    return result;
}

Though notice that stringifyPerson is pretty general. We can abstract the idea of Stringify-ing types using a mapped object type over the properties of any given type:

type Stringify<T> = {
    [K in keyof T]: string
};

For those unfamiliar, we read this as "for every property named K in T, produce a new property of that name with the type string."

Now we can rewrite our function to use that:

function stringifyProps<T>(p: T) {
    const result = {} as Stringify<T>;
    for (const prop in p) {
        result[prop] = String(p[prop]);
    }
    return result;
}

stringifyProps({ hello: 100, world: true }); // has type `{ hello: string, world: string }`

Seems like we have what we want! However, if we tried changing the type of stringifyAll to return a Stringify:

declare function stringifyAll<T extends unknown[]>(...elements: T): Stringify<T>;

And then tried calling it on an array or tuple, we’d only get something that’s almost useful prior to TypeScript 3.1. Let’s give it a shot on an older version of TypeScript like 3.0:

let stringyCoordinates = stringifyAll(100, true);

// No errors!
let first: string = stringyCoordinates[0];
let second: string = stringyCoordinates[1];

Looks like our tuple indexes have been mapped correctly! Let's grab the length now and make sure that's right:

   let len: 2 = stringyCoordinates.length
//     ~~~
// Type 'string' is not assignable to type '2'.

Uh. string? Well, let’s try to iterate on our coordinates.

   stringyCoordinates.forEach(x => console.log(x));
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// Cannot invoke an expression whose type lacks a call signature. Type 'String' has no compatible call signatures.

Huh? What's causing this gross error message? Well, our Stringify mapped type not only mapped our tuple members, it also mapped over the methods of Array, as well as the length property! So forEach and length both have the type string!

While technically consistent in behavior, the majority of our team felt that this use-case should just work. Rather than introduce a new concept for mapping over a tuple, mapped object types now just “do the right thing” when iterating over tuples and arrays. This means that if you’re already using existing mapped types like Partial or Required from lib.d.ts, they automatically work on tuples and arrays now.

Properties on function declarations

In JavaScript, functions are just objects. This means we can tack properties onto them as we please:

export function readFile(path) {
    // ...
}

readFile.async = function (path, callback) {
    // ...
}

TypeScript’s traditional approach to this has been an extremely versatile construct called namespaces (a.k.a. “internal modules” if you’re old enough to remember). In addition to organizing code, namespaces support the concept of value-merging, where you can add properties to classes and functions in a declarative way:

export function readFile() {
    // ...
}

export namespace readFile {
    export function async() {
        // ...
    }
}

While perhaps elegant for their time, the construct hasn’t aged well. ECMAScript modules have become the preferred format for organizing new code in the broader TypeScript & JavaScript community, and namespaces are TypeScript-specific. Additionally, namespaces don’t merge with var, let, or const declarations, so code like the following (which is motivated by defaultProps from React):

export const FooComponent = ({ name }) => (
    <div>Hello! I am {name}</div>
);

FooComponent.defaultProps = {
    name: "(anonymous)",
};

can’t even simply be converted to

export const FooComponent = ({ name }) => (
    <div>Hello! I am {name}</div>
);

// Doesn't work!
namespace FooComponent {
    export const defaultProps = {
        name: "(anonymous)",
    };
}

All of this collectively can be frustrating since it makes migrating to TypeScript harder.

Given all of this, we felt that it would be better to make TypeScript a bit "smarter" about these sorts of patterns. In TypeScript 3.1, for any function declaration or const declaration that's initialized with a function, the type-checker will analyze the containing scope to track any added properties. That means that both of the examples above, our readFile function as well as our FooComponent, work without modification in TypeScript 3.1!

As an added bonus, this functionality in conjunction with TypeScript 3.0’s support for JSX.LibraryManagedAttributes makes migrating an untyped React codebase to TypeScript significantly easier, since it understands which attributes are optional in the presence of defaultProps:

// TypeScript understands that both are valid:
<FooComponent />
<FooComponent name="Nathan" />

Breaking Changes

Our team always strives to avoid introducing breaking changes, but unfortunately there are some to be aware of for TypeScript 3.1.

Vendor-specific declarations removed

TypeScript 3.1 now generates parts of lib.d.ts (and other built-in declaration file libraries) using Web IDL files provided by the WHATWG DOM specification. While this means that lib.d.ts will be easier to keep up-to-date, many vendor-specific types have been removed. We've covered this in more detail on our wiki.

Differences in narrowing functions

Using the typeof foo === "function" type guard may provide different results when intersecting with relatively questionable union types composed of {}, Object, or unconstrained generics.

function foo(x: unknown | (() => string)) {
    if (typeof x === "function") {
        let a = x()
    }
}

You can read more on the breaking changes section of our wiki.

Going forward

We’re looking forward to hearing about your experience with the RC. As always, keep an eye on our roadmap to get the whole picture of the release as we stabilize. We expect to ship our final release in just a few weeks, so give it a shot now!

Customizing Azure Blueprints to accelerate AI in healthcare


Artificial Intelligence (AI) holds major potential for healthcare, from predicting patient length of stay to diagnostic imaging, anti-fraud, and many more use cases. To be successful in using AI, healthcare needs solutions, not projects. Learn how you can close the gap to your AI in healthcare solution by accelerating your initiative using Microsoft Azure blueprints.

To rapidly acquire new capabilities and implement new solutions, healthcare IT and developers can now take advantage of industry-specific Azure Blueprints. Blueprints include resources such as example code, test data, security, and compliance support. These are packages that include reference architectures, guidance, how-to guides, and other documentation, as well as executable code and sample test data built around a key use case of interest to healthcare organizations. Blueprints also contain components to support privacy, security, and compliance initiatives, including threat models, security controls, responsibility matrices, and compliance audit reports.

You can learn more by attending the Accelerating Artificial Intelligence (AI) in Healthcare using Microsoft Azure Blueprints Webcast - Part 2: Customization. Building on the introductory Microsoft Azure Blueprints webcast, this session dives deeper, focusing on customizing the blueprints to your unique needs and organization. This session is intended for healthcare providers, payers, pharmaceuticals, and life science organizations. Key roles include senior technical decision makers, IT managers, cloud architects, and developers.

Key insights

Next steps

We invite you to register or watch the Accelerating AI in Healthcare using Microsoft Azure Blueprints Webcast - Part 2: Customization on demand.

Video Indexer – General availability and beyond


Earlier today, we announced the general availability (GA) of Video Indexer. This means that our customers can count on all the metadata goodness of Video Indexer to always be available for them to use when running their business. However, this GA is not the only Video Indexer announcement we have for you. In the time since we released Video Indexer to public preview in May 2018, we never stopped innovating and added a wealth of new capabilities to make Video Indexer more insightful and effective for your video and audio needs.

Delightful experience and enhanced widgets

The Video Indexer portal already includes insights and timeline panes that enable our customers to easily review and evaluate media insights. The same experience is also available in embeddable widgets, which are a great way to integrate Video Indexer into any application.

We are now proud to release revamped insight and timeline panes. The new insight and timeline panes are built to accommodate the growing number of insights in Video Indexer and are automatically responsive to different form factors.


By the way, with the new insight pane we have also added visualizations for the already existing keyframes extraction capability, as well as new emotion detection insights. Which brings us to the next set of announcements.

Richer insights unlocked by new models

The core of Video Indexer is of course the rich set of cross-channel (audio, speech, and visual) machine learning models it provides. We are working hard to continue adding more models, and make improvements to our existing models, in order to provide our customers with more insightful metadata on their videos!

Our most recent additions to Video Indexer’s models are the new emotion detection and topic inferencing models. The new emotion detection model detects emotional moments in video and audio assets based on two channels, speech content and voice tonality. It divides them into four emotional states - anger, fear, joy, and sadness. As with other insights detected by Video Indexer, we provide the exact timeframe for each emotion detected in the video and the results are available both in the JSON file we provide for easy integration and in the insight and timeline experiences to be reviewed in the portal, or as embeddable widgets.


Another important addition to Video Indexer is the ability to do topic inferencing, that is, to understand the high-level topics of the video or audio files based on the spoken words and visual cues. This model is different from the keywords extraction model that already exists in Video Indexer: it detects topics in various granularities (e.g. Science, Astronomy, or Missions to Mars) that are inferred from the asset but do not necessarily appear in it, while the keywords extracted are specific terms that actually appeared in the content. Our topics catalog for this model is sourced from multiple resources, including the IPTC media topics taxonomy, in order to provide the media-standard topics.

Note that today's topics exist in the JSON file. To try them out, simply download the file using the curly braces button below the player or from the API, and search for the topics element. Stay tuned for updates on the new user portal experience we are working on!
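For instance, once you have the JSON file, pulling out the inferred topics takes only a few lines of Python. The summarizedInsights/topics path and the field names below are assumptions about the insights format, so inspect your own file if the shape differs.

import json

# Load the insights file downloaded from the portal or the API.
with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

# Field names are assumed; verify them against your downloaded JSON.
for topic in insights.get("summarizedInsights", {}).get("topics", []):
    print(topic.get("name"), topic.get("confidence"))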

In addition to the newly released models, we are investing in the improvement of existing models. One of those models is the well-loved celebrity recognition model, which we recently enhanced to cover approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Try it out, and who knows, maybe you are one of them!


Another model that was recently enhanced is the custom language model that allows each of our customers to extend the speech-to-text performance of Video Indexer to its own specific content and industry terms. Starting last month, we extended this custom language support to 10 different languages including English, Spanish, Italian, Arabic, Hindi, Chinese, Japanese, Portuguese, and French.

Another important model we recently released is the automatic identification of the spoken language in video. With that new capability customers can easily index batches of videos, without manually providing their language. The model automatically identifies the main language used and invokes the appropriate speech-to-text model.


Easily manage your account

Video Indexer accounts rely on Azure Media Services accounts and use their different components as infrastructure to perform encoding, computation, and streaming of the content as needed.

For easier management of the Azure Media Services resources used by Video Indexer, we recently added visibility into the relevant configuration and states from within the Video Indexer portal. From here you can see at any given time what media resource is used for your indexing jobs, how many reserved units are allocated for indexing and of what type, how many indexing jobs are currently running, and how many are queued.

Additionally, if we identify any configuration that might interfere with your indexing business needs, we will surface those as warnings and errors with a link to the location within your Azure portal to tend to the identified issue. This may include cases such as Event Grid notification registration missing in your subscription, Streaming Endpoints disabled, Reserved Units quantity, and more.

To try it out, simply go to your account settings in the Video Indexer portal and choose the account tab.

In that same section, we also added the ability to auto-scale the computation units used for indexing. You can allocate the maximum number of computation reserved units in your Media Services account, and Video Indexer will stop and start them automatically as needed. As a result, you won’t pay for idle time, and you won’t have to wait for indexing jobs to complete when the indexing load is high.

Another addition that can help customers who wish to only extract insights, without the need to view the content, is the no-streaming option. If that is your scenario, you can now use this newly added parameter while indexing to avoid encoding costs and to get faster indexing. Please note that selecting this option will prevent your video from playing in the portal player, so if the portal or widgets are leveraged in your solution, you will probably want to keep streaming enabled.
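
Reusing the placeholders from the upload sketch earlier in this post, our understanding is that this option is a parameter on the upload call; the streamingPreset name and its "NoStreaming" value are assumptions to check against the current API reference.

resp = requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={
        "accessToken": ACCESS_TOKEN,
        "name": "my-video",
        "videoUrl": "https://example.com/video.mp4",
        "streamingPreset": "NoStreaming",  # insights only; skip playback encoding
    },
)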

Minimal integration effort

With the public preview a few months back, we also released a new and improved Video Indexer v2 RESTful API. This API enables quick and easy integration of Video Indexer into your application, in either a client-to-server or server-to-server architecture.
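
Calls to the API are authorized with short-lived access tokens. Below is a hedged sketch of fetching one for server-to-server scenarios using an API subscription key; the endpoint shape follows our reading of the v2 documentation, so verify it against the current reference.

import requests

LOCATION, ACCOUNT_ID = "trial", "<account-id>"
SUBSCRIPTION_KEY = "<api-subscription-key>"  # from the API developer portal

resp = requests.get(
    f"https://api.videoindexer.ai/auth/{LOCATION}/Accounts/{ACCOUNT_ID}/AccessToken",
    params={"allowEdit": "true"},  # request an editing-capable token
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
)
access_token = resp.json()  # a time-limited token to pass as accessToken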

Following that API, we recently released a new Video Indexer v2 connector for Logic Apps and Flow. You can set up your own custom Video Indexer workflows to further automate the process of extracting deep insights from your videos quickly and easily without writing a single line of code!

Learn more about the new connector and try out example templates.

To make the integration with Video Indexer fit your current workflow and existing infrastructure, we also expanded our closed caption and subtitle file format support with the addition of the SubRip Text (SRT) and W3C Timed Text (TTML) file formats. Get more information on how to extract the different caption and subtitle formats.
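
A hedged sketch of downloading SRT captions through the API (the endpoint and format values reflect our reading of the v2 reference, and the IDs are placeholders):

import requests

LOCATION, ACCOUNT_ID, VIDEO_ID = "trial", "<account-id>", "<video-id>"
ACCESS_TOKEN = "<access-token>"

resp = requests.get(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"
    f"/Videos/{VIDEO_ID}/Captions",
    params={"accessToken": ACCESS_TOKEN, "format": "Srt"},  # or "Ttml"
)
with open("captions.srt", "w", encoding="utf-8") as f:
    f.write(resp.text)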

What's next?

The GA launch is just the beginning. As you can see from this blog there is a lot that we have already done, yet there is a whole lot more that we are actively working on! We are excited to continue this journey together with our partners and customers to enhance Video Indexer and make your video and audio content more discoverable, insightful, and valuable to you.

Have questions or feedback? We would love to hear from you! Use our UserVoice to help us prioritize features, or email VISupport@Microsoft.com for any questions.

From Microsoft Azure to everyone attending IBC Show 2018 – Welkom in Amsterdam!


Media and entertainment industry conferences are by far some of my favorites. Creativity, disruption, opportunity, and technology – particularly cloud, edge, and AI – are everywhere. It’s been exciting to see those things come together at NAB 2018, SIGGRAPH, and now IBC Show 2018. Together with teams from across Microsoft, I’m looking forward to IBC Show and the chance to learn, collaborate, and advance the state of this dynamic industry.

At this year’s IBC we’re excited to announce the general availability of Video Indexer, our advanced metadata extraction service. Announced as public preview earlier this year, Video Indexer provides a rich set of cross-channel (audio, speech, and visual) learning models. Check out Sudheer’s blog for more information on all the new capabilities including emotion detection, topic inferencing, and improvements to the ever-popular celebrity recognition model that recognizes over one million faces.

Video Indexer is just one of the ways Azure is helping customers like Endemol Shine, MultiChoice, RTL, and Ericsson with their content needs. At IBC 2018, our teams are excited to share new ways that Azure, together with solutions from our partners, can address common media workflow challenges.

How? Well, read on…

More visual effects and animations mean you need to render more, faster than ever before

With Azure you can burst your render jobs from on-premises to the cloud, more easily and securely than you may have thought. Come see a demonstration of how to use Azure Batch, Avere vFXT for Azure (now in public preview), and your favorite rendering applications. We’ll be using Autodesk Maya to accelerate your productions using cloud computing. You can start running jobs right away with our pre-built images, or bring your own custom VM boot images. We’ll take care of the complexities of licensing and there is no need to move your data. Render more with Azure.

Accelerated production schedules with remote contributions to produce more cost effective, scalable cloud workflows

Together with Dejero, Avid, Haivision, Hiscale, Make.TV, and Signiant we’re showcasing how to make “live production in the cloud” a reality. This demonstration uses a live video stream from the field which is sent to Microsoft Azure by a Dejero EnGo mobile transmitter. Dejero dynamically receives the stream in Azure, transcodes it, and delivers it to Make.TV’s Live Video Cloud, which is used to curate and route the content to any number of destinations from within the Avid MediaCentral Cloud UX. Hiscale’s cloud-based transcoding solution enables live recording into customers’ editing and asset management environments where high- and low-resolution files can be stored in Avid NEXIS. For file-based workflows, Signiant’s technology is used to accelerate ingest. Then, HiScale transcodes the files and stores the assets and metadata in Avid’s Interplay MAM. Learn more about live production in the cloud.

Ever-growing content libraries mean you need more intelligent content management

Our customers consistently tell us they want faster, better ways to store, index, manage, and unlock the value of their content libraries. At IBC we’ll demonstrate how Video Indexer allows you to easily extract metadata, such as emotion and topics, from audio and video files. Then we will show you how to access those assets, and the metadata, through our updated portal experience, API, or through third-party asset managers, including Avid, Dalet, and eMAM. This ensures that those deep insights are available right where editors and production crews need and expect them. You can also surface insights in your own custom apps through a new embeddable insights pane or our API. Video Indexer’s models are customizable and extensible, so they can evolve as your needs do.

Video Indexer leverages the Azure Storage platform, which provides cost effective and simple ways to ingest petabytes of content using our own client tools, as well as third-party file transfer accelerators over the Internet or a private ExpressRoute connection. Where offline transfer is faster or cheaper, we have a range of Data Box solutions to handle anything from single-drive to multi-PB bulk transfers. Our offline media import partners are ready to assist with anything from reels of analog video to thousands of LTO tapes. Azure Storage is designed to offer between eleven and sixteen 9s of durability, depending on the replication option you choose. At IBC we’re showing off lifecycle management policies (in preview), which let you control retention periods and automatic tiering between hot, cool, and archive tiers so that your data is always protected and stored in the most cost-effective way.

Fans around the world want to see their favorite event in ever increasing detail, live

At IBC we’re showing how Azure Media Services can help deliver UHD/4K HLG streams from the cloud. First, Media Excel’s HERO™ on-premises encoders push a UHD/4K HEVC contribution feed over the open Internet using the open source low latency streaming protocol, SRT. This feed is received by Media Excel software encoders running in Azure, transcoded into multiple bitrate HEVC 60 fps streams, and sent to Azure Media Services for dynamic packaging (DASH/HLS with CMAF) and encryption (PlayReady/Widevine/FairPlay). It’s then ready for delivery using Azure CDN or a CDN of your choice.

MultiChoice, a leading provider of sports, movies, series, and general entertainment channels to over 12 million subscribers in Sub-Saharan Africa (through its DStv services), recently completed a pilot using a similar solution to deliver the first UHD live streaming event in South Africa. They found that Microsoft Azure delivered on the promise of a 3rd party managed cloud solution with real-world effectiveness.

Media solutions from our broad partner ecosystem

Whatever your media challenges, our partners are ready to help with solutions, planning, content creation, management, and monetization. You can learn more about these solutions, and the new capabilities in Video Indexer, in Sudheer’s blog.

The future is cloudy, but bright

Enabling the scenarios above is another step towards the not too distant future where cloud, edge, and AI technologies are put to work for you. For a sneak peek into that future, check out Cloud Champions, brought to you by Microsoft Azure and our partners.

That’s a wrap

If you’re attending IBC Show 2018 please stop by our booth at Hall 1, Booth #C27 to:

  • Chat with product team representatives from Azure Media Services, Azure Storage, Avere, Azure HPC, Cognitive Services, and Microsoft Skype for Broadcast.
  • Visit with partners from Avid, GreyMeta, Forbidden, Live Arena, Make.TV, Ooyala, Prime Focus Technologies, Teradici, Streaming Buzz, uCast, and Wowza.
  • See some great customer, partner, and product team presentations. To learn more, see the detailed schedule for Microsoft at IBC 2018.

Thanks for reading and have a great show!

Tad

Microsoft Azure Media Services and our partners Welkom you to IBC 2018


Content creators and broadcasters are increasingly embracing Cloud’s global reach, hybrid model and elastic scale. These attributes combined with AI’s ability to accelerate insights and time to market across content creation, management, and monetization are truly transformative.

At the International Broadcasters Conference (IBC) Show 2018, we are focused on bringing Cloud + AI together to help you overcome common media workflow challenges.

Video Indexer, generally available starting today, is a great example of this Cloud + AI focus. It brings together the power of the cloud and Microsoft AI to intelligently analyze your media assets, extract insights, and add metadata. It makes it easier to understand your vast content library, with more than 20 new and improved models, easy-to-use interfaces, a single API, and simplified account management. I have been part of the Video Indexer team since its inception and could not be more excited to see it reach GA. I’m also incredibly proud of the work the team has done to solve real customer problems and make AI tangible in this easy-to-use, elegant solution.

Our partners are already innovating on top of Video Indexer and extending Azure Media Services to advance the state of the art in cloud-based media services and workflows. You can learn more about Video Indexer and our new partner solutions below. You can also check out Tad Brockway’s IBC blog to learn more about new solutions from Azure and our partners that enable compelling workflows across the media value chain.

Microsoft Azure announces general availability of Video Indexer

At IBC 2018, we are thrilled to announce that Video Indexer (VI) is generally available and ready to cater to our media customers’ changing and growing needs.

Announced as a public preview at Microsoft’s Build 2018 conference in May, Video Indexer is an AI-based advanced metadata extraction service. This latest addition to Azure Media Services enables customers to extract insights from video and audio files through a rich set of machine learning algorithms. Those insights can then be used to improve content discoverability and accessibility, create new monetization opportunities, and unlock data-driven experiences.

At its core, Video Indexer orchestrates a cross-channel machine learning analysis (audio, speech, and vision) pipeline for video and audio files, using models that are continuously updated by Microsoft Research. These models bring the power of machine learning to you, enabling you to benefit without having to acquire expertise. Furthermore, our cross-channel models enable even deeper and more accurate insights to be uncovered.

Customers and partners such as AVID, Ooyala, Dalet, Box, Endemol Shine Group, AVROTROS, and eMAM are already using the Video Indexer service for speech-to-text and closed captioning in ten different languages, visual text recognition (OCR), keywords extraction, label identification, out-of-the-box and custom brand detection, face identification, celebrity and custom face recognition, sentiment analysis, key frame detection, and more.

At GA we are, of course, adding new capabilities. The Emotion recognition model detects emotional moments in video and audio assets based on speech content and voice tonality. Our Topic inferencing model is built to understand the high-level topics of the video or audio files based on spoken words and visual cues. Topics in this model are sourced from IPTC taxonomy among others to align to industry standards. We’ve also enhanced the well-loved celebrity recognition model which now covers one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers.

We make it easy to try out Video Indexer – just upload, process and review video insights using our web portal. You can even customize models in a highly visual way without having to write a line of code. There is no charge to use the portal; however, if you find the experience suits your needs you can connect to an Azure account and use it in production. Existing customers will find new insight and timeline panels that are available in the portal and to embed. These sleek new panels are built to support the growing number of insight visuals and are responsive to different screen form factors.

Get started today using Video Indexer to enable deep search on video and audio archives, reduce editing and content creation costs, and provide innovative customer experiences.

Azure and our partners address your challenges at IBC 2018

At this year’s IBC we’re showcasing progress towards a future where Cloud, Edge, and AI help the media industry compete and thrive.

First up we’ve partnered with Dejero, Avid, Haivision, Hiscale, Make.TV, and Signiant to showcase “live production in the cloud.” Live cloud workflows have historically been a challenge and this demonstration will take us one step closer.

We’re also showcasing how Azure Media Services can help deliver UHD/4K HLG streams from the cloud in partnership with Media Excel.

MultiChoice, a leading pay-TV operator for more than 12 million subscribers in Sub-Saharan Africa recently completed a pilot using a similar solution to deliver the first UHD live streaming event in South Africa. They found that Microsoft Azure delivered on the promise of a third party managed cloud solution with real-world effectiveness.

You can learn more about these solutions at our booth at IBC or from this blog.

Media solutions powered by a broad ecosystem

From creation to management and monetization, our partners continue to innovate for you.

Content creation

  • The latest version of Avid MediaCentral, which just shipped, features the powerful MediaCentral | Search app, which makes all production and archived assets, stored across multiple local and remote systems, accessible to every in-house and remote contributor. Other features in the latest release include the rundown app for story and sequence editing, the social media research app for quickly monitoring a story and working it into your rundown, the publish app for distributing content quickly across social media platforms, and MediaCentral | Ingest for enabling OP1A transcoding into growing media for editing while capturing and playing out with FastServe | Playout. These services are all built on a new backend on Azure that enables faster deployment and high availability.
  • Nimble Collective recently launched its Nimble Studio offering, which enables studios and enterprise customers to harness their favorite tools through a powerful and secure pipeline. They recently visited the Microsoft Store in Vancouver to demonstrate how simple it is to get a studio up and running.

Content management

  • Prime Focus Technologies (PFT) has partnered with Microsoft to further strengthen its flagship product, CLEAR™ Media ERP, which currently handles 1.5 million hours of content annually. As part of the collaboration, PFT is migrating its data storage to Azure to provide uninterrupted service to CLEAR customers. Leveraging Azure’s best-in-class cloud services, scale, reach, and rich AI capabilities, CLEAR offers a reliable, secure, scalable, intelligent and media-savvy ERP solution globally. PFT will showcase CLEAR integrated with Microsoft’s powerful Azure cloud services at IBC 2018 - booth #7.C05.
  • Dalet has integrated Video Indexer into Dalet Media Cortex, a cloud-based SaaS, to enable existing Dalet Galaxy customers to consume Cognitive Services on demand. Dalet Media Cortex uses Video Indexer generated metadata to augment the content production experience and its effectiveness. For example, the new Dalet Discovery Panel provides editors with contextual suggestions of relevant content matching their work in progress.
  • Empress Media Asset Management (eMAM) has integrated Video Indexer into its flagship product, eMAM. eMAM is a flexible, scalable media asset management system that can work natively in Azure or in hybrid environments. Organizations can now use Video Indexer to enrich the metadata for current or legacy content.
  • Zoom Media is a Dutch startup specializing in Automated Speech Recognition (ASR) technology.  They have extended the speech-to-text capabilities of Video Indexer, currently supporting ASR for ten languages, to include Dutch, Swedish, Danish, and Norwegian. Microsoft and Zoom Media will present these new features at IBC Show 2018 in Amsterdam.

Content distribution

  • Built on Azure, uCast’s data-driven, OTT Video Platform supports turnkey AVOD, SVOD, TVOD, and Live functionality. uCast’s content monetization platform recently launched Sports Illustrated’s signature OTT video service SI.TV on Azure. It will also host a new Advertising Video-on-Demand (AVOD) service for an Indonesian-based mobile telecommunications provider and their more than 60 million subscribers.
  • Nowtilus, a digital video distribution solutions provider, has deployed its server-side ad-insertion (SSAI) technology for on-demand and live-linear scenarios on Azure. It’s integrated with platforms such as uCast and waipu.tv (Exaring) to offer industry’s best, standards-compliant stream personalization and ad-targeting in TV and VOD.
  • StreamingBuzz has developed the StreamingSportzz Fan App, an Azure-based solution that offers a highly interactive experience for sports fans, athletes, and coaches. The experience includes multiple angles, 360 video, VR, and AR modes with statistics and match analysis. They have also created BuzzOff, an innovative Azure-based streaming solution for in-flight entertainment that enables passengers to stream offline DRM-protected content to their devices without the need to download an app.
  • Media Excel has introduced a hybrid architecture for live contribution and transcoding of UHD adaptive services based on its HERO product line. This architecture enables PayTV operators and content providers to deploy secure scalable live UHD OTT workflows on Azure by combining multiple in-sync encoder instances for a highly-redundant yet cost-effective offering. For a live demo of the end-end solution, please visit Microsoft (1.C27) and Media Excel (14.G18) booths at IBC.
  • Telestream and Microsoft are partnering closely to support content producers, owners, and distributors, as well as corporations, in their journey to cloud video production and OTT distribution. With the recently launched Vantage and Telestream Cloud solutions on Azure, Telestream offers comprehensive hybrid and cloud-based media processing in Azure to enable broadcasters and content producers to reduce CAPEX, increase agility, and enhance security for global content production. Telestream has developed Wirecast S specifically for Microsoft Stream within Office 365 and is developing an Azure-based video quality monitoring solution for virtual, cloud-based deployments.

Industry updates

Microsoft has joined the Secure Reliable Transport (SRT) Alliance! Having pioneered adaptive streaming for the industry and seen the benefits of broad industry adoption, we believe in the need for a simple, scalable and efficient ingest protocol for supporting large scale live events from the cloud. SRT is a proven video transport that lends itself well for cloud-based live workflows. By joining the SRT Alliance, we hope to create broader support, enthusiasm and adoption.

Come see us at IBC 2018

If you’re attending IBC 2018, stop by Hall 1, Booth #C27. In addition to great demos and product team representatives from Azure Media Services, Azure Storage, Avere, Azure Cognitive Services, PlayReady, and Skype for Broadcast, we will also feature the following partner showcases:

  • Avid

  • Blackbird

  • GreyMeta

  • LiveArena

  • StreamingBuzz

  • Make.TV

  • Ooyala

  • Teradici

  • uCast Global

  • Wowza Media Systems

  • X.news

Microsoft will also feature an in-booth presentation theatre with customer, partner and product presentations scheduled throughout the day. Check out a detailed schedule of all presentations and speakers.

If you are not attending the conference but would like to learn more about our media services, follow the Azure Blog to stay up to date on new announcements.

Finally, a big thank you to all our dedicated and growing community of developers, customers, and partners that continue to provide valuable and actionable feedback.

Thank you!

A complete containerized .NET Core Application microservice that is as small as possible


OK, maybe not technically a microservice, but that's a hot buzzword these days, right? A few weeks ago I blogged about Improvements on ASP.NET Core deployments on Zeit's now.sh and making small container images. By the end I was able to cut my container size in half.

The trimming I was using is experimental and very aggressive. If your app loads things at runtime - like ASP.NET Razor Pages sometimes does - you may end up getting weird errors at runtime when a Type is missing. Some types may have been trimmed away!

For example:

fail: Microsoft.AspNetCore.Server.Kestrel[13]

Connection id "0HLGQ1DIEF1KV", Request id "0HLGQ1DIEF1KV:00000001": An unhandled exception was thrown by the application.
System.TypeLoadException: Could not load type 'Microsoft.AspNetCore.Diagnostics.IExceptionHandlerPathFeature' from assembly 'Microsoft.Extensions.Primitives, Version=2.1.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.HostFiltering.HostFilteringMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Hosting.Internal.HostingApplication.ProcessRequestAsync(Context context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)

Yikes!

I'm doing a self-contained deployment and then trimming the result! Richard Lander has a great Dockerfile example. Note how he's doing the package addition with the dotnet CLI with "dotnet add package" and the subsequent trim within the Dockerfile (as opposed to you adding it to your local development copy's csproj).

I'm adding the Tree Trimming Linker in the Dockerfile, so the trimming happens when the container image is built. I'm using the dotnet command to "dotnet add package ILLink.Tasks". This means I don't need to reference the linker package at development time - it's all done at container build time.

FROM microsoft/dotnet:2.1-sdk-alpine AS build

WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
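# publish self-contained for Alpine (linux-musl-x64); the linker trims unused IL during publish and prints a size comparison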
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

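# final image is runtime-deps only - the self-contained publish output carries its own .NET runtime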
FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

I did end up hitting this bug in the Linker (it's not released yet) but there's an easy workaround. I just needed to set the property CrossGenDuringPublish to false in the project file.

If you look at the Advanced Instructions for the Linker you can see that you can "root" types or assemblies. Root means "don't mess with these or stuff that hangs off them." So I just need to exercise my app at runtime and make sure that all the types that my app needs are available, but no unnecessary ones.

I added the Assemblies I wanted to keep (not remove) while trimming/linking to my project file:

<Project Sdk="Microsoft.NET.Sdk.Web">


<PropertyGroup>
<TargetFramework>netcoreapp2.1</TargetFramework>
<CrossGenDuringPublish>false</CrossGenDuringPublish>
</PropertyGroup>

<ItemGroup>
<LinkerRootAssemblies Include="Microsoft.AspNetCore.Mvc.Razor.Extensions;Microsoft.Extensions.FileProviders.Composite;Microsoft.Extensions.Primitives;Microsoft.AspNetCore.Diagnostics.Abstractions" />
</ItemGroup>

<ItemGroup>
<!-- this can be here, or can be done all at runtime in the Dockerfile -->
<!-- <PackageReference Include="ILLink.Tasks" Version="0.1.5-preview-1841731" /> -->
<PackageReference Include="Microsoft.AspNetCore.App" />
</ItemGroup>

</Project>

My strategy for figuring out which assemblies to "root" and exclude from trimming was literally to just iterate. Build, trim, test, add an assembly by reading the error message, and repeat.

This sample ASP.NET Core app will deploy cleanly on Zeit with the smallest image footprint possible. https://github.com/shanselman/superzeit

Next I'll try an actual Microservice (as opposed to a complete website, which is what this is) and see how small I can get that. Such fun!

UPDATE: This technique works with "dotnet new webapi" as well. The image is about 73 megs as reported by "docker images" and it's 34 megs when sent and squished through Zeit's "now" CLI.

Small services!


Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

Because it’s Friday: Hurricane Trackers


With Hurricane Florence battering the US and Typhoon Mangkhut bearing down on the Philippines, it's a good time to take a look at the art of visualizing predicted hurricane paths. (By the way, did you know that "typhoon", "hurricane" and "cyclone" are just different names for the same weather phenomenon?) Flowing Data has a good overview of the ways media have been visualizing the predicted path (hat tip: reader MB), including this animation from Axios, which does a good job of demonstrating the uncertainty in the forecast:

[Animation: Axios Hurricane Florence forecast tracker]

A good thing to be aware of, though, is that the cones around the predicted tracks do not represent the size of the storm, but rather the uncertainty in the position of the center of the storm.

For a "live" view though, the place I like to look is the global wind visualization from the Climate Literacy and Energy Awareness Network. Here's how Florence looks at this writing (3:45AM East Coast time). Click the image to see the current animated view.

[Image: Hurricane Florence on the global wind visualization]

That's all from us at the blog for this week. For those in the path of the storms, good luck and stay safe. 

 


How many deaths were caused by the hurricane in Puerto Rico?


President Trump is once again causing distress by downplaying the number of deaths caused by Hurricane Maria's devastation of Puerto Rico last year. Official estimates initially put the death toll at 15 before raising it to 64 months later, but it was clear even then that those numbers were absurdly low. The government of Puerto Rico commissioned an official report from the Milken Institute School of Public Health at George Washington University (GWU) to obtain a more accurate estimate, and with its interim publication the official toll stands at 2,975.

Why were the initial estimates so low? I read the interim GWU report to find out. The report itself is clearly written, quite detailed, and composed by an expert team of social and medical scientists, demographers, epidemiologists, and biostatisticians, and I find its analysis and conclusions compelling. (Sadly, however, the code and data behind the analysis have not yet been released; hopefully they will become available when the final report is published.) In short:

  • In the earliest days of the hurricane, the death-recording office was closed and without power, which suppressed the official count.
  • Even once death certificates were collected, it became clear that officials throughout Puerto Rico had not been trained on how to record deaths in the event of a natural disaster, and most deaths were not attributed correctly in official records.

Given these deficiencies in the usual data used to calculate death tolls (death certificates), the GWU team used a different approach. The basis of the method was to estimate excess mortality: how many deaths occurred in the post-Maria period compared to the number that would have been expected had the hurricane never happened. This calculation required two quantitative studies:

  • An estimate of what the population would have been if the hurricane hadn't happened. This was based on a GLM model of monthly data from the prior years, accounting for factors including recorded population, normal emigration and mortality rates.
  • The total number of deaths in the post-Maria period, based on death certificates from the Puerto Rico government (irrespective of how the cause of death was coded).
  • (A third study examined the communication protocols before, during and after the disaster. This study did not affect the quantitative conclusions, but formed the basis of some of the report's recommendations.)

The difference between the actual mortality and the estimated "normal" mortality formed the basis for the estimate of excess deaths attributed to the hurricane. You can see those estimates of excess deaths one month, three months, and five months after the event in the table below; the last column represents the current official estimate.

[Table: estimated excess deaths at one, three, and five months after Hurricane Maria]
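
To make the excess-mortality logic concrete, here is a minimal Python sketch with entirely made-up monthly figures and a much simpler model than the GWU team's (their GLM also accounted for population, emigration, and other factors): fit to pre-hurricane months, predict the "normal" death counts for the post-hurricane months, and subtract.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical monthly death counts; the hurricane strikes in month 9
months = pd.DataFrame({
    "month":      list(range(1, 13)),
    "deaths":     [2430, 2210, 2300, 2180, 2270, 2250, 2330, 2290,
                   2928, 3040, 2671, 2820],
    "post_maria": [0] * 8 + [1] * 4,
})

# fit a Poisson GLM to the pre-hurricane baseline only
baseline = months[months.post_maria == 0]
fit = smf.glm("deaths ~ month", data=baseline,
              family=sm.families.Poisson()).fit()

# expected "normal" deaths for the post-hurricane months, then subtract
post = months[months.post_maria == 1].copy()
post["expected"] = fit.predict(post)
post["excess"] = post["deaths"] - post["expected"]
print(post[["month", "deaths", "expected", "excess"]])
print("total excess deaths:", round(post["excess"].sum()))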

These results are consistent in scale with another, earlier study by Nishant Kishore et al. (The data and R code behind this study are available on GitHub.) This study attempted to quantify deaths attributed to the hurricane directly, by visiting 3,299 randomly chosen households across Puerto Rico. At each household, inhabitants were asked about any household members who had died and their cause of death (related to or unrelated to the hurricane), and whether anyone had left Puerto Rico because of the hurricane. From this survey, the paper's authors extrapolated the number of hurricane-related deaths to the entire island. The headline estimate of 4,645 at three months is somewhat larger than the middle column of the study above, but due to the small number of recorded deaths in the survey sample the 95% confidence interval is also much larger: 793 to 8,498 excess deaths. (Gelman's blog has some good discussion of this earlier study, including some commentary from the authors.)

With two independent studies reporting excess deaths well into the thousands attributable directly to Hurricane Maria, it's a fair question to ask whether a more effective response before and after the storm could have reduced the scale of this human tragedy.

Milken Institute School of Public Health: Study to Estimate the Excess Deaths from Hurricane Maria in Puerto Rico

Announcing new REST APIs for Process Customization

Last sprint we released a new set of REST API endpoints for process customization. In version 4.1 there are three sets of REST APIs: two for the inherited model and one for the Hosted XML model. This created some confusion about which endpoints to use and when. In the new 5.0 (preview) version we combined... Read More

Top Stories from the Microsoft DevOps Community – 2018.09.14

Wow, y’all: seven years ago today, at the BUILD conference, we announced the preview of what we called “Team Foundation Service”. That service offering became Visual Studio Team Services. And on Monday, we announced the newest evolution of what that vision has become: Azure DevOps. Azure DevOps is a family of tools to help you... Read More

Azure.Source – Volume 49


Welcome to Azure DevOps

The big news last week was the introduction of Azure DevOps, which represents the evolution of Visual Studio Team Services (VSTS). Azure DevOps is a set of five new services that can be used together as a unified product, independently as stand-alone services, or in any combination: Azure Pipelines, Azure Boards, Azure Repos, Azure Test Plans, and Azure Artifacts. Azure Pipelines is a fully-managed CI/CD service that enables developers to continuously build, test, and deploy any type of app to any platform or cloud; it is available with free, unlimited CI/CD minutes for open source projects and is integrated with the GitHub Marketplace. In partnership with GitHub, we built an extension for Visual Studio Code that gives developers the ability to review GitHub pull request source code from within the editor.

 

Azure DevOps & Azure Pipelines Launch Keynote - Learn all about our announcement from hosts Jamie Cool, Donovan Brown and guests who will cover what's new in Azure DevOps, Azure Pipelines, our GitHub CI integration and much more. Watch more content here: aka.ms/AzureDevOpsLaunch.

Introducing Azure DevOps - Announcement blog post from Jamie Cool, Director of Program Management, Azure DevOps that provides a high-level overview of what Azure DevOps is, briefly covers how Open Source projects receive free CI/CD with Azure Pipelines, and outlines the evolution from Visual Studio Team Services.

Announcing Azure Pipelines with unlimited CI/CD minutes for open source - Azure Pipelines is a CI/CD service that enables you to continuously build, test, and deploy to any platform or cloud. Azure Pipelines also provides unlimited CI/CD minutes and 10 parallel jobs to every open source project for free. Use the Azure Pipelines app in the GitHub Marketplace to make it easy to get started.

Screenshot of Azure Pipelines example

Deep dive into Azure Boards - Azure Boards is a service for managing the work for your software projects. Teams need tools that flex and grow. Azure Boards does just that, bringing you a rich set of capabilities including native support for Scrum and Kanban, customizable dashboards, and integrated reporting. In this post, Aaron Bjork, Principal Group Program Manager, Azure DevOps goes through a few core features in Azure Boards and gives some insight into how you can make them work for your teams and projects.

Learn more about Azure DevOps:

Now generally available

Video Indexer – General availability and beyond - At the International Broadcasters Conference (IBC) Show 2018, we announced the general availability of Video Indexer, which is a cloud application built on Azure Media Analytics, Azure Search, Cognitive Services (such as the Face API, Microsoft Translator, the Computer Vision API, and Custom Speech Service). It enables you to extract the insights from your videos using Video Indexer's cross-channel (audio, speech, and visual) machine learning models, such as emotion detection and topic inferencing. We also released a new Video Indexer v2 connector for Logic Apps and Flow, which enables you to set up your own custom Video Indexer workflows to further automate the process of extracting deep insights from your videos quickly and easily without writing code.

The Azure Podcast

The Azure Podcast | Episode 246 - South Central US outage discussion - Kendall, Evan and Sujit break down the outage that impacted Azure services and customers, and try to understand how Microsoft and its customers can be better prepared for such unplanned events.

News and updates

Application Insights improvements for Java and Node.js - Get an overview of recent improvements in Azure Monitor to enable a first-class monitoring experience for Java and Node.js teams in both their Azure and on-premises environments. Note that all of Application Insights SDKs are open source, including Java and Node.js.

HDInsight Tools for VSCode: Integrations with Azure Account and HDInsight Explorer - HDInsight Tools for VSCode extension now integrates with the Azure Account extension, which makes your Azure HDInsight sign-in experience even easier. This release also introduces a graphical tree view for the HDInsight Explorer within Visual Studio Code. HDInsight Explorer enables you to navigate HDInsight Hive and Spark clusters across subscriptions and tenants, browse Azure Data Lake Storage and Blob Storage connected to these HDInsight clusters, and inspect your Hive metadata database and table schema.

Announcing the New Auto Healing Experience in App Service Diagnostics - App Service Diagnostics helps you diagnose and solve issues with your web app by following recommended troubleshooting and next steps. You may be able to resolve unexpected behaviors temporarily with some simple mitigation steps, such as restarting the process or starting another executable, or require additional data collection, so that you can better troubleshoot the ongoing issue at a later time. Using the new Auto Healing tile shortcut under Diagnostic Tools in App Service Diagnostics, you can set up custom mitigation actions to run when certain conditions are met.

New Price Drops for App Service on Linux - We’re extending the preview price (for Linux on App Service Environment, which is the Linux Isolated App Service Plan SKU) for a limited time through GA. App Service on Linux is a fully managed platform that enables you to build, deploy, and globally scale your apps more quickly. You can bring your code to App Service on Linux and take advantage of the built-in images for popular supported language stacks, such as Node, Java, PHP, etc., or bring your Docker container to easily deploy to Web App for Containers. We'll provide a 30-day notice before this offer ends, which is TBD.

Azure Friday

Azure Friday | Azure State Configuration experience - Michael Greene joins Scott Hanselman to discuss a new set of experiences for Configuration Management in Azure, and how anyone new to modern management can discover and learn new processes more quickly than before.

Azure Friday | Unlock petabyte-scale datasets in Azure with aggregations in Power BI - Christian Wade joins Scott Hanselman to show you how to unlock petabyte-scale datasets in Azure with a way that was not previously possible. Learn how to use the aggregations feature in Power BI to enable interactive analysis over big data.

Technical content

Azure preparedness for weather events - Learn how we’re preparing for and actively monitoring Azure infrastructure in regions impacted by Hurricane Florence and Typhoon Mangkhut. As a best practice, all customers should consider their disaster recovery plans, and all mission-critical applications should be taking advantage of geo-replication. You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

GPUs vs CPUs for deployment of deep learning models - Get a detailed comparison of deployments of various deep learning models, highlighting the striking differences in throughput performance of GPU versus CPU deployments and providing evidence that, at least in the scenarios tested, GPUs provide better throughput and stability at a lower cost. For standard machine learning models, where the number of parameters is not as high as in deep learning models, CPUs should still be considered more effective and cost efficient. For deep learning inference tasks that use models with a high number of parameters, GPU-based deployments benefit from the lack of resource contention and provide significantly higher throughput than a CPU cluster of similar cost.

How to extract building footprints from satellite images using deep learning - This post from Siyu Yang, Data Scientist, AI for Earth, highlights a sample project that uses Azure infrastructure for training a deep learning model to gain insight from geospatial data. Such tools will finally enable us to accurately monitor and measure the impact of our solutions to problems such as deforestation and human-wildlife conflict, helping us to invest in the most effective conservation efforts. If you deal with geospatial data, did you know that Azure already offers a Geo Artificial Intelligence Data Science Virtual Machine (Geo-DSVM), equipped with ESRI’s ArcGIS Pro Geographic Information System? Get started with a tutorial on how to use the Geo-DSVM for training deep learning models and integrating them with ArcGIS Pro.

Diagram showing combination of geospatial data and AI at scale to deliver and intelligent geospatial data application

How Security Center and Log Analytics can be used for Threat Hunting - Azure Security Center (ASC) uses advanced analytics and global threat intelligence to detect malicious threats, and the new capabilities that our product team is adding every day empower our customers to respond quickly to these threats. No security tool can detect 100 percent of attacks, and many of the tools that raise alerts are optimized for low false positive rates. In this post, learn how to adopt a threat hunting mindset by proactively and iteratively searching through your varied log data with the goal of detecting threats that evade existing security solutions. Azure Security Center has built-in features that you can use to launch your investigations and hunting campaigns, in addition to responding to alerts that it triggers.

Five habits of highly effective Azure users - Based on customer interactions, we're compiling a list of routine activities that can help you get the most out of Azure, including staying on top of proven practice recommendations, staying in control of your resources on the go, staying informed during issues and maintenance, and staying up-to-date with the latest announcements. Read this post to learn more about these activities. In addition, staying engaged with your peers to share good habits they’ve discovered and learn new ones from the community is also valuable.

Additional technical content

Azure tips & tricks

Screenshot from How to deploy an Azure Web App using only the CLI tool video

How to deploy an Azure Web App using only the CLI tool - Learn how to successfully deploy an Azure Web App by using only the command-line (CLI) tool. Watch to learn how the Azure portal is not only helpful for working with resources, but is also convenient for using a command line to deploy web applications.

Screenshot from How to work with files in Azure App Service video

How to work with files in Azure App Service - Learn how to work with files that you’ve uploaded to Azure App Service. Watch to find out what the different options are for interacting with the file system and your deployed applications in the Azure portal.

Events

Microsoft Ignite 2018 - If you're not able to join us next week for this premiere event, be sure to tune in online to watch the live stream from Orlando.

Microsoft Azure Media Services and our partners Welkom you to IBC 2018 - International Broadcasters Conference (IBC) Show 2018 took place last week in Amsterdam. In this post, Sudheer Sirivara, General Manager, Azure Media, covers the announcement that Video Indexer is generally available, how we partnered to showcase "live production in the cloud," how our partners are innovating to deliver a broad ecosystem of media solutions, and that Microsoft has joined the Secure Reliable Transport (SRT) Alliance.

From Microsoft Azure to everyone attending IBC Show 2018 – Welkom in Amsterdam! - In this post, Tad Brockway, General Manager, Azure Storage & Azure Stack, shares new ways that Azure, together with solutions from our partners, can address common media workflow challenges.

The IoT Show

Internet of Things Show | Join IoT in Action to Build Transformational IoT Solutions - Gain actionable insights, deepen partnerships, and unlock the transformative potential of intelligent edge and intelligent cloud solutions at this year's IoT in Action event series. This event series is a chance for you to meet and collaborate with Microsoft's customers and partner ecosystem to build and deploy new IoT solutions that can be used to change the world around us.

Internet of Things Show | iotz: a new approach to IoT compile toolchains - iotz is a command line tool that aims to simplify the whole process. Oguz Bastemur, developer in the Azure IoT team, joins us on the IoT Show to explain and show how iotz can be used to streamline compilation of embedded projects for IoT devices.

Customers and partners

AI helps troubleshoot an intermittent SQL Database performance issue in one day - Learn how Intelligent Insights, the Azure SQL Database intelligent performance feature, helped a customer troubleshoot a six-month-old intermittent database performance issue in just one day; how Intelligent Insights helps an ISV operate 60,000 databases by identifying related performance issues across their database fleet; and how Intelligent Insights helped an enterprise seamlessly identify a hard-to-troubleshoot performance degradation issue on a large-scale 35TB database fleet.

Illustration showing the use of Intelligent Insights to identify a performance issue in large group of Azure SQL Databases

Real-time data analytics and Azure Data Lake Storage Gen2 - We are actively partnering with leading ISVs across the big data spectrum of platform providers, data movement and ETL, governance and data lifecycle management (DLM), analysis, presentation, and beyond to ensure seamless integration between Gen2 and their solutions. Learn how we're partnering with Attunity to help customers learn more about real-time analytics and data lakes, and how you can quickly move from evaluation to execution. And join us for our first joint Gen2 engineering-ISV webinar with Attunity tomorrow, Tuesday, September 18th.

Azure Marketplace new offers - Volume 19 - The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also find consulting services from hundreds of leading providers. In the first half of August we published 50 new offers, including: Informix, BitDam, and consulting services from Lixar.

Industries

The Azure Security and Compliance Blueprint - HIPAA/HITRUST Health Data and AI offers a turn-key deployment of an Azure PaaS solution to demonstrate how to securely ingest, store, analyze, and interact with health data while being able to meet industry compliance requirements. The blueprint helps accelerate cloud adoption and utilization for customers with data that is regulated.

Learn more in these posts from last week:

Diagram showing operational process flow for admitting a patient

Reduce false positives, become more efficient by automating anti-money laundering detection - Learn how, without human intervention, it is difficult, almost impossible, to adapt to the rapidly evolving patterns used by money launderers or terrorists. We have many partners that address bank challenges with fraud. Among that elite group, the Behavioral Biometrics solution from BioCatch and the Onfido Identity Verification Solution help automate fraud detection through frictionless detection.

Retail brands: gain a competitive advantage with modern data management - Much of the data collected by retailers goes unused. This occurs because the infrastructure within an organization is unable to make the data accessible or searchable. Learn how great data management provides a significant strategic advantage and enables brand differentiation when serving customers.

A Cloud Guru's Azure This Week

Screenshot from A Cloud Guru's Azure This Week - 14 September 2018 video

A Cloud Guru | Azure This Week - 14 September 2018 - This time on Azure This Week, Lars talks about the rebranding of Visual Studio Team Services, the major Azure data center outage in Texas, and some great new tools for Spark developers using HDInsight.

Come check out Azure Stack at Ignite 2018


We are excited to host you all at this year’s Ignite conference. The Azure Stack team has put together a list of sessions along with a pre-day event to ensure that you will enhance your skills on Microsoft’s hybrid cloud solution and get the most out of this year’s conference.

We have an agenda that is tailored for developers who use Azure Stack to develop innovative hybrid solutions using services on Azure Stack and Azure, as well as operators who are responsible for the operations, security, and resiliency of Azure Stack itself. Whether you’re a developer or an IT operator, there’s something for you.

To fully benefit from our sessions we recommend you attend our two overview talks, “Intelligent Edge with Azure Stack” and “Azure Stack Overview and Roadmap”. If you’re looking to learn how to operate Azure Stack, we recommend you attend “The Guide to Becoming an Azure Stack Operator” to learn what it takes to get the most out of your investment. If you’re just “Getting started with Microsoft Azure Stack as a developer”, we’ve created a path for you as well. See the learning map below:

Azure Stack learning map diagram
The following table lists the details of each session.

Date | Time | Session | Title
Sunday, September 23, 2018 | 8:00 AM – 5:00 PM | PRE 28 | Azure Stack Pre-Day (Building and operating hybrid cloud solutions with Azure and Azure Stack)
Tuesday, September 25, 2018 | 10:45 AM – 12:00 PM | BRK2367 | Azure Stack overview and roadmap
Tuesday, September 25, 2018 | 11:55 AM – 12:15 PM | THR2057 | Building solutions for public industry vertical with Microsoft Azure Stack
Tuesday, September 25, 2018 | 12:45 PM – 1:30 PM | BRK2373 | Getting started with Microsoft Azure Stack as a developer
Tuesday, September 25, 2018 | 2:15 PM – 3:30 PM | BRK2297 | Intelligent Edge with Azure Stack
Tuesday, September 25, 2018 | 4:00 PM – 4:20 PM | THR2058 | What you need to know to run Microsoft Azure Stack as a CSP
Wednesday, September 26, 2018 | 9:00 AM – 10:15 AM | BRK3334 | The Guide to Becoming an Azure Stack Operator
Wednesday, September 26, 2018 | 10:45 AM – 12:00 PM | BRK2374 | Understanding hybrid application patterns for Azure Stack
Wednesday, September 26, 2018 | 2:15 PM – 2:35 PM | THR3027 | Machine learning applications in Microsoft Azure Stack
Thursday, September 27, 2018 | 9:00 AM – 9:45 AM | BRK3288 | Implementing DevOps in Microsoft Azure Stack
Thursday, September 27, 2018 | 11:30 AM – 12:15 PM | BRK2305 | Discovering security design principles and key use cases for Azure Stack
Thursday, September 27, 2018 | 12:30 PM – 1:45 PM | BRK3317 | Best practices for planning Azure Stack deployment and post-deployment integrations with Azure
Thursday, September 27, 2018 | 2:00 PM – 2:45 PM | BRK3335 | Architectural patterns and practices for business continuity and disaster recovery on Azure Stack
Friday, September 28, 2018 | 12:30 PM – 1:45 PM | BRK3318 | Accelerate application development through open source frameworks and marketplace items


In addition, the following are a selection of Azure Stack related sessions from our hardware partners:

  • Intel: BRK2448 – Driving business value from a modern, cloud-ready platform
  • Dell EMC: BRK2441 – Why architecture matters: A closer look at Dell EMC solutions for Microsoft WSSD, Azure Stack, and SQL Server
  • Lenovo: THR2350 – What are Lenovo and Microsoft Azure Stack customers experiencing?
  • Cisco: BRK2427 – Secure your cloud journey, from data center to the edge with Cisco
  • HPE: BRK1123 – How to tame your hybrid cloud

Finally, on the Ignite expo floor, you can find the Azure Stack team in three booths (341, 342, 344) inside our Intelligent Edge section. Wondering what the intelligent edge is? Ask anyone on our team. Many of us, along with our partners, will be in Orlando, and we look forward to meeting you.

Cheng Wei (@cheng__wei)

Principal PM Manager, Azure Stack
