
CustomVision.AI: Code-free automated machine learning for image classification


Artificial Intelligence (AI) has emerged as one of the most disruptive forces behind the digital transformation of business. Our mission is to bring AI to every developer and every organization on the planet, and help businesses augment human ingenuity in unique and differentiated ways. Developers and data scientists are at the heart of driving this innovation force and we are committed to providing them the best tools to make them successful. These include tools for automating machine learning through the pre-built AI capabilities we offer for vision, speech, language, knowledge and search in Microsoft Cognitive Services.

In November, at Microsoft Connect(); 2017, I announced an expansion of AI tools and resources for developers and data scientists on Azure and described how the Microsoft AI platform enables a rich variety of application scenarios. Last month we also announced the general availability of our conversational AI tools with customers such as Molson Coors, UPS, and Equadex.

We are continuing to innovate at a rapid pace to make AI easy, bringing capabilities such as transfer learning and automated machine learning to developers. Today I'd like to focus on the Microsoft Custom Vision Service, which makes it possible for you to easily train a classifier with your own data, export the models, embed these custom classifiers directly in your applications, and run them offline in real time on iOS, Android, and many other edge devices.

Custom Vision Service (Figure 1) is a cloud-enabled tool for easily training, deploying, and improving your custom image classifiers. With just a handful of images per category, developers can train their own image classifier in minutes through a simple drag-and-drop interface (Figure 2). To enable developers to build for the intelligent edge, Custom Vision Service from Microsoft Cognitive Services has added mobile model export. Today, in addition to hosting your classifiers at a REST endpoint, you can export models to run offline, starting with export to the CoreML format for iOS 11 and to the TensorFlow format for Android. Export allows you to embed your classifier directly in your application and run it locally on a device. The models you export are optimized for the constraints of a mobile device, so you can classify on device in real time.
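Calling a hosted classifier from your own code is a single HTTP POST of an image to the project's prediction endpoint. Here is a rough sketch in R using httr; the prediction URL, key, and the exact shape of the JSON response are placeholders you would take from your project in the Custom Vision portal and the API reference.

library(httr)

# Placeholders: copy the image prediction URL and prediction key from your
# project in the Custom Vision portal.
prediction_url <- "https://<region>.api.cognitive.microsoft.com/customvision/<project-specific-path>/image"
prediction_key <- "<your-prediction-key>"

resp <- POST(
  prediction_url,
  add_headers("Prediction-Key" = prediction_key,
              "Content-Type"   = "application/octet-stream"),
  body = upload_file("apple.jpg")   # any local image to classify
)

str(content(resp))   # the parsed response lists tags with predicted probabilities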

 

Figure 1 - Custom Vision Service

Figure 2 - Custom Vision Service

 

Once you have created and trained your custom vision model through the service, it's a matter of a few clicks to get your model exported from the service. This allows developers a quick way to take their custom model with them to any environment whether their scenario requires that the model run on-premises, in the cloud, or on mobile and edge devices (Figure 3). This provides the most flexible and easy way for developers to export and embed custom vision models in minutes with “zero” coding.

 

Figure 3 - Custom Vision model classifying fruit running on iOS

Figure 4 - Custom Vision model classifying fruit running on Android

 

Custom Vision Service is designed to build quality classifiers with very small training datasets, helping you build a classifier that is robust to differences in the items you are trying to recognize and that ignores the things you are not interested in. Now, you can easily add real-time image classification to your mobile applications. Creating, updating, and exporting a compact model takes only minutes, making it easy to build and iteratively improve your application. More export formats and supported devices are coming in the near future. To learn more and start building your own image classifier, visit www.customvision.ai and our documentation pages.

I also invite you to visit www.azure.com/ai to learn more about how Artificial Intelligence can augment and empower your digital transformation efforts. We've also launched the AI School to help developers get up to speed with AI technologies and start building intelligence into their solutions.

 


 

Dive in and learn how to infuse AI into your applications today with the following Custom Vision service tutorials from AI School:

  1. Introduction to Custom Vision Service
  2. Exporting a Custom Vision model and deploying on iOS
  3. Exporting a Custom Vision model and deploying it to an Android device

Joseph
@josephsirosh


Last week in Azure: New visual tools for Data Factory, and more


If you were too busy writing code, attending meetings, or enjoying some time away from work, here's an overview of what you may have missed in Azure last week:

Headlines

ADF v2: Visual Tools enabled in public preview - Be more productive with Azure Data Factory by getting pipelines up & running quickly without writing a single line of code. You can use a simple and intuitive code-free interface to drag and drop activities onto a pipeline canvas, perform test runs, debug iteratively, and deploy & monitor your pipeline runs. This topic is also featured on Azure Friday (see video below).

Announcing the extension of Azure IP Advantage to Azure Stack - Azure IP Advantage now covers workloads deployed to Azure Stack, which extends protection benefits to cover customers consistently in the hybrid cloud. With Azure IP Advantage, Azure Stack services receive uncapped indemnification from Microsoft, including for the open source software powering these services.

Azure Analysis Services now available in East US, West US 2, and more - Azure Analysis Services is now available in 4 additional regions including East US, West US 2, USGov-Arizona and USGov-Texas. Azure Analysis Services provides enterprise-grade data modeling in the cloud as a fully-managed platform as a service (PaaS), which is integrated with Azure data platform services.

Accelerate your business revolution with IoT in Action - Learn about some recent updates to Azure IoT Suite that are making IoT solutions easier and more robust than ever, including Microsoft IoT Central, Azure IoT Hub, and Azure Stream Analytics on IoT Edge. If you're in San Francisco on February 13, attend IoT in Action to learn more about how Azure IoT can help you accelerate your business revolution.

How Azure Security Center helps analyze attacks using Investigation and Log Search - Learn how an analyst can leverage the Investigation and Log Search capabilities in Azure Security Center to determine whether an alert represents a security breach, and to understand the scope of that breach by drilling into an alert to see what you can discover.

Announcing IoT extension for Azure CLI 2.0 - A new open source IoT extension that adds to the capabilities of Azure CLI 2.0 is now available. Azure CLI 2.0 includes commands for interacting with Azure Resource Manager and management endpoints. With this new extension, you get command-line access to IoT Hub, IoT Edge, and IoT Hub Device Provisioning Service capabilities.

Show off your skills with #AzureTrivia - Every Monday, @Azure will tweet out an Azure-related question. See this post for the details on how you can be entered to win a weekly prize by tweeting the correct answer.

Service updates

Azure shows

Visually build pipelines for Azure Data Factory V2 - Gaurav Malhotra shows Donovan Brown how you can now visually build pipelines for Azure Data Factory V2 and be more productive by getting pipelines up & running quickly without writing any code.

Azure Backup - Even though Azure takes three copies of your virtual machines and stores them in Azure Storage, you still need to protect your data against ransomware, corruption, or accidental deletion. Kelly Anderson stops by to chat with Scott Hanselman about how simple it is to set up Azure Backup, how its built-in security features can protect your backup data from ransomware, and how easy it is to restore your data from Azure.

The Azure Podcast: Episode 212 - Planning for Reliability - Evan is back after a couple of stressful weeks dealing with the fallout from "Meltdown". He shares some of his learnings and things customers can do to be better prepared for such situations.

#ifdef WINDOWS – 3D launchers and glTF toolkit


With the Windows 10 Fall Creators Update, developers building experiences for Windows Mixed Reality can define a 3D launcher to override the default 2D launcher and provide a richer experience when launching a game or app from the mixed reality home.

Roberto Sonnino and Tom Mignone from the mixed reality team dropped by my office to give me a hands-on demonstration as we discussed why developers should consider creating 3D launchers and what is possible when creating 3D tiles. We also covered why the team chose glTF as the file format and how they created the glTF toolkit to make it very easy for developers to modify and optimize glTF assets.

Check out the full video above and feel free to reach out on Twitter or in the comments below.

Happy coding!


Windows Community Standup discussing the Always Connected PC


During our January 2018 Windows Community Standup, we discussed the Always Connected PC and what that means for developers. Kevin Gallo (CVP), Erin Chapple (GM) and Hari Pulapaka (Principal Group Program Manager) discussed the ARM architecture and Qualcomm chip, how the OS is natively recompiled with full feature support, how we extended our WoW abstraction layer to support x86 applications, and much more. Watch the segment to learn how to debug and test your app on a Windows on ARM device and what you should be thinking about when developing for Windows 10 on ARM.


Announcing integration of Azure Backup into VM create experience


Today, we are excited to announce the ability to enable backup on virtual machines from the VM create experience in the portal. Last year, we announced support for backing up virtual machines from the VM management blade. With this announcement, we are bringing the ability to protect VMs with an enterprise-grade backup solution from the moment of VM creation. Azure Backup supports backup of a wide variety of VMs offered by Azure, including Windows or Linux VMs, VMs on managed or unmanaged disks, premium or standard storage, encrypted or non-encrypted VMs, or a combination of the above.

Features

With the integration of Azure Backup into the VM create experience, customers can:

Set up backup in one click: With smart defaults provided in the experience, customers can add backup to a VM with just one click.

Create VM Integration

Select or create a vault in-line: Customers have a choice of vault – select an existing one or create a new one to store backups. To support configurations where customers want to store backups and VMs in different resource groups, we also support creating the vault in a different resource group from the VM.

Manage backup policy for the VM: Customers can create a new backup policy and use it to configure backup on the virtual machine, all from the VM create experience. The policy also supports an enterprise-level GFS (grandfather-father-son) schema for flexible backup retention choices.

Backup policy in create VM integration

Core benefits of Azure VM backup

Azure VM backup provides the following benefits, using a cloud-first approach to backup:

  • Freedom from infrastructure: No need to deploy any additional infrastructure to back up VMs.
  • Application-consistent backup: Customers get application-consistent backup for both Windows and Linux without the need to shut down the virtual machine.
  • Instant file recovery: With instant file recovery, you can browse files and folders inside the VM and recover only the required files without restoring the entire virtual machine.
  • Pay as you go: Simple backup pricing makes it easy to protect VMs and pay only for what you use.

Get started

This experience is enabled for all OS images supported by Azure Backup. You will see the option to enable backup in the 3rd step of the VM create experience. By default, this experience is turned off and you can enable it by toggling the choice. We are enabling this experience starting today and rolling it out region by region. You will see this across all regions by the end of this week.

Related links and additional content

Visualize your Strava routes with R


Strava is a fitness app that records your activities, including the routes of your walks, rides, and runs. The service also provides an API that allows you to extract all of your data for analysis. University of Melbourne research fellow Marcus Volz created an R package to download and visualize Strava data, and created a chart to visualize all of his runs over six years as a small multiple.


Inspired by his work (and the availability of the R package he created), others also visualized bike rides, activity calendars, and aggregated route maps with elevation data. (You can see several examples in the Twitter moment embedded below.) If you'd like to download your own Strava data, all you need is a Strava access token, a recent version of R (3.4.3 or later), and the strava package found on GitHub.

Strava activity visualized with R

Marcus Volz:  A gallery of visualisations derived from Strava running data 

Introducing the Windows Desktop Program for Desktop Application Analytics


An important feature for desktop application developers is the ability to view detailed analytics about application performance and its popularity with users. Until today, developers had difficulty accessing these analytics without cobbling together multiple tools. With the new Windows Desktop Program, developers now have a convenient, one-stop portal to view their desktop application analytics or access the data via an API. Statistics and charts quickly show how the applications are doing – from how many customers they've reached to detailed performance data on crashes and failures. With these analytics, developers can better track and prioritize fixes, monitor the distribution of their application, and improve the overall experience for their customers.

There’s no charge to access this data—all you need to do is sign up with a Microsoft account to identify yourself, then upload a signed file using the same trusted, valid certificate your company uses to sign your applications.

Once you sign up for the Windows Desktop Application Program and register your certificates, you’ll be able to use the analytics reports to:

  • View a summary of all failure types, sorted by number of hits
  • Drill down into each failure and download stack traces and CAB files to debug the issue faster
  • Compare the health status and adoption of a newly released version of your application to previous releases
  • View health data in aggregate or by region, allowing you to isolate issues that are specific to a region
  • Compare performance and adoption of your desktop applications across Windows versions, such as the latest Windows 10 or Windows Insider releases.

To view analytics for your applications:

  1. Sign up for the Windows Desktop Application Program. If you already have a Windows Dev Center account, you can opt in to this program on the Programs page in Account settings. Otherwise, you can sign up here.
  2. Follow the steps to download an unsigned file, sign it with the same code-signing certificate your company uses to sign your desktop applications, and upload the newly signed file back through the portal.
  3. That’s it! We will take the signed file you just uploaded and map it to the telemetry we collect on all applications with the same certificate to show you your analytics data. To learn more, check out our documentation here.

To learn more about the Windows Desktop Application Program, check out this video from our Windows Developer series.


Azure Search enterprise security: Data encryption and user-identity access control


Enterprise security requires a comprehensive approach for defense in depth. Effective immediately, Azure Search now supports encryption at rest for all incoming data indexed on or after January 24, 2018, in all regions and SKUs including shared (free) services. With this announcement, encryption now extends throughout the entire indexing pipeline – from connection, through transmission, and down to indexed data stored in Azure Search.

At query time, you can implement user-identity access controls that trim search results of documents that the requestor is not authorized to see. Enhancements to filters enable integration with third-party authentication providers, as well as integration with Azure Active Directory.
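To make the query-time trimming concrete, here is a rough sketch in R with httr. The index name, the group_ids field, and the API version are assumptions for illustration; the group IDs themselves would come from your authentication provider.

library(httr)

# Assumptions for illustration: an index named "securedfiles" with a filterable
# Collection(Edm.String) field named group_ids, and the caller's group
# memberships already resolved (for example, via Azure Active Directory).
service_url <- "https://<your-service>.search.windows.net"
query_key   <- "<your-query-key>"
groups      <- c("group_id1", "group_id2")

resp <- GET(
  paste0(service_url, "/indexes/securedfiles/docs"),
  add_headers("api-key" = query_key),
  query = list(
    `api-version` = "2017-11-11",
    search        = "board meeting",
    `$filter`     = sprintf("group_ids/any(g: search.in(g, '%s'))",
                            paste(groups, collapse = ","))
  )
)

content(resp)$value   # only documents whose group_ids overlap the caller's groups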

Encryption at rest, on by default

All indexing includes encryption on the backend automatically with no measurable impact on indexing workloads or size. This applies to newly indexed documents only. For existing content, you have to re-index to gain encryption. Encryption status of any given index is not visible in the portal, nor available through the API. However, if you indexed after January 24, 2018, data is already encrypted.

Managed by Microsoft

In the context of Azure Search, all aspects of encryption, decryption, and key management are internal. You cannot turn it on or off, manage or substitute your own keys, or view encryption settings in the portal or programmatically. Internally, encryption is based on Azure Storage Service Encryption, using 256-bit AES encryption, one of the strongest block ciphers available.


Zone Redundant Virtual Machine Scale Sets now available in public preview


In September 2017 we introduced Azure Availability Zones, enabling resiliency and high availability for mission-critical workloads running on Azure. Today, we are excited to announce the public preview of Zone Redundant Virtual Machine Scale Sets, bringing the scalability and ease of use of scale sets to availability zones.

Deploying your infrastructure across zones has never been easier. You just specify the availability zones you would like to use for your scale set. It’s as simple as:

az vmss create -n <name> -l <location> --image <image-name> -g <resource-group-name> --zones 1 2 3

With Zone Redundant Virtual Machine Scale Sets, your Virtual Machines are automatically spread across availability zones. You don’t need to worry about distributing VMs across zones, choosing which VMs to remove when scaling in, etc. Zone Redundant Virtual Machine Scale Sets support the same capabilities as Regional Virtual Machine Scale Sets, including but not limited to:

  • Azure Autoscale
  • Azure Virtual Machine Extensions
  • Marketplace and Custom Images
  • Attached Data Disks
  • Azure Application Gateway
  • Azure Load Balancer Standard

Please note that during preview, some of these capabilities might not be fully zone redundant.

With scale sets, it’s easy to build big compute, big data, and containerized workloads. With zones it’s easy to manage VM uptime, especially with the 99.99% uptime SLA at GA mentioned in the previous blog post. Together, they provide the backbone for building mission-critical, scalable services on Azure.

Please refer to this article about creating a virtual machine scale set using Availability zones to get started.

ITSM Connector for Azure is now generally available


This post is also authored by Kiran Madnani, Principal PM Manager, Azure Infrastructure Management and Snehith Muvva, Program Manager II, Azure Infrastructure Management.

We are happy to announce that the IT Service Management Connector (ITSMC) for Azure is now generally available. ITSMC provides bi-directional integration between Azure monitoring tools and your ITSM tools – ServiceNow, Provance, Cherwell, and System Center Service Manager.

Customers use Azure monitoring tools to identify, analyze, and troubleshoot issues. However, the work items related to an issue are typically stored in an ITSM tool. Instead of having to go back and forth between your ITSM tool and Azure monitoring tools, customers can now get all the information they need in one place. ITSMC will improve the troubleshooting experience and reduce the time it takes to resolve issues. Specifically, you can use ITSMC to:

  1. Create or update work-items (Event, Alert, Incident) in the ITSM tools based on Azure alerts (Activity Log Alerts, Near Real-Time metric alerts and Log Analytics alerts)
  2. Pull the Incident and Change Request data from ITSM tools into Azure Log Analytics.

You can set up ITSMC by following the steps in our documentation. Once set up, you can send Azure alerts to your ITSM tool using the ITSM action in Action groups.


You can also view your incident and change request data in Log Analytics to perform trend analysis or correlate it against operational data.


To learn about pricing, visit our pricing page. We are excited to launch the ITSM Connector and look forward to your feedback.

Azure Zone Redundant Storage in public preview


We are excited to announce the public preview of Azure Zone Redundant Storage (ZRS). ZRS greatly simplifies development of highly available applications by storing three replicas of your data in different Availability Zones, with inserts and updates to data being performed synchronously across these Availability Zones. This enables you to continue to read and write data even if the data in one of the Availability Zones is unavailable or unrecoverable. ZRS is built over Availability Zones in Azure which provide resilience against failures through fault-isolated groups of datacenters within a single region.

Zone Redundant Storage should be considered for applications where regional availability is critical and downtime is not acceptable, and both read and write access are required at all times.

With the release of the ZRS public preview, Azure offers a compelling set of durability options for your storage needs including ZRS for intra-region high availability, locally-redundant storage (LRS) for low-cost single region durable storage, and geo-redundant storage (GRS) for cross-region redundancy for disaster recovery scenarios with read access geo-redundant storage (RAGRS) offering additional read accessibility.

The ZRS preview will initially be available in the following regions with more to follow. Please check our documentation for the latest list of regions with ZRS preview enabled.

  • US East 2
  • US Central
  • France Central

Getting started

You can create a ZRS storage account in the preview regions mentioned above through a variety of means including Azure CLI, Azure PowerShell, Azure Portal, Azure Resource Manager, and the Azure Storage Management SDK.

To create a ZRS account in the Azure Portal, set the following properties. Please note that ZRS requires a general purpose v2 account kind.


To create a ZRS account with the Azure CLI, install the latest Azure CLI, then run the following command in your console:

az storage account create -n <accountname> -g <resourcegroup> -l <region> --sku Standard_ZRS --kind StorageV2

To create a ZRS account with Azure PowerShell, first install the latest Azure PowerShell cmdlets:

  1. Install AzureRM.Storage version 4.1.0
  2. Install AzureRM.Resources version 5.1.1

Once you have successfully completed the above, run the following cmdlet in the PowerShell console:

New-AzureRmStorageAccount -Name <accountname> -Location <region> -ResourceGroupName <resourcegroup> -SkuName Standard_ZRS -Kind StorageV2

There is no change to the existing API for reading and writing data in a storage account, so existing code and tools will continue to work when pointed to a ZRS account. Please refer to the ZRS documentation for more details on getting started.

For information on preview pricing, please refer to the pricing pages for Blobs, Files, Queues, and Tables under ZRS (preview) pricing.

Migration from other account types

Once ZRS is generally available, it is intended to replace the existing ZRS offering in regions that support Azure Availability Zones. Effective immediately, the existing ZRS offering has been renamed to ZRS Classic and can continue to be accessed without any code change required. We will provide a simple migration path for ZRS Classic accounts to ZRS when it is generally available in that region. For more details on migration, including migration from LRS, GRS, and RA-GRS accounts, please refer to our documentation and FAQ.

Please let us know if you have any questions or need our assistance. We are looking forward to your participation in the preview and hearing your feedback.

Launching the Azure Storage Solution showcase


I am pleased to announce a new webcast series showcasing innovative technology partners who have built solutions on top of the Azure Storage infrastructure. Microsoft has always been committed to our partner ecosystem and we are especially proud of the work we have done on the Azure Storage team. Over the last two years we have witnessed an impressive increase in the number of solutions that integrate with, or are built on top of Azure Storage. All of these solutions are capable of helping our customers take advantage of Azure services and achieve tangible benefits for their businesses. It is all about our customers and helping you achieve your goals.

So what will you see and learn about during this series?

  • Learn to use solutions you already have, from the vendors you trust, while extending your data center to Azure and building cloud-native solutions with your data.
  • How to synchronize or migrate data to Azure Storage and leverage it with on-demand Azure services like:
    • VMs
    • High Performance Computing
    • App Services
    • Containers
    • Media Services
    • Databases
    • Analytics
    • Machine Learning
    • Cognitive Services
  • Manage explosive data growth in your organization.
  • Worried about GDPR? Meet compliance and legal discovery requirements.
  • End the cycle of equipment refreshes, data migrations, and stranded unusable capacity. 

The solutions we will feature can help you introduce the operational benefits of Azure Storage and Azure Services inside your organization. Each webcast will feature a brief presentation, live demo, and open Q&A. You will hear from industry leaders and from emerging partners as well! The series will be delivered via the Skype Meeting Broadcast platform and you will find attendee instructions, system requirements, and troubleshooting assistance below. Each session will be recorded and you can view any of the events in the series on-demand, with any device.

When will we start and how can you join us?

Our first session will feature an Introduction by Azure Storage Partner PM Manager Tad Brockway and a presentation demo by Commvault®. 

Inaugural Schedule
Commvault® - February 5th at 9am PT

NetApp - February 12th at 9am PT

Veritas - February 19th at 9am PT

PEER Software - February 26th at 9am PT

Future sessions, their entries for your calendar, and links to past recorded sessions will be available at https://azurestorage.cloud.

Each recorded event will be shared via this site, and we will maintain a forum for you to use as a means of interacting with the presenters in the series. We hope to see you on the live broadcasts or viewing the on-demand sessions when your schedule allows. As the series goes forward, your comments and suggestions are always welcome. We intend to make this a valuable and informative resource and will look for you to hold us accountable!

Skype Meeting Broadcast details

To learn more about how to connect to a Skype Meeting Broadcast please visit the resources below.

Skype Meeting Broadcast Attendee Guide (Our Sessions will all allow Anonymous attendees)
Skype Meeting Broadcast Troubleshooting Guide

Skype Meeting Broadcast System Requirements:
Edge, Internet Explorer 11, Chrome 35 or later, Firefox, OSX Safari, iOS 8 or later, Android (KitKat)

Note: Adobe Flash is required for Internet Explorer 11 on Windows 7, Firefox versions 41 and earlier, as well as Safari on Mac.

Want more storage content?

Check out the most recent episode of the Azure Ninja’s Podcast from our Global Blackbelt team!

Comprehensive monitoring for Azure Site Recovery now generally available


Azure Site Recovery is a vital part of the business continuity strategy of many Azure customers. Customers rely on Azure Site Recovery to protect their mission critical IT systems, maintain compliance, and ensure that their businesses aren’t impacted adversely in the event of a disaster.

Operationalizing a business continuity plan and making sure that it meets your organization’s business continuity objectives is complex. The only way to know if the plan works is by performing periodic tests. Even with periodic tests, you can never be certain that it will work seamlessly the next time around due to variables such as configuration drift and resource availability, among others.

Monitoring for something this critical should not be so difficult. The comprehensive monitoring capabilities within Azure Site Recovery give you full visibility into whether your business continuity objectives are being met. Not only that, with a failover readiness model that monitors resource availability and suggests configurations based on best practices, it also helps you understand how prepared you are to react to a disaster today.


So, what is new in this experience?

  • Enhanced vault overview page: The new vault overview page features a dashboard that presents everything you need to know to understand if your business continuity objectives are being met. In addition to the information needed to understand the current health of your business continuity plan, the dashboard features recommendations based on best practices, and in-built tooling for troubleshooting issues that you may be facing.
  • Replication health model: Continuous, real time monitoring of replication health of servers based on an assessment of a wide range of replication parameters.
  • Failover readiness model: A failover readiness model based on a comprehensive checklist of configuration and disaster recovery best practices, and resource availability monitoring, to help gauge your level of disaster preparedness.
  • Simplified troubleshooting experience: Start at the vault dashboard and dive deeper using an intuitive navigational experience to get in depth visibility into individual components, and additional troubleshooting tools including a brand new dashboard for replicated machines.
  • Anomaly detection: In-depth tooling to detect error symptoms and offer prescriptive guidance for remediation.

With best-in-class monitoring, Azure Site Recovery gives businesses insurance against disasters, with a DR strategy that you know will work when you need it the most.

Discover the elements of this new experience by viewing this short demo video:

Learn more about monitoring and troubleshooting with Azure Site Recovery from our documentation.

Azure Site Recovery is an all-encompassing service for your migration and disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure so that you have a disaster recovery plan that covers all of your organization's IT applications.

Get started with Azure Site Recovery today.

Start replicating in under 30 minutes using Azure Site Recovery’s new onboarding experience


We on the Azure Site Recovery product team are constantly striving to simplify business continuity and disaster recovery to Azure for our customers. With the latest release of the Azure Site Recovery service for VMware to Azure, we bring a new, intuitive, and simplified getting-started experience that gets you set up and ready to replicate virtual machines in less than 30 minutes!

What is new?

Open Virtualization Format (OVF) template-based configuration server deployment

Open Virtualization Format (OVF) is an industry-standard software distribution model for virtual machine templates. Starting in January 2018, the configuration server for the VMware to Azure scenario is available to all our customers as an OVF template.

With the OVF template, we ensure that all the necessary software, except MySQL Server 5.7.20 and VMware PowerCLI 6.0, is pre-installed in the virtual machine template, and once the template is deployed in your vCenter Server, the configuration server can be registered with the Azure Site Recovery services in less than 15 minutes.

Here is a quick video that walks you through the new onboarding experience.

Read more on how to deploy the configuration server template to your VMware vCenter Server / ESXi host.

New intuitive infrastructure management experience

A new web portal has been added to the configuration server, which is a one-stop shop for all the actions to be taken on a configuration server. On all configuration servers deployed using the OVF template, this portal allows customers to modify configuration server settings with ease.

Hassle-free mobility service deployment experience

One of the top issues that has bothered many customers is the requirement to open firewall rules for WMI and File and Printer Sharing services in Windows. These services were used by Azure Site Recovery to push-install the mobility service on the protected virtual machines, and in many enterprise environments these firewall ports were not open on production servers by default.

Starting with this release, Azure Site Recovery will use VMware tools to install and update the mobility service on all protected VMware virtual machines. With this change, customers will no longer be required to open firewall rules for WMI and File and Printer Sharing services before replicating virtual machines from VMware environments into Azure. This allows easy deployment of the Azure Site Recovery mobility service onto protected virtual machines without having to get exceptions from your network security teams.

Note: VMware tools based mobility service installation will be made available to all customers who update their configuration servers to version 9.13.xxxx.x.

Start using Azure Site Recovery today. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers. You can also use the ASR User Voice to let us know what features you want us to enable next.

Scraping a website with 5 lines of R code


In what is rapidly becoming a series — cool things you can do with R in a tweet — Julia Silge demonstrates scraping the list of members of the US house of representatives on Wikipedia in just 5 R statements:

library(rvest)
library(tidyverse)

h <- read_html("https://t.co/gloY1eErBn")

reps <- h %>%
html_node("#mw-content-text > div > table:nth-child(18)") %>%
html_table()

reps <- reps[,c(1:2,4:9)] %>%
as_tibble() pic.twitter.com/25ANm7BHkj

— Julia Silge (@juliasilge) January 12, 2018

Since Twitter munges the URL in the third line when you cut-and-paste, here's a plain-text version of Julia's code:

library(rvest)
library(tidyverse)

h <- read_html("https://en.wikipedia.org/wiki/Current_members_of_the_United_States_House_of_Representatives")

reps <- h %>%
 html_node("#mw-content-text > div > table:nth-child(18)") %>%
 html_table()

reps <- reps[,c(1:2,4:9)] %>% as_tibble()

And sure enough, here's what the reps object looks like in the RStudio viewer:


As Julia notes, it's not perfect, but it still gets you 95% of the way to gathering data from a page intended for human rather than computer consumption. Impressive!


Accelerated Spark on GPU-enabled clusters in Azure


The ability to run Spark on a GPU-enabled cluster demonstrates a unique convergence of big data and high-performance computing (HPC) technologies. In the past several years, we've seen the GPU market explode as companies all over the world integrate AI and other HPC workflows into their businesses. TensorFlow, a framework designed to utilize GPUs for numerical computation and neural networks, has skyrocketed in popularity, a testament to the rise of AI and, consequently, the demand for GPUs. Simultaneously, the need for big data and powerful data processing engines has never been greater, as hundreds of companies start to collect data in the petabyte range.

By providing infrastructure for high performance hardware such as GPUs with big data engines such as Spark, data scientists and data engineers can enable many scenarios that would otherwise be difficult to achieve.

Along with the recent release of our latest GPU SKUs, I'm excited to share that we now support running Spark on a GPU-enabled cluster using the Azure Distributed Data Engineering Toolkit (AZTK). In a single command, AZTK allows you to provision on demand GPU-enabled Spark clusters on top of Azure Batch's infrastructure, helping you take your high performance implementations that are usually single-node only and distribute it across your Spark cluster.

For this release, we have created several additional GPU-enabled Docker images for AZTK, including a Python image that comes packaged with Anaconda, Jupyter, and PySpark, and an R image that comes packaged with the tidyverse, RStudio Server, and sparklyr.

These images use the NVIDIA Docker Engine to provide our Docker images access to the host's GPUs. Because AZTK runs Spark in a completely containerized fashion, users can customize their own GPU Docker images to their specific needs. However, for those users who simply want to run Spark on a GPU-enabled cluster, they can do so without needing to worry about Docker as well. AZTK will automatically pull the appropriate image, giving you GPU access if GPUs are detected on the host machine.

Here's an example of how you can create a four-node GPU enabled Spark cluster (total of 224GB in memory, four GPUs where one GPU = one-half K80 card, and 24 vCPUs) with AZTK:

$ aztk spark cluster create --id my_gpu_cluster --size 4 --vm-size standard_nc6

Since AZTK is aware that the Standard NC6 VMs come with NVIDIA's Tesla K80s, AZTK automatically selects one of the GPU-enabled Docker images when provisioning your cluster. Alternatively, you can also manually specify which image to use by setting the --docker-repo flag.

We also provide a sample that compares a simple PySpark job using GPUs with Numba vs using CPUs to highlight the performance gain you can get when running your Spark jobs with GPUs.

Whether you will be using Spark with GPUs for AI workflows such as TensorFlow/TensorFrames or distributed CNTK, or simply to speed up your computationally expensive Spark jobs, please let us know how you plan to take advantage of this unique convergence of HPC and big data technologies.

We look forward to you using these capabilities and hearing your feedback. Please contact us at askaztk@microsoft.com for feedback or feel free to contribute to our Github repository.

Additional information

Additional resources

Cognitive Services and Availability of SDKs for the latest Bing Search APIs


We are pleased to announce preview availability of SDKs for the Cognitive Services Bing Search APIs. Currently available as REST APIs, the Bing APIs v7 now have SDKs in four languages: C#, Java, Node.js, and Python. These SDKs include offerings such as Bing Web Search, Bing Image Search, Bing Custom Search, Bing News Search, Bing Video Search, Bing Entity Search, and Bing Spell Check.

Here are some of the salient features of these SDKs:

  • Easy to use and highly flexible in adjusting to your basic application scenario.
  • Encompass all the API v7 functionalities, languages, and countries.
  • Reduce assembly footprint through an individual SDK for each Bing offering.
  • Enable development in C#, Java, Node.js, and Python.
  • Provide ability to use the new/existing Bing APIs access keys, both free and paid.
  • Well documented through samples and parameter references.
  • Supported through Azure and other developer forums.
  • Open source under the MIT license and available on GitHub for collaboration.


Getting Started with Bing SDKs

For C#, both NuGet packages and SDKs are available for individual Bing offerings. The best place to start is with C# samples. These samples provide an easy to follow step-by-step guide on running various application specific scenarios through corresponding NuGet packages.

Each Bing offering has corresponding samples along with a NuGet package that you can use to build your application on top of.

For development in other languages, check out Java samples, Node samples, and Python samples. They are all covered under MIT license and include step-by-step guide for you to get started.

If you are already utilizing Bing REST APIs in your app, give these SDKs a spin with your existing access keys and let us know how they work in your scenario. If you are just getting started or want to utilize Bing APIs, you can start with free access keys or buy your subscription. As always, we are excited to see what applications you build and how Bing APIs help you do more with your business. For any ideas or suggestions, please reach out to us at User Voice or share your feedback with Azure Support.

Useful Links

Azure Standard support now offers the highest value support for production workloads amongst major cloud providers


Our Azure customers have three common needs from their cloud support plan:

  • A fixed monthly cost that is affordable and simple to forecast
  • Fast response time for critical cases
  • A plan that covers their entire organization and eliminates guesswork in how many support plans are needed and who can use them

We are pleased to announce important updates for Azure Standard support. With these changes, Azure now offers the most cost-effective and predictable support offering amongst major cloud providers.

Azure Standard support now includes:

  • A significant price drop to a fixed cost of $100 USD per month, so forecasted support costs are completely predictable
  • Faster initial response time, now at 1 hour for critical cases
  • Continuing our current offering of unlimited 24x7 technical and billing support for your entire organization


Get Azure Standard support now >>

Click here for more details, eligibility, and frequently asked questions.

Azure is continuously improving and expanding the range of options to help you accelerate your cloud journey, from the built-in Azure Advisor service that provides free, proactive, and personalized best practice recommendations, to direct connection with Azure engineers through multiple levels of Azure support. There are also unique support options for different types of Azure customers, including Developer, Standard, Professional Direct, and Premier.

      Compliance assessment reports for Azure Stack are now available


A few months ago, we announced that we were performing a compliance assessment on Microsoft Azure Stack. Today we are happy to share that the compliance assessment is done and available to you.

      Knowing that preparing compliance paperwork is a tedious task, we precompiled the documentation for our customers. Since Azure Stack is delivered as an integrated system through hardware partners, we are in a unique position to perform a formal compliance assessment of Azure Stack that applies to all our customers. This resulted in a set of precompiled compliance documents that customers can now use to accelerate their compliance certification process.

      We are glad to announce that Coalfire, a Qualified Security Assessor (QSA) and independent auditing firm, has audited and evaluated Azure Stack Infrastructure against the technical controls of PCI-DSS and the CSA Cloud Control Matrix, and found that Azure Stack satisfies the applicable controls.

      In the assessor’s words:

      It is Coalfire’s opinion that Microsoft Azure Stack integrated system, reviewed between July 2017 and October 2017, can be effective in creating a PCI DSS compliant infrastructure and to assist in a comprehensive program of compliance with PCI DSS version 3.2.

      It is Coalfire’s opinion that Microsoft Azure Stack as deployed in the Original Equipment Manufacturer (OEM) integrated system test, which was reviewed between July 2017 and October 2017, can be effective in creating a CSA CCM 3.0.1 compliant infrastructure and can assist in a comprehensive program of compliance with CSA CCM version 3.0.1.

This compliance documentation describes how Azure Stack meets the technical controls applicable to the Azure Stack infrastructure. The technical controls related to the workloads running on top of Azure Stack, as well as the controls related to people and processes, however, remain the customer's responsibility, since these are specific to each customer. With this documentation, customers now have all the necessary information related to the Azure Stack infrastructure to be certified for PCI-DSS or for many of the compliance standards covered by the CSA-CCM framework.

      The PCI-DSS and the CSA-CCM documents for Azure Stack can be downloaded from the Microsoft Service Trust Portal.

      We understand that compliance encompasses not only the Azure Stack infrastructure, but also the workloads that are deployed on it. To reduce the complexity of compliance related to workloads, Azure Stack and the Azure Blueprint Program are coming together to deliver turn-key compliance solutions to support our customers’ compliance needs and help them rapidly deliver value to their companies and customers. We will share more details on the Azure Blueprint Program for Azure Stack in the coming months; stay tuned!

      Azure Stack will continue to expand the portfolio of validated standards based on customer demand. To express your preference about which compliance standard you would like us to prioritize, please fill out this survey.

      Speed up simulations in R with doAzureParallel


I'm a big fan of using R to simulate data. When I'm trying to understand a data set, my first step is sometimes to simulate data from a model and compare the results to the data, before I go down the path of fitting an analytical model directly. Simulations are easy to code in R, but they can sometimes take a while to run — especially if there are a bunch of parameters you want to explore, which in turn requires a bunch of simulations.

      When your pc only has four core processors and your parallel processing across three of the.#DataScience #RStats #Waiting #Multitasking pic.twitter.com/iVkkr7ibox

      — Patrick Williams (@unbalancedDad) January 24, 2018

      In this post, I'll provide a simple example of running multiple simulations in R, and show how you can speed up the process by running the simulations in parallel: either on your own machine, or on a cluster of machines in the Azure cloud using the doAzureParallel package.

To demonstrate this, let's use a simple simulation example: the birthday problem. Simply stated, the goal is to calculate, for a room of N people, the probability that someone in the room shares a birthday with someone else in the room. Now, you can calculate this probability analytically — R even has a function for it — but this is one of those situations where it's quicker for me to write a simulation than it would be to figure out the analytical result. (Better yet, my simulation accounts for February 29 birthdays, which the standard result doesn't. Take that, distribution analysis!) Here's an R function that simulates 10,000 rooms and counts the number of times a room of n people includes a shared birthday.
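A sketch of such a function (treating February 29 as occurring one year in four, which is an assumption of this version) looks like this:

pbirthdaysim <- function(n, nsims = 10000, feb29 = TRUE) {
  # Day 366 stands in for February 29; weight it as one year in four
  w <- c(rep(4, 365), 1)
  if (!feb29) w[366] <- 0
  matches <- 0
  for (i in seq_len(nsims)) {
    bdays <- sample.int(366, n, replace = TRUE, prob = w)
    if (any(duplicated(bdays))) matches <- matches + 1
  }
  matches / nsims
}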

      You can compare the results to the built-in function pbirthday to make sure it's working, though you should include the feb29=FALSE option for an apples-to-apples comparison. The more simulations (nsims) you use, the closer the results will be.
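For reference, the textbook answer (ignoring February 29) is one minus the probability that all n birthdays are distinct, which is a one-liner:

# Analytical probability of at least one shared birthday in a 365-day year
pbirthday_analytic <- function(n) 1 - prod((365 - 0:(n - 1)) / 365)
pbirthday_analytic(23)   # about 0.507, in line with pbirthday(23)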

We want to find the number of people in the room where the probability of a match is closest to 50%. We're not exactly sure what that number is, but we can find out by calculating the probability for a range of room sizes, plotting the results, and seeing where the probability crosses 0.50. Here's a simple for loop that calculates the probability for room sizes from 1 to 100 people:

      bdayp <- 1:100
      for (n in 1:100) bdayp[n] <- pbirthdaysim(n)
      plot(bdayp, xlab="People in room",
                  ylab="Probability of shared birthday")
      abline(h=0.5)
      


      If you look at where the horizontal line crosses the curve, you can see that with 23 people in the room there's a 50% probability of a match. With more than 60 people, a match is practically guaranteed.

On my Surface Book (16GB RAM, two 2.6GHz cores) using R 3.4.3, that simulation takes about 6 minutes (361 seconds). And even though I have two cores, according to Task Manager the CPU utilization hovered at a little over 50%, because R — as a single-threaded application — was using just one core. (The operating system and my other apps accounted for the rest of the usage.)

      So here's one opportunity to speed things up: we could run two R sessions on my laptop at the same time, each doing half the simulations, and then combine the results at the end. Fortunately, we don't have to break up our simulation manually: there are R packages to streamline that process. One such package is the built-in R package parallel, which provides parallel equivalents to the "apply" family of functions. Here though, I'm going to use the foreach function, which is similar to a for loop and — with the %dopar% operator — runs iterations in parallel. (You can find the foreach package on CRAN.) On my laptop, I can use the doParallel package to make foreach use the aforementioned parallel package in the background to run two iterations at a time.
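A local setup along those lines is just a few more lines (a sketch; adjust the worker count to your machine):

library(doParallel)

cl <- parallel::makeCluster(2)   # one worker per core on this laptop
registerDoParallel(cl)

# Same simulation as before, with iterations farmed out to the two workers;
# foreach picks up pbirthdaysim from the calling environment automatically.
bdayp <- foreach(n = 1:100, .combine = c) %dopar% pbirthdaysim(n)

parallel::stopCluster(cl)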

This time, my CPU utilization stayed above 90% across both cores (while the fans whirred madly), and the simulation took 316 seconds. Not a huge improvement, but definitely faster. (To be fair, I was also on a video call while the simulation was running, which itself consumed quite a bit of CPU.) That raises the question: can we go faster if we use more than one computer (ideally ones dedicated to R and not other desktop stuff)? The answer, as you might have guessed, is yes.

      The main problem here is getting access to a cluster: not many of us have a rack of machines to ourselves. But in this age of cloud computing, we can rent one for as long as we like, and for small simulations this can be pretty cheap, too. With the doAzureParallel package I can spin up a cluster of any desired size and power, and use the same foreach code as above to run my simulation on it, all controlled from my local R installation.

      To use doAzureParallel, you'll need the following:

      • An Azure account (if you don't have one, a new free account includes $200 in credits), and 
  • An Azure Storage service and an Azure Batch service set up in your account, and their associated keys provided in a credentials.json file. (The instructions explain how to do this, and it only takes a few minutes to set up.) Azure Batch is the cluster management service we'll be using to create our clusters for the R simulations.

It can take a little while to boot the cluster, but once it's up and running you can use it as much as you like. And best of all, you can launch the simulation on the cluster from your local laptop using almost exactly the same code as before:

      library(doAzureParallel)
      setCredentials("credentials.json")
      cluster <- makeCluster("cluster.json")
      registerDoAzureParallel(cluster)
      bdayp <- foreach(n=1:100) %dopar% pbirthdaysim(n)
      

(That's why I used foreach instead of the parallel package for the local analysis above: you can change the "backend" for the computation without changing the foreach statement itself.) Using a cluster of eight 2-core nodes (as specified in this cluster.json file), the simulation took just 54 seconds. That doesn't include spinning up the cluster itself (the first 4 lines above), but does include sending the jobs to the nodes, waiting for tasks to complete, and merging the results (which happens in line 5). Of course, once you've launched a cluster you can re-use it instantly, and you can even resize it dynamically from R to provide more (or less) computing power.

      In my local region that particular virtual machine (VM) type currently costs US$0.10 an hour. (There are many VM sizes available.) We used 8 VMs for about a minute, so the total cost of that simulation was less than 2 cents. And we can make things even cheaper — up to 80% discount! — by using pre-emptable VMs: you can add some (or all!) to your cluster with the lowPriorityNodes field in the cluster.json file. Using low-priority VMs doesn't affect VM performance, but Azure may shut down some of them if surplus capacity runs out. If this happens when you're using doAzureParallel you might not even notice, however: it will automatically resubmit iterations to remaining nodes or wait for new VMs. You'll still get the same results, but it may possibly take longer than usual.

      Now, simulating the birthday problem is a particularly simple example — deliberately so. Most importantly, it doesn't require any data to be moved to the cluster, which is an important consideration for performance. (We'll look at that issue in a later blog post.) Our simulation also doesn't use any specialized R packages that need to be installed on the cluster. As it happens, in the cluster specification we set things up to use the tidyverse Docker image from Rocker, so most packages we'd need were already there. But if you want to use a different image for the nodes, or need to ship a custom R package or startup script to each node, you can specify that in the cluster.json file as well.

      If you'd like to try out the doAzureParallel package yourself, you can install it and find setup instructions and documentation at the Github repository below.

GitHub (Azure): doAzureParallel

       
