
Improve patient engagement and efficiency with AI powered chatbots


A major item in scoring a health care organization is patient engagement. That is, patients are more likely to give a higher rating if they have some way to communicate with the organization with as little friction as possible. Improving the patient engagement experience can also improve patient outcomes — the hope is to do it affordably. Artificial intelligence (AI) powered health agents, also known as “chatbots,” can engage patients 24/7, in different languages, and at very low cost. These AI agents can assist healthcare organizations across a variety of use cases, for example:

  • Check and triage patient symptoms using established medical protocols.
  • Look up medical information about conditions, symptoms, causes, and complications.
  • Find healthcare providers to consult with for specific patient conditions.
  • Answer health plan inquiries, for example checking member eligibility.
  • Schedule patient appointments with healthcare providers.

Read on for an overview of powerful cloud-based tools you can use to rapidly create and deploy your AI powered health agents in Microsoft Azure.

AI powered chatbots

Rapid development and deployment to the cloud

The primary service for rapidly developing and deploying your new chatbot is the Azure Health Bot Service. The service includes a visual editor for customizing bots that communicate using Natural Language Processing (NLP), with support for both text and voice interactions. Using its visual designer tool, administrators can create custom “conversation scenarios” to support even the most complex and intricate requirements. These conversation scenarios are stored in the Azure cloud and power the Health Bot Service. Conversations can also be configured to use HealthNavigator content — an extensive and credible source of medical knowledge. Supporting text chat with your users requires an open source web chat client. Conversations can integrate with your applications, or with third-party applications, via secure HTTPS calls to their APIs. Fast Healthcare Interoperability Resources (FHIR) support is also available, enabling your conversations to interact with patient Electronic Medical Records (EMRs).

Privacy, security, and compliance

The Health Bot Service supports HIPAA compliance and is in the process of being enhanced to also support ISO 27001 and ISO 27018 compliance. Built on Microsoft Azure, your AI powered health agent can also benefit from Azure’s vast array of compliance frameworks. On Azure you also have powerful security tools to protect the confidentiality, integrity, and availability of your sensitive information and systems. That security extends to protecting the privacy of patients, family caregivers, and healthcare professionals.

Getting started

Start today with the Health Bot Overview, and access quick-starts, concepts, how-to guides, and additional resources. For further information on use cases, value propositions, and how to get started with AI in healthcare, see the AI in Healthcare Guide.

Collaboration

What other use cases, challenges, and solutions are you seeing for AI powered health agents? We welcome any feedback or questions you may have below. AI in healthcare is a fast-moving field. New developments are emerging daily. Many of these new developments are not things you can read about yet in a textbook. I post daily about these new developments and solutions in healthcare, AI, cloud computing, security, privacy, and compliance on social media. Reach out to connect with me on LinkedIn and Twitter.


Amtrak keeps its mobile enterprise running ahead of schedule with Microsoft 365


Today’s post was written by Don Friend, senior director of infrastructure platform services at Amtrak.

Stepping off an Amtrak train, passengers on the U.S. railroad are greeted by the vibrant tile work and iconic palm trees of Los Angeles Union Station, or the light-filled atrium of Chicago Union Station, or the friendly station staff at any of the 500 communities we serve across the country. Our mission is to connect people from all over the country through our rail network. Ensuring that riding the rails is a safe, enjoyable way to travel has always been our goal, but today, the industry is changing. We need to move at the speed of our customers’ online reviews. We need to stay connected and agile to remain at the forefront of modern technologies that improve customer experience—like the self-service kiosks and e-ticketing services we have installed. As we evolve to meet the accelerating pace of both business and travel, it’s essential that our workforce is as connected as our network of trains. We use Microsoft 365 to empower our office employees to work together in highly secure mobile environments.

Our visionary approach to communication and collaboration at Amtrak extends to our Firstline Workers as well. We call it our “connected workforce strategy” and it reflects a huge change in the mindset of senior leaders at Amtrak. For too long, our Firstline Workers remained outside the corporate family, with no email to connect them to colleagues. We deployed Microsoft Office 365 F1 to empower these workers with the rich capabilities in the Office 365 ProPlus toolset, so we collaborate more effectively, improve passenger satisfaction, and reduce costs.

We have approximately 14,000 mobile workers at Amtrak who run the gamut from engineers who drive the trains to our police force, onboard service attendants, and mechanics. Even our executives are assigned one or two stations that they visit twice a year to complete a safety and cleanliness audit. Many carry corporate devices, including phones and tablets. Our police force uses mobile handhelds loaded with police applications, and our conductors have devices with barcode readers.

In the past, keeping these employees connected to the wider enterprise was challenging. But today, we use Microsoft Azure Active Directory for identity and access management with Multi-Factor Authentication, and Microsoft Intune for mobile app and device management. These two services are the foundation of our connected workforce strategy. Regardless of location, employees can easily authenticate from any device. As we work to protect corporate data and employee identities, Amtrak is in a better position to deploy mobile apps to Firstline Workers that will make a difference to our customer service. These include an app for executives to audit their stations, and apps that conductors and service staff can use to inventory cleanliness and safety issues and alert staff at the next station to remediate a problem.

Mobile access to our corporate communications also makes it easier for our CEO to connect with the entire organization. It took just a month to deploy the CEO Corner on our intranet, which features a question-and-answer section and corporate announcements. As we transition our on-premises intranet to the cloud, employees will be able to use any device from anywhere in the world to access corporate resources safely and securely.

Amtrak recently moved corporate headquarters to a new facility, which presented the perfect opportunity to rethink our conference room strategy. We integrated Microsoft Surface Hubs into our conference rooms, promoting the rich collaboration experience you get with Skype for Business and Surface Hub. As a result, the revamped conference rooms have been a huge hit; everyone loves the large screens, where conference calls become vehicles for virtual teamwork thanks to whiteboarding, coauthoring, and sharing and saving meeting notes. Since upgrading our conference rooms, we have been inundated with requests for Surface Hubs from across the business.

As we continue to keep pace with a changing environment, today we have a productivity solution that combines the key aspects of our vision for the future at Amtrak—a quality passenger experience that’s fast, connected, and safe.

—Don Friend


Intelligent GIF Search: Finding the right GIF to express your emotions

The popularity of GIFs has risen, in large part because they afford users a unique way to express their emotions. That said, it can be challenging to find the perfect GIF to express yourself. You could be looking for a special expression from a favorite celebrity, an iconic moment from a movie/TV show, or just a way to say ‘good morning’ that’s not mundane. To provide the best GIF search experience, Bing employs techniques such as sentiment analysis, OCR, and even pose modeling of subjects appearing in GIF flicks to reflect subtle intent variations of your queries. Read on to find out more about how we made it work, and experience it yourself on Bing’s Image search by typing a query like angry panda. Here are some of the techniques we’ve used to make our GIF search more intelligent.
 

Vector Search and Word Embeddings for Images to Improve Recall

As we’ve previously done with Image Ranking and Vector Based Search, we’ve gone beyond simple keyword matching techniques and captured underlying semantic relationships. Using text and image embedding vectors, we first map queries and images into a high-dimensional vector space and then use similarity in that space to improve recall. In simple words, vector-based search teaches the search engine that words like “amazing”, “great”, “awesome”, and “fantastic” are semantically related. This allows us to retrieve documents not just by exact match, but also by match with semantically related terms.
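As a rough illustration of the retrieval step (not Bing’s production implementation), assume query and image embeddings are already computed; candidates can then be ranked by cosine similarity:

using System;
using System.Collections.Generic;
using System.Linq;

static class VectorSearchSketch
{
    // Cosine similarity between two embedding vectors of equal length.
    static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB) + 1e-12);
    }

    // Rank candidate GIFs by how close their embeddings are to the query embedding.
    static IEnumerable<string> Rank(double[] queryEmbedding, IDictionary<string, double[]> gifEmbeddings)
    {
        return gifEmbeddings
            .OrderByDescending(kv => CosineSimilarity(queryEmbedding, kv.Value))
            .Select(kv => kv.Key);
    }
}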


GIF Summarization and OCR algorithms for improving precision

One reason GIF search is more complicated than image search is that a GIF is composed of many images (frames), and therefore you need to search through multiple images, not just the first, to check for relevance. For instance, a search for a cat GIF, a celebrity, or a TV show or cartoon character GIF needs to ensure that the subject occurs in multiple frames of the GIF and not just the first. That’s complicated. Moreover, many users include phrases in their queries like “hello”, “good morning”, “Monday mornings”, etc. – where we need to ensure that these textual messages are also included in the GIF. That’s complicated too, and that’s where our Optical Character Recognition (OCR) system comes into play. We use a deep-neural-network-based OCR system, and we’ve added synthetic training data to better adapt it to the GIF scenario.

The multi-frame nature of a GIF introduces additional complexities for OCR as well. For example, an OCR system would look at the images below and detect four different pieces of text – “HA”, “HAVE”, “HAVE FU”, and “HAVE FUN”. In fact, there’s just one piece of text – “HAVE FUN”. We use text similarity combined with spatial and temporal information to disambiguate such cases.
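One simple way to collapse such partially rendered detections (ignoring the spatial component for brevity) is to drop any frame text that a later frame extends; a minimal sketch, with hypothetical names:

using System;
using System.Collections.Generic;
using System.Linq;

static class GifOcrSketch
{
    // Collapse partially rendered text detected in successive frames.
    // A detection is kept only if no later frame's text extends it.
    static IEnumerable<string> DeduplicateFrameText(IList<string> perFrameText)
    {
        var kept = new List<string>();
        for (int i = 0; i < perFrameText.Count; i++)
        {
            bool extendedLater = perFrameText
                .Skip(i + 1)
                .Any(later => later.StartsWith(perFrameText[i], StringComparison.OrdinalIgnoreCase));
            if (!extendedLater)
                kept.Add(perFrameText[i]);
        }
        return kept.Distinct(StringComparer.OrdinalIgnoreCase);
    }
}

// Example: ["HA", "HAVE", "HAVE FU", "HAVE FUN"] collapses to ["HAVE FUN"].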


Sentiment Analysis using text – to improve results quality

A common scenario for GIF search is emotion queries – where users are searching for GIFs that match a certain emotion (searches like “happy”, “sad”, “awesome”, “great”, “angry”, or “frustrated”). Here, we analyze the sentiment/emotion of the GIF query and try to provide GIF results that have matching sentiment. Query sentiment analysis is complicated because there are usually just 2-3 terms in a query, and they don’t always reflect emotions. To understand query sentiment, we’ve analyzed public community websites and learned billions of relationships between text and emojis.

To understand the sentiment of GIF documents, we analyze the text that surrounds them on web pages. Having the sentiment for both the query and the documents, we can match the sentiment of the user query with the results they see. For instance, if a user issues the query “good job”, and we’ve already detected text like “Good job 😊 😊 ” on chat sites, we would infer that “Good job” is a query with positive sentiment and choose the GIF documents with positive sentiment.
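A toy sketch of that idea (the emoji polarity table below is a made-up stand-in for the relationships mined from community sites): score a query by averaging the polarity of emoji that co-occur with it.

using System.Collections.Generic;
using System.Linq;

static class QuerySentimentSketch
{
    // Hypothetical polarity scores for a handful of emoji (+1 positive, -1 negative).
    static readonly Dictionary<string, double> EmojiPolarity = new Dictionary<string, double>
    {
        ["😊"] = 1.0, ["😍"] = 1.0, ["😂"] = 0.5, ["😡"] = -1.0, ["😢"] = -1.0
    };

    // Score a query by averaging the polarity of emoji observed near it in mined text.
    static double ScoreQuery(IEnumerable<string> cooccurringEmoji)
    {
        var known = cooccurringEmoji.Where(EmojiPolarity.ContainsKey).ToList();
        return known.Count == 0 ? 0.0 : known.Average(e => EmojiPolarity[e]);
    }
}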
 

Expressiveness, Pose and Awesomeness using CNNs

Celebrities are a major area of GIF searches. Given users are utilizing GIFs to express their emotions, it is a basic requirement that the top ranked celebrity GIFs convey strong emotions. Selecting GIFs that contain the right celebrity is easy but identifying GIFs that contain strong emotions or messages is hard. We do this by using deep learning models to analyze poses, actions, and expressiveness.

Poses and actions
Poses can be modeled using the positions of skeleton points, such as the head, shoulders, and hands. Actions can be modeled using the motion of these points across frames. To extract features that depict human poses and actions, we estimate the skeleton point positions in each frame and estimate the motion across adjacent frames. A fully convolutional network is deployed to estimate each skeleton point of the upper body. The motion vectors of these skeleton points are extracted to depict the motion information. A final model deduces the ‘awesomeness’ by examining the poses and the actions in the GIF.

Exaggerated expressions
Here we analyze the facial expression of the subject to select results with more exaggerated facial expressions. We extract the expressions from multiple frames of the GIF and compute a score that indicates the level of exaggeratedness.  Our GIF search returns results that have more exaggerated facial expressions.

By pairing deep convolutional neural networks, expressiveness, poses, actions and exaggeratedness models with our huge celebrity database, we can return awesome results for celebrity searches.

Image graph and other techniques
In addition to helping us understand semantic relationships, the Image Graph also improves ranking quality. The Image Graph is made up of several clusters of similar images (in this case, GIFs) and has historical data (e.g., clickthrough rate) for images. As shown in the graph below, the images within the same cluster are visually similar (the distance between images denotes similarity), and the distance between clusters denotes the visual similarity of the main images within the clusters. Now, if we know that an image in cluster D was extremely popular, we can propagate that clickthrough rate data to all other GIFs in cluster D. This greatly improves ranking quality. We can also improve the diversity of the recommended GIFs using this technique.
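A rough sketch of that propagation step (illustrative only; the names and the simple averaging are assumptions, not Bing’s actual model):

using System.Collections.Generic;
using System.Linq;

static class ImageGraphSketch
{
    // clusterMembers: cluster id -> GIF ids in that cluster.
    // observedCtr: GIF id -> observed clickthrough rate (only for GIFs with history).
    // Each GIF inherits its cluster's average CTR, so popularity signal from one GIF
    // benefits visually similar GIFs that have little or no history of their own.
    static Dictionary<string, double> PropagateClickthrough(
        Dictionary<string, List<string>> clusterMembers,
        Dictionary<string, double> observedCtr)
    {
        var smoothed = new Dictionary<string, double>();
        foreach (var cluster in clusterMembers)
        {
            var rates = cluster.Value.Where(observedCtr.ContainsKey)
                                     .Select(id => observedCtr[id])
                                     .ToList();
            double clusterRate = rates.Count > 0 ? rates.Average() : 0.0;
            foreach (var id in cluster.Value)
                smoothed[id] = clusterRate;
        }
        return smoothed;
    }
}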

Finally, we also consider source authority, virality, and popularity when deciding which GIFs to show on top. And we have detrimental-content classifiers (one based on images and another based on text) to remove offensive content and ensure that all our top results are clean.

There you have it – did you really imagine that so many machine learning techniques are required to make GIF ranking work? Altogether, these components bring intelligence to Bing’s GIF search experience, making it easier for users to find what they’re looking for. Give it a try on Bing image search.
 

Announcing Cumulative Updates for .NET Framework for Windows 10 October 2018 Update


We deliver .NET Framework updates nearly every month, through Windows Update and other distribution channels. We are making changes to the way that we deliver those updates. We’ll soon start delivering a Cumulative Update for .NET Framework alongside the Windows 10 Cumulative Update, starting with the Windows 10 October 2018 Update. This new approach will give you more flexibility on installing .NET Framework updates.

What is the new Cumulative Update for .NET Framework?

Starting with Windows 10 October 2018 Update and Windows Server 2019, .NET Framework fixes will be delivered through a Cumulative Update for .NET Framework.

We are making this change to enable the following:

  • Provide more flexibility for installing .NET Framework updates (for example, IT admins can more selectively test line-of-business applications before broadly deploying).
  • Respond to critical customer needs with higher velocity, when needed, via standalone .NET Framework patches.

The Cumulative Update for .NET Framework will have the following characteristics:

  • Independent – Released separately from the Windows Cumulative Update
  • Cumulative – The latest patch will fully update all .NET Framework versions on your system
  • Same cadence – The Cumulative Update for .NET Framework will be released on the same cadence as Windows 10.

What should I expect?

You can expect the following new experiences.

Windows Update users:

If you rely on Windows Update to keep your machine up to date and have automatic updates enabled, you will not notice any difference.  Updates for both Windows and the .NET Framework will be silently installed, and as usual you may be prompted for a reboot after installation.

If you manage Windows Update manually, you will notice a Cumulative Update for .NET Framework update alongside the Windows cumulative update. Please continue to apply the latest updates to keep your system up to date.

Image: Cumulative Update for .NET Framework delivered via Windows Update

You can continue to rely on existing guidance for advanced Windows Update settings.

Systems and IT Administrators:

  • System administrators relying on Windows Server Update Services (WSUS) and similar update management applications will observe a new update for .NET Framework when checking for updates applicable to upcoming versions of Windows 10 October 2018 Update and Windows Server 2019.
  • The Cumulative Update for .NET Framework uses the same classifications as the Cumulative Update for Windows and continues to appear under the same Windows products. Updates that deliver new security content will carry the “Security Updates” classification, and updates carrying solely new quality fixes will carry either the “Updates” or “Critical Updates” classification, depending on their criticality.
  • System administrators that rely on the Microsoft Update Catalog will be able to access the Cumulative Update for .NET Framework by searching for each release’s Knowledge Base (KB) number. Note that a single update will contain fixes for both the .NET Framework 3.5 and 4.7.2 products.
  • You can use the update title to filter between the Windows Cumulative updates and .NET Framework updates. All other update artifacts are expected to remain the same.

Image: Cumulative Update for .NET Framework delivered via WSUS Administration console

.NET Framework updates across Windows versions

.NET Framework updates will be delivered in the following way:

  • Windows 10 October 2018 Update (version 1809) – One Cumulative Update for .NET Framework, alongside the Windows Cumulative Update.
  • Windows 10 April 2018 (version 1803) and earlier versions of Windows 10 – One Windows Cumulative Update (which includes .NET Framework updates), per Windows version.
  • Windows 7 and 8.1 – Multiple .NET Framework updates, per Windows version.

.NET Framework updates are delivered on the same servicing cadence as Windows 10. We deliver different types of updates on different schedules, as described below.

  • The security and quality updates for .NET Framework will be released on Patch Tuesday, the second Tuesday of each month, containing important security and critical quality improvements.
  • Each new security and quality update will supersede and replace the last security and quality update release.
  • Preview updates for .NET Framework will be released one to two weeks after the Patch Tuesday release. They contain non-security fixes and are a limited-distribution release (they will not be installed automatically).
  • Out-of-band releases are reserved for situations where customer systems must be updated quickly and outside of the regular schedule, to fix security vulnerabilities or to resolve critical quality issues.

For more information about .NET Framework update model for previous versions of Windows, please refer to: Introducing the .NET Framework Monthly Rollup and .NET Framework Monthly Rollups Explained.

Validating the Quality of Updates

We extensively validate the quality of these updates before publishing them. .NET Framework updates are installed by many customers on many machines. Patch Tuesday updates often contain security fixes, and it is important that you can apply them quickly throughout your environment. We are continually improving our validation system to ensure high-quality updates.

We use the following approaches to validate quality:

  • Extensive functional testing with in-house regression tests.
  • Compatibility testing with Microsoft applications, servers and services.
  • Compatibility testing with third-party applications that have been submitted to the .NET Framework compatibility lab. You can submit your app at dotnet@microsoft.com.
  • Security Update Validation Program (SUVP).
  • Listening to customer feedback from previous preview releases and Windows Insider builds.

 

FAQ

Will installing a Cumulative Update for .NET Framework upgrade my .NET Framework version?

No. These updates will not upgrade you to a new .NET Framework version. They will update the .NET Framework version you already have installed.
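If you want to confirm which .NET Framework 4.x release is installed after patching, one common approach (shown here as an illustrative sketch, separate from the update itself) is to read the Release value from the registry:

using System;
using Microsoft.Win32;

static class NetFxVersionCheck
{
    static void Main()
    {
        // The Release DWORD under this key identifies the installed .NET Framework 4.x version.
        // Microsoft documents the mapping; for example, 461808 corresponds to 4.7.2 on
        // Windows 10 April 2018 Update and Windows Server, version 1803.
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(subkey))
        {
            object release = key?.GetValue("Release");
            Console.WriteLine(release != null
                ? $".NET Framework 4.x Release value: {release}"
                : ".NET Framework 4.5 or later is not installed.");
        }
    }
}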

Will I need to reboot after installing the Cumulative Update for .NET Framework?

In most cases, yes.

Will I need an additional reboot when installing the Cumulative Update for .NET Framework together with the Windows Cumulative update?

Windows Update orchestrates updates that ship at the same time so that they are processed together and require only a single reboot. Guidance for WSUS/IT admins is to continue to ensure that updates are grouped and deployed together to avoid any potential additional reboots.

Is there a security-only variant of the Cumulative Update for .NET Framework?

No. This approach limits the number of updates to manage and aligns with the model used by Windows 10.

What do I need to do to update .NET Framework 3.5?

Install the Cumulative Update for .NET Framework. It includes .NET Framework 3.5 fixes.

I have concerns about the quality of .NET Framework fixes. What can I do?

You can submit your application for testing in our compatibility lab (send mail to dotnet@microsoft.com) and install .NET Framework Preview of Quality updates to validate compatibility.

Is Microsoft producing new types of patches in addition to the new Cumulative Update for .NET Framework?

No new standalone updates are planned.

Azure DevOps Continuous Build/Deploy/Test with ASP.NET Core 2.2 Preview in One Hour


I've been doing Continuous Integration and Deployment for well over 13 years. We used a lot of custom scripts and a lovely tool called CruiseControl.NET to check out, build, test, and deploy our code.

However, it's easy to get lulled into complacency. To get lazy. I don't set up Automated Continuous Integration and Deployment for all my little projects. But I should.

I was manually deploying a change to my podcast website this evening via a git deploy to Azure App Service. Pushing to Azure this way via Git uses "Kudu" to actually build the site. However, earlier this week I was also trying to update my site to .NET Core 2.2 which is in preview. Plus I have Unit Tests that aren't getting run during deploy.

So look at it this way. My simple little podcast website with a few tests and the desire to use a preview .NET Core SDK means I've outgrown a basic "git push to prod" for deploy.

I remembered that Azure DevOps (formerly VSTS) is out and offers free unlimited minutes for open source projects. I have no excuse for my sloppy builds and manual deploys. It also has unlimited free private repos, although I'm happy at GitHub and have no reason to move.

It usually takes me 5-10 minutes for a manual build/test/deploy, so I gave myself an hour to see if I could get this same process automated in Azure DevOps. I've never used this before and I wanted to see if I could do it quickly, and if it was intuitive.

Let's review my goals.

  • My source is in GitHub
  • Build my ASP.NET Core 2.2 Web Site
    • I want to build with .NET Core 2.2 which is currently in Preview.
  • Run my xUnit Unit Tests
    • I have some Selenium Unit Tests that can't run in the cloud (at least, I haven't figured it out yet) so I need them skipped.
  • Deploy the resulting site to production in my Azure App Service

Cool. So I make a project and point Azure DevOps at my GitHub.

Azure DevOps: Source code in GitHub

They have a number of starter templates, so I was pleasantly surprised I didn't need to manually build my Build Configuration myself. I'll pick ASP.NET app. I could pick Azure Web App for ASP.NET but I wanted a little more control.

Select a template

Now I've got a basic build pipeline. You can see it will use NuGet, get the packages, build the app, test the assemblies (if there are tests...more on that later) and then publish (zip) the build artifacts.

Build Pipeline

I then clicked Save & Queue...and it failed. Why? It says that I'm targeting .NET Core 2.2 and it doesn't support anything over 2.1. Shoot.

Agent says it doesn't support .NET Core 2.2

Fortunately there's a pipeline element that I can add called ".NET Core Tool Installer" that will get specific versions of the .NET Core SDK.

NOTE: I've emailed the team that ".NET Tool Installer" is the wrong name. A .NET Tool is a totally different thing. This task should be called the ".NET Core SDK Installer." Because it wasn't, it took me a minute to find it and figure out what it does.

I'm using SDK version 2.2.100-preview2-009404 so I put that string into the properties.

Install the .NET Core SDK custom version

At this point it builds, but I get a test error.

There are two problems with the tests. When I look at the logs I can see that the "testadapter.dll" that comes with xUnit is mistakenly being pulled into the test runner! Why? Because the "Test Files" spec includes a VERY greedy glob in the form of **\*test*.dll. Perhaps testadapter shouldn't include the word test, but then it wouldn't be well-named.

**\$(BuildConfiguration)\*test*.dll

!**\obj\**

My test DLLs are all named with "tests" in the filename so I'll change the glob to "**\$(BuildConfiguration)\*tests*.dll" to cast a less-wide net.

Screenshot (45)

I have four Selenium Tests for my ASP.NET Core site but I don't want them to run when the tests are run in a Docker Container or, in this case, in the Cloud. (Until I figure out how)

I use SkippableFacts from XUnit and do this:

public static class AreWe
{
    public static bool InDockerOrBuildServer
    {
        get
        {
            // Set by the official .NET Core Docker images.
            string retVal = Environment.GetEnvironmentVariable("DOTNET_RUNNING_IN_CONTAINER");
            // Set on Azure DevOps build agents.
            string retVal2 = Environment.GetEnvironmentVariable("AGENT_NAME");
            return (
                (String.Compare(retVal, Boolean.TrueString, ignoreCase: true) == 0)
                ||
                (String.IsNullOrWhiteSpace(retVal2) == false));
        }
    }
}

Don't tease me. I like it. Now I can skip tests that I don't want running.

if (AreWe.InDockerOrBuildServer) return;
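For context, a skippable Selenium test might look roughly like the sketch below (the test name and body are illustrative, not the site's actual tests; Xunit.SkippableFact's Skip.If reports the test as skipped rather than silently passed):

using Xunit;

public class HomePageSeleniumTests
{
    [SkippableFact]
    public void HomePage_Loads()
    {
        // Skip (rather than fail) when running in a container or on a hosted build agent,
        // where the Selenium browser drivers aren't available.
        Skip.If(AreWe.InDockerOrBuildServer, "Selenium tests only run locally.");

        // ... Selenium setup and assertions would go here ...
    }
}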

Now my tests run and I get a nice series of charts to show that fact.

22 tests, 4 skipped

I have it building and tests running.

I could add the Deployment Step to the Build but Azure DevOps Pipelines includes a better way. I make a Release Pipeline that is separate. It takes Artifacts as input and runs n number of Stages.

Creating a new Release Pipeline

I take the Artifact from the Build (the zipped up binaries) and pass them through the pipeline into the Azure App Service Deploy step.

Screenshot (49)

Here's the deployment in progress.

Manually Triggered Release

Cool! Now that it works and deploys, I can turn on Continuous Integration Build Triggers (via an automatic GitHub webhook) as well as Continuous Deployment triggers.

Continuous Deployment

Azure DevOps even includes badges that I can add to my readme.md so I always know by looking at GitHub if my site builds AND if it has successfully deployed.

4 releases, the final one succeeded

Now I can see each release as it happens and if it's successful or not.

Build Succeeded, Never Deployed

To top it all off, now that I have all this data and these pipelines, I even put together a nice little dashboard in about a minute to show Deployment Status and Test Trends.

My build and deployment dashboard

When I combine the DevOps Dashboard with my main Azure Dashboard I'm amazed at how much information I can get with so little effort. Consider that my podcast (my little business) is a one-person shop, and yet I have a CI/CD pipeline with integrated testing gates that deploys worldwide. Many years ago this would have required a team and a lot of custom code.

Azure Dashboard

Today it took an hour. Awesome.


Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.



© 2018 Scott Hanselman. All rights reserved.
     

Simplify modern data warehousing with Azure SQL Data Warehouse and Fivetran


Gaining insights rapidly from data is critical to being competitive in today’s business world. With a modern data warehouse, customers can bring together all their data at any scale into a single source of truth for use cases such as business intelligence and advanced analytics.

A key component of successful data warehousing is replicating data from diverse data sources into the canonical data warehousing database. Ensuring that data arrives in your data warehouse consistently and reliably is crucial for success. Data integration tools ensure that users can successfully connect to their critical data sources while moving data between source systems and their data warehouse in a timely yet reliable fashion.

Introducing Fivetran

We’re excited to announce that Fivetran has certified their zero-maintenance, zero-configuration data pipelines product for Azure SQL Data Warehouse. Fivetran is a simple-to-use system that enables customers to load data from applications, file stores, databases, and more into Azure SQL Data Warehouse.

"Azure is our fastest-growing customer base now that we support SQL Data Warehouse as a destination for Fivetran users. We're excited to be a part of the Microsoft ecosystem."

- George Fraser, CEO and Co-Founder at Fivetran

We’re also pleased to announce Azure SQL Data Warehouse’s presence in Fivetran’s Cloud Data Warehouse Benchmark, which compares cloud providers’ performance on the 1 TB TPC-DS workload!

benchmark-1tb-speed

With Fivetran’s automated, replicate-all data connectors our customers can:

    • Bring together diverse sources into SQL DW as normalized, ready-to-query schemas.

    • Avoid complex customization and get started quickly.

    • Automatically adjust to source changes so that their solutions are never interrupted.

    • Deliver data reliably without coding or regular maintenance.

    Here are a few sources that Fivetran supports today:

    • Application APIs: Salesforce, Marketo, Adwords, MixPanel, DoubleClick, LinkedIn Ads, Netsuite.
    • Databases: Oracle, SQL Server, Postgres.
    • Files: Azure Blob Storage, FTPS, Amazon S3, CSV Upload, Google Sheets.
    • Events: Google Analytics 360, Snowplow, Webhooks.

    For a more comprehensive listing, please visit their connectors page.

    Custom connector support

    While Fivetran supports many data connectors today, sometimes the connector you need isn’t supported. If that is the case, you can use the Azure Functions connector to create a simple custom pipeline.

    How it works:

    • Write a small function to fetch data from your custom source, along with the state logic Fivetran needs to handle incremental updating (a minimal sketch follows this list).
    • Host your function on Azure Functions.
    • Connect Fivetran and let it handle the rest. Fivetran loads data into your warehouse, calling your function as often as every five minutes to fetch new data, de-duplicate it, and incrementally update it.
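    A minimal sketch of such a function, written as an HTTP-triggered Azure Function in C#, is shown below. The response shape (state, insert, hasMore) is an assumption based on the general Fivetran functions pattern; check Fivetran's documentation for the exact contract your connector must honor.

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class FivetranConnectorSketch
{
    // An HTTP-triggered Azure Function that Fivetran could call on a schedule.
    // "my_table" and the cursor field are hypothetical; a real connector would read the
    // incoming state from the request body and fetch only records changed since that cursor.
    [FunctionName("FivetranConnector")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        var rows = new[] { new { id = 1, name = "example" } };   // stand-in for source data

        return new OkObjectResult(new
        {
            state = new { cursor = DateTime.UtcNow },   // cursor returned for the next call
            insert = new { my_table = rows },           // rows to load, keyed by table name
            hasMore = false                             // true would make Fivetran call again
        });
    }
}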

    Next steps

    To learn how to get started with Fivetran data connectors for Azure SQL Data Warehouse, visit their documentation or get started with a free 14-day trial.

    Learn more about SQL DW and stay up-to-date with the latest news by following us on Twitter @AzureSQLDW.

    Getting AI/ML and DevOps working better together


    Artificial Intelligence (AI) and machine learning (ML) technologies extend the capabilities of software applications that are now found throughout our daily life: digital assistants, facial recognition, photo captioning, banking services, and product recommendations. The difficult part about integrating AI or ML into an application is not the technology, or the math, or the science or the algorithms. The challenge is getting the model deployed into a production environment and keeping it operational and supportable. Software development teams know how to deliver business applications and cloud services. AI/ML teams know how to develop models that can transform a business. But when it comes to putting the two together to implement an application pipeline specific to AI/ML — to automate it and wrap it around good deployment practices — the process needs some effort to be successful.

    image

    The need for aligned development approaches

    DevOps has become the de-facto development standard for cloud services. It places an emphasis on process, automation, and fosters a culture that encourages new ways of working together across teams. DevOps is an application-centric paradigm that focuses on the platform, instrumentation, and process to support applications: what is the infrastructure needed to support the application? What tools can be used to automate it? What is the release process for QA/production? 

    AI/ML projects have their own development methodologies, including CRISP-DM and the Microsoft Team Data Science Process (TDSP). Like DevOps, these methodologies are grounded in principles and practices learned from real-world projects. AI/ML teams use an approach unique to data science projects, with frequent, small iterations to refine the data features, the model, and the analytics question. It’s a process intended to align a business problem with AI/ML model development. The release process is not a focus for CRISP-DM or TDSP, and there is little interaction with an operations team. DevOps teams (today) are not yet familiar with the tools, languages, and artifacts of data science projects.

    DevOps and AI/ML development are two independent methodologies with a common goal: to put an AI application into production. Today it takes effort to bridge the gaps between the two approaches. AI/ML projects need to incorporate some of the operational and deployment practices that make DevOps effective, and DevOps projects need to accommodate the AI/ML development process to automate the deployment and release process for AI/ML models.

    Integrating AI/ML teams, process, and tools

    Based on lessons learned from several Microsoft projects including the Mobile Bank Fraud Solution, some suggestions for bridging the gap between DevOps and AI/ML projects follow.

    DevOps for AI/ML

    DevOps for AI/ML has the potential to stabilize and streamline the model release process. It is often paired with the practice and toolset to support Continuous Integration/Continuous Deployment (CI/CD). Here are some ways to consider CI/CD for AI/ML workstreams:

    • The AI/ML process relies on experimentation and iteration of models, and it can take hours or days for a model to train and test. Carve out a separate workflow to accommodate the timelines and artifacts of a model build and test cycle. Avoid gating time-sensitive application builds on AI/ML model builds.
    • For AI/ML teams, think about models as having an expectation to deliver value over time rather than a one-time construction of the model. Adopt practices and processes that plan for and allow a model lifecycle and evolution.
    • DevOps is often characterized as bringing together business, development, release, and operational expertise to deliver a solution. Ensure that AI/ML is represented on feature teams and is included throughout the design, development, and operational sessions.

    Establish performance metrics and operational telemetry for AI/ML

    Use metrics and telemetry to inform what models will be deployed and updated. Metrics can be standard performance measures like precision, recall, or F1 scores. Or they can be scenario specific measures like the industry-standard fraud metrics developed to inform a fraud manager about a fraud model’s performance. Here are some ways to integrate AI/ML metrics into an application solution: 

    • Define model accuracy metrics and track them through model training, validation, testing, and deployment.
    • Define business metrics to capture the business impact of the model in operations. For an example see R notebook for fraud metrics.
    • Capture data metrics, like dataset sizes, volumes, update frequencies, distributions, categories, and data types. Model performance can change unexpectedly for many reasons and it's expedient to know if changes are due to data.
    • Track operational telemetry about the model: how often is it called? By which applications or gateways? Are there problems? What are the accuracy and usage trends? How much compute or memory does the model consume? (A minimal telemetry sketch follows this list.)
    • Create a model performance dashboard that tracks model versions, performance metrics, and data sets.
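    The following sketch shows what that per-call telemetry could look like in code. It is illustrative only: it keeps counters in memory, whereas a production system would emit the same fields to a telemetry service such as Application Insights.

using System;
using System.Collections.Generic;

// Records one telemetry entry per model call: caller, model version, latency, and outcome.
public class ModelTelemetry
{
    private readonly Dictionary<string, long> _callsByCaller = new Dictionary<string, long>();
    private long _totalCalls;
    private double _totalLatencyMs;

    public void RecordCall(string caller, string modelVersion, double latencyMs, bool succeeded)
    {
        _callsByCaller[caller] = _callsByCaller.TryGetValue(caller, out var n) ? n + 1 : 1;
        _totalCalls++;
        _totalLatencyMs += latencyMs;

        // Logging the model version and outcome lets accuracy and usage trends be
        // broken down per deployed version on the model performance dashboard.
        Console.WriteLine($"{DateTime.UtcNow:o} model={modelVersion} caller={caller} ok={succeeded} latencyMs={latencyMs}");
    }

    public double AverageLatencyMs => _totalCalls == 0 ? 0 : _totalLatencyMs / _totalCalls;
    public IReadOnlyDictionary<string, long> CallsByCaller => _callsByCaller;
}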

    AI/ML models need to be updated periodically. Over time, and as new and different data becomes available — or customers or seasons or trends change — a model will need to be re-trained to continue to be effective. Use metrics and telemetry to help refine the update strategy and determine when a model needs to be re-trained.

    Automate the end-to-end data and model pipeline

    The AI/ML pipeline is an important concept because it connects the necessary tools, processes, and data elements to produce and operationalize an AI/ML model. It also introduces another dimension of complexity for a DevOps process. One of the foundational pillars of DevOps is automation, but automating an end-to-end data and model pipeline is a byzantine integration challenge.

    Workstreams in an AI/ML pipeline are typically divided between different teams of experts where each step in the process can be very detailed and intricate. It may not be practical to automate across the entire pipeline because of the difference in requirements, tools, and languages. Identify the steps in the process that can be easily automated like the data transformation scripts, or data and model quality checks. Consider the following workstreams:  

    Workstream: Data Analysis
    Description: Includes data acquisition and focuses on exploring, profiling, cleaning, and transforming data. Also includes enriching and staging data for modeling.
    Automation: Develop scripts and tests to move and validate the data. Also create scripts to report on data quality, changes, volume, and consistency.

    Workstream: Experimentation
    Description: Includes feature engineering, model fitting, and model evaluation.
    Automation: Develop scripts, tests, and documentation to reproduce the steps and capture model outputs and performance.

    Workstream: Release Process
    Description: Includes the process for deploying a model and data pipeline into production.
    Automation: Integrate the AI/ML pipeline into the release process.

    Workstream: Operationalization
    Description: Includes capturing operational and performance metrics.
    Automation: Create operational instrumentation for the AI/ML pipeline. For subsequent model retraining cycles, capture and store model inputs and outputs.

    Workstream: Model Re-training and Refinement
    Description: Determine a cadence for model re-training.
    Automation: Instrument the AI/ML pipeline with alerts and notifications to trigger retraining.

    Workstream: Visualization
    Description: Develop an AI/ML dashboard to centralize information and metrics related to the model and data. Include accuracy, operational characteristics, business impact, history, and versions.
    Automation: n/a

    An automated end-to-end process for the AI/ML pipeline can accelerate development and drive reproducibility, consistency, and efficiency across AI/ML projects.

    Versioning 

    Versioning is about keeping track of an application’s artifacts and the changes to the artifacts.

    In software development projects this includes code, scripts, documentation, and files. A similar practice is just as important for AI/ML projects because—typically—there are multiple components, each with separate release and versioning cycles. In AI/ML projects, the artifacts could include:

    • Data: training data, inference data, data metrics, graphs, plots, data structures, schemas
    • Models: trained models, scoring models, A/B testing models
    • Model outputs: predictions, model metrics, business metrics 
    • Algorithms, code, notebooks

    Versioning can help provide:

    • Traceability for model changes from multiple collaborators
    • Audit trails for project artifacts
    • Information about which models are called from which applications

    A practical example of the importance of versioning for the AI/ML team happens when the performance of a model changes unexpectedly, and the change has nothing to do with the model itself. The ability to easily trace back inputs, dependencies, model, and data set versions could save days or weeks of effort.

    At a minimum, decide on a consistent naming convention and use it for the data files, folders, and AI/ML models. Several different teams will be involved in the modeling process and without naming conventions, there will be confusion over which data sets or model versions to use.

    Consider container architectures

    Container architectures have the potential to streamline and simplify model development, test, and deployment. And as a package-based interface, containers make it easy for software applications to connect. Containers create an abstraction layer between models and the underlying infrastructure. This lets the AI/ML team focus on model development and not worry about the platform. Containers can easily enable:

    • A/B testing 
    • Deployment to multiple environments (IoT edge, local desktop, Azure infrastructure)
    • Consistent environment configuration and setup to drive faster model development, test, and release cycles
    • Model portability and scalability

    Recommended next steps

    The adoption of DevOps has been very effective at bringing together software development and operations teams to simplify and improve deployment and release processes. As AI and ML become increasingly important components of applications, more pressure will exist to ensure they are part of an organization’s DevOps model. The suggestions presented here are examples of steps toward integrating the two methodologies. To get started, please use some of the links below, and please share your feedback and your experience!

    Remote Monitoring Solution allows for root cause analysis with Azure Time Series Insights


    With the abundance of data coming from IoT devices and the global nature of business today, it’s essential to be able to understand correlations and track historical trends across your assets.

    Imagine managing a fleet of trucks carrying items that need to be maintained at a specific temperature. Occasionally you see a low temperature alert triggered for some of your trucks during their daily scheduled delivery. As an operator, you will need to conduct a root cause analysis to understand why this is happening, if there are recurring patterns, and how to prevent it from happening in the future.

    To help you with this, we’re excited to announce that we have now integrated Azure Time Series Insights into the Azure IoT Remote Monitoring solution accelerator. With Time Series Insights, you can gain deeper insights into your time-series sensor data by spotting trends, anomalies, and correlations across real-time and historical data in all your locations. New Remote Monitoring deployments (both Basic and Standard) will include Time Series Insights out of the box* at no extra cost. All message data from your IoT devices will be stored in Time Series Insights, but your alarms, rules, and configuration settings will remain in Cosmos DB.

    With this added functionality, you will now be able to:

    1. Visualize your telemetry data in the Time Series Insights explorer by clicking any of the outgoing links on the Remote Monitoring dashboard:

    tsi-rm-dashboard

    2. Explore your device data across all locations and drill down to see second-by-second changes in your streams:

    tsi-chiller-pressure

    3. Diagnose root causes of anomalies by adding other data streams into your view and discovering correlations that could help you identify causation:

    tsi-all-streams

    4. Learn from your explorations and set additional rules so that you can assure your assets are always at their best health:

    rm-new-rule

    *Note: Time Series Insights is not yet available in Azure China Cloud. New deployments in Azure China Cloud will continue to use Cosmos DB for all storage.

    Time Series Insights enables an end-to-end experience that will allow you to not only understand when an anomaly has occurred, but also why it has occurred. We look forward to learning about how Time Series Insights has helped your IoT solution become even smarter and to gathering your feedback on this new feature.

    Next steps


    The top 6 ways to boost ratings, reviews, and feedback for your app


    A frequent question we get from developers is, “How can I encourage customers to review and give feedback on my app?”

    User feedback isn’t just useful for product development—it can significantly affect whether a new customer decides to download your product. So, aside from checking your reviews report and feedback report, what can you do to boost your rating and get more reviews and feedback?

    We’ve compiled 6 quick tips to help you out.

    Ask for reviews

    Be direct! It’s the best way to let customers know you want to hear what they think.

    1) Ask in-app

    Prompt your customers in-app with a rating and review dialog by implementing just a few lines of code. Or you can directly launch the rating and review page for your app in the Microsoft Store. You can also let customers launch Feedback Hub directly from your app.
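    A minimal sketch of both approaches for a UWP app is shown below; the Store ID is a placeholder, and RequestRateAndReviewAppAsync requires Windows 10, version 1809 or later.

using System;
using System.Threading.Tasks;
using Windows.Services.Store;
using Windows.System;

public static class ReviewPrompts
{
    // Show the in-app rating and review dialog (Windows 10, version 1809 or later).
    public static async Task PromptForReviewAsync()
    {
        StoreContext context = StoreContext.GetDefault();
        StoreRateAndReviewResult result = await context.RequestRateAndReviewAppAsync();
        // result.Status indicates whether the customer submitted a review.
    }

    // Alternatively, open the app's rating and review page in the Microsoft Store.
    // "9WZDNCRFXXXX" is a placeholder; use your own Store ID from your dashboard.
    public static async Task OpenStoreReviewPageAsync()
    {
        await Launcher.LaunchUriAsync(new Uri("ms-windows-store://review/?ProductId=9WZDNCRFXXXX"));
    }
}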

    2) Ask with push notifications

    Send a toast or tile notification asking for ratings and reviews. To target only customers who haven’t yet rated your app, first create a customer segment with the definition Has rated == false, then use push notifications to engage them.

    Defining inclusion conditions for push notifications.

    3) Ask on Microsoft Store

    Add a request for reviews and feedback in the What’s new in this version section or the description of your Microsoft Store listing. After you list any updates, let your customers know you’re looking for feedback on what you’ve updated by saying something like “Please help us improve your experience by reviewing our app.”

    "What's new in this version" app summary.

    Screenshot from MEGA Privacy Microsoft Store listing page.

    Respond to your customers

    Thoughtful responses tell your customers you care about their experiences.

    4) Respond directly to reviews

    Let customers know their opinions matter. So long as they haven’t opted out of receiving responses, you can respond to customer reviews via API or in the dashboard. Read more on how to do that here.

    Example of responding directly to customer reviews.

    Screenshot from Sketchable reviews page on their Microsoft Store listing

    5) Respond directly to feedback

    Want to address a particular comment? Use Feedback Hub in your app. You can respond publicly or privately and post updates on the status of any user issue you’re working to resolve.

    Keep your responses shorter than 1,000 words, polite, and to the point. Read more on responding to feedback here.

    When responding to reviews and feedback, keep your responses polite and respectful, because customers can report inappropriate or abusive responses, and we take those reports seriously.

    6) Fix the pain points

    If several reviewers are suggesting the same new feature or fixes, consider releasing a new version with those updates—and when you do, make a note in the app description or “What’s new in this version” section so customers know what issues have been fixed. You might even mention that the issues were identified based on customer feedback and ask them to continue letting you know what they think.

    We hope these tips will help you let your customers know you want their input—and then let them know they’ve been heard.


    AI, Machine Learning and Data Science Roundup: September 2018


    A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted over the past month or so.

    Open Source AI, ML & Data Science News

    ONNX 1.3 released. The standard for representing predictive models adds a cross-platform API for executing graphs on optimized backends and hardware.

    A new foundation supporting the development of scikit-learn, the Python machine learning library.

    Google open sources "dopamine", a Tensorflow-based framework for reinforcement learning research.

    Google adds a "What-If Tool" to Tensorboard, to interactively explore the robustness and algorithmic fairness of machine learning models.

    Industry News

    Google introduces Dataset Search, an index of public datasets from the environmental and social sciences, government, and journalism.

    The 2018.3 update to the Alteryx analytics platform brings interactive graphics, and Spark and Jupyter Notebook integration.

    AWS Deep Learning AMIs now provide Tensorflow 1.10 and PyTorch with CUDA 9.2 support.

    Rigetti Computing launches the Quantum Cloud Service with a $1M prize for the first application demonstrating "quantum advantage".

    Thomas Dinsmore's commentary on Forrester Wave rankings for "Multimodal Predictive Analytics and Machine Learning Platforms" and "Notebook-Based Predictive Analytics and Machine Learning Solutions".

    Microsoft News

    Microsoft acquires Lobe, a Bay Area startup that produced a drag-and-drop interface for building machine learning models.

    NVIDIA GPU Cloud now provides ready-to-run containers with GPU-enabled deep learning frameworks for use on Azure.

    Microsoft introduces Azure CycleCloud, a tool for creating, managing, operating, and optimizing burst, hybrid, and cloud-only HPC clusters. Schedulers including Slurm, Grid Engine and Condor are supported.

    New features in Azure HDInsight: Spark 2.3.0 and Kafka 1.1 support; ML Services 9.3 integration, with updated R engine and new statistical and machine learning algorithms; Apache Phoenix, for SQL-like queries for data in HBase; Apache Zeppelin, web-based notebooks for querying Phoenix tables; and more.

    Learning resources

    A review of the algorithms behind AutoML systems for model selection and hyperparameter optimization, from the H2O blog.

    Joel Grus's criticisms of Jupyter Notebooks as a platform for reproducible and production-ready computing. Yihui Xie offers an alternative: RMarkdown.

    A comprehensive introduction to mixed-effects models, and fitting them in Python.

    A collection of videos, presentations and essays by Brandon Rohrer with approachable explanations of the inner workings of deep learning and machine learning algorithms.

    A tutorial on using Azure Batch AI to parallelize forecasts of energy demand.

    A meticulously researched accounting of the resources (natural, technological, and human) that enable an Alexa voice query, presented as a highly detailed map.

    A review of the book SQL Server 2017 Machine Learning Services with R.

    Applications

    Data scientists use text similarity analyses in R to try and identify the author of that anonymous NYT op-ed.

    Sketch2Code, an application to translate hand drawings into HTML forms.

    Measuring building footprints from satellite images, with semantic segmentation.

    Near real-time fraud detection for mobile banking, with a classification model implemented in Azure Machine Learning.

    Credit card fraud detection with an Autoencoder neural network, using the Azure Data Science VM.

    Shell deploys machine learning and AI systems to avert equipment failures, autonomously direct drill-bits underground, and improve safety.

    Find previous editions of the monthly AI roundup here

    Deep dive into Azure Artifacts


    Azure Artifacts manages the dependencies used in your codebase and provides easy tools to ensure the immutability and performance of those components. Released as one of the new services available for developers in Azure DevOps, the current features in Artifacts will help you and your users produce and consume artifacts. For teams that use or produce binary packages, Azure Artifacts provides a secure, highly performant store and an easy-to-use feed.

    Getting started with Artifacts: Package feeds

    Azure Artifacts groups packages into feeds, which are containers that help you consume and publish packages.

    Package feeds

    We’ve optimized the default settings to be most useful to feed users, such as making your feed visible to your entire account so it’s easy to share a single source of packages across your whole team. However, if you’d like to customize your settings, simply access the settings tab to update your preferences.

    New feature: Universal Packages

    Azure Artifacts is a universal store for all the artifacts you use as part of development and deployment. In addition to NuGet, npm, and Maven packages, feeds now support Universal Packages, which can store any file or set of files. You create and consume Universal Packages via the Visual Studio Team Services (VSTS) CLI. Consider using them to store deployment inputs like installers, large datasets or binary files that you need during development, or as a versioned container for your pipeline outputs. To try them out, look for the Universal Packages toggle in your preview features panel by clicking your profile image in the upper right, followed by clicking on “Preview features”. You can also learn more in the Universal Packages announcement post.

    Next up, enabling Views

    The views in Azure Artifacts enable you to share subsets of the NuGet and npm package-versions in your feed with consumers. A common use for views is to share package-versions that have been tested, validated, or deployed but hold back packages still under development and not ready for public consumption.

    Views

    Views and upstream sources are designed to work together to make it easy to produce and consume packages at enterprise scale.

    Control your dependencies with Upstream Sources

    Upstream sources enable you to use a single feed to store both the packages you produce and the packages you consume from "remote feeds". This includes both public feeds, such as npmjs.com and nuget.org, and authenticated feeds, such as other Azure DevOps feeds in your organization. Once you've enabled an upstream source, any user connected to your feed can install a package from the remote feed, and your feed will save a copy. 

    Note: For each component served from the upstream, a copy will always be available to consume, even if the original source is down or, for TFS users, your internet connection isn’t available.

    Upstream sources

    In short, enabling upstream sources for public sources makes it easy to use your favorite or most-used dependencies, and can also give you additional protection against outages and corrupted or compromised packages.

    Easy to use Symbols and the Symbol Server

    To debug compiled executables, especially executables compiled from native code languages like C++, you need symbol files that contain debugging information. Artifacts makes Symbol support and publishing quick and simple.

    The updated “Index Sources and Publish Symbols” task now publishes symbols to the Azure DevOps Symbol Server with a single checkbox. No advanced configuration or file share setup is required.

    Symbols

    We also have made it simple to consume symbols from Visual Studio:

    1. With VS2017 Update 4.1 (version 15.4.1) or later, type “debugging symbols” in Quick Launch and press Enter.
    2. Click the “New Azure DevOps Symbol Server Location…” button (marked in red below). In the dialog that appears, select your Azure DevOps account and click “Connect”.

    When you are done, it should look like this:

    Symbols from Visual Studio

    If you prefer debugging with the new UWP version of WinDbg, these docs will help you configure your Azure DevOps account on the WinDbg sympath.

    Credential Provider authentication for NuGet in Azure Artifacts

    Azure Artifacts secures all the artifacts you publish. However, historically it’s been a challenge to get through security to use your NuGet packages, especially on Mac and Linux. Today, that changes with the new Azure Artifacts Credential Provider. We’ve automated the acquisition of credentials needed to restore NuGet packages as part of your .NET development workflow, whether you’re using MSBuild, dotnet, or NuGet(.exe) on Windows, Mac, or Linux. Any time you want to use packages from an Azure Artifacts feed, the Credential Provider will automatically acquire and store a token on behalf of the NuGet client you're using. To learn more and get the new Credential Provider, see the readme on GitHub.
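
    For example, once the provider is installed per the readme and your project references the feed in its NuGet.config, an interactive restore is a single command (a sketch; the first run prompts for a device-code sign-in):

    # The credential provider handles sign-in and token caching for the feed
    dotnet restore --interactive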

    Supported protocol versions and compatibility

    Some package management services are only compatible with specific versions of TFS. The table below provides the information needed to understand version compatibility.

    Feature                    | Azure DevOps Services | TFS
    NuGet                      | Yes                   | TFS 2017
    npm                        | Yes                   | TFS 2017 Update 1 and newer
    NuGet.org upstream source  | Yes                   | TFS 2018 Update 2 and newer
    Maven                      | Yes                   | TFS 2018

    Further info

    Want to learn more? See our documented best practices, videos, and other learning materials for Azure Artifacts.

    We also maintain a list of requested features through our UserVoice, and love to complete requests for our passionate users! Always feel free to message us on Twitter (@had_msft or @VSTS) with questions or issues!

    Applications of R presented at EARL London 2018


    During the EARL (Enterprise Applications of the R Language) conference in London last week, the organizers asked me how I thought the conference had changed over the years. (This is the conference's fifth year, and I'd been to each one.) My response was that it reflected the increasing maturity of R in the enterprise. The early years featured many presentations that were about using R in research and the challenges (both technical and procedural) for integrating that research into the day-to-day processes of the business. This year though, just about every presentation was about R in production, as a mainstream part of the operational infrastructure for analytics. 

    That theme began in earnest with Garrett Grolemund's keynote presentation on the consequences of scientific research that can't be replicated independently. (This slide, based on this 2016 JAMA paper, was an eye-opener for me.) The R language has been at the forefront of providing the necessary tools and infrastructure to remove barriers to reproducible research in science, and the RMarkdown package, with its streamlined integration in RStudio, is particularly helpful. Garrett's keynote was recorded but the video isn't yet available; when it's published I highly recommend taking the time to watch this excellent talk.

    The rest of the program was jam-packed with a variety of applications of R in industry as well. I couldn't see them all (it was a three-track conference), but every one I attended demonstrated mature, production-scale applications of R to solve difficult business problems with data. Here are just a few examples:

    • Rainmakers, a market research firm, uses R to estimate the potential market for new products. They make use of the officer package to automate the generation of PowerPoint reports.
    • Geolytix uses R to help companies choose locations for new stores that maximize profitability while reducing the risk of cannibalizing sales from other nearby locations. They use SQL Server ML Services to deploy these models to their clients.
    • Dyson uses R (and the prophet package) to forecast the expected sales of new models of vacuum cleaners, hand dryers, and other products, so that the manufacturing plants can ramp up (or down) production as needed.
    • Google uses R to design and analyze the results of customer surveys, to make the best decisions about which product features to invest in next.
    • N Brown Group, a fashion retailer, uses R to analyze online product reviews from customers.
    • Marks and Spencer uses R to increase revenues by optimizing the products shown and featured in the online store.  
    • PartnerRe, the reinsurance firm, has built an analytics team around R, and uses Shiny to deploy R-based applications throughout the company.
    • Amazon Web Services uses containerized applications with R to identify customers who need additional onboarding assistance or who may be dissatisfied, and to detect fraud.
    • Microsoft uses R and Spark (via sparklyr in HDInsight) to support marketing efforts for Xbox, Windows and Surface with propensity modeling, to identify who is most likely to respond to an offer. 

    You can see many more examples in the list of speakers linked below. (I blogged about my own talk at EARL earlier this week.) Click through to see detailed summaries and (in most cases) a link to download slides.

    EARL London 2018: Speakers 

    Because it’s Friday: Fly Strong


    I was about the same age as student pilot Maggie Taraska when I had my first solo flight. Unlike Maggie, I didn't have to deal with a busy airspace, or air traffic control, or engines (I was in a glider), or — most significantly — one of my landing wheels falling off during the flight. But Maggie handled the entire situation much more coolly than I remember my own short, uneventful first solo. Just listen to the radio chatter below. If you're in a rush, you can skip the period from 2:00 – 7:30 while she circles the Beverly (MA) airport waiting for a couple of other planes to land first, but the whole thing is worth a listen to hear how she successfully lands with only one wheel on the left side.

     

    That's all from us at the blog for this week. Have a great weekend, and we'll be back next week.

    cppcon-2018-sweepstakes-official-rules


    MICROSOFT CLOUD AND ENTERPRISE CPPCON EVENT SWEEPSTAKES

    OFFICIAL RULES

    NO PURCHASE NECESSARY. 

    PLEASE NOTE: It is your sole responsibility to comply with your employer’s gift policies.  If your participation violates your employer’s policies, you may be disqualified. Microsoft disclaims any and all liability and will not be party to any disputes or actions related to this matter.

    GOVERNMENT EMPLOYEES: Microsoft is committed to complying with government gift and ethics rules and therefore government employees are not eligible.

    WHAT ARE THE START AND END DATES?

    This Sweepstakes starts on September 22, 2018, and ends on September 27, 2018 (“Entry Period”) and will be conducted during regular event hours. Winners will be selected Thursday morning, September 27, before the 10:30am session.

    Entries must be received within the Entry Period to be eligible.

    CAN I ENTER?

    You are eligible to enter if you meet the following requirements at the time of entry:

    ·       You are a registered attendee of CppCon and you are 18 years of age or older; and

    ·       If you are 18 years of age or older, but are considered a minor in your place of residence, you should ask your parent’s or legal guardian’s permission prior to submitting an entry into this Sweepstakes; and

    ·       You are NOT a resident of any of the following countries: Cuba, Iran, North Korea, Sudan, and Syria.

    ·       PLEASE NOTE: U.S. export regulations prohibit the export of goods and services to Cuba, Iran, North Korea, Sudan and Syria. Therefore residents of these countries / regions are not eligible to participate.

    ·       You are NOT event exhibitor support personnel; and

    ·       You are NOT an employee of Microsoft Corporation, or an employee of a Microsoft subsidiary; and

    ·       You are NOT an immediate family member (parent, sibling, spouse, child) or household member of a Microsoft employee, an employee of a Microsoft subsidiary, or any person involved in any part of the administration and execution of this Sweepstakes.

    This Sweepstakes is void where prohibited by law.

    HOW DO I ENTER?

    You can enter by completing a short survey at http://aka.ms/cppcon.  All required survey questions must be answered to receive an entry.

    We will only accept one (1) entry per entrant. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but are not decipherable for any reason.

    We will automatically disqualify:

    ·       Any incomplete or illegible entry; and

    ·       Any entries that we receive from you that are in excess of the entry limit described above.

    WINNER SELECTION AND PRIZES

    At the close of the event, we will randomly select winners of the prizes designated below.

    One (1) Grand Prize.  A Microsoft Xbox One S – Starter Bundle – game console – 1 TB HDD – white.  Approximate Retail Value (ARV) $299.00.

    The ARV of electronic prizes is subject to price fluctuations in the consumer marketplace based on, among other things, any gap in time between the date the ARV is estimated for purposes of these Official Rules and the date the prize is awarded or redeemed.  We will determine the value of the prize to be the fair market value at the time of prize award.

    The total Approximate Retail Value (ARV) of all prizes: $299

    We will only award one (1) prize(s) per person during the Entry Period.

    WINNER NOTIFICATION

    Winners will be selected randomly and must be present at time of drawing to win.

    GENERAL CONDITIONS

    Taxes on the prize, if any, are the sole responsibility of the winner.  All federal, state, and local laws and regulations apply.  No substitution, transfer, or assignment of prize permitted, except that Microsoft reserves the right to substitute a prize of equal or greater value in the event the offered prize is unavailable. Prize winners may be required to sign and return an Affidavit of Eligibility and Liability Release and W-9 tax  form or W-8 BEN tax form within 10 days of notification. If you complete a tax form, you will be issued an IRS 1099 the following January, for the actual value of the prize. You are advised to seek independent counsel regarding the tax implications of accepting a prize. If a selected winner cannot be contacted, is ineligible, fails to claim a prize or fails to return the Affidavit of Eligibility and Liability Release or W-8 BEN form, the selected winner will forfeit their prize and an alternate winner will be selected. Your odds of winning depend on the number of eligible entries we receive. In the event of a dispute all decisions of Microsoft are final.

    By entering this Sweepstakes you agree:

    ·       To abide by these Official Rules; and

    ·       To release and hold harmless Microsoft and its respective parents, subsidiaries, affiliates, employees and agents from any and all liability or any injury, loss or damage of any kind arising from or in connection with this Sweepstakes or any prize won; and

    ·       That by accepting a prize, Microsoft may use your proper name and state of residence online and in print, or in any other media, in connection with this Sweepstakes, without payment or compensation to you, except where prohibited by law.

    WHAT IF SOMETHING UNEXPECTED HAPPENS AND THE SWEEPSTAKES CAN’T RUN AS PLANNED?

    If someone cheats, or a virus, bug, bot, catastrophic event, or any other unforeseen or unexpected event that cannot be reasonably anticipated or controlled, (also referred to as force majeure) affects the fairness and / or integrity of this Sweepstakes, we reserve the right to cancel, change or suspend this Sweepstakes.  This right is reserved whether the event is due to human or technical error. If a solution cannot be found to restore the integrity of the Sweepstakes, we reserve the right to select winners from among all eligible entries received before we had to cancel, change or suspend the Sweepstakes.

    If you attempt or we have strong reason to believe that you have compromised the integrity or the legitimate operation of this Sweepstakes by cheating, hacking, creating a bot or other automated program, or by committing fraud in ANY way, we may seek damages from you to the fullest extent permitted by law.  Further, we may disqualify you, and ban you from participating in any of our future Sweepstakes, so please play fairly.

    WINNERS LIST AND SPONSOR

    If you send an email to visualcpp@microsoft.com within 30 days of winner selection, we will provide you with a list of winners that receive a prize worth $25.00 or more.

    This Sweepstakes is sponsored by Microsoft Corporation, One Microsoft Way, Redmond, WA 98052.

    PRIVACY STATEMENT

    At Microsoft, we are committed to protecting your privacy.  Microsoft uses the information you provide to notify prize winners, and to send you information about other Microsoft products and services, if requested by you.  Microsoft will not share the information you provide with third parties without your permission except where necessary to complete the services or transactions you have requested, or as required by law.  Microsoft is committed to protecting the security of your personal information.  We use a variety of security technologies and procedures to help protect your personal information from unauthorized access, use, or disclosure. Your personal information is never shared outside the company without your permission, except under conditions explained above.

    If you believe that Microsoft has not adhered to this statement, please notify us by sending email to visualcpp@microsoft.com, or postal mail to Microsoft Privacy, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052 USA, and we will use commercially reasonable efforts to remedy the situation.

    Azure.Source – Volume 50


    Now in preview

    Jenkins Azure ACR Build plugin now in public preview - Azure Container Registry (ACR) Build is a suite of features within ACR. It provides cloud-based container image building for Linux, Windows, and ARM, and can automate OS and framework patching for your Docker containers. Now you can use Azure ACR Plugin in Jenkins to build your Docker image in Azure Container Registry based on git commits or from a local directory. One of the best things about ACR build is you only pay for the compute you use to build your images.

    Also in preview

    The Azure Podcast

    The Azure Podcast | Episode 247 - Partner Spotlight - Snowflake - Cynthia, Evan and Cale talk to Leo Giakoumakis, head of Snowflake's Seattle Development Center, about their Data warehouse platform built for the cloud (and now available on Azure).

    Now generally available

    Immutable storage for Azure Storage Blobs now generally available - Many industries are required to retain business-related communications in a Write-Once-Read-Many (WORM) or immutable state that ensures they are non-erasable and non-modifiable for a specific retention interval. Immutable storage for Azure Storage Blobs addresses this requirement and is available in all Azure public regions. Through configurable policies, users can keep Azure Blob storage data in an immutable state where Blobs can be created and read, but not modified or deleted.

    Also generally available

    The IoT Show

    The IoT Show | Enhanced IoT Edge developer experience in VS Code - Joe Binder, PM in the Visual Studio team shows us the latest and greatest additions to the Azure IoT Edge extension for Visual Studio Code such as solution templates and debugging features.

    The IoT Show | Azure IoT DevKit OTA Firmware update - Arthur Ma, developer lead in the Visual Studio team joins Olivier on the IoT Show to demonstrate how to do a Firmware update of a microcontroller over the air with Azure IoT Hub.

    News and updates

    Remote Monitoring Solution allows for root cause analysis with Azure Time Series Insights - Azure Time Series Insights is now integrated into the Azure IoT Remote Monitoring solution accelerator. With Time Series Insights, you can gain deeper insights into your time-series sensor data by spotting trends, anomalies, and correlations across real-time and historical data in all your locations. All message data from your IoT devices will be stored in Time Series Insights, but your alarms, rules, and configuration settings will remain in Azure Cosmos DB.

    Screenshot of telemetry data in the Time Series Insights explorer

    HDInsight tools for Visual Studio Code: simplifying cluster and Spark job configuration management - HDInsight Tools for Visual Studio Code now uses Visual Studio Code's built-in user settings and workspace settings to manage HDInsight clusters in Azure regions worldwide and Spark job submissions. With this feature, you can manage your linked clusters and set your preferred Azure environment with Visual Studio Code user settings. You can also set your default cluster and manage your job submission configurations via Visual Studio Code workspace settings.

    The Azure DevOps Podcast

    The Azure DevOps Podcast | Donovan Brown on How to Use Azure DevOps Services - Episode 002 - Jeffrey Palermo and Donovan Brown talk about the whirlwind it’s been since the launch of the new Azure DevOps, key information new developers might want to know when beginning to use or incorporate Azure DevOps, some of the changes to their services, what’s available for packages in DevOps, the free build capabilities Microsoft is giving to open source projects, some of the new capabilities around GitHub integration, and more!

    Technical content and training

    Getting AI/ML and DevOps working better together - The difficult part about integrating AI or ML into an application is getting the model deployed into a production environment and keeping it operational and supportable. AI/ML projects need to incorporate some of the operational and deployment practices that make DevOps effective and DevOps projects need to accommodate the AI/ML development process to automate the deployment and release process for AI/ML models. See this post for suggestions for bridging the gap between DevOps and AI/ML projects that are based on lessons learned from several Microsoft projects including the Mobile Bank Fraud Solution.

    Deep dive into Azure Repos - Azure Repos (a service in Azure DevOps) has unlimited free private repositories with collaborative code reviews, advanced file management, code search, and branch policies to ensure high quality code. Learn how Azure Repos is great for small projects as well as large organizations that need native AAD support and advanced policies.

    Deep dive into Azure Test Plans - Azure Test Plans (a service in Azure DevOps) provides a browser-based test management solution for exploratory, planned manual, and user acceptance testing. Azure Test Plans also provides a browser extension for exploratory testing and gathering feedback from stakeholders.

    Deep dive into Azure Artifacts - Azure Artifacts (a service in Azure DevOps) manages the dependencies used in your codebase and provides easy tools to ensure the immutability and performance of those components. For teams that use or produce binary packages, Azure Artifacts provides a secure, highly performant store and easy-to-use feeds.

    Programmatically onboard and manage your subscriptions in Azure Security Center - To streamline the security aspects of the DevOps lifecycle, ASC has recently released its official PowerShell module. This enables organizations to programmatically automate onboarding and management of their Azure resources in Azure Security Center and adding the necessary security controls. This blog post focuses on how to use this PowerShell module to onboard Azure Security Center. You can now use the official PowerShell cmdlets with automation scripts to programmatically iterate on multiple subscriptions/resources, reducing the overhead caused by manually performing these actions, as well as reduce the potential risk of human error resulting from manual actions.
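
    As a rough illustration, onboarding a single subscription from PowerShell looks something like the sketch below. It assumes the current Az.Security module (cmdlet names differed slightly in the AzureRm-era module the post describes); the subscription ID and contact details are placeholders.

    # A minimal sketch, assuming the Az.Security module; IDs and contact details are placeholders.
    Connect-AzAccount
    Set-AzContext -Subscription "00000000-0000-0000-0000-000000000000"

    # Move the Virtual Machines plan in Security Center to the Standard pricing tier
    Set-AzSecurityPricing -Name "VirtualMachines" -PricingTier "Standard"

    # Register a security contact so alerts reach the right people
    Set-AzSecurityContact -Name "default1" -Email "secops@contoso.com" -AlertAdmin -NotifyOnAlert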

    See also

    Azure Friday

    Azure Friday | Batch and matrix routing with Azure Maps - Julie Kohler joins Scott Hanselman and shows you how to execute batch routing calls using Azure Maps, as well as how to do matrix routing with a given set of origins and destinations.

    Azure Friday | Batch geocoding and polygons for administrative areas with Azure Maps - Julie Kohler joins Scott Hanselman and shows you how to execute batch geocoding calls using Azure Maps as well as how to get the polygon for an administrative area on a map.

    Azure Friday | Mapping IP to location with Azure Maps - Julie Kohler joins Scott Hanselman and shows you how to get the ISO country code for a provided IP address to tailor your application to your customers' needs based on geographic location.

    Azure Friday | Calculating isochrones in Azure Maps - Julie Kohler joins Scott Hanselman and shows you how you can generate an isochrone using Azure Maps. Isochrones represent the reachable range from a given point using a set of constraints such as fuel, energy or time.

    Events

    Microsoft Ignite 2018 - If you're not able to join us next week for this premier event, be sure to watch the live stream online from Orlando.

    Come check out Azure Stack at Ignite 2018 - If you're attending Microsoft Ignite in Orlando next week, the Azure Stack team has put together a list of sessions along with a pre-day event to ensure that you will enhance your skills on Microsoft’s hybrid cloud solution and get the most out of this year’s conference. The agenda is tailored for developers who use Azure Stack to develop innovative hybrid solutions using services on Azure Stack and Azure, as well as operators who are responsible for the operations, security, and resiliency of Azure Stack itself.

    See also

    Azure tips & tricks

    How to work with extensions in Azure App Service thumbnail

    How to work with extensions in Azure App Service - Learn how to add extensions to web applications in Azure App Service. Watch to see the list of the various extensions from Microsoft and external parties that you can choose from and how to create your own.

    How to test web applications in production thumbnail

    How to test web applications in production - Learn how to test web applications in production using Azure App Service. This testing-in-production feature makes it easier for you to carry out A/B testing with different versions of your application.

    Customers and partners

    Simplify modern data warehousing with Azure SQL Data Warehouse and Fivetran - Fivetran has certified their zero-maintenance, zero-configuration data pipelines product for Azure SQL Data Warehouse. Fivetran is a simple-to-use system that enables customers to load data from applications, file stores, databases, and more into Azure SQL Data Warehouse. In addition, Azure SQL Data Warehouse is included in Fivetran’s Cloud Data Warehouse Benchmark, which helps compare cloud providers’ TPC-DS 1 TB performance.

    Tuesdays with Corey

    Tuesdays with Corey | What up with Azure File Sync - Corey Sanders, Corporate VP - Microsoft Azure Compute team sat down with Will Gries, Senior PM on the Azure Storage team to talk about Azure File Sync!

    Industries

    Improve patient engagement and efficiency with AI powered chatbots - Patients are more likely to give a higher rating for patient engagement if they have some way to communicate to an organization with as little friction as possible. Improving the patient engagement experience can also improve patient outcomes — the hope is to do it affordably. AI-powered health agents, also known as “chatbots,” can engage patients 24/7, using different languages, and at very low cost. Read this post to get an overview of powerful cloud-based tools you can use to rapidly create and deploy your AI-powered health agents in Microsoft Azure.

    A Cloud Guru's Azure This Week

    Azure This Week for 21 September 2018 thumbnail

    Azure This Week | 21 September 2018 - Dean looks at Video Indexer, which is now GA, how you can save money and increase performance with GPUs vs CPUs for deep learning, as well as updating you on a brand new Azure course on A Cloud Guru.


    Scripts to remove old .NET Core SDKs


    That's a lot of .NET Core installations. .NET Core is lovely. Its usage is skyrocketing, it's open source, and .NET Core 2.1 has some amazing performance improvements. Just upgrading from 2.0 to 2.1 gave Bing a 34% performance boost.

    However, those of us who install multiple .NET Core SDKs side by side have noticed that they add up if you are installing daily builds or very often. As of 2.x, .NET Core doesn't yet have an "uninstall all" or "uninstall all previews" option. There will be work done in .NET Core 3.0 that will mitigate this cumulative effect when you have lots of installers.

    If you're taking dailies and it's time to tidy up, the short answer per Damian Edwards is "Delete them all, then nuke the dotnet folder in program files, then install the latest version."

    Here's a PowerShell Script you can run on Windows as admin that will aggressively uninstall .NET Core SDKs.

    Note the match at the top. Depending on your goals, you might want to change it to "Microsoft .NET Core SDK 2.1" or just "Microsoft .NET Core SDK 2."

    Once it's all removed, then add the latest from https://www.microsoft.com/net/download/archives

    A list of .NET Core SDKs

    Here's the script, which is an improvement on Andrew's comment here. You can improve it as it's on GitHub here https://github.com/shanselman/RemoveDotNetCoreSDKInstallers. This script currently requires you to hit YES as the MSIs elevate. It doesn't work right when you try /passive as a switch. I'm interested to see if anyone can get a "torch all Core SDK installers and install LTS and Current" script working.

    # Find every installed MSI whose name matches the .NET Core SDK
    $app = Get-WmiObject -Class Win32_Product | Where-Object {
        $_.Name -match "Microsoft .NET Core SDK"
    }
    Write-Host $app.Name
    Write-Host $app.IdentifyingNumber
    # Run msiexec from System32 and uninstall each SDK by its product code
    pushd $env:SYSTEMROOT\System32
    $app.IdentifyingNumber | ForEach-Object { Start-Process msiexec -Wait -ArgumentList "/x $_" }
    popd

    This PowerShell is Windows-only, of course.

    If you're on RHEL, Ubuntu/Debian, there are scripts here to try out https://github.com/dotnet/cli/tree/master/scripts/obtain/uninstall

    Let me know if this script works for you.


    Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.



    © 2018 Scott Hanselman. All rights reserved.
         

    Azure DevOps Reporting – What reports do you want?

    If you are using Azure DevOps – we want to hear what reporting metrics are important to you. We’ve authored a short survey to gather this data. Your feedback will help guide the Azure DevOps Analytics service roadmap. Analytics is the future of reporting for both Azure DevOps & Team Foundation Server. The survey should... Read More

    Azure Container Registry: Public preview of Helm Chart Repositories and more


    With Azure Container Registry (ACR), you can easily store and manage container images for Azure deployments in a central registry. Today we are excited to add native Helm repository support and validation workflows, with ACR tasks and Docker’s content trust, to provide a more integrated container lifecycle management experience. 

    • ACR Helm Chart Repositories, available for public preview, provides Kubernetes Helm chart storage as an integrated service for container images and their deployment charts.
    • Docker Content Trust support, now in public preview, provides end-to-end validation of container images, from the time they’re built, to the location they’re deployed.
    • ACR OCI image support is now available in public preview, enabling the next generation of container image formats and the tooling that produces them, including BuildKit.
    • ACR tasks, previously announced as ACR build, provides a container execution capability enabling management and modification of the container images in ACR across the lifecycle of the container including build, run, push, and patch.

    ACR Helm Repositories - Preview

    Helm charts have become the common artifacts to define, install, and upgrade Kubernetes-based applications. Today, we are excited to share that Azure is the first public cloud to natively support Helm chart repositories within a container registry, providing integrated security where the same credentials are shared between Helm charts and container images. Coupled with ACR geo-replication, Helm repositories will be replicated together with multi-region Kubernetes deployments, providing network-close deployments with geo-distributed reliability, and with the same authentication used to pull the referenced images.

    ACR Helm Repos GIF

    Learn more from the ACR Helm Repositories.
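
    To give a feel for the workflow, here is a minimal sketch that pushes a local chart to a registry and lists what is stored there. It assumes the preview az acr helm commands and a Helm 2 client on your PATH; the registry and chart names are placeholders.

    # A minimal sketch, assuming the preview 'az acr helm' commands; names are placeholders.
    az acr helm repo add --name myregistry     # register the ACR-backed chart repo with the local Helm client
    helm package ./mychart                     # produces a chart archive, e.g. mychart-0.1.0.tgz
    az acr helm push --name myregistry mychart-0.1.0.tgz   # store the chart alongside your container images
    az acr helm list --name myregistry         # list the charts stored in the registry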

    Content Trust - Preview

    As customers move to production, end-to-end validation of an image's integrity can be assured with ACR's preview support of Docker Content Trust. Users can push signed images to ACR and verify their validity when they are pulled to the destination node.

    Further enhancing the integrity of your images, ACR supports limiting the users and services who can push signed images to those who are authorized using the AcrImageSigner role.

    For more information, see ACR Content Trust.

    Open Container Initiative image format support - Preview

    ACR now supports Open Container Initiative (OCI) images, enabling further evolution of container standards and implementations.

    ACR tasks

    ACR tasks help you run, build, test, validate, and push container images securely and efficiently. ACR tasks can be manually invoked or triggered automatically, supporting rich parallel and sequential workflows to execute jobs defined within the container image, including the ability to patch container images. ACR tasks also provide isolation, enabling potentially conflicting technologies to be used together. Developers control what and how their tasks run with minimal dependency on specific OS versions or application framework versions.

    Here are two examples of how ACR tasks can simplify developer experience from the primary phases of container development to operational patching:

    • Inner loop – As developers seek to validate their code changes, before committing to team source control, they can execute the equivalent of docker build within Azure: az acr build -t web:{{.Run.ID}} .

    ACR Build GIF

    • Triggered execution – With an ACR task definition, execution can be triggered based on Git commits and Docker base image updates, with webhooks and Azure Event Grid coming soon. Base image triggered execution enables OS and framework patching, a fundamental challenge with how customers think about security in their deployments once code changes cease.

    ACR Base Image Updates GIF

    ACR tasks support single-step definitions based on a Dockerfile, as well as multi-step tasks with which you can execute parallel and sequential workflows of build, cmd, and push steps. Single-step tasks based on a Dockerfile are now generally available, enabling OS and framework patching scenarios in production environments. Multi-step tasks, based on an acr-task.yaml file, are available in public preview.
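
    As a rough sketch of what creating a triggered, single-step task looks like from the CLI (the az acr task command group is assumed here; the registry, repository, and token values are placeholders):

    # A minimal sketch, assuming the 'az acr task' command group; all names and the PAT are placeholders.
    az acr task create `
        --registry myregistry `
        --name build-web `
        --image 'web:{{.Run.ID}}' `
        --context https://github.com/myorg/web-app.git `
        --file Dockerfile `
        --git-access-token $env:GIT_PAT   # commits and base image updates trigger new runs

    # Queue a run manually to verify the task
    az acr task run --registry myregistry --name build-web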

    ACR tasks support Windows and Linux images with ARM images available through QEMU.

    For more information, see ACR tasks.

    Availability and feedback

    ACR tasks, Helm repositories, and Content Trust are just the latest capabilities added as part of Azure’s commitment to simplifying your container lifecycle management. We continue to seek your feedback on existing features as well as ideas for the product roadmap. Here’s a list of resources you can use to engage with our team and provide feedback:

    • Roadmap - For visibility into our planned work.
    • UserVoice - To vote for existing requests or create a new request.
    • Feedback - To provide feedback and engage in discussion with the community.
    • Issues - To view existing bugs and issues, and to log new ones.

    Thanks,

    Steve and the entire Azure Container Registry Team

    Azure SignalR Service now generally available


    Since its introduction five years ago, SignalR has grown to be one of the most popular real-time connection technologies around the world. As applications that use SignalR scale, managing and scaling a SignalR server can become quite a bit of work.

    Today, we’re announcing the general availability (GA) of the Azure SignalR Service, a fully managed SignalR service that enables you to focus on building real-time web experiences without worrying about setting up, hosting, scaling, or load balancing your SignalR server. The Azure SignalR Service supports existing libraries for ASP.NET Core, ASP.NET, Java, and JavaScript clients, opening this service to a broad array of developers.

    Available today

    The SignalR Service has been in public preview since May, and with the general availability of the Azure SignalR Service, customers get:

    • More regions. SignalR Service is now available in the following regions: US East, US East 2, US Central, US West, US West 2, Canada East, West Europe, North Europe, Southeast Asia, Australia East, and Japan East. And in the coming months we will add more regions.
    • More reliability. The Azure SignalR Service GA offers 99.9% availability with a service level agreement for production use.
    • More capacity. During its preview, the SignalR Service was limited to 10K connections in the Standard Tier. With GA, it increases to 100K connections per instance. Through sharding or partitioning, users can configure multiple instances to handle an even larger scale.

    Of course, we will continue to provide a Free Tier for trials and prototyping.

    The generally available service also adds more REST APIs to enable adding users to groups, improved support for broadcast scenarios, and support for port 443 (secure HTTP).

    Along with the new capabilities, we’re also previewing an Azure Functions binding. Serverless applications have many unique use cases, including many real-time scenarios. With the Azure Functions binding, the Azure SignalR Service can be used seamlessly in a serverless environment on Azure. The Azure Functions binding is open source and hosted by Microsoft Azure in a GitHub repository.

    Netrix immediately saw the benefits of Azure SignalR Service and became an early adopter while it was in preview. Netrix built a customized cost-estimation system for the automotive industry and used Azure SignalR Service to add a new capability that notifies browsers of completed commands in real time.

    As Lars Kemmann, Netrix Solution Architect, put it, “By using Azure SignalR Service, we gain the best practices of built-in security and scalability—a comfort for our clients and a huge win for us.”

    Of course, Azure SignalR Service lends itself to much more than complex business applications in the automotive industry. When GranDen, a Taiwan-based gaming company, wanted to bring to market real-time communication and a new gaming concept involving augmented reality to enrich the player experience, they turned to Azure SignalR Service. As Isak Pao, GranDen's CTO, put it, “We reached out to competing cloud service providers. No one else offered us something comparable to Azure SignalR Service. Without it, we’d have to build so many virtual machines that we wouldn’t be able to leverage WebSocket in real time.” See the GranDen case study for more information.

    Try Azure SignalR Service today

    If you want to learn more about SignalR Service, give it a try. You can get started for free, and of course we have plenty of documentation and a simple quickstart. If you have any questions or feature requests, or you want to open an issue, please reach out through any of the channels listed above.
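
    If you'd like to script the provisioning step, a minimal sketch with the Azure CLI looks like the following (the az signalr command group is assumed; resource names and region are placeholders):

    # A minimal sketch, assuming the 'az signalr' command group; names and region are placeholders.
    az group create --name signalr-demo-rg --location eastus
    az signalr create --name my-signalr-demo --resource-group signalr-demo-rg --sku Standard_S1 --unit-count 1

    # Fetch the connection string your application (or the Functions binding) will use
    az signalr key list --name my-signalr-demo --resource-group signalr-demo-rg `
        --query primaryConnectionString --output tsv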

    We can’t wait to see what you’ll build with Azure SignalR Service!

    Release models at pace using Microsoft’s AutoML!


    What is the real problem?

    When creating a machine learning model, data scientists across all industry segments face two challenges: defining and tuning the hyperparameters, and deciding which algorithm to use.

    If a customer plans to create an ML model that will predict the price of a car, the data scientist will need to pick the right algorithm and hyperparameters. Narrowing down to the best algorithm and hyperparameters is a time-consuming process. This has been a challenge for Microsoft’s customers across all verticals, so Microsoft recently launched an Azure Machine Learning Python SDK that includes an AutoML module. The AutoML module helps not only with defining and tuning hyperparameters but also with picking the right algorithm!

    What is AutoML?

    AutoML helps create high-quality models using intelligent automation and optimization. AutoML will figure out the right algorithm and hyperparameters to use. It is a tool that will improve the efficiency of data scientists!

    AutoML

    AutoML’ s current capabilities

    AutoML currently supports the regression and classification problem spaces. Additional problem spaces such as clustering will be supported in future releases. From a data pre-processing perspective, AutoML allows one-hot encoding (converting a categorical variable to a binary vector) and assigning values to missing fields. It currently supports the Python language and the scikit-learn framework. For training the model, one could use a laptop or desktop, Azure Batch AI, Databricks, or an Azure DSVM. All scikit-learn supported data formats are currently supported.

    High level steps to execute AutoML methods

    a) Create and activate a conda environment

    conda create -n myenv Python=3.6 cython numpy


      conda activate myenv

    b) Pip install the Azure ML Training SDK

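    The screenshot that appeared here showed the install command. A minimal sketch, assuming the automl extra of the azureml-sdk package, is:

    # Install the Azure ML SDK with the AutoML extra inside the activated environment
    pip install "azureml-sdk[automl]"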

    c) Launch the Jupyter notebook

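    The notebook server is then launched from the same activated environment:

    # Start the Jupyter notebook server
    jupyter notebook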

    d) Setup the machine learning resources through API

      Create a workspace. Additional components are created in the resource group as part of executing the commands in the cell below.
    Screenshots: creating the workspace from the Jupyter notebook

    You can write the workspace information to a local config file, which makes it easier to load the same configuration from other Jupyter notebook files if required.

    Config

    Sample_Project is created…


    e) Invoke AutoML fit method

    AutoMLClassifier(params) -> Specify # of Iterations, Metric to optimize, etc.

    Example

    automl_classifier = AutoMLClassifier(experiment = experiment,
                                          name = experiment_name,
                                          debug_log = 'automl_errors.log',
                                          primary_metric = 'AUC_weighted',
                                          max_time_sec = 12000,
                                          iterations = 10,
                                          n_cross_validations = 2,
                                          verbosity = logging.INFO)

    AutoMLClassifier.fit(X, Y,….) -> Intelligently generates pipeline parameters to train data

    Example

    local_run = automl_classifier.fit(X=X_digits, y=y_digits, show_output=True)

    f) Check the run details and pick the optimal one

    Run Details

    g) The final step is to operationalize the most performant model

    Availability

    The SDK will be publicly available for use after the Ignite conference, which ends on September 28, 2018. It will be available in West Central US, East US 2, and West Europe, to name a few Azure regions.

    Conclusion

    AutoML is a leap towards the future of data science. It is bound not only to make data scientists in any organization more efficient, because AutoML automatically runs multiple iterations of an experiment, but also to help both new and experienced data scientists explore different algorithms and select and tune hyperparameters. It is worth noting that data scientists can start on a local machine using the Azure ML Python SDK, which includes AutoML, and then use the power of the cloud to run training iterations with technologies such as Azure Batch AI, Databricks, or the Azure DSVM.

    Further reading

    Some of the modules within the Azure ML Python SDK are already in public preview, and you can find more details by reading our documentation. If you are new to data science, Azure ML Studio is a great starting point.
