
New recommendations in Azure Advisor


Azure Advisor is a free service that analyzes your Azure usage and provides recommendations on how you can optimize your Azure resources to reduce costs, boost performance, strengthen security, and improve reliability.

We are excited to announce that we have added several new Azure Advisor recommendations to help you get the most out of your Azure subscriptions.

[Image: Azure Advisor]

Buy Reserved Instances to save over pay-as-you-go costs

Azure Reserved Instances (RIs) allow you to reserve virtual machines (VMs) in advance on a one- or three-year term and save up to 80 percent versus pay-as-you-go rates. RIs are ideal for workloads with predictable, consistent traffic.

Azure Advisor will analyze your last 30 days of VM usage and recommend purchasing RIs when it may provide cost savings. Advisor will show you the regions and VM sizes where you could save money and give you an estimate of your potential savings from purchasing RIs if your usage remains consistent with the previous 30 days.

Create Azure Service Health alerts

Azure Service Health is a free service that provides personalized guidance and support when Azure service issues might affect you. You can create Service Health alerts for any region or service so that you and your teams stay informed via the Azure portal, email, text message, or webhook notification when business-critical resources could be impacted.

Azure Advisor will identify your subscriptions that do not have Service Health alerts configured and recommend that you set up alerts on those subscriptions.
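
If you prefer to script this, you can create a subscription-wide Service Health alert with the Azure CLI. This is a minimal sketch; the alert and resource group names are hypothetical placeholders, and you would attach an action group afterwards to receive notifications:

az monitor activity-log alert create \
  --name ServiceHealthAlert \
  --resource-group MyResourceGroup \
  --condition category=ServiceHealth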

[Image: Azure Service Health alerts]

Upgrade to a support plan that includes technical support

Azure technical support plans give you access to Azure experts when you need assistance. Azure offers a range of support options to best fit your needs, whether you’re a developer just starting your cloud journey or a large organization deploying business-critical applications.

Azure Advisor will identify subscriptions with a high amount of monthly Azure spend that are likely running strategic workloads and recommend upgrading your support plan to include technical support.

Configure your Traffic Manager profiles for optimal performance and availability

Azure Traffic Manager allows you to control the distribution of user traffic for service endpoints in different datacenters and optimize for performance and availability. Azure Advisor has added new recommendations to solve common configuration issues with Traffic Manager profiles.

Reduce DNS Time to Live

Time to Live (TTL) settings on your Traffic Manager profile allow you to specify how quickly to switch endpoints if a given endpoint stops responding to queries. Reducing the TTL value means that clients will be routed to functioning endpoints faster.

Azure Advisor will identify Traffic Manager profiles with a longer TTL configured and will recommend reducing the TTL to either 20 seconds or 60 seconds, depending on whether the profile is configured for Fast Failover.
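
If you would rather apply the change from a script, the Azure CLI can update the TTL on an existing profile. A minimal sketch, with hypothetical profile and resource group names:

az network traffic-manager profile update \
  --name MyProfile \
  --resource-group MyResourceGroup \
  --ttl 20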

[Image: Traffic Manager profiles]

Add or move one endpoint to another Azure region

If all endpoints in a Traffic Manager profile configured for proximity routing are in the same region, users from other regions may experience connection delays. Adding or moving an endpoint to another region will improve overall performance and provide better availability if all endpoints in one region fail.

Azure Advisor will identify Traffic Manager profiles configured for proximity routing where all the endpoints are in the same region and recommend that you either add or move an endpoint to another Azure region.

Add an endpoint configured to “All (World)”

If a Traffic Manager profile is configured for geographic routing, then traffic is routed to endpoints based on defined regions. If a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to “All (World)” will avoid traffic being dropped and improve service availability.

Azure Advisor will identify Traffic Manager profiles configured for geographic routing where there is no endpoint configured to have the Regional Grouping as “All (World)” and recommend making that configuration change.

[Image: Endpoint configuration]

Add at least one more endpoint, preferably in another region

Traffic Manager profiles with more than one endpoint experience higher availability if any given endpoint fails. Placing these endpoints in different regions further improves service reliability.

Azure Advisor will identify Traffic Manager profiles where there is only one endpoint and recommend adding at least one more endpoint in another region.

Get started with Azure Advisor

Visit the Azure Advisor webpage to learn more, and get started using Azure Advisor in the Azure portal. See the Azure Advisor documentation for assistance, and if you have any feedback, don’t hesitate to share it with us in the tool.


Azure.Source – Volume 41

$
0
0

Now in preview

Azure Service Fabric Mesh is now in public preview - Azure Service Fabric Mesh is a fully-managed service that enables developers to deploy and operate containerized applications without having to manage VMs, storage or networking configuration, while keeping the enterprise-grade reliability, scalability, and mission-critical performance of Service Fabric. Service Fabric Mesh supports both Windows and Linux containers, so you can develop with the programming language and framework of your choice. The public preview of Service Fabric Mesh is available in three Azure regions - US West, US East, and Europe West. We will expand the service to other Azure regions in the coming months.

Azure Friday | Azure Service Fabric Mesh preview - Chacko Daniel joins Scott Hanselman to discuss Azure Service Fabric Mesh, which offers the same reliability, mission-critical performance and scale customers get with Service Fabric, but no more overhead of cluster management and patching operations. Service Fabric Mesh supports both Windows and Linux containers allowing you to develop with any programming language and framework of your choice.

Azure Security Center is now integrated into the subscription experience - Azure Security Center is available in public preview from within the subscription experience in the Azure portal. The new Security tab for your subscription provides a quick view into the security posture of your subscription, enabling you to discover and assess the security of your resources in that subscription and take action.

Speech services July 2018 update - The 0.5.0 update to the Speech SDK for Azure Cognitive Services was just released in preview. This update adds support for UWP (on Windows version 1709), .NET Standard 2.0 (on Windows), and Java on Android 6.0 (Marshmallow, API level 23) or later. It also includes feature changes and bug fixes. Most notably, it now supports long-running audio and automatic reconnection. This will make the Speech service more resilient in the event of time-out, network failures, or service errors. Improvements to error messages make it easier to handle errors.

Now generally available

Score one for the IT Pro: Azure File Sync is now generally available! - Azure File Sync transforms your Windows Server machines into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, Network File System (NFS), and File Transfer Protocol Service (FTPS). You can have as many caches as you need across the world. You must have an Azure storage account and an Azure file share in the same region in which you want to deploy Azure File Sync; see the documentation for Azure File Sync region availability.

News and updates

Announcing the Azure Cloud Shell editor in collaboration with Visual Studio Code - Through collaboration with the Visual Studio Code team and their open-source Monaco project, the same web-standards based editor that powers Visual Studio Code is now integrated directly into Cloud Shell. The Monaco code editor brings features like syntax coloring, auto completion, and code snippets. The new Cloud Shell integration includes a file explorer to easily navigate the Cloud Shell file system for seamless file exploration. This enables a rich editing workflow by simply typing “code .” to open the editor’s file explorer from any Cloud Shell web-based experience.

Azure Friday | Azure Cloud Shell editor - Justin Luk joins Scott Hanselman to show the new Azure Cloud Shell editor. Since its launch, Cloud Shell included a variety of editors (vi, emacs, and nano) for editing files. In collaboration with the Visual Studio Code team and their open-source Monaco project, the same web standards-based editor that powers Visual Studio Code is now integrated directly into Cloud Shell.

Spring Data Gremlin for Azure Cosmos DB Graph API - Spring Data Gremlin is now available on Maven Central with source code available on GitHub. Spring Data Gremlin supports an annotation-oriented programming model to simplify the mapping to the database entity. It also provides support for creating database queries based on Spring Data repositories.

Azure cost forecast API and other updates - The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. The latest update adds a Forecasts API, location and service data normalization, price sheet mapping to usage details, a flag for charges that are monetary commit ineligible, additional properties for identifying late arriving data, and PowerShell support. These APIs currently only support Enterprise Enrollments and Web Direct Subscriptions (with a few exceptions).

The Open Source Show

The Open Source Show | Getting Started with Cloud Native Infrastructure - Cloud native. What does it really mean? Why do you need it? How and where should you get started? Justin Garrison, co-author of Cloud Native Infrastructure, and Bridget Kromhout cover the what, when, why, and how of cloud native, including what Justin's learned from studying massive organizations and CNCF projects. We close it out with "Cloud Native Fundamentals: Kubernetes in 90 seconds" from AKS Program Manager, Ralph Squillace.

Azure Friday

Azure Friday | Event-based data integration with Azure Data Factory - Gaurav Malhotra joins Scott Hanselman to discuss Event-driven architecture (EDA), which is a common data integration pattern that involves production, detection, consumption, and reaction to events. Learn how you can do event-based data integration using Azure Data Factory.

Technical content and training

Foretell and prevent downtime with predictive maintenance - Learn how Azure can provide your predictive maintenance solution by using data about previous breakdowns to model when failures are about to occur, and intervene just as sensors detect the same conditions. Until recently this has not been a realistic option, as the modeling tools did not exist and real-time processing power was too expensive. See how Azure solves that problem.

Getting started with IoT: how to connect, secure, and manage your “things” - The value of the data an IoT solution generates depends largely on how effectively you deploy and manage the devices. Learn how to get started with securing, provisioning, and managing your devices with Azure IoT Hub.

The Azure Podcast

The Azure Podcast: Episode 238 - Serial Console - Evan and Kendall talk to Craig Wiand, a Principal Engineer in the Azure team, about the new Serial Console feature available for VMs.

Customers and partners

Internet of Things Show | A Perspective on Industrial IoT Security by TrendMicro - Security is critical in IoT applications, but how different is it from (or similar to) traditional IT security? Richard Ku, an expert in Industrial IoT security for enterprise OT environments at TrendMicro, visited the IoT Show to share his perspective on what you need to pay attention to when building IoT applications.

R3 on Azure: Launch of Corda Enterprise v3.1 - R3 launched their Corda Enterprise v3.1 offering to the Azure Marketplace with a free trial offer, giving you the opportunity to kick the tires before you buy. Corda Enterprise is the mission-critical version of Corda, the open source blockchain platform. It brings the higher performance, high availability, and other capabilities demanded by enterprise organizations.

Globally replicated data lakes with LiveData using WANdisco on Azure - WANdisco enables globally replicated data lakes on Azure for analytics over the freshest data. With WANdisco Fusion, you can make data that you have used in other large-scale analytics platforms available in Azure Blob Storage, ADLS Gen1 and Gen2 without downtime or disruption to your existing environment.

Intelligent Healthcare with Azure Bring Your Own Key (BYOK) technology - Learn how Change Healthcare implemented Azure SQL Database Transparent Data Encryption (TDE) with BYOK support. TDE with BYOK encrypts databases, log files and backups when written to disk, which protects data at rest from unauthorized access.

Azure Marketplace June container offers - The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. In June we published 42 container offers from Bitnami, including container images for Cassandra, Kafka, and Open Service Broker for Azure.

Industries

Blockchain as a tool for anti-fraud - Healthcare costs are skyrocketing. There are many factors driving the high cost of healthcare, and one of them is fraud. There are two root vulnerabilities in healthcare organizations: insufficient protection of data integrity, and a lack of transparency. Learn how you can use blockchain as a tool for anti-fraud.

Azure This Week

A Cloud Guru | Azure This Week - 20 July 2018 - In this episode of Azure This Week, Lars looks at the public preview of Azure Data Box Disk, Dev Spaces for Azure Kubernetes Services and the general availability of Azure DevOps Projects.

Azure App Service now supports Java SE on Linux


Lately there have been a whole lot of changes to Java and its vibrant communities. Now shared between Oracle for Java SE and the Eclipse Foundation for Jakarta EE (formerly Java EE), Java continues to be the leading programming language for developers and enterprises. With over 12 million Java developers worldwide and digital transformation top of mind for many IT organizations, Java is well-positioned to thrive in the cloud.

With the sheer volume of Java apps in existence and soon to be developed, Java developers will benefit greatly from cloud services that enable fast and secure application development while saving time and cost. Coupled with vast geographic region coverage, it is a cloud solution every developer should experience.

Today, Microsoft is pleased to announce that Azure App Service now supports Java SE 8 based applications on Linux, available in public preview. This and subsequent timed releases, as well as upcoming LTS versions, will be supported for an extended period. Java web apps can now be built and deployed on a highly scalable, self-patching web hosting service where bug fixes and security updates are maintained by Microsoft. Additional performance features include scaling to support millions of users across multiple instances, applications, and regions with intelligent dynamic scaling.

Java Web Apps benefits

Let Azure App Service do all the heavy lifting commonly associated with enterprise-grade infrastructure so developers can focus on productivity. Take advantage of Microsoft Azure’s battle tested infrastructure platform spanning from global security compliance to DevOps capabilities. Developer productivity benefits do not stop there. Java web apps provide the following benefits:

  • Fully managed enterprise platform – Log aggregation, email notifications, and Azure portal alerts. Version updates will soon include auto-patching.
  • Performance monitoring – Integrate with the Application Performance Management (APM) product of your choice to monitor and manage applications.
  • Global scale with high availability – 99.95% uptime with low latency, auto-scaling, or manual-scaling (up or out), anywhere in Microsoft’s global datacenters.
  • Security and compliance – App Service is ISO, SOC, PCI, and GDPR compliant.
  • Authentication and authorization – Built-in authentication with Azure Active Directory, plus governance with Role-Based Access Control (RBAC) to manage IP address restrictions.
  • Build automation and CI/CD support – Maven, Jenkins, and Visual Studio Team Services support will be available in the general availability release.

There are three ways to deploy Java web apps on Azure: create them from the Azure portal, use a template, or create and deploy with Maven. In this post, we will cover how to deploy a Spring Boot app using Maven.

Get started with Maven and Spring

To get started, clone your favorite Java web app or use this sample:

bash-3.2$ git clone

Add the Maven plugin for Azure Web Apps to the app project POM file and set server port to 80.

<build>
   <plugins>
      <plugin>
         <groupId>com.microsoft.azure</groupId>
         <artifactId>azure-webapp-maven-plugin</artifactId>
         <version>1.2.0</version>
         <configuration>
 
            <!-- Web App information -->
            <resourceGroup>${RESOURCE_GROUP}</resourceGroup>
            <appName>${WEBAPP_NAME}</appName>
            <region>${REGION}</region>
            <pricingTier>S1</pricingTier>
 
            <!-- Java Runtime Stack for Web App on Linux -->
            <linuxRuntime>jre8</linuxRuntime>
 
            <deploymentType>ftp</deploymentType>
            <!-- Resources to be deployed to your Web App -->
            <resources>
               <resource>
                  <directory>${project.basedir}/target</directory>
                  <targetPath>/</targetPath>
                  <includes>
                     <include>app.jar</include>
                  </includes>
               </resource>
            </resources>
            <appSettings>
               <property>
                  <name>JAVA_OPTS</name>
                  <value>-Djava.security.egd=file:/dev/./urandom</value>
               </property>
            </appSettings>
         </configuration>
     </plugin>
   </plugins>
   <finalName>app</finalName> 
</build>

Build, package, and deploy using Maven, as you normally would.

bash-3.2$ mvn package azure-webapp:deploy
[INFO] Scanning for projects...
[INFO] 
[INFO] ----------------------------------------------------------------------
[INFO] Building petclinic 2.0.0
[INFO] ----------------------------------------------------------------------
[INFO] 
...
...
[INFO] --- azure-webapp-maven-plugin:1.2.0:deploy (default-cli) @ spring-petclinic ---
[INFO] Start deploying to Web App myjavase-07262018aa...
[INFO] Authenticate with Azure CLI 2.0
[INFO] Target Web App doesn't exist. Creating a new one...
[INFO] Creating App Service Plan 'ServicePlan1af9c8f0-3f71-43a8'...
[INFO] Successfully created App Service Plan.
[INFO] Successfully created Web App.
...
...
 
[INFO] Finished uploading directory: /Users/selvasingh/GitHub/selvasingh/spring-petclinic/target/azure-webapps/myjavase-07262018aa --> /site/wwwroot
[INFO] Successfully uploaded files to FTP server: waws-prod-bay-081.ftp.azurewebsites.windows.net
[INFO] Starting Web App after deploying artifacts...
[INFO] Successfully started Web App.
[INFO] Successfully deployed Web App at https://myjavase-07262018aa.azurewebsites.net
[INFO] ----------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ----------------------------------------------------------------------
[INFO] Total time: 03:06 min
[INFO] Finished at: 2018-07-13T18:03:22-07:00
[INFO] Final Memory: 139M/987M
[INFO] ----------------------------------------------------------------------

Open the site in your favorite browser.

[Image: Spring Boot app running in the browser]

And here it is, your first Java web app on Azure App Service. Nice job!

More friendly Java tools on the way

We understand that development tools are not one size fits all. We are making sure we continue to enrich our tool suite with tools that enhance productivity and efficiency. In the meantime, expect to see support for Gradle and Jenkins next.

Next steps

Running Java web apps in the cloud could not be easier or faster. Give it a try and create your first Java web app on Azure App Service. Below are a few resources to help you get started.

If you don’t have an Azure subscription, create a free account today.

Security Center’s adaptive application controls are generally available


Azure Security Center provides several threat prevention mechanisms to help you reduce surface areas susceptible to attack. One of those mechanisms is adaptive application controls. Today we are excited to announce the general availability of this capability, which helps you audit and block unwanted applications.

Adaptive application controls help you define the set of applications that are allowed to run on configured groups of virtual machines (VMs). Enabling adaptive application controls for your VMs allows Azure Security Center to do a few things. First, it recommends applications (EXEs, MSIs, and scripts) for whitelisting, automatically clustering similar VMs to ease manageability and reduce exposure to unnecessary applications. It also applies the appropriate rules in an automated fashion, monitors any violations of those rules, and enables you to manage and edit previously applied application whitelisting policies.

[Images: Adaptive application controls in Azure Security Center]

By default, Security Center enables application control in Audit mode. After validating that the whitelist has not had any adverse effects on your workload, you can change the protection mode to Enforce mode through the Security Center management UI.

You can also change the application control policy for each configured group of VMs through the same Security Center management UI, edit and remove previously applied rules, and extend the rules to allow more applications to run in your workloads.

This feature is available in the standard pricing tier of Security Center, and you can try Security Center for free for the first 60 days.

To learn more about these features in Security Center, visit our documentation.

Data Breakpoints – Visual Studio 2017 15.8 Update  


New to Visual Studio 2017 version 15.8, you can now set data breakpoints from within the Locals, Autos, Watch, and Quickwatch windows!  To view official documentation on data breakpoints, check out this post.

Data breakpoints allow you to stop execution when a particular variable stored at a specific memory address changes. Prior to version 15.8, data breakpoints could only be set via the Breakpoints window. In this window, a data breakpoint can be set by selecting New > Data Breakpoint... and then entering the address of the desired variable and the number of bytes you want the window to watch. Setting data breakpoints via the Breakpoints window is still valid, especially since this window allows data breakpoints to be combined with Conditions and Actions like any other breakpoint. However, this requires multiple navigations to bring up the window and its corresponding data breakpoints menu, as well as manual input of the desired address and byte count.

[Image: Setting a data breakpoint in the Breakpoints window]

Now, data breakpoints can also be accessed via context menu in the Locals, Autos, Watch, and Quickwatch windows with just two short clicks and no additional user input. A data breakpoint can be created in any of the listed windows by right-clicking (or Shift + F10) the desired variable and selecting Break When Value Changes while in break mode. After continuing execution, your code will break again when your specified variable's value changes, and you will also receive a pop-up notification about the change.

[Image: Setting a data breakpoint in the Watch window]

Ready to try out data breakpoints?  Give us your feedback!

Download Visual Studio 2017 Preview today and give us your feedback on this feature or anything within Visual Studio 2017. Your feedback is a critical part of ensuring that we can deliver a delightful debugging experience. For any questions, reach out to us via Twitter at @visualc or via email at visualcpp@microsoft.com. For any issues or suggestions, please let us know via Help > Send Feedback > Report a Problem in the IDE.

Top 4 ways to optimize your Microsoft Store listing


When publishing your app to the Microsoft Store, be sure to take advantage of the many options to make your Store listing stand out. Great video, text, and images can help create customer interest and drive purchases. Remember, your Microsoft Store listing will be many customers’ first exposure to your app. It’s crucial to make a good first impression!

Make the most of your listing

1. Include video trailers

[Image: Affinity Designer Microsoft Store listing]

Video trailers are short videos that spotlight your product and give your customers a quick look at what it does. On average, including one or more video trailers can increase downloads by up to 11%.

Check out Affinity Designer’s trailer for an example that really shows off what the app can do.

Quick trailer tips:

  • Focus on high quality and short length (60 seconds or less).
  • Use different thumbnails for each trailer.
  • Keep key messaging short and centered in each frame.
  • When using trailers, you must also provide a 1920 x 1080 pixel image (16:9) in the Promotional images section in order for your trailers to appear at the top of your Store listing. This image will appear after your trailers have finished playing.

Note that trailers are only shown to customers on Windows 10, version 1607 or later (which includes Xbox).

See more information and tips here.

2. Create a great app description

The description is the first thing your customer reads about your app in the Microsoft Store, and it may also appear in search results and algorithm lists—so make it count.

Quick description tips:

  • Start with the value prop: Why should your customer buy this?
  • Focus on your app’s appeal with plain, clear language.
  • Localize for all your markets.

Read more here.

3. Include an eye-catching logo

[Image: AUTODESK Sketchbook logo in the Microsoft Store]

Your logo is the main image displayed on Windows 10 and Xbox, and in searches and collections, so we strongly recommend providing both a 9:16 poster art image and a 1:1 box art image. A good logo can visually “pop” and lead customers to see more.

We recommend providing these logo images to create an optimal appearance in the Store. In particular, the 9:16 Poster art image is required for proper display for customers on Windows 10 and Xbox devices. You also have the option to upload additional logo images that will be used in the Store (instead of images taken from your app’s packages) to create a more customized display.

Quick logo tips:

  • Include your app name as a key part of the image.
  • Provide .png files no larger than 50MB each.
  • Provide all requested formats and sizes for optimal display across devices.

More details on all the display options here.

4. Keep your customers up to date

When you update your app it’s always a good idea to let customers know what you’ve improved in the latest release, especially if you’ve fixed bugs or improved the app based on customer feedback. Use the What’s new in this version text box to share that information with your customers.  In addition to letting your current customers know what’s changed, this also shows potential new customers that you’re listening to feedback and continuing to add new features. 

Get started now

Whether you’re submitting your app for the first time or making an update to an app that’s in the Store, we hope you’ll find these tips useful. For more details on all of these options, along with other ways you can create great Microsoft Store listings, start here.


What’s new in VSTS Sprint 137 Update

The Sprint 137 Update of Visual Studio Team Services (VSTS) has rolled out to all organizations. In this update, both Microsoft-hosted Linux and macOS agents, as well as Azure DevOps Projects, are now generally available. Watch the following video to learn about a few of the features, which also includes a demonstration of some of…

Visual Studio Code C/C++ extension July 2018 Update and IntelliSense auto-configuration for CMake


Last week we shipped the July 2018 update to the C/C++ extension for Visual Studio Code. In this update we added support for a new experimental API that allows build system extensions to pass IntelliSense configuration information to our extension to power the full IntelliSense experience. You can find the full list of changes in the July 2018 update in the changelog.

As CMake has been the most requested build system for us to support, we've been working with vector-of-bool, author of the CMake Tools extension, on an implementation that uses this new API to provide IntelliSense auto-configuration for CMake, and the changes are now live!

If you are using CMake and are looking for a way to auto-config IntelliSense in the C/C++ extension, be sure to check out the CMake Tools extension. If you are interested in adding IntelliSense support for other build systems, you can learn more about this new API on the npm site. We’re rolling it out as an experimental API and would love any feedback!

IntelliSense auto-configuration for CMake

After both extensions are installed, open any folder that contains a CMakeLists.txt file in its root and follow the steps to let the CMake Tools extension configure your project. Once completed, you will see the following message:

Simply click “Allow” to enable the CMake Tools extension to send IntelliSense configuration over to the C/C++ extension to automatically power up IntelliSense.

If you need to change the IntelliSense provider or disable the one that’s currently in use, you can use the command “C/Cpp: Change Configuration Provider…” in the Command Palette to invoke the list of providers to switch to.

We are very excited about the CMake support (thanks vector-of-bool for partnering with us!), and hope you like it too! We are also looking forward to seeing support for more build systems by leveraging the extension API and any feedback you might have around this API.

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.


Blazor 0.5.0 experimental release now available


Blazor 0.5.0 is now available! This release explores scenarios where Blazor is run in a separate process from the rendering process. Specifically, Blazor 0.5.0 enables the option to run Blazor on the server and then handle all UI interactions over a SignalR connection. This release also adds some very early support for debugging your Blazor .NET code in the browser!

New features in this release:

  • Server-side Blazor
  • Startup model aligned with ASP.NET Core
  • JavaScript interop improvements
    • Removed requirement to preregister JavaScript methods
    • Invoke .NET instance method from JavaScript
    • Pass .NET objects to JavaScript by reference
  • Add Blazor to any HTML file using a normal script tag
  • Render raw HTML
  • New component parameter snippet
  • Early support for in-browser debugging

A full list of the changes in this release can be found in the Blazor 0.5.0 release notes.

Get Blazor 0.5.0

To get setup with Blazor 0.5.0:

  1. Install the .NET Core 2.1 SDK (2.1.300 or later).
  2. Install Visual Studio 2017 (15.7 or later) with the ASP.NET and web development workload selected.
  3. Install the latest Blazor Language Services extension from the Visual Studio Marketplace.

To install the Blazor templates on the command-line:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

You can find getting started instructions, docs, and tutorials for Blazor at https://blazor.net.

Upgrade an existing project to Blazor 0.5.0

To upgrade an existing Blazor project from 0.4.0 to 0.5.0:

  • Install all of the required bits listed above.
  • Update your Blazor package and .NET CLI tool references to 0.5.0. Your upgraded Blazor project file should look like this:

    <Project Sdk="Microsoft.NET.Sdk.Web">
    
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <RunCommand>dotnet</RunCommand>
        <RunArguments>blazor serve</RunArguments>
        <LangVersion>7.3</LangVersion>
    </PropertyGroup>
    
    <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.Blazor.Browser" Version="0.5.0" />
        <PackageReference Include="Microsoft.AspNetCore.Blazor.Build" Version="0.5.0" />
        <DotNetCliToolReference Include="Microsoft.AspNetCore.Blazor.Cli" Version="0.5.0" />
    </ItemGroup>
    
    </Project>
    
  • Update index.html to replace the blazor-boot script tag with a normal script tag that references _framework/blazor.webassembly.js.

    index.html

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8" />
        <meta name="viewport" content="width=device-width">
        <title>BlazorApp1</title>
        <base href="/" />
        <link href="css/bootstrap/bootstrap.min.css" rel="stylesheet" />
        <link href="css/site.css" rel="stylesheet" />
    </head>
    <body>
        <app>Loading...</app>  
        <script src="_framework/blazor.webassembly.js"></script>
    </body>
    </html>
    
  • Add a Startup class to your project and update Program.cs to setup the Blazor host.

    Program.cs

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IWebAssemblyHostBuilder CreateHostBuilder(string[] args) =>
            BlazorWebAssemblyHost.CreateDefaultBuilder()
                .UseBlazorStartup<Startup>();
    }
    

    Startup.cs

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
        }
    
        public void Configure(IBlazorApplicationBuilder app)
        {
            app.AddComponent<App>("app");
        }
    }
    
  • Update to the new JavaScript interop model. The changes to the JavaScript interop model are covered in the "JavaScript interop changes" section below.

What is server-side Blazor?

Blazor is principally a client-side web framework intended to run in a browser where the component logic and DOM interactions all happen in the same process.

[Image: Blazor client-side]

However, Blazor was built to be flexible enough to handle scenarios where the Blazor app runs apart from the rendering process. For example, you might run Blazor in a Web Worker thread so that it runs separately from the UI thread. Events would get pushed from the UI thread to the Blazor worker thread, and Blazor would push UI updates to the UI thread as needed. This scenario isn't supported yet, but it's something Blazor was designed to handle.

[Image: Blazor web worker]

Another potential use case for running Blazor in a separate process is writing desktop applications with Electron. The Blazor component logic could run in a normal .NET Core process, while the UI updates are handled in the Electron rendering process.

[Image: Blazor Electron]

We have a working prototype of using Blazor with Electron in this way that you can try out.

Blazor 0.5.0 takes the out-of-process model for Blazor and streeeetches it over a network connection so that you can run Blazor on the server. With Blazor 0.5.0 you can run your Blazor components server-side on .NET Core while UI updates, event handling, and JavaScript interop calls are handled over a SignalR connection.

[Image: Blazor server-side]

There are a number of benefits to running Blazor on the server in this way:

  • You can still write your entire app with .NET and C# using the Blazor component model.
  • Your app still has a rich interactive feel and avoids unnecessary page refreshes.
  • Your app download size is significantly smaller and the initial app load time is much faster.
  • Your Blazor component logic can take full advantage of server capabilities including using any .NET Core compatible APIs.
  • Because you're running on .NET Core on the server, existing .NET tooling, like debugging, just works.
  • Works with thin clients (e.g., browsers that don't support WebAssembly, resource-constrained devices, etc.).

Of course there are some downsides too:

  • Latency: every user interaction now involves a network hop.
  • No offline support: if the client connection goes down the app stops working.
  • Scalability: the server must manage multiple client connections and handle client state.

While our primary goal for Blazor remains to provide a rich client-side web development experience, enough developers expressed interest in the server-side model that we decided to experiment with it. And because server-side Blazor uses the exact same component model as running Blazor on the client, it is well aligned with our client-side efforts.

Get started with server-side Blazor

To create your first server-side Blazor app use the new server-side Blazor project template.

dotnet new blazorserver

Build and run the app to see it in action:

dotnet run

You can also create a server-side Blazor app from Visual Studio.

[Image: Blazor server-side template]

When you run the Blazor server-side app it looks like a normal Blazor app, but the download size is significantly smaller (under 100KB), because there is no need to download a .NET runtime, the app assembly, or any of its dependencies.

[Image: Blazor server-side running app]

[Image: Blazor server-side download size]

You're also free to run the app under the debugger (F5) as all the .NET logic is running on .NET Core on the server.

The template creates a solution with two projects: an ASP.NET Core host project, and a project for your server-side Blazor app. In a future release we hope to merge these two projects into one, but for now the separation is necessary due to the differences in the Blazor compilation model.

The server-side Blazor app contains all of your component logic, but instead of running client-side in the browser the logic is run server-side in the ASP.NET Core host application. The Blazor app uses a different bootstrapping script (blazor.server.js instead of blazor.webassembly.js), which establishes a SignalR connection with the server and handles applying UI updates and forwarding events. Otherwise the Blazor programming model is the same.

The ASP.NET Core app hosts the Blazor app and sets up the SignalR endpoint. Because the Blazor app runs on the server, the event handling logic can directly access server resources and services. For example, the FetchData page no longer needs to issue an HTTP request to retrieve the weather forecast data, but can instead use a service configured on the server:

protected override async Task OnParametersSetAsync()
{
    forecasts = await ForecastService.GetForecastAsync(StartDate);
}

The WeatherForecastService in the template generates the forecast data in memory, but it could just as easily pull the data from a database using EF Core, or use other server resources.
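
As an illustration of how such a service reaches the component, here is a minimal sketch, assuming a WeatherForecastService similar to the template's (the registration goes in the Blazor Startup class described in the next section):

// In Startup.ConfigureServices: make the service available for injection
services.AddSingleton<WeatherForecastService>();

// At the top of the FetchData component, request the instance
@inject WeatherForecastService ForecastService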

Startup model

All Blazor projects in 0.5.0 now use a new startup model that is similar to the startup model in ASP.NET Core. Each Blazor project has a Startup class with a ConfigureServices method for configuring the services for your Blazor app, and a Configure method for configuring the root components of the application.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IBlazorApplicationBuilder app)
    {
        app.AddComponent<App>("app");
    }
}

The app entry point in Program.cs creates a Blazor host that is configured to use the Startup class.

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IWebAssemblyHostBuilder CreateHostBuilder(string[] args) =>
        BlazorWebAssemblyHost.CreateDefaultBuilder()
            .UseBlazorStartup<Startup>();
}

In server-side Blazor apps the entry point comes from the host ASP.NET Core app, which references the Blazor Startup class to both add the server-side Blazor services and to add the Blazor app to the request handling pipeline:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddServerSideBlazor<App.Startup>();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        ...
        app.UseServerSideBlazor<App.Startup>();
    }
}

While the server-side Blazor project may also have a Program class, it is not used when running on the server. However, it would be used if you switched to client-side (WebAssembly) execution just by changing the <script> tag in index.html to load blazor.webassembly.js instead of blazor.server.js.

The Blazor app and the ASP.NET Core app share the same service provider. Services added in either ConfigureServices methods are visible to both apps. Scoped services are scoped to the client connection.

State management

When running Blazor on the server, the UI state is all managed server-side. The initial state is established when the client connects to the server and is maintained in memory as the user interacts with the app. If the client connection is lost, the server-side app state will be lost, unless it is otherwise persisted and restored by the app. For example, you could maintain your app state in an AppState class that you serialize into session state periodically and then initialize the app state from session state when it is available. While this process is currently completely manual, in the future we hope to make server-side state management easier and more integrated.
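
A rough sketch of the AppState approach described above (the class shape and the use of Json.NET are illustrative assumptions, not a prescribed API):

using Newtonsoft.Json;

public class AppState
{
    public int Counter { get; set; }
    public string LastQuery { get; set; }

    // Hypothetical persistence helpers: serialize into session state
    // periodically, and restore from it when the client reconnects.
    public string ToJson() => JsonConvert.SerializeObject(this);
    public static AppState FromJson(string json) =>
        JsonConvert.DeserializeObject<AppState>(json);
}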

JavaScript interop changes

You can use JavaScript interop libraries when using server-side Blazor. The Blazor runtime handles sending the JavaScript calls to the browser and then sending the results back to the server. To accommodate out-of-process usage of JavaScript interop, the JavaScript interop model was significantly revised and expanded in this release.

Calling JavaScript from .NET

To call into JavaScript from .NET use the new IJSRuntime abstraction, which is accessible from JSRuntime.Current. The InvokeAsync<T> method on IJSRuntime takes an identifier for the JavaScript function you wish to invoke along with any number of JSON serializable arguments. The function identifier is relative to the global scope (window). For example, if you wish to call window.someScope.someFunction then the identifier would be someScope.someFunction. There is no longer any need to register the function before it can be called. The return type T must also be JSON serializable.

exampleJsInterop.js

window.exampleJsFunctions = {
  showPrompt: function (message) {
    return prompt(message, 'Type anything here');
  }
};

ExampleJsInterop.cs

using Microsoft.JSInterop;

public class ExampleJsInterop
{
    public static Task<string> Prompt(string message)
    {
        // Implemented in exampleJsInterop.js
        return JSRuntime.Current.InvokeAsync<string>(
            "exampleJsFunctions.showPrompt",
            message);
    }
}

The IJSRuntime abstraction is async to allow for out-of-process scenarios. However, if you are running in-process and want to invoke a JavaScript function synchronously you can downcast to IJSInProcessRuntime and call Invoke<T> instead. We recommend that most JavaScript interop libraries should use the async APIs to ensure the libraries can be used in all Blazor scenarios, client-side or server-side.
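
For example, a synchronous variant of the Prompt helper above might look like this when running client-side (a minimal sketch; the cast fails in out-of-process scenarios such as server-side Blazor):

public static string PromptSync(string message)
{
    // Safe only in-process (client-side); the cast throws otherwise
    var inProcess = (IJSInProcessRuntime)JSRuntime.Current;
    return inProcess.Invoke<string>("exampleJsFunctions.showPrompt", message);
}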

Calling .NET from JavaScript

To invoke a static .NET method from JavaScript use the DotNet.invokeMethod or DotNet.invokeMethodAsync functions passing in the identifier of the static method you wish to call, the name of the assembly containing the function, and any arguments. Again, the async version is required to support out-of-process scenarios. To be invokable from JavaScript, the .NET method must be public, static, and attributed with [JSInvokable]. By default, the method identifier is the method name, but you can specify a different identifier using the JSInvokableAttribute constructor. Calling open generic methods is not currently supported.

JavaScriptInvokable.cs

public class JavaScriptInvokable
{
    [JSInvokable]
    public static Task<int[]> ReturnArrayAsync()
    {
        return Task.FromResult(new int[] { 1, 2, 3 });
    }
}

dotnetInterop.js

DotNet.invokeMethodAsync(assemblyName, 'ReturnArrayAsync').then(data => ...)
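
As noted above, you can pick a custom identifier via the JSInvokableAttribute constructor; a minimal sketch, where the name "GetArray" is an arbitrary example:

public class JavaScriptInvokable
{
    // Exposed to JavaScript as 'GetArray' rather than the method name
    [JSInvokable("GetArray")]
    public static Task<int[]> ReturnArrayAsync()
    {
        return Task.FromResult(new int[] { 1, 2, 3 });
    }
}

DotNet.invokeMethodAsync(assemblyName, 'GetArray').then(data => ...)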

New in Blazor 0.5.0, you can also call .NET instance methods from JavaScript. To invoke a .NET instance method from JavaScript you first pass the .NET instance to JavaScript by wrapping it in a DotNetObjectRef instance. The .NET instance will then be passed by reference to JavaScript and you can invoke .NET instance methods on the instance using the invokeMethod or invokeMethodAsync functions. The .NET instance can also be passed as an argument when invoking other .NET methods from JavaScript.

ExampleJsInterop.cs

public class ExampleJsInterop
{
    public static Task SayHello(string name)
    {
        return JSRuntime.Current.InvokeAsync<object>(
            "exampleJsFunctions.sayHello", 
            new DotNetObjectRef(new HelloHelper(name)));
    }
}

exampleJsInterop.js

window.exampleJsFunctions = {
  sayHello: function (dotnetHelper) {
    return dotnetHelper.invokeMethodAsync('SayHello')
      .then(r => console.log(r));
  }
};

HelloHelper.cs

public class HelloHelper
{
    public HelloHelper(string name)
    {
        Name = name;
    }

    public string Name { get; set; }

    [JSInvokable]
    public string SayHello() => $"Hello, {Name}!";
}

Output

Hello, Blazor!

Add Blazor to any HTML file

In previous Blazor releases the project build modified index.html to replace the blazor-boot script tag with a real script tag that handled downloading and starting up the runtime. This setup made it difficult to use Blazor in arbitrary HTML files.

In Blazor 0.5.0 this mechanism has been replaced. For client-side projects add a script tag that references the _framework/blazor.webassembly.js script (which is generated as part of the build). For server-side projects you reference _framework/blazor.server.js. You can add this script to any HTML file, including server generated content.

For example, instead of using the static index.html file from the Blazor client project you could add a Razor Page to your ASP.NET Core host project and then add the Blazor script tag there along with any server-side rendering logic.

Render raw HTML

Blazor normally renders strings using DOM text nodes, which means that any markup they may contain will be ignored and treated as literal text. This new feature lets you render special MarkupString values that will be parsed as HTML or SVG and then inserted into the DOM.

WARNING: Rendering raw HTML constructed from any untrusted source is a major security risk!

Use the MarkupString type to add blocks of static HTML content.

@((MarkupString)myMarkup)

@functions {
    string myMarkup = "<p class='markup'>This is a <em>markup string</em>.</p>";
}

Component parameter snippet

Thanks to a community contribution from Benjamin Vertonghen (vertonghenb) we now have a Visual Studio snippet for adding component parameters. Just type para and then hit Tab twice to add a parameter to your component.

Debugging

Blazor 0.5.0 introduces some very basic debugging support in Chrome for client-side Blazor apps running on WebAssembly. While this initial debugging support is very limited and unpolished it does show the basic debugging infrastructure coming together.

To debug your client-side Blazor app in Chrome:

  • Build a Blazor app in Debug configuration (the default for non-published apps)
  • Run the Blazor app in Chrome
  • With the keyboard focus on the app (not in the dev tools, which you should probably close as it's less confusing that way), press the following Blazor specific hotkey:
    • Shift+Alt+D on Windows/Linux
    • Shift+Cmd+D on macOS

You need to run Chrome with remote debugging enabled to debug your Blazor app. If you don't, you will get an error page with instructions for running Chrome with the debugging port open so that the Blazor debugging proxy can connect to it. You will need to close all Chrome instances and then restart Chrome as instructed.

[Image: Blazor debugging error page]

Once you have Chrome running with remote debugging enabled, hitting the debugging hotkey will open a new debugger tab. After a brief moment the Sources tab will show a list of the .NET assemblies in the app. You can expand each assembly and find the .cs/.cshtml source files you want to debug. You can then set breakpoints, switch back to your app's tab, and cause the breakpoints to be hit. You can then single-step (F10) or resume (F8).

[Image: Blazor debugging]

How does this work? Blazor provides a debugging proxy that implements the Chrome DevTools Protocol and augments the protocol with .NET specific information. When you hit the debugging hotkey, Blazor points the Chrome DevTools at the proxy, which in turn connects to the browser window you are trying to debug (hence the need for enabling remote debugging).

You might be wondering why we don't just use browser source maps. Source maps allow the browser to map compiled files back to their original source files. However, Blazor does not map C# directly to JS/WASM (at least not yet). Instead, Blazor does IL interpretation within the browser, so source maps are not relevant.

NOTE: The debugger capabilities are very limited. You can currently only:

  • Single-step through the current method (F10) or resume (F8)
  • In the Locals display, observe the values of any local variables of type int/string/bool
  • See the call stack, including call chains that go from JavaScript into .NET and vice-versa

That's it! You cannot step into child methods (i.e., F11), observe the values of any locals that aren't an int/string/bool, observe the values of any class properties or fields, hover over variables to see their values, evaluate expressions in the console, step across async calls, or do basically anything else.

Our friends on the Mono team have done some great work tackling some of the hardest technical problems to enable source viewing, breakpoints, and stepping, but please be patient as completing the long tail of debugger features remains a significant ongoing task.

Community

The Blazor community has produced a number of great Blazor extensions, libraries, sample apps, articles, and videos.
You can find out about these community projects on the Blazor Community page. Recent additions include a Blazor SignalR client, Redux integration, and various community authored samples (Toss, Clock, Chat). If you have a Blazor related project that you'd like to share on the community page let us know by sending us a pull request to the Blazor.Docs repo.

Give feedback

We hope you enjoy this latest preview release of Blazor. As with previous releases, your feedback is important to us. If you run into issues or have questions while trying out Blazor please file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you've tried out Blazor for a while please also let us know what you think by taking our in-product survey. Click the survey link shown on the app home page when running one of the Blazor project templates:

[Image: Blazor survey]

Thanks for trying out Blazor!

New recommendations in Azure Advisor

$
0
0

Azure Advisor is a free service that analyzes your Azure usage and provides recommendations on how you can optimize your Azure resources to reduce costs, boost performance, strengthen security, and improve reliability.

We are excited to announce that we have added several new Azure Advisor recommendations to help you get the most out of your Azure subscriptions.

Azure Advisor

Buy Reserved Instances to save over pay-as-you-go costs

Azure Reserved Instances (RIs) allow you to reserve virtual machines (VMs) in advance on a one or three-year term and save up to 80 percent versus pay-as-you go rates. RIs are ideal for workloads with predictable, consistent traffic.

Azure Advisor will analyze your last 30 days of VM usage and recommend purchasing RIs when it may provide cost savings. Advisor will show you the regions and VM sizes where you could save money and give you an estimate of your potential savings from purchasing RIs if your usage remains consistent with the previous 30 days.

Create Azure Service Health alerts

Azure Service Health is a free service that provides personalized guidance and support when Azure service issues might affect you. You can create Service Health alerts for any region or service so that you and your teams stay informed via the Azure portal, email, text message, or webhook notification when business-critical resources could be impacted.

Azure Advisor will identify your subscriptions that do not have Service Health alerts configured and recommend that you set up alerts on those subscriptions.

Azure Service Health alerts

Upgrade to a support plan that includes technical support

Azure technical support plans give you access to Azure experts when you need assistance. Azure offers a range of support options to best fit your needs, whether you’re a developer just starting your cloud journey or a large organization deploying business-critical applications.

Azure Advisor will identify subscriptions with a high amount of monthly Azure spend that are likely running strategic workloads and recommend upgrading your support plan to include technical support.

Configure your Traffic Manager profiles for optimal performance and availability

Azure Traffic Manager allows you to control the distribution of user traffic for service endpoints in different datacenters and optimize for performance and availability. Azure Advisor has added new recommendations to solve common configuration issues with Traffic Manager profiles.

Reduce DNS Time to Live

Time to Live (TTL) settings on your Traffic Manager profile allow you to specify how quickly to switch endpoints if a given endpoint stops responding to queries. Reducing the TTL value means that clients will be routed to functioning endpoints faster.

Azure Advisor will identify Traffic Manager profiles with a longer TTL configured and will recommend configuring the TTL to either 20 seconds or 60 seconds depending on whether the profile is configured for Fast Failover.

Traffic Manager profiles

Add or move one endpoint to another Azure region

If all endpoints in a Traffic Manager profile configured for proximity routing are in the same region, users from other regions may experience connection delays. Adding or moving an endpoint to another region will improve overall performance and provide better availability if all endpoints in one region fail.

Azure Advisor will identify Traffic Manager profiles configured for proximity routing where all the endpoints are in the same region and recommend that you either add or move an endpoint to another Azure region.

Add an endpoint configured to “All (World)”

If a Traffic Manager profile is configured for geographic routing, then traffic is routed to endpoints based on defined regions. If a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to “All (World)” will avoid traffic being dropped and improve service availability.

Azure Advisor will identify Traffic Manager profiles configured for geographic routing where there is no endpoint configured to have the Regional Grouping as “All (World)” and recommend making that configuration change.

Endpoint configuration

Add at least one more endpoint, preferably in another region

Traffic Manager profiles with more than one endpoint experience higher availability if any given endpoint fails. Placing these endpoints in different regions further improves service reliability.

Azure Advisor will identify Traffic Manager profiles where there is only one endpoint and recommend adding at least one more endpoint in another region.
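
An additional endpoint in another region can be added with the Azure CLI. The sketch below assumes an existing web app in a second region; all names and the subscription ID are placeholders:

# Register a second Azure endpoint with the profile
az network traffic-manager endpoint create \
  --name MySecondEndpoint \
  --profile-name MyTrafficManagerProfile \
  --resource-group MyResourceGroup \
  --type azureEndpoints \
  --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/mywebapp-westeurope"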

Get started with Azure Advisor

Visit the Azure Advisor webpage to learn more, and get started using Azure Advisor in the Azure portal. See the Azure Advisor documentation for assistance, and if you have any feedback, don’t hesitate to share it with us in the tool.

Azure App Service now supports Java SE on Linux

Lately there have been a whole lot of changes to Java and its vibrant communities. Now shared between Oracle for Java SE and the Eclipse Foundation for Jakarta EE (formerly Java EE), Java continues to be the leading programming language among developers and enterprises. With over 12 million Java developers worldwide and digital transformation top of mind for many IT organizations, Java is well-positioned to thrive in the cloud as modern application development trends continue.

With the sheer volume of Java apps in existence and soon to be developed, Java developers will benefit greatly from cloud services that enable fast, secure application development while saving time and cost. Coupled with vast geographic coverage, it is a cloud solution every developer should experience.

Today, Microsoft is pleased to announce that Azure App Service now supports Java SE 8 based applications on Linux, available in public preview. This release and subsequent timed releases, as well as upcoming LTS versions, will be supported for an extended period. Java web apps can now be built and deployed on a highly scalable, self-patching web hosting service where bug fixes and security updates are maintained by Microsoft. Additional performance features include scaling to support millions of users with multiple instances, applications, and regions through dynamic, intelligent scaling configurations.

Java Web Apps benefits

Let Azure App Service do all the heavy lifting commonly associated with enterprise-grade infrastructure so developers can focus on productivity. Take advantage of Microsoft Azure’s battle-tested infrastructure platform, spanning global security compliance to DevOps capabilities. The developer productivity benefits do not stop there. Java web apps provide the following benefits:

  • Fully managed enterprise platform – Log aggregation, email notifications, and Azure portal alerts. Version updates will soon include auto-patching.
  • Performance monitoring – Integrate with the Application Performance Management (APM) product of your choice to monitor and manage applications.
  • Global scale with high availability – 99.95% uptime with low latency, auto-scaling, or manual-scaling (up or out), anywhere in Microsoft’s global datacenters.
  • Security and compliance – App Service is ISO, SOC, PCI, and GDPR compliant.
  • Authentication and authorization – Built-in authentication with Azure Active Directory, governance with Role-Based Access Control (RBAC), and IP address restrictions.
  • Build automation and CI/CD support – Maven, Jenkins, and Visual Studio Team Services support will be available in the general availability release.

There are three ways of deploying Java web apps on Azure. You can create one from the Azure portal, use a template, or create and deploy from Maven. In this post, we will cover how to deploy a Spring Boot app using Maven.

Get started with Maven and Spring

To get started, clone your favorite Java web app or use this sample:

bash-3.2$ git clone

Add the Maven plugin for Azure Web Apps to the app project POM file and set server port to 80.

<build>
   <plugins>
      <plugin>
         <groupId>com.microsoft.azure</groupId>
         <artifactId>azure-webapp-maven-plugin</artifactId>
         <version>1.2.0</version>
         <configuration>
 
            <!-- Web App information -->
            <resourceGroup>${RESOURCE_GROUP}</resourceGroup>
            <appName>${WEBAPP_NAME}</appName>
            <region>${REGION}</region>
            <pricingTier>S1</pricingTier>
 
            <!-- Java Runtime Stack for Web App on Linux -->
            <linuxRuntime>jre8</linuxRuntime>
 
            <deploymentType>ftp</deploymentType>
            <!-- Resources to be deployed to your Web App -->
            <resources>
               <resource>
                  <directory>${project.basedir}/target</directory>
                  <targetPath>/</targetPath>
                  <includes>
                     <include>app.jar</include>
                  </includes>
               </resource>
            </resources>
            <appSettings>
               <property>
                  <name>JAVA_OPTS</name>
                  <value>-Djava.security.egd=file:/dev/./urandom</value>
               </property>
            </appSettings>
         </configuration>
     </plugin>
   </plugins>
   <finalName>app</finalName> 
</build>
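
Note that the POM above does not set the listening port itself. For a Spring Boot app, one minimal way to honor the "set server port to 80" step (assuming the conventional Spring Boot project layout; the path below is the standard one) is to append the setting to the app's configuration file:

bash-3.2$ echo "server.port=80" >> src/main/resources/application.properties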

Build, package, and deploy using Maven, just as you normally would:

bash-3.2$ mvn package azure-webapp:deploy
[INFO] Scanning for projects...
[INFO] 
[INFO] ----------------------------------------------------------------------
[INFO] Building petclinic 2.0.0
[INFO] ----------------------------------------------------------------------
[INFO] 
...
...
[INFO] --- azure-webapp-maven-plugin:1.2.0:deploy (default-cli) @ spring-petclinic ---
[INFO] Start deploying to Web App myjavase-07262018aa...
[INFO] Authenticate with Azure CLI 2.0
[INFO] Target Web App doesn't exist. Creating a new one...
[INFO] Creating App Service Plan 'ServicePlan1af9c8f0-3f71-43a8'...
[INFO] Successfully created App Service Plan.
[INFO] Successfully created Web App.
...
...
 
[INFO] Finished uploading directory: /Users/selvasingh/GitHub/selvasingh/spring-petclinic/target/azure-webapps/myjavase-07262018aa --> /site/wwwroot
[INFO] Successfully uploaded files to FTP server: waws-prod-bay-081.ftp.azurewebsites.windows.net
[INFO] Starting Web App after deploying artifacts...
[INFO] Successfully started Web App.
[INFO] Successfully deployed Web App at https://myjavase-07262018aa.azurewebsites.net
[INFO] ----------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ----------------------------------------------------------------------
[INFO] Total time: 03:06 min
[INFO] Finished at: 2018-07-13T18:03:22-07:00
[INFO] Final Memory: 139M/987M
[INFO] ----------------------------------------------------------------------

Open the site in your favorite browser.

Spring

And here it is, your first Java web app on Azure App Service. Nice job!

More friendly Java tools on the way

We understand that development tools are not one size fits all. We are making sure we continue to enrich our tool suite with tools that enhance productivity and efficiency. In the meantime, expect to see support for Gradle and Jenkins next.

Next steps

Running Java web apps in the cloud could not be any easier or faster. Give it a try and create your first Java web app in Azure App Service. Below are a few resources to help you get started.

If you don’t have an Azure subscription, create a free account today.

Security Center’s adaptive application controls are generally available

Azure Security Center provides several threat prevention mechanisms to help you reduce surface areas susceptible to attack. One of those mechanisms is adaptive application controls. Today we are excited to announce the general availability of this capability, which helps you audit and block unwanted applications.

Adaptive application controls help you define the set of applications that are allowed to run on configured groups of virtual machines (VMs). Enabling adaptive application controls for your VMs allows you to do a few things. First, it recommends applications (EXEs, MSIs, and scripts) for whitelisting, automatically clustering similar VMs to ease manageability and reduce exposure to unnecessary applications. It also applies the appropriate rules in an automated fashion, monitors any violations of those rules, and enables you to manage and edit previously applied application whitelisting policies.

Adaptive Application Controls

Adaptive Application Controls 2

By default, Security Center enables application control in Audit mode. After validating that the whitelist has not had any adverse effects on your workload, you can change the protection mode to Enforce mode through the Security Center management UI.

You can also change the application control policy for each configured group of VMs through the same Security Center management UI, edit and remove previously applied rules, and extend the rules to allow more applications to run in your workloads.

This feature is available in the standard pricing tier of Security Center, and you can try Security Center for free for the first 60 days.

To learn more about these features in Security Center, visit our documentation.

Orchestrating production-grade workloads with Azure Kubernetes Service

Happy Birthday Kubernetes! In the short three years that Kubernetes has been around, it has become the industry standard for orchestration of containerized workloads. In Azure, we have spent the last three years helping customers run Kubernetes in the cloud. As much as Kubernetes simplifies the task of orchestration, there’s plenty of setup and management that needs to take place for you to take full advantage of Kubernetes. This is where Azure Kubernetes Service (AKS) comes in. With Microsoft’s unique knowledge of the requirements of an enterprise, and our heritage of empowering developers, this managed service takes care of many of the complexities and delivers the best Kubernetes experience in the cloud.

In this blog post, I will dig into the top scenarios that Azure customers are building on Azure Kubernetes Service. After that, we will blow out the candles and have some cake.

If you are new to AKS, check out the Azure Kubernetes Service page and this video to learn more.

Lift and shift to containers

Organizations typically want to move to the cloud quickly, and it is often not possible to re-write applications to take full advantage of cloud-native features right from the beginning. Containerizing applications makes it much simpler to modernize your apps and move to the cloud in a frictionless manner while adding benefits such as CI/CD automation, auto-scale, security, monitoring, and much more. It also allows for simplified management at the data layer by taking advantage of Azure’s PaaS-based databases such as Azure Cosmos DB or Azure’s managed PostgreSQL service.

For example, I worked with a customer in the manufacturing industry who had many legacy Java applications sprawled throughout a high-cost datacenter. They were often unable to scale these applications to meet customer demand, and updates were cumbersome and unreliable. With Azure Kubernetes Service and containers, they were able to host many of these applications in a single managed service. This led to much higher reliability and the ability to ship new capabilities much more frequently.

The following diagram visually illustrates a typical lift and shift approach involving AKS. To get started with AKS, check out the following tutorial.
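
As a hedged sketch of those first steps (cluster and resource group names are placeholders), a basic cluster can be created and verified with the Azure CLI:

# Create a three-node AKS cluster
az aks create \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --node-count 3 \
  --generate-ssh-keys

# Merge the cluster credentials into kubeconfig and confirm the nodes are ready
az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
kubectl get nodes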

Lift and shift to containers

Microservices based cloud native applications

Microservices bring superpowers to many applications, with benefits such as:

  • Independent deployments
  • Improved scale and resource utilization per service
  • Smaller, focused developer teams
  • Focus code around business capabilities

Containers are the ideal technology to deliver microservices based applications. Kubernetes provides the much-needed orchestration layer to help organizations manage these distributed microservices apps at scale.

What Azure uniquely brings to the table is its native integration with developer tools, and its flexibility to plug into the best tools and services coming out of the Kubernetes ecosystem. It offers a comprehensive, yet simple, end-to-end experience for seamless Kubernetes lifecycle management on Azure. Since microservices are polyglot in nature across languages, processes, and tooling, Azure’s focus on developer and ops productivity attracts companies like Siemens, Varian, and Tryg to run microservices at scale with AKS. My customers running AKS also found the capabilities below helpful for development, deployment, and management of their microservices-based applications:

  • Azure Dev Spaces support to iteratively develop, test, and debug microservices targeting AKS clusters.
  • Automating external access to services with HTTP application routing.
  • Using ACI Connector, a Virtual Kubelet implementation, to allow for fast scaling of services with Azure Container Instances.
  • Simplifying CI/CD with Azure DevOps Projects and open source tools such as Helm, Draft, and Brigade, all backed by a secure, private Docker repository in Azure Container Registry.
  • Supporting Service Mesh solutions such as Istio or Linkerd to easily add complex network routing, logging, TLS security, and distributed tracing to distributed apps.

The image below shows how some of the elements called out above fit in the overall scenario.

Microservices based cloud native applications

This blog has a good run down of developing microservices with AKS. If you are looking to get more hands-on and build microservices with AKS, the following four part blog-tutorial provides good end-to-end coverage.

IoT Edge deployments

IoT solutions such as SmartCity, ConnectedCar, and ConnectedHealth have enabled many diverse applications connecting billions of devices to the cloud. With advances in computing power, these IoT devices are becoming more and more powerful. Of course, IoT application development poses some challenges as well. For instance, crafting, maintaining, and updating a robust IoT solution is time-consuming. Such solutions also face a higher degree of difficulty when it comes to maintaining cohesive security in a distributed environment. Device incompatibility with existing infrastructure and challenges in scaling further compound IoT solution development.

Azure provides a robust set of capabilities to address these IoT challenges. More specifically, Azure IoT Edge was created to help customers run custom business logic and cloud analytics on edge devices so that the focus of the devices can be on business insights instead of data management.

At Azure, we are seeing customers utilize the power of AKS to bring containers and orchestration to help manage this IoT Edge layer. Customers can combine AKS with the IoT Edge connector, a Virtual Kubelet implementation, to help provide:

  • Consistency between cloud and edge software configuration.
  • Applying identical deployments across multiple IoT hubs.

With AKS and the IoT Edge connector, the configuration can be defined in a Kubernetes manifest and then simply and reliably deployed to all IoT devices at the edge with a single command. The simplicity of a single manifest to manage all IoT hubs helps customers deliver and manage IoT applications at scale. For example, consider the challenges involved in deploying and managing devices across different regions. AKS, along with IoT Hub and the IoT Edge connector, makes these deployments simple. The graphic below illustrates this IoT scenario involving AKS and the IoT Edge connector.

IoT Edge deployment

Check out this blog for more information on managing IoT Edge deployments with Kubernetes.

Machine learning at scale

Though machine learning is immensely powerful, using it in practice is by no means easy. Machine learning in practice often involves training and hosting models, which typically requires data scientists to adapt their code to work in different environments and to deploy it at different scales. Additionally, once the model is running in production on large-scale clusters, lifecycle management becomes increasingly difficult. Configuration and deployment are often left to data scientists, which results in their time being consumed by infrastructure setup instead of data science itself.

AKS can help address these challenges faced with training and hosting ML models and the lifecycle management workflows.

  • For training, AKS can help ensure GPU resources, designed for compute-intensive, high-scale workloads, are available on demand and scaled down when not needed. This becomes critical when a group of data scientists are all working on various projects and require resources on very diverse schedules. This also allows for faster training cycles by enabling strategies such as distributed training and hyperparameter optimization.
  • For hosting, AKS brings DevOps capabilities to machine learning models. These models can be upgraded more easily using the rolling upgrades capability, and strategies such as blue/green or canary deployments can be easily applied.
  • Using containers also brings much higher consistency across test, dev, and production environments. Also, self-healing capabilities dramatically improve reliability of the execution.

A possible scenario involving AKS for machine learning models is shown in the image below.

Machine learning scenario

Learn more about running AKS with GPU enabled VMs for compute-intensive, high-scale workloads such as machine learning. 
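
As an illustrative sketch (names are placeholders, and Standard_NC6 is just one example of an NVIDIA GPU VM size), a GPU-backed cluster for training workloads can be requested by choosing a GPU VM size at creation time:

# Create a single-node AKS cluster backed by GPU VMs
az aks create \
  --resource-group MyResourceGroup \
  --name MyGpuCluster \
  --node-vm-size Standard_NC6 \
  --node-count 1 \
  --generate-ssh-keys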

What next?

Are you using or planning to use AKS to implement one of these scenarios? Or have a different scenario in mind where AKS can help address a key challenge? Connect with us to share and discuss your use cases.

To get you started, we also put together a simple demo for you to get familiar with Azure Kubernetes Service, so check it out. We also have a webinar coming up that walks you through the end-to-end Kubernetes development experience on Azure. Sign up to get started today.

Marketplace news from Inspire 2018

Did you miss Inspire in Las Vegas this month? Here is your quick recap of what announcements we made regarding Microsoft’s cloud marketplace.

Marketplace continues to be a big investment and focus area for Microsoft, reinforcing our commitment to making partner business opportunities the centerpiece of our efforts. To extend these investments, at Inspire we announced new features and functionality in our marketplace, including empowering P2P joint selling, extended private offer capabilities, expanded geographic availability of consulting offers, and new integrated solutions.

A quick recap of top announcements can be found in this previous blog post.

Got a little more time to spare and want to go deeper? Check out some of the key marketplace sessions and content available on-demand.

  • Grow your business with AppSource and Azure Marketplace
    The nature of cloud software sales is rapidly changing. Customers are increasingly using online marketplaces to find, try, and buy solutions. Learn about these trends in customer buying behavior and how you can grow your business with joint Go-To-Market and Co-Sell with Microsoft cloud marketplaces.
  • Best practices for successful GTM in Azure Marketplace and AppSource
    So you’ve published an offer in Microsoft’s marketplace, now what? Join this session to hear how to optimize your marketplace presence for true sales lead generation and deal conversion. Learn about the programs and resources available to you to help improve your offer’s performance in marketplace and hear best practices from partners.
  • Grow your PaaS or SaaS business in Azure Marketplace or AppSource
    ISVs, SIs, and MSPs are invited to join us and learn how to drive awareness, acquisition, and co-selling across your *aaS solution portfolio via AppSource and Azure Marketplace. We provide an overview of the benefits, resources, and strategies available to publishers, as well as best practices for leveraging the cloud marketplaces’ list, trial, and transact capabilities to amplify and promote your solutions.
  • Optimize your Microsoft Marketplace listing to attract new customers
    Let us show you how to put your business front and center by creating a great listing that showcases your value. Discover how Microsoft marketplace can help you attract customers and grow your business. Explore tips on optimization and leave with action items on how to better position your listing to increase your success.

To learn more, you can check out the full catalog of recorded sessions and content from Inspire 2018 including keynote sessions.

How to move your e-commerce infrastructure to Azure

Retailers everywhere have started moving parts of their business to Azure. When looking at moving more of the data center into Azure, teams can see both the current state and the end state. In the end state, they see all the possibilities: AI, chatbots, advanced analytics, elastic scalability, and more. They also see how many of the services they use already run in the cloud. Databases, marketing tools, supply chain management, retail optimization platforms, and more are waiting for you once you make the move.

Working with people across Microsoft to understand what retailers go through to move from on-premises to the cloud, I learned a number of important things about what customers face when migrating. At the start, retailers take baby steps: they rehost their applications in virtual machines. During that phase, they get comfortable with just using Azure. Next, they realize that maintenance of the infrastructure services could be eliminated if they switch from virtual machines to services. That refactoring frees up a lot of maintenance time, so the retailer looks at doing more: rebuilding parts of the solution to be cloud native. This effort resulted in an article on Migrating your e-commerce solution to Azure.

In the article, I run through the options and decisions that most retailers face when planning a migration to Azure. For example:

  • Do you simply rehost your application in the cloud as-is? You immediately save the cost of infrastructure.
  • Or do you consider application refactoring? A switch to Azure PaaS (and even SaaS) increases the ability to integrate new services—those that expand capability, performance and scalability.

Recommended next steps

This journey requires a lot of planning, taking inventory of what you already own, and learning. Regardless of where you are in this process, read Migrating your e-commerce solution to Azure overview to continue your journey moving your e-commerce solution to Azure.


Avoid Big Data pitfalls with Azure HDInsight and these partner solutions

According to a Gartner 2017 prediction, “60 percent of big data projects will fail to go beyond piloting and experimentation, and will be abandoned.”

Whether you have worked on an analytics project or are starting one, it is a challenge on any cloud. You need to juggle the intricacies of cloud provider services, open source frameworks, and the apps in the ecosystem. Apache Hadoop and Spark are vibrant open source ecosystems that have enabled enterprises to digitally transform their businesses using data. According to Matt Turck, VC at FirstMark, it has been an exciting but complex year in the data world. “The data tech ecosystem has continued to fire on all cylinders. If nothing else, data is probably even more front and center in 2018, in both business and personal conversations.”

However, with great power comes greater responsibility from the ecosystem. A successful project takes a lot more than just using open source or a managed platform. You have to deal with:

  • The complexity of combining all the open source frameworks.
  • Architecting a data lake to get insights for data engineers, data scientists and BI users.
  • Meeting enterprise regulations such as security, access control, data sovereignty, and governance.
  • Handling business continuity and disaster recovery.
  • Choosing and vetting apps from the ecosystem and running this entire solution reliably.

As you can imagine, this is a daunting task, and projects often fail at deployment time, customers are unable to get insights from deployed systems due to a lack of expertise, or the projects don’t meet the requirements of enterprise IT and are shut down.

So how do you escape this Gartner 2017 statistic?

Power of Azure, Open Source and partners

Big data and the analytical application lifecycle span a number of steps, including ingestion, preparation, storage, processing, analysis, and visualization. All of these steps need to meet enterprise requirements around governance, access control, monitoring, security, and more. Stitching together an application that comprises everything is a complicated task. As explained above, this is the biggest reason big data applications fail.

With the Azure HDInsight Application Platform we have solved this problem by working closely with a selected set of ISVs to certify their solutions with Azure HDInsight and other analytical services, so customers can deploy them with a single click. Applications are deployed natively with the cluster, so they can meet the existing enterprise setup around network security and access control policies. You can try these applications before you decide to purchase them, which gives you the flexibility to evaluate partners before signing a deeper engagement.

Microsoft has always cared deeply about partners and customers, and this platform natively integrates partner solutions with foundational Microsoft Azure services such as monitoring, security, authentication, and encryption, allowing customers to seamlessly use the entire spectrum of Azure services, open source frameworks, and partner solutions to get insights from data faster.

Peter Scott, SVP of Business Development WANdisco describes the value, “Azure HDInsight Application is an excellent choice for partners to get the highest ROI for their analytics investment. The one-click deploy experience has made our product more discoverable, and removes the guesswork and friction around discovering, installing and integrating with existing enterprise environments. Combine this with WANdisco Fusion and a LiveData capability, customers are perfectly positioned to realize data resiliency and guaranteed data accessibility on their hybrid cloud journey”.

Thus, you get the best of all three worlds: the strength of the Azure platform, the flexibility of a managed OSS platform with Azure HDInsight, and the best of our partner ecosystem, which makes BI professionals, data stewards, and engineers more productive while providing the governance and protection demanded by your IT administrators.

Let’s dive into each of these challenging areas and how partners are helping solve them.

Pick a task, pick a partner

Make ingesting data easier

Any analytical solution starts with ingesting data from various sources for analytics. Big data is all about volume, variety, and velocity. As streaming and batch data change, maintaining SLAs around data quality is a challenge.

StreamSets provides a full-featured integrated development environment (IDE) that lets you design, test, deploy, and manage any-to-any ingest pipelines that mesh stream and batch data, and include a variety of in-stream transformations - all without having to write custom code. Try StreamSets on HDInsight.

The Striim™ platform makes it easy to integrate, analyze, and visualize streaming data across cloud, big data, and IoT devices, helping you make smart and timely operational decisions. Try Striim on HDInsight.

Simplify data prep

Extract-transform-load (ETL) processes are fairly time consuming. The challenge with data prep is combining data from various sources, exploring the quality of the data, merging schemas, removing bad data, and so on. Due to a lack of expertise in open source frameworks, customers end up writing lots of ETL scripts. Here’s how partners help simplify data prep.

Paxata's Adaptive Information Platform enables any user to gain insights from their data faster. Business users can combine unstructured and structured data from various sources, prepare the data, and analyze it. Try Paxata on HDInsight.

Trifacta is a data wrangling solution for big data allowing you to easily transform and enrich raw, complex data into clean and structured formats for the purpose of exploratory analytics. Try Trifacta on HDInsight.

Use analytics & AI to transform your business

Continuing on the process of a big data application, a data scientist can do machine learning or deep learning. This involves collaborating in a team and using industry’s most popular open source libraries. Due to the complexity of the ecosystem, installing and configuring the toolsets available is challenging for novice users.

Dataiku provides Data Science Studio (DSS), the collaborative data science platform that enables professionals (data scientists, data engineers, etc.) to collaborate on building analytical solutions. DSS has an easy to use team-based interface for data scientists and beginner analysts. Try Dataiku on HDInsight.

H2O's AI platform is an open source machine learning platform that works with Spark 2.0+, sparklyr, and PySpark. H2O Sparkling Water allows users to combine the fast, scalable machine learning algorithms of H2O with the capabilities of Spark. Try H2O.ai on HDInsight.

KNIME Analytics Platform is the leading open solution for data-driven innovation, designed for discovering the potential hidden in data, mining for fresh insights, or predicting new futures. Organizations can take their collaboration, productivity and performance to the next level with a robust range of commercial extensions to our open source platform. Try KNIME on HDInsight.

Serve up new business insights

BI over a data lake is hard since traditional tools don’t work with unstructured or streaming data. To complete the BI journey, customers must move the data from a data lake to a relational store. For big data customers who have petabytes of data, operationalizing this data movement is challenging.

AtScale is a BI-on-Hadoop software solution. It allows business users to enjoy multi-dimensional and ad-hoc analysis capabilities on Hadoop data without any movement or client drivers, at OLAP speed and directly from standard BI tools like Microsoft Excel, Power BI, and Tableau. Try AtScale on HDInsight.

Kyligence Analytics Platform is an OLAP solution on Hadoop powered by open source Apache Kylin. Try Kyligence on HDInsight.

Stay safe with robust data governance

Data cataloguing/governance is a key ask from enterprises. This allows them to discover data, define access control and monitor patterns for data security. It also allows users to easily discover data sets across an organization. However, building up such a system is challenging because of data silos across an organization.

Waterline Data provides a data catalog solution that enables a governed data lake as well as an automated data catalog offered as a shared service across multiple data prep, data discovery, and exploratory analytics tools. As a result, Waterline Data gives the business the agility and speed to find the best-suited data quickly without manual exploration and coding - staying above the waterline of the data lake - while enabling IT to provide a data layer that is governed and can stay ahead of the need for self-service access to data.

Tune performance for best ROI

The inherent complexity of big data systems, disparate sets of tools for monitoring, and a lack of expertise in optimizing these open source frameworks create significant challenges for end users who are responsible for guaranteeing SLAs.

Unravel provides comprehensive application performance management (APM) for big data. The application helps customers analyze, optimize, and troubleshoot application performance issues and meet SLAs in a seamless, easy-to-use, and frictionless manner. Some customers report running up to 200 percent more jobs at 50 percent lower cost using Unravel’s tuning capability on HDInsight. See the getting started guide for Unravel on HDInsight.

Hybrid big data and globally replicated data lakes

Connecting on-premises big data applications to the cloud has always been a challenge. Customers have to think about constantly changing data and metadata (schemas of tables in Hive, authorization policies in Ranger or Sentry, etc.) for applications. Connecting on-premises to cloud without any downtime is challenging and increases the time to market for customers.

WANdisco provides live replication of selected data and metadata at scale between multiple big data and cloud environments. With guaranteed data consistency and continuous availability, HDInsight customers can easily set up hybrid environments and a disaster recovery solution. Try WANdisco on HDInsight and learn how to replicate data for hybrid and disaster recovery solutions.

Customer or partner, there’s no better time to start!

If you are a customer, there is no better time to include a partner in the architecture and build out of your solution. They facilitate and accelerate your path to production. You will reap savings through their knowledge, experience and specialization.

Know that the Azure HDInsight Application Platform is one of the faster-growing partner ecosystems in Azure; partner contribution has increased over 200 percent in the last year. Customers are discovering partner solutions organically, and this platform is driving more awareness of the ecosystem among customers. There is strong momentum of customers moving their Hadoop and Spark solutions to the cloud, and with the 50-plus percent price cut on HDInsight there is more opportunity than ever to innovate.

If you are a potential partner and think your company could help complete Microsoft’s analytical offerings, there has never been a better time to partner with Microsoft and help customers be successful in their analytical journey. Come join us! Contact bigdatapartners@microsoft.com.

Current use cases for machine learning in healthcare

Machine learning (ML) is causing quite the buzz at the moment, and it’s having a huge impact on healthcare. Payers, providers, and pharmaceutical companies are all seeing applicability in their spaces and are taking advantage of ML today. This is a quick overview of key topics in ML, and how it is being used in healthcare.

A machine learning model is created by feeding data into a learning algorithm. The algorithm is where the magic happens. There are algorithms to predict a patient’s length of stay based on diagnosis, for example. Someone had to write that algorithm and then train it with true and reliable data. Over time, the model can be re-trained with newer data, increasing the model’s effectiveness.

Machine learning on Azure

Machine learning is a subset of artificial intelligence (AI). AI can be thought of as using a computer system to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. I won’t go into more detail on the distinction, but here are some resources to help you get started.

Examples of machine learning in healthcare

Current examples of initiatives using AI include:

Microsoft is continuing to commit resources to making healthcare more effective through ML. There are several programs and publicly available resources including:

All these initiatives are driven by algorithms developed by researchers, data scientists, developers and others. The accuracy of prediction or recognition depends on two factors: the data and features used to train the model, and the algorithm used to learn from that data. That’s why people in the ML/AI space are so interested in the many algorithms that can be used today.

Supervised and unsupervised learning

There are two types of learning algorithms: supervised and unsupervised. Supervised learning algorithms make predictions based on a set of examples. For instance, historical stock prices can be used to hazard guesses at future prices. Each example used for training is labeled with the value of interest—in this case the stock price. A supervised learning algorithm looks for patterns in those value labels. Supervised data makes predictions more precise because the model is being fed correct answers to learn about expected results.

In unsupervised learning, data points have no labels associated with them. Instead, the goal of an unsupervised learning algorithm is to organize the data in some way or to describe its structure. This can mean grouping it into clusters or finding different ways of looking at complex data so that it appears simpler or more organized. This form of training is less specific, and the people analyzing the output may not even know the right answers themselves. That said, unsupervised learning can provide great benefits when an algorithm is tuned properly to fill in the blanks. Algorithm tuning is a process of trial and error, facilitated by the Azure learning platform tools. Using experts to evaluate the results is also critical.

But the benefits are huge when looking at trends in a population. Take this question: where in the United States do most people with multiple sclerosis live? This can lead to questions like, “Why is this?” With ML, one can potentially find insights not observed through other business intelligence approaches.

Personally identifiable information (PII)

The data that comes from healthcare products and services like electronic health records can contain personally identifiable information (PII). Special consideration needs to be given to how an organization will use and treat PII data in a machine learning solution.

Usage example: diagnostic radiology

Consider the job of a diagnostic radiologist. These physicians spend a lot of time analyzing image after image to identify anomalies in patients and much more. They are often critical in making a diagnosis, and their decisions are based on what they find—for example, identifying a tumor.

AI can be used to assist a diagnostic radiologist. For example, Project InnerEye describes itself this way:

Project InnerEye develops machine learning techniques for the automatic delineation of tumors as well as healthy anatomy in 3D radiological images. This enables extraction of targeted radiomics measurements for quantitative radiology, fast radiotherapy planning, and precise surgery planning and navigation. In practice, Project InnerEye turns multi-dimensional radiological images into measuring devices.

The software assists the radiologist by automatically tracing the outline of a tumor. Radiology produces a large number of scans of an area (e.g. top to bottom of a brain). The radiologist typically goes through each scan and traces the outline of the tumor. After this is done, a 3D composite of the tumor can be produced. This task takes hours. Using ML, Project InnerEye does this in minutes.

Usage example: using machine learning to predict outcomes

Other very practical information can be forecast using ML and AI. For example, predicting the patient’s likely duration of stay in a hospital is a form of predictive analysis.

Predictive analytics helps nurses take better care of patients and of themselves.

Wrapping up

Diagnostic and predictive analysis are today’s main benefits, but work is underway to take advantage of ML/AI in other medical problem spaces. Pharmaceutical and insurance companies are keen on ML right now because it helps them manage their data and identify core KPIs.

These technologies will fundamentally change healthcare. But with heavy regulations around the healthcare space, we’ll see incremental adoption of ML/AI as new benefits are uncovered, and new algorithms developed.

Recommended next steps

Additional Resources

Feeding IoT Device Telemetry Data to Kafka-based Applications

With the newly released support for Kafka streams in Event Hubs, it is now possible for Azure IoT Hub customers to easily feed their IoT device telemetry data into Kafka-based applications for further downstream processing or analysis. This gives customers with existing Kafka-based applications the added flexibility of faster adoption of Azure IoT Hub, without the need to rewrite any parts of their applications upfront. This means that customers can start using IoT Hub's native support for messaging, device and configuration management early, and defer the decision to migrate their telemetry processing applications to natively use Event Hubs at a later time.

Applicable Customer Scenarios

The ability to feed IoT device telemetry data into Kafka-based processing applications is valuable in several scenarios:

  • A primary scenario involves a prospective Azure IoT Hub customer with legacy data processing applications that are already written to interface with Kafka clusters. With the support for Kafka in Event Hubs, the customer can defer the need to make upfront changes to such applications as part of onboarding to Azure IoT Hub. The new feature enables a faster IoT Hub adoption cycle for the customer with lower upfront development costs.
  • A secondary scenario involves a customer who would like to keep an existing Kafka-based telemetry processing application as is, perhaps to use it for processing telemetry data emitted as Kafka streams by other disparate sources (e.g., non-Azure-IoT-Hub-managed devices). Similarly, the new feature benefits the customer by reducing churn in existing Kafka-based applications.
  • A final scenario involves a customer who is simply evaluating Azure IoT Hub's capabilities for future adoption, and would typically like to undertake a minimal set of changes needed to validate IoT Hub's capabilities without making significant changes in any external periphery data processing system.

    Using Kafka-based Applications with IoT Hub Telemetry

    As shown in the diagram below, IoT Hub acts as a conduit for the device telemetry data. This data is persisted in Event Hubs, and ultimately consumed by downstream applications. To enable your Kafka-based application to retrieve this data, you need to add a Kafka-enabled Event Hub as an endpoint/route to IoT Hub, and configure your Kafka-based application with the connection string of your Event Hub where the data is stored. These steps are described in more detail below.

    1. Follow this guide to create a new Event Hubs namespace. Ensure the "Enable Kafka" option is selected during the creation process.
    2. You now need to add your Event Hub as a custom endpoint in your IoT Hub using the Azure portal or CLI. To use the portal, your IoT Hub and Event Hub must be in the same region and under the same subscription. Go to your IoT Hub dashboard page in the portal, open the "Endpoints" tab, and click "Add". Enter a name for your endpoint, and select "Event Hub" under "Endpoint Type". Under "Event Hub Namespace" select the namespace created in Step 1, and select the Event Hub name. Click "OK" to create the endpoint. Alternatively, if your Event Hub and IoT Hub are in different regions or under different subscriptions, you can use the IoT Hub CLI to add your Event Hub as a custom endpoint in IoT Hub. In that case, install the Azure CLI with the IoT Hub extension and run the following command (substitute your information, and ensure that your Event Hubs connection string has the Event Hub name specified as EntityPath).
      az iot hub update -g [YOUR_IOT_HUB_RESOURCE_GROUP] -n [YOUR_IOT_HUB_NAME] --add properties.routing.endpoints.eventHubs connectionString='Endpoint=sb://[YOUR_EVENT_HUBS_NAMESPACE_FQDN];SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[YOUR_EVENT_HUBS_KEY];EntityPath=[YOUR_EVENT_HUB_NAME_FROM_STEP_1]' name=[YOUR_ENDPOINT_NAME] subscriptionId=[YOUR_IOT_HUB_SUBSCRIPTION_ID] resourceGroup=[YOUR_IOT_HUB_RESOURCE_GROUP]
    3. Next, you will add a route so that IoT Hub persists the telemetry data emitted from devices to your Kafka-enabled Event Hub endpoint (from Step 2). For this purpose, go to your IoT Hub dashboard on the portal, and open the "Routes" tab. Click "Add", and enter a name for your route. Under "Data source" select "Device Messages", and subsequently select your endpoint created in Step 2. Enter `true` in the "Query string" box so that all messages from your devices match this route. At the end, click "Save" to save your new route. Alternatively, you can use the IoT Hub CLI as follows to add a new route (substitute [YOUR_ENDPOINT_NAME] with the endpoint name you used in step 2).
      az iot hub update -g [YOUR_IOT_HUB_RESOURCE_GROUP] -n [YOUR_IOT_HUB_NAME] --add properties.routing.routes "{'condition':'true', 'endpointNames':['[YOUR_ENDPOINT_NAME]'], 'isEnabled':True, 'name':'[YOUR_ROUTE_NAME]', 'source':'DeviceMessages'}"
    4. The next step for enabling consumption of device telemetry data in your Kafka-based application is to update its connection string to that of the Event Hub you created previously and added to IoT Hub. You can use the QuickStart Kafka consumer code available here. Note that you will need to install a number of prerequisites as outlined here. Assuming that you cloned the Java QuickStart code (https://github.com/Azure/azure-event-hubs), follow the steps below to configure, compile, and run the code (a consolidated consumer.config sketch appears after this list):
      • Update bootstrap.servers=[YOUR_EVENT_HUBS_NAMESPACE_FQDN] in azure-event-hubs/samples/kafka/quickstart/consumer/src/main/resources/consumer.config with your Event Hubs namespace FQDN (the FQDN is normally in this format: [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093).
      • Update sasl.jaas.config and set password="[YOUR_EVENT_HUBS_CONNECTION_STRING]" to your Kafka-enabled Event Hub's connection string from step 1.
      • Update the TOPIC constant defined in azure-event-hubs/samples/kafka/quickstart/src/main/java/com/example/app/TestConsumer.java and set it to your hub's name in Event Hubs (a hub's name in Event Hubs is a counterpart to Kafka topics).
      • On a terminal, compile the code using mvn clean package.
      • On a terminal, run the consumer code using mvn exec:java -Dexec.mainClass="TestConsumer". The consumer will periodically poll your Event Hubs for events and print them out on the console.
    5. Finally, you can use any of IoT Hub's sample codes in Java, Node.js, Python, or .NET to send device-to-cloud telemetry messages into your IoT Hub. These messages will be routed to your Event Hubs endpoint and can be consumed by your Kafka consumer. The flow of events from an IoT device into the Kafka-based application is depicted in the figure below.
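
    Pulling the step 4 settings together, a completed consumer.config might look like the following sketch. The namespace and connection string are placeholders; the username is the literal string $ConnectionString, which is how the Event Hubs Kafka endpoint expects SASL PLAIN credentials to be passed:

    bootstrap.servers=[YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="[YOUR_EVENT_HUBS_CONNECTION_STRING]";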

    Azure IoT Hub device telemetry feed into Kafka-based applications

    How to enhance HDInsight security with service endpoints

    HDInsight enterprise customers work with some of the most sensitive data in the world and want to be able to lock down access to it at the networking layer as well. However, while service endpoints have been available for Azure data sources, HDInsight customers couldn’t leverage this additional layer of security for their big data pipelines due to the lack of interoperability between HDInsight and other data stores. As we recently announced, HDInsight now supports service endpoints for Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB.

    With this enhanced level of security at the networking layer, customers can now lock down their big data storage accounts to their specified Virtual Networks (VNETs) and still use HDInsight clusters seamlessly to access and process that data.

    In the rest of this post we will explore how to enable service endpoints and point out important HDInsight configurations for Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB.

    Azure Blob Storage

    When using Azure Blob Storage with HDInsight, you can configure selected VNETs in the blob storage firewall settings. This ensures that only traffic from those subnets can access the storage account.

    It is important to check "Allow trusted Microsoft services to access this storage account." This ensures that the HDInsight service has access to the storage account and can provision the cluster seamlessly.
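
    The same firewall configuration can be scripted with the Azure CLI. This is an illustrative sketch (account, VNET, and subnet names are placeholders); the --bypass AzureServices flag corresponds to the "Allow trusted Microsoft services" checkbox:

    # Allow the cluster's subnet through the storage firewall
    az storage account network-rule add \
      --resource-group MyResourceGroup \
      --account-name mystorageaccount \
      --vnet-name MyVNet \
      --subnet MySubnet

    # Deny all other traffic, but keep trusted Microsoft services allowed
    az storage account update \
      --resource-group MyResourceGroup \
      --name mystorageaccount \
      --default-action Deny \
      --bypass AzureServices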

    Blob storage service endpoint settings

    If the storage account is in a different subscription than the HDInsight cluster, please make sure that the HDInsight resource provider is registered with the storage subscription. To learn more about how to register or re-register resource providers on a subscription, see additional resource providers and types. If the HDInsight resource provider is not registered properly, you might get an error message, which can be solved by registering the resource provider.

    NOTE: The HDInsight cluster must be deployed into one of the subnets allowed in the blob storage firewall, or the VNETs must be peered. This ensures that the traffic from cluster VMs can reach the storage account.

    Azure SQL DB

    If you are using an external SQL database for the Hive or Oozie metastore, you can configure service endpoints there as well. “Allow access to Azure services” is not a required step from an HDInsight point of view, since access to these databases happens after the cluster is created and the VMs are injected into the VNET.
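
    A VNET rule for the SQL server can also be added from the Azure CLI. This is a hedged sketch; the server and rule names, the subscription ID, and the subnet resource ID are placeholders:

    # Allow the cluster's subnet through the SQL server firewall
    az sql server vnet-rule create \
      --resource-group MyResourceGroup \
      --server mysqlserver \
      --name AllowHDInsightSubnet \
      --subnet "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet/subnets/MySubnet"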

    SQL DB service endpoint settings

    NOTE: The HDInsight cluster must be deployed into one of the subnets allowed in the SQL DB firewall, or the VNETs must be peered. This ensures that the traffic from cluster VMs can reach the SQL database.

    Azure Cosmos DB

    If you are using the Spark connector for Azure Cosmos DB, you can enable service endpoints in the Cosmos DB firewall settings and seamlessly connect to it from your HDInsight cluster.
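
    As a hedged sketch (recent versions of the Azure CLI; account, VNET, and subnet names are placeholders), a VNET rule can be added to the Cosmos DB account like this:

    # Allow the cluster's subnet through the Cosmos DB firewall
    az cosmosdb network-rule add \
      --resource-group MyResourceGroup \
      --name mycosmosaccount \
      --vnet-name MyVNet \
      --subnet MySubnet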

    Cosmos DB service endpoint settings

    NOTE: The HDInsight cluster must be deployed into one of the VNETs allowed in the Cosmos DB firewall, or the VNETs must be peered. This ensures that the traffic from cluster VMs can reach the Cosmos DB account.

    Try HDInsight now

    We hope you take full advantage of today’s announcements, and we are excited to see what you will build with Azure HDInsight. Read the developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up to date on the latest Azure HDInsight news and features by following us on Twitter at #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

    About HDInsight

    Azure HDInsight is Microsoft’s premium managed offering for running open source workloads on Azure. Today, we are excited to announce several new capabilities across a wide range of OSS frameworks.

    Azure HDInsight powers mission-critical applications for some of the top customers in a wide variety of sectors, including manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunications, insurance, and many more, with use cases ranging from ETL to data warehousing and from machine learning to IoT.
