
Performance troubleshooting using new Azure Database for PostgreSQL features


At Ignite 2018, in response to customer feedback, Microsoft announced the preview of Query Store (QS), Query Performance Insight (QPI), and Performance Recommendations (PR) for Azure Database for PostgreSQL to help ease performance troubleshooting. This blog intends to inspire ideas on how you can use these currently available features to troubleshoot some common scenarios.

A previous blog post on performance best practices touched upon the layers at which you might be experiencing issues, based on the application pattern you are using. That post neatly categorizes the problem space into several areas, along with common techniques for ruling out possibilities and quickly getting to the root cause. We would like to expand further on this with the help of these newly announced features (QS, QPI, and PR).

In order to use these features, you will need to enable data collection by setting pg_qs.query_capture_mode and pgms_wait_sampling.query_capture_mode to ALL.
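If you prefer scripting to the portal, these server parameters can also be set with the Azure CLI. The following is a minimal sketch; the resource group and server names are placeholders, and the exact syntax should be verified against the current CLI documentation.

# Enable Query Store data collection (resource group and server names are placeholders)
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pg_qs.query_capture_mode --value ALL

# Enable wait statistics sampling for Query Store
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pgms_wait_sampling.query_capture_mode --value ALL

Both parameters can also be changed from the Server parameters blade in the Azure portal.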

New features on Azure Database for PostgreSQL

You can use Query Store for a wide variety of scenarios; enabling data collection helps you troubleshoot each of them more effectively. In this article, we will limit the scope to the regressed queries scenario.

Regressed queries

One of the important scenarios that Query Store enables you to monitor is regressed queries. By setting pg_qs.query_capture_mode to ALL, you get a history of your query performance over time. You can leverage this data to do simple or more complex comparisons based on your needs.

One of the challenges you face when generating a regressed query list is the selection of comparison period in which you baseline your query runtime statistics. There are a handful of factors to think about when selecting the comparison period:

  • Seasonality: Does the workload or the query of your concern occur periodically rather than continuously?
  • History: Is there enough historical data?
  • Threshold: Are you comfortable with a flat percentage change threshold or do you require a more complex method to prove the statistical significance of the regression?

Now, let’s assume no seasonality in the workload and that the default seven days of history will be enough to evaluate a simple threshold of change to pick regressed queries. All you need to do is to pick a baseline start and end time, and a test start and end time to calculate the amount of regression for the metric you would like to track.

Looking at the past seven days of history compared to the last two hours of execution, the function below returns the top regressed queries in descending order of percentage change. Note that negative values in the result set indicate an improvement from the baseline to the test period, while a value of zero means the query was either unchanged or was not executed during the baseline period.

create or replace function get_ordered_query_performance_changes(
baseline_interval_start int,
baseline_interval_type text,
current_interval_start int,
current_interval_type text)
returns table (
     query_id bigint,
     baseline_value numeric,
     current_value numeric,
     percent_change numeric
) as $$
with data_set as (
select query_id
, round(avg( case when start_time >= current_timestamp - ($1 || $2)::interval and start_time < current_timestamp - ($3 || $4)::interval then mean_time else 0 end )::numeric,2) as baseline_value
, round(avg( case when start_time >= current_timestamp - ($3 || $4)::interval then mean_time else 0 end )::numeric,2) as current_value
from query_store.qs_view where query_id != 0 and user_id != 10 group by query_id ) , 
query_regression_data as (
select *
, round(( case when baseline_value = 0 then 0 else (100*(current_value - baseline_value) / baseline_value) end )::numeric,2) as percent_change 
from data_set ) 
select * from query_regression_data order by percent_change desc;
$$
language 'sql';

If you create this function and execute the following, you will get the top regressed queries in the last two hours in descending order compared to their calculated baseline value over the last seven days up to two hours ago.

select * from get_ordered_query_performance_changes (7, 'days', 2, 'hours');

Query performance

The top changes are all good candidates to go after, unless you expect that kind of delta from your baseline period because, say, you know the data size would change or the volume of transactions would increase. Once you have identified the query you would like to investigate further, the next step is to look deeper into the Query Store data, see how the baseline statistics compare to the current period, and collect additional clues.

create or replace function compare_baseline_to_current_by_query_id(baseline_interval_cutoff int,baseline_interval_type text,query_id bigint,percentile decimal default 1.00)
returns table(
     query_id bigint,
     period text,
     percentile numeric,
     total_time numeric,
     min_time numeric,
     max_time numeric,
     rows numeric,
     shared_blks_hit numeric,
     shared_blks_read numeric,
     shared_blks_dirtied numeric,
     shared_blks_written numeric,
     local_blks_hit numeric,
     local_blks_read numeric,
     local_blks_dirtied numeric,
     local_blks_written numeric,
     temp_blks_read numeric,
     temp_blks_written numeric,
     blk_read_time numeric,
     blk_write_time numeric
)
as $$

with data_set as
( select *
, ( case when start_time >= current_timestamp - ($1 || $2)::interval then 'current' else 'baseline' end ) as period
from query_store.qs_view where query_id = ( $3 )
)
select query_id
, period
, round((case when $4 <= 1 then 100 * $4 else $4 end)::numeric,2) as percentile
, round(percentile_cont($4) within group ( order by total_time asc)::numeric,2) as total_time
, round(percentile_cont($4) within group ( order by min_time asc)::numeric,2) as min_time
, round(percentile_cont($4) within group ( order by max_time asc)::numeric,2) as max_time
, round(percentile_cont($4) within group ( order by rows asc)::numeric,2) as rows
, round(percentile_cont($4) within group ( order by shared_blks_hit asc)::numeric,2) as shared_blks_hit
, round(percentile_cont($4) within group ( order by shared_blks_read asc)::numeric,2) as shared_blks_read
, round(percentile_cont($4) within group ( order by shared_blks_dirtied asc)::numeric,2) as shared_blks_dirtied
, round(percentile_cont($4) within group ( order by shared_blks_written asc)::numeric,2) as shared_blks_written
, round(percentile_cont($4) within group ( order by local_blks_hit asc)::numeric,2) as local_blks_hit
, round(percentile_cont($4) within group ( order by local_blks_read asc)::numeric,2) as local_blks_read
, round(percentile_cont($4) within group ( order by local_blks_dirtied asc)::numeric,2) as local_blks_dirtied
, round(percentile_cont($4) within group ( order by local_blks_written asc)::numeric,2) as local_blks_written
, round(percentile_cont($4) within group ( order by temp_blks_read asc)::numeric,2) as temp_blks_read
, round(percentile_cont($4) within group ( order by temp_blks_written asc)::numeric,2) as temp_blks_written
, round(percentile_cont($4) within group ( order by blk_read_time asc)::numeric,2) as blk_read_time
, round(percentile_cont($4) within group ( order by blk_write_time asc)::numeric,2) as blk_write_time
from data_set
group by 1, 2
order by 1, 2 asc;
$$
language 'sql';

Once you create the function, provide the query id you would like to investigate. The function compares the aggregate values between the periods before and after the cutoff time you provide. For instance, the statement below compares all data points recorded earlier than 30 minutes ago against the points from the last 30 minutes for the given query. If you are aware of outliers that you want to exclude, you can also pass a percentile value; here the 95th percentile is used.

select * from compare_baseline_to_current_by_query_id(30, 'minutes', 4271834468, 0.95);

If you don’t pass a percentile, the default value of 1.00 (the 100th percentile) includes all data points.

select * from compare_baseline_to_current_by_query_id(2, 'hours', 4271834468);

Query performances

If you have ruled out a significant data size change and the cache hit ratio is fairly steady, you may also want to investigate any obvious changes in wait event occurrences within the same period. Because wait event types bucket together different wait types that are similar in nature, there is no single prescription for how to analyze the data. However, a general comparison may give you ideas about how the system state has changed.

create or replace function compare_baseline_to_current_by_wait_event (baseline_interval_start int,baseline_interval_type text,current_interval_start int,current_interval_type text)
returns table(
     wait_event text,
     baseline_count bigint,
     current_count bigint,
     current_to_baseline_factor double precision,
     percent_change numeric
)
as $$
with data_set as
( select event_type || ':' || event as wait_event
, sum( case when start_time >= current_timestamp - ($1 || $2)::interval and start_time < current_timestamp - ($3 || $4)::interval then 1 else 0 end ) as baseline_count
, sum( case when start_time >= current_timestamp - ($3 || $4)::interval then 1 else 0 end ) as current_count
, extract(epoch from ( $1 || $2 ) ::interval) / extract(epoch from ( $3 || $4 ) ::interval) as current_to_baseline_factor
from query_store.pgms_wait_sampling_view where query_id != 0
group by event_type || ':' || event
) ,
wait_event_data as
( select *
, round(( case when baseline_count = 0 then 0 else (100*((current_to_baseline_factor*current_count) - baseline_count) / baseline_count) end )::numeric,2) as percent_change
from data_set
)
select * from wait_event_data order by percent_change desc;
$$
language 'sql';

select * from compare_baseline_to_current_by_wait_event (7, 'days', 2, 'hours');

The query above lets you spot abnormal changes between the two periods. Note that the event count here is an approximation, and the numbers should be interpreted in the context of the relative load on the instance during each period.

Queries

As you can see, with the time series data available in Query Store, your creativity is the only limit to the kinds of analysis and algorithms you could implement here. We showed some simple calculations that apply straightforward techniques to identify candidates and improve them. We hope this can be your starting point, and that you share with us what works, what doesn't, and how you take this to the next level.

We are always looking forward to hearing feedback from you!

Acknowledgments

Special thanks to Senior Data Scientist Korhan Ileri, and Principal Data Scientist Managers Intaik Park and Saikat Sen for their contributions to this blog post.


Questions on data residency and compliance in Microsoft Azure? We got answers!


Questions about the security of and control over customer data, and where it resides, are on the minds of cloud customers today. We’re hearing you, and in response, we published a whitepaper that gives clear answers and guidance into the security, data residency, data flows, and compliance aspects of Microsoft Azure. The paper is designed to help our customers ensure that their customer data on Azure is handled in a way that meets their data protection, regulatory, and sovereignty requirements.

Transparency and control are essential to establishing and maintaining trust in cloud technology, while restricted and regulated industries have additional requirements for risk management and to ensure ongoing compliance. To address this, Microsoft provides an industry-leading security and compliance portfolio.

Security is built into the Azure platform beginning with the development process, which is conducted in accordance with the Security Development Lifecycle (SDL). Azure also includes technologies, controls, and tools that address data management and governance, such as Active Directory identity and access controls, network and infrastructure security technologies and tools, threat protection, and encryption to protect data in transit and at rest.

Microsoft gives customers options so they can control the types of data and locations where customer data is stored on Azure. With these security and compliance frameworks in place, customers in regulated industries can confidently run mission-critical workloads in the cloud and leverage all the advantages of Microsoft’s hyperscale cloud.

Download the whitepaper, “Achieving compliant data residency and security with Azure.”

Achieving compliant data residency and security with Azure

Learn more and get a list of Microsoft‘s compliance offerings on the Microsoft Trust Center site.

Implement predictive analytics for manufacturing with Symphony Industrial AI


Technology allows manufacturers to generate more data than traditional systems and users can digest. Predictive analytics, enabled by big data and cloud technologies, can take advantage of this data and provide new and unique insights into the health of manufacturing equipment and processes. While most manufacturers understand the value of predictive analytics, many find it challenging to introduce into the line of business. Symphony Industrial AI has a mission: to bring the promise of Industrial IoT (IIoT) and artificial intelligence (AI) to reality by delivering real value to their customers through predictive operations solutions. Two solutions by Symphony are specially tailored to the process manufacturing sector (chemicals, refining, pulp and paper, metals and mining, and oil and gas).

There are two solutions offered by Symphony Industrial AI: the first focuses on existing machinery, and the second on common processes.

Problem: the complexity of data science

Manufacturers have deep knowledge of their manufacturing processes, but they typically lack the expertise of data scientists, who have a deep understanding of statistical modeling, a fundamental component of most predictive analytics applications. Even when the application of predictive analytics is a success, most deployments fail to provide users with the root causes, or contributing factors, of identified (predicted) issues so that they can take quick and decisive action on the new-found insight.

Solution: predictive analytics made easy

Symphony Industrial AI answers with a pre-built, template-driven approach that minimizes data scientist requirements and promotes rapid predictive analytics deployments. The solution features a data management platform for the process manufacturing sector. It provides real-time stream processing on time-series and related data for predictive analytics, leveraging cloud and big data technologies. The figure below shows an example of the solution’s dashboard.

Predictive analysis made easy

Symphony Industrial AI’s solution speeds time-to-value through rapid deployment for minimized time and financial investment. Some of its features include:

  • Operations Data Lake (ODL): Pre-built integrations to existing systems of record (historians, EAM/CMMS, SCADA, and more).
  • Equipment and process template library: A library of equipment and process templates (pre-packaged analytics) that accelerate implementation and time-to-value.
  • AI/ML algorithms: Pre-packaged algorithms for failure/anomaly prediction.
  • Asset 360 AI and Process 360 AI: Pre-packaged solutions for asset performance intelligence and operations/process intelligence, respectively.

Two solutions: equipment models and process models

Predictive analytics solutions tend to focus on equipment health scenarios, as the data is readily modeled. To ease the implementation, Asset 360 AI deploys equipment models (also known as asset models) from a template library, which includes heat exchangers, pumps, compressors, and so forth.

Symphony Industrial AI’s second solution, Process 360 AI, helps users create predictive models of their processes. A process is defined at a high level as the items (such as chemicals, fuels, metals, and other intermediate and finished products) that are produced through the equipment. Process template examples include an ammonia process, an ethylene process, an LNG process, and a polypropylene process. Process models help predict process upsets and trips, which equipment models alone may not be able to predict.

Benefits

Built with AI and machine learning (ML), Asset 360 AI and Process 360 AI integrate seamlessly with the equipment and devices you already own. The solutions predict failures before they happen, resulting in several benefits.

  • A reduction in unplanned downtime and process trips.
  • A reduction in capital expenditure and asset maintenance costs.
  • Improvement in quality using gathered process and product data.
  • Improvement in safety and in tracking workforce effectiveness.

Microsoft technologies

Symphony Industrial AI’s solution is delivered as a SaaS model on Azure using the following services:

These services ensure that the latest IoT and AI advances can be implemented. Additionally, Power BI gives users a rich surface for finding insights and monitoring processes.

For manufacturers looking for a way to introduce predictive analytics, Symphony Industrial AI offers two solutions that are easy to implement through a template-driven process. The template libraries include models for existing equipment and standard manufacturing flows. To find out more, go to Asset 360 AI or Process 360 AI and select Contact me.

New year, newly available IoT Hub Device Provisioning Service features


We’re ringing in 2019 by announcing the general availability of the Azure IoT Hub Device Provisioning Service features we first released back in September 2018! The following features are all generally available to you today:

  • Symmetric key attestation support
  • Re-provisioning support
  • Enrollment-level allocation rules
  • Custom allocation logic

All features are available in all provisioning service regions, through the Azure portal, and the SDKs will support these new features by the end of January 2019 (with the exception of the Python SDK). Let’s talk a little more about each feature.

Symmetric key attestation

Symmetric keys are one of the easiest ways to start off using the provisioning service and provide an easy "Hello world" experience for those of you who want to get started with provisioning but haven’t yet decided on an authentication method. Furthermore, symmetric key enrollment groups provide a great way for legacy devices with limited existing security functionality to bootstrap to the cloud via Azure IoT. Check the docs to learn more about how to connect legacy devices.

Symmetric key support is available in two ways:

  • Individual enrollments, in which devices connect to the Device Provisioning Service just like they do in IoT Hub.
  • Enrollment groups, in which devices connect to the Device Provisioning Service using a symmetric key derived from a group key.

The documentation has more about how to use symmetric keys to verify a device's identity.
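To illustrate the derived-key approach used by enrollment groups: each device presents a key computed as an HMAC-SHA256 of its registration ID, signed with the enrollment group key. Below is a minimal bash sketch of that derivation; the registration ID and group key values are placeholders, and in practice you would typically use the tooling described in the documentation rather than scripting this by hand.

REG_ID="sample-device-001"              # placeholder registration ID
GROUP_KEY="<base64-encoded group key>"  # placeholder enrollment group primary key

# Derived device key = Base64( HMAC-SHA256( group key, registration ID ) )
keybytes=$(echo -n "$GROUP_KEY" | base64 --decode | xxd -p -u -c 1000)
echo -n "$REG_ID" | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64

The device then uses this derived key (not the group key itself) when it connects to the Device Provisioning Service.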

Automated re-provisioning support

We added first-class support for device re-provisioning, which allows devices to be reassigned to a different IoT solution sometime after the initial solution assignment. Re-provisioning support is available in two options:

  • Factory reset, in which the device twin data for the new IoT hub is populated from the enrollment list instead of the old IoT hub. This is common for factory reset scenarios as well as leased device scenarios.
  • Migration, in which device twin data is moved from the old IoT hub to the new IoT hub. This is common for scenarios in which a device is moving between geographies.

We’ve also taken steps to preserve backward compatibility for those who need it. Check the documentation, “IoT Hub Device reprovisioning concepts,” to learn the details. The documentation also has more on how to use re-provisioning.

Enrollment-level allocation rules

Customers need fine-grain control over how their devices are assigned to the proper IoT hub. For example, Contoso is a solution provider with two large multinational companies as customers. Each of Contoso’s customers is using Contoso devices across the globe in a geo-sharded setup. Contoso needs the ability to tell the provisioning service that customer A’s devices need to go to one set of hubs distributed geographically and that customer B’s devices need to go to another set of hubs distributed geographically. Enrollment-level allocation rules allow Contoso to do just that.

There are two pieces of functionality that light up:

  • Specifying allocation policy per enrollment gives finer-grain control.
  • Linked hub scoping allows the allocation policy to run over a subset of hubs.

This is available for both individual and group enrollments.

Custom allocation logic

With custom allocation logic, the Device Provisioning Service will trigger an Azure Function to determine where a device ought to go and what configuration should be applied to the device. Custom allocation logic is set at the enrollment level.

To sum things up with a limerick:

New features we announced last fall

Are ready for one and for all.

More flexibility

Makes provisioning easy

For devices from big to the small.

Who is the greatest finisher in soccer?


It's relatively easy to find the player who has scored the most goals in the last 12 years (hello, Lionel Messi). But which professional football (soccer) player is the best finisher, i.e. which player is most likely to put a shot they take into the goal?

You can't simply use the conversion rate (the ratio of shots taken to goals scored), because some players take more shots from a long way out while others get more set-ups near the goal. To correct for that, the blog Barça Numeros used a Bayesian beta-binomial regression model to weight the conversion rates by distance, and then ranked each player for their goal scoring rate at 25 distances from the goal. (The analysis was performed in R using techniques described in David Robinson's book Introduction to Empirical Bayes: Examples from Baseball Statistics, which is available online.)

Here's a chart comparing the ranks of Messi, Zlatan Ibrahimovic, Cristiano Ronaldo, Paulo Dybala and Alessandro Del Piero, at each distance, showing Messi to be the best finisher of these players at all ranges:

Finishers-by-range

For an overall ranking for each player, the blog used the median rank across the 25 shot distances — a ranking that places Lionel Messi as the greatest finisher of the last 12 years.

Best finishers

For more details behind the analysis (and many more charts), check out the complete blog post linked below.

Barça Numeros: Who are the best finishers in contemporary football? (via @barcanumbers)


Best practices for alerting on metrics with Azure Database for MariaDB monitoring


On December 4, 2018, Microsoft announced the general availability of Azure Database for MariaDB. This blog intends to share some guidance and best practices for alerting on the most commonly monitored metrics for MariaDB.

Whether you are a developer, a database analyst, a site reliability engineer, or a DevOps professional at your company, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MariaDB server. There are various metrics available for you in Azure Database for MariaDB to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.

Create New Rule screenshot
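If you script your monitoring rather than using the portal blade shown above, a metric alert can also be created with the Azure CLI. The sketch below is illustrative only: the resource group, server, and action group values are placeholders, and the metric name and condition syntax should be checked against the Azure Monitor and Azure Database for MariaDB metrics documentation for your environment.

# Alert when average CPU stays above 95 percent over a 30-minute window, evaluated every 5 minutes
# (resource group, server, and action group names/IDs are placeholders)
az monitor metrics alert create \
  --name mariadb-cpu-high \
  --resource-group myresourcegroup \
  --scopes /subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforMariaDB/servers/mydemoserver \
  --condition "avg cpu_percent > 95" \
  --window-size 30m \
  --evaluation-frequency 5m \
  --action <action-group-resource-id>

The same pattern can be adapted to the other metrics discussed below, such as active connections, IO percent, and storage percent.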

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid/public cloud. Here are some example best practices on how you can use monitoring data on your MariaDB server and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check

  • If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload.
  • If you think the load is expected, the active connection limit can be increased by upgrading the pricing tier or the number of vCores. You can check active connection limits for each SKU in our documentation, “Limitations in Azure Database for MariaDB.”

Active Connections (Platform) screenshot

Failed connections

Sample threshold (percentage or value): 10 failed connections in the last 30 minutes, checked every five minutes.

Things to check

  • If you see connection request failures over the last half hour, verify if this is expected by checking the logs for failure reasons.

Failed Connections (Platform) screenshot

  • If this is a user error, take the appropriate action. For example, if authentication failed, check your username and password.
  • If the error is SSL related, check that the SSL settings and input parameters are properly configured.
    • Example (server and user names are placeholders): mysql -h mydemoserver.mariadb.database.azure.com -u mylogin@mydemoserver -p --ssl-ca=BaltimoreCyberTrustRoot.crt.pem

CPU percent or memory percent

Sample threshold (percent or value): 100 percent for five minutes or 95 percent for more than two hours.

Things to check

  • If you have hit 100 percent CPU or memory usage, check your application telemetry or logs to understand the impact of the errors.
  • Review the number of active connections. Check for connection limits in our documentation, “Limitations in Azure Database for MariaDB.” If your application has exceeded the max connections or is reaching the limits, then consider scaling up compute.

IO percent

Sample threshold (percent or value): 90 percent usage for greater than or equal to 60 minutes.

Things to check

  • If you see that IOPS is at 90 percent for one hour or more, verify if this is expected based on the application workload.
  • If you expect a high load, then increase the IOPS limit by increasing storage. The storage to IOPS mapping is illustrated below as a reference.

Storage

The storage you provision is the amount of storage capacity available to your Azure Database for MariaDB server. The storage is used for the database files, temporary files, transaction logs, and the MariaDB server logs. The total amount of storage you provision also defines the I/O capacity available to your server.

                        Basic                    General purpose                 Memory optimized
Storage type            Azure Standard Storage   Azure Premium Storage           Azure Premium Storage
Storage size            5 GB to 1 TB             5 GB to 4 TB                    5 GB to 4 TB
Storage increment size  1 GB                     1 GB                            1 GB
IOPS                    Variable                 3 IOPS/GB, min 100, max 6000    3 IOPS/GB, min 100, max 6000

Storage percent

Sample threshold (percent or value): 80 percent

Things to check

  • If your server is reaching provisioned storage limits, it will soon be out of space and set to read-only.
  • Monitor your usage, and provision more storage if needed so that you can continue using the server without having to delete files, logs, and more.

If you have tried everything and none of the monitoring tips mentioned above lead you to a resolution, please don't hesitate to contact Microsoft Azure Support.

Acknowledgments

Special thanks to Andrea Lam, Program Manager, Azure Database for MariaDB for her contributions to this blog.

Announcing ML.NET 0.9 – Machine Learning for .NET



ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS) for .NET developers. Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom AI into their applications by creating custom machine learning models.

ML.NET allows you to create and use machine learning models targeting common tasks such as classification, regression, clustering, ranking, recommendations, and anomaly detection. It also supports the broader open source ecosystem by providing integration with popular deep-learning frameworks like TensorFlow and interoperability through ONNX. Some common use cases of ML.NET are scenarios like Sentiment Analysis, Recommendations, Image Classification, Sales Forecast, etc. Please see our samples for more scenarios.

Today we’re happy to announce the release of ML.NET 0.9 (ML.NET 0.1 was released at //Build 2018). This release focuses on API improvements, model explainability and feature contribution, support for GPU when scoring ONNX models, and significant cleanup of the framework internals.

This blog post provides details about the following topics in the ML.NET 0.9 release:

Feature Contribution Calculation and other model explainability improvements

Feature Contribution Calculation (FCC)

The Feature Contribution Calculation (FCC for short) shows which features are most influential for a model’s prediction on a particular data sample, by determining the amount each feature contributed to the model’s score for that sample.

FCC is particularly important when you initially have a lot of features/attributes in your historic data and you want to select and use only the most important ones, because using too many features (especially features that don’t influence the model) can reduce the model’s performance and accuracy. Therefore, with FCC you can identify the most influential positive and negative contributions from the initial attribute set.

You can use FCC to produce feature contributions with code like the following:

// Create a Feature Contribution Calculator
// Calculate the feature contributions for all features given trained model parameters

var featureContributionCalculator = mlContext.Model.Explainability.FeatureContributionCalculation(model.Model, model.FeatureColumn, numPositiveContributions: 11, normalize: false);

// FeatureContributionCalculatingEstimator can be used as an intermediary step in a pipeline. 
// The features retained by FeatureContributionCalculatingEstimator will be in the FeatureContribution column.

var pipeline = mlContext.Model.Explainability.FeatureContributionCalculation(model.Model, model.FeatureColumn, numPositiveContributions: 11)
    .Append(mlContext.Regression.Trainers.OrdinaryLeastSquares(featureColumn: "FeatureContributions"));

The output of the above code is:

  Label   Score   BiggestFeature         Value   Weight   Contribution

  24.00   27.74   RoomsPerDwelling        6.58    98.55   39.95
  21.60   23.85   RoomsPerDwelling        6.42    98.55   39.01
  34.70   29.29   RoomsPerDwelling        7.19    98.55   43.65
  33.40   27.17   RoomsPerDwelling        7.00    98.55   42.52

FCC can be used as a step in the ML pipeline and complements the current explainability tools in ML.NET like Permutation Feature Importance (PFI). With ML.NET 0.8, we already provided initial APIs for model explainability to help machine learning developers better understand the overall feature importance of models (Permutation Feature Importance) and create interpretable models (Generalized Additive Models).

Sample for FCC.

Additional model explainability improvements for features selection

In addition to FCC, we also extended the capabilities of Permutation Feature Importance (PFI) and Generalized Additive Models (GAMs):

  • PFI now supports most learning tasks: Regression, Binary Classification, Multiclass Classification, and Ranking.

  • PFI now allows you to calculate confidence intervals on feature importance scores to allow you to get a better estimate of the mean.

  • GAMs now supports Feature Contribution Calculation (FCC) so you can quickly see which features drove an individual prediction.

Sample for PFI.

Added GPU support for ONNX Transform


In ML.NET 0.9 we added the capability to score/run ONNX models using CUDA 10.0 enabled GPUs (such as most NVIDIA GPUs) by integrating the high-performance ONNX Runtime library. GPU support for ONNX models is currently available only on Windows 64-bit (not x86 yet), with Linux and Mac support coming soon. Learn here about supported ONNX/CUDA formats and versions.

Sample code plus a Test here.

New Visual Studio ML.NET project templates preview


We are pleased to announce a preview of Visual Studio project templates for ML.NET. These templates make it very easy to get started with machine learning. You can download these templates from Visual Studio gallery here.

The templates cover the following scenarios:

  • ML.NET Console Application – Sample app that demonstrates how you can use a machine learning model in your application.
  • ML.NET Model Library – Creates a new machine learning model library which you can consume from within your application.

VS ML.NET templates screenshot

Additional API improvements in ML.NET 0.9

In this release we have also added other enhancements to our APIs such as the following.

Text data loading is simplified

In ML.NET 0.9, when using the TextLoader class, you can either directly provide the attributes/columns in the file, as you could in previous versions, or, as a new and optional choice, specify those columns/attributes through a data-model class.

Before ML.NET v0.9 you always needed to have explicit code like the following:

//
//... Your code...
var mlContext = new MLContext();

// Create the reader: define the data columns and where to find them in the text file.
var reader = mlContext.Data.CreateTextReader(new[] {
        new TextLoader.Column("IsOver50K", DataKind.BL, 0),
        new TextLoader.Column("Workclass", DataKind.TX, 1)
    },hasHeader: true
);
var dataView = reader.Read(dataPath);

With 0.9, you can simply load the type as follows.

//
//... Your code in your class...
var mlContext = new MLContext();

// Read the data into a data view.
var dataView = mlContext.Data.ReadFromTextFile<InspectedRow>(dataPath, hasHeader: true);

// The data model. This type will be used by multiple parts of the code. 
private class InspectedRow
{
    [LoadColumn(0)]
    public bool IsOver50K { get; set; }
    [LoadColumn(1)]
    public string Workclass { get; set; }
}

Sample code.

Get prediction confidence factor

With Calibrator Estimators, in addition to the score column you get when evaluating the quality of your model, you can now also get a probability column (the probability of this example belonging to the predicted class; a prediction confidence indicator).

For instance, you could get a list of the probabilities per each predicted value, like in the following list:

Score - 0.458968    Probability 0.4670409
Score - 0.7022135   Probability 0.3912723
Score 1.138822      Probability 0.8703266

Sample code

New Key-Value mapping estimator and transform

This feature replaces the TermLookupTransform and provides a way to
specify the mapping betweeen two values (note this is specified and not
trained). You can specify the mapping by providing a keys list and
values list that must be equal in size.

Sample code

Other improvements and changes

  • Allow ML.NET to run on Windows Nano containers or Windows machines without Visual C++ runtime installed.
  • Metadata support in DataView construction: information about the model, such as evaluation metrics, is encoded as metadata into the model and can be programmatically extracted and therefore visualized in any tool. This feature can be useful for ISVs.
  • For a list of breaking changes in v0.9 that impacted the ML.NET samples, check this Gist here.

Moving forward

While we have been adding new features and improving ML.NET over the past 9 months, in the forthcoming 0.10 and 0.11 releases, and in the remaining releases before we reach v1.0, we will focus on the overall stability of the package, continue to refine the API, increase test coverage, and improve documentation and samples.

Provide your feedback through the new ML.NET survey!

ML.NET is new, and as we are developing it, we would love to get your feedback! Please fill out the brief survey below and help shape the future of ML.NET by telling us about your usage and interest in Machine Learning and ML.NET.

Take the survey now!


Get started!


If you haven’t already, get started with ML.NET here.

Next, to go further, explore some other resources:

We would appreciate your feedback: file issues with any suggestions or enhancements in the ML.NET GitHub repo to help us shape ML.NET and make .NET a great platform of choice for machine learning.

Thanks and happy coding with ML.NET!

The ML.NET Team.

This blog was authored by Cesar de la Torre and Pranav Rastogi, with additional contributions from the ML.NET team.



Top Stories from the Microsoft DevOps Community – 2019.01.11


Welcome back Microsoft developers and DevOps practitioners; I hope you had a great new year! Me? I took some time off to recharge the batteries and I’m glad I did because — wow — even though it’s just the beginning of 2019, there’s already some incredible news coming out of the DevOps community.

Alexa, open Azure DevOps
This. Is. Incredible. Mike Kaufmann demonstrates the MVP of integration between Alexa and Azure DevOps. Do you want to assign a work item to him? Just ask. You’ve got to watch this video – once I did, I realized that I wanted an Alexa.

TFS 2019, Change Work Item Type and Move Between Team Project
We recently brought the ability to change work item types and to move work items between projects to the on-premises version of Azure DevOps Server. But there’s a caveat – you can’t have Reporting Services enabled. Ricci Gian Maria walks through this limitation and the solution.

Deploying to Kubernetes with Azure DevOps: A first pass
Kubernetes is incredibly popular, as it’s the next generation deployment platform for containerized applications. But how do you build out a deployment pipeline around it? Jason Farrell creates his first pipeline to build a container and deploy it into AKS (Azure Kubernetes Service).

Creating a git repo with Azure Repos and trying out Git LFS
If you’re thinking about using Git in a project with large binary assets – like images, videos or audio files – you might find yourself disappointed, as Git struggles with large binaries. Andrew Lock explains why, and how you can use Git LFS (Large File Storage) to manage your project.

How the Azure DevOps teams plan with Aaron Bjork
Donovan Brown interviews Aaron Bjork about the way the Azure DevOps team has historically planned our agile processes, and how we’ve adapted and changed our high-level planning by adopting Objectives and Key Results (OKRs).

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

Because it’s Friday: A timeline of the elements


A few chemical elements (copper, iron, sulphur, and a few others) have been known since the dawn of time. This animated timeline, created by Dr Jamie Gallagher, shows the year of discovery (or in some cases, the creation) of the rest of them:

That's all from the blog for this week. Have a great weekend, and we'll see you next week!

How to update the firmware on your Zune, without Microsoft, dammit.


A glorious little Zune

As I said on social media today, it's 2019 and I'm updating the Firmware on a Zune, fight me. ;) There's even an article on Vice about the Zune diehards. The Zune is a deeply under-respected piece of history and its UI marked the start of Microsoft's fluent design.

Seriously, though, I got this Zune and it's going to be used by my 11 year old because I don't want him to have a phone yet. He's got a little cheap no-name brand MP3 player and he's filled it up and basically outgrown it. I could get him an iPod Touch or something but he digs retro things (GBC, GBA, etc) so my buddy gave me a Zune in the box. Hasn't been touched...but it has a super old non-metro UI firmware.

Can a Zune be updated in 2019? Surely it can. Isn't Zune dead? I hooked up a 3DO to my 4k flatscreen last week, so it's dead when I say it's dead.

IMPORTANT UPDATE: After I spent time figuring this out I found out on Twitter that there's a small but active Zune community on Reddit! Props to them for doing this in several ways as well. The simplest way to update today is to point resources.zune.net to zuneupdate.com's IP address in your hosts file. The way I did it does use the files directly from Microsoft and gives you full control, but it's overly complex for regular folks for as long as the zuneupdate.com server exists as a mirror. Use the method that works easier for you and that you trust and understand!

  • First, GET ZUNE: the Zune Software version 4.8 is up at the Microsoft Download Center and it installs just fine on Windows 10. I've also made a copy in my Dropbox if this ever disappears. You should too!
  • Second, GET FIRMWARE: the Zune Firmware is still on the Microsoft sites as well. This is an x86 MSI so don't bother trying to install it, we're going to open it up like an archive instead. Save this file forever.
    • There's a half dozen ways to crack open an MSI. Since not everyone who will read this blog is a programmer, the easiest way is:
    • Download lessmsi and use it to open and extract the firmware MSI. It's just an MSI-specific extractor but it's nicer than 7zip because it extracts the files with the correct names. If you use Chocolatey, it's just "choco install lessmsi" then run "lessmsi-gui." LessMSI will put the files in a deep folder structure. You'll want to move them and have all your files right at the top of c:\users\YOURNAME\downloads\zunestuff. We will make some other small changes a little later on here.
      LessMSI
    • If you really want to, you could install 7zip and extract the contents of the Zune Firmware MSI into a new folder but I don't recommend it as you'll need to rename the files and give them the correct extensions.
    • NERDS: you can also use msiexec from the command line, but I'm trying to keep this super simple.
  • Third, FAKE THE ZUNE UPDATE SERVER: Since the Zune servers are gone, you need to pretend to be the old Zune Server. The Zune Software will "phone home" to Microsoft at resources.zune.net (which is gone) to look for firmware. Since the Zune software was made in a simpler time (a decade ago) it doesn't use SSL or do any checking of the cert to confirm the identity of the Zune server. This would be sad in 2019, but it's super useful to us when bringing this old hardware back to life. Again, there's a half dozen ways to do this. Feel free to do whatever makes you happy as an HTTP GET is an HTTP GET, isn't it?
    • NERDS: If you use Fiddler or any HTTP sniffer you can launch the Zune software and see it phone home for resources.zune.net/firmware/v4_5/zuneprod.xml and get a 404. If it had found this, it'd look at your Zune model and then figure out which cab (cabinet) archive to get the firmware from. We can easily spoof this HTTP GET.
    • NERDS^2: Why didn't I use the Fiddler Autoresponder to record and replay the HTTP GETs? I tried. However, there's a number of different files that the Zune software could request and I only have the one Zune and I couldn't figure out how to model it in Fiddler. If I could do this, we could just install Fiddler and avoid editing the hosts file AND using a tiny web server.
    • From an admin command prompt, run notepad \windows\system32\drivers\etc\hosts and add this line:
      127.0.0.1 resources.zune.net
    • This says "if you ever want stuff from resources.zune.net, I'll handle it myself." Who is "myself?" It's our computer! It'll be a little web server you (or I) will run on our own, on localhost AKA 127.0.0.1.
    • Now download dot.net core; it's a small, fast-to-install programming environment. Don't worry, we aren't coding, we are just using the tools it includes. It won't mess up your machine or install anything at startup.
    • Grab any 2.x .NET SDK from https://dot.net and install it from an MSI. Then go to a command prompt and run these commands. First we'll run dotnet once to warm it up, then get the server and run it from our zunestuff folder. We'll install a tiny static webserver called dotnet serve. See below:
      dotnet
      dotnet tool install --global dotnet-serve
      cd c:\users\YOURNAME\downloads\zunestuff
      dotnet serve -p 80
    • If you get any errors that dotnet serve can't be found, just close the command prompt and open it again to update your PATH. If you get errors that port 80 is in use, be sure to stop IIS or Skype Desktop or anything that might be listening on port 80.
    • Now, remember where I said you'd extract all those cabs and files out of the Firmware MSI? BUT when we load the Zune software and watch network traffic, we see it's asking for resources.zune.net/firmware/v4_5/zuneprod.xml. We need to answer (since Zune is gone, it's on us now).
    • You'll want to make folders like this: c:\users\YOURNAME\downloads\zunestuff\firmware\v4_5, then copy/rename FirmwareUpdate.xml to zuneprod.xml and have it live in that directory. It'll look like this:
      A file hierarchy under zunestuff
    • The zuneprod.xml file has relative URLs inside like this, one for each model of the Zune, that map from USB hardware id to cab file. Open zuneprod.xml in a text editor and add http://resources.zune.net/ before each of the firmware file cabinets. For example if you're using notepad, your find and replace will look like this.
      Replace URL=" with URL="http://resources.zune.net/
    • <FirmwareUpdate DeviceClass="1"
      FamilyID="3"
      HardwareID="USBVid_045e&amp;Pid_0710&amp;Rev_0300"
      Manufacturer="Microsoft"
      Name="Zune"
      Version="03.30.00039.00-01620"
      URL="DracoBaseline.cab">

    • UPDATE: As mentioned above, I did all this work (about an hour of traffic sniffing) and spoofed the server locally, then found out that someone made http://zuneupdate.com as an online static spoof! It also doesn't use HTTPS, and if you'd like, you can skip the local spoof and add an entry to your \windows\system32\drivers\etc\hosts file pointing resources.zune.net to its IP address - which at the time of this writing was 66.115.133.19. Your hosts file would look like this, in that case. If this useful resource ever goes away, use the localhost hack above.
      66.115.133.19 resources.zune.net
    • Now run the Zune software, connect your Zune. Notice here that I know it's working because I launch the Zune app and go to Settings | Device then Update and I can see dotnet serve in my other window serving the zuneprod.xml in response.

Required Update

It's worth pointing out that the original Zune server was somewhat smart and would only return firmware if we NEEDED a firmware update. Since we are faking it, we always return the same response. You may be prompted to install new firmware if you manually ask for an update. But you only need to get on the latest (3.30 for my brown Zune 30) and then you're good...forever.


Enjoy!

Your iPod Sucks
Zune is the way

Guardians 2 Zune


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

Azure Backup for virtual machines behind an Azure Firewall


This blog post primarily talks about how Azure Firewall and Azure Backup can be leveraged to provide comprehensive protection to your data. The former protects your network, while the latter backs up your data to the cloud. Azure Firewall, now generally available, is a cloud-based network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. With Azure Firewall you can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. It uses a static public IP address for your virtual network resources, allowing outside firewalls to identify traffic originating from your virtual network.

Backup of Azure Virtual Machines

In a typical scenario, you may have Azure Virtual Machines (VMs) running business-critical workloads behind an Azure Firewall. While this is an effective means of shielding your VMs against network threats, you would also want to protect the data in the VMs using Azure VM Backup, which further reduces the risks your data is exposed to. Azure Backup protects the data in your VMs by safely storing it in your Recovery Services vault. This involves moving data from your virtual machine storage to the vault and requires a network. However, all of this communication is performed over the secure Azure backbone network, with no need to access your virtual networks. You don’t need to open any ports, allow-list any IPs, or grant any access to Azure Backup in your Azure Firewall. Hence, your backups will work under the enhanced security of Azure Firewall without requiring any action on your end.

It is worth noting that this capability extends to other security measures that can lock a VM down under network restrictions, for example, NSGs. Hence, backup of Azure VMs will work seamlessly irrespective of network restrictions applied at your end to help keep your data within selected networks and without having to perform any additional actions.

Backup of SQL Server running inside an Azure VM (in preview)

Backup of SQL Servers running inside an Azure VM requires the backup extension to communicate with the Azure Backup service in order to upload backup and emit monitoring information. This extension resides inside the virtual machine and requires network access. Hence, when backing up SQL Servers running inside Azure VMs, you would need to permit the Azure Backup service to access the workload. This is a simple process that makes sure the data is restricted to Azure Backup and maintains your desired level of security.

All you need to do is complete the following steps:

1. Navigate to your Azure Firewall.

2. Go to Rules and select the Application rule collection tab. Here you can create a new application rule collection, or edit existing ones in case you have created application rule collections before.

Adding to application rule collection screenshot

3. Create a rule with the following details in an existing or new Application Rule Collection, under the FQDN tags section.

Field              Value
Priority           Enter an appropriate priority for the rule.
Action             Select Allow from the dropdown.
Name               Type a name for the rule.
Source Addresses   Enter * in the text box if you want this rule to be applicable to VMs in all subnets within the scope of the Firewall. Else, specify the desired IP or IP ranges.
FQDN Tags          Select AzureBackup from the dropdown.

The following is a sample rule for allowing Azure Backup to protect your SQL Servers in Azure VMs.

Edit application rule collection screenshot

4. Select Add to create the aforementioned rule.
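If you manage your firewall rules from the Azure CLI rather than the portal, a roughly equivalent rule could be created as sketched below. This assumes the azure-firewall CLI extension is installed; the resource group, firewall, collection, and priority values are placeholders.

# Allow outbound traffic to the AzureBackup FQDN tag (placeholder names and priority)
az network firewall application-rule create \
  --resource-group myresourcegroup \
  --firewall-name myfirewall \
  --collection-name AllowAzureBackup \
  --name azure-backup \
  --priority 200 \
  --action Allow \
  --source-addresses "*" \
  --protocols Https=443 \
  --fqdn-tags AzureBackup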

Once the rule is created, you can back up your databases inside the Azure virtual machine without any interruptions, all while making sure they remain protected by Azure Firewall from any external threats. For more on backing up your SQL Servers in Azure virtual machines, please read the blog, “Azure Backup for SQL Server on Azure now in public preview.”

Azure Backup and Azure Firewall complement each other well to provide complete protection for your resources and data in Azure. You do not need any special configurations or infrastructure to reap the benefits of using both services together. Read about backing up Azure Virtual Machines and backing up SQL Servers inside Azure Virtual Machines for more details.

Azure.Source – Volume 65


Now generally available

Announcing the general availability of Azure Data Box Disk

Azure Data Box Disk, an SSD-based solution for offline data transfer to Azure, is now generally available in the US, EU, Canada, and Australia, with more country/regions to be added over time. Each disk is an 8 TB SSD that can copy data up to USB 3.1 speeds and support the SATA II and III interfaces. The disks are encrypted using 128-bit AES encryption and can be locked with your custom passkeys. In addition, check out the end of this post for an announcement about the public preview for Blob Storage on Azure Data Box. When this feature is enabled, you will be able to copy data to Blob Storage on Data Box using blob service REST APIs.

Image of a Microsoft Data Box Disk

New year, newly available IoT Hub Device Provisioning Service features

The following Azure IoT Hub Device Provisioning Service features are now generally available: Symmetric key attestation support; Re-provisioning support; Enrollment-level allocation rules; and Custom allocation logic. The IoT Hub Device Provisioning Service is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the right IoT hub without requiring human intervention, enabling you to provision millions of devices in a secure and scalable manner. All features are available in all provisioning service regions, through the Azure portal, and the SDKs will support these new features by the end of January 2019 (with the exception of the Python SDK).

News and updates

Cognitive Services Speech SDK 1.2 – December update – Python, Node.js/NPM and other improvements

Developers can now access the latest improvements to Cognitive Services Speech Service including a new Python API and more. See this post to read what’s new for the Python API for Speech Service, Node.js support, Linux support, lightweight SDK for greater performance, control of server connectivity and connection status, and audio file buffering for unlimited audio session length support. Support for ProGuard during Android APK generation is also now available.

New Azure Migrate and Azure Site Recovery enhancements for cloud migration

This post covers some of the new features added to Microsoft Azure Migrate and Azure Site Recovery that will help you in your lift and shift migration journey to Azure. Azure Migrate enables you to discover your on-premises environment and plan your migration to Azure. Based on popular demand, we enabled Azure Migrate in two new geographies, Azure Government and Europe, and we will enable support for other Azure geographies in the future. Azure Site Recovery (ASR) helps you migrate your on-premises virtual machines (VMs) to IaaS VMs in Azure; this lift and shift migration now includes support for physical servers with UEFI boot type, Linux disk support, and migration from anywhere (public or private clouds).

Additional updates for migration support and Azure Site Recovery:

Streamlined development experience with Azure Blockchain Workbench 1.6.0

Azure Blockchain Workbench 1.6.0 is now available and includes new features such as application versioning, updated messaging, and streamlined smart contract development. You can deploy a new instance of Workbench through the Azure portal or upgrade existing deployments to 1.6.0 using an upgrade script. Be advised that this release does include some breaking changes, so check the blog post for details. In addition, information for the latest updates is available from within the Workbench UI.

Screenshot of Azure Blockchain Workbench Updates screen

New smart device security research: Consumers call on manufacturers to do more

To better understand how the allure of smart device experiences stacks up against the concern for security, and who consumers hold responsible to secure smart devices, we partnered with Greenberg Strategy, a consumer research firm to poll more than 3,000 people in the US, UK, and Germany. The research showed that more than 90% of people expect manufacturers to do more for device security, and most people will avoid brands that have public breaches. Security is the top consideration for consumers thinking of buying a device, and consumers are willing to pay more for highly secured devices. See the blog post for a detailed infographic that outlines the details of the research. Note that devices built with Azure Sphere always maintain their security health through a combination of secured hardware, a secured OS, and cloud security that provides automated software updates.

Multi-modal topic inferencing from videos

Azure Video Indexer is a cloud application built on Azure Media Analytics, Azure Search, and Cognitive Services. It enables you to extract insights from your videos using Video Indexer models. Multi-modal topic inferencing in Video Indexer is a new capability that can intuitively index media content using a cross-channel model to automatically infer topics. The model does so by projecting the video concepts onto three different ontologies: IPTC, Wikipedia, and the Video Indexer hierarchical topic ontology. Video Indexer’s topic model empowers media users to categorize their content using an intuitive methodology and optimize their content discovery. Multi-modality is a key ingredient for recognizing high-level concepts in video.

A diagram represents the analysis of a media file from its upload to the insights

The January release of Azure Data Studio

Azure Data Studio (formerly known as SQL Operations Studio) is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms (such as SQL Server, Azure SQL DB, and Azure SQL Data Warehouse) on Windows, macOS, and Linux. The January release includes: Azure Active Directory Authentication support; Data-Tier Application Wizard support; IDERA SQL DM Performance Insights (Preview); Updates to the SQL Server 2019 Preview extension; SQL Server Profiler improvements; results streaming for large queries (Preview); User setup installation support; and various bug fixes.

Additional updates

Technical content

To infinity and beyond: The definitive guide to scaling 10k VMs on Azure

Every platform has limits, workstations and physical servers have resource boundaries, APIs may be rate-limited, and even the perceived endlessness of the virtual public cloud enforces limitations that protect the platform from overuse or misuse. However, sometimes you experience scenarios that take platforms to their extreme, and those limits become real and therefore thought should be put into overcoming them. Solving this challenge must not only take into account the limitations and thresholds applied near the edge of the cloud platform’s capabilities, but also optimize cost, performance, and usability. Buzz is a scaling platform that uses Azure Virtual Machine Scale Sets (VMSS) to scale beyond the limits of a single set and enables hyper-scale stress tests, DDoS simulators and HPC use cases. Buzz orchestrates a number of Azure components to manage high scale clusters of VMs running and performing the same actions, such as generating load on an endpoint.

Solution diagram showing the flow of a load test scenario

Teradata to Azure SQL Data Warehouse migration guide

With the increasing benefits of cloud-based data warehouses, there has been a surge in the number of customers migrating from their traditional on-premises data warehouses to the cloud. Teradata is a relational database management system and is one of the legacy on-premises systems from which customers are looking to migrate. This post introduces a technical white paper that gives insight into how to approach a Teradata to Azure SQL Data Warehouse migration. It is broken into sections which detail the migration phases, the preparation required for data migration including schema migration, migration of the business logic, the actual data migration approach and testing strategy.

5 Microsoft Learn Modules for Getting Started with Azure

In this quick read, Ari Bornstein shares his top five recommendations for getting up to speed with Azure Services to help you navigate through fundamentals, storing data, deploying to the cloud, administering containers, and using serverless APIs.

Performance troubleshooting using new Azure Database for PostgreSQL features

At Ignite 2018, Microsoft’s Azure Database for PostgreSQL announced the preview of Query Store (QS), Query Performance Insight (QPI), and Performance Recommendations (PR) to help ease performance troubleshooting, in response to customer feedback. This post builds on a previous post (Performance best practices for using Azure Database for PostgreSQL) to show how you can use these recently announced features to troubleshoot some common scenarios.

Questions on data residency and compliance in Microsoft Azure? We got answers!

Transparency and control are essential to establishing and maintaining trust in cloud technology, while restricted and regulated industries have additional requirements for risk management and to ensure ongoing compliance. To address this, Microsoft provides an industry-leading security and compliance portfolio. See this post for a link to the white paper, Achieving Compliant Data Residency and Security with Azure. This paper provides guidance about the security, data residency, data flows, and compliance aspects of Azure. It is designed to help you ensure that your data on Microsoft Azure is handled in a way that meets data protection, regulatory, and sovereignty requirements.

Cover of Achieving Compliant Data Residency and Security with Azure white paper

Best practices for alerting on metrics with Azure Database for MariaDB monitoring

Whether you are a developer, a database analyst, a site reliability engineer, or a DevOps professional at your company, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MariaDB server. This post provides guidance and best practices for alerting on the most commonly monitored metrics for MariaDB and areas you can consider improving based on these various metrics.

Azure shows

The Azure Podcast | Episode 261 - Outage Communications

Kendall, Cale, and Evan talk to Sami Kubba, a Senior PM Lead in the Azure CXP org, about how they handle communications of outages and other issues in Azure. Great insight into what goes on behind the scenes to maintain full transparency into the workings of Azure.

Global real-time multi-user apps with Azure Cosmos DB | Azure Friday

Chris Anderson joins Donovan Brown to discuss how to use Azure Cosmos DB and other great Azure services to build a highly-scalable, real-time, collaborative application. You'll see techniques for using the Azure Cosmos DB change feed in both Azure Functions and SignalR applications. We also briefly touch on how custom authentication works with Azure Functions.

What’s New? A Single Key for Cognitive Services | AI Show

In this video we will talk about the work we are doing to simplify the use of Cognitive Services in your applications. We now have a single key, which eliminates having to reference and manage many keys per service for a single application.

Azure IoT Microsoft Professional Program | Internet of Things Show

Accelerate your career in one of the fastest-growing cloud technology fields: IoT. This program will teach you the device programming, data analytics, machine learning, and solution design skills you need for a successful career in IoT. Learn the skills necessary to start or progress a career working on a team that implements IoT solutions.

Consensus in Private Blockchains | Block Talk

This episode provides a review of consensus algorithms used primarily for consortium-based deployments, including the popular Proof of Authority, Proof of Work, and a variant of BFT. The core concepts of the algorithms are introduced, followed by a demonstration of using the popular Geth client to provision a PoA-based network and of how the consensus algorithm can be chosen at blockchain creation time, illustrating pluggable consensus.

TWC9: Unlimited Free Private GitHub Repos, Python in Azure App Service, CES Highlights and more

This Week on Channel 9, Christina Warren reports on the latest developer news.

How to add logic to your Testing in Production sites with PowerShell | Azure Tips and Tricks

Learn how to add additional logic by using PowerShell to automatically distribute the load between your production and deployment slot sites with the Testing in Production feature.

Thumbnail from How to add logic to your Testing in Production sites with PowerShell | Azure Tips and Tricks

How to work with connectors in Azure Logic Apps | Azure Tips and Tricks

Learn how to work with connectors in Azure Logic Apps. Azure Logic Apps has a collection of connectors that you could use to integrate with 3rd party services, such as the Twitter connector.

Thumbnail from How to work with connectors in Azure Logic Apps | Azure Tips and Tricks

Learn about Serverless technology in Azure Government

Steve Michelotti, Principal Program Manager on the Azure Government team, sits down with Yujin Hong, Program Manager on the Azure Government Engineering team, to talk about serverless computing in Azure Government.

Thumbnail from Learn about Serverless technology in Azure Government

Azure DevOps Podcast | Aaron Palermo on Cybersecurity and SDP - Episode 018

Jeffrey Palermo, interviews his own older brother, Aaron Palermo. Aaron is a DevOps engineer, solution architect, and all-around cybersecurity expert. This episode is jam-packed with incredibly useful information applicable to software developers — but also anybody who has a Wi-Fi network. Stay tuned to hear about how an SDP replaces a VPN, Aaron’s recommendations on how people can fully protect themselves online, which state-of-the-art multi-factor authentication people should be using, how to keep your data safe and protect from Wi-Fi vulnerabilities, and more.

Events

CES 2019: Microsoft partners, customers showcase breakthrough innovation with Azure IoT, AI, and Mixed Reality

We are continuing to see great momentum for Azure IoT and Azure AI for connected devices and experiences, and new partners and customers choosing Azure IoT and Azure AI to accelerate their business. From connected home products to connected car experiences, check out this post for a few examples from CES 2019 in Las Vegas. Then take a look at a couple of examples that demonstrate innovation for immersive, secured digital experiences.

CES 2019: The rise of AI in automotive

CES 2019 was the perfect venue to demonstrate how our customers and partners are enhancing their connected vehicle, autonomous vehicle, and smart mobility strategies using the power of the Microsoft intelligent cloud, intelligent edge, and AI capabilities. This post covers just a few examples of the innovative work that is happening today. As AI takes on more and more roles across the automotive ecosystem, it is inspiring to imagine the transformational possibilities that lie ahead for our industry.

Four ways to take your apps further with cloud, data, and AI solutions with Microsoft

Companies today demand the latest innovations for every solution they deliver. How can you make sure your infrastructure and data estate keep up with the demands of your business? Read this post for four tips on transforming your business with a modern data estate. Then register to attend a free webinar on Thursday, January 24, to learn more about the new features and products that can help you optimize value and overcome challenges in modernizing your data estate.

Customers, partners, and industries

Implement predictive analytics for manufacturing with Symphony Industrial AI

Symphony Industrial AI has a mission: to bring the promise of Industrial IoT and AI to reality by delivering real value to their customers through predictive operations solutions. Two solutions by Symphony are specially tailored to the process manufacturing sector (chemicals, refining, pulp and paper, metals and mining, oil, and gas). Check this post to learn about two solutions offered by Symphony Industrial AI: Asset 360 AI and Process 360 AI.

Gain insight into your Azure Cosmos DB data with QlikView and Qlik Sense

Connecting data from various sources in a unified view can produce valuable insights that are otherwise invisible to the human eye and brain. As Azure Cosmos DB allows for collecting the data from various sources in various formats, the ability to mix and match this data becomes even more important for empowering your businesses with additional knowledge and intelligence. This is what Qlik’s analytics and visualization products, QlikView and Qlik Sense, have been able to do for years and now they support Azure Cosmos DB as a first-class data source. Qlik Sense and QlikView are data visualization tools that combine the data from different sources into a single view.

Microsoft Azure-powered Opti platform helps Atlanta prevent flooding

The City of Atlanta Department of Watershed Management will use the Opti platform to prevent flooding by making a retention pond at a local park more efficient. Microsoft CityNext partners with Opti to prevent flooding in Atlanta and other cities. Microsoft CityNext is helping cities around the world become more competitive, sustainable, and prosperous. With partners like Opti, Microsoft is working with cities to engage their citizens, empower city employees, optimize city operations and infrastructure, and transform to accelerate innovation and opportunity. The portfolio organizes solution categories across five broad functional areas: Digital Cities, Educated Cities, Healthier Cities, Safer Cities, and Sustainable Cities.

Photo of a retention pond at Dean Rusk Park in Atlanta, Georgia

3 ways AI can help retailers stay relevant

Microsoft recently partnered with Blue Yonder, a JDA company, to survey retailers everywhere on how they are adapting to the rapidly evolving retail market by using new technologies. The findings show as retailers face new challenges in customer loyalty, competition online and changing consumer expectations, they are more committed than ever to investing in technologies like the Cloud and artificial intelligence (AI). Check out this post for three ways AI can help retailers survive in an unpredictable market. If you’re attending NRF this week, drop by the Microsoft booth to visit with JDA to learn more about price optimization solutions.

How Microsoft AI empowers transformation in your industry

AI presents incredible opportunities for organizations to change the way they do business. With 1,000 researchers—including winners of the Turing Award and Fields Medal—in 11 labs, Microsoft has established itself as a leader in AI through its dogged focus on innovation, empowerment, and ethics. Now, the groundbreaking capabilities of AI can move beyond the lab to make a positive impact on every enterprise, every industry. As Microsoft continues to research AI and incorporate its capabilities into the technologies of everyday life, it also remains committed to an ethical future. Microsoft has identified six principles—fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability—to guide the development and use of artificial intelligence so technology reflects the diversity of those who use it. In the end, it’s less about what AI can do than what people can do with AI. Visit this post to download the white paper, Microsoft’s vision for AI in the enterprise.

Cover of the Microsoft Enterprise AI white paper

 


A Cloud Guru’s Azure This Week - 11 January 2019

This time on Azure This Week, Lars Klint talks about the definitive guide to scaling 10k VMs on Azure, Teradata to Azure SQL Data Warehouse migration guide, and using QlikView and Qlik Sense with Azure Cosmos DB.

Thumbnail from A Cloud Guru’s Azure This Week - 11 January 2019

AI is the new normal: Recap of 2018

The year 2018 was a banner year for Azure AI as over a million Azure developers, customers, and partners engaged in the conversation on digital transformation. The next generation of AI capabilities are now infused across Microsoft products and services including AI capabilities for Power BI.

Here are the top 10 Azure AI highlights from 2018, across AI Services, tools and frameworks, and infrastructure at a glance:

AI services

1. Azure Machine Learning (AML) service with new automated machine learning capabilities.

2. Historical milestones in Cognitive Services including unified Speech service.

3. Microsoft is first to enable Cognitive Services in containers.

4. Cognitive Search and basketball

5. Bot Framework v4 SDK, offering broader language support (C#, Python, Java, and JavaScript) and extensibility models.

AI tools and frameworks

6. Data science features in Visual Studio Code.

7. Open Neural Network Exchange (ONNX) runtime is now open source.

8. ML.Net and AI Platform for Windows developers.

AI infrastructure

9. Azure Databricks

10. Project Brainwave, integrated with AML.

With so many exciting developments, why are these the highlights? Read on as this blog explains the importance of each of these moments.

AI services

These services span pre-built AI capabilities such as Azure Cognitive Services and Cognitive Search, Conversational AI with Azure Bot Service, and custom AI development with Azure Machine Learning (AML).

1. Azure Machine Learning

At Microsoft Connect, the Azure Machine Learning (AML) service with new automated machine learning (automated ML) capabilities became available. With AML, data scientists and developers can quickly and easily build, train, and deploy machine learning models anywhere, from the intelligent cloud to the intelligent edge. Once a model is developed, organizations can deploy and manage it in the cloud and on the edge, including on IoT devices, with integrated CI/CD tooling.

To learn more, read our announcement blog, “Announcing the general availability of Azure Machine Learning service.”

Few people know the story behind how automated ML came to be. It all started in a gene-editing lab in 2016.

Dr. Nicolo Fusi, a machine learning researcher at Microsoft, encountered a problem while working with a new gene editing technology called CRISPR.ML. He tried to use machine learning to predict the best way to edit a gene. His model contained thousands of hyperparameters, making it too difficult and time consuming to optimize with existing methods. Then, Dr. Fusi had a breakthrough idea, why not apply the same approach and algorithms used for recommending movies and products to this problem of model optimization? The result is a recommendation system for machine learning pipelines. The approach combines ideas from collaborative filtering and Bayesian optimization to identify possible machine learning pipelines, allowing data scientists and developers to automate model selection and hyperparameter tuning.
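To make the analogy concrete, here is a deliberately tiny, self-contained sketch of the underlying idea: factor a sparse pipelines-by-datasets performance matrix so that scores for untried pipelines can be predicted, just as a recommender predicts ratings for unseen movies. This is an illustration only, not Microsoft's implementation; all names and numbers below are made up.

# Toy illustration of recommending ML pipelines via matrix factorization.
import numpy as np

rng = np.random.default_rng(0)
n_pipelines, n_datasets, rank = 8, 5, 2

# Observed validation scores; NaN marks pipeline/dataset pairs that were never run.
scores = np.full((n_pipelines, n_datasets), np.nan)
observed = rng.random((n_pipelines, n_datasets)) < 0.4
scores[observed] = rng.random(int(observed.sum()))

P = rng.normal(scale=0.1, size=(n_pipelines, rank))   # latent pipeline factors
D = rng.normal(scale=0.1, size=(n_datasets, rank))    # latent dataset factors

lr, reg = 0.05, 0.01
rows, cols = np.where(observed)
for _ in range(2000):                                  # plain SGD over observed cells
    for i, j in zip(rows, cols):
        err = scores[i, j] - P[i] @ D[j]
        P[i] += lr * (err * D[j] - reg * P[i])
        D[j] += lr * (err * P[i] - reg * D[j])

# Rank pipelines that were never tried on dataset 0 by their predicted score.
predicted = P @ D[0]
untried = ~observed[:, 0]
ranking = np.argsort(np.where(untried, predicted, -np.inf))[::-1]
print("Most promising untried pipelines for dataset 0:", ranking[:3])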

In this interview, Dr. Fusi gives you an inside look at how automated ML empowers decision-making and takes the tedium out of data science.

Check out this Cornell-published white paper, “Probabilistic Matrix Factorization for Automated Machine Learning” to learn more.

2. New milestones for Azure Cognitive Services

Azure Cognitive Services is a collection of APIs that lets developers easily add vision, speech, language, and search capabilities to applications and machines. To date, more than 1.2 million developers use Cognitive Services.

At the Build 2018 conference, Microsoft unveiled the next wave of innovation for Cognitive Services:

New Services:

Enhancements to existing services:

For more details, read “Microsoft Empowers developers with new cognitive services capabilities."

3. Microsoft is the first company to deliver Cognitive Services in containers

In November, Azure Cognitive Services containers became available in preview, making Azure the first platform with pre-built Cognitive Services that span the cloud and the edge.

To learn more, please read the technical blog, “Getting started with Azure Cognitive Services in containers."

4. Azure Cognitive Search and Basketball

Azure Cognitive Search, an AI-first approach to content understanding, became available in preview. Cognitive Search expands Azure Search with built-in cognitive skills to extract knowledge. This knowledge is then organized and stored in a search index, enabling new ways of exploring the data.

Check out how the National Basketball Association (NBA) used Cognitive Search, Cognitive Services, and custom models to power rich data exploration in the //BUILD 2018 keynote.

Read “Announcing Cognitive Search: Azure Search + cognitive capabilities" for more details.

5. Bot Framework v4 SDK

With the general availability of the Bot Framework v4 SDK announced in September, developers can take advantage of broader language support. C# and JavaScript are generally available, while Python and Java are in preview. You can also take advantage of better extensibility to harness a vibrant ecosystem of pluggable components like dialog management and machine translation. The Bot Framework also includes an emulator and a set of CLI tools to streamline the creation and management of different bot language understanding services. Today the service has over 340 thousand users and growing.
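For a feel of the v4 programming model, here is a minimal echo-bot sketch using the Python SDK (in preview at the time of writing). It assumes the botbuilder-core package and follows the SDK's ActivityHandler pattern; adapter and web hosting wiring are omitted.

# Minimal v4-style echo bot using the Bot Framework Python SDK (botbuilder-core).
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the user's text back so replies are easy to spot while testing.
        await turn_context.send_activity(
            MessageFactory.text(f"You said: {turn_context.activity.text}")
        )

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet everyone who joins the conversation except the bot itself.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello and welcome!")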

To learn more, check out Conversational AI Updates.

AI tools and frameworks

These tools and frameworks include Visual Studio tools for AI, Azure Notebooks, Data Science VMs, Azure Machine Learning Studio, ONNX, and the AI Toolkit for Azure IoT Edge.

6. Data science features in Visual Studio Code

As of November, data science features are available in the Python extension for Visual Studio Code! With these features, developers can work with data interactively in Visual Studio Code. Whether for exploring data or for incorporating machine learning models into applications, this makes Visual Studio Code an exciting new option for those who prefer an editor for data science tasks.
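At the heart of these features are Jupyter-style code cells in ordinary .py files. A minimal sketch follows: the "# %%" markers are what the Python extension recognizes as cell boundaries, and the pandas and matplotlib calls are only illustrative choices.

# An ordinary .py file; each "# %%" marker starts a cell that can be run
# individually in the Python Interactive window.

# %%
import pandas as pd

df = pd.DataFrame({"x": range(10), "y": [v * v for v in range(10)]})
df.describe()

# %%
import matplotlib.pyplot as plt

plt.plot(df["x"], df["y"])
plt.title("y = x^2")
plt.show()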

Visual Studio Tools for AI provides additional details for taking advantage of these new features.

7. ONNX Runtime is now open source

ONNX Runtime is now open source. ONNX is an open format to represent machine learning models that enables developers and data scientists to use the frameworks and tools that work best for them, including PyTorch, TensorFlow, scikit-learn, and more. ONNX Runtime is the first inference engine that fully supports the ONNX specification, and users typically see about a 2x improvement in performance.

At Microsoft, teams are using ONNX Runtime to improve the scoring latency and efficiency of their models. For models the teams converted to ONNX, average performance improved by two times compared to scoring in previous solutions. Leading hardware companies such as Qualcomm, Intel and NVIDIA are actively integrating their custom accelerators into ONNX Runtime.
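As a rough illustration of how little code scoring takes, here is a minimal sketch using the open-source onnxruntime package; the model path, input shape, and dummy data are placeholders.

# Minimal scoring sketch with the open-source onnxruntime package.
# "model.onnx" stands in for a model exported from PyTorch, scikit-learn, or
# any other ONNX-producing framework or converter.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")

# Inspect the graph's declared input so we feed data with the right name and shape.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Dummy batch matching a typical float32 image input; replace with real data.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: batch})
print(outputs[0].shape)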

More details are available in the blog post, "ONNX Runtime is now open source.”

8. ML.NET and AI Platform for Windows Developers

Developers can access ML.NET, a new open-source, cross-platform machine learning framework. The technology behind AI features in Office and Windows has been released as a project on GitHub.

In addition, the AI Platform for Windows developers allows ONNX models to run natively on Windows-based devices.

Check out this blog post and video, “How Three Lines of Code and Windows Machine Learning Empower .NET Developers to Run AI Locally on Windows 10 Devices” for a helpful example of how to use these platforms.

AI infrastructure

This category covers Azure Data Services, compute services including Azure Kubernetes Services (AKS), and AI Silicon support including GPUs and FPGAs.

9. Azure Databricks

Azure Databricks, a fast, easy, and collaborative Apache® Spark™-based analytics platform optimized for Azure, became generally available. Today, organizations benefit from Azure Databricks' native integration with other services like Azure Blob Storage, Azure Data Factory, Azure SQL Data Warehouse, and Azure Cosmos DB. This platform enables new analytics solutions that support modern data warehousing, advanced analytics, and real-time analytics scenarios.

To learn more, read “Ignite 2018 - Making AI real for your business with Azure Data.”

10. Project Brainwave, integrated with Azure Machine Learning

Microsoft showcased the preview of Project Brainwave, integrated with Azure Machine Learning. This service brings hardware-accelerated, real-time inference for AI to Azure. The Project Brainwave architecture is deployed on a type of computer chip from Intel called a field programmable gate array (FPGA), which makes real-time AI calculations at a competitive cost and with the industry's lowest lag time.

In addition, customers got a sneak peek of bringing Project Brainwave to the edge. This means customers can take advantage of this computing speed for their businesses and facilities, even if their systems aren't connected to a network or the internet.

Read “Real-time AI: Microsoft announces a preview of Project Brainwave" for more details.

AI is the new normal

AI catalyzes digital transformation. Microsoft believes in making AI accessible so that developers, data scientists, and enterprises can build systems that augment human ingenuity to tackle meaningful challenges.

AI is the new normal. Microsoft has more than 20 years of AI research applied to our products and services. Everyone can now access this AI through simple, yet powerful productivity tools such as Excel and Power BI.

In continual support of bringing AI to all, Microsoft introduced new AI capabilities for Power BI. These features enable all Power BI users to discover hidden, actionable insights in their data and drive better business outcomes with easy-to-use AI. No code needed to get started. Here are a few highlights:

  • Integration of Azure Cognitive Services.
  • Key driver analysis helps users understand what influences key business metrics.
  • Create machine learning models directly in Power BI using automated ML.
  • Seamless integration of Azure Machine Learning within Power BI.

image of occupations which use AI daily

Moving forward into 2019

Many thanks to you, our customers, MVPs, developers, and partners in being a part of Microsoft’s journey to empower businesses to build globally scalable AI applications. A new year is on the way, and the possibilities are endless. We can’t wait to share what we have in store for you in 2019 and to see what you will build with Azure this upcoming year. Happy New Year from the Azure AI team!


Bing Maps APIs help find you at a crossroads!

Frequently, we refer to locations based on the intersection of two or more roads. It’s an easy way to pinpoint a specific location. Common scenarios include sharing a meetup spot and finding other nearby locations relative to a point. Now, the Locations by Point (REST Locations API) and Autosuggest APIs both support intersections as part of their responses.

Locations by Point

The Locations by Point API has a new data type for intersections. The related documentation can be found at https://docs.microsoft.com/en-us/bingmaps/rest-services/locations/find-a-location-by-point but we included the following specific snippet below:

Bing Maps Location By Point API Response Notes

Here’s a snippet from a sample JSON response illustrating an intersection in the response:

Location By Point API - JSON Response

And similarly, here’s the response in XML:

Location By Point API - XML Response
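On the request side, here is a minimal Python sketch of a reverse-geocode call against the documented Locations endpoint using the requests library. The coordinates and key are placeholders, and the exact field names for the intersection details should be confirmed against the response snippets above.

# Minimal sketch of a reverse-geocode call to the Bing Maps Locations by Point API.
import requests

BING_MAPS_KEY = "<your Bing Maps key>"
latitude, longitude = 47.6101, -122.1905   # placeholder point near an intersection

url = f"https://dev.virtualearth.net/REST/v1/Locations/{latitude},{longitude}"
resp = requests.get(url, params={"key": BING_MAPS_KEY, "o": "json"})
resp.raise_for_status()

resources = resp.json()["resourceSets"][0]["resources"]
for location in resources:
    address = location.get("address", {})
    # When the point resolves to an intersection, the address carries the
    # intersection details alongside the usual formattedAddress (see the
    # response notes in the documentation for the exact shape).
    print(address.get("formattedAddress"), address.get("intersection"))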

Autosuggest

For Autosuggest, the response will be part of the Address entity type. This means that the Address entity type in Autosuggest now includes addresses, roads, and intersections. Below is a snippet from an Autosuggest response in JSON:

bing Maps Autosuggest API - JSON Response

For more information about Locations by Point API and Autosuggest API, check out the documentation. Also, for details about licensing and to contact the team, go to https://www.microsoft.com/en-us/maps.

- Bing Maps Team

Move to modern. Windows 10 IoT—safer, smarter, cloud-ready

2019 is the year to make the shift to a modern desktop

Our 2019 Resolution: Help you transform your 2008 server applications with Azure!

This blog post was co-authored by Erin Chapple, CVP, Microsoft Windows Server, and Rohan Kumar, CVP, Microsoft Data.

The beginning of a new year is always a time to reflect on our plans. At Microsoft, with the end of support for 2008 servers looming, we’ve been thinking about how we can help you with your server refresh journey. How can we enable you to take advantage of all the cutting-edge innovations available in Azure?

And as we take stock, we believe that the 3 reasons why Azure is the best place to transform your 2008 server applications are:

  1. Security: With security threats becoming more and more sophisticated, strengthening your organization’s security posture should be top of mind. The good news is that Azure is the most trusted cloud in the market, with more certifications than any other public cloud.
  2. Innovation: We have an optimized, low-risk path to help you embrace Azure. And once you are there, you can continue to innovate with fully-managed services such as Azure SQL Database, Azure Cosmos DB and Azure AI.
  3. Cost savings: By taking advantage of Azure Hybrid Benefit and Extended Security updates, you can save significantly. For example, moving a hundred 2008 servers to Azure can save you more than $300K over 3 years compared to the cost of running them on-premises (check out our Azure TCO calculator to do your own modelling). And you don’t need to think past Azure for Windows Server and SQL Server: AWS is 5x more expensive.    

Get started now – we’re here to help!

The end of support for SQL Server 2008/R2 is now less than six months away on July 9, 2019, and support for Windows Server 2008/R2 ends on January 14, 2020. Windows 7, Office 2010, and Exchange Server are also ending their extended support soon. Microsoft and our partners are here to help you every step of the way. Here are 3 steps to help you make the shift to a modern estate:

Step 1: Assess your environment

The first step is to get a complete inventory of your 2008 server environment. Rank each workload by strategic value to your organization. Answer questions like: How would this workload benefit from running in the cloud? What needs to remain on-premises? What is your strategy for upgrading each server to current versions? By establishing your priorities and objectives at the start, you can ensure a more successful migration.

For detailed guidance, visit the Azure Database Migration Guide, the Windows Server Migration Guide, the Microsoft SQL Server Docs and the Windows Server Docs.

Step 2: Know your options

Microsoft offers a wide range of solutions to modernize your 2008 server applications on your terms:

Azure SQL Database Managed Instance: Get full engine compatibility with existing SQL Server deployments (starting with SQL Server 2008), while enabling PaaS capabilities (automatic patching and version updates, automated backups, high-availability) and AI-based features that drastically reduce management overhead and TCO.

Windows Server and SQL Server in Azure Virtual Machines: Get the flexibility of virtualization for a wide range of computing solutions—development and testing, running applications and extending your datacenter.

Windows Server and SQL Server on-premises: Bring innovative security and compliance features, industry-leading performance, mission-critical availability, advanced analytics built-in and new deployment options such as containers. Refresh your hardware and software infrastructure with Windows Server 2016 and 2019 and SQL Server 2017 and 2019.

Step 3: Make the move

Build your cloud migration plan using four widely adopted strategies: rehost, refactor, rearchitect, and rebuild applications. Choose the right mix for your business, considering the new database and OS options. Join our upcoming webinars and events to learn more:

When you add it all up, Microsoft has the most comprehensive and compelling Cloud, Data and AI platform on the planet. Only Microsoft offers expansive programs that deliver unprecedented value for your existing investments and we are here to help on your refresh journey. We are excited to see how you and your organization continue to innovate and transform your world!

Create alerts to proactively monitor your data factory pipelines

Data integration is complex and helps organizations combine data and business processes in hybrid data environments. The increase in volume, variety, and velocity of data has led to delays in monitoring and reacting to issues. Organizations want to reduce the risk of data integration activity failures and the impact they cause on other downstream processes. Manual approaches to monitoring data integration projects are inefficient and time consuming. As a result, organizations want automated processes to monitor and manage data integration projects, removing inefficiencies and catching issues before they affect the entire system. Organizations can now improve operational productivity by creating alerts on data integration events (success/failure) and proactively monitoring them with Azure Data Factory.

To get started, simply navigate to the Monitor tab in your data factory, select Alerts & Metrics, and then select New Alert Rule.

Creating new alert rule in Azure Data Factory

Select the target data factory metric for which you want to be alerted.

Selecting the target data factory metric

Select one Data Factory Metric to set up the alert condition

Then, configure the alert logic. You can specify various filters such as activity name, pipeline name, activity type, and failure type for the raised alerts. You can also specify the alert logic conditions and the evaluation criteria.

Configure the alert logic in Azure Data Factory

Finally, configure how you want to be alerted. Different mechanisms such as email, SMS, voice, and push notifications are supported.

Configuring alert notifications in Azure Data Factory

Creating alerts ensures 24/7 monitoring of your data integration projects and makes sure that you are notified of issues before they potentially corrupt your data or affect downstream processes. This helps your organization be more agile and increases confidence in your overall data integration processes, which ultimately results in greater overall productivity and helps guarantee that you deliver on your SLAs. Learn more about creating alerts in Azure Data Factory.
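The walkthrough above uses the portal, but the same kind of metric alert can in principle be created programmatically through Azure Monitor. The sketch below assumes the azure-mgmt-monitor Python SDK and the Data Factory failed-pipeline-runs metric; the resource IDs are placeholders and exact model and parameter names can differ between SDK versions, so treat it as a starting point rather than a recipe.

# Rough sketch: create a metric alert on Data Factory failed pipeline runs with
# the azure-mgmt-monitor SDK, as an alternative to the portal steps above.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
    MetricAlertAction,
)

credentials = ServicePrincipalCredentials(client_id="<app-id>", secret="<secret>", tenant="<tenant-id>")
client = MonitorManagementClient(credentials, "<subscription-id>")

factory_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.DataFactory/factories/<factory-name>"
)
action_group_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/microsoft.insights/actionGroups/<action-group>"
)

alert = MetricAlertResource(
    location="global",
    description="Alert when any pipeline run fails",
    severity=3,
    enabled=True,
    scopes=[factory_id],
    evaluation_frequency="PT1M",
    window_size="PT5M",
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="failed-pipeline-runs",
                metric_name="PipelineFailedRuns",   # Data Factory "Failed pipeline runs" metric
                time_aggregation="Total",
                operator="GreaterThan",
                threshold=0,
            )
        ]
    ),
    actions=[MetricAlertAction(action_group_id=action_group_id)],
)

client.metric_alerts.create_or_update("<rg>", "adf-failed-pipeline-runs", alert)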

Our goal is to continue adding features to improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.
