
Because it’s Friday: Open Source in Lego


If (like me) you work with open source software, you've probably had to explain to non-technical coworkers or family members what "Open Source" actually means. At least in my experience, that rapidly devolves into questions about the meanings of words like "distribution" or "license" or even "software". Next time that discussion comes up, I'm going to point them to this delightful video (hat tip: Alice Data) that explains the concepts of open source without getting into any of the technical details. It's also produced entirely in Lego.

It's been a busy week, so that's all for the blog for now. We'll be back with more next week, and in the meantime have a great weekend!

 


Easier functional and integration testing of ASP.NET Core applications


In ASP.NET Core 2.1 (now in preview) there's a new package called Microsoft.AspNetCore.Mvc.Testing that's meant to help streamline in-memory end-to-end testing of applications that use the MVC pattern. I've been rewriting my podcast site at https://hanselminutes.com in ASP.NET Core 2.1 lately, and recently added some unit testing as well as automatic unit testing with code coverage. Here are a couple of basic tests. Note that these call the Razor Pages' OnGet() methods directly, which shows how nicely ASP.NET Core is factored for unit testing, but it doesn't do a "real" HTTP GET or perform true end-to-end testing.

These tests check that visiting a URL like /620 automatically redirects to the correct full canonical path, as it should.

[Fact]
public async void ShowDetailsPageIncompleteTitleUrlTest()
{
    // FAKE HTTP GET "/620"
    IActionResult result = await pageModel.OnGetAsync(id: 620, path: "");

    RedirectResult r = Assert.IsType<RedirectResult>(result);
    Assert.NotNull(r);
    Assert.True(r.Permanent); // HTTP 301?
    Assert.Equal("/620/jessica-rose-and-the-worst-advice-ever", r.Url);
}

[Fact]
public async void SuperOldShowTest()
{
    // FAKE HTTP GET "/default.aspx?showId=18602"
    IActionResult result = await pageModel.OnGetOldShowId(18602);

    RedirectResult r = Assert.IsType<RedirectResult>(result);
    Assert.NotNull(r);
    Assert.True(r.Permanent); // HTTP 301?
    Assert.StartsWith("/615/developing-on-not-for-a-nokia-feature", r.Url);
}

I wanted to see how quickly and easily I could do these same two tests, except "from the outside" with an HTTP GET, thereby testing more of the stack.

I added a reference to Microsoft.AspNetCore.Mvc.Testing in my testing assembly using the command-line equivalent of "Right Click | Add NuGet Package" in Visual Studio. This CLI command does the same thing as the UI and adds the package to the csproj file.

dotnet add package Microsoft.AspNetCore.Mvc.Testing -v 2.1.0-preview1-final

It includes a new WebApplicationTestFixture that I point at my app's Startup class. Note that I can store the HttpClient the TestFixture makes for me.

public class TestingMvcFunctionalTests : IClassFixture<WebApplicationTestFixture<Startup>>
{
    public HttpClient Client { get; }

    public TestingMvcFunctionalTests(WebApplicationTestFixture<Startup> fixture)
    {
        Client = fixture.Client;
    }
}

No tests yet, just setup. I'm using SSL redirection so I'll make sure the client knows that, and add a test:

public TestingMvcFunctionalTests(WebApplicationTestFixture<Startup> fixture)
{
    Client = fixture.Client;
    Client.BaseAddress = new Uri("https://localhost");
}

[Fact]
public async Task GetHomePage()
{
    // Arrange & Act
    var response = await Client.GetAsync("/");

    // Assert
    Assert.Equal(HttpStatusCode.OK, response.StatusCode);
}

This will fail, in fact, because I have an API key that's needed to call out to my backend system, and I store it in .NET's User Secrets system. My test gets an InternalServerError instead of OK.

Starting test execution, please wait...

[xUnit.net 00:00:01.2110048] Discovering: hanselminutes.core.tests
[xUnit.net 00:00:01.2690390] Discovered: hanselminutes.core.tests
[xUnit.net 00:00:01.2749018] Starting: hanselminutes.core.tests
[xUnit.net 00:00:08.1088832] hanselminutes_core_tests.TestingMvcFunctionalTests.GetHomePage [FAIL]
[xUnit.net 00:00:08.1102884] Assert.Equal() Failure
[xUnit.net 00:00:08.1103719] Expected: OK
[xUnit.net 00:00:08.1104377] Actual: InternalServerError
[xUnit.net 00:00:08.1114432] Stack Trace:
[xUnit.net 00:00:08.1124268] D:\github\hanselminutes-core\hanselminutes.core.tests\FunctionalTests.cs(29,0): at hanselminutes_core_tests.TestingMvcFunctionalTests.<GetHomePage>d__4.MoveNext()
[xUnit.net 00:00:08.1126872] --- End of stack trace from previous location where exception was thrown ---
[xUnit.net 00:00:08.1158250] Finished: hanselminutes.core.tests
Failed hanselminutes_core_tests.TestingMvcFunctionalTests.GetHomePage
Error Message:
Assert.Equal() Failure
Expected: OK
Actual: InternalServerError

Where do these secrets come from? In Development they come from user secrets.

public Startup(IHostingEnvironment env)
{
    this.env = env;
    var builder = new ConfigurationBuilder();

    if (env.IsDevelopment())
    {
        builder.AddUserSecrets<Startup>();
    }
    Configuration = builder.Build();
}

But in Production they come from the ENVIRONMENT. Are these tests Development or Production...I must ask myself. They are Production unless told otherwise. I can override the Fixture and tell it to use another Environment, like "Development." Here is a way (given this preview) to make my own TestFixture by deriving from the one provided and overriding ConfigureWebHost to change the Environment. I think it's too hard and should be easier.

Either way, the real question here is for me - do I want my tests to be integration tests in development or in "production." Likely I need to make a new environment for myself - "testing."

public class MyOwnTestFixture<TStartup> : WebApplicationTestFixture<Startup> where TStartup : class
{
    public MyOwnTestFixture() { }

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.UseEnvironment("Development");
    }
}

However, my User Secrets still aren't loading, and that's where the API key I need is stored.

BUG?: There is either a bug here, or I don't know what I'm doing. I'm loading User Secrets with builder.AddUserSecrets<Startup>() and later injecting the IConfiguration instance from builder.Build() and going "_apiKey = config["SimpleCastAPIKey"];" but it's null. The config that's injected later in the app isn't the same one that's created in Startup.cs; it's empty. Not sure if this is an ASP.NET Core 2.0 thing or a 2.1 thing, but I'm going to bring it up with the team and update this blog post later. It might be a Razor Pages subtlety I'm missing.
For now, I'm going to put in a check and manually fix up my Config. However, when this is fixed (or I discover my error) this whole thing will be a pretty nice little setup for integration testing.
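Something like this rough sketch, with an expanded ConfigureWebHost on the fixture from above, is what I have in mind. The ConfigureAppConfiguration call and the TEST_SIMPLECAST_API_KEY environment variable here are just placeholders for however you get a real key onto the test machine:

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class MyOwnTestFixture<TStartup> : WebApplicationTestFixture<Startup> where TStartup : class
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.UseEnvironment("Development");

        // Workaround: push the secret the site expects into the test host's
        // configuration so the page doesn't end up with a null API key.
        builder.ConfigureAppConfiguration((context, config) =>
        {
            config.AddInMemoryCollection(new Dictionary<string, string>
            {
                ["SimpleCastAPIKey"] = Environment.GetEnvironmentVariable("TEST_SIMPLECAST_API_KEY")
            });
        });
    }
}

Whether that value actually reaches the IConfiguration that's injected later depends on how Startup wires configuration up, so treat it as a stopgap rather than the fix.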

I will add another test, similar to the redirect Unit Test but a fuller integration test that actually uses HTTP and tests the result.

[Fact]
public async Task GetAShow()
{
    // Arrange & Act
    var response = await Client.GetAsync("/620");

    // Assert
    Assert.Equal(HttpStatusCode.MovedPermanently, response.StatusCode);
    Assert.Equal("/620/jessica-rose-and-the-worst-advice-ever", response.Headers.Location.ToString());
}

There's another issue here that I don't understand. Because I have to set Client.BaseAddress to https://localhost (because HTTPS), and the Client comes from fixture.Client, I can't set the base address twice or I'll get an exception. The test class's constructor runs once per test, but the HttpClient the fixture passes in has a longer lifecycle; it's being reused, and it fails when its BaseAddress is set a second time.

Error Message:

System.InvalidOperationException : This instance has already started one or more requests. Properties can only be modified before sending the first request.

BUG? So to work around it I check to see if I've done it before. Which is gross. I want to set the BaseAddress once, but I am not in charge of the creation of this HttpClient as it's passed in by the Fixture.

public TestingMvcFunctionalTests(MyOwnTestFixture<Startup> fixture)
{
    Client = fixture.Client;
    if (Client.BaseAddress.ToString().StartsWith("https://") == false)
        Client.BaseAddress = new Uri("https://localhost");
}

Another option is to create a new client every time, which is less efficient but perhaps a better idea, as it avoids any side effects from other tests. It also feels weird that I should have to do this, since the new standard for ASP.NET Core sites is to be SSL/HTTPS by default.

public TestingMvcFunctionalTests(MyOwnTestFixture<Startup> fixture)
{
    Client = fixture.CreateClient(new Uri("https://localhost"));
}

I'm still learning about how it all fits together, but later I plan to add in Selenium tests to have a full, complete, test suite that includes the browser, CSS, JavaScript, end-to-end integration tests, and unit tests.

Let me know if you think I'm doing something wrong. This is preview stuff, so it's early days!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.


© 2018 Scott Hanselman. All rights reserved.
     

New Microsoft Azure regions available for Australia and New Zealand


Two new Microsoft Azure regions in Australia are available to customers, making Microsoft the only global provider to deliver cloud services specifically designed to address the requirements of the Australian and New Zealand governments and critical national infrastructure, including banks, utilities, transport and telecommunications.

We build our cloud infrastructure to serve the needs of our customers by delivering innovation globally and listening locally. To support the mission-critical work of crucial organizations in Australia and New Zealand, we’re delivering our global cloud platform through a unique partnership with Canberra Data Centres.

A key to our new Australia Central regions is the ability for customers to deploy their own applications and infrastructure within Canberra Data Centres directly connected via Azure ExpressRoute to Microsoft’s global network. This offers a great deal of flexibility and surety on network performance and security, making these regions ideally suited to the complex challenge of modernising mission-critical applications over time. Australian federal government customers can leverage their Intra Government Communications Network (ICON) for direct connectivity.

With heightened scrutiny of supply chain assurance in government and critical national infrastructure, we are proud to deliver these services in partnership with Australian-owned Canberra Data Centres. They are the premier data centre provider in Australia, operating facilities designed for the handling of secret level data.

Microsoft offers Azure from three cities in Australia including Sydney, Melbourne and Canberra with connectivity to Perth, Brisbane and Auckland. Other global providers are limited to only one Australian location, so Microsoft provides the leading flexibility for both high availability and resiliency. We understand it is essential for critical national infrastructure to have disaster resilience options that can distribute data to multiple regions without compromising on retaining data residency within Australian borders. Azure already serves a crucial purpose in the transformation of critical Australian enterprises:

  • The Victorian Supreme Court is undertaking a digital transformation which uses Azure to connect all 34 courtrooms and underpin a digital case management solution to streamline and speed the process of justice. 
  • The Department of Human Services has been exploring how intelligent Azure cloud services and bot technologies can support employees and ultimately expand consumer engagement channels.

Our Australia Central regions are restricted to the community of Australian and New Zealand governments and critical national infrastructure sectors, along with their trusted partners. They include Australian local partners like Veritec, MailGuard and Intelledox as well as global partners like Axon, Citrix and Veritas. It’s the combination of an ecosystem of partners delivering from our trusted Azure cloud platform that enables more rapid access to innovation.

We continue to expand our global and local cloud infrastructure to meet the growing demand and rapid innovation of cloud services. This milestone is another step in Azure’s global momentum. Earlier this month, we announced plans for two new Azure Government United States regions for data classified as secret. Two weeks ago we also announced the availability of our Azure regions in France and our plans to deliver cloud services from new regions in Switzerland, Germany and the United Arab Emirates.

Around the world government and critical national infrastructure providers are transforming operations and the services they deliver to citizens and customers. They are rapidly modernising their business and mission-critical applications through the flexibility, scale and reach of Azure, partnering with our unmatched partner ecosystem, and placing their trust in us to create a resilient and responsive platform for growth.

I encourage you to learn more at the Azure Australia Central Regions website.

Ingest, prepare, and transform using Azure Databricks and Data Factory


Today’s business managers depend heavily on reliable data integration systems that run complex ETL/ELT (extract-transform-load and extract-load-transform) workflows. These workflows allow businesses to ingest data in various forms and shapes from different on-premises and cloud data sources, transform or shape the data, and gain actionable insights to make important business decisions.

With the general availability of Azure Databricks comes support for doing ETL/ELT with Azure Data Factory. This integration allows you to operationalize ETL/ELT workflows (including analytics workloads in Azure Databricks) using data factory pipelines that do the following:

  1. Ingest data at scale using 70+ on-prem/cloud data sources
  2. Prepare and transform (clean, sort, merge, join, etc.) the ingested data in Azure Databricks as a Notebook activity step in data factory pipelines
  3. Monitor and manage your E2E workflow


Take a look at a sample data factory pipeline where we are ingesting data from Amazon S3 into Azure Blob storage, processing the ingested data using a Notebook running in Azure Databricks, and moving the processed data into Azure SQL Data Warehouse.


You can parameterize the entire workflow (folder name, file name, etc.) using rich expression support and operationalize by defining a trigger in data factory.

Get started today!

We are excited for you to try Azure Databricks and Azure Data Factory integration and let us know your feedback.

Get started by clicking the Author & Monitor tile in your provisioned v2 data factory blade.


Click on the Transform data with Azure Databricks tutorial and learn step by step how to operationalize your ETL/ELT workloads including analytics workloads in Azure Databricks using Azure Data Factory.


We are continuously working to add new features based on customer feedback. Get more information and detailed steps for using the Azure Databricks and Data Factory integration.

Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

Unique identities are hard: How I learned to stop worrying and love the ID scope


Behold the ID scope, one of the most nuanced concepts in the IoT Hub Device Provisioning Service. It is both reviled and lauded for its name-spacing characteristics in device provisioning. It throws a wrench in complex provisioning scenarios, but it’s also necessary for secure zero-touch device provisioning. This blog post is a culmination of several hours’ worth of conversations and design discussions in the engineering team, and it may take you several reads to fully understand. Understanding ID scopes is a journey, not a destination. If you don’t care about the details, just know that ID scopes are necessary to ensure identity uniqueness in the device supply chain. If you want to know why, read on.

On device uniqueness

Device uniqueness is made up of two pieces, a unique registration ID (not assumed private) and a key (assumed private). For shorthand, each device is represented within a single DPS as (X, Y) where X = registration ID and Y = key. This has been used for what feels like eons in computing, and the concept of a GUID is nothing new. It turns out that there are a couple of unique things about IoT scenarios that make this insufficient for uniquely setting a device identity. Device creation and digital enrollment can occur at separate times, and this requires some sort of scoping to prevent conflicts. We introduced the ID scope concept to scope identities to a particular tenant DPS.

Now, an illustrative example for why we have a scoping mechanism. Another reason has to do with the security technologies used to prove device identity. We support TPM attestation, and we provide a TPM emulator for developers who want to get started with a simulated device before they grab real hardware. The TPM emulator we publish to GitHub has the endorsement key hardcoded to a single value. It’s for development purposes only, so this shouldn’t be a big deal. This means that everyone following our tutorials will create a device with X = mydevice, Y = EK_github. This entry is unique to their DPS service, but not unique overall. When my tutorial device talks to the Device Provisioning Service and your tutorial device talks to the Device Provisioning Service, the service needs some way of telling the devices apart, hence the ID scope.

To learn about provisioning and TPMs, go to: https://aka.ms/dps-tpm

If we did require a unique (X, Y), a problem would arise when devices are created en masse, sit on a shelf for a year, and then are finally registered in the provisioning service. The (X, Y) of those devices might have already been registered in that period of time, especially if they are relatively common. Now these devices that have been sitting in storage are getting enrolled to a provisioning service, and the following happens:

  • The enrollments are rejected because they exist in the global Device Provisioning Service already.
  • The warehoused devices hijack the identity of the new devices. The service has no way of telling the devices apart.

Unfortunately, this is super common in supply chains. Supply chains are important, so this is out.

Another example of a temporal delay between picking an ID and registering it is choosing an email address (or a domain name) and password before actually signing up for that email address. The only difference is that IoT devices are incapable of picking a new identity, unlike humans, who can just choose a new email address.

I'll introduce a new variable to my shorthand, so (X, Y, Z) is a single device in the service where Z = ID scope for a provisioning service. So my device is (mydevice, EK_github, myIdScope) and your device is (mydevice, EK_github, yourIdScope). These are unique, balance is restored to the universe, and I can sleep well at night.

Pictorial representation of tenant data segregation in the Device Provisioning Service.

A meme-like pictorial representation of the permissions model in the Device Provisioning Service.

We could require OEMs to take pains to ensure unique (X, Y) identities, but that's pushing a significant security burden onto customers who would much rather be secure by default. We want to make it as easy as possible for our customers to be secure, which means not giving our customers enough rope to hang themselves. So we need to have a Z.

Devices must know their (X, Y, Z) in order to present it to the Device Provisioning Service and be assigned to a hub. Devices are created with firmware including (X, Y, Z). There's no way for the device to discover its Z value either, because we've already determined that X and Y are not necessarily globally unique and we can’t do a unique lookup for a Z value. The device needs to be created with all three values set at the same time.
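To make the (X, Y, Z) triple concrete, here's a rough sketch of the device side using the .NET provisioning device SDK (Microsoft.Azure.Devices.Provisioning.Client). I'm using the symmetric key flavor of attestation purely for brevity, and the registration ID, key, and ID scope values below are placeholders; a real device ships with these baked into firmware, typically alongside a TPM or X.509 credential.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Provisioning.Client;
using Microsoft.Azure.Devices.Provisioning.Client.Transport;
using Microsoft.Azure.Devices.Shared;

class DeviceRegistrationSample
{
    // Placeholders for the three values burned into the device.
    const string RegistrationId = "mydevice";      // X (not assumed private)
    const string PrimaryKey     = "<device key>";  // Y (assumed private)
    const string IdScope        = "<myIdScope>";   // Z, issued per provisioning service

    static async Task Main()
    {
        using (var security = new SecurityProviderSymmetricKey(RegistrationId, PrimaryKey, null))
        using (var transport = new ProvisioningTransportHandlerMqtt())
        {
            // The device presents (X, Y, Z) to the global DPS endpoint...
            var client = ProvisioningDeviceClient.Create(
                "global.azure-devices-provisioning.net", IdScope, security, transport);

            // ...and gets back the IoT hub it has been assigned to.
            DeviceRegistrationResult result = await client.RegisterAsync();
            Console.WriteLine($"Status: {result.Status}, hub: {result.AssignedHub}, device: {result.DeviceId}");
        }
    }
}

The point isn't the specific attestation mechanism; it's that all three values travel together. Drop the Z and the service has no way to tell my "mydevice" apart from yours.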

Of course, in order to prevent someone from impersonating my device or squatting on my IDs there has to be some sort of control over who gets to create (X, Y, Z). We've already established that X and Y can be pretty much whatever, so they're out. But Z is something that is set by the service and today is tied directly to a particular provisioning service tenant. Z is our way of controlling which devices can connect to my provisioning service. The only people who can create an enrollment record for (X, Y, Z) are those with write access to the device provisioning service associated with Z, which is represented by some sort of access policy (enrollment write in the case of the Device Provisioning Service). In other words, only people I've trusted with that access policy can create enrollment entries for my devices.

There's also the scenario when the initial programmer of the device doesn't do cloud attach. I'll get to that in a moment.

Of course, bad actors could technically guess my device's (X, Y, Z) and get their malicious devices connected to my IoT hub, but this isn't a new threat. There's only so much we can do to protect our customers. If I name all my devices in the format "toasterN" with a symmetric key of "password" and a bad actor discovers my ID scope, there's nothing I can do to stop them from hijacking my device naming scheme. At least they can only hijack identities for which there are enrollment records, which mitigates the risk somewhat. That being said, I can open a bank account with a password of "password123!" and lose my life savings much more easily. There's always going to be a threat, it just depends on how much effort you're willing to put into your own security.

We should all be on the same page now: each device needs a unique (X, Y, Z), and only trusted actors can create enrollment records for a given Z. The Device Provisioning Service obfuscates the Z so it's hard to guess what it’ll be for a given tenant name.

Learning!

Real OEM scenarios are hard

It turns out that this works beautifully if the same entity who programs the initial firmware onto the device is also responsible for the cloud-attach IoT solution. This is the scenario that I call the "all-in-one OEM". This OEM produces specialty devices or single-purpose devices that have a ready-made IoT solution to use with them. The OEM's customer buys the devices and probably a subscription to the service attached to them, but they don't build their own solution. Examples of this are consumer smart devices like coffee makers (OEM wants to geo-shard their solution), specialty manufacturing equipment, and other large machinery that's often leased.

The flipside to the "all-in-one" OEM is the white label OEM. The white label OEM produces many devices before they have customers for those devices. The white label OEM could have many roles in the supply chain:

  • Sell "empty" devices to customers who handle the cloud attach. The device purchasers put on the initial image. This scenario works out of the box like the “all-in-one” OEM.
  • Sell devices with a basic image to customers who handle the cloud attach. Devices are created in bulk before there's a buyer.
    • Device purchaser uses onboarding provided by OEM.
    • Device purchaser re-flashes the device.
  • Sell devices with a basic image to customers. OEM has an existing business relationship with customer and can put in provisioning configuration such as ID scope.
  • Sell devices with a basic image to customers. Offers a value-add service of automatic provisioning to the customer's IoT solution for an added fee (PaaS).
    • Involves having some initial image on the device, in which case the OEM either has their own provisioning service for the value add or the customer gives them an ID scope to burn in.
  • Sell devices with a basic image to customers. Customers go through an ownership claim process to connect to SaaS. The ownership claim process is designed and built by the entity providing the SaaS service.

Two IoT scenarios side-by-side: on the left, a man holding a box of plants for planting. On the right, a stack of shipping containers.

Regardless of the scenario, there's immediately a problem. We can't assume the white label OEM has a provisioning service. Because they have no provisioning service, they have no Z value to program into their devices. There are a couple options:

  • Customer gives the OEM a Z value for the initial image.
  • OEM has their own Z value.
  • OEM programs the device to ask for a Z value at first boot. This requires a touch on boot.

This document focuses on the need for ID scope and is not a dive into the scenarios. If there's interest, I'll do a separate blog post on provisioning scenarios. So for the time being I'm going to assume that somehow there's a Z value that gets on the device so the device has a full identity. Now there's a device (X, Y, Z) with a corresponding enrollment in DPS_Z and we’re sure it is unique and cannot be spoofed.

We really do need ID scopes

Here's what we learned:

  • An ID and key pair alone isn't enough to uniquely identify a device given the timelines involved with the IoT device supply chain.
  • Getting the scoping identifier onto devices in the white label OEM case is difficult and still being designed. There are a couple of ideas we have in this area; more coming soon™.

To sum things up with a limerick:

We really did try to say nope
To device identity scopes
But security
Is not always easy
So now we all just have to cope.

Azure.Source – Volume 25


One of the biggest news items out of last week was the arrival of Virtual Machine Serial Console access in public preview in all global regions. This is something customers have wanted for some time and the team was ecstatic to finally deliver it. Down below, you'll find a blog post and episodes of Tuesdays with Corey and Azure Friday that go into all the details and show how it works.

Now in preview

Virtual Machine Serial Console access - Virtual machine serial console enables bidirectional serial console access to your virtual machines. The preview is available in global Azure regions. To try it, look for Serial console (Preview) in the Support+Troubleshooting section of your virtual machine. Serial Console access requires you to have VM Contributor or higher privileges to the virtual machine. This will ensure connection to the console is kept at the highest level of privileges to protect your system.

Preview: SQL Database Transparent Data Encryption with Azure Key Vault configuration checklist - Azure SQL Database and Data Warehouse offer encryption-at-rest by providing Transparent Data Encryption (TDE) for all data written to disk, including databases, log files and backups. The TDE protector is by default managed by the service in a fully transparent fashion, rotated every 90 days and maintained in archive for access to backups. Optionally, you can assume management of the TDE Protector for more control. However, you should read this post for guidelines before setting up Azure SQL Database TDE with customer-managed keys in Azure Key Vault.

Soft delete for Azure Storage Blobs now in public preview - Soft delete for Azure Storage Blobs is available in all regions, both public and private. When turned on, soft delete enables you to save and recover your data where blobs or blob snapshots are deleted. This protection extends to blob data that is erased as the result of an overwrite. If there is a chance that your data is accidentally modified or deleted by an application or other storage account user, we recommend turning on soft delete. Soft delete is one part of a data protection strategy and can help prevent inadvertent data loss.

Also in preview

Now generally available

Announcing Azure Service Health general availability – configure your alerts today - Azure Service Health is a personalized dashboard that provides guidance and support when issues in Azure services affect you. Unlike our public status page which provides general status information, Azure Service Health provides tailored information for your resources. It also helps you prepare for planned maintenance and other changes that could affect the availability of your resources.

Announcing the general availability of Azure Files share snapshot - Azure Files share snapshots are available globally in all Azure clouds. Share snapshots provide a way to make incremental backups of Server Message Block (SMB) shares in Azure Files. Storage administrators can use snapshots directly and backup providers can now leverage this capability to integrate Azure Files backup and restore capabilities into their products.

Azure Monitor–General availability of multi-dimensional metrics APIs - Explore your Azure metrics through their dimensions and unlock deeper insights. Dimensions are name value pairs, or attributes, that can be used to further segment a metric. These additional attributes can help make exploring a metric more meaningful. Azure Monitor has also increased our metric retention period from 30 days to 93 days, so you can access data for longer, and do meaningful comparisons across months.

Azure SQL Data Warehouse now generally available in all Azure regions worldwide - Azure SQL Data Warehouse is now available in three additional regions— Japan West, Australia East, and India West. These additional locations bring the product worldwide availability count to all 33 regions – more than any other major cloud data warehouse provider. SQL Data Warehouse is a high-performance, secure, and compliant SQL analytics platform offering you a SQL-based view across data and a fast, fully managed, petabyte-scale cloud solution.

Azure Availability Zones now available for the most comprehensive resiliency strategy - Availability of Availability Zones begins with select regions in the United States and Europe. Availability Zones are physically separate locations within an Azure region. Each Availability Zone consists of one or more datacenters equipped with independent power, cooling, and networking. With the introduction of Availability Zones, we now offer a service-level agreement (SLA) of 99.99% for uptime of virtual machines.

Also generally available

News & updates

Support for tags in cost management APIs is now available - We’re announcing the support for tags in both the Usage Details API and the Budgets API for detailed reporting on your Azure usage and charges. This release only supports tags for subscriptions in Enterprise Agreements (EA), in future releases we plan to support other subscription types as well.

The new Azure Load Balancer – 10x scale increase - Azure Load Balancer is a network load balancer offering high scalability, throughput and low latency across TCP and UDP load balancing. The Standard SKU adds 10x scale and more features along with deeper diagnostic capabilities than the existing Basic SKU. The new offer is designed to handle millions of flows per second and built to scale and support even higher loads. The Standard and Basic Load Balancer options share APIs and offer our customers several options to pick and choose what best matches their needs.

Create Service Fabric Clusters from Visual Studio now available - Based on your input, we've added the ability to create secure clusters directly from Visual Studio to help simplify and expedite the process. This update shipped with the latest release of the Service Fabric tooling for Visual Studio 2015 and with Visual Studio 2017 as part of the Azure workload.

BigDL Spark deep learning library VM now available on Microsoft Azure Marketplace - The BigDL deep learning library is a Spark-based framework for creating and deploying deep learning models at scale. While it has previously been deployed on Azure HDInsight and the Data Science VM, its availability on Azure Marketplace as a fixed VM image represents a further step in reducing deployment complexity.

Additional news & updates

Technical content & training

Implementation patterns for big data and data warehouse on Azure - To help our customers with their adoption of Azure services for big data and data warehousing workloads we have identified some common adoption patterns which are reference architectures for success: modern data warehouse, advanced analytics on big data, and real-time analytics (Lambda).

How to get support for Azure IoT SDK - Azure IoT SDKs make it easy for developers to begin coding and deploy applications for Azure IoT Hub and Azure IoT Hub Device Provisioning Service. The SDKs are production quality open-sourced project with support from Microsoft via User Voice, Stack Overflow, Azure Support, and GitHub issue submissions.

Essential tools and services for building mobile apps - Azure, Visual Studio, Xamarin, and Visual Studio App Center give you the flexible, yet robust tools and services to build, test, deploy, and continuously improve Android and iOS apps that your users will love. Learn how you can use your favorite language and tools, to tap into robust cloud services, and quickly scale to millions of users on demand.

Customer stories

How Catania secures public data with Azure SQL Data Warehouse - The municipality of Catania in Italy, which serves over a million people, wanted to streamline the management and storage of public service operations data. They also wanted to safeguard against any potential damage to their physical datacenter and ensure service continuity. Azure and Softjam helped the Comune di Catania build a hybrid solution that uses SQL Server and Azure SQL Data Warehouse (Azure SQL DW) to provide a flexible data warehouse for all their reporting needs, backed by automatic recovery and site-to-site recovery.

Using artificial intelligence to analyze roads - Based in Gaimersheim, Germany, EFS is the number one partner of Audi in chassis development. It examines and helps implement future-looking technologies, including automated driving. As part of its research efforts, the company used Azure NC-series virtual machines powered by NVIDIA Tesla P100 GPUs to drive a deep learning AI solution that analyzes high-resolution two-dimensional images of roads. The purpose is to give self-driving vehicles a better understanding of those roads. EFS proved that the concept works, and the company can now move ahead with product development. EFS had several terabytes of data in Azure Blob storage. The NC-series virtual machines, powered by NVIDIA’s Tesla P100 GPU, made quick work of that data. And the elasticity of Azure gave EFS the flexibility they needed to get their work done effectively.

Events

Learn from experts and play with emerging tech at Microsoft Build - Microsoft’s largest developer conference, Microsoft Build, is around the corner, and there’s still time to register. Programmers and Microsoft engineers will gather May 7–9 in Seattle, Washington, to discuss what’s next in cloud, AI, mixed reality, and more. The event will feature incredible technical sessions, inspiring speakers, and interactive workshops—as well as plenty of time to connect and celebrate.

#GlobalAzure Bootcamp 2018 - The Global Azure Bootcamp (#GlobalAzure) is a worldwide series of one-day technical learning events for Azure. It is created and hosted by leaders from the global cloud developer community. This is community, pure and simple, at its very best. And you can join too!

World Backup Day: Secure your backups, not just your data! - Last week was World Backup Day. See this post to look at the key considerations that customers should look for in backup products as they build the defenses to secure their backups from evolving ransomware attacks, and how Azure can help.

Developer spotlight

Build responsive and flexible transactional apps - MyExpenses uses AngularJS for the frontend, Node.js for the backend, and SQL Server 2017 as the database server. Follow along as we move this application to Azure SQL DB and take advantage of new features to improve the application's data performance and security.

Data for Development: Evolving SQL Workloads from Software to SaaS - Get hands-on and build an e-Commerce retail based SaaS (Software as a Service) application using Azure database services (Azure SQL Database).

Database DevOps with SQL Server Data Tools and Team Services - Watch this session to learn how to use SQL Server Data Tools in Visual Studio 2017 to quickly develop a SQL Server database that runs anywhere and an Azure SQL database. Learn how to automate your application and database CI/CD pipeline by using Visual Studio Team Services.

Pinball Lizard (sample game) - Pinball Lizard started as a challenge. We wanted to explore the gameplay possibilities of the new Windows Mixed Reality headsets and how we could use Microsoft Azure, PlayFab, and Unity to drive those possibilities. In this manual, you’ll see how we tackled this challenge and made a proof of concept that we’re excited to make public! We hope you play with it, extend it, and make it your own.

Pinball Lizard Game Manual - Whether you want to check out the underlying game architecture, the gameplay decisions based on mixed reality, how we established communication between Azure and Unity, or how Mixer and PlayFab services are integrated into the game—you’ll find our approach documented here.

Your game, every way it's played: PlayFab at GDC 2018 - Brendan Vanous, Head of Developer Success at PlayFab, provides an overview of PlayFab's backend platform in the cloud for live games.

Azure shows

Tuesdays with Corey | Azure Serial Console - Corey Sanders, Corporate VP - Microsoft Azure Compute team sat down with Hariharan Jayaraman, Principal PM on the Azure Linux Team to talk about a private preview of Serial Console to Linux and Windows VMs.

Azure Friday #395 | Azure Virtual Machine Serial Console - Hariharan Jayaraman joins Scott Hanselman to discuss Azure Virtual Machine Serial Console, which allows console access to Azure VMs. This allows you to connect and recover VMs which are in a stuck state due to OS or configuration issues.

Azure Friday #396| Azure + Visual Studio + Xamarin = Great Mobile Apps - James Montemagno joins Scott Hanselman to show how powerful Azure services can be used with Visual Studio and Xamarin to create cloud-connected mobile apps for Android, iOS, and Windows, using one toolset and development language (C#) across front-end and back-end.

The Azure Podcast: Episode 222 - Azure Portal Update - Sean Watson, a Senior PM in the Azure Portal team, talks to us about the thinking behind the design of the portal and shares tips 'n tricks for getting the most out of the portal.

Azure tips & tricks

Azure Tips and Tricks - Drag Tiles to customize your Azure Dashboard

Azure Tips and Tricks - Customize and Pin Charts to your Azure Dashboard

Azure IoT Hub is driving down the cost of IoT


Customers rely on the Azure IoT Hub service and love the scale, performance, security, and reliability it provides for connecting billions of IoT devices sending trillions of messages. Azure IoT Hub is already powering production IoT solutions across all major market segments including retail, healthcare, automotive, manufacturing, energy, agriculture, oil and gas, life sciences, smart buildings, and many others. Today, we have a few exciting announcements to make about Azure IoT Hub.

Over the years we’ve noticed that many customers start their IoT journey by simply sending data from devices to the cloud. We refer to this as “device to cloud telemetry,” and it provides a significant benefit. We’ve also noticed that later in their IoT journey most customers realize they need the ability to send commands out to devices, i.e., “cloud to device messaging,” as well as full device management capabilities, so they can manage the software, firmware, and configuration of their devices.

At Microsoft, we believe in meeting customers where they are and providing a great experience for them to capture the benefits of IoT. Because of this, we’re excited to announce a new capability of Azure IoT Hub: a “device to cloud telemetry” tier, called the IoT Hub basic tier. The basic tier is a very inexpensive way to start your IoT journey, and despite the name, there is nothing basic about it. It supports inbound telemetry scenarios and has all the same security, scale, performance, and reliability of the existing Azure IoT Hub standard tier. And the best part is, when you’re ready to continue your IoT journey, you can upgrade from basic to standard with zero downtime and no re-architecture.

Next, we’ve also been busy building efficiencies into our Azure IoT Hub while dramatically increasing the reliability, scale, and performance of the service—far surpassing the scale and performance needs of our most demanding commercial IoT customers. We’re excited to announce today that we are passing those efficiencies on to our customers by saving them money with a 50 percent reduction in cost of our standard tier. Existing customers will automatically receive this pricing discount with no action needed on their part.

As of April 3, 2018, the following table* summarizes the new Azure IoT Hub basic tier offering and standard tier price reduction:

Tiers

Note: All prices are in USD.

Azure IoT Hub basic tier: Details

The new basic tier provides all the data ingestion features of Azure IoT Hub standard, such as zero-touch provisioning, as well as the same security features, performance, and scalability. Along with this powerful set of capabilities, you can also choose to upgrade to the IoT Hub standard tier at any moment as your IoT solution needs change. This will seamlessly unlock access for your solution to all the standard-tier features, including bi-directional communications, device management through device twins, and the delivery of cloud intelligence to your devices through Azure IoT Edge. All at a 50 percent savings from the previous standard tier pricing.

The following table summarizes the supported features for the basic and standard tiers:

Features.png

The full pricing details and ordering information can be found on the IoT Hub pricing page. See Choosing the right IoT Hub for more details on the features and functionality available to each tier and how to select the right one for your IoT scenario.

The new tier of IoT Hub and new pricing are available now in global regions and will be available in national regions soon.

A comprehensive portfolio

Azure IoT Hub basic and IoT Hub standard are part of Azure IoT, a comprehensive IoT portfolio that enables companies to create, customize, and control all aspects of a secure IoT solution. The portfolio includes:

  • Solutions: Get started quickly and choose the approach that meets your unique needs with either a fully managed global IoT SaaS (software-as-a-service) solution that enables powerful IoT scenarios without requiring cloud-solution expertise, or preconfigured solutions that enable common IoT scenarios to create a fully customizable solution.
  • Platform services: Build a customized solution with platform services, such as Azure IoT Hub, to connect, manage, and capture data from billions of IoT devices. Then integrate your IoT device data with other business systems to enhance insights across your organization.
  • Edge: Seamlessly deploy and run artificial intelligence (AI), serverless computing, and machine learning directly on cross-platform IoT devices with Azure IoT Edge.

Learn more and get started with your IoT deployment today by visiting Azure IoT Hub.

*For the most up-to-date pricing information on the Azure IoT Hub basic tier offerings, please go to the Azure IoT Hub pricing page.

Accelerate Data Warehouse Modernization to Azure with Informatica’s AI-Driven Platform


Data is central to digital transformation. We have seen many customers moving their data workloads to Azure to benefit from the inherent performance and agility of the cloud. Enterprises are moving on-premises workloads to public cloud at an increasing rate. Results from the 2016 Harvey Nash/KPMG CIO Survey indicate that cloud adoption is now mainstream and accelerating as enterprises shift data-intensive operations to the cloud. Specifically, Platform-as-a-Service (PaaS) adoption is predicted to be the fastest-growing sector of cloud platforms according to KPMG, growing from 32 percent adoption in 2017 to 56 percent in 2020.

Cloud data warehousing is one of the fastest growing segments. Azure SQL Data Warehouse (SQL DW) allows customers to unleash the elasticity and economics of the cloud while maintaining a fast, flexible, and secure warehouse for all their data.
Microsoft has partnered with Informatica, the leader in Enterprise Cloud Data Management, to help you modernize your data architecture with intelligent data management, so that you can build a cloud data warehouse solution that easily adapts and scales as your data types, volumes, applications, and architecture change.

Informatica’s AI-driven Intelligent Data Platform, with solutions purpose-built for Azure, is a modular microservices architecture that accelerates your Azure SQL Data Warehouse project deployment by automating your data integration development lifecycle, including connectivity, development, deployment, and management.

Companies like AmerisourceBergen are changing the paradigm of patient healthcare by building a cloud-based advanced analytics platform on Azure to drive business innovation with new analytics-enabled revenue streams.

Informatica offers out-of-the box connectivity to hundreds of on-premises and cloud data sources and pre-built interoperability to all Azure data services. Customers can eliminate data silos and deliver high-quality, trusted and secure data backed by enterprise-class data integration, data management, data quality, data security and governance solutions.

For instance, an American office staffing company is beginning their journey to becoming a modern data-driven company with a strategic initiative to replace their on-premises data warehouses with Azure SQL Data Warehouse. To quickly start moving on-premises data into Azure, they leveraged Informatica Intelligent Cloud Services, Informatica’s iPaaS solution.

Slide1 (002)

Many customers like Life Time Fitness are taking a cloud-first approach to deliver competitive advantage through increased “customer intimacy” and driving operational efficiencies by moving to a modern hybrid data architecture that pulls data from anywhere at any time and can scale performance on-demand.

Informatica’s intelligent data discovery engine, Enterprise Data Catalog, gives you a holistic view of your enterprise data landscape by cataloging all types of data and data relationships, significantly expediting cloud migration. Machine learning-based data asset discovery, visibility, and preparation solutions ensure that no relevant or useful data remains hidden or obscure, helping you transform big data into fit-for-purpose data sets.

The Informatica Intelligent Data Platform offers comprehensive solutions for Azure, including data integration, data discovery and preparation, data lakes and big data management, master data management, data quality and data security to accelerate Azure deployment and deliver trusted data in the cloud, on-premises and across hybrid environments.

To learn more, download Informatica’s workbook for Cloud Data Warehousing with Microsoft Azure, or sign up for an on-demand webinar!


Introducing a new way to purchase Azure monitoring services


Today customers rely on Azure’s application, infrastructure, and network monitoring capabilities to ensure their critical workloads are always up and running. It’s exciting to see the growth of these services and that customers are using multiple monitoring services to get visibility into issues and resolve them faster. To make it even easier to adopt Azure monitoring services, today we are announcing a new consistent purchasing experience across the monitoring services. Three key attributes of this new pricing model are:

1. Consistent pay-as-you-go pricing

We are adopting a simple “pay-as-you-go” model across the complete portfolio of monitoring services. You have full control and transparency, so you pay for only what you use. 

2. Consistent per gigabyte (GB) metering for data ingestion

We are changing the pricing model for data ingestion from “per node” to “per GB”. Customers told us that the value in monitoring came from the amount of data received and the insight built on top of that, rather than the number of nodes. In addition, this new model works best for the future of containers and microservices where the definition of a node is less clear. “Per GB” data ingestion is the new basis for pricing across application, infrastructure, and networking monitoring.

3. Choice of pricing models for existing customers

We understand that some of our existing customers have use cases that are optimized for the current “per node” model and we want to ensure that this change does not disrupt you. Existing customers can choose to continue with the current per node pricing model. If you are an existing customer of an Operations Management Suite license, you can continue with the current model or adopt the new model at renewal if you wish.

The new pricing model is available today to all customers. Many of these changes are a direct result of collaboration between Microsoft and the community. Thank you for the ongoing feedback as we partner together to build monitoring in Azure that spans infrastructure, applications, and networks.

Learn more about the new pricing model by visiting the updated pricing calculator or the individual product pricing pages for Log Analytics, Network Watcher, Azure Monitor, and Application Insights.

For guidance on managing data ingestion visit the documentation pages for Log Analytics and Application Insights. The pricing for all other services in the Azure security and management portfolio including Azure Backup, Azure Security Center, and Azure Site Recovery remains the same.

Mathematical art in R


Who says there's no art in mathematics? I've long admired the generative art that Thomas Lin Pedersen occasionally posts (and that you can see on Instagram), and though he's a prolific R user I'm not quite sure how he makes his art. Marcus Volz has another beautiful portfolio of generative art, and has also created an R package you can use to create your own designs: the mathart package.

Generative art uses mathematical equations and standard graphical rendering tools (point and lines, color and transparency) to create designs. The mathart package provides a number of R functions to create some interesting designs from just a few equations. Complex designs emerge from just a few trigonometric functions, like this shell:

Shell

Or this abstract harmonograph:

Harmonograph

Amazingly, the image above, and an infinite collection of images similar to it, is generated by just two equations implemented in R:

  x = A1*sin(t*f1+p1)*exp(-d1*t) + A2*sin(t*f2+p2)*exp(-d2*t),
  y = A3*sin(t*f3+p3)*exp(-d3*t) + A4*sin(t*f4+p4)*exp(-d4*t)

You can have a lot of fun playing around with the parameters to the harmonograph function to see what other interesting designs you can find. You can find that function, and functions for designs of birds, butterflies, hearts, and more in the mathart package available on Github and linked below.

Github (marcusvolz): mathart

 

A flexible new way to purchase Azure SQL Database


We’re excited to announce the preview of an additional purchasing model for the Azure SQL Database Elastic Pool and Single Database deployment options. Recently announced with SQL Database Managed Instance, the vCore-based model reflects our commitment to customer choice by providing flexibility, control, and transparency. As with Managed Instance, the vCore-based model makes the Elastic Pool and Single Database options eligible for up to 30 percent savings* with the Azure Hybrid Benefit for SQL Server.

Azure SQL Database

Optimize flexibility and performance with two new service tiers

The new vCore-based model introduces two service tiers, General Purpose and Business Critical. These tiers let you independently define and control compute and storage configurations, and optimize them to exactly what your application requires.  If you’re considering a move to the cloud, the new model also provides a straightforward way to translate on-premises workload requirements to the cloud. General Purpose is designed for most business workloads and offers budget-oriented, balanced, and scalable compute and storage options. Business Critical is designed for business applications with high IO requirements and offers the highest resilience to failures.

Choosing between DTU and vCore-based performance levels

You want the freedom to choose what’s right for your workloads and we’re committed to supporting the DTU-based model alongside the new vCore-based option. Looking for a simple way to purchase and configure resources? The DTU-based model provides preconfigured bundles of resources across a range of performance options. If you are not concerned with customizing the underlying resources and prefer the simplicity of paying a fixed amount each month, you may find the DTU-based model more suitable for your needs. However, if you need more insights into the underlying resources or need to scale them independently to achieve optimal performance, the vCore-based model is the best choice. The vCore-based model is also a good choice if you own SQL Server licenses that you would like to move to the cloud. Migration between DTU-based and vCore-based performance levels is a simple online operation and is similar to the current process of upgrading from the Standard to Premium service tiers.

Save up to 30 percent* on vCore-based options with Azure Hybrid Benefit for SQL Server

Save more on vCore-based options when you use the Azure Hybrid Benefit for SQL Server. This benefit is exclusive to Azure and enables you to use your SQL Server Enterprise Edition or Standard Edition licenses with active Software Assurance to pay a reduced rate on a vCore-based Single Database, Elastic Pool or Managed Instance, with savings up to 30 percent.

Get started today!

The new vCore-based service tiers are available in all Azure regions and you can start using them immediately. If you already have an Azure SQL database, you can switch to the new service tiers in the portal and configure the database as illustrated by the following diagrams. Otherwise, you can create a new database in the General Purpose or Business Critical service tiers.

Bucharest (1)

 

Bucharest (2)

 

 

For more information about the vCore-based purchasing options visit our service tier documentation and pricing pages.

*Savings based on an 8 vCore Business Critical Managed Instance in the East US region, running 730 hours per month. Savings are calculated from the full price (license included) against the reduced price (applying Azure Hybrid Benefit for SQL Server), which includes the Software Assurance cost for SQL Server Enterprise edition. Actual savings may vary based on region, instance size and performance tier, and Software Assurance tier. Prices as of December 2017 are subject to change.

SQL Database: Long-term backup retention preview includes major updates


The preview for long-term backup retention in Azure SQL Database was announced in October 2016, providing you with a way to easily manage long-term retention for your databases – up to 10 years – with backups stored in your own Azure Backup Service Vault.

Based upon feedback gathered during the preview, we are happy to announce a set of major enhancements to the long-term backup retention solution. With this update we have eliminated the need for you to deploy and manage a separate Backup Service Vault. Instead, SQL Database will utilize Azure Blob Storage under the covers to store and manage your long-term backups. This new design will enable flexibility for your backup strategy, and overall more control over costs.

This update brings you the following additional benefits:

  • More regional support – Long-term retention will be supported in all Azure regions and national clouds.
  • More flexible backup policies – You can customize the frequency of long-term backups for each database with policies covering weekly, monthly, yearly, and specific week-within-a-year backups.
  • Management of individual backups – You can delete backups that are not critical for compliance.
  • Streamlined configuration – No need to provision a separate backup service vault.

What happens with your existing long-term backup retention policies?

Your existing backups will be automatically transitioned to the SQL Database managed RA-GRS storage containers.

  • All existing long-term backups are already copied from your recovery vaults to the new storage containers free of charge.
  • The new API that supports the enhanced feature set will be available in parallel with the existing API until May 31, 2018. You are expected to update your configuration scripts to the new API by that deadline.

Note: backups associated with servers that have already been dropped are not migrated.

The portal experience is updated to support the additional LTR capabilities as illustrated by the following image. If you configured your long-term retention policy using the portal, no action is expected from you.

The following diagram illustrates how you can configure a new long-term retention policy for a database.

LtrConfigurePoliciesBlade

The long-term policies for individual databases are shown in a single table as illustrated by the next diagram.

LtrConfigurePolicies

The next diagram illustrates how you can restore a specific long-term backup.

LtrRestore

How will this impact your bill?

Effective July 1, 2018, if you are using the existing LTR preview you will notice a new charge on your bill with the name LTR backup storage. At the same time, you no longer will be billed for the backups in recovery vaults. The new LTR solution is more cost efficient, which can mean lower overall long-term backup retention storage costs. In addition, the added flexibility in the backup retention policy helps you reduce costs even further by letting you select less frequent backups, e.g. once a month or once a year, or by deleting individual backups that you don’t need. If you are new to LTR and just configured your first LTR policy, your next monthly bill will include the LTR backup storage charges.

Does long-term retention impact my GDPR compliance?

If the backup contains personal data that is subject to General Data Protection Regulation (GDPR), you are required to apply enhanced security measures to protect the data from unauthorized access. In order to comply with GDPR, you need a way to manage the data requests of data owners without having to access backups. This layer of protection to the personal data stored in backups can be achieved by storing only "pseudonymized" data in backups. For example, if data about a person needs to be deleted or updated, it will not require deleting or updating the existing backups. You can find more information about GDPR best practices in Data Governance for GDPR Compliance.

Next steps

Load solutions faster with Visual Studio 2017 version 15.6


As we have been working to improve the solution load experience in Visual Studio 2017, you may have read our blog about these improvements in version 15.5. With version 15.6, we have introduced parallel project load, which loads large .NET solutions twice as fast as earlier versions when you reload the same solution. This video compares the time it takes to load a very large solution from the Roslyn repository, with 161 projects, between version 15.5 and 15.6.

solution load performance comparing 15.5 and 15.6

Parallel project load

During the first load of a solution, Visual Studio calculates all the IntelliSense data from scratch. In the previous version of Visual Studio 2017, version 15.5, we optimized the calculation of IntelliSense data by parallelizing the design-time build that produces the data. Solutions that were opened for the first time on a machine loaded significantly faster because of this parallelization. IntelliSense data was cached, so subsequent loads of a solution didn’t require a design-time build.

With version 15.6 we wanted to go one step beyond optimizing IntelliSense calculation. We’ve enabled parallel project load for large solutions that contain C#, VB, .NET Core, and .NET Standard projects. Many Visual Studio customers have machines with at least 4 CPU cores. We wanted to leverage the power of all the CPUs during solution load by loading projects in parallel.

We continuously monitor solution load telemetry coming in through the Customer Experience Improvement Program. We saw that, in aggregate, customers experienced a 25% improvement in solution load times in version 15.6, across all solution sizes. Large .NET solutions experienced even larger improvements, and now load twice as fast as previous versions. A customer with a very large, 400+ project solution told us that their solution now loads 2-4 times faster!

Solution load is getting leaner

Parallel solution load is part of the work we’re doing to improve solution load. Another big effort is to get unneeded components out of the way of solution load. Historically, many components and extensions used solution load to perform initialization work, which added to the overall solution load time. We are changing this to make Visual Studio developers productive as quickly as possible. Only critical code that enables navigation, editing, building, and debugging will run during solution load. The rest of the components initialize asynchronously afterwards.

For example, Visual Studio previously scanned git repositories during solution load to light up the source control experience. Not anymore. This process now starts after the solution has loaded, so you can start coding faster.

As another example, Visual Studio used to synchronously initialize the out-of-process C# language service. This code is now optimized to reuse data structures already available in the Visual Studio process. We’re also working with extension owners to delay operations that do not impact the solution load process until after the load has completed.

We have also optimized the Visual Studio solution loader to batch and run all critical solution load operations before any other work can run. Previously, the Visual Studio solution loader fired a subset of solution load events asynchronously. This allowed lower priority work to interfere with solution load. Additionally, there are popular extensions that listen to these events and can block Visual Studio for seconds. This experience confused some customers, because it was not clear when solution load was complete. This logic is now optimized, so that all solution load events fire synchronously during solution load.

While solution load is significantly faster in version 15.6, we are not done yet. You will see us making solution load even leaner in future updates.

Know what slows you down

Even with these improvements, it’s still possible for slow blocking operations to get scheduled after the solution has loaded. When this happens, the solution appears loaded, but Visual Studio is unresponsive while it is processing these operations. Visual Studio version 15.6 detects blocking operations and presents a performance tip. This helps extension authors find these issues and gives end users more control over their IDE’s performance.

Performance tip in the IDE showing operations that cause delays

Extension authors can use an asynchronous API that allows extensions to run code without blocking users. We’ve also published guidance for extension authors to learn how to diagnose and address responsiveness issues. If you regularly see unresponsiveness notifications for an extension, reach out to the owner of the extension or contact us at vssolutionload@microsoft.com.

Let us know

We would love to know how much faster your solution loads in version 15.6. Give it a try and let us know by sending an email to vssolutionload@microsoft.com. You can also tweet about it and include @VisualStudio.

Viktor Veis, Principal Software Engineering Manager, Visual Studio
@ViktorVeis

Viktor runs the Project and Telemetry team in Visual Studio. He is driving engineering effort to optimize solution load performance. He is passionate about building easy-to-use and fast software development tools and data-driven engineering.

Improvements to SQL Elastic Pool configuration experience


We have made some great improvements to the SQL elastic pool configuration experience in the Azure portal. These changes are released alongside the new vCore-based purchasing model for elastic pools and single databases. Our goal is to simplify your experience configuring elastic pools and ensure you are confident in your configuration choices.

Changing service tiers for existing pools

Existing elastic pools can now be scaled up and down between service tiers, making it easy to discover the tier that best fits your business needs. You can also switch between the DTU-based and the new vCore-based service tiers, and scale your pool down outside of business hours to save costs.

servicetiers (002)

Simplifying configuration of the pool and its databases

Elastic pools offer many settings for customers to customize. The new experience aims to separate and simplify each aspect of pool management: pool settings, per-database settings, and database management. This makes it easier to reason about each of these aspects of the pool while still being able to save all settings changes in one batch.

configurepool

Understanding your bill with new cost summary

costsummary

Our new cost summary experience for elastic pools and single databases helps you understand the pricing of your selections. Each setting that contributes to your bill is broken down and summed up to show your total estimated monthly bill. We want you to feel confident that after making all of your configuration choices you know how your cost is impacted.

For more information, please visit our documentation on our service tiers and elastic pools.

Not Hotdog: An R image classification application, using the Custom Vision API


If you're a fan of the HBO show Silicon Valley, you probably remember the episode where Jian Yang creates an application to identify food using a smartphone camera:

Surprisingly, the app in that scene isn't just a pre-recorded special effect: the producers actually developed a smartphone application using Tensorflow (and you can even download the app for your phone to play with). It was an impressive feat, especially given the relative infancy of deep learning tools back in 2016. Things have advanced since then, though, and I wanted to see how easy it would be to build an equivalent to the Not Hotdog application using Microsoft's Custom Vision API. I don't know React, so instead of a phone application I created an R function to classify an image on the Web. As it turns out, the process was fairly simple and I only needed a small set of training images — less than 200 — to build a pretty good classifier.

In the paragraphs below, I'll walk you through the process if you want to try it out yourself. It doesn't have to be hotdogs, either: it should be easy to adapt the script to detect other kinds of images, and even make multiple classifications.

To run the R script, in addition to R (any recent version should work, but I tested it with R 3.4.1) you'll also need an Azure subscription. If you don't have one already, you can get a free Azure account (or Azure for Students, if you're a student without a credit card). The Custom Vision API is part of the "always free" services, so you can run this script without any charges or depletion of your free credits. You'll also need to generate keys for the Custom Vision API, and I  describe in README.md how to generate the keys and save them in a keys.txt file.

You'll also need a set of images to train your classifier. In our case, that means some images of hotdogs. It's also useful to provide a "negative set" of images that are not what you intend to detect, but might be mistaken for it. (In this case, I used images of tacos and hamburgers.) You can source the images however you like, but since the API allows you to provide URLs of images on the Web, that's what I focused on finding.  One easy way to find URLs of images is to use ImageNet Explorer and search for one of the provided tags (here called "synsets").

Imagenet
Click the "Downloads" tab to download a file of the displayed image URLs

This was an easy way to generate hundreds of URLs of pre-classified images. The only problem is that some of the URLs no longer work, so I used an R script to help me filter out the broken URLs and sample from the remainder. (I also rejected, by visual inspection, a few images that were obviously not representative of the intended class.) I saved the result into files of hotdog urls and similar food urls, so you can skip this step if you like.
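
If you want to reproduce that filtering step yourself, a minimal sketch along these lines should work. It is not the author's actual script: it assumes a character vector urls read from the ImageNet Explorer download, and it simply drops any URL that no longer responds before sampling a training set.

library(httr)

# Rough sketch, not the repo's script: keep only URLs that still respond,
# then sample a subset for training. 'urls' is assumed to be a character
# vector of image URLs read from the ImageNet Explorer download file.
url_ok <- function(u) {
  tryCatch(status_code(HEAD(u, timeout(5))) == 200,
           error = function(e) FALSE)
}
good_urls <- urls[vapply(urls, url_ok, logical(1))]
set.seed(42)
training_urls <- sample(good_urls, min(150, length(good_urls)))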

The next step was to write an R function to identify an image as a hot dog (or not) given the URL of an image on the web. This requires a bit of setup using the Custom Vision API first, namely:

  • Define a "project" and a "domain" for classification. In addition to classifying general images, you can use pre-trained neural networks to detect things like landmarks or shopping items. I used the "Food" domain, which can better identify hotdogs and other foods.
  • Define "tags" to classify the training images. I could have used just a "hotdog" tag, but I also created a "nothotdog" tag for the tacos and hamburgers, which improved the performance of the "hotdog" class detection.
  • Pass in the URLs of the training images for each category. (The only trick here is that the API accepts a maximum of 64 URLs at a time, so an R function loops through the list if there are more than that, as sketched after this list.)
  • Train the project using the provided images, and retrieve the ID of the training session ("iteration") for use in the prediction step.
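
As a rough illustration of that batching step, one way to chunk the URL list into groups of 64 is sketched below; add_image_urls() is a hypothetical wrapper around the Custom Vision training endpoint (the real calls live in the repo's scripts).

# Sketch only: split the URL vector into chunks of at most 'batch_size'
# elements and upload each chunk separately.
upload_in_batches <- function(urls, tag_id, batch_size = 64) {
  batches <- split(urls, ceiling(seq_along(urls) / batch_size))
  for (batch in batches) {
    # add_image_urls() is a hypothetical helper that POSTs one batch of
    # tagged image URLs to the Custom Vision training API.
    add_image_urls(batch, tag_id)
  }
  invisible(length(batches))
}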

One of the nice things about the Custom Vision API is that as you're stepping through the R code, you can check your progress at customvision.ai, a web-based interface to the service. You can check the recall and precision of the trained model on the training data, and test out predictions on new images. (You can even tag those test images and incorporate them into new training data.)

Precision
You can also adjust the probability threshold cutoff for classification — these stats are based on a 50% threshold

Now we're ready to create our prediction function in R:
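
(The post's actual function lives in the GitHub repo linked at the end; the following is only a minimal sketch built on httr. The endpoint URL, API version, query parameter, and response field names are assumptions based on the v2.0 Custom Vision prediction API and may not match the author's code.)

library(httr)

# Minimal sketch, not the repo's implementation. Assumes the v2.0
# predict-from-URL endpoint; the region, project ID, iteration ID and
# prediction key come from your own Custom Vision project (see keys.txt).
hotdog_predict <- function(image_url, prediction_key, project_id,
                           iteration_id, region = "southcentralus") {
  endpoint <- sprintf(
    "https://%s.api.cognitive.microsoft.com/customvision/v2.0/Prediction/%s/url",
    region, project_id)
  resp <- POST(endpoint,
               add_headers(`Prediction-Key` = prediction_key),
               query = list(iterationId = iteration_id),
               body = list(Url = image_url),
               encode = "json")
  stop_for_status(resp)
  preds <- content(resp, as = "parsed")$predictions
  # Return the highest-probability tag, named by the input URL
  probs <- vapply(preds, function(p) p$probability, numeric(1))
  tags  <- vapply(preds, function(p) p$tagName, character(1))
  setNames(tags[which.max(probs)], image_url)
}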

You could also improve the function (as I did) by checking for invalid URLs, and by converting the probability predictions for the classes into classifications based on a probability threshold. In our case, that's "Hot dog", "Other food" and "Not a hotdog". Let's try some examples:

image from www.hot-dog.org

> hotdog_predict("http://www.hot-dog.org/sites/default/files/pictures/hot-dogs-on-the-grill-sm.jpg")
http://www.hot-dog.org/sites/default/files/pictures/hot-dogs-on-the-grill-sm.jpg 
                                                                        "Hotdog" 

Yep, that's a hotdog. Let's try something else:

Burrito_with_rice

> hotdog_predict("https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Burrito_with_rice.jpg/1200px-Burrito_with_rice.jpg")
https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Burrito_with_rice.jpg/1200px-Burrito_with_rice.jpg 
                                                                      "Not Hotdog (but it looks delicious!)" 

Yup, that's a burrito. It's not perfect though: it does misclassify some things (at the 50% probability threshold, anyway):

Spring-Rolls-6

> hotdog_predict("https://www.recipetineats.com/wp-content/uploads/2017/09/Spring-Rolls-6.jpg")
https://www.recipetineats.com/wp-content/uploads/2017/09/Spring-Rolls-6.jpg 
                                                                   "Hotdog" 

Close, but no cigar-shaped food item. Those are spring rolls.

We could improve the performance by providing more training images (including spring rolls in my negative set would probably have helped with the above misclassification)  or by tuning the probability threshold. Let me know how it performs on other images you find. Try it yourself using the scripts and data files provided in the repo below.

Github (revodavid): nothotdog


Audio Switcher should be built into Windows – Easily Switch Playback and Recording Devices


Audio Switcher

I've been running a podcast now for over 600 episodes and I do most of my recordings here at home using a Peavey PV6 Mixing Console - it's fantastic. However, I also work remotely and use Skype a lot to talk to co-workers. Sometimes I use a USB Headset but I also have a Polycom Work Phone for conference calls. Plus my webcams have microphones, so all this adds up to a lot of audio devices.

Windows 10 improved the switching experience for Playback Devices, but there's no "two click" way to quickly change Recording Devices. A lot of Sound Settings are moving into the Windows 10 Settings App but it's still incomplete and sometimes you'll find yourself looking at the older Sound Dialog:

Sound Control Panel

Enter David Kean's "Audio Switcher." It's nearly 3 years old with source code on GitHub, but it works AMAZINGLY. It's literally what the Power User has always wanted when managing audio on Windows 10.

It adds a Headphone Icon in the Tray, and clicking on it puts the Speakers at the Top and Mics at the Bottom. Right-clicking an item lets you set it as default. Even nicer if you set the icons for your devices like I did.

Audio Switcher

Ok, that's the good news. It's great, and there's Source Code available so you can build it easily with free Visual Studio Community.

Bad news? Today, there's no "release" or ZIP or EXE file for you to download. That said, I uploaded a totally unsupported and totally not my responsibility and you shouldn't trust me compiled version here.

Hopefully after this blog post has been up a few days, David will see it and make an installer with a cert and/or put this wonderful utility somewhere, as folks clearly are interested. I'll update this blog post as soon as more people start using Audio Switcher.

Thank you David for making this fantastic utility!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Application Security Groups now generally available in all Azure regions


We are pleased to announce the general availability of Application Security Groups (ASG) in all Azure regions. This feature provides security micro-segmentation for your virtual networks in Azure.

Network security micro-segmentation

ASGs enable you to define fine-grained network security policies based on workloads, organized around applications, instead of explicit IP addresses. They give you the capability to group VMs with monikers and to secure applications by filtering traffic from trusted segments of your network.

Implementing granular security traffic controls improves isolation of workloads and protects them individually. If a breach occurs, this technique limits the potential impact of lateral exploration of your network by attackers.

Security definition simplified

With ASGs, filtering traffic based on application patterns is simplified, using the following steps:

  • Define your application groups and provide a descriptive moniker that fits your architecture. You can use them for applications, workload types, systems, tiers, environments, or any role.
  • Define a single collection of rules using ASGs and Network Security Groups (NSGs). You can apply a single NSG to your entire virtual network, across all subnets. A single NSG gives you full visibility into your traffic policies, and a single place for management.
  • Scale at your own pace. When you deploy VMs, make them members of the appropriate ASGs. If your VM is running multiple workloads, just assign multiple ASGs. Access is granted based on your workloads. No need to worry about security definition again. More importantly, you can implement a zero-trust model, limiting access to the application flows that are explicitly permitted.

Single network security policy

ASGs introduce the ability to deploy multiple applications within the same subnet and to isolate traffic between them based on ASG membership. With ASGs you can reduce the number of NSGs in your subscription. In some cases, you can use a single NSG for multiple subnets of your virtual network. ASGs enable you to centralize your configuration, providing the following benefits in dynamic environments:

  • Centralized NSG view: All traffic policies in a single place. It’s easy to operate and manage changes. If you need to allow a new port to or from a group of VMs, you can make a change to a single rule.
  • Centralized logging: In combination with NSG flow logs, a single configuration for logs has multiple advantages for traffic analysis.
  • Enforce policies: If you need to deny specific traffic, you can add a security rule with high priority and enforce administrative rules.

Filtering east-west traffic

With ASGs, you can isolate multiple workloads and provide additional levels of protection for your virtual network.

In the following illustration, multiple applications are deployed into the same virtual network. Based on the security rules described, workloads are isolated from each other. If a VM from one of the applications is compromised, lateral exploration is limited, minimizing the potential impact of an attacker.

In this example, let’s assume one of the web server VMs from application1 is compromised. The rest of the application will continue to be protected, and even critical workloads like database servers will remain unreachable. This implementation provides multiple extra layers of security to your network, making an intrusion less harmful and easier to react to.

Filtering north-south traffic

In combination with additional NSG features, you can also isolate your workloads from on-premises networks and Azure services in different scenarios.

In the following illustration, a relatively complex environment is configured for multiple workload types within a virtual network. By describing their security rules, applications get the correct set of policies applied on each VM. Similar to the previous example, if one of your branches is compromised, exploration within the virtual network is limited, minimizing the potential impact of an intruder.

In this example, let’s assume someone in one of your branch offices, connected using VPN, compromises a workstation and gains access to your network. Normally only a subset of your network is needed by that branch, so by isolating the rest of your network, all other applications continue to be protected and unreachable. ASGs add another layer of security to your entire network.

Another interesting scenario: assuming you have detected a breach on one of your web servers, a good idea would be to isolate that VM for investigation. With ASGs, you can easily assign it to a special group predefined for quarantined VMs in your first security policy. Such VMs lose access, providing an additional benefit that helps you react to and mitigate these threats.

Summary

Application Security Groups, along with the latest improvements to NSGs, bring multiple benefits to network security: a single management experience, increased limits across multiple dimensions, a great deal of simplification, and natural integration with your architecture. Begin today and experience these capabilities on your virtual networks.

For more details see the NSG overview article, which also explains ASGs. Learn how to implement NSGs and ASGs in the following tutorial.

As always, your feedback helps us improve and keep moving in the right direction. We would like to hear your suggestions and feedback for ASGs, as well as future scenarios through our user voice channel. Stay tuned for more interesting updates in the network security space from Azure!

A few podcast recommendations


After avoiding the entire medium for years, I've been rather getting into listening to podcasts lately. As a worker-from-home I don't have a commute (the natural use case of podcasts, I guess), but I have been travelling a lot more recently and it's been great to listen to during long flights. It turns out there are a lot of great podcasts out there about R, data science, and AI, so if you're looking for something to listen to here's what's currently in rotation for me:

  • Not So Standard Deviations. Roger Peng and Hilary Parker chat about data science, real-world data analysis, and building ancient versions of R. 
  • DataFramed. DataCamp's official podcast featuring interviews with prominent members of the R and Data Science communities.
  • The R Podcast. Practical advice on using R, and interviews with R developers.
  • This Week in Machine Learning and AI. Interviews with AI and machine learning practitioners, with a good mix of technical and application-oriented topics.
  • The Microsoft Research Podcast. Frequently features researchers working on artificial intelligence and novel data analysis methodology.
  • And, for general interest / entertainment: Bombshell, Marketplace and (our namesake, but no relation) Revolutions.

What other podcasts are you enjoying right now? Let us know your recommendations in the comments.

Fast and easy development with Azure Database for MySQL and PostgreSQL


This blog post was co-authored by James Ashley, MR and AI Architect, Microsoft MVP.

Developers sometimes get anxious when it comes to hooking up a database for their apps. However, with Azure Database for MySQL and Azure Database for PostgreSQL, quickly propping up and accessing a relational database is a piece of cake. These lightweight, open source database services provide a great way to get small apps and prototypes started with very little effort. Without any extra work on your part, you can automatically take advantage of built-in security, fault tolerance, and data protection. You also can use point-in-time restore to recover a server to an earlier state—as far back as 35 days.

Azure Database for MySQL and Azure Database for PostgreSQL will work with whatever kind of project you are creating, whether it is a Linux app running in a Docker container orchestrated by Kubernetes, a computer vision service using Python, or a simple ASP.NET website to display travel photos. If your app needs a relational database, you can easily plug one in and start writing to it with guidance from these connect & query quickstarts:

Azure Database for MySQL

Azure Database for PostgreSQL
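
As a rough illustration in R (the quickstarts above cover many other languages), connecting to an Azure Database for PostgreSQL server with DBI and RPostgres might look like the sketch below. The server name, user, and database are placeholders, and the sketch assumes the server's default enforce-SSL setting is left on.

library(DBI)

# Placeholder values: Azure Database for PostgreSQL user names take the
# form user@servername, and connections require SSL by default.
con <- dbConnect(
  RPostgres::Postgres(),
  host     = "mydemoserver.postgres.database.azure.com",
  port     = 5432,
  user     = "myadmin@mydemoserver",
  password = Sys.getenv("PG_PASSWORD"),
  dbname   = "mypgsqldb",
  sslmode  = "require"
)

dbGetQuery(con, "SELECT version();")
dbDisconnect(con)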

Azure Database for PostgreSQL and Azure Database for MySQL are convenient, flexible, fast, and easy. Azure takes care of the infrastructure and maintenance for you, and these two database services will work with pretty much any app platform, development language, and configuration you want to use. It’s simple to spin up a new database instance using the Azure web portal or Azure command-line tools, and you don’t have to worry about your back end when all you want to do is persist your app data. In addition, with the Azure service level agreement (SLA) of 99.99 percent availability, you’ll be able to keep your app up and running 24/7. With Azure Database for PostgreSQL and Azure Database for MySQL, you can get on with the more important work of designing great apps and services.

Top stories from the VSTS community – 2018.04.06

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics, in no specific order: TOP STORIES – TRAINING: Free “DevOps” Professional Training & Certification – Kurt Shintaku introduces 8 courses, totaling 128-256 hours of training, to allow people to up their skills. How we checked & fixed the... Read More

