
Deploying WordPress application using VSTS and Azure – part one


This post is the first part of a two-part series describing how to set up a CI/CD pipeline in VSTS for deploying a dockerized custom WordPress website running on Azure Web App for Containers and Azure Database for MySQL.

The Motivation

The main motivation for building a WordPress CI/CD pipeline is that WordPress has limited support for dynamic configuration that would allow easy modification between different environments; some values are hardcoded in the WordPress MySQL database. This limitation creates time-consuming manual work, which limits our ability to deploy quickly and frequently.

The Idea

We will have four environments: local, dev, test and production. The local environment is for the developers, who will run the docker images locally, commit the required changes, and push the code to the master branch once they complete their work. The push will trigger a CI process, which will build a new docker image and push it to our Azure Container Registry. The base image of this docker image will be the WordPress image from Docker Hub. As part of the dockerfile, a copy action will copy the new content into the new docker image.

After the CI process completes, a CD process will start automatically. We will use Azure Database for MySQL as the WordPress DB, and each environment will have its own database. To update the hardcoded values in the DB, we will export the database of the previous environment into a SQL script file, run a find & replace on it, and restore the new SQL file into the next environment's DB. As part of this process we will also use the Azure Application Insights WordPress plugin for logging and monitoring purposes.
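Purely to illustrate the find & replace step, here is a minimal sketch. The file names and URLs are made up, and in a real pipeline this would typically be a small script or release task rather than a standalone program:

using System.IO;

class ReplaceEnvironmentUrls
{
    static void Main()
    {
        // Hypothetical example: swap the dev site URL hardcoded in the exported
        // WordPress dump for the test site URL before restoring it into the test DB.
        var sql = File.ReadAllText("wordpress-dev-export.sql");
        sql = sql.Replace("https://dev-myblog.azurewebsites.net",
                          "https://test-myblog.azurewebsites.net");
        File.WriteAllText("wordpress-test-import.sql", sql);
    }
}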

Prerequisites

  • Create an Azure account with the following services:
    • One instance of Azure Container Registry.
    • One instance of Azure Database for MySQL with 5 empty DBs.
    • Three instances of Azure Web App for Containers, one for each environment (dev, test, production), each with one staging slot.
    • Four instances of Application Insights, one for each environment: local, dev, test, production.
  • Open a VSTS account with the Docker Integration extension installed from the Visual Studio Marketplace. If you don’t have a Visual Studio Team Services account yet, open one now.

It’s possible to create all the above Azure resources using the ARM deployment task in VSTS.

Code structure

You can find the sample source code here; create a new VSTS project and upload the code to the master branch of this project.

The code repository structure looks like this:


  • Html folder – sample WordPress files.
  • Db folder – sample WordPress DB script file that we need to restore into Azure Database for MySQL (referred to in this blog as WordPress5000).
  • Application-insights folder – contains Application Insights plugin folder.
  • Dockerfile – for building the docker image.

We can run the following docker command on our local machine to run the sample WordPress locally. It will connect to the MySQL DB on Azure, so we might need to add our IP address to the firewall rules of that instance.

docker run -e DB_ENV_HOST=[your mysql db url]:[your mysql port number] -e DB_ENV_USER=[your mysql db user name] -e DB_ENV_PASSWORD=[your mysql db password] -e DB_ENV_NAME=[your mysql database name] -p 5000:80 -d [your docker image name]

VSTS – Build Phase

Now, we are going to create a new build definition. Select the relevant source repository and choose the Empty template as the baseline for the build process. Choose Hosted Linux Preview as the agent queue:


Add two new docker tasks.

The first docker task is Build an image with the following values:

  • Container Registry Type – Azure Container Registry.
  • Azure subscription – select the relevant Azure subscription.
  • Azure Container Registry – select the relevant Azure Container Registry.
  • Action – Build an image.
  • Docker File – select the dockerfile from the repository.
  • Check the Use Default Build Context option.
  • Image Name – [image name, all letters should be lowercase]: $(Build.BuildId).
  • Check the Qualify Image Name option.

The second docker task is Push an image; choose the following values:

  • Container Registry Type – Azure Container Registry.
  • Azure subscription – select the relevant Azure subscription.
  • Azure Container Registry – select the relevant Azure Container Registry.
  • Action – Push an image.
  • Image Name – same name as in first task.
  • Check the Qualify Image Name option.

Under the Triggers tab, enable continuous integration:


Now, we need to verify our build.

Make a small change in one of the project files and push the new version to the master branch; a new build process should start. That's it for now. In this post we saw how easy it is to create a CI process using VSTS and the Docker integration to build a new dockerized custom WordPress image and push it to Azure Container Registry. Stay tuned for part two, where we will continue our journey toward a complete CI/CD pipeline.

Analyzing accelerometer data with R


Using your smartphone (any modern phone with a built-in accelerometer should work), visit the Cast Your Spell page created by Nick Strayer. (If you need to type it into your phone's browser directly, here's a shortlink: bit.ly/castspell.) Scroll down and click the "Press To Cast!" button, and then wave your phone like a wand using one of the shapes shown.

Castspell

The app will attempt to detect which of the four "spells" you gestured. It was pretty confident in its detection when I cast "Incendio", but your mileage may vary depending on your wizarding ability and the underlying categorization model. 

Cast Spell Prediction

Nick Strayer described how he built this application in a presentation at Data Day Texas last month. The app itself was built using Shiny with the shinysense package (on Github) to collect movement data from the phone. Nick trained a convolutional neural network model (from his own casting gesture data) using the keras package to classify gestures into one of the four "spells". (Interesting side note: because CNNs aren't time-dependent, you can gesture in reverse and still pass the classification test.)


It's almost like magic! For the complete details on how the Cast your Spell app was constructed, see Nick Strayer's presentation at the link below.

Data Day Texas 2018: Making Magic with Keras and Shiny (Nick Strayer)

az webapp new – Azure CLI extension to create and deploy a .NET Core or nodejs site in one command


The Azure CLI 2.0 (Command Line Interface) is a clean little command line tool to query the Azure back-end APIs (which are JSON). It's easy to install and cross-platform:

Once you've got it installed, run "az login" and get authenticated. Also note that the most important switch (IMHO) is --output:

usage: az [-h] [--output {json,tsv,table,jsonc}] [--verbose] [--debug]

You can get json, tables (for humans), or tsv (tab-separated values) for your awks and seds, as well as jsonc (a more condensed form of json).

A nice first command after "az login" is "az configure" which will walk you through a bunch of questions interactively to set up defaults.

Then I can "az noun verb" like "az webapp list" or "az vm list" and see things like this:

C:\Users\scott> az webapp list

Name Location State ResourceGroup DefaultHostName
------------------------ ---------------- ------- -------------------------- ------------------------------------------
Hanselminutes North Central US Running Default-Web-NorthCentralUS Hanselminutes.azurewebsites.net
HanselmanBandData North Central US Running Default-Web-NorthCentralUS hanselmanbanddata.azurewebsites.net
myEchoHub-WestEurope West Europe Running Default-Web-WestEurope myechohub-westeurope.azurewebsites.net
myEchoHub-SouthEastAsia Southeast Asia Stopped Default-Web-SoutheastAsia myechohub-southeastasia.azurewebsites.net

The Azure CLI supports extensions (plugins) that you can easily add, and the Azure CLI team is experimenting with a few ideas that they are implementing as extensions. "az webapp new" is one of them so I thought I'd take a look. All of this is open source and on GitHub at https://github.com/Azure/azure-cli and is discussed in the GitHub issues for azure-cli-extensions.

You can install the webapp extension with:

az extension add --name webapp

The new command "new" (I'm not sure about that name...maybe deploy? or createAndDeploy?) is basically:

az webapp new --name [app name] --location [optional Azure region name] --dryrun

Now, from a directory, I can make a little node/express app or a little .NET Core app (with "dotnet new razor" and "dotnet build") then it'll make a resource group, web app, and zip up the current folder and just deploy it. The idea being to "JUST DO IT."

C:\Users\scott\desktop\somewebapp> az webapp new --name somewebappforme

Resource group 'appsvc_rg_Windows_CentralUS' already exists.
App service plan 'appsvc_asp_Windows_CentralUS' already exists.
App 'somewebappforme' already exists
Updating app settings to enable build after deployment
Creating zip with contents of dir C:\Users\scott\desktop\somewebapp ...
Deploying and building contents to app. This operation can take some time to finish...
All done. {
"location": "Central US",
"name": "somewebappforme",
"os": "Windows",
"resourcegroup": "appsvc_rg_Windows_CentralUS ",
"serverfarm": "appsvc_asp_Windows_CentralUS",
"sku": "FREE",
"src_path": "C:\Users\scott\desktop\somewebapp ",
"version_detected": "2.0",
"version_to_create": "dotnetcore|2.0"
}

I'd even like it to make up a name so I could maybe "az webapp up" or even just "az up." For now it'll make a Free site by default, so you can try it without worrying about paying. If you want to upgrade or change it, do so either with the az command or in the Azure portal. Also, the site ends up at <name>.azurewebsites.net!

DO NOTE that these extensions are living things, so you can update after installing with

az extension update --name webapp

like I just did!

Again, it's super beta/alpha, but it's an interesting experiment. Go discuss on their GitHub issues.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Top stories from the VSTS community – 2018.02.23

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

Post 2 – Value Stream Mapping for ALM Ranger’s VSTS Extensions Work Stream – Hamid Shahid
In my last blog post, I wrote about benefits of Value Stream Mapping (VSM) in the realm of software... Read More

How to set up a sparklyr cluster in 5 minutes


If you've ever wanted to play around with big data sets in a Spark cluster from R with the sparklyr package, but haven't gotten started because setting up a Spark cluster is too hard, well ... rest easy. You can get up and running in about 5 minutes using the guide SparklyR on Azure with AZTK, and you don't even have to install anything yourself. I'll summarize the steps below, but basically you'll run a command-line utility to launch a cluster in Azure with everything you need already installed, and then connect to RStudio Server using your browser to analyze data with sparklyr.
AZTK

Step 1: Install the Azure Distributed Data Engineering Toolkit (aztk). For this, you'll need a Unix command line with Python 3 installed. I'm on Windows, so I used a bash shell from the Windows Subsystem for Linux and it worked great. (I just had to use pip3 instead of pip to install, since the default there is Python 2.) The same process should work with other Linux distros or from a Mac terminal.

Step 2: Log into the Azure Portal with your Azure subscription. If you don't have an Azure subscription, you can sign up for free and get $200 in Azure credits.  

Step 3: Back at the command line, set up authentication in the secrets.yaml file. You'll be using the Azure portal to retrieve the necessary keys, and you'll need to create an Azure Batch account if you don't have one already. (Batch is the HPC cluster and job-management service in Azure.) You can find step-by-step details in the aztk documentation.

Step 4: Configure your cluster defaults in the cluster.yaml file. Here you can define the default VM instance size used for the cluster nodes; for example vm_size: standard_a2 gives you basic 2-core nodes. (You can override this in the command line, but it's convenient to set it here.) You'll also need to specify a dockerfile here that will be used to set up the node images, and for use with sparklyr you'll need to specify one that includes R and the version of Spark you want. I used:

docker_repo: aztk/r-base:spark2.2.0-r3.4.1-base

This provides an image with Spark 2.2.0, R 3.4.1, and a suite of R packages pre-installed, including sparklyr and tidyverse. (You could provide your own dockerfile here, if you need other things installed on the nodes.)

Step 5: Provision a Spark cluster. This is the easy bit: just use the command line tool like this:

aztk spark cluster create --id mysparklyr4 --size 4

In this case, it will launch a cluster of 4 nodes, each with 2 cores (per the vm_size option configured above). Each node will be pre-installed with R and Spark. (Warning: the default quotas for Azure Batch are laughably low: for me it was 24 cores total at first. You can get your limit raised fairly easily, but it can take a day to get approval.) Provisioning a cluster takes about 5 minutes; while you're waiting you can check on the progress by clicking on the cluster name in the "Pools" section of your Azure Batch account within the Azure Portal.

Pools
Once it's ready, you'll also need to provide a password for the head node unless you set up ssh keys in the secrets.yaml file. 

Step 6: Connect to the head node of the Spark cluster. Normally you'd need to find the IP address first, but aztk makes it easy with its ssh command:

aztk spark cluster ssh --id mysparklyr4

(You'll need to provide a password here, if you set one up in Step 5.) This gives you a shell on the head node, but more importantly it maps the ports for Spark and RStudio server, so that you can connect to them using http://localhost URLs in the next step. Don't exit from this shell until you're done with the next steps, or the port mappings will be cancelled. 

Step 7: Connect to RStudio Server

Open a browser window on your desktop, and browse to http://localhost:8787. This will open up RStudio Server in your browser. (The default login is rstudio/rstudio.) To be clear, RStudio Server is running on the head node of your cluster in Azure Batch, not on your local machine: the port mapping from the previous step is redirecting your local port 8787 to the remote cluster.

From here, you can use RStudio as you normally would. In particular, the sparklyr package is already installed, so you can connect to the Spark cluster directly and use RStudio Server's built-in features for working with Spark.

RStudio-Spark

One of the nice things about using RStudio Server is that you can shut down your browser or even your machine, and RStudio Server will preserve its state so that you can pick up exactly where you left off next time you log in. (Just use aztk spark cluster ssh to reapply the port mappings first, if necessary.)

Step 8: When you're finished, shut down your cluster using the aztk spark cluster delete command. (While you can delete the nodes from the Pools view in the Azure portal, the command does some additional cleanup for you.) You'll be charged for each node in the cluster at the usual VM rates for as long as the cluster is provisioned. (One cost-saving option is to use low-priority VMs for the nodes, for savings of up to 90% compared to the usual rates.)

That's it! Once you get used to it, it's all quick and easy -- the longest part is waiting for the cluster to spin up in Step 5. This is just a summary; for the full details see the guide SparklyR on Azure with AZTK.

Because it’s Friday: Faster Than Light


Faster than Light, a short film by Adam Stern, captures a single idea as well as any good sci-fi short story. It's also very pretty, and surprisingly moving. Well worth your 15 minutes. (Via Kottke)

That's all from us for this week. We'll be back with more next week, and in the meantime have a great weekend!

 

Arithmetic overflow checks in C++ Core Check


We’ve improved the C++ Code Analysis toolset with every major compiler update in Visual Studio 2017. Version 15.6, now in Preview, includes a set of arithmetic overflow checks. This article discusses those checks and why you’ll want to enable them in your code.

If you’re just getting started with C++ Code Analysis in Visual Studio, learn more about the overall experience in this overview blog post.

Motivation

As part of the C++ Code Analysis team, we work with groups across the company to identify classes of vulnerabilities that could be detected with better tooling and language support. Recently, as part of a security review in one of Microsoft’s most security-sensitive codebases, we found we needed to add checks for detecting a common class of arithmetic overflows.

Example patterns:

uint32 ConvertOffset(uint32 ptr, uint32 fromSize, uint32 toSize)
{
    if (fromSize == toSize)
    {
        return ptr;
    }
    uint64 tmp = ptr * fromSize; // the multiplication happens in 32 bits and can overflow before the result is widened to uint64
    
    // More code
}

template <typename TDictionary>
class BucketEntryIterator sealed : public IteratorBase<TDictionary, BucketEntryIterator>
{
private:
    uint bucketIndex;
    // Rest of the data members

public:
    BucketEntryIterator(TDictionary &dictionary)
    : Base(dictionary, -1),
      bucketIndex(0u - 1) // This wraps past 0 to a really big number, which can overflow bucketIndex
      
      // Initialize other data members of the class
};

We looked into the C++ Core Guidelines to see if there are specific guidelines in this space.

Arithmetic rules

The guidelines note that some of these rules can be tricky to enforce in practice. Therefore, we took an approach where we tried to intersect the set of guidelines with the kind of defect patterns we wanted to detect in our implementation.

Checks

The following are a set of arithmetic checks we added to C++ Core Check for 15.6 release:

  • C26450 RESULT_OF_ARITHMETIC_OPERATION_PROVABLY_LOSSY [operator] operation causes overflow at compile time. Use a wider type to store the operands.

    This warning indicates that an arithmetic operation was provably lossy at compile time. This can be asserted when the operands are all compile-time constants. Currently, we check left shift, multiplication, addition, and subtraction operations for such overflows.

    // Example source:
    int multiply() {
        const int a = INT_MAX;
        const int b = 2;
        int c = a * b; // C26450 reported here
        return c;
    }
    
    // Corrected source:
    long long multiply() {
        const int a = INT_MAX;
        const int b = 2;
        long long c = (long long)a * b; // OK
        return c;
    }
    

    In the corrected source, the left operand was cast to a wider type for the result of the arithmetic operation to be wider.

  • C26451 RESULT_OF_ARITHMETIC_OPERATION_CAST_TO_LARGER_SIZE Using operator [operator] on a [size1] byte value and then casting the result to a [size2] byte value. Cast the value to the wider type before calling operator [operator] to avoid overflow

    This warning indicates incorrect behavior that results from integral promotion rules and types larger than those in which arithmetic is typically performed. We detect when a narrow type integral value was shifted left, multiplied, added, or subtracted and the result of that arithmetic operation was cast to a wider type value. If the operation overflowed the narrow type value, then data is lost. You can prevent this loss by casting the value to a wider type before the arithmetic operation.

    // Example source:
    void leftshift(int i) {
        unsigned long long x;
        x = i << 31; // C26451 reported here
             // code
    }
    
    // Corrected source:
    void leftshift(int i) {
        unsigned long long x;
        x = (unsigned long long)i << 31; // OK
            // code
    }
    

    In the corrected source, the left operand was cast to a wider type for the result of the arithmetic operation to be wider.

  • C26452 SHIFT_COUNT_NEGATIVE_OR_TOO_BIG Left shift count is negative or greater than or equal to the operand size which is undefined behavior

    This warning indicates a shift count is negative or greater than or equal to the number of bits of the operand being shifted, resulting in undefined behavior.

    // Example source:
    unsigned long long combine(unsigned lo, unsigned hi)
    {
        return (hi << 32) | lo; // C26452 here
    }
    
    // Corrected source:
    unsigned long long combine(unsigned lo, unsigned hi)
    {
        return ((unsigned long long)hi << 32) | lo; // OK
    }
    

    In the corrected source, the left operand was cast to a 64 bit value before left shifting it by 32 bits.

  • C26453 LEFTSHIFT_NEGATIVE_SIGNED_NUMBER Left shift of a negative signed number is undefined behavior

    This warning indicates we are left shifting a negative signed integral value, which is a bad idea and triggers implementation defined behavior.

    // Example source:
    void leftshift(int shiftCount) {
        const auto result = -1 << shiftCount; // C26453 reported here
            // code
    }
    
    // Corrected source:
    void leftshift(int shiftCount) {
        const auto result = 4294967295 << shiftCount; // OK
             // code
    }
    

    In the corrected source, a positive integral value was used for the shift operation.

  • C26454 RESULT_OF_ARITHMETIC_OPERATION_NEGATIVE_UNSIGNED [operator] operation wraps past 0 and produces a large unsigned number at compile time

    This warning indicates that the subtraction operation produces a negative result which was evaluated in an unsigned context. This causes the result to wrap past 0 and produce a really large unsigned number, which can result in unintended overflows.

    // Example source:
    unsigned int negativeunsigned() {
        const unsigned int x = 1u - 2u; // C26454 reported here
        return x;
    }
    
    // Corrected source:
    unsigned int negativeunsigned() {
        const unsigned int x = 4294967295; // OK
        return x;
    }
    

    In the corrected source, a positive value was assigned to the unsigned result.

Results

We ran these checks in a security-sensitive codebase over the holidays and found interesting bug patterns. I am re-sharing the example patterns that looked suspicious at the beginning of this blog post along with the code analysis warnings they now trigger.

Warning C26451 Arithmetic overflow: Using operator '*' on a 4 byte value and then casting the result to a 8 byte value. Cast the value to the wider type before calling operator '*' to avoid overflow

uint32 ConvertOffset(uint32 ptr, uint32 fromSize, uint32 toSize)
{
    if (fromSize == toSize)
    {
        return ptr;
    }
    uint64 tmp = ptr * fromSize; // C26451
    // More code
}

Warning C26454 Arithmetic overflow: '-' operation produces a negative unsigned result at compile time, resulting in an overflow

template <typename TDictionary>
class BucketEntryIterator sealed : public IteratorBase<TDictionary, BucketEntryIterator>
{
private:
    uint bucketIndex;
    // Rest of the data members

public:
    BucketEntryIterator(TDictionary &dictionary)
    : Base(dictionary, -1),
      bucketIndex(0u - 1) // C26454
      
      // Initialize other data members of the class
};

Feedback

We’d love for you to try these checks firsthand. Download Version 15.6 Preview 6 and let us know if you find any interesting bug patterns in your codebase.

As always, if you have any feedback or suggestions for us, let us know. We can be reached via the comments below, via email (visualcpp@microsoft.com) and you can provide feedback via Help > Report A Problem in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

Upgrading a 10 year old site to ASP.NET Core’s Razor Pages using the URL Rewriting Middleware


My podcast website, which has over 600 episodes (every week for many years, you do the math! And subscribe!), was written in ASP.NET Web Pages many years ago. "Web Pages" (horrible name) was its own thing. It wasn't ASP.NET Web Forms, nor was it ASP.NET MVC. However, while open-source and cross-platform ASP.NET Core uses the "MVC" pattern, it includes an integrated architecture that supports pages created with the model-view-controller style, Web APIs that return JSON/whatever from controllers, and a routing system that works across all of these. It also includes "Razor Pages."

On first blush, you'd think Razor Pages is "Web Pages" part two. I thought that, but it's not. It's an alternative model to MVC but it's built on MVC. Let me explain.

My podcast site has a home page, a single episode page, and an archives page. It's pretty basic. Back in the day I felt an MVC-style site would just be overkill, so I did it in a page model. However, the code ended up (no disrespect intended) very 90s-style PHPy: basically one super-page, with everything from state management to URL cracking happening at the top of the page.

What I wanted was a Page-focused model without the ceremony of MVC while still being able to dip down into the flexibility and power of MVC when appropriate. That's Razor Pages. Best of all worlds and simply another tool in my toolbox. And the Pages (.cshtml) are Razor so I could port 90% of my very old existing code. In fact, I just made a new site with .NET Core with "dotnet new razor," opened up Visual Studio Code, and started copying over from (gasp) my WebMatrix project. I updated the code to be cleaner (a lot has happened to C# since then) and had 80% of my site going in a few hours. I'll switch Hanselminutes.com over in the next few weeks. This will mean I'll have a proper git checkin/deploy process rather than my "publish from WebMatrix" system I use today. I can containerize the site, run it on Linux, and finally add Unit Testing as I've been able to use pervasive Dependency Injection that's built into ASP.NET.
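As a side note on that dependency injection point: a Razor Page model can simply ask for its services in the constructor. The IShowDatabase and Show names below are assumptions inferred from the _db.GetShows() call shown later in this post; this is only a sketch of the pattern, not the actual Hanselminutes code.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;

// Hypothetical abstraction over the podcast data, inferred from the
// _db.GetShows() call that appears later in this post.
public interface IShowDatabase
{
    Task<List<Show>> GetShows();
}

public class Show
{
    public string Guid { get; set; }
    public int ShowNumber { get; set; }
}

// Registered once in Startup.ConfigureServices, e.g.:
//   services.AddSingleton<IShowDatabase, ShowDatabase>();
// The page model declares what it needs and the built-in container supplies it,
// which is what makes the site unit-testable.
public class IndexModel : PageModel
{
    private readonly IShowDatabase _db;

    public IndexModel(IShowDatabase db) => _db = db;

    public List<Show> Shows { get; private set; }

    public async Task OnGetAsync()
    {
        Shows = await _db.GetShows();
    }
}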

Merging the old and the new with the ASP.NET Core's URL Rewriting Middleware

Here's the thing though, there's parts of my existing site that are 10 years old, sure, but they also WORK. For example, I have existing URL Rewrite Rules from IIS that have been around that long. I'm pretty obsessive about making old URLs work. Never break a URL. No excuses.

There are still links around that have horrible URLs in the VERY original format that (not my fault) used database ids, like https://hanselminutes.com/default.aspx?ShowID=18570. Well, that database doesn't exist anymore, but I don't break URLs. I have these old URLs stored alongside my new system, and along with dozens of existing rewrite URLs I have an "IISUrlRewrite.xml" file. This was IIS-specific and used with the IIS URL Rewrite Module, but you have all seen these before with things like Apache's ModRewrite. Those files are often loved and managed and carried around for years. They work. A lot of work went into them. Sure, I could rewrite all these rules with ASP.NET Core's routing and custom middleware, but again, they already work. I just want them to continue to work. They can, with ASP.NET Core's URL Rewriting Middleware, which supports Apache mod_rewrite AND IIS URL Rewrite without using Apache or IIS!

Here's a complex and very complete example of mixing and matching. Mine is far simpler.

public void Configure(IApplicationBuilder app)
{
    using (StreamReader apacheModRewriteStreamReader =
        File.OpenText("ApacheModRewrite.txt"))
    using (StreamReader iisUrlRewriteStreamReader =
        File.OpenText("IISUrlRewrite.xml"))
    {
        var options = new RewriteOptions()
            .AddRedirect("redirect-rule/(.*)", "redirected/$1")
            .AddRewrite(@"^rewrite-rule/(\d+)/(\d+)", "rewritten?var1=$1&var2=$2",
                skipRemainingRules: true)
            .AddApacheModRewrite(apacheModRewriteStreamReader)
            .AddIISUrlRewrite(iisUrlRewriteStreamReader)
            .Add(MethodRules.RedirectXMLRequests)
            .Add(new RedirectImageRequests(".png", "/png-images"))
            .Add(new RedirectImageRequests(".jpg", "/jpg-images"));

        app.UseRewriter(options);
    }

    app.Run(context => context.Response.WriteAsync(
        $"Rewritten or Redirected Url: " +
        $"{context.Request.Path + context.Request.QueryString}"));
}

Remember I have URLs like default.aspx?ShowID=18570, but I don't use default.aspx any more (it literally doesn't exist on disk) and I don't use those IDs (they are just stored as metadata in a new system).

NOTE: Just want to point out that last line above, where it shows the rewritten URL. Putting that in the logs, or bypassing everything and outputting it as text, is a nice way to debug and develop with this middleware; then comment it out as you get things refined and working.

I have an IIS URL Rewrite rule that looks like this. It lives in an XML file along with dozens of other rules. Reminder: there's no IIS in this scenario. We are talking about the format and reusing that format. I load my rewrite rules in my Configure() method in Startup:

using (StreamReader iisUrlRewriteStreamReader =
    File.OpenText("IISUrlRewrite.xml"))
{
    var options = new RewriteOptions()
        .AddIISUrlRewrite(iisUrlRewriteStreamReader);

    app.UseRewriter(options);
}

It lives in the "Microsoft.AspNetCore.Rewrite" package that I added to my csproj with "dotnet add package Microsoft.AspNetCore.Rewrite." And here's the rule I use (one of many in the old xml file):

<rule name="OldShowId">

<match url="^.*(?:Default.aspx).*$" />
<conditions>
<add input="{QUERY_STRING}" pattern="ShowID=(d+)" />
</conditions>
<action type="Rewrite" url="/{C:1}?handler=oldshowid" appendQueryString="false" />
</rule>

I capture that show ID and I rewrite (not redirect...we rewrite and continue on to the next segment of the pipeline) it to /18570?handler=oldshowid. That handler is a magic internal part of Razor Pages. Usually if you have a page called foo.cshtml it will have a method called OnGet or OnPost or OnHTTPVERB. But if you want multiple handlers per page you'll have OnGetHANDLERNAME so I have OnGet() for regular stuff, and I have OnGetOldShowId for this rare but important URL type. But notice that my implementation isn't URL-style specific. Razor Pages doesn't even know about that URL format. It just knows that these weird IDs have their own handler.

public async Task<IActionResult> OnGetOldShowId(int id)
{
    var allShows = await _db.GetShows();

    string idAsString = id.ToString();
    LastShow = allShows.Where(c => c.Guid.EndsWith(idAsString)).FirstOrDefault();
    if (LastShow == null) return Redirect("/"); // catch-all error case, 302 to home
    return RedirectPermanent(LastShow.ShowNumber.ToString()); // 301 to /showid
}

That's it. I have a ton more to share as I keep upgrading my podcast site, coming soon.





Windows Community Standup discussing Multi-instancing, Console UWPs and Broader File-system Access


During our February Windows Community Standup, we discussed three long-awaited features that add a new dimension to how you build UWP apps. Based on developer feedback these topics are top of mind – multi-instance support, UWP console applications and broader file system access.

With the latest Insider build and SDK, we’re introducing some major new features that provide exciting new opportunities for building UWP apps. The first feature is multi-instancing. This is an opt-in feature. Some apps don’t need or want multi-instancing – but some do.

Note that multi-instancing is not the same as multi-view. A regular single-instanced app can use the multi-view feature, and this works well for those apps where it makes sense. The standard Microsoft Calculator is a good example of an app that uses multi-view. You can have multiple different calculator windows open, each performing different calculations. However, there is still only one Calculator.exe process running, so if the app crashes, or the user terminates the app via TaskManager – then all calculator windows will be closed. This is perfectly fine for this kind of app – but consider if this were to happen with an app that is editing multiple data files. In the worst-case scenario, the user would lose all edits, and possibly even corrupt all of the open files.

With multi-instancing, you get multiple separate instances of the app – each running in a separate process. So, if one instance fails, it doesn’t affect any of the others.

The second new feature is support for Console UWP apps. Think of traditional non-UWP command-line tools. These are apps that don’t have their own windows – instead they use the console window for both input and output. Up until now, you could build console apps, but not console UWP apps. With the latest release, you can build a console UWP app, and publish it to the Store. It will have an entry in the app list, and a primary tile you can pin to Start. So, you can launch it from Start if you want – however, you can also launch it via the command-line in a Cmd window or PowerShell window, and this is likely to be the more normal way to execute such an app. The app can use the console APIs and even traditional Win32 APIs such as printf or getchar.

Multi-instancing and console UWPs are important additions to the platform, and both types of app can also benefit from the additional file-system access that has been added in this release. In fact, any app can take advantage of this. This broader access comes in two forms.

  • The first is used if the app has an AppExecutionAlias (either a regular windowed UWP app or a console UWP app). In this case, the app is granted permissions to the file-system from the current working directory and below. That is, the user executes the app from a command-line, and they choose the location in the file-system from which to launch the app. The app will have file-system permissions from that point downwards.
  • The second file-system feature grants permissions to the entire file-system (or, strictly, grants the app the exact same permissions to the entire file-system as the user who is running the app). This is a very powerful feature – and for this reason, it is protected by a restricted capability. If you submit an app to the Store that declares this capability, you will need to supply additional descriptions of why your app needs this powerful feature, and how it intends to use it. A brief sketch of what this looks like in code follows this list.
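As a rough sketch of that second option (an assumption about usage, not code from the standup): once the broadFileSystemAccess restricted capability is declared in the app manifest and the user has allowed it, the app can open arbitrary paths with the regular Windows.Storage APIs and no picker. The path below is only an example.

using System.Threading.Tasks;
using Windows.Storage;

public static class BroadFileAccessSample
{
    // With the broadFileSystemAccess restricted capability declared in
    // Package.appxmanifest, this succeeds for any path the current user can read.
    // The path is purely illustrative.
    public static async Task<string> ReadFileAsync()
    {
        StorageFile file = await StorageFile.GetFileFromPathAsync(@"C:\Temp\notes.txt");
        return await FileIO.ReadTextAsync(file);
    }
}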

The latest API documentation can be found:

For more detailed descriptions of how to use each feature, plus issues to consider when using them, check the documentation:

The post Windows Community Standup discussing Multi-instancing, Console UWPs and Broader File-system Access appeared first on Windows Developer Blog.

#ifdef WINDOWS – How to enable WebVR with just two lines of code with BabylonJS


BabylonJS is a very powerful JavaScript framework for building 3D apps and games with web standards, used by game developers to build some amazing experiences that can run on any platform and device. This includes Windows Mixed Reality and VR platforms such as Oculus or SteamVR.  

With the latest release of BabylonJS, developers can enable immersive experiences with only 2 lines of code. David Rousset, one of the core authors of BabylonJS, stopped by to show just how easy it is to create fully interactive WebVR apps. Watch the full video above and feel free to reach out on Twitter or in the comments below for questions or comments. 

Happy coding! 

The post #ifdef WINDOWS – How to enable WebVR with just two lines of code with BabylonJS appeared first on Windows Developer Blog.

Azure Blob Storage as a Network Drive


Many applications make use of a network drive to backup and store files. When I was in university I found myself constantly coding for fun, and one example took the form of a network share for my roommates to share files wrapped in a handy little app.

Unfortunately, that particular app has long since been erased from whichever hard drive it was initially birthed. Fortunately, I think we can reinvent this magical piece of software (albeit to a scoped degree) with Azure Blob Storage. In the past, network drives did the trick, but Azure Storage offers users automatic backups, better flexibility and global availability, all at a very low cost (or no cost if you are using free Azure credits).

I took my partial memory of the general skeleton of the former masterpiece and rewrote it using Blobs as the backing file store. We are going to build this app as it was in its glory days, which means we need a few things.

Prerequisites

I’ll build this app in WPF using Visual Studio 2017 and the Azure Storage for .NET library. While I made the choice to build this sample in WPF, the Azure Storage portions work with any .NET app type including Windows Forms, ASP.NET, Console, etc.

Representing the Account as a TreeView

At its core, the app needs to display the current state of the blob account in a human-consumable fashion. In a stroke of ingenuity, I decided to represent the containers, directories, and blobs as a TreeView.
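One piece the snippets below rely on but never show is the Helpers.Storage.CreateCloudBlobClient() helper and the list of containers we build a tree for. Here is a minimal sketch of what that might look like with the WindowsAzure.Storage library; the connection string is a placeholder and the helper's actual implementation in the sample may differ.

using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static class Helpers
{
    public static class Storage
    {
        // Hypothetical implementation of the helper used later in this post:
        // parse a connection string and hand back a blob client.
        public static CloudBlobClient CreateCloudBlobClient()
        {
            var account = CloudStorageAccount.Parse("<your storage connection string>");
            return account.CreateCloudBlobClient();
        }
    }
}

class ContainerEnumerationSketch
{
    // One tree per container: enumerate the containers in the account and feed
    // each one to the tree-building code shown below.
    public IEnumerable<CloudBlobContainer> GetContainers()
    {
        var client = Helpers.Storage.CreateCloudBlobClient();
        return client.ListContainers();
    }
}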

As expected, we must build a tree for each container. First, we must obtain a flat list of the blobs in the container.

var blobs = container.ListBlobs(useFlatBlobListing: true)
    .Cast<CloudBlockBlob>()
    .Select(b => new TreeViewBlob
    {
        Name = b.Name, 
        Blob = b
    });

The TreeViewBlob is just a convenient representation for the following tree building algorithm.

IEnumerable<TreeViewNode> BuildTree(IEnumerable<TreeViewBlob> blobs)
{
    return blobs
        .GroupBy(b => b.Name.Split('/')[0])
        .Select(g =>
        {
            var children = g.Where(b => b.Name.Length > g.Key.Length + 1).Select(b => new TreeViewBlob
            {
                Name = b.Name.Substring(g.Key.Length + 1),
                Blob = b.Blob
            });

            var blob = g.FirstOrDefault(b => b.Name == g.Key)?.Blob;

            return new TreeViewNode
            {
                Name = g.Key,
                Blob = blob, 
                Children = BuildTree(children)
            };
        });
}

The above two listings take a flat list of blobs like this:

my/share/file1.jpg
my/share/file2.jpg
my/share/private/file.jpg

Subsequent to BuildTree, we essentially have the structure of our TreeView.

my
  share
    file1.jpg
    file2.jpg
    private
        file.jpg

Now that we have built the TreeView, we need to start implementing our Storage commands.

Downloading Blobs

The first of our three action buttons downloads a blob. As I mentioned earlier, Blob storage makes this task excessively simple.

async void DownloadButton_Click(object sender, RoutedEventArgs e)
{
    var item = View.SelectedItem as TreeViewItem;
    var blob = item.Tag as CloudBlockBlob;
    var name = item.Header as string;

    var saveFileDialog = new SaveFileDialog
    {
        FileName = name,
        Title = "Download..."
    };

    if (saveFileDialog.ShowDialog() != true)
        return;

    StatusText = "Downloading...";
    await blob.DownloadToFileAsync(saveFileDialog.FileName, FileMode.Create);
    StatusText = "Success!";
}

This method takes the currently selected BlockBlob, displays a prompt to the user, and downloads the blob to the selected file. The real meat of this method is performed during blob.DownloadToFileAsync; the rest of the method is just gathering the proper information.

Both async and await have drastically simplified UI thread updates in presentation frameworks, so setting the status text and using await gives us the desired UI results.

Downloading blobs is only half the excitement: we need the ability to upload blobs, as well.

Uploading Blobs

Upon selecting a directory, the “upload” button enables the user to select a file and upload it into the Storage directory.

async void UploadButton_Click(object sender, RoutedEventArgs e)
{
    var item = View.SelectedItem as TreeViewItem;

    var (containerName, directoryName) = GetContainerAndDirectory(item);

    var client = Helpers.Storage.CreateCloudBlobClient();
    var container = client.GetContainerReference(containerName);

    var openFileDialog = new OpenFileDialog
    {
        Title = "Upload..."
    };

    if (openFileDialog.ShowDialog() != true)
        return;

    var filePath = openFileDialog.FileName;
    var fileName = filePath.Split(System.IO.Path.DirectorySeparatorChar).Last();

    var blobReference = container.GetBlockBlobReference($"{directoryName}{fileName}");

    StatusText = "Uploading...";
    await blobReference.UploadFromFileAsync(filePath);
    await UpdateView();
    StatusText = "Success!";
}

Similar to the download method, the upload method prompts the user for a file and uploads the file into a blob of the same name in the currently selected directory. Again, most of the method is simply gathering the proper data, while the Storage library simplifies the operation into one call (blobReference.UploadFromFileAsync). However, this is the first and only time we will come across a “reference”. Prior to uploading the file into a blob, a local CloudBlockBlob reference is obtained via container.GetBlockBlobReference.

After muddying up my test storage account with a bunch of uploaded files, I decided it was time to implement the ability to delete blobs.

Deleting Blobs

Selecting a block blob in the tree will also allow the user to delete the selected blob.

async void DeleteButton_Click(object sender, RoutedEventArgs e)
{
    var item = View.SelectedItem as TreeViewItem;
    var blob = item.Tag as CloudBlockBlob;
    var name = item.Header as string;

    StatusText = "Deleting...";
    await blob.DeleteAsync();
    await UpdateView();
    StatusText = "Success!";
}

As was the case for downloading blobs, the CloudBlockBlob is easily obtained from the TreeViewItem, having been attached previously. Again, performing the actual operation requires only one call, to blob.DeleteAsync.

The User Interface

As I mentioned previously, the Azure Storage solution we built is applicable to any type of .NET Application. I decided to use WPF, but the choice for this specific endeavour was made out of a personal love for XAML.

As a former developer on the XAML Developer Platform Team in Windows, I am always excited to stretch my XAML skills after a long time off. Despite getting back into it, I did not really have much to do. So little, in fact, I decided to completely ignore using Styles. Our XAML looks something like this.

<Grid>
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="0,10,10,10" Width="400" Height="30">
        <Button Name="DownloadButton" Content="Download" IsEnabled="{Binding IsBlobSelected}" Click="DownloadButton_Click" Margin="10,0,0,0" Width="100" Height="30"></Button>
        <Button Name="UploadButton" Content="Upload" IsEnabled="{Binding IsDirectorySelected}" Click="UploadButton_Click" Margin="10,0,0,0" Width="100" Height="30"></Button>
        <Button Name="DeleteButton" Content="Delete" IsEnabled="{Binding IsBlobSelected}" Click="DeleteButton_Click" Margin="10,0,0,0" Width="100" Height="30"></Button>
    </StackPanel>
    <TreeView Name="View" Margin="10,50,10,50" SelectedItemChanged="View_SelectedItemChanged" />
    <TextBlock Name="Status" Text="{Binding StatusText}" HorizontalAlignment="Left" VerticalAlignment="Bottom" Margin="10" Width="400" Height="30"></TextBlock>
</Grid>

Putting It All Together

So, there we have it. We have built an app that acts as a network drive using Azure Blob Storage. Implementing more operations and polishing the UI is merely an exercise in elbow grease. If you would like to try out this app, or use the code we went through as a base, take a look at the source.

I hope the work we did here excites you to take existing applications or ideas and port them to equivalent functionality in Azure!

Azure Load Balancer to become more efficient


Azure recently introduced an advanced, more efficient Load Balancer platform. This platform adds a whole new set of abilities for customer workloads using the new Standard Load Balancer. One of the key additions the new Load Balancer platform brings is a simplified, more predictable and efficient outbound port allocation algorithm.

While already integrated with Standard Load Balancer, we are now bringing this advantage to the rest of Azure.

Load Balancer and Source NAT

Azure deployments use one or more of three scenarios for outbound connectivity, depending on the customer’s deployment model and the resources utilized and configured. Azure uses Source Network Address Translation (SNAT) to enable these scenarios. When multiple private IP addresses or roles share the same public IP (a public IP address assigned to the Load Balancer, or the automatically assigned public IP address for standalone VMs), Azure uses port masquerading SNAT (PAT) to translate private IP addresses to public IP addresses using the ephemeral ports of the public IP address. PAT does not apply when Instance Level Public IP addresses (ILPIP) are assigned.

For the cases where multiple instances share a public IP address, each instance behind an Azure Load Balancer VIP is pre-allocated a fixed number of ephemeral ports to be used for PAT (SNAT ports), which it uses for masquerading outbound flows. The number of pre-allocated ports per instance is determined by the size of the backend pool; see the New SNAT algorithm section for details.

Why are we changing the SNAT algorithm?

Existing deployments in Azure use a version of SNAT port allocation that is dynamic in nature. This version allocates 160 outbound ports per instance to start with and follows an on-demand model thereafter. The backend instances initiate connections using these ports and free them for reuse after 4 minutes of idle time in the default configuration. If multiple simultaneous outbound connections exhaust the allocated SNAT ports, the requesting instances are allocated an additional small number of ports, depending on availability and the rate of requests.

This model works well for services with a distributed model that create uniform outbound flows, or for services that need to establish flows with many different external endpoints. However, for services that need many simultaneous flows to a few external destinations, the initial port allocation is exhausted quickly, and they experience intermittent connection failures. It is very challenging for services to predict exactly how many ports they’ll get and how many connections they’ll be able to initiate. With the on-demand model, ports are not evenly distributed, which results in a longer pending state for SNAT port allocation for some of the instances in the pool. The newer algorithm addresses these challenges.

New SNAT algorithm

The new Azure Load Balancer platform introduces a more robust, simple, and predictable port allocation algorithm. In this model, all the available ports are pre-allocated, and evenly distributed amongst the backend pool of the Load Balancer depending on the pool size. Each IP configuration gets a pre-determined number of ports. Your services can make decisions on the distribution of connections amongst the backend pool instances and make an efficient use of resources. The change will assist customers in designing their services better and with fewer scaling limitations.

The following table shows the number of SNAT ports allocated based on the size of the backend pool:

Pool Size      Pre-Allocated SNAT ports per IP configuration
1 – 50         1024
51 – 100       512
101 – 200      256
201 – 400      128
401 – 800      64
801 – 1000     32
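To make the tiers concrete, here is a tiny helper that encodes the same table (illustrative only; the authoritative values are in the documentation linked below):

static class SnatAllocation
{
    // Pre-allocated SNAT ports per IP configuration, by backend pool size,
    // mirroring the table above.
    public static int PreAllocatedPorts(int poolSize)
    {
        if (poolSize <= 50) return 1024;
        if (poolSize <= 100) return 512;
        if (poolSize <= 200) return 256;
        if (poolSize <= 400) return 128;
        if (poolSize <= 800) return 64;
        return 32; // 801 - 1000
    }
}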

For more details on the allocation, please refer to the Ephemeral port pre-allocation for port masquerading SNAT (PAT) section of Understanding outbound connections in Azure article.

Migration

We plan to adopt this new allocation algorithm across Azure, making it easier to manage SNAT allocation for the platform as well as for customers. Migration of existing deployments to the new SNAT port allocation algorithm is targeted for Summer 2018.

This will completely replace the older algorithm. A more detailed schedule will be announced in the future.

What type of SNAT allocation do I get?

For our customers deploying in Azure, this might bring up the question of what type of port allocation they will see with their services. Let’s categorize this into different scenarios:

New deployments

All new deployments in Azure will subscribe to the newer port allocation model described above. This applies to Standard SKU Load Balancer deployments as well as Basic SKU Load Balancer deployments and any classic cloud service deployments.

Furthermore, if you have existing deployments, but are migrating or redeploying those services, the newer instances will all provision with the new port allocation algorithm.

Existing deployments

All the existing deployments will keep using the older SNAT port allocation scheme for now. However, existing deployments will be migrated to the new algorithm as well, which will change the experience for the existing deployments and make it consistent everywhere.

How does it affect my services?

SNAT port allocation plays an important role in outbound connectivity for Azure instances, as discussed above. So far, services have enjoyed an on-demand allocation of ports: starting from a small number of 160 ports, some instances could potentially grow to a very high port allocation in the tens of thousands, while others in the pool or availability set consume only a small number. Large numbers or high rates of port allocation also tend to cause intermittent failures.

However, with the new allocation model, each IP configuration will get a fixed number of SNAT ports, which will be selected for outbound flows. Once the available ports are exhausted, no more allocation will be possible. This might impact the services that initiate a very high number of simultaneous SNAT connections from individual instances. If your services fall under this category, you might want to rethink the service design and look for the possible mitigations. The Managing SNAT Port Exhaustion section in the Understanding Outbound Connections article expands on better SNAT port management techniques.

With the new model, if you are scaling up, the number of SNAT ports allocated per instance will drop to half once the instance count grows past the current pool-size tier. This could also affect services by resetting existing connections and freeing up some ports for redistribution.

What should I do right now?

Review and familiarize yourself with the scenarios and patterns described in Managing SNAT port exhaustion for guidance on how to design for reliable and scalable scenarios.

How do I get exception from this migration?

The port allocation algorithm is a platform level change. No exceptions will be granted to the customers. However, we do understand that you are running critical production workloads in Azure and want to ensure a safer implementation of mitigations or wait for a critical period before changing the service logic. Please reach out to Azure Support in such scenarios with your deployment information and we’ll work with you to ensure no disruption to your services.

Azure Service Bus now integrates with Azure Event Grid!


We are happy to share that Azure Service Bus is now able to send events to Azure Event Grid. The key scenario this feature enables is that Service Bus queues, topics, or subscriptions with low message volumes no longer require a receiver to be polling for messages at all times. Service Bus will now send events to Azure Event Grid when there are messages in a queue and no receivers are present. You can create Azure Event Grid subscriptions for your Service Bus namespaces, listen to these events, and react to them by starting a receiver. With this feature, Service Bus can be used in reactive programming models.
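To illustrate the "start a receiver on demand" idea, here is a hedged sketch using the Microsoft.Azure.ServiceBus library: whatever handles your Event Grid subscription (a webhook, an Azure Function, a WebJob) can call something like this to drain the queue and then go away again. The connection string and queue name are placeholders, and this is only one of several possible ways to wire it up.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

static class OnDemandReceiver
{
    // Invoked by your Event Grid handler when Service Bus raises the
    // "active messages with no listeners" event. Values below are placeholders.
    public static async Task DrainQueueAsync()
    {
        var receiver = new MessageReceiver(
            "<service bus connection string>",
            "<queue name>",
            ReceiveMode.PeekLock);

        // Pull small batches until the queue is empty, then stop --
        // no always-on listener required.
        while (true)
        {
            var messages = await receiver.ReceiveAsync(10, TimeSpan.FromSeconds(5));
            if (messages == null || messages.Count == 0) break;

            foreach (var message in messages)
            {
                // Process the message body here, then settle it.
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
            }
        }

        await receiver.CloseAsync();
    }
}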

Service Bus to Event Grid integration

Today, Azure Service Bus sends events for two scenarios:

  • Active messages with no listeners available
  • Deadletter messages available

Additionally, it uses the standard Azure Event Grid security and authentication mechanisms.

How often and how many events are emitted?

If you have multiple queues and topics/subscriptions in the namespace, you get at least one event per queue and subscription. The events are immediately emitted if there are no messages in the Service Bus entity and a new message arrives, or every two minutes unless Azure Service Bus detects an active receiver. Message browsing does not interrupt the events.

By default Azure Service Bus emits events for all entities in the namespace. If you want to get events for specific entities only, see the following filtering section.

Filtering, limiting from where you get events

If you want to get events from only one queue or one subscription within your namespace, you can use the "Begins with" or "Ends with" filters provided by Azure Event Grid. In some interfaces, these are called "Prefix" and "Suffix" filters. If you want to get events for multiple, but not all, queues and subscriptions, you can create multiple Azure Event Grid subscriptions and provide a filter for each.

In the current release, this feature is only for Premium namespaces and is available in all Event Grid regions. We will add support for Standard namespaces at a later point in time.

  • To learn more, please review the full technical documentation.
  • If you want to get started today, please see the examples provided on our documentation page.

B-series burstable VM support in AKS now available


We are thrilled to announce the availability of B-series (burstable) VMs in Azure Container Service (AKS).

Burstable (B-series) VMs are significantly cheaper compared to standard, optimally recommended VMs like Standard_DS2_V2. B-series VMs are particularly suited for development and test environments where performance requirements are bursty rather than constant. In fact, the B-series provides the lowest cost for bursty CPU usage and thus reduces development and test environment costs significantly. We hope that this addition will significantly reduce the cost of learning Kubernetes on AKS, building proofs of concept on Azure Container Service (AKS), running dev/test workloads, and so on.

The following configurations are available today.

SKU     Type       VCPUs   RAM (GB)   Data Disks   Max IOPS   Local SSD
B1s     Standard   1       1          2            800        2 GB
B1ms    Standard   1       2          2            1600       4 GB
B2s     Standard   2       4          4            3200       8 GB
B2ms    Standard   2       8          4            4800       16 GB
B4ms    Standard   4       16         8            7200       32 GB
B8ms    Standard   8       32         16           10800      64 GB

In comparison, a Standard_DS2_V2 node costs more than five times as much as the B1/B2 SKUs today. Check the latest VM pricing.

To get started, log on to the Azure portal and search for Container Service (managed). As you follow the AKS create-cluster workflow, you will be able to select B-series VMs in the Node Agent VM configuration section. Read more about burstable VMs.

Please let us know how it works for you by commenting in the section below!

Last week in Azure: NPM’s Service Endpoint Monitor in preview, and more


Now is a great time to learn more about Azure Cosmos DB through a seven-part technical training series that started rolling out recently. The first part provides a technical overview, and the second part covers how to create a more intelligent and responsive globally distributed serverless application. Both are available now for on-demand viewing. You can join Part 3 live for an overview of both the Graph API and the Table API on Tuesday this week (10:00-11:00 AM Pacific Time, UTC-8). Subsequent parts are rolling out weekly through the end of March. Learn more and register for all of them here: Azure Cosmos DB Technical Training Series.

Now in preview

Monitor network connectivity to applications with NPM’s Service Endpoint Monitor - public preview - Network Performance Monitor (NPM) introduces Service Endpoint Monitor in preview, which integrates the monitoring and visualization of the performance of your internally hosted & cloud applications with the end-to-end network performance. You can create HTTP, HTTPS, TCP and ICMP based tests from key points in your network to your applications, allowing you to quickly identify whether the problem is due to the network or the application.

Introducing backup for Azure file shares - Azure Backup now enables a native backup solution for Azure file shares, a key addition to the feature arsenal to enable enterprise adoption of Azure Files. Using Azure Backup, via Recovery Services vault, to protect your file shares is a straightforward way to secure your files and be assured that you can go back in time instantly.

Now generally available

VNet Service Endpoints for Azure SQL Database now generally available - Virtual Network (VNet) Service Endpoints for Azure SQL Database are now generally available in all Azure regions. They enable you to restrict connectivity to your logical server to a given subnet or set of subnets within your virtual network. The traffic to Azure SQL Database from your VNet will always stay within the Azure backbone network, as this direct route is preferred over any specific routes that take Internet traffic through virtual appliances or on-premises.
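
As a hedged sketch of the typical setup with the Azure CLI (resource names are placeholders, and flags may differ slightly between CLI versions), you first enable the Microsoft.Sql service endpoint on the subnet and then add a virtual network rule on the logical server:

# Hypothetical names; enable the SQL service endpoint on the subnet,
# then allow that subnet on the Azure SQL logical server.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-subnet \
  --service-endpoints Microsoft.Sql

az sql server vnet-rule create \
  --resource-group my-rg \
  --server my-sql-server \
  --name allow-my-subnet \
  --vnet-name my-vnet \
  --subnet my-subnet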

News & updates

Cray in Azure for weather forecasting - Ben Cotton touches on how Microsoft's recent partnership with Cray for HPC workloads enables scenarios, such as weather forecasting, that benefit from combining a Cray supercomputer with the elasticity of Azure and its broad suite of AI products.

Spring Security Azure AD: Wire up enterprise grade authentication and authorization - Azure Active Directory (Azure AD) is now integrated with Spring Security to secure your Java web applications. With only a few lines of configuration, you can wire up enterprise-grade authentication and authorization for your Spring Boot project.

Unlock Query Performance with SQL Data Warehouse using Graphical Execution Plans - The Graphical Execution Plan feature within SQL Server Management Studio (SSMS) is now supported for SQL Data Warehouse (SQL DW). Now you can seamlessly and visually debug query plans to identify performance bottlenecks directly within the SSMS window, which extends the query troubleshooting experience by displaying costly data movement operations which are the most common reasons for slow distributed query plans.

Technical content

Migrating to Azure SQL Database with zero downtime for read-only workloads - Learn how we migrated MSAsset, an internal service we use to manage all Microsoft data center hardware around the world, to Azure. MSAsset’s data tier, consisting of SQL Server 2012 running on aging hardware, included a 107 GB database with 245 tables.

Sync SQL data in large scale using Azure SQL Data Sync - In this article, we show you how to use Data Sync to sync data between a large number of databases and tables, including some best practices and how to temporarily work around database and table limitations in Azure SQL Database.

New Azure GxP guidelines help pharmaceutical and biotech customers build GxP solutions - GxP qualification guidelines are now available for our Azure customers. These guidelines give life sciences organizations, such as pharmaceutical and biotechnology companies, a comprehensive toolset for building solutions that meet GxP compliance regulations.

Get started with Azure Cosmos DB through this technical training series - Join us for one or all of a seven-week Azure Cosmos DB technical training series, which explores the capabilities and potential of Azure Cosmos DB. Whether you’re brand new to Azure Cosmos DB or an experienced user, you’ll leave this series with a better understanding of database technology and have the practical skills necessary to get started.

LUIS.AI: Automated Machine Learning for Custom Language Understanding - Language understanding (LU) helps enable conversational services such as bots, IoT experiences, analytics, and others by converting words in a sentence into a machine-readable meaning representation. Learn how Microsoft’s Language Understanding Intelligent Service (LUIS) enables software developers to create cloud-based machine-learning LU models specific to their application domains, without requiring ML expertise.

Using AI to automatically redact faces in videos - Learn what is driving the growth of body-worn cameras in law enforcement and how AI can help agencies process the videos they capture. For example, an AI-based algorithm from Microsoft detects, tracks, and redacts faces in videos, and is available for customers to use as part of Azure Media Analytics.

Deploying WordPress application using VSTS and Azure – part one - In the first of a two-part series, learn how to setup a CI/CD pipeline using VSTS for deploying a Dockerized, custom WordPress website working with Azure Web App for Containers and Azure Database for MySQL.

Developer spotlight

Ship Better Apps with Visual Studio App Center - Mark Smith shows you how to automate your app development pipeline with Visual Studio App Center. You’ll walk through how to connect your app to App Center and start improving your development process and your apps immediately.

Engage your users with push notifications in App Center - Learn how to better connect with your users by integrating push notifications in just a few steps. Create better engagement models with users by targeting push notifications to specific groups.

Getting Started with App Center - Sample Swift App and Tutorials - In this tutorial, you will learn how to set up a sample Swift app with App Center for iOS. Note that both Objective-C and Swift are supported.

Automate resizing uploaded images using Event Grid - This tutorial extends a previous tutorial on how to upload image data in the cloud with Azure Storage to add serverless automatic thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables Azure Functions to respond to Azure Blob storage events and generate thumbnails of uploaded images.

Create your first function in the Azure portal - Learn how to use Azure Functions to create a simple function in the Azure portal to see how you can execute your code in a serverless environment.

Service updates

Azure shows

Cassandra API for Azure Cosmos DB - Join Kirill Gavrylyuk and Scott Hanselman to learn about native support for Apache Cassandra API in Azure Cosmos DB with wire protocol level compatibility. This support ensures you can continue using your existing application and OSS tools with no code changes and gives you the flexibility to run your Cassandra apps fully managed with no vendor lock-in.

The Azure Podcast: Episode 217 - Video Indexer & Custom Speech Service - The Video Indexer is one of our favorite services, which we use right here at the Podcast to offer indexed audio insights into our shows. We are fortunate to have Royi Ronen, a Principal Data Science Manager in the Media Artificial Intelligence group, and Olivier Nano, a Principal Development Manager for the Cognitive Speech Service, tell us about the details of Video Indexer and the Custom Speech Service that makes it even better.


VSTS/TFS Roadmap update for 2018 Q1 and Q2

We recently published an update to the “Features under development” roadmap on our Features timeline. This feature list, although subject to change and not comprehensive, provides visibility into our key investments in the medium term. We update the feature list as part of our agile planning rhythms, about every 9 weeks. Some of the features...

Join me on March 2, 2018 for a Developer Tools AMA

A lot has happened since I last hosted a Reddit “Ask Me Anything” (AMA) nearly two years ago.

Our team launched Visual Studio for Mac in late 2016 and released it the following May. Shortly thereafter, we introduced live coding of mobile apps in .NET with our Live Player. We made it easy to embed .NET into native applications with .NET Embedding, and we have been working with Unity to deliver a great experience to their users. We completed Mono ports to the PlayStation 4 and Xbox One, made great progress in unifying Mono and .NET Core, shipped a prototype to run .NET in WebAssembly, and brought CSS and Flex layout to Xamarin.Forms.

Please join me in our Ask Me Anything with Miguel de Icaza on March 2, 12 – 2 PM Pacific Time.

Add to Calendar: Ask Me Anything with Miguel de Icaza, March 2, 2018, 12:00 PM to 2:00 PM Pacific Time (America/Los_Angeles). Please join us on Reddit to discuss Visual Studio for Mac and the latest update to our developer tools: https://aka.ms/miguelama

I look forward to answering the questions on what we have done, how we have done it and what we think about the future of mobile development in a Reddit AMA this Friday.

Miguel de Icaza, Distinguished Engineer, Mobile Developer Tools

Miguel is a Distinguished Engineer at Microsoft, focused on the mobile platform and creating delightful developer tools. With Nat Friedman, he co-founded both Xamarin in 2011 and Ximian in 1999. Before that, Miguel co-founded the GNOME project in 1997 and has directed the Mono project since its creation in 2001, including multiple Mono releases at Novell. Miguel has received the Free Software Foundation 1999 Free Software Award, the MIT Technology Review Innovator of the Year Award in 1999, and was named one of Time Magazine’s 100 innovators for the new century in September 2000.

Announcing new Ad Monetization policies and updates to Ad unit management UX

Today, we are announcing a new policy around ad unit decommissioning, user experience updates for managing active/inactive ad units, default policies around ad network enablement, and changes to the ad impression measurement methodology for Windows apps. You may be impacted if you are monetizing a Windows app with ads using the Microsoft Advertising SDK (UWP or 8.x apps).

1) Auto enabling new ad networks for Manual ad units on UWP apps

Our team continues to evaluate and onboard new ad networks to improve yield and offer a better variety of demand (formats, market-specific demand) for our publishers. Today, when a new ad network is onboarded to our system, UWP ad units that are configured with the ‘Manual configuration’ option do not benefit from these networks because they are not automatically turned on for Manual ad units. Over the next few weeks, we will be making a change to turn on any new ad networks by default for all our UWP ad units, independent of the Manual or Automatic configuration. For ad units in the Manual configuration, since the ad network waterfall is determined by the publisher’s choice, these ad networks will be added to the bottom of the waterfall order. The publisher can then reorder or opt out of the ad networks as necessary. As always, our recommendation is to choose the ‘Automatic’ ad unit configuration to benefit the most from our platform’s yield maximization capabilities.

2) Policy updates around decommissioning of Ad units

We are implementing an ad unit decommissioning policy wherein an ad unit that hasn’t generated any ad requests over the last 6 months will be subject to deactivation and deletion. We don’t expect active apps to be impacted, since we look for a prolonged period (6 months) of ad unit inactivity before we deactivate. However, there may be exceptional circumstances where you are affected. For instance, you created an ad unit several months ahead of app deployment and are trying to use it as your app goes live, or you are trying to reuse an existing ad unit from a previous, inactive app in your new app. The recommendation in both cases is to use newly created ad units instead of reusing existing ones, to avoid potential ad revenue loss.

Along with this policy announcement, we are making changes in the Windows Dev Center dashboard to make it easy to view active and inactive ad units. Ad unit state is identified in a separate Status column; in addition, you can choose to view just the Active or Inactive ad units by choosing the appropriate filter at the top of the page.

3) Transitioning to Standards based Ad impression measurement

Standards-based ad impression measurement requires that the ‘Ad must be loaded and at minimum begin to render’ to be counted as a valid impression. For ad networks that rely on other methods, such as server-based ad impression counting, we are making changes to gradually migrate all our Windows app ad units closer to standards-based measurement over the next few months. More specifically, impression counting will rely on techniques such as firing an ad network impression beacon upon starting to render the ad on the client. This is done to better adhere to the IAB standards around ad impression counting and to be fair to advertisers. This will also benefit apps that are striving to do right by making sure that ads are rendered on the client and are viewable.

If you follow the standard guidelines in our documentation and the recommendations from earlier blogs on how to design the ad placements in your app, you should not see an adverse impact on your ad monetization. However, if your app is designed in a way that interferes with the rendering of the ad, you may see a negative impact on your app’s ad revenue. For instance, if you place an ad in a UI dialog box that can easily be dismissed by the user before the ad gets a chance to render, or your app pulls ads on a background thread, those ads, which previously may have been counted as valid impressions, will not be counted under the new standard. We strongly recommend that you evaluate your app for such practices and actively fix these issues to minimize the impact on your ad revenue.

Please reach out to aiacare@microsoft.com for questions or comments!

StorSimple Data Manager now generally available

We are excited to announce the general availability of the StorSimple Data Manager. This feature allows you to transform data from StorSimple format into the native format in Azure blobs or Azure Files. Once your data is transformed, you can use services like Azure Media Services, Azure Machine Learning, HDInsight, Azure Search, and more.

StorSimple devices use the cloud as a tier of storage and send data to the cloud in a highly efficient and secure manner. Data is stored in the cloud tier in a deduplicated, compressed, and encrypted format. A side effect is that this data is not readily consumable by the cloud services you might want to use. Azure offers a rich bouquet of services, and our goal is to let you use the service of your choice on your data to unleash its potential.

Using this service, you can transform data stored in your 8000 series StorSimple devices into Azure blobs or Azure Files. All the file data that you store on-premises on your StorSimple device will show up as individual blobs or files in Azure. You can use the Azure portal, .NET applications, or Azure Automation to trigger these transformations. You can transform all the data in a given StorSimple volume, or specify exactly the subset of data that you are interested in analyzing in the cloud, and the service will transform only that subset.

This service can be used in 19 Azure regions starting today! We will expand to more regions shortly. To learn more about region availability and selection, visit the StorSimple Data Manager solution overview. For detailed pricing information, please visit the StorSimple Solution pricing page.

Security Center Playbooks and Azure Functions Integration with Firewalls

Every second counts when an attack has been detected. We have heard from you that you need to be able to quickly take action against detected threats. At Ignite 2017, we announced Azure Security Center Playbooks, which allow you to control how you want to respond to threats detected by Security Center. You can manually run a Security Center Playbook when a Security Center alert is triggered, reducing time to response and helping you stay in control of your security posture. Today, we are going to look at the specific example of how Azure Functions works with Security Center Playbooks to help you rapidly respond to detected threats against your Palo Alto VM-Series firewall.

In this scenario, Azure Security Center has detected and notified you of an RDP Brute Force attack. To help you block the source IP address of that attack in your Palo Alto VM-Series firewall, there are a couple of steps you need to complete. First, create an Azure Function; this can be done under Function Apps in the Azure portal, using an HTTP trigger and the C# programming language. The Azure Function is what allows Security Center Playbooks to communicate with the Palo Alto VM-Series firewall and ultimately block malicious activity from traversing the firewall.

Next, place the sample code below in your Azure Function so that when you deploy a Security Center Playbook, your Playbook blocks the malicious IP from passing through the Palo Alto VM-Series firewall.

using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // API key used to authenticate against the PAN-OS XML API of the VM-Series firewall.
    string key = "YOUR_PaloAlto_KEY";

    // Template for the deny rule element; {0} is replaced with the source IP reported by Security Center.
    string xmlElementFormat = @"
<source>
  <member>{0}-{0}</member>
</source>
<destination>
  <member>any</member>
</destination>
<application>
  <member>any</member>
</application>
<service>
  <member>any</member>
</service>
<action>deny</action>
<source-user>
  <member>any</member>
</source-user>
<disabled>no</disabled>
<log-start>yes</log-start>
<log-end>yes</log-end>
<description>IP Blocked as response from Azure Security Center</description>
<from>
  <member>any</member>
</from>
<to>
  <member>any</member>
</to>";

    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

    // The Playbook posts a JSON body such as { "ip": "x.x.x.x" }.
    dynamic data = await req.Content.ReadAsAsync<object>();
    string ip = data?.ip;

    // Build the rule name, the rule element, and the PAN-OS XML API "set" request URL.
    string ruleName = string.Format("IP {0} blocked from ASC", ip);
    string xmlElement = string.Format(xmlElementFormat, ip);
    string url = string.Format(
        "https://YOUR_PALO_ALTO_URL/api/?type=config&action=set&key={0}&xpath=/config/devices/entry/vsys/entry/rulebase/security/rules/entry[@name='{1}']&element={2}",
        key, ruleName, xmlElement);

    var request = (HttpWebRequest)WebRequest.Create(Uri.EscapeUriString(url));

    // Accept the firewall's certificate unconditionally (for example, a self-signed certificate).
    // Replace this with proper certificate validation for production use.
    request.ServerCertificateValidationCallback +=
        (object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) => true;

    var response = (HttpWebResponse)(await request.GetResponseAsync());
    response.Close();

    return req.CreateResponse(HttpStatusCode.OK);
}
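
Before wiring the function into a Playbook, you can verify it by posting the same JSON payload the Playbook would send. The function app name, function name, and key below are placeholders for your own values:

# Hypothetical URL and key; the function expects a JSON body with an "ip" field.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"ip": "203.0.113.45"}' \
  "https://<your-function-app>.azurewebsites.net/api/<your-function-name>?code=<your-function-key>"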

Now that your sample code is in your Azure Function, you need to go to the Security Center to create a new Playbook with the workflow shown below. The Playbook defines how you orchestrate your response to issues with your Palo Alto firewall.

image

As shown above, when an alert is generated, an approval email is sent to the security administrator asking whether to block or ignore the source IP of the attack. If the answer to that email is block, the Azure Function that was created receives the source IP and creates a blocking rule in the Palo Alto VM-Series firewall. If the answer is ignore, another email is sent to the security administrator with a description of the alert.

Now when a security alert is triggered, because Azure Functions and a Playbook are set up, you can quickly respond to the detected threats by creating a blocking rule in your Palo Alto VM-Series firewall, and stay in control of your network security.

For more information about playbooks and functions, visit our documentation.

To get started with Azure Security Center Playbooks, try the Standard tier of Security Center.
