
Azure Storage for Serverless .NET Apps in Minutes


Azure Storage is a quick and effortless way to store data for applications; it is highly available, secure, scalable, and redundant. This blog post walks through a simple application that creates a short code for a long URL to easily reference it. It uses Table Storage to map codes to URLs and a Queue to process redirect counts. Everything is handled by serverless Azure Functions. The only prerequisite to build and run locally is Visual Studio 2017 15.5 or later with the Azure development workload, which automatically installs the Azure Storage Emulator you can use to program against tables, queues, blobs, and files on your local machine. You do not need an Azure account to run this on your machine.

Build and Test Locally with Function App Host and Azure Storage Emulator

You can download the source code for this project here.

Open Visual Studio 2017 and create a new “Azure Functions” project (the template is under the “Cloud” category). Pick a name such as “ShortLink”.

Add new Azure Functions Project


In the next dialog, choose “Azure Functions v1”, select “Http Trigger”, pick “Storage Emulator” for the Storage Account, and set Access rights to “Anonymous.”

Choosing the function template


Right-click the name Function1.cs in the Solution Explorer and rename it to LinkShortener.cs. Change the function name to “Set” and update the code to use “href” instead of “name” as follows:
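Your updated function should look something like this minimal sketch (adapted from the v1 HTTP trigger template, with “name” renamed to “href”; your namespace may differ):

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class LinkShortener
{
    [FunctionName("Set")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("Set called.");

        // look for "href" in the query string
        string href = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "href", true) == 0)
            .Value;

        if (href == null)
        {
            // fall back to the request body
            dynamic data = await req.Content.ReadAsAsync<object>();
            href = data?.href;
        }

        return href == null
            ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass an href on the query string or in the request body")
            : req.CreateResponse(HttpStatusCode.OK, href);
    }
}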

Hit F5 to run the function locally. You should see the function console launch and provide you with a list of URLs to access your function.

Endpoint from function app


Access the endpoint from your web browser by copying and pasting the URL for the “Set” operation. You should receive an error message asking you to pass an href. Append the following to the end of the URL:

?href=https://developer.microsoft.com/advocates

You should see the URL echoed back to you. Stop debugging (SHIFT+F5).

Out of the box, the functions template creates a function app. The function app hosts multiple functions, which are snippets of code that can be triggered by various events. In this example, the code is triggered by an HTTP/HTTPS request. Visual Studio uses attributes to declare the function name and specify the bindings. The log is automatically passed into the method for you to write logging information.

It’s time to add storage!

Table Storage uses a partition (to segment the data) and a row key (to identify a unique data item). The app will use a special partition of “1” to store a key that indicates the next code to use. The short code is generated by a simple algorithm that translates an integer to a string of alphanumeric characters. To store a short code, the partition will be set to the first character of the code, the row key will be the short code, and a target field will contain the full URL. Create a new class file and name it UrlKey.cs. Add this using statement:

using Microsoft.WindowsAzure.Storage.Table;

Then add the class:
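A minimal version of the key entity might look like this (the Id property, which holds the next integer to encode, is an assumption based on the description above):

public class UrlKey : TableEntity
{
    public int Id { get; set; }
}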

Next, add a class named UrlData.cs, include the same “using” statement and define the class like this:
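For example (Url holds the full target address and Count tracks redirects; the property names are illustrative):

public class UrlData : TableEntity
{
    public string Url { get; set; }
    public int Count { get; set; }
}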

Add the same using statement to the top of the LinkShortener.cs file. Azure Functions provides special bindings that take care of connecting to various resources. Modify the Run method to include a binding for the key and another binding that will be used to write out the URL information.

The Table attributes represent bindings to Table Storage. Different parameters allow behaviors such as passing in existing entries or collections of entries, as well as a CloudTable instance you can think of as the context you use to interact with a specific table. The binding logic will automatically create the table if it doesn’t exist. The key entry is automatically passed in if it exists. This is because the partition and key are included in the binding. If it doesn’t exist, it will be passed as null and you can initialize it and store it as a new entry:
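Here is a sketch of how the signature and initialization could look. The table name "urls", partition "1", row key "KEY", and seed value are illustrative choices, not prescribed values:

[FunctionName("Set")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
    [Table("urls", "1", "KEY")]UrlKey keyTable,
    [Table("urls")]CloudTable tableOut,
    TraceWriter log)
{
    // ... parse href as before ...

    if (keyTable == null)
    {
        // first run: seed the key entry
        keyTable = new UrlKey
        {
            PartitionKey = "1",
            RowKey = "KEY",
            Id = 1024
        };
        await tableOut.ExecuteAsync(TableOperation.Insert(keyTable));
    }
    // ...
}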

Next, add the code to turn the numeric key value into an alphanumeric code, then create a new instance of the UrlData class.
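One simple approach, sketched here (the alphabet and helper name are assumptions; the original post’s encoding yields “BNK” for its seed value):

private const string Alphabet =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

// translate the numeric key into a short alphanumeric code
public static string Encode(int id)
{
    var s = string.Empty;
    while (id > 0)
    {
        s += Alphabet[id % Alphabet.Length];
        id /= Alphabet.Length;
    }
    return string.Join(string.Empty, s.Reverse());
}

// inside Run, once the key entry exists:
var code = Encode(keyTable.Id);
var url = new UrlData
{
    PartitionKey = $"{code.First()}",  // first character of the code
    RowKey = code,
    Url = href,
    Count = 0
};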

The final steps for the redirect loop involve saving the data and updating the key. The response returns the code.
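Roughly like this (InsertOrReplace is used so the sketch works whether the key was just seeded or loaded by the binding):

// save the new short-code entry
await tableOut.ExecuteAsync(TableOperation.Insert(url));

// advance the key so the next request gets a fresh code
keyTable.Id++;
await tableOut.ExecuteAsync(TableOperation.InsertOrReplace(keyTable));

return req.CreateResponse(HttpStatusCode.OK, code);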

Now you can test the functionality. Make sure the storage emulator is running by searching for “Storage Emulator” in your applications and clicking on it. It will send a notification when it is ready. Press F5 and paste the same URL used earlier with the query string set. If all goes well, the response should contain the initial value “BNK”. Next, open “Cloud Explorer” (View -> Cloud Explorer) and navigate to local developer storage. Expand table storage and view the two entries. Note the id for the key has been incremented:

Cloud Explorer with local Table Storage


With an entry in storage, the next step is a function that takes the short code and redirects to the full URL. The strategy is simple: check for an existing entry for the code that is passed. If it exists, redirect to the URL, otherwise redirect to a “fallback” (in this case I used my personal blog). The redirect should happen quickly, so the short code is placed on a queue for a separate function to process statistics. Simply declaring the queue with the Queue binding is all it takes for the storage driver to create the queue and add the entry. You are passed an asynchronous collection so you may add multiple queue entries. Anything you add is automatically inserted into the queue. It’s that simple!
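A sketch of what the redirect function might look like (the route, the queue name "counts", and the fallback URL are assumptions consistent with the description above):

[FunctionName("Go")]
public static async Task<HttpResponseMessage> Go(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "Go/{code}")]HttpRequestMessage req,
    string code,
    [Table("urls")]CloudTable inputTable,
    [Queue("counts")]IAsyncCollector<string> queue,
    TraceWriter log)
{
    var redirect = "https://example.com/"; // your fallback URL
    if (!string.IsNullOrWhiteSpace(code))
    {
        // partition is the first character of the code, row key is the code
        var lookup = TableOperation.Retrieve<UrlData>($"{code.First()}", code);
        var result = await inputTable.ExecuteAsync(lookup);
        if (result?.Result is UrlData data)
        {
            redirect = data.Url;
            // hand the code off to the queue so stats don't slow the redirect
            await queue.AddAsync(code);
        }
    }
    var response = req.CreateResponse(HttpStatusCode.Redirect);
    response.Headers.Location = new System.Uri(redirect);
    return response;
}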

Run the project again, and navigate to the new “Go” endpoint and pass the “BNK” parameter. Your URL will look something like: http://localhost:7071/api/Go/BNK. You should see it redirect to the page you originally passed in. Refresh your Cloud Explorer and expand the “Queues” section. There should be a new queue named “counts” with a single entry (or more if you tried the redirect multiple times).

Cloud Explorer with local Queue


Processing the queue ties together elements of the previous functions. The function uses a queue trigger and is called once for each entry in the queue. The implemented logic simply looks for a matching entry in the table, increments the count, then saves it.
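A sketch of the queue-triggered processor (function and queue names as assumed above):

[FunctionName("ProcessQueue")]
public static async Task ProcessQueue(
    [QueueTrigger("counts")]string code,
    [Table("urls")]CloudTable table,
    TraceWriter log)
{
    // find the matching short-code entry
    var lookup = TableOperation.Retrieve<UrlData>($"{code.First()}", code);
    var result = await table.ExecuteAsync(lookup);
    if (result?.Result is UrlData data)
    {
        // bump the count and save it back
        data.Count++;
        await table.ExecuteAsync(TableOperation.Replace(data));
        log.Info($"Code {code} has been hit {data.Count} times.");
    }
}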

Run the project, and if your Storage Emulator is running, you should see a call to the queue processing function in the function app console. After it completes, refresh your Cloud Explorer. You should see the queue is now empty and the count has been updated on the URL in Table Storage.

Publish to Azure

It’s great to be able to run and debug locally, but to be useful the app should be hosted in the cloud. This step requires an Azure Account (you can get one for free). Right-click on the ShortLink project and choose “Publish…”. Make sure “Azure Function App” and “Create New” are selected, then click the “Publish” button.

Publish to Azure


In the dialog, give the app a unique name (it must be globally unique so you may have to try a few variations). Choose “New” for the resource group and give it a logical name, then choose “New” for plan. Give the plan a name (I like to use the app name followed by “Link”), choose a region close to you and pick the “Consumption Plan” then press “OK.”

Choose a service plan


Click “Create” to create the necessary assets in Azure. Visual Studio will create the resources for you, build your application, then publish it to Azure. When everything is ready, you will see the message “Publish completed.” in the Output dialog for Build.

Test adding a link (replace “myshortlink” with your own function app name):
http://myshortlink.azurewebsites.net/api/Set?href=https://docs.microsoft.com/azure/storage/


Then test the redirect:
http://myshortlink.azurewebsites.net/api/Go/BNK

You can use the Storage Explorer to attach to Azure and verify the count.

But wait – isn’t Azure Storage supposed to be secure? How did this just work without me entering credentials?

If you don’t specify a connection string, all storage references default to an AzureWebJobsStorage connection key. This is the storage account created automatically to support your function app. In your local project, the local.settings.json file points to development storage (the emulator). When the Azure Function App was created, a connection string was automatically generated for the storage account. The application settings override your local settings, so the application was able to run against the storage account without modification! If you want to connect to a different storage account (for example, if you choose to use CosmosDB for premium table storage) you can simply add a new connection string and specify it as a parameter on the bindings and triggers.
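For reference, the local.settings.json the template generates typically looks something like this, pointing everything at the emulator:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true"
  }
}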

When you publish from Visual Studio, the publish dialog has a link to “Manage Application Settings…”. There, you can add your own settings including any custom connection strings you need, and it will deploy the settings securely to Azure as part of the publish process.

Custom application settings


That’s all there is to it!

Conclusion

There is a lot more you could do with the application. For example, the application “as is” does not have any authentication, meaning anyone could access your link shortener and create short links. You may want to change the access to “Function level” for the “Set” function and secure the website with an SSL certificate to prevent anonymous access. For a more complete version of the application that includes logging, monitoring, and a web front end to paste links, read Build a Serverless Link Shortener Faster than you can Finish your Latte.

The intent of this post was to illustrate how easy and effective the experience of integrating Azure Storage with your application can be. There are SDKs available to perform the same functions from desktop and mobile applications as well. Perhaps the biggest benefit of leveraging storage is the low cost. I run a production link shortener that processes several hundred hits per day, and my monthly cost for both the serverless function and the storage is less than one dollar. Azure Storage is both accessible and cost effective.

Here is the full project.

Enjoy!


vswhere now supports -requiresAny to find instances with one or more components installed


The latest release of vswhere.exe supports a new switch parameter, -requiresAny (case-insensitive). This switch changes the behavior of -requires to return any instances that have one or more workloads or components installed.

As Visual Studio continues to add value with more features (including lots of partner content) targeting a wider variety of workloads, we made significant changes to better modularize the install. The refactoring effort was huge and some scenarios may not have been anticipated. As one example, vstest.console.exe is in a single package, but we don’t recommend using package IDs in vswhere.exe because we may (and very often have) changed package IDs as refactoring continues. In this example, however, the package in which vstest.console.exe ships can be found in two distinct workloads. Depending on your build environment, invoking vswhere.exe twice might be problematic, so -requiresAny can be useful, as in the following example:

vswhere.exe -latest -requires Microsoft.VisualStudio.Workload.ManagedDesktop Microsoft.VisualStudio.Workload.Web -requiresAny -property installationPath

See Find VSTest or related content in our wiki for more examples.

Top stories from the VSTS community – 2017.01.26

Here are the top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics. TOP STORIES What I learned about Microsoft doing DevOps: Part 8 – Deploying in a continuous delivery and cloud world or how to deploy on Monday morning – Wouter de Kort. In this post I want... Read More

#ifdef WINDOWS – LottieUWP – Native Adobe After Effects animations in UWP apps


Lottie is a client library that parses Adobe After Effects animations exported as json and renders them natively on the client. Alexandre maintains the UWP port of the library (LottieUWP), and stopped by to discuss why developers should use Lottie over other formats (such as gifs) and the benefits of natively rendered and accelerated animations.

Check out the full video above where I learned how to get started with LottieUWP, and more importantly, where to discover existing animations that can make your apps more beautiful and responsive. And feel free to reach out on Twitter or in the comments below.

Happy coding!

The post #ifdef WINDOWS – LottieUWP – Native Adobe After Effects animations in UWP apps appeared first on Windows Developer Blog.

Because it’s Friday: Excel Painter


Excel often gets criticism when it gets used for applications it's not really designed for. In this instance it's being used in completely the wrong way, and the results are beautiful (via FlowingData):

That's all from the blog for this week. We'll be back next week, and in the meantime have a great weekend!

You got this! You know the fundamentals. You are a learner. Plus The Imposter’s Handbook


Sometimes we all get overwhelmed. There's a million (no irony there) reasons to be overwhelmed today, to be sure. I got an email from a community member who was feeling like they hadn't kept up on the latest tech. Of course, anything you learn today will be obsolete tomorrow, right? I'm overwhelmed thinking of it!

I wrote a little thread about this on Twitter and I wanted to expand on it here.

A brief thread for my developer friends who have 10-15-20 years in the game. Maybe you're a dev who's been keeping up and fresh on the latest since jump, or maybe you've been using the same reliable framework for your whole career. pic.twitter.com/H228QRmlTr

— Scott Hanselman (@shanselman) January 26, 2018


It can be totally overwhelming when you "wake up" and look around and notice that you don't know NOUN.js or ASPNET 10 or the like. You feel like it's over, and you've missed the boat. I want to encourage you. You're a developer! You have a good base to build on!

You may not know today's JavaScript/Java/C# but you DO know JavaScript/Java/C#. Yes, the Internet moved your cheese while you were sleeping, but you DID grow. When talking to employers, emphasize the base of knowledge you bring. Frameworks come and go. Fundamentals remain.

I really recommend Rob Conery's "The Imposter's Handbook" as a great way to reinforce those fundamentals and core concepts. Rob has been programming for years but without a CS degree. This book is about all the things he learned and all the gaps that got filled in while he was overwhelmed.

Yes this is a squishy blog post, but sometimes that's what's needed. You are smart, you are capable. Look at the replies to the twitter thread and you'll see you are not alone. Your job as a programmer is to be the figure-outer.


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!




Building a Raspberry Pi Car Robot with WiFi and Video


The SunFounder Raspberry Pi Car kit comes with everything you need except the 18650 batteries. You'll need to get those elsewhere. Last year I found a company called SunFounder that makes great Raspberry Pi-related kits and stuff. I got their Raspberry Pi 10" Touchscreen LCD and enjoyed it very much. This month I picked up the SunFounder PiCar 2.0 kit and built it with the kids. The kit includes everything you need except for the Raspberry Pi itself, a mini SD Card (the Pi uses that as a hard drive), and two 18650 rechargeable lithium batteries. Those batteries are enough to power both the Pi itself (so the car isn't tethered) as well as provide enough voltage to run the 3 servos AND motors to drive and steer the car around. You can also expand the car with other attachments like light sensors, line followers, and more.

The PiCar 2.0 includes the chassis, a nice USB WiFi adapter with antenna (one less thing to think about if you're using a Raspberry Pi like me), and a USB webcam for computer vision scenarios. It includes a TB6612 Motor Driver and a PCA9685 PWM (Pulse Width Modulation) Servo Driver with 16 channels for future expansion. The kit also helpfully includes all the tools, screwdriver, wrenches, and bolts.

Preparing to build the SunFounder Raspberry Pi car

All the code for the SunFounder PiCar-V is on GitHub and while there can be a few hiccups with some of the English instructions, there are a bunch of YouTube videos and folks online doing the same thing so we had no trouble making the robot in a weekend.

Building a SunFounder Raspberry Pi Car

PRO TIP - Boot your new Raspberry Pi up with ssh enabled and already joined to your wifi

You'll need to use a tool like Etcher.io to burn a copy of the Raspbian operating system onto a mini SD card. I prefer to save time and avoid having to connect a new Raspberry Pi to HDMI and a mouse and keyboard, so I get the Pi onto my wifi network and enable SSH by copying these two files to the root of the file system of the freshly burned mini SD card. This will cause the Pi to automatically join your network when it boots up for the first time. Then I used Ubuntu on Windows 10 to ssh into the Pi and follow the instructions.

  • Make a 0 byte file called "ssh" and copy it to the root of the new PI disk
  • Make a file called "wpa_supplicant.conf" with Unix (LF-only) line endings and make it look like this. Copy it to the root of the new PI disk.
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="YOURWIFI"
    scan_ssid=1
    psk="yourwifipassword"
    key_mgmt=WPA-PSK
}

I like to use Notepad2 or Visual Studio Code to change the line endings of a file. You can see the CRLF or the LF in the status bar and click it. Unix/Raspbian/Raspberry Pi likes just an LF (line feed) for the line ending, while Windows defaults to using CRLF (Carriage Return/Line Feed, or 0x0D 0x0A) for text files.

Changing the line endings to Unix

The default Raspbian username is pi and the default password is raspberry. You may want to change that. SunFounder has a decent "install_dependencies" script that you'll run on the Pi:

Installing the PiCar dependencies

Once you've built the PiCar you can ssh in and run their development server that gives you a little WebAPI to control the car. The SunFounder folks are pretty good at web development (less so with mobile apps) and have a nice Django app to control the PiCar.

Here's the view from the front camera of the PiCar as viewed through local website on port 8000. It's looking at my computer looking at itself. ;)

Viewing the PiCar camera through the Django website

You're able to control the PiCar from this web interface with the keyboard. You can move the car and steer with WASD, as well as move the head/camera independently. You will need to enter the settings area (upper right corner) and calibrate the back wheels direction. By default, one wheel may go the opposite direction because they can't be sure how you mounted them, so you'll need to reverse one wheel to ensure they both go in the same direction.

They also included a client application, also written in Python. On Windows you'll need to install Python, and when you run client.py you may get an error:

ImportError: No module named requests

You'll need to run "pip3 install requests" as that module isn't installed by default.

Additionally, Python apps aren't smart about High-DPI displays, so I went to C:\Users\scott\appdata\local\Programs\Python\Python36, right-clicked Python.exe, and set the DPI setting to "System (Enhanced)" like this.

Overriding DPI settings to System (Enhanced)

The client app is best for "Zeroing out" the camera and wheels, in case they are favoring one side or the other.


All in all, building the SunFounder "Raspberry Pi Video Car Kit 2.0" with the kids was a great experience. The next step is to see what else we can do with it!

  • Add a speaker so it talks?
  • Add Alexa support so you can talk to it?
  • Make the car drive around and take pictures, then use Azure cognitive services to announce what it sees?
  • Or, as my little boys say, "add weapons and make another bot for it to fight!"

What do you think?

* I use Amazon affiliate links and appreciate it when you use them! It supports this blog and sometimes gives me enough money to buy gadgets like this!


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!



Azure ExpressRoute updates – New partnerships, monitoring and simplification


Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission critical applications and services.

Cisco to build Azure ExpressRoute practice

As we continue to grow the ExpressRoute experience in Azure, we've found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.

To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.

Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.

Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer's on-premises end of the network.

New monitoring options for ExpressRoute

To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics including availability, latency, and throughput in addition to providing graphical view of the network topology. 

NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.

We will continue to enhance the footprint, features, and functionality of NPM for ExpressRoute to provide richer monitoring capabilities.

 


Figure 1: Network Performance Monitor and Endpoint monitoring simplifies ExpressRoute monitoring

Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage but also SaaS services such as Office 365 over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.

Simplifying ExpressRoute peering

To further simplify management and configuration of ExpressRoute, we have merged public and Microsoft peerings. Azure PaaS services such as Azure Storage and Azure SQL are now available on Microsoft peering, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure Virtual Networks remains on private peering.


Figure 2: ExpressRoute with Microsoft peering and private peering

ExpressRoute, using BGP, provides Microsoft prefixes to your internal network. Route filters allow you to select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g. Azure US West, Azure Europe North, Azure East Asia). Previously this capability was only available on ExpressRoute Premium. We will be enabling Microsoft peering configuration for standard ExpressRoute circuits in mid-February.


New ExpressRoute locations

ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.


Merging conflicts in the browser


One of the cool things about having VSTS used across all of Microsoft is that when there’s some useful missing feature, one of the many teams using it might fill the gap and we get to harvest it and make it available to all VSTS customers.  Exactly that has just happened.

I’ve written several times about GVFS and the adoption of Git by our Windows team (actually WDG – Windows and Devices Group). It’s a big organization managing a lot of teams, branches and releases at the same time. Because of this they have a lot of code flowing around and need to merge the various streams of development. To help with this, they built a VSTS extension that enables you to do merge conflict resolution directly in the browser, as part of the Pull Request UI, validate it with the PR build and tests, and commit it without having to get an enlistment and do everything locally. Although it was designed and tested for extremely large code bases, it works for *any* Git repo on VSTS regardless of size and regardless of whether or not you use GVFS. It’s a really nice experience that’s been refined over the past several months of use internally.

I hope you like it.  Check it out:

Pull Request merge conflict extension

Brian

Using EXPLAIN to profile slow queries in Azure Database for MySQL


Azure Database for MySQL is a PaaS (Platform as a Service) solution that Microsoft offers on Azure. Using Azure managed services for MySQL (and PostgreSQL), enables one to easily build an intelligent and secure application.

Though Microsoft has done a lot of work to optimize database performance, sometimes a simple query can easily become a bottleneck impacting overall database performance. Luckily, MySQL integrates a handy tool – the EXPLAIN statement – that can profile client queries and thus help you identify the root cause of a slow query. You can use an EXPLAIN statement to get information about how SQL statements are executed. With this information, you can profile which queries are running slow and why.

The output below shows an example of the execution of an EXPLAIN statement.

mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 995789
     filtered: 10.00
        Extra: Using where

As you can see from this example, the value of key is NULL. This means that MySQL cannot find any indexes optimized for the query and it performs a full table scan. Let's optimize this query by adding an index on the ID column.

mysql> ALTER TABLE tb1 ADD KEY (id);
mysql> EXPLAIN SELECT * FROM tb1 WHERE id=100\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: ref
possible_keys: id
          key: id
      key_len: 4
          ref: const
         rows: 1
     filtered: 100.00
        Extra: NULL

The new EXPLAIN statement shows that MySQL will now use an index to limit the number of rows to one, which in turn dramatically shortens the search time.

Covering index

A covering index includes all of the columns a query needs, allowing values to be retrieved from the index itself rather than the data table. To illustrate this, look at the GROUP BY statement below.

mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 995789
     filtered: 11.11
        Extra: Using where; Using temporary; Using filesort

As you can see in the output, MySQL does not use any indexes because no proper indexes are available. The output also shows "Using temporary; Using filesort", which means MySQL will create a temporary table to satisfy the "GROUP BY" clause.

Creating an index on "c2" alone will make no difference, and MySQL still needs to create a temporary table:

mysql> ALTER TABLE tb1 ADD KEY (c2);
mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 995789
     filtered: 11.11
        Extra: Using where; Using temporary; Using filesort

In this case, you can create a covering index on both "c1" and "c2", adding the value of "c2" directly in the index to eliminate further data lookup.

mysql> ALTER TABLE tb1 ADD KEY covered(c1,c2);
mysql> EXPLAIN SELECT MAX(c1), c2 FROM tb1 WHERE c2 LIKE '%100' GROUP BY c1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: index
possible_keys: covered
          key: covered
      key_len: 108
          ref: NULL
         rows: 995789
     filtered: 11.11
        Extra: Using where; Using index

As the EXPLAIN plan above shows, MySQL will now use the covered index and avoid creating a temporary table.

Combined index

A combined index consists of values from multiple columns and can be considered an array of rows sorted by concatenating the values of the indexed columns. This can be useful in an ORDER BY statement, as below.

mysql> EXPLAIN SELECT c1, c2 from tb1 WHERE c2 LIKE '%100' ORDER BY c1 DESC LIMIT 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 995789
     filtered: 11.11
        Extra: Using where; Using filesort

MySQL performs a "filesort" operation which is somewhat slow, especially if it requires sorting a lot of rows. To optimize this query, you can create a combined index on both columns that are being sorted.

mysql> ALTER TABLE tb1 ADD KEY my_sort2 (c1, c2);
mysql> EXPLAIN SELECT c1, c2 from tb1 WHERE c2 LIKE '%100' ORDER BY c1 DESC LIMIT 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tb1
   partitions: NULL
         type: index
possible_keys: NULL
          key: my_sort2
      key_len: 108
          ref: NULL
         rows: 10
     filtered: 11.11
        Extra: Using where; Using index

The EXPLAIN plan now shows that MySQL can use the combined index to avoid additional sorting since the index is already sorted.

Conclusion

Using EXPLAIN and different types of indexes can increase performance significantly. Having an index on the table doesn’t necessarily mean that MySQL can use it for your queries. Always validate your assumptions by using EXPLAIN and optimize your queries using indexes.


Using the MySQL sys schema to optimize and maintain a database


The MySQL sys schema, which is fully enabled in Azure Database for MySQL 5.7, provides a powerful collection of user friendly views in a read-only database. Building on the MySQL Performance and Information Schemas, you can use the MySQL sys schema to troubleshoot performance issues and manage resources efficiently.

The MySQL Performance Schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the Performance Schema contains more than 80 tables, and getting the necessary information often requires joining tables within the Performance Schema, as well as tables from the Information Schema. Let’s look more closely at how to use the MySQL sys schema.

image

There are 52 views in the sys schema, and each view is prefixed by one of the following:

  • Host_summary or IO: I/O related latencies.
  • Innodb: Innodb buffer status and locks.
  • Memory: Memory usage by the host and users.
  • Schema: Schema related information, such as auto increment, indexes, etc.
  • Statement: Information on SQL statements; this can be statements that resulted in a full table scan or long query time.
  • User: Resources consumed and grouped by users. Examples are file I/Os, connections, and memory.
  • Wait: Wait events, grouped by host or user.

Performance tuning

  • IO is the most expensive operation in the database. We can find out the average IO latency by querying the sys.user_summary_by_file_io view. With the default 125GB of provisioned storage, my IO latency is about 15 seconds.

image

Because Azure Database for MySQL scales IO with respect to storage, after I increase my provisioned storage to 1TB, my IO latency reduces to 571ms, representing a 26X performance increase!

image

  • Despite careful planning, many queries still result in full table scans. For additional information about the types of indexes and how to optimize them, you can refer to the blog post Using EXPLAIN to profile MySQL slow queries. Full table scans are resource intensive and degrade your database performance. The quickest way to find tables with full table scans is to query the sys.schema_tables_with_full_table_scans view.

image

  • To troubleshoot database performance issues, it may be beneficial to identify the events happening inside of your database, and using the sys.user_summary_by_statement_type view may just do the trick.

image

In this example, Azure Database for MySQL spent 53 minutes flushing the slow query log 44,579 times. That’s a long time and a lot of IOs. You can reduce this activity by either disabling your slow query log or decreasing its frequency in the Azure portal.

Database maintenance

  • The InnoDB buffer pool resides in memory and is the main cache mechanism between the DBMS and storage. The size of the InnoDB buffer pool is tied to the performance tier and cannot be changed unless a different product SKU is chosen. As with memory in your operating system, old pages are swapped out to make room for fresher data. To find out which tables consume most of the InnoDB buffer pool memory, you can query the sys.innodb_buffer_stats_by_table view.

image

In the graphic above, it is apparent that other than system tables and views, each table in the mysqldatabase033 database, which hosts one of my WordPress sites, occupies 16KB, or 1 page, of data in memory.

  • Indexes are great tools to improve read performance, but they incur additional costs for inserts and storage. Sys.schema_unused_indexes and sys.schema_redundant_indexes provide insights into unused or duplicate indexes.

image

image

In summary, the sys schema is a great tool for both performance tuning and database maintenance. Make sure to take advantage of this feature in your Azure Database for MySQL.


Network Watcher Connection Troubleshoot now generally available


Azure Network Watcher Connection Troubleshoot, previously in preview as Connectivity Check, is now generally available, sporting a new name. Connection Troubleshoot, part of our Network Watcher suite of networking tools and capabilities, enables you to troubleshoot network performance and connectivity issues in Azure.

Continuing the expansion of tools within Azure Network Watcher, this new addition provides visualization of the hop by hop path from source to destination, identifying issues that can potentially impact your network performance and connectivity.

Network Watcher Connection Troubleshoot features

With the addition of Connection Troubleshoot, Network Watcher sees an incremental increase in its capabilities and in the ways you can utilize it in your day-to-day operations. You can now:

  • Check connectivity between a source (VM) and destination (VM, URI, FQDN, IP address)
  • Identify configuration issues that are impacting reachability
  • See all possible hop-by-hop paths from the source to the destination
  • Measure hop-by-hop latency
  • Measure minimum, maximum, and average latency between source and destination
  • View a graphical topology from your source to destination
  • See the number of packets dropped during the connection troubleshoot check

Connectivity troubleshoot check graph view output. Source: Azure VM; destination: www.bing.com.

What kind of issues can Connection Troubleshoot detect?

Connection Troubleshoot can detect the following types of issues that can impact connectivity:

  • High VM CPU utilization
  • High VM memory utilization
  • Virtual machine (guest) firewall rules blocking traffic
  • DNS resolution failures
  • Misconfigured or missing routes
  • NSG rules that are blocking traffic
  • Inability to open a socket at the specified source port
  • Missing address resolution protocol (ARP) entries for Azure ExpressRoute circuits
  • No servers listening on designated destination ports

Which scenarios are supported by Connection Troubleshoot?

Connection Troubleshoot supports any networking scenario where the source is an Azure VM and the destination is an Azure VM, FQDN, URI, or IPv4 address. Here are some sample scenarios for Connection Troubleshoot to get you started:

  • Connectivity between your Azure VM and an Azure SQL server, where all Azure traffic is tunneled through an on-premises network.

Network Watcher - Connection troubleshoot

  • Connectivity between VMs in different VNets connected using VNet peering. In this example, connection troubleshoot detects that traffic is blocked by the destination VM firewall.


To learn more and get started using Connection Troubleshoot, please visit our documentation page.

Can I use Connection Troubleshoot through other means than the Azure portal?

Yes, Connection Troubleshoot and all Network Watcher features are available through the Azure portal, as well as via PowerShell, Azure CLI, and the REST API.

You can even run Connection Troubleshoot continuously using Azure Functions triggered by a timer to initiate troubleshooting via PowerShell. In the future, stay tuned for a Network Watcher experience enabling continuous connection monitoring of your network infrastructure.

How can I provide feedback for Connection Troubleshoot?

You can provide feedback for Connection Troubleshoot and all Network Watcher features through the Azure feedback forum.

Managing Azure Secrets on GitHub Repositories


Background

An increasing number of developers across the globe use GitHub to host their projects, and many of them use GitHub public repositories for their open source work. While this is a great way to contribute and leverage the power of the community, it does come with a unique set of responsibilities, particularly around managing credentials and other secrets.

Examples of Azure secrets are authentication credentials that should not be made public. These include things such as passwords, private keys, database connection strings, and storage account keys that are managed by Azure tenants.

In Azure, we take security very seriously. Azure secrets are considered sensitive and should not be made publicly available. An exposed secret could lead to the compromise of your Azure subscription, your cloud assets, as well as on-premises assets and data; putting your applications or services at significant risk.

Microsoft Credential Scanner Preview

To help protect our customers, Azure runs Credential Scanner aka CredScan. CredScan monitors all incoming commits on GitHub and checks for specific Azure tenant secrets such as Azure subscription management certificates and Azure SQL connection strings. 

Internally at Microsoft we’ve been developing and leveraging CredScan to protect Azure and our 1st party services and applications. In the future we plan to add and release support for more types of secrets to our GitHub scanning. The GitHub scanning and identification of exposed secrets is done automatically by Microsoft. While developers do not have to do anything to opt in to Microsoft scanning of GitHub for exposed Azure secrets, you should always be vigilant and avoid exposing secrets.

Below is a high-level diagram of our automated GitHub scanning process using CredScan:

image

Upon detection of an exposed secret, the Azure subscription owner gets notified via email from Microsoft’s Cyber Defense Operation Center (CDOC). The email notifies users of which commits have an issue, along with their affected subscriptions, assets, secret type, and guidance on how to fix the exposure.

The scanner focuses on incoming commits. As a rule of thumb, if you receive a notification, there may be more credentials in your source code, so make sure to take a close look, and include a review of past commits and commit history.  Rotate and remove all such secrets from your code, storing them in a safe location such as Azure Key Vault.

Thousands of customers have been notified since the detection was put in place resulting in further securing customer applications and Azure assets. We sincerely appreciate customer efforts in working with us and making Azure safer.

Remediation

Keep in mind that removing a published secret does not address the risk of exposure. The secret may have been compromised, with or without your knowledge and is still exposed in Git History. For these reasons, the secret must be rotated and/or revoked immediately to avoid security risks.

Some recommended steps to help mitigate this risk:

  • Rotate the published credential immediately (e.g., if a leaked certificate is detected, the certificate must be reissued and the leaked certificate removed and/or revoked).
  • Update configs/apps to use the new secret as necessary.
  • Store the new secret in Azure Key Vault and out of GitHub.
  • Do not publicly share or expose the new secret.
  • Remove the published secret.

Additional Information

While detecting hard coded secrets in a timely manner and mitigating the risks is helpful, it is even better if one could prevent secrets from getting checked in altogether. In this regard, Microsoft has released CredScan Code Analyzer as part of Microsoft DevLabs extension for Visual Studio. While in early preview, it provides developers an inline experience for detecting potential secrets in their code, giving them the opportunity to fix those issues in real-time.

For more information, please see the announced availability early preview of the Credential Scanner Code Analyzer.
Below are a few additional resources to help you manage secrets and access sensitive information from within your applications in a secure manner:

Last week in Azure: New Zone Redundant capabilities now in preview, and more


Last week in Azure, several announcements were made for Zone Redundant capabilities now in public preview that build on Azure Availability Zones. In addition, new backup and update capabilities make it easier to manage and help protect your virtual machines. There's also a new, intuitive and simplified onboarding experience for Azure Site Recovery service for VMware to Azure.

Now in preview

Zone Redundant Virtual Machine Scale Sets now available in public preview - Zone Redundant Virtual Machine Scale Sets bring the scalability and ease of use of scale sets to availability zones by automatically spreading your virtual machines across availability zones. You don’t need to worry about distributing VMs across zones, choosing which VMs to remove when scaling in, etc.

Azure Zone Redundant Storage in public preview - Zone Redundant Storage (ZRS) greatly simplifies development of highly available applications by storing three replicas of your data in different Availability Zones, with inserts and updates to data being performed synchronously across these Availability Zones. This enables you to continue to read and write data even if the data in one of the Availability Zones is unavailable or unrecoverable. ZRS is built over Availability Zones in Azure which provide resilience against failures through fault-isolated groups of datacenters within a single region.

Cognitive Services - Availability of SDKs for the latest Bing Search APIs - Currently available as REST APIs, the Bing APIs v7 now have SDKs in four languages: C#, Java, Node.js, and Python. These SDKs include offerings such as Bing Web Search, Bing Image Search, Bing Custom Search, Bing News Search, Bing Video Search, Bing Entity Search, and Bing Spell Check.

Headlines

CustomVision.AI: Code-free automated machine learning for image classification - Custom Vision Service is designed to build quality classifiers with very small training datasets, helping you build a classifier that is robust to differences in the items you are trying to recognize and that ignores the things you are not interested in. Now, you can easily add real time image classification to your mobile applications. Creating, updating, and exporting a compact model takes only minutes, making it easy to build and iteratively improve your application.

Azure Standard support now offers the highest value support for production workloads amongst major cloud providers - Azure now offers the most cost effective and predictable support offering amongst major cloud providers, including: a significant price drop to a fixed cost of $100 USD per month, so forecasted support costs are completely predictable; faster initial response time, now at 1 hour for critical cases; and continuing our current offering of unlimited 24x7 technical and billing support for your entire organization.

Compliance assessment reports for Azure Stack are now available - Coalfire, a Qualified Security Assessor (QSA) and independent auditing firm, has audited and evaluated Azure Stack Infrastructure against the technical controls of PCI-DSS and the CSA Cloud Control Matrix, and found that Azure Stack satisfies the applicable controls.

Announcing integration of Azure Backup into VM create experience - You now have the ability to protect the VMs with an enterprise-grade backup solution from the moment of VM creation. Azure Backup supports backup of wide variety of VMs offered by Azure including Windows or Linux, VMs on managed or unmanaged disks, premium or standard storage, encrypted or non-encrypted VMs, or a combination of the above.

Azure Search enterprise security: Data encryption and user-identity access control - Azure Search now supports encryption at rest for all incoming data indexed on or after January 24, 2018, in all regions and SKUs including shared (free) services. For existing content, you have to re-index to gain encryption. With this announcement, encryption now extends throughout the entire indexing pipeline – from connection, through transmission, and down to indexed data stored in Azure Search.

ITSM Connector for Azure is now generally available - IT Service Management Connector (ITSMC) for Azure provides bi-directional integration between Azure monitoring tools and your ITSM tools – ServiceNow, Provance, Cherwell, and System Center Service Manager.

Comprehensive monitoring for Azure Site Recovery now generally available - The comprehensive monitoring capabilities within Azure Site Recovery gives you full visibility into whether your business continuity objectives are being met. The new experience includes: an enhanced vault overview page, replication health model, failover readiness model, simplified troubleshooting experience, and in-depth anomaly detection tooling.

Start replicating in under 30 minutes using Azure Site Recovery's new onboarding experience - New this month in Azure Site Recovery: Open Virtualization Format (OVF) template-based configuration server deployment; a new intuitive infrastructure management experience; and a hassle-free mobility service deployment experience.

Accelerated Spark on GPU-enabled clusters in Azure - Azure now supports running Spark on a GPU-enabled cluster using the Azure Distributed Data Engineering Toolkit (AZTK). In a single command, AZTK allows you to provision on demand GPU-enabled Spark clusters on top of Azure Batch's infrastructure, helping you take your high performance implementations that are usually single-node only and distribute it across your Spark cluster.

Keeping your environment secure with Update Management - Azure Update Management enables you to manage updates and patches for your machines. Quickly assess the status of available updates, schedule installation of required updates, and review deployment results to verify that updates applied successfully for machines that are Azure VMs, hosted by other cloud providers, or on-premises.

Training & webinars

Serious about cloud security? Check out this new training on Azure Security Center - In Hybrid Cloud Workload Protection with Azure Security Center, a new course now available on Microsoft Virtual Academy, Yuri Diogenes and Ty Balascio offer an overview of Azure Security Center, including requirements, planning, onboarding, and troubleshooting. Ty and Yuri work with real-world data and share their experience in the industry to show how the threat landscape differs for a cloud or hybrid versus on-premises. And they explore threat detection and response in a lab environment so they can talk you through it.

Accelerate innovation with Microsoft Azure Databricks - Azure Databricks is a fast, easy and collaborative Apache® Spark™ based analytics platform optimized for Azure. With Azure Databricks, you can help your organization accelerate time to insights by providing a collaborative workspace and a high-performance analytics platform, all at the click of a button.

Launching the Azure Storage Solution showcase - This new webcast series showcases innovative technology partners who have built solutions on top of the Azure Storage infrastructure, such as: Commvault, NetApp, Veritas, and PEER Software.

Developer Spotlight: Serverless

Build apps faster with Azure Serverless - Azure Serverless platform provides a series of fully managed services ranging from compute, storage, database, orchestration, monitoring, analytics, intelligence, and so on to help construct serverless applications for any kind of scenario. In this blog post, Raman Sharma focuses on two pieces central to serverless application development, Azure Functions and Azure Logic Apps.

Azure Storage for Serverless .NET Apps in Minutes - In this blog post, Microsoft Cloud Developer Advocate Jeremy Likness walks through a simple application that creates a short code for a long URL to easily reference it.

Building event-driven applications using serverless architectures - Join us on Tuesday, February 13 for this live webinar on building event-driven applications using Azure Event Grid.

Azure Functions Introduction - This high-level overview of Azure Functions in the Azure docs will get you going on learning Azure Functions with a quick overview and links that will take you deeper into serverless development.

Serverless: The Future of Cloud Development - Download this free infographic for an overview of what Serverless has to offer.

Service updates

Azure shows

Corey Talks with Microsoft Solutions Providers: Datacom Systems, Hanu and New Signature - Corey Sanders, Director of Program Management on the Microsoft Azure Compute team sat down to introduce three videos we recorded back in December. Corey met and talked with cloud first Microsoft Solutions Providers about options for managed services and what they had to offer.


Spring Boot with VS Code and Azure - Xiaokai He stops by to show Donovan Brown the rich Java support within VS Code, a free and open source editor, as well as how you can easily deploy your Spring Boot application to Azure Web App for Containers.

Azure Notebooks - Shahrokh Mortazavi stops by to chat with Scott Hanselman about Azure Notebooks, a free hosted Python/R/F# REPL for learning to program all the way to mastering Data Science.

The Azure Podcast: Episode 213 - Azure Dev using Windows Subsystem for Linux - Tara Raj, a PM who has been on the show before, talks to us about her latest endeavor, the Windows Subsystem for Linux and how it can be used for various tasks including Azure Development!

Target Surface Hub and Windows 10 S on a Snapdragon processor with your UWP apps


When submitting your UWP app or game to Microsoft Store through Dev Center, you have the flexibility to choose the device families on which customers can acquire your app. By default, we make your app or game available to all device families which can run it (except for Xbox, which you can opt into as appropriate if your packages support it). This lets your apps and games reach the most potential customers.

Recently, we’ve added new options that let you offer your submission to customers on Surface Hub. You can now also offer ARM packages to Windows 10 S on a Snapdragon processor (Always Connected PCs).

To target Surface Hub when submitting your UWP app to the Microsoft Store, simply ensure that the box for the Windows 10 Team device family is checked. This is generally the case if you upload packages targeting the Universal or Team device family.

If you include an ARM package in your submission that targets the Universal or Desktop device family, this package will be made available to Windows 10 S on a Snapdragon processor (Always Connected PCs) devices as long as the Windows 10 Desktop device family box is checked.

The example above shows three packages that target the Universal device family, x64, ARM and x86. The boxes for Windows 10 Desktop, Windows 10 Mobile, Windows 10 Xbox, Windows 10 Team, and Windows 10 Holographic are selected. This means that customers on any of those device types can download this submission.

For more about device family selection, check out our documentation.

App packaging and testing

App packages are configured to run on a specific processor architecture. We highly recommend that you build your app packages to target all architectures whenever possible, so your app will run smoothly on all Windows 10 devices. To learn more, visit our app package architecture documentation.

We highly recommend that you test your app on all device families that you plan to support to ensure the best experience on all devices. To learn more about steps to take before packaging and submitting your UWP app, read our documentation.

The post Target Surface Hub and Windows 10 S on a Snapdragon processor with your UWP apps appeared first on Windows Developer Blog.


VS Subscriptions and linking your VSTS account to AzureAD

A few weeks ago, I posted about a change coming to organizations managing their identities with Microsoft Accounts (MSAs); as of March 30th, you will no longer be able to create new MSAs with a custom domain name that is linked to an Azure Active Directory tenant.  Many customers have reached out asking how this change affects... Read More

Announcing the Bing Maps Fleet Tracker Solution


The Bing Maps team is pleased to announce the availability of our complete asset tracking solution for small fleets. The solution was previewed in September at Microsoft Ignite in Orlando and is now available for download and customization. The Bing Maps Fleet Tracker solution includes an Azure-based backend of services and storage, a web interface for tracking assets, and client code for Android phones.

With the Bing Maps Fleet Tracker, you can have your own deployment live today, providing the ability to track the location of your business's phones in real time! The Fleet Tracker is ideal for everyone from an HVAC repair company needing to know which repair person is best positioned for a service call to a news organization trying to coordinate reporters, news vans, and other mobile assets in the field. A chain of hotels can see where their airport shuttle vans are in real time, allowing the front desk to provide better customer service and ETAs.

Since all of the source code is provided, organizations with their own development resources can customize the solution and add new features. You can easily utilize sources of GPS information besides mobile phones or integrate with an existing database of mobile assets.
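
For example, pushing GPS points from a custom source boils down to an HTTP call against the service. The sketch below is hypothetical - the endpoint path and payload fields are illustrative stand-ins, not the project's actual API, so check the GitHub documentation for the real contract:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GpsPointUploader
{
    // Hypothetical base address and route; consult the Fleet Tracker repo for the real API surface.
    static readonly HttpClient client = new HttpClient
    {
        BaseAddress = new Uri("https://your-fleet-tracker.azurewebsites.net/")
    };

    static async Task SendPointAsync(int deviceId, double latitude, double longitude)
    {
        // Invariant formatting keeps the JSON valid regardless of the machine's culture.
        var json = FormattableString.Invariant(
            $"{{\"latitude\":{latitude},\"longitude\":{longitude},\"time\":\"{DateTime.UtcNow:o}\"}}");
        var response = await client.PostAsync($"api/devices/{deviceId}/points",
            new StringContent(json, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}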

And for organizations that would like to take advantage of the Bing Maps Fleet Tracker but don't have developer resources, we have a network of partners who can get you up and running quickly with a customized solution. Email maplic@microsoft.com for more details about our partners.

The solution relies on the Bing Maps Enterprise platform for map rendering, geocoding and routing. Think of the solution as an accelerator for those wishing to do asset tracking; it provides A LOT of functionality right out of the box and is extensible and customizable to meet your needs. As an accelerator, it can save you many person-months of coding.

Fleet Tracker screenshot

With the Bing Maps Fleet Tracker, the application possibilities are wide open! Delivery services can provide top-notch customer service by letting their customers know where their driver is and how long before they will arrive. A mobile salesforce can see where their colleagues are in real time without having to text or call. Or maybe you want to build your own family-and-friends meetup application. We'd love to hear what you come up with. If you create something super cool with the Fleet Tracker source code, shoot us an email with the details!

Features

Right out of the box, after deploying the Fleet Tracker solution to Azure, you'll have a lot of great functionality without modifying or writing any code.

  • Easy to track Android devices. Register devices for tracking by scanning a QR code or manually entering device information in the web application. The mobile companion app features client control of when location tracking is on or off, giving the mobile worker complete control of their location information.
  • Trip Detection. GPS points are analyzed as they arrive in the service to determine trips and destination locations automatically.
  • Geofence-based alerts. You can create regions by drawing on the map, and have an email alert sent whenever a specified asset enters or leaves the geofenced area. In the screenshot below, you can see a new geofence being created, represented by the maroon box. Whenever one of the selected assets enters this region, an email is sent to the specified distribution list.

Geofence Alerts

  • Reporting. Get an understanding of how many miles your fleet is driving, trip durations, geofence activity and more.

Reporting

  • Extensibility. All source code is provided, making it easy to integrate with your own backend data and mobile applications.

How to get started

To get started, check out our documentation on GitHub at https://github.com/Microsoft/Bing-Maps-Fleet-Tracker. Our documentation includes everything you need to quickly start using Fleet Tracker.

Also, for more information about each of the APIs, including documentation, how to get licensed, and frequently asked questions, visit our website.

- Bing Maps Team

Speed up R with Parallel Programming in the Cloud


This past weekend I attended the R User Day at Data Day Texas in Austin. It was a great event, mainly because so many awesome people from the R community came to give some really interesting talks. Lucy D’Agostino McGowan has kindly provided a list of the talks and links to slides, and I thoroughly recommend checking it out: you're sure to find a talk (or two, or five or ten) that interests you.

My own talk was on Speeding up R with Parallel Programming in the Cloud, where I talked about using the doAzureParallel package to launch clusters for use with the foreach function, and using aztk to launch Spark clusters for use with the sparklyr package. I've embedded the slides below: it's not quite the same without the demos (and sadly there was no video recording), but I've included screenshots in the slides to hopefully make it clear.

Live Free or Dichotomize: Wrangling Data Day Texas Slides

How to set up Kubernetes on Windows 10 with Docker for Windows and run ASP.NET Core


Docker for Windows is really coming along nicely. They have both a Stable and Edge channel and the Edge (beta, experimental) one just included a lovely new feature - Kubernetes support. Per their docs, Kubernetes is only available in Docker for Windows 18.02 CE Edge. They set most everything up nicely and put Kubectl into your path and setup a context. If you use kubectl for other things - like your own Raspberry Pi Kubernetes Cluster, then you'll need to be aware of switching contexts. Same thing applies if you have one in the cloud, like the Kubernetes Cluster I made in Azure AKS.
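
Switching is just a couple of commands (the local context that Docker for Windows creates is typically named docker-for-desktop, though the name may vary by version):

C:\> kubectl config get-contexts
C:\> kubectl config use-context docker-for-desktop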

Got Docker for Windows? If you have not yet installed Docker for Windows, see Install Docker for Windows for an explanation of stable and edge channels, system requirements, and download/install information.

It's easy to get started: just click "Enable Kubernetes" and Docker for Windows will download and start the images you need. I clicked "show system containers" because I like to see what's hidden from me, but you decide for yourself. Do be aware - there's a TON.

Enabling Kubernetes in Docker for Windows

By default, you won't get the Kubernetes Dashboard - of which I'm a fan - so you may want to install that. If you follow the default instructions (and you're a noob like me) then you'll likely end up with a Dashboard that is pretty locked down. It can be somewhat frustrating to get access to your own development dashboard, so I use the alternative (read: totally insecure) dashboard, like this:

C:\> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

I also like charts and graphs so I added these as well:

C:\> kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
C:\> kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
C:\> kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

I can access the dashboard by default by running "kubectl proxy" then visiting http://localhost:8001/ui, and I'll get redirected to the dashboard:

Kubernetes Dashboard

Now I can run through all the cool Kubernetes tutorials like the Guestbook Kubernetes Sample Application from the convenience of my Windows 10 machine. (I'm running a Surface Book 2 on the current non-Beta Windows 10.)

There are a lot of nice samples on running .NET Core and ASP.NET Core apps with Docker up at https://github.com/dotnet/dotnet-docker-samples/

I made a quick ASP.NET Core app called kubeaspnetapp:

C:\Users\scott\Desktop>dotnet new razor -o kubeaspnetapp
The template "ASP.NET Core Web App" was created successfully.
...snip...
Restore succeeded.

Then added a two-stage build Dockerfile that looks like this:

FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "kubeaspnetapp.dll"]

And built and tagged the image with:

C:\Users\scott\Desktop\kubeaspnetapp>docker build -t kubeaspnetapp:v1 .

Then I create a quick Deployment that manages a Pod that runs the Container:

C:\Users\scott\Desktop\kubeaspnetapp>kubectl run kubeaspnetapp --image=kubeaspnetapp:v1 --port=80
deployment "kubeaspnetapp" created

Now I'll expose it to the "outside." Again, this is usually done with .yaml files (there's a sketch of an equivalent manifest after the commands below), but it's a good learning exercise and it's all local.

C:\Users\scott\Desktop\kubeaspnetapp>kubectl get deployments
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubeaspnetapp   1         1         1            1           1m
C:\Users\scott\Desktop\kubeaspnetapp>kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
kubeaspnetapp-778f6d49bd-rct59   1/1       Running   0          1m
C:\Users\scott\Desktop\kubeaspnetapp>kubectl expose deployment kubeaspnetapp --type=NodePort
service "kubeaspnetapp" exposed
C:\Users\scott\Desktop\kubeaspnetapp>kubectl get services
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubeaspnetapp   LoadBalancer   10.98.234.67   <pending>     80:31756/TCP     5s
kubernetes      ClusterIP      10.96.0.1      <none>        443/TCP          1d
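
If you'd rather do this declaratively, here's a rough sketch of an equivalent .yaml for the demo above (untested; it assumes the apps/v1 API available in the Kubernetes version that Docker for Windows ships):

# deployment.yaml - roughly what the kubectl run/expose commands above create
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeaspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeaspnetapp
  template:
    metadata:
      labels:
        app: kubeaspnetapp
    spec:
      containers:
      - name: kubeaspnetapp
        image: kubeaspnetapp:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: kubeaspnetapp
spec:
  type: NodePort        # lets Kubernetes pick a high node port (like 31756)
  selector:
    app: kubeaspnetapp
  ports:
  - port: 80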

Then I'll hit http://127.0.0.1:31756 in my browser...note how that port is brokering to the internal port 80 where the app listens...and there's my ASP.NET Core app running locally on Kubernetes, set up with Docker for Windows. Nice.

My ASP.NET Core app running in Kubernetes local on my Windows 10 machine

Here's me getting the startup logs from that pod:

C:\Users\scott>kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
kubeaspnetapp-7fd7f7ffb9-8gnzd   1/1       Running   0          6m
C:\Users\scott\Dropbox\k8s for pi\aspnetcoreapp>kubectl logs kubeaspnetapp-7fd7f7ffb9-8gnzd
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.

Pretty cool. As all the tooling across Windows, Docker, Kubernetes, and Visual Studio (all flavors) continues to get better, I can only imagine this experience will keep improving. I look forward to a time when I can freely mix containers from different OSes and easily push them all en masse to Azure.



MSTest V2: in-assembly parallel test execution

Introduction: MSTest V2 v1.3.0 Beta2 now supports in-assembly parallel execution of tests – the most requested and commented issue on the testfx repo. The feature can dramatically reduce the total time taken to execute a suite of tests. To get started, install the framework and adapter from NuGet. If you are already using MSTest V2, then... Read More
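
As a quick taste, in-assembly parallelism is enabled with an assembly-level attribute. Here's a minimal sketch (the attribute and enum come from the MSTest V2 framework, but check the release notes for the exact options in your version):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Workers = 0 lets MSTest choose a degree of parallelism based on the machine's CPU count.
// Scope controls whether whole test classes or individual test methods run in parallel.
[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)]

[TestClass]
public class MathTests
{
    [TestMethod]
    public void AddsCorrectly() => Assert.AreEqual(4, 2 + 2);

    [TestMethod]
    public void MultipliesCorrectly() => Assert.AreEqual(9, 3 * 3);
}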