
Join Us to Learn How to Build Android 8.0 Oreo and iOS 11 apps with Visual Studio


[Hello, we are looking to improve your experience on the Visual Studio Blog. It will be very helpful if you could share your feedback via this short survey that should take less than 2 minutes to fill out. Thanks!]

Visual Studio and Xamarin enable .NET developers everywhere to use their favorite language and full-featured IDE to create native Android, iOS, and UWP apps, even incorporating the latest and greatest from Android 8.0 Oreo and iOS 11.

To help you get the most out of the new Google and Apple APIs, we’re hosting two webinars: one for all things Android 8.0 Oreo and one dedicated to iOS 11. In these webinars you’ll explore what you need to know about each update, see step-by-step demos and get the expert tips you need to start adding exciting features to new or existing apps. Come with questions, as our team of mobile experts will be ready and waiting to answer!

[Register Now]

 

Wednesday, December 13th:

Get the Most out of Android 8.0 Oreo with Visual Studio Tools for Xamarin

 

Join Tom Opgenorth on December 13th at 9am PT, where he’ll walk through new features, like downloadable fonts, emojis, and background execution limits, key considerations for new and existing apps, and how Visual Studio Tools for Xamarin allows .NET developers to take advantage of Android 8.0 Oreo, no platform-native tools or code required.

 


Thursday, December 14th:

Get the Most out of iOS 11 with Visual Studio Tools for Xamarin

 

Join me, Craig Dunn, on December 14th at 9am PT where I’ll discuss all the exciting new features of iOS 11, including drag and drop, SiriKit, CoreML, map improvements, and more. We’ll also look at the iPhone X form factor, learn how to fully support new and old devices, and use your .NET skills to build even better iOS apps.

 

 

Get ready to make your apps shine with the latest Android and iOS features, from user-facing capabilities to backend improvements:

  • Register for “Get the Most out of Android 8.0 Oreo with Visual Studio Tools for Xamarin”: Wednesday, December 13th at 9am PT
  • Register for “Get the Most out of iOS 11 with Visual Studio Tools for Xamarin”: Thursday, December 14th at 9am PT

We’ll send the recording to all registrants, so register even if you’re unable to attend live.

If you want to get started for free, download Visual Studio 2017 or Visual Studio for Mac today.

See you soon!

Craig Dunn, Principal Program Manager
@conceptdev

Craig works on the Mobile Developer Tools documentation team, where he enjoys writing cross-platform code for iOS, Android, Mac, and Windows platforms with Visual Studio and Xamarin.


Migrating your existing .NET application to the cloud? Tell us about it!


Hi everyone! The .NET team is conducting a survey to learn more about your approach for moving existing .NET applications to the cloud.  The survey should take less than 5 minutes to complete.

Take the Survey now!

The survey also lets you provide your contact details so that a .NET team member can reach out and help you explore the various cloud migration options available for your application.

We have already worked with a few customers and would like to extend this to all our customers!

 

Control how your files are cached on Azure CDN using caching rules


Content Delivery Networks (CDNs) help bring your content closer to your users all over the world. One key way a CDN improves latency is by intelligently caching files on edge servers located in various geographic regions. A new feature allows customers using Azure CDN Standard from Verizon or Akamai to create rules that direct the CDN servers to override their default caching behavior. Customers of Azure CDN Premium from Verizon can continue to use the Rules Engine to manage their cache.

This feature allows you to specify the cache duration for a specific file, for files under a path, or for specific file extensions. Many users find this easier than managing cache directive headers on the origin server itself.

For example, cache all files that end in .jpg for one year, since they don’t change often, but cache files under the directory /news for only an hour since that can change frequently.
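Conceptually, the CDN resolves each request path against the rule list and falls back to a default when nothing matches. Here is a small sketch of that resolution; the rule format, function names, and default TTL are invented for illustration (the actual rules are configured in the Azure portal, not in code):

```python
from fnmatch import fnmatch

# Hypothetical caching rules, checked in order: (match pattern, cache seconds).
# Mirrors the example above: /news/* for one hour, *.jpg for one year.
RULES = [
    ("/news/*", 3600),           # news changes frequently: cache 1 hour
    ("*.jpg", 365 * 24 * 3600),  # images rarely change: cache 1 year
]

DEFAULT_TTL = 7 * 24 * 3600      # illustrative 7-day fallback

def cache_ttl(path: str) -> int:
    """Return the cache duration (in seconds) for a request path."""
    for pattern, ttl in RULES:
        if fnmatch(path, pattern):
            return ttl
    return DEFAULT_TTL
```

Ordering matters: the more specific `/news/*` rule is checked before the broad extension rule, so a JPEG under `/news` is still cached for only an hour.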

In addition, users of the Dynamic Site Delivery optimization can now deliver mixed (dynamic and static) content from a single endpoint by enabling caching for static files.

To learn more about how caching works and the default behavior on the different Optimization Types, see our documentation on How Caching Works.

To jump right into using caching rules, see our documentation on how to Control Azure CDN caching behavior with caching rules.

Connect your applications to Azure with Open Service Broker for Azure


I’m excited to announce today a preview of the Open Service Broker for Azure (OSBA), an implementation of the Open Service Broker API for Azure services. In a multi-cloud, multi-platform world, developers want a standard way to connect their applications to the wealth of services available in the marketplace. The Open Service Broker API is an industry-wide effort to meet that demand, simply and securely. OSBA is the simplest, most flexible way to connect your applications to a suite of the most popular Azure services, from standard offerings like Azure Database for MySQL to unique Azure solutions like Azure Cosmos DB, our globally distributed multi-model database. We currently offer 11 Azure services and intend to support most others over the next year.

Open Service Broker for Azure

The Open Service Broker for Azure and the service catalog CLI make it easy to connect your Kubernetes applications to Azure services.

 

Open Service Broker for Azure can be deployed to any OSB-compatible platform running in any environment. Whether you’re using Kubernetes, Cloud Foundry, or OpenShift in Azure, Azure Stack, your own on-premises environment, or somewhere else, you can take advantage of all the great services that Azure has to offer. Microsoft has a long track record of innovation in this space, going back to our service broker investments for Cloud Foundry in 2016, and of doing so in the open. OSBA builds upon this work and extends it to new platforms and services, while showing our commitment to the Open Service Broker API. With today’s announcement, we plan to supersede the meta-azure-service-broker project in favor of OSBA and will work closely with our Cloud Foundry customers to ensure a smooth migration.

As part of our broader investments on Kubernetes, Microsoft has been an active contributor to the Service Catalog, which enables Kubernetes operators to leverage cloud-native services provided by platforms like Azure. Today, we are also announcing the alpha release of a CLI for the Kubernetes service catalog.

This tool allows interaction with well-defined service catalog objects like brokers, service instances, and service bindings, making interacting with the service catalog considerably simpler. The service catalog CLI will work with the service catalog no matter where it is deployed, and we look forward to collaborating on it with the rest of the Kubernetes community.

The demo below shows how easy it is to use OSBA to provision and bind Azure Services for some of the supported environments. Note that for Kubernetes this includes natural integration with the Helm package manager and the use of the new service catalog CLI.

[Video: OSBA provisioning and binding demo]

You can get started building Azure-powered applications with OSBA today. We have instructions for deploying on Kubernetes and Cloud Foundry, with OpenShift coming soon. If you are targeting Kubernetes, be sure to check out our Helm charts where you can see examples of popular Helm charts like WordPress and Concourse which have been enhanced to use Azure services provisioned by OSBA.

Thanks, and we’ll see you on GitHub!

Sean

Azure brings new Serverless and DevOps capabilities to the Kubernetes community


Starting today, the Kubernetes community comes together at KubeCon in Austin, Texas, with the goal of making it easier than ever to use containers to modernize existing applications and manage new applications to drive digital transformation. Today and tomorrow we will be announcing more Kubernetes community projects and partnerships that extend what customers can do with Kubernetes and Azure, and the ease with which they can do it with new projects for serverless containers and Kubernetes-native DevOps.

Our announcements this week build on significant investments in Kubernetes including joining the CNCF, being the first major cloud provider to introduce serverless containers (Azure Container Instances), delivering Azure’s managed Kubernetes service (AKS), and contributing projects such as Draft and Brigade. We’ve been overwhelmed by the interest in and adoption of Kubernetes on Azure; usage is up over 700% YTD. Thank you to everyone who has tried our services, contributed to projects, or just provided feedback. We hope you’ll keep doing so.

Now, for the news...

Manage serverless containers using Kubernetes with the Virtual Kubelet

Back in July, we released the Azure Container Instances (ACI) Connector for Kubernetes, an experimental project to extend Kubernetes with ACI, a serverless container runtime that provides per-second billing and no virtual machine management. We were thrilled to see companies like Hyper.sh adapt it to their own serverless container runtime. Today we are announcing a new version of the Kubernetes connector, the Virtual Kubelet, which can be used by customers to target ACI or any equivalent runtime. The Virtual Kubelet features a pluggable architecture that supports a variety of runtimes, and uses existing Kubernetes primitives, making it much easier to build on. We welcome the community to join us in empowering developers with serverless containers on Kubernetes and are proud that Hyper.sh is already joining us as a contributor.

According to James Kulina, Chief Operating Officer, Hyper.sh: "Hyper is very excited to support the Virtual Kubelet project as the first outside contributor. Hyper's vision from the start has been to make deploying and using containers as simple and easy as possible. Now with the Virtual Kubelet project, platforms that support secure container technology, such as our Hyper.sh cloud through its use of Kata Containers, will enable seamless multi-cloud container deployment between Kubernetes-based serverless container platforms."

Connect Your Kubernetes Applications (and more) to Azure Services with Open Service Broker API

As adoption of Kubernetes continues to grow on Azure, customers need an easy way to connect their containers to Azure services. Today, Microsoft is open sourcing the Open Service Broker for Azure (OSBA), built using the Open Service Broker API. The Open Service Broker API provides a standard way for service providers to expose backing services to applications running in cloud native platforms like Kubernetes and Cloud Foundry. OSBA exposes popular Azure services such as Azure Cosmos DB, Azure Database for PostgreSQL, and Azure Blob Storage. With OSBA and the Kubernetes Service Catalog, customers can manage these SLA-backed Azure data services via the Kubernetes API, making it easy for developers to use Azure's data stores in a Kubernetes-native way. To showcase OSBA, we’ve adapted some of the most popular Helm charts to leverage Azure services. For example, using OSBA and Helm you can now easily install an instance of WordPress backed by Azure Database for MySQL, instead of running the database in a container.

Additionally, Microsoft is also contributing an alpha release of a Command Line Interface for the Kubernetes service catalog. This helps cluster administrators and application developers request and use services exposed through the Kubernetes Service Catalog. For more, be sure to check out the Open Service Broker for Azure blog.

Kubernetes-native DevOps: Dashboard and Visualization tool for pipelines

We are also excited to introduce Kashti, a dashboard and visualization tool for Brigade pipelines, a project we announced in Prague at the Open Source Summit. Brigade helps developers and operations managers get their work done quickly by scripting together multiple tasks and executing them inside of containers. This has many applications including Kubernetes-native CI/CD, ETL, batch workflows, and more. Kashti extends Brigade with a dashboard UI, built as a Kubernetes service and installed via Helm. With Kashti, developers can easily manage and visualize their Brigade events and projects through a web browser. The Kashti project is in its early days and we hope that you’ll check it out, kick the tires, and contribute. For more on how to try it out and even what the name means, check the Kashti blog.

Share your Feedback

Through today’s announcements, and looking forward, Microsoft will continue our commitment to the open source and Kubernetes experience on Azure. We hope that you’ll try out these new services, engage in these projects, and share feedback with us. Be sure to stop by the Azure booth to see some demos, attend our sessions throughout the week, participate in the community projects, or join the AKS preview and let us know what you think.

In the meantime, check out how one of our customers, Siemens Healthineers, is building cloud-based healthcare technology using Azure and Kubernetes.

Snapshot Debugging with Visual Studio 2017: Now Ready for Production



Earlier this year we previewed the Snapshot Debugger, a tool that enables you to debug web apps running in production in Azure. With the general availability of Visual Studio 2017 Enterprise 15.5 this week, Snapshot Debugger is now available for you to get started. Read more about how here.

Snapshot Debugging Overview

If an issue happens in production, you may find yourself digging through logs or attempting to repro the issue in a local environment. Often, the logs may be insufficient, or a local repro may be hard if not impossible to setup. The Snapshot Debugger enables a safe, non-invasive way for you to use the Visual Studio debugger you know and love directly against the production environment in Azure where the issue is happening.

The Snapshot Debugger works by taking a snapshot of the state of your app at specified lines of code where you set Snappoints. While traditional breakpoints halt your live server when hit, stopping it from serving requests, Snappoints quickly capture state, including locals, watches, and the call stack, while your app continues to run. This means that you can debug the actual live, running app without impacting the experience your customers have while using it. You can read a further overview of how the Snapshot Debugger can be used effectively in production here.

You can capture snapshots at specified lines of code by using Snappoints, as shown below. Additionally, you can capture snapshots automatically when exceptions happen in your app by setting up Application Insights.

Using Snappoints

Snapshot Debugging has almost no impact on the performance of your production service or on the experience end users have while using your application. In the rest of this blog post, we’ll measure the performance impact of using the Snapshot Debugger against a live app with a quick case study using load testing.

Production Performance while Debugging

I ran a performance load test to measure the impact of Snapshot Debugging on a deployed app. In a 5-minute load test, I simulated 1,000 users continuously hitting an Azure App Service running the MusicStore ASP.NET Core app. Roughly two minutes into the test, I attached the Snapshot Debugger and set a Snappoint that is hit in the MusicStore app’s home page. I then opened the resulting Snapshot and spent the remaining three minutes inspecting variables and the state at the point of time the snapshot was captured.

Debugging in Production - Performance

Debugging in Production - Throughput

During the load test, my Azure App Service plan was hovering at 80-90% CPU Usage. However, even when I started Snapshot Debugging, the performance and throughput did not degrade. Prior to attaching the Snapshot Debugger, the average response time for my server was between 2.0 and 2.4 seconds, and the throughput was between ~3,400 and ~4,000 requests per second. Attaching the Snapshot Debugger at the 2-minute mark caused no change or degradation to average response time, requests per second, or any other performance metric.

I was able to inspect the full state of my app at a snapshot, yet the performance of the app was unaffected while I was debugging. If I were to attach a live debugger and set a breakpoint, the requests per second would have dropped to zero, as the app would halt when the breakpoint was hit!

The Tech behind Snapshotting

The Snapshot Debugger achieves this minimal overhead by intelligently capturing state at the location where you’ve set a Snappoint. When you place a Snappoint in your app, the Snapshot Debugger forks your app’s process and suspends the forked copy, creating a snapshot. You then debug against this snapshot, which sits in memory on your server. The snapshot is not a copy of the full heap of the app – it’s only a copy of the page table, with pages set to copy-on-write. The Snapshot Debugger only makes copies of pages in your app if the page gets modified, minimizing the memory impact on your server. In total, your app will only slow down by 10-30 milliseconds when creating snapshots. As snapshots are held in memory on your server, they do consume on the order of hundreds of kilobytes while they are active, as well as an additional commit charge. The overhead of capturing snapshots is fixed and should therefore not affect the throughput of your app regardless of the scale of your app.
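The copy-on-write behavior described above can be illustrated with a small, purely conceptual simulation. The class and method names here are invented for illustration; this is not how the debugger is actually implemented:

```python
class CowSnapshot:
    """Conceptual model of a copy-on-write snapshot: the snapshot shares the
    live process's pages, and a page is only copied when the live process
    writes to it, so the snapshot's memory cost is proportional to the pages
    modified after the snapshot, not to the size of the heap."""

    def __init__(self, live_pages):
        self.live = live_pages   # the running app's pages (shared, mutable)
        self.copies = {}         # pages preserved for the snapshot on first write

    def write(self, page, value):
        """The live app writes a page; preserve the original for the snapshot."""
        if page not in self.copies:
            self.copies[page] = self.live[page]
        self.live[page] = value

    def read_snapshot(self, page):
        """Read a page as it was at snapshot time."""
        return self.copies.get(page, self.live[page])
```

Even if the app keeps writing to the same page, only one copy is made, which is why the overhead stays small while the app continues serving traffic.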

The Snapshot Debugger will only capture one snapshot per Snappoint placed in your code to further limit the performance impact. You can modify a Snappoint’s settings to add conditions to specify when the snapshot should be captured or increase the number of snapshots captured. Additionally, you can set several Snappoints in your app to capture Snapshots at different lines and switch between them. The Snapshot Debugger will ensure these snapshots come from the same end user session, even if there are thousands of requests hitting your app.
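As a conceptual sketch of that capture-limiting behavior (the class, its parameters, and the state-dictionary shape are invented; the real debugger evaluates conditions against your live program state):

```python
class Snappoint:
    """Conceptual model: a snappoint captures at most `max_snapshots`
    snapshots, and only when its optional condition evaluates to true."""

    def __init__(self, condition=None, max_snapshots=1):
        self.condition = condition or (lambda state: True)
        self.max_snapshots = max_snapshots
        self.captured = []       # snapshots taken so far

    def on_hit(self, state):
        """Called when execution reaches the snappoint's line; returns
        whether a snapshot was captured on this hit."""
        if len(self.captured) < self.max_snapshots and self.condition(state):
            self.captured.append(dict(state))  # cheap stand-in for a snapshot
            return True
        return False
```

A conditional snappoint lets you target one user's session without capturing state for the thousands of other requests hitting the same line.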

When you are finished using the Snapshot Debugger, you can hit the stop debugging button in Visual Studio. Hitting stop detaches the Snapshot Debugger and frees all existing snapshots from memory on your server.

Try out the Snapshot Debugger

The Snapshot Debugger is available in Visual Studio 2017 Enterprise version 15.5 and greater. Currently, the Snapshot Debugger supports ASP.NET and ASP.NET Core apps running in Azure App Services. The first time you use the Snapshot Debugger you will be required to restart your Azure App Service, but no redeployment is necessary.

Nikhil Joglekar, Program Manager, Visual Studio
@nikjogo

Nikhil is a program manager working on Azure diagnostics tooling. Since joining Microsoft two years ago, Nikhil has worked on the Snapshot Debugger, Visual Studio Profiler, and Azure SDK.

Performance best practices for using Azure Database for PostgreSQL


Microsoft announced the public preview of Azure Database for PostgreSQL and Azure Database for MySQL at Build 2017: simple, fully managed database services for PostgreSQL and MySQL that remove the complexities around infrastructure management, data availability, protection, and scale. The service has seen tremendous growth, and customers have been reaching out to us for best practices for achieving optimal query performance. This post outlines an approach for troubleshooting performance when using Azure Database for PostgreSQL as the backend database.

Based on usage, we see two common deployment patterns:

  • An application server exposing a web endpoint, which connects to the database.
  • A client-server architecture where the client connects directly to the database.

Azure Database for PostgreSQL

The performance issues for an application or service using the Azure Database for PostgreSQL service can be classified broadly into the following categories. Please refer to the numbered list below for more details.

1. Resource contention (CPU, Memory, and Disk) on the client – The machine/server serving as the client could be having a resource constraint which can be identified in the task manager, the Azure portal, or CLI if the client machine is running on Azure.


2. Resource contention (CPU, memory, and disk) on the application server – The machine/server acting as the application server could have a resource constraint, which can be identified in the task manager, the Azure portal, or the CLI. If the application server is an Azure service or virtual machine, Azure metrics can help determine the resource contention.

3. Resource contention on Azure Database for PostgreSQL – The database service could be experiencing performance bottlenecks related to CPU, memory, and storage which can be determined from the Azure Metrics for the database service instance. Please see below for more details. To learn more, read about monitoring Azure Database for PostgreSQL.


4. Network latency – One of the common issues we encounter while troubleshooting performance is the network latency between the client and the database service instance. A quick check before starting any performance benchmarking run is to determine the network latency between the client and database using a simple SELECT 1 query. We have seen customers report improved throughput when the SELECT 1 timing for a query is <2ms when using a remote client hosted on Azure in the same region and resource group as the Azure Database for PostgreSQL server.

Commands to get SELECT 1 timing using psql:

\timing
SELECT 1;
\watch 1

We have observed that customers are able to significantly increase the application throughput by creating the application server and database service in the same region, resource group, and using accelerated networking for the application server/client machine, where applicable. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath reducing latency, jitter, and CPU utilization for use with the most demanding network workloads on supported VM types. Get more information on the OS releases that support Accelerated Networking along the steps to create a virtual machine with accelerated networking.
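You can also quantify this check from application code. Below is a sketch of a latency probe; the function name is invented, and `execute` stands in for any callable that runs a query (for example, a DB-API cursor's `execute`):

```python
import time

def median_roundtrip_ms(execute, samples=20):
    """Run a trivial query several times and return the median round-trip
    time in milliseconds; per the guidance above, aim for under ~2 ms when
    the client is in the same region as the database server."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        execute("SELECT 1")
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return timings[len(timings) // 2]
```

Taking the median rather than the mean keeps a single slow outlier (for example, connection warm-up) from skewing the reading.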

Database performance

Once you have eliminated resource contention as a possible root cause, you will need to determine which queries on the database server are contributing the highest duration. This can be done using the pg_stat_statements module. Since we maintain parity with community PostgreSQL, any native queries that you have used to troubleshoot query performance on PostgreSQL will apply to our service as well.

You can execute the query below on an Azure Database for PostgreSQL server to get the top 5 queries by duration executed during your performance/benchmarking run:

SELECT query, calls, total_time, rows, 100.0 * shared_blks_hit/
nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;

It is recommended to reset the pg_stat_statements using the query below to ensure that you only capture the statements from your performance/benchmarking run:

SELECT pg_stat_statements_reset();

Quick tips

If CPU usage for an Azure Database for PostgreSQL server is saturated at 100%, then select the next higher level of Compute Units to get more CPU. For example, if the CPU usage is hovering around 100% continuously during business hours for a Standard 100, then it might be worthwhile to consider Standard 200.

A common issue that we notice is the use of the default included storage size for the database, which is 125GB. The default storage size of 125GB is limited to 375 IOPs. If your application requires higher IOPs, then it is recommended that you create an Azure Database for PostgreSQL server with a higher storage size to get more IOPs so that your application performance is not impacted by storage throttling.
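The 125 GB / 375 IOPS figures above imply roughly 3 IOPS per provisioned GB. As a back-of-the-envelope sketch for sizing (the linear 3 IOPS/GB ratio is an assumption inferred from those numbers, not an official formula; check current service limits before relying on it):

```python
IOPS_PER_GB = 3  # inferred from 125 GB -> 375 IOPS above; an assumption

def provisioned_iops(storage_gb: int) -> int:
    """Estimate the IOPS available for a given provisioned storage size."""
    return storage_gb * IOPS_PER_GB

def storage_for_iops(required_iops: int) -> int:
    """Minimum storage (GB) to provision for a target IOPS level."""
    return -(-required_iops // IOPS_PER_GB)  # ceiling division
```

For example, an application needing ~1,000 IOPS would want at least 334 GB provisioned under this assumption, even if it stores far less data.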

If IO waits are observed during PostgreSQL performance troubleshooting, then consider increasing the storage size for higher IO throughput. For example, if you observe WALWriteLock as the wait event type for the most requests in pg_stat_activity, then it would be beneficial to use a server with a higher storage size, as storage performance scales with the allocated storage size.

Query to determine the number of waits on WALWriteLock which signifies an IO bottleneck associated with Write Ahead Log writes:

select wait_event, wait_event_type, count(*) as counts
from pg_stat_activity
group by wait_event, wait_event_type;

We recommend having the application server/client machine in the same Azure region and resource group as the database to reduce network latency between the client/application server and the database.

If you are using pgbench for performance testing, use a scale factor at least as high as the number of connections to ensure that your benchmarking is not bottlenecked on update contention. For example, if you are running pgbench with 100 connections, use a scale factor of 100 or higher.

If there is resource contention associated with memory or CPU usage on the virtual machine acting as the application server, and all possible optimizations on the application have been implemented, it is recommended to increase the virtual machine size to gain more compute and memory.

If there are IO-related bottlenecks observed on the virtual machine acting as the application server, it is recommended to increase the size of the disk hosting the application files and possibly evaluate the use of Premium Managed Disks.

If you are still having a performance issue and need assistance, you have the following options:


#AzureSQLDW cost savings with Autoscaler – part 2


This blog post was co-authored by Eldad Hagashi, Software Engineering Manager, and Feng Tan, Software Engineer, Microsoft Education Data Service.

Azure SQL Data Warehouse is Microsoft’s SQL analytics platform, the backbone of your enterprise data warehouse (EDW). The service is designed to allow customers to elastically, and independently, scale compute and storage with massively parallel processing. SQL DW integrates seamlessly with big data stores and acts as a hub to your data marts and cubes for an optimized and tailored performance of your EDW. Azure SQL DW offers guaranteed 99.9% high availability, compliance, advanced security, and tight integration with upstream and downstream services so you can build a data warehouse that fits your needs. Azure SQL DW is the first and only service enabling enterprises to replicate their data everywhere with global availability in more than 30 regions.

Inside Microsoft, Azure services are used extensively across teams. Clever solutions combining multiple Azure products are the norm, such as the OneNote and Education team at Microsoft, which built an Autoscaler to meet its SQL Data Warehouse compute demand while minimizing cost.

Willing to share that work with the SQL DW community, the team provided us (and you!) their solution, with a deployable template you can use with just one click. For the Education Data Service team, their SQL Data Warehouse represents the single source of truth for all their data across Microsoft. Their instance drives shared outcomes across the Engineering, Marketing, Finance, and Sales teams. Their data warehouse solution consists of Data Lake Storage, Data Factory, SQL Data Warehouse, Analysis Services, and Power BI.

SQL Data Warehouse was a good fit for the team for its ability to provide interactive querying and transforming raw data into business value. To derive the best price and performance, the team needed to ensure that their Data Warehouse Units (DWU) value matched their actual demand at any given time. Lower DWU levels allowed for cost savings, but meant business questions would take longer to answer. Higher DWU levels meant higher performance, but their occasional requirements of 3,000 DWUs would mean a much higher monthly cost if left alone. The team needed an intelligent way to manage elasticity with more flexibility than a schedule-based solution. This is the reason they decided to introduce a scaling solution based on system demand that would be hands free, reduce costs, and be non-disruptive to their workload.

The EDU Data Service team’s Autoscaler comprises two primary components: a DWU-usage monitor-based scaler and a timer-based scaler. The monitor-based scaler uses a pre-defined DWU ladder, which it climbs up and down based on usage. Over the course of the day, as the scaler notices DWU usage above or below the usage thresholds, it moves the system up or down the DWU ladder. The timer-based scaler operates as a supplement, scaling the system back to a defined baseline before the business day’s operations if the DWU setting has dropped too low from inactivity.

Autoscaler uses an Azure Function with an HTTP webhook trigger, combined with Azure Monitor alert notifications. Azure Monitor observes DWU usage for the previous period, typically 30 minutes. When the DWU usage exceeds an upper threshold, such as 80% of the DWU limit, Azure Monitor sends an alert via email and triggers a webhook; this parameter can easily be changed. The webhook HTTP request triggers Autoscaler to send a REST API request to the data warehouse to scale up or down the DWU ladder, depending on which threshold monitor fired. Autoscaler then logs the request in an Azure Table for later analysis and quality control. At the beginning of each business day, the timer-based scaler scales the instance to the day’s baseline.
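The ladder-climbing decision can be sketched as follows. The DWU steps, thresholds, and function name here are illustrative placeholders, not the team's actual configuration:

```python
# Illustrative DWU ladder and thresholds; the real Autoscaler's values differ.
DWU_LADDER = [100, 200, 300, 400, 500, 600, 1000, 1200, 1500, 2000, 3000]
SCALE_UP_THRESHOLD = 0.80    # avg usage above 80% of current limit -> step up
SCALE_DOWN_THRESHOLD = 0.30  # avg usage below 30% of current limit -> step down

def next_dwu(current_dwu: int, avg_usage_ratio: float) -> int:
    """Return the DWU level to move to, given average usage for the window
    as a fraction of the current DWU limit."""
    i = DWU_LADDER.index(current_dwu)
    if avg_usage_ratio > SCALE_UP_THRESHOLD and i < len(DWU_LADDER) - 1:
        return DWU_LADDER[i + 1]
    if avg_usage_ratio < SCALE_DOWN_THRESHOLD and i > 0:
        return DWU_LADDER[i - 1]
    return current_dwu
```

Stepping one rung at a time, rather than jumping straight to a computed level, keeps the scaler from oscillating wildly on a single noisy measurement window.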

With the Autoscaler in place, their DWU levels closely match their demand over the course of the day. Over the course of a month, the team saved approximately 55%!


To learn more about Autoscaler, check out our GitHub or deploy it now to your own instance! Before deploying, bear in mind that Autoscaler works specifically with Optimized for Elasticity, in architectures that decouple loading, transformation, and querying into discrete jobs. For the EDU team, Azure Data Factory’s retry logic allows loading and transformation jobs to tolerate disruption by the Autoscaler. For querying, Azure Analysis Services as a serving layer delivers performance to the team without disruption.



If you need our help for a POC, contact us directly. Stay up-to-date on the latest Azure SQL DW news and features by following us on Twitter @AzureSQLDW.


cloud-init for RHEL 7.4 and CentOS 7.4 preview


cloud-init is an increasingly popular way to configure Linux VMs. Today we are pleased to announce a preview of provisioning RHEL 7.4 and CentOS 7.4 using cloud-init, which will allow you to migrate existing cloud-init configurations to Azure from other environments. cloud-init allows for VM customization during provisioning, adding to the existing Azure parameters used to create a VM. You can also use cloud-init to configure the VM further, such as adding users, changing disk configuration, running scripts, and installing packages, by using the custom cloud-init configuration modules.

Availability

The Azure Gallery, which contains Linux images for deployment in Azure, currently offers RHEL 7.4 and CentOS 7.4 images that use the Linux agent to customize the image during initial provisioning. The gallery now also contains new cloud-init-provisioned RHEL 7.4 and CentOS 7.4 images that are ready for you to deploy with or without additional cloud-init configurations. These are available in preview in Azure public clouds.

In the detailed documentation, we show you how to test customizing your own images of these OSes using cloud-init, too!

Deploying a VM using cloud-init enabled image

This is just as simple as referencing a different image name when creating a VM. You can pass additional cloud-init configuration using ‘custom-data’:

CloudInitGif
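The ‘custom-data’ payload is a standard cloud-config document. A minimal example might look like the following; the package and user shown are placeholders for illustration, not part of the Azure images:

```yaml
#cloud-config
package_upgrade: true
packages:
  - httpd
users:
  - name: demouser        # placeholder user name
    groups: wheel
    shell: /bin/bash
```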

FAQs

    Do I still need to have the Azure Linux Agent installed on my image?

    Yes, Azure Linux Agent is still required.

    Can I run a cloud-init configuration and install Azure VM extensions too?

    Yes, extensions are supported.

    Is it just CentOS 7.4 and RHEL 7.4 that support cloud-init on Azure?

    No, cloud-init is already available for Canonical Ubuntu images, and CoreOS images support Ignition.


    Where can I share feedback / comments?

    We love feedback! Please submit your comments to Azure feedback page.

    Windows Template Studio 1.5 released


    We’re extremely excited to announce the Windows Template Studio 1.5.
    In this release, we finalized our work for localization, added in some new features, and started work on a bunch of new features and pages.

    What’s new:

    For the full list of adjustments in the 1.5 release, head over to WTS’s GitHub.


    New Features:

    • Share source, share target
    • Multi-view
    • Feedback hub (added in v1.4)

    Template improvements:

    • Minor tweaks for Fluent
    • Caliburn.Micro Support (added in v1.4)

    Improvements to the Wizard:

    • Localization in all Visual Studio supported languages
    • Adjusted the feature categories
    • Lots of under the hood bug fixes and code improvements
    • Much more Visual Basic engine work
    • Work for supporting multiple projects in a single solution
    • Work to support Prism
    • Bug fixes

    How to get the update:

    There are two paths to update to the newest build.
    Already installed: Visual Studio should auto-update the extension. To force an update, go to Tools -> Extensions and Updates, open the Updates expander on the left, find Windows Template Studio, and click “Update”.
    Not installed: Head to https://aka.ms/wtsinstall, click “download” and double click the VSIX installer.

    What else is cooking for next versions?

    We love all the community support and participation. In addition, here are just a few of the things we are currently building out that will land in future builds:
    • Image Gallery feature (In nightly)
    • Web to App Link feature (In nightly)
    • Visual Basic support (In nightly)
    • Drag and drop service (in nightly)
    • Prism support (Soon in nightly)
    • Improved update system to help increase speed of startup and file size download
    • Improved user interface in-line with Visual Studio
    • Continued refinement with Fluent design in the templates
    • Ink templates
    • Improved Right-click->add support for existing projects

    In partnership with the community, we will continue cranking out and iterating on new features and functionality. We’re always looking for additional people to help out and if you’re interested, please head to our GitHub at https://aka.ms/wts. If you have an idea or feature request, please make the request!

    The post Windows Template Studio 1.5 released appeared first on Building Apps for Windows.

    The British Ecological Society’s Guide to Reproducible Science


    The British Ecological Society has published a new volume in their Guides to Better Science series: A Guide to Reproducible Code in Ecology and Evolution (pdf). The introduction describes its scope:

    A Guide to Reproducible Code covers all the basic tools and information you will need to start making your code more reproducible. We focus on R and Python, but many of the tips apply to any programming language. Anna Krystalli introduces some ways to organise files on your computer and to document your workflows. Laura Graham writes about how to make your code more reproducible and readable. François Michonneau explains how to write reproducible reports. Tamora James breaks down the basics of version control. Finally, Mike Croucher describes how to archive your code. We have also included a selection of helpful tips from other scientists.

    The guide proposes a simple reproducible project workflow, and a guide to organizing projects for reproducibility. The Programming section provides concrete tips and traps to avoid (example: use relative, not absolute pathnames), and the Reproducible Reports section provides a step-by-step guide for generating reports with R Markdown.

    Rmarkdown
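As a concrete instance of the “relative, not absolute pathnames” tip, a script can resolve data files against the project root instead of hard-coding a machine-specific location. This Python sketch is illustrative; the `data/survey.csv` layout is invented:

```python
from pathlib import Path

# Portable: resolve files relative to the project root (here, the
# current working directory), so the project runs on any machine.
project_root = Path.cwd()
data_file = project_root / "data" / "survey.csv"

# Not portable: an absolute path tied to one machine, e.g.
#   Path("C:/Users/me/Desktop/survey.csv")
```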

    While written for an ecology audience (and also including some gorgeous photography of animals), this guide would be useful for anyone in the sciences looking to implement a reproducible workflow. You can download the guide at the link below.

    British Ecological Society: A Guide to Reproducible Code in Ecology and Evolution (via Laura Graham)

    Java: Manage Azure Container Service (AKS) and more


    We have released version 1.4 of the Azure Management Libraries for Java. This release adds support for Azure Container Service (AKS) and more.

    https://github.com/azure/azure-libraries-for-java

    Getting started

    Add the following dependency fragment to your Maven POM file to use version 1.4 of the libraries:

    <dependency>
        <groupId>com.microsoft.azure</groupId>
        <artifactId>azure</artifactId>
        <version>1.4.0</version>
    </dependency>
    

    Create Kubernetes Cluster in Azure Container Service (AKS)

    You can create a Kubernetes cluster by using a define() … create() method chain.

    KubernetesCluster kubernetesCluster = azure.kubernetesClusters().define(aksName)
        .withRegion(region)
        .withNewResourceGroup(rgName)
        .withLatestVersion()
        .withRootUsername(rootUserName)
        .withSshKey(sshKeys.getSshPublicKey())
        .withServicePrincipalClientId(servicePrincipalClientId)
        .withServicePrincipalSecret(servicePrincipalSecret)
        .defineAgentPool("agentpool")
            .withVirtualMachineCount(1)
            .withVirtualMachineSize(ContainerServiceVMSizeTypes.STANDARD_D1_V2)
            .attach()
        .withDnsPrefix("dns-" + aksName)
        .create();
    

    You can instantiate a Kubernetes client using a community developed Kubernetes client library.

    KubernetesClient kubernetesClient = new DefaultKubernetesClient(config);

    Deploy from Container Registry to Kubernetes Cluster

    You can deploy an image from Azure Container Registry to a Kubernetes cluster using the same community developed Kubernetes client library and an image pull secret associated with the Container Registry.

    ReplicationController rc = new ReplicationControllerBuilder()
        .withNewMetadata()
            .withName("acrsample-rc")
            .withNamespace(aksNamespace)
            .addToLabels("acrsample-nginx", "nginx")
            .endMetadata()
        .withNewSpec()
            .withReplicas(2)
            .withNewTemplate()
                 .withNewMetadata()
                      .addToLabels("acrsample-nginx", "nginx")
                       .endMetadata()
                 .withNewSpec()
                      .addNewImagePullSecret(aksSecretName)
                      .addNewContainer()
                            .withName("acrsample-pod-nginx")
                          .withImage("acrdemo.azurecr.io/samples/acrsample-nginx")
                          .addNewPort()
                              .withContainerPort(80)
                              .endPort()
                          .endContainer()
                      .endSpec()
                 .endTemplate()
             .endSpec()
        .build();
    
    kubernetesClient.replicationControllers().inNamespace(aksNamespace).create(rc);

    You can find the full sample code to deploy an image from container registry to Kubernetes cluster in Azure Container Service (AKS).

    Apply Lock to Virtual Machine

    You can create and apply a lock to a virtual machine by using a define() … create() method chain.

    lockVirtualMachineRO = azure.managementLocks().define("virtualMachineLockRO")
        .withLockedResource(vm)
        .withLevel(LockLevel.READ_ONLY)
        .create();
    

    You can find the full sample code to manage resource locks.

    Create Express Route Circuit Peering

    You can create an Express Route Circuit by using a define() … create() method chain.

    ExpressRouteCircuit erc = azure.expressRouteCircuits().define(ercName)
        .withRegion(Region.US_NORTH_CENTRAL)
        .withNewResourceGroup(rgName)
        .withServiceProvider("Equinix")
        .withPeeringLocation("Silicon Valley")
        .withBandwidthInMbps(50)
        .withSku(ExpressRouteCircuitSkuType.PREMIUM_METEREDDATA)
        .create();
    

    Then, you can create an Express Route Circuit Peering by using another define() … create() method chain.

    erc.peerings().defineAzurePrivatePeering()
        .withPrimaryPeerAddressPrefix("123.0.0.0/30")
        .withSecondaryPeerAddressPrefix("123.0.0.4/30")
        .withVlanId(200)
        .withPeerAsn(100)
        .create();
    

    You can find the full sample code to create and configure Express Route Circuits.

    Try it

    You can get more samples from our GitHub repo. Give it a try and let us know what you think (via e-mail or comments below).
     
    You can find plenty of additional info about Java on Azure at https://docs.microsoft.com/en-us/java/azure/.

    Partners enhance Kubernetes support for Azure and Windows Server Containers


    Yesterday, Microsoft made some exciting announcements about new serverless and DevOps capabilities we’ll be contributing to the community. We announced Virtual Kubelet, the next version of the Kubernetes connector for Azure Container Instances; Open Service Broker for Azure, which makes it easier for you to connect your containers to Azure services; and Kashti, a dashboard and visualization tool for Brigade pipelines.

    Today, we’re shifting gears to focus on the Kubernetes ecosystem. What makes the Kubernetes community so great is the diversity of members, who bring so many different technologies to the table. As Microsoft works to make Azure the best place to run Kubernetes, and we continue to bring the Windows Server Containers ecosystem closer to the Kubernetes community, we’ll continue to forge new and enhance existing partnerships to make sure the tools you need are available and working great across our platforms. Today, I’m proud to share two new collaborations with Heptio and Tigera, as well as some progress we’ve made working with SIG Windows.

    Heptio to bring Kubernetes Disaster Recovery and Migration Solutions to Azure

    Today, my former colleague and fellow co-founder of Kubernetes, Craig McLuckie, CEO and co-founder of Heptio, announced that Heptio and Microsoft will collaborate to bring Heptio Ark to Azure. Ark is a utility for managing disaster recovery and facilitating migration for Kubernetes across on-premises and public cloud environments. Heptio Ark provides a simple, configurable and operationally robust way to back up and restore applications and persistent volumes from a series of checkpoints. The two companies are working together to bring these benefits to Azure customers, to help simplify Kubernetes usage on Microsoft’s cloud platform, whether they are all in on public cloud or working across on-premises and Azure. Check out Craig’s blog for more details.

    Tigera and Microsoft partnering to simplify network security for Windows Server Containers

    I’m also excited to share that we’re partnering with Tigera and continuing our work with SIG Windows to bring Windows Server Containers closer to the Kubernetes community. With Windows Server version 1709, Windows now has parity with Linux for Kubernetes networking from a platform perspective.  To showcase these new networking features, we’re working with Tigera to contribute to Project Calico, a community based, free and open source solution designed to simplify, scale and secure container networks and applications running in them. Be sure to check the blog on securing modern applications with Calico and Windows.

    Ecosystem delivering Windows and Kubernetes solutions for the enterprise

    I'm thrilled to report that Microsoft, with help from SIG-Windows participants like Cloudbase, Apprenda, and Red Hat, is shipping beta support for Windows Server Containers in Kubernetes 1.9. This is a major milestone that helps expand Kubernetes into the huge number of enterprises who have made significant investments in .NET and Windows based applications. Additionally, kubeadm now works on Windows, which means customers can use this popular community tool to bootstrap Kubernetes nodes running Windows Server. I’m also pleased to share that terrific progress is being made on pod autoscaling via Heapster, e2e testing for Windows components, and Windows documentation updates for Kubernetes 1.9. You can find more details in Taylor Brown’s blog on the Kubernetes blog.

    Strength through diversity

    Diversity is not just about working with the myriad of companies in the Kubernetes ecosystem. Diversity is also about all the different people who bring different opinions and ideas to our community. I’m proud to work at a company like Microsoft where diversity is championed and supported, and Microsoft is thrilled to help bring more diversity into the Kubernetes community. It has been a highlight of KubeCon so far to see Microsoft, Google, and others come together to provide a diversity scholarship designed to promote KubeCon attendance from under-represented communities. KubeCon and the Kubernetes community derives its strength through diversity. I’d like to say “thank you” to Google, the CNCF, and every single one of you who come together each day to empower the Kubernetes community.

    Share your feedback

    Microsoft will continue our commitment to the open source and the Kubernetes experience on Azure and beyond. Your feedback remains invaluable to us, so please let us know what you think about these new partnerships. Keep sharing what’s working, and what you’d like to see going forward whether here at the show, through project contributions, or in the comments. I also hope you’ll drop by the Azure booth and our sessions throughout the week. I’m excited to meet you all!

    Azure Application Architecture Guide


    We've talked to many customers since Azure was released nearly eight years ago. Back then, Azure had only a few services. Now it's grown tremendously and keeps expanding. Cloud computing itself also has evolved to embrace customer demands. For example, most consumer-facing apps require a much faster velocity of updates than before, to differentiate them from competitors. That’s part of the reason why new architecture styles such as microservices are gaining traction today. Container-based and serverless workloads are becoming the de facto standard. We see all of these new services and industry trends as a great opportunity, but at the same time, they can be a source of confusion for customers. Customers have a lot of questions, such as:

    • Which architecture should I choose? Microservices? N-Tier? How do we decide?
    • There are many storage choices, which one is the best for me?
    • When should I use a serverless architecture? What’s the benefit? Are there any limitations?
    • How can I improve scalability as well as resiliency?
    • What’s DevOps culture? How can I introduce it to my organization?

    To help answer these questions, the AzureCAT patterns & practices team published the Azure Application Architecture Guide. This guide is intended to provide a starting point for architects and application developers who are designing applications for the cloud. It guides the reader to choose an architectural style, then select appropriate technologies and apply relevant design patterns and proven practices. It also ties together much of the existing content on the site. The following diagram shows the steps in the guide along with the related topics.

    Architecture guide steps

     

    Architecture styles. The first decision point is the most fundamental. What kind of architecture are you building? It might be a microservices architecture, a more traditional N-tier application, or a big data solution. We have identified seven distinct architecture styles. There are benefits and challenges to each.

    Technology Choices. Two technology choices should be decided early on, because they affect the entire architecture. These are the choice of compute and storage technologies. The term compute refers to the hosting model for the computing resources that your applications runs on. Storage includes databases but also storage for message queues, caches, IoT data, unstructured log data, and anything else that an application might persist to storage.

    Design Principles. Throughout the design process, keep these ten high-level design principles in mind.

    Pillars. A successful cloud application will focus on these five pillars of software quality: Scalability, availability, resiliency, management, and security.

    Cloud Design Patterns. These design patterns are useful for building reliable, scalable, and secure applications on Azure. Each pattern describes a problem, a pattern that addresses the problem, and an example based on Azure.

    This guide is also available for download as an ebook.

    We hope you will find the Azure Application Architecture Guide useful. Lastly, we value your feedback and suggestions. If you see anything missing in the content, have suggestions for improvements, or want to share information that has worked well for your customers and could be elevated to a broader audience, please contact us at arch-center-feedback@microsoft.com.

    What’s brewing in Visual Studio Team Services: December 2017 Digest


    This month I have a lot to cover. Since my last post, we’ve shipped three sprints of features, and we had a very successful Connect(); event in November. Let’s dive right in.

    Connect();

    The Connect(); conference highlights developer tools like Visual Studio Team Services (VSTS) and the rest of the Visual Studio family of products. It’s an event full of news and training about software development and DevOps, and if you missed the live stream, the DevOps and Visual Studio Team Services (VSTS) sessions are all available to watch on-demand.

    Brian Harry’s general session discussed the state of Azure DevOps and included some exciting announcements including new easy-to-setup Azure DevOps Projects, TFS 2018 availability, hosted Mac build agents (in addition to Linux and Windows), a partnership with GitHub to expand the scope of GVFS beyond just VSTS, and many more. You can read Brian’s blog post on Connect(); announcements for a summary and watch the video of his presentation.

    DevOps at Microsoft

    Recently we did an extensive series of presentations on how we have adopted DevOps and operate VSTS at scale. Topics include planning, architecture, testing, live site, and more, covered in depth in DevOps at Microsoft. Also, I recently gave a keynote at the VS Live! conference on Lessons Learned Doing DevOps at Microsoft at Scale. In an hour, I cover key lessons we’ve learned: using feature flags to control exposure, improving resiliency using circuit breakers, managing live site and pursuing root cause, and transforming our tests to be fast and reliable. And if you still want more, you can find our DevOps presentations from Ignite 2017 here.

    Azure DevOps Project

    The new Azure DevOps Project makes it easy to get started on Azure. It helps you launch an app on the Azure service of your choice in a few quick steps. DevOps Project sets you up with everything you need for developing, deploying and monitoring your app.

    Creating a DevOps Project provisions Azure resources and comes with a Git code repository, Application Insights integration, and a continuous delivery pipeline set up to deploy to Azure. The DevOps Project dashboard lets you monitor code commits, builds, and deployments from a single view in the Azure portal.

    Key benefits of a DevOps Project:

    • Get up and running with a new app and a full DevOps pipeline in just a few minutes
    • Support for a wide range of popular frameworks such as .NET, Java, PHP, Node, and Python
    • Start fresh or bring your own application from GitHub
    • Built-in Application Insights integration for instant analytics and actionable insights
    • Cloud-powered CI/CD using Visual Studio Team Services (VSTS)

    DevOps Projects are powered by VSTS and give you a head start in developing and deploying your applications. See the documentation for deploying to Azure for more information.

    Azure DevOps Project

    Configuration as code (YAML) builds in Public Preview

    When you define a CI build on VSTS, you’ve now got a fundamental choice: use a web-based interface or configure your CI process as code in a YAML build. YAML build definitions give you the advantages of configuration as code.

    Why should you care? Have you ever run into build breaks or unexpected outcomes caused, not by changes to your app, but by changes in your build process?

    A YAML build definition follows the same branching structure as your code. So you get validation of your changes through code reviews in pull requests and branch build policies. This way you can much more easily identify and fix (or avoid) this kind of problem because the change is in version control with the rest of your code base.

    See Chris Patterson’s blog post for his perspective on YAML builds, including how we went about making decisions on how this feature works.

    NOTE: To use this capability, you must have the Build Yaml definitions preview feature enabled on both your profile and account.

    You can try it right now. Just add a new file called .vsts-ci.yml to the root of your Git repo in VSTS. Then put this in the file:

    queue: Hosted VS2017
    steps:
    - script: echo hello world

    After you commit the changes, a build definition is automatically created and queued! Ready to go beyond “hello world”?

    Hosted Mac agents for CI/CD pipelines in Public Preview

    VSTS now has cloud-hosted CI/CD agents running on macOS. This allows building and releasing Apple apps in the cloud (including iOS, macOS, tvOS, and watchOS), eliminating the need for providing and maintaining your own dedicated Mac hardware. VSTS now offers hosted CI/CD agents running on three operating systems – Linux, macOS, and Windows. For more information, see Hosted agents.

    To use the hosted macOS agents, select Hosted macOS Preview for your build or release pipeline:

    Hosted Mac

    Agentless build tasks

    Your build process is defined by the tasks it performs. Until now, all these tasks were running on an agent, either a hosted agent we provide or on your own private agent. There are some common tasks where an agent is not needed. For example, when you want to call a REST API, or to have the build pause for a period of time.

    We’ve added some agentless build tasks to the catalog:

    You can add an agentless phase to your build definition and then add one of these tasks to run it on VSTS.

    You can also extend and add your own agentless tasks, but there are some restrictions:

    • Agentless tasks cannot run scripts.
    • You must select one of the pre-defined execution handlers: HttpRequest handler to call an HTTP endpoint, or ServiceBus handler to post a message on the Azure service bus.

    For examples on how to create such tasks, see the InvokeRestAPI and PublishToAzureServiceBus tasks.

    Release gates in Public Preview

    Continuous monitoring is an integral part of DevOps pipelines. Ensuring the app in a release is healthy after deployment is as critical as the success of the deployment process. Enterprises adopt various tools for automatic detection of app health in production and for keeping track of customer reported incidents. Until now, approvers had to manually monitor the health of the apps from all the systems before promoting the release. However, Release Management now supports integrating continuous monitoring into release pipelines. Use this to ensure the system repeatedly queries all the health signals for the app until all of them are successful at the same time, before continuing the release.

    NOTE: To use this capability, you must have the Approval gates in releases preview feature enabled on your profile.

    You start by defining pre-deployment or post-deployment gates in the release definition. Each gate can monitor one or more health signals corresponding to a monitoring system of the app. Built-in gates are available for “Azure monitor (application insight) alerts” and “Work items”. You can integrate with other systems using the flexibility offered through Azure functions.

    Gated releases

    At the time of execution, the Release starts to sample all the gates and collect health signals from each of them. It repeats the sampling at each interval until signals collected from all the gates in the same interval are successful.

    Sampling interval

    Initial samples from the monitoring systems may not be accurate, as not enough information may be available for the new deployment. The “Delay before evaluation” option ensures the Release does not progress during this period, even if all samples are successful.

    No agents or pipelines are consumed during sampling of gates. See the documentation for release gates for more information.
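The sampling behavior described above can be sketched as a simple loop. This Python sketch is illustrative only; real gates are evaluated server-side by Release Management against monitoring systems such as Application Insights:

```python
# Hypothetical sketch of release-gate sampling: the release proceeds only
# when every gate reports healthy within the same sampling interval.

def evaluate_gates(samples, delay_intervals=1):
    """samples: one dict of {gate_name: healthy?} per sampling interval.
    Returns the index of the first interval (after the initial
    "delay before evaluation" period) where all gates passed together,
    or None if the samples ran out (i.e., the gates timed out)."""
    for i, interval in enumerate(samples):
        if i < delay_intervals:      # skip the delay-before-evaluation period
            continue
        if all(interval.values()):   # all signals healthy at the same time
            return i
    return None
```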

    Docker Hub or Azure Container Registry as an artifact source

    This feature enables automatic creation of releases for updates to apps in the images stored in a Docker Hub registry or an Azure Container Registry (ACR). This is a first step towards supporting scenarios such as rolling out new changes region-by-region by using the geo-replication feature of ACR or deploying to an environment (such as production) from a container registry that has images for only the production environment.

    You can now configure Docker Hub or ACR as a first-class artifact in the + Add artifact experience of a release definition.

    Dockerhub artifact source

     

    Use VSTS as a symbol server

    VSTS Symbol Server enables you to host and share symbols with your organization. Symbols provide additional information that makes it easier to debug executables, especially those written in native languages like C and C++. See the documentation for publishing symbols for debugging for more information.

    NOTE: To use this capability, you must have the Symbol server preview feature enabled on your account.

    Symbol server task

    Save packages from NuGet.org in your feed

    NuGet.org as an upstream source is now available, which enables you to use packages from NuGet.org through your VSTS feed. Check out the announcement blog post to learn more.

    NuGet upstream source

    Filtering on Plans

    The Delivery Plans extension now makes use of our common filtering component, and is consistent with our grid filtering experiences for work items and Boards. This filtering control brings improved usability and a consistent interface to all members of your team.

    Filtering on Plans

    Improved Azure Active Directory integration for pull requests

    Adding Azure AD groups as reviewers for your pull requests just got a lot easier. Previously, before any AAD group could be added as a reviewer, that group needed to be granted explicit access to VSTS.

    Now, AAD groups can be added as reviewers to PRs and both email notifications and voting rollups will work as expected -- without any additional configuration.

    Path filters for pull request policies

    Many times, a single repository will contain code that’s built by multiple continuous integration (CI) pipelines to validate the build and run tests. The integrated build policy now supports a path filtering option that makes it easy to configure multiple PR builds that can be required and automatically triggered for each PR. Just specify a path for each build to require, and set the trigger and requirement options as desired.

    Path filters for PR policies

    In addition to build, status policies also have the path filtering option available. This will allow any custom or 3rd party policies to configure policy enforcement for specific paths.

    Mention a pull request

    You can now mention pull requests in PR comments and work item discussions. The experience for mentioning a PR is similar to that of a work item, but uses an exclamation point ! instead of a hash mark #.

    Whenever you want to mention a PR, enter a !, and you’ll see an interactive experience for picking a PR from your list of recent PRs. Enter keywords to filter the list of suggestions, or enter the ID of the PR you want to mention. Once a PR is mentioned, it will be rendered inline with the ID and the full title, plus it will link to the PR details page.

    Mention a pull request

    TFS Database Import Service now Generally Available

    We’re announcing the general availability of the TFS Database Import Service. The Import Service enables customers to migrate from their on-premises Team Foundation Server (TFS) and into our cloud hosted SaaS service Visual Studio Team Services (VSTS).

    Customers now no longer require approval from Microsoft to onboard and begin their migrations. Find out more information and get started here.

    VSTS CLI in Public Preview

    VSTS CLI is a new command line interface for working with and managing your VSTS and TFS projects from Windows, Linux, and Mac. This new open source CLI lets you work with pull requests, work items, builds, and more from the comfort of a command prompt or terminal. You can also use the new CLI to automate interactions with VSTS or TFS using scripts written in Bash, PowerShell, or your favorite scripting language.

    Here are just some of the things you can do with VSTS CLI:

    • Queue a build
    • Show the details of a build
    • Create a pull request
    • Add a reviewer to a pull request
    • Create a new project or Git repo
    • Update a work item

    To learn more, see the VSTS CLI docs. To view the source, visit the vsts-cli repo.

    Search for code in multiple branches

    In Code Search, we have enabled support for indexing multiple branches, so you can search branches other than the default branch. You can now have five additional branches per repository indexed for searching. Your project admin can configure the additional branches from the Version Control settings page:

    New multi-branch configuration experience

    Wiki Search

    Over time as teams document more content in wiki pages across multiple projects in VSTS, finding relevant content becomes increasingly difficult. To maximize collaboration, you need the ability to easily discover content across all your projects. Now you can use Wiki Search to quickly find relevant wiki pages by title or page content across all projects in your VSTS account.

    NOTE: To use this capability, you must have the New experience in Code & Work Item search and new Wiki search preview feature enabled on your profile.

    Wiki Search

    Extension of the Month: SenseAdapt by RippleRock

    This extension provides a toolset for agile teams to quickly and seamlessly create actionable insight charts without any additional configuration of their account. Some of the top scenarios supported by SenseAdapt are:

    • Forecast likely project completion dates and engage stakeholders with simpler, actionable visualizations
    • Give teams the situational awareness to do their job. Help them focus and stimulate improvement with simple visualizations of their work and the system around them
    • Enable governance to base decisions on objective insight, whilst seeing options to influence outcomes
    • Deliver more value by embedding the Agile principles of transparency, visualization, and frequent data-based feedback loops
    • Clear, simple oversight with most of the SAFe project metrics

    Your team will be able to leverage 12 visualizations that take minutes to create, require no additional configuration, and can be tailored to your team’s different roles and objectives. You can install it here.

    Creating Work Item Extensions

    Speaking of extensions, if you’ve ever wanted to write your own extension for work items, you’ll enjoy this comprehensive blog post on creating work item form extensions. It walks you through the process in detail and helps you understand exactly what you’ll need to do to build your extension.

    VSTS in Hong Kong

    In 2014, we set a goal to make Visual Studio Team Services (VSTS) a global service. This is driven by our commitment to provide our customers around the world great performance and compliance with local data sovereignty requirements. Between 2014 and 2016 we announced VSTS instances in Europe, Australia, India, and Brazil. Two months ago we announced a new VSTS instance in Canada. Along the way we have also stood up four additional instances in the United States and another additional instance in Europe to handle the large number of accounts created in those geographies.

    Today we are excited to announce the availability of our latest VSTS instance in Hong Kong (Azure’s East Asia region).

    When you create a new account we default the region to the one closest to you. Customers near Hong Kong will now notice that East Asia is the default selection. As always, you can override the selection by choosing another region from the list – East Asia is open to everyone. If you have an existing Visual Studio Team Services account and would like to move it to the new East Asia region, you can do that by contacting support.

    If a region important to you does not yet have a VSTS instance, let us know about it on UserVoice. And keep an eye out here for additional announcements as we continue to increase the global presence of VSTS.

    Wrapping Up

    As always, there’s a lot more than I can cover here. I’d encourage you to read over the full release notes for the October 6th, October 30th, and November 28th sprint deployments. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.

    Happy coding!

    @tfsbuck


    Free eBook – The Developer’s Guide to Microsoft Azure now available


    Today, we are pleased to introduce a free eBook, The Developer’s Guide to Microsoft Azure, Second Edition. The book was written by Michael Crump and Barry Luijbregts to help you on your journey to the cloud, whether you’re just considering making the move, or you’ve already decided and are underway. This eBook was written by developers for developers. It is specifically meant to give you the fundamental knowledge of what Azure is all about, what it offers you and your organization, and how to take advantage of it all.

    The Developer's Guide to Microsoft Azure

    FREE eBook available for download now!

    The eBook covers the following topics:

    • Chapter 1: The Developer’s Guide to Microsoft Azure
    • Chapter 2: Getting started with Microsoft Azure
    • Chapter 3: Adding intelligence to your application
    • Chapter 4: Securing your application
    • Chapter 5: Where and how to deploy your Microsoft Azure services
    • Chapter 6: A walk-through of Microsoft Azure
    • Chapter 7: Using the Microsoft Azure Marketplace

    Barry and I have also taken into consideration topics asked by the community. We walk you through scenarios such as a tour of the Azure Portal and creating a virtual machine. We also discuss developing and deploying a web application that uses Node.js and MongoDB. We cover typical tasks such as CI/CD (Continuous Integration and Continuous Deployment), staging environments, scaling, logging, and monitoring. We wrap up by creating a backend for your mobile application that includes authentication and offline synchronization.

    It is also worth noting that I have a downloadable PDF of the Cloud Service Map for AWS and Azure that allows you to quickly compare the cloud capabilities of Azure and AWS services in all categories.

    Thanks for reading and keep in mind that you can learn more about Azure by following our blog or on Twitter @Azure. You can also reach the author of this post on Twitter @mbcrump.

    Post-Connect(); 2017 Visual Studio Partner Webinar Series


    [Hello, we are looking to improve your experience on the Visual Studio Blog. It will be very helpful if you could share your feedback via this short survey that should take less than 2 minutes to fill out. Thanks!]

    Earlier this week, we released 13 Visual Studio partner webinars that build off of some of the major announcement areas of Connect(); 2017 and provide applications of developer tools that, literally, ‘connect’ with the latest and greatest from Azure, SQL Server 2017, and Visual Studio.

    VS Partner Logos

    Developer Tools for Application Innovation

    Scott Guthrie introduced advancements in developer productivity in the cloud, including the GA of Visual Studio App Center, Visual Studio Live Share, and Visual Studio Tools for AI. Telerik by Progress kicks off this series by drilling down into the cross-platform mobile development updates from Connect(); and introducing their app built with Xamarin and Azure’s Computer Vision API. New to our mobile ecosystem, CloudRail walks through the more than 50 APIs they have made available to developers using Xamarin, including Universal APIs targeted at enhancing developer productivity. Aqua discusses how early development choices can help developers eliminate security risks, configuration mistakes, and vulnerabilities in Docker images, and Developer Express walks through object-relational mapping for .NET Core 2.0.

    Enterprise-Class DevOps

    Brian Harry announced innovations across the development pipeline, including the RTM of Team Foundation Server 2018, a new ‘getting started’ experience with Azure DevOps Projects, and Visual Studio partner GitHub’s announcement of support for GVFS. In this webinar series, Chef details how to use the configuration management features of the Chef Integration extension for Visual Studio Team Services to develop and test a cookbook. 7pace walks through the value of precise time management in conjunction with Team Foundation Server 2018 to optimize a team’s agile development, and Mobilize.Net illustrates easy migration of legacy client/server apps, using AI and machine learning as an on-ramp to DevOps on Azure.

    Tools for Open Source Development

    You also saw some killer demos on what the open source community can do with Visual Studio Code (enabling Java debugging and Azure Functions working with Visual Studio Code for Python developers). WhiteSource‘s webinar discusses the difference between detecting and fixing proprietary and open source vulnerabilities and how to automate open source security practices. And through its content delivery network and open edge computing, nuu:bit talks about their integration with Azure CosmosDB.

    Developing for the Database

    We also demoed how easy it is to create apps for SQL, Windows, .NET and open source frameworks like MySQL, PostgreSQL, Linux, and Java/Node.js using the Microsoft Data Platform. CData‘s webinar drills down into how users can now use powerful analytics applications like Microsoft PowerBI to report directly on live in-memory Redis data structures with their ODBC Drivers. Redgate Software shows how to build database DevOps into your continuous integration and automated release management pipelines. And finally, Syncfusion shows you how to leverage their UI dashboards with Big Data Platforms on Azure.

    This is just the beginning. Grab a few hours and a friend or two and dive into the rich content our partners have made available on Channel 9.

    Laura Quest, Marketing Manager, Visual Studio Partner Program
    @LadyQuestaway

    Laura Quest works across the Visual Studio family of products with partners who build extensions published in the Visual Studio Marketplace and components published in NuGet. She was part of the Xamarin Business Development team and now works within the Cloud Application Development, Data, and AI organization.

    Testing ASP.NET Core MVC web apps in-memory


    This post was written and submitted by Javier Calvarro Nelson, a developer on the ASP.NET Core MVC team.

    Testing is an important part of the development process of any app. In this blog post we’re going to explore how to test an ASP.NET Core MVC app using an in-memory server. This approach has several advantages:

    • It’s very fast because it does not start a real server
    • It’s reliable because there is no need to reserve ports or clean up resources after it runs
    • It’s easier than other ways of testing your application, such as using an external test driver
    • It allows testing of traits in your application that are hard to unit test, like ensuring your authorization rules are correct

    The main shortcoming of this approach is that it’s not well suited to test applications that heavily rely on JavaScript. That said, if you’re writing a traditional web app or an API then all the benefits mentioned above apply.

    To test the MVC app, we’re going to use TestServer, an in-memory server implementation for ASP.NET Core apps akin to Kestrel or HTTP.sys.

    Creating and setting up the projects

    Start by creating an MVC app using the following command:

    dotnet new mvc -au Individual -uld --use-launch-settings -o .\TestingMVC\src\TestingMVC

    Create a test project with the following command:

    dotnet new xunit -o .\TestingMVC\test\TestingMVC.Tests

    Next create a solution, add the projects to the solution and add a reference to the app project from the test project:

    dotnet new sln
    dotnet sln add .\src\TestingMVC\TestingMVC.csproj
    dotnet sln add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj
    dotnet add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj reference .\src\TestingMVC\TestingMVC.csproj

    Add references to the components we’re going to use for testing by adding the following item group to the test project file:

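    The item group itself appears to have been lost from this copy of the post. For an ASP.NET Core 2.0 test project it would look roughly like this (package versions are illustrative):

    ```xml
    <ItemGroup>
      <!-- Pulls in the same framework packages the app uses -->
      <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
      <!-- Provides TestServer for in-memory hosting -->
      <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" />
      <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
      <PackageReference Include="xunit" Version="2.2.0" />
      <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
    </ItemGroup>
    ```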
    Now, we can run dotnet restore on the project or the solution and we can move on to writing tests.

    Writing a test to retrieve the page at ‘/’

    Now that we have our projects set up, let’s write a test that will serve as an example of how other tests will look.

    We’re going to start by changing Program.cs in our app project to look like this:

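    The snippet did not survive in this copy of the post; based on the description that follows, Program.cs looks roughly like this:

    ```csharp
    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;

    public class Program
    {
        public static void Main(string[] args) => BuildWebHost(args).Run();

        // Build the host from the shared builder so the app and the tests
        // configure the web host the same way.
        public static IWebHost BuildWebHost(string[] args) =>
            CreateWebHostBuilder(args).Build();

        // Exposed so tests can take the IWebHostBuilder, chain their own
        // calls onto it, and build a TestServer from it.
        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
    ```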
    In the snippet above, we’ve changed the method IWebHost BuildWebHost(string[] args) to call a new method, IWebHostBuilder CreateWebHostBuilder(string[] args). The reason for this is that we want our tests to configure the IWebHostBuilder in the same way the app does, while still making any test-specific changes by chaining calls on the builder.

    One example of this is setting the content root of the app when running the server in a test. The content root needs to be based on the application’s root, not the test’s root.

    Now, we can create a test like the one below to get the contents of our home page. This test will fail because we’re missing a couple of things that we describe below.

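    The test itself is missing from this copy of the post; matching the decomposition that follows, it would look roughly like this (the relative content-root path is illustrative):

    ```csharp
    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.TestHost;
    using Xunit;

    public class HomePageTests
    {
        [Fact]
        public async Task CanGetHomePage()
        {
            // Create the IWebHostBuilder the same way the app does, then point
            // the content root at the app's project folder instead of the
            // test's bin folder.
            var builder = Program.CreateWebHostBuilder(Array.Empty<string>())
                .UseContentRoot(Path.GetFullPath(@"..\..\..\..\..\src\TestingMVC"));

            using (var server = new TestServer(builder))
            using (var client = server.CreateClient())
            {
                // The request is served entirely in-memory; no network involved.
                var response = await client.GetAsync("/");
                Assert.Equal(HttpStatusCode.OK, response.StatusCode);
            }
        }
    }
    ```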
    The test above can be decomposed into the following actions:

    • Create an IWebHostBuilder in the same way that my app creates it
    • Override the content root of the app to point to the app’s project root instead of the bin folder of the test app. (.\src\TestingMVC instead of .\test\TestingMVC.Tests\bin\Debug\netcoreapp2.0)
    • Create a test server from the WebHost builder
    • Create an HttpClient that can be used to communicate with our app. (This uses an internal mechanism that sends the requests in-memory – no network involved.)
    • Send an HTTP request to the server using the client
    • Ensure the status code of the response is correct

    Requirements for Razor views to run on a test context

    If we tried to run the test above, we will probably get an HTTP 500 error instead of an HTTP 200 success. The reason for this is that the dependency context of the app is not correctly set up in our tests. In order to fix this, there are a few actions we need to take:

    • Copy the .deps.json file from our app to the bin folder of the testing project
    • Disable shadow copying assemblies

    For the first bullet point, we can create a target file like the one below and include it in our testing project file as follows:

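    The target is also missing from this copy of the post. A sketch of a target that copies the .deps.json files of referenced projects into the test output folder (adapted from the ASP.NET Core samples; treat names as illustrative) looks like this:

    ```xml
    <Target Name="CopyDepsFiles" AfterTargets="Build" Condition="'$(TargetFramework)'!=''">
      <ItemGroup>
        <!-- For each referenced project, compute the path of its .deps.json -->
        <DepsFilePaths Include="$([System.IO.Path]::ChangeExtension('%(_ResolvedProjectReferencePaths.FullPath)', '.deps.json'))" />
      </ItemGroup>
      <!-- Copy each deps file that exists next to the built assembly -->
      <Copy SourceFiles="%(DepsFilePaths.FullPath)" DestinationFolder="$(OutputPath)" Condition="Exists('%(DepsFilePaths.FullPath)')" />
    </Target>
    ```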
    For the second bullet point, the implementation is dependent on what testing framework we use. For xUnit, add an xunit.runner.json file in the root of the test project (set it to Copy Always) like the one below:

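    The file referenced above is not reproduced here; disabling shadow copying for xUnit takes a single setting:

    ```json
    {
      "shadowCopy": false
    }
    ```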
    This step is subject to change at any point; for more information look at the xUnit docs at http://xunit.github.io/#documentation.

    Now if you re-run the sample test, it will pass.

    Summary

    • We’ve seen how to create in-memory tests for an MVC app
    • We’ve discussed the requirements for setting up the app to find static files and to find and compile Razor views in the context of a test:
        • Set up the content root in the tests to the app’s root folder
        • Ensure the test project references all the assemblies in the app
        • Copy the app’s deps file to the bin folder of the test project
        • Disable shadow copying in your testing framework of choice
    • We’ve shown how to write a functional test in-memory using TestServer and the same configuration your app uses when running on a real server in production

    The source code of the completed project is available here: https://github.com/aspnet/samples/tree/master/samples/aspnetcore/mvc/testing/TestingMVC

    Happy testing!

    Microsoft Announces Simplygon Cloud; Optimizes Mixed Reality Development


    Earlier this year, we announced the acquisition of Simplygon, a leader in 3D model optimization based in southern Sweden. As we continue our journey to bring the benefits of mixed reality to everyone, Simplygon is an important accelerant that makes it easier, faster, and cheaper to develop in 3D.

    Introducing Simplygon Cloud

    Today, I am excited to announce the launch of Simplygon Cloud on Azure Marketplace. Simplygon reduces complexity in the creation and extensibility of 3D models through optimization. Simplygon supports GLTF, FBX and OBJ file types for ingestion; rendering engines including Unity 3D and Unreal Engine; and all major mixed reality platforms, including Windows Mixed Reality, iOS and Android. 

    How it works

    Historically, 3D asset optimization has taken days or weeks of manual effort and is one of the tasks that artists and developers dislike the most. With Simplygon, you can create 3D assets once – at full visual fidelity – and automatically optimize them to render smoothly on any platform – within minutes, saving valuable time and money.

    As an example, the above left 3D model of a couch was built with 584,000 polygons. To render this content on a lower GPU device, Simplygon optimized this down to 5,000 polygons, which greatly reduces the file size, while maintaining the ideal visual fidelity for the intended device.

    How To Get Started

    Simplygon Cloud is now available in the Azure Marketplace. To get started, visit our Azure Marketplace home to learn how to deploy the Simplygon Cloud virtual machine and start optimizing your 3D assets. Please also visit our documentation for examples and more information on how to integrate this into your workflow today.

    We look forward to sharing more in the months ahead. This is a very exciting time for everyone who is developing in the era of mixed reality!

    The post Microsoft Announces Simplygon Cloud; Optimizes Mixed Reality Development appeared first on Building Apps for Windows.

    Building a great touchpad experience for the web with Pointer Events


    Most web pages don’t fit on one screen, so good scrolling behavior is an integral part of a good web browser. It’s so crucial to the user experience that we have spent a lot of time optimizing page scrolling, with great results.

    Since launching Microsoft Edge, we’ve optimized most scrolling experiences — scrolling via touchscreens, page and content scrollbars. One particular focus in previous releases has been improving touchpads, specifically precision touchpads (PTPs), to provide a smooth, fluid, intuitive experience by default.

    In this post, we’re introducing a new optimization coming in EdgeHTML 17 to allow developers to customize scrolling behaviors and gestures with Precision Touch Pads, without impacting scrolling performance: PTP Pointer Events.

    Background

    Precision touchpads are high-end touchpads that ship in Surface devices (Surface Pro 2 and later) and modern Windows 10 devices from our OEM partners. Windows 10 takes advantage of this hardware to enable system-wide gestures and better, more responsive scrolling than what was possible with older technology.

    Microsoft Edge also utilizes PTPs to enable back/forward swipe and to enhance users’ scrolling experience via off-thread (aka independent) scrolling. Since PTP input is processed differently by the input stack in Windows 10, we wanted to ensure that we took advantage of this and that we gave users a scrolling experience that felt as natural as their experience with touchscreens everywhere on the web.

    However, the web has traditionally had a bit of a design flaw when it comes to scrolling, in the form of scroll jank — that ‘glitchy’ feeling that the page is stuck, and not keeping up with your finger while you’re scrolling.

    Often, scroll jank is caused by mousewheel or Touch Event listeners on the page (these are often used for tracking user interactions or for implementing custom scrolling experiences):

    // Examples of event listeners that can negatively affect scrolling performance
    document.addEventListener("wheel", handler);
    document.addEventListener("touchstart", handler);
    

    If one of these listeners is going to modify the default scrolling behavior of the browser, the browser has to cancel its optimized default scroll altogether (accomplished by web developers calling preventDefault() within handlers). Since browsers don’t always know if the listener is going to cancel the scroll, however, they always wait until the listener code executes before proceeding with the scroll, a delay which manifests itself as scroll jank:

    Animation showing an example of scroll jank due to a mousewheel handler with a 200ms duration

    An example page showing scroll jank due to a mousewheel handler with a 200ms duration.

    Browsers identified this issue and shipped passive event listeners as a mitigation (available in Chrome 51+ and EdgeHTML 16+) to help reduce its scope:

    Animation showing an example page scrolling smoothly despite touch/wheel handlers attempting to block scrolling.

    The same example with smooth scrolling thanks to passive event listeners

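    For reference, opting a listener into passive mode is a one-line change. The sketch below uses a plain EventTarget as a stand-in so it is self-contained; in a page you would register the listeners on `document` itself:

    ```javascript
    // A plain EventTarget stands in for `document` so the snippet runs
    // outside a browser as well.
    const doc = (typeof document !== "undefined") ? document : new EventTarget();

    function handler(event) {
      // Track the interaction; a passive listener must never call preventDefault().
    }

    // `passive: true` promises the browser the handler will not cancel the
    // scroll, so scrolling can proceed without waiting for the handler to run.
    doc.addEventListener("wheel", handler, { passive: true });
    doc.addEventListener("touchstart", handler, { passive: true });
    ```
    
    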
    Intersection Observers also help get around this issue by providing web developers with a mechanism to track user interactions with the page (to trigger lazy loading of infinite scrollers, for example) without affecting scrolling performance. These two approaches, however, still do not solve the cases where active event listeners are necessary, and require developers to be aware of the issues explained above and to change their sites in order for users to see improvements.

    Given that we wanted to enable the best scrolling experience with PTP on as many sites as possible while minimizing developer work, we made the decision to not fire mousewheel events in response to PTP gestures (such as two finger pans). While this greatly reduced scroll jank and gave users a scrolling experience akin to the one they get on touchscreens, the lack of mousewheel events being fired unfortunately also meant that users were unable to zoom on sites such as Bing Maps and pan on sites that use custom scrolling controls (both of which expect mousewheel events coming from touchpads in order to operate).

    Developers on our public issue tracker have made it clear that this has been a top pain point. However, the Microsoft Edge team wanted to ensure that the solution built to address these broken experiences not only fixed them, but also preserved the functional and performance benefits accrued by not firing mousewheel events.

    PTP Pointer Events

    As of EdgeHTML 17, Microsoft Edge will fire Pointer Events with a pointerType of “touch” in response to PTP gestures. While this is a departure from the mousewheel events of the past, we believe that the advantages to this approach more than justify the departure:

    No additional overhead for modern websites

    If your website already supports Pointer Events and touch, there is no additional work you need to do to take advantage of PTPs in Microsoft Edge; your site will just work!

    If you have not yet implemented Pointer Event support, we strongly recommend you check out the MDN documentation for Pointer Events to prepare your site for the modern web. Pointer Events are available on Internet Explorer 11, Microsoft Edge, and Google Chrome and are in development in Firefox.

    Enhanced scrolling performance

    Scrolling with PTPs in Microsoft Edge will never cause scroll jank since Pointer Event handlers (unlike mousewheel and Touch Event handlers) are designed so that they cannot block scrolling.

    With these new changes in Microsoft Edge, you can be certain that you are getting the best possible scrolling experience on PTP-enabled devices thanks to Pointer Events.

    Improved Gesture Recognition/Site Functionality

    Since PTP Pointer Events emulate touch Pointer Events, PTP gestures such as pinch to zoom and two finger panning will light up on sites that already support touch Pointer Events. This will allow developers to build near-native gesture experiences on the web, complete with the smooth animation and inertia curves that users have come to expect from interacting with pages via touch.

    Using PTP Pointer Events

    Using PTP Pointer Events on your site is as simple as registering for Pointer Events and using the touch-action CSS property to control how touches are handled by the browser:

    In HTML, add the touch-action CSS property to your target element to prevent the browser from executing its default touch behavior in response to gestures (in Microsoft Edge, for example, this will prevent two finger swipes from triggering back/forward swipe behavior):

    <canvas height=400 width=400 id="canvas" style="touch-action: none;"></canvas>
    

    In JavaScript, attach a Pointer Event listener to your target element. You can determine the type of pointer that caused the handler to be invoked using the pointerType property of the event object passed into the event listener callback:

    document.getElementById('canvas').addEventListener('pointermove', function(event) {
        // pointerType distinguishes the input: "touch" (including PTP), "pen", or "mouse"
        if (event.pointerType === 'touch') {
            console.log('pointermove from touch!');
        }
    });
    

    More detailed information on Pointer Events can be found on MDN here.

    Once you have added Pointer Event support to your site, the only step that remains is understanding how Microsoft Edge exposes PTP gestures to sites as Pointer Events. Note that for both of the gestures below, the Pointer Events generated in EdgeHTML will be sent to the element that is directly under the cursor when the PTP gesture begins.

    Two Finger Panning

    The two finger PTP panning gesture is converted within EdgeHTML to a single contact gesture (identical to a single-fingered touch pan gesture) and is exposed to sites as such. The gesture originates at the cursor location and any movement of the fingers on the touchpad is translated to a scaled delta which results in a pan action. The CSS touch-action property can be used to control the way that a specific region can be manipulated by the user.
    Animation showing two-finger touchpad input being mapped to a one-finger Pointer Event pan (pointerType of "touch") by EdgeHTML

    Zooming

    The pinch to zoom PTP gesture is converted within EdgeHTML to a gesture that originates at the cursor location. Two contacts are placed at a scaled distance away from the cursor location and any movement of the fingers on the touchpad is translated into scaled deltas which results in a zoom action.

    Animation showing a two-finger touchpad gesture (pinch-to-zoom) mapped by EdgeHTML to a two-finger "touch" Pointer Event (pointerType of "touch")

    Rotation

    PTP Pointer Events in Microsoft Edge introduce support for two-finger Rotation gestures for the first time, due to the fact that raw pointer data is exposed directly from the touchpad in all cases other than panning (where the two contacts on the touchpad are combined into one). Existing sites with Pointer Event handlers for touch that support rotation will light up with Precision Touchpads in Microsoft Edge as well.

    What’s next

    You can try out PTP Pointer Events in Microsoft Edge starting with our next Windows Insider release on any site that currently supports Pointer Events for touch gestures, including Bing Maps or Google Maps, on any device with a Precision Touchpad. The broader Windows 10 community will see PTP Pointer Events when EdgeHTML 17 ships with the next major release of Windows 10.

    We are excited to enable jank-free and near-native touchpad experiences across the web using Pointer Events, and look forward to feedback on this feature from developers and end users alike! You can share any bugs you encounter in testing via Microsoft Edge Platform Issues or the Feedback Hub app on Windows 10, or give your feedback directly @MSEdgeDev on Twitter or in the comments below.

    Try it out and let us know what you think!

    Scott Low, Program Manager, Microsoft Edge

    The post Building a great touchpad experience for the web with Pointer Events appeared first on Microsoft Edge Dev Blog.
