A summary of news from Connect(); 2017
A technical overview of Azure Databricks
This blog post was co-authored by Peter Carlin, Distinguished Engineer, Database Systems and Matei Zaharia, co-founder and Chief Technologist, Databricks.
Today at Microsoft Connect(); we introduced Azure Databricks, an exciting new service in preview that brings together the best of the Apache Spark analytics platform and Azure cloud. As a close partnership between Databricks and Microsoft, Azure Databricks brings unique benefits not present in other cloud platforms. This blog post introduces the technology and new capabilities available for data scientists, data engineers, and business decision-makers using the power of Databricks on Azure.
Apache Spark + Databricks + enterprise cloud = Azure Databricks
Once you manage data at scale in the cloud, you open up massive possibilities for predictive analytics, AI, and real-time applications. Over the past five years, the platform of choice for building these applications has been Apache Spark. With a massive community at thousands of enterprises worldwide, Spark makes it possible to run powerful analytics algorithms at scale and in real time to drive business insights. However, managing and deploying Spark at scale has remained challenging, especially for enterprise use cases with large numbers of users and strong security requirements.
Enter Databricks. Founded in 2013 by the team that started the Spark project, Databricks provides an end-to-end, managed Apache Spark platform optimized for the cloud. Featuring one-click deployment, autoscaling, and an optimized Databricks Runtime that can improve the performance of Spark jobs in the cloud by 10-100x, Databricks makes it simple and cost-efficient to run large-scale Spark workloads. Moreover, Databricks includes an interactive notebook environment, monitoring tools, and security controls that make it easy to leverage Spark in enterprises with thousands of users.
In Azure Databricks, we have gone one step beyond the base Databricks platform by integrating closely with Azure services through collaboration between Databricks and Microsoft. Azure Databricks features optimized connectors to Azure storage platforms (e.g. Data Lake and Blob Storage) for the fastest possible data access, and one-click management directly from the Azure console. This is the first time that an Apache Spark platform provider has partnered closely with a cloud provider to optimize data analytics workloads from the ground up.
Benefits for data engineers and data scientists
Why is Azure Databricks so useful for data scientists and engineers? Let’s look at some ways:
Optimized environment
Azure Databricks is optimized from the ground up for performance and cost-efficiency in the cloud. The Databricks Runtime adds several key capabilities to Apache Spark workloads that can increase performance and reduce costs by as much as 10-100x when running on Azure, including:
- High-speed connectors to Azure storage services, such as Azure Blob Store and Azure Data Lake, developed together with the Microsoft teams behind these services.
- Auto-scaling and auto-termination for Spark clusters to automatically minimize costs.
- Performance optimizations including caching, indexing, and advanced query optimization, which can improve performance by as much as 10-100x over traditional Apache Spark deployments in cloud or on-premises environments.
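To make the storage connectors concrete, here is a hypothetical sketch of how a Spark job might read data from Azure Blob Storage. The account name, container, and dataset path are placeholders, and the `spark` session is assumed to already exist (Azure Databricks notebooks provide one); only the `wasbs://` addressing scheme and the `fs.azure.account.key` configuration key are the standard Hadoop-Azure conventions.

```python
def wasbs_path(container: str, account: str, path: str) -> str:
    """Build the wasbs:// URI Spark uses to address Azure Blob Storage."""
    return f"wasbs://{container}@{account}.blob.core.windows.net/{path}"


def read_events(spark, account_key: str):
    """Read a CSV dataset from Blob Storage and cache it for repeated queries.

    `spark` is an existing SparkSession; on Azure Databricks one is
    provided for you. `account_key` is the storage account access key
    (in practice, fetch it from a secret store, not source code).
    """
    # Hadoop-Azure connector setting for authenticating to the storage account.
    # "mystorageacct" is an illustrative account name.
    spark.conf.set(
        "fs.azure.account.key.mystorageacct.blob.core.windows.net", account_key
    )
    df = spark.read.csv(
        wasbs_path("events", "mystorageacct", "2017/11/*.csv"), header=True
    )
    return df.cache()  # keep hot data in memory across repeated queries
```

The high-speed connectors described above mean reads through such paths go over an optimized data path, but the code a user writes is plain Spark.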
Seamless collaboration
Remember the jump in productivity when documents became truly multi-editable? Why can’t we have that for data engineering and data science? Azure Databricks brings exactly that. Notebooks on Databricks are live and shared, with real-time collaboration, so that everyone in your organization can work with your data. Dashboards enable business users to call an existing job with new parameters. Databricks also integrates closely with Power BI for interactive visualization. All this is possible because Azure Databricks is backed by Azure Database and other technologies that enable highly concurrent access, fast performance, and geo-replication.
Easy to use
Azure Databricks comes packaged with interactive notebooks that let you connect to common data sources, run machine learning algorithms, and learn the basics of Apache Spark to get started quickly. It also features an integrated debugging environment to let you analyze the progress of your Spark jobs from within interactive notebooks, and powerful tools to analyze past jobs. Finally, other common analytics libraries, such as the Python and R data science stacks, are preinstalled so that you can use them with Spark to derive insights. We really believe that big data can become 10x easier to use, and we are continuing the philosophy started in Apache Spark to provide a unified, end-to-end platform.
Architecture of Azure Databricks
So how is Azure Databricks put together? At a high level, the service launches and manages worker nodes in each Azure customer's subscription, letting customers leverage existing management tools within their account.
Specifically, when a customer launches a cluster via Databricks, a "Databricks appliance" is deployed as an Azure resource in the customer's subscription. The customer specifies the types of VMs to use and how many, but Databricks manages all other aspects. In addition to this appliance, a managed resource group is deployed into the customer's subscription that we populate with a VNet, a security group, and a storage account. These are concepts Azure users are familiar with. Once these services are ready, users can manage the Databricks cluster through the Azure Databricks UI or through features such as autoscaling. All metadata, such as scheduled jobs, is stored in an Azure Database with geo-replication for fault tolerance.
For users, this design means two things. First, they can easily connect Azure Databricks to any storage resource in their account, e.g., an existing Blob Store subscription or Data Lake. Second, Databricks is managed centrally from the Azure control center, requiring no additional setup.
Total Azure integration
We are integrating Azure Databricks closely with all features of the Azure platform in order to provide the best of Azure to users. Here are some of the integrations completed so far:
- Diversity of VM types: Customers can use all existing VM types, including F-series for machine learning scenarios, M-series for massive memory scenarios, D-series for general purpose, etc.
- Security and Privacy: In Azure, ownership and control of data is with the customer. We have built Azure Databricks to adhere to these standards. We aim for Azure Databricks to provide all the compliance certifications that the rest of Azure adheres to.
- Flexibility in network topology: Customers have a diversity of network infrastructure needs. Azure Databricks supports deployments in customer VNETs, which can control which sources and sinks can be accessed and how they are accessed.
- Azure Storage and Azure Data Lake integration: These storage services are exposed to Databricks users via DBFS to provide caching and optimized analysis over existing data.
- Azure Power BI: Users can connect Power BI directly to their Databricks clusters using JDBC in order to query data interactively at massive scale using familiar tools.
- Azure Active Directory: AAD provides control over access to resources and is already in use in most enterprises. Azure Databricks workspaces deploy in customer subscriptions, so naturally AAD can be used to control access to sources, results, and jobs.
- Azure SQL Data Warehouse, Azure SQL DB, and Azure CosmosDB: Azure Databricks easily and efficiently uploads results into these services for further analysis and real-time serving, making it simple to build end-to-end data architectures on Azure.
In addition to all the integration you can see, we have worked hard to integrate in ways you can’t see, but whose benefits you can:
- Internally, we use Azure Container Services to run the Azure Databricks control plane and data planes in containers.
- Accelerated Networking provides the fastest virtualized network infrastructure in the cloud. Azure Databricks utilizes this to further improve Spark performance.
- The latest generation of Azure hardware (Dv3 VMs), with NVMe SSDs capable of blazing 100-microsecond I/O latency, makes Databricks I/O performance even better.
We are just scratching the surface, though! As the service becomes generally available and evolves beyond that, we expect to keep adding integrations with upcoming Azure services.
Conclusion
We are very excited to partner together to bring you Azure Databricks. For the first time, a leading cloud provider and leading analytics system provider have partnered to build a cloud analytics platform optimized from the ground up – from Azure's storage and network infrastructure all the way to Databricks's runtime for Apache Spark. We believe that Azure Databricks will greatly simplify building enterprise-grade production data applications, and we would love to hear your feedback as the service rolls out.
MariaDB, PostgreSQL, and MySQL: more choices on Microsoft Azure
I am excited to join all of you virtually from Microsoft Connect(); and to share how we are broadening developer choice, joining communities, and helping make the platforms we work on together better.
It feels great to announce that we are joining the MariaDB Foundation as a Platinum sponsor to work closely with Monty (Michael Widenius) and the MariaDB community on making MariaDB even better. It is exciting to see how quickly MariaDB is growing. Our conversations kicked off just a few months ago, when I was visiting family in Sweden. I remember my first call with Monty to see if we could do something cool together. Since we both speak Swedish, we decided to hold that first call in Swedish; little did I realize how dusty my “database Swedish” had become. Despite this, we both quickly realized that we could start something great between MariaDB and Microsoft if we just tried.
I'm super excited to share that we are bringing MariaDB to Azure. Sign up for the upcoming preview for MariaDB as a fully managed service on Azure.
I am very proud of the direction we are taking at Microsoft, with a strong belief in the fundamentals of openness as well as our desire to support our customers where they are, on-premises and in Azure. We are committed to working with the community and submitting pull requests (hopefully improvements...) with the changes we make to the database engines that we offer in Azure. This keeps open source open and delivers a consistent experience, whether you run the database in the cloud, on your laptop while developing your applications, or on-premises.
Azure Database for MariaDB joins the Azure database services for PostgreSQL and MySQL to provide more choices to developers. You can provision a new instance in minutes and quickly scale compute power up and down, online, to respond to your needs. These services come with built-in high availability, security, and a 99.99% SLA at GA. On top of this, being on Azure lets you easily add modern experiences to your apps through Cognitive Services APIs and the Bot Framework. Check out a demo by Sunil Kamath on building intelligent apps using Azure Database for PostgreSQL here.
Since announcing the previews of Azure Database for PostgreSQL and Azure Database for MySQL earlier this year, we have added new PostgreSQL extensions and compute tiers, and increased our reach to more regions, including Brazil, Canada, and India, taking the total to 16 regions worldwide, on our way to the full 40+ regions supported by Azure. Many of our customers are already enjoying the benefits of the managed service capabilities we offer.
"Spinning up the PostgreSQL database through the Azure portal was very easy. Then we just exported the database from the existing system and imported it into Azure Database for PostgreSQL. It only took two or three hours.” – Eric Spear, Chief Executive Officer, Higher Ed Profiles
"Our page load times are very low, and we're able to do it on a more powerful and scalable infrastructure that costs us 45 percent less.” – Kevin Lisota, Developer, GeekWire
We are excited to bring MariaDB to Azure and offer you more choices. We'd love to get your feedback and learn how we can continue to make it better for you. I'm looking forward to seeing what we can achieve together!
Dear Cassandra Developers, welcome to Azure #CosmosDB!
Today we're excited to launch native support for Apache Cassandra API in Azure Cosmos DB – offering you Cassandra as-a-service powered by Azure Cosmos DB. You can now experience the power of Azure Cosmos DB platform as a managed service with the familiarity of your favorite Cassandra SDKs and tools—without any app code changes.
Azure Cosmos DB is the industry’s first fully managed, globally distributed, massively scalable, multi-model database service. It is designed to let developers elastically scale throughput and storage across any number of geographic regions worldwide, backed by industry-leading, comprehensive SLAs covering throughput, availability, consistency, and <10 ms latency guarantees.
Bring your Cassandra apps to Azure Cosmos DB in 3 simple steps:
- Create a new Azure Cosmos DB account in the Azure Portal and choose the new Cassandra API while creating an Azure Cosmos DB account.
- Connect your Cassandra application to Azure Cosmos DB by copying a simple connection code snippet provided upon creation of your new account.
- Use your favorite Cassandra tools and drivers to manage and query your Cassandra data in Azure Cosmos DB.
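The connection step above can be sketched with the standard Python `cassandra-driver`. The host suffix, port, and credentials below are placeholders; use the exact values from the connection string shown in the Azure portal for your account.

```python
import ssl

# Cosmos DB's Cassandra API listens on 10350 (not Cassandra's default 9042).
COSMOS_CASSANDRA_PORT = 10350


def contact_point(account: str) -> str:
    """Host name of the Cassandra API endpoint for a Cosmos DB account
    (assumed suffix; confirm against your account's connection string)."""
    return f"{account}.cassandra.cosmosdb.azure.com"


def connect(account: str, username: str, password: str):
    """Open a session against Cosmos DB using the regular Cassandra driver."""
    # Imported inside the function so the helpers above work without the driver.
    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider

    cluster = Cluster(
        [contact_point(account)],
        port=COSMOS_CASSANDRA_PORT,
        auth_provider=PlainTextAuthProvider(username=username, password=password),
        ssl_options={"ssl_version": ssl.PROTOCOL_TLSv1_2},  # TLS is required
    )
    return cluster.connect()
```

Once the session is open, your existing CQL statements and tools work unchanged, e.g. `session.execute("SELECT * FROM ks.users")`.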
This short video shows how quick and easy it is to get started with Azure Cosmos DB’s native support for the Apache Cassandra API.
Enterprise-grade, battle-tested platform for your Cassandra apps
“We are using the Cassandra API on Azure Cosmos DB for several mission-critical use cases. In particular, the geo-redundancy and dynamic scale of the solution are key advantages and we look forward to reaping more benefits in the future.”
- Christoph Leinemann, Senior Director, Data Engineering, Jet.com
With Azure Cosmos DB’s native support for Cassandra APIs you will get the following benefits:
- Fully managed, serverless Cassandra as-a-service. As a true PaaS service, Azure Cosmos DB ensures that you do not have to worry about managing and monitoring a myriad of settings across OS, JVM, and YAML files, or about dealing with complex interdependencies. Azure Cosmos DB provides first-class monitoring of throughput, latency, consistency, storage, and availability, plus configurable alerts so you can act on changes across them.
- Turnkey global distribution. Azure Cosmos DB was designed as a globally distributed service from the ground up to ensure that your data is made available wherever your users are. The service transparently and automatically replicates the data across any number of Azure regions associated with your Cassandra tables. You can add or remove regions for your Cassandra tables with a few clicks in Azure portal or programmatically, at any time.
- Elastic and transparent scaling of storage. Azure Cosmos DB provides automatic storage management without the need for any manual intervention and grows the capacity as your application storage needs increase. You don’t need to worry about the complexities of capacity planning or having to deal with adding cluster nodes and tuning configs anymore.
- Elastic scaling of throughput all around the world. With Azure Cosmos DB you don’t need to worry about tuning config settings for CPU, memory, disk IO, and compaction. Azure Cosmos DB allows you to scale throughput for your Cassandra tables all around the world and guarantees the configured throughput regardless of the volume of data being stored.
- Guaranteed low latency reads and writes. As the first and only schema-agnostic database, Azure Cosmos DB automatically indexes all your data so you can perform blazing fast queries. The service offers guaranteed <10 ms latencies at the 99th percentile for near real-time query results.
- Multiple well-defined consistency models with clear tradeoffs. Writing correct distributed application logic against an eventually consistent database is often difficult. Azure Cosmos DB helps by providing five well-defined, intuitive, and practical consistency levels, each with a clear trade-off between consistency and performance, guaranteed correctness, and backed by SLAs. You can choose from strong, bounded staleness, session, consistent prefix, and eventual consistency models, configure them at any time, and change them on a per-request basis.
- Secure, compliant, and enterprise-ready by default. Azure Cosmos DB is a secure, compliant, enterprise-ready service for mission-critical apps. It has met stringent compliance standards including ISO 27001, ISO 27018, EUMC, PCI DSS, SOC 1/2/3, and HIPAA/HITECH, among other certifications. Azure Cosmos DB also provides encryption at rest and in motion, an IP firewall, and audit logs of your database activities to meet the demanding security standards of enterprises.
- Backed by industry-leading, comprehensive SLAs. Azure Cosmos DB provides industry-leading, comprehensive SLAs: 99.99% availability for a single region, 99.999% read availability at global scale, and guarantees on consistency, throughput, and low-latency reads and writes at the 99th percentile. You do not need to worry about operational overhead or tuning dozens of configuration options to get good performance; Azure Cosmos DB takes care of these issues and lets you focus on your application logic instead.
Azure Cosmos DB provides wire protocol level compatibility with the Cassandra API. This ensures you can continue using your existing applications and OSS tools with no code changes, and gives you the flexibility to run your Cassandra apps fully managed with no vendor lock-in. While Azure Cosmos DB exposes APIs for the popular open source databases, it does not rely on the implementations of those databases for realizing the semantics of the corresponding APIs. Our unique approach of providing wire-compatible APIs for popular open source databases ensures that you can continue to use Azure Cosmos DB in a cloud-agnostic manner while still leveraging a robust database platform natively designed for the cloud.
Finally, Azure Cosmos DB has been used extensively within Microsoft over the years and by some of the largest enterprise customers with their mission critical workloads at an unprecedented global scale. You will now enjoy the same battle-tested, fully managed, globally distributed database service that provides the lowest TCO while still using your familiar Cassandra API.
Get started today!
With Azure Cosmos DB, our mission is to enable the world’s developers to build amazingly powerful, cosmos-scale apps more easily. Today, we are excited to welcome the Cassandra developer community!
Please sign up to try the Apache Cassandra API and the new capabilities Azure Cosmos DB can bring to your Cassandra application. After you sign up for a Cassandra API account, get started with our Quick Start for Cassandra API using .NET, Java, Node.js and Python.
If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow, and follow us on Twitter @AzureCosmosDB, #CosmosDB, for the latest news and announcements.
- Your friends at Azure Cosmos DB
Azure IoT Edge open for developers to build for the intelligent edge
As businesses learn to harness the transformational power of IoT, IoT devices are becoming a mission-critical business asset. Today, IoT solutions use IoT devices to sense things in the real world with processing and decision making happening in the cloud, but as IoT continues to mature there are many use cases where it’s more appropriate to process data or take action directly on the IoT device itself.
Earlier this year at our //Build developer conference, we introduced a revolutionary new product, Azure IoT Edge, to address these needs. Azure IoT Edge enables businesses to run cloud intelligence directly on IoT devices, from devices even smaller than a Raspberry Pi to hardware as powerful as they need.
Today at our Connect(); developer conference, we are thrilled to announce the public preview of new Azure IoT Edge capabilities including support for:
- AI Toolkit for Azure IoT Edge
- Azure Machine Learning
- Azure Stream Analytics
- Azure Functions
- Your own code in Containers
- Protocol adapters as modules (OPC-UA and Modbus)
Build AI applications for the Edge
Traditional IoT sensors measure things like temperature, humidity, acceleration, vibration, and more. While this is powering IoT solutions today, IoT solutions in the future need much more sophisticated sensors – for example, sensors that can detect visual defects, identify objects, and find visual anomalies. Azure IoT Edge now includes support for AI to enable these scenarios and more.
Today we’re announcing the AI Toolkit for Azure IoT Edge to jumpstart the process of creating AI applications that run at the edge. With the toolkit, developers can build AI applications in any framework using Azure Machine Learning, then easily deploy and manage the models on Azure IoT Edge. The toolkit also includes a set of pre-built models for common tasks.
You can find the AI Toolkit for Azure IoT Edge on GitHub.
Azure IoT Edge in detail
Azure IoT Edge can be used in many IoT scenarios. As an example, a complex data pipeline can be created on Azure IoT Edge (running on an edge device) that pulls data from IoT devices and runs it through a combination of Azure Machine Learning, Azure Stream Analytics, Azure Functions, and any third-party code. This pipeline can be configured and deployed from Azure IoT Hub in the cloud, with the Azure IoT Edge device pulling down the appropriate containers for these services and linking them together.
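The wiring between such pipeline stages is declared as routes in the deployment manifest pushed from IoT Hub. A hypothetical fragment (the module names `tempSensor` and `streamAnalytics` and their endpoint names are illustrative) might look like:

```json
{
  "routes": {
    "sensorToAsa": "FROM /messages/modules/tempSensor/outputs/* INTO BrokeredEndpoint(\"/modules/streamAnalytics/inputs/input1\")",
    "asaToCloud": "FROM /messages/modules/streamAnalytics/outputs/* INTO $upstream"
  }
}
```

Here `$upstream` sends messages on to IoT Hub in the cloud, while `BrokeredEndpoint` routes them locally between modules on the device.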
Azure IoT Edge is designed to run on multiple platforms (Windows and many versions of Linux) and hardware architectures (x64 and ARM). To deploy workloads, Azure IoT Edge can use Linux Containers for Docker or Windows Containers for Docker, with an open design that can incorporate a number of popular container management systems.
Azure IoT Edge also allows developers to write their own code in multiple languages (C#, C, and Python for now, with more coming in the future) and deploy it to Azure IoT Edge. We provide tools to develop, debug, and deploy this code in containers from VS Code (for C#). In addition, Azure IoT Hub provides user experiences not only to deploy Edge modules on a single device, but also at scale across a fleet of IoT Edge devices. This functionality is available in the Azure portal, as well as through APIs that let businesses build their own applications for deployment and configuration management. Azure IoT Edge is available in most Azure regions today, including West Central US, East Asia, North Europe, and West US, and the rest of the regions will be available shortly.
Many customers are already seeing benefit and new opportunities with Azure IoT Edge. Here is what they are saying:
“Azure IoT Edge provided an easy way to package and deploy our Machine Learning applications. Traditionally, machine learning is something that has only run in the cloud, but for many IoT scenarios that isn’t good enough, because you want to run your application as close as possible to any events. Now we have the flexibility to run it in the cloud or at the edge—wherever we need it to be.”
– Matt Boujonnier, Analytics Application Architect for Schneider Electric
“NEC sees great value in Azure Stream Analytics on IoT Edge to increase the responsiveness of IoT solutions, while ensuring data privacy and sovereignty by processing data locally on edge devices. We see great potential to use this service across both our own IoT solutions, and also those of our customers who benefit from NEC’s Azure Plus consultancy."
– Hiroyuki Ochiai, Director, IT platform division, NEC Corporation
“The term ‘intelligence at the edge’ for Sandvik Coromant means doing useful processing of the data as close to the collection point as possible, allowing systems to make some operational decisions there. At Sandvik Coromant, we are streaming data from manufacturing machines, industrial equipment, pipelines and other remote devices connected to the IIoT. By running the data through an analytics algorithm, at the edge inside a corporate network with Azure IoT Edge, we can set parameters on what information is worth sending to a cloud or on-premises data store for later use -- and what isn't. Edge analytics makes it possible to react very quickly which as an example can prevent crashes in the machine, this will enable organizations to reduce or avoid unplanned equipment downtime.”
– Magnus Ekbäck, VP Business Development, Sandvik Coromant.
We will be doing a webinar on Azure IoT Edge soon. To get more information, register for the webinar today.
Commitment to security at the Edge
To empower developers building applications for Azure IoT Edge, security is a fundamental requirement for success. Just last month we announced updates to our work with NXP for LS1012 and Microchip for ATSAMA5D2, with both product families built using the ARM processor architecture with TrustZone technology. We will continue to work across the industry to make Azure IoT Edge a secure Intelligent Edge platform that is operating system, processor architecture, and hardware agnostic.
More news for Azure IoT
Today we’re also announcing the general availability of Azure Time Series Insights, a fully managed service for the analytics, storage, and visualization of time series data that enables real-time anomaly detection, data streaming, and analysis to power apps at the edge. Since April, hundreds of customers have pushed hundreds of billions of events into TSI for use in production environments. Now, any organization that produces massive amounts of IoT data has a scalable, enterprise-grade solution for storing and gleaning insights from its data, applicable to Azure IoT solutions in the cloud or on the edge.
By 2020, it’s estimated that there will be 30 billion connected devices, according to the IDC research group. The ability to securely provision and harness data from these devices at scale has never been so critical to business transformation. Microsoft is helping customers simplify their IoT journey through comprehensive and integrated services to manage devices and solutions at scale securely. Check out our demo and docs on how to get started with Azure IoT Edge today.
Announcing Visual Studio and Kubernetes – Visual Studio Connected Environment
I've been having all kinds of fun lately with Kubernetes, exploring building my own Kubernetes Cluster on the metal, as well as using a managed Kubernetes cluster in Azure with AKS.
Today at the Connect() conference in NYC I was happy to announce Visual Studio Connected Environment. How would one take the best of Visual Studio and the best of managed Kubernetes and create something useful for development teams?
Ecosystem momentum behind containers is amazing right now with support for containers across clouds, operating systems, and development platforms. Additionally, while microservices as an architectural pattern has been around for years, more and more developers are discovering the advantages every day.
You can check out videos of the Connect() conference at https://www.microsoft.com/connectevent, but you should check out my practice video where I show a live demo of Kubernetes in Visual Studio:
The buzzword "cloud native" is thrown around a lot. It's a meaningful term, though, as it means "architecture with the cloud in mind." Applications that are cloud-native should consider these challenges:
- Connecting to and leveraging cloud services
- Use the right cloud services for your app, don't roll your own DB, Auth, Discovery, etc.
- Dealing with complexity and staying cognizant of changes
- Stubbing out copies of services can increase complexity and hide issues when your chain of invocations grows. K.I.S.S.
- Setting up and managing infrastructure and dealing with changing pre-requisites
- Even though you may have moved to containers for production, is your dev environment as representative of prod as possible?
- Establishing consistent, common environments
- Setting up private environments can be challenging, and it gets messier when you need to manage your local env, your team dev, staging, and ultimately prod.
- Adopting best practices such as service discovery and secrets management
- Keep secrets out of code, this is a solved problem. Service discovery and lookup should be straightforward and reliable in all environments.
A lot of this reminds us to use established and mature best practices, and avoid re-inventing the wheel when one already exists.
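One of the practices above, keeping secrets out of code, can be sketched in a few lines: read credentials from the environment (populated by the orchestrator's secret store, e.g. a Kubernetes Secret exposed as an env var) rather than hardcoding them. The variable names and connection-string shape here are illustrative.

```python
import os


def database_url(env=os.environ) -> str:
    """Assemble a connection string from injected settings.

    Raises KeyError early if a required secret was not provided, which is
    easier to diagnose than a connection failure deep in the call chain.
    """
    host = env["DB_HOST"]
    password = env["DB_PASSWORD"]  # never committed to source control
    return f"postgresql://app:{password}@{host}:5432/appdb"
```

Because the function takes the environment as a parameter, the same code works unchanged in local dev, team dev, staging, and prod: only the injected values differ.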
The announcements at Connect() are pretty cool because they're extending both VS and the Azure cloud to work the way devs work AND the way devops works. They're extending the developer's IDE/editor experience into the cloud with services built on top of the container orchestration capabilities of Kubernetes on Azure. You can work in Visual Studio, VS Code, and Visual Studio for Mac, AND through a CLI (command line interface). They'll initially support .NET Core, Node.js, and Java on Linux. As Azure adds more support for Windows containers in Kubernetes, they'll enable .NET Full Framework applications. Given the state of Windows container support in the platform, the initial focus is on green-field development scenarios, but lift-shift-and-modernize will come later.
It took me a moment to get my head around it (be sure to watch the video!) but it's pretty amazing. Your team has a shared development environment, with your containers living in, and managed by, Kubernetes. However, you also have your local development machine, which can then reserve its own space for the services and containers that you're working on. You won't break the team with the work you're doing, but you'll be able to see how your services work and interact in an environment that is close to how it will look in production.
PLUS, you can F5 debug from Visual Studio or Visual Studio Code and debug, live in the cloud, in Kubernetes, as fast as you could locally.
This positions Kubernetes as the underlayment for your containers, with the backplane managed by Azure/AKS, and the development experience behaving the way it always has. You use Visual Studio, or Visual Studio Code, or the command line, and you use the languages and platforms that you prefer. In the demo I switch between .NET Core/C# and Node, VS and VS Code, no problem.
I, for one, look forward to our containerized future, and I hope you check it out as well!
You can sign up for the preview at http://aka.ms/signup-vsce
© 2017 Scott Hanselman. All rights reserved.
Announcing general availability of Bash in Cloud Shell
Today, we are excited to announce the general availability of Bash in Azure Cloud Shell. Bash in Cloud Shell provides an interactive, web-based Linux command-line experience from virtually anywhere. With a single click through the Azure portal, Azure documentation, or the Azure mobile app, users gain access to a secure and authenticated Azure workstation to manage and deploy resources from a native Linux environment hosted in Azure. Learn more about Bash in Cloud Shell.
Traditional command-line environments include the overhead of managing and installing dependencies for select tools before getting real work done. Bash in Cloud Shell saves time and effort when managing Azure resources by ensuring users are never far from a ready-to-use Azure environment maintained by Microsoft.
Bash in Cloud Shell comes equipped with commonly used CLI tools, including Linux shell interpreters, Azure tools, text editors, source control, build tools, container tools, database tools, and more. It enables simple, secure authentication to use Azure resources with Azure CLI 2.0. Azure File shares enable file persistence through clouddrive to store scripts and settings.
Bash improvements since launch
Since the public preview in May, we’ve incorporated extensive feedback from the community to improve the Bash in Cloud Shell experience, including:
- 10+ new default tools for bash
- Improved bash history for concurrent sessions
- Persisted font size preference
- Performance improvements for faster shell start-up
- Tmux support
- Terraform auto-authentication
Get started
Getting started is simple: visit portal.azure.com and click the Cloud Shell icon in the top toolbar to set up your file share and start your Cloud Shell journey!
Feedback
Our customers will always be at the core of what drives Azure and Cloud Shell; our incredible community of end users and partners makes these great experiences possible. Thank you all for the feedback that has shaped Cloud Shell, and we look forward to receiving more at our Cloud Shell Feedback Forum.
Announcing the general availability of Azure App Service diagnostics
Today, we are pleased to announce the general availability of App Service diagnostics. It provides an intelligent and interactive experience that analyzes what’s wrong with your web apps and quickly guides you to the right information to help you troubleshoot and resolve issues faster.
Proactive application health checkup
With Azure App Service diagnostics, you can run proactive health checkups against common web app metrics and tap into a pool of built-in knowledge that helps you troubleshoot efficiently. The experience provides you with a quick, interactive overview that either tells you that your app is healthy or identifies the unhealthy areas. Behind the scenes, it detects issues in four areas: Requests and Errors, Performance, CPU Usage, and Memory Usage, which are the common problem areas we have seen for web applications over the years.
Actionable insights
There are many factors that can impact a web application’s health and performance, and isolating and addressing those factors can be challenging. App Service diagnostics provides relevant metrics based on problem categories, along with actionable insights that take the guesswork away.
Let’s dive into an example web app (see the image below). This app appears to perform well except when it is under load. The poor performance could originate from any of the underlying components: database, CPU, network, memory, disks, etc. This is where App Service diagnostics can help guide you to the root cause faster. In this example, it has identified that the web app is receiving HTTP server errors. Opening the error analysis tab, you can drill down to the specific timeframe when your web app experienced downtime. Next, under the Observations tab you can find the source of the problem, which in this case happens to be high CPU usage.
Powered by a massive data set with information from Microsoft’s existing support center, App Service diagnostics reduces the chance of trial and error and expedites problem resolution by recommending potential solutions. It also provides a list of helpful links for further investigation.
You can learn more about App Service diagnostics here. If you have any feedback or suggestions, please start a new post here.
Azure Stream Analytics now available on IoT Edge
Today, we are announcing the public preview of Azure Stream Analytics running on Azure IoT Edge. Azure Stream Analytics on IoT Edge empowers developers to deploy near-real-time analytical intelligence closer to IoT devices so that they can unlock the full value of device-generated data. Designed for customers requiring low latency, resiliency, efficient use of bandwidth, and compliance, it lets enterprises deploy control logic close to their industrial operations and complement the Big Data analytics done in the cloud.
Why put analytics closer to the data?
With Azure Stream Analytics (ASA) on IoT Edge, enterprises benefit from running Complex Event Processing (CEP) closer to where the data is produced, in scenarios such as the following:
- Low-latency command and control: For example, manufacturing safety systems are required to respond to operational data with ultra-low latency. With ASA on IoT Edge, you can analyze sensor data in near real time and issue commands when you detect anomalies to stop a machine or trigger alerts.
- Limited connectivity to the cloud: Mission critical systems, such as remote mining equipment, connected vessels or offshore drilling, need to analyze and react to data even when cloud connectivity is intermittent. With ASA, your streaming logic runs independently of the network connectivity and you can choose what you send to the cloud for further processing or storage.
- Limited bandwidth: The volume of data produced by jet engines or connected cars can be so large that data must be filtered or pre-processed before being sent to the cloud. Using ASA, you can filter or aggregate the data that needs to be sent to the cloud.
- Compliance: Regulatory compliance may require some data to be locally anonymized or aggregated before being sent to the cloud. With ASA, you can aggregate data coming from various sources, or in a given time window, for example.
During the private preview of ASA on IoT Edge, we received positive feedback validating the use of ASA for these scenarios. Hiroyuki Ochiai, Director of the IT platform division for NEC Corporation said, “Azure Stream Analytics on IoT Edge increases the responsiveness of IoT solutions, while ensuring data privacy and sovereignty by processing data locally on IoT Edge. We see great potential to use this service across both our own IoT solutions, and those of our customers who benefit from NEC’s Azure Plus consultancy."
Move between edge and cloud easily
With ASA on IoT Edge, you can easily use CEP for your IoT scenarios using the same interface and the same SQL-like language for both cloud and edge analytics jobs. This makes it easy to move analytics between edge and cloud. Our SQL language notably enables temporal-based joins, windowed aggregates, temporal filters, and other common operations such as aggregates, projections, and filters. You can find more information in our query language documentation.
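To make "windowed aggregates" concrete: in an ASA job you would express this in the SQL-like language itself (for example, `GROUP BY TumblingWindow(second, 10)`). The following Python sketch emulates what such a tumbling-window count computes; the sensor events and field layout are made up for illustration.

```python
# Hypothetical sketch: count sensor events per 10-second tumbling window,
# mimicking an ASA query along the lines of:
#   SELECT System.Timestamp AS WindowEnd, COUNT(*) AS Events
#   FROM input GROUP BY TumblingWindow(second, 10)
from collections import Counter

events = [  # (timestamp in seconds, reading) - made-up sample data
    (1, 20.1), (4, 20.3), (9, 20.2),   # fall in window [0, 10)
    (12, 35.7), (18, 36.1),            # fall in window [10, 20)
    (25, 20.0),                        # falls in window [20, 30)
]

WINDOW = 10  # tumbling window length in seconds

def tumbling_counts(events, window):
    """Assign each event to the window it falls in and count per window."""
    counts = Counter((ts // window) * window for ts, _ in events)
    return dict(sorted(counts.items()))

print(tumbling_counts(events, WINDOW))  # {0: 3, 10: 2, 20: 1}
```

Unlike a sliding window, tumbling windows are fixed-size and non-overlapping, so each event is counted exactly once.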
ASA on IoT Edge offers a cross-platform solution running on Docker containers that can be deployed on multiple platforms (Linux or Windows) and multiple architectures (Intel or ARM CPUs). This allows ASA to run on a large variety of devices, from small-footprint devices such as Raspberry Pi to industrial PCs, dedicated field gateways, or servers.
By leveraging Azure IoT Edge to secure, deploy and manage your IoT solutions from the cloud, you can easily deploy Azure Stream Analytics to thousands of devices.
Get started now
ASA on IoT Edge preview is being deployed and will be enabled to all customers by Friday, November 17, 2017. For more information, refer to the ASA on IoT Edge documentation, or go directly to the Azure portal to create a new ASA job and just select “Edge” as the hosting environment. An end-to-end tutorial is also available for a quick start.
Microsoft announces the general availability of Azure Time Series Insights
Today, we announced the general availability (GA) of Azure Time Series Insights (TSI). TSI is a cost-effective and performant service for the analytics, visualization, and storage of time series data. Over the last seven months, hundreds of customers, including ThyssenKrupp, BMW, Steelcase, TransAlta, Actionpoint, and Mesh Systems, have pushed more than 50 billion events into TSI for use in their production environments. Customers have leveraged TSI to visualize machine learning models in real time, compare disparate assets, reduce SLAs for IoT asset validation and deployment, and conduct root cause analysis. Now, customers with large volumes of time series data have a scalable, commercial-grade solution for storing and analyzing data without the headache and expense of tedious resource management.
When we first started working with customers on TSI, there wasn’t a clean way to analyze and visualize time series data at scale. Historically, time series data has been stored in traditional databases hosted on premises, where it is hard to set up and manage. While customers can cobble together commercial and open source products today, they are difficult to provision and quickly become pricey and time-consuming. Once customers get these solutions up and running, they struggle to keep up with the increasing size of their IoT data. In fact, we’ve heard multiple times that these types of solutions are “where good data goes to die,” since customers often wind up not generating any meaningful insights from them.
It’s easy to get started using TSI because it requires no coding or data prep. TSI enables customers to analyze, visualize, and store terabytes of data in near real time, all without having to worry about managing or connecting multiple applications. With this release, TSI now stores raw data for up to 400 days, four times more than what was previously available, enabling fast multi-site asset comparisons over a rolling year of data. TSI grows with you, elastically scaling up to a multi-terabyte environment in seconds, and makes new data available to query for insights in less than one minute, so there is no planning or wait time required to meet your business needs. Unlike other products, TSI is pay-as-you-go and supports an unlimited number of users and queries, so organizations can get more value out of it. Organizations building custom applications that need to query, aggregate, store, and chunk time series data into intervals can achieve greater flexibility by leveraging TSI’s APIs. Applications like Microsoft IoT Central, the Azure IoT Connected Factory preconfigured solution, and the Steelcase Workplace Advisor have all been built on top of TSI using these APIs.
Easy to get started
Actionpoint is a global technology company providing a comprehensive portfolio of innovative products and services, including their new IoT-PREDICT solution for connected factories.
“TSI makes it simple for Actionpoint IoT-PREDICT customers to start storing and visualizing the powerful data generated on their factory floors,” said Finian Nally, Head of Cloud Solutions. “TSI enables us to create scalable solutions on behalf of our customers in less than 10 minutes, as it requires no coding nor data prep to get started.”
Powerful APIs
Steelcase, the global leader in office furniture and connected office solutions, has taken a cloud-first approach to software development in their new Workplace Advisor solution.
“We are constantly working to help our customers reimagine how they can empower their workforce to work more efficiently in their workplaces,” said Scott King, software engineering lead at Steelcase. “To do this well, we needed a place to capture and store large volumes of time series data, make calculations with that data on the fly in real-time, and aggregate that data, so it’s easier to view and explore in our application. Time Series Insights APIs have enabled us to provide real-time visibility across workspaces around the globe, giving our customers the ability to intuitively gain insights and make informed decisions on how to optimize the workplace. We chose to build on TSI because of the speed it allows us to dynamically query our data, its interval chunking, and aggregation capabilities.”
More efficient operations
ThyssenKrupp Elevator, the world’s leading elevator company, is using TSI to make their operations more efficient.
“Azure Time Series Insights has standardized our method of accessing devices’ telemetry in real time without any development effort. Time to detect and diagnose a problem has dropped from days to minutes. With just a few clicks we can visualize the end-to-end device data flow, helping us identify and address customer and market needs,” said Scott Tillman, Software Engineer, ThyssenKrupp Elevator.
Improved organizational visibility
TransAlta, a global energy leader, is using TSI to store and analyze wind farm data.
“The simplicity and speed we have seen since starting to use Time Series Insights has been impressive,” said Jason Killeleagh, lead architect. “We are excited to expand our use of Time Series Insights as we continue to identify ways it helps TransAlta cut costs and improve data access for our Plant Operations and Engineering teams.”
Time Series data, simplified
Mesh Systems designs and deploys turnkey IoT / M2M solutions that include hardware, software and networking frameworks for Smart products. They are helping a global automation technology company build a new solution using Time Series Insights.
“Azure Time Series Insights has obviated the need to build a queryable, performant time series repository,” said Kyle Zeronik, VP of engineering. “The removal of the engineering effort and subsequent code maintenance effort is an enormous efficiency gain when building IoT applications. We see tremendous value in the Azure Time Series Insights PaaS offering.”
We are very excited to enable organizations with large volumes of time series data to count on the stability and service level agreement (SLA) that TSI now provides, but what comes next? Looking forward, we’ll be working on expanding storage retention, adding built-in anomaly detection, integrating with Power BI, and making TSI explorer generally available, which enables customers to explore their freshest time series data in seconds. Customers can continue to use the TSI explorer, which remains in public preview, and we will continue to include it as a component of all TSI SKUs. You can stay up to date on all things Time Series Insights by following us on Twitter. Get started on Time Series Insights here.
Azure DevOps Project – public preview
In today’s world, organizations need to innovate and get to market faster. That means learning the latest technologies, using them in your product, and deploying at a faster pace.
We are happy to announce the public preview of Azure DevOps Project. Azure DevOps Project helps you launch an app on the Azure service of your choice in a few quick steps and sets you up with everything you need for developing, deploying, and monitoring your app.
Creating a DevOps Project provisions Azure resources and comes with a Git code repository, Application Insights integration, and a continuous delivery pipeline set up to deploy to Azure. The DevOps Project dashboard lets you monitor code commits, builds, and deployments from a single view in the Azure portal.
Key benefits of a DevOps Project:
- Get up and running with a new app and a full DevOps pipeline in just a few minutes
- Support for a wide range of popular frameworks such as .NET, Java, PHP, Node, and Python
- Start fresh or bring your own application from GitHub
- Built-in Application Insights integration for instant analytics and actionable insights
- Cloud-powered CI/CD using Visual Studio Team Services (VSTS)
DevOps Projects are powered by VSTS and give you a head start in developing and deploying your applications. From the initial starting point a DevOps Project provides, you can very easily:
- Customize your build and release pipeline, e.g., add a test environment to your pipeline to validate before going to production
- Use pull requests to manage your code flow and keep your quality high
- Track your project’s backlog and issues right along with your application
Get started
Create an Azure DevOps Project now.
For more information about Azure DevOps project, please read our documentation and learn more about today’s announcements on the DevOps blog.
Azure #CosmosDB extends support for MongoDB aggregation pipeline, unique indexes, and more
We're happy to announce that you can now use more MongoDB query features and types in Azure Cosmos DB. With the latest service deployment, we have made the following improvements to the MongoDB API in Azure Cosmos DB:
- MongoDB aggregation pipeline support (preview)
- Unique index support
- MongoDB wire protocol version 5 support, used in MongoDB 3.4 (preview)
MongoDB aggregation pipeline
In MongoDB, the aggregation pipeline enables developers to create more sophisticated queries and manipulate data by combining multiple aggregation ‘stages’ together, thus enabling them to do more data processing on the server side before the results get returned to the client. The data can be filtered, sorted, and prepared for use, eliminating the need to transfer large amounts of data over the wire only to reduce it to a manageable size in the client code.
For example, the following aggregation pipeline finds records in the Volcanos collection, matches only the ones in the United States, groups the records by volcano type, collects the elevation of each volcano, and counts the number of records matched, all in a single query:
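A pipeline along those lines might look like the following pymongo-style sketch; the field names (Country, Type, Elevation) and the sample documents are assumptions for illustration, and the short in-memory emulation shows what the stages compute.

```python
# Sketch of the pipeline described above, written pymongo-style.
# Field names and sample documents are illustrative assumptions.
pipeline = [
    {"$match": {"Country": "United States"}},
    {"$group": {
        "_id": "$Type",                         # group by volcano type
        "elevations": {"$push": "$Elevation"},  # collect each elevation
        "count": {"$sum": 1},                   # count matched records
    }},
]
# Against a live MongoDB API account this would run as:
#   db.volcanos.aggregate(pipeline)

# A tiny in-memory emulation of the $match and $group stages, so the
# semantics are visible without a database:
docs = [
    {"Country": "United States", "Type": "Stratovolcano", "Elevation": 4392},
    {"Country": "United States", "Type": "Shield", "Elevation": 4169},
    {"Country": "Japan", "Type": "Stratovolcano", "Elevation": 3776},
]
groups = {}
for d in docs:
    if d["Country"] == "United States":          # $match stage
        g = groups.setdefault(d["Type"], {"elevations": [], "count": 0})
        g["elevations"].append(d["Elevation"])   # $push accumulator
        g["count"] += 1                          # $sum: 1 accumulator
print(groups)
# {'Stratovolcano': {'elevations': [4392], 'count': 1},
#  'Shield': {'elevations': [4169], 'count': 1}}
```

The Japanese volcano never reaches the `$group` stage, which is the point of putting `$match` first: later stages only see data that survives the filter.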
There are many other powerful aggregation, data wrangling, and analytics scenarios that could take advantage of aggregation pipeline capabilities. To learn about every aggregation pipeline construct and other MongoDB syntax constructs currently supported by Azure Cosmos DB, please see MongoDB feature support.
Support for the aggregation pipeline is now in public preview and can be enabled on the Preview Features page of any MongoDB API account in the Azure portal.
Unique Indexes and Index Management
Azure Cosmos DB indexes every field in documents that are written to the database by default. Unique indexes ensure that a specific field doesn’t have duplicate values across all documents in a collection, similar to the way uniqueness is preserved on the default “_id” key. Now you can create custom indexes in Azure Cosmos DB by using the createIndex command, including the ‘unique’ constraint.
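For illustration, here is a minimal sketch of what the constraint enforces; the createIndex call is shown pymongo-style in a comment, and the field name employee_id is hypothetical.

```python
# With pymongo, the unique index itself would be created like:
#   collection.create_index([("employee_id", 1)], unique=True)
# (the field name employee_id is just an illustration).
#
# A pure-Python stand-in showing the behavior the constraint guarantees:
class UniqueIndexError(Exception):
    pass

class TinyCollection:
    """In-memory stand-in that rejects duplicate values on an indexed field."""
    def __init__(self, unique_field):
        self.unique_field = unique_field
        self.seen = set()
        self.docs = []

    def insert(self, doc):
        key = doc[self.unique_field]
        if key in self.seen:
            raise UniqueIndexError(f"duplicate key: {key!r}")
        self.seen.add(key)
        self.docs.append(doc)

col = TinyCollection("employee_id")
col.insert({"employee_id": 1, "name": "Ada"})
try:
    col.insert({"employee_id": 1, "name": "Grace"})  # duplicate, rejected
except UniqueIndexError as e:
    print(e)  # duplicate key: 1
```

In the real service the rejection happens server-side at write time, just as it does for the default "_id" key.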
Unique indexes are available for all MongoDB API accounts.
MongoDB wire protocol (version 5)
Azure Cosmos DB implements the MongoDB wire protocol, which enables its MongoDB API to be compatible with most MongoDB applications and tools. The MongoDB wire protocol version that Azure Cosmos DB supports is now version 5. With this change, the wire protocol support is on par with MongoDB 3.4. This enables Azure Cosmos DB to be used by a wider range of applications that take advantage of the capabilities of wire protocol version 5. For instance, applications using Azure Cosmos DB with MongoDB API can now take advantage of the types defined by the BSON specification, such as Decimal128.
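As an aside on why a type like Decimal128 matters: binary floats cannot represent many decimal fractions exactly, which is the problem a 128-bit decimal type solves. Python's built-in decimal module illustrates the difference; this is an analogy for the behavior, not the BSON type itself.

```python
from decimal import Decimal

# Binary floating point accumulates representation error:
print(0.1 + 0.2)                        # 0.30000000000000004

# A decimal type (the role Decimal128 plays in BSON) stays exact,
# which matters for money and other base-10 quantities:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```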
This capability is now in public preview and can be enabled on the Preview Features page of any MongoDB API account in the Azure portal.
Try it now
Since these enhancements are a service-side change, you do not need to download anything or make any changes on the client side. The only thing you need to do is enable the aggregation pipeline public preview or MongoDB wire protocol (version 5) features via the switches in the Azure portal:
You can get started with the Azure Cosmos DB MongoDB API by creating an account and connecting to it using the credentials from the Connection String page in the Azure portal.
We continuously evaluate use cases that could take advantage of these new capabilities. We hope that these improvements to the MongoDB API unlock more scenarios and use cases for your applications. Please let us know at askcosmosmongoapi@microsoft.com if you have any feedback on these new capabilities.
You can try Azure Cosmos DB for free today, no sign up or credit card required. Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter #CosmosDB and @AzureCosmosDB.
- Your friends at Azure Cosmos DB
Azure #CosmosDB @ Microsoft Connect(); 2017
Today, we’re excited to make several Azure Cosmos DB announcements. Azure Cosmos DB is the first globally distributed database service that lets you elastically scale throughput and storage across any number of geographical regions while guaranteeing low latency, high availability, and consistency – all backed by the most comprehensive SLAs in the industry. Azure Cosmos DB is built to power today’s IoT and mobile apps, and tomorrow’s AI-hungry future.
Azure Cosmos DB is the first cloud database to natively support a multitude of data models and popular query APIs, is built on a novel database engine capable of ingesting sustained volumes of data and provides blazing-fast queries – all without having to deal with schema or index management. And it is the first cloud database to offer five well-defined consistency models so you can choose just the right one for your app.
Today at the annual Microsoft Connect(); 2017 event in New York City, we are excited to share our continued commitment to all developers by announcing the expansion of our multi-model and multi-API capabilities, further advances in our service, and enhancements to our SLAs. These announcements include:
- Azure Cosmos DB Cassandra API: We are excited to launch the preview of native support for Apache Cassandra API – offering you Cassandra as-a-service powered by Azure Cosmos DB. You can now experience the power of Azure Cosmos DB platform as a managed service with the familiarity of your favorite Cassandra SDKs and toolchain. Learn more about it here and sign up today to get access to the Azure Cosmos DB Cassandra API to easily build planet-scale Cassandra apps.
- Azure Cosmos DB SLA update - 99.999% read availability at global scale: We have been making continuous improvements to our stack and today we are proud to announce even stronger SLAs for Azure Cosmos DB - now databases spanning multiple regions will have 99.999 percent read availability. Learn more by reading Azure Cosmos DB SLA page.
- General availability of Azure Cosmos DB Table API: We are happy to announce the general availability of Table API. With the Azure Cosmos DB Table API your applications written for Azure Table storage can now leverage premium capabilities of Azure Cosmos DB, such as turnkey global distribution, low latency reads/writes, automatic secondary indexing, dedicated throughput, and much more. Azure Cosmos DB Table API support is accessible through the Azure Portal and Azure CLIs.
- General availability of Azure Cosmos DB Gremlin API: We are also happy to pre-announce the general availability of Azure Cosmos DB Gremlin (Graph API), which will be coming at the end of 2017. This update will deliver critical improvements to the performance of graph operations, improved import and backup scenarios through new tooling, and enhanced support for open-source frameworks recommended by Apache Tinkerpop, including Python client support. With GA, we will also simplify migration from popular database engines like TitanDB, Neo4j and others.
- Azure Cosmos DB MongoDB API extended capabilities: Today, we are also happy to announce the public preview of aggregation pipeline support, which allows Azure Cosmos DB developers using the MongoDB API to perform data manipulation in multi-stage pipelines within a single query, streamlining the development of more sophisticated aggregations. The unique index capability is now generally available and allows you to introduce a uniqueness constraint on any document fields, which are already auto-indexed in Azure Cosmos DB. Azure Cosmos DB now also implements the MongoDB 3.4 wire protocol, allowing the use of tools and applications that rely on it.
- General availability of Azure Cosmos DB Spark connector: For customers looking to run Spark over globally distributed operational data, today we are announcing the general availability of the Spark connector for Azure Cosmos DB. Spark connector for Azure Cosmos DB enables real-time data science, machine learning, advanced analytics and exploration over globally distributed data in Azure Cosmos DB by connecting it to Apache Spark. The connector efficiently exploits the native Azure Cosmos DB managed indexes and enables updateable columns when performing analytics. It also utilizes push-down predicate filtering against fast-changing globally-distributed data addressing a diverse set of IoT, data science, and analytics scenarios. Spark structured stream support using Azure Cosmos DB change feed, query performance improvements, and support for the latest Spark version are also included.
With Azure Cosmos DB, our mission is to enable the world’s developers to build amazingly powerful, cosmos-scale apps, more easily. We’re thrilled to see all the applications that are being built with Azure Cosmos DB every day. Your apps are helping us define our product (which capabilities to add, which APIs to support, and how to integrate with other products and services), all making our database service even better. Azure Cosmos DB already powers e-commerce, banking, automotive, professional services, technology companies and manufacturers, startups, and education and health solutions. It is used everywhere in the world. After extensive use inside Microsoft, we are excited and humbled that external customers and developers are really loving Azure Cosmos DB; this is something we are really proud of. The revolution that is leading thousands of developers to embrace Azure Cosmos DB has just started, and it is driven by something much deeper than our product features. Building a product that allows for significant improvements in how developers build modern applications requires a degree of thoughtfulness, craftsmanship, and empathy toward developers and what they are going through. We understand that because we are developers ourselves, and we are super-excited to see what our fellow developers will build with Azure Cosmos DB! We are just getting started…
— Your friends at Azure Cosmos DB (@AzureCosmosDB, #CosmosDB)
Connect(); announcements
We’re announcing a bunch of new things at Connect(); this week. It’s an exciting time. Connect(); is an annual developer event where we focus particularly on improving the overall experience for developers. We’ve queued up a lot of good news and I wanted to share a few highlights – particularly from the DevOps space that I’m deeply engaged in.
Team Foundation Server 2018 final release
I’m excited to announce that we released Team Foundation Server 2018 this week. We’ve been hard at work on it since we released TFS 2017 Update 2 back in July. TFS 2018 is a major update to Team Foundation Server and has a ton of improvements you can read all about in the release notes.
- TFS 2018 Release notes
- TFS 2018 web installer
- TFS 2018 ISO image
- TFS 2018 Express web installer
- TFS 2018 Express ISO image
I want to comment on a few of my favorite things:
- Mobile work item experience – We now support a work item experience optimized for phones. It makes it super easy to check on your work, and to comment on, share, and route issues from anywhere at any time.
- Wiki – Now you can have rich wiki experiences as part of every one of your projects. Edit in markdown and create rich pages with links, tables, images and more.
- Git Forks – For years, TFS has supported Git repos and we’ve iterated to make the experience better and better. Starting with TFS 2018 we now support forking Git repos to better enable collaborating at a distance.
- GVFS support – In May of this year I announced work we were doing to scale Git to the largest code bases on the planet starting with the Microsoft Windows code base. That work involves improvements both to the client and to the server. It also involves a new virtual file system we call Git Virtual File System (GVFS) that instantiates portions of the repo on demand. The result is dramatic – Git commands that run 2 or 3 orders of magnitude faster. I’m extremely pleased to announce that we are releasing GVFS support built into TFS 2018 so you too can benefit from the best performance in the industry on large Git repos. In addition to releasing server support in TFS 2018, we are releasing built, signed versions of the client components that we open sourced – making it much easier for you to acquire and install GVFS. Read more about GVFS announcements in Ed’s post.
- Graphical release definition editor – TFS 2018 includes our new release definition editor that makes it really easy to configure and visualize release workflows.
- Deployment groups – TFS 2018 includes our new Deployment Groups feature, which performs agent-based deployments across potentially large and complex applications.
These are just a few of the highlights in TFS 2018. I encourage you to check out the release notes to learn more – there are a ton of great improvements.
GVFS Momentum – Scaling Git to the world’s largest code bases
In addition to including GVFS in TFS 2018 and the new client binaries I described above, I want to talk a bit about GVFS momentum. A lot has happened since we announced our plans to build GVFS for the Windows repo, which weighs in at over 300 GB and more than 5.5 million files: we delivered support in Visual Studio Team Services and open sourced all the client components. Interest in scaling Git to the largest code bases in the world has been intense. I’ve had numerous customers reach out to me to learn how to apply Git to their very large code bases. We’ve also seen tons of interest among other players in the Git ecosystem in making GVFS the de facto standard for really scaling Git.
At Connect(); GitHub announced that they are working on adding GVFS support, making scalable Git available to the entire open source world. Stay tuned for more info from them on timing. They will also be working closely with us to further improve GVFS and bring it to Mac and Linux users. I’m excited to partner with GitHub on this. They have a lot of experience with the Git community and with running Git hosting at scale, and I believe this partnership can only help us collectively advance the future of Git.
Atlassian also joined the GVFS chorus. They were among the first on board, adding GVFS support in SourceTree early on. More recently they released an extension to Bitbucket that enables Bitbucket customers to use the GVFS client against their backend. It’s exciting to see another Git provider adding GVFS support.
We also have seen continued momentum in the Git client space – with Tower Git and gmaster both adding support.
In parallel with the growing industry support, we’ve continued to improve GVFS by iterating on the top performance and scale issues. The result is that, every month, it is getting faster and more scalable. Again, I encourage you to read Ed’s GVFS update for more detail.
Azure DevOps Projects
Over the past year or so we’ve been working to make deployment to Azure as easy as possible. At Connect(); we released a new “Getting Started” experience for developers we call Azure DevOps Projects. This experience enables you to *very* easily create a new small sample app, using a wide variety of tech stacks (.NET, Java, Node.js, Python, …) and configure a full CI/CD pipeline. With a few clicks, you get a Git repo with the sample app, a CI build definition, a release pipeline and a provisioned and deployed app. If you already have your own app in a Git repo, we’ll help you get a CI/CD pipeline set up for that too. The entire experience is done within the Azure portal and starts by creating a new DevOps Project.
You are prompted with a few screens to specify the parameters of your app: the language, the framework, the hosting type, and lastly your VSTS account and Azure resources.
And, when you are done, you get a fully configured DevOps pipeline and a deployed app. Updating and redeploying is as easy as committing a change to master and watching the changes flow through the pipeline. Magic! It’s never been easier to get started with a cloud app and a solid CI/CD foundation.
Pipeline as code – YAML public preview
This week we are also releasing a public preview of YAML support for VSTS build definitions. With YAML, you can represent your build pipeline as a text file that describes the build workflow and actions. You can then check it in with your code and use Pull Requests to manage changes, revert to previous versions, flow build changes across repos and branches along with associated code changes, and more. This is an important milestone and evolution in the VSTS CI/CD experience.
This does not represent a different build system – it’s not like the transition from XAML builds to Pipelines and Tasks (Build.vnext), which I know was a big transition and created challenges for people. Our YAML support is an incremental evolution of the Pipelines and Tasks build system currently in TFS and VSTS. It uses the same build engine and the same agents. Most importantly, it supports the same task ecosystem – so the hundreds of build extensions in the VSTS marketplace can all be used inside a YAML build definition, just like in the graphical editor.
Here’s an example of a base YAML build template for an ASP.NET Core build…
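As a sketch, a minimal definition of this kind looked roughly like the following during the preview. The exact schema, step names, and agent queue name evolved over time, so treat this as illustrative rather than the exact template:

```yaml
# Illustrative minimal YAML build definition for an ASP.NET Core app,
# in the style of the early VSTS YAML preview. Queue and step names
# are examples, not guaranteed to match the shipped template.
queue: Hosted VS2017

steps:
- script: dotnet restore
  displayName: Restore packages
- script: dotnet build --configuration Release
  displayName: Build
- script: dotnet test --configuration Release --no-build
  displayName: Run tests
```

Because this is just a text file checked in next to your code, changes to the pipeline go through the same pull request review as the code they build.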
This is, of course, a work in progress. Our YAML build definitions work well and we’ve already adopted them on some of our own teams for our regular builds, but there’s lots more to do. A few key examples are:
- Over the next 2 weeks we will roll out support (as part of our sprint 126 deployment) for exporting a build definition in the graphical editor as a YAML definition. This will be a 100% lossless transformation, but it’s one way. You can’t load a YAML build definition in the graphical editor – you’ll have to choose whether each build definition is managed as a YAML file or in the graphical editor.
- Over the next few months, we’ll be working to extend our YAML definitions to cover our release pipelines as well as our build pipelines. Right now, our YAML support only includes build; release is under development.
- Once YAML support is baked and hardened, we will release it in our on-prem TFS product as well. I don’t know exactly when that will be but I hope sometime in 2019.
Check out Chris’s VSTS Pipelines as YAML blog post for more details, or read the Getting Started with YAML docs to just try it out. I’m really excited about this and I hope you are too. This retires yet another of our top 10 UserVoice requests (with 662 votes) and I really hope you like it.
Release management gates
An important part of any proper DevOps process is gradually deploying an app across the user base and monitoring its progress and health as it rolls out. Our release management pipeline, with environments and approvals, provides a great construct for this. I’ve mentioned before that we use VSTS release management to deploy VSTS itself and that we have over a dozen scale units organized into rings of deployment. A ring is deployed, and then we wait about 24 hours to monitor the health of the system before we progress to the next ring. A new feature we are releasing this week on VSTS allows you to automate this process. It lets you create release “gates” that specify conditions necessary to begin or complete a deployment to an environment. For instance, you can configure an environment to deploy, wait 24 hours, ensure there are no blocking work items against the release, and ensure there are no monitoring alerts before proceeding with the subsequent environment deployments. This will enable you to automate a process that is often manual today.
Here’s an image of a simple pipeline where I’ve configured a Blocking bugs gate and an Azure monitoring gate.
The awesome thing is that you can create any kind of gate that you like. You can create custom gate logic using Azure Functions, REST APIs or independent services connected via Azure Service Bus. These custom gates can do anything you like and then return/post back readiness data. In the next couple of weeks I hope to release a sample release gate, written as an Azure Function, that analyzes Twitter sentiment and presents the result as a release gate status. The possibilities are endless.
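To make the idea concrete, here is a minimal sketch in Python of the kind of readiness logic a custom gate might implement behind an Azure Function or REST endpoint. The function name, inputs, and response shape here are hypothetical illustrations, not part of the VSTS gate contract:

```python
# Hypothetical custom release-gate logic. In practice this would run behind
# an Azure Function or other REST endpoint that the release pipeline polls;
# the inputs and the shape of the result are illustrative only.
def evaluate_gate(blocking_bugs, active_alerts, hours_since_deploy, soak_hours=24):
    """Return a gate result the release pipeline could evaluate."""
    reasons = []
    if blocking_bugs > 0:
        reasons.append(f"{blocking_bugs} blocking bug(s) open")
    if active_alerts > 0:
        reasons.append(f"{active_alerts} monitoring alert(s) firing")
    if hours_since_deploy < soak_hours:
        reasons.append(f"soak time {hours_since_deploy}h < required {soak_hours}h")
    # "succeeded" lets the deployment proceed; "pending" means keep waiting.
    return {"status": "succeeded" if not reasons else "pending", "reasons": reasons}

print(evaluate_gate(0, 0, 25))            # {'status': 'succeeded', 'reasons': []}
print(evaluate_gate(2, 0, 25)["status"])  # pending
```

The pipeline would re-evaluate a gate like this on an interval until it succeeds or the gate times out.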
Release management gates will help you evolve your release management process to the next level!
VSTS Symbol Server – public preview
For several years now, adding support for TFS/VSTS to host symbols has been a top customer request. Here’s the latest vote count from UserVoice.
I’m incredibly excited to announce a public preview of Symbol Server support in VSTS as part of our Package Management extension. The updated Index Sources & Publish Symbols task now supports publishing to “Team Services”. All you have to do is check “Publish symbols” and leave the default Server Type of “Team Services”. We handle the rest.
Then you configure Visual Studio 2017 Update 5 or later to retrieve symbols from your VSTS account, and you can attach to any running build of your code built by VSTS and get source and symbols without ever having the code on your machine. This is transformational for your ability to debug “it doesn’t happen on my machine” problems. Read Alex’s Symbol Server post for more details.
We plan to bring Symbol Server support to our on-prem TFS product as soon as we can.
Cloud hosted Mac builds in VSTS
We also announced preview availability of free, cloud-hosted continuous integration (CI) and continuous delivery (CD) on macOS as part of Visual Studio Team Services (VSTS). VSTS now supports building and releasing Apple iOS, macOS, tvOS, and watchOS applications without requiring teams to provide and maintain their own Mac hardware. With this release, VSTS becomes the first CI/CD system in the cloud to offer Linux, macOS, and Windows in a unified solution.
Microsoft keeps hosted macOS installations updated with the latest build tools and SDKs including Xcode, Android, and Xamarin. The Apple App Store extension in the Visual Studio Marketplace simplifies releasing applications to beta testing and production environments. The VSTS Secure Files library keeps certificates and provisioning profiles protected during CI and CD. Furthermore, teams can take advantage of Visual Studio App Center to build, test, distribute, and monitor apps, as well as implement push notifications.
TFS data import service general availability
The TFS Database Import Service has reached general availability (GA). The Import Service enables customers to migrate from their on-premises Team Foundation Server (TFS) to our cloud-hosted SaaS service, Visual Studio Team Services (VSTS). During the preview period, we used the data import service to help hundreds of customers import their TFS Team Project Collections into VSTS accounts. We now have enough confidence in it to open it for broad, self-service use.
Customers no longer require approval from Microsoft to onboard and begin their migrations. To find out more and get started, visit: https://aka.ms/tfsimport.
Visual Studio Team Services CLI (VSTS CLI) – public preview
We’ve made available for download the first preview release of our open source command-line tools for Visual Studio Team Services (VSTS). In this first release, developers on Windows, Linux, or Mac can authenticate and interact with VSTS from the command line. Commands for work items, source repos, and builds enable you to do things like create a pull request or queue a build.
In future releases we’ll expand support for additional commands against a wider set of VSTS capabilities and further streamline the experience based on customer feedback.
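To give a flavor of the experience, a session with the preview CLI might look like this. The command and option names follow the first preview and may change in later releases, and the account URL, project, branch, and definition names are made up for illustration:

```shell
# Sign in with a personal access token and set defaults (names illustrative).
vsts login --token <personal-access-token>
vsts configure --defaults instance=https://fabrikam.visualstudio.com project=MyProject

# Queue a build and open a pull request from the command line.
vsts build queue --definition-name MyApp-CI
vsts code pr create --source-branch feature/login --target-branch master \
    --title "Add login page"
```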
So let’s recap…
- Azure DevOps Projects – A super simple way to get started with a full DevOps pipeline for Azure.
- TFS 2018 – A major update to TFS with tons of significant improvements like Wikis, Forking, Deployment groups, …
- GVFS – Installable binaries and a new partnership with GitHub to drive GVFS as the industry standard for Git at scale.
- YAML – Support for pipelines as code for maximum flexibility in your DevOps pipeline.
- Release gates – Create automated measures of health and readiness to control your release pipeline.
- Symbol Server – Publish all your symbols to VSTS and make symbols and source easily available anywhere and any time.
- VSTS command line – A new cross platform command line for automating all your DevOps processes.
- Hosted Mac agents – Build and release macOS, iOS, tvOS and watchOS code on hosted agents.
It’s mind-blowing, and these are just a few of the exciting announcements we made at Connect();. I hope you like all the improvements, and we look forward to any feedback you have.
Brian
The Latest in Developer Productivity and App Experiences
Whatever the language or platform, developers want the same thing – to create app experiences that are high-quality, intelligent and personalized. Experiences that delight users and keep them engaged. To do that, we need tools that increase our productivity, so that we spend more time on what matters most to our app’s success.
At Connect(); 2017 we are showcasing new tools and services that demonstrate Microsoft’s commitment to developer productivity and incredible app experiences.
Visual Studio App Center – Build, Test, Deploy, Engage, Repeat.
Today we announced the general availability of Visual Studio App Center (formerly known as the Mobile Center preview), a groundbreaking new developer service that helps you ship apps more frequently, at higher quality, and with greater confidence. App Center is designed for all apps targeting iOS, Android, Windows, and macOS, whether written in Swift, Objective-C, Java, C#, JavaScript, or any other language.
Delivering fantastic app experiences takes more than great authoring tools. You also need to continuously build, test, deploy, and monitor real-world app usage, and iterate. One option is to stitch together multiple products into a workflow, but building and maintaining connections between these systems introduces risk and costs time, which takes you away from your mission of creating great apps.
That’s why we created App Center, a one-stop service for everything you need to manage your app lifecycle. Just connect your repo to App Center, and within minutes automate your builds, test on real devices in the cloud, distribute apps to beta testers, and monitor real-world usage with crash and analytics data. All in one place. You can use all of App Center or mix-and-match just the services you need.
With App Center, you can:
- Build your apps in the cloud, with every commit or on-demand, without managing build agents
- Test apps on thousands of real iOS and Android devices using XCUITest, Espresso, Appium, and other popular test frameworks
- Distribute your apps to beta testers and users on Android, iOS, Windows, and macOS with every commit or on demand. And when you’re ready, deploy to public app stores or Intune
- Monitor apps for crashes and create automatic work items in your bug tracker
- Analyze user behavior with out-of-the-box reports, custom event tracking, and continuous export to Azure Application Insights for deeper analysis
- Engage your users with push notifications
For a deeper dive on App Center, check out Keith Ballinger’s post on the App Center blog. Or just give it a try – sign up and let us know what you think.
Visual Studio Live Share
Today we also announced that we’re working on a new feature we call Visual Studio Live Share. Getting quick peer feedback and demonstrating your work can be tough. Screen-sharing solutions don’t convey the full context or enable the developers to independently explore the source code or debugger state. If you need to set up an environment or sync a repo to collaborate, you often won’t bother. Calling someone over to your desk is great, but it’s not possible when you work with remote teammates.
With Visual Studio Live Share, you can share the full context of your code with your teammate instantly and securely. Your teammate can edit and debug with you in real-time in their personalized editor or IDE, enabling real-time collaboration. Learn more about Visual Studio Live Share.
Visual Studio Tools for AI
When creating an application, some features are much easier to build when using a special-purpose library, like compressing files or generating a PDF. Making intelligent applications is no different: trained deep-learning models are like libraries you can include in your app to do amazing new things like recognizing objects in pictures, translating speech, and more.
To make it easier for you to infuse AI into your apps, we’ve made Visual Studio a great place to train the models you need and then use them in your application like any other resource. And today we are proud to announce Visual Studio Tools for AI, a free extension that works with Visual Studio 2015 and Visual Studio 2017.
This new extension makes it easy to get started training models with any of the popular deep learning frameworks, including TensorFlow, CNTK, Theano, Keras, Caffe2, and more, using new VS project templates. Visual Studio is a great IDE for training your models because it’s so easy to step through and debug the training code. Models are often written in Python, and Visual Studio is a powerful Python IDE.
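As a stand-in for the kind of training loop you would step through in the debugger, here is a tiny, dependency-light example in plain NumPy. A real project would use TensorFlow, CNTK, Keras, or another framework; the model and data here are purely illustrative:

```python
# A tiny, self-contained stand-in for the kind of training code you might
# debug line by line. Plain NumPy keeps it framework-free; real training
# scripts would use TensorFlow, CNTK, Keras, etc.
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on logistic regression."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w                         # natural breakpoint lines
        b -= lr * grad_b
    return w, b

# Learn a simple AND-like function from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_logistic(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())  # [0, 0, 0, 1]
```

Setting a breakpoint inside the loop lets you watch the weights and loss evolve, which is exactly the workflow the extension brings to full-scale frameworks.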
We also integrated TensorBoard monitoring within Visual Studio. You can use TensorBoard to visualize the quality of your model, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it.
To make you even more productive when training your models, Visual Studio Tools for AI integrates with Azure Batch AI and Azure Machine Learning services, so that you can submit deep learning jobs to Azure GPU VMs, Spark clusters and more. Many developers test their models on smaller data sets on a dev box, and then train against larger datasets in the cloud. And running your code in the cloud doesn’t mean you have any less visibility with the integrated job monitoring in Visual Studio Tools for AI. You can even upload data and download logs and models all from within Visual Studio.
Once training is complete, building intelligent applications in Visual Studio is as easy as putting your trained model in your app, just like any other library or resource. Keeping your model-training code alongside your app code, and using the same process to manage your complete solution, provides a seamless way to design, build, validate, and deploy your intelligent app end-to-end.
For more details on Visual Studio Tools for AI, check out the extension in the marketplace.
Visual Studio for Mac
The latest Visual Studio for Mac offers something for everyone. For mobile developers, our iOS development experience is smoother: Visual Studio can now use Fastlane to automatically set up your devices for development and manage provisioning profiles for you. It also fully supports the new iOS 11, tvOS 11, and watchOS 4 APIs. Along with support for the new .NET Core 2.0, we have added Docker support, allowing your web backends and applications to be deployed directly to Azure App Service from the IDE. And VSTest support gives Visual Studio for Mac developers an integrated experience with a wide array of popular test frameworks, including MSTest and xUnit.
For more details, check out the Visual Studio for Mac release notes.
Xamarin
With .NET Embedding, developers can now turn their .NET code into native libraries for Android and iOS, which can be integrated into existing codebases written in Swift, Java, or Objective-C. And we are now shipping the Xamarin Live Player as a preview in Visual Studio and Visual Studio for Mac, enabling developers to write code that is updated live on their device or simulator as they type, changing the way you develop mobile applications forever.
More details are available on Joseph Hill’s post on the Xamarin blog.
First class Kubernetes support
Building containerized, microservices-based apps is difficult. Kubernetes has made it easy to deploy and run containers, but you still have to figure out how to work on your code in the context of the overall application. Collaboration with other developers is tricky as they make changes to other microservices in the same app. Visual Studio Connected Environment for AKS enables you to rapidly and safely develop, debug, and test your microservices by extending your local dev experience to a Kubernetes-based environment on Azure. You get the full experience of working in Visual Studio and Visual Studio Code, but you are always working on your code within the context of the other microservices that your code supports or depends on.
Learn more on Scott Hanselman’s blog post.
Visual Studio Team Services
We now offer Mac build hosts to build your iOS, Mac, and tvOS applications. We have also delivered a completely new, powerful, command line interface for Visual Studio Team Services.
Check out Brian Harry’s blog for more details.
Join us!
Join us for the rest of Connect(); 2017 for live-streamed and on-demand technical sessions, as well as hands-on training. There’s never been a better time to be a developer, especially with Microsoft’s developer tools and services helping you at every step of the way.
Nat Friedman, Corporate Vice President, Mobile Developer Tools
Nat is CVP for the Mobile Developer Tools team at Microsoft. He co-founded Xamarin, Inc. with Miguel de Icaza in 2011 and served as CEO through its acquisition by Microsoft in 2016. Earlier in his career, Nat served as CTO for the Linux business at Novell, co-founded Ximian with Miguel in 1999, and co-founded and served as chairman of the GNOME Foundation in 1997. He is passionate about building products that delight developers. Nat has two degrees from MIT and has been writing software for 27 years. He is an avid traveler, active angel investor, and a private pilot.