Because it’s Friday: 952 in your head

Countdown is one of those quintessentially British shows that I can't imagine anywhere else. (Reading the Wikipedia article just now I learned that it started as a French show, which I can well imagine, but I warrant it lacks the essential earnestness of the British version.) If you haven't seen it, it's a game show where contestants solve arithmetic and word puzzles in pursuit of some ludicrously trivial prize, like a teapot. (I may be misremembering that last bit.) You might have seen the word puzzle in a segment on The IT Crowd, and this numbers game effort is truly impressive:

(If you think you have a better solution, remember that you can only use each drawn number once.) I don't remember Carol Vorderman ever making an arithmetic error or getting stumped when I watched Countdown in the UK in the 90s, so her call for a calculator makes this clip doubly impressive!

That's all for this week. We'll be back with more for the blog next week — have a great weekend!


Announcement: Publish markdown files from your git repository to VSTS Wiki

This feature will be available in VSTS after the deployment of the Sprint 132 update is completed. Now you can publish markdown files from a git repository to the VSTS Wiki. Developers often write SDK documents, product documentation, or README files explaining a product in a git repository. Such pages are often updated alongside code... Read More

Deployment Groups is now generally available: sharing of targets and more…

We are excited to announce that Deployment Groups is out of preview and is now generally available. Deployment Groups is a robust out-of-the-box multi-machine deployment feature of Release Management in VSTS/TFS.  What are Deployment Groups? With Deployment Groups, you can orchestrate deployments across multiple servers and perform rolling updates, while ensuring high availability of your application... Read More

Support for tags in cost management APIs is now available

The Cloud Cost Management (CCM) APIs provide a rich set of endpoints for detailed reporting on your Azure usage and charges. We continue to make the APIs more relevant, and with the growing adoption of tags in Azure, we’re announcing support for tags in both the Usage Details API and the Budgets API. Our support for tags will continue to improve across all the APIs where grouping or filtering by tags is applicable. This release only supports tags for subscriptions in Enterprise Agreements (EA); in future releases we plan to support other subscription types as well.

Tags in usage details

The Usage Details API today supports filters for the following dimensions: date range, resource groups, and instances. With the most recent release we now support tags as well. The support for tags is not retroactive and will only apply to usage reported after the tag was applied to the resource. Tag-based filtering and aggregation are supported by the $filter and $apply parameters respectively. We will continue to add dimensions that can be used to filter and aggregate costs over time.
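To make the shape of a tag filter concrete, here is a minimal Python sketch of a call to the ARM Usage Details endpoint. The subscription ID, api-version, tag name, and exact $filter grammar are illustrative assumptions rather than confirmed values, so check the Usage Details API reference before relying on them:

# Minimal sketch: query the ARM Usage Details API with a tag filter.
# The subscription ID, api-version, and the exact $filter grammar for tags
# are assumptions for illustration; consult the Usage Details API docs.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
TOKEN = "<azure-ad-bearer-token>"                          # obtained via Azure AD

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Consumption/usageDetails"
)
params = {
    "api-version": "2018-03-31",                    # assumed version supporting tags
    "$filter": "tags/environment eq 'production'",  # hypothetical tag filter expression
}
resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for item in resp.json().get("value", []):
    props = item.get("properties", {})
    print(props.get("instanceName"), props.get("pretaxCost"), props.get("tags"))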

Tags in budgets

Budgets can be created at the subscription or resource group level and support filters to scope the budget to a specific set of resources. Today, filters support resource groups, instances, and meters; with this release they will also include tags. Scoping a budget to a tag or a set of tags will continue to leverage filters. Filters currently only support basic operations, but in future releases they will become more expressive and enable finer-grained scoping of budgets based on supported dimensions.
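As a rough illustration, the sketch below creates a budget whose filter is scoped to a tag through the ARM Budgets endpoint. The api-version and the exact shape of the filters object (including the tags field) are assumptions made for the example, so verify them against the Budgets API reference:

# Minimal sketch: create a budget scoped by a tag via the ARM Budgets API.
# The api-version and the exact shape of the "filters" object (including the
# "tags" field) are assumptions for illustration; see the Budgets API reference.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
TOKEN = "<azure-ad-bearer-token>"
BUDGET_NAME = "prod-monthly-budget"                        # hypothetical budget name

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Consumption/budgets/{BUDGET_NAME}"
)
body = {
    "properties": {
        "category": "Cost",
        "amount": 5000,
        "timeGrain": "Monthly",
        "timePeriod": {"startDate": "2018-04-01T00:00:00Z",
                       "endDate": "2018-12-31T00:00:00Z"},
        # Hypothetical filter: scope the budget to resources tagged env=production
        "filters": {"tags": {"env": ["production"]}},
    }
}
resp = requests.put(url, json=body,
                    params={"api-version": "2018-03-31"},  # assumed api-version
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())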

Tags in Power BI

The Power BI content pack is also being updated to add tag-based reporting on costs. The content pack enables cost aggregation by tag and will continue to evolve and support other dimensions as they are added.

Constraints

Support for tags has a few nuances that customers need to be aware of as they start using this feature:

  1. Tag-based aggregation and filtering is only available for EA customers.
  2. Aggregation and filtering by tags is available for all usage data starting September 1, 2017. Usage data prior to September 1, 2017 will not return cost aggregation by tags.
  3. Tags cannot be applied retroactively for cost rollups. For instance, adding a tag to an existing resource will not cause usage reported before the tag was applied to be attributed to that tag.

The usage details calls are supported in both the ARM APIs and the key-based APIs. For subscription and resource group scoped calls, use the ARM APIs to get usage details. For customers looking for tag-based aggregation and filtering at a node in the management hierarchy, use the key-based APIs. As always, we’d like to hear from you about your experience with the APIs or any ideas you have on making them more useful.

Useful links

Virtual Machine Serial Console access

Ever since I started working on the Virtual Machine (VM) platform in Azure, there has been one feature request that I consistently hear customers asking for us to build. I don’t think words can describe how excited I am to announce that today we are launching the public preview of Serial Console access for both Linux and Windows VMs.

Managing and running virtual machines can be hard. We offer extensive tools to help you manage and secure your VMs, including patching management, configuration management, agent-based scripting, automation, SSH/RDP connectivity, and support for DevOps tooling like Ansible, Chef, and Puppet. However, we have learned from many of you that sometimes this isn’t enough to diagnose and fix issues. Maybe a change you made resulted in an fstab error on Linux and you cannot connect to fix it. Maybe a bcdedit change you made pushed Windows into a weird boot state. Now, you can debug both with direct serial-based access and fix these issues with the tiniest of effort. It's like having a keyboard plugged into the server in our datacenter but in the comfort of your office or home.

Serial Console for Virtual Machines is available in all global regions starting today! You can access it by going to the Azure portal and visiting the Support + Troubleshooting section. See below for a quick video on how to access Serial Console.

[Video: accessing the Serial Console for a Linux VM]

Support for Serial Console comes naturally to Linux VMs. This capability requires no changes to existing images and will just start working. However, Windows VMs require a few additional steps to enable it. For all platform images starting in March, we have already taken the required steps to enable the Special Administration Console (SAC), which is exposed via the Serial Console. You can also easily configure this on your own Windows VMs and images, as outlined in our Serial Console documentation. From the SAC, you can easily get to a command shell and interact with the system via the serial console as shown here:

[Screenshot: the Special Administration Console (SAC) accessed via the Serial Console on a Windows VM]

Serial Console access requires you to have VM Contributor or higher privileges on the virtual machine. This ensures that connection to the console is kept at the highest level of privileges to protect your system. Make sure you are using role-based access control to limit access to only those administrators who should have it. All data sent back and forth is encrypted in transit.

I am thrilled to be offering this service on Azure VMs. Please try this out today and let us know what you think! You can learn more in this episode of Azure Friday, this Monday’s special episode of Tuesdays with Corey on Serial Console, or in our Serial Console documentation.

 

Thanks,

Corey

Implementation patterns for big data and data warehouse on Azure

To help our customers with their adoption of Azure services for big data and data warehousing workloads, we have identified some common adoption patterns, which serve as reference architectures for success. So, what patterns do we have for our modern data warehouse play?

Modern data warehouse

This is the convergence of relational and non-relational, or structured and unstructured data orchestrated by Azure Data Factory coming together in Azure Blob Storage to act as the primary data source for Azure services. The value of having the relational data warehouse layer is to support the business rules, security model, and governance which are often layered here. The de-normalization of the data in the relational model is purposeful as it aligns data models and schemas to support various internal business organizations and applications. Azure Databricks can also cleanse data prior to loading into Azure SQL Data Warehouse. It enables an optional analytical path in addition to the Azure Analysis Services layer for business intelligence applications such as Power BI or other business applications.

[Diagram: Modern data warehouse pattern]

Advanced analytics on big data

Here we introduce advanced analytical capabilities through our Azure Databricks platforms with Azure Machine Learning. We still have all the greatness of Azure Data Factory, Azure Blob Storage, and Azure SQL Data Warehouse. We build on the modern data warehouse pattern to add new capabilities and extend the data use case into driving advanced analytics and model training. Data scientists are using our Azure Machine Learning capabilities in this way to test experimental models against large, historical, and factual data sets to provide more breadth and credibility to model scores.  Modern and intelligent application integration is enabled through the use of Azure Cosmos DB which is ideal for supporting different data requirements and consumption.

[Diagram: Advanced analytics on big data pattern]

Real-time analytics (Lambda)

We introduce Azure IoT Hub and Apache Kafka alongside Azure Databricks to deliver a rich, real-time analytical model alongside batch-based workloads. Here we take everything from the previous patterns and introduce a fast ingestion layer which can execute data analytics on the inbound data in parallel alongside existing batch workloads. You could use Azure Stream Analytics to do the same thing; the consideration here is the likely need to join inbound data against currently stored data. This may or may not be a factor in the lambda requirements, and due diligence should be applied based on the use case. We can see that there is still support for modern and intelligent application integration using Azure Cosmos DB, and this completes the build-out of the use cases from our foundation Modern Data Warehouse pattern.

[Diagram: Real-time analytics (Lambda) pattern]

I hope the information shared has been helpful and we look forward to hearing your feedback on the patterns shared in this article.

Announcing Azure Service Health general availability – configure your alerts today

Today I am excited to announce the general availability (GA) of Azure Service Health, a personalized dashboard that provides guidance and support when issues in Azure services affect you. Unlike our public status page which provides general status information, Azure Service Health provides tailored information for your resources. It also helps you prepare for planned maintenance and other changes that could affect the availability of your resources. With Azure Service Health, you can easily configure alerts to ensure that your relevant teams are notified of service health events affecting their resources.

We launched the preview of Azure Service Health in July 2017. We have been evolving the service based on your feedback - including the integration of Service Health alerts. If you haven’t already, set up your Service Health alerts.

Watch this short video to see Azure Service Health in action:

From your personalized dashboard in Azure Service Health, you can view:

Service issues – ongoing issues in Azure services that are impacting your resources. Quickly understand when the issue began and what services, regions, and specific resources are impacted. Share a link referencing the issue with your team or download a PDF summary to share with people who don’t have access to the Azure portal.

Planned maintenance – upcoming maintenance activities in Azure that will affect your resources. Understand when the maintenance will begin. And in the case of virtual machine maintenance, Service Health also shows the exact list of your VMs that will be affected and gives you the ability to perform maintenance on your schedule.

Health advisories – summaries of recommended actions to prevent downtime, including when older Azure features are being retired or if you exceed a usage quota.

Health history – past service issues that have affected the health of your resources, including post-incident summaries after issues have been resolved.

Resource health – resource-level health insights for your resources. Look up a specific resource (like a VM) to see any current or historical health issues.

Health alerts – proactive notifications for Service Health events. Create alerts to inform your relevant teams of issues that they care about – select specific subscriptions, services, and/or regions, and create an action group to alert via email, SMS, webhook, the Azure mobile app, and more.

Service Health alerts are completely configurable to ensure that the right people in your organization are notified appropriately. For example:

  • An alert to email your dev team when a resource in a test/dev subscription is impacted
  • An alert to update ServiceNow via webhook when a resource in a production subscription is impacted
  • An alert to send an SMS to a specific number when resources in a given region are impacted

Get started by creating a service health alert today!
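For teams that prefer to automate this, the following Python sketch creates a Service Health alert as an activity log alert through the ARM REST API. The resource names, action group, api-version, and payload shape are illustrative assumptions, so treat this as a starting point and confirm the details against the Service Health alerts documentation:

# Minimal sketch: create a Service Health alert (an activity log alert with
# category ServiceHealth) through the ARM REST API. Names, the action group,
# and the exact payload shape are illustrative assumptions; verify against the
# activity log alerts documentation before use.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
RESOURCE_GROUP = "monitoring-rg"                            # placeholder
ALERT_NAME = "service-health-dev-team"                      # hypothetical alert name
ACTION_GROUP_ID = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/"
                   f"{RESOURCE_GROUP}/providers/microsoft.insights/"
                   "actionGroups/dev-team-email")           # hypothetical action group
TOKEN = "<azure-ad-bearer-token>"

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/resourceGroups/{RESOURCE_GROUP}/providers/microsoft.insights"
       f"/activityLogAlerts/{ALERT_NAME}")
body = {
    "location": "Global",
    "properties": {
        "enabled": True,
        "scopes": [f"/subscriptions/{SUBSCRIPTION_ID}"],
        "condition": {"allOf": [
            {"field": "category", "equals": "ServiceHealth"},
        ]},
        "actions": {"actionGroups": [{"actionGroupId": ACTION_GROUP_ID}]},
    },
}
resp = requests.put(url, json=body,
                    params={"api-version": "2017-04-01"},   # assumed api-version
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()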

Finally, we would love to hear your feedback about Azure Service Health – simply select the feedback button in the top right corner of the Azure portal and let us know what you like and how we can improve.

Resources

Azure Service Health – overview page

Azure Service Health – documentation

Azure Service Health – in the Azure portal

Azure.Source – Volume 24

Azure database services for MySQL and PostgreSQL, the next generation of Azure Alerts, and Azure Databricks are now generally available. Last week, we were at GDC 2018 in San Francisco with representation from Azure, PlayFab and Xbox. Meanwhile at OCP Summit 2018 in San Jose, we presented on Project Denali, Project Cerebrus, and SONiC. See all the details below on those topics and more.

Now in preview

Azure DNS Private Zones now available in public preview - Azure DNS Private Zones enables customers to host DNS zones within their virtual networks, and it enables name resolution both within and across virtual networks. You can also configure zone names with a split-horizon or split-brain view, allowing a private and public DNS zone to share the same name. Azure DNS Private Zones is available in all Azure regions.

Also in preview

Now generally available

Announcing general availability of Azure database services for MySQL and PostgreSQL - Available in preview since May 2017, both services are now generally available with built-in high availability, a 99.99% availability SLA, elastic scaling for performance, and industry-leading security and compliance. Azure Database for MySQL is a fully managed relational database service based on the open source MySQL Server engine, capable of handling mission-critical workloads with predictable performance and dynamic scalability. Azure Database for PostgreSQL is a fully managed relational database service based on the open source Postgres database engine, capable of handling mission-critical workloads with predictable performance, security, high availability, and dynamic scalability.

The next generation of Azure Alerts has arrived - Now you can set up alerts to monitor the metrics and log data for the entire stack across your infrastructure, application, and Azure platform. With the release of the next generation alerts, we are providing a new consolidated alerts experience and offering a new alerts platform that will be faster and leveraged by other Azure services. In addition, the metric alerts for logs capability announced earlier this month is in public preview.

Azure Databricks, industry-leading analytics platform powered by Apache Spark™ - Azure Databricks, which is now generally available, is a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure. Designed in collaboration with the founders of Apache Spark, Azure Databricks combines the best of Databricks and Azure to help customers accelerate innovation with one-click set up, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.

Global performance acceleration with Azure Traffic Manager - Traffic View, announced in public preview last fall, is now generally available. Traffic Manager provides you with DNS-level routing so that your end users are directed to healthy endpoints based on the routing method specified when you created the profile. Traffic View provides Traffic Manager with a view of your user bases (at a DNS resolver granularity level) and their traffic pattern. When you enable Traffic View, this information is processed to provide you with actionable insights.

Columnstore support in Standard tier Azure SQL Databases - Columnstore indexes are the standard for storing and querying large data warehousing fact tables. They use column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size. Clustered and nonclustered columnstore indexes for Standard databases are now generally available in the S3 and above pricing tiers.

Azure Event Hubs integration with Apache Spark now generally available - Event Hubs users can now use Spark to build end-to-end streaming applications more easily. The Event Hubs connector for Spark supports Spark Core, Spark Streaming, and Structured Streaming for Spark 2.1, Spark 2.2, and Spark 2.3. For users new to Spark, Spark Streaming and Structured Streaming are scalable, fault-tolerant stream processing engines. These processing engines allow users to process huge amounts of data using complex algorithms expressed with high-level functions like map, reduce, join, and window. This data can then be pushed to file systems, databases, or even back to Event Hubs.

Also generally available

News & updates

Build your next iOS and Android game with $2,500+ of gaming services - A limited offer of $2,500 worth of PlayFab, App Center, and Azure services free for up to a year is now available to the first 1000 registrants. PlayFab offers the most complete backend platform built exclusively for live games. Microsoft acquired PlayFab in January.

Cloud Platform Release Announcements for March 21, 2018 - The Cloud Platform News Bytes Blog provides a recap of key announcements that were made last Wednesday across Microsoft’s Cloud Platform organization.

Improved multi-member Blockchain networks now available on Azure - Significant enhancements to our Ethereum on Azure offering, including high availability of Blockchain network presence; a simplified deployment experience; and monitoring and operational support. The Ethereum Consortium Blockchain Network solution template simplifies the infrastructure and protocol substantially by deploying and configuring a consortium Ethereum network from the Azure Portal or cmdline with a single click.

Announcing Terraform availability in the Azure Marketplace - The Terraform solution is now available in the Azure Marketplace. This solution will enable teams to use shared identity, using Managed Service Identity (MSI), and shared state using Azure Storage. These features will allow you to use a consistent hosted instance of Terraform for DevOps Automation and production scenarios.

Azure Security Center and discovery of partner solutions - Security Center makes it easy to enable integrated security solutions in Azure. Auto discovery of partner solutions that have already been deployed in an Azure subscription is now available. Discovered partner solutions will be displayed in security solutions panel. This feature is available in the standard pricing tier of Security Center, and you can try Security Center free for the first 60 days.

7 month retirement notice: Access Control Service - Access Control Service, otherwise known as ACS, is officially being retired. ACS will remain available for existing customers until November 7, 2018. After this date, ACS will be shut down, causing all requests to the service to fail.

Azure Redis Cache feature updates - Firewall and reboot functions are now supported in all three Azure Redis Cache tiers at no additional cost. We are also previewing the ability to pin your Redis instance to specific Availability Zone-enabled Azure regions.

Additional news & updates

Technical content & training

Unlock your data’s potential with Azure SQL Data Warehouse and Azure Databricks - The general availability of the Azure Databricks Service comes with built-in support for Azure SQL Data Warehouse, which enables any data scientist or data engineer to have a seamless experience connecting their Azure Databricks Cluster and their Azure SQL Data Warehouse when building advanced ETL (extract, transform, and load data) for Modern Data Warehouse Architectures or accessing relational data for Machine Learning and AI. In this post, you'll learn how the combination of these services can provide a truly transformative analytics platform for businesses.

Securing Azure Database for MySQL and Azure Database for PostgreSQL - Azure Database for PostgreSQL and Azure Database for MySQL inherit the fundamentally proven trusted security architecture from Microsoft Azure. Azure Database for PostgreSQL and Azure Database for MySQL protection starts with Azure network security. In this post, you'll learn more about the security model.

Compliance offerings for Azure Database for MySQL and Azure Database for PostgreSQL - Azure has over 50 national, regional, and industry-specific compliance offerings that Azure Database for PostgreSQL and Azure Database for MySQL leverage as part of Microsoft’s Trusted Cloud foundation of security, privacy, compliance, and transparency. In this post, you'll learn more about the compliance offerings available.

Serverless computing recipes for your cloud applications - Read the free Azure Serverless Computing Cookbook, which describes, with rich code examples, how to solve common problems using serverless Azure Functions. In this post, you'll learn more about serverless computing, which enables full abstraction of servers, instant event-driven scalability, and pay-per-use.

Text Recognition for Video in Microsoft Video Indexer - Learn about the advanced machine learning in Video Indexer that goes beyond OCR for recognizing and extracting text that is displayed in videos.

Modernize index maintenance with Resumable Online Index Rebuild - Thanks to the growing sizes of databases, index rebuilds can take a very long time. Combine that with the business need for your applications to be always available and performant, and this can be an issue. Big OLTP environments with busy workloads often have very short maintenance windows, some too short to execute large index rebuild operations. You can use Resumable Online Index Rebuild (ROIR) to configure a rebuild to execute only during a maintenance window of defined length, and to pause your rebuild operation at any time to allow other higher-priority tasks to execute.

How to get more leads and close deals faster with Microsoft’s Marketplaces - Microsoft provides two distinct marketplace storefronts that allow partners to list offers, enable trials, and transact directly with Microsoft's customers and ecosystem: Azure Marketplace and AppSource. These storefronts allow customers to find, try, and buy applications and services that accelerate their Digital Transformation, and help publishers grow their businesses by increasing access to Microsoft's customers and partner ecosystem. Watch this webinar, which will soon be available for on-demand viewing, to learn about customer purchase behavior and the latest techniques to acquire customers through the Microsoft Azure Marketplace and AppSource, and to help determine which marketplace is the best sales engine for you.

Deploying WordPress Application using Visual Studio Team Services and Azure – Part two - This post is the second of two blog posts describing how to set up a CI/CD pipeline using Visual Studio Team Services (VSTS) for deploying a Dockerized custom WordPress website working with Azure Web App for Containers and Azure Database for MySQL. This part focuses on the Continuous Delivery (CD) side by using VSTS Release Management.

Events

Join Microsoft at the GPU Technology Conference - We recently announced the general availability of our NVIDIA Tesla V100-powered virtual machines and an expansion of our other offerings to more global regions. If you're in San Jose this week (March 26-29) at NVIDIA’s GPU Technology Conference, stop by Booth 603 to learn how Azure customers combine the flexibility and elasticity of the cloud with the capability of NVIDIA’s GPUs.

OCP Summit 2018

Microsoft creates industry standards for datacenter hardware storage and security - Both storage and security are the next frontiers for hardware innovation. The Open Compute Project (OCP) U.S. Summit 2018, held in San Jose, CA last week, brought together industry leaders to help grow, drive, and support the open hardware ecosystem. Microsoft presented Project Denali (a new standard for cloud SSD storage) and Project Cerebrus (a security co-processor for enabling hardware security). At OCP, we highlighted the latest advancements across these key focus areas to further the industry in enabling the future of the cloud.

Project Denali to define flexible SSDs for cloud-scale applications - Learn more about Project Denali drives, which provide the flexibility needed to optimize for the workloads of a wide variety of cloud applications, the simplicity to keep pace with rapid innovations in NAND flash memory and application design, and the scale required for multitenant hardware that is so common in the cloud.

SONiC, the network innovation powerhouse behind Azure - Learn more about Software for Open Networking in the Cloud (SONiC), the default switch OS powering Azure and many other parts of the Microsoft Cloud. Microsoft open-sourced this innovation to the community, making it available on our SONiC GitHub Repository. SONiC is a uniquely extensible platform, with a large and growing ecosystem of hardware and software partners, that offers multiple switching platforms and various software components.

Developer spotlight

Sample App: Azure Mobile Apps - structured data sync with files - The Azure Mobile Apps client and server SDK support offline sync of structured data with CRUD operations against the /tables endpoint. Generally this data is stored in a database or similar store, and generally these data stores cannot store large binary data efficiently. Also, some applications have related data that is stored elsewhere (e.g., blob storage, SharePoint), and it is useful to be able to create associations between records in the /tables endpoint and other data. This sample adds support for images to the Mobile Apps Todo list quickstart (available for multiple mobile platforms).

Under the hood of the Azure Mobile App - Jacob Jedryszek built the Azure Mobile App for iOS and Android using Xamarin Native in C#. In this blog post, Jacob goes into details on Xamarin, CI/CD, storing secrets, UI testing, and more.

Secure and deploy your mobile apps in Microsoft Azure App Service - Learn more about App Service Authentication/Authorization, which is a feature that provides a way for your application to sign in users so that you don't have to change code on the app backend. It provides an easy way to protect your application and work with per-user data.

How we built it: Next Games' global gaming platform in the Cloud - This episode of the Microsoft Mechanics series How we built it features Chief Technology Officer and co-founder of Finnish mobile gaming company Next Games, Kalle Hiitola. Next Games built a successful global connected gaming platform on Azure spanning 166 countries and growing. Their popular “Walking Dead No Man’s Land” game is aligned to the popular Walking Dead TV series, releasing a rapid cadence of new game chapters and characters to complement each new show episode.

PlayFab Tutorial: Using Resettable Statistics and Leaderboards - This tutorial provides a complete walkthrough of how to configure and manage statistics with versioning, which enables “resetting” of statistics, and by extension, leaderboards. In it, we’ll focus on how to use the Admin API methods for this, with additional info on using the Client and Server API methods to query the data, both for the current version as well as old ones. The goal is to provide developers with a technical review of how resettable statistics work in PlayFab, and all the ways they can be used in your games.

PlayFab Unity Editor Extensions - PlayFab's plugin (currently in beta) houses a new custom inspector serving as the remodeled "front door" for our Unity Developers, which will continue to evolve as PlayFab's features grow. This plugin houses a custom inspector for viewing and configuring the PlayFab SDK.

Azure shows

Using Habitat in Azure - Nick Rycar from Chef stops by Azure Friday to chat with Donovan Brown about Habitat, a simple, flexible way to build, deploy, and manage cloud-native applications. Habitat makes it easier to develop and promote changes by enabling each instance of your application to continually and independently apply updates as soon as they're ready.

For more information, see:

The Azure Podcast: Episode 221 - Graph API - Andrew Liu, a Senior PM on the Azure Cosmos DB team talks to us about the Graph API and he is obviously very passionate about it! He gives us some great use-cases for Graphs and how we can use the service to build these types of applications quickly.


Learn from experts and play with emerging tech at Microsoft Build

Microsoft’s largest developer conference, Microsoft Build, is around the corner, and there’s still time to register. Programmers and Microsoft engineers will gather May 7–9 in Seattle, Washington, to discuss what’s next in cloud, AI, mixed reality, and more. The event will feature incredible technical sessions, inspiring speakers, and interactive workshops—as well as plenty of time to connect and celebrate.

Here’s a preview of what’s coming up at Microsoft Build:

Imagine tomorrow’s tech

Industry leaders—including many Microsoft execs and engineers—will discuss how software is transforming the world in remarkable ways. Devs at any level will learn from an incredible lineup of speakers discussing what’s new, what’s coming, and how technology is a force for good.

Discover the right solutions

Attendees will experience how the Microsoft tools and platforms they rely on can take them (and their code) even further. Microsoft Build will feature more than 350 inspiring technical sessions, workshops, and opportunities to get hands-on experience with the latest tech Microsoft has to offer. There will be ample opportunity to pick up best practices and new skills from sessions such as these:

  • Starting your IoT project in minutes with SaaS and preconfigured solutions. Implementing complete E2E IoT solutions from devices all the way up to business apps can prove challenging. Azure offers solutions that will simplify the experience and minimize time to production. Discover the latest SaaS and preconfigured solutions for IoT on the market today.
  • Building enterprise applications for SAP on Azure. Attendees will hear how Azure can help rapidly develop enterprise applications for SAP landscapes using SAP Cloud Platform for applications such as S/4HANA and SAP HANA on Azure. This demo will show how Microsoft and SAP are collaborating to offer integrated development services with Visual Studio for SAP Fiori and how devs can leverage SAP HANA Express.
  • Processing complex queries and adding the power of machine learning. Learn how adaptive query processing helps application developers solve query performance-related problems. You’ll also learn how to leverage SQL Graph when modeling complex relationships between entities, and how to leverage machine learning services to add artificial intelligence to innovative, modern applications.
  • Start scaling your data with Azure Cosmos DB. Build planet-scale applications with small-scale effort, taking a closer look at important design aspects around global distribution, consistency, and server-side partitioning. Attendees will learn what Azure Cosmos DB offers, how to get started with SQL APIs, and how to distribute data across multiple regions in just a few clicks, using Azure Cosmos DB’s consistency models to fine-tune performance.
  • Using Azure to API-enable and connect the Enterprise. This workshop will showcase the power of Azure in developing sophisticated integration solutions. Our Azure integration services increase productivity and connectivity and deliver incredible performance. Logic Apps, API Management, and Service Bus are the foundation of our event-driven, API-centric orchestration capabilities.

Equip yourself with the best skills and practices in these featured sessions and more at Microsoft Build.

Interact with experts

Developers and industry experts can geek out with the Microsoft product engineers behind the company’s hottest tools and platforms, including Azure, Windows, Visual Studio, and more. Over three days at the epicenter of emerging tech, programmers, engineers, and industry game-changers can network and prepare to lead the world’s digital transformation together.

This is your chance to build the future. Register today to code your tomorrow at Microsoft Build.

#GlobalAzure Bootcamp 2018

This blog post was authored by Magnus Mårtensson, Regional Director and Azure MVP, Loftysoft.

The Global Azure Bootcamp (#GlobalAzure) is a worldwide series of one-day technical learning events for Azure. It is created and hosted by leaders from the global cloud developer community. This is community, pure and simple, at its very best. And you can join too!

If you’d like to host a Global Azure Bootcamp, all you need to do is sign up and create an event for your location, which will put your pin on our map of locations. Making your event happen is easy; follow these guidelines for more information. All you need is a location (a conference room will often suffice!), some local experts to speak and tutor a workshop, and a plan to get people to attend. But you won’t be on your own! We are here to help. You’ll see a link on the site to contact us so that we can help connect you with people both inside and outside of Microsoft.

These aren’t Microsoft events, but Microsoft loves us and what we do! Microsoft will help us spread the word for these events, cater lunch, and set up attendees with free Azure trials for testing and learning. All told, across the world we’ll get over 10,000 developers skilled up on Azure.

Our event has global sponsors who support us by arranging giveaways, which are used locally for raffles or simply handed out to everyone. Anyone may sponsor, but we ask all our sponsors to give something to all our locations. Some great companies have done this and we are very grateful. Locally, organizers may work with additional sponsors to keep their community events free to attend!

“The Global Azure Bootcamp is community at its finest. We are incredibly excited to see community leaders around the world rise up and help developers build the skills they need in today’s cloud-driven business environment. We’re here to help each of these community led events be a success and can’t wait to continue our decades-long commitment to the worldwide developer community,” says Jeff Sandquist, General Manager of the Azure Platform Experiences Group at Microsoft.

A big thanks to hundreds of community heroes, many of whom are Microsoft MVPs, who work diligently and passionately to make sure the local events grow and flourish. Here are all the locations pinned to this year’s map so far. Where will you stake your claim?

[Image: map of Global Azure Bootcamp 2018 locations]

Global Azure is an unstoppable force of thirst for knowledge with 244 locations in 68 countries! It all goes down on April 21, 2018, everywhere at once, rolling from the first time-zones to the last circling the globe, lasting for almost 30 hours end to end. Join us and make some noise on Twitter (#globalazure), Facebook, or upload pictures and share on Flickr!

Thank you for being part of this great event!

Microsoft Azure Most Valuable Professionals (MVPs) Martin Abbott, Maarten Balliauw, Wesley Cabus, Mike Martin, Alan Smith, Michael Wood, and myself, Magnus Mårtensson, Regional Director and Azure MVP.

Adding support for Debug Adapters to Visual Studio IDE

Since its release, Visual Studio Code’s extension model, based on well-known web technologies such as TypeScript and JSON, has attracted a great deal of participation from the community, with hundreds of extensions published to provide support for exciting new languages and technologies. Visual Studio 2017 took the first steps towards participating in this ecosystem in November, with the release of the Language Server Protocol preview. Now, in Visual Studio 2017 version 15.6, we’re excited to announce support for another Visual Studio Code extension component – the debug adapter. If you’ve previously written a debugging extension for Visual Studio Code, you can now use it in Visual Studio as well, generally with only minor modifications. If you’re considering implementing debugging support for a language or runtime, doing so via a debug adapter will allow you to reach both Visual Studio and Visual Studio Code customers without having to support two separate codebases.

What is a Debug Adapter?

A debug adapter is a program that can communicate with a debugger UI using the Debug Adapter Protocol. An adapter can act as a bridge between the UI and a separate debugger (such as GDB or LLDB), or can be a debugger in and of itself (such as “vsdbg”, which supports CoreCLR debugging on Linux and macOS). The Debug Adapter Protocol is JSON-based, and libraries for working with it are available in many languages, including Node.js and C#/VB.NET.
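To give a feel for the wire format, here is a rough Python sketch of a client sending an initialize request to a debug adapter over standard input. The adapter command line and the argument set are illustrative assumptions; the Content-Length framing and the seq/type/command fields are the parts defined by the protocol:

# Rough sketch of the Debug Adapter Protocol wire format: each message is a
# JSON body preceded by a Content-Length header, and requests carry
# "seq", "type": "request", and "command" fields. The adapter command line and
# the argument set shown here are illustrative assumptions.
import json
import subprocess

adapter = subprocess.Popen(
    ["node", "mockDebug.js"],            # hypothetical debug adapter executable
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

request = {
    "seq": 1,
    "type": "request",
    "command": "initialize",
    "arguments": {"adapterID": "mock", "linesStartAt1": True, "columnsStartAt1": True},
}
body = json.dumps(request).encode("utf-8")
adapter.stdin.write(f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body)
adapter.stdin.flush()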

[Diagram: Debug Adapter overview]

How do I get started?

You will need to have Visual Studio 2017 version 15.6 installed.

Samples and documentation for the Visual Studio Debug Adapter Host are available on GitHub: https://github.com/Microsoft/VSDebugAdapterHost.

On the Debug Adapter Host wiki, you’ll find walkthroughs that demonstrate testing and packaging a debug adapter for use in Visual Studio. After following the walkthrough, you’ll be able to debug with Visual Studio Code’s “Mock Debug” adapter in Visual Studio:

[Screenshot: Visual Studio Code’s Mock Debug adapter running in Visual Studio]

The wiki also contains documentation on new functionality added to the Debug Adapter Protocol to support Visual Studio scenarios, such as the ability to edit values in the “Watch” window, control the formatting of data and stack traces, and more.

If your extension also contains a Language Server, you may also be interested in the preview release of Visual Studio’s Language Server Protocol support.

How do I provide feedback?

You can provide feedback by filing issues on GitHub, or you can email the team directly at vsdahfeed@microsoft.com.

Andrew Crawley, Sr. Software Engineer, Visual Studio

Andrew is an engineer on the Visual Studio IDE Debugger team, where he works on the Visual Studio Debug Adapter Host.

Bing Launches More Intelligent Search Features

In December, we announced new intelligent search features which tap into advances in AI to provide people with more comprehensive answers, faster.

Today, we’re excited to announce improvements to our current features, and new scenarios that get you to your answer faster.
 

Intelligent Answers Updates

Since December we’ve received a lot of great feedback on our experiences; based on that, we’ve expanded many of our answers to the UK, improved our quality and coverage of existing answers, and added new scenarios.
 
  • More answers that include relevant information across multiple sources

Bing now aggregates facts for given topics across several sites for you, so you can save time by learning about a topic without having to check several sources yourself. For example, if you want to learn more about tundras, simply search for “tundra biome facts” and Bing will give you facts compiled from three different sources at the top of the results page.

 
  • Hover-over definitions for uncommon words

An enhancement to our intelligent answers we’re rolling out this week gives you insight into unfamiliar topics at a glance. When Bing recognizes a word that isn’t common knowledge, it will now show you its definition when you hover over it with the cursor.
 
For example, imagine you are searching to find answers to a medical question. We've given you an answer, but there are some terms in the answer that you aren't familiar with. Just hover over the term to get the definition without leaving the page.
 

  • Multiple answers for how-to questions

We received positive feedback from users including Bing Insiders who said they liked being able to view a variety of answer options in one place so they could easily decide which was best for them. We found that having multiple answers is especially helpful in situations where users struggle to write a specific enough query, such as when they have DIY questions but may not know the right words to ask. In the next few weeks, we’ll be shipping answers for how-to questions, so people can go one level deeper on their search and find the right information quickly.
 

More opportunities to search within an image

Another new feature we announced in December is intelligent image search, which allows you to search within an image to find similar images and products. With intelligent image search, users can manually select a desired object by cropping around it if they’d like, but our built-in object detection feature makes this easier by identifying objects and highlighting them with clickable hotspots, so all you have to do is click to get matching results.


 
When we first launched intelligent image search, the object detection feature was mostly focused on a few fashion items, like shirts and handbags. Since then we’ve expanded our object detection to cover all common top fashion categories, so you can find and shop what you see in more places than ever before.
 

Advancing our intelligent capabilities with Intel FPGA

Delivering intelligent search requires tasks like machine reading comprehension at scale, which require immense computational power. So, we built it on a deep learning acceleration platform, called Project Brainwave, which runs deep neural networks on Intel® Arria® and Stratix® Field Programmable Gate Arrays (FPGAs) with latencies on the order of milliseconds.
 
Intel’s FPGA chips allow Bing to quickly read and analyze billions of documents across the entire web and provide the best answer to your question in a fraction of a second. Intel’s FPGA devices not only provide Bing the real-time performance needed to keep our search fast for our users, but also the agility to continuously and quickly innovate using more and more advanced technology to bring you additional intelligent answers and better search results. In fact, Intel’s FPGAs have enabled us to decrease the latency of our models by more than 10x while also increasing our model size by 10x.
 
We’re excited to disclose the performance details of two of our deep neural networks running in production today.  You can read more about the advances we’ve made in the white paper, "Serving DNNs in Real Time at Datacenter Scale with Project Brainwave", and hear from the people working behind the scenes on the project:


 
We hope you’re as excited about these features as we are, and would love to hear your feedback!

- The Bing Team

Announcing the general availability of Azure Files share snapshot

It has been an exciting last few months since we announced the public preview of Azure Files share snapshot as we see our customers experiencing out-of-the-box snapshot capabilities for their Azure file shares. Today, we are excited to announce the general availability of Azure Files share snapshots globally in all Azure clouds. Share snapshots provide a way to make incremental backups of Server Message Block (SMB) shares in Azure Files. Storage administrators can use snapshots directly and backup providers can now leverage this capability to integrate Azure Files backup and restore capabilities into their products.

Key value proposition

Incremental and fast – Only changes made to the base data are stored in the snapshot. If the data is available on the base share, it will not be duplicated in any snapshot. If nothing changes after you create the snapshot, the size of the snapshot remains zero. This is true even for the very first snapshot, which means that it never duplicates any data. This makes snapshots time, space, and cost efficient, and it also minimizes the time required to create one: you can take a snapshot of a share instantaneously. While snapshots are taken at the share level, you can still restore individual files. Being able to restore at the item level makes recovery fast and cost efficient.

Familiar experiences – There are many ways to browse and restore data from snapshots, including “previous versions” in Windows, the Azure portal, Storage Explorer, the storage SDKs, and REST APIs. Once the snapshot is created, you will be able to view all your previous versions from Windows Explorer. You can even use your favorite diff utility to view changes between file versions in Windows. Azure is the very first public cloud provider to enable capabilities like creating an instantaneous file share snapshot, browsing snapshots with a native Volume Shadow Copy Service (VSS)-like experience in Windows Explorer, and restoring from Windows Explorer. We have also added support for share snapshots in the Azure portal and Storage Explorer, which enables a UI experience on Windows, Linux, and macOS.

Backup integration – Azure Files is a true born-in-the-cloud file share and natively supports a REST API, which provides greater flexibility for tooling and scripting. Backup providers can now leverage the REST API to provide a true native backup story. As an example, a few days back, Azure Backup announced preview support for Azure Files. Azure File Sync customers can also now use Azure Backup to protect their file shares in the cloud. In addition to native Windows integration, we have added support for snapshots to the Azure PowerShell, .NET, Python, Node, and Java SDKs. You can use these for scripting or programmatically accessing data from your snapshots.
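As a quick illustration of the scripting experience, here is a minimal Python sketch that creates and lists share snapshots with the azure-storage-file package. The account, share name, and method names reflect my reading of the SDK and should be verified against the SDK version you install:

# Minimal sketch using the Python SDK (azure-storage-file) to create and list
# share snapshots. The account name/key and share name are placeholders, and
# the method and parameter names should be checked against your SDK version.
from azure.storage.file import FileService

file_service = FileService(account_name="mystorageaccount",   # placeholder
                           account_key="<storage-account-key>")

# Take an instantaneous snapshot of an existing share.
snapshot = file_service.snapshot_share("myshare")
print("Created snapshot:", snapshot.snapshot)

# Enumerate shares, including their snapshots.
for share in file_service.list_shares(include_snapshots=True):
    print(share.name, getattr(share, "snapshot", None))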

Conclusion and next steps

Capacity consumed by share snapshots is charged at the same rate as data storage. During public preview, capacity consumed by snapshots was not billed; you will start seeing this on your bill in the next few weeks. Since snapshots are incremental in nature, the additional cost should be minimal.

We hope Azure Files share snapshots will be a key addition to your cloud storage management toolkit. To learn more about snapshots, please visit the Azure File share snapshot documentation.

If you have any questions about Azure Files, please leave a comment below. In addition, if you have any feature requests, we are always listening to your feedback on our UserVoice.

The new Azure Load Balancer – 10x scale increase

Azure Load Balancer is a network load balancer offering high scalability, high throughput, and low latency for TCP and UDP load balancing.

Today, we are excited to announce the new Standard SKU of the Azure Load Balancer. The Standard SKU adds 10x scale and more features, along with deeper diagnostic capabilities, compared with the existing Basic SKU. The new offer is designed to handle millions of flows per second and built to scale and support even higher loads. The Standard and Basic Load Balancer options share APIs, giving customers several options to pick what best matches their needs.

Below are some of the important features of the new Standard SKU:

Vastly increased Scalability

Standard Load Balancer can distribute network traffic across up to one thousand (1,000) VM instances in a backend pool. This is a 10x scale improvement over the existing Basic SKU. One or more large virtual machine scale sets can be configured behind a single highly available IP address, and the health and availability of each instance is managed and monitored by health probes.

Versatility within the VNet

The new Standard Load Balancer spans an entire virtual network (VNet). Any virtual machine in the VNet can be configured to join the backend pool and is not restricted to a single availability set, as is the case with our Basic Load Balancer. Customers can combine multiple scale sets, availability sets, or individual virtual machines in the backend pool.

Blazingly fast provisioning

The new SKU sits atop a brand-new control plane that executes configuration changes within seconds. The result is a highly responsive API frontend that is quick to react to updates and needs for sudden changes.

IP address control and flexibility

The use and full control of a static public IP address for the frontend makes it possible to use the load balancer in conjunction with traditional network firewalls, which typically require hardcoded IP addresses. Azure also supports moving a static public IP address between load balancers, providing stickiness and stability during re-deployments and upgrades.

Increased outbound connectivity

Both Basic and Standard Load Balancers allow multiple frontend IP addresses to be used. The Standard Load Balancer expands on this by allowing any or all of those IPs to be used for outbound flows, increasing the number of overall outbound connections available as you spin up more frontends.

Resiliency and AZ support

We have also included additional functionality when using Standard Load Balancer with Azure Availability Zones (AZs). Customers can now enable zone redundancy on their public and internal frontends using a single IP address, or tie their frontend IP addresses to a specific zone. This type of cross-zone load balancing can address any VM or VM scale set in a region. A zone-redundant IP address is served (advertised) by the load balancer in every zone, since the data path is anycast within the region. In the unlikely event that a zone goes down completely, the load balancer can quickly serve traffic from instances in another zone. More details can be found in the Availability Zones & Standard Load Balancer documentation.

High-availability (HA) Ports

Creation of active-active setups and n+1 redundancy for network virtual appliances like firewalls and other network proxies has been a customer ask for a while. Customers can now enable HA Ports to load balance per flow across all ports on the frontend of an internal Standard Load Balancer. This enables simple setup of highly available configurations, while removing the need for many individual load-balancing rules. More details can be found in the HA Ports documentation.
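To show what an HA Ports configuration boils down to, here is a small Python sketch of the load-balancing rule fragment as it would appear in an ARM template for an internal Standard Load Balancer, expressed as a dictionary. The rule name and resource IDs are placeholders; the key idea is protocol "All" with frontend and backend port 0, which enables per-flow balancing across all ports:

# Illustrative sketch: the shape of an HA Ports load-balancing rule for an
# internal Standard Load Balancer, expressed as a Python dict mirroring the ARM
# template fragment. Names and resource IDs are placeholders for illustration.
ha_ports_rule = {
    "name": "haPortsRule",                      # hypothetical rule name
    "properties": {
        "protocol": "All",                      # all protocols
        "frontendPort": 0,                      # 0 means "all ports"
        "backendPort": 0,
        "frontendIPConfiguration": {"id": "<frontend-ip-configuration-id>"},
        "backendAddressPool": {"id": "<backend-address-pool-id>"},
        "probe": {"id": "<health-probe-id>"},
        "loadDistribution": "Default",
    },
}
print(ha_ports_rule)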

New insights and diagnostics

Introducing new telemetry, automatic in-band health measurements, as well as insights into traffic volumes, inbound connection attempts, outbound connection health, and Azure’s platform health, the new load balancer brings a wealth of extra value to customers looking for increased control and network visibility across their deployments. As soon as a customer configures a public frontend of the Standard Load Balancer, Azure begins in-band active measurements to determine the health of a customer’s endpoint from within the region, allowing for new insights into the network. All of this information is exposed as a collection of multi-dimensional metrics in Azure Monitor and can be consumed by Azure’s Operations Management Suite and others. For complete details, please visit the diagnostics and monitoring improvements documentation.

Secure by Default

Lastly, we have made a few changes and tweaks to the security posture of our new SKU. IP addresses and load-balanced endpoints now default to closed unless a customer has opened specific ports to permit traffic using a Network Security Group (NSG) attached to the backend VM or the subnet in which the VM resides.

Azure Standard Load Balancer is now generally available in 27 public cloud regions. For more details please refer to the load balancer documentation page.

Azure Monitor–General availability of multi-dimensional metrics APIs

In September of last year we announced the public preview of multi-dimensional metrics in Azure Monitor, Microsoft’s built-in platform monitoring service for Azure. Today we are pleased to announce the general availability of the APIs that support this capability. Now you can explore your Azure metrics through their dimensions and unlock deeper insights. Dimensions are name-value pairs, or attributes, that can be used to further segment a metric, and these additional attributes can make exploring a metric more meaningful. Azure Monitor has also increased the metric retention period from 30 days to 93 days, so you can access data for longer and make meaningful comparisons across months.
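To make dimensions concrete, here is a minimal Python sketch that queries a metric through the Azure Monitor metrics REST API and splits it by a dimension with $filter. The resource ID, metric name, dimension name, and api-version are illustrative assumptions, so adjust them for your own resource:

# Minimal sketch: query a multi-dimensional metric through the Azure Monitor
# metrics REST API and split it by a dimension using $filter. The resource ID,
# metric name, dimension name, and api-version are illustrative assumptions.
import requests

RESOURCE_ID = ("/subscriptions/00000000-0000-0000-0000-000000000000"
               "/resourceGroups/demo-rg/providers/Microsoft.Storage"
               "/storageAccounts/demoaccount")               # placeholder resource
TOKEN = "<azure-ad-bearer-token>"

url = f"https://management.azure.com{RESOURCE_ID}/providers/microsoft.insights/metrics"
params = {
    "api-version": "2018-01-01",          # assumed GA api-version
    "metricnames": "Transactions",
    "aggregation": "Total",
    "interval": "PT1H",
    "$filter": "ApiName eq '*'",          # split the metric by the ApiName dimension
}
resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for metric in resp.json().get("value", []):
    for series in metric.get("timeseries", []):
        print(series.get("metadatavalues"), len(series.get("data", [])))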

You can access and explore metrics, multi-dimensional and otherwise, via the following:

In the next few weeks we will also be adding support for multi-dimensional metrics in PowerShell.

In addition to using the above methods to access and explore your metrics, we also recently announced the general availability of the next generation of metric alerts, which allow you to create alerts on multi-dimensional metrics!

As part of this update we are also introducing the availability of metrics for the following Azure Services:

  • Classic Cloud Services
  • Azure Key Vault
  • DNS Zones
  • HDInsights
  • Azure Container Instances (ACI)
  • PowerBI Dedicated Capacities

Find resources and metrics available via Azure Monitor with this complete list. You can also refer to our pricing page for pricing on the metric query APIs.


Azure SQL Data Warehouse now generally available in all Azure regions worldwide

We are excited to announce the general availability of Azure SQL Data Warehouse in three additional regions: Japan West, Australia East, and India West. These additional locations bring the product’s worldwide availability to all 33 regions – more than any other major cloud data warehouse provider. With general availability, you can now provision SQL Data Warehouse across 33 regions with a financially backed SLA of 99.9 percent availability.

SQL Data Warehouse is a high-performance, secure, and compliant SQL analytics platform offering you a SQL-based view across data and a fast, fully managed, petabyte-scale cloud solution. It is elastic, enabling you to provision in minutes and scale up to 60 times larger in seconds. It comes standard with geo-backups, which enable geo-resiliency of your data and allow your data warehouse to be restored to any region in Azure in the case of a region-wide failure.

Azure regions provide multiple, physically separated and isolated availability zones connected through low latency, high throughput, and highly redundant networking. Starting today, customers can leverage these advanced features across 33 regions.

Begin today and experience the speed, scale, elasticity, security, and ease of use of a cloud-based data warehouse for yourself. You can see this blog post for more info on the capabilities and features of SQL Data Warehouse.

Share your feedback

We would love to hear from you about what features you would like us to add.

Please let us know on our feedback site what features you want most. Users who suggest or vote for feedback will receive periodic updates on their request and will be the first to know when the feature is released.

In addition, you can connect with us if you have any product questions via StackOverflow, or via our MSDN forum.

Learn more

Check out the many resources for learning more about SQL Data Warehouse, including:

Announcing TypeScript 2.8


TypeScript 2.8 is here and brings a few features that we think you’ll love unconditionally!

If you’re not familiar with TypeScript, it’s a language that adds optional static types to JavaScript. Those static types help make guarantees about your code to avoid typos and other silly errors. They can also help provide nice things like code completions and easier project navigation thanks to tooling built around those types. When your code is run through the TypeScript compiler, you’re left with clean, readable, and standards-compliant JavaScript code, potentially rewritten to support much older browsers that only support ECMAScript 5 or even ECMAScript 3. To learn more about TypeScript, check out our documentation.

If you can’t wait any longer, you can download TypeScript via NuGet or by running

npm install -g typescript

You can also get editor support for

Other editors may have different update schedules, but should all have excellent TypeScript support soon as well.

To get a quick glance at what we’re shipping in this release, we put this handy list together to navigate our blog post:

We also have some minor breaking changes that you should keep in mind if upgrading.

But otherwise, let’s look at what new features come with TypeScript 2.8!

Conditional types

Conditional types are a new construct in TypeScript that allow us to choose types based on other types. They take the form

A extends B ? C : D

where A, B, C, and D are all types. You should read that as “when the type A is assignable to B, then this type is C; otherwise, it’s D.” If you’ve used conditional syntax in JavaScript, this will feel familiar to you.

Let’s take two specific examples:

interface Animal {
    live(): void;
}
interface Dog extends Animal {
    woof(): void;
}

// Has type 'number'
type Foo = Dog extends Animal ? number : string;

// Has type 'string'
type Bar = RegExp extends Dog ? number : string;

You might wonder why this is immediately useful. We can tell that Foo will be number, and Bar will be string, so we might as well write that out explicitly. But the real power of conditional types comes from using them with generics.

For example, let’s take the following function:

interface Id { id: number, /* other fields */ }
interface Name { name: string, /* other fields */ }

declare function createLabel(id: number): Id;
declare function createLabel(name: string): Name;
declare function createLabel(name: string | number): Id | Name;

These overloads for createLabel describe a single JavaScript function that makes a choice based on the types of its inputs. Note two things:

  1. If a library has to make the same sort of choice over and over throughout its API, this becomes cumbersome.
  2. We have to create three overloads: one for each case when we’re sure of the type, and one for the most general case. For every other case we’d have to handle, the number of overloads would grow exponentially.

Instead, we can use a conditional type to smoosh both of our overloads down to one, and create a type alias so that we can reuse that logic.

type IdOrName<T extends number | string> =
    T extends number ? Id : Name;

declare function createLabel<T extends number | string>(idOrName: T):
    T extends number ? Id : Name;

let a = createLabel("typescript");   // Name
let b = createLabel(2.8);            // Id
let c = createLabel("" as any);      // Id | Name
let d = createLabel("" as never);    // never

Just like how JavaScript can make decisions at runtime based on the characteristics of a value, conditional types let TypeScript make decisions in the type system based on the characteristics of other types.

As another example, we could also write a type called Flatten that flattens array types to their element types, but leaves them alone otherwise:

// If we have an array, get the type when we index with a 'number'.
// Otherwise, leave the type alone.
type Flatten<T> = T extends any[] ? T[number] : T;
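A couple of illustrative uses (our own examples, not from the original post):

// string – 'string[]' is an array type, so we index it with 'number'
type StrElement = Flatten<string[]>;

// number – not an array type, so it's left alone
type Num = Flatten<number>;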

Inferring within conditional types

Conditional types also provide us with a way to infer from types we compare against in the true branch using the infer keyword. For example, we could have inferred the element type in Flatten instead of fetching it out manually:

// We could also have used '(infer U)[]' instead of 'Array<infer U>'
type Flatten<T> = T extends Array<infer U> ? U : T;

Here, we’ve declaratively introduced a new generic type variable named U instead of specifying how to retrieve the element type of T. This frees us from having to think about how to get the types we’re interested in.

Distributing on unions with conditionals

When conditional types act on a single type parameter, they distribute across unions. So in the following example, Bar has the type string[] | number[] because Foo is applied to the union type string | number.

type Foo<T> = T extends any ? T[] : never;

/**
 * Foo distributes on 'string | number' to the type
 *
 *    (string extends any ? string[] : never) |
 *    (number extends any ? number[] : never)
 * 
 * which boils down to
 *
 *    string[] | number[]
 */
type Bar = Foo<string | number>;

In case you ever need to avoid distributing on unions, you can surround each side of the extends keyword with square brackets:

type Foo<T> = [T] extends [any] ? T[] : never;

// Boils down to Array<string | number>
type Bar = Foo<string | number>;

While conditional types can be a little intimidating at first, we believe they’ll bring a ton of flexibility for moments when you need to push the type system a little further to get accurate types.

New built-in helpers

TypeScript 2.8 provides several new type aliases in lib.d.ts that take advantage of conditional types:

// These are all now built into lib.d.ts!

/**
 * Exclude from T those types that are assignable to U
 */
type Exclude<T, U> = T extends U ? never : T;

/**
 * Extract from T those types that are assignable to U
 */
type Extract<T, U> = T extends U ? T : never;

/**
 * Exclude null and undefined from T
 */
type NonNullable<T> = T extends null | undefined ? never : T;

/**
 * Obtain the return type of a function type
 */
type ReturnType<T extends (...args: any[]) => any> = T extends (...args: any[]) => infer R ? R : any;

/**
 * Obtain the return type of a constructor function type
 */
type InstanceType<T extends new (...args: any[]) => any> = T extends new (...args: any[]) => infer R ? R : any;

While NonNullable, ReturnType, and InstanceType are relatively self-explanatory, Exclude and Extract are a bit more interesting.

Extract selects types from its first argument that are assignable to its second argument:

// string[] | number[]
type Foo = Extract<boolean | string[] | number[], any[]>;

Exclude does the opposite; it removes types from its first argument that are not assignable to its second:

// boolean
type Bar = Exclude<boolean | string[] | number[], any[]>;
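For completeness, here are a few illustrative uses of the other helpers (our own examples):

type MaybeName = string | null | undefined;

// string
type Name = NonNullable<MaybeName>;

declare function getTimestamp(): number;

// number
type Timestamp = ReturnType<typeof getTimestamp>;

class Widget {
    render(): void {}
}

// Widget
type WidgetInstance = InstanceType<typeof Widget>;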

Declaration-only emit

Thanks to a pull request from Manoj Patel, TypeScript now features an --emitDeclarationOnly flag which can be used for cases when you have an alternative build step for emitting JavaScript files, but need to emit declaration files separately. Under this mode, no JavaScript or sourcemap files will be generated; only .d.ts files that library consumers can use.

One use-case for this is when using alternate compilers for TypeScript such as Babel 7. For an example of repositories taking advantage of this flag, check out urql from Formidable Labs, or take a look at our Babel starter repo.

@jsx pragma comments

Typically, users of JSX expect to have their JSX tags rewritten to React.createElement. However, if you’re using libraries that have a React-like factory API, such as Preact, Stencil, Inferno, Cycle, and others, you might want to tweak that emit slightly.

Previously, TypeScript only allowed users to control the emit for JSX at a global level using the jsxFactory option (as well as the deprecated reactNamespace option). However, if you needed to mix any of these libraries in the same application, you’d have been out of luck using JSX for both.

Luckily, TypeScript 2.8 now allows you to set your JSX factory on a file-by-file basis by adding a /** @jsx */ pragma comment at the top of your file. If you’ve used the same functionality in Babel, this should look slightly familiar.

/** @jsx dom */
import { dom } from "./renderer"
<h></h>

The above sample imports a function named dom, and uses the jsx pragma to select dom as the factory for all JSX expressions in the file. TypeScript 2.8 will rewrite it to the following when compiling to CommonJS and ES5:

var renderer_1 = require("./renderer");
renderer_1.dom("h", null);

JSX is resolved via the JSX Factory

Currently, when TypeScript uses JSX, it consults a global JSX namespace to look up certain types (e.g. “what’s the type of a JSX component?”). In TypeScript 2.8, the compiler will instead try to look up the JSX namespace based on the location of your JSX factory. For example, if your JSX factory is React.createElement, TypeScript will first try to resolve React.JSX, and then resolve JSX from within the current scope.

This can be helpful when mixing and matching different libraries (e.g. React and Preact) or different versions of a specific library (e.g. React 14 and React 16), as placing the JSX namespace in the global scope can cause issues.

Going forward, we recommend that new JSX-oriented libraries avoid placing JSX in the global scope, and instead export it from the same location as the respective factory function. However, for backward compatibility, TypeScript will continue falling back to the global scope when necessary.
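A minimal sketch of that recommendation, assuming a hypothetical dom factory (the element shape here is invented purely for illustration):

// renderer.d.ts – the JSX namespace is exported alongside the factory,
// so TypeScript can resolve 'dom.JSX' instead of relying on a global 'JSX'.
export declare function dom(tag: string, props: any, ...children: any[]): dom.JSX.Element;

export declare namespace dom {
    export namespace JSX {
        export interface Element {
            tag: string;
            props: any;
            children: any[];
        }
        export interface IntrinsicElements {
            [tagName: string]: any;
        }
    }
}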

Granular control on mapped type modifiers

TypeScript’s mapped object types are an incredibly powerful construct. One handy feature is that they allow users to create new types that have modifiers set for all their properties. For example, the following type creates a new type based on T and where every property in T becomes readonly and optional (?).

// Creates a type with all the properties in T,
// but marked both readonly and optional.
type ReadonlyAndPartial<T> = {
    readonly [P in keyof T]?: T[P]
}

So mapped object types can add modifiers, but up until this point, there was no way to remove modifiers from T.

TypeScript 2.8 provides a new syntax for removing modifiers in mapped types with the - operator, and a new more explicit syntax for adding modifiers with the + operator. For example,

type Mutable<T> = {
    -readonly [P in keyof T]: T[P]
}

interface Foo {
    readonly abc: number;
    def?: string;
}

// 'abc' is no longer read-only, but 'def' is still optional.
type TotallyMutableFoo = Mutable<Foo>

In the above, Mutable removes readonly from each property of the type that it maps over.

Similarly, TypeScript now provides a new Required type in lib.d.ts that removes optionality from each property:

/**
 * Make all properties in T required
 */
type Required<T> = {
    [P in keyof T]-?: T[P];
}
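For example (an illustrative use of our own):

interface Settings {
    theme?: string;
    fontSize?: number;
}

// Both 'theme' and 'fontSize' are now required.
type StrictSettings = Required<Settings>;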

The + operator can be handy when you want to call out that a mapped type is adding modifiers. For example, our ReadonlyAndPartial from above could be defined as follows:

type ReadonlyAndPartial<T> = {
    +readonly [P in keyof T]+?: T[P];
}

Organize imports

TypeScript’s language service now provides functionality to organize imports. This feature will remove any unused imports, sort existing imports by file paths, and sort named imports as well.

Fixing uninitialized properties

TypeScript 2.7 introduced extra checking for uninitialized properties in classes. Thanks to a pull request by Wenlu Wang, TypeScript 2.8 brings some helpful quick fixes that make it easier to adopt this checking in your codebase.

Breaking changes

Unused type parameters are checked under --noUnusedParameters

Unused type parameters were previously reported under --noUnusedLocals, but are now instead reported under --noUnusedParameters.

HTMLObjectElement no longer has an alt attribute

Such behavior is not covered by the WHATWG standard.

What’s next?

We hope that TypeScript 2.8 pushes the envelope further to provide a type system that can truly represent the nature of JavaScript as a language. With that, we believe we can provide you with an experience that continues to make you more productive and happier as you code.

Over the next few weeks, we’ll have a clearer picture of what’s in store for TypeScript 2.9, but as always, you can keep an eye on the TypeScript roadmap to see what we’re working on for our next release. You can also try out our nightly releases to try out the future today! For example, generic JSX elements are already out in TypeScript’s recent nightly releases!

Let us know what you think of this release over on Twitter or in the comments below, and feel free to report issues and suggestions by filing a GitHub issue.

Happy Hacking!

Get a look at the Bing Maps Fleet Management APIs on April 3


Get a first-hand look at the latest full set of Bing Maps APIs and Services for your fleet, asset, and logistics applications.

Bing Maps partner, Grey Matter, is hosting a webinar on April 3 at 3:00 PM (GMT) where Steve Lombardi, Principal Program Manager Lead with the Bing Maps team, will cover building mobile workforce solutions with the new Bing Maps Fleet Management APIs and services.

In recent months, new geospatial API services and solutions, including Truck Routing, Distance Matrix, and Drive Time Isochrone, have been added to Microsoft's rich enterprise mapping platform. These new APIs provide you with advanced business solutions that go beyond standard mapping services, helping you to deliver applications with greater location intelligence and innovative user experiences.

Additionally, the Bing Maps Fleet Tracker solution was released in January as an open source project. Fleet Tracker leverages mobile phones to provide a web dashboard of where your mobile assets are located. Client applications for iOS and Android are included as well.

Below are details:

Date: April 3, 2018

Time: 3:00 – 4:00 PM (GMT)

Register: http://www.greymatter.com/corporate/bing-maps-developer-webinar/

What we'll cover:

  • Review the core features of the Bing Maps platform - maps, geocoding, reverse geocoding, routing, advanced data visualizations and much more, available in over 100 countries.
  • Have a detailed look at the new APIs.
  • Learn about the Fleet Tracker solution, including a demo of the one-click deployment enabling you to be live on Azure in 10 minutes.

- Bing Maps Team

Configuring C++ IntelliSense and Browsing


Whether you are creating a new (or modifying an existing) C++ project using a Wizard, or importing a project into Visual Studio from another IDE, it’s important to configure the project correctly for the IntelliSense and Browsing features to provide accurate information.  This article provides some tips on configuring projects and describes a few ways that you can investigate configuration problems.

Include Paths and Preprocessor Macros

The two settings that have the greatest effect on the accuracy of IntelliSense and Browsing operations are the Include Paths and the Preprocessor macros.  This is especially important for projects that are built outside of Visual Studio: such a project may build without any errors, but show squiggles in the Visual Studio IDE.

To check the project’s configuration, open the Properties for your project.  By default, All Configurations and All Platforms will be selected, so that the changes will be applied to all build configurations:

If some configurations do not have the same values as the rest, then you will see <different options>. If your project is a Makefile project, then you will see the following properties dialog. In this case, the settings controlling IntelliSense and Browsing will be under NMake property page, IntelliSense category:

Error List

If IntelliSense is showing incorrect information (or fails to show anything at all), the first place to check is the Error List window.  It could happen that earlier errors are preventing IntelliSense from working correctly.  To see all the errors for the current source file together with all included header files, enable showing IntelliSense Errors in the Error List Window by making this selection in the dropdown:

Error List IntelliSense Dropdown

IntelliSense limits the number of errors it produces to 1000. If there are over 1000 errors in the header files included by a source file, then the source file will show only a single error squiggle at the very start of the source file.

Validating Project Settings via Diagnostic Logging

To check whether the IntelliSense compiler is using the correct compiler options, including Include Paths and Preprocessor macros, turn on Diagnostic Logging of IntelliSense command lines in Tools > Options > Text Editor > C/C++ > Advanced > Diagnostic Logging. Set Enable Logging to True, Logging Level to 5 (most verbose), and Logging Filter to 8 (IntelliSense logging):

Enabling Diagnostic Logging in Tools > Options > Text Editor > C/C++ > Advanced

The Output Window will now show the command lines that are passed to the IntelliSense compiler. Here is a sample output that you may see:

 [IntelliSense] Configuration Name: Debug|Win32
 [IntelliSense] Toolset IntelliSense Identifier:
 [IntelliSense] command line options:
 /c
 /I.
 /IC:\Repo\Includes
 /DWIN32
 /DDEBUG
 /D_DEBUG
 /Zc:wchar_t-
 /Zc:forScope
 /Yustdafx.h

This information may be useful in understanding why IntelliSense is providing inaccurate information. One example is unevaluated project properties. If your project’s Include directory contains $(MyVariable)\Include, and the diagnostic log shows /I\Include as an Include path, it means that $(MyVariable) wasn’t evaluated, and was removed from the final include path.

IntelliSense Build

In order to evaluate the command lines used by the IntelliSense compiler, Visual Studio launches an IntelliSense-only build of each project in the solution. MSBuild performs the same steps as the project build, but stops short of executing any of the build commands: it only collects the full command line.

If your project contains some custom .props or .targets files, it’s possible for the IntelliSense-only build to fail before it finishes computing the command lines. Starting with Visual Studio 2017 15.6, errors from the IntelliSense-only build are logged to the Output Window’s Solution pane.

Output Window, Solution Pane

An example error you may see is:
 error: Designtime build failed for project 'E:\src\MyProject\MyProject.vcxproj',
 configuration 'Debug|x64'. IntelliSense might be unavailable.
 Set environment variable TRACEDESIGNTIME=true and restart
 Visual Studio to investigate.

If you set the environment variable TRACEDESIGNTIME to true and restart Visual Studio, you will see a log file in the %TEMP% directory which will help diagnose this error:

C:\Users\me\AppData\Local\Temp\MyProject.designtime.log :
 error : Designtime build failed for project 'E:\src\MyProject\MyProject.vcxproj',
 configuration 'Debug|x64'. IntelliSense might be unavailable.

To learn more about the TRACEDESIGNTIME environment variable, please see the articles from the Roslyn and Common Project System projects. The C++ project system is based on the Common Project System, so the information from those articles is applicable to all C++ projects.

Single File IntelliSense

Visual Studio allows you to take advantage of IntelliSense and Browsing support for files that are not part of any existing project. By default, files opened in this mode will not display any error squiggles but will still provide IntelliSense; so if you don’t see any error squiggles under incorrect code, or if some expected preprocessor macros are not defined, check whether the file is opened in Single-File mode. To do so, look at the Project node in the Navigation Bar: the project name will be Miscellaneous Files:

Navigation Bar showing Miscellaneous Files project

Investigating Open Folder Issues

Open Folder is a new command in Visual Studio 2017 that allows you to open a collection of source files that doesn’t contain any Project or Solution files recognized by Visual Studio. To help configure IntelliSense and browsing for code opened in this mode, we’ve introduced a configuration file CppProperties.json. Please refer to this article for more information.

CppProperties.json Syntax Error

If you mistakenly introduce a syntax error into the CppProperties.json file, IntelliSense in the affected files will be incorrect. Visual Studio will display the error in the Output Window, so be sure to check there.

Project Configurations

In Open Folder mode, different configurations may be selected using the Project Configurations toolbar.

Project Configurations Dropdown

Please note that if multiple CppProperties.json files provide differently-named configurations, then the selected configuration may not be applicable to the currently-opened source file. To check which configuration is being used, turn on Diagnostic Logging to check for IntelliSense switches.

Single-File IntelliSense

When a solution is open, Visual Studio will provide IntelliSense for files that are not part of the solution using the Single-File mode.  Similarly, in Open Folder mode, Single-File IntelliSense will be used for all files outside of the directory cone.  Check the Project name in the Navigation Bar to see whether the Single-File mode is used instead of CppProperties.json to provide IntelliSense for your source code.

Investigating Tag Parser Issues

Tag Parser is a ‘fuzzy’ parser of C++ code, used for Browsing and Navigation.  (Please check out this blog post for more information.)

Because Tag Parser doesn’t evaluate preprocessor macros, it may stumble while time parsing code that makes heavy use of them. When the Tag Parser encounters an unfamiliar code construct, it may skip a large region of code.

There are two common ways for this problem to manifest itself in Visual Studio. The first way is by affecting the results shown in the Navigation Bar. If instead of the enclosing function, the Navigation Bar shows an innermost macro, then the current function definition was skipped:

Navigation Bar shows incorrect scope

The second way the problem manifests is by showing a suggestion to create a function definition for a function that is already defined:

Spurious Green Squiggle

In order to help the parser understand the content of macros, we have introduced the concept of hint files. (Please see the documentation for more information.) Place a file named cpp.hint in the root of your solution directory, add to it all the code-altering preprocessor definitions (e.g. #define do_if(condition) if(condition)), and invoke the Rescan Solution command, as shown below, to help the Tag Parser correctly understand your code.

Coming soon: Tag Parser errors will start to appear in the Error List window. Stay tuned!

Scanning for Library Updates

Visual Studio periodically checks whether files in the solution have been changed on disk by other programs.  As an example, when a ‘git pull’ or ‘git checkout’ command completes, it may take up to an hour before Visual Studio becomes aware of any new files and starts providing up-to-date information.  In order to force a rescan of all the files in the solution, select the Rescan Solution command from the context menu:

Rescan Solution Context Menu

The Rescan File command, seen in the screenshot above, should be used as the last diagnostic step.  In the rare instance that the IntelliSense engine loses track of changes and stops providing correct information, the Rescan File command will restart the engine for the current file.

Send us Feedback!

We hope that these starting points will help you diagnose any issues you encounter with IntelliSense and Browsing operations in Visual Studio. For any issues you discover, please report them by using the Help > Send Feedback > Report A Problem command. All reported issues can be viewed at the Developer Community.

Command line “tab” completion for .NET Core CLI in PowerShell or bash


Lots of people are using open source .NET Core and the "dotnet" command line, but few know that the .NET CLI supports command "tab" completion!

You can ensure you have it on .NET Core 2.0 with this test:

C:\Users\scott> dotnet complete "dotnet add pac"
package

You can see I do, as it proposed "package" as the completion for "pac"

Now, just go into PowerShell and run:

notepad $PROFILE

And add this code to the bottom to register "dotnet complete" as the "argument completer" for the dotnet command.

# PowerShell parameter completion shim for the dotnet CLI 
Register-ArgumentCompleter -Native -CommandName dotnet -ScriptBlock {
    param($commandName, $wordToComplete, $cursorPosition)
        dotnet complete --position $cursorPosition "$wordToComplete" | ForEach-Object {
           [System.Management.Automation.CompletionResult]::new($_, $_, 'ParameterValue', $_)
        }
}

Then just use it! You can do the same not only in PowerShell, but in bash, or zsh as well!

It's super useful for "dotnet add package" because it'll make smart completions like this:

It also works for adding/removing local project references as it is project file aware. Go set it up NOW, it'll take you 3 minutes.

RANDOM BUT ALSO USEFUL: "dotnet serve" - A simple command-line HTTP server.

Here's a useful little global tool - dotnet serve. It launches a server in the current working directory and serves all files in it. It's not Kestrel, the .NET application/web server; it's just a static web server for development.

The latest release of dotnet-serve requires the 2.1.300-preview1 .NET Core SDK or newer. Once installed, run this command:

dotnet install tool --global dotnet-serve 

Then whenever I'm in a folder where I want to serve something static (CSS, JS, PNGs, whatever) I can just run:

dotnet serve

It can also optionally open a web browser navigated to that localhost URL.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, a new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     