
Because it’s Friday: Regex Games


I've been wrestling with regular expressions recently, so it was useful to give myself a bit of a refresher with Regex Crossword (with thanks to my colleague Paige for the tip). Little crossword-style puzzles (of various difficulties) challenge you to find strings that match the regular expression clues both horizontally and vertically:

[Image: a Regex Crossword puzzle]

It's also nice practice for Regex Golf, the game where you try to find a regular expression that matches all of one set of words (for example: the names of US presidents) but none of another set (for example: opponents of US presidents). But as xkcd warns, that way madness may lie:

[Image: xkcd's "Regex Golf" comic]

That's all from us here at the blog for this week, but we'll be back with more next week. In the meantime have a great weekend, and practice your regular expressions!


Continuous integration and deployment using Data Factory


The Azure Data Factory (ADF) visual tools public preview was announced on Jan 16, 2018. With visual tools, you can iteratively build, debug, deploy, operationalize and monitor your big data pipelines. Now, you can follow industry-leading best practices to do continuous integration and deployment for your ETL/ELT (extract-transform-load / extract-load-transform) workflows to multiple environments (Dev, Test, Prod, etc.). Essentially, you can incorporate testing of your codebase changes and automatically push the tested changes to a Test or Prod environment.

The ADF visual interface now allows you to export any data factory as an ARM (Azure Resource Manager) template. You can click the ‘Export ARM template’ button to export the template corresponding to a factory.

[Screenshot: the ‘Export ARM template’ option in the ADF visual tools]

This will generate 2 files:

  • Template file: A template JSON file containing all the data factory metadata (pipelines, datasets, etc.) corresponding to your data factory.
  • Configuration file: Contains the environment parameters that will be different for each environment (Dev, Test, Prod, etc.), such as the Storage connection and the Azure Databricks cluster connection; an illustrative sketch of this file follows below.
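
A configuration (ARM template parameters) file generally takes a shape like the following sketch; the parameter names and placeholder values here are hypothetical, not what ADF will emit for your factory:

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "factoryName": { "value": "adf-contoso-test" },
        "AzureStorage_connectionString": { "value": "<storage connection string for this environment>" },
        "AzureDatabricks_accessToken": { "value": "<Databricks token for this environment>" }
      }
    }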

You will create a separate data factory per environment. You will then use the same template file for each environment and have one configuration file per environment. Clicking the ‘Import ARM Template’ button will take you to the Azure Template Deployment service in Azure Portal that allows you to select a template file (choose the exported template file) and import it to your data factory.

ADF visual tools also allow you to associate a VSTS GIT repository to your data factory for source control, versioning and collaboration. Once you enable the VSTS GIT integration, you can use the following lifecycle to do continuous integration and deployment:

  • Set up a Development ADF with VSTS where all developers can author ADF resources like pipelines, datasets, etc.
  • Developers can modify resources such as pipelines. They can use the ‘Debug’ button to debug changes and perform test runs.
  • Once satisfied with the changes, developers can create a PR from their branch to master (or collaboration branch) to get the changes reviewed by peers.
  • Once changes are in the master branch, they can be published to the Development ADF using the ‘Publish’ button.
  • When your team is ready to promote changes to the ‘Test’ and ‘Prod’ ADFs, you can export the ARM template from the ‘master’ branch, or from any other branch in case your master is behind the Live Development ADF.
  • The exported ARM template can then be deployed with different environment parameter files to the ‘Test’ and ‘Prod’ environments.

You can also set up a VSTS Release definition to automate the deployment of data factory to multiple environments. Get more information and detailed steps for doing continuous integration and deployment with data factory here.

[Image: VSTS Release definition automating data factory deployment to multiple environments]

We are continuously working to add new features based on customer feedback. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

Seamlessly upgrade Azure SQL Data Warehouse for greater performance and scalability


Azure SQL Data Warehouse recently announced the preview release of the Optimized for Compute performance tier providing customers with an enhanced offering of the service. With this major release, the service now has a 5X increase in compute scalability and unlimited storage for columnar data. Along with the increased capacity, customers are realizing an average increase of 5X in performance for query workloads. For existing Optimized for Elasticity customers wanting to capitalize on these benefits, there is now an option to upgrade seamlessly via the Azure Portal. The easy-to-use upgrade experience has none of the downtime associated with exporting and reimporting data.

Upgrade to optimize for performance

You can now upgrade to the latest performance tier within the Azure Portal. This will result in no change to your connection string details:

[Screenshot: the upgrade option in the Azure Portal]

To learn more about the upgrade process, visit our upgrade documentation. If you need help for a POC, contact us directly. Stay up-to-date on the latest Azure SQL DW news and features by following us on Twitter @AzureSQLDW.

In case you missed it: March 2018 roundup


In case you missed them, here are some articles from March of particular interest to R users.

The reticulate package provides an interface between R and Python.

BotRNot, a Shiny application that uses a generalized boosting model to identify bots on Twitter.

Using the Computer Vision API from R to generate captions for images.

The 20 most prolific package maintainers on CRAN.

The first in a series of monthly roundups of links on AI, ML and Data Science.

A guide to using R with Docker via the Rocker project.

R 3.4.4 is released.

R rises to 12th place in the semiannual Redmonk language rankings.

The OutliersO3 package compares various methods for detecting outliers.

Concepts about data in R explained to Excel users.

A proposal for improving the process of selecting Bayesian priors.

Free Azure offer for students with $100 in credits, no credit card required.

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.

Explore CosmosDB with .NET Core and MongoDB


Have you had to design general purpose “metadata” tables in your SQL database that basically store column names and values? Do you often serialize/de-serialize XML or JSON from your SQL tables to handle volatile schemas and data? .NET developers have traditionally worked with relational database management systems (RDBMS) like SQL Server. RDBMS systems have withstood the test of time and are production-ready. Like many tools, however, RDBMS systems aren’t always the best solution. You may benefit from a different approach called NoSQL. Contrary to popular belief, NoSQL doesn’t mean “NO SQL” but rather “Not Only SQL.”

NoSQL comes in a variety of “flavors” but the most common are column, document, graph, and key/value.

Column databases

Column databases decompose documents into individual properties and store and index those properties. The column approach makes it possible to look up specific properties quickly in otherwise larger documents. Consider a document that looks like the following example:
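
A hypothetical document, with purely illustrative field names and values:

    {
      "id": "1",
      "firstName": "Thomas",
      "lastName": "Andersen",
      "address": {
        "city": "Seattle",
        "state": "WA"
      }
    }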

The following document is a conceptual example of how the data will be stored:
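
(Sketched for the hypothetical document above; the layout is conceptual rather than any engine's actual on-disk format.)

    id            -> "1"
    firstName     -> "Thomas"
    lastName      -> "Andersen"
    address.city  -> "Seattle"
    address.state -> "WA"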

Organizing the database this way improves the effectiveness of reads and writes, especially when only partial properties are being queried.

Document databases

Document databases store structures that are not tied to a specific schema. It is common to store data in JSON or a binary-optimized BSON format, but XML and other types of stores also exist. A major benefit of document databases is that you can store documents with different schemas and versions in the same location (called a collection). Just because the schemas vary doesn’t mean the data isn’t indexed. In fact, most document databases were built specifically to handle fast queries over large data sets. Document databases typically accommodate gigabytes to petabytes of data. Here is an example document that has nested entities:
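
A hypothetical food item document with nested entities, in the spirit of the USDA example later in this post (the values are illustrative):

    {
      "id": "11090",
      "description": "Broccoli, raw",
      "foodGroup": "Vegetables and Vegetable Products",
      "nutrients": [
        { "name": "Protein", "units": "g", "amountPer100g": 2.82 },
        { "name": "Calcium, Ca", "units": "mg", "amountPer100g": 47 }
      ],
      "weights": [
        { "description": "1 cup chopped", "grams": 91 }
      ]
    }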

Graph databases

Graph databases store relationships. The entities are referred to as vertices and describe the things being related. Relationships are referred to as edges and contain data about the connection. The canonical example of a graph database is airports and flight paths. Each vertex is an airport code, the name of the airport, and its coordinates. Each edge is a route between airports that contains information such as airline, distance, and average time. Graph databases can also be used to store genealogical data, hierarchies, and other types of relationships.
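
A sketch of that airport example (the vertex and edge values are illustrative):

    Vertex (airport): { "id": "SEA", "name": "Seattle-Tacoma International", "lat": 47.45, "lon": -122.31 }
    Vertex (airport): { "id": "LAX", "name": "Los Angeles International", "lat": 33.94, "lon": -118.41 }
    Edge (route): SEA -> LAX { "airline": "Contoso Air", "distanceMiles": 954, "averageMinutes": 165 }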

[Image: graph database example]

Key/value databases

Key/value databases are persistent dictionaries optimized for key-based lookups. The “value” may represent one or many columns and, like document databases, can have different schemas for each entry. If you often have the key and are simply looking up information (such as loading user preferences or transforming a short code into a longer description), a key/value database is ideal.
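
A minimal illustration of the key/value shape (the key and value here are hypothetical):

    Key:   "user:42:preferences"
    Value: { "theme": "dark", "language": "en-US", "itemsPerPage": 50 }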

[Image: Azure Table storage as a key/value example]

Enter Cosmos DB

Azure Cosmos DB is a globally distributed, multi-model database. Cosmos DB provides the ability to elastically scale throughput and replicate storage across multiple regions. All of these benefits are possible with just a few clicks. Cosmos DB provides guarantees related to availability, throughput, consistency of your data and latency to access it. One way to think of Cosmos DB is as a serverless data platform that stores your data and manages it like an experienced Database Administrator (DBA).

As a multi-model database, Cosmos DB supports several popular protocols and implements all of the common NoSQL patterns. Support for NoSQL databases includes:

  • Column database support with the Cassandra API
  • Document database support with the SQL and MongoDB APIs
  • Graph database support with the Gremlin API
  • Key/value database support with the Table API

Many of the supported APIs have been around for years and are used in legacy systems. By supporting these APIs, Cosmos DB makes it easier to migrate existing data to the cloud. It also allows you to tap into the existing ecosystem of community tools and drivers for those APIs.

The USDA Example

To help you get started with CosmosDB, I created a simple cross-platform .NET Core application that uses the existing MongoDB driver. MongoDB is a very popular database with many existing implementations. It also has a very mature .NET driver available to both .NET and .NET Core. You can access the repository here:

Explore Cosmos DB

Using .NET Core provides several advantages. The solution file is as easily opened and built from Visual Studio 2017 as from Visual Studio Code. You can build and run the project from your Windows machine, a Mac, or even a Linux box. The project demonstrates how you can share code between traditional Web API projects and serverless function apps. It also uses best practices to pull connection strings from environment variables to make the connection.
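
As a rough sketch of that environment-variable pattern (the variable name, class, and database name below are illustrative assumptions, not the repository's actual code):

    using System;
    using MongoDB.Driver;

    public static class DatabaseFactory
    {
        public static IMongoDatabase Connect()
        {
            // Pull the Cosmos DB (MongoDB API) connection string from an environment
            // variable instead of hard-coding it in source or checked-in config files.
            var connectionString = Environment.GetEnvironmentVariable("USDA_DB_CONNECTION");
            if (string.IsNullOrEmpty(connectionString))
            {
                throw new InvalidOperationException("Set the USDA_DB_CONNECTION environment variable.");
            }

            var client = new MongoClient(connectionString);
            return client.GetDatabase("usda");
        }
    }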

The usda-importer project parses files from the USDA database website and loads them into a document database using the MongoDB driver. The source database is a relational database with 12 tables.

The import project builds a set of domain objects that are natural C# classes you work with every day. It stores the data in just three collections: two to enable retrieving a list of food groups and nutrients, and a third that contains food items. The food item entries contain a list of weights used to measure them, and references to nutrients all in the same document. De-normalizing the content not only simplifies development but also makes the programming model more natural. Read more about the process here: JSON and the MongoDB Driver for the .NET Developer.

[Image: the three MongoDB collections created by the importer]

The usda-console-test project verifies the import when it is done. The usda-web-api project is an ASP.NET Web API project with endpoints that allow you to query the database. It lists food groups and nutrients and allows you to search for food items. You can also sort food items by nutrient content to answer questions like, “Which vegetables contain the most protein?” or “What nuts and seeds have the highest calcium content?”

Take a look at the following code snippet.
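
A minimal sketch of what such a query can look like with the MongoDB .NET driver; the FoodItem class, field names, and helper method are illustrative assumptions rather than the repository's actual types:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using MongoDB.Driver;

    // Hypothetical domain type; the real project's classes may differ.
    public class FoodItem
    {
        public string Id { get; set; }
        public string Description { get; set; }
        public string FoodGroupId { get; set; }
        public Dictionary<string, double> Nutrients { get; set; }
    }

    public static class FoodQueries
    {
        public static async Task<List<FoodItem>> TopFoodsByNutrientAsync(
            IMongoCollection<FoodItem> foods, string foodGroupId, string nutrientTag)
        {
            // Limit to a single food group, sort descending by the requested nutrient
            // (the "tag"), and project only the fields the caller needs.
            var filter = Builders<FoodItem>.Filter.Eq(f => f.FoodGroupId, foodGroupId);
            var sort = Builders<FoodItem>.Sort.Descending($"Nutrients.{nutrientTag}");
            var projection = Builders<FoodItem>.Projection
                .Include(f => f.Description)
                .Include($"Nutrients.{nutrientTag}");

            // Take the first 100 matches.
            return await foods.Find(filter)
                .Sort(sort)
                .Project<FoodItem>(projection)
                .Limit(100)
                .ToListAsync();
        }
    }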

The example will dynamically sort foods by nutrient content (passed in as the tag). The projection ensures only the portions of the document that are needed are returned. The query limits it to a specific food group and takes the first 100 items. The code is all part of the MongoDB driver, and doesn’t require an additional ORM to map between the database and your domain objects. You can also query using LINQ.

Part of the power of Cosmos DB is that the engine automatically generates indexes as you store data. Queries are lightning fast. I almost always receive results querying the top 100 nutrients within milliseconds of submitting the request. Even text searches that look for strings inside the title or description return results quickly.

The usda-web-vuejs project contains a simple HTML document and JavaScript file. It uses the Vue.js framework to create a single page application (SPA). The application uses the Web API to query the database. You can open the file locally or use the included Dockerfile to build and run a Docker container. The repository contains all of the necessary instructions to get up and running. You will find most queries return results immediately from the application.

[Screenshot: the food search single page application UI]

Summary

It is important that developers choose the right tool for the job. There is no rule that states you cannot use multiple data platforms in your applications. Every .NET developer should be aware of the NoSQL option. In many cases, it can simplify your application development by eliminating the need for an ORM. You also can store documents exactly as they are modeled in your application. Azure Cosmos DB makes it possible, and easier than ever, to create a highly available and reliable cloud database for your apps.

Click here to get started with the Cosmos DB sample.

How to configure Azure SQL Database Geo-DR with Azure Key Vault


Azure SQL Database and Data Warehouse offer encryption-at-rest by providing Transparent Data Encryption (TDE) for all data written to disk, including databases, log files and backups. This protects data in case of unauthorized access to hardware. TDE provides a TDE Protector that is used to encrypt the Database Encryption Key (DEK), which in turn is used to encrypt the data. With the TDE and Bring Your Own Key (BYOK) offering currently in preview, customers can take control of the TDE Protector in Azure Key Vault.

Taking advantage of TDE with BYOK for databases that are geo-replicated to maintain high availability requires careful configuration and testing of the scenario. This post will go over the most common configuration options.

To avoid creating a single point of failure in active geo-replicated instances or SQL failover groups, it is required to configure redundant Azure Key Vaults. Each geo-replicated server requires a separate key vault that must be co-located with the server in the same Azure region. Should a primary database become inaccessible due to an outage in one region and a failover be triggered, the secondary database is able to take over using the secondary key vault.

For Geo-Replicated Azure SQL databases, the following Azure Key Vault configuration is required:

  • One primary database with a key vault in the primary region and one secondary database with a key vault in the secondary region.
  • At least one secondary is required, up to four secondaries are supported.
  • Secondaries of secondaries (chaining) are not supported.

[Diagram: Geo-DR configuration with one key vault per region]

When starting out with a new Geo-DR configuration, it is important to ensure that the same TDE Protectors are present in both key vaults, before proceeding to establish the geo-link between the databases.

When working with an existing Geo-DR deployment, it is required to assign the key vault to the secondary server first. Once the secondary server has been configured with Azure Key Vault, the same step can be completed for the primary server, and only after that should the TDE Protector be updated. The existing Geo-DR link will continue to work because at this point the TDE Protector used by the replicated database is available to both servers.

Please visit our documentation for detailed step-by-step configuration guidance.

Summary

Before enabling TDE with customer managed keys in Azure Key Vault for a SQL Database Geo-DR scenario, it is important to create and maintain two Azure Key Vaults with identical contents in the same regions that will be used for SQL Database geo-replication. “Identical contents” specifically means that both key vaults must contain copies of the same TDE Protector(s) so that both servers have access to the TDE Protectors used by all databases. Going forward, it is required to keep both key vaults in sync. This means both key vaults must contain the same copies of TDE Protectors after key rotations and must maintain old versions of keys used for log files or backups. In addition, the TDE Protectors must keep the same key properties and the key vaults must maintain the same access permissions for SQL. A best practice is to test and trigger failovers on a regular basis to ensure the scenario continues to work. More information on how to manage and trigger failovers is outlined in the failover groups documentation page.

A year’s worth of cloud, AI and partner innovation. Welcome to NAB 2018!


As I reflect on cloud computing and the media industry since last year’s NAB, I see two emerging trends. First, content creators and broadcasters such as Rakuten, RTL, and Al Jazeera are increasingly using the global reach, hybrid model, and elastic scale of Azure to create, manage, and distribute their content. Second, AI-powered tools for extracting insights from content are becoming an integral part of the content creation, management and distribution workflows with customers such as Endemol Shine Group and Zone TV.

Therefore, at this year’s NAB, we are focused on helping you modernize your media workflows, so you can get the best of cloud computing and AI.  We made a number of investments to enable better content production workflows in Azure, including the recent acquisition of Avere Systems. You can learn more about how Azure can help you improve your media workflows and business here.

Read on to learn more about the key advancements we’ve made – in media services, distribution and our partner ecosystem – since last year’s IBC.

Azure Media Services

Democratizing AI for Media Industry: Since its launch at NAB 2016, Azure Media Analytics has come a long way. At Build 2017, we launched Video Indexer, an application built by combining Azure Media Analytics with Cognitive services and Azure Search. Video Indexer showcases the best of what is achievable in Azure with AI technologies for video. In the past few months, Video Indexer added several new features including detection of business brands from transcripts, translation of transcripts to 54 languages, integration with custom speech, ability to customize brand detection, and several updates to player and insights widgets. Video Indexer has been used by a number of customers including Box, AVID, Endemol Shine Group, Genetec, Ooyala, and Zone TV to improve search, engagement, monetization and accessibility in their media apps. Please refer to this case study for more details on how Endemol Shine Group worked with Microsoft to implement a ground-breaking Serverless workflow solution powered by Azure Media Analytics, specifically speech-to-text, face recognition, and motion detection technology, to automate scene selection during the production of Big Brother, a TV reality show.

The Media Services platform is now deeply integrated with Microsoft’s Video Indexer capabilities to make it the first cloud video intelligence platform to enable customers to train the AI engine to meet their unique industry needs. By integrating with Custom Speech, Media Services now enables customers to improve the accuracy of video transcription. Developers can create focused vocabularies that provide a unique constrained set of words to detect for industries like finance, healthcare, academics, sciences, legal, and more. This enables customers to minimize the amount of manual labor and cleanup required today when using the default vocabulary models provided by Microsoft.

Enterprise video

Microsoft Stream became generally available 9 months ago.  Since then new features, integrations and enhancements have been added at a rapid pace to enrich video communication and collaboration across teams and organizations. For a look at recent feature announcements, read the latest blog post.

Growing partner ecosystem

Our partners continue to innovate with unique features that extend our broad platform capabilities.

One year in, the Avid and Microsoft strategic cloud partnership is accelerating with major product deliveries, joint marketing, and customer deployments – demonstrating that our companies are perfectly complementary in our mission to lead the media and entertainment industry to the cloud. At NAB 2018, Avid and Microsoft are showcasing further product deliveries coming in the second half of 2018, including a new suite of on-demand cloud services and solutions to reimagine the content supply chain of the largest media organizations, and reimagine the creative production and post production process for creative teams and independent artists everywhere.

uCast, a global Internet TV network and monetization platform, has selected Azure as its exclusive cloud platform provider for its next-generation media monetization platform. We are working together to onboard customers to this new solution.

MakeTV’s cloud-based live video acquisition and management platform, Live Video Cloud (LVC) is now available on Azure. Check out this link for details on how LVC enables AI supported curation and filtering of hundreds of live video feeds.

Wowza has developed a new Ultra low latency (ULL) live streaming solution, exclusively available in Azure for now, that can stream live to any geography with no more than 2-3 seconds latency. To achieve this ultra-low latency, Wowza leveraged the Azure Backbone network that provides high speed connectivity to all Azure Regions across the globe. For more details, please refer to this case study.

Net Insight’s ultra-low latency live OTT Streaming solution, Sye, is now available on Azure and has been deployed by two customers. Sye includes functionality to synchronize individual feeds across devices throughout the entire experience. The Sye solution brings a TV-like viewing experience to anyone on an OTT platform, including features such as instant playback, fast channel changes, seamless ad-insertion, and an exceptional viewing experience.

We worked closely with our partner Haivision to develop SRT Receiver as a Service on Azure. Now, customers can use SRT in addition to RTMP for secure and reliable ingest into the cloud. Haivision will be showing a demo at its booth and at Microsoft booth.

Irdeto’s TraceMark session-based OTT VOD watermarking solution on Azure is fully integrated with Azure Media Services and Azure CDN. Please check out the demo at the Microsoft booth at NAB.

Since November 2017, Sinclair Broadcast Group (SBG) has been using our partner Imagine Communications’ cloud-based playout solution running on Azure to broadcast a three-hour block of children’s programming, every morning, with a full commercial load.

Amagi is heralding a new era of cloud broadcast with artificial intelligence and machine learning-based content preparation services. It aims to improve efficiencies by nearly six times as compared to traditional, manually-intensive content preparation modules. This AI-driven approach will eventually expand into a larger cognitive playout infrastructure, which puts TV networks and content owners in a stronger position to address changing viewer dynamics. The Amagi solution is nominated for product innovation at the IABM's Broadcast and Media (BaM) Awards at NAB this year. Microsoft Azure customers now have the advantage of partnering with Amagi not just for cloud playout, but also for upstream broadcast workflow activities.

Finally, Microsoft is pleased to welcome Verizon Digital Media Services (VDMS) to our booth to showcase how they are partnering with our CDN field team to use Azure for deployment and content distribution.

“We are happy to join Microsoft at NAB this year to showcase Azure and VDMS together. Our strong partnership and technologies coming together enable flexible and highly performant ways to build, create and distribute content using a one-platform approach, something that is highly attractive to our customers.”

- Ralf Jacob, President, Verizon Digital Media Services

Stay tuned for more Azure blog posts that will dive deeper into these announcements.

Come see us at NAB 2018

If you’re attending NAB Show 2018, I encourage you to stop by our booth SL6716. In addition to great demos and product team representatives from Azure Media Services, Azure Storage, Azure HPC, Microsoft Stream, and Microsoft Skype for Broadcast, we will also feature the following partner showcases:

  • 360 Collaboration
  • Avere
  • Avid
  • GreyMeta
  • Make.TV
  • Nimble Collective
  • Ooyala
  • Prime Focus Technologies
  • Teradici
  • Teknikos
  • Verizon Digital Media Services

Microsoft will also feature an in-booth presentation theatre with customer, partner and product presentations scheduled throughout the day – a detailed schedule of all presentations and speakers is available here.

If you are not attending the conference but would like to learn more about our media services, follow the Azure Blog to stay up to date on new announcements.

Thank you!

Finally, a big thank you to all our dedicated and growing community of developers, customers, and partners that continues to provide valuable and actionable feedback.

From Microsoft Azure to everyone attending NAB Show 2018 — Welcome!


NAB Show is one of my favorites. The creativity, technology, content and storytelling are epic, as is the digital transformation well underway.

This transformation — driven by new business models, shifting consumption patterns and technology advancements — will change this great industry. What won’t change is the focus on creators and connecting their content with consumers around the world so they, too, can be a part of the story.

Microsoft’s mission is to “empower every person and every organization on the planet to achieve more.” We are committed to helping everyone in the industry — customers like Endemol Shine, UFA, Jellyfish and Reuters — do just that. With innovations across cloud storage, compute, CDN, Media Services and new investments in Avere Systems and Cycle Cloud, Microsoft Azure is ready to help modernize your media workflows and your business.

How? Well, cue scene…

Your productions are increasingly global and demanding. Azure can help.

Creators, media artists and innovators want to work together in a flexible, secure and collaborative way from wherever they are. More datacenters, in more regions of the world, than all competitors combined means the show can go on, everywhere. That means a film crew doing a car chase scene through the streets of Monaco can upload the day’s rough cuts into Azure, while dailies are viewed by editors in Hollywood. Bollywood. Or a beach house in Malibu.

Increasing content resolution levels from 4K to 8K means increased production and post-production data storage requirements. Virtual Reality, predicted to go as high as 16K (the resolution of the human eye!), and 360° viewing formats will push those requirements even higher. Azure can help with pay-for-what-you-use storage options, from High Performance SSD to Archive Storage, at an industry-leading price — leaving you more time and budget to get the scene ‘right’.

Similarly, sophisticated new VFX, VR and rendering technologies mean increased compute requirements. With Azure, you get all the CPU and GPU processing power you need, for as long as you need, only paying for what you use. Low Priority VMs and Reserved Instances provide flexible ways to maximize your compute budget. Azure Cycle Cloud can be used to orchestrate the workflows, storage and compute of jobs in Azure and provide critical compliance and governance support. Avere Systems, a new member of the Microsoft Azure team, takes this one step further with industry-leading options that accelerate the performance of your storage, regardless of location.

Content management and intelligence are a big challenge. Enter Azure + Cognitive Services.

You need to store today’s productions, every production ever made and every creative idea. Who knows when you’ll need them again? And you need to discover, identify and tag content — quickly, easily and automatically.

With Azure’s global scale and convenient storage tiers, you can store your content at the right cost, close to where you need it. It’s a cost-effective solution for your exabytes of tape-based media archives. In fact, our partners make it easy to move your data into Azure from almost any media — tapes, optical drives, hard disks or film.

When you move your media to the cloud, you can analyze it with Cognitive Services and benefit from Microsoft’s AI platform. These services can automatically transcribe speech — making dialog searchable — or comb through mounds of archived video and audio to find content featuring a certain actress or actor.

You can reach global audiences more easily and gain new insights with Azure + Microsoft AI

Azure Media Services sets your content free with a proven platform for large scale live events, linear TV and on-demand content. In addition, you can securely distribute content to theaters via ExpressRoute networks and offline via Disk and Data Box. For even broader distribution, Azure CDN provides an uncluttered highway to your consumers, whether your content is static, dynamic or real-time.

As content is delivered, Azure’s advanced analytics and Microsoft’s AI platform go to work to discover insights that help you segment audiences, create personalized experiences and influence production and editorial decisions.

To help bring your production to life, a broad ecosystem awaits

You’ll have access to partner solutions spanning dailies, editorial, VFX, color correction, OTT, playout and more. Check out Sudheer’s blog to learn which partners and product features we’ve added since last year’s IBC.

While the digital transformation of this industry is well underway, it’s far from over

Click here for a sneak peek into our not-too-distant future, brought to you by Microsoft Azure and our partners.

If you’re attending NAB Show 2018, we hope to see you! Stop by our booth SL6716 to:

  • Chat with product team representatives from Azure Media Services, Azure Storage, Azure HPC, Microsoft Stream and Microsoft Skype for Broadcast.
  • Visit with partners from 360 Collaboration, Avid, GreyMeta, Make.TV, Nimble Collective, Ooyala, Prime Focus Technologies, Teradici, Teknikos and Verizon Digital Media Services.
  • See some great customer, partner and product team presentations. A detailed schedule is available here.

Thanks for reading and have a great show!

Tad


Azure.Source – Volume 26


Last week, Azure expanded its global footprint with new regions available for Australia and New Zealand. In addition, updates were announced that can reduce your Azure costs, such as flexible new ways to purchase Azure SQL Database, a new inexpensive pricing tier in Azure IoT Hub for device-to-cloud telemetry, and a new pricing model for Azure monitoring services.

Now in preview

A flexible new way to purchase Azure SQL Database - Recently announced with SQL Database Managed Instance, the vCore-based purchasing model reflects our commitment to customer choice by providing flexibility, control, and transparency. As with Managed Instance, the vCore-based model makes the Elastic Pool and Single Database options eligible for up to 30 percent savings with the Azure Hybrid Benefit for SQL Server.

SQL Database: Long-term backup retention preview includes major updates - The preview of long-term backup retention in Azure SQL Database, announced in October 2016, has been updated with a set of major enhancements based on customer feedback requesting more regional support, more flexible backup policies, management of individual backups, and streamlined configuration.

Also in preview

Public preview: Read scale-out support for Azure SQL Database

    Now generally available

    Application Security Groups now generally available in all Azure regions - Application Security Groups enable you to define fine-grained network security policies based on workloads, centralized on applications, instead of explicit IP addresses. They provide the capability to group VMs with monikers and secure applications by filtering traffic from trusted segments of your network.

    Also generally available

    News & updates

    New Microsoft Azure regions available for Australia and New Zealand - Two new Microsoft Azure regions in Australia are available to customers, making Microsoft the only global provider to deliver cloud services specifically designed to address the requirements of the Australian and New Zealand governments and critical national infrastructure, including banks, utilities, transport and telecommunications. Microsoft offers Azure from three cities in Australia including Sydney, Melbourne and Canberra with connectivity to Perth, Brisbane and Auckland.

    Azure IoT Hub is driving down the cost of IoT - Many customers start their IoT journey by sending data from devices to the cloud, which we refer to as “device to cloud telemetry.” We added a new capability of Azure IoT Hub to support this called the IoT Hub basic tier, which is a very inexpensive way to start your IoT journey, and despite the name, there is nothing basic about it. It supports inbound telemetry scenarios and has all the same security, scale, performance, and reliability of the existing Azure IoT Hub standard tier. In addition, due to efficiencies we built into Azure IoT Hub, we are passing along our savings with a 50 percent reduction in cost of our standard tier.

    Introducing a new way to purchase Azure monitoring services - To make it even easier to adopt Azure monitoring services, we announced a new consistent purchasing experience across our application, infrastructure, and network monitoring services. Three key attributes of this new pricing model are: consistent pay-as-you-go pricing, consistent per gigabyte (GB) metering for data ingestion, and a choice of pricing models for existing customers.

    Improvements to SQL Elastic Pool configuration experience - Existing elastic pools can now be scaled up and down between service tiers. You can easily move between service tiers, switch between the DTU-based and the new vCore-based service tiers, and scale down your pool outside of business hours to save cost. Elastic pools offer many settings for customers to customize. The new experience aims to separate and simplify each aspect of pool management, between the pool settings, database settings, and database management.

    Upcoming improvements to the Azure AD sign-in experience - Since we released the redesign of the sign-in screens a few months ago, we’ve gotten feedback on how we can further improve the new UI. Our next set of changes aims to reduce clutter and make our screens look cleaner. A visually simpler UI helps users focus on the task at hand – signing in. This is solely a visual UI change with no changes to functionality. Existing company branding settings will carry forward to the updated UI.

    Additional news & updates

    Technical content & training

    Ingest, prepare, and transform using Azure Databricks and Data Factory - With the general availability of Azure Databricks comes support for doing ETL/ELT with Azure Data Factory. Learn how this integration enables you to operationalize ETL/ELT workflows (including analytics workloads in Azure Databricks) using data factory pipelines that ingest data at scale, prepare & transform ingested data, and monitor & manage your end-to-end workflow.

    Unique identities are hard: How I learned to stop worrying and love the ID scope - The IoT Hub Device Provisioning Service is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the right IoT hub without requiring human intervention, allowing customers to provision millions of devices in a secure and scalable manner. Device creation and digital enrollment can occur at separate times, and this requires some sort of scoping to prevent conflicts. In this post, you'll learn why the ID scope concept was introduced to scope identities to a particular tenant Device Provisioning Service (DPS).

    Fast and easy development with Azure Database for MySQL and PostgreSQL - Learn how easy it is to get small apps and prototypes started with very little effort using Azure Database for MySQL and Azure Database for PostgreSQL while taking advantage of built-in security, fault tolerance, and data protection. Azure takes care of the infrastructure and maintenance for you, and these two database services will work with pretty much any app platform, development language, and configuration you want to use.

    Customer & partners

    Accelerate Data Warehouse Modernization to Azure with Informatica’s AI-Driven Platform - Informatica’s AI-driven Intelligent Data Platform is a modular micro services architecture that accelerates your Azure SQL Data Warehouse project deployment by automating your data integration development lifecycle, including connectivity, development, deployment, and management. Informatica offers out-of-the box connectivity to hundreds of on-premises and cloud data sources and pre-built interoperability to all Azure data services. The Informatica Intelligent Data Platform offers comprehensive solutions for Azure, including data integration, data discovery and preparation, data lakes and big data management, master data management, data quality and data security to accelerate Azure deployment and deliver trusted data in the cloud, on-premises and across hybrid environments.

    Developer spotlight

    Real-time Personalized Experiences at Global Scale - Building real-time personalized experiences for your customers, that operate efficiently on a global scale, can be a time-consuming and daunting task. This interactive scenario walks you through things you can do to create such experiences, allowing you to try some of the techniques for yourself without needing to set anything up.

    Modernizing cloud-based web apps - Module 1 of this scenario, Build globally-distributed, multi-model database service for mission-critical applications using Cosmos DB, evolves and grows the e-Commerce web application introduced in last week's scenario by adding Azure Cosmos DB, which offers turnkey global distribution across any number of Azure regions by transparently scaling and replicating your data wherever your users are.

    Sample: Create & Query Graphs using Gremlin API - Azure Cosmos DB provides the Graph API for applications that need to model, query, and traverse large graphs efficiently using the Gremlin standard. This sample shows how to create and query graphs in Azure Cosmos DB using the open-source connector Gremlin.NET in C#.

    Your Game, Everywhere - PlayFab (Game Services) - Brendan Vanous, Head of Developer Success at PlayFab, provides an overview of Game Services on PlayFab's backend platform in the cloud for live games.

    Your Game, For Everyone - PlayFab (LiveOps) - Brendan Vanous, Head of Developer Success at PlayFab, provides an overview of Live Ops on PlayFab's backend platform in the cloud for live games.

    How to Run Massively Scaling Mobile Games on Microsoft Azure - Hear how Next Games, the exclusive license owner for games related to the hit TV series "The Walking Dead," built its infrastructure and a generative platform on Azure.

    Azure shows

    Tuesdays with Corey | Open Sourcing Microsoft Service Fabric - Corey Sanders, Corporate VP - Microsoft Azure Compute team sat down with Vaclav Turecek, Principal PM on the Service Fabric Team to talk about transitioning Azure Service Fabric to a true Open Source offering.

    Azure Friday | Backup and Recovery with Azure Files - Mike Emard joins Scott Hanselman to discuss Azure Files, which offers fully managed file shares in the cloud, and deep dive into the latest backup and restore capability in Azure Files.

    Azure Friday | Metaparticle - A standard library for cloud-native applications on Kubernetes - Metaparticle is an experimental, cloud-native development environment for democratizing and simplifying the development of reliable distributed applications. It uses a code-first approach, so developers can describe the composite nature of their application with easy to apply patterns and practices to annotate their existing code.

    The Azure Podcast | Episode 223 - Azure Storage Options - Kendall, Cynthia, Evan and Cale discuss a fundamental feature of Azure - Storage! They talk about the various possibilities to store data in Azure and break down the use-cases for each of those options.

    Azure tips & tricks

    Add and Reorder Favorites in the Azure Dashboard

    Use the Table Parameter in the Azure CLI

    MSVC now correctly reports __cplusplus


    The MSVC compiler’s definition of the __cplusplus predefined macro leaps ahead 20 years in Visual Studio 2017 version 15.7 Preview 3. This macro has stubbornly remained at the value “199711L”, indicating (erroneously!) that the compiler conformed to the C++98 Standard. Now that our conformance catch-up work is drawing to a close we’re updating the __cplusplus macro to reflect the true state of our implementation. The value of the __cplusplus macro doesn’t imply that we no longer have any conformance bugs. It’s just that the new value is much more accurate than always reporting “199711L”.

    /Zc:__cplusplus

    You need to compile with the /Zc:__cplusplus switch to see the updated value of the __cplusplus macro. We tried updating the macro by default and discovered that a lot of code doesn’t compile correctly when we change the value of __cplusplus. We’ll continue to require use of the /Zc:__cplusplus switch for all minor versions of MSVC in the 19.xx family.

    The version reported by the __cplusplus macro also depends upon the standard version switch used. If you’re compiling in C++14 mode the macro will be set to “201402L”. If you compile in C++17 mode the macro will be set to “201703L”. And the /std:c++latest switch, used to enable features from the Standard currently in development, sets a value that is more than the current Standard. This chart shows the values of the __cplusplus macro with different switch combinations:

    /Zc:__cplusplus switch           /std:c++ switch                 __cplusplus value
    /Zc:__cplusplus                  Currently defaults to C++14     201402L
    /Zc:__cplusplus                  /std:c++14                      201402L
    /Zc:__cplusplus                  /std:c++17                      201703L
    /Zc:__cplusplus                  /std:c++latest                  201704L
    /Zc:__cplusplus- (disabled)      Any value                       199711L
    /Zc:__cplusplus not specified    Any value                       199711L

    Note that the MSVC compiler does not, and never will, support a C++11, C++03, or C++98 standards version switch. Also, the value of the __cplusplus macro is not affected by the /permissive- switch.

    We’re updating IntelliSense to correctly reflect the value of __cplusplus when compiling with MSVC. We expect IntelliSense to be correct in the next preview of 15.7.

    _MSC_VER and _MSVC_LANG

    For finer-grained detection of changes in the MSVC toolset you can continue to use the _MSC_VER predefined macro. We have updated the value of this built-in macro with every toolset update in Visual Studio 2017 and will continue to do so.

    The _MSVC_LANG predefined macro continues to report the Standard version switch regardless of the value of /Zc:__cplusplus. _MSVC_LANG is set whether or not the /Zc:__cplusplus option is enabled. When /Zc:__cplusplus is enabled, __cplusplus == _MSVC_LANG.

    Please look for usage in your code

    We’ve heard repeatedly from developers as we’ve gotten closer to complete conformance that we need to update the value of this macro. Now we need help from you. We tried to define __cplusplus correctly by default but discovered that a lot of code expects MSVC to always set the macro to “199711L”.

    Please take a moment to search your code for references to __cplusplus and compile with the /Zc:__cplusplus switch enabled. Your code might be using this macro to determine if it’s being compiled with MSVC or Clang in MSVC-emulation mode. If your codebase is really old, it might be using this macro to determine if you’re using VC++ 6.0! Take a moment to compile with this switch enabled. We need the ecosystem to move forward so we can set __cplusplus accurately by default.

    In closing

    As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com).

    If you encounter other problems with MSVC in Visual Studio 2017 please let us know through Help > Report A Problem in the product, or via Developer Community. Let us know your suggestions through UserVoice. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

    C++ code analysis: configure rules from the command line


    This post was written by Sunny Chatterjee and Andrew Pardoe

    Visual Studio version 15.7 Preview 3 introduces a new MSVC compiler switch, /analyze:ruleset, that configures code analysis runs. The primary motivation for this switch is to enable developers who are using C++ Code Analysis without using MSBuild to filter rules. But developers who are using code analysis with MSBuild also benefit from this switch: code analysis runs are faster, increasing your compilation throughput.

    What are code analysis rulesets?

    Code analysis rulesets allow you to choose what code analysis results you see when you analyze your code. Code analysis rulesets are found in Project > Properties > Code Analysis > General. A new C++ project by default has the ruleset “Microsoft Native Recommended Rules” selected.

    You can select from rulesets you wish to apply to your project with the highlighted dropdown.

    Visual Studio comes with a handful of built-in rulesets you can choose from. They’re located in %VSINSTALLDIR%\Team Tools\Static Analysis Tools\Rule Sets. We’re increasing this set of rules—stay tuned to the VC Blog for more information on this.

    You can also create your own custom ruleset and apply that for your project. To create a custom ruleset, go to File > New > File > General > Code Analysis Rule Set.

    Prior to Visual Studio 2017 version 15.7 Preview 3, all rules are run every time you run C++ code analysis. When you run code analysis for a given project, checkers like C++ Core Check generate a list of defects. After code analysis finishes, there’s an MSBuild task which merges the list of defects together and filters them according to the ruleset selected for the project. You’re only shown the warnings that apply to your currently selected code analysis ruleset.

    While the old mechanism works great if you are building inside VS environment, there are a couple of areas where it falls short. First, if you are using the VS compiler toolset in your custom build environment, you don’t get any configuration options through rulesets. You must write your own tool for filtering defects as per your needs. Second, inside the current VS environment itself, the ruleset filtering mechanism is essentially a post processing tool – the checkers do all the work to generate the defect, which then gets filtered out in the end. We added /analyze:ruleset in the MSVC compiler toolset to overcome these shortcomings in the code analysis experience.

    How does /analyze:ruleset work?

    The new /analyze:ruleset option can be used with any build configuration: inside or outside of VS, using MSBuild, Ninja, or a custom build system. This new option allows the compiler to directly filter out defects based on the set of rules specified in the ruleset. Now that the compiler has knowledge of what rules are active, it can pass on that knowledge to the individual checkers, so they can make smart decisions. For example, if the ruleset only specifies rules for type safety, more expensive checks like lifetimes can turn themselves off, so you only pay for what you need in terms of analysis cost. Not having to run unselected rules means that your code analysis experience is faster and more fluid.

    Using /analyze:ruleset

    Taking advantage of this new switch is simple: just define your own ruleset files and pass that option to the compiler when running code analysis. This is best illustrated with a step-by-step example. In this example, we’ll detect all the defects related to uninitialized variables in our program.

    1. First, we need to identify the set of rules which will detect uninitialized variables and memory. For this example, we pick two rules that’ll help us, C6001 and C26494.
    2. Now we need to create a ruleset file which contains the set of rules we selected. We can do this in Visual Studio as shown above, or we can manually create a ruleset by authoring a simple XML file.
    3. Now we have a ruleset file, shown below, that we’ve saved as UninitVariable.ruleset.
      <?xml version="1.0" encoding="utf-8"?>
      <RuleSet Name="New Rule Set" Description=" " ToolsVersion="15.0">
        <Rules AnalyzerId="Microsoft.Analyzers.NativeCodeAnalysis" RuleNamespace="Microsoft.Rules.Native">
          <Rule Id="C6001" Action="Warning" />
          <Rule Id="C26494" Action="Warning" />
        </Rules>
      </RuleSet>
      
    4. For this example, our test file looks like below. We name it test.cpp.
      int f( bool b )
      {
         int i;
         if ( b )
         {
            i = 0;
         }
         return i; // i is uninitialized if b is false
      }
      
    5. We run code analysis without any configuration option and observe the following warnings:
      E:\test>cl.exe /c test.cpp /analyze:plugin EspXEngine.dll
      
      Microsoft (R) C/C++ Optimizing Compiler Version 19.14.26329 for x86
      Copyright (C) Microsoft Corporation.  All rights reserved.
      
      test.cpp
      e:\test\test.cpp(8) : warning C6001: Using uninitialized memory 'i'.: Lines: 3, 4, 8
      e:\test\test.cpp(3) : warning C26494: Variable 'i' is uninitialized. Always initialize an object (type.5).
      e:\test\test.cpp(1) : warning C26497: The function 'f' could be marked constexpr if compile-time evaluation is desired (f.4).
      e:\test\test.cpp(1) : warning C26440: Function 'f' can be declared 'noexcept' (f.6).
      
    6. Next, we pass the additional compiler option to specify our custom ruleset for identifying uninitialized variables: /analyze:ruleset UninitVariable.ruleset.
      E:\test>cl.exe /c test.cpp /analyze:plugin EspXEngine.dll /analyze:ruleset UninitVariable.ruleset
      
      Microsoft (R) C/C++ Optimizing Compiler Version 19.14.26329 for x86
      Copyright (C) Microsoft Corporation.  All rights reserved.
      
      test.cpp
      e:\test\test.cpp(8) : warning C6001: Using uninitialized memory 'i'.: Lines: 3, 4, 8
      e:\test\test.cpp(3) : warning C26494: Variable 'i' is uninitialized. Always initialize an object (type.5).
      

    With the /analyze:ruleset option, code analysis only runs the rules for uninitialized variables and the additional warnings which were not related to these rules don’t show up anymore.

    In closing

    We hope that you’ll find the /analyze:ruleset option useful for configuring code analysis runs in your private build environments. We’ve started taking advantage of it already! For example, our code analysis targets file in Visual Studio now passes the /analyze:ruleset option to the compiler when running code analysis. This way we can optimize our checks based on the selected ruleset. We’ll be introducing new default rulesets in the future as well as providing support in Visual Studio to run C++ code analysis in build environments like CMake for Ninja and Visual Studio.

    As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com).

    If you encounter other problems with MSVC in Visual Studio 2017 please let us know through Help > Report A Problem in the product, or via Developer Community. Let us know your suggestions through UserVoice. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

    Spectre mitigation changes in Visual Studio 2017 Version 15.7 Preview 3


    With Visual Studio 2017 version 15.7 Preview 3 we have two new features to announce with regards to our Spectre mitigations. First, the /Qspectre switch is now supported regardless of the selected optimization level. Second, we have provided Spectre-mitigated implementations of the Microsoft Visual C++ libraries.

    Complete details are available in context in the original MSVC Spectre mitigation post on VCBlog. Changes in Preview 3 are also listed below.

    In previous versions of MSVC we only added Spectre mitigations when code is being optimized. In Visual Studio 2017 version 15.7 Preview 3 we’ve added support for /Qspectre regardless of your optimization settings. This feature is currently only available in Visual Studio version 15.7 Preview 3 and future releases.

    We’re also adding Spectre-mitigated implementations of the Microsoft Visual C++ libraries. Visual Studio 2017 version 15.7 Preview 3 includes runtime libraries with mitigation enabled for a subset of the Visual C++ runtimes: VC++ start-up code, vcruntime140, msvcp140, concrt140, and vcamp140. We also include static library equivalents of those libraries. We are providing static linking support and App Local deployment only; the contents of the Visual C++ 2017 Runtime Libraries Redistributable have not been modified.

    You must select these libraries for installation in the VS Installer under the Individual Components tab:

    To enable Spectre mitigations for both your code and library code, simply select “Enabled” under the “Code Generation” section of the project Property Pages:

    Current status

    The following table shows the status of supported features in the versions of Visual Studio with Spectre mitigations available in the MSVC toolset:

    Visual Studio Version (as of April 4, 2018) /Qspectre with optimizations /Qspectre without optimizations X86 and Amd64 Arm and Arm64 Mitigated libs
    VS 2015 Update 3
    VS 2017 RTW 15.0 (26228.23)
    VS 2017 15.5.5
    VS 2017 15.6
    VS 2017 15.7

    In closing

    We on the MSVC team are committed to the continuous improvement and security of your Windows software which is why we have taken steps to enable developers to help mitigate variant 1.
    We encourage you to recompile and redeploy your vulnerable software as soon as possible. Continue watching this blog and the @visualc Twitter feed for updates on this topic.

    If you have any questions, please feel free to ask us below. You can also send us your comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp. Thank you.

    CMake Support in Visual Studio – Targets View, Single File Compilation, and Cache Generation Settings


    Visual Studio 2017 15.7 Preview 3 is now available, which includes several improvements to the CMake tools.  The latest preview offers more control than ever over how to visualize, build, and manage your CMake projects.

    Please download the preview and check out the latest CMake features such as the Targets View, single file compilation, and more control over when projects are configured.  As always, we would love to hear your feedback too.

    If you are new to CMake in Visual Studio, check out how to get started.

    CMake Targets View

    The latest preview of Visual Studio offers a new way to visualize your CMake projects’ source and structure.  When you open a CMake project in Visual Studio, you see the project’s layout on disk in the Solution Explorer.  Depending on the way your project is organized, this disk-based view may not be a good reflection of the actual organization of your CMake project.  This is especially true if your project includes files outside of the folder or if it conditionally includes files depending on the active configuration.

    The newly added Targets View allows you to visualize the structure of a CMake project in the Solution Explorer.  In this view, source code is organized under individual CMake targets and projects.  You can build and debug individual targets by right clicking them in the Solution Explorer.  You can also see the relationships and dependencies between targets under the References node.  You also have more options to customize the displayed structure of your targets and source code – see Organizing Targets and Source below.

    You can show the Targets View by clicking on the view dropdown in the Solution Explorer:

    If you have worked with the projects and Solutions generated by CMake before, you should feel right at home.  There is a top-level project node where CMake’s Visual Studio generator would have created a Solution and each CMake target shows up under this project with its source code.  The CMake generator would have created individual MSBuild projects for each of these targets.  One thing to keep in mind, however, is that while the view is similar, no MSBuild projects or Solutions are created.  The view is driven directly by the content of the CMake files.

    Organizing Targets and Source

You can also control the organization of the targets and source code.  Targets can be organized by enabling the USE_FOLDERS global property and setting the FOLDER property on individual targets.  Source code can be organized under a target using source_group.  These directives work with all CMake IDE generators (including the Visual Studio generator) so if you already have them set up they will also work with the Targets View.

    The Targets View shows a representation of the CMake project’s structure.  Currently, you cannot manipulate this structure from the Targets View.  To modify the project’s structure, you will need to manually modify your project’s CMake list files.

    Single File Compilation for CMake Projects

You can now build single files belonging to CMake projects just like you can for MSBuild projects.  Right click on any file in the Solution Explorer and select “Compile”, or build the file that is open in the editor via the main CMake menu:

    CMake Cache Generation Settings

    Visual Studio automatically configures and generates the cache for your CMake projects by default when you open them for the first time.  This allows the IDE to provide a rich editing, build, and debugging experience, often requiring no additional configuration.  However, we understand that this doesn’t make sense for all projects, so we now offer new settings to control the generation of the CMake project cache:

    We recommend sticking with the default, but if you commonly work with projects that require additional configuration or just want more control over how and when Visual Studio generates your CMake project cache, you may want to change this setting.  If you disable automatic generation of the cache, Visual Studio will remind you to generate before editing code belonging to the CMake project:

    Send Us Feedback

    Your feedback is a critical part of ensuring that we can deliver the best CMake experience.  We would love to know how Visual Studio 2017 Preview is working for you.  If you have any feedback specific to CMake Tools, please reach out to cmake@microsoft.com.  For general issues please Report a Problem.

     

    Announcing CodeLens for C++ Unit Testing


    If you are just getting started with unit testing in C++, visit our testing startup guide.

    C++ developers in Visual Studio can now get their first taste of CodeLens! Specifically, Visual Studio 2017 15.7 Preview 3 Professional and Enterprise editions offer CodeLens for Unit Testing.

    There are a few ways to initialize CodeLens:

    • Edit and Build your test project/solution
    • Rebuild your project/solution
    • Run test(s) from the Test Explorer Window

After performing any of the above actions, CodeLens will appear above each of your unit tests. CodeLens allows you to Run, Debug, and view the Test Status of your unit tests directly in the source file. The test status indicators are the same as the ones in the Test Explorer (Warning, Success, Failure).
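If you want something to try this on, here is a minimal sketch of a native unit test written with the Microsoft Unit Testing Framework for C++; the namespace, class, and values are hypothetical:

```cpp
// Minimal native unit test; CodeLens indicators appear above each TEST_METHOD.
#include "CppUnitTest.h"

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

namespace CalculatorTests   // hypothetical test project
{
    TEST_CLASS(AdditionTests)
    {
    public:
        TEST_METHOD(AddsTwoPositiveNumbers)
        {
            const int expected = 5;
            const int actual = 2 + 3;   // stand-in for a call into the code under test
            Assert::AreEqual(expected, actual);
        }
    };
}
```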

    To update the test status, you can Run the test directly from CodeLens above the desired test method.

    If your test fails, you can easily view the failure message and Debug straight from CodeLens.

    Give us Feedback

As always, let us know if you run into any issues by reporting them via Help > Send Feedback > Report A Problem from inside the Visual Studio IDE. Additionally, you can view active issues, add comments, and upvote items on Developer Community. You can also catch me on Twitter @nickuhlenhuth.

    Happy unit testing!

     

    IntelliSense for Remote Linux Headers


    In Visual Studio 2017 15.7 Preview 3 we are introducing IntelliSense for headers on remote Linux connections. This is part of the Linux development with C++ workload that you can choose in the Visual Studio installer. If you are just getting started with the C++ Linux support in Visual Studio you can read our C++ Linux tutorial at aka.ms/vslinux.

    When you add a new connection in the Connection Manager we will automatically determine the include directories for the compiler on the system. Those directories will be zipped up and copied to a directory on your local Windows machine. Then, when you use that connection in a Visual Studio or CMake project, the headers in those directories will be used to provide IntelliSense.

Note that you do need zip installed on the Linux machine you are connecting to for this to work. Using apt, that can be done via:

    apt install zip

    To add a new connection, go to Tools > Options, Select Cross Platform > Connection Manager and select Add. The Connect to Remote System Dialog will appear.

    Connection Manager dialog

    This dialog also appears if you try to perform an operation in a project that requires a remote Linux connection and you have none defined. Provide your connection information, host, user name, and credentials.

Upon establishing a successful connection, you will see a dialog indicating that we have started gathering the headers for the connection.

    Creating zip message

Once the zip file has been created, you will see progress information on the status of copying and unzipping it to your local machine.

    Download progress message

Now whenever you select this connection in a Linux project, whether it is a Visual Studio or CMake project, you will get IntelliSense from the headers of this connection. Here you can see that when peeking at the definition of endl, the <ostream> header where it is defined comes from my header cache, not a generic location.

    Peek definition example
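For a concrete file to try this with, here is a minimal sketch of the kind of Linux-targeted source involved; the file contents are illustrative, not taken from the post:

```cpp
// main.cpp in a Linux C++ project.  IntelliSense for <iostream>/<ostream> here is
// driven by the headers cached from the remote connection, so Peek Definition on
// endl resolves into that local cache rather than a Windows header location.
#include <iostream>

int main() {
    std::cout << "Hello from the remote Linux target" << std::endl;
    return 0;
}
```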

    To manage your header cache, navigate to Tools > Options, Select Cross Platform > Connection Manager > Remote Headers IntelliSense Manager. To update the header cache after making changes on your Linux machine, select your remote connection and select Update. If you want to keep a connection and free space by getting rid of the header cache, select the connection and choose Delete. If needed you can also disable automatically downloading headers for new connections here.

    Remote Headers IntelliSense Manager tools option window

You can also see where we have saved the header files for the connection by selecting Explore, which will open File Explorer in that location. This location is not intended for you to make changes in; it should be treated as read only, and our Update mechanism will not respect changes made there manually.

If you have added connections in past releases, they are not automatically opted into this experience. To add them, go to Tools > Options, select Cross Platform > Connection Manager > Remote Headers IntelliSense Manager. Select the connection that you want to get a local header cache for and select Download.

    Update example

    What’s next

    Download the Visual Studio 2017 Preview and select the Linux C++ Workload, add a Linux connection, and try it with your projects.

    We will continue to improve this feature in subsequent releases. We already have plans to support project specific headers, including those added through CMake. We are also looking at ways to trigger the synchronization of headers automatically on use of a connection so manually updating won’t be required.

    We love hearing about what is and isn’t working for you with new features like this and with the Linux workload in general. The best way to reach us is via our GitHub hosted issue list, directly via mail at vcpplinux-support@microsoft.com, or find me on Twitter @robotdad.


    Visual Studio 2017 version 15.7 Preview 3


    Today we released the third preview of the next update: Visual Studio 2017 version 15.7. The top highlights of this Preview include:

    • Updates to Universal Windows Platform development
    • C++ development improvements
    • Significant updates in Xamarin and .NET Mobile Development
    • Ability to configure installation locations
    • Debugger support for authenticated Source Link
    • Live Unit Testing improvements
    • New tooling for migrating to NuGet PackageReference
    • Connected Service improvements to deployment and Key Vault functionality

    This preview builds upon the features that debuted in the Preview 2 and Preview 1 releases that occurred in March. As always, you can drill into the details of all of these features by exploring the Visual Studio 2017 version 15.7 Preview release notes. We appreciate your early adoption, engagement, and feedback as it helps us ship the most high-quality tools to everyone in the Visual Studio community.

    We hope that you will install it, use it, and share your feedback with us. To acquire the Preview, you can either install it fresh from here, you can update the bits directly from the IDE, or if you have an Azure subscription, you can provision a virtual machine with this latest preview.

    Universal Windows Platform Development

    Latest Windows Insider SDK: With Visual Studio 2017 version 15.7 Preview 3, the latest Windows Insider Preview SDK (build 17133) is included as an optional component to the Universal Windows Platform development workload. You can choose to install this SDK with Visual Studio to start taking advantage of the latest Windows APIs and features.

New Features in Blend for the Updated XAML Designer: In the fall of last year, we brought some drastic changes to the XAML Designer for UWP developers targeting the Windows 10 Fall Creators Update. With these changes, we lost some of the features that our developers have grown to know and love. With Visual Studio 2017 version 15.7 Preview 3, we are proud to announce that two big features for Blend are back – Visual State Management tooling, and the Animation tooling with the Objects and Timeline pane! Now, all UWP developers can take advantage of these tools in Blend.

    C++ Development

    C++ Standards Conformance: This release includes many conformance improvements, including rewording inheriting constructors, declaring non-type template parameters with auto, std::launder(), and two-phase name lookup. In addition, the MSVC compiler toolset conforms almost fully with the C++17 Standard: when compiling with the /Zc:__cplusplus switch, the value of the __cplusplus macro reflects the correct Standard version numbers.
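As a quick illustration of two of these items, here is a minimal sketch; the compile line and printed values are assumptions, not taken from the release notes:

```cpp
// Possible compile line: cl /std:c++17 /Zc:__cplusplus example.cpp
#include <iostream>

// C++17: a non-type template parameter declared with auto.
template <auto Value>
constexpr auto constant = Value;

int main() {
    std::cout << constant<42> << '\n';          // deduced as int

    // With /Zc:__cplusplus the macro reports the actual Standard in use,
    // e.g. 201703L when compiling as C++17, instead of the legacy 199711L value.
    std::cout << __cplusplus << '\n';
}
```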

    Unit Testing: You can now use CodeLens above each of your unit tests to Run, Debug, and view Test Status.

    Unit testing

Code Analysis: We’ve added the /analyze:ruleset option to cl.exe for filtering down warnings in the C++ Code Analysis tools based on ruleset configuration. This results in a consistent experience between standalone invocations of the compiler and the IDE, and improves performance by running only the rules mentioned in the ruleset.

    Spectre Mitigations in MSVC: The Visual C++ Runtime now supports mitigations for the Spectre variant 1 vulnerability, and the toolset includes mitigated and non-mitigated versions of the DLLs. We’ve also enabled compiler support for Spectre mitigations in non-optimized builds (/Od).

    Build Throughput: The /Zf switch is now enabled by default, which enables faster PDB generation when using multiple compilation processes.

    CMake: The CMake Targets View provides an alternative way to view a CMake project’s source in the Solution Explorer; instead of a folder-based view, it organizes the code into individual CMake targets.

    CMake

    In addition, unknown macros that cause tag-parsing errors are now underlined with green squiggles, and source files (.cpp) belonging to CMake projects can now be built individually.

    IntelliSense: Headers from Linux and Unix-like systems are now automatically downloaded for use by IntelliSense on Windows. These are also used to provide an enhanced IntelliSense experience for Linux native platform development.

    For more C++ feature additions, please see the release notes.

    Xamarin and .NET Mobile Development

    Xamarin.Forms XAML Editing Improvements: Xamarin.Forms developers using Visual Studio 2017 version 15.7 will notice a vastly improved IntelliSense experience when editing XAML. The Xamarin.Forms XAML editing experience is now powered by the same engine that powers WPF and UWP. This brings many enhancements to developers, including improved matching, light bulb suggestions, code navigation, error checking, resource completion, and markup extension completion.

    Xamarin Intellisense

Modernized project templates: Whether you’re new to Xamarin or a seasoned Xamarin developer, project templates are an important part of the app building journey. We have centralized cross-platform project creation in a single place, modernized our iOS and Android native project templates, and reduced the time it takes to create a new project.

Android Development Improvements: We have made several improvements to the Android development experience. We are now distributing the Android Oreo SDK (Android API level 27) and are shipping the Android emulators with Quick Boot enabled. We have modernized the designer’s property editor. Visual Studio will now detect scenarios where the project requires a different version of the Android SDK than the one installed and will download the required components in the background.

Lastly, when you start a compilation, Visual Studio will boot and deploy the Xamarin runtime on your device while the compilation is taking place, thus reducing the time that you have to wait to see your application show up. With our test application, on a fresh device, with no previous deployment, and including launching the Android emulator, we see the following performance gains:

    Xamarin Intellisense

Apple Platform Development Improvements: We’ve also made significant improvements to the iOS development experience. The iOS, Mac, tvOS and Watch applications now feature a fully static type system. This brings many benefits, such as smaller applications, faster application startup, and reduced memory usage. We have also introduced the [Weak] attribute for fields, which makes it simpler to write code that is garbage-collection friendly on Apple platforms.

    Lastly, the provisioning of iOS devices has historically been a chore, requiring multiple trips to the documentation. We have incorporated the same fastlane experience that we shipped on macOS and you can now provision your devices in seconds and keep your entitlements up to date with the click of a button.

    Apple Provision

    Python

    The new and improved version of the Python debugger is now bundled with the Python extension and on by default, making it easier and faster for you to debug your Python code. For more information, refer to the Python debugger blog post that describes this feature.

    NuGet

    New tooling for migrating to NuGet PackageReference: This release provides out of the box support for migrating existing projects based on packages.config to PackageReference. To get started, right-click on the References node in the Solution Explorer and select Migrate packages.config to PackageReference. The tool analyzes the installed NuGet packages, calculates the dependency graph, and categorizes them into top-level and transitive dependencies. It also provides warnings for potential package compatibility issues with PackageReference.

    Nuget

    Acquisition

    Starting with Visual Studio 2017 version 15.7 Preview 3, you can reduce the installation footprint on your system drive by directing the download cache, shared components, and some SDKs and tools to different locations. Because these pieces are shared among Visual Studio installations, you can only set the locations with your first installation, meaning that you can’t change them later. If you have a solid-state drive (SSD), we recommend that you install the Visual Studio core product on your SSD drive because this will make Visual Studio run significantly faster. For further details on what each of these locations mean, please refer to the preview release notes.

    Installation Locations

    Debugging

    Source Link Authentication: The debugger now supports authenticated Source Link requests for Visual Studio Team Services and private GitHub repositories. When debugging code that was built on another computer, such as a library in a NuGet package, Source Link enables the debugger to show correctly matching source code by downloading it from the internet. To build your own code with Source Link enabled see https://aka.ms/SourceLinkSpec.

    For VSTS repositories, authentication is provided by signing into Visual Studio. For GitHub, we leverage credentials from the GitHub Extension for Visual Studio and the Git Credential Manager for Windows. If Source Link fails due to an authentication error a new “Source Link Authentication Failed” page is shown to explain the issue. This page allows the user to login to VSTS or GitHub and retry the Source Link request.

    Live Unit Testing

    Live Unit Testing has added support for new project features including embedded pdbs and pdbs specifying /deterministic. Live Unit Testing can also now support projects that use reference assemblies.

    Connected Services

Deployment to Azure App Service on Linux: Visual Studio now offers you the ability to deploy non-containerized applications to Azure App Service on Linux in addition to our previous support for apps built with Docker. To publish your application to App Service and run on Linux, from the Publish dialog choose “App Service Linux” and select “Create new”.  To continue to publish a containerized application to App Service for Containers using Linux, choose “Create new App Service for Containers”.

    Key Vault Connected Service: Azure Key Vault provides a secure location to safeguard keys and other secrets used by cloud apps.  Securing your project’s secrets is now easier than ever with the ability to create and add a Key Vault to your project as a Connected Service.  Doing so will automatically add configuration, add the required NuGet packages to your project, and enable you to access secrets from the Key Vault in your source code.  Once the Key Vault has been added, you will be able to manage secrets and permissions through the Azure portal.  This is available for both ASP.NET and ASP.NET Core applications.

    Visual Studio for Mac version 7.5 Preview 2

    We’re also releasing a new preview of Visual Studio for Mac, which you can get by switching to the Beta updater channel. This release improves the Razor, JavaScript, and TypeScript web development features we announced in our first preview. Additionally, we’re improving project load performance for .NET Core projects and the debugging experience with Unity games. Be sure to check out the preview release notes for the full list of fixes in this release, driven by your feedback.

    Try out the Preview today!

    If you’re not familiar with Visual Studio Previews, take a moment to read the Visual Studio 2017 Release Rhythm. Remember that Visual Studio 2017 Previews can be installed side-by-side with other versions of Visual Studio and other installs of Visual Studio 2017 without adversely affecting either your machine or your productivity. Previews provide an opportunity for you to receive fixes faster and try out upcoming functionality before they become mainstream. Similarly, the Previews enable the Visual Studio engineering team to validate usage, incorporate suggestions, and detect flaws earlier in the development process. We are highly responsive to feedback coming in through the Previews and look forward to hearing from you.

    Please get the Visual Studio Preview today, exercise your favorite workloads, and tell us what you think. If you have an Azure subscription, you can provision a virtual machine of this preview. You can report issues to us via the Report a Problem tool in Visual Studio or you can share a suggestion on UserVoice. You’ll be able to track your issues in the Visual Studio Developer Community where you can ask questions and find answers. You can also engage with us and other Visual Studio developers through our Visual Studio conversation in the Gitter community (requires GitHub account). Thank you for using the Visual Studio Previews.

Christine Ruana, Principal Program Manager, Visual Studio

    Christine is on the Visual Studio release engineering team and is responsible for making Visual Studio releases available to our customers around the world.

    Three critical analytics use cases with Microsoft Azure Databricks


    Data science and machine learning can be applied to solve many common business scenarios, yet there are many barriers preventing organizations from adopting them. Collaboration between data scientists, data engineers, and business analysts and curating data, structured and unstructured, from disparate sources are two examples of such barriers - and we haven’t even gotten to the complexity involved when trying to do these things with large volumes of data.  

    Recommendation engines, clickstream analytics, and intrusion detection are common scenarios that many organizations are solving across multiple industries. They require machine learning, streaming analytics, and utilize massive amounts of data processing that can be difficult to scale without the right tools. Companies like Lennox International, E.ON, and renewables.AI are just a few examples of organizations that have deployed Apache Spark™ to solve these challenges using Microsoft Azure Databricks.

Your company can enable data science with high-performance analytics too. Designed in collaboration with the original creators of Apache Spark, Azure Databricks is a fast, easy, and collaborative Apache Spark™ based analytics platform optimized for Azure. Azure Databricks is integrated with Azure through one-click setup and provides streamlined workflows and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts. Native integration with Azure Blob Storage, Azure Data Factory, Azure Data Lake Store, Azure SQL Data Warehouse, and Azure Cosmos DB allows organizations to use Azure Databricks to clean, join, and aggregate data no matter where it sits.

    Learn how your organization can improve and scale your analytics solutions with Azure Databricks, a high-performance processing engine optimized for Azure. Now is the perfect time to get started. Not sure how? Sign up for our webinar on April 12, 2018 and we’ll walk you through the benefits of Spark on Azure, and how to get started with Azure Databricks.

    Get started with Azure Databricks today!

    Recommendation engine


    As mobile apps and other advances in technology continue to change the way users choose and utilize information, recommendation engines are becoming an integral part of applications and software products.

    Clickstream analytics


    Companies need clickstream analytics to collect, analyze, and report aggregate data about which pages a website visitor visits and in what order.

    Intrusion detection


Intrusion detection is needed to monitor network or system activities for malicious activity or policy violations, and to produce electronic reports for a management station.

    Achieving GDPR compliance in the cloud with Microsoft Azure



    The General Data Protection Regulation (GDPR) officially goes into effect on May 25. Will your organization be ready?

    Very soon, the GDPR will replace the Data Protection Directive as the new global standard on data privacy for all government agencies and organizations that do business with European Union (EU) citizens. When it does, all organizations that control, maintain, or process information involving EU citizens will be required to comply with strict new rules regarding the protection of personal customer data. For companies that store and manage data in the cloud, assuming existing infrastructure will remain compliant with new regulatory requirements might result in significant fines.

    It’s important to understand that the differences between the new GDPR and the Data Protection Directive could impact your cloud data and security controls. For example, GDPR’s broad interpretation of what constitutes personal information leaves relevant agencies and organizations responsible for providing “reasonable” protection for a wider range of data types, including genetic and biometric data. More than ever, this regulatory transition highlights the importance of implementing a comprehensive cloud security strategy for your company.

    According to a recent GDPR benchmarking survey, although 89 percent of organizations have (or plan to have) a formal GDPR-readiness program, only 45 percent have completed a readiness assessment. At Microsoft, we’ve been preparing for GDPR compliance for the better part of a year and empowering our customers to do the same. Because Microsoft has extensive experience developing cloud solutions with security built-in, we’ve become a leading voice on solving GDPR-related privacy challenges in the cloud.

    Now, we’ve turned this experience and insight into a free, four-part video series, Countdown: Preparing for GDPR. Be sure to watch GDPR and Azure to learn more from David Burt, Senior Compliance Marketing Manager for Azure. You can also read more about our point of view on this transition as the first hyper-scale cloud vendor to offer GDPR terms and conditions in the enterprise space.

    New Disaster Recovery tutorials for Wingtip Tickets sample SaaS application


    Continuing in our series of tutorials showcasing features of Azure SQL database that enable SaaS app management, we are introducing two new tutorials that explore disaster recovery strategies for recovering an app and its resources in the event of an outage. Disaster Recovery (DR) is an important consideration for many applications, whether for compliance reasons or business continuity. Should there be a prolonged service outage, a well-prepared DR plan can minimize business disruption.

    The tutorials target the database-per-tenant architecture model of the Wingtip Tickets sample and demonstrate recovery using the geo-restore capabilities of Azure SQL database, and recovery using the geo-replication capabilities of Azure SQL database.

    Disaster recovery using geo-restore

In this tutorial, you will explore a full disaster recovery scenario using a geo-restore-based DR strategy. You use geo-restore to recover the catalog and tenant databases from automatically maintained geo-redundant backups into an alternate recovery region. After the outage is resolved, you use geo-replication to repatriate new and changed databases to their original production region.

    To learn more about this pattern, check out the tutorial, and associated code on GitHub.

    Disaster recovery using geo-replication

In this tutorial, you explore a full disaster recovery scenario using a geo-replication-based DR strategy. Prior to any outage, you use geo-replication to create and maintain secondary replicas of the catalog and tenant databases in an alternate recovery region. If an outage occurs, you failover to these replicas to resume normal business operations. On failover, the databases in the original region become secondary replicas of the databases in the recovery region. These replicas automatically catch up to the state of the databases in the recovery region once they come back online. Later in the tutorial, you fail the application and its databases back to the original production region, exploring what would happen after the outage is resolved.

    To learn more about this pattern, check out the tutorial, and associated code on GitHub.

    Get started

    Learn more about DR patterns and other multi-tenant SaaS app patterns by checking out the Wingtip Tickets SaaS app and tutorials.  You can deploy the sample app in your Azure subscription in less than five minutes, and explore a range of management scenarios, including provisioning, schema management, performance management, and now, disaster recovery.

    Let us know at saasfeedback@microsoft.com what you think of the samples and patterns, and what you’d like to see added next.

    Offline media import for Azure


So many customers I talk to want to upload their offline data stores into the cloud. Yet, no one wants to spend hours and hours inserting tapes, connecting older hard disks, or figuring out how to digitize and upload film. Well, I’m excited to announce that, together with our partners, Microsoft Azure is making it easy with our Offline Media Import Program. This partner-enabled service makes it easy to move data into Azure from almost any media, such as tapes, optical drives, hard disks, or film.

Why migrate your current storage media to Azure? Azure provides a range of flexible storage options, from low-cost archive storage to high-performance, SSD-based storage. You simply choose the storage tier and we take care of the rest. And once the data is available in Azure, higher-value scenarios around analysis, transformation, and distribution can be unlocked. Here are some of the common uses:

    Media and entertainment

    Offline media import is a great way for entertainment companies to modernize their content assets and take advantage of an array of cloud services such as cognitive services and media analytics. I’m actually at NAB this week talking to media companies about how this program can transform production workflows and breathe new life into existing content. You can learn more about Microsoft Azure, and how it can help your media workflows and business by reading Tad Brockway’s blog post.

    Backup and archive

    Azure’s global, highly-available, tiered storage model makes it a natural cloud destination for your backup and archival media. Additionally, with broad ISV support and rich business continuity solutions including Azure Backup and Azure Site Recovery, you can easily use Azure as the DR platform for your data and applications. This reduces downtime, and provides for faster recovery in the event of an outage.

    Data and analytics

    Tape is also a common retention media for large data repositories such as seismic surveys and genomics sequencing. While this information is locked away on tape, it is effectively inert to higher-order analysis. Migrating the data to Azure can enable richer analytics tools, such as HDInsight and Machine Learning, to derive deeper insights.

    Partners help you move to the cloud

Our partners work with virtually all types of media, whether that be disks or tapes, SAN or NAS, or even video cassettes or 35mm. And they can work with you, where your media currently is, all over the globe. Check our Offline Media Import website to find one near you.

    Whether your data is terabytes, petabytes, or exabytes, the Offline Media Import Program is a great opportunity to move data off existing legacy media and into the Azure cloud. Best of all, Azure can be your data’s enduring home – and the last data migration you’ll have to do!
