We are pleased to announce that SQL Server Data Tools 17.0 is officially released and supported for production use. This GA release includes support for SQL Server 2017 and SQL Server on Linux including new features such as Graph DB. It includes several features that we have consistently received requests for via MSDN forums and Connect and contains numerous fixes and improvements over the 16.x version of the tools. You no longer need to maintain 16.x and 17.0 side-by-side to build SQL Server relational databases, Azure SQL databases, Integration Services packages, Analysis Services data models, Azure Analysis Services data models, and Reporting Services reports. From all the SSDT teams, thank you for your valuable feedback and suggestions!
Additionally, for relational and Azure SQL databases SSDT 17.0 GA includes a highly requested improvement to ignore column order in upgrade plans as well as numerous other bug fixes.
In the Business Intelligence area, SSDT 17.0 GA supports Azure Analysis Services in addition to SQL Server Analysis Services. It features a modern Get Data experience in Tabular 1400 models, including DirectQuery support (see the blog article “Introducing DirectQuery Support for Tabular 1400”) and an increasing portfolio of data sources. Other noteworthy features include object-level security to secure model metadata in addition to data, transaction-performance improvements for a more responsive developer experience, improvements to the authoring experience of detail rows expressions, and a DAX Editor to create measures and other DAX expressions more conveniently.
For Integration Services, SSDT 17.0 GA provides an authoring package with OData Source and OData Connection Manager connecting to the OData feeds of Microsoft Dynamics AX Online and Microsoft Dynamics CRM Online. Moreover, the project target server version supports SQL Server 2017 so you can conveniently deploy your packages on the latest version of SQL Server.
Testing is an increasingly important part of a software development workflow. In many cases, it is insufficient to test a program simply by running it and trying it out – as the scope of the project grows, it becomes increasingly necessary to test individual components of the code in a structured way. If you’re a C++ developer interested in unit testing, you’ll want to be aware of Visual Studio’s unit testing tools. This post walks through exactly that, and is part of a series aimed at users who are new to Visual Studio.
This blog post goes over the following concepts:
The easiest and most organized way to set up unit tests is to create a separate project in Visual Studio for your tests. You can create as many test projects as you want in a solution and connect them to any number of other Visual Studio projects in that solution that contain the code you want to test. Assuming you already have some code that you want to test, simply follow these steps to get yourself set up:
Right-click your solution and choose Add > New > Project. Click the Visual C++ category, and choose the Test sub-category. Select Native Unit Test Project, give the project a descriptive name, and then click OK.
Visual Studio will create a new project containing unit tests, with all dependencies to the native test framework already set up. The next thing to do is to add references to any projects that will be tested. Right-click the unit test project and choose Add > Reference…
Check any projects that you want to unit test from your test project, and then press OK.
Your unit testing project can now access your project(s) under test. You can now start writing tests, as long as you add #include statements for the headers you want to access.
NOTE: You will only be able to unit test public functions this way. To unit test private functions, you must write your unit tests in the same class as the code that is being tested.
The Microsoft Native C++ Unit Test Framework
Visual Studio ships with a native C++ test framework that you can use to write your unit tests. The framework defines a series of macros to provide simplified syntax.
If you followed the steps in the previous procedure, you should have a unit test project set up along with your main code. Open unittest1.cpp in your test project and look at the starting code provided:
Right from the start, you’ll notice that dependencies have already been set up to the test framework, so you can get to work writing your tests. Assuming you connected your test project to your project(s) under test via Add > Reference earlier, you can simply add the #include statements for the header files of the code you want to test.
Tests can be organized by using the TEST_CLASS and TEST_METHOD macros, which perform exactly the functions you’d expect. A TEST_CLASS is a collection of related TEST_METHODs, and each TEST_METHOD contains a test. You can give your TEST_CLASS and TEST_METHOD any names you like inside the macros’ parentheses. It’s a good idea to use descriptive names that make it easy to identify each test or test group individually later.
Let’s try writing some basic asserts. At the TODO comment, write: Assert::AreEqual(1, 1);
This is a basic equality assert which compares two expressions. The first expression holds the expected value, the second holds the item you are testing. For the Assert to pass, both sides must evaluate to the same result. In this trivial example, the test will always pass. You can also test for values you don’t want your expression to evaluate to, like this: Assert::AreNotEqual(1, 2);
Here, for the test to pass, the two expressions must not evaluate to the same result. While this kind of assert is less common, you may find it useful for verifying edge cases where you want to avoid a specific behavior from occurring.
There are several other Assert functions that you can try. Simply type Assert:: and let IntelliSense provide the full list to take a look. Quick Info tooltips appear for each Assert as you make a selection in the list, providing more context on their format and function. You can find a full reference of features in the Microsoft C++ native framework on MSDN.
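Putting these pieces together, a minimal test file might look like the following sketch; the header add.h and the function add are hypothetical stand-ins for your own code under test:

#include "CppUnitTest.h"
#include "../MyProject/add.h"   // hypothetical header from the project under test

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

namespace MyProjectTests
{
    TEST_CLASS(AddTests)
    {
    public:
        TEST_METHOD(AddsTwoPositiveNumbers)
        {
            // Expected value first, actual value second.
            Assert::AreEqual(5, add(2, 3));
        }

        TEST_METHOD(DoesNotReturnZeroForNonZeroInputs)
        {
            Assert::AreNotEqual(0, add(2, 3));
        }
    };
}

Once the project builds, both tests appear individually in the Test Explorer.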
Using the Test Explorer to Run Tests in the IDE
With Visual Studio, you’re not restricted to running unit tests from the command line. The Test Explorer window in Visual Studio provides a simple interface to run, debug, and parallelize test execution.
This is a straightforward process. Once you connect your test project to your project(s) under test, add some #include directives in the file containing your unit tests to the code under test, and write some Asserts, you can simply run a full build. Test Explorer will then discover all your unit tests and populate itself with them.
NOTE: In .NET, a feature called Live Unit Testing is available. This feature is not currently supported in C++, so unit tests are discovered and executed only after you run builds.
To run your unit tests, simply click the Run All link in the Test Explorer. This will build your project (though this step is skipped if the project is already up to date) and then run all your tests. The Test Explorer indicates passing tests with a checkmark and failing tests with an X. A summary of execution results is provided at the bottom of the window. You can click on any failing unit test to see why it failed, including any exceptions that may have been thrown. Execution times for each unit test are also provided. For realistic test execution times, test in the Release solution configuration rather than Debug; Release runtimes are closer to those of your shipped application.
To be able to debug your code as you run your unit tests (so you can stop at breakpoints and so forth), simply use the Test > Debug menu to run your tests.
Determining Unit Test Code Coverage
If you are using Visual Studio Enterprise, you can run code coverage on your unit tests. Assuming you have unit tests already set up for your project, this is as simple as going to Test > Analyze Code Coverage in the main Visual Studio menu at the top of the IDE. This opens the Code Coverage Results window, which summarizes code coverage data for your tests. NOTE: There is a known issue where Code Coverage will not work in C++ unless /DEBUG:FULL is selected as the debugging configuration. By default, the configuration is set to /DEBUG:FASTLINK instead. You can switch to /DEBUG:FULL by doing the following:
Right-click the test project and choose Properties.
Go to Linker > Debugging > Generate Debug Info.
Set the option to Generate Debug Information optimized for sharing and publishing (/DEBUG:FULL).
The Code Coverage Results window provides an option called Show Code Coverage Coloring, which colors the code based on whether it’s covered or not.
Code coverage is counted in blocks, with a block being a piece of code with exactly one entry and exit point. If a block is passed through at least once, it is considered covered.
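As a contrived illustration (this function is a made-up example, not taken from the product documentation), consider a small function with a branch:

// Code Coverage counts blocks, not lines.
int clamp_non_negative(int x)
{
    if (x < 0)      // entry block: covered whenever the function is called
    {
        return 0;   // covered only by tests that pass a negative value
    }
    return x;       // covered only by tests that pass zero or a positive value
}

A test that only calls clamp_non_negative(5) exercises two of the three blocks, so the function reports roughly 67% block coverage until a negative input is also tested.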
For more information on C++ unit testing, including some more advanced topics, check out the following MSDN articles:
We’re announcing a great new service to Azure IoT Hub that allows customers to provision millions of devices in a secure and scalable manner. Azure IoT Hub Device Provisioning enables zero-touch provisioning to the right IoT hub without requiring human intervention, and is currently being used by early adopters to validate various solution deployment scenarios.
Provisioning is an important part of the lifecycle management of an IoT device, which enables seamless integration with an Azure IoT solution. Technically speaking, provisioning pairs devices with an IoT hub based on any number of characteristics such as:
Location of the device (geo-sharding)
Customer who bought the device (multitenancy)
Application in which the device is to be used (solution isolation)
The Azure IoT Hub Device Provisioning service is made even better thanks to some security standardization work called DICE and will support multiple types of hardware security modules such as TPM. In conjunction with this, we announced hardware partnerships with STMicro and Micron.
Without IoT Hub Device Provisioning, setting up and deploying a large number of devices to work with a cloud backend is hard and involves a lot of manual work. This is true today for Azure IoT Hub. While customers can create a lot of device identities within the hub at a time using bulk import, they still must individually place connection credentials on the devices themselves. It's hard, and today customers must build their own solution functionality to avoid the painful manual process. Our commitment to strong security best practices is partly to blame. IoT Hub requires each device to have a unique identity registered to the hub in order to enable per-device access revocation in case the device is compromised. This is a security best-practice, but like many security-related best practices, it tends to slow down deployment.
Not only that, but registering a device to Azure IoT Hub is really only half the battle. Once a device is registered, physically deployed in the field, and hooked up to the device management dashboard, now customers have to configure the device with the proper desired twin state and firmware version. This extra step is more time that the device is not a fully-functioning member of the IoT solution. We can do better using the IoT Hub Device Provisioning service.
Hardcoding endpoints with credentials in mass production is operationally expensive, and on top of that the device manufacturer might not know how the device will be used or who the eventual device owner will be, or they may not care. In addition, complete provisioning may involve information that was not available when the device was manufactured, such as who purchased the device. The Azure IoT Hub Device Provisioning service contains all the information needed to provision a device.
Devices running Windows 10 IoT Core operating systems will enable an even easier way to connect to Device Provisioning via an in-box client that OEMs can include in the device unit. With Windows 10 IoT Core, customers can get a zero-touch provisioning experience, eliminating any configuration and provisioning hassles when onboarding new IoT devices that connect to Azure services. When combined with Windows 10 IoT Core support for Azure IoT Hub device management, the entire device life cycle management is simplified through features that enable device reprovisioning, ownership transfer, secure device management, and device end-of-life management. You can learn more about Windows IoT Core device provisioning and device management details by visiting Azure IoT Device Management.
Azure IoT is committed to offering our customers services which take the pain out of deploying and managing an IoT solution in a secure, reliable way. The Azure IoT Hub Device Provisioning service is currently in private preview, and we'll make further announcements when it becomes available to the public. In the meantime, you can learn more about Azure IoT Hub's device management capabilities. We would love to get your feedback on secure device registration, so please continue to submit your suggestions through the Azure IoT User Voice forum or join the Azure IoT Advisors Yammer group.
Learn more about Microsoft IoT
Microsoft is simplifying IoT so every business can digitally transform through IoT solutions that are more accessible and easier to implement. Microsoft has the most comprehensive IoT portfolio with a wide range of IoT offerings to meet organizations where they are on their IoT journey, including everything businesses need to get started — ranging from operating systems for their devices, cloud services to control them, advanced analytics to gain insights, and business applications to enable intelligent action. To see how Microsoft IoT can transform your business, visit www.InternetofYourThings.com.
Microsoft’s commitment to leadership in IoT security continues as Azure IoT improves the level of trust and confidence in securing IoT deployments. Azure IoT now supports Device Identity Composition Engine (DICE) and many different kinds of Hardware Security Modules (HSMs). DICE is an upcoming standard at the Trusted Computing Group (TCG) for device identification and attestation which enables manufacturers to use silicon gates to create device identification based in hardware, making security hardware part of the DNA of new devices from the ground up. HSMs are the core security technology used to secure device identities and provide advanced functionality such as hardware-based device attestation and zero-touch provisioning.
In addition, the Azure IoT team is working with standards organizations and major industry partners to apply the latest security best practices and deploy support for a wide variety of Hardware Security Modules (HSMs). HSMs offer a resistant and resilient hardware root of trust in IoT devices. The Azure IoT platform transparently integrates HSM support with platform services like Azure IoT Hub Device Provisioning and Azure IoT Hub Device Management, thereby enabling customers and developers to focus more on identifying the specific risks associated with their applications and less on security deployment tactics.
IoT device deployments can be remote, autonomous, and open to threats like spoofing, tampering, and displacement. In such cases, HSMs offer a major layer of defense that raises trust in authentication, integrity, confidentiality, privacy, and more. The Azure IoT team is working directly with major HSM manufacturers to make a wide variety of HSMs easily accessible, so customers and developers can address their deployment-specific risks.
The Azure IoT team leverages open standards to develop best practices for secure and robust deployments. One such upcoming standard is the Device Identity Composition Engine (DICE) from the Trusted Computing Group (TCG), which offers a scalable security framework requiring a minimal HSM footprint to anchor trust, from which to build security solutions such as authentication, secure boot, and remote attestation. DICE is a response to the reality of constrained computing that increasingly characterizes IoT devices. Its minimalist approach is an alternative to more traditional security standards such as the Trusted Computing Group’s Trusted Platform Module (TPM), which is also supported on the Azure IoT platform. As of this writing, the Azure IoT platform has HSM support for DICE in HSMs from silicon vendors such as STMicroelectronics and Micron, as well as support for TPM 1.2. There is also support for HSMs with vendor-specific protocols, such as Spyrus’ Rosetta.
Finally, high-level guidance on risk assessment helps solution architects make the right security decisions, including the choice of HSM. While it is possible to overengineer a security solution until it becomes too expensive to adopt, it is also possible to shortcut security engineering for cost reasons. There is therefore a need to understand the interplay between security and cost to arrive at an optimal solution. To this end, the Azure IoT team offers the Security Program for Azure IoT to help customers and solution architects assess the security of their IoT infrastructure and find the right security approach for their IoT deployments.
The security journey is one the Azure IoT team is committed to helping customers and developers navigate so they can achieve the highest trust and confidence in securing their IoT deployments. This involves supporting a wide range of hardware-based security and security standards to secure the hardware root of trust for IoT devices.
Goods and Services Tax (GST) is essentially one new indirect tax system for the whole nation, which will make India one unified common market, right from the manufacturer to the consumer. It is a broad-based, comprehensive, single indirect tax which will be levied concurrently on goods and services across India.
The Central and State indirect taxes that may be subsumed by GST include Value Added Tax (VAT), Excise Duty, Service Tax, Central Sales Tax, Additional Customs Duty and Special Additional Duty of Customs. GST will be levied at every stage of the production and distribution chains by giving the benefit of Input Tax Credit (ITC) of the tax remitted at previous stages; thereby, treating the entire country as one market.
Due to the federal structure of India, there will be two components of GST - Central GST (CGST) and State GST (SGST). Both Centre and States will simultaneously levy GST across the value chain. For interstate transactions, an Integrated GST (IGST) will be applicable which will be settled back between the center and the states.
Goods and Services Tax Network (GSTN), a non-government, private limited company, has been formed to provide the IT infrastructure to State/Central governments and taxpayers. It has set up a central solution platform to technically enable the GST system, including registration of taxpayers, upload/download of invoices, filing of returns, State/Central Government reporting, IGST settlement, etc. This platform, the GST Platform, has been set up from day one as an open API (GST API) based platform, allowing various authorized parties to exchange information at scale.
GSTN has further identified GSPs (GST Suvidha Providers) who can wrap the GST Platform and offer various value-added services to their customers (i.e., taxpayers) and, further downstream, to sub-GSPs or registered application service providers (ASPs).
GSPs have limited time to understand the new set of rules, which are continuously evolving, develop their solutions, and host and run them on a secure and scalable platform. The government has also adopted an ecosystem approach that allows GSPs to further expose their APIs to downstream ASPs (Application Service Providers), who will cater to taxpayer needs.
GSPs and ASPs need to focus on solution capabilities and select a platform that provides most of the plumbing necessary to build an open yet secure, scalable, maintainable, and compliant solution, and to achieve all of this at a manageable cost.
With three large data centers in India offering a host of IaaS, PaaS, and SaaS services that support a wide range of open-source and commercial platforms, Azure offers the best platform to host GSP and GST-related ASP solutions.
The attached document, authored by Mandar Samant (Area Architect, Microsoft Services), provides a good overview of how Azure services can get GSPs and ASPs started quickly and help them deliver cutting-edge GST solutions to the taxpayer community in India.
With the release of Cloudera Enterprise Data Hub 5.11, you can now run Spark, Hive, and MapReduce workloads in a Cloudera cluster on Azure Data Lake Store (ADLS). Running on ADLS has the following benefits:
Grow or shrink a cluster independent of the size of the data.
Data persists independently as you spin up or tear down a cluster. Other clusters and compute engines, such as Azure Data Lake Analytics or Azure SQL Data Warehouse, can execute workload on the same data.
Enable role-based access controls integrated with Azure Active Directory and authorize users and groups with fine-grained POSIX-based ACLs.
Cloud-scale, HDFS-compatible storage with performance optimized for analytics workloads, supporting concurrent reads and writes of hundreds of terabytes of data.
No limits on account size or individual file size.
Data is encrypted at rest by default using service-managed or customer-managed keys in Azure Key Vault, and is encrypted with SSL while in transit.
High data durability at lower cost, since data replication is managed by Data Lake Store and exposed through an HDFS-compatible interface, rather than having to replicate data both in HDFS and at the cloud storage infrastructure level.
Step 1: ADLS uses Azure Active Directory for identity management and authentication. To access ADLS from a Cloudera cluster, first create a service principal in Azure AD. You will need the Application ID, Authentication Key, and Tenant ID of the service principal.
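As a sketch of this step, assuming the Azure CLI 2.0 is available (the service principal name sp-cloudera-adls is a hypothetical placeholder):

# Sign in, then create a service principal for the cluster to use
az login
az ad sp create-for-rbac --name sp-cloudera-adls
# The JSON output maps to the values you need:
#   appId    -> Application ID
#   password -> Authentication Key
#   tenant   -> Tenant ID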
Step 2: To access ADLS, assign the permissions for the service principal created in the previous step. To do this, go to the Azure portal, navigate to the Data Lake Store, and select Data Explorer. Then navigate to the target path, select Access and add the service principal with appropriate access rights. Refer to this document for details on access control in ADLS.
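For cluster-wide access, the service principal details also need to reach the cluster’s core-site.xml (for example via a Cloudera Manager safety valve). A sketch of the properties involved, assuming the dfs.adls.oauth2.* keys used elsewhere in this post and placeholder values for the service principal:

<property>
  <name>dfs.adls.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>
<property>
  <name>dfs.adls.oauth2.client.id</name>
  <value>APPLICATION_ID</value>
</property>
<property>
  <name>dfs.adls.oauth2.credential</name>
  <value>AUTHENTICATION_KEY</value>
</property>
<property>
  <name>dfs.adls.oauth2.refresh.url</name>
  <value>https://login.microsoftonline.com/TENANT_ID/oauth2/token</value>
</property>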
Step 4: Verify you can access ADLS by running a Hadoop command, for example:
hdfs dfs -ls adl://<your adls account>.azuredatalakestore.net/<path to file>/
Specify a Data Lake Store in the Hadoop command line
Instead of, or in addition to, configuring a Data Lake Store for cluster wide access, you could also provide ADLS access information in the command line of a MapReduce or Spark job. With this method, if you use an Azure AD refresh token instead of a service principal, and encrypt the credentials in a .JCEKS file under a user’s home directory, you gain the following benefits:
Each user can use their own credentials instead of having a cluster wide credential
Nobody can see another user’s credential because it’s encrypted in .JCEKS in the user’s home directory
No need to store credentials in clear text in a configuration file
No need to wait for someone who has rights to create service principals in Azure AD
The following steps illustrate an example of how you can set this up by using the refresh token obtained by signing in to the Azure cross platform client tool.
Step 1: Sign in to the Azure CLI by running the command “azure login”, then get the refreshToken and _clientId from .azure/accessTokens.json under the user’s home directory.
Step 2: Run the following commands to set up credentials to access ADLS:
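A sketch of these commands, using the hadoop credential tool and the alias names the RefreshToken provider expects (each command prompts for the secret value; paste the _clientId and refreshToken from Step 1, and replace <username> with your own):

hadoop credential create dfs.adls.oauth2.client.id -provider jceks://hdfs/user/<username>/cred.jceks
hadoop credential create dfs.adls.oauth2.refresh.token -provider jceks://hdfs/user/<username>/cred.jceks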
Step 3: Verify you can access ADLS by running a Hadoop command, for example:
hdfs dfs -Ddfs.adls.oauth2.access.token.provider.type=RefreshToken -Dhadoop.security.credential.provider.path=jceks://hdfs/user/<username>/cred.jceks -ls adl://<your adls account>.azuredatalakestore.net/<path to file>
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar teragen -Dmapred.child.env="HADOOP_CREDSTORE_PASSWORD=$HADOOP_CREDSTORE_PASSWORD" -Dyarn.app.mapreduce.am.env="HADOOP_CREDSTORE_PASSWORD=$HADOOP_CREDSTORE_PASSWORD" -Ddfs.adls.oauth2.access.token.provider.type=RefreshToken -Dhadoop.security.credential.provider.path=jceks://hdfs/user/<username>/cred.jceks 1000 adl://<your adls account>.azuredatalakestore.net/<path to file>
Limitations of ADLS support in EDH 5.11
Only Spark, Hive, and MapReduce workloads are supported on ADLS. Support for ADLS in Impala, HBase, and other services will come in future releases.
ADLS is supported as secondary storage. To access ADLS, use fully qualified URLs in the form adl://<your adls account>.azuredatalakestore.net/<path to file>, as in the Hive example below.
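For instance, a Hive table can point directly at an ADLS path (a sketch; the table, column, and path names are hypothetical):

CREATE EXTERNAL TABLE web_logs (
    ts STRING,
    url STRING,
    status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'adl://<your adls account>.azuredatalakestore.net/data/web_logs/';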
Today, we are excited to announce the public preview of the Cloud Partner Portal for publishing single virtual machine offers. The Cloud Partner Portal enables our publisher partners to create, define, publish and get insights for their single virtual machine offerings on the Azure Marketplace.
With this announcement, new and existing publisher partners who wish to publish virtual machines onto Azure will be able to use the new Cloud Partner Portal to perform any of the above actions. This new portal will soon support all other offer types and will replace the current publishing portal in time.
The new improved Cloud Partner Portal
Today’s release has several new features that make publishing onto the Azure Marketplace a lot faster, simpler and easier.
Features of Today’s Release:
1. Org ID login support – This has been an ask from our publisher partners for a long time, and we are adding support for Org ID to the Cloud Partner Portal. Additionally, the new publishing portal supports RBAC, so offers remain secure and publishers no longer have to make all contributors co-admins; contributors can be given only the level of access they need.
2. Get it right the first time – Everyone hates do-overs. There is nothing worse than spending time defining an offer and thinking you are done, only to hit an issue with the offer downstream. To prevent this, your offer is validated as you type. This reduces unwanted surprises after publishing the offer.
Additionally, we anticipate an overall reduction in the time from starting to define an offer to actually publishing it.
We have spent a considerable amount of time writing validations for every field within the offer to ensure that when publishers click publish, their offer will publish successfully. Even as we ship, we are adding new validations with every release, which makes the process a lot more predictable.
3. Simplified publishing workflow – The new publishing portal has a simplified publishing workflow providing one path to offer publishing. There are no separate production and staging slots exposed to publishers. Publishers just need to ‘Publish’ their offers, and we take care of the rest.
Before an offer goes live, publishers are given a chance to review it and ensure that everything is working as expected.
4. Be more informed – The new Cloud Partner Portal lets publishers know, even before they publish their offer, about the steps their offer will go through, along with estimated execution times. Along with the guidance around the workflows, we have notifications built into the portal that keep publishers informed of their offer’s progress toward getting listed on Azure.
5. Insights in the portal – The Cloud Partner Portal provides a direct link into the insights of an offer. These insights provide a quick glance and drilldowns into an offer’s health and performance on the Azure Marketplace. The insights portal also has an onboarding video and rich documentation that helps publishers familiarize themselves with its features.
6. Feedback is just a click away – The send a smile/frown button will be ubiquitous in the new portal. In a matter of clicks publishers can send feedback directly to the engineering team.
I could keep writing about the host of new features and capabilities of the new publishing portal; however, the best way to discover them is to take the portal for a spin.
If you are an existing Azure publisher with a virtual machine offer, your account for the new publishing portal is already created. Please visit the Cloud Partner Portal and log in using your current credentials. Please refer to our documentation if you need any help getting started.
Existing publishers can also let us know if they would like to get their offers migrated by following the steps available to registered publishers. We also have a brand new seller guide that can help you navigate the Azure Marketplace better and get the most value out of it.
If you are a new publisher looking to publish onto the Azure platform, please fill out the nomination form here and we will be in touch with you.
As you try out the new cloud partner portal, please keep the steady stream of feedback coming in. We hope you enjoy using the portal as much as we enjoyed creating it for you.
Ubuntu 12.04 "Precise Pangolin" has been with us from the beginning, since we first embarked on the journey to support Linux virtual machines in Microsoft Azure. However, as its five-year support cycle is nearing an end in April 2017 we must now move on and say "goodbye" to Precise. Ubuntu posted the official EOL notice back in March. The following is an excerpt from one of the announcements:
This is a reminder that the Ubuntu 12.04 (Precise Pangolin) release is nearing its end of life. Ubuntu announced its 12.04 (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with the earlier LTS releases, Ubuntu committed to ongoing security and critical fixes for a period of 5 years. The support period is now nearing its completion and Ubuntu 12.04 will reach its end of life near the end of April 2017. At that time, Ubuntu Security Notices will no longer include information or updated packages, including kernel updates, for Ubuntu 12.04.
The supported upgrade path from Ubuntu 12.04 is via Ubuntu 14.04. Users are encouraged to evaluate and upgrade to our latest 16.04 LTS release via 14.04. Ubuntu 14.04 and 16.04 continue to be actively supported with security updates and select high-impact bug fixes.
For users who can't upgrade immediately, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04. These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers.
Users interested in Ubuntu 12.04 ESM updates can purchase Ubuntu Advantage.
This week at Microsoft Data Amp we covered how you can harness the incredible power of data using Microsoft’s latest innovations in its Data Platform. One of the key pieces in the Data Platform is Azure DocumentDB, Microsoft’s globally distributed NoSQL database service. Released in 2015, DocumentDB has been used virtually ubiquitously as a backend for first-party Microsoft services for years.
DocumentDB is Microsoft's multi-tenant, globally distributed database system designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed low latency at P99, 99.99% high availability, predictable throughput, and multiple well-defined consistency models, all backed by comprehensive SLAs. By virtue of its schema-agnostic, write-optimized database engine, DocumentDB by default automatically indexes all the data it ingests and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As a cloud service, DocumentDB is carefully engineered with multi-tenancy and global distribution in mind from the ground up.
In this blog, we cover case studies of first-party applications of DocumentDB by the Windows, Universal Store, and Azure IoT Hub teams, and how these teams harnessed the scalability, low-latency, and flexibility benefits of DocumentDB to innovate and bring business value to their services.
Microsoft DnA: How Microsoft uses error reporting and diagnostics to improve Windows
The Windows Data and Analytics (DnA) team in Microsoft implements the crash reporting technology for Windows. One of their components runs as a Windows Service in every Windows device. Whenever an application stops responding on a user's desktop, Windows collects post-error debug information and prompts the user to ask if they’re interested in finding a solution to the error. If the user accepts, the dump is sent over the Internet to the DnA service. When a dump reaches the service, it is analyzed and a solution is sent back to the user when one is available.
In DnA’s terminology, crash reports are organized into “buckets”. Each bucket is used to classify an issue by key attributes such as Application Name, Application Version, Module Name, Module Version, and OS Exception code. Each bucket contains crash reports that are caused by the same bug. With the large ecosystem of hardware and software vendors, and 15 years of collected data about error reports, the DnA service has over 10 billion unique buckets in its database cluster.
One of the DnA team’s requirements was rather simple at face value. Given the hash of a bucket, return the ID corresponding to its bucket/issue if one was available. However, the scale posed interesting technical challenges. There was a lot of data (10 billion buckets, growing at 6 million a day), high volume of requests and global reach (requests from any device running Windows), and low latency requirements (to ensure a good user experience).
To store “Bucket Dimensions”, the DnA team provisioned a single DocumentDB collection with 400,000 request units per second of provisioned throughput. Since all access was by the primary key, they configured the partition key to be the same as the “id”, with a digest of the various attributes as the value. As DocumentDB provided <10 ms read latency and <15 ms write latency at p99, DnA could perform fast lookups against buckets and look up issues even as their data and request volumes continued to grow over time.
Windows cab catalog metadata and query
Aside from fast real-time lookups, the DnA team also wanted to use the data to drive engineering decisions to help improve Microsoft and other vendors’ products by fixing the most impactful issues. For example, the team has observed that addressing the top 1 percent of reliability issues could address 50 percent of customers’ issues. This analysis required storing the crash dump binary files, “cabs”, extracting useful metadata, then running analysis and reports against this data. This presented a number of interesting challenges on its own.
The team deals with approximately 600 different types of reliability-incident data. Managing the schema and indexes imposed significant engineering and operational overhead on the team.
The cab metadata was also large in volume: there were about 5 billion cabs, and 30 million new cabs were added every day.
The DnA team migrated their Bucket Dimension and Cab Catalog stores to DocumentDB from their earlier solution, which was based on an on-premises cluster of SQL Servers. Since shifting the database’s heavy lifting to DocumentDB, DnA has benefited from the speed, scale, and flexibility offered by DocumentDB. More importantly, they can focus less on maintenance of their database and more on improving the user experience on Windows.
Microsoft Global Homing Service: How Xbox Live and Universal Store build highly available location services
Microsoft’s Universal Store team implements the e-commerce platform that is used to power Microsoft’s storefronts across Windows Store, Xbox, and a large set of Microsoft services. One of the key internal components in the Universal Store backend is the Global Homing Service (GHS), a highly reliable service that gives its downstream consumers the ability to quickly retrieve location metadata associated with an arbitrarily large number of IDs.
Global Homing Service (GHS) using Azure DocumentDB across 4 regions
GHS is on a hot path for the majority of its consumer services and receives hundreds of thousands of requests per second. Therefore, the latency and throughput requirements for the service are strict. The service had to maintain 99.99% availability and predictable latencies under 300ms end-to-end at the 99.9th percentile to satisfy requirements of its partner teams. To reduce latencies, the service is geo-distributed so that it is as close as possible to calling partner services.
The initial design of GHS was implemented using a combination of Azure Table Storage and various levels of caches. This solution worked well for the initial set of loads, but given the critical nature of GHS and increased adoption of the service from key partners, it became apparent that the existing SLA was not going to meet their partners’ P99.9 requirements of <300ms with a 99.99% reliability over 1 minute. Partners with a critical dependency on the GHS call path found that even if the overall reliability was high, there were periods of time where the number of timeouts would exceed their tolerances and result in a noticeable degradation of the partner’s own SLA. These periods of increased timeouts were given the name “micro-outages” and key partners started tracking these daily.
After investigating many possible solutions, such as LevelDB, Kafka, MongoDB, and Cassandra, the Universal Store team chose to replace GHS’s Azure Table backend and the original cache in front of it with an Azure DocumentDB backend. GHS provisioned a single DocumentDB collection with 600,000 request units per second, deployed across four geographic regions where their partner teams had the biggest footprint. As a result of the switch to DocumentDB, GHS customers have seen p50 latencies under 30ms and a huge reduction in the number and scale of micro-outages. GHS’s availability has remained at or above 99.99% since the migration. In addition to the increase in service availability, overall latencies significantly improved as well for most GHS call patterns.
Number of GHS micro-outages before and after DocumentDB migration
Microsoft Azure IoT Hub: How to handle the firehose from billions of IoT devices
Azure IoT Hub is a fully managed service that allows organizations to connect, monitor, and manage up to billions of IoT devices. IoT Hub provides reliable communication between devices and the cloud, a queryable store for device metadata and synchronized state information, and extensive monitoring for device connectivity and device identity management events. Since IoT Hub is the ingestion point for the massive volume of writes coming from IoT devices across all of Azure, the team needed a robust and scalable database in their backend.
IoT Hub exposes device-related information, “device twins”, as part of its APIs that devices and back ends can use to synchronize device conditions and configuration. A device twin is a JSON document that includes tags assigned to the device in the backend, a property bag of “reported properties” that holds device configuration or conditions, and a property bag of “desired properties” that can be used to notify the device to perform a configuration change. The IoT Hub team chose Azure DocumentDB over HBase, Cassandra, and MongoDB because DocumentDB provided the functionality the team needed: guaranteed low latency, elastic scaling of storage and throughput, high availability via global distribution, and rich query capabilities via automatic indexing.
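As a rough illustration of that shape (a simplified, hypothetical twin, not taken from the IoT Hub documentation):

{
  "deviceId": "thermostat-042",
  "tags": {
    "building": "43",
    "owner": "contoso"
  },
  "properties": {
    "desired": {
      "telemetryIntervalSeconds": 30
    },
    "reported": {
      "telemetryIntervalSeconds": 60,
      "firmwareVersion": "1.2.0"
    }
  }
}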
IoT Hub stores the device twin data as JSON documents and performs updates based on the latest state reported by devices in near real time. The architecture uses a partitioned collection with a compound partition key, constructed by concatenating the Azure account (tenant) ID and the device ID, to elastically scale and handle massive volumes of writes. IoT Hub also uses Service Fabric to scale out devices across multiple servers, each server communicating with one or more DocumentDB partitions. This topology is replicated across each Azure region where IoT Hub is available.
In this blog, we looked at a few first-party use cases of DocumentDB and how these Microsoft teams were able to use Azure DocumentDB to improve the user experience, latency, and reliability of their services.
If you want a non-evaluation version of the VM, we have those as well. They do require a Windows 10 Pro license, which you can get from the Microsoft Store.
JavaScript performance is an evergreen topic on the web. With each new release of Microsoft Edge, we look at user feedback and telemetry data to identify opportunities to improve the Chakra JavaScript engine and enable better performance on real sites.
In this post, we’ll walk you through some new features coming to Chakra with the Windows 10 Creators Update that improve the day-to-day browsing experience in Microsoft Edge, as well as some new experimental features for developers: WebAssembly, and Shared Memory and Atomics.
Under the hood: JavaScript performance improvements
Saving memory by re-deferring functions
Back in the days of Internet Explorer, Chakra introduced the ability to defer-parse functions, and more recently extended the capability to defer-parse event handlers. For eligible functions, Chakra performs a lightweight pre-parsing phase where the engine checks for syntax errors at startup time, and delays the full parsing and bytecode generation until functions are called for the first time. While the obvious benefit is to improve page load time and avoid wasting time on redundant functions, defer-parsing also prevents memory from being allocated to store metadata such as ASTs or bytecode for those redundant functions. In the Creators Update, Microsoft Edge and Chakra further utilize the defer-parsing mechanism and improve memory usage by allowing functions to be re-deferred.
The idea of re-deferring is deceptively simple – for every function that Chakra deems would no longer get executed, the engine frees the bulk of the memory the function holds to store metadata generated after pre-parsing, and effectively leaves the function in a deferred state as if it has just been pre-parsed. Imagine a function foo which gets deferred upon startup, called at some point, and re-deferred later.
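A rough sketch of that lifecycle in code (the timeline comments describe engine behavior, not anything observable from script):

// startup: foo is only pre-parsed (syntax-checked); no AST or bytecode yet
function foo() {
  // ... large body ...
}

foo(); // first call: full parsing and bytecode generation happen now

// if foo is not called again for several GC cycles, Chakra can free its
// AST and bytecode and return it to the deferred, pre-parsed state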
The tricky part about re-deferring is that Chakra cannot perform such actions too aggressively, or it risks frequently re-paying the cost of full parsing, bytecode generation, etc. Chakra checks its record of function call counts every N GC cycles, and re-defers functions that were not called over that period. The value of N is based on heuristics; as a special case, a smaller value is used at startup, when memory usage is more likely to peak. It is hard to generalize the exact saving from re-deferring, as it depends heavily on the content served, but in our experiment with a small sample of sites, re-deferring typically reduced the memory allocated by Chakra by 6-12%.
Post Creators Update, we are working on addressing an existing limitation of re-deferring to handle arrow functions, getters, setters, and functions that capture lexically-scoped variables. We expect to see further memory savings from the re-deferring feature.
Optimizing away heap arguments
The usage of the arguments object is fairly common on the web. Whenever a function uses the arguments object, Chakra creates, if necessary, a “heap arguments” object so that both the formals and the arguments object refer to the same memory location. Allocating such an object can be expensive, so the Chakra JIT optimizes away the creation of heap arguments when the function never writes to its formal parameters.
// no writes to formals (a & b) therefore heap args can be optimized away
function plus(a, b) {
if (arguments.length == 2) {
return a + b;
}
}
To measure the impact of the optimization, our web crawler estimates that it benefits about 95% of websites, and it allows the React sub-test in the Speedometer benchmark, which runs a simple todoMVC implemented with React, to speed up by about 30% in Microsoft Edge.
Better performance for minified code
Using a minifier before deploying scripts has been a common practice for web developers to reduce the download burden on the client side. However, minifiers can sometimes pose performance issues because they introduce code patterns that developers typically would not write by hand and that therefore might not be optimized.
Previously, we’ve made optimizations in Chakra for code patterns observed in UglifyJS, one of the most heavily-used minifiers, and improved performance for some code patterns by 20-50%. For the Creators Update, we investigated the emit pattern of the Closure compiler, another widely used minifier in the JavaScript ecosystem, and added a series of inlining heuristics, fast paths and other optimizations according to our findings.
The changes lead to a visible speedup for code minified by Closure or other minifiers that follow the same patterns. As an experiment to measure the impact consistently in a well-defined and constrained environment, we minified some popular JavaScript benchmarks using Closure and noticed a 5~15% improvement on sub-tests with patterns we’ve optimized for.
WebAssembly
WebAssembly is an emerging portable, size- and load-time-efficient binary format for the web. It aims to achieve near-native performance and provides a viable solution for performance-critical workloads such as games, multimedia encoding/decoding, etc. As part of the WebAssembly Community Group (CG), we have been collaborating closely with Mozilla, Google, Apple and others in the CG to push the design forward.
Following the recent conclusion of WebAssembly browser preview and the consensus over the minimum viable product (MVP) format among browser vendors, we’re excited to share that Microsoft Edge now supports WebAssembly MVP behind the experimental flag in the Creators Update. Users can navigate to about:flags and check the “Enable experimental JavaScript features” box to turn on WebAssembly and other experimental features such as SharedArrayBuffer.
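Once the flag is on, a module can be loaded through the standard MVP JavaScript API. A minimal sketch, assuming a module.wasm file that exports an add function (both names are hypothetical):

fetch('module.wasm')                              // hypothetical .wasm file
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))  // compile and instantiate
  .then(({ instance }) => {
    console.log(instance.exports.add(2, 3));      // call an exported function
  });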
Under the hood, Chakra defers parsing WebAssembly functions until called, unlike other engines that parse and JIT functions at startup time. We’ve observed startup time as a major headache for large web apps and have rarely seen runtime performance being the issue from our experiences with existing WebAssembly & asm.js workloads. As a result, a WebAssembly app often loads noticeably faster in Microsoft Edge. Try out the Tanks! demo in Microsoft Edge to see it for yourself – be sure to enable the “Experimental JavaScript Features” flag in about:flags!
Beyond the Creators Update, we are tuning WebAssembly performance as well as working on the last remaining MVP features, such as the response APIs and structured cloning, before we turn WebAssembly on by default in Microsoft Edge. Critical post-MVP features such as threads are being considered as well.
Shared Memory & Atomics
JavaScript as we know it operates in a run-to-completion single-threaded model. But with the growing complexity of web apps, there is an increasing need to fully exploit the underlying hardware and utilize multi-core parallelism to achieve better performance.
The creation of Web Workers unlocked the possibility of parallelism on the web and of executing JavaScript without blocking the UI thread. Communication between the UI thread and workers was initially done by cloning data via postMessage. Transferable objects were later added as a welcome change, allowing data to be transferred to another thread without the runtime and memory overhead of cloning; the original owner forfeits its right to access the data to avoid synchronization problems.
Soon to be ratified in ES2017, Shared Memory & Atomics is the new addition to the picture to further improve parallel programming on the web. With the release of Creators Update, we are excited to preview the feature behind the experimental JavaScript features flag in Microsoft Edge.
In Shared Memory and Atomics, SharedArrayBuffer is essentially an ArrayBuffer shareable between threads and removes the chore of transferring data back and forth. It enables workers to virtually work on the same block of memory, guaranteeing that a change on one thread on a SharedArrayBuffer will eventually be observed (at some unknown point of time) on other threads holding the same buffer. As long as workers operate on different parts of the same SharedArrayBuffer, all operations are thread-safe.
The addition of Atomics gives developers the necessary tools to safely and predictably operate on the same memory location by adding atomics operations and the ability to wait and wake in JavaScript. The new feature allows developers to build more powerful web applications. As a simple illustration of the feature, here’s the producer-consumer problem implemented with shared memory:
// UI thread
var producer = new Worker('producer.js');
var consumer = new Worker('consumer.js');
var sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 1000);
var i32 = new Int32Array(sab);
producer.postMessage(i32); // the underlying SharedArrayBuffer is shared, not copied
consumer.postMessage(i32);
// producer.js – a worker that keeps producing non-zero data
onmessage = ev => {
let i32 = ev.data;
let i = 0;
while (true) {
let curr = Atomics.load(i32, i); // load i32[i]
if (curr != 0) Atomics.wait(i32, i, curr); // wait till i32[i] != curr
Atomics.store(i32, i, produceNonZero()); // store in i32[i]
Atomics.wake(i32, i, 1); // wake 1 thread waiting on i32[i]
i = (i + 1) % i32.length;
}
}
// consumer.js – a worker that keeps consuming and replacing data with 0
onmessage = ev => {
let i32 = ev.data;
let i = 0;
while (true) {
Atomics.wait(i32, i, 0); // wait till i32[i] != 0
consumeNonZero(Atomics.exchange(i32, i, 0)); // exchange value of i32[i] with 0
Atomics.wake(i32, i, 1); // wake 1 thread waiting on i32[i]
i = (i + 1) % i32.length;
}
}
Shared memory will also play a key role in the upcoming WebAssembly threads.
Built with the community
We hope you enjoy the JavaScript performance updates in Microsoft Edge and are as excited as we are to see the progress on WebAssembly and shared memory pushing the performance boundary of the web. We love to hear user feedback and are always on the lookout for opportunities to improve JavaScript performance on the real-world web. Help us improve by sharing your thoughts with us via @MSEdgeDev and @ChakraCore, or on the ChakraCore repo on GitHub.
In Episode 127 of the Office 365 Developer Podcast, Richard diZerega and Andrew Coates talk with Michael Zlatkovsky and Bhargav Krishna about the new Script Lab Office add-in.
About Michael Zlatkovsky
I’m a developer on the Office Extensibility Team at Microsoft, working on the Office.js APIs and the tooling that surrounds them. I love API design work and feel fortunate to have played a part in the rebirth of the Office 2016 wave of Office.js APIs. In my spare time, I have been writing a book about Office.js key concepts, which has been a fun way of expanding upon my answers on StackOverflow. The book is available in e-book form at leanpub.com/buildingofficeaddins.
About Bhargav Krishna
I have been a web developer at Microsoft since 2013. I currently work on Microsoft Teams and love cutting-edge tech, learning new frameworks, tools, platforms, etc. Outside of work, I am an avid gamer, and you can find me online as @wrathofzombies on Xbox, GitHub, Twitter, and Facebook.
About the hosts
Richard is a software engineer in Microsoft’s Developer Experience (DX) group, where he helps developers and software vendors maximize their use of Microsoft cloud services in Office 365 and Azure. Richard has spent a good portion of the last decade architecting Office-centric solutions, many that span Microsoft’s diverse technology portfolio. He is a passionate technology evangelist and a frequent speaker at worldwide conferences, trainings and events. Richard is highly active in the Office 365 community, popular blogger at aka.ms/richdizz and can be found on Twitter at @richdizz. Richard is born, raised and based in Dallas, TX, but works on a worldwide team based in Redmond. Richard is an avid builder of things (BoT), musician and lightning-fast runner.
A civil engineer by training and a software developer by profession, Andrew Coates has been a developer evangelist at Microsoft since early 2004, teaching, learning and sharing coding techniques. During that time, he’s focused on .NET development on the desktop, in the cloud, on the web, on mobile devices and most recently for Office. Andrew has a number of apps in various stores and generally has far too much fun doing his job to honestly be able to call it work. Andrew lives in Sydney, Australia with his wife and two almost-grown-up children.
Starting in the latest preview release of Visual Studio version 15.2 (26418.1-Preview), you can now find vswhere installed in “%ProgramFiles(x86)%\Microsoft Visual Studio\Installer” (on 32-bit operating systems before Windows 10, you should use “%ProgramFiles%\Microsoft Visual Studio\Installer”).
While I initially made vswhere.exe available via NuGet and Chocolatey for easy acquisition, some projects do not use package managers nor do most projects want to commit binaries to a git repository (since each version with little compression would be downloaded to every repo without a filter like git LFS).
So starting with build 15.2.26418.1* you can rely on vswhere.exe being installed. We actually install it with the installer, so even if you install a product like Build Tools you can still rely on vswhere.exe being available in “%ProgramFiles(x86)%\Microsoft Visual Studio\Installer”.
* A note about versions: the display version is 15.2.26418.1, but package and binary versions may be 15.0.26418.1. This is an artifact of how we do versioning, but we are looking to fix the “installationVersion” property you can see with vswhere.exe to match the display version, which you can currently see as part of the “installationName” property, as in the following example.
Visual Studio comes packed with a set of productivity tools to make it easy for C++ developers to read, edit, and navigate through their code. In this blog post we will dive into these features and go over what they do. This post is part of a series aimed at users who are new to Visual Studio.
If you’re like most developers, chances are you spend more time looking at code than modifying it. With that in mind, Visual Studio provides a suite of features to help you better visualize and understand your project.
Basic Editor Features
Visual Studio automatically provides syntax colorization for your C++ code to differentiate between different types of symbols. Unused code (e.g. code under an #if 0) is more faded in color. In addition, outlines are added around code blocks to make it easy to expand or collapse them.
If there is an error in your code that will cause your build to fail, Visual Studio adds a red squiggle where the issue is occurring. If Visual Studio finds an issue with your code but the issue wouldn’t cause your build to fail, you’ll see a green squiggle instead. You can look at any compiler-generated warnings or errors in the Error List window.
If you place your cursor over a curly brace, ‘{‘ or ‘}’, Visual Studio highlights its matching counterpart.
You can zoom in or out in the editor by holding down Ctrl and scrolling with your mouse wheel or selecting the zoom setting in the bottom left corner.
The Tools > Options menu is the central location for Visual Studio options, and gives you the ability to configure a large variety of different features. It is worth exploring to tailor the IDE to your unique needs.
You can display line numbers in the editor by going to Tools > Options > Text Editor > All Languages > General or by searching for “line num” with Quick Launch (Ctrl + Q). Line numbers can be enabled for all languages or for specific languages only, including C++.
Quick Info and Parameter Info
You can hover over any variable, function, or other code symbol to get information about that symbol. For symbols that can be declared, Quick Info displays the declaration.
When you are writing out a call to a function, Parameter Info is invoked to clarify the types of parameters expected as inputs. If there is an error in your code, you can hover over it and Quick Info will display the error message. You can also find the error message in the Error List window.
In addition, Quick Info displays any comments that you place just above the definition of the symbol that you hover over, giving you an easy way to check the documentation in your code.
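For example, in a small sketch like this (the function and values are purely illustrative), hovering over the call to RectangleArea shows the declaration together with the comment written above the definition:

// Returns the area of a rectangle given its width and height.
double RectangleArea(double width, double height)
{
    return width * height;
}

int main()
{
    // Quick Info on RectangleArea here shows its declaration plus the comment above it.
    double area = RectangleArea(3.0, 4.0);
    return static_cast<int>(area);
}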
Scroll Bar Map Mode
Visual Studio takes the concept of a scroll bar much further than most applications. With Scroll Bar Map Mode, you can scroll and browse through a file at the same time without leaving your current location, or click anywhere on the bar to navigate there. Even with Map Mode off, the scroll bar highlights changes made in the code in green (for saved changes) and yellow (for unsaved changes). You can turn on Map Mode in Tools > Options > Text Editor > All Languages > Scroll Bars > Use map mode for vertical scroll bar or by searching for “map” with Quick Launch (Ctrl + Q).
Class View
There are several ways of visualizing your code. One example is Class View. You can open Class View from the View menu or by pressing Ctrl + Shift + C. Class View displays a searchable set of trees of all code symbols and their scope and parent/child hierarchies, organized on a per-project basis. You can configure what Class View displays from Class View Settings (click the gear box icon at the top of the window).
Generate Graph of Include Files
To understand dependency chains between files, right-click while in any open document and choose Generate graph of include files.
You also have the option to save the graph for later viewing.
View Call Hierarchy
You can right-click any function call to view a recursive list of its call hierarchy (both functions that call it, and functions that it calls). Each function in the list can be expanded in the same way. For more information, see Call Hierarchy.
Peek Definition
You can check out the definition of a variable or function at a glance, inline, by right-clicking it and choosing Peek Definition, or pressing Alt+F12 with the cursor over that symbol. This is a quick way to learn more about the symbol without having to leave your current position in the editor.
Navigating Around Your Codebase
Visual Studio provides a suite of tools to allow you to navigate around your codebase quickly and efficiently.
Open Document
Right-click on an #include directive in your code and choose Open Document, or press Ctrl+Shift+G with the cursor over that line, to open the corresponding document.
Toggle Header/Code File
You can switch between a header file and its corresponding source file or vice versa, by right-clicking anywhere in your file and choosing Toggle Header / Code File or by pressing its corresponding keyboard shortcut: Ctrl+K, Ctrl+O.
Solution Explorer
Solution Explorer is the primary means of managing and navigating between files in your solution. You can navigate to any file by clicking it in Solution Explorer. By default, files are grouped by the projects that they appear in. To change this default view, click the Solutions and Folders button at the top of the window to switch to a folder-based view.
Go To Definition/Declaration
You can navigate to the definition of a code symbol by right-clicking it in the editor and choosing Go To Definition, or pressing F12. You can navigate to a declaration similarly from the right-click context menu, or by pressing Ctrl+F12.
Find / Find in Files
You can run a text search for anything in your solution with Find (Ctrl+F) or Find in Files (Ctrl+Shift+F).
Find can be scoped to a selection, the current document, all open documents, the current project, or the entire solution, and supports regular expressions. It also highlights all matches automatically in the IDE.
Find in Files is a more sophisticated version of Find that displays a list of results in the Find Results window. It can be configured even further than Find, for example by letting you search external code dependencies, filter by file types, and more. You can organize Find results in two windows or append results from multiple searches together in the Find Results window. Individual entries in the Find Results window can also be deleted if they are not desired.
You can navigate to different symbols around your codebase by using the navigation bar above the editor window.
Go To
Go To (Ctrl + T) is a code navigation feature that can be used to navigate to files, code symbols, or line numbers. For more information, take a look at Introducing Go To, the Successor to Navigate To.
Quick Launch
Quick Launch makes it easy to navigate to any window, tool, or setting in Visual Studio. Simply press Ctrl+Q or click the search box in the top-right corner of the IDE and search for what you are looking for.
Authoring and refactoring code
Visual Studio provides a suite of tools to help you author, edit, and refactor your code.
Basic Editor Features
You can easily move lines of code up and down by selecting them, holding down Alt, and pressing the Up/Down arrow keys.
To save a file, press the Save button at the top of the IDE, or press Ctrl+S. Generally though, it’s a good idea to save all your changed files at once by using Save All (Ctrl+Shift+S).
Change Tracking
Any time you make a change to a file, a yellow bar appears on the left to indicate that unsaved changes were made. When you save the file, the bar turns green.
The green and yellow bars are preserved as long as the document is open in the editor. They represent the changes that were made since you last opened the document.
IntelliSense
IntelliSense is a powerful code completion tool that suggests symbols and code snippets for you as you type. C++ IntelliSense in Visual Studio runs in real time, analyzing your codebase as you update it and providing contextual recommendations based on the characters of a symbol that you’ve typed. As you type more characters, the list of recommended results narrows down.
In addition, some symbols are omitted automatically to help you narrow down on what you need. For example, when accessing a class object’s members from outside the class, you will not be able to see private members by default, or protected members (if you are not in the context of a child class).
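For instance, given a class sketched like the one below (names are purely illustrative), typing box. outside the class lists Resize and Size in the member list, but not the private field size:

class Box
{
public:
    void Resize(int newSize) { size = newSize; }
    int  Size() const { return size; }

private:
    int size = 0;   // hidden from the member list outside the class
};

int main()
{
    Box box;
    box.Resize(3);        // IntelliSense here offers only the public members
    return box.Size();
}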
After you have picked out the symbol you want to add from the drop-down list, you can autocomplete it with Tab, Enter, or one of the other commit characters (by default: {}[]().,:;+-*/%&|^!=?@#).
TIP: If you want to change the set of characters that can be used to complete IntelliSense suggestions, search for “IntelliSense” in Quick Launch (Ctrl + Q) and choose the Text Editor > C/C++ > Advanced option to open the IntelliSense advanced settings page. From there, edit Member List Commit Characters with the changes you want. If you find yourself accidentally committing results you didn’t want or want a new way to do so, this is your solution.
The IntelliSense section of the advanced settings page also provides many other useful customizations. The Member List Filter Mode option, for example, has a dramatic impact on the kinds of IntelliSense autocomplete suggestions you will see. By default, it is set to Fuzzy, which uses a sophisticated algorithm to find patterns in the characters that you typed and match them to potential code symbols. For example, if you have a symbol called MyAwesomeClass, you can type “MAC” and find the class in your autocomplete suggestions, despite omitting many of the characters in the middle. The fuzzy algorithm sets a minimum threshold that code symbols must meet to show up in the list.
If you don’t like the fuzzy filtering mode, you can change it to Prefix, Smart, or None. While None won’t reduce the list at all, Smart filtering displays all symbols containing substrings that match what you typed. Prefix filtering on the other hand purely searches for strings that begin with what you typed. These settings give you many options to define your IntelliSense experience, and it’s worth trying them out to see what you prefer.
IntelliSense doesn’t just suggest individual symbols. Some IntelliSense suggestions come in the form of code snippets, which provide a basic example of a code construct. Snippets are easily identified by the square box icon beside them. In the following screenshot, “while” is a code snippet that automatically creates a basic while loop when it is committed. You can choose to toggle the appearance of snippets in the advanced settings page.
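For reference, committing the while snippet inserts roughly the following skeleton (the exact formatting depends on your editor settings), with the condition selected so you can type over it:

while (true)
{

}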
Visual Studio 2017 provides two new IntelliSense features to help you narrow down the total number of autocomplete recommendations: Predictive IntelliSense, and IntelliSense filters. Check out our blog post, C++ IntelliSense Improvements – Predictive IntelliSense & Filtering, to learn more about how these two features can improve your productivity.
If you ever find yourself in a situation where the list of results suggested by IntelliSense doesn’t match what you’re looking for, and you already typed some valid characters beforehand, you can unfilter the list by clicking the Show more results button in the bottom left corner of the drop-down list (it looks like a plus sign, +) or by pressing Ctrl + J. This refreshes the suggestions and adds some new entries. If you’re using Predictive IntelliSense, which is an optional mode that uses a stricter filtering mechanism than usual, you may find the list expansion feature even more useful.
Quick Fixes
Visual Studio sometimes suggests ways to improve or complete your code. This comes in the form of lightbulb pop-ups called Quick Fixes. For example, if you declare a class in a header file, Visual Studio will offer to create a definition for it in a separate .cpp file.
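As a small, illustrative sketch: given a member function that is declared in a header but has no body, the lightbulb offers to generate a stub definition in the corresponding .cpp file, roughly like this:

// sorter.h (illustrative)
class Sorter
{
public:
    void Sort();   // declaration only; the Quick Fix offers to create the definition
};

// sorter.cpp (the generated stub, ready for you to fill in)
#include "sorter.h"

void Sorter::Sort()
{
}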
Refactoring Features
Do you have a codebase that you’re not happy with? Have you found yourself needing to make sweeping changes but are afraid of breaking your build or feel like it will take too long? This is where the C++ refactoring features in Visual Studio come in. We provide a suite of tools to help you make code changes. Currently, Visual Studio supports the following refactoring operations for C++:
Rename
Extract Function
Change Function Signature
Create Declaration/Definition
Move Function Definition
Implement Pure Virtuals
Convert to Raw String Literal
Many of these features are called out in our announcement blog post, All about C++ Refactoring in Visual Studio. Change Function Signature was added afterward, but functions exactly as you’d expect – it allows you to change the signature of a function and replicate changes throughout your codebase. You can access the various refactoring operations by right-clicking somewhere in your code or using the Edit menu. It’s also worth remembering Ctrl + R, Ctrl + R to perform symbol renames; it’s easily the most common refactoring operation.
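As a small illustration of one of these operations, Convert to Raw String Literal rewrites an escape-heavy string into the equivalent raw form; a sketch of the before and after:

// Before: escaped backslashes make the path hard to read
const char* path = "C:\\Program Files (x86)\\Microsoft Visual Studio\\Installer";

// After Convert to Raw String Literal: the same string with no escaping
const char* rawPath = R"(C:\Program Files (x86)\Microsoft Visual Studio\Installer)";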
In addition, check out the C++ Quick Fixes extension, which adds a host of other tools to help you change your code more efficiently.
Visual Studio 2017 comes with built-in support for EditorConfig, a popular code style enforcement mechanism. You can create .editorconfig files and place them in different folders of your codebase, applying code styles to those folders and all subfolders below them. An .editorconfig file supersedes any .editorconfig files in parent folders and overrides any formatting settings configured via Tools > Options. You can set rules around tabs vs. spaces, indent size, and more. EditorConfig is particularly useful when you are working on a project as part of a team, for example when one developer tends to check in code indented with tabs while the team standard is spaces. EditorConfig files can easily be checked in as part of your code repo to enforce your team's style.
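As a minimal sketch (using standard EditorConfig properties; the values are just an example, not a recommendation), an .editorconfig placed at the root of a repository might look like this:

root = true

[*.{cpp,h}]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true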
Lastly, you can find additional resources on how to use Visual Studio in our official documentation pages at docs.microsoft.com. In particular, for developer productivity, we have the following set of articles available:
In a continuous effort to enhance the Bing user experience and promote the expression of free speech while simultaneously upholding the rights of intellectual property and copyright holders, Bing has streamlined the copyright removals process. We heard your feedback and understand that sometimes websites that experience alleged copyright infringement issues have a difficult time gaining visibility into the problem and getting relisted. Webmasters now have the ability to see what pages on their site have been impacted by “copyright removal notices” and appeal those decisions.
Enhanced visibility
This new feature will provide webmasters with more visibility into how DMCA takedowns impact their site and gives webmasters the opportunity to either address the infringement allegation or remove the offending material. All requests will be evaluated in a new appeals process.
More information
For more information on Bing’s copyright infringement policies and how Bing delivers search results, visit Bing's copyright infringement policies. Bing also provides full transparency of takedown requests in a bi-annual Content Removal Requests Report with associated FAQs; you can access the latest version in the Bing Content Removal Requests Report.
Yesterday, I had the honour of presenting at The Data Science Conference in Chicago. My topic was Reproducible Data Science with R, and while the specific practices in the talk are aimed at R users, my intent was to make a general argument for doing data science within a reproducible workflow. Whatever your tools, a reproducible process:
Saves time,
Produces better science,
Creates more trusted research,
Reduces the risk of errors, and
Encourages collaboration.
Sadly there's no recording of this presentation, but my hope is that the slides are sufficiently self-contained. Some of the images are links to further references, too. You can browse them below, or download (CC-BY) them from the SlideShare page.
Thanks to all who attended for the interesting questions and discussion during the panel session!
As we ramp up for Build, the Windows Dev team would like to thank you, the developer community, for all the amazing work you have done over the past 12 months. Because of your efforts and feedback, we’ve managed to add countless new features to the Universal Windows Platform and the Windows Store in an ongoing effort to constantly improve. And thanks to your input on the Windows Developer Platform Backlog, you have helped us prioritize new UWP features.
In recognition of all you have done, this year’s Build conference in Seattle will feature the first-ever Windows Developers Awards given to community developers who have built exciting UWP apps in the last year and published them in the Windows Store. The awards are being given out in four main categories:
App Creator of the Year – This award recognizes an app leveraging the latest Windows 10 capabilities. Some developers are pioneers, the first to explore and integrate the latest features in Windows 10 releases. This award honors those who made use of features like Ink, Dial, Cortana, and other features in creative ways.
Game Creator of the Year – This award recognizes a game by a first-time publisher in Windows Store. Windows is the best gaming platform–and it’s easy to see why. From Xbox to PCs to mixed reality, developers are creating the next generation of gaming experiences. This award recognizes developers who went above and beyond to publish innovative, engaging and magical games to the Windows Store over the last year.
Reality Mixer of the Year – This award recognizes the app demonstrating a unique mixed reality experience. Windows Mixed Reality lets developers create experiences that transcend the traditional view of reality. This award celebrates those who choose to mix their own view of the world by blending digital and real-world content in creative ways.
Core Maker of the Year – This award recognizes a maker project powered by Windows. Some devs talk about the cool stuff they could build–others just do it. This award applauds those who go beyond the traditional software interface to integrate Windows in drones, PIs, gardens, and robots to get stuff done.
In addition to these, a Ninja Cat of the Year award will be given as special recognition. Selected by the Windows team at Microsoft, this award celebrates the developer or experience that we believe most reflects what Windows is all about, empowering people of action to do great things.
Here’s what we want from you: we need the developer community to help us by voting for the winners of these four awards on the awards site, so take a look and tell us who you think has created the most compelling apps. Once you’ve voted, check back anytime to see how your favorites are doing. Voting ends on 4/27, so get your Ninja votes in quickly.
I guess these have been around for a few years now, but I recently stumbled across these short videos dedicated to each of the 11 lines of the London Underground. Presented by Geoff Marshall (who once set a record for visiting all the stations in a single day), each video includes interesting trivia on the history, operations, art, and architecture of the stations on each line. Something to check out for your next visit to London.
That's all from us for this week — it's been a big week of announcements! We'll be back on Monday with more posts. Have a great weekend!
Are you new to Visual Studio and working with C++? Then you’ve come to the right place. Whether you’re a student writing one of your first programs or a seasoned C++ developer with years of experience, you’ll find Visual Studio to be a powerful environment for C++ development. Visual Studio is an IDE packed with features, from code browsing, colorization and navigation, to autocompletion of symbols, a built-in compiler and build system, a top of the line debugger, and built-in testing and code analysis tools. We have you covered from beginning to end, from code inception to continuous integration management, but of course this means there is a lot to learn. This blog post breaks down the basics to get you started. You will get only a small glimpse of the powerful tools that Visual Studio provides, but if you want to learn more, you should click the links throughout this post.
Visual Studio crossed the 20-year mark with the release of Visual Studio 2017. There are many versions of the product out there, but in general, you should always pick the latest one. This will allow you to use the latest and greatest features, including the most up-to-date compiler. You’ll also benefit from recent bug fixes and performance improvements.
Visual Studio is available in three different editions: Community, Professional, and Enterprise. The Community Edition is completely free of charge for small businesses, open source projects, academic research, and classroom learning environments. If you don’t qualify for the Community License, you can purchase the Professional Edition instead. If you work for a large enterprise or simply want the best Visual Studio has to offer, then you should consider the Enterprise Edition. You can compare the offerings on the Visual Studio website if you are unsure. This guide is applicable to all editions.
Once you have downloaded the installer, run it. Visual Studio lets you choose which workloads to install, so you get only the components you want and nothing you don’t. The following workloads are under the C++ umbrella:
Desktop development with C++
Provides the tools needed for building and debugging classic desktop applications. This includes classic Win32 console applications.
Universal Windows Platform development
This workload is not specific to just C++, but you can enable the C++ support by checking the individual component for “C++ UWP support”.
There are a variety of other workloads for other languages such as C#, and other platforms such as Microsoft Azure (for your cloud needs). The workloads you install are not permanent, and you can always change these options by opening the installer and selecting Modify.
Once you have made your selection and clicked Install, Visual Studio will begin the installation process. Once it is complete, Visual Studio is all set up and ready to go!
Now let’s look at an actual project. For this next section, if at any time, you cannot find some feature or command that you are looking for, you can search for it via Quick Launch, the search box at the upper right of the IDE (or press Ctrl+Q to get there fast).
If you open the demo project folder in Windows File Explorer, you will find a variety of different files in addition to some source code. Generally, code organized by Visual Studio appears as a Solution, which contains a collection of Projects. When a codebase is organized this way, it includes a .sln file (which configures the solution) as well as .vcxproj files (which configure each project); these files help define things like include paths, compiler settings, and how the projects are connected.
Visual Studio also supports an Open Folder mode as of Visual Studio 2017 which does away with .sln and .vcxproj files and allows you as the user to configure your own environment independently. This approach is ideal for cross-platform projects that will be run from a variety of different IDEs or editors. Better yet, if you are a CMake user, as of Visual Studio 2017 there is a built-in CMake experience. This guide will not go over Open Folder or CMake, but you are encouraged to check out the relevant blog posts linked in this paragraph for more information.
To open demoApplication, double click the .sln file, or from Visual Studio go to File > Open > Project/Solution… and navigate to the appropriate solution.
Once you have opened the project, a view of its contents appears in the Solution Explorer.
New projects can also be created by going to File > New > Project… and selecting the appropriate template. Console applications are under Visual C++ > Win32.
Building the Application
Visual Studio is closely integrated with the Visual C++ compiler, which makes it easy to build and debug your C++ applications. Near the top of the IDE inside the standard toolbar, there are dropdowns where you can change your build configuration and architecture. You can also easily add more configurations, as needed. For this exercise, you can leave the default settings of Debug and x86. Attempt to build the application by going to Build > Build Solution (or alternatively by pressing F7). The build will fail, since the code contains an error.
The Output Window is a valuable tool while you are building; it provides information about the status of the build.
Fixing compiler errors
You should see an error in the Error List at the bottom of the screen when you attempt to build. With this error, you not only get the location of the problem and a description, but if you double-click the line, you will be brought to the specific location in the code. This makes it easy to quickly navigate to problem areas.
Double-click on the error after building, and fix the offending line of code.
One of the most useful features for helping you write code quickly in Visual Studio is IntelliSense, which is a context-aware code completion tool. As you type, Visual Studio will suggest classes, methods, objects, code snippets, and more symbols that make sense in relation to what you have typed so far and where you have typed it. You can scroll up and down the suggestions with the arrow keys, and complete symbols with Tab.
In the main function, try adding a call to the farewell function on the mySorter object. You should see IntelliSense pop up with all the possible functions available from the sorter class.
Go To
To efficiently write and understand code, easy code navigation is essential. By using the Go To feature (Ctrl+T) you can quickly get to where you need to go, without taking your hands off the keyboard. When you open the dialog, you can filter your results by clicking the desired button or by starting your query with a specific token. For example, you can go to a specific file by opening the Go To dialog and typing “f ” followed by the file name. Another common use is to go to a specific line number; you can do this by opening the dialog and using the “:” token, or by pressing Ctrl+G. Try using Go To to navigate around the demo project.
Use the Go To menu (Ctrl+T) to open the file sorter.h by typing “f sorter.h”.
Use the Ctrl+G shortcut to go to line 23 and change the private member “name” to your name.
Peek/Go to Definition
Sometimes it can be challenging to find out where a function or object is defined in your codebase. This problem is easily solved in Visual Studio, where you can easily peek into definitions. Try this in the demo project by selecting the function you want to look at, and pressing Alt+F12, or selecting it from the right-click menu. This will bring up a preview of the file where the function is defined, where you can quickly make small edits. Press Esc to close the preview window. You can also go directly to the definition by pressing only F12.
Use Peek Definition on the printVector function by selecting the function and pressing Alt+F12.
You can also use Visual Studio to refactor existing code. In the demo project, there is a function that has an unhelpful name. Rather than going to each file to change the name of each occurrence manually, choose one of the functions and press Ctrl+R, Ctrl+R or right-click it and choose Rename. This will bring up a menu where you can choose what you want to rename it to, and then preview the changes before they are committed.
Use Rename (Ctrl+R, Ctrl+R) to change the method “SILLY_SALUTATION_FUNCTION” to something more useful, such as “greeting”.
Debugging and Diagnosing Issues
Once you can successfully build your application and write code easily, the next step is often debugging the application. Debugging can be a complex process, and Visual Studio provides many powerful tools to help along the way. The most commonly used debugging tool is the breakpoint, so let’s start with that. If you click the margin to the left of a line of code, a red circle appears and a breakpoint is set; clicking the circle again removes it.
When a breakpoint is set and the program reaches that point of execution, it will stop, allowing you to inspect variables and the current state of the program.
Place a breakpoint on line 33 of demoApplication.cpp by clicking the bar to the left of the line numbers.
Click the red circle again to remove the breakpoint.
To begin debugging, you can either press the green arrow at the top of the IDE or press F5. Once the program has stopped on the breakpoint, there are many things you can do to help you diagnose problems. One of the best ways to find problems is to understand the current state of the program, versus what it should be. This can be easily achieved by using the Autos Window, which lists recently used variables and their values. You can also hover your mouse over a variable to see what the current value is.
Do the following:
Place a breakpoint on line 14 of the main function.
Click the green arrow at the top of the IDE or press F5 to begin debugging.
Find out what the value of testInt is before it is initialized by hovering over the value in the code.
Look at the value of testInt in the Autos window.
Press the green arrow or F5 again to let the program run to completion and end the debugging session.
When you have sufficiently understood the current state of the program, you can press the green arrow button or press F5 again to have the program run until the next breakpoint. You can also step through the program one line at a time if needed by using the arrows at the top. Step Over (F10) runs whatever is on the current line and suspends execution after any function called on that line returns. Step Into (F11) follows the next function call, allowing you to see what is happening inside that function. At any time, you can step out (Shift+F11), which runs the program until it leaves the current function and returns to the caller. Once you are finished debugging, you can run the program to its completion, or press the red square (or Shift+F5) at the top of the IDE to stop the debugging session.
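As a mental model, consider this tiny illustrative program. With execution stopped on the call to add in main, Step Over (F10) runs the whole call and stops on the following line, while Step Into (F11) stops inside add; Step Out (Shift+F11) then returns you to main.

int add(int a, int b)
{
    int sum = a + b;          // Step Into stops in here
    return sum;
}

int main()
{
    int result = add(2, 3);   // Step Over executes this entire line...
    return result;            // ...and stops here next
}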
Use a combination of these to explore the demo project and see if you can fix the runtime bug in the sort algorithm (Hint: it is in the sort algorithm itself).
There are many more tools within Visual Studio that can help you profile and debug your applications. Check out the C++ Debugging and Diagnostics blog post to learn more.
Testing
Visual Studio has a built-in test framework to help you unit test your projects, ensuring that the code you write is working as expected. To test the demo project, which is a native console application, you can add a Native Unit Test Project to the solution.
Add a test project to the demo. This is done by going to File > New > Project, then selecting Visual C++ > Test > Native Unit Test Project. Make sure to choose the Add to solution option in the Solution dropdown. You can also simply right-click your solution name in the Solution Explorer and choose Add > New Project to accomplish the same task.
Once you have added a unit test, you can open the .cpp file and see the basic testing skeleton in the template, and begin to add tests.
Add a test method, making sure that it will pass. Try the following code:

TEST_METHOD(TestMethod1)
{
    Assert::AreEqual(1, 1);
}
Once you have added a test, you can run the test by going to Test > Run > All Tests in the menu at the top of the IDE. Once you have run the tests, you will see the results in the Test Explorer window.
Run your test by going to Test > Run > All Tests. Try adding another test that will fail, such as the sketch below, and running the tests again.
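For instance, a deliberately failing test might look like the following (the name and values are illustrative); after running, it shows up as failed in the Test Explorer window:

TEST_METHOD(DeliberatelyFailingTest)
{
    // 1 is not equal to 2, so this assertion fails and the test is reported as failed
    Assert::AreEqual(1, 2);
}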
If you would like to find out more information about unit testing, including how to connect your test project to your code under test, and check the code coverage of your unit tests, check out the C++ Unit Testing in Visual Studio blog post.
Working with a Team
It is very common these days to be working on a project with a team, and Visual Studio makes collaboration with others easy! You can easily create new source control repositories using Git or Team Foundation Server to manage your codebase. To create a new repo for a project, click the Add to Source Control button at the bottom of the screen, and add the opened project to the source control system of your choice.
Once you do that, a local repository will be made for your project. From here you can make commits, or push your changes to a remote Git service such as GitHub. This is all managed in the Team Explorer window.
Try adding the demo project to source control, and pushing it to GitHub. This is done by pressing the Add to source control button, and then pushing to a remote repository inside the Team Explorer.
There are many other useful things Visual Studio can do. So many things, in fact, it is hard to cover it all in one guide. Follow the links below to find out more on how to get the most out of Visual Studio.
Code Analysis
Visual Studio by default catches a lot of code issues, but its Code Analysis tool can often uncover hard-to-find issues that would normally be missed. Common errors that are reported include buffer overflows, uninitialized memory, null pointer dereferences, and memory and resource leaks. This functionality is built into the IDE, and can easily be used to help you write better code. Try it out by going to the Analyze menu and choosing Run Code Analysis > On Solution. Learn more about Code Analysis as well as the C++ Core Guidelines Checkers in the announcement blog post.
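For example, on an illustrative function like the one below, code analysis typically flags that the accumulator is read before it is ever assigned (reported as an uninitialized-memory warning such as C6001):

// Illustrative only: code analysis flags the use of uninitialized memory here
int Sum(const int* values, int count)
{
    int total;                    // never initialized
    for (int i = 0; i < count; ++i)
    {
        total += values[i];       // 'total' is read before it is first assigned
    }
    return total;
}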
Library Acquisition
Library acquisition in C++ can be challenging. While Visual Studio has support for NuGet package management, more recently a new tool called vcpkg was launched. Vcpkg is an open source tool maintained by Microsoft that simplifies acquiring and building open source libraries, with over 200 currently supported. This tool, while separate from Visual Studio itself, is a valuable companion for any C++ developer on Windows. Check out the announcement blog post for details.
Conclusion
We hope that this guide has allowed you to get up to speed with Visual Studio quickly, and that you have learned some of the core functionality. This should be enough to get you started, but there are still many more features that could not be covered in this guide. The Visual C++ Blog is a very useful resource to find out more about not only the product overall, but also what we are currently working on and changing. You can find the comprehensive product documentation on docs.microsoft.com as well. Now get out there and build something amazing!
We are constantly trying to improve, so if you have any feedback or suggestions for us, please feel free to reach out to us anytime! We can be reached via email at visualcpp at microsoft.com and you can provide feedback via Help > Report A Problem in the product, or via Developer Community.
We’re pleased to announce the following services which are now available in the UK!
HDInsight – HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Azure.
HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server backed by a 99.9% SLA. Each of these big data technologies and ISV applications are easily deployable as managed clusters with enterprise-level security and monitoring. Learn more about HDInsight.
The Azure Import/Export service, now also available in the UK, is the perfect companion to HDInsight: the combination allows you to easily ingest, process, and optionally export a virtually limitless amount of data.
Azure Import/Export - Import/Export Service is now live in UK South! The Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).
Customers can now use Azure Import/Export Service to copy data to and from Azure Storage by shipping hard disk drives to Azure UK South data center.
Azure Container Registry - Azure Container Registry is a private registry for hosting container images. Using the Azure Container Registry, customers can store Docker-formatted images for all types of container deployments. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, DC/OS and Kubernetes. Users can benefit from using familiar tooling capable of working with the open source Docker Registry v2.
Customers can now create one or more container registries in their Azure subscription. Each registry is backed by a standard Azure storage account in the same location. Take advantage of local, network-close storage of your container images by creating a registry in the same Azure location as your deployments. Learn more about Azure Container Registry.
We are excited about these additions, and invite customers using the UK Azure region to try them today!