
Visual Studio 2017 Version 15.6 Preview



A few days ago we released Visual Studio 2017 version 15.5 and an update to Visual Studio for Mac, and today we are releasing the first preview of the next minor update: Visual Studio 2017 version 15.6. You can either download the Preview, or, if you already have it installed, click on the notification you’ll receive in the product informing you that the update is available.

This latest preview contains new features, productivity improvements, and other enhancements that address our customers’ feedback. Read the feature highlight summary below, and check out the Visual Studio 2017 version 15.6 Preview release notes for a more detailed description of what’s new and how to use the new goodness contained in this Preview.

Diagnostics

The CPU Usage tool (available during F5 debugging in the Diagnostic Tools window and in the ALT-F2 Performance Profiler) now highlights source lines based on the CPU consumption of specific lines of code. When you view the Call Tree or Caller/Callee views of the CPU Usage tool, the source for the selected function is displayed with CPU consumption indicated on each line. If the CPU performance of a function is a concern, you can now determine exactly which source lines of the function are responsible for the CPU consumption while the function executes. This feature requires that source information be included in the generated PDB, which is controlled by the project settings; projects whose PDBs lack source information cannot display either the line attribution or the source file.


Productivity

Team Explorer: In this Preview, we made some improvements to the Git tags functionality, largely based on your feedback via UserVoice. There’s a new Tags tile in the Team Explorer window that lets you view all the tags in your repo. In addition to creating tags, you can now delete tags, push tags, and create a new branch from a tag. Visual Studio Team Services users can now also check out pull request branches, which makes it easier to review pull requests, test changes, and build your code.

Managing Secrets: Many of you know that it’s a best practice to keep sensitive settings like connection strings, passwords, or other credentials outside of source code and in a safe place like Azure Key Vault. Last month we introduced the App Authentication Extension which makes it easy to configure your machine to use these protected settings so that you can develop and debug apps locally using your Visual Studio credentials. With Visual Studio Version 15.6 Preview 1, we’ve moved this functionality directly into the main setup, so now everyone has access to this feature by default. Learn more about managing secrets in the cloud.

C++ Development

With Visual Studio 2017 version 15.6 Preview, you can now create CMake projects from the Add New Project dialog. This Preview also provides built-in support for Android NDK r15c and guaranteed copy elision per the C++17 standard. In addition, the ImageWatch extension has been updated to work with Visual Studio 2017.


Python Development

IntelliSense for Python code no longer requires a completion database. Instead of waiting up to four hours after installing a new package, you can start using it immediately. We have also added experimental support for managing Anaconda packages, new code snippets, and more customizable syntax highlighting. Read our blog post for full details on these improvements and how to enable our experimental features.

Test Explorer

Real time test discovery is a new Visual Studio feature for managed projects that uses the Roslyn compiler to discover tests and populate the Test Explorer in real-time without requiring you to build your project. This feature was introduced behind a feature flag in version 15.5, and it will now be on by default in version 15.6. This feature not only makes test discovery significantly faster, but it also keeps the Test Explorer in sync with code changes such as adding or removing tests. Since real-time discovery is powered by Roslyn, it is only available for C# and Visual Basic test projects using xUnit, NUnit, and MSTest. To learn more, check out the Real Time Test Discovery blog post and Channel9 video.


Azure Development

With this Preview, Visual Studio now supports configuring continuous delivery to Azure for Team Foundation Version Control (TFVC), Git SSH remotes, and Web Apps for containers. Read more about these features on this post about Continuous Delivery Tools for Visual Studio.

WCF Connected Services

The WCF Web Service Reference connected service provider now supports updating an existing service reference. This simplifies the process of regenerating the WCF client proxy code for an updated web service. To use this new feature, open the context menu of the service reference folder to be updated and select the “Update Microsoft WCF Web Service Reference Provider…” option. This regenerates the Reference.cs file using the settings with which the file was originally generated. Please note that this feature is not supported for service references added to the project using a previous version of Visual Studio 2017.


Try it out today!

If you’re not familiar with Visual Studio Previews, take a moment to read the Visual Studio 2017 Release Rhythm. Remember that Visual Studio 2017 Previews can be installed with other versions of Visual Studio and other installs of Visual Studio 2017 without adversely affecting either your machine or your productivity. Previews provide an opportunity for you to receive fixes faster and try out upcoming functionality before it becomes mainstream. Similarly, the Previews enable the Visual Studio Engineering team to validate usage, incorporate suggestions, and detect flaws earlier in the development process. We are highly responsive to feedback coming in through the Previews and look forward to hearing from you.

Please install the Visual Studio 2017 Preview today, exercise your favorite workloads, and tell us what you think. You can report issues to us via the Report a Problem tool in Visual Studio, or you can share a suggestion on UserVoice. You’ll be able to track your issues in the Visual Studio Developer Community, where you can ask questions and find answers. You can also engage with us and other Visual Studio developers through our Visual Studio conversation in the Gitter community (requires a GitHub account).

Christine Ruana, Principal Program Manager, Visual Studio

Christine is on the Visual Studio release engineering team and is responsible for making Visual Studio releases available to our customers around the world.


What’s new for Python in Visual Studio 2017 15.6 Preview 1


Today we have released the first preview of our next update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

In this post, we're going to take a look at some of the new features we have added for Python developers. As always, the preview is a way for us to get features into your hands early so you can provide feedback and we can identify issues with a smaller (and hopefully more forgiving!) audience. If you encounter any trouble, please use the Report a Problem tool to let us know.

Immediate IntelliSense updates with no database

[Image: Before and after views of the IntelliSense pane of the Python Environments window]

Remember how every time you installed or updated a package we would make you wait for hours while we "refresh" our "completion DB"? No more! In this update we are fundamentally changing how we handle this for installed Python environments, including virtual environments, so that we can provide IntelliSense immediately without the refresh.

This has been available as an experimental feature for a couple of releases, and we think it's ready to turn on by default. When you open the Python Environments window, you'll see the "IntelliSense" view is disabled and there is no longer a way to refresh the database -- because there is no database!

The new system works by doing lightweight analysis of Python modules as you import them in your code. This includes .pyd files, and if you have .pyi files alongside your original sources, we will prefer those (see PEP 484 for details; in essence, .pyi files are Python "include" files that give editors information about a module without containing any actual code - just function stubs with type annotations).

[Image: Completions in the editor from the pandas package]

You should notice some improvements in IntelliSense for packages like pandas and scikit-learn, though there will likely be some packages that do not work as well as before. We are actively working on improving results for various code constructs, and you will also see better IntelliSense results as packages start including .pyi type hint files. We encourage you to post on this github issue to let us know about libraries that still do not work well.

(NOTE: If you install this preview alongside an earlier version of Visual Studio 2017, the preview of this feature will also be enabled in the earlier version. You can go back to the old model by disabling the feature in the Preview: open Tools, Options, find the Python/Experimental page, deselect "Use new style IntelliSense", and restart both versions of Visual Studio.)

conda integration

If you use Anaconda, you likely already manage your environments and packages using the conda tool. This tool installs pre-built packages from the Anaconda repository (warning: long page) and manages compatibility with your environment and the other packages you have installed.

For this preview of Visual Studio, we have added two experimental options to help you work with Anaconda:

  • Automatically detect when conda is a better option for managing packages
  • Automatically detect any Anaconda environments you have created manually

[Image: Installing a package with conda in the Python Environments window]

To enable either or both of these features, open Tools, Options, find the Python/Experimental page, and select the check box. For this preview we are starting with both disabled to avoid causing unexpected trouble, but we intend to turn them on by default in a future release.

[Image: Options dialog with the experimental conda options highlighted]

With "Automatically detect Conda environments" enabled, any environments created by the conda tool will be detected and listed in the Python Environments window automatically. You can open interactive windows for these environments, assign them in projects or make them your default environment.

With the "Use Conda package manager when available" option enabled, any environments that have conda installed will use that for search, install and updating instead of pip. Very little will visibly change, but we hope you'll be more successful when adding or removing packages to your environment.

Notice that these two options work independently: you can continue to use pip to manage packages if you like, even if you choose to detect environments that were created with conda. If you are an Anaconda user, you will likely want to enable both options. However, if you do this and encounter issues, disabling each one in turn and then reporting any differences will help us quickly narrow down the source.

Other improvements

We have made a range of other minor improvements and bug fixes throughout all of our Python language support and there are more to come.

Our "IPython interactive mode" is now using the latest APIs, with improved IntelliSense and the same module and class highlighting you see in the editor.

[Image: Interactive window showing the current jupyter_client version and improved syntax highlighting]

There are new code snippets for the argparse module. Start typing "arg" in the editor to see what is available.

[Image: Adding an argparse snippet in the editor]

We've also added new color customization options for docstrings and regular expression literals (under Tools, Options, Fonts and Colors), and docstrings have a new default color.

[Image: Docstrings and regex literal strings with customized colors]

If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our github page. Follow our blog to make sure you hear about our updates first, and thanks for using Visual Studio!

In case you missed it: November 2017 roundup


In case you missed them, here are some articles from November of particular interest to R users.

R 3.4.3 "Kite Eating Tree" has been released.

Several approaches for generating a "Secret Santa" list with R.

The "RevoScaleR" package from Microsoft R Server has now been ported to Python.

The call for papers for the R/Finance 2018 conference in Chicago is now open.

Give thanks to the volunteers behind R.

Advice for R user groups from the organizer of R-Ladies Chicago.

Use containers to build R clusters for parallel workloads in Azure with the doAzureParallel package.

A collection of R scripts for interesting visualizations that fit into a 280-character Tweet.

R is featured in a StackOverflow case study at the Microsoft Connect conference.

The City of Chicago uses R to forecast water quality and issue beach safety alerts.

A collection of best practices for sharing data in spreadsheets, from a paper by Karl Broman and Kara Woo.

The MRAN website has been updated with faster package search and other improvements.

The curl package has been updated to use the built-in winSSL library on Windows.

Beginner, intermediate and advanced on-line learning plans for developing AI applications on Azure.

A recap of the EARL conference (Effective Applications of the R Language) in Boston. 

Giora Simchoni uses R to calculate the expected payout from a slot machine.

An introductory R tutorial by Jesse Sadler focuses on the analysis of historical documents.

A new RStudio cheat sheet: "Working with Strings".

An overview of generating distributions in R via simulated gaming dice.

An analysis of StackOverflow survey data ranks R and Python among the most-liked and least-disliked languages.

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.

Discover Beautiful Places with Outings


The Bing Maps team is happy to announce the launch of Outings, a Microsoft Garage project, for iOS and Android devices.

Whether you’re looking for a fun hike near town or planning your next vacation destination, often the hardest part of travel is just figuring out where to go. Outings makes it easier by letting you search high-quality travel stories about beautiful places, drawn from travel blogs and other sources.

Outdoor adventures, historical sites, city life, kid-friendly activities, beautiful vistas, hidden gems: Outings can help you discover them all, available now on iOS and Android devices.

Read more about Outings on the Microsoft Garage blog.

C++17 Feature Removals And Deprecations


Technology advances by inventing new ways of doing things and by discarding old ways. The C++ Standardization Committee is simultaneously adding new features and removing old features at a gradual pace, because we’ve discovered thoroughly better ways of writing code. While feature removals can be annoying, in the sense that programmers need to go change old codebases in order to make them conform to new Standards, they’re also important. Feature removals simplify the Core Language and Standard Library, avoiding the doom of accreting complexity forever. Additionally, removing old features makes it easier to read and write code. C++ will always be a language that offers programmers many ways to write something, but by taking away inferior techniques, it’s easier to choose one of the remaining techniques which are more modern.

 

In the Visual C++ team, we’re trying to help programmers modernize their codebases and take advantage of new Standards, while avoiding unnecessary and untimely disruption. As Visual C++ itself is a multi-decade-old codebase, we understand how valuable legacy codebases are (as they’re the product of years of development and testing), and how difficult they can be to change. While we often post about new features and how to use them, this post will explain what the recently-finalized C++17 Standard has done with old features, and what to expect from future VS 2017 toolset updates. We want to make toolset updates as painless as possible, so that you can continue to compile your code in a compatible manner. When you’re ready, you can enable compiler options to begin migrating your code to new Standards (and away from non-Standard behavior), with additional compiler/library options to (temporarily!) disable disruptive new features, restore removed features, and silence deprecation warnings.

 

We recently implemented Standard version switches, long supported by other compilers, which allow programmers to migrate to newer Standards at their own pace. This means that we can be relatively more aggressive about implementing source breaking changes (including but not limited to feature removals and deprecations) when they’re guarded by the /std:c++17 and /std:c++latest switches, because they won’t affect the /std:c++14 default. (These switches do have a complexity cost, in that they increase the number of modes that the compiler can operate in.)

 

The C++ Standard follows a certain process to remove features. Typically (but not always), a feature is first “deprecated”. This is an official term which is essentially equivalent to the Committee making a frowny face at the feature. The Standardese for deprecated features is collected in a special section (Annex D) at the end of the document. While deprecated features remain Standard and must be supported by conforming implementations, deprecation puts the world on notice that removal is likely (but not guaranteed). (Note that implementations are allowed to warn about anything, but they can definitely warn about usage of deprecated features. The Standard now has an attribute for this very purpose, for marking code to emit such warnings.) In a following Standard, deprecated features can be removed outright.

 

If you’re curious, the relevant Standardese is D [depr]/2 “These are deprecated features, where deprecated is defined as: Normative for the current edition of this International Standard, but having been identified as a candidate for removal from future revisions. An implementation may declare library names and entities described in this section with the deprecated attribute (10.6.4).” and 10.6.4 [dcl.attr.deprecated]/1 “The attribute-token deprecated can be used to mark names and entities whose use is still allowed, but is discouraged for some reason. [ Note: In particular, deprecated is appropriate for names and entities that are deemed obsolescent or unsafe. -end note ]”.

 

Technically, even removal isn’t the end of the road for a feature. Implementations can conform to C++17, yet accept features that were removed in C++17, as an extension. For example, the STL’s Standardese has a “Zombie names” section, saying that “In namespace std, the following names are reserved for previous standardization”. Essentially, C++17 is saying that while it doesn’t specify auto_ptr or unary_function or so forth, conformant C++17 programs aren’t allowed to interfere with such names (e.g. with macros), so that conformant C++17 STL implementations can provide auto_ptr/etc. as a non-C++17-Standard extension. This allows implementers to choose whether they physically remove features, and additionally makes it easier for the Committee to remove features from the Standard.

 

So, in Visual C++’s C++17 mode, we’re implementing feature removals and deprecation warnings, with the intent of permanently removing features in the future (possibly the far future, but someday). Some of this was released in VS 2017 15.3. More is available in VS 2017 15.5 (the second toolset update), and you can expect deprecation and removal to continue indefinitely, as the Committee continues its work (e.g. std::rel_ops is hopefully doomed).

 

How You Can Help Accelerate C++17 Adoption

 

1a. Download the latest released version of VS (and use it in production), and/or

 

1b. Download the latest preview version of VS (and test it against your whole codebase), and/or

 

1c. Download the “daily” MSVC toolset build (and test it against your whole codebase).

 

2. Compile with /std:c++17 or /std:c++latest (at this moment they enable identical features and are distinguishable only via a macro, but they will diverge when we begin implementing C++20).

 

3. Report toolset bugs. We’re trying really hard to release new features in a solid state, limited only by ambiguity in the Standard itself, but C++ is complicated and we aren’t perfect, so there will be bugs.

 

4. Update your codebase to avoid removed and deprecated features, and react to other source breaking changes as new features are implemented. (For example, every time the STL introduces a new function like std::clamp() or std::reduce(), any codebases with “using namespace std;” directives and their own clamp/reduce/etc. identifiers can be broken.)
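To make the clamp/reduce scenario concrete, here is a small hedged sketch (the names clamped and the helper clamp are illustrative, not from the post). With a using-directive in place, an unqualified call to clamp can find both the codebase's own function and the new std::clamp; depending on the signatures, that is either an ambiguity error or a silent change of meaning, so qualifying the call is the safe fix:

```cpp
#include <algorithm> // C++17 adds std::clamp here

using namespace std; // risky: every name a new Standard adds to std can now collide

// A pre-C++17 codebase's own helper. Once <algorithm> provides std::clamp,
// an unqualified call clamp(v, 0, 10) can find both functions; qualifying
// the call (::clamp) or removing the using-directive resolves it.
int clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

int clamped(int v) { return ::clamp(v, 0, 10); } // explicit qualification
```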

 

5. (Important!) It’s very likely that you’ll encounter source breaking changes in third-party libraries that you can’t modify (easily or at all). We try to provide escape hatches so you can restore removed features or silence deprecation warnings and get on with your work, but first, please report such issues to the relevant library maintainers. By helping them update their code, you’ll help many more C++ programmers just like you.

 

In the last couple of years, the Visual C++ team has started to build and test many open-source projects and libraries with our development toolsets and options like /std:c++17. We’re finding and reporting breaking changes ourselves, but we can’t build everything, so we could use your help.

 

Our Strategy For Deprecation And Removal

 

* In C++14 mode (the default), we warn about non-Standard machinery (e.g. std::tr1). These warnings can be silenced in a fine-grained manner.

 

* In C++17 mode, we remove non-Standard machinery (e.g. std::tr1). This machinery can be restored in a fine-grained manner. (It will then emit the deprecation warning, unless silenced.)

 

* In the STL’s next major binary-incompatible version (internally named “WCFB02”), we’ve permanently removed this non-Standard machinery (e.g. std::tr1).

 

* In C++14 mode (the default), we currently don’t warn about features that were deprecated in C++14 (e.g. auto_ptr, which was first deprecated in C++11), nor do we warn about features that were removed in C++17 (e.g. auto_ptr again, or std::function allocator support which was removed without being deprecated first). We reserve the right to add such warnings in the future, but we’re unlikely to do so.

 

* In C++17 mode, we remove features that were removed in the C++17 Standard (e.g. auto_ptr). They can be restored in a somewhat fine-grained manner, for now. Ideally, they will be permanently removed at some point in the future (e.g. first the default mode will switch from C++14 to C++17, then someday C++14 mode will be dropped entirely – at that point, legacy C++14-but-not-17 features like auto_ptr should be dropped entirely too).

 

* In C++17 mode, we warn about all Library features that were deprecated in the C++17 Standard (including features that were deprecated in previous Standards, like <strstream>), with one exception (D.5 [depr.c.headers] deprecates the <stdio.h> family, but we’re not going to warn about that). These C++17 deprecation warnings can be silenced in a fine-grained manner (basically, each section of Annex D can be independently silenced), or in a coarse-grained manner (silencing all C++17 deprecation warnings, but not other deprecation warnings).

 

* We expect to repeat this pattern for C++20 and beyond.

 

C++17 Feature Removals – Technical Details

 

* N4190 “Removing auto_ptr, random_shuffle(), And Old <functional> Stuff”

 

Implemented in VS 2017 15.3 (and earlier). Restored by defining _HAS_AUTO_PTR_ETC to 1 (hence “somewhat fine-grained” above).

 

auto_ptr was superseded by unique_ptr.
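A minimal sketch of the migration (the function name make_answer is illustrative): where auto_ptr's copy constructor silently transferred ownership, unique_ptr makes every transfer an explicit move.

```cpp
#include <memory>

// Old (removed in C++17 mode): std::auto_ptr<int> p(new int(42));
// New: unique ownership, transferred only by move.
std::unique_ptr<int> make_answer() {
    return std::make_unique<int>(42); // ownership moves out to the caller
}
```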

 

unary_function and binary_function were typically unnecessary. In the C++98/03 era, many user-defined function object classes derived from these base classes in an attempt to imitate STL conventions. However, STL containers and algorithms have never required such inheritance (or the typedefs that they provide). Only the function object “adaptors” (like bind1st()) needed such typedefs. Therefore, if you have classes deriving from unary_function or binary_function, you can probably eliminate the inheritance. Otherwise, you can provide the typedefs directly.
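As a sketch of "eliminate the inheritance" (IsOdd and count_odd are hypothetical names), the functor works with STL algorithms exactly as before once the base class is dropped:

```cpp
#include <algorithm>
#include <vector>

// Old: struct IsOdd : std::unary_function<int, bool> { ... }; // base removed in C++17
// New: just drop the base class; algorithms never required it.
struct IsOdd {
    bool operator()(int x) const { return x % 2 != 0; }
};

int count_odd(const std::vector<int>& v) {
    return static_cast<int>(std::count_if(v.begin(), v.end(), IsOdd{}));
}
```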

 

The binders bind1st() and bind2nd() were superseded by bind() and lambdas.
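A hedged sketch of the binder-to-lambda migration (count_greater_than_10 is an illustrative name); the lambda states the bound argument directly instead of going through bind1st():

```cpp
#include <algorithm>
#include <vector>

// Old (removed): std::count_if(v.begin(), v.end(),
//                              std::bind1st(std::less<int>(), 10)); // 10 < x
// New: a lambda makes the bound value and the comparison explicit.
int count_greater_than_10(const std::vector<int>& v) {
    return static_cast<int>(std::count_if(v.begin(), v.end(),
        [](int x) { return x > 10; }));
}
```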

 

ptr_fun() is no longer necessary at all – modern machinery works with function pointers directly (and STL algorithms always have).

 

The mem_fun() family has been superseded by mem_fn(). Also, anything following the invoke() protocol (like std::function) works with pointers-to-members directly.
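For example, a sketch of adapting a pointer-to-member with std::mem_fn (the lengths helper is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Old (removed): std::transform(..., std::mem_fun_ref(&std::string::size));
// New: std::mem_fn adapts the pointer-to-member for use as a callable.
std::vector<std::size_t> lengths(const std::vector<std::string>& words) {
    std::vector<std::size_t> out(words.size());
    std::transform(words.begin(), words.end(), out.begin(),
                   std::mem_fn(&std::string::size));
    return out;
}
```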

 

random_shuffle() was superseded by shuffle().
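A sketch of the replacement (the shuffled wrapper is illustrative): unlike random_shuffle(), which implicitly depended on rand(), std::shuffle() requires an explicit, seedable random engine.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Old (removed): std::random_shuffle(v.begin(), v.end()); // implicitly used rand()
// New: pass a uniform random bit generator explicitly.
std::vector<int> shuffled(std::vector<int> v, unsigned seed) {
    std::mt19937 gen(seed);
    std::shuffle(v.begin(), v.end(), gen);
    return v;
}
```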

 

* P0004R1 “Removing Deprecated Iostreams Aliases”

 

Implemented in VS 2017 15.3 (and earlier). Restored by defining _HAS_OLD_IOSTREAMS_MEMBERS to 1. Unlikely to be encountered outside of STL test suites.

 

* P0003R5 “Removing Dynamic Exception Specifications”

 

Newly implemented in VS 2017 15.5. The Library part can be restored by defining _HAS_UNEXPECTED to 1.

 

* P0302R1 “Removing Allocator Support In std::function”, LWG 2385 “function::assign allocator argument doesn’t make sense”, LWG 2921 “packaged_task and type-erased allocators”, LWG 2976 “Dangling uses_allocator specialization for packaged_task”

 

Newly implemented in VS 2017 15.5. (LWG 2385 was previously implemented with a different macro.) Restored by defining _HAS_FUNCTION_ALLOCATOR_SUPPORT to 1, although it was neither robustly implemented nor portable to other implementations which didn’t even try (which proved to be the wiser course of action).

 

Non-Standard Feature Deprecations And Removals – Technical Details

 

* The non-Standard std::tr1 namespace and TR1-only machinery

 

Removal in C++17 mode was implemented in VS 2017 15.3 (and earlier). Restored by defining _HAS_TR1_NAMESPACE to 1.

 

Newly deprecated in VS 2017 15.5 with “warning STL4002: The non-Standard std::tr1 namespace and TR1-only machinery are deprecated and will be REMOVED. You can define _SILENCE_TR1_NAMESPACE_DEPRECATION_WARNING to acknowledge that you have received this warning.”

 

* The non-Standard std::identity struct

 

Removal in C++17 mode was implemented in VS 2017 15.3 (and earlier). Restored by defining _HAS_IDENTITY_STRUCT to 1.

 

Newly deprecated in VS 2017 15.5 with “warning STL4003: The non-Standard std::identity struct is deprecated and will be REMOVED. You can define _SILENCE_IDENTITY_STRUCT_DEPRECATION_WARNING to acknowledge that you have received this warning.”

 

* The non-Standard std::tr2::sys namespace

 

Newly deprecated in C++14 mode and removed in C++17 mode in VS 2017 15.5. Restored by defining _HAS_TR2_SYS_NAMESPACE to 1. Emits “warning STL4018: The non-Standard std::tr2::sys namespace is deprecated and will be REMOVED. It is superseded by std::experimental::filesystem. You can define _SILENCE_TR2_SYS_NAMESPACE_DEPRECATION_WARNING to acknowledge that you have received this warning.”

 

C++17 Feature Deprecations – Technical Details

 

These deprecation warnings are newly implemented in VS 2017 15.5. P0174R2 “Deprecating Vestigial Library Parts”, P0521R0 “Deprecating shared_ptr::unique()”, P0618R0 “Deprecating <codecvt>”, and other papers added these sections. (For example, P0005R4 “not_fn()” added a feature and deprecated not1(), not2(), and the result_type family of typedefs. Notably, P0604R0 “invoke_result, is_invocable, is_nothrow_invocable” was implemented in VS 2017 15.3, but its deprecation of result_of is newly implemented in VS 2017 15.5.)

 

As every warning message states, the coarse-grained macro for silencing is _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS . Here are the sections and their associated warning messages, where we tried to be extremely detailed and helpful:

 

D.4 [depr.cpp.headers]: “warning STL4004: <ccomplex>, <cstdalign>, <cstdbool>, and <ctgmath> are deprecated in C++17. You can define _SILENCE_CXX17_C_HEADER_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.6 [depr.str.strstreams]: “warning STL4005: <strstream> is deprecated in C++17. You can define _SILENCE_CXX17_STRSTREAM_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.7 [depr.uncaught]: “warning STL4006: std::uncaught_exception() is deprecated in C++17. It is superseded by std::uncaught_exceptions(), plural. You can define _SILENCE_CXX17_UNCAUGHT_EXCEPTION_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”
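A common use of the plural form, sketched with a hypothetical transaction-style guard (the Transaction name is illustrative): the count, unlike the old bool, stays correct even when a destructor that itself throws-and-catches runs during unwinding.

```cpp
#include <exception>

// Old (deprecated): bool unwinding = std::uncaught_exception();
// New: compare counts to detect unwinding of *this* object reliably.
struct Transaction {
    int exceptions_on_entry = std::uncaught_exceptions();
    bool failed() const {
        // More exceptions in flight now than at construction means we are
        // being destroyed by stack unwinding, so the work should roll back.
        return std::uncaught_exceptions() > exceptions_on_entry;
    }
};
```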

 

D.8.1 [depr.weak.result_type] and D.8.2 [depr.func.adaptor.typedefs]: “warning STL4007: Many result_type typedefs and all argument_type, first_argument_type, and second_argument_type typedefs are deprecated in C++17. You can define _SILENCE_CXX17_ADAPTOR_TYPEDEFS_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.8.3 [depr.negators]: “warning STL4008: std::not1(), std::not2(), std::unary_negate, and std::binary_negate are deprecated in C++17. They are superseded by std::not_fn(). You can define _SILENCE_CXX17_NEGATORS_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”
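A sketch of the migration (IsEven and count_odd are illustrative names): std::not_fn() negates any callable without requiring the argument_type typedef that not1() depended on.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct IsEven {
    bool operator()(int x) const { return x % 2 == 0; }
};

// Old (deprecated): std::count_if(..., std::not1(IsEven{})); // needed argument_type
// New: std::not_fn() works with any callable, no typedefs required.
int count_odd(const std::vector<int>& v) {
    return static_cast<int>(
        std::count_if(v.begin(), v.end(), std::not_fn(IsEven{})));
}
```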

 

D.9 [depr.default.allocator]: “warning STL4009: std::allocator<void> is deprecated in C++17. You can define _SILENCE_CXX17_ALLOCATOR_VOID_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.9 [depr.default.allocator]: “warning STL4010: Various members of std::allocator are deprecated in C++17. Use std::allocator_traits instead of accessing these members directly. You can define _SILENCE_CXX17_OLD_ALLOCATOR_MEMBERS_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”
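A hedged sketch of going through std::allocator_traits instead of the deprecated members (the roundtrip function is illustrative):

```cpp
#include <memory>

// Old (deprecated): alloc.construct(p, value); alloc.destroy(p);
// New: route through allocator_traits, which supplies the defaults.
int roundtrip(int value) {
    std::allocator<int> alloc;
    using Traits = std::allocator_traits<std::allocator<int>>;
    int* p = Traits::allocate(alloc, 1);
    Traits::construct(alloc, p, value); // placement-constructs *p
    int result = *p;
    Traits::destroy(alloc, p);
    Traits::deallocate(alloc, p, 1);
    return result;
}
```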

 

D.10 [depr.storage.iterator]: “warning STL4011: std::raw_storage_iterator is deprecated in C++17. Consider using the std::uninitialized_copy() family of algorithms instead. You can define _SILENCE_CXX17_RAW_STORAGE_ITERATOR_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.11 [depr.temporary.buffer]: “warning STL4012: std::get_temporary_buffer() and std::return_temporary_buffer() are deprecated in C++17. You can define _SILENCE_CXX17_TEMPORARY_BUFFER_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.12 [depr.meta.types]: “warning STL4013: std::is_literal_type and std::is_literal_type_v are deprecated in C++17. You can define _SILENCE_CXX17_IS_LITERAL_TYPE_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.12 [depr.meta.types]: “warning STL4014: std::result_of and std::result_of_t are deprecated in C++17. They are superseded by std::invoke_result and std::invoke_result_t. You can define _SILENCE_CXX17_RESULT_OF_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.13 [depr.iterator.primitives]: “warning STL4015: The std::iterator class template (used as a base class to provide typedefs) is deprecated in C++17. (The <iterator> header is NOT deprecated.) The C++ Standard has never required user-defined iterators to derive from std::iterator. To fix this warning, stop deriving from std::iterator and start providing publicly accessible typedefs named iterator_category, value_type, difference_type, pointer, and reference. Note that value_type is required to be non-const, even for constant iterators. You can define _SILENCE_CXX17_ITERATOR_BASE_CLASS_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.14 [depr.util.smartptr.shared.obs]: “warning STL4016: std::shared_ptr::unique() is deprecated in C++17. You can define _SILENCE_CXX17_SHARED_PTR_UNIQUE_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

D.15 [depr.locale.stdcvt] and D.16 [depr.conversions]: “warning STL4017: std::wbuffer_convert, std::wstring_convert, and the <codecvt> header (containing std::codecvt_mode, std::codecvt_utf8, std::codecvt_utf16, and std::codecvt_utf8_utf16) are deprecated in C++17. (The std::codecvt class template is NOT deprecated.) The C++ Standard doesn’t provide equivalent non-deprecated functionality; consider using MultiByteToWideChar() and WideCharToMultiByte() from <Windows.h> instead. You can define _SILENCE_CXX17_CODECVT_HEADER_DEPRECATION_WARNING or _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS to acknowledge that you have received this warning.”

 

Note that for all of the warning suppression macros, you must define them before any C++ Standard Library header has been included (both <vector> etc. and <cstdlib> etc.). This is because we’ve implemented the deprecation warnings with a system of macros that are initialized when the STL’s central internal header has been dragged in. Therefore, the best place to define the warning suppression macros is on the command line, project-wide, to ensure that they’re set before any headers are included. If you need to define several macros, you can use the /FI (Name Forced Include File) compiler option to force-include a header defining those macros, which will be processed before any include directives in source files.

 

Library Warning Suppression

 

The [[deprecated]] attribute emits compiler warning C4996, which can be given custom text. (As you can see above, we’re now numbering the STL’s warnings, to make them easier to search for.)

 

Note: As C4996 is shared by all deprecation warnings (both Standard deprecations and Microsoft deprecations), you should avoid disabling it globally unless there’s no other choice. For example, silencing “warning C4996: ‘std::copy::_Unchecked_iterators::_Deprecate’: Call to ‘std::copy’ with parameters that may be unsafe – this call relies on the caller to check that the passed values are correct. To disable this warning, use -D_SCL_SECURE_NO_WARNINGS. See documentation on how to use Visual C++ ‘Checked Iterators'” should be done via the fine-grained macro mentioned, and not via /wd4996 passed to the compiler (which would also suppress the C++17 deprecation warnings here).

 

However, library code sometimes needs to do things that would trigger deprecation warnings, even though it shouldn’t really count as a use of deprecated technology. This occurs within the STL itself. For example, allocator_traits needs to ask whether UserAlloc::pointer exists (providing a fallback if it doesn’t exist). It’s possible for UserAlloc to derive from std::allocator which provides a C++17-deprecated “pointer” typedef. While deriving from std::allocator isn’t a great idea, it can be done conformantly. Giving such a derived class to allocator_traits shouldn’t trigger the “std::allocator<T>::pointer is deprecated” warning, because the programmer-user didn’t even mention that typedef.

 

Therefore, when inspecting types for nested typedefs like this, we locally suppress warning C4996, like this:

 

#pragma warning(push)
#pragma warning(disable: 4996)    // was declared deprecated
template<class _Ty>
    struct _Get_pointer_type<_Ty, void_t<typename _Ty::pointer>>
    {    // get _Ty::pointer
    using type = typename _Ty::pointer;
    };
#pragma warning(pop)

 

While this technique should be used sparingly, this is how third-party libraries can avoid triggering deprecation warnings, without requiring programmer-users to silence them throughout their entire projects.

Because it’s Friday: 3-D Animation


We've had 3-D animation for quite a while now, of course, but what happens when a traditional 2-D animator uses a virtual reality system to draw? When famed Disney animator Glen Keane sketches his most iconic creation — Ariel from The Little Mermaid — using Tilt Brush, the result is surprisingly moving.

That's all from us for this week. We'll be back with more on Monday, and in the meantime have a great weekend!

New Git Features in Visual Studio 2017 Update 5

This week we released Visual Studio 2017 Update 5. In this release, we added new Git features which were based on your UserVoice requests to support Git submodules, Git worktrees, fetch --prune, and pull --rebase. To learn more about all of our Git features and what’s new in Visual Studio 2017 Update 5, check out our Git... Read More

The 2017 Christmas List of Best STEM Toys for kids


In 2016 and 2015 I made a list of best Christmas STEM Toys for kids! If I may say so, they are still good lists today, so do check them out. Be aware I use Amazon referral links so I get a little kickback (and you support this blog!) when you use these links. I'll be using the pocket money to...wait for it...buy STEM toys for kids! So thanks in advance!

Here's a Christmas List of things that I've either personally purchased, tried for a time, or borrowed from a friend. These are great toys and products for kids of all genders and people of all ages.

Piper Computer Kit with Minecraft Raspberry Pi edition

The Piper is a little spendy at first glance, but it's EXTREMELY complete and very thoughtfully created. Sure, you can just get a Raspberry Pi and hack on it - but the Piper is not just a Pi. It's a complete kit where your little one builds their own wooden "laptop" box (more of a luggable), and then starting with just a single button, builds up the computer. The Minecraft content isn't just vanilla Minecraft. It's custom episodic content! Custom voice overs, episodes, and challenges.

What's genius about Piper, though, is how the software world interacts with the hardware. For example, at one point you're looking for treasure on a Minecraft beach. The Piper suggests you need a treasure detector, so you learn about wiring and LEDs and wire up a treasure detector LED while it's running. Then you run your Minecraft person around while the LED blinks faster to detect treasure. It's absolute genius. Definitely a favorite in our house for the 8-12 year old set.

Piper Raspberry Pi Kit

Suspend! by Melissa and Doug

Suspend is becoming the new Jenga for my kids. The game doesn't look like much if you judge a book by its cover, but it's addictive and my kids now want to buy a second one to see if they can build even higher. An excellent addition to family game night.

Suspend! by Melissa and Doug

Engino Discovering Stem: Levers, Linkages & Structures Building Kit

I love LEGO but I'm always trying new building kits. Engino is reminiscent of Technic or some of the advanced LEGO elements, but this modestly priced kit is far more focused - even suitable for incorporating into home schooling.

Engino Discovering Stem: Levers, Linkages & Structures Building Kit

Gravity Maze

I've always wanted a 3D Chess Set. Barring that, check out Gravity Maze. It's almost like a physical version of a well-designed iPad game. It includes 60 challenges (levels) that you then add pieces to in order to solve. It gets harder than you'd think, fast! If you like this, also check out Circuit Maze.


Osmo Genius Kit (2017)

Osmo is an iPad add-on that takes the ingenious idea of an adapter that lets your iPad see the tabletop (via a mirror/lens) and then builds on that clever concept with a whole series of games, exercises, and core subject tests. It's best for the under 12 set - I'd say it's ideal for about 6-8 year olds.



Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.
     

A VSTS Release Gate with ServiceNow

Azure Marketplace – New offers in November 2017


We continue to expand the Azure Marketplace ecosystem. In November 2017, 35 new offers went live. See details of these great new offers below:

VeloCloud VeloCloud Virtual Edge: VeloCloud Cloud Delivered SD-WAN assures enterprise and cloud application performance over Internet and hybrid WAN while simplifying deployments and reducing costs.
EDB Postgres

EDB Postgres Ark: EDB Postgres Ark provides operational administrators with control of a Postgres 9.6, 9.5 or 9.4 based DBaaS for their organization while freeing DBAs and developers from the rigors of setting up and managing modern, robust database environments.

IBM WebSphere IBM WebSphere Application Server Base Edition 9.0: IBM WebSphere Application Server offers options for a faster, more flexible Java application server runtime environment.
IBM WebSphere IBM WebSphere Application Server Base Edition 8.0: IBM WebSphere Application Server offers options for a faster, more flexible Java application server runtime environment. WebSphere Application Server offers enhanced features and resiliency for building and running applications, including cloud and mobile.
IBM WebSphere IBM WebSphere MQ 9.0: This is a pre-configured image of IBM® WebSphere MQ, IBM's market-leading messaging integration middleware on the Azure Marketplace. You can install it simply with a few clicks. It’s easy to integrate with your current infrastructure.
IBM WebSphere IBM WebSphere Application Server Base Edition 8.5: IBM WebSphere Application Server offers options for a faster, more flexible Java application server runtime environment. WebSphere Application Server offers enhanced features and resiliency for building and running applications, including cloud and mobile. 
Accops HySecure Accops HySecure: Accops HySecure (formerly Propalms OneGate) is an application access gateway that enables secure access to corporate applications, desktops and network services from any device working from any network.
ComUnity

ComUnity Rapid Digitisation Platform: ComUnity is a one-stop digital services platform that enables businesses and institutions to easily deliver rich, wrap-around applications and services to their clients via any consumer device, including older mobile phones. With the easy application of ComUnity's technology, your customers and stakeholders can experience exceptional digital service.

Aquaforest Aquaforest Searchlight 1.3: Aquaforest's Findability Solutions for SharePoint and Office 365 dramatically improve search success by ensuring that Site Collections are fully text searchable.
System Recovery System Recovery: With System Recovery from Veritas, you can minimize downtime and avoid the impact of disaster by easily recovering in minutes, whether you’re restoring a single file, email, or an entire machine—physical or virtual.
Baffle Baffle Application Data Protection: Baffle Application Data Protection allows your applications to encrypt data stored in on-premises or in-the-cloud databases without code changes. 
RapidMiner RapidMiner Server 8.0 Beta: Real data science, fast and simple. RapidMiner Server makes it easy to share, reuse, and operationalize the models and results created in RM Studio.
Forcepoint Forcepoint Next Generation Firewall: Forcepoint NGFW (next generation firewall) gives you the scalability, protection, and visibility you need to more efficiently manage and protect traffic into and out of your Azure network, as well as among various components of your cloud environment. 
Scality Scality Connect Developer Edition: Scality Connect enables customers to immediately consume Azure Blob Storage with their proven Amazon S3 applications without any application modifications.
Media3

Managed cPanel Server: Media3 provides a fully managed Linux server with the latest version of cPanel installed running Centos 7 for the operating system. 24/7 support with a modern control panel for ease of use assuring an optimal user experience for every customer.

IBM WebSphere IBM WebSphere Application Server Liberty 17.0: Build and deploy awesome applications with the latest version of WebSphere Liberty! Built for developers, yet ready for production, it’s a combination of IBM technology and open source software, with fast startup times (<2 seconds), no server restarts to pick up changes, and a simple XML configuration.
Red Hat Red Hat Enterprise Linux 7: Red Hat Enterprise Linux is a leading enterprise Linux platform built to meet the needs of today's modern enterprise.
Docker Docker EE for Azure (Basic): An integrated, easy-to-deploy environment for building, assembling, and shipping applications.
QuerySurge QuerySurge: QuerySurge is a leading data testing solution for automated validation and testing of Big Data.   
Asianux Asianux Server 7 SP2: The best enterprise Linux OS with many adoption results and high-quality technical supports. 
MyWorkDrive MyWorkDrive Cloud File Server: MyWorkDrive allows you to give your users easy, secure, and remote access to your own Azure based File Server.
AppGate AppGate: AppGate for Azure supports fine-grained, dynamic access control to Azure resources.
Acronis Acronis Storage Gateway: A component of Acronis backup cloud for service providers that enables backup into Azure storage.
McAfee McAfee Advanced Threat Defense (ATD): McAfee Advanced Threat Defense provides in-depth inspection to detect evasive threats. Advanced detection techniques, from sandboxing and full static code analysis to deep learning, pinpoint malicious behavior patterns to convict emerging, difficult-to-detect threats.
MapR MapR Converged Data Platform v6: Companies use MapR to deliver game-changing big data based solutions such as improved fraud prevention, tailored customer experiences, and better healthcare insights.
LightSpeed LightSpeed Management Portal: The management portal of our Internet enabled products.
Grafana Grafana: The leading open source software for analyzing all your time series data.
Informatica Informatica Data Quality 10.1.1: Regardless of whether your data is structured or unstructured, or your data is on-premises or in the cloud, it needs to be trusted. Deliver business value by ensuring that all key initiatives and processes are fueled with relevant, timely, and trustworthy data.
Cloudian Cloudian HyperCloud: HyperCloud for Azure offers a true bimodal multi-cloud converged solution that runs fully virtualized within the Azure cloud environment.
Mesosphere Mesosphere DC/OS on Azure: Mesosphere is the premier platform for modern applications and data services, enabling cloud-independent digital transformation. 
System Recovery System Recovery: Powerful and trusted disaster recovery and data protection across virtual and physical platforms.
BlockApps BlockApps Multinode Blockchain - EnterpriseEdition: Multi-Private Ethereum blockchain with RESTful API, explorer, search, analytics, and developer tools.
Simplygon Simplygon Cloud: Simplygon’s technology exposed through a REST API for use in 3D asset optimization pipelines.
Apache Cassandra Cassandra Cluster: Apache Cassandra is an open source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.
Confluence

Confluence Data Center: Confluence is a content collaboration software that changes how modern teams work.   

November 2017 Leaderboard of Database Systems contributors on MSDN


Congratulations to our November top-10 contributors! Alberto Morillo maintains the first position in the cloud ranking while Visakh Murukesan maintains the top in the All Databases ranking.

leaderboard_Top10_november

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

leaderboard_rules

Share UI Code in any iOS and Android App with .NET Embedding


One of the most exciting announcements during this year’s Connect(); event was the ability to embed .NET libraries into existing iOS (Objective-C/Swift) and Android (Java) applications with .NET Embedding. This is great because you can start to share code between your iOS and Android applications, and you can also share the user interface between your apps when you combine .NET Embedding with Xamarin.Forms Native Forms. This means that you can leverage your existing investments and apps without having to re-write them from scratch to start adding cross-platform logic and UI. In fact, during the Connect(); keynote I showed how I was able to extend the open source Swift-based Kickstarter iOS application with .NET Embedding and Xamarin.Forms Native Forms. You can check out the clip below.

Today, let’s take a deeper look at how we can start sharing our native user interfaces and business logic in an iOS app written in Objective-C and an Android app written in Java using .NET Embedding.

Getting Started

.NET Embedding tooling is supported in Visual Studio 2017 (for Android) and in Visual Studio for Mac (for iOS, Android, and macOS). Since we will be working on an iOS and an Android application, we will be using Visual Studio for Mac, which can be downloaded from visualstudio.com. If you are on Windows, you can still follow along for the Android portion.

.NET Embedding works by compiling a .NET assembly into a native library for the specified operating system. This can be a single .NET Standard Library or platform specific Xamarin.iOS and Xamarin.Android libraries that share code. The latter will enable us to access platform specific capabilities and utilize Xamarin.Forms’ Native Forms capability to share user interface code.

For this sample we will want to create three projects:

  • HelloSharedUI – Portable Class Library
  • HelloSharedUI.iOS – Xamarin.iOS Library
  • HelloSharedUI.Droid – Xamarin.Android Library

Inside of Visual Studio for Mac we will find the multiplatform node in the New Project Dialog, where a Xamarin.Forms class library template exists. This will create a Portable Class Library for our shared code with the Xamarin.Forms NuGet package installed and some sample code. We’ll start with this.

Shared Project

Next, we will want to create the iOS and Android Class Libraries.

Class Library iOS and Android

Then we must ensure that we have installed the Xamarin.Forms NuGet package in the iOS and Android projects and that the shared portable class library's NuGet package is also up to date. If you are new to .NET development, NuGet is package management for .NET projects, similar to CocoaPods and Maven.

Add Xamarin Forms

Finally, we should ensure that our iOS and Android libraries have a reference to the portable class library that has our shared code.

Add Reference

Shared User Interface

The default template gives us a shared user interface with a button that, when clicked, updates its label.

We will use this page for this example, but we can easily add additional pages written in XAML or C# with Xamarin.Forms.


Accessing Shared User Interface

With our shared user interface in place we need a way to get access to it from the iOS and Android libraries. Let’s add a class called UIHelpers.cs to the iOS and Android libraries that will contain helper methods to initialize Xamarin.Forms and display or return the page on each platform.

iOS UIHelpers

On iOS we are able to directly push the page on the navigation stack or display it modally. This is accomplished by creating an instance of MyPage, finding the root view controller, and presenting it:

Android UIHelpers

On Android we also have access to the navigation stack, but it is easier to directly return a Fragment that can then be used anywhere in the Android application:

With our code in place we are now ready to compile the libraries to embed in the native language apps.

Adding .NET Embedding NuGet Packages

The .NET Embedding tooling can be installed directly onto our development machine or added to the project via the Embeddinator-4000 NuGet package. This means we can add the NuGet package to our iOS and Android projects.

After we compile the library we need to run it through the .NET embedding tooling to generate the framework. This can easily be done by adding custom commands that run after the build is successful. These can be added by right clicking on the iOS project and going to options and finding Custom Commands. For this demo we will generate an iOS framework in debug mode with the following command:

For Android we will use a similar command:

Now, when we compile the library, a folder named iosoutput will contain HelloSharedUI.iOS.framework, which we can import into an iOS Objective-C or Swift project, and another folder named androidoutput will contain HelloSharedUI.Droid.aar for Android.

Calling .NET Code from Xcode and Objective-C

The final step for iOS is to import the .framework into Xcode and add it as an embedded binary. Follow the simple steps on our .NET Embedding documentation to import it. I created a simple UIViewController that has a button and a TouchUpInside Action that is all wrapped in a UINavigationController.

On the top of the ViewController.m we need to add an import for our header and then implement the click event to show the embedded page:

Now simply run the application to see the embedded page and C# logic:

iOS Embedding

Adding the Framework to Android Studio

Android goes through a similar process of importing the generated .aar, which is explained on our .NET Embedding documentation page. After we have the .aar imported, it is ready to be used in an application. I generated the default application in Android Studio that displays a single Fragment in an Activity with a FloatingActionButton that has a click listener. Instead of displaying toast when the button is clicked, we can navigate to the shared user interface that we exposed as a Fragment. To get access, we can import our library, instantiate the UIHelpers class, create our fragment, and replace the current view:

Now, let’s go ahead and run the app:

Android Embedding

Embed Everywhere

In just a few minutes we were able to create a shareable native user interface and business logic in XAML and C#, and embed them into native language iOS and Android applications. This is only the start, as .NET Embedding also supports creating embeddable libraries not just for iOS and Android, but also for Objective-C/Swift macOS applications, and even C++ code for Linux-based apps.

Learn More

Be sure to read through our full documentation on .NET Embedding to start leveraging .NET libraries in native language applications. You can grab the source code on my GitHub and also the full Kickstarter app source code that I demoed at Connect(); from my GitHub.

To get started with all of this, for free, head on over to VisualStudio.com and download Visual Studio 2017 or Visual Studio for Mac. You can learn more about .NET Embedding and all of our other announcements by reading our full announcement from the event.

James Montemagno, Principal Program Manager, Mobile Developer Tools

James has been a .NET developer since 2005, working in a wide range of industries including game development, printer software, and web services. Prior to becoming a Principal Program Manager, James was a professional mobile developer and has now been crafting apps since 2011 with Xamarin. In his spare time, he is most likely cycling around Seattle or guzzling gallons of coffee at a local coffee shop. He can be found on Twitter @JamesMontemagno, blogs code regularly on his personal blog http://www.montemagno.com, and co-hosts the weekly development podcast Merge Conflict http://mergeconflict.fm.

Sky’s the limit with Azure, ASP.NET Core, and Visual Studio for Mac


[Hello, we are looking to improve your experience on the Visual Studio Blog. It will be very helpful if you could share your feedback via this short survey that should take less than 2 minutes to fill out. Thanks!]

Cloud services represent a huge leap in functionality, performance, and management simplicity for web apps, APIs, mobile backends, and more. To help you get started with cloud-based development in Visual Studio for Mac, today we’re publishing two new hands-on labs: publishing your ASP.NET Core web app to Azure, and connecting your ASP.NET Core web app to Azure SQL Database.

These two labs will help develop cloud-ready ASP.NET web apps and APIs with Visual Studio for Mac, using an Azure hosted SQL database, and then publish the web app itself to Azure. What’s more, you can try Azure out for free!

Lab 7: Publishing ASP.NET Core websites to Azure

This lab builds on the earlier Getting Started with ASP.NET Core lab, by showing you how to publish the website to Azure using Visual Studio for Mac in just a few steps:

  1. Creating an Azure account
  2. Creating an ASP.NET Core website
  3. Publishing to Azure
  4. Managing your website in Azure

To complete the lab, follow these instructions which will guide you through the process.

VS4Mac ASPNET Core Azure

Lab 8: Using Azure SQL Database in ASP.NET Core web apps

Every website needs a database. It is easy to set up an Azure SQL database to connect and develop locally on Visual Studio for Mac and migrate it to a production instance later. This lab will walk you through getting your first cloud database and ASP.NET Core web app up and running:

  1. Creating an Azure SQL Database
  2. Setting up the ASP.NET Core app
  3. Configuring the SQL Azure Database
  4. Connecting the ASP.NET Core website to Azure SQL

These step-by-step instructions will show you how to set up and connect to the Azure database from ASP.NET Core. The same steps will work for ASP.NET Core web API projects, which you can use as a mobile app back-end.

Visual Studio for Mac, version 7.3

On December 4, we released Visual Studio for Mac, version 7.3, bringing an even better Visual Studio for Mac to you as a free update. This release brings performance and stability enhancements, as well as new features: Visual Studio Test Platform (VSTest) support provides more flexibility in choosing your test frameworks, and automatic iOS app signing reduces the number of manual steps needed to build your app. Check out the full blog post for more details, and be sure to download or update to Visual Studio for Mac, version 7.3 today!

Get Started

Download Visual Studio for Mac today, and visit the VS4Mac labs repo on GitHub to check out the new Azure hands-on labs, as well as the previous ones that help you get started building apps, games, and services for Xamarin mobile, web, and cloud.

Check out the docs for more in-depth information on Visual Studio for Mac features, and let us know what you think of the labs and Visual Studio for Mac in the comments below.

Craig Dunn, Principal Program Manager
@conceptdev

Craig works on the Mobile Developer Tools documentation team, where he enjoys writing cross-platform code for iOS, Android, Mac, and Windows platforms with Visual Studio and Xamarin.

Last week in Azure: OSBA, DevOps and Kubernetes, VM sizes, and more


Whether you followed KubeCon in Austin, or SpringOne Platform in San Francisco, there were several announcements of interest last week – especially if containers are of interest to you. See the links below to learn more about the Open Service Broker for Azure, projects that are bringing DevOps capabilities to Kubernetes and serverless on Azure, and the updated Azure Management Libraries for Java. In addition, several storage-optimized and burstable VM sizes are now available in GA.

Compute

Announcing the Lv2-Series VMs powered by the AMD EPYC™ processor - Lv2-Series VMs are next-generation storage-optimized VMs powered by AMD’s EPYC™ processors to support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O.

Announcing the general availability of B-Series and M-Series - Burstable VM sizes (B-Series) and the largest VM sizes available in Azure (M-Series) are now GA.

cloud-init for RHEL 7.4 and CentOS 7.4 preview - Now you can migrate existing cloud-init Linux configurations to Azure from other environments. cloud-init allows for VM customization during VM provisioning, adding to the existing Azure parameters used to create a VM.

Containers

Connect your applications to Azure with Open Service Broker for Azure - The Open Service Broker API is an industry-wide effort that provides a simple, secure, and standard way to connect applications to services available in the marketplace.

Azure brings new Serverless and DevOps capabilities to the Kubernetes community - Learn about announcements made at KubeCon about more Kubernetes community projects and partnerships that extend what you can do with Kubernetes and Azure.

Java: Manage Azure Container Service (AKS) and more - The latest release of the Azure Management Libraries for Java (v1.4) adds support for Azure Container Service (AKS) and more.

Partners enhance Kubernetes support for Azure and Windows Server Containers - News about two new collaborations with Heptio (bringing Heptio Ark to Azure) and Tigera (Project Calico), as well as some progress we’ve made working with SIG Windows (Windows Server Containers support in Kubernetes 1.9).

Lift, shift, and modernize using containers on Azure Service Fabric - Learn about some of the container orchestration capabilities in Service Fabric along with a peek at what’s coming soon, such as updates to the Service Fabric Explorer UI.

Data

Resumable Online Index Rebuild is generally available for Azure SQL DB - Resume a paused index rebuild operation from where the rebuild operation was paused rather than having to restart the operation at the beginning, all while using only a small amount of log space.

Database Scoped Global Temporary Tables are generally available for Azure SQL DB - Global temporary tables for Azure SQL DB are stored in tempdb and follow the same semantics as global temporary tables for SQL Server; however, they are only scoped to a specific database and are shared among all users’ sessions within that same database.

HDInsight Tools for VSCode supports Azure environments worldwide - You can now connect HDInsight Tools for VSCode to all the Azure environments that host HDInsight services, including government and regional clouds.

Performance best practices for using Azure Database for PostgreSQL - Learn about the categories of performance issues for an application or service using Azure Database for PostgreSQL service and how to resolve them.

#AzureSQLDW cost savings with Autoscaler – part 2 - In this continuation of the post from November, #AzureSQLDW cost savings with optimized for elasticity and Azure Functions – part 1, learn how to get the most from Azure SQL Data Warehouse with an Autoscale solution you can deploy.

Internet of Things (IoT)

Azure IoT Hub Device Provisioning Service is generally available - Azure IoT Hub Device Provisioning Service provides zero-touch device provisioning to Azure IoT Hub, and it brings the scalability of the cloud to what was once a laborious one-at-a-time process.

Microsoft IoT Central delivers low-code way to build IoT solutions fast - Build production-grade IoT applications in hours and not worry about managing all the necessary backend infrastructure or hiring new skill sets to develop the solutions with Microsoft IoT Central.

Other

Deployment strategies defined - An exploration of common deployment strategies, such as Blue/Green Deployment and Canary Deployment.

What’s brewing in Visual Studio Team Services: December 2017 Digest - Buck Hodges provides a comprehensive overview of recent VSTS updates, including Azure DevOps Project and hosted Mac agents for CI/CD pipelines.

Don’t build your cloud home on shaky foundations - Learn the six top design considerations you should consider when laying the foundational components for a structured governance model in Azure.

Bringing hybrid cloud Java and Spring apps to Azure and Azure Stack - At SpringOne Platform we announced improved support for Pivotal Cloud Foundry across Azure and Azure Stack, and unveiled three new products and updates to improve support for Java and Spring on Azure.

Control how your files are cached on Azure CDN using caching rules - Learn how you can control Azure CDN caching behavior by intelligently caching files on CDN edge servers located in various geographic regions with CDN caching rules.

Content

Azure Application Architecture Guide - An overview of the Azure Application Architecture Guide, which the AzureCAT patterns & practices team published to provide a starting point for architects and application developers who are designing applications for the cloud.

Free eBook – The Developer’s Guide to Microsoft Azure now available - A free eBook written by developers for developers to give you the fundamental knowledge of what Azure is all about, what it offers you and your organization, and how to take advantage of it all.

Service Updates

Azure Shows

Azure Location Based Services - Chris Pendleton joins Scott Hanselman to discuss Azure Location Based Services, which is a portfolio of geospatial service APIs natively integrated into Azure that enables developers, enterprises, and ISVs to create location-aware apps and IoT, mobility, logistics, and asset tracking solutions. The portfolio currently comprises services for Map Rendering, Routing, Search, Time Zones, and Traffic.

Open Service Broker for Azure - In this episode, Sean McKenna shows Scott Hanselman the Open Service Broker for Azure, an easy way to connect applications running in platforms like Kubernetes and Cloud Foundry to some of the most popular Azure services, using a standard, multi-cloud API.

Azure Availability Zones - Raj Ganapathy joins Scott Hanselman to discuss the new addition to Azure's resiliency offerings – Availability Zones. Azure Availability Zones are fault-isolated locations within an Azure region, with independent power, network, and cooling, that help protect customers' applications and data from datacenter-level failures.

Azure Security Center, Suspicious processes and JIT access - Corey Sanders, Director of Program Management on the Microsoft Azure Compute team, shares some of the coolest demos from his recent Microsoft Ignite talk to help you manage your infrastructure more easily. In this episode he covers Azure Security Center enhancements, tracking suspicious processes with AI, and Just-In-Time (JIT) access.

The Azure Podcast: Episode 207 – Functions & Serverless – In this special All-UK episode, Russell Young has an in-depth discussion with Christos Matskas, a Senior Azure PFE in the UK, about the growing popularity of serverless computing in Azure using services like Functions and Event Grid.

Personal note on joining the Microsoft Cloud Advocates team

A quick personal note: today is my first day as a member of the Cloud Developer Advocates team at Microsoft! I'll still be blogging about R, speaking at events, and supporting the R community, but now I'll be doing it as a member of a team dedicated to community outreach.

As a bit of background, when I joined Microsoft back in 2015 via the acquisition of Revolution Analytics, I was thrilled to be able to continue my role supporting the R community. Since then, Microsoft as a whole has continued to ramp up its support of open source projects and to interact directly with developers of all stripes (including data scientists!) through various initiatives across the company. (Aside: I knew Microsoft was a big company before I joined, but even then it took me a while to appreciate the scale of the different divisions, groups, and geographies. For me, it was a bit like moving to a new city, in the sense that it takes a while to learn what the neighborhoods are and how to find your way around.)

I learned of the Cloud Developer Advocates group after reading a RedMonk blog post about how some prominent techies (many of whom I've long admired and respected on Twitter) had been recruited into Microsoft to engage directly with developers. So when I learned that there was an entire group at Microsoft devoted to community outreach, and with such a fantastic roster already on board (and still growing!), I knew I had to be a part of it. I'll be working with a dedicated team (including Paige, Seth and Vadim) focused on data science, machine learning, and AI. As I mentioned above, I'll still be covering R, but will also be branching out into some other areas as well. 

Stay tuned for more, and please hit me up on Twitter or via email if you'd like to chat, want to connect at an event, or just let me know how I can help. Let's keep the conversation going!


Visual Studio Code C/C++ extension Dec 2017 update – support for more Linux distros

Happy holidays! Today we’re shipping the December 2017 update to the Visual Studio Code C/C++ extension – our last major update of this year, with out-of-box support for more Linux distros and built-in guidance on how to configure for a better IntelliSense experience. The original blog post, which provides an overview of this extension, has been updated with these changes.

Support for all Linux distros that VS Code runs on

The extension now works on all the Linux distros that VS Code runs on. Specifically, the following two scenarios are now supported:

– The extension no longer requires GLIBC 2.18 as a dependency, which means it now runs natively on Linux distros that don’t come with GLIBC 2.18, such as distros as old as or older than Ubuntu 13.04, Fedora 19, RHEL 7, Debian 7, SLES 11, CentOS 7, and Scientific Linux 7.

– The extension now runs on 32-bit Linux distros.

IntelliSense configuration guidance

If you have seen the following message when opening a folder in VS Code and wondered how includePath can be set up, click the new “Learn More” button in the message to see the guidance on configuring includePath for better IntelliSense results.
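As a rough sketch, include paths for the extension are configured in a c_cpp_properties.json file under the workspace's .vscode folder. The paths below are placeholders for your own system's headers, not values prescribed by the extension:

```json
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceRoot}",
                "/usr/include",
                "/usr/local/include"
            ],
            "intelliSenseMode": "clang-x64"
        }
    ]
}
```

Pointing includePath at the directories containing your project's headers is what allows IntelliSense to resolve #include directives and provide accurate results.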

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out, and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.

Microsoft Azure preview with Azure Availability Zones now open in France

The preview of Microsoft Azure in France is open today to all customers, partners and ISVs worldwide giving them the opportunity to deploy services and test workloads in these latest Azure regions. This is an important step towards offering the Azure cloud platform from our datacenters in France.

The new Azure regions in France are part of our global portfolio of 42 announced regions, which offers the scale needed to bring applications closer to users and customers around the world. We continue to prioritize geographic expansion of Azure to enable higher performance and availability, meet local regulatory requirements, and support customer preferences regarding data location. The new regions will offer the same enterprise-grade reliability and performance as our globally available services, combined with data residency, to support the digital transformation of businesses and organizations in France.

The new France Central region offers Azure Availability Zones, which provide comprehensive native business continuity solutions and the highest availability in the industry, with a 99.99% virtual machine uptime SLA when generally available. Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking for higher availability, increased resiliency, and business continuity. Starting with the preview, customers can architect highly available applications and increase their resiliency to datacenter-level failures by deploying IaaS resources across Availability Zones in France Central. Availability Zones in France Central can be paired with the geographically separated France South region for regional disaster recovery while maintaining data residency requirements.

You can follow these links to sign up for the Azure Preview in France, learn more about the Microsoft Cloud in France, or learn more about Azure Availability Zones.

Microsoft expands scope of Singapore MTCS certification

I am pleased to announce the renewal of the Singapore Multi-Tier Cloud Security (MTCS) Certification Level 3. As part of its commitment to customer satisfaction, Azure has adopted the MTCS standard to meet different cloud user needs for data sensitivity and business criticality. Azure has maintained its MTCS certification for the fourth consecutive year. This year, the scope has increased by 30%, catching up with the latest ISO 27001 scope and covering the latest data storage and analytics services, including Data Lake Store, Data Lake Analytics, SQL Server Stretch Database, Azure Cosmos DB, and Azure Container Service.

Developed by the Infocomm Media Development Authority (IMDA) of Singapore, the MTCS Standard 584:2015 is the world’s first cloud security standard that covers three different tiers of security requirements spanning different service types including PaaS, IaaS and SaaS.  The standard comprises a total of 535 controls closely mapped to ISO 27001 Information Security Management System (ISMS) standard, covering basic security in Level 1, more stringent governance and tenancy controls in Level 2, and reliability and resiliency for high-impact information systems in Level 3.

The MTCS standard seeks to drive cloud adoption across industries by giving clarity around the security service levels of Cloud Service Providers (CSPs) while increasing the level of accountability and transparency from CSPs and encouraging the adoption of sound risk management and security practices. Certification is valid for three years with a yearly surveillance audit to be conducted by an independent third party.

Microsoft relies on its MTCS Level 3 certification to provide the level of assurance and transparency required by verticals such as government, financial services, and healthcare, which have more stringent security and regulatory requirements. Such organizations are encouraged to supplement MTCS Level 3 with their own industry-specific requirements to help mitigate high-impact risks associated with the handling of highly sensitive data.

More information on the services covered, the Azure self-disclosure form and the MTCS certification are available on the Azure Trust Center as well as the IMDA Singapore site.

Azure Monitor: Send monitoring data to an event hub

With Azure Monitor’s diagnostic settings, you can set up your resource-level diagnostic logs and metrics to be streamed to any of three destinations: a storage account, an Event Hubs namespace, or Log Analytics. Sending to an Event Hubs namespace is a convenient way to stream Azure logs from any source into a custom logging solution, a third-party SIEM product, or another logging tool.

Previously, you could only route your resource diagnostic logs to an Event Hubs namespace, in which an event hub was created for each category of data sent. Now, you can optionally specify which event hub within the namespace should be used for a particular diagnostic setting. This is helpful if you are routing multiple types of logs to a single endpoint, for example, a SIEM connector. Rather than having to configure that endpoint to read from multiple event hubs, you can simply route all log types to a single event hub and have your endpoint listen to that one source.

You can try this out today in the Azure Portal by creating or modifying a diagnostic setting and selecting “Stream to an event hub”.

Diagnostics settings

This can also be set up using a Resource Manager template. PowerShell and CLI support will follow in the coming months. Try it out and let us know your thoughts!
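As a sketch of what such a template resource might look like, the fragment below routes a resource's logs to a specific event hub within a namespace. The resource names, API version, and log category shown are illustrative placeholders, not values from this post; check the template reference for your resource type before deploying:

```json
{
  "type": "Microsoft.Insights/diagnosticSettings",
  "apiVersion": "2017-05-01-preview",
  "name": "sendToEventHub",
  "properties": {
    "eventHubAuthorizationRuleId": "[resourceId('Microsoft.EventHub/namespaces/authorizationRules', 'myNamespace', 'RootManageSharedAccessKey')]",
    "eventHubName": "my-diagnostics-hub",
    "logs": [
      { "category": "AuditEvent", "enabled": true }
    ]
  }
}
```

The key addition is the optional eventHubName property: when it is set, all selected log categories are routed to that one event hub rather than to a per-category event hub created in the namespace.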

How cloud speed helps SQL Server DBAs

A few years ago, the Microsoft SQL Server product team introduced a new cloud Platform-as-a-Service (PaaS), Azure SQL Database, which shares the SQL Server code base. Running a cloud-first service required significant changes to the legacy SQL Server engineering model, changes that took years of investment to fully enable. With these engineering model changes came big benefits that positively impacted both Azure SQL Database and SQL Server.

Even if you are a SQL Server database administrator who isn’t using Azure SQL Database today, you’ll still be seeing benefits from Microsoft’s investments in the cloud. This blog post will review how engineering model transformations, driven by cloud requirements, resulted in several improvements in how we build, ship and service SQL Server. 

Features arrive faster

In the earlier days of SQL Server (2005 through 2012), SQL Server had roughly three-year long engineering cycles. For each planned release of SQL Server, a significant amount of planning would go into the up-front design, using a waterfall-like software development process coordinated across different teams. This included the generation of functional specification documentation by program managers, design specifications by developers and automated testing code developed by testers. 

Once SQL Server finally shipped, customers could take years to upgrade or adopt the associated new features. With this legacy engineering model and the extended periods between the original planning and actual customer adoption, it would take years overall to understand if the feature “landed” properly and met the needs of the originally intended scenario. For any new feature being discussed, the SQL Server engineering team had to think several years ahead of the market. 

The development of a Platform-as-a-Service, Azure SQL Database, shifted the SQL Server engineering team’s focus into working within significantly shorter time frames. The SQL Server engineering team made a few key changes in order to make this happen:

  • The build and test loop for SQL Server was automated using thousands of machines to run tests in parallel. Tests are run all the time. This took the build and test process from weeks in the legacy engineering model to hours for the average case in the new model.
  • The SQL Server engineering team realized that the original SQL Server code surface area was very large and thus difficult to deploy in its monolithic state. Therefore, the team looked for ways to break apart the architecture into overall smaller micro-services wherever possible. This change in architecture allowed separate deployments and servicing for each component.
  • Features are now required to be built more incrementally and delivered via monthly Community Technical Previews (CTPs).

These changes allowed us to shorten the release cycle and get features to Azure SQL Database and SQL Server faster than ever before. Most recently, the SQL Server engineering team managed to ship SQL Server 2017 along with new cross-platform support only 15 months after the release of SQL Server 2016. Contrast this with the legacy three to five-year SQL Server shipping cycles of the past. 

Features are tested sooner and require customer validation

Leading up to the general availability of SQL Server 2017, Microsoft provided monthly community technical previews (CTPs) that gave the public early and ongoing access for testing new features. The ability to provide production-quality builds was very much driven by the continuous release process used in Azure SQL Database today. This early access to new features also resulted in early customer feedback that in turn was immediately used by the SQL Server engineering team. The SQL Server engineering team requires testing of features by customers during the engineering cycle in order to measure whether the features are working correctly before they are declared complete.

Features ship when ready

In older versions of SQL Server, some improvements risked being “crammed” into a release just prior to shipment, because missing a shipment window meant the feature might not light up in the product for another three years. Now that the SQL Server engineering team can frequently ship production-quality releases, this is no longer a problem (or temptation). If a feature isn’t ready, it is held back until the issues are addressed, and the feature is then included in a later monthly release.

Feature development is iterative

Planning is still a critical part of the engineering process, but the SQL Server engineering team now does it a bit differently. For a new feature idea, architects, program managers and engineering managers on the team look at a problem space and then explore basic solutions that can be shipped within a reasonable timeframe. Any proposed feature must also have associated key customers to help justify the effort and engineering funding. The SQL Server engineering team then iterates over the customer needs and then releases versions until finished and ready to ship. 

An example of this is the automatic plan correction feature, which is a new automatic tuning feature in SQL Server 2017 that identifies plan regressions and fixes them by applying a previous good plan. This feature was first deployed to Azure SQL Database and had a significant amount of real-world testing by internal customers and opt-in private preview customers. This resulted in a high volume of feedback and several changes before it ultimately shipped in SQL Server 2017 in its current form.
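For readers who want to try it, automatic plan correction is enabled per database with T-SQL. The sketch below assumes Query Store is already enabled (it supplies the plan history the feature relies on); the database name is a placeholder:

```sql
-- Review the plan-regression recommendations the engine has surfaced
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;

-- Opt in to automatic plan correction for the current database
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
```

With FORCE_LAST_GOOD_PLAN on, the engine automatically forces the last known good plan when it detects that a new plan has regressed query performance.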

Feedback from early versions of the Minimum Viable Product (MVP) is used to refine (or reset) what is built. Customer feedback is used throughout the engineering cycle to justify new features and changes to existing features. With more agility, we can release both small and large features that provide incremental value, not just focus on “marketing big box” features.

Friction-free upgrades

Azure SQL Database is upgraded on an ongoing basis. Fixes are streamed over time across branches, and these upgrades can include bug fixes and new improvements. With this continual change, there was a requirement to make automatic upgrades as seamless and friction free as possible. As a result, the SQL Server engineering team stopped deprecating and removing most features, instead moving to a policy of maintaining backward compatibility to allow seamless and silent upgrades. This deprecation policy applies both to SQL Server and Azure SQL Database. 

The SQL Server engineering team seeks to maintain backwards compatibility as a strong goal so that the service can upgrade transparently. If there is a case where compatibility is broken (such as a change required to maintain security in the service), engineering will proactively reach out to impacted customers and find workarounds with them or use the compatibility level to enable customers to test their application sufficiently and opt-in to the new code when ready.

On the subject of compatibility level, any new query execution plan-affecting features and fixes accrue under the next database compatibility level. The intent is to minimize regression risk. For example, the new adaptive query processing feature family introduced in SQL Server 2017 requires compatibility level 140 or higher. Upgrading to SQL Server 2017 maintains your existing user database’s compatibility level until you explicitly decide to change it. If an engine feature isn’t plan-affecting (SQL Graph, for example), we don’t tie it to the compatibility level and it can surface automatically as an available feature without having to be explicitly enabled.
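Changing the compatibility level is an explicit, per-database opt-in. A minimal sketch, with a placeholder database name, might look like this:

```sql
-- Check the current compatibility level of each database
SELECT name, compatibility_level FROM sys.databases;

-- Opt in to SQL Server 2017 plan-affecting behavior (level 140)
-- once the workload has been validated under the new level
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 140;
```

Because the upgrade itself leaves the level untouched, this statement is the point at which you deliberately accept the new plan-affecting features and fixes.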

Additionally, features like automatic plan correction and Query Store can act as a backstop and insurance policy post-upgrade, enabling a quick and efficient way to handle regressions.

Telemetry drives quality

The SQL Server engineering team makes use of telemetry to:

  1. Identify candidate-scenarios for future improvements
  2. Measure feature adoption
  3. Improve the product quality by surfacing issues more quickly

All telemetry data is generalized and scrubbed to protect customer data, and the telemetry is then used to manage the service at scale. With millions of databases in Azure SQL Database and zero operational staff or DBAs, ~600 TB of telemetry is collected per day from Azure SQL Database, helping the SQL Server engineering team run automated alerting and SLA infrastructure.

Additionally, the SQL Server engineering team watches for and investigates all crash dumps coming in across the millions of databases running in Azure SQL Database. 

The servicing learnings from Azure SQL Database directly accrue to SQL Server’s quality in the following ways:

  • The SQL Server engineering team proactively fixes and deploys such fixes into Azure SQL Database, in-market SQL Server versions, and future in-development versions of SQL Server.
  • Fixes get pushed into Cumulative Updates (CUs) of SQL Server at a much more aggressive rate than in the past.

The SQL Server engineering team encourages people to run on the latest CUs in SQL Server 2016 and higher because of these engineering model investments. Customers now get fixes that, in the past, they would have had to request as hotfixes (and then wait for the engineering team to scramble and produce); now such fixes happen proactively and without opening support tickets.

From a SQL Server DBA perspective, keeping up with the latest CUs can help prevent known issues from happening and also help with performance, availability and the overall health of the SQL Server environment.

Removing the need for SP1

Azure SQL Database receives most feature and bug fixes first, with code changes being applied incrementally across millions of databases. The SQL Server engineering team also leverages many parallel machines to run tests faster and more often. The end-result is that regressions and bugs are caught and fixed much earlier in the engineering cycle. These fixes are then rolled into SQL Server via upcoming cumulative updates (CUs).

The extensive testing and production-ready builds led to the announcement of changes described in the Modern Servicing Model blog post. Starting with SQL Server 2017, localized cumulative updates (CUs) are the primary delivery method for fixes, with delivery every month for the first 12 months after release of SQL Server 2017 and then every quarter for the remaining 4 years of the full 5-year mainstream cycle. Because cumulative updates are production-ready and as well-tested as past service pack releases, starting with SQL Server 2017, annual service packs are no longer being published. There is no need to wait until SP1 to upgrade to SQL Server 2017!

Two worlds, one engineering model

Running Azure SQL Database has significantly transformed the SQL Server engineering model and the evolution continues. As the build-and-ship process continues to be streamlined and improved, our goal is to provide continuous value and innovation to our customers both in Azure SQL Database and SQL Server. With cloud innovations enriching SQL Server, we believe SQL Server is the best place to build your next on-premises data tier and application.

If you have feedback to share regarding Microsoft’s engineering model or the improvements described here, we would like to hear from you. To contact the SQL Server engineering team with feedback or comments on this subject, please email: SQLDBArchitects@microsoft.com.
