
Azure SQL Database and Azure Database for MySQL at PASS SUMMIT!


This blog post was authored by Debbi Lyons, Senior Product Marketing Manager, Azure Marketing.

Get inspired by our industry-leading keynote speakers on the Microsoft data platform at PASS Summit 2018! There has never been a more exciting time for data professionals and developers as more organizations turn to data-driven insights to stay ahead and prepare for the future. For those wanting to know more about Azure SQL Database and migrating your SQL Server databases without changing your apps, we've got some great sessions for you.

Azure SQL Database is the intelligent, fully managed relational cloud database service that provides the broadest SQL Server engine compatibility. Learn about managed instances to accelerate app development and simplify maintenance using the SQL tools you love to use. 

Azure Database for MySQL provides a fully managed, enterprise-ready community MySQL database as a service. The MySQL Community edition helps you easily lift and shift to the cloud, using languages and frameworks of your choice. Azure Database for MySQL enables popular open source frameworks and languages, and it features tight integration with Azure Web Apps.

Below are three must-attend sessions!

Introducing Azure SQL Database Hyperscale

In this session, we will talk about our newest offerings in Azure SQL Database that will provide more flexibility and scale for both Compute and Storage for your database workloads.

Date: November 7, 2018

Time: 10:15 AM - 11:30 AM

Speakers: Ajay Jagannathan, Kevin Farlee, Lindsay Allen, and Xiaochen Wu

Azure SQL DB Managed Instances - Built to Easily Modernize Application Data Layer

Managed Instances are the latest fully managed deployment model for Azure SQL Database that enables friction-free migration for SQL Server applications running on-premises. This is done by providing almost 100 percent surface area compatibility with SQL Server through supporting features such as cross-DB queries and transactions, CLR, SQL Agent, Transactional Replication, Change Data Capture, Service Broker, and more. Come and learn why Managed Instance is a great PaaS destination for all SQL Server workloads and how to start your cloud modernization at scale now, using Azure SQL Database Managed Instances.

Date: November 8, 2018

Time: 10:45 AM - 12:00 PM

Speaker: Borko Novakovic, Senior Program Manager, Microsoft

OSS Databases on Azure

Azure SQL Database is a much-loved database for developers. PostgreSQL is emerging and quickly gaining ground among developers as well, and was voted the most loved database by DB-Engines for 2017. Azure now offers developers the choice of SQL, PostgreSQL, MySQL, and MariaDB, delivered as a fully managed service. Because these services are built on the same platform as Azure SQL Database, we can give database developers and DBAs a consistent managed service experience. Come learn how you can extend your SQL expertise to these new OSS database engines.

Date: November 9, 2018

Time: 9:30 AM - 10:45 AM

Speaker: Charles Christian, Principal Group Program Manager, Microsoft

Check out the full schedule of Conference Sessions to get a taste of what you won’t want to miss at PASS Summit 2018.


Announcing .NET Standard 2.1


Since we shipped .NET Standard 2.0 about a year ago, we’ve shipped two updates to .NET Core 2.1 and are about to release .NET Core 2.2. It’s time to update the standard to include some of the new concepts as well as a number of small improvements that make your life easier across the various implementations of .NET.

Keep reading to learn more about what’s new in this latest release, what you need to know about platform support, governance and coding.

What’s new in .NET Standard 2.1?

In total, about 3k APIs are planned to be added in .NET Standard 2.1. A good chunk of them are brand-new APIs while others are existing APIs that we added to the standard in order to converge the .NET implementations even further.

Here are the highlights:

  • Span<T>. In .NET Core 2.1 we’ve added Span<T> which is an array-like type that allows representing managed and unmanaged memory in a uniform way and supports slicing without copying. It’s at the heart of most performance-related improvements in .NET Core 2.1. Since it allows managing buffers in a more efficient way, it can help in reducing allocations and copying. We consider Span<T> to be a very fundamental type as it requires runtime and compiler support in order to be fully leveraged. If you want to learn more about this type, make sure to read Stephen Toub’s excellent article on Span<T>.
  • Foundational APIs working with spans. While Span<T> is available as a .NET Standard compatible NuGet package (System.Memory) already, adding this package cannot extend the members of .NET Standard types that deal with spans. For example, .NET Core 2.1 added many APIs that allow working with spans, such as Stream.Read(Span<Byte>). Part of the value proposition of adding span to .NET Standard is to add these companion APIs as well.
  • Reflection emit. To boost productivity, the .NET ecosystem has always made heavy use of dynamic features such as reflection and reflection emit. Emit is often used as a tool to optimize performance as well as a way to generate types on the fly for proxying interfaces. As a result, many of you asked for reflection emit to be included in the .NET Standard. Previously, we’ve tried to provide this via a NuGet package but we discovered that we cannot model such a core technology using a package. With .NET Standard 2.1, you’ll have access to Lightweight Code Generation (LCG) as well as Reflection Emit. Of course, you might run on a runtime that doesn’t support running IL via interpretation or compiling it with a JIT, so we also exposed two new capability APIs that allow you to check for the ability to generate code at all (RuntimeFeature.IsDynamicCodeSupported) as well as whether the generated code is interpreted or compiled (RuntimeFeature.IsDynamicCodeCompiled). This will make it much easier to write libraries that can exploit these capabilities in a portable fashion.
  • SIMD. .NET Framework and .NET Core have had support for SIMD for a while now. We've leveraged it to speed up basic operations in the BCL, such as string comparisons. We've received quite a few requests to expose these APIs in .NET Standard as the functionality requires runtime support and thus cannot be provided meaningfully as a NuGet package.
  • ValueTask and ValueTask<T>. In .NET Core 2.1, the biggest feature was improvements in our fundamentals to support high-performance scenarios, which also included making async/await more efficient. ValueTask<T> already exists and allows returning results if the operation completed synchronously without having to allocate a new Task<T>. With .NET Core 2.1 we've improved this further, which made it useful to have a corresponding non-generic ValueTask that allows reducing allocations even for cases where the operation has to be completed asynchronously, a feature that types like Socket and NetworkStream now utilize. Exposing these APIs in .NET Standard 2.1 enables library authors to benefit from these improvements both as consumers and as producers.
  • DbProviderFactories. In .NET Standard 2.0 we added almost all of the primitives in ADO.NET to allow O/R mappers and database implementers to communicate. Unfortunately, DbProviderFactories didn’t make the cut for 2.0 so we’re adding it now. In a nutshell, DbProviderFactories allows libraries and applications to utilize a specific ADO.NET provider without knowing any of its specific types at compile time, by selecting among registered DbProviderFactory instances based on a name, which can be read from, for example, configuration settings.
  • General Goodness. Since .NET Core was open sourced, we’ve added many small features across the base class libraries such as System.HashCode for combining hash codes or new overloads on System.String. There are about 800 new members in .NET Core and virtually all of them got added in .NET Standard 2.1.

For more details, you might want to check out the full API diff between .NET Standard 2.1 and .NET Standard 2.0. You can also use apisof.net to quickly check whether a given API will be included with .NET Standard 2.1.

.NET platform support

In case you missed our Update on .NET Core 3.0 and .NET Framework 4.8, we’ve described our support for .NET Framework and .NET Core as follows:

.NET Framework is the implementation of .NET that’s installed on over one billion machines and thus needs to remain as compatible as possible. Because of this, it moves at a slower pace than .NET Core. Even security and bug fixes can cause breaks in applications because applications depend on the previous behavior. We will make sure that .NET Framework always supports the latest networking protocols, security standards, and Windows features.

.NET Core is the open source, cross-platform, and fast-moving version of .NET. Because of its side-by-side nature it can take changes that we can't risk applying back to .NET Framework. This means that .NET Core will get new APIs and language features over time that .NET Framework cannot. At Build we showed a demo of how the file APIs are faster on .NET Core. If we put those same changes into .NET Framework we could break existing applications, and we don't want to do that.

Given many of the API additions in .NET Standard 2.1 require runtime changes in order to be meaningful, .NET Framework 4.8 will remain on .NET Standard 2.0 rather than implement .NET Standard 2.1. .NET Core 3.0 as well as upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1.

Library authors who need to support .NET Framework customers should stay on .NET Standard 2.0. In fact, most libraries should be able to stay on .NET Standard 2.0, as the API additions are largely for advanced scenarios. However, this doesn’t mean that library authors cannot take advantage of these APIs even if they have to support .NET Framework. In those cases they can use multi-targeting to compile for both .NET Standard 2.0 as well as .NET Standard 2.1. This allows writing code that can expose more features or provide a more efficient implementation on runtimes that support .NET Standard 2.1 while not giving up on the bigger reach that .NET Standard 2.0 offers.

For more recommendations on targeting, check out the brand new documentation on cross-platform targeting.

Governance model

The .NET Standard 1.x and 2.0 releases focused on exposing existing concepts. The bulk of the work was on the .NET Core side, as this platform started with a much smaller API set. Moving forward, we’ll often have to standardize brand-new technologies, which means we need to consider the impact on all .NET implementations, not just .NET Core, and including those managed in other communities such as Mono or Unity. Our governance model has been updated to best include all considerations, including:

A .NET Standard review board. To ensure we don't end up adding large chunks of API surface that cannot be implemented, a review board will sign off on API additions to the .NET Standard. The board comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation, and will be chaired by Miguel de Icaza. We will continue to strive to make decisions based on consensus, and will leverage Miguel's extensive expertise and experience building .NET implementations that are supported by multiple parties when needed.

A formal approval process. The .NET Standard 1.x and 2.0 versions were largely mechanically derived by computing which APIs existing .NET implementations had in common, which means the API sets were effectively a computational outcome. Moving forward, we are implementing an editorial approach:

  • Anyone can submit proposals for API additions to the .NET Standard.
  • New members on standardized types are automatically considered. To prevent accidental fragmentation, we'll automatically consider all members added by any .NET implementation on types that are already in the standard. The rationale here is that divergence at the member level is not desirable, and unless there is something wrong with the API, it's likely a good addition.
  • Acceptance requires:
    • A sponsorship from a review board member. That person will be assigned the issue and is expected to shepherd the issue until it’s either accepted or rejected. If no board member is willing to sponsor the proposal, it will be considered rejected.
    • A stable implementation in at least one .NET implementation. The implementation must be licensed under an open source license that is compatible with MIT. This will allow other .NET implementations to jump-start their own implementations or simply take the feature as-is.
  • .NET Standard updates are planned and will generally follow a set of themes. We avoid releases with a large number of tiny features that aren't part of a common set of scenarios. Instead, we try to define a set of goals that describe what kind of feature areas a particular .NET Standard version provides. This simplifies answering the question of which .NET Standard a given library should depend on. It also makes it easier for .NET implementations to decide whether it's worth implementing a higher version of .NET Standard.
  • The version number is subject to discussion and is generally a function of how significant the new version is. While we aren’t planning on making breaking changes, we’ll rev the major version if the new version adds large chunks of APIs (like when we doubled the number of APIs in .NET Standard 2.0) or has sizable changes in the overall developer experience (like the added compatibility mode for consuming .NET Framework libraries we added in .NET Standard 2.0).

For more information, take a look at the .NET Standard governance model and the .NET Standard review board.

Summary

The definition of .NET Standard 2.1 is ongoing. You can watch our progress on GitHub and still file requests.

If you want to quickly check whether a specific API is in .NET Standard (or any other .NET platform), you can use apisof.net. You can also use the .NET Portability Analyzer to check whether an existing project or binary can be ported to .NET Standard 2.1.

Happy coding!

Security fixes for Team Foundation Server

Today, we are releasing a fix for a potential cross site scripting (XSS) vulnerability. This impacts Team Foundation Server 2017 and 2018. We have released patches for TFS 2017 Update 3.1, TFS 2018 Update 1.1, and TFS 2018 Update 3. We have also released TFS 2018 Update 3.1, which is a full install that includes... Read More

Run your LOB applications with PostgreSQL powered by the plv8 extension


We are extremely excited to share that the plv8 extension for PostgreSQL is now enabled in all generally available regions of Microsoft Azure Database for PostgreSQL. The plv8 extension was one of the most highly requested UserVoice asks from our growing customer base and the PostgreSQL community. It is a popular community extension that unlocks new scenarios and possibilities by enabling developers to write functions in JavaScript that can be called from SQL.

PostgreSQL is an established open source database with strong native JSON capabilities, and the plv8 extension further enhances it by integrating the JavaScript V8 engine with SQL. The Marten library is one such library; it uses the plv8 extension to let developers leverage PostgreSQL as a NoSQL document store or event store. Using PostgreSQL as a document database opens new possibilities for designing and developing retail cart applications, marketplace solutions, IoT event processing, and LOB applications.

Enterprises, small and medium businesses, as well as ISVs can now accelerate the development and deployments of their LOB applications on the managed Azure Database for PostgreSQL service. This helps shorten the time to market.

Let us see an example of how one can use the plv8 extension with Azure Database for PostgreSQL. In this example, we use xTuple, an open source ERP + CRM web application powered by a PostgreSQL database with the plv8 extension. The xTuple platform, with the free PostBooks edition, provides a great starting point for business software in practically any industry. For your business, you can scale up and further add the functionality of the commercial versions. xTuple's solution on GitHub provides an entire stack to build on top of, with many of the business objects you may want, such as invoices, currencies, tasks, and contacts, already implemented.

xTuple has an extensive step-by-step guide for installing and configuring PostgreSQL for running the xTuple PostBooks desktop client. With the managed Azure Database for PostgreSQL service, you can get up and running with the xTuple ERP + CRM platform in a few minutes.

Initializing PostgreSQL for xTuple

If you are not already familiar, use our QuickStart tutorial to provision a managed PostgreSQL server using the Azure portal or the Azure CLI.  After the server is provisioned, connect to the server using “pgadmin” or “psql” with the server admin user role and initialize PostgreSQL for xTuple software as shown below.

CREATE ROLE xtrole WITH NOLOGIN;
GRANT azure_pg_admin TO xtrole;
CREATE ROLE admin WITH PASSWORD 'admin'
                        NOSUPERUSER
                        CREATEDB
                        CREATEROLE
                        LOGIN
                        IN ROLE xtrole;

Disconnect from the server and then reconnect using the admin user created above to create a database for the xTuple application using UTF8 encoding, as shown below.

CREATE DATABASE xtupledb
     WITH
     OWNER = admin
     ENCODING = 'UTF8'
     LC_COLLATE = 'English_United States.1252'
     LC_CTYPE = 'English_United States.1252'
     CONNECTION LIMIT = -1;

Enabling the plv8 extension on Azure Database for PostgreSQL

Next, we enable the plv8 extension for the database and set the plv8.start_proc variable as shown below.

CREATE EXTENSION plv8;
ALTER DATABASE xtupledb  SET "plv8.start_proc" TO "xt.js_init";

Disconnect and then reconnect to establish a new session using the admin user, and verify the value of the plv8.start_proc parameter. This step is required because a value set with the “ALTER DATABASE SET” command only takes effect in subsequent new sessions.

SELECT plv8_version();
Show plv8.start_proc;

The database xtupledb is now ready to power the xTuple ERP solution. You can follow the guide to install the PostBooks database. After the database is restored, you are ready to install and browse the xTuple desktop app.

Please note that while restoring the database you might see some error messages or warnings, which can be safely ignored as noted in the xTuple documentation.

xTuple log in

You are now ready for business with an open source xTuple ERP solution running on the fully managed Azure Database for PostgreSQL.

Practice Database

We encourage you to try the plv8 extension in Azure Database for PostgreSQL to unlock new scenarios and the NoSQL capabilities of PostgreSQL. Get started and create your PostgreSQL servers today!

Learn more about Azure Database for PostgreSQL in the overview and supported extensions.

Please continue to provide feedback on the features and functionality that you want to see next. If you need any help or have questions, please check out the Azure Database for PostgreSQL documentation. Follow us on Twitter @AzureDBPostgreSQL for the latest news and announcements.

Acknowledgments

Special thanks to Ned Lilly and Perry Clark from xTuple, as well as Sunil Kamath, Rachel Agyemang, and Jim Toland for their contributions to this posting.

Microsoft Azure portal November 2018 update


This post was co-authored by Leon Welicki, Principal Group PM Manager, Microsoft Azure.

In October 2018, we started a monthly blog series to help you find everything that is new in the Microsoft Azure portal and the Azure mobile app in one place. We are constantly working to make it easier for you to manage your Azure environment, and we want you to be able to stay up to speed with everything that’s new. You’ll always find the most recent version of this blog at http://aka.ms/AzurePortalUpdates, so be sure you add it to your favorites and come back every month.

This month, we're introducing a new way for you to switch between different Azure accounts without having to log off and log in again, or work with multiple browser tabs. We've also made enhancements to the way you find what you need in the Azure Marketplace, as well as to the management experience for Site Recovery, access control, and database services.

Sign in to the Azure portal now and see everything that’s new. Download the Azure mobile app.

Here’s the list of November updates to the Azure portal:

• Portal shell and UI
• Azure Marketplace
• Security
• Management tools
• Databases
• Others

Let's look at each of these updates in detail.

      Portal shell and UI

      Improved account switching now generally available

      Using more than one account is a very common scenario in Azure. Sometimes you have a work account and a personal one, or you may be working for multiple companies.

At Ignite, we announced a new way for you to switch between different Azure accounts in the Azure portal. Instead of logging off, closing your browser, and starting over, you can simply select your account card in the top-right corner of the portal and select “Sign in with a different account.” Once you sign in, your account is added to your account card and you can toggle between accounts seamlessly. To learn more about this announcement, please see “Announcing Azure user experience improvements at Ignite 2018.”

      We’re happy to reveal that improved account switching is now generally available to all Azure users.

      Account switching now generally available

      Security

      Updated experience for access control (IAM)

Controlling access to Azure resources using role-based access control (RBAC) is one of the most common tasks performed in Azure, and the experience for managing access is consistent across the Azure portal for different service types. We've updated the Access control (IAM) blade in the portal with a new tab-based interface to improve performance and to help you complete important tasks, such as checking a user's access, more quickly. Here's everything that's changing in the access control (IAM) blade:

      • Improved performance of the access control (IAM) blade.
      • A check access feature to quickly view role assignments for a single user, group, service principal, or managed identity.
      • Tiles that link to common tasks.
      • A deny assignments tab to view any relevant deny assignments. Deny assignments are read-only and can only be set by Azure.

      The new access control (IAM) blade

      To see the new access control (IAM) blade:

      1. Select all services and select the scope or resource you want to view or manage. For example, you can select Management groups, Subscriptions, Resource groups, or any resource.
      2. In the resource blade, select Access control (IAM) from the menu.

      Azure Marketplace

      New filters in Azure Marketplace

      We’ve added filters to the browse experience in Azure Marketplace to help you easily find the services you need. You can now filter the marketplace list by pricing, operating system, and publisher.

      New filters now available in Azure Marketplace

      To find the new filters:

      1. Select Create a Resource.
      2. Select See all, next to “Azure Marketplace.”

      Management tools

      Improved Recovery Services vault dashboard

      Recently, we announced the capability to configure cross-subscription disaster recovery for Azure Virtual Machines (VMs) and to replicate VMs with disk encryption from one Azure region to another. For more details, refer to the documentation.

      As part of improving the user experience and performance, we’ve improved the Recovery Services vault dashboard with a consolidated operations menu (a), an overview section with details about recent announcements (b), and dedicated tabs for Backup and Site Recovery (c). You can also refresh the view with the latest data on demand using the “Refresh” option on the top (d).

      Improved experience for managing Recovery Services vault

      In replicated item view, you can view the latest recovery points by selecting “Latest recovery points."

Easily see the latest recovery points.

      You can now also edit “Compute and Network” properties.

      New “edit” capability.

      To see the improved dashboard, open an existing Recovery Services vault, or create a new one:

      1. Select All services above the “favorites” menu.
      2. In the “All services” box, type Recovery Services vault and select it when shown.
      3. Select Add and follow the on-screen instructions.

      Databases

      Redesigned SQL Data Warehouse overview blade

      We have redesigned the SQL Data Warehouse (DW) overview blade to provide an at-a-glance understanding of the status of your data warehouse. In the overview blade, you can now see the Data Warehouse Units (DWU) usage over the last hour, the features available in SQL DW and whether they are configured or not, and common tasks for managing your data warehouse. You can select any of these tiles in overview to be taken to the full details and settings.

      The new SQL Data Warehouse overview blade

      To see the improved overview blade, open an existing SQL Data Warehouse, or create a new one:

      1. Select All services above the “favorites” menu.
      2. In the “All services” box, type SQL Data Warehouses and select it when shown.
      3. Select Add and follow the on-screen instructions.

      Updated experience for SQL logical server creation

      We have added some new flexibility and options to the SQL logical server creation experience. You can now start a trial of Advanced Threat Protection as soon as the server is created, which is recommended to assist with protecting your production databases. Additionally, you can now toggle the option to "allow Azure services to access [this] server," which is enabled by default. This setting is recommended if you use the SQL query editor or want to connect your app or VM to your databases.

      New options to create SQL logical servers.

      To see the updated experience, create a new SQL server:

      1. Select All services above the Favorites menu.
      2. In the All services box, type SQL servers and select it when it is shown.
      3. Select Add.
      4. Provide information for required fields (e.g., Server name). Observe the "Advanced threat protection" setting. Select Start free trial to give it a try.
      5. Select Create.

      New "Allocated space" metric in SQL DB overview

Monitoring allocated space is useful when you have workload patterns where the space allocated in the underlying data files can become larger than the space used by data pages. This occurs because allocated file space is not automatically reclaimed when data is deleted. We have added the "allocated space" metric to the database storage donut chart in the SQL database overview blade, so you can now understand your allocated space and know when to shrink your data files.

      The new chart shows allocated space for SQL databases

      To see the improved overview blade, open an existing SQL database, or create a new one:

      1. Select All services above the Favorites menu.
      2. In the All services box, type SQL databases and select it when shown.
      3. Select Add and follow the on-screen instructions.

      For more information on managing your storage space, refer to the product documentation.

      Others

      Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates as well. You can find the full list of updates to Intune on the What's new in Microsoft Intune page, including changes that affect your experience using Intune.

      Did you know?

The Azure portal offers keyboard shortcuts to help you navigate quickly without having to take your hands off the keyboard. For instance, to go to the Global Search box from your portal dashboard, type G+/. For more shortcuts, read “Keyboard shortcuts in the Azure portal,” watch this 2-minute video, or simply select Keyboard Shortcuts in the Azure portal's help menu.

      Let us know what you think

      As always, thank you for all your great feedback. The Azure portal is built by a large team of engineers who are always interested in hearing from you. If you’re curious to learn more about how the Azure portal is built, be sure to watch the session, “Building a scalable solution to millions of users” that Leon Welicki, Principal Group Program Manager at Azure, delivered at Microsoft Ignite 2018.

Don't forget to sign in to the Azure portal and download the Azure mobile app today to see everything that's new, and let us know your feedback in the comments section or on Twitter. See you next month!

      Exploring Clang Tooling Part 3: Rewriting Code with clang-tidy


      In the previous post in this series, we used clang-query to examine the Abstract Syntax Tree of a simple source code file. Using clang-query, we can prototype an AST Matcher which we can use in a clang-tidy check to refactor code in bulk.

      This time, we will complete the rewriting of the source code.

      Let’s return to MyFirstCheck.cpp we generated earlier and update the registerMatchers method. First we can refactor it to port both function declarations and function calls, using the callExpr() and callee() matchers we used in the previous post:

      void MyFirstCheckCheck::registerMatchers(MatchFinder *Finder) {
          
        auto nonAwesomeFunction = functionDecl(
          unless(matchesName("^::awesome_"))
          );
      
        Finder->addMatcher(
          nonAwesomeFunction.bind("addAwesomePrefix")
          , this);
      
        Finder->addMatcher(
          callExpr(callee(nonAwesomeFunction)).bind("addAwesomePrefix")
          , this);
      }
      

      Because Matchers are really C++ code, we can extract them into variables and compose them into multiple other Matchers, as done here with nonAwesomeFunction.

      In this case, I have narrowed the declaration matcher to match only on function declarations which do not start with awesome_. That matcher is then used once with a binder addAwesomePrefix, then again to specify the callee() of a callExpr(), again binding the relevant expression to the name addAwesomePrefix.

      Because large scale refactoring often involves primarily changing particular expressions, it generally makes sense to separately define the matchers for the declaration to match and the expressions referencing those declarations. In my experience, the matchers for declarations can get complicated for example with exclusions due to limitations of a reflection system, or with more specifics about functions with particular return types or argument types. Centralizing those cases helps keep your refactoring code maintainable.
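For illustration, here is a sketch of what such a centralized declaration matcher might look like. The extra constraints (an int return type and a single parameter) are invented requirements for the sake of the example, not part of the running awesome_ exercise:

void MyFirstCheckCheck::registerMatchers(MatchFinder *Finder) {
  // Hypothetical sketch: keep all the "which declarations qualify" logic in
  // one matcher so the expression matchers can simply reuse it.
  auto prefixCandidateFunction = functionDecl(
      unless(matchesName("^::awesome_")), // skip functions that already have the prefix
      returns(asString("int")),           // invented constraint: only int-returning functions
      parameterCountIs(1));               // invented constraint: only single-parameter functions

  Finder->addMatcher(prefixCandidateFunction.bind("addAwesomePrefix"), this);

  Finder->addMatcher(
      callExpr(callee(prefixCandidateFunction)).bind("addAwesomePrefix"),
      this);
}

Whatever the exclusions are, they live in one place, and the expression matchers pick them up automatically.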

      Another change I have made is that I renamed the binding from x to addAwesomePrefix. This is notable because it uses verbs to describe what should be done with the matches. It should be clear from reading matcher bindings what the result of invoking the fix is to be. Binding names can then be seen as a weakly-typed string-based language interface between the matcher and the replacement code.
      We can then implement MyFirstCheckCheck::check to consume the bindings. A first approximation might look like:

      void MyFirstCheckCheck::check(const MatchFinder::MatchResult &Result) {
        if (const auto MatchedDecl = Result.Nodes.getNodeAs<FunctionDecl>("addAwesomePrefix"))
        {
          diag(MatchedDecl->getLocation(), "function is insufficiently awesome")
            << FixItHint::CreateInsertion(MatchedDecl->getLocation(), "awesome_");
        }
      
        if (const auto MatchedExpr = Result.Nodes.getNodeAs<CallExpr>("addAwesomePrefix"))
        {
          diag(MatchedExpr->getExprLoc(), "code is insufficiently awesome")
            << FixItHint::CreateInsertion(MatchedExpr->getExprLoc(), "awesome_");
        }
      } 
      

      Perhaps a better implementation would reduce the duplication of the diagnostic code:

      void MyFirstCheckCheck::check(const MatchFinder::MatchResult &Result) {
        SourceLocation insertionLocation;
        if (const auto MatchedDecl = Result.Nodes.getNodeAs<FunctionDecl>("addAwesomePrefix"))
        {
          insertionLocation = MatchedDecl->getLocation();
        } else if (const auto MatchedExpr = Result.Nodes.getNodeAs<CallExpr>("addAwesomePrefix"))
        {
          insertionLocation = MatchedExpr->getExprLoc();
        }
        diag(insertionLocation, "code is insufficiently awesome")
            << FixItHint::CreateInsertion(insertionLocation, "awesome_");
      }
      

      Because the FunctionDecl and the CallExpr do not share an inheritance hierarchy, we need separate casting conditions for each. Even if they did share an inheritance hierarchy, we need to call getLocation in one case, and getExprLoc in another. The reason for that is that Clang records many relevant locations for each AST node. The developer of the clang-tidy check needs to know which location accessor method is appropriate or required for each situation.
A further improvement is to change the casts from the concrete types FunctionDecl and CallExpr to the relevant base types, NamedDecl and Expr respectively.

      if (const auto MatchedDecl = Result.Nodes.getNodeAs<NamedDecl>("addAwesomePrefix"))
      {
        insertionLocation = MatchedDecl->getLocation();
      } else if (const auto MatchedExpr = Result.Nodes.getNodeAs<Expr>("addAwesomePrefix"))
      {
        insertionLocation = MatchedExpr->getExprLoc();
      }
      

      This change enforces the idea that the names of bound nodes form a weakly-typed interface between the Matcher code and the Rewriter code. Because the Rewriter code now expects the addAwesomePrefix to be used with the base types NamedDecl and Expr, other Matcher code can take advantage of that. We can now re-use the addAwesomePrefix binding name to add a prefix to field declarations or member expressions for example because their corresponding Clang AST classes also inherit NamedDecl:

      auto nonAwesomeField = fieldDecl(unless(hasName("::awesome_")));
      Finder->addMatcher(
        nonAwesomeField.bind("addAwesomePrefix")
        , this);
      
      Finder->addMatcher(
        memberExpr(member(nonAwesomeField)).bind("addAwesomePrefix")
        , this);
      

      Notice that this code is comparable to the matchers we wrote for the functionDecl/callExpr pairing. Taking advantage of the binding name interface, we can continue extending our matcher code to port variable declarations without changing the rewriter side of that interface:

      void MyFirstCheckCheck::registerMatchers(MatchFinder *Finder) {
        
        auto nonAwesome = namedDecl(
          unless(matchesName("::awesome_.*"))
          );
      
        auto nonAwesomeFunction = functionDecl(nonAwesome);
        // void foo(); 
        Finder->addMatcher(
          nonAwesomeFunction.bind("addAwesomePrefix")
          , this);
      
        // foo();
        Finder->addMatcher(
          callExpr(callee(nonAwesomeFunction)).bind("addAwesomePrefix")
          , this);
      
        auto nonAwesomeVar = varDecl(nonAwesome);
        // int foo;
        Finder->addMatcher(
          nonAwesomeVar.bind("addAwesomePrefix")
          , this);
      
        // foo = 7;
        Finder->addMatcher(
          declRefExpr(to(nonAwesomeVar)).bind("addAwesomePrefix")
          , this);
      
        auto nonAwesomeField = fieldDecl(nonAwesome);
        // int m_foo;
        Finder->addMatcher(
          nonAwesomeField.bind("addAwesomePrefix")
          , this);
      
        // m_foo = 42;
        Finder->addMatcher(
          memberExpr(member(nonAwesomeField)).bind("addAwesomePrefix")
          , this);
      }
      

      Location Location Location

      Let’s return to the check implementation and examine it. This method is responsible for implementing the rewriting of the source code as described by the matchers and their bound nodes.
      In this case, we have inserted code at the SourceLocation returned by either getLocation() or getExprLoc() of NamedDecl or Expr respectively. Clang AST classes have many methods returning SourceLocation which refer to various places in the source code related to particular AST nodes.
      For example, the CallExpr has SourceLocation accessors getBeginLoc, getEndLoc and getExprLoc. It is currently difficult to discover how a particular position in the source code relates to a particular SourceLocation accessor.

      clang::VarDecl represents variable declarations in the Clang AST. clang::ParmVarDecl inherits clang::VarDecl and represents parameter declarations. Notice that in all cases, end locations indicate the beginning of the last token, not the end of it. Note also that in the second example below, the source locations of the call used to initialize the variable are not part of the variable. It is necessary to traverse to the initialization expression to access those.
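As a small, hedged illustration of that traversal (the binding name varWithInit is hypothetical and not part of the running example), a check body could reach the initializer's location like this:

if (const auto *Var = Result.Nodes.getNodeAs<VarDecl>("varWithInit"))
{
  // Locations on the VarDecl itself describe the declaration only.
  SourceLocation nameLocation = Var->getLocation(); // start of the variable name token

  // The initializer is a separate expression node; traverse to it to reach
  // locations inside the initialization, such as a call used to initialize
  // the variable.
  if (const Expr *Init = Var->getInit())
  {
    SourceLocation initLocation = Init->getExprLoc();
    diag(initLocation, "location inside the initializer, for illustration");
  }
}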

clang::FunctionDecl represents function declarations in the Clang AST. clang::CXXMethodDecl inherits clang::FunctionDecl and represents method declarations. Note that the location of the return type is not always given by getBeginLoc in C++.

clang::CallExpr represents function calls in the Clang AST. clang::CXXMemberCallExpr inherits clang::CallExpr and represents method calls. Note that when calling free functions (represented by a clang::CallExpr), the getExprLoc and the getBeginLoc will be the same. Always choose the semantically correct location accessor, rather than a location which appears to indicate the correct position.

      It is important to know that locations on AST classes point to the start of tokens in all cases. This can be initially confusing when examining end locations. Sometimes to get to a desired location, it is necessary to use getLocWithOffset() to advance or retreat a SourceLocation. Advancing to the end of a token can be achieved with Lexer::getLocForEndOfToken.
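For example, the following sketch (reusing the addAwesomePrefix binding from earlier, and assuming clang/Lex/Lexer.h is included) computes the location just past the end of a matched declaration's name, rather than guessing a character offset:

if (const auto MatchedDecl = Result.Nodes.getNodeAs<NamedDecl>("addAwesomePrefix"))
{
  SourceLocation nameBegin = MatchedDecl->getLocation(); // start of the name token

  // Advance to the position immediately after the last character of the
  // name token. getLocWithOffset would also work, but only if we already
  // knew the token length.
  SourceLocation afterName = Lexer::getLocForEndOfToken(
      nameBegin, /*Offset=*/0, *Result.SourceManager,
      Result.Context->getLangOpts());

  diag(afterName, "location just past the end of the name, for illustration");
}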

      The source code locations of arguments to the function call are not accessible from the CallExpr, but must be accessed via AST nodes for the arguments themselves.

      // Get the zeroth argument:
      Expr* arg0 = someCallExpr->getArg(0);
      SourceLocation arg0Loc = arg0->getExprLoc();
      

      Every AST node has accessors getBeginLoc and getEndLoc. Expression nodes additionally have a getExprLoc, and declaration nodes have an additional getLocation accessor. More-specific subclasses have more-specific accessors for locations relevant to the C++ construct they represent. Source code locations in Clang are comprehensive, but accessing them can get complex as requirements become more advanced. A future blog post may explore this topic in more detail if there is interest among the readership.

      Once we have acquired the locations we are interested in, we need to insert, remove or replace source code fragments at those locations.

      Let’s return to MyFirstCheck.cpp:

      diag(insertionLocation, "code is insufficiently awesome")
          << FixItHint::CreateInsertion(insertionLocation, "awesome_");
      

      diag is a method on the ClangTidyCheck base class. The purpose of it is to issue diagnostics and messages to the user. It can be called with just a source location and a message, causing a diagnostic to be emitted at the specified location:

      diag(insertionLocation, "code is insufficiently awesome");
      

      Resulting in:

          testfile.cpp:19:5: warning: code is insufficiently awesome [misc-my-first-check]
          int addTwo(int num)
              ^
      

      The diag method returns a DiagnosticsBuilder to which we can stream fix suggestions using FixItHint.
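Besides insertions, replacements can be streamed in the same way. The following fragment is purely illustrative; it reuses the insertionLocation variable from the snippet above and suggests replacing the token at that location with a hypothetical new name:

// CreateReplacement with a SourceRange operates on whole tokens, so this
// suggests replacing the entire token at insertionLocation.
diag(insertionLocation, "name could be more awesome")
    << FixItHint::CreateReplacement(
           SourceRange(insertionLocation, insertionLocation),
           "awesome_replacement");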

      The CreateRemoval method creates a FixIt for removal of a range of source code. At its heart, a SourceRange is just a pair of SourceLocations. If we wanted to remove the awesome_ prefix from functions which have it, we might expect to write something like this:

      void MyFirstCheckCheck::registerMatchers(MatchFinder *Finder) {
        
        Finder->addMatcher(
          functionDecl(
            matchesName("::awesome_.*")
            ).bind("removeAwesomePrefix")
          , this);
      }
      
      void MyFirstCheckCheck::check(const MatchFinder::MatchResult &Result) {
      
        if (const auto MatchedDecl = Result.Nodes.getNodeAs<NamedDecl>("removeAwesomePrefix"))
        {
            auto removalStartLocation = MatchedDecl->getLocation();
            auto removalEndLocation = removalStartLocation.getLocWithOffset(sizeof("awesome_") - 1);
            auto removalRange = SourceRange(removalStartLocation, removalEndLocation);
      
            diag(removalStartLocation, "code is too awesome")
                << FixItHint::CreateRemoval(removalRange);
        }
      }
      

      The matcher part of this code is fine, but when we run clang-tidy, we find that the removal is applied to the entire function name, not only the awesome_ prefix. The problem is that Clang extends the end of the removal range to the end of the token pointed to by the end. This is symmetric with the fact that AST nodes have getEndLoc() methods which point to the start of the last token. Usually, the intent is to remove or replace entire tokens.

      To make a replacement or removal in source code which extends into the middle of a token, we need to indicate that we are replacing a range of characters instead of a range of tokens, using CharSourceRange::getCharRange:

      auto removalRange = CharSourceRange::getCharRange(removalStartLocation, removalEndLocation);
      

      Conclusion

      This concludes the mini-series about writing clang-tidy checks. This series has been an experiment to gauge interest, and there is a lot more content to cover in further posts if there is interest among the readership.

Further posts could cover topics that occur in the real world, such as:

      • Creation of compile databases
      • Creating a stand-alone buildsystem for clang-tidy checks
      • Understanding and exploring source locations
      • Completing more-complex tasks
      • Extending the matcher system with custom matchers
      • Testing refactorings
      • More tips and tricks from the trenches.

      This would cover everything you need to know in order to quickly and effectively create and use custom refactoring tools on your codebase.

Do you want to see more? Let us know in the comments below or contact the author directly via e-mail at stkelly@microsoft.com, or on Twitter @steveire.

      I will be showing even more new and future developments in clang-query and clang-tidy at code::dive tomorrow, including many of the items listed as future topics above. Make sure to schedule it in your calendar if you are attending code::dive!

      Mission critical performance with Ultra SSD for SQL Server on Azure VM


      This blog post was authored by Mine Tokus, Senior Program Manager, COGS Data - SQL DB.

      We recently published “Storage Configuration Guidelines for SQL Server on Azure VM” on the SQL Database Engine Blog summarizing the test findings from running TPC-E profile test workloads on premium storage configuration options. We continued this testing by including Ultra SSD. Ultra SSD is the new storage offering available on Microsoft Azure for mission-critical workloads with sub-millisecond latencies at high throughput. We will summarize the test details and findings in this blog.

We used a DS14_v2 VM with 16 cores, 112 GB of memory, and a 224 GB local SSD for this test. This virtual machine (VM) is capable of scaling up to 51,200 uncached IOPS and 64,000 cached and temporary IOPS. We selected a TPC-E profile workload, representative of an OLTP app in the e-commerce/trading space, as the test workload. Our test workload drives a similar percentage of read and write IO activity.

Size: Standard_DS14_v2
vCPU: 16
Memory (GiB): 112
Temp storage (SSD, GiB): 224
Max cached and temp storage throughput, IOPS/MBps (cache size in GiB): 64,000/512 (576)
Max uncached disk throughput, IOPS/MBps: 51,200/768

      Premium Storage Configuration

For the Premium Storage configuration, we added 10 P30 disks and enabled read-only (RO) cache for all of them. We created a single storage pool over all 10 disks, which enables 50,000 IOPS for the VM. We placed the SQL Server data, log, and Temp DB files on the single storage pool of 10 P30 disks. This is exactly how SQL files are placed when the Storage Configuration feature is used through the portal for SQL VMs created from Azure Marketplace images.

      Ultra SSD configuration

With Ultra SSD, we need only one Ultra SSD disk of 1 TB, which can scale up to 50,000 IOPS. Ultra SSD can be configured flexibly; size and IOPS scale independently. With the Premium Storage configuration we had to use 10 P30 disks to get 50,000 IOPS, which brings 10 TB of capacity that we did not need, as our database is less than 1 TB. With Ultra SSD we can provision a disk with our exact size, IOPS, and throughput requirements, and we only pay for the provisioned capacity. Also, as one Ultra SSD disk can scale up to 50,000 IOPS (maximum 160,000), we did not need to create a storage pool; a single disk hosts all SQL Server files, including data, log, and Temp DB. Ultra SSD does not require cache configuration for reads; it already offers sub-millisecond latency for all reads and writes with its brand-new architecture and hardware, so we did not configure any cache for the disk.

      Test findings

We executed the TPC-E profile test workload for an hour, where it builds up the load in the first 10 minutes and peaks for 40 minutes. We observed average disk read and write latencies below 1 ms during the run with Ultra SSD, whereas with the Premium Storage pool the average write latency was 4 ms and the average read latency was 1 ms. We could drive 33 percent more SQL Server throughput by replacing 10 P30 disks with a single Ultra SSD disk on the same VM with the exact same SQL Server configuration and test workload. CPU usage during our test was around 70 percent; for heavier workloads the throughput gain would be bigger.

Metric | Premium Storage Pool | Ultra SSD
Number of disks | 10 | 1
Read-only cache | All disks | Not applicable
Read-write cache | None of the disks | Not applicable
Data, log, and Temp DB files | Storage pool of 10 P30 disks | Single Ultra SSD disk
Average read latency | 1 ms | <1 ms
Average write latency | 4 ms | <1 ms
Average latency per transfer | 3 ms | <1 ms
Average transfer size | 9.5 KB | 10.7 KB
Batch requests per second | 13.1K | 17.5K
Business transactions per second | 1,688 | 1,980

As this test shows, a typical SQL Server workload will gain significant throughput on Ultra SSD compared to Premium Storage, driven by the latency differences. To find the most effective storage configuration for SQL Server workloads on an Azure VM, we recommend starting by choosing the correct VM size with enough storage scale limits for your workload. Placing Temp DB on the local SSD brings maximum performance with no additional cost for storage. Premium Storage with host blob cache offers low-latency cached reads. Placing data files on an RO cache-enabled Premium Storage pool with a VM that has large temporary and cached IOPS limits is a cost-effective option for workloads with low-latency read requirements.

Ultra SSD offers great storage performance with very low read and write latencies. Use Ultra SSD according to the latency and throughput requirements of your data and log files. For SQL Server workloads, log write latencies are critical, especially when in-memory OLTP is used. Placing the log file on an Ultra SSD disk enables high SQL Server performance with very low storage latencies. By combining Ultra SSD and Azure Premium Storage with memory- and storage-optimized virtual machine types, Azure Virtual Machines offer enterprise-grade performance for SQL Server workloads.

      IoT for Smart Cities: New partnerships for Azure Maps and Azure Digital Twins


      This blog post was co-authored by Julie Seto, Senior Program Manager, Azure IoT.

      Over recent years, one of the most dynamic landscapes undergoing digital transformation is the modern city. Amid increasing urbanization, cities must grapple with questions around how to strengthen their local economies, manage environmental resources, mitigate pollution and create safer, more accessible societies. The modern metropolis is on a crusade towards sustainability, prosperity and inclusivity, and critical to achieving those is its digital transformation, powered by cloud, AI and IoT technologies. Through these, cities can harness the power of real-time intelligence for monitoring, anticipating and managing urban events, from traffic congestion and flooding, to utility optimization and construction.

Earlier this year, Microsoft announced that we will invest $5 billion in IoT over the next four years. This year alone we announced many exciting additions to our IoT portfolio. Among them are Azure Maps and Azure Digital Twins, platforms that will help cities navigate the complexities of developing urban mobility and smart infrastructure solutions. Announced just shy of a year ago, Azure Maps is Azure's location intelligence portfolio of mapping, navigation, traffic, and geospatial services. Azure Digital Twins, recently announced at Ignite, is a service that lets companies unlock spatial intelligence by modeling the relationships between people, devices, and the environments they exist in. Together, these services empower organizations to unlock deeper insights from spatial intelligence, geographic enrichment, and location-based services.

      Today, we’re excited to announce strategic partnerships across both platforms and solutions built by Azure IoT partners to further help cities meet their ambitions of becoming Smarter Cities.

Last year, when we announced the public preview of Azure Maps (formerly Azure Location Based Services), Sam George, Director of Azure IoT, took the stage at Automobility in Los Angeles and highlighted how critical maps are to smart cities, stating that Azure Maps "includes geographical data that can better connect smart cities, infrastructure and IoT solutions, and empower industrial transformation, from manufacturing to retail to automotive – and everything in between." Now, we're echoing that message more prominently through a new partnership with Moovit, Inc.

Moovit will be the supplier of Public Transportation Routing services (aka Transit Routing) through Azure Maps natively from within Microsoft Azure. Once integrated, Azure customers will have the ability to use transit routing for a slew of applications including smart cities, transportation, automotive, field services, retail, and more. In the context of IoT, customers can extract insights such as public transit ridership, the cost benefit of transit/ridesharing/driving/bikes, justification for additional public transit, or additional taxation for roads and parking. Since these services will be native to Azure Maps, customers will also be able to use the Moovit-powered services as an additional mode of transportation to complement current routing capabilities such as driving (including traffic influenced, HOV, toll-road avoidance, etc.), walking, taxi, and motorcycle.

In the context of automotive, many commuters are moving farther away from city centers but still want to use public transportation. As such, public transit information is getting further integrated into vehicles in order to provide multiple modes of transportation: commute to a rideshare or transit center, and ride the train or bus into the city. This helps minimize urban congestion and reduces the cost of additional fuel and parking. Not to mention it reduces carbon footprint and noise pollution, since the individual vehicles are parked far outside the city. According to the International Association of Public Transport, worldwide, in 2017, the 178 metro systems accounted for a total annual ridership of 53 billion passengers. In the last six years, annual metro ridership grew globally by 8.7 billion passengers (+19.5%). This partnership with Moovit will help serve Azure customers' needs, both for their own internal applications and insights and for their customers who require public transportation.

      Today is a hallmark day and we’re proud to have Moovit as one of our partners – a highly innovative, reputable company in the location space, bringing clout through rich location data and services to Azure and its respective customers.

      Technologies like cloud, AI, and IoT can also be used to better manage facilities and infrastructure, increase energy efficiency, and enhance space utilization and occupant satisfaction. To capitalize on this opportunity, in September, Microsoft announced the public preview of Azure Digital Twins, a platform that enables organizations to use comprehensive digital models to build spatially aware solutions that can be applied to any physical environment. As Bert van Hoof, a Partner Group Program manager at Azure IoT, puts it, “Cities won’t truly get smart until cross-domain solutions are connected together, and data is shared. Azure IoT is bringing the spatial intelligence that will help unlock that value at broader scale—and enhancing spaces to be more sustainable, enjoyable, and inclusive.”

      Though both Azure Maps and Azure Digital Twins are among the newest additions to the IoT portfolio, already we have seen partners leveraging these platforms to tackle some of the challenges of creating safer, smarter cities, from LTI leveraging Azure Maps to manage city-wide events in a centralized advanced operations center, to LTTS, who is using Azure Digital Twins to better manage facilities across entire connected campuses.

City infrastructure is costly to manage, and cities must address the challenge of efficiently operating numerous buildings, from city halls, to libraries, to community centers. One partner is rethinking ways to make buildings more efficient and people centric. View manufactures View Dynamic Glass, a new generation of dynamic glass windows that let in natural light and views, and enhance mental and physical wellbeing by significantly reducing headaches, eyestrain, and drowsiness. In addition, View's windows reduce glare and heat, improving the energy efficiency of buildings by up to 20 percent. By using Azure Digital Twins and other components of the Azure IoT platform, View can leverage its domain expertise to create smart buildings that drive occupant satisfaction and are more energy efficient.

      An exciting partner that is building smart city solutions that integrate Azure Maps and Azure Digital Twins is LTI. A challenge for local and city management and administration teams is understanding what is happening in the city and responding with the right resources, in the right order, to responsibly manage city operations, from traffic and public transit to crime and safety incidents, and environmental alerts. LTI is addressing this with a comprehensive, advanced, and intelligent operations center solution for tracking and managing city operations and response management. The Advanced Operations Center gives city administrators an easy to use interface to deal with and manage all types of real-time events across a city.

      The Advanced Operations Center streams real-time telemetry from various sensors across urban settings leveraging several Azure services, including Azure Maps. Within one interface, city administrators can obtain real-time situational awareness of various events, leverage GIS tools to conduct in-depth analysis, optimize response management, track historical trends, and collaborate with other city operators. Through the dashboards, city operators can seamlessly customize the base map (image 1) choosing among satellite imagery, gray-scale maps, or the standard Azure Maps base map. 


      Image 1: City Operators can configure the base map, choosing from a range of different imagery styles.

      They can also leverage the Azure Maps traffic service for rendering real-time traffic flow and incident data – dynamically rendering growing levels of detail based on zoom level (images 2 & 3). The advanced operations center enables operators to execute standard operating procedures for the various events and respond to the various situations effectively. Event notifications in the Advanced Operations Center enable quick tracking of the events and collaboration (image 4).

      Single pane in Azure Maps

      Multiple categories of city events in Azure Maps

      Image 2 and 3: On a single pane, operators can visualize and manage multiple categories of city events.

      Measurements and alerts relating to different city events in Azure Maps

      Image 4: City operators can track the most recent measurements and alerts relating to different city events.

      SCE Streetlight

      Image 5: Operators can hover over sensors on the map, such as a streetlight, shown above, and quickly recover status updates related to the device.

      Smart City Expo World Congress

      In order to connect with cities on their journeys for digital transformation, we will be at Smart City Expo World Congress, the industry-leading event for urbanization, showcasing technologies and partners enabling the digital transformation of smart cities. Visit our booth at Gran Via, Hall P2, Stand 213 and learn more about our conference presence at SCEWC 2018. We also encourage you to meet with us at the following theater sessions in the booth:

      • Julie Seto, Senior Program Manager – Tuesday, November 13th at 2:30 PM
      • Chris Pendleton, Principal PM – Wednesday, November 14th at 3:00 PM

      Secure incoming traffic to HDInsight clusters in a virtual network with private endpoint


      We are excited to announce the general availability of private endpoint in HDInsight clusters deployed in a virtual network. This feature enables enterprises to better isolate access to their HDInsight clusters from the public internet and enhance their security at the networking layer.

      Previously, when customers deployed an HDI cluster in a virtual network, there was only one public endpoint available in the form of https://<CLUSTERNAME>.azurehdinsight.net. This endpoint resolves to a public IP for accessing the cluster. Customers who wanted to restrict the incoming traffic had to use network security group (NSG) rules. Specifically, they had to white-list the IPs of both the HDInsight management traffic as well as the end users who wanted to access the cluster. These end users might have already been located inside the virtual network, but they had to be white-listed to be able to reach the public endpoint. It was hard to identify and white-list these end users’ dynamic IPs, as they would often change.

With the introduction of private endpoint, customers can now use NSG rules to separate access from the public internet and end users that are within the virtual network's trusted boundary. The virtual network can be extended to the on-premises network, so traffic coming from on-premises to an HDInsight cluster can also be isolated from the public internet.

Customers can now white-list only the required static IPs needed by the HDInsight management plane to reach the public endpoint and have their end users access the private endpoint inside the virtual network. Each HDI cluster deployed in a virtual network will have a private endpoint in the form of https://<CLUSTERNAME>-int.azurehdinsight.net as well as a public endpoint. Note the "-int" in this URL; this endpoint resolves to a private IP in that virtual network and is not accessible from the public internet.
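
The distinction between the two endpoints is easy to verify with a DNS lookup from inside versus outside the virtual network. The following is a minimal Python sketch; the cluster name is a placeholder, and the private name will only resolve from hosts that use the virtual network's DNS.

import socket

# Hypothetical cluster name, used for illustration only.
CLUSTER = "mycluster"

ENDPOINTS = [
    f"{CLUSTER}.azurehdinsight.net",      # public endpoint, resolves to a public IP
    f"{CLUSTER}-int.azurehdinsight.net",  # private endpoint, resolves only inside the virtual network
]

for host in ENDPOINTS:
    try:
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror as err:
        # Expected for the '-int' endpoint when run outside the virtual network.
        print(f"{host} could not be resolved from here: {err}")

Running the same script from a VM inside the virtual network and from a machine on the public internet makes the isolation boundary visible at a glance.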

      HDInsight Architecture Diagram

The isolation of public traffic from traffic inside the virtual network, potentially extended to the on-premises environment through ExpressRoute, is an important security feature to control access at the networking layer and adds an additional security layer to HDInsight clusters as a whole. To learn more about HDInsight and virtual networks, please see "Extend Azure HDInsight using an Azure Virtual Network" on the Microsoft Azure documentation page.

      Try Azure HDInsight now

      We are excited to see what you will build next with Azure HDInsight. Read the HDInsight developer guide and follow the quick start to learn more about implementing open source analytics pipelines on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter, #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

      About HDInsight

      Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 27 public regions and Azure Government Clouds in the United States and Germany. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.

      Windows 10 SDK Preview Build 18272 available now!


      Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18272 or greater). The Preview SDK Build 18272 contains bug fixes and under development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

      For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

      Things to note:

      • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still also continue to submit your apps that target Windows 10 build 1803 or earlier to the Microsoft Store.
• The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
      • This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
      • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following URL: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18272 once the static URL is published.

      API Updates, Additions and Removals

      Additions:

      
      namespace Windows.ApplicationModel.Calls {
        public sealed class PhoneLine {
          PhoneLineBluetoothDetails BluetoothDetails { get; }
          HResult EnableTextReply(bool value);
        }
        public sealed class PhoneLineBluetoothDetails
        public enum PhoneLineTransport {
          Bluetooth = 2,
        }
      }
      namespace Windows.ApplicationModel.Calls.Background {
        public enum PhoneIncomingCallDismissedReason
        public sealed class PhoneIncomingCallDismissedTriggerDetails
        public enum PhoneLineProperties : uint {
          BluetoothDetails = (uint)512,
        }
        public enum PhoneTriggerType {
          IncomingCallDismissed = 6,
        }
      }
      namespace Windows.ApplicationModel.Calls.Provider {
        public static class PhoneCallOriginManager {
          public static bool IsSupported { get; }
        }
      }
      namespace Windows.ApplicationModel.Resources.Core {
        public sealed class ResourceCandidate {
          ResourceCandidateKind Kind { get; }
        }
        public enum ResourceCandidateKind
      }
      namespace Windows.Globalization {
        public sealed class CurrencyAmount
      }
      namespace Windows.Management.Deployment {
        public enum AddPackageByAppInstallerOptions : uint {
          ApplyToExistingPackages = (uint)512,
        }
      }
      namespace Windows.Networking.Connectivity {
        public enum NetworkAuthenticationType {
          Wpa3 = 10,
          Wpa3Sae = 11,
        }
      }
      namespace Windows.Networking.NetworkOperators {
        public sealed class ESim {
          ESimDiscoverResult Discover();
          ESimDiscoverResult Discover(string serverAddress, string matchingId);
          IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
          IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
        }
        public sealed class ESimDiscoverEvent
        public sealed class ESimDiscoverResult
        public enum ESimDiscoverResultKind
      }
      namespace Windows.Security.DataProtection {
        public enum UserDataAvailability
        public sealed class UserDataAvailabilityStateChangedEventArgs
        public sealed class UserDataBufferUnprotectResult
        public enum UserDataBufferUnprotectStatus
        public sealed class UserDataProtectionManager
        public sealed class UserDataStorageItemProtectionInfo
        public enum UserDataStorageItemProtectionStatus
      }
      namespace Windows.System {
        public enum ProcessorArchitecture {
          Arm64 = 12,
          X86OnArm64 = 14,
        }
      }
      namespace Windows.UI.Composition {
        public interface IVisualElement
      }
      namespace Windows.UI.Composition.Interactions {
        public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
          public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
        }
      }
      namespace Windows.UI.Input {
        public class AttachableInputObject : IClosable
        public sealed class InputActivationListener : AttachableInputObject
        public sealed class InputActivationListenerActivationChangedEventArgs
        public enum InputActivationState
      }
      namespace Windows.UI.Input.Preview {
        public static class InputActivationListenerPreview
      }
      namespace Windows.UI.Input.Preview.Injection {
        public enum InjectedInputButtonEvent
        public sealed class InjectedInputButtonInfo
        public enum InjectedInputButtonKind
        public sealed class InputInjector {
          void InjectButtonInput(IIterable<InjectedInputButtonInfo> input);
        }
      }
      namespace Windows.UI.ViewManagement {
        public sealed class ApplicationView {
          ApplicationWindowPresenterKind AppliedPresenterKind { get; }
          string PersistedStateName { get; }
          public static IAsyncOperation<bool> ClearAllPersistedStateAsync();
          public static IAsyncOperation<bool> ClearPersistedStateAsync(string value);
          bool TrySetPersistedStateName(string value);
        }
        public sealed class UISettings {
          bool AutoHideScrollBars { get; }
          event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
        }
  public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
}
      namespace Windows.UI.Xaml {
        public class ContentRoot
        public sealed class ContentRootRasterizationScaleChangedEventArgs
        public sealed class ContentRootSizeChangedEventArgs
        public sealed class ContentRootVisibilityChangedEventArgs
        public sealed class ContentRootVisibleBoundsChangedEventArgs
        public class UIElement : DependencyObject, IAnimationObject {
          Shadow Shadow { get; set; }
          public static DependencyProperty ShadowProperty { get; }
        }
        public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
      }
      namespace Windows.UI.Xaml.Controls {
        public class ContentDialog : ContentControl {
          ContentRoot AssociatedContentRoot { get; set; }
        }
      }
      namespace Windows.UI.Xaml.Controls.Primitives {
        public sealed class AppBarTemplateSettings : DependencyObject {
          double NegativeCompactVerticalDelta { get; }
         double NegativeHiddenVerticalDelta { get; }
          double NegativeMinimalVerticalDelta { get; }
        }
        public sealed class CommandBarTemplateSettings : DependencyObject {
          double OverflowContentCompactOpenUpDelta { get; }
          double OverflowContentHiddenOpenUpDelta { get; }
          double OverflowContentMinimalOpenUpDelta { get; }
        }
        public class FlyoutBase : DependencyObject {
          ContentRoot AssociatedContentRoot { get; set; }
          bool IsWindowed { get; }
          public static DependencyProperty IsWindowedProperty { get; }
          bool IsWindowedRequested { get; set; }
          public static DependencyProperty IsWindowedRequestedProperty { get; }
        }
        public sealed class Popup : FrameworkElement {
          ContentRoot AssociatedContentRoot { get; set; }
          bool IsWindowed { get; }
          public static DependencyProperty IsWindowedProperty { get; }
          bool IsWindowedRequested { get; set; }
          public static DependencyProperty IsWindowedRequestedProperty { get; }
          bool ShouldMoveWithContentRoot { get; set; }
          public static DependencyProperty ShouldMoveWithContentRootProperty { get; }
        }
      }
      namespace Windows.UI.Xaml.Core.Direct {
        public enum XamlPropertyIndex {
          AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
          AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
          AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
          CommandBarTemplateSettings_OverflowContentCompactOpenUpDelta = 2370,
          CommandBarTemplateSettings_OverflowContentHiddenOpenUpDelta = 2371,
          CommandBarTemplateSettings_OverflowContentMinimalOpenUpDelta = 2372,
        }
      }
      namespace Windows.UI.Xaml.Hosting {
        public class DesktopWindowXamlSource : IClosable {
          bool ProcessKeyboardAccelerator(VirtualKey key, VirtualKeyModifiers modifiers);
        }
        public sealed class ElementCompositionPreview {
          public static UIElement GetApplicationWindowContent(ApplicationWindow applicationWindow);
          public static void SetApplicationWindowContent(ApplicationWindow applicationWindow, UIElement xamlContent);
        }
      }
      namespace Windows.UI.Xaml.Input {
        public sealed class FocusManager {
          public static UIElement FindNextFocusableElementInContentRoot(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot);
          public static UIElement FindNextFocusableElementInContentRoot(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot, Rect hintRect);
          public static object GetFocusedElement(ContentRoot contentRoot);
          public static bool TryMoveFocusInContentRoot(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot);
          public static IAsyncOperation<FocusMovementResult> TryMoveFocusInContentRootAsync(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot);
        }
      }
      namespace Windows.UI.Xaml.Media {
        public class Shadow : DependencyObject
        public class ThemeShadow : Shadow
        public sealed class VisualTreeHelper {
          public static IVectorView<Popup> GetOpenPopupsWithinContentRoot(ContentRoot contentRoot);
        }
      }
      namespace Windows.UI.Xaml.Media.Animation {
        public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
          bool IsShadowEnabled { get; set; }
        }
      }
      namespace Windows.Web.Http {
        public sealed class HttpClient : IClosable, IStringable {
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
          IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
          IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
          IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
        }
        public sealed class HttpGetBufferResult : IClosable, IStringable
        public sealed class HttpGetInputStreamResult : IClosable, IStringable
        public sealed class HttpGetStringResult : IClosable, IStringable
        public sealed class HttpRequestResult : IClosable, IStringable
      }
       
      

      The post Windows 10 SDK Preview Build 18272 available now! appeared first on Windows Developer Blog.

      Working with US Census Data in R


If you need data about the American populace, there's no source more canonical than the US Census Bureau. The bureau publishes a wide range of public data sets, and not just from the main Census conducted every 10 years: there are more than 100 additional surveys and programs published as well. To help R users access this rich source of data, Ari Lamstein and Logan Powell have published A Guide to Working with US Census Data in R, a publication of the R Consortium Census Working Group.

      The guide provides an overview of the data available from the US census bureau and various tools available in R to access and analyze it. The guide notes that there are 22 R packages for working with census data, and cites as being particularly useful:

• tigris, for working with shape files of census regions (census data may be aggregated to any of a number of levels, as shown in the diagram below)
      • acs, for downloading and managing data from the decennial census and the American Community Survey
      • choroplethr and choroplethrMaps, for mapping data (including census data) by region
      • tidycensus, to extract census data as tidy data frames
      • censusapi, for extracting data using the Census API
      • ipumsr, to extract US census data in a form that can be compared with data from other countries 

      You can find the complete guide at the link below.

      R Consortium: A Guide to Working with US Census Data in R

      Azure SQL Data Warehouse provides frictionless development using SQL Server Data Tools


A highly requested feature for SQL Data Warehouse (SQL DW) is now in preview: support for SQL Server Data Tools (SSDT) in Visual Studio! Teams of developers can now collaborate over a single, version-controlled codebase and quickly deploy changes to any instance in the world. SQL DW is a flexible, secure, and fully managed analytics platform for the enterprise optimized for running complex queries quickly across petabytes of data.

Currently, change management and deployment for SQL DW is a non-trivial effort where customers must build SQL and PowerShell scripts. This becomes an unmanageable experience as modern data warehouse solutions can have hundreds of data pipelines and thousands of database objects. The issue is exacerbated as data warehouse deployments typically have multiple environments for development, test, and production. A stable continuous integration and deployment process becomes critical at this point.

      With SSDT, database project support enables a first-class enterprise-grade development experience for your modern data warehouse. You can check data warehouse scripts into source control and leverage Microsoft Azure DevOps within Visual Studio. As your business requirements around data evolve, increase your development velocity with SQL DW by seamlessly applying and deploying changes using features such as schema compare and publish, all within a single tool. You can now save weeks of development efforts with SSDT and begin employing best practices with database DevOps and SQL DW. Gone are the days where you and your team must manually script and manage unstable continuous integration and deployment pipelines with SQL DW.

      Database project support

      SSDT - Target Platform

      Source control integration using Azure DevOps with Azure Repos

      SSDT - Azure Devops

      Sign up today for a preview

      Interested in joining? This feature is available for preview today! Simply register by visiting the SQL Data Warehouse Visual Studio SQL Server Data Tools (SSDT) - Preview Enrollment form. Given the high demand, we are managing acceptance into preview to ensure the best experience for our customers. Once you sign up, our goal is to confirm your status within seven business days.

      Next steps

      Automatically discover workload insights for advanced performance tuning directly in Azure portal


      Advanced tuning for Azure SQL Data Warehouse (SQL DW) just got simpler with additional data warehouse recommendations and metrics. SQL DW is a flexible, secure, and fully managed analytics platform for enterprises optimized for running complex queries quickly across petabytes of data.

Oftentimes, advanced tuning scenarios with SQL DW can be a challenge without the proper tools to seamlessly uncover performance insights into your data warehouse workload. This can lead to hours of troubleshooting effort where you must ensure proper monitoring practices are continuously followed. SQL DW provides a built-in holistic management experience through tight integration with the Microsoft Azure ecosystem, specifically Azure Advisor and Azure Monitor. These two services are configured by default for SQL Data Warehouse to automatically deliver workload insights to you at no additional cost.

There are additional advanced performance recommendations through Azure Advisor at your disposal, including:

• Adaptive cache – Be advised when to scale to optimize cache utilization.
• Table distribution – Determine when to replicate tables to reduce data movement and increase workload performance.
• Tempdb – Understand when to scale and configure resource classes to reduce tempdb contention.

New SQL DW Recommendations

There is a deeper integration of data warehouse metrics with Azure Monitor, including an enhanced customizable monitoring chart for near real-time metrics in the overview blade. You no longer have to leave the data warehouse overview blade to access Azure Monitor metrics when monitoring usage, or when validating and applying data warehouse recommendations.

Azure Monitor Integration

There are new metrics available, such as tempdb and adaptive cache utilization, to complement your performance recommendations.

Metrics



                          Next steps

                          Row-Level Security is now supported for Microsoft Azure SQL Data Warehouse


Today we’re announcing the general availability of Row-Level Security (RLS) for Microsoft Azure SQL Data Warehouse, an additional capability for managing security for sensitive data. Azure SQL Data Warehouse is a fast, flexible, and secure cloud data warehouse tuned for running complex queries quickly across petabytes of data.

                          As you move data to the cloud, securing your data assets is critical to building trust with your customers and partners. With the introduction of RLS, you can implement security policies to control access to rows in your tables, as in who can access what rows. RLS enables this fine-grained access control without having to redesign your data warehouse. This simplifies the overall security model as the access restriction logic is located in the database tier itself rather than away from the data in another application. RLS also eliminates the need to introduce views to filter out rows for access control management. In addition, RLS supports both SQL authentication and Azure Active Directory (AAD) authentication.

                          Here are a few scenarios where RLS could be leveraged today:

                          • A healthcare provider enforces a security policy that allows nurses to view only data rows for their own patients.
                          • A financial services firm restricts access to rows of financial data based on either the employee’s business division or employee’s role within the company.
                          • A multi-tenant application enforces logical separation of each tenant's data rows from every other tenant's rows.

                          RLS Diagram

                          RLS is a form of predicate-based access control that works by automatically applying a security predicate to all queries on a table. The predicate determines which users can access what rows. For example, a simple predicate might be, “WHERE SalesRep = SYSTEM_USER,” while a complicated predicate might include JOINs to look up information in other tables.

                          There are two types of security predicates:

                          • Filter predicates silently filter SELECT, UPDATE, and DELETE operations to exclude rows that do not satisfy the predicate.
                          • Block predicates explicitly block INSERT, UPDATE, and DELETE operations that do not satisfy the predicate.

                          In this release, Azure SQL Data Warehouse only supports filter predicates while support for block predicates will be released soon. Also, in this release, RLS doesn’t support external tables created via PolyBase.

To add a security predicate on a table, you first need an inline table-valued function that defines your access criteria. Then, you create a security policy that adds a filter predicate on any tables you like, using this function. Here’s a simple example that prevents sales representatives from accessing rows in a customers table that are not assigned to them:

                          CREATE SCHEMA security;
                          
                          CREATE FUNCTION security.customerPredicate(@SalesRepName AS sysname)
                          RETURNS TABLE
                              WITH SCHEMABINDING
                          AS
                              RETURN SELECT 1 AS accessResult
                          WHERE @SalesRepName = SYSTEM_USER OR SYSTEM_USER = 'Manager';
                          go
                          
                          CREATE SECURITY POLICY security.customerAccessPolicy
                          ADD FILTER PREDICATE security.customerPredicate(SalesRepName) ON dbo.Customers
                          WITH (STATE = ON);
                          go
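
To sanity-check the policy from an application, you can connect as different logins and observe that the same query returns different rows. The following Python sketch uses pyodbc; the server, database, and login names are hypothetical placeholders, and it assumes a SQL login exists whose name matches values in the SalesRepName column.

import pyodbc

# All connection details below are hypothetical placeholders.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydw;"
    "UID=SalesRep01;"   # a SQL login whose name matches values in SalesRepName
    "PWD=<password>;"
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

# The filter predicate is applied automatically: only rows where
# SalesRepName = SYSTEM_USER (or where the caller is 'Manager') are returned.
cursor.execute("SELECT SalesRepName, COUNT(*) FROM dbo.Customers GROUP BY SalesRepName;")
for sales_rep, row_count in cursor.fetchall():
    print(sales_rep, row_count)

conn.close()

Running the same query as the 'Manager' login should return counts for every sales representative, while a representative's login sees only their own rows.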

                          This capability is available now in all Azure regions with no additional charge. The rollout has been completed in a few regions, with the goal to finish worldwide deployment within the next two weeks. Azure SQL Data Warehouse continues to lead in the areas of security, compliance, privacy, and auditing.

                          Next steps

                          Workload insights into SQL Data Warehouse delivered through Microsoft Azure Monitor diagnostic logs


                          SQL Data Warehouse (SQL DW) now enables enhanced insights into analytical workloads by integrating directly with Microsoft Azure Monitor diagnostic logs. This new capability enables developers to analyze workload behavior over an extended time period and make informed decisions on query optimization or capacity management. SQL DW is a flexible, secure, and fully managed analytics platform for the enterprise optimized for running complex queries quickly across petabytes of data.

                          Today, customers leverage Dynamic Management Views (DMVs) to get insights into their data warehouse workload. These DMVs have a limit of 10,000 rows that can easily be exceeded for intensive enterprise data warehouse workloads with heavy query activity. Relying solely on DMVs hinders or blocks many query troubleshooting scenarios for active workloads. To work around this DMV limitation, custom logging solutions were required which consumed internal system resources, increased the total cost of the data warehouse solution, and introduced additional development complexities and maintenance effort.

We have now introduced an external logging process through Azure Monitor diagnostic logs, which provides additional insights into your data warehouse workload. With a single click of a button, you are now able to configure diagnostic logs for historical query performance troubleshooting using Log Analytics. Azure Monitor diagnostic logs support customizable retention periods by saving the logs to a storage account for auditing purposes, the capability to stream logs to event hubs for near real-time telemetry insights, and the ability to analyze logs using Log Analytics with log queries. Diagnostic logs consist of telemetry views of your data warehouse equivalent to the most commonly used performance troubleshooting DMVs for SQL Data Warehouse. For this initial release, we have enabled views for the following:

Configure diagnostic logs to emit to Log Analytics:

Set log alerts to dynamically scale your data warehouse using action groups integrated with Azure Functions:

                          Azure Monitor Logs - Alerts

                          Next steps


                          Azure SQL Data Warehouse introduces new productivity and security capabilities


SQL Data Warehouse continues to provide a best-in-class price-to-performance offering, leading others in TPC-H and TPC-DS benchmarks based on independent testing. As a result, we are seeing customers, including more than 50 percent of Fortune 1000 enterprises such as Anheuser-Busch InBev, Thomson Reuters, and ThyssenKrupp, build new analytics solutions on Azure.

With the launch of SQL Data Warehouse Gen2 in April 2018, customers have benefited tremendously from query performance and concurrency enhancements. To support our customers’ exponentially growing data volumes and the resulting analytics workloads, today we are sharing new SQL Data Warehouse features: enhanced workload management, row-level security, and improved operational experiences.


                          Azure SQL Data Warehouse

                          Enhanced workload management

                          SQL Data Warehouse will offer workload management capabilities that optimize query execution to ensure that high value work gets priority access to system resources. With features such as workload importance, customers can use a single SQL Data Warehouse database to more efficiently run multiple workloads, taking away the complexity of separate data warehouses for each solution. With this new capability, SQL Data Warehouse enables better control, utilization and optimization over deployed resources. Workload importance will be available for all SQL Data Warehouse customers later this year at no additional cost.

                          Industry leading security

SQL Data Warehouse now supports native row-level security (RLS), enabling customers to implement the most stringent security policies for fine-grained access control. Going forward, customers will be able to change security policies without redesigning and redeploying the data warehouse, and there will be no impact to query performance when row-level security is applied. By implementing granular security directly in the database tier itself, and with native integration with Azure Active Directory, managing and controlling the overall security model is simplified via centralized security policy adjustments.

Complemented by Virtual Network Service Endpoints, Threat Detection, Transparent Data Encryption, and compliance with more than 40 national, regional, and industry-specific requirements, SQL Data Warehouse offers best-in-class security and compliance at zero additional cost.

                          Best in class development tools and insights

                          SQL Data Warehouse is committed to delivering first class experience for data warehouse administrators and developers through improved insights and updated tooling to streamline automation and management. With the latest improvements, building a modern data warehouse on Azure just got faster and easier.

                          Today we are sharing the preview of SQL Server Data Tool (SSDT) in Visual Studio for SQL Data Warehouse, offering first-class development experience with integrated support for version control, test automation with continuous integration, and one-click deployment of change scripts. This means that as business requirements evolve, data warehouse implementers can code and deploy enhancements faster, whilst still adhering to robust quality controls that block regressions from creeping into production systems.

We are also extending the intelligent insights capability to include additional details for database schema optimization, recommending optimal use of replicated tables as well as better utilization of the adaptive cache and tempdb. With a built-in holistic management experience through Azure Advisor and Azure Monitor integration, data warehouse administrators can seamlessly uncover performance insights and easily tune the solution for better performance.

                          Query Store has been an incredibly popular feature within SQL Server that enables developers to troubleshoot query performance issues relative to historical execution time. We’re pleased to now bring this capability to SQL Data Warehouse. With Query Store, developers can review query workloads running on the platform, and analyze associated query plans and runtime statistics to identify any performance issues that may impede productivity.

To keep your data warehouse fresh and up to date with data source changes, supporting updates and transactions is critical. However, interrupting long-running transactions can sometimes lead to longer database recovery processes. To improve database availability, SQL Data Warehouse now incorporates the Accelerated Database Recovery (ADR) feature. With ADR, SQL Data Warehouse improves database availability and enables much quicker pause and resume service operations.

                          For advanced troubleshooting scenarios, SQL Data Warehouse now provides one-click integration with Azure Monitor Diagnostic Logs that enables developers to capture and archive usage data such as queries executed and wait stats for future analysis. These logs are a natural extension of the existing dynamic management view capabilities in SQL Data Warehouse and developers will benefit from the familiar and powerful experience.

                          Azure is a great platform for all analytics

With its native integration with Azure Databricks, Azure Data Factory, and Power BI, SQL Data Warehouse allows customers to build new analytics solutions to support modern data warehousing, advanced analytics, and real-time analytics scenarios. A key feature now generally available is SQL Data Warehouse’s native integration with Azure Data Lake Storage Gen2, the only cloud-scale data lake designed specifically for mission-critical analytics and AI workloads.

                          Customers can also leverage 25 plus Microsoft and third-party data integration and BI tools to build an analytics solution for any enterprise. We have partnered with vendors to streamline the modernization of legacy on-premises data warehouse to Azure. These ecosystem investments allow our customers to build upon their existing infrastructure and significantly accelerate time to value for powerful analytics solutions.

                          Announcing the general availability of Azure Event Hubs for Apache Kafka®


                          In today’s business environment, with the rapidly increasing volume of data and the growing pressure to respond to events in real-time, organizations need data-driven strategies to gain valuable insights faster and increase their competitive advantage. To meet these big data challenges, you need a massively scalable distributed streaming platform that supports multiple producers and consumers, connecting data streams across your organization. Apache Kafka and Azure Event Hubs provide such distributed platforms.

                          How is Azure Event Hubs different from Apache Kafka?

                          Apache Kafka and Azure Event Hubs are both designed to handle large-scale, real-time stream ingestion. Conceptually, both are distributed, partitioned, and replicated commit log services. Both use partitioned consumer models with a client-side cursor concept that provides horizontal scalability for demanding workloads.

                          Apache Kafka is an open-source streaming platform which is installed and run as software. Event Hubs is a fully managed service in the cloud. While Kafka has a rapidly growing, broad ecosystem and has a strong presence both on-premises and in the cloud, Event Hubs is a cloud-native, serverless solution that gives you the freedom of not having to manage servers or networks, or worry about configuring brokers.

                          Announcing Azure Event Hubs for Apache Kafka

                          We are excited to announce the general availability of Azure Event Hubs for Apache Kafka. With Azure Event Hubs for Apache Kafka, you get the best of both worlds—the ecosystem and tools of Kafka, along with Azure’s security and global scale.

This powerful new capability enables you to start streaming events from applications using the Kafka protocol directly into Event Hubs, simply by changing a connection string. Enable your existing Kafka applications, frameworks, and tools to talk to Event Hubs and benefit from the ease of a platform-as-a-service solution; you don’t need to run Zookeeper, or manage and configure your clusters. A minimal producer sketch is shown below.
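
To illustrate the "change a connection string" experience, the following Python sketch uses the open-source kafka-python client to send an event to an Event Hubs namespace over the Kafka protocol on port 9093. The namespace, event hub (topic) name, and connection string are placeholders.

from kafka import KafkaProducer

# Placeholders - substitute your own namespace, event hub, and connection string.
BOOTSTRAP = "mynamespace.servicebus.windows.net:9093"
CONNECTION_STRING = (
    "Endpoint=sb://mynamespace.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>"
)

producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",  # literal value expected by Event Hubs
    sasl_plain_password=CONNECTION_STRING,
)

# The Kafka topic name maps to the event hub name inside the namespace.
producer.send("mytopic", b"hello from a Kafka client")
producer.flush()

The only Kafka-specific pieces that change are the bootstrap server and the SASL credentials; the rest of the producer code is untouched.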

                          Event Hubs for Kafka also allows you to easily unlock the capabilities of the Kafka ecosystem. Use Kafka Connect or MirrorMaker to talk to Event Hubs without changing a line of code. Find the sample tutorials on our GitHub.

                          This integration not only allows you to talk to Azure Event Hubs without changing your Kafka applications, you can also leverage the powerful and unique features of Event Hubs. For example, seamlessly send data to Blob storage or Data Lake Storage for long-term retention or micro-batch processing with Event Hubs Capture. Easily scale from streaming megabytes of data to terabytes while keeping control over when and how much to scale with Auto-Inflate. Event Hubs also supports Geo Disaster-Recovery. Event Hubs is deeply-integrated with other Azure services like Azure Databricks, Azure Stream Analytics, and Azure Functions so you can unlock further analytics and processing.

Event Hubs for Kafka supports Apache Kafka 1.0 and later through the Apache Kafka protocol, which we have mapped to our native AMQP 1.0 protocol. In addition to providing compatibility with Apache Kafka, this protocol translation allows other AMQP 1.0-based applications to communicate with Kafka applications. JMS-based applications can use Apache Qpid™ to send data to Kafka-based consumers.

                          Open, interoperable, and fully managed: Azure Event Hubs for Apache Kafka.

                          Next steps

                          Get up and running in just a few clicks and integrate Event Hubs with other Azure services to unlock further analytics.

                          Enjoyed this blog? Follow us as we update the features list. Leave us your feedback, questions, or comments below.

                          Happy streaming!

                          Automating SAP deployments in Microsoft Azure using Terraform and Ansible


                          Deploying complex SAP landscapes into a public cloud is not an easy task. While SAP basis teams tend to be very familiar with the traditional tasks of installing and configuring SAP systems on-premise, additional domain knowledge is often required to design, build, and test cloud deployments.

                          There are several options to take the guesswork out of tedious and error-prone SAP deployment projects into a public cloud:

                          • One way to get started is the SAP Cloud Appliance Library (CAL), a repository of numerous SAP solutions that can be directly deployed into a public cloud. However, apart from its cost, CAL only contains pre-configured virtual machine (VM) images, so configuration changes are hard or impossible.
• A free alternative has been to use SAP Quickstart Templates offered by most public cloud providers. Typically written in a shell script or a proprietary language, these templates offer some customization options for pre-defined SAP scenarios. For example, Azure’s ARM templates offer one-click deployments of SAP HANA and other solutions directly in the Azure portal.

                          While both solutions are great starting points, they usually lack configuration options and flexibility required to build up an actual, production-ready SAP landscape.

                          Based on feedback from actual customers who move their SAP landscapes into the cloud, the truth is that existing Quickstart Templates rarely go beyond “playground” systems or proof-of-concepts; they are too rigid and offer too little flexibility to map real-life business and technical requirements.

This is why we, the SAP on Microsoft Azure Engineering team, decided to go in the opposite direction: Instead of offering “one-size-fits-all” templates for limited SAP scenarios that can hardly be adapted (let alone extended), we broke down SAP deployments in Azure to the most granular level and offer “building blocks” for a truly customizable, yet easy-to-use experience.

                          A new approach to automating SAP deployments in the cloud

                          In this new, modular approach to automating even more complex SAP deployments in Azure, we developed a coherent collection of:

• Terraform modules, which deploy the infrastructure components (such as VMs, network, and storage) in Azure and then call the:
• Ansible playbook, which calls different:
• Ansible roles to install and configure the OS and SAP applications on the deployed infrastructure in Azure.

                          A new approach to automating SAP deployments in the cloud

                          Flow diagram of Terraform/Ansible SAP automation templates.

                          An important design consideration was to keep all components as open and flexible as possible; although nearly every parameter on both Azure and SAP side can be customized, most are optional. In other words, you can be spinning up your first SAP deployment in Azure within 10 minutes by using one of our boilerplate configuration templates – but you can also use our modules and roles to build up a much more complex landscape.

                           

                          Deployment

                          A sample deployment of HANA high-availability pair.

                          For your convenience, Terraform and Ansible are pre-installed in your Azure Cloud Shell, so the templates can be run directly from there with minimal configuration. Alternatively, you can, of course, use them from your local machine or any VM as well.

                          While the repository is published and maintained by Microsoft Azure, the project is community-driven and we welcome any contributions and feedback.

                          Starting with SAP HANA, but a lot more to come

                          When we started building our Terraform and Ansible templates a few months ago, we decided to start out our engineering process with HANA. SAP’s flagship in-memory database is the underlying platform and de-facto standard of most modern SAP enterprise applications, including S/4HANA and BW/4HANA. If you’ve ever built an SAP HANA high-availability cluster from scratch, you’ll appreciate that we’ve taken the guesswork out of this complex task and aligned our templates to the public cloud reference architectures certified by SAP.

                          Currently, our Terraform/Ansible templates support the following two options (more application-specific scenarios are currently being worked on):

                          HANA single-node instance

                          • Single-node HANA instance.

                          Single-node HANA instance

                          HANA high-availability pair

                          • Single-node HANA instance, two-tier replication (primary/secondary) via HSR.
                          • Pacemaker high-availability cluster, fully configured with SBD and SAP/Azure resource agents.

                          HANA high-availability pair

                          Since our key focus was to offer the greatest amount of flexibility possible, virtually every aspect of the SAP HANA landscape can be customized, including:

                          • Sizing (choose any supported Azure VM SKU).
                          • High-availability (in the high-availability pair scenario, choose to use availability sets or availability zones).
                          • Bastion host (optionally, choose from a Windows and/or Linux “jump box” including HANA Studio).
                          • Version (currently, HANA 1.0 SPS12 and HANA 2.0 SPS2 or higher are supported).
                          • XSA applications (optionally, enable XSA application server and choose from a set of supported applications like HANA Cockpit or SHINE).

                          04_XSA-SHINE

                          XSA SHINE demo content for HANA.

                          It’s worth noting that all scenarios come with “fill-in-the-blanks” boilerplate configuration templates and step-by-step instructions to help you get started.

                          Getting started is easy

                          Got a few minutes? In our popular Azure Friday series, our team member Page Bowers walks through a SAP HANA live deployment using our Terraform/Ansible templates.

                          Want to jump right in? Visit our GitHub repository and follow the “Getting Started” guide – you’ll be building up your first SAP landscapes in the Azure cloud in no time!

                          Cognitive Services – Bing Local Business Search now available in public preview


We are excited to share that the Bing Local Business Search API on Cognitive Services is now available in public preview. The Bing Local Business Search API enables users to easily find local business information within your applications, given an area of interest. The public preview enables scenarios such as calling, navigation, and mapping using contact details, latitude/longitude, and other entity metadata. This metadata comes from hundreds of categories including professionals and services, retail, healthcare, food and drink, and more. Additionally, user queries can pertain to a single entity, such as “Microsoft City Center Plaza Bellevue”, a collection of results, such as “Microsoft offices in Redmond, WA”, or a category, such as “Italian Restaurant”. Alternatively, users can use one of our predefined categories to query our API. A simple request sketch is shown below.
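
The call pattern follows other Bing Search APIs: an HTTP GET with a query and market parameter, authenticated with a Cognitive Services subscription key. The Python sketch below is illustrative only; the endpoint path, parameter names, and response field names are assumptions to verify against the preview documentation, and the key is a placeholder.

import requests

# Assumed preview endpoint - confirm the exact path in the Bing Local Business Search docs.
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/localbusinesses/search"
SUBSCRIPTION_KEY = "<your-cognitive-services-key>"  # placeholder

params = {"q": "Italian restaurant in Bellevue, WA", "mkt": "en-US"}
headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

response = requests.get(ENDPOINT, params=params, headers=headers)
response.raise_for_status()

# Assumed response shape: a 'places' object with a 'value' list of local business entities.
for place in response.json().get("places", {}).get("value", []):
    print(place.get("name"), "-", place.get("telephone"))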

                          Below is an example of a JSON response. Each result item contains a name, full address, phone number, website, business category, and latitude/longitude. Using these results, you can build engaging user scenarios in your applications. For instance, you could enable users to find and contact a local business. Another example is to enable navigation to the place of interest or plotting results on Bing Maps. An example from Bing.com for the “City Center Plaza Microsoft” query is shown below.

                          Example - City Center Plaza Microsoft

                          Get more details about the Bing Local Business Search API offering by reading, “API Announcement: Local Business Search Public Preview” on the Bing Developer Blog.

                          For questions please reach out to us via Stack Overflow and Azure Support. We would also love to hear your feedback.

                          Leverage Azure Security Center to detect when compromised Linux machines attack


When an attacker compromises a machine, they typically have a goal in mind. Some attackers are looking for information residing on the victim’s machine or for access to other machines on the victim’s network. Other times, attackers plan to use the processing power of the machine itself or even use the machine as a launch point for other attacks. On Linux virtual machines (VMs) in Microsoft Azure, we most commonly see attackers installing and running cryptocurrency mining software. This blog post will focus on the latter case, when an attacker wants to use the compromised machine as a launch point for other attacks.

                          Azure Security Center (ASC) utilizes an agent that runs on multiple distributions of Linux. When auditd is enabled, it collects logs including process creation events. These are run through the detection pipeline to look for malicious and suspicious activity. Alerts are surfaced through the ASC portal.

                          The Microsoft Threat Intelligence Center uses a range of methods to identify new emerging threats, including a sophisticated hybrid Linux honeypot service. A honeypot is a decoy system, set up to be attacked and lure cyber attackers to reveal themselves.

                          In this post, we discuss some recent instances where attacks against the honeypot originated from IPs within customer machines. In each case, malicious behavior on those compromised customer VMs had already resulted in alerts being raised through Azure Security Center. Analysis of these attacks yielded greater insight into the attacker’s behavior. This fed further detection development, allowing us to surface more attack behavior to customers earlier, and provide a more complete view of the attack end to end.

                          Initial intrusion

The diagram below shows the attack setup. The analysis suggests that an Apache Cassandra account with a default password was used to initially compromise an Azure VM. Once access was gained, the attacker approached the honeypot (1) and other targets (2). We identified two IP addresses (3, 4) that the attacker used to log into this VM, one of which also attacked the honeypot (5). Another thing that stood out was that the two IPs the attacker was using shared the same first two octets and resolved to Romania. We will come back to this fact later.


                          Connections

                          Intrusion breakdown

One of the more common attacks that we see against customer virtual machines is a brute force or password spray attack, which quickly leads to the installation of crypto coin mining malware. In this case, the malicious user was doing something a bit different.

                          Host enumeration

After the initial compromise, the attacker pulled down a Perl-based host enumeration script from the domain nasapaul.com, which hosts a few enumeration and speed test scripts. Azure Security Center surfaces this behavior via a “detected file download from a known malicious source” alert.

                          Download Source

                          That script looks for specific information in the /proc/cpuinfo file to give the attacker an idea of what kind of machine they are on. You can see some of the commands run in the text box below. That same script also runs a speed test which is a service that nasapaul.com offers.

CPU=$(grep -m 1 "model name" /proc/cpuinfo | cut -d: -f2 | sed -e 's/^ */ /' | sed -e 's/$/ /')

CPUS=$(grep -c ^processor /proc/cpuinfo)

STEP=$(grep -m 1 "stepping" /proc/cpuinfo | cut -d: -f2 | sed -e 's/^ */ /' | sed -e 's/$/ /')

BOGO=$(grep -m 1 "stepping" /proc/cpuinfo | cut -d: -f2 | sed -e 's/^ */ /' | sed -e 's/$/ /')

OS=$(lsb_release -si)

ram=$(free -m | grep -oP '\d+' | head -n 1)

VER=$(uname -a)

uptime=$(</proc/uptime)

uptime=${uptime%%.*}

bold=$(tput bold)

zile=$((uptime/60/60/24))

secunde=$((uptime%60))

minute=$((uptime/60%60))

ore=$((uptime/60/60%24))

vid=$(lspci | grep VGA | cut -f5- -d ' ')

DISK=$(df -h --total | grep total | awk '{printf "" $2 "B\n\n"}')

                          Initial exploitation

                          That session ended, but the attacker started a new session and created a connection to a secure FTP server and pulled some files down. Then they modified the files for execution:

                          chmod +x 1 cyberinfo cybernetikrandom go h4e petarda port pscan2 screen speedtestvps.py sshd

This set of files is a toolkit from a known hacking group. The attacker uses the “go” file to run “pscan2” and “sshd” against two different class B IP ranges. That means they ran the scan against just over 65,000 addresses in each range. They also used the tool “h4e”, which our investigation showed was a Perl script used in denial of service attacks. The text file “port” holds the results of the scans, typically which IPs were listening and possibly which ports were open. It isn’t clear if those commands completed successfully, but two hours later the attacker deleted them all and pulled down a different kit.

                          Password spray

This time the attacker used Wget to pull down their toolkit from a public website. As before, they pulled down the tools and then made them all executable:

                          chmod +x a all classes co gasite.txt hu pass range scan.log ssh2 x
                          
                          /bin/bash ./a ##.49
                          ./ssh2 1500 -b ##.49 pass 22 "uname -a & lscpu"
                          /bin/bash ./a ###.66
                          ./ssh2 1500 -b ###.66 pass 22 "uname -a & lscpu"
                          nano gasite.txt

After that, the same simple pattern is repeated against a number of class B ranges. The file “a” takes the first two octets of a class B range as input, then calls “ssh2”. “ssh2” takes as input a number of threads, the range, a password file (“pass”, which in this case contains over 35,000 user/password combinations), a port number, and the initial commands to run. The file “gasite.txt” collects the output.

                          Later on, we see the files “co” and “range” used with the “classes” folder. The “classes” folder has details of 26 cloud and hosting companies with their IP ranges. Microsoft is there along with all the other major providers. The files “co” and “range” just expand the initial two octets into a full IP.
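As a rough illustration of what “co” and “range” appear to do, expanding two octets into every host address in the /16 takes nothing more than a pair of nested loops; the prefix below is a hypothetical placeholder, not a value from the attacker’s “classes” folder:

# Sketch: expand a two-octet prefix (a /16) into its roughly 65,000 host addresses
prefix="203.0"                      # hypothetical example prefix
for third in $(seq 0 255); do
  for fourth in $(seq 1 254); do
    echo "${prefix}.${third}.${fourth}"
  done
done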

The attacker didn’t appear to ever execute the files “all”, “hu”, or “x”, but they all have to do with configuring IP ranges, specifically filling out the full four octets of an IP. It is possible that the “ssh2” executable uses them.

Analysis of the toolkit took some effort. The output filename “gasite.txt” translates to “found.txt”, and the “ssh2” file is a custom Romanian version of an SSH scanner packed and/or obfuscated using UPX. Once unpacked, the Romanian strings came through (see image below). Some further research by the red team tracked down the original ssh2 source code and a forum where our attacker, or someone using the same executable, was getting help with their code.

[Image: strings output from the unpacked “ssh2” binary, showing its Romanian text]
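The unpacking step itself is routine. Assuming the sample really is UPX-packed and the stock tools are installed, something like the following is enough to recover readable strings; this is a sketch of the workflow, not the exact commands used in the investigation:

# Work on a copy of the sample inside an isolated analysis VM
cp ssh2 ssh2.packed
upx -d ssh2.packed -o ssh2.unpacked    # -d decompresses a UPX-packed binary
strings ssh2.unpacked | less           # the Romanian strings and usage text become visible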

                          Result: Enhanced behavioral analytics in Azure Security Center

While investigating the intrusion, we were able to pull out a number of unique TTPs for inclusion in new analytics or for improving existing ones. These led to things like better password spray detection and improved coverage of attacker host enumeration. We were also able to validate that existing analytics fired as expected. The goal isn’t to show a customer multiple redundant alerts for the same intrusion, but to provide insight into the full scope of an attacker’s actions. We also acknowledge that the actor behind this attack could change some aspects of their technique; the greater the detection coverage across the attack lifecycle, the more resilient we are to changes in attacker methodology. Additionally, specific techniques used by this attacker could be used by other attackers, and we’d like to make sure we catch them too.
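Outside of Security Center, the raw signal behind a password spray is also visible in the SSH daemon’s own logs. A minimal sketch, assuming a Debian/Ubuntu-style /var/log/auth.log (RHEL-family systems log to /var/log/secure instead):

# Top source addresses generating failed SSH logins; a spray shows up as a few
# IPs cycling through many different usernames
sudo grep 'Failed password' /var/log/auth.log \
  | awk '{print $(NF-3)}' \
  | sort | uniq -c | sort -rn | head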

                          Recommended actions

Review your alerts regularly in Azure Security Center. The customer received multiple Azure Security Center alerts for this intrusion, and the malicious activity stopped soon after and has not appeared again. Azure Security Center consolidates all of your alerts in one centralized location under Security alerts. This makes it easy to see the severity of your alerts and helps you prioritize your response to them. Each alert gives you a detailed description of the incident as well as steps on how to remediate the issue. For further investigation, you can review the alerts in the “Investigation Path”, an interactive and visual way to see every entity involved in the attack.
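If you prefer scripting to the portal, the same alerts are exposed through the Azure CLI; a minimal sketch, assuming a current CLI build with the “az security” commands available:

# List Azure Security Center alerts for the current subscription
az security alert list --output table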

Change your passwords regularly. While Azure Security Center alerted on the activity, the intrusion could have been prevented through good password hygiene. Of the many username and password combinations in the attacker’s toolkit, a good chunk are defaults created when a piece of software is first installed. By changing these default passwords, or by going password-less altogether, you prevent your passwords from being used against you.
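For example, on a typical Linux VM you can take password guessing off the table entirely by allowing key-based SSH only. A minimal sketch, assuming the stock OpenSSH config path and service name (both vary slightly by distribution), and that key-based access is already confirmed to work:

# Refuse SSH password logins; rely on keys instead
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd    # the service may be named "ssh" on Debian/Ubuntu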

                          Final thoughts

Our team works both ends of the cybersecurity problem. We constantly improve and refine our detections through both public and internal security research. We are also proactive in monitoring external threats as a key input to ensuring that our detection coverage stays relevant to the attacks facing both Microsoft and its customers. If you have Linux machines in Azure, consider using Azure Security Center to help monitor them and to keep them from being used to target others.


In addition to the actions you can take, Microsoft has several physical infrastructure and operational controls in place to help protect the Azure platform. We have over 3,500 cybersecurity experts at Microsoft who help protect, detect, and respond to security threats against our infrastructure and services 24/7, 365 days a year. One of those teams is ours, the Microsoft Threat Intelligence Center. To learn more about our team and how we work to protect against malicious activity in Azure, watch our latest Microsoft Mechanics video.

                          Other resources
