
Announcing Visual Studio 2017 15.7 Preview 4


As you know, we continue to incrementally improve Visual Studio 2017 (version 15), and our seventh significant update is well under way, with the fourth preview shipping today. As we wind down the preview, we'd like to take the time to tell you about the great things coming in 15.7 and ask you to try it and give us any feedback you might have while we still have time to correct things before we ship the final version.

From a .NET tools perspective, 15.7 brings a lot of great enhancements including:

  • Support for .NET Core 2.1 projects
  • Improvements to Unit Testing
  • Improvements to .NET productivity tools
  • C# 7.3
  • Updates to F# tools
  • Azure Key Vault support in Connected Services
  • Library Manager for working with client-side libraries in web projects
  • More capabilities when publishing projects

In this post we’ll take a brief tour of all these features and talk about how you can try them out (download 15.7 Preview). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

.NET Core 2.1 Support

.NET Core 2.1 and ASP.NET Core 2.1 bring a list of great new features, including performance improvements, global tools, a Windows compatibility pack, minor version roll-forward, and security improvements, to name a few. For full details, see the .NET Core 2.1 Roadmap and the ASP.NET Core 2.1 Roadmap respectively.

Visual Studio 15.7 is the recommended version of Visual Studio for working with .NET Core 2.1 projects. To get started building .NET Core 2.1 projects in Visual Studio, install the .NET Core 2.1 SDK and create a new project.

You'll then see ASP.NET Core 2.1 as an option in the One ASP.NET dialog.


If you are working with a Console Application or Class Library, you'll need to create the project, open the project's property page, and change the Target framework to ".NET Core 2.1".
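
Under the covers, that dropdown simply rewrites the TargetFramework property in the project file. A minimal illustrative sketch of what the edited .csproj looks like for a console app:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- ".NET Core 2.1" in the Target framework dropdown corresponds to this moniker -->
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
</Project>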


Unit Testing Improvements

  • The Test Explorer has undergone more performance improvements, which results in smoother scrolling and faster updating of the test list for large solutions.
  • We’ve also improved the ability to understand what is happening during test runs. When a test run is in progress, a progress ring appears next to tests that are currently executing, and a clock icon appears for tests that are pending execution.


Productivity Improvements

Each release we’ve been working to add more and more refactorings and code fixes to make you productive. In 15.7 Preview 4, invoke Quick Actions and Refactorings (Ctrl+. or Alt+Enter) to use:

  • Convert for-loop-to-foreach (and vice versa)
  • Make private field readonly
  • Toggle between var and the explicit type (without code style enforcement)

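To make the first item concrete, here is a small illustrative sketch (hypothetical code, not from the post) of what the for-to-foreach conversion does:

using System;
using System.Collections.Generic;

class RefactoringDemo
{
    // Before: a classic indexed loop. Put the caret on the for keyword and press Ctrl+. to get "Convert to foreach".
    static void PrintAllBefore(List<string> names)
    {
        for (int i = 0; i < names.Count; i++)
        {
            Console.WriteLine(names[i]);
        }
    }

    // After: the foreach form the refactoring produces ("Convert to for" goes the other way).
    static void PrintAllAfter(List<string> names)
    {
        foreach (string name in names)
        {
            Console.WriteLine(name);
        }
    }
}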

To learn more about productivity features see our Visual Studio 2017 Productivity Guide for .NET Developers.

C# 7.3

15.7 also brings C# 7.3, the newest incremental update to the language.

To use C# 7.3 features in your project:

  • Open your project’s property page (Project -> [Project Name] Properties…)
  • Choose the “Build” tab
  • Click the “Advanced…” button on the bottom right
  • Change the "Language version" dropdown to "C# latest minor version (latest)". This setting enables your project to use the latest C# features available in the version of Visual Studio you are using, without needing to change it again in the future. If you prefer, you can pick a specific version from the list.
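
Under the covers, the dropdown writes a LangVersion property into the project file. A minimal illustrative sketch of the equivalent hand edit:

<PropertyGroup>
  <!-- Equivalent to choosing "C# latest minor version (latest)" in the Advanced Build Settings dialog -->
  <LangVersion>latest</LangVersion>
</PropertyGroup>

With that in place, C# 7.3 additions such as the new == and != operators for tuple types compile without any further changes.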

F# improvements

15.7 also includes several improvements to F# and F# tooling in Visual Studio.

  • Type Providers are now enabled for .NET Core 2.1. To try it out, we recommend using FSharp.Data version 3.0.0-beta, which has been updated to use the new Type Provider infrastructure.
  • .NET SDK projects can now generate an F# AssemblyInfo file from project properties.
  • Various smaller bugs in file ordering for .NET SDK projects have been fixed, including initial ordering when pasting a file into a folder.
  • Toggles for outlining and Structured Guidelines are now available in the Text Editor > F# > Advanced options page.
  • Improvements in editor responsiveness have been made, including ensuring that error diagnostics always appear before other diagnostic information (e.g., unused value analysis).
  • Efforts to reduce memory usage of the F# tools have been made in partnership with the open source community, with many of those improvements available in this release.

Finally, templates for ASP.NET Core projects in F# are coming soon, targeted for the RTW release of VS 2017 15.7.

Azure Key Vault support in Connected Services

We have simplified the process to manage your project’s secrets with the ability to create and add a Key Vault to your project as a connected service. The Azure Key Vault provides a secure location to safeguard keys and other secrets used by applications so that they do not get shared unintentionally. Adding a Key Vault through Connected Services will:

  • Provide Key Vault support for ASP.NET and ASP.NET Core applications
  • Automatically add configuration to access your Key Vault through your project
  • Add the required NuGet packages to your project
  • Allow you to access, add, edit, and remove your secrets and permissions through the Azure portal

To get started:

  • Double click on the "Connected Services" node in Solution Explorer in your ASP.NET or ASP.NET Core application.
  • Click on “Secure Secrets with Azure Key Vault”.
  • When the Key Vault tab opens, select the Subscription that you would like your Key Vault to be associated with and click the “Add” button on the bottom left. By default Visual Studio will create a Key Vault with a unique name.
    Tip: If you would like to use an existing Key Vault, or change the location, resource group, or pricing tier from the preselected values, click the "Edit" link next to the Key Vault name.
  • Once the Key Vault has been added, you will be able to manage secrets and permissions with the links on the right.
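
For ASP.NET Core projects, the connected service wires Key Vault into the configuration system. Here is a rough sketch of the kind of wiring it produces (the "Vault", "ClientId", and "ClientSecret" keys are placeholders, and the generated code may differ slightly):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                var builtConfig = config.Build();

                // Placeholder settings; the connected service stores the real vault details for you.
                config.AddAzureKeyVault(
                    $"https://{builtConfig["Vault"]}.vault.azure.net/",
                    builtConfig["ClientId"],
                    builtConfig["ClientSecret"]);
            })
            .UseStartup<Startup>()
            .Build();
}

Once the provider is registered, a secret named MySecret can be read like any other configuration value, for example Configuration["MySecret"].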


Library Manager

Library Manager (“LibMan” for short) is Microsoft’s new client-side static content management system for web projects. Designed as a replacement for Bower and npm, LibMan helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalogue.

To get started, right-click a web project from Solution Explorer and choose “Manage Client-side Libraries…”. This creates and opens the LibMan configuration file (libman.json) with some default content. Update the “libraries” section to add library files to your project. This example adds some jQuery files to the wwwroot/lib directory.
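
If you can't see the screenshot, here is a hedged sketch of what that libman.json might look like (the library version and destination are illustrative):

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery"
    }
  ]
}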


For more details, see Library Manager: Client-side content management for web apps.

Azure Publishing Improvements

We also made several improvements to publishing applications from Visual Studio.

For more details, see our Publish improvements in Visual Studio 2017 15.7 post on the Web Developer blog.

Conclusion

If you haven’t installed a Visual Studio preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. So, we hope that you’ll take the time to install the Visual Studio 2017 15.7 Preview 4 update and let us know what you think. You can either use the built-in feedback tools in Visual Studio 2017 or let us know what you think below in the comments section.


Publish Improvements in Visual Studio 2017 15.7


Today we released Visual Studio 2017 15.7 Preview 4. The 15.7 update brings several improvements for publishing applications from Visual Studio that we're excited to tell you about, including:

  • Ability to configure publish settings before you publish or create a publish profile
  • Create Azure Storage Accounts and automatically store the connection string for App Service
  • Automatic enablement of Managed Service Identity in App Service

If you haven’t installed a Visual Studio Preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. We’d be very appreciative if you’d try Visual Studio 2017 15.7 Preview 4 and give us any feedback you might have while we still have time to change or fix things before we ship the final version (download now). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

Configure settings before publishing

When publishing your ASP.NET Core applications to either a folder or Azure App Service, you can now configure a number of publish settings prior to creating your publish profile.

To configure this prior to creating your profile, click the “Advanced…” link on the publish target page to open the Advanced Settings dialog.

Advanced link on 'Pick a publish target' dialog

Create Azure Storage Accounts and automatically store the connection string in App Settings

When creating a new Azure App Service, we’ve always offered the ability to create a new SQL Azure database and automatically store its connection string in your app’s App Service Settings. With 15.7, we now offer the ability to create a new Azure Storage Account while you are creating your App Service, and automatically place the connection string in the App Service settings as well. To create a new storage account:

  • Click the “Create a storage account” link in the top right of the “Create App Service” dialog
  • Provide the connection string key name your app uses to access the storage account in the "(Optional) Connection String Name" field at the bottom of the Storage Account dialog
  • Once your application is published, it will be able to talk to the storage account

Optional Connection String Name field on Storage Account dialog
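
On the application side, here is a hedged sketch of reading that stored connection string at runtime. The "MyStorageConnection" key is whatever name you typed into the field above; depending on how the value is stored in App Service, it may surface under the ConnectionStrings section instead of as a plain app setting:

using Microsoft.Extensions.Configuration;
using Microsoft.WindowsAzure.Storage;

public class StorageAccountFactory
{
    public static CloudStorageAccount Create(IConfiguration configuration)
    {
        // "MyStorageConnection" is a placeholder for the key name entered in the publish dialog.
        var connectionString = configuration["MyStorageConnection"]
            ?? configuration.GetConnectionString("MyStorageConnection");

        return CloudStorageAccount.Parse(connectionString);
    }
}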

Managed Service Identity enabled for new App Services

A common challenge when building cloud applications is how to manage the credentials that need to be in your code for authenticating to other services. Ideally, credentials never appear on developer workstations or get checked into source control. Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.

Starting in Visual Studio 2017 15.7 Preview 4, when you publish an application to Azure App Service (not Linux), Visual Studio automatically enables MSI for your application. You can then give your app permission to communicate with any service that supports MSI authentication by logging into that service's page in the Azure Portal and granting access to your App Service. For example, to create a Key Vault and give your App Service access:

  1. In the Azure Portal, select Create a resource > Security + Identity > Key Vault.
  2. Provide a Name for the new Key Vault.
  3. Locate the Key Vault in the same subscription and resource group as the App Service you created from Visual Studio.
  4. Select Access policies and click Add new.
  5. In Configure from template, select Secret Management.
  6. Choose Select Principal, and in the search field enter the name of the App Service.
  7. Select the App Service’s name in the result list and click Select.
  8. Click OK to finish adding the new access policy, and OK to finish access policy selection.
  9. Click Create to finish creating the Key Vault.

Azure portal dialog: Create a Key Vault and give your App Service access

Once you publish your application, it will have access to the Key Vault without the need for you to take any additional steps.
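
With MSI enabled, here is a rough sketch of how the published app could read a secret without any credentials in code or config, using the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages (the vault and secret names are placeholders):

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public class SecretReader
{
    public static async Task<string> GetSecretAsync()
    {
        // The token provider picks up the App Service's managed identity automatically.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // Placeholder vault and secret names.
        var secret = await keyVaultClient.GetSecretAsync("https://myvault.vault.azure.net/secrets/MySecret");
        return secret.Value;
    }
}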

Conclusion

If you’re interested in the many other great things that Visual Studio 2017 15.7 brings for .NET development, check out our .NET tool updates in Visual Studio 15.7 post on the .NET blog.

We hope that you'll give 15.7 a try and let us know how it works for you. If you run into any issues, or have any feedback, please report them to us using Visual Studio's features for sending feedback, or let us know what you think below or via Twitter.

HttpClientFactory for typed HttpClient instances in ASP.NET Core 2.1


I'm continuing to upgrade my podcast site https://www.hanselminutes.com to .NET Core 2.1 running ASP.NET Core 2.1. I'm using Razor Pages, having converted my old WebMatrix site (like 8 years old), and it's gone very smoothly. I've got a ton of blog posts queued up as I'm learning a ton. I've added Unit Testing for the Razor Pages as well as more complete Integration Testing for checking things "from the outside" like URL redirects.

My podcast has recently switched away from a custom database over to using SimpleCast and their REST API for the back end. There's a number of ways to abstract that API away as well as the HttpClient that will ultimately make the call to the SimpleCast backend. I am a fan of the Refit library for typed REST Clients and there are ways to integrate these two things but for now I'm going to use the new HttpClientFactory introduced in ASP.NET Core 2.1 by itself.

Next I'll look at implementing a Polly Handler for resilience policies like Retry, WaitAndRetry, and CircuitBreaker, etc. (I blogged about Polly in 2015 - you should check it out), as it's just way too useful not to use.

HttpClient Factory lets you preconfigure named HttpClients with base addresses and default headers so you can just ask for them later by name.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("SomeCustomAPI", client =>
    {
        client.BaseAddress = new Uri("https://someapiurl/");
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        client.DefaultRequestHeaders.Add("User-Agent", "MyCustomUserAgent");
    });
    services.AddMvc();
}

Then later you ask for it and you've got less to worry about.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace MyApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public HomeController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<IActionResult> Index()
        {
            var client = _httpClientFactory.CreateClient("SomeCustomAPI");
            return Ok(await client.GetStringAsync("/api"));
        }
    }
}

I prefer a TypedClient and I just add it by type in Startup.cs...just like above except:

services.AddHttpClient<SimpleCastClient>();

Note that I could put the BaseAddress in multiple places depending on if I'm calling my own API, a 3rd party, or some dev/test/staging version. I could also pull it from config:

services.AddHttpClient<SimpleCastClient>(client => client.BaseAddress = new Uri(Configuration["SimpleCastServiceUri"]));

Again, I'll look at ways to make this even simpler AND more robust (it has no retries, etc) with Polly soon.

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri($"https://api.simplecast.com"); //Could also be set in Startup.cs
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        try
        {
            var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
            _logger.LogWarning($"HttpClient: Loading {episodesUrl}");
            var res = await _client.GetAsync(episodesUrl);
            res.EnsureSuccessStatusCode();
            return await res.Content.ReadAsAsync<List<Show>>();
        }
        catch (HttpRequestException ex)
        {
            _logger.LogError($"An error occurred connecting to SimpleCast API {ex.ToString()}");
            throw;
        }
    }
}

Once I have the client I can use it from another layer, or just inject it with [FromServices] whenever I have a method that needs one:

public class IndexModel : PageModel
{
    public async Task OnGetAsync([FromServices]SimpleCastClient client)
    {
        var shows = await client.GetShows();
    }
}

Or in the constructor:

public class IndexModel : PageModel
{
    private SimpleCastClient _client;

    public IndexModel(SimpleCastClient client)
    {
        _client = client;
    }

    public async Task OnGetAsync()
    {
        var shows = await _client.GetShows();
    }
}

Another nice side effect is that HttpClients that are created from the HttpClientFactory give me free logging:

info: System.Net.Http.ShowsClient.LogicalHandler[100]

Start processing HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
System.Net.Http.ShowsClient.LogicalHandler:Information: Start processing HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
info: System.Net.Http.ShowsClient.ClientHandler[100]
Sending HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
System.Net.Http.ShowsClient.ClientHandler:Information: Sending HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
info: System.Net.Http.ShowsClient.ClientHandler[101]
Received HTTP response after 882.8487ms - OK
System.Net.Http.ShowsClient.ClientHandler:Information: Received HTTP response after 882.8487ms - OK
info: System.Net.Http.ShowsClient.LogicalHandler[101]
End processing HTTP request after 895.3685ms - OK
System.Net.Http.ShowsClient.LogicalHandler:Information: End processing HTTP request after 895.3685ms - OK

It was super easy to move my existing code over to this model, and I'll keep simplifying AND adding other features as I learn more.


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Propel your IoT platform to the cloud with Azure Time Series Insights!


Today we’re pleased to announce two key capabilities that Azure Time Series Insights will be delivering later this year:

  • Cost-effective long-term storage that enables a cloud-based solution to trend years' worth of time series data pivoted on devices/tags.
  • A device-based (also known industry-wide as “tag-based”) user experience backed by a time series model to contextualize raw time series data with device metadata and domain hierarchies.

Additionally, Time Series Insights will be integrating with advanced machine learning and analytics tools like Spark and Jupyter notebooks to help customers tackle time series data challenges in new ways. Data scientists and process engineers in industries like oil & gas, power & utility, manufacturing, and building management rely on time series data solutions for critical tasks like storage, data analysis, and KPI tracking, and they'll be able to do this using Time Series Insights.

Time series model and tag-centric experience

Time Series Insights’ current user interface is great for data scientists and analysts. However, process engineers and asset operators may not always find this experience natural to use. To address this, we are adding a device-based user experience to the Time Series Insights explorer. This new interface and the underlying time series model that backs the experience will enable OT workers to intuitively find devices related to the assets they care about. By enabling hierarchy and device-based semantics that contextualize the raw time series data, we enable richer and deeper analytics. This means that finding and comparing observation targets (i.e., devices/tags) to trend and explore in the Time Series Insights user experience or with our REST APIs will be seamless. 

The below diagram is an example of the rich tag-based experience that we are developing in the Time Series Insights explorer.

Time Series Insights

Long-term Storage

To help organizations effortlessly scale their time series solutions, Time Series Insights will offer seamless integration with massively scalable, cost-effective storage, archival, and queryability in Azure Storage. This additional layer of storage creates a powerful duality for customers to engage their data while keeping costs in check. Our warm layer, what Time Series Insights customers know and use today, continues to support interactive analytics. With the cold layer, these same customers will now have a single source of truth for their time series data in the cloud, at a price that works. We expect most customers to store 30–120 days of data in the warm layer and 1–20 years in the cold layer, thereby blending the best of both worlds. Cold storage, coupled with device/tag-centric querying across all data, means that customers maximize the cost benefits of the cloud while still realizing performant querying and trending of historical time series data.

Below is a diagram that describes Time Series Insights’ high level architecture and scenarios:

TSI’s high level architecture and scenarios

Connectors to enable rich e2e solutions

Time Series Insights will store data in Apache Parquet files based on device/tag and timestamp properties, thus optimizing integration with powerful tools like Azure Databricks. Integration with machine learning tools like Azure Machine Learning Studio and Jupyter Notebooks is simplified, so organizations can build models to predict future device states and avoid wasteful maintenance. Time Series Insights makes it easy to collaborate and share insights in seconds through integration with Power BI, Microsoft Excel, and other business intelligence reporting software. 

Our customers

We're collaborating closely with customers like TransAlta to define and build this new infrastructure and solutions using Time Series Insights. Below are quotes from TransAlta's CTO and Enterprise Architect attesting to the business advantages they see in adopting Azure IoT and Time Series Insights solutions.

“Time Series Insights is the cornerstone of our IoT platform and will be pivotal in enabling our larger AI and Machine Learning strategies going forward. We have partnered closely with the Microsoft TSI team on the two features being announced today. We believe that the tag-centric user experience will enable all our users, from Engineering to Operations, to get more value from our data, due to an ease of access that we have not had as an organization with previous solutions. Time Series Insights’ new long-term storage will empower our data science and engineering teams, in collaboration with our operations teams, to discover and quickly take action on insights previously hidden in our data. As we migrate from an on-premises to a cloud-based solution, building our platform on Time Series Insights has allowed us to focus our energy on enabling our business partners to meet the growing demands of a changing Power industry, rather than becoming experts on cloud technologies.” 

– Jason Killeleagh, Enterprise Architect, TransAlta

“Energy demand is growing every year, and so are the pressures to deliver energy efficiently and sustainably. TransAlta is right in the middle of a major transformation that will ensure our global generation operations have the appropriate tools to enable timely decision-making in a rapidly changing energy market. Time Series Insights and other Azure IoT services are supporting our ability to tackle these challenges in a more dynamic, flexible and cost-effective manner than the traditional on-premises solutions we have reviewed. We see Microsoft and the Azure IoT team as a strategic partner helping us drive digital transformation in the energy sector.”

– Nipa Chakravarti, CTO, TransAlta 

Time Series Insights provides a global view of an organization's data – enabling customers to collect and generate insights from highly distributed IoT data. Time Series Insights’ REST APIs can query across devices/tags, so customers can build domain-specific solutions on top of Time Series Insights to view data streaming from multiple sites in seconds and query data as fast as they can today with a server stored down the hallway. A great example of a customer building on top of Time Series Insights’ APIs is the industrial automation leader, ABB. Recently, ABB has adopted Time Series Insights as a focal point for their mining, ES, and robotics platforms that will use Time Series Insights’ long-term storage to empower their customers with rich monitoring and analytics solutions. Below is a quote from ABB Ability’s Group Vice President of Product on their use of Time Series Insights.

“ABB is leveraging Azure Time Series Insights inside our ABB Ability™ Platform to help us build innovative solutions for our customers such as ABB Ability™ Connected Services for robotics and ABB Ability™ Remote Diagnostic Services for mining. We chose to build on Time Series Insights because we needed a scalable and performant, fully-managed platform. Time Series Insights enables us to deliver rich, interactive analytics in our solutions that make it easy for customers to solve problems and keep their assets running at peak performance.”

– Sean Parham, Group Vice President of Product Management, ABB Ability™

With the capabilities we are announcing today, Time Series Insights is evolving from a short-term asset monitoring and diagnostics service to a modern cloud IoT platform for customers and partners to build highly-capable and scalable IoT solutions. 

We will be at Hannover Messe this week. Microsoft will exhibit in the Digital Factory at HMI (booth #C40), focusing on the benefits of intelligent manufacturing. We'll feature the world's leading innovators through solution showcases and let you engage with our newest technologies that enable our customers to build IoT solutions. Don't hesitate to stop by and learn more.

Azure IoT Hub SDK officially provides native iOS support


We recently released a port of our Azure IoT Hub C SDK for the iOS platform. Whether your iOS project is written in Swift or Objective-C, you can leverage our device SDK and service SDK directly and begin turning your iOS device into an IoT device! Our libraries are available on CocoaPods, a popular package manager for iOS, and the source code is available on GitHub.

iOS devices are traditionally not viewed as IoT devices, but recently they have been gaining traction in the IoT space. Here are some of the interesting scenarios we gathered from our industry customers during the preview phase:

  • iOS device as the gateway for leaf devices or sensors on the factory floor.
  • iOS device in a meeting room, which acts as an end IoT device to send and receive messages from Azure IoT Hub.
  • iOS device to view the visualization of IoT telemetry.
  • iOS device to manage IoT Hub operations.

So, what is in the box? If you have interacted with our Azure IoT Hub C SDK before, this will be familiar to you! Our C SDK is written in C99 for maximum portability to various platforms. The porting process involves writing a thin adoption layer for the platform-specific components. You can find the thin adoption layer for iOS on GitHub. All the features in the C SDK can be leveraged on the iOS platform directly, including the Azure IoT Hub features we support and SDK-specific features such as retry policy for network reliability. In addition, the iOS platform is now part of our officially supported platforms, which are tested with every release. Our test suite includes unit tests, integration tests, and end-to-end tests, all available on GitHub. Azure IoT Hub Device Provisioning Service SDKs will also be available on iOS soon.

Learn more about how to turn your iOS device into an IoT device.

Azure.Source – Volume 28


Azure Security News at RSA Conference 2018

Last week, we made several Azure Security announcements in conjunction with RSA Conference 2018 in San Francisco:

  • Introducing Microsoft Azure Sphere: Secure and power the intelligent edge - Microsoft Azure Sphere is a new solution for creating highly-secured, Internet-connected microcontroller (MCU) devices. Azure Sphere includes three components that work together to protect and power devices at the intelligent edge: Azure Sphere certified microcontrollers (MCUs), Azure Sphere OS, and Azure Sphere Security Service.

    Microsoft Azure Sphere Leadership Vision - Microsoft product and business leaders introduce Azure Sphere, the latest IoT offering from Microsoft that extends security and new consumer experiences to a whole new class of devices at the intelligent edge.

  • The 3 ways Azure improves your security - Learn how Azure provides value in three key areas – a secure foundation that is provided by Microsoft, built-in security controls to help you quickly configure security across the full-stack, and unique intelligence at cloud scale to help you safeguard data and respond to threats in real-time.
  • Announcing new Azure Security Center capabilities at RSA 2018 - Azure Security Center provides centralized visibility of the security state of your resources and uses the collective intelligence from machine learning and advanced analytics to not only detect threats quickly but to help you prevent them. A new overview dashboard provides visibility into your security state from an organizational level instead of a subscription level, security configuration is now integrated into the virtual machine experience, new capabilities to reduce your exposure to threats and quickly detect and respond to threats, and new partner solutions from Palo Alto Networks and McAfee.
  • Password-less Sign-In to Windows 10 & Azure AD using FIDO2 is coming soon (plus other cool news)! - A limited preview of password-less sign-in using a FIDO2 security key will be available in the next update to Windows 10, which includes single sign-on access to all your Azure AD protected cloud resources. Azure AD Conditional Access policies can now check device health as reported by Windows Defender Advanced Threat Protection. Azure AD access reviews, Privileged Identity Management, and Terms of Use features are all now generally available. With the addition of domain allow and deny lists, Azure AD B2B Collaboration now gives you the ability to control which partner organizations you work with.
  • Streamlining GDPR requests with the Azure portal - The new Azure portal Data Subject Request (DSR) capability will help you to fulfill DSRs. Using it, you can identify information associated with a data subject and will be able to execute DSRs against system-generated logs (data Microsoft generates to provide a given service). Azure enables the fulfillment of DSRs against customer data (data you and your users upload or create) through pre-existing application programming interfaces (APIs) and user interfaces (UIs) across the breadth of services provided.
  • Connect to the Intelligent Security Graph using a new API - Microsoft announced the public preview of a Security API that empowers customers and partners to build on the Intelligent Security Graph. The Security API is part of the Microsoft Graph, which is a unified REST API for integrating data and intelligence from Microsoft products and services. The Security API opens up new possibilities for integration partners, such as Anomali, Palo Alto Networks, and PwC, to build with the Intelligent Security Graph. In addition, customers, managed service providers, and technology partners can leverage the Security API to build and integrate a variety of applications.

Microsoft Graph Security API block diagram

  • Announcing new Microsoft Azure Information Protection policy decision point capabilities with Ionic Security - Files protected with Azure Information Protection (AIP) further enhance security for your sensitive files. With the integration of AIP and Azure Active Directory (AAD), conditional access can be set up to allow or block access to AIP protected documents or enforce additional security requirements such as Multi-Factor Authentication (MFA) or device enrollment based on the device, location or risk score of users trying to access sensitive documents. Azure Active Directory conditional access extensibility features help solve two of the biggest challenges customers face today: usability and policy consistency. Using Azure Active Directory conditional access extensibility features, we are building a model where the customer can choose to apply externalized policies per AIP label. Ionic Security’s cross-cloud Data Trust platform is the first such provider of external decision points to our new extensibility service.
  • Tapping the intelligent cloud to make security better and easier - Conversations with customers have gone from asking ‘can we still keep our assets secure as we adopt cloud services?,’ to declaring, ‘we are adopting cloud services in order to improve our security posture.’ Learn more about the new technologies and programs that build on our unique cloud and intelligence capabilities to make it easier for enterprises to secure their assets from the cloud to the edge.

Now in preview

Preview: programmatically create Azure enterprise subscriptions using ARM APIs - As an Azure customer on Enterprise Agreement (EA), you can now create EA (MS-AZR-0017P) and EA Dev/Test (MS-AZR-0148P) subscriptions programmatically. To give another user or service principal the permission to create subscriptions billed to your account, give them Role-Based Access Control (RBAC) access to your enrollment account.

Now generally available

Azure DDoS Protection for virtual networks generally available - The Azure DDoS Protection Standard service, which is integrated with Azure Virtual Networks (VNet) and provides protection and defense for Azure resources against the impacts of DDoS attacks, is now generally available in all public cloud regions. Basic protection is integrated into the Azure platform by default and at no additional cost. Azure DDoS Protection Standard provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. To assist with establishing a well-vetted DDoS incident management response plan, we published the Best Practices & Reference Architecture guide.

Azure DDoS Protection diagram

Azure Service Fabric – announcing Reliable Services on Linux and RHEL support - Service Fabric is the foundational technology powering core Azure infrastructure as well as other Microsoft services such as Skype for Business, Intune, Azure Event Hubs, Azure Data Factory, Azure Cosmos DB, Azure SQL Database, Dynamics 365, and Cortana. Recently, we open sourced Service Fabric with the MIT license to increase opportunities for customers to participate in the development and direction of the product. Learn more about the release of Service Fabric runtime v6.2 and corresponding SDK and tooling updates.

Also generally available

News & updates

Gartner recognizes Microsoft as a leader in enterprise integration - Gartner’s Magic Quadrant for Enterprise Integration Platform as a Service (eiPaaS), 2018 positions Microsoft as a leader and it reflects Microsoft’s ability to execute and completeness of vision. For more information, download Gartner’s Magic Quadrant for Enterprise Integration Platform as a Service (eiPaaS), 2018 today.

Spring Data Azure Cosmos DB: NoSQL data access on Azure - Microsoft's Spring Boot Starter with the Azure Cosmos DB SQL API enables developers to use Spring Boot applications that easily integrate with Azure Cosmos DB by using the SQL API. With Spring Data Azure Cosmos DB, Java developers now can get started quickly to build NoSQL data access for their apps on Azure. It offers a Spring-based programming model for data access, while keeping the special traits of the underlying data store with Azure Cosmos DB. Features of Spring Data Azure Cosmos DB include a POJO centric model for interacting with an Azure Cosmos DB Collection, and an extensible repository style data access layer.

Iterative development and debugging using Data Factory - There is an increasing need among users to develop and debug their Extract Transform/Load (ETL) and Extract Load/Transform (ELT) workflows iteratively. Azure Data Factory (ADF) visual tools now enable you to do iterative development and debugging. Data Factory visual tools also let you debug your pipeline up to a breakpoint you place on the pipeline canvas.

Azure Data Factory visual tools

Recovery Services vault limit increased to 500 vaults per subscription per region - Scale limits for Azure Backup have been increased. Users can now create as many as 500 Azure Recovery Services vaults in each subscription per region. Also, the number of Azure virtual machines that can be registered against each vault has increased to 1,000, from an earlier limit of 200 machines under each vault.

Azure Marketplace new offers in March 2018 - In March, we published 55 new offers to the Azure Marketplace, which is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers.

Azure Backup now supports storage accounts secured with Azure Storage Firewalls and Virtual Networks - Azure infrastructure as a service (IaaS) virtual machine backup now supports network-restricted storage accounts. Use storage firewalls and virtual networks to allow traffic only from selected virtual networks and subnets. This helps you create a secure network boundary for your unmanaged disks in storage accounts. You can also grant access to on-premises networks and other trusted internet traffic by using network rules based on IP address ranges. With this announcement, you can perform scheduled and ad-hoc IaaS virtual machine backups and restores for virtual network-configured storage accounts.

Additional news & updates

Azure Friday

Azure Friday | Continuous integration and deployment using Azure Data Factory - Gaurav Malhotra joins Scott Hanselman to discuss how you can follow industry-leading best practices to do continuous integration and deployment for your Extract Transform/Load (ETL) and Extract Load/Transform (ELT) workflows to multiple environments such as Dev, Test, Prod, and more.

Azure Friday | Deploy Bitnami Node.js HA Cluster with Azure Cosmos DB - Rick Spencer joins Donovan to chat about deploying Bitnami Node.js High Availability with Azure Cosmos DB, a free listing in Azure Marketplace that uses ARM to automatically spin up a three-node Node.js cluster behind a load balancer with a shared file system and Azure Cosmos DB integration. See how you can quickly get a sample MEAN app from GitHub to a highly available production environment in the Azure cloud, with very little configuration or sysadmin knowledge required.

Technical content & training

Azure Advanced Threat Protection: CredSSP Exploit Analysis - In this blog, we provide a network behavior analysis of exploitation of the CredSSP vulnerability and the techniques it uses to propagate in the network. Additionally, we highlight how you can use Azure ATP to detect and investigate a variety of advanced cyberattack attempts.

Webcast: Microsoft Security Intelligence Report Volume 23—Breaking Botnets and Wrestling Ransomware - In this on-demand webcast, you’ll hear key insights and takeaways from the Security Intelligence Report Volume 23. Join us for a deep-dive analysis of the top threat trends we saw in 2017, learn about attack vectors, and get recommendations from a security industry veteran and a former CISO. You'll also learn how Microsoft took down the Gamarue botnet, and how you can stay vigilant against malware.

The Azure Podcast

The Azure Podcast | Episode 225 - Azure CXP - We talk to Jeremy Hollett, a Principal Service Engineering Manager, about the CXP organization and how it helps both Azure internally as well as its customers gain the ultimate in reliability. Evan also works for the same organization, so the two of them share some good insights.

Events

Microsoft at PostgresConf US 2018 - As noted above, we released Azure Database for PostgreSQL to general availability last week. In this post, Rohan Kumar, Corporate Vice President, Azure Data, shares his thoughts about what we learned during the preview period, and about attending the 7th annual PostgresConf US 2018, which was held last week in Jersey City, New Jersey.

Automating Industrial IoT Security - This week, Microsoft is at Hannover Messe Industrie (HMI) 2018 in Hannover, Germany. Industrial IoT is the largest IoT opportunity. At Microsoft, we serve this vertical by offering an Industrial IoT Cloud Platform Reference Architecture, which we bundle into an open-source Azure IoT Suite solution called Connected Factory, launched at HMI 2017 a year ago. In this post, learn about our continued collaboration with the OPC Foundation, the non-profit organization developing the OPC UA Industrial Interoperability Standard.

Customers and partners

Altair democratizes access to computer-aided engineering with Azure - Altair is democratizing access to CAE by building their Software-as-a-Service (SaaS) offerings on Microsoft Azure. In a case study we recently published, Altair describes how their HyperWorks Unlimited Virtual Appliance gives customers the combination of software and scale they need to quickly run their CAE workloads.

Azure tips & tricks

Underlying Software in Azure Cloud Shell

Use PowerShell with Azure Cloud Shell

Developer spotlight

Big Data & Analytics: Incorporate intelligence into your applications - Incorporating intelligence into your application, while processing big data and employing advanced analytics, is unfamiliar territory for many. The trouble is, the world of software development and those of big data and advanced analytics seem like they are light years apart. There are lots of choices to solve very different problems. They use different software stacks, different engineering approaches, and different terminology. Read this unified development white paper to learn how to solve common challenges in this space.

Solution architecture with Azure SQL Data Warehouse

Azure SQL Data Warehouse Workload Patterns and Anti-Patterns - Confused about data mart vs. data warehouse vs. data lake? Read this guidance to understand what is a good use case for a cloud data warehouse service and what is not. It also clarifies some of the concepts around RDBMS usage related to OLTP and OLAP workloads, Symmetric Multiprocessing (SMP), and Massively Parallel Processing (MPP).

Azure SQL Data Warehouse Cheat Sheet - Azure SQL Data Warehouse lets you spin up an MPP-architecture data warehouse in the cloud in minutes and load TBs of data in hours. This cheat sheet provides helpful tips and best practices for building your Azure SQL Data Warehouse solution.

Azure Container Instances for Multiplayer Gaming - Azure Container Instances allow you to host and run Docker images without the hassle of maintaining underlying servers or learning new orchestration concepts. In this session, we will explore using Azure Container Instances, Event Grid and Azure Functions to host a scalable multiplayer backend, using the open source game OpenArena as an example, without any code changes to the existing backend service.

Git with Unity for Game Development - Unity is the ultimate game development platform. Git is the ultimate version control system. But Unity and Git don't always get along so well. How can Unity and Git interact better? We'll look at some best practices for using Unity with the Git version control system.

Continuously Test, distribute and monitor your game with App Center - Connect your repository and, within minutes, build in the cloud, test on thousands of real devices, distribute to beta testers and app stores, and monitor real-world usage with crash and analytics data. All in one place: Visual Studio App Center.

Claim over $2500 in free gaming services - Start building exceptional iOS and Android games with a promotional offer from PlayFab and Visual Studio App Center.

Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly


Last week, while upgrading my podcast site to ASP.NET Core 2.1 and .NET Core 2.1, I moved my HttpClient instances over to be created by the new HttpClientFactory. Now I have a single central place where my HttpClient objects are created and managed, and I can set policies as I like on each named client.

It really can't be overstated how useful a resilience framework for .NET Core like Polly is.

Take some code like this that calls a backend REST API:

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri($"https://api.simplecast.com");
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
        var res = await _client.GetAsync(episodesUrl);
        return await res.Content.ReadAsAsync<List<Show>>();
    }
}

Now consider what it takes to add things like

  • Retry n times - maybe it's a network blip
  • Circuit-breaker - Try a few times but stop so you don't overload the system.
  • Timeout - Try, but give up after n seconds/minutes
  • Cache - You asked before!
    • I'm going to do a separate blog post on this because I wrote a WHOLE caching system and I may be able to "refactor via subtraction."

If I want features like Retry and Timeout, I could end up littering my code. OR, I could put it in a base class and build a series of HttpClient utilities. However, I don't think I should have to do those things because while they are behaviors, they are really cross-cutting policies. I'd like a central way to manage HttpClient policy!

Enter Polly. Polly is an OSS library with a lovely Microsoft.Extensions.Http.Polly package that you can use to combine the goodness of Polly with ASP.NET Core 2.1.

As Dylan from the Polly Project says:

HttpClientFactory in ASPNET Core 2.1 provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call.

I just went into my Startup.cs and changed this

services.AddHttpClient<SimpleCastClient>();

to this (after adding "using Polly;" as a namespace)

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.RetryAsync(2));

and now I've got Retries. Change it to this:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    ));

And now I've got CircuitBreaker where it backs off for a minute if it's broken (hit a handled fault) twice!

I like AddTransientHttpErrorPolicy because it automatically handles Http5xx's and Http408s as well as the occasional System.Net.Http.HttpRequestException. I can have as many named or typed HttpClients as I like and they can have all kinds of specific policies with VERY sophisticated behaviors. If those behaviors aren't actual Business Logic (tm) then why not get them out of your code?
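
As an illustrative sketch of where you can take this (not from the original post), here is the same typed client layered with wait-and-retry back-off plus a circuit breaker; each AddTransientHttpErrorPolicy call handles the same transient HTTP faults:

services.AddHttpClient<SimpleCastClient>()
    // Retry up to three times, waiting a little longer each time.
    .AddTransientHttpErrorPolicy(builder => builder.WaitAndRetryAsync(new[]
    {
        TimeSpan.FromMilliseconds(200),
        TimeSpan.FromMilliseconds(500),
        TimeSpan.FromSeconds(1)
    }))
    // If the calls keep failing, stop hammering the backend for 30 seconds.
    .AddTransientHttpErrorPolicy(builder => builder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 5,
        durationOfBreak: TimeSpan.FromSeconds(30)));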

Go read up on Polly at https://github.com/App-vNext/Polly and check out the extensive samples at https://github.com/App-vNext/Polly-Samples/tree/master/PollyTestClient/Samples.

Even though it works great with ASP.NET Core 2.1 (best, IMHO), you can use Polly with .NET 4, .NET 4.5, or anything that's compliant with .NET Standard 1.1.

Gotchas

A few things to remember. If you are POSTing to an endpoint and applying retries, you want that operation to be idempotent.

"From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result."

But everyone's API is different. What would happen if you applied a Polly Retry Policy to an HttpClient and it POSTed twice? Is that backend behavior compatible with your policies? Know what behavior you expect and plan for it. You may want to have a GET policy and a POST one and use different HttpClients. Just be conscious of it.

Next, think about Timeouts. HttpClient has a Timeout, which is an "all tries overall" timeout, while a TimeoutPolicy inside a Retry is a "timeout per try." Again, be aware.
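
Here is a hedged sketch of keeping those two timeouts straight on one typed client (requires "using Polly;"). Handlers added earlier wrap the ones added later, so the Polly timeout here applies per try while HttpClient.Timeout caps the whole operation:

services.AddHttpClient<SimpleCastClient>(client => client.Timeout = TimeSpan.FromSeconds(30)) // all tries overall
    .AddTransientHttpErrorPolicy(builder => builder.RetryAsync(2))                            // retries on transient faults
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(10));                          // timeout per try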

Thanks to Dylan Reisenberger for his help on this post, along with Joel Hulen! Also read more about HttpClientFactory on Steve Gordon's blog and learn more about HttpClientFactory and Polly on the Polly project site.


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Migrating your apps, data and infrastructure to Azure is easier than ever


Cloud computing is fundamentally changing IT and transforming businesses at an unprecedented pace. And, companies are rapidly turning to the cloud for the opportunities it brings – increased agility, faster innovation, and efficient operations, just to name a few.

The question I now most often hear from our customers is not ‘why’ should I move to the cloud, but ‘how’ do I move to the cloud. We’ve worked closely with customers like Chevron and Allscripts, who are transforming their businesses by migrating to Azure. Their experiences, along with many other customers, have confirmed the importance of a vendor who understands the need for a flexible approach to cloud migration.
With new Azure innovation and cost-saving offers, there has never been a better time to move your apps, data and infrastructure to Azure. Here’s why…

Flexible migration options with hybrid support 

Azure gives you a flexible migration path with hybrid consistency across your on-premises assets and the cloud. You don’t have to move everything all at once. Whether your business requires a hybrid state long-term or only during the migration period, Azure is hybrid by design and can support your needs.

For example, Azure Security Center and Azure Active Directory can help manage security and identity across assets on-premises and in Azure. Similarly, we continue to invest in hybrid cloud capabilities in Windows Server and SQL Server. For instance, the new Windows Admin Center allows you to natively administer your Windows Servers anywhere, on-premises or in Azure.

Cost-effective throughout the entire migration experience

Azure offers great value during every stage of your cloud migration journey. We offer free assessment, migration, and cost management tools to help you develop a migration plan and optimize your spending after migration. And for Windows Server and SQL Server customers, Azure is the most cost-effective cloud with the Azure Hybrid Benefit. Combined with Azure Reserved Instances, you can save up to 82% compared to pay-as-you-go pricing. These benefits add up to 67% savings compared to AWS RIs for Windows VMs.

Migrate with lower risk, higher confidence

From our decades of experience helping enterprise customers adopt new technology, we understand the challenge of migrating your business-critical investments to the cloud. That is why we have invested in tools to help you plan, online resources to keep you informed, and a flexible, thoughtful approach that mitigates the all-or-nothing risk. In addition, we have a worldwide footprint of Microsoft and partner experts to help you.

Allscripts, a provider of health practice management and electronic health record technology, recently migrated to Azure after a time of rapid growth for their company.

“When you experience periods of sudden growth through acquisitions, like Allscripts has, you need a flexible cloud partner like Microsoft. Tools like Azure Site Recovery have helped us quickly, reliably, and securely migrate several hundred business critical workloads – including Linux, MySQL, Windows, and SQL Server,” says Peter Tomlinson, Director, Information Systems and Technology Operations, Allscripts.

Get started with Azure migration by visiting the Azure migration center for guidance or for connecting with a migration expert.  And to learn more about Azure’s migration capabilities from our Azure engineering leaders Rohan Kumar and Corey Sanders, register for this webcast on May 17, 2018. 


Storage scenarios for Cray in Azure


When you get a dedicated Cray supercomputer on your Azure virtual network, you also get attached Cray® ClusterStor™ storage. This is a great solution for the high-performance storage you need while running jobs on the supercomputer. But what happens when the jobs are done? That depends on what you’re planning to do. Azure has a broad portfolio of storage products and solutions.

Post-processing

Many times, you’re using your Cray supercomputer as part of a multi-stage workflow. Using the weather forecasting scenario we wrote about, after the modeling is done, it’s time to generate products. The most familiar setup for most HPC administrators would be to attach Azure Disks to a virtual machine and run a central file server or a fleet of Lustre servers.

But if your post-processing workload can be updated to use object storage, you get another option. Azure Blob Storage is our object storage solution. It provides secure, scalable storage for cloud-native workloads. This allows your jobs to run at large scale without having to manage file servers.
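
For example, here is a minimal sketch (using the WindowsAzure.Storage SDK; the connection string, container, and blob names are placeholders) of a post-processing step writing its output straight to Blob storage instead of a shared file server:

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;

public class ResultUploader
{
    public static async Task UploadAsync(string connectionString, string localPath)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("forecast-output");
        await container.CreateIfNotExistsAsync();

        // Placeholder blob name for a single model-run output file.
        var blob = container.GetBlockBlobReference("run-0001/surface-temp.nc");
        await blob.UploadFromFileAsync(localPath);
    }
}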

Our recent acquisition of Avere Systems will bring another option for high-performance file systems. Avere’s technology will also enable hybrid setups, allowing you to move your data between on-premises and Azure resources.

Archive

Sometimes when the work is done, the only thing left to do is store your results. You might need to keep the data for future re-analysis or for compliance purposes, but you don’t expect to re-use it very often. Azure Archive storage is a special tier of Blob storage. It provides reliable, low-cost storage for rarely accessed data. Because it’s part of our Blob storage, you don’t need separate tooling to use it in your workflow. In fact, you can even control the tier on a per-object basis within the same storage account.

Reference data

We’ve talked about what to do with data after your jobs are done, but what about before? Azure has many options for storing reference data and other files that you stage into your high-performance workload. Azure Files lets you deploy cloud file shares presented to clients with the SMB 3.0 protocol. Our partnership with NetApp will give you the ability to use the powerful NFS based data management capabilities that NetApp customers know and love as a first party Azure service. And of course, you can always use Azure Disks attached to a file server of your choice.

Data ingest and hybrid

Maybe what you really need is to just get your data into Azure in the first place. StorSimple is a hybrid solution that makes it easy to intelligently tier file share data into Azure. And if you have many terabytes or petabytes of data that you need to ingest into Azure, the Azure Data Box is a secure, ruggedized, appliance that easily fits into your network for large offline data transfers.

Conclusion

No matter what kind of storage you need around your Cray supercomputer, Azure has a product that works for you. To learn more about Azure’s storage offerings, see the Azure Storage page.

How Azure Security Center helps detect attacks against your Linux machines


Azure Security Center (ASC) is now extending its Linux threat detection preview program, both in the cloud and on-premises. New capabilities include detection of suspicious processes, suspect login attempts, and anomalous kernel module loads. Security Center uses auditd to collect machine events, which is one of the most common frameworks for auditing on Linux. Auditd has the advantage of having been around for a long time and living in the mainline kernel. Any Linux machine that runs auditd by default and is covered by Security Center will benefit from this public preview. For a little more detail on how the collection works, check out our private preview announcement from October.

In addition to building up Linux-specific detections, we have also reviewed applicability of our current detections originally developed for Windows. Attackers also like to be OS-agnostic, especially for large-scale attacks, and will reuse tools and techniques where they can. In such circumstances the same detection is also applicable across operating systems. Happily, several of our analytics worked with minimal tuning. Today, I’ll walk you through an analytic example in the form of malicious crypto coin mining and then give some tips on using Azure Log Analytics with Linux machines.

The expanding threat of malicious crypto coin miners

The Windows Defender Research team published a blog post last month talking about the increasing threat of crypto currency miners. These resource thieves have become more of an issue as cryptocurrencies have increased in number and value. The Windows Defender team notes that it appears some cybercriminals are pivoting from ransomware attacks to installing and running their own mining tools on victim machines. Between September 2017 and January 2018, they saw an average of 644,000 unique computers encountering coin mining malware. The post goes on to talk about some of the different coin mining malware we have seen, how they operate, and how enterprises can defend themselves using both System Center Configuration Manager and Windows Defender Advanced Threat Protection.

Also last month, the Windows Defender team talked about how, on March 6, Windows Defender Antivirus identified, within milliseconds, and blocked nearly 500,000 instances of a Dofoil malware campaign. This malware also contained a coin mining payload. Later, it was discovered and reported that the initial infection vector was a poisoned peer-to-peer application. The peer-to-peer application was classified as a potentially unwanted application (PUA). Windows Defender AV customers who had enabled the PUA protection feature benefited, as that vector was already blocked.

Azure Security Center threat detections for Linux

So how does this relate to Azure and the public preview of our new Azure Security Center threat detections for Linux? One year ago, Jessen Kurien posted about how we are detecting crypto currency mining attacks in ASC for Windows machines. In the crypto currency malware industry, a lot of cybercriminals are using portable tools that will install and run on different operating systems based on where they get dropped. They are also utilizing common techniques across systems which means there is an opportunity to write common detections.

In February, FireEye published a blog post that very neatly shows how these mining tools and techniques are spanning both the Windows and Linux worlds. In their overview, you can see both Windows-based PowerShell commands and Bash shell script commands for downloading additional malware, scheduling tasks, and deleting competing malware. Analysis of our own data shows the same thing. These common techniques give us the opportunity to create analytics that work for both Windows and Linux machines.

When creating analytics, we also try to identify the malicious behavior at multiple points in its lifecycle, which increases the likelihood of a detection. Our crypto currency mining analytic is a good example of this. We started with simple executable and command matching, then gradually increased the sophistication of the analytic and created detections that look for coin mining behavior. One of our more recent analytics tries to detect when a system is being optimized for coin mining.
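
As an illustration only (not Security Center’s actual analytic), the first stage of that progression, simple executable and command-line matching, can be approximated in a few lines of Python; the indicator strings below are common commodity-miner markers chosen for the example.

# Toy example of simple command-line matching for coin-mining indicators.
# Illustrative only: this is not the analytic Security Center uses, and real
# detections layer in many more behavioral signals.
MINER_INDICATORS = (
    "stratum+tcp://",   # mining-pool protocol scheme
    "xmrig",            # common Monero miner binary
    "minerd",           # cpuminer binary
    "--donate-level",   # xmrig-specific flag
)

def looks_like_miner(command_line: str) -> bool:
    """Return True if a process command line contains a known indicator."""
    lowered = command_line.lower()
    return any(indicator in lowered for indicator in MINER_INDICATORS)

print(looks_like_miner("./xmrig -o stratum+tcp://pool.example.com:3333"))  # True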

Linux alerts in Azure Security Center

So how do we see these crypto coin mining alerts in Linux? Azure Security Center customers that have Linux machines running auditd will be able to see these alerts alongside other Azure Security Center alerts. Just go into the Azure Security Center Overview page.

image

Then, either click into your subscription alerts through New Alerts & Incidents, Detections, or dive right into a specific resource.

The alert for crypto coin malware will show up as the Suspicious Process Executed alert that you can see below.

image

When you click through into the alert, you will see the process and command line that triggered the alert, as well as suggested remediation steps.

image

As you can see, this event triggered as a Suspicious Process execution. You can see the command that was run in the Description and the specific command line down below. Also, because we recognize this as crypto coin mining behavior, we have linked to a report on Bitcoin Miners.

Tips for using Azure Log Analytics with Linux

Earlier this year, Ajeet Prakash posted an article about how to utilize Azure Log Analytics while investigating Azure Security Center alerts. He walked through how to drill into a security alert as part of an investigation, and how to get to the recorded Windows logs through Azure Log Analytics. You have the same ability to do this for your Linux machines. There are some differences you will want to be aware of to get the most out of Azure Log Analytics when looking at Linux machines, and I’ve highlighted a couple of the key ones below.

When running Security Baseline searches (1.) you can call out specifically the BaselineType field to just look at the results for your “WindowsOS” or “Linux” machines. When you want to look at Linux logs specifically, you will need to pull the information from the LinuxAuditLog under the Security drop down. If you are writing out your query, use LinuxAuditLog (2.) as your source. If you look in the table of the image below, you can see there are a number of record types available to look at under LinuxAuditLog.

image

For more information about writing queries, check out the Azure Log Analytics Getting Started page.
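
If you would rather pull LinuxAuditLog records programmatically than through the portal, a query can also be issued from code. The following sketch uses the Python azure-monitor-query package; the workspace ID is a placeholder, and the query text (beyond the LinuxAuditLog table name mentioned above) is an assumption to adapt to your own data.

# Sketch: read recent LinuxAuditLog records from a Log Analytics workspace.
# Requires azure-monitor-query and azure-identity; the workspace ID and the
# query beyond the table name are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query="LinuxAuditLog | take 20",
    timespan=timedelta(days=1),
)

# Assumes the query succeeds; production code should check the response status.
for table in response.tables:
    for row in table.rows:
        print(row)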

More resources

That is a quick intro to the new Linux analytics for Azure Security Center. We’ve walked through a crypto currency example and shown how you can still utilize tools like Azure Log Analytics for investigations. Many of the things you are used to from the Windows side carry over and should seem familiar. For more information about how Azure Security Center works, see the following:

See Our Smart Solutions for the Fleet Industry at NAFA 2018 I&E


This week, the Bing Maps team will be in sunny California at the NAFA 2018 Institute & Expo, known as the largest event for fleet professionals. We will share our latest developments for the industry – a collection of powerful Fleet Management APIs that include Truck Routing, Isochrone, Distance Matrix and Snap to Road.

With this collection of Fleet Management APIs, we are packaging up advanced fleet management capabilities into building blocks that developers can use to create new applications or improve upon existing ones. With these easy-to-use APIs, we are making it possible for companies large and small to benefit from robust, yet cost-effective, solutions tailored to the fleet management industry.

We will be located at Booth #100, showing off what each of these APIs can do and discussing the ins and outs of our solutions with attendees.

Also, we will be holding deep dive demos at the Charging Lounge in Booth #102. There attendees can see how easy it is to deploy a feature-rich fleet tracking application in just 10 minutes with the Bing Maps Fleet Tracker solution that was announced in January.

Below are details:

Microsoft/Bing Maps at Booth #100

Charging Lounge Demos at Booth #102

  • April 25 (Wednesday) - 2:15 PM, 3:45 PM, 4:15 PM
  • April 26 (Thursday) - 10:45 AM, 11:15 AM, 1:15 PM

We hope to see you at NAFA 2018 I&E. But if you aren’t able to make it, we will share a recap of all the cool happenings from the event on the Bing Maps blog.

To learn more about the Bing Maps fleet management APIs or to reach out to the team, visit https://www.microsoft.com/en-us/maps/fleet-management.

- Bing Maps Team

Visual Studio 2017 roadmap now available


With the release of Visual Studio 2017, we moved to a release schedule that delivers new features and fixes to you faster. With this faster iteration, we heard you would like more visibility into what’s coming. So, we’ve now published the Visual Studio Roadmap. The roadmap lists some of the more notable upcoming features and improvements but is not a complete list of all that is coming to Visual Studio.

When you look at the roadmap, you’ll see that we grouped items by quarter. Since every quarter includes several minor and servicing releases, the actual delivery of a feature could happen any time during the quarter. As we release these features and improvements, we’ll update the roadmap to indicate the release in which they are first available. The roadmap also includes suggestions from all of you, which are linked to the community feedback source.

Please let us know what you think about the roadmap in the comments, as our primary goal is to make it as useful as we can to you. We will be refreshing the document every quarter – with the next update coming around July. Stay tuned!

Thanks,

John

John Montgomery, Director of Program Management for Visual Studio
@JohnMont

John is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, JavaScript, and .NET. John has been at Microsoft for 17 years, working in developer technologies the whole time.

Announcing a single C++ library manager for Linux, macOS and Windows: Vcpkg


At Microsoft, the core of our vision is “Any Developer, Any App, Any Platform” and we are committed to bringing you the most productive development tools and services to build your apps across all platforms. With this in mind, we are thrilled to announce today the availability of vcpkg on Linux and macOS. This gives you immediate access to the vcpkg catalog of C++ libraries on two new platforms, with the same simple steps you are familiar with on Windows and UWP today.

Vcpkg has come a long way since its launch at CppCon 2016. Starting from only 20 libraries, we have seen an incredible growth in the last 19 months with over 900 libraries and features now available. All credit goes to the invaluable contributions from our amazing community.

In the feedback you have given us so far, Linux and Mac support was by far the most requested feature. So we are excited today to see vcpkg reach an even wider community and facilitate cross-platform access to more C++ libraries. We invite you to try vcpkg today, whether you target Windows, Linux, or macOS.

To learn more about using vcpkg on Windows, read our previous post on how to get started with vcpkg on Windows.

Using vcpkg on Linux and Mac

The vcpkg tool is now compatible with Linux, Mac, and other POSIX systems. This was made possible only through the contributions of several fantastic community members.

At the time of writing this blog post, over 350 libraries are available for Linux and Mac, and we expect that number to grow quickly. We currently test daily on Ubuntu LTS 16.04/18.04, and we have had success on Arch, Fedora, and FreeBSD.

 

Getting started:

1)      Clone the vcpkg repo: git clone https://github.com/Microsoft/vcpkg

2)      Bootstrap vcpkg: ./bootstrap-vcpkg.sh

3)      Once vcpkg is built, you can build any library using the following syntax:

vcpkg install sdl2

This will install sdl2:x64-linux (x64 static is the default and only option available on Linux).

The resulting headers and libraries are stored in the same folder tree; reference this folder in your build system configuration.

 

4)      Using the generated library

  1. If you use CMake as your build system, use CMAKE_TOOLCHAIN_FILE to make libraries available with `find_package()`. E.g.: cmake .. “-DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake”

  2. Otherwise, reference the vcpkg folder containing the headers (vcpkg/installed/x64-linux/include) and the one containing the libraries (vcpkg/installed/x64-linux/lib) to build your project against the generated libraries.

Using vcpkg to target Linux from Windows via WSL

As WSL is a Linux system, we’ll use WSL as we did with Linux. Once configured correctly, you will be able to produce Linux binaries from your Windows machine as if they had been generated from a Linux box. Follow the same instructions as for installing on Linux. See how to set up WSL on Windows 10 and configure it with the Visual Studio extension for Linux.

As shown in the screenshot above, the vcpkg directory could be shared between Windows and WSL. In this example sdl2 and sqlite3 were built from WSL (binaries for Linux); sqlite3 was built also for Windows (Windows dll).

In closing

Install vcpkg on Linux or Mac, try it in your cross-platform projects, and let us know how it helps, how we can make it better, and what your cross-platform usage scenarios are.

As always, your feedback and comments really matter to us. Open an issue on GitHub or reach out to us at vcpkg@microsoft.com with any comments and suggestions, or take a moment to complete our surveys.

 

The edge of possibility: best practices for IoT-driven infrastructure transformation


Corporate IT infrastructure has changed a lot in the past decade. From a relatively simple bounded space, with a defined “inside” and “outside,” IT networks have evolved to incorporate a wider range of devices, such as smartphones and tablets, and a growing amount of traffic from additional diverse networks, including the public Internet. However, nothing has the potential to disrupt traditional infrastructure topologies more than the Internet of Things (IoT). This has implications for infrastructure and operations (I&O) teams, as well as developers who are responsible for IoT solutions. A recent Forrester report titled “Edge Computing: IoT Will Spawn A New Infrastructure Market” highlights many of the changes and challenges that must be faced in this rapid evolution. Let’s take a look at a few of the highlights.

  1. Consider the full breadth of devices: The “things” that are connected in IoT require new approaches to development and management, but these endpoints are not the only new hardware you have to consider. Diverse components, including field-located IoT gateways and micro-datacenters, will become part of the networked environments. The need for edge infrastructure will depend on how much latency can be tolerated in the system and the complexity of the operations that need to be performed on data. Developers should understand not only IoT endpoints and hubs, but also the architecture in between and how to best take advantage of it to maximize performance and value. For scenarios in which advanced processing needs to happen on devices, Azure IoT Edge provides a fully managed service that delivers cloud intelligence locally.
  2. Adopt a hybrid mindset: As in many other areas of IT, solutions in the IoT space often use a mix of private and public cloud. IT professionals need to understand cloud development and operations principles and best practices, while being able to incorporate owned infrastructure and proprietary line-of-business software into the mix.
  3. Be open to new architectures: IoT systems can be deployed with a wide range of topologies. The control plane may be centralized in the cloud, hosted near the edge, or distributed across multiple locations. Components may be connected to more than one system, requiring a strict hierarchy to prevent conflicts. The flexibility to join, move, and remove components across different systems may also be required. The right approach to these issues is unique to your IoT implementation.
  4. Use software-defined approaches: With a wide range of devices, network protocols, security requirements, and data processing demands, the abstraction provided by the software layer is critical to connecting IoT components into functional, manageable, and secure systems. Using a platform-as-a-service (PaaS) approach can abstract away many of the complexities of managing IoT systems at lower levels and enable you to focus on optimizing the system for your business needs.
  5. Don’t go at it alone: The capacity to handle real-time data and scale to large numbers of devices is made much easier with the use of a cloud platform. For example, Azure IoT Hub and other Azure IoT platform services can give you complete, managed solutions for key challenges, including device provisioning and management, secure authentication, messaging, data processing, and more. Additionally, using such a platform enables you to connect easily to other services that may be critical to achieving your goals, such as machine learning and event handling.
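
To make that last point concrete, here is a minimal device-to-cloud messaging sketch using the Python azure-iot-device SDK; the connection string and telemetry fields are placeholders, and a real device would add retry logic and hardened credential handling.

# Sketch: send one device-to-cloud telemetry message to Azure IoT Hub.
# Requires the azure-iot-device package; the connection string and payload
# fields below are placeholder assumptions.
import json

from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string(
    "<device-connection-string>"
)

message = Message(json.dumps({"temperature": 21.5, "humidity": 47}))
message.content_type = "application/json"
message.content_encoding = "utf-8"

client.send_message(message)  # IoT Hub handles authentication, routing, and scale
client.shutdown()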

There are many other worthwhile insights to be gained from the Forrester study, so make sure to download the full report.

Azure Toolkit for IntelliJ integrates with HDInsight Ambari and supports Spark 2.2


To provide more authentication options, Azure Toolkit for IntelliJ now supports integration with HDInsight clusters through Ambari for job submission, cluster resource browsing, and storage file navigation. You can easily link or unlink any cluster by using an Ambari-managed username and password, which is independent of your Azure sign-in credentials. The Ambari connection applies to normal Spark and Hive clusters hosted within HDInsight on Azure. These additions give you more flexibility in how you connect to your HDInsight clusters in addition to your Azure subscriptions, while also simplifying your experience of submitting Spark jobs.

With this release, you can benefit from the new functionality and consume the new libraries and APIs from Spark 2.2 in Azure Toolkit for IntelliJ. You can create, author, and submit a Spark 2.2 project to a Spark 2.2 cluster. With the backward compatibility of Spark 2.2, you can also submit your existing Spark 2.0 and Spark 2.1 projects to a Spark 2.2 cluster.

How to link a cluster

  • Click Link a cluster from Azure Explorer.

image

  • Enter the Cluster Name, Storage Account, and Storage Key, then select a container from Storage Container, and finally enter the Username and Password.

image

Please note that you can connect using either an Ambari username and password or a Secure Hadoop domain username and password. The storage account and key information will become optional for cluster connection in our next release.

  • The linked cluster is displayed under the HDInsight node in Azure Explorer. You can submit your Spark jobs to this linked cluster.

image

  • You can also unlink a cluster from Azure Explorer.

image

How to install or update

Please upgrade IntelliJ to version 2018.1 first. IntelliJ will prompt you for the latest update if you already have Azure Toolkit for IntelliJ installed, or you can get the latest bits by going to the IntelliJ repository and searching for Azure Toolkit for IntelliJ.

image

For more information, check out the following:

Learn more about today’s announcements on the Azure blog and Big Data blog, and discover more Azure service updates.

Feedback

We look forward to your comments and feedback. If there is any feature request, customer ask, or suggestion, please send us a note to hdivstool@microsoft.com. For bug submission, please open a new ticket using the template.


Azure Toolkit for Eclipse integrates with HDInsight Ambari and supports Spark 2.2


To provide more authentication options, Azure Toolkit for Eclipse now supports integration with HDInsight clusters through Ambari for job submission, cluster resource browsing, and storage file navigation. You can easily link or unlink any cluster by using an Ambari-managed username and password, which is independent of your Azure sign-in credentials. The Ambari connection applies to normal Spark and Hive clusters hosted within HDInsight on Azure. These additions give you more flexibility in how you connect to your HDInsight clusters in addition to your Azure subscriptions, while also simplifying your experience of submitting Spark jobs.

With this release, you can benefit from the new functionality and consume the new libraries and APIs from Spark 2.2 in Azure Toolkit for Eclipse. You can create, author, and submit a Spark 2.2 project to a Spark 2.2 cluster. With the backward compatibility of Spark 2.2, you can also submit your existing Spark 2.0 and Spark 2.1 projects to a Spark 2.2 cluster.

How to link a cluster

  • Click Link a cluster from Azure Explorer.

Link Cluster

  • Enter the Cluster Name, Storage Account, and Storage Key, then select a container from Storage Container, and finally enter the Username and Password. Click the OK button to link the cluster.

Eclipse

Please note that you can connect using either an Ambari username and password or a Secure Hadoop domain username and password. The storage account and key information will become optional for cluster connection in our next release.

  • The linked cluster is displayed under the HDInsight node in Azure Explorer after linking. You can submit your Spark jobs to this linked cluster.

linked-cluster

  • You can also unlink a cluster from Azure Explorer.

unlink

How to install/update

Eclipse will prompt you for the latest update if you already have the plugin installed, or you can get the latest bits by going to the Eclipse repository and searching for Azure Toolkit for Java.

8_thumb

For more information, check out the following:

Learn more about today’s announcements on the Azure blog and Big Data blog, and discover more Azure service updates.

Feedback

We look forward to your comments and feedback. If there is any feature request, customer ask, or suggestion, please send us a note to hdivstool@microsoft.com. For bug submission, please open a new ticket using the template.

Azure Analysis Services integration with VNets via On-Premises Data Gateway


We are pleased to announce that Azure Analysis Services now provides integration with cloud data sources residing on Azure Virtual Networks (VNets). Organizations use VNets for enhanced security and isolation. Cloud data sources such as Azure SQL Data Warehouse and Azure SQL Database can be secured with VNet endpoints.

Azure Analysis Services inbound traffic can be controlled using firewall rules. However, Azure Analysis Services requires access to data sources in order to perform data refresh operations. If the data sources are cloud-based and secured with a VNet, it is necessary to provide Analysis Services with access. This can be done using the AlwaysUseGateway server property.

Setting this property to true specifies that Azure Analysis Services always uses an on-premises data gateway to access data sources, whether or not the data source happens to be cloud-based. It therefore requires that the on-premises data gateway is set up as described in this article and that the gateway computer resides on the VNet. Data refresh operations are directed to the gateway machine, which in turn can successfully access the data source(s).

The AlwaysUseGateway server property can be set by using SQL Server Management Studio (SSMS). Connect to the Azure Analysis Services server, right-click on the server, select Properties, General, and Show Advanced (All) Properties.

AlwaysUseGateway

Big changes behind the scenes in R 3.5.0


A major update to R is now available. The R Core group has announced the release of R 3.5.0, and binary versions for Windows and Linux are now available from the primary CRAN mirror. (The Mac release is forthcoming.)

Probably the biggest change in R 3.5.0 will be invisible to most users, except for the performance improvements it brings. The ALTREP project has now been rolled into R to use more efficient representations of many vectors, resulting in less memory usage and faster computations in many common situations. For example, the sequence vector 1:1000000 is now represented just by its start and end value, instead of allocating a vector of a million elements as earlier versions of R would do. So while R 3.4.3 takes about 1.5 seconds to run x <- 1:1e9 on my laptop, it's instantaneous in R 3.5.0.

There have been improvements in other areas too, thanks to ALTREP. The output of the sort function has a new representation: it includes a flag indicating that the vector is already sorted, so that sorting it again is instantaneous. As a result, running x <- sort(x) is now free the second and subsequent times you run it, unlike earlier versions of R. This may seem like a contrived example, but operations like this happen all the time in the internals of R code. Another good example is converting a numeric to a character vector: as.character(x) is now also instantaneous (the coercion to character is deferred until the character representation is actually needed). This has significant impact in R's statistical modelling functions, which carry around a long character vector that usually contains just numbers — the row names — with the design matrix. As a result, the calculation:

d <- data.frame(y = rnorm(1e7), x = 1:1e7)
lm(y ~ x, data=d)

runs about 4x faster on my system. (It also uses a lot less memory: running the equivalent command with 10x more rows failed for me in R 3.4.3 but succeeded in 3.5.0.)

The ALTREP system is designed to be extensible, but in R 3.5.0 the system is used exclusively for the internal operations of R. Nonetheless, if you'd like to get a sneak peek on how you might be able to use ALTREP yourself in future versions of R, you can take a look at this vignette (with the caveat that the interface may change when it's finally released).

There are many other improvements in R 3.5.0 beyond the ALTREP system, too. You can find the full details in the announcement, but here are a few highlights:

  • All packages are now byte-compiled on installation. R's base and recommended packages, and packages on CRAN, were already byte-compiled, so this will have the effect of improving the performance of packages installed from Github and from private sources.
  • R's performance is better when many packages are loaded, and more packages can be loaded at the same time on Windows (when packages use compiled code).
  • Improved support for long vectors, by functions including object.size, approx and spline
  • Reading in text data with readLines and scan should be faster, thanks to buffering on text connections.
  • R should handle some international data files better, with several bugs related to character encodings having been resolved.

Because R 3.5.0 is a major release, you will need to re-install any R packages you use. (The installr package can help with this.) On my reading of the release notes, there haven't been any major backwardly-incompatible changes, so your old scripts should continue to work. Nonetheless, given the significant changes behind the scenes, it might be best to wait for a maintenance release before using R 3.5.0 for production applications. But for developers and data science work, I recommend jumping over to R 3.5.0 right away, as the benefits are significant. 

You can find the details of what's new in R 3.5.0 at the link below. As always, many thanks go to the R Core team and the other volunteers who have contributed to the open source R project over the years.

R-announce mailing list: R 3.5.0 is released

What’s brewing in Visual Studio Team Services: April 2018 Digest


This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS.

Chain related builds together using build completion triggers

Large products have several components that are dependent on each other. These components are often independently built. When an upstream component (a library, for example) changes, the downstream dependencies have to be rebuilt and revalidated. Teams typically manage these dependencies manually.

Now you can trigger a build upon the successful completion of another build. Artifacts produced by an upstream build can be downloaded and used in the later build, and you can also get data from these variables: Build.TriggeredBy.BuildId, Build.TriggeredBy.BuildDefinitionId, Build.TriggeredBy.BuildDefinitionName. See the build triggers documentation for more information.

This feature was prioritized based on what is currently the second-highest voted suggestion with 1,129 votes.

Setup build chaining

Keep in mind that in some cases, a single multi-phase build could meet your needs. However, a build completion trigger is useful if your requirements include different configuration settings, options, or a different team to own the dependent process.

Trigger CI builds from YAML

You can now define your continuous integration (CI) trigger settings as part of your YAML build definition file. By default, when you push a new .vsts-ci.yml file to your Git repository, CI will be configured automatically for all branches.

To limit the branches that you want triggered, simply add the following to your file to trigger builds on pushes to master or any branch matching the releases/* pattern.


trigger:
- master
- releases/*

If you want to disable the trigger or override the trigger settings in the YAML files, you can do so on the definition.

See the YAML build triggers documentation for more information.

ci triggers from yaml

Streamline deployment to Kubernetes using Helm

Helm is a tool that streamlines installing and managing Kubernetes applications. It has also gained a lot of popularity and community support in the last year. A Helm task in Release is now available for packaging and deploying Helm charts to Azure Container Service (AKS) or any other Kubernetes cluster.

VSTS already has support for Kubernetes and Docker containers. With the addition of this Helm task, now you can set up a Helm based CI/CD pipeline for delivering containers into a Kubernetes cluster. See the Deploy using Kubernetes to Azure Container Service documentation for more information.

helm tasks

Continuously deploy to Azure Database for MySQL

You can now continuously deploy to Azure Database for MySQL - Azure’s MySQL database as a service. Manage your MySQL script files in version control and continuously deploy as part of a release pipeline using a native task rather than PowerShell scripts.

Configure Go and Ruby applications using Azure DevOps Projects

Azure DevOps Projects makes it easy to get started on Azure. It helps you launch an application on the Azure service of your choice in just a few steps. DevOps Projects sets up everything you need for developing, deploying, and monitoring your app. Now you can set up an entire DevOps pipeline for Go and Ruby applications too. See the Deploy to Azure documentation for more information.

Deploy Ruby on Rails applications

A new Azure App Service release definition template now includes the tasks needed for deploying Ruby on Rails applications to Azure Web App on Linux. When this release definition template is used, the App Service Deploy task is pre-populated with an inline deployment script that makes bundler (the dependency manager) install the application’s dependencies.

Build applications written in Go

Now you can build your Go applications in VSTS. Use the Go Tool Installer task to install one or more versions of Go Tool on the fly. This task acquires a specific version of Go Tool needed by your project and adds it to the PATH of the build agent. If the targeted Go Tool version is already installed on the agent, this task will skip the process of downloading and installing it again. The Go task helps you download dependencies, build, or test your application. You can also use this task to run a custom Go command of your choice.

Deployment Groups are generally available

We are excited to announce that Deployment Groups is out of preview and is now generally available. Deployment Groups is a robust out-of-the-box multi-machine deployment feature of Release Management in VSTS/TFS.

With Deployment Groups, you can orchestrate deployments across multiple servers and perform rolling updates, while ensuring high availability of your application throughout. You can also deploy to servers on-premises or virtual machines on Azure or any cloud, plus have end-to-end traceability of deployed artifact versions down to the server level.

The agent-based deployment capability relies on the same build and deployment agents that are already available, which means you can use the full task catalog on your target machines in the Deployment Group phase. From an extensibility perspective, you can also use the REST APIs for deployment groups and targets for programmatic access.

Read more in the announcement of GA.

Improve code quality with the latest extensions from SonarSource

SonarSource recently released an updated SonarQube extension and a new SonarCloud extension, which enable static code analysis for numerous languages. The VSTS Gradle and Maven tasks take advantage of these extensions for Java builds in particular. Just enable Run SonarQube or SonarCloud Analysis on version 2.* of the Gradle or Maven task, then add the Prepare and Publish SonarQube/SonarCloud tasks as shown below. See the Analyzing with SonarQube documentation for more information.

Tasks for Gradle and Maven

Publish markdown files from a Git repository as a Wiki

Developers create documentation for “APIs”, “SDKs”, and “help docs explaining code” in code repositories. Readers then need to sift through code to find the right documentation. Now you can simply publish markdown files from code repositories and host them in Wiki.

public code as wiki action

From within Wiki, start by clicking Publish code as wiki. Next, you can specify a folder in a Git repository that should be promoted.

publish pages dialog

Once you click on Publish, all the markdown files under the selected folder will be published as a wiki. This will also map the head of the branch to the wiki so that any changes you make to the Git repo will be reflected immediately.

You can learn more in the announcement. Also, the wiki REST APIs are now public. See the Wiki functions and Wiki search documentation for more information.

Integrate Power BI with VSTS Analytics using views

We are excited to announce an easy-to-use solution for integrating Power BI with the VSTS Analytics extension. You don’t have to know how to write OData queries anymore! Our new feature Analytics views makes getting VSTS work tracking data into Power BI simple, and it works for the largest accounts. Similar to a work items query, an Analytics View specifies filters that scope the result of work items data and the columns. Additionally, views allow you to report on past revisions of work items and easily create trend reports.

We provide a set of Default Analytics views that work well for customers with smaller accounts and basic scenarios. Larger accounts might need to scope down the data they are pulling into Power BI. Analytics views let you do just that. Scope your data and history to exactly what you want to report on in Power BI. Analytics views you create in the Analytics hub in VSTS are immediately available to select from the VSTS Power BI Data Connector. Now you can edit your default views and create new views to fine-tune the records, fields, and history returned to Power BI.

Work item tab showing a view filtered to Priority 1 bugs on the Fiber Suite App team and the Fiber Suite report team.

Wrapping Up

As always, you can find the full list of features in our release notes. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.

Happy coding!

@tfsbuck

Azure Container Instances now generally available


I am proud to announce the general availability of Azure Container Instances (ACI) – a serverless way to run both Linux and Windows containers. ACI offers you an on-demand compute service delivering rapid deployment of containers with no VM management and automatic, elastic scale. When we released the ACI preview last summer, it was the first of its kind and fundamentally changed the landscape of container technology. It was the first service to deliver innovative serverless containers in the public cloud. As part of today’s announcement, I am also excited to announce new lower pricing, making it even less expensive to deploy a single container in the cloud. ACI also continues to be the fastest cloud-native option for customers in the cloud, getting you compute in mere seconds while also providing rich features like interactive terminals within running containers and an integrated Azure portal experience.


ACI GA - Imgur


In addition to the ease-of-use and granular billing available with ACI, customers are choosing ACI as their serverless container solution because of its deep security model, which protects each individual container at the hypervisor level and provides a strong security boundary for multi-tenant scenarios. It can sometimes be a challenge to secure multi-tenant workloads running inside containers on the same virtual machine. Enabling this isolation without requiring you to create a hosting cluster is unique among clouds and is a true cloud-native model.

Jedox AG, a customer providing business intelligence solutions for systematic data analysis, is using ACI to sandbox customer trials and demos for their marketplace of SaaS solutions. You can hear more from Jedox on their use of ACI in the newly published case study.

Today, we see customers using ACI across a wide spectrum of scenarios including batch processing, continuous integration, and event-driven computing. We hear consistently from customers that ACI is uniquely suited to handle their burst workloads. ACI supports quick, cleanly packaged burst compute that removes the overhead of managing cluster machines. Some of our largest customers are also using ACI for data processing where source data is ingested, processed, and placed in a durable store such as Azure Blob Storage. ACI enables each stage of work to be efficiently packaged as a container assigned with custom resource definitions for agile development, testing, and deployment. By processing the data with ACI rather than statically provisioned virtual machines, you can achieve significant cost savings due to ACI’s granular per-second billing.
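
For the burst and batch scenarios described above, container groups can also be created programmatically. Here is a hedged sketch using the Python azure-mgmt-containerinstance package; the subscription, resource group, image, and sizing values are placeholders, and the exact model names may vary between SDK versions.

# Sketch: create a single-container ACI group for a burst batch job.
# Assumes azure-mgmt-containerinstance (track 2) and azure-identity; all
# names, the image, and the resource sizes are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container,
    ContainerGroup,
    ResourceRequests,
    ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

worker = Container(
    name="batch-worker",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=2.0, memory_in_gb=4.0)
    ),
)

group = ContainerGroup(
    location="westus",
    containers=[worker],
    os_type="Linux",
    restart_policy="Never",  # run once, then stop billing
)

poller = client.container_groups.begin_create_or_update(
    "<resource-group>", "aci-batch-job", group
)
print(poller.result().provisioning_state)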

To make ACI even more compelling for these workloads, along with our general availability I am thrilled to announce new, lower prices, including making initial creation free. The following table* summarizes the new Azure Container Instances pricing structure. For detailed information, please visit the pricing page. Note that the new pricing is effective starting July 1, 2018.

                         New prices (US West)     Old price (Global)
Create fee               None                     $0.0025 per instance created
vCPU per second          $0.000012                $0.0000125
Memory (GB) per second   $0.000004                $0.0000125
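
To see what per-second billing means in practice, here is a small worked example using the new US West rates from the table above; the container shape and run time are hypothetical.

# Worked example: estimated cost of one ACI run at the new US West rates
# shown above. The container shape and duration are hypothetical.
VCPU_PER_SECOND = 0.000012       # $ per vCPU per second
MEMORY_GB_PER_SECOND = 0.000004  # $ per GB per second
CREATE_FEE = 0.0                 # container group creation is now free

vcpus = 2
memory_gb = 4
duration_seconds = 500           # e.g., one short burst batch job

cost = CREATE_FEE + duration_seconds * (
    vcpus * VCPU_PER_SECOND + memory_gb * MEMORY_GB_PER_SECOND
)
print(f"Estimated cost: ${cost:.4f}")  # 500 * (0.000024 + 0.000016) = $0.0200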

When we first launched ACI, we also created an experimental version of the ACI Connector for Kubernetes. It provided the benefits of ACI’s per-second billing and zero infrastructure execution via the portable Kubernetes API. At KubeCon last December we announced the Virtual Kubelet, an open source project designed to bridge Kubernetes with limitless and serverless container offerings similar to ACI. Since the announcement of Virtual Kubelet, the project has gained momentum in the Kubernetes community and we now have multiple providers, including VMware, AWS and Hyper.sh, collaborating closely with us.

We have been so excited about this new compute offering since preview, and we have listened to your feedback to shape several important new features including:

We have many exciting features including VNET integration planned later this year. Learn more about current features on our documentation or our sample tutorials.

ACI is a transformative way of using containers in the public cloud. With this service, Azure introduced a new compute primitive that combines the flexibility and security of virtual machines with the speed and simplicity of containers – all powered by Azure’s global cloud. With Virtual Kubelet support, ACI also realizes the vision of a “serverless Kubernetes.”

Go ahead and give it a try and provide us your feedback.

Thanks,

Corey


*Please refer to the pricing page for the most up-to-date information on product pricing.
