
Offering the largest scale and broadest choice for SAP HANA in the cloud


Microsoft at SAPPHIRE NOW 2018

Enterprises have been on a journey of digital transformation for many years. For many enterprises, this journey cannot start or gain momentum until core SAP Enterprise Resource Planning (ERP) landscapes are transformed. The last year has seen an acceleration of this transformation, with SAP customers of all sizes, such as Penti, Malaysia Airlines, Guyana Goldfields, Rio Tinto, Co-op, and Coats, migrating to the cloud on Microsoft Azure. This cloud migration, which is central to digital transformation, helps increase business agility, lower costs, and enable new business processes to fuel growth. It has also allowed these customers to take advantage of advancements in technology such as big data analytics, self-service business intelligence (BI), and the Internet of Things (IoT).

As leaders in enterprise software, SAP and Microsoft provide the preferred foundation for enabling the safe and trusted path to digital transformation. Together we enable the inevitable move to SAP S/4HANA which will help accelerate digital transformation for customers of all sizes.

Microsoft has collaborated with SAP for 20+ years to enable enterprise SAP deployments with Windows Server and SQL Server. In 2016 we partnered to offer SAP-certified, purpose-built SAP HANA on Azure Large Instances supporting up to 4 TB of memory. Last year at SAPPHIRE NOW, we announced the largest scale for SAP HANA in the public cloud, with support for up to 20 TB on a single node, and our M-series VM sizes up to 4 TB. With the success of M-series VMs and our SAP HANA on Azure Large Instances, customers have asked us for even more choices to address a wider variety of SAP HANA workloads.

Microsoft is committed to offering the most scale and performance for SAP HANA in the public cloud, and yesterday announced additional SAP HANA offerings on Azure which include:

  • Largest SAP HANA optimized VM size in the cloud: We are happy to announce that the Azure M-series will support large memory virtual machines with sizes up to 12 TB. These new sizes will be launching soon, pushing the limits of virtualization in the cloud for SAP HANA. These new sizes are based on Intel Xeon Scalable (Skylake) processors and will offer the most memory available of any VM in the public cloud.
  • Wide range of SAP HANA certified VMs: For customers needing smaller instances, we have expanded our offering with smaller M-series VM sizes, extending Azure’s SAP HANA certified M-series VM range from 192 GB to 4 TB across 10 different VM sizes. These sizes offer on-demand, SAP-certified instances with the flexibility to spin up or scale up in minutes and to spin down to save costs, all in a pay-as-you-go model available worldwide. This flexibility and agility are not possible with a private cloud or on-premises SAP HANA deployment.
  • 24 TB bare metal instance and optimized price per TB: For customers that need a higher-performance dedicated offering for SAP HANA, we are increasing our investments in our purpose-built bare metal SAP HANA infrastructure. We now offer additional SAP HANA TDIv5 options in 6 TB, 12 TB, 18 TB, and 24 TB configurations, in addition to our current configurations from 0.7 TB to 20 TB. This enables customers who need more memory with the same number of cores to get a better price per TB deployed.
  • Most choice for SAP HANA in the cloud: With 26 distinct SAP HANA offerings from 192 GB to 24 TB, scale-up certification up to 20 TB and scale-out certification up to 60 TB, global availability in 12 regions with plans to increase to 22 regions in the next 6 months, Azure now offers the most choice for SAP HANA workloads of any public cloud.

Microsoft Azure also enables customers to derive insights and analytics from SAP data with services such as Azure Data Factory SAP HANA connector to automate data pipelines, Azure Data Lake Store for hyper scale data storage and Power BI, an industry leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data.

Our unique partnership with SAP to enable customer success

Last November, Microsoft and SAP announced an expanded partnership to help customers accelerate their business transformation with S/4HANA on Azure. Microsoft has been a longtime SAP customer for many of our core business processes such as financials and supply chain. As part of this renewed partnership, Microsoft announced it will use S/4HANA for Central Finance, and SAP announced it will use Azure to host 17 internal business-critical systems.

I am very pleased to share an update on SAP’s migration to Azure from Thomas Saueressig, CIO of SAP:

“In 2017 we started to leverage Azure as IaaS Platform. By the end of 2018 we will have moved 17 systems including an S/4HANA system for our Concur Business Unit. We are expecting significant operational efficiencies and increased agility which will be a foundational element for our digital transformation.”

Correspondingly, here is an update on Microsoft’s migration to S/4HANA on Azure from Mike Taylor, GM of Partner and Enterprise Applications Services at Microsoft.

“In 2017 we started the internal migration of our SAP system that we have been running for over 25 years, to S/4HANA. As part of that journey we felt it was necessary to first move our SAP environment completely onto Azure, which we completed in February 2018. With the agility that Azure offers we have already stood up multiple sandbox environments to help our business realize the powerful new capabilities of S/4HANA.”

As large enterprises, we are going through our business transformation with SAP S/4HANA on Azure and we will jointly share lessons from our journey and reference architectures at several SAPPHIRE NOW sessions.

Last November, we announced the availability of SAP HANA Enterprise Cloud (HEC) with Azure to offer customers an accelerated path to SAP HANA. We see several customers, such as Avianca and Aker, embarking on their business transformation by leveraging the best of both worlds: SAP’s managed services and the most robust cloud infrastructure for SAP HANA on Azure.

“At Avianca, we are committed to providing the best customer experience through the use of digital technologies. We have the customer as the center of our strategy, and to do that, we are in a digital transformation of our customer experience and of our enterprise to provide our employees with the best tools to increase their productivity. Our new implementation of SAP HANA Enterprise Cloud on Microsoft Azure is a significant step forward in our enterprise digital transformation,” said Mr. Santiago Aldana Sanin, Chief Technology Officer, Chief Digital Officer at Avianca. “The SAP and Microsoft partnership continues to create powerful solutions that combine application management and product expertise from SAP with a global, trusted and intelligent cloud from Microsoft Azure. At Avianca, we are leveraging the strengths of both companies to further our journey to the cloud.”

Learn more about HEC with Azure.

Announcing SAP Cloud Platform general availability on Azure

The SAP Cloud Platform offers developers a choice to build their SAP applications and extensions using a PaaS development platform with integrated services. Today, I’m excited to announce that SAP Cloud Platform is now generally available on Azure. Developers can now deploy Cloud Foundry based SAP Cloud Platform on Azure in the West Europe region. We’re working with SAP to enable more regions in the months ahead.

“We are excited to announce general availability for SAP Cloud Platform on Azure. With our expanded partnership last November, we have been working on a number of co-engineering initiatives for the benefit of our mutual customers. SAP Cloud Platform offers the best of PaaS services for developers building apps around SAP and Azure offers world-class infrastructure for SAP solutions with cloud services for application development. With this choice, developers can spin up infrastructure on-demand in a global cloud co-located with other business apps, scale up as necessary in minutes boosting developer productivity and accelerating time to market for innovative applications with SAP solutions,” said Björn Goerke, CTO of SAP and President of SAP Cloud Platform.

Microsoft has a long history of working with developers through our .NET, Visual Studio, and Windows communities. With our focus on open-source support on Azure for Java, Node.js, Red Hat, SUSE, Docker, Kubernetes, Redis, and PostgreSQL, to name a few, Azure offers the most developer-friendly cloud according to the latest development-only public cloud platforms report from Forrester. We recently published an ABAP SDK for Azure on GitHub to enable SAP developers to seamlessly connect to Azure services from SAP applications.

SAP application developers can now use a single cloud platform to co-locate application development next to SAP ERP data and boost development productivity with Azure’s Platform services such as Azure Event Hubs for event data processing and Azure Storage for unlimited inexpensive storage, while accessing SAP ERP data at low latencies for faster application performance. To get started with SAP Cloud Platform on Azure, sign up for a free trial account.

Today, we are also excited to announce another milestone in our partnership: SAP’s Hybris Commerce Cloud now runs on Azure as a software-as-a-service offering.

Customers embarking on digital transformation with SAP on Azure

With our broadest scale global offerings for SAP on Azure, we are seeing increased momentum with customers moving to the cloud. Here are some digital transformation stories from recent customer deployments.

  • Daimler AG: One of the world’s most successful automotive companies, Daimler AG is modernizing its purchasing system with a new SAP S/4HANA on Azure solution. The Azure-based approach is a foundational step in an overall digital transformation initiative to ensure agility and flexibility for the contracting and sourcing of its passenger cars, commercial vehicles, and International Procurement Services on a global basis.
  • Devon Energy: Fully committed to its digital transformation, Devon Energy is pioneering efforts to deploy SAP applications on Azure across all its systems. The Oklahoma City-based independent oil and natural gas exploration and production company is strategically partnering with Microsoft on multiple fronts such as AI, IT modernization, and SAP on Azure. Learn more about Devon’s digital transformation at their SAPPHIRE NOW session.
  • MMG: MMG, a global mining company, recognized that existing SAP infrastructure was approaching end-of-life, and that moving to the cloud would deliver the lowest long-term cost whilst providing flexibility to grow and enable new capabilities. Immediate benefits have been realized in relation to the overall performance of SAP, in particular, data loads into Business Warehouse.

For more on how you can accelerate your digital transformation with SAP on Azure, please check out our website.

In closing, at Microsoft, we are committed to ensuring Azure offers the best enterprise-grade option for all your SAP workload needs, whether you are ready to move to HANA now or later. I will be at SAPPHIRE NOW 2018 and encourage you to check our SAPPHIRE NOW event website for details on 40+ sessions and several demos that we’ll be showcasing at the event. Stop by and see us at the Microsoft booth #358 to learn more.


Use Azure Data Lake Analytics to query AVRO data from IoT Hub


Recently a customer asked me how to read blob data produced from the routing capability of Azure IoT Hub. To provide this customer with a complete answer, I put together a step-by-step guide that I am happy to share with you in the video below.

One of the common patterns of Internet of Things applications is called “cold path” and consists of storing all data produced by IoT devices in the cloud for later processing. To make such an implementation trivial, Azure IoT Hub supports routing of messages coming from devices directly to cloud storage services. IoT Hub can also apply simple rules, based on both message properties and the message body, to route messages to various custom endpoints of your choice. IoT Hub writes blob content in AVRO format, which preserves both the message body and the message properties. While great for data/message preservation, AVRO can be challenging to query and process. Here is a suggested solution for processing this data.

Many big data patterns can be used for processing non-relational data files in custom file formats. Focusing on cost and deployment simplicity, Azure Data Lake Analytics (ADLA) is one of the only “pay per query” big data services. With ADLA, we don’t have to set up virtual machines, databases, networks, or storage accounts. Using U-SQL, the query language for ADLA, together with an AVRO “extractor,” we can parse, transform, and query our IoT Hub data.
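As a hedged sketch of what such a query can look like: the assembly registrations follow the pattern used by the Azure U-SQL samples repository, and the file path and AVRO schema below are illustrative assumptions, not the exact ones your IoT Hub route produces.

```usql
// Sketch: read IoT Hub AVRO blobs with the sample Avro extractor.
// Assumes the extractor assemblies from the Azure U-SQL samples repo
// have already been registered in the ADLA database.
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Hadoop.Avro];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];

// The path pattern and the schema string are illustrative.
@events =
    EXTRACT Body byte[]
    FROM "/iothub/{*}/{*}/{*}.avro"
    USING new Microsoft.Analytics.Samples.Formats.ApacheAvro.AvroExtractor(@"
        {""type"":""record"",""name"":""Message"",
         ""namespace"":""Microsoft.Azure.Devices"",
         ""fields"":[{""name"":""Body"",""type"":[""null"",""bytes""]}]}");

// Decode the message body (JSON telemetry in this example) and export it.
@payloads =
    SELECT Encoding.UTF8.GetString(Body) AS Payload
    FROM @events;

OUTPUT @payloads
TO "/output/telemetry.csv"
USING Outputters.Csv();
```

Once the body is extracted as a string, ordinary U-SQL transformations, or a JSON extractor, can take over.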

The following video walks through the process of transforming, querying, and exporting data from Azure IoT Hub to a standard file format. The process could also be adapted to place the data in other repositories or relational stores.

You can read the detailed step by step process in our documentation.

Accenture fosters inclusive workspace to empower employees with Microsoft 365


Today’s post was written by Stephen Cutchins, CIO and accessibility lead at Accenture.

Diversity at Accenture is a source of strength; the wealth of different perspectives and skillsets that our employees bring to the table keeps us leading in our field. Achieving more as a company starts with addressing the needs of every single employee in our workforce. I am passionate about accessibility. I grew up with two cousins with disabilities, and it shaped my outlook on the whole idea of inclusion in the workplace. Accessible technology is about one thing: fitting the tools to the humans who use them. I’m fortunate to work with a company that shares my vision. I wanted to create an accessibility practice at Accenture, and to that end, I started as the first employee in the CIO’s Center for Excellence, where we look at finding the right tools for an inclusive workplace. When it comes to business tools, we see Microsoft as a leader in inclusive technology and a great partner, a perfect match for our goal of putting technology to work empowering every one of our employees. In fact, we now take it for granted that the experiences within Microsoft 365 will work well for our employees.

As a human-centric company, our workplace initiatives are designed to bring the conversation about accessibility to the forefront, encouraging an open dialogue about how we can support employees’ needs in the workplace. Accenture runs on Office 365 productivity services that include a wealth of built-in accessibility features. The Microsoft approach of “accessibility by design” matches our philosophy that accessibility is not an add-on or an afterthought, but an inherent part of the technology we use to communicate and collaborate as an organization.

The ability to collaborate effectively with your colleagues to get work done is the baseline of any productive organization. A lot of credit goes to the accessibility features in Office 365 ProPlus applications, such as Skype for Business, Word, and Outlook, for helping us tap into the incredible resources in our company. Daily Skype voice and video calls become transformative when people who are blind or have motor disabilities can participate by using the JAWS screen reader for Windows or voice dictation software. Even minor changes can have an enormous impact. I was excited to see that when Microsoft moved the Accessibility Checker front and center in Word, near the spell check, it raised awareness of both the feature itself and the need to use it. We all have different abilities, and learning to consider the full range of situations across the disability spectrum means employees will use the tools at hand for better communication and collaboration with everyone.

It gives me enormous satisfaction that our inclusive workplace, built with Microsoft technologies, engages our employees to do their best work, helps them realize their true potential, and lets them grow as human beings. Everyone benefits.

—Stephen Cutchins

Read the case study to learn more about how Accenture is empowering its workforce with the intuitive accessibility tools built into Windows 10 and Office 365.

The post Accenture fosters inclusive workspace to empower employees with Microsoft 365 appeared first on Microsoft 365 Blog.

Get started with U-SQL: It’s easy!


Azure Data Lake Analytics combines declarative and imperative concepts in the form of a new language called U-SQL. The idea of learning a new language is daunting. Don’t worry! U-SQL is easy to learn. You can learn the vast majority of the language in a single day. If you are familiar with SQL or languages like C# or Java, you will find that learning U-SQL is natural and that you will be productive incredibly fast.
  
A common question we get is “How can I get started with U-SQL?” This blog will show you all the core steps you need to get ramped up on U-SQL.

What is U-SQL?

U-SQL is the big data query language and execution framework in Azure Data Lake Analytics. U-SQL uses familiar SQL concepts and syntax to scale out your custom code (.NET/C#/Python) from gigabyte to petabyte scale. U-SQL offers the usual big data processing concepts such as schema on read, custom processors, and reducers. The language lets you query and combine data from multiple data sources, including Azure Data Lake Storage, Azure Blob Storage, Azure SQL DB, Azure SQL Data Warehouse, and SQL Server instances running on Azure VMs.
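To give a feel for the language, here is a minimal, hedged example; the input path and column names are illustrative assumptions.

```usql
// A minimal U-SQL script: read a TSV file, aggregate, write a CSV.
// The input path and columns are illustrative assumptions.
@searchlog =
    EXTRACT UserId int,
            Query  string
    FROM "/input/SearchLog.tsv"
    USING Extractors.Tsv();

@counts =
    SELECT Query,
           COUNT(*) AS QueryCount
    FROM @searchlog
    GROUP BY Query;

OUTPUT @counts
TO "/output/QueryCounts.csv"
USING Outputters.Csv();
```

Note the mix that makes U-SQL feel familiar: SQL-like rowset transformations, with C# types (int, string) and C# expressions available throughout.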

Step 1: Read the U-SQL tutorial

The U-SQL tutorial is the best place to start. It leads you step by step, incrementally, through the language and tools. You don’t need an Azure subscription or an ADLA account for this step; the tutorial teaches you how to run U-SQL on your own Windows box. Working through this doc will take only a day or two, and afterward you will have a solid grasp of most of the U-SQL code you would ever need to write in practice.

Step 2: Submit a U-SQL job through the Azure Portal

At this point, you’ve got a sense of the language. Now we will explore the mechanics of using the language with Azure. In this step you will learn how to submit a U-SQL script to an ADLA account and monitor its progress. Read the Getting Started guide. The guide will walk through all the steps needed. As an alternative, you could also watch this YouTube video.

Step 3: Run some of the U-SQL samples in the Azure Portal

Since the language is still new to you at this point, you can take advantage of the sample scripts available in the Azure Portal. These samples cover some very common U-SQL scenarios. We find that many users prefer starting with one of our existing samples rather than starting with an empty U-SQL script.

Below are the steps you can follow to start with existing samples in the portal:

  • Go to your ADLA account.
  • Click Sample Scripts (on the left or at the top).
  • If you are prompted to install sample data, do so. This copies a few MBs of sample data into your ADLS account.
  • Select any sample, for example Query a TSV file, and click Submit.

Step 4: Submit a U-SQL Job with Data Lake Tools for Visual Studio

In step 1, you used Visual Studio to run U-SQL scripts on your own machine. In steps 2 and 3 we switched to using the Azure portal and working with an ADLA account. Now we will return to Visual Studio and submit U-SQL scripts to an ADLA account in Azure. Read this Getting Started guide and follow the steps to run U-SQL script using Visual Studio.

Step 5: Learn how a U-SQL script executes

So far, this blog post has focused on using the U-SQL language. Now you should explore what happens under the covers when a U-SQL script runs. Understanding U-SQL query execution is a must for any U-SQL developer; it will help you immensely when debugging problems and optimizing your code to scale.
 
Watch video: U-SQL Batch Query Execution


Step 6: C# code behind in Visual Studio

U-SQL makes it easy to use .NET code. If you are a C# developer, our Visual Studio tooling makes it especially easy to reuse C# code directly in your U-SQL scripts through a feature called “U-SQL C# Code-Behind.”

Watch video: C# Code-Behind for U-SQL in Visual Studio

[Screenshots: a U-SQL script calling a method in C# code-behind, and the contents of the C# code-behind file.]
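Here is a hedged sketch of the pattern, with all names illustrative: a script, Script.usql, calls a C# method defined in its paired code-behind file, Script.usql.cs, with no separate assembly registration step.

```usql
// ---- Script.usql ----
@urls =
    EXTRACT Url string
    FROM "/input/urls.tsv"
    USING Extractors.Tsv();

// CleanUrl is defined in the code-behind file and referenced directly.
@cleaned =
    SELECT MyCompany.UrlHelpers.CleanUrl(Url) AS Url
    FROM @urls;

OUTPUT @cleaned
TO "/output/cleaned.csv"
USING Outputters.Csv();

// ---- Script.usql.cs (the C# code-behind file) ----
// namespace MyCompany
// {
//     public static class UrlHelpers
//     {
//         public static string CleanUrl(string url)
//         {
//             return url.Trim().ToLowerInvariant();
//         }
//     }
// }
```

The tooling compiles the code-behind file and ships it with the job, which is why no explicit REFERENCE ASSEMBLY statement appears in the script.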

Step 7: User code debugging in Visual Studio

One of the challenges of running your code in a distributed fashion is that failures are typically very hard to track down and reproduce. Our Visual Studio tooling has simplified the experience: debugging a .NET exception thrown by your code in a U-SQL script is just as easy as debugging that exception in a normal C# project on your machine.

Watch video: Debug C# code errors in failed U-SQL jobs

Step 8: Learn about the AU Analyzer

As you begin to use U-SQL scripts against larger amounts of data, you are going to eventually wonder about how to optimize the number of Analytics Units (AUs) for your U-SQL jobs. Fortunately, we’ve built a wonderful tool called the AU Analyzer to help you perform these optimizations.
 
Watch video: AU analysis for U-SQL jobs

Step 9: Learn how to save money and control costs

As you start working with big data, you will need to think about all the ways in which you can control your costs. Read our cost saving guides.

Step 10: Use GitHub samples

If you are facing a problem that needs solving, someone else is likely facing a similar one too. We use GitHub to share code samples with the U-SQL community, covering popular questions on a range of topics.

You can find more samples in the repo; to download it, click Clone or download, then Download ZIP.

Advanced learning

Read documentation to understand advanced concepts

Become part of the U-SQL community!

We look forward to welcoming you to our growing community of U-SQL users, and we love to hear your feedback. In particular, let us know when U-SQL makes it easy to write something you couldn’t otherwise do, or lets you complete your analytics tasks in less time. Go ahead and run a U-SQL job and share your experiences.

If you have any questions about U-SQL, ask them on Stack Overflow or the Azure Data Lake MSDN forum.

Azure is the best place for analytics


The confluence of cloud, data, and AI is foundational to innovation and is driving unprecedented change. This week at Spark + AI Summit, I talked about how Microsoft enables organizations to take advantage of Azure to build advanced machine learning models and intelligent applications virtually anywhere.

As Satya mentioned during our Build conference last month, applications will increasingly require a ubiquitous computing fabric from the cloud to the edge. These applications also require new machine learning and AI capabilities that enable them to see, hear, and predict. The driving force behind these capabilities is data. Data is vital to every app and experience we build today. Organizations are using their data to extract important insights to drive their businesses forward and engage their customers in new ways. Customers like Renault-Nissan are revolutionizing their customer experience with connected cars. Rockwell Automation, a leader in industrial automation, has built predictive maintenance capabilities into its equipment to save time and reduce costs associated with device failure. Liebherr, a leader in manufacturing, produces intelligent refrigerators that use object recognition to recommend grocery lists based on refrigerator contents. These are just a few examples of customers leveraging their data, wherever it exists, to turn it into breakthrough insights.

To enable developers and data scientists to build data and AI solutions that impact the world at large, we announced the general availability of Azure Databricks in March. Azure Databricks combines the best of the Apache® Spark™ analytics platform and Microsoft Azure to help customers unleash the power of data like never before. Azure Databricks enables customers like Shell to transform the way they do business. 

"At Shell, we are constantly pushing the boundaries of technology, for instance we are using Artificial Intelligence (AI) to help improve our predictive maintenance. We have drones that take pictures of our facilities and use machine vision to help us identify the need for maintenance. With Azure Databricks, we can now run deep learning to build predictive maintenance models that are used to detect and fix issues, increase our operational efficiency and enhance safety.”

- Dan Jeavons, General Manager of Data Science  

Organizations also benefit from Azure Databricks' native integration with other services like Azure Blob Storage, Azure Data Factory, Azure SQL Data Warehouse, and Azure Cosmos DB. This enables new analytics solutions that support modern data warehousing, advanced analytics, and real-time analytics scenarios.

To continue driving innovation for our customers, we made a couple of exciting enhancements to Azure Databricks that we announced this week at the Spark + AI Summit.

Support for GPU enabled virtual machines 

To quickly build AI models from large volumes of data, specialized hardware like GPUs is required. 

Azure Databricks now supports GPU-enabled VMs as a choice for its clusters. Developers can now easily build, train, and deploy AI models at scale using these optimized clusters.

Runtime for machine learning 

In addition to GPU support, we have also enhanced Azure Databricks’ AI capabilities with a new machine learning runtime. This runtime enables distributed, multi-GPU training of deep neural networks using Horovod and includes HorovodEstimator for seamless integration with Spark DataFrames. It also comes pre-installed and pre-configured with all the necessary packages such as TensorFlow, Keras, and XGBoost.

This runtime enables developers to build deep learning models with a few lines of code. Previously, developers had to invest considerable time and effort to leverage these toolkits. Now, they no longer have to write their own logic to load data, distribute training code to multiple clusters and validate model accuracy. 

Azure Databricks Runtime for Machine Learning is available in preview today, as part of premium SKU in Azure Databricks.

At Microsoft, we are committed to delivering AI into the hands of every developer and data scientist so they can unleash the power of data and reimagine possibilities that will improve our world.

Get started today!

Azure Marketplace new offers: May 1–15


We continue to expand the Azure Marketplace ecosystem. From May 1 to 15, 28 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Altova Server Platform

Altova Server Platform: This free Azure virtual machine template makes it easier and more convenient to use Altova server software in the cloud. The VM template installs the complete line of Altova server software products, including the free LicenseServer, on the VM you specify in Azure.

Apptimized Test

Apptimized Test: Apptimized Test takes away the pain from constant retesting against the Windows platform. Using our unique Azure-based solution, test all your products against every Insider Release of every Microsoft Windows change well before that change moves into production.

Apptimized Catalogue

Apptimized Catalogue: Get instant access to latest versions of the world’s most commonly packaged applications, already packaged to Apptimized’s high quality standards. No longer pay each time an application needs to be repackaged; simply log in and download the latest version.

Apptimized Packaging Service

Apptimized Packaging Service: We package applications for all formats, against any platform, without you needing to invest a penny in hardware, software, or expensive resources. Built on over 15 years delivering specialist packaging services to hundreds of customers of all sizes across all sectors.

Apptimized Packaging Tool

Apptimized Packaging Tool: The Apptimized Packaging Tool is a scalable, low-cost alternative to the traditional thick client toolsets. For a low monthly fee, access everything needed to discover, package, remediate, test, and store as many applications and application packages as you like.

Apptimized Monitor

Apptimized Monitor: Apptimized Monitor continually analyzes multiple sources of industry data, so we can let you know instantly when any one of 250 applications in your portfolio has been updated by its vendor. We monitor 250 products, keeping your applications estate current.

CentOS for Azure Batch container pools

CentOS for Azure Batch container pools: Use this CentOS image to create Azure Batch pools to run container applications. These images should only be used with Azure Batch service to create pools that run container applications. The images have the container runtime pre-installed.

CentOS (with RDMA) for Azure Batch container pools

CentOS (with RDMA) for Azure Batch container pools: Use this image (with RDMA driver) to create Azure Batch pools to run container applications. Only use these images with Azure Batch service to create pools that run container applications. The images have the container runtime pre-installed.

CloudEndure Disaster Recovery - Tier-2

CloudEndure Disaster Recovery - Tier-2: CloudEndure utilizes Azure to provide an affordable enterprise-grade disaster recovery solution for any source workload – physical, virtual, or cloud-based. Recover workloads into Azure, launching an identical copy of your source machines.

FastCollect from Archive360

FastCollect from Archive360: FastCollect is a powerful data onboarding platform that is based on a legally compliant data validation engine. Onboard 80+ data types to Azure at high speed while maintaining 100% data fidelity and chain of custody. Meets all required compliance regulations.

hive - Azure Self Service Portal

hive - Azure Self Service Portal: hive removes bottlenecks, eliminates human errors, and reduces VM request time from 1 week to 1 hour. Our workflows include: list all VMs in Azure IaaS; email requestor and approver; empower users to schedule start and stop of VMs; and more.

PHP 5.6 - Zend Server

PHP 5.6 - Zend Server: Zend Server on Azure is designed for both development and production. Create higher-performing applications and run mission-critical PHP applications in the cloud. Zend Server is an application server designed to scale applications seamlessly across cloud resources.

PlateSpin Migrate

PlateSpin Migrate: PlateSpin Migrate is a powerful workload portability solution that automates the process of moving workloads over the network between physical servers, virtual hosts, and enterprise cloud platforms – all from a single point of control. Test, migrate, and rebalance workloads.

Rapid Recovery Core VM

Rapid Recovery Core VM: Rapid Recovery advanced data protection unifies backup, replication, and recovery in one easy-to-use software solution. The Rapid Recovery Core Virtual Machine for Azure leverages the Microsoft cloud for snapshot backups and all the features of Rapid Recovery.

Solar inCode

Solar inCode: Solar inCode seamlessly plugs into each stage of the software development lifecycle (SDLC), thus allowing your developers to run security scans with ease and focus on building applications. Control the security of applications, provided to you by third-party developers.

STAR-CCM+ v12

STAR-CCM+ v12: The STAR-CCM+ v12 integrated engineering simulation software on Microsoft Azure gives you the additional compute power you need to solve your complex simulations. With one click you can run STAR-CCM+ on your Azure instance of choice with STAR-CCM+ pre-installed.

Stratusphere UX

Stratusphere UX: Stratusphere UX provides complete Microsoft Windows desktop monitoring, diagnostics, performance validation, and optimization. The solution supports all Microsoft Windows-based delivery approaches, including virtual and mixed platform desktop environments.

Ubuntu server OS for Azure Batch container pools

Ubuntu server OS for Azure Batch container pools: Use this Ubuntu server OS image to create Azure Batch container pools. These images should only be used with the Azure Batch service to create pools that run container applications. The images have the container runtime pre-installed.

Ubuntu (with RDMA) for Azure Batch container pools

Ubuntu (with RDMA) for Azure Batch container pools: Use this RDMA-enabled Ubuntu server OS image to create Azure Batch container pools. These images should only be used with the Azure Batch service to create pools that run container applications. The images have the container runtime pre-installed.


Microsoft Azure Applications

Alchemi Intelligent Data Management Stack

Alchemi Intelligent Data Management Stack: Alchemi is a solution that sits above all file data, creating a unified virtual view that describes the universe of content, yet leaves the physical data where it sits. It creates a near-real-time, central metadata index representing all environments it sees.

AppGate SDP

AppGate SDP: AppGate SDP for Azure supports fine-grained, dynamic access control to Azure resources. Using a Software-Defined Perimeter approach for granular security control, it makes your Azure resources inaccessible and invisible. It then delivers access to authorized Azure resources only.

Azure Blockchain Workbench

Azure Blockchain Workbench: The Azure Blockchain Workbench is the fastest way to get started with blockchain on Microsoft Azure. This tool allows developers to deploy a blockchain ledger along with a set of relevant Microsoft Azure services most often used to build a blockchain-based application.

Haivision Media Gateway 1.5

Haivision Media Gateway 1.5: Haivision Media Gateway on Microsoft Azure is used for efficiently transporting high-quality, low-latency live HD video via the open-source SRT protocol to multiple locations around the world, making it ideal for broadcast distribution and enterprise events.

Haivision Media Gateway 1.6.2

Haivision Media Gateway 1.6.2: Haivision Media Gateway on Microsoft Azure is used for efficiently transporting high-quality, low-latency live HD video via the open-source SRT protocol to multiple locations around the world, making it ideal for broadcast distribution and enterprise events.

Infrastructure for SAP Netweaver and SAP HANA

Infrastructure for SAP Netweaver and SAP HANA: Get the most from your SAP HANA and SAP business application software, with decreased downtime, greater efficiency, and accelerated innovation, backed by the reliability, availability, and serviceability of SUSE Linux Enterprise Server for SAP.

Paxata Self-Service Data Preparation

Paxata Self-Service Data Preparation: Paxata Self-Service Data Preparation is a solution for business analysts and data professionals to discover, ingest, explore, transform, and export data, creating clean and contextual information from raw data. This fuels data exploration, discovery, and analytics.

TeamCity

TeamCity: TeamCity is a continuous integration and continuous delivery server from JetBrains. It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity will make sure your software gets built, tested, and deployed, and will notify you the way you choose.

Unraveldata

Unraveldata: Unravel Data is a full-stack, intelligent Application Performance Management platform for big data. Unravel Data guarantees the reliability and performance of apps, maximizes cost savings, and more. The Unraveldata app for HDInsight is prepared for Azure HDInsight clusters.

Windows 10 SDK Preview Build 17682 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 17682 or greater). The Preview SDK Build 17682 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit your apps that target the Windows 10 Creators build or earlier to the Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install on Windows 10 Insider Preview and supported Windows operating systems.

What’s New:

MSIX Support

It’s finally here! You can now package your applications as MSIX! These applications can be installed and run on any device with build 17682 or later.

To package your application with MSIX, use the MakeAppx tool. To install the application, just click the MSIX file. To understand more about MSIX, watch this introductory video.
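As a rough sketch, packaging an app folder with the SDK's MakeAppx tool from a developer command prompt might look like the following; the folder path, package name, and output file are hypothetical placeholders, not values from this post:

```shell
# Hypothetical sketch: package the contents of an app folder as an MSIX package
# using MakeAppx from the Windows 10 SDK. Paths and file names are placeholders.
makeappx.exe pack /d C:\MyApp\PackageFiles /p C:\MyApp\MyApp.msix
```

Double-clicking the resulting .msix file then installs the application on a device running build 17682 or later, as described above.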

Feedback and comments are welcome on our MSIX community: http://aka.ms/MSIXCommunity

MSIX is not yet supported by the App Certification Kit or the Microsoft Store.

MC.EXE

We’ve made some important changes to the C/C++ ETW code generation of mc.exe (Message Compiler):

The “-mof” parameter is deprecated. This parameter instructs MC.exe to generate ETW code that is compatible with Windows XP and earlier. Support for the “-mof” parameter will be removed in a future version of mc.exe.

As long as the “-mof” parameter is not used, the generated C/C++ header is now compatible with both kernel-mode and user-mode, regardless of whether “-km” or “-um” was specified on the command line. The header will use the _ETW_KM_ macro to automatically determine whether it is being compiled for kernel-mode or user-mode and will call the appropriate ETW APIs for each mode.

  • The only remaining difference between “-km” and “-um” is that the EventWrite[EventName] macros generated with “-km” have an Activity ID parameter while the EventWrite[EventName] macros generated with “-um” do not have an Activity ID parameter.

The EventWrite[EventName] macros now default to calling EventWriteTransfer (user mode) or EtwWriteTransfer (kernel mode). Previously, the EventWrite[EventName] macros defaulted to calling EventWrite (user mode) or EtwWrite (kernel mode).

  • The generated header now supports several customization macros. For example, you can set the MCGEN_EVENTWRITETRANSFER macro if you need the generated macros to call something other than EventWriteTransfer.
  • The manifest supports new attributes.
    • Event “name”: non-localized event name.
    • Event “attributes”: additional key-value metadata for an event such as filename, line number, component name, function name.
    • Event “tags”: 28-bit value with user-defined semantics (per-event).
    • Field “tags”: 28-bit value with user-defined semantics (per-field – can be applied to “data” or “struct” elements).
  • You can now define “provider traits” in the manifest (e.g. provider group). If provider traits are used in the manifest, the EventRegister[ProviderName] macro will automatically register them.
  • MC will now report an error if a localized message file is missing a string. (Previously MC would silently generate a corrupt message resource.)
  • MC can now generate Unicode (utf-8 or utf-16) output with the “-cp utf-8” or “-cp utf-16” parameters.
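Putting the changes above together, a hypothetical invocation of the updated Message Compiler might look like this; the manifest file name is a placeholder:

```shell
# Hypothetical sketch: compile an ETW manifest with the updated mc.exe.
# -um        generate user-mode macros (the generated header now also compiles
#            for kernel mode via the _ETW_KM_ macro, as long as -mof is not used)
# -cp utf-8  new option to generate Unicode (UTF-8) output
mc.exe -um -cp utf-8 MyProvider.man
```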

API Spotlight:

Check out LauncherOptions.GroupingPreference.

 
namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
}

This release contains the new LauncherOptions.GroupingPreference property to assist your app in tailoring its behavior for Sets. Watch the presentation here.

Known Issues

Missing Contract File

The contract Windows.System.SystemManagementContract is not included in this release. In order to access the following APIs, please use a previous Windows IoT extension SDK with your project.

This bug will be fixed in a future preview build of the SDK.

The following APIs are affected by this bug:


namespace Windows.Services.Cortana {
  public sealed class CortanaSettings     
}

namespace Windows.System {
  public enum AutoUpdateTimeZoneStatus
  public static class DateTimeSettings
  public enum PowerState
  public static class ProcessLauncher
  public sealed class ProcessLauncherOptions
  public sealed class ProcessLauncherResult
  public enum ShutdownKind
  public static class ShutdownManager
  public struct SystemManagementContract
  public static class TimeZoneSettings
}

MSIX

MSIX is not yet supported by the App Certification Kit or the Microsoft Store.

API Updates and Additions

When targeting new APIs, consider writing your app to be adaptive in order to run correctly on the widest number of Windows 10 devices. Please see Dynamically detecting features with API contracts (10 by 10) for more information.

The following APIs have been added to the platform since the release of 17134.


namespace Windows.AI.MachineLearning {
  public sealed class BooleanTensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class DoubleTensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class FeatureValueProvider
  public sealed class Float16TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class FloatTensorValue : IClosable, IFeatureValue, ITensorValue
  public interface IFeatureDescriptor
  public interface IFeatureValue
  public interface IMachineLearningModel : IClosable
  public interface IMachineLearningOperator
  public sealed class ImageDescriptor : IFeatureDescriptor, ITensorDescriptor
  public sealed class ImageTensorValue : IClosable, IFeatureValue, ITensorValue
  public interface IModelEvaluationResult
  public interface IModelSubmitEvaluationResult
  public sealed class Int16TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class Int32TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class Int64TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class Int8TensorValue : IClosable, IFeatureValue, ITensorValue
  public interface ITensorDescriptor : IFeatureDescriptor
  public interface ITensorValue : IClosable, IFeatureValue
  public struct MachineLearningContract
  public sealed class MachineLearningModel : IClosable, IMachineLearningModel
  public sealed class MachineLearningOperatorContext
  public sealed class MapDescriptor : IFeatureDescriptor
  public sealed class ModelBinding
  public enum ModelDataKind
  public sealed class ModelDevice : IClosable
  public enum ModelDeviceKind
  public enum ModelFeatureKind
  public sealed class ModelSession : IClosable
  public sealed class SequenceDescriptor : IFeatureDescriptor
  public sealed class StringTensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class TensorDescriptor : IFeatureDescriptor, ITensorDescriptor
  public sealed class UInt16TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class UInt32TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class UInt64TensorValue : IClosable, IFeatureValue, ITensorValue
  public sealed class UInt8TensorValue : IClosable, IFeatureValue, ITensorValue
}
namespace Windows.ApplicationModel {
  public sealed class AppInstallerFileInfo
  public sealed class LimitedAccessFeatureRequestResult
  public static class LimitedAccessFeatures
  public enum LimitedAccessFeatureStatus
  public sealed class Package {
    IAsyncOperation<PackageUpdateAvailabilityResult> CheckUpdateAvailabilityAsync();
    AppInstallerFileInfo GetAppInstallerFileInfo();
  }
  public enum PackageUpdateAvailability
  public sealed class PackageUpdateAvailabilityResult
}
namespace Windows.ApplicationModel.Calls {
  public sealed class VoipCallCoordinator {
    IAsyncOperation<VoipPhoneCallResourceReservationStatus> ReserveCallResourcesAsync();
  }
}
namespace Windows.ApplicationModel.Chat {
  public static class ChatCapabilitiesManager {
    public static IAsyncOperation<ChatCapabilities> GetCachedCapabilitiesAsync(string address, string transportId);
    public static IAsyncOperation<ChatCapabilities> GetCapabilitiesFromNetworkAsync(string address, string transportId);
  }
  public static class RcsManager {
    public static event EventHandler<object> TransportListChanged;
  }
}
namespace Windows.ApplicationModel.ComponentUI {
  public sealed class ComponentAddedEventArgs
  public enum ComponentLaunchError
  public sealed class ComponentLaunchOptions
  public sealed class ComponentLaunchResults
  public sealed class ComponentManager
  public sealed class ComponentRemovedEventArgs
  public sealed class ComponentReparentResults
  public sealed class ComponentSite
  public enum ComponentState
  public sealed class ComponentStateEventArgs
  public sealed class InputSitePrototype
}
namespace Windows.ApplicationModel.Store.Preview {
  public static class StoreConfiguration {
    public static bool IsPinToDesktopSupported();
    public static bool IsPinToStartSupported();
    public static bool IsPinToTaskbarSupported();
    public static void PinToDesktop(string appPackageFamilyName);
    public static void PinToDesktopForUser(User user, string appPackageFamilyName);
  }
}
namespace Windows.ApplicationModel.Store.Preview.InstallControl {
  public enum AppInstallationToastNotificationMode
  public sealed class AppInstallItem {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
  }
  public sealed class AppInstallManager {
    bool CanInstallForAllUsers { get; }
  }
  public sealed class AppInstallOptions {
    AppInstallationToastNotificationMode CompletedInstallToastNotificationMode { get; set; }
    bool InstallForAllUsers { get; set; }
    AppInstallationToastNotificationMode InstallInProgressToastNotificationMode { get; set; }
    bool PinToDesktopAfterInstall { get; set; }
    bool PinToStartAfterInstall { get; set; }
    bool PinToTaskbarAfterInstall { get; set; }
    bool StageButDoNotInstall { get; set; }
  }
  public sealed class AppUpdateOptions {
    bool AutomaticallyDownloadAndInstallUpdateIfFound { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public sealed class DeviceInformation {
    string ContainerDeviceId { get; }
    DevicePhysicalInfo PhysicalInfo { get; }
  }
  public enum DeviceInformationKind {
    DevicePanel = 8,
  }
  public sealed class DeviceInformationPairing {
    public static bool TryRegisterForAllInboundPairingRequestsWithProtectionLevel(DevicePairingKinds pairingKindsSupported, DevicePairingProtectionLevel minProtectionLevel);
  }
  public sealed class DevicePhysicalInfo
  public enum PanelDeviceShape
}
namespace Windows.Devices.Enumeration.Pnp {
  public enum PnpObjectType {
    DevicePanel = 8,
  }
}
namespace Windows.Devices.Lights {
  public sealed class LampArray
  public enum LampArrayKind
  public sealed class LampInfo
  public enum LampPurposes : uint
}
namespace Windows.Devices.Lights.Effects {
  public interface ILampArrayEffect
  public sealed class LampArrayBitmapEffect : ILampArrayEffect
  public sealed class LampArrayBitmapRequestedEventArgs
  public sealed class LampArrayBlinkEffect : ILampArrayEffect
  public sealed class LampArrayColorRampEffect : ILampArrayEffect
  public sealed class LampArrayCustomEffect : ILampArrayEffect
  public enum LampArrayEffectCompletionBehavior
  public sealed class LampArrayEffectPlaylist : IIterable<ILampArrayEffect>, IVectorView<ILampArrayEffect>
  public enum LampArrayEffectStartMode
  public enum LampArrayRepetitionMode
  public sealed class LampArraySolidEffect : ILampArrayEffect
  public sealed class LampArrayUpdateRequestedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class SimpleOrientationSensor {
    public static IAsyncOperation<SimpleOrientationSensor> FromIdAsync(string deviceId);
    public static string GetDeviceSelector();
  }
}
namespace Windows.Devices.SmartCards {
  public static class KnownSmartCardAppletIds
  public sealed class SmartCardAppletIdGroup {
    string Description { get; set; }
    IRandomAccessStreamReference Logo { get; set; }
    ValueSet Properties { get; }
    bool SecureUserAuthenticationRequired { get; set; }
  }
  public sealed class SmartCardAppletIdGroupRegistration {
    string SmartCardReaderId { get; }
    IAsyncAction SetPropertiesAsync(ValueSet props);
  }
}
namespace Windows.Devices.WiFi {
  public enum WiFiPhyKind {
    He = 10,
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureItem {
    public static GraphicsCaptureItem CreateFromVisual(Visual visual);
  }
}
namespace Windows.Graphics.Imaging {
  public sealed class BitmapDecoder : IBitmapFrame, IBitmapFrameWithSoftwareBitmap {
    public static Guid HeifDecoderId { get; }
    public static Guid WebpDecoderId { get; }
  }
  public sealed class BitmapEncoder {
    public static Guid HeifEncoderId { get; }
  }
}
namespace Windows.Management.Deployment {
  public enum DeploymentOptions : uint {
    ForceUpdateFromAnyVersion = (uint)262144,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> DeprovisionPackageForAllUsersAsync(string packageFamilyName);
  }
  public enum RemovalOptions : uint {
    RemoveForAllUsers = (uint)524288,
  }
}
namespace Windows.Management.Policies {
  public static class NamedPolicy {
    public static IAsyncAction ClearAllPoliciesAsync(string accountId);
    public static void SetPolicyAtPath(string accountId, string area, string name, NamedPolicyValue policyValue);
    public static void SetPolicyAtPathForUser(User user, string accountId, string area, string name, NamedPolicyValue policyValue);
  }
  public sealed class NamedPolicyValue
  public static class NamedPolicyValueFactory
}
namespace Windows.Media.Control {
  public sealed class CurrentSessionChangedEventArgs
  public sealed class GlobalSystemMediaTransportControlsSession
  public sealed class GlobalSystemMediaTransportControlsSessionManager
  public sealed class GlobalSystemMediaTransportControlsSessionMediaProperties
  public sealed class GlobalSystemMediaTransportControlsSessionPlaybackControls
  public sealed class GlobalSystemMediaTransportControlsSessionPlaybackInfo
  public sealed class GlobalSystemMediaTransportControlsSessionTimelineProperties
  public sealed class MediaPropertiesChangedEventArgs
  public sealed class PlaybackInfoChangedEventArgs
  public sealed class SessionsChangedEventArgs
  public sealed class TimelinePropertiesChangedEventArgs
}
namespace Windows.Media.Core {
  public sealed class MediaStreamSample {
    IDirect3DSurface Direct3D11Surface { get; }
    public static MediaStreamSample CreateFromDirect3D11Surface(IDirect3DSurface surface, TimeSpan timestamp);
  }
}
namespace Windows.Media.Devices.Core {
  public sealed class CameraIntrinsics {
    public CameraIntrinsics(Vector2 focalLength, Vector2 principalPoint, Vector3 radialDistortion, Vector2 tangentialDistortion, uint imageWidth, uint imageHeight);
  }
}
namespace Windows.Media.MediaProperties {
  public sealed class ImageEncodingProperties : IMediaEncodingProperties {
    public static ImageEncodingProperties CreateHeif();
  }
  public static class MediaEncodingSubtypes {
    public static string Heif { get; }
  }
}
namespace Windows.Media.Streaming.Adaptive {
  public enum AdaptiveMediaSourceResourceType {
    MediaSegmentIndex = 5,
  }
}
namespace Windows.Security.Authentication.Web.Provider {
  public sealed class WebAccountProviderInvalidateCacheOperation : IWebAccountProviderBaseReportOperation, IWebAccountProviderOperation
  public enum WebAccountProviderOperationKind {
    InvalidateCache = 7,
  }
  public sealed class WebProviderTokenRequest {
    string Id { get; }
  }
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Services.Cortana {
  public sealed class CortanaActionableInsights
  public sealed class CortanaActionableInsightsOptions
}
namespace Windows.Services.Store {
  public sealed class StoreContext {
    IAsyncOperation<StoreRateAndReviewResult> RequestRateAndReviewAppAsync();
    IAsyncOperation<IVectorView<StoreQueueItem>> SetInstallOrderForAssociatedStoreQueueItemsAsync(IIterable<StoreQueueItem> items);
  }
  public sealed class StoreQueueItem {
    IAsyncAction CancelInstallAsync();
    IAsyncAction PauseInstallAsync();
    IAsyncAction ResumeInstallAsync();
  }
  public sealed class StoreRateAndReviewResult
  public enum StoreRateAndReviewStatus
}
namespace Windows.Storage.Provider {
  public enum StorageProviderHydrationPolicyModifier : uint {
    AutoDehydrationAllowed = (uint)4,
  }
}
namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public static class Launcher {
    public static IAsyncOperation<bool> LaunchFolderPathAsync(string path);
    public static IAsyncOperation<bool> LaunchFolderPathAsync(string path, FolderLauncherOptions options);
    public static IAsyncOperation<bool> LaunchFolderPathForUserAsync(User user, string path);
    public static IAsyncOperation<bool> LaunchFolderPathForUserAsync(User user, string path, FolderLauncherOptions options);
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.System.Profile.SystemManufacturers {
  public sealed class SystemSupportDeviceInfo
  public static class SystemSupportInfo {
    public static SystemSupportDeviceInfo LocalDeviceInfo { get; }
  }
}
namespace Windows.System.UserProfile {
  public sealed class AssignedAccessSettings
}
namespace Windows.UI.Composition {
  public sealed class AnimatablePropertyInfo : CompositionObject
  public enum AnimationPropertyAccessMode
  public enum AnimationPropertyType
  public class CompositionAnimation : CompositionObject, ICompositionAnimationBase {
    void SetAnimatableReferenceParameter(string parameterName, IAnimatable source);
  }
  public enum CompositionBatchTypes : uint {
    AllAnimations = (uint)5,
    InfiniteAnimation = (uint)4,
  }
  public sealed class CompositionGeometricClip : CompositionClip
  public class CompositionObject : IAnimatable, IClosable {
    void GetPropertyInfo(string propertyName, AnimatablePropertyInfo propertyInfo);
  }
  public sealed class Compositor : IClosable {
    CompositionGeometricClip CreateGeometricClip();
    CompositionGeometricClip CreateGeometricClip(CompositionGeometry geometry);
  }
  public interface IAnimatable
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    IReference<float> PositionDefaultAnimationDurationInSeconds { get; set; }
    IReference<float> ScaleDefaultAnimationDurationInSeconds { get; set; }
    int TryUpdatePosition(Vector3 value, PropertyUpdateOption options);
    int TryUpdatePositionBy(Vector3 amount, PropertyUpdateOption options);
    int TryUpdatePositionWithDefaultAnimation(Vector3 value);
    int TryUpdateScaleWithDefaultAnimation(float value, Vector3 centerPoint);
  }
  public enum PropertyUpdateOption
}
namespace Windows.UI.Notifications {
  public sealed class ScheduledToastNotification {
    public ScheduledToastNotification(DateTime deliveryTime);
    IAdaptiveCard AdaptiveCard { get; set; }
  }
  public sealed class ToastNotification {
    public ToastNotification();
    IAdaptiveCard AdaptiveCard { get; set; }
  }
}
namespace Windows.UI.Shell {
  public sealed class TaskbarManager {
    IAsyncOperation<bool> IsSecondaryTilePinnedAsync(string tileId);
    IAsyncOperation<bool> RequestPinSecondaryTileAsync(SecondaryTile secondaryTile);
    IAsyncOperation<bool> TryUnpinSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.StartScreen {
  public sealed class StartScreenManager {
    IAsyncOperation<bool> ContainsSecondaryTileAsync(string tileId);
    IAsyncOperation<bool> TryRemoveSecondaryTileAsync(string tileId);
  }
}
namespace Windows.UI.Text.Core {
  public sealed class CoreTextLayoutRequest {
    CoreTextLayoutBounds LayoutBoundsVisualPixels { get; }
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    bool IsTabGroupingSupported { get; }
  }
  public sealed class ApplicationViewTitleBar {
    void SetActiveIconStreamAsync(RandomAccessStreamReference activeIcon);
  }
  public enum ApplicationViewWindowingMode {
    CompactOverlay = 3,
    Maximized = 4,
  }
  public enum ViewGrouping
  public sealed class ViewModePreferences {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    bool TryHide();
    bool TryShow();
    bool TryShow(CoreInputViewKind type);
  }
  public enum CoreInputViewKind
}
namespace Windows.UI.WebUI {
  public sealed class NewWebUIViewCreatedEventArgs
  public static class WebUIApplication {
    public static event EventHandler<NewWebUIViewCreatedEventArgs> NewWebUIViewCreated;
  }
  public sealed class WebUIView : IWebViewControl
}
namespace Windows.UI.Xaml.Automation {
  public sealed class AutomationElementIdentifiers {
    public static AutomationProperty IsDialogProperty { get; }
  }
  public sealed class AutomationProperties {
    public static DependencyProperty IsDialogProperty { get; }
    public static bool GetIsDialog(DependencyObject element);
    public static void SetIsDialog(DependencyObject element, bool value);
  }
}
namespace Windows.UI.Xaml.Automation.Peers {
  public class AppBarButtonAutomationPeer : ButtonAutomationPeer, IExpandCollapseProvider {
    ExpandCollapseState ExpandCollapseState { get; }
    void Collapse();
    void Expand();
  }
  public class AutomationPeer : DependencyObject {
    bool IsDialog();
    virtual bool IsDialogCore();
  }
}
namespace Windows.UI.Xaml.Controls {
  public class AppBarElementContainer : ContentControl, ICommandBarElement, ICommandBarElement2
  public enum BackgroundSizing
  public sealed class Border : FrameworkElement {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public class ContentPresenter : FrameworkElement {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public class Control : FrameworkElement {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
    CornerRadius CornerRadius { get; set; }
    public static DependencyProperty CornerRadiusProperty { get; }
    bool UseSystemValidationVisuals { get; set; }
    public static DependencyProperty UseSystemValidationVisualsProperty { get; }
  }
  public class DropDownButton : Button
  public class Grid : Panel {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public class NavigationView : ContentControl {
    bool IsTopNavigationForcedHidden { get; set; }
    NavigationViewOrientation Orientation { get; set; }
    UIElement TopNavigationContentOverlayArea { get; set; }
    UIElement TopNavigationLeftHeader { get; set; }
    UIElement TopNavigationMiddleHeader { get; set; }
    UIElement TopNavigationRightHeader { get; set; }
  }
  public enum NavigationViewOrientation
  public sealed class PasswordBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    void PasteFromClipboard();
  }
  public class RelativePanel : Panel {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public class RichEditBox : Control {
    RichEditTextDocument RichEditDocument { get; }
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    event TypedEventHandler<RichEditBox, RichEditBoxSelectionChangingEventArgs> SelectionChanging;
  }
  public sealed class RichEditBoxSelectionChangingEventArgs
  public sealed class RichTextBlock : FrameworkElement {
    void CopySelectionToClipboard();
  }
  public class StackPanel : Panel, IInsertionPanel, IScrollSnapPointsInfo {
    BackgroundSizing BackgroundSizing { get; set; }
    public static DependencyProperty BackgroundSizingProperty { get; }
  }
  public sealed class TextBlock : FrameworkElement {
    void CopySelectionToClipboard();
  }
  public class TextBox : Control {
    bool CanPasteClipboardContent { get; }
    public static DependencyProperty CanPasteClipboardContentProperty { get; }
    bool CanRedo { get; }
    public static DependencyProperty CanRedoProperty { get; }
    bool CanUndo { get; }
    public static DependencyProperty CanUndoProperty { get; }
    FlyoutBase SelectionFlyout { get; set; }
    public static DependencyProperty SelectionFlyoutProperty { get; }
    event TypedEventHandler<TextBox, TextBoxSelectionChangingEventArgs> SelectionChanging;
    void CopySelectionToClipboard();
    void CutSelectionToClipboard();
    void PasteFromClipboard();
    void Redo();
    void Undo();
  }
  public sealed class TextBoxSelectionChangingEventArgs
  public class TreeView : Control {
    bool CanDragItems { get; set; }
    public static DependencyProperty CanDragItemsProperty { get; }
    bool CanReorderItems { get; set; }
    public static DependencyProperty CanReorderItemsProperty { get; }
    Style ItemContainerStyle { get; set; }
    public static DependencyProperty ItemContainerStyleProperty { get; }
    StyleSelector ItemContainerStyleSelector { get; set; }
    public static DependencyProperty ItemContainerStyleSelectorProperty { get; }
    TransitionCollection ItemContainerTransitions { get; set; }
    public static DependencyProperty ItemContainerTransitionsProperty { get; }
    DataTemplate ItemTemplate { get; set; }
    public static DependencyProperty ItemTemplateProperty { get; }
    DataTemplateSelector ItemTemplateSelector { get; set; }
    public static DependencyProperty ItemTemplateSelectorProperty { get; }
    event TypedEventHandler<TreeView, TreeViewDragItemsCompletedEventArgs> DragItemsCompleted;
    event TypedEventHandler<TreeView, TreeViewDragItemsStartingEventArgs> DragItemsStarting;
  }
  public sealed class TreeViewDragItemsCompletedEventArgs
  public sealed class TreeViewDragItemsStartingEventArgs
  public sealed class WebView : FrameworkElement {
    event TypedEventHandler<WebView, WebViewWebResourceRequestedEventArgs> WebResourceRequested;
  }
  public sealed class WebViewWebResourceRequestedEventArgs
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public class FlyoutBase : DependencyObject {
    bool IsOpen { get; }
    FlyoutShowMode ShowMode { get; set; }
    public static DependencyProperty ShowModeProperty { get; }
    public static DependencyProperty TargetProperty { get; }
    void Show(FlyoutShowOptions showOptions);
  }
  public enum FlyoutPlacementMode {
    BottomLeftJustified = 7,
    BottomRightJustified = 8,
    LeftBottomJustified = 10,
    LeftTopJustified = 9,
    RightBottomJustified = 12,
    RightTopJustified = 11,
    TopLeftJustified = 5,
    TopRightJustified = 6,
  }
  public enum FlyoutShowMode
  public sealed class FlyoutShowOptions : DependencyObject
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class XamlBridge : IClosable
}
namespace Windows.UI.Xaml.Input {
  public sealed class FocusManager {
    public static event EventHandler<GettingFocusEventArgs> GettingFocus;
    public static event EventHandler<FocusManagerGotFocusEventArgs> GotFocus;
    public static event EventHandler<LosingFocusEventArgs> LosingFocus;
    public static event EventHandler<FocusManagerLostFocusEventArgs> LostFocus;
  }
  public sealed class FocusManagerGotFocusEventArgs
  public sealed class FocusManagerLostFocusEventArgs
  public sealed class GettingFocusEventArgs : RoutedEventArgs {
    Guid CorrelationId { get; }
  }
  public sealed class LosingFocusEventArgs : RoutedEventArgs {
    Guid CorrelationId { get; }
  }
}
namespace Windows.UI.Xaml.Markup {
  public sealed class FullXamlMetadataProviderAttribute : Attribute
  public interface IXamlBindScopeDiagnostics
}
namespace Windows.UI.Xaml.Media.Animation {
  public class BasicConnectedAnimationConfiguration : ConnectedAnimationConfiguration
  public sealed class BrushTransition : Transition
  public sealed class ConnectedAnimation {
    ConnectedAnimationConfiguration Configuration { get; set; }
  }
  public class ConnectedAnimationConfiguration
  public class DirectConnectedAnimationConfiguration : ConnectedAnimationConfiguration
  public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration
  public enum SlideNavigationTransitionEffect
  public sealed class SlideNavigationTransitionInfo : NavigationTransitionInfo {
    SlideNavigationTransitionEffect Effect { get; set; }
    public static DependencyProperty EffectProperty { get; }
  }
}
namespace Windows.Web.UI.Interop {
  public sealed class WebViewControl : IWebViewControl {
    event TypedEventHandler<WebViewControl, object> GotFocus;
    event TypedEventHandler<WebViewControl, object> LostFocus;
  }
  public sealed class WebViewControlProcess {
    string Partition { get; }
    string UserAgent { get; }
  }
  public sealed class WebViewControlProcessOptions {
    string Partition { get; set; }
    string UserAgent { get; set; }
  }
}

The post Windows 10 SDK Preview Build 17682 available now! appeared first on Windows Developer Blog.

Use Dependency Injection In WebForms Application

The Dependency Injection design pattern is widely used in modern applications. It decouples objects to the extent that no client code needs to be changed simply because an object it depends on changes to a different one. It brings you a lot of benefits, like reduced coupling, more reusable code, and more testable code. In the past, it was very difficult to use Dependency Injection in WebForms applications. Starting from .NET Framework 4.7.2, it is now easy for developers to use Dependency Injection in WebForms applications. With the UnityAdapter, you can add it to your existing WebForms application in 4 simple steps.

How to enable Dependency Injection in your existing WebForms application

Suppose you have a movie website which lists the most popular movies in history. You use the repository pattern to separate the logic that retrieves the data and maps it to the business entity. Currently you are creating the business logic object and repository object in the default.aspx page. The code looks like the following.

Now simply follow the 4 steps below to adopt Dependency Injection and decouple the MovieManager from the default.aspx page. The sample web application is in this GitHub repo, and you can use the tags to retrieve the code change in each step.

1. Retarget the project to .NET Framework 4.7.2. (Git Tag: step-1)

Open the project properties and change the targetFramework of the project to .NET Framework 4.7.2. You will also need to change the targetFramework in the httpRuntime section of the web.config file, as illustrated below.

2. Install AspNet.WebFormsDependencyInjection.Unity NuGet package. (Git Tag: step-2)

3. Register types in Global.asax. (Git Tag: step-3)

4. Refactor Default.aspx.cs. (Git Tag: step-4)

Areas that Dependency Injection can be used

There are many areas you can use Dependency Injection in WebForms applications now. Here is a complete list.

  • Pages and controls
    • WebForms page
    • User control
    • Custom control
  • IHttpHandler and IHttpHandlerFactory
  • IHttpModule
  • Providers
    • BuildProvider
    • ResourceProviderFactory
    • Health monitoring provider
    • Any ProviderBase-based provider created by System.Web.Configuration.ProvidersHelper.InstantiateProvider, e.g. a custom session state provider

 

Summary

Using the Microsoft.AspNet.WebFormsDependencyInjection.Unity NuGet package on .NET Framework 4.7.2, Dependency Injection can easily be added to your existing WebForms application. Please give it a try and let us know your feedback.


Python in Visual Studio Code – May 2018 Release

We are pleased to announce that the May 2018 release of the Python Extension for Visual Studio Code is now available from the marketplace and the gallery. You can download the Python extension from the marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the VS Code documentation.

In this release we have closed a total of 103 issues including support for the new and popular formatter Black, improvements to the experimental debugger and formatting as you type.

Support for Black Formatter

Black is a new code formatting tool for Python that was first released in March and has quickly gained popularity. Black has a single opinion about how Python code should be formatted, allowing you to easily achieve consistency across your codebase. The Python extension now supports using it as a formatter.

To enable the Black formatter, go into File > User Preferences > Settings, and put the following setting in your User Settings (for settings for all workspaces) or Workspace settings (for the current workspace/folder).

"python.formatting.provider": "black"

Then run the VS Code command “Format Document”. You will get a prompt to install the Black formatter:

Selecting Yes will install Black into the currently selected interpreter in VS Code. Once black has finished installing, you will need to run the Format Document command again to format your document.

In the below code example, we can see that black adds a blank line before functions, spaces around equals signs, and uses double quotation marks instead of single quotation marks:
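The original post shows this as a screenshot; as a hand-written illustration (not the exact code from the post's image), the kind of change Black makes looks like this:

```python
# Before Black, you might have typed:
#
#   x=1
#   def greet(name):
#       return 'hello '+name
#
# After running "Format Document" with Black enabled, you get spaces
# around the assignment, blank lines before the top-level function,
# and double quotation marks:
x = 1


def greet(name):
    return "hello " + name
```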

If you want formatting to happen automatically when hitting save, you can add the following setting:

"editor.formatOnSave": true

If you want to format Python 2.7 code, Black will need to run in a Python 3 environment. In that case, you can install black using python3 -m pip install --upgrade black into a Python 3 interpreter/environment of your choice, and then set the python.formatting.blackPath setting to point to the black command that was installed (on UNIX-based OSs you can typically find this with the command which black).

Various Fixes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. The full list of improvements is listed in our changelog, some notable improvements are:

  1. Improvements to testing: added a ‘Discover Unit Tests’ command (#1474) for discovering unit tests, removed error in the output window when running tests (#1529)
  2. Improvements to the Experimental Debugger: auto-enable jinja template debugging on *.jinja and *.j2 files (#1484), ensure debugged program is terminated when Stop debugging button is clicked. (#1345), support for attach/detach (#1255)
  3. Fixed syntax errors caused when using the editor.formatOnType setting is enabled (#1799)
  4. Ensure python environment activation works as expected within a multi-root workspace. (#1476)
  5. Fixed flask debugging configurations so that they work with the latest versions of flask (#1634)
  6. Ensure the display name of an interpreter does not get prefixed twice with the words Python. (#1651)
  7. `Go to Definition` now works for functions which have numbers that use `_` as a separator (as part of our Jedi 0.12.0 upgrade). (#180)
  8. Fixed rename refactor issue that removes the last line of the source file when the line is being refactored and source does not end with an EOL. (#695)

Be sure to download the Python extension for VS Code now to try out the above improvements. If you run into any issues be sure to file an issue on the Python VS Code GitHub page.

Carriage Returns and Line Feeds will ultimately bite you – Some Git Tips

What's a Carriage and why is it Returning? Carriage Return Line Feed WHAT DOES IT ALL MEAN!?!

The paper on a typewriter rides horizontally on a carriage. The Carriage Return or CR was a non-printable control character that would reset the typewriter to the beginning of the line of text.

However, a Carriage Return moves the carriage back but doesn't advance the paper by one line. The carriage moves on the X axis...

And Line Feed or LF is the non-printable control character that turns the Platen (the main rubber cylinder) by one line.

Hence, Carriage Return and Line Feed. Two actions, and for years, two control characters.

Every operating system seems to encode an EOL (end of line) differently. Operating systems in the late 70s all used CR LF together literally because they were interfacing with typewriters/printers on the daily.

Windows uses CRLF because DOS used CRLF because CP/M used CRLF because history.

Mac OS used CR for years until OS X switched to LF.

Unix used just a single LF over CRLF and has since the beginning, likely because systems like Multics started using just LF around 1965. Saving a single byte EVERY LINE was a huge deal for both storage and transmission.

Fast-forward to 2018 and it's maybe time for Windows to also switch to just using LF as the EOL character for Text Files.

Why? For starters, Microsoft finally updated Notepad to handle text files that use LF.

BUT

Would such a change be possible? Likely not, it would break the world. Here's NewLine on .NET Core.

public static String NewLine {
    get {
        Contract.Ensures(Contract.Result<String>() != null);
#if !PLATFORM_UNIX
        return "\r\n";
#else
        return "\n";
#endif // !PLATFORM_UNIX
    }
}

Regardless, if you regularly use Windows and WSL (Linux on Windows) and Linux together, you'll want to be conscious and aware of CRLF and LF.

I ran into an interesting situation recently. First, let's review what Git does

You can configure .gitattributes to tell Git how to treat files, either individually or by extension.

When

git config --global core.autocrlf true

is set, git will automatically convert files quietly so that they are checked out in an OS-specific way. If you're on Linux and checkout, you'll get LF, if you're on Windows you'll get CRLF.

99% of the time this works great.

Except when you are sharing file systems between Linux and Windows. I use Windows 10 and Ubuntu (via WSL) and keep stuff in /mnt/c/github.

However, if I pull from Windows 10 I get CRLF, and if I pull from Linux I get LF, so my shell scripts MAY OR MAY NOT WORK while in Ubuntu.

I've chosen to create a .gitattributes file that sets both shell scripts and PowerShell scripts to LF. This way those scripts can be used, shared, and RUN between systems.

*.sh eol=lf
*.ps1 eol=lf

You've got lots of choices. Again 99% of the time autocrlf is the right thing.

From the GitHub docs:

You'll notice that files are matched--*.c, *.sln, *.png--, separated by a space, then given a setting--text, text eol=crlf, binary. We'll go over some possible settings below.

  • text=auto
    • Git will handle the files in whatever way it thinks is best. This is a good default option.
  • text eol=crlf
    • Git will always convert line endings to CRLF on checkout. You should use this for files that must keep CRLF endings, even on OSX or Linux.
  • text eol=lf
    • Git will always convert line endings to LF on checkout. You should use this for files that must keep LF endings, even on Windows.
  • binary
    • Git will understand that the files specified are not text, and it should not try to change them. The binary setting is also an alias for -text -diff.

Again, the defaults are probably correct. BUT - if you're doing weird stuff, sharing files or file systems across operating systems then you should be aware. If you're having trouble, it's probably CRLF.
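When a script misbehaves only on one side, it helps to confirm what line endings the file actually contains before blaming Git. A few lines of Python can classify them (an illustrative helper, not part of Git or the original post):

```python
def eol_style(data: bytes) -> str:
    """Classify line endings in raw file bytes as 'crlf', 'lf', or 'mixed'."""
    crlf = data.count(b"\r\n")
    bare_lf = data.count(b"\n") - crlf  # LFs that aren't part of a CRLF pair
    if crlf and bare_lf:
        return "mixed"
    return "crlf" if crlf else "lf"

# Example: read a shell script in binary mode and check it before running it
# under WSL. (The file name here is hypothetical.)
# print(eol_style(open("deploy.sh", "rb").read()))
```

A "crlf" or "mixed" answer for a .sh file checked out on Windows explains most mysterious Ubuntu failures.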

I hope Microsoft bought Github so they can fix this CRLF vs LF issue.

— Scott Hanselman (@shanselman) June 4, 2018

* Typewriter by Matunos used under Creative Commons


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Announcing ML.NET 0.2

Last month at //Build 2018, we released ML.NET 0.1, a cross-platform, open source machine learning framework. We would like to thank the community for the engagement so far in helping us shape ML.NET.

Today we are releasing ML.NET 0.2. This release focuses on adding new ML tasks like clustering, making it easier to validate models, adding a brand-new repo for ML.NET samples and addressing a variety of issues and feedback we received in the GitHub repo.

Some of the highlights with ML.NET 0.2 release are mentioned below.

New Machine Learning Tasks: Clustering

Clustering is an unsupervised learning task that groups sets of items based on their features. It identifies which items are more similar to each other than other items.

This might be useful in scenarios such as organizing news articles into groups based on their topics, segmenting users based on their shopping habits, and grouping viewers based on their taste in movies.

The Iris Flower sample illustrates how you can use Clustering with ML.NET 0.2

Easier model validation with cross-validation and train-test

Cross-validation is an approach to validating how well your model statistically performs. It does not require a separate test dataset, but rather uses your training data to test your model (it partitions the data so different data is used for training and testing, and it does this multiple times). With ML.NET 0.2 you can now use cross-validation, and here is a good example.

Train-test is a shortcut to testing your model on a separate dataset. See example usage here.

Train using data objects with CollectionDataSource

ML.NET 0.1 enabled loading data from a delimited text file. CollectionDataSource in ML.NET 0.2 adds the ability to use a collection of objects as the input to a LearningPipeline.

The code-snippet below shows how you can use CollectionDataSource with ML.NET 0.2.

var pipeline = new LearningPipeline();
var data = new List<IrisData>() {
    new IrisData { SepalLength = 1f, SepalWidth = 1f,
                   PetalLength = 0.3f, PetalWidth = 5.1f, Label = 1 },
    new IrisData { SepalLength = 1f, SepalWidth = 1f,
                   PetalLength = 0.3f, PetalWidth = 5.1f, Label = 1 },
    new IrisData { SepalLength = 1.2f, SepalWidth = 0.5f,
                   PetalLength = 0.3f, PetalWidth = 5.1f, Label = 0 }
};
var collection = CollectionDataSource.Create(data);
pipeline.Add(collection);

Full code snippet for CollectionDataSource can be found here.

New ML.NET samples repo

We have created a new repo https://github.com/dotnet/machinelearning-samples and added a few getting started and end-end app samples.

  • Sentiment Analysis (Binary Classification)
    This sample demonstrates how ML.NET can be used to analyze sentiment for customer reviews (positive or negative). The sample uses IMDB and Yelp reviews.
  • Classification of Iris Flowers (Multi-class Classification)
    This sample is centered around predicting the type of an iris flower (setosa, versicolor, or virginica) based on the flower’s parameters such as petal length, petal width, etc.
  • Taxi-Fare prediction (Regression)
    Taxi-Fare prediction sample demonstrates how to build a ML.NET model for predicting New York City taxi fares. A regression model is used in this sample which takes into account features like number of passengers, type of credit and distance traveled.
  • Cluster analysis on Iris Dataset (Clustering)
    The sample demonstrates how to build a clustering model with ML.NET by performing a cluster analysis on the Iris Dataset.
  • GitHub Issue Classification (Multi-class classification)
    This is an E2E sample which shows how to use ML.NET to build a GitHub issue classifier.

This blog post only goes over a few top announcements in the ML.NET 0.2 release; the complete release notes for ML.NET 0.2 can be found here.

Help shape ML.NET for your needs

If you haven’t already, try out ML.NET; you can get started here. We look forward to your feedback and welcome you to file issues with any suggestions or enhancements in the GitHub repo.

https://github.com/dotnet/machinelearning

This blog was co-authored by Gal Oshri and Ankit Asthana

Thanks,
ML.NET Team

Mellanox uses Azure to accelerate network design

What do you do when your networking powers 60 percent of the HPC systems on the Top500 supercomputer list and you have 70 percent market share for 25G and faster network adapters? You continue to push the boundaries of performance to keep your position. But developing high-speed, low-latency networking gear is not an easy process, and you need an efficient IT infrastructure to support your mission.

Journey to the cloud

Mellanox’s journey to running HPC workloads in Azure started a few years ago. At first, they were looking for services like disaster recovery, where they could keep the environment deprovisioned most of the time. Then they started looking at moving services like backups to the cloud. Blob Storage was more attractive to them than managing tape libraries. As they gained comfort with that, they began moving additional services like email and SharePoint.

Mellanox’s users became comfortable using routine services in Azure; the performance and stability were attractive, and it allowed IT teams to focus on the areas that add value. When it came time to look at bursting the design environment, Mellanox looked into public cloud options.

Mellanox worked with Univa, a leading provider of HPC scheduling and orchestration software, to evaluate different public cloud options. In the end, Mellanox chose Azure for both its technical capabilities and the support provided by the Microsoft team. “It’s very straightforward”, said Udi Weinstein, VP Information Technology at Mellanox, “It’s compute power with the ability to manage it. Azure is stable, people know the environment, and it’s predictable”.

Solving bottlenecks

By bursting their design simulations to Azure, Mellanox was able to eliminate bottlenecks in the design process. Each design team was given their own budget that they could spend in the way that made the most sense for their workload. “Who knows best?”, Weinstein asked, “The end user”.

This flexibility in policy matches Azure’s flexibility in resources. With different types of resources and different sizes within virtual machine families, users can match the compute resource to the job. Senior IT Manager Yoni Myoslavski said “one cloud usage benefit is that we may adjust the VM types pretty simply to better fit our needs”.

“It does not make sense to buy expensive, high end computers to support temporary compute bursts”, said Weinstein. The elasticity of Azure, managed by Univa’s software, gives Mellanox the resources they need when they’re needed.

To read more about Mellanox’s use of Azure, see Univa’s case study. To learn more about Azure’s HPC offerings, visit our HPC solution page.

Azure Search is now certified for several levels of compliance

Compliance is an important factor for customers when looking at software and services as they look to meet their own compliance obligations across regulated industries and markets worldwide. For example, ISO 27001 certification is a security standard that provides a baseline set of requirements for many other international standards and regulations and HIPAA (Health Insurance Portability and Accountability Act) is a US law that establishes requirements for the use, disclosure, and safeguarding of protected health information (PHI).

For that reason, we are excited to announce that Azure Search has been certified for several levels of compliance including:

  • ISO 27001:2013
  • SOC 2 Type 2
  • GxP (21 CFR Part 11)
  • HIPAA and the HITECH Act
  • HITRUST
  • PCI DSS Level 1
  • Australia IRAP Unclassified

With these certifications and attestations, we hope to enable Azure Search as a viable option for customers looking to meet and attain key international and industry-specific compliance standards within their solutions.

Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific. To view an overview of Azure Search as well as other Microsoft Azure compliance offerings, please visit the Microsoft Trust Center. In addition, you can directly download a document that provides an up to date scope statement indicating which Azure customer-facing services are in scope for the assessment, as well as links to downloadable resources to assist customers with their own compliance obligations.

If there are other areas of compliance you would like to see us support, please let me know.

Detecting script-based attacks on Linux

Last month, we announced the extension of Azure Security Center’s detection for Linux. This post aims to demonstrate how existing Windows detections often have Linux analogs. A specific example of this is the encoding or obfuscation of command-lines. Some of the reasons an attacker might wish to encode their commands include minimizing quoting/escaping issues when encapsulating commands in scripts and a basic means of hiding from host-based intrusion detection. These techniques have the additional benefit of avoiding the need to drop a file to disk, reducing the risk to an attacker of being detected by traditional anti-virus products.

Encoded PowerShell attacks on Windows

There are many examples of such behavior being used in attacks against Windows environments. A previous blog post highlights one such technique to encode PowerShell commands as base64. PowerShell actually makes this amazingly easy, allowing commands of the form:

powershell.exe -EncodedCommand dwByAGkAdABlAC0AbwB1AHQAcAB1AHQAIABFAG4AYwBvAGQAZQBkACAAUABvAHcAZQByAFMAaABlAGwAbAAgAHMAYwByAGkAcAB0AA==

The only real stumbling block being the requirement that the decoded command must be UTF-16 (hence the prevalence of ‘A’ in the resulting base64).

Encoded shell attacks on Linux

As with attacks on Windows systems, the same motivations exist for encoding commands on Linux systems. Namely, to avoid encapsulation issues with special characters and to evade any naïve anti-virus or log analysis.

Whilst Linux has no native equivalent to PowerShell’s -EncodedCommand parameter, it almost always comes packaged with commands such as base64 from the coreutils package (the package that provides critical commands such as cp, ls, mv, rm, etc.). On embedded systems, these same utilities are often provided through inclusion of BusyBox or similar utilities. This makes executing an encoded command as simple as redirecting the output of such a call into your shell of choice. A simple Bash example might look like:

echo ZWNobyAiT2JmdXNjYXRlZCBiYXNoIHNjcmlwdCI= | base64 -d | bash

Worth noting at this point is that on Linux especially, base64 is not the only game in town, arguably equally likely is the ubiquitous hexadecimal encoding:

echo 6563686f20224f626675736361746564206261736820736372697074220a | xxd -r -p | bash

or the even more verbose backslash escaped octal or hex:

printf '\145\143\150\157\40\42\117\142\146\165\163\143\141\164\145\144\40\142\141\163\150\40\163\143\162\151\160\164\42\12' | bash

Encoded script attacks on Linux

In addition to these dedicated decoders, most scripting languages such as Python / Perl / Ruby come with their own built-in base64 libraries, making obfuscation of such scripts extremely easy. A simple Perl example might look something like:

exec(decode_base64("SW5zZXJ0IFBlcmwgc2NyaXB0IGhlcmUuLi4="))

The one minor downside to this technique is the general requirement to import the relevant library first (in this case, I have omitted the required “use MIME::Base64” statement).
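The equivalent in Python needs only the standard-library base64 module. A sketch of the pattern, shown here with a harmless stand-in payload:

```python
import base64

# A benign stand-in for what would, in a real attack, be malicious code.
payload = b'result = "Obfuscated Python script"'
encoded = base64.b64encode(payload).decode()

# The attacker-side pattern: decode and execute in one step,
# without ever writing the script to disk.
exec(base64.b64decode(encoded))
print(result)
```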

Real-world attacks observed on Linux hosts

The following example demonstrates real-world usage of this approach in Python. In this event, the behavior was observed following a successful brute-force of an SSH password.

This activity was picked up by several different analytics in Azure Security Center, starting with the successful brute-force. The following screenshot shows the detection of suspicious encoded Python of the type described in the previous section.

Azure Security Center detection
 
Decoding the encapsulated base64 from the command-line highlighted in the alert gives the following:

Python botnet downloader
 
As you can see, the urlopen and further base64decode / exec calls download yet more base64 encoded Python and execute it. This behavior, beneath the outer encoding, is what the highlighted alert has detected. The downloaded script is the main controller code for this entirely-Python botnet.

Subsequent analysis of the controller script (including configuration that it receives through further urlopen calls), appears to suggest the primary goal of this botnet is the downloading and execution of a Monero crypto-currency miner, saved as a binary named wipefs (strangely assuming the name of a little-used tool for wiping Linux filesystem signatures). Finally, persistence is achieved through the addition of a 6-hourly cron job, running yet another base64 encoded Python script, this time stored as httpsd (presumably chosen to look like an imaginary SSL variant of Apache’s httpd). The following snippet of the controller code highlights this latter operation:

Python botnet controller
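As a rough illustration of how such encoded command lines can be spotted (a naive hypothetical sketch, not the analytic Azure Security Center actually uses), one could scan process command lines for long base64 runs that decode to script-like content:

```python
import base64
import re

# Runs of base64 alphabet characters long enough to hide a command.
B64_RUN = re.compile(r"[A-Za-z0-9+/=]{24,}")
# Illustrative keywords; a real analytic would use far richer signals.
SUSPICIOUS = (b"exec", b"eval", b"urlopen", b"base64", b"/bin/sh")

def flag_encoded_command(cmdline: str) -> bool:
    """Return True if the command line embeds base64 that decodes to script-like content."""
    for run in B64_RUN.findall(cmdline):
        try:
            decoded = base64.b64decode(run, validate=True)
        except Exception:
            continue  # not valid base64 after all
        if any(tok in decoded for tok in SUSPICIOUS):
            return True
    return False
```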

Protecting Linux VMs

Many of the attacks of the type mentioned in this post originate with brute force of SSH passwords. There are several ways of mitigating this technique, depending on access requirements for individual machines. The simplest solution is, where possible, disabling SSH or locking down access to specific networks, as mentioned in our network best practice advice. Where ad-hoc connections from the internet are required to particular machines, enabling Azure Security Center’s JIT virtual machine access allows the granting of on-demand access for a limited time window. Lastly, as with the vast majority of SSH guidance, disabling password login by enabling public key authentication is almost always a sensible precaution.

Summary

Whilst PowerShell analytics may at first glance appear to be entirely Windows focussed, attackers targeting Linux systems exhibit similar behaviour. Fundamentally, this should come as no real surprise, as it is often the same people attacking both platforms. It is also important to remember methods of detecting them can often be similarly portable. This portability can in fact be taken a step further, as shown in the above example, using platform agnostic scripting languages such as Python, an attacker is often able to use the exact same code on both Windows and Linux, making any subsequent detections identical.

Azure Data Lake Tools for VSCode supports Azure blob storage integration

We are pleased to announce the integration of the VSCode explorer with Azure blob storage. Whether you are a data scientist who wants to explore the data in your Azure blob storage, or a developer who wants to access and manage your Azure blob storage files, please try the Data Lake Explorer blob storage integration. The Data Lake Explorer allows you to easily navigate to your blob storage, and to access and manage your blob containers, folders, and files.

Summary of new features

  • Blob container - Refresh, Delete Blob Container and Upload Blob 

  • Folder in blob - Refresh and Upload Blob 

  • File in blob - Preview/Edit, Download, Delete, Create EXTRACT Script (only available for CSV, TSV and TXT files), as well as Copy Relative Path, and Copy Full Path

How to install or update

Install Visual Studio Code and download Mono 4.2.x (for Linux and Mac). Then get the latest Azure Data Lake Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching for Azure Data Lake Tools.

For more information about Azure Data Lake Tool for VSCode, please use the following resources:

Learn more about today’s announcements on the Azure blog and the Big Data blog. Discover more on the Azure service updates page.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.


New white paper shows how companies can thrive with an evolving workplace

In a recent study produced in partnership with Harvard Business Review Analytic Services, more than three-quarters of business leaders said that the evolution of their workplace strategy, processes, and technology is very important to their company’s overall performance. Yet less than one-third said their company was forward-looking in its approach. In a global economy, where new technologies have the potential to disrupt entire industries, how do companies enable their most important asset: people?

Read the white paper to see how companies can embrace a new culture of work, and thrive in it.

The post New white paper shows how companies can thrive with an evolving workplace appeared first on Microsoft 365 Blog.

C# Console UWP Applications

We’ve just published an update to the Console UWP App project templates on the Visual Studio marketplace here. The latest version (v1.5) adds support for C#. The C# template code only works with Visual Studio 2017 version 15.7 or later. In a previous post, I described how to build a simple findstr UWP app using the C++ Console templates. In this post, we’ll look at how to achieve the same with C#, and call out a few additional wrinkles you should be aware of.

Having installed the updated VSIX, you can now choose a C# Console UWP App from the New Project dialog:

Note that C# console apps are only supported from version 10.0.17134.0 of the platform. You should therefore specify a version >= 10.0.17134 for the minimum platform version when you create your project. If you forget this step, you can fix it at any time later by manually editing your .csproj and updating the TargetPlatformMinVersion value. 

Also note that with the C# template, you might get the error message “Output type ‘Console Application’ is not supported by one or more of the project’s targets.” You can safely ignore this message: even though it is flagged as an error, it has no practical effect and doesn’t prevent the project from building correctly.

As with the C++ template, the generated code includes a Main method. One difference you’ll notice with the C# version is that the command-line arguments are passed directly into Main. Recall that in the C++ version, you don’t get the arguments into main, but instead you need to use the global __argc and __argv variables. Notice that you can also now use the System.Console APIs just as you would in a non-UWP console app.


static void Main(string[] args) 
{ 
    if (args.Length == 0) 
    { 
        Console.WriteLine("Hello - no args"); 
    } 
    else 
    { 
        for (int i = 0; i < args.Length; i++) 
        { 
            Console.WriteLine($"arg[{i}] = {args[i]}"); 
        } 
    } 
    Console.WriteLine("Press a key to continue: "); 
    Console.ReadLine(); 
} 

As before, for the file-handling behavior needed for the findstr app, you need to add the broadFileSystemAccess restricted capability. Adding this will cause your app to get some extra scrutiny when you submit it to the Store. You will need to describe how you intend to use the feature, and show that your usage is reasonable and legitimate. 


xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"  
  IgnorableNamespaces="uap mp uap5 desktop4 iot2 rescap"> 
… 
  <Capabilities> 
    <Capability Name="internetClient" /> 
    <rescap:Capability Name="broadFileSystemAccess" /> 
  </Capabilities> 

Because the app will be doing some simple file handling and pattern matching, in the C++ version I had to #include Windows.Storage.h and <regex>, and declare the corresponding namespaces. In C#, you need the equivalent Windows.Storage and System.Text.RegularExpressions namespaces.

For the findstr functionality, recall that I’m expecting a command-line such as “CsFindstr foo C:\Bar”, where “foo” is the pattern to search for, and “C:\Bar” is the folder location from which to start the recursive search. I can strip out all the generated code in Main, and replace it with firstly a simple test for the expected number of command-line arguments, and secondly a call to a RecurseFolders method (which I’ll write in a minute). In the C++ version, I tested __argc < 3, but in the managed version I need to test the incoming args.Length for < 2 (the executable module name itself is not included in the C# args).


static void Main(string[] args) 
{ 
    if (args.Length < 2) 
    { 
        Console.WriteLine("Insufficient arguments."); 
        Console.WriteLine("Usage:"); 
        Console.WriteLine("   mFindstr <search-pattern> <fully-qualified-folder-path>."); 
        Console.WriteLine("Example:"); 
        Console.WriteLine("   mFindstr on D:\\Temp."); 
    } 
    else 
    { 
        string searchPattern = args[0]; 
        string folderPath = args[1]; 
        RecurseFolders(folderPath, searchPattern).Wait(); 
    } 
 
    Console.WriteLine("Press a key to continue: "); 
    Console.ReadLine(); 
} 

Now for the custom RecurseFolders method. Inside this method, I need to use a number of async methods for the file handling, so the method needs to be declared async – and this is also why I called Wait() on the Task return from the method back up in Main. I can’t make Main async, so I must make sure to contain all meaningful async return values within the lower-level methods.

In this method, I’ll get the StorageFolder for the root folder supplied by the user on the command-line, get the files in this folder, and then continue down the folder tree for all sub-folders and their files:


private static async Task<bool> RecurseFolders(string folderPath, string searchPattern) 
{ 
    bool success = true; 
    try 
    { 
        StorageFolder folder = await StorageFolder.GetFolderFromPathAsync(folderPath); 
 
        if (folder != null) 
        { 
            Console.WriteLine( 
                $"Searching folder '{folder}' and below for pattern '{searchPattern}'"); 
            try 
            { 
                // Get the files in this folder. 
                IReadOnlyList<StorageFile> files = await folder.GetFilesAsync(); 
                foreach (StorageFile file in files) 
                { 
                    SearchFile(file, searchPattern); 
                } 
 
                // Recurse sub-directories. 
                IReadOnlyList<StorageFolder> subDirs = await folder.GetFoldersAsync(); 
                if (subDirs.Count != 0) 
                { 
                    GetDirectories(subDirs, searchPattern); 
                } 
            } 
            catch (Exception ex) 
            { 
                success = false; 
                Console.WriteLine(ex.Message); 
            } 
        } 
    } 
    catch (Exception ex) 
    { 
        success = false; 
        Console.WriteLine(ex.Message); 
    } 
    return success; 
} 
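This async-plus-Wait() pattern is easier to see in miniature away from the UWP APIs. Here is a minimal sketch (WaitDemo, DoWorkAsync, and the Task.Delay stand-in are illustrative, not part of the sample):

```csharp
using System;
using System.Threading.Tasks;

class WaitDemo
{
    // Declared async Task<bool>, like RecurseFolders, so a synchronous
    // caller can block on its completion and read the result.
    public static async Task<bool> DoWorkAsync()
    {
        await Task.Delay(10); // stand-in for the awaited StorageFolder calls
        return true;
    }

    static void Main()
    {
        // Main itself isn't async here, so block until the task
        // completes, just as Main does with RecurseFolders(...).Wait().
        Task<bool> task = DoWorkAsync();
        task.Wait();
        Console.WriteLine($"Success: {task.Result}");
    }
}
```

Reading task.Result after Wait() is safe here because the task has already completed; an unhandled exception inside the task would surface from Wait() as an AggregateException.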

The GetDirectories method is the actual recursive method that performs the same operation (get the files in the current folder, then recurse sub-folders): 


private static async void GetDirectories(IReadOnlyList<StorageFolder> folders, string searchPattern) 
{ 
    try 
    { 
        foreach (StorageFolder folder in folders) 
        { 
            // Get the files in this folder. 
            IReadOnlyList<StorageFile> files = await folder.GetFilesAsync(); 
            foreach (StorageFile file in files) 
            { 
                SearchFile(file, searchPattern); 
            } 
 
            // Recurse this folder to get sub-folder info. 
            IReadOnlyList<StorageFolder> subDirs = await folder.GetFoldersAsync(); 
            if (subDirs.Count != 0) 
            { 
                GetDirectories(subDirs, searchPattern); 
            } 
        } 
    } 
    catch (Exception ex) 
    { 
        Console.WriteLine(ex.Message); 
    } 
} 

Finally, the SearchFile method, which is where I’m doing the pattern-matching, using Regex. As before, I’m enhancing the raw search pattern to search for any whitespace-delimited “word” that contains the user-supplied pattern. Then I walk the returned MatchCollection, and print out all the found “words” and their position in the file. 


private static async void SearchFile(StorageFile file, string searchPattern) 
{ 
    if (file != null) 
    { 
        try 
        { 
            Console.WriteLine($"Scanning file '{file.Path}'"); 
            string text = await FileIO.ReadTextAsync(file); 
            string compositePattern = 
                @"(\S+\s+){0}\S*" + searchPattern + @"\S*(\s+\S+){0}"; 
            Regex regex = new Regex(compositePattern); 
            MatchCollection matches = regex.Matches(text); 
            foreach (Match match in matches) 
            { 
                Console.WriteLine($"{match.Index,8} {match.Value}"); 
            } 
        } 
        catch (Exception ex) 
        { 
            Console.WriteLine(ex.Message); 
        } 
    } 
} 
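The composite pattern can be exercised on its own, away from the file handling; a small sketch (PatternDemo, its wrapper method, and the sample text are mine):

```csharp
using System;
using System.Text.RegularExpressions;

class PatternDemo
{
    // Builds the same word-matching pattern SearchFile uses: any
    // whitespace-delimited token that contains the search pattern.
    public static MatchCollection FindWords(string text, string searchPattern)
    {
        string compositePattern =
            @"(\S+\s+){0}\S*" + searchPattern + @"\S*(\s+\S+){0}";
        return new Regex(compositePattern).Matches(text);
    }

    static void Main()
    {
        foreach (Match match in FindWords("you can findstr things; refind them later", "find"))
        {
            Console.WriteLine($"{match.Index,8} {match.Value}");
        }
        // Matches "findstr" (index 8) and "refind" (index 24).
    }
}
```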

With this, I can now press F5 to build and deploy the app. For console apps it often makes sense to set the Debug properties to “Do not launch, but debug my code when it starts” – because the most useful testing will be done with varying command-line arguments, and therefore by launching the app from a command prompt rather than from Visual Studio. 


I can test the app using a command window or powershell window: 


That’s it! You can now write Console UWP apps in C#. Full source code for this sample app is on GitHub here. 

The post C# Console UWP Applications appeared first on Windows Developer Blog.

Visual Studio Code May 2018

What’s Next for Visual Studio


Since we launched Visual Studio 2017 in March 2017, it has become our most popular Visual Studio release ever. Your feedback has helped our team publish seven updates since our initial GA, which have improved solution load performance, build performance, and unit test discovery performance. We’ve also made Visual Studio 2017 our most accessible release ever, helping developers with low vision or no vision be more productive.

Our team is focused on introducing features that make every developer more productive: better navigation features like “go to all” (Ctrl + ,), features to improve code quality like Live Unit Testing, and most recently, to enable real time collaboration with Live Share. And we have even started to show how we will use artificial intelligence to assist developers with IntelliCode.

Now, it’s time to start to look at what comes next.

The short answer is Visual Studio 2019

Because the Developer Tools teams (especially .NET and Roslyn) do so much work in GitHub, you’ll start to see check-ins that indicate that we’re laying the foundation for Visual Studio 2019, and we’re now in the early planning phase of Visual Studio 2019 and Visual Studio for Mac. We remain committed to making Visual Studio faster, more reliable, more productive for individuals and teams, easier to use, and easier to get started with. Expect more and better refactorings, better navigation, more capabilities in the debugger, faster solution load, and faster builds. But also expect us to continue to explore how connected capabilities like Live Share can enable developers to collaborate in real time from across the world and how we can make cloud scenarios like working with online source repositories more seamless. Expect us to push the boundaries of individual and team productivity with capabilities like IntelliCode, where Visual Studio can use Azure to train and deliver AI-powered assistance into the IDE.

Our goal with this next release is to make it a simple, easy upgrade for everyone – for example, Visual Studio 2019 previews will install side by side with Visual Studio 2017 and won’t require a major operating system upgrade.

As for timing of the next release, we’ll say more in the coming months, but be assured we want to deliver Visual Studio 2019 quickly and iteratively. We’ve learned a lot from the cadence we’ve used with Visual Studio 2017, and one of the biggest things we have learned is that we can do a lot of good work if we focus on continually delivering and listening to your feedback. There are no bits to preview yet, but the best way to ensure you are on the cutting edge will be to watch this blog and to subscribe to the Visual Studio 2017 Preview.

In the meantime, our team will continue to publish a roadmap of what we’re planning online, work in many open source repositories, and take your feedback through our Developer Community website. This blog post is just another example of sharing our plans with you early, so you can plan and work with us to continue to make Visual Studio a great coding environment.

John

John Montgomery, Director of Program Management for Visual Studio
@JohnMont

John is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, JavaScript, and .NET. John has been at Microsoft for 17 years, working in developer technologies the whole time.

What’s new in Azure for Machine Learning and AI


There were a lot of big announcements at last month's Build conference, and many of them were related to machine learning and artificial intelligence. My colleague Tim Heuer and I summarized some of the big announcements — and a few you may have missed — in a recent webinar. The slides are embedded below, and include links to recordings of the Build sessions where you can find in-depth details.

You can't see the videos or demos in the slides, unfortunately — my favorite is a demo of using Microsoft Translator, trained by a hearing-impaired user, to accurately transcribe "deaf voice". But you can find the videos and discussion from Tim and me in the on-demand recording available at the link below.

Azure Webinar Series: Top Azure Takeaways from Microsoft Build
