
Join Microsoft Build Live for the ultimate digital experience



Registration is now open for the Microsoft Build Live digital experience, May 7–9!

Microsoft Build Live brings you live as well as on-demand access to three days of inspiring speakers, spirited discussions, and virtual networking. Whether you can’t make it to Seattle or just want to enhance your on-the-ground experience at Microsoft Build, the livestream gives you another way to connect, spark ideas, and deepen your engagement with the latest ideas in the cloud, AI, mixed reality, and more.

Register for free and start designing your own personalized digital experience:

  • Learn: There’s a lot going on at Microsoft Build! Select your favorite, must-know topics and receive content recommendations. Then tailor your own feed for the speakers and sessions you most want to see.
  • Curate: Prioritize what to watch live, and save concurrent sessions for on-demand viewing. Create and share your playlists and favorites with your dev peers.
  • Participate: Engage in conversations with the most innovative minds in dev through live Q&As, chats, and session comments. And don’t miss the Drone Skills Search and Rescue Challenge – even if you can’t be there to build your own drone, you can watch and cheer on your favorites virtually.
  • Sustain: Keep the excitement going with personalized, post-event summaries and updates on the latest developments as technologies evolve.

From the intelligent edge to the intelligent cloud, get inspired with game-changing keynotes and world-changing ideas, from Microsoft experts like:

  • Joe Belfiore, Corporate Vice President, Windows
  • Scott Hanselman, Principal Program Manager, .NET
  • Elaine Chang, Senior Program Manager, AI Platform
  • Cindy Alvarez, Principal Design Researcher, Visual Studio

Expand your knowledge, spark your creativity, and explore the future of technology in standout conference sessions.

Enhance your skillset through virtual participation in workshops.

Join Microsoft Build Live to start designing your virtual experience!


Introducing the Microsoft Edge DevTools Preview app


Last fall at the Microsoft Edge Web Summit, we laid out our plans for rebooting the Microsoft Edge DevTools in response to your feedback. Today we’re announcing the availability of the DevTools as a web app from the Microsoft Store. The new Microsoft Edge DevTools Preview app allows you to preview the very latest DevTools running side by side with the tools included in Microsoft Edge.

You can install the DevTools Preview app on Windows 10 Fall Creators Update or newer. Because the app is based on the most recent Insider Preview version of the Microsoft Edge DevTools, you can use the latest DevTools updates without installing a full Insider release.

In addition, some new features will be exclusive to the app, such as debugging EdgeHTML content outside the local browser (including web content hosted in apps) and debugging remote devices.

Debugging the web outside of the browser

When we think of the web, we largely think of browsers. But the web shows up on many more surfaces in Windows than just the browser: WebViews in apps, add-ins for Office, Cortana, Progressive Web Apps in the Microsoft Store, and many more places. The DevTools Preview app lets you attach the tools to, and debug, any instance of the EdgeHTML engine on Windows.

Screen capture of the DevTools Preview "debug targets" window

The DevTools Preview app allows you to attach the Microsoft Edge DevTools to any local or remote EdgeHTML target.

Debugging remote devices

The web also runs on devices other than your dev machine. How do you debug the web on an Xbox, HoloLens, or an IoT device? One of the first features we’re previewing in the new DevTools app is remote debugging. By enabling Device Portal in the Settings app, you can now connect to that device over the network or via USB to debug remotely from the DevTools Preview app.

Screen capture showing the Settings toggle for "Developer Mode"

We’re previewing this with support for JS debugging of another instance of Microsoft Edge on another desktop device or VM. Over time, we’ll add support for the full set of DevTools against any EdgeHTML instance on any Windows 10 device. We’ll go into more detail on remote debugging in a future post.

Screen capture of remote debugging in the Microsoft Edge DevTools

Building a DevTools protocol for an ecosystem of tools

One of the biggest changes to the DevTools isn’t visible in a screenshot: the new Microsoft Edge DevTools Protocol (EDP). Previously, the Microsoft Edge DevTools worked via invasive native hooks into the EdgeHTML and Chakra engines. This made it hard for other tools like VS, VS Code, Sonarwhal, and other open source tools in the ecosystem to support Microsoft Edge.

We started a conversation with other browsers last fall to incubate DevTools protocols at the W3C, with the goal of promoting interoperability between engines and allowing tools to support cross-browser debugging more easily.

The new DevTools app uses EDP to remotely debug Microsoft Edge today, and, eventually, all of our DevTools will use EDP. We’ll share more about EDP and our roadmap for additional tools in a future blog post.

The future of the Microsoft Edge DevTools

We’ve heard your feedback on the DevTools and we’re investing to make them great. Over the next several releases, we’ll evolve our tools based on your feedback. We encourage you to try out the DevTools Preview app, file feedback (just click the Smile icon in the DevTools), and reach out to us on Twitter with any comments!

Screen capture showing the Send Feedback button in the Edge DevTools

We’ve got lots in store for the DevTools, including ongoing improvements in reliability and performance. With the Microsoft Edge DevTools Preview app, we can address your feedback faster and experiment with new ideas for the tools. We look forward to hearing what you think!

Jacob Rossi, Principal PM Lead, Microsoft Edge DevTools

The post Introducing the Microsoft Edge DevTools Preview app appeared first on Microsoft Edge Dev Blog.

Replicated Tables now generally available in Azure SQL Data Warehouse


We are excited to announce that Replicated Tables, a new type of table distribution, are now generally available in Azure SQL Data Warehouse (SQL DW). SQL DW is a fully managed, flexible, and secure cloud data warehouse tuned for running complex queries fast and across petabytes of data.

The key to performance for large-scale data warehouses is how data is distributed across the system. When queries join across tables and the data is distributed differently, data movement is required to complete the query. The same can be said when transforming data to load, enrich, and apply business rules. With Replicated Tables, the data is available on all compute nodes, so data movement is eliminated and queries run faster. In some cases, such as small dimension tables, choosing a Replicated Table over a Round Robin table can increase performance because data movement is reduced. As with all optimization techniques, performance gains may vary and should be tested.


Reducing data movement to boost performance

During the public preview of Replicated Tables, SQL Data Warehouse customers have seen up to 5x performance gains when transforming data with Replicated Tables compared to using Round Robin distribution.

An example using the TPC-H schema and the Q2 query showcases the query benefits of replicated tables. Per the TPC-H spec, Q2 answers the following business question:

The Minimum Cost Supplier Query finds, in a given region, for each part of a certain type and size, the supplier who can supply it at minimum cost. If several suppliers in that region offer the desired part type and size at the same (minimum) cost, the query lists the parts from suppliers with the 100 highest account balances. For each supplier, the query lists the supplier's account balance (name and nation), the part's number and manufacturer, as well as the supplier's address, phone number, and comment information.

From a schema implementation perspective, the query leverages three smaller tables that align with the design guidance documentation and are prime candidates for replicated tables: supplier, nation, and region. From a query perspective, these tables are accessed a total of six times to complete this query. Below is a query execution graph comparing the run times with and without replicated tables at DW400 with a 1 TB scale factor. Different SQL DW scale settings, TPC-H queries, and scale factors can produce different results.

[Chart: query elapsed time with Round Robin vs. replicated table distribution]

For this query with a cold cache (SQL buffer cache), the performance improvement from Round Robin to replicated table distribution is 13x; with a warm cache it is over 100x. The data movement needed in the Round Robin case is a fixed cost, which is the primary reason the warm-cache run boosts performance even further. It is recommended that you follow the design guidance to understand how Replicated Tables work and to ensure they are leveraged in the right scenarios for your workload.

Recreate dimensions as replicated tables

Previously, SQL Data Warehouse instances containing smaller domain, reference, or dimension tables used the default Round Robin distribution. During query execution, data was copied to each compute node, forcing queries to run longer; system resources were also taken away from other queries on the system to move the data. A dimension table such as a date dimension can be recreated as a replicated table with a CTAS-and-rename query:

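The original post showed this query as a screenshot. A rough sketch of the typical pattern follows; the table names and connection details are placeholders, and the T-SQL is executed from Python via pyodbc purely for illustration (any query tool works equally well):

import pyodbc

# Connection string values are placeholders for your SQL Data Warehouse instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<warehouse>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,
)
cursor = conn.cursor()

# Create a replicated copy of the dimension with CTAS, then swap the names.
cursor.execute("""
    CREATE TABLE dbo.DimDate_replicate
    WITH (DISTRIBUTION = REPLICATE)
    AS SELECT * FROM dbo.DimDate;
""")
cursor.execute("RENAME OBJECT dbo.DimDate TO DimDate_roundrobin;")
cursor.execute("RENAME OBJECT dbo.DimDate_replicate TO DimDate;")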

Next steps

Replicated Tables are available in all versions of Azure SQL Data Warehouse. To get started, review the Replicated Tables design guidance.

Bing Custom Search: Build a customized search experience in minutes


It’s a little over five months since Bing Custom Search reached general availability, and we're happy to see a considerable number of websites around the world now powered by it. You can use Bing Custom Search either to power your site search or to build a vertical search experience across multiple relevant domains.

Bing Custom Search is an easy-to-use, ad-free search solution that enables users to build a search experience and query content on their specific site, or across a hand-picked set of websites or domains. To help users surface the results they want, Bing Custom Search provides a simple web interface where users can control ranking specifics and pin or block responses to suit their needs.

The Bing API Team’s goal is to empower both developers and non-developers to harness the power of the web by allowing them to build a customized search engine experience. Setting up a custom search instance is quick and easy. For customers who don’t have resources to invest in development efforts, we offer an easy-to-use Hosted UI solution at no additional cost.
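If you prefer to call the service programmatically rather than use the Hosted UI, a minimal Python sketch of querying the Custom Search REST endpoint looks roughly like the following. The endpoint, header, and parameter names reflect the v7 API as documented at the time and may have changed since; the subscription key and custom configuration ID are placeholders you obtain from the Azure and Bing Custom Search portals.

import requests

SUBSCRIPTION_KEY = "<your-subscription-key>"    # placeholder
CUSTOM_CONFIG_ID = "<your-custom-config-id>"    # placeholder
ENDPOINT = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search"

def custom_search(query):
    # Query the custom search instance and return the parsed JSON response.
    response = requests.get(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"q": query, "customconfig": CUSTOM_CONFIG_ID, "count": 10},
    )
    response.raise_for_status()
    return response.json()

for page in custom_search("contoso widgets").get("webPages", {}).get("value", []):
    print(page["name"], page["url"])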

If you believe in delighting your users and want to integrate customized Search AI capabilities into your application, then read the full blog post on the Bing Developers Blog. There you will see how Bing Custom Search works, get step-by-step guidance on how to create a search instance in minutes, and see Custom Search in action on AI: The Show hosted by Channel 9.

Accelerate real-time big data analytics with Spark connector for Microsoft SQL Databases


Apache Spark is a unified analytics engine for large-scale data processing. Today, you can use the built-in JDBC connector to connect to Azure SQL Database or SQL Server to read or write data from Spark jobs.

The Spark connector for Azure SQL Database and SQL Server enables these databases to act as input data sources and output data sinks for Spark jobs. It allows you to utilize real-time transactional data in big data analytics and persist results for ad-hoc queries or reporting.

Compared to the built-in Spark connector, this connector can bulk insert data into SQL databases, outperforming row-by-row insertion by 10x to 20x. The Spark connector for Azure SQL Database and SQL Server also supports Azure Active Directory authentication, so you can securely connect to your Azure SQL database from Azure Databricks using your AAD account. The connector provides interfaces similar to the built-in JDBC connector, making it easy to migrate your existing Spark jobs to this new connector.
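The new connector itself ships as a Scala/Java library (see the GitHub repository linked below). For reference, the built-in JDBC path mentioned above looks roughly like this in PySpark; the server, database, table, and credentials are placeholders, and the Microsoft JDBC driver must be available on the cluster (it is preinstalled on Azure Databricks):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-db-read").getOrCreate()

# Connection details are placeholders.
jdbc_url = ("jdbc:sqlserver://<server>.database.windows.net:1433;"
            "database=<database>;encrypt=true;loginTimeout=30;")

df = (spark.read
      .format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.Orders")        # hypothetical table
      .option("user", "<sql-user>")
      .option("password", "<sql-password>")
      .load())

df.show(5)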

The Spark connector for Azure SQL Database and SQL Server utilizes the Microsoft JDBC Driver for SQL Server to move data between Spark worker nodes and SQL databases:

  1. The Spark master node connects to SQL Server or Azure SQL Database and loads data from a specific table or using a specific SQL query.
  2. The Spark master node distributes data to worker nodes for transformation.
  3. Worker nodes connect to SQL Server or Azure SQL Database and write data to the database. Users can choose row-by-row insertion or bulk insert.

[Diagram: data movement between Spark worker nodes and SQL databases]


To get started, visit the azure-sqldb-spark repository on GitHub, where you will also find sample Azure Databricks notebooks and sample Scala scripts. More details are available in the online documentation.

You might also want to review the Apache Spark SQL, DataFrames, and Datasets Guide and the Azure Databricks documentation to learn more details about Spark and Azure Databricks.

Azure #CosmosDB: Secure, private, compliant


Azure Cosmos DB is Microsoft's globally distributed, multi-model database. Azure Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure's geographic regions with a single click. It offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs), a feature that no other database service can offer.

A database that holds sensitive data across international borders must meet high standards for security, privacy, and compliance. Additionally, the cloud service provider must anticipate and be ready for new standards, such as the General Data Protection Regulation (GDPR), which will soon govern the collection and use of EU residents’ data. Microsoft has pledged that Azure services will be GDPR compliant by the May 25 implementation date.


Privacy

Microsoft’s cloud privacy policies state that we will use your customer data only to provide the services we have agreed upon, and for purposes that are compatible with providing those services. We do not share your data with our advertiser-supported services, nor do we mine it for marketing or advertising. 

Encryption

Azure Cosmos DB also implements stringent security practices. All documents, attachments, and backups stored in Azure Cosmos DB are encrypted at rest and in transit without any configuration by you. You get the same low latency, high throughput, availability, and functionality with encryption enabled.

Data residency

Azure Cosmos DB is a multi-tenant, hyper-scale cloud platform available in all Azure regions, more than 50 regions worldwide. Customers can specify the region(s) where their data should be located. Microsoft may replicate customer data to other regions within the same geography for high availability and data resiliency, but Microsoft will not replicate customer data outside the chosen geography (e.g., the United States).

Azure Cosmos DB is available in four different Azure cloud environments:

  • Azure public cloud service is available globally.
  • Azure China is available through a unique partnership between Microsoft and one of the largest Internet providers in the country.
  • Azure Germany provides services under a trusted data model, which ensures that the customer data remains in Germany only.
  • Azure Government is available across four regions in the United States to US government agencies and their partners.

Compliance

To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry, both in terms of breadth (total number of offerings) and depth (number of customer-facing services in assessment scope). Azure compliance offerings are grouped into four segments: globally applicable, US government specific, industry specific, and region/country specific. All of these are applicable to Azure Cosmos DB.

Azure compliance offerings are based on several types of assurances, such as:

  • Formal certifications, attestations, validations, authorizations.
  • Assessments produced by third-party auditing firms.
  • Contractual amendments, self-assessments.
  • Customer guidance documents produced by Microsoft. 

As of April 2018, here is the full list of compliance certifications and offerings for Azure Cosmos DB. You can find a more detailed description of each of these offerings and how they benefit you.

  • CSA STAR Self-Assessment
  • CSA STAR Certification
  • CSA STAR Attestation
  • ISO 20000-1:2011
  • ISO 22301:2012
  • ISO 27001:2013
  • ISO 27017:2015
  • ISO 27018:2014
  • ISO 9001:2015
  • SOC 1 Type 2
  • SOC 2 Type 2
  • SOC 3
  • FIPS 140-2
  • 23 NYCRR 500
  • APRA (Australia)
  • DPP (UK)
  • FCA (UK)
  • FERPA
  • FFIEC
  • GLBA
  • GxP (21 CFR Part 11)
  • HIPAA and the HITECH Act
  • HITRUST
  • MAS and ABS (Singapore)
  • NEN 7510:2011 (Netherlands)
  • NHS IG Toolkit (UK)
  • PCI DSS Level 1
  • Shared Assessments
  • SOX
  • Argentina PDPA
  • Australia IRAP Unclassified
  • Canadian Privacy Laws
  • EU ENISA IAF
  • EU Model Clauses
  • EU-US Privacy Shield
  • Germany C5
  • Germany IT-Grundschutz Workbook
  • Japan My Number Act
  • Netherlands BIR 2012
  • Singapore MTCS Level 3
  • Spain DPA
  • UK Cyber Essentials Plus
  • UK G-Cloud
  • UK PASF (covers physical datacenter infrastructure)

To help you comply with national, regional, and industry-specific requirements governing the collection and use of individuals’ data, Microsoft offers the most comprehensive set of compliance offerings of any cloud service provider. See the detailed, regularly updated list of all compliance offerings.

Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter #CosmosDB, @AzureCosmosDB.

Bing-powered settings search in VS Code

Get the Azure Quick Start Guide for .NET Developers


This blog post was co-authored by Barry Luijbregts, Azure MVP.

If you’re a .NET developer, we’re excited to introduce a new resource that can help you learn about Azure: The Azure Quick Start Guide for .NET Developers!

Azure Quick Start Guide for .NET Developers Contents

Download this free e-book here.

This guide shows .NET developers how they can start with Azure and get the most out of it. The e-book is also great for .NET developers who already use Azure and want to learn more about which Azure services are available to them, and the tools they can use to develop applications for the platform.

Specifically, the Azure Quick Start Guide for .NET Developers covers:

  • What Azure can do for you as a .NET developer. The guide describes how Azure can take care of things like automatic scaling, continuous integration/continuous delivery (CI/CD), and much more so you can focus on creating the things that really matter and add value.
  • A catalog that explains what Azure services to use when for:
    • Running your .NET applications in Azure. There are many services that you can use to run your application in Azure. For instance, the guide discusses the differences between running your app in a service like Azure App Service Web Apps and running it in Azure Container Instances.
    • Storing your data. In Azure, you have many storage options for your application data. This section clarifies which services are meant for storing relational data, and which services you can use for storing NoSQL and other data. It also explains how Azure SQL Data Warehouse and Azure Data Lake Store differ.
    • Securing your .NET applications. The guide describes the most useful security services in Azure for .NET developers. For example, you can use Azure Active Directory for storing user identities and doing authentication and authorization. You can also take advantage of Azure Key Vault to store your application secrets. Managed Service Identity for Azure couples your app to Azure Active Directory and injects credentials into your application without having to store them in your config files or anywhere else.
    • Which tools you can use to develop .NET applications for Azure. Most of the tools in this guide are specifically for .NET developers, who already have access to the great tools available with Visual Studio. Now you can extend them with tools and workloads that make you more productive when you work with Azure, including the Snapshot Debugger, Cloud Explorer, and Azure Functions Core Tools.

For more information, check out this recent Azure Friday episode where e-book co-author Barry Luijbregts chats with Scott Hanselman about the Azure Quick Start Guide for .NET Developers, and demos some of the tools that .NET developers can use to develop and troubleshoot applications for Azure. These include Cloud Explorer and the new Snapshot Debugger, which enables you to debug applications in production without disrupting the application.

Links to items mentioned in this episode of Azure Friday:

We hope that this free guide helps you to start your journey into Azure or to learn more about Azure if you’re already using it. If you have any questions or feedback, please don’t hesitate to reach out to the authors of the e-book on Twitter: @CesardelaTorre, @MBCrump, @AzureBarry, and @BethMassi. And be sure to check out another great resource for developers, The Developer’s Guide to Microsoft Azure.


Train an IoT-equipped drone and compete to win at Microsoft Build



Are you ready for Microsoft’s ultimate developer event? You probably already know Microsoft Build, happening May 7–9 in Seattle, Washington, is where you need to be to connect with the experts, discover new tools, and boost your skills around cloud technologies, AI, mixed reality, and more.

Sure, there will be great speakers and tech sessions galore—but did you know you could win a drone?

That’s right, we’re having a drone contest. Participants will compete against fellow conference-goers to see whose drone can complete the outdoor search-and-rescue course designed specifically for Microsoft Build. You’ll get hands-on, end-to-end experience with Microsoft’s intelligent cloud platform, Azure IoT Edge, and be eligible to win a DJI Mavic Air drone.

How cool is that?

Here’s how it works

Contestants will create training images using AirSim, an open-source aerial informatics and robotics simulation platform. Then contestants will build and train a realistic AI drone model in the cloud using Custom Vision, and then create a container for AI deployment to Microsoft Azure IoT Edge. This intelligent cloud platform lets you run artificial intelligence at the edge of the cloud, perform analytics, deploy IoT solutions from cloud to edge-enabled devices, and manage them centrally from the cloud.

You can download the Azure IoT Edge developer toolkit from AI School during the conference, and enter your model in the contest at the Drone booth before 11 AM Pacific Time on Wednesday, May 9, 2018. At noon, we’ll randomly draw 10 lucky contestants to test their models on live drones that evening. And we’ll livestream the competition during the Microsoft Build closing celebration so everyone can get a glimpse of the action.

The winner will receive a DJI Mavic Air drone; second-place and third-place winners will take home DJI Spark drones. All finalists will receive a goodie bag full of exclusive swag. For more information and complete contest rules, please visit the Microsoft Build website.

Beyond the contest

Whether you enter the contest or fill your Microsoft Build experience with other activities, be sure to check out the drone booth in the AI area of the Expo. We’ll also have a drone museum, live mini drones flying in their own special cage, competition videos, and more.

With more than 350 incredible technical sessions and interactive workshops to choose from, and dozens of inspiring speakers, you’ll come away from Microsoft Build with a deeper understanding of where tech is headed and how you can be a part of the transformation.

All that and drones, too? Sounds like developer heaven. But only if you register for Microsoft Build today!

HDInsight tools for VS Code now supports argparse and Spark 2.2


We are happy to announce that HDInsight Tools for VSCode now supports argparse and accepts parameter-based PySpark job submission. We have also enabled the tools to support Spark 2.2 for PySpark authoring and job submission.

The argparse feature gives you great flexibility for authoring, testing, and submitting your PySpark code, for both batch jobs and interactive queries. You can take full advantage of PySpark argparse and simply keep your configuration and job-related arguments in the JSON-based configuration file.
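As a minimal illustration (the argument names and paths below are hypothetical, not part of the tooling), a PySpark script that takes its inputs via argparse might look like this, with the values supplied per submission from the JSON configuration file described below:

import argparse
from pyspark.sql import SparkSession

def main():
    parser = argparse.ArgumentParser(description="Sample PySpark job driven by arguments")
    parser.add_argument("--input", required=True, help="Input path, e.g. a wasb:// or adl:// URI")
    parser.add_argument("--output", required=True, help="Output path for results")
    parser.add_argument("--top", type=int, default=10, help="Number of rows to keep")
    args = parser.parse_args()

    spark = SparkSession.builder.appName("argparse-sample").getOrCreate()
    df = spark.read.csv(args.input, header=True, inferSchema=True)
    df.limit(args.top).write.mode("overwrite").csv(args.output)
    spark.stop()

if __name__ == "__main__":
    main()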

The Spark 2.2 update lets you benefit from the new functionality and consume the new libraries and APIs from Spark 2.2 in VSCode. You can create, author, and submit a Spark 2.2 PySpark job to a Spark 2.2 cluster. Thanks to Spark 2.2’s backward compatibility, you can also submit your existing Spark 2.0 and Spark 2.1 PySpark scripts to a Spark 2.2 cluster.

Summary of key new features

Argparse support – set up your arguments in JSON format.

  1. Set up configurations: Open the Command Palette and choose the command HDInsight: Set Configuration.


2. Set up the parameters in the xxx_hdi_settings.json file, including the script-to-cluster mapping, Livy configuration, Spark configuration, and job arguments.


Spark 2.2 support – submit PySpark batch jobs and interactive queries to a Spark 2.2 cluster.


How to install or update

First, install Visual Studio Code and download Mono 4.2.x (for Linux and Mac). Then get the latest HDInsight Tools by going to the VSCode Extension repository or the VSCode Marketplace and searching for "HDInsight Tools for VSCode".


For more information about HDInsight Tools for VSCode, please use the following resources:

Learn more about today’s announcements on the Azure blog and Big Data blog. Discover more on the Azure service updates page.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.

Organizing subscriptions and resource groups within the Enterprise


Special thanks to Robert Venable, Principal Software Engineer in the Finance Engineering team of Core Services Engineering (formerly Microsoft IT) for sharing their story of enabling development teams while ensuring security and compliance. Thanks also to Scott Hoag, Principal Cloud Solutions Architect at Opsgility and Rob Dendtler, Account Technology Strategist at Microsoft for reviewing and providing invaluable feedback.

One of the common questions that members of the Core Services Engineering and Operations teams get when speaking to customers at the Executive Briefing Center here in Redmond is how our engineering teams secure our Azure footprint for our line-of-business applications while still giving developers the freedom to go fast, maintain visibility into our environment, and use the capabilities of Visual Studio Team Services for CI/CD, release, and much more.

At the core of this answer is how we use the combination of subscriptions, resource groups, and Role Based Access Control to ensure compliance with a set of guidelines.

Let's start at the top level: Azure subscriptions. CSEO, as you can imagine, has a lot of line-of-business applications, currently over a thousand. We loosely follow the business unit pattern from the Azure enterprise scaffold - prescriptive subscription governance article.


The business unit pattern

In particular, many of our teams have adopted a common mapping of the above pattern to enterprise/federal/state/local. This common vocabulary provides practical constructs that everyone understands and can relate to, ensuring we're on the same page.

What does this translation look like in reality with examples for subscription organization? It looks like this from the top down:

  • Enterprise - This stays the same as Enterprise in the Azure scaffold for us. Enterprise level items are common concerns across the entire company – it might be ensuring we don't count internal Azure consumption as public revenue, or how secure we are across all Azure subscriptions in our tenants, or other high-level strategic objectives that we care about regardless of level. Another way to think of this might be how Microsoft reports our global quarterly earnings – it's across the entire company.
  • Federal - Our major departments are named Federal. For example, CSEO is one of the federal groups. At this level, we may have additional policies and procedures, automation that runs against our footprint, or other things specific to that department. Typically, this is where large budgets have roll-up views, etc.
  • State - A group of related services or service offerings. For example, the Tax Service Offering within the Finance IT organization. At this level, you may have additional policies and procedures, for example HIPAA, PCI, or SOX controls and procedures.
  • Local – This is where a subscription lives and is associated with a service. Each subscription contains multiple applications that are related to delivering the set of functionalities that make up the service. Each application is typically contained in an explicit resource group. The resource group becomes the container for that application, which is part of the service (the subscription). There may sometimes be a shared or common application in the service. At the application/resource group level is where the team of application developers live and they’re accountable for their footprint in Azure from security to optimal Azure spend in everything they do. A great development team operating at this level solves most of the concerns and roll-up reporting questions that are typically asked from higher levels. If each development team looks at the Azure Security Center blade, pinned dashboards built from Azure Log analytics, and the Azure Advisor blades on a daily basis, you wouldn’t have division-wide efforts created to reduce spend or bring up patch compliance, etc.

This hierarchical construct allows the differing level of controls and policies while allowing the developers to go fast. Below is a typical subscription configuration:


An example CSEO Subscription for the Tax Service

Within the resource groups above, the typical components you would see in each of the production resource groups (applications) would be the Azure components used to build that specific service such as:

  • Azure HDInsight cluster
  • Storage Accounts
  • SQL Database
  • Log Analytics
  • Application Insights
  • Etc., etc.

Each resource group has the components specific to that application. On occasion, the subscription might have a Common or Shared Services resource group. These are items that are used across the applications in the service, for example:

  • Common Log Analytics workspace
  • Common Blob Storage accounts where files are dumped for processing by the other services
  • An ExpressRoute VNet

In our CSEO Tax Service there are multiple applications, each in their own resource groups such as the data warehouse, ask tax (the help web portal and some bots), a calculation engine, an archiving application, a reporting application and more.

Within the subscription and resource groups, we use least privilege access principles to ensure that only the people that need to do the work have access to resources. Therefore, only the engineering owners of the service are the owners of the subscription. No contributors exist on the subscription. Some specific identities are added to the reader role, these are typically accounts used by automated tooling.

Each resource group has only the identities necessary added with the minimum required permissions. We try to avoid creating custom roles, which over the passage of time and with scale create management headaches.

Within the resource groups, the owner and reader roles are inherited from the subscription. The VSTS Build Account identity is added in as a contributor to the resource group for automated deployments. This means that only the service owner and the build identities can touch the production service on a continuous basis.

In the pre-production resource group, the engineering team is added to the reader role. This still means that only the service owner and build accounts can touch pre-production on a continuous basis, but the engineering team can see what's going on in the resource group. If a developer needs to do some work for testing, they can't simply put it in the pre-prod or prod environment.

There are some variations on this, but they’re not common. For instance, some teams might want someone from the security team as the subscription owner, and some teams even remove people from the equation and use a form of service account as the subscription owner. Some teams might give engineers contributor access on pre-prod if they’re not fully mature in the automation required. It all depends on the needs of the team.

So now that we have these together, what does it mean for typical roles in the organization?

Developers have access to the pre-production resource group to see what's going on in the dev/pre-production/UAT/whatever-you-want-to-call-non-production-in-your-company environment, but must get used to using the telemetry mechanisms in the platform for debugging, just like they would have to do in production. While teams are maturing to this level, you may see developers with contributor-level access at the pre-production resource groups. The result of this discipline is typically much richer Application Insights portals and Azure Log Analytics dashboards. As teams mature, they switch to deploying from a CI/CD system like Visual Studio Team Services that uses Microsoft Release Management, and get really good at creating build and release definitions. Developers also script out and automate operational concerns like key rotation.

Security & Operations have access via identities with Just-in-Time VM Access through Azure Security Center for IaaS and Privileged Identity Management through Azure AD. Some operations teams might use automated tools to interrogate our Azure subscription footprint looking for configurations to reduce risk, for example looking for a resource group containing a public internet endpoint (PiP) and an ExpressRoute circuit that might pose a security risk. These teams use those audit account identities added at the subscription level.

Another thing that this model implicitly drives is the shift of accountability from any Central IT team to the development team. This does not mean shift of concerns as security and operations teams still care about compliance and risk. But if the local development team makes part of their daily standup looking at billing, security center, and the Azure Advisor tools, then cost optimization, security compliance and concerns that are inevitably asked from the enterprise, federal and state layers will already be optimized.

Do you have a question to ask the engineers of Core Services Engineering? You can reach Lyle Dodge on Twitter at @lyledodge and our team will work on the answer to your question in a future article here, or on the IT Showcase site.

Per disk metrics for Managed & Unmanaged Disks now in public preview


Today we’re sharing the public preview of per-disk metrics for all Managed and Unmanaged Disks. This enables you to closely monitor your disks and make the right disk selection to suit your application’s usage pattern. You can also use these metrics to create alerts, diagnose issues, and build automation.

Prior to this, you could get the aggregate metrics for all the disks attached to the virtual machine (VM), which provided limited insights into the performance characteristics of your application, especially if your workload is not evenly distributed across all attached disks. With this release, it is now very easy to drill down to a specific disk and figure out the performance characteristics of your workload.

Here are the new metrics that we're enabling with today's preview:

  • OS Disk Read Operations/Sec
  • OS Disk Write Operations/Sec
  • OS Disk Read Bytes/sec
  • OS Disk Write Bytes/sec
  • OS Disk QD
  • Data Disk Read Operations/Sec
  • Data Disk Write Operations/Sec
  • Data Disk Read Bytes/sec
  • Data Disk Write Bytes/sec
  • Data Disk QD

The following GIF shows how easy it is to build a metric dashboard for a specific disk in the Azure portal.

[Animation: building a metric dashboard for a specific disk in the Azure portal]

Additionally, because of Azure Monitor integration with Grafana, it’s very easy to build a Grafana dashboard with these metrics. Here’s a GIF that shows a Grafana dashboard setup with metrics from a VM with 1 OS disk and 3 data disks.

[Animation: Grafana dashboard showing metrics from a VM with 1 OS disk and 3 data disks]

Azure CLI

With the Azure CLI it’s very easy to get the metric values programmatically. First, we need to get the list of metric definitions for a particular resource:

➜  ~ az monitor metrics list-definitions --resource /subscriptions/<sub-id>/resourceGroups/metric-rg/providers/Microsoft.Compute/virtualMachines/metric-md

Display Name                    Metric Name                       Unit            Type     Dimension Required    Dimensions
------------------------------  --------------------------------  --------------  -------  --------------------  ------------
Percentage CPU                  Percentage CPU                    Percent         Average  False
Network In                      Network In                        Bytes           Total    False
Network Out                     Network Out                       Bytes           Total    False
Disk Read Bytes                 Disk Read Bytes                   Bytes           Total    False
Disk Write Bytes                Disk Write Bytes                  Bytes           Total    False
Disk Read Operations/Sec        Disk Read Operations/Sec          CountPerSecond  Average  False
Disk Write Operations/Sec       Disk Write Operations/Sec         CountPerSecond  Average  False
CPU Credits Remaining           CPU Credits Remaining             Count           Average  False
CPU Credits Consumed            CPU Credits Consumed              Count           Average  False
Data Disk Read Bytes/Sec        Per Disk Read Bytes/sec           CountPerSecond  Average  False                 SlotId
Data Disk Write Bytes/Sec       Per Disk Write Bytes/sec          CountPerSecond  Average  False                 SlotId
Data Disk Read Operations/Sec   Per Disk Read Operations/Sec      CountPerSecond  Average  False                 SlotId
Data Disk Write Operations/Sec  Per Disk Write Operations/Sec     CountPerSecond  Average  False                 SlotId
Data Disk QD                    Per Disk QD                       Count           Average  False                 SlotId
OS Disk Read Bytes/Sec          OS Per Disk Read Bytes/sec        CountPerSecond  Average  False
OS Disk Write Bytes/Sec         OS Per Disk Write Bytes/sec       CountPerSecond  Average  False
OS Disk Read Operations/Sec     OS Per Disk Read Operations/Sec   CountPerSecond  Average  False
OS Disk Write Operations/Sec    OS Per Disk Write Operations/Sec  CountPerSecond  Average  False
OS Disk QD                      OS Per Disk QD                    Count           Average  False

Once we have the list of metrics, we can get the values for a specific metric, for example Data Disk Write Bytes/Sec in this case.

➜ ~ az monitor metrics list --resource /subscriptions/<sub-id>/resourceGroups/metric-rg/providers/Microsoft.Compute/virtualMachines/metric-md --metric 'Per Disk Write Bytes/Sec' --aggregation Maximum --filter "SlotId eq '1'" --interval PT1M --start-time 2018-04-02T21:35:09Z --end-time 2018-04-03T00:49:09Z

Timestamp            Name                         Slotid      Maximum
-------------------  -------------------------  --------  -----------
2018-04-02 21:35:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:36:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:37:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:38:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:39:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:40:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:41:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:42:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:43:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:44:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:45:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:46:00  Data Disk Write Bytes/Sec         1
2018-04-02 21:47:00  Data Disk Write Bytes/Sec         1  1.01974e+08
2018-04-02 21:48:00  Data Disk Write Bytes/Sec         1  1.02015e+08
2018-04-02 21:49:00  Data Disk Write Bytes/Sec         1  1.02041e+08
2018-04-02 21:50:00  Data Disk Write Bytes/Sec         1  1.01976e+08
2018-04-02 21:51:00  Data Disk Write Bytes/Sec         1  6.37246e+07
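If you prefer to call the Azure Monitor REST API directly instead of the CLI, a rough Python sketch follows. It assumes you already have an AAD bearer token with reader access to the subscription; the api-version and query parameters reflect the Monitor metrics API as documented around this time and may differ in newer versions.

import requests

TOKEN = "<aad-bearer-token>"  # placeholder; obtain via Azure AD
RESOURCE_ID = ("/subscriptions/<sub-id>/resourceGroups/metric-rg/providers/"
               "Microsoft.Compute/virtualMachines/metric-md")

response = requests.get(
    "https://management.azure.com" + RESOURCE_ID + "/providers/microsoft.insights/metrics",
    headers={"Authorization": "Bearer " + TOKEN},
    params={
        "api-version": "2018-01-01",
        "metricnames": "Per Disk Write Bytes/sec",
        "aggregation": "Maximum",
        "interval": "PT1M",
        "timespan": "2018-04-02T21:35:09Z/2018-04-03T00:49:09Z",
        "$filter": "SlotId eq '1'",
    },
)
response.raise_for_status()

# Print one line per data point: timestamp and the Maximum aggregation value.
for series in response.json()["value"][0]["timeseries"]:
    for point in series["data"]:
        print(point["timeStamp"], point.get("maximum"))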

OS Disk Swap for Managed Virtual Machines now available


Today, we are excited to announce the availability of the OS Disk Swap capability for VMs using Managed Disks. Until now, this capability was only available for Unmanaged Disks.


With this capability, it becomes very easy to restore a previous backup of the OS disk or swap out the OS disk for VM troubleshooting without having to delete the VM. To leverage this capability, the VM needs to be in the stop-deallocated state. After the VM is stop-deallocated, the resource ID of the existing managed OS disk can be replaced with the resource ID of the new managed OS disk. You will need to specify the name of the new disk to swap in. Please note that you cannot switch the OS type of the VM, i.e., swap a Linux OS disk for a Windows OS disk.

Here are the instructions on how to leverage this capability:

Azure CLI

To read more about using Azure CLI, see Change the OS disk used by an Azure VM using the CLI.

For the CLI, pass the full resource ID of the new disk to the --os-disk parameter.

NOTE: requires Azure CLI version 2.0.25 or later.

az vm update -g swaprg -n vm2 --os-disk /subscriptions/<sub-id>/resourceGroups/osrg/providers/Microsoft.Compute/disks/osbackup

Azure PowerShell

To read more about using PowerShell, see Change the OS disk used by an Azure VM using PowerShell.

$vm = Get-AzureRmVM -ResourceGroupName osrg -Name vm2
$disk = Get-AzureRmDisk -ResourceGroupName osrg -Name osbackup
Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzureRmVM -ResourceGroupName swaprg -VM $vm

Java SDK

// Look up the VM, point its OS disk at the replacement managed disk, and apply the update.
VirtualMachine virtualMachine = azure.virtualMachines().getById("<vm_id>");

virtualMachine
    .inner()
    .storageProfile()
    .osDisk()
    .withName("<disk-name>")
    .managedDisk()
    .withId("<disk_resource_id>");

virtualMachine.update()
    .apply();

Go SDK

// UpdateVM swaps the OS disk on an existing VM by pointing the storage profile
// at the replacement managed disk, then issuing a CreateOrUpdate.
func UpdateVM(ctx context.Context, vmName string, diskId string, diskName string) (vm compute.VirtualMachine, err error) {
    vm, err = GetVM(ctx, vmName)
    if err != nil {
        return
    }

    // The managed disk reference lives under the OS disk in the storage profile.
    // Exact field names may vary slightly between SDK versions.
    vm.VirtualMachineProperties.StorageProfile.OsDisk.Name = &diskName
    vm.VirtualMachineProperties.StorageProfile.OsDisk.ManagedDisk.ID = &diskId

    vmClient := getVMClient()

    future, err := vmClient.CreateOrUpdate(ctx, helpers.ResourceGroupName(), vmName, vm)
    if err != nil {
        return vm, fmt.Errorf("cannot update vm: %v", err)
    }

    err = future.WaitForCompletion(ctx, vmClient.Client)
    if err != nil {
        return vm, fmt.Errorf("cannot get the vm create or update future response: %v", err)
    }

    return future.Result(vmClient)
}
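For completeness, a rough Python equivalent using the current azure-mgmt-compute and azure-identity packages is sketched below. The subscription, resource group, VM, and disk names are placeholders matching the CLI example above, and on older SDK versions the long-running call is create_or_update rather than begin_create_or_update.

from azure.identity import AzureCliCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(AzureCliCredential(), "<sub-id>")

# Look up the replacement disk and the (stop-deallocated) VM.
disk = compute.disks.get("osrg", "osbackup")
vm = compute.virtual_machines.get("swaprg", "vm2")

# Point the OS disk at the replacement managed disk and apply the update.
vm.storage_profile.os_disk.managed_disk.id = disk.id
vm.storage_profile.os_disk.name = disk.name

compute.virtual_machines.begin_create_or_update("swaprg", "vm2", vm).result()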

Making IT simpler with a modern workplace


There is a simple way to explain one of the biggest threats to any organization’s infrastructure. It’s just one word: complexity.

Complexity is the absolute enemy of security and productivity. The simpler you can make your productivity and security solutions, the easier it will be for IT to manage and secure—making the user experience that much more elegant and useful. We’ve learned from building and running over 200 global cloud services that a truly modern and truly secure service is a simple one.

Microsoft 365 is built to help you solve this problem of complexity so that you can simplify. But let me be clear, simpler doesn’t mean less robust or less capable.

From thousands of conversations with customers, we heard clearly how important it is for IT to simplify the way it enables users across PCs, mobile devices, cloud services, and on-premises apps. Microsoft 365 provides that all with an integrated solution that’s simpler, yet also more powerful and intelligent.

Because the way you work and do business is so important to us, our work will never be done—we will constantly innovate, improve, and discover new and better ways to help your organization do more. Today, I am excited to announce some new capabilities and updates coming soon to Microsoft 365, including:

  • A modern desktop.
  • Solutions for Firstline Workers.
  • Streamlined device management with lower costs.
  • Integrated administration experience.
  • Built-in compliance.

Each of these new capabilities will allow you to simplify your modern workplace, which means delighting and empowering your users, while enabling IT to protect and secure the corporate assets.

Time for a modern desktop

What do I mean by a “modern desktop?”

A modern desktop is powered by Windows 10 and Office 365 ProPlus and is always up to date with insights and security powered by the cloud. After years of refinements, we believe this is the most productive and secure computing experience for businesses. Not only does it provide the richest user experience, it also helps IT better manage devices and data, with lower costs.

Today, we are making two announcements about enhancements we’ve delivered for managing modern desktops:

First, Delivery Optimization enhancements are coming in the Windows 10 April 2018 Update (which you can learn more about in Yusuf’s blog today as well).

Delivery Optimization allows for one device to download an update and then use the local network to deliver that update to peers. This significantly reduces bandwidth (by as much as 90 percent) and that results in a much better experience for everyone on the network.

With the Windows 10 April 2018 Update, you will be able to monitor Delivery Optimization Status using Windows Analytics—including how many devices are enabled and the bandwidth savings you’ve achieved.


Delivery Optimization Status using Windows Analytics.

Second, recently we announced the Readiness Toolkit for Office (RTO), which helps with your Office VBA, Macro, and add-in compatibility. The Application Health Analyzer (AHA) tool, which can assess the dependencies of your internally developed apps and help you ensure they remain compatible with Windows 10 updates, will be available in public preview in the coming months.

ConfigMgr also plays an important part in how so many of you manage the servicing process. In fact, I am excited to share that this week we hit a new milestone of 115 million devices under management by ConfigMgr! The recent 1802 release of ConfigMgr will add the ability for you to execute phased deployment rings. This will further automate the servicing of Windows 10 and Office 365 ProPlus by updating IT-defined groups one at a time, and automatically initiating the next group once the health of the first deployment is confirmed.

We recognize, however, that organizations are in various stages of transition to the cloud. To support customers who are not fully ready to move to the cloud in the near future, we will release Office 2019 in the second half of 2018. Commercial previews of the Office 2019 applications on Windows 10 are available starting today.

Finally, in February we shared there are just two years before the end of extended support for Windows 7 and Office 2010 (January and October 2020, respectively). There has never been a better time than now to plan and accelerate your transition and upgrade to a modern desktop experience with Microsoft 365.

Solutions for Firstline Workers and kiosks

Whether for customers in your lobby or for your Firstline Workers, Windows kiosk devices often are the first representation of your organization’s brands, products, or services. IT needs a simpler process to configure and manage these devices for both Firstline Workers and customer-facing kiosks.

Today, we are extending the assigned access capabilities for Windows 10, so you can easily deploy and manage kiosk devices with Microsoft Intune for your single or multiple app scenarios. This includes the new Kiosk Browser that will be available from the Microsoft Store. Kiosk Browser is great for delivering a reliable and custom-tailored browsing experience for scenarios such as retail and signage.


Kiosk Browser available from the Microsoft Store.

Over the next year, we will add additional capabilities to help you streamline kiosk deployment and keep them in a pristine state for a reliable Firstline Worker experience. You can learn more about these investments in the Windows IT Pro blog.

Kiosks and Firstline Worker devices are most secure, resilient, and performant when deployed with Windows 10 in S mode. With the Windows 10 April 2018 Update, Windows 10 Enterprise can be configured in S mode, so organizations can deploy both Credential Guard and Application Guard, and benefit from centralized management of the Microsoft Store, Cortana, and more. All of this is available with a Microsoft 365 subscription.

In addition, we are also simplifying our licensing to add the Office mobile apps for iOS and Android to Office 365 E1, F1, and Business Essential licenses. With this change, all users licensed for Microsoft 365 and Office 365, including Firstline Workers, will be able to use the Office mobile apps and be productive on the go. Outlook for iOS and Android is available to users now. Word, PowerPoint, Excel, and OneNote mobile apps will be available over the next few months.

Streamline device management at lower costs

Modern management promises to dramatically reduce and simplify the process of managing desktop images, saving valuable time and money.

Windows AutoPilot is a key part of the flexible device management approach needed in a modern workplace. It’s as simple as taking a new device from the box, powering it on, entering your credentials, and sitting back while it is configured and managed from the cloud with minimal user or IT effort. With no management of images!

Starting with the Windows 10 April 2018 Update, Windows AutoPilot now includes an enrollment status page. This page enables you to ensure policies, settings, and apps can be provisioned on the device during that out-of-box experience before the user gets to the desktop and begins interacting with the device. Now IT can ensure every device is compliant and secure before it is used.


Windows AutoPilot enrollment status page.

Lenovo announced that they are the first Microsoft OEM PC partner to have direct integration with the Windows AutoPilot deployment service. They are ramping up to worldwide availability and working with early pilot customers. Dell is also now shipping PCs with Windows AutoPilot to customers in the U.S. and select countries and can enroll devices on behalf of customers in the factory for provisioning. HP, Toshiba, Panasonic, and Fujitsu remain committed to bringing seamless deployments of Windows 10 to customers through Windows AutoPilot on their respective PCs in the fall.

Windows AutoPilot is an absolute gamechanger. I urge you to spend some time learning more about how it can simplify your deployments, reduce the massive amount of time and money you spend provisioning hardware, and, of course, your users are going to love the simplicity.

An integrated administration experience

Our vision for the cloud services we build is to help simplify your work with a unified and intuitive management experience that spans your users, devices, apps, and services.

Back in March, we took a major step in this direction by announcing the Microsoft 365 admin center as the common management entry point for your entire Microsoft 365 implementation. Today, we are expanding this integrated and intuitive admin experience to Office 365 users.


The Microsoft 365 admin center.

Users of both Office 365 and Microsoft 365 will now have access to the same admin center with the same capabilities. For Office 365 users, this means a simpler admin experience that easily integrates with your other Microsoft services—all without giving up any capabilities or control.

If you want to manage Microsoft 365, you can now simply go to admin.microsoft.com. Previously, IT pros who were managing Microsoft 365 had to go to multiple consoles. Not any longer!

Compliance that’s built-in

The complexity and difficulty of managing compliance can be overwhelming, especially for larger organizations. We updated Microsoft 365 to include built-in and continuously updated capabilities that help with regulations that govern archiving, retention, disposition, classification, and discovery of data. These new features will really help reduce the complexity of executing compliance workflows.

The Microsoft 365 Security & Compliance Center is the central place that’s integrated with Azure Active Directory, Microsoft Exchange, SharePoint, and Teams—and it allows you to import data for retention and content discovery, as well as across cloud services.


The Microsoft 365 Security & Compliance Center.

We’ve recently added several new capabilities to the Security & Compliance Center, including:

  • A new Data Privacy tab that gives you the ability to execute Data Subject Requests as part of the fulfillment requirements for the General Data Protection Regulation (GDPR).
  • Privileged Access Management that allows you to prevent standing admin privilege by providing just-in-time access for admin roles and tasks in Microsoft 365.
  • Multi-Geo Capabilities in Microsoft 365 that give you control over where your data resides at a per-user level based on your global data location and compliance needs.
  • New Advanced Data Governance controls for event-based retention and disposition.

In addition to the Security & Compliance Center, each of the apps in Microsoft 365 supports the compliance levels you need. The latest application to join this list is Microsoft Forms, a simple app for creating surveys, quizzes, and polls. Already used by more than three million users in education, Forms was brought to commercial preview last year in response to customer demand. Now, having received SOC compliance, and after feedback from more than 50,000 companies during the preview, Forms is enterprise ready and generally available to all commercial customers. To learn more, visit the Forms Tech Community.

Simplifying your IT

I am really excited about the capabilities we are delivering today. These updates are going to positively impact the way you use Microsoft 365 across desktops, devices, services, and compliance—and you will tangibly see those benefits across the countless things your IT organization manages.

Here are a handful of things you can do right now to begin to simplify your IT management:

  • Plan for Windows 7 and Office 2010 EOL (January and October 2020, respectively) and upgrade to a modern desktop.
  • Enroll in Windows Analytics, activate Upgrade Readiness, onboard your devices, and upgrade to the latest version of Windows.
  • Plan and execute your first Windows AutoPilot deployment.
  • Get familiar with the new Microsoft 365 admin center experience.
  • Start using the Security & Compliance Center and the Compliance Manager to track regulatory compliance and controls.

And don’t forget, Microsoft FastTrack is available to help guide you on your path to IT management simplification with Microsoft 365.

There is a real elegance in simplifying; it means having fewer things to manage, configure, integrate, secure, and (simply put) break. Fewer things can go wrong, and there are fewer places where a misconfiguration can create an entry point for an attacker. Now you have both a better user experience and improved IT control.

Simplified IT means better security at a lower cost, and more productivity with less risk.

The post Making IT simpler with a modern workplace appeared first on Microsoft 365 Blog.

VSTS Public Projects Limited Preview

Visual Studio Team Services (VSTS) offers a suite of DevOps capabilities to developers, including Source control, Agile planning, Build, Release, Test, and more. But until now, all of these features required the user to first log in with a Microsoft Account before they could be used. Today, we’re starting a limited preview of a new capability that... Read More

Top stories from the VSTS community – 2018.04.27

Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics, listed in no specific order: TOP STORIES Tip: Creating Task Groups with Azure Service Endpoint Parameters - Colin Dembovsky introduces the DRY practice “Don’t Repeat Yourself” with Release Management and Task Groups. Benefits of Agile for Testers - Adam... Read More

A maze-solving Minecraft robot, in R


Last week at the New York R Conference, I gave a presentation on using R in Minecraft. (I've embedded the slides below.) The demo gods were not kind to me, and while I was able to show building a randomly-generated maze in the Minecraft world, my attempt to have the player solve it automatically was stymied by some server issues. In this blog post, I'll show you how you can write an R function to build a maze, and use the left-hand rule to solve it automatically.

If you want to play along, you'll need to launch a Spigot server with the RaspberryJuice plugin. An easy way to do this is to use a Dockerfile to launch the Minecraft server. I used an Ubuntu instance of the Data Science Virtual Machine to do this, mainly because it comes with Docker already installed. Once you have your server running, you can join the multiplayer world from the Minecraft Java Edition using the server's IP address.

The R script mazebot.R steps through the process of connecting to the server from R, and building and solving a maze. It uses the miner package (which you will need to install from Github), which provides functions to interface between R and Minecraft. After connecting to the server, the first step is to get the ID code used to identify the player in Minecraft. (It's best to have just one player in the Minecraft world for this step, so you're guaranteed to get the active player's ID.)

id <- getPlayerIds()

You'll use that id code to identify the player later on. Next, we use the make_maze function (in the script genmaze.R) to design a random maze. This uses a simple recursive backtracking algorithm: explore in random directions until you can go no further, and then retreat until a new route is available to explore. Once we've generated the maze, we use print_maze to convert it to a matrix of characters, and place a marker "!" for the exit.

## Maze dimensions (we'll create a square maze)
mazeSize <- 8

## using the functions in genmaze.R:
m <- print_maze(make_maze(mazeSize,mazeSize), print=FALSE)
nmaze <- ncol(m) # dimensions
m[nmaze,nmaze-1] <- "!" ## end of the maze. 

Now for the fun bit: building the maze in the world. This is a simple matter of looping over the character matrix representing the maze, and building a stack of three blocks (enough so you can't see over the top while playing) where the walls should go. The function build_maze (in solvemaze.R) does this, so all we need to do is provide a location for the maze. Find a clear spot of land, and the code below builds the maze nearby:

## get the current player position 
v <- getPlayerPos(id, TRUE)
altitude <- -1 ## height offset of maze
pos <- v+c(3, altitude, 3) # corner

## Build a maze near the player
buildMaze(m, pos, id)

You can try solving the maze yourself, just by moving the player in Minecraft. It's surprising how difficult even small mazes can be if you don't cheat by looking at the layout of the maze from above! A simple way to solve this and many other mazes is the left-hand rule: follow the wall on your left until you find the exit. This is something we can also code in R to solve the maze automatically: check the positions of the walls around the player, and move the player according to the left-hand rule. Unfortunately, you can't actually make the player avatar turn using Spigot, so the solveMaze function tracks the direction the player should be facing with the botheading variable as it walks the player through the maze.

And here's what it looks like from a third-person view (use F5 in the game to change the view):

Animation of the player solving the maze automatically

You can find all of the R code to implement the maze-building and maze-solving at the Github repository below. For more on using R in Minecraft, check out the online book R Programming in Minecraft, which has lots of other ideas for building and playing in Minecraft with R.

Github (revodavid): minecraft-maze

Cosmos DB Solves Common Data Challenges in App Development


When you implement your application against a relational database, that choice can dramatically change how you build it. Some of the challenges include adding an abstract implementation of the schema in the code, mapping data to objects, building queries, and preventing SQL injection attacks. Wouldn’t it be great if there were a way to reduce the number of problems that can arise with data when building your app? Cosmos DB could be the answer for you.

Azure Cosmos DB is a fully managed, globally distributed database service that alleviates the common application development stressors of working with data. Cosmos DB also comes with guaranteed high availability, low latency, and scalable throughput and storage. Getting started is fast and easy, thanks to Cosmos DB’s schema-agnostic database engine and its comprehensive set of APIs.

In this blog post, we’ll walk through how Cosmos DB solves these problems, along with some of its other benefits. Using a sample .NET web application, we’ll see how to use the Cosmos DB SQL .NET API, which allows you to query data with familiar SQL syntax.

Core Cosmos DB Concepts

A Cosmos DB database is the central container for your data and for the users who have varying administrative rights to it. Databases store collections, which are repositories of documents containing the raw data in JSON format. Each document represents one unit of data, defined by its properties and their respective values. These terms map directly to a traditional database such as SQL Server, as illustrated in the comparison table below.

SQL Server | Cosmos DB
Database   | Database
Table      | Collection
Record     | Document
Column     | Property

Sample Overview

Imagine a common scenario where you expect a user to interact with your application and its underlying data. Usually, the user will want to view the data in a readable format and have a way to alter, add, or delete it.

The sample is an ASP.NET web application that allows a user to maintain a list of their favorite Microsoft documentation links and Channel 9 videos. The application’s data is stored and maintained in Cosmos DB, which contains the user-populated data along with the web page metadata the application retrieves from the provided documentation or video URL.

Visit the Url Notes repository to view the complete sample.

Domain model flexibility

The Cosmos DB SQL .NET API makes working with your model seamless by managing the object mapping for you. You can work directly with your domain objects; perfect for applications that require create, read, update, and delete (CRUD) operations. This will look familiar if you’ve worked with the object relational mapping provided by Entity Framework.

In the sample, GetAllDocs demonstrates getting all documents that belong to a collection. Notice that this method has a generic type parameter, which is used to pass the model type that the data set will bind to.

The sample uses generic methods as a straightforward way to pass types to CreateDocumentQuery, which requires a type parameter.
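The overall shape of such a method looks roughly like the sketch below. This is an illustration rather than the sample's exact code: the _client field (a DocumentClient), the _databaseName field, and the FeedOptions value are assumptions.

public List<T> GetAllDocs<T>(string collectionName)
{
    // Build the collection URI from the database and collection names.
    var collectionUri = UriFactory.CreateDocumentCollectionUri(_databaseName, collectionName);

    // CreateDocumentQuery<T> deserializes each JSON document into the domain type T.
    return _client.CreateDocumentQuery<T>(collectionUri, new FeedOptions { MaxItemCount = -1 })
                  .AsEnumerable()
                  .ToList();
}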

Here’s an example of the returned data in the debugger and the data in the portal.

Screenshot of an object while debugging the url notes app in visual studio

A Video object as seen while debugging in Visual Studio

Screenshot of a document in cosmos db

The same Video object as a document in Cosmos DB via the Cosmos DB Data Explorer in the Azure Portal

Queries with SQL-like syntax or LINQ

The API gives developers the option to build queries using familiar SQL or LINQ syntax. It helps reduce query complexity, since building queries against document properties is much easier, and you can still protect against SQL injection by using parameters.

The sample’s GetDocument method uses LINQ to search for a document by ID, a property of the Document type, as all documents have a unique ID.

Here’s a conceptual example of a query with a Video object:
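(This is a minimal sketch, assuming client is an initialized DocumentClient; the database, collection, and Video property names are illustrative rather than the sample's exact schema.)

var collectionUri = UriFactory.CreateDocumentCollectionUri("UrlNotes", "Videos");

// A LINQ query that the SDK translates into Cosmos DB SQL; Title is an assumed property.
var videos = client.CreateDocumentQuery<Video>(collectionUri)
                   .Where(v => v.Title.Contains("Cosmos DB"))
                   .AsEnumerable()
                   .ToList();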

In the sample, a user can choose up to two properties to go along with their search text, in searchTerms and searchText, respectively. The Search method uses these parameters to look for matches in the specified properties. The query is built from those properties, and the search text is set as a SQL parameter.
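Conceptually, that kind of parameterized query could look like the following sketch (the property names and CONTAINS usage are illustrative assumptions, not the sample's exact query):

// The search text is passed as a SQL parameter rather than concatenated into the query,
// which protects against SQL injection.
var query = new SqlQuerySpec(
    "SELECT * FROM c WHERE CONTAINS(c.Title, @searchText) OR CONTAINS(c.Description, @searchText)",
    new SqlParameterCollection { new SqlParameter("@searchText", searchText) });

var results = client.CreateDocumentQuery<Video>(collectionUri, query)
                    .AsEnumerable()
                    .ToList();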

Reduce the complexities of inserts, updates, and deletes

The API provides operations for making changes to your data, whether that’s adding new records, altering them, or removing them. For most of these operations, the API only requires the collection and the domain model object.

CreateDocumentAsync requires two parameters: the URL of the collection and the object to be inserted into it. For example, inserting a new document into the Video collection requires passing the name of the Video collection and the Video object. When adding a new document to a collection, the API takes care of serializing the object to JSON.
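As a rough sketch (the database, collection, and property names here are assumptions for illustration):

// Insert a new Video document; the SDK serializes the object to JSON for us.
var collectionUri = UriFactory.CreateDocumentCollectionUri("UrlNotes", "Videos");
var video = new Video { Title = "Intro to Cosmos DB", Url = "https://channel9.msdn.com" };

await client.CreateDocumentAsync(collectionUri, video);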

The API’s ReplaceDocumentAsync method reduces the complexities of updating by providing multiple ways to directly access and change properties. In the sample, the user-editable properties are accessed by their document’s property name.

Before updating a document in EditDocument, the sample confirms it exists, then returns it as a Document object in the doc variable. Finally, the document’s properties are updated via the object’s SetPropertyValue method before calling ReplaceDocumentAsync.
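That flow looks roughly like this sketch (the property names and the earlier existence check are assumed, not the sample's exact code):

// doc is the Document returned by the existence check in EditDocument.
doc.SetPropertyValue("Title", updatedTitle);
doc.SetPropertyValue("Description", updatedDescription);

// Replace the stored document with the modified one.
await client.ReplaceDocumentAsync(doc);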

Alternatively, you can use your defined domain objects to update the document. Below is an example of how that could be done with the Video object:
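(This is a minimal sketch; the document URI construction and property names are assumptions rather than the sample's exact code.)

var documentUri = UriFactory.CreateDocumentUri("UrlNotes", "Videos", id);

// Read the document bound to the Video domain type, change it, and replace it.
var response = await client.ReadDocumentAsync<Video>(documentUri);
Video video = response.Document;
video.Title = updatedTitle;

await client.ReplaceDocumentAsync(documentUri, video);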

Use your favorite database management features

Cosmos DB comes with the ability to create and manage stored procedures, user-defined functions, and triggers. This functionality will be familiar to those using SQL Server or another database that has these features. Use the API, the Azure portal, the command line (CLI), or the Cosmos DB Data Explorer to manage these features. While this sample does not use any of these features, here’s an example stored procedure:

A Cosmos DB stored procedure in the Azure Portal

High Availability and Scaling

In addition to what the API offers, Cosmos DB also comes with the guarantee of optimal performance with features like low latency, automatic indexing, region-based availability, and scalable throughput.

Cosmos DB enables region-based availability with a replication protocol that ensures database operations are performed in the region local to the client. One benefit of this is low latency at the 99th percentile, along with guaranteed data consistency across all regions. By default, Cosmos DB also automatically indexes documents after each insert, delete, and replace/update.

Performance-dependent variables like CPU and memory are all expressed in a measurement specific to Cosmos DB called Request Units (RUs). With RUs, throughput is managed based on the computational complexity of the operations performed against the database. For common scenarios, a database with predictable performance can start at 100 RUs/second, with the option to provision more as needed. You can use the request unit calculator to estimate the RUs you’ll need.

Worried about throughput limits? With the Cosmos DB .NET APIs you get the added bonus of automatic retries for single clients that are operating above the provisioned rate. Additionally, you can catch the DocumentClientException rate limit exceptions and customize the retry process yourself.
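A hedged sketch of a hand-rolled retry after a throttled (HTTP 429) request might look like this; the operation being retried and the surrounding variables are illustrative:

try
{
    await client.CreateDocumentAsync(collectionUri, video);
}
catch (DocumentClientException ex) when (ex.StatusCode == (HttpStatusCode)429)
{
    // The exception tells us how long to back off before retrying.
    await Task.Delay(ex.RetryAfter);
    await client.CreateDocumentAsync(collectionUri, video);
}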

Summary

Building data-driven applications can come with several problems to solve, and Cosmos DB can lighten the load of addressing them. With a comprehensive API for working with domain objects directly in database operations, and a globally available, scalable database, Cosmos DB simplifies these complexities and allows developers to focus on what matters to them most.

Related Links

Interested in learning more about Cosmos DB? In-depth information on the topics covered in this post can be found in the links below. New to Azure? Sign up today, no credit card required.

Because it’s Friday: Every Wes Anderson Movie


I've found the Honest Trailers series a bit hit-and-miss: sometimes the virtual eyebrow arches just a bit too sharply. But this take on Wes Anderson's films is spot on, and actually makes for a loving review of a series that's a genre in its own right.

By the way, apologies to anyone who came looking for the BIF post last Friday -- I was simply having too much fun at the NY R Conference to make a post! In the meantime that's all from us for this week: have a great weekend and we'll be back with more next week (including your regularly scheduled Friday post).

Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly


A couple of days ago I Added Resilience and Transient Fault handling to your .NET Core HttpClient with Polly. Polly provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call. That means I can say things like "Call this API and try 3 times if there are any issues before you panic," as an example. It lets me move the cross-cutting concerns - the policies - out of the business part of the code and over to a central location or module, or even configuration if I wanted.

I've been upgrading my podcast of late at https://www.hanselminutes.com to ASP.NET Core 2.1 and Razor Pages with SimpleCast for the back end. Since I've removed SQL Server as my previous data store and I'm now sitting entirely on top of a third-party API, I want to think about how often I call this API. As a rule, there's no reason to call it any more often than a change might occur.

I publish a new show every Thursday at 5pm PST, so I suppose I could cache the feed for 7 days, but sometimes I make description changes, add links, update titles, etc. The show gets many tens of thousands of hits per episode and I definitely don't want to abuse the SimpleCast API, so I decided that caching for 4 hours seemed reasonable.

I went and wrote a bunch of caching code on my own. This is fine and it works and has been in production for a few months without any issues.

A few random notes:

  • Stuff is being passed into the Constructor by the IoC system built into ASP.NET Core
    • That means the HttpClient, Logger, and MemoryCache are handed to this little abstraction. I don't new them up myself
  • All my "Show Database" is, is a GetShows()
    • That means I have TestDatabase that implements IShowDatabase that I use for some Unit Tests. And I could have multiple implementations if I liked.
  • Caching here is interesting.
    • Sure, I could do the caching in just a line or two, but a caching double-check is more necessary than one often realizes.
    • I check the cache, and if I hit it, I am done and I bail. Yay!
    • If not, let's wait on a SemaphoreSlim. This is a great, simple way to manage waiting on a limited resource. I don't want to accidentally have two threads call out to the SimpleCast API if I'm literally in the middle of doing it already.
      • "The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short."
    • So I check again inside that block to see if the data showed up in the cache in the space between there and the previous check. Doesn't hurt to be paranoid.
    • Got it? Cool. Store it away and release the semaphore in the finally of the try.

Don't copy paste this. My GOAL is to NOT have to do any of this, even though it's valid.

public class ShowDatabase : IShowDatabase
{
    private readonly IMemoryCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IMemoryCache memoryCache,
        ILogger<ShowDatabase> logger,
        SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = memoryCache;
    }

    static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

    static HttpClient client = new HttpClient();

    public async Task<List<Show>> GetShows()
    {
        Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

        var cacheKey = "showsList";
        List<Show> shows = null;

        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(cacheKey, out shows))
        {
            _logger.LogDebug($"Cache HIT: Found {cacheKey}");
            return shows.Where(whereClause).ToList();
        }

        await semaphoreSlim.WaitAsync();
        try
        {
            //RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(cacheKey, out shows))
            {
                _logger.LogDebug($"Amazing Speed Cache HIT: Found {cacheKey}");
                return shows.Where(whereClause).ToList();
            }

            _logger.LogWarning($"Cache MISS: Loading new shows");
            shows = await _client.GetShows();
            _logger.LogWarning($"Cache MISS: Loaded {shows.Count} shows");
            _logger.LogWarning($"Cache MISS: Loaded {shows.Where(whereClause).ToList().Count} PUBLISHED shows");

            var cacheExpirationOptions = new MemoryCacheEntryOptions();
            cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
            cacheExpirationOptions.Priority = CacheItemPriority.Normal;

            _cache.Set(cacheKey, shows, cacheExpirationOptions);
            return shows.Where(whereClause).ToList();
        }
        catch (Exception e)
        {
            _logger.LogCritical("Error getting episodes!");
            _logger.LogCritical(e.ToString());
            _logger.LogCritical(e?.InnerException?.ToString());
            throw;
        }
        finally
        {
            semaphoreSlim.Release();
        }
    }
}

public interface IShowDatabase
{
    Task<List<Show>> GetShows();
}

Again, this is great and it works fine. But the BUSINESS is in _client.GetShows() and the rest is all CEREMONY. Can this be broken up? Sure, I could put stuff in a base class, or make an extension method and bury it in there, or use Akavache or make a GetOrFetch and start passing around Funcs of "do this but check here first":

IObservable<T> GetOrFetchObject<T>(string key, Func<Task<T>> fetchFunc, DateTimeOffset? absoluteExpiration = null);

Could I use Polly and refactor via subtraction?

Per the Polly docs:

The Polly CachePolicy is an implementation of read-through cache, also known as the cache-aside pattern. Providing results from cache where possible reduces overall call duration and can reduce network traffic.

First, I'll remove all my own caching code and just make the call on every page view. Yes, I could write the LINQ a few ways. Yes, this could all be one line. Yes, I like Logging.

public async Task<List<Show>> GetShows()
{
    _logger.LogInformation($"Loading new shows");
    List<Show> shows = await _client.GetShows();
    _logger.LogInformation($"Loaded {shows.Count} shows");
    return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
}

No caching, I'm doing The Least.

Polly supports both the .NET MemoryCache that is per process/per node, and also .NET Core's IDistributedCache for having one cache that lives somewhere shared, like Redis or SQL Server. Since my podcast is just one node, one web server, and it's low-CPU, I'm not super worried about it. If Azure WebSites does end up auto-scaling it, sure, this cache strategy will happen n times. I'll switch to Distributed if that becomes a problem.

I'll add a reference to Polly.Caching.MemoryCache in my project.

I ensure I have the .NET Memory Cache in my list of services in ConfigureServices in Startup.cs:

services.AddMemoryCache();

STUCK...for now!

AND...here is where I'm stuck. I got this far into the process and now I'm either confused OR I'm in a Chicken and the Egg Situation.

Forgive me, friends, and Polly authors, as this Blog Post will temporarily turn into a GitHub Issue. Once I work through it, I'll update this so others can benefit. And I still love you; no disrespect intended.

The Polly.Caching.MemoryCache stuff is several months old, and existed (and worked) well before the new HttpClientFactory stuff I blogged about earlier.

I would LIKE to add my Polly Caching Policy chained after my Transient Error Policy:

services.AddHttpClient<SimpleCastClient>().
    AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    )).
    AddPolicyHandlerFromRegistry("myCachingPolicy"); //WHAT I WANT?

However, that policy hasn't been added to the Policy Registry yet. It doesn't exist! This makes me feel like some of the work that is happening in ConfigureServices() is a little premature. ConfigureServices() is READY, AIM and Configure() is FIRE/DO-IT, in my mind.

If I set up a Memory Cache in Configure, I need to use the Dependency System to get the stuff I want, like the .NET Core IMemoryCache that I put in services just now.

public void Configure(IApplicationBuilder app, IPolicyRegistry<string> policyRegistry, IMemoryCache memoryCache)
{
    MemoryCacheProvider memoryCacheProvider = new MemoryCacheProvider(memoryCache);
    var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromMinutes(5));
    policyRegistry.Add("cachePolicy", cachePolicy);
    ...

But at this point, it's too late! I need this policy up earlier...or I need to figure out a way to tell the HttpClientFactory to use this policy...but I've been using extension methods in ConfigureServices to do that so far. Perhaps some extension methods are needed, like AddCachingPolicy(). However, from what I can see:

  • This code can't work with the ASP.NET Core 2.1's HttpClientFactory pattern...yet. https://github.com/App-vNext/Polly.Caching.MemoryCache
  • I could manually new things up, but I'm already deep into Dependency Injection...I don't want to start newing things and get into scoping issues.
  • There appear to be changes between  v5.4.0 and 5.8.0. Still looking at this.
  • Bringing in the Microsoft.Extensions.Http.Polly package brings in Polly-Signed 5.8.0...

I'm likely bumping into a point in time thing. I will head to bed and return to this post in a day and see if I (or you, Dear Reader) have solved the problem in my sleep.

"code making and code breaking" by earthlightbooks - Licensed under CC-BY 2.0 - Original source via Flickr


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     