
How Azure Security Center detects a Bitcoin mining attack


Azure Security Center helps customers deal with a myriad of threats using advanced analytics backed by global threat intelligence. In addition, a team of security researchers often works directly with customers to gain insight into security incidents affecting Microsoft Azure customers, with the goal of constantly improving Security Center detection and alerting capabilities.

In the previous blog post "How Azure Security Center helps reveal a Cyberattack", security researchers detailed the stages of one real-world attack campaign that began with a brute force attack detected by Security Center and the steps taken to investigate and remediate the attack. In this post, we’ll focus on an Azure Security Center detection that led researchers to discover a ring of mining activity, which made use of a well-known bitcoin mining algorithm named Cryptonight.

Before we get into the details, let’s quickly explain some terms that you’ll see throughout this blog. “Bitcoin Miners” are a special class of software that use mining algorithms to generate or “mine” bitcoins, which are a form of digital currency. Mining software is often flagged as malicious because it hijacks system hardware resources like the Central Processing Unit (CPU) or Graphics Processing Unit (GPU) as well as network bandwidth of an affected host. Cryptonight is one such mining algorithm which relies specifically on the host’s CPU. In our investigations, we’ve seen bitcoin miners installed through a variety of techniques including malicious downloads, emails with malicious links, attachments downloaded by already-installed malware, peer to peer file sharing networks, and through cracked installers/bundlers.

Initial Azure Security Center alert details

Our initial investigation started when Azure Security Center detected suspicious process execution and created an alert like the one below. The alert provided details such as date and time of the detected activity, affected resources, subscription information, and included a link to a detailed report about hacker tools like the one detected in this case.

Suspicious process executed: threat summary

We began a deeper investigation, which revealed that the initial compromise was through a suspicious download detected as “HackTool: Win32/Keygen”. We suspect one of the administrators on the box was trying to download tools that are typically used to patch or "crack" software keys. Malware is frequently installed along with these tools, giving attackers a backdoor and access to the box.

  • Based on our log analysis, the attack began with the creation of a user account named “*server$”.
  • The “*server$” account then created a scheduled task called “ngm”. This task launched a batch script named “kit.bat” located in the “C:\Windows\Temp\ngmtx” folder.
  • We then observed a process named “servies.exe” being launched with Cryptonight-related parameters.
  • Note: The ‘bond007.01’ represents the bitcoin user’s account behind this activity and ‘x’ represents the password.

Image 1

Two days later we observed the same activity with different file names. In the screenshot below, sst.bat has replaced kit.bat and mstdc.exe has replaced servies.exe. This same cycle of batch file and process execution was observed periodically.

Image 2

These .bat scripts appear to be used for making connections to the crypto net pool (XCN or Shark coin) and are launched by a scheduled task that restarts these connections approximately every hour.

Additional Observation: The downloaded executables used for connecting to the bitcoin service and generating the bitcoins are renamed from the original 32.exe and 64.exe to “mstdc.exe” and “servies.exe” respectively. This naming scheme is based on an old technique used by attackers trying to hide malicious binaries in plain sight: the technique attempts to make files look like legitimate, benign-sounding Windows filenames.

  1. Mstdc.exe: “mstdc.exe” looks like “msdtc.exe”, which is a legitimate executable on Windows systems: the Microsoft Distributed Transaction Coordinator, required by various applications such as Microsoft Exchange or SQL Server installed in clusters.
  2. Servies.exe: Similarly, “servies.exe” mimics “services.exe”, the legitimate Service Control Manager (SCM), a special system process under the Windows NT family of operating systems which starts, stops and interacts with Windows service processes. Here again the attackers are trying to hide by using similar-looking binary names. “Servies.exe” and “services.exe” look very similar, don’t they? It’s a simple but effective tactic, and a basic check for such near-miss names is sketched below.
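To make the lookalike-name trick concrete, here is a small illustrative sketch (not part of the original investigation; the process names below are hypothetical samples) that flags process names sitting within a couple of character edits of well-known Windows binaries. A hunting script along these lines would surface names such as “servies.exe” or “mstdc.exe”:

import java.util.List;

public class LookalikeNameCheck {

    // Levenshtein edit distance between two strings.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        // Well-known Windows binaries that attackers like to imitate.
        List<String> legitimate = List.of("services.exe", "msdtc.exe", "svchost.exe", "lsass.exe");
        // Process names observed on the host (hypothetical sample data).
        List<String> observed = List.of("servies.exe", "mstdc.exe", "notepad.exe");

        for (String name : observed) {
            for (String known : legitimate) {
                int dist = editDistance(name.toLowerCase(), known);
                // Distance of 1 or 2 (but not 0) means it looks like, but is not, the real binary.
                if (dist > 0 && dist <= 2) {
                    System.out.println("Suspicious lookalike: " + name + " resembles " + known);
                }
            }
        }
    }
}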

As we did our timeline log analysis, we noted other activity, including wscript.exe using “VBScript.Encode” to execute ‘test.zip’.

Image 3

On extraction, it revealed an ‘iissstt.dat’ file that was communicating with an IP address in Korea. The ‘mofcomp.exe’ command appears to register this file with WMI. The mofcomp.exe compiler parses a file containing MOF statements and adds the classes and class instances defined in the file to the WMI repository.

Recommended remediation and mitigation steps

The initial compromise was the result of malware installed through cracked installers/bundlers, which led to the machine being fully taken over. With that, our recommendation was first to rebuild the machine if possible. However, with the understanding that this sometimes cannot be done immediately, we recommend implementing the following remediation steps:

1. Password Policies: Reset passwords for all users of the affected host and ensure password policies meet best practices.

2. Defender Scan: Run a full antimalware scan using Microsoft Antimalware or another solution, which can flag potential malware.

3. Software Update Consideration: Ensure the OS and applications are being kept up to date. Azure Security Center can help you identify virtual machines that are missing critical and security OS updates.

4. OS Vulnerabilities & Version: Align your OS configurations with the recommended rules for the most hardened version of the OS. For example, do not allow passwords to be saved. Update the operating system (OS) version for your Cloud Service to the most recent version available for your OS family. Azure Security Center can help you identify OS configurations that do not align with these recommendations, as well as Cloud Services running outdated OS versions.

5. Backup: Regular backups are important not only for the software update management platform itself, but also for the servers that will be updated. To ensure that you have a rollback configuration in place in case an update fails, make sure to back up the system regularly.

6. Avoid Usage of Cracked Software: Using cracked software introduces unwanted risk into your home or business by way of malware and other threats associated with pirated software. Microsoft highly recommends avoiding the use of cracked software and following the legal software policy recommended by your organization.


7. Email Notification: Finally, configure Azure Security Center to send email notifications when threats like these are detected.

  • Click the Policy tile in the Prevention section.
  • On the Security Policy blade, pick the subscription you want to configure email alerts for.
  • This brings you to the Security Policy blade for that subscription. Click the Email Notifications option to configure email alerting.

An email alert from Azure Security Center will look like the one below.

Notification

To learn more about Azure Security Center, see the Azure Security Center documentation.


“How to make a movie the secure way” at NAB Show 2017


At the NAB Show 2017 Conference this week, we’ll be reprising our session “Securing the Making of the Next Hollywood Blockbuster” (Las Vegas | April 25, 2017 | 1:30 PM - 2:00 PM in the Cybersecurity & Content Protection Pavilion – Booth C3830CS – Central Hall). Azure’s very own Joel Sloss will walk the audience through the transition to secure movie production in the cloud, made possible by Microsoft Azure and a cadre of ISV partners who’ve ported their solutions to the platform.

Microsoft at NAB Show 2017

Content comes in many forms and must be stored in many places, including on servers, workstations, mobile storage, archives, etc., which presents a massive security challenge. Securing this ever-moving data on top of the added challenge of properly handling personal information such as health records, contract details, and paystubs, strains an already complicated data governance situation.

In this session, we’ll look at an end-to-end workflow leveraging Azure and the combined wizardry of Avid, 5th Kind, Contractlogix, Docusign, MarkLogic, and SyncOnSet. Lulu Zezza, Physical Production Executive and the driving force behind the end-to-end workflow, will be co-presenting with Mr. Sloss.  She noted that, “Moving to the cloud is the best way to implement security controls across so many different physical and logical environments, locations, and data types. We have people working all over the world, for different companies, using different systems, all contributing to the same production. In the past, it’s been like a free-for-all, with contractors getting access to things they shouldn’t, information being duplicated and stored in the wrong places, and sensitive content left out in the open.”

The new digital workflow enables a secure “script-to-screen” experience for the management of both production data and the crew’s personal HR information, to which new global privacy standards apply. Metadata captured from contracts, scripts, and camera files is associated with filming days, scenes and takes recorded, and later to the final edit of the film, reducing the need for document sharing and film screenings. Plus communications are kept protected and confidential. It’s a whole new way to make movies.

Join Joel at his session where you’ll hear about:

  • Architectural considerations for multi-domain cloud environments
  • Secure access and device management for BYOD users
  • Content protection and privacy in connected and disconnected networks

Learn more information about Microsoft’s activities at NAB Show 2017.

R 3.4.0 now available


R 3.4.0, the latest release of the R programming language (codename: "You Stupid Darkness"), is now available. This is the annual major update to the R language engine, and provides improved performance for R programs. The source code was released by the R Core Team on Friday and binaries for Windows, Mac and Linux are available for download now from CRAN.

The most significant change in this release is that the JIT ('Just In Time') byte-code compiler is now enabled at level 3 by default. This means that top-level loops will be compiled before being run, and all functions will be compiled on their first or second use. Byte-compilation has been available for packages for some time, but the inclusion of this just-in-time compiler means you'll see similar performance benefits for your own R code without having to take any additional steps to compile it.

There have been various other performance improvements as well. Sorting vectors of numbers is faster (thanks to the use of the radix-sort algorithm by default). Tables with missing values compute quicker. Long strings no longer cause slowness in the str function. And the sapply function is faster when applied to arrays with dimension names.

Also on the performance front, R will now use some higher-performance BLAS routines for linear algebra operations (for example, DGEMV is now used instead of DGEMM). R falls back to single-threaded (but higher-precision) built-in linear algebra functions when missing values are present in the data, and the check for that situation has also been sped up, which should further improve performance.

For package authors using compiled C/C++/Fortran code, there is a new function to register the routines that should be exposed to other packages. Registering functions speeds up the process of calling compiled code, particularly on Windows systems. The gain is measured in the order of microseconds per call, but when these functions are called thousands or millions of times the impact can be noticeable.

This update also makes a number of tweaks to the language to improve consistency, especially in corner cases. R is now more consistent in its handling of missing values when constructing tables. Some common programmer errors, like comparing a vector with a zero-length array, now generate warnings. And there have also been some accuracy improvements for extreme values in some statistical functions.

This is just a summary of the changes; check out the announcement linked below for the detailed list. As always, big thanks go to the volunteers of the R Core Group for continuing to push the boundaries of R's capabilities with each annual release and the several patches that improve it each year. (By the way, the R Foundation now accepts donations via its website. If you benefit from using R, consider contributing.)

R-announce mailing list: R 3.4.0 is released

Update on Team Explorer in Visual Studio 2017


Last month, we shipped Visual Studio 2017 RTM and since then we’ve had many reports on Team Explorer issues.  In a nutshell, the quality of Team Explorer in Visual Studio 2017 RTM isn’t up to our usual standards. Most of the bugs stem from 2 sets of changes: a major refactor of the authentication library and moving the Git experience from Libgit2 to Git for Windows. Those changes enabled us to add new features, as well as other Git features we haven’t released yet, but the changes weren’t high quality. We’re working to ship fixes as fast as possible.

Going forward, we’re making several changes in how we work so this doesn’t happen again.

  1. Expand our test matrix to cover more use cases with a specific focus on different network configurations
  2. Engage the community earlier on design changes and make sure those make it into early preview builds
  3. Deliver major architectural changes into preview builds earlier

Please keep sending your feedback using the Developer Community site. That’s the best way for us to track and fix issues.

Upcoming fixes

We’re making the following fixes to Visual Studio Team Explorer, as well as Team Services, between now and Visual Studio 2017 Update 3. Most of the issues will be fixed by Update 2, but some will wait until Update 3.

Fixed in Visual Studio 2017 Update 1

Fixed in Visual Studio 2017 Update 2 (current preview).  You can find out how to participate in previews in this blog post.

Fixed in Visual Studio 2017 Update 2 (next preview). You can find out how to participate in previews in this blog post.

To be fixed in Visual Studio 2017 Update 3

To be fixed on Visual Studio Team Services (we found a service-side fix)

Configuring your release pipelines for safe deployments


For large, high-scale applications, the promise of “enterprise grade” availability and high reliability is key to customer confidence in the application. Continuous delivery pipelines for such scaled-out applications typically consist of multiple environments.

DevOps enables faster, automated delivery of changes, helping customers get the most advanced set of features sooner. However, any change to a production system carries risk. Safe deployment guidelines help manage this risk for large scaled-out applications, thereby fulfilling the customer promise.

In this blog post, we will share the safe deployment guidelines that we follow at Microsoft and how we configure the pipelines, or release definitions, using Visual Studio Team Services to enforce them.

Gradual rollout to multiple environments

For the applications under discussion, the production environment comprises multiple scale units (one per region).

Before deploying to any of the production environments, you may want to deploy the changes to a test or staging environment (as a final quality validation) and to a canary environment that interacts with the production instances of dependent services while serving some synthetic load.

Also, it is recommended not to deploy to all production environments in one go, exposing all customers to the changes at once. A gradual rollout exposes the changes to customers over a period of time, implicitly validating the changes in production with a smaller set of customers at a time.

In effect, the deployment pipeline would look like the following:
order of environments in deployment pipeline
As an example, for an application deployed in 12 regions, with the four US regions carrying a high load, the four European regions a medium load, and the four Asian regions a relatively lighter load, the order of rollout would be as follows.

  Environment: Description
  Test: Run final functional validation on the application.
  Canary: Process synthetic load on the application, interacting with production instances of dependent services.
  Pilot customers: Pilot customers (internal and early adopter customers) are onboarded to a separate scale unit. Deploy after deployment to Canary succeeds.
  Asian regions 1, 2, 3 & 4: Asian regions have a lighter load. Deploy to all four regions in parallel after deployment to Pilot succeeds.
  European regions 1, 2, 3 & 4: European regions have a medium load. Deploy to all four regions in parallel after deployment to all Asian regions succeeds.
  US regions 1, 2, 3 & 4: US regions have a high load. Deploy to all four regions in parallel after deployment to all European regions succeeds.

In a release definition, we use environment triggers to configure the environment deployment conditions as follows.

environment deployment conditions for safe deployment

If required, you can configure the deployments to the four scale units in each group of regions to run sequentially, for an additional level of control.

Uniform deployment workflow for all environments

As discussed above, we are deploying the application to each scale unit independently. A deployment and validation process is defined for each of the scale units. As a best practice, you should follow the same procedure to deploy those bits in every environment. The only thing that should change from one environment to the next is the configuration you want to apply and the validation steps to execute.

To enforce that the deployment procedure is the same across environments, we define a task group for the deployment procedure and include it in each of the environments. The different configurations are parameterized, and the values are managed using environment variables in the release definition.

The deployment workflow for each of the environments in our release definitions looks like the following.

Use Task Group in deployment workflow

Manual approval for rollouts

There are various reasons why you may not want the application to be updated at certain points in time. It could be an upcoming major event for which you want to avoid all risks, or a known issue with a dependency that requires changes to your application to be deferred.

Configuring a manual approval before the pipeline begins ensures that a second pair of eyes verifies that the application is not going through one of these special circumstances and can be updated.

Moreover, there might be urgent hotfixes or special changes that do not apply to all the scale units. For such changes, we need to bypass the pipeline and deploy directly to specific environments only. We would still like to get approvals in such scenarios. However, in the case of a pipeline flow we do not want to collect multiple approvals, one for each of the environments.

So, in a nutshell, we are looking for one approval at the start of the deployment sequence. The sequence may or may not start from the Test environment.

We configure approvals for each of the environments in our release definitions to fulfil these requirements. The approvals for the production environments are configured like the following.
approvals for the production environments

Segregation of roles

As we discussed above, we would like to have two people analyze every deployment and ensure that all’s well for deploying to the environment.

It is possible that the approver mentioned in the release definition is the same person who requested the deployment. In such a scenario, the requirement of segregating the roles of deployment submitter and approver between two different users is not fulfilled, risking a wrong deployment due to manual oversight.
Release management provides an option to deny the submitter from approving a deployment, thereby helping enforce segregation of roles.

segregationofroles

Health check during roll out

With the above environment and approval settings, the validation phase for each environment plays a key role in ensuring the environments are healthy after the deployments. It may not always be possible to fully automate the validation and health monitoring of environments.

In such circumstances, adopting a “wait and auto-promote” criterion for the production environments is recommended. The pipeline is paused for a certain duration, during which team members monitor various health indicators of the service and can abort the pipeline if it is not appropriate to continue. Users can manually re-start the pipeline from the next environment once the issue is analyzed.

Including a manual intervention task in the validation workflow for production environments helps us configure the release definitions to be in “wait and auto-promote” mode with a 24-hour wait between environments.
manualintervention

Branch filters for deployments

With the extensive use of Git as the version control system for development, developers commit changes in various branches. All the changes come together in the master or release branches. To ensure the completeness of features being deployed, it is recommended to restrict production deployments to artifacts generated from these branches.

Secure the pipelines

We have now configured pipelines that ensure we safely deploy the changes to all the environments. Team members can now create releases and start the deployments. To avoid any issues, we need to ensure that the configurations are not disturbed and that the checks put in place are not bypassed by users.

We configure Release permissions on the release definitions to avoid any unwanted changes. Specifically, we control the users who are allowed “Administer release permissions”, “Delete release definition”, “Delete release environment”, “Delete releases”, “Edit release definition”, “Edit release environment”, “Manage release approvers” and “Manage releases” permissions.

See what’s next for Azure at Microsoft Ignite


Get all your Azure questions answered by the experts who build it. This year’s schedule is still in progress, but here are some highlights from last year’s conference:

  • Deliver more features faster with a modern development and test solution

This session shows how to use the infrastructure provided by Microsoft Azure DevTest Labs to quickly build dev and test environments.

  • Protect your data using Azure's encryption capabilities and key management

Cloud security is essential, and this deep dive explores Azure’s built-in encryption and looks at data disposal, key management, and access control.

  • Build tiered cloud storage in Microsoft Azure

Explore scalable, cost-efficient object storage using Azure’s Blob Storage service.

Register now to join us at Microsoft Ignite to connect with the tech community and discover new innovations.

Microsoft Azure at NAB Show 2017


Cloud computing is changing the world and is powering the digital transformation in the media industry. We are thrilled to see the significant momentum with which businesses large and small are selecting Azure for their digital transformation needs whether it be launching a new Over-The-Top (OTT) video service, web and mobile applications with rich media, or using cutting-edge media AI technologies to unlock insights that enhance content discovery and drive deeper user engagement.

This year at NAB Show 2017, we are showcasing why Microsoft Azure is the trusted and global-scale cloud for the media industry’s needs. At the core of it are new innovations and enhancements that we are releasing or have launched in the last few months to meet our customers’ ever-evolving needs.

Encoding service enhancements

In our quest to be responsive to customer feedback we have made significant enhancements to the encoding service which include:

  • Reduced pricing and per-minute billing: We launched a new pricing model based on output minutes instead of GBs processed, which reduces the price by half for typical use cases. Customers can now use our service for content-adaptive encoding, where the encoder will generate a ‘bit-rate ladder’ that is optimized for each input video. Learn more about our new pricing model.
  • Autoscaling capacity: Our service can now also monitor your workload and automatically scale resources up or down, providing increased concurrency/throughput when needed. Combined with Azure Functions and Azure Logic Apps, you can quickly build, test and deploy custom media workflows at scale. This feature is in preview, please contact amsved@microsoft.com for more information.
  • DTS-HD surround sound now available in Premium Encoder for content creation and streaming delivery to connected devices.

Media analytics

Adding to the growing family of media analytics, which includes face and emotion detection, motion detection, video OCR, video summarization, content moderation and audio indexing, we are excited to add the following new capabilities:

  • Private Preview of Azure Media Video Annotator: identifies objects in the video such as cars, houses, etc. Information from Annotator can be used to build deep search applications. This information can also be combined with data obtained from other Media Analytics processors to build custom workflows.
  • Public Preview of Azure Media Face Redactor: Azure Media Face Redactor enables customers to protect the identities of people before releasing their private videos to the public. We see many use-case scenarios in broadcast news and look forward to seeing how our customers can use this new service. Learn more about Azure Media Face Redactor.

Streaming Service Enhancements

In order to simplify our customers’ decisions around configuring streaming origins, we are excited about the following:

  • Autoscale Origins: We have introduced a new Standard Streaming Units offer that has the same features as Premium Streaming Units but scales automatically based on outbound bandwidth. Premium Streaming Units (Endpoints) are suitable for advanced workloads, providing dedicated, scalable bandwidth capacity, whereas Standard Streaming Units operate in a shared pool while still delivering scalable bandwidth.  Learn more here. In addition, the streaming team has delivered the following enhancements:
    • CMAF support – Microsoft and Apple worked closely to define the Common Media Application Format (CMAF) standard and submit it to MPEG. The new standard provides for storing and delivering streaming content using a single encrypted, adaptable multimedia presentation to a wide range of devices. The industry will greatly benefit from this common format, embodied in an MPEG standard, to improve interoperability and distribution efficiency.
    • DTS-HD surround sound streaming is now integrated with our dynamic packager and streaming services across all protocols (HLS, DASH and Smooth).
    • FairPlay HLS offline playback - new support for offline playback of HLS content using the Apple FairPlay DRM system.
    • RTMP ingest improvements – we've updated our RTMP ingest support to allow for better integration with open source live encoders such as FFmpeg, OBS, Xsplit and more.
    • Serverless media workflows using Azure Functions & Azure Logic Apps: Azure offers a serverless compute platform that lets you easily trigger code based on events in Azure, such as when media is uploaded to a folder or through partners like Aspera. We’ve published a collection of integrated media workflows on GitHub to allow developers to get started building codeless and customized media workflows with Azure Functions and Logic Apps. Try it out!

Azure Media Player

The advertising features in Azure Media Player for video on demand are now generally available (GA). This enables the insertion of pre-, mid- and post-roll advertisements from any VAST-compliant ad server, empowering content owners to monetize their streams. In addition to our GA announcement of VOD ad insertion, we are excited to announce a preview of live ad insertion. Additionally, we have a new player skin with enhanced accessibility features.

Azure CDN

Building on the unique multi-CDN offering among public cloud platforms, we are excited to add new capabilities including custom domain SSL and “one click” integration of CDN with streaming origins, storage & web apps.

  • Custom Domain SSL: Azure CDN now supports custom domain SSL to enhance the security of data in transit. Use of HTTPS protocol ensures data exchanged with a website is encrypted while in transit. Azure CDN already supported HTTPS for Azure provided domains (e.g. https://contoso.azureedge.net) and it was enabled by default. Now, with custom domain HTTPS, you can enable secure delivery for custom domains (e.g. https://www.contoso.com) too. Learn more about Custom Domain SSL.
  •  “One click” CDN Integration with Streaming Endpoint, Storage & Web Apps: We have added deeper integration of Azure CDN with multiple Azure services to simplify the configuration of CDN. When Content Delivery Network is enabled for a streaming endpoint, network transfer charges from Streaming Endpoint to the CDN are waived. For more information, please visit the Azure Blog.

Growing partner ecosystem

At NAB, we are excited to announce that Avid has selected Microsoft Azure as their preferred partner to power their business in the cloud. Siemens has expanded their Azure integrations with deep integration into Azure media analytics to enhance their Smart video engine product. We have also expanded the partnership with Verizon Digital Media Services with a deeper integration of their CDN services with Azure storage. Ooyala has expanded their integration with Azure to include the Azure media analytics capabilities to enhance their media logistics platform.

While launching products and services is exciting, the goal we strive for is to make our customers successful. It is great to see this in some of the recent case studies we have released on NBC’s streaming of the Rio 2016 Olympics and Lobster Ink.

One common theme that we hear from customers on why they adopt Azure for building media workflows is that it offers an enterprise grade battle tested media cloud platform that is simple, scalable, and flexible.

Come see us at NAB

If you’re attending NAB Show 2017, I encourage you to stop by our booth SL6710 to learn more about Microsoft’s cloud media services and see demos from us and several of our partners. Also, don’t forget to check out Steve Guggenheimer’s blog post and his keynote presentation on digital transformation in media in the Las Vegas Convention Center in rooms N262/N264, followed by a panel discussion.

If you are not attending the conference but would like to learn more about our media services, follow the Azure Blog to stay up to date on new announcements.

Monetizing your content with Azure Media Player


Video Advertisements- Tell me more!

As the playback of online video content increases in popularity, video publishers are looking to monetize their content via in-stream video advertisements. The growth of online video has been accompanied by a steep rise in the amount of money advertisers are interested in spending on video ads. Content owners can take advantage of this and leverage video advertisements to monetize their media.

As of version 2.1.0 (released this week, check out the blog post for more details!), Azure Media Player supports the insertion of linear advertisements for pre-roll (before regular content), mid-roll (during regular content) and post-roll (after regular content) for video on demand. These linear video advertisements are fetched and inserted into your content using the VAST standard.

Check out our demo for NAB!

To learn more, check out the Interactive Advertising Bureau (IAB) and its advertisement standards.

How do I start inserting ads into my stream?

To enable ads, first update your version of Azure Media Player to 2.1.0 or higher, as older versions of AMP do not support ads.
Next, configure and generate your ad tags from a VAST compliant ad server like OpenX or AdButler. You can use that ad tag in your player options so Azure Media Player knows where to request the ad from.

Configuring a pre-roll is as simple as:

    ampAds: {
        preRoll: {
            sourceUri: '[VAST_FILE.xml]',
            options: {
                skipAd: {
                    enabled: true,
                    offset: 5
                }
            }
        }
    }
This will insert a pre-roll ad requested from the sourceUri that is skippable after 5 seconds. You can find a more complex sample with pre-, mid- and post-rolls here.

Interested in Live Ad insertion?

If you are a customer interested in leveraging live ad insertion with Azure Media Services content and Azure Media Player, keep an eye out for my blog post coming out next week on how to insert video ads into your AMS streams on the fly. You can also contact us at ampinfo@microsoft.com to test out Live Ad Insertion (now in preview!).

Calling all ad servers!

If you are an ad server and are interested in developing a custom plugin that supports ad insertion with AMP, please contact ampinfo@microsoft.com.

Providing feedback

As Azure Media Player continues to grow, evolve, and add features enabling new scenarios, you can request new features and provide ideas or feedback via UserVoice. If you have any specific issues, questions, or find any bugs, drop us a line at ampinfo@microsoft.com.

Sign up to stay up-to-date with everything Azure Media Player has to offer.


Now announcing: Azure Media Player v2.0


Since its release at NAB two years ago, Azure Media Player has grown significantly in robustness and in richness of features. We have been working hard addressing feedback from our fantastic customers (that’s you!) to enhance and improve a player that everyone can benefit from. For AMP’s 2nd birthday I am incredibly excited to announce our first major release since its initial launch; welcome AMP 2.0!

What’s new in AMP 2.0?

Advertisement support

Azure Media Player version 2.1.0 and higher supports the insertion of pre-, mid- and post-roll ads in all your on-demand assets. The player inserts ads in accordance with the IAB’s VAST standard and allows you to configure options like ad position and skippability. To learn more about video ads with Azure Media Player, check out my blog post: Monetizing Your Content With Azure Media Player.

A new skin

We released a new skin as a counterpart to “AMP-Default”, called “AMP-Flush”. You can enable AMP-Flush by simply changing two points in your code:

1) update the CSS your application loads

from

<link href="//amp.azure.net/libs/amp/2.1.0/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">

to

<link href="//amp.azure.net/libs/amp/2.1.0/skins/amp-flush/azuremediaplayer.min.css" rel="stylesheet">

2) update the class in your video tag

from

<video id="azuremediaplayer" class="azuremediaplayer amp-default-skin amp-big-play-centered" tabindex="0"> </video>

to

<video id="azuremediaplayer" class="azuremediaplayer amp-flush-skin amp-big-play-centered" tabindex="0"> </video>

Making these changes should result in the following new skin:

image

A more accessible player

We are always working towards creating a more accessible and user friendly player. The team has been working hard to improve your experience with the player in use cases like:

  • interfacing with assistive technologies (like JAWS or Narrator)
  • playback in High Contrast mode
  • navigating without a mouse (or Tab To Navigate)

New plugins

This release comes with some new plugins you can load from our plugin gallery, as well as new functionality baked into the player, like playback speed (icon in the screencap above). The details for these new features and additional APIs can all be found in our documentation. You can use them to customize the player to support the playback scenario you want to achieve. Plugin development is a very community-driven effort, and if you have any questions about creating plugins, modifying the ones in the gallery, or contributing them to the player, please email me at saraje@microsoft.com.

Making the Switch to AMP 2.0

Transitioning to AMP 2.0 is an incredibly simple process. Make sure to update your CDN endpoints to point to 2.1.0 like so:

    <link rel="stylesheet" href="http://amp.azure.net/libs/amp/2.1.0/skins/amp-default/azuremediaplayer.min.css">

    <script src="//amp.azure.net/libs/amp/2.1.0/azuremediaplayer.min.js"></script>

 

Providing Feedback

Azure Media Player will continue to grow and evolve, adding additional features and enabling new scenarios. You can request new features, provide ideas or feedback via UserVoice. If you have any specific issues, questions, or find any bugs, drop us a line at ampinfo@microsoft.com.

Sign up for the latest news and updates

Sign up to stay up-to-date with everything Azure Media Player has to offer.

Additional resources

Azure management libraries for Java generally available now


Today, we are announcing the general availability of the new, simplified Azure management libraries for Java for Compute, Storage, SQL Database, Networking, Resource Manager, Key Vault, Redis, CDN and Batch services.

Azure Management Libraries for Java are open source - https://github.com/Azure/azure-sdk-for-java.

The list below summarizes, for each service area, which features are generally available, which are available as preview, and which are coming soon.

Compute

  • Generally available: Virtual machines and VM extensions, Virtual machine scale sets, Managed disks
  • Coming soon: Azure container services, Azure container registry

Storage

  • Generally available: Storage accounts
  • Coming soon: Encryption

SQL Database

  • Generally available: Databases, Firewalls, Elastic pools

Networking

  • Generally available: Virtual networks, Network interfaces, IP addresses, Routing table, Network security groups, DNS, Traffic managers
  • Available as preview: Load balancers, Application gateways

More services

  • Generally available: Resource Manager, Key Vault, Redis, CDN, Batch
  • Available as preview: App service - Web apps, Functions, Service bus
  • Coming soon: Monitor, Graph RBAC, DocumentDB, Scheduler

Fundamentals

  • Generally available: Authentication - core
  • Available as preview: Async methods

Generally available means that developers can use these libraries in production with full support by Microsoft through GitHub or Azure support channels. Preview features are flagged with the @Beta annotation in libraries.

In Spring 2016, based on Java developer feedback, we started a journey to simplify the Azure management libraries for Java. Our goal is to improve the developer experience by providing a higher-level, object-oriented API, optimized for readability and writability. We announced multiple previews of the libraries. During the preview period, early adopters provided us with valuable feedback and helped us prioritize features and Azure services to be supported. For example, we added support for asynchronous methods that enable developers to use reactive programming patterns. And, we also added support for Azure Service Bus.
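As a rough sketch of what the asynchronous pattern looks like, the fluent definition chains shown in the Getting Started section below can end in an Rx-style createAsync() instead of create(). The method names and Observable behavior here are based on the preview announcements and may differ slightly in detail, so treat this as an illustration rather than the definitive API:

// Sketch only: kick off a VM creation without blocking the calling thread.
azure.virtualMachines()
  .define("myLinuxVM")
  .withRegion(Region.US_EAST)
  .withNewResourceGroup("myResourceGroup")
  .withNewPrimaryNetwork("10.0.0.0/28")
  .withPrimaryPrivateIpAddressDynamic()
  .withNewPrimaryPublicIpAddress("mylinuxvmdns")
  .withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
  .withRootUsername("tirekicker")
  .withSsh(sshkey)
  .createAsync()                       // returns an Rx Observable instead of blocking
  .subscribe(
      resource -> System.out.println("Created: " + resource),
      error -> System.err.println("Creation failed: " + error));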

Getting Started

Add the following dependency fragment to your Maven POM file to use the generally available version of the libraries:

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure</artifactId>
    <version>1.0.0</version>
</dependency>

Working with the Azure Management Libraries for Java

One Java statement to authenticate. One statement to create a virtual machine. One statement to modify an existing virtual network ... No more guessing about what is required vs. optional vs. non-modifiable.

Azure Authentication

One statement to authenticate and choose a subscription. The Azure class is the simplest entry point for creating and interacting with Azure resources.

Azure azure = Azure.authenticate(credFile).withDefaultSubscription();
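If you prefer not to keep a credentials file on disk, the libraries also accept credential objects directly. A minimal sketch, assuming a service principal and the ApplicationTokenCredentials class from the SDK's authentication package (check the reference documentation for the exact constructor); the placeholder values are yours to fill in:

// Service principal credentials: client (app) id, AAD tenant id, and client secret.
ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(
    "<client-id>", "<tenant-id>", "<client-secret>", AzureEnvironment.AZURE);

Azure azure = Azure.authenticate(credentials)
    .withSubscription("<subscription-id>");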

Create a Virtual Machine

You can create a virtual machine instance by using the define() … create() method chain.

VirtualMachine linuxVM = azure.virtualMachines()
  .define(linuxVM1Name)
  .withRegion(Region.US_EAST)
  .withNewResourceGroup(rgName)
  .withNewPrimaryNetwork("10.0.0.0/28")
  .withPrimaryPrivateIpAddressDynamic()
  .withNewPrimaryPublicIpAddress(linuxVM1Pip)
  .withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
  .withRootUsername("tirekicker")
  .withSsh(sshkey)
  .withNewDataDisk(100)
  .withSize(VirtualMachineSizeTypes.STANDARD_D3_V2)
  .create();

Update a Virtual Machine

You can update a virtual machine instance by using an update() … apply() method chain.

linuxVM.update()
        .withNewDataDisk(20,  lun,  CachingTypes.READ_WRITE)
        .apply();

Create a Virtual Machine Scale Set

You can create a virtual machine scale set instance by using another define() … create() method chain.

VirtualMachineScaleSet vmScaleSet = azure.virtualMachineScaleSets()
  .define(vmssName)
  .withRegion(Region.US_EAST)
  .withExistingResourceGroup(rgName)
  .withSku(VirtualMachineScaleSetSkuTypes.STANDARD_D5_V2)
  .withExistingPrimaryNetworkSubnet(network, "subnet1")
  .withExistingPrimaryInternetFacingLoadBalancer(publicLoadBalancer)
  .withoutPrimaryInternalLoadBalancer()
  .withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_16_04_LTS)
  .withRootUsername("tirekicker")
  .withSsh(sshkey)
  .withNewDataDisk(100)
  .withNewDataDisk(100, 1, CachingTypes.READ_WRITE)
  .withNewDataDisk(100, 2, CachingTypes.READ_ONLY)
  .withCapacity(10)
  .create();

Create a Network Security Group

You can create a network security group instance by using another define() … create() method chain.

NetworkSecurityGroup frontEndNSG = azure.networkSecurityGroups().define(frontEndNSGName)
    .withRegion(Region.US_EAST)
    .withNewResourceGroup(rgName)
    .defineRule("ALLOW-SSH")
        .allowInbound()
        .fromAnyAddress()
        .fromAnyPort()
        .toAnyAddress()
        .toPort(22)
        .withProtocol(SecurityRuleProtocol.TCP)
        .withPriority(100)
        .withDescription("Allow SSH")
        .attach()
    .defineRule("ALLOW-HTTP")
        .allowInbound()
        .fromAnyAddress()
        .fromAnyPort()
        .toAnyAddress()
        .toPort(80)
        .withProtocol(SecurityRuleProtocol.TCP)
        .withPriority(101)
        .withDescription("Allow HTTP")
        .attach()
    .create();

Create a Web App

You can create a Web App instance by using another define() … create() method chain.

WebApp webApp = azure.webApps()
    .define(appName)
    .withRegion(Region.US_WEST)
    .withNewResourceGroup(rgName)
    .withNewWindowsPlan(PricingTier.STANDARD_S1)
    .create();

Create a SQL Database

You can create a SQL server instance by using another define() … create() method chain.

SqlServer sqlServer = azure.sqlServers().define(sqlServerName)
    .withRegion(Region.US_EAST)
    .withNewResourceGroup(rgName)
    .withAdministratorLogin("adminlogin123")
    .withAdministratorPassword("myS3cureP@ssword")
    .withNewFirewallRule("10.0.0.1")
    .withNewFirewallRule("10.2.0.1", "10.2.0.10")
    .create();

Then, you can create a SQL database instance by using another define() … create() method chain.

SqlDatabase database = sqlServer.databases().define("myNewDatabase")
    .create();
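When you are done experimenting, the same fluent style can be used to inspect and clean up what you created. A minimal sketch, reusing the azure client and rgName from the snippets above:

// Print every virtual machine in the subscription.
for (VirtualMachine vm : azure.virtualMachines().list()) {
    System.out.println(vm.resourceGroupName() + "/" + vm.name() + " in " + vm.regionName());
}

// Tear down everything created above by deleting the resource group.
azure.resourceGroups().deleteByName(rgName);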

Sample Code

You can find plenty of sample code that illustrates management scenarios (69+ end-to-end scenarios) for Azure.

Service Management Scenario
Virtual Machines
Virtual Machines - parallel execution
Virtual Machine Scale Sets
Storage
Networking
Networking – DNS
Traffic Manager
Application Gateway
SQL Database
App Service – Web apps on Windows
App Service – Web apps on Linux
Functions
Service Bus
Resource Groups
Redis Cache
Key Vault
CDN
Batch

Start using Azure Management Libraries for Java today!

Start using these libraries today. It is easy to get started. You can run the samples above.
 
As always, we like to hear your feedback via comments on this blog or by opening issues in GitHub or via e-mail to Java@Microsoft.com.
 
Also, you can find plenty of additional info about Java on Azure at http://azure.com/java.

How we use RM – Part 1


The teams that contribute to VSTS (TFS and other micro-services like Release Management, Package Management, etc) began using Release Management to deploy to production as outlined by Buck Hodges in this blog. However, in Feb this year, there was some feedback that it was difficult to debug failed deployments using RM, and that engineers were being forced to use unnatural workarounds.

We (the RM team) used that as an opportunity to take a fresh look at our RM usage, and to fix things up so that it becomes easier to use. Along the way, we fixed up some things in the product, and some things in the way we use the product. In this two-part blog, I will walk you through what we did. The first part covers the issues that we faced, and the fixes that have been rolled out.  I will blog the second part when the remaining fixes have been rolled out (since they are still in-flight).

The various things that didn’t work very well in RM

As we walk through the issues, we will use the release “TFS – Prod Update 538” as an example of what didn’t work very well in RM:

image

Double-clicking on the release showed a pretty un-actionable set of Issues.  We used to see this “wrapper script” text for all errors, and it wasn’t very useful:

image

Further, the Release Summary indicated that the deployments to Ring 0, Ring 1, Ring 2, and Ring 3 succeeded.  However, clicking on the Environments tab and looking closely at the number of tasks enabled in these rings told a different story.

image

There were, in fact, zero tasks enabled in Rings 0, 1, 2 and 3!  The bits needed to be deployed only to Ring 4, but the release was taken through Rings 0, 1, 2 and 3 without doing anything meaningful in these environments.  This used to mess up the traceability of the product, because RM thought that the current release on Ring 0 (and Rings 1, 2 and 3 also) was “TFS – Prod Update 538” (at least till the next release rolled out), whereas actually the bits on Ring 0 corresponded to an older release.  The “current release in an environment” concept is pretty important for RM: The product surfaces this in some of its views, and also uses this for some internal operations like release retention e.g. RM won’t delete a release if it is the current release on an environment.

The above design – of dragging the release through Rings 0, 1, 2 and 3 unnecessarily – begs the question, “Why?  Why couldn’t the release be created so that zero environments started automatically ‘after release creation’, and then Ring 4 was started manually?”  The reason for this was the requirement that there should be only 1 approval for the entire release, across all the environments.  When the release was created, the approver wanted to check that everything was in order across all the environments, and then didn’t want to be asked to approve again.  Therefore, this requirement was modeled as an approval on Ring 0, and every release had to go through this environment before it reached its real destination.

image

Moving on … going to the Logs tab showed that the log file corresponding to the failed task wasn’t even available in the browser.  This used to sometimes happen when the log file was very large.  The workaround was that we used to have to log into the agent box and view the logs on the agent. 

image

Once we got to the logs, we found that they were not easy to understand.  Reason: Each environment corresponded to multiple Scale Units (SUs), e.g. Ring 3 corresponded to three scale units (WEU2, SU6 and WUS22), so the logs for the three scale units were interspersed.

Finally, as part of aligning to Azure’s Safe Deployment practices, there was a requirement to have each Ring “bake” for some time before proceeding to the next ring, so that issues were flushed out in the inner rings before moving to the outer (and more public) rings.  The bake time was modeled as a “sleep” task.  This was less-than-ideal because it used up an agent unnecessarily while sleeping.  

image  

In the screenshot above, the Sleep task was disabled for this particular hotfix deployment, probably to get some fix out into prod early, but typically this task is enabled.

Fixes that we have rolled out

We fixed the following issues either by enhancing RM or by changing the way we used the product.

Problem statement: The Issues list in the release was un-actionable.

Solution: [Change in the way we use RM] Changed the Powershell script so that it logged the inner exception as the Error, as opposed to the outer exception

image

Problem statement: Incorrect traceability in the product caused by taking each release through Ring 0 even if the bits were meant for a different Ring

Solution: [Enhanced RM] We enabled correct traceability in the product by adding support for a new feature “Release-level-approvals”, and then used this in the “TFS – Prod Update” Release Definition.  This feature ensures that approval is required only once in the release – no matter which Ring is deployed first – as long as the approvers for all the Rings are the same:

image

As you can see above, all Rings, except for the first Ring, have the following option selected: “Automatically approve auto-triggered deployments to this environment for users who have approved deployment to the previous environment”.  (That’s quite a mouthful!)  Ring 0 doesn’t need to have this option selected since there is no “previous” environment.

This ensures clean traceability of the release i.e. the bits are not unnecessarily dragged through rings where they are not meant to be deployed.

image

Fixes still to be rolled out

The issues in this section have not been completely addressed.  Once we address them, I will write up the details in part 2 of this blog.

Problem statement: The log file was sometimes not available in the browser – typically when it was very large.

Solution: [TBD: Change in the way we use RM] We will fix the log file upload reliability issue by moving from the 1.x agent to the 2.x agent.  There is a known reliability issue with the 1.x agent with respect to uploading large log files.

image

The upgrade of the agent is being delayed because we used a legacy variable $(Agent.WorkingDirectory) which was available in the 1.x agent, but is not available in the 2.x agent.  So we need to re-write the Powershell scripts that used this variable, and replace its usage with $(System.DefaultWorkingDirectory).

Problem statement: The logs were difficult to understand, since each log file had mangled information from multiple scale units.

Solution: [TBD: Change in the way we use RM] We will enable better traceability per environment by having a scale unit per environment.

This design, however, gave rise to several new issues:

Sub-problem statement: The number of environments will go up from 5 to more than 15. Viewing the list of Releases will become a pain because of the need to constantly re-size the Environments column.  In addition, even with the Release-level-approval feature, approvals will still be problematic.  Reason: Each Ring will blow up into several Environments e.g. Ring 3 –  which used to correspond to three scale units WEU2, SU6 and WUS22 – would now correspond to three environments.  Hence, starting Ring 3 would correspond to deploying three environments manually, and approving three times – once for each environment (since Release-level-approvals kick in only if the previous approval is completed by the time the next deployment starts).

Solution: We did some things to make this better, with some more work pending:

  • [Enhanced RM] We added support for “remembering” the width of the environments column per [User, Release Definition] combination per browser
  • [Enhanced RM] We also made the environments smaller so that they took up less real estate

image

  • [TBD: Enhanced RM] Over the next few sprints, we will add support for bulk deployments and bulk approvals.  After that, hopefully, we will be able to move to an environment-per-scale unit.

Problem statement: How do you model the scenario of “waiting for an environment to bake” without using up an agent which sleeps?

Solution:

(a) [Enhanced RM] We introduced a “Resume task on timeout” option in the Manual Intervention task.  When this is set to “Resume on timeout”, it acts like a sleep, without using up any agent resources.

image

(b) [TBD: Enhanced RM] However, there is an additional requirement to make the timeout of the Manual Intervention task specifiable through a variable, so that the timeouts of the environments can be easily managed through environment variables.  Once we do that over the next few sprints, we will be able to replace the Sleep task in the Release Definition with the Manual Intervention task with “Resume on timeout”.

Conclusion

VSTS relies heavily on RM for its production deployments, and now you have some insight into how we use RM and the improvements we are making as we fine-tune this experience.  Stay tuned for part 2 of this blog, as we iron out more of the issues that have come up during this dogfooding.

Hopefully some of the techniques we use will apply to your releases too.

How to Build & Deploy a Java Web Application using Team Services and Azure


So, you’ve heard the tagline “Microsoft Loves Java” but the skeptic in you still has doubts. Well, it’s true! Visual Studio Team Services (VSTS) and Team Foundation Server (TFS) are Microsoft developer toolkits that help developers plan, design, develop, test, deploy and support (the entire DevOps cycle) in all programming languages, including Java. We have had Java-focused products dating back over six years with our plug-in for Eclipse, Team Explorer Everywhere (TEE), and have had development teams focused on making the Java experience complete and fully featured for over three years, turning out features as quickly as every three weeks. This blog presents various resources (blogs, videos, documentation) for the build and deployment options available with VSTS and TFS when using Java and deploying to Azure.

Build and Deploy a Java App to Azure App Service

One of the primary sources of information for building with Java using Team Services and TFS is our dedicated Java subsite at java.visualstudio.com. On this site is a series of web pages that comprise an introductory walk-through of how to host your Java code for free in the VSTS Git service (which provides unlimited, free private Git repositories), build your Java web application using Maven, and then deploy your web application to Azure as a web app. This walk-through can be found at the following URL:

http://java.visualstudio.com/Docs/gettingstarted/intro
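The project in that walk-through is a standard Maven-packaged Java web application. As a rough, hypothetical illustration of the kind of code such a project contains (this servlet is not taken from the walk-through), a minimal servlet looks like the following:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet that a Maven WAR build would package and that
// Azure App Service (or Tomcat) would serve at /hello.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        response.getWriter().println("Hello from a Java web app built by VSTS and deployed to Azure!");
    }
}

A Maven WAR build packages a class like this into the .war file that the VSTS build produces and the release then deploys to Azure as a web app.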

Build and Deploy a Java App to an Azure Linux VM Running Tomcat

A second source of useful information on how to build and deploy using Java with VSTS and TFS is the many blog posts our teams publish for various scenarios. To track all our features, including blog post announcements and useful videos, follow our “News” page. One of our blogs shows how to set up an Ubuntu Linux VM in Azure running Tomcat (there is a blog for setting up Red Hat too). The blog leads you through three different methods for deploying your Java web app to the VM: using Tomcat deploy, using SSH and using FTP. These three tasks are available within VSTS to assist you in easily deploying your applications:

https://blogs.msdn.microsoft.com/visualstudioalm/2016/08/18/deploying-an-azure-ubuntu-linux-vm-running-apache-tomcat-for-use-with-visual-studio-team-services-and-team-foundation-server/

Build Using Jenkins (integrated with Team Services) and Deploy to Azure

Many Java developers have used Jenkins as their primary build system and may have jobs configured on Jenkins for building and deploying their applications. VSTS has a Jenkins integration that essentially enables developers to use Jenkins as the build server (vs. using the built-in build support of VSTS) while still using VSTS for other features such as Git repo hosting, backlog management, sprint planning, testing and deployment. Our integration enables end-to-end traceability even when using Jenkins for builds with VSTS. There are two primary ways we support Jenkins. The first is via web hooks (aka service hooks), such that events (like pushing your Java code into a VSTS Git repo) trigger builds from VSTS to Jenkins. The second is using a VSTS build task to trigger remote Jenkins build jobs and provide real-time build output and feedback directly in the VSTS web interface. Using this build-task approach also enables the developer to pull the artifacts (such as .war files) and the test and code coverage files from Jenkins (once the build is complete) into the VSTS build system so that this information can be used in VSTS (artifacts can be deployed using our release management system, and test and code coverage results can be displayed in VSTS build summaries). For more information on using Jenkins with VSTS, use the following resources:

Using the resources referenced in this blog, a developer can discover the benefits of using VSTS and TFS to build and deploy their Java applications to the Azure cloud. To stay up-to-date on exciting new features, subscribe to our blogs at the following URL:

https://blogs.msdn.microsoft.com/visualstudioalm/

    Azure Billing Reader role and preview of Invoice API


    Today, we are pleased to announce the addition of a new built-in role, the Billing Reader role. The Billing Reader role allows you to delegate access to just billing information, with no access to services such as VMs and storage accounts. Users in this role can perform Azure billing management operations such as viewing subscription-scoped cost reporting data and downloading invoices. Also, today we are releasing the public preview of a new billing API that allows you to programmatically download a subscription’s billing invoices.

    billing-reader-view

    Allowing additional users to download invoices

    Until now, only the account administrator for a subscription could download and view invoices. Now the account administrator can allow users in subscription-scoped roles (Owner, Contributor, Reader, User Access Administrator, Billing Reader, Service Administrator, and Co-Administrator) to view invoices. Because the invoice contains personal information, the account administrator must explicitly enable access before users in subscription-scoped roles can view invoices. The steps to allow users in subscription-scoped roles to view invoices are below:

    1. Log in to the Azure Management Portal with account administrator credentials.
    2. Select the subscription for which you want to allow additional users to download invoices.
    3. From the subscription blade, select the Invoices tab within the Billing section, then click on the Access to invoices command. The feature to allow additional users to download invoices is in preview, so not all invoices may be available. The account administrator will have access to all invoices.
      AA-optinAnnotated
    4. Allow subscription-scoped roles to download invoices
      AA-optinAllow

    How to add users to Billing Reader Role

    Users in administrative roles (Owner, User Access Administrator, Service Administrator, and Co-administrator) can delegate Billing Reader access to other users. Users in the Billing Reader role can view subscription-scoped billing information such as usage and invoices. Note that billing information is currently only viewable for non-enterprise subscriptions. Support for enterprise subscriptions will be available in the future.

    1. Select the subscription for which you want to delegate Billing Reader access
    2. From the subscription blade, select Access Control (IAM)
      select-iam
    3. Click Add
    4. Select “Billing Reader” role
      select-roles
    5. Select or add user that you want to delegate access to subscription scoped billing information
      add-user

    The full definition of the access allowed for users in the Billing Reader role is detailed in the built-in roles documentation.
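
    If you prefer to script the assignment rather than use the portal, role membership can also be granted with Azure PowerShell. The following is a minimal sketch using the AzureRM module; the user name and subscription ID are placeholders you would substitute with your own:

    # Assign the Billing Reader role to a user at subscription scope (placeholder values)
    New-AzureRmRoleAssignment -SignInName "user@contoso.com" `
        -RoleDefinitionName "Billing Reader" `
        -Scope "/subscriptions/<subscription Id>"

    Running Get-AzureRmRoleAssignment against the same scope afterwards is a quick way to confirm the assignment took effect.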

    Downloading invoice using new Billing API

    Until now, you could only download invoices for your subscription via the Azure management portal. We are enabling users in administrative roles (Owner, Contributor, Reader, Service Administrator, and Co-administrator) and the Billing Reader role to download invoices for a subscription programmatically. The invoice API allows you to download current and past invoices for an Azure subscription. During the API preview, some invoices may not be available for download. The detailed API documentation is available and samples can also be downloaded. The feature to download invoices via the API is not available for certain subscription types such as support, enterprise agreements, or Azure in Open. To download invoices through the API, the account administrator has to enable access for users in subscription-scoped roles as outlined above.
    You can easily download the latest invoice for your subscription using Azure PowerShell.

    1. Login using Login-AzureRmAccount
    2. Set your subscription context using Set-AzureRmContext -SubscriptionId <subscription Id>
    3. To get the URL of the latest invoice, execute Get-AzureRmBillingInvoice –Latest

    The output gives back a link to download the latest invoice document in PDF format; an example is shown below.

    PS C:> Get-AzureRmBillingInvoice -Latest
             Id                     : /subscriptions/{subscription ID}/providers/Microsoft.Billing/invoices/2017-02-09-117274100066163
             Name                   : 2017-02-09-117274100066163
             Type                   : Microsoft.Billing/invoices
             InvoicePeriodStartDate : 1/10/2017 12:00:00 AM
             InvoicePeriodEndDate   : 2/9/2017 12:00:00 AM
             DownloadUrl            : https://{billingstorage}.blob.core.windows.net/invoices/{invoice identifier}.pdf?sv=2014-02-14&sr=b&sig=XlW87Ii7A5MhwQVvN1kMa0AR79iGiw72RGzQTT%2Fh4YI%3D&se=2017-03-01T23%3A25%3A56Z&sp=r
             DownloadUrlExpiry      : 3/1/2017 3:25:56 PM
    
    

    To download the invoice to a local directory, you can run the following:

    PS C:> $invoice = Get-AzureRmBillingInvoice -Latest
    PS C:> Invoke-WebRequest -Uri $invoice.DownloadUrl -OutFile <directory>\InvoiceLatest.pdf

    In the future, you will see additions to this API that will enable expanded programmatic access to billing functionality.

    Automatically build and deploy ASP.NET Core projects to Azure App Services


    Over the last few updates we’ve been working on evening out our support for popular scenarios. Earlier this month we added support for setting up an automated DevOps pipeline in VSTS that pulls source from a public or private GitHub repository. TFVC is another scenario we’re working on to round out the extension. This update continues to round out the Continuous Delivery Tools for Visual Studio extension by adding support for automating the build and deployment of ASP.NET Core applications targeting Azure App Services. Now the extension can configure a VSTS build and release definition that can automatically build, test, and deploy any ASP.NET 4.x or ASP.NET Core application. We’ve also continued to fix bugs reported by the community. Thanks!

    Configuring Continuous Delivery for an ASP.NET Core project

    To configure Continuous Delivery for ASP.NET Core projects, open a solution with an ASP.NET Core project and click on the Configure Continuous Delivery command in the Build menu. If the solution is not already under source control, the extension will guide you through the process.

    Configure Continuous Delivery

    The Configure Continuous Delivery dialog is pre-populated with a list of Azure subscriptions and App Services available to the personalization account currently selected in Visual Studio. The default configuration creates a new App Service with an S1 service plan, but you can pick an existing App Service as well.

    Azure Subscriptions and App Services

    Click OK and the extension will use the selected App Service or create a new one on Azure, then call VSTS to create a build and release definition for the repository on VSTS or GitHub. Now, each time you push a new commit, a build starts automatically, and if it succeeds, VSTS deploys the app to the target App Service on Azure.

    Build failure notification

    Of course, if the build fails you’ll get a notification. Clicking the notification takes you to VSTS, where you can investigate the results.

    Investigate build failure results in VSTS

    Please keep the feedback coming!

    Thank you to everyone who has reached out and shared feedback and ideas so far. We’re always looking for feedback on where to take this Microsoft DevLabs extension next. There’s a Slack channel and a team alias vsdevops@microsoft.com where you can reach out to the team and others in the community sharing ideas on this topic.

    Ahmed Metwally, Senior PM, Visual Studio
    @cd4vs

    Ahmed is a Program Manager on the Visual Studio Platform team focused on improving team collaboration and application lifecycle management integration.

    Who’s In iMessage App– Plan A Fun Event With Friends


    Ever wasted so much time coordinating an event that you told yourself you were never going to do it again? We feel you. We know that often the hard part is not finding great restaurants, it’s getting a group of people to agree on which one to go to. That’s why we built the Who’s In iMessage App – it’s search meets group consensus-building. Who’s In makes planning a night out with friends a breeze. With Who’s In, you can find activities with Bing, suggest times to meet, and then sit back and relax as your friends vote for the best option – all from within your group conversation on iMessage.

    Who's In iMessage App

    Great things happen in the flow of a conversation, so we wanted to give you access to Bing directly from your chats to turn talk into action. You can use Bing to find the best restaurants, movies, and attractions, and click through to more information by tapping on a search result card. This way you can view the menu, clip available coupons, or even make reservations at selected restaurants. For movies, you can locate and watch trailers, view available show times, and book tickets in advance. We also have you covered with custom events. Bing search results can help you plan a birthday party, a cookout, a football game or anything else. You decide!

    We know that not everyone is on iPhone, so we designed Who’s In to work cross-platform from the get-go. Who’s In was built with many awesome open-source technologies, most notably React Native. Leveraging cross-platform open-source technologies allowed us to not only ship a native experience on iOS, but also support group consensus among friends on different devices, like iPhone and Android. When you send an iMessage to a group with at least one Android user in it, iMessage falls back to sending MMS messages to everyone instead of iMessages. In this case, thanks to our choice of cross-platform technologies, everyone will still be able to vote for the best restaurant, movie, or attraction via a Bing.com link that opens in a mobile browser.


    Who's In iMessage App

    In this first release of Who’s In, we’ve focused on shipping a delightful and fast user experience by surfacing information clearly and effectively. We’re looking forward to seeing how people use Who’s In to have more productive conversations, and to learning from the feedback. We’re committed to iterating quickly on the feedback we receive from you to make Who’s In even better. We hope you’ll enjoy using it as much as we have enjoyed building it.

    Download the Who’s In iMessage App now and plan a fun event with friends later today. Send us an email at whosin@microsoft.com and tell us what you’d like to see in a future version of Who’s In.

    - The Bing Team


    The week in .NET – Happy Birthday .NET with Chris Sells, free ASP.NET Core book, We are the Dwarves


    Previous posts:

    Happy birthday .NET with Chris Sells

    In February, we threw a big .NET birthday bash with Microsoft Alumni and product teams. We caught up with Chris Sells who is currently a Product Manager at Google, and before that a Program Manager at Microsoft. Chris has been part of the .NET developer community since the beginning and he tells us a few great stories in this fun interview.

    Book of the week: ASP.NET Core Succinctly, by Simone Chiaretta and Ugo Lattanzi

    In ASP.NET Core Succinctly, seasoned authors Simone Chiaretta and Ugo Lattanzi update you on all the advances provided by Microsoft’s landmark framework. Learn the foundations of the library, understand the new versions of ASP.NET MVC and Web API, and you’ll have everything you need to build .NET web applications on Windows, Mac, and Linux.

    ASP.NET Core Succinctly

    You can get the book now, for free!

    Game of the Week: We are the Dwarves

    We are the Dwarves is a real-time tactical adventure game. Set in a world where the Dwarven stars are slowly dying, you must guide three astronauts through their expedition to find a new star in the depths of the Endless Stone. Each dwarf has individual abilities and skill trees, letting you customize to your play style. Pay close attention to the hostile environment as you lay out your tactical strategy when fighting your enemies.

    We are the Dwarves

    We are the Dwarves was created by Whale Rock Games using C# and Unity. It is available on Steam for Windows, Mac, and Linux, as well as on Xbox One and PlayStation 4.

    Meetup of the week: Productivity in Visual Studio 2017 with GitHub, .NET Core, and Docker in San Francisco, CA

    The Bay .NET user group has a two-part meeting on Thursday, April 27 at 6:30PM featuring excellent speakers: Sara Ford will talk about the GitHub extension for Visual Studio 2017, and Beth Massi will tell you all about .NET Core, Docker, and microservices.

    .NET

    ASP.NET

    C#

    F#

    The F# weekly is taking a break this week, but F# links will be back next week.

    Xamarin

    Azure

    UWP

    Data

    Game Development

    And this is it for this week!

    Contribute to the week in .NET

    As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

    You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
    We’d love to hear from you, and feature your contributions on future posts:

    This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on The Morning Brew.

    Visual Studio Code C/C++ extension April 2017 Update


    Earlier today we shipped the April 2017 update to the C/C++ extension for Visual Studio Code. We are excited to introduce the following new features in this update:

    • Error Squiggles
    • Quick Info
    • Go to Declaration
    • Bash on Windows debugging support

    The original blog post has been updated with these changes. If you have this extension installed already, Visual Studio Code sends a notification for the update and installs the update for you automatically. If you haven’t installed it before, download the C/C++ extension for Visual Studio Code to try it out.

    1-download

    Error Squiggles and Quick Info

    Enabling the features

    In this update, we’re shipping Error Squiggles and Quick Info as “experimental features”. What this means is that they are turned on by default only for those using VSCode Insiders, and off by default for anyone else. You can enable or disable the features by toggling the setting in the settings.json file (File->Preferences->Settings). In the settings file, search for “intellisense” to locate the C_Cpp.intelliSenseEngine setting, and set it to “Default” to enable the new IntelliSense Engine (see the following screenshot). “Default” will become the true default for everyone when these features exit their experimental state. 🙂

    2-intellisense-engine

    The “C_Cpp.errorSquiggles” setting in the same file lets you turn the squiggle feature on or off when the default IntelliSense engine is used.

    We encourage you to turn on the features, try them out, and send us feedback so we can further polish these features and turn them on by default soon.

    Error Squiggles

    A while back we enabled showing error squiggles for the #include statements. This update adds support for showing squiggles under any program element, including variables, keywords, and braces, if a workspace exists. In other words, squiggles are not enabled when only single files are open.

    For example, in the following screenshot, Vector3 has a red squiggly underline, indicating this type can’t be found in the specified include paths.

    3-quick-action

    Clicking on Vector3 in the code, you will see a light bulb on the left side of the line. The “Add include path to settings” menu on the light bulb will take you to the c_cpp_properties.json file in which you can specify the include paths for IntelliSense. If the c_cpp_properties.json file does not already exist, it will be created and populated with the following default include paths:

    • On Windows, we default to the workspace root, the VC include path if Visual Studio 2017 or 2015 is installed, and the latest Windows SDK if found.
    • On Linux, we default to the workspace root, the highest version of the includes found in /usr/include/c++, 64-bit specific headers if they are present, and headers found under /usr/lib/clang if present.
    • On Mac, we default to the workspace root and the Xcode default toolchain if present or the same paths as Linux if Xcode is not found.

    You can add, remove, or modify the paths in the includePath setting to fit your scenario. In this example, I added another path (highlighted with the red underline) for the IntelliSense engine to look for types.

    4-include-paths

    Note that there’s a newly-added “path” setting under “browse”, which is used by the tag parser to provide fuzzy search results. The “includePath” setting, which was formerly used by the tag parser, now controls the include paths for the new IntelliSense engine. On opening any existing c_cpp_properties.json file, the value in “includePath” is automatically copied into the “browse.path” setting.

    You can also configure the “defines” setting in the c_cpp_properties.json file to define preprocessor symbols.

    Now if I save the change in the json file and switch back to the previous header file, the types are now resolved and the red squiggles are gone.

    Quick Info

    Quick Info enables viewing the type information or function’s signature when hovering the mouse cursor over a variable or a function. In this extension, this used to be performed by the tag parser, which provides quick but fuzzy results – sometimes inaccurate. With the new IntelliSense engine, we can provide more accurate results for local and global variables, functions, and symbols. In the following screenshot, hovering over the local variable “it” displays its accurate type information.

    5-quick-info

    Go to Declaration

    With this extension, you can already perform “Go to Definition” (F12) on variables or functions to open the document where the object is defined. This update adds “Go to Declaration” (Ctrl+F12) for navigating to the file where the object is declared. To use this feature, simply right-click on any variable or function and select “Go to Declaration” from the menu.

    In this example in the following screenshot, I selected “Text.DrawString” function and clicked on “Go to Declaration”.

    6-go-to-declaration

    In the next screenshot, you can see that the “TextRenderer.h” file is open and the two DrawString function declarations are being highlighted.

    7-show-declaration

    Bash on Windows debugging support

    With the release of Windows 10 Creators Update, you are now able to use VSCode and this extension to debug your Windows Subsystem for Linux (WSL) Bash on Ubuntu projects. You can use VSCode to write code on Windows, and debug through bash.exe to the Bash on Windows layer. Please see these instructions on how to use VSCode C/C++ extension to debug Windows 10’s Subsystem for Linux.

    Tell us what you think

    Download the C/C++ extension for Visual Studio Code, try it out and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.

    Using checkpoint with knitr and RStudio


    The knitr package by Yihui Xie is a wonderful tool for reproducible data science. I especially like using it with R Markdown documents, where with some simple markup in an easy-to-read document I can easily combine R code and narrative text to generate an attractive document with words, tables and pictures in HTML, PDF or Word format. Say, something like this:

    Weather-report

    In that document, the numerical weather records and the chart were generated by R, combined into a document using R Markdown, and then generated as a Word file with knitr. (You can find the R Markdown file to generate that report, and the R script to download the data, in my weather-report repository.)

    Another useful tool for reproducible data science is the checkpoint package. It helps you manage the ever-changing ecosystem of R packages on CRAN by making it easy to "lock in" specific versions of R packages. With a single call to the checkpoint function — say checkpoint("2017-04-25"), for April 25, 2017 — you can automatically find all the packages used by your current R project (i.e. the current folder) and install them as they were on the specified date. A colleague or collaborator can use the same script to get the same versions too, and so be confident of reproducing your results without having to worry that a newer package version may have affected them. By the way, those package versions get installed in a special folder (.checkpoint, in your home directory), so they won't change the results of any other R projects, either.

    RStudio includes a very useful tool for working with R Markdown and knitr: you can press the "Knit" toolbar button to process the document with a single click. For that to work, it does require certain R packages to be available for use behind the scenes. In normal circumstances RStudio will offer to install them, but the process doesn't work when a checkpoint folder is active. A simple workaround is to include a file in the same folder (I call mine knitr-packages.R) with the following lines:

    library("formatR")
    library("htmltools")
    library("caTools")
    library("bitops")
    library("base64enc")
    library("rprojroot")
    library("rmarkdown")
    library("evaluate")
    library("stringi")
    

    Although you never run that file directly, the checkpoint process will discover it and ensure the necessary packages are installed for RStudio to perform its magic. (In my tests this works with recent versions of RStudio including the latest, 1.0.143). All you need to do is make sure you run checkpoint from the R command line (just press Control-ENTER on the corresponding line in the .Rmd file) before attempting to knit. Simple!

    Penny Pinching in the Cloud: Lift and Shift vs App Services – When a VM in the Cloud isn’t what you want


    I got an interesting question today. This is actually an extremely common one so I thought I'd take a bit to explore it. It's worth noting that I don't know the result of this blog post. That is, I don't know if I'll be right or not, and I'm not going to edit it. Let's see how this goes!

    The individual emailed and said they were new to Azure and said:

    Question for you.  (and we may have made a mistake – some opinions and help needed)
    A month or so ago, we setup a full up Win2016 server on Azure, with the idea that it would host a SQL server as well two IIS web sites

    Long story short, they were mired in the setup of IIS on Win2k16, messing with ports, yada yada yada.

    All they wanted was:

    • The ability to right-click publish from Visual Studio for two sites.
    • Management of a SQL Database from SQL Management Studio.

    This is a classic "lift and shift" story. Someone has a VM locally or under their desk or in hosting, so they figure they'll move it to the cloud. They LIFT the site as a Virtual Machine and SHIFT it to the cloud.

    For many, this is a totally reasonable and logical thing to do. If you did this and things work for you, fab, and congrats. However, if, at this point, you're finding the whole "Cloud" thing to be underwhelming, it's likely because you're not really using the cloud, you've just moved a VM into a giant host. You still have to feed and water the VM and deal with its incessant needs. This is likely NOT what you wanted to do. You just want your app running.

    Making a VM to do Everything

    If I go into Azure and make a new Virtual Machine (Linux or Windows) it's important to remember that I'm now responsible for giving that VM a loving home and a place to poop. Just making sure you're still reading.

    NOTE: If you're making a Windows VM and you already have a Windows license you can save like 40%, so be aware of that, but I'll assume they didn't have a license.

    You can check out the Pricing Calculator if you like, but I'll just go and actually set up the VM and see what the Azure Portal says. Note that it's going to need to be beefy enough for two websites AND a SQL Server, per the requirements from before.

    Pricing for VMs in Azure

    For a SQL Server and two sites I might want the second or third choice here, which isn't too bad given they have SSDs and lots of RAM. But again, you're responsible for them. Not to mention you have ONE VM so your web server and SQL Server Database are living on that one machine. Anything fails and it's over. You're also possibly giving up perf as you're sharing resources.

    App Service Plans with Web Sites/Apps and SQL Azure Server

    An "App Service Plan" on Azure is a fancy word for "A VM you don't need to worry about." You can host as many Web Apps, Mobile Apps/Backends, Logic Apps and stuff in one as you like, barring perf or memory issues. I have between 19 and 20 small websites in one Small App Service Plan. So, to be clear, you put n number of App Services as you'd like into one App Service Plan.

    When you check out the pricing tier for an App Service Plan, be sure to View All and really explore and think about your options. Some include support for custom domains and SSL, others have 50 backups a day, or support BizTalk Services, etc. They start at Free, go to Shared, and then Basic, Standard, etc. Best part is that you can scale these up and down. If I go from a Small to a Medium App Service Plan, every App on the Plan gets better.

    However, we don't need a SQL Server, remember? This is going to be a plan that we'll use to host those two websites. AND we can use the same App Service Plan for staging slots (dev/test/staging/production) if we like. So just get the plan that works for your sites today. Unlike a VM, you can change it whenever.

    App Service Plan pricing

    SQL Server on Azure is similar. I make a SQL Server Database that is hosted on a SQL Server that supports the number of Database Throughput Units I need. Again, because it's the capital-C Cloud, I can change the size anytime. I can even script it and turn it up and down on the weekends. Whatever saves me money!

    SQL Azure Pricing

    I can scale the SQL Server from $5 a month to bajillions and everything in between.
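
    If you do want to script that scaling, here's a minimal sketch using the AzureRM PowerShell module; the resource group, server, and database names are placeholders, and the service objective ("S2") is just an example tier:

    # Scale an Azure SQL database to the Standard S2 tier (placeholder names)
    Set-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
        -ServerName "myserver" `
        -DatabaseName "mydatabase" `
        -RequestedServiceObjectiveName "S2"

    You could run something like this from Azure Automation or a scheduled task to turn the database down on the weekend and back up on Monday morning.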

    What's the difference here?

    First, we started here:

    • VM in the Cloud: At the start we had "A VM in the Cloud." I have total control over the Virtual Machine, which is good, but I have total control over the Virtual Machine, which is bad. I can scale up or out, but just as one Unit, unless I split things up into three VMs.

    Now we've got.

    • IIS/Web Server in the Cloud: I don't have to think about the underlying OS or keeping it patched. I can use Linux or Windows if I like, and I can run PHP, Ruby, Java, .NET, and on and on in an Azure App Service. I can put lots of sites in one Plan, and the IIS publishing endpoint for Visual Studio is automatically configured. I can also use Git for deployment.
    • SQL Server in the Cloud: The SQL Server is managed, backed up, and independently scalable.

    This is a slightly more "cloudy" way of doing things. It's not microservices and independently scalable containers, but it does give you:

    • Independently scalable tiers (pricing and CPU and Memory and disk)
    • Lots of automatic benefits - backups, custom domains, ssl certs, git deploy, App Service Extensions, and on and on. Dozens of features.
    • Control over pricing that is scriptable. You could write scripts to really pinch pennies by scaling your units up and down based on time of day or month (see the sketch after this list).
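
    As an example of that last point, here's a hedged sketch of scaling an App Service Plan between tiers with the AzureRM PowerShell module; the resource group and plan names are placeholders:

    # Scale the plan down overnight and back up for business hours (placeholder names)
    Set-AzureRmAppServicePlan -ResourceGroupName "MyResourceGroup" -Name "MyPlan" -Tier "Basic" -WorkerSize "Small"
    Set-AzureRmAppServicePlan -ResourceGroupName "MyResourceGroup" -Name "MyPlan" -Tier "Standard" -WorkerSize "Medium"

    Every app in the plan picks up the new tier automatically, which is the kind of penny pinching that's much harder to do with a single VM (resizing a VM means a reboot).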

    What are your thoughts on Lift and Shift to IaaS (Infrastructure as a Service) vs using PaaS (Platform as a Service)? What did I forget? (I'm sure lots!)


    Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!



    © 2017 Scott Hanselman. All rights reserved.
         

    Backup and restore your Azure Analysis Services models


    This month we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

    One of the features that was added to Azure Analysis Services is the ability to back up your semantic models and all the data within them to a blob storage account. The backups can later be restored to the same Azure Analysis Services server or to a different one. This method can also be used to back up models from SQL Server Analysis Services and then restore them to Azure Analysis Services. Please note that you can only restore models with a 1200 or higher compatibility level and that any Active Directory users or groups must be removed from any role membership before restoring. After restoring, you can re-add those users and groups from Azure Active Directory.

    Configure storage settings

    Before backing up or restoring, you need to configure storage settings for your server. Azure Analysis Services will back up your models to a blob storage account of your choosing. You can configure multiple servers to use the same storage account, making it easy to move models between servers.

    To configure storage settings:

    1. In Azure portal > Settings, click Backup.

    clip_image001

    2. Click Enabled, then click Storage Settings.

    clip_image002

    3. Select your storage account or create a new one.
    4. Select a container or create a new one.

    clip_image003

    5. Save your backup settings. You must save your changes whenever you change storage settings, or enable or disable backup.

    clip_image004

    Backup

    Backups can be performed using the latest version of SQL Server Management Studio. They can also be automated through PowerShell (a sketch follows below) or with the Analysis Services Tabular Object Model (TOM).
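
    If you want to script backups rather than use SSMS, here is a hedged sketch using the Analysis Services PowerShell cmdlets (Backup-ASDatabase and Restore-ASDatabase); the server name, model name, and file name are placeholders, and the exact parameters may vary with the module version you have installed:

    # Back up a model on an Azure Analysis Services server (placeholder names)
    Backup-ASDatabase -Server "asazure://westus.asazure.windows.net/myserver" `
        -Name "MyModel" -BackupFile "MyModel.abf" -AllowOverwrite -ApplyCompression

    # Restore the backup, for example onto a different server
    Restore-ASDatabase -Server "asazure://westus.asazure.windows.net/myserver" `
        -Name "MyModel" -RestoreFile "MyModel.abf" -AllowOverwrite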

    To backup using SQL Server Management Studio:

    1. In SSMS, right-click a database > Back Up.
    2. In Backup Database > Backup file, click Browse.
    3. In the Save file as dialog, verify the folder path, and then type a name for the backup file. By default, the file name is given a .abf extension.
    4. In the Backup Database dialog, select options.

    Allow file overwrite - Select this option to overwrite backup files of the same name. If this option is not selected, the file you are saving cannot have the same name as a file that already exists in the same location.

    Apply compression - Select this option to compress the backup file. Compressed backup files save disk space, but require slightly higher CPU utilization.

    Encrypt backup file - Select this option to encrypt the backup file. This option requires a user-supplied password to secure the backup file. The password prevents reading of the backup data by any means other than a restore operation. If you choose to encrypt backups, store the password in a safe location.

    5. Click OK to create and save the backup file.

    Restore

    When restoring, your backup file must be in the storage account you've configured for your server. If you need to move a backup file from an on-premises location to your storage account, use Microsoft Azure Storage Explorer or the AzCopy command-line utility.

    If you're restoring a tabular 1200 model database from an on-premises SQL Server Analysis Services server, you must first remove all of the domain users from the model's roles, and add them back to the roles as Azure Active Directory users. The roles will be the same.

    To restore by using SSMS:

    1. In SSMS, right-click a database > Restore.
    2. In the Backup Database dialog, in Backup file, click Browse.
    3. In the Locate Database Files dialog, select the file you want to restore.
    4. In Restore database, select the database.
    5. Specify options. Security options must match the backup options you used when backing up.

    New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
