
IoT SWC 2018: IoT solutions for the built world


It’s amazing to see how IoT is transforming our customers’ businesses—from optimizing operations and reducing unplanned downtime with companies like Chevron, to powering new connected vehicle experiences as we recently announced with Volkswagen.

Beyond business transformation, IoT has the potential to create more efficient and vibrant cities and communities by providing new insights and approaches to transportation and traffic, energy reduction, construction, utilities, parking, and so much more.

We are continuing to simplify the customer journey for secure, scalable IoT solutions for the cloud and the edge with a large set of announcements last month, including the general availability of Azure IoT Central at our Ignite 2018 conference, and more just last week about bringing intelligence to the edge in Windows IoT.

This week at IoT Solutions World Congress, we look forward to connecting with companies across industries and inspiring them with new possibilities for IoT, from creating Digital Twins of physical environments to taking advantage of Vision and AI on edge devices. We’ll also be talking about how we’re breaking down common barriers to entry in IoT by addressing security from the start with solutions like Azure Sphere and Azure Security Center for IoT, and empowering organizations to provision and customize fully managed IoT solutions in minutes with Azure IoT Central. And that’s just the start.

Vision and AI at the edge power breakthrough applications

Vision and AI capabilities on edge devices are the ultimate sensor and will help companies create breakthrough applications. From automatically detecting manufacturing defects, to detecting any object, to detecting unsafe conditions in the enterprise or industry—the possibilities are endless.

Today we are announcing the public preview of a Vision AI developer kit, the newest addition in the Microsoft Azure IoT starter kit family, for IoT solution makers to easily deploy AI models built using Azure Machine Learning and Azure IoT Edge. The kit includes a device using Qualcomm Visual Intelligence Platform for hardware acceleration of the AI model to deliver superior inferencing performance. To get started, visit www.visionaidevkit.com.

Vision AI developer kit

Model the physical world with Azure Digital Twins

At Ignite, we introduced Azure Digital Twins, a breakthrough new offering in our IoT platform that represents the evolution of IoT. Azure Digital Twins enables customers and partners to create a digital model of any physical environment, connect it to IoT devices using Azure IoT Hub to make the model live, and then use serverless business logic to respond to changes in it. Customers and partners can now query Azure Digital Twins in the context of a space—rather than from separate sensors—empowering them to build repeatable, scalable experiences that correlate data from digital sources and the physical world.

Today we are announcing that Azure Digital Twins is available in public preview. Several of our early partners are at IoT Solutions World Congress showcasing solutions that span a wide range of applications, representing the broad applicability of Azure Digital Twins, including:

IoT partners

Applied innovation for smarter cities

Today, as cities and communities embrace digital transformation, technology like Digital Twins and easy-to-use machine learning is helping ideas become real, actionable solutions. Environments and infrastructure of all types—offices, schools, hospitals, banks, stadiums, warehouses, factories, parking lots, streets, intersections, parks, plazas, electrical grids, and more—can become smarter to help the people who use them live better lives. We’ll have more to share on this topic at Smart City Expo World Congress in November.

Connect with us at IoT Solutions World Congress

If you’re in Barcelona this week, connect with us at the Microsoft IoT booth (#C321) and hear from us in the following sessions:


Azure PowerShell – Cross-platform “Az” module replacing “AzureRM”


There is a new Azure PowerShell module, built to harness the power of PowerShell Core and Cloud Shell, and maintain compatibility with PowerShell 5.1. Its name is Az. Az ensures that PowerShell and PowerShell Core users can get the latest Azure tooling in every PowerShell, on every platform. Az also simplifies and normalizes Azure PowerShell cmdlet and module names. Az ships in Azure Cloud Shell and is available from the PowerShell Gallery. 

For complete details on the release, timeline, and compatibility features, please see the GitHub announcement page.

Az module features

  • Az is a replacement for AzureRM and AzureRM.Netcore.
  • Az runs on PowerShell 5.1 and PowerShell Core.
  • Az is always up to date with the latest tooling for Azure services.
  • Az ships in Cloud Shell.
  • Az shortens and normalizes cmdlet names. All cmdlets use "Az" as their noun prefix.
  • Az will simplify and normalize module names. Data plane and management plane cmdlets for each service will use the same Az module.
  • Az ships with new cmdlets to enable script compatibility with AzureRM (Enable/Disable-AzureRmAlias).

Supported platforms

  • PowerShell 5.1 – Windows 7 or greater with .NET Framework 4.7.2 or greater installed
  • PowerShell Core 6.0 – Windows, Mac OS, Linux
  • PowerShell Core 6.1 – Windows, Mac OS, Linux

Installing Az

You can install Az from the PowerShell gallery.

     PS C:\> Install-Module Az

You should not install Az side-by-side with AzureRM. Remove all AzureRM modules before installing Az.

Compatibility with AzureRM

If you would like to run scripts developed for AzureRM using Az, use the Enable/Disable-AzureRmAlias cmdlets to add or remove aliases from AzureRM cmdlets to Az cmdlets.

Enable-AzureRmAlias [-Module <string>] [-Scope Process | CurrentUser | LocalMachine]

Adds AzureRM cmdlet aliases for the given modules (or all modules if no modules are specified) to the current session (default), all sessions for the current user, or all sessions on the machine.

Disable-AzureRmAlias [-Module <string[]>] [-Scope Process | CurrentUser | LocalMachine]

Removes AzureRM cmdlet aliases for the given modules (or all modules if no modules are specified) from the current session (default), all sessions for the current user, or all sessions on the machine.

Learn more about the compatibility features in Az

More information

  • Az is currently in preview, and will replace AzureRM as the recommended module for all Azure PowerShell tooling later this year. Learn more about the product roadmap.
  • Az is Open Source and uses the Azure PowerShell source code repository on GitHub.
  • If you have questions, comments, or issues with the new module, please tell us about them on GitHub.

New metric in Azure Stream Analytics tracks latency of your streaming pipeline


Azure Stream Analytics is a fully managed service for real-time data processing. Stream Analytics jobs read data from input sources like Azure Event Hubs or IoT Hub. They can perform a variety of tasks, from simple ETL and archiving to complex event pattern detection and machine learning scoring. Jobs run 24/7, and while Azure Stream Analytics provides a 99.9 percent availability SLA, various external issues may impact a streaming pipeline and can have a significant business impact. For this reason, it is important to proactively monitor jobs, quickly identify root causes, and mitigate possible issues. In this blog, we will explain how to leverage the newly introduced output watermark delay metric to monitor mission-critical streaming jobs.

Challenges affecting streaming pipelines

What are some issues that can occur in your streaming pipeline? Here are a few examples:

  • Data stops arriving or arrives with a delay due to network issues that prevent the data from reaching the cloud.
  • The volume of incoming data increases significantly and the Stream Analytics job is not scaled appropriately to manage the increase.
  • Logical errors are encountered causing failures and preventing the job from making progress.
  • Output destinations such as SQL or Event Hubs are not scaled properly and are throttling write operations.

Stream Analytics provides a large set of metrics that can be used to detect such conditions, including input events per second, output events per second, late events per second, the number of runtime errors, and more. The list of existing metrics and instructions on how to use them can be found in our monitoring documentation.

Because some streaming jobs may have complex and unpredictable patterns of incoming data and the output they produce, it can be difficult to monitor such jobs using conventional metrics like input and output events per second. For example, the job may be receiving data only during specific times in the day and only produce an output when some rare condition occurs.

For this reason, we introduced a new metric called output watermark delay. This metric aims to provide a reliable signal of job health that is agnostic to the input and output patterns of the job.

So, how does output watermark delay work?

Modern stream processing systems differentiate between event time, also referred to as application time, and arrival time.

Event time is the time generated by the producer of the event and typically contained in the event data as one of the columns. Arrival time is the time when the event was received by the event ingestion layer, for example, when the event reaches Event Hubs.

Most applications prefer to use event time as it excludes possible delays associated with transferring and processing of events. In Stream Analytics, you can use the TIMESTAMP BY clause to specify which value should be used as the event time.

The watermark represents a specific timestamp in the event time timeline. This timestamp is used as a pointer or indicator of progress in the temporal computations. For example, when Stream Analytics reports a certain watermark value at the output, it guarantees that all events prior to this timestamp were already computed. The watermark can be used as an indicator of liveness for the data produced by the job. If the delay between the current time and the watermark is small, it means the job is keeping up with the incoming data and is producing the results defined by the query on time.

Below we show an illustration of this concept using a simple example of a passthrough query:

A New Metric

Stream Analytics conveniently displays this metric, shown as watermark delay in the metrics view of the job in Azure Portal. This value represents the maximum watermark delay across all partitions of all outputs in the job.

It is highly recommended to set up automated alerts to detect when the watermark delay exceeds the expected value. It is recommended to pick a threshold value greater than the late arrival tolerance specified in the event ordering settings of the job.

The following screenshot demonstrates the alert configuration in the Azure Portal. You can also use PowerShell or REST APIs to configure alerts programmatically.

Sweet updates about Truffle on Azure


Development and deployment of Ethereum-based applications, commonly referred to as DApps, can be a challenging task for developers. With over 1 million downloads, Truffle has proven to be a premier suite of tools for developing smart contracts and DApps for both the public Ethereum main network and private consortiums.

Truffle streamlines the process of recognizing changes, migrating them to the underlying blockchain, and providing a framework that allows a rich debugging experience, with familiar step-through of code and inspection of low-level components.

The Truffle Suite fits very nicely with the products that Microsoft is building to help developers create end-to-end solutions on Ethereum. For example, when building solutions leveraging Azure Blockchain Workbench, a key component is creating the underlying smart contract(s). Using the Truffle framework to build, test, and maintain the smart contracts that will be uploaded to Azure Blockchain Workbench streamlines the process for developers and IT pros.

The team at Truffle hosted the first annual TruffleCon this weekend in Portland, Oregon. The event brought together developers from around the country to continue to build the community and create connections in the Ethereum developer space, sharing experiences, feedback, challenges, and success stories! We at Microsoft are big fans of both the Truffle Suite and the team, and we’re proud to sponsor and participate in this first ever event. Program managers and engineers from our blockchain engineering team attended the event to share our experiences and learn from others.

As part of our collaboration with Truffle, we have been working on the deeper integration of Truffle into the development pipeline. We are pleased to announce the development of a VS Code extension to better integrate the experience of developing with Truffle and VS Code.

The goal of the extension is to increase developer productivity when using Truffle and associated framework elements, such as Ganache. While the Truffle suite of tools already works very well in the IDE, it does require developers to jump between sub-windows, especially when using Ganache for local development. We showed this extension for the first time at the TruffleCon conference and plan to make it available for download in the Visual Studio marketplace later this fall.

We are pleased to announce that Microsoft is connecting Truffle and Azure Blockchain Workbench in a more seamless way. Currently, users of Azure Blockchain Workbench can leverage sample smart contracts as a starting point for their solutions. Samples for basic provenance, supply chain, asset management, and more are available via our Workbench Git repository. These samples are now also available as Truffle boxes, so developers can get started with Workbench even faster.

For those who would like to get started using Truffle with zero installs, the Azure Marketplace images for the current stable version as well as the beta versions are available now. These single-click images make it very easy to get started using Truffle in a sandboxed compute node on Azure.

Lastly, David Burela, Senior SDE from Microsoft Australia, presented at TruffleCon and has published a guide showing how any Ethereum project on GitHub can take advantage of Azure DevOps Pipelines with the Truffle suite of tools to automatically build and test Ethereum smart contracts upon every commit or pull request. With the new YAML-based build pipeline, a project on GitHub can be configured in less than five minutes with three lines of YAML code. Find the notes from his presentation online.

We look forward to partnering with Truffle to continue integrating our offerings!

Modernizing TLS connections in Microsoft Edge and Internet Explorer 11


Today, we’re announcing our intent to disable Transport Layer Security (TLS) 1.0 and 1.1 by default in supported versions of Microsoft Edge and Internet Explorer 11 in the first half of 2020.

This change, alongside similar announcements from other browser vendors, supports more performant, secure connections, helping advance a safer browsing experience for everyone.

January 19th of next year marks the 20th anniversary of TLS 1.0, the inaugural version of the protocol that encrypts and authenticates secure connections across the web. Over the last 20 years, successor versions of TLS have grown more advanced, culminating with the publication of TLS 1.3, which is currently in development for a future version of Microsoft Edge.

Two decades is a long time for a security technology to stand unmodified. While we aren’t aware of significant vulnerabilities with our up-to-date implementations of TLS 1.0 and TLS 1.1, vulnerable third-party implementations do exist. Moving to newer versions helps ensure a more secure Web for everyone. Additionally, we expect the IETF to formally deprecate TLS 1.0 and 1.1 later this year, at which point protocol vulnerabilities in these versions will no longer be addressed by the IETF.

For these reasons, sites should begin to move off of TLS 1.0 and 1.1 as soon as is practical. Newer versions enable more modern cryptography and are broadly supported across modern browsers.

Getting your sites and organizations ready

Most sites should not be impacted by this change. As TLS 1.0 continues to age, many sites have already moved to newer versions of the protocol – data from SSL Labs shows that 94% of sites already support TLS 1.2, and less than one percent of daily connections in Microsoft Edge are using TLS 1.0 or 1.1.

Charts illustrating data from SSL Labs which shows that 94% of sites already support TLS 1.2, and less than one percent of daily connections in Microsoft Edge are using TLS 1.0 or 1.1.

We are announcing our intent to disable these versions by default early, to allow the small portion of remaining sites sufficient time to upgrade to a newer version. You can test the impact of this change today by opening the Internet Options Control Panel in Windows and unchecking the “Use TLS 1.0” and “Use TLS 1.1” options (under Advanced -> Security).

Kyle Pflug, Senior Program Manager, Microsoft Edge

The post Modernizing TLS connections in Microsoft Edge and Internet Explorer 11 appeared first on Microsoft Edge Dev Blog.

Guidance for library authors


We’ve just published our first cut of the .NET Library Guidance. It’s a brand-new set of articles for .NET developers who want to create high-quality libraries for .NET. The guidance contains recommendations we’ve identified as common best practices that apply to most public .NET libraries.

We want to help .NET developers build great libraries with these aspects:

  • Inclusive – Good .NET libraries strive to support many platforms and applications.
  • Stable – Good .NET libraries coexist in the .NET ecosystem, running in applications built with many libraries.
  • Designed to evolve – .NET libraries should improve and evolve over time, while supporting existing users.
  • Debuggable – A high-quality .NET library should use the latest tools to create a great debugging experience for users.
  • Trusted – .NET libraries have developers’ trust by publishing to NuGet using security best practices.​

As well as being a source of information, we hope the guidance can be a topic of discussion between Microsoft and the .NET open-source community: what feels good and what the points of friction are when creating a .NET library. In recent years, Microsoft has made large investments in .NET tooling to make it easier to build .NET libraries, including cross-platform targeting, .NET Standard, and close integration with NuGet. We want your feedback to help improve .NET and the .NET open-source ecosystem into the future.

Finally, the guidance isn’t complete. We want input from authors of .NET libraries to help improve and expand the documentation as .NET continues to grow and improve.

Video

.NET Conf 2018 featured a video that demonstrates many of the same guidelines:

Closing

Please check out the .NET Library Guidance and use the Feedback section to leave any comments. We hope to see your fantastic .NET libraries on NuGet soon!

Microsoft’s Developer Blogs are Getting an Update


In the coming days, we’ll be moving our developer blogs to a new platform with a modern, clean design and powerful features that will make it easy for you to discover and share great content. This week, you’ll see the Visual Studio, IoTDev, and Premier Developer blogs move to a new URL while additional developer blogs will transition over the coming weeks.  You’ll continue to receive all the information and news from our teams, including access to all past posts. Your RSS feeds will continue to seamlessly deliver posts and any of your bookmarked links will re-direct to the new URL location.

We’d love your feedback

Our most inspirational ideas have come from this community and we want to make sure our developer blogs are exactly what you need. If you have immediate questions, check our FAQ below. If you don’t see your issue or question, please submit a bug report through our Feedback button.

Share your thoughts about the current blog design and content by taking our Microsoft Developer Blogs survey and telling us what you think! We’d also love your suggestions for future topics that our authors could write about. Please use the Feedback button docked to the right on the blogs (as shown above) to share what kinds of information, tutorials, features, or anything you’d like to see covered in a future blog post.

Frequently Asked Questions

To see the latest questions and known issues, please see our Developer Blogs FAQ, which includes any known issues and will be updated in real time with your feedback. If you have a question, idea, or concern that isn’t listed, please click on the feedback button and tell us about it.

I have saved/bookmarked the URL to this blog. Will I need to change this?

No need to change your bookmark. All existing URLs will auto-redirect to the new site. However, if you discover a broken link, please report it through our feedback button.

Will my RSS feed still work with my feed reader to get all the latest posts?

Your RSS feed will continue to work through your current set-up. If you encounter any issues with your RSS feed, please report it through our feedback button with details about your feed reader.

Which blogs are moving?

This migration involves the majority of our Microsoft developer blogs so expect to see changes in our Visual Studio, IoTDev, and Premier Developer blogs this week. The rest will be migrated in waves over the next few weeks, starting with our most popular blogs.

See additional questions on our Developer Blogs FAQ.

Detecting fileless attacks with Azure Security Center


As security solutions get better at detecting attacks, attackers are increasingly employing stealthier methods to avoid detection. In Azure, we regularly see fileless attacks targeting our customers’ endpoints. To avoid detection by traditional antivirus software and other filesystem-based detection mechanisms, attackers inject malicious payloads into memory. Attacker payloads surreptitiously persist within the memory of compromised processes and perform a wide range of malicious activities.

We are excited to announce the general availability of Security Center’s Fileless Attack Detection. With Fileless Attack Detection, automated memory forensic techniques identify fileless attack toolkits, techniques, and behaviors. Fileless Attack Detection periodically scans your machine at runtime and extracts insights directly from the memory of security-critical processes. It finds evidence of exploitation, code injection, and execution of malicious payloads. Fileless Attack Detection generates detailed security alerts to accelerate alert triage, correlation, and downstream response time. This approach complements event-based EDR solutions such as Windows Defender ATP, providing greater detection coverage.

In this post, we will dive into the details to see how Security Center’s Fileless Attack Detection discovers different stages of a multi-stage attack, starting with targeted exploit payload, or shellcode. We will also provide a walkthrough of an example alert based on a real-world detection. 

Detecting shellcode

After exploiting a vulnerability, attackers typically use a small set of assembly instructions, called shellcode, to retrieve and load more capable payloads. Due to Address Space Layout Randomization (ASLR), shellcode must first locate the addresses of the operating system functions required to retrieve/load additional payloads and transfer execution control to them. A typical shellcode workflow might include: accessing the Process Environment Block (PEB), traversing the in-memory module list to identify OS modules with required capabilities, and parsing PE image headers and image export directories to locate the addresses of specific OS functions.

Additionally, shellcode will often also perform other activities, such as unpacking/decrypting payloads, manipulating permissions and privileges, performing anti-debugging techniques, and hijacking code execution control. 

The above patterns can be identified using memory forensic techniques. Fileless Attack Detection reads machine code located in dynamically allocated code segments of commonly targeted processes. Fileless Attack Detection then disassembles the machine code and uses both static analysis and targeted emulation techniques to identify malicious behaviors. 

Detecting more complex payloads

Fileless Attack Detection also detects more complex payloads, which can perform any number of malicious activities. Common examples include impersonating the user, escalating privileges through additional software vulnerabilities, stealing account credentials, accessing certificates and private keys, moving laterally to new machines and accessing sensitive data. These capabilities are available in off-the-shelf toolkits which can be reused and modified for the attacker’s purpose. We have seen these types of toolkits used by red teams and attackers. 

Fileless Attack Detection identifies such payloads by scanning dynamically allocated code segments for a number of signals, including injected modules, obfuscated modules, references to security sensitive operating system functions, indicators of known fileless attack toolkits, and many others. A classifier analyzes these signals and emits an alert of the appropriate severity. The classifier also filters out signals from legitimate security and management software which often use techniques similar to fileless malware to monitor critical system functions.

Fileless Attack Detection example alert

Below is a section-by-section walkthrough of an example alert based on a real-world detection.

As you can see in the small red box below, Fileless Attack Detection has identified the toolkit: “Meterpreter.” Below the toolkit name is a list of specific techniques and behaviors present in the memory of the infected process. Even if the attacker uses new or unknown malware, Fileless Attack Detection still generates alerts highlighting the techniques and behaviors detected from the payload.

FilelessAttackTechnique

The alert also displays metadata associated with the compromised process. In this case, the infection occurred in winlogon.exe, a long running system process that handles account credentials.

winlogon

Based only on the above metadata, the process appears to be quite normal. Tools that examine only the process path, command line parameters, or parent/child process relationships will not have sufficient clues to identify the compromise.

However, by using memory forensic techniques, we can determine how many suspicious code segments are present and what capabilities they contain, find threads executing code from dynamically allocated segments, and emit that information in an alert. Should a process have active network connections, that information will be displayed as well, including the remote IP address and start time.

Analysts can also use Log Analytics to create a view of alert data from multiple processes and machines, describing when and where the malicious activity was first detected. Analysts can also use Log Analytics to correlate these alerts with other data sources, such as user logon data, to determine which account credentials may be at risk. This capability is very useful when determining the nature and scope of a compromise.

Getting started with Security Center’s Fileless Attack Detection

Fileless Attack Detection supports both Azure IaaS VMs and virtual machines that run on other clouds or on-premises. Supported operating systems include: Windows Server 2008 R2 and higher, and Windows 7 client and higher.  

To learn more about the different types of fileless attacks, visit our documentation.

To start using Fileless Attack Detection, enable the Standard Tier of Security Center for your subscriptions.


New prescriptive guidance for Open Source .NET Library Authors


Open-source library guidance

There's a great new bunch of guidance just published representing Best Practices for creating .NET Libraries. Best of all, it was shepherded by JSON.NET's James Newton-King. Who better to help explain the best way to build and publish a .NET library than the author of the world's most popular open source .NET library?

Perhaps you've got an open source (OSS) .NET Library on your GitHub, GitLab, or Bitbucket. Go check out the open-source library guidance.

These are the identified aspects of high-quality open-source .NET libraries:

  • Inclusive - Good .NET libraries strive to support many platforms and applications.
  • Stable - Good .NET libraries coexist in the .NET ecosystem, running in applications built with many libraries.
  • Designed to evolve - .NET libraries should improve and evolve over time, while supporting existing users.
  • Debuggable - .NET libraries should use the latest tools to create a great debugging experience for users.
  • Trusted - .NET libraries have developers' trust by publishing to NuGet using security best practices.

The guidance is deep but also preliminary. As with all Microsoft Documentation these days, it's open source in Markdown and on GitHub. If you've got suggestions or thoughts, share them! Be sure to sound off in the Feedback Section at the bottom of the guidance. James and the Team will be actively incorporating your thoughts.

Cross-platform targeting

Since the whole point of .NET Core and the .NET Standard is reuse, this section covers how and why to make reusable code but also how to access platform-specific APIs when needed with multi-targeting.

Strong naming

Strong naming seemed like a good idea but you should know WHY and WHEN to strong name. It all depends on your use case! Are you publishing internally or publicly? What are your dependencies and who depends on you?

NuGet

When publishing on the NuGet public repository (or your own private/internal one) what do you need to know about SemVer 2.0.0? What about pre-release packages? Should you embed PDBs for easier debugging? Consider things like Dependencies, SourceLink, how and where to Publish and how Versioning applies to you and when (or if) you cause Breaking changes.

Also be sure to check out Immo's video on "Building Great Libraries with .NET Standard" on YouTube!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

A small logical change with big impact


In R, the logical || (OR) and && (AND) operators are unique in that they are designed only to work with scalar arguments. Typically used in statements like

while(iter < 1000 && eps < 0.0001) continue_optimization()

the assumption is that the objects on either side (in the example above, iter and eps) are single values (that is, vectors of length 1) — nothing else makes sense for control flow branching like this. If either iter or eps above happened to be vectors with more than one value, R would silently consider only the first elements when making the AND comparison.

In an ideal world that should never happen, of course. But R programmers, like all programmers, do make errors, and using non-scalar values with || or && is a good sign that something, somewhere, has gone wrong. And with an experimental feature in the next version of R, you will be able to set an environment variable, _R_CHECK_LENGTH_1_LOGIC2_, to signal a warning or error in this case. But why make it experimental, and not the default behavior? Surely making such a change is a no-brainer, right?

It turns out things are more difficult than they seem. The issue is the large ecosystem of R packages, which may rely (wittingly or unwittingly) on the existing behavior. Suddenly throwing an error would throw those packages out of CRAN, and all of their dependencies as well.

We can see the scale of the impact in a recent R Foundation blog post by Tomas Kalibera where he details the effort it took to introduce an even simpler change: making if(cond) { ... } throw an error if cond is a logical vector of length greater than one. Again, it seems like a no-brainer, but when this change was introduced experimentally in March 2017, 154 packages on CRAN started failing because of it ... and that rose to 179 packages by November 2017. The next step was to implement a new CRAN check to alert those package authors that, in the future, their package would fail. It wasn't until October 2018 that the number of affected packages had dropped to a low enough level that the error could actually be implemented in R as well.

So the lesson is this: with an ecosystem as large as CRAN, even "trivial" changes take a tremendous amount of effort and time. They involve testing the impact on CRAN packages, coding up tests and warnings for maintainers, and allowing enough time for packages to be updated. For an in-depth perspective, read Tomas Kalibera's essay at the link below.

R Developer blog: Tomas Kalibera: Conditions of Length Greater Than One

Announcing the public preview of Azure Digital Twins


Last month at our Ignite conference in Orlando, we proudly announced Azure Digital Twins, a new platform for comprehensive digital models and spatially aware solutions that can be applied to any physical environment. Today, we take the next step in the journey of simplifying IoT for our customers and are happy to announce that the public preview of Azure Digital Twins is now officially live.

Starting today, customers and partners can create an Azure account and begin using Azure Digital Twins. We encourage you to explore the content available in the Azure Digital Twins documentation and on the product page. You can also try out the quickstart to begin building your solution with Azure Digital Twins today.

As mentioned in the announcement blog post from last month, Azure Digital Twins allows customers to benefit from first modeling the physical environment before connecting devices to that model. By moving to an IoT approach that goes beyond mapping sensors and devices, customers can benefit from new spatial intelligence capabilities and insights into how spaces and infrastructure are really used. This allows organizations to better serve people’s needs at every level, from energy efficiency to employee satisfaction and productivity. In addition, the platform accelerates and simplifies the creation of powerful and contextually aware IoT solutions by coming equipped with the right capabilities to streamline their development. Concretely, Azure Digital Twins comes equipped with the following key features:

Spatial intelligence graph: A virtual representation of a physical environment that models the relationships between people, places, and devices. This digital twin generates insights that allow organizations to build solutions that can improve energy efficiency, space utilization, occupant experience, and more. This includes blob storage: the ability to attach and store maps, documents, manuals, and pictures as metadata for the spaces, people, and devices represented in the graph. The image below provides a simple visual of what this graph could look like, and how it provides a hierarchical and layered approach that virtually represents a physical environment.

Spatial intelligence graph

Twin object models: Azure Digital Twins also offers pre-defined schemas and device protocols that align with a solution’s domain-specific needs to accelerate and simplify its creation. These benefits can apply to any interior or exterior space, as well as to infrastructure or even entire cities. Azure Digital Twins is applicable to customers that work with any type of environment, whether it be a factory, stadium, or grid. The twin object models are what enable this, allowing the graph to be fully customized to align with the different needs of each solution.

Advanced compute capabilities: Users can define functions that generate notifications or events based on telemetry from devices and sensors. This capability can be applied in a variety of powerful ways. For example, in a conference room when a presentation is started in PowerPoint, the environment could automatically dim the lights and lower the blinds. After the meeting, when everyone has left, the lights are turned off and the air conditioning is lowered.

Data isolation via multi- and nested-tenancy capabilities: Users can build solutions that scale and securely replicate across multiple tenants and sub-tenants by leveraging built-in multi- and nested-tenancy capabilities to ensure data is isolated. Microsoft doesn’t want to compete in the application layer, but instead wants to enable others to build on top of the Azure Digital Twins platform. The full multi- and nested-tenancy model enables organizations to build and sell to multiple end customers in a way that fully secures and isolates their data.

Security through access control and Azure Active Directory (AAD): Role-Based Access Control and Azure Active Directory serve as automated gatekeepers for people or devices, specifying what actions are allowed—and helping to ensure security, data privacy, and compliance.

Integration with Microsoft services: Customers and partners can build out their solution by connecting Azure Digital Twins to the broader set of Azure analytics, AI, and storage services, as well as Azure Maps, Azure High-Performance Computing, Microsoft Mixed Reality, Dynamics 365, and Office 365. The integration with these services enables customers to build end-to-end best-in-class IoT solutions.

Azure Digital Twins has the power to shift the paradigm in IoT. Some customers and partners, from a wide range of industries and backgrounds, are already leveraging the potential of next-generation spatial intelligence IoT solutions. To provide concrete examples of how partners are packaging the capabilities of Azure Digital Twins to create digital twins of physical environments, below are images of what finished solutions can look like and how they can be visualized. These images are examples of UIs of solutions built on Azure Digital Twins, but the impact these solutions have created is even more inspiring. We encourage you to dive deep into the existing customer stories to understand more.

Willow, Steelcase, CBRE

For more information, we have included a video below that explains the key capabilities and value proposition of Azure Digital Twins in further detail. We look forward to continuing to deliver on our commitment to simplifying IoT for our customers and are excited about the role Azure Digital Twins will play in our customers’ digital transformation journey. Try out the quickstart to begin building your solution with Azure Digital Twins today. Please provide feedback and suggestions on how we can improve. We value your input and are constantly looking for ways to address our customers’ pain points and needs.


Azure Marketplace new offers – Volume 21


We continue to expand the Azure Marketplace ecosystem. From September 1 to 15, 29 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual Machines

Active Directory Certificate Services 2016 PKI

Active Directory Certificate Services 2016 PKI: Quickly deploy Active Directory Certificate Services 2016 PKI to your Microsoft Azure tenant Infrastructure-as-a-Service. Build a new PKI hierarchy or set up a subordinate certificate authority to an established PKI hierarchy.

Aider-MigrateR

Aider-MigrateR: Aider-MigrateR is a specialized virtual machine to assist in the planning and execution of migrations.

Azul Zulu on Ubuntu 18.04

Azul Zulu on Ubuntu 18.04: Azul Zulu is a Java virtual machine that is compliant with the Java SE specification. Zulu in JDK form is an open-source distribution of the Java Development Kit of the OpenJDK project.

HubStor Test Drive

HubStor Test Drive: This Test Drive consists of a virtual machine that is connected to a view-only HubStor demo tenant. While the admin portal is view-only access, you can still perform searches, evaluate the user experience, access test data, and run export jobs.

Jamcracker Cloud Analytics Version 6.1

Jamcracker Cloud Analytics Version 6.1: By providing a multi-dimensional view of cloud usage and expenses through its interactive dashboards, Jamcracker helps you make prompt and efficient decisions about your cloud costs.

Jamcracker Hybrid Cloud Management Version 6.1

Jamcracker Hybrid Cloud Management Version 6.1: The Jamcracker Hybrid Cloud Management appliance offers an all-in-one portal where users are able to perform all cloud management functions.

McAfee Cloud Workload Security

McAfee® Cloud Workload Security: McAfee Cloud Workload Security delivers comprehensive security for Microsoft Azure, with firewall audit and hardening, IP traffic visibility, and threat insights for Azure instances.

Microsoft DNS Server 2016 IaaS

Microsoft DNS Server 2016 IaaS: Quickly deploy a Microsoft DNS Server 2016 Infrastructure-as-a-Service. This 2016 virtual machine comes preloaded with the Microsoft DNS server role, remote administration tools for DNS, and the required PowerShell modules.

NetkaView Network Manager

NetkaView Network Manager: This network management software provides fault management, performance management, configuration management, and more for multi-vendor IP devices such as Cisco, Juniper, Extreme, H3C, Huawei, 3Com, Redback, and Foundry.

officeatwork EDC Server light

officeatwork EDC Server light: Generate Word or PDF documents based on native Word templates and contents. Use web services and XML to create letters, offers, and invoices. officeatwork EDC Server light even comes with two free hours of onboarding support.

Pulse Virtual Traffic Manager

Pulse Virtual Traffic Manager: Pulse vTM is a software-based Layer 7 application delivery controller designed to deliver faster, high-performance experiences, with reliable access to websites and apps while maximizing the efficiency and capacity of web and application servers.

Pulse Virtual Traffic Manager with WAF

Pulse Virtual Traffic Manager with WAF: Pulse vWAF is a software-based Layer 7 application delivery controller ideal for hybrid deployment. It proactively detects and blocks attacks at the application layer and shortens the window of attack from external threats.

Rubrik Cloud Data Management on Azure

Rubrik CDM: Rubrik delivers a single platform to manage and protect data in the cloud, at the edge, and on-premises. Rubrik’s Cloud Data Management software simplifies backup and recovery, accelerates cloud adoption, and enables automation at scale.

Sapphire v5 DNS appliance

Sapphire Ev10 IPAM appliance: This appliance provides centralized IP address management (IPAM) capabilities for mid-range applications. It runs BT's IPControl software to enable management of your IPv4 and IPv6 address space.

Sapphire v5 DNS appliance

Sapphire Ev20 IPAM appliance: This appliance provides centralized IP address management (IPAM) capabilities for large-scale IP networks. The Sapphire Ev20 runs BT's IPControl software to enable management of your IPv4 and IPv6 address space.

Sapphire v5 DNS appliance

Sapphire v5 DNS appliance: This appliance from BT provides basic-level performance with reliability, security, and the convenience of prepackaged DHCP/DNS/IPAM (DDI) appliances.

Sapphire v5 DNS appliance

Sapphire v10 DNS appliance: This appliance from BT provides modest performance with reliability, security, and the convenience of prepackaged DHCP/DNS/IPAM (DDI) appliances. The Sapphire v10 can be configured to extend your DNS/IPAM capabilities to Azure.

Sapphire v5 DNS appliance

Sapphire v20 DNS appliance: The Sapphire v20 is a highly scalable DNS virtual appliance for use with BT's IPControl IPAM. It can be configured and managed using your BT IPControl implementation to extend your DNS/IPAM capabilities to Azure.

Sapphire v5 DNS appliance

Sapphire vCAA20 cloud automation appliance: The Sapphire CAA20 Cloud Automation Appliance (CAA), in conjunction with cloud provisioning tools, enables simple, scalable cloud IPAM automation.

SimpliGov

SimpliGov: Our government automation platform’s easy-to-learn design tools let you quickly automate any organizational process, whether it’s simple or complex. Reduce errors and increase efficiency every step of the way.

TigerGraph Enterprise Edition 2.1.7

TigerGraph Enterprise Edition 2.1.7: TigerGraph is a fast, scalable graph analytics platform to create Big Data graph apps. TigerGraph offers a native, enterprise graph MPP (massively parallel processing) database with GraphStudio, a visual SDK to design and explore your graph.

Web Applications

Corda Open Source 3.2

Corda Open Source 3.2: R3’s Corda is the blockchain built for business. It enables institutions to transact directly using smart contracts, and it ensures privacy and security. With deployment on Microsoft Azure, developers can deploy nodes on a Corda network using cloud templates.

Waterline Data Catalog

Waterline Data Catalog: Waterline is a comprehensive data catalog system that organizes and governs data, and it uses AI to auto-tag data with business terms. Improve your self-service analytics, compliance, security, and data management.

Container Solutions

ExternalDNS Container Image

ExternalDNS Container Image: ExternalDNS is a Kubernetes add-on that configures public DNS servers with information about exposed Kubernetes services to make them discoverable.

NeuVector Container Security Platform

NeuVector Container Security Platform: NeuVector delivers an integrated and automated security platform for Kubernetes and Red Hat OpenShift. Monitor and protect east-west container network traffic.

NGINX Ingress Controller Container Image

NGINX Ingress Controller Container Image: NGINX Ingress Controller is an Ingress controller that manages external access to HTTP services in a Kubernetes cluster using NGINX.

Metrics Server Container Image

Metrics Server Container Image: Metrics Server aggregates resource usage data, such as container CPU and memory usage, in a Kubernetes cluster and makes it available via the Metrics API.

TensorFlow Serving Container Image

TensorFlow Serving Container Image: TensorFlow Serving is a system for serving machine learning models. This stack comes with Inception v3 with trained data for image recognition, but it can be extended to serve other models.

WordPress with NGINX Container Image

WordPress with NGINX Container Image: WordPress with NGINX combines the popular blogging application with the power of the NGINX web server.

Apache Spark jobs gain up to 9x speed up with HDInsight IO Cache


Today, we are pleased to reveal the preview of HDInsight IO Cache, a new transparent data caching feature of Azure HDInsight that provides customers with up to a 9x performance improvement for Apache Spark jobs. We know from our customers that cost efficiency is one of the major attractors of managed cloud-based Apache Hadoop and Spark services when it comes to analytics. HDInsight IO Cache allows us to improve this key value proposition even further by improving performance without a corresponding increase in costs.

Architecture

Azure HDInsight is a cloud platform service for open source analytics that aims to bring the best open source projects and integrate them natively on Azure. There are many open source caching projects that exist in the ecosystem: Alluxio, Ignite, and RubiX, to name a few prominent ones.

HDInsight IO Cache is based on RubiX. RubiX is one of the more recent projects and has a distinct architecture. Unlike other caching projects, it doesn’t reserve operating memory for caching purposes. Instead, it leverages recent advances in SSD technology to their fullest potential to make explicit memory management unnecessary. Modern SSDs routinely provide more than 1 GB per second of bandwidth. Coupled with the operating system’s automatic in-memory file cache, this provides more than sufficient bandwidth to feed big data processing engines such as Apache Spark. The operating memory in this architecture remains available for Apache Spark to process heavily memory-dependent tasks, such as shuffles, which allows it to achieve the highest resource utilization and further improves performance. Overall, this results in a great speedup for jobs that read data from remote cloud storage, which is the dominant architecture pattern in the cloud.

Getting started

Azure HDInsight IO Cache is available on Azure HDInsight 3.6 and 4.0 Spark clusters on the latest version of Apache Spark 2.3. During preview, this feature is deactivated by default. To activate it, open the Ambari management UI of the cluster, select the HDInsight IO Cache service, and then click Actions > Activate.

Ambari IO Cache Service  Action activate

Proceed by confirming restart of the affected services on the cluster.

Once activated, HDInsight IO Cache launches and manages RubiX Cache Metadata Servers on each worker node of the cluster. It also configures all services of the cluster for transparent use of RubiX cache. This allows you to benefit from caching without making any changes to Spark jobs. For example, when IO Cache is deactivated, this Spark code would perform a regular remote read from Azure Blob Storage:

spark.read.load('wasbs:///myfolder/data.parquet').count()

When IO Cache is activated, the same line of code performs a cached read through the RubiX cache, and subsequent reads fetch the data locally from SSD. Worker nodes in an HDInsight cluster are equipped with locally attached, dedicated SSD drives. HDInsight IO Cache uses these local SSDs for caching, which minimizes latency and maximizes bandwidth.

Performance benchmarking

We compared the performance of an Azure HDInsight Spark cluster with IO Cache and corresponding optimized settings to the previous version of the cluster without the IO Cache feature. We used a benchmark derived from TPC-DS and comprised of 99 SQL queries analyzing a 1TB dataset. The configuration of the cluster in both cases is the same, consisting of 16xD14v2 worker node VMs running HDInsight 3.6 Spark 2.3. The results show up to a 9x performance improvement in the query run time and a 2.25x improvement in the geomean.

Top 20 TPC-DS queries

Total running time

Summary

HDInsight IO Cache is now available in preview on the latest Azure HDInsight Apache Spark clusters. Once enabled, it improves the performance of Spark jobs in a completely transparent manner without any changes to the jobs required. This provides an excellent cost to performance ratio of cloud-based Spark deployments. Try it now on Azure HDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is Microsoft’s premium managed offering for running open source workloads on Azure. Apache Spark clusters on HDInsight provide best-in-class security features, authorization and audit controls, development tools for authoring, and diagnostics of Spark jobs. Azure HDInsight powers some of our top customers’ mission-critical applications in a wide range of sectors including manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunication, and insurance. These industries bring a range of use cases including ETL, data warehousing, machine learning, IoT, and many more.

Standard Library Algorithms: Changes and Additions in C++17


Today we have a guest post from Marc Gregoire, Software Architect at Nikon Metrology and Microsoft MVP since 2007.

 

The C++14 standard already contains a wealth of different kinds of algorithms. C++17 adds a couple more algorithms and updates some existing ones. This article explains what’s new and what has changed in the C++17 Standard Library.

New Algorithms

Sampling

C++17 includes the following new sampling algorithm:

  • sample(first, last, out, n, gen)

It uses the given random number generator (gen) to pick n random elements from a given range [first, last) and writes them to the given output iterator (out).

Here is a simple piece of code that constructs a vector containing the integers 1 to 20. It then sets up a random number generator, and finally generates 10 sequences of 5 values, in which each value is randomly sampled from the data vector:

using namespace std;

vector<int> data(20);
iota(begin(data), end(data), 1);
copy(cbegin(data), cend(data), ostream_iterator<int>(cout, " "));
cout << '\n';

random_device seeder;
const auto seed = seeder.entropy() ? seeder() : time(nullptr);
default_random_engine generator(
       static_cast<default_random_engine::result_type>(seed));

const size_t numberOfSamples = 5;
vector<int> sampledData(numberOfSamples);

for (size_t i = 0; i < 10; ++i)
{
    sample(cbegin(data), cend(data), begin(sampledData),
           numberOfSamples, generator);
    copy(cbegin(sampledData), cend(sampledData),
         ostream_iterator<int>(cout, " "));
    cout << '\n';
}

Here is an example of a possible output:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
4 8 9 17 19
4 7 12 13 18
5 7 8 14 18
1 4 5 10 20
2 4 8 13 17
2 3 4 5 20
4 7 8 9 13
1 7 8 10 15
4 5 8 12 13
1 3 8 10 19

Iterating

The C++ Standard Library already included for_each() to process each element in a given range. C++17 adds a for_each_n(first, n, func) algorithm. It calls the given function object (func) for each element in the range given by a first iterator (first) and a number of elements (n). As such, it is very similar to for_each(), but for_each_n() only processes the first n elements of the range.

Here is a simple example that generates a vector of 20 values, then uses for_each_n() to print the first 5 values to the console:

using namespace std;

vector<int> data(20);
iota(begin(data), end(data), 1);

for_each_n(begin(data), 5,
           [](const auto& value) { cout << value << '\n'; });

Searching

C++17 includes a couple of specialized searchers, all defined in <functional>:

  • default_searcher
  • boyer_moore_searcher
  • boyer_moore_horspool_searcher

The Boyer-Moore searchers are often used to find a piece of text in a large block of text, and are usually more efficient than the default searcher. In practice, the two Boyer-Moore searchers are able to skip certain characters instead of having to compare each individual character. This gives these algorithms a sublinear complexity, making them much faster than the default searcher. See the Wikipedia article for more details of the algorithm.

To use these specialized searchers, you create an instance of one of them and pass that instance as the last parameter to std::search(), for example:

using namespace std;

const string haystack = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.";
const string needle = "consectetur";

const auto result = search(cbegin(haystack), cend(haystack),
       boyer_moore_searcher(cbegin(needle), cend(needle)));

if (result != cend(haystack))
    cout << "Found it.n";
else
    cout << "Not found.n";

If you want to carry out multiple searches on the same range, you can construct a single instance of std::boyer_moore_searcher and reuse it, rather than creating a new one for each std::search() call.
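
A minimal sketch of that reuse, assuming <algorithm>, <functional>, <iterator>, <iostream>, and <string> are included (the haystack and needle strings are purely illustrative), could look like this:

using namespace std;

const string haystack = "the quick brown fox jumps over the lazy dog";
const string needle = "the";

// Construct the searcher once; its preprocessing of the needle is reused for every search.
const boyer_moore_searcher searcher(cbegin(needle), cend(needle));

auto it = search(cbegin(haystack), cend(haystack), searcher);
while (it != cend(haystack))
{
    cout << "Match at offset " << distance(cbegin(haystack), it) << '\n';
    it = search(next(it), cend(haystack), searcher);
}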

Generalized Sum Algorithms

Scan

The following generalized sum algorithms have been added to the C++17 Standard Library:

  • exclusive_scan(first, last, out, init[, bin_op])
  • inclusive_scan(first, last, out[, bin_op[, init]])
  • transform_exclusive_scan(first, last, out, init, bin_op, un_op)
  • transform_inclusive_scan(first, last, out, bin_op, un_op[, init])

Where bin_op is a binary operator (std::plus<>() by default), and un_op is a unary operator.

All these algorithms calculate a sequence of sums of the elements in a given range [first, last), denoted as [e0, en). The calculated sums are written to [out, out + (last-first)), denoted as [s0, sn). Suppose further that we denote the binary operator (bin_op) as ⊕. The exclusive_scan() algorithm then calculates the following sequence of sums:

s0 = init
s1 = init ⊕ e0
s2 = init ⊕ e0 ⊕ e1
…
sn-1 = init ⊕ e0 ⊕ e1 ⊕ … ⊕ en−2

While inclusive_scan() calculates the following sums:

s0 = init ⊕ e0
s1 = init ⊕ e0 ⊕ e1
…
sn-1 = init ⊕ e0 ⊕ e1 ⊕ … ⊕ en−1

The only difference is that inclusive_scan() includes the ith element in the ith sum, while exclusive_scan() does not include the ith element in the ith sum.

exclusive_scan() and inclusive_scan() are similar to partial_sum(). However, partial_sum() evaluates everything from left to right, while exclusive_scan() and inclusive_scan() evaluate in a non-deterministic order. As a result, their output is only deterministic if the binary operator used is associative. Because of this relaxed ordering, these algorithms can be executed in parallel by specifying a parallel execution policy (see the section on parallel algorithms below).

The sums calculated by inclusive_scan() with init equal to 0 and an associative binary operator are exactly the same as the sums calculated by partial_sum().
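
To make the difference concrete, here is a small sketch of my own (assuming <algorithm>, <iostream>, <iterator>, <numeric>, and <vector> are included) that scans the values 1 through 5 both ways:

using namespace std;

const vector<int> data{ 1, 2, 3, 4, 5 };
vector<int> result(data.size());

// Exclusive scan with init = 0: the ith element is not part of the ith sum.
// Output: 0 1 3 6 10
exclusive_scan(cbegin(data), cend(data), begin(result), 0);
copy(cbegin(result), cend(result), ostream_iterator<int>(cout, " "));
cout << '\n';

// Inclusive scan: the ith element is part of the ith sum.
// Output: 1 3 6 10 15
inclusive_scan(cbegin(data), cend(data), begin(result));
copy(cbegin(result), cend(result), ostream_iterator<int>(cout, " "));
cout << '\n';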

transform_exclusive_scan() and transform_inclusive_scan() are very similar. The only difference is that they apply the given unary operator before calculating the sum. Suppose the unary operator is denoted as a function call f(). The transform_exclusive_scan() algorithm then calculates the following sums sequence:

s0 = init
s1 = init ⊕ f(e0)
s2 = init ⊕ f(e0) ⊕ f(e1)
…
sn-1 = init ⊕ f(e0) ⊕ f(e1) ⊕ … ⊕ f(en−2)

And the transform_inclusive_scan() algorithm calculates the following sums:

s0 = init ⊕ f(e0)
s1 = init ⊕ f(e0) ⊕ f(e1)
…
sn-1 = init ⊕ f(e0) ⊕ f(e1) ⊕ … ⊕ f(en−1)
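
As an illustration, here is a sketch of my own (assuming <algorithm>, <functional>, <iostream>, <iterator>, <numeric>, and <vector> are included) that squares each element before scanning:

using namespace std;

const vector<int> data{ 1, 2, 3, 4 };
vector<int> result(data.size());

// Square each element first, then calculate the exclusive sums with init = 0.
// Output: 0 1 5 14
transform_exclusive_scan(cbegin(data), cend(data), begin(result), 0,
                         plus<>(), [](int value) { return value * value; });
copy(cbegin(result), cend(result), ostream_iterator<int>(cout, " "));
cout << '\n';

// Square each element first, then calculate the inclusive sums.
// Output: 1 5 14 30
transform_inclusive_scan(cbegin(data), cend(data), begin(result),
                         plus<>(), [](int value) { return value * value; });
copy(cbegin(result), cend(result), ostream_iterator<int>(cout, " "));
cout << '\n';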

Reduce

Additionally, the following two reduce algorithms have been added:

  • reduce(first, last[, init[, bin_op]])
  • transform_reduce(first, last, init, bin_op, un_op)

Where bin_op is a binary operator, std::plus<>() by default. These algorithms result in a single value, similar to accumulate().

Suppose again that the range [first, last) is denoted as [e0, en). reduce() then calculates the following sum:

init ⊕ e0 ⊕ e1 ⊕ … ⊕ en−1

While transform_reduce() results in the following sum, assuming the unary operator is denoted as a function call f():

init ⊕ f(e0) ⊕ f(e1) ⊕ … ⊕ f(en−1)

Unlike accumulate(), reduce() supports parallel execution. The accumulate() algorithm always evaluates everything deterministically from left to right, while the evaluation order is non-deterministic for reduce(). A consequence is that the result of reduce() will be non-deterministic in case the binary operator is not associative or not commutative.

The sum calculated by reduce() with init equal to 0 is exactly the same as the result of calling accumulate() as long as the binary operator that is used is associative and commutative.
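
As a quick sketch of my own (assuming <execution>, <iostream>, <numeric>, and <vector> are included), the following compares accumulate() with a parallel reduce(); because integer addition is associative and commutative, both produce 5050:

using namespace std;

vector<int> data(100);
iota(begin(data), end(data), 1);   // 1, 2, ..., 100

// accumulate() always evaluates strictly from left to right.
const auto total1 = accumulate(cbegin(data), cend(data), 0);

// reduce() may reorder the evaluation and execute it in parallel.
const auto total2 = reduce(execution::par, cbegin(data), cend(data));

cout << total1 << ' ' << total2 << '\n';   // 5050 5050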

Finally, there is another set of overloads for transform_reduce():

  • transform_reduce(first1, last1, first2, init[, bin_op1, bin_op2])

It requires two ranges: a range [first1, last1), denoted as [a0, an), and a range starting at first2, denoted as [b0, bn). Suppose bin_op1 (std::plus<>() by default) is denoted as ⊕, and bin_op2 (std::multiplies<>() by default) is denoted as ⊖, then it calculates the following sum:

init ⊕ (a0 ⊖ b0) ⊕ (a1 ⊖ b1) ⊕ … ⊕ (an−1 ⊖ bn−1)
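
With the default operators, this is simply an inner product. Here is a minimal sketch of my own (assuming <iostream>, <numeric>, and <vector> are included):

using namespace std;

const vector<int> a{ 1, 2, 3 };
const vector<int> b{ 4, 5, 6 };

// Default operators: plus<>() for ⊕ and multiplies<>() for ⊖,
// so this calculates 0 + (1*4) + (2*5) + (3*6) = 32.
const auto result = transform_reduce(cbegin(a), cend(a), cbegin(b), 0);
cout << result << '\n';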

Parallel Algorithms

A major addition to the C++17 Standard Library is support for parallel execution of more than 60 of its algorithms, such as sort(), all_of(), find(), transform(), …

If a Standard Library algorithm supports parallel execution, then it accepts an execution policy as its first parameter. This policy determines to what degree the algorithm may parallelize or vectorize its execution. Currently, the following policy types and instances are defined in the std::execution namespace in the <execution> header:

  • sequenced_policy (global instance: seq): No parallel execution is allowed.
  • parallel_policy (global instance: par): Parallel execution is allowed.
  • parallel_unsequenced_policy (global instance: par_unseq): Parallel and vectorized execution is allowed. Execution is also allowed to switch between different threads.

The parallel_unsequenced_policy imposes a lot of restrictions on what the algorithm’s function callbacks are allowed to do. With that policy, the calls to the callbacks are unsequenced, so the callbacks are not allowed to allocate or deallocate memory, acquire mutexes, and so on. The other policies do not have such restrictions and guarantee that their callback calls are sequenced, although possibly in a non-deterministic order. In any case, you are responsible for preventing data races and deadlocks.

Using these parallel policies is straightforward. Here is a quick example that generates a vector of 1 billion double values, then uses the std::transform() algorithm to calculate the square root of each value in parallel:

using namespace std;

vector<double> data(1'000'000'000);
iota(begin(data), end(data), 1);

transform(execution::par_unseq, begin(data), end(data), begin(data),
          [](const auto& value) { return sqrt(value); });

If you run this piece of code on an 8-core machine, the CPU load peaks on all eight cores during the parallel execution of the call to std::transform().

Utility Functions

C++17 also includes a couple of handy utility functions that are not really algorithms, but still useful to know.

clamp()

std::clamp(value, low, high) is defined in the <algorithm> header. It ensures that a given value is within a given range [low, high]. The result of calling clamp() is:

  • a reference to low if value < low
  • a reference to high if value > high
  • otherwise, a reference to the given value

One use-case is to clamp audio samples to a 16-bit range:

const int low = -32'768;
const int high = 32'767;
std::cout << std::clamp(12'000, low, high) << '\n';
std::cout << std::clamp(-36'000, low, high) << '\n';
std::cout << std::clamp(40'000, low, high) << '\n';

The output of this code snippet is as follows:

12000
-32768
32767

gcd() and lcm()

std::gcd() returns the greatest common divisor of two integers, while std::lcm() returns their least common multiple. Both algorithms are defined in the <numeric> header.

Using these algorithms is straightforward, for example:

std::cout << std::gcd(24, 44) << '\n';
std::cout << std::lcm(24, 44) << '\n';

The output is as follows:

4
264

Removed Algorithms

C++17 has removed one algorithm: std::random_shuffle(). This algorithm was already deprecated in C++14. You should use std::shuffle() instead.
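
For reference, a minimal replacement using std::shuffle() could look as follows (a sketch of my own, assuming <algorithm>, <numeric>, <random>, and <vector> are included):

using namespace std;

vector<int> data(10);
iota(begin(data), end(data), 1);

// Unlike random_shuffle(), shuffle() requires an explicit
// uniform random bit generator, such as a seeded mt19937.
random_device seeder;
mt19937 generator(seeder());
shuffle(begin(data), end(data), generator);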

Further Reading Material

Have a look at my book, “Professional C++, 4th Edition”, published by Wiley/Wrox, for a more in-depth overview of all the functionality provided by the C++17 Standard Library. It also includes a description of all language features that have been added by C++17.

Additionally, you can also read more about certain C++17 features on my C++17 blog post series.

Automating Release Notes with Azure Functions


We can all agree that tracking the progress of a project enhances productivity and keeps everyone involved informed. When it comes to managing your project in Azure DevOps (formerly VSTS) or GitHub, you have all of your artifacts in one place: code, CI/CD pipelines, releases, work items, and more. In a larger project with a larger team, the rate at which pull requests and work items are created, opened, and closed increases significantly between each release. Imagine a large user base that wants to stay updated on these changes through release notes. They’ll want to know if that pesky bug that was introduced in the last version got fixed this time, or if that feature they’re excited about finally made it out of beta.

Release notes tend to map directly to items that a team is tracking internally; I’d expect a work item for a high-severity bug to make it into the release documentation. However, putting together release notes can be quite a challenge and very time consuming. When it’s time to ship new software updates, someone must manually go back to the last release, gather the relevant information, and compile it into a document to share with users. How do we keep the release information current and accurate for end users?

It’d be nice to automate the process of extracting information from completed work items and merged pull requests to create a document that outlines changes in a new release. This was the inspiration for the Release Notes Generator. With Azure Functions and Azure Blob Storage, the generator creates a markdown file every time a new release is created in Azure DevOps. In this post, we’ll walk through how the generator works, using a sample DevOps project as an example. If you’d like a GitHub version, see the GitHub release notes generator sister post.

 

Overview of an Azure DevOps Project’s Work Items

 

View of Rendered Markdown version of release notes in VS Code with Markdown All in One Extension

 

The generator is an Azure Function app; Functions allow you to pay only for the time your code is running, so I’m only paying for the time it takes my note-generating code to execute. With an HTTP-triggered function, a webhook is configured in Azure DevOps to send an HTTP request to the function to kick off the notes generation process. Configuring the webhook simply requires copying the URL of the function you’d like the event sent to. An added benefit of using Azure Functions is that you can get started locally on your machine using tools such as the Azure Functions Core Tools or Visual Studio. You can create, debug, test, and deploy your function app all from the comfort of your own computer without even needing an Azure subscription. You can test HTTP-triggered functions locally with a tool like ngrok.

 

Local development of HTTP triggered function in Visual Studio

Out of all of Azure’s storage account offerings, blob storage is suited for serving and storing unstructured data objects like textual files, including the markdown representation of the release notes. The blob storage structure is similar to common file systems where objects, named blobs, are organized and stored in containers. This way, the release notes have a dedicated location inside a “releases” container. You can manage a storage account on your computer with the Azure Storage Explorer.

 

Release notes blobs in the releases container in Azure Storage Explorer

The release function uses the Azure Storage API to create a file and append text and links to it. Interacting with blobs and blob containers through the API requires minimal setup; you just need the associated storage account connection string to get started.

Creating a new release file with Azure Storage API

Azure Functions are a quick and straightforward way to enhance your workflows with Azure DevOps webhooks. The release notes generator sample code is a good start if you’re interested in exploring the serverless possibilities that work for you. The sample includes instructions on how to run it in Visual Studio. Once you’ve got your own generator up and running, be sure to visit the docs and samples to see what else you can do.

Resources

Sample Code on GitHub

Overview of Azure DevOps Project

Azure Functions Documentation

Develop Azure Functions using Visual Studio

Code and test Azure Functions Locally

WebHooks with Azure DevOps Services

Quickstart: Use .NET to create a blob in object storage

Get started with Storage Explorer

Microsoft Learn Learning Path: Create Serverless Applications

GitHub Version sister post


Microsoft named a 2018 Gartner Peer Insights Customers’ Choice for Access Management


Howdy folks,

Every day, everyone in the Microsoft Identity Division comes to work focused on helping you, our customers, make your employees, partners, and customers more productive and to make it easier for you to securely manage access to your enterprise resources.

So, I was pretty excited to learn that Microsoft was recently recognized as a 2018 Gartner Peer Insights Customers’ Choice for Access Management, Worldwide.

Image of several workers gathered around a laptop.

In the announcement, Gartner explained, “The Gartner Peer Insights Customers’ Choice is a recognition of vendors in this market by verified end-user professionals, taking into account both the number of reviews and the overall user ratings.” To ensure fair evaluation, Gartner maintains rigorous criteria for recognizing vendors with a high customer satisfaction rate.

Receiving this recognition is incredibly energizing. It’s a strong validation that we’re making a positive impact for our customers and that they value the innovations we added to Azure Active Directory (Azure AD) this year.

To receive this recognition, a vendor must have a minimum of 50 published reviews with an average overall rating of 4.2 stars or higher.

Here are a few quotes from the reviews our customers wrote for us:

“Azure AD is fast becoming the single solution to most of our identity and access problems.”
—Enterprise Security Architect in the Transportation Industry. Read full review.

“Azure Active Directory is making great strides to become a highly available and ubiquitous directory service.”
—Chief Technology Officer in the Services Industry. Read full review.

“[Microsoft] has been a great partner in our implementing an identity solution [that] met the needs of our multiple agencies and provided us with a roadmap to continue to move forward with SSO and integration of our legacy and newly developed application. We were also able to set a standard for our SaaS application authentication and access.”
—Director of Technology in the Government Industry. Read full review.

Read more reviews for Microsoft.

Today, more than 90,000 organizations in 89 countries use Azure AD Premium and we manage over eight billion authentications per day. Our engineering team works around the clock to deliver high reliability, scalability, and satisfaction with our service, so being recognized as a Customers’ Choice is pretty motivating for us. It’s been exciting to see the amazing things many of our customers are doing with our identity services.

On behalf of everyone working on Azure AD, I want to say thank you to our customers for this recognition! We look forward to building on the experience and trust that led to us being named a Customers’ Choice!

The Gartner Peer Insights Customers’ Choice logo is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customers’ Choice distinctions are determined by the subjective opinions of individual end-user customers based on their own experiences, the number of published reviews on Gartner Peer Insights, and overall ratings for a given vendor in the market, as further described here, and are not intended in any way to represent the views of Gartner or its affiliates.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)
Corporate VP of Program Management
Microsoft Identity Division

The post Microsoft named a 2018 Gartner Peer Insights Customers’ Choice for Access Management appeared first on Microsoft 365 Blog.

bingbot Series: Maximizing Crawl Efficiency


At the SMX Advanced conference in June, I announced that over the next 18 months my team will focus on improving our Bing crawler, bingbot. I asked the audience to share data to help us optimize our plans. First, I want to say "thank you" to those of you who responded and provided us with great insights. Please keep them coming!

To keep you informed of the work we've done so far, we are starting this series of blog posts related to our crawler, bingbot. In this series we will share best practices, demonstrate improvements, and unveil new crawler abilities.

Before drilling into details about how our team is continuing to improve our crawler, let me explain why we need bingbot and how we measure bingbot's success.

First things first: What is the goal of bingbot?

Bingbot is Bing's crawler, sometimes also referred to as a "spider". Crawling is the process by which bingbot discovers new and updated documents or content to be added to Bing's searchable index. Its primary goal is to maintain a comprehensive index updated with fresh content.

Bingbot uses an algorithm to determine which sites to crawl, how often, and how many pages to fetch from each site. The goal is to minimize bingbot's crawl footprint on your websites while ensuring that the freshest content is available. How do we do that? The algorithmic process selects URLs to be crawled by prioritizing relevant known URLs that may not be indexed yet, as well as already-indexed URLs that we re-check to ensure the content is still valid (for example, not a dead link) and has not changed. We also crawl content specifically to discover links to new URLs that have yet to be discovered. Sitemaps and RSS/Atom feeds are examples of URLs fetched primarily to discover new links.

Measuring bingbot success : Maximizing Crawl efficiency

Bingbot crawls billions of URLs every day. It's a hard task to do this at scale, globally, while satisfying all webmasters, websites, and content management systems, handling site downtime, and ensuring that we aren't crawling too often or too rarely. We've heard concerns that bingbot doesn't crawl frequently enough and that content isn't fresh within the index; at the same time, we've heard that bingbot crawls too often, putting constraints on website resources. It's an engineering problem that hasn't fully been solved yet.

Often, the issue is managing the frequency with which bingbot needs to crawl a site to ensure new and updated content is included in the search index. Some webmasters request to have their sites crawled daily by bingbot to ensure that Bing has the freshest version of their site in the index, whereas the majority of webmasters would prefer bingbot to crawl their site only when new URLs have been added or content has been updated or changed. The challenge we face is how to model the bingbot algorithms around both what a webmaster wants for their specific site and the frequency with which content is added or updated, and how to do this at scale.

To measure how smart our crawler is, we measure bingbot's crawl efficiency. Crawl efficiency is how much new and fresh content we discover per page crawled. Our crawl-efficiency north star is to crawl a URL only when content has been added (a URL not crawled before) or updated (fresh on-page content or useful outbound links). The more we crawl duplicated, unchanged content, the lower our crawl efficiency metric is.

Later this month, Cheng Lu, our engineering lead for the crawler team, will continue this series of blog posts by sharing examples of how crawl efficiency has improved over the last few months. I hope you are looking forward to learning more about how we improve crawl efficiency, and as always, we look forward to seeing your comments and feedback.

Thanks!

Fabrice Canel
Principal Program Manager, Webmaster Tools
Microsoft - Bing
 

Accelerating AI in healthcare: Security, privacy, and compliance


Healthcare is drowning in data. Every patient brings a record that could span decades, with x-rays, MRIs, and other data that can affect every decision. Providers and payers bring their own collateral to the table. Skills, policies, and certifications are just the start. Complying with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) presents additional burdens on any process. But there is a remedy: AI (artificial intelligence) and ML (machine learning) are powerful tools that can change the way healthcare organizations deal with the tsunami of healthcare data.

Use cases where AI can help range from diagnostic imaging to predicting patient length of stay, and even chatbots. But these initiatives must avoid breaches, ransomware, and other privacy and compliance issues.

Join this upcoming webinar, live on October 18, 2018, at 11:00 AM Pacific Time, featuring thought leaders from across the healthcare industry, including:

  • Andrew Hicks, VP, Healthcare Assurance Services at Coalfire
  • Mitch Parker, Executive Director, Information Security, and Compliance at IU Health
  • David Houlding, Principal Healthcare lead at Microsoft

Each presenter will provide practical advice on how to address security, privacy, and compliance with AI solutions in healthcare.

Key takeaways:

  • AI in healthcare trends and risks
  • Protecting the confidentiality, integrity, and availability of AI systems and sensitive data
  • Ensuring privacy throughout the data lifecycle from collection to disposal
  • Achieving compliance with healthcare regulations and data protection laws
  • Opportunities to work with Microsoft and partners to accelerate your AI in healthcare initiative with security and compliance blueprints

Register now, or watch the webinar "Accelerating AI in healthcare: Security, privacy, and compliance" on demand.

What’s Next for Visual Studio for Mac


Since it was released a little more than a year ago, Visual Studio 2017 for Mac has grown from being an IDE primarily focused on mobile application development using Xamarin to one that includes support for all major .NET cross-platform workloads including Xamarin, Unity, and .NET Core. Our aspiration with Visual Studio for Mac is to bring the Visual Studio experiences that developers have come to know and love on Windows to macOS and to provide an excellent IDE experience for all .NET cross-platform developers.

Over the past year, we added several new capabilities to Visual Studio for Mac including .NET Core 2; richer language services for editing JavaScript, TypeScript, and Razor pages; Azure Functions; and the ability to deploy and debug .NET Core apps inside Docker containers. At the same time, we have continued to improve Xamarin mobile development inside Visual Studio for Mac by adding same-day support for the latest iOS and Android SDKs, improving the visual designers and streamlining the emulator and SDK acquisition experiences. And we have updated the Unity game development experience to reduce launch times of Visual Studio for Mac when working together with the Unity IDE.  Finally, we have been investing heavily in fundamentals such as customer feedback via the Report-a-Problem tool, accessibility improvements, and more regular updates of components that we share with the broader .NET ecosystem such as the .NET compiler service (“Roslyn”), and the .NET Core SDKs. We believe that these changes will allow us to significantly accelerate delivery of new experiences in the near future.

While we will continue to make improvements to Visual Studio 2017 for Mac into early next year, we also want to start talking about what’s next: Visual Studio 2019 for Mac. Today, we are publishing a roadmap for Visual Studio for Mac, and in this blog post, I wanted to write about some of the major themes of feedback we are hearing and our plans to address them as described in our roadmap.

Improving the performance and reliability of the code editor

Improving the typing performance and reliability is our single biggest focus area for Visual Studio 2019 for Mac. We plan to replace most of the internals of the Visual Studio for Mac editor with those from Visual Studio. Combined with the work to improve our integration of various language services, our aspiration is to bring similar levels of editor productivity from Visual Studio to Visual Studio for Mac. Finally, as a result of this work, we will also be able to address a top request from users to add Right-To-Left (RTL) support to our editor.

Supporting Team Foundation Version Control

Including support for Team Foundation Server, with both Team Foundation Version Control (TFVC) and Git as the source control mechanisms, has been one of the top requested experiences on the Mac. While we currently have an extension available for Visual Studio 2017 for Mac that adds support for TFVC, we will integrate it into the core of the source control experience in Visual Studio 2019 for Mac.

Increased productivity when working with your projects

The C# editor in Visual Studio for Mac will be built on top of the same Roslyn backend used by Visual Studio on Windows and will see continuous improvements. In Visual Studio 2017 for Mac (version 7.7), we will enable the Roslyn-powered brace completion and indentation engine, which helps improve your efficiency and productivity while writing C# code. We’re also making our quick fixes and code actions more discoverable by introducing a light-bulb experience. With the light bulb, you’ll see recommendations highlighted inline in the editor as you code, with quick keyboard actions to preview and apply the recommendations. In the Visual Studio 2019 for Mac release, we’ll also dramatically reduce the time it takes you to connect to your source code and begin working with it in the product, by introducing a streamlined “open from version control” dialog with a brand-new Git-focused workflow.

.NET Core and ASP.NET Core support

In future updates to Visual Studio 2017 for Mac, we will add support for .NET Core 2.2. We will add the ability to publish ASP.NET Core projects to a folder. We will also add support for Azure Functions 2.0, as well as update the New Functions Project dialog to support updating to the latest version of Azure Functions tooling and templates. In Visual Studio 2019 for Mac, we will add support for .NET Core 3.0 when it becomes available in 2019. We will add more ASP.NET Core templates and template options to Visual Studio for Mac and improve the Azure publishing options. Finally, building upon the code editor changes described above, we will improve all our language services supporting ASP.NET Core development including Razor, JavaScript and TypeScript.

Xamarin support

In addition to continuing to make improvements to the Xamarin platform itself, we will focus on improving Android build performance and improving the reliability of deploying iOS and Android apps. We will make it easy to acquire the Android emulators from within the Visual Studio for Mac IDE. Finally, we aim to make further improvements in the Xamarin.Forms Previewer and the Xamarin.Android Designer as well as the XAML language service for Xamarin Forms.

Unity support

We continue to invest in improving the experience of game developers using Unity to write and debug cross platform games as well as 2D and 3D content using Visual Studio for Mac. Unity now supports a .NET 4.7 and .NET Standard 2.0 profile, and we’re making sure that Visual Studio for Mac works out of the box to support those scenarios. Unity 2018.3 ships with Roslyn, the same C# compiler that is used with Visual Studio for Mac, and we’re enabling this for your IDE. In addition to this, we’ll be bringing our fine-tuned Unity debugger from the Visual Studio Tools for Unity to Visual Studio for Mac for a more reliable and faster Unity debugging experience.

Help us shape Visual Studio 2019 for Mac!

By supporting installation of both versions of the product side-by-side, we’ll make it easy for you to try out the Visual Studio 2019 for Mac preview releases while we are still also working on the stable Visual Studio 2017 for Mac releases in parallel.

We don’t have preview bits to share with you just yet, but we wanted to share our plans early so you can help us shape the product with your feedback, which you can share through our Developer Community website. We will update our roadmap for Visual Studio for Mac once a quarter to reflect any significant changes. We will also post an update to our roadmap for Visual Studio soon.

Unni Ravindranathan, Principal Program Manager

Unni currently leads the Visual Studio for Mac Program Management team. Ever since joining Visual Studio as an intern many years ago, he has focused primarily on improving the productivity of developers building apps for various devices and platforms, including tools for building Windows apps, XAML, Blend, and our NuGet service.

Accessibility and array support with Azure Blockchain Workbench 1.4.0


We’ve been very grateful for the feedback you’ve given us since we first introduced Azure Blockchain Workbench in public preview a few months ago. Your feedback continues to be an essential and impactful part of our work. For this release, we focused on making Workbench more accessible for everyone. Accessibility is a key pillar in our vision of empowering every person and every organization to achieve more. We are excited to share some of the improvements we’ve made with accessibility in mind.

To use 1.4, you can either deploy a new instance of Workbench through the Azure Portal or upgrade your existing deployment to 1.4.0 using our upgrade script. This update includes the following improvements:

Better accessibility for screen readers and keyboard navigation

Azure Blockchain Workbench is far more than UI within client apps. Workbench provides a rich developer scaffold for you to develop and integrate blockchain solutions within your enterprise.

Azure Blockchain Workbench architecture diagram

The Web client gives you an easy-to-use environment to validate, test, and view blockchain applications. The application interface is dynamically generated based on smart contract metadata and can accommodate any use case. The client application delivers a user-facing front end to the complete blockchain applications generated by Blockchain Workbench.

With version 1.4.0, the Web client now fully supports screen readers in terms of navigation and reading information. In addition, we updated the Web client to better support keyboard shortcuts and navigation. We hope these improvements can make you more productive when creating blockchain applications in Workbench.

Customization of smart contract table columns

Workbench dynamically creates the Web client UI based on your smart contracts and application configuration. Workbench summarizes smart contract instances as a table in the Web client based on the properties specified in the application configuration. Depending on the blockchain scenario, developers may specify many properties for any given application. Unfortunately, if many properties are specified, the smart contract table within the Web client UI becomes hard to read due to the size and number of columns, as shown in the image below.

Smart contract table with many columns

In some cases, properties may be more useful from a reporting perspective rather than a user experience perspective. To help with the readability of the smart contract tables, we’ve introduced a new feature, which allows users to customize the smart contract table in terms of visible columns and order of columns.

Below is a screenshot of the customized table pane, which allows each user to toggle the visibility of table columns as well as adjust the ordering of columns within the table.

Customize table pane

The smart contract table view will reflect all changes applied via the customize table pane.

Smart contract table with customized columns

New supported datatype – Arrays

With 1.4.0, we now support array datatypes as part of constructor and function parameters as well as smart contract properties. Arrays allow you to create blockchain apps where you can input and represent a strongly typed list of content, such as a list of numbers or values.

Workbench currently supports static and dynamic arrays of integers, booleans, money, and time. Workbench does not yet support arrays of enums or arrays of arrays, which includes strings. Note that for string support we’re waiting for the required Solidity functionality to get out of preview. Let us know if these limitations are blockers for you.

The array type is specified via the configuration file as follows:

"Properties": [
{
   {
            "Name": "PropertyType",
            "DisplayName": "Property Type",
            "Type": {
            "Name": "array",
                "ElementType": {
                     "Name": "int"
                }
             }
      }
},

There is a limitation in Solidity when it comes to public properties related to arrays. If you have a public state variable of an array type, Solidity only allows you to retrieve single elements of the array via the auto-generated getter function. To work around this limitation, you need to provide an appropriate function that returns the whole array. For example:

function GetPropertyTypes() public constant returns (int[]) {
         return PropertyType;
}

If this function is not part of your Solidity code, we will show an error when uploading your blockchain application.

Support for strings up to 4k characters

One of the limitations in Workbench was that string data types could only be 256 characters long. We’ve received feedback from folks who wanted us to increase the limit. With 1.4.0, the new limit is 4,000 characters. Note that using more characters results in using more gas when processing transactions. When building your smart contracts, please be aware of the block gas limit and build your smart contracts with that limit in mind.


Faster and more reliable transaction processing

With 1.4.0, we’ve made additional reliability improvements to the DLT Watcher microservice (see the Blockchain Workbench architecture document for more information on this component). The Watcher has been rewritten and is able to process blocks at a much faster rate.

Please use our Blockchain User Voice to provide feedback and suggest features/ideas for Workbench. Your input is helping make this a great service.  We look forward to hearing from you.
