
New to Microsoft 365 in May—empowering and securing users


Each month on the Microsoft 365 Blog, we highlight key updates to Microsoft 365 that build on our vision for the modern workplace. This month, we introduced a number of new capabilities to help individuals produce accessible content, work together in real-time, and create a secure and compliant workplace.

Here’s a look at what we brought to Microsoft 365 in May.

Empowering creative teamwork

Create accessible content in Office 365—We enhanced the Accessibility Checker to streamline the process of creating quality content that is accessible to people with disabilities. Now, the Accessibility Checker identifies an expanded range of issues within a document, including low-contrast text that is difficult to read because the font color is too similar to the background color. The checker also includes a recommended action menu and utilizes AI to make intelligent suggestions for improvements—like suggesting a description for an image—making it easier to fix flagged issues from within your workflow.

A GIF showing the Accessibility Checker run from the Review tab in Word: it flags a missing image description and low-contrast text, and the recommended actions add alt text and change the page color to white, resolving all issues.

Accessibility Checker alerts you in real time to issues that make your content difficult for people with disabilities to access.

Work in mixed reality with SharePoint—This month, we unveiled SharePoint spaces—immersive, mixed reality experiences built on SharePoint—which enable you to interact with and explore content in new ways. Now, Microsoft 365 subscribers can work with 3D models, 360-degree videos, panoramic images, organizational charts, visualizations, and any information on your intranet to create immersive mixed reality experiences. SharePoint spaces make it easy to create virtual environments with point-and-click simplicity, helping viewers digest information that might be too voluminous or too complex to experience in the real world or in a two-dimensional environment.

Create immersive virtual environments in seconds with SharePoint spaces.

Find relevant content faster in SharePoint—The new Find tab in the SharePoint mobile app makes it easier to access the information you need when looking for expertise, content, apps, or resources on the go. The Find tab uses AI to automatically surface sites, files, news, and people relevant to you without having to search—including documents and sites that you were recently working on from across your devices. The Find tab also refines search results as you type, and leverages AI to provide instant answers to questions you ask based on information from across your intranet.

A screenshot of the SharePoint Find tab.

By learning from your existing content and organizational knowledge, AI provides instant answers, transforming search into action.

Run efficient meetings with Microsoft Teams—This month at Build, we demonstrated a range of future capabilities in Microsoft Teams that utilize AI to make meetings smarter and more intuitive over time—including real-time transcription, Cortana voice interactions for Teams-enabled devices, and automatic notetaking. Today, we’re announcing new capabilities for mobile users that make it easier to participate in meetings on the go. Now, you can quickly share your screen with others in the meeting directly from your mobile device, or upload images and video from your library. These improvements make everyone a first-class meeting participant—regardless of location or device.


Extend meeting capabilities with Surface Hub 2—Earlier this month, we introduced Surface Hub 2, a device built from the ground up to be used by teams in any organization. Surface Hub 2 integrates Teams, Microsoft Whiteboard, Office 365, Windows 10, and the intelligent cloud into a seamless collaboration experience, which extends the capabilities of any meeting space and allows users to create—whether in the same room or separated by thousands of miles.

Creating a secure and compliant workplace

Achieve GDPR compliance with the Microsoft Cloud—This month marked a major milestone for individual privacy rights with the General Data Protection Regulation (GDPR) that took effect on May 25, 2018. Over the last few months, we introduced new capabilities across the Microsoft Cloud to help you effectively demonstrate that your organization has taken appropriate steps to protect the privacy rights of individuals. To learn more about these capabilities, read our summary of Microsoft’s investment to support GDPR and the privacy rights of individuals.

Microsoft 365 customer INAIL leverages Azure Information Protection to classify, label, and protect their most sensitive data.

Work securely with external partners in Microsoft 365—We introduced several new capabilities in Azure Active Directory Business-to-Business (B2B) collaboration that make it easier to work safely and securely with people outside of your Microsoft 365 tenant. B2B collaboration allows administrators to share access to internal resources and applications with external partners while maintaining complete control over their own corporate data. Starting this month, first-time external users are welcomed to your tenant with a modernized experience and improved consent flow, making it easier for users to accept the terms of use agreements set by your organization.

We also improved Business-to-Consumer (B2C) collaboration, making it easier to invite external partners who use consumer email accounts like Outlook and Gmail while protecting your organization’s data and improving the process of setting access policies.

A screenshot from Azure Active Directory's Review permissions tab.

Track terms of use agreements in Azure Active Directory B2B by recording when users consent.

Other updates

As companies seek to empower people to do their best work, a cultural transformation isn’t just inevitable—it’s essential. This month, we released a white paper outlining how Microsoft is partnering with customers to foster a modern workplace that is productive, responsive, creative, and secure. To learn more, read the New Culture of Work white paper.

Check out these other updates from across Microsoft 365:



Announcing TypeScript 2.9

Today we’re announcing the release of TypeScript 2.9!

If you’re not familiar with TypeScript, it’s a language that adds optional static types to JavaScript. Those static types help make guarantees about your code to avoid typos and other silly errors. They can also provide nice things like code completions and easier project navigation thanks to tooling built around those types. When your code is run through the TypeScript compiler, you’re left with clean, readable, and standards-compliant JavaScript code, potentially rewritten to support much older browsers that only support ECMAScript 5 or even ECMAScript 3.
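As a quick illustrative sketch (not code from the release itself), a type annotation lets the compiler catch a typo before the code ever runs:

function greet(name: string) {
    // Error: Property 'toUppercase' does not exist on type 'string'.
    // Did you mean 'toUpperCase'?
    return "Hello, " + name.toUppercase();
}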

If you can’t wait any longer, you can download TypeScript via NuGet or by running

npm install -g typescript

You can also get editor support for Visual Studio 2017, Visual Studio 2015 (with Update 3), Visual Studio Code, and Sublime Text.

Other editors may have different update schedules, but should all have excellent TypeScript support soon as well.

This release brings some great editor features:

  • Rename file and move declaration to new file
  • Unused span reporting
  • Convert property to getter/setter

And we also have core language/compiler features:

  • import() types
  • --pretty by default
  • Support for well-typed JSON imports
  • Type arguments for tagged template strings
  • Support for symbols and numeric literals in keyof and mapped object types

We also have some minor breaking changes that you should keep in mind if upgrading.

But otherwise, let’s look at what new features come with TypeScript 2.9!

Editor features

Because TypeScript’s language server is built in conjunction with the rest of the compiler, TypeScript can provide consistent cross-platform tooling that can be used on any editor. While we’ll dive into language improvements in a bit, it should only take a minute to cover these features which are often the most applicable to users, and, well, fun to see in action!

Rename file and move declaration to new file

After much community demand, two extremely useful refactorings are now available! First, this release of TypeScript allows users to move declarations to their own new files. Second, TypeScript 2.9 has functionality to rename files within your project while keeping import paths up-to-date.

While not every editor has implemented these features yet, we expect they’ll be more broadly available soon.

Unused span reporting

TypeScript provides two lint-like flags: --noUnusedLocals and --noUnusedParameters. These options produce errors when certain declarations are found to be unused; however, while this information is generally useful, full-blown errors can be a bit much.

TypeScript 2.9 has functionality for editors to surface these as “unused” suggestion spans. Editors are free to display these as they wish. As an example, Visual Studio Code will be displaying these as grayed-out text.

A parameter being grayed out as an unused declaration
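As a minimal illustration (assuming --noUnusedLocals is enabled in your tsconfig.json or on the command line), the local below would be surfaced as an unused suggestion span:

function add(a: number, b: number) {
    const difference = a - b; // never read; reported as unused under --noUnusedLocals
    return a + b;
}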

Convert property to getter/setter

Thanks to community contributor Wenlu Wang, TypeScript 2.9 supports converting properties to get- and set- accessors.
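For example, applying the refactoring to a simple property produces roughly the following (a sketch of the shape of the output, not its exact text):

// Before:
class Person {
    name = "Daniel";
}

// After:
class Person {
    private _name = "Daniel";

    get name() {
        return this._name;
    }

    set name(value) {
        this._name = value;
    }
}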

import() types

One long-running pain-point in TypeScript has been the inability to reference a type in another module, or the type of the module itself, without including an import at the top of the file.

In some cases, this is just a matter of convenience – you might not want to add an import at the top of your file just to describe a single type’s usage. For example, to reference the type of a module at an arbitrary location, here’s what you’d have to write before TypeScript 2.9:

import * as _foo from "foo";

export async function bar() {
    let foo: typeof _foo = await import("foo");
}

In other cases, there are simply things that you can’t achieve today – for example, referencing a type within a module in the global scope is impossible. This is because a file with any imports or exports is considered a module, so adding an import for a type in a global script file will automatically turn that file into a module, which drastically changes things like scoping rules and strict mode within that file.

That’s why TypeScript 2.9 is introducing the new import(...) type syntax. Much like ECMAScript’s proposed import(...) expressions, import types use the same syntax, and provide a convenient way to reference the type of a module, or the types which a module contains.

// foo.ts
export interface Person {
    name: string;
    age: number;
}

// bar.ts
export function greet(p: import("./foo").Person) {
    return `
        Hello, I'm ${p.name}, and I'm ${p.age} years old.
    `;
}

Notice we didn’t need to add a top-level import to specify the type of p. We could also rewrite our example from above where we awkwardly needed to reference the type of a module:

export async function bar() {
    let foo: typeof import("./foo") = await import("./foo");
}

Of course, in this specific example foo could have been inferred, but this might be more useful with something like the TypeScript language server plugin API.

--pretty by default

TypeScript’s --pretty mode has been around for a while, and is meant to provide a friendlier console experience. Unfortunately it’s been opt-in for fear of breaking changes. However, this meant that users often never knew --pretty existed.

To minimize breaking changes, we’ve made --pretty the default when TypeScript can reasonably detect that it’s printing output to a terminal (or really, whatever Node considers to be a TTY device). Users who want to turn --pretty off may do so by specifying --pretty false on the command line. Programs that rely on TypeScript’s output should adjust the spawned process’s TTY options.
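For example, to opt out explicitly:

tsc --pretty false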

Support for well-typed JSON imports

TypeScript is now able to import JSON files as input files when using the node strategy for moduleResolution. This means you can use JSON files as part of your project, and they’ll be well-typed!

// ./tsconfig.json
{
    "compilerOptions": {
        "module": "commonjs",
        "resolveJsonModule": true,
        "esModuleInterop": true
        "outDir": "lib"
    },
    "include": ["src"]
}
// ./src/settings.json
{
    "dry": false,
    "debug": false
}
// ./src/foo.ts
import settings from "./settings.json";

settings.debug === true;  // Okay
settings.dry === 2;       // Error! Can't compare a `boolean` and `number`

These JSON files will also carry over to your output directory so that things “just work” at runtime.

Type arguments for tagged template strings

If you use tagged template strings, you might be interested in some of the improvements in TypeScript 2.9.

Most of the time when calling generic functions, TypeScript can infer type arguments. However, there are times where type arguments can’t be inferred. For example, one might imagine an API like the following:

export interface RenderedResult {
    // ...
}

export interface TimestampedProps {
    timestamp: Date;
}

export function timestamped<OtherProps>(
    component: (props: TimestampedProps & OtherProps) => RenderedResult):
        (props: OtherProps) => RenderedResult {
    return props => {
        const timestampedProps =
            Object.assign({}, props, { timestamp: new Date() });
        return component(timestampedProps);
    }
}

Here, let’s assume a library where “components” are functions which take objects and return some rendered content. The idea is that timestamped will take a component that may use a timestamp property (from TimestampedProps) and some other properties (from OtherProps), and return a new component which only takes properties specified in OtherProps.

Unfortunately there’s a problem with inference when using timestamped naively:

declare function createDiv(contents: string | RenderedResult): RenderedResult;

const TimestampedMessage = timestamped(props => createDiv(`
    Message opened at: ${props.timestamp}
    Message contents: ${props.contents}
`));

Here, TypeScript infers the wrong type for props when calling timestamped because it can’t find any candidates for OtherProps. OtherProps gets the type {}, and props is then assigned the type TimestampedProps & {} which is undesirable.

We can get around this with an explicit annotation on props:

interface MessageProps {
    contents: string;
}

//        Notice this intersection type vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
const TimestampedMessage = timestamped((props: MessageProps & TimestampedProps) => /*...*/);

But we would prefer not to write as much; the type system already knows TimestampedProps will be part of the type; it just needs to know what OtherProps will be, so we can specify that explicitly.

interface MessageProps {
    contents: string;
}

const TimestampedMessage = timestamped<MessageProps>(props => createDiv(`
    Message opened at: ${props.timestamp.toLocaleString()}
    Message contents: ${props.contents}
`));

Whew! Great! But what does that have to do with tagged template strings?

Well, the point here is that we, the users, were able to give type arguments when the type system had a hard time figuring things out for our invocations. It’s not ideal, but at least it was possible.

But tagged template strings are also a type of invocation. Tagged template strings actually invoke functions, but up until TypeScript 2.9, they didn’t support type arguments at all.

For tagged template strings, this can be useful for libraries that work like styled-components:

interface StyleProps {
    themeName: string;
}

declare function styledInput<OtherProps>(
    strs: TemplateStringsArray, 
    ...fns: ((props: OtherProps & StyleProps) => string)[]):
        React.Component<OtherProps>;

Similar to the above example, TypeScript would have no way to infer the type of OtherProps if the functions passed to fns were not annotated:

export interface InputFormProps {
    invalidInput: string;
}

// Error! Type 'StyleProps' has no property 'invalidInput'.
export const InputForm = styledInput `
    color:
        ${({themeName}) => themeName === 'dark' ? 'black' : 'white'};
    border-color: ${({invalidInput}) => invalidInput ? 'red' : 'black'};
`;

TypeScript 2.9 now allows type arguments to be placed on tagged template strings, making this just as easy as a regular function call!

export interface InputFormProps {
    invalidInput: string;
}

export const InputForm = styledInput<InputFormProps> `
    color:
        ${({themeName}) => themeName === 'dark' ? 'black' : 'white'};
    border-color: ${({invalidInput}) => invalidInput ? 'red' : 'black'};
`;

In the above example, themeName and invalidInput are both well-typed. TypeScript knows they are both strings, and would have told us if we’d misspelled either.

Support for symbols and numeric literals in keyof and mapped object types

TypeScript’s keyof operator is a useful way to query the property names of an existing type.

interface Person {
    name: string;
    age: number;
}

// Equivalent to the type
//  "name" | "age"
type PersonPropertiesNames = keyof Person;

Unfortunately, because keyof predates TypeScript’s ability to reason about unique symbol types, keyof never recognized symbolic keys.

const baz = Symbol("baz");

interface Thing {
    foo: string;
    bar: number;
    [baz]: boolean; // this is a computed property type
}

// Error in TypeScript 2.8 and earlier!
// `typeof baz` isn't assignable to `"foo" | "bar"`
let x: keyof Thing = baz;

TypeScript 2.9 changes the behavior of keyof to factor in both unique symbols as well as number and numeric literal types. As such, the above example now compiles as expected. keyof Thing now boils down to the type "foo" | "bar" | typeof baz.

With this functionality, mapped object types like Partial, Required, or Readonly also recognize symbolic and numeric property keys, and no longer drop properties named by symbols:

type Partial<T> = {
    [K in keyof T]?: T[K]
}

interface Thing {
    foo: string;
    bar: number;
    [baz]: boolean;
}

type PartialThing = Partial<Thing>;

// This now works correctly and is equivalent to
//
//   interface PartialThing {
//       foo?: string;
//       bar?: number;
//       [baz]?: boolean;
//   }

Unfortunately this is a breaking change for any usage where users believed that for any type T, keyof T would always be assignable to a string. Because symbol- and numeric-named properties invalidate this assumption, we expect some minor breaks which we believe to be easy to catch. In such cases, there are several possible workarounds.

If you have code that’s really meant to only operate on string properties, you can use Extract<keyof T, string> to remove symbol and number inputs:

function useKey<T, K extends Extract<keyof T, string>>(obj: T, k: K) {
    let propName: string = k;
    // ...
}

If you have code that’s more broadly applicable and can handle more than just strings, you should be able to substitute string with string | number | symbol, or use the built-in type alias PropertyKey.

function useKey<T, K extends keyof T>(obj: T, k: K) {
    let propName: string | number | symbol = k; 
    // ...
}
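Equivalently, using the built-in PropertyKey alias:

function useKey<T, K extends keyof T>(obj: T, k: K) {
    let propName: PropertyKey = k;
    // ...
}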

Alternatively, you can revert to the old behavior under the --keyofStringsOnly compiler flag, but this is meant to be used as a transitionary flag.

If you intend on using --keyofStringsOnly and migrating off, instead of PropertyKey, you can create a type alias on keyof any, which is equivalent to string | number | symbol under normal circumstances, but becomes string when --keyofStringsOnly is set.

type KeyofBase = keyof any;

Breaking changes

keyof types include symbolic/numeric properties

As mentioned above, keyof types (also called “key query types”) now include names that are symbols and numbers, which can break some code that assumes keyof T is assignable to string. You can correct your code’s assumptions, or revert to the old behavior by using the --keyofStringsOnly compiler option:

// tsconfig.json
{
    "compilerOptions": {
        "keyofStringsOnly": true
    }
}

--pretty on by default

Also mentioned above, --pretty is now turned on by default, though this may be a breaking change for some workflows.

Trailing commas not allowed on rest parameters

Trailing commas can no longer occur after ...rest-parameters, as in the following.

function pushElement(
        foo: number,
        bar: string,
        ...rest: any[], // error!
    ) {
    // ...
}

This break was added for conformance with ECMAScript, as trailing commas are not allowed to follow rest parameters in the specification. The fix is simply to remove the trailing comma.

Unconstrained type parameters are no longer assignable to object in strictNullChecks

The following code now errors:

function f<T>(x: T) {
    const y: object | null | undefined = x;
}

Since generic type parameters can be substituted with any primitive type, this is a precaution TypeScript has added under strictNullChecks. To fix this, you can add a constraint of object:

// We can add an upper-bound constraint here.
//           vvvvvvvvvvvvvv
function f<T extends object>(x: T) {
    const y: object | null | undefined = x;
}

never can no longer be iterated over

Values of type never can no longer be iterated over, which may catch a good class of bugs.

declare let foo: never;
for (let prop in foo) {
    // Error! `foo` has type `never`.
}

Users can avoid this behavior by using a type assertion to cast to the type any (i.e. foo as any).
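For example, a minimal sketch of that workaround:

declare let foo: never;
for (let prop in (foo as any)) {
    // Okay: the assertion opts out of the new check.
}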

What’s next?

We hope you’re as excited about the improvements to TypeScript 2.9 as we are – but save some excitement for TypeScript 3.0, where we’re aiming to deliver an experience around project-to-project references, a new unknown type, a stricter any type, and more!

As always, you can keep an eye on the TypeScript roadmap to see what we’re working on for our next release (as well as anything we didn’t get the chance to mention in this blog post for this release). We also have nightly releases so you can try things out on your machine and give us your feedback early on. You can even try installing it now (npm install -g typescript@next) and play around with the new unknown type.

Let us know what you think of this release over on Twitter or in the comments below, and feel free to report issues and suggestions by filing a GitHub issue.

Happy Hacking!

Visual Studio 2017 version 15.8 Preview 2


We’re happy to share the highlights of the latest Visual Studio 2017 preview, which is now available for download, including:

  • Emulator and Designer improvements when developing mobile Android apps, and Xamarin.Android support of Android P Developer Preview 1
  • C++ development improvements
  • Ability to fine-tune solution load configuration settings to maximize performance
  • Significant new functionality in the CPU Usage tool to help you profile your applications’ performance
  • Improvements to make it easier to build and debug extensions

This Preview builds upon the features that debuted in Visual Studio version 15.8 Preview 1, which was released at Microsoft Build 2018 earlier this month. As always, you can drill into the details of all of these features by exploring the Visual Studio 2017 version 15.8 Preview release notes.

We hope that you will install and use this Preview, and most importantly, share your feedback with us. To acquire the Preview, you can either install it fresh from here, update your bits directly from the Preview IDE, or if you have an Azure subscription, you can simply provision a virtual machine with this latest Preview. We appreciate your early adoption, engagement, and feedback as it helps us ship the most high-quality tools to everyone in the Visual Studio community.

And now, without further ado, I’d like to introduce the new features to you.

Mobile Development for Android

Android Emulator: This release contains a preview of the Google Android emulator that is compatible with Hyper-V, which is available in the Windows 10 April 2018 Update. This means you can use Google’s Android emulator side-by-side with other Hyper-V based technologies, including Hyper-V VMs, Docker tooling, the HoloLens emulator, and more. Developers who use Hyper-V now have access to a fast Android emulator that will always support the latest Android APIs, works with Google Play Services out of the box, and has all features of the Android emulator, including camera, geolocation, and Quick Boot.

This Visual Studio 2017 version 15.8 Preview release adds support for launching, deploying, and debugging Android apps from within Visual Studio with the new Hyper-V based emulator. For more information, visit the documentation on enabling Hyper-V based hardware acceleration in the Android emulator.

Hyper-V Emulator

Android Designer: We’d like to call your attention to a couple of improvements to the Android development experience. First, we’ve enabled the split-view editor, which allows you to create, edit, and preview your layouts at the same time.

Second, you can now inject sample placeholder data or images into your views so that you can preview how the layout would behave. Refer to the Preview release notes for additional configuration details. Both of these improvements will make you more productive as you iterate on your UI design.

Sample data example

Xamarin.Android SDK: We’re excited to announce Xamarin.Android support for Android P Developer Preview 1. Android P introduces new features including display cutout support, notification enhancements, multi-camera support, AnimatedImageDrawable and much more. Be on the look-out in future Visual Studio Previews for Xamarin.Android support of Android P Developer Preview 2 and Developer Preview 3. For more information about the release cycle of Android P Preview, read about the Android P Preview Program Overview.

C++ Development

ClangFormat: In Visual Studio 2017 version 15.8 Preview 2, we added the Add > New Item template for generating a .clang-format file following the coding convention specified for ClangFormat in Tools > Options. If the Visual Studio convention is selected, the generated file will match the user’s current Visual Studio formatting configuration from Tools > Options. In addition, we updated the shipped clang-format.exe version to 6.0.0.
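For reference, a .clang-format file is a small YAML document; a minimal hand-written example (illustrative only, not the exact generated output) might look like:

BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 100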

Open Folder: Adding configurations to CppProperties.json is now as simple as selecting a template. Use the configuration dropdown and select the option to create a CppProperties.json from a template, or right click in the editor of CppProperties.json to add templates to existing CppProperties.json files.
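For illustration, a minimal CppProperties.json along the lines of what a template produces (the names and values here are just an example):

{
  "configurations": [
    {
      "name": "x86-Debug",
      "includePath": [ "${workspaceRoot}\\**" ],
      "defines": [ "_DEBUG", "UNICODE" ]
    }
  ]
}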

CMake

Performance – Loading Solutions

By default, loading solutions automatically re-opens documents that were opened in the previous session. Sometimes, opening all of these files can cause solution loads to take a long time to complete. In Visual Studio 2017 version 15.8 Preview 2, we’ve introduced a Project and Solution config setting checkbox to give you the ability to change the default behavior regarding opening documents generated during the prior session. More information can be found here.

Performance – Profiling with the CPU Usage tool

In Visual Studio version 15.8 Preview 2 we’ve released several new features of the CPU Usage tool in the Performance Profiler.

First, the CPU Usage tool now displays asynchronous code in the Call Tree view with logical call stack stitching on by default. The feature can be turned off and on via the “Stitch Async Code” option in the Filter dropdown of the CPU Usage tool’s main view. Displaying logical call stacks for asynchronous code is helpful because asynchronous code is written with context surrounding it, even if the actual execution is performed independently of that logical context (on its own thread, for example). Below is a screenshot of deeply nested asynchronous functions displayed with their parent context (note the ‘[Async]’ tag prefixed to asynchronous functions).

CPU Usage Async

The CPU Usage tool now also includes a Modules/Functions view. This view will display execution cost (sample count) by module (dll) and by function within a module. This is useful when you need to determine how much a module contributed to CPU usage in total during a profiling session. You can expand a module to see a flat list of functions from the module breaking down the usage by individual function. This can be helpful to determine where to focus attention when investigating bugs or changes that impact performance. You can display the Modules view by selecting the “View in Modules” menu command from the context menu of a function in the list of the CPU Usage tool’s main view or by selecting “Modules” from the Current View dropdown when displaying the Call Tree or Caller/Callee views of the CPU Usage tool. Below is a screen shot of the Modules view with a module expanded.

CPU Usage Modules

Finally, the CPU Usage tool now has instance indication in the CPU Usage graph displayed in the main view. Instance indication is useful for knowing when a function was executing during a profiling session. This makes it very easy to determine whether a function executed in a single block of time or multiple times during a profiling session. Instances are indicated via a dark (purple) line over the CPU utilization % graph in the ruler at the top of the CPU Usage tool’s main view. To view instances in the CPU Usage graph, simply double-click a function in any of the views offered by the CPU Usage tool. Below is a screenshot of the instances of a selected method when it was on the stack during the profiling session.

Note: The below screen shot displays the main view (on the right) and the Modules view (on the left) side-by-side; this configuration is most useful to avoid having to switch the views to see the instance indication when working with a child view (Modules, Call Tree, or Caller/Callee). This configuration is available by click-dragging one of the window titles out of the title bar at the top of the document area and dropping it on the desired drop target.

CPU Usage Instance

Extension Development

We’ve added some components and features to the Build Tools SKU, which means that you can now build Visual Studio extension (VSIX) projects using only the Build Tools for Visual Studio 2017. Additionally, if you have more than one instance of Visual Studio 2017 installed (like, for example, you’re using the 15.8 Preview 2 version as well as the 15.7 side-by-side Release version), then you can now select which instance to deploy your extension to when debugging. This means that you can develop your extension using the released version while debugging using the preview one.

Extension Debugger Picker

Try out the Preview today!

If you’re not familiar with Visual Studio Previews, take a moment to read the Visual Studio 2017 Release Rhythm. Remember that Visual Studio 2017 Previews can be installed side-by-side with other versions of Visual Studio and other installs of Visual Studio 2017 without adversely affecting either your machine or your productivity. Previews provide an opportunity for you to receive fixes faster and try out upcoming functionality before it becomes mainstream. Similarly, the Previews enable the Visual Studio engineering team to validate usage, incorporate suggestions, and detect flaws earlier in the development process. We are highly responsive to feedback coming in through the Previews and look forward to hearing from you.

Please get the Visual Studio Preview today, exercise your favorite workloads, and tell us what you think. If you have an Azure subscription, you can provision a virtual machine of this preview. You can report issues to us via the Report a Problem tool in Visual Studio or you can share a suggestion on UserVoice. You’ll be able to track your issues in the Visual Studio Developer Community where you can ask questions and find answers. You can also engage with us and other Visual Studio developers through our Visual Studio conversation in the Gitter community (requires GitHub account). Thank you for using the Visual Studio Previews.

Christine Ruana, Principal Program Manager, Visual Studio

Christine is on the Visual Studio release engineering team and is responsible for making Visual Studio releases available to our customers around the world.

3 reasons why Azure’s infrastructure is secure


This is the third blog in a 4-part blog post series on how Microsoft Azure provides a secure foundation.

Customers tell me that securing their datacenter infrastructure requires an enormous amount of resources and investment. With the challenges of recruiting security experts to maintain secure infrastructure, there is not a clear return on investment. To keep pace in this ever-changing security landscape, it’s important that they can protect their infrastructure while also lowering their costs and reducing complexity. Azure is uniquely positioned to help with these challenges.

Microsoft Azure provides a secure foundation across physical, infrastructure, and operational security. Customers like Smithfield and Merrill Corporation choose Azure to be their trusted cloud due to its platform security. Microsoft invests over a billion dollars every year into security, including the security of the Azure platform, so that your data and business assets can be protected.

A few months ago, we started an Azure security blog series with a blog on our layered approach to physical security. We shared the 3 ways that Azure improves your security at the RSA conference. Today, we will discuss the network infrastructure, firmware and hardware, and continuous testing and monitoring that make up Azure’s secure infrastructure. At the end of this blog, we will discuss some of the security services you can use to further secure your network.

1. Secure network infrastructure

Adopting cloud helps you reduce infrastructure costs while scaling resources and being agile. Even though the network is shared, Microsoft has several mechanisms in place to ensure Azure’s network and our customers’ networks remain segregated and secure.

Management (Microsoft-managed) networks and customer networks are isolated in Azure to improve performance and ensure the traffic moving through the platform is secure. The management networks are managed by Microsoft and are only available for devices and administrators to connect to Azure. When devices or administrators want to connect to Azure, controls such as just-in-time access and privileged access workstations limit accessibility to help ensure unauthorized individuals do not gain access to the Azure network. In addition, network cabling, the equipment to support and secure the network, and the integration of systems for monitoring the network are managed by Microsoft.

The customer networks are segregated from management networks to protect them from attacks targeting management networks. Customer networks are separated from each other using networking virtualization methods, so customers cannot gain access to other customers’ networks.

Besides using different networks to help secure your data, all data in transit through Azure’s infrastructure is automatically encrypted to ensure the confidentiality and integrity of data. Encrypting customer traffic moving over the Azure network helps prevent unauthorized users from gaining access to the infrastructure.

Azure’s secure network also has built-in mechanisms to protect against distributed denial-of-service (DDoS) attacks. DDoS attacks try to disrupt access to services by generating so much traffic that it exceeds capacity. DDoS protections are built into the Azure platform to help ensure attacks do not bring down our services. These protections continuously monitor traffic and use scrubbers and customer traffic profiling to detect and then deflect these attacks. Microsoft’s experience safeguarding some of the largest services on the Internet, such as Xbox and O365, gives us the ability to scale protection from attacks.

Microsoft isolates networks, ensures the confidentiality of data, and actively works to combat against DDoS attacks so that you can reallocate datacenter security resources into another area in your enterprise. 

2. Secure hardware and firmware

Security controls are integrated into the firmware and hardware of Azure to ensure it’s secure by default and stays secure throughout its lifetime.

Microsoft recently announced Project Cerberus to ensure the security of our firmware. Cerberus is a microcontroller, a chip made up of CPU, memory, and programmable input/output, that protects against unauthorized access and malicious updates. The microcontroller also makes it possible to secure the pre-boot, boot-time, and runtime integrity of the firmware. Our hardware has access to the boot environment before the OS loads to ensure malicious code is detected and stopped. Our firmware goes through regular code reviews. We monitor the security of the hardware and firmware to ensure that any threats are detected and mitigated before they can impact your business.

One of our most recent advancements in hardware is confidential computing, which uses Hyper-V and Intel SGX chip-enabled servers to segregate execution and data from the underlying operating system and operators. Azure can encrypt data in use, in transit, and at rest. Azure is the first cloud platform to support both software- and hardware-based Trusted Execution Environments (TEEs). A Trusted Execution Environment is a portion of memory on a server where customer data is stored; only authorized systems have access to it, preventing unauthorized administrators or processes from gaining access to this data.

3. Secure testing and monitoring

Microsoft has over 3,500 cybersecurity experts who work on your behalf 24x7x365. This number includes over 200 professionals who identify potential vulnerabilities through red and blue team exercises. The red team tries to compromise Azure’s infrastructure, and the blue team defends against attacks made by the red team. At the end of each red and blue team exercise, the team codifies what they’ve learned into the Azure operational security process, so the team becomes more effective at continuous detection and response.

Microsoft employs cybersecurity experts to protect your infrastructure, so your resources can be available for other business initiatives.

Azure Security

We’ve just discussed the ways that Microsoft can help secure Azure’s infrastructure. However, there are services for your network that you will still be responsible for setting up in Azure. For example, the same way you need to configure network access controls, load balancers, or network virtual appliances on-premises, you will need to do this in Azure. A service called Azure ExpressRoute helps you establish a secure connection from your on-premises environment to Azure.

In addition to taking advantage of the basic DDoS protections automatically enabled in the platform, you can use a new service, Azure DDoS Protection Standard, for further protection against layer 3-7 attacks. For example, it can protect against volumetric attacks like UDP floods, amplification floods, or attacks that target IPv4 and IPv6. Layer 3 and Layer 4 attacks are detected and are sent to the scrubbers. Scrubbers determine if the traffic is malicious or not and if it’s safe to travel through the network. At Layer 7, we can protect against attacks targeting HTTP and SQL protocols.

Microsoft’s scale of investments across infrastructure, hardware, and experts are unparalleled. Microsoft provides a secure infrastructure for our datacenters, composed of segregated networks, well-maintained hardware and firmware, and industry-leading operational security processes so that you can have more resources available to deliver business value.

To learn more about Azure security, watch our Azure Essentials video on Azure Security. Start building your infrastructure and applications on the secure foundation Microsoft provides today for free.

How I choose which services to use in Azure


This blog post was co-authored by Barry Luijbregts, Azure MVP.

Last year, I attended a Pluralsight webinar hosted by Azure MVP and Pluralsight author, Barry Luijbregts, called Keep your dev team productive with the right Azure service. It was a fantastic webinar and I really enjoyed learning Barry’s thought process on how he selects which Azure services and capabilities to use for his own projects, and when he consults for his clients. Recently, I asked Barry to share his process in this blog post and on an episode of Azure Friday with Scott Hanselman (included below).

Microsoft Azure is huge and changes fast! I’m impressed by the services and capabilities offered in Azure and by how quickly Microsoft releases new services and features. It can be overwhelming. There is so much out there — and the list continues to grow — that it is sometimes hard to know which services to use for a given scenario.

I create Azure solutions for my customers, and I have a method that I use to help me pick the right services. This method helps me narrow down the services to choose from and pick the right ones for my solution. It helps me decide how to implement high-level requirements such as “Running my application in Azure” or “Storing data for my application in Azure.” Of course, these are just examples. There are many other categories to address when I’m architecting an Azure solution.

A look at the process

Let me show you the process that I use for “Running my application in Azure.”

First, I try to answer several high-level questions, which in this case would be:

  1. How much control do I need?
  2. Where do I need my app to run?
  3. What usage model do I need?

Once I’ve answered these questions, I’ve narrowed down the services from which to choose. And then, I look deeper into the services to see which one best matches the requirements of my application, including functionality as well as availability, performance, and costs.

Let’s go through the first part of the process where I answer the high-level questions about the category.

Question 1: How much control do I need?

In considering how much control I need, I try to figure out the degree of control I need over the operating system, load balancers, infrastructure, application scaling, and so on. This decides the category of services that I will be selecting from.

On the control side of the spectrum is the infrastructure-as-a-service (IaaS) category, which includes services like Azure Virtual Machines and Azure Container Instances. These give a lot of control over the operating system and the infrastructure that runs your application. But with control comes great responsibility. For instance, if you control the operating system, you are responsible for updating it and making sure that it is secure.

Illustration showing the tradeoffs in going from IaaS, to PaaS, to LaaS, and SaaS.

Figure 1. How much control do I need?

Further up the stack are services that fall into the platform-as-a-service (PaaS) category, which contains services like Azure App Service Web Apps. In PaaS, you don’t have control over the operating system that your application runs on, nor are you responsible for it. You do have control over scaling your application and your application configuration (e.g., the version of .NET you want your application to run on).

The next abstraction level is what I will here call logic as a service (LaaS), also known as serverless. This category contains services like Azure Functions and Azure Logic Apps. Here, Azure takes care of the underlying infrastructure for you, including scaling your app. Logic as a service gives little control over the infrastructure that your application runs on, which means that you are only responsible for creating the application itself and configuring its application settings, like connection strings to databases.

The highest level of abstraction is software as a service (SaaS), which offers the least amount of control and the most amount of time that you can focus on working on business value. An example of Azure SaaS is Azure Cognitive Services, which are APIs that you just call from your application. Somebody else owns their application code and infrastructure; you just consume them. And all you manage is basic application configuration, like managing the API keys that you use to call the service.

Once I know how much control I need, I can pick the category of services in Azure and narrow down my choice.

Question 2: Where do I need my app to run?

The second question stemming from “Running my application in Azure” is: Where do I need my application to run?

Illustration contrasting running apps in Azure and somewhere else

Figure 2. Where do I need my app to run?

You might think that the answer would be: I need to run my application in Azure. But the answer may not be that simple. For example, maybe I do want parts of my application to run in the Azure public cloud but I want to run other parts in Azure Government or the Azure China cloud or even on-premises using Azure Stack.

Or it could be that I want to be able to run my application in Azure and on-premises (if rules and regulations change), on my local development computer, or even in public clouds from other vendors.

This question boils down to how vendor-agnostic I’d like to be and where to store my data.

Once I’ve answered this question, I can narrow down the choice of Azure services even further.

Question 3: What usage model do I need?

How my app will be used guides me to the answer to the third and final question: what usage model do I need?

Decision tree branching between apps used all the time (pay per month) and occasionally (pay per execution)

Figure 3. What usage model do I need?

Some applications are in use all the time, like a website. If that is the case for my application, I need to look for services that run on what I call the classic model. This means that they are always running and that you pay for them all month.

Other applications are only in use occasionally, like a background job that runs once every hour, or an API that handles order cancellations (called a few times a day). If my application runs occasionally, I need to select a service from the logic-as-a-service (or serverless) category. These services only run when you need them, and you only pay for them when they run.

After I’ve answered the high-level questions for this category, I can narrow down the services that I can choose from to just a couple or even one.

The next step: Match service functionality to my application requirements

Now, I need to figure out which service fulfills the most requirements for my application. I do so by looking at the functionality that I need, the availability that the service offers in its service level agreement, the performance that it provides, and what it costs.

Finally, this leads me to the service that I want to use to run my application. Often, I use multiple services to run my application, because it consists of many pieces, like a user interface and APIs. So, it might turn out that I run my web application in an Azure App Service Web App and my APIs in Azure Functions.

Other categories of services

We’ve only discussed one category of requirements: “Running my application in Azure.” There are many more, like “Securing my application in Azure,” “Doing data analytics in Azure,” and “Monitoring my application in Azure.” The method I have described will help you with all these categories.

In a recent Azure Friday episode, How I choose which services to use in Azure, I spoke with Scott Hanselman about my process for deciding which services to use in Azure. In it, we discuss how I choose services based on “Running my application in Azure” and on “Storing my data in Azure.”

I hope that my method helps you to sort through and choose the best services for your application from the vast array of what’s available in Azure. Please don’t hesitate to reach out to me through Twitter (@AzureBarry) if you have any questions or feedback.

StatCheck the Game


If you don't get enough joy from publishing scientific papers in your day job, or simply want to experience what it's like to be in a publish-or-perish environment where the P-value is the only important part of a paper, you might want to try StatCheck: the board game where the object is to publish two papers before any of your opponents.


As the game progresses, players combine "Test", "Statistic" and "P-value" cards to form the statistical test featured in the paper (and of course, significant tests are worth more than non-significant ones). Opponents may then have the opportunity to play a "StatCheck" card to challenge the validity of the test, which can then be verified using a companion R package or online Shiny application. Other modifier cards include "Bayes Factor" (which can be used to boost the value of your own papers, or diminish the value of an opponents'), "Post-Hoc Theory" (improving the value of already-published papers), and "Behind the Paywall" (making it more difficult to challenge the validity of your statistics).

StatCheck The Game was created by Sacha Epskamp and Adela Isvoranu, who provide all the code to create the cards as open source on GitHub, along with instructions to print and play with your own game materials. You can find everything you need (except the needed 8-sided die and some like-minded friends to play with) at the link below.

StatCheck: An open source game for methodological terrorists!

Top stories from the VSTS community – 2018.06.01

The big news that landed this week is that there’s a security vulnerability in Git that can be hidden inside a repository. Please update your Git clients — but in the meantime, hosting providers like GitHub and VSTS are actively blocking these malicious repositories for your protection.

Top Stories

Data Governance, DevOps, and Delivery – William...

Because it’s Friday: Buildings shake


In 1978, a 59-story skyscraper in New York City was at risk of collapse. An engineering flaw, serendipitously discovered by an architecture PhD candidate studying the Citigroup Center as a thesis project, meant the building was unexpectedly susceptible to winds — and Hurricane Ella was bearing down on the eastern seaboard. Meanwhile, 2500 Red Cross volunteers were on standby to execute a 10-block-radius evacuation plan should the building topple (and possibly cause a domino-like chain reaction), while engineers worked to reinforce the structural integrity of the building. And all of this happened in secret.

Citigroup Center
Credit: Joel Werner

A recent tweet from a former resident of the building reminded me of this remarkable story. To learn more about the unusual stilt-based design of the building and how the design flaw was discovered, check out the article from 99% Invisible or listen to the accompanying podcast. 

That's all from us for this week. Have a great weekend, and we'll be back with more next week.


Why you should bet on Azure for your infrastructure needs, today and in the future


I love all the amazing things our partners and customers are doing on Azure! Adobe, for example, is using a combination of infrastructure and platform services to deliver the Adobe Experience Manager globally. HP is using AI to help improve their customer experiences. Jet is using microservices and containers on IaaS VMs to deliver a unique ecommerce experience. These three customers are just a few examples where major businesses have bet on Azure, the most productive, hybrid, trusted and intelligent cloud.

For the last few years, Infrastructure-as-a-service (IaaS) has been the primary service hosting customer applications. Azure VMs are the easiest to migrate from on-premises while still enabling you to modernize your IT infrastructure, improve efficiency, enhance security, manage apps better and reduce costs. And I am proud that Azure continues to be recognized as a leader in this key area.

As you are considering your movement and deployment in the cloud, I want to share a few key reasons you should bet on Azure for your infrastructure needs.

Infrastructure for every workload

We are committed to providing the right infrastructure for every workload. Across our 50 Azure regions, you can pick from a broad array of virtual machines with varying CPU, GPU, memory, and disk configurations for your application needs. For HPC workloads that need extremely fast interconnect, we have InfiniBand, and for supercomputing workloads, we offer Cray hardware in Azure. For SAP HANA, we provide virtual machines with up to 4 TB of RAM and purpose-built bare metal infrastructure with up to 20 TB of RAM. Not only do we have some of the most high-powered infrastructure out there, but we also provide confidential computing capabilities for securing your data while in use. These latest computing capabilities even include quantum computing. Whether it’s Windows Server, Red Hat, Ubuntu, CoreOS, SQL, Postgres or Oracle, we support over 4,000 pre-built applications to run on this broad set of hardware. With the broad set of infrastructure we provide, enterprises such as Coats are moving their entire datacenter footprint to Azure.

Today, I am announcing some exciting new capabilities:

  • Industry leading M-series VM sizes offering memory up to 12 TB on a single VM, the largest memory for a single VM in the public cloud for in-memory workloads such as SAP HANA. With these larger VM configurations, we are not only advancing the limits of virtualization in the cloud but also the performance of SAP HANA on VMs. These new sizes will be based on Intel Xeon Scalable (Skylake) processors, with more details available in the coming months.
  • Newer M-series VM sizes with memory as low as 192 GB, extending M-series VM range from 192 GB to 4 TB in RAM, available now, enabling fast scale-up and scale-down with 10 different size choices. M-series VMs are certified for SAP HANA and available worldwide in 12 regions. Using Azure ARM template automation scripts for SAP HANA, you can deploy entire SAP HANA environments in just minutes compared to weeks on-premises.
  • New SAP HANA TDIv5 optimized configurations for SAP HANA availability on Azure Large Instances with memory sizes of 6 TB, 12 TB, and 18 TB. In addition to this, we now offer the industry-leading public cloud instance scale for SAP HANA with our new 24 TB TDIv5 configuration. This extends our purpose-built SAP HANA offering to 15 different instance choices. With these new configurations, you can benefit from a lower price for TDIv5 configurations with an unparalleled 99.99% SLA for SAP HANA infrastructure and the ability to step up to larger configurations.
  • New Standard SSDs provide a low-cost SSD-based Azure Disk solution, optimized for test and entry-level production workloads requiring consistent performance and high throughput. You will experience improved latency, reliability and scalability as compared to Standard HDDs. Standard SSDs can be easily upgraded to Premium SSDs for more demanding and latency-sensitive enterprise workloads. Standard SSDs come with the same industry leading durability and availability that our clients expect from Azure Disks.

Truly consistent hybrid capabilities

When I talk with many of our customers about their cloud strategy, there is a clear need for choice and flexibility on where to run workloads and applications. Like most customers, you want to be able to bridge your on-premises and cloud investments. From VPN and ExpressRoute to File Sync and Azure Security Center, Azure offers a variety of services that help you enable, connect and manage your on-premises and cloud environments creating a truly hybrid infrastructure. In addition, with Azure Stack, you can extend Azure services and capabilities to on-premises and the edge, allowing you to build, deploy and operate hybrid cloud applications seamlessly.

Today, I’m happy to announce that we are expanding the geographical coverage of Azure Stack to meet the growing demand of the customers globally. Azure Stack will now be available in 92 countries throughout the world. Given your excitement over Azure Stack, we continue to expand opportunities for you to deploy this unique service. For a full list of the supported countries, please visit the Azure Stack overview page.

Liquid Telecom, a leading data, voice and IP provider in eastern, central and southern Africa, plans to use Azure Stack to deliver value to its customers and partners in some of the most remote parts of the world.

“We have a long history of delivering future-focused and innovative services to our customers. Microsoft Azure Stack strengthens this mission while also enabling us to deliver value to a new set of customers and partners in some of the most remote parts of Africa. By using Azure Stack, alongside Azure and ExpressRoute over our award-winning pan-African fiber network, we can now guide our customers through increasingly complex business challenges, such as data privacy, compliance and overall governance in the cloud. This helps not only us, but our entire channel of distribution and value-added partners in enhancing the customer experience on their digital journey.” David Behr, Chief Product Officer, Liquid Telecom

Built-in security and management

I frequently get asked about best practices for security and management in Azure. We have a unique set of services that make it incredibly easy for you to follow best practices whether you are running a single VM or thousands of VMs, including built-in backup, policy, advisor, security detection, monitoring, log analytics, and patching. These services help you proactively protect your VMs and detect potential threats to your environment. They are built upon decades of experience at Microsoft in delivering services across Xbox, Office, Windows, and Azure, with thousands of security professionals and more than 70 compliance certifications. We have also taken a leadership position on topics such as privacy and compliance with standards such as the General Data Protection Regulation (GDPR), ISO 27001, HIPAA, and more. Last week we announced the general availability of Azure Policy, a free service to help you control and govern your Azure resources at scale.

Today, I’m excited to announce a few additional built-in security and management capabilities:

  • Disaster recovery for Azure IaaS virtual machines general availability: You likely need disaster recovery capabilities to ensure your applications are compliant with regulations that require a business continuity plan (such as ISO 27001). You also may need your application to run continuously in the unlikely event of a natural disaster that could impact an entire region. With the general availability of this new service, you can configure disaster recovery within minutes, not days or weeks, with a built-in disaster recovery as a service that is unique to Azure. Learn more about how to get started by visiting our documentation.

“ASR has helped Finastra refine our DR posture through intuitive configuration of replication between Azure regions. It’s currently our standard platform for disaster recovery and handling thousands of systems with no issues regarding scale or adherence to our tight RPO/RTO requirements.” Bryan Heymann, Director of Systems & Architecture, D+H

  • Azure Backup for SQL in Azure Virtual Machines preview: Today we are extending the Azure Backup capability beyond virtual machines and files to also include backup of a SQL instance running on a VM. This is a zero-infrastructure backup service that frees you from managing backup scripts, agents, backup servers, or even backup storage. Moreover, customers can perform SQL log backups at 15-minute intervals on SQL Servers and SQL Always On availability groups. Learn more about the key benefits of this capability and how to get started.
  • VM Run command: Customers can easily run scripts on an Azure VM directly from the Azure portal without having to connect to the machine. You can run either PowerShell or Bash scripts, and you can even troubleshoot a machine that has lost connection to the network. Learn more about Run Command for Windows and Linux. A rough sketch of invoking the same operation from code follows below.
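For illustration, here is a minimal Python sketch of the same operation through the azure-mgmt-compute SDK (the resource names and credentials are placeholders, and the exact method name can vary between SDK versions):

```python
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

# Placeholders - substitute your own service principal and subscription.
credentials = ServicePrincipalCredentials(client_id="...", secret="...", tenant="...")
client = ComputeManagementClient(credentials, subscription_id="...")

# Run a PowerShell script on a Windows VM without connecting to it.
# (Newer SDK versions expose this as begin_run_command.)
poller = client.virtual_machines.run_command(
    "myResourceGroup",
    "myWindowsVM",
    {
        "command_id": "RunPowerShellScript",
        "script": ["Get-Service | Where-Object {$_.Status -eq 'Running'}"],
    },
)
print(poller.result())  # script output once the long-running operation completes
```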

More ways to save money, manage costs and optimize infrastructure

Given the agility offered by cloud infrastructure, I know you not only want the freedom to deploy but also tight control over your costs. You want to optimize your spending as you transition to the cloud. We can help drive higher ROI by reducing and optimizing infrastructure costs.

Azure offers innovative products and services to help reduce costs, like low priority VMs, burstable VMs, vCPU-constrained VMs for Oracle and SQL databases, and archive storage so customers can choose the right cost optimized infrastructure option for their app. Azure also uniquely offers free Cost Management so customers can manage and optimize their overall budget better.

With Azure Reserved VM Instances (RIs), you can save up to 72 percent. By combining RIs with Azure Hybrid Benefit, you can save up to 80 percent on Windows Server virtual machines, and up to 73* percent compared to AWS RIs for Windows VMs – making Azure the most cost-effective cloud to run Windows Server workloads. Customers like Smithfield Foods have been able to slash datacenter costs significantly, reduce new-application delivery time and optimize their infrastructure spend.

I hope you enjoyed this overview of some of the coolest new capabilities and services in Azure. We are constantly working to improve the platform and make Azure a simpler and easier infrastructure service for you! Please let us know how we can make it even better.

Get started today with Azure IaaS. You can also register now for the Azure IaaS webcast I am hosting on June 18, 2018 covering many of these topics.


Thanks,

Corey

 

*Disclaimer:
  1. Sample annual cost comparison of two D2V3 Windows Server VMs. Savings based on two D2V3 VMs in the US West 2 region running 744 hours/month for 12 months; base compute rate at the SUSE Linux Enterprise rate for US West 2. Azure pricing as of April 24, 2018; AWS pricing as of April 24, 2018. Prices subject to change.
  2. The 80 percent savings figure is based on the combined cost of Azure Hybrid Benefit for Windows Server and a 3-year Azure Reserved Instance. It does not include Software Assurance costs.
  3. Actual savings may vary based on location, instance type, or usage.

(Preview) Standard SSD Disks for Azure Virtual machine workloads


We are excited to announce the preview of Azure Standard SSD Managed Disks, a new type of durable storage for Microsoft Azure Virtual Machines. Standard SSD Disks are a cost-effective storage option optimized for workloads that need consistent performance at lower IOPS levels. The new Azure Standard SSD Disks store data on solid-state drives (SSDs), like our existing Premium Storage Disks, whereas our Standard HDD Disks store data on hard disk drives (HDDs). With the addition of Standard SSD, Azure now offers three types of persistent disks for use with Azure Virtual Machines, optimized for different workload requirements: Premium SSD Disks, Standard SSD Disks, and Standard HDD Disks.

Following is a summary comparing Azure Disk types.

| Disk Type | Premium SSD | New Standard SSD | Standard HDD |
| --- | --- | --- | --- |
| Summary | Designed for IO-intensive enterprise workloads. Delivers consistent performance with low latency and high availability. | Designed to provide consistent performance for low-IOPS workloads. Delivers better availability and latency compared to HDD Disks. | Optimized for low-cost mass storage with infrequent access. Can exhibit some variability in performance. |
| Workload | Demanding enterprise workloads such as SQL Server, Oracle, Dynamics, Exchange Server, MySQL, Cassandra, MongoDB, SAP Business Suite, and other production workloads | Web servers, low-IOPS application servers, lightly used enterprise applications, and dev/test | Backup storage |
| Max IOPS | 7,500 IOPS provisioned | Up to 500 IOPS | Up to 500 IOPS |
| Max Throughput | 250 MBps provisioned | Up to 60 MBps | Up to 60 MBps |

Benefits of Standard SSD Disks

We designed Standard SSD Disks to improve the performance and reliability of Standard Disks. This new disk offering combines elements of Premium SSD Disks and Standard HDD Disks to form a cost-effective solution best suited for applications like web servers that do not need high IOPS on disks. Today many of these workloads use HDD-based disks to optimize cost. However, HDD disks are typically less performant and less reliable than SSD-based disks. In principle, all IaaS workloads should leverage SSD-based disks and experience the better performance, better reliability, and overall smoother operations that the technology enables. Standard SSD Disks are our answer to this, and the new disk type is uniquely designed to meet these workload requirements at optimal cost.

Virtual machines

Standard SSD Disks are designed to work with all Azure Virtual Machine SKUs. Whether you are using an A-series, N-series, D-series, or any other Azure VM series, you can use Standard SSD Disks with that VM. With the introduction of Standard SSD, we are enabling a broad range of workloads that previously used HDD-based disks to transition to SSD-based disks and experience the consistent performance, higher availability, and overall better experience that come with SSDs.

Standard SSD Managed Disk sizes

Standard SSD Disks are offered as a type of Managed Disks. Unmanaged Disks and page blobs are not supported on Standard SSD. The following disk sizes are offered during the preview.

| Standard SSD Disk Type | E10 | E15 | E20 | E30 | E40 | E50 |
| --- | --- | --- | --- | --- | --- | --- |
| Disk Size | 128 GiB | 256 GiB | 512 GiB | 1,024 GiB (1 TiB) | 2,048 GiB (2 TiB) | 4,095 GiB (4 TiB) |

Standard SSD Disks support all service management operations offered by Managed Disks. For example, you can create Managed Snapshots from Standard SSD Managed Disks in the same way you create snapshots for Managed Disks. Refer to the Managed Disks documentation for detailed instructions on all disk operations.

Performance

Standard SSD disks are designed to provide single-digit-millisecond latencies for most IO operations and to deliver up to 500 IOPS and 60 MBps of throughput for the disk sizes above, similar to HDD disks. Actual IOPS and throughput may vary depending on traffic patterns. Standard SSD disks provide more consistent performance than HDD disks, with lower latency.

Premium SSD disks, on the other hand, outperform Standard SSD disks, with very low latencies, high IOPS and throughput, and even better consistency through provisioned disk performance; they remain the recommended disk type for all other production workloads.

Like the existing Premium SSD Disks, Standard SSD Disks also use an I/O unit size of 256 KB. If the data being transferred is 256 KB or less, it is counted as one I/O unit. Larger transfers are counted as multiple I/O units of 256 KB each. For example, a 1,100 KB I/O is counted as five I/O units.
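To make the billing arithmetic concrete, here is a small illustrative Python helper (our own sketch, not an official tool) that maps a transfer size to billed I/O units:

```python
import math

IO_UNIT_KB = 256  # Azure Disks count I/O in 256 KB units

def billed_io_units(transfer_kb: float) -> int:
    """Transfers of up to 256 KB count as one I/O unit;
    larger transfers are billed as multiple 256 KB units."""
    if transfer_kb <= 0:
        return 0
    return math.ceil(transfer_kb / IO_UNIT_KB)

print(billed_io_units(100))   # 1 - anything up to 256 KB is one unit
print(billed_io_units(1100))  # 5 - matches the 1,100 KB example above
```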

Highly durable and available

Standard SSD disks are built on the same Azure Disks platform, which has consistently delivered high availability and durability for disks. Azure Disks are designed for 99.999 percent availability. Like all Managed Disks, Standard SSD disks offer Locally Redundant Storage (LRS). With LRS, the platform maintains multiple replicas of data for every disk and has consistently delivered enterprise-grade durability for IaaS disks, with an industry-leading ZERO percent annualized failure rate.

Pricing

Refer to the pricing details for the new Standard SSD Disks. As with Standard HDD Disks, billing is based on disk size and actual transactions (I/O units).

Getting started

You can create and manage Standard SSD disks in the same way as regular Managed Disks. Please use Azure Resource Manager (ARM) templates to deploy VMs with Standard SSD Disks. Below are the parameters needed in the ARM template for creating Standard SSD Disks (an illustrative fragment follows the list):

  • apiVersion for Microsoft.Compute/virtualMachines must be set as “2018-04-01” (or later)
  • Specify storageAccountType as “StandardSSD_LRS” for creating a Standard SSD Disk.
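Putting those two parameters together, the relevant storageProfile fragment of a virtual machine resource might look roughly like the following (shown here as a Python dict that mirrors the template JSON; the disk layout and sizes are illustrative):

```python
# Sketch of the storageProfile fragment for a Microsoft.Compute/virtualMachines
# resource with apiVersion "2018-04-01" or later. Names and sizes are illustrative.
storage_profile = {
    "osDisk": {
        "createOption": "FromImage",
        "managedDisk": {
            # The key setting: request the new Standard SSD disk type
            "storageAccountType": "StandardSSD_LRS",
        },
    },
    "dataDisks": [
        {
            "lun": 0,
            "createOption": "Empty",
            "diskSizeGB": 128,  # corresponds to the E10 size above
            "managedDisk": {"storageAccountType": "StandardSSD_LRS"},
        }
    ],
}
```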

In the coming weeks, we will enable portal support; API, PowerShell, and CLI tooling; and other services like Backup and ASR for Standard SSD disks. Refer to the Managed Disks documentation for detailed instructions on all disk operations.

The Standard SSD Disks preview is currently available in the following region:

  • North Europe

The following additional regions will support Standard SSD Disks by mid-June 2018:

  • France Central
  • East US 2
  • Central US
  • Canada Central
  • East Asia
  • Korea South
  • Australia East

We will enable additional regions in the coming weeks. Please refer to Disks FAQ document for the current list of regions supported for Standard SSD Preview.

Also refer to Azure Preview guidelines for general information on using preview features.

Use Azure Monitor to integrate with SIEM tools


Over the past two years since introducing Azure Monitor, we’ve made significant strides in terms of consolidating on a single logging pipeline for all Azure services. A majority of the top Azure services, including Azure Resource Manager and Azure Security Center, have onboarded to Azure Monitor and are producing relevant security logs.

We’ve also delivered key capabilities to simplify the integration process with security information and event management (SIEM) tools, such as routing data to a single event hub and enabling multiple diagnostic settings per resource, and have work in flight that will ease setup and management of log routing across large Azure environments.

Meanwhile, we’ve been partnering with the top SIEM partners to build connectors that get the data from Azure Monitor into those tools. These connectors consume data routed to Azure Event Hubs by Azure Monitor – a simple, scalable, and manageable approach for delivering log data to an external application, and Microsoft’s recommended approach for integrating Azure with SIEM tools going forward. Read more about how you can set up your Azure environment to send data to these SIEM tools.
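To make the recommended pattern concrete, here is a minimal Python sketch of the consuming side using a recent version of the azure-eventhub package (the connection string and hub name are placeholders; in practice your SIEM connector performs this step for you):

```python
from azure.eventhub import EventHubConsumerClient

# Placeholders - use your own Event Hubs namespace connection string and hub name.
CONN_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."
EVENTHUB_NAME = "insights-operational-logs"  # hub your diagnostic settings route to

def on_event(partition_context, event):
    # Each event body is a JSON envelope containing a "records" array of log entries.
    print(event.body_as_str())
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME)

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = read from start
```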

We’ve also continued to support customers who are using the Azure Log Integration tool (AzLog) to integrate with these same SIEMs. AzLog was initially released to help customers navigate the complex process of consolidating, translating, and forwarding logs from a variety of Azure services to a SIEM tool. At the time, Azure Monitor didn’t exist and there was very little standardization in terms of how Azure services exposed log data to customers (some dumped data into a storage account, others exposed an API, etc).

We’ve come a long way since then, and today we’re announcing that no further capabilities will be added to the Azure Log Integration tool and that support for it will end on June 1, 2019. Our recommendation for integrating Azure with popular SIEM tools is below.

Integration recommendations

The table below indicates what you should do based on the SIEM tool(s) you are using and your current integration status. Only SIEM tools that were officially supported by AzLog are listed below.

| SIEM Tool | Currently using log integrator | Currently investigating SIEM integration options |
| --- | --- | --- |
| Splunk | Begin migrating to the Azure Monitor Add-On for Splunk. | Use the Azure Monitor Add-On for Splunk. |
| IBM QRadar | Begin migrating to the Microsoft Azure DSM and Microsoft Azure Event Hub Protocol, available from the IBM support website. You can learn more about the integration with Azure here. | Use the Microsoft Azure DSM and Microsoft Azure Event Hub Protocol, available from the IBM support website. You can learn more about the integration with Azure. |
| ArcSight | AzLog collected Azure logs into JSON files for integration with ArcSight using existing JSON connectors from ArcSight, with a JSON-to-CEF mapping available only for Azure Activity Logs and not for the other types of Azure logs. The ArcSight team is working on a new comprehensive solution, planned for a first release with limited coverage in the October 2018 timeframe; contact ArcSight for more details. If you are already using the Azure Log Integration tool, you should plan to adopt the ArcSight connector for Azure when it is available. | |

While they were not supported by the AzLog tool, we also recommend looking into some of our other partners that offer Azure Monitor event hub integration, including the ELK stack and Sumo Logic.

Integration roadmap

Today, Azure Monitor’s SIEM integration capabilities can’t do everything the Azure Log Integration tool could do. Below is our roadmap for addressing known gaps between what you could accomplish with Azure Log Integration and what you can accomplish with Azure Monitor.

  • Azure Active Directory logs – Azure Active Directory logs are the only log type directly integrated with AzLog that aren’t yet available in Azure Monitor. Public preview of Azure Active Directory logs in Azure Monitor is expected to begin by July 2018.
  • Integrate Azure VM logs – AzLog provided the option to integrate your Azure VM guest operating system logs (e.g., Windows Security Events) with select SIEMs. Azure Monitor has agents available for Linux and Windows that are capable of routing OS logs to an event hub, but end-to-end integration with SIEMs is nontrivial. We tentatively plan to deliver improved support for routing OS logs to event hubs by the end of 2018 and we’re working with partners to develop a plan for their connectors to consume these logs. For now, our recommendation is that you use the VM log agent or log forwarder provided by your SIEM.
  • End-to-end setup – AzLog has a script that automates the end-to-end setup of log sources. While Azure Monitor offers the ability to script out creation of diagnostic settings (a sketch follows this list), we’re partnering with the Azure Policy team to deliver seamless enablement via Resource Manager policies that ensure log data is being routed from all sources. You will begin to see built-in policies that support these scenarios over the next two months, with support for custom policies expected by late 2018.
  • Integration with other SIEM tools – AzLog provided a generic capability to push standardized Azure logs in JSON format to disk. While other SIEM tools weren’t officially supported by AzLog, this offered a way to easily get log data into tools such as LogRhythm. Our recommendation for customers using AzLog for these tools is to work with the producer of that tool to provide an Azure Monitor Event Hubs integration.
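As a reference point for the scripting mentioned above, a diagnostic setting that routes a resource’s logs to an event hub can be created with the azure-mgmt-monitor Python package; the following is a rough sketch (all IDs are placeholders, and model and parameter names can differ between SDK versions):

```python
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.monitor import MonitorManagementClient

# Placeholders - substitute your own service principal, subscription, and resources.
credentials = ServicePrincipalCredentials(client_id="...", secret="...", tenant="...")
client = MonitorManagementClient(credentials, subscription_id="...")

resource_uri = ("/subscriptions/<sub-id>/resourceGroups/myRG/providers/"
                "Microsoft.KeyVault/vaults/myVault")

client.diagnostic_settings.create_or_update(
    resource_uri,
    {
        # Route this resource's logs to an event hub for SIEM consumption.
        "event_hub_authorization_rule_id": (
            "/subscriptions/<sub-id>/resourceGroups/myRG/providers/"
            "Microsoft.EventHub/namespaces/myNS/authorizationRules/RootManageSharedAccessKey"),
        "event_hub_name": "siem-logs",
        "logs": [{"category": "AuditEvent", "enabled": True}],
    },
    "send-to-siem",  # name of the diagnostic setting
)
```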

The security of your Azure environment is always a top priority for the Azure team, both in terms of how we engineer the Azure platform and in terms of the capabilities we provide for securing your own assets on that platform. Moving SIEM integration to Azure Monitor is a step towards enabling you to manageably secure your applications on Azure at scale. If you have any questions or concerns, reach out to AZSIEMTeam@microsoft.com.

Empowering developers to ship iOS apps that scale


Microsoft believes any developer should be able to build, deploy, and scale their apps seamlessly on our trusted Azure cloud without having to manage services or infrastructure. Whether you are an Objective-C or Swift developer, Azure has what you need to ship your apps faster and with more confidence. Scaling your apps from zero to millions of users is easy with Azure’s powerful Mobile Back-end as a Service (MBaaS). Azure also offers Artificial Intelligence services that you can easily integrate into your apps, so you’re not only building apps—you’re building intelligent apps.

Last year, we announced the general availability of Visual Studio App Center, a continuous integration, delivery, and monitoring service that lets you easily connect your repo and, within minutes, build your iOS app in the cloud, test it on thousands of real iOS devices, distribute it to beta testers and the App Store, and monitor real-world usage with crash and analytics data.

As part of our mission to simplify the process of distributing and deploying your apps to users, we’re excited to release the App Center auto-provisioning feature, so you can spend your time focusing on creating great apps, not on iOS provisioning. For more about auto-provisioning, see our post on the App Center blog.

If you’re an iOS game developer, PlayFab, recently acquired by Microsoft, gives you a complete back-end platform for iOS games with real-time analytics, player management, leaderboards, messaging, commerce, content, and LiveOps. PlayFab powers live games ranging from small indies to games with over 10 million monthly active players. Whether you're building the next great collectible card game, multiplayer shooter, VR experience, or addictive Facebook game, you're in great company.

Speaking of being in great company, if you’re in San Jose, California this week, join us! Apple’s annual Worldwide Developers Conference (WWDC) is happening and for the fifth year in a row, we’ll be in town talking about how we can help you build intelligent iOS apps that scale.

AltConf

Microsoft is a Gold Sponsor of this free, community-driven event that happens in parallel with Apple's WWDC. AltConf tickets are available now. We hope you'll come see us at our booth and attend one of our sessions:

Wednesday, June 6, 2018 | Sessions | Talks Room 1

Wednesday, June 6, 2018 | Sessions | Talks Room 2

Podcasts, podcasts, podcasts!

This year, we are pleased to support some great podcasts for iOS developers. Although tickets for all three live recordings are sold out, follow the links to download the episodes once they are posted.

Learn more about Azure for iOS services

Learn more about Azure MBaaS, data, AI, location APIs, gaming LiveOps, and mobile continuous integration and continuous delivery (CI/CD) services. If you are at AltConf, stop by booth C and say hello in San Jose!

Learn more about Azure services for iOS.

Regenerative Maps alive on the Edge


This week Mapbox announced it will integrate its Vision SDK with the Microsoft Azure IoT platform, enabling developers to build innovative applications and solutions for smart cities, the automotive industry, public safety, and more. This is an important moment in the evolution of map creation. The Mapbox Vision SDK provides artificial intelligence (AI) capabilities for identifying objects through semantic segmentation – a machine learning technique using computer vision that classifies what things are through a camera lens. Semantic segmentation on the edge for maps means objects such as stop signs, crosswalks, speed limit signs, people, bicycles, and other moving objects can be identified at run time through a camera running AI under the covers. These classifications are largely referred to as HD (high-definition) maps.

HD maps are more machine-friendly as an input to autonomous vehicles. Once the HD map objects are classified, and because other sensors like GPS and accelerometers are onboard, the location of these objects can be registered and placed onto a map or, in the advancement of “living maps,” registered into the map at run time. This is an important concept and where edge computing intersects with location to streamline the digitization of our world. Edge computing, like what will be powered by Azure IoT Edge and the Mapbox Vision SDK, will be an enabler of map data generation in real time. What will be critical is to allow these HD maps to be (1) created through semantic segmentation; (2) integrated into the onboard map; (3) validated with a map using CRUD (create, read, update, delete) operations and then distributed back to devices for use. Microsoft’s Azure Machine Learning services can enable the training of these AI modules to support the creation of additional segmentation capabilities, not to mention the additional Azure services that provide deeper analysis of the data.

Azure maps

The advent and adoption of HD maps leads us to a different challenge – infrastructure. With 5G networks on the way, we’re seeing autonomous driving test vehicles create upwards of 5 TB of data each per day. At scale, with the required millions of miles of training, not to mention the number of vehicles, we estimate data rates in the tens of petabytes per day. Even a 5G network won’t be able to support that amount of data transfer, and it certainly won’t be cheap. As such, edge computing and artificial intelligence will be some of the leading technologies for powering autonomous vehicles, the Location of Things, and smart city technologies.

A natural place to start connecting digital dots is location. Microsoft’s collaboration with Mapbox illustrates our commitment to cloud and edge computing in conjunction with the proliferation of Artificial Intelligence. The Mapbox Vision SDK, plus Azure IoT Hub and Azure IoT Edge, simplifies the flow of data from the edge to the Azure cloud and back out to devices. Imagine hundreds of millions of connected devices all making and sharing real time maps for the empowerment of the customer. This is a map regenerating itself. This is a live map.

Microsoft Azure is excited to collaborate with Mapbox on this because we understand the importance of geospatial capabilities for the Location of Things, as demonstrated in our own continued investment in Azure Maps. Azure Maps, as a service on the Azure IoT platform, is bringing the Location of Things to Microsoft enterprise customers and developers through a number of location-based services and mapping capabilities, with more to come.

Microsoft Azure Stack expands availability to 92 countries


Azure Stack geographical availability

Earlier today, we announced some exciting updates across Azure IaaS, including that we will expand the footprint of Microsoft Azure Stack to a total of 92 countries. In my first week as the new Azure Stack marketing leader, I am incredibly proud of the great work that has been done by the Azure Stack team and our hardware partners to make this happen.

I am very excited about this investment and effort to meet our growing customer interest and demand for a true hybrid cloud. We are seeing amazing use cases envisioned by our customers, and solutions built by our partners that leverage Azure and Azure Stack together, such as Intelligent Edge scenarios, DevOps, or solutions that meet policy requirements.

At Ignite last year, we launched Azure Stack in 46 countries, and given customer demand since then, we worked with our hardware partners to double the number of countries Azure Stack now supports.

Since launch, we have heard some great stories from our customers and partners on how they are using Azure Stack, and the value they see Azure and Azure Stack bringing to their business and IT environments in truly distributed and hybrid scenarios. For example, Azure Stack empowers Romania’s BinBox, a leading telecom service provider, to stand out in an extremely competitive market by expanding their services.

“The inclusion of Microsoft Azure Stack services into our portfolio enhances our value proposition in a number of ways, from DevOps tools, a truly hybrid cloud offering, access for customers to Azure services like Business Intelligence and AI, to a fully managed service for any customer who wants it. Microsoft Azure Stack will bring us customers who wanted to exploit public cloud but were holding back due to data location concerns. In fact, our pipeline already includes about 60 customers we couldn’t have targeted pre-Azure Stack,” said Tiberiu Croitoru, CEO of BinBox.

Additionally, we have been collaborating with our ecosystem partners to help customers navigate their business needs. Read on to hear from our partners about their excitement for Azure Stack.

“As a joint venture between Accenture and Microsoft, Avanade provides Azure Stack as a fully managed service, so customers can focus on their business. We are excited to work with Microsoft to expand our coverage to a broader part of the world,” said Rich Stern, Executive, Global Market Units and Cloud, Avanade.

“The Cisco Unified Computing System has a global footprint with over 60,000 worldwide customers and is a leader with service providers globally. Partnering with Microsoft, and leveraging leading Nexus-based networking capabilities, we have installed Azure Stack in a wide range of customer locations in EMEA (for example Binbox), North America and Asia-Pacific. We are delighted that the Cisco Integrated System for Microsoft Azure Stack is now available for an expanded audience who need the benefits of hybrid cloud solutions deployed with Microsoft Azure Stack,” said Mary Perisic, Global Alliance Manager, Cisco Systems, Inc.

“Enterprises across the world are demanding a complete hybrid solution to meet their growing and ever-evolving business needs. With the latest HPE ProLiant for Microsoft Azure Stack (Gen 10) solution, we enable customers across the globe to simplify IT implementation and reap the benefits associated with cloud operating models, delivered on-premises,” said McLeod Glass, vice president and general manager, Software-Defined and Hyperconverged Solutions, HPE.

“At Lenovo, we are committed to solutions that focus on performance. As Microsoft continues to expand the availability of Azure Stack across the globe, we will continue to support this with our unique, industry-leading ThinkAgile SX products and services in many of those regions,” said Per Overgaard, Executive Director, Business Unit, Lenovo Data Center Group, EMEA. “Customers will experience the simplicity of Azure Public Cloud and the reliability of Lenovo's ThinkAgile SX for Microsoft Azure Stack.”

A few weeks ago, Natalia Mackevicius shared a recap of what we announced at Microsoft Build 2018 for Azure Stack developer features and the innovation we have delivered over the last six months.

Looking ahead, I am super excited about the opportunity we have with Azure Stack and how it complements Azure by bringing new hybrid cloud scenarios to our customers, and empowering them to realize business value much faster than before.

Finally, for our partners, we hope to see you at Microsoft Inspire 2018 where we will share how our ecosystem partners can take advantage of the Microsoft Hybrid Cloud Opportunity!

For a full list of countries where Azure Stack is available, please visit the Azure Stack webpage. To learn more, see a list of countries supported by our partners, and click on your preferred partner’s logo.

Speech services now in preview


This blog post was authored by the Microsoft Speech Services team.

At Microsoft Build 2018, the Microsoft Speech Services team announced the following new and improved products and services.

  • Speech service as a preview, including Speech to Text with custom speech, Text to Speech with custom voice, and Speech Translation.
  • Speech SDK as a preview, which will replace the old Bing Speech APIs when generally available in fall 2018. It will be the single SDK for most of our speech services and will require only one Azure subscription key for speech recognition and LUIS (Language Understanding service). With simplified APIs, the Speech SDK makes development easy for new and experienced speech developers (see the sketch below).
  • Speech Devices SDK, as a restricted preview, which provides an advanced multi-microphone array audio-processing algorithm fine-tuned to the back-end Speech Services, works great on Roobo’s dev kits for exceptional speech experiences, and offers the ability to customize the wake word to strengthen your brand.
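Once the preview Speech SDK is installed, a one-shot speech-to-text call looks roughly like this minimal Python sketch (the key and region are placeholders, and the preview API surface may change):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders - supply your own Speech service subscription key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="westus")

# One-shot recognition from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("No speech recognized:", result.reason)
```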

To learn more, please read the ZDNet article highlighting these products and services.

We also demonstrated our speech recognition capabilities in Satya Nadella’s vision keynote at Microsoft Build 2018. You can skip to the 1:22:40 mark if you want to jump to this demonstration. You can see Speaker Identification, multiple simultaneous recognition, transcription, and translation, among other awesome AI features.

Build Keynote

If you want to build a device like the prototype used in that demo, check out the Speech Devices SDK. It also uses multi-microphone array hardware and advanced algorithms for audio processing.

Azure AI-enabled edge devices

You can find video recordings of our Speech related Build 2018 presentations below:

Want to try out Microsoft Speech services? You can try them out for free. To learn more and review sample code, please reference our documentation page.

Let us know if you have questions or feedback via Stack Overflow by using the tag microsoft-cognitive. You can also append specific area topic tags to the URL by adding “+[tagName]” like:

  • speech-recognition
  • speech-to-text
  • text-to-speech
  • microsoft-speech-api

Keep an eye on the Azure blog as we will continue to announce speech services news, updates to our SDKs, and new features.


VSTS and GitHub

Today, Satya announced the exciting news – our intent to acquire GitHub! GitHub and Microsoft have been partnering on several levels for years. Specifically, the VSTS team has worked closely with GitHub on Git at a technical level and on other open source projects such as libgit2, GVFS, and Git LFS. It’s been a great...

Azure.Source – Volume 34


Microsoft + GitHub = Empowering Developers - Before we can take a look at what's happened in Azure in the past week, be sure to check out this exciting news about an agreement for Microsoft to acquire GitHub. GitHub, home to more than 85 million code repositories, will retain its developer-first ethos, operate independently, and remain an open platform.

Now in preview

VNet service endpoints for Azure database services for MySQL and PostgreSQL in preview - You can use virtual network service endpoints to isolate connectivity to your logical server from only a subnet or a set of subnets within your virtual network. Traffic to Azure Database for MySQL and PostgreSQL from your virtual network always stays within the Azure backbone network, and this direct route is preferred over any specific routes that take internet traffic through virtual appliances or on-premises.

Architectural diagram: Using VNet service endpoints for subnet-to-subnet connectivity

Receiving and handling HTTP requests anywhere with the Azure Relay - Azure Relay is part of the messaging services family along with Azure Service Bus, Azure Event Hubs, and Azure Event Grid. It provides the "networking magic" that enables Visual Studio Live Share. Relay now also supports relayed HTTP requests in preview, which is very interesting for applications or application components that run inside containers, where it's difficult to provide a public endpoint, and is especially well suited for implementing webhooks that can be integrated with Azure Event Grid.

Azure Security Center can identify attacks targeting Azure App Service applications - At RSA, we announced that Azure Security Center leverages the scale of the cloud to identify attacks targeting App Service applications. Security Center threat detection analyzes App Service internal logs to identify attack methodology on multiple targets. A public preview of this feature is available now on the Standard and Trial tiers of Security Center at no additional cost.

Now generally available

Soft delete for Azure Storage Blobs generally available - Soft delete for Azure Storage blobs has entered general availability. The feature is available in all regions for public, government, and sovereign clouds. When you turn on soft delete, you can save and recover your data when blobs or blob snapshots are deleted. This protection extends to blob data that's erased as the result of an overwrite.

Tuesdays with Corey

Combining Azure scheduled events and Event Grid - Corey Sanders, Corporate VP - Microsoft Azure Compute team sat down with Ziv Rafalovich, Principal PM on the Azure Compute Team to talk about scheduled events (now GA) combined with the power of Event Grid for notifications.

Additional news and updates

The IoT Show

The IoT Show | COPA-DATA tells us about integrating IoT solutions in manufacturing and public sector - Bringing an IoT solution to production in a specific industry requires a deep understanding of both the industry in question and cloud and device technologies. In addition, a high degree of digitization of companies' processes is necessary. Johannes Petrowisch from our elite partner COPA-DATA came on the IoT Show to share their experience working with customers in manufacturing and the public sector, integrating Azure IoT technologies in combination with their industrial software platform zenon.

The IoT Show | Azure IoT Hub automatic device configuration - Device management is complex, especially so for Internet of Things solutions, considering the variety and heterogeneous nature of IoT devices. Building on top of the Device Twins and Device Direct Methods primitives, the new Automatic Device Configuration feature of IoT Hub paves the way toward a simple way to configure IoT devices at scale from the comfort of Azure. Chris Green, PM in the Azure IoT team, shows us the recently announced feature.

Technical content and training

Digging in with Azure IoT: Our interactive developer guide - The Azure IoT developer guide takes you through things, insights, and actions, mapping Azure IoT services to each level of IoT, and then provides a step-by-step plan for experimenting with live IoT capabilities using languages and tools of your choice. Read this guide to learn about the IoT Application pattern and a framework for thinking about IoT application architectures.

Make Azure IoT Hub C SDK work on tiny devices! - The Azure IoT Hub C SDK is written in ANSI C (C99), which makes it well-suited for a variety of platforms with a small disk and memory footprint. We recommend at least 64 KB of RAM, but the exact memory footprint depends on the protocol used, the number of connections opened, and the platform targeted. This blog post walks through how to optimize the Azure IoT Hub C SDK for constrained devices.

3 reasons why Azure’s infrastructure is secure - Appropriately enough, this is the third post in a four-part series on how Microsoft Azure provides a secure foundation. This blog post discusses the network infrastructure, firmware and hardware, and continuous testing and monitoring that make up Azure’s secure infrastructure. It also covers some of the security services you can use to further secure your network.

How I choose which services to use in Azure - In this blog post, Microsoft MVP Barry Luijbregts outlines his thought process on how he selects which Azure services and capabilities to use for his own projects, and when he consults for his clients. It also includes an episode of Azure Friday with Scott Hanselman, which provides a discussion on this topic.

AI Show

AI Show | Azure Custom Vision: How to Train and Identify Unique Designs or Image Content - Ruth shows how to use Azure Custom Vision to train a model to recognize a modern Mercedes-Benz car key, since the design does not look like a traditional key. She then shows how to call the generated REST API from the trained model in a Java application that displays tags and descriptions of uploaded images.

Events

Gain application insights for Big Data solutions using Unravel data on Azure HDInsight - This post announces Unravel on the Azure HDInsight Application Platform. Azure HDInsight is a fully managed, open-source big data analytics service for enterprises. Unravel provides comprehensive application performance management (APM) for these scenarios and more. Check out this post to register for a June 13 webinar on how to build fast and reliable big data apps on Azure while keeping cloud expenses within your budget.

Getting started with Apache Spark on Azure Databricks - Attend this on-demand webinar to learn how to get started with Apache Spark on Azure Databricks. Designed in collaboration with the original founders of Apache® Spark™, Azure Databricks combines the best of Databricks and Microsoft Azure to help customers accelerate innovation with streamlined workflows, an interactive workspace and one-click set up. Azure Databricks is an analytics engine built for large scale data processing that enables collaboration between data scientists, data engineers and business analysts.

The best of AppSource & Azure Marketplace at Build 2018 - This post highlights key sessions from Build 2018 that address the new features, functionality and services that we recently announced for Azure Marketplace.

Azure tips & tricks

Use the Azure Resource Explorer to quickly explore REST APIs

Testing Web Apps in Production with Azure App Service

The Azure Podcast

The Azure Podcast: Episode 231 - IaaS VM options - Cynthia, Cale and Sujit chat about the plethora of IaaS VMs available and things to watch for when selecting the right VM SKU/Size.

Developer spotlight

Kubernetes objects on Microsoft Azure - Consider this ebook a jumping off point for Kubernetes development projects on Azure. Mahesh Kshirsagar of the Azure Customer Advisory Team (AzureCAT) introduces Kubernetes objects for Azure deployments. This ebook demystifies Kubernetes by focusing on a real-life scenario in which a basic tiered application is deployed using pods and controllers. Mahesh walks you through the steps to deploy a simple application with a web front end running ASP.NET Core 1.0 and a back end with a SQL Server container running on Linux. Scripts and guidance are available in the accompanying GitHub repository.

Container Native Development with Ralph Squillace - Ralph Squillace is a principal program manager with Microsoft, where he works on containers, Linux, and cloud products. Ralph joins the Software Engineering Daily podcast to talk about how developing with containers has changed in the last few years, and how it will continue to evolve in the near future.

Docker extension for working with Docker in Visual Studio Code - Docker is a very popular container platform that lets you easily package, deploy, and consume applications and services. Whether you are a seasoned Docker developer or just getting started, Visual Studio Code makes it easy to author Dockerfile and docker-compose.yml files in your workspace. VS Code even supports generating and adding the appropriate Docker files based on your project type.

Tutorial: Deploy to Azure using Docker - This tutorial walks you through containerizing an existing Node.js application using Docker, pushing the app image to a Docker registry, then deploying the image to Azure Web App for Containers directly from Visual Studio Code.

Containerized Docker Application Lifecycle with Microsoft Platform and Tools - This guide provides end-to-end guidance on the Docker application development lifecycle with Microsoft tools and services while providing an introduction to Docker development concepts for readers who might be new to the Docker ecosystem. This enables anyone to understand the global picture and start planning development projects based on Docker and Microsoft technologies/cloud.

Azure Friday

Azure Friday | Episode 436 - Deploying Node.js Applications from VS Code to Kubernetes - Brendan Burns joins Donovan Brown to discuss how you can quickly and easily build a containerized Node.js app on Linux and deploy it to Azure Kubernetes Service using Visual Studio Code and the Visual Studio Code Kubernetes Extension.

A Cloud Guru: Azure this Week

Azure This Week - 1 June 2018 - In this episode of Azure This Week, James takes a look at the public preview of Azure AD authentication for Azure Storage accounts, Data-In Replication for Azure Database for MySQL and how you can start building a serverless, event-driven framework for responding to scheduled Azure maintenance.

Cesium: Fast and Consistent Bing Maps Tile Performance Powers 3D Map Apps Platform


Cesium, a platform for developers to build web-based 3D map apps, uses Bing Maps imagery to help power their services. Started in 2011, when WebGL was first released, as a lightweight web-based 3D virtual globe engine for the aerospace and defense market, Cesium is now used across many markets from autonomous driving to drones to augmented reality.

“Bing Maps has been Cesium’s default imagery service since our start in 2011.  Bing Maps tiles coverage, resolution, and performance serve intense 3D use cases very well,” said Patrick Cozzi, creator of Cesium and 3D Tiles.

“The Cesium team has a long history of collaborating with Microsoft, from Bing Maps to NORAD Tracks Santa to co-creating the open standard glTF 3D format as part of Khronos,” Cozzi continued.

Below is a Q&A with the Cesium team about why they have chosen to use Bing Maps in their solution:

What are you using Bing Maps for Enterprise for?

Bing Maps tiles are the default imagery service for Cesium.

Cesium is composed of the open-source CesiumJS JavaScript library for 3D rendering using WebGL, and the Cesium ion SaaS/enterprise cloud for streaming and tiling raw 3D content, such as terrain, point clouds, photogrammetry, and 3D buildings, into 3D Tiles.

Bing Maps looks great laid on top of Cesium World Terrain and is the base for other 3D content that our users load into Cesium, such as point clouds, photogrammetry, 3D buildings, vector data, BIM/CAD models, and 3D models.

Mount Barney, Australia with Bing Maps and Cesium World Terrain

Cesium with Bing Maps and 1.1 million styled buildings in NYC using 3D Tiles

Why did you choose Bing Maps and what are some of the benefits?

We love the global coverage and high-resolution imagery of Bing Maps. Global coverage is important because our users (and their users) are everywhere, from the United States to Australia: Texas Groundwater Well Levels Visualization, East Japan Earthquake Archive, and Eclipse Tracks worldwide solar eclipse tracking are all built with Cesium. High-resolution imagery is important because our users work at various scales, from visualization of all the satellites in space to autonomous driving to interior BIM models.

OneSky: UAS Traffic Management (UTM) built with Cesium and Bing Maps

We also value the fast and consistent performance of Bing Maps, served via a CDN and with multiple subdomains to allow concurrent requests. Since users can tilt the 3D view (compared to a top-down 2D map), the demand for high-performance tile serving is much higher as more tiles are loaded for horizon views, a technique called Hierarchical Level of Detail.
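For readers curious how those tile requests are addressed, the Bing Maps Tile System identifies each tile with a quadkey that interleaves the tile's X and Y coordinates, one digit per zoom level. The following Python sketch implements the publicly documented conversion:

```python
def tile_to_quadkey(tile_x: int, tile_y: int, level: int) -> str:
    """Convert tile XY coordinates at a given zoom level into a Bing Maps quadkey."""
    digits = []
    for i in range(level, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if tile_x & mask:
            digit += 1
        if tile_y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)

# Example from the Bing Maps Tile System documentation:
# tile (3, 5) at level 3 has quadkey "213".
print(tile_to_quadkey(3, 5, 3))
```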

Cesium with Bing Maps and textured 3D buildings in Berlin using 3D Tiles

Drone racing game built with Cesium and Bing Maps

Bing Maps also brings highly reliable uptime. For example, since 2012 we have partnered with Microsoft and NORAD on NORAD Tracks Santa, which reaches more than 20 million unique viewers on Christmas Eve.  Bing Maps performance and reliability are fantastic.

We also love the well-documented public API of Bing Maps, and we use both imagery and street maps, with and without labels.

For more information on Cesium, visit https://cesium.com/. For more about the Bing Maps for Enterprise solutions, go to https://www.microsoft.com/maps.

- Bing Maps Team

Use Boost.Hana via vcpkg with the latest MSVC compiler


Overview

As we continue to work towards improving the conformance of the MSVC compiler for the C++ community, we would like to enable more C++ libraries, and today we are bringing Boost.Hana to Visual C++. Building on our recent C++ conformance progress, customers can now use Boost.Hana with the VS2017 15.7 update, after we applied some source workarounds in the vcpkg version. We want to thank the author of Boost.Hana, Louis Dionne, for working with us on this effort and for extending his support.

How do I get it?

  • Go to the vcpkg repo on GitHub and follow the instructions in the README.md here
  • We created a new fork of Boost.Hana and redirected vcpkg to that fork
  • After you’ve built vcpkg, run this command to install Boost.Hana:
    • vcpkg.exe install boost-hana
    • You will see this disclaimer during the installation of Boost on Windows: “The current MSVC releases can’t compile boost hana yet. Use a fork from boost 1.67 which has source workarounds instead.”

Background

You may have seen our VCBlog post about C++ conformance completion for our compiler with the recent VS2017 update. As of early June 2018, MSVC cannot build the master branch of Boost.Hana due to several blocking bugs. We started working on that two years ago and have since fixed around 40 compiler bugs exposed by the library. We hit some blocking issues in constexpr, so the effort was halted for a while until we made sufficient progress recently in C++14 and C++17 constexpr conformance with the VS2017 15.7 release.

We have revisited the status recently and most of the blocking constexpr issues are fixed in the VS2017 15.7 update. There are still outstanding issues in multiple feature areas within the compiler that prevent us from building Boost.Hana.

While work remains to be done in the compiler, because of the heavy demand for this library from our customers, we have made source workarounds in Boost.Hana for the remaining compiler bugs. This was needed so we can have complete test coverage of all the known issues. We now build it as part of our daily testing repertoire to keep compiler development regression-free as we progress toward one-to-one parity with the public library sources.

After discussing this issue with Louis, we’ve jointly agreed to provide a version of Boost.Hana in vcpkg to promote usage of the library among more C++ users from the Visual C++ community. This includes the patches we previously identified, and as we fix the remaining bugs, we’ll gradually update the version of Boost.Hana in vcpkg, ultimately removing it and replacing it with master as the bugs are fixed. We think we’re close enough with our efforts that we can conduct this progress publicly in vcpkg without hindering new users who take a dependency on the library.

Again, a huge THANKS to Louis for being willing to take bug reports on Hana and quickly resolving them, which greatly sped up our progress.

Source workarounds in place

  • MSVC-specific source workarounds appear in 70 places: 27 in the library itself, 20 in the tests, and 23 in the examples.
  • These are all under unique macro definitions that are prefixed with “BOOST_HANA_WORKAROUND_MSVC_” and postfixed with a specific bug ID number from our internal database for each issue. Full details can be found here.

Compiler bugs

  • There are 25 active bugs with the VS2017 15.7 update.
  • We plan to fix them all by the VS2017 15.9 update.

What’s next…

  • Throughout the remaining updates of Visual Studio 2017, we will exhaust the remaining MSVC bugs that block upstream versions of the Boost.Hana library. As we fix bugs, we will gradually update the corresponding source workarounds.
  • We will continue to provide status updates on our progress.
  • We will ensure that users who take a dependency on this library in vcpkg will not be affected by our work.
  • What about Range-v3?
    • Similarly, we are tracking all Range-v3 blocking bugs in the compiler and have plans to fix them in the remaining Visual Studio 2017 updates.

 

Securing the modern workplace with enhanced threat protection services in Office 365


Today’s post was written by Rudra Mitra, director for Office 365 Information Protection Engineering.

The built-in suite of powerful threat protection services for Office 365—including Exchange Online Protection (EOP), Advanced Threat Protection (ATP), and Threat Intelligence—is a paramount requirement for customers choosing Office 365 to help drive their digital transformation. With InfoSecurity Europe kicking off today, we are providing an update on how these security workloads provide enhanced protection for our customers and meet the strict data residency, compliance, and privacy requirements of Office 365.

The foundational elements of Office 365 threat protection include:

  • Protecting users from threats.
  • Detecting threats.
  • Remediating threats.
  • Raising awareness and enabling education of threats.

Overview of enhanced capabilities for Office 365 threat protection.

Here’s a summary of the enhancements we made to these elements:

Protection enhancements

Phishing has many forms, ranging from “royalty” offering rewards to more sophisticated and targeted campaigns. Cybercriminals continue to find other attack methods, manifesting in a rise of phishing campaigns across the industry landscape, including in Office 365. We have a >99.9 percent malware catch rate, and to better protect customers against phishing, we made the following enhancements in Office 365:

The new anti-phishing technology made available to our customers is leveraged by Microsoft Core Services Engineering and Operations (CSEO). As one of the world’s largest enterprises, Microsoft faces concerns and security requirements similar to our customers. With the enhanced protection capabilities, our customers should have even greater confidence with Office 365 threat protection.

Different types of phishing attacks seen in the threat landscape.

Detection enhancements enriching the user and admin experience

For users to be educated on flagging suspicious links coming through email, they must have visibility into where destination URLs point. Now, Office 365 offers the Native Link Rendering feature, which enables Office 365 ATP customers to view the destination of URLs and make more informed decisions when clicking links. This feature is currently available for Outlook Web App and is coming to the Outlook client later this year.

In addition, admins who enable the Report Message button in their Outlook client can empower users with the ability to report a suspicious email as potential phishing, sending the email directly to Microsoft for further analysis.

Example of the Native Link Rendering and Report Message features in Outlook Web App.

The enhanced Phish view in the Security & Compliance Center dashboard offers admins new reporting capabilities that provide broader visibility, control, and finer detail on the threats impacting their organization. The Phish view offers ATP admins enriched phishing, malware, and user submission (the emails reported by users) details. In the coming months, we will add more details to the Phish view, exposing the category of phishing email that is blocked. Additionally, new reporting views are also launching for EOP customers.

Enhanced Phish view in the enriched admin capabilities for Office 365 ATP.

Threat remediation, awareness, and education

Since we first announced Office 365 Threat Intelligence, we have added powerful threat remediation features to the service. Admins can proactively search for suspicious emails and delete them before any adverse impact to the organization. Additionally, threat trackers are available, enabling admins to perform investigations on potential threats to the tenant.

Remediation actions available for proactive deletion of suspicious emails.

Threat Intelligence also features the Attack Simulator to help raise awareness of, and provide education on, modern threats. Using the Attack Simulator, admins can launch realistic threats at specific users. This allows them to update policies and protection based on actionable insights from the simulations. Our recent webcast provides a detailed update on the impressive set of capabilities offered in Office 365 Threat Intelligence.

Attack simulator dashboard.

Secure your digital transformation

Security for the modern workplace is fundamental to our customers’ digital transformation journey. Start an Office 365 E5 trial to experience holistic protection of the modern workplace.

Please send us your in-product feedback so we can continue making enhancements and provide the most advanced services to protect your Office 365 environment.

