Posted almost 11 years ago
This week we released a major set of updates to Microsoft Azure. This week's updates include:

- SQL Databases: General Availability of Azure SQL Database Service Tiers
- API Management: General Availability of our API Management Service
- Media Services: Live Streaming, Content Protection, Faster and Cost Effective Encoding, and Media Indexer
- Web Sites: Virtual Network integration, new scalable CMS with WordPress and updates to Web Site Backup in the Preview Portal
- Role-based Access Control: Preview release of role-based access control for Azure Management operations
- Alerting: General Availability of Azure Alerting and new alerts on events

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

SQL Databases: General Availability of Azure SQL Database Service Tiers

I'm happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium. The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate and stand up and run SQL databases without having to manage or operate VMs or infrastructure.

Today's SQL Database Service Tiers all come with a 99.99% SLA, and databases can now grow up to 500 GB in size. Each SQL Database tier now guarantees a consistent performance level that you can depend on within your applications - avoiding the need to worry about "noisy neighbors" who might impact your performance from time to time.

Built-in point-in-time restore support now provides you with the ability to automatically re-create databases at a certain point in time (giving you much more backup flexibility and allowing you to restore to exactly the point before you accidentally did something bad to your data). Built-in auditing support enables you to gain insight into events and changes that occur with the databases you host.
Built-in active geo-replication support, available with the Premium tier, enables you to create up to 4 readable secondary databases in any Azure region. When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well.

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level. Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster). The new active geo-replication support enables you to initiate/control any failovers - allowing you to shift the primary database to any of your secondary regions. This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.

More Flexible Pricing

SQL Databases are now billed on a per-hour basis - allowing you to quickly create and tear down databases, and dynamically scale databases up or down even more cost effectively.

- Basic Tier databases support databases up to 2 GB in size and cost $4.99 for a full month of use.
- Standard Tier databases support 250 GB databases and now start at $15/month (there are also higher performance standard tiers at $30/month and $75/month).
- Premium Tier databases support 500 GB databases as well as the active geo-replication feature and now start at $465/month.

This page provides more details on how to think about DTU performance with each of the above tiers, and provides benchmark details on the number of transactions supported by each of the above service tiers and performance levels.
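As a rough illustration of what per-hour billing means for short-lived databases, here is a small sketch. The monthly prices are the ones quoted above; the hours-per-month divisor is an assumption made for the sketch, not Azure's published metering rule:

```python
# Hedged sketch of per-hour proration for the SQL Database tiers.
# Monthly prices come from the post; the 744-hour divisor (a 31-day
# month) is an assumption for illustration, not Azure's exact meter.
MONTHLY_PRICE = {"Basic": 4.99, "Standard": 15.00, "Premium": 465.00}
HOURS_PER_MONTH = 744  # assumed 31-day month

def estimated_cost(tier: str, hours: float) -> float:
    """Estimate the cost of running a database for `hours` hours."""
    hourly = MONTHLY_PRICE[tier] / HOURS_PER_MONTH
    return round(hourly * hours, 2)

# A Basic database created for a two-day test run and then torn down:
print(estimated_cost("Basic", 48))    # 0.32
print(estimated_cost("Premium", 744)) # 465.0 (a full month)
```

The point of the model is the second call: a database kept for the whole month costs the quoted monthly price, while one torn down after two days costs cents.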
During the preview, we've heard from some ISVs, which have a large number of databases with variable performance demands, that they need the flexibility to share DTU performance resources across multiple databases as opposed to managing tiers for databases individually. For example, some SaaS ISVs may have a separate SQL database for each customer, and as the activity of each database varies, they want to manage a pool of resources with a defined budget across these customer databases. We are working to enable this scenario within the new service tiers in a future service update. If you are an ISV with a similar scenario, please click here to sign up to learn more.

Learn more about SQL Databases on Azure here.

API Management Service: General Availability Release

I'm excited to announce the General Availability of the Azure API Management Service. In my last post I discussed how API Management enables customers to securely publish APIs to developers and accelerate partner adoption. These APIs can be used from mobile and client applications (on any device) as well as other cloud and service based applications.

The API Management service supports the ability to take any APIs you already have (either in the cloud or on-premises) and publish them for others to use. The API Management service enables you to:

- Throttle, rate limit and quota your APIs
- Gain analytic insights on how your APIs are being used and by whom
- Secure your APIs using OAuth or key-based access
- Track the health of your APIs and quickly identify errors
- Easily expose a developer portal for your APIs that provides documentation and test experiences to developers who want to use your APIs

Today's General Availability provides a formal SLA for Standard tier services. We also have a Developer tier of the service that you can use, starting at just $49 per month.
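Throttling and rate limiting of the kind listed above are commonly implemented with a token-bucket scheme. The sketch below is a generic illustration of that idea only - it is not how the API Management service's policies are actually implemented:

```python
import time

class TokenBucket:
    """Hedged sketch of token-bucket rate limiting: each API key gets
    `capacity` tokens, tokens refill at `refill_rate` per second, and
    each request consumes one. Illustrative only - not Azure API
    Management's actual implementation."""

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a gateway would answer HTTP 429 here

bucket = TokenBucket(capacity=5, refill_rate=1.0)
results = [bucket.allow() for _ in range(6)]
print(results)  # first 5 requests allowed, the 6th throttled
```

A quota is the same idea with a much larger window (e.g. requests per month), and both can be layered per subscription key.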
OAuth support in the Developer Portal

The API Management service provides a developer console that enables a great on-boarding and interactive learning experience for developers who want to use your APIs. The developer console enables you to easily expose documentation as well as enable developers to try/test your APIs.

With this week's GA release we are also adding support that enables API publishers to register their OAuth Authorization Servers for use in the console, which in turn allows developers to sign in with their own login credentials when interacting with your API - a critical feature for any API that supports OAuth. All normative authorization grant types are supported, plus scopes and default scopes.

For more details on how to enable OAuth 2 support with API Management and integration in the new developer portal, check out this tutorial. Click here to learn more about the API Management service and try it out for free.

Media Services: Live Streaming, DRM, Faster Cost Effective Encoding, and Media Indexer

This week we are excited to announce the public preview of Live Streaming and Content Protection support with Azure Media Services.

The same Internet scale streaming solution that leading international broadcasters used to live stream the 2014 Winter Olympic Games and 2014 FIFA World Cup to tens of millions of customers globally is now available in public preview to all Azure customers. This means you can now stream live events of any size with the same level of scalability, uptime, and reliability that was available to the Olympics and World Cup.

DRM Content Protection

This week Azure Media Services is also introducing a new Content Protection offering which features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service.
This makes it easy to DRM protect both your live and pre-recorded video assets - and have them be available for users to easily watch on any device or platform (Windows, Mac, iOS, Android and more).

Faster and More Cost Effective Media Encoding

This week, we are also introducing faster media encoding speeds and more cost-effective billing. Our enhanced Azure Media Encoder is designed for premium media encoding and is billed based on output GBs. Our previous encoder was billed on both input + output GBs, so the shift to output-only billing will result in a substantial price reduction for all of our customers.

To help you further optimize your encoding workflows, we're introducing Basic, Standard, and Premium Encoding Reserved Units, which give you more flexibility and allow you to tailor the encoding capability you pay for to the needs of your specific workflows.

Media Indexer

Additionally, I'm happy to announce the General Availability of Azure Media Indexer, a powerful, market differentiated content extraction service which can be used to enhance the searchability of audio and video files. With Media Indexer you can automatically analyze your media files and index the audio and video content in them. You can learn more about it here.

More Media Partners

I'm also pleased to announce the addition this week of several media workflow partners and client players to our existing large set of media partners:

- Azure Media Services and Telestream's Wirecast are now fully integrated, including a built-in destination that makes it quick and easy to send content from Wirecast's live streaming production software to Azure.
- Similarly, Newtek's Tricaster has also been integrated into the Azure platform, enabling customers to combine the high production value of Tricaster with the scalability and reliability of Azure Media Services.
- Cires21 and Azure Media have paired up to help make monitoring the health of your live channels simple and easy, and the widely-used JW Player is now fully integrated with Azure to enable you to quickly build video playback experiences across virtually all platforms.

Learn More

Visit the Azure Media Services site for more information and to get started for free.

Websites: Virtual Network Integration, new Scalable CMS with WordPress

This week we've also released a number of great updates to our Azure Websites service.

Virtual Network Integration

Starting this week you can now integrate your Azure Websites with Azure Virtual Networks. This support enables your Websites to access resources attached to your virtual networks. For example: this means you can now have a Website directly connect to a database hosted in a non-public VM on a virtual network. If your Virtual Network is connected to your on-premises network (using a Site-to-Site software VPN or ExpressRoute dedicated fiber VPN) you can also now have your Website connect to resources in your on-premises network as well.

The new Virtual Network support enables both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Networks are compatible, such that you can also mix both in the same Website. The new virtual network support for Web Sites is being released this week in preview. Standard web hosting plans can have up to 5 virtual networks enabled. A website can only be connected to one virtual network at a time, but there is no restriction on the number of websites that can be connected to a virtual network.

You can configure a Website to use a Virtual Network using the new Preview Azure Portal (http://portal.azure.com).
Click the "Virtual Network" tile in your web site to bring up a virtual network blade that you can use to either create a new virtual network or attach to an existing one you already have. Note that an Azure Website requires that your Virtual Network has a configured gateway and Point-to-Site enabled. The option will remain grayed out in the UI until you have enabled this.

Scalable CMS with WordPress

This week we also released support for a Scalable CMS solution with WordPress running on Azure Websites. Scalable CMS with WordPress provides the fastest way to build an optimized and hassle-free WordPress Website. It is architected so that your WordPress site loads fast and can support millions of page views a month, and you can easily scale up or scale out as your traffic increases.

It is pre-configured to use Azure Storage, which can be used to store your site's media library content, and can be easily configured to use the Azure CDN. Every Scalable CMS site comes with auto-scale, staged publishing, SSL, custom domains, Webjobs, and the backup and restore features of Azure Websites enabled. Scalable WordPress also allows you to use Jetpack to supercharge your WordPress site with powerful features available to WordPress.com users.

You can now easily deploy Scalable CMS with WordPress solutions on Azure via the Azure Gallery integrated within the new Azure Preview Portal (http://portal.azure.com). When you select it within the portal it will walk you through automatically setting up and deploying a complete solution on Azure.

Scalable WordPress is ideal for Web developers, creative agencies, businesses and enterprises wanting a turn-key solution that maximizes the performance of running WordPress on Azure Websites. It's fast, simple and secure WordPress hosting on Azure Websites.

Updates to Website Backup

This week we also updated our built-in Backup feature within Azure Websites with a number of nice enhancements.
Starting today, you can now:

- Choose the exact destination of your backups, including the specific Storage account and blob container you wish to store your backups within.
- Choose to back up SQL databases or MySQL databases that are declared in the connection strings of the website.
- On the restore side, restore to both a new site, and to a deployment slot on a site. This makes it possible to verify your backup before you make it live.

These new capabilities make it easier than ever to have a full history of your website and its associated data.

Security: Role Based Access Control for Management of Azure

As organizations move more and more of their workloads to Azure, one of the most requested features has been the ability to control which cloud resources different employees can access and what actions they can perform on those resources.

Today, I'm excited to announce the preview release of Role Based Access Control (RBAC) support in the Azure platform. RBAC is now available in the Azure preview portal and can be used to control access in the portal or access to the Azure Resource Manager APIs. You can use this support to limit the access of users and groups by assigning them roles on Azure resources. Highlights include:

- A subscription is no longer the access management boundary in Azure. In April, we introduced Resource Groups, a container to group resources that share a lifecycle. Now, you can grant users access on a resource group as well as on individual resources like specific Websites or VMs.
- You can now grant access to both users and groups. RBAC is based on Azure Active Directory, so if your organization already uses groups in Azure Active Directory or Windows Server Active Directory for access management, you will be able to manage access to Azure the same way.

Below are some more details on how this works and can be enabled.

Azure Active Directory

Azure Active Directory is our directory service in the cloud.
You can create organizational tenants within Azure Active Directory and define users and groups within it - without having to have any existing Active Directory setup on-premises. Alternatively, you can also sync (or federate) users and groups from your existing on-premises Active Directory to Azure Active Directory, and have your existing users and groups automatically be available for use in the cloud with Azure, Office 365, as well as over 2000 other SaaS based applications.

All users that access your Azure subscriptions are now present in the Azure Active Directory to which the subscription is associated. This enables you to manage what they can do, as well as revoke their access to all Azure subscriptions by disabling their account in the directory.

Role Permissions

In this first preview we are pre-defining three built-in Azure roles that give you a choice of granting restricted access:

- An Owner can perform all management operations for a resource and its child resources, including access management.
- A Contributor can perform all management operations for a resource, including creating and deleting resources. A Contributor cannot grant access to others.
- A Reader has read-only access to a resource and its child resources. A Reader cannot read secrets.

In the RBAC model, users who have been configured to be the service administrator and co-administrators of an Azure subscription are mapped as belonging to the Owners role of the subscription. Their access to both the current and preview management portals remains unchanged.

Additional users and groups that you then assign to the new RBAC roles will only have those permissions, and also will only be able to manage Azure resources using the new Azure preview portal and Azure Resource Manager APIs. RBAC is not supported in the current Azure management portal or via older management APIs (since neither of these were built with the concept of role based security built-in).
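The three built-in roles form a simple containment hierarchy: everything a Reader can do, a Contributor can do, and only an Owner can additionally manage access. A toy permission check along those lines might look like this; the operation names are invented labels for illustration, not real Azure Resource Manager operation identifiers:

```python
# Hedged sketch of the three built-in RBAC roles described above.
# The operation names ("read", "write", "delete", "grant_access") are
# hypothetical labels, not actual ARM operation ids.
ROLE_OPERATIONS = {
    "Reader": {"read"},
    "Contributor": {"read", "write", "delete"},            # cannot grant access
    "Owner": {"read", "write", "delete", "grant_access"},  # full control
}

def is_allowed(role: str, operation: str) -> bool:
    """Return True if the given built-in role permits the operation."""
    return operation in ROLE_OPERATIONS.get(role, set())

print(is_allowed("Reader", "read"))               # True
print(is_allowed("Contributor", "grant_access"))  # False - only Owners manage access
```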
Restricting Access based on Role Based Permissions

Let's assume that your team is using Azure for development, as well as to host the production instance of your application. When doing this you might want to separate the resources employed in development and testing from the production resources using Resource Groups. You might want to allow everyone in your team to have a read-only view of all resources in your Azure subscription - including the ability to read and review production analytics data. You might then want to only allow certain users to have write/contributor access to the production resources. Let's look at how to set this up:

Step 1: Setting up Roles at the Subscription Level

We'll begin by mapping some users to roles at the subscription level. These will then by default be inherited by all resources and resource groups within our Azure subscription.

To set this up, open the Billing blade within the Preview Azure Portal (http://portal.azure.com), and within the Billing blade select the Azure subscription that you wish to set up roles for. Then scroll down within the blade of the subscription you opened, and locate the Roles tile within it.

Clicking the Roles tile will bring up a blade that lists the pre-defined roles we provide by default (Owner, Contributor, Reader). You can click any of the roles to bring up a list of the users assigned to the role. Clicking the Add button will then allow you to search your Azure Active Directory and add either a user or group to that role. Below I've opened up the default Reader role and added David and Fred to it.

Once we do this, David and Fred will be able to log into the Preview Azure Portal and will have read-only access to the resources contained within our subscription. They will not be able to make any changes, though, nor be able to see secrets (passwords, etc).
Note that in addition to adding users and groups from within your directory, you can also use the Invite button above to invite users who are not currently part of your directory, but who have a Microsoft Account (e.g. [email protected]), to also be mapped into a role.

Step 2: Setting up Roles at the Resource Level

Once you've defined the default role mappings at the subscription level, they will by default apply to all resources and resource groups contained within it. If you wish to scope permissions even further at just an individual resource (e.g. a VM or Website or Database) or at a resource group level (e.g. an entire application and all resources within it), you can also open up the individual resource/resource-group blade and use the Roles tile within it to further specify permissions.

For example, earlier we granted David reader role access to all resources within our Azure subscription. Let's now grant him contributor role access to just an individual VM within the subscription. Once we do this he'll be able to stop/start the VM as well as make changes to it.

To enable this, I've opened up the blade for the VM below. I've then scrolled down the blade and found the Roles tile within the VM. Clicking the contributor role within the Roles tile will then bring up a blade that allows me to configure which users will be contributors (meaning have read and modify permissions) for this particular VM. Notice below how I've added David to this.

Using this resource/resource-group level approach enables you to have really fine-grained access control permissions on your resources.

Command Line and API Access for Azure Role Based Access Control

The enforcement of the access policies that you configure using RBAC is done by the Azure Resource Manager APIs. Both the Azure preview portal as well as the command line tools we ship use the Resource Manager APIs to execute management operations.
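The subscription-to-resource inheritance described in the steps above can be modeled as a union of the role assignments found along the scope path. Here is a simplified sketch of that idea; the scope strings and the data model are invented for illustration (real Azure uses full resource IDs and richer assignment objects):

```python
# Hedged sketch of RBAC scope inheritance: a role granted at the
# subscription level flows down to every contained resource, and extra
# roles can be granted directly on a resource. The scope strings below
# are illustrative placeholders, not real Azure resource IDs.
assignments = {
    "/sub": {"David": {"Reader"}, "Fred": {"Reader"}},  # Step 1: subscription level
    "/sub/vm1": {"David": {"Contributor"}},             # Step 2: one VM only
}

def effective_roles(user: str, scope: str) -> set:
    """Union of roles assigned at the scope itself and at every parent scope."""
    roles = set()
    parts = scope.strip("/").split("/")
    for i in range(1, len(parts) + 1):
        prefix = "/" + "/".join(parts[:i])
        roles |= assignments.get(prefix, {}).get(user, set())
    return roles

print(effective_roles("David", "/sub/vm1"))  # Reader (inherited) + Contributor (direct)
print(effective_roles("Fred", "/sub/vm1"))   # Reader only, inherited from the subscription
```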
This ensures that access is consistently enforced regardless of what tools are used to manage Azure resources. With this week's release we've included a number of new PowerShell APIs that enable you to automate setting up as well as controlling role based access.

Learn More about Role Based Access

Today's Role Based Access Control preview provides a lot more flexibility in how you manage the security of your Azure resources. It is easy to set up and configure. And because it integrates with Azure Active Directory, you can easily sync/federate it to also integrate with the existing Active Directory configuration you might already have in your on-premises environment.

Getting started with the new Azure Role Based Access Control support is as simple as assigning the appropriate users and groups to roles on your Azure subscription or individual resources. You can read more detailed information on the concepts and capabilities of RBAC here. Your feedback on the preview features is critical for all improvements and new capabilities coming in this space, so please try out the new features and provide us your feedback.

Alerts: General Availability of Azure Alerting and new Alerts on Events support

I'm excited to announce the release of Azure Alerting to General Availability. Azure Alerting supports the ability to create alert thresholds on metrics that you are interested in, and then have Azure automatically send an email notification when that threshold is crossed. As part of the general availability release, we are removing the 10-alert-rule cap per subscription.
Alerts are available in the full Azure portal by clicking Management Services in the left navigation bar. Alerting is also available on most of the resources in the Azure preview portal. You can create alerts on metrics from 8 different services today (and we are adding more all the time):

- Cloud Services
- Virtual Machines
- Websites
- Web hosting plans
- Storage accounts
- SQL databases
- Redis Cache
- DocumentDB accounts

In addition to general availability for alerts on metrics, we are also previewing the ability to create alerts on operational events. This means you can get an email if someone stops your website, if your virtual machines are deleted, or if your Azure Resource Manager template deployment failed. Like alerts on metrics, you can route these alerts to the service and co-administrators, or to a custom email address you provide. You can configure these events on a resource in the Azure Preview Portal. We have enabled this within the Portal for Websites - we'll be extending it to all resources in the future.

Summary

Today's Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don't already have an Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
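A metric alert of the kind described above boils down to evaluating a threshold over a series of metric samples and notifying once when it is crossed. The sketch below illustrates that evaluation only; the metric values, threshold, and fire-once-then-reset semantics are assumptions for the example, and real Azure alerting is configured in the portal rather than coded like this:

```python
# Hedged sketch of threshold-based metric alerting: fire once when the
# metric crosses the threshold, and re-arm once it drops back below.
# Values and threshold are illustrative only.
def evaluate_alerts(samples, threshold):
    """Return indices of samples at which the alert would fire."""
    fired, active = [], False
    for i, value in enumerate(samples):
        if value > threshold and not active:
            fired.append(i)   # a real service would send the email here
            active = True     # suppress repeat notifications
        elif value <= threshold:
            active = False    # metric recovered; re-arm the alert
    return fired

cpu_percent = [40, 55, 85, 90, 70, 95]
print(evaluate_alerts(cpu_percent, 80))  # [2, 5]
```

Note the re-arming step: without it, a metric hovering above the threshold would send one email per sample.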
Posted almost 11 years ago
Today we released a major set of updates to Microsoft Azure. Today's updates include:

- DocumentDB: Preview of a New NoSQL Document Service for Azure
- Search: Preview of a New Search-as-a-Service offering for Azure
- Virtual Machines: Portal support for SQL Server AlwaysOn + community-driven VMs
- Web Sites: Support for Web Jobs and Web Site processes in the Preview Portal
- Azure Insights: General Availability of Microsoft Azure Monitoring Services Management Library
- API Management: Support for API Management REST APIs

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

DocumentDB: Announcing a New NoSQL Document Service for Azure

I'm excited to announce the preview of our new DocumentDB service - a NoSQL document database service designed for scalable and high performance modern applications. DocumentDB is delivered as a fully managed service (meaning you don't have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

As a NoSQL store, DocumentDB is truly schema-free. It allows you to store and query any JSON document, regardless of schema. The service provides built-in automatic indexing support - which means you can write JSON documents to the store and immediately query them using a familiar document oriented SQL query grammar. You can optionally extend the query grammar to perform service side evaluation of user defined functions (UDFs) written in server-side JavaScript as well.

DocumentDB is designed to linearly scale to meet the needs of your application. The DocumentDB service is purchased in capacity units, each offering a reservation of high performance storage and dedicated performance throughput. Capacity units can be easily added or removed via the Azure portal or REST based management API based on your scale needs.
This allows you to elastically scale databases in fine-grained increments with predictable performance and no application downtime, simply by increasing or decreasing capacity units.

Over the last year, we have used DocumentDB internally within Microsoft for several high-profile services. We now have DocumentDB databases that are each 100s of TBs in size, each processing millions of complex DocumentDB queries per day, with predictable performance of low single digit ms latency. DocumentDB provides a great way to scale applications and solutions like this to an incredible size.

DocumentDB also enables you to tune performance further by customizing the index policies and consistency levels you want for a particular application or scenario, making it an incredibly flexible and powerful data service for your applications. For queries and read operations, DocumentDB offers four distinct consistency levels - Strong, Bounded Staleness, Session, and Eventual. These consistency levels allow you to make sound tradeoffs between consistency and performance. Each consistency level is backed by a predictable performance level, ensuring you can achieve reliable results for your application.

DocumentDB has made a significant bet on ubiquitous formats like JSON, HTTP and REST - which makes it easy to start taking advantage of from any Web or Mobile application. With today's release we are also distributing .NET, Node.js, JavaScript and Python SDKs. The service can also be accessed through RESTful HTTP interfaces and is simple to manage through the Azure preview portal.

Provisioning a DocumentDB account

To get started with DocumentDB you provision a new database account. To do this, use the new Azure Preview Portal (http://portal.azure.com), click the Azure gallery, select the Data, storage, cache + backup category, and locate the DocumentDB gallery item. Once you select the DocumentDB item, choose the Create command to bring up the Create blade for it.
In the Create blade, specify the name of the service you wish to create, the amount of capacity you wish to scale your DocumentDB instance to, and the location around the world that you want to deploy it to (e.g. the West US Azure region).

Once provisioning is complete, you can start to manage your DocumentDB account by clicking the new instance icon on your Azure portal dashboard. The Keys tile can be used to retrieve the security keys to use to access the DocumentDB service programmatically.

Developing with DocumentDB

DocumentDB provides a number of different ways to program against it. You can use the REST API directly over HTTPS, or you can choose from either the .NET, Node.js, JavaScript or Python client SDKs. The JSON data I am going to use for this example are two families:

// AndersonFamily.json file
{
    "id": "AndersenFamily",
    "lastName": "Andersen",
    "parents": [
        { "firstName": "Thomas" },
        { "firstName": "Mary Kay" }
    ],
    "children": [
        { "firstName": "John", "gender": "male", "grade": 7 }
    ],
    "pets": [
        { "givenName": "Fluffy" }
    ],
    "address": { "country": "USA", "state": "WA", "city": "Seattle" }
}

and

// WakefieldFamily.json file
{
    "id": "WakefieldFamily",
    "parents": [
        { "familyName": "Wakefield", "givenName": "Robin" },
        { "familyName": "Miller", "givenName": "Ben" }
    ],
    "children": [
        {
            "familyName": "Wakefield",
            "givenName": "Jesse",
            "gender": "female",
            "grade": 1
        },
        {
            "familyName": "Miller",
            "givenName": "Lisa",
            "gender": "female",
            "grade": 8
        }
    ],
    "pets": [
        { "givenName": "Goofy" },
        { "givenName": "Shadow" }
    ],
    "address": { "country": "USA", "state": "NY", "county": "Manhattan", "city": "NY" }
}

Using the NuGet package manager in Visual Studio, I can search for and install the DocumentDB .NET package into
any .NET application. With the URI and Authentication Keys for the DocumentDB service that I retrieved earlier from the Azure Management portal, I can then connect to the DocumentDB service I just provisioned, create a Database, create a Collection, insert some JSON documents and immediately start querying them:

using (client = new DocumentClient(new Uri(endpoint), authKey))
{
    var database = new Database { Id = "ScottsDemoDB" };
    database = await client.CreateDatabaseAsync(database);

    var collection = new DocumentCollection { Id = "Families" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

    // DocumentDB supports strongly typed POCO objects and also dynamic objects
    dynamic andersonFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\AndersonFamily.json"));
    dynamic wakefieldFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\WakefieldFamily.json"));

    // persist the documents in DocumentDB
    await client.CreateDocumentAsync(collection.SelfLink, andersonFamily);
    await client.CreateDocumentAsync(collection.SelfLink, wakefieldFamily);

    // very simple query returning the full JSON document matching a simple WHERE clause
    var query = client.CreateDocumentQuery(collection.SelfLink,
        "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");
    var family = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Anderson family have the following pets:");
    foreach (var pet in family.pets)
    {
        Console.WriteLine(pet.givenName);
    }

    // select JUST the child record out of the Family record where the child's gender is male
    query = client.CreateDocumentQuery(collection.DocumentsLink,
        "SELECT * FROM c IN Families.children WHERE c.gender = 'male'");
    var child = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Andersons have a son named {0} in grade {1}", child.firstName, child.grade);
    // cleanup test database
    await client.DeleteDatabaseAsync(database.SelfLink);
}

As you can see above - the .NET API for DocumentDB fully supports the .NET async pattern, which makes it ideal for use with applications you want to scale well.

Server-side JavaScript Stored Procedures

If I wanted to perform some updates affecting multiple documents within a transaction, I could define a stored procedure using JavaScript that swapped pets between families. In this scenario it would be important to ensure that one family didn't end up with all the pets and another ended up with none due to something unexpected happening. Therefore if an error occurred during the swap process, it would be crucial that the database roll back the transaction and leave things in a consistent state. I can do this with the following stored procedure that I run within the DocumentDB service:

function SwapPets(family1Id, family2Id) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    collection.queryDocuments(collection.getSelfLink(),
        'SELECT * FROM Families f WHERE f.id = "' + family1Id + '"', {},
        function (err, documents, responseOptions) {
            var family1 = documents[0];

            collection.queryDocuments(collection.getSelfLink(),
                'SELECT * FROM Families f WHERE f.id = "' + family2Id + '"', {},
                function (err2, documents2, responseOptions2) {
                    var family2 = documents2[0];

                    var itemSave = family1.pets;
                    family1.pets = family2.pets;
                    family2.pets = itemSave;

                    collection.replaceDocument(family1._self, family1,
                        function (err3, docReplaced) {
                            collection.replaceDocument(family2._self, family2, {});
                        });

                    response.setBody(true);
                });
        });
}

If an exception is thrown in the JavaScript function due to, for instance, a concurrency violation when updating a
record – the transaction is reversed and the system is returned to the state it was in before the function began.

It’s easy to register the stored procedure in code like the below (for example, in a deployment script or app startup code):

// register a stored procedure
StoredProcedure storedProcedure = new StoredProcedure
{
    Id = "SwapPets",
    Body = File.ReadAllText(@".\JS\SwapPets.js")
};

storedProcedure = await client.CreateStoredProcedureAsync(collection.SelfLink, storedProcedure);

And just as easy to execute the stored procedure from within your application:

// execute the stored procedure, passing in the two family documents involved in the pet swap
dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>(storedProcedure.SelfLink, "AndersenFamily", "WakefieldFamily");

If we checked the pets now linked to the Anderson family, we’d see they have been swapped.

Learning More

It’s really easy to get started with DocumentDB and create a simple working application in a couple of minutes. The above was but one simple example of how to start using it. Because DocumentDB is schema-less, you can use it with literally any JSON document. Because it performs automatic indexing on every JSON document stored within it, you get screaming performance when querying those JSON documents later. Because it scales linearly with consistent performance, it is ideal for applications you think might get large.

You can learn more about DocumentDB from the new DocumentDB development center here.

Search: Announcing preview of new Search as a Service for Azure

I’m excited to announce the preview of our new Azure Search service. Azure Search makes it easy for developers to add great search experiences to any web or mobile application.
Azure Search provides developers with all of the features needed to build out their search experience without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service. It is delivered as a fully managed service with an enterprise-grade SLA. We are also releasing a Free tier of the service today that enables you to use it with small-scale solutions on Azure at no cost.

Provisioning a Search Service

To get started, let’s create a new search service. In the Azure Preview Portal (http://portal.azure.com), navigate to the Azure Gallery, choose the Data storage, cache + backup category, and locate the Azure Search gallery item. Select Create to create an instance of the service:

You can choose from two Pricing Tier options: Standard, which provides dedicated capacity for your search service, and a Free option that allows every Azure subscription to get a free small search service in a shared environment.

The Standard tier can be easily scaled up or down and provides dedicated capacity guarantees to ensure that search performance is predictable for your application. It also supports the ability to index tens of millions of documents with lots of indexes.

The Free tier is limited to 10,000 documents and up to 3 indexes, and has no dedicated capacity guarantees. However, it is totally free and provides a great way to learn and experiment with all of the features of Azure Search.

Managing your Azure Search service

After provisioning your Search service, you will land in the Search blade within the portal - which allows you to manage the service, view usage data and tune the performance of the service:

I can click on the Scale tile above to bring up the details of the number of resources allocated to my search service.
If I had created a Standard search service, I could use this to increase the number of replicas allocated to my service to support more searches per second (or to provide higher availability), and the number of partitions to support higher numbers of documents within my search service.

Creating a Search Index

Now that the search service is created, I need to create a search index that will hold the documents (data) that will be searched. To get started, I need two pieces of information from the Azure Portal: the service URL to access my Azure Search service (accessed via the Properties tile) and the Admin Key to authenticate against the service (accessed via the Keys tile). Using this search service URL and admin key, I can start using the search service APIs to create an index and later upload data and issue search requests.

I will be sending HTTP requests against the API using that key, so I’ll set up a .NET HttpClient object to do this as follows:

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("api-key", "19F1BACDCD154F4D3918504CBF24CA1F");

I’ll start by creating the search index. In this case I want an index I can use to search for contacts in my dataset, so I want searchable fields for their names and tags; I also want to track the last contact date (so I can filter or sort on that later on) and their address as a lat/long location so I can use it in filters as well. To make things easy I will be using JSON.NET (to do this, add the NuGet package to your VS project) to serialize objects to JSON.
var index = new
{
    name = "contacts",
    fields = new[]
    {
        new { name = "id", type = "Edm.String", key = true },
        new { name = "fullname", type = "Edm.String", key = false },
        new { name = "tags", type = "Collection(Edm.String)", key = false },
        new { name = "lastcontacted", type = "Edm.DateTimeOffset", key = false },
        new { name = "worklocation", type = "Edm.GeographyPoint", key = false },
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(index), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

You can run this code as part of your deployment code or as part of application initialization.

Populating a Search Index

Azure Search uses a push API for indexing data. You can call this API with batches of up to 1000 documents to be indexed at a time. Since it’s your code that pushes data into the index, the original data may be anywhere: in a SQL Database in Azure, a DocumentDB database, blob/table storage, etc. You can even populate it with data stored on-premises or in a non-Azure cloud provider.

Note that indexing is rarely a one-time operation. You will probably have an initial set of data to load from your data source, but then you will want to push new documents as well as update and delete existing ones. If you use Azure Websites, this is a natural scenario for WebJobs, which can run your indexing code regularly in the background.

Regardless of where you host it, the code to index data needs to pull data from the source and push it into Azure Search. In the example below I’m just making up data, but you can see how I could be using the result of a SQL or LINQ query or anything that produces a set of objects that match the index fields we identified above.
var batch = new
{
    value = new[]
    {
        new
        {
            id = "221",
            fullname = "Jay Adams",
            tags = new string[] { "work" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new[] { -122.131577, 47.678581 }
            }
        },
        new
        {
            id = "714",
            fullname = "Catherine Abel",
            tags = new string[] { "work", "personal" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new[] { -121.825579, 47.1419814 }
            }
        }
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

Searching an Index

After creating an index and populating it with data, I can now issue search requests against the index. Searches are simple HTTP GET requests against the index, and responses contain the data we originally uploaded as well as accompanying scoring information.
I can do a simple search by executing the code below, where searchText is a string containing the user input, something like “abel work” for example:

var response = client.GetAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs?api-version=2014-07-31-Preview&search=" + Uri.EscapeDataString(searchText)).Result;
response.EnsureSuccessStatusCode();

dynamic results = JsonConvert.DeserializeObject(response.Content.ReadAsStringAsync().Result);

foreach (var result in results.value)
{
    Console.WriteLine("FullName:" + result.fullname + " score:" + (double)result["@search.score"]);
}

Learning More

The above is just a simple scenario of what you can do. There are a lot of other things we could do with searches. For example, I can use query string options to filter, sort, project and page over the results. I can use hit-highlighting and faceting to create a richer way to navigate results, and suggestions to implement auto-complete within my web or mobile UI.

In this example, I used the default ranking model, which uses statistics of the indexed text and search string to compute scores. You can also author your own scoring profiles that model scores in ways that match the needs of your application.

Check out the Azure Search documentation for more details on how to get started, and some of the more advanced use-cases you can take advantage of. With the free tier now available at no cost to every Azure subscriber, there is no longer any reason not to have Search fully integrated within your applications.

Virtual Machines: Support for SQL Server AlwaysOn, VM Depot images

Last month we added support for managing VMs within the Azure Preview Portal (http://portal.azure.com). We also released built-in portal support that enables you to easily create multi-VM SharePoint Server Farms, as well as a slew of additional Azure Certified VM images. You can learn more about these updates in my last blog post.
Today, I’m excited to announce new support for automatically deploying SQL Server VMs with AlwaysOn configured, as well as integrated portal support for community-supported VM Depot images.

SQL Server AlwaysOn Template

AlwaysOn Availability Groups, released in SQL Server 2012 and enhanced in SQL Server 2014, guarantee high availability for mission-critical workloads. Last year we started supporting SQL Availability Groups on Azure Infrastructure Services. In such a configuration, two SQL replicas (primary and secondary), each in its own Azure VM, are configured for automatic failover, and a listener (DNS name) is configured for client connectivity. Other components required are a file share witness to guarantee quorum in the configuration (avoiding “split brain” scenarios) and a domain controller to join all VMs to the same domain. The SQL replicas as well as the domain controller replicas are each deployed to an availability set to ensure they are in different Azure failure and upgrade domains.

Prior to today’s release, setting up the Availability Group configuration could be tedious and time consuming. We have dramatically simplified this experience through a new SQL Server AlwaysOn template in the Azure Gallery. This template fully automates the configuration of a highly available SQL Server deployment on Azure Infrastructure Services using an Availability Group.

You can find the template by navigating to the Azure Gallery within the Azure Preview Portal (http://portal.azure.com), selecting the Virtual Machine category on the left, and selecting the SQL Server 2014 AlwaysOn gallery item. In the gallery details page, select Create. All you need to do is provide some basic configuration information, such as the administrator credentials for the VMs, and the rest of the settings are defaulted for you. You may consider changing the default Listener name, as this is what your applications will use to connect to SQL Server.
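Once the farm is up, applications connect through the listener rather than an individual replica. Below is a minimal sketch of such a connection string – the listener name, database and credentials are placeholders for your own values:

```csharp
using System;

// Hypothetical listener connection string for an AlwaysOn Availability Group.
// "sqlaglistener" stands in for the Listener name chosen in the template;
// MultiSubnetFailover=True lets the client try all listener IP addresses in
// parallel, which shortens reconnect time after a failover across subnets.
string connectionString =
    "Server=tcp:sqlaglistener,1433;" +
    "Database=MyAppDb;" +
    "User ID=myappuser;Password=<password>;" +
    "MultiSubnetFailover=True;Connection Timeout=30";

Console.WriteLine(connectionString);
```

Because failover changes which VM owns the listener, clients should also be prepared to retry transient connection errors.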
Upon creation, 5 VMs are created in the resource group: 2 VMs for the SQL Server replicas, 2 VMs for the Domain Controller replicas, and 1 VM for the file share witness.

Once created, you can RDP to one of the SQL Server VMs to see the Availability Group configuration as depicted below:

Try out the SQL Server AlwaysOn template in the Azure Preview Portal today and give us your feedback!

VM Depot in Azure Gallery

Community-driven VM Depot images have been supported on the Azure platform for a couple of years now, but prior to today’s release they weren’t fully integrated into the mainline user experience.

Today, I’m excited to announce that we have integrated community VMs into the Azure Preview Portal and the Azure Gallery. With this release, you will find close to 300 pre-configured Virtual Machine images for Microsoft Azure. Using these images, fully functional Virtual Machines can be deployed in the Preview Portal in minutes and customized for specific use cases.

Starting from base operating system distributions (such as Debian, Ubuntu, CentOS, SUSE and FreeBSD), through developer stacks (such as LAMP, Ruby on Rails, Node and Django), to complete applications (such as WordPress, Drupal and Apache Solr), there is something for everyone in VM Depot.

Try out the VM Depot images in the Azure Gallery from within the Virtual Machine category.

Web Sites: WebJobs and Process Management in the Preview Portal

Starting with today’s Azure release, Web Site WebJobs are now supported in the Azure Preview Portal. You can also now drill into your Web Sites and monitor the health of any processes running within them (both those that host your web code and those that host your WebJobs).

Web Site WebJobs

Using WebJobs, you can now run any code within your Azure Web Sites – and do so in a way that is readily parallelizable, globally scalable, and complete with remote debugging, full VS support and an optional SDK to facilitate authoring.
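Since WebJobs can run any code, a minimal on-demand WebJob is simply a console app whose stdout shows up in the job log. The work items below are made up for illustration; the optional WebJobs SDK layers queue/blob triggers and bindings on top of this basic model:

```csharp
using System;

// A WebJob can be any console executable; Azure captures everything written
// to stdout into the job's log. This sketch just simulates a small work batch.
static void ProcessItem(string item)
{
    // real work (image resize, queue drain, cleanup, ...) would happen here
    Console.WriteLine("Processed: " + item);
}

var pendingItems = new[] { "resize-image-001", "resize-image-002" }; // hypothetical work items

foreach (var item in pendingItems)
{
    ProcessItem(item);
}

Console.WriteLine("Job complete: " + pendingItems.Length + " item(s)");
```

Packaged as a zip and uploaded through the portal, the same executable can be run on demand or continuously.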
For more information about the power of WebJobs, visit Azure WebJobs recommended resources.

With today’s Azure release, we now support two types of WebJobs: On Demand and Continuous. To use WebJobs in the preview portal, navigate to your web site and select the WebJobs tile within the Web Site blade. Notice that the tile also now shows the count of WebJobs available.

By drilling into the tile, you can view existing WebJobs as well as create new On Demand or Continuous WebJobs. Scheduled WebJobs are not yet supported in the preview portal, but expect to see this in the near future.

Web Site Processes

I’m excited to announce a new feature in the Azure Web Sites experience in the Preview Portal - Websites Processes. Using Websites Processes you can enumerate the different instances of your site, browse through the different processes on each instance, and even drill down to the handles and modules associated with each process. You can then check for detailed information like version, language and more.

In addition, you also get rich monitoring for CPU, Working Set and Thread count at the process level. Just like with Task Manager for Windows, data collection begins when you open the Websites Processes blade, and stops when you close it.

This feature is especially useful when your site has been scaled out and is misbehaving in some specific instances but not in others. You can quickly identify runaway processes, find open file handles, and even kill a specific process instance.

Monitoring and Management SDK: Programmatic Access to Monitoring Data

The Azure Management Portal provides built-in monitoring and management support that makes it easy for you to track the health of your applications and solutions deployed within Azure.

If you want to programmatically access monitoring and management features in Azure, you can also now use our .NET SDK from NuGet. We are releasing this SDK to general availability today, so you can now use it for your production services!
For example, if you want to build your own custom dashboard that shows metric data from across your services, you can get that metric data via the SDK:

// Create the metrics client by obtaining the certificate with the specified thumbprint.
MetricsClient metricsClient = new MetricsClient(new CertificateCloudCredentials(SubscriptionId, GetStoreCertificate(Thumbprint)));

// Build the resource ID string.
string resourceId = ResourceIdBuilder.BuildWebSiteResourceId("webtest-group-WestUSwebspace", "webtests-site");

// Get the metric definitions.
MetricDefinitionCollection metricDefinitions = metricsClient.MetricDefinitions.List(resourceId, null, null).MetricDefinitionCollection;

// Display the available metric definitions.
Console.WriteLine("Choose metrics (comma separated) to list:");
int count = 0;
foreach (MetricDefinition metricDefinition in metricDefinitions.Value)
{
    Console.WriteLine(count + ":" + metricDefinition.DisplayName);
    count++;
}

// Ask the user which metrics they are interested in.
var desiredMetrics = Console.ReadLine().Split(',').Select(x => metricDefinitions.Value.ToArray()[Convert.ToInt32(x.Trim())]);

// Get the metric values for the last 20 minutes.
MetricValueSetCollection values = metricsClient.MetricValues.List(
    resourceId,
    desiredMetrics.Select(x => x.Name).ToList(),
    "",
    desiredMetrics.First().MetricAvailabilities.Select(x => x.TimeGrain).Min(),
    DateTime.UtcNow - TimeSpan.FromMinutes(20),
    DateTime.UtcNow
).MetricValueSetCollection;

// Display the metric values to the user.
foreach (MetricValueSet valueSet in values.Value)
{
    Console.WriteLine(valueSet.DisplayName + " for the past 20 minutes:");
    foreach (MetricValue metricValue in valueSet.MetricValues)
    {
        Console.WriteLine(metricValue.Timestamp + "\t" + metricValue.Average);
    }
}

Console.Write("Press any key to continue:");
Console.ReadKey();

We support metrics for a variety of services with the monitoring SDK:

Service            Typical metrics                                      Frequencies
Cloud services     CPU, Network, Disk                                   5 min, 1 hr, 12 hrs
Virtual machines   CPU, Network, Disk                                   5 min, 1 hr, 12 hrs
Websites           Requests, Errors, Memory, Response time, Data out    1 min, 1 hr
Mobile Services    API Calls, Data Out, SQL performance                 1 hr
Storage            Requests, Success rate, End2End latency              1 min, 1 hr
Service Bus        Messages, Errors, Queue length, Requests             5 min
HDInsight          Containers, Apps running                             15 min

If you’d like to manage advanced autoscale settings that aren’t possible to configure in the Portal, you can also do that via the SDK. For example, you can construct autoscale based on custom metrics – you can autoscale by anything that is returned from MetricDefinitions. All of the documentation on the SDK is available on MSDN.

API Management: Support for Services REST API

We launched the Azure API Management service into preview in May of this year. The API Management service enables customers to quickly and securely publish APIs to partners, the public development community, and even internal developers.

Today, I’m excited to announce the availability of the API Management REST API, which opens up a large number of new scenarios. It can be used to manage APIs, products, subscriptions, users and groups, in addition to accessing your API analytics.
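Calls to the management REST API are authenticated with a SAS token generated from your service's identifier and key. The sketch below shows one way to generate such a token in C# – the identifier and key are placeholders, and you should verify the exact signing format against the REST API reference:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch of generating a SAS token for the API Management REST API.
// The id/key come from your service's settings; the signature here is an
// HMAC-SHA512 over "<id>\n<expiry>" – confirm the format against the
// official REST API reference before relying on it.
static string CreateApimSasToken(string id, string key, DateTime expiry)
{
    using (var hmac = new HMACSHA512(Encoding.UTF8.GetBytes(key)))
    {
        string dataToSign = id + "\n" + expiry.ToString("yyyy-MM-ddTHH:mm:ss.fffffffZ");
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(dataToSign)));
        return string.Format("uid={0}&ex={1:yyyy-MM-ddTHH:mm:ss.fffffffZ}&sn={2}", id, expiry, signature);
    }
}

// The token is then sent on every request, e.g.:
//   client.DefaultRequestHeaders.Add("Authorization", "SharedAccessSignature " + token);
string token = CreateApimSasToken("1", "<primary-key>", DateTime.UtcNow.AddDays(1));
Console.WriteLine("SharedAccessSignature " + token);
```

The portal can also generate these tokens manually, which is handy for quick experiments before automating token creation.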
In fact, virtually any management operation available in the Management API Portal is now accessible programmatically - opening up a host of integration and automation scenarios, including directly monetizing an API with your commerce provider of choice, taking over user or subscription management, automating API deployments and more.

We've even provided an additional SAS (Shared Access Signature) security option. An integrated experience in the publisher portal allows you to generate SAS tokens - so securely calling your API service couldn’t be easier. It takes just three easy steps:

1. Enable the API on the System Settings page in the Publisher Portal
2. Acquire a time-limited access token, either manually or programmatically
3. Start sending requests to the API, providing the token with every request

See the REST API reference for full details.

Delegation of user registration and product subscription

The new API Management REST API makes it easy to automate and integrate other processes with API Management. Many customers integrating in this way already have a user account system and would prefer to use this existing resource instead of the built-in functionality provided by the Developer Portal. This feature, called Delegation, enables your existing website or backend to own the user data, manage subscriptions and seamlessly integrate with API Management's dynamically generated API documentation.

It's easy to enable Delegation: in the Publisher Portal, navigate to the Delegation section and enable Delegated Sign-in and Sign-up, provide the endpoint URL and validation key, and you're good to go. For more details, check out the how-to guide.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.
Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
Posted almost 11 years ago
This past month we’ve released a number of great enhancements to Microsoft Azure. These include:

Virtual Machines: Preview Portal Support as well as SharePoint Farm Creation
Machine Learning: Public preview of the new Azure Machine Learning service
Event Hub: Public preview of new Azure Event Ingestion Service
Mobile Services: General Availability of .NET support, SignalR support
Notification Hubs: Price Reductions and New Features
SQL Database: New Geo-Restore, Geo-Replication and Auditing support
Redis Cache: Larger Cache Sizes
Storage: Support for Zone Redundant Storage
SDK: Tons of great VS and SDK improvements

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

Virtual Machines: Support in the new Azure Preview portal

We previewed the new Azure Preview Portal at the //Build conference earlier this year. It brings together all of your Azure resources in a single management portal, and makes it easy to build cloud applications on the Azure platform using our new Azure Resource Manager (which enables you to manage multiple Azure resources as a single application). The initial preview of the portal supported Web Sites, SQL Databases, Storage, and Visual Studio Online resources.

This past month we’ve extended the preview portal to also now support Virtual Machines. You can create standalone VMs using the portal, or group multiple VMs (and PaaS services) together into a Resource Group and manage them as a single logical entity. You can use the preview portal to get deep insights into billing and monitoring of these resources, and customize the portal to view the data however you want. If you are an existing Azure customer you can start using the new portal today: http://portal.azure.com.

Below is a screen-shot of the new portal in action.
The service dashboard showing service/region health can be seen in the top-left of the portal, along with billing data about my subscriptions – both make it really easy for you to see the health and usage of your services in Azure.

In the screen-shot below I have a single VM running named “scottguvstest” – and clicking its tile displays a “blade” of additional details about it to the right – including integrated performance monitoring usage data:

The initial “blade” for a VM provides a summary view of common metrics about it. You can click any of the tiles to get even more detailed information as well. For example, below I’ve clicked the CPU monitoring tile in my VM, which brought up a Metric blade with even more details about CPU utilization over the last few days. I’ve then clicked the “Add Alert” command within it to set up an automatic alert that will trigger (and send an email to me) any time the CPU of the VM goes above 95%:

In the screen-shot below, I’ve clicked the “Usage” tile within the VM blade, which displays details about the different VM sizes available – and what each VM size provides in terms of CPU, memory, disk IOPS and other capabilities. Changing the size of the VM being used is as simple as clicking another of the pricing tiles within the portal – no redeployment of the VM required:

SharePoint Farm support via the Azure Gallery

Built into the Azure Preview Portal is a new “Azure Gallery” that provides an easy way to deploy a wide variety of VM images and online services. VM images in the Azure Gallery include Windows Server, SQL Server, SharePoint Server, Ubuntu, Oracle, and Barracuda images.

Last month, we also enabled a new “SharePoint Server Farm” gallery item. It enables you to easily configure and deploy a highly-available SharePoint Server Farm consisting of multiple VM images (databases, web servers, domain controllers, etc.) in only minutes.
It provides the easiest way to create and configure SharePoint farms anywhere:

Over the next few months you’ll see even more items show up in the gallery – enabling a variety of additional new scenarios. Try out the ones in the gallery today by visiting the new Azure portal: http://portal.azure.com/

Machine Learning: Preview of new Machine Learning Service for Azure

Last month we delivered the public preview of our new Microsoft Azure Machine Learning service, a game-changing service that enables your applications and systems to significantly improve your organization’s understanding across vast amounts of data. Azure Machine Learning (Azure ML) is a fully managed cloud service with no software to install, no hardware to manage, and no OS versions or development environments to grapple with. Armed with nothing but a browser, data scientists can log into Azure and start developing Machine Learning models from any location, and from any device.

ML Studio, an integrated development environment for Machine Learning, lets you set up experiments as simple data flow graphs, with an easy-to-use drag, drop and connect paradigm. Data scientists can use it to avoid programming a large number of common tasks, allowing them to focus on experiment design and iteration. A collection of best-of-breed algorithms developed by Microsoft Research comes built in, as does support for custom R code – and over 350 open source R packages can be used securely within Azure ML today.

Azure ML also makes it simple to create production deployments at scale in the cloud. Pre-trained Machine Learning models can be incorporated into a scoring workflow and, with a few clicks, a new cloud-hosted REST API can be created.

Azure ML makes the incredible potential of Machine Learning accessible both to startups and large enterprises. Startups are now able to immediately apply machine learning to their applications.
Larger enterprises are able to unleash the latent value in their big data to generate significantly more revenue and efficiencies. Above all, the speed of iteration and experimentation that is now possible will allow for rapid innovation and pave the way for intelligence in cloud-connected devices all around us.

Getting Started

Getting started with the Azure Machine Learning Service is easy. Within the current Azure Portal simply choose New->Data Services->Machine Learning to create your first ML service today:

Subscribe to the Machine Learning Team Blog to learn more about the Azure Machine Learning service. And visit our Azure Machine Learning documentation center to watch videos and explore tutorials on how to get started immediately.

Event Hub: Preview of new Azure Event Ingestion Service

Today’s connected world is defined by big data. Big data may originate from connected cars and thermostats that produce telemetry data every few minutes, application performance counters that generate events every second, or mobile apps that capture telemetry for every user’s individual action.

The rapid proliferation of connected devices raises challenges due to the variety of platforms and protocols involved. Connecting these disparate data sources while handling the scale of the aggregate stream is a significant challenge. I’m happy to announce the public preview of a significant new Azure service: Event Hub. Event Hub is a highly scalable pub-sub ingestor capable of elastic scale to handle millions of events per second from millions of connected devices, so that you can process and analyze the massive amounts of data produced by your connected devices and applications. With this new service, we now provide an easy way for you to provision capacity for ingesting events from a variety of sources, and over a variety of protocols, in a secure manner.
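To make the protocol flexibility concrete, here is a sketch of publishing a single event over plain HTTPS using Service Bus REST conventions. The namespace, hub, policy name and key below are placeholders, and the token/endpoint details should be checked against the Event Hub documentation:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Generate a Service Bus-style Shared Access Signature for the hub's send URL.
// The signature is an HMAC-SHA256 over "<url-encoded resource uri>\n<expiry>".
static string CreateServiceBusSasToken(string resourceUri, string keyName, string key, long expiryUnixSeconds)
{
    string encodedUri = Uri.EscapeDataString(resourceUri);
    string dataToSign = encodedUri + "\n" + expiryUnixSeconds;
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(dataToSign)));
        return string.Format("SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            encodedUri, Uri.EscapeDataString(signature), expiryUnixSeconds, keyName);
    }
}

// Events are then POSTed to the hub's /messages endpoint, for example:
//   var client = new HttpClient();
//   client.DefaultRequestHeaders.Add("Authorization", token);
//   client.PostAsync("https://mynamespace.servicebus.windows.net/myeventhub/messages",
//                    new StringContent("{\"DeviceId\":\"dev-01\",\"Temperature\":72.3}")).Wait();
string resource = "https://mynamespace.servicebus.windows.net/myeventhub";
string token = CreateServiceBusSasToken(resource, "SendPolicy", "<sas-key>", 1700000000);
Console.WriteLine(token);
```

Because this is just HTTPS plus an HMAC, even constrained devices without a client SDK can publish events this way.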
Event Hub supports a variety of partitioning modes to enable parallelism and scale in your downstream processing tier while preserving the order of events on a per-device basis.

Creating an Event Hub

You can easily create a new instance of Event Hub from the Azure Management Portal by clicking New->App Services->Service Bus->Event Hub. During the preview, the Event Hub service is available in a limited number of regions (East US 2, West Europe, Southeast Asia) and requires that you first create a new Service Bus Namespace:

Learn More

Try out the new Event Hub service and give us your feedback! For more information, visit the links below:

Event Hub introduction
Event Hub pricing details
Event Hub code sample
Service Bus Introduction

Mobile Services: General Availability of .NET Support, SignalR and Offline Sync

A few months ago I announced a preview of Mobile Services with .NET backend support. Today I am excited to announce the general availability of the Mobile Services .NET offering, which makes it an incredibly attractive choice for developers building mobile-facing backend APIs using .NET. Using Mobile Services you can now:

Quickly add a fully featured backend to your iOS, Android, Windows, Windows Phone, HTML or cross-platform Xamarin, Sencha, or PhoneGap app, leveraging ASP.NET Web API, Mobile Services, and corresponding Mobile Services client SDKs.

Publish any existing ASP.NET Web API to Azure and have Mobile Services monitor and manage your Web API controllers for you.

Take advantage of built-in mobile capabilities like push notifications, real-time notifications with SignalR, enterprise sign-on with Azure Active Directory, social auth, and offline data sync for occasionally connected scenarios. You can also take full advantage of Web API features like OData controllers, and 3rd-party Web API-based frameworks like Breeze.

Have your mobile app’s users log in via Azure Active Directory and securely access enterprise assets such as SharePoint and Office 365.
In addition, we've also enabled seamless connectivity to on-premises assets, so you can reach databases and web services that are not exposed to the Internet and are behind your company’s firewall.

Build, test, and debug your Mobile Services .NET backend using Visual Studio running locally on your machine or remotely in Azure.

You can learn more about Mobile Services .NET from this blog post, and the Mobile Services documentation center.

Real-time Push with Mobile Services and SignalR

We recently released an update to our Mobile Services .NET backend support which enables you to use ASP.NET SignalR for real-time, bi-directional communications with your mobile applications. SignalR will use WebSockets under the covers when available, and fall back to other “techniques” (i.e. HTTP hacks) when it isn't. Regardless of the mode, your application code stays the same.

The SignalR integration with Azure Mobile Services includes:

Turnkey Web API Integration: Send messages to your connected SignalR applications from any Web API controller or scheduled job – we automatically give you access to SignalR Hubs from the ApiServices context.

Unified Authentication: Protect your SignalR Hubs the same way you protect any of your Mobile Service Web API controllers, using a simple AuthorizeLevel attribute.

Automatic Scale-out: When scaling out your Azure Mobile Service using multiple front-ends, we automatically scale out SignalR using Azure Service Bus as the backplane for syncing between the front-ends. You don’t need to do anything to scale your SignalR Hubs.

Learn more about the SignalR capability in Mobile Services from Henrik’s blog.

Mobile Services Offline Sync support for Xamarin and native iOS apps

I've blogged earlier about the new Offline Sync feature in Mobile Services, which provides a lightweight, cross-platform way for applications to work with data even when they are offline / disconnected from the network.
At that time we released Offline Sync support for Windows Phone and Windows Store apps. Today we are also introducing a preview of Mobile Services Offline Sync for native iOS apps, as well as Xamarin.iOS and Xamarin.Android.

Mobile Services Accelerators

I’m pleased to also introduce our new Mobile Services Accelerators, which are feature-complete sample apps that demonstrate how to leverage the new enterprise features of the Mobile Services platform in an end-to-end scenario. We have two accelerator apps for you today, available as source code as well as published in the app store:

- Field Engineer App (Windows app source, Xamarin iOS app source, Windows app in the store), an app for mobile workers needing to visit customer sites to fix an issue.
- Sales Assistant App (Windows app source, Windows app in the store), an app for store managers and sales assistants that facilitates typical operations on the store floor, as well as reporting.

These apps leverage the Mobile Services .NET backend support to authenticate employees with Azure Active Directory, store data securely, work with data offline, and deliver reminders via push notifications. We hope you will find these apps useful for your teams as reference material. Stay tuned, as more accelerators are coming!

Notification Hubs: Price reductions and new features

The Azure Notification Hubs service enables large-scale, cross-platform push notifications from any server backend running on-premises or in the cloud.  It supports a variety of mobile devices including iOS, Android, Windows, Kindle Fire, and Nokia X. I am excited to announce several great updates to Azure Notification Hubs today:

- Price reduction. We are reducing the Notification Hubs price by up to 40x to accommodate a wider range of customer scenarios.
With the new price (effective September 1st), customers can send 1 million mobile push notifications per month for free, and pay $1 per additional million pushes using our new Basic tier. Visit the Notification Hubs pricing page for more details.

- Scheduled Push. You can now use Notification Hubs to schedule individual and broadcast push notifications at certain times of the day. For example, you can use this feature to schedule announcements to be delivered in the morning to your customers.  We include support to enable this no matter which time zone your customers are in.
- Bulk Registration management. You can now send bulk jobs to create, update or export millions of mobile device registrations at a time with a single API call. This is useful if you are moving from an old push notification system to Notification Hubs, or to import user segments from a 3rd party analytics system.

You can learn more about Azure Notification Hubs at the developer center.

SQL Databases: New Geo-Restore, Geo-Replication and Auditing support

In April 2014, we first previewed our new SQL Database service tiers: Basic, Standard, and Premium. Today, I’m excited to announce the addition of more features to the preview:

- Geo-restore: Designed for emergency data recovery when you need it most, geo-restore allows you to recover a database to any Azure region. Geo-restore uses geo-redundant Azure blob storage for automatic database backups and is available for Basic, Standard, and Premium databases in the Windows Azure Management Portal and REST APIs.
- Geo-replication: You can now configure your SQL Databases to use our built-in geo-replication support that enables you to set up an asynchronously replicated secondary SQL Database that can be failed over to in the event of disaster.  Geo-replication is available for Standard and Premium databases, and can be configured via the Windows Azure Management portal and REST APIs.
You can get more information about Azure SQL Database Business Continuity and geo-replication here and here.

- Auditing: Our new auditing capability tracks and logs events that occur in your database and provides dashboard views and reports that enable you to get insights into these events. You can use auditing to streamline compliance-related activities, gain knowledge about what is happening in your database, and identify trends, discrepancies and anomalies. Audit events are also written to an audit log which is stored in a user-designated Azure storage account.  Auditing is now available for all Basic, Standard, and Premium databases.

You can learn even more about these new features here.

Redis Cache: Large Cache Sizes, Six New Regions, Redis MaxMemory Policy Support

This past May, we launched the public preview of the new Azure Redis Cache service. This cache service gives you the ability to use a secure, dedicated Redis cache, managed as a service by Microsoft. Using the new Cache service, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft. Last month we updated the service with the following features:

- Support for larger cache sizes. We now support the following sizes: 250 MB, 1 GB, 2.5 GB, 6 GB, 13 GB and 26 GB.
- Support for six new Azure Regions. The full list of supported regions can be found on the Azure Regions page.
- Support for configuring the Redis MaxMemory policy.

For more information on the Azure Redis Cache, check out this blog post: Lap around Azure Redis Cache.

Storage: Support for Zone Redundant Storage

We are happy to introduce a new Azure Storage account offering: Zone Redundant Storage (ZRS). ZRS replicates your data across 2 to 3 facilities either within a single Azure region or across two Azure regions. If your storage account has ZRS enabled, then your data is durable even in the case where one of the datacenter facilities hosting your data suffers a catastrophic issue.
ZRS is also more cost efficient than the existing Geo Redundant Storage (GRS) offering we have today. You can create a ZRS storage account by simply choosing the ZRS option under the replication dropdown in the Azure Management Portal. You can find more information on pricing for ZRS at http://azure.microsoft.com/en-us/pricing/details/storage/.

Azure SDK: WebSites, Mobile, Virtual Machines, Storage and Cloud Service Enhancements

Earlier today we released the Update 3 release of Visual Studio 2013 as well as the new Azure SDK 2.4 release.  These updates contain a ton of great new features that make it even easier to build solutions in the cloud using Azure.  Today’s updates include:

Visual Studio Update 3
- Websites: Publish WebJobs from Console or Web projects.
- Mobile Services: Create a Dev/Test environment in the cloud when creating Mobile Services projects. Use the Push Notification Wizard with .NET Mobile Services.
- Notification Hubs: View and manage device registrations.

Azure SDK 2.4
- Virtual Machines: Remote debug 32-bit Virtual Machines. Configure Virtual Machines, including installation & configuration of dynamic extensions (e.g. anti-malware, Puppet, Chef and custom script). Create Virtual Machine snapshots of the disk state.
- Storage: View Storage activity logs for diagnostics. Provision Read-Access Geo-redundant Storage from Visual Studio.
- Cloud Services: Emulator Express is the default option for new projects (Full Emulator is deprecated). Configure new networking capabilities in the service model.

You can learn all about the updates from the Azure team’s SDK announcement blog post.

Summary

This most recent release of Azure includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott

P.S.
In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
Posted about 11 years ago
Last week MS Press published a free ebook based on the Building Real-World Apps using Azure talks I gave at the NDC and TechEd conferences.  The talks + book walk through a patterns-based approach to building real world cloud solutions, and help make it easier to understand how to be successful with cloud development.

Videos of the Talks

You can watch a video recording of the talks I gave here:  Part 1: Building Real World Cloud Apps with Azure  Part 2: Building Real World Cloud Apps with Azure

eBook Downloads

You can now download a completely free PDF, Mobi or ePub version of the ebook based on the talks using the links below: Download the PDF (6.35 MB)   Download the EPUB file (12.3 MB)   Download the Mobi for Kindle file (22.7 MB)

Hope this helps, Scott
Posted about 11 years ago by ScottGu
This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

- Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
- Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
- Storage: General Availability of Import/Export service and preview of new SMB file sharing support
- Remote App: Public preview of Remote App Service – run client apps in the cloud
- API Management: Preview of the new Azure API Management Service
- Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
- Cache: Preview of new Redis Cache Service
- Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier. Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent: Microsoft Antimalware, Symantec Endpoint Protection, and TrendMicro’s Deep Security Agent. These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.
These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create a new Virtual Machine. Once checked we’ll automate installing and running them within your VM.

Custom PowerShell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a PowerShell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them.  With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM and a running VM (there is no need to restart it, and the capture command completes in under a minute).  To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create.  Once the image is captured it will show up in the “Images” section of the VM gallery – allowing you to easily create any number of new VM instances from it. This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.
Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service. ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, with guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers. We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to set up private fiber connectivity directly to Azure using ExpressRoute. You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site-to-Site VPNs

Virtual Networks in Azure now support more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

VNET-to-VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another.
Using this feature, you can connect VNETs that are running in the same or different Azure regions and, in the case of different Azure regions, have the traffic securely routed via the Microsoft network backbone. This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic. You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level Public IPs, Internal Load Balancing Support, Traffic Manager

With today’s release we are also making available several highly requested networking features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want the IP address to survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost. This feature is now generally available as of today.
You can enable it via the command-line using new PowerShell cmdlets that we now support:

# Reserve an IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

# Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs.  We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load-balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – which means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public-facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost. We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.
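To make the internal load balancing idea concrete, here is a small illustrative sketch. This is not how the Azure load balancer is implemented (Azure uses its own distribution algorithm); round-robin over a pool of private backend addresses simply shows what "load-balancing VMs on a private IP" means — requests arrive at one internal address and fan out across backends that are unreachable from outside the network. The IP addresses below are hypothetical.

```python
# Conceptual round-robin sketch of load balancing across a private
# backend pool (illustrative only; not the Azure ILB implementation).
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._pool = cycle(backends)

    def next_backend(self):
        # Pick the next backend in rotation for an incoming request.
        return next(self._pool)

# Three database-tier VMs with private, VNET-only IPs (hypothetical).
lb = RoundRobinBalancer(["10.0.1.4", "10.0.1.5", "10.0.1.6"])
assigned = [lb.next_backend() for _ in range(4)]
print(assigned)  # ['10.0.1.4', '10.0.1.5', '10.0.1.6', '10.0.1.4']
```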
Traffic Manager support for external endpoints

Starting today, Traffic Manager supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints). Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using PowerShell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service. The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost-effectively, and without being constrained by your network bandwidth. This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.
Simply provide an appropriate FedEx/DHL account number and we will also automatically ship the drives back to you. More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to [email protected].

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens up a variety of additional non-SMB sharing scenarios. The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo-redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

- Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premises file shares to share data between parts of the application.
- Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
- Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interfaces allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
- Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time-consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows client applications from the Azure cloud. Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity. With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud. With this service, you can bring scale, agility and global access to your business applications. Azure RemoteApp is free during the preview period. Learn more about Azure RemoteApp and try the service free during preview.
Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free. With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets. Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET). The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still gives enterprise administrators control and visibility into the resources accessed by their hybrid applications. You can learn more about Hybrid Connections using the following links:

- Overview: Hybrid Connections
- How-To: Connect an Azure Website with an On-Premises Resource
- Tutorial: Connect an Azure Website to an On-Premises SQL Server using Hybrid Connections
- Tutorial: Connect an Azure Mobile Services .NET Backend to an On-Premises Resource using Hybrid Connections

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs.
Today, I’m excited to announce the public preview of the new Azure API Management service that helps you better achieve this. The new Azure API Management service allows you to create an easy-to-use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), lets you deliver a friendly API developer portal to your customers with documentation and samples, provides per-developer metering that protects your APIs from abuse and overuse, and enables you to monitor and track API usage analytics.

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

Publishing an API

A typical API publishing workflow involves creating an API: first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume. To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, the location of the backend service, and the suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service. Once you’ve defined the settings, click Save to create the API endpoint. Once you’ve created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API.
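As a rough illustration of what a rate-limit policy enforces — this is a conceptual sketch, not API Management's actual implementation — consider a fixed-window limiter keyed by developer subscription key: each key gets N calls per time window, and calls beyond that are rejected until the window resets.

```python
# Conceptual fixed-window rate limiter, keyed per developer
# (illustrative only; the managed API Management service applies
# such policies for you at the gateway).
import time

class RateLimiter:
    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.counters = {}   # developer key -> (window_start, call_count)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(key, (now, 0))
        if now - start >= self.window:        # window expired: reset it
            start, count = now, 0
        if count >= self.max_calls:
            return False                      # quota exhausted this window
        self.counters[key] = (start, count + 1)
        return True

# 3 calls per minute allowed per developer key (hypothetical key name).
limiter = RateLimiter(max_calls=3, window_seconds=60)
results = [limiter.allow("dev-key-1", now=100.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The managed service layers quotas (long-period caps) on top of rate limits (short-period burst protection); the reset-on-window-expiry logic above is the shared core of both.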
These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all, this is all automatically generated for you. You can test out any of the APIs you’ve published without writing a line of code by using the interactive console.

Analytics and Reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time-series metrics as well as maps showing which geographies drive them.

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

- Easily create an API façade for existing backend services
- Quickly add new capabilities to the APIs, such as response caching and cross domain access
- Package and publish APIs to developers and partners, internal and external
- Reliably protect published APIs from misuse and abuse
- Azure API Management developer center

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.
This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft. We are offering the Azure Redis Cache Preview in two tiers:

- Basic – A single Cache node (ideal for dev/test and non-critical workloads)
- Standard – A replicated Cache (two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in 250 MB and 1 GB sizes. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview). Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings.

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.
The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal. Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");
string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.

Learn More

For more information, visit the following links: Getting Started guide, Documentation, MSDN forum for answers to all your Redis Cache questions.

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA. You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to the Azure Store can be managed by your EA Azure enrollment administrators, by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via the Store.  Please visit the Azure Store to learn more.
Summary Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microosft Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu [Less]
Posted about 11 years ago
This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include: Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing Storage: General Availability of Import/Export service and preview of new SMB file sharing support Remote App: Public preview of Remote App Service – run client apps in the cloud API Management: Preview of the new Azure API Management Service Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included) Cache: Preview of new Redis Cache Service Store: Support for Enterprise Agreement customers and channel partners All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them: Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier. Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent: Microsoft Antimalware Symantec Endpoint Protection TrendMicro’s Deep Security Agent These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  
These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create a new Virtual Machine: Once checked we’ll automate installing and running them within your VM. Custom PowerShell Script This week we’ve also enabled a new “Custom Script” extension that enables you to specify a PowerShell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month. Virtual Machines: Support for Capturing Images with both OS + Data Drives attached Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them.  With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM and a running VM (there is no need to restart it, and the capture command completes in under a minute).  To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create:  Once the image is captured it will show up in the “Images” section of the VM gallery – allowing you to easily create any number of new VM instances from it: This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario. 
Networking: General Availability of Azure ExpressRoute I’m excited to announce the general availability release today of the Azure ExpressRoute service. ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, with guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers. We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to set up private fiber connectivity directly to Azure using ExpressRoute. You can get more information on the ExpressRoute website. Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity. Multiple Site to Site VPNs Virtual Networks in Azure now support more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime. VNET to VNET Connectivity With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. 
Using this feature, you can connect VNETs that are running in the same or different Azure regions and, in the case of different Azure regions, have the traffic securely routed via the Microsoft network backbone. This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic. You can get more information on the Virtual Network website. Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager  With today’s release we are also making available several highly requested IP address features: IP Reservations With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost. This feature is now generally available as of today.  
You can enable it via the command-line using new PowerShell cmdlets that we now support: #Reserve an IP New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US" #Use the Reserved IP during deployment New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP We will enable portal management support in a future management portal update. Public IP Address per Virtual Machine With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs.  We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell. Internal Load Balancing (ILB) Support Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost. We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell. 
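To make the load-balancing behavior concrete, here is a minimal round-robin sketch in Python. It is purely illustrative: the real Azure ILB distributes connections in the network fabric rather than in application code, and the private IPs below are made up.

```python
from itertools import cycle

class RoundRobinPool:
    """Round-robin selection over a pool of backend private IPs.

    A conceptual stand-in for what an internal load balancer does;
    Azure's ILB performs this at the network level, not in your app.
    """
    def __init__(self, backends):
        self._backends = list(backends)
        self._cycle = cycle(self._backends)

    def next_backend(self):
        # Hand out backends in strict rotation.
        return next(self._cycle)

# Example: three database-tier VMs on a virtual network (hypothetical IPs).
pool = RoundRobinPool(["10.0.1.4", "10.0.1.5", "10.0.1.6"])
picks = [pool.next_backend() for _ in range(4)]
```

The fourth pick wraps back to the first backend, which is the essential property: each private endpoint receives an even share of connections without being exposed publicly.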
Traffic Manager support for external endpoints Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints). Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using PowerShell. Learning More You can learn more about Reserved IP addresses and the above networking features here. Storage: General Availability Release of Azure Import/Export Service Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service. The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth. This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  
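To see why shipping drives can beat the wire for bulk data, a rough back-of-the-envelope calculation helps (the numbers below are illustrative inputs, not Azure limits or quotas):

```python
def transfer_days(data_gb, bandwidth_mbps):
    """Days needed to move data_gb over a link running flat-out at
    bandwidth_mbps. Uses decimal units: 1 GB = 8 Gb = 8000 Mb."""
    seconds = (data_gb * 8 * 1000) / bandwidth_mbps
    return seconds / 86400  # seconds per day

# Moving 10 TB over a dedicated 100 Mbps uplink (hypothetical setup):
days = transfer_days(10_000, 100)
```

At these example numbers the transfer takes more than nine days of saturated uplink, while a courier typically delivers a drive in a day or two, which is the arbitrage the Import/Export service exploits.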
Simply provide an appropriate FedEx/DHL account number and we will also automatically ship the drives back to you: More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to [email protected]. Storage: New SMB File Sharing Service I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios. The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you. Common Scenarios Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premises file shares to share data between parts of the application. Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances. 
Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interfaces allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data. Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time-consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine. To learn more about how to use the new Azure File Service visit here. RemoteApp: Preview of new Remote App Service I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows client applications from the Azure cloud. Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity. With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud. With this service, you can bring scale, agility and global access to your business applications. Azure RemoteApp is free during the preview period. Learn more about Azure RemoteApp and try the service free during the preview. 
Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a free tier of Azure BizTalk Services that enables everyone to use this new Hybrid Connections feature for free. With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets. Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET). The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support gives enterprise administrators control of, and visibility into, the resources accessed by their hybrid applications. You can learn more about Hybrid Connections using the following links: Overview: Hybrid Connections How-To: Connect an Azure Website with an On-Premises Resource Tutorial: Connect an Azure Website to an On-Premises SQL Server using Hybrid Connections Tutorial: Connect an Azure Mobile Services .NET Backend to an On-Premises Resource using Hybrid Connections API Management: Announcing Preview of new Azure API Management Service With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. 
Today, I’m excited to announce the public preview of the new Azure API Management service that helps you better achieve this. The new Azure API Management service allows you to create an easy-to-use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), enables you to deliver a friendly API developer portal to your customers with documentation and samples, provides per-developer metering support that protects your APIs from abuse and overuse, and enables you to monitor and track API usage analytics: Creating an API Management service You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal. Publishing an API A typical API publishing workflow involves first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume. To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, the location of the backend service, and the suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service. Once you’ve defined the settings, click Save to create the API endpoint:   Once you’ve created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API. 
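As a rough illustration of what per-developer metering means in practice, here is a minimal fixed-window quota sketch in Python. The class, key names and limits are invented for the example; this is not the API Management policy engine, just the shape of the check a gateway performs on each call.

```python
class DeveloperQuota:
    """Fixed-window call quota per developer key.

    Illustrative sketch of per-developer metering: each key gets a
    fixed number of calls per window; calls beyond that are rejected
    (a real gateway would answer HTTP 429 Too Many Requests).
    """
    def __init__(self, limit_per_window):
        self.limit = limit_per_window
        self.counts = {}  # developer key -> calls used this window

    def allow(self, developer_key):
        used = self.counts.get(developer_key, 0)
        if used >= self.limit:
            return False  # over quota for this window
        self.counts[developer_key] = used + 1
        return True

# Hypothetical developer key making four calls against a 3-call window:
quota = DeveloperQuota(limit_per_window=3)
results = [quota.allow("dev-123") for _ in range(4)]
```

The fourth call is refused while other keys remain unaffected, which is the per-developer isolation the metering policy provides.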
These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused. Developer Portal Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all, this is all automatically generated for you: You can test out any of the APIs you’ve published without writing a line of code by using the interactive console. Analytics and reports Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well as maps to show what geographies drive them. Learn More We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below: Easily create an API façade for the existing backend services Quickly add new capabilities to the APIs, such as response caching and cross domain access Package and publish APIs to developers and partners, internal and external Reliably protect published APIs from misuse and abuse Azure API Management developer center Cache: New Azure Redis Cache Service I’m excited to announce the preview of a new Azure Redis Cache Service. 
This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft. We are offering the Azure Redis Cache Preview in two tiers: Basic – A single Cache node (ideal for dev/test and non-critical workloads) Standard – A replicated Cache (Two nodes, a Master and a Slave) During the preview period, the Azure Redis Cache will be available in 250 MB and 1 GB sizes. For a limited time, the cache will be offered free, with a limit of two caches per subscription. Creating a New Cache Instance Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview): Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings: Connect to the Cache Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager. 
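The StringSet/StringGet usage shown below is an instance of the classic cache-aside pattern. As a language-neutral sketch, here it is in Python with a dict-backed stub standing in for the real Redis client, so no Azure endpoint or access key is needed; against the actual service you would pass a connected client with the same get/set shape instead.

```python
def get_or_compute(cache, key, compute):
    """Cache-aside read: return the cached value if present,
    otherwise compute it, store it, and return it."""
    value = cache.get(key)
    if value is None:
        value = compute()
        cache.set(key, value)
    return value

class DictCache:
    """In-memory stand-in for a Redis client (get/set only)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

cache = DictCache()
first = get_or_compute(cache, "Key1", lambda: "HelloWorld")   # miss: computes and stores
second = get_or_compute(cache, "Key1", lambda: "recomputed")  # hit: returns cached value
```

The second call never invokes its compute function, which is the whole point of the pattern: expensive work happens once and subsequent readers are served from the cache.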
The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal: Once you’ve retrieved these you can create a connection instance to the cache with the code below: var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=..."); Once the connection is established, you can retrieve a reference to the Redis cache database by calling the ConnectionMultiplexer.GetDatabase method. IDatabase cache = connection.GetDatabase(); Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods. cache.StringSet("Key1", "HelloWorld"); string value = cache.StringGet("Key1"); You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure. Learn More For more information, visit the following links: Getting Started guide Documentation MSDN forum for answers to all your Redis Cache questions Store: Support for EA customers and channel partners in the Azure Store With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA. You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to the Azure Store can be managed by your EA Azure enrollment administrators by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via the Store.  Please visit the Azure Store to learn more. 
Summary Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
Posted over 11 years ago by ScottGu
Earlier this month at the Build conference, we announced a number of great new improvements coming to SQL Databases on Azure including: an improved 99.95% SLA, support for databases up to 500GB in size, self-service restore capability, and new Active Geo Replication support.  This 3 minute video shows a segment of my keynote where I walked through the new capabilities: Last week we made these new capabilities available in preview form, and also introduced new SQL Database service tiers that make it easy to take advantage of them. New SQL Database Service Tiers Last week we introduced new Basic and Standard tier options with SQL Databases – which are additions to the existing Premium tier we previously announced.  Collectively these tiers provide a flexible set of offerings that enable you to cost-effectively deploy and host SQL Databases on Azure: Basic Tier: Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable hourly transaction rate. Standard Tier: Standard is the go-to option for cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per-minute transaction rates. Premium Tier: Premium is designed for mission-critical databases. It offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable per-second transaction rates. You do not need to buy a SQL Server license in order to use any of these pricing tiers – all of the licensing and runtime costs are built into the price, and the databases are automatically managed (high availability, auto-patching and backups are all built-in).  We also now provide you the ability to pay for the database at the per-day granularity (meaning if you only run the database for a few days you only pay for the days you had it – not the entire month).  
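To make the per-day billing model concrete, here is a quick sketch. The $0.16/day rate and 50% preview discount are used purely as illustrative inputs; consult the pricing page for actual figures.

```python
def billed_amount(daily_rate, days_run, preview_discount=0.0):
    """Per-day billing: you pay only for the days the database existed.
    daily_rate and preview_discount are illustrative inputs, not a
    price sheet. Result is rounded to cents."""
    return round(daily_rate * days_run * (1 - preview_discount), 2)

full_month = billed_amount(0.16, 31)        # database ran all 31 days
few_days   = billed_amount(0.16, 3)         # deleted after 3 days
discounted = billed_amount(0.16, 31, 0.50)  # with a 50% preview discount
```

Running a database for only three days costs a tenth of the full month, which is the elasticity the per-day granularity buys you compared with flat monthly billing.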
The price for the new SQL Database Basic tier starts as low as $0.16/day ($4.96 per month) for a 2 GB SQL Database.  During the preview period we are providing an additional 50% discount on top of these prices.  You can learn more about the pricing of the new tiers here. Improved 99.95% SLA and Larger Database Sizes We are extending the availability SLA of all of the new SQL Database tiers to be 99.95%.  This SLA applies to the Basic, Standard and Premium tier options – enabling you to deploy and run SQL Databases on Azure with even more confidence. We are also increasing the maximum sizes of databases that are supported: Basic Tier: Supports databases up to 2 GB in size Standard Tier: Supports databases up to 250 GB in size.  Premium Tier: Supports databases up to 500 GB in size. Note that the pricing model for our service tiers has also changed so that you no longer need to pay a per-database size fee (previously we charged a per-GB rate); instead we now charge a flat rate per service tier. Predictable Performance Levels with Built-in Usage Reports Within the new service tiers, we are also introducing the concept of performance levels, each a defined level of database resources that you can depend on when choosing a tier.  This enables us to provide a much more consistent performance experience that you can design your application around. The resources of each service tier and performance level are expressed in terms of Database Throughput Units (DTUs). A DTU provides a way to describe the relative capacity of a performance level based on a blended measure of CPU, memory, and read and write rates. Doubling the DTU rating of a database equates to doubling the database resources.  You can learn more about the performance levels of each service tier here. Monitoring your resource usage You can now monitor the resource usage of your SQL Databases via both an API as well as the Azure Management Portal.  
Metrics include: CPU, reads/writes and memory (not available this week but coming soon). You can also track your performance usage relative (as a percentage) to the available DTU resources within your service tier level: Dynamically Adjusting your Service Tier One of the benefits of the new SQL Database Service Tiers is that you can dynamically increase or decrease them depending on the needs of your application.  For example, you can start off on a lower service tier/performance level and then gradually increase the service tier levels as your application becomes popular and you need more resources.  It is quick and easy to change between service tiers or performance levels — it’s a simple online operation.  Because you now pay for SQL Databases by the day (as opposed to the month) this ability to dynamically adjust your service tier up or down also enables you to leverage the elastic nature of the cloud and save money. Read this article to learn more about how performance works in the new system and the benchmarks for each service tier. New Self-Service Restore Support Have you ever had that sickening feeling when you’ve realized that you inadvertently deleted data within a database and might not have a backup?  We now have built-in self-service restore support with SQL Databases that helps you protect against this.  This support is available in all service tiers (even the Basic Tier). SQL Database now automatically takes database backups daily and log backups every 5 minutes. The daily backups are also stored in geo-replicated Azure Storage (which will store a copy of them at least 500 miles away from your primary region). 
Using the new self-service restore functionality, you can now restore your database to a point in time in the past as defined by the specified backup retention policies of your service tier: Basic Tier: Restore from most recent daily backup Standard Tier: Restore to any point in last 7 days Premium Tier: Restore to any point in last 35 days Restores can be accomplished using either an API we provide or via the Azure Management Portal: New Active Geo-replication Support For Premium Tier databases, we are also adding support that enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well: One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster, etc).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions: This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.  You can learn more about this support here. Start Using the Preview of All of the Above Features Today! All of the above features are now available to start using in preview form.  You can sign up for the preview by visiting our Preview center and clicking the “Try Now” button on the “New Service Tiers for SQL Databases” option.  You can then choose which Azure subscription you wish to enable them for.  Once enabled, you can immediately start creating new Basic, Standard or Premium SQL Databases. 
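The per-tier retention windows described above can be expressed as a quick validity check. This is an illustrative sketch only, not the actual restore API; the function and tier table are invented for the example (and Basic is treated as out of scope, since it restores from the most recent daily backup rather than to an arbitrary point in time).

```python
from datetime import datetime, timedelta

# Continuous point-in-time retention windows, per the tiers above.
# Basic is omitted: it restores from the most recent daily backup only.
RETENTION_DAYS = {"Standard": 7, "Premium": 35}

def can_restore_to(tier, restore_point, now):
    """True if a point-in-time restore to restore_point falls inside
    the tier's retention window. Illustrative check, not the service API."""
    window = RETENTION_DAYS.get(tier)
    if window is None:
        return False  # no continuous point-in-time window for this tier
    return now - timedelta(days=window) <= restore_point <= now

now = datetime(2014, 4, 30, 12, 0)
ok = can_restore_to("Standard", now - timedelta(days=3), now)        # inside 7-day window
too_old = can_restore_to("Standard", now - timedelta(days=10), now)  # outside 7-day window
```

The same 10-day-old restore point that Standard rejects would be accepted on Premium, whose window extends to 35 days; that difference is the main reason to pick a higher tier for databases with strict recovery requirements.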
Summary This update of SQL Database support on Azure provides some great new features that enable you to build even better cloud solutions.  If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu [Less]
Posted over 11 years ago
Earlier this month at the Build conference, we announced a number of great new improvements coming to SQL Databases on Azure, including: an improved 99.95% SLA, support for databases up to 500GB in size, self-service restore capability, and new Active Geo Replication support.  This 3 minute video shows a segment of my keynote where I walked through the new capabilities.

Last week we made these new capabilities available in preview form, and also introduced new SQL Database service tiers that make it easy to take advantage of them.

New SQL Database Service Tiers

Last week we introduced new Basic and Standard tier options with SQL Databases – additions to the existing Premium tier we previously announced.  Collectively these tiers provide a flexible set of offerings that enable you to cost-effectively deploy and host SQL Databases on Azure:

Basic Tier: Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable hourly transaction rate.

Standard Tier: The go-to option for cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per-minute transaction rates.

Premium Tier: Designed for mission-critical databases. It offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable per-second transaction rates.

You do not need to buy a SQL Server license in order to use any of these pricing tiers – all of the licensing and runtime costs are built into the price, and the databases are automatically managed (high availability, auto-patching and backups are all built-in).  We also now let you pay for the database at per-day granularity (meaning if you only run the database for a few days, you only pay for the days you had it – not the entire month).
The price for the new SQL Database Basic tier starts as low as $0.16/day ($4.96 per month) for a 2 GB SQL Database.  During the preview period we are providing an additional 50% discount on top of these prices.  You can learn more about the pricing of the new tiers here.

Improved 99.95% SLA and Larger Database Sizes

We are extending the availability SLA of all of the new SQL Database tiers to 99.95%.  This SLA applies to the Basic, Standard and Premium tier options – enabling you to deploy and run SQL Databases on Azure with even more confidence.

We are also increasing the maximum supported database sizes:

Basic Tier: Supports databases up to 2 GB in size.
Standard Tier: Supports databases up to 250 GB in size.
Premium Tier: Supports databases up to 500 GB in size.

Note that the pricing model for our service tiers has also changed so that you no longer pay a per-database-size fee (previously we charged a per-GB rate) – instead we now charge a flat rate per service tier.

Predictable Performance Levels with Built-in Usage Reports

Within the new service tiers, we are also introducing the concept of performance levels, which are a defined level of database resources that you can depend on when choosing a tier.  This enables us to provide a much more consistent performance experience that you can design your application around.

The resources of each service tier and performance level are expressed in terms of Database Throughput Units (DTUs). A DTU provides a way to describe the relative capacity of a performance level based on a blended measure of CPU, memory, and read and write rates. Doubling the DTU rating of a database equates to doubling the database resources.  You can learn more about the performance levels of each service tier here.

Monitoring your resource usage

You can now monitor the resource usage of your SQL Databases via both an API and the Azure Management Portal.
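The size limits and flat-rate model above can be captured in a few lines. The following is an illustrative Python sketch, not an Azure API; the helper name and lookup table are invented:

```python
# Hypothetical helper capturing the documented tier limits: Basic up to 2 GB,
# Standard up to 250 GB, Premium up to 500 GB, each billed at a flat rate per
# tier rather than per GB.
TIER_SIZE_LIMITS_GB = [("Basic", 2), ("Standard", 250), ("Premium", 500)]

def smallest_tier_for(db_size_gb):
    """Return the lowest service tier whose size limit fits the database."""
    for name, max_gb in TIER_SIZE_LIMITS_GB:
        if db_size_gb <= max_gb:
            return name
    raise ValueError("database exceeds the 500 GB Premium limit")
```

For example, a 100 GB database would land in the Standard tier under these limits.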
Metrics include CPU, reads/writes, and memory (memory is not available this week but is coming soon).  You can also track your performance usage as a percentage of the available DTU resources within your service tier level.

Dynamically Adjusting your Service Tier

One of the benefits of the new SQL Database Service Tiers is that you can dynamically increase or decrease them depending on the needs of your application.  For example, you can start off on a lower service tier/performance level and then gradually increase the service tier levels as your application becomes popular and you need more resources.  It is quick and easy to change between service tiers or performance levels – it’s a simple online operation.  Because you now pay for SQL Databases by the day (as opposed to the month), this ability to dynamically adjust your service tier up or down also enables you to leverage the elastic nature of the cloud and save money.

Read this article to learn more about how performance works in the new system and the benchmarks for each service tier.

New Self-Service Restore Support

Have you ever had that sickening feeling when you’ve realized that you inadvertently deleted data within a database and might not have a backup?  We now have built-in Self-Service Restore support with SQL Databases that helps you protect against this.  This support is available in all service tiers (even the Basic Tier).

SQL Databases now automatically take database backups daily and log backups every 5 minutes. The daily backups are also stored in geo-replicated Azure Storage (which stores a copy of them at least 500 miles away from your primary region).
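Since a DTU is a blended measure of several resources, the portal's relative-usage figure can be thought of along these lines. The exact blending formula isn't given here, so this Python sketch makes an assumption: it reports the bottleneck (highest) component as the overall percentage:

```python
def dtu_percent(cpu_pct, read_write_pct, memory_pct=None):
    """One plausible way to blend per-resource usage into a single relative
    DTU figure: report the most constrained (highest) component. This is an
    illustrative assumption, not the formula the service actually uses."""
    components = [cpu_pct, read_write_pct]
    if memory_pct is not None:  # the memory metric is "coming soon" per the post
        components.append(memory_pct)
    return max(components)
```

Under this model, a database at 40% CPU but 75% of its read/write allotment would show 75% DTU usage, telling you I/O is the resource to watch before moving up a tier.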
Using the new self-service restore functionality, you can now restore your database to a point in time in the past, as defined by the backup retention policies of your service tier:

Basic Tier: Restore from the most recent daily backup.
Standard Tier: Restore to any point in the last 7 days.
Premium Tier: Restore to any point in the last 35 days.

Restores can be accomplished using either an API we provide or the Azure Management Portal.

New Active Geo-replication Support

For Premium Tier databases, we are also adding support that enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well.

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions.

This provides a robust business continuity offering, and enables you to run mission-critical solutions in the cloud with confidence.  You can learn more about this support here.

Start using the Preview of all of the Above Features Today!

All of the above features are now available to start using in preview form.  You can sign up for the preview by visiting our Preview center and clicking the “Try Now” button on the “New Service Tiers for SQL Databases” option.  You can then choose which Azure subscription you wish to enable them for.  Once enabled, you can immediately start creating new Basic, Standard or Premium SQL Databases.
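The retention windows above lend themselves to a small validity check. This Python sketch is illustrative only, and it simplifies Basic to a 1-day window even though, strictly, Basic restores only from the most recent daily backup rather than to an arbitrary point in time:

```python
from datetime import datetime, timedelta

# Retention windows from the tier list above. Basic is modeled as 1 day,
# which is a simplification (it only restores the most recent daily backup).
RETENTION_DAYS = {"Basic": 1, "Standard": 7, "Premium": 35}

def can_restore_to(tier, restore_point, now):
    """Check whether a requested restore point falls inside the
    tier's retention window."""
    return now - timedelta(days=RETENTION_DAYS[tier]) <= restore_point <= now
```

So a restore to three days ago is in range on Standard, but a restore to ten days ago would require Premium.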
Summary

This update of SQL Database support on Azure provides some great new features that enable you to build even better cloud solutions.  If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
Posted over 11 years ago by ScottGu
It has been a really busy last 10 days for the Azure team. This blog post quickly recaps a few of the significant enhancements we’ve made.  These include:

Web Sites: SSL included, Traffic Manager, Java Support, Basic Tier
Virtual Machines: Support for Chef and Puppet extensions, Basic Pricing tier for Compute Instances
Virtual Network: General Availability of DynamicRouting VPN Gateways and Point-to-Site VPN
Mobile Services: Preview of Visual Studio support for .NET, Azure Active Directory integration and Offline support
Notification Hubs: Support for Kindle Fire devices and Visual Studio Server Explorer integration
Autoscale: General Availability release
Storage: General Availability release of Read Access Geo Redundant Storage
Active Directory Premium: General Availability release
Scheduler service: General Availability release
Automation: Preview release of new Azure Automation service

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Web Sites: SSL now included at no additional charge in Standard Tiers

With Azure Web Sites you can host up to 500 web-sites in a single standard tier hosting plan.  Azure web-sites run in VMs isolated to host only your web applications (giving you predictable performance and security isolation), and you can scale up/down the number of VMs either manually or using our built-in AutoScale functionality.  The pricing for standard tier web-sites is based on the number of VMs you run – if you host all 500 web-sites on a single VM then all you pay for is that single VM; if you scale up your web site plan to run across two VMs then you’d pay for two VMs, etc. Prior to this month we charged an additional fee if you wanted to enable SSL for the sites.
Starting this month, we now include the ability to use 5 SNI-based SSL certificates and 1 IP-based SSL certificate with each standard tier web site hosting plan at no additional charge.  This makes it even easier (and cheaper) to SSL-enable your web-sites.

Web Sites: Traffic Manager Support

I’ve blogged in the past about the Traffic Manager service we have built into Azure.  The Azure Traffic Manager service allows you to control the distribution of user traffic to applications that you host within Azure. This enables you to run instances of your applications across different Azure regions all over the world.  Traffic Manager works by applying an intelligent routing policy engine to the Domain Name Service (DNS) queries on your domain names, and maps the DNS routes to the appropriate instances of your applications (e.g. you can set up Traffic Manager to route customers in Europe to a European instance of your app, and customers in North America to a US instance of your app).

You can use Traffic Manager to improve application availability – by enabling automatic customer traffic fail-over in the event of issues with one of your application instances.  You can also use Traffic Manager to improve application performance – by automatically routing your customers to the application instance nearest them.

We are excited to now provide general availability support for Traffic Manager with Azure Web Sites.  This enables you to improve both the performance and availability of your web-sites.  You can learn more about how to take advantage of this new support here.

Web Sites: Java Support

This past week we added support for an additional server language with Azure Web Sites – Java.
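The availability and performance routing behaviors described above can be modeled in a few lines. This is a toy Python sketch of the routing decision with invented names; real Traffic Manager works at the DNS layer rather than per request:

```python
def pick_endpoint(endpoints):
    """Toy model of performance routing with failover: answer with the
    healthy endpoint that has the lowest latency to the caller, skipping
    unhealthy instances. `endpoints` maps a deployment name to a
    (latency_ms, healthy) pair; the shape is invented for this sketch."""
    healthy = {name: latency for name, (latency, ok) in endpoints.items() if ok}
    if not healthy:
        raise RuntimeError("no healthy endpoints to route to")
    return min(healthy, key=healthy.get)
```

A European caller would be routed to the nearby instance; if that instance goes unhealthy, traffic fails over to the next-closest one automatically.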
It is now easy to deploy and run Java web applications written using a variety of frameworks and containers, including:

Java 1.7.0_51 – the default supported Java runtime
Tomcat 7.0.50 – the default Java container
Jetty 9.1.0

You can manage which Java runtime you use, as well as which container hosts your applications, using the Azure management portal or our management APIs.  This blog post provides more detail on the new support and options.

With this announcement, Azure Web Sites now provides first-class support for building web applications and sites using .NET, PHP, Node.js, Python and Java.  This enables you to use a wide variety of languages + frameworks to build your applications, and take advantage of all the great capabilities that Web Sites provide (easy deployment, continuous deployment, AutoScale, staging support, Traffic Manager, outside-in monitoring, backup, etc).

Web Sites: Support for Wildcard DNS and SSL Certificates

Azure Web Sites now supports the ability to map wildcard DNS and SSL certificates to web-sites.  This enables a variety of scenarios – including the ability to map wildcard vanity domains (e.g. *.myapp.com – for example: scottgu.myapp.com) to a single backend web site.  This can be particularly useful for SaaS-based scenarios. Scott Cate has an excellent video that walks through how to easily set this support up.

Web Sites: New Basic Tier Pricing Option

Earlier in this post I talked about how we now include the ability to use 5 SNI and 1 IP-based SSL certificate at no additional cost with each standard tier Azure web site hosting plan.  We have also recently announced that we are including the auto-scale, traffic management, backup, staging and web jobs features at no additional cost as part of each standard tier Azure web site hosting plan as well.  We think the combination of these features provides an incredibly compelling way to securely host and run any web application.
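The wildcard-to-tenant mapping that makes the SaaS scenario work can be sketched as follows. This is hypothetical Python for illustration (the function name is invented; the *.myapp.com wildcard comes from the example above):

```python
def tenant_from_host(host, wildcard="*.myapp.com"):
    """Resolve a vanity subdomain (e.g. scottgu.myapp.com) against a
    wildcard binding and return the tenant name, or None on no match.
    One level of subdomain only, mirroring a single-label wildcard."""
    suffix = wildcard.lstrip("*")          # ".myapp.com"
    if not host.endswith(suffix):
        return None
    tenant = host[: -len(suffix)]
    return tenant if tenant and "." not in tenant else None
```

A single backend site can then use the extracted tenant name to pick the right customer's data, instead of deploying one site per customer.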
Starting this month we are also introducing a new “basic tier” option for Azure web sites which enables you to run web applications without some of these additional features – at 25% less cost.  We think the basic tier is great for smaller/less-sophisticated web applications, and enables you to be successful while paying even less.  For additional details about the Basic tier pricing, visit the Azure Web Sites pricing page.  You can select which tier your web-site hosting plan uses by clicking the Scale tab within the Web Site extension of the Azure management portal.

Virtual Machines: Create from Visual Studio

With the most recent Azure SDK 2.3 release, it is now possible to create Virtual Machines directly inside Visual Studio’s Server Explorer.  Simply right-click on the Azure node within it, and choose the “Create Virtual Machine” menu option. This brings up a “Create New Virtual Machine” wizard that walks you through creating a Virtual Machine, picking an image to run in it, attaching it to a virtual network, and opening up firewall ports, all from within Visual Studio. Once created, you can then manage the VM (shutdown, restart, start, remote desktop, enable debugging, attach debugger) all from within Visual Studio. This makes it incredibly easy to start taking advantage of Azure without having to leave the Visual Studio IDE.

Virtual Machines: Integrated Puppet and Chef support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.
At last week’s Build conference we announced built-in support for several new extensions – including extensions that enable easy support for Puppet and Chef.  Puppet and Chef allow developers and IT administrators to define and automate the desired state of infrastructure configuration, making it effortless to manage thousands of VMs in Azure.

Enabling Puppet Support

We now have a built-in VM image within the Azure VM gallery that enables you to easily stand up a Puppet Master server that you can use to store and manage your infrastructure using Puppet.  Creating a Puppet Master in Azure is now easy – simply select the “Puppet Enterprise” template within the VM gallery.

You can then create new Azure virtual machines that connect to this Puppet Master.  Enabling this with VMs created using the Azure management portal is easy (we also make it easy to do with VMs created via the command line).  To enable the Puppet extension within a VM you create using the Azure portal, simply navigate to the last page of the Create VM from gallery experience, check the “Puppet Extension Agent” extension within it, and specify the URL of the Puppet Master server. Once you deploy the VM, the extension will configure the Puppet agent to connect to this Puppet Master server and pull down the initial configuration that should be used to configure the machine.

This new support makes it incredibly easy to get started with both Puppet and Chef, and enables even richer configuration management of your IaaS infrastructure within Azure.

Virtual Machines: Basic Tier

Earlier in this blog post I discussed how we are introducing a new “Basic Tier” option for Azure Web Sites.  Starting this month we are also introducing a “Basic Tier” for Virtual Machines. The Basic Tier option provides VM options with similar CPU + memory configurations as our existing VMs (which are now called “Standard Tier” VMs), but does not include the built-in load balancing and AutoScale capabilities.
They also cost up to 27% less.  These instances are well suited for production applications that do not require a built-in load balancer (you can optionally bring your own load balancer), batch processing scenarios, and dev/test workloads.  Our new Basic tier VMs also have similar performance characteristics to AWS’s equivalent VM instances (which are less powerful than the Standard tier VMs we have today). Comprehensive pricing information is now available on the Virtual Machines Pricing Details page.

Networking: General Availability of Azure Virtual Network Dynamic Routing VPN Gateways and Point-to-Site VPN

Last year, we previewed a feature called DynamicRouting Gateway and Point-to-Site VPN that supports route-based VPNs and allows you to connect individual computers to a Virtual Network in Azure. Earlier this month we announced that this feature is now generally available. The DynamicRouting VPN Gateway in a Virtual Network now carries the same 99.9% SLA as the StaticRouting VPN Gateway. Now that we’re in General Availability, the DynamicRouting Gateway will automatically incur standard Gateway charges, which take effect starting May 1, 2014.  For further details on the service, please visit the Virtual Network website.

Mobile Services: Visual Studio Support for Mobile Services .NET Backend

With Visual Studio 2013 Update 2, you can now create your backend Mobile Service logic using .NET and the ASP.NET Web API framework in Visual Studio, using Mobile Services templates and scaffolds. Mobile Services support for .NET on the backend offers the following benefits:

You can use ASP.NET Web API and Visual Studio together with Mobile Services to add a backend to your mobile app in minutes.
You can publish any existing Web API to Mobile Services and benefit from authentication, push notifications and other capabilities that Mobile Services provides.
You can also take advantage of any Web API features like OData controllers, or 3rd-party Web API-based frameworks like Breeze.
You can debug your Mobile Services .NET backend using Visual Studio running locally on your machine or remotely in Azure.
With Mobile Services we run, manage and monitor your Web API for you. Azure will automatically notify you if we discover a problem with your app.
With Mobile Services .NET support you can store your data securely using any data backend of your choice: SQL Azure, SQL on a Virtual Machine, Azure Table storage, Mongo, et al.

It’s easy to get started with Mobile Services .NET support in Visual Studio. Simply use the File > New Project dialog and select the Windows Azure Mobile Service project template under the Cloud node. Choose Windows Azure Mobile Service in the New ASP.NET Project dialog. You will see a Mobile Services .NET project; notice this is a customized ASP.NET Web API project with additional Mobile Service NuGet packages and sample controllers automatically included.

Running the Mobile Service Locally

You can now test your .NET Mobile Service project locally. Open the sample TodoItemController.cs in the project. The controller shows you how you can use the built-in TableController<T> .NET class we provide with Mobile Services. Set a breakpoint inside the GetAllTodoItems() method and hit F5 within Visual Studio to run the Mobile Service locally. Mobile Services includes a help page to view and test your APIs. On the help page, click on the "try it out" link and then click the GET tables/TodoItem link. Then click "try this out" and "send" on the GET tables/TodoItem page. As you might expect, you will hit the breakpoint you set earlier.
Add APIs to your Mobile Service using Scaffolds

You can add additional functionality to your Mobile Service using Mobile Service or generic Web API controller scaffolds through the Add Scaffold dialog (right-click on your project and choose the Add -> New Scaffolded Item… command).

Publish your Mobile Services project to Azure

Once you are done developing your Mobile Service locally, you can publish it to Azure. Simply right-click on your project and choose the Publish command. Using the publish wizard, you can publish to a new or existing Azure Mobile Service.

Remote debugging

Just like Cloud Services and Web Sites, you can now remote debug your Mobile Service to get more visibility into how your code is operating live in Azure. To enable remote debugging for a Mobile Service, publish your Mobile Service again and set the Configuration to Debug in the Publish wizard. Once your Mobile Service is published and running live in the cloud, simply set a breakpoint in your local source code. Then use Visual Studio’s Server Explorer to select the Mobile Service instance deployed in the cloud, right-click, and choose the Attach Debugger command. Once the debugger attaches to the mobile service, you can use the debugging capabilities of Visual Studio to debug your app running in the cloud instantly and in real time.

To learn more about Visual Studio support for the Mobile Services .NET backend, follow these tutorials:

Quickstart: Add a mobile service (.NET backend)
How to use controllers to access data in mobile services (.NET backend)
How to create custom APIs and scheduled jobs in a mobile service (.NET backend)

This new .NET backend support makes it easy to create even better mobile applications than ever before.

Mobile Services: Offline Support

In addition to the above support, we are also introducing a preview of a new Mobile Services Offline capability with client SDK support for Windows Phone and Windows Store apps.
With this functionality, mobile applications can create and modify data even when they are offline/disconnected from a network. When the app is back online, it can synchronize local changes with the Mobile Services Table APIs. The feature also includes support for detecting conflicts when the same record is changed on both the client and the backend.

To use the new Mobile Services offline functionality, set up a local sync store. You can define your own sync store or use the provided SQLite-based implementation.  The Mobile Services SDK provides a new local table API for the sync store, with a programming model symmetrical to the existing Mobile Services Table API. You can use optimistic concurrency along with the offline feature to detect conflicting changes between the client and backend.

The preview of the Mobile Services Offline feature is available now as part of the Mobile Services SDK for Windows Store and Windows Phone apps. In the future, we will support all client platforms supported by Mobile Services, including iOS, Android, Xamarin, etc.

Mobile Services: Support for Azure Active Directory Sign-On

We now support Azure Active Directory single sign-on for Mobile Services.  Azure Active Directory authentication is available for both the .NET and Node.js backend options of Mobile Services. To take advantage of the feature, first register your client app and your Mobile Service with your Azure Active Directory tenant using the Applications tab in the Azure Active Directory management portal. In your client project, you will need to add the Active Directory Authentication Library (ADAL), currently available for Windows Store, iOS, and Android clients. From there on, the token retrieved from the ADAL library can be used to authenticate and access Mobile Services.  The single sign-on features of ADAL also enable your mobile service to make calls to other resources (such as SharePoint and Office 365) on behalf of the user.
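The optimistic-concurrency conflict detection described above can be illustrated with a small model. This Python sketch uses invented field names and is not the Mobile Services SDK; it only shows the idea of comparing the version a client last saw against the server's current version:

```python
def push_local_change(local, server):
    """Sketch of conflict detection during offline sync. `local` carries the
    server version the client last read (`base_version`) plus its offline
    edit. If the server record moved on while the client was disconnected,
    report a conflict instead of silently overwriting."""
    if local["base_version"] == server["version"]:
        # Server unchanged since we went offline: apply the edit.
        updated = dict(server, text=local["text"], version=server["version"] + 1)
        return "pushed", updated
    return "conflict", server
```

On "conflict" a real app would surface both copies to the user (or apply a merge policy) before retrying the push.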
You can read more about the new ADAL functionality here. These new updates make Mobile Services an even more attractive platform for building powerful employee-facing apps.

Notification Hubs: Kindle Support and Visual Studio Integration

I’ve previously blogged about Azure Notification Hubs, a high-scale, cross-platform push notification service that allows you to instantly send personalized push notifications to segments of your audience or to individuals – across millions of iOS, Android, Windows, and Windows Phone devices – with a single API call. Today we’ve made two important updates to Azure Notification Hubs: adding support for Amazon Kindle Fire devices, and Visual Studio support for Notification Hubs.

Support for Amazon’s Kindle

With today’s addition, you can now configure your Notification Hubs with Amazon Device Messaging (ADM) service credentials on the configuration page for your Notification Hub in the Azure Management Portal, and start sending push notifications to your app on Amazon’s Kindle devices, in addition to iOS, Android, or Windows.

Testing Push Notifications with Visual Studio

Earlier I blogged about how we enabled debugging push notifications using the Azure Management Portal. With today’s Visual Studio update, you can now browse your notification hubs and send test push notifications directly from Visual Studio Server Explorer as well. Simply select your notification hub in the Server Explorer of Visual Studio under the Notification Hubs node.  Then right-click and choose the Send Test Notifications command. In the notification hub window, you can then send a message either to a particular tag or to all registered devices (broadcast). You can select from a variety of templates – Windows Store, Windows Phone, Android, iOS, or even a cross-platform message using the Custom Template. After you hit Send, you’ll receive the message result instantly to help you diagnose whether your message was successfully sent.
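The tag-versus-broadcast targeting described above can be modeled in a few lines. This is a toy in-memory Python sketch, not the Notification Hubs API; class and method names are invented:

```python
class ToyNotificationHub:
    """Minimal model of tag-based targeting: registrations carry tags, and a
    send can target one tag or broadcast to every registered device."""

    def __init__(self):
        self.registrations = []  # list of (device_id, set_of_tags)

    def register(self, device_id, tags=()):
        self.registrations.append((device_id, set(tags)))

    def send(self, message, tag=None):
        """Return {device_id: message} for every targeted registration.
        tag=None means broadcast to all devices."""
        targets = [device for device, tags in self.registrations
                   if tag is None or tag in tags]
        return {device: message for device in targets}
```

The real service does this fan-out server-side at much larger scale, which is why a single API call can reach millions of devices.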
To learn more about Azure Notification Hubs, read the tutorials here.

Autoscale: Announcing General Availability of the Autoscale Service

Last summer we announced the preview release of our Autoscale service. I’m happy to announce that Autoscale is now generally available!  Better yet, there's no additional charge for using Autoscale. We've added new features since we first released it as a preview: support for both performance- and schedule-based autoscaling, along with an API and .NET SDK so you can programmatically scale using any performance counters that you define. Autoscale supports all four Azure compute services: Cloud Services, Virtual Machines, Mobile Services and Web Sites. For Virtual Machines and Web Sites, Autoscale is included as a feature in the Standard pricing tiers, and for Mobile Services, it's included as part of both the Basic and Standard pricing tiers.

Storage: Announcing General Availability of Read Access Geo Redundant Storage (RA-GRS)

In December, we added the ability for customers to achieve higher read availability for their data. This feature, called Read Access Geo Redundant Storage (RA-GRS), allows you to read an eventually consistent copy of your geo-replicated data from the storage account’s secondary region in case of any unavailability of the storage account’s primary region. Last week we announced that the RA-GRS feature is now out of preview and generally available. It is available to all Azure customers across all regions, including users in China.

RA-GRS SLA and Pricing

The benefit of using RA-GRS is that it provides a higher read availability (99.99+%) for a storage account than GRS (99.9+%). When using RA-GRS, the write availability continues to be 99.9+% (same as GRS today) and read availability is 99.99+%, where the data is expected to be read from the secondary if the primary is unavailable.
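The read pattern RA-GRS enables is simple: read from the primary region, and fall back to the eventually consistent secondary if the primary is unavailable. An illustrative Python sketch (the callables stand in for per-region storage clients; this is not the Azure Storage SDK):

```python
def read_with_fallback(read_primary, read_secondary, key):
    """RA-GRS-style read: try the primary region first, and on failure fall
    back to the secondary, which serves an eventually consistent (possibly
    slightly stale) copy of the geo-replicated data."""
    try:
        return read_primary(key)
    except IOError:
        # Secondary may lag the primary; acceptable for read availability.
        return read_secondary(key)
```

Applications that adopt this pattern should tolerate slightly stale reads from the secondary, since geo-replication is asynchronous.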
In terms of pricing, the capacity (GB) charge is slightly higher for RA-GRS than GRS, whereas the transaction and bandwidth charges are the same for GRS and RA-GRS. See the Windows Azure Storage pricing page here for more details about the SLA and pricing. You can find more information on the storage blog here.

Active Directory: General Availability of Azure AD Premium

Earlier this month we announced the general availability of Azure Active Directory Premium, which provides additional identity and access management capabilities for enterprises. Building upon the capabilities of Azure AD, Azure AD Premium provides these capabilities with a guaranteed SLA and no limit on directory size. Additional capabilities include:

Group-based access assignment that enables administrators to use groups in AD to assign end-user access to over 1200 cloud applications in the AD Application Gallery. End users get single sign-on access to their applications from their Access Panel at https://myapps.microsoft.com or from our iOS application.
Self-service password reset that enables end users to reset forgotten passwords without calling your help desk.
Delegated group management that enables end users to create security groups and manage membership in security groups they own.
Multi-Factor Authentication that lets you easily deploy a Multi-Factor Authentication solution for your business without deploying new software or hardware.
Customized branding that lets you include your organization’s branding elements in the experiences users see when signing in to AD or accessing their Access Panel.
Reporting, alerting, and analytics that increase your visibility into application usage in your organization, and into potential security concerns with user accounts.

Azure AD Premium also includes usage rights for Forefront Identity Manager Server and Client Access Licenses. To read more about AD Premium, including how to acquire it, read the Active Directory Team blog.
Active Directory: Public Preview of Azure Rights Management Service

Earlier this month we announced the public preview of the ability to manage your Azure Rights Management service within the Azure Management Portal. If your organization has Azure Rights Management, either as a stand-alone service or as part of your Office 365 or EMS subscriptions, you can now manage it by signing into the Azure Management Portal. Once in the Portal, select ACTIVE DIRECTORY in the left navigation bar, navigate to the RIGHTS MANAGEMENT tab, then click on the name of your directory.

With this preview you can now create custom rights policy templates that let you define who can access sensitive documents, and what permissions (view, edit, save, print, and more) users have on those documents.  To begin creating a rights policy template, on the Quick Start page, click the "Create an additional rights policy template" option and follow the instructions on the page to define a name and description for the template, add users and rights, and define other restrictions. Once your template has been created and published, it will become available to users in your organization in their favorite applications. To learn more about managing Azure Rights Management and the benefits it offers to organizations, see the Information Protection group’s blog.

Scheduler: General Availability Release of the Scheduler Service

This month we’ve also delivered the General Availability release of the Azure Scheduler service.  Scheduler allows you to run jobs on simple or complex recurring schedules that can invoke HTTP/S endpoints or post messages to storage queues. Scheduler has built-in high availability and can reliably call services inside or outside of Azure. During the preview, customers used it for a wide set of scenarios, including invoking services in their backend for Hadoop workloads, triggering diagnostics cleanup, and periodically checking that partners have submitted content on time.
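The idea behind a custom rights policy template (who can open a protected document, and with which permissions) can be modeled as a small permission lookup. A toy Python sketch with an invented structure; this is not the Rights Management SDK or its actual template format:

```python
def allowed(template, user, action):
    """Check whether a user holds a given permission (view, edit, save,
    print, ...) under a rights policy template."""
    return action in template["rights"].get(user, set())

# Hypothetical template: a name plus per-user permission sets.
finance_template = {
    "name": "Confidential - Finance",
    "rights": {
        "alice@contoso.com": {"view", "edit", "save", "print"},
        "bob@contoso.com": {"view"},
    },
}
```

Under this template, bob can view the document but any attempt to print it would be denied, and users not listed get no access at all.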
ISVs have used it to empower their applications to add scheduling capabilities such as report generation and sending reminders. In the Scheduler portal extension you can easily create and manage your scheduler jobs. Since the initial release, Scheduler has also added the ability to update HTTP jobs with custom headers and basic authentication. It has also exposed the ability to change the recurrence schedule which will allow you to also choose to limit the execution of a job or allow the job to run infinitely. With the general availability, new Azure Scheduler cmdlets have been released with Azure PowerShell and the Scheduler .NET API has been included in WAML 1.0. I highly encourage you to try out the Scheduler today. You might find the following links helpful: Azure Scheduler documentation Download Azure PowerShell .NET API NuGet Package MSDN forum to find answers to all your Scheduler questions Scheduler Pricing Details page It makes scheduling recurring tasks really easy. Automation: Announcing Microsoft Azure Automation Preview Last week we announced the preview of a new Microsoft Azure service: Automation. Automation allows you to automate the creation, deployment, monitoring, and maintenance of resources in your Azure environment using a highly scalable and reliable workflow execution engine. The service can be used to orchestrate the time-consuming, error-prone, and frequently repeated tasks you’d otherwise accomplish manually across Microsoft Azure and third-party systems to decrease operational expense for your cloud operations. To get started with Automation, you first need to sign-up for the preview on the Azure Preview page. Once you have been approved for the preview, you can sign in to the Management Portal and start using it. Automation is currently only available in the East-US data center, but we will add the ability to deploy to additional data centers in the future. 
Authoring a Runbook Once you have the Automation preview enabled on your subscription, you can easily get started automating by following a few simple steps: Step 1: In the Microsoft Azure management portal, click New->App Services->Automation->Runbook->Quick Create to create a new runbook. Runbooks are collections of activities that provide an environment for automating everything from diagnostic logging to applying updates to all instances of a virtual machine or web role to renewing certificates to cleaning storage accounts. Enter a name and description for the runbook, and create a new Automation account which will store your Runbooks, Assets, and Jobs. Next time you create a runbook you can either use the same Automation account as you just created or create a separate one to if you’d like to maintain separation between a few different collections of runbooks / assets. Step 2: Click on your runbook, then click Author->Draft. Type some PowerShell commands in the editor, then hit ‘Publish’ to make this runbook draft available for production execution. Starting a Runbook and Viewing the Job 1. To start the runbook you just published, go back to the ‘Runbooks’ tab, click on your newly-published runbook, and hit ‘Start.’ Enter any required parameters for the runbook, then click the checkmark button. 2. Click on your runbook, then click on the ‘Jobs’ tab for this runbook. Here you can view all the instances of a runbook that have run, called jobs. You should see the job you just started. 3. Click on the job you just started to view more details about its execution. Here you can see the job output, as well as any exceptions that may have occurred while the job was executing. Once you get familiar with the service, you’ll be able to create more sophisticated runbooks to automate your scenarios. I encourage you to try out Microsoft Azure Automation today. 
For more information, click through the following links: Service Overview Documentation Getting Started guide Runbook Authoring guide Pricing Details MSDN forum for answers to all your Automation questions Summary This most recent release of Azure includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu [Less]
Posted over 11 years ago
It has been a really busy last 10 days for the Azure team. This blog post quickly recaps a few of the significant enhancements we’ve made. These include:

- Web Sites: SSL included, Traffic Manager, Java support, Basic tier
- Virtual Machines: Support for Chef and Puppet extensions, Basic pricing tier for compute instances
- Virtual Network: General Availability of Dynamic Routing VPN Gateways and Point-to-Site VPN
- Mobile Services: Preview of Visual Studio support for .NET, Azure Active Directory integration, and offline support
- Notification Hubs: Support for Kindle Fire devices and Visual Studio Server Explorer integration
- Autoscale: General Availability release
- Storage: General Availability release of Read Access Geo Redundant Storage
- Active Directory Premium: General Availability release
- Scheduler service: General Availability release
- Automation: Preview release of the new Azure Automation service

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Web Sites: SSL now included at no additional charge in Standard Tiers

With Azure Web Sites you can host up to 500 web-sites in a single Standard tier hosting plan. Azure web-sites run in VMs isolated to host only your web applications (giving you predictable performance and security isolation), and you can scale the number of VMs up or down either manually or using our built-in Autoscale functionality. The pricing for Standard tier web-sites is based on the number of VMs you run – if you host all 500 web-sites in a single VM then all you pay for is that single VM; if you scale up your web site plan to run across two VMs then you pay for two VMs, and so on. Prior to this month we charged an additional fee if you wanted to enable SSL for the sites.
Starting this month, we now include the ability to use 5 SNI-based SSL certificates and 1 IP-based SSL certificate with each Standard tier web site hosting plan at no additional charge. This makes it even easier (and cheaper) to SSL-enable your web-sites.

Web Sites: Traffic Manager Support

I’ve blogged in the past about the Traffic Manager service we have built into Azure. The Azure Traffic Manager service allows you to control the distribution of user traffic to applications that you host within Azure. This enables you to run instances of your applications across different Azure regions all over the world. Traffic Manager works by applying an intelligent routing policy engine to the Domain Name Service (DNS) queries on your domain names, and maps the DNS routes to the appropriate instances of your applications (e.g. you can set up Traffic Manager to route customers in Europe to a European instance of your app, and customers in North America to a US instance of your app). You can use Traffic Manager to improve application availability – by enabling automatic customer traffic fail-over in the event of issues with one of your application instances. You can also use Traffic Manager to improve application performance – by automatically routing your customers to the application instance closest to them. We are excited to now provide General Availability support for Traffic Manager with Azure Web Sites. This enables you to improve both the performance and availability of your web-sites. You can learn more about how to take advantage of this new support here.

Web Sites: Java Support

This past week we added support for an additional server language with Azure Web Sites – Java.
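Conceptually, performance routing resolves a DNS query to the healthy endpoint closest to the caller, and failover routing is the same decision with unhealthy endpoints filtered out. A minimal sketch of that policy idea (the latency table, region names, and hostnames below are hypothetical illustration data, not Traffic Manager’s actual internals):

```python
# Hypothetical latency table: (caller region, endpoint region) -> milliseconds.
LATENCY_MS = {
    ("europe", "west-europe"): 20,
    ("europe", "east-us"): 90,
    ("north-america", "west-europe"): 95,
    ("north-america", "east-us"): 25,
}

# Hypothetical deployed instances of the app, one per Azure region.
ENDPOINTS = {
    "west-europe": "myapp-eu.azurewebsites.net",
    "east-us": "myapp-us.azurewebsites.net",
}

def resolve(caller_region, healthy=None):
    """Return the hostname a performance-routing DNS answer would point at:
    the healthy endpoint with the lowest latency from the caller."""
    healthy = ENDPOINTS if healthy is None else healthy
    best = min(healthy, key=lambda region: LATENCY_MS[(caller_region, region)])
    return healthy[best]
```

Passing a `healthy` dict with a failed endpoint removed gives the availability/failover behavior: the same caller is simply routed to the next-best surviving instance.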
It is now easy to deploy and run Java web applications written using a variety of frameworks and containers, including:

- Java 1.7.0_51 – the default supported Java runtime
- Tomcat 7.0.50 – the default Java container
- Jetty 9.1.0

You can manage which Java runtime you use, as well as which container hosts your applications, using the Azure management portal or our management APIs. This blog post provides more detail on the new support and options. With this announcement, Azure Web Sites now provides first-class support for building web applications and sites using .NET, PHP, Node.js, Python, and Java. This enables you to use a wide variety of languages + frameworks to build your applications, and take advantage of all the great capabilities that Web Sites provides (easy deployment, continuous deployment, Autoscale, staging support, Traffic Manager, outside-in monitoring, backup, etc.).

Web Sites: Support for Wildcard DNS and SSL Certificates

Azure Web Sites now supports the ability to map wildcard DNS and SSL certificates to web-sites. This enables a variety of scenarios – including the ability to map wildcard vanity domains (e.g. *.myapp.com – for example: scottgu.myapp.com) to a single backend web site. This can be particularly useful for SaaS-based scenarios. Scott Cate has an excellent video that walks through how to easily set this support up.

Web Sites: New Basic Tier Pricing Option

Earlier in this post I talked about how we are now including the ability to use 5 SNI and 1 IP-based SSL certificate at no additional cost with each Standard tier Azure web site hosting plan. We have also recently announced that we are including the Autoscale, traffic management, backup, staging, and web jobs features at no additional cost as part of each Standard tier Azure web site hosting plan as well. We think the combination of these features provides an incredibly compelling way to securely host and run any web application.
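The wildcard SaaS scenario works because every tenant hostname reaches the same backend site, which can then recover the tenant from the request’s Host header. A hedged sketch of that extraction (the helper name and base domain are illustrative, borrowing the *.myapp.com example above):

```python
def tenant_from_host(host, base_domain="myapp.com"):
    """Extract the tenant from a wildcard-mapped hostname.

    'scottgu.myapp.com' -> 'scottgu'; the bare domain, or a host
    outside the wildcard zone, yields None.
    """
    host = host.lower().split(":")[0]       # normalize case, drop any port
    suffix = "." + base_domain
    if not host.endswith(suffix):
        return None                         # not one of our vanity domains
    subdomain = host[: -len(suffix)]
    return subdomain or None
```

A multi-tenant app would call something like this at the start of each request and select the tenant’s data accordingly.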
Starting this month we are also introducing a new “Basic tier” option for Azure Web Sites, which enables you to run web applications without some of these additional features – and at 25% less cost. We think the Basic tier is great for smaller/less-sophisticated web applications, and enables you to be successful while paying even less. For additional details about the Basic tier pricing, visit the Azure Web Sites pricing page. You can select which tier your web site hosting plan uses by clicking the Scale tab within the Web Site extension of the Azure management portal.

Virtual Machines: Create from Visual Studio

With the most recent Azure SDK 2.3 release, it is now possible to create Virtual Machines directly from inside Visual Studio’s Server Explorer. Simply right-click on the Azure node within it, and choose the “Create Virtual Machine” menu option. This brings up a “Create New Virtual Machine” wizard that walks you through creating a Virtual Machine, picking an image to run in it, attaching it to a virtual network, and opening up firewall ports, all from within Visual Studio. Once created, you can then manage the VM (shutdown, restart, start, remote desktop, enable debugging, attach debugger) all from within Visual Studio. This makes it incredibly easy to start taking advantage of Azure without having to leave the Visual Studio IDE.

Virtual Machines: Integrated Puppet and Chef support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure. The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.
At last week’s Build conference we announced built-in support for several new extensions – including extensions that enable easy support for Puppet and Chef. Puppet and Chef allow developers and IT administrators to define and automate the desired state of infrastructure configuration, making it effortless to manage thousands of VMs in Azure.

Enabling Puppet Support

We now have a built-in VM image within the Azure VM gallery that enables you to easily stand up a Puppet master server that you can use to store and manage your infrastructure using Puppet. Creating a Puppet master in Azure is now easy – simply select the “Puppet Enterprise” template within the VM gallery. You can then create new Azure virtual machines that connect to this Puppet master. Enabling this with VMs created using the Azure management portal is easy (we also make it easy to do with VMs created with the command line). To enable the Puppet extension within a VM you create using the Azure portal, simply navigate to the last page of the Create VM from gallery experience, check the “Puppet Enterprise Agent” extension within it, and specify the URL of the Puppet master server. Once you deploy the VM, the extension will configure the Puppet agent to connect to this Puppet master server and pull down the initial configuration that should be used to configure the machine. This new support makes it incredibly easy to get started with both Puppet and Chef and enables even richer configuration management of your IaaS infrastructure within Azure.

Virtual Machines: Basic Tier

Earlier in this blog post I discussed how we are introducing a new “Basic tier” option for Azure Web Sites. Starting this month we are also introducing a “Basic tier” for Virtual Machines as well. The Basic tier option provides VMs with similar CPU + memory configuration options to our existing VMs (which are now called “Standard tier” VMs), but without the built-in load balancing and Autoscale capabilities.
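What makes tools like Puppet and Chef scale to thousands of machines is the desired-state model: you declare the state each resource should be in, and each agent run converges the machine toward it, doing nothing when the machine already matches. A toy sketch of that convergence idea (the resource model here is invented for illustration, not either tool’s actual format):

```python
def converge(current, desired):
    """Return the actions needed to move `current` toward `desired`.

    Both arguments map resource name -> state (e.g. a package version).
    An empty result means the machine is already converged, which is
    what makes repeated agent runs safe (idempotent).
    """
    actions = []
    for name, state in desired.items():
        if current.get(name) != state:
            actions.append(("set", name, state))
    return actions
```

The agent a VM extension installs does the equivalent of running this loop on a schedule against the configuration pulled from the master.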
They also cost up to 27% less. These instances are well suited for production applications that do not require a built-in load balancer (you can optionally bring your own load balancer), batch processing scenarios, as well as dev/test workloads. Our new Basic tier VMs also have similar performance characteristics to AWS’s equivalent VM instances (which are less powerful than the Standard tier VMs we have today). Comprehensive pricing information is now available on the Virtual Machines Pricing Details page.

Networking: General Availability of Azure Virtual Network Dynamic Routing VPN Gateways and Point-to-Site VPN

Last year, we previewed a feature called Dynamic Routing Gateway and Point-to-Site VPN, which supports route-based VPNs and allows you to connect individual computers to a Virtual Network in Azure. Earlier this month we announced that the feature is now generally available. The Dynamic Routing VPN Gateway in a Virtual Network now carries the same 99.9% SLA as the Static Routing VPN Gateway. Now that the feature is generally available, the Dynamic Routing Gateway will incur standard gateway charges, taking effect May 1, 2014. For further details on the service, please visit the Virtual Network website.

Mobile Services: Visual Studio Support for Mobile Services .NET Backend

With Visual Studio 2013 Update 2, you can now create your backend Mobile Service logic using .NET and the ASP.NET Web API framework in Visual Studio, using Mobile Services templates and scaffolds. Mobile Services support for .NET on the backend offers the following benefits: You can use ASP.NET Web API and Visual Studio together with Mobile Services to add a backend to your mobile app in minutes. You can publish any existing Web API to Mobile Services and benefit from authentication, push notifications, and other capabilities that Mobile Services provides.
You can also take advantage of any Web API features like OData controllers, or third-party Web API-based frameworks like Breeze. You can debug your Mobile Services .NET backend using Visual Studio running locally on your machine or remotely in Azure. With Mobile Services we run, manage, and monitor your Web API for you, and Azure will automatically notify you if we discover a problem with your app. With Mobile Services .NET support you can store your data securely using any data backend of your choice: SQL Azure, SQL Server in a Virtual Machine, Azure Table storage, Mongo, et al. It’s easy to get started with Mobile Services .NET support in Visual Studio. Simply use the File->New Project dialog and select the Windows Azure Mobile Service project template under the Cloud node, then choose Windows Azure Mobile Service in the New ASP.NET Project dialog. You will see a Mobile Services .NET project; note that this is a customized ASP.NET Web API project with additional Mobile Services NuGet packages and sample controllers automatically included.

Running the Mobile Service Locally

You can now test your .NET Mobile Service project locally. Open the sample TodoItemController.cs in the project. The controller shows you how you can use the built-in TableController<T> .NET class we provide with Mobile Services. Set a breakpoint inside the GetAllTodoItems() method and hit F5 within Visual Studio to run the Mobile Service locally. Mobile Services includes a help page to view and test your APIs. On the help page, click the try it out link and then click the GET tables/TodoItem link. Then click try this out and send on the GET tables/TodoItem page. As you might expect, you will hit the breakpoint you set earlier.
Add APIs to your Mobile Service using Scaffolds

You can add additional functionality to your Mobile Service using Mobile Service or generic Web API controller scaffolds through the Add Scaffold dialog (right-click on your project and choose the Add -> New Scaffolded Item… command).

Publish your Mobile Services project to Azure

Once you are done developing your Mobile Service locally, you can publish it to Azure. Simply right-click on your project and choose the Publish command. Using the publish wizard, you can publish to a new or existing Azure Mobile Service.

Remote debugging

Just like Cloud Services and Web Sites, you can now remote debug your Mobile Service to get more visibility into how your code is operating live in Azure. To enable remote debugging for a Mobile Service, publish your Mobile Service again and set the Configuration to Debug in the Publish wizard. Once your Mobile Service is published and running live in the cloud, simply set a breakpoint in your local source code. Then use Visual Studio’s Server Explorer to select the Mobile Service instance deployed in the cloud, right-click, and choose the Attach Debugger command. Once the debugger attaches to the mobile service, you can use the debugging capabilities of Visual Studio to debug your app running in the cloud, instantly and in real time. To learn more about Visual Studio support for the Mobile Services .NET backend, follow these tutorials: Quickstart: Add a mobile service (.NET backend); How to use controllers to access data in mobile services (.NET backend); How to create custom APIs and scheduled jobs in a mobile service (.NET backend). This new .NET backend support makes it easier than ever to create great mobile applications.

Mobile Services: Offline Support

In addition to the above support, we are also introducing a preview of a new Mobile Services offline capability, with client SDK support for Windows Phone and Windows Store apps.
With this functionality, mobile applications can create and modify data even when they are offline and disconnected from a network. When the app is back online, it can synchronize local changes with the Mobile Services Table APIs. The feature also includes support for detecting conflicts when the same record is changed on both the client and the backend. To use the new Mobile Services offline functionality, set up a local sync store. You can define your own sync store or use the provided SQLite-based implementation. The Mobile Services SDK provides a new local table API for the sync store, with a programming model symmetrical to the existing Mobile Services Table API. You can use optimistic concurrency along with the offline feature to detect conflicting changes between the client and backend. The preview of the Mobile Services offline feature is available now as part of the Mobile Services SDK for Windows Store and Windows Phone apps. In the future, we will support all client platforms supported by Mobile Services, including iOS, Android, Xamarin, etc.

Mobile Services: Support for Azure Active Directory Sign On

We now support Azure Active Directory single sign-on for Mobile Services. Azure Active Directory authentication is available for both the .NET and Node.js backend options of Mobile Services. To take advantage of the feature, first register your client app and your Mobile Service with your Azure Active Directory tenant using the Applications tab in the Azure Active Directory management portal. In your client project, you will need to add the Active Directory Authentication Library (ADAL), currently available for Windows Store, iOS, and Android clients. From there on, the token retrieved from the ADAL library can be used to authenticate and access Mobile Services. The single sign-on features of ADAL also enable your mobile service to make calls to other resources (such as SharePoint and Office 365) on behalf of the user.
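Once ADAL hands back an access token, the client presents it on each call as an OAuth 2.0 bearer credential in the Authorization header. A minimal sketch of building such a request (the helper name and token value are illustrative, not the SDK’s actual API):

```python
def authorized_headers(token, extra=None):
    """Build HTTP request headers that present an ADAL-acquired access
    token as an OAuth 2.0 bearer credential (RFC 6750 scheme)."""
    if not token:
        raise ValueError("an access token is required")
    headers = {"Authorization": "Bearer " + token}
    if extra:
        headers.update(extra)   # e.g. Content-Type for a JSON body
    return headers
```

The same header shape works whether the target is the mobile service itself or a downstream resource the token was issued for.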
You can read more about the new ADAL functionality here. These new updates make Mobile Services an even more attractive platform for building powerful employee-facing apps.

Notification Hubs: Kindle Support and Visual Studio Integration

I’ve previously blogged about Azure Notification Hubs, a high-scale, cross-platform push notification service that allows you to instantly send personalized push notifications to segments of your audience or to individuals – spanning millions of iOS, Android, Windows, and Windows Phone devices – with a single API call. Today we’ve made two important updates to Azure Notification Hubs: support for Amazon Kindle Fire devices, and Visual Studio support for Notification Hubs.

Support for Amazon’s Kindle

With today’s addition you can now configure your Notification Hub with Amazon Device Messaging (ADM) service credentials on the configuration page for your Notification Hub in the Azure Management Portal, and start sending push notifications to your app on Amazon’s Kindle devices, in addition to iOS, Android, and Windows.

Testing Push Notifications with Visual Studio

Earlier I blogged about how we enabled debugging push notifications using the Azure Management Portal. With today’s Visual Studio update, you can now browse your notification hubs and send test push notifications directly from Visual Studio Server Explorer as well. Simply select your notification hub in the Server Explorer of Visual Studio under the Notification Hubs node. Then right-click and choose the Send Test Notifications command. In the notification hub window, you can then send a message either to a particular tag or to all registered devices (broadcast). You can select from a variety of templates – Windows Store, Windows Phone, Android, iOS, or even a cross-platform message using the Custom Template. After you hit Send, you’ll receive the message result instantly to help you diagnose whether your message was successfully sent.
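A cross-platform send works by expanding one logical message into each platform’s native payload format before dispatch. A rough sketch of that expansion, with heavily abbreviated payload shapes (the real WNS/APNS/GCM/ADM formats carry much more structure than shown here):

```python
import json

def render_payloads(message):
    """Expand one logical message into per-platform push payloads:
    XML toast for WNS, JSON envelopes for APNS/GCM/ADM (abbreviated)."""
    return {
        "wns": ("<toast><visual><binding template='ToastText01'>"
                "<text id='1'>" + message + "</text>"
                "</binding></visual></toast>"),
        "apns": json.dumps({"aps": {"alert": message}}),
        "gcm": json.dumps({"data": {"message": message}}),
        "adm": json.dumps({"data": {"message": message}}),
    }
```

A hub-style broadcast is then one loop over the registered devices for a tag, sending each device the payload for its platform.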
To learn more about Azure Notification Hubs, read the tutorials here.

Autoscale: Announcing General Availability of the Autoscale Service

Last summer we announced the preview release of our Autoscale service. I’m happy to announce that Autoscale is now generally available! Better yet, there’s no additional charge for using Autoscale. We’ve added new features since we first released it as a preview: support for both performance- and schedule-based autoscaling, along with an API and .NET SDK so you can programmatically scale using any performance counters that you define. Autoscale supports all four Azure compute services: Cloud Services, Virtual Machines, Mobile Services, and Web Sites. For Virtual Machines and Web Sites, Autoscale is included as a feature in the Standard pricing tiers, and for Mobile Services, it’s included as part of both the Basic and Standard pricing tiers.

Storage: Announcing General Availability of Read Access Geo Redundant Storage (RA-GRS)

In December, we added the ability for customers to achieve higher read availability for their data. This feature, called Read Access Geo Redundant Storage (RA-GRS), allows you to read an eventually consistent copy of your geo-replicated data from the storage account’s secondary region in case of any unavailability of the storage account’s primary region. Last week we announced that the RA-GRS feature is now out of preview and generally available. It is available to all Azure customers across all regions, including users in China.

RA-GRS SLA and Pricing

The benefit of using RA-GRS is that it provides higher read availability (99.99+%) for a storage account than GRS (99.9+%). When using RA-GRS, the write availability continues to be 99.9+% (the same as GRS today) and read availability is 99.99+%, where data is expected to be read from the secondary if the primary is unavailable.
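The read pattern RA-GRS enables is: try the primary endpoint, and on failure fall back to the eventually consistent secondary. A sketch of that fallback, with the two endpoints modeled as plain callables rather than the real storage client:

```python
def read_with_fallback(primary, secondary, key):
    """Read from the primary region; fall back to the secondary replica.

    Returns (value, source). The secondary may lag the primary - that
    is the eventual-consistency trade-off RA-GRS makes in exchange for
    higher read availability.
    """
    try:
        return primary(key), "primary"
    except Exception:
        return secondary(key), "secondary"
```

Callers that must never serve stale data would instead surface the primary’s failure; callers that prefer availability take the secondary read.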
In terms of pricing, the capacity (GB) charge is slightly higher for RA-GRS than for GRS, whereas the transaction and bandwidth charges are the same for GRS and RA-GRS. See the Windows Azure Storage pricing page for more details about the SLA and pricing, and the storage blog for more information.

Active Directory: General Availability of Azure AD Premium

Earlier this month we announced the general availability of Azure Active Directory Premium, which provides additional identity and access management capabilities for enterprises. Building upon the capabilities of Azure AD, Azure AD Premium provides these capabilities with a guaranteed SLA and no limit on directory size. Additional capabilities include:

- Group-based access assignment, which enables administrators to use groups in AD to assign end users access to over 1200 cloud applications in the AD Application Gallery. End users get single sign-on access to their applications from their Access Panel at https://myapps.microsoft.com or from our iOS application.
- Self-service password reset, which enables end users to reset forgotten passwords without calling your help desk.
- Delegated group management, which enables end users to create security groups and manage membership in security groups they own.
- Multi-Factor Authentication, which lets you easily deploy a Multi-Factor Authentication solution for your business without deploying new software or hardware.
- Customized branding, which lets you include your organization’s branding elements in the experiences users see when signing in to AD or accessing their Access Panel.
- Reporting, alerting, and analytics that increase your visibility into application usage in your organization and potential security concerns with user accounts.

Azure AD Premium also includes usage rights for Forefront Identity Manager Server and Client Access Licenses. To read more about AD Premium, including how to acquire it, read the Active Directory Team blog.
Active Directory: Public Preview of the Azure Rights Management Service

Earlier this month we announced the public preview of the ability to manage your Azure Rights Management service within the Azure Management Portal. If your organization has Azure Rights Management, either as a stand-alone service or as part of your Office 365 or EMS subscriptions, you can now manage it by signing into the Azure Management Portal. Once in the Portal, select ACTIVE DIRECTORY in the left navigation bar, navigate to the RIGHTS MANAGEMENT tab, then click on the name of your directory. With this preview you can now create custom rights policy templates that let you define who can access sensitive documents, and what permissions (view, edit, save, print, and more) users have on those documents. To begin creating a rights policy template, on the Quick Start page, click the Create an additional rights policy template option and follow the instructions on the page to define a name and description for the template, add users and rights, and define other restrictions. Once your template has been created and published, it will become available to users in your organization in their favorite applications. To learn more about managing Azure Rights Management and the benefits it offers to organizations, see the Information Protection group’s blog.

Scheduler: General Availability of the Scheduler Service

This month we’ve also delivered the General Availability release of the Azure Scheduler service. Scheduler allows you to run jobs on simple or complex recurring schedules that can invoke HTTP/S endpoints or post messages to storage queues. Scheduler has built-in high availability and can reliably call services inside or outside of Azure. During the preview, customers used it for a wide set of scenarios, including invoking backend services for Hadoop workloads, triggering diagnostics cleanup, and periodically checking that partners have submitted content on time.
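A simple recurring schedule reduces to a start time plus an interval, optionally capped at a number of occurrences (otherwise the job recurs indefinitely). A sketch of enumerating the run times such a schedule produces (the helper name and shape are invented for illustration, not the Scheduler API):

```python
from datetime import datetime, timedelta
from itertools import count, islice

def run_times(start, interval, max_occurrences=None):
    """Yield the times a recurring job fires: start, start + interval,
    start + 2*interval, ... capped at max_occurrences, or unbounded
    (the 'run indefinitely' case) when max_occurrences is None."""
    times = (start + interval * i for i in count())
    return times if max_occurrences is None else islice(times, max_occurrences)

# e.g. a job limited to three hourly executions from a fixed start time
runs = list(run_times(datetime(2014, 5, 1, 9, 0), timedelta(hours=1), 3))
```

Complex schedules layer filters (days of week, minutes of the hour, and so on) over the same enumeration.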
ISVs have used it to add scheduling capabilities to their applications, such as report generation and sending reminders. In the Scheduler portal extension you can easily create and manage your scheduler jobs. Since the initial release, Scheduler has also added the ability to update HTTP jobs with custom headers and basic authentication. It has also exposed the ability to change the recurrence schedule, which lets you either limit the number of times a job executes or let it run indefinitely. With general availability, new Azure Scheduler cmdlets have been released with Azure PowerShell, and the Scheduler .NET API has been included in WAML 1.0. I highly encourage you to try out Scheduler today – it makes scheduling recurring tasks really easy. You might find the following links helpful:

- Azure Scheduler documentation
- Download Azure PowerShell
- .NET API NuGet Package
- MSDN forum to find answers to all your Scheduler questions
- Scheduler Pricing Details page

Automation: Announcing Microsoft Azure Automation Preview

Last week we announced the preview of a new Microsoft Azure service: Automation. Automation allows you to automate the creation, deployment, monitoring, and maintenance of resources in your Azure environment using a highly scalable and reliable workflow execution engine. The service can be used to orchestrate the time-consuming, error-prone, and frequently repeated tasks you’d otherwise accomplish manually across Microsoft Azure and third-party systems, decreasing the operational expense of your cloud operations. To get started with Automation, you first need to sign up for the preview on the Azure Preview page. Once you have been approved for the preview, you can sign in to the Management Portal and start using it. Automation is currently only available in the East US data center, but we will add the ability to deploy to additional data centers in the future.
Authoring a Runbook

Once you have the Automation preview enabled on your subscription, you can easily get started automating by following a few simple steps:

Step 1: In the Microsoft Azure management portal, click New->App Services->Automation->Runbook->Quick Create to create a new runbook. Runbooks are collections of activities that provide an environment for automating everything from diagnostic logging, to applying updates to all instances of a virtual machine or web role, to renewing certificates, to cleaning storage accounts. Enter a name and description for the runbook, and create a new Automation account, which will store your Runbooks, Assets, and Jobs. The next time you create a runbook you can either use the same Automation account you just created or create a separate one if you’d like to maintain separation between different collections of runbooks and assets.

Step 2: Click on your runbook, then click Author->Draft. Type some PowerShell commands in the editor, then hit ‘Publish’ to make this runbook draft available for production execution.

Starting a Runbook and Viewing the Job

1. To start the runbook you just published, go back to the ‘Runbooks’ tab, click on your newly published runbook, and hit ‘Start’. Enter any required parameters for the runbook, then click the checkmark button.
2. Click on your runbook, then click on the ‘Jobs’ tab for this runbook. Here you can view all the instances of a runbook that have run, called jobs. You should see the job you just started.
3. Click on the job you just started to view more details about its execution. Here you can see the job output, as well as any exceptions that may have occurred while the job was executing.

Once you get familiar with the service, you’ll be able to create more sophisticated runbooks to automate your scenarios. I encourage you to try out Microsoft Azure Automation today.
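The runbook/job model described above – activities run in order, and each execution becomes a job whose output and exceptions you can inspect – can be sketched as follows. This is only the shape of the model, not the service’s actual execution engine (runbooks themselves are PowerShell):

```python
import traceback

def start_runbook(activities, parameters):
    """Run a runbook's activities in order and record a job,
    mirroring the output and exception views of the Jobs tab."""
    job = {"status": "running", "output": [], "exception": None}
    try:
        for activity in activities:
            job["output"].append(activity(parameters))
        job["status"] = "completed"
    except Exception:
        job["status"] = "failed"
        job["exception"] = traceback.format_exc()
    return job
```

Each call to `start_runbook` corresponds to one entry on the Jobs tab: a completed job exposes its accumulated output, a failed one the exception that stopped it.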
For more information, click through the following links:

- Service Overview
- Documentation
- Getting Started guide
- Runbook Authoring guide
- Pricing Details
- MSDN forum for answers to all your Automation questions

Summary

This most recent release of Azure includes a bunch of great features that enable you to build even better cloud solutions. If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu