Different methods of creating CRON jobs on Windows Azure

In the last couple of weeks we saw a lot of new features arrive on Windows Azure. At this moment, if we want to run a task or a job on Azure, we have a lot of options, starting from Worker Roles and ending with Web Jobs or the scheduled jobs of Mobile Services.
In this post we will look at some of the services available on Windows Azure for running different jobs. For each of them we will try to identify the cases where it fits best.

We will start with Worker Roles. Behind a Worker Role there is a VM that gives us the full power of the CPU and memory available on it. On such machines we can run the most complex jobs we have, and we can implement any kind of job, without limitations. Because the VM is managed by the Azure platform, we don't need to worry about the OS, security updates and so on.
I would use this kind of machine to process an audio stream, generate an ISO with custom content or do other heavy tasks.
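As a minimal sketch, a Worker Role is a class that derives from RoleEntryPoint and does its work in Run(); the example below polls a storage queue for work items. The queue name "jobs" and the setting name "StorageConnectionString" are illustrative, and the snippet assumes the Azure SDK and the storage client library.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Illustrative names: the connection string setting and the queue are assumptions.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("jobs");
        queue.CreateIfNotExists();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                // Heavy processing goes here (audio processing, ISO generation, etc.).
                queue.DeleteMessage(message);
            }
            else
            {
                // No work available; wait before polling again.
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }
}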
Another place where we can run jobs is Hadoop (HDInsight). On this service we can run jobs that consume a lot of resources to process different content, for example a data source (like a log) from which we need to extract some statistical information. These jobs are the perfect option when we have a lot of input data and we need to extract reports or statistical information from it.
I would use Hadoop for jobs that need to process large amounts of log data to extract different reports.
In Mobile Services we have the ability to schedule and run different jobs. They are very similar to CRON jobs and can be written in JavaScript. The jobs that can be run from Mobile Services are very simple and are oriented toward the mobile service and mobile applications. This means that a job should be very simple and should process data only for one user or for a limited number of users. For example, these jobs are great if you need to update some fields in a customers table or clean up some tables.
But you shouldn't use these jobs to process pictures or make long-running calls to different services (for example, calling 10-20 different external services and sending them client data). You should use the jobs from Mobile Services at your application level. For more complicated jobs, you should try one of the other solutions.
Azure Scheduler is, as its name says, a scheduling mechanism that gives you the ability to call a custom endpoint at a specific time interval. You will not be able to define the action that needs to be executed, only the endpoint that needs to be called. The endpoint can be an HTTP or HTTPS endpoint or a queue. For queues, you have the ability to specify the body of the message that is added to the queue.
This can be used with success in combination with Worker Roles (using queues) or Web Jobs. For example, if we have a cleaning task that needs to be executed every 4 days, we can define an Azure Scheduler job that calls a specific URL of our site every 4 days. The URL will trigger our custom clean action, as in the sketch below.
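The endpoint side could look like the following sketch, assuming an ASP.NET Web API controller in our site; the route and the CleanupService helper are hypothetical names, not part of Azure Scheduler itself.

using System.Web.Http;

public class MaintenanceController : ApiController
{
    // Called by the Azure Scheduler job every 4 days.
    [HttpPost]
    [Route("api/maintenance/clean")]
    public IHttpActionResult Clean()
    {
        // CleanupService is a hypothetical helper that implements our clean action.
        CleanupService.RemoveExpiredEntries();
        return Ok();
    }
}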
The last available mechanism to define and run a job or a task is Web Jobs. Using Web Jobs we have the ability to run a task on the web instances where we run our application. It is important to know that Web Jobs run on the same instances that host our application. Because of this, we need to be aware of what kind of tasks we run as Web Jobs; we don't want tasks that consume a lot of resources on our instances and bring our web application down. The good part is that the load balancer from Windows Azure will select the instance that has the lowest load. Like the Mobile Services "CRON" jobs, Web Jobs are recommended for small and simple jobs. For complex jobs we can use with success Worker Roles or HDInsight.
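As a small sketch of a continuous Web Job built with the WebJobs SDK, the function below runs whenever a message lands on a queue; the queue name "cleanup-requests" is an assumption, and the host expects the usual AzureWebJobsStorage connection string in the configuration.

using System;
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        // Runs continuously and dispatches queue messages to ProcessCleanupRequest.
        var host = new JobHost();
        host.RunAndBlock();
    }

    // Invoked automatically when a message appears in the (illustrative) queue.
    public static void ProcessCleanupRequest(
        [QueueTrigger("cleanup-requests")] string message)
    {
        // Keep the work light: this runs on the same instances as the web application.
        Console.WriteLine("Cleanup requested: " + message);
    }
}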

In this post we saw different ways to define a job or a task. There is no single best solution; based on our needs, we have to select the solution that is the most suitable for us.
