Week 4, 2024 - Tips I learned this week

Some tips about Azure and Azure DevOps.

Easily debug a non-HTTP-triggered Azure Function

The other day, I wanted to locally debug a Queue-triggered function without manually adding a queue message to my local storage.

My Azure Function looked like this:

public record Order(string Product, int Count);

public class ProcessOrder
{
    private readonly ILogger<ProcessOrder> _logger;

    public ProcessOrder(ILogger<ProcessOrder> logger)
    {
        _logger = logger;
    }

    [Function(nameof(ProcessOrder))]
    public void Run([QueueTrigger("orders")] Order sentOrder)
    {
        _logger.LogInformation("Order contains {Count} {Product}", sentOrder.Count, sentOrder.Product);
    }
}

To trigger it, I could simply add a message to the orders queue of my storage emulator like this:

Queue message in Azure Storage Explorer.

You may notice that I don't even have to go to Azure Storage Explorer to add the message; I can do it directly from the IDE. However, call me lazy, but I wanted to execute the function just by making an HTTP call, as we do for HTTP-triggered functions.

This way, I could write the HTTP request in an HTTP file, commit it, and push it to my repository to share it with my colleagues, so they don't have to guess what message they should put in the queue to trigger the function.

Fortunately, the documentation explains how to do this.

Define the request location: host name + folder path + function name.

Thus, for my use case, the resulting request is as follows:

POST http://localhost:7071/admin/functions/ProcessOrder HTTP/1.1
Content-Type: application/json

{
  "input": "{\n  \"product\": \"laptop\",\n  \"count\": 3\n}"
}

The content of your queue message goes in the value of the key "input" and must be escaped.

If, like me, you skim through the documentation, you might miss the "escape" requirement and your request will fail, so be sure to properly escape your content.
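Rather than escaping by hand, you can build the request body programmatically. Here is a minimal sketch in Python (using the same order message as above) that serializes the queue message twice, once for the message itself and once for the envelope:

```python
import json

# The queue message the function should receive.
message = {"product": "laptop", "count": 3}

# The admin endpoint expects the message as an escaped JSON *string*
# under the "input" key, so serialize twice: once for the message,
# once for the surrounding envelope.
body = json.dumps({"input": json.dumps(message, indent=2)})

# Prints the exact request body shown above.
print(body)
```

POST the printed value to http://localhost:7071/admin/functions/ProcessOrder and the escaping is guaranteed to be correct.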

The Azure DevOps tip you did not know about: Azure Pipelines task name conflicts

I recently discovered that when you install extensions from the Azure DevOps marketplace, several Azure Pipelines tasks can have the same name. And if you use that name in your pipelines, Azure Pipelines won't know which task you are referring to and will prevent your pipeline from running.

This can easily occur if you install multiple extensions for Terraform in your Azure DevOps organization. For instance, the extensions Azure Pipelines Terraform Tasks from Jason Johnson and Terraform from Microsoft DevLabs both contain a task with the same name: TerraformInstaller.

To avoid these conflicts, you must use the full names of the tasks in your pipelines. You can find the full names in the GitHub repositories of the extensions. Another way is to add these tasks to a test Release and click the "View YAML" button to see the full name of the task you added.

Screenshot of a release in Azure DevOps.
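Concretely, a fully qualified task reference follows the pattern publisherId.extensionId.contributionId.taskName@version. The sketch below shows what that could look like for the Microsoft DevLabs installer task; the exact identifiers are an assumption on my part, so verify them against the manifest of the extension you actually installed:

```yaml
steps:
  # Fully qualified task reference: publisher ID, extension ID,
  # task contribution ID, then the task name and version.
  # The IDs below are illustrative; check your extension's manifest.
  - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
    inputs:
      terraformVersion: 'latest'
```

With the publisher and extension IDs spelled out, Azure Pipelines can no longer confuse the two TerraformInstaller tasks.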

Using metrics to understand your usage of Azure resources

I don't often use all the monthly free credits of my Azure subscription, but this month my spending limit was quickly reached and my subscription was disabled!

The cost analysis tab of my subscription showed me that an Azure Maps Account resource was responsible for consuming most of my credits but didn't provide more details.

So, I went to the Metrics tab of my resource and discovered that I could split the Usage metric by API name to determine exactly which Azure Maps API was heavily used by my applications. Combined with the pricing page, this let me deduce which API requests I was making too frequently and, therefore, how to optimize costs.

Azure Maps usage metrics by API name.
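The same split can also be sketched from the command line with the Azure CLI. The resource path and the ApiCategory dimension name below are placeholders and assumptions; run az monitor metrics list-definitions on your resource first to find the exact metric and dimension names to filter on:

```
# List the Usage metric for an Azure Maps account, split by API.
# Replace <sub-id>, <rg>, and <account-name> with your own values.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Maps/accounts/<account-name>" \
  --metric "Usage" \
  --interval PT1H \
  --filter "ApiCategory eq '*'"
```

The filter with a wildcard value is what splits the metric into one series per dimension value, mirroring the "split by" option of the portal's Metrics tab.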

Depending on the type of resource, you will use different metrics and split on different properties. Regardless, metrics can help you comprehend your resource usage and its associated cost.

And that's it for this week, happy learning!


The opinions expressed herein are my own and do not represent those of my employer or any other third-party views in any way.

Copyright © 2024 Alexandre Nédélec. All rights reserved.