
Azure Function Flex Consumption plan - Blob files not created or updated and no tables created in storage account

Lennart Bauer 0 Reputation points
2026-04-27T08:06:39.6966667+00:00

Hi!

I have updated an Azure function from Consumption to the Flex Consumption plan.

It is deployed in two different subscriptions, one for test and one for production.

The configurations are exactly the same in both environments.

The app has two functions, one http trigger and one timer trigger. In prod, the timer is never triggered.

I cannot see any trigger history for prod. If I manually trigger the timer function, it runs as expected, but it is never triggered again. If I manually create a status file in the storage account, the trigger runs, but it is immediately triggered again every time because the status file is not being updated and no tables are created (the ones beginning with AzureFunctionsDiagnosticEvents).

Everything has public access, and the function app's identity has access to the storage account.

What can I do to fix this? Is it possible to investigate what the error is?

Azure Functions

An Azure service that provides an event-driven serverless compute platform.


2 answers

Sort by: Most helpful
  1. Sina Salam 28,606 Reputation points Volunteer Moderator
    2026-04-28T12:31:06.02+00:00

    Hello Lennart Bauer,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that after moving your Azure Function to the Flex Consumption plan, blob files are not created or updated, and no tables are created in the storage account.

    You should do the following to resolve the issue:

    1. Confirm you’re not using the legacy polling Blob trigger, because Flex Consumption only supports the event‑based Blob trigger. - https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger, and https://learn.microsoft.com/en-us/azure/azure-functions/functions-event-grid-blob-trigger
    2. Switch the binding source to Event Grid in function.json (for the .NET isolated worker model, the equivalent is setting Source = BlobTriggerSource.EventGrid on the BlobTrigger attribute):
         {
           "type": "blobTrigger",
           "direction": "in",
           "name": "myBlob",
           "path": "container/{name}",
           "source": "EventGrid"
         }
      

    This is the supported trigger mode for low‑latency blob events.

    3. Upgrade to Storage extension v5+ (required for Event Grid–based blob triggers): dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs
    4. Create an Event Grid subscription on the storage container for Microsoft.Storage.BlobCreated events and point it at the blob webhook endpoint format shown in the tutorial. - https://learn.microsoft.com/en-us/azure/azure-functions/functions-event-grid-blob-trigger
    5. Validate runtime and settings: Flex Consumption requires Functions runtime v4+, and your AzureWebJobsStorage/identity access must allow Event Grid delivery.
    6. Test by uploading a blob; it should fire immediately without “warming” the host in the portal.
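    The Event Grid subscription step above can be sketched with the Azure CLI. Every name below (app name, function name, resource IDs, key) is a placeholder, not a value from this thread; the script only prints the command so you can review it before running it while signed in:

```shell
#!/bin/sh
# Placeholders (assumptions) - replace with your real names.
APP="myfuncapp"                 # function app name
FUNC="MyBlobFunction"           # blob-triggered function name
KEY="<blobs_extension_key>"     # Function App > App keys > blobs_extension
STORAGE_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Webhook endpoint format the blob extension listens on (per the tutorial).
ENDPOINT="https://${APP}.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.${FUNC}&code=${KEY}"

# Print (not run) the subscription command; execute it once signed in.
CMD="az eventgrid event-subscription create --name blob-created-to-func \
--source-resource-id ${STORAGE_ID} \
--included-event-types Microsoft.Storage.BlobCreated \
--endpoint-type webhook --endpoint ${ENDPOINT}"
echo "$CMD"
```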

    The issue is caused by using a polling-based blob trigger on a Flex Consumption plan, which is not supported; there is no active event listener, so execution only occurs when the host is manually initialized. Alternative (reliability pattern): Blob Created > Event Grid > Storage Queue > Queue-triggered function (decouples spikes and improves retry control).

    Regarding your clarification on the timer trigger, follow the steps below to resolve it:

    Step 1: Confirm host storage requirements and that you are editing the correct storage account

    1. Identify the storage account referenced by AzureWebJobsStorage (or its identity-based variant).
    2. Ensure it’s a general-purpose storage account that supports Blob, Queue, and Table endpoints (Functions uses all three for internal operations). - https://learn.microsoft.com/en-us/azure/azure-functions/storage-considerations

    Step 2: Fix AzureWebJobsStorage configuration. If you use Managed Identity for host storage (common in secure setups):

    Minimum settings (system-assigned identity):

    • AzureWebJobsStorage__accountName = <storageAccountName>
    • AzureWebJobsStorage__credential = managedidentity

    For a user-assigned identity, also include:

    • AzureWebJobsStorage__clientId = <clientId of the user-assigned identity>

    In addition, if you have custom DNS/private endpoints/advanced networking, you may need explicit service URIs (Blob/Queue/Table) instead of relying on default endpoint resolution. Microsoft notes identity-based storage approaches can fail in nonstandard DNS scenarios. - https://techcommunity.microsoft.com/blog/appsonazureblog/use-managed-identity-instead-of-azurewebjobsstorage-to-connect-a-function-app-to/3657606
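    As a sketch of those settings (placeholder names; the command is printed rather than executed so you can review it first), identity-based host storage plus explicit Blob/Queue/Table service URIs might be applied like this:

```shell
#!/bin/sh
# Placeholders (assumptions) - replace with your real names.
APP="myfuncapp"; RG="my-rg"; ACCOUNT="mystorage"

# Identity-based host storage plus explicit service URIs, useful with
# custom DNS / private endpoint setups.
CMD="az functionapp config appsettings set --name ${APP} --resource-group ${RG} --settings \
AzureWebJobsStorage__accountName=${ACCOUNT} \
AzureWebJobsStorage__credential=managedidentity \
AzureWebJobsStorage__blobServiceUri=https://${ACCOUNT}.blob.core.windows.net \
AzureWebJobsStorage__queueServiceUri=https://${ACCOUNT}.queue.core.windows.net \
AzureWebJobsStorage__tableServiceUri=https://${ACCOUNT}.table.core.windows.net"
echo "$CMD"
```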

    Step 3: Grant the Function App identity the data-plane access required by host storage. The Functions host uses the default storage account for blobs, queues, and tables, so the identity's permissions must cover Blob + Queue + Table data actions on that account. Practically (RBAC):

    • Storage Blob Data Contributor (or higher)
    • Storage Queue Data Contributor
    • Storage Table Data Contributor

    https://learn.microsoft.com/en-us/azure/azure-functions/storage-considerations
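    The three role assignments above can be scripted; the principal ID and scope below are placeholders, and the commands are printed for review rather than executed:

```shell
#!/bin/sh
# Placeholders (assumptions) - replace with your real values.
PRINCIPAL_ID="<function-app-principal-id>"
SCOPE="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Print one role-assignment command per data-plane role the host needs.
COUNT=0
for ROLE in "Storage Blob Data Contributor" \
            "Storage Queue Data Contributor" \
            "Storage Table Data Contributor"; do
  echo "az role assignment create --assignee ${PRINCIPAL_ID} --role \"${ROLE}\" --scope ${SCOPE}"
  COUNT=$((COUNT + 1))
done
```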

    Step 4: Force trigger registration (“sync triggers”) the supported way. If triggers aren’t synced, timer schedules may never register. Use a supported trigger sync mechanism:

    • In Azure Portal: Function App > “Sync triggers”
    • Or redeploy using supported methods that auto-sync triggers

    https://stackoverflow.com/questions/78594942/how-to-logs-are-logging-in-application-insights-using-managed-identity-in-net-c
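    From the CLI, a trigger sync can also be invoked as an ARM action on the site resource; a sketch with placeholder names (the command is printed, not executed):

```shell
#!/bin/sh
# Placeholders (assumptions) - replace with your real names.
APP="myfuncapp"; RG="my-rg"
CMD="az resource invoke-action --resource-group ${RG} --name ${APP} \
--resource-type Microsoft.Web/sites --action syncfunctiontriggers"
echo "$CMD"
```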

    Step 5: After Steps 2–4:

    1. Restart Function App
    2. Wait for the next expected cron occurrence
    3. Verify that schedule state now updates (e.g., LastUpdated advances)

    Step 6 (optional but strongly recommended): If you have configured Microsoft Entra authentication for Application Insights (for example, by disabling local auth and using Entra-based ingestion), Microsoft states you must assign the identity the Monitoring Metrics Publisher role at the Application Insights resource scope. - https://learn.microsoft.com/en-us/azure/azure-monitor/app/azure-ad-authentication. This ensures you can see the logs needed to diagnose host startup and trigger-listener issues, and avoids telemetry auth failures that can complicate troubleshooting.
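    A sketch of that role assignment (placeholder IDs; the command is printed for review, not executed):

```shell
#!/bin/sh
# Placeholders (assumptions) - replace with your real values.
PRINCIPAL_ID="<function-app-principal-id>"
APPINSIGHTS_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Insights/components/<app-insights-name>"
CMD="az role assignment create --assignee ${PRINCIPAL_ID} \
--role \"Monitoring Metrics Publisher\" --scope ${APPINSIGHTS_ID}"
echo "$CMD"
```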

    I hope this is helpful! Do not hesitate to let me know if you have any other questions or clarifications.


    Please don't forget to close the thread here by upvoting and accepting this as an answer if it is helpful.


  2. Pravallika KV 14,235 Reputation points Microsoft External Staff Moderator
    2026-04-27T09:33:03.31+00:00

    Hi @Lennart Bauer ,

    Thanks for the confirmation, glad the issue is resolved.

    This issue wasn't caused by storage access itself, but by a mismatch between the function runtime's expectations and the app's startup configuration.

    Steps followed to resolve the issue:

    The issue was resolved by aligning the function app with the recommended configuration for the .NET isolated process model using IHostBuilder, as described in the official Microsoft documentation.

    Changes made:

    • Updated the startup code to use IHostBuilder-based configuration
    • Adjusted dependency injection and configuration setup per the isolated worker model
    • Updated host.json logging settings to ensure proper runtime behavior and visibility
    • Re-deployed the function app

    Once the app was aligned with the isolated worker model setup, the timer trigger lifecycle behaved correctly.
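    For the host.json logging change mentioned above, a minimal sketch might look like the following (the categories and levels here are illustrative defaults, not the poster's actual file):

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Host.Results": "Information",
      "Function": "Information"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  }
}
```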

    If you want to make it even more robust, you could still:

    • Keep Application Insights logging at a slightly higher verbosity for timer triggers
    • Validate behavior across scale-out scenarios

    Hope this helps!


    If the resolution was helpful, kindly take a moment to click Yes for “Was this answer helpful?”. And if you have any further query, do let us know.

