Azure AI Foundry model deployment fails with 715-123420 for both Bicep and Portal

Praveen Kumar Pudi 1 Reputation point
2026-04-27T14:32:16.6766667+00:00

We are unable to create model deployments in an Azure AI Foundry resource. The failure happens consistently both through Infrastructure as Code (Bicep/ARM) and directly from the Azure portal UI, which suggests this is not a template syntax issue.

What works

  • Azure AI Foundry account creation succeeds
  • Azure AI Foundry project creation succeeds
  • RBAC / managed identity setup succeeds

What fails

  • Any model deployment under the Foundry account fails during validation/provisioning with:
    • 715-123420: An error occurred. Please reach out to support for additional assistance.

How we tested

  1. Created Foundry account and project successfully
  2. Tried deploying models through Bicep
  3. Tried deploying models manually from Azure portal
  4. Same error in both paths

Models attempted

  • text-embedding-3-large
  • text-embedding-3-small
  • gpt-5.4-mini
  • gpt-5.4
  • gpt-5.4-nano

Regions tested

  • westeurope
  • eastus2

Result

  • Base Foundry resource and project deploy successfully in both scenarios
  • Model deployment fails with the same error code regardless of deployment method

Why this seems service-side

  • The error occurs from both portal and Bicep
  • Foundry account/project provisioning works
  • The issue starts only when creating Microsoft.CognitiveServices/accounts/deployments
Foundry Models

A catalog of AI models in Microsoft Foundry that you can discover, compare, and deploy using Azure’s built‑in tools for evaluation, fine‑tuning, and inference


2 answers

  1. Karnam Venkata Rajeswari 2,395 Reputation points Microsoft External Staff Moderator
    2026-05-01T01:19:43.4733333+00:00

    Hello @Praveen Kumar Pudi

    Welcome to Microsoft Q&A. Thank you for reaching out to us.

    In addition to the inputs provided by Jerald Felix, please check whether the following helps.

    The consistent failure pattern across models, regions, and deployment methods provides strong insight into where the issue is occurring.

    Model deployment failures are consistently observed across:

    • Multiple models
    • Multiple regions
    • Multiple deployment methods

    At the same time, resource creation (account, project, identity) succeeds without issues.

    This pattern indicates that requests are successfully reaching the service but are failing during backend validation at the deployment stage, specifically for Microsoft.CognitiveServices/accounts/deployments.

    Based on this behavior, the issue is most likely related to:

    • Subscription-level entitlement not enabled for the requested models, or
    • Quota not allocated for the subscription in the selected regions, or
    • Service-side validation blocking deployment provisioning

    In such scenarios, deployments may fail with generic errors like 715-123420, even when configuration and templates are correct.

    Please check whether the following steps help:

    1. Confirming model availability - run the command below to verify which models are enabled for the account:
       az cognitiveservices account list-models -n <accountName> -g <resourceGroup>
       If the intended models (for example, gpt-5.4, text-embedding-3-*) are not listed, deployment will fail due to missing entitlement.
    2. Validating quota allocation - from the Foundry resource's Quota blade, verify:
      • Tokens per minute (TPM)
      • Requests per minute (RPM)
      If quota is zero or not assigned, deployments cannot proceed and may return generic errors.
    3. Validating region and model alignment
      1. Ensure selected models are supported in the chosen regions
      2. If required, test in regions with broader availability (for example: Sweden Central, South Central US, West US 3)
    4. Validating resource configuration
      1. Confirm resource kind is AIServices
      2. Ensure deployment uses a supported API version (2024‑10‑01 or later)
    5. Checking policy and deployment restrictions
      1. Review Azure Policy / deny assignments
      2. Confirm there are no restrictions on Microsoft.CognitiveServices/accounts/deployments
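    The entitlement check in step 1 can be scripted. As a sketch (the JSON below is invented for illustration and stands in for the real `az cognitiveservices account list-models -o json` output for your account), you can test each required model against the enabled list:

```shell
# Hypothetical sample of `az cognitiveservices account list-models -o json`
# output; replace with the real command's output for your account.
cat > /tmp/models.json <<'EOF'
[
  {"name": "text-embedding-3-large"},
  {"name": "gpt-5.4"}
]
EOF

# Flag any required model that is missing from the enabled list.
for m in text-embedding-3-large gpt-5.4 gpt-5.4-mini; do
  if grep -q "\"name\": \"$m\"" /tmp/models.json; then
    echo "$m: available"
  else
    echo "$m: MISSING - deployment will fail"
  fi
done
```

    If any model comes back MISSING, the 715-123420 failure is consistent with a missing entitlement rather than a template problem.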


    Thank you


  2. Jerald Felix 11,550 Reputation points Volunteer Moderator
    2026-04-28T01:31:44.1533333+00:00

    Hello Praveen Kumar Pudi,

    Greetings!

    Thanks for raising this question in Q&A forum.

    You've done an excellent job of isolating this — the fact that error 715-123420 occurs across both Bicep/ARM and the Azure Portal, across multiple models, and across two different regions (West Europe and East US 2), strongly points to a service-side issue at the Microsoft.CognitiveServices/accounts/deployments resource level. This is not something caused by your templates or configuration.

    That said, here are some things worth verifying and trying before escalating to support:

    Step 1: Confirm the Correct Resource Kind for Azure AI Foundry Deployments

    Azure AI Foundry (the new unified experience) uses a different resource kind than classic Azure OpenAI. Make sure your Bicep is targeting kind: 'AIServices' and not kind: 'OpenAI'. Using the wrong kind can silently block deployments at the provisioning stage. Here's a quick reference snippet:

    resource foundryAccount 'Microsoft.CognitiveServices/accounts@2024-10-01' = {
      name: accountName
      location: location
      kind: 'AIServices'
      sku: {
        name: 'S0'
      }
      properties: {
        customSubDomainName: accountName
      }
    }
    

    Step 2: Verify the API Version for the Deployment Resource

    Older API versions may not support the newer Foundry-style model deployments. Make sure you are using at least 2024-10-01 for Microsoft.CognitiveServices/accounts/deployments. Using an older API version such as 2023-05-01 can cause validation failures with newer models like gpt-5.4.

    Step 3: Check That the Foundry Account and Deployment Are in the Same Resource Group and Subscription

    Even though account and project creation succeeds, deployment provisioning can fail silently if there's a policy restriction (such as an Azure Policy denying specific child resource types). Check your subscription's Policy compliance blade for any deny assignments targeting Microsoft.CognitiveServices/accounts/deployments.

    Step 4: Validate Quota at the Model + Region Level

    Some of the models you attempted (like gpt-5.4 and gpt-5.4-nano) may have very limited or zero quota in West Europe and East US 2 at this time. Go to Azure Portal → Subscriptions → Usage + Quotas, filter for Azure OpenAI / AI Services, and check per-model TPM availability for each region.

    Step 5: Try a Minimal Deployment via the REST API

    To completely rule out any Bicep/portal layer issues, try a raw REST call using a tool like Postman or curl to call the deployments endpoint directly. This helps confirm whether the error is truly service-side.
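    A minimal sketch of that raw call, assuming the standard ARM endpoint for the deployments child resource; every identifier below (subscription, resource group, account name) and the sku/model body values are placeholders to substitute:

```shell
# All identifiers below are placeholders - substitute your own values.
SUB_ID="00000000-0000-0000-0000-000000000000"
RG="rg-foundry-test"
ACCOUNT="my-foundry-account"
DEPLOYMENT="text-embedding-3-large"
API_VERSION="2024-10-01"

# Build the ARM endpoint for the deployments child resource.
URL="https://management.azure.com/subscriptions/${SUB_ID}/resourceGroups/${RG}/providers/Microsoft.CognitiveServices/accounts/${ACCOUNT}/deployments/${DEPLOYMENT}?api-version=${API_VERSION}"
echo "$URL"

# Example PUT (uncomment to run; requires a valid ARM token):
# curl -X PUT "$URL" \
#   -H "Authorization: Bearer $(az account get-access-token --query accessToken -o tsv)" \
#   -H "Content-Type: application/json" \
#   -d '{"sku":{"name":"Standard","capacity":50},"properties":{"model":{"format":"OpenAI","name":"text-embedding-3-large","version":"1"}}}'
```

    If the raw call returns the same 715-123420, that effectively rules out the Bicep and portal layers.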

    Step 6: Open an Azure Support Ticket (Recommended)

    Given the consistent failure across all models, both regions, and both deployment methods, this is very likely a backend service issue tied to your subscription or tenant. When raising the ticket, please include:

    • Subscription ID
    • Foundry Account resource ID
    • Timestamps of failed deployment attempts
    • Error code: 715-123420
    • All models and regions attempted

    This will allow the Azure support team to check backend provisioning logs and identify the root cause quickly.

    If this answer helps you, kindly accept it, which will help others who have similar questions.

    Best Regards,

    Jerald Felix.

