
GPT-4.1 deployment blocked by fraud protection (error 715-123420) — need RTFP unblock for Content Understanding

Oguz Kokes 0 Reputation points
2026-04-08T20:26:58.9066667+00:00

I am trying to deploy GPT-4.1 and GPT-4.1-mini models within my Microsoft Foundry resource to use with Azure Content Understanding.

The deployment is being blocked with error code 715-123420, which I understand is related to Realtime Fraud Protection (RTFP).

How can I solve this?

Foundry Tools

Formerly known as Azure AI Services or Azure Cognitive Services, a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform


1 answer

Sort by: Most helpful
  1. Manas Mohanty 16,670 Reputation points Microsoft External Staff Moderator
    2026-04-26T08:27:20.3833333+00:00

    Hey Oguz Kokes,

    Good day. Thank you for confirming that the issue is no longer reproducible.

    As you requested, here is more detail on the "production scenario" context.

    This error (715-123420) is triggered automatically by the backend team when the system detects one of the following scenarios:

    1. Anomalous resource creation with unsupported templates or API versions
    2. Anomalous usage on the customer side (unauthorized access by outside parties)
    3. Recursive inputs that violate content filtering, or chain-of-thought violations

    Best practice is to monitor the following areas.

    On anomalous usage

    1. Use Microsoft Entra ID credentials instead of key-based authentication. Reference - How to migrate to OpenAI Python v1.x (classic) - Microsoft Foundry (classic) portal | Microsoft Learn
    2. Create custom alerts to monitor over-usage. Reference - Monitor usage and spending with cost alerts in Cost Management - Microsoft Cost Management | Microsoft Learn
    3. Secure the resource with a VNet if it is publicly accessible.
    4. Apply least-privilege access for users through proper RBAC.
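    As an illustration of point 2, over-usage can also be flagged locally before a cost alert fires. This is a hypothetical sketch assuming you export hourly token counts from your own logs; the function name, budget, and spike factor are assumptions, and real alerting should still go through Azure Cost Management cost alerts as referenced above.

    ```python
    # Hypothetical local over-usage check (assumed helper, not an Azure API):
    # flag any hour that exceeds a fixed budget or spikes far above the mean.
    def flag_anomalous_usage(hourly_tokens: list[int], budget_per_hour: int,
                             spike_factor: float = 3.0) -> list[int]:
        """Return indices of hours that exceed the budget or spike vs. the mean."""
        if not hourly_tokens:
            return []
        mean = sum(hourly_tokens) / len(hourly_tokens)
        flagged = []
        for i, tokens in enumerate(hourly_tokens):
            over_budget = tokens > budget_per_hour
            spike = mean > 0 and tokens > spike_factor * mean
            if over_budget or spike:
                flagged.append(i)
        return flagged

    # Example: hour 3 exceeds both the budget and three times the mean.
    usage = [1_000, 1_200, 900, 12_000, 1_100]
    print(flag_anomalous_usage(usage, budget_per_hour=5_000))  # → [3]
    ```

    A check like this is cheap to run alongside tracing and gives you an early signal before the backend's own anomaly detection blocks the resource.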

    On prompts

    Sanitize prompts and files with an Azure AI Content Safety resource, filtering on the probability of detection before requests reach the model endpoint.

    Enable tracing in the SDK to flag prompts, save them to separate storage, and review them in workbooks.

    Use the advanced abuse monitoring feature (as a managed customer) if your use case involves sensitive information such as medical records.
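    The gating step above can be sketched as a small local check. In a real deployment the severity scores would come from an Azure AI Content Safety analyze-text call; here they are passed in directly, and the category names and blocking threshold are assumptions you should tune for your workload.

    ```python
    # Sketch: block a prompt before it reaches the model endpoint when any
    # harm-category severity meets or exceeds a threshold. Scores are assumed
    # to be on the service's 0-7 severity scale; the threshold is illustrative.
    BLOCK_THRESHOLD = 4

    def should_block(severities: dict[str, int],
                     threshold: int = BLOCK_THRESHOLD) -> bool:
        """Return True if any category's severity is at or above the threshold."""
        return any(score >= threshold for score in severities.values())

    print(should_block({"Hate": 0, "SelfHarm": 0, "Sexual": 2, "Violence": 6}))  # → True
    print(should_block({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 2}))  # → False
    ```

    Keeping the decision in one place also makes it easy to log blocked prompts to separate storage for later review in workbooks.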

    Reference

    Tracing in Azure OpenAI

    View Trace Results for AI Applications using OpenAI SDK (classic) - Microsoft Foundry (classic) portal | Microsoft Learn

    Advanced abuse monitoring form

    Resiliency in production

    Create a multi-region deployment, and use runbook automation or APIM to load-balance properly based on the signals you receive.
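    The multi-region failover idea can be sketched as trying each regional endpoint in order and falling through on failure. The endpoint URLs and the `send` callable are placeholders; in production you would put APIM or an AI gateway in front rather than hand-rolling this loop.

    ```python
    # Sketch of simple multi-region failover across placeholder endpoints.
    from typing import Callable

    def call_with_failover(endpoints: list[str],
                           send: Callable[[str], str]) -> str:
        """Try each endpoint in order; return the first successful response."""
        last_error = None
        for endpoint in endpoints:
            try:
                return send(endpoint)
            except Exception as exc:  # e.g. throttling or a regional outage
                last_error = exc
        raise RuntimeError("all endpoints failed") from last_error

    # Example with a fake sender: the first region is down, the second works.
    def fake_send(endpoint: str) -> str:
        if "eastus" in endpoint:
            raise ConnectionError("region unavailable")
        return f"ok from {endpoint}"

    print(call_with_failover(["https://eastus.example", "https://westus.example"],
                             fake_send))  # → ok from https://westus.example
    ```

    A gateway gives you the same behavior declaratively, plus health probes and metrics, which is why the guides below recommend it over client-side loops.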

    Relevant guide 

    Securely Integrating Azure API Management with Azure OpenAI via Application Gateway | Microsoft Community Hub

    Architecture 

    You can use the AI gateway in Azure API Management to set up alerts and routing for Azure OpenAI deployments (somewhat more affordable for developers than a full APIM gateway).

    AI gateway in Azure API Management | Microsoft Learn

    Note that you need to enable this feature, as described in the documentation, before using it.


    Please let us know whether the above addresses your additional questions on hardening the production scenario; if so, we can consider this resolved with positive feedback.

    Thank you for your input on the forum.

     

