# Running Azure Logic Apps Standard on Azure Container Apps
Should you use Logic Apps Standard on ACA instead of n8n? n8n is popular for workflow automation — Docker-native, visual editor, hundreds of integrations. But if you're already in Azure, it means running and paying for another self-hosted service on top of your existing infrastructure. Logic Apps Standard on Azure Container Apps is a cost-effective alternative if your workflows stay within the built-in connector set: Azure Blob, Queue, Service Bus, Event Hubs, HTTP, OpenAI, AI Search. No extra services, no OAuth setup — connection strings in app settings, durable run history, GitOps-friendly JSON definitions, all included.

Logic Apps Standard traditionally required an App Service Premium plan (always-on, expensive). Containerising it on ACA removes that constraint. But it has hard limits. Know them before you start:

- **No managed connectors.** The 400+ gallery connectors (Office 365, SharePoint, Salesforce, SQL Server, etc.) don't work in Linux containers. They require an App Service MSI endpoint that ACA doesn't provide.
- **No XSLT maps.** The Transform XML action delegates to NetFxWorker.exe — a Windows-only .NET Framework binary in the extension bundle. It won't execute on Linux. Liquid/JSON transforms work fine because they run in-process in the managed worker, with no external binary dependency.
- **Rebuild to deploy changes.** Workflows are baked into the image. Any change means a Docker build + push + ACA update. GitOps-friendly, but slower iteration than n8n's live editor.
- **Visual designer needs local Docker.** You design and test locally (Part 2 covers this), then deploy. There's no cloud-based designer for ACA.
- **Cold starts.** Scale-to-zero means latency on the first trigger after idle — matters for synchronous HTTP workflows.

If any of those are blockers, the App Service Standard plan is the right call. If they're not — particularly if your workflows are Azure-native and event-driven — keep reading.
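To make the "connection strings in app settings" point concrete: service provider connections are declared in a connections.json file that references app settings via `@appsetting(...)`. A sketch along those lines; the connection and setting names here are illustrative, not necessarily this project's exact file:

```json
{
  "serviceProviderConnections": {
    "servicebus": {
      "displayName": "servicebus",
      "serviceProvider": {
        "id": "/serviceProviders/serviceBus"
      },
      "parameterValues": {
        "connectionString": "@appsetting('servicebus_connectionString')"
      }
    },
    "azurequeues": {
      "displayName": "azurequeues",
      "serviceProvider": {
        "id": "/serviceProviders/azurequeues"
      },
      "parameterValues": {
        "connectionString": "@appsetting('azurequeues_connectionString')"
      }
    }
  }
}
```

No OAuth consent flows, no API connection resources: the runtime resolves the settings at startup.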
## The setup

A Logic Apps Standard app with six workflows, deployed as a Docker container on Azure Container Apps:

| Workflow | Trigger | Purpose |
|----------|---------|---------|
| wf1 | HTTP GET | Stateful HTTP request/response |
| wf2 | Azure Blob Storage | Fires on blob upload, reads metadata, deletes the blob |
| wf3 | Azure Queue Storage | Processes queue messages |
| wf4 | Azure Service Bus | Processes messages from a Service Bus queue (service provider) |
| wf5 | Azure Service Bus | Receives SB message, calls an external HTTP endpoint |
| wf6 | HTTP POST | JSON-to-JSON transform via Liquid map (Artifacts/Maps) |

Infrastructure is defined in Bicep and deployed to Azure Container Apps. Run history is stable across pod restarts. All event-driven triggers survive stop/start cycles without replaying old events.

```
┌─────────────────────────────────────────────────────┐
│ Azure Container Apps                                │
│                                                     │
│   ┌─────────────────────────────────────────────┐   │
│   │ Container: logicapp-basicdemo               │   │
│   │ (Azure Functions v4 + Logic Apps runtime)   │   │
│   │                                             │   │
│   │   wf1   wf2   wf3   wf4   wf5   wf6         │   │
│   └──────────────┬──────────────────────────────┘   │
│                  │ volume mount                     │
│   ┌──────────────▼──────────────────────────────┐   │
│   │ Azure Files share (checkpoints + locks)     │   │
│   └─────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────┘

Azure Storage (logicappshubsa)
├── Blob containers (wf2 trigger source)
├── Queue: testqueue (wf3 trigger source)
└── Tables (run history, flow state)

Azure Service Bus Basic (la-basicdemobus)
├── wf4queue (processed by wf4)
└── wf5queue (processed by wf5)

Azure Container Registry (labasicdemoacr)
└── logicapp-basicdemo:latest
```

Logic Apps Standard runs on the Azure Functions v4 host. There is no official pre-built container image for the Standard tier that you can just pull and run — you build your own image that includes the Functions Core Tools and your workflow files.
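Those workflow files are plain JSON definitions the runtime interprets at startup. For orientation, a minimal stateful HTTP request/response workflow looks roughly like this (a sketch, not necessarily wf1's exact definition):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "Request": {
        "type": "Request",
        "kind": "Http",
        "inputs": { "method": "GET" }
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "inputs": {
          "statusCode": 200,
          "body": { "status": "ok" }
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```

The `"kind": "Stateful"` flag is what buys the durable run history: trigger and action inputs/outputs are persisted to Table Storage.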
## The Dockerfile

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0

ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /home/site/wwwroot

# Install Node 18 LTS and Azure Functions Core Tools v4
RUN apt-get update && \
    apt-get install -y curl gnupg unzip coreutils && \
    curl -fsSL https://deb.nodesource.com/setup_18.x | bash - && \
    apt-get install -y nodejs && \
    npm install -g azure-functions-core-tools@4 --unsafe-perm=true && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

COPY . .

ENV FUNCTIONS_WORKER_RUNTIME="node"
ENV FUNCTIONS_WORKER_RUNTIME_VERSION="~4"
ENV AzureWebJobsFeatureFlags="EnableMultiLanguageWorker"
ENV AzureWebJobsSecretStorageType="Files"
ENV APP_KIND="workflowapp"
ENV WEBSITE_SITE_NAME="logicapp-local"

EXPOSE 7074
ENTRYPOINT ["func", "start", "--verbose", "--port", "7074"]
```

Your workflow JSON files (wf1/workflow.json, wf2/workflow.json, etc.) are baked directly into the image at `COPY . .`. The Logic Apps runtime reads and executes them — no compilation step.

```
LABasicDemo/
├── host.json           # Extension bundle declaration
├── connections.json    # Service provider connections
├── Dockerfile
├── wf1/workflow.json
├── wf2/workflow.json
├── wf3/workflow.json
├── wf4/workflow.json
├── wf5/workflow.json
└── wf6/workflow.json
```

host.json declares the Logic Apps extension bundle — the only required entry:

```json
{
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  }
}
```

In connections.json, service provider connections reference app settings via `@appsetting(...)` — the runtime resolves them at startup with no ARM roundtrip. Each connection maps a name (e.g. `servicebus`) to a `serviceProviderId` and a connection string from the environment.

## Infrastructure

The full Bicep deploys: ACR, Log Analytics, ACA Environment, Azure Files mount, Service Bus namespace + queues, and the Container App itself. The sections below focus on the non-obvious parts — the env var stability fixes and the Azure Files mount path are where most of the debugging time went.

### Service Bus: Basic SKU

For queue-based workflows, the Basic SKU is all you need.
Standard is only required if you use topics or need managed API connections — which, as we'll explain later, don't work in ACA containers anyway.

```bicep
resource sbNamespace 'Microsoft.ServiceBus/namespaces@2022-10-01-preview' = {
  name: sbNamespaceName
  location: location
  sku: {
    name: 'Basic'
    tier: 'Basic'
  }
}
```

### Pinning the host identity

This is where most of the debugging time went. The Logic Apps runtime generates a 15-character hash prefix (the LAIdentifier) used to namespace all Azure Table Storage tables for run history. By default this hash is derived from a combination of the host ID and site name — and if the host ID changes on restart, you get a new prefix and all run history appears lost.

Three env vars pin the identity so it survives pod restarts:

```bicep
// Pin the Functions host ID — prevents new LAIdentifier on every restart
{ name: 'AzureFunctionsWebHost__hostid', value: appName }

// Pin the site FQDN — prevents identity hash changing when domain resolves differently
{ name: 'WEBSITE_HOSTNAME', value: '${appName}.${acaEnv.properties.defaultDomain}' }

// Pin the content share identity — keeps Logic Apps environment stable
{ name: 'WEBSITE_CONTENTSHARE', value: contentShareName }
```

Without AzureFunctionsWebHost__hostid, every container restart generates a new random host ID, a new LAIdentifier, new storage tables — and the run history from before the restart is effectively orphaned.

### The Azure Files mount

The Azure Files volume must be mounted at `.azure-webjobs-hosts`, not at `/home/site/wwwroot`. Mounting at the root wipes all workflow JSON files baked into the image.

```bicep
volumeMounts: [
  {
    volumeName: 'content-share'
    mountPath: '/home/site/wwwroot/.azure-webjobs-hosts'
  }
]
```

The .azure-webjobs-hosts directory stores blob trigger checkpoints and distributed locks. Persisting it means the blob trigger doesn't replay already-processed blobs after a restart.
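That mount needs two counterparts elsewhere in the Bicep: a storage definition on the ACA environment and a volumes entry in the Container App template. A sketch under those assumptions; the symbol names (storageAccountName, storageAccountKey, acaEnv) are illustrative, not the project's exact Bicep:

```bicep
// Register the Azure Files share with the ACA environment
// (symbol names are illustrative)
resource contentStorage 'Microsoft.App/managedEnvironments/storages@2023-05-01' = {
  parent: acaEnv
  name: 'content-share'
  properties: {
    azureFile: {
      accountName: storageAccountName
      accountKey: storageAccountKey
      shareName: contentShareName
      accessMode: 'ReadWrite'
    }
  }
}

// ...and in the Container App template, the volume that the
// volumeMounts entry above references by volumeName:
// volumes: [
//   {
//     name: 'content-share'
//     storageType: 'AzureFile'
//     storageName: 'content-share'
//   }
// ]
```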
The complete env block on the Container App ties it together:

```bicep
env: [
  { name: 'AzureWebJobsStorage', secretRef: 'storage-connection-string' }
  { name: 'WORKFLOWS_STORAGE_CONNECTION_STRING', secretRef: 'storage-connection-string' }
  { name: 'AzureBlob_connectionString', secretRef: 'storage-connection-string' }
  { name: 'azurequeues_connectionString', secretRef: 'storage-connection-string' }
  { name: 'FUNCTIONS_WORKER_RUNTIME', value: 'node' }
  { name: 'FUNCTIONS_WORKER_RUNTIME_VERSION', value: '~4' }
  { name: 'AzureWebJobsFeatureFlags', value: 'EnableMultiLanguageWorker' }
  { name: 'APP_KIND', value: 'workflowapp' }
  { name: 'WEBSITE_SITE_NAME', value: appName }
  { name: 'APPINSIGHTS_INSTRUMENTATIONKEY', value: appInsightsKey }
  { name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING', secretRef: 'storage-connection-string' }
  { name: 'WEBSITE_CONTENTSHARE', value: contentShareName }
  { name: 'WEBSITE_HOSTNAME', value: '${appName}.${acaEnv.properties.defaultDomain}' }
  { name: 'AzureFunctionsWebHost__hostid', value: appName }
  { name: 'WEBSITE_RESOURCE_GROUP', value: resourceGroup().name }
  { name: 'WEBSITE_OWNER_NAME', value: '${subscription().subscriptionId}+${resourceGroup().name}-WestEuropewebspace' }
  { name: 'servicebus_connectionString', secretRef: 'servicebus-connection-string' }
]
```

## Deploying

Two scripts handle the full deploy cycle:

```shell
az deployment group create \
  --resource-group LogicAppHubRG \
  --template-file infra/main.bicep \
  --parameters infra/main.bicepparam \
  --output table
```

A key lesson learned: ACA caches the image digest at revision creation time. If you always push to :latest and then do `az containerapp update --image ...:latest`, ACA may keep running the old digest.
Always resolve and deploy the exact digest after each build:

```shell
az acr build \
  --registry labasicdemoacr \
  --image logicapp-basicdemo:latest \
  --file ../LABasicDemo/Dockerfile \
  ../LABasicDemo

# Resolve the exact digest just pushed — avoids stale :latest caching in ACA
DIGEST=$(az acr repository show-manifests \
  --name labasicdemoacr \
  --repository logicapp-basicdemo \
  --orderby time_desc \
  --query "[0].digest" -o tsv)

az containerapp update \
  --name la-basicdemo \
  --resource-group LogicAppHubRG \
  --image "labasicdemoacr.azurecr.io/logicapp-basicdemo@$DIGEST"
```

`az acr build` runs the Docker build in the cloud — no local Docker needed.

After a successful deploy.sh run, the resource group contains five resources:

- **la-basicdemo** — the Container App running the Logic Apps runtime
- **la-basicdemo-env** — the ACA Environment (shared networking + Log Analytics sink)
- **la-basicdemo-logs** — Log Analytics workspace (required by the ACA Environment)
- **labasicdemoacr** — Azure Container Registry storing the built image
- **logicappsbts-appins** — Application Insights (pre-existing, shared across Logic Apps)

The Service Bus namespace (la-basicdemobus) and the storage account (logicappshubsa) are in a separate resource group — they were provisioned earlier and are shared resources.

## The workflows

Each workflow lives in its own directory as a single workflow.json. The trigger types for wf1–wf4 are standard service-provider patterns well covered in the Logic Apps docs — the key thing is they all use serviceProviderConfiguration with a connectionName that maps to connections.json.

While designing these workflows we hit two hard limits in Linux containers worth flagging upfront:

**XSLT maps don't work.** The built-in Transform XML action delegates to NetFxWorker.exe — a .NET Framework Windows binary bundled in the extension bundle. It can't execute on Linux. The workflow registers as Healthy and the trigger fires, but the transform action times out waiting for a worker that can never start.
ACA is Linux-only, so there is no workaround.

**Managed API connections don't work.** The gallery-style connectors (Office 365, SharePoint, SQL, etc.) require the runtime to acquire an ARM token at execution time. We tried a service principal via WORKFLOWAPP_AAD_* variables and a user-assigned managed identity — neither worked. The root cause is that ACA doesn't expose the App Service MSI endpoint that the managed-connection code path depends on. Full details in the connector boundary section below.

**Liquid / JSON transforms do work.** wf6 demonstrates this: a JSON-to-JSON map stored in Artifacts/Maps/, processed in-process by the runtime with no external worker. The map lives in Artifacts/Maps/PersonToContact.liquid:

```liquid
{
  "fullName": "{{content.firstName}} {{content.lastName}}",
  "email": "{{content.email}}"
}
```

The workflow receives a JSON POST, applies the map, and returns the transformed JSON:

```json
{
  "actions": {
    "LiquidTransform": {
      "type": "Liquid",
      "kind": "JsonToJson",
      "inputs": {
        "content": "@triggerBody()",
        "map": {
          "source": "LogicApp",
          "name": "PersonToContact.liquid"
        }
      }
    }
  }
}
```

Test:

```shell
# $WF6_URL stands in for wf6's trigger callback URL (including its SAS signature)
curl -X POST "$WF6_URL" \
  -H "Content-Type: application/json" \
  -d '{"firstName":"John","lastName":"Doe","email":"[email protected]"}'
# → {"fullName":"John Doe","email":"[email protected]"}
```

wf5 started as an attempt to use a managed API connection (the gallery-style Service Bus connector via Microsoft.Web/connections). It ended up as a service provider trigger + built-in HTTP action instead — because managed connections don't work in ACA containers. See the next section for the full story.

## The connector boundary

This is the most important architectural decision when choosing this deployment model. Logic Apps connectors come in two families:

**Service provider connectors (built-in)** — authenticate via connection strings stored in app settings. The runtime calls the service directly. No ARM roundtrip.
These work perfectly in containers:

- Azure Blob Storage
- Azure Queue Storage
- Azure Service Bus (Basic SKU is sufficient)
- Azure Event Hubs
- Azure OpenAI (with API key auth)
- Azure AI Search (with API key auth)
- HTTP / HTTPS

**Managed API connections (Microsoft.Web/connections)** — the gallery of 400+ connectors (Office 365, SharePoint, Salesforce, SQL, etc.). These require the runtime to call back into Azure Resource Manager at trigger/action time to resolve the connection and acquire an OAuth token. The runtime needs an App Service MSI endpoint (IDENTITY_ENDPOINT + IDENTITY_HEADER) to acquire an ARM token at execution time — which App Service injects automatically but ACA does not provide. The error is unambiguous:

```
AuthorizationFailed: Can't get an access token for the managed identity on the
'https://management.core.windows.net/' resource, due to no valid credentials
in local development.
```

We tried two approaches: a service principal via WORKFLOWAPP_AAD_CLIENTID / TENANTID / CLIENTSECRET, and a user-assigned managed identity via AZURE_CLIENT_ID. Neither worked. The WORKFLOWAPP_AAD_* variables are only active in the Hybrid Deployment Model (Arc-enabled AKS + the ACA Logic Apps extension) — not in the custom image approach used here. Without the MSI endpoint, there is no path forward for managed connections in vanilla ACA.

XSLT maps (Artifacts/Maps) also don't work on Linux containers. The built-in Transform XML action delegates to Microsoft.Azure.Workflows.Functions.NetFxWorker.exe — a Windows PE32 (.NET Framework) binary bundled inside the extension bundle. On any Linux host (local Docker, ACA, or otherwise) the OS refuses to execute it: the process start fails with "Permission denied" because the kernel rejects a non-ELF binary. The workflow shows as Healthy, the trigger fires, but the transform action times out waiting for a worker that can never start.
This is a Linux container constraint, not specific to Mac or any particular environment — the .exe simply doesn't run on Linux without Wine.

Summary of what doesn't work in Linux containers:

| Feature | Reason |
|---------|--------|
| Managed API connections (O365, SharePoint, etc.) | Requires App Service MSI endpoint not present in ACA |
| XSLT / Transform XML maps | Requires NetFxWorker.exe — Windows PE32, won't run on Linux |

Design for this constraint from the start: if you need managed connectors or XSLT maps, use App Service (Standard plan), which runs on Windows and supports both. If your workflows use service provider connectors + HTTP actions, ACA Linux containers are a great fit — and that covers a very large number of real-world integration scenarios.

## Validation: run history survives restarts

After getting everything working, the key validation is:

1. Trigger wf1 (HTTP), wf2 (upload blob), wf3 (enqueue message), wf4/wf5 (send SB messages)
2. Note the run IDs
3. Stop the container: `az containerapp revision deactivate --name la-basicdemo --resource-group LogicAppHubRG --revision <revision-name>`
4. Start it again: `az containerapp revision activate ...`
5. Check run history — same run IDs, same history

```shell
FQDN=$(az containerapp show --name la-basicdemo --resource-group LogicAppHubRG \
  --query 'properties.configuration.ingress.fqdn' -o tsv)

curl -s "https://${FQDN}/runtime/webhooks/workflow/api/management/workflows/wf1/runs?\$top=5&api-version=2022-05-01"
```

Before the AzureFunctionsWebHost__hostid fix, run history was lost on every restart. After the fix, it survives indefinitely — the same run IDs appear every time.

## n8n vs Logic Apps Standard on ACA

| | n8n (self-hosted) | Logic Apps Standard on ACA |
|---|-------------------|----------------------------|
| Compute | VM or container host | Serverless, scale to 0 possible |
| Billing | Fixed VM cost | Per request + container uptime |
| Storage for state | n8n database (SQLite/Postgres) | Azure Table Storage (~pennies) |
| Connectors | 400+ community nodes | Service providers (built-in) + HTTP actions |
| Managed connectors (O365 etc.) | ✅ | ❌ (App Service only) |
| XSLT maps | ✅ | ❌ (Windows container only) |
| Liquid / JSON transforms | ✅ | ✅ |
| Audit / run history | Basic | Full run history with input/output per action |
| Visual designer | ✅ | ✅ (VS Code extension) |
| GitOps / IaC | Manual | Native (JSON files + Bicep) |
| AI agent support | Via community nodes | Native (Azure OpenAI built-in, Agent action) |

The sweet spot for Logic Apps Standard on ACA: Azure-native workloads, event-driven pipelines (blob, queue, Service Bus), outbound HTTP integrations, and AI orchestration workflows — where you want durable run history and GitOps-friendly deployment without paying for an always-on App Service plan.
