Infrastructure

The infrastructure is configured as code via Terraform, so that changes are versioned, reviewable, and reproducible across environments.

Architecture

Resources outside of Terraform

The following things in Azure are managed outside of Terraform:

  • Subscriptions
  • Active Directory (users, groups, service principals, etc.)
  • Service connections
  • Configuration files, stored as blobs
  • Role assignments

Environments

Environment | Azure Resource Group                             | Terraform Workspace | Git Reference
Dev         | $(AGENCY_RESOURCE_GROUP_PREFIX)-eligibility-dev  | dev                 | main
Test        | $(AGENCY_RESOURCE_GROUP_PREFIX)-eligibility-test | test                | release candidate tag
Prod        | $(AGENCY_RESOURCE_GROUP_PREFIX)-eligibility-prod | default             | release tag

(See Version number format for the naming pattern of release candidate and release tags.)

All resources in these Resource Groups should be reflected in Terraform in this repository. The exceptions are the resources listed under Resources outside of Terraform above.

For browsing the Azure portal, you can switch your Default subscription filter.

Access restrictions

We restrict which IP addresses can access the app service by using a Web Application Firewall (WAF) configured on a Front Door. There is an exception for the /healthcheck and /static paths, which can be accessed from any IP address.

The app service itself allows access only from our Front Door and from Azure availability tests.
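To double-check these restrictions on a given environment, you can list the App Service's access-restriction rules with the Azure CLI and probe the /healthcheck exception directly; a minimal sketch with placeholder names:

# list the inbound access-restriction rules on the app service
az webapp config access-restriction show --resource-group <resource group name> --name <app service name>

# /healthcheck should respond from any IP; other paths are filtered by the WAF
curl -i https://<front door host>/healthcheck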

Monitoring

We have ping tests set up to notify us about the availability of each environment. Alerts go to #benefits-notify.
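If you want to check which alert rules exist and where they route, one option is the Azure CLI; a sketch, assuming the alerts are metric-based alert rules living in the environment's Resource Group:

az monitor metrics alert list --resource-group <resource group name> --output table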

Logs

Logs can be found in a couple of places:

Azure App Service Logs

Open the Logs for the environment you are interested in. The following tables are likely of interest:

  • AppServiceConsoleLogs: stdout and stderr coming from the container
  • AppServiceHTTPLogs: requests coming through App Service
  • AppServicePlatformLogs: deployment information

For some pre-defined queries, click Queries, then Group by: Query type, and look under Query pack queries.
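The same tables can be queried from the command line. A minimal sketch using the Azure CLI (the log-analytics extension and the workspace GUID are assumptions; substitute your own values):

az monitor log-analytics query --workspace <workspace id> --analytics-query "AppServiceConsoleLogs | where TimeGenerated > ago(1h) | order by TimeGenerated desc"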

Azure Monitor Logs

Open the Logs for the environment you are interested in.

The following tables are likely of interest:

  • requests
  • traces

In these tables, you should see recent log output. Note there is some latency.

See Failures in the sidebar (or exceptions under Logs) for application errors/exceptions.
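These tables can also be queried without the portal, via the Application Insights CLI extension; a sketch with placeholder names (the extension and the exact resource names are assumptions):

az monitor app-insights query --app <application insights name> --resource-group <resource group name> --analytics-query "traces | order by timestamp desc | take 50"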

Live tail

After setting up the Azure CLI, you can use the following command to stream live logs:

az webapp log tail --resource-group <resource group name> --name <app service name> 2>&1 | grep -v /healthcheck

e.g.

az webapp log tail --resource-group courtesy-cards-eligibility-prod --name mst-courtesy-cards-eligibility-server-prod 2>&1 | grep -v /healthcheck

SCM

Docker logs can be viewed in the Advanced Tools (Kudu) for the instance. The URL pattern is https://<app service name>.scm.azurewebsites.net/api/logs/docker
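The Kudu log endpoints require authentication; one way to script access is with the App Service's publishing credentials. A sketch, assuming your account is allowed to read them:

# fetch the publishing (deployment) credentials for the app
az webapp deployment list-publishing-credentials --resource-group <resource group name> --name <app service name>

# then request the Docker log listing with those credentials
curl -u '<publishing user>:<publishing password>' https://<app service name>.scm.azurewebsites.net/api/logs/docker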

Making changes

Terraform is plan'd when commits that change any file under the terraform directory are either:

  • merged into the main branch
  • tagged with a release candidate or release tag

Then, the Azure DevOps pipeline that ran the plan will wait for approval to run apply.
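For example, a plan for the test environment can be kicked off just by pushing a release candidate tag; a sketch (the tag name must follow the Version number format referenced above, left as a placeholder here):

git tag <release candidate tag>
git push origin <release candidate tag>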

While other automation for this project is done through GitHub Actions, we use an Azure DevOps Pipeline (above) for a couple of reasons:

  • Easier authentication with the Azure API using a service connection
  • Log output is hidden, avoiding accidentally leaking secrets

Local development

  1. Get access to the Azure account.
  2. Install dependencies:
     • Azure CLI
     • Terraform - see exact version in pipeline/deploy.yml
  3. Authenticate using the Azure CLI.
     az login
  4. Outside the dev container, navigate to the terraform/ directory.
  5. Create a terraform.tfvars file and specify the variables (see the worked sketch after this list).
  6. Initialize Terraform. You can also use this script later to switch between environments.
     ./init.sh <env> <agency>
  7. Make changes to Terraform files.
  8. Preview the changes, as necessary.
     terraform plan
  9. Submit the changes via pull request.
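A worked sketch of steps 4-8 for the dev environment of the MST agency; the variable name in terraform.tfvars is a hypothetical placeholder - check the repository's variables.tf for the real inputs:

cd terraform

# terraform.tfvars - variable name below is hypothetical; see variables.tf
cat > terraform.tfvars <<'EOF'
agency_resource_group_prefix = "mst"
EOF

# initialize the backend and select the dev workspace
./init.sh dev mst

# preview changes without applying them
terraform plan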

Azure environment setup

The steps we took to set up MST’s environment are documented in a separate Google Doc.

In general, the steps that must be done manually before the pipeline can be run are:

  • Create an Azure DevOps organization and project
  • Request a free grant of parallel jobs using the form at https://aka.ms/azpipelines-parallelism-request
  • Create Resource Group and storage account dedicated to the Terraform state (see the sketch after this list)
  • Create container in storage account for Terraform state
  • Create environment Resource Group for each environment, Region: West US
      • We create these manually to avoid having to give the pipeline service connection permissions for creating resource groups
  • Create Terraform workspace for each environment
  • Trigger a pipeline run to verify plan and apply
      • Known chicken-and-egg problem: Terraform both creates the Key Vault and expects a secret within it, so it will always fail on the first deploy. Add the Benefits slack email secret and re-run the pipeline.
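The Terraform state bootstrap steps can be scripted with the Azure CLI; a minimal sketch with placeholder names (the real names are whatever your team chose when setting up the environment):

# Resource Group and storage account dedicated to the Terraform state
az group create --name <state resource group> --location westus
az storage account create --name <state storage account> --resource-group <state resource group>

# container in the storage account for the Terraform state
az storage container create --name <state container> --account-name <state storage account>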

Once the pipeline has run, there are a few more steps to be done manually in the Azure portal. These are related to configuring the service principal used for ETL:

  • Create the service principal
  • Give the ETL service principal access to the prod storage account created by the pipeline:
      • Navigate to the storage account container
      • Select Access Control (IAM)
      • Select Add, then select Add role assignment
      • In the Role tab, select Storage Blob Data Contributor
      • In the Members tab, select Select Members and search for the ETL service principal. Add it to the role.
      • Also in the Members tab, add a description of This role assignment gives write access only for the path of the hashed data file.
      • In the Conditions tab, select Add condition and change the editor type to Code
      • Add the following condition into the editor, filling in <filename> with the appropriate value:
(
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike '<filename>'
 )
)
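The same role assignment can be made non-interactively; a sketch using the Azure CLI, assuming the ETL service principal already exists and substituting real IDs (the --condition flags require a recent CLI version):

az role assignment create \
  --assignee <ETL service principal app id> \
  --role "Storage Blob Data Contributor" \
  --scope <storage container resource id> \
  --description "This role assignment gives write access only for the path of the hashed data file." \
  --condition "((@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike '<filename>'))" \
  --condition-version "2.0"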