# Infrastructure
The infrastructure is configured as code via Terraform, so changes are reviewed, versioned, and reproducible.
## Architecture
### Resources outside of Terraform
The following things in Azure are managed outside of Terraform:
- Subscriptions
- Active Directory (users, groups, service principals, etc.)
- Service connections
- Configuration files, stored as blobs
- Role assignments
## Environments
| Environment | Azure Resource Group | Terraform Workspace | Git Reference |
|---|---|---|---|
| Dev | `$(AGENCY_RESOURCE_GROUP_PREFIX)-eligibility-dev` | `dev` | `main` |
| Test | `$(AGENCY_RESOURCE_GROUP_PREFIX)-eligibility-test` | `test` | release candidate tag |
| Prod | `$(AGENCY_RESOURCE_GROUP_PREFIX)-eligibility-prod` | `default` | release tag |
(See Version number format for the naming pattern used for release candidate and release tags.)
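When working locally, the checked-out Git reference and the selected Terraform workspace should match this table. A minimal sketch for the Dev row (the `init.sh` script described under Local development handles workspace selection for you):

```bash
# Dev deploys from the main branch...
git checkout main

# ...and uses the "dev" Terraform workspace (Prod uses the built-in "default")
terraform workspace select dev
```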
All resources in these Resource Groups should be reflected in Terraform in this repository. The exceptions are:
- Secrets, such as values under Key Vault. `prevent_destroy` is used on these resources.
- Things managed outside of Terraform
For browsing the Azure portal, you can switch your Default subscription filter.
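The same idea applies on the command line: `az` commands run against your active subscription, which you may need to switch. A quick sketch:

```bash
# See which subscriptions your account can reach
az account list --output table

# Make the relevant subscription the active one for subsequent commands
az account set --subscription "<subscription name or id>"
```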
## Access restrictions
We restrict which IP addresses can access the app service by using a Web Application Firewall (WAF) configured on a Front Door. There is an exception for the `/healthcheck` and `/static` paths, which can be accessed from any IP address.

The app service itself allows access only from our Front Door and from Azure availability tests.
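To check what is currently allowed, the rules can be inspected from the CLI; a sketch, assuming your account can read the resource group:

```bash
# Show the access restriction rules in effect for the app service
az webapp config access-restriction show \
  --resource-group <resource group name> \
  --name <app service name>
```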
## Monitoring
We have ping tests set up to notify about availability of each environment. Alerts go to #notify-benefits.
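To see the configured alert rules from the CLI rather than the portal, a sketch, assuming the ping tests are backed by Azure Monitor metric alert rules:

```bash
# List metric alert rules (including availability alerts) in a resource group
az monitor metrics alert list \
  --resource-group <resource group name> \
  --output table
```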
### Logs
Logs can be found in a couple of places:
#### Azure App Service Logs
Open the Logs for the environment you are interested in. The following tables are likely of interest:
- `AppServiceConsoleLogs`: `stdout` and `stderr` coming from the container
- `AppServiceHTTPLogs`: requests coming through App Service
- `AppServicePlatformLogs`: deployment information
For some pre-defined queries, click Queries, then Group by: Query type, and look under Query pack queries.
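These tables can also be queried without the portal; a sketch, assuming you know the Log Analytics workspace ID for the environment:

```bash
# Pull the 20 most recent App Service HTTP log entries
az monitor log-analytics query \
  --workspace <log analytics workspace id> \
  --analytics-query "AppServiceHTTPLogs | order by TimeGenerated desc | take 20"
```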
#### Azure Monitor Logs
Open the Logs for the environment you are interested in.
The following tables are likely of interest:
- `requests`
- `traces`

In these tables, you should see recent log output. Note there is some latency.
See Failures in the sidebar (or `exceptions` under Logs) for application errors/exceptions.
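These tables can likewise be queried from the CLI; a sketch, assuming the `application-insights` extension is installed and you know the Application Insights resource name:

```bash
# Pull the 20 most recent application trace entries
az monitor app-insights query \
  --app <application insights name> \
  --resource-group <resource group name> \
  --analytics-query "traces | order by timestamp desc | take 20"
```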
#### Live tail
After setting up the Azure CLI, you can use the following command to stream live logs:
```bash
az webapp log tail --resource-group <resource group name> --name <app service name> 2>&1 | grep -v /healthcheck
```

e.g.

```bash
az webapp log tail --resource-group courtesy-cards-eligibility-prod --name mst-courtesy-cards-eligibility-server-prod 2>&1 | grep -v /healthcheck
```
#### SCM
Docker logs can be viewed in the Advanced Tools for the instance. The URL pattern is `https://<app service name>.scm.azurewebsites.net/api/logs/docker`.
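A sketch for fetching that listing from the command line instead, assuming basic-auth publishing credentials are enabled for the app service:

```bash
# Look up the app's publishing (deployment) credentials
user=$(az webapp deployment list-publishing-credentials \
  --resource-group <resource group name> --name <app service name> \
  --query publishingUserName --output tsv)
pass=$(az webapp deployment list-publishing-credentials \
  --resource-group <resource group name> --name <app service name> \
  --query publishingPassword --output tsv)

# The Kudu API responds with a JSON listing of the Docker log files
curl -u "$user:$pass" "https://<app service name>.scm.azurewebsites.net/api/logs/docker"
```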
## Making changes
Terraform is `plan`'d when commits that change any file under the `terraform` directory are either:

- merged into the `main` branch
- tagged with a release candidate or release tag
Then, the Azure DevOps pipeline that ran the plan will wait for approval to run apply.
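Conceptually, the pipeline is running the standard Terraform sequence; a sketch of the equivalent commands (the authoritative definition lives in `pipeline/deploy.yml`):

```bash
# Produce a plan and save it for review
terraform plan -out=plan.tfplan

# After approval, apply exactly the plan that was reviewed
terraform apply plan.tfplan
```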
While other automation for this project is done through GitHub Actions, we use an Azure DevOps Pipeline (above) for a couple of reasons:
- Easier authentication with the Azure API using a service connection
- Log output is hidden, avoiding accidentally leaking secrets
## Local development
1. Get access to the Azure account.
1. Install dependencies:
    - Terraform - see the exact version in `pipeline/deploy.yml`
1. `az login`
1. Outside the dev container, navigate to the `terraform/` directory.
1. Create a `terraform.tfvars` file and specify the variables (see the sketch after this list).
1. Initialize Terraform. You can also use this script later to switch between environments.

    ```bash
    ./init.sh <env> <agency>
    ```

1. Make changes to Terraform files.
1. Preview the changes, as necessary.

    ```bash
    terraform plan
    ```

1. Submit the changes via pull request.
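A sketch of the `terraform.tfvars` step above, done from the shell; the variable names below are hypothetical, so check the repository's Terraform variable definitions (conventionally `variables.tf`) for the real ones:

```bash
# Hypothetical variable names for illustration only; see variables.tf for the real ones
cat > terraform.tfvars <<'EOF'
agency_resource_group_prefix = "mst"
environment_name             = "dev"
EOF
```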
## Azure environment setup
The steps we took to set up MST’s environment are documented in a separate Google Doc.
In general, the steps that must be done manually before the pipeline can be run are:
- Create an Azure DevOps organization and project
- Request a free grant of parallel jobs using the form at https://aka.ms/azpipelines-parallelism-request
- Create a Resource Group and storage account dedicated to the Terraform state (a sketch of these steps follows this list)
- Create a container in the storage account for the Terraform state
- Create an environment Resource Group for each environment, Region: West US
    - We create these manually to avoid having to give the pipeline service connection permission to create resource groups
- Create a Terraform workspace for each environment
- Trigger a pipeline run to verify `plan` and `apply`
- Known chicken-and-egg problem: Terraform both creates the Key Vault and expects a secret within it, so it will always fail on the first deploy. Add the Benefits Slack email secret and re-run the pipeline.
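A sketch of the Terraform state bootstrap steps above as CLI commands; all names are placeholders, and the container name is an assumption:

```bash
# Resource Group and storage account dedicated to the Terraform state
az group create --name <state resource group> --location westus
az storage account create \
  --name <state storage account> \
  --resource-group <state resource group> \
  --sku Standard_LRS

# Container in the storage account for the Terraform state ("tfstate" is an assumed name)
az storage container create \
  --name tfstate \
  --account-name <state storage account>

# One Terraform workspace per environment, run from the terraform/ directory
# (Prod uses the built-in "default" workspace, which already exists)
terraform workspace new dev
terraform workspace new test
```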
Once the pipeline has run, there are a few more steps to be done manually in the Azure portal. These are related to configuring the service principal used for ETL:
- Create the service principal
- Give the ETL service principal access to the `prod` storage account created by the pipeline:
    - Navigate to the storage account container
    - Select Access Control (IAM)
    - Select Add, then select Add role assignment
    - In the Role tab, select `Storage Blob Data Contributor`
    - In the Members tab, select Select Members and search for the ETL service principal. Add it to the role.
    - Also in the Members tab, add a description of `This role assignment gives write access only for the path of the hashed data file.`
    - In the Conditions tab, select Add condition and change the editor type to `Code`
    - Add the following condition into the editor, filling in `<filename>` with the appropriate value:
```
(
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike '<filename>'
 )
)
```
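The same setup can be scripted rather than clicked through; a sketch using the az CLI, with placeholder values:

```bash
# Create the ETL service principal
az ad sp create-for-rbac --name <etl service principal name>

# Grant Storage Blob Data Contributor on the container, restricted to the data file path
az role assignment create \
  --assignee <service principal app id> \
  --role "Storage Blob Data Contributor" \
  --scope <storage container resource id> \
  --description "This role assignment gives write access only for the path of the hashed data file." \
  --condition "((@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike '<filename>'))" \
  --condition-version "2.0"
```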