Migrate From Storage-Based to Pub/Sub-Based GCP Audit Log Integration Using Terraform

This topic describes how you can use Terraform to migrate your existing GCP Storage-based audit log integration to a Pub/Sub-based audit log integration for audit log monitoring.

Lacework recommends this migration procedure because it ensures audit log monitoring coverage for your GCP organization or project during the migration.

The migration involves the following four steps. All the steps are required to ensure that there is no gap in delivery of audit log data from GCP to the Lacework platform during the migration.

  1. Collect details of existing Storage-based audit log integration
  2. Delete Terraform files for the existing Storage-based audit log integration
  3. Create Pub/Sub-based audit log integration
  4. Mark existing Storage-based audit log integration for migration
Note

You can also use the Lacework Console to manually migrate your Storage-based integration to a Pub/Sub-based integration. For more information, see Migrate From Storage-Based to Pub/Sub-Based GCP Audit Log Integration - Manual Configuration.

Important

If you do not want audit log monitoring coverage for your GCP organization or project during the migration, you can skip this migration procedure and perform the following four steps. Note that this can result in a brief gap in delivery of audit log data from GCP to the Lacework platform.

  1. Delete the Terraform files for your Storage-based audit log integration. For more information, see Delete Terraform Files for the Existing Storage-Based Audit Log Integration.
  2. Create a Pub/Sub-based audit log integration, either with Terraform as described in Create Pub/Sub-Based Audit Log Integration, or manually in the Lacework Console.
  3. Do one of the following to delete your Storage-based audit log integration:
    • Use the lacework cloud-account delete Lacework CLI command to delete the integration.
    • In the Lacework Console, go to Settings > Integrations > Cloud Accounts and delete the integration.
  4. (Optional) Delete the sink and storage bucket for the Storage-based audit log integration. For more information, see Delete the Sink and Storage Bucket for the Storage-based Integration.
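For reference, the CLI portion of this faster path can be sketched as a short shell session. The integration ID below is a hypothetical placeholder, and each command is echoed as a dry run; remove the echo to execute it.

```shell
# Dry-run sketch of the no-coverage path. INTEGRATION_ID is a
# hypothetical placeholder; copy the real ID from Settings >
# Integrations > Cloud Accounts in the Lacework Console.
INTEGRATION_ID="GCP_AL_EXAMPLE_123"

# Step 1: remove the Storage-based Terraform files (default location).
echo "rm -f ~/lacework/gcp/main.tf ~/lacework/gcp/terraform.tfstate"

# Step 3: delete the Storage-based integration with the Lacework CLI.
DELETE_CMD="lacework cloud-account delete ${INTEGRATION_ID}"
echo "${DELETE_CMD}"
```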

Prerequisites

Collect Details of Existing Storage-Based Audit Log Integration

You can reuse the project and service account that you created for the Storage-based audit log integration for the Pub/Sub-based audit log integration. Do the following to collect the project and service account details from the Lacework Console.

  1. In the Lacework Console, go to Settings > Integrations > Cloud Accounts.

  2. Select the row for the Storage-based integration. A Storage-based integration shows GCP as the provider and Audit Log (Storage) as the type.

    The Cloud Account page displays the integration details.

    • The Account field displays the ID of the project in which you configured the resources for the Storage-based integration.
    • The Client Email field displays the email ID of the service account you created for the Storage-based integration. The service account email ID is in the format: my-service-account@my-project-name.iam.gserviceaccount.com.
    • The ID field displays the ID of the integration.
  3. Copy the project ID, service account email ID, and integration ID for use in the procedures below.
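If you prefer the CLI to the Console, the same details can also be read with the Lacework CLI's cloud-account commands. This is a dry-run sketch (the commands are echoed rather than executed), and the integration ID shown is a hypothetical placeholder.

```shell
# Dry-run sketch: read integration details from the Lacework CLI rather
# than the Console. EXAMPLE_ID is a hypothetical integration ID; the
# list output includes the real IDs for your account.
EXAMPLE_ID="GCP_AL_EXAMPLE_123"

LIST_CMD="lacework cloud-account list"                # lists all integrations
SHOW_CMD="lacework cloud-account show ${EXAMPLE_ID}"  # details for one

echo "${LIST_CMD}"
echo "${SHOW_CMD}"
```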

Delete Terraform Files for the Existing Storage-Based Audit Log Integration

By default, the Terraform files for the Storage-based integration are created in the ~/lacework/gcp directory when you run the lacework generate cloud-account gcp Lacework CLI command.

Lacework recommends that you delete the main.tf and terraform.tfstate Terraform files for your Storage-based integration to ensure that they are not used to accidentally recreate the Storage-based integration or cause Terraform state conflicts.
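A minimal cleanup sketch, assuming the default ~/lacework/gcp output directory; adjust TF_DIR if you generated the files elsewhere. The sketch also removes terraform.tfstate.backup, the backup state file Terraform writes alongside terraform.tfstate.

```shell
# Delete the Storage-based integration's Terraform files so they cannot
# be re-applied by accident. TF_DIR assumes the default output location.
TF_DIR="${HOME}/lacework/gcp"
rm -f "${TF_DIR}/main.tf" \
      "${TF_DIR}/terraform.tfstate" \
      "${TF_DIR}/terraform.tfstate.backup"
```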

Create Pub/Sub-Based Audit Log Integration

In this procedure, you will create a Pub/Sub topic and subscription to record audit log events and add a Pub/Sub-based audit log integration to the Lacework platform.

  1. Do one of the following:

    • For an organization-level Pub/Sub-based audit log integration, run the following command:

      lacework generate cloud-account gcp  \
      --audit_log --use_pub_sub --audit_log_integration_name AuditLogIntegName \
      --organization_integration \
      --organization_id OrganizationId \
      --project_id ProjectId \
      --service_account_credentials PathToServiceAccountKeyFile \
      --output OutputDirectoryPath \
      --noninteractive
    • For a project-level Pub/Sub-based audit log integration, run the following command:

      lacework generate cloud-account gcp  \
      --audit_log --use_pub_sub --audit_log_integration_name AuditLogIntegName \
      --project_id ProjectId \
      --service_account_credentials PathToServiceAccountKeyFile \
      --output OutputDirectoryPath \
      --noninteractive

    Where:

    • AuditLogIntegName is the name of the Pub/Sub-based audit log integration.
    • OrganizationId is the ID of the GCP organization being integrated.
    • ProjectId is the ID of the project you identified in the Collect Details of Existing Storage-Based Audit Log Integration procedure.
    • PathToServiceAccountKeyFile is the path to the service account key JSON file for the service account you identified in the Collect Details of Existing Storage-Based Audit Log Integration procedure.
    • OutputDirectoryPath is the path to the directory where you want the Terraform files for the Pub/Sub-based audit log integration to be created. For example, if you specify ~/lacework/gcp-pubsub, the Terraform files are created in the ~/lacework/gcp-pubsub directory.
    Warning
    • Lacework recommends that you specify an output directory that is different from the one where the Terraform files for a GCP configuration or audit log integration are stored. If you specify an output directory that contains the Terraform files for an existing GCP configuration or audit log integration, all the resources created for that integration in GCP will be deleted.
    • If you do not specify the --output OutputDirectoryPath option, the Terraform files will be created in the ~/lacework/gcp directory. If the ~/lacework/gcp directory contains the Terraform files for an existing GCP configuration or audit log integration, all the resources created for that integration in GCP will be deleted.
    Tip

    If you have a CI/CD pipeline, ensure that you run the Terraform for the Pub/Sub-based audit log integration in a directory that does not contain Terraform files for an existing GCP configuration or audit log integration.

  2. Navigate to the output directory that you specified in the --output OutputDirectoryPath option.

  3. Run terraform init to initialize the working directory, and then run terraform plan to review the changes that will be applied.

  4. Once you are satisfied with the planned changes, run terraform apply to create the Pub/Sub resources and the Lacework integration.
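Putting the steps together, a project-level run might look like the following dry-run sketch. Every value (integration name, project ID, key path, output directory) is an example placeholder, and each command is echoed rather than executed; remove the echo to run it.

```shell
# Dry-run sketch of the end-to-end procedure for a project-level
# integration. All values are example placeholders.
OUT_DIR="${HOME}/lacework/gcp-pubsub"

echo "lacework generate cloud-account gcp \\
  --audit_log --use_pub_sub --audit_log_integration_name PubSubAuditLog \\
  --project_id my-project-id \\
  --service_account_credentials ~/keys/my-service-account.json \\
  --output ${OUT_DIR} --noninteractive"

echo "cd ${OUT_DIR}"
echo "terraform init"   # initialize the provider plugins first
echo "terraform plan"   # review the resources Terraform will create
echo "terraform apply"  # create the Pub/Sub-based integration
```

Note that terraform init must run once in each new output directory before terraform plan will succeed.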

Mark Existing Storage-Based Audit Log Integration for Migration

After you create the Pub/Sub-based audit log integration, you must mark the existing Storage-based audit log integration for migration. When you mark a Storage-based integration for migration, the Lacework platform ensures that all the audit log messages in the storage bucket for the integration are ingested, and then safely deletes the integration.

  1. Run the following Lacework CLI command:

    lacework cloud-account migrate IntegrationID

    Where IntegrationID is the integration ID you identified in the Collect Details of Existing Storage-Based Audit Log Integration procedure.
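As a sketch, with a hypothetical integration ID, the migration step and a follow-up check look like this (commands echoed as a dry run; remove the echo to execute):

```shell
# Dry-run sketch: mark the Storage-based integration for migration, then
# later confirm it has been deleted. INTEGRATION_ID is hypothetical.
INTEGRATION_ID="GCP_AL_EXAMPLE_123"

MIGRATE_CMD="lacework cloud-account migrate ${INTEGRATION_ID}"
echo "${MIGRATE_CMD}"

# Deletion can take up to five hours; re-list the integrations later to
# confirm the Storage-based entry is gone.
echo "lacework cloud-account list"
```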

Delete the Sink and Storage Bucket for the Storage-Based Integration (Optional)

To reduce your GCP storage costs, you can delete the log sink and storage bucket for the Storage-based integration.

  1. In the Lacework Console, go to Settings > Integrations > Cloud Accounts.
  2. Ensure that the Storage-based audit log integration that you marked for migration is not displayed on the Cloud Accounts page. It can take up to five hours for an integration that is marked for migration to be deleted.
  3. To delete a sink, see the instructions in Manage Sinks.
  4. To delete a storage bucket, see the instructions in Delete a Bucket.
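The cleanup can also be sketched with the gcloud CLI. The sink and bucket names below are hypothetical placeholders for the ones created for your Storage-based integration, and the commands are echoed as a dry run; because bucket deletion is irreversible, review the names carefully before executing.

```shell
# Dry-run sketch of the optional cleanup. SINK_NAME and BUCKET_NAME are
# hypothetical placeholders; use the names from your Storage-based
# integration. Bucket deletion is irreversible.
SINK_NAME="example-lacework-sink"
BUCKET_NAME="example-lacework-bucket"

SINK_CMD="gcloud logging sinks delete ${SINK_NAME}"
BUCKET_CMD="gcloud storage rm --recursive gs://${BUCKET_NAME}"

echo "${SINK_CMD}"
echo "${BUCKET_CMD}"
```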