Workload Visibility and Security
Having completed the Cloud Compliance and Security integration, you are now ready to deploy the Lacework Agent. The Lacework Agent provides workload-level security insights, including network and DNS process attribution, process execution details, and file system change monitoring.
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| User and Entity Behavior Analytics (UEBA) | Workload Anomaly Detection; Process Dashboard and Polygraph; Network Dashboard and Polygraph; Filesystem Dashboard and Polygraph | Linux Agent; Windows Agent; Kubernetes Agent |
| Vulnerability Management | Host Vulnerability Management (with Active Vulnerability Detection); Container Vulnerability Management | Linux Agent; Kubernetes Agent; Kubernetes Admission Controller |
| Kubernetes Security Posture Management (KSPM) | Kubernetes Compliance Dashboard and Reports | EKS Compliance |
Select the deployment scenario for your environment below to get started:
- Linux
- Windows
- Kubernetes
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| User and Entity Behavior Analytics (UEBA) | Workload Anomaly Detection; Process Dashboard and Polygraph; Network Dashboard and Polygraph; Filesystem Dashboard and Polygraph | Linux Agent |
| Vulnerability Management | Vulnerability Management Dashboard (including Active Vulnerability Detection) | Linux Agent |
- Lacework CLI
- Install Script
- Other
Install the Linux Agent via the Lacework CLI 📎
To analyze application, host, and user behavior, Lacework uses a lightweight agent that securely forwards collected metadata to the Lacework platform for analysis. The agent requires minimal system resources and runs on most Linux distributions.
The Lacework CLI runs on macOS, Linux, and Windows. If you are new to the Lacework CLI, see Getting Started with the Lacework CLI.
You can use the Lacework CLI to create agent access tokens and install the Lacework agent on supported Linux distributions.
Manage Agent Access Token Using Lacework CLI​
You can use the Lacework CLI to create, edit, and enable or disable agent access tokens from the command line, without needing to log in to the Lacework Console.
Agent tokens should be treated as secret and not published. A token uniquely identifies a Lacework customer. If you suspect your token has been publicly exposed or compromised, generate a new token and update the new token on all machines using the old token. When complete, the old token can be safely disabled without interrupting Lacework services.
To list all agent access tokens:
lacework agent token list
For more information, see lacework agent token list.
To create a new agent access token:
lacework agent token create MyTokenName [description]
Note: [description] is an optional argument. You can use the agent token name to logically separate your deployments, for example, by environment type (QA, Dev) or system type (CentOS, RHEL).
For more information, see lacework agent token create.
To view agent access token details:
lacework agent token show MyAgentToken
For more information, see lacework agent token show.
To disable an agent access token:
lacework agent token update MyAgentToken --disable
For more information, see lacework agent token update.
Note: By design, agent tokens cannot be deleted. You can only disable tokens.
To enable an agent access token:
lacework agent token update MyAgentToken --enable
To update the name and description of an agent access token:
lacework agent token update MyAgentToken --name dev --description "k8s deployment for dev env"
Install Agent on Hosts with the Lacework CLI​
You can use the lacework agent install command to install the agent on a remote host over SSH if you have root privileges on the remote host. When you run this command without any options, an interactive prompt appears to collect the authentication information required to access the remote host.
This deployment method is suitable for one-off installations, but it does not account for custom configuration of the Lacework agent. To customize the agent through the /var/lib/lacework/config/config.json file, Lacework recommends using a configuration management tool such as Ansible or Chef.
To authenticate the remote host with a username and password:
lacework agent install MyHost --ssh_username MyUsername --ssh_password MyPassword
To authenticate the remote host with an identity file:
lacework agent install MyUsername@MyHost -i /path/to/your/key
To use an agent access token of your choice, do the following:
- Run the lacework agent token list command to view the list of agent access tokens.
- Copy the token you want to use and specify it using the --token option for the lacework agent install command.
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents.
Install Agent on AWS EC2 Instances​
You can use the following commands to install the Lacework agent on all the EC2 instances in your AWS account:
| Command | Description |
|---|---|
| lacework agent aws-install ec2ic | Uses EC2 Instance Connect to securely connect to EC2 instances and install the agent. |
| lacework agent aws-install ec2ssh | Uses SSH to securely connect to EC2 instances and install the agent. |
These commands are supported only for EC2 instances with public IP addresses that are open to the Internet on port 22.
Ensure that your AWS account credentials have the AmazonEC2FullAccess or equivalent policy attached.
Ensure that your EC2 instances have public IP addresses that are open to the Internet on port 22.
Open a terminal window.
Add your AWS account credentials as environment variables.
export AWS_ACCESS_KEY_ID=YOUR-AWS-ACCESS-KEY-ID
export AWS_SECRET_ACCESS_KEY=YOUR-AWS-SECRET-ACCESS-KEY
Run the lacework agent aws-install ec2ic or lacework agent aws-install ec2ssh command. You can use the following options to install the agent only on specific EC2 instances:
| Option | Description |
|---|---|
| --include_regions | Installs the agent only on EC2 instances in specified regions. For example, use the following command to install the agent only on EC2 instances in the us-west-2 and us-east-2 regions: lacework agent aws-install ec2ic --include_regions us-west-2,us-east-2 |
| --tag TagName,TagValue | Installs the agent only on EC2 instances that have a tag with a specific value. For example, use the following command to install the agent only on EC2 instances that have a tag named sales with the value EMEA: lacework agent aws-install ec2ic --tag sales,EMEA Note: This option is supported only for EC2 instances for which you have permissions to retrieve tags. For more information, see Configure Access to Tags in AWS. |
| --tag_key TagName | Installs the agent only on EC2 instances that have a specific tag. For example, use the following command to install the agent only on EC2 instances that have a tag named sales: lacework agent aws-install ec2ic --tag_key sales Note: This option is supported only for EC2 instances for which you have permissions to retrieve tags. For more information, see Configure Access to Tags in AWS. |
The list of agent access tokens defined in your Lacework account is displayed. Select an agent access token using the up or down arrow key and press Enter.
The agent is installed on all the EC2 instances on which it is not already installed.
Install Agent on Google Compute Engine Instances​
You can use the lacework agent gcp-install osl command to install the Lacework agent on all the Google Compute Engine (GCE) instances in your Google Cloud organization.
This command is supported only for GCE instances with OS Login enabled and with public IP addresses that are open to the Internet on port 22. For more information about enabling OS Login, see Set up OS Login.
Ensure that your GCP account credentials have privileges equivalent to the Compute Instance Admin role.
Ensure that your GCE instances have OS Login enabled and have public IP addresses that are open to the Internet on port 22.
Open a terminal window.
Add your Google Cloud credentials as an environment variable.
export GOOGLE_APPLICATION_CREDENTIALS=PATH-TO-YOUR-CREDENTIAL-JSON-FILE
For more information, see GOOGLE_APPLICATION_CREDENTIALS.
Run the command:
lacework agent gcp-install osl GCPUserName
Where GCPUserName is your GCP username. You can use the following options to install the agent only on specific GCE instances:
| Option | Description |
|---|---|
| --project_id | Installs the agent only on GCE instances in a specified project. For example, use the following command to install the agent only on GCE instances in the my-lacework project: lacework agent gcp-install osl GCPUserName --project_id my-lacework Note: If you run the command on a GCE instance, the project ID for the instance is read from the GCP metadata server and the agent is installed only on the GCE instances in that project. If you do not run the command on a GCE instance, you must specify the project ID to install the agent only on GCE instances in that project. |
| --include_regions | Installs the agent only on GCE instances in specified regions. For example, use the following command to install the agent only on GCE instances in the us-west1 and us-east1 regions: lacework agent gcp-install osl GCPUserName --include_regions us-west1,us-east1 |
| --metadata MetadataKey,MetadataValue | Installs the agent only on GCE instances that have a metadata key with a specific value. For example, use the following command to install the agent only on GCE instances that have the metadata key named sales with the value EMEA: lacework agent gcp-install osl GCPUserName --metadata sales,EMEA Note: This option is supported only for GCE instances for which you have permissions to retrieve user-defined labels. For more information, see Configure Access to Labels in Google Cloud. |
| --metadata_key MetadataKey | Installs the agent only on GCE instances that have a specific metadata key. For example, use the following command to install the agent only on GCE instances that have the metadata key named sales: lacework agent gcp-install osl GCPUserName --metadata_key sales Note: This option is supported only for GCE instances for which you have permissions to retrieve user-defined labels. For more information, see Configure Access to Labels in Google Cloud. |
The list of agent access tokens defined in your Lacework account is displayed. Select an agent access token using the up or down arrow key and press Enter.
The agent is installed on all the GCE instances in your Google Cloud organization on which it is not already installed.
Install the Linux Agent via install.sh script 📎
For single host installations, Lacework recommends using the installation script called install.sh to download and install the Linux agent.
Download the Installation Script from the Lacework Console​
The script you download from the Lacework Console should not be publicly shared because it is customer-specific and uses a private agent access token.
In the Lacework Console, go to Settings > Configuration > Agent Tokens.
View the list of access tokens and sort by OS type (either Windows or Linux) in the OS column.
Click on the row for the Linux access token you want to use to install the agent.
The Access Token page appears.
Click the Install tab.
Click Lacework Script.
Do either of the following:
- Click Download script to download the install.sh script to /tmp or another directory on your target Linux server.
- Click Copy URL to copy the URL for the script. Then use wget to download the script to your target Linux server.
Run the script as sudo:
sudo sh install.sh
install.sh Script Parameters​
The install.sh script supports the following optional parameters:
| Parameter | Description |
|---|---|
| -h | Displays the list of parameters supported by the install.sh script. |
| access_token | Specifies the agent access token to use during installation. For more information, see Create Agent Access Tokens. Note: If you specify the access token, ensure that it is the first parameter for the install.sh script. |
| -v | Displays the version of the install.sh script. |
| -F | Disables file integrity monitoring (FIM). For more information, see File Integrity Monitoring (FIM). |
| -S | Verifies if the correct certificate is installed on the host. For more information, see Required Connectivity, Proxies & Certificates. |
| -O | Filters auditd related messages going to the system journal. |
| -U | Specifies the agent server URL. For more information, see Agent Server URL. |
| -L | Lists the agent versions that are available to install. For more information, see List Agent Versions Available for Installation. |
| -V | Specifies the agent version to install. For more information, see Specify the Agent Version to Install. |
| -H | Specifies the RPM or DEB package hash to validate the installation. For more information, see Specify a Hash to Validate the Install. |
Agent Server URL​
You only need to specify an agent server URL if you are installing Linux agent v6.6 or earlier outside the US. For more information, see Agent Server URL.
When you download the install.sh script from the Lacework Console, the agent server URL is already included in the install.sh script and you do not need any additional configuration.
List Agent Versions Available for Installation​
In agent v5.3 and higher, you can list all agent versions available for installation by specifying the -L parameter when running the install.sh command:
sudo ./install.sh -L
Available versions:
3.8.2
3.9.5
4.0.32
4.1.62
4.2.0.218
4.3.0.5556
5.0.0.5826
5.1.0.6419
5.2.0.6913
latest
Specify the Agent Version to Install​
In agent v5.3 and higher, you can specify the agent version to download and install by specifying the -V parameter when running the install.sh command:
sudo ./install.sh -U https://agent-server-url -V 5.2.0.6913
Using serverurl already set in local config: https://agent-server-url
Check connectivity to Lacework server
Check Go Daddy root certificate
Installing on ubuntu (focal)
Skipping writing config since a config file already exists
+ curl -fsSL https://s3-us-west-2.amazonaws.com/www.lacework.net/download/5.3.0.7160_2022-02-16_main_6771b46ad72cc525f0ada1cf7458230f2f78ab77/latest/packages/lacework_5.2.0.6913_amd64.deb
+ sh -c sleep 3; apt-get -qq update
+ sh -c sleep 3; dpkg -i /tmp/W4XUtE.deb
Selecting previously unselected package lacework.
(Reading database ... 95405 files and directories currently installed.)
Preparing to unpack /tmp/W4XUtE.deb ...
Unpacking lacework (5.2.0.6913) ...
Setting up lacework (5.2.0.6913) ...
Systemd detected
Synchronizing state of datacollector.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable datacollector
Created symlink /etc/systemd/system/multi-user.target.wants/datacollector.service → /lib/systemd/system/datacollector.service.
Processing triggers for systemd (245.4-4ubuntu3.15) ...
Lacework successfully installed
Specify a Hash to Validate the Install​
By default, the agent install script uses an embedded hash value in the agent package for validation purposes. Lacework recommends using this hash for validation. Optionally, you can explicitly specify a hash value to validate the install package during the installation process. Specify a hash by using the -H parameter when running the install.sh command:
sudo ./install.sh -V 5.2.0.6913 -H 1a883b975e7725b01298d65ce12932ed5b3a8eaea9ecbbfa6a4efe5effdd7dcc
Using serverurl already set in local config: https://api.lacework.net
Check connectivity to Lacework server
Check Go Daddy root certificate
Installing on ubuntu (focal)
Skipping writing config since a config file already exists
+ curl -fsSL https://s3-us-west-2.amazonaws.com/www.lacework.net/download/5.3.0.7160_2022-02-16_main_6771b46ad72cc525f0ada1cf7458230f2f78ab77/latest/packages/lacework_5.2.0.6913_amd64.deb
Using provided hash: 1a883b975e7725b01298d65ce12932ed5b3a8eaea9ecbbfa6a4efe5effdd7dcc
+ sh -c sleep 3; apt-get -qq update
+ sh -c sleep 3; dpkg -i /tmp/F6ePrn.deb
Selecting previously unselected package lacework.
(Reading database ... 63895 files and directories currently installed.)
Preparing to unpack /tmp/F6ePrn.deb ...
Unpacking lacework (5.2.0.6913) ...
Setting up lacework (5.2.0.6913) ...
Systemd detected
Synchronizing state of datacollector.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable datacollector
Created symlink /etc/systemd/system/multi-user.target.wants/datacollector.service → /lib/systemd/system/datacollector.service.
Processing triggers for systemd (245.4-4ubuntu3.13) ...
Lacework successfully installed
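The 64 hexadecimal digits in the example suggest the -H value is a SHA-256 digest of the package. As a sketch (the package filename is illustrative, and a stand-in file is hashed here so the commands run anywhere), you can compute the digest of a separately downloaded package with sha256sum:

```shell
# Stand-in for a downloaded package such as lacework_5.2.0.6913_amd64.deb
# (filename illustrative); point pkg at the real .deb or .rpm instead.
pkg=$(mktemp)
printf 'package-bytes' > "$pkg"

# SHA-256 digest is the first field of sha256sum output.
hash=$(sha256sum "$pkg" | cut -d' ' -f1)
echo "$hash"
# Then pass it to the installer, for example:
#   sudo ./install.sh -V 5.2.0.6913 -H "$hash"
```

Comparing your locally computed digest against the value printed by the installer gives an independent check of the downloaded package.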
Download the Installation Script from the Lacework Package Repository​
When you download the install.sh script from the Lacework Package Repository, you can (optionally) run the following command to specify the agent server URL:
sudo ./install.sh -U Your_API_Endpoint
Where Your_API_Endpoint is your agent server URL.
For example:
sudo ./install.sh -U https://api.fra.lacework.net
Install the Linux Agent via package repositories 📎
Overview​
This topic describes how to access and run multiple versions of agent packages from the Lacework package repository: packages.lacework.net.
Lacework provides the following agent repositories:
- Latest: This is the latest version of the agent. Agent releases currently follow a monthly release cadence.
- Established: This is the fleet upgrade release of the agent (quarterly update). Lacework tags a release as an established release once every few months. This established release version is tagged for auto-upgrading all agents running older versions, unless auto-upgrade has been explicitly disabled in the config.json file.
- Archived: These are older versions of the agent. They do not appear in the Latest or Established repositories.
You can use these repositories to manage agent packages using package managers such as APT, YUM, and Zypper. You can install the latest version of the agent or a specific version in the Archived and Established repositories.
Install from APT, YUM, and Zypper Repositories​
Lacework provides repositories for Debian-based (APT) and RPM-based (YUM and Zypper) distributions. When installing from the repositories, each host requires a config.json file so that the agent can communicate with Lacework. You can create a config.json file locally or copy it from a centralized server using any orchestration tool. For details, see config.json.
APT​
For Debian-based distributions (Debian, Ubuntu), use the following steps to set up the Lacework repositories:
Install gpg if it is not already installed:
sudo apt install -y gpg
Install the ca-certificates package:
sudo apt-get install -y ca-certificates
Import the Lacework key:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 360D55D76727556814078E25FF3E1D4DEE0CC692
Install the lsb-release package:
sudo apt-get install -y lsb-release
Create the APT repository configuration file and include the Lacework repositories:
lsb_distro=$(lsb_release -i | cut -f2 | tr '[:upper:]' '[:lower:]')
lsb_rel=$(lsb_release -c | cut -f2)
sudo sh -c "echo deb [arch=amd64] https://packages.lacework.net/latest/DEB/"$lsb_distro" "$lsb_rel" main >> /etc/apt/sources.list.d/lacework.list"
sudo sh -c "echo deb [arch=amd64] https://packages.lacework.net/established/DEB/"$lsb_distro" "$lsb_rel" main >> /etc/apt/sources.list.d/lacework.list"
sudo sh -c "echo deb [arch=amd64] https://packages.lacework.net/archived/DEB/"$lsb_distro" "$lsb_rel" main >> /etc/apt/sources.list.d/lacework.list"
Replace [arch=amd64] with [arch=arm64] if you are installing on an ARM64 system.
Refresh the repository information:
sudo apt update
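For reference, the echo commands above append one line per repository to /etc/apt/sources.list.d/lacework.list. On Ubuntu 20.04 (focal) on amd64, for example, the file would contain lines like the following (the distribution and codename vary with your system):

```text
deb [arch=amd64] https://packages.lacework.net/latest/DEB/ubuntu focal main
deb [arch=amd64] https://packages.lacework.net/established/DEB/ubuntu focal main
deb [arch=amd64] https://packages.lacework.net/archived/DEB/ubuntu focal main
```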
To list all the available Lacework packages, use this command:
sudo apt list -a lacework
To install the latest version of the agent, use this command:
sudo apt install lacework
To install a specific version of the agent, use this command:
sudo apt install lacework=VERSION
Replace VERSION with the specific version that you want to install.
For example, to install v6.5.0.12833:
sudo apt install lacework=6.5.0.12833
Delete a Package (APT)​
To delete the Lacework APT repository package, use the command:
sudo apt-get remove --purge lacework
YUM​
- Download the repository configuration file for YUM-based distributions using this command:
curl -O -sSL https://packages.lacework.net/lacework.repo
See Repository Configuration File for an example of the repository configuration file.
- Move the configuration file to the /etc/yum.repos.d directory.
sudo mv lacework.repo /etc/yum.repos.d
- You can enable all the repositories at the same time. You can also disable any of the repositories if they are not needed. To do this, set the enabled flag to 0 in the repository configuration file as follows:
[packages-lacework-latest]
name=Lacework latest agent release
baseurl=https://packages.lacework.net/latest/RPMS/$basearch/
enabled=0
- To install the latest version of the agent, use this command:
sudo yum install lacework
- Ensure that the correct GPG key is installed. Example output:
Importing GPG key 0xEE0CC692:
Userid : "Lacework Inc. <support@lacework.net>"
Fingerprint: 360D 55D7 6727 5568 1407 8E25 FF3E 1D4D EE0C C692
From : https://packages.lacework.net/keys/RPM-GPG-KEY-lacework
Key imported successfully
To list all available agent versions, use this command:
sudo yum --showduplicates list lacework
To install a specific version, use this command:
sudo yum install lacework-VERSION
Replace VERSION with the specific agent version that you want to install.
For example, to install v6.5.0.12833-1:
sudo yum install lacework-6.5.0.12833-1
If you install an older version of the agent from the archived repository, it is upgraded to the established version of the agent release. To prevent this auto-upgrade and pin your package to a specific version, disable auto-upgrade in the agent configuration file (config.json) in the /var/lib/lacework/config directory.
To disable auto-upgrade, enter the following in the config.json file:
"autoupgrade": "disable"
For improved security and to benefit from new and improved features, Lacework recommends that you do not disable automatic upgrade of the agent.
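For context, the autoupgrade key sits alongside the agent's other settings in config.json. A minimal sketch (the token value is a placeholder):

```json
{
  "tokens": { "AccessToken": "YourAgentAccessToken" },
  "autoupgrade": "disable"
}
```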
Delete a Package (YUM)​
To delete the Lacework YUM repository package, use the command:
sudo yum remove lacework
Zypper​
Download the repository configuration file for Zypper-based distributions using this command:
curl -O -sSL https://packages.lacework.net/lacework.repo
See Repository Configuration File for an example of the repository configuration file.
Move the configuration file to the /etc/zypp/repos.d directory.
sudo mv lacework.repo /etc/zypp/repos.d
You can enable all the repositories at the same time. You can also disable any of the repositories if they are not needed. To do this, set the enabled flag to 0 in the repository configuration file as follows:
[packages-lacework-latest]
name=Lacework latest agent release
baseurl=https://packages.lacework.net/latest/RPMS/$basearch/
enabled=0
To install the latest version of the agent, use this command:
sudo zypper install lacework
Ensure that the correct GPG key is installed.
Example output:
Retrieving: RPM-GPG-KEY-lacework .........................................[done]
New repository or package signing key received:
Repository: Lacework latest agent release
Key Fingerprint: 360D 55D7 6727 5568 1407 8E25 FF3E 1D4D EE0C C692
Key Name: Lacework Inc. <support@lacework.net>
Key Algorithm: RSA 4096
Key Created: Mon Apr 24 11:04:37 2023
Key Expires: Sun May 1 11:04:17 2033
Rpm Name: gpg-pubkey-ee0cc692-64466245
To list all available agent versions, use this command:
sudo zypper search -s lacework
To install a newer version, use this command:
sudo zypper install lacework-VERSION
Replace VERSION with the specific agent version that you want to install.
For example, to install v6.5.0.12833-1:
sudo zypper install lacework-6.5.0.12833-1
To install an older version, use this command:
sudo zypper install --oldpackage lacework-VERSION
Replace VERSION with the specific agent version that you want to install.
For example, to install v4.2.0.218-1:
sudo zypper install --oldpackage lacework-4.2.0.218-1
If you install an older version of the agent from the archived repository, it is upgraded to the established version of the agent release. To prevent this auto-upgrade and pin your package to a specific version, disable auto-upgrade in the agent configuration file (config.json).
To disable auto-upgrade, enter the following in the config.json file:
"autoupgrade": "disable"
Delete a Package (Zypper)​
To delete the Lacework Zypper repository package, use this command:
sudo zypper remove lacework
Sample Repository Configuration File​
The following is a sample repository configuration file for YUM and Zypper-based distributions:
[packages-lacework-latest]
name=Lacework latest agent release
baseurl=https://packages.lacework.net/latest/RPMS/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://packages.lacework.net/latest/keys/RPM-GPG-KEY-lacework
[packages-lacework-established]
name=Lacework established agent release
baseurl=https://packages.lacework.net/established/RPMS/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://packages.lacework.net/established/keys/RPM-GPG-KEY-lacework
[packages-lacework-archived]
name=Lacework archived agent release
baseurl=https://packages.lacework.net/archived/RPMS/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://packages.lacework.net/archived/keys/RPM-GPG-KEY-lacework
Install the Linux Agent via deb or rpm package 📎
For single host installations, you can install the Lacework agent using a .deb or .rpm package.
After downloading the appropriate package locally, copy it to /tmp or another directory on the target Linux server using your preferred method. Or, you can download the package directly from the Linux instance.
When installing using a package, you must manually create the config.json file on your target Linux server and add your access token. In the following steps, replace YourAgentAccessToken with your agent access token. For more information, see Download Agent Installers and Get the Agent Access Token. Replace Your_API_Endpoint with your agent server URL. For more information, see Agent Server URL.
- Create the directory where the agent will look for the config.json file.
sudo mkdir -p /var/lib/lacework/config
- Using your preferred text editor, create a file called config.json in the /var/lib/lacework/config directory with your agent access token and, optionally, your agent server URL.
{
"tokens": { "AccessToken": "YourAgentAccessToken" },
"serverurl": "Your_API_Endpoint"
}
- Verify that the file contains your access token.
cat /var/lib/lacework/config/config.json
- Install using the appropriate package manager.
  - For Debian Package Manager, install the x86 package using this command:
sudo dpkg -i lacework_latest_amd64.deb
Install the ARM64 package using this command:
sudo dpkg -i lacework_latest_arm64.deb
  - For RedHat Package Manager, install the x86 package using this command:
sudo rpm -ivh lacework-latest-1.x86_64.rpm
Install the ARM64 package using this command:
sudo rpm -ivh lacework-latest-1.aarch64.rpm
  - For Zypper Package Manager, install the x86 package using this command:
sudo zypper install lacework-latest-1.x86_64.rpm
Install the ARM64 package using this command:
sudo zypper install lacework-latest-1.aarch64.rpm
- Data collection from agents is sent to Lacework, and a newly added agent on the VM (installed as a package or a container) should be visible in 10 to 15 minutes. Verify that the Lacework Console's Resources > Agents page displays the new host.
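The manual config.json steps can also be scripted. The following sketch writes the file and checks that it parses as JSON; the token and endpoint values are placeholders, and it falls back to a scratch directory when not run as root so you can dry-run it safely:

```shell
cfg_dir=/var/lib/lacework/config          # agent's expected config location
# Fall back to a scratch directory for a non-root dry run.
mkdir -p "$cfg_dir" 2>/dev/null || cfg_dir=$(mktemp -d)

# Placeholder values: substitute your real token and (optionally) endpoint.
cat > "$cfg_dir/config.json" <<'EOF'
{
  "tokens": { "AccessToken": "YourAgentAccessToken" },
  "serverurl": "Your_API_Endpoint"
}
EOF

# Confirm the file is valid JSON before installing the package.
python3 -m json.tool "$cfg_dir/config.json" > /dev/null && echo "config.json OK"
```

Validating the file first avoids installing a package whose agent cannot read its configuration.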
Install the Linux Agent via Chef 📎
If using Chef Infra for configuration management, Lacework maintains the following two Chef cookbooks that can be used to deploy the Lacework Linux agent to supported Linux hosts:
Datacollector Cookbook - A simple cookbook used to install the latest GA version of the datacollector agent using an embedded agent token. This cookbook is not idempotent, customizable, or specifically designed to be run using a Chef run_list.
Chef Lacework Cookbook - This cookbook is open source and is published to the Chef Supermarket. The cookbook is idempotent by design, is customizable using Chef attributes, supports multiple installation methods (script, repo, package), can install specific versions of the datacollector agent, and can manage any supported configuration for the datacollector agent. This cookbook is suitable for customers that run Chef repeatedly on a schedule using a Chef run_list and a Chef Server. For more information, see the Chef Lacework cookbook on the Lacework Chef GitHub Repository, or on the Chef Supermarket.
Lacework Datacollector Cookbook​
This simple Chef cookbook distributes the Lacework install.sh script to your nodes. The script subsequently installs the latest GA release of the Lacework agent.
This cookbook does not contain custom attribute files, resources, templates, providers, or library files. By design, this cookbook is not idempotent. After download, you can customize the cookbook for your environment, or alternatively you can consider the open source Chef Lacework Cookbook maintained by Lacework on the Chef Supermarket.
The installation script, which can be found in the files directory, is also commented.
To try this recipe:
- Unzip the .tar.gz.
- Review the datacollector cookbook, which contains the following directories:
datacollector
|- recipes
|- README.md
|- metadata.rb
|- files
- Move the datacollector cookbook to your repo on your Chef DK workstation.
- Upload the cookbook to your Chef server.
- Add the datacollector recipe to your test node or production nodes using your preferred Chef CLI commands.
- The install.sh script is periodically updated. Download the current cookbook or install script before proceeding.
- The datacollector install script itself is idempotent.
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents.
Use Agent Server URL​
You only need to specify an agent server URL if you are installing Linux agent v6.6 or earlier outside the US. For more information, see Agent Server URL.
When you download chef.tar.gz from the Lacework Console, the agent server URL is already included in the file and you do not need any additional configuration.
Download the Script from GitHub​
When you download Chef files from the Lacework Chef GitHub Repository, edit the chef recipe default.rb to pass the serverurl as follows:
Edit: chef/datacollector/recipes/default.rb
execute 'datacollector' do
command 'sh /tmp/install.sh -U Your_API_Endpoint'
end
Where Your_API_Endpoint is your agent server URL.
Install the Linux Agent via Ansible 📎
Because Ansible is a flexible and extensible automation tool, you can use multiple strategies to install the Lacework agent. Use the following skeleton Debian and RPM playbooks as building blocks to create more advanced, environment-specific playbooks.
Each playbook consists of two parts:
Installation of the Lacework agent. To ensure the latest package, the playbooks query the Lacework repository; they can also be adapted to retrieve packages locally.
Distribution of a Lacework configuration file - config.json. The config.json file must minimally include an access token or the Lacework agent cannot communicate with the Lacework platform. For more information about the agent access token, see Download Agent Installers and Get the Agent Access Token. For Linux agent v6.6 or earlier installed outside the US, you must explicitly configure the agent server URL in the config.json file. For more information, see Agent Server URL.
In the examples below, config.json is located in the /etc/ansible/lacework/ directory of the Ansible server. You must create this file.
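A minimal /etc/ansible/lacework/config.json for the playbooks might look like the following sketch (the token and endpoint values are placeholders, and serverurl is only required in the cases described above):

```json
{
  "tokens": { "AccessToken": "YourAgentAccessToken" },
  "serverurl": "Your_API_Endpoint"
}
```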
RPM Installation​
- hosts: lacework_servers
become: yes
tasks:
- name: configure the lacework repo
yum_repository:
name: packages-lacework-prod
description: packages-lacework-prod
baseurl: https://packages.lacework.net/latest/RPMS/x86_64/
gpgkey: https://packages.lacework.net/latest/keys/RPM-GPG-KEY-lacework
gpgcheck: yes
enabled: yes
- name: install lacework datacollector
yum:
name: lacework
state: latest
- name: wait until /var/lib/lacework/config/ is created
wait_for:
path: /var/lib/lacework/config/
- name: copy config.json
copy:
src: /etc/ansible/lacework/config.json
dest: /var/lib/lacework/config/config.json
owner: root
group: root
mode: 0644
Debian Installation​
- hosts: lacework_servers
become: yes
tasks:
- name: add apt signing key
apt_key:
keyserver: hkp://keyserver.ubuntu.com:80
id: EE0CC692
state: present
- name: add lacework repository into source list
apt_repository:
repo: "deb [arch=amd64] https://packages.lacework.net/latest/DEB/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} main"
filename: lacework
state: present
update_cache: yes
- name: install lacework datacollector
apt:
name: lacework
state: latest
- name: wait until /var/lib/lacework/config/ is created
wait_for:
path: /var/lib/lacework/config/
- name: copy config.json
copy:
src: /etc/ansible/lacework/config.json
dest: /var/lib/lacework/config/config.json
owner: root
group: root
mode: 0644
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents.
Dockerized Host Installation​
Follow these steps to deploy the Lacework agent as a container to a Dockerized host using an Ansible playbook.
Prerequisites​
The Ansible playbook uses the docker_container module available with Ansible to manage container control in Docker. Ensure that the host that executes the module (the target host) meets the following prerequisites:
- Docker API >= 1.20
- Docker SDK for Python >= 1.8.0

For Python 2.6, use `docker-py`. Otherwise, install the Docker SDK for Python module, which supersedes the `docker-py` module. Do not install both modules at the same time. If both modules are installed and one of them is uninstalled, the other may no longer function, and you will have to reinstall it.
Deployment Process​
The following Ansible playbook pulls the latest image of the Lacework agent and starts the container.
Change the templated values for the following options before executing the playbook:
- `hosts:` Targets the host or group of hosts specified in your `/etc/ansible/hosts` file. Change it from `all` to the name of the group of hosts that you want to deploy the agent to.
- `ACCESS_TOKEN:` Enter the agent access token from the Lacework Console.
Ansible Playbook​
- name: Lacework Agent
hosts: "all"
tasks:
- name: pull image and run Lacework agent container
docker_container:
name: datacollector
network_mode: host
pid_mode: host
privileged: yes
volumes:
- /:/laceworkfim:ro
- /var/lib/lacework:/var/lib/lacework
- /var/log:/var/log
- /var/run:/var/run
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
env:
ACCESS_TOKEN: "INSERT_ACCESS_TOKEN_HERE"
image: lacework/datacollector:latest
Deployment Steps​
- Copy the Ansible playbook above and make the necessary changes to the parameters.
- Save the playbook as a `.yaml` file.
- Run the following command on the Ansible control node:
  ansible-playbook <your-playbook-file>.yaml
- Confirm that the containers are running and that the agents appear in the Lacework Console under Resources > Agents after 10 to 15 minutes.
Install the Linux Agent via AWS Systems Manager 📎
This article covers using Terraform to configure AWS Systems Manager to deploy the Lacework Agent to supported EC2 instances.
Lacework maintains the terraform-aws-ssm-agent module, which creates an SSM document for managing the deployment of the Lacework agent to EC2 instances.
If you are new to the Lacework Terraform Provider or Lacework Terraform Modules, read the Terraform for Lacework Overview article to learn the basics of configuring the provider.
Overview of Using AWS Systems Manager with Lacework​
AWS Systems Manager (formerly known as SSM) is an AWS service that you can use to view and control your infrastructure on AWS. Using the Systems Manager console, you can view operational data from multiple AWS services and automate operational tasks across your AWS resources. Systems Manager helps you maintain security and compliance by scanning your managed instances and reporting on (or taking corrective action on) any policy violations it detects.
For Lacework customers using AWS Systems Manager to manage EC2 instances in their AWS account, the terraform-aws-ssm-agent Terraform module can be used to create an SSM document to install the Lacework agent on EC2 instances.
This installation method creates an agent access token, and then installs the latest stable (GA) version of the Lacework Datacollector agent.
Scenario 1: Configuring AWS Systems Manager for Lacework Agent Deployments in a Single Region​
The following code example creates a Lacework agent access token, then creates an SSM document to install the Lacework agent on EC2 instances. Additionally, an AWS resource group is created with EC2 instances that have the machine tag environment:testing, and then the SSM document is associated with that AWS Resource group. Once Terraform executes, AWS Systems Manager will be configured and the Lacework Datacollector agent will be installed automatically.
Considerations and items to update:
- Ensure you update the `TagFilters` stanza to match the applicable tags deployed in your environment. Alternatively, remove it.
- Validate that you have properly configured SSM and that the appropriate permissions are in place. The instance profile attached to the instances you want to deploy to must have the AmazonSSMManagedInstanceCore policy. For more information, see Add permissions to a Systems Manager instance profile.
The following example assumes you already have AWS Systems Manager configured on your instances. If you are new to AWS SSM and want to test this install method, read the AWS Systems Manager Quick Setup documentation.
terraform {
required_providers {
lacework = {
source = "lacework/lacework"
version = "~> 1.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
provider "lacework" {}
# Create an agent access token in Lacework
resource "lacework_agent_access_token" "ssm_deployment" {
name = "ssm-deployment"
description = "Used to deploy agents using AWS System Manager"
}
# Create AWS SSM Document
module "lacework_aws_ssm_agents_install" {
source = "lacework/ssm-agent/aws"
version = "~> 0.8"
lacework_agent_tags = {
env = "testing"
}
aws_resources_tags = {
billing = "testing"
owner = "myself"
}
lacework_access_token = lacework_agent_access_token.ssm_deployment.token
}
# Create an AWS Resource group for EC2 Instances with
# the tag 'environment:testing'
resource "aws_resourcegroups_group" "testing" {
name = "testing"
resource_query {
query = jsonencode({
ResourceTypeFilters = [
"AWS::EC2::Instance"
]
TagFilters = [
{
Key = "environment"
Values = [
"testing"
]
}
]
})
}
tags = {
billing = "testing"
owner = "myself"
}
}
# Create an SSM Association group called install-lacework-agents-testing-group
resource "aws_ssm_association" "lacework_aws_ssm_agents_install_testing" {
association_name = "install-lacework-agents-testing-group"
name = module.lacework_aws_ssm_agents_install.ssm_document_name
targets {
key = "resource-groups:Name"
values = [
aws_resourcegroups_group.testing.name,
]
}
compliance_severity = "HIGH"
}
Scenario 2: Configuring AWS Systems Manager for Lacework Agent Deployments in Multiple Regions​
The following code example creates a Lacework agent access token, then creates an SSM document to install the Lacework agent on EC2 instances. Additionally, an AWS resource group is created with EC2 instances that have the machine tag environment:testing, and then the SSM document is associated with that AWS Resource group. Once Terraform executes, AWS Systems Manager will be configured and the Lacework Datacollector agent will be installed automatically.
Additionally, we configure multiple AWS provider blocks, one for each region we want to target within the AWS account. We then create the SSM document twice, once for each region, by associating the modules and resources with different providers.
For the modules:
providers = {
aws = aws.america
}
For the resources:
provider = aws.america
Considerations and items to update:
- Ensure you update the `TagFilters` stanza to match the applicable tags deployed in your environment. Alternatively, remove it.
- Ensure you configure the necessary AWS provider blocks for each region you want to target.
- Ensure you update the provider aliases for each module and resource to associate them with the correct provider.
- Validate that you have properly configured SSM in all the regions you are targeting and that the appropriate permissions are in place. The instance profile attached to the instances you want to deploy to must have the AmazonSSMManagedInstanceCore policy. For more information, see Add permissions to a Systems Manager instance profile.
The following example assumes you already have AWS Systems Manager configured on your instances. If you are new to AWS SSM and want to test this install method, read the AWS Systems Manager Quick Setup documentation.
terraform {
required_providers {
lacework = {
source = "lacework/lacework"
version = "~> 1.0"
}
}
}
provider "aws" {
alias = "america"
region = "us-east-1"
}
provider "aws" {
alias = "europe"
region = "eu-west-1"
}
provider "lacework" {}
# Create an agent access token in Lacework
resource "lacework_agent_access_token" "ssm_deployment" {
name = "ssm-deployment"
description = "Used to deploy agents using AWS System Manager"
}
# Create AWS SSM document for the us-east-1 region
module "lacework_aws_ssm_agents_install_america" {
source = "lacework/ssm-agent/aws"
version = "~> 0.8"
lacework_agent_tags = {
env = "testing"
}
aws_resources_tags = {
billing = "testing"
owner = "myself"
}
lacework_access_token = lacework_agent_access_token.ssm_deployment.token
providers = {
aws = aws.america
}
}
# Create an AWS resource group for EC2 instances in the us-east-1 region with
# the tag 'environment:testing'
resource "aws_resourcegroups_group" "testing_america" {
name = "testing_america"
resource_query {
query = jsonencode({
ResourceTypeFilters = [
"AWS::EC2::Instance"
]
TagFilters = [
{
Key = "environment"
Values = [
"testing"
]
}
]
})
}
tags = {
billing = "testing"
owner = "myself"
}
provider = aws.america
}
# Create an SSM association group called install-lacework-agents-testing-group
# for the us-east-1 region
resource "aws_ssm_association" "lacework_aws_ssm_agents_install_testing_america" {
association_name = "install-lacework-agents-testing-group"
name = module.lacework_aws_ssm_agents_install_america.ssm_document_name
targets {
key = "resource-groups:Name"
values = [
aws_resourcegroups_group.testing_america.name,
]
}
compliance_severity = "HIGH"
provider = aws.america
}
# Create AWS SSM document for the eu-west-1 region
module "lacework_aws_ssm_agents_install_europe" {
source = "lacework/ssm-agent/aws"
version = "~> 0.8"
lacework_agent_tags = {
env = "testing"
}
aws_resources_tags = {
billing = "testing"
owner = "myself"
}
lacework_access_token = lacework_agent_access_token.ssm_deployment.token
providers = {
aws = aws.europe
}
}
# Create an AWS resource group for EC2 instances in the eu-west-1 region with
# the tag 'environment:testing'
resource "aws_resourcegroups_group" "testing_europe" {
name = "testing_europe"
resource_query {
query = jsonencode({
ResourceTypeFilters = [
"AWS::EC2::Instance"
]
TagFilters = [
{
Key = "environment"
Values = [
"testing"
]
}
]
})
}
tags = {
billing = "testing"
owner = "myself"
}
provider = aws.europe
}
# Create an SSM association group called install-lacework-agents-testing-group
# for the eu-west-1 region
resource "aws_ssm_association" "lacework_aws_ssm_agents_install_testing_europe" {
association_name = "install-lacework-agents-testing-group"
name = module.lacework_aws_ssm_agents_install_europe.ssm_document_name
targets {
key = "resource-groups:Name"
values = [
aws_resourcegroups_group.testing_europe.name,
]
}
compliance_severity = "HIGH"
provider = aws.europe
}
Run Terraform​
This example assumes three existing EC2 instances with the machine tag environment:testing.
- Copy and paste the code snippet above into a main.tf file and then save the file.
- Run terraform plan and review the changes. Four resources should be created.
- After you have reviewed the changes, run terraform apply -auto-approve to execute Terraform.
Validate Changes​
After Terraform executes, open AWS Resource Groups in the region you applied the changes. You should see a new resource group called testing with the instances that have the tag environment:testing.

Open the AWS Systems Manager. Under Node Management, click State Manager, click Association id for the install-lacework-agents-testing-group, and click the Resources tab, where you should see the status of action taken on the instances.

After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents.
Install the Linux Agent in an AMI Image using Packer 📎
You can use HashiCorp Packer to create a machine image with the Lacework agent pre-installed and configured. To learn more about HashiCorp Packer, see the Packer documentation.
Example Packer Template​
The following example Packer template creates a machine image by remotely uploading and executing the Lacework install.sh script on a staging instance before making the machine image available in your cloud console. You can customize the template for your environment or automate an alternative installation method using Packer.
For an overview of the Lacework agent installation script, see Lacework for Workload Security.
The following example template creates an Amazon Linux 2 AMI with the Lacework agent installed and running.
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "lacework_token": "{{env `LACEWORK_TOKEN`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-1",
      "source_ami_filter": {
      "filters": {
      "virtualization-type": "hvm",
      "name": "amzn2-ami-hvm-*",
      "root-device-type": "ebs"
    },
      "owners": ["amazon"],
      "most_recent": true
    },
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "lacework {{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["curl -sSL https://packages.lacework.net/install.sh | sudo bash -s -- {{user `lacework_token`}} -U Your_API_Endpoint"]
    }
  ]
}
To use this template:
- Install Packer.
- Create a template file called lacework.json.
- Add your credentials as environment variables.
  Note: You can find your Lacework Agent Access Token in the Lacework Console at Settings > Configuration > Agents. You can find and inspect the install.sh script in the same location under Install Options.
  export AWS_ACCESS_KEY_ID=YOUR_AWS_ACCESS_KEY
  export AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_KEY
  export LACEWORK_TOKEN=YOUR_LACEWORK_ACCESS_TOKEN
- Run Packer:
  packer build lacework.json
- In the AWS AMI console, an AMI named lacework TIMESTAMP is displayed and ready for use.
The datacollector install script is idempotent.
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| User and Entity Behavior Analytics (UEBA) | Workload Anomaly Detection Process Dashboard and Polygraph Network Dashboard and Polygraph Filesystem Dashboard and Polygraph | Windows Agent |
- MSI
Install the Windows Agent via MSI and Config File 📎
You can use a config.json configuration file with the Windows agent MSI package to install the agent.
- Download the Windows agent MSI package using the instructions in Download Windows Agent Installer.
- Create a config.json file on your host using a text editor.
  Note: Do not create the config.json file in the C:\ProgramData\Lacework\ directory where the Windows agent components will be installed.
- Paste the following into the config.json file:
{
"tokens": {
"accesstoken": "Your_Access_Token"
},
"serverurl": "Your_API_Endpoint"
}
- Specify the agent access token in the "accesstoken": "Your_Access_Token" line, where Your_Access_Token is the character string that identifies the specific access token to use with the agent. Obtain the access token using the instructions in Obtain an Access Token for the Windows Agent.
- Specify the Lacework agent server URL in the "serverurl": "Your_API_Endpoint" line, where Your_API_Endpoint is the agent server URL. For more information, see serverurl Property.
  By default, the agent is automatically upgraded when a new version is available. To disable automatic upgrade, see Automatic Upgrade of Windows Agent.
- Save the config.json file in the ASCII format and note the location of the file.
- Open a PowerShell terminal as administrator.
- Navigate to the directory containing the Windows agent MSI package on your host.
- Run the MSI package using the following command in the PowerShell command line:
  C:\Users\Administrator> msiexec.exe /i LWDataCollector.msi CONFIGFILE=C:\path\to\config.json
  Where C:\path\to\config.json is the file path for the config.json file. An installation wizard appears.
- Complete the installation using the installation wizard.
The config.json file is copied to the C:\ProgramData\Lacework\ directory. You can modify this file to change the settings for the agent. If you modify the file, you must restart the agent for the changes to take effect. For more information, see Restart Windows Agent.
Install the Windows Agent via MSI on the Command Line 📎
For single host installations, install the Windows agent using the command line. Instead of specifying configuration parameters for the agent installation in the config.json file, you specify them directly in the command line.
- Download the Windows agent MSI package using the instructions in Download Windows Agent Installers.
- Open a PowerShell terminal as administrator.
- Navigate to the folder containing the MSI package file on your host.
- Run the installer using the following command in the PowerShell command line:
  C:\Users\Administrator> msiexec.exe /i LWDataCollector.msi ACCESSTOKEN=Your_Access_Token SERVERURL=Your_API_Endpoint
  Where:
  - Your_Access_Token is the character string that identifies the specific access token to use with the agent. Obtain the access token using the instructions in Obtain an Access Token for the Windows Agent.
  - Your_API_Endpoint is the agent server URL. For more information, see serverurl Property.
  By default, the agent is automatically upgraded when a new version is available. To disable automatic upgrade, use the AUTOUPGRADE=disabled option. For example:
  C:\Users\Administrator> msiexec.exe /i LWDataCollector.msi ACCESSTOKEN=Your_Access_Token SERVERURL=Your_API_Endpoint AUTOUPGRADE=disabled
  Note: For improved security and to benefit from new and improved features, Lacework recommends that you do not disable automatic upgrade of the agent.
- An installation wizard appears. Complete the installation using the installation wizard.

A config.json file that contains the options you specified in the command line is created in the C:\ProgramData\Lacework\ directory. You can modify this file to change the settings for the agent. If you modify the file, you must restart the agent for the changes to take effect. For more information, see Restart Windows Agent.
Install the Windows Agent via MSI Silently 📎
For scenarios where you want to run the Windows agent MSI package silently or unattended without any notifications or prompts, include the /qn parameter when you run the install command in the PowerShell command line:
C:\Users\Administrator> msiexec.exe /i LWDataCollector.msi ACCESSTOKEN=Your_Access_Token SERVERURL=Your_API_Endpoint /qn
Where:
- Your_Access_Token is the character string that identifies the specific access token to use with the agent. Obtain the access token using the instructions in Obtain an Access Token for the Windows Agent.
- Your_API_Endpoint is the agent server URL. For more information, see serverurl Property.
By default, the agent is automatically upgraded when a new version is available. To disable automatic upgrade, use the AUTOUPGRADE=disabled option. For example:
C:\Users\Administrator> msiexec.exe /i LWDataCollector.msi ACCESSTOKEN=Your_Access_Token SERVERURL=Your_API_Endpoint AUTOUPGRADE=disabled /qn
For improved security and to benefit from new and improved features, Lacework recommends that you do not disable automatic upgrade of the agent.
A config.json file that contains the options you specified in the command line is created in the C:\ProgramData\Lacework\ directory. You can modify this file to change the settings for the agent. If you modify the file, you must restart the agent for the changes to take effect. For more information, see Restart Windows Agent.
Install the Windows Agent via MSI using PowerShell 📎
Lacework provides the following PowerShell scripts to enable you to download and install the Windows agent:
- The Install-LWDataCollector.ps1 script downloads the Windows agent MSI package and installs the Windows agent on a host machine.
- The Azure-Deploy-LW-Win.ps1 script downloads the Windows agent MSI package and installs the agent on all Windows VMs in an Azure resource group.
Prerequisites​
- Download the Lacework PowerShell scripts (powershell.zip file) using the instructions in Download Windows Agent Installer.
- Unzip the powershell.zip file. The signed-scripts folder that is created contains the following files:
- Install-LWDataCollector.ps1
- Azure-Deploy-LW-Win.ps1
Use Install-LWDataCollector.ps1 Script to Install Windows Agent on a Host Machine​
The Install-LWDataCollector.ps1 PowerShell script installs the Lacework Windows agent and adds a local firewall rule to allow the agent to communicate with Lacework. In addition, the script optionally configures a Windows Defender exclusion for the agent.
Lacework recommends that you exclude the agent from any antivirus or Endpoint Detection and Response (EDR) applications on your host. The Install-LWDataCollector.ps1 script allows you to enable this exclusion for Microsoft Defender. For other antivirus applications, you can customize the script to exclude the agent from scanning.
Install Windows Agent with config.json Configuration File​
- Create a config.json file on your host using a text editor.
  Note: Do not create the config.json file in the C:\ProgramData\Lacework\ directory where the Windows agent components will be installed.
- Paste the following into the config.json file:
{
"tokens": {
"accesstoken": "Your_Access_Token"
},
"serverurl": "Your_API_Endpoint"
}
  Where:
  - Your_Access_Token specifies the access token to use with the agent. Obtain the access token using the instructions in Obtain an Access Token for the Windows Agent.
  - Your_API_Endpoint specifies the agent server URL. For more information, see serverurl Property.
  By default, the agent is automatically upgraded when a new version is available. To disable automatic upgrade, see Automatic Upgrade of Windows Agent.
- Save the config.json file in the ASCII format and note the location of the file.
- Open a PowerShell terminal as administrator.
- Navigate to the directory containing the Install-LWDataCollector.ps1 script on your host.
- Run the script using the following command in the PowerShell command line:
  C:\Users\Administrator> .\Install-LWDataCollector.ps1 -MSIURL Agent_MSI_Download_URL -ConfigPath C:\path\to\config.json -Defender
  Where:
  - C:\path\to\config.json specifies the file path for the config.json file.
  - Agent_MSI_Download_URL specifies the URL for downloading the Windows agent MSI package. To obtain the URL, go to the Lacework Windows Agent Releases page (which lists the Windows agent releases you can install), go to the release you want to install, and copy the URL for Lacework Windows Agent MSI Package.
  - The -Defender option excludes the Windows agent from scanning with Windows Defender.
The config.json file is copied to the C:\ProgramData\Lacework\ directory. You can modify this file to change the settings for the agent. If you modify the file, you must restart the agent for the changes to take effect. For more information, see Restart Windows Agent.
Install Windows Agent without a config.json Configuration File​
Instead of specifying configuration parameters for the agent installation in a config.json file, you can specify them directly in the command line.
- Open a PowerShell terminal as administrator.
- Navigate to the directory containing the Install-LWDataCollector.ps1 script on your host.
- Run the script using the following command in the PowerShell command line:
  C:\Users\Administrator> .\Install-LWDataCollector.ps1 -MSIURL Agent_MSI_Download_URL -AccessToken Your_Access_Token -ServerURL Your_API_Endpoint -Defender
  Where:
  - Your_Access_Token specifies your agent access token. For more information, see Obtain an Access Token for the Windows Agent.
  - Your_API_Endpoint specifies your Lacework agent server URL. For more information, see serverurl Property.
  - Agent_MSI_Download_URL specifies the URL for downloading the Windows agent MSI package. To obtain the URL, go to the Lacework Windows Agent Releases page (which lists the Windows agent releases you can install), go to the release you want to install, and copy the URL for Lacework Windows Agent MSI Package.
  - The -Defender option excludes the Windows agent from scanning with Windows Defender.
A config.json file that contains the options you specified in the command line is created in the C:\ProgramData\Lacework\ directory. You can modify this file to change the settings for the agent. If you modify the file, you must restart the agent for the changes to take effect. For more information, see Restart Windows Agent.
Use Azure-Deploy-LW-Win.ps1 Script to Install Windows Agent on Windows VMs in an Azure Resource Group​
The Azure-Deploy-LW-Win.ps1 PowerShell script installs the Lacework Windows agent to all Windows VMs it finds in an Azure resource group. It uses the Install-LWDataCollector.ps1 PowerShell script during the installation process.
- Open a PowerShell terminal as administrator.
- Navigate to the directory containing the Azure-Deploy-LW-Win.ps1 script on your host.
- Run the script using the following command in the PowerShell command line:
  C:\Users\Administrator> .\Azure-Deploy-LW-Win.ps1 -EnableExtensions -Defender
  - If extension operations are disabled on an Azure VM, use the -EnableExtensions option to enable extension operations on the VM and install the Windows agent. If you do not specify this option, the Windows agent is not installed on the VMs on which you have disabled extension operations.
  - Use the -Defender option to exclude the Windows agent from scanning with Windows Defender on the VMs. Note that the Windows agent will be excluded from scanning only on the VMs on which the Defender PowerShell module is installed.
- Specify the values for the parameters required by the script. Press Enter after you specify the value for each parameter.

| Parameter | Description |
|---|---|
| ResourceGroups | The Azure resource group in which you want to install the Windows agent. The agent will be installed on all the Windows VMs in the specified resource group. To specify more than one resource group, enter the name of a resource group and then press Enter. |
| InstallScript | The path or URL for the Install-LWDataCollector.ps1 PowerShell script. To obtain the URL, go to the Lacework Windows Agent Releases page (which lists the Windows agent releases you can install), go to the release you want to install, and copy the URL for Install-LWDataCollector.ps1 Script. |
| Vault | The name of the Azure Key Vault that contains the secret for the Lacework token. |
| TokenSecret | The name of a secret in the Azure Key Vault for the Lacework token. |
| MSIURL | The URL for downloading the Windows agent MSI package. To obtain the URL, go to the Lacework Windows Agent Releases page, go to the release you want to install, and copy the URL for Lacework Windows Agent MSI Package. |
Install the Windows Agent via MSI using Azure Resource Manager 📎
You can install the Lacework agent on your Windows host through an Azure Resource Manager (ARM) template. In this type of deployment, the ARM template uses the CustomScriptExtension to download and run the Install-LWDataCollector.ps1 PowerShell script, which installs the agent onto a Windows VM instance.
Prerequisites​
- Install Azure CLI on your machine. For instructions, see How to install the Azure CLI.
- Download the ARM Template (azurerm.zip file) using the instructions in Download Windows Agent Installers.
- Unzip the azurerm.zip file. The azurerm folder that is created contains the following files:
- parameters.json
- template.bicep
- template.json
Configure an ARM Template​
Create an ARM template that deploys your Azure resources and the Windows agent. You can use the sample ARM template (template.json or template.bicep) in the azurerm folder. This template creates a VM instance, then downloads and runs a PowerShell script (Install-LWDataCollector.ps1) to install the Windows agent on the VM instance.
The Install-LWDataCollector.ps1 script installs the Windows agent and adds a local firewall rule to allow the agent to communicate with Lacework. In addition, the script optionally configures a Windows Defender exclusion for the agent with the -defender parameter.
Lacework recommends that you exclude the agent from any antivirus or Endpoint Detection and Response (EDR) applications on your host. The Install-LWDataCollector.ps1 script allows you to enable this exclusion for Microsoft Defender. For other antivirus applications, you can customize the script to exclude the agent from scanning.
Configure a Parameters File for your Azure Deployment​
Create a JSON file for your deployment parameters. You can modify the sample parameters file (parameters.json) in the azurerm folder as required. Specify values for the following parameters in the parameters.json file:
- laceworkEndpoint - The Lacework agent server URL. For more information, see serverurl Property.
- laceworkMSIURL - The URL for downloading the Windows agent MSI package. To obtain the URL, go to the Lacework Windows Agent Releases page (which lists the Windows agent releases you can install), go to the release you want to install, and copy the URL for Lacework Windows Agent MSI Package.
- laceworkPSScript - The URL for the Install-LWDataCollector.ps1 PowerShell script. To obtain the URL, go to the Lacework Windows Agent Releases page, go to the release you want to install, and copy the URL for Install-LWDataCollector.ps1 Script.
- laceworkToken - A valid Lacework agent access token. For more information, see Obtain an Access Token for the Windows Agent. It is good practice to store access tokens securely in Azure Key Vault; the parameters.json file references the access token in a Key Vault.
- laceworkDefender - (Optional) To configure a Windows Defender exclusion for the agent, change the value of this parameter to true.
- Parameters for your Windows VM on Azure.
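As a sketch, a parameters file covering the options above can follow the standard ARM deployment-parameters format; the values below are placeholders, and the Key Vault reference for laceworkToken assumes a vault and secret you have already created:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "laceworkEndpoint": { "value": "Your_API_Endpoint" },
    "laceworkMSIURL": { "value": "Agent_MSI_Download_URL" },
    "laceworkPSScript": { "value": "Install_Script_URL" },
    "laceworkDefender": { "value": true },
    "laceworkToken": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "lacework-token"
      }
    }
  }
}
```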
Deploy the ARM Template​
Execute the command to deploy the ARM template.
PowerShell
Open a PowerShell terminal as administrator and execute the following command:
- To use the template.json file:
  New-AzResourceGroupDeployment -Name <deployment_name> -ResourceGroupName <resource_group> -TemplateFile template.json -TemplateParameterFile parameters.json
- To use the template.bicep file:
  New-AzResourceGroupDeployment -Name <deployment_name> -ResourceGroupName <resource_group> -TemplateFile template.bicep -TemplateParameterFile parameters.json
  Where -Name specifies the name of your Azure deployment, and -ResourceGroupName specifies the name of the Azure resource group to which you want to add the deployment.
Azure CLI
Execute the following command in the Azure CLI:
- To use the parameters.json file:

  ```
  az deployment group create -n <deployment_name> -g <resource_group> -f template.json -p @parameters.json
  ```

- To use the parameters.bicep file:

  ```
  az deployment group create -n <deployment_name> -g <resource_group> -f template.json -p @parameters.bicep
  ```
Deploy to an Existing Azure VM Instance without Using an ARM Template​
You can install the Lacework Windows agent to an existing Azure VM instance without using an ARM template.
PowerShell
Open a PowerShell terminal as administrator and execute the following command:
```powershell
Set-AzVMCustomScriptExtension -ResourceGroupName Your_Resource_Group_Name `
    -VMName Your_VM_Name `
    -Location Your_Azure_Region `
    -FileUri "https://updates.lacework.net/windows/<Release-Version>/Install-LWDataCollector.ps1" `
    -Run 'Install-LWDataCollector.ps1 -MSIURL Agent_MSI_Download_URL -AccessToken Your_Access_Token -ServerURL Your_API_Endpoint -defender' `
    -Name install-lacework-dc `
    -SecureExecution
```
Azure CLI
Execute the following command in the Azure CLI:
```
az vm extension set -n customScriptExtension --publisher Microsoft.Compute --extension-instance-name install-lacework-dc -g Your_Resource_Group_Name --vm-name Your_VM_Name --protected-settings '{"FileUris": ["https://updates.lacework.net/windows/<Release-Version>/Install-LWDataCollector.ps1"], "commandToExecute": "powershell -File Install-LWDataCollector.ps1 -MSIURL Agent_MSI_Download_URL -AccessToken Your_Access_Token -ServerURL Your_API_Endpoint -defender"}'
```
Where:
- `Your_Resource_Group_Name` specifies your Azure resource group name.
- `Your_VM_Name` specifies the name of the Azure VM in which you want to install the agent.
- `Your_Azure_Region` specifies the Azure region in which the VM exists. For example, `eastus`.
- `https://updates.lacework.net/windows/<Release-Version>/Install-LWDataCollector.ps1` specifies the URL for the Install-LWDataCollector.ps1 PowerShell script. To obtain the URL, do the following:
  1. Go to the Lacework Windows Agent Releases page. This page lists the Windows agent releases you can install.
  2. Go to the release you want to install.
  3. Copy the URL for the Install-LWDataCollector.ps1 Script.
- `Agent_MSI_Download_URL` specifies the URL for downloading the Windows agent MSI package. To obtain the URL, do the following:
  1. Go to the Lacework Windows Agent Releases page.
  2. Go to the release you want to install.
  3. Copy the URL for Lacework Windows Agent MSI Package.
- `Your_Access_Token` specifies your agent access token. For more information, see Obtain an Access Token for the Windows Agent.
- `Your_API_Endpoint` specifies your Lacework agent server URL. For more information, see serverurl Property.
- `-defender` configures a Windows Defender exclusion for the agent.
Install the Windows Agent via MSI using Terraform 📎
You can use Terraform to deploy the Lacework agent to Azure VM instances. In this type of deployment, the Terraform template uses the CustomScriptExtension to download and run an Install-LWDataCollector.ps1 PowerShell script that installs the agent onto a Windows VM instance.
Prerequisites​
- Install Terraform on your machine. For instructions, see Install Terraform.
- Install Azure CLI on your machine. For instructions, see How to install the Azure CLI.
- Download the Terraform script for Azure (azure-terraform.zip file) using the instructions in Download Windows Agent Installers.
- Unzip the azure-terraform.zip file. The azure-terraform folder that is created contains the following files:
- main.tf
- variables.tf
Configure the Terraform Template File​
Open the main.tf file in the azure-terraform folder and examine the local variables at the beginning of the file. These variables define the script commands used to install the agent. The azurerm_virtual_machine_extension resource specifies the CustomScriptExtension to download and run the Install-LWDataCollector.ps1 script.
The Install-LWDataCollector.ps1 script installs the Windows agent and adds a local firewall rule to allow the agent to communicate with Lacework. In addition, the script optionally configures a Windows Defender exclusion for the agent with the -defender parameter.
Lacework recommends that you exclude the agent from any antivirus or Endpoint Detection and Response (EDR) applications on your host. The Install-LWDataCollector.ps1 script allows you to enable this exclusion for Microsoft Defender. For other antivirus applications, you can customize the Install-LWDataCollector.ps1 script to exclude the agent from scanning.
Configure the Input Variable File​
Open the variables.tf file in the azure-terraform folder and configure the parameters for the Terraform module. The Terraform template uses the variables.tf file to define the parameters for the Terraform module. These variables correspond to the parameters used in the Install-LWDataCollector.ps1 script.
Specify values for the following variables in the variables.tf file:
- `lacework_token` - A valid Lacework agent access token. For more information, see Obtain an Access Token for the Windows Agent. It is good practice to store access tokens securely in Azure Key Vault. The variables.tf file references the access token in a Key Vault.
- `lacework_endpoint` - The Lacework agent server URL. For more information, see serverurl Property.
- `lacework_msi_url` - The URL for downloading the Windows agent MSI package. To obtain the URL, do the following:
  1. Go to the Lacework Windows Agent Releases page. This page lists the Windows agent releases you can install.
  2. Go to the release you want to install.
  3. Copy the URL for Lacework Windows Agent MSI Package.
- `lacework_ps_script` - The URL for the Install-LWDataCollector.ps1 PowerShell script. To obtain the URL, do the following:
  1. Go to the Lacework Windows Agent Releases page.
  2. Go to the release you want to install.
  3. Copy the URL for Install-LWDataCollector.ps1 Script.
- `lacework_defender` - (Optional) To configure a Windows Defender exclusion for the agent, change the value of this variable to `true`.
- Variables for your Azure resource group, Azure Key Vault, and Windows VM on Azure.
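As a sketch, the variable declarations might look like the following variables.tf fragment. The variable names follow the list above; the types and placeholder defaults are illustrative assumptions, and in practice the access token would be supplied from Key Vault rather than hard-coded.

```hcl
# Illustrative variable declarations; replace the placeholder defaults
# with values for your tenant and release.
variable "lacework_endpoint" {
  type    = string
  default = "<your agent server URL>"
}

variable "lacework_msi_url" {
  type    = string
  default = "<MSI package URL from the releases page>"
}

variable "lacework_ps_script" {
  type    = string
  default = "<Install-LWDataCollector.ps1 URL from the releases page>"
}

variable "lacework_defender" {
  type    = bool
  default = true
}
```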
Run Terraform​
1. Open a terminal and navigate to the directory that contains the `main.tf` file.
2. Run `terraform init` to initialize the project and download the required modules.
3. Run `terraform plan` to validate the configuration and review pending changes.
4. After you review the pending changes, run `terraform apply`.
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Configuration > Agents.
Install the Windows Agent via MSI in an Image using Packer 📎
You can use HashiCorp Packer to create an Amazon Machine Image (AMI) with the Lacework agent pre-installed and configured.
Prerequisites​
- Install Packer on your machine. For details on how to install and provision Packer, see Install Packer.
- Install the AWS CLI on your machine. For instructions, see Installing or updating the latest version of the AWS CLI.
- Download the Packer for AWS script (packer.zip file) using the instructions in Download Windows Agent Installers to your machine.
- Unzip the packer.zip file. The packer folder that is created contains the following folders and files:
```
\config-json
    config.json
    install.ps1
    lacework-vars.pkrvars.hcl
    lacework.pkr.hcl
    variables.pkr.hcl
    winrm_bootstrap.txt
\setting-token
    install-man.ps1
    lacework-vars.pkrvars.hcl
    lacework-without-config-json.pkr.hcl
    variables.pkr.hcl
    winrm_bootstrap.txt
```
Packer Build using Configuration File​
This deployment uses the config.json agent configuration file to provision the Windows agent.
Prepare Files Required to Install Agent with Packer​
The following sections describe the files that are required to configure the variables for your environment. You can modify the sample files in the config-json folder.
Prepare config.json File​
Modify the config.json file in the config-json folder.
```json
{
  "tokens": {
    "accesstoken": "<accessToken>"
  },
  "schemaversion": "0.6",
  "serverurl": "<serverURL>"
}
```
Where:
- `accessToken` specifies your agent access token. For more information, see Obtain an Access Token for the Windows Agent.
- `serverUrl` specifies your Lacework agent server URL. For more information, see serverurl Property.
By default, the agent is automatically upgraded when a new version is available. To disable automatic upgrade, see Upgrade the Windows Agent.
Prepare HCL Files​
Modify the following HashiCorp Configuration Language (HCL) files in the config-json folder.
lacework-vars.pkrvars.hcl
```hcl
region="<awsRegion>"
ami_name="<amiPrefixName>"
instance_type="<awsInstanceType>"
```

Where `region` specifies the AWS region, `ami_name` specifies the name of the AMI built by Packer, and `instance_type` specifies the AWS EC2 instance type.
variables.pkr.hcl
```hcl
AWS_ACCESS_KEY_ID="<awsAccessID>"
AWS_SECRET_ACCESS_KEY="<awsSecretKey>"
```

Where `AWS_ACCESS_KEY_ID` specifies your AWS access key ID and `AWS_SECRET_ACCESS_KEY` specifies your AWS secret access key.
Prepare Install PowerShell Script​
Modify the install.ps1 PowerShell script in the config-json folder. This script runs the agent's MSI installer.
```powershell
# Install Lacework Windows Agent
#
try {
    Write-Host "Downloading Lacework Windows Agent"
    Invoke-WebRequest -Uri "https://updates.lacework.net/windows/<ReleaseVersion>/LWDataCollector.msi" -OutFile LWDataCollector.msi
    Write-Host "Installing Lacework Windows Agent"
    $lacework = (Start-Process msiexec.exe -ArgumentList "/i","LWDataCollector.msi","CONFIGFILE=C:\config.json","/passive" -NoNewWindow -Wait -PassThru)
    if ($lacework.ExitCode -ne 0) {
        Write-Error "Error installing Lacework Windows Agent"
        exit 1
    }
}
catch {
    Write-Error $_.Exception
    exit 1
}
```
Where:
- The `Invoke-WebRequest -Uri "https://updates.lacework.net/windows/<ReleaseVersion>/LWDataCollector.msi"` cmdlet specifies the URL for the Lacework Windows agent MSI package. To obtain the URL for the MSI package, do the following:
  1. Follow the instructions in Download the Windows Agent Installer and click MSI Package.
  2. Click Copy URL to obtain the URL for the MSI package.
  3. Use the copied URL in the `Invoke-WebRequest -Uri` cmdlet.
- `CONFIGFILE` specifies the location of the config.json file.
Run Packer to Build AMI​
Install the Windows agent by running the following Packer command:
```
packer build -var-file=lacework-vars.pkrvars.hcl lacework.pkr.hcl
```
Packer Build without Configuration File​
This deployment does not use an agent configuration file, but instead specifies the agent token and API endpoint in the install script.
Prepare Files Required to Install Agent with Packer​
The following sections describe the files that are required to configure the variables for your environment. You can modify the sample files in the setting-token folder.
Create HCL Files​
Modify the following HashiCorp Configuration Language (HCL) files in the setting-token folder.
lacework-vars.pkrvars.hcl
```hcl
region="<awsRegion>"
ami_name="<amiPrefixName>"
instance_type="<awsInstanceType>"
```

Where `region` specifies the AWS region, `ami_name` specifies the name of the AMI built by Packer, and `instance_type` specifies the AWS instance type.
variables.pkr.hcl
```hcl
AWS_ACCESS_KEY_ID="<ACCESS_KEY_ID>"
AWS_SECRET_ACCESS_KEY="<SECRET_ACCESS_KEY>"
```

Where `AWS_ACCESS_KEY_ID` specifies your AWS access key ID and `AWS_SECRET_ACCESS_KEY` specifies your AWS secret access key.
Prepare Install PowerShell Script​
Modify the install-man.ps1 PowerShell script in the setting-token folder. This script runs the agent's MSI installer.
```powershell
# Install Lacework Windows Agent
#
try {
    Write-Host "Downloading Lacework Windows Agent"
    Invoke-WebRequest -Uri "https://updates.lacework.net/windows/<ReleaseVersion>/LWDataCollector.msi" -OutFile LWDataCollector.msi
    Write-Host "Installing Lacework Windows Agent"
    $lacework = (Start-Process msiexec.exe -ArgumentList "/i","LWDataCollector.msi","ACCESSTOKEN=<accessToken>","SERVERURL=<serverURL>","/passive" -NoNewWindow -Wait -PassThru)
    if ($lacework.ExitCode -ne 0) {
        Write-Error "Error installing Lacework Windows Agent"
        exit 1
    }
}
catch {
    Write-Error $_.Exception
    exit 1
}
```
Where:
- The `Invoke-WebRequest -Uri "https://updates.lacework.net/windows/<ReleaseVersion>/LWDataCollector.msi"` cmdlet specifies the URL for the Lacework Windows agent MSI package. To obtain the URL for the MSI package, do the following:
  1. Follow the instructions in Download the Windows Agent Installer and click MSI Package.
  2. Click Copy URL to obtain the URL for the MSI package.
  3. Use the copied URL in the `Invoke-WebRequest -Uri` cmdlet.
- `ACCESSTOKEN` specifies the access token for your agent. For more information, see Obtain an Access Token for the Windows Agent.
- `SERVERURL` specifies your Lacework agent server URL. For more information, see serverurl Property.
Run Packer to Build AMI​
Install the Windows agent by running the following Packer command:
```
packer build -var-file=lacework-vars.pkrvars.hcl lacework-without-config-json.pkr.hcl
```
- Linux Agent Daemonset
- Windows Agent Daemonset
- Kubernetes Admission Controller
- EKS Compliance Agent
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| User and Entity Behavior Analytics (UEBA) | Workload Anomaly Detection Process Dashboard and Polygraph Network Dashboard and Polygraph Filesystem Dashboard and Polygraph | Linux Agent Windows Agent Kubernetes Agent |
| Vulnerability Management | Container Vulnerability Management | Kubernetes Agent |
Install the Kubernetes Agent Daemonset on Linux Nodes 📎
You can use the following methods to install the Lacework Linux agent on Kubernetes:
- Install with a Helm Chart - Helm is a package manager for Kubernetes that uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. You can download the Lacework Helm chart and use it to install the agent.
- Deploy with a DaemonSet - DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework. You can use the DaemonSet method to deploy the agent onto any Kubernetes cluster, including hosted versions like AKS, EKS, and GKE.
- Install with Terraform - For organizations using Hashicorp Terraform to automate their environments, Lacework provides the terraform-kubernetes-agent module to create a Secret and DaemonSet for installing the agent in a Kubernetes cluster.
- Install in gVisor on Kubernetes - gVisor provides an additional layer of isolation between running applications and the host operating system. You can install the agent in gVisor on a Kubernetes cluster.
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents. You can also view your Kubernetes cluster in the Lacework Console under Resources > Kubernetes. If your cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
The datacollector pod uses privileged containers and requires access to host PID namespace, networking, and volumes.
Prerequisites​
- A Kubernetes cluster on a supported Kubernetes environment. For more information, see Supported Kubernetes Environments.
- To enable the agent to read the cluster name:
  - If you created the Kubernetes cluster in a K8s orchestrator that supports machine tags, such as AKS, EKS, or GKE, add the `KubernetesCluster` machine tag for the cluster using the instructions at Add KubernetesCluster Machine Tag.
  - If you created your own Kubernetes cluster (rather than using EKS, AKS, GKE, or a similar orchestrator), specify the cluster name using the `KubernetesCluster` tag in the config.json file using the instructions at Set KubernetesCluster Agent Tag in config.json File.
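For a self-managed cluster, the tag from the second case might appear in the agent's config.json roughly as follows. This is a sketch only; the exact property layout is documented at Set KubernetesCluster Agent Tag in config.json File, and the cluster name below is a placeholder.

```json
{
  "tokens": {
    "accesstoken": "<accessToken>"
  },
  "tags": {
    "KubernetesCluster": "my-cluster-name"
  }
}
```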
Supported Kubernetes Environments​
The Lacework Linux agent supports the following Kubernetes versions, managed Kubernetes services, container network interfaces (CNI), service meshes, and container runtime engines:
| Kubernetes Environment | Environment Name |
|---|---|
| Kubernetes versions | 1.9.x to 1.28 |
| K8s orchestrators | AKS EKS GKE EKS Fargate ECS Fargate MicroK8s OpenShift Rancher ROSA |
| CNI | Weavenet Calico Flannel Cilium kubenet |
| Service mesh | Linkerd 2.11 |
| Container runtime engine | Docker Containerd CRI-O |
Install using Helm​
Supported Versions​
- EKS (Bottlerocket and Amazon Linux)
- Helm v3.1.x to v3.12.x
- Kops 1.20
- Kubernetes v1.10 to v1.27
- Ubuntu 20.04
Install using Lacework Charts Repository (Recommended)​
Use Helm to Install the Agent (Charts Repository)​
Helm Charts help you define, install, and upgrade Kubernetes applications.
Add the Lacework Helm Charts repository:
1. Add the Lacework Helm Charts repository:

   ```
   helm repo add lacework https://lacework.github.io/helm-charts/
   ```

2. Install the Helm charts or upgrade an existing Helm chart. If the tenant you are using is located outside North America, replace the values for the `LACEWORK_AGENT_TOKEN` and `LACEWORK_SERVER_URL`.

   note: `KUBERNETES_CLUSTER_NAME` and `KUBERNETES_ENVIRONMENT_NAME` are optional. Replace them with values from your setup. To change the `KUBERNETES_CLUSTER_NAME`, see How Lacework Derives the Kubernetes Cluster Name.

   If you are using a tenant located in North America, run the following command:

   ```
   helm upgrade --install --namespace lacework --create-namespace \
     --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
     --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
     --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
     lacework-agent lacework/lacework-agent
   ```

   If you are using a tenant located outside of North America, run the following command:

   ```
   helm upgrade --install --namespace lacework --create-namespace \
     --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
     --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
     --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
     --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
     lacework-agent lacework/lacework-agent
   ```

3. Verify the pods.

   ```
   kubectl get pods -n lacework -o wide
   ```

After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
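If you script this install, a small wrapper can assemble the North America variant of the command from the placeholder variables used in this guide. The sketch below only prints the assembled command for review instead of invoking helm, so nothing is installed; the default values are illustrative assumptions.

```shell
# Assemble (but do not run) the helm command from the guide's placeholders.
# The fallback values below are illustrative only; export real values first.
LACEWORK_AGENT_TOKEN="${LACEWORK_AGENT_TOKEN:-example-token}"
KUBERNETES_CLUSTER_NAME="${KUBERNETES_CLUSTER_NAME:-demo-cluster}"
KUBERNETES_ENVIRONMENT_NAME="${KUBERNETES_ENVIRONMENT_NAME:-demo-env}"

CMD="helm upgrade --install --namespace lacework --create-namespace \
--set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
--set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
--set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
lacework-agent lacework/lacework-agent"

# Print the command so it can be reviewed before you run it yourself.
echo "$CMD"
```

Reviewing the printed command before running it avoids installing into the wrong cluster context with an empty token.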
Install Using the Charts from the Lacework Release Page​
Lacework recommends installing from the charts repository rather than from the Lacework Release Page if possible. Installing from the charts repository does not require editing the Charts.yaml file, whereas this method does.
Get the Helm Chart for the Agent​
The Helm chart is available as part of the agent release tarball from the Lacework Agent Release GitHub repository (v2.12.1 or later).
The Helm chart includes the following:
```
./helm/
./helm/lacework-agent/
./helm/lacework-agent/Chart.yaml
./helm/lacework-agent/templates/
./helm/lacework-agent/templates/_helpers.tpl
./helm/lacework-agent/templates/configmap.yaml
./helm/lacework-agent/templates/daemonset.yaml
./helm/lacework-agent/values.yaml
```
Edit Charts.yaml​
For Helm charts v4.2, in the Charts.yaml file, change the `version: 4.2.0.218` line to `version: 4.2.0`.
Use Helm to Install the Agent (Release Page)​
Replace the example text with your own values.
1. Install the charts or upgrade an existing installation.

   ```
   helm upgrade --install --namespace lacework --create-namespace \
     --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
     --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
     --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
     --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
     lacework-agent helm.tar.gz
   ```

2. Verify the pods.

   ```
   kubectl get pods -n lacework -o wide
   ```

After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
If you have an autogenerated or custom Helm deployment and these steps do not work, you can optionally:
- Change `"additionalProperties": true` in values.schema.json. Lacework supports this change, but it is not encouraged.
- Use Helm to install the agent (charts repository).
Install on OpenShift​
Install with the cluster-admin Role​
Use the normal Helm installation instructions to install the Lacework agent on OpenShift.
Install with a Service Account​
You can also install the Lacework agent using Helm charts and a service account.
Before deploying the Helm chart, ensure that the service account has permissions to create privileged pods by running the following command:
```
oc adm policy add-scc-to-user privileged -z ${SERVICE_ACCOUNT_NAME}
```
To install the agent using a Helm chart:
- Specify a service account when installing the Helm chart by adding `laceworkConfig.serviceAccountName` to the Helm command:

  ```
  --set laceworkConfig.serviceAccountName="${SERVICE_ACCOUNT_NAME}"
  ```

- Modify the values.yaml file and add the service account:

  ```yaml
  # [Optional] Specify the service account for agent pods
  serviceAccountName: ${SERVICE_ACCOUNT_NAME}
  ```
You can specify that the agent runs on all nodes in a cluster or in a subset of nodes in the cluster.
Enable the Lacework Agent on all Nodes​
To run the Lacework agent on all nodes in your cluster, specify the following toleration during installation in one of the following ways:
- Enter a command, such as:

  ```
  --set "tolerations[0].effect=NoSchedule" --set "tolerations[0].operator=Exists"
  ```

- Modify the values.yaml file and add data similar to the following:

  ```yaml
  tolerations:
    # Allow Lacework agent to run on all nodes in case of a taint
    - effect: NoSchedule
      operator: Exists
  ```
Enable the Lacework Agent on a Subset of Nodes​
To set multiple tolerations for the Lacework agent, set an array of desired tolerations in one of the following ways:
- Enter the following command and repeat for each scheduling condition, incrementing the array index each time:

  ```
  --set "tolerations[0].effect=NoSchedule" --set "tolerations[0].key=node-role.kubernetes.io/master"
  ```

- Modify the values.yaml file and add data similar to the following:

  ```yaml
  tolerations:
    # Allow Lacework agent to run on tainted nodes
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
  ```
Helm Configuration Options​
Specify the Container Runtime that the Agent Uses to Discover Containers​
By default, the agent automatically discovers the container runtime (containerd, cri-o, and docker). You can use the containerRuntime option to specify the runtime that you want the agent to use to discover containers.
To specify the container runtime that the agent uses, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.containerRuntime=docker`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  containerRuntime: docker
  ```
If either the containerRuntime or the containerEngineEndpoint setting is wrong, the agent will not detect containers.
Specify the Endpoint that the Agent Uses to Discover Containers​
By default, the agent uses the default endpoint for the system's container runtime. You can use the containerEngineEndpoint option to specify any valid URL, TCP endpoint, or a Unix socket as the endpoint.
To specify the endpoint that the agent uses to discover containers, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.containerEngineEndpoint=unix:///run/docker.sock`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  containerEngineEndpoint: unix:///run/docker.sock
  ```
If either the containerRuntime or the containerEngineEndpoint setting is wrong, the agent will not detect containers.
Specify Nodes for Your Agent Deployment​
Tolerations let you run the agent on nodes that have scheduling constraints such as master nodes or infrastructure nodes (for OpenShift).
By default, the Lacework agent is permitted to run on worker nodes and master nodes in your Kubernetes cluster. This is done by specifying the toleration as follows:
```yaml
# Allow Lacework agent to run on all nodes including master node
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  # Allow Lacework agent to run on all nodes in case of a taint
  # - effect: NoSchedule
  #   operator: Exists
```
Prevent Agent Pods from Being Evicted
To prevent agent pods from being evicted in oversubscribed clusters, Lacework recommends that you assign a higher priority for agent pods. For more information about pod priority, see Pod Priority and Preemption.
You can assign a higher priority for agent pods in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set priorityClassCreate=true`
- Change `priorityClassCreate: false` in the values.yaml file to `priorityClassCreate: true`.
Specify CPU Requests and Limits​
CPU requests specify the minimum CPU resources available to containers. CPU limits specify the maximum CPU resources available to containers. For more information, see Resource Management for Pods and Containers.
The default CPU request is 200m. The default CPU limit is 500m.
You can specify the CPU requests and limits in one of the following ways:
- Enter a command such as the following on the command line:

  ```
  --set resources.requests.cpu=300m --set resources.limits.cpu=500m
  ```

- Modify the values.yaml file and add data similar to the following:

  ```yaml
  resources:
    requests:
      cpu: 300m
    limits:
      cpu: 500m
  ```
Specify Memory Requests and Limits​
Memory requests specify the minimum memory available to containers. Memory limits specify the maximum memory available to containers. For more information, see Resource Management for Pods and Containers.
The default memory request is 512Mi. The default memory limit is 1450Mi.
You can specify the memory requests and limits in one of the following ways:
- Enter a command such as the following on the command line:

  ```
  --set resources.requests.memory=384Mi --set resources.limits.memory=512Mi
  ```

- Modify the values.yaml file and add data similar to the following:

  ```yaml
  resources:
    requests:
      memory: 384Mi
    limits:
      memory: 512Mi
  ```
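The CPU and memory settings above combine into a single resources block in values.yaml. As a sketch, using the default values stated in this section:

```yaml
resources:
  requests:
    cpu: 200m      # default CPU request
    memory: 512Mi  # default memory request
  limits:
    cpu: 500m      # default CPU limit
    memory: 1450Mi # default memory limit
```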
Specify a Proxy URL on Helm Charts​
Proxy servers allow you to specify a URL to route agent traffic.
You can set the proxy server URL in your Lacework Helm charts in one of the following ways:
- Enter a command such as `--set laceworkConfig.proxyUrl=${LACEWORK_PROXY_URL}` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  # [Required] Specify a proxy server URL to use for routing agent traffic
  proxyUrl: value
  ```
Configure File Integrity Monitoring Properties​
Enable or Disable FIM​
Enable FIM in one of the following ways:

- Enter a command such as `--set laceworkConfig.fim.mode=enable` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  mode: enable
  ```

Disable FIM in one of the following ways:

- Enter a command such as `--set laceworkConfig.fim.mode=disable` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  mode: disable
  ```
Specify the File Path​
You can override default paths for FIM using this property in one of the following ways:
- Enter a command such as `--set laceworkConfig.fim.filePath={<path1>,<path2>, ...}` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  filePath: [<path1>, <path2>, ...]
  ```
Specify the File Path to Ignore​
Alternatively, you can override default paths by specifying files to ignore for FIM in one of the following ways:
- Enter a command such as `--set laceworkConfig.fileIgnore={<path1>,<path2>, ...}` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  fileIgnore: [<path1>, <path2>, ...]
  ```
Prevent the Access Timestamp from Being Used in Hash Computation​
You can prevent the access timestamp from being used in one of the following ways:

- Enter a command such as `--set laceworkConfig.fim.noAtime=true` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  noAtime: true
  ```

Alternatively, you can enable the access timestamp to be used in one of the following ways:

- Enter a command such as `--set laceworkConfig.fim.noAtime=false` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  noAtime: false
  ```
Specify the FIM Scan Start Time​
You can specify a start time for the daily FIM scan using this property in one of the following ways:
- Enter a command such as `--set laceworkConfig.fim.runAt=<HH:MM>` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  runAt: <HH:MM>
  ```
Specify the FIM Scan Interval​
You can specify the FIM scan interval using this property in one of the following ways:
- Enter a command such as `--set laceworkConfig.fim.crawlInterval=<time_interval>` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  crawlInterval: <time_interval>
  ```
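Taken together, the FIM properties above nest under `laceworkConfig` in values.yaml. The fragment below is a sketch: the nesting mirrors the `--set laceworkConfig.fim.*` paths used in this section, and the paths and time are illustrative placeholders.

```yaml
laceworkConfig:
  fim:
    mode: enable
    filePath: [/etc, /usr/bin]  # illustrative override paths
    noAtime: true               # exclude access timestamp from hashing
    runAt: "02:30"              # illustrative daily scan start time (HH:MM)
```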
Enable Active Package Detection​
Active package detection enables you to know whether a vulnerable package is being used by an application on your host and prioritize fixing active vulnerable packages first.
For the list of supported package managers and types, see Which package managers and types are supported?.
Use the Package Status filter in the Host Vulnerability page to view active or inactive vulnerable packages on hosts. See Host Vulnerability - Package Status for details.
By default, active package detection is disabled.
To enable active package detection, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.codeaware.enable=all`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  codeaware:
    enable: all
  ```

note: For some package types, you also need to enable Agentless Workload Scanning in your environment. See Which package managers and types are supported? for details.
If active package detection is enabled, do one of the following to disable it:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.codeaware.enable=false`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  codeaware:
    enable: false
  ```
Specify Package Scan Options​
By default, package scan is enabled.
To disable package scan, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.packagescan.enable=false`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  packagescan:
    enable: false
  ```
If package scan is disabled, do one of the following to enable it:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.packagescan.enable=true`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  packagescan:
    enable: true
  ```
To specify the interval (in minutes) between package scans, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.packagescan.interval=60`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  packagescan:
    interval: 60
  ```
Specify Process Scan Options​
By default, process scan is enabled.
To disable process scan, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.procscan.enable=false`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  procscan:
    enable: false
  ```
If process scan is disabled, do one of the following to enable it:

- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.procscan.enable=true`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  procscan:
    enable: true
  ```

To specify the interval (in minutes) between process scans, do one of the following:

- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.procscan.interval=60`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  procscan:
    interval: 60
  ```
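Taken together, the scan options above live side by side under `laceworkConfig` in values.yaml. A minimal sketch (the interval values are illustrative, not recommendations):

```yaml
laceworkConfig:
  codeaware:
    enable: all      # active package detection
  packagescan:
    enable: true
    interval: 60     # minutes between package scans (illustrative)
  procscan:
    enable: true
    interval: 60     # minutes between process scans (illustrative)
```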
Filter Executables Tracked by the Agent​
By default, the agent collects command-line arguments for all executables when it is collecting process metadata. You can use the cmdlinefilter option to selectively enable or disable collection of command-line arguments for executables.
To collect command-line arguments for specific executables only, do one of the following:

- Use one of the following with the `helm install` or `helm upgrade` command:
  - To collect data for one executable: `--set laceworkConfig.cmdlinefilter.allow=java`
  - To collect data for more than one executable, use a comma-separated list: `--set laceworkConfig.cmdlinefilter.allow=java,python`
  - To collect data for all executables, use the * wildcard (the default and recommended setting): `--set laceworkConfig.cmdlinefilter.allow=*`
- Use one of the following in the values.yaml file:
  - To collect data for one executable:

    ```yaml
    cmdlinefilter:
      allow: java
    ```

  - To collect data for more than one executable, use a comma-separated list:

    ```yaml
    cmdlinefilter:
      allow: java,python
    ```

  - To collect data for all executables, use the * wildcard (the default and recommended setting):

    ```yaml
    cmdlinefilter:
      allow: "*"
    ```
To disable collection of command-line arguments for specific executables, do one of the following:

- Use one of the following with the `helm install` or `helm upgrade` command:
  - To disable collection of data for one executable: `--set laceworkConfig.cmdlinefilter.disallow=java`
  - To disable collection of data for more than one executable, use a comma-separated list: `--set laceworkConfig.cmdlinefilter.disallow=java,python`
  - To disable collection of data for all executables, use the * wildcard (this stops data collection for all executables and is not recommended): `--set laceworkConfig.cmdlinefilter.disallow=*`
- Use one of the following in the values.yaml file:
  - To disable collection of data for one executable:

    ```yaml
    cmdlinefilter:
      disallow: java
    ```

  - To disable collection of data for more than one executable, use a comma-separated list:

    ```yaml
    cmdlinefilter:
      disallow: java,python
    ```

  - To disable collection of data for all executables, use the * wildcard (this stops data collection for all executables and is not recommended):

    ```yaml
    cmdlinefilter:
      disallow: "*"
    ```
Limiting the data collected by the agent reduces Lacework’s process-aware threat and intrusion detection in your cloud environment and limits the alerts that Lacework generates. If you must disable sensitive data collection in your environment, Lacework recommends disabling the smallest set of executables possible.
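As a sketch of that trade-off, the following values.yaml fragment keeps the recommended wildcard while excluding a minimal set of executables; `vault` and `pass` are placeholders for whatever sensitive binaries exist in your environment, not recommendations:

```yaml
laceworkConfig:
  cmdlinefilter:
    allow: "*"            # keep the recommended default for everything else
    disallow: vault,pass  # placeholders: the smallest possible set of sensitive executables
```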
Specify Image Pull Secrets​
Image pull secrets enable fetching the Lacework agent image from private repositories and/or allow bypassing rate limits.
You can configure image pull secrets in one of the following ways:
- Modify your Helm install/upgrade command with the following option: `--set image.imagePullSecrets.name=<registrySecret>`
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  # [Optional] imagePullSecrets.
  # https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  imagePullSecrets:
    - name: <registrySecret>
  ```

Where `<registrySecret>` is the name of the secret that contains the credentials necessary to fetch the Lacework agent image.
Specify an Existing Secret​
Existing secrets allow you to store the Lacework access token outside of Helm.
You can use an existing secret in your Lacework Helm charts in one of the following ways:
- Enter a command such as the following on the command line:

  ```
  --set laceworkConfig.accessToken.existingSecret.key="lacework_agent_token"
  --set laceworkConfig.accessToken.existingSecret.name="lacework-agent-secret"
  ```

- Modify the values.yaml file and add data similar to the following:

  ```yaml
  laceworkConfig:
    # [Required] An access token is required before running agents.
    # Visit https://<LACEWORK CONSOLE URL> for eg: https://lacework.lacework.net
    accessToken:
      existingSecret:
        key: lacework_agent_token
        name: lacework-agent-secret
  ```
Kubernetes requires that the existing secret is base64 encoded.
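For example, assuming a placeholder token value, the stored secret data is the base64 encoding of the raw token, which Kubernetes decodes back for the agent. A sketch using standard coreutils (not a Lacework command):

```shell
# Placeholder token; a real agent access token comes from the Lacework Console.
TOKEN="MyAgentToken123"

# Kubernetes stores Secret data base64-encoded.
ENCODED=$(printf '%s' "$TOKEN" | base64)
echo "$ENCODED"                     # TXlBZ2VudFRva2VuMTIz

# Decoding round-trips to the original token.
printf '%s' "$ENCODED" | base64 -d  # MyAgentToken123
```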
Specify the AWS Metadata Request Interval​
The agent retrieves metadata tags from AWS to enable you to quickly identify where you need to take actions to fix alerts displayed in the Lacework Console. To ensure that the latest metadata is displayed in the Lacework Console, the agent periodically makes describe-tags API calls to retrieve tags from AWS.
To limit the number of API calls, specify the interval during which the agent retrieves the tags. The interval can be specified in ns (nanoseconds), us (microseconds), ms (milliseconds), s (seconds), m (minutes), or h (hours).
For example, to retrieve the tags once every 15 minutes, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.metadataRequestInterval="15m"`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  metadataRequestInterval: 15m
  ```
To disable the agent from retrieving tags from AWS, do one of the following:

- Use the following option with the `helm install` or `helm upgrade` command: `--set laceworkConfig.metadataRequestInterval="0"`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  metadataRequestInterval: 0
  ```
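The interval string uses duration suffixes as listed above. Purely as an illustration of how those suffixes relate to one another (this helper is not part of the agent), the coarser units convert to seconds like this:

```shell
# Illustrative helper: convert an s/m/h duration string to seconds.
# (The agent also accepts ns, us, and ms; those are omitted here.)
to_seconds() {
  case "$1" in
    *h) echo $(( ${1%h} * 3600 )) ;;
    *m) echo $(( ${1%m} * 60 )) ;;
    *s) echo "${1%s}" ;;
    *)  echo "$1" ;;
  esac
}

to_seconds 15m   # 900: the example interval above
to_seconds 1h    # 3600
```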
Specify Custom Annotations on Helm Charts​
Annotations are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionalities.
You can set annotations in your Lacework Helm charts in one of the following ways:
- Enter a command such as `--set laceworkConfig.annotations.<key>=<value>` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  # [Optional] Define custom annotations to use for identifying resources created by these charts
  annotations:
    key: value
    another_key: another_value
  ```
Specify Custom Labels on Helm Charts​
Similar to custom annotations, custom labels are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionalities.
You can set labels in your Lacework Helm charts in one of the following ways:
- Enter a command such as `--set laceworkConfig.labels.<key>=<value>` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  # [Optional] Define custom labels to use for identifying resources created by these charts
  labels:
    key: value
    another_key: another_value
  ```
Specify Tags to Categorize Agents​
You can use the tags option to specify name/value tags to categorize your agents. For more information, see Adding Agent Tags.
To specify tags, do one of the following:
- Use the following options with the `helm install` or `helm upgrade` command:

  ```
  --set laceworkConfig.tags.<tagname1>=<value1>
  --set laceworkConfig.tags.<tagname2>=<value2>
  ```

  For example:

  ```
  --set laceworkConfig.tags.location=austin
  --set laceworkConfig.tags.owner=pete
  ```

- Modify the values.yaml file and add data similar to the following:

  ```yaml
  tags:
    <tagname1>: <value1>
    <tagname2>: <value2>
  ```

  For example:

  ```yaml
  tags:
    location: austin
    owner: pete
  ```
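Tags can also sit alongside the annotation and label options described above in a single values.yaml. A sketch (every key and value below is illustrative):

```yaml
laceworkConfig:
  tags:
    location: austin
    owner: pete
  annotations:
    team: security            # illustrative annotation
  labels:
    environment: production   # illustrative label
```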
Specify the perfmode Property on Helm Charts​
You can set the perfmode property in your Lacework Helm charts in one of the following ways:
- Enter a command such as `--set laceworkConfig.perfmode=PERFMODE_TYPE` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  # [Optional] Set to one of the other modes like ebpflite, scan, or lite for load balancers.
  perfmode: PERFMODE_TYPE
  ```

Where PERFMODE_TYPE can be one of the following values:

- `ebpflite` - The eBPF lite mode.
- `lite` - The lite mode.
- `scan` - The scan mode.
- `null` - Disables the perfmode property. The agent runs in normal mode.
Disable or Enable Logging to stdout​
Logging to stdout is enabled by default for Lacework Helm charts. You can disable stdout logging in one of the following ways:
- Enter a command such as `--set laceworkConfig.stdoutLogging=false` on the command line.
- Modify the values.yaml file and add data similar to the following:

  ```yaml
  stdoutLogging: false
  ```
Install a Specific Version of the Lacework Agent Using Helm Charts​
The Lacework Helm Charts Repository contains a Helm chart version for every agent version. By default, the latest version of the Lacework agent is installed when you use the Lacework Helm Charts Repository to install the agent. You can use the chart version corresponding to an agent version to install a specific version of the agent.
Add the Lacework Helm Charts repository:

```shell
helm repo add lacework https://lacework.github.io/helm-charts/
```

If the repository was already added on your machine, update the repository:

```shell
helm repo update lacework
```

View the chart versions available in the repository:
helm search repo lacework --versions
NAME CHART VERSION APP VERSION DESCRIPTION
lacework/lacework-agent 6.2.0 1.0 Lacework Agent
lacework/lacework-agent 6.1.2 1.0 Lacework Agent
lacework/lacework-agent 6.1.0 1.0 Lacework Agent
lacework/lacework-agent 6.0.2 1.0 Lacework Agent
lacework/lacework-agent 6.0.1 1.0 Lacework Agent
lacework/lacework-agent 6.0.0 1.0 Lacework Agent
lacework/lacework-agent 5.9.0 1.0 Lacework Agent
lacework/lacework-agent 5.8.0 1.0 Lacework Agent
lacework/lacework-agent 5.7.0 1.0 Lacework Agent
lacework/lacework-agent 5.6.0 1.0 Lacework Agent
lacework/lacework-agent 5.5.2 1.0 Lacework Agent

In this example, the 6.2.0 chart version corresponds to the 6.2.0 version of the agent.

Use the `--version` option to install the agent with a specific chart version. For example, run the following command to install the 6.2.0 version of the agent with the 6.2.0 chart version:

```shell
helm upgrade --install --version 6.2.0 --namespace lacework --create-namespace \
  --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
  --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
  --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
  lacework-agent lacework/lacework-agent
```
Deploy with a DaemonSet​
DaemonSet Visibility​
When an agent is installed on a node as a DaemonSet/Pod or on the node itself, the agent has visibility into the following container and host resources:
- Processes running on the host.
- Processes running in a container that make a network connection (server or client).
- All container internal servers and processes that are listening actively on certain ports.
- File Integrity Monitoring (FIM) on the host.
- Host vulnerability on the host.
DaemonSet Deployment Using a configmap​
1. Download the Kubernetes Config (`lacework-cfg-k8s.yaml`) and Kubernetes Orchestration (`lacework-k8s.yaml`) files using the instructions in Create Agent Access Tokens and Download Linux Agent Installers.

2. Create the pods namespace:

   ```shell
   kubectl create namespace lacework
   ```

   Note: Lacework recommends assigning a namespace to the DaemonSet config.

3. Using the kubectl command line interface, add the Lacework configuration file into the cluster in the newly created namespace:

   ```shell
   kubectl create -f lacework-cfg-k8s.yaml -n lacework
   ```

4. Instruct the Kubernetes orchestrator to deploy an agent on all nodes in the cluster, including the master. To change the CPU and memory limits, see Change Agent Resource Installation Limits on K8s Environments.

   ```shell
   kubectl apply -f lacework-k8s.yaml -n lacework
   ```

After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
Repeat the above steps for each Kubernetes cluster.
The config.json file is embedded in the lacework-cfg-k8s.yaml file. To customize FIM or add tags in a Kubernetes environment, edit the configuration section of the YAML file and push the revised lacework-cfg-k8s.yaml file to the cluster using the following command:

```shell
kubectl replace -f lacework-cfg-k8s.yaml -n lacework
```

Note: Lacework always recommends assigning a namespace to the DaemonSet config.
DaemonSet Deployment Using a Secret​
Download the Kubernetes orchestration file (lacework-k8s.yaml) using the instructions in Create Agent Access Tokens and Download Linux Agent Installers.
1. Edit the lacework-k8s.yaml file and make the following changes:

   - Change `configMap` to `secret`
   - Change `name` to `secretName`

2. Use the following command in the kubectl command line interface to create the Lacework access token secret. In the command, replace:

   - `YOUR_ACCESS_TOKEN` with the agent access token. For more information, see Create Agent Access Tokens and Download Linux Agent Installers.
   - `CLUSTER_NAME` with your Kubernetes cluster name.
   - `SERVER_URL` with your Lacework agent server URL. For more information, see Agent Server URL.

   ```shell
   kubectl create secret generic lacework-config --from-literal config.json='{"tokens":{"AccessToken":"YOUR_ACCESS_TOKEN"}, "serverurl":"SERVER_URL", "tags":{"Env":"k8s", "KubernetesCluster":"CLUSTER_NAME"}}' --from-literal syscall_config.yaml=""
   ```

   You should see the message `secret/lacework-config created` if the secret is created successfully.

3. Instruct the Kubernetes orchestrator to deploy an agent on all nodes in the cluster, including the master. To change the CPU and memory limits, see Change Agent Resource Installation Limits on K8s Environments.

   ```shell
   kubectl create -f lacework-k8s.yaml
   ```

   You should see the message `daemonset.apps/lacework-agent created` if the DaemonSet is created successfully.

After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
Repeat the above steps for each Kubernetes cluster.
To customize FIM or add tags in a Kubernetes environment, edit the configuration section of the YAML file and push the revised file to the cluster using the following commands:

```shell
kubectl replace -f lacework-k8s.yaml
kubectl create namespace lacework
kubectl apply -f lacework-k8s.yaml -n lacework
```

You can confirm the DaemonSet status using the following command:

```shell
kubectl get ds
```

or

```shell
kubectl get pods --all-namespaces | grep lacework-agent
```
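As a sketch of what to look for in that status output, the DESIRED and READY columns of `kubectl get ds` should match once the rollout completes. The sample line below is hypothetical output, parsed with awk:

```shell
# Hypothetical one-line output from `kubectl get ds -n lacework`:
# NAME            DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE
SAMPLE='lacework-agent   3   3   3   3   3'

DESIRED=$(echo "$SAMPLE" | awk '{print $2}')
READY=$(echo "$SAMPLE" | awk '{print $4}')

# The agent is running everywhere only when every desired pod is ready.
if [ "$DESIRED" = "$READY" ]; then
  echo "DaemonSet fully rolled out on $READY nodes"
fi
```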
Deploy DaemonSet Using Terraform​
Lacework maintains the terraform-kubernetes-agent module to create a Secret and DaemonSet for deploying the Lacework Datacollector Agent in a Kubernetes cluster.
If you are new to the Lacework Terraform Provider or Lacework Terraform Modules, read the Terraform for Lacework Overview article to learn the basics on how to configure the provider and more.
This topic assumes familiarity with the Terraform Provider for Kubernetes maintained by Hashicorp on the Terraform Registry.
DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework. You can use the DaemonSet method to deploy Lacework onto any Kubernetes cluster, including hosted versions such as EKS, AKS, and GKE.
Run Terraform​
The following code snippet creates a Lacework Agent Access token with Terraform and then deploys the DaemonSet to the Kubernetes cluster being managed with Terraform.
Before running this code, ensure that the following settings match the configurations for your deployment.
- `config_path`
- `config_context`
- `lacework_server_url`
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.0"
}
lacework = {
source = "lacework/lacework"
version = "~> 1.0"
}
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "my-context"
}
provider "lacework" {
# Configuration options
}
resource "lacework_agent_access_token" "k8s" {
name = "prod"
description = "k8s deployment for production env"
}
module "lacework_k8s_datacollector" {
source = "lacework/agent/kubernetes"
version = "~> 1.0"
lacework_access_token = lacework_agent_access_token.k8s.token
# For deployments in Europe, overwrite the Lacework agent server URL
#lacework_server_url = "https://api.fra.lacework.net"
# For deployments in Australia and New Zealand, overwrite the Lacework agent server URL
#lacework_server_url = "https://auprodn1.agent.lacework.net"
# Add the lacework_agent_tag argument to retrieve the cluster name in the Kubernetes Dossier
lacework_agent_tags = {KubernetesCluster: "Name of the Kubernetes cluster"}
pod_cpu_request = "200m"
pod_mem_request = "512Mi"
pod_cpu_limit = "500m"
pod_mem_limit = "1024Mi"
}
Due to upstream breaking changes, version 1.0+ of this module discontinued support for version 1.x of the hashicorp/kubernetes provider. If 1.x of the hashicorp/kubernetes provider is required, pin this module's version to ~> 0.1.
1. Open an editor and create a file called `main.tf`.
2. Copy/paste the code snippet above into the `main.tf` file and save the file.
3. Run `terraform plan` and review the changes that will be applied.
4. Once satisfied with the changes that will be applied, run `terraform apply -auto-approve` to execute Terraform.
Validate the Changes​
After Terraform executes, you can use kubectl to validate the DaemonSet is deployed successfully:
kubectl get pods -l name=lacework -o=wide --all-namespaces
Install in gVisor on Kubernetes​
gVisor is an application kernel written in Go that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. gVisor provides a virtualized environment in order to sandbox containers. The system interfaces normally implemented by the host kernel are moved into a distinct, per-sandbox application kernel in order to minimize the risk of a container escape exploit.
Install in gVisor on a Kubernetes Cluster Using GKE Sandbox​
Set up gVisor on a Kubernetes cluster using GKE sandbox using the steps described in Enabling GKE Sandbox.
After all nodes are running correctly, create a Lacework agent and Google microservices.
Use the following steps to create the Lacework agent on the cluster:
Download the lacework-cfg-k8s.yaml and lacework-k8s.yaml files.
Update the Daemonset to include proper NodeAffinity and Toleration as follows:
template:
metadata:
labels:
name: lacework
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: sandbox.gke.io/runtime
operator: In
values:
- gvisor
tolerations:
- effect: NoSchedule
key: sandbox.gke.io/runtime
operator: Equal
value: gvisor
- key: node-role.kubernetes.io/master
effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule

Go to your home directory and run:

```shell
sudo mkdir lw
cd lw
```

Create the lacework-cfg-k8s.yaml and lacework-k8s.yaml files in the lw directory.

Run these commands to create the Lacework agent:

```shell
kubectl create namespace lacework
kubectl create -f lacework-cfg-k8s.yaml -n lacework
kubectl apply -f lacework-k8s.yaml -n lacework
kubectl get ds -n lacework
```

The last command shows the DaemonSets created.
The Lacework agent pod is now deployed and should be up and running. To confirm, run this command:
kubectl get pods -n lacework -o wide
After the Lacework agent pod is running, deploy microservices on the cluster using the steps in Migrating a Monolithic Website to Microservices on Google Kubernetes Engine.
Verify your configuration using this command:
kubectl get pods
Install in gVisor on a Kubernetes Cluster Using containerd​
Launch any GCP instance (such as an Ubuntu instance).
Configure the security group of the GCP instance to allow traffic only to your IP address.
Install gCloud on the instance and create a cluster with gCloud.
Configure containerd using steps in Containerd Configuration.
After successful installation of containerd, configure containerd and update `/etc/containerd/config.toml`. Ensure `containerd-shim-runsc-v1` is in `${PATH}` or in the same directory as the containerd binary.

After successful setup of containerd, set up the Lacework agent and microservices pods.
- Go to your home directory
- Run these commands:

  ```shell
  sudo mkdir lw
  cd lw
  ```

- Create the lacework-cfg-k8s.yaml and lacework-k8s.yaml files in the lw directory.
- Run these commands to create the Lacework agent:

  ```shell
  kubectl create namespace lacework
  kubectl create -f lacework-cfg-k8s.yaml -n lacework
  kubectl apply -f lacework-k8s.yaml -n lacework
  kubectl get ds -n lacework
  ```

  The last command shows the DaemonSets created.
- The Lacework agent pod is now deployed and should be up and running. To confirm, run this command:

  ```shell
  kubectl get pods -n lacework -o wide
  ```
After the Lacework pod is running, deploy microservices on the cluster.
Verify your configuration using this command:
kubectl get pods
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| User and Entity Behavior Analytics (UEBA) | Workload Anomaly Detection Process Dashboard and Polygraph Network Dashboard and Polygraph Filesystem Dashboard and Polygraph | Windows Agent |
| Vulnerability Management | Container Vulnerability Management | Windows Agent |
Install the Kubernetes Agent Daemonset on Windows Nodes 📎
You can install the Windows agent on an Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service (EKS) cluster with a Helm chart. The Helm chart enables you to automatically deploy a Kubernetes pod containing the agent onto every node in your cluster.
The Windows agent running on AKS and EKS clusters currently does not support host vulnerability assessment.
Prerequisites​
An AKS or EKS cluster with Windows Server nodes that meet the system requirements specified in Supported Kubernetes Environments.
Lacework Windows agent version 1.5 or later for AKS.
Lacework Windows agent version 1.4 or later for EKS.
Ensure that the agent has access to tags in your AWS account. For more information, see Configure Access to Tags in AWS.
Note: The agent can automatically access tags in Microsoft Azure. Hence, no special configuration is required for Microsoft Azure.
Install the following on your machine:
- Docker
- Helm
- kubectl command-line tool
- For AKS, install:
- For EKS, install:
Supported Kubernetes Environments​
| Environment | Environment Name / Version |
|---|---|
| Kubernetes | Version 1.23, 1.24 |
| K8s orchestrator | Azure Kubernetes Service (AKS) Amazon Elastic Kubernetes Service (EKS) |
| Supported Windows OS for Nodes | |
| Container runtime | containerd version 1.6 or later |
| Container isolation mode | Process isolation mode Note: Hyper-V isolation mode is not supported. |
| Helm | Version 3.8.x, 3.9.x, 3.10.x |
Install Agent with a Helm Chart​
To install the agent with a Helm chart:
Add the Lacework Helm Charts repository:
```shell
helm repo add lacework https://lacework.github.io/helm-charts/
```

Do the following:

- If you are using AKS, run the `az login` command to use the Azure CLI with your Azure account.
- If you are using EKS, run the `aws configure` command to use the AWS CLI with your AWS account. Ensure that you have connected to the AWS region that contains your EKS cluster.
Use Helm to install the agent.
If you are using a tenant located in North America, run the following command:

```shell
helm upgrade --install lw-agent lacework/lacework-agent-windows \
  --set windowsAgent.agentConfig.accessToken=LACEWORK_AGENT_TOKEN \
  --set windowsAgent.agentConfig.kubernetesCluster=CLUSTER_NAME
```

If you are using a tenant located outside of North America, run the following command:

```shell
helm upgrade --install lw-agent lacework/lacework-agent-windows \
  --set windowsAgent.agentConfig.accessToken=LACEWORK_AGENT_TOKEN \
  --set windowsAgent.agentConfig.serverUrl=LACEWORK_SERVER_URL \
  --set windowsAgent.agentConfig.kubernetesCluster=CLUSTER_NAME
```

- Replace `LACEWORK_AGENT_TOKEN` with your agent access token. For more information, see Obtain an Access Token for the Windows Agent.
- Replace `LACEWORK_SERVER_URL` with your Lacework agent server URL. For more information, see serverurl Property.
- Replace `CLUSTER_NAME` with the name of your cluster.

Verify that the pods for the Windows agent have the Running status:

```shell
kubectl get pods
```

Confirm that the Windows agent is installed successfully:

```shell
kubectl logs POD_NAME | grep 'MSI Installation successful'
```

Where `POD_NAME` is the name of your agent pod.
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents. You can also view your cluster in the Lacework Console under Resources > Kubernetes.
Configure Agent with a Helm Chart​
You can do one of the following to configure the agent with a Helm chart:
- Use command-line options for the `helm install` or `helm upgrade` command.
- Modify the parameters in the `values.yaml` file in your Helm chart to configure the agent, and use the `helm install` or `helm upgrade` command to apply the configuration.
See the following sections for more information about the command-line options and values.yaml parameters that you can use to configure the agent.
Specify Lacework Agent Access Token​
You can specify your Lacework agent access token in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.accessToken=AGENT_ACCESS_TOKEN`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  accessToken: AGENT_ACCESS_TOKEN
  ```

For more information, see Obtain an Access Token for the Windows Agent.
Specify Lacework Agent Server URL​
You can specify your Lacework agent server URL in one of the following ways. For more information, see serverurl Property.
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.serverUrl=LACEWORK_SERVER_URL`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  serverUrl: LACEWORK_SERVER_URL
  ```
Specify CPU Requests and Limits​​
CPU requests specify the minimum CPU resources available to containers. CPU limits specify the maximum CPU resources available to containers. For more information, see Resource Management for Pods and Containers.
The default CPU request is 200m. The default CPU limit is 500m.
You can specify the CPU requests and limits in one of the following ways:
- Use the following options with the `helm install` or `helm upgrade` command:

  ```
  --set windowsAgent.resources.requests.cpu=300m
  --set windowsAgent.resources.limits.cpu=500m
  ```

- Modify the `values.yaml` file in your Helm chart and add data similar to the following:

  ```yaml
  resources:
    requests:
      cpu: 300m
    limits:
      cpu: 500m
  ```
The CPU requests and limits are currently not applied to the Windows agent.
Specify Memory Requests and Limits​​
Memory requests specify the minimum memory available to containers. Memory limits specify the maximum memory available to containers. For more information, see Resource Management for Pods and Containers.
The default memory request is 64Mi. The default memory limit is 1024Mi.
You can specify the memory requests and limits in one of the following ways:
- Use the following options with the `helm install` or `helm upgrade` command:

  ```
  --set windowsAgent.resources.requests.memory=384Mi
  --set windowsAgent.resources.limits.memory=512Mi
  ```

- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  resources:
    requests:
      memory: 384Mi
    limits:
      memory: 512Mi
  ```
The memory requests and limits are currently not applied to the Windows agent.
Specify Image Pull Secrets​​
Image pull secrets enable fetching the Lacework agent image from private repositories and/or allow bypassing rate limits.
You can configure image pull secrets in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.image.imagePullSecrets.name=<registrySecret>`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  imagePullSecrets:
    - name: <registrySecret>
  ```

Where `<registrySecret>` is the name of the secret that contains the credentials necessary to fetch the Lacework Windows agent image.
Specify a Proxy URL​
Proxy servers allow you to specify a URL to route agent traffic.
You can set the proxy server URL in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.proxyUrl=LACEWORK_PROXY_URL`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  proxyUrl: LACEWORK_PROXY_URL
  ```
For more information, see Use a Network Proxy for Windows Agent Traffic.
Configure File Integrity Monitoring Properties​​
Disable or Enable FIM​​
File Integrity Monitoring (FIM) is enabled by default. You can disable FIM in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.enable=false`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    enable: false
  ```
If FIM is disabled, you can enable it in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.enable=true`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    enable: true
  ```
For more information, see File Integrity Monitoring for Windows Overview.
Override Default File Paths for FIM
You can override default file paths for FIM in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.filePath={C:\\users,C:\\data}`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    filePath: [C:\\users,C:\\data]
  ```
For more information, see filepath Property.
Specify the File Paths to Ignore​ for FIM​
You can specify file paths to ignore for FIM in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.fileIgnore={C:\\backup,C:\\test}`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    fileIgnore: [C:\\backup,C:\\test]
  ```
For more information, see fileignore Property.
Specify the FIM Scan Start Time​​
You can specify a start time for the daily FIM scan. For example, to start the FIM scan at 7:30 PM every day, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.runAt=19:30`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    runAt: 19:30
  ```
For more information, see runat Property.
Override the Default Maximum Number of Files to Scan​
By default, Lacework runs the FIM scan on up to 500000 files.
You can increase or decrease the maximum number of files to scan. For example, to limit the FIM scan to 20000 files, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.maxscanfiles=20000`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    maxscanfiles: 20000
  ```
For more information, see maxscanfiles Property.
Prevent File Access Timestamp from Being Used in Hash Computation​​
You can prevent the file access timestamp from being used in one of the following ways:

- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.noAtime=true`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    noAtime: true
  ```

Alternatively, you can allow the file access timestamp to be used in one of the following ways:

- Use the following option with the `helm install` or `helm upgrade` command: `--set windowsAgent.agentConfig.fim.noAtime=false`
- Modify the `values.yaml` file and add data similar to the following:

  ```yaml
  fim:
    noAtime: false
  ```
For more information, see noatime Property.
Disable or Enable Windows Registry Monitoring
Windows registry monitoring is enabled by default. You can disable registry monitoring in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.registry=disable

- Modify the `values.yaml` file and add data similar to the following:

      registry: disable

If registry monitoring is disabled, you can re-enable it in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.registry=enable

- Modify the `values.yaml` file and add data similar to the following:

      registry: enable

For more information, see Monitor Windows Registry Changes.
Specify Tolerations for Agent Pods on Kubernetes Clusters
You can specify tolerations for agent pods on Kubernetes clusters. For example, to schedule the agent pods on a node named myNode1, do the following:
- Add a taint to the `myNode1` node:

      kubectl taint nodes myNode1 key1=green:NoSchedule

- Do one of the following to specify a toleration that schedules the agent pods on the `myNode1` node:
  - Use the following options with the `helm install` or `helm upgrade` command:

        --set "windowsAgent.tolerations[0].key=key1" \
        --set "windowsAgent.tolerations[0].operator=Equal" \
        --set "windowsAgent.tolerations[0].value=green" \
        --set "windowsAgent.tolerations[0].effect=NoSchedule"

  - Modify the `values.yaml` file and add data similar to the following:

        tolerations:
          - key: key1
            operator: Equal
            value: green
            effect: NoSchedule
Disable or Enable Automatic Upgrade of the Agent
By default, the Windows agent is automatically upgraded when a new version is available. You can disable automatic upgrade in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.autoUpgrade=disabled

- Modify the `values.yaml` file and add data similar to the following:

      autoUpgrade: disabled

If automatic upgrade is disabled, you can re-enable it in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.autoUpgrade=enable

- Modify the `values.yaml` file and add data similar to the following:

      autoUpgrade: enable
Specify Tags to Categorize Agents
You can use the tags option to specify name/value tags to categorize your agents. For more information, see Adding Agent Tags.
To specify tags, do one of the following:
- Use the following options with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.tags.<tagname1>=<value1>
      --set windowsAgent.agentConfig.tags.<tagname2>=<value2>

  For example:

      --set windowsAgent.agentConfig.tags.location=austin
      --set windowsAgent.agentConfig.tags.owner=pete

- Modify the `values.yaml` file and add data similar to the following:

      tags:
        <tagname1>: <value1>
        <tagname2>: <value2>

  For example:

      tags:
        location: austin
        owner: pete
Specify Custom Annotations
Annotations are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionality. For more information, see Annotations.
You can set annotations in one of the following ways:
- Use the following options with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.annotations.<key1>=<value1>
      --set windowsAgent.agentConfig.annotations.<key2>=<value2>

  For example:

      --set windowsAgent.agentConfig.annotations.owner=pete
      --set windowsAgent.agentConfig.annotations.repository=https://github.com/lacework-test

- Modify the `values.yaml` file and add data similar to the following:

      annotations:
        <key1>: <value1>
        <key2>: <value2>

  For example:

      annotations:
        owner: pete
        repository: https://github.com/lacework-test
Specify Custom Labels
Similar to custom annotations, custom labels are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionality. For more information, see Labels and Selectors.
You can set labels in one of the following ways:
- Use the following options with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.labels.<key1>=<value1>
      --set windowsAgent.agentConfig.labels.<key2>=<value2>

  For example:

      --set windowsAgent.agentConfig.labels.release=stable
      --set windowsAgent.agentConfig.labels.environment=production

- Modify the `values.yaml` file and add data similar to the following:

      labels:
        <key1>: <value1>
        <key2>: <value2>

  For example:

      labels:
        release: stable
        environment: production
Specify the Cluster Name
If your cluster does not appear in the Lacework Console under Resources > Kubernetes after the agent is installed successfully, you can specify the cluster name using the kubernetesCluster option.
You can specify the cluster name in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.kubernetesCluster=CLUSTER_NAME

- Modify the `values.yaml` file and add data similar to the following:

      kubernetesCluster: CLUSTER_NAME
Specify a Name for Your Kubernetes Environment
You can specify a user-friendly name for your Kubernetes environment, for example, K8s_production. The name you specify is displayed as the value for the Env tag in the Lacework Console. For more information, see Add Agent Tags.
You can specify a name for your Kubernetes environment in one of the following ways:
- Use the following option with the `helm install` or `helm upgrade` command:

      --set windowsAgent.agentConfig.env=KUBERNETES_ENVIRONMENT_NAME

- Modify the `values.yaml` file and add data similar to the following:

      env: KUBERNETES_ENVIRONMENT_NAME
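The per-option snippets above can be collected into a single `values.yaml`. The sketch below assumes these keys nest under `windowsAgent.agentConfig`, as the corresponding `--set` paths suggest; all values are illustrative:

```shell
# Sketch: assemble one values.yaml covering several of the options above.
# Assumption: keys nest under windowsAgent.agentConfig, mirroring the --set paths.
# All values are examples only; substitute your own.
cat > values.yaml <<'EOF'
windowsAgent:
  agentConfig:
    autoUpgrade: disabled
    kubernetesCluster: prd01
    env: K8s_production
    registry: enable
    tags:
      location: austin
      owner: pete
    fim:
      runAt: "19:30"
      maxscanfiles: 20000
      fileIgnore: [C:\\backup,C:\\test]
EOF
# Pass the file to Helm with: helm upgrade --install ... --values values.yaml
```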
Uninstall Agent with a Helm Chart
To uninstall the agent with a Helm chart:
- Open a terminal and navigate to the `helm_chart` directory that contains the Helm chart.
- Do the following:
  - If you are using AKS, run the `az login` command to use the Azure CLI with your Azure account.
  - If you are using EKS, run the `aws configure` command to use the AWS CLI with your AWS account. Ensure that you have connected to the AWS region that contains your EKS cluster.
- Use Helm to uninstall the agent:

      helm uninstall lw-agent

- Verify that the pods for the Windows agent have been terminated:

      kubectl get pods
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| Kubernetes Posture Management (KSPM) | Kubernetes Compliance Dashboard and Reports | EKS Compliance |
EKS Compliance setup using Terraform 📎
Overview​
This article describes how to integrate Lacework with your Amazon Elastic Kubernetes Service (EKS) cluster using Helm or Terraform.
Lacework integrates with Amazon EKS to monitor configuration compliance of your cluster resources.
Optionally, Lacework can also monitor workload security on your Amazon EKS cluster. This is provided as an additional option during the installation steps in this article.
If you only want to monitor workload security on your EKS clusters (rather than configuration compliance), see Deploy Linux Agent on Kubernetes.
Supported Versions​
See Deploy on Kubernetes (Supported Versions) for the operating systems, Kubernetes, and Helm versions that are supported for Amazon EKS Compliance.
EKS Fargate is not supported for this type of integration.
EKS Compliance Integration Components​
Lacework uses three components to collect data for EKS Compliance integrations:
Node Collector - collects data on each Kubernetes node.
The Node Collector is an independent component that shares the same installation journey as the Lacework Agent. It has a separate configuration to allow operation on EKS nodes.
Info: If the Lacework Agent is already installed on the cluster nodes, the installation will update the Agent configuration map to enable the Node Collector functionality. It may also upgrade the Lacework Agent to the latest available release. The minimum agent version for EKS Compliance functionality is v6.2.
This component is installed on every Kubernetes node in the cluster.
Node data is collected and sent to Lacework every hour.
The Node Collector will collect data relating to workload security if you choose to enable it during the installation steps.
Cluster Collector - collects Kubernetes cluster data from the Kubernetes API server.
- This component is installed on one container per cluster.
- The container runs as a non-root user.
- It retrieves AWS instance metadata.
- Cluster data is collected and sent to Lacework every 24 hours.
Cloud Collector (through Cloud Provider Integration) - collects data from cloud provider end points.
- This is already provided through the AWS Configuration integration type. See Integrate Lacework with AWS to set this up (if you haven't already done so).
- The cloud collection occurs every 24 hours at the scheduled time in the Lacework Console (under Settings > Configuration: General > Resource Management Collection Schedule).
The EKS Compliance data is complete and available for assessment once all 3 collections have occurred at least once.
The node and cluster data is sent to Lacework within 2 hours of the collectors being installed on a cluster. Once the cloud collection has occurred, data will be visible in the Lacework platform.
Prerequisites​
AWS Configuration integration has been configured and is working for your account or organization.
Lacework Linux Agent - Access Token has been created.
- Use an existing agent token or create a new one for this integration.
- If you only want to monitor compliance configuration, it is recommended that you create a new access token. You can then disable or enable the Agent token for this integration without affecting other integrations that use this token.
- If you want to monitor both compliance configuration and workload security, you may want to use an existing access token. For example, if you have an Agent token in use for workload security on Kubernetes clusters, it may be better to combine this integration with the same access token.

Note: You only need to generate the access token, as the Agent is installed during the installation steps.
Installation Steps​
Choose one of the following options to integrate Lacework with your EKS cluster:
Option 1: Install using Helm​
Follow these steps to install the Node and Cluster collectors on your EKS cluster.
Add the Lacework Helm Charts repository:

    helm repo add lacework https://lacework.github.io/helm-charts/

Choose one of the following options to install the necessary components on your EKS cluster:

Tip: Add `--debug` to this command to enter debug mode: `helm upgrade --debug --install --create-namespace...`

Configuration compliance integration only (template with Workload Security disabled):

    helm upgrade --install --create-namespace --namespace lacework \
      --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
      --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
      --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
      --set laceworkConfig.datacollector=disable \
      --set clusterAgent.enable=True \
      --set clusterAgent.image.repository=lacework/k8scollector \
      --set clusterAgent.clusterType=${KUBERNETES_CLUSTER_TYPE} \
      --set clusterAgent.clusterRegion=${KUBERNETES_CLUSTER_REGION} \
      --set image.repository=lacework/datacollector \
      lacework-agent lacework/lacework-agent

Adjust the parameter values to match your environment; see Configuration Parameters for guidance.
Configuration compliance and Workload Security integration (template with Workload Security enabled):

Tip: Use this option if you already have the Lacework Agent installed on your cluster nodes.

    helm upgrade --install --create-namespace --namespace lacework \
      --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
      --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
      --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
      --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
      --set clusterAgent.enable=True \
      --set clusterAgent.image.repository=lacework/k8scollector \
      --set clusterAgent.clusterType=${KUBERNETES_CLUSTER_TYPE} \
      --set clusterAgent.clusterRegion=${KUBERNETES_CLUSTER_REGION} \
      --set image.repository=lacework/datacollector \
      lacework-agent lacework/lacework-agent

Adjust the parameter values to match your environment; see Configuration Parameters for guidance.
Display the pods for verification. Choose one of the following options:
- Run the following `kubectl` command:

      kubectl get pods -n lacework -o wide

- Go to Resources > Kubernetes in the Lacework Console. In the Behavior section, click Pod network and then Pod activity.

All Node Collector and Cluster Collector pods have a naming convention that includes `lacework-agent-*` and `lacework-agent-cluster-*` respectively.
Configuration Parameters​
Required Parameters​
Adjust the following values to match your environment:
| Value | Description | Example(s) |
|---|---|---|
${LACEWORK_SERVER_URL} | Your Lacework Agent server URL. | https://api.lacework.net https://aprodus2.agent.lacework.net https://api.fra.lacework.net https://auprodn1.agent.lacework.net |
${LACEWORK_AGENT_TOKEN} | Your Lacework Agent access token. | 0123456789abc... |
${KUBERNETES_CLUSTER_NAME} | Provide your EKS cluster name and ensure it matches the name defined in AWS. | prd01 |
${KUBERNETES_ENVIRONMENT_NAME} | Your EKS environment name (this will be shown in the Lacework Console). Only required for Workload Security integrations. | Production |
${KUBERNETES_CLUSTER_TYPE} | The Kubernetes cluster type. NOTE: For EKS integrations, the cluster type must be written as eks in lower case. | eks |
${KUBERNETES_CLUSTER_REGION} | The AWS Region of the EKS cluster. | us-west eu-west-1 |
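The required parameters in the table above map naturally onto shell variables that can be exported before running either Helm template. A minimal sketch (all values are illustrative, drawn from the example column; substitute your own account details):

```shell
# Export the placeholder values referenced in the Helm templates above.
# All values here are examples only; replace with your own environment details.
export LACEWORK_SERVER_URL="https://api.lacework.net"
export LACEWORK_AGENT_TOKEN="0123456789abc"
export KUBERNETES_CLUSTER_NAME="prd01"
export KUBERNETES_ENVIRONMENT_NAME="Production"   # only needed for Workload Security
export KUBERNETES_CLUSTER_TYPE="eks"              # must be lower case for EKS
export KUBERNETES_CLUSTER_REGION="eu-west-1"
```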
Optional Parameters​
The following parameters are optional:
| Parameter | Description | Example(s) |
|---|---|---|
clusterAgent.image.tag | Specify a Lacework Agent tag suitable for your cluster. The default is latest when this parameter is omitted. | 5.6.0.8352-amd64 |
image.tag | Specify a Lacework Agent tag suitable for your cluster. The default is latest when this parameter is omitted. | 5.6.0.8352-amd64 |
Add these parameters when running the installation command:
helm upgrade --install --create-namespace --namespace lacework \
...
--set clusterAgent.image.tag=5.6.0.8352-amd64 \
--set image.tag=5.6.0.8352-amd64 \
...
See Helm Configuration Options for additional parameters that can also be set using Helm.
Option 2: Install using Terraform​
Use the Lacework terraform-kubernetes-agent module to create a Secret and DaemonSet and deploy the Node and Cluster collectors in your EKS cluster.
DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework.
If you are new to the Lacework Terraform Provider or Lacework Terraform Modules, read the Terraform for Lacework Overview article to learn the basics on how to configure the provider and more.
This topic assumes familiarity with the Terraform Provider for Kubernetes maintained by Hashicorp on the Terraform Registry.
Run Terraform​
The following code snippets deploy the DaemonSet to the Kubernetes cluster being managed with Terraform.
Before running this code, ensure that the following settings match the configurations for your deployment:
- `config_path`
- `config_context`
- `lacework_access_token`
- `lacework_server_url`
- `lacework_cluster_name`
- `lacework_cluster_exclusive`
  - `false` = Configuration Compliance and Workload Security integration. Set to `false` or omit this variable if you already have the Lacework Agent installed on your cluster nodes.
  - `true` = Configuration Compliance integration only.
    terraform {
      required_providers {
        kubernetes = {
          source  = "hashicorp/kubernetes"
          version = "~> 2.0"
        }
        lacework = {
          source  = "lacework/lacework"
          version = "~> 1.0"
        }
      }
    }

    provider "kubernetes" {
      config_path    = "~/.kube/config"
      config_context = "my-context"
    }

    data "aws_region" "current" {}

    # Use the access token resource below if you are intending
    # to generate a new access token for this integration.
    resource "lacework_agent_access_token" "k8s" {
      name = "prod"
    }

    # Use the data entry below if you are choosing to use an
    # existing access token for this integration.
    data "lacework_agent_access_token" "k8s" {
      name = "k8s-deployments"
    }

    module "lacework_k8s_datacollector" {
      source  = "lacework/agent/kubernetes"
      version = "~> 2.0"

      # Use one of the lacework_access_token options below depending
      # on whether you are generating a new token or using an existing one.
      # Option 1: Generate a new access token
      #lacework_access_token = lacework_agent_access_token.k8s.token
      # Option 2: Use an existing access token (a reference, so no quotes)
      #lacework_access_token = data.lacework_agent_access_token.k8s.token

      # The lacework_server_url property is optional if your Lacework tenant
      # is deployed in the US, but mandatory for non-US tenants.
      # See https://docs.lacework.net/onboarding/agent-server-url for endpoints.
      #lacework_server_url = "<agent-server-url>"

      # Provide your EKS cluster name and ensure it matches the name defined in AWS.
      # https://docs.aws.amazon.com/cli/latest/reference/eks/list-clusters.html#examples
      lacework_cluster_name = "My-EKS-Cluster"

      # Set lacework_cluster_exclusive to true if you only want a Configuration Compliance integration.
      # Default is false.
      #lacework_cluster_exclusive = true

      enable_cluster_agent    = true
      lacework_cluster_region = data.aws_region.current.name
      lacework_cluster_type   = "eks"
    }
- Open an editor and create a file called `main.tf`.
- Copy and paste the code snippet above into the `main.tf` file and save the file.
- Run `terraform plan` and review the changes that will be applied.
- Once satisfied with the changes, run `terraform apply -auto-approve` to execute Terraform.
Validate the Changes​
After Terraform executes, you can use kubectl or check the Lacework Console to validate that the DaemonSet deployed successfully:
- Run the following `kubectl` command:

      kubectl get pods -n lacework -o wide

- Go to Resources > Kubernetes in the Lacework Console. In the Behavior section, click Pod network and then Pod activity.

All Node Collector and Cluster Collector pods have a naming convention that includes `lacework-agent-*` and `lacework-agent-cluster-*` respectively.
Next Steps​
See Kubernetes Benchmarks for details on how to check whether your resources are compliant with CIS and other regulatory benchmarks.
| Use Cases | Lacework Feature(s) | Data Source |
|---|---|---|
| Vulnerability Management | Container Vulnerability Management | Container Vulnerability Details |
Install the Kubernetes Admission Controller 📎
Installation Methods
Choose one of the following installation methods for the Proxy Scanner and Webhook:
- Option 1: Install the Webhook and Proxy Scanner in a Combined Helm Chart (recommended). Use this method to install both the Webhook and Proxy Scanner on the same host as the Kubernetes Admission Controller.
- Option 2: Install the Proxy Scanner and Webhook in Separate Helm Charts. Use this method to install the Webhook and Proxy Scanner on separate hosts (or if you already have the Proxy Scanner installed).
- Option 3: Download and Install the Helm Charts Manually. This method lets you download templates for both the Proxy Scanner and Webhook configuration (values.yaml) and configure them manually before installation.
Helm Charts help you define, install, and upgrade Kubernetes applications.
Create TLS/SSL Certificates
Secure communication between the Admission Controller Webhook and the Kubernetes API server by providing Base64 encoded certificates for the Webhook. Choose from one of the following options:
- Option 1: Provide your own certificates.
- Option 2: Generate the certificates using a script provided in the Lacework Helm Charts repository.
You can also choose to secure communication between the Proxy Scanner and Admission Controller Webhook by providing Base64 encoded certificates for the Proxy Scanner. This is optional.
The lacework namespace should be used when installing the Admission Controller Webhook and Proxy Scanner.
If integrating with EKS Fargate, make sure the lacework namespace is added to your Fargate profile:

    eksctl create fargateprofile \
      --cluster <fargate_cluster_name> \
      --name <fargate_profile_name> \
      --namespace lacework

If you want to deploy in a different namespace, ensure that the domain of the Subject Alternative Names in your certificates is adjusted accordingly.
Option 1: Provide Your Own Certificates
Provide your own Base64 encoded certificates for the Admission Controller Webhook.
- Create these files for the following properties or values during installation:

  | File to use | Entries in values.yaml or Helm entry |
  |---|---|
  | Webhook Root CA | <base64_webhook_ca.crt> |
  | Webhook server certificate | <base64_webhook.crt> |
  | Webhook server key | <base64_webhook.key> |

  The Kubernetes documentation provides guidance for creating certificates manually.

- Ensure that you include the following DNS names for the Admission Controller Webhook:

      [ alt_names ]
      DNS.1 = lacework-admission-controller.lacework.svc
      DNS.2 = lacework-admission-controller.lacework.svc.cluster.local
      DNS.3 = admission.lacework-dev.svc
      DNS.4 = admission.lacework-dev.svc.cluster.local
      DNS.5 = lacework-proxy-scanner.lacework.svc
      DNS.6 = lacework-proxy-scanner.lacework.svc.cluster.local
      DNS.7 = lacework-proxy-scanner.lacework

  The DNS.5, DNS.6, and DNS.7 entries are only required if you are using the same certificates for traffic between the Admission Controller and Proxy Scanner.

- Encode the certificates in Base64 format before using them:

      cat <webhook_ca.crt> | base64 | tr -d '\n' > <base64_webhook_ca.crt>
      cat <webhook.crt> | base64 | tr -d '\n' > <base64_webhook.crt>
      cat <webhook.key> | base64 | tr -d '\n' > <base64_webhook.key>
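As a quick sanity check, the encoding pipeline above can be exercised on a throwaway file (the file name mirrors the docs; the contents are obviously not a real certificate):

```shell
# Demonstration of the Base64 encoding step on a dummy file (not a real certificate).
printf 'dummy-cert' > webhook.crt
base64 < webhook.crt | tr -d '\n' > base64_webhook.crt
# The result is a single line with no trailing newline, which is the form the
# Helm --set entries expect when substituted with `cat base64_webhook.crt`.
cat base64_webhook.crt   # prints ZHVtbXktY2VydA==
```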
Proxy Scanner Certificates (Optional)
Provide your own Base64 encoded certificates for the Proxy Scanner.
- Create these files for the following properties or values during installation:

  | File to use | Entries in values.yaml or Helm entry |
  |---|---|
  | Proxy Scanner Root CA | <base64_scanner_ca.crt> |
  | Proxy Scanner server certificate | <base64_scanner.crt> |
  | Proxy Scanner server key | <base64_scanner.key> |

- Ensure that you include the following DNS names for the Proxy Scanner:

      [ alt_names ]
      DNS.1 = lacework-admission-controller.lacework.svc
      DNS.2 = lacework-admission-controller.lacework.svc.cluster.local
      DNS.3 = admission.lacework-dev.svc
      DNS.4 = admission.lacework-dev.svc.cluster.local
      DNS.5 = lacework-proxy-scanner.lacework.svc
      DNS.6 = lacework-proxy-scanner.lacework.svc.cluster.local
      DNS.7 = lacework-proxy-scanner.lacework

- Encode the certificates in Base64 format before using them:

      cat <scanner_ca.crt> | base64 | tr -d '\n' > <base64_scanner_ca.crt>
      cat <scanner.crt> | base64 | tr -d '\n' > <base64_scanner.crt>
      cat <scanner.key> | base64 | tr -d '\n' > <base64_scanner.key>
Option 2: Generate Certificates Using the Script in the Lacework Helm Charts Repository
- Get the latest copy of the Admission Controller from the Lacework Helm Charts repository.

  If using Helm, you can pull the chart directly with `helm pull lacework/admission-controller`. The file will be in the format admission-controller-<version>.tgz.

- Extract the script:

      tar -xvf admission-controller-*.tgz admission-controller/generate-certs.sh

  This extracts the script to a subdirectory named 'admission-controller' in your current working directory.

- Run the script to generate the certificates:

      cd admission-controller
      chmod u+x generate-certs.sh
      ./generate-certs.sh

- Use these generated files for the following properties or values during installation:

  | File to use | Entries in values.yaml or Helm entry |
  |---|---|
  | ca.crt_b64 | <base64_webhook_ca.crt> |
  | admission.crt_b64 | <base64_webhook.crt> |
  | admission.key_b64 | <base64_webhook.key> |
Generate the Proxy Scanner certificates (Optional)
Generating certificates using the script in the Lacework Helm Charts repository only provides the Webhook certificates.
You can reuse the same certificates for the Proxy Scanner if you generated them for the Webhook:
| File to use | Entries in values.yaml or Helm entry |
|---|---|
ca.crt_b64 (same as the Webhook Root CA) | <base64_scanner_ca.crt> |
admission.crt_b64 | <base64_scanner.crt> |
admission.key_b64 | <base64_scanner.key> |
Configure the Proxy Scanner
Configure the Proxy Scanner to authenticate with Lacework and automatically initiate scans prior to deployment. This also allows container vulnerability policies to be evaluated during scans. Scan results are then sent to the Admission Controller prior to pod deployment.
During installation of the Proxy Scanner, the values.yaml file will need to be provided. This section explains how to prepare this file depending on your installation method.
If choosing the manual installation method, a template of values.yaml is downloaded as part of the installation and does not need to be prepared in advance.
Proxy Scanner Config for Combined Helm Chart
When installing the Webhook and Proxy Scanner in a Combined Helm Chart, create the values.yaml file in advance by using the following blank template of the proxy-scanner section:
    proxy-scanner:
      config:
        scan_public_registries:
        default_registry:
        static_cache_location: /opt/lacework
        lacework:
          account_name:
          integration_access_token:
          existingSecret:
            key:
            name:
        registry_secret_name:
        registries:
          - domain:
            name:
            ssl:
            auto_poll: false
            is_public:
            auth_type:
            auth_header_name:
            credentials:
              user_name:
              password:
            poll_frequency_minutes: 20
        disable_non_os_package_scanning:
        go_binary_scanning:
          enable:
          scan_directory_path: ""
See Configure the Proxy Scanner for on-demand scans for instructions on how to populate the configuration.
There are also example configurations available to view (other authentication types with examples can be found in this section).
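For illustration, a hypothetical filled-in fragment for a single public registry might look like the following (the account name, registry name, and token below are fictitious; adjust the nesting and fields to match the template shipped with your chart version):

```shell
# Hypothetical filled-in proxy-scanner values for one anonymously-pulled
# public registry. Account name, registry name, and token are fictitious.
cat > values.yaml <<'EOF'
proxy-scanner:
  config:
    static_cache_location: /opt/lacework
    lacework:
      account_name: mycompany
      integration_access_token: _fictitious_token_
    registries:
      - domain: index.docker.io
        name: dockerhub-public
        ssl: true
        auto_poll: false
        is_public: true
        poll_frequency_minutes: 20
EOF
```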
Proxy Scanner Config for Separate Helm Charts
When installing the Proxy Scanner and Webhook in Separate Helm Charts, create the values.yaml file in advance by using the following blank template of the config section:
    config:
      scan_public_registries:
      static_cache_location: /opt/lacework
      default_registry:
      lacework:
        account_name:
        integration_access_token:
        existingSecret:
          key:
          name:
      registry_secret_name:
      registries:
        - domain:
          name:
          ssl:
          auto_poll: false
          is_public:
          auth_type:
          auth_header_name:
          credentials:
            user_name:
            password:
          poll_frequency_minutes: 20
      disable_non_os_package_scanning:
      go_binary_scanning:
        enable:
        scan_directory_path: ""
See Configure the Proxy Scanner for on-demand scans for instructions on how to populate the configuration.
There are also example configurations available to view (other authentication types with examples can be found in this section).
Exclude Workload Resources
It is highly recommended that all resources except Pod are excluded from scans. All other resources may spawn multiple child resources, and these will cause excessive scanning during the deployment if not excluded.
For example, a CronJob will spawn instances of Jobs, which can also launch Pods. If only CronJob and Job are excluded, then a scan will still be triggered on Pod.
Only remove resources from the exclude list if you are okay with the additional delay that may occur during deployments.
You can exclude Kubernetes Workload Resources from vulnerability scans by adding them to an excluded resources list.
The types of resources that can be excluded from scans are:
- Pod
- Deployment
- ReplicaSet
- DaemonSet
- Job
- CronJob
- StatefulSet
Edit the values.yaml to Exclude Resources
Exclude resources by adding an admission.excluded_resources field to the values.yaml file for the Admission Controller Webhook configuration.
If not added, the default configuration will exclude all resources except Pod (which is recommended).
The example below is an exhaustive list of all possible resources that can be excluded. Only add the resources that you want to exclude from scanning.
    admission:
      excluded_resources:
        - Pod
        - Deployment
        - ReplicaSet
        - DaemonSet
        - Job
        - CronJob
        - StatefulSet
Where and when to add this field depends on which installation method you have chosen:
- Combined Helm Chart: Add this field to the Proxy Scanner configuration prior to installation, as that file can contain both Proxy Scanner and Webhook configuration.
- Separate Helm Charts: Create a separate `values_resources.yaml` file and add this field as its contents. It can then be used when prompted during the Webhook installation steps.
- Manual Installation: Add this field to the `values.yaml` when prompted during the Webhook installation steps.
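For the separate Helm charts method, the standalone `values_resources.yaml` can be generated like this (the list mirrors the recommended default: every resource type excluded except Pod):

```shell
# Create a standalone values_resources.yaml for the Webhook install
# (separate Helm charts method). Pod is deliberately not listed, so scans
# are triggered once per Pod rather than on every parent resource.
cat > values_resources.yaml <<'EOF'
admission:
  excluded_resources:
    - Deployment
    - ReplicaSet
    - DaemonSet
    - Job
    - CronJob
    - StatefulSet
EOF
```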
Installation Steps
Option 1: Install the Webhook and Proxy Scanner in a Combined Helm Chart
- Before you begin, ensure you have all the TLS/SSL certificates in your working directory.
- Add the Lacework Helm Charts repository:

      helm repo add lacework https://lacework.github.io/helm-charts/

- Install and configure the Admission Controller Webhook and Proxy Scanner by using the `values.yaml` file created earlier:

      helm upgrade --install --create-namespace --namespace lacework \
        --set webhooks.caBundle=`cat <base64_webhook_ca.crt>` \
        --set certs.serverCertificate=`cat <base64_webhook.crt>` \
        --set certs.serverKey=`cat <base64_webhook.key>` \
        --values values.yaml \
        lacework-admission-controller lacework/admission-controller

  - (Optional) If you want to secure communication between the Proxy Scanner and Admission Controller Webhook, include these optional properties:

        --set scanner.skipVerify=false \
        --set scanner.caCert=`cat <base64_scanner_ca.crt>` \
        --set proxy-scanner.certs.skipCert=false \
        --set proxy-scanner.certs.serverCertificate=`cat <base64_scanner.crt>` \
        --set proxy-scanner.certs.serverKey=`cat <base64_scanner.key>` \

    The full command then becomes:

        helm upgrade --install --create-namespace --namespace lacework \
          --set webhooks.caBundle=`cat <base64_webhook_ca.crt>` \
          --set certs.serverCertificate=`cat <base64_webhook.crt>` \
          --set certs.serverKey=`cat <base64_webhook.key>` \
          --values values.yaml \
          --set scanner.skipVerify=false \
          --set scanner.caCert=`cat <base64_scanner_ca.crt>` \
          --set proxy-scanner.certs.skipCert=false \
          --set proxy-scanner.certs.serverCertificate=`cat <base64_scanner.crt>` \
          --set proxy-scanner.certs.serverKey=`cat <base64_scanner.key>` \
          lacework-admission-controller lacework/admission-controller

    Alternatively, provide the full Base64 encoded string for each certificate instead of the `cat certificateFile` command.

  - (Optional) If you want to exclude workload resources from vulnerability scans, ensure you have added the admission.excluded_resources field to the `values.yaml` file.

- Display the pods for verification:

      kubectl get pods -n lacework
Option 2: Install the Proxy Scanner and Webhook in Separate Helm Charts
Install the Proxy Scanner with Helm
- Before you begin, ensure you have all the TLS/SSL certificates in your working directory.
- Add the Lacework Helm Charts repository:

      helm repo add lacework https://lacework.github.io/helm-charts/

- Install and configure the Proxy Scanner by using the `values.yaml` file created earlier:

      helm upgrade --install --create-namespace --namespace lacework \
        --values values.yaml \
        lacework-proxy-scanner lacework/proxy-scanner

  - (Optional) If you want to secure communication between the Proxy Scanner and Admission Controller Webhook, include these optional properties:

        --set certs.skipCert=false \
        --set certs.serverCertificate=`cat <base64_scanner.crt>` \
        --set certs.serverKey=`cat <base64_scanner.key>` \

    The full command then becomes:

        helm upgrade --install --create-namespace --namespace lacework \
          --set certs.skipCert=false \
          --set certs.serverCertificate=`cat <base64_scanner.crt>` \
          --set certs.serverKey=`cat <base64_scanner.key>` \
          --values values.yaml \
          lacework-proxy-scanner lacework/proxy-scanner

- Display the pods for verification:

      kubectl get pods -n lacework
Install the Admission Controller Webhook With Helm
- Before you begin, ensure you have all the TLS/SSL certificates in your working directory.
- Install and configure the Admission Controller Webhook:

      helm upgrade --install --create-namespace --namespace lacework \
        --set proxy-scanner.enabled=false \
        --set webhooks.caBundle=`cat <base64_webhook_ca.crt>` \
        --set certs.serverCertificate=`cat <base64_webhook.crt>` \
        --set certs.serverKey=`cat <base64_webhook.key>` \
        lacework-admission-controller lacework/admission-controller

  - (Optional) If you want to exclude workload resources from vulnerability scans, include the `values_resources.yaml` file that was created earlier:

        --values values_resources.yaml \

    The full command then becomes:

        helm upgrade --install --create-namespace --namespace lacework \
          --set proxy-scanner.enabled=false \
          --set webhooks.caBundle=`cat <base64_webhook_ca.crt>` \
          --set certs.serverCertificate=`cat <base64_webhook.crt>` \
          --set certs.serverKey=`cat <base64_webhook.key>` \
          --values values_resources.yaml \
          lacework-admission-controller lacework/admission-controller

  - (Optional) If you want to secure communication between the Proxy Scanner and Admission Controller Webhook, include these optional properties:

        --set scanner.caCert=`cat <base64_scanner_ca.crt>` \
        --set scanner.skipVerify=false \

    The full command then becomes:

        helm upgrade --install --create-namespace --namespace lacework \
          --set proxy-scanner.enabled=false \
          --set webhooks.caBundle=`cat <base64_webhook_ca.crt>` \
          --set certs.serverCertificate=`cat <base64_webhook.crt>` \
          --set certs.serverKey=`cat <base64_webhook.key>` \
          --set scanner.caCert=`cat <base64_scanner_ca.crt>` \
          --set scanner.skipVerify=false \
          lacework-admission-controller lacework/admission-controller

- Display the pods for verification:

      kubectl get pods -n lacework
Option 3: Download and Install the Helm Charts Manually
Install the Proxy Scanner Manually
- Get the latest copy of the Proxy Scanner from the Lacework Helm Charts repository. The file is named in the format `proxy-scanner-<version>.tgz`.

- Extract the Proxy Scanner files into your current working directory:

  ```shell
  tar -xvf proxy-scanner-*.tgz
  ```

  This creates the `proxy-scanner` directory containing the related files.

- Edit the `proxy-scanner/values.yaml` file to configure the Proxy Scanner for on-demand scans (follow the links for instructions on how to populate the configuration). Example configurations are also available to view for each authentication type.

- (Optional) If you want to secure communication between the Proxy Scanner and Admission Controller Webhook, provide the scanner certificate and key entries in the `values.yaml` file:

  ```yaml
  certs:
    skipCert: false
    serverCertificate: "<base64_scanner.crt>"
    serverKey: "<base64_scanner.key>"
  ```

  Provide the full Base64-encoded strings for the certificates, enclosed in double quotes.

- Install the Proxy Scanner:

  ```shell
  helm install -n lacework --create-namespace lacework-proxy-scanner ./proxy-scanner
  ```

- Display the pods for verification:

  ```shell
  kubectl get pods -n lacework
  ```
Install the Admission Controller Webhook Manually
- Get the latest copy of the Admission Controller from the Lacework Helm Charts repository. The file is named in the format `admission-controller-<version>.tgz`.

- Extract the Admission Controller Webhook files into your current working directory:

  ```shell
  tar -xvf admission-controller-*.tgz
  ```

  This creates the `admission-controller` directory containing the related files.

- Provide the certificate entries in `values.yaml` for the Admission Controller:

  ```yaml
  certs:
    name: lacework-admission-certs
    serverCertificate: "<base64_webhook.crt>"
    serverKey: "<base64_webhook.key>"
  webhooks:
    caBundle: "<base64_webhook_ca.crt>"
  ```

  Provide the full Base64-encoded strings for the certificates, enclosed in double quotes.

- (Optional) If you want to exclude workload resources from vulnerability scans, add the `admission.excluded_resources` field to the `values.yaml` file.

- (Optional) If you want to secure communication between the Proxy Scanner and Admission Controller Webhook, provide the scanner root CA entry in the `values.yaml` file:

  ```yaml
  scanner:
    skipVerify: false
    caCert: "<base64_scanner_ca.crt>"
  ```

  Provide the full Base64-encoded string for the certificate, enclosed in double quotes.

- Install the Admission Controller Webhook:

  ```shell
  helm install -n lacework --create-namespace lacework-admission-controller ./admission-controller
  ```

- Display the pods for verification:

  ```shell
  kubectl get pods -n lacework
  ```
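The steps above assume the webhook certificate, key, and CA bundle already exist in your working directory. For a disposable test environment only, a self-signed pair can be generated with `openssl`; the CN below is an assumption based on the chart's service naming, and production certificates should come from your own PKI:

```shell
# Sketch (test environments only): create a throwaway self-signed
# certificate and key. For a self-signed cert, the certificate doubles
# as its own CA bundle. The CN is a guess at the webhook service name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=lacework-admission-controller.lacework.svc" \
  -keyout webhook.key -out webhook.crt
base64 -w0 webhook.crt > base64_webhook.crt
base64 -w0 webhook.key > base64_webhook.key
cp base64_webhook.crt base64_webhook_ca.crt   # self-signed: cert is its own CA
```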
Configurable Parameters for Admission Controller Webhook
The following table lists the Admission Controller Webhook parameters that you can configure in the `values.yaml` file:

| Parameter | Description | Default | Mandatory |
|---|---|---|---|
| `logger.debug` | Set to enable debug logging | `false` | NO |
| `certs.name` | Secret name for Helios certs | `helios-admission-certs` | NO |
| `certs.serverCertificate` | Certificate for TLS authentication with the Kubernetes api-server | N/A | YES |
| `certs.serverKey` | Certificate key for TLS authentication with the Kubernetes api-server | N/A | YES |
| `webhooks.caBundle` | Root certificate for TLS authentication with the Kubernetes api-server | N/A | YES |
| `policy.block_exec` | Set to block 'exec' shell access to a Kubernetes pod | `false` | NO |
| `policy.bypass_scope` | CSV of namespaces to bypass | `kube-system,kube-public,lacework,lacework-dev` | NO |
| `nodeSelector` | Kubernetes node selector | `{}` | NO |
| `scanner.server` | Lacework proxy scanner name | `lacework-proxy-scanner` | NO |
| `scanner.namespace` | Namespace in which the proxy scanner is deployed | `lacework` | NO |
| `scanner.skipVerify` | Skip SSL verification between the webhook and the scanner | `true` | NO |
| `scanner.caCert` | Root cert of the scanner | N/A | NO |
| `scanner.timeout` | Context deadline timeout | 30 | NO |
| `scanner.defaultRegistry` | Default registry to use when one is not provided in the image name | `index.docker.io` | NO |
| `admission.excluded_resources` | List of resources to skip admission review | N/A | NO |
| `scanner.blockOnError` | Block the admission request if the proxy scanner returns an error | `false` | YES |
scanner.defaultRegistry Configuration
If you set scanner.defaultRegistry to an empty value (that is, remove the default of index.docker.io), you must include the registry in any image name you submit for scanning.
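As an illustration only (this is not Lacework's actual code), registry resolution typically works like this: if the part of the image name before the first `/` does not look like a registry host, the default registry is prepended:

```shell
# Illustration only: prepend a default registry when the image name has
# no registry component. A registry host contains '.' or ':' (or is
# "localhost") in the part before the first '/'.
resolve_image() {
  image="$1"; default_registry="$2"
  first="${image%%/*}"
  case "$first" in
    *.*|*:*|localhost) echo "$image" ;;
    *)                 echo "${default_registry}/${image}" ;;
  esac
}

resolve_image "library/nginx" "index.docker.io"   # index.docker.io/library/nginx
resolve_image "gcr.io/proj/app" "index.docker.io" # gcr.io/proj/app
```

With an empty default registry, an unqualified name cannot be resolved, which is why the registry must then be supplied in the image name itself.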
Configuring Parameters After Installation
You can configure parameters using Helm after an installation, but the changes will not become active until you redeploy.
Configure parameters by using either of the following methods:
Method 1: Make Changes to values.yaml
Make your changes to the `values.yaml` file and push them through Helm:

```shell
helm upgrade --values values.yaml lacework-admission-controller lacework/admission-controller
helm upgrade --values values.yaml lacework-proxy-scanner lacework/proxy-scanner
```
Method 2: Set Values Directly
Set values directly using Helm:

```shell
helm upgrade --set logger.debug=true lacework-admission-controller lacework/admission-controller
helm upgrade --set default_registry="NewDefaultRegistry.io" lacework-proxy-scanner lacework/proxy-scanner
```
Default Scan Behavior
If you do not specify a container image tag when requesting a scan, the latest tag is used by default.
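As an illustration only (not the scanner's actual implementation), the tag-defaulting rule can be sketched as: if the segment after the last `/` carries no `:tag`, assume `:latest`:

```shell
# Illustration only: append ':latest' when the final path segment of the
# image reference has no explicit tag.
with_default_tag() {
  case "${1##*/}" in
    *:*) echo "$1" ;;          # tag already present after the last '/'
    *)   echo "$1:latest" ;;
  esac
}

with_default_tag "myregistry.io:5000/team/app"   # myregistry.io:5000/team/app:latest
with_default_tag "team/app:1.4.2"                # team/app:1.4.2
```

Checking the last path segment (rather than the whole string) avoids misreading a registry port such as `:5000` as a tag.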