
The Glasswall-Menlo Security plug-in integration is a multi-step process, which we have broken down into the following steps.

Note: These steps focus on deployment in the eu-west-1 region.

Creating a Key Pair

In order to be able to SSH to your instance, you need to create a key pair (.pem) file.

To create a key pair:

  1. Log in to your AWS account and navigate to the AWS Management Console section.
  2. Under the AWS services section, select EC2.
  3. From the side navigation bar, under the Network & Security tab, select Key Pairs.

  4. From the Create key pair page, enter a Key Pair Name.
  5. Ensure that the .pem Private key file format option is selected.

  6. Download the .pem Key Pair.
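The same key pair can also be created from the AWS CLI. A minimal sketch; the key name gw-menlo-key is an example, use your own:

```shell
# Create the key pair in eu-west-1 and save the private key locally.
aws ec2 create-key-pair \
    --region eu-west-1 \
    --key-name gw-menlo-key \
    --query 'KeyMaterial' \
    --output text > gw-menlo-key.pem

# Restrict permissions so SSH will accept the key file.
chmod 400 gw-menlo-key.pem
```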

Creating a S3 Bucket

The next step is to create a S3 bucket:

  1. Navigate to the Amazon S3 service page.
  2. Ensure you are under the correct region, then click Create bucket.

  3. On the Create bucket page, enter the Bucket name.

  4. Scroll down to the Default encryption section and ensure that Server-side encryption is set to Enable.
  5. Under the Encryption key type section, ensure that the Amazon S3 key (SSE-S3) option is selected.

  6. Click Create bucket.
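The same bucket can also be created from the AWS CLI. A sketch; the bucket name is an example and must be globally unique:

```shell
# Create the bucket in eu-west-1 (regions other than us-east-1
# require an explicit LocationConstraint).
aws s3api create-bucket \
    --bucket gw-menlo-tfstate-example \
    --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1

# Enable default server-side encryption with the Amazon S3 key (SSE-S3).
aws s3api put-bucket-encryption \
    --bucket gw-menlo-tfstate-example \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```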

Creating a Certificate & Hosted Zone Record

This process can be done in two ways, and the following steps cover both.


To create a certificate automatically via the Route 53 Dashboard:

  1. Navigate to the Route 53 Dashboard and click Hosted zone.
  2. Under Domain name, select your registered domain, which should be listed (as mentioned in the prerequisites).

  3. Select the Records tab.

  4. Now navigate to the Certificate Manager section, and click Request a certificate.

  5. Under the Request a certificate section, ensure that Request a public certificate is selected.

  6. Under the Add domain names section, enter a Domain name. Click Next.

  7. Under the Add tags section, enter the Tag Name and Value. Click Review.

  8. Review your information and then click Confirm and request.

  9. You can now click Create record in Route 53, or you can copy your Name, Type, and Value information and create a record manually in Route 53.
    Note: see below for the manual record creation process.
    If you would like to proceed with automatically creating a record, click Continue.

  10. Once the certificate has been created, it should be displayed under the Certificates section and the Status should be Issued.
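The same certificate request can also be issued from the AWS CLI. A sketch; the domain name and tag values are examples:

```shell
# Request a public certificate with DNS validation.
aws acm request-certificate \
    --region eu-west-1 \
    --domain-name menlo2gw.example.com \
    --validation-method DNS \
    --tags Key=Name,Value=menlo2gw

# The command returns the CertificateArn, which you will later need in
# the tfvars file. Describe the certificate to retrieve the DNS
# validation record that must be created in Route 53:
aws acm describe-certificate \
    --region eu-west-1 \
    --certificate-arn <CERTIFICATE ARN FROM THE PREVIOUS COMMAND>
```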


To create a record manually:

  1. Navigate back to the Route 53 Dashboard.
  2. Click Create record.

  3. Under the Quick create record section, add Record name and Value.

  4. Click Create records.

Selecting Subnet

To select a subnet:

  1. Navigate to the VPC page.
  2. Click Subnets.
    Note: Skip this step if you already have a public subnet.
  3. Ensure you are in the correct region (eu-west-1) and select one of the public subnets, for example aws-landing-zone-PublicSubnet1.
  4. From the Actions drop-down menu, select Modify auto-assign IP settings.

  5. Ensure that Enable auto-assign public IPv4 address is selected.
  6. Click Save.
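The auto-assign setting can also be changed from the AWS CLI. A sketch; the subnet ID is a placeholder, use the public subnet you selected:

```shell
# Enable auto-assign public IPv4 addresses on the chosen public subnet.
aws ec2 modify-subnet-attribute \
    --region eu-west-1 \
    --subnet-id subnet-0123456789abcdef0 \
    --map-public-ip-on-launch
```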

Creating a Stack

Firstly, to create a stack you need to download the CloudFormationEc2.yaml file from here.

This is a script that will deploy an Ubuntu EC2 instance with terraform pre-installed.

To create a stack:

  1. Log in to your AWS account and navigate to the CloudFormation page.
  2. From the Stacks section, select Create stack.
  3. Under the Create stack section, select the Template is ready option.

  4. Under Specify template section, select the Upload a template file option.
  5. Click Next.
  6. Under the Specify stack details section, enter the following information:
    1. Stack Name
    2. InstanceType
    3. UbuntuLatestAmiId
      1. Select according to region.
    4. SSHLocation
      1. The IP address range that can be used to SSH to the EC2 controller VM.
         Note: Ensure this is whitelisted.
    5. SubnetID
      1. The subnet ID where the instance will be created. Ensure you select the public subnet modified earlier (select from the drop-down or check VPC Manager/Subnets).
    6. VPCId
      1. The VPC where the instance will be created (select from the drop-down or check VPC Manager/VPCs).

  7. Click Next to create a stack with default configuration.
    Note: Wait until the resources are created completely (this can take up to 1 minute). The EC2 instance will be created, and you can see it in the selected region, named CF-EC2-TF.
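The stack can also be created from the AWS CLI. A sketch; the parameter names follow the template fields listed above and may differ from the actual template, and all values are placeholders:

```shell
# Create the stack from the downloaded template. Substitute your own
# instance type, whitelisted IP range, subnet ID, and VPC ID.
aws cloudformation create-stack \
    --region eu-west-1 \
    --stack-name CF-EC2-TF \
    --template-body file://CloudFormationEc2.yaml \
    --parameters \
        ParameterKey=InstanceType,ParameterValue=t3.medium \
        ParameterKey=SSHLocation,ParameterValue=<YOUR IP>/32 \
        ParameterKey=SubnetID,ParameterValue=subnet-0123456789abcdef0 \
        ParameterKey=VPCId,ParameterValue=vpc-0123456789abcdef0
```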

Cloning GitHub Menlo Repository to AWS Resource

For this step, it is necessary for you to have a GitHub account with access to this repository as it’s not public.

Note: You will need to know the email address associated with your GitHub account.

To set the SSH key to clone Menlo integration repository:

  1. Type:

    ssh-keygen -t rsa -b 4096 -C "<email address associated with GitHub>"
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_rsa
  2. Now print the public key so you can copy it:

    cat ~/.ssh/id_rsa.pub
  3. Log in to your GitHub account and click Settings from the Account menu drop-down.

  4. From the side menu, select SSH and GPG keys.

  5. On the SSH keys page, select New SSH key.
  6. Paste the public key you printed from the previous step.

  7. Click Add SSH key. The new key will be listed on the SSH keys page.
  8. Navigate back to the terminal to clone the repository into your home directory.
  9. Type the following to move to the home folder:

    cd ~
  10. Type the following to clone the repository:

    git clone [email protected]:k8-proxy/gw-menlo-integration.git
    Note: if you do not have permission to create a new folder inside the home folder, use this command first:

    sudo chmod -R 777 /home
    Then move into the Terraform directory and check out the master branch:

    cd gw-menlo-integration/infra/terraform/
    git checkout master

    The repository should now be cloned to your AWS resource.
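Before cloning, you can confirm that GitHub accepts the new key:

```shell
# Verify SSH authentication against GitHub. This opens no shell; it
# only prints an authentication message.
ssh -T git@github.com
# A successful reply looks like:
# "Hi <username>! You've successfully authenticated, but GitHub does
# not provide shell access."
```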

Exporting your AWS Credentials

To export your AWS credentials:

  1. Log in to your AWS account and from the home page select Command line or programmatic access.

  2. Copy your credentials and paste them directly into the terminal.

    Note: if at any time you want to deploy or destroy infrastructure, you will need to repeat this process.
    If the credentials are not exported correctly, terraform will fail with an authentication error.

    Save the credentials into .bashrc for future reference:

    echo "export AWS_ACCESS_KEY=<YOUR ACCESS KEY>" >> ~/.bashrc
    echo "export AWS_SECRET_ACCESS_KEY=<YOUR SECRET KEY>" >> ~/.bashrc
    echo "export AWS_DEFAULT_REGION=eu-west-1" >> ~/.bashrc
    Note: replace the placeholders with your own values.
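If the AWS CLI is available on the instance, you can confirm the exported credentials are valid before running terraform:

```shell
# Returns your account ID, user ID, and ARN if the exported
# credentials are valid; fails with an error otherwise.
aws sts get-caller-identity
```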

Creating a tfvars File

The tfvars file holds the deployment information you need to add:

  1. Firstly, create a copy of the example tfvars file in the tfvars directory:

    cp tfvars/example.tfvars tfvars/menlo2gw.tfvars
    Note: Ensure that the new copy has the .tfvars extension and use a unique name, for example menlo2gw. During the deployment two S3 buckets will be created, so the two names need to be different; otherwise you will get an error.
  2. Open the newly created file and update it with the required values:

    vim tfvars/menlo2gw.tfvars
    A quick reminder of vim commands:
    i: insert (start editing)
    Esc: stop editing
    :wq: write and quit

  3. Update the following variables in the file (you can delete any extra lines left over in the copy):

    image_id = ""
    (This is the Middleware AMI mentioned in the prerequisites.)
    ssh_key_name = "<REPLACE WITH YOUR KEY NAME>"  # without the .pem extension
    (The key pair you created at the beginning of the process.)
    app_lb_ports = [443]
    app_lb_protocols = ["HTTPS"]
    storage_type = "S3"
    (These are not to be changed.)
    sdk_image_id = ""
    sc_image_id = ""
    (The latest AMIs for the CloudSDK and SC clusters.)
    certificate_arn = ""
    sc_certificate_arn = ""
    (These hold the certificate ARN, which can be found in the AWS Certificate Manager mentioned above.)

    monitoring_password = "<SHARED ON PASSBOLT SERVER>"
    logging_password = "<SHARED ON PASSBOLT SERVER>"
    (These are shared with you.)
    s3_access_key = "<ADD YOUR AWS ACCESS KEY>"
    s3_secret_key = "<ADD YOUR AWS SECRET KEY>"
    (This information can also be found in the programmatic access section mentioned above.)

    desired_capacity = 2  # Middleware ASG desired capacity
    sdk_desired_capacity = 2  # CloudSDK ASG desired capacity
    max_instances = 4  # Middleware ASG max capacity
    sdk_max_instances = 12  # CloudSDK ASG max capacity
    (These should not be changed.)
    bastion_whitelist_ips = ["<IP ADDRESS OF THIS SERVER>/32"]
    Note: the IP address you need can be found in AWS VPC.

    middleware_base_url = "<DESIRED URL FOR THE DEPLOYMENT>"
    Note: ensure that it holds your domain information at the end.
  4. Once the information has been updated, save the file and exit.

Deploying Infrastructure

There are two bash scripts, TF-apply and TF-destroy, in your current working directory, which will apply or destroy the pre-configured infrastructure respectively.

To run the TF-apply script:

  1. Run the TF-apply script with bash.
  2. Provide the S3 bucket name that we created at the beginning of the process.
  3. Provide S3 default region.
  4. Provide the name of the workspace (the name of the tfvars file we created above, in our example: menlo2gw).

    Note: terraform should now run and print the resources it will deploy.
  5. Enter Yes and wait for the deployment to finish. This can take up to 5 minutes. Once the deployment has completed, terraform will print its outputs (including alb_dns and sc_alb_dns).
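Given the bucket, region, and workspace prompts above, the TF-apply script is presumed to wrap standard terraform commands along these lines (the actual script contents may differ):

```shell
# Configure the S3 remote-state backend with the bucket and region
# you provided at the prompts.
terraform init \
    -backend-config="bucket=<YOUR STATE BUCKET>" \
    -backend-config="region=eu-west-1"

# Use a workspace named after your tfvars file (menlo2gw in the
# running example), creating it on first run.
terraform workspace new menlo2gw || terraform workspace select menlo2gw

# Apply the deployment using your tfvars file.
terraform apply -var-file=tfvars/menlo2gw.tfvars
```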

Creating Two More Records in Route 53

To create two more records in AWS Route 53:

  1. Copy the alb_dns (the Middleware load balancer) DNS name.
  2. From the AWS Dashboard, navigate to Route 53 again (from the AWS account where the domain name is registered).
  3. Under the chosen domain name, create a new record with the middleware_base_url value set in the tfvars file created above.
  4. Update the following:

    Record name: the middleware_base_url value (in the example above, menlo2gw)
    Record type: CNAME - route traffic to another domain name and to some AWS resources
    Value: the alb_dns value

    The newly created record will be used to access the Middleware.

    Note: copy the sc_alb_dns (the service cluster load balancer) DNS name.
  5. Now create the second record. From the AWS Dashboard, navigate to Route 53 again (from the AWS account where the domain name is registered).
  6. Under the chosen domain name, create a new record for service cluster access (Kibana & Grafana).
  7. Update the following:

    Record name: the service cluster record name (in the example above, menlo2)
    Record type: CNAME - route traffic to another domain name and to some AWS resources
    Value: the sc_alb_dns value
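Both records can also be created from the AWS CLI. A sketch; the hosted zone ID and record name are placeholders, and the value comes from the alb_dns terraform output (repeat with the sc_alb_dns value for the second record):

```shell
# Create a CNAME record pointing the deployment URL at the
# Middleware load balancer DNS name.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "menlo2gw.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "<alb_dns VALUE>"}]
        }
      }]
    }'
```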

Additional Information

The newly created record will be used to access Grafana and Kibana.




Password: shared on the Glasswall Passbolt server, tagged with the SC AMI ID.

If any changes are committed to the repository, git pull the updates, then rerun TF-apply to update the infrastructure:

cd ~/gw-menlo-integration/infra/terraform
git pull

To update one or more of the deployment variables (for example AMI IDs, ASG desired capacity, ASG max capacity):

Open the created tfvars file and edit the desired variables.

Rerun TF-apply and terraform will update the infrastructure.


To delete all created resources:

  1. Go to the running EC2 instances and find your deployment bastion and service instances, named menlo-bastion-(YOUR TFVARS FILE NAME) and menlo-service-(YOUR TFVARS FILE NAME) respectively.
  2. Select Instances > Actions > Instance settings > Change termination protection.
  3. Ensure that the Termination protection option is not enabled for both instances and click Save.
  4. Navigate to S3 and find the reports bucket named menlo-report-(YOUR TFVARS FILE NAME).
  5. Select and delete all reports.
  6. Run the TF-destroy script:

    cd ~/gw-menlo-integration/infra/terraform
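The TF-destroy script is presumed to wrap the standard terraform teardown commands (the actual script contents may differ, and it may also prompt for the state bucket, region, and workspace name):

```shell
# Select the workspace named after your tfvars file and destroy
# everything it deployed.
terraform workspace select menlo2gw
terraform destroy -var-file=tfvars/menlo2gw.tfvars
```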
