Target Audience
This page is primarily intended for users of the Self Hosting (Private Cloud) tier package. If you are using any of the Metaplay SaaS plans, information from this page may not be directly relevant to your needs.
An AWS account with an authorized user - To be able to follow the steps of this guide, you will need an AWS account to deploy infrastructure and an AWS IAM user with permission to create and interact with IAM, VPC, EC2, EKS, ECR, RDS, S3, and Route53 services, as well as the access keys for the user. Please refer to the AWS Identity and Access Management Getting Started guide for details on how to create a user.
A domain managed through Route53 - You need to have a Route53-managed domain, e.g., example.org, where the Terraform module will create the DNS entries required for the deployment.
The necessary tools installed on your system - You need installations of Terraform (v1.3.0 or later), kubectl (v1.18 or later), and Helm (v3.9 or later) to follow the steps in this guide; you can verify the installed versions with the commands shown right after this list.
Access to the Metaplay shared infra modules repository - You need authorization to access the Infra Modules repository for the Terraform instructions to work correctly.
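A quick way to sanity-check the tool installations is to print their versions (the exact output format varies between tool versions):
$ terraform version
$ kubectl version --client
$ helm version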
We have strived to make deploying a base infrastructure set as straightforward as possible by providing a Terraform module that collects other modules and deploys a sane base infrastructure configuration.
This page goes through the necessary steps to deploy this infrastructure, as well as information about accessing the database directly and removing infrastructure, if necessary.
To get started, let's create a simple Terraform file called infra.tf with the following contents:
provider "aws" {
region = "eu-west-1"
}
module "infra" {
source = "git@github.com:metaplay-shared/infra-modules.git//environments/aws-region?ref=main"
organization = "metaplay"
environment = "dev"
domain_name = "example.org" # Configure your own domain here
region = "eu-west-1"
azs = ["eu-west-1a", "eu-west-1b"]
cidr = "10.0.0.0/16"
private_subnets = ["10.0.16.0/20", "10.0.32.0/20"]
public_subnets = ["10.0.128.0/20", "10.0.144.0/20"]
node_groups = {
generic = {
instance_type = "t3.medium"
azs = ["eu-west-1a"]
desired_size = 1
min_size = 1
max_size = 1
autoscaler = false
}
autoscaling = {
instance_type = "t3.small"
azs = ["eu-west-1a"]
desired_size = 1
min_size = 0
max_size = 5
autoscaler = true
}
}
}
The domain_name parameter refers to the domain in Route53 you have available, and the module will create all required DNS entries within that domain. It is good practice to namespace individual infrastructure deployments underneath their respective subdomains to minimize the risk of name collisions.
With this relatively simple starting point, we can use Terraform to deploy the infrastructure. First, we will configure our AWS access key details as environment variables.
$ export AWS_ACCESS_KEY_ID="..."
$ export AWS_SECRET_ACCESS_KEY="..."
INFO
The Terraform AWS Provider documentation provides further examples of how to configure AWS credentials; for simplicity's sake, we use environment variables to store them here.
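For example, if you already have a named profile configured for the AWS CLI (the profile name metaplay-dev below is just a placeholder), you could point both the AWS CLI and the Terraform AWS provider at it instead of exporting raw keys, and verify that the credentials resolve to the expected account:
$ export AWS_PROFILE=metaplay-dev
$ aws sts get-caller-identity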
We then need to initialize the project, which downloads all of the dependencies.
$ ls
infra.tf # ensure we are at the right directory
$ terraform init # start initialization of Terraform dependencies
Initializing modules...
Downloading git@github.com:metaplay-shared/infra-modules.git?ref=main for infra...
- infra in ../../../infra-modules/environments/aws-region
- infra.aws-region in ../../../infra-modules/components/data/aws-region
- infra.backup in ../../../infra-modules/components/backup/aws
- infra.backup.aws-region in ../../../infra-modules/components/data/aws-region
- infra.base in ../../../infra-modules/components/base/aws
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.14.4 for infra.base.vpc...
- infra.base.vpc in .terraform/modules/infra.base.vpc
- infra.cluster in ../../../infra-modules/components/cluster/aws-eks
- infra.cluster.aws-region in ../../../infra-modules/components/data/aws-region
Downloading registry.terraform.io/terraform-aws-modules/iam/aws 5.34.0 for infra.cluster.cluster_autoscaler_irsa_role...
- infra.cluster.cluster_autoscaler_irsa_role in .terraform/modules/infra.cluster.cluster_autoscaler_irsa_role/modules/iam-role-for-service-accounts-eks
Downloading registry.terraform.io/terraform-aws-modules/iam/aws 5.34.0 for infra.cluster.ebs_csi_irsa_role...
- infra.cluster.ebs_csi_irsa_role in .terraform/modules/infra.cluster.ebs_csi_irsa_role/modules/iam-role-for-service-accounts-eks
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 19.10.3 for infra.cluster.eks...
- infra.cluster.eks in .terraform/modules/infra.cluster.eks
- infra.cluster.eks.eks_managed_node_group in .terraform/modules/infra.cluster.eks/modules/eks-managed-node-group
- infra.cluster.eks.eks_managed_node_group.user_data in .terraform/modules/infra.cluster.eks/modules/_user_data
- infra.cluster.eks.fargate_profile in .terraform/modules/infra.cluster.eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 1.1.0 for infra.cluster.eks.kms...
- infra.cluster.eks.kms in .terraform/modules/infra.cluster.eks.kms
- infra.cluster.eks.self_managed_node_group in .terraform/modules/infra.cluster.eks/modules/self-managed-node-group
- infra.cluster.eks.self_managed_node_group.user_data in .terraform/modules/infra.cluster.eks/modules/_user_data
- infra.cluster.helm-postrender in ../../../infra-modules/components/data/helm-postrender
Downloading registry.terraform.io/terraform-aws-modules/eventbridge/aws 2.1.0 for infra.cluster.karpenter_interruption_eventbridge...
- infra.cluster.karpenter_interruption_eventbridge in .terraform/modules/infra.cluster.karpenter_interruption_eventbridge
- infra.cluster.registry_credentials in ../../../infra-modules/components/data/aws-eks-userdata-registry-credentials
Downloading registry.terraform.io/terraform-aws-modules/iam/aws 5.34.0 for infra.cluster.vpc_cni_ipv4_irsa_role...
- infra.cluster.vpc_cni_ipv4_irsa_role in .terraform/modules/infra.cluster.vpc_cni_ipv4_irsa_role/modules/iam-role-for-service-accounts-eks
- infra.database in ../../../infra-modules/components/database/aws-aurora
Downloading registry.terraform.io/terraform-aws-modules/rds-aurora/aws 7.7.1 for infra.database.aurora...
- infra.database.aurora in .terraform/modules/infra.database.aurora
- infra.deployment in ../../../infra-modules/components/deployment
- infra.deployment.aws-region in ../../../infra-modules/components/data/aws-region
- infra.deployment.eks_node_userdata in ../../../infra-modules/components/data/aws-eks-node-userdata
- infra.deployment.helm-postrender in ../../../infra-modules/components/data/helm-postrender
- infra.deployment.infra-modules in ../../../infra-modules/components/data/infra-modules
- infra.infra-modules in ../../../infra-modules/components/data/infra-modules
- infra.instance_data in ../../../infra-modules/components/data/aws-instance-type
- infra.services in ../../../infra-modules/components/services
- infra.services.aws-region in ../../../infra-modules/components/data/aws-region
- infra.services.helm-postrender in ../../../infra-modules/components/data/helm-postrender
- infra.tenant_database in ../../../infra-modules/components/database/aws-aurora
Downloading registry.terraform.io/terraform-aws-modules/rds-aurora/aws 7.7.1 for infra.tenant_database.aurora...
- infra.tenant_database.aurora in .terraform/modules/infra.tenant_database.aurora
Initializing provider plugins...
- Finding latest version of hashicorp/local...
- Finding hashicorp/random versions matching ">= 2.2.0"...
- Finding hashicorp/kubernetes versions matching ">= 2.0.0, >= 2.10.0"...
- Finding alekc/kubectl versions matching ">= 2.0.3"...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/aws versions matching ">= 3.63.0, >= 3.72.0, >= 4.0.0, ~> 4.0, >= 4.22.0, >= 4.30.0, >= 4.47.0, >= 4.64.0"...
- Finding terraform-registry.platform.metaplay.dev/metaplay/hydra versions matching ">= 0.5.0"...
- Finding hashicorp/helm versions matching ">= 2.9.0"...
- Finding hashicorp/tls versions matching ">= 3.0.0"...
- Installing hashicorp/local v2.4.1...
- Installed hashicorp/local v2.4.1 (signed by HashiCorp)
- Installing hashicorp/random v3.6.0...
- Installed hashicorp/random v3.6.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.26.0...
- Installed hashicorp/kubernetes v2.26.0 (signed by HashiCorp)
- Installing hashicorp/time v0.10.0...
- Installed hashicorp/time v0.10.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.5...
- Installed hashicorp/tls v4.0.5 (signed by HashiCorp)
- Installing alekc/kubectl v2.0.4...
- Installed alekc/kubectl v2.0.4 (self-signed, key ID 772FB27A86DAFCE7)
- Installing hashicorp/cloudinit v2.3.3...
- Installed hashicorp/cloudinit v2.3.3 (signed by HashiCorp)
- Installing hashicorp/aws v4.67.0...
- Installed hashicorp/aws v4.67.0 (signed by HashiCorp)
- Installing terraform-registry.platform.metaplay.dev/metaplay/hydra v0.5.2...
- Installed terraform-registry.platform.metaplay.dev/metaplay/hydra v0.5.2 (self-signed, key ID 866445064D71A5AE)
- Installing hashicorp/helm v2.12.1...
- Installed hashicorp/helm v2.12.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
With our dependencies and Terraform context in place, we're ready to deploy the infrastructure using the terraform plan and terraform apply commands.
Given the complex dependency chain between various resources, it's strongly advised to deploy one module at a time. Deploying all resources simultaneously carries a high risk of resource creation failure. The recommended order for creating the modules is as follows:
$ terraform apply -target=module.infra.module.base # Create base resources
$ terraform apply -target=module.infra.module.cluster -target=module.infra.module.database # Create databases and EKS cluster resources
$ terraform apply -target=module.infra.module.services # Create Metaplay service resources
$ terraform apply -target=module.infra.module.deployment # Create game server resources
$ terraform apply # Create other miscellaneous resources
Depending on the configuration, each module can take between 5 and 20 minutes to deploy successfully.
WARNING
Executing the terraform apply command without an explicit plan, as in each of the steps above, makes Terraform generate a just-in-time execution plan and ask the user to confirm whether it should be executed. Answering yes allows Terraform to start making the calls needed to set up the infrastructure. A safer way of applying cloud resources with Terraform is to use the terraform plan command, which can write plans to files and refresh cloud resource state before any changes are applied.
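As a sketch of that safer flow, the plan for a module can first be written to a file, reviewed, and then applied exactly as reviewed (using the base module from the steps above as an example):
$ terraform plan -target=module.infra.module.base -out=base.tfplan
$ terraform apply base.tfplan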
At this point, we have a functioning infrastructure stack deployed in AWS. You can log in to the AWS web console and verify that the expected resources have been created.
By default, EKS provides a kubeconfig file that we can use to interact with the Kubernetes cluster. You can use the AWS CLI to fetch the kubeconfig:
$ aws eks update-kubeconfig --kubeconfig /path/to/file/with/kubeconfig.yaml --name metaplay-dev-eks --region eu-west-1
$ export KUBECONFIG=/path/to/file/with/kubeconfig.yaml
With the kubeconfig.yaml file, we can inspect the cluster with kubectl and Helm:
$ kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
ip-10-0-25-213.eu-west-1.compute.internal Ready <none> 14m v1.27.9-eks-5e0fdde
$ kubectl get ns
NAME STATUS AGE
cluster-system Active 18m
default Active 22m
kube-node-lease Active 22m
kube-public Active 22m
kube-system Active 22m
metaplay-env-init Active 27s
metaplay-system Active 27s
$ helm ls -n metaplay-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
metaplay-services metaplay-system 1 2024-02-26 02:23:50.498412684 +0200 EET deployed metaplay-services-0.1.21
In addition, the module deploys an AWS Secrets Manager secret, which gives us some details about the infrastructure. The name of the secret is [organization]/[environment]/infra, and we can take a look at it using the AWS CLI tool:
$ aws secretsmanager get-secret-value --secret-id example/dev/infra --region eu-west-1 | jq -r .SecretString | jq .
{
"cloud_account": "000011112222",
"cloud_region": "eu-west-1",
"cloud_type": "aws",
"cluster_admin_role_arn": "arn:aws:iam::000011112222:role/metaplay-dev-eks-admin-role",
"cluster_architecture": "amd64",
"cluster_ca_certificate": "...base64 encoded cluster certificate...",
"cluster_endpoint": "https://01234567890123456789012345678901.gr7.eu-west-1.eks.amazonaws.com",
"cluster_id": "metaplay-dev-eks",
"cluster_kubeconfig": "...base64 encoded cluster kubeconfig...",
"cluster_os": "linux",
"database_shards": [
{
"database_endpoint": "metaplay-dev-rds-0.cluster-aaaaaaaa00.eu-west-1.rds.amazonaws.com",
"database_id": "metaplay-dev-rds-0",
"database_master_password": "<password>",
"database_master_username": "<user>",
"database_port": "<port>"
},
{
"database_endpoint": "metaplay-dev-rds-1.cluster-aaaaaaaa00.eu-west-1.rds.amazonaws.com",
"database_id": "metaplay-dev-rds-1",
"database_master_password": "<password>",
"database_master_username": "<user>",
"database_port": "<port>"
}
],
"format_version": 6,
"infra_version": "0.2.11",
"timestamp": "2024-02-16T10:48:37Z"
}
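Individual values can be picked out of the secret with jq; for example, to read the endpoint of the first database shard shown above:
$ aws secretsmanager get-secret-value --secret-id example/dev/infra --region eu-west-1 | jq -r .SecretString | jq -r '.database_shards[0].database_endpoint'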
At this point, we have a base set of infrastructure deployed. However, before we're able to deploy a Metaplay game server, we need to configure some details.
In our Terraform module, there is a parameter named deployments, which allows us to define a set of game servers we wish to support with this infrastructure. We can add support for, for example, idler-develop in the following way:
module "infra" {
....
# deployments to support with the infrastructure
deployments = {
"idler-develop" = {
enabled = true
tenant_organization = "idler"
tenant_project = "idler"
tenant_environment = "develop"
oauth2_client_enabled = true
deployment = "idler-develop"
subdomain = "idler-develop"
},
}
}
The addition of deployments like these during infrastructure deployment serves a handful of purposes, including the creation of deployment-specific resources (such as the Kubernetes Secrets and ConfigMaps that the game server can use to interact with other resources).
Each deployment carries additional parameters, and for idler-develop, we are primarily interested in three: enabled, deployment, and subdomain:
- enabled defines whether the supporting resources should be created or not. Toggling an enabled deployment to false will cause the removal of supporting resources, including databases, so please be careful!
- deployment is the name of the game environment. In our example case, we have opted to use the name idler-develop to signify that we intend to deploy the develop branch of our Idler game there. The name can be anything, but it will ideally give you some context on what the deployment is about. The value will also be used to name different deployment-specific resources.
- subdomain is the name under [environment].[domain_name] to use for naming resources. In most cases, this should just be the same as the value of the deployment parameter.
Danger!
Please note that due to restrictions in Terraform, if you intend to remove a deployment, you should do the removal in two steps: first, toggle the enabled value of the deployment to false and run terraform apply once to allow for the clearing of resources inside Kubernetes. After this, you can safely remove the actual code altogether and run terraform apply again to clear any remaining cloud resources.
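In practice, the two-step removal of a deployment defined in infra.tf looks like this:
$ # Step 1: set enabled = false for the deployment in infra.tf, then run:
$ terraform apply
$ # Step 2: delete the deployment block from infra.tf entirely, then run:
$ terraform apply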
With our configuration done, we can then execute the same terraform apply command as before. Because Terraform allows infrastructure to be described in a declarative and idempotent way, the run will now only add the resources required for the newly added deployment:
$ terraform apply
module.infra.module.services.module.helm-postrender.local_file.postrender: Refreshing state... [id=eeead06f338e6ef853e9112217df5e3849dddc66]
module.infra.module.cluster.module.eks.data.aws_partition.current: Reading...
module.infra.module.cluster.data.aws_kms_alias.default: Reading...
...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
<= read (data resources)
Terraform will perform the following actions:
...
Plan: 14 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
...
module.infra.aws_secretsmanager_secret.infra: Creating...
module.infra.aws_backup_vault.backup[0]: Creating...
module.infra.aws_secretsmanager_secret.deployments-index: Creating...
module.infra.module.backup[0].aws_iam_role.backup: Creating...
...
Apply complete! Resources: 14 added, 0 changed, 1 destroyed.
For each supported deployment, a new AWS Secrets Manager secret is created at [organization]/[environment]/deployments/[deployment name]; in our case, it was created at example/dev/deployments/idler-develop. You can read the contents of the secret with the AWS CLI similarly to the infrastructure secret above:
$ aws secretsmanager get-secret-value --secret-id example/dev/deployments/idler-develop --region eu-west-1 | jq -r .SecretString | jq .
{
"admin_hostname": "idler-develop-admin.dev.example.org",
"admin_tls_cert": "arn:aws:acm:eu-west-1:000011112222:certificate/00000000-0000-0000-0000-000000000000",
"api_hostname": "idler-develop-api.dev.example.org",
"cdn_distribution_arn": "arn:aws:cloudfront::000011112222:distribution/ABCDEFGHIJKL",
"cdn_distribution_id": "ABCDEFGHIJKL",
"cdn_s3_bucket": "idler-develop.dev.example.org",
"cdn_s3_fqdn": "idler-develop-assets.dev.example.org",
"cluster_ca_certificate": "...redacted base64 encoded Kubernetes cluster CA...",
"cluster_endpoint": "https://AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.gr7.eu-west-1.eks.amazonaws.com",
"cluster_namespace": "idler-develop",
"cluster_token": "...redacted Kubernetes service account JWT token...",
"deployment": "idler-develop",
"deployment_endpoint": "idler-develop.dev.example.org",
"deployment_kubeconfig": "...redacted base64 encoded standalone kubeconfig for service account...",
"deployment_port": 9339,
"deployment_ports": [
9339
],
"ecr_repo_botclient": "000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-botclient",
"ecr_repo_gameserver": "000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-server",
"enabled": true,
"format_version": 8,
"gameserver_admin_iam_role": "arn:aws:iam::000011112222:role/metaplay-dev-idler-develop-gameserver-admin",
"gameserver_iam_role": "arn:aws:iam::000011112222:role/metaplay-dev-idler-develop-gameserver",
"gameserver_namespace": "idler-develop",
"gameserver_service_account": "gameserver",
"general_tls_cert": "arn:aws:acm:eu-west-1:000011112222:certificate/00000000-0000-0000-0000-000000000000",
"metaplay_infra_version": "0.2.11",
"metaplay_required_sdk_version": "14.0.0",
"metaplay_supported_chart_versions": [
"0.4.4",
"0.4.5",
"0.4.6",
"0.4.7",
"0.5.0",
"0.5.1"
],
"oauth2_client_enabled": true,
"server_hostname": "idler-develop.dev.example.org",
"server_tls_cert": "arn:aws:acm:eu-west-1:000011112222:certificate/00000000-0000-0000-0000-000000000000",
"subdomain": "idler-develop",
"tenant_environment": "develop",
"tenant_organization": "idler",
"tenant_project": "idler",
"timestamp": "2024-02-26T00:30:01Z",
"values_infra": "...redacted base64 encoded YAML file containing useful infrastructure-specific values..."
}
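For example, the deployment_kubeconfig value can be decoded into a standalone kubeconfig file for later use (a sketch; adjust the output path to your liking):
$ aws secretsmanager get-secret-value --secret-id example/dev/deployments/idler-develop --region eu-west-1 | jq -r .SecretString | jq -r .deployment_kubeconfig | base64 -d > /path/to/file/with/kubeconfig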
Danger!
Please note that interacting directly with the database can be dangerous. Please be mindful of the risks when applying the approach described here.
By default, in our setup, the AWS RDS database cluster being provisioned is not publicly accessible. This means that if you wish to access the database directly, you need to connect from within the same AWS VPC as the RDS endpoint and have the appropriate RDS security group rules in place to allow access.
An easy way of accomplishing this is to run a temporary pod with a MySQL client in the Kubernetes cluster. Additionally, you will need the database credentials, which can be retrieved from the Kubernetes secret of the deployment.
In our example case, using the deployment_kubeconfig value from the AWS Secrets Manager secret, we can achieve this as follows:
$ export KUBECONFIG=/path/to/file/with/kubeconfig
$ kubectl get secret -n idler-develop -o yaml metaplay-config
apiVersion: v1
data:
metaplay-config.json: ...base64 encoded data....
metaplay-helm-hints.json: ...base64 encoded data...
metaplay-infra-options.yaml: ...base64 encoded data...
tls_arn: ...base64 encoded data...
values-infra.yaml: ...base64 encoded data...
kind: Secret
metadata:
annotations:
meta.helm.sh/release-name: idler-develop-env-init
meta.helm.sh/release-namespace: metaplay-env-init
creationTimestamp: "2024-02-26T00:28:12Z"
labels:
app: metaplay-env-init
app.kubernetes.io/components: secrets
app.kubernetes.io/instance: idler-develop-env-init
app.kubernetes.io/managed-by: Helm
helm.sh/chart: metaplay-env-init-0.1.1
name: metaplay-config
namespace: idler-develop
resourceVersion: "19374"
uid: d21614ce-3136-4ca7-a50c-fd306315cf7e
type: Opaque
You can obtain the credentials from the metaplay-config.json entry in the secret; note that the values need to be base64 decoded. A minimal sketch for decoding the entry, assuming jq is available locally:
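$ kubectl get secret -n idler-develop metaplay-config -o json | jq -r '.data["metaplay-config.json"]' | base64 -d | jq .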
With the credentials at hand, you can use kubectl to start a temporary pod running a MySQL Docker image:
$ kubectl run \
-it \
--rm \
--image=mysql:latest \
--restart=Never \
-n idler-develop \
mysql-client -- bash
If you don't see a command prompt, try pressing enter.
root@mysql-client:/# mysql -h metaplay-dev-rds-1.cluster-aaaaaaaa00.eu-west-1.rds.amazonaws.com -u idler-develop -p idler-develop
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 623623
Server version: 5.7.12 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show tables;
+-------------------------+
| Tables_in_idler-develop |
+-------------------------+
| AuthEntries |
| GlobalStates |
| InAppPurchases |
| MetaInfo |
| Players |
| __EFMigrationsHistory |
+-------------------------+
6 rows in set (0.00 sec)
mysql>
If you wish to interact with the remote database directly, you can leverage port forwarding to reach the game database from your machine. In this case, one possibility is to run socat to carry the connection over. With a functioning kubeconfig, you can do the following:
$ kubectl run \
--restart=Never \
--image=alpine/socat \
mysql-proxy -- \
-d -d \
tcp-listen:3306,fork,reuseaddr \
tcp-connect:metaplay-dev-rds-1.cluster-aaaaaaaa00.eu-west-1.rds.amazonaws.com:3306
pod/mysql-proxy created
$ kubectl port-forward pod/mysql-proxy 8080:3306
Forwarding from 127.0.0.1:8080 -> 3306
Forwarding from [::1]:8080 -> 3306
After this, you can use any tools you wish to interact with the remote database by connecting to port 8080 on your local machine. For instance, you can easily obtain a copy of your database into a local MySQL instance in the following way:
mysqldump -h 127.0.0.1 -P 8080 -u idler-develop -p idler-develop | mysql -h 127.0.0.1 -P 3306 -u root -p idler-develop
The commands can all be wrapped into a single shell script:
#!/bin/bash
trap cleanup INT

# Clean up the temporary proxy pod when the tunnel is interrupted
function cleanup() {
  if [ "$MODE" == "tunnel" ] && [ ! -z "$NAMESPACE" ]; then
    echo "Stopping mysql-proxy pod..."
    kubectl delete -n $NAMESPACE pod mysql-proxy
  fi
  exit 0
}

if [ -z "$1" ]; then
  echo "Usage: $0 NAMESPACE [client|tunnel] [shard]"
  exit 0
fi

NAMESPACE=$1
MODE="client"
if [ "$2" == "tunnel" ]; then
  MODE="tunnel"
fi
SHARD=0
if [ ! -z "$3" ]; then
  SHARD=$3
fi
echo "Shard is $SHARD"

# Read the connection details of the selected shard from the deployment's Kubernetes secret
CONFIGS=$(kubectl get secret -n $NAMESPACE metaplay-config -o json | jq -r '.data["metaplay-infra-options.yaml"]' | base64 -d | yq -o=json)
DB_HOST=$(echo $CONFIGS | jq -r .Database.Shards[$SHARD].ReadWriteHost)
DB_NAME=$(echo $CONFIGS | jq -r .Database.Shards[$SHARD].DatabaseName)
DB_USER=$(echo $CONFIGS | jq -r .Database.Shards[$SHARD].UserId)
DB_PASS=$(echo $CONFIGS | jq -r .Database.Shards[$SHARD].Password)
echo "DB host is ${DB_HOST}"

if [ "$MODE" == "tunnel" ]; then
  # Tunnel mode: run a socat proxy pod and forward local port 3306 to the database
  echo "Starting mysql-proxy pod..."
  kubectl run \
    --restart=Never \
    --image=alpine/socat \
    -n $NAMESPACE \
    mysql-proxy -- \
    -d -d \
    tcp-listen:3306,fork,reuseaddr \
    tcp-connect:$DB_HOST:3306
  echo "Waiting for mysql-proxy to become ready..."
  kubectl wait -n $NAMESPACE --for=condition=ready pod/mysql-proxy
  echo "Database connection details:"
  echo " Host: localhost"
  echo " Port: 3306"
  echo " Username: $DB_USER"
  echo " Password: kubectl get secret -n $NAMESPACE metaplay-config -o json | jq -r '.data[\"metaplay-infra-options.yaml\"]' | base64 -d | yq -o=json | jq -r .Database.Shards[$SHARD].Password"
  echo ""
  echo "Binding database to local port 3306..."
  echo "Shut down tunnel with Ctrl-C"
  kubectl port-forward -n $NAMESPACE pod/mysql-proxy 3306:3306
else
  # Client mode: run an interactive MySQL client pod connected to the database
  echo "Starting MySQL client..."
  kubectl run \
    -it \
    --rm \
    --image=mysql:5.7 \
    --restart=Never \
    -n $NAMESPACE \
    mysql-client -- mysql -h $DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME
fi
You can then use the script to obtain an interactive MySQL client:
$ ./connect.sh idler-develop
Starting MySQL client...
If you don't see a command prompt, try pressing enter.
mysql>
Alternatively, it can be run in tunnel mode, where the remote database is bound to port 3306 on your localhost:
$ ./connect.sh idler-develop tunnel
Starting mysql-proxy pod...
pod/mysql-proxy created
Waiting for mysql-proxy to become ready...
pod/mysql-proxy condition met
Database connection details:
Host: localhost
Port: 3306
Username: idler_develop
Password: kubectl get secret -n idler-develop metaplay-config -o json | jq -r '.data["metaplay-infra-options.yaml"]' | base64 -d | yq -o=json | jq -r .Database.Shards[0].Password
Binding database to local port 3306...
Shut down tunnel with Ctrl-C
Forwarding from 127.0.0.1:3306 -> 3306
Forwarding from [::1]:3306 -> 3306
^CStopping mysql-proxy pod...
pod "mysql-proxy" deleted
You can easily remove all of the deployed infrastructure with Terraform. However, because of how game server environments are handled, the destruction should also be carried out in multiple tiers.
Firstly, all of the metaplay-gameserver Helm chart deployments should be safely removed from the Kubernetes cluster, and final dumps of the game databases should be taken if data is to be preserved.
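For example, the releases in the idler-develop namespace could be listed and removed as follows (the release name is a placeholder; check the helm ls output for the actual one):
$ helm ls -n idler-develop
$ helm uninstall <release-name> -n idler-develop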
To remove all the infrastructure, you can execute terraform destroy, which lets Terraform build a dependency graph of all resources and indiscriminately dismantle the infrastructure. If you need more granular control, it's possible to dismantle the infrastructure in parts, starting from the game deployments and Kubernetes services, then the cluster itself, followed by the database, and finally removing all remaining infrastructure. An example of this flow is presented below.
Pro Tip
As Terraform translates declarative code into actionable API calls, it can sometimes get stuck. While things mostly work, occasional manual intervention is required to untangle a mess. The most frequent culprits are misbehaving cloud APIs, internal Terraform sequencing issues (especially in multi-provider scenarios), or cases where someone has manually introduced new resources that block the removal of Terraform-managed resources.
In these cases, a combination of terraform state list, manual removal of cloud resources through the web consoles, removal of resources from the Terraform state with terraform state rm, and patience is the most common way of getting the situation sorted.
$ terraform destroy -target=module.infra.module.deployments
$ terraform destroy -target=module.infra.module.services
$ terraform destroy -target=module.infra.module.cluster
$ terraform destroy -target=module.infra.module.database
$ terraform destroy -target=module.infra.module.base
$ terraform destroy
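If a destroy run does get stuck in the way described in the Pro Tip above, inspecting and pruning the Terraform state is usually the way out; the resource address below is a placeholder, not an actual address from this stack:
$ terraform state list
$ terraform state rm '<offending.resource.address>'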
Q: When doing a terraform apply, I get an error saying:
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
on .terraform/modules/infra/components/services/01-metaplay-services.tf line 11, in resource "helm_release" "services":
11: resource "helm_release" "services" {
A: This indicates that the connection to the Kubernetes cluster failed. In the Terraform modules, we use the AWS IAM authenticator to authenticate to Kubernetes, which means that your AWS session may have timed out. This can happen occasionally if the terraform apply run was very long or if you have been running multiple Terraform calls in the past hour, in which case the cached credentials may have expired. Run Terraform again, and the refreshed credentials will likely allow you to finish the infrastructure deployment.
Q: What if I want to deploy infrastructure in China?
A: Starting with infra-modules v0.1.1, we provide experimental support for deploying infrastructure stacks in China. To be able to deploy in China, you need to obtain separate AWS China accounts and an ICP license. Once you have obtained those, you can deploy the environments/aws-region module, and as long as you provide a region parameter reflecting the Chinese region you wish to deploy to, the module will attempt to make the appropriate changes to deploy a stack.
Please note that there are many small nuances between AWS China and the rest of the regions (e.g., you must obtain DNS zones from an external registrar and use TLS certificates signed by a 3rd party instead of AWS ACM-managed certificates). In short, if you are thinking of deploying to China, please reach out to us to discuss how to do it.