To run a load test using the approach from this chapter, we first need to ensure that a few prerequisites are satisfied.
Again, it is possible to functionally test the load tester on the Metaplay-hosted platform, but its resources are restricted, so running full-fledged load tests from the shared platform is not advisable.
For this guide, we will assume that we have a development infrastructure stack available to us at [dev.metaplay.io](http://dev.metaplay.io) and that it has been deployed as described in the guide on deploying cloud infrastructure. Additionally, we assume that builds of the game server and the bot clients are running and deployed as `idler-develop` (so that our game endpoint is at `idler-develop.dev.metaplay.io:9339`).
With our prerequisites met, we can run our load test using the `metaplay-loadtest` Helm chart. We will first check that we have our game deployments running successfully:
```shell
$ helm ls -n idler-develop
NAME           NAMESPACE      REVISION  UPDATED                                STATUS    CHART                      APP VERSION
idler-develop  idler-develop  92        2020-05-18 09:36:48.547475 +0300 EEST  deployed  metaplay-gameserver-0.0.8
```
We can then get details of the specific game server version that is running. You can either check the description of the game server pod on Kubernetes or you can query the details from the Helm deployment:
```shell
$ helm get values -n idler-develop idler-develop -o json | jq .image
{
  "pullSecrets": "aws-ecr",
  "repository": "000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-server",
  "tag": "36f3dcb4cf54dba037f4b5f3bcc88e3fbe3d57d1"
}
```
Typically, you will want to store the bot client images near the server images. In the above case, our AWS Elastic Container Registry for the server is named `metaplay-idler-develop-server`, and we store our bot client images correspondingly in the `metaplay-idler-develop-botclient` registry. This allows us to easily pick out a bot client for the running server using the same tag.
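Assuming the `-server`/`-botclient` naming convention above, the matching bot client image reference can be derived from the server values with simple string manipulation. The repository and tag below are the illustrative values from this guide; in practice you would read them from the Helm deployment:

```shell
# Illustrative values copied from the example above; in practice you would
# read them with: helm get values -n idler-develop idler-develop -o json | jq -r '.image.repository, .image.tag'
SERVER_REPO="000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-server"
TAG="36f3dcb4cf54dba037f4b5f3bcc88e3fbe3d57d1"

# Swap the "-server" suffix for "-botclient" to get the corresponding registry
BOTCLIENT_REPO="${SERVER_REPO%-server}-botclient"
echo "${BOTCLIENT_REPO}:${TAG}"
```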
We can also confirm the details of our game endpoint in the same way:
```shell
$ helm get values -n idler-develop idler-develop -o json | jq .service
{
  "enabled": true,
  "hostname": "idler-develop.dev.metaplay.io",
  "port": 9339,
  "tls": {
    "enabled": true,
    "sslCertArn": "arn:aws:acm:eu-west-1:000011112222:certificate/46452642-66cf-4507-aef9-fc00209ce310"
  }
}
```
Using this information, we can craft a Helm values file for our load test. You can check the `metaplay-loadtest` repository for more details on the values and their defaults, but for our case, the following minimal values file is enough to get going:
```shell
$ cat idler-develop-loadtest.yaml
botclients:
  replicas: 1
  image:
    repository: "000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-botclient"
    tag: "36f3dcb4cf54dba037f4b5f3bcc88e3fbe3d57d1"
  targetHost: "idler-develop.dev.metaplay.io"
  targetPort: 9339
  targetEnableTls: true
  cdnBaseUrl: "https://idler-develop-assets.dev.metaplay.io/GameConfig"
  botsPerPod: 1
```
The above configuration is rather straightforward: it defines the specific bot client image to use and targets it at our game server endpoint. We also restrict the number of bot client pods (replicas) to 1 and run only a single bot per pod. Finally, our game configs are stored in the assets S3 bucket, which is shared via the `cdnBaseUrl`.
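When scaling the test up, the total number of simulated clients is `replicas × botsPerPod`. As a sketch, a values file along these lines (the counts are illustrative, not a recommendation) would run 500 bots in total:

```yaml
botclients:
  # 10 pods × 50 bots per pod = 500 simulated clients in total
  replicas: 10
  botsPerPod: 50
```

Running many bots in a single pod is lighter on cluster resources, while more replicas spread the load across nodes; the right balance depends on how heavy a single bot is.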
INFO
Note that when defining the required resources for a load test and adjusting the replica count, you must make sure that sufficient resources are available in your Kubernetes cluster. If you have a `cluster-autoscaler` configured and have room to grow, additional nodes will be provisioned based on the resource requests, but this can take a while as the autoscaler adjusts the cluster size. You can also manually grow the cluster to satisfy the requirements, but remember to scale the cluster back down after your load test to save on costs.
Armed with this Helm values file, we can then trigger a load test on the same cluster. You may want to create dedicated namespaces for your tests, but in this case, we are simply going to run the chart in the `default` namespace:
```shell
$ helm upgrade --install \
    --repo "https://metaplay-charts-stable.s3-eu-west-1.amazonaws.com/" \
    --version "0.0.2" \
    -f idler-develop-loadtest.yaml \
    idler-develop-loadtest metaplay-loadtest
Release "idler-develop-loadtest" does not exist. Installing it now.
NAME: idler-develop-loadtest
LAST DEPLOYED: Mon May 18 10:04:32 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
We now have our load test running on Kubernetes.
To end a load test, you can simply delete the Helm deployment:
```shell
$ helm ls
NAME                    NAMESPACE  REVISION  UPDATED                                STATUS    CHART                    APP VERSION
idler-develop-loadtest  default    1         2020-05-18 10:04:32.081662 +0300 EEST  deployed  metaplay-loadtest-0.0.1
$ helm delete idler-develop-loadtest
release "idler-develop-loadtest" uninstalled
```
After this, you should scale your cluster down, either by allowing the `cluster-autoscaler` to remove nodes or by manually removing unneeded nodes.
By default, the bot clients write their logs to `stdout`. If you have the standard infrastructure stack from Deploying Infrastructure at your disposal, you can use Grafana as the dashboard through which to obtain the logs, e.g. under Explore > Loki. Logs from the above load test could be queried with the LogQL query `{job="default/botclient"}`.
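Building on that query, LogQL also supports line filters and aggregations, which can help narrow down problems during a test. The queries below are sketches, assuming the same `default/botclient` job label and that the bot client logs contain the word "Error" on failure:

```logql
# All bot client logs
{job="default/botclient"}

# Only lines mentioning errors
{job="default/botclient"} |= "Error"

# Per-second rate of error lines over the last minute
rate({job="default/botclient"} |= "Error" [1m])
```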
If the bot client has been written to expose Prometheus metrics, you can enable metric scraping in the Helm values file by adding the following parameters:
```yaml
botclients:
  prometheus:
    enabled: true
    port: 9090
```
This will add the required annotations to the bot client pods to tell Prometheus to scrape the metrics.
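For reference, the conventional annotations used by Prometheus's Kubernetes service discovery look along these lines (a sketch of the convention; the exact annotations the chart adds may differ):

```yaml
metadata:
  annotations:
    # Conventional annotations telling Prometheus's Kubernetes service
    # discovery to scrape this pod, and on which port
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
```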