Target Audience
This page is primarily intended for users of the Self Hosting (Private Cloud) tier package. If you are using any of the Metaplay SaaS plans, information from this page may not be directly relevant to your needs.
To run a load test using the approach from this chapter, we first need to ensure that some prerequisites are satisfied.

Again, it is possible to functionally test the load tester on the Metaplay-hosted platform, but we have put a restriction on computing resources there, so running full-fledged load tests from the Metaplay-hosted platform is not advisable.

For this guide, we assume that an infrastructure stack is available to us at `dev.metaplay.io` and that it has been deployed in the fashion described by the Deploying Infrastructure guide. Additionally, we assume a game server named `idler-develop` has been deployed and is running (so that our game endpoint is at `idler-develop.dev.metaplay.io:9339`).
With all prerequisites met, we can run a load test using the `metaplay-loadtest` Helm chart. We will first check that our game server deployment is running successfully:
```shell
$ helm ls -n idler-develop
NAME           NAMESPACE      REVISION  UPDATED                                STATUS    CHART                      APP VERSION
idler-develop  idler-develop  92        2020-05-18 09:36:48.547475 +0300 EEST  deployed  metaplay-gameserver-0.6.1
```
To get details of the specific game server image that has been deployed, you can either check the game server pod spec in Kubernetes or query the details from the Helm deployment:
```shell
$ helm get values -n idler-develop --all idler-develop -o json | jq .image
{
  "pullPolicy": "Always",
  "repository": "000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-server",
  "tag": "server"
}
```
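As an alternative to querying Helm, you can read the image straight from the game server pod spec with `kubectl`. This requires access to the live cluster, and the output format below is just what a typical invocation produces:

```shell
# List the game server pods together with the image of their first container.
# This inspects the pod spec directly, so it reflects what is actually running.
kubectl get pods -n idler-develop \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```

If the game server pod has multiple containers, adjust the `containers[0]` index accordingly.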
We can also confirm the details of our game server endpoint in the same way:
```shell
$ helm get values -n idler-develop --all idler-develop -o json | jq .service
```

Example output:

```json
{
  "enabled": true,
  "ipv6Enabled": true,
  "port": [
    {
      "name": "game",
      "port": 9339,
      "tls": true
    },
    {
      "name": "game-443",
      "port": 443,
      "tls": true
    }
  ],
  "tls": {
    "enabled": true
  }
}
```
Using the information above, we can create a YAML file to be used by the `metaplay-loadtest` Helm chart. In this example, we'll name the file `idler-develop-loadtest.yaml` and define a minimal configuration:
```shell
$ cat idler-develop-loadtest.yaml
environmentFamily: Development
botclients:
  replicas: 1
  image:
    repository: "000011112222.dkr.ecr.eu-west-1.amazonaws.com/metaplay-idler-develop-server"
    tag: "botclient"
  targetHost: "idler-develop.dev.metaplay.io"
  targetPort: 9339
  targetEnableTls: true
  cdnBaseUrl: "https://idler-develop-assets.dev.metaplay.io/GameConfig"
  botsPerPod: 1
```
The configuration above is rather straightforward. It defines the specific bot client image to use and targets it against our game server endpoint. We also restrict the number of replicas (i.e. the number of pods) to 1 and specify that we only want one bot per pod. Finally, we know that the game configs are stored in the assets S3 bucket, which is shared via `botclients.cdnBaseUrl`.
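When sizing a larger run, the total number of concurrent bots is simply `replicas × botsPerPod`. A quick sketch of the arithmetic, using purely illustrative numbers rather than recommended values:

```shell
# Illustrative sizing: total concurrent bots = replicas * botsPerPod.
# The values below are examples, not recommendations.
replicas=50
bots_per_pod=20
total_bots=$((replicas * bots_per_pod))
echo "total bots: ${total_bots}"   # total bots: 1000
```

Whether to grow `replicas` or `botsPerPod` first depends on how heavy a single bot is; more pods spread the client-side load over more nodes.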
Scaling
Do note that when defining the required resources for a load test and adjusting the replica count, you must make sure that you have a sufficient amount of resources available in your Kubernetes cluster. If you have a `cluster-autoscaler` configured and have room to grow, additional nodes will be provisioned based on the resource requests, but this can take a while as the autoscaler adjusts the cluster size. You can also manually grow the cluster to satisfy the requirements, but do remember to scale the cluster down after your load test to save on costs.
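Before scaling up the replica count, it can be useful to get a quick view of the current node count and how much of their capacity is already allocated. One way to do this with plain `kubectl` (requires cluster access):

```shell
# Show the current nodes in the cluster.
kubectl get nodes

# Show per-node resource allocation; the "Allocated resources" section of
# the describe output summarizes requested CPU and memory per node.
kubectl describe nodes | grep -A 5 "Allocated resources"
```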
Armed with `idler-develop-loadtest.yaml`, we can then start a load test within the same cluster. You may want to create dedicated namespaces for your load test, as sharing a namespace between bots and the game server may lead to inaccurate results. For simplicity, we are just going to run the `metaplay-loadtest` Helm chart in the `default` namespace:
```shell
$ helm upgrade --install \
    --repo "https://charts.metaplay.dev/" \
    --version "0.4.1" \
    -f idler-develop-loadtest.yaml \
    idler-develop-loadtest metaplay-loadtest
Release "idler-develop-loadtest" does not exist. Installing it now.
NAME: idler-develop-loadtest
LAST DEPLOYED: Mon May 18 10:04:32 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
We now have our load test running in the `default` namespace in Kubernetes.
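To confirm that the bot client pods have actually started, you can list the pods in the namespace (requires cluster access; the exact pod names depend on what the chart creates):

```shell
# List the pods created by the load test release in the default namespace
# and check that they reach the Running state.
kubectl get pods -n default
```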
To end a load test, you can simply delete the Helm deployment:
```shell
$ helm ls
NAME                    NAMESPACE  REVISION  UPDATED                                STATUS    CHART                    APP VERSION
idler-develop-loadtest  default    1         2020-05-18 10:04:32.081662 +0300 EEST  deployed  metaplay-loadtest-0.4.1

$ helm delete idler-develop-loadtest
release "idler-develop-loadtest" uninstalled
```
After completing the chart deletion step, you should scale your cluster down, either by allowing the `cluster-autoscaler` to remove nodes or by manually removing unneeded ones yourself.
The bot clients output their logs to `stdout` by default. If you have the standard infrastructure stack from Deploying Infrastructure at your disposal, you can leverage Grafana as the dashboard through which to obtain the logs, e.g. under Explore > Loki. Querying the logs from the above load test could be done with the LogQL query `{job="default/botclient"}`.
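If you prefer the command line over Grafana, the same LogQL query can be run with Loki's `logcli` tool. The Loki address below is a placeholder; in practice you would point it at your stack's Loki endpoint, typically via a port-forward or the cluster-internal service address:

```shell
# Point logcli at the Loki instance (placeholder address; adjust to your stack).
export LOKI_ADDR="http://localhost:3100"

# Run the same LogQL query used in Grafana to fetch the bot client logs.
logcli query '{job="default/botclient"}'
```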
If the bot client has been written to expose Prometheus metrics, you can enable metric scraping in the Helm values file by adding the following parameters:
```yaml
botclients:
  prometheus:
    enabled: true
    port: 9090
```
This will add the required annotations to the bot client pods to tell Prometheus to scrape the metrics.
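For reference, annotation-driven scraping conventionally relies on pod annotations along these lines. The exact keys the chart emits and that your Prometheus honors depend on your scrape configuration, so treat this fragment as an illustration rather than the chart's guaranteed output:

```yaml
# Conventional pod annotations for annotation-based Prometheus scraping
# (illustrative; verify against your Prometheus scrape configuration).
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
```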