The Metaplay backend uses a Database to achieve long-term persistence of crucial game data, such as player states. Depending on your game's data model, you may need to add data to the Database manually when creating custom entities, or run manual queries against the Database. This page details how the Database is organized and the design philosophy behind it.
Amazon's Aurora MySQL 8.0 is used as the underlying storage technology when running in the cloud, and SQLite is used locally by default to minimize the dependencies that need to be installed.
Entity Framework Core is used for managing the Database table Schemas and Schema migrations, whereas Dapper is used at runtime to ensure high performance of the queries.
In order to achieve scalability, the Database is used with a key-value orientation, and the storage is sharded into multiple individual Databases.
This allows us to achieve practically infinite scalability on the persistence layer, by dividing the Database into as many Shards as is needed. The persisted items use a combination of a partition key and a primary key: the partition key dictates which Shard an item is placed in, and the primary key allows fast indexed access within a Shard.
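As an illustrative sketch only (not Metaplay's actual implementation), shard selection can be thought of as hashing the partition key with a stable hash function and taking the result modulo the number of Shards:

```csharp
// Illustrative sketch: map a partition key to a Shard index.
// Metaplay's actual hashing scheme may differ.
static int GetShardIndex(string partitionKey, int numShards)
{
    // Use a stable hash -- string.GetHashCode() is not stable across
    // process runs in modern .NET -- e.g. FNV-1a over the UTF-8 bytes:
    const uint fnvOffset = 2166136261;
    const uint fnvPrime = 16777619;
    uint hash = fnvOffset;
    foreach (byte b in System.Text.Encoding.UTF8.GetBytes(partitionKey))
        hash = (hash ^ b) * fnvPrime;
    return (int)(hash % (uint)numShards);
}
```

The important property is that all items sharing a partition key deterministically land on the same Shard, so related items can be queried together without touching the other Shards.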
The Database also supports using read-only replicas in addition to the primary ones. The read-only replicas are generally used for long-running or complex queries, to minimize the risk of the primary read-write replicas being negatively impacted in case lots of expensive queries are dispatched simultaneously. The Database queries also specify a priority level where the lower levels are throttled more aggressively to avoid impacting operations crucial for the availability of the game.
Most items persisted in the Database are serialized into a binary blob using Metaplay's serializer for the bulk of the data. This is done for performance reasons and to get better control over how game data is versioned over time. Some additional columns may be included for indexing to achieve faster Database operations.
Metaplay includes a set of default tables and columns in the Schema, but you can expand it by introducing your own tables and indexes.
Here's an example of how you can declare custom tables in the Database:
// Derive an item class from IPersistedItem to inform Metaplay of our persisted item.
// Use the [Table] attribute to inform EFCore that this class maps to a table in the Database.
[Table("MyTable")]
[Index(nameof(PlayerId), nameof(Key))] // Example of a compound index on (PlayerId, Key)
public class MyPersistedItem : IPersistedItem
{
    [Key] // This is the unique primary key for the item (creates an index implicitly)
    [PartitionKey] // This is also the partition key for the item
    [Required] // The primary key must be specified
    [Column(TypeName = "varchar(160)")] // Explicitly declare the type for SQL
    public string Key { get; set; }

    // Example of another required field -- EntityIds are persisted as strings in the db
    [Required]
    [Column(TypeName = "varchar(64)")]
    public string PlayerId { get; set; }

    // Helper accessor for parsing the PlayerId back to an EntityId
    public EntityId PlayerEntityId => EntityId.ParseFromString(PlayerId);

    MyPersistedItem() { } // parameterless constructor is required for deserialization

    public MyPersistedItem(string key, EntityId playerId)
    {
        Key = key;
        PlayerId = playerId.ToString(); // convert to string for persisting
    }
}
We recommend using basic types as much as possible for your persisted items, as the translations between C# types and SQL can be hard to reason about and cause subtle issues to debug. In short, try to stick to the following types: int, string, bool, byte[] (for binary blobs), and DateTime.
The primary way to access the Database at runtime is through the MetaDatabase class, as shown below.
// Create a helper to access the Database on the Normal priority level.
MetaDatabase db = MetaDatabase.Get(QueryPriority.Normal);
// Insert a new item in the Database:
await db.InsertAsync(new MyPersistedItem(..));
await db.InsertOrIgnore(new MyPersistedItem(..));
await db.InsertOrUpdate(new MyPersistedItem(..));
// Querying an item & checking for existence:
bool exists = await db.ExistsAsync<MyPersistedItem>("my-item-key");
MyPersistedItem item = await db.TryGetAsync<MyPersistedItem>("my-item-key");
// Remove an item
await db.RemoveItem<MyPersistedItem>("my-item-key");
The Metaplay SDK supports multiple priority levels for different use cases. You should always use the lowest priority that still fits the needs of your use case.

QueryPriority.Normal -- Use for operations that are crucial for players to be able to play the game.
QueryPriority.Low -- Use for operations where getting a result is important but not critical. For example, performing Database queries to be visualized in the LiveOps Dashboard.
QueryPriority.Lowest -- Use for long-running or expensive background queries where time-to-completion doesn't need to be fast.

You can also introduce more complicated Database access methods on top of the MetaDatabase class by adding new extension methods. The source code of MetaplayServerDatabaseExtensions has many good examples to get started.
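As a minimal sketch of the extension-method pattern, the method name below is made up for illustration, but the body only delegates to the TryGetAsync API shown earlier:

```csharp
// Hypothetical extension method for game-specific Database access.
// See MetaplayServerDatabaseExtensions for real, more involved examples.
public static class MyDatabaseExtensions
{
    public static Task<MyPersistedItem> TryGetMyItemAsync(this MetaDatabase db, string key) =>
        db.TryGetAsync<MyPersistedItem>(key);
}

// Usage (e.g. from a dashboard-facing code path, on the Low priority level):
// MyPersistedItem item = await MetaDatabase.Get(QueryPriority.Low).TryGetMyItemAsync("my-item-key");
```

Wrapping queries like this keeps the call sites terse and gives you a single place to change the query logic later.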
As your project evolves, the Database Schema is also likely to get more complicated as new features are added.
When making changes to the C# classes that make up the Schema, you should also create migration scripts to account for such changes:
Backend/Server$ dotnet ef migrations add AddedSomeFeature
After creating the migration scripts, add the generated and updated files to version control:
Migrations/GameDbContextModelSnapshot.cs
Migrations/<datetime>_AddedSomeFeature.cs
Migrations/<datetime>_AddedSomeFeature.Designer.cs
Note that the server will execute this migration against all Shards when it starts up. Since this process happens automatically as part of the server startup, the migrations don't need to be explicitly run against the environments.
For more information on managing the Schema migrations, see Entity Framework's documentation on migrations.
If Schema migrations are introduced in parallel branches, it is considered best practice to flatten or linearize the migrations when merging the branches into your main development branch. While EFCore should be capable of handling migrations from divergent branches, subtle and hard-to-debug issues can arise from this.
When working with a game that has millions of players or more in its Database, the Schema migrations on the high-volume tables can consume a significant amount of time. There are ways to mitigate this, however.
Starting with MySQL 8.0, the most typical changes to a table's Schema can be done by modifying only the metadata, making them practically instant. This includes adding and removing columns from existing tables. See MySQL's documentation on Online DDL Operations for more details on the operations.
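For example, MySQL 8.0 lets you request the instant algorithm explicitly, so the statement fails with an error instead of silently falling back to a slow table-copying operation. The table and column names below are hypothetical:

```sql
-- Request a metadata-only change: errors out if the ALTER cannot be
-- performed instantly, rather than falling back to copying the table.
ALTER TABLE MyTable
  ADD COLUMN NewColumn VARCHAR(64) NULL,
  ALGORITHM=INSTANT;
```

This is useful as a safety net on high-volume tables, where an accidental table copy could block the Schema migration for a long time.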
TIP
To avoid surprises with Database migrations taking an unexpectedly long time, you should also always test the migrations in a staging environment with a representative Database before rolling the changes out into production.
Sometimes, it can be useful to modify the migrations generated by dotnet ef migrations add. You can do this by manually modifying the Up() method in the generated code under Backend/Server/Migrations. Here is an example of truncating the PlayerIncidents table before applying a migration. By emptying the table, the migration is guaranteed to run fast.
protected override void Up(MigrationBuilder migrationBuilder)
{
    if (migrationBuilder.IsSqlite())
        migrationBuilder.Sql("DELETE FROM PlayerIncidents");
    else
        migrationBuilder.Sql("TRUNCATE TABLE PlayerIncidents");

    // ... rest of the migration
}
During development, you may want to clean up a messy Database Schema migration history before publishing the game. This can be achieved by:

1. Deleting the existing migration files under Backend/Server/Migrations/.
2. Re-creating a single initial migration: Backend/Server$ dotnet ef migrations add InitialCreate.
3. Bumping Database:MasterVersion to ensure a clean Database in all environments.

WARNING
This will forcibly remove all of the data in your Databases, so only do this if you're willing to lose it!
During the early phases of development, it can be easier to occasionally reset the contents of the Database instead of migrating the contents more gracefully as feature development happens.
The simplest way to force the Database to be reset is to bump the Database:MasterVersion
in the project's Options.base.yaml
. The next time the server is started, it notices the version discrepancy and forcibly resets the Database. This change can be committed to your version control and it will then reset all the environments as the version gets deployed into them, including running the server locally on the developers' machines. This way, you don't need to explicitly remind everyone on the team to remember to reset their Databases.
Stay Safe
By default, the production environments have a safety flag set to not allow the Database reset to happen, to avoid accidentally deleting the contents. The Database can still be restored from backups, but it's not a fun or simple process.
If you want to reset your production Database, you can disable the extra check with the following:
Database:
  NukeOnVersionMismatch: true # REMOVE THIS WHEN NO LONGER NEEDED !!
If you only wish to reset a particular type of entity in the Database, for example, all player states instead of the whole Database, you can achieve this by bumping the minimum version in that entity's [SupportedSchemaVersions(minVersion, maxVersion)] attribute. See Entity Schema Versions and Migrations for more details on entity Schemas and their migrations.
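As a sketch, with illustrative version numbers and a placeholder class name (where exactly the attribute lives depends on your entity setup):

```csharp
// Raising the minimum supported Schema version from 1 to 2 causes any
// persisted states older than version 2 to be treated as unsupported
// and reset, while leaving all other entity types untouched.
[SupportedSchemaVersions(2, 2)] // was [SupportedSchemaVersions(1, 2)]
public class PlayerModel /* : your persisted player model base */
{
    // ...
}
```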
The Database is configured through the runtime options. If you're not sure how that works, check out Working with Runtime Options first.
Below are some of the more useful options you may wish to change. Be mindful of which environment(s) you change the values for, i.e., which YAML files you modify.
Database:
  # Specify which Database engine to use.
  Backend: Sqlite # or MySql
  # Specify how many Shards should be used by the game server.
  NumActiveShards: 2
  # Specify SQLite to use in-memory storage, useful for local load tests due to better performance.
  SqliteInMemory: true
  # Specify the master version of the Database -- the Database gets reset on a version mismatch.
  MasterVersion: 10
  # Specify whether the Database nuking on master version mismatch is allowed.
  NukeOnVersionMismatch: true
  # Specify the maximum number of simultaneous queries for the query priority levels:
  MaxConnectionsLowPriority: 4
  MaxConnectionsLowestPriority: 1
  # Specify which compression algorithm to use for binary blob payloads.
  CompressionAlgorithm: Zstandard # or LZ4
By default, Metaplay uses MySQL 8.0 in the cloud and the embedded SQLite when running locally. While the two implement the SQL language in mostly the same way, they do use different flavors of SQL and have some other behavioral differences worth knowing.
To mitigate the differences, we strongly recommend testing your game server against both Database backends during development.

Some of the more common pitfalls include:

The C# byte[] type maps to the MySQL type longblob to allow larger payloads.
Strings should use the utf8mb4 character set to ensure UTF-8 compatibility. Declare string columns with [Column(TypeName = "varchar(<len>)")] to avoid character set-related issues.
DateTimes can lose some precision when being persisted in the Database. Only use them for approximate times.

To allow supporting Database-specific SQL query syntax, Metaplay abstracts the non-standard queries behind the DatabaseBackend abstract class, with backend-specific implementations DatabaseBackendMysql and DatabaseBackendSqlite. The same pattern can be applied to any game-specific queries as well.
When running game server load tests or benchmarks locally, for example with BotClient Testing, the default SQLite on-disk storage can easily become a bottleneck due to its limited performance.
The easiest workaround is to run the game server using SQLite's in-memory mode which performs much better:
Backend/Server$ dotnet run --Database:Backend=Sqlite --Database:SqliteInMemory=true
Alternatively, you can also set up MySQL locally and use that. See the instructions below:
When developing new features that require changes to the Database Schema or non-trivial new operations with the Database, it's a good idea to also test the changes using a real MySQL Database locally. MySQL and SQLite use slightly different flavors of SQL, and catching those issues is easiest by testing the game server against both Databases.
The steps to use MySQL locally are:
Install MySQL 8.0. You can follow the official installation guide.
Set up the installation with the required Database Shards and users with access to them.
The names of the Databases to use are <projectName>_<shard>, in lowercase. You can find the projectName in your GlobalOptions.cs. The default number of Shards when running locally is 4.
The game server defaults to using the username metaplay and password s3cret. Make sure not to expose the Database to the rest of the world!
To configure two Database Shards for the Idler sample project, apply the following SQL commands using the mysql CLI tool:
CREATE USER 'metaplay'@'%' IDENTIFIED BY 's3cret';
CREATE DATABASE idler_0;
GRANT ALL PRIVILEGES ON idler_0.* TO 'metaplay'@'%';
CREATE DATABASE idler_1;
GRANT ALL PRIVILEGES ON idler_1.* TO 'metaplay'@'%';
FLUSH PRIVILEGES;
Configure the local server to use MySQL with two Shards.
Apply the following changes to the local runtime options to avoid accidentally changing any cloud environments:
Database:
  Backend: MySql
  NumActiveShards: 2