Deployment

The metadata service is a small application built on FastAPI. It can be deployed in a Hatch-managed virtual environment, running under an unprivileged user account. This scenario is described here. However, any other deployment approach suitable for Python-based applications should work just as well.

Required software

The only software required outside the virtual environment (and the web server) is pipx, which is used to install hatch for a user account. No administrator privileges are needed beyond this step.

sudo apt install pipx --no-install-recommends

User account setup

Here we set up a dedicated user dumpthing to run the service. However, the service could also run under any other (existing) user account.

# new user, prohibit login, disable password
sudo adduser dumpthing --disabled-password --disabled-login

# allow this user to run processes while not logged in
sudo loginctl enable-linger dumpthing

# allow this user to execute systemd commands interactively.
# this needs XDG_RUNTIME_DIR defined.
# the demo below is for zsh
sudo -u dumpthing -s
cd
echo 'export XDG_RUNTIME_DIR="/run/user/$UID"' >> ~/.zshrc
# put `hatch` in the PATH for convenience
echo 'export PATH="/home/dumpthing/.local/bin:$PATH"' >> ~/.zshrc
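
To verify that the user-level systemd instance is reachable from such a session, a quick check can be run (a minimal sanity check; the profile file to source depends on the login shell configured for the account):

# in a fresh `sudo -u dumpthing -s` session
source ~/.zshrc
systemctl --user status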

Service environment setup

Everything in this section is done under the target user account. Use something like sudo -u dumpthing -s to enter it.

The following commands perform the initial setup, which provides an installation of the dump-things-service to query and enrich the TRR379 knowledge base.

# install `hatch` to run the service in a virtual environment
pipx install hatch

# obtain the source code for the service
git clone https://hub.trr379.de/q02/dump-things-service.git

# obtain the dataset with the (curated) metadata to be served
# by the service
git clone https://hub.trr379.de/q02/trr379-knowledge.git curated_metadata

# set up a directory for receiving metadata submissions.
# each subdirectory in it must match a "token" that needs to be
# presented to the service to make it accept a record posting.
mkdir token_realms
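
For example, to accept submissions authenticated with the (purely hypothetical) token value s3cret-upload-token, a matching subdirectory would be created:

# hypothetical token value; choose your own secret
mkdir token_realms/s3cret-upload-token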

# the service expects a particular data organization.
# we opt to create a dedicated root directory for it,
# and symlink all necessary components into it
mkdir server_root
ln -s ../curated_metadata/metadata server_root/global_store
ln -s ../token_realms server_root/token_stores
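
The resulting layout should look roughly like this (a sketch of the expected structure, not verbatim tool output):

server_root/
├── global_store -> ../curated_metadata/metadata
└── token_stores -> ../token_realms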

# now we can test-launch the service
# (hatch must be run from within the repository clone)
cd dump-things-service
hatch run fastapi:run --port 17345 /home/dumpthing/server_root

If the service comes up without errors, we can stop it again with Ctrl-C.
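
While the test instance is running, a quick request from a second terminal can confirm that it responds. FastAPI applications typically serve interactive API documentation at /docs (assuming the service keeps this default):

# expect a successful response if the service is up
curl -sf http://localhost:17345/docs > /dev/null && echo "service is up"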

Service management with systemd

We use systemd to manage the service process, its launch, and logging. This makes it largely unnecessary to interact with hatch directly, and allows for treating the user-space service like any other service on the system.

The following service unit specification is all that is needed.

mkdir -p .config/systemd/user/
cat << EOT > .config/systemd/user/dumpthing.service
[Unit]
Description=DumpThing service (hatch environment)
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
WorkingDirectory=/home/dumpthing/dump-things-service
ExecStart=/home/dumpthing/.local/bin/hatch run fastapi:run --port 17345 /home/dumpthing/server_root

[Install]
WantedBy=default.target
EOT

With this setup in place, we can control the service via systemd.

# make systemd aware of the new unit
systemctl --user daemon-reload

# launch the service
systemctl --user start dumpthing

# configure systemd to auto-launch the service after a
# system reboot
systemctl --user enable dumpthing.service
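
Status and log output are available via the standard systemd tooling, for example:

# check the current service status
systemctl --user status dumpthing

# follow the service log output
journalctl --user -u dumpthing -f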

Web server setup

Here we use caddy as a reverse proxy to expose the service via HTTPS at metadata.trr379.de.

# append the following configuration to the caddy config
# (writing to /etc/caddy requires root privileges)
cat << EOT | sudo tee -a /etc/caddy/Caddyfile
# dumpthings service endpoints
metadata.trr379.de {
    reverse_proxy localhost:17345
}
EOT

A matching DNS setup must be configured separately.
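
Before reloading, the modified configuration can be checked for syntax errors (caddy auto-detects the Caddyfile format from the file name):

# validate the configuration
caddy validate --config /etc/caddy/Caddyfile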

Afterwards we can reload the web server configuration and have it expose the service.

# reload the webserver config to enable the reverse proxy setup
# (only necessary once)
sudo systemctl reload caddy

Updates and curation

Whenever there are updates to the curated metadata to be served, the setup described here only requires the equivalent of a git pull to fetch these updates from the “knowledge” repository.
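
With the setup above, that amounts to updating the clone under the service account, for example:

sudo -u dumpthing -s
cd ~/curated_metadata
git pull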

When records are submitted, they end up in the directory matching the token that was used for submission. Until such records are integrated with the curated metadata in global_store, they are only available for service requests that use that particular token.

An independent workflow must be used to perform this curation (acceptance, correction, rejection) of submitted records.
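
What such a workflow looks like is up to the project. A minimal sketch, assuming records are stored as plain files and reusing the hypothetical token realm from above (the actual on-disk record layout may differ):

# move an accepted record from its token realm into the curated metadata
mv token_realms/s3cret-upload-token/some-record.json curated_metadata/metadata/

# record the curation decision in the "knowledge" repository
cd curated_metadata
git add metadata
git commit -m "Accept submitted record"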