Subsections of How to ...

Fulfill RTG requirements

All PhD students funded by the TRR379 participate in its Research Training Group.

This involves certain obligations, of which you can find an overview below:

Participation

official schedule

  • internal meetings
    • attendance via Zoom (Aachen members: in the library)
    • present own work twice (medical students: once)
  • lecture series
    • attendance via Zoom (on-site attendance possible at the lecture location)
    • chair one session (medical students: optional)
  • workshops
    • attend the obligatory workshops plus 3 voluntary ones (medical students: 1 voluntary)
Note of absence requirement

Send a note of absence (via email) in case of illness, work-related travel, holidays, or meetings not arranged by yourself*

Documentation via Gitlab

Get a Hub account

An account is required for working with the TRR379 collaboration platform.

To obtain an account, email Michael Hanke with a request. Please write from an institutional email address and CC the respective TRR379 project lead. In your request, mention any projects and/or organizations you require read or write access to.

Edit the website

The main TRR379 website is automatically rendered with Hugo. The website sources are hosted on the TRR379 collaboration hub in a repository under the Q04 project organization.

This repository is a DataLad dataset, the same technology that is used for research data management in the TRR379.

The repository README contains instructions on how to obtain a clone of the website for working on it and testing it locally.
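For reference, a minimal sketch of obtaining a local working copy and previewing the site, assuming DataLad and Hugo are installed (the repository README remains the authoritative source):

# clone the website dataset from the collaboration hub
datalad clone https://hub.trr379.de/q04/www.trr379.de
cd www.trr379.de

# build and preview the site locally
hugo server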

Alternatively, small edits can be done on the collaboration hub directly. The content of all pages can be browsed at https://hub.trr379.de/q04/www.trr379.de/src/branch/main/content.

Each page provides an “edit button” that enables editing content directly in the browser. The following screenshot shows this button for the source file of the personal profile of Michael Hanke.

Screenshot of a Forgejo page edit button

Once a change has been submitted, it is not reflected on the website immediately. All changes are reviewed and possibly adjusted by the website editor team of the Q04 project.

Submit annual DFG survey

The 2025 survey is not open yet!

Please wait for the official announcement email with the personal access link to the survey.

Need help?

I am having issues with the survey form.

If there are issues with the form, they can be submitted on the TRR379 collaboration hub at https://hub.trr379.de/q02/dfg-cp-survey-ui/issues (requires an account), or emailed to trr379@lists.fz-juelich.de.

I need a personalized access token.

As a member of the TRR379, you should receive an individual access token via email. If you have not received one, please email Michael Hanke with a request.

Background

The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) gathers information from its funded projects on an annual basis. Some information is collected from all members of a coordinated programme, such as a collaborative research centre (CRC).

The data are used to produce statistical reports and provide answers to statistical queries. In addition, the data serve as the basis for statistical evaluations, which the DFG uses to comply with its reporting obligations to financial backers, the federal government and the state governments.

For more information, please refer to the information on data protection in relation to DFG research funding (see point 10 under Research Funding question IV), which can be accessed at https://www.dfg.de/privacy_policy.

TRR379 approach

The TRR379 collects the DFG survey data using the same infrastructure that is employed for research metadata collection and management, rather than the PDF-form based approach suggested by the DFG.

This offers the following advantages:

  • members only have to enter their details in the first survey, and can efficiently resubmit updated records in subsequent years
  • members can consent to having a subset of their information used to create and update a profile page about them on the TRR379 website with no additional effort
  • members can consent to having basic information about them (e.g., name, ORCID) made available in other TRR379 services, for example for reporting publications or data annotation, to avoid repeated, manual information entry

Collected information is submitted to a TRR379-dedicated virtual server hosted at Forschungszentrum Jülich (operated by the Q02 project) via an encrypted connection. Individual person records are only accessible via a personalized link with an individual access token. Members receive this access information via email.

This service is opt-in. Members who do not wish to have their data submitted and kept in the described way can follow the standard DFG-procedure and send the PDF-form available at https://www.dfg.de/resource/blob/203580/questionnaire-sfb-grk-fip-2025-en.pdf to the TRR379 management team.

Submit data

The electronic survey form is provided at https://dfg-cp-survey.trr379.de

Members receive a personalized access link via email. Only with this link is it possible to see one’s personal record and to submit information.

Tip

It is highly recommended to use this webform with a desktop browser; the form offers rich contextual information that is hard to access on mobile devices.

Create or edit a record

If no record for a person exists yet, a new record must be created. This is done by clicking on the + button in the “Person” category.

Screenshot showing the add-button for a person record

Alternatively, if a previous record already exists, it can be loaded by clicking on the “Edit” button (pencil icon).

Screenshot showing the edit-button for a person record

Fill the form

Context help is provided for every form field. This information is accessible by hovering the mouse pointer over the field title.

Screenshot showing where to hover to access the context help

For some fields (e.g., prior workplace, area of study) one or more choices have to be selected. The selector offers type-ahead search of choices. The selection is constrained to choices matching the entered text in their label.

Screenshot showing the item selector and constrained search

When no adequate choice exists, a new item can be created by clicking the “Add New Item” button.

Save and submit the form

There are two different save actions: local saves and submitting to the server.

Save form sections (local save)

Whenever a form (or sub-form) is completed, it must be saved by clicking the “Save” button. You must do this for each section you complete.

Screenshot showing the form save button

Important

Saving only stores the information in the local browser session! It does not submit the form to the server.

Final submission to server

When the person record is complete and saved, it can be submitted to the server by clicking the “Submit” button (cloud icon with up arrow).

Screenshot showing the form submit button

Submission is only possible with a valid access token. If the form was accessed via the personalized link, this token is already in place. If the form was accessed by other means, the token can be entered via the “Key” button. The token is embedded in the access link received via email.

Deploy services

Subsections of Deploy services

Dump-things API

The metadata service is a small application built on FastAPI. It can be deployed in a virtual environment managed by Hatch, running under an unprivileged user account. This scenario is described here. However, any other deployment approach suitable for Python-based applications may work just as well.

Required software

The only software required outside the virtual environment (and the web server) is pipx, which is used to deploy hatch for a user; no administrator privileges are needed otherwise.

sudo apt install pipx --no-install-recommends

User account setup

Here we set up a dedicated user dumpthing to run the service. However, the service could also run under any other (existing) user account.

# new user, prohibit login, disable password
sudo adduser dumpthing --disabled-password --disabled-login

# allow this user to run processes while not logged in
sudo loginctl enable-linger dumpthing

# allow this user to execute systemd commands interactively.
# this needs XDG_RUNTIME_DIR defined.
# the demo below is for ZSH
sudo -u dumpthing -s
cd
echo 'export XDG_RUNTIME_DIR="/run/user/$UID"' >> ~/.zshrc
# put `hatch` in the PATH for convenience
echo 'export PATH="/home/dumpthing/.local/bin:$PATH"' >> ~/.zshrc

Service environment setup

Everything in this section is done under the target user account. Use something like sudo -u dumpthing -s to enter it.

The following commands perform the initial setup, which provides an installation of the dump-things-service to query and enrich the TRR379 knowledge base.

# install `hatch` to run the service in a virtual environment
pipx install hatch

# obtain the source code for the service
git clone https://hub.trr379.de/q02/dump-things-service.git

# obtain the dataset with the (curated) metadata to be served
# by the service
git clone https://hub.trr379.de/q02/trr379-knowledge.git curated_metadata

# set up a directory for receiving metadata submissions
# each subdirectory in it must match a "token" that needs to be
# presented to the service to make it accept a record posting.
mkdir token_realms

# the service expects a particular data organization.
# we opt to create a dedicated root directory for it,
# and symlink all necessary components into it
mkdir server_root
ln -s ../curated_metadata/metadata server_root/global_store
ln -s ../token_realms server_root/token_stores

# now we can test-launch the service
hatch run fastapi:run --port 17345 /home/dumpthing/server_root

If the service comes up without errors, we can stop it with Ctrl+C.
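Before stopping it, the service can optionally be checked from a second shell. This assumes the FastAPI default of serving interactive API docs under /docs; the exact endpoints of the dump-things-service may differ.

# check that the service answers on the configured port
curl -I http://localhost:17345/docs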

Service management with systemd

We use systemd for managing the service process, its launch, and logging. This makes it largely unnecessary to interact with hatch directly, and allows for treating the user-space service like any other system service.

The following service unit specification is all that is needed.

mkdir -p .config/systemd/user/
cat << EOT > .config/systemd/user/dumpthing.service
[Unit]
Description=DumpThing service (hatch environment)
Wants=network-online.target
After=network-online.target


[Service]
Type=simple
WorkingDirectory=/home/dumpthing/dump-things-service
ExecStart=/home/dumpthing/.local/bin/hatch run fastapi:run --port 17345 /home/dumpthing/server_root

[Install]
WantedBy=default.target
EOT

With this setup in place, we can control the service via systemd.

# launch the service
systemctl --user start dumpthing

# configure systemd to auto-launch the service in case of a
# system reboot
systemctl --user enable dumpthing.service
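The usual systemd tooling can be used to inspect the running service, for example:

# check the current service state
systemctl --user status dumpthing

# follow the service log
journalctl --user -u dumpthing -f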

Web server setup

Here we use caddy as a reverse proxy to expose the service via https at metadata.trr379.de.

# append the following configuration to the caddy config
cat << EOT >> /etc/caddy/Caddyfile
# dumpthings service endpoints
metadata.trr379.de {
    reverse_proxy localhost:17345
}
EOT

A matching DNS setup must be configured separately.

Afterwards we can reload the web server configuration and have it expose the service.

# reload the webserver config to enable the reverse proxy setup
# (only necessary once)
sudo systemctl reload caddy

Updates and curation

Whenever there are updates to the to-be-served curated metadata, the setup described here only requires the equivalent of a git pull to fetch these updates from the “knowledge” repository.
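For example, run as the service user:

# fetch the latest curated metadata (run as the dumpthing user)
cd /home/dumpthing/curated_metadata
git pull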

When records are submitted, they end up in the directory matching the token that was used for submission. Until such records are integrated with the curated metadata in global_store, they are only available for service requests that use that particular token.

An independent workflow must be used to perform this curation (acceptance, correction, rejection) of submitted records.
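Pending submissions can be inspected directly in the token-specific directories set up earlier, for example:

# list submitted records per token (run as the dumpthing user)
ls -R /home/dumpthing/token_realms/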

Neurobagel

NeuroBagel is a collection of containerized services that can be deployed in a variety of ways. This page describes a deployment using podman and podman-compose that is confirmed to work on a machine with a basic Debian 12 installation.

For other installation methods, please refer to the NeuroBagel documentation.

The following instructions set up a “full-stack” NeuroBagel deployment. It contains all relevant components:

  • query front-end
  • federation API
  • node API
  • graph database

This setup is suitable for a self-contained deployment, such as the central TRR379 node. Other deployments may only need a subset of these services.

On the target machine, NeuroBagel services will run “rootless”. This means they operate under a dedicated user account with minimal privileges.

Required software

Only podman and its compose feature are needed. Both can be installed via the system package manager.

sudo apt install podman podman-compose

User setup

We create a dedicated user neurobagel on the target machine. NeuroBagel will be deployed under this user account, and all software and data will be stored in its HOME directory.

# new user, prohibit login, disable password
sudo adduser neurobagel --disabled-password --disabled-login

# allow this user to run processes while not logged in
sudo loginctl enable-linger neurobagel

# allow this user to execute systemd commands interactively.
# this needs XDG_RUNTIME_DIR defined.
# the demo below is for ZSH
sudo -u neurobagel -s
cd
echo 'export XDG_RUNTIME_DIR="/run/user/$UID"' >> ~/.zshrc
exit

Configure NeuroBagel

In the HOME directory of the neurobagel user we create the complete runtime environment for the service. All configuration is obtained from a Git repository.

# become the `neurobagel` user
sudo -u neurobagel -s
cd

# fetch the setup
git clone https://hub.trr379.de/q02/neurobagel-recipes recipes

# create the runtime directory
mkdir -p run/data/
mkdir -p run/secrets/

# copy over the demo data for testing (can be removed later)
cp recipes/data/* run/data

# generate passwords (using `pwgen` here, but could be any)
pwgen 20 1 > run/secrets/NB_GRAPH_ADMIN_PASSWORD.txt
pwgen 20 1 > run/secrets/NB_GRAPH_PASSWORD.txt

# configure the address of the "local" NeuroBagel
# node to query
cat << EOT > recipes/local_nb_nodes.json
[
    {
        "NodeName": "TRR379 central node",
        "ApiURL": "https://nb-cnode.trr379.de" 
    }
]
EOT

Web server setup

NeuroBagel comprises a set of services listening on local ports that are routed to the respective containers. Here we use caddy as a reverse proxy to expose the necessary services via https at their canonical locations.

# append the following configuration to the caddy config
cat << EOT >> /etc/caddy/Caddyfile
# neurobagel query tool
nb-query.trr379.de {
    reverse_proxy localhost:13700
}
# neurobagel apis
# the graph db api (port 13701) is not exposed
nb-cnode.trr379.de {
    reverse_proxy localhost:13702
}
nb-federation.trr379.de {
    reverse_proxy localhost:13703
}
EOT

A matching DNS setup must be configured separately.

Manage NeuroBagel with systemd

We use systemd for managing the NeuroBagel service processes, the launch, and logging. This makes it largely unnecessary to interact with podman directly, and allows for treating the containerized NeuroBagel like any other system service.

The following service unit specification is all that is needed. More recent versions of podman and podman-compose enable better setups. However, this one works with the stock versions that come with Debian 12 (podman 4.3.1 and podman-compose 1.0.3) and requires no custom installations.

mkdir -p .config/systemd/user/
cat << EOT > .config/systemd/user/neurobagel.service
[Unit]
Description=NeuroBagel rootless pod (podman-compose)
Wants=network-online.target
After=network-online.target


[Service]
Type=simple
WorkingDirectory=/home/neurobagel/recipes
EnvironmentFile=/home/neurobagel/recipes/trr379.env
ExecStart=/usr/bin/podman-compose -f ./docker-compose.yml up
ExecStop=/usr/bin/podman-compose -f ./docker-compose.yml down

[Install]
WantedBy=default.target
EOT

Launch

With this setup in place, we can launch NeuroBagel.

# reload the webserver config to enable the reverse proxy setup
# (only necessary once)
sudo systemctl reload caddy

# launch the service (pulls all images, loads data, etc).
# after a few minutes the service should be up
systemctl --user start neurobagel

# configure systemd to auto-launch NeuroBagel in case of a
# system reboot
systemctl --user enable neurobagel.service
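To verify the deployment, the usual systemd and podman tooling can be used:

# check the service state and follow its log
systemctl --user status neurobagel
journalctl --user -u neurobagel -f

# list the running NeuroBagel containers
podman ps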

Work with raw MRI data

Magnetic resonance imaging is an essential data acquisition method for TRR379. Four sites acquire such data. Each site has (different) established routines and conventions. This documentation collects resources and best practices that can be adopted by TRR379 members.

Subsections of Work with raw MRI data

Archive DICOMs

As the “true” raw data, DICOMs are rarely (re)accessed and hardly ever need to change. However, they need to be stored somewhere. Tracking DICOMs in DataLad datasets allows dependency tracking for the conversion to NIfTI, but it is good to keep DataLad optional (allow both DataLad and non-DataLad access).

Historical precedent: ICF at FZJ

The following solution has been proposed for the Imaging Core Facility at FZJ:

  • DICOMs are packed into tar files (tarballs) 1
  • the tarballs are placed on a web server (intranet only), organized by project (HTTP Basic Authentication for access management)
  • DataLad datasets record availability via archivist and uncurl special remotes, which translates to:
    • a file is available from a tarball (archivist special remote)
    • a tarball is available from a given URL, pointing to the web server (uncurl special remote)2.
  • Only the Git repository (no annex) is stored by the consuming institute; the ICF web server is the permanent DICOM storage.

The system has been documented in https://inm-icf-utilities.readthedocs.io/en/latest/ and the tarball & dataset generation tools implementation is in https://github.com/psychoinformatics-de/inm-icf-utilities.
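For illustration only, configuring the two special remotes in such a dataset looks roughly like the following; the exact parameters are set by the inm-icf-utilities tooling and may differ:

# register the archivist special remote (provided by the datalad-next
# extension): individual DICOM files are retrieved from within a tarball
git annex initremote archivist type=external externaltype=archivist \
    encryption=none autoenable=true

# register the uncurl special remote: the tarball itself is retrieved
# from a (re-configurable) URL on the storage web server
git annex initremote uncurl type=external externaltype=uncurl \
    encryption=none autoenable=true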

TRR reimplementation

One of the TRR sites indicated intent to use a Forgejo instance for DICOM storage. A particular challenge for the underlying storage system was its limited number of inodes. For this reason, an adaptation of the ICF system has been proposed:

  • a dataset is generated upfront, and the DICOM tarball is stored with the dataset in Forgejo (as an annex)
  • we keep using the archivist remote (file in tarball) to avoid using up thousands of inodes for individual files (Git can pack its repository into several files, so we only add one more for the tarball).

A proof of principle for dataset generation (using re-written ICF code) has been proposed in https://hub.trr379.de/q02/dicom-utilities. See the README for more detailed explanations (and commit messages for even more detail).


  1. timestamps are normalized to ensure re-packing the same data does not change tarball checksums ↩︎

  2. uncurl is chosen because it allows re-writing patterns with just configuration, e.g., should the base URL change ↩︎

BIDS conversion

Converting the heterogeneous, site-specific raw MRI data acquisitions into a standardized dataset is an essential precondition for the collaborative work in TRR379. It readies the data for processing with established pipelines, and applies a pseudonymization as a safeguard for responsible use of this personal data.

TRR379 uses the Brain Imaging Data Structure (BIDS) as the standard for its datasets.

Conversion to BIDS

The conversion of raw MRI data in DICOM format to a BIDS-compliant dataset is a largely automated process. The recommended software to be used for conversion is Heudiconv.

Heudiconv uses dcm2niix as the actual DICOM→NIfTI converter. In our experience, dcm2niix is the most robust and most correct tool available for this task.

Heudiconv does the job of mapping DICOM series to BIDS entities (i.e., determining BIDS-compliant file names). A key heudiconv concept is a heuristic: a Python program (function) that looks at the properties of a DICOM series and matches them with a file naming pattern. A heuristic typically relies on DICOM series naming (set at the scanner console), but it can also use other properties such as the number of images or acquisition parameters.

Because TRR379 uses its own conventions, a matching heuristic needs to be provided (possibly one per TRR379 site). An implementation of such a heuristic has been created and was tested on phantom MRI acquisitions from all sites (see below). Using this heuristic, MRI data from all sites can be BIDS-standardized. As with any automation, caution and oversight are needed for edge cases (e.g., repeated / discarded acquisitions).
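For orientation, a heudiconv invocation with a site heuristic looks roughly like the following; paths, the subject ID, and the heuristic file name are placeholders, and the Q02 utility dataset provides the containerized equivalent:

# convert one subject's DICOMs to BIDS using a custom heuristic
heudiconv \
    --files /path/to/dicoms \
    -s 001 \
    -f trr379_heuristic.py \
    -c dcm2niix -b \
    -o /path/to/bids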

Heudiconv tutorials further illustrate the process and capabilities of the software.

Good practices

  • Use heudiconv as a containerized application. Q02 provides a readily usable utility dataset with a configured container. See that repository for an example usage.
  • including DICOMs as subdatasets helps with provenance, even if those DICOMs are never accessed elsewhere
  • Heudiconv takes paths and (optionally) intended subject IDs as input
    • if paths contain identifying information, this would leak into DataLad run records
    • having a helper script / lookup table in the (private) DICOM dataset can hide this information (see the sketch below)
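A hypothetical helper of this kind (file name and table layout are made up for illustration) could resolve pseudonymized subject IDs without exposing identifying paths:

#!/bin/bash
# lookup_subject.sh: map an acquisition identifier to the pseudonymized
# subject ID, using a CSV table (acquisition_id,subject_id) that is kept
# in the private DICOM dataset
# usage: ./lookup_subject.sh <acquisition-id>
grep "^${1}," subject_lookup.csv | cut -d, -f2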

Caveats

Demonstrators and resources

TRR phantom DICOMs

Scans of MRI phantoms were carried out using the intended sequences (presumably; see the caveats section below). These were shared with Q02 and uploaded to the TRR Hub Forgejo instance:

Note: Aachen did a re-scan, which was shared via e-mail / cloud (June 03, 2025). This has not been uploaded to Forgejo (permissions + size).

TRR phantom BIDS

Conversion of the re-scanned Aachen phantom is in https://hub.trr379.de/q02/tmp-phantom-bids (kept separate from the above because the input data is not available as a DataLad dataset).

Data consistency

Conversion: technical issues

These are open questions: