I have a very simple setup running Gitea, which I love. However, I enabled Elasticsearch because it makes searching much faster than the default method.
I have a VPS with 16GB of memory. The only things running on it are Nginx, PHP, MySQL, Docker, and a few other things. I very rarely hit over 6GB usage.
The issue comes when I enable Elasticsearch. As soon as I start it up, it seems to wipe me out at 15.7GB usage out of 16GB.
I searched online and found out about /etc/elasticsearch/jvm.options.d/jvm.options and adding:

```
-XmxXG
-XmsXG
```
The question is, what should this amount be? I read that by default, Elastic uses 50% of memory; however, when I started it up, it was wiping me out of memory and making the system almost have a stroke.
But setting it to 2GB seems to make the Gitea website less responsive, sometimes even timing the website out.
So I'm not sure what "range" I should be using here, or whether I'm going to have to upgrade my VPS to 32GB in order to run this properly.
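For what it's worth, a common starting point when Elasticsearch shares a box with other services is to pin the heap to a fixed size well under half of RAM, with Xms and Xmx set identically so the JVM allocates the heap once at startup and never resizes it. A sketch of what the file might look like (the 4g figure is only an example to tune from, not a recommendation for every workload):

```
# /etc/elasticsearch/jvm.options.d/jvm.options
# Pin the heap: Xms (initial) and Xmx (max) should match
# so the heap is allocated once and never grows later.
-Xms4g
-Xmx4g
```

Keep in mind the heap is not Elasticsearch's total footprint: the JVM itself and off-heap buffers add overhead on top of whatever you set here.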
Those values depend on the use case. You can set min and max values and fine-tune as the need arises. Some extensive information is discussed in these links:
https://stackoverflow.com/questions/14763079/what-are-the-xms-and-xmx-parameters-when-starting-jvm
https://www.elastic.co/guide/en/elasticsearch/reference/current/advanced-configuration.html
Thanks, I saw the second link when I first set this up, but not the first. I'll go through them and see if I can find the sweet spot.
It's hard to tell, because I'm the only user of my Gitea repo website, which is pretty much your own personal GitHub. However, from what I've read, even if there are only one or two users, Elastic's memory usage greatly depends on how much code it has to index. Then when you search for something, Elastic has to go through all that code.
So from what I understand, the more code you have in a repo, the harder Elastic has to work, which makes figuring out the memory a bit of a random gamble.
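If you want to see how much data Elasticsearch is actually holding before guessing at heap sizes, its cat APIs report per-index size and per-node heap usage. Assuming Elasticsearch is listening on the default localhost:9200:

```shell
# Per-index document count and on-disk size
curl -s 'http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size'

# Per-node heap and RAM usage
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
```

If the indices are only a few hundred megabytes, a small fixed heap is probably fine and the timeouts come from somewhere else; if they're many gigabytes, that's your answer on why 2GB struggles.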
Just in case you're curious, here is my complete Gitea stack, using MariaDB as the database and issue indexer, and Redis for cache, queue, and sessions.
Disclaimer: I am neither a Gitea, MariaDB, nor Redis expert; I know nothing about anything…
```yaml
version: "3.3"

networks:
  gitea:
    name: gitea
  traefikproxy:
    external: true

services:
  gitea:
    container_name: gitea
    image: gitea/gitea:1.20.5-rootless
    restart: unless-stopped
    labels:
      - traefik.enable=true
      - traefik.docker.network=traefikproxy
      - traefik.http.routers.gitea.rule=Host(`gitea.local.example.com`)
      - traefik.http.services.gitea.loadbalancer.server.port=3000
    networks:
      - gitea
      - traefikproxy
    # ports:
    #   - 7744:3000 # webui
    #   - 2200:22   # ssh (needs more configuration)
    depends_on:
      gitea-mariadb:
        condition: service_healthy
      gitea-redis:
        condition: service_healthy
    environment:
      - TZ=Europe/Berlin
      - USER_UID=1000
      - USER_GID=1000
      - USER=username
      ### basic settings
      - "GITEA__default__APP_NAME=Gitea"
      - "GITEA__time__DEFAULT_UI_LOCATION=Europe/Berlin"
      - "GITEA__ui__DEFAULT_THEME=arc-green"
      - "GITEA__server__HTTP_PORT=3000"
      - "GITEA__server__OFFLINE_MODE=true"
      - "GITEA__server__ROOT_URL=https://gitea.local.example.com"
      #- "GITEA__server__LOCAL_ROOT_URL=http://localhost:3000/"
      #- "GITEA__server__DOMAIN="
      ### database settings
      - GITEA__database__DB_TYPE=mysql
      - GITEA__database__HOST=gitea-mariadb:3306
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
      ### use db as indexer
      - GITEA__indexer__ISSUE_INDEXER_TYPE=db
      ### redis for cache, session and queue
      - GITEA__cache__ENABLED=true
      - GITEA__cache__ADAPTER=redis
      - GITEA__cache__HOST=redis://gitea-redis:6379/0
      - GITEA__session__PROVIDER=redis
      - GITEA__session__PROVIDER_CONFIG=redis://gitea-redis:6379/1
      - GITEA__queue__TYPE=redis
      - GITEA__queue__CONN_STR=redis://gitea-redis:6379/2
      ### mail settings
      - "GITEA__mailer__ENABLED=true"
      - "GITEA__mailer__PROTOCOL=smtp+starttls"
      - "GITEA__mailer__SUBJECT_PREFIX=Gitea: "
      - "GITEA__mailer__FROM=example@gmail.com"
      - "GITEA__mailer__USER=example@gmail.com"
      - "GITEA__mailer__PASSWD=CHANGEME"
      - "GITEA__mailer__SMTP_ADDR=smtp.gmail.com"
      - "GITEA__mailer__SMTP_PORT=587"
      ### secret key and token
      ### generate once with:
      # docker run -it --rm gitea/gitea:1.20.5-rootless gitea generate secret SECRET_KEY
      # docker run -it --rm gitea/gitea:1.20.5-rootless gitea generate secret INTERNAL_TOKEN
      - "GITEA__security__SECRET_KEY=CHANGEME"
      - "GITEA__security__INTERNAL_TOKEN=CHANGEME"
    volumes:
      - ./data:/var/lib/gitea
      - ./config:/etc/gitea
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: "wget --no-verbose --tries=1 --spider --no-check-certificate http://localhost:3000/api/healthz || exit 1"
      start_period: 60s
    # mem_limit: 250m
    # memswap_limit: 250m
    # mem_reservation: 125m
    # cpu_shares: 1024
    # cpus: 1.0

  gitea-mariadb:
    container_name: gitea-mariadb
    image: mariadb:11.1.2
    restart: unless-stopped
    networks:
      - gitea
    # ports:
    #   - 3306:3306
    environment:
      - TZ=Europe/Berlin
      - MYSQL_ROOT_PASSWORD=gitea
      - MYSQL_USER=gitea
      - MYSQL_PASSWORD=gitea
      - MYSQL_DATABASE=gitea
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./mariadb:/var/lib/mysql
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    healthcheck:
      test: ["CMD-SHELL", "mariadb-admin --user=$${MYSQL_USER} --password=$${MYSQL_PASSWORD} --host=localhost ping"]
      start_period: 60s
    # mem_limit: 400m
    # memswap_limit: 400m
    # mem_reservation: 200m
    # cpus: 0.5

  gitea-redis:
    container_name: gitea-redis
    image: redis:7.2.2-alpine
    restart: unless-stopped
    networks:
      - gitea
    # ports:
    #   - 6379:6379
    volumes:
      - ./redis:/data
    ### run on Docker host and reboot:
    ### echo "vm.overcommit_memory = 1" | sudo tee /etc/sysctl.d/nextcloud-aio-memory-overcommit.conf
    command: redis-server --save 30 1 --loglevel warning
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    # mem_limit: 150m
    # memswap_limit: 150m
    # mem_reservation: 25m
    # cpus: 0.5
```
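If you ever move Elasticsearch into a stack like this, the official image lets you pin the heap through the ES_JAVA_OPTS environment variable, and Compose can add a hard memory cap on top. A hypothetical service fragment (image tag, paths, and sizes are all examples, not part of my stack):

```yaml
  gitea-elasticsearch:
    container_name: gitea-elasticsearch
    image: elasticsearch:8.11.1        # example tag, pick your own
    restart: unless-stopped
    networks:
      - gitea
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # only acceptable on an internal network
      - ES_JAVA_OPTS=-Xms2g -Xmx2g     # pin the heap, same min and max
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data
    mem_limit: 4g                      # hard cap: heap plus off-heap overhead
```

The mem_limit is deliberately larger than the heap, since the JVM and off-heap buffers need headroom beyond Xmx.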
Oh damn, thanks. I’ll throw this in Obsidian.
Reverse proxying is exactly why I don't have more things set up in Docker. I haven't quite figured out how it, Nginx, and the app work together yet.
I had to set up Caddy when I installed Vaultwarden, and while that was easy because I had a very good guide to assist me, I would have been completely and totally lost if I'd had to set up Caddy 2 on my own.
So I definitely need to sit down one day and do a full day's reading on reverse proxies: how they work with Docker, what their function is, and what I can do with them. Because the Vaultwarden setup made it no easier to understand.
I wanted to actually move Nginx and MySQL over to Docker, but the reverse proxy is also the thing holding me back there.