I have found transcoding to work noticeably better with Quick Sync (the encoder built into Intel CPUs) than with a discrete GPU.
At this point, I think the only real reason you'd want a GPU is for LLMs.
One thing nobody has mentioned here: I run all my services as Docker containers. That makes them very easy to back up and very easy to segregate. If a service gets compromised, in theory the damage is isolated to what it can access inside the container and can't reach the host. And if you delete and rebuild the container, any damage done inside it dies with it.
Running Home Assistant with Docker is as simple as:
sudo docker run -d \
--name homeassistant \
--restart=unless-stopped \
-e TZ=America/Chicago \
-v $(pwd)/homeassistant:/config \
--network=host \
homeassistant/home-assistant
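And because everything stateful lives in that bind-mounted config directory, backing the service up can be as simple as archiving that one folder. A minimal sketch (the mkdir/echo lines just stand in for a real config dir here; in real use you'd `docker stop homeassistant` first so the recorder database isn't written mid-copy):

```shell
# Sketch: back up the bind-mounted Home Assistant config directory.
# In real use, stop the container first: docker stop homeassistant
mkdir -p homeassistant                          # stands in for the real config dir
echo "demo" > homeassistant/configuration.yaml  # placeholder file for this sketch
tar czf homeassistant-backup.tar.gz homeassistant
tar tzf homeassistant-backup.tar.gz             # list what went into the archive
```

Restoring is the same thing in reverse: untar the archive, point the `-v` bind mount at it, and start a fresh container.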
There are, of course, more details to learn, and the devil is in those details, but thankfully anything you want to know about setting up your network this way you can just ask ChatGPT.
I use restic on Linux, but Duplicati seems to be the new hotness, and it's cross-platform.