
  • I am running BIND9 to achieve this very thing.

    You can set up different “views” in BIND. Different zonefiles are served to different clients based on the IP address.

    I have an external view that allows AXFR transfers to my public slave DNS provider, and an internal view for clients accessible over my VPN. I use DNS-01 challenges to issue valid Let’s Encrypt certificates to both LAN-facing and public-facing services.

    My DNS server runs on my VPN coordination server, but if I were not doing that, I’d run it on my router.

    I do not use dnsmasq, so I am not sure if it supports split-view DNS, but if it does not, you can try coredns as a lightweight alternative.
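
    For anyone curious, here is a rough named.conf sketch of the split-view setup described above; the zone name, ACL ranges, and the secondary’s IP are placeholders, not my actual config:

        // named.conf excerpt: serve different zonefiles to internal and external clients
        acl "internal" { 10.0.0.0/8; 192.168.0.0/16; };    // LAN + VPN ranges (placeholders)

        view "internal" {
            match-clients { "internal"; };
            zone "example.com" {
                type master;
                file "/etc/bind/zones/example.com.internal";
            };
        };

        view "external" {
            match-clients { any; };
            zone "example.com" {
                type master;
                file "/etc/bind/zones/example.com.external";
                also-notify { 203.0.113.10; };      // public slave DNS provider
                allow-transfer { 203.0.113.10; };   // allow AXFR out to the slave
            };
        };

    Note that once you define any view, every zone has to live inside a view.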



  • So to answer your last question first: I run dual-boot Arch + Windows, and I can mount the physical Arch disk inside a WSL VM and then chroot into it to run or fix things when I CBA to reboot properly. I haven’t tried booting a WSL instance off the physical Arch disk, but I don’t imagine it would work. Firstly, WSL uses a modified linux kernel (which the physical install won’t use without tinkering). Secondly, the physical install is obviously configured for physical ACPI and network use, which will break if I boot into it from WSL. After all, WSL is not a proper VM.

    To answer the first question as to services: notes, kanban boards, network monitoring tools (connected to a VPN / management LAN), databases, more databases, even MOAR databases, database managers, web scrapers, etc.

    The very first thing I used WSL for (a long time ago) was to run ffmpeg. I just could not be bothered building it for Windows myself.


  • So on my workstation / daily driver box:

    • I have Docker using the WSL2 backend. I use this instance of docker to test deployments of software before I push it to my remote servers, to perform local development tasks, and to host some services that I only ever use when my PC is on (so services that require trust and don’t require 24x7 uptime).
    • I have about 8 distros of linux in WSL2.
    • The main distro is Ubuntu 22.04 for legacy reasons. I use this to host an nginx server on my machine (use it as a reverse proxy to my docker services running on my machine) and to run a bunch of linux apps, including GUI ones, without rebooting into my Arch install.
    • I have two instances of Archlinux. One is ‘clean’ and is only used to mount my physical arch disk if I want to do something quick without rebooting into Arch, and the other one I actively tinker with.
    • Other distros are just there for me to play with
    • I use Hyper-V (since it is required for WSL) to orchestrate Windows virtual machines. Yes, I do use Windows VMs on a Windows host. Why? Software testing, running dodgy software in an isolated environment, running spyware I mean Facebook, and similar.
    • Prior to Hyper-V, I used VirtualBox. I switched to Hyper-V when I started using WSL. For a time, Hyper-V was incompatible with any other hypervisor on the same host, so I dropped VirtualBox. That seems to have been fixed now, and I have reinstalled VirtualBox to orchestrate Oracle Cloud VMs as well.

  • I do indeed use slave DNS servers; in fact, I’m currently in the market for a second independent provider.

    What features am I looking for? Honestly, a competitive number of PoPs and the ability to accept inbound AXFR. I don’t need much more than that.

    Oh, and pricing: I’m looking for something on the level of AWS or cheaper. I’ve tried approaching some other players in the field, like NS1 and Hurricane Electric’s commercial service, and they are quoting me $350+/month for <100 zones and <10M req/month. No thank you.



  • Never used it; I don’t trust random GitHub repos with only 3 stars, and I don’t feel comfortable using turnkey solutions or “configuration scripts”. I am a firm believer in the maxim that configuration is a deeply personal thing. Therefore, I would not use someone else’s configuration scripts, because they are configured the way their author wants, not the way I want.

    Running Docker Desktop on Windows is not exactly hard. And once you have Docker Desktop running, it is not exactly hard to run whatever other software / media server you might like.

    Windows is my primary workstation OS because I am legally blind and Windows has the best on-screen magnifier on the market. No other product, whether commercial or free, whether standalone or baked into the WM, comes even remotely close. So I use Windows. But within Windows, I leverage both WSL and Docker to run linux tools properly. All of my remote servers are linux. My home server is linux. More than half of my virtual machines are linux.



  • A few things, in no particular order:

    • Docker interferes with user-defined firewall rules on the host: it writes its own iptables rules for published ports, and those bypass typical host firewall setups (ufw and plain INPUT-chain rules, for example). You need to expend a lot of effort to make your own rules take precedence over Docker’s. This functionally means that, if you are running a public-facing VPS/dedicated server and bind services to 0.0.0.0, your services will be publicly accessible even if you set up a firewall on the same machine.
    • If you have access to a second firewall device (whether it is your router at home or your hosting provider’s firewall; Hetzner and OVH both like to provide firewall controls external to your server), this is not the biggest concern.
    • There is no reason to bind your containers to 0.0.0.0. You will usually access most of your containers from one particular address, so just bind them to that address. My preference is to bind to addresses in the 127.0.0.0/8 subnet (yes, that entire subnet is loopback) and then put a reverse proxy in front; see the sketch below. Alternatively, look into the ‘macvlan’ and ‘ipvlan’ docker network drivers.
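
    A minimal sketch of the loopback-binding approach (the image, port, and addresses are just placeholders):

        # publish the container on a loopback address only; nothing outside the host can reach it
        docker run -d --name whoami -p 127.0.0.1:8080:80 traefik/whoami

        # the reverse proxy (nginx, Caddy, ...) then proxies to http://127.0.0.1:8080
        # and is the only thing listening on a public address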

    Good luck


  • In no particular order:

    • Price (if looking to host something low value)
    • Price/performance (if longer-term)
    • Details of Fair Usage Policy
    • Bandwidth limits
    • Overlimit pricing
    • Location - proximity
    • Location - creepiness of government / jurisdiction
    • Reputation of the company - are they scummy? Do they oversell? Is their datacenter about to get yeeted? (cough Dedipath cough)
    • Are they bullshitting me with RAID 100000 PURE SSD STORAGE!!!

    In fact, I actually prefer HDD storage for most of my servers: for most websites, your bandwidth will be a bigger limitation than your data access speed.




  • If you want to go build a high-capacity all-SSD NAS, you need to decide how many kidneys you can part with.

    The folks at /r/datahoarder will be the best people to talk to about storage solutions. I prefer 7200 RPM drives in my daily driver box, and I don’t really care in the NAS (though the drives in it happen to be 7200 RPM).

    2.5GbE is overkill. Don’t forget you also need cables that can support the throughput. But even when you get the cables sorted, drive speed will be a bottleneck anyway.

    I would not spend money on 2.5GbE gear if your WAN is limited to 1Gbps.

    There just aren’t many situations in a small network where that kind of network speed gives you a tangible benefit. In the circumstances where it would, you would already know and would not be asking this question.

    When it comes to gaming, the speed of data transfer on your internal network will mean diddly-squat. Firstly, I am not aware of any games that will saturate a 2.5Gb link. That’s because most online games are designed to be playable over ADSL. There just isn’t that much data transfer.

    And if you are doing LAN-party-only type stuff, then you will likely want a switch with more ports rather than more bandwidth per port.


  • I am legally blind. I got into programming and linux specifically so that I can improve my life, even though I don’t want to pursue an IT career professionally.

    So, the short answer to your question is: most of my apps really do improve my daily life. And a good many of them I wrote myself.

    Here’s a largely-arbitrary mind dump:

    • Windows, unfortunately, has the best on-screen magnifier, so I cannot entirely leave the platform.
    • However, most GUI apps and web pages suck. They suck in many fascinating ways that are beyond the scope of this comment, but I have found that some tasks are quicker to perform from a CLI than from a GUI. For instance, managing documents: I can write a shell one-liner faster than I can load a GUI app for bulk file renaming or whatever other thing people tend to do, and I can tell gnuplot to produce a graph much faster than I can draw one by hand (see the examples after this list).
    • Until very recently there wasn’t a Dark Mode for word processors. So I’d just write Markdown files in VS Code and then convert with pandoc.
    • Math is much easier with scripting than with calculators
    • Text-to-speech is a lifesaver. And sometimes you need to write your own wacky scripts to scrape webpages and read them out to yourself.
    • I need to conform to academic referencing standards. Who’s got time for that? Nobody. Computers can do that for me.
    • Web scraping: some websites are so bad that the only way to use them is to scrape and then convert them.
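
    To give a flavour of the CLI-over-GUI point above, here are the kinds of one-liners I mean (the filenames and data columns are made up for illustration):

        # bulk-rename: lowercase the extension on every .JPG in a folder
        for f in *.JPG; do mv -- "$f" "${f%.JPG}.jpg"; done

        # convert a Markdown draft to .docx for submission
        pandoc assignment.md -o assignment.docx

        # plot the first two columns of a whitespace-separated data file to a PNG
        gnuplot -e "set terminal png; set output 'plot.png'; plot 'data.dat' using 1:2 with lines"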

    But that’s from an accessibility perspective and more programming than self-hosting per se.

    Now from reading your OP, I think it is an attitude problem rather than a selfhosting problem. uBlock Origin and AdGuard (blocky, in my case) are not mutually exclusive. You just need to know how TF to use them. Since I use uBlock in Paranoid Mode (basically a lite uMatrix mode with filterlists), I don’t need to block so-called tracker scripts at the DNS level. My DNS adblocker is only blocking ads. Ergo, things like shopping do not break. You are saying that it is easier to disable uBlock for shopping — but I can change DNS with one script. Just temporarily switch to 1.1.1.1 or something, and everything works. Where’s the problem?
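
    The DNS switch really is a one-liner on my machines. A sketch assuming systemd-resolved and an interface called eth0 (swap in NetworkManager or a plain /etc/resolv.conf edit as appropriate):

        # temporarily point the interface at Cloudflare instead of the local DNS adblocker
        resolvectl dns eth0 1.1.1.1

        # and back again when the shopping is done (192.168.1.53 is a placeholder for the adblocker)
        resolvectl dns eth0 192.168.1.53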

    I’m not sure what your complaint is with Bitwarden. It is not exactly hard to back it up when it is running in docker, and easier still if you use vaultwarden (much simpler backend).

    You say that you use ‘Portainer, Nginx Proxy Manager, Authentik, Uptime-Kuma, Wireguard’ and they are not improving your life.

    I’ll agree on the first two, but maybe that’s just because I hate webuis with a burning passion. But how are Authentik and Wireguard not improving your life?

    Do you know why I use wireguard? I’ll tell you why I use wireguard.

    A long time ago, I needed to go to hospital. I also had a university assignment due the same day I was in hospital. I thought to myself, ‘no problem, I’ll just bring my laptop with me; I’ve got Google Drive Sync set up, so I can work on my files remotely’. So I check in, boot up, log in, and what do I see? Old files. Old files from three weeks ago. Why? Because Google Drive decided to go on strike and, in true GUI app fashion, displayed a tiny error notification in the tray icon that you would need a microscope to see. Naturally, being half-blind, I didn’t see it. So now I am, figuratively, up shit creek without a paddle!

    So what do I do? Well, I deploy “KVM over Mom”. I ask my mom to drive back home — mind you, this is a 70-minute drive — and get her to bring my machine up. I walk her through getting into my machine and resurrecting Google Drive Sync. And then I spend 4 hours in the hospital queue finishing off my assignment.

    That episode taught me a few things:

    • Google sucks but I have to live with it
    • KVM-over-Mom is not a viable long-term solution
    • I need remote access
    • Redundancy is good.

    So, fast-forward a few months and I am using my dad’s NAS as a jumphost/proxy into our home network, where I can use wake-on-LAN and RDP to connect to my machine. I have also switched from Google Drive Sync to File Stream (as it then was) so that my files are automatically available in gdrive. And that latter bit saves my ass some months later when my dad’s NAS has a disagreement with a kernel update and I can no longer remote in. We also have a horde of Chinese bots hammering away at our internet-facing 16-year-old router, so that’s not great either. Also, SSH tunnels are neat, but are annoying to configure.

    Fast forward a few years, past an Unspecified Virus of Unspecified Origin that temporarily obviated the need for remote access, and I now use a VPN. In fact, me being a somewhat cautious person, I use several VPNs for remote access into my home network. There is a vanilla wireguard “in case things with multiple moving parts break” tunnel, plus more convenient mesh orchestrators, although I am having a hard time deciding between innernet and headscale.

    And does having remote access to my home computer improve my life? Yes. Most definitely. My home computer and server have much more storage than does my laptop. And sometimes you just need access to your copy of Hanks Australian Constitutional Law 12th ed, what can I say…

    The issue I see with many self-hosters is that they start with a solution looking for a problem, as evidenced by the frequent ‘I am bored, tell me what to selfhost’ posts we see on this sub. It is much better to start with a problem and try to solve it. Then you don’t have to have an existential crisis over whether you are hosting too many replicas of PostgreSQL…

    :wq


  • What do you guys think, is it worth it? Any better alternatives for similar performance for the price? I am from Slovenia, and I am looking at a max budget of around 16€ after tax per month…

    You are going to have a hard time finding a dedicated server for under €16/month. You might be able to get some cheap Intel Atom boxes from OVH or Online.net. I believe OVH’s N2800s are slow AF, while Online.net’s Avoton C2750s “feel” anecdotally better.

    But this price range is fairly low.

    The good news is that you can get a lot of VPS options for that budget. I would recommend using lowendbox/lowendtalk to look for deals in the field. Wait a few months for Black Friday and try to get on an annual deal.

    Would I be still able to have the free SSL via Let’s encrypt?

    Yes. No matter what marketing gimmicks people use, Let’s Encrypt certificates come from Let’s Encrypt; they are not a special feature of the provider. So, if you can serve a file over HTTP or publish a TXT record in your DNS zone, you can have free SSL certificates from Let’s Encrypt or any other ACME-compatible CA.
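
    As a rough illustration (the domain, webroot path, and email are placeholders, and certbot is just one of many ACME clients), each challenge type boils down to a single command:

        # HTTP-01: Let's Encrypt fetches a token file that your web server serves over port 80
        certbot certonly --webroot -w /var/www/html -d example.com -m you@example.com --agree-tos

        # DNS-01: you publish a TXT record instead; handy for hosts not reachable over HTTP
        certbot certonly --manual --preferred-challenges dns -d example.com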

    Could I set up the right things to have a mail server set up, so it doesn’t go to spam?

    Short answer: yes. Long answer: it takes effort. Some things to look out for:

    • Some VPS providers block the SMTP port (25) outright;
    • Some VPS providers have “dirty” IP addresses; not much you can do about that, the owner of the IP range needs to get it delisted from Spamhaus et al.
    • It helps (for deliverability) if you can set a reverse DNS / PTR record that matches your mail server’s hostname. Not all providers allow that, but many do (see the sketch after this list).
    • You can always use an external SMTP gateway like AWS SES, and, for personal use, this should be well within your budget.
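
    Once things are set up, a quick way to sanity-check the DNS side of deliverability (the hostnames, DKIM selector, and IP below are placeholders):

        dig +short TXT example.com                       # SPF, published as a TXT record
        dig +short TXT _dmarc.example.com                # DMARC policy
        dig +short TXT default._domainkey.example.com    # DKIM public key (the selector varies)
        dig +short -x 203.0.113.25                       # PTR should match your mail server's hostname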