



Of course! Let me know how you run your containers and I may be able to help on that side too


Sure! I use Kaniko (although I see now that it’s not maintained anymore). I’ll probably pull the image in locally to protect it…
Kaniko handles the Docker-in-Docker part, and I found an action that I use, but it looks like that was taken down… Luckily I archived it! Make an action in Forgejo (I have an infrastructure group that I add public repos to for actions). So this one is called action-koniko-build and all it has is this action.yml file in it:
name: Kaniko
description: Build a container image using Kaniko
inputs:
  Dockerfile:
    description: The Dockerfile to pass to Kaniko
    required: true
  image:
    description: Name and tag under which to upload the image
    required: true
  registry:
    description: Domain of the registry. Should be the same as the first path component of the tag.
    required: true
  username:
    description: Username for the container registry
    required: true
  password:
    description: Password for the container registry
    required: true
  context:
    description: Workspace for the build
    required: true
runs:
  using: docker
  image: docker://gcr.io/kaniko-project/executor:debug
  entrypoint: /bin/sh
  args:
    - -c
    - |
      mkdir -p /kaniko/.docker
      echo '{"auths":{"${{ inputs.registry }}":{"auth":"'$(printf "%s:%s" "${{ inputs.username }}" "${{ inputs.password }}" | base64 | tr -d '\n')'"}}}' > /kaniko/.docker/config.json
      echo Config file follows!
      cat /kaniko/.docker/config.json
      /kaniko/executor --insecure --dockerfile ${{ inputs.Dockerfile }} --destination ${{ inputs.image }} --context dir://${{ inputs.context }}
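The shell one-liner in the args block is just hand-building a Docker-style config.json whose "auth" value is base64("username:password"). Here's a minimal Python sketch of the same transformation, with made-up placeholder values (the registry/username/password below are illustrations, not anything from the action):

```python
import base64
import json

# Hypothetical values -- the real action fills these in from its inputs.
registry = "forgejo.example.com:3000"
username = "ci-user"
password = "s3cret"

# Same thing the shell pipeline `printf "%s:%s" ... | base64 | tr -d '\n'`
# produces: base64 of "username:password", with no trailing newline.
auth = base64.b64encode(f"{username}:{password}".encode()).decode()

config = {"auths": {registry: {"auth": auth}}}
print(json.dumps(config))
```

Kaniko reads this file from /kaniko/.docker/config.json to authenticate its push, which is why the action writes it before calling the executor.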
Then, you can use it directly like:
name: Build and Deploy Docker Image
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: docker
    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Get current date # This is just how I label my containers, do whatever you prefer
        id: date
        run: echo "date=$(date '+%Y%m%d-%H%M')" >> "$GITHUB_OUTPUT" # ::set-output is deprecated
      - uses: path.to.your.forgejo.instance:port/infrastructure/action-koniko-build@main # This is what I said above, it references your infrastructure action, on the main branch
        with:
          Dockerfile: cluster/charts/auth/operator/Dockerfile
          image: path.to.your.forgejo.instance:port/group/repo:${{ steps.date.outputs.date }}
          registry: path.to.your.forgejo.instance:port/v1
          username: ${{ env.GITHUB_ACTOR }}
          password: ${{ secrets.RUNNER_TOKEN }} # I haven't found a good secret option that works well, I should see if they have fixed the built-in token
          context: ${{ env.GITHUB_WORKSPACE }}
I run my runners in Kubernetes in the same cluster as my Forgejo instance, so this all hooks up pretty easily. Lmk if you want to see that setup if it’s relevant. The big thing is that you’ll need the runners to be privileged, and there’s some complicated stuff where you need to run both the runner and the “dind” container together.


But you are charged for it.


Forgejo runners are great! I found some simple actions to do docker in docker and now build all my images with them!


I picked up a few today from smaller online stores before they realized. Will have to keep the servers running somehow
When I worked at Best Buy over 10 years ago they had the exact same propaganda. You know, instead of union dues you could buy an Xbox! (From us, no less!)
…okay I added that last bit but it was implied


Good note, and good callout, we should always call out these things.
But yes if you’re self hosting and you both have a public facing instance and allow open registration, you are a much much braver person than I.


Okay, that changes things. If they turned off these guardrails then that was on them; never blindly trust an LLM like that


Oh my god, really? Cursor explicitly asks you to confirm each command and could only do this in “yolo” mode. Not having these guardrails is insane


Okay, that makes so much sense, because I knew I had calling in Element before, but they wanted me to set up all this extra stuff. Is the plugin still the way to do it?


Wait there’s a jitsi plugin?


Element on Matrix is the only one I’m aware of - but it’s not the easiest to set up. I would try creating an account on matrix.org’s server just temporarily to try it out and see if it fits what you’re looking for. I like the decentralized nature of it, but the support is very piecemeal, and onboarding people essentially needs a class.


Holy setup batman. Was thinking it was going to be another container I spin up, but it’s enabling kernel modules, needs IOMMU, needs a ton of setup and then it looks like you still have to compile it? For now at least that’s above my needs


Lawyers and marketers who refuse to see nuance and view everything as a potential threat


But the vast majority of crawlers don’t care to do that. That’s a very specific implementation for this one problem. I actually did work at a big scraping farm, and if they encounter something like this, they just give up. It’s not worth it to them. That’s where the “worthiness” check is: you didn’t bother to do anything to gain access.
I didn’t know we were supposed to get high! We just got out of the family for a while


That’s counting on one machine using the same cookie session continuously, or on them coding up a way to share the tokens across machines. That’s not how the bot farms work


I ran a single server with only me and 2 others or so, and then saw that I had thousands of requests per minute at times! Absolutely nuts! My cloud bill was way higher. After adding Anubis it dropped down to just our requests, and the bills dropped too. Very very strong proponent now.


This dance to get access is just a minor annoyance for me, but I question how it proves I’m not a bot. These steps can be trivially and cheaply automated.
I don’t think the author understands the point of Anubis. The point isn’t to block bots from your site completely; bots can still get in. The point is to put up a problem at the door to the site. That problem, as the author states, is relatively trivial for the average device to solve; it’s meant to be solvable by a phone or any consumer device.
The actual protection mechanism is scale: the cost of solving adds up. Bot farms aren’t one single host or machine, they’re thousands, tens of thousands of VMs running in clusters constantly trying to scrape sites. So for them, calculating something that trivial is cheap once, but very very costly at scale. Say calculating the hash once takes about 5 seconds. Easy for a phone. But say that’s 1000 scrapes of your site: that’s now 5000 seconds to scrape, roughly an hour and a half. Now we’re talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that professionally scrapes content, I know they know this. Most companies will back off after trying to load a page that takes too long or is too intensive, and that is why we see the dropoff in bot attacks. It’s simply not worth it for them to scrape the site anymore.
So Anubis is “judging your value” by asking, “Are you willing to put your money where your mouth is to access this site?” For a consumer it’s a fraction of a fraction of a penny in electricity spent for that one page load, barely noticeable. For large bot farms it’s real dollars wasted on my little lemmy instance/blog, and thankfully they’ve stopped caring.
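The back-of-the-envelope math above can be sketched in code. This is a hedged toy version of an Anubis-style hash puzzle: the exact hash scheme and difficulty here are my assumptions for the demo, not Anubis’s actual implementation:

```python
import hashlib
import time

def solve(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce) starts with
    `difficulty` hex zeros -- the same shape of puzzle Anubis hands out."""
    target = "0" * difficulty
    nonce = 0
    while True:
        if hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith(target):
            return nonce
        nonce += 1

# One solve at a low demo difficulty: trivial on any consumer device.
start = time.time()
nonce = solve("example-challenge", 4)
print(f"solved with nonce {nonce} in {time.time() - start:.2f}s")

# The economics at bot-farm scale, using the ~5 s/solve figure from above:
seconds_per_solve = 5
scrapes = 1000
hours = scrapes * seconds_per_solve / 3600
print(f"{scrapes} scrapes x {seconds_per_solve}s = {hours:.1f} hours of compute")
```

Verification on the server side is a single hash, which is the asymmetry that makes this cheap to run and expensive to farm.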
They killed off OpenVPN support a few years ago, and I’m glad I dropped them. They don’t care about power users, so they don’t care about my money either. Good riddance