blog

  1. Install Stable Diffusion in WSL with AMD Radeon ROCm

    Recently released Adrenalin 24.12.1 driver unlocks new AI-related potential!

    Recently when upgrading my AMD Adrenalin driver, a line in the release notes piqued my interest:

    Official support for Windows Subsystem for Linux (WSL 2) enables users with supported hardware to develop with AMD ROCm™ software on a Windows system, eliminating the need for dual boot set ups.

    Historically, AMD ROCm support has been pretty limited compared to NVIDIA CUDA, which has worked in Windows Subsystem for Linux (WSL 2) for a while. So this new driver seemed like kind of a big deal, and I thought I'd check it out!

    AMD's article is short and sweet. Obviously you'll need the latest AMD Adrenalin Edition GPU driver installed, and also Windows Subsystem for Linux. Microsoft's official documentation is good, and I've gone through my own installation experience here.
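For reference, the WSL-side install looks roughly like this. This is a sketch following AMD's "ROCm on WSL" guide; the installer package version and repo URL change between releases (the `.deb` filename below is a placeholder), so grab the current `amdgpu-install` package from repo.radeon.com first.

```shell
# Run inside your WSL 2 Ubuntu shell.
# Placeholder filename -- download the current .deb from repo.radeon.com:
sudo apt install -y ./amdgpu-install_VERSION_all.deb

# --usecase=wsl,rocm and --no-dkms are what AMD's WSL guide specifies:
# the kernel driver lives on the Windows side, so no DKMS module is built.
sudo amdgpu-install -y --usecase=wsl,rocm --no-dkms

# Sanity check: the GPU should show up in the runtime's agent list.
rocminfo | grep -E 'Name:|Marketing'
```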

    Once you have the amdgpu driver installed, you can run rocminfo to confirm everything is working. You should see output like this:

    *******
    Agent 2
    *******
      Name:                    gfx1100
      Marketing Name:          AMD Radeon RX 7900 XTX
      Vendor Name:             AMD
      Feature:                 KERNEL_DISPATCH
      Profile:                 BASE_PROFILE

    Installing the Stable Diffusion Web UI is also easy. You'll need Python 3.10 and Git installed (sudo apt install python3 git) if you don't have them already. Then just pick an installation folder and clone the stable-diffusion-webui repository to your local machine: git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
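Put together, the setup can be scripted like so. This assumes an Ubuntu/Debian-based WSL distro; `~/ai` is just an example folder, and the Python 3.10 check is there because the launcher targets that version.

```shell
# Prerequisites (python3-venv is needed because webui.sh builds its own venv).
sudo apt update
sudo apt install -y python3 python3-venv git

# webui.sh targets Python 3.10 -- warn if the distro default is something else.
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
[ "$ver" = "3.10" ] || echo "note: webui expects Python 3.10, found $ver"

# ~/ai is an arbitrary example install folder.
mkdir -p ~/ai && cd ~/ai
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
```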

    Fixing AMD-specific problems in Stable Diffusion

    Once you have the Stable Diffusion code, you should be able to run ./webui.sh to start the Web UI. However, more than likely you'll run into a couple of specific errors that prevent it from starting:

    • Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

      By default, PyTorch is trying to talk to the NVIDIA CUDA driver. Obviously on an AMD GPU, that's not going to work. Helpfully, this error message tells us how to fix the problem.

    • RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

      I'm not sure if this is a driver bug or what, but apparently half-precision mode isn't working under ROCm. You can fix this by adding --precision full --no-half to your COMMANDLINE_ARGS.

    To fix both problems, simply edit your webui-user.sh file, find and un-comment the line (remove the leading #) with export COMMANDLINE_ARGS, and customize it like so:

    export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"

    Edit webui-user.sh with GNU nano
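If you'd rather script the change than open an editor, a sed one-liner does the same thing. This assumes the stock webui-user.sh ships the line commented out as `#export COMMANDLINE_ARGS=""` (worth verifying against your copy before running):

```shell
# Un-comment and set COMMANDLINE_ARGS in one step (GNU sed).
sed -i 's|^#\?export COMMANDLINE_ARGS=.*|export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"|' webui-user.sh

# Confirm the edit took.
grep COMMANDLINE_ARGS webui-user.sh
```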

    Save the file, and now you should be able to run ./webui.sh to start the Web UI and begin generating images with your AMD Radeon GPU! Once the Web UI is running, you can open it in your browser by navigating to http://127.0.0.1:7860

    A shiba inu sitting at a table, no methamphetamines

    Posted 2024-12-21 12:02:00 CST by henriquez. Comments
  2. successfully reversed time

    johNny wAs oBsEsseD wiTH gOiNg fASt. HELEN Was OBseSSEd wIth GOiNG SlOw. I wENT INfinITy stepS FuRther AnD reveRsed ALL THE way bacK To aLL THe way BACk tO thE beGINninG. IRoNiCaLLY i dON't HaVe A loT Of TimE tO eXPLAiN, AND cERTainly DON't unDERSTaNd All THE afTer-EfFEcts. WErE NeW iTerAtIOnS spAwned, oR WerE TheY ALReady RunniNG? I DON't KNOw, BUt I THouGHt You shOuLd knOw.

    Posted 2024-12-19 00:00:00 CST by henriquez. 1 comment
  3. I quit tech.

    I pulled the RTX 4080 from the server and sold it for ETH. I've had this midlife crisis on a slow burn for a very long time now. Three years ago I said I would drop out and I finally did. My disappearing act is complete.

    Working for tech companies never made me feel good. The tech industry is not making the world a better place. The Internet is mostly destroyed and our top innovators are focused on putting more people out of jobs with their stupid AI language models. It felt good to burn bridges on my way out.

    Luckily I found an infinite money hack so I'm set for life as long as I keep playing the game. I live in your critical infrastructure now, maintaining, advancing, defending. It feels good to do something useful, that helps people. Nobody around me knows what a psycho I really am. I charm, blend in, fade to gray.

    Now I am finally living life on my own terms and it feels good.

    Posted 2024-05-12 14:49:00 CST by henriquez. Comments
  4. Stable Diffusion is trippy

    I just started playing around with Stable Diffusion, an "AI" image-generation model. It can be run locally via a very nice Web UI, and it generates images based on text prompts, provided you have enough GPU power. (Generally, bigger images take longer and consume more GPU memory.) I'm just scratching the surface, and some of the results are trippy as hell. So far, most of the images are pretty hallucinatory, only vaguely relating to my prompts, and with all the spookiness inherent to a machine trying to replicate something approaching "art." More images after the click to not wreck my bandwidth.

    Read More

    Posted 2023-12-11 12:42:00 CST by henriquez. 1 comment
  5. Why I'm not hosting Matrix / Mastodon / etc. services

    This is a frequently asked question, and here's the answer. I love decentralized and federated protocols, but for security reasons I won't host Matrix or Mastodon services on this domain. If I had the time and patience, I'd love to create my own Matrix implementation, but it's a tall order. Matrix has always been a very complicated protocol to implement, and with the recent release of Matrix 2.0 it got even more complicated. Similarly, I'd love for the social features on this site to interoperate with Mastodon, but it has a very particular implementation of the ActivityPub protocol which would be a ton of work to recreate.

    The vast majority of individuals hosting these services appear to be using publicly released Docker containers, or similar, which is great for spinning servers up quickly but bad (IMO) from a security standpoint. Trusting other people with the specifics of your packages is one thing, but spinning up other people's virtual machines on your own network(s) is a dangerous game. Attack surface is a big deal in cybersecurity, which is why I prefer to roll my own protocol implementations when possible. At least that way, if I fuck up, I know who to blame when shit goes south.

    Matrix, Mastodon, and the Fediverse in general are amazing innovations, and likely the future of social interactions online. There are very smart people with domain expertise running and hosting these services, but I'm not one of them, nor do I care to be.

    For people with shared goals and vision, don't fret. Obsessive Facts is a small operation, and I am likely not the threat actor you're looking for. Again, there are smart people with domain expertise working on the problems listed on the About Us page. You can either trust their implementation or roll your own, but sending toots on this domain won't change the inevitable outcome.

    Posted 2023-09-24 01:42:00 CST by henriquez. Comments