
Comments

bryced 12d
Try out the pre-release like this:

`pip install imaginairy==6.0.0a0 --upgrade`

New 512x512 model supported with all samplers and inpainting

New 768x768 model supported with the DDIM sampler only

Upscaling and depth maps are not yet supported.

To be honest I'm not sure the new model produces better images, but maybe they'll release some improved models in the future now that they have the pipeline open.
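The compatibility rules above (512x512 with all samplers, 768x768 with DDIM only) can be sketched as a small lookup helper. This is a hypothetical illustration, not imaginAIry's actual API; the model and sampler names are assumptions for the example.

```python
# Hypothetical sampler-compatibility table for the two SD 2.0 models
# described above: the 512x512 model works with every sampler, the
# 768x768 model only with DDIM.
ALL_SAMPLERS = ["plms", "ddim", "k_euler", "k_euler_a"]

MODEL_SAMPLERS = {
    "sd-2.0-512": ALL_SAMPLERS,   # 512x512: all samplers (and inpainting)
    "sd-2.0-768": ["ddim"],       # 768x768: DDIM only for now
}

def sampler_supported(model: str, sampler: str) -> bool:
    """Return True if `sampler` can be used with `model`."""
    return sampler in MODEL_SAMPLERS.get(model, [])
```

A caller could check `sampler_supported("sd-2.0-768", "ddim")` before dispatching a job and fall back to the 512 model otherwise.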

superpope99 12d
This seems to work for me. Incredible work turning this around so quickly!
davely 12d
I've been working on a web client[1] that interacts with a neat project called Stable Horde[2] to create a distributed cluster of GPUs that run Stable Diffusion. Just added support for SD 2.0:

[1] https://tinybots.net/artbot?model=stable_diffusion_2.0

[2] https://stablehorde.net/
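Stable Horde is driven over HTTP: a client submits a generation request naming the model it wants, and a volunteer GPU worker picks it up. Below is a minimal sketch of building such a request body; the endpoint path, field names, and parameter values are assumptions based on the public docs, not verified here.

```python
import json

# Assumed async-generation endpoint; check the Horde docs before using.
HORDE_URL = "https://stablehorde.net/api/v2/generate/async"

def build_request(prompt: str, model: str = "stable_diffusion_2.0") -> dict:
    """Build the JSON body for an async image-generation request."""
    return {
        "prompt": prompt,
        "models": [model],  # workers advertising this model pick up the job
        "params": {"width": 512, "height": 512, "steps": 30},
    }

body = json.dumps(build_request("a scenic landscape"))
```

The actual client would POST `body` with an API key header and then poll a status endpoint for the finished image.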

Smaug123 12d
Nicely done; this seems to work for me. In my own attempt, I got stock Stable Diffusion 2.0 "working" on M1 using the GPU but it's producing some of the most cursed (and low-res) images I've ever seen, so I've definitely got it wrong somewhere. The reader can infer the usual rant about dynamic typing causing runtime misconfiguration in Python.
yreg 12d
As with previous macOS Stable Diffusion tools, this is Apple Silicon only.
gbighin 12d
Requirements:

> A decent computer with either a CUDA supported graphics card or M1 processor.

Why so? How does an M1 processor replace CUDA in a way an x86_64 processor can't? Do they use ARM assembly?
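(The M1 path typically goes through the GPU, not ARM assembly: PyTorch ships an MPS backend that runs tensor ops on Apple Silicon's GPU via Metal, whereas a plain x86_64 CPU has no comparable accelerator. A minimal sketch of the usual device-selection logic, written backend-agnostically so it doesn't require torch installed:)

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick the fastest available torch device string.

    In real code the flags would come from torch.cuda.is_available()
    and torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"  # NVIDIA GPU via CUDA
    if mps_available:
        return "mps"   # Apple Silicon GPU via Metal Performance Shaders
    return "cpu"       # fallback: slow for diffusion models
```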

typest 12d
How much of this is Stable Diffusion 2, and how much is something else? For instance, the text-based masks, the syntax like AND and OR, the face upscaling — are these all part of Stable Diffusion 2 (and can they be used via other Stable Diffusion APIs)?
TekMol 12d
What is a good VM to try this out?

Something on AWS, Hetzner etc?

88stacks 12d
awesome library, I hadn't seen this before. I just added it to my stable diffusion api service so you can query stable diffusion 2.0 if you don't have GPUs set up currently: https://88stacks.com
fareesh 12d
What's the minimum VRAM requirement?
egeozcan 12d
This would have been perfect if it worked on Windows too. I need to look into dual booting Linux (opening a can of worms) just to give it a try, as WSL doesn't seem to cut it.
algon33 12d
Nice, a friend was looking for something like this.
lostintangent 12d
Wow, this looks awesome! I noticed that the sample notebook doesn’t include SD 2.0 by default, and says that it’s too big for Colab. Is that a disk size/RAM limitation?

As an aside, it would be cool if you versioned that notebook in the repo, so that it could be easily opened with Codespaces.

underlines 11d
is it possible to add volta or xformers for a massive speed increase?

https://github.com/VoltaML/voltaML-fast-stable-diffusion

greggh 12d
This is awesome, but I still like using the GUI for M1/M2 Macs, DiffusionBee.

https://github.com/divamgupta/diffusionbee-stable-diffusion-...

semicolon_storm 12d
Pretty slick, SD 2.0 performance actually seems to be better than 1.5?