Just thinking of the poor sods that are going to be working today (and all night).

Oh, and more pics, since the Bingilator is not without its randomness.

    • M0oP0o@mander.xyzOPM · 6 days ago

      Wonder where the Bingilator got the splash from. I sometimes wonder who it is impersonating when some pictures come out stylish.

        • tal@lemmy.today · 7 days ago

          UI: ComfyUI

          Model: STOIQNewrealityFLUXSD_F1DAlpha

          The image is an illustration.

          A raccoon sitting on a stool at a desk. The viewer is looking at the raccoon from behind.

          On top of the desk, on the left-hand side of the desk, is a blue tray named In.

          In contains a tall stack of papers.

          In is labeled “In”.

          On top of the desk, on the right-hand side of the desk, is a red tray named Moon.

          Moon contains a small stack of papers.

          Moon is labeled “Moon”.

          Next to the desk, there is a metal garbage can labeled “Non-Moon”. The garbage can is heaping high with crumpled wads of paper. There are crumpled wads of paper on the floor by the trash can.

          The raccoon is holding and looking intently at a piece of paper with an anime picture of Sailor Moon on it.

          raccoon-workflow.json.xz.base64
          /Td6WFoAAATm1rRGBMDfDvFdIQEWAAAAAAAAAHRmztXgLvAHV10APYKAFxwti8poPaQKsgzf7gNj
          HOV2cM7dN97GoIvoTF/iSMYEQoJZBJzI3UDxBXrl3MvcMb5e6h3lATBYjawf1klb5xXG0y783VSu
          rsL7JTqCcpc7ml6EBcQysA/lczXi4/z4NEU2MzJ6gMxLs5qhHCWRzV2fkgD1H3KHbl2of4CNliwR
          JNef6hoZP4HFMLybTP66HNwa0oszmGtepDwWHXSrlYVAso8imkQU11LM4qXGRXQKKo6EvjyTeYPG
          eZvQb4FxhvQIkQHLQWfLOyENpBMbl2mOgW0siOFXT72aS9mBHSeYP8AzU1giF+yk4IQh8W8UCh+k
          wncXTpG2M2mdiF0t0fMlAoOmMD0dDDSlpTIhgkEUgqjuFzi7YRnIgI9xJ4RbMQqrVgcEj4C8LWsP
          q8Pt9DnxVUmUQXwS04QFJpghbh6a73UsV1kjFiQZ9yo4RhR5ouTehVLXwyXn9E0/MW3nIZGdW6cW
          BqD/FwjDedfDigE0VsQAWD9OfH1I4pjkNePpmoHErOG2KQ9vec7Rd5iPWUIgxnBiNdg8Hczv3Ggy
          e02Chw1yzKXcTaIVW6tMxm5H62cyRRwy5a0Rn3KVLVQTyaZt2f+GplozjiLe3T1X8E7sKPoVWNsX
          cVGRPGouZdNO00VQlpmaKnpf9BXG+ujVGn0WCzsTBZZLqJNfGE5CwOMGqqib2L3pNFesoaku2U4n
          andtH2bHkiNNf1DpDmkAuNuGvmKRHfBXHVrU6+jcUbAjBZxe4kYsPP2+f5vJqNIWRPankSGF3+GF
          xjD4ntouwO3ruBHQlRMDf0Lcd6qy4ICW3OakgceBbk2vT42s9thrPuF779tKQ63RSN+nL/R9GyOb
          Tr7qEL71NSRqsK/hDhb2+lrcy8mLsN8wktMs0h6sMyzl+vtWOTZ9dtpkEJy604v9Tor+T6zmwsQi
          Ou/32w7U3tH2UVFne9B/cjAdWe758OZyjzqq0AlTqaC7Bi4Zq0xY0yRrsZGrMOnIO+Ymd5l8FUyZ
          wBhmgPNUBntFJgS9wmA+6xQDi+CIyz1nJqJbT6o7Ah3XGYZbrYgZ78IfUxfcYmu/t2ESYzPN3YIg
          eRcfwlREKU8MObl3TijlXyV+jUdzZY5gYxoICjkAbPTmp/y9QH+ej9KgUnnK/9JE+avY5YFovhq7
          vQlhfeqtIFkr78/zDG0g/u36awI8pm1IxLSyalbAzQXz3Lbqr2p/4hepLeu9l1nycn+hV23ACZ5G
          +dfoRNQRxfd/0jieMD5sCmzvjFdFZF56XvUJwHPAg9hmBCtKnfWZJXQlPN5EsOp8cFtDA86vWyEo
          DG4+Y2lIovz4rCBcLmyEZvPtdJxvp6ebl+ocRcGki3fyI+9fM7N2vpKh9yzhZ9Ew324Ge4//Dp9O
          +udl4DYCzXmRTF3SbDAGUJBqCmZpB7L4ztv18QuCisNizaUSJ5UH3kkm+7EbB3qTtX1eT5s6b4sr
          duL8/Vrkjb33GKPJ1610/7/yX/v7CSmt9+tct2b4YoW9XX5Ljg+5pQlhDWQ47ICWYFEDy4o4ZpiU
          ZJAYH4BMjqArKErqWmsCij+OIpPuYfSupQ57P+RFUn2Y/Tlv5pJe0aokMgVpsH25n/JgwALZkmi8
          +CGQlFPeyZ5xpLSZmYTdaKkWmZWNJYgAy/auRNyfImoPFlsXWedSlVAJiSImZ4nbPW5oY1k/dq+8
          WjE/0zsMNG5mj/F8UG/Gl1Ule49LOdNCMWcYqIm950moFHIyFiT1pyPN20+/vManQgvGu7JQFCld
          qGtF+NSMVkaFxMqJN7RSZKMkAGqRLoLKkbWsLBx8/3Cpo96Z5bNqdjgWbhVWvwX+zXToHyFl9nec
          jNIQZofnJVsVgj44/udVMw1wAtnjzAr6nNs2ONR00IYtSE0ylEUielWh9RX917BlLeOKqvFTxSAn
          rU/DonXvgYUImmTxj+W0hE/RZVS429Yaf7xXVAcow+dM1xV9eKGb7JllXk2Y+grb3o7h49W8ymDo
          PSV0xpccatO7Ctdh7FA0az7hPM/ryRMq+nNEOSaGOAucBbBeo/rTCyUwcRElRK05lD12zH1ozbyr
          jCIVyY+z1oXEZ4sdVfZ3aIh0h/UUlY2xhj4SsDBTw6LOmlCLT0VgUN3LuyHWwck2Arhh8Jqj7JUv
          XpShbC7Li4Az4qq3lDGbcQ5BLEpA4LIM/XL9/OWbEuk9MRbbw4wn6E5gvoH5r+hX9eGPQdrUA8Lv
          lqXCxIQ5Acd6wAriL3TTGUOwhx1RXHJdDmHUH9Z9GN652Bv6x1DdIjLGScL/74I67eT4V80Eh73T
          temEf2wiL6zsc5rA48m43Zkfgx9sejkElYtMFH8sWZxJ74ISIQFmD4vQSrJK7Y9UUYzMaH+P6oWV
          SxvYuM2nLkgxneVQdpgJf/0W7pTrXpYGbnZB9GY9ZOY8TQpey7IPXbEpWlRWDnfAL7mUedGqAd++
          NSQwarPZyFjIkQizkP1XbBA+xAMD7xrWJkAL+VlGgC9IC1cUYAAAHGLPjJZhxw8AAfsO8V0AAEdP
          jlmxxGf7AgAAAAAEWVo=
          
          • Kojichan@lemmy.world · 6 days ago

            ComfyUI keeps crapping out on me when I so much as sneeze. The nodes get screwy, the canvas wigs out and becomes inoperable unless I kill the server and start it up again.

            Are there any other alternatives that you can think of?

            • tal@lemmy.today · 6 days ago

              When a model is initially being loaded, I see some slowdown, but once it’s loaded, I don’t – same with Automatic1111. I regularly do (non-GPU-using) stuff on another workspace while rendering and can’t detect any slowdown.

              So I don’t know what might be the cause. Maybe memory exhaustion? A system that’s paging like mad might do that, I guess.

              As to an alternative, it depends on what you want to do.

              If you’ve never done local GPU-based image generation, then Automatic1111 is probably the most-widely-used UI (albeit the oldest).

              If you want to run Flux and Flux-derived models – which is what I’m using to generate my image above – I recall reading that while Automatic1111 cannot run them (maybe that’s changed; I haven’t been monitoring the situation), the Forge UI can. But I’ve never used Forge, so I can’t provide any real guidance on setup.

              kagis

              Yeah, looks like Automatic1111 can’t do Flux:

              https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16311

              And it looks like Forge can indeed run Flux:

              https://sandner.art/flux1-in-forge-ui-setup-guide-with-sdsdxl-tips/

              If you’re short of VRAM or RAM or something, though, I don’t know whether Forge will do better than ComfyUI. I think I’d at least try to diagnose what’s causing the issue first, as there are some things that can be done to reduce resource usage, like generating images at lower resolution and relying more heavily on tile-based upscaling. At least some of the frontends (I haven’t played around with this in ComfyUI) also have command-line options to reduce VRAM usage in exchange for longer compute time, like --medvram or --lowvram in Automatic1111.

              I don’t think that there’s a platform-agnostic way to see VRAM usage. I use a Radeon card on Linux, and there, the radeontop command will show VRAM usage. But I don’t know what tools one would use in, say, Windows to look up the same numbers.
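
              If you have PyTorch around (ComfyUI needs it anyway), one roughly platform-agnostic option is to just ask it. A minimal sketch, assuming a CUDA or ROCm build of PyTorch:

                # Print free/total VRAM for the default GPU as PyTorch sees it.
                import torch

                if torch.cuda.is_available():
                    free, total = torch.cuda.mem_get_info()  # bytes, for the whole device
                    gib = 1024 ** 3
                    print(f"VRAM free: {free / gib:.1f} GiB of {total / gib:.1f} GiB")
                else:
                    print("No CUDA/ROCm device visible to PyTorch")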

              On Linux, top will show regular memory usage; you can hit “M” to sort by memory usage. I’m pretty out of date on Windows and macOS – probably Task Manager or mmc on Windows, and maybe top on macOS as well? You may know better than me if you’re accustomed to those platforms.
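
              For regular RAM and swap, if you’d rather not juggle per-platform tools, a quick sketch using the psutil package (pip install psutil; the same code runs on Linux, Windows, and macOS):

                # Show RAM and swap usage via psutil (cross-platform).
                import psutil

                gib = 1024 ** 3
                vm = psutil.virtual_memory()
                sw = psutil.swap_memory()
                print(f"RAM:  {vm.used / gib:.1f} / {vm.total / gib:.1f} GiB used ({vm.percent}%)")
                print(f"Swap: {sw.used / gib:.1f} / {sw.total / gib:.1f} GiB used ({sw.percent}%)")

              If swap usage keeps climbing while ComfyUI is misbehaving, that would point at memory exhaustion.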

              I can maybe give some better suggestions if you can list the OS you’re using, what GPU you’re running it on, how much VRAM and RAM is on the system, and, if you can determine it, how much of each is being used.

              • Kojichan@lemmy.world · 6 days ago

                Thank you so much for that wonderful information! No joke!

                I’ll have some new things to test when I actually start generating. I’m doing the local generation on Linux. I have a 12GB GPU and 32GB RAM, so I know about some slowdowns.

                But I was talking specifically about ComfyUI, the native web app that you launch in a browser. I can work with it for a little bit, but once I stay in the window too long (even without generating), it starts flipping frames between the nodes and a different set of nodes.

                Not sure what that issue is. Can’t even save or load workspaces properly… I’m going to blame the Snap Firefox I’m using… Maybe I’ll try something else that’s not a Snap, or a Flatpak.

                • tal@lemmy.today · 6 days ago

                  it starts flipping frames between the nodes and a different set of nodes.

                  Yeah, I don’t know what would cause that. I use it in Firefox.

                  Maybe try opening it in Chromium or a private window to disable addons (if you have your Firefox install set up not to run addons in private windows?)

                  I’m still suspicious of resource consumption, either RAM or VRAM. I don’t see another reason that you’d suddenly smack into problems when running ComfyUI.

                  I’m currently running ComfyUI, Firefox, and some relatively light other stuff, and I’m at 23GB of RAM used (by processes, not disk caching), so I wouldn’t expect you to be running into trouble on memory unless you’ve got some other hefty stuff going on. I run it on a machine with 128GB of RAM and 128GB of NVMe swap, so I’ve got headroom, but I don’t think you’d need more than what you have if you’re generating stuff on the order of what I am.

                  goes investigating

                  Hmm. Currently all of my video memory (24GB) is being used, but I’m assuming that that’s because Wayland is caching data or something there. I’m pretty sure that I remember having a lot of free VRAM at some point, though maybe that was in X.

                  considers

                  Let me kill off ComfyUI and see how much that frees up. Operating on the assumption that nothing immediately re-grabs the memory, that’d presumably give a ballpark for VRAM consumption.

                  tries

                  Hmm. That went down to 1GB for non-ComfyUI stuff like Wayland, so ComfyUI was eating all of that.

                  I don’t know. Maybe it caches something.

                  experiments further

                  About 17GB while running (this number and the ones below include the ~1GB used by other stuff), down to 15GB after the pass is complete. That was for a 1280x720 image, and I was loading the SwinIR upscaler; while not used, it might be resident in VRAM.

                  goes to set up a workflow without the upscaler to generate a 512x512 image

                  Hmm. 21GB while running. I’d guess that ComfyUI might be doing something to try to make use of all free VRAM, like doing more parallel processing.

                  Lemme try with a Stable Diffusion-based model (Realmixxl) instead of the Flux-based Newreality.

                  tries

                  About 10GB. Hmm.

                  kagis

                  https://old.reddit.com/r/comfyui/comments/1adhqgy/how_to_run_comfyui_with_mid_vram/

                  It sounds like ComfyUI also supports the --midvram and --lowvram flags, but that it’s supposed to automatically select something reasonable based on your system. I dunno, haven’t played with that myself.

                  tries --lowvram

                  I peak at about 14GB for ComfyUI at 512x512; it was 13GB for most of the generation.

                  tries 1280x720

                  Up to 15.7GB, down to 13.9GB after generation. No upscaling, just Newreality.

                  Hmm. So, based on that testing, I wouldn’t be incredibly surprised if you’re exhausting your VRAM running Flux on a GPU with 12GB. I’m guessing that it might run dry on cards below 16GB (keeping in mind that other stuff looks to be consuming about 1GB for me). I don’t think I have a way to simulate the card having less VRAM than it physically has to see what happens.

                  Keep in mind that I have no idea what kind of memory management is going on here. It could be that pytorch purges stuff if it’s running low and doesn’t actually need that much, so these numbers are too conservative. Or it could be that you really do need that much.
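
                  If you want to poke at that yourself, PyTorch does distinguish between memory that live tensors are using and memory its caching allocator is merely holding on to. A rough sketch, assuming a CUDA/ROCm build and only meaningful when run inside the ComfyUI process (say, from a custom node):

                    # Compare what PyTorch is actually using vs. what it has reserved.
                    import torch

                    gib = 1024 ** 3
                    print(f"allocated: {torch.cuda.memory_allocated() / gib:.1f} GiB")      # live tensors
                    print(f"reserved:  {torch.cuda.memory_reserved() / gib:.1f} GiB")       # held by the caching allocator
                    print(f"peak:      {torch.cuda.max_memory_allocated() / gib:.1f} GiB")  # high-water mark

                    torch.cuda.empty_cache()  # hand cached-but-unused blocks back to the driver
                    print(f"reserved after empty_cache: {torch.cuda.memory_reserved() / gib:.1f} GiB")

                  If “reserved” sits well above “allocated”, a chunk of what radeontop reports is just cache rather than memory that is genuinely needed.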

                  Here’s a workflow (it generates a landscape painting, something I did a while back) using a Stable Diffusion XL-based model, Realmixxl (note: the model and its webpage include NSFW content), which ran with what looked like a maximum VRAM usage of about 10GB on my system with the attached prompt/settings. You don’t have to use Realmixxl; if you have another model, you should be able to just choose that one instead. But maybe try running it and see if those problems go away? If that works without issues, that’d make me suspect that you’re running dry on VRAM.

                  realmixxx.json.xz.base64
                  /Td6WFoAAATm1rRGBMDODKdlIQEWAAAAAAAAAJBbwA/gMqYGRl0APYKAFxwti8poPaQKsgzf7gNj
                  HOV2cLGoVLRUIxJu+Mk99kmS9PZ9/aKzcWYFHurbdYORPrA+/NX4nRVi/aTnEFuG5YjSEp0pkaGI
                  CQDQpU2cQJVvbLOVQkE/8eb+nYPjBdD/2F6iieDiqxnd414rxO4yDpswrkVGaHmXOJheZAle8f6d
                  3MWIGkQGaLsQHSly8COMYvcV4OF1aqOwr9aNIBr8MjflhnuwrpPIP0jdlp+CJEoFM9a21B9XUedf
                  VMUQNT9ILtmejaAHkkHu3IAhRShlONNqrea4yYBfdSLRYELtrB8Gp9MXN63qLW582DjC9zsG5s65
                  tRHRfW+q7lbZxkOVt4B21lYlrctxReIqyenZ9xKs9RA9BXCV6imysPX4W2J3g1XwpdMBWxRan3Pq
                  yX5bD9e4wehtqz0XzM38BLL3+oDneO83P7mHO6Cf6LcLWNzZlLcmpvaznZR1weft1wsCN1nbuAt5
                  PJVweTW+s1NEhJe+awSofX+fFMG8IfNGt5tGWU5W3PuthZlBsYD4l5hkRilB8Jf7lTL60kMKv9uh
                  pXv5Xuoo9PPj2Ot2YTHJHpsf0jjT/N5Z65v9CDjsdh+gJ5ZzH8vFYtWlD1K6/rIH3kjPas23ERFU
                  xoCcYk7R6uxzjZMfdxSy1Mgo6/VqC0ZX+uSzfLok1LLYA7RcppeyY4c/fEpcbOLfYCEr9V+bwI4F
                  VDwzBENC412i8JTF8KzzqA0fPF8Q4MwAeBFuJjVq1glsFgYqTpihnw5jVc5UfALRSXS2vjQR78v5
                  XDmiK7EvUIinqDJjmCzV+dpnTbjBAURsZNgCU+IJEQeggVybB+DkjOGnr/iIjvaSylO3vu9kq3Kn
                  Dhzd/kzXutPecPtterHkiPjJI+5a9nxJPMLMuAqmnsh2sk7LX6OWHssHhxd/b2O2Y4/Ej0WoIZlf
                  GD2gOt57hHvTrQ/HaG1AA8wjbHsZXWW9MXbJtDxZbECLIYGfPW2tQCfBaqYlxGXurrhOfZlKPUVx
                  K9etDItoDHdwUxeM4HbCdptOjcSWAYgjwcQ4gI3Ook/5YLRbL+4rIgOIwz643v/bMh2jiLa4MEYm
                  9a4O8GL4xED3kvRQSgt0SkkIRIHO1AJ042TQC450OKwEtfFgBpmFQ+utgUOObjA409jIEhMoOIeP
                  kDqv62f0Uu4qojiX7+o8rrWp/knAkDoFWam1E3ZKEiCINRfmRAMqTsPr8Wq/TQZM5OKtMGBLK9LY
                  GxLOOUBCahU5iNPm01X3STNRmQKtATPgqPcszNeLONnZqcWusbZKvwkCoX4Z75T+s+852oo65Li6
                  7WQ3SaDBrs47qXeUobVIOjlXO2ln2oRRtTRfbk7/gD6K6s5kBjPexHEEIGseJysilyHyT2+VMtSy
                  cyET83Exi5KEnrtl7XgMI4GM1tDeaq6anNdW1VgXdS4ypk8xqHTpQgszuJCgh3ID5pfvbHzzX0A7
                  zC5A+4oGk98ihe0dJc+KLzigJuZLk7jX5A7sGkBtht7oKjH8qrYM//DbQXkZbI06h/FP+2aBz5/t
                  U3zTsSHyoU3LwihFOj0TA+DKnZUnm4TJtX6ABQaJPTYwHgQJ/B77VI9A+RYf7qd9o4cGaGrLoOES
                  QdGPFUOqO0vL9EkpTsxgCEjxApBIw1gTCiaBF8Dofp6vBrd1zY1mXP9p1UunaaFZtdmx/vrWkLXQ
                  iO09P6waY+6daKtZ7i+3j0WGvBFHx32toDgd94wGWXBa+hfEEK3d6kq8eGRWJC+OEL9KgUrrN4ki
                  vwPjGe/1DXxzPIvZrMP2BtoxO34E9VuvsnmofW3kZvtLZXC+97RznZ5nIpG4Vk+uOPs1ne/s1UD3
                  x0vyTviwiK+lFIu5T3NdxFFssClZSDyFUkzCZUpbsLjcH3dzbwEDX4Vjq6rAz2IbXUGU6aTQ7RO1
                  Q1iUHaCqHZzNJEKKFcBaL/IGbmeOPUZJ7G3TbEYcMhBPtsmYSwNJkQ0cGj/KKqPF6fxpvNEt+QNh
                  isgyNP+AuP0xxQgwXbxW2kO/3Y70L5+eNs2L8u0gJBHevYTAebv/mORBcNcs8hpFVZLOAahajv0h
                  zj++ssD9BcgBTVMEC+knn0HjVaRjIW3UPsDApNjIsF7h06hWAGG79VGJb3mQ6PcwQAAAALZxoY8E
                  al4jAAHqDKdlAABPKdovscRn+wIAAAAABFla
                  

                  EDIT: Keep in mind that I’m not an expert on resource consumption here, haven’t read up on what the requirements are, and there may be good material out there covering it. This is my ad-hoc, five-minutes-or-so of testing; my own solution was mostly to just throw hardware at the problem, so I haven’t spent a lot of time optimizing workflows for VRAM consumption.

                  EDIT2: Some of the systems (Automatic1111 I know, dunno about ComfyUI) are also capable, IIRC, of running at reduced precision, which can reduce VRAM usage on some GPUs (though it will affect the output slightly, won’t perfectly reproduce a workflow), so I’m not saying that the numbers I give are hard lower limits; might be possible to configure a system to operate with less VRAM in some other ways. Like I said, I haven’t spent a lot of time trying to drive down ComfyUI VRAM usage.

        • M0oP0o@mander.xyzOPM · 7 days ago

          It will return. I am still trying to get another LLM generator working. The Bingilator has kinda soured for me lately.

            • M0oP0o@mander.xyzOPM · 7 days ago

              Eh, just hit the limits of what it can seem to do, I think. It’s hard to get it to do anything fun these days.

              • tal@lemmy.today · 7 days ago

                Ah, okay, I thought that they might have started throttling generation or something.

                If I understand right, both it and ChatGPT use DALL-E 3 – I don’t really understand the relationship between Bing’s and ChatGPT’s image generation. I see vague references online to a DALL-E 4 coming out at some point, so I assume it’s gonna get some kind of big functionality bump then.

                I have been – if you’ve been reading my posts – pretty impressed by the natural-language parsing in Flux, and I’d be willing to wager that the next iteration of most of the models out there is gonna improve on that front.

                I also see some stuff talking about improving text rendering, which would be nice.

                https://old.reddit.com/r/singularity/comments/1craik9/gpt4o_is_a_huge_step_forward_for_image_generation/

                I don’t know whether the functionality there is shared between ChatGPT and Bing, or will be, or what. But that example from ChatGPT shows both a lot of text rendered in the image (better than I’m seeing in Flux, at any rate) and what looks kinda like that natural-language description stuff.