A client has asked me to replace their video editor for a video podcast. It’s the standard quick-cut style: zooms, loud transitions, and big bubble-letter subtitles throughout.
They recommended using Descript, which looks to be an AI platform that does all the work for you. Throw your video/audio into the site, and it transcribes the video, allowing you to edit based on the transcription. It then makes AI recommendations and inserts zooms and transitions.
There’s no getting around using AI for some of this, like subtitle generation, but I’d rather not buy a subscription to an AI platform, nor do I want to use one, so I’m looking for alternatives. The pay certainly isn’t worth the time it would take without cutting corners, unfortunately.
Unfortunately, DaVinci Resolve isn’t playing well with my system: the NVIDIA driver I use (580; it worked on 550, but that’s no longer an option in Additional Drivers for some reason) results in a black screen for the video timeline (not ideal for a video editor haha). I’ve been playing around with Kdenlive and Blender’s video editor.
I found an add-on for both programs that does speech-to-text transcription, which I finally got mostly working with Kdenlive (using whisper) but not with Blender. I also found a FOSS app called audapolis which does well pulling a transcription into an exportable file.
Anyone have any experience making these mass-produced-style videos without going full AI? My client mentioned the old VE spent 1-2 hours with Descript for a 15ish min video and 2 shorts. I’m ok doubling that timeframe at first if it means not using Descript.
There’s no getting around using AI for some of this, like subtitle generation
Eh… yes there is, you can pay actual humans to do that. In fact, if you do “subtitle generation” (whatever that might mean) without any editing, you’re taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up the main topic… well, good luck.
Anyway, if you still want to go that route, you could try:
- ffmpeg with whisper.cpp (but honestly I’m not convinced hardcoding subtitles is a good practice; why not package as e.g. .mkv? Depends on context obviously)
- Kdenlive with vosk
- Kdenlive with whatever else via the .srt, .ass, .vtt, or .sbv formats
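If you end up with plain segments from some other tool (an audapolis export, vosk JSON, etc.), turning them into .srt yourself is only a few lines. Here’s a minimal sketch in plain Python; the segment list is made up, and in practice you’d pull the (start, end, text) tuples from whatever tool you use (whisper’s own CLI can also emit .srt directly):

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments):
    """segments: iterable of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Made-up example segments
demo = [(0.0, 2.5, "Welcome back to the show."),
        (2.5, 5.0, "Today we're talking about FOSS video editing.")]
print(segments_to_srt(demo))
```

Once you have an .srt, soft-subbing it into an .mkv is something like `ffmpeg -i in.mp4 -i subs.srt -map 0 -map 1 -c copy -c:s srt out.mkv` (sketched from memory, so double-check the stream mapping against your files).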
deleted by creator
I’ll be checking over the subtitles anyway, generating just saves a bunch of time before a full pass over it. […] The editing for the subs generation looks to be as much work as just transcribing a handful of frames at a time.
Sorry I’m confused, which is it?
doing this as a favour […] Honestly I hate the style haha
I’m probably out of line for saying this but I recommend you reconsider.
You may already have the answer from the other comments, but specifically for subtitle transcription, I’ve used whisper and set it to output directly to SRT, which I could then import straight into Kdenlive or VLC or whatever, with timecodes and everything. It seemed accurate enough that editing the subs afterwards was almost non-existent.
I can’t remember how I installed Whisper in the first place, but I know (from pressing the up arrow in terminal 50 times) that the command I used was:
whisper FILENAME.MP3 --model medium.en --language English --output_format srt
I was surprised/terrified by how accurate the output was, and this was with a variety of accents from Northern England and rural Scotland. Only a few minutes of correcting mistakes.
I can’t remember how I installed Whisper in the first place
Typically however you want, and if not: https://github.com/ggml-org/whisper.cpp/releases
- ffmpeg with whisper.cpp (but honestly I’m not convinced hardcoding subtitles is a good practice, why not package as e.g. .mkv?)
I think Kdenlive has built-in subtitle transcription, which you can enable and configure in Settings.
deleted by creator
Why not use what the client requested
deleted by creator
If I pay you to do something I’d expect you to do what I’m paying you to do. If you can’t, don’t take the job.
Unless there’s a stipulation, there’s no reason you must do it the way they suggest. Fuck off with that my money my rules shit.
Tell me you’ve never had a job without telling me you’ve never had a job. Lmao
Have you ever had a client who wanted something done but had an asinine suggestion for how to do it? Do you do it their way or do you focus on getting the job done? Unless it’s a stipulation, you do it the way that gets the job done.
If I pay you to do something and you don’t do it, I’ll ask for my money back. How is this a hard concept for you?
Not at all, but there is a difference between creating a finished product and creating it a particular way. If you make a suggestion for how I do something, and it isn’t stipulated as part of the finished product, then I can do it however I want. If, however, part of the contract is that I do it your way, then that matters. Hope that clarifies things.
Yeah, I get that. But it seems pretty well suited to the task.
You can probably create a similar workflow using ComfyUI, though it will require time and effort.
deleted by creator
I know that’s not a ready-to-use solution, but Blender has a very powerful Python API which should let you automate everything, including making calls to an AI backend of your choice if needed.
deleted by creator
I think this libcudnn is an NVIDIA CUDA thing. I guess you’ve checked that the correct CUDA libs are installed and that Blender has permission and knows where to look for them?
The first place to start learning the Blender Python API is its documentation: https://docs.blender.org/api/current/index.html
In general you can script anything that you can do in the user interface. But video editing is just a small part of it, and if you don’t have any programming experience yet, this could be overkill for what you’re looking for.
Perhaps someone has had the same problems as you and implemented something already. Maybe searching explicitly for Blender video editing automation or its Python API will turn up results.
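To give a feel for it, here’s a rough sketch of what automating Blender’s sequencer from the Python API looks like: load a video strip, then place a subtitle as a text strip at given timecodes. The file name and strip names are placeholders and the bpy calls are from memory, so treat it as a starting point rather than finished code; you’d run it from Blender’s Scripting tab or via `blender --background --python script.py`.

```python
def tc_to_frame(tc, fps):
    """Convert an SRT timecode 'HH:MM:SS,mmm' to a frame number."""
    hhmmss, ms = tc.split(",")
    h, m, s = (int(x) for x in hhmmss.split(":"))
    return round((h * 3600 + m * 60 + s + int(ms) / 1000) * fps)

def build_timeline():
    import bpy  # only available inside Blender
    scene = bpy.context.scene
    seq = scene.sequence_editor_create()
    fps = scene.render.fps
    # Main video on channel 1 (INPUT_VIDEO.mp4 is a placeholder path)
    seq.sequences.new_movie("podcast", "INPUT_VIDEO.mp4",
                            channel=1, frame_start=1)
    # Drop one subtitle as a text strip, 3.5s to 6.0s
    strip = seq.sequences.new_effect(
        "sub1", "TEXT", channel=2,
        frame_start=tc_to_frame("00:00:03,500", fps),
        frame_end=tc_to_frame("00:00:06,000", fps),
    )
    strip.text = "Hello from the Python API"

# Inside Blender: build_timeline()
```

From there it’s a small step to loop over a parsed .srt file and place every subtitle automatically.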
deleted by creator
It’s a lot to learn, but the knowledge is more durable than learning where Microsoft has moved the menu option to in this version (or learning the new arcane method of summoning the old menu from the nether realm.)
I’m new to Linux from about 3 months ago, so it’s been a bit of a learning curve on top of learning video editing haha. I didn’t realize CUDA had versions.
Yeah… it’s not you. I’m a professional developer and have been using Linux for decades. It’s still hard for me to install specific environments. Sometimes it just works… but often I give up. Sometimes it’s my mistake, but sometimes it’s also because the packaging is not actually reproducible. It works on the setup the developer used, great for them, but slight variations throw you right into dependency hell.


