• floofloof@lemmy.ca (OP)
    13 hours ago

    Following their link to LiveKit’s blog, it seems LiveKit provides a real-time communication stack with adaptive video encoding, so they’re using it to handle multiple video streams over connections of varying quality. I don’t think it’s mainly about AI, even though that’s LiveKit’s focus.
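
    For what it’s worth, the “adaptive” part is largely a client-side toggle. Here’s a minimal sketch using the livekit-client JS SDK; the server URL and token are placeholders, and the exact options are worth double-checking against their docs:

    ```ts
    import { Room } from 'livekit-client';

    // Placeholder connection details; a real app mints the token server-side.
    const serverUrl = 'wss://your-livekit-server';
    const token = '<participant-access-token>';

    // adaptiveStream lets the SDK downscale or pause remote video based on how
    // each tile is actually rendered; dynacast lets publishers pause simulcast
    // layers nobody is subscribed to. Both help on links of varying quality.
    const room = new Room({ adaptiveStream: true, dynacast: true });

    async function join() {
      await room.connect(serverUrl, token);
      console.log(`connected to room ${room.name}`);
    }

    join();
    ```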

    • 4am@lemmy.zip
      13 hours ago

      There are lots of ML models related to frame generation (you may have heard of Nvidia DLSS), so the “AI” might not be LLM slop; it might be a genuinely good application of ML, like reducing bandwidth by halving the framerate and interpolating frames on the client.
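
      To make that concrete, here’s a toy sketch of client-side interpolation that just averages two decoded frames to synthesize the one in between. Real frame generation (DLSS, RIFE, etc.) uses motion-compensated ML models rather than a plain blend, so treat this as an illustration of where the step sits, not the actual technique:

      ```ts
      // Toy "frame generation": synthesize a middle frame by averaging two
      // decoded RGBA frames of equal size. Real systems use motion-compensated
      // ML models; this only shows the client-side interpolation step.
      function interpolateFrames(
        prev: Uint8ClampedArray,
        next: Uint8ClampedArray,
      ): Uint8ClampedArray {
        const mid = new Uint8ClampedArray(prev.length);
        for (let i = 0; i < prev.length; i++) {
          mid[i] = (prev[i] + next[i]) >> 1; // per-channel average
        }
        return mid;
      }

      // With a 15 fps stream, rendering prev, interpolateFrames(prev, next),
      // next in sequence approximates 30 fps at roughly half the bandwidth.
      ```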