Jitsi Videobridge Webrtc Example


In today’s world, video conferencing is getting more and more important – be it for learning, business events or social interaction in general. Most people use one of the big players like Zoom or Microsoft Teams, which both have their share of privacy issues. However, there is an alternative approach: self-hosting open-source software like Jitsi Meet. In this article, we are going to explore the different scaling options for deploying anything from a single Jitsi server to a sharded Kubernetes cluster.

Jitsi is an open-source video conferencing service that you can host on your own. Aside from its source code, Jitsi is available as a Debian/Ubuntu package and as a Docker image. If you want to test Jitsi Meet, you may use the public instance at meet.jit.si.

To understand how a self-hosted Jitsi Meet service can be scaled horizontally, we need to look at the different components that are involved in providing the service first.

Components

A Jitsi Meet service is composed of several architectural components that can be scaled independently.


Jitsi Meet Frontend

The actual web interface is rendered by a WebRTC-compatible JavaScript application. It is hosted by a number of simple web servers and connects to videobridges to send and receive audio and video signals.

Jitsi Videobridge (jvb)

The videobridge is a WebRTC-compatible server that routes audio and video streams between the participants of a conference. Jitsi’s videobridge is an XMPP (Extensible Messaging and Presence Protocol) server component.

Jitsi Conference Focus (jicofo)

Jicofo manages media sessions between each of the participants of a conference and the videobridge. It also acts as a load balancer if multiple videobridges are used.

Prosody

Prosody is an XMPP communication server that is used by Jitsi to create multi-user conferences.

Jitsi Gateway to SIP (jigasi)

Jigasi is a server component that allows telephony SIP clients to join a conference. We won’t need this component for the proposed setup, though.


Jitsi Broadcasting Infrastructure (jibri)

Jibri allows for recording and/or streaming conferences by using headless Chrome instances. We won’t need this component for the proposed setup, though.

Now that we know what the different components of a Jitsi Meet service are, we can take a look at the different possible deployment scenarios.

Single Server Setup

The simplest deployment is to run all of Jitsi’s components on a single server. Jitsi’s documentation features an excellent self-hosting guide for Debian/Ubuntu. It is best to use a bare-metal server with dedicated CPU cores and enough RAM. A steady, fast network connection is also essential (1 Gbit/s). However, you will quickly hit the limits of a single Jitsi server if you want to host multiple conferences that each have multiple participants.


Single Jitsi Meet, Multiple Videobridges

The videobridges will typically have the most workload since they distribute the actual video streams. Thus, it makes sense to mainly scale this component. By default, all participants of a conference will use the same videobridge (without Octo). If you want to host many conferences on your Jitsi cluster, you will need a lot of videobridges to process all of the resulting video streams.

Luckily, Jitsi’s architecture allows for scaling videobridges up and down pretty easily. If you have multiple videobridges, two things are very important for facilitating trouble-free conferences. Firstly, once a conference has begun, it is important that all other connecting clients will use the same videobridge (without Octo). Secondly, Jitsi needs to be able to balance the load of multiple conferences between all videobridges.

When a client connects to a conference, Jicofo points it to the videobridge it should use. To consistently direct all participants of a conference to the same videobridge, Jicofo keeps track of which conference runs on which videobridge. When a new conference is initiated, Jicofo load-balances across the available videobridges and selects one of them.

Autoscaling of Videobridges

Let’s say you have to serve 1,000 concurrent users mid-day, but only 100 in the evening. Your Jitsi cluster does not need to constantly run 30 videobridges if 28 of them are idle between 5pm and 8am. Especially if your cluster is not running on dedicated hardware but in the cloud, it absolutely makes sense to autoscale the number of running videobridges based on usage to save a significant amount of money on your cloud provider’s next bill.

Unfortunately, Prosody needs an existing XMPP component configuration for every new videobridge that is connected. And if you create a new component configuration, you need to reload the Prosody service – that’s not a good idea in production. This means that you need to predetermine the maximum number of videobridges that can be running at any given time. However, you should probably do that anyways since Prosody (and Jicofo) cannot handle an infinite number of videobridges.

Most cloud providers let you define an equivalent of AWS autoscaling groups. You create an autoscaling group with a minimum and a maximum number of videobridges that may run simultaneously, and in Prosody you define the same number of XMPP components as the maximum of the autoscaling group.
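
As an illustration of pre-declaring components, a Prosody config for a cluster capped at three bridges might contain a sketch like the following (bridge hostnames and the shared secret are placeholders, assuming the component-based way of connecting bridges described above):

```lua
-- Sketch: one pre-declared XMPP component per potential videobridge.
-- jvb1..jvb3 and YOURSECRET1 are placeholders for this example.
Component "jvb1.jitsi.example.com"
    component_secret = "YOURSECRET1"
Component "jvb2.jitsi.example.com"
    component_secret = "YOURSECRET1"
Component "jvb3.jitsi.example.com"
    component_secret = "YOURSECRET1"
```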

Next, you need a monitoring metric that can be used to determine whether additional videobridges should be started or running bridges should be stopped. Appropriate metrics are the CPU usage or network traffic of the videobridges. Of course, the exact thresholds will differ for each setup and use case.

Sharded Jitsi Meet Instances with Multiple Videobridges

As previously suggested, Prosody and Jicofo cannot handle an unlimited number of connected videobridges or user requests. Additionally, it makes sense to have additional servers for failover and rolling updates. When Prosody and Jicofo need to be scaled, it makes sense to create multiple Jitsi shards that run independently from one another.

The German HPI Schul-Cloud’s open-source Jitsi deployment for Kubernetes, which is available on GitHub, is a great starting point, since its architecture is well documented. They use two shards in their production deployment.

As far as I can tell, Freifunk München’s public Jitsi cluster consists of four shards – though they deploy directly to the machines without the use of Kubernetes.

Back to the HPI Schul-Cloud example: Inside a single shard, they deploy one pod each for the Jicofo and Prosody services, as well as a static web server hosting the Jitsi Meet JavaScript client application. The videobridges are managed by a StatefulSet in order to get predictable (incrementing) pod names. Based on the average network traffic to and from the videobridge pods, a horizontal pod autoscaler consistently adjusts the number of running videobridges to save on resources.
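
As a simplified sketch of such an autoscaler (the actual deployment scales on a network-traffic metric exposed through a Prometheus adapter; the resource names below are hypothetical and plain CPU utilization is used only to keep the example self-contained):

```yaml
# Hypothetical HorizontalPodAutoscaler for a "jvb" StatefulSet (autoscaling/v2 needs Kubernetes >= 1.23).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jvb
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: jvb
  minReplicas: 2
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```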

Inside the Kubernetes cluster, an Ingress controller will accept HTTPS requests and terminate their TLS connections. The incoming connections now need to be load-balanced between the shards. Additionally, new participants that want to join a running conference need to be routed to the correct shard.

To satisfy both requirements, a service running multiple instances of HAProxy is used. HAProxy is a load-balancer for TCP and HTTP traffic. New requests are load-balanced between the shards using the round-robin algorithm for a fair load distribution. HAProxy uses DNS service discovery to find all existing shards. The following snippet is an extract of HAProxy’s configuration:
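
The original snippet is not reproduced here; the following is an illustrative sketch of such a configuration, with hypothetical shard hostnames, a "room" URL parameter and a peers section for sharing the stick table between HAProxy instances:

```haproxy
# Illustrative sketch only - shard hostnames and peer addresses are placeholders.
peers jitsi-haproxy
    peer haproxy-0 haproxy-0.haproxy.jitsi.svc.cluster.local:1024
    peer haproxy-1 haproxy-1.haproxy.jitsi.svc.cluster.local:1024

frontend jitsi_meet
    bind *:8443
    default_backend jitsi_shards

backend jitsi_shards
    balance roundrobin
    # Keep every conference on one shard: key the stick table on the room name
    # taken from the URL and share it with the other HAProxy instances via "peers".
    stick-table type string len 128 size 200k expire 12h peers jitsi-haproxy
    stick on url_param(room)
    server shard-0 web.shard-0.svc.cluster.local:443 ssl verify none check
    server shard-1 web.shard-1.svc.cluster.local:443 ssl verify none check
```

In the real deployment, the static server lines would instead be populated via DNS service discovery (a resolvers section plus server-template), and TLS would already have been terminated by the Ingress controller.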

The configuration for HAProxy uses stick tables to route all traffic for an existing conference to the correct shard. Stick tables work similarly to sticky sessions. In our example, HAProxy stores the mapping of a conference room URI to a specific shard in a dedicated key-value store that is shared with the other HAProxy instances. This way, all clients are routed to the correct shard when joining a conference.

Another advantage of sharding is that you can place shards in different geographic regions and employ geo-based routing. This way, users in North America, Europe or Asia can use different shards to optimize network latency.

By splitting your Jitsi cluster in shards and scaling them horizontally, you can successfully serve an enormous amount of concurrent video conferences.


The Octo Protocol

There is still a scaling problem when a lot of participants try to join the same conference, though. Up to this point, a single videobridge is responsible for routing all video stream traffic of a conference. This clearly limits the maximum number of participants of one conference.

Additionally, imagine a globe-spanning conference between four people: two in North America and two in Australia. So far, geo-based routing still requires two of the participants to connect to a videobridge on another continent, which has serious latency disadvantages.

Fortunately, we can improve both situations by using the Octo protocol. Octo routes video streams between videobridge servers, essentially forming a cascade of forwarding servers. On the one hand, this raises the limit on the number of participants in one conference, because client connections are distributed across multiple videobridges. On the other hand, Octo results in lower end-to-end media delay for geographically distributed participants.
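
In the properties-based configuration used by older releases (newer builds configure this in jvb.conf and jicofo.conf instead), enabling Octo roughly amounts to giving every bridge a relay bind address and a region, and telling Jicofo to select bridges by region. The addresses, port and region names below are placeholders; see the Octo documentation linked in the references:

```properties
# jitsi-videobridge (sip-communicator.properties) - placeholder values
org.jitsi.videobridge.octo.BIND_ADDRESS=10.0.0.11
org.jitsi.videobridge.octo.BIND_PORT=4096
org.jitsi.videobridge.REGION=eu-central

# jicofo (sip-communicator.properties) - prefer bridges in the client's region
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy
```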

The downside of Octo is that its traffic is unencrypted, which is why lower-level protocols need to take care of encrypting the inter-bridge traffic. Freifunk München’s Jitsi cluster uses an overlay network with Nebula, VXLAN and a WireGuard VPN to connect the videobridge servers.

Load Testing

When setting up a Jitsi cluster, it makes sense to perform load tests to determine your cluster’s limits before real people start using the service. Jitsi’s developers have thankfully created a load-testing tool that you can use: Jitsi Meet Torture. It simulates conference participants by sending prerecorded audio and video streams.
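
For instance, the torture repository ships a "malleus" script that spawns many fake participants against a Selenium grid. An invocation might look roughly like the following; treat the script path and flags as assumptions and check the repository's README for the version you use:

```sh
# Assumed invocation of jitsi-meet-torture's load test (flags may differ by version).
./scripts/malleus.sh \
    --conferences=10 \
    --participants=5 \
    --senders=2 \
    --duration=300 \
    --room-name-prefix=loadtest \
    --hub-url=http://localhost:4444/wd/hub \
    --instance-url=https://jitsi.example.com
```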

The results of load tests performed by HPI Schul-Cloud’s team may be an initial reference point – they, too, are published on GitHub.

Conclusion

Jitsi Meet is free and open-source software that can be scaled pretty easily. It is possible to serve a large number of simultaneous conferences using sharding. However, even though Octo increases the maximum number of participants in a single conference, there are still some limitations in conference size – if nothing else because clients will have a hard time rendering lots of parallel video streams.

Still, Jitsi Meet is a privacy-friendly alternative to commercial offerings like Zoom or Microsoft Teams that does not require participants to install yet another video conferencing app on their machines. Additionally, it can be self-hosted on quite a large scale, both in the public or private cloud – or on bare metal.

References

  • Jitsi Meet Handbook: Architecture – https://jitsi.github.io/handbook/docs/architecture
  • Jitsi Meet Handbook: DevOps Guide (scalable setup) – https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-scalable
  • HPI Schul-Cloud Architecture Documentation – https://github.com/hpi-schul-cloud/jitsi-deployment/blob/master/docs/architecture/architecture.md
  • Jitsi Blog: New tutorial video: Scaling Jitsi Meet in the Cloud – https://jitsi.org/blog/new-tutorial-video-scaling-jitsi-meet-in-the-cloud/
  • Meetrix.IO: Auto Scaling Jitsi Meet on AWS – https://meetrix.io/blog/webrtc/jitsi/jitsi-meet-auto-scaling.html
  • Meetrix.IO: How many Users and Conferences can Jitsi support on AWS – https://meetrix.io/blog/webrtc/jitsi/how-many-users-does-jitsi-support.html
  • Annika Wickert et al. FFMUC goes wild: Infrastructure recap 2020 #rc3 – https://www.slideshare.net/AnnikaWickert/ffmuc-goes-wild-infrastructure-recap-2020-rc3
  • Annika Wikert and Matthias Kesler. FFMUC presents #ffmeet – #virtualUKNOF – https://www.slideshare.net/AnnikaWickert/ffmuc-presents-ffmeet-virtualuknof
  • Freifunk München Jitsi Server Setup – https://ffmuc.net/wiki/doku.php?id=knb:meet-server
  • Boris Grozev and Emil Ivov. Jitsi Videobridge Performance Evaluation – https://jitsi.org/jitsi-videobridge-performance-evaluation/
  • FFMUC Meet Stats: Grafana Dashboard – https://stats.ffmuc.net/d/U6sKqPuZz/meet-stats
  • Arjun Nemani. How to integrate and scale Jitsi Video Conferencing – https://github.com/nemani/scalable-jitsi
  • Chad Lavoie. Introduction to HAProxy Stick Tables – https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/
  • HPI Schul-Cloud Jitsi Deployment: Loadtest results – https://github.com/hpi-schul-cloud/jitsi-deployment/blob/master/docs/loadtests/loadtestresults.md
  • Jitsi Videobridge Docs: Setting up Octo (cascaded bridges) – https://github.com/jitsi/jitsi-videobridge/blob/master/doc/octo.md
  • Boris Grozev. Improving Scale and Media Quality with Cascading SFUs – https://webrtchacks.com/sfu-cascading/


We recommend following the quick-install document instead. This document describes the steps needed to install a working deployment, but the steps are easy to get wrong, and the Debian packages are more up to date, whereas this document is sometimes not updated to reflect the latest changes.

This describes configuring a server jitsi.example.com on a Debian-based distribution.
For other distributions you can adapt the steps accordingly (in particular the package installation commands, e.g. for nginx, and the paths) so that they match your host's distribution.
You will also need to generate some passwords for YOURSECRET1, YOURSECRET2 and YOURSECRET3.

There are also some complete example config files available, mentioned in each section.

There are additional configurations to be done for a scalable installation.

Network description

This is how the network looks:
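
The original diagram is not reproduced here; as a rough textual sketch of the topology and the default ports used in this guide:

```
Browser --HTTPS 443--> nginx --> static jitsi-meet files
                        |
                        +--/http-bind (BOSH)--> Prosody :5280
Browser --TCP 4443 / UDP 10000--> Jitsi Videobridge (jvb)
jicofo  --XMPP client :5222--> Prosody <--XMPP component :5347-- jvb
```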

Install prosody
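
On Debian/Ubuntu this is a plain package installation (optionally from Prosody's own repository if you want a newer version):

```sh
sudo apt-get update
sudo apt-get install prosody
```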

Configure prosody

Add a config file at /etc/prosody/conf.avail/jitsi.example.com.cfg.lua (a combined sketch follows the list below):

  • add your domain virtual host section:
  • add domain with authentication for conference focus user:
  • add focus user to server admins:
  • and finally configure components:
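
A combined sketch of that configuration, closely following the upstream example config (secrets and certificate paths are placeholders):

```lua
-- /etc/prosody/conf.avail/jitsi.example.com.cfg.lua (sketch)
VirtualHost "jitsi.example.com"
    authentication = "anonymous"     -- anonymous users may join conferences
    ssl = {
        key = "/var/lib/prosody/jitsi.example.com.key";
        certificate = "/var/lib/prosody/jitsi.example.com.crt";
    }
    modules_enabled = { "bosh"; "pubsub"; }
    c2s_require_encryption = false

-- domain with authentication for the conference focus user
VirtualHost "auth.jitsi.example.com"
    authentication = "internal_plain"
    ssl = {
        key = "/var/lib/prosody/auth.jitsi.example.com.key";
        certificate = "/var/lib/prosody/auth.jitsi.example.com.crt";
    }

-- focus user as server admin
admins = { "focus@auth.jitsi.example.com" }

-- components: multi-user chat, videobridge and conference focus
Component "conference.jitsi.example.com" "muc"
Component "jitsi-videobridge.jitsi.example.com"
    component_secret = "YOURSECRET1"
Component "focus.jitsi.example.com"
    component_secret = "YOURSECRET2"
```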

Add link for the added configuration
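
Assuming the path used above:

```sh
ln -s /etc/prosody/conf.avail/jitsi.example.com.cfg.lua /etc/prosody/conf.d/jitsi.example.com.cfg.lua
```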

Generate certs for the domain:
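
For example, with prosodyctl (one certificate per virtual host defined above):

```sh
prosodyctl cert generate jitsi.example.com
prosodyctl cert generate auth.jitsi.example.com
```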

Add auth.jitsi.example.com to the trusted certificates on the local machine:
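
A sketch of this step, assuming the certificate was generated to the default Prosody data directory:

```sh
ln -sf /var/lib/prosody/auth.jitsi.example.com.crt /usr/local/share/ca-certificates/auth.jitsi.example.com.crt
update-ca-certificates -f
```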

Note that the -f flag is necessary if there are symlinks left from a previous installation.

If you are using a JDK package not provided by Debian, such as the ones from AdoptOpenJDK, you should also make your JDK aware of the Debian certificate keystore by replacing or symlinking the JDK's cacerts file. For example, if you use a JDK from AdoptOpenJDK:
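
A sketch of the symlinking approach; the JDK path below is an example and must be adjusted to your installation:

```sh
# Replace the JDK keystore with a link to Debian's combined keystore
cd /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/lib/security/
mv cacerts cacerts.orig
ln -s /etc/ssl/certs/java/cacerts .
```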

Create conference focus user:
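
Using the password placeholder introduced earlier:

```sh
prosodyctl register focus auth.jitsi.example.com YOURSECRET3
```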

Restart prosody XMPP server with the new config
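
For example:

```sh
prosodyctl restart
```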

Install Nginx
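
On Debian/Ubuntu:

```sh
sudo apt-get install nginx
```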

Add a new file jitsi.example.com in /etc/nginx/sites-available (see also the example config file):
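
A trimmed sketch of such a site config, assuming TLS certificates at placeholder paths, the Jitsi Meet checkout in /srv/jitsi-meet (see below) and Prosody's BOSH endpoint on its default port 5280:

```nginx
# /etc/nginx/sites-available/jitsi.example.com (sketch)
server {
    listen 443 ssl;
    server_name jitsi.example.com;

    ssl_certificate     /etc/ssl/jitsi.example.com.crt;
    ssl_certificate_key /etc/ssl/jitsi.example.com.key;

    root /srv/jitsi-meet;
    index index.html;

    # single-page app: serve index.html for conference room URLs
    location ~ ^/([a-zA-Z0-9=\?]+)$ {
        rewrite ^/(.*)$ / break;
    }

    # BOSH: proxy XMPP-over-HTTP to Prosody
    location /http-bind {
        proxy_pass       http://localhost:5280/http-bind;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
    }
}
```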

Add link for the added configuration
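
Assuming the file created above:

```sh
ln -s /etc/nginx/sites-available/jitsi.example.com /etc/nginx/sites-enabled/jitsi.example.com
```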

Install Jitsi Videobridge

Visit https://download.jitsi.org/jitsi-videobridge/linux to determine the current build number, download and unzip it:
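
For example (replace {BUILD_NUM} with the current build number from the download page):

```sh
wget https://download.jitsi.org/jitsi-videobridge/linux/jitsi-videobridge-linux-x64-{BUILD_NUM}.zip
unzip jitsi-videobridge-linux-x64-{BUILD_NUM}.zip
```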

Install JRE if missing:
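
For example, the distribution's default JRE:

```sh
sudo apt-get install default-jre
```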

NOTE: When installing on older Debian releases keep in mind that you need JRE >= 1.7.


Create ~/.sip-communicator/sip-communicator.properties in the home folder of the user that will be starting Jitsi Videobridge:
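
A minimal sketch, assuming a test setup with the self-signed certificates generated above (do not disable certificate verification in production):

```properties
# ~/.sip-communicator/sip-communicator.properties
# Assumption: accept self-signed certificates during the DTLS handshake (test setups only)
org.jitsi.impl.neomedia.transform.dtls.DtlsPacketTransformer.verifyAndValidateCertificate=false
```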

Start the videobridge with:
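
Roughly as follows, using the component secret defined in the Prosody config (adjust the unpacked directory name to your build number):

```sh
cd jitsi-videobridge-linux-x64-{BUILD_NUM}
./jvb.sh --host=localhost --domain=jitsi.example.com --port=5347 --secret=YOURSECRET1 &
```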

Or autostart it by adding this line to /etc/rc.local:
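
A sketch of such a line; adjust the path to wherever you unpacked the videobridge:

```sh
/path/to/jvb.sh --host=localhost --domain=jitsi.example.com --port=5347 --secret=YOURSECRET1 </dev/null >> /var/log/jvb.log 2>&1 &
```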

Videobridge

Install Jitsi Conference Focus (jicofo)

Install JDK and Maven if missing:
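
For example:

```sh
sudo apt-get install default-jdk maven
```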

NOTE: When installing on older Debian releases keep in mind that you need JDK >= 1.7.

Clone source from Github repo:
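
```sh
git clone https://github.com/jitsi/jicofo.git
cd jicofo
```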

Build the package:
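
Roughly as follows; the exact version in the archive name may differ:

```sh
mvn package -DskipTests -Dassembly.skipAssembly=false
unzip target/jicofo-1.1-SNAPSHOT-archive.zip
```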

Run jicofo:
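
A sketch using the secrets and the focus user defined earlier (adjust the unpacked directory name):

```sh
cd jicofo-1.1-SNAPSHOT
./jicofo.sh --host=localhost --domain=jitsi.example.com --secret=YOURSECRET2 \
    --user_domain=auth.jitsi.example.com --user_name=focus --user_password=YOURSECRET3
```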

Deploy Jitsi Meet

Checkout and configure Jitsi Meet:
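
Roughly as follows, placing the checkout where the nginx config above expects it:

```sh
cd /srv
git clone https://github.com/jitsi/jitsi-meet.git
cd jitsi-meet
npm install
make
```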

NOTE: When installing on older distributions keep in mind that you need Node.js >= 12 and npm >= 6.

Edit host names in /srv/jitsi-meet/config.js (see also the example config file):
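
An excerpt might look like the following sketch; the host names must match the virtual hosts and components defined in the Prosody config:

```js
// /srv/jitsi-meet/config.js (excerpt)
var config = {
    hosts: {
        domain: 'jitsi.example.com',
        muc: 'conference.jitsi.example.com',
        bridge: 'jitsi-videobridge.jitsi.example.com',
        focus: 'focus.jitsi.example.com'
    },
    bosh: '//jitsi.example.com/http-bind'
    // ... remaining options unchanged
};
```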

Verify that nginx config is valid and reload nginx:
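
For example:

```sh
nginx -t && systemctl reload nginx
```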

Running behind NAT

Jitsi Videobridge can run behind a NAT, provided that both required ports are routed (forwarded) to the machine that it runs on. By default these ports are TCP/4443 and UDP/10000.

If you do not forward these two ports, Jitsi Meet will only work with video for two people, and will break as soon as three or more participants try to send video.

TCP/443 is required for the web server, which can run on a different machine than the one Jitsi Videobridge runs on.


The following extra lines need to be added to the file ~/.sip-communicator/sip-communicator.properties (in the home directory of the user running the videobridge):
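
These are the classic NAT harvester properties; replace the placeholders with the machine's local and public IP addresses:

```properties
# ~/.sip-communicator/sip-communicator.properties (NAT setup)
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=<Local.IP.Address>
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=<Public.IP.Address>
```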

Hold your first conference


You are now all set and ready to hold your first meeting by going to http://jitsi.example.com

Enabling recording


Jibri is a set of tools for recording and/or streaming a Jitsi Meet conference.