Talk server load: any abacus?

Hi Community!

I’m running a test with Talk, which looks soooo great :slight_smile: but, although chat works OK, I couldn’t start a one-to-one audio/video call. The server seems to lag, and I suspect Talk puts heavy processing demands on my poor virtual server (which nonetheless runs a 2.4 GHz 4-core virtual Xeon).

Therefore, I’m wondering if there’s any abacus (a sizing rule of thumb) that would tell me: 2 connections = min RAM / min bandwidth, 10 connections = min RAM / min bandwidth, etc.

Thanks in advance for any help!

How many users will use Talk? Do you run a TURN/STUN server? It helps a lot if your participants are behind restrictive firewalls or NAT networks.

Your hardware seems powerful enough for quite a lot of users. Can you run a tool to monitor your server’s performance? Netdata, or even a simple htop/nettop, can show you the resource usage on your server.

Aside from that, I measured about 2 Mbit/s per participant when a TURN server relays the streams for Talk.
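To turn that rough per-participant figure into a sizing estimate, here is a minimal sketch. The 2 Mbit/s value is the measurement mentioned above; the 20% overhead factor and the helper name are illustrative assumptions, not measured values:

```python
# Rough Talk bandwidth estimator, assuming a TURN server relays all media
# streams at ~2 Mbit/s per participant (figure measured above).
PER_PARTICIPANT_MBITS = 2.0   # rough measured figure per participant
OVERHEAD_FACTOR = 1.2         # assumed headroom for signaling/retransmits

def estimated_bandwidth_mbits(participants: int) -> float:
    """Estimate total server bandwidth (Mbit/s) for N relayed participants."""
    return participants * PER_PARTICIPANT_MBITS * OVERHEAD_FACTOR

for n in (2, 5, 10):
    print(f"{n} participants: ~{estimated_bandwidth_mbits(n):.1f} Mbit/s")
```

So a 10-user call relayed through TURN would need on the order of 24 Mbit/s of server bandwidth under these assumptions.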

Hi Alfred,

We probably won’t exceed 10 simultaneous users, roughly half of them from the company (say 5 internal / 5 external users).

I don’t know what a TURN/STUN server is; I just activated the Talk app within our Nextcloud installation.

I’ll try to get a picture of the current server performance, but it’s for sure underloaded! :wink: By the way, even the chat feels slow…

Thanks for your help.

Hi all,

Do I need to install a TURN/STUN add-on on my server to run Talk? If yes, which one do you advise? And why is the chat also very slow?

Thanks in advance for any help!

If you have users behind NAT or restrictive firewalls, then a TURN/STUN server is required for reliable operation. You can use Coturn, which is well supported; you can find tutorials in other threads on this forum.
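For orientation, a minimal Coturn setup might look like the sketch below. The secret, realm, and port are placeholders to adapt to your own domain; see the tutorials in other threads for the full details:

```
# /etc/turnserver.conf — minimal sketch (all values are placeholders)
listening-port=3478
fingerprint
use-auth-secret
static-auth-secret=replace-with-a-long-random-secret
realm=cloud.example.com
total-quota=100
```

Afterwards, you point Talk at it in the Nextcloud admin settings (Talk section): enter the TURN server as host:port (e.g. cloud.example.com:3478) with the same shared secret.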

Chat messages showing up with high latency sounds like a strange problem. If it persists, consider reporting it as a bug.

Thanks a lot, Alfred! I’ll dig into Coturn a bit… and will probably come back with new questions! :wink:

As for the chat latency, we’re currently migrating to our own dedicated server with plenty of cores/RAM/HDD, so I’ll wait and check the latency on that target infrastructure…

Have a nice day!