Coturn: more debugging + resolution

Today I:

  • Switched to an AMI for Ubuntu 20.04 LTS as opposed to 16.04 LTS,
  • Modified the security group to open Turn Relay ports (potentially a contributing factor to the server not working),
  • Migrated from apt to snap for fetching certbot,
  • Modified the shell script to run as a single bash script rather than inline in terraform,
  • Improved the shell script so it no longer hangs partway through (in particular, appended --no-pager to the systemctl status call: sudo systemctl status nginx --no-pager),
  • Altered the turnserver.conf in terraform fixtures to use diffie-hellman parameters and several other bespoke options, fixed the paths to the letsencrypt certs, and fixed logging,
  • Modified nginx.conf in fixtures to set server_names_hash_bucket_size to 128 (under the http block), which fixed another issue.
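For reference, that nginx change is small. A sketch of the http block (the comment describes the usual symptom; server names here are placeholders):

```nginx
http {
    # Long fully-qualified server names (e.g. stun.subdomain.domain.tld)
    # can overflow the default hash bucket size, making nginx refuse to
    # start; 128 gives it enough room.
    server_names_hash_bucket_size 128;

    # ... rest of the http block ...
}
```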

I feel that I’m getting there, gradually.

I do know that this worked before on DigitalOcean with manually executed scripts. The key difference there was that all ports were open on the VM, so it is possible that is all there is to it. Possibly adding the turn relay ports may be enough to get this over the line.

If not, I’ll look carefully into techniques to test STUN servers in a more definite way rather than using a tool like Trickle ICE.
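One more definite technique: issue a STUN Binding Request by hand and see whether the server answers with a mapped address. This is a minimal sketch (not coturn tooling; the hostname in the usage comment is a placeholder) that builds an RFC 5389 Binding Request and decodes the XOR-MAPPED-ADDRESS from the response, which is exactly the data behind an srflx candidate:

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442

def build_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request (RFC 5389)."""
    txn_id = os.urandom(12)
    # type=0x0001 (Binding Request), length=0, magic cookie, transaction id
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

def parse_xor_mapped_address(resp: bytes) -> tuple:
    """Pull the XOR-MAPPED-ADDRESS (type 0x0020) out of a Binding Response."""
    msg_len = struct.unpack("!H", resp[2:4])[0]
    pos = 20
    while pos < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", resp[pos:pos + 4])
        if attr_type == 0x0020:
            # value layout: reserved(1), family(1), x-port(2), x-address(4)
            xport = struct.unpack("!H", resp[pos + 6:pos + 8])[0]
            port = xport ^ (MAGIC_COOKIE >> 16)
            xaddr = struct.unpack("!I", resp[pos + 8:pos + 12])[0]
            ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return ip, port
        # attributes are padded to 4-byte boundaries
        pos += 4 + ((attr_len + 3) // 4) * 4
    raise ValueError("no XOR-MAPPED-ADDRESS in response")

def stun_check(host: str, port: int = 3478, timeout: float = 2.0) -> tuple:
    """Send a Binding Request over UDP; return the server-reflexive (ip, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_binding_request(), (host, port))
        resp, _ = s.recvfrom(2048)
    return parse_xor_mapped_address(resp)

# e.g. stun_check("subdomain.domain.tld", 3478)
```

If stun_check returns your public IP and a port, the server is doing its STUN job regardless of what a browser tool reports.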

Still not working …

Looks like maybe an nginx problem? So confusing … or maybe I need to do something with iptables? If I do want to do things with iptables, then ufw is likely The Way.

I wonder …?

sudo ufw allow 5349
sudo ufw allow 3478

Still no-go. But this seems on the right track per this. And the security group opens both of these ports on UDP and TCP. So I’m evidently missing something else. Maybe nginx configuration?
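One thing worth double-checking at the firewall: the relay port range needs opening too, not just the listening ports. Assuming the default coturn relay range (whatever min-port/max-port are set to in turnserver.conf, 49152–65535 unless overridden), the ufw rules would look something like:

```shell
sudo ufw allow 3478/udp
sudo ufw allow 3478/tcp
sudo ufw allow 5349/udp
sudo ufw allow 5349/tcp
# relay ports — adjust to match min-port/max-port in turnserver.conf
sudo ufw allow 49152:65535/udp
```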

Fixed an issue with the “turnserver” user not having access to /var/log, hopefully that helps as well.
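A sketch of that permissions fix, assuming coturn runs as the turnserver user and the log-file path in turnserver.conf points somewhere under /var/log (the filename here is a placeholder):

```shell
# let the turnserver user write its own log file
sudo touch /var/log/turnserver.log
sudo chown turnserver:turnserver /var/log/turnserver.log
```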

Still not working, going by “If you test a STUN server, it works if you can gather a candidate with type ‘srflx’”, for the request stun:stun.subdomain.domain.tld:5349. Nonetheless, considerable progress today. Maybe tomorrow I’ll sort this out.

Hmm, okay, in the logs:

0: IPv4. SCTP listener opened on:
0: IPv4. UDP listener opened on:
0: IPv4. TCP listener opened on:

Sure enough, if I jump back to my machine

telnet subdomain.domain.tld 3478 -> access

… and then look at the logs again

711: IPv4. tcp or tls connected to: <ip>:<port>
729: session 000000000000000001: TCP socket closed remotely
729: session 000000000000000001: usage: realm=<>, username=<>, rp=0, rb=0, sp=0, sb=0
729: session 000000000000000001: closed (2nd stage), user <> realm <> origin <>, local, remote <ip>:<port>, reason: TCP connection closed by client (callback)


telnet subdomain.domain.tld 5349 -> no access.

Got it, so coturn isn’t running in turn mode … interesting! And 5349 is not the correct port.

What about 3478?

  • Trickle ICE with stun:stun.coturn.domain.tld:3478 -> no srflx, fail.
  • Trickle ICE with stun:coturn.domain.tld:3478 -> srflx, success!!!!

There is a problem, however. This server is exposed to the wider world and can be contacted by anyone who knows the DNS entry or the IP address.

To fix this, I can use the external-ip field in turnserver.conf. That address can then, I believe, be used as a target by an AWS API Gateway, which in turn is secured by an SSL certificate. Godot can then make requests via the WebRTC library to the STUN server indirectly, by making a request to the API Gateway. Maybe using a web application firewall?
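A sketch of the relevant turnserver.conf lines, with placeholder addresses (external-ip takes a public address, optionally paired with the private address it maps to):

```
# turnserver.conf (fragment, placeholder addresses)
# advertise the public address while binding to the private one
listening-ip=10.0.0.5
external-ip=203.0.113.7/10.0.0.5
```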

