This uses a separate-socket approach since there are systems that do not
support dual-binding sockets under *any* circumstances, for instance
OpenBSD. Using separate sockets for IPv4 and IPv6 is thus more portable
than having a single v6 socket handle v4 connections as well.
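A rough sketch of the approach with plain POSIX sockets (not the actual
libvncserver code): one listening socket per address family, with the v6
socket marked IPV6_V6ONLY so both can bind to the same port.

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    static int listen_on(int family, unsigned short port)
    {
        struct sockaddr_storage ss;
        socklen_t len;
        int sock = socket(family, SOCK_STREAM, 0);
        if (sock < 0)
            return -1;

        memset(&ss, 0, sizeof(ss));
        if (family == AF_INET6) {
            /* never accept v4 connections here; the separate v4 socket does */
            int v6only = 1;
            setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, sizeof(v6only));

            struct sockaddr_in6 *a6 = (struct sockaddr_in6 *)&ss;
            a6->sin6_family = AF_INET6;
            a6->sin6_addr = in6addr_any;
            a6->sin6_port = htons(port);
            len = sizeof(*a6);
        } else {
            struct sockaddr_in *a4 = (struct sockaddr_in *)&ss;
            a4->sin_family = AF_INET;
            a4->sin_addr.s_addr = htonl(INADDR_ANY);
            a4->sin_port = htons(port);
            len = sizeof(*a4);
        }

        if (bind(sock, (struct sockaddr *)&ss, len) < 0 || listen(sock, 32) < 0) {
            close(sock);
            return -1;
        }
        return sock;
    }

    /* The server then creates both sockets and select()s on them:
     *   int sock4 = listen_on(AF_INET, 5900);
     *   int sock6 = listen_on(AF_INET6, 5900);
     */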
Signed-off-by: Christian Beier <dontmind@freeshell.org>
This is required for proper event loop integration with Qt.
The idea was taken from vino's libvncserver fork.
Signed-off-by: Christian Beier <dontmind@freeshell.org>
This adds generic before/after encoding buffers to the rfbClient
struct, so there is no need for thread local storage.
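As a rough illustration of the idea (member and helper names below are
made up and may not match the real struct fields): the decoders reuse
per-client scratch buffers instead of thread-local statics.

    #include <stdlib.h>
    #include <stddef.h>

    /* Illustrative only: in the real code these members would live inside
     * the existing rfbClient struct. */
    typedef struct {
        char  *beforeDecBuf;     /* scratch input used before decoding   */
        size_t beforeDecBufSize;
        char  *afterDecBuf;      /* scratch output used after decoding   */
        size_t afterDecBufSize;
    } clientScratchBuffers;

    /* Grow-and-reuse helper: an encoding handler calls this on the
     * per-client buffer instead of using a thread-local static array. */
    static char *ensure_buffer(char **buf, size_t *size, size_t needed)
    {
        if (*size < needed) {
            char *p = realloc(*buf, needed);
            if (p == NULL)
                return NULL;
            *buf = p;
            *size = needed;
        }
        return *buf;
    }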
Signed-off-by: Christian Beier <dontmind@freeshell.org>
This implements the xvp VNC extension, which is described in the
community version of the RFB protocol:
http://tigervnc.sourceforge.net/cgi-bin/rfbproto
It is also mentioned in the official RFB protocol specification.
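For reference, a sketch of the xvp message layout as described on the
linked page (message type 250, four bytes in each direction); the
constant names here are illustrative and may not match rfbproto.h
exactly.

    #include <stdint.h>

    #define rfbXvp 250          /* message type, both directions */

    /* server -> client codes */
    #define rfbXvp_Fail     0
    #define rfbXvp_Init     1
    /* client -> server codes */
    #define rfbXvp_Shutdown 2
    #define rfbXvp_Reboot   3
    #define rfbXvp_Reset    4

    typedef struct {
        uint8_t type;    /* always rfbXvp (250)                   */
        uint8_t pad;
        uint8_t version; /* xvp extension version, currently 1    */
        uint8_t code;    /* one of the codes above                */
    } rfbXvpMsg;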
When rfbRegisterProtocolExtension() is not called, the extension mutex
is left uninitialized but is still used by rfbGetExtensionIterator() and
rfbReleaseExtensionIterator() when they are called from
rfbNewTCPOrUDPClient(). This causes libvncserver to crash on Win32 when
built with thread support.
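The idea of the fix, sketched here with plain pthreads rather than
libvncserver's own mutex macros: every entry point that touches the
extension list makes sure the mutex is initialized first, instead of
relying on rfbRegisterProtocolExtension() having done it.

    #include <pthread.h>

    static pthread_mutex_t extMutex;
    static pthread_once_t  extMutexOnce = PTHREAD_ONCE_INIT;

    static void init_ext_mutex(void)
    {
        pthread_mutex_init(&extMutex, NULL);
    }

    /* rfbGetExtensionIterator()/rfbReleaseExtensionIterator() previously
     * assumed rfbRegisterProtocolExtension() had initialized the mutex;
     * ensuring it here makes them safe on their own. */
    static void lock_extension_list(void)
    {
        pthread_once(&extMutexOnce, init_ext_mutex);
        pthread_mutex_lock(&extMutex);
    }

    static void unlock_extension_list(void)
    {
        pthread_mutex_unlock(&extMutex);
    }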
Signed-off-by: Tobias Doerffel <tobias.doerffel@gmail.com>
Signed-off-by: Christian Beier <dontmind@freeshell.org>
There seems to be a locking problem in libvncserver, with respect to how
condition variables are used.
On certain machines in our lab, when using a vncviewer to view a display
that has a very high rate of updates, we will occasionally see the VNC
server process crash. In one stack trace that was obtained, an assertion
had tripped in glibc's pthread_cond_wait, which was called from
clientOutput.
Inspection of clientOutput suggests that WAIT is being called incorrectly.
The mutex that protects a condition variable should always be locked when
calling wait, and it will still be locked on return from the wait. The
attached patch fixes the locking around this condition variable, and one
other that I found by grepping the source for similar occurrences.
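For reference, the canonical pthread pattern the patch moves the code
towards (a generic sketch, not the patched clientOutput itself): the
mutex must be held when calling pthread_cond_wait and is held again when
it returns.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t updateMutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  updateCond  = PTHREAD_COND_INITIALIZER;
    static bool updatePending = false;

    static void wait_for_update(void)
    {
        pthread_mutex_lock(&updateMutex);      /* lock BEFORE waiting        */
        while (!updatePending)                 /* loop: spurious wakeups     */
            pthread_cond_wait(&updateCond, &updateMutex);
        updatePending = false;
        pthread_mutex_unlock(&updateMutex);    /* still locked on return     */
    }

    static void signal_update(void)
    {
        pthread_mutex_lock(&updateMutex);
        updatePending = true;
        pthread_cond_signal(&updateCond);
        pthread_mutex_unlock(&updateMutex);
    }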
Signed-off-by: Charles Coffing <ccoffing@novell.com>