TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Sun, 15 Feb 2004 20:39:47

Hi all,

Martin just pointed me here following a post I made on delphi.win32. I'm
glad he did - just had a good read through and already learned a couple of
things.
I have this problem with a TIdTCPServer based app. I'm trapping when a
client connection is lost by way of a "dirty exit" (I'm pulling the RJ45 out
of the back of the PC to simulate). In this app's protocol I've implemented
a PING exchange which I needed anyway, so this is fired every few seconds
and that doubles as my poll to see if a client has abnormally disconnected.

It works in trapping the lost client connection, but I can't get rid of the
thread which seems to get left behind. A read through of the Indy Help made
me think I need to use Connection.DisconnectSocket rather than the
Disconnect method (which does the TCP/IP layer exit handshake as I
understand it, and no point doing that as the client's not listening). So
what I have is this:-

procedure dirtyexit(Thread : TIdPeerThread);
begin
  // Set an internal flag so the rest of the app knows it's gone,
  // otherwise I get an error if I try a broadcast to all clients
  // as it's still in the threadlist.
  Thread.StatusFlag := tsConnectionDropped;

  // The thread stays alive, so I then tried other calls here -
  // neither of which worked.
end;

I'm a bit stuck as to how to move forward - any help or advice would be much
appreciated.

I basically need, in the dirty exit routine, to get the thread out of the
threadlist, stop it executing and terminate its thread instance.



TIdTCPServer - killing the thread once client connection is lost

Post by Team » Mon, 16 Feb 2004 08:01:28

Depending on the OS you are actually using, there is no guarantee that the
underlying socket stack will detect such a condition right away.

That is fine.

TIdTCPServer automatically terminates threads when their connections have
been disconnected. If it is not, then you are probably doing something in
your own code to prevent its normal behavior.

That is fine, although Disconnect() is usually preferred.

You shouldn't need to do anything else. Disconnecting the connection is
enough to stop the thread.
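To illustrate that behaviour, a minimal OnExecute handler for an Indy 9 style TIdTCPServer might look like the sketch below (HandleByte is a hypothetical helper, not part of Indy):

```delphi
procedure TMyForm.ServerExecute(AThread : TIdPeerThread);
var
  B : Byte;
begin
  // Any blocking read here raises an exception once the connection
  // has been Disconnect()ed or otherwise invalidated...
  AThread.Connection.ReadBuffer(B, SizeOf(B));
  HandleByte(AThread, B);
  // ...and TIdTCPServer catches that exception internally, which is
  // what terminates the peer thread and removes it from the thread
  // list. No extra cleanup code is needed in the handler itself.
end;
```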



TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Mon, 16 Feb 2004 09:08:32

Hi Remy,

That's not a problem - I'm doing the detection myself by way of a timed poll
in the form of a ping.

Only with a clean disconnect.

With an abrupt (I call them dirty) disconnect, the underlying disconnect
procedure cannot execute its normal behaviour, as the TCP/IP handshaking
that goes with it cannot work (the client is no longer at the end of the
wire). Hence the problem.

Unfortunately, Disconnect only works when the client is connected and at the
end of the wire (no abrupt disconnect). I had hoped that DisconnectSocket
tells the stack the connection has gone, without trying to handshake a
goodbye. That's the implication from the Indy help. But it doesn't seem to
work, as the thread continues to try and handshake with a non-existing
client.

Disconnecting the connection requires that the connection is there to be
disconnected via handshake.

Thanks anyway.

Anyone else managed to solve this problem? I can't be the first person to
have come across this.



TIdTCPServer - killing the thread once client connection is lost

Post by Team » Mon, 16 Feb 2004 10:20:05

No, with any disconnect. Whether the client disconnects, or your own server
code disconnects. As soon as the socket has been closed and/or invalidated,
Indy starts reporting errors during its internal looping of the thread and
will terminate the thread as a result.

In which case, the underlying socket stack would report an error that Indy
catches and handles. It doesn't matter whether it was a "dirty" disconnect
or not. Call Disconnect() anyway, because it also cleans up the
connection's IOHandler, whereas DisconnectSocket() doesn't.

Disconnect() works just fine as long as the endpoint of the calling party is
still valid. It doesn't matter whether the other end of the connection
disappeared or not. Remember that sockets have two endpoints, and both
endpoints have to be cleaned up separately from each other.

It does.

The main difference between DisconnectSocket() and Disconnect() is that the
first one simply closes the connection, while the other closes the
connection as well as cleaning up and detaching the IOHandler. In fact,
Disconnect() calls DisconnectSocket() internally, so the actual connection
itself is being closed in the same manner either way.
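In other words, the relationship just described can be pictured roughly like this (a paraphrase of the behaviour, not the actual Indy source):

```delphi
procedure TIdTCPConnection.Disconnect;  // rough shape only
begin
  DisconnectSocket;  // close the socket itself...
  // ...then additionally clean up and detach the IOHandler,
  // which DisconnectSocket alone does not do.
end;
```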

Where did you get that idea from exactly?

Then it should be failing at the lower socket stack level, with errors
reported up the call chain back to Indy so it can react to them. The only
way what you describe could be happening is if the socket stack itself is
not actually reporting errors to begin with. Which is a very likely
possibility, considering that TCP itself is specifically designed to take
network outages into account and keep connections "alive" for a period of
time in case the network comes back online in a timely manner.


TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Mon, 16 Feb 2004 12:12:03

> No, with any disconnect. Whether the client disconnects, or your own
> server code disconnects.

OK, well if you're correct, for some other reason it's not getting that far.
Does a thread have to call its own disconnect method for that to be
effected? Or can the parent app call a thread's disconnect method?

From the Indy Help :

"Unlike the Disconnect method, DisconnectSocket is not overridden by
descendant classes to provide termination sequences required for a
particular protocol.

DisconnectSocket is used when an error has occurred in a protocol handler
and a guaranteed disconnect is needed."

I was looking for the guaranteed disconnect.


A disconnect is not an outage. If I tell a client to disconnect, it means I
want it to disconnect, not wait and see if a network comes back up again.

If something is not making its way back up through the stack and terminating
my thread after I've called disconnect, is there a way that I can terminate
it, and remove it from the ThreadList manually?



TIdTCPServer - killing the thread once client connection is lost

Post by Team » Mon, 16 Feb 2004 13:22:48

The thread remains active as long as the associated connection is active.
If the socket is disconnected for any reason at all, the thread will
terminate once the socket stack itself detects the disconnect.

That is not referring to lower-level socket shutdown negotiations. That is
referring to higher-level protocol close commands, such as for protocols
like POP3 and SMTP which send a QUIT command prior to closing the
connection.
That is not what I was referring to. Networks momentarily go down
unexpectedly, people pull out the network cables, machines are shut off
prematurely, etc. All of those are unconditional outages that are not
negotiated or announced ahead of time. As such, the socket stack has no way
of knowing for a while that connections are gone in such cases. In the case
of momentary network outages, TCP allows the outage to not affect existing
socket connectivity. Since momentary network outages are common, as long as
the network comes back online quickly, TCP does not invalidate any existing
sockets; they remain active the entire time. Neither end of the connection
knows the outage occurred at all, and they can continue talking to each
other. Because of that, if a connection is lost without proper negotiation,
the socket stack simply has no way to detect it until the stack times out
internally. Until that happens, there is no way for Indy to know that the
connection is lost, because the stack does not report any errors for a
while, since it is waiting to see if the connection is reconnected at the
lower levels. This is fundamental TCP design, not Indy design.

You are not listening. If you Disconnect() manually, then the stack *does*
know right away that the connection is being closed intentionally, and any
further operations on the socket will fail. Indy will terminate the thread
automatically in that case. If, on the other hand, you are not
Disconnecting the socket manually, but simply relying on your "simulated"
disconnects by pulling the cable out, as you mentioned earlier, then you are
*not* guaranteed that any action will occur for a while.


TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Mon, 16 Feb 2004 20:16:59

> You are not listening.

Actually, I am, but perhaps I haven't made it clear enough what I'm doing
here as I can tell we're at cross-purposes from your posts (I know you're
trying to help, and I very much appreciate that).

I'm not asking for Indy to call OnDisconnect or generate an Exception or
anything else when a cable has been pulled out. Instead, I am polling for it
(see my first post in this thread).

In a nutshell:-

1. I *know* that the other side of the connection has vanished abruptly as I
am polling for it. So my poll tells me that it sent a PING message to the
client and the client hasn't responded within my set timeframe. I have the
information that the cable has been pulled out from this. I have no need for
Indy or the TCP/IP layer to either know, or tell me that. I already have
this information - what I want to do is tell Indy this. I pull the plug, my
poll detects and tells me. The poll works perfectly.

2. I then call disconnect on the connection which I know is lost when I
discover this.

3. The client thread remains running after I've called disconnect. As far as
I can tell, internally, Indy still has the socket connection in place.

Disconnect therefore appears to be failing *where the connection has been
lost abruptly*.

Disconnect works perfectly on a client connection which is still live and
has not abruptly disconnected.

I'm trying to tell Indy that something has happened, not the other way
around. Indy seems intent on seeing for itself, which I don't want it to do. I
discover that a cable has been pulled so I call disconnect. On calling
disconnect, I want Indy to drop its socket allocation and kill the thread.
That doesn't happen.



TIdTCPServer - killing the thread once client connection is lost

Post by Team » Tue, 17 Feb 2004 08:35:43


It wouldn't do that anyway.

It will do that, though.

I understand that point already. What you need to understand is how Indy is
actually designed to work.

That does not matter, though. The *server's* endpoint of the connection is
still valid, and needs to be cleaned up separately from anything that
happens on the client side. If you pull the cable by hand, the underlying
socket stack ***will not recognize it right away***. The stack must time
itself out internally before it will recognize the connection is gone.
Until that happens, ***Indy thinks the connection is still active***,
because that is what the underlying socket stack itself still thinks. Even
if you read from or write to the socket after pulling the cable, the
underlying socket stack will buffer/queue the operation in hopes that the
connection comes back and it can then perform the operation normally. As a
result, the socket stack does not return any errors right away, and thus
Indy cannot detect that anything has happened to the connection, ***no
matter how much polling you do***. In your own code, you will just have to
use your own timeouts. If you are sending a ping-type packet periodically,
and you do not get any response in a timely fashion, just Disconnect() the
connection regardless of why the response is not received.

You as a person may have that information, but your software program does
not. It has no concept of why the response is not being received. Not
until the underlying socket stack itself times out internally and starts
reporting errors regarding the socket no longer being valid.

You already know how to do that, because it has been told to you several
times now - just Disconnect() the client socket on the server's end.


No, it does not. It will terminate. Disconnect() closes the server's
endpoint of the connection and then sets the connection's IOHandler to nil.
Inside the actual thread, every time the OnExecute event is reentered, any
operations you perform on the socket are dependent on the connection
remaining open and the IOHandler assigned. If you Disconnect() the socket,
then the next time the OnExecute event is triggered, your operation will
throw an exception since the socket is no longer valid. TIdPeerThread then
catches that exception internally and terminates itself as a result.

If the thread is not terminating correctly, then it has to be a problem with
your own code preventing it somehow. Left to its default behavior, it
*will* terminate properly.

Just call Disconnect(). That is the correct thing to do. If that is not
working for you, then please show your actual thread code, you are probably
doing something wrong with it.

It already does exactly that.



TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Tue, 17 Feb 2004 22:42:11

In terms of code, basically I have something like this:-

type
  HeaderByte = (hbNull, hbPing, hbLoginPacket { etc, etc });

procedure execute(AThread : TIdPeerThread);
var
  HB : HeaderByte;
begin
  ReadHeaderByteFromSocket(AThread, HB);

  case HB of
    // .......... lots of switches ............ e.g.:-

    hbPing :
      begin
        AThread.PingTime := GetTickCount - AThread.SentPingTick;
        if (AThread.PingTime > 8000) then AThread.Connection.Disconnect;
      end;

    hbLoginPacket :
      ; // fetch the Login packet and validate etc.
  else
    LogError('Invalid header byte');
  end;
end;

Now from what you've said, I can deduce that the problem here is that after
I've called disconnect if the PingTime is too high (this all works fine btw,
and for the record a client with a pingtime of more than 8 seconds is of no
use to me or itself), the execute method of this Thread instance still
continues to loop until the underlying TCP protocol stack generates an
error, notwithstanding that I have called disconnect. In which case,
ReadHeaderByteFromSocket generates an exception at some point as the socket
has closed (I get a socket error 104, connection has closed).

Do I have that right?

In which case I'll have to flag the client just prior to my calling
Disconnect:-

hbPing :
  begin
    AThread.PingTime := GetTickCount - AThread.SentPingTick;
    if AThread.PingTime > 8000 then
      AThread.Status := tsClosing;
  end;

Then at the top of my execute procedure I have:-

if (AThread.Status = tsClosing) then exit;

Unless AThread.Connection.Connected gets set to false immediately on my
calling Disconnect, in which case I already have the flag I need, but the
principle is the same. Does Connected get immediately set to false?

Then in the rest of my code, I need to check that flag again, e.g:-

List := Server.Threads.LockList;
try
  for i := 0 to List.Count-1 do
    if not (TIdPeerThread(List.Items[i]).Status = tsClosing) then
      BroadcastStuff(TIdPeerThread(List.Items[i]));
finally
  Server.Threads.UnlockList;
end;

That's part 1, I guess. Part two is that the thread seems to want to stay
alive as per my original post, but I haven't left it for more than a minute
or so - maybe it can take longer? If so, as long as it does die eventually,
as you are saying it will, that's OK as long as I know and can flag it
myself so the rest of the app does not keep having to trap Socket Not
Connected errors.



TIdTCPServer - killing the thread once client connection is lost

Post by Atle Smel » Wed, 18 Feb 2004 00:49:02

As a default, Indy does not have a read timeout. If you have not set a read
timeout, you will not get into your check, and Disconnect will never happen.

Please make sure that ReadTimeout is set to an appropriate value, and please
remember that if a read timeout occurs, it will raise a read timeout
exception.
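For example (a sketch assuming Indy 9, where the connection exposes a ReadTimeout property in milliseconds and a timed-out read raises EIdReadTimeout):

```delphi
// Somewhere at connection setup:
AThread.Connection.ReadTimeout := 10000;  // 10 seconds

// Then inside OnExecute:
try
  AThread.Connection.ReadBuffer(HB, SizeOf(HB));
except
  on EIdReadTimeout do
    AThread.Connection.Disconnect;  // no data for too long - give up
end;
```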


TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Wed, 18 Feb 2004 01:25:18

Hi Atle,


Thanks - this is the standard blocking, non-event model I think, not
particular to Indy?

So I think you're saying, for example:-

AThread.Connection.ReadBuffer(MyBuffer, 10);

Will sit and wait forever if only 9 bytes are sent by the client? That's my
understanding of how this model works.

My dirty exit routine, which actually also checks the ping times, runs in
its own thread, looping every 5 seconds or so. My above CASE statement was
by way of example, and, as you say, may fail. Sorry for being misleading, I
wanted to keep the example short and hadn't thought about it that much.

I adapted TIdPeerThread for this app by adding some additional properties,
like SentPingTick and PingTime. The looping DirtyExit thread measures
SentPingTick against GetTickCount and ditches clients that haven't been
heard from in 8 seconds.

So AThread.Disconnect actually gets called by the DirtyExit thread, not
within the case statement. At the end of the case statement, if an unknown
header byte or malformed packet format error occurs, the client gets ditched
also, to maintain server integrity. So even if the PING byte came along
giving the socket connection the 10th byte it was expecting, the messaging
sequence would get messed up. Disconnect would then eventually get called by
this sweep-up at the end as an unknown header byte or malformed packet would
come in, being x bytes too long or short. Either that, or PingTime would go
over 8 seconds anyway.

My example was a bad one.

Hopefully that makes more sense now? Or am I still off-track with the
approach?


TIdTCPServer - killing the thread once client connection is lost

Post by Zinvo » Wed, 18 Feb 2004 03:51:58

This behaviour is the standard blocking behaviour. Except, you would
ordinarily have a readtimeout. Also, in your case you will have problems if
you are reading anything big.
By using readtimeout and setting the value to what your ordinary timeout
value is, you can handle this problem also. The timeout is controlled not by
when the whole packet is received, but the time since any read at all has
been made. Then, when you get a read timeout exception, just disconnect and
terminate the server thread.

You should not introduce any other threads and have them use the
TCPConnection because the connected property etc. is not threadsafe. Set up
a threadsafe function by using critical sections or other locking mechanisms
inside the IdPeerThread where you send your ping packets etc. from. And make
your ping thread only send ping, not do any timeout.

Hope this helps you.


TIdTCPServer - killing the thread once client connection is lost

Post by Dean » Wed, 18 Feb 2004 05:30:46


Packets are never larger than 200 bytes.


PING is sent from another thread which doesn't do a lot else. It doesn't
interact with threads in any other way.

OK, well that's easy enough. Timeout is on a separate thread actually, which
is the DirtyExit thread that looks at ping times in a LockList loop.


DirtyThread thread just monitors Ping times and disconnects. Nothing else.

PingThread sends out pings to connected clients.

So the Ping routine should have a CS, even though it runs in its own
thread?



procedure DirtyClientThread.Execute;
var
  List : TList;
  i : integer;
begin
  // Check Ping times and kill if > 8 seconds.
  // Ping time is set in the server Execute procedure.
  if Terminated then exit;

  List := Server.Threads.LockList;
  try
    for i := 0 to List.Count-1 do
      with TIdPeerThread(List.Items[i]) do
        if (PingTime > 8000) then Connection.Disconnect;
  finally
    Server.Threads.UnlockList;
  end;
end;

procedure PingThread.Execute;
begin
  if Terminated then exit;
  PingEveryone(Server);
end;

procedure PingEveryone(Server : TIdTCPServer);
var
  List : TList;
  i : integer;
begin
  List := Server.Threads.LockList;
  try
    for i := 0 to List.Count-1 do
      if TIdPeerThread(List.Items[i]).Connection.Connected then
        Ping(TIdPeerThread(List.Items[i]));
  finally
    Server.Threads.UnlockList;
  end;
end;

procedure Ping(Thread : TIdPeerThread);
var
  P : HeaderByte;  // renamed from "Ping" to avoid shadowing the procedure
begin
  P := hbPing;
  Thread.Connection.WriteBuffer(P, SizeOf(P));
end;

TIdTCPServer - killing the thread once client connection is lost

Post by Zinvo » Wed, 18 Feb 2004 08:19:03

You should create a descendant of the PeerThread, and make that one the
threadclass, so that the listenerthread creates your thread instead. Then
you override the BeforeRun and AfterRun procedures to create your critical
section. You set up some functions inside the thread for sending stuff with
Connection and disconnecting. You don't want to mess with the connection
object inside other threads, only inside the PeerThread (and by using
threadsafe functions that you made available through this thread object).
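A sketch of that arrangement might look like the following (names such as TMyPeerThread and SendPing are illustrative, not part of Indy; TCriticalSection comes from the SyncObjs unit):

```delphi
type
  TMyPeerThread = class(TIdPeerThread)
  private
    FLock : TCriticalSection;
  public
    procedure BeforeRun; override;
    procedure AfterRun; override;
    procedure SendPing;  // threadsafe wrapper around the connection
  end;

procedure TMyPeerThread.BeforeRun;
begin
  inherited;
  FLock := TCriticalSection.Create;
end;

procedure TMyPeerThread.AfterRun;
begin
  FLock.Free;
  inherited;
end;

procedure TMyPeerThread.SendPing;
var
  P : Byte;
begin
  FLock.Acquire;
  try
    P := Ord(hbPing);
    Connection.WriteBuffer(P, SizeOf(P));
  finally
    FLock.Release;
  end;
end;
```

The server's ThreadClass property would be set to TMyPeerThread so the listener creates this class, and other threads would only ever call SendPing rather than touching Connection directly.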

Even though you only send small packets, you should still use ReadTimeout on
the server object instead of a thread handling it.

Your timer thread should use an event with a timeout for its sleep. This
way, to get a faster terminate on this thread, you only have to trigger the
event.
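One way to do that is a sketch like the one below, using TEvent from the SyncObjs unit (FStopEvent is an assumed field on the timer thread, created before the thread starts):

```delphi
procedure TPingTimerThread.Execute;
begin
  while not Terminated do
  begin
    // Wait up to 5 seconds, but wake immediately if the event is set.
    if FStopEvent.WaitFor(5000) = wrSignaled then
      Break;  // event was triggered - fast shutdown requested
    PingEveryone(Server);
  end;
end;

// Elsewhere, to terminate the thread quickly:
PingTimer.Terminate;
FStopEvent.SetEvent;  // wakes the WaitFor right away instead of
                      // waiting out the remainder of the 5 seconds
```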

You could create one timer thread per connection, freeing this thread when
finishing the peerthread. Or make one to hold all of them, but this one must
hold its own list of connection threads, not lock the whole server
threadlist for each check on connection threads (which will give you a bad
performance hit, and other problems). And then you could have a threadsafe
function call to activate and deactivate any threads for it. These calls you
use inside the connect and disconnect.

TIdTCPServer - killing the thread once client connection is lost

Post by Zinvo » Wed, 18 Feb 2004 08:39:15

Get rid of your dirtythread, but keep your PingThread (you might want to
change it a bit). Don't use the connection object inside your ping thread.
Call a procedure or function inside the new peerthread object that you
created. Set ReadTimeout to your ping interval + actual timeout.

If you use the connection object from more than one thread, every thread
must use the same locking mechanism when dealing with it. You could use
TMREWSync, using BeginRead when only checking the connection, and BeginWrite
when actually disconnecting. The tricky part is that you are using blocking
sockets, so it's not nice to lock around the connection when reading. Since
disconnecting from another thread while reading will give you a socket
exception inside the reading thread, I think that you may not need locking
on that part. But it's a bit tricky; just remember that there is an
IOHandler object in there getting freed when disconnecting, and you might
have a reading thread just past the check of whether IOHandler is nil,
trying to use the object just as it's freed. Then you get an ugly exception.
Many things can happen here, so there is no easy solution when using more
than one thread on the connection object. Just wish that the Indy team had
made this object threadsafe.
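To make the BeginRead/BeginWrite idea concrete, here is a sketch using Delphi's TMultiReadExclusiveWriteSynchronizer (aliased as TMREWSync in SysUtils), with ConnLock assumed to be one instance shared by every thread that touches this connection:

```delphi
// Checking the connection - shared (read) access:
ConnLock.BeginRead;
try
  StillConnected := Thread.Connection.Connected;
finally
  ConnLock.EndRead;
end;

// Actually disconnecting - exclusive (write) access:
ConnLock.BeginWrite;
try
  if Thread.Connection.Connected then
    Thread.Connection.Disconnect;
finally
  ConnLock.EndWrite;
end;
```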

If you find a nice solution, tell me. At this stage it's really easier to
deal with just plain sockets and API calls.