question about multithreaded server approach

Post by Coca Smyth » Fri, 02 Dec 2005 07:18:31


hi,

I have made a server that manages up to 500 socket connections, each of which has to be handled individually.

What I've done is build a core server that maintains all the socket communication (a pool of threads with a pool of socket-handling events), plus a routine that directs data to and from these connections.

The way I interface between my directing routine and the socket-control part is with an incoming queue and an outgoing queue between the two parts.

I have run into some problems with this design, though, mainly because some of the sockets really should act as if they were blocking, and it's a problem 'simulating' this behaviour with non-blocking sockets.

Therefore I was wondering:

Would it be OK on a Win2K machine to have up to 500 sockets, each in its own thread, and each with a queue control (one for incoming commands from my control routine, and one that can stuff data into my control routine)? This queue is a semaphore-based one protected by a critical section.

Will it kill the system to have 500+ semaphores firing and 500 threads doing socket I/O?
 
 
 

question about multithreaded server approach

Post by Jgen Devli » Fri, 02 Dec 2005 17:35:31

Hi Coca,

I don't think so. We tested such a set-up on a rather slow machine with 300 connections, and that was no problem at all.

We did have issues with the sockets themselves, however. If the client at the other side of the socket just dies (e.g. the machine is powered off), the open socket is not notified. We had to build in an I'm-alive mechanism to detect clients that were no longer there. Otherwise, after some months you are faced with a lot more socket connections than you ever designed for.
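An I'm-alive mechanism of this kind can be as simple as stamping each connection on any activity and periodically reaping the silent ones. A minimal sketch in C++ (the AliveTable name and integer connection ids are illustrative, not from the original code):

```cpp
#include <chrono>
#include <unordered_map>
#include <vector>

using Clock = std::chrono::steady_clock;

// Per-connection bookkeeping for an application-level I'm-alive check.
struct AliveTable {
    std::unordered_map<int, Clock::time_point> last_seen;

    // Call whenever any data (or an explicit ping reply) arrives on `id`.
    void Touch(int id, Clock::time_point now = Clock::now()) {
        last_seen[id] = now;
    }

    // Returns the ids that have been silent longer than `timeout`;
    // the caller should close those sockets and drop the entries.
    std::vector<int> Stale(std::chrono::seconds timeout,
                           Clock::time_point now = Clock::now()) const {
        std::vector<int> dead;
        for (const auto& [id, seen] : last_seen)
            if (now - seen > timeout) dead.push_back(id);
        return dead;
    }
};
```

A housekeeping thread can call Stale() every minute or so and close whatever it returns.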

Jgen

 
 
 

question about multithreaded server approach

Post by Markus Elf » Fri, 02 Dec 2005 18:24:42

> I have run into some problems though with this design, mainly because some

Which open issues have you got with your current design?
Why do you think you need to switch to the thread-per-socket pattern instead of keeping
the thread pool working?

Regards,
Markus
 
 
 

question about multithreaded server approach

Post by Coca Smyth » Fri, 02 Dec 2005 20:03:02


Hi Markus,

Well, the problem I feel I'm facing is in the way things are handled while closing a socket.

I have 1..n threads that handle socket events for 60 sockets each; every time I get an event on a socket, it is handled in a function (all the FD_XXXX events).

This function stuffs complete incoming data packets into an outgoing queue; this is a semaphore-based queue that is popped from another thread, where the packets are handled.

In this other thread I also control when I am about to send some data to a socket (and eventually close it), or when I am about to connect to a server-side socket, at which point I can tell the thread pool to incorporate this socket into its list of sockets.

This works very well, but when I'm going to close a socket manually, it gets a bit messy.

I send data, and then I do a shutdown on the socket (FD_READ) so that the socket will receive the FD_CLOSE notification when everything has been sent.

But since the FD_CLOSE happens in another thread, and I don't have a direct link between the place where I'm shutting the socket down (my other thread) and the place where it actually shuts down, I sometimes get into a condition where there is already new data to be sent to the actual socket connection (or, more precisely, to the given IP, since every connection is based on an IP from a list).

So somehow I would really like to halt everything until I'm sure the socket is closed.

A solution in this architecture could be to have a flag in my socket objects indicating that the socket is "closing" and, in the end, "closed", and then discard any attempts to access the socket while it is "closing" until it is actually "closed".

Anyway, the idea of using pure blocking sockets, with one thread per socket that handles the events BUT also handles every action performed on that socket via a queue of outgoing packets, would ensure that I only access a socket in one place in all the code. My idea was to have a simple command-based queue protocol with a "send", a "close" and a "connect" part (reads would be handled automatically), so when I wanted to send something and then close the socket, I would do this:

result = ThisSocket->Push (sendpacket)
result = ThisSocket->Push (closepacket)

As soon as the thread receives the close packet, it blocks the queue against any more incoming packets, so I know in my code that when a push returns "result = fail", the specific socket isn't ready. When the thread is sure the socket is closed, it unblocks the queue so it is ready to receive data again.
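That close latch could be sketched like this, in portable C++ rather than Win32 primitives (the SocketCommandQueue name and the string commands are illustrative, not from the original design):

```cpp
#include <deque>
#include <mutex>
#include <string>

// Command queue with a "closing" latch: once a close command is pushed,
// further pushes fail until the socket thread has confirmed the close
// and the queue is reopened for a new connection.
class SocketCommandQueue {
public:
    enum class State { Open, Closing, Closed };

    bool Push(const std::string& cmd) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (state_ != State::Open) return false;  // caller sees result = fail
        if (cmd == "close") state_ = State::Closing;
        items_.push_back(cmd);
        return true;
    }

    // The socket thread calls this once FD_CLOSE has actually arrived.
    void ConfirmClosed() {
        std::lock_guard<std::mutex> lock(mtx_);
        state_ = State::Closed;
        items_.clear();
    }

    // Re-arm the queue when the connection is re-established.
    void Reopen() {
        std::lock_guard<std::mutex> lock(mtx_);
        state_ = State::Open;
    }

private:
    std::mutex mtx_;
    State state_ = State::Open;
    std::deque<std::string> items_;
};
```

The same latch would also work in the existing events-per-thread architecture, as long as every access to the socket object goes through Push().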

Of course, if I could implement such a feature in the architecture I have today, with multiple socket events per thread, that would be the optimal case; but there I don't control a semaphore-based queue per socket, I work directly on the objects.

While writing this I got some ideas, though... any reflections on what I've written here are appreciated :) The idea that just popped up was to implement the "block queue" on my socket objects' "to be sent" buffer: while in the "closing" state, it should discard any sends.
 
 
 

question about multithreaded server approach

Post by Markus Elf » Sat, 03 Dec 2005 05:26:45

> This is working very fine, but when im gonna close a socket manually, then

What do you mean by "manual" closing?
Do you use any synchronisation objects in your shutdown process?

Do you apply the RAII pattern?
http://www.yqcomputer.com/



Is there any delay between the closing of a socket and the processing of the related
FD_CLOSE notification in your code?



Would you like to look into other shutdown approaches from well-known server
implementations?



Could it be that your code should be separated more cleanly into multi-threading details
and network aspects?
Are all components organized in a way that makes their work thread-safe?

Regards,
Markus
 
 
 

question about multithreaded server approach

Post by Coca Smyth » Sat, 03 Dec 2005 07:53:20


Well, I mean server-side closing (I close the socket, not the client), and no, I don't have any synchronisation in the shutdown phase; that's the problem, I think. Implementing a flag and a check on "closing" would, I think, solve the problem?


Yes, I would like that :) If I'm going to rewrite it, I would appreciate any thoughts on this. Just not IOCP, since that has some disadvantages from my point of view and for my use; I prefer to build my own pools.


It's very likely that my whole architecture could be structured a bit differently; I've been thinking about rewriting parts of it, it's just a matter of finding the right approach. I thought I had made it OK, but now I see some problems, especially with the close handling.
 
 
 

question about multithreaded server approach

Post by Torsten Ro » Thu, 08 Dec 2005 02:01:41


Most socket implementations have a feature called "keep-alive" that will
send some data over an established connection and raise an error if
the connection seems to be broken.

regards
Torsten
 
 
 

question about multithreaded server approach

Post by Phil Frisb » Thu, 08 Dec 2005 02:30:56


Yes, but it is normally a useless feature, because the default time-out is two
hours! The reason for such a long default time-out is that the TCP keep-alive
feature was deprecated even as it was being added to the TCP/IP spec, so it was
in effect disabled by the long time-out.

Application-level keep-alives are the proper solution.


--
Phil Frisbie, Jr.
Hawk Software
http://www.yqcomputer.com/
 
 
 

question about multithreaded server approach

Post by Torsten Ro » Thu, 08 Dec 2005 04:25:44


Hi Phil,




Jgen talked about "If not, after some months you are faced with a lot
more socket connections than you ever designed for." So in his case the
usual time-out of two hours seems to be OK.

In an application I'm aware of, the "keep-alive" time-out is configured
down to a few minutes without any problems (a VMS server communicating
with a Windows 2000 box).

Can you give me some background on why this feature was marked as
deprecated? We are quite happy with it.

> Application-level keep-alives are the proper solution.

Why? An application-level solution has to be constructed, while a TCP-based
solution seems to be there already.

best regards
Torsten

P.S. Maybe this is getting to be the wrong group; maybe you could suggest a
follow-up.
 
 
 

question about multithreaded server approach

Post by David Schw » Thu, 08 Dec 2005 04:36:51


Because it's neither necessary nor sufficient. It's not necessary
because an application-level timeout can be used. It's not sufficient
because it cannot detect if the application on the other end is still
responsive.



It's a solution to a different problem, provided as a workaround for
defective applications.

DS
 
 
 

question about multithreaded server approach

Post by Torsten Ro » Sat, 10 Dec 2005 06:57:26


Yes, and the effort to do so could be spent making the world better. I'm
quite sure that this feature cannot solve every problem, but I would not
call it pointless.


Agreed, but it might at least detect cases where a connection isn't
alive because a computer was shut down.

regards
Torsten
 
 
 

question about multithreaded server approach

Post by David Schw » Sun, 11 Dec 2005 04:42:30


The thing is, it only solves part of the problem. And any solution to
the remaining part will solve the whole problem anyway. So it is useless,
except as a workaround for cases where it's not possible to solve the
problem properly. In that case, solving part of the problem is better than
solving none of it.



Can you think of any case where you need to detect one failure and not
the other?

DS
 
 
 

question about multithreaded server approach

Post by Torsten Ro » Sun, 11 Dec 2005 18:23:38


Point taken.


Depends on the requirements ;-) Everywhere you can't change the
application-level protocol. We have some older applications here that
once communicated over DECnet and were ported to TCP/IP without
changing the application-level protocol.

regards
Torsten
 
 
 

question about multithreaded server approach

Post by David Schw » Tue, 13 Dec 2005 12:29:18


Exactly. It's still needed as a workaround for broken or badly-designed
protocols. Deprecated is not the same as pointless. Pointless means there is
no case where you should ever use it. Deprecated means that if you're in a
case where you have to use it, it's because something else is broken (and it
would be better to fix that something else, if that's possible).

DS