[zeromq-dev] Welcome to the "zeromq-dev" mailing list

Benjamin Cordes benjamin.l.cordes at gmail.com
Fri May 30 20:53:26 CEST 2014


Jeremy, your questions are well covered by the Guide (= Bible). Chapter 5
covers the clone pattern. If the client drops out, it REQs a snapshot and
then goes back to SUBing the updates. The clone pattern is not elementary,
because you don't know where the updates will come from, under what
conditions, etc. The REQ side can be handled as a DEALER. The client will
know if it is out of sync; the server will not. In a way, a server is really
nothing more than a reliable, permanent service with an address. Example.com
can route to any number of machines; what matters is that the functions are
performed by some machine. If you need high reliability (over an inherently
unreliable protocol) you need more tricks, which are also covered:
heartbeating, maybe even cloning from peers.
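
In rough outline, the snapshot-then-subscribe part of that looks something
like this on the client side (an untested sketch; the endpoints and the
single-frame ICANHAZ?/KTHXBAI exchange are simplified from the Guide's clone
examples):

    #include <zmq.h>
    #include <string.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();

        /* Connect the SUB socket first so live updates queue up while the
           snapshot is being fetched */
        void *updates = zmq_socket (ctx, ZMQ_SUB);
        zmq_connect (updates, "tcp://localhost:5557");
        zmq_setsockopt (updates, ZMQ_SUBSCRIBE, "", 0);

        /* A DEALER plays the REQ role for the snapshot request */
        void *snapshot = zmq_socket (ctx, ZMQ_DEALER);
        zmq_connect (snapshot, "tcp://localhost:5556");
        zmq_send (snapshot, "ICANHAZ?", 8, 0);

        /* Read snapshot messages until the server sends its terminator */
        while (1) {
            char buf [256];
            int size = zmq_recv (snapshot, buf, 255, 0);
            if (size == -1)
                break;                      /* interrupted */
            buf [size < 255 ? size : 255] = 0;
            if (strcmp (buf, "KTHXBAI") == 0)
                break;                      /* snapshot complete */
            /* ...apply snapshot message to local state... */
        }

        /* From here on, apply live updates from the SUB socket */
        while (1) {
            char buf [256];
            int size = zmq_recv (updates, buf, 255, 0);
            if (size == -1)
                break;
            /* ...apply update, or re-request a snapshot if out of sync... */
        }

        zmq_close (snapshot);
        zmq_close (updates);
        zmq_ctx_destroy (ctx);
        return 0;
    }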


On Fri, May 30, 2014 at 8:27 PM, Keith Henrickson <
Keith.Henrickson at nominum.com> wrote:

>  I’m curious about the answer here myself, since our use case is similar.
> As there don't seem to be any explicit notifications that a client
> connects or disconnects (outside the ZAP interface when negotiating
> security), we are thinking of using some type of sliding window and
> DEALER/ROUTER so that we have bidirectional communication. Of course, that
> means we then re-invent TCP over ZeroMQ, with all the bad ideas that
> entails.
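>
> A bare-bones sketch of the ROUTER end of that idea (untested; the endpoint,
> the one-second interval and the bookkeeping are only illustrative). Every
> message arrives as [identity][payload], so the server "discovers" a client
> from its first heartbeat and can treat prolonged silence as a disconnect:
>
>     #include <zmq.h>
>
>     int main (void)
>     {
>         void *ctx = zmq_ctx_new ();
>         void *router = zmq_socket (ctx, ZMQ_ROUTER);
>         zmq_bind (router, "tcp://*:5570");
>
>         while (1) {
>             zmq_pollitem_t items [] = {{ router, 0, ZMQ_POLLIN, 0 }};
>             zmq_poll (items, 1, 1000);          /* heartbeat interval, ms */
>
>             if (items [0].revents & ZMQ_POLLIN) {
>                 char identity [256], payload [256];
>                 int id_size = zmq_recv (router, identity, 255, 0);
>                 int size = zmq_recv (router, payload, 255, 0);
>                 if (id_size == -1 || size == -1)
>                     break;
>                 /* ...record "last seen" time for this identity... */
>             }
>             /* ...any identity silent for a few intervals: mark it down and
>                start caching/windowing its outgoing messages... */
>         }
>         zmq_close (router);
>         zmq_ctx_destroy (ctx);
>         return 0;
>     }
>
> The client side is just a DEALER sending a short heartbeat frame every
> second; setting ZMQ_IDENTITY before connecting lets the server recognise
> the same client again after a reconnect.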
>
>  On May 30, 2014, at 7:21 AM, Jeremy Richemont <jrichemont at gmail.com>
> wrote:
>
>  Thanks, Charles. I did, in fact, find that pattern. The problem is it
> does not match what I am trying to do. That pattern is for when you have
> state + deltas. What I have is a continuous message stream which, once
> started to client x, must be preserved even if client x dies for a bit (not
> forever, of course; I put an SLA of 1 million messages/client). When it
> reconnects, every message it missed is replayed, in order, and then the
> live stream resumes.
>
>  It needs to handle n clients, any of which may drop and reconnect, so
> each one will need an independent message cache. PUB/SUB will not do for
> this because I may need to send messages 10-100 to client x on reconnect
> but 50-200 to client y.
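>
> Conceptually, what I think the server needs per client is something like
> this (names made up, just to show the shape of it):
>
>     #include <stdint.h>
>     #include <stddef.h>
>
>     /* One of these per client: a sequence-numbered cache so the server can
>        replay exactly the range that client missed */
>     typedef struct {
>         unsigned char identity [256];   /* ZeroMQ routing identity */
>         size_t identity_size;
>         uint64_t next_seq;              /* next sequence number to send */
>         uint64_t acked_seq;             /* highest sequence the client has seen */
>         /* ring buffer of the last N unacknowledged messages, N bounded by
>            the 1 million message SLA */
>     } client_state_t;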
>
>  Asking for state is a good idea - ask for missing updates in my case -
> but the question remains: how does the server know the client is no longer
> available, and that it must therefore start backing up messages from a PUB
> socket? The client can't tell it out-of-band, because it has already died.
>
>  If I could just query the PUB socket for a list of clients, plus a
> notification when one drops, that would solve the problem, I think. But how
> do I do that?
>
>  Jeremy
>
> On 30 May 2014 15:00, Charles Remes <lists at chuckremes.com> wrote:
>
>> Take a look at the Clone pattern in the zguide.
>>
>> http://zguide.zeromq.org/page:all#Reliable-Pub-Sub-Clone-Pattern
>>
>> This might be what you need.
>>
>> cr
>>
>> On May 29, 2014, at 11:20 AM, Jeremy Richemont <jrichemont at gmail.com>
>> wrote:
>>
>> >
>> > Hi. I am struggling to work out how to use zmq to implement the
>> > architecture I need. I have a classic publish/subscribe situation, except
>> > that once client x has subscribed to a topic, I need the topic data being
>> > sent to it to be cached if the client dies, and resent when it reconnects.
>> > The data order is important and I can't miss messages should the client
>> > be offline for a while.
>> >
>> > The PUB/SUB pattern doesn't seem to know about individual clients and
>> > will just stop sending to client x if it dies. Plus I can't find out that
>> > this has happened so I can cache the messages, or know when it reconnects.
>> >
>> > To try to get around this I used the REQ/REP pattern, so the clients can
>> > announce themselves and have some persistence, but this is not ideal for
>> > a couple of reasons:
>> >
>> > 1) The clients must constantly ask "got any data for me?" (sketched
>> > below), which offends my sensibilities
>> >
>> > 2) What happens if there's no data to send to client x but there is to
>> > client y? Without zmq I'd have had a thread per client and simply blocked
>> > the one with no data, but I can't block client x without also blocking
>> > client y in a single thread.
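>> >
>> > For reference, the client loop I have now is roughly this (trimmed; the
>> > endpoint and the reply convention are just placeholders):
>> >
>> >     #include <zmq.h>
>> >     #include <unistd.h>
>> >
>> >     int main (void)
>> >     {
>> >         void *ctx = zmq_ctx_new ();
>> >         void *req = zmq_socket (ctx, ZMQ_REQ);
>> >         zmq_connect (req, "tcp://localhost:5555");
>> >
>> >         while (1) {
>> >             /* "got any data for me?" */
>> >             zmq_send (req, "ANYTHING?", 9, 0);
>> >             char buf [256];
>> >             int size = zmq_recv (req, buf, 255, 0);
>> >             if (size == -1)
>> >                 break;
>> >             /* the server answers "NOTHING" or a data message; either way
>> >                we have to come straight back and ask again */
>> >             sleep (1);
>> >         }
>> >         zmq_close (req);
>> >         zmq_ctx_destroy (ctx);
>> >         return 0;
>> >     }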
>> >
>> > Am I trying to shove a round peg into a square hole here? Is there some
>> > way I can get feedback from PUB saying 'failed to send to client x', so I
>> > can cache the messages instead? Or is there some other pattern I should
>> > be using?
>> >
>> > Otherwise it's back to low-level TCP for me...
>> >
>> > Many thanks,
>> >
>> > Jeremy
>> >
>>   > _______________________________________________
>> > zeromq-dev mailing list
>> > zeromq-dev at lists.zeromq.org
>> > http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
>> _______________________________________________
>> zeromq-dev mailing list
>> zeromq-dev at lists.zeromq.org
>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
>
>  _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev