[zeromq-dev] source material for TIBCO to ØMQ migration

Martin Sustrik sustrik at 250bpm.com
Tue Jan 4 10:09:48 CET 2011


Steven,

Wouldn't it make more sense to place this on the website somewhere?

Martin

On 01/03/2011 02:15 PM, Steven McCoy wrote:
> I'm working through messaging pattern examples for TIB (MSA), Rendezvous
> 5, TIB/Rendezvous 6/7/8, and ØMQ.  Unsurprisingly, no single framework
> supports every combination, but I'm working from what is available
> on the TIBCO side, i.e. leveraging location transparency and avoiding
> single points of failure.
>
>    1. *Broadcast Publish-Subscribe*, dependent upon an underlying
>       broadcast transport, i.e. PGM/IP or PGM/UDP.  TCP is fundamentally
>       flawed for this because it requires a fixed endpoint (see the
>       PUB/SUB sketch after this list).
>    2. *Broadcast Request-Reply*, the server and client possess location
>       independence, but multiple running servers would yield multiple
>       responses as each receives the client's query.
>    3. *Unicast Publish-Subscribe*, feeding from one node to another,
>       requiring at least one temporally fixed address and introducing a
>       point of failure.  Analogous to ØMQ using a TCP transport with
>       PUB/SUB sockets (also covered by the PUB/SUB sketch below).
>    4. *Unicast Request-Reply*, requiring both sides to possess
>       temporally fixed addressing and hence two points of failure.
>       Analogous to REQ/REP sockets in ØMQ (see the REQ/REP sketch
>       after this list).
>    5. *Fault-Tolerant Request-Reply*, extending the /broadcast
>       publish-subscribe/ model by implementing fail-over fault tolerance
>       on the server side.
>    6. *Certified Publish-Subscribe*, using a ledger on both the publish
>       and subscribe sides in order to track receivers and the state of
>       confirmed messages.
>    7. *Certified Request-Reply*, extending ledger usage to both the
>       client-to-server query and the server-to-client response.  Note
>       that TIBCO Rendezvous has a well-known flaw with tracing certified
>       unicast responses, causing the ledger not to be purged.
>    8. *Distributed Request-Reply*, combining certified and
>       fault-tolerant messaging: a distributed group of servers is
>       created, using fault tolerance to elect a scheduler, which uses
>       certified messaging to dispatch messages to application workers.
>       The scheduler is a point of failure; its loss causes dropped
>       messages.  This matches ØMQ's PUSH/PULL sockets, which implement
>       a load-sharing group of receivers (see the PUSH/PULL sketch after
>       this list); however, ØMQ still requires a temporally fixed
>       endpoint address.
>    9. *Certified Distributed Request-Reply*, adding certified messaging
>       from the client through to the application worker.  Failure of
>       the scheduler defers recovery to the client ledger.  When
>       configured with a memory ledger it is functionally equivalent to
>       disk-spooled ØMQ PUSH/PULL sockets with fixed identities (see the
>       durable-socket sketch after this list); however, the common
>       configuration is a disk ledger, which can continue message
>       delivery independent of client and server restarts.  If the
>       application crashes the ledger will be corrupted, and therefore
>       applications must use an external transaction manager to
>       implement guaranteed once-only delivery.  Note that ØMQ has a
>       major issue here: messages delivered to the receiver but not yet
>       processed by the application will be lost upon failure; only
>       undelivered messages are held in queue.
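>
> To make the ØMQ side of this concrete, here is a rough publish-subscribe
> sketch against the 2.x C API (error handling omitted; the interface name,
> multicast group and ports are placeholders, and the epgm transport needs
> a PGM-enabled libzmq build).  Broadcast versus unicast pub-sub differs
> only in the endpoint string handed to the PUB socket:
>
>     #include <string.h>
>     #include <zmq.h>
>
>     int main (void)
>     {
>         void *ctx = zmq_init (1);
>         void *pub = zmq_socket (ctx, ZMQ_PUB);
>
>         /* Broadcast: PGM multicast group, no fixed receiver endpoint. */
>         zmq_connect (pub, "epgm://eth0;239.192.1.1:7500");
>         /* Unicast alternative: a temporally fixed TCP endpoint.       */
>         /* zmq_bind (pub, "tcp://*:5556"); */
>
>         /* Real code would wait for subscribers to join before sending. */
>         zmq_msg_t msg;
>         zmq_msg_init_size (&msg, 5);
>         memcpy (zmq_msg_data (&msg), "hello", 5);
>         zmq_send (pub, &msg, 0);        /* 2.x signature: socket, msg, flags */
>         zmq_msg_close (&msg);
>
>         /* Subscriber, in another process:
>          *   void *sub = zmq_socket (ctx, ZMQ_SUB);
>          *   zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);
>          *   zmq_connect (sub, "epgm://eth0;239.192.1.1:7500");
>          */
>
>         zmq_close (pub);
>         zmq_term (ctx);
>         return 0;
>     }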
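>
> A minimal REQ/REP sketch along the same lines; both ends live in one
> process purely for brevity, and the port is again a placeholder.  The
> bound and connected addresses are the two temporally fixed points
> mentioned above:
>
>     #include <string.h>
>     #include <zmq.h>
>
>     int main (void)
>     {
>         void *ctx = zmq_init (1);
>
>         void *rep = zmq_socket (ctx, ZMQ_REP);      /* server */
>         zmq_bind (rep, "tcp://*:5555");
>
>         void *req = zmq_socket (ctx, ZMQ_REQ);      /* client */
>         zmq_connect (req, "tcp://localhost:5555");
>
>         zmq_msg_t query;
>         zmq_msg_init_size (&query, 4);
>         memcpy (zmq_msg_data (&query), "ping", 4);
>         zmq_send (req, &query, 0);                  /* client asks    */
>         zmq_msg_close (&query);
>
>         zmq_msg_t request;
>         zmq_msg_init (&request);
>         zmq_recv (rep, &request, 0);                /* server reads   */
>         zmq_msg_close (&request);
>
>         zmq_msg_t reply;
>         zmq_msg_init_size (&reply, 4);
>         memcpy (zmq_msg_data (&reply), "pong", 4);
>         zmq_send (rep, &reply, 0);                  /* server answers */
>         zmq_msg_close (&reply);
>
>         zmq_msg_t answer;
>         zmq_msg_init (&answer);
>         zmq_recv (req, &answer, 0);                 /* client reads   */
>         zmq_msg_close (&answer);
>
>         zmq_close (req);
>         zmq_close (rep);
>         zmq_term (ctx);
>         return 0;
>     }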
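>
> And a PUSH/PULL load-sharing sketch (again one process and a placeholder
> port for brevity): start several copies of the PULL side and the PUSH
> side distributes tasks round-robin, but the bound address remains the
> temporally fixed endpoint noted above:
>
>     #include <string.h>
>     #include <zmq.h>
>
>     int main (void)
>     {
>         void *ctx = zmq_init (1);
>
>         void *push = zmq_socket (ctx, ZMQ_PUSH);    /* the "scheduler"  */
>         zmq_bind (push, "tcp://*:5557");
>
>         void *pull = zmq_socket (ctx, ZMQ_PULL);    /* one worker       */
>         zmq_connect (pull, "tcp://localhost:5557");
>
>         zmq_msg_t task;
>         zmq_msg_init_size (&task, 4);
>         memcpy (zmq_msg_data (&task), "work", 4);
>         zmq_send (push, &task, 0);                  /* dispatch a task  */
>         zmq_msg_close (&task);
>
>         zmq_msg_t job;
>         zmq_msg_init (&job);
>         zmq_recv (pull, &job, 0);                   /* worker picks up  */
>         zmq_msg_close (&job);
>
>         zmq_close (pull);
>         zmq_close (push);
>         zmq_term (ctx);
>         return 0;
>     }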
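>
> Finally, a sketch of the "fixed identity" point in 9: this relies on the
> 2.x durable-socket behaviour (ZMQ_IDENTITY on the receiver plus ZMQ_SWAP
> on the sender for disk spooling), so take it as an illustration of the
> knobs rather than a recipe; the identity string and the "scheduler"
> hostname are made up:
>
>     #include <zmq.h>
>
>     int main (void)
>     {
>         void *ctx = zmq_init (1);
>         void *pull = zmq_socket (ctx, ZMQ_PULL);
>
>         /* A fixed identity lets the sending peer retain this socket's
>          * undelivered messages across reconnects (2.x durable sockets). */
>         zmq_setsockopt (pull, ZMQ_IDENTITY, "worker-1", 8);
>
>         /* On the PUSH side, ZMQ_SWAP (bytes, 2.x only) lets that backlog
>          * spill to disk instead of being capped by memory:
>          *   int64_t swap = 64 * 1024 * 1024;
>          *   zmq_setsockopt (push, ZMQ_SWAP, &swap, sizeof swap);
>          */
>
>         zmq_connect (pull, "tcp://scheduler:5557");
>
>         zmq_close (pull);
>         zmq_term (ctx);
>         return 0;
>     }
>
> This only covers messages still queued on the sending side; as noted in
> 9, messages already delivered to the receiver but not yet processed by
> the application are still lost on a crash.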
>
> Not sure how all of this is going to be organised, but this is an FYI
> anyhow, and someone can point out any incorrect understanding of ØMQ
> sockets.
>
> --
> Steve-o
>



