[zeromq-dev] source material for TIBCO to ØMQ migration

Steven McCoy steven.mccoy at miru.hk
Thu Jan 6 11:07:48 CET 2011


And it looks like Google Image Insert is broken, yay.

> The images are also on Cacoo.com, but it looks like you cannot publicly
> share diagrams.
>
> *Broadcast Publish-Subscribe*
> [image: Broadcast Publish.png]
>
> *Broadcast Request-Reply*
> [image: Broadcast Request-Reply.png]
>
> *Unicast Publish-Subscribe*
> [image: Unicast Publish.png]
>
> *Unicast Request-Reply*
> [image: Unicast Request-Reply.png]
>
> *Certified Publish*
> [image: Certified Publish.png]
>
> *Distributed Request-Reply*
> [image: Distributed Request-Reply.png]
>
> --
> Steve-o
>
>
> On 6 January 2011 16:39, Steven McCoy <steven.mccoy at miru.hk> wrote:
>
>> Somewhere.  Here is the basic package; it compresses pretty well (8KB).
>> It's only C code; I have C++ and Java equivalents, but I don't think they
>> help much.
>>
>> http://miru.hk/archive/patterns.1.tar.bz2
>> md5sum: 49d3b8d2bbccb62a13e17bef84332bc5
>>
>> There are obvious gaps in ØMQ that need to be rationalized, possibly with
>> brokers as per The Guide.  Similarly some patterns in the TIB are
>> architecture limitations that do not need direct equivalents, i.e. broadcast
>> request and reply.
>>
>> The major target architectures are FT request-reply (how to replicate a
>> primary-secondary model) and anonymous distributed request-reply, which in
>> itself isn't useful but leads to certified distributed request-reply,
>> commonly used as a high-speed alternative to guaranteed once-only
>> transactional middleware systems such as TIBCO RVTX, TIBCO ETX, and IBM
>> WebSphere MQ.
>>
>> --
>> Steve-o
>>
>>
>> On 4 January 2011 17:09, Martin Sustrik <sustrik at 250bpm.com> wrote:
>>
>>> Steven,
>>>
>>> Wouldn't it make more sense to place this on the website somewhere?
>>>
>>> Martin
>>>
>>>
>>> On 01/03/2011 02:15 PM, Steven McCoy wrote:
>>>
>>>> I'm working through messaging pattern examples for TIB (MSA), Rendezvous
>>>> 5, TIB/Rendezvous 6/7/8, and ØMQ.  Unsurprisingly, no single framework
>>>> supports all combinations, but I'm working from what is available on the
>>>> TIBCO side, i.e. leveraging location transparency and avoiding any single
>>>> point of failure.
>>>>
>>>>   1. *Broadcast Publish-Subscribe*, dependent upon an underlying
>>>>      broadcast transport, i.e. PGM/IP or PGM/UDP.  TCP is fundamentally
>>>>      flawed here because it requires a fixed endpoint.
>>>>   2. *Broadcast Request-Reply*, the server and client possess location
>>>>      independence, but multiple running servers would yield multiple
>>>>      responses as each receives the client's query.
>>>>   3. *Unicast Publish-Subscribe*, feeding from one node to another,
>>>>      requiring at least one temporally fixed address and introducing a
>>>>      point of failure.  Analogous to ØMQ using a TCP transport with
>>>>      PUB/SUB sockets.
>>>>   4. *Unicast Request-Reply*, requiring both sides to possess
>>>>      temporally fixed addressing and hence two points of failure.
>>>>      Analogous to REQ/REP sockets in ØMQ.
>>>>   5. *Fault-Tolerant Request-Reply*, extending the /broadcast
>>>>      publish-subscribe/ model by implementing fail-over fault tolerance
>>>>      on the server side.
>>>>   6. *Certified Publish-Subscribe*, using a ledger on both the publish
>>>>      side and the subscriber side in order to track receivers and the
>>>>      state of confirmed messages.
>>>>   7. *Certified Request-Reply*, extending ledger usage to both the
>>>>      client-to-server query and the server-to-client response.  Note
>>>>      TIBCO Rendezvous has a well-known flaw with tracing certified
>>>>      unicast responses, causing the ledger not to be purged.
>>>>   8. *Distributed Request-Reply*, combining certified and
>>>>      fault-tolerant messaging: a distributed group of servers is
>>>>      created, using fault tolerance to elect a scheduler which uses
>>>>      certified messaging to dispatch messages to application workers.
>>>>      Failure of the scheduler is a point of failure causing dropped
>>>>      messages.  Matches ØMQ's PUSH/PULL sockets, which implement a
>>>>      load-sharing group of receivers; however, ØMQ still requires a
>>>>      temporally fixed endpoint address.
>>>>   9. *Certified Distributed Request-Reply*, adding certified messaging
>>>>      from the client through to the application worker.  Failure of the
>>>>      scheduler defers recovery to the client ledger.  When configured
>>>>      with a memory ledger it is functionally equivalent to disk-spooled
>>>>      ØMQ PUSH/PULL sockets with fixed identities; however, the common
>>>>      configuration is a disk ledger, which can continue message delivery
>>>>      independent of client and server restarts.  If the application
>>>>      crashes the ledger will be corrupted, and therefore applications
>>>>      must use an external transaction manager to implement guaranteed
>>>>      once-only delivery.  Note that ØMQ has a major issue: messages
>>>>      delivered to the receiver but not yet processed by the application
>>>>      will be lost upon failure; only undelivered messages are held in
>>>>      the queue.
>>>>
>>>> Not sure how all of this is going to be organised, but it's an FYI
>>>> anyhow, and someone can point out any incorrect understanding of ØMQ
>>>> sockets.
>>>>
>>>> --
>>>> Steve-o
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> zeromq-dev mailing list
>>>> zeromq-dev at lists.zeromq.org
>>>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>>>
>>>
>>>
>>
>