[zeromq-dev] is pgm transport working in alpha3?
Yuri Finkelstein
yurif2005 at gmail.com
Fri Nov 6 20:46:35 CET 2009
yep, fix confirmed :)
So, last question on this. Can I use zmq_forwarder to bridge between:
1. TCP and Multicast
2. Multicast and Multicast (different services of course)
In general, zmq_forwarder could be a very interesting element in this
solution. Looking at the code, it seems it can open multiple input
channels and multiple output channels.
Is it intended to be a simple fan-out solution? Any plans on supporting
routing/filtering rules here?
Also, for RPC-style messaging, I would need to run two forwarders: one
for A->B and another for B->A. Is this the idea? It would be better to have
one forwarder process do both.
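To make this concrete, here is the kind of bridge I have in mind, forwarding
from TCP to multicast. This is an untested sketch against the 2.0 alpha C API;
the three-argument zmq_init, the PUB/SUB socket pair and both endpoints are
just my assumptions:

#include <assert.h>
#include <zmq.h>

int main ()
{
    // One application thread, one I/O thread, no flags (alpha3 signature).
    void *ctx = zmq_init (1, 1, 0);
    assert (ctx);

    // Ingress: collect messages from TCP publishers.
    void *in = zmq_socket (ctx, ZMQ_SUB);
    int rc = zmq_setsockopt (in, ZMQ_SUBSCRIBE, "", 0);  // no filtering
    assert (rc == 0);
    rc = zmq_bind (in, "tcp://0.0.0.0:5555");
    assert (rc == 0);

    // Egress: re-publish everything on the multicast group.
    void *out = zmq_socket (ctx, ZMQ_PUB);
    rc = zmq_connect (out, "udp://eth0;226.0.0.1:5556");
    assert (rc == 0);

    // The pump: whatever arrives on the TCP side goes out over PGM.
    while (1) {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        if (zmq_recv (in, &msg, 0) == 0)
            zmq_send (out, &msg, 0);  // send takes over the message content
        zmq_msg_close (&msg);
    }
    return 0;
}

For the RPC case I would hope a single process like this could simply run a
second in/out pair in the opposite direction, instead of two forwarders.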
Thanks,
Yuri
On Thu, Nov 5, 2009 at 1:34 PM, Pavol Malosek <malosek at fastmq.com> wrote:
> Hello,
>
> This bug was solved a few weeks ago (commit 39d915de).
> Here is the patch against alpha3:
>
> --- zeromq-2.0-alpha3-orig/src/pgm_receiver.cpp 2009-09-23 11:44:50.000000000 +0200
> +++ zeromq-2.0-alpha3/src/pgm_receiver.cpp 2009-11-05 23:18:36.000000000 +0100
> @@ -153,7 +153,6 @@
> // information (sizeof uint16_t).
> raw_data += sizeof (uint16_t);
> nbytes -= sizeof (uint16_t);
> - zmq_assert (apdu_offset <= nbytes);
>
> // New peer.
> if (it == peers.end ()) {
> @@ -174,6 +173,7 @@
> // Now is the possibility to join the stream.
> if (!it->second.joined) {
>
> + zmq_assert (apdu_offset <= nbytes);
> zmq_assert (it->second.decoder == NULL);
>
> // We have to move data to the begining of the first message.
> Thanks for the report anyway!
>
> malo
>
>
>
> ----- Original Message -----
> From: Yuri Finkelstein <yurif2005 at gmail.com>
> To: Pavol Malosek <malosek at fastmq.com>
> Sent: Thursday, November 05, 2009 8:40 PM
> Subject: Re: [zeromq-dev] is pgm transport working in alpha3?
>
> Another issue. This works:
>
> -bash-3.2$ ./perf/c/local_thr "udp://;226.0.0.1:5555" 1024 *10*
> message size: 1024 [B]
> message count: 10
> mean throughput: 1250000 [msg/s]
> mean throughput: 10240.000 [Mb/s]
>
>
> But this doesn't:
> -bash-3.2$ ./perf/c/local_thr "udp://;226.0.0.1:5555" 1024 *100*
> Assertion failed: apdu_offset <= nbytes (pgm_receiver.cpp:156)
> Aborted
>
>
> On Thu, Nov 5, 2009 at 11:07 AM, Yuri Finkelstein <yurif2005 at gmail.com> wrote:
>
>> Thanks! Indeed, if I lower the message size and count, the setup below
>> works.
>> I'm not sure what role this pgm bug/feature plays in it, but one thing
>> seems fairly clear to me: in the perf samples, the pgm_rate has to be
>> derived from the message size and count supplied on the command line
>> rather than being set to a constant.
>>
>> Also, printing a meaningful error message on stderr when rc != 0 would
>> be a very good idea :)
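>> Something like this is what I mean, as an untested sketch; message_size and
>> message_count are the values the perf samples already take on the command
>> line, and I assume the option still has to be set before zmq_connect, as in
>> your snippet:
>>
>> #include <errno.h>
>> #include <stdint.h>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <string.h>
>> #include <zmq.h>
>>
>> // Size the PGM rate limiter (ZMQ_RATE is in kb/s) so the whole test
>> // payload can pass in roughly one second, and report failures on
>> // stderr instead of dying in a bare assert.
>> static void set_pgm_rate (void *s, size_t message_size, int message_count)
>> {
>>     uint64_t pgm_rate =
>>         (uint64_t) message_size * message_count * 8 / 1000;
>>     if (pgm_rate < 100)
>>         pgm_rate = 100;  // never go below the library default
>>     int rc = zmq_setsockopt (s, ZMQ_RATE, &pgm_rate, sizeof pgm_rate);
>>     if (rc != 0) {
>>         fprintf (stderr, "zmq_setsockopt (ZMQ_RATE) failed: %s\n",
>>             strerror (errno));
>>         exit (1);
>>     }
>> }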
>>
>> Yuri
>>
>>
>> On Thu, Nov 5, 2009 at 2:52 AM, Pavol Malosek <malosek at fastmq.com> wrote:
>>
>>> Hello Yuri,
>>>
>>> There was a bug/feature in the OpenPGM rate limiter which could cause the
>>> problem you are describing.
>>>
>>> http://groups.google.com/group/openpgm-dev/browse_thread/thread/2e40a5d428d9b27a
>>>
>>> Since the 0MQ II default rate for PGM is 100 kb/s, try increasing it by
>>> adding this in your remote/local_thr.cpp:
>>> // Add your socket options here.
>>> // For example ZMQ_RATE, ZMQ_RECOVERY_IVL and ZMQ_MCAST_LOOP for PGM.
>>> uint64_t pgm_rate = 50000; // 50Mb/s
>>> int rc = zmq_setsockopt (s, ZMQ_RATE, &pgm_rate, sizeof (pgm_rate));
>>> assert (rc == 0);
>>>
>>> HTH
>>>
>>> Yes, you are right: in 0MQ II there is no zmq_server at all.
>>>
>>> malo
>>>
>>>
>>> ----- Original Message -----
>>> From: Yuri Finkelstein <yurif2005 at gmail.com>
>>> To: zeromq-dev at lists.zeromq.org
>>> Sent: Wednesday, November 04, 2009 11:26 PM
>>> Subject: [zeromq-dev] is pgm transport working in alpha3?
>>>
>>> I'm trying to run the perf test using openpgm multicast and can't get it
>>> to work:
>>>
>>> machine 1:
>>>
>>> sudo ./perf/cpp/local_thr "udp://eth0;226.0.0.1:5555" 1024 100000
>>>
>>> machine 2
>>> sudo ./perf/cpp/remote_thr "udp://eth0;226.0.0.1:5555" 1024 100000
>>>
>>>
>>> the remote_thr process exits silently after a few seconds.
>>>
>>> Some time after that the local_thr process reports:
>>> ** (process:32674): WARNING **: peer expired, tsi
>>> 227.186.205.113.22.215.39440
>>>
>>>
>>> No evidence of packets sent or received. The same setup works over TCP,
>>> as expected.
>>>
>>> I tried pgm:// with the same result.
>>>
>>> I do see both processes joining the multicast group on their boxes,
>>> using netstat -g.
>>>
>>> The documentation for the 1.0 version says that zmq_server needs to be
>>> running in the background for multicast to work, but in the 2.0 alpha no
>>> such process is needed anymore. Right?
>>>
>>>
>>> What's wrong here?
>>>
>>> Thanks,
>>> Yuri
>>>