[zeromq-dev] multiple network interfaces for one exchange?

Aamir M aamirjvm at gmail.com
Fri Mar 20 15:38:05 CET 2009


Hi Martin,

It's the second use case that I'm interested in. We have different
compute clusters on different networks, and we want to combine all of
them into a single cluster for our use. Doing this at the 0MQ level
would require the ability to define a single global object on
multiple network interfaces.

I understand your concerns about not wanting to bloat the API. XML is
probably a good solution, although I imagine that this would create
dependencies on third-party libraries such as Xerces-C++. Would it be
possible to separate global object creation and pairing the object to
specific network locations? For example, something like:

api->create_exchange("E");
api->add_location("E", "eth0:1234");
api->add_location("E", "tun0:1234");
api->remove_location("E", "eth0:1234");

I'm not sure if this is a good idea or even possible, but it doesn't
seem to make the API too complicated.
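To make the intended semantics concrete, here is a minimal sketch of the registry behaviour those calls imply. All the names here (object_registry, location_count) are hypothetical and not part of any existing 0MQ API; a global object is simply a name paired with a mutable set of network locations:

```cpp
#include <cstddef>
#include <map>
#include <set>
#include <string>

//  Hypothetical sketch of the proposed registry semantics: a global
//  object is a name paired with a mutable set of network locations.
class object_registry
{
public:
    void create_exchange (const std::string &name)
    {
        objects [name];  //  Create an entry with an empty location set.
    }
    void add_location (const std::string &name, const std::string &loc)
    {
        objects.at (name).insert (loc);
    }
    void remove_location (const std::string &name, const std::string &loc)
    {
        objects.at (name).erase (loc);
    }
    std::size_t location_count (const std::string &name) const
    {
        return objects.at (name).size ();
    }
private:
    std::map<std::string, std::set<std::string> > objects;
};
```

Under this model, the example calls above would leave exchange "E" bound only to tun0:1234.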

Thanks,
Aamir

On Fri, Mar 20, 2009 at 6:01 AM, Martin Sustrik <sustrik at fastmq.com> wrote:
> Hi Aamir,
>
>> I have a situation where some clients are on a VPN tunnel interface
>> (e.g. tun0 in ifconfig) and some are on a regular ethernet interface
>> (e.g. eth0). I want to use a load-balancing global exchange to
>> distribute requests across both network interfaces. I don't know how
>> easy it is to abstract these two interfaces into a single interface at
>> the OS level (e.g. using OpenVPN). Is it correct to assume that the
>> 0MQ exchange cannot publish on both interfaces at the same time?
>
> You're right, there's no way to attach a single global object to multiple
> interfaces. However, this functionality would be really useful, so let me
> discuss it a little.
>
> Basically, I can imagine 2 use cases (I'm not sure which one of them is the
> one you want):
>
> 1.) Increasing the bandwidth.
>
> In this use case, the 1Gb/sec that your NIC provides isn't enough for you,
> so instead of buying 10GbE or InfiniBand cards you buy a second 1GbE card.
>
> Now, Linux allows you to bond the two cards into a single virtual interface.
> No need to change 0MQ at all. The drawbacks of the solution are:
>
> * Individual network connections are still assigned to a single NIC, so a
> single connection cannot use more than 1Gb/sec of bandwidth anyway.
>
> * Interface bonding affects all the traffic; it doesn't allow you to
> balance the load for specific message streams only.
>
> It would be easy to implement this kind of thing at the 0MQ level. You
> would simply specify multiple interfaces when creating the global object.
> Client applications binding to the global object would choose one of the
> interfaces to connect to (either randomly, or by XORing the MAC address).
> Advantages:
>
> * It would work even on OSes that don't support interface bonding.
>
> * Bonding would be controlled at the level of a particular data stream
> rather than at the level of the network interface as a whole.
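The deterministic variant of that client-side selection could look something like the sketch below. The exact XOR-folding scheme is an assumption on my part, and choose_location is a hypothetical helper, not an existing 0MQ function:

```cpp
#include <cstddef>

//  Pick one of n_locations advertised interfaces deterministically
//  from the client's MAC address by XOR-folding its six bytes.
//  Hypothetical sketch only; not an existing 0MQ function.
std::size_t choose_location (const unsigned char mac [6],
    std::size_t n_locations)
{
    unsigned char hash = 0;
    for (int i = 0; i != 6; i++)
        hash ^= mac [i];
    return hash % n_locations;
}
```

The same client always maps to the same interface, while different MAC addresses spread roughly evenly across the available interfaces.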
>
> 2.) Load-balancing requests among separate networks.
>
> If you have two NICs, each connected to a different network, you can load
> balance the messages at the application level (create 2 exchanges and send
> messages alternately to the first one and the second one). The problem
> with this approach is that if there is a single connection on one NIC and
> five network connections on the other NIC, 50% of the messages will be
> routed to the single connection on the first NIC, while the five
> connections on the second NIC will each receive only 10% of the messages.
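The arithmetic behind that imbalance can be sketched as follows. Alternating between two exchanges gives each exchange half the traffic regardless of how many connections sit behind it; per_connection_share is an illustrative helper only, not 0MQ code:

```cpp
#include <vector>

//  Share of total traffic seen by each connection when messages are
//  alternated between two exchanges, one with conns_a connections
//  behind it and one with conns_b. Illustrative helper only.
std::vector<double> per_connection_share (int conns_a, int conns_b)
{
    std::vector<double> shares;
    for (int i = 0; i != conns_a; i++)
        shares.push_back (0.5 / conns_a);  //  First exchange's 50%, split.
    for (int i = 0; i != conns_b; i++)
        shares.push_back (0.5 / conns_b);  //  Second exchange's 50%, split.
    return shares;
}
```

With one connection on the first NIC and five on the second, the single connection gets 0.5 of the traffic and each of the other five gets 0.1, instead of the 1/6 each that a connection-aware scheme would give.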
>
> To solve this problem, the global object, which is currently a pair
> consisting of a name and a network location, would have to be changed to
> contain multiple locations. Also, applications binding to a global object
> shouldn't bind to the object as a whole, but rather to a specific location
> (NIC) within it.
>
> This kind of thing would be easy to implement; the problem is that it
> would make the API much more complex.
>
> I was thinking of this kind of functionality before, although with a
> different use case (distributing market data on a local subnet via PGM
> while sending it to a geographically distant location via TCP - see the
> diagram attached), but the complexity of the API scared me off. Now,
> there's a setup-via-XML feature on our roadmap. Defining your dataflows
> using XML would allow you to write rather complex configurations without
> the need for a bloated API. Once we have that, implementing this feature
> will be fairly trivial.
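For illustration only, such an XML setup might look something like the fragment below. The schema is purely hypothetical - nothing of the sort exists yet, and every element and attribute name here is an assumption:

```xml
<!-- Hypothetical configuration sketch; no such schema is defined yet. -->
<exchange name="E">
    <location transport="pgm" interface="eth0" port="5555"/>
    <location transport="tcp" interface="tun0" port="5556"/>
</exchange>
```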
>
> Thoughts?
> Martin
>
>
