[zeromq-dev] need to lower memory usage

Dave Peacock dave.peacock at gmail.com
Fri Aug 1 20:23:24 CEST 2014


I've just run into this same issue. I haven't tried uClibc yet, though
thanks for that suggestion; I'll investigate it later. For those of you
running embedded Linux or similar, under POSIX it looks like there are a few
options for controlling the per-thread stack size.

* Before the thread is created, you could call pthread_attr_setstacksize (
http://man7.org/linux/man-pages/man3/pthread_attr_setstacksize.3.html).
This requires modifying the ZeroMQ source; see the first sketch after this list.

* Use setrlimit (http://man7.org/linux/man-pages/man2/getrlimit.2.html)
with the RLIMIT_STACK resource. This limit is what pthreads uses for the
stack size of a thread on create. Note however that the value in force at
process start is what counts, so you would need to set this before
forking/exec'ing your program; see the launcher sketch after this list.
The size is in bytes and should probably be a multiple of the page size.
(RLIMIT_AS might be something to play with as well.)

* Use ulimit in the shell before starting the process -- e.g. ulimit -s 512
(the size is in kilobytes).
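
Here's roughly what the first option looks like -- a generic sketch of
pthread_attr_setstacksize with pthread_create, not the actual libzmq code,
and the 256 KB figure is just an illustration:

/* Generic sketch only -- a patch would do something like this wherever
   libzmq creates its I/O threads. */
#include <pthread.h>
#include <stddef.h>

static void *worker (void *arg) { (void) arg; return NULL; }

int start_small_stack_thread (pthread_t *thread)
{
    pthread_attr_t attr;
    pthread_attr_init (&attr);
    /* 256 KB is an arbitrary example; PTHREAD_STACK_MIN is the floor. */
    pthread_attr_setstacksize (&attr, 256 * 1024);
    int rc = pthread_create (thread, &attr, worker, NULL);
    pthread_attr_destroy (&attr);
    return rc;
}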

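And a minimal sketch of the setrlimit route: since the limit in force at
process start is what counts, this would go in a small launcher that sets
RLIMIT_STACK and then execs the real program. The 512 KB value and the
binary path are placeholders, not recommendations:

#include <sys/resource.h>
#include <unistd.h>
#include <stdio.h>

int main (void)
{
    /* rlim_cur (soft) and rlim_max (hard), both in bytes */
    struct rlimit rl = { 512 * 1024, 512 * 1024 };
    if (setrlimit (RLIMIT_STACK, &rl) != 0) {
        perror ("setrlimit");
        return 1;
    }
    /* rlimits survive exec, so the zmq app starts with the smaller
       default thread stack */
    execl ("./zmqapp", "zmqapp", (char *) NULL);   /* placeholder path */
    perror ("execl");
    return 1;
}
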
So far so good here with the last option, but it's early in testing. I've
set ZMQ_SNDBUF and ZMQ_RCVBUF relatively low as well; hopefully this doesn't
hurt performance too much. Our messages are typically small but frequent; we
have been using IPC but will have to move to TCP soon for at least some of
the traffic.
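
Something along these lines -- the 8 KB figure is purely illustrative, not a
recommendation, and the endpoint is made up:

void *ctx = zmq_ctx_new ();
void *sock = zmq_socket (ctx, ZMQ_PUSH);

int buf = 8 * 1024;   /* bytes; 0 means leave the OS default */
zmq_setsockopt (sock, ZMQ_SNDBUF, &buf, sizeof buf);
zmq_setsockopt (sock, ZMQ_RCVBUF, &buf, sizeof buf);

/* set the buffer options before connect/bind so they take effect */
zmq_connect (sock, "tcp://127.0.0.1:5555");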


cheers
dave


On Fri, Jul 25, 2014 at 3:35 PM, Ben Kloosterman <bklooste at gmail.com> wrote:

> Threads are not that cheap; libs like to give them plenty of stack these
> days. ...
>
> There were some papers on high-thread-count memory management with
> segmented stacks, but there was a significant performance cost. That
> (memory allocation) and contention is one of the reasons why co-routines
> (Node.js) / disruptor patterns perform better than actors (with one thread
> per actor) in most cases.
>
> Why do you need to use so many? Can't you pool them, or are you just
> working on embedded hardware?
>
> In your case it's likely glibc. If you want low memory, try uClibc. On my
> old system glibc takes 8 MB, uClibc 1 MB. If you're that memory
> constrained, glibc is probably not a good idea.
>
> Ben
>
>
> On Sat, Jul 26, 2014 at 8:03 AM, Philip Dizon <philipdotdev at gmail.com>
> wrote:
>
>> I tried using valgrind's massif tool with the following command
>>
>> valgrind --trace-children=yes --tool=massif --pages-as-heap=yes
>> --detailed-freq=1000000 /zmqtestapp
>>
>> Again, all my test app does is create context, create socket, destroy
>> socket, destroy context.
>>
>> Here's what I saw as the output.  I'm not very familiar with valgrind, but
>> I'm guessing some pthread_create call is allocating 16MB.
>>
>>
>> --------------------------------------------------------------------------------
>>   n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
>>
>> --------------------------------------------------------------------------------
>>  37      2,877,217        3,629,056        3,629,056             0            0
>>  38      2,967,318        3,764,224        3,764,224             0            0
>>  39      2,985,374        3,768,320        3,768,320             0            0
>>  40      3,013,543       12,156,928       12,156,928             0            0
>>  41      3,034,072       20,545,536       20,545,536             0            0
>>  42      3,082,222       22,642,688       22,642,688             0            0
>>  43      3,082,241       21,934,080       21,934,080             0            0
>> 100.00% (21,934,080B) (page allocation syscalls) mmap/mremap/brk, --alloc-fns, etc.
>> ->82.84% (18,169,856B) 0x49756F7: mmap (in /lib/libc-2.17.so)
>> | ->76.49% (16,777,216B) 0x49F48D6: pthread_create@@GLIBC_2.4 (in /lib/libpthread-2.17.so)
>> | |
>> | ->06.33% (1,388,544B) 0x491CE96: new_heap (in /lib/libc-2.17.so)
>> | |
>> | ->00.02% (4,096B) in 1+ places, all below ms_print's threshold (01.00%)
>> |
>> ->15.80% (3,465,216B) 0x4018127: mmap (in /lib/ld-2.17.so)
>> | ->15.13% (3,317,760B) 0x4006B7A: _dl_map_object_from_fd (in /lib/ld-2.17.so)
>> | |
>> | ->00.67% (147,456B) in 1+ places, all below ms_print's threshold (01.00%)
>> |
>> ->01.36% (299,008B) in 2 places, all below massif's threshold (01.00%)
>>
>>
>>
>> On Thu, Jul 24, 2014 at 4:21 PM, Pieter Hintjens <ph at imatix.com> wrote:
>>
>>> Perhaps there's a way to run under valgrind or some other checker, and
>>> see where that memory is being allocated? Working blind is not going
>>> to be easy.
>>>
>>> On Thu, Jul 24, 2014 at 7:23 PM, Philip Dizon <philipdotdev at gmail.com>
>>> wrote:
>>> > Sorry to resurrect this old thread as I've been working on a more
>>> > urgent project.
>>> >
>>> > Anyway, it seems that setting the max sockets has no effect on the
>>> > amount of virtual memory used.  I tried setting max sockets to 1 and
>>> > the memory still jumped up by 17MB after the first socket was created.
>>> >
>>> > Here's my sample code using just plain zmq and not czmq:
>>> >
>>> > void *acontext = zmq_ctx_new();
>>> > zmq_ctx_set (acontext, ZMQ_MAX_SOCKETS, maxSockets);
>>> > printf("ZMQ_MAX_SOCKETS = %d\n", zmq_ctx_get (acontext, ZMQ_MAX_SOCKETS));
>>> > // 3MB virtual memory used at this point
>>> > void *foo = zmq_socket (acontext, ZMQ_REP);  // 20MB virtual memory used at this point
>>> >
>>> >
>>> > On Mon, Jun 23, 2014 at 10:44 PM, Pieter Hintjens <ph at imatix.com>
>>> wrote:
>>> >>
>>> >> In czmq master this is now configurable via zsys for the zsock API,
>>> >> which doesn't use contexts.
>>> >>
>>> >> On Jun 24, 2014 6:55 AM, "Michel Pelletier" <pelletier.michel at gmail.com> wrote:
>>> >>>
>>> >>> Yep, you can see a comment in the Python code I posted that
>>> >>> demonstrates there is no underlying context until a socket is created.
>>> >>>
>>> >>> Why not create a dummy socket and then destroy it?  It's not pretty,
>>> >>> but it works.
>>> >>>
>>> >>> -Michel
>>> >>>
>>> >>>
>>> >>> On Mon, Jun 23, 2014 at 9:16 PM, Steve Murphy <murf at parsetree.com>
>>> wrote:
>>> >>>>
>>> >>>> Michel--
>>> >>>>
>>> >>>> Just read a little deeper on zctx_underlying...
>>> >>>>
>>> >>>> The doc says:
>>> >>>>
>>> >>>> //  will be NULL before first socket
>>> >>>> //  is created. Use with care.
>>> >>>>
>>> >>>> And yet, in this use case, we would have to set
>>> >>>> MAX_SOCKETS before the first socket is created....
>>> >>>>
>>> >>>> So, zctx_underlying() won't be useful.
>>> >>>>
>>> >>>>
>>> >>>> murf
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> On Mon, Jun 23, 2014 at 9:41 PM, Steve Murphy <murf at parsetree.com>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> Thanks, Michel--
>>> >>>>>
>>> >>>>> missed that function in the zctx set!
>>> >>>>> Many thanks!
>>> >>>>>
>>> >>>>> murf
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> On Mon, Jun 23, 2014 at 8:56 PM, Michel Pelletier
>>> >>>>> <pelletier.michel at gmail.com> wrote:
>>> >>>>>>
>>> >>>>>> You can use zctx_underlying to get the low-level context object.
>>> >>>>>> Here is an example using pyczmq:
>>> >>>>>>
>>> >>>>>> >>> from pyczmq import zctx, zsocket, zmq
>>> >>>>>> >>> s = zctx.new()
>>> >>>>>> >>> zctx.underlying(s)  # None until a socket is made
>>> >>>>>> >>> p = zsocket.new(s, zmq.PUSH)
>>> >>>>>> >>> zmq.ctx_set(zctx.underlying(s), zmq.MAX_SOCKETS, 10)
>>> >>>>>> 0
>>> >>>>>> >>> zmq.ctx_get(zctx.underlying(s), zmq.MAX_SOCKETS)
>>> >>>>>> 10
>>> >>>>>> >>>
>>> >>>>>>
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> On Mon, Jun 23, 2014 at 7:26 PM, Steve Murphy <murf at parsetree.com
>>> >
>>> >>>>>> wrote:
>>> >>>>>>>
>>> >>>>>>> Philip--
>>> >>>>>>>
>>> >>>>>>> In the code you provided, you are mixing czmq
>>> >>>>>>> and zmq lib calls, and as stated, it won't work.
>>> >>>>>>>
>>> >>>>>>> The zctx_t that zctx_new() provides is NOT a void* that
>>> >>>>>>> zmq_ctx_new() would give you, and the calls to zmq_ctx_set() and
>>> >>>>>>> zmq_ctx_get() will not work properly if given a zctx_t. If you
>>> >>>>>>> replace your call to zctx_new() with zmq_ctx_new(), you will get
>>> >>>>>>> better results.
>>> >>>>>>>
>>> >>>>>>> Now, Pieter mentioned that:
>>> >>>>>>>
>>> >>>>>>> "You can lower the max sockets per context, before creating your
>>> >>>>>>> first
>>> >>>>>>> context. See zmq_ctx_set ()."
>>> >>>>>>>
>>> >>>>>>> But I think he meant to say "before creating your first socket."
>>> >>>>>>>
>>> >>>>>>> (which, btw, is not in the ZMQ ref manual.)
>>> >>>>>>>
>>> >>>>>>> I double-checked the zctx page in the CZMQ spec, and no function
>>> >>>>>>> is available to get/set the context options... at least not
>>> >>>>>>> MAX_SOCKETS. So, if you need to play with MAX_SOCKETS, you have
>>> >>>>>>> to abandon CZMQ, as there is no way to slip from the zctx_t world
>>> >>>>>>> to the void* world.
>>> >>>>>>>
>>> >>>>>>> And a quick look at max sockets on a new context shows that the
>>> >>>>>>> default is 1023. In the 4.1 stuff there is also a
>>> >>>>>>> ZMQ_SOCKET_LIMIT, which is the absolute maximum you can set in a
>>> >>>>>>> set() call, but this isn't in the 4.0 versions.
>>> >>>>>>>
>>> >>>>>>> murf
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> On Mon, Jun 23, 2014 at 10:25 AM, Philip Dizon
>>> >>>>>>> <philipdotdev at gmail.com> wrote:
>>> >>>>>>>>
>>> >>>>>>>> Is it possible to set ZMQ_MAX_SOCKETS for a czmq context object?
>>> >>>>>>>> I ask because I'm getting an assert failure when I do a zmq_ctx_get.
>>> >>>>>>>>
>>> >>>>>>>>  e.g.
>>> >>>>>>>>     client->ctx = zctx_new();
>>> >>>>>>>>     assert(client->ctx);
>>> >>>>>>>>
>>> >>>>>>>>     int max_sockets = 256;
>>> >>>>>>>>     zmq_ctx_set (client->ctx, ZMQ_MAX_SOCKETS, max_sockets);
>>> >>>>>>>>     assert (zmq_ctx_get (client->ctx, ZMQ_MAX_SOCKETS) == max_sockets);
>>> >>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>> On Thu, Jun 19, 2014 at 4:05 PM, Pieter Hintjens <ph at imatix.com
>>> >
>>> >>>>>>>> wrote:
>>> >>>>>>>>>
>>> >>>>>>>>> You can lower the max sockets per context, before creating your
>>> >>>>>>>>> first
>>> >>>>>>>>> context. See zmq_ctx_set ().
>>> >>>>>>>>>
>>> >>>>>>>>> On Fri, Jun 20, 2014 at 12:39 AM, Philip Dizon
>>> >>>>>>>>> <philipdotdev at gmail.com> wrote:
>>> >>>>>>>>> > Hi,
>>> >>>>>>>>> >
>>> >>>>>>>>> > I noticed that just creating a new zmq context and socket
>>> >>>>>>>>> > bumps my memory usage by 18MB, and this is a big problem on
>>> >>>>>>>>> > my embedded system, which only has about 100MB.  I determined
>>> >>>>>>>>> > this by using the top command and comparing the difference in
>>> >>>>>>>>> > memory usage.
>>> >>>>>>>>> > Is there any sort of option I can use to lower the memory
>>> >>>>>>>>> > usage?
>>> >>>>>>>>> >
>>> >>>>>>>>> > Thanks
>>> >>>>>>>>> >
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> --
>>> >>>>>>>
>>> >>>>>>> Steve Murphy
>>> >>>>>>> ParseTree Corporation
>>> >>>>>>> 57 Lane 17
>>> >>>>>>> Cody, WY 82414
>>> >>>>>>> ✉  murf at parsetree dot com
>>> >>>>>>> ☎ 307-899-5535
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>>

