Difference between p_vmspace quota between stable/11 and stable/10
Shrikanth Kamath
shrikanth07 at gmail.com
Thu May 25 08:13:41 UTC 2017
Thanks for the reply, Konstantin. I captured procstat -v snapshots of
the forked process on both stable/10 and stable/11. I am still trying
to figure out how to interpret these mappings.
In stable/11 the first few lines of procstat -v show:

  PID      START        END PRT  RES PRES REF SHD FLAG TP PATH
19933  0x8048000  0x877e000 r-x  933 1003   2   1 CN-- vn /packages/mnt/junos-platform/sbin/dcd
19933  0x877e000  0x87f2000 rw-   70    0   1   0 C--- vn /packages/mnt/junos-platform/sbin/dcd
19933  0x87f2000  0x8a73000 rw-   59   59   1   0 C--- df
19933 0xc8797000 0xc87a1000 rw-   10   10   1   0 CN-- df
The same output for stable/10 shows:
  PID      START        END PRT    RES   PRES REF SHD FLAG TP PATH
43678  0x8048000  0x8779000 r-x    943   1014   2   1 CN-- vn /packages/mnt/junos-platform/sbin/dcd
43678  0x8779000  0x87ed000 rw-     70      0   1   0 C--- vn /packages/mnt/junos-platform/sbin/dcd
43678  0x87ed000 0x2cc00000 rw- 145872 145872   1   0 C-S- df
43678 0xc8792000 0xc879c000 rw-     10     10   1   0 C--- df
The third entry in the two cases shows a stark difference. Does this
indicate that the space set up on stable/11 is much smaller than on
stable/10?
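Working out the sizes of those third entries from the START/END columns,
in case I am reading the maps right:

stable/10: 0x2cc00000 - 0x87ed000 = 0x24413000 bytes, roughly 580 MB of anonymous (df) space
stable/11: 0x8a73000 - 0x87f2000 = 0x281000 bytes, roughly 2.5 MB

So, if that interpretation is correct, the large coalesced anonymous region
seen on stable/10 has no counterpart in these first few entries of the
stable/11 map.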
--
Shrikanth R K
From: Konstantin Belousov <kostikbel at gmail.com>
To: Shrikanth Kamath <shrikanth07 at gmail.com>
Cc: freebsd-hackers at freebsd.org
Subject: Re: Difference between p_vmspace quota between stable/11 and stable/10
Message-ID: <20170524090713.GG1622 at kib.kiev.ua>
Content-Type: text/plain; charset=us-ascii
On Wed, May 24, 2017 at 01:00:51AM -0700, Shrikanth Kamath wrote:
> I have an application (32-bit user space running in compat mode on a
> 64-bit x86 system) which does a fork, and the child process performs a
> series of mmap calls as part of a scaling test. I am currently
> debugging an issue where this application hits ENOMEM when run on a
> stable/11 based setup and subsequent mmap and/or malloc calls fail,
> whereas on stable/10 the issue is not seen. I probed the vm_map_find
> function using DTrace when "execname" was my application in question,
> and got these readings:
>
> fbt:kernel:vm_map_find:entry
> /self->trace == 1/ /* enabled only during sys_mmap calls of this application */
> {
>         @bytes[args[4]] = sum(args[4]);
>         printf("request length [%x]", args[4]);
> }
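(For reference, and quoting the declaration from memory, so it may not match
your tree exactly: in stable/11 vm_map_find() is declared in sys/vm/vm_map.h
roughly as below, which is why args[4] in the probe is the requested length.)

    int vm_map_find(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
        vm_offset_t *addr,          /* in/out: hint in, chosen address out */
        vm_size_t length,           /* args[4] in the DTrace probe above */
        vm_offset_t max_addr, int find_space, vm_prot_t prot, vm_prot_t max,
        int cow);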
>
> For stable/10 --> a total of 124 requests (each with a requested length
> of 0x500000), with the test successful:
> 124 * 0x500000 (5 MB) ~ 620 MB
>
> For stable/11 --> a total of 109 mmap requests
> (0x500000/0x200000/0x3ff000 are the different vm_size_t length
> arguments seen in vm_map_find). The test fails after 386 MB has been
> approved:
> 24 * 0x500000 (5 MB) ~ 120 MB
> 82 * 0x200000 (2 MB) ~ 164 MB
> 3 * 0x3ff000 (~4 MB) ~ 12 MB
>
>
> The parent process's rlimits are:
>
> # cat /proc/5058/rlimit
>
> cpu -1 -1
> fsize -1 -1
> data 3221225472 3221225472
> stack 16777216 16777216
> core -1 -1
> rss 67108864 33265819648
> memlock 67108864 33265819648
> nproc 512 512
> nofile 1024 1024
> sbsize -1 -1
> vmem -1 -1
> npts -1 -1
> swap -1 -1
> kqueues -1 -1
> umtx -1 -1
>
> The requests started failing on stable/11 with just 386 MB approved,
> versus stable/10, which successfully approved ~620 MB.
>
> My stable/11 is from its early days and is at GRN 302410 (probably 10 months old).
> Any pointers or tips on what to probe further would be very helpful. Is
> there a limit breach that I should probe further, perhaps the limits set
> up when a process is forked?
> Should I probe the p->vmspace initialization?
I doubt that limits are relevant for your issue. Look at the process address
map at the moment when the request failed; I suspect that it is fragmented.
Use procstat -v <pid> to examine the address space. You may spawn the
tool from your program when mmap(2) fails.
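A minimal sketch of that last suggestion, in case it helps (hypothetical code,
not from the thread): check each mmap(2) call for MAP_FAILED and run
procstat -v against the current pid before bailing out, so the map is captured
while the failing layout is still in place.

    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical helper: dump our own address space when a mapping fails. */
    static void *
    map_or_dump(size_t len)
    {
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED) {
                    char cmd[64];

                    perror("mmap");
                    /* Capture the map before unwinding anything. */
                    snprintf(cmd, sizeof(cmd), "procstat -v %d", (int)getpid());
                    system(cmd);
            }
            return (p);
    }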