svn commit: r267858 - in head/sys/dev: virtio/balloon xen/balloon

Alfred Perlstein alfred at freebsd.org
Thu Jun 26 07:07:33 UTC 2014


On 6/25/14 8:30 AM, Attilio Rao wrote:
> On Wed, Jun 25, 2014 at 5:16 PM, Alfred Perlstein <alfred at freebsd.org> wrote:
>> On 6/25/14 5:41 AM, Attilio Rao wrote:
>>> On Wed, Jun 25, 2014 at 2:09 PM, Gleb Smirnoff <glebius at freebsd.org>
>>> wrote:
>>>> On Wed, Jun 25, 2014 at 01:58:29PM +0200, Attilio Rao wrote:
>>>> A> > Log:
>>>> A> >   xen/virtio: fix balloon drivers to not mark pages as WIRED
>>>> A> >
>>>> A> >   Prevent the Xen and VirtIO balloon drivers from marking pages as
>>>> A> >   wired. This prevents them from increasing the system wired page
>>>> A> >   count, which can lead to mlock failing because of hitting the
>>>> A> >   limit in vm.max_wired.
>>>> A>
>>>> A> This change is conceptually wrong.
>>>> A> The pages the balloon is allocating are unmanaged, and they should
>>>> A> be wired by definition. Alan and I are considering enforcing this
>>>> A> (mandatory wired pages for unmanaged page allocations) directly in
>>>> A> the KPI. In practice this just seems like an artifact to deal with
>>>> A> the scarce wired memory limit. I suggest that for the Xen case the
>>>> A> limit gets bumped rather than relying on this type of hack.
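
For reference, the change boils down to something like the following
sketch (not the literal r267858 diff; it assumes the balloon drivers
allocate their pages through the vm_page_alloc(9) KPI, and the exact
flag mix is illustrative):

	/*
	 * Before: balloon pages were wired at allocation time and so
	 * counted against vm.max_wired.
	 */
	m = vm_page_alloc(NULL, 0,
	    VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_NODUMP |
	    VM_ALLOC_WIRED);

	/*
	 * After: the pages are still unmanaged, but no longer marked
	 * wired, so the global wired count (and thus mlock(2) headroom)
	 * is unaffected.
	 */
	m = vm_page_alloc(NULL, 0,
	    VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_NODUMP);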
>>>>
>>>> A proper limit would be to count pages wired by userland via mlock(2)
>>>> and enforce the limit only on those pages. Pages wired by the kernel
>>>> should be either unlimited or controlled by a separate limit.
>>> FWIW, I mostly agree with this. I think that the kernel and userland
>>> limits should be split apart. But for the time being, raising the
>>> limit is better.
>>>
>>> Attilio
>>>
>>>
>> Can you explain?  I would think that if you were designing some kind of
>> embedded device you would want to know exactly how many locked pages
>> there are overall, not just in userland.
>>
>> Meaning you would not want to overcommit and end up with too many locked
>> pages from kernel+user combined.
> Well, assuming you track them independently, I don't think it is
> going to be problematic to aggregate them, is it?
I am not sure as I am not as strong in this area as you are.
>
> As far as I understand it, right now we have RLIMIT_MEMLOCK to limit
> the per-process amount of wired memory, and finally max_wired as a
> globally accounted wired memory limit.
>
> I think the idea now is that RLIMIT_MEMLOCK is enough to correctly
> control all the front-end checks coming from untrusted sources
> (userland, non-privileged syscalls like mlock(), mmap(), etc.).
> Possibly that's not always the case, and I think the hypervisor
> is a fair example of this.
>
> Having "more granular" accountability, means that rather than having a
> global limit (or, rather, along with it) we can grow a per-process
> limit to control kernel-allocated wired memory.
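
For concreteness, today those two knobs look like this from userland
(a sketch; error handling omitted):

	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/resource.h>
	#include <sys/sysctl.h>
	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		struct rlimit rl;
		int max_wired;
		size_t len = sizeof(max_wired);

		/* Per-process cap on wired memory (bytes); governs mlock(2). */
		getrlimit(RLIMIT_MEMLOCK, &rl);

		/* Global cap on wired memory (pages). */
		sysctlbyname("vm.max_wired", &max_wired, &len, NULL, 0);

		printf("per-process: %ju bytes, global: %d pages\n",
		    (uintmax_t)rl.rlim_cur, max_wired);
		return (0);
	}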
>
>> Perhaps that needs an API as well?
> I don't have anything in mind yet. My initial point was more about
> trying to get better semantics for a paradigm that is at least
> dangerous.
>
> Attilio
>
>
My concern is a group of daemons that provide system services playing 
nicely with each other and with the system as a whole.

To make the point concrete, let's say you have a concert of userspace 
daemons, importantd(8) and imperatived(8), running on a system.

Both importantd(8) and imperatived(8) need pages wired for dealing with 
important timing/throughput issues.  The kernel obviously needs such 
pages as well.
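
Each of them would do something like the following to pin its working
set (a sketch using only stock interfaces; the daemons themselves are
of course made up):

	#include <sys/mman.h>
	#include <err.h>

	/*
	 * Wire all current and future mappings so page faults cannot
	 * stall the timing-critical paths.  This fails with ENOMEM once
	 * the per-process or global wired limits are exceeded.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
		err(1, "mlockall");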

importantd(8) and imperatived(8) do not want to blow up the system by 
requesting more than a fixed amount; otherwise bad things will happen, 
or perhaps they will deadlock against each other.

A global count seems (to me) to make sense at this point, even if the 
kernel ignores it, as a way for everything to act in concert.
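
Concretely, with the global count in place a cooperating daemon can at
least do a (racy, but useful) headroom check before wiring more; the
sysctl names below are the stock FreeBSD ones:

	#include <sys/types.h>
	#include <sys/sysctl.h>

	/*
	 * Returns nonzero if at least 'need' more pages can be wired
	 * before hitting vm.max_wired.  Inherently racy, but enough for
	 * cooperating daemons to back off gracefully.
	 */
	static int
	wired_headroom_ok(u_int need)
	{
		u_int wired;
		int max_wired;
		size_t len;

		len = sizeof(wired);
		sysctlbyname("vm.stats.vm.v_wire_count", &wired, &len,
		    NULL, 0);
		len = sizeof(max_wired);
		sysctlbyname("vm.max_wired", &max_wired, &len, NULL, 0);
		return (wired + need <= (u_int)max_wired);
	}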

Is there a way for the kernel, importantd(8), and imperatived(8) to 
"play nice together", meaning they can take each other's wired counts 
into account, if we get rid of the global count?  My feeling is "no": 
they will then need another rendezvous point to do global accounting 
if we retire this facility.
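
To illustrate what that rendezvous would have to look like, here is a
hypothetical shared budget; everything below, including the shm name,
is invented for illustration:

	#include <sys/mman.h>
	#include <fcntl.h>
	#include <stdatomic.h>
	#include <unistd.h>

	#define BUDGET_SHM	"/wired-budget"	/* hypothetical name */

	static _Atomic long *budget;	/* remaining wired-page budget */

	/* Map the shared counter; an agreed-upon first daemon seeds it. */
	static int
	budget_attach(void)
	{
		int fd;

		fd = shm_open(BUDGET_SHM, O_RDWR | O_CREAT, 0600);
		if (fd == -1 || ftruncate(fd, sizeof(*budget)) == -1)
			return (-1);
		budget = mmap(NULL, sizeof(*budget),
		    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		close(fd);
		return (budget == MAP_FAILED ? -1 : 0);
	}

	/* Debit 'pages' from the budget before mlock()ing them. */
	static int
	budget_take(long pages)
	{
		long cur = atomic_load(budget);

		while (cur >= pages)
			if (atomic_compare_exchange_weak(budget, &cur,
			    cur - pages))
				return (1);	/* reserved */
		return (0);			/* over budget; back off */
	}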

I'm likely wrong, but wanted to bring this up as a concern.

-Alfred



