 * |
 * | In addition to various attempts at advisory caution, clock()
 * | will wake up the thread that is ordinarily parked in sched().
 * | This routine is responsible for the heavy-handed swapping out
 * v of entire processes in an attempt to arrest the slide of free
 * | memory. See comments in sched.c for more details.
 * |
 * +----- minfree & throttlefree (3/4 of desfree, 0.59% of physmem, min. 6MB)
 * |
 * | These two separate tunables have, by default, the same value.
 * v Various parts of the kernel use minfree to signal the need for
 * | more aggressive reclamation of memory, and sched() becomes more
 * | aggressive about swapping out processes.
 * |
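 * | In rough terms (a sketch of the default relationship described
 * | above, not the actual initialization code; btop() converts a
 * | byte count to pages):
 * |
 * |     minfree = MAX(desfree - (desfree / 4), btop(6 * 1024 * 1024));
 * |     throttlefree = minfree;
 * |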
 * | If free memory falls below throttlefree, page_create_va() will
 * | use page_create_throttle() to begin holding most requests for
 * | new pages while pageout and reaping free up memory. Sleeping
 * v allocations (e.g., KM_SLEEP) are held here while we wait for
 * | more memory. Non-sleeping allocations are generally allowed to
 * | proceed, unless their priority is explicitly lowered with
 * | KM_NORMALPRI (note that KM_NOSLEEP_LAZY == (KM_NOSLEEP | KM_NORMALPRI)).
 * |
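 * | For illustration (a sketch, not code taken from any particular
 * | driver), a caller whose allocation is optional and who would
 * | rather back off than add pressure while memory is scarce might
 * | use the combined flag and simply retreat on NULL:
 * |
 * |     buf = kmem_zalloc(sz, KM_NOSLEEP_LAZY);
 * |     if (buf == NULL)
 * |             return (ENOMEM);
 * |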
 * +------- pageout_reserve (3/4 of throttlefree, 0.44% of physmem, min. 4MB)
 * |
 * | When we hit throttlefree, the situation is already dire. The
 * v system is generally paging out memory and swapping out entire
 * | processes in order to free up memory for continued operation.
 * |
 * | Unfortunately, evicting memory to disk generally requires short-
 * | term use of additional memory; e.g., allocation of buffers for
 * | storage drivers, updating maps of free and used blocks, etc.
 * | As such, pageout_reserve is the number of pages that we keep in
 * | special reserve for use by pageout() and sched(), and by any
 * v other parts of the kernel that must keep working for those to
 * | make forward progress, such as the ZFS I/O pipeline.
 * |
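 * | As a sketch (the cache name is hypothetical; this is not actual
 * | pageout or ZFS code), an allocation on one of those critical
 * | paths is tagged so that it may dip into this reserve, using the
 * | KM_PUSHPAGE flag described in the next paragraph:
 * |
 * |     bp = kmem_cache_alloc(io_cache, KM_PUSHPAGE);
 * |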
 * | When we are below pageout_reserve, we fail or hold any allocation
 * | that has not explicitly requested access to the reserve pool.
 * | Access to the reserve is generally granted via the KM_PUSHPAGE
 * | flag, or by marking a thread T_PUSHPAGE such that all allocations
 * | can implicitly tap the reserve. For more details, see the
|