path: root/vm
2021-01-04 vm_map: Avoid linking gaps for vm_copy_t (Samuel Thibault)
This does not make sense, and produces incorrect results (since vme_end is 0, etc.). * vm/vm_map.h (_vm_map_clip_start, _vm_map_clip_end): Add link_gap parameter. * vm/vm_map.c (_vm_map_entry_link): Add link_gap parameter, do not call vm_map_gap_insert if it is 0. (vm_map_entry_link): Set link_gap to 1 in _vm_map_entry_link call. (_vm_map_clip_start): Add link_gap parameter, pass it to _vm_map_entry_link call. (vm_map_clip_start): Set link_gap to 1 in _vm_map_clip_start call. (vm_map_copy_entry_link): Set link_gap to 0 in _vm_map_entry_link call. (vm_map_copy_clip_start): Set link_gap to 0 in _vm_map_clip_start call. (_vm_map_entry_unlink): Add unlink_gap parameter, do not call vm_map_gap_remove if it is 0. (vm_map_entry_unlink): Set unlink_gap to 1 in _vm_map_entry_unlink call. (_vm_map_clip_end): Add link_gap parameter, pass it to _vm_map_entry_link call. (vm_map_clip_end): Set link_gap to 1 in _vm_map_clip_end call. (vm_map_copy_entry_unlink): Set unlink_gap to 0 in _vm_map_entry_unlink call. (vm_map_copy_clip_end): Set link_gap to 0 in _vm_map_clip_end call. * vm/vm_kern.c (projected_buffer_deallocate): Set link_gap to 1 in _vm_map_clip_start and _vm_map_clip_end calls.
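The shape of the change is easier to see in a minimal sketch (simplified types and bodies, not the committed code): the copy-list variants pass link_gap = 0 so that no gap bookkeeping is done on entries that have no real address range.

    /* Simplified sketch of the link_gap idea; the real helpers live in
     * vm/vm_map.c and use GNU Mach's actual types. */
    static void
    _vm_map_entry_link(struct vm_map_header *hdr, struct vm_map_entry *after,
                       struct vm_map_entry *entry, int link_gap)
    {
            /* ... splice `entry' into the entry list after `after' ... */
            if (link_gap)
                    vm_map_gap_insert(hdr, entry);  /* real maps only; copy
                                                       lists have no gaps */
    }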
2021-01-04 vm_map: print warning when max_size gets smaller than size (Samuel Thibault)
* vm/vm_map.c (vm_map_find_entry_anywhere): Print warning when max_size gets smaller than size.
2020-12-20 vm_map: Fix taking into account high bits in mask (Samuel Thibault)
glibc's sysdeps/mach/hurd/dl-sysdep.c has been wanting to use this for decades. * include/string.h (ffs): New declaration. * vm/vm_map.c: Include <string.h>. (vm_map_find_entry_anywhere): Separate out high bits from mask, to compute the maximum offset instead of map->max_offset. * doc/mach.texi (vm_map): Update documentation accordingly.
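The effect of such a combined mask can be shown with a small stand-alone program (an assumed simplification of the in-kernel logic, not the committed code): the low run of set bits requests alignment, while any higher set bit caps the highest address the kernel may return.

    #include <stdio.h>

    int main(void)
    {
            /* 4 KiB alignment, and keep the allocation below 2 GiB */
            unsigned long mask = 0x80000fffUL;
            unsigned long low  = mask & -mask;              /* lowest set bit */
            unsigned long high = mask & (mask + low);       /* bits above the low run */
            unsigned long align_mask = mask & ~high;        /* alignment request */
            unsigned long max_offset = high ? (high & -high) : ~0UL;

            printf("align to %#lx, stay below %#lx\n",
                   align_mask + 1, max_offset);
            return 0;
    }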
2020-08-12 vm_kern: Add kmem_alloc_aligned_table (Almudena Garcia)
This function allows mapping a table in a memory page, using its physical address, aligning the start of the page with the start of the table. * vm/vm_kern.c (kmem_alloc_aligned_table): New function. Returns a reference to the virtual address of the table. * vm/vm_kern.h (kmem_alloc_aligned_table): New prototype.
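A rough sketch of the idea (hypothetical names, not the committed code): map the physical page containing the table, then return a pointer carrying the table's offset within that page.

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096UL

    /* map_page() stands in for whatever maps a physical page into kernel
     * space; the point here is only the offset arithmetic. */
    static void *
    aligned_table_ptr(uintptr_t table_pa, void *(*map_page)(uintptr_t pa))
    {
            uintptr_t page_pa = table_pa & ~(PAGE_SIZE - 1);  /* page base */
            uintptr_t offset  = table_pa &  (PAGE_SIZE - 1);  /* table offset */
            char *va = map_page(page_pa);

            return va != NULL ? va + offset : NULL;
    }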
2020-07-09 Add vm_allocate_contiguous RPC (Samuel Thibault)
This allows privileged userland drivers to allocate buffers, e.g. for DMA, which thus need to be physically contiguous, and to get their physical address. Initial work by Zheng Da, reworked by Richard Braun, Damien Zammit, and myself. * doc/mach.texi (vm_allocate_contiguous): New RPC. * i386/include/mach/i386/machine_types.defs (rpc_phys_addr_t): New type. * i386/include/mach/i386/vm_types.h [!MACH_KERNEL] (phys_addr_t): Set type to 64 bits. (rpc_phys_addr_t): New type, always 64 bits. * include/mach/gnumach.defs (vm_allocate_contiguous): New RPC. * vm/vm_user.c (vm_allocate_contiguous): New function.
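A hedged usage sketch from a privileged driver's point of view; the argument list below is an assumption, and the authoritative signature is the one in include/mach/gnumach.defs.

    #include <mach.h>
    #include <stdio.h>

    /* Assumed arguments: privileged host port, target task, out virtual
     * and physical addresses, size. */
    kern_return_t
    grab_dma_buffer(mach_port_t host_priv, vm_size_t size)
    {
            vm_address_t vaddr;
            rpc_phys_addr_t paddr;
            kern_return_t kr;

            kr = vm_allocate_contiguous(host_priv, mach_task_self(),
                                        &vaddr, &paddr, size);
            if (kr == KERN_SUCCESS)
                    printf("DMA buffer at virt %#lx / phys %#llx\n",
                           (unsigned long)vaddr, (unsigned long long)paddr);
            return kr;
    }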
2020-06-20 vm_map: Allow passing the name of a memory object (Samuel Thibault)
as is returned by vm_info. * vm/vm_user.c (vm_map): Before trying to vm_object_enter, try to simply look up the name.
2020-05-01 kmem_alloc_wired: factorize with kmem_valloc (Samuel Thibault)
* vm/vm_kern.c (kmem_alloc_wired): Factorize with kmem_valloc.
2020-05-01 Add kmem_valloc (Samuel Thibault)
Functions like vremap need to allocate some virtual addressing space before making their own mapping. kmem_alloc_wired can be used for that but that wastes memory. * vm/vm_kern.c (kmem_valloc): New function. * vm/vm_kern.h (kmem_valloc): New prototype. * linux/dev/glue/kmem.c (vremap): Call kmem_valloc instead of kmem_alloc_wired. Also check that `offset' is aligned on a page.
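The reserve-then-map pattern is easier to see in user-space terms. The stand-alone analogy below uses POSIX mmap (not kernel code): the PROT_NONE reservation plays the role of kmem_valloc, committing address space but no memory.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 1UL << 20;

            /* "kmem_valloc": reserve address space, commit nothing */
            void *resv = mmap(NULL, len, PROT_NONE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (resv == MAP_FAILED)
                    return 1;

            /* "vremap": install the real mapping inside the reservation */
            void *real = mmap(resv, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

            printf("reserved %p, mapped %p\n", resv, real);
            return real == MAP_FAILED;
    }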
2020-03-29 64bit: Fix vm_size_t size (Samuel Thibault)
It needs to be able to hold > 4G size. * i386/include/mach/i386/vm_types.h (vm_size_t): Set type to unsigned long. * vm/vm_user.c (vm_read, vm_write): Fix type according to RPC. * i386/i386at/model_dep.c (c_boot_entry): Fix format. * device/dev_pager.c (device_pager_data_request): Fix format.
2020-03-15 vm_object_copy_call: Make sure vm_object_enter succeeds (Samuel Thibault)
Suggested by guy fleury iteriteka <gfleury@disroot.org> * vm/vm_object.c (vm_object_copy_call): Make sure vm_object_enter call succeeds.
2019-08-31 Satisfy '-Werror=parentheses'. (guy fleury iteriteka)
* vm/vm_map.c (vm_map_msync): Explicitly group the first condition.
2019-08-31 Fix pointer comparison of different types. (guy fleury iteriteka)
* vm/vm_map.c (vm_map_fork): Use VM_MAP_NULL instead of PMAP_NULL when comparing with new_map.
2019-08-30 Fix returning KERN_INVALID_ARGUMENT when the map is NULL. (guy fleury iteriteka)
* vm/vm_map.c (vm_map_msync): Add missing return keyword.
2019-08-11 Fix allocation test (Samuel Thibault)
* vm/vm_map.c (vm_map_fork): Check for `new_map` being non-NULL, and not for `new_pmap` a second time.
2018-11-03 Add vm_object_sync support (Samuel Thibault)
* include/mach/vm_sync.h: New file. * include/mach/mach_types.h: Include <mach/vm_sync.h> * Makefrag.am (include_mach_HEADERS): Add include/mach/vm_sync.h. * include/mach/mach_types.defs (vm_sync_t): Add type. * include/mach/gnumach.defs (vm_object_sync, vm_msync): Add RPCs. * vm/vm_map.h: Include <mach/vm_sync.h>. (vm_map_msync): New declaration. * vm/vm_map.c (vm_map_msync): New function. * vm/vm_user.c: Include <mach/vm_sync.h> and <kern/mach.server.h>. (vm_object_sync, vm_msync): New functions.
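A hedged usage sketch of the new RPC (the signature and flag name are assumptions based on include/mach/gnumach.defs and include/mach/vm_sync.h): pushing a mapped range back to its pager, much like POSIX msync.

    #include <mach.h>

    /* Assumed: vm_msync(task, address, size, sync_flags), with flags
     * such as VM_SYNC_SYNCHRONOUS from <mach/vm_sync.h>. */
    kern_return_t
    flush_mapping(vm_address_t addr, vm_size_t size)
    {
            return vm_msync(mach_task_self(), addr, size,
                            VM_SYNC_SYNCHRONOUS);
    }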
2018-04-22 vm_map: Fix bugs on huge mask parameters (Samuel Thibault)
* vm/vm_map.c (vm_map_find_entry_anywhere): Also check that (min + mask) & ~mask remains bigger than min.
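The failure mode being guarded against is plain unsigned wraparound; a stand-alone illustration (simplified, not the kernel code):

    #include <stdio.h>

    int main(void)
    {
            unsigned long min  = ~0UL - 0x1000;     /* near the top of the space */
            unsigned long mask = 0xffffffUL;        /* 16 MiB alignment request */
            unsigned long start = (min + mask) & ~mask;     /* wraps around */

            if (start < min)
                    printf("rejected: %#lx is below the minimum %#lx\n",
                           start, min);
            return 0;
    }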
2018-01-28 Fix warning (Samuel Thibault)
* vm/vm_map.c (vm_map_copyout): Fix panic format.
2017-10-23 Drop the register qualifier. (Justus Winter)
* i386/intel/pmap.c: Drop the register qualifier. * ipc/ipc_kmsg.h: Likewise. * kern/bootstrap.c: Likewise. * kern/profile.c: Likewise. * kern/thread.c: Likewise. * vm/vm_object.c: Likewise.
2017-09-21 vm: Remove old memory manager remnants. (Justus Winter)
* vm/vm_object.c (vm_object_accept_old_init_protocol): Remove. (vm_object_enter): Adapt.
2017-08-14 vm: Improve error handling. (Justus Winter)
* vm/vm_map.c (vm_map_create): Gracefully handle resource exhaustion. (vm_map_fork): Likewise at the callsite.
2017-08-14 Fix typo. (Justus Winter)
2017-08-12 vm: Mute paging error message. (Justus Winter)
* vm/vm_fault.c (vm_fault_page): Mute paging error message if the object's pager is NULL. This happens when a pager is destroyed, e.g. at system shutdown time when the root filesystem terminates.
2017-03-04 Rewrite gsync so that it works with remote tasks v2 (Agustina Arzille)
2016-12-27 VM: really fix pageout of external objects backed by the default pager (Richard Braun)
Commit eb07428ffb0009085fcd01dd1b79d9953af8e0ad does fix pageout of external objects backed by the default pager, but the way it's done has a vicious side effect: because they're considered external, the pageout daemon can keep evicting them even though the external pagers haven't released them, unlike internal pages, which must all be released before the pageout daemon can make progress. This can lead to a situation where too many pages become wired, the default pager cannot allocate memory to process new requests, and the pageout daemon cannot recycle any more pages, causing a panic. This change makes the pageout daemon use the same strategy for both internal pages and external pages sent to the default pager: use the laundry bit and wait for all laundry pages to be released, thereby completely synchronizing the pageout daemon and the default pager. * vm/vm_page.c (vm_page_can_move): Allow external laundry pages to be moved. (vm_page_seg_evict): Don't alter the `external_laundry' bit, merely disable double paging for external pages sent to the default pager. * vm/vm_pageout.c: Include vm/memory_object.h. (vm_pageout_setup): Don't check whether the `external_laundry' bit is set, but handle external pages sent to the default pager the same as internal pages.
2016-12-24 VM: add the vm_wire_all call (Richard Braun)
This call maps the POSIX mlockall and munlockall calls. * Makefrag.am (include_mach_HEADERS): Add include/mach/vm_wire.h. * include/mach/gnumach.defs (vm_wire_t): New type. (vm_wire_all): New routine. * include/mach/mach_types.h: Include mach/vm_wire.h. * vm/vm_map.c: Likewise. (vm_map_enter): Automatically wire new entries if requested. (vm_map_copyout): Likewise. (vm_map_pageable_all): New function. * vm/vm_map.h: Include mach/vm_wire.h. (struct vm_map): Update description of member `wiring_required'. (vm_map_pageable_all): New function. * vm/vm_user.c (vm_wire_all): New function.
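A hedged sketch of how a libc could back mlockall()/munlockall() with the new routine; the flag names are assumptions taken from include/mach/vm_wire.h.

    #include <mach.h>

    /* Assumed flags: VM_WIRE_CURRENT, VM_WIRE_FUTURE, VM_WIRE_NONE. */
    kern_return_t
    lock_all_memory(mach_port_t host, int future_too)
    {
            vm_wire_t flags = VM_WIRE_CURRENT;

            if (future_too)
                    flags |= VM_WIRE_FUTURE;        /* wire later mappings too */
            return vm_wire_all(host, mach_task_self(), flags);
    }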
2016-12-24 VM: rework map entry wiring (Richard Braun)
First, user wiring is removed, simply because it has never been used. Second, make the VM system track wiring requests to better handle protection. This change makes it possible to wire entries with VM_PROT_NONE protection without actually reserving any page for them until protection changes, and even make those pages pageable if protection is downgraded to VM_PROT_NONE. * ddb/db_ext_symtab.c: Update call to vm_map_pageable. * i386/i386/user_ldt.c: Likewise. * ipc/mach_port.c: Likewise. * vm/vm_debug.c (mach_vm_region_info): Update values returned as appropriate. * vm/vm_map.c (vm_map_entry_copy): Update operation as appropriate. (vm_map_setup): Update member names as appropriate. (vm_map_find_entry): Update to account for map member variable changes. (vm_map_enter): Likewise. (vm_map_entry_inc_wired): New function. (vm_map_entry_reset_wired): Likewise. (vm_map_pageable_scan): Likewise. (vm_map_protect): Update wired access, call vm_map_pageable_scan. (vm_map_pageable_common): Rename to ... (vm_map_pageable): ... and rewrite to use vm_map_pageable_scan. (vm_map_entry_delete): Fix unwiring. (vm_map_copy_overwrite): Replace inline code with a call to vm_map_entry_reset_wired. (vm_map_copyin_page_list): Likewise. (vm_map_print): Likewise. Also print map size and wired size. (vm_map_copyout_page_list): Update to account for map member variable changes. * vm/vm_map.h (struct vm_map_entry): Remove `user_wired_count' member, add `wired_access' member. (struct vm_map): Rename `user_wired' member to `size_wired'. (vm_map_pageable_common): Remove function. (vm_map_pageable_user): Remove macro. (vm_map_pageable): Replace macro with function declaration. * vm/vm_user.c (vm_wire): Update call to vm_map_pageable.
2016-12-24 VM: fix pageout of external objects backed by the default pager (Richard Braun)
Double paging on such objects causes deadlocks. * vm/vm_page.c: Include <vm/memory_object.h>. (vm_page_seg_evict): Rename laundry to double_paging to increase clarity. Set the `external_laundry' bit when evicting a page from an external object backed by the default pager. * vm/vm_pageout.c (vm_pageout_setup): Wire page if the `external_laundry' bit is set.
2016-12-24 VM: fix pageability check (Richard Braun)
Unlike laundry pages sent to the default pager, pages marked with the `external_laundry' bit remain in the page queues and must be filtered out by the pageability check. * vm/vm_page.c (vm_page_can_move): Check the `external_laundry' bit.
2016-12-21 VM: fix pageout timeout (Richard Braun)
The interval parameter to the thread_set_timeout function is actually in ticks. * vm/vm_pageout.c (vm_pageout): Fix call to thread_set_timeout.
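The unit fix boils down to scaling by the tick rate before calling the function; in sketch form, with hz as the assumed ticks-per-second name:

    /* Milliseconds to clock ticks, rounding up so a short but nonzero
     * timeout never becomes zero ticks. */
    static inline unsigned int
    ms_to_ticks(unsigned int ms, unsigned int hz)
    {
            return (ms * hz + 999) / 1000;
    }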
2016-12-11 VM: make vm_wire more POSIX-friendly (Richard Braun)
* doc/mach.texi: Update return codes. * vm/vm_map.c (vm_map_pageable_common): Return KERN_NO_SPACE instead of KERN_FAILURE if some of the specified address range does not correspond to mapped pages. Skip unwired entries instead of failing when unwiring.
0 min. VM: fix pageout throttling to external pagers (Richard Braun)
Since the VM system has been tracking whether pages belong to internal or external objects, pageout throttling to external pagers has simply not been working. The reason is that, on pageout, requests for external pages are correctly tracked, but on page release (which is used to acknowledge the request), external pages are not marked external any more. This is because the external bit tracks whether a page belongs to an external object, and all pages, including external ones, are moved to an internal object during pageout. To solve this issue, a new "external_laundry" bit is added. It has the same purpose as the laundry bit, but for external pagers. * vm/vm_page.c (vm_page_seg_min_page_available): Function unused, remove. (vm_page_seg_evict): Use vm_page_external_laundry_count instead of vm_page_external_pagedout. Add an assertion about double paging. (vm_page_check_usable): Use vm_page_external_laundry_count instead of vm_page_external_pagedout. (vm_page_evict): Likewise. * vm/vm_page.h (struct vm_page): New `external_laundry' member. (vm_page_external_pagedout): Rename to ... (vm_page_external_laundry_count): ... this. * vm/vm_pageout.c: Include kern/printf.h. (DEBUG): New macro. (VM_PAGEOUT_TIMEOUT): Likewise. (vm_pageout_setup): Use vm_page_external_laundry_count instead of vm_page_external_pagedout. Set `external_laundry' where appropriate. (vm_pageout): Use VM_PAGEOUT_TIMEOUT with thread_set_timeout. Add debugging code, commented out by default. * vm/vm_resident.c (vm_page_external_pagedout): Rename to ... (vm_page_external_laundry_count): ... this. (vm_page_init_template): Set `external_laundry' member to FALSE. (vm_page_release): Rename external parameter to external_laundry. Slightly change pageout resuming. (vm_page_free): Rename external variable to external_laundry.
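In sketch form (hypothetical, heavily simplified types), the accounting problem and its fix look like this: the mark set at pageout time must survive the page's move to an internal object, so the release path credits the external pager from the mark rather than from the page's current object.

    struct page {
            unsigned laundry:1;             /* sent to the default pager */
            unsigned external_laundry:1;    /* sent to an external pager */
    };

    static unsigned long vm_page_external_laundry_count;

    static void
    queue_for_external_pager(struct page *p)
    {
            p->external_laundry = 1;        /* sticks even after the page is
                                               moved to an internal object */
            vm_page_external_laundry_count++;
    }

    static void
    page_released(struct page *p)
    {
            if (p->external_laundry) {      /* acknowledge the pageout */
                    p->external_laundry = 0;
                    vm_page_external_laundry_count--;
            }
    }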
2016-11-30 VM: fix pageout on low memory (Richard Braun)
Instead of determining if memory is low, directly use the vm_page_alloc_paused variable, which is true when memory has reached a minimum threshold until it gets back above the high thresholds. This makes sure double paging is used when external pagers are unable to allocate memory. * vm/vm_page.c (vm_page_seg_evict): Rename low_memory to alloc_paused. (vm_page_evict_once): Remove low_memory and its computation. Blindly pass the new alloc_paused argument instead. (vm_page_evict): Pass the value of vm_page_alloc_paused to vm_page_evict_once.
2016-11-30 VM: fix eviction logic error (Richard Braun)
* vm/vm_page.c (vm_page_evict): Test both vm_page_external_pagedout and vm_page_laundry_count in order to determine there was "no pageout".
2016-11-30 VM: fix pageout stop condition (Richard Braun)
When checking whether to continue paging out or not, the pageout daemon only considers the high free page threshold of a segment. But if e.g. the default pager had to allocate reserved pages during a previous pageout cycle, it could have exhausted a segment (this is currently only seen with the DMA segment). In that case, the high threshold cannot be reached because the segment has currently no pageable page. This change makes the pageout daemon identify this condition and consider the segment as usable in order to make progress. The segment will simply be ignored on the allocation path for unprivileged threads, and if this happens with too many segments, the system will fail at allocation time. * vm/vm_page.c (vm_page_seg_usable): Report usable if the segment has no pageable page.
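The added condition, sketched with hypothetical types: a segment with nothing left to page out can never reach its high free threshold, so it must be reported usable to let the daemon finish its cycle.

    struct seg {
            unsigned long nr_active_pages, nr_inactive_pages;
            unsigned long nr_free_pages, high_free_pages;
    };

    static int
    seg_usable(const struct seg *s)
    {
            /* No pageable page: the high threshold is unreachable, so
               don't wait on this segment. */
            if (s->nr_active_pages + s->nr_inactive_pages == 0)
                    return 1;
            return s->nr_free_pages >= s->high_free_pages;
    }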
2016-11-04 vm: Print names of maps in the debugger. (Justus Winter)
* vm/vm_map.c (vm_map_print): Print name of the map.
2016-10-21 Gracefully handle pmap allocation failures. (Justus Winter)
* kern/task.c (task_create): Gracefully handle pmap allocation failures. * vm/vm_map.c (vm_map_fork): Likewise.
2016-10-21 vm: Print map names in case of failures. (Justus Winter)
* vm/vm_kern.c (kmem_alloc): Print map names in case of failures. (kmem_alloc_wired): Likewise. (kmem_alloc_aligned): Likewise. (kmem_alloc_pageable): Likewise.
2016-10-03 Remove deprecated external memory management interface. (Justus Winter)
* NEWS: Update. * device/dev_pager.c (device_pager_data_request): Prune unused branch. (device_pager_data_request_done): Remove function. (device_pager_data_write): Likewise. (device_pager_data_write_done): Likewise. (device_pager_copy): Use 'memory_object_ready'. * device/dev_pager.h (device_pager_data_write_done): Remove prototype. * device/device_pager.srv (memory_object_data_write): Remove macro. * doc/mach.texi: Update documentation. * include/mach/mach.defs (memory_object_data_provided): Drop RPC. (memory_object_set_attributes): Likewise. * include/mach/memory_object.defs: Update comments. (memory_object_data_write): Drop RPC. * include/mach/memory_object_default.defs: Update comments. * include/mach_debug/vm_info.h (VOI_STATE_USE_OLD_PAGEOUT): Drop macro. * vm/memory_object.c (memory_object_data_provided): Remove function. (memory_object_data_error): Simplify. (memory_object_set_attributes_common): Make static, remove unused parameters, simplify. (memory_object_change_attributes): Update callsite. (memory_object_set_attributes): Remove function. (memory_object_ready): Update callsite. * vm/vm_debug.c (mach_vm_object_info): Adapt to the changes. * vm/vm_object.c (vm_object_bootstrap): Likewise. * vm/vm_object.h (struct vm_object): Drop flag 'use_old_pageout'. * vm/vm_pageout.c: Update comments. (vm_pageout_page): Simplify.
2016-09-21 Enable high memory (Richard Braun)
* i386/i386at/biosmem.c (biosmem_setup): Load the HIGHMEM segment if present. (biosmem_free_usable): Report high memory as usable. * vm/vm_page.c (vm_page_boot_table_size, vm_page_table_size, vm_page_mem_size, vm_page_mem_free): Scan all segments. * vm/vm_resident.c (vm_page_grab): Describe allocation strategy with regard to the HIGHMEM segment.
2016-09-21 Rework pageout to handle multiple segments (Richard Braun)
As we're about to use a new HIGHMEM segment, potentially much larger than the existing DMA and DIRECTMAP ones, it's now compulsory to make the pageout daemon aware of those segments. And while we're at it, let's fix some of the defects that have been plaguing pageout forever, such as throttling, and pageout of internal versus external pages (this commit notably introduces a hardcoded policy in which as many external pages are selected before considering internal pages). * kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release. * vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>. (VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM, VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM, VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW, VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM, VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES, VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New macros. (struct vm_page_queue): New type. (struct vm_page_seg): Add new members `min_free_pages', `low_free_pages', `high_free_pages', `active_pages', `nr_active_pages', `high_active_pages', `inactive_pages', `nr_inactive_pages'. (vm_page_alloc_paused): New variable. (vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New functions. (vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout daemon as appropriate. (vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove, vm_page_queue_first, vm_page_seg_get, vm_page_seg_index, vm_page_seg_compute_pageout_thresholds): New functions. (vm_page_seg_init): Initialize the new segment members. (vm_page_seg_add_active_page, vm_page_seg_remove_active_page, vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page, vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page, vm_page_seg_pull_cache_page): New functions. (vm_page_seg_min_page_available, vm_page_seg_page_available, vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock, vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict, vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive, vm_page_lookup_seg, vm_page_check): New functions. (vm_page_alloc_pa): Handle allocation failure from VM privileged thread. (vm_page_info_all): Display additional segment properties. (vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate, vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments. (vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance, vm_page_balance_once, vm_page_balance, vm_page_evict_once): New functions. (VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros. (vm_page_evict, vm_page_refill_inactive): New functions. * vm/vm_page.h: Include <kern/list.h>. (struct vm_page): Remove member `pageq', reuse the `node' member instead, move the `listq' and `next' members above `vm_page_header'. (VM_PAGE_CHECK): Define as an alias to vm_page_check. (vm_page_check): New function declaration. (vm_page_queue_fictitious, vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target, vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved, vm_page_free_wanted): Remove extern declarations. (vm_page_external_pagedout): New extern declaration. (vm_page_release): Update declaration. (VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove. (VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros. (VM_PT_KERNEL): Update value. 
(vm_page_queues_remove, vm_page_balance, vm_page_evict, vm_page_refill_inactive): New function declarations. * vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN, VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX, VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN, VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL, VM_PAGEOUT_RESERVED_REALLY): Remove macros. (vm_pageout_reserved_internal, vm_pageout_reserved_really, vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait, vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max, vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock, vm_pageout_inactive_busy, vm_pageout_inactive_absent, vm_pageout_inactive_used, vm_pageout_inactive_clean, vm_pageout_inactive_dirty, vm_pageout_inactive_double, vm_pageout_inactive_cleaned_external): Remove variables. (vm_pageout_requested, vm_pageout_continue): New variables. (vm_pageout_setup): Wait for page allocation to succeed instead of falling back to flush, update double paging protocol with caller, add pageout throttling setup. (vm_pageout_scan): Rewrite to use the new vm_page balancing, eviction and inactive queue refill functions. (vm_pageout_scan_continue, vm_pageout_continue): Remove functions. (vm_pageout): Rewrite. (vm_pageout_start, vm_pageout_resume): New functions. * vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue): Remove function declarations. (vm_pageout_start, vm_pageout_resume): New function declarations. * vm/vm_resident.c: Include <kern/list.h>. (vm_page_queue_fictitious): Define as a struct list. (vm_page_free_wanted, vm_page_external_count, vm_page_free_avail, vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target, vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved): Remove variables. (vm_page_external_pagedout): New variable. (vm_page_bootstrap): Don't initialize removed variable, update initialization of vm_page_queue_fictitious. (vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate. (vm_page_remove): Likewise. (vm_page_grab_fictitious): Update to use list_xxx functions. (vm_page_release_fictitious): Likewise. (vm_page_grab): Remove pageout related code. (vm_page_release): Add `laundry' and `external' parameters for pageout throttling. (vm_page_grab_contig): Remove pageout related code. (vm_page_free_contig): Likewise. (vm_page_free): Remove pageout related code, update call to vm_page_release. (vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate): Move to vm/vm_page.c.
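The three per-segment thresholds introduced here (min, low, high) drive the daemon roughly as follows; a hypothetical classification sketch, with names echoing the new macros:

    struct seg_state {
            unsigned long free_pages;
            unsigned long min_free_pages;   /* below: pause allocations */
            unsigned long low_free_pages;   /* below: page out urgently */
            unsigned long high_free_pages;  /* above: pageout may stop */
    };

    static const char *
    seg_pressure(const struct seg_state *s)
    {
            if (s->free_pages <= s->min_free_pages)
                    return "critical: pause unprivileged allocations";
            if (s->free_pages <= s->low_free_pages)
                    return "low: wake the pageout daemon";
            if (s->free_pages >= s->high_free_pages)
                    return "high: pageout may stop";
            return "in between: keep balancing";
    }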
2016-09-21 Redefine what an external page is (Richard Braun)
Instead of a "page considered external", which apparently takes into account whether a page is dirty or not, redefine this property to reliably mean "is in an external object". This commit mostly deals with the impact of this change on the page allocation interface. * i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to vm_page_grab. * kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of vm_page_grab_contig. (kmem_pagefree_physmem): Use vm_page_release instead of vm_page_free_contig. * linux/dev/glue/block.c (alloc_buffer, device_read): Update call to vm_page_grab. * vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and vm_page_convert. * vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab. * vm/vm_page.h (struct vm_page): Remove `extcounted' member. (vm_page_external_limit, vm_page_external_count): Remove extern declarations. (vm_page_convert, vm_page_grab): Update declarations. (vm_page_release, vm_page_grab_phys_addr): New function declarations. * vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro. (VM_PAGE_EXTERNAL_TARGET): Likewise. (vm_page_external_target): Remove variable. (vm_pageout_scan): Remove specific handling of external pages. (vm_pageout): Don't set vm_page_external_limit and vm_page_external_target. * vm/vm_resident.c (vm_page_external_limit): Remove variable. (vm_page_insert, vm_page_replace, vm_page_remove): Update external page tracking. (vm_page_convert): Remove `external' parameter. (vm_page_grab): Likewise. Remove specific handling of external pages. (vm_page_grab_phys_addr): Update call to vm_page_grab. (vm_page_release): Remove `external' parameter and remove specific handling of external pages. (vm_page_wait): Remove specific handling of external pages. (vm_page_alloc): Update call to vm_page_grab. (vm_page_free): Update call to vm_page_release. * xen/block.c (device_read): Update call to vm_page_grab. * xen/net.c (device_write): Likewise.
2016-09-21 Replace vm_offset_t with phys_addr_t where appropriate (Richard Braun)
* i386/i386/phys.c (pmap_zero_page, pmap_copy_page, copy_to_phys, copy_from_phys, kvtophys): Use the phys_addr_t type for physical addresses. * i386/intel/pmap.c (pmap_map, pmap_map_bd, pmap_destroy, pmap_remove_range, pmap_page_protect, pmap_enter, pmap_extract, pmap_collect, phys_attribute_clear, phys_attribute_test, pmap_clear_modify, pmap_is_modified, pmap_clear_reference, pmap_is_referenced): Likewise. * i386/intel/pmap.h (pt_entry_t): Unconditionally define as a phys_addr_t. (pmap_zero_page, pmap_copy_page, kvtophys): Use the phys_addr_t type for physical addresses. * vm/pmap.h (pmap_enter, pmap_page_protect, pmap_clear_reference, pmap_is_referenced, pmap_clear_modify, pmap_is_modified, pmap_extract, pmap_map_bd): Likewise. * vm/vm_page.h (vm_page_fictitious_addr): Declare as a phys_addr_t. * vm/vm_resident.c (vm_page_fictitious_addr): Likewise. (vm_page_grab_phys_addr): Change return type to phys_addr_t.
2016-09-21 Remove phys_first_addr and phys_last_addr global variables (Richard Braun)
The old assumption that all physical memory is directly mapped in kernel space is about to go away. Those variables are directly linked to that assumption. * i386/i386/model_dep.h (phys_first_addr): Remove extern declaration. (phys_last_addr): Likewise. * i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT instead of phys_last_addr. (pmap_copy_page, copy_to_phys, copy_from_phys): Likewise. * i386/i386/trap.c (user_trap): Remove check against phys_last_addr. * i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set phys_last_addr. * i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if a physical address references physical memory. * i386/i386at/model_dep.c (phys_first_addr): Remove variable. (phys_last_addr): Likewise. (pmap_free_pages, pmap_valid_page): Remove functions. * i386/intel/pmap.c: Include i386at/biosmem.h. (pa_index): Turn into an alias for vm_page_table_index. (pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr as appropriate. (pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr and phys_last_addr to obtain the number of physical pages. (pmap_verify_free): Remove function. (valid_page): Turn this macro into an inline function and rewrite using vm_page_lookup_pa. (pmap_page_table_page_alloc): Build the pmap VM object using vm_page_table_size to determine its size. (pmap_remove_range, pmap_page_protect, phys_attribute_clear, phys_attribute_test): Turn page indexes into unsigned long integers. (pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or biosmem_directmap_end to determine if a physical address references physical memory. * i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of phys_last_addr to obtain the number of physical pages. * kern/startup.c (phys_first_addr): Remove extern declaration. (phys_last_addr): Likewise. * linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the appropriate segment selector instead of phys_last_addr to determine where high memory starts. * vm/pmap.h: Update requirements description. (pmap_free_pages, pmap_valid_page): Remove declarations. * vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size, vm_page_table_size, vm_page_table_index): New functions. * vm/vm_page.h (vm_page_seg_end, vm_page_table_size, vm_page_table_index): New function declarations. * vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define as unsigned long integers. (vm_page_bootstrap): Compute VP table size based on the page table size instead of the value returned by pmap_free_pages.
2016-09-20 VM: remove commented out code (Richard Braun)
The vm_page_direct_va, vm_page_direct_pa and vm_page_direct_ptr functions were imported along with the new vm_page module, but never actually used since the kernel already has phystokv and kvtophys functions.
2016-09-16 VM: improve pageout deadlock workaround (Richard Braun)
Commit 5dd4f67522ad0d49a2cecdb9b109251f546d4dd1 made VM map entry allocation run with VM privilege, so that a VM map isn't held locked while physical allocations are paused, which may block the default pager during page eviction, causing a system-wide deadlock. First, it turns out that map entries aren't the only buffers allocated, and second, their number can't be easily determined, which makes a preallocation strategy very hard to implement. This change generalizes the strategy of VM privilege increase when a VM map is locked. * device/ds_routines.c (io_done_thread): Use integer values instead of booleans when setting VM privilege. * kern/thread.c (thread_init, thread_wire): Likewise. * vm/vm_pageout.c (vm_pageout): Likewise. * kern/thread.h (struct thread): Turn member `vm_privilege' into an unsigned integer. * vm/vm_map.c (vm_map_lock): New function, where VM privilege is temporarily increased. (vm_map_unlock): New function, where VM privilege is decreased. (_vm_map_entry_create): Remove VM privilege workaround from this function. * vm/vm_map.h (vm_map_lock, vm_map_unlock): Turn into functions.
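Sketched (simplified, not the committed code; the real functions are in vm/vm_map.c), the generalized workaround raises a per-thread privilege counter for as long as the map is locked:

    void
    vm_map_lock(struct vm_map *map)
    {
            lock_write(&map->lock);
            /* An integer, not a boolean: lock acquisitions may nest. */
            if (current_thread())
                    current_thread()->vm_privilege++;
    }

    void
    vm_map_unlock(struct vm_map *map)
    {
            if (current_thread())
                    current_thread()->vm_privilege--;
            lock_write_done(&map->lock);
    }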
2016-09-07 Remove map entry pageability property. (Richard Braun)
Since the replacement of the zone allocator, kernel objects have been wired in memory. Besides, as of 5e9f6f (Stack the slab allocator directly on top of the physical allocator), there is a single cache used to allocate map entries. Those changes make the pageability attribute of VM maps irrelevant. * device/ds_routines.c (mach_device_init): Update call to kmem_submap. * ipc/ipc_init.c (ipc_init): Likewise. * kern/task.c (task_create): Update call to vm_map_create. * vm/vm_kern.c (kmem_submap): Remove `pageable' argument. Update call to vm_map_setup. (kmem_init): Update call to vm_map_setup. * vm/vm_kern.h (kmem_submap): Update declaration. * vm/vm_map.c (vm_map_setup): Remove `pageable' argument. Don't set `entries_pageable' member. (vm_map_create): Likewise. (vm_map_copyout): Don't bother creating copies of page entries with the right pageability. (vm_map_copyin): Don't set `entries_pageable' member. (vm_map_fork): Update call to vm_map_create. * vm/vm_map.h (struct vm_map_header): Remove `entries_pageable' member. (vm_map_setup, vm_map_create): Remove `pageable' argument.
2016-09-03 Fix early physical page allocation (Richard Braun)
Import upstream biosmem and vm_page changes, and adjust for local modifications. Specifically, the biosmem module was mistakenly loading physical segments that did not clip with the heap as completely available. This change makes it load them as completely unavailable during startup, and once the VM system is ready, additional pages are loaded. * i386/i386at/biosmem.c (DEBUG): New macro. (struct biosmem_segment): Remove members `avail_start' and `avail_end'. (biosmem_heap_cur): Remove variable. (biosmem_heap_bottom, biosmem_heap_top, biosmem_heap_topdown): New variables. (biosmem_find_boot_data_update, biosmem_find_boot_data): Remove functions. (biosmem_find_heap_clip, biosmem_find_heap): New functions. (biosmem_setup_allocator): Rewritten to use the new biosmem_find_heap function. (biosmem_bootalloc): Support both bottom-up and top-down allocations. (biosmem_directmap_size): Renamed to ... (biosmem_directmap_end): ... this function. (biosmem_load_segment): Fix segment loading. (biosmem_setup): Restrict usable memory to the directmap segment. (biosmem_free_usable_range): Add checks on input parameters. (biosmem_free_usable_update_start, biosmem_free_usable_start, biosmem_free_usable_reserved, biosmem_free_usable_end): Remove functions. (biosmem_free_usable_entry): Rewritten to use the new biosmem_find_heap function. (biosmem_free_usable): Restrict usable memory to the directmap segment. * i386/i386at/biosmem.h (biosmem_bootalloc): Update description. (biosmem_directmap_size): Renamed to ... (biosmem_directmap_end): ... this function. (biosmem_free_usable): Update declaration. * i386/i386at/model_dep.c (machine_init): Call biosmem_free_usable. * vm/vm_page.c (DEBUG): New macro. (struct vm_page_seg): New member `heap_present'. (vm_page_load): Remove heap related parameters. (vm_page_load_heap): New function. * vm/vm_page.h (vm_page_load): Remove heap related parameters. Update description. (vm_page_load_heap): New function.
2016-08-29 vm: fix boot on xen (Richard Braun)
* vm/vm_map.c (_vm_map_entry_create): Make sure there is a thread before accessing VM privilege.
2016-08-07 VM: fix pageout-related deadlock (Richard Braun)
* vm/vm_map.c (_vm_map_entry_create): Temporarily set the current thread as VM privileged.
2016-08-06 Augment VM maps with task names (Richard Braun)
This change improves the clarity of "no more room for ..." VM map allocation errors. * kern/task.c (task_init): Call vm_map_set_name for the kernel map. (task_create): Call vm_map_set_name where appropriate. * vm/vm_map.c (vm_map_setup): Set map name to NULL. (vm_map_find_entry_anywhere): Update error message to include map name. * vm/vm_map.h (struct vm_map): New `name' member. (vm_map_set_name): New inline function.
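The helper is presumably as small as it sounds; a plausible sketch of the inline function (assumed body, following the ChangeLog's description):

    static inline void
    vm_map_set_name(vm_map_t map, const char *name)
    {
            map->name = name;  /* shows up in "no more room for ..." errors */
    }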