Understanding memory-mapped files in computers

    Tech · 2022-07-10

    Memory mapping

    Memory mapping: mapping a region of kernel-space memory into user space. The same kernel-space region can be mapped into multiple processes at once, providing shared-memory inter-process communication.

    《Understanding kernel space, memory regions, and user space》

    《Memory-mapped I/O》

    《Understanding slack space, page cache, and thrashing in computers》

    Memory map in computer science

    Wikipedia

    A memory map is a structure of data (which usually resides in memory itself) that indicates how memory is laid out.

    Memory-mapped file

    Wikipedia

    A memory-mapped file is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource.

    This resource is typically a file that is physically present on disk, but can also be a device, shared memory object, or other resource that the OS can reference through a file descriptor.

    Once present, this correlation between the file and the memory space permits applications to treat the mapped portion as if it were primary memory.
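The "treat it like memory" idea can be sketched with Python's mmap module (the file path below is a hypothetical temporary file created for the demo): once mapped, the file is read and updated with ordinary slicing, not read()/write() calls.

```python
import mmap
import os
import tempfile

# Write a small demo file, then access it through a memory mapping.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello mapped world")
os.close(fd)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # map the whole file
    first = bytes(mm[0:5])          # ordinary slicing, as if it were a byte array
    mm[0:5] = b"HELLO"              # in-place update through the page cache
    mm.flush()                      # ask the OS to write dirty pages back
    mm.close()

with open(path, "rb") as f:
    contents = f.read()
os.remove(path)
print(first, contents)
```

The mapped region behaves like a mutable byte buffer whose backing store happens to be the file.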

    Benefits

    The benefit of memory-mapping a file is increased I/O performance, especially when used on large files.

    For small files, memory-mapped files can result in a waste of slack space, as memory maps are always aligned to the page size, which is mostly 4 KiB. Therefore, a 5 KiB file will allocate 8 KiB, and thus 3 KiB is wasted.
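The slack-space arithmetic is simple rounding up to a page boundary; a minimal sketch (the helper name `mapped_size` is invented for illustration, and 4096 is assumed as the page size):

```python
import mmap

PAGE = mmap.PAGESIZE  # typically 4096 on common platforms


def mapped_size(file_size, page=4096):
    # Smallest multiple of the page size that covers the file.
    return -(-file_size // page) * page  # ceiling division


allocated = mapped_size(5 * 1024)   # a 5 KiB file...
wasted = allocated - 5 * 1024       # ...occupies 8 KiB, wasting 3 KiB
print(allocated, wasted)
```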

    Accessing memory-mapped files is faster than using direct read and write operations, for two reasons:

    Firstly, a system call is orders of magnitude slower than a simple change to a program’s local memory. Secondly, in most operating systems the mapped memory region actually is the kernel’s page cache (file cache), meaning that no copies need to be created in user space.

    Certain application-level memory-mapped file operations also perform better than their physical file counterparts. Applications can access and update data in the file directly and in-place, as opposed to seeking from the start of the file or rewriting the entire edited contents to a temporary location. Since the memory-mapped file is handled internally in pages, linear file access (as seen, for example, in flat-file data storage or configuration files) requires disk access only when a new page boundary is crossed, and can write larger sections of the file to disk in a single operation.

    A possible benefit of memory-mapped files is “lazy loading”, which uses only a small amount of RAM even for a very large file. Trying to load the entire contents of a file that is significantly larger than the amount of memory available can cause severe thrashing as the operating system reads from disk into memory and simultaneously writes pages from memory back to disk. Memory-mapping may not only bypass the page file completely, but also allow smaller page-sized sections to be loaded as data is being edited, similarly to the demand paging used for programs.
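Lazy loading can be observed with a sparse file: creating a 1 GiB mapping returns immediately and consumes essentially no RAM until a page is actually touched. A small sketch, assuming a POSIX-like filesystem that supports sparse files (the temp file is hypothetical):

```python
import mmap
import os
import tempfile

# Map a large sparse file; pages enter RAM only when touched.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 1 << 30)            # 1 GiB logical size, no data blocks written
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)    # returns at once; nothing is read from disk yet
    last = mm[(1 << 30) - 1]         # touching one byte faults in just one page
    mm.close()
os.close(fd)
os.remove(path)
print(last)
```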

    The memory-mapping process is handled by the virtual memory manager, which is the same subsystem responsible for dealing with the page file. Memory-mapped files are loaded into memory one entire page at a time. The page size is selected by the OS for maximum performance. Since page file management is one of the most critical elements of a virtual memory system, loading page-sized sections of a file into physical memory is typically a very highly optimized system function.

    Types

    There are two types of memory-mapped files:

    Persisted

    Persisted files are associated with a source file on a disk. The data is saved to the source file on the disk once the last process is finished. These memory-mapped files are suitable for working with extremely large source files.

    Non-persisted

    Non-persisted files are not associated with a file on a disk. When the last process has finished working with the file, the data is lost. These files are suitable for creating shared memory for inter-process communication (IPC).
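A non-persisted mapping corresponds to an anonymous map: passing a file descriptor of -1 to Python's mmap requests memory with no backing file. A minimal IPC sketch, assuming a POSIX system where os.fork is available and anonymous maps are shared by default:

```python
import mmap
import os

# Anonymous (non-persisted) mapping, shared with a forked child.
shared = mmap.mmap(-1, mmap.PAGESIZE)   # fd -1 -> anonymous memory

pid = os.fork()
if pid == 0:                            # child process
    shared[0:5] = b"child"              # write into the shared region
    os._exit(0)
os.waitpid(pid, 0)                      # wait for the child to finish
msg = bytes(shared[0:5])                # parent observes the child's write
shared.close()
print(msg)
```

When the last process unmaps the region, the data is gone, exactly as the text describes.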

    Drawbacks

    The major reason to choose memory mapped file I/O is performance. Nevertheless, there can be tradeoffs.

    The standard I/O approach is costly due to system call overhead and memory copying.

    The memory-mapped approach has its cost in minor page faults: when a block of data is loaded into the page cache but is not yet mapped into the process’s virtual memory space.

    In some circumstances, memory mapped file I/O can be substantially slower than standard file I/O.

    Another drawback of memory-mapped files relates to a given architecture’s address space: a file larger than the addressable space can have only portions mapped at a time, complicating reading it. For example, a 32-bit architecture such as Intel’s IA-32 can only directly address 4 GiB or smaller portions of files. An even smaller amount of addressable space is available to individual programs, typically in the range of 2 to 3 GiB, depending on the OS kernel.
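Mapping only a window of a file is done with the offset and length parameters; the offset must be a multiple of the allocation granularity (the page size on most Unix systems). A sketch using a small stand-in for the "huge" file:

```python
import mmap
import os
import tempfile

# Windowed mapping: map one slice of a file too large to map whole.
GRAN = mmap.ALLOCATIONGRANULARITY       # required alignment for the offset

fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4 * GRAN)              # stand-in for a very large file
os.pwrite(fd, b"window", 2 * GRAN)      # the data we want sits past the start
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), GRAN, offset=2 * GRAN,
                   access=mmap.ACCESS_READ)   # map only one window
    word = bytes(mm[0:6])
    mm.close()
os.close(fd)
os.remove(path)
print(word)
```

An application reading a file larger than its address space would slide such a window along the file, remapping as needed.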

    Common uses

    Perhaps the most common use for a memory-mapped file is the process loader in most modern OSes. When a process is started, the OS uses a memory-mapped file to bring the executable, along with any loadable modules, into memory for execution.

    Most memory-mapping systems use a technique called demand paging, where the file is loaded into physical memory in subsets (one page at a time), and only when that page is actually referenced. In the specific case of executable files, this permits the OS to selectively load only those portions of a process image that actually need to execute.

    Another common use for memory-mapped files is to share memory between multiple processes. In modern protected-mode OSes, processes are generally not permitted to access memory space that is allocated for use by another process. (A program’s attempt to do so causes an invalid page fault or a segmentation violation.)

    There are a number of techniques available to safely share memory, and memory-mapped file I/O is one of the most popular. Two or more applications can simultaneously map a single physical file into memory and access this memory.
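File-backed sharing can be sketched in a single process: two independent shared mappings of the same file are backed by the same page-cache pages, so a write through one view is immediately visible through the other (the same mechanism two cooperating processes would use):

```python
import mmap
import os
import tempfile

# Two independent mappings of one file share the same page-cache pages.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, mmap.PAGESIZE)
f1, f2 = open(path, "r+b"), open(path, "r+b")
view1 = mmap.mmap(f1.fileno(), 0)    # shared mapping (the default)
view2 = mmap.mmap(f2.fileno(), 0)    # second, independent mapping
view1[0:4] = b"ping"                 # write through the first view...
echo = bytes(view2[0:4])             # ...read it back through the second
view1.close(); view2.close()
f1.close(); f2.close()
os.close(fd)
os.remove(path)
print(echo)
```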

    Platform support

    Most modern operating systems or runtime environments support some form of memory-mapped file access.

    The function ***mmap()***, which creates a mapping of a file given a file descriptor, starting location in the file, and a length, is part of the POSIX specification, so a wide variety of POSIX-compliant systems, such as UNIX, Linux, Mac OS X, or OpenVMS, support a common mechanism for memory-mapping files.

    mmap

    Wikipedia

    In computing, mmap(2) is a POSIX-compliant Unix system call that maps files or devices into memory.

    It is a method of memory-mapped file I/O.

    It implements demand paging: file contents are not read from disk immediately and initially use no physical RAM at all.

    The actual reads from disk are performed in a “lazy” manner, after a specific location is accessed.

    After the memory is no longer needed, it is important to munmap(2) the pointers to it. Protection information can be managed using mprotect(2), and special treatment can be enforced using madvise(2).
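This lifecycle can be sketched with Python's mmap module, which wraps the same system calls: close() performs munmap(2), and madvise() issues madvise(2) where the platform supports it (Python 3.8+, POSIX only, hence the guard):

```python
import mmap

# Map, advise, use, unmap -- the lifecycle described above.
mm = mmap.mmap(-1, mmap.PAGESIZE)            # anonymous mapping
if hasattr(mm, "madvise") and hasattr(mmap, "MADV_SEQUENTIAL"):
    mm.madvise(mmap.MADV_SEQUENTIAL)         # hint: we will access linearly
mm[0:2] = b"ok"
saw = bytes(mm[0:2])
mm.close()                                   # munmap; later access raises ValueError
closed = mm.closed
print(saw, closed)
```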

    In Linux, macOS, and the BSDs, mmap can create several types of mapping. Other operating systems may support only a subset of these.
