Use the Device Mapper storage driver
Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux. Docker's devicemapper storage driver leverages the thin provisioning and snapshotting capabilities of this framework for image and container management. This article refers to the Device Mapper storage driver as devicemapper, and the kernel framework as Device Mapper.
On systems where it is supported, devicemapper support is included in the Linux kernel. However, specific configuration is required to use it with Docker.
The devicemapper driver uses block devices dedicated to Docker and operates at the block level, rather than the file level. These devices can be extended by adding physical storage to your Docker host, and they perform better than using a filesystem at the operating system (OS) level.
Prerequisites
- devicemapper is supported on Docker Engine - Community running on CentOS, Fedora, SLES 15, Ubuntu, Debian, or RHEL.
- devicemapper requires the lvm2 and device-mapper-persistent-data packages to be installed.
- Changing the storage driver makes any containers you have already created inaccessible on the local system. Use docker save to save images, and push existing images to Docker Hub or a private repository, so you do not need to recreate them later.
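
For example, before changing the storage driver you might preserve existing images; this is a minimal sketch, and the image and registry names are placeholders:

$ docker save -o my-images.tar myapp:1.0
$ docker tag myapp:1.0 registry.example.com/myapp:1.0
$ docker push registry.example.com/myapp:1.0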
Configure Docker with the devicemapper storage driver
Before following these procedures, you must first meet all the prerequisites.
Configure loop-lvm mode for testing
This configuration is only appropriate for testing. The loop-lvm mode makes use of a 'loopback' mechanism that allows files on the local disk to be read from and written to as if they were an actual physical disk or block device. However, the addition of the loopback mechanism, and interaction with the OS filesystem layer, means that IO operations can be slow and resource-intensive. Use of loopback devices can also introduce race conditions. However, setting up loop-lvm mode can help identify basic issues (such as missing user space packages, kernel drivers, etc.) ahead of attempting the more complex set up required to enable direct-lvm mode. loop-lvm mode should therefore only be used to perform rudimentary testing prior to configuring direct-lvm.
For production systems, see Configure direct-lvm mode for production.
1. Stop Docker.

   $ sudo systemctl stop docker
2. Edit /etc/docker/daemon.json. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.

   {
     "storage-driver": "devicemapper"
   }

   See all storage options for each storage driver in the daemon reference documentation.

   Docker does not start if the daemon.json file contains badly-formed JSON.
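
   As a quick sanity check (assuming Python 3 is available on the host), you can validate the JSON before restarting the daemon:

   $ python3 -m json.tool /etc/docker/daemon.json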
Start Docker.
$ sudo systemctl first docker
4. Verify that the daemon is using the devicemapper storage driver. Use the docker info command and look for Storage Driver.

   $ docker info

   Containers: 0
    Running: 0
    Paused: 0
    Stopped: 0
   Images: 0
   Server Version: 17.03.1-ce
   Storage Driver: devicemapper
    Pool Name: docker-202:1-8413957-pool
    Pool Blocksize: 65.54 kB
    Base Device Size: 10.74 GB
    Backing Filesystem: xfs
    Data file: /dev/loop0
    Metadata file: /dev/loop1
    Data Space Used: 11.8 MB
    Data Space Total: 107.4 GB
    Data Space Available: 7.44 GB
    Metadata Space Used: 581.6 kB
    Metadata Space Total: 2.147 GB
    Metadata Space Available: 2.147 GB
    Thin Pool Minimum Free Space: 10.74 GB
    Udev Sync Supported: true
    Deferred Removal Enabled: false
    Deferred Deletion Enabled: false
    Deferred Deleted Device Count: 0
    Data loop file: /var/lib/docker/devicemapper/data
    Metadata loop file: /var/lib/docker/devicemapper/metadata
    Library Version: 1.02.135-RHEL7 (2016-11-16)
   <...>
This host is running in loop-lvm mode, which is not supported on production systems. This is indicated by the fact that the Data loop file and Metadata loop file are files under /var/lib/docker/devicemapper. These are loopback-mounted sparse files. For production systems, see Configure direct-lvm mode for production.
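
A quick way to spot this condition is to filter the docker info output for the fields shown above (a simple one-liner; adjust the pattern as needed):

$ docker info 2>/dev/null | grep -E 'loop file|Pool Name'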
Configure direct-lvm mode for production
Production hosts using the devicemapper storage driver must use direct-lvm mode. This mode uses block devices to create the thin pool. This is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed. However, more setup is required than in loop-lvm mode.
After you have satisfied the prerequisites, follow the steps below to configure Docker to use the devicemapper storage driver in direct-lvm mode.
Warning: Changing the storage driver makes any containers you have already created inaccessible on the local system. Use docker save to save images, and push existing images to Docker Hub or a private repository, so you do not need to recreate them later.
Allow Docker to configure direct-lvm mode
Docker can manage the block device for you, simplifying configuration of direct-lvm mode. This is appropriate for fresh Docker setups only. You can only use a single block device. If you need to use multiple block devices, configure direct-lvm mode manually instead. The following new configuration options are available:
Option | Description | Required? | Default | Example
---|---|---|---|---
dm.directlvm_device | The path to the block device to configure for direct-lvm. | Yes | | dm.directlvm_device="/dev/xvdf"
dm.thinp_percent | The percentage of space to use for storage from the passed-in block device. | No | 95 | dm.thinp_percent=95
dm.thinp_metapercent | The percentage of space to use for metadata storage from the passed-in block device. | No | 1 | dm.thinp_metapercent=1
dm.thinp_autoextend_threshold | The threshold for when lvm should automatically extend the thin pool as a percentage of the total storage space. | No | 80 | dm.thinp_autoextend_threshold=80
dm.thinp_autoextend_percent | The percentage to increase the thin pool by when an autoextend is triggered. | No | 20 | dm.thinp_autoextend_percent=20
dm.directlvm_device_force | Whether to format the block device even if a filesystem already exists on it. If set to false and a filesystem is present, an error is logged and the filesystem is left intact. | No | false | dm.directlvm_device_force=true
Edit the daemon.json file and set the appropriate options, then restart Docker for the changes to take effect. The following daemon.json configuration sets all of the options in the table above.

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
See all storage options for each storage driver in the daemon reference documentation.
Restart Docker for the changes to take effect. Docker invokes the commands to configure the block device for you.
Warning: Changing these values after Docker has prepared the block device for you is not supported and causes an error.
You still need to perform periodic maintenance tasks.
Configure direct-lvm mode manually
The procedure below creates a logical volume configured as a thin pool to use as backing for the storage pool. It assumes that you have a spare block device at /dev/xvdf with enough free space to complete the task. The device identifier and volume sizes may be different in your environment and you should substitute your own values throughout the procedure. The procedure also assumes that the Docker daemon is in the stopped state.
1. Identify the block device you want to use. The device is located under /dev/ (such as /dev/xvdf) and needs enough free space to store the images and container layers for the workloads that host runs. A solid state drive is ideal.
Stop Docker.
$ sudo systemctl stop docker
3. Install the following packages (example install commands are shown below):

   - RHEL / CentOS: device-mapper-persistent-data, lvm2, and all dependencies
   - Ubuntu / Debian / SLES 15: thin-provisioning-tools, lvm2, and all dependencies
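
   For example, using the distribution's package manager (exact package names and availability depend on your release):

   $ sudo yum install -y device-mapper-persistent-data lvm2    # RHEL / CentOS
   $ sudo apt-get install -y thin-provisioning-tools lvm2      # Ubuntu / Debian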
4. Create a physical volume on your block device from step 1, using the pvcreate command. Substitute your device name for /dev/xvdf.

   Warning: The next few steps are destructive, so be sure that you have specified the correct device!

   $ sudo pvcreate /dev/xvdf

   Physical volume "/dev/xvdf" successfully created.
5. Create a docker volume group on the same device, using the vgcreate command.

   $ sudo vgcreate docker /dev/xvdf

   Volume group "docker" successfully created
6. Create two logical volumes named thinpool and thinpoolmeta using the lvcreate command. The last parameter specifies the amount of free space to allow for automatic expansion of the data or metadata if space runs low, as a temporary stop-gap. These are the recommended values.

   $ sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG

   Logical volume "thinpool" created.

   $ sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG

   Logical volume "thinpoolmeta" created.
7. Convert the volumes to a thin pool and a storage location for metadata for the thin pool, using the lvconvert command.

   $ sudo lvconvert -y \
     --zero n \
     -c 512K \
     --thinpool docker/thinpool \
     --poolmetadata docker/thinpoolmeta

   WARNING: Converting logical volume docker/thinpool and docker/thinpoolmeta to thin pool's data and metadata volumes with metadata wiping.
   THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
   Converted docker/thinpool to thin pool.
8. Configure autoextension of thin pools via an lvm profile.

   $ sudo vi /etc/lvm/profile/docker-thinpool.profile
9. Specify thin_pool_autoextend_threshold and thin_pool_autoextend_percent values.

   thin_pool_autoextend_threshold is the percentage of space used before lvm attempts to autoextend the available space (100 = disabled, not recommended).

   thin_pool_autoextend_percent is the amount of space to add to the device when automatically extending (0 = disabled).

   The example below adds 20% more capacity when the disk usage reaches 80%.

   activation {
     thin_pool_autoextend_threshold=80
     thin_pool_autoextend_percent=20
   }

   Save the file.
10. Apply the LVM profile, using the lvchange command.

    $ sudo lvchange --metadataprofile docker-thinpool docker/thinpool

    Logical volume docker/thinpool changed.
11. Ensure monitoring of the logical volume is enabled.

    $ sudo lvs -o+seg_monitor

    LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Monitor
    thinpool docker twi-a-t--- 95.00g             0.00   0.01                            not monitored
    If the output in the Monitor column reports, as above, that the volume is not monitored, then monitoring needs to be explicitly enabled. Without this step, automatic extension of the logical volume will not occur, regardless of any settings in the applied profile.

    $ sudo lvchange --monitor y docker/thinpool

    Double check that monitoring is now enabled by running the sudo lvs -o+seg_monitor command a second time. The Monitor column should now report the logical volume is being monitored.
12. If you have ever run Docker on this host before, or if /var/lib/docker/ exists, move it out of the way so that Docker can use the new LVM pool to store the contents of images and containers.

    $ sudo su -
    # mkdir /var/lib/docker.bk
    # mv /var/lib/docker/* /var/lib/docker.bk
    # exit

    If any of the following steps fail and you need to restore, you can remove /var/lib/docker and replace it with /var/lib/docker.bk.
13. Edit /etc/docker/daemon.json and configure the options needed for the devicemapper storage driver. If the file was previously empty, it should now contain the following contents:

    {
      "storage-driver": "devicemapper",
      "storage-opts": [
        "dm.thinpooldev=/dev/mapper/docker-thinpool",
        "dm.use_deferred_removal=true",
        "dm.use_deferred_deletion=true"
      ]
    }
14. Start Docker.

    systemd:

    $ sudo systemctl start docker

    service:

    $ sudo service docker start
15. Verify that Docker is using the new configuration using docker info.

    $ docker info

    Containers: 0
     Running: 0
     Paused: 0
     Stopped: 0
    Images: 0
    Server Version: 17.03.1-ce
    Storage Driver: devicemapper
     Pool Name: docker-thinpool
     Pool Blocksize: 524.3 kB
     Base Device Size: 10.74 GB
     Backing Filesystem: xfs
     Data file:
     Metadata file:
     Data Space Used: 19.92 MB
     Data Space Total: 102 GB
     Data Space Available: 102 GB
     Metadata Space Used: 147.5 kB
     Metadata Space Total: 1.07 GB
     Metadata Space Available: 1.069 GB
     Thin Pool Minimum Free Space: 10.2 GB
     Udev Sync Supported: true
     Deferred Removal Enabled: true
     Deferred Deletion Enabled: true
     Deferred Deleted Device Count: 0
     Library Version: 1.02.135-RHEL7 (2016-11-16)
    <...>

    If Docker is configured correctly, the Data file and Metadata file are blank, and the pool name is docker-thinpool.
16. After you have verified that the configuration is correct, you can remove the /var/lib/docker.bk directory which contains the previous configuration.

    $ sudo rm -rf /var/lib/docker.bk
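
As a final smoke test (optional; any small image works), confirm that a container runs on the new storage backend:

$ docker run --rm hello-world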
Manage devicemapper
Monitor the thin pool
Do not rely on LVM auto-extension alone. The volume group automatically extends, but the volume can still fill up. You can monitor free space on the volume using lvs or lvs -a. Consider using a monitoring tool at the OS level, such as Nagios.
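
As a rough sketch of OS-level monitoring (not part of the Docker documentation; the pool name docker/thinpool and the 80% threshold are assumptions), a small shell script run from cron could warn when data usage gets high:

#!/bin/sh
# Warn when the thin pool's data usage crosses a threshold.
THRESHOLD=80
# lvs prints a value like " 42.50"; strip spaces and the decimal part.
USAGE=$(sudo lvs --noheadings -o data_percent docker/thinpool | tr -d ' ' | cut -d. -f1)
if [ "${USAGE:-0}" -ge "$THRESHOLD" ]; then
  echo "WARNING: docker/thinpool data usage is ${USAGE}%" >&2
fi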
To view the LVM logs, you can use journalctl:

$ sudo journalctl -fu dm-event.service
If you run into repeated problems with the thin pool, you can set the storage option dm.min_free_space to a value (representing a percentage) in /etc/docker/daemon.json. For instance, setting it to 10 ensures that operations fail with a warning when the free space is at or near 10%. See the storage driver options in the Engine daemon reference.
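
A minimal daemon.json sketch, assuming the percentage syntax described in the daemon reference:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.min_free_space=10%"
  ]
}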
Increase capacity on a running device
You can increase the capacity of the pool on a running thin-pool device. This is useful if the data's logical volume is full and the volume group is at full capacity. The specific procedure depends on whether you are using a loop-lvm thin pool or a direct-lvm thin pool.
Resize a loop-lvm thin pool
The easiest way to resize a loop-lvm thin pool is to use the device_tool utility, but you can use operating system utilities instead.
Use the device_tool utility
A community-contributed script called device_tool.go is available in the moby/moby GitHub repository. You can use this tool to resize a loop-lvm thin pool, avoiding the long process above. This tool is not guaranteed to work, but you should only be using loop-lvm on non-production systems.

If you do not want to use device_tool, you can resize the thin pool manually instead.
1. To use the tool, clone the GitHub repository, change to the contrib/docker-device-tool directory, and follow the instructions in the README.md to compile the tool.
Use the tool. The post-obit example resizes the thin pool to 200GB.
$ ./device_tool resize 200GB
Use operating system utilities

If you do not want to use the device-tool utility, you can resize a loop-lvm thin pool manually using the following procedure.

In loop-lvm mode, a loopback device is used to store the data, and another to store the metadata. loop-lvm mode is only supported for testing, because it has significant performance and stability drawbacks.
If you are using loop-lvm mode, the output of docker info shows file paths for Data loop file and Metadata loop file:

$ docker info | grep 'loop file'

 Data loop file: /var/lib/docker/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/metadata
Follow these steps to increase the size of the thin pool. In this example, the thin pool is 100 GB, and is increased to 200 GB.
1. List the sizes of the devices.

   $ sudo ls -lh /var/lib/docker/devicemapper/

   total 1175492
   -rw------- 1 root root 100G Mar 30 05:22 data
   -rw------- 1 root root 2.0G Mar 31 11:17 metadata
2. Increase the size of the data file to 200 G using the truncate command, which is used to increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

   $ sudo truncate -s 200G /var/lib/docker/devicemapper/data
3. Verify the file size changed.

   $ sudo ls -lh /var/lib/docker/devicemapper/

   total 1.2G
   -rw------- 1 root root 200G Apr 14 08:47 data
   -rw------- 1 root root 2.0G Apr 19 13:27 metadata
4. The loopback file has changed on disk but not in memory. List the size of the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB.

   $ echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]
   100

   $ sudo losetup -c /dev/loop0

   $ echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]
   200
5. Reload the devicemapper thin pool.

   a. Get the pool name first. The pool name is the first field, delimited by ` :`. This command extracts it.

      $ sudo dmsetup status | grep ' thin-pool ' | awk -F ': ' {'print $1'}
      docker-8:1-123141-pool

   b. Dump the device mapper table for the thin pool.

      $ sudo dmsetup table docker-8:1-123141-pool
      0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing
   c. Calculate the total sectors of the thin pool using the second field of the output. The number is expressed in 512-byte sectors. A 100G file has 209715200 512-byte sectors. If you double this number to 200G, you get 419430400 512-byte sectors. (A quick sanity check of this arithmetic is shown after step d.)
   d. Reload the thin pool with the new sector number, using the following three dmsetup commands.

      $ sudo dmsetup suspend docker-8:1-123141-pool
      $ sudo dmsetup reload docker-8:1-123141-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing'
      $ sudo dmsetup resume docker-8:1-123141-pool
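
The sector arithmetic from step c can be sanity-checked in any POSIX shell:

$ echo $(( 200 * 1024 * 1024 * 1024 / 512 ))
419430400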
Resize a direct-lvm thin pool

To extend a direct-lvm thin pool, you need to first attach a new block device to the Docker host, and make note of the name assigned to it by the kernel. In this example, the new block device is /dev/xvdg.
Follow this procedure to extend a direct-lvm thin pool, substituting your block device and other parameters to suit your situation.
1. Gather information about your volume group.

   Use the pvdisplay command to find the physical block devices currently in use by your thin pool, and the volume group's name.

   $ sudo pvdisplay | grep 'VG Name'

   PV Name               /dev/xvdf
   VG Name               docker

   In the following steps, substitute your block device or volume group name as appropriate.
2. Extend the volume group, using the vgextend command with the VG Name from the previous step, and the name of your new block device.

   $ sudo vgextend docker /dev/xvdg

   Physical volume "/dev/xvdg" successfully created.
   Volume group "docker" successfully extended
3. Extend the docker/thinpool logical volume. This command uses 100% of the volume right away, without auto-extend. To extend the metadata thinpool instead, use docker/thinpool_tmeta.

   $ sudo lvextend -l+100%FREE -n docker/thinpool

   Size of logical volume docker/thinpool_tdata changed from 95.00 GiB (24319 extents) to 198.00 GiB (50688 extents).
   Logical volume docker/thinpool_tdata successfully resized.
4. Verify the new thin pool size using the Data Space Available field in the output of docker info. If you extended the docker/thinpool_tmeta logical volume instead, look for Metadata Space Available.

   Storage Driver: devicemapper
    Pool Name: docker-thinpool
    Pool Blocksize: 524.3 kB
    Base Device Size: 10.74 GB
    Backing Filesystem: xfs
    Data file:
    Metadata file:
    Data Space Used: 212.3 MB
    Data Space Total: 212.6 GB
    Data Space Available: 212.4 GB
    Metadata Space Used: 286.7 kB
    Metadata Space Total: 1.07 GB
    Metadata Space Available: 1.069 GB
   <...>
Activate the devicemapper after reboot

If you reboot the host and find that the docker service failed to start, look for the error, "Non existing device". You need to re-activate the logical volumes with this command:

$ sudo lvchange -ay docker/thinpool
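
One way to automate this (a sketch, not from the Docker documentation; the unit name is hypothetical, and the lvchange path varies by distribution) is a oneshot systemd unit ordered before docker.service:

# /etc/systemd/system/docker-thinpool-activate.service (hypothetical unit name)
[Unit]
Description=Activate the Docker thin pool before Docker starts
Before=docker.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/lvchange -ay docker/thinpool

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable docker-thinpool-activate.service.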
How the devicemapper storage driver works
Warning: Do not directly manipulate any files or directories within /var/lib/docker/. These files and directories are managed by Docker.
Use the lsblk command to see the devices and their pools, from the operating system's point of view:

$ sudo lsblk

NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                    202:0    0    8G  0 disk
└─xvda1                 202:1    0    8G  0 part /
xvdf                    202:80   0  100G  0 disk
├─docker-thinpool_tmeta 253:0    0 1020M  0 lvm
│ └─docker-thinpool     253:2    0   95G  0 lvm
└─docker-thinpool_tdata 253:1    0   95G  0 lvm
  └─docker-thinpool     253:2    0   95G  0 lvm
Use the mount command to see the mount-point Docker is using:

$ mount | grep devicemapper
/dev/xvda1 on /var/lib/docker/devicemapper type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
When you use devicemapper, Docker stores image and layer contents in the thinpool, and exposes them to containers by mounting them under subdirectories of /var/lib/docker/devicemapper/.
Epitome and container layers on-disk
The /var/lib/docker/devicemapper/metadata/
directory contains metadata about the Devicemapper configuration itself and virtually each paradigm and container layer that be. The devicemapper
storage driver uses snapshots, and this metadata include information about those snapshots. These files are in JSON format.
The /var/lib/docker/devicemapper/mnt/ directory contains a mount point for each image and container layer that exists. Image layer mount points are empty, but a container's mount point shows the container's filesystem as it appears from within the container.
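
For instance, on a devicemapper host you can list these directories (output varies by system):

$ sudo ls /var/lib/docker/devicemapper/metadata/ | head
$ sudo ls /var/lib/docker/devicemapper/mnt/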
Image layering and sharing
The devicemapper storage driver uses dedicated block devices rather than formatted filesystems, and operates on files at the block level for maximum performance during copy-on-write (CoW) operations.
Snapshots
Another feature of devicemapper is its use of snapshots (also sometimes called thin devices or virtual devices), which store the differences introduced in each layer as very small, lightweight thin pools. Snapshots provide many benefits:
- Layers which are shared in common between containers are only stored on disk once, unless they are writable. For example, if you have 10 different images which are all based on alpine, the alpine image and all its parent images are only stored once each on disk.

- Snapshots are an implementation of a copy-on-write (CoW) strategy. This means that a given file or directory is only copied to the container's writable layer when it is modified or deleted by that container.

- Because devicemapper operates at the block level, multiple blocks in a writable layer can be modified simultaneously.

- Snapshots can be backed up using standard OS-level backup utilities. Just make a copy of /var/lib/docker/devicemapper/.
Devicemapper workflow
When you start Docker with the devicemapper storage driver, all objects related to image and container layers are stored in /var/lib/docker/devicemapper/, which is backed by one or more block-level devices, either loopback devices (testing only) or physical disks.
- The base device is the lowest-level object. This is the thin pool itself. You can examine it using docker info. It contains a filesystem. This base device is the starting point for every image and container layer. The base device is a Device Mapper implementation detail, rather than a Docker layer.

- Metadata about the base device and each image or container layer is stored in /var/lib/docker/devicemapper/metadata/ in JSON format. These layers are copy-on-write snapshots, which means that they are empty until they diverge from their parent layers.

- Each container's writable layer is mounted on a mountpoint in /var/lib/docker/devicemapper/mnt/. An empty directory exists for each read-only image layer and each stopped container.
Each image layer is a snapshot of the layer below it. The lowest layer of each image is a snapshot of the base device that exists in the pool. When you run a container, it is a snapshot of the image the container is based on. The following example shows a Docker host with two running containers. The first is a ubuntu container and the second is a busybox container.
How container reads and writes work with devicemapper
Reading files
With devicemapper, reads happen at the block level. The diagram below shows the high-level process for reading a single block (0x44f) in an example container.
An application makes a read request for block 0x44f in the container. Because the container is a thin snapshot of an image, it doesn't have the block, but it has a pointer to the block on the nearest parent image where it does exist, and it reads the block from there. The block now exists in the container's memory.
Writing files

Writing a new file: With the devicemapper driver, writing new data to a container is accomplished by an allocate-on-demand operation. Each block of the new file is allocated in the container's writable layer and the block is written there.

Updating an existing file: The relevant block of the file is read from the nearest layer where it exists. When the container writes the file, only the modified blocks are written to the container's writable layer.
Deleting a file or directory: When you delete a file or directory in a container's writable layer, or when an image layer deletes a file that exists in its parent layer, the devicemapper storage driver intercepts further read attempts on that file or directory and responds that the file or directory does not exist.
Writing and then deleting a file: If a container writes to a file and later deletes the file, all of those operations happen in the container's writable layer. In that case, if you are using direct-lvm, the blocks are freed. If you use loop-lvm, the blocks may not be freed. This is another reason not to use loop-lvm in production.
Device Mapper and Docker performance

- allocate-on-demand performance impact:

  The devicemapper storage driver uses an allocate-on-demand operation to allocate new blocks from the thin pool into a container's writable layer. Each block is 64KB, so this is the minimum amount of space that is used for a write.
- Copy-on-write performance impact: The first time a container modifies a specific block, that block is written to the container's writable layer. Because these writes happen at the level of the block rather than the file, performance impact is minimized. However, writing a large number of blocks can still negatively impact performance, and the devicemapper storage driver may actually perform worse than other storage drivers in this scenario. For write-heavy workloads, you should use data volumes, which bypass the storage driver completely.
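
A minimal sketch of steering write-heavy data into a volume instead of the writable layer (the volume name, image, and path are placeholders):

$ docker volume create app-data
$ docker run -d --name writer -v app-data:/data alpine \
    sh -c 'while true; do date >> /data/out.log; sleep 1; done'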
Performance best practices

Keep these things in mind to maximize performance when using the devicemapper storage driver.
- Use direct-lvm: The loop-lvm mode is not performant and should never be used in production.

- Use fast storage: Solid-state drives (SSDs) provide faster reads and writes than spinning disks.
- Memory usage: devicemapper uses more memory than some other storage drivers. Each launched container loads one or more copies of its files into memory, depending on how many blocks of the same file are being modified at the same time. Due to the memory pressure, the devicemapper storage driver may not be the right choice for certain workloads in high-density use cases.
- Use volumes for write-heavy workloads: Volumes provide the best and most predictable performance for write-heavy workloads. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. Volumes have other benefits, such as allowing you to share data among containers and persisting even when no running container is using them.
- Note: when using devicemapper and the json-file log driver, the log files generated by a container are still stored in Docker's dataroot directory, by default /var/lib/docker. If your containers generate lots of log messages, this may lead to increased disk usage or the inability to manage your system due to a full disk. You can configure a log driver to store your container logs externally.
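
For example, a daemon.json sketch that ships logs to syslog instead of json-file (the address is a placeholder):

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.0.2.10:514"
  }
}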
Related information:

- Volumes
- Understand images, containers, and storage drivers
- Select a storage driver