queue-sysfs.txt

Queue sysfs files
=================

This text file will detail the queue files that are located in the sysfs tree
for each block device. Note that stacked devices typically do not export
any settings, since their queue merely functions as a remapping target.
These files are the ones found in the /sys/block/xxx/queue/ directory.

Files denoted with a RO postfix are readonly and the RW postfix means
read-write.
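
These attributes are plain text files, so they can be read with ordinary
file I/O. A minimal sketch (the helper name and the sysfs_root parameter
are illustrative assumptions; on a real system the attributes live under
/sys/block):

```python
import os

def read_queue_attr(dev, name, sysfs_root="/sys/block"):
    """Read one queue attribute of a block device, e.g.
    dev='sda', name='logical_block_size', and return its
    value as a stripped string."""
    path = os.path.join(sysfs_root, dev, "queue", name)
    with open(path) as f:
        return f.read().strip()
```

For example, read_queue_attr("sda", "logical_block_size") would typically
return "512" or "4096"; the sysfs_root parameter exists only so the helper
can be exercised against a mock directory tree.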

add_random (RW)
---------------
This file allows one to turn off the disk entropy contribution. The default
value of this file is '1' (on).

discard_granularity (RO)
------------------------
This shows the size of the internal allocation of the device in bytes, if
reported by the device. A value of '0' means the device does not support
the discard functionality.

discard_max_hw_bytes (RO)
-------------------------
Devices that support discard functionality may have internal limits on
the number of bytes that can be trimmed or unmapped in a single operation.
The discard_max_hw_bytes value is set by the device driver to the maximum
number of bytes that can be discarded in a single operation. Discard
requests issued to the device must not exceed this limit. A value of 0
means that the device does not support discard functionality.

discard_max_bytes (RW)
----------------------
While discard_max_hw_bytes is the hardware limit for the device, this
setting is the software limit. Some devices exhibit large latencies when
large discards are issued; setting this value lower will make Linux issue
smaller discards and potentially help reduce latencies induced by large
discard operations.

discard_zeroes_data (RO)
------------------------
When read, this file shows whether blocks are zeroed by the device when
they are discarded. If its value is '1', the blocks are zeroed; otherwise
they are not.

hw_sector_size (RO)
-------------------
This is the hardware sector size of the device, in bytes.

iostats (RW)
------------
This file is used to control (on/off) the iostats accounting of the
disk.

logical_block_size (RO)
-----------------------
This is the logical block size of the device, in bytes.

max_hw_sectors_kb (RO)
----------------------
This is the maximum number of kilobytes supported in a single data transfer.

max_integrity_segments (RO)
---------------------------
When read, this file shows the maximum number of integrity segments, as
set by the block layer, which the hardware controller can handle.

max_sectors_kb (RW)
-------------------
This is the maximum number of kilobytes that the block layer will allow
for a filesystem request. It must be smaller than or equal to the maximum
size allowed by the hardware.
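
Because values above the hardware limit are not accepted, a tool that tunes
max_sectors_kb should bound the requested value by max_hw_sectors_kb before
writing it back. A sketch (the function name is a hypothetical helper, not a
kernel interface):

```python
def clamp_max_sectors_kb(requested_kb, hw_limit_kb):
    """Bound a requested max_sectors_kb by the hardware limit
    (max_hw_sectors_kb), since the effective software limit may
    not exceed it."""
    if requested_kb < 1:
        raise ValueError("max_sectors_kb must be a positive number")
    return min(requested_kb, hw_limit_kb)
```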

max_segments (RO)
-----------------
Maximum number of segments of the device.

max_segment_size (RO)
---------------------
Maximum segment size of the device.

minimum_io_size (RO)
--------------------
This is the smallest preferred IO size reported by the device.

nomerges (RW)
-------------
This enables the user to disable the lookup logic involved with IO
merging of requests in the block layer. By default (0) all merges are
enabled. When set to 1, only simple one-hit merges will be tried. When
set to 2, no merge algorithms will be tried (including one-hit or more
complex tree/hash lookups).

nr_requests (RW)
----------------
This controls how many requests may be allocated in the block layer for
read or write requests. Note that the total allocated number may be twice
this amount, since it applies only to reads or writes (not the accumulated
sum).

To avoid priority inversion through request starvation, a request
queue maintains a separate request pool for each block cgroup when
CONFIG_BLK_CGROUP is enabled, and this parameter applies to each such
per-block-cgroup request pool. IOW, if there are N block cgroups,
each request queue may have up to N request pools, each independently
regulated by nr_requests.
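
Putting the two notes above together gives a worked upper bound on
allocations for one queue, assuming each pool uses its full read and write
allowance (the function is illustrative arithmetic, not a kernel formula):

```python
def max_allocated_requests(nr_requests, nr_block_cgroups=1):
    """Upper bound on requests allocated for one queue: reads and
    writes are limited independently (hence the factor of 2), and
    with CONFIG_BLK_CGROUP each of the N block cgroups may have its
    own pool regulated by nr_requests."""
    return 2 * nr_requests * nr_block_cgroups
```

For example, with the common default nr_requests of 128 and four block
cgroups, up to 2 * 128 * 4 = 1024 requests could be allocated.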

optimal_io_size (RO)
--------------------
This is the optimal IO size reported by the device.

physical_block_size (RO)
------------------------
This is the physical block size of the device, in bytes.

read_ahead_kb (RW)
------------------
Maximum number of kilobytes to read-ahead for filesystems on this block
device.

rotational (RW)
---------------
This file is used to state whether the device is of rotational type or
non-rotational type.

rq_affinity (RW)
----------------
If this option is '1', the block layer will migrate request completions to the
cpu "group" that originally submitted the request. For some workloads this
provides a significant reduction in CPU cycles due to caching effects.

For storage configurations that need to maximize distribution of completion
processing, setting this option to '2' forces the completion to run on the
requesting cpu (bypassing the "group" aggregation logic).

scheduler (RW)
--------------
When read, this file will display the current and available IO schedulers
for this block device. The currently active IO scheduler will be enclosed
in [] brackets. Writing an IO scheduler name to this file will switch
control of this block device to that new IO scheduler. Note that writing
an IO scheduler name to this file will attempt to load that IO scheduler
module, if it isn't already present in the system.
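
The read format described above ("current and available schedulers, active
one in brackets") is simple to parse. A sketch, assuming a typical file
content such as "noop deadline [cfq]" (the function name is an illustrative
choice):

```python
import re

def parse_schedulers(text):
    """Parse the contents of /sys/block/<dev>/queue/scheduler,
    e.g. 'noop deadline [cfq]', into (active, available), where
    the active scheduler is the bracketed name."""
    active = None
    available = []
    for token in text.split():
        m = re.fullmatch(r"\[(.+)\]", token)
        if m:
            active = m.group(1)
            available.append(m.group(1))
        else:
            available.append(token)
    return active, available
```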

Jens Axboe <jens.axboe@oracle.com>, February 2009