The Linux Kernel Device Model

Patrick Mochel	<mochel@digitalimplant.org>

Drafted 26 August 2002
Updated 31 January 2006


Overview
~~~~~~~~
The Linux Kernel Driver Model is a unification of all the disparate driver
models that were previously used in the kernel. It is intended to augment the
bus-specific drivers for bridges and devices by consolidating a set of data
and operations into globally accessible data structures.

Traditional driver models implemented some sort of tree-like structure
(sometimes just a list) for the devices they control. There wasn't any
uniformity across the different bus types.

The current driver model provides a common, uniform data model for describing
a bus and the devices that can appear under the bus. The unified bus
model includes a set of common attributes which all busses carry, and a set
of common callbacks, such as device discovery during bus probing, bus
shutdown, bus power management, etc.

The common device and bridge interface reflects the goals of the modern
computer: namely the ability to do seamless device "plug and play", power
management, and hot plug. In particular, the model dictated by Intel and
Microsoft (namely ACPI) ensures that almost every device on almost any bus
on an x86-compatible system can work within this paradigm. Of course,
not every bus is able to support all such operations, although most
buses support most of those operations.

Downstream Access
~~~~~~~~~~~~~~~~~
Common data fields have been moved out of individual bus layers into a common
data structure. These fields must still be accessed by the bus layers,
and sometimes by the device-specific drivers.

Other bus layers are encouraged to do what has been done for the PCI layer.
struct pci_dev now looks like this:

  struct pci_dev {
	...

	struct device dev;	/* Generic device interface */
	...
  };

Note first that the struct device dev within the struct pci_dev is
statically allocated. This means only one allocation on device discovery.

Note also that the struct device dev is not necessarily defined at the
front of the pci_dev structure. This is to make people think about what
they're doing when switching between the bus driver and the global driver,
and to discourage meaningless and incorrect casts between the two.
The PCI bus layer freely accesses the fields of struct device. It knows about
the structure of struct pci_dev, and it should know the structure of struct
device. Individual PCI device drivers that have been converted to the current
driver model generally do not and should not touch the fields of struct device,
unless there is a compelling reason to do so.

The above abstraction prevents unnecessary pain during transitional phases.
If it were not done this way, then when a field was renamed or removed, every
downstream driver would break. On the other hand, if only the bus layer
(and not the device layer) accesses the struct device, it is only the bus
layer that needs to change.

User Interface
~~~~~~~~~~~~~~
By virtue of having a complete hierarchical view of all the devices in the
system, exporting a complete hierarchical view to userspace becomes relatively
easy. This has been accomplished by implementing a special purpose virtual
file system named sysfs.

Almost all mainstream Linux distros mount this filesystem automatically; you
can see some variation of the following in the output of the "mount" command:

$ mount
...
none on /sys type sysfs (rw,noexec,nosuid,nodev)
...
$

The auto-mounting of sysfs is typically accomplished by an entry similar to
the following in the /etc/fstab file:

none		/sys	sysfs	defaults	0 0

or something similar in the /lib/init/fstab file on Debian-based systems:

none		/sys	sysfs	nodev,noexec,nosuid	0 0

If sysfs is not automatically mounted, you can always do it manually with:

# mount -t sysfs sysfs /sys
Whenever a device is inserted into the tree, a directory is created for it.
This directory may be populated at each layer of discovery - the global layer,
the bus layer, or the device layer.

The global layer currently creates two files - 'name' and 'power'. The
former only reports the name of the device. The latter reports the
current power state of the device. It will also be used to set the current
power state.

The bus layer may also create files for the devices it finds while probing the
bus. For example, the PCI layer currently creates 'irq' and 'resource' files
for each PCI device.

A device-specific driver may also export files in its directory to expose
device-specific data or tunable interfaces.
More information about the sysfs directory layout can be found in
the other documents in this directory and in the file
Documentation/filesystems/sysfs.txt.