Soc-Camera Subsystem
====================

Terminology
-----------

The following terms are used in this document:
 - camera / camera device / camera sensor - a video-camera sensor chip, capable
   of connecting to a variety of systems and interfaces, typically uses i2c for
   control and configuration, and a parallel or a serial bus for data.
 - camera host - an interface, to which a camera is connected. Typically a
   specialised interface, present on many SoCs, e.g. PXA27x and PXA3xx, SuperH,
   AVR32, i.MX27, i.MX31.
 - camera host bus - a connection between a camera host and a camera. Can be
   parallel or serial, consists of data and control lines, e.g. clock, vertical
   and horizontal synchronization signals.

Purpose of the soc-camera subsystem
-----------------------------------

The soc-camera subsystem initially provided a unified API between camera host
drivers and camera sensor drivers. Later the soc-camera sensor API was replaced
with the standard V4L2 subdev API, which also made camera driver re-use with
non-soc-camera hosts possible. The camera host API to the soc-camera core has
been preserved.

Soc-camera implements a V4L2 interface to the user; currently only the "mmap"
method is supported by host drivers. However, the soc-camera core also provides
support for the "read" method.

The subsystem has been designed to support multiple camera host interfaces and
multiple cameras per interface, although most applications have only one camera
sensor.

Existing drivers
----------------

As of 3.7 there are seven host drivers in the mainline: atmel-isi.c,
mx1_camera.c (broken, scheduled for removal), mx2_camera.c, mx3_camera.c,
omap1_camera.c, pxa_camera.c, sh_mobile_ceu_camera.c, and multiple sensor
drivers under drivers/media/i2c/soc_camera/.

Camera host API
---------------

A host camera driver is registered using the

        soc_camera_host_register(struct soc_camera_host *);

function. The host object can be initialized as follows:

        struct soc_camera_host *ici;

        ici->drv_name     = DRV_NAME;
        ici->ops          = &camera_host_ops;
        ici->priv         = pcdev;
        ici->v4l2_dev.dev = &pdev->dev;
        ici->nr           = pdev->id;

All camera host methods are passed in a struct soc_camera_host_ops:

        static struct soc_camera_host_ops camera_host_ops = {
                .owner          = THIS_MODULE,
                .add            = camera_add_device,
                .remove         = camera_remove_device,
                .set_fmt        = camera_set_fmt_cap,
                .try_fmt        = camera_try_fmt_cap,
                .init_videobuf2 = camera_init_videobuf2,
                .poll           = camera_poll,
                .querycap       = camera_querycap,
                .set_bus_param  = camera_set_bus_param,
                /* The rest of host operations are optional */
        };

.add and .remove methods are called when a sensor is attached to or detached
from the host. .set_bus_param is used to configure physical connection
parameters between the host and the sensor. .init_videobuf2 is called by the
soc-camera core when a video device is opened; the host driver would typically
call vb2_queue_init() in this method. Further video-buffer management is
implemented completely by the specific camera host driver. If the host driver
supports non-standard pixel format conversion, it should implement the
.get_formats and, possibly, the .put_formats operations. See below for more
details about format conversion. The rest of the methods are called from the
respective V4L2 operations.
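
A minimal sketch of such an .init_videobuf2 implementation is shown below. It
assumes the 3.7-era prototype of the operation; the names camera_videobuf_ops
and struct camera_buffer are hypothetical, and the exact set of vb2_queue
fields a driver fills (as well as the chosen mem_ops) depends on the hardware
and kernel version:

/*
 * Hypothetical .init_videobuf2(): fill in the vb2 queue and let videobuf2
 * do the rest. camera_videobuf_ops and struct camera_buffer are assumed to
 * be defined elsewhere in the host driver.
 */
static int camera_init_videobuf2(struct vb2_queue *q,
                                 struct soc_camera_device *icd)
{
        q->type            = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        q->io_modes        = VB2_MMAP;
        q->drv_priv        = icd;
        q->ops             = &camera_videobuf_ops;
        q->mem_ops         = &vb2_dma_contig_memops;
        q->buf_struct_size = sizeof(struct camera_buffer);

        return vb2_queue_init(q);
}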

Camera API
----------

Sensor drivers can use struct soc_camera_link, typically provided by the
platform, to specify to which camera host bus the sensor is connected and,
optionally, to provide platform .power and .reset methods for the camera. This
struct is passed to the camera driver via the I2C client device platform data
and can be obtained using the soc_camera_i2c_to_link() macro. Care should be
taken when using soc_camera_vdev_to_subdev() and when accessing struct
soc_camera_device using v4l2_get_subdev_hostdata(): both only work when running
on an soc-camera host. The actual camera driver operation is implemented using
the V4L2 subdev API. Additionally, soc-camera camera drivers can use auxiliary
soc-camera helper functions like soc_camera_power_on() and
soc_camera_power_off(), which switch regulators provided by the platform and
call board-specific power switching methods. soc_camera_apply_board_flags()
takes camera bus configuration capability flags and applies any board
transformations, e.g. signal polarity inversion. soc_mbus_get_fmtdesc() can be
used to obtain a pixel format descriptor corresponding to a certain media-bus
pixel format code. soc_camera_limit_side() can be used to restrict the
beginning and length of a frame side, based on camera capabilities.
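
As an illustration, a sensor driver's .s_power() handler could be as simple as
the following sketch. It assumes the 3.7-era prototypes, where
soc_camera_power_on() and soc_camera_power_off() take the device and its
struct soc_camera_link (these prototypes changed in later kernels):

/*
 * Hypothetical sensor .s_power() using the soc-camera power helpers.
 */
static int sensor_s_power(struct v4l2_subdev *sd, int on)
{
        struct i2c_client *client = v4l2_get_subdevdata(sd);
        struct soc_camera_link *icl = soc_camera_i2c_to_link(client);

        return on ? soc_camera_power_on(&client->dev, icl) :
                    soc_camera_power_off(&client->dev, icl);
}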

VIDIOC_S_CROP and VIDIOC_S_FMT behaviour
----------------------------------------

The above user ioctls modify image geometry as follows:

VIDIOC_S_CROP: sets location and sizes of the sensor window. Unit is one sensor
pixel. Changing sensor window sizes preserves any scaling factors, therefore
user window sizes change as well.

VIDIOC_S_FMT: sets user window. Should preserve previously set sensor window as
much as possible by modifying scaling factors. If the sensor window cannot be
preserved precisely, it may be changed too.

In soc-camera there are two locations where scaling and cropping can take
place: in the camera driver and in the host driver. User ioctls are first
passed to the host driver, which then generally passes them down to the camera
driver. It is more efficient to perform scaling and cropping in the camera
driver to save camera bus bandwidth and maximise the framerate. However, if the
camera driver failed to set the required parameters with sufficient precision,
the host driver may decide to also use its own scaling and cropping to fulfill
the user's request.
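
For example, if the user requests a 320x240 image from a 1280x960 sensor window
but the sensor can only scale its output down to 640x480, the host driver has
to scale the remaining factor of two itself. A simplified, purely hypothetical
helper for that residual calculation (real drivers typically work with
fractional scale factors) could look like:

/*
 * Hypothetical host-side helper: how much additional (integer) scaling
 * the host must apply on top of what the camera already did.
 */
static void host_calc_residual_scale(const struct v4l2_mbus_framefmt *cam_fmt,
                                     unsigned int user_width,
                                     unsigned int user_height,
                                     unsigned int *scale_x,
                                     unsigned int *scale_y)
{
        *scale_x = cam_fmt->width / user_width;    /* e.g. 640 / 320 = 2 */
        *scale_y = cam_fmt->height / user_height;  /* e.g. 480 / 240 = 2 */
}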

Camera drivers are interfaced to the soc-camera core and to host drivers over
the v4l2-subdev API, which is completely functional: it doesn't pass any data.
Therefore all camera drivers shall reply to .g_fmt() requests with their
current output geometry. This is necessary to correctly configure the camera
bus. .s_fmt() and .try_fmt() have to be implemented too. Sensor window and
scaling factors have to be maintained by camera drivers internally. According
to the V4L2 API all capture drivers must support the VIDIOC_CROPCAP ioctl,
hence we rely on camera drivers implementing .cropcap(). If the camera driver
does not support cropping, it may choose not to implement .s_crop(), but to
enable cropping support by the camera host driver, at least the .g_crop method
must be implemented.
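
For a sensor that does not scale or crop, a minimal pair of such handlers could
look like the sketch below. It assumes the 3.7-era v4l2_subdev_video_ops
.cropcap and .g_crop operations; struct sensor_priv, its rect field and the
1280x960 bounds are made up for the example:

static int sensor_cropcap(struct v4l2_subdev *sd, struct v4l2_cropcap *a)
{
        a->type          = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        a->bounds.left   = 0;
        a->bounds.top    = 0;
        a->bounds.width  = 1280;        /* hypothetical full frame */
        a->bounds.height = 960;
        a->defrect       = a->bounds;
        a->pixelaspect.numerator   = 1;
        a->pixelaspect.denominator = 1;

        return 0;
}

static int sensor_g_crop(struct v4l2_subdev *sd, struct v4l2_crop *a)
{
        struct sensor_priv *priv = container_of(sd, struct sensor_priv, subdev);

        a->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        a->c    = priv->rect;           /* current sensor window */

        return 0;
}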

User window geometry is kept in .user_width and .user_height fields in struct
soc_camera_device and used by the soc-camera core and host drivers. The core
updates these fields upon successful completion of a .s_fmt() call, but if
these fields change elsewhere, e.g. during .s_crop() processing, the host
driver is responsible for updating them.

Format conversion
-----------------

V4L2 distinguishes between pixel formats, as they are stored in memory, and as
they are transferred over a media bus. Soc-camera provides support to
conveniently manage these formats. A table of standard transformations is
maintained by the soc-camera core, describing what FOURCC pixel format will be
obtained if a media-bus pixel format is stored in memory according to certain
rules. E.g. if MEDIA_BUS_FMT_YUYV8_2X8 data is sampled with 8 bits per sample
and stored in memory in little-endian order with no gaps between bytes, data in
memory will represent the V4L2_PIX_FMT_YUYV FOURCC format. These standard
transformations will be used by soc-camera or by camera host drivers to
configure camera drivers to produce the FOURCC format requested by the user
with the VIDIOC_S_FMT ioctl(). Apart from those standard format conversions,
host drivers can also provide their own conversion rules by implementing the
.get_formats and, if required, the .put_formats methods.
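
A .get_formats implementation typically enumerates the media-bus codes the
attached sensor supports and pairs each of them with one or more memory
layouts. The fragment below is only a sketch: it assumes the 3.7-era prototype,
the xlate == NULL "just count the formats" convention, and that only the
standard soc-mbus translation is exposed:

/*
 * Hypothetical .get_formats(): expose the standard memory layout for the
 * media-bus code the sensor reports at index "idx".
 */
static int camera_get_formats(struct soc_camera_device *icd, unsigned int idx,
                              struct soc_camera_format_xlate *xlate)
{
        struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
        const struct soc_mbus_pixelfmt *fmt;
        enum v4l2_mbus_pixelcode code;
        int ret;

        ret = v4l2_subdev_call(sd, video, enum_mbus_fmt, idx, &code);
        if (ret < 0)
                return 0;       /* no more native formats */

        fmt = soc_mbus_get_fmtdesc(code);
        if (!fmt)
                return 0;       /* unknown to the soc-mbus table */

        if (xlate) {            /* NULL means the core only counts formats */
                xlate->host_fmt = fmt;
                xlate->code     = code;
        }

        return 1;               /* one format added for this code */
}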

--
Author: Guennadi Liakhovetski <g.liakhovetski@gmx.de>