The Spidernet Device Driver
===========================

Written by Linas Vepstas <linas@austin.ibm.com>
Version of 7 June 2007

Abstract
========
This document sketches the structure of portions of the spidernet
device driver in the Linux kernel tree. The spidernet is a gigabit
ethernet device built into the Toshiba southbridge commonly used
in the SONY Playstation 3 and the IBM QS20 Cell blade.

The Structure of the RX Ring.
=============================
The receive (RX) ring is a circular linked list of RX descriptors,
together with three pointers into the ring that are used to manage its
contents.

The elements of the ring are called "descriptors" or "descrs"; they
describe the received data. This includes a pointer to a buffer
containing the received data, the buffer size, and various status bits.

There are three primary states that a descriptor can be in: "empty",
"full" and "not-in-use". An "empty" or "ready" descriptor is ready
to receive data from the hardware. A "full" descriptor has data in it,
and is waiting to be emptied and processed by the OS. A "not-in-use"
descriptor is neither empty nor full; it is simply not ready. It may
not even have a data buffer attached to it, or may be otherwise unusable.

On device startup, the OS (specifically, the spidernet device driver)
allocates a set of RX descriptors and RX buffers. These are all marked
"empty", ready to receive data. This ring is handed off to the hardware,
which sequentially fills in the buffers and marks them "full". The OS
follows behind, taking the full buffers, processing them, and re-marking
them empty.

This filling and emptying is coordinated by three pointers: the "head"
and "tail" pointers, maintained by the OS, and the hardware's current
descriptor pointer (GDACTDPA). The GDACTDPA points at the descr
currently being filled. When this descr is filled, the hardware
marks it full, and advances the GDACTDPA by one. Thus, when there is
flowing RX traffic, every descr behind it should be marked "full",
and everything in front of it should be "empty". If the hardware
discovers that the current descr is not empty, it will signal an
interrupt, and halt processing.
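
As a rough illustration, the ring and its three pointers can be modelled
in C as below. This is a simplified sketch for the purposes of this
document, not the driver's actual data structures; all type and field
names here are made up, except that the "hw" index plays the role of the
GDACTDPA register.

    /* Simplified model of the RX ring; not the driver's actual layout. */
    #include <stddef.h>

    enum descr_state { DESCR_EMPTY, DESCR_FULL, DESCR_NOT_IN_USE };

    struct rx_descr {
            enum descr_state state;
            void *buf;               /* dma-mapped receive buffer */
            size_t buf_size;
    };

    struct rx_ring {
            struct rx_descr *descrs; /* circular array of descrs */
            int num_descrs;
            int head;                /* OS: next descr to make "empty" again  */
            int tail;                /* OS: next descr to process when "full" */
            int hw;                  /* models the hardware GDACTDPA pointer  */
    };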

The tail pointer tails or trails the hardware pointer. When the
hardware is ahead, the tail pointer will be pointing at a "full"
descr. The OS will process this descr, and then mark it "not-in-use",
and advance the tail pointer. Thus, when there is flowing RX traffic,
all of the descrs in front of the tail pointer should be "full", and
all of those behind it should be "not-in-use". When RX traffic is not
flowing, then the tail pointer can catch up to the hardware pointer.
The OS will then note that the current tail is "empty", and halt
processing.
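
In terms of the model above, the tail-side processing amounts to a loop
of roughly the following shape. This is an illustrative sketch, not the
driver's actual receive path; deliver_packet() is a made-up stand-in for
handing the data up the network stack.

    /* Sketch: drain "full" descrs starting at the tail (model code only). */
    void deliver_packet(void *buf, size_t len);    /* stand-in helper */

    static void rx_process_tail(struct rx_ring *ring)
    {
            struct rx_descr *d = &ring->descrs[ring->tail];

            while (d->state == DESCR_FULL) {
                    deliver_packet(d->buf, d->buf_size);

                    d->state = DESCR_NOT_IN_USE;
                    ring->tail = (ring->tail + 1) % ring->num_descrs;
                    d = &ring->descrs[ring->tail];
            }
            /* The loop stops when the tail descr is "empty": the tail has
             * caught up with the hardware and there is nothing left to do. */
    }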

The head pointer (somewhat mis-named) follows after the tail pointer.
When traffic is flowing, then the head pointer will be pointing at
a "not-in-use" descr. The OS will perform various housekeeping duties
on this descr. This includes allocating a new data buffer and
dma-mapping it so as to make it visible to the hardware. The OS will
then mark the descr as "empty", ready to receive data. Thus, when there
is flowing RX traffic, everything in front of the head pointer should
be "not-in-use", and everything behind it should be "empty". If no
RX traffic is flowing, then the head pointer can catch up to the tail
pointer, at which point the OS will notice that the head descr is
"empty", and it will halt processing.

Thus, in an idle system, the GDACTDPA, tail and head pointers will
all be pointing at the same descr, which should be "empty". All of the
other descrs in the ring should be "empty" as well.

The show_rx_chain() routine will print out the locations of the
GDACTDPA, tail and head pointers. It will also summarize the contents
of the ring, starting at the tail pointer, and listing the status
of the descrs that follow.

A typical example of the output, for a nearly idle system, might be:

    net eth1: Total number of descrs=256
    net eth1: Chain tail located at descr=20
    net eth1: Chain head is at 20
    net eth1: HW curr desc (GDACTDPA) is at 21
    net eth1: Have 1 descrs with stat=x40800101
    net eth1: HW next desc (GDACNEXTDA) is at 22
    net eth1: Last 255 descrs with stat=xa0800000

In the above, the hardware has filled in one descr, number 20. Both
head and tail are pointing at 20, because it has not yet been emptied.
Meanwhile, the hardware pointer is at 21, which is free.

The "Have nnn descrs" line refers to the descrs starting at the tail: in
this case, nnn=1 descr, starting at descr 20. The "Last nnn descrs" line
refers to all of the rest of the descrs, from the last status change. The
"nnn" is a count of how many descrs have exactly the same status.
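
In other words, the "Have nnn"/"Last nnn" lines are a run-length summary
of the descriptor status words, starting at the tail. A hedged sketch of
that summarization, in terms of the model ring above (the real
show_rx_chain() prints the raw hardware status values rather than a
model enum):

    /* Sketch: run-length summary of descr states, starting at the tail. */
    #include <stdio.h>

    static void show_chain_summary(struct rx_ring *ring)
    {
            int i = ring->tail;
            int count = 0;
            enum descr_state run = ring->descrs[i].state;

            for (int n = 0; n < ring->num_descrs; n++) {
                    enum descr_state s = ring->descrs[i].state;

                    if (s != run) {
                            printf("Have %d descrs with state=%d\n", count, run);
                            run = s;
                            count = 0;
                    }
                    count++;
                    i = (i + 1) % ring->num_descrs;
            }
            printf("Last %d descrs with state=%d\n", count, run);
    }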

The status x4... corresponds to "full" and status xa... corresponds
to "empty". The actual value printed is RXCOMST_A.

In the device driver source code, a different set of names is used
for these same concepts, so that

    "empty"      == SPIDER_NET_DESCR_CARDOWNED   == 0xa
    "full"       == SPIDER_NET_DESCR_FRAME_END   == 0x4
    "not in use" == SPIDER_NET_DESCR_NOT_IN_USE  == 0xf

The RX RAM full bug/feature
===========================
As long as the OS can empty out the RX buffers at a rate faster than
the hardware can fill them, there is no problem. If, for some reason,
the OS fails to empty the RX ring fast enough, the hardware GDACTDPA
pointer will catch up to the head, notice the not-empty condition,
and stop. However, RX packets may still continue arriving on the wire.
The spidernet chip can save some limited number of these in local RAM.
When this local RAM fills up, the spider chip will issue an interrupt
indicating this (GHIINT0STS will show ERRINT, and the GRMFLLINT bit
will be set in GHIINT1STS). When the RX RAM full condition occurs,
a certain bug/feature is triggered that has to be specially handled.
This section describes the special handling for this condition.

When the OS finally has a chance to run, it will empty out the RX ring.
In particular, it will clear the descriptor on which the hardware had
stopped. However, once the hardware has decided that a certain
descriptor is invalid, it will not restart at that descriptor; instead
it will restart at the next descr. This potentially will lead to a
deadlock condition, as the tail pointer will be pointing at this descr,
which, from the OS point of view, is empty; the OS will be waiting for
this descr to be filled. However, the hardware has skipped this descr,
and is filling the next descrs. Since the OS doesn't see this, there
is a potential deadlock, with the OS waiting for one descr to fill,
while the hardware is waiting for a different set of descrs to become
empty.

A call to show_rx_chain() at this point indicates the nature of the
problem. A typical print when the network is hung shows the following:

    net eth1: Spider RX RAM full, incoming packets might be discarded!
    net eth1: Total number of descrs=256
    net eth1: Chain tail located at descr=255
    net eth1: Chain head is at 255
    net eth1: HW curr desc (GDACTDPA) is at 0
    net eth1: Have 1 descrs with stat=xa0800000
    net eth1: HW next desc (GDACNEXTDA) is at 1
    net eth1: Have 127 descrs with stat=x40800101
    net eth1: Have 1 descrs with stat=x40800001
    net eth1: Have 126 descrs with stat=x40800101
    net eth1: Last 1 descrs with stat=xa0800000

Both the tail and head pointers are pointing at descr 255, which is
marked xa... which is "empty". Thus, from the OS point of view, there
is nothing to be done. In particular, there is the implicit assumption
that everything in front of the "empty" descr must surely also be empty,
as explained in the previous section. The OS is waiting for descr 255 to
become non-empty, which, in this case, will never happen.

The HW pointer is at descr 0. This descr is marked 0x4.. or "full".
Since it's already full, the hardware can do nothing more, and thus has
halted processing. Notice that descrs 0 through 253 are all marked
"full", while descrs 254 and 255 are empty. (The "Last 1 descrs" is
descr 254, since tail was at 255.) Thus, the system is deadlocked,
and there can be no forward progress; the OS thinks there's nothing
to do, and the hardware has nowhere to put incoming data.

This bug/feature is worked around with the spider_net_resync_head_ptr()
routine. When the driver receives RX interrupts, but an examination
of the RX chain seems to show it is empty, then it is probable that
the hardware has skipped a descr or two (sometimes dozens under heavy
network conditions). The spider_net_resync_head_ptr() subroutine will
search the ring for the next full descr, and the driver will resume
operations there. Since this will leave "holes" in the ring, there
is also a spider_net_resync_tail_ptr() that will skip over such holes.
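
The idea behind the resync routines can be sketched with the model ring
from earlier: scan forward from the stuck position until a "full" descr
is found, and resume processing there. This is only an illustration of
the strategy, not the driver's spider_net_resync_head_ptr() or
spider_net_resync_tail_ptr() code:

    /* Sketch: find the next "full" descr after a skipped hole (model code). */
    static int rx_find_next_full(struct rx_ring *ring, int start)
    {
            int i = start;

            for (int n = 0; n < ring->num_descrs; n++) {
                    if (ring->descrs[i].state == DESCR_FULL)
                            return i;            /* resume processing here */
                    i = (i + 1) % ring->num_descrs;
            }
            return -1;                           /* chain really is empty */
    }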

As of this writing, the spider_net_resync() strategy seems to work very
well, even under heavy network loads.

The TX ring
===========
The TX ring uses a low-watermark interrupt scheme to make sure that
the TX queue is appropriately serviced for large packet sizes.

For packet sizes greater than about 1KByte, the kernel can fill
the TX ring quicker than the device can drain it. Once the ring
is full, the netdev is stopped. When there is room in the ring,
the netdev needs to be reawakened, so that more TX packets are placed
in the ring. The hardware can empty the ring about four times per jiffy,
so it's not appropriate to wait for the poll routine to refill, since
the poll routine runs only once per jiffy. The low-watermark mechanism
marks a descr about 1/4th of the way from the bottom of the queue, so
that an interrupt is generated when the descr is processed. This
interrupt wakes up the netdev, which can then refill the queue.

For large packets, this mechanism generates a relatively small number
of interrupts, about 1K/sec. For smaller packets, this will drop to zero
interrupts, as the hardware can empty the queue faster than the kernel
can fill it.
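
As a rough illustration of the watermark placement: if the queue
currently holds some number of filled TX descrs, the interrupt mark
lands on a descr chosen so that about a quarter of that backlog is
still pending when the interrupt fires. The sketch below shows only the
index arithmetic; the function and parameter names are made up, and the
real driver instead sets an interrupt flag bit in an existing TX
descriptor.

    /* Sketch: pick the TX descr that carries the low-watermark interrupt
     * flag.  'queued' is the number of filled descrs waiting for the
     * hardware, 'tail' is the descr the hardware will drain next
     * (model arithmetic only). */
    static int tx_low_watermark_index(int tail, int queued, int num_descrs)
    {
            int offset = (queued * 3) / 4;   /* ~1/4 of the backlog remains */

            return (tail + offset) % num_descrs;
    }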

======= END OF DOCUMENT ========