The intrinsic dead time of a detector is often due to its physical properties; for example, a spark chamber is "dead" until the potential between the plates recovers above a sufficiently high value. In other cases, the detector is live again after a first event and generates a signal for the subsequent event, but the signal is such that the readout is unable to distinguish and separate the two, resulting either in the loss of the event or in a so-called "pile-up" event, in which, for example, a (possibly partial) sum of the deposited energies of the two events is recorded instead. In some cases this can be minimized by proper design, but often only at the expense of other properties such as energy resolution.

For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is unable to record another event [1]. An everyday example is what happens when someone takes a picture with a flash: another picture cannot be taken immediately afterwards, because the flash needs a few seconds to recharge. In addition to reducing the detection efficiency, dead time can have other effects, such as creating possible exploits in quantum cryptography [2].

We consider another technique for determining the detector's dead time, which requires a single detector and a single counting rate. This technique is based on the distribution of time intervals between successive counts. Modern data acquisition systems, such as those used in the EUROTRANS experiments on the Yalina booster, can record the times of individual detector signals instead of mere average counting rates. It is well known that for a Poisson process of rate r, the distribution of time intervals between successive counts (call it I(t)) takes the form of a simple exponential, I(t) = r exp(-rt).
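The exponential form of the interval distribution is easy to verify numerically. The sketch below (Python; the rate and sample size are illustrative values of our own, not taken from the measurements discussed here) builds a Poisson process from exponential inter-arrival times and checks that the mean interval is 1/r:

```python
import random

def poisson_event_times(rate, n_events, seed=1):
    """Event times of a Poisson process: cumulative sums of
    exponentially distributed inter-arrival times."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_events):
        t += rng.expovariate(rate)
        times.append(t)
    return times

rate = 100.0                      # illustrative counting rate (counts/s)
times = poisson_event_times(rate, 200_000)
intervals = [b - a for a, b in zip(times, times[1:])]

# For I(t) = r*exp(-r*t) the mean interval is 1/r = 0.01 s here.
mean_interval = sum(intervals) / len(intervals)
print(mean_interval)
```

Histogramming `intervals` and comparing with r exp(-rt) makes the exponential shape explicit.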
However, the presence of a dead time changes this distribution. Analytical formulas for this distribution have long been known for the cases of a single paralyzable or non-paralyzable dead time [14]. For the non-paralyzable case with dead time τ, the formula is I(t) = r exp(-r(t - τ)) for t > τ (and zero for t ≤ τ), while the paralyzable case yields a more involved expression containing ⌊t/τ⌋, the largest integer not exceeding t/τ. In the case of a serial arrangement of dead times, the determination of the analytical form becomes cumbersome, and the reader is referred to the bibliography [19, 20]. Nevertheless, the shape of I(t) for complex dead-time arrangements can also be determined with Monte Carlo simulations. To this end, we have written a program that generates events simulating an initially random Poisson process and then applies dead times of different types one by one. Some of the results obtained with this program are illustrated in Figure 5: the cases of a purely paralyzable and a purely non-paralyzable dead time, as well as the cases of serial arrangements of two dead times, paralyzable/non-paralyzable and non-paralyzable/paralyzable. The normalization was chosen so that the integral of I(t) over all intervals equals unity for an unperturbed Poisson distribution; with this normalization, I(0) is equal to the counting rate of the unperturbed Poisson process. Note the distinct differences between the shapes in these cases. As a consequence, the shape of I(t) can be used to determine not only the value of the system's dead time but also its nature (paralyzable or non-paralyzable), even in the presence of complex serial arrangements of dead times.
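A minimal version of such a Monte Carlo program can be sketched as follows (Python; the rate and dead-time values are illustrative assumptions, not those of the experiment). It generates a Poisson event train and then filters it with non-paralyzable and paralyzable dead times, which can be chained to model serial arrangements:

```python
import random

def poisson_event_times(rate, n_events, seed=2):
    """Cumulative sums of exponential inter-arrival times."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_events):
        t += rng.expovariate(rate)
        times.append(t)
    return times

def non_paralyzable(times, tau):
    """Record an event only if at least tau has elapsed since the
    last *recorded* event; lost events do not extend the dead period."""
    recorded, last = [], float("-inf")
    for t in times:
        if t - last >= tau:
            recorded.append(t)
            last = t
    return recorded

def paralyzable(times, tau):
    """Every true event, recorded or not, restarts the dead period:
    an event is recorded only if it arrives at least tau after the
    previous true event."""
    recorded, prev = [], float("-inf")
    for t in times:
        if t - prev >= tau:
            recorded.append(t)
        prev = t
    return recorded

rate, tau = 100.0, 2e-3           # illustrative values: 100 counts/s, 2 ms
times = poisson_event_times(rate, 200_000)
duration = times[-1]

m_np = len(non_paralyzable(times, tau)) / duration   # ~ r / (1 + r*tau)
m_p = len(paralyzable(times, tau)) / duration        # ~ r * exp(-r*tau)
# Serial arrangement: a paralyzable dead time followed by a longer
# non-paralyzable one.
m_serial = len(non_paralyzable(paralyzable(times, tau), 3e-3)) / duration
print(m_np, m_p, m_serial)
```

Histogramming the intervals of each filtered train, instead of just counting the recorded events, reproduces the distorted I(t) shapes whose differences are exploited in the text.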
The most common technique for measuring dead time is the two-source technique [13, 15]. Suppose we have two sources placed at given distances from the detector, as shown schematically in Figure 2. Typical values of the parameters of the problem are given in Table 1; for these values the optimal counting rate follows from (20). With these parameters the dead-time correction exceeds the limit of validity of the first-order approximation (3). The only parameter in (20) that can easily be changed is the measurement time: if we choose 10 s instead of 1 s, the optimal counting rate changes accordingly, and the correction becomes about 10%, low enough for (3) to be used. Therefore, the detector should be placed at positions with counting rates of this order. The remaining factor can be taken as unity without major error in the calculation of the optimal counting rate.

The measurement of the dead time is usually carried out with two neutron sources of constant intensity (the so-called two-source method). However, in Section 2 we propose alternative techniques that can be applied directly to measurements at spallation sources or ADS, without the need for special calibration experiments, and that can provide a better characterization of the detection system's dead time than the two-source method.
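To first order, the two-source method reduces to a few lines. In this sketch (Python) the measured rates n1, n2 and n12 are hypothetical values generated from an assumed non-paralyzable dead time, the background rate is neglected, and the formula is the standard first-order two-source estimate, not a formula quoted from this paper:

```python
def measured(true_rate, tau):
    """Measured rate of a non-paralyzable detector: m = N / (1 + N*tau)."""
    return true_rate / (1.0 + true_rate * tau)

def two_source_tau(n1, n2, n12):
    """First-order two-source estimate. True rates add (N1 + N2 = N12);
    expanding N = n*(1 + n*tau) to first order in n*tau gives
    tau = (n1 + n2 - n12) / (n12**2 - n1**2 - n2**2)."""
    return (n1 + n2 - n12) / (n12**2 - n1**2 - n2**2)

tau_true = 1e-5                   # hypothetical dead time, 10 us
N1 = N2 = 5000.0                  # hypothetical true source rates (counts/s)
n1, n2 = measured(N1, tau_true), measured(N2, tau_true)
n12 = measured(N1 + N2, tau_true)
tau_est = two_source_tau(n1, n2, n12)
print(tau_est)
```

At these (fairly high) rates the first-order estimate overshoots the true value by roughly 15%, illustrating why the text insists on counting rates low enough for the first-order approximation to hold.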
These techniques were applied to the experimental results obtained at the Yalina subcritical booster facility [10, 11] during the EUROTRANS experimental campaign conducted there [12]. Dead time is defined as the period during which data cannot be collected or are blocked. Whenever a gamma ray is absorbed by the scintillation crystal, the crystal absorbs its energy and re-emits it in the form of light. The light pulse is then processed and analyzed by the system. As long as the light pulse has not died away and the signal has not been processed, the system cannot record any new gamma rays. The dead time depends on the counting rate, the type of scintillation crystal, and the electronics. The phenomenon of dead time is also very important for Geiger counters. Typically it has a value of about 100 μs (for proportional counters it is much smaller), so after each ionizing event a Geiger tube is essentially switched off for about 100 μs.
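The practical impact of a ~100 μs Geiger dead time can be seen from the standard rate-loss formulas. This is a sketch using the two classic models; the true rates are arbitrary example values:

```python
import math

TAU = 100e-6  # typical Geiger-tube dead time, ~100 us

def observed_nonparalyzable(true_rate, tau=TAU):
    """m = r / (1 + r*tau): the tube is dead for a fixed tau
    after each *recorded* count."""
    return true_rate / (1.0 + true_rate * tau)

def observed_paralyzable(true_rate, tau=TAU):
    """m = r * exp(-r*tau): every true event, recorded or not,
    restarts the dead period."""
    return true_rate * math.exp(-true_rate * tau)

for r in (100.0, 1000.0, 5000.0):  # arbitrary true rates (counts/s)
    print(r, observed_nonparalyzable(r), observed_paralyzable(r))
```

At 100 counts/s the loss is about 1%; at 5000 counts/s a non-paralyzable tube already loses a third of the counts, and a paralyzable one even more.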
The dead time of a detector is defined as the minimum time interval by which two consecutive counts must be separated in order to be recorded as two different events.