Pseudo high-frequency boosts the generalization of a convolutional neural network for cassava disease detection

Frequency is essential in signal transmission, and this holds for convolutional neural networks as well. Maintaining the signal frequency inside the network is vital for preserving its performance. Because signal transmission through a convolutional neural network is destructive, frequency down-conversion in the channels results in incomplete spatial information. In communication theory, the number of Fourier series coefficients determines the integrity of the information transmitted through a channel. Consequently, replenishing the Fourier series coefficients of the signals can reduce the information loss during transmission. To achieve this, the ArsenicNetPlus neural network is proposed to modulate signal transmission when detecting cassava diseases. First, multi-attention is used to maintain the long-term dependency of cassava disease features. Next, depthwise convolution removes aliasing signals and down-converts the frequency before the sampling operation. An instance batch normalization algorithm keeps the features in an appropriate form in the network channels. Finally, the ArsenicPlus block generates a pseudo high-frequency component in the residual structure. The proposed method was tested on the cassava dataset and compared with V2-ResNet-101, EfficientNet-B5, RepVGG-B3g4 and AlexNet.
The results showed that the proposed method achieved 95.93% accuracy, a loss of 1.2440, and an F1-score of 95.94%, outperforming the comparison algorithms.


Introduction
Cassava (Manihot esculenta Crantz) is one of the most widely grown crops in the world and a major staple food, feeding approximately 800 million people across Africa (55.5%), Asia (30.2%), the Americas (14.3%) and Oceania (0.1%) [1,2]. Cassava is also used as fodder, as a source of starch for developing ethanol fuel, and as an industrial raw material. During food crises, research on cassava disease diagnosis using vision algorithms has helped people manage the crises and avoid unnecessary crop losses.
There are more than 30 known cassava leaf diseases [3], of which four, namely cassava bacterial blight (CBB), cassava brown streak disease (CBSD), cassava mosaic disease (CMD) and cassava green mottle (CGM), are extremely damaging to cassava and are the main causes of cassava yield reduction.
Growing cassava on both small and large scales across Southeast Asia and Africa has been challenging. The primary challenge is that cassava plants are vulnerable to a broad range of diseases as well as lesser-known viral strains. The incidence of cassava mosaic virus epidemics has increased for decades in East Africa, and cassava brown streak disease (CBSD) in particular has led to losses of 47% of production and US$60 million per annum in lost yield, causing local famine. This has resulted in significant investments in plant breeding programs to overcome the issue [4]. Cassava bacterial blight (CBB) is a major constraint on cassava cultivation worldwide, and losses have exceeded 50-75% in regions where highly susceptible cultivars are grown [5]. To recognize disease rapidly, researchers have been exploring effective means of detecting cassava diseases using vision algorithms.
Plant disease detection is a branch of fine-grained problems whose class separability and compactness in the features extracted by a convolutional neural network [7] can be expressed using the t-SNE (t-Distributed Stochastic Neighbor Embedding) algorithm [6]. The t-SNE visualization result is illustrated in Fig. 1. However, unlike the clean backgrounds of images in common fine-grained datasets, the cassava disease images in this paper were captured in real scenarios with significantly disordered textures, similar colour distributions, and irregular gradient disturbances. With the rapid development of the technology, fine-grained research has come to provide high-performance feature descriptors for neural network encoders, such as the EfficientNet algorithms [8]. Cassava diseases are shown in Fig. 2.
Ai et al. [9] utilized the Inception-ResNet-V2 model to recognize diseases efficiently. The researchers used a competition disease-leaf dataset of 47,363 images, covering 27 diseases of 10 crop varieties, to find the most efficient model. Inception-based structures exhibit excellent performance on fine-grained tasks when combined with transfer learning [10]. Fu et al. [11] proposed introducing the attention proposal sub-network (APN) as a local attention mechanism in convolutional neural networks for fine-grained tasks. The APN eliminates useless information and pays more attention to local responses. Fine-grained technology is essential for the development of neural networks, especially in person re-identification [12,13].
Using deep neural networks, significant applications can be implemented for plant disease detection tasks. Various technologies have been utilized in neural networks to pursue high-performance results, including transfer learning, multi-task learning, meta learning [14], fine-tuning [14], ensemble learning [15], knowledge distillation [16], and loss function design [17]. Several applications have been reported in the literature. For instance, Tetila et al. [18] proposed a neural network algorithm to automatically recognize soybean leaf diseases in unmanned aerial vehicle (UAV) images. This automatic algorithm achieved 99.04% accuracy based on fine-tuning; its performance relied on transfer learning to fine-tune the network weights. However, the number of images was too low to cover the many features of disease detection in real scenarios. MobileNet [19], a lightweight CNN-based algorithm [20], achieved an accuracy of 94% in cassava disease diagnosis after pretraining on the COCO dataset. Singh et al. [21] proposed a preprocessing algorithm for mango leaf datasets and a customized network with dropout to detect anthracnose disease in mango leaves. As stated by Li et al. [22], the variance shift caused by dropout differs from that of batch normalization, which illuminated an applicable case for plant disease detection. Many studies have sought an appropriate expression of features to make up for the limitations of batch normalization, but more research is needed [23][24][25][26][27]. For example, the background images in the study of Singh et al. [21] were not captured clearly in a field scenario, which may make the resulting networks unsuitable for detecting leaf diseases in the field.
Yuan et al. [28] proposed a spatial-pyramid-oriented encoder-decoder method cascaded with a convolutional neural network for crop disease segmentation to locate the infected regions of leaves. This segmentation algorithm was 90% accurate based on K-fold cross-validation. The number of parameters and the inference time are often neglected in research explorations but matter in the deployment stage. Zhang et al. [29] proposed a global-pooling dilated convolutional neural network to detect cucumber leaf disease. The researchers used inception blocks to develop high-level feature maps based on the classical AlexNet structure and replaced the fully connected layer with a global pooling layer to reduce the network parameters. However, the spatial dimension decreases as each convolutional layer or block is followed by a sub-sampling layer [30]. Accordingly, Han et al. [31] argued that in deep CNNs, a drastic increase in feature-map depth combined with the loss of spatial information limits the learning ability of CNNs. Reyes et al. [32] used a convolutional neural network pre-trained on 1.8 million images and a fine-tuning strategy to transfer the learned recognition ability from the general domain to the specific challenge of plant recognition. Lee et al. [33] proposed a deep learning approach to quantify discriminative leaf features.
Thai et al. [34] proposed using a vision transformer (ViT) [35] to detect early leaf disease. Although computationally expensive, it is a powerful solution for early leaf disease detection. De et al. [36] applied a Faster Region-based Convolutional Neural Network (F-RCNN) to detect and recognize tomato plant leaf diseases. Zhang et al. [37] improved F-RCNN by replacing VGG16 with a deep residual network, achieving 2.71% higher recognition accuracy than previous work. RepVGGs may be an excellent backbone for F-RCNN, since the reparametrization method can be utilized to boost the generalization of VGG-style networks. Sun et al. [38] used data augmentation and image segmentation on tea images and achieved higher accuracy by frequently adjusting the number of iterations and the learning rate. Zhou et al. [39] proposed deep residual dense networks to classify tomato leaf diseases more accurately with fewer parameters. Oyewola et al. [40] detected cassava mosaic disease using deep residual convolutional neural networks with different computation blocks.
In third-generation neural networks [41], the variation of light in an image is an essential property for feature description. Texture information expresses the high-frequency component of images [42]. In the study of Wang et al. [43], the high-frequency component is known to boost the generalization performance of a convolutional neural network. Accordingly, the proposed multi-attention mechanism maintains the long-term dependency of the feature maps in the network channels. To comply with the constraints of the Nyquist-Shannon sampling theorem, the Arsenic block is proposed to down-convert the signal frequency in the network channels. The pseudo high-frequency component is utilized to maintain the number of Fourier series coefficients of the signals in the network channels.
Field images are utilized in this paper to overcome the implicit obstacles of field conditions [44].

The proposed method
A large dataset may lower the angular frequency of the kernel function. Consequently, by the properties of convolution, if the high-frequency content of the convolution kernel function is maintained, more information can be preserved in the neural network channels, and more effective information survives the filter operations. This effective information can be expressed mathematically as an objective function of the input signal.
Angular frequency is essential for maintaining the long-term dependency of features so that the objective function is kept with arbitrarily small loss in a convolutional neural network. Let the angular frequency of the convolution kernel function be ω_kernel, with ω_kernel → 0 and ω_kernel ≠ 0, and let the objective function of the input signal be S with frequency ω_S = ρ. The most ideal case is ω_S/ω_kernel = C, C ∈ N+. When the angular frequency of the Fourier series coefficients of the convolution kernel function satisfies ω_kernel → 0, the action scope of the Fourier series is lim_{ω→0} 2π/ω = ∞, and all objective functions of the input signals are maintained by the convolution operation. The uniform convergence of kernel functions to a good kernel function is stated in the Appendix.
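The role of the kernel's frequency content can be made concrete with a small numerical sketch (illustrative only, not the paper's method): the discrete Fourier transform of a convolution kernel shows how strongly each frequency band passes through the filter.

```python
import numpy as np

# Illustrative sketch: the frequency response of a convolution kernel
# shows which signal frequencies survive the filter. A 3-tap averaging
# kernel attenuates high frequencies, so repeated convolution stages
# progressively lose high-frequency detail.
kernel = np.array([1.0, 1.0, 1.0]) / 3.0       # low-pass averaging kernel
response = np.abs(np.fft.rfft(kernel, n=64))   # magnitude response, 33 bins

dc_gain = response[0]        # gain at omega = 0 (the low-frequency end)
nyquist_gain = response[-1]  # gain at the Nyquist frequency

print(dc_gain, nyquist_gain)
```

Low frequencies pass at full gain while the Nyquist-frequency gain is only one third, which is the frequency loss the paper aims to replenish.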
Considering the GFLOPs and parameter-count indicators of the network, V2-ResNet-101 was utilized as the baseline. The pipeline is illustrated in Fig. 3. Figure 4 shows the head block utilized to capture contour information at the beginning of the network. The depth-wise convolution block, the essential component of the Arsenic basic block, is illustrated in Fig. 5. The Arsenic block is illustrated in Fig. 6.
ArsenicNet is composed of a multi-attention ResBlock and Arsenic block. The multi-attention ResBlock was modified with a pseudo high-frequency component to give the ArsenicPlus block. The ArsenicPlus block was the basic component in stage 4 [45] of ArsenicNetPlus. The other stages in ArsenicNetPlus were maintained in the architecture of ArsenicNet without being modified. The architecture of the ArsenicPlus block is illustrated in Fig. 7.

Keeping long-term dependency based on multi-attention component
In communication theory, the greater the number of Fourier series coefficients, the clearer the information transmitted through a channel. This idea can be transferred from communication theory to neural networks: a larger number of Fourier series coefficients in the signals helps boost the generalization of the network.
Accordingly, the multi-attention component is proposed to maintain long-term dependency in the feature maps. The SE-block [46] and FCA-block [47] are utilized as the multi-attention structure. The multi-attention is the product of two linear transformation coefficients, and the architecture is shown in Fig. 8.
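The product of two attention coefficient vectors can be sketched as follows (a hedged numpy illustration; the random weights and sigmoid gates stand in for the trained SE and FCA branches, and are not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two channel-attention gates (an SE-style gate and a second gate standing
# in for FCA) are multiplied and used to re-weight the feature-map channels.
# The weight matrices here are random placeholders, not trained parameters.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))      # H x W x C feature map

squeeze = x.mean(axis=(0, 1))            # global average pool -> (C,)
W1 = rng.standard_normal((16, 16))
W2 = rng.standard_normal((16, 16))
gate_se = sigmoid(squeeze @ W1)          # SE-style coefficients in (0, 1)
gate_fca = sigmoid(squeeze @ W2)         # stand-in for FCA coefficients

attention = gate_se * gate_fca           # product of two coefficient vectors
y = x * attention                        # channel-wise re-weighting

print(y.shape)
```

Because each gate lies in (0, 1), the product is a conservative channel weighting: a channel passes at full strength only if both branches agree.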

Boosting the generalization using instance batch normalization
Instance batch normalization (IBN) [48] is a special algorithm applicable to convolutional neural networks. It is a combination of instance normalization and batch normalization [49]. The architecture of the IBN is illustrated in Fig. 9.
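A minimal sketch of the IBN idea, assuming the common channel split in which half the channels are instance-normalized and half are batch-normalized (the exact split ratio is an assumption, not taken from the paper):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each (sample, channel) plane over its spatial dimensions.
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    # Normalize each channel over the batch and spatial dimensions.
    mu = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8, 8, 16))   # N x H x W x C

half = x.shape[-1] // 2
y = np.concatenate([instance_norm(x[..., :half]),   # style-invariant half
                    batch_norm(x[..., half:])],     # discriminative half
                   axis=-1)

print(y.shape)
```

Instance normalization removes per-image style statistics while batch normalization preserves batch-level discriminative statistics; concatenating the two keeps both properties in one feature map.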

Complying with the down-sampling restrictions
The noisy-channel coding theorem demonstrates that if the transmission rate R ≤ the capacity C, there exists an encoding scheme that transmits information with arbitrarily small error probability. The relation between bandwidth B, capacity C, and white Gaussian noise is C = B log2(1 + S/N), where C refers to the channel capacity, B to the bandwidth, S to the signal power, and N to the noise power. The noisy-channel coding theorem applies to both digital and analogue signals. If the noise in the convolutional neural network can be controlled so that it approaches zero, then S/N → ∞ and the capacity grows without bound; following the growth curve of log2, the asymptotic value of C can be calculated from the bandwidth B, where C is the actual coding capacity. The coding capacity is an unknown parameter in a convolutional neural network.
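The Shannon-Hartley relation C = B log2(1 + S/N) can be checked numerically (a worked example, not taken from the paper):

```python
import math

def channel_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + signal_power / noise_power)

# Worked example: a 1 kHz channel at S/N = 15 carries at most
# 1000 * log2(16) = 4000 bits per second.
c = channel_capacity(1000.0, 15.0, 1.0)
print(c)
```

As noise_power is made smaller the computed capacity grows without bound, matching the limiting argument in the text.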
In a convolutional neural network, the sampling frequency does not comply with the Nyquist-Shannon sampling theorem. To comply with the down-sampling restrictions, the Arsenic block plays two roles in the proposed network. First, it removes aliasing signals from the feature maps. Second, it down-converts the frequency to comply with the down-sampling frequency. However, the feature descriptor is not an arbitrarily-small-loss coding operator when the network does not use transfer learning.
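The two roles described above (suppress aliasing, then down-convert) can be illustrated with a one-dimensional sketch, assuming a simple binomial low-pass filter as a stand-in for the block's depthwise convolution:

```python
import numpy as np

def blur_then_subsample(x):
    # Low-pass filter first (3-tap binomial kernel), then stride-2
    # subsampling, so frequencies above the new Nyquist limit are
    # suppressed before they can alias.
    kernel = np.array([0.25, 0.5, 0.25])
    padded = np.pad(x, 1, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed[::2]

n = np.arange(64)
high_freq = np.cos(np.pi * n)   # signal AT the Nyquist frequency: +1, -1, ...
naive = high_freq[::2]          # naive stride-2: aliases to a constant signal
safe = blur_then_subsample(high_freq)

print(naive[:4], safe[:4])
```

The naive subsample turns a pure Nyquist-frequency oscillation into a constant (a spurious low frequency), while filtering first removes the offending component before subsampling, which is exactly the aliasing hazard the text attributes to uncontrolled down-sampling.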
Based on the above, the signal frequency in the network channels will eventually meet the down-sampling frequency limitation. A weakened signal frequency was shown to exist in stage 4 of the convolutional neural network, as illustrated in Table 1. Thus, to repair the weakened signals, the ArsenicPlus block (Fig. 7) was utilized in stage 4 of the proposed method. The results in Table 1 were evaluated using 7-fold cross-validation.
The Nyquist-Shannon sampling theorem was found to be applicable to convolutional neural networks, and the resulting network was named ArsenicNet. To evaluate the generalization of ArsenicNet, the Fine-Grained Visual Classification of Aircraft (FGVC-Aircraft) dataset was utilised in this paper. The FGVC-Aircraft dataset has been cited in over 1000 papers and used as a benchmark dataset in over 200 papers [50].
As shown in Table 6, ArsenicNet-3 (based on ResNet-50) achieved 84.70% accuracy, which is 5.9% higher than the ResNet-50 result in the study of Lee et al. [51]. Therefore, ArsenicNet serves as the basic neural network of the ArsenicNetPlus neural network.

Building the pseudo high-frequency residual structure
As stated in the study of Wang et al. [43], high frequency plays an important role in convolutional neural networks. Unfortunately, destructive signal transmission in a convolutional neural network causes high-frequency loss, and the signals fall into an extremely weakened state. Extremely weakened signals cannot provide an accurate representation of the objective function of the source signal, and it is difficult to reconstruct them back to the original signals. We therefore put forward a new concept, pseudo high frequency, and a method that adds pseudo high frequencies to extremely weakened signals to maintain the integrity of the signal as far as possible. The pseudo high frequency approximates the original signals rather than restoring them. The pseudo high-frequency component is constructed as follows: 1. Initialize an offset template: set matrix M ∈ C^{m×n} with every entry initialized to 1.0, and update the value of M via backpropagation.

2. Matrix M is used as the element-wise exponent of the input tensor, y_{i,j,k} = (x_{i,j,k})^{M_{i,j}},
where i refers to the width index of feature map x, j refers to the height index, and k refers to the index of the feature map channels. This is a stretching operation in the frequency domain; nevertheless, the nonlinear phase-spectrum change distorts the signal distribution. Hence, this pseudo high-frequency residual operation is utilised only once, in the ArsenicPlus block (Fig. 7) of stage 4 of the proposed neural network, to replenish the pseudo high frequency in the weakened signals.
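The offset-template idea can be sketched as follows (a hedged numpy illustration, not the authors' implementation; the exponent update here is a stand-in for backpropagation):

```python
import numpy as np

# A per-position exponent matrix M (initialised to 1.0, so the block
# starts as an identity mapping) is applied element-wise to the feature
# map. Raising activations to a power != 1 is non-linear, which spreads
# energy into new frequency components.
h, w, c = 4, 4, 2
M = np.full((h, w), 1.0)                    # init: x ** 1 == x

rng = np.random.default_rng(2)
x = rng.uniform(0.1, 1.0, size=(h, w, c))   # positive activations (post-ReLU)

y_init = x ** M[..., None]                  # with M == 1: feature map unchanged
M_trained = M + 0.3                         # stand-in for a backprop update
y = x ** M_trained[..., None]               # non-linear stretching of values

print(np.allclose(y_init, x))
```

The 1.0 initialization makes the residual branch start as a no-op, so the pseudo high-frequency offset only departs from identity as training updates M.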

Cassava datasets
There are 21,393 images in the original cassava leaf disease dataset. The original dataset was not balanced across categories: the most imbalanced categories, CMD and CBB, had 13,158 and 1086 images, respectively. This imbalanced distribution is an obstacle for plant disease detection training, as it may bias performance toward the categories with the most images.
This network-collected dataset contains a significant number of imprecise images. Image pollution harms downstream projects through costly iterations, discarded models, and harm to communities [52]. Images with three main problems were therefore removed [53]: (1) Unmaintained attributes: unclear, low-quality images of cassava leaf disease in which the diseased regions are difficult to distinguish. (2) Typing errors: labeling errors; the original dataset includes not only cassava leaves but also cassava fruits, magazine covers and other unrelated material. (3) Inaccurate data: out-of-focus images. Loss of focus causes high-frequency component loss in images, and the high-frequency component is essential for boosting the generalization of a convolutional neural network; such inaccurate data would therefore harm the downstream project.
Based on the above, more than 1000 images in the healthy category were found to contain niduses, an unacceptable rate of diagnosis errors by the standards of medical image diagnosis. To balance the categories, Gaussian noise, horizontal flipping, cut-out, and vertical flipping were used for augmentation. The resulting 20,000 colour images were randomly combined into five balanced categories, with the CMD category randomly selected from the 13,158 raw CMD images. The preprocessed images have a resolution of 448 × 448 × 3 pixels, and the details are presented in Table 2.
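The augmentation operations listed above can be sketched as follows (a hedged numpy illustration; the patch position and noise level are arbitrary choices, not the paper's settings):

```python
import numpy as np

# Sketch of the augmentation ops named in the text: Gaussian noise,
# horizontal/vertical flips, and cut-out, on a 448 x 448 x 3 image.
rng = np.random.default_rng(3)
img = rng.uniform(0.0, 1.0, size=(448, 448, 3))

noisy = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
h_flip = img[:, ::-1, :]                 # horizontal flip
v_flip = img[::-1, :, :]                 # vertical flip

cut = img.copy()                         # cut-out: zero a square patch
y0, x0, size = 100, 200, 64
cut[y0:y0 + size, x0:x0 + size, :] = 0.0

print(h_flip.shape)
```

These label-preserving transforms let minority categories such as CBB be expanded to the same size as CMD without collecting new field images.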
Approximately 3400 images (17% of the dataset in this paper) suffer from bad lighting or backlighting, and approximately 2000 images (10% of the dataset) are partially obstructed.

Experimental parameters and methods for performance comparison
The proposed method was trained on the cassava dataset with the following settings: a stochastic gradient descent (SGD) optimizer [54] with an initial learning rate of 0.2, a decay factor of 0.96 per epoch, momentum of 0.9, weight decay of 1e-5, and batch normalization momentum of 0.9. The L2 regularization coefficient of the descriptor was set to 1e-5. The Hard-Sigmoid function in the SE-block reduces the computing cost of the network.
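The per-epoch exponential decay described above can be sketched as follows (a minimal illustration of the stated hyperparameters):

```python
# Learning-rate schedule from the stated settings: an initial rate of 0.2
# multiplied by a decay factor of 0.96 after every epoch.
def learning_rate(epoch, base_lr=0.2, decay=0.96):
    return base_lr * (decay ** epoch)

lrs = [learning_rate(e) for e in range(3)]   # epochs 0, 1, 2
print(lrs)
```

After one epoch the rate is 0.192, after two it is 0.18432, and so on, a geometric decay of 4% per epoch.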
Categorical cross-entropy was utilized as the loss function in this paper. The experiments used 7-fold cross-validation to obtain representative results.
The proposed network was compared with EfficientNet-B5 [8], RepVGG-B3g4 [55], V2-ResNet-101 [45], and AlexNet [56]. As stated in the study of Ferentinos [57], the VGG neural network and AlexNet have ranked first and second in accuracy over other neural networks. The classical VGG network was later modified into a new structure named RepVGG.

Experimental parameters and methods for ArsenicNet
The parameters and methods used in the experiment are consistent with those mentioned above. The proposed network was compared with the ArsenicNet neural network to verify the effectiveness of the pseudo high-frequency component.

Classic algorithm comparison results
In this section, several classical algorithms, including V2-ResNet-101, EfficientNet-B5, AlexNet, and RepVGG-B3g4, were compared with ArsenicNetPlus. Notably, this comparison used neither transfer learning nor ensemble learning. The comparison results are illustrated in Table 3. The experimental software platform is the TensorFlow 2.4.1 framework, and the hardware is an AMD Ryzen 7 3800XT @ 3.89 GHz with an NVIDIA GeForce RTX 3090.
The above-mentioned classical methods have no indicator for extremely weakened signals and no ability to repair them. The Arsenic block can be utilized as an indicator to detect extremely weakened signals, and the ArsenicPlus block can be utilised in the weakened stage to boost performance.
The accuracy (Fig. 10), recall (Fig. 11), precision (Fig. 12), and F1-score (Fig. 13) curves of ArsenicNetPlus are similar. These metrics are defined as accuracy = (TP + TN)/(TP + TN + FP + FN), recall = TP/(TP + FN), precision = TP/(TP + FP), and F1-score = (2 × precision × recall)/(precision + recall). The fluctuations of these indicators fell within a narrow interval and were smoother than those of the comparison algorithms. The validation loss function (Fig. 14) of the ArsenicNetPlus neural network had a fast descent rate. The accumulated confusion matrix of ArsenicNetPlus is shown in Table 4.
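These metric definitions can be computed directly from the confusion counts (the values below are illustrative, not the paper's results):

```python
# Metrics exactly as defined above, from true/false positive/negative counts.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

acc, rec, prec, f1 = metrics(tp=90, tn=85, fp=10, fn=15)
print(acc, rec, prec, f1)
```

The F1-score is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily against the other, which matters on imbalanced disease categories.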

Ablation experiment with pseudo high-frequency component
The best performance of ArsenicNetPlus and the ArsenicNet neural network on the cassava dataset using 7-fold cross-validation is shown in Table 5.
The comparison loss curves of ArsenicNet-3 and ArsenicNetPlus are shown in Fig. 15, and the accuracy curves are shown in Fig. 16. The comparison curves of recall, precision, and F1-score were similar to the accuracy comparison curve (Fig. 16).
The experiments on ArsenicNet and ArsenicNetPlus were carried out in the same software environment, with the same training strategy and the same training hyperparameters.

Benchmark dataset performance
In the fine-grained research field, the Fine-Grained Visual Classification of Aircraft (FGVC-Aircraft) dataset [58] is a classical fine-grained categorization dataset. We used the FGVC-Aircraft dataset to evaluate the performance of the proposed algorithm and demonstrate its effectiveness on fine-grained tasks.
This evaluation was executed using the manufacturer data format. To keep the image distribution balanced, a series of augmentation methods, including horizontal flipping, vertical flipping, combined horizontal-vertical flipping, image offsetting, shift-scale-rotation, and Gaussian noise addition, were used to enlarge the number of images. The resulting dataset contained 30 manufacturer categories; the details are presented in Table 6.
ArsenicNetPlus (based on ResNet-50) achieved 86.59% accuracy, which is 7.79% higher than the result of [51] and 1.89% higher than that of ArsenicNet-3.

Comparison of existing methods for cassava disease detection
ArsenicNetPlus is an end-to-end neural network algorithm. In comparison to other cassava leaf disease detection methods (Table 7), ArsenicNetPlus does not use transfer learning, ensemble learning or fine-tuning.

Discussion
To verify the performance of the algorithm proposed in this paper, four other algorithms are compared in Table 8. The proposed algorithm achieved the highest accuracy among the comparison algorithms used [20,59]. As a comparison, the traditional machine learning methods used by Emuoyibofarhe et al. [66] had weaker encoding performance in complex contexts than ArsenicNetPlus.

Conclusion
As the signal frequency is continually down-converted through the convolutional neural network, the objective function carried by the signals is lost as they become extremely weakened. The pseudo high-frequency component can therefore be utilized to approximate the objective function and boost generalization performance.
A clear difference can be found in the loss curves in Fig. 15, where the loss values for ArsenicNetPlus are lower than those for ArsenicNet. Correspondingly, the accuracy of ArsenicNetPlus is higher than that of ArsenicNet in Fig. 16. The performance of ArsenicNetPlus on the FGVC-Aircraft dataset (Table 6) demonstrates that pseudo high frequency can improve the generalisation ability of the neural network.
Consequently, the pseudo high-frequency component is useful in two ways: (1) maintaining high frequency in feature maps is an important factor for the generalization performance of the neural network; (2) the pseudo high frequency is an approximate approach to replenishing high frequency in a weakened stage of a convolutional neural network.
In contrast, the proposed method has a higher initial loss value, and its loss function converges more slowly than that of RepVGG-B3g4. Our future work will therefore focus on modifying the loss function to converge faster and further boost the performance of the proposed neural network [67].
The function F(jnω) is the result of replenishing the Fourier series coefficients. The Cesàro sum is a special operation for converting non-good kernel functions into good kernel functions. Being a good kernel is a mathematical and physical property, a manifestation of the form of a function. However, applying the Cesàro sum operation within convolutional neural networks remains an open idea.

Authors' information
Jiayu Zhang was born in Xuzhou, Jiangsu, China in 1993. He received the M.Sc. degree in software engineering from Hangzhou Normal University in 2019. He is currently pursuing the Ph.D. degree in agricultural electrification and automation at Nanjing Agricultural University. His research interests include machine vision, deep learning and digital image processing. Chao Qi is a doctoral student at Nanjing Agricultural University, majoring in agricultural electrification and automation, with research interests in image processing and machine learning, especially deep learning techniques. He received the MS degree in 2019 at Nanjing Agricultural University, with research interests in image processing and machine learning, focusing mainly on digital image processing techniques. Peter Mecha is a doctoral student at Nanjing Agricultural University, China. He also works as an assistant lecturer at Egerton University, Kenya. He is currently involved in designing heat pump drying processes for various vegetables such as mushrooms and day lily flowers, among others. Additionally, he is working on applications of deep learning and image processing in agricultural processing to help solve food security problems. Yi Zuo is a doctoral student at Nanjing Agricultural University, majoring in agricultural electrification and automation, with research interests in image processing and machine learning, especially deep learning techniques. He received the MS degree in 2021 at Nanjing Agricultural University, with research interests in image processing and machine learning, focusing mainly on digital image processing techniques. Zongyou Ben is a doctoral student at Nanjing Agricultural University, majoring in agricultural mechanization engineering, with research interests in deep learning and digital image processing. He received the MS degree in 2016 at Nanjing Tech University, with research interests in fluid machinery design.

Funding
Not applicable.

Availability of data and materials
The original dataset can be found at https://www.kaggle.com/competitions/cassava-leaf-disease-classification/data. There are 21,394 images in the original cassava leaf disease dataset. The original dataset was annotated by experts at the Uganda National Crops Resources Research Institute (NaCRRI) in collaboration with the AI lab at Makerere University, Kampala. The cassava dataset used in this study can be found at the following link: https://pan.baidu.com/s/1thrIr_0uB3gzYSPT317gtg (Password: abcd).

Declarations
Ethics approval and consent to participate
Not applicable.

Consent for publication
Not applicable.