Sample preparation
Arabidopsis samples
Arabidopsis seeds were gas-sterilized using 100 mL of sodium hypochlorite (commercial bleach) supplemented with 3 mL of 37% HCl for at least 2 h. Seeds were embedded in agarose and stratified overnight at 4 °C before sowing on half-strength Murashige and Skoog (MS) plates. Seeds were sown in square Petri dishes, spaced 1 cm apart. Plates were imaged from germination to early root growth (up to 5 days) under a photosynthetic photon flux (PPF) of 180 µmol m−2 s−1 with an 18/6-h light/dark cycle.
Tomato samples
To test the imaging setup in the greenhouse (23 °C; 70% relative humidity [RH]), we conducted an experiment to observe the effects of wounding on the roots of tomato (Solanum lycopersicum). Tomato seeds were planted on top of a nylon mesh placed between the soil and the polystyrene plate, resembling a small rhizotron. This small 12.5 × 10 × 5 cm Petri-dish rhizotron was cut open at the top to allow the plants to grow out, with the seeds planted at about ¾ of the plate height at an angle of −45°. This configuration, with the nylon mesh acting as an interface between the seeds and the soil, constrained the roots to grow only on the mesh, facilitating the monitoring of specific regions of the root system architecture during long-term experiments.
Tomato root regeneration
Tomato roots were grown for 4 days on soil, and the root meristem was then excised using dental microneedles. Root growth was continuously monitored for several days until full regeneration was detected.
Dehydration experiment
A tomato seedling a few millimeters in length (3 days after sowing) was transferred onto a wet mesh placed directly on top of a Petri dish without agar for the first imaging stage (control/hydrated). As the moisture in the mesh dried over about 45 minutes (25 °C; 60% RH), the root began to dehydrate, shrink, and deform (second stage, dehydrated). After the root was dehydrated, we wetted the mesh for 10 min to rehydrate the root and capture the third and final stage. Acquiring the 70-shot focus stack took approximately 15 minutes at each stage.
Customized mini-rhizotron
The mini-rhizotron system was set up using a standard transparent polystyrene square Petri dish cut open on one side. It was filled with soil and entirely covered by a 100-μm nylon mesh (Sefar Nitex 03-100/44; Additional file 1: Fig. S1) to prevent the roots from growing into the soil and to keep them visible on the mesh surface for imaging.
Building the imaging system
The initial imaging setup consists of a modified digital camera, the Canon 5DSr SLR (Canon Inc., Tokyo, Japan), with a 50.1-megapixel full-frame sensor (36 × 24 mm). The camera is fitted with a Canon MP-E 65 mm f/2.8 1-5x lens stacked on a 2x teleconverter (Vivitar Series 1) and mounted on a vertical/horizontal motorized rail system (WeMacro, Shanghai, China; Fig. 1, A to H, Additional file 1: Table S1), which constitutes the simplest MultipleXLab configuration. A step-by-step assembly of the single-plate imaging setup is presented in Additional file 8: Movie S7. The expanded MultipleXLab version (Fig. 4J) is also equipped with a Canon EF 40 mm f/2.8 STM lens stacked on the 2x teleconverter (Vivitar Series 1). This setup provides a 28° angle of view at a working distance of 30 cm, covering the entire area (150 cm2) of the Petri dish while permitting proper lighting, and reaches a magnification of 0.24x (0.18x without the teleconverter) relative to life size, owing to the 15 cm gain in minimum focus distance. A set of band-pass filters, including infrared, visible, and ultraviolet, can be placed in front of the lens to narrow the spectrum of interest.
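As a quick sanity check of these optical parameters, the field of view and object-space pixel size implied by the quoted sensor size, pixel count, and magnification can be estimated with a few lines of Python (an illustrative sketch using only the numbers stated above; the values are estimates, not measurements):

```python
# Estimate field of view and object-space pixel size for the multi-plate configuration.
# Assumes the stated 36 x 24 mm sensor, 8688 x 5792 px native resolution, and 0.24x magnification.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0
PIXELS_W = 8688
MAGNIFICATION = 0.24

fov_w_mm = SENSOR_W_MM / MAGNIFICATION              # ~150 mm
fov_h_mm = SENSOR_H_MM / MAGNIFICATION              # ~100 mm
fov_area_cm2 = (fov_w_mm / 10) * (fov_h_mm / 10)    # ~150 cm^2, matching the stated coverage
pixel_um = SENSOR_W_MM / PIXELS_W * 1000 / MAGNIFICATION  # ~17 um per pixel on the plate

print(f"Field of view: {fov_w_mm:.0f} x {fov_h_mm:.0f} mm ({fov_area_cm2:.0f} cm^2)")
print(f"Object-space pixel size: {pixel_um:.1f} um/pixel")
```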
Lighting enclosure for high-resolution imaging using the single-plate setup
The imaging system was placed inside a large, illuminated lightbox (80 × 80 × 50 cm) that provides constant lighting for imaging, assists in refocusing the optical system, and is used for video recording. In addition, a ring flash (Canon MR-14EX II) was attached to the lens, and two Speedlite flashes (Yongnuo YN600EX-RT II), each diffused with a strap-on softbox (15.2 × 12.7 cm), were used. Mounted on articulated arms (Manfrotto 244 Variable Friction Magic Arm, Cassola, Italy), these external flashes provided oblique light on the specimens inside the lightbox to assist in high-magnification imaging when high light intensity is required. We also used three 150 W E27 5500 K lightbulbs in light modifiers outside the lightbox to provide constant fill light and facilitate sample alignment and focusing during imaging. These components are listed in Additional file 1: Table S1 and depicted in Fig. 1I and Additional file 8: Movie S7.
3D-printing a multiplate carousel stage
The hexagonal 3D-printed stage was first conceptualized and designed in SketchUp (v.18.0.16976, Google LLC, Mountain View, California, USA) and then rendered in Fusion 360™ (Autodesk®, Inc., Mill Valley, California, USA; Additional file 1: Fig. S5). The design was exported in STL format and converted into a printable file using the slicing software ideaMaker (Raise 3D, Irvine, California, USA), which was used to set the printing parameters, including a raft base, a layer height of 0.2 mm, 15% infill, and two shells. The file was then loaded into a Raise 3D Pro 2 printer and printed in several sessions of approximately 27 h per stage level.
Building the MultipleXLab control system
The device is equipped with high-resolution stepper motors (Nema 17 or 23, model 17HS15-1684S-PG5, 1.8° per step) driving three ball-screw linear actuators (400, 200, and 150 mm stroke; Fuyu Motion FSL40, Sichuan, China) and one rotary table (PX110, Beijing PDV Instrument Co., Ltd, China) that drives the carousel. The rotary table also interfaces with two stacked goniometers that provide fine tilt adjustments in high-magnification applications to achieve parallelism with the vertical axis carrying the camera. The system can operate under preset lighting cycles (18/6 h on/off) using built-in 24 V plant-growth lighting, providing up to 400 µmol m−2 s−1 of PPF within a 10 cm distance. Using the 3D-printed carousel with three levels, we can tightly fit 18 plates simultaneously (Additional file 1: Fig. S3) and precisely illuminate the plates with cross-polarized lighting using an array of Speedlite flashes covered with linear polarizer films (P100A-3Dlens, Taipei, Taiwan) working in conjunction with a circular polarizer (B+W MRC filter, Bad Kreuznach, Germany) on the Canon EF 40 mm f/2.8 STM lens stacked on the 2x teleconverter, effectively turning it into an 80 mm lens.
The system is integrated using a custom printed circuit board with an ESP32 as the master microcontroller that controls the lighting and the camera shutter release (Additional file 1: Fig. S3D). It also reads a program from an SD card to perform routine monitoring cycles on selected plates. Additionally, the ESP32 interfaces with an I2C port expander, enabling control of and communication with auxiliary sensors and actuators, and with a real-time clock module to accurately synchronize the program timing and lighting cycles. A slave microcontroller, an Arduino Nano, acts as a liaison to the stepper motors: it receives commands from the ESP32, executes the stepper operations, and handles the limit switches during the calibration step. The power consumption of the entire system is approximately 0.1 kW.
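To illustrate the kind of routine the onboard controllers execute, the following MicroPython-style sketch reproduces a preset lighting cycle with periodic shutter release. It is a hedged stand-in for the onboard program, not the actual firmware; the pin numbers and the GPIO-driven shutter release are assumptions.

```python
# Illustrative MicroPython sketch of a monitoring routine (assumed pins, not the real firmware).
import time
from machine import Pin

grow_light = Pin(25, Pin.OUT)   # hypothetical pin driving the 24 V grow light via a MOSFET
shutter = Pin(26, Pin.OUT)      # hypothetical optocoupler on the camera remote-release port

def trigger_shutter(pulse_s=0.2):
    """Send a short pulse to release the camera shutter."""
    shutter.value(1)
    time.sleep(pulse_s)
    shutter.value(0)

def run_day(hours_on=18, hours_off=6, shots_per_hour=1):
    """One 24-h cycle: 18/6 h lighting with hourly image acquisition."""
    for hour in range(hours_on + hours_off):
        grow_light.value(1 if hour < hours_on else 0)
        for _ in range(shots_per_hour):
            trigger_shutter()
            time.sleep(3600 / shots_per_hour)
```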
The device can be preprogrammed and operated independently using the onboard control system. Alternatively, the user can run the system through the MultipleXLab Control Center UI software (Additional file 1: Fig. S3E). The software allows the end user to perform device calibration and focus stacking, control the lighting, and acquire images on demand. Images corresponding to each plate were tagged with a QR code or barcode to facilitate data acquisition and management; the images can then be easily retrieved and allocated to labeled folders linked by the QR code.
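The sorting step can be illustrated with a short Python sketch that reads the QR/barcode visible in each frame and copies the image into a per-plate folder. This is a hedged example assuming the pyzbar and Pillow libraries and TIFF input; the Control Center software may implement this differently.

```python
# Sort acquired frames into per-plate folders using the QR/barcode in each image (illustrative).
from pathlib import Path
import shutil

from PIL import Image
from pyzbar.pyzbar import decode

def sort_by_plate(raw_dir: str, out_dir: str) -> None:
    for frame in sorted(Path(raw_dir).glob("*.tif")):
        codes = decode(Image.open(frame))                      # read any QR/barcodes in the frame
        plate_id = codes[0].data.decode() if codes else "unlabeled"
        dest = Path(out_dir) / plate_id
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(frame, dest / frame.name)

# Example: sort_by_plate("acquired_frames", "sorted_by_plate")
```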
Imaging settings
We employed different imaging settings for z-stacking, single-snap, and time-lapse acquisition. For each condition, as determined by sample size, geometry, and environment, we tailored the lighting as needed by selecting different types of external lighting and light modifiers to create soft, diffused lighting around the specimens. For comparison, we also acquired images using a Stemi 508 stereomicroscope (Carl Zeiss Microscopy GmbH, Jena, Germany) coupled with a color CMOS camera (AxioCam 105, Carl Zeiss Microscopy GmbH) through a camera adapter (Zeiss 60N-C 2/3 0.5X), operated using the manufacturer's ZEN lite image acquisition software.
Macrophotographs of plant organs were shot in raw CR2 format at a native resolution of 8688 × 5792 pixels, taken in 40-μm z-steps at 5:1 magnification, f/2.8, 1/200 s, and ISO 100. These images were acquired with the 2x teleconverter, bringing the effective magnification to nearly 10:1, and with two perpendicular Speedlite flashes optically triggered by a parallel ring flash at 1/64 power mounted on the camera. The imaging parameters for the z-stacks differed for each specimen. The thrips (Thysanoptera) on Arabidopsis leaves were imaged using a single exposure (i.e., no focus stacking) because the insect was continuously moving; therefore, we used a faster shutter speed of 1/2500 s and an aperture of f/5.6 without the teleconverter. When capturing the static ladybird beetle (Harmonia axyridis) on Arabidopsis leaves, we employed focus stacking with a finer 20-μm z-step to counter the even shallower depth of field.
We used a flower as a specimen to compare the performance of the single-plate imaging setup with that of the stereomicroscope. A stack of 21 frames (2560 × 1920 pixels) was taken with the stereomicroscope by manually focusing through the entire depth of the visible parts of the flower. The corresponding image from the single-plate imaging setup was obtained from a stack of 103 frames (8688 × 5792 pixels) in 10-μm z-steps at about 10:1 magnification, f/5.6, 1/100 s, and ISO 100, using two perpendicular Speedlite flashes optically triggered by a ring flash mounted on the camera, all set at 1/16 power. The close-up comparison of the stigmatic papilla was captured using both the stereomicroscope and the proposed system. The stereomicroscope image was made from 13 stacked frames obtained by manually focusing through the entire depth (radial thickness) of the specimen, whereas the image from the single-plate imaging setup was made from 12 frames so that only the stigma was in focus. The entire time-series datasets acquired using the MultipleXLab were taken at f/11, 1/10 s, and ISO 200, with external flashes set at 1/32 power.
Batch processing in Adobe Photoshop CC (20.0.5) was employed to handle the thousands of images generated by the MultipleXLab device, each raw image being approximately 50 MB. The raw images were processed in the Camera Raw editor in Adobe Photoshop CC by applying a fixed preset created to even out brightness, correct color temperature, and increase contrast. The output images were exported in digital negative format and aligned using the auto-align (translation) function through a batch process in Photoshop. Aligned frames from the same plate were center-cropped to a fixed size of 4639 × 4480 pixels comprising the region of interest. Frames were exported to TIFF format using an automated batch process, with each TIFF frame having a final file size of approximately 30 MB. Time lapses and animations were rendered using Final Cut Pro 10.5.0 (Apple, Cupertino, California, USA).
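For users without Photoshop, the center-cropping and TIFF export steps can be reproduced with a few lines of Python (a hedged sketch assuming Pillow and already-aligned TIFF input; the raw development and alignment steps described above are not replicated here):

```python
# Center-crop aligned frames to the fixed 4639 x 4480 px region of interest and save as TIFF.
from pathlib import Path
from PIL import Image

CROP_W, CROP_H = 4639, 4480

def center_crop_folder(in_dir: str, out_dir: str) -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(in_dir).glob("*.tif")):
        img = Image.open(path)
        left = (img.width - CROP_W) // 2
        top = (img.height - CROP_H) // 2
        cropped = img.crop((left, top, left + CROP_W, top + CROP_H))
        cropped.save(Path(out_dir) / path.name, format="TIFF")

# Example: center_crop_folder("aligned_frames", "cropped_frames")
```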
Measuring pixel contrast between the imaging setup and stereomicroscope
The calculated theoretical maximum numerical aperture of the imaging setup was 0.09 at 1× magnification and 0.03 at 5× magnification. We then determined the lateral resolution of the entire imaging setup from the calculated modulation transfer function using an inexpensive 1951 USAF resolution target with negative and positive chrome patterns manufactured to MIL-S-150A standards, measuring 63 × 63 × 2 mm and containing all groups and elements from 0 to 7, corresponding to a minimum and maximum of 1 to 228 lp/mm, respectively.
The resolution target was imaged at 4:1 magnification using the stereomicroscope at its maximum illumination brightness with auto-exposure based on a selected area of the negative chrome pattern on the mask. For the 5:1 magnification image obtained using the proposed imaging setup, we configured the setup vertically to mimic the stereomicroscope orientation and used a mini-LED light array (Aputure Amaran AL-MX Bicolor LED) as a transmitted light source behind the resolution target. The raw photos taken with the imaging setup were shot at f/3.2 (optimum optical resolution), 1/40 s, and ISO 100, at the native resolution of 8688 × 5792 pixels.
We calculated the contrast, expressed from 0 to 100%, from the difference in greyscale values (0 to 255) between the peaks ("max") and valleys ("min") of the dark and bright line-pair patterns. We used line probes in Avizo 2020.1 (Thermo Scientific, USA) to examine the intensity values of the target images quantitatively, because this module scans the greyscale intensity values along a line probe defined by the user. We identified the peaks and valleys, which can be smoothed using sampling and averaging factors for better comparison. A graph of contrast versus spatial frequency for both imaging systems can be obtained from the local modulation [48], given by Eq. (1):
$$modulation=\frac{max-min}{max+min}$$
(1)
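As a worked example of Eq. (1), the following Python snippet computes the modulation from the peak and valley grey values of a line-probe profile (the numbers are illustrative, not measured data):

```python
# Compute the local modulation (contrast) of a line-pair pattern from an 8-bit intensity profile.
import numpy as np

def modulation(profile: np.ndarray) -> float:
    """Contrast of a line-pair pattern: (max - min) / (max + min)."""
    i_max, i_min = float(profile.max()), float(profile.min())
    return (i_max - i_min) / (i_max + i_min)

# A well-resolved pattern versus a barely resolved one (illustrative grey values)
print(modulation(np.array([230, 40, 235, 35, 228])))    # ~0.74 (high contrast)
print(modulation(np.array([150, 120, 148, 118, 152])))  # ~0.13 (low contrast)
```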
Root profilometry
We exported TIFF files containing the depth-map layers from the stacks using Helicon Focus (v.7.5.6, Helicon Soft Ltd., Kharkov, Ukraine). These layers were processed in Avizo, starting with the determination of the scale using the known physical size of the images in x, y, and z (2.55 × 3.88 × 0.01 mm), where the micro-step determines the z-spacing between layers within the stack. After loading the scaled dataset into Avizo, an image conversion step extracts the alpha channel from the stack made from the individual TIFF files. This channel is binarized to generate a tetrahedral mesh revealing a triangulated surface of the root topography, which is carried out using the surface generation module in Avizo. Cleaning up the root-labeled channel may be necessary to restrict the result to the region of interest, because areas that do not represent roots may be picked up due to uneven surfaces on the substrate where the root is growing or due to dust accumulated on the sensor, which creates 'dust trails' in the stack. Therefore, we advise manually removing unwanted features in the Segmentation Editor in Avizo, applying a certain level of noise smoothing, and correcting the contour roughness caused by voxel aliasing, so that accurate and relatively complex surfaces can be rendered with ease. When necessary, this cleanup step should be performed before generating the mesh.
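The scaling logic of this step can be illustrated outside Avizo with a minimal Python sketch that converts a depth map into physical heights using the 10-μm micro-step (a hedged example assuming the tifffile and NumPy libraries and that the depth map encodes the source-layer index per pixel; it does not replace the meshing and cleanup performed in Avizo):

```python
# Convert a focus-stack depth map (layer index per pixel) into a per-pixel height map in mm.
import numpy as np
import tifffile

Z_STEP_MM = 0.01  # micro-step between stack layers (10 um)

def depth_map_to_height_mm(depth_map_path: str) -> np.ndarray:
    """Return a height map in mm from a depth-map TIFF whose values are layer indices."""
    layer_index = tifffile.imread(depth_map_path).astype(float)
    return layer_index * Z_STEP_MM

# heights = depth_map_to_height_mm("root_depth_map.tif")
# The height map can then be meshed or plotted as a 3D surface for topography inspection.
```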
To visualize the 4D dynamics of the roots during dehydration and growth, we configured the imaging setup as shown in Additional file 1: Fig. S6F and set the acquisition time to capture these dynamics in space and time. For example, during the dehydration observations (Fig. 4J), we captured three consecutive stacks (control/hydrated, dehydrated, and rehydrated), each made from 70 snaps at a 10-μm z-step, generating 700-μm stacks imaged within 15 min. Therefore, the dynamics within a 700-μm stack imaged at a 10-μm z-resolution could not develop faster than the acquisition time (<15 min) if artifact-free datasets, free of specimen movement during acquisition, were to be obtained. The acquisition period was primarily limited by the Speedlite flashes, which could not fire successively at full power and required a ~12-s recharging delay between shots (70 shots × ~12 s ≈ 14 min). This downtime could be reduced by adding more light sources at lower relative power to allow for faster recharging.
High-throughput analysis of developmental, cell-cycle, and auxin mutants
We used the MultipleXLab to examine root-growth dynamics in several Arabidopsis mutants. Small Petri dishes were used to plate either 64 or 56 seeds on 1/2 MS agar mixed with charcoal to darken the medium and increase the contrast between the roots and the background. Plates were stored in a growth room (20.5 °C; 67% RH) for 24 h before being loaded into the carousel for hourly imaging for up to five days. The lids of the plates were removed to prevent condensation from blocking the view, and the entire device was enclosed in a plexiglass frame to minimize drying of the agar plates and delay the contamination that may occur after a week of ongoing experimentation.
Statistical analysis
Data analysis was performed using OriginPro 2020 (OriginLab, Massachusetts, USA). Datasets were confirmed to be drawn from normally distributed populations according to the Shapiro–Wilk test at a 0.05 significance level [49]. One-way analysis of variance followed by the post hoc Tukey test was used to compare differences between mutants, with significance thresholds ranging from p < 0.05 to p < 0.001.
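An equivalent analysis can be scripted in Python (a hedged sketch using SciPy and statsmodels rather than OriginPro; the file name and the column names genotype and root_length are hypothetical):

```python
# Shapiro-Wilk normality check, one-way ANOVA, and post hoc Tukey HSD (illustrative workflow).
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("root_lengths.csv")  # assumed columns: genotype, root_length

# Normality check per genotype (p > 0.05 is consistent with a normal distribution)
for name, group in df.groupby("genotype"):
    w, p = stats.shapiro(group["root_length"])
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# One-way ANOVA across genotypes
groups = [g["root_length"].values for _, g in df.groupby("genotype")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Post hoc Tukey HSD pairwise comparisons
print(pairwise_tukeyhsd(df["root_length"], df["genotype"], alpha=0.05))
```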
SeedNet and RootNet models
To determine the initial seed locations in the first frame of the time series, we developed an artificial neural network called SeedNet. The network is a binary image segmentation model based on the U-Net architecture [50]. Given an RGB image, SeedNet outputs a pixel-wise mask classifying each pixel as seed (1) or background (0). Raw images were too large to be processed by SeedNet; therefore, we divided the original image into patches of 256 × 256 pixels. Furthermore, we downsampled the patches to 32 × 32 pixels because we aimed to locate only the seed positions. SeedNet outputs were then upsampled to 256 × 256-pixel patches and stitched back together to obtain a pixel-wise mask with the original size of the raw image.
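The tiling and downsampling step can be sketched as follows (a hedged example assuming NumPy and Pillow; the implementation details of the published pipeline are given in Additional file 1: Table S2):

```python
# Split a large RGB frame into 256 x 256 patches and downsample each to 32 x 32 for SeedNet.
import numpy as np
from PIL import Image

PATCH, SMALL = 256, 32

def image_to_patches(path: str):
    img = np.asarray(Image.open(path).convert("RGB"))
    h = (img.shape[0] // PATCH) * PATCH
    w = (img.shape[1] // PATCH) * PATCH
    patches, coords = [], []
    for y in range(0, h, PATCH):
        for x in range(0, w, PATCH):
            tile = Image.fromarray(img[y:y + PATCH, x:x + PATCH])
            patches.append(np.asarray(tile.resize((SMALL, SMALL))) / 255.0)
            coords.append((y, x))
    return np.stack(patches), coords  # (N, 32, 32, 3) batch for the network, plus patch positions
```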
Similarly, we developed an artificial neural network called RootNet based on the same U-Net architecture. Given an RGB image, RootNet outputs a pixel-wise mask classifying each pixel as root (1) or background (0). Raw images were also too large for RootNet; therefore, we again divided the original image into 256 × 256-pixel patches. RootNet outputs were stitched back together to obtain a pixel-wise mask at the original size of the raw image, as for the SeedNet outputs.
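A companion sketch for the stitching step, reassembling thresholded per-patch masks into a full-frame mask, is shown below (again an illustration, not the published code; the 0.5 threshold is an assumption):

```python
# Reassemble per-patch probability maps (e.g., RootNet outputs) into one full-frame binary mask.
import numpy as np

def stitch_patches(masks, coords, full_shape, patch=256, threshold=0.5):
    full = np.zeros(full_shape, dtype=np.uint8)
    for mask, (y, x) in zip(masks, coords):
        full[y:y + patch, x:x + patch] = (mask > threshold).astype(np.uint8)
    return full
```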
SeedNet and RootNet were developed using Keras [51], an open-source software library. The implementation details for the models are provided in the supplementary information (Additional file 1: Tables S2 and S3).
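For orientation, a minimal U-Net-style binary segmentation model in Keras is sketched below; the actual depths, filter counts, and hyperparameters of SeedNet and RootNet are those listed in Additional file 1: Tables S2 and S3, not necessarily these.

```python
# Minimal U-Net-style binary segmentation model in Keras (illustrative, not the published model).
from tensorflow import keras
from tensorflow.keras import layers

def build_unet(input_shape=(256, 256, 3), base_filters=16):
    inputs = keras.Input(shape=input_shape)

    # Encoder
    c1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)

    c2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(base_filters * 4, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(u2)

    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(u1)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel seed/root probability
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_unet()
# model.fit(train_patches, train_masks, validation_split=0.1, epochs=20, batch_size=16)
```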
To train the RootNet and SeedNet models, the dataset was randomly split into training (46 images) and testing (19 images) sets. Annotations were performed by labeling the root and seed pixels (Additional file 1: Fig. S7, B and D) using ilastik [52]. Each image contained 64 seeds/roots, giving a total of 3200 annotated plants. Because an entire image is too large to be used for training, each image was divided into smaller patches (256 × 256 pixels), and these patches were used to train the models. Thousands of patches were used to train each model, and training took <10 minutes on a desktop computer with 30 GB of RAM and an NVIDIA Quadro P4000 GPU.
Computing resources for image processing
For inference, the entire processing pipeline took about one hour to analyze a single time series on a laptop (8 GB RAM, 2.6 GHz Intel Core i5 CPU) and about 30 minutes on a workstation (128 GB RAM, Intel Xeon Gold 6130 @ 2.10 GHz; NVIDIA Quadro M2000 GPU). Image processing tasks using Photoshop were also carried out on both the laptop and the workstation.