Abstract

In this paper, we describe a technique that uses an adaptive background learning method to detect coronal mass ejections (CMEs) automatically in SOHO/LASCO C2 image sequences. The method consists of several modules: an adaptive background module, a candidate CME area detection module, and a CME detection module. The core of the method is adaptive background learning, in which a CME is treated as a foreground object moving outward, as observed in running-difference time series. Modeling the corona observation scene with both static and dynamic features describes the complex background more accurately. Moreover, the method can detect subtle changes in the corona sequences while effectively filtering out noise. We applied this method to a month of continuous corona images, compared the results with the CDAW, CACTus, SEEDS, and CORIMP catalogs, and found that it achieves a good detection rate among the automatic methods. It detected about 73% of the CMEs listed in the CDAW CME catalog, which is compiled by human visual inspection. Currently, the derived parameters are the position angle, angular width, linear velocity, minimum velocity, and maximum velocity of CMEs. Other parameters could easily be added if needed.

1. Introduction

A coronal mass ejection (CME) is a significant release of plasma and accompanying magnetic field from the solar corona. It often follows a solar flare and is normally present during a solar prominence eruption. The plasma is released into the solar wind and can be observed in coronagraph imagery [1–3]. CMEs are among the most energetic and important forms of solar activity and are a significant driver of space weather in the near-Earth environment and throughout the heliosphere. When an ejection is directed towards the Earth and reaches it as an interplanetary CME (ICME), it can cause geomagnetic storms that may disrupt Earth’s magnetosphere, potentially damage satellites, induce ground currents, and increase the radiation risk for astronauts [4]. Thus, CME detection is an active area of research.

The solar eruption associated with the first-observed solar flare on 1 September 1859 is now understood to have involved a CME, and CMEs have been studied extensively since they were first reported [5] more than four decades ago. With the continuous progress of space-based observations of the corona, a series of satellites with coronal imaging capability, such as OSO-7, P78-1, Skylab, SMM, and SOHO, has been launched. In particular, over the past 24 years, coronal mass ejections have been detected routinely by visual inspection of each image from the Large Angle and Spectrometric Coronagraph (LASCO) onboard SOHO [6]. To further understand CMEs, especially their three-dimensional properties, the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) [7] flew aboard NASA’s Solar Terrestrial Relations Observatory (STEREO).

Given this huge volume of CME observations, the identification and cataloging of CMEs are important tasks that provide the basic knowledge for further scientific studies. There are two main categories of methods used to detect CMEs. The first is manual detection from the LASCO coronagraph images. Currently, there exists a manual catalog, the Coordinated Data Analysis Workshop (CDAW) Data Center catalog [8], which records observed CMEs. This catalog is compiled by observers who look through sequences of LASCO coronagraph images, but this human-based process is tedious and subject to observer bias. The second category is automatic detection, which detects and characterizes CMEs in coronagraph images.

The Computer Aided CME Tracking software package (CACTus), introduced in 2004, was the first automatic detection method [9]. It utilizes the Hough transform to identify CMEs. In 2005, Boursier et al. proposed the Automatic Recognition of Transient Events and Marseille Inventory from Synoptic maps (ARTEMIS) [10], which uses LASCO C2 synoptic maps and relies on adaptive filtering and segmentation to detect CMEs. In [11], Olmedo et al. presented the Solar Eruptive Event Detection System (SEEDS), which uses image segmentation techniques to detect CMEs. In [12], Young and Gallagher described and demonstrated a multiscale edge detection technique that addresses CME detection and tracking and could serve as one part of an automated CME detection system. In 2009, Goussies et al. developed an algorithm based on level-set and region-competition methods to characterize CME texture; by using the texture information in the region-competition motion equations to evolve the curve, segmentation of the leading edge of CMEs is performed on individual frames [13]. In the same year, Byrne et al. adopted a multiscale decomposition technique to extract the structure of each image and used an ellipse parameterization of the front to extract the kinematics (height, velocity, and acceleration) and morphology (width and orientation) of CMEs [14]. In [15], Gallagher et al. developed an image processing technique that defines the evolution of CMEs by texture and used a supervised segmentation algorithm to isolate a particular region of interest, based upon its similarity to a prespecified model, in order to automatically track CMEs. In 2012, Zhao-Xian et al. [16] presented a method to detect CMEs by analyzing sudden changes in the frequency spectrum of coronagraph images. In 2014, Bemporad et al. [17] described the onboard CME detection algorithm for the Solar Orbiter METIS coronagraph.
The algorithm is based on running differences between consecutive images to identify significant changes and to provide the time of first CME detection. In 2017, Zhang et al. [18] proposed a suspected-CME region detection algorithm using the extreme learning machine (ELM) method, which takes into account grayscale and texture features. In 2018, Patel et al. [19] proposed a CME detection algorithm for the Visible Emission Line Coronagraph on ADITYA-L1, based on intensity thresholding followed by area thresholding in successive difference images spatially rebinned to improve the signal-to-noise ratio. Recently, machine learning has been used in solar physics. Dhuri et al. [20] used machine learning to classify vector magnetic field observations from flaring active regions. Huang et al. [21] applied a deep learning method to flare forecasting. Very recently, Wang et al. [22] proposed an automatic tool for CME detection and tracking based on machine learning techniques.

The automatic CME detection methods mentioned above are mainly based on three kinds of strategies: (i) enhance the coronagraph images, describe the kinematic and morphological features (edge, luminance, shape, etc.) of the processed images, and then use these features to determine the occurrence of a CME; (ii) establish CME evolution models from the dynamic evolution characteristics of historical CMEs, extract the same characteristics from the sequences being processed, and then compare them with the model to determine the occurrence of a CME; and (iii) treat CME detection as a supervised classification problem in machine learning.

The coronagraph data can be considered a three- or four-dimensional dataset with two or three spatial dimensions and one temporal dimension. The key to automatic detection is distinguishing CME regions from the other parts of the image, yet the methods above do not fully exploit the time dimension. To make full use of time-domain information, we can apply video processing technology to CME detection. In fact, a coronagraph image sequence can be treated as a video, with CMEs regarded as abnormal events in that video. The CME detection process can then use video surveillance techniques, including change detection, background modeling, foreground detection, and object tracking. Furthermore, since the coronagraph image sequence is itself a dynamic scene and a CME is a dynamic process, the detection method must adapt to scene changes. Inspired by these ideas, in this paper we attempt to detect CMEs with adaptive background learning technology. The method consists of three main modules:
(1) Adaptive background module: maintains the background model of the coronagraph image sequence
(2) Candidate CME area detection module: detects the foreground areas of the coronagraph images
(3) CME detection module: identifies CME events from the candidate areas

The remainder of the paper is organized as follows. Section 2 gives a specification of the adaptive background module. Section 3 formulates the background/foreground classification problem and proposes a method for candidate CME area detection. Section 4 describes an algorithm for CME detection based on the candidate CME area detection module. The experimental results and validation on LASCO C2 data are presented in Section 5, and the paper is concluded in Section 6.

2. Adaptive Background Module

In a coronagraph image sequence, the background environment constantly changes; for example, small moving objects such as stars and cosmic rays alter the background. The background representation model must therefore be robust and adaptive, and it must be updated continuously to track the scene. To cope with strong chaotic interference in the background, several methods have been proposed to adapt to a variety of background situations. Among them, the mixture of Gaussians (MoG) [23] is considered a promising method. In video monitoring, thanks to the high frame rate, MoG achieves good results in gradually changing scenes; but for CME detection, the interference changes significantly, so a better method is needed to model the dynamic scene. Li et al. proposed a statistical model [24] that uses the co-occurrence of color characteristics of two consecutive frames to model dynamic scenes. This model can represent nonstatic background objects and is therefore robust to periodic interference from the dynamic background. Such statistical modeling is well suited to CME detection, since CMEs are often associated with other forms of solar activity. We apply this method to model the background, namely, employing color features to describe the static background and co-occurrence color features to describe the moving background, and then use a Bayes decision rule to classify background and foreground.

2.1. Formulation of the Classification Rule Based on Bayes

In the method of automated CME detection based on an adaptive background module, each pixel in the coronagraph image is divided into two categories: background pixels and foreground pixels (candidate CME area pixels). Therefore, using the Bayes rule, the distribution probability of the feature vector of each pixel satisfies

P(v_t | s) = P(v_t | b, s)P(b | s) + P(v_t | f, s)P(f | s), (1)

where s indicates the pixel position, v_t is the statistical feature vector at time t, P(v_t | b, s) is the probability of the feature vector v_t being observed as background at s, P(b | s) is the prior probability of the pixel belonging to the background, and P(v_t | s) is the prior probability of the feature vector being observed at position s. Similarly, f denotes the foreground (or candidate CME area). By the Bayes decision rule, the pixel can be classified as background if the feature vector satisfies

P(b | v_t, s) > P(f | v_t, s), (2)

and, by the Bayesian conditional posterior probability,

P(b | v_t, s) = P(v_t | b, s)P(b | s) / P(v_t | s); (3)

substituting (1) and (3) into (2), it becomes

2P(v_t | b, s)P(b | s) > P(v_t | s); (4)

that is, if we have obtained the prior probabilities P(b | s) and P(v_t | s) and the conditional probability P(v_t | b, s) at time t, the pixel with feature vector v_t can be classified as background or foreground by formula (4).
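As an illustrative sketch (our own, not the pipeline's production code), once the three probabilities are available, the decision rule in formula (4) reduces to a one-line comparison; the function name and the handling of never-observed vectors are assumptions:

```python
# Sketch of the Bayes decision rule: classify a pixel as background
# when 2*P(v|b,s)*P(b|s) > P(v|s); otherwise it is foreground
# (candidate CME area).  Names are illustrative.

def classify_pixel(p_v_given_b, p_b, p_v):
    """p_v_given_b = P(v|b,s), p_b = P(b|s), p_v = P(v|s)."""
    if p_v <= 0.0:
        # the feature vector was never observed at this pixel:
        # treat it as foreground
        return "foreground"
    return "background" if 2.0 * p_v_given_b * p_b > p_v else "foreground"
```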

2.2. Description of the Feature Vector

In formula (4), the probability functions P(v_t | s) and P(v_t | b, s) are both associated with the feature vector v_t. For coronagraph images, the most prominent feature is luminance; to take dynamic disturbances into account, we must augment the feature vector to characterize the dynamic properties. In this paper, we adopt luminance features and co-occurrence luminance features to model the background.

The coronal image has many luminance levels, so calculating and recording the probabilities of all luminance feature vectors is unrealistic. Fortunately, at a fixed location of the coronagraph image, the luminance changes little, so for each pixel it is enough to record a small subspace of feature vectors as the background model. An example of the principal feature representation with luminance and co-occurrence luminance in LASCO C2 pseudocolor coronagraph images from 2014 is shown in Figure 1. Image (a) shows the position of the selected pixel, and images (b) and (c) are the histograms of the most significant color and co-occurrence color statistics. The histograms show that the first thirty color distributions account for 68.38% of the whole color feature space, and the first thirty co-occurrence color distributions account for 79.51% of the whole co-occurrence color feature space.

Therefore, as shown in Figure 1, we can represent P(v_t | b, s) and P(v_t | s) well by a small number of feature vectors. In the experiments of this paper, the color feature vector is quantized to 128 levels with the first 25 feature vectors recorded, and the co-occurrence color feature vector is quantized to 64 levels with the first 40 feature vectors recorded.
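The quantize-and-keep-top-N bookkeeping described above can be sketched as follows; the function names are ours, and the per-pixel luminance history is a toy example rather than real LASCO data:

```python
import numpy as np

# Quantize 8-bit luminance to a coarser level grid and keep only the
# n most frequent quantized values (with their empirical
# probabilities) as the compact per-pixel background statistics.

def quantize(values, levels=128, vmax=256):
    return (np.asarray(values) * levels // vmax).astype(int)

def top_n_stats(history, n=25, levels=128):
    """history: 1-D array of luminance samples at one pixel position.
    Returns (quantized values, probabilities) of the n most frequent
    feature vectors, most frequent first."""
    q = quantize(history, levels)
    counts = np.bincount(q, minlength=levels)
    order = np.argsort(counts)[::-1][:n]     # indices of the top-n bins
    probs = counts[order] / len(q)
    return order, probs
```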

2.3. Background Model and Parameters

In this paper, we focus on an effective CME detection method, and the data objects processed are pseudocolor coronagraph images. We therefore use statistical features of the pseudocolor coronagraph images to model the background, specifically including the prior probabilities of feature vectors belonging to the background and the statistics lists of color and co-occurrence color feature vectors. Suppose that at time t, at pixel point s, the color is c_t, the previous frame's color is c_{t-1}, and the co-occurrence color feature vector is defined as cc_t = (c_{t-1}, c_t). For each pixel, the background model includes the following:
(1) The prior probabilities P_c(b | s) and P_cc(b | s): P_c(b | s) indicates that a color feature vector belongs to the background at time t at pixel point s, and P_cc(b | s) indicates that a co-occurrence color feature vector belongs to the background at time t at pixel point s
(2) The color feature vector statistics list at time t and pixel point s:

S_c^t(s) = { S_{c,i}^t(s) = (c_i, p_{c,i}^t, p_{bc,i}^t) | i = 1, …, N_c }, (5)

where N_c is the recorded number of statistical color feature vectors, p_{c,i}^t is the statistical probability of the i-th color feature vector at position s up to time t, and p_{bc,i}^t is the probability that the i-th color feature vector at position s was judged as background
(3) The co-occurrence color feature vector statistics list at time t and pixel point s:

S_cc^t(s) = { S_{cc,j}^t(s) = (cc_j, p_{cc,j}^t, p_{bcc,j}^t) | j = 1, …, N_cc }, (6)

where N_cc is the recorded number of statistical co-occurrence color feature vectors, p_{cc,j}^t is the statistical probability of the j-th co-occurrence color feature vector at position s up to time t, and p_{bcc,j}^t is the probability that the j-th co-occurrence color feature vector at position s was judged as background

According to the distribution of the feature vectors, as shown in Figure 1, the first N_c (N_cc) elements of the list are enough to cover the major part of the background feature vectors. Therefore, when the observed color (co-occurrence color) feature vector matches one of the first N_c (N_cc) elements of the corresponding list, S_c^t(s) and S_cc^t(s) can be used to represent the background; otherwise, the feature vector corresponds to the foreground. This is the foundation we use to detect CMEs.

3. Candidate CME Area Detection Module

The candidate CME area detection module builds on the background model established in Section 2.3 and the background/foreground classification formulated in Section 2.1. It consists of three parts: change detection, change classification, and candidate CME area segmentation. In the first step, nonchange pixels are filtered out by using the background difference and the frame difference, which improves computing speed; meanwhile, the detected change pixels are separated into pixels belonging to the stationary and the moving scene according to the interframe changes. In the second step, based on the learned statistics of the color and co-occurrence color feature vectors, the pixels associated with the stationary or moving scene are further classified as background or candidate CME area using the Bayes decision rule. In the third step, candidate CME areas are segmented by morphological processing based on the classification results. The process follows the algorithm of [25], and a block diagram of candidate CME area detection is shown in Figure 2.

3.1. Change Classification

Candidate CME area detection is based on the two classes of background features (color and co-occurrence color), so the changes in each coronagraph image must first be classified into two types. As shown in Figure 2, change classification is obtained through temporal differences and background differences. Let F_td denote the temporal difference binary image and F_bd the background difference binary image. If F_td(s) = 1 (no matter what F_bd(s) is), the pixel is classified as a change pixel. If F_td(s) = 0 and F_bd(s) = 1, the pixel is classified as a stationary pixel. The two types are then classified as background or candidate CME area separately: change pixels are classified using co-occurrence color features, and stationary pixels are classified using color features.
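A minimal sketch of this two-way change classification, with illustrative threshold values and our own function name:

```python
import numpy as np

# F_td flags interframe (temporal) changes; F_bd flags differences
# from the reference background.  Change pixels go on to the
# co-occurrence-feature classifier, stationary pixels to the
# color-feature classifier.

def classify_changes(curr, prev, background, tau_td=10, tau_bd=10):
    f_td = np.abs(curr.astype(int) - prev.astype(int)) > tau_td
    f_bd = np.abs(curr.astype(int) - background.astype(int)) > tau_bd
    change = f_td                   # temporal change, regardless of F_bd
    stationary = (~f_td) & f_bd     # static but different from background
    return change, stationary
```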

3.2. Pixel’s Classification

For each pixel point s of the coronagraph image being processed, we first extract the feature vector v_t (the color feature vector or the co-occurrence color feature vector, depending on the pixel's change classification) and match it against each vector v_i in the pixel's feature vector statistics list using

|v_t − v_i| ≤ δ; (7)

the prior and conditional probabilities of all matched elements in the statistics list are then summed to obtain the prior probability P(v_t | s) and the conditional probability P(v_t | b, s) of the pixel's vector v_t. Meanwhile, the prior probability P(b | s) maintained in the background model is retrieved. Finally, by substituting P(b | s), P(v_t | s), and P(v_t | b, s) into formula (4), the pixel point s is classified as background or candidate CME area. In (7), δ is chosen so that, if similar features are quantized into neighboring vectors, the statistics can still be retrieved. If no element in the pixel's feature vector statistics list is matched, P(v_t | s) and P(v_t | b, s) are set to 0.
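The statistics retrieval can be sketched as below; the list layout (scalar quantized features), the tolerance value, and the names are our own simplifications:

```python
import numpy as np

# Retrieve P(v|s)- and P(v|b,s)-type statistics for an observed
# feature vector by summing over all list elements within the
# matching tolerance delta, in the spirit of formula (7).

def retrieve_stats(v, vectors, p_v, p_bv, delta=2):
    matched = np.abs(np.asarray(vectors) - v) <= delta
    if not matched.any():
        return 0.0, 0.0     # unseen vector: probabilities set to zero
    return (float(np.asarray(p_v)[matched].sum()),
            float(np.asarray(p_bv)[matched].sum()))
```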

3.3. Candidate CME Area Segmentation

After pixel classification, only a small percentage of background pixels are wrongly classified as candidate CME pixels, but they leave many isolated points. A morphological operation (a pair of opening and closing) is therefore applied to remove the scattered error points and to connect the candidate CME area points. Finally, the candidate CME area detection module outputs a binary image O_t.
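The clean-up step can be sketched in plain NumPy (our illustration, not the authors' code); a 3 × 3 square structuring element is an assumption:

```python
import numpy as np

# One binary opening removes scattered misclassified pixels, one
# closing reconnects the candidate CME area.

def _shifted_all(mask, op):
    pad = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), bool) if op == "and" else np.zeros((h, w), bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            win = pad[dy:dy + h, dx:dx + w]
            out = out & win if op == "and" else out | win
    return out

def erode(mask):
    return _shifted_all(mask, "and")

def dilate(mask):
    return _shifted_all(mask, "or")

def segment_candidate_area(mask):
    opened = dilate(erode(mask))          # opening: remove isolated points
    return erode(dilate(opened))          # closing: reconnect the area
```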

3.4. Adaptive Background Learning

The coronagraph image sequence is a gradually changing scene, so the background model must be maintained to adapt to the various changes over time. In practice, the background model's probability information and a reference background image must be updated.

3.4.1. Updating the Background Model's Probability Information

Based on the previously obtained binary image O_t, the pixel s with feature vector v_t is classified as candidate CME area or background. The prior probability and the conditional probabilities associated with the color feature are gradually updated by

P_c^{t+1}(b | s) = (1 − α)P_c^t(b | s) + αM^t(s),
p_{c,i}^{t+1} = (1 − α)p_{c,i}^t + αM_i^t(s),
p_{bc,i}^{t+1} = (1 − α)p_{bc,i}^t + αM^t(s)M_i^t(s), (8)

for i = 1, …, N_c, where α is a learning rate that controls the speed of feature learning and is set to a small positive constant in our experiments; M^t(s) = 1 when s is labeled as background at time t and M^t(s) = 0 otherwise; and M_i^t(s) = 1 when c_i in the color feature vector statistics list in formula (5) matches c_t best and M_i^t(s) = 0 for the others. The updating of the prior probability and the conditional probabilities associated with the co-occurrence color feature is similar. In more detail, the above updating can be stated as follows:
(a) If the pixel is labeled as a background point at time t by the color feature, P_c^{t+1}(b | s) is slightly increased from P_c^t(b | s) because M^t(s) = 1. Meanwhile, the probability of the matched feature is also increased, while the statistics of the unmatched features (M_i^t(s) = 0) are gradually decreased. If there is no match between c_t and the elements of the feature vector recording list, the N_c-th element in the list is replaced by a new feature vector

S_{c,N_c}^{t+1}(s) = (c_t, α, αM^t(s)); (9)

if the number of elements is smaller than N_c, the new feature vector (9) is simply added.
(b) If the pixel is labeled as a foreground point at time t by the color feature, P_c^{t+1}(b | s) and p_{bc,i}^{t+1} are slightly decreased because M^t(s) = 0. However, the probability p_{c,i}^{t+1} of the matched feature is still increased.

To ensure that the element replaced is the one with the lowest probability, the updated elements of the feature vector statistics list are re-sorted into descending order of p_{bc,i}^{t+1}.
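The gradual update plus the re-sorting step can be sketched as below; the learning-rate value, list layout, and function name are illustrative assumptions, not the authors' code:

```python
# One update step for a single pixel's color statistics: the prior
# p_b and the per-element probabilities are blended toward the
# current observation with learning rate alpha, then the list is
# re-sorted so the least probable element would be replaced first.

def update_stats(p_b, p_v, p_bv, best_idx, is_background, alpha=0.005):
    """best_idx: index of the best-matched feature vector (or None)."""
    m = 1.0 if is_background else 0.0
    p_b = (1 - alpha) * p_b + alpha * m
    for i in range(len(p_v)):
        m_i = 1.0 if i == best_idx else 0.0
        p_v[i] = (1 - alpha) * p_v[i] + alpha * m_i
        p_bv[i] = (1 - alpha) * p_bv[i] + alpha * m * m_i
    order = sorted(range(len(p_v)), key=lambda i: p_bv[i], reverse=True)
    return p_b, [p_v[i] for i in order], [p_bv[i] for i in order], order
```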

3.4.2. Updating the Reference Background Image

In the candidate CME area detection process, the background difference is needed to classify changes, so a reference background image that represents the most recent appearance of the scene must be maintained at each time step. An infinite impulse response (IIR) filter is used to follow the gradual changes of the stationary background scene. If the pixel s is not classified as a change point in the change classification step and the candidate CME area segmentation result is O_t(s) = 0, the reference background image is updated as

B^{t+1}(s) = (1 − β)B^t(s) + βI^t(s), (10)

where β is the parameter of the IIR filter and I^t(s) is the color information of the processed point. A small positive value of β is selected to smooth out the disturbances caused by image noise.

If F_td(s) = 1 and F_bd(s) = 1 but O_t(s) = 0, there is a significant change that was not classified as candidate CME area, which indicates that a background change has been detected. The color information of the processed pixel s should therefore replace the reference background, that is, B^{t+1}(s) = I^t(s).

Through these operations, the reference background image remains a good representation of the changing coronal scene.
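Both update paths of this subsection can be sketched in one function; the masks, names, and the value of beta are illustrative assumptions:

```python
import numpy as np

# Reference-background maintenance: an IIR filter absorbs slow
# changes at stable pixels, while pixels with a confirmed background
# change (changed but not foreground) are replaced outright.
# Foreground (candidate CME) pixels leave the background untouched.

def update_reference(background, frame, change_mask, foreground_mask, beta=0.05):
    bg = background.astype(float).copy()
    gradual = ~change_mask & ~foreground_mask
    bg[gradual] = (1 - beta) * bg[gradual] + beta * frame[gradual]
    replace = change_mask & ~foreground_mask     # significant change, not a CME
    bg[replace] = frame[replace]
    return bg
```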

4. CME Detection Module

Based on the candidate CME areas, we can detect a CME according to the morphological and dynamic characteristics of the candidate CME area. For example, to be identified as a newly emerging CME, a feature must be seen to move outward in at least two running-difference images. This condition was set by Robbrecht and Berghmans [9] and Olmedo [26] to define a newly emerging CME.

The CME detection method we propose is based on continuous frame processing, so after detection of the candidate CME area we set two conditions as the criteria for a CME event: (1) the candidate CME region detected in two consecutive frames must extend outward from the heliocenter; (2) from the time the candidate CME region is first detected, the region must enlarge gradually.
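The two criteria can be sketched as a simple test on binary candidate-area masks from consecutive frames; the mask representation and function name are our own assumptions:

```python
import numpy as np

# Criterion (1): the region's maximum radial extent from Sun center
# must grow between consecutive frames (outward motion).
# Criterion (2): the region's area must enlarge.

def satisfies_cme_criteria(mask_prev, mask_curr, center):
    ys, xs = np.nonzero(mask_prev)
    if ys.size == 0:
        return False
    r_prev = np.hypot(ys - center[0], xs - center[1]).max()
    ys, xs = np.nonzero(mask_curr)
    if ys.size == 0:
        return False
    r_curr = np.hypot(ys - center[0], xs - center[1]).max()
    outward = r_curr > r_prev                       # criterion (1)
    enlarging = mask_curr.sum() > mask_prev.sum()   # criterion (2)
    return outward and enlarging
```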

Besides, considering the angular extent of CMEs, we set a minimum angle threshold to filter noise. The features of interest are intrinsically polar, owing to the spherical structure of the Sun, so a polar transformation is applied to each candidate CME area image: the Cartesian [x, y] field of view (FOV) becomes a [θ, r] FOV, with θ the poloidal angle around the Sun, starting from solar North and going counterclockwise, and r the radial distance measured from the limb. This kind of transformation has been used in other CME detection algorithms [9, 11]. While transforming, we also rebin, from 1024 × 1024 pixels for the Cartesian FOV to 360 × 360 pixels for the polar FOV. Through appropriate selection, the dark occulter and the corner regions can easily be avoided. The radial direction of the polar coordinate image corresponds to 360 discrete points between 2.2 and 6.2 solar radii. We also set a minimum detection angle parameter; referring to the CME list in CDAW for 2014, in which the minimum angular width is 5 degrees, the threshold is set accordingly.
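The polar rebinning can be sketched with plain NumPy indexing; the Sun-center position, the pixel radius range, and the exact angle convention (from North, counterclockwise) are illustrative assumptions rather than the pipeline's actual parameters:

```python
import numpy as np

# Sample the Cartesian image along rays from Sun center to build a
# (n_theta x n_r) polar image by nearest-neighbor lookup.

def to_polar(img, center, r_min, r_max, n_theta=360, n_r=360):
    theta = np.deg2rad(np.arange(n_theta))        # from North, CCW
    radii = np.linspace(r_min, r_max, n_r)        # radius in pixels
    tt, rr = np.meshgrid(theta, radii, indexing="ij")
    # North is the -row direction in image coordinates
    rows = np.clip((center[0] - rr * np.cos(tt)).astype(int), 0, img.shape[0] - 1)
    cols = np.clip((center[1] - rr * np.sin(tt)).astype(int), 0, img.shape[1] - 1)
    return img[rows, cols]
```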

5. Results and Validation

In this section, visual examples and a comparison based on LASCO C2 pseudocolor coronagraph images are presented.

5.1. Results

We present the results obtained by running the detection algorithms based on adaptive background learning technology.

An example of the candidate CME region extraction process based on scene modeling is shown in Figure 3. We use LASCO C2 pseudocolor coronagraph images and process 1024 × 1024 image sequences. Figure 3 shows the candidate CME area segmentation process for 6 frames (22:12:05, 22:24:05, 22:36:05, 23:12:10, 23:24:05, and 23:36:06 on 2014/03/04): column (a) shows the LASCO C2 pseudocolor coronagraph images; column (b) the reference background images; column (c) the difference images between two sequential frames; column (d) the difference images between the current image and the reference background; column (e) the final candidate CME area images; and column (f) the changing-region images of the candidate CME areas.

Figure 4 is an example of the CME detection process for 2 frames (14:12 and 14:24 on 2014/01/01): column (a) shows the original coronal images; column (b) the candidate CME area images; column (c) the polar images of the candidate CME areas; column (d) the increasing-region images of the candidate CME areas; and column (e) the polar images of the increasing regions. The red box in the last image marks the detected CME area.

5.2. Validation and Comparison

Without loss of generality, we chose a full month of pseudocolor coronagraph image sequences observed by LASCO C2 in June 2014 as the test dataset. The manual CDAW list is used as the reference, and we compared the results of the adaptive background learning method with the CORIMP, CACTus, and SEEDS catalogs to verify the effectiveness of the proposed algorithm. The main comparisons include the accuracy rate, the false-negative rate, and the number of undetected CME events. The accuracy rate is the ratio of the number of CME events both detected by the automated method and recorded in the CDAW list to the total number of CME events in the CDAW catalog. The false-negative rate is the ratio of the number of CME events recorded in the CDAW catalog but not detected by the automated method to the total number of CME events in the CDAW catalog.
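The two rates defined above can be written out directly (the function name is ours; note that they sum to one by construction):

```python
# Validation metrics, both expressed relative to the CDAW catalog.

def detection_rates(n_cdaw, n_matched):
    """n_cdaw: CME events in the CDAW catalog; n_matched: CDAW events
    also found by the automated method under test."""
    accuracy = n_matched / n_cdaw
    false_negative = (n_cdaw - n_matched) / n_cdaw
    return accuracy, false_negative
```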

In the comparison experiments, for each CME event recorded in the CDAW catalog, if an automated method detected a CME within the time range and angular range of that event, the automated method is considered to have detected that CDAW CME event. The comparison of the detection results of our adaptive background detection algorithm with the other automated algorithms is shown in Table 1.

For the processed datasets, as shown in Table 1, our method has a higher accuracy rate than the other methods, and its false-negative rate is higher only than that of CORIMP and lower than those of SEEDS and CACTus. In the total number of detected CMEs, our method exceeds CORIMP and CACTus and is lower only than SEEDS. In the number of CDAW-catalog CME events left undetected, our method is the lowest.

In recent years, CME events have been recorded in the CDAW catalog more finely; in particular, very weak events in the helmet streamers are now recorded, and the number of recorded events keeps growing: 206 CME events were recorded in 1996, compared with 2477 in 2014. Such changes make automatic CME detection very difficult, so a new detection method must capture subtle changes in the coronal images. The automatic CME detection method based on adaptive background learning represents the dynamic scene very well, which makes it suitable for event detection in dynamic scenes. Figure 5 is an example of a very weak CME event detected by our method, which occurred in a helmet streamer area. This event was not detected by the CORIMP, SEEDS, and CACTus methods and is recorded only in the CDAW catalog. Figure 5 shows two continuous coronagraph images, the candidate CME area images, the candidate CME change area images, and the very weak CME event area located in the red box. In our experiments, for very weak CME events, the detection algorithm can detect only parts of the event, and this is the main cause of misdetection. Figure 6 shows the detection process for another weak CME event, whose area is also located in the red box: the first column is the coronagraph images, the second column is the foreground detected by our method, and the third column is the change area of the foreground. This event was likewise not detected by the CORIMP, SEEDS, and CACTus methods.

5.3. Computation of Information on CME

Information on CME events can be calculated conveniently from the processed images. Figure 7 shows a sequence of processed images of a CME event. Our method detected the event's first C2 appearance at 2014/06/24 05:35:05 (UT), and the duration of the event is 2.6 hours, spanning 14 frames. In Figure 7, we show nine processed results from these frames. The first column is the coronal images; the second column is the detected candidate CME regions; the third column is the outline of the candidate CME regions, marked by the blue curve; the fourth column is the changing areas of the candidate CME regions; and the fifth column is the contour of the changing areas, marked by the purple curve. We use the location of the timestamp to filter the noise it causes, so we can extract a more accurate CME area and ensure that the final calculated CME feature information is more accurate. A comparison of the information extracted for this CME with other methods is shown in Table 2. In our method, the speed of each frame can be calculated from the change between frames; the calculation schematic and the speed curve are shown in Figure 8.

In Table 2, the speed calculated by our method is the lowest, mainly because the method does not use the extreme frontier point of the CME area in each frame to calculate the speed, but rather the average of the frontier sample points. If the speed is instead calculated from the extreme frontier points, the average speed of this CME event obtained by our method is 490 km/s, similar to the other automatic detection methods.
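The two speed estimates discussed above can be sketched as follows; the pixel scale, cadence, and sample data are illustrative assumptions, not LASCO values:

```python
import numpy as np

# Per-frame speeds from the change of the CME front height, using
# either the mean or the maximum of the frontier sample points.

def front_speeds(front_heights_px, cadence_s, km_per_pixel, mode="mean"):
    """front_heights_px: list of 1-D arrays, one per frame, holding
    the radial heights (pixels) of the sampled frontier points."""
    agg = np.mean if mode == "mean" else np.max
    h = np.array([agg(f) for f in front_heights_px]) * km_per_pixel
    return np.diff(h) / cadence_s          # km/s between frames
```

Using the maximum (`mode="max"`) reproduces the extreme-frontier-point estimate, which is systematically higher than the mean-based one, consistent with the discussion above.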

6. Discussion and Conclusion

In this paper, we have developed a new method that is capable of detecting, tracking, and calculating the information of CMEs in SOHO/LASCO C2 pseudocolor coronagraph images. The basic algorithm includes the following: (i) establishing and maintaining the background model of the coronal image sequences, (ii) detecting the candidate regions of CME based on Bayesian theorem, (iii) identifying the CME events, and (iv) calculating the information of CME events.

This novel method is based on adaptive background learning technology: by modeling the background with both static and dynamic characteristics, it describes the complex background well, especially the dynamic changes in the background. The proposed method therefore has a clear advantage in detecting CMEs in regions that overlap with helmet streamers. At the same time, because the background model is learned, information from multiple frames is accumulated; in this way, the influence of single-frame image noise on the results can be suppressed, enhancing the robustness of CME detection. Our CME event identification method is based on the candidate CME area and uses the fact that a CME region always enlarges gradually; on the one hand, this avoids the effect of noise, and on the other hand, it can effectively track a complete CME event. Finally, from the detected region information in each frame, it is convenient and effective to extract the morphological and motion information of the CME event.

Automated methods such as CACTus, SEEDS, and CORIMP have a low detection rate of CMEs compared with the CDAW catalog produced by human observers. This is mainly because the manual cataloging of recent years has included poor (weak) CME events, especially weak events in the helmet streamers. New approaches are therefore needed to detect subtle changes in dynamic scenes, and the method we propose performs well in this respect.

Similar to other automated methods, the biggest problem in the adaptive background learning method is the estimation of the various parameters and thresholds, such as the quantization levels of the pixel information, the learning rate, and the foreground detection threshold. For example, a small foreground detection threshold reduces the false-negative rate but also affects the accuracy rate. The selection of these empirical values thus has a certain effect on the algorithm, and further investigation will be carried out in these areas. We also plan to apply the method to corona images acquired by other instruments.

Data Availability

The SOHO/LASCO data used to support the findings of this study are available from the SOHO/LASCO Instrument Homepage (http://lasco-www.nrl.navy.mil/).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 11603016 and 11873062), the Key Scientific Research Foundation Project of Southwest Forestry University (Grant no. 111827), and the Open Research Program of the CAS Key Laboratory of Solar Activity, National Astronomical Observatories (KLSA201909). SOHO is a project of international cooperation between ESA and NASA. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), the Max-Planck-Institut für Aeronomie (Germany), the Laboratoire d’Astronomie Spatiale (France), and the University of Birmingham (UK). The authors acknowledge the use of the CME catalog generated and maintained at the CDAW Data Center by NASA and the Catholic University of America in cooperation with the Naval Research Laboratory. The CACTus CME catalog is generated and maintained by the SIDC at the Royal Observatory of Belgium. The SEEDS CME catalog has been supported by the NASA Living with a Star Program and the NASA Applied Information Systems Research Program. The CORIMP CME catalog has been provided by the Institute for Astronomy, University of Hawaii.