Yuanli Ma is currently a master's student at the University of Science and Technology of China. Her research mainly focuses on big data problems.
Yang Li is currently a postdoctoral researcher at the University of Science and Technology of China (USTC). He received his Ph.D. degree in Statistics from USTC in 2021. His research interests include high-dimensional statistical inference and distributed learning.
Graphical Abstract
Statistical inference for coefficient matrix in high-dimensional multi-task regression.
Abstract
Regression problems among multiple responses and predictors have been widely employed in many applications, such as biomedical sciences and economics. In this paper, we focus on statistical inference for the unknown coefficient matrix in high-dimensional multi-task learning problems. The new statistic is constructed in a row-wise manner based on a two-step projection technique, which improves the inference efficiency by removing the impacts of important signals. Based on the established asymptotic normality for the proposed two-step projection estimator (TPE), we generate corresponding confidence intervals for all components of the unknown coefficient matrix. The performance of the proposed method is presented through simulation studies and a real data analysis.
Public Summary
We propose a two-step projection estimator for statistical inference in high-dimensional multi-task learning problems.
We establish the asymptotic properties of the proposed estimator.
The performance of our method is presented through simulation studies and a TCGA-OV dataset.
1. Introduction
With the advent of massive data in recent years, great attention has been paid to identifying relationships among multiple responses and predictors in various applications, including multi-task learning in machine learning[1–3], imaging genetics[4, 5] and genetic association[6, 7]. Specifically, in cancer genomic studies, some miRNAs are known to regulate the protein expression of various genes in cellular processes, and their dysregulation plays a crucial role in human cancer. Hence, investigating the relationship between miRNA expression and protein expression is of great importance for human cancer diagnosis[8, 9]. A standard approach for quantifying these relationships is to fit a univariate regression model for each response separately via least squares estimation. Although it is easy to implement, this method ignores the dependence among the response variables, and it is also not applicable to high-dimensional cases. Thus, it is necessary to develop statistical tools that account for the dependence structure of the responses in multi-response regression under high-dimensional settings.
Enormous effort has been devoted to variable selection and coefficient estimation in high-dimensional multi-response regression. One popular approach is the reduced-rank regression model, including some foundational works[10–12]. Based on this framework, several methods that take a latent-factor point of view have been proposed to estimate the unknown coefficient matrix[13, 14]. Another class of methods relies on different kinds of regularization[15–18]. Specifically, one line of research on regularization methods exploits structural prior knowledge of the coefficient matrix, that is, the set of variables is assumed to be structured into several groups. Please refer to Refs. [19–21] for more details about this kind of method.
To further quantify uncertainty, some important progress has been made in high-dimensional multi-response settings. For instance, Greenlaw et al.[5] developed a hierarchical Bayesian model and constructed confidence intervals for the unknown coefficient matrix. More recently, by generalizing the low-dimensional projection estimation (LDPE)[22] for the univariate response case, Chevalier et al.[23] proposed the desparsified multi-task lasso method and applied it to source imaging. However, when deriving the asymptotic distributions of the estimators, this approach requires the number of nonzero rows of the coefficient matrix to satisfy s = o(\sqrt{n}/\log (p)) for p covariates and n samples. To alleviate this requirement, we adopt the two-step projection technique proposed by Li et al.[24], which treats important variables and the other variables differently and can greatly improve the inference efficiency.
In this paper, we aim to develop a new methodology for statistical inference in the setting of high-dimensional multi-task regression. By taking the group structure of the unknown coefficient matrix into consideration, the proposed estimator is constructed in a row-wise manner based on the two-step projection technique, which enjoys the benefit of reducing the estimation bias induced by the important signals. Under suitable conditions, we establish the asymptotic normality of the proposed two-step projection estimator, along with corresponding confidence intervals for all components of the unknown coefficient matrix. Moreover, the satisfactory numerical performance of the proposed method strongly supports the theoretical results.
The rest of the paper is organized as follows. Section 2 presents the model setting and the new inference procedure for high-dimensional multi-response regression. Theoretical properties are established in Section 3. Numerical results and a real data analysis are provided in Sections 4 and 5, respectively. We conclude this work and discuss possible future work in Section 6. The proofs of the main theoretical results are relegated to the Appendix.
Notations. For any matrix {\boldsymbol{B}} = ( {{\boldsymbol{b}}^1}^{\top},\cdots, {{\boldsymbol{b}}^p}^{\top})^{\top} = ( {\boldsymbol{b}}_1,\cdots, {\boldsymbol{b}}_q) = (b_{i,j}) \in {\mathbb{R}}^{p \times q}, denote by {\boldsymbol{b}}^i and {\boldsymbol{b}}_j its i th row and j th column, respectively. We use \| {\boldsymbol{B}}\| = \left( {\displaystyle \sum\limits_{i = 1}^{p} \sum\limits_{j = 1}^{q} b_{i,j}^2} \right)^{1/2} to denote the Frobenius norm of {\boldsymbol{B}}. Given any index set K, {\boldsymbol{B}}_K denotes the submatrix of {\boldsymbol{B}} consisting of the columns indexed by K. We write \| {\boldsymbol{B}}\|_{\ell_{2,1}} = {\displaystyle \sum\limits_{i = 1}^{p}} \| {\boldsymbol{b}}^i\| and \| {\boldsymbol{B}}\|_{\ell_{2,\infty}} = \max_{1 \leqslant i \leqslant p} \| {\boldsymbol{b}}^i\|, where \| {\boldsymbol{b}}^i\| = \left( {\displaystyle \sum\limits_{j = 1}^{q} b_{i,j}^2} \right)^{1/2} denotes the Euclidean norm of the vector {\boldsymbol{b}}^i. Denote by \mathrm{supp}( {\boldsymbol{B}}) = \{ i \in \{1,\cdots,p \} : {\boldsymbol{b}}^i \neq {\bf{0}} \} the set of nonzero rows of {\boldsymbol{B}} with size |\mathrm{supp}( {\boldsymbol{B}})|.
2. Inference procedure via two-step projection estimator
2.1 Model setting
Consider the multi-task learning problem with the high-dimensional multi-response linear regression model
{\boldsymbol{y}} = {\boldsymbol{B}}^{\top} {\boldsymbol{x}} + {\boldsymbol{e}}, \qquad (1)
where {\boldsymbol{y}} = (y_{1}, \cdots, y_{q})^{\top} is a q -dimensional response vector, {\boldsymbol{B}} = ( {{\boldsymbol{b}}^1}^{\top},\cdots, {{\boldsymbol{b}}^p}^{\top})^{\top} = ( {\boldsymbol{b}}_1,\cdots, {\boldsymbol{b}}_q) = (b_{i,j}) \in {\mathbb{R}}^{p \times q} is the unknown coefficient matrix, {\boldsymbol{x}} = (x_{1}, ... , x_{p})^{\top} is the p -dimensional covariate vector, and {\boldsymbol{e}} is the q -dimensional error vector, which is independent of {\boldsymbol{x}}. Following Li et al.[25], the dimension p is allowed to be much larger than the sample size n , and the dimension q is regarded as a fixed number in this paper.
Suppose we have n independent observations ( {\boldsymbol{x}}^{i}, {\boldsymbol{y}}^{i})_{i = 1}^n from ( {\boldsymbol{x}}, {\boldsymbol{y}}) in model (1). Using matrix notation, the multi-response linear regression model (1) can be rewritten as
{\boldsymbol{Y}} = {\boldsymbol{X}} {\boldsymbol{B}} + {\boldsymbol{E}}, \qquad (2)
where {\boldsymbol{Y}} = ( {\boldsymbol{y}}_1,\cdots, {\boldsymbol{y}}_q) \in {\mathbb{R}}^{n \times q} is the response matrix, {\boldsymbol{X}} = ( {\boldsymbol{x}}_1,\cdots, {\boldsymbol{x}}_p) \in {\mathbb{R}}^{n \times p} is the design matrix, and {\boldsymbol{E}} = ( {\boldsymbol{e}}_1,\cdots, {\boldsymbol{e}}_q) = (e_{i,j})\in {\mathbb{R}}^{n \times q} is the random error matrix that is independent of {\boldsymbol{X}}. Without loss of generality, we assume that the e_{i,j}'s are independent and identically distributed random variables with mean zero and variance \sigma^2 .
We aim to construct confidence intervals for the coefficients in {\boldsymbol{B}}. Different from the classical assumption that {\boldsymbol{B}} is row-sparse, we allow {\boldsymbol{B}} to have a more complex sparsity structure. More specifically, we first establish the relationship between the distance correlation[25, 26] and sparsity structure. Define the population quantity
with 1\leqslant i \leqslant p for the effects caused by {\boldsymbol{b}}^i. Here, the distance correlation \mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) between two random vectors {\boldsymbol{u}} \in {\mathbb{R}}^{d_{u}} and {\boldsymbol{v}} \in {\mathbb{R}}^{d_{v}} is defined as
where the constant c_{d} = \dfrac{\pi^{(1+d)/2}}{\Gamma((1+d)/2)} for d = d_{u}, d_{v} , and f_{ {\boldsymbol{u}}, {\boldsymbol{v}}}( {\boldsymbol{t}}, {\boldsymbol{s}}), f_{ {\boldsymbol{u}}}( {\boldsymbol{t}}) and f_{ {\boldsymbol{v}}}( {\boldsymbol{s}}) are the characteristic functions of ( {\boldsymbol{u}}, {\boldsymbol{v}}), {\boldsymbol{u}} and {\boldsymbol{v}}, respectively. Please refer to Ref. [26] for more details about the distance correlation.
Because \mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) = 0 if and only if {\boldsymbol{u}} and {\boldsymbol{v}} are independent, we establish the following relationship between the population quantity \omega_{i} and the structure of {\boldsymbol{B}}:
\omega_{i} = 0\Leftrightarrow {\boldsymbol{b}}^i = {\bf{0}}, \ \ \ \ 1\leqslant i \leqslant p.
Similar to Li et al.[25], we regard x_i as an important predictor if \omega_i\geqslant cn^{-\kappa}, where c>0 and 0\leqslant \kappa < 1/2 are some constants. Moreover, we define the index set of all important predictors as
S_A = \{ 1 \leqslant i \leqslant p : \omega_{i} \geqslant c n^{-\kappa} \}.
The indices of the remaining unimportant predictors are collected in S_A^c . Correspondingly, any {\boldsymbol{b}}^i with i\in S_A is regarded as an important signal in the form of a vector, while any {\boldsymbol{b}}^i with i\in S_A^c is treated as a weak signal.
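For readers who wish to experiment with the screening quantity \omega_i, the following Python sketch computes the sample distance correlation of Székely et al.[26] between each covariate and the multivariate response and keeps the covariates with the largest values. The function names and the use of numpy/scipy are our own illustration rather than the DC-SIS implementation of Ref. [25]; note that ranking by dcorr or by its square yields the same prescreened set.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def dcorr(u, v):
    """Sample distance correlation between u (n,) or (n, d_u) and v (n, d_v)."""
    u = np.atleast_2d(np.asarray(u, dtype=float).T).T   # ensure shape (n, d_u)
    v = np.atleast_2d(np.asarray(v, dtype=float).T).T
    a = squareform(pdist(u))                             # pairwise Euclidean distances
    b = squareform(pdist(v))
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2_uv = max((A * B).mean(), 0.0)                  # squared distance covariance
    dcov2_uu = (A * A).mean()
    dcov2_vv = (B * B).mean()
    if dcov2_uu * dcov2_vv == 0:
        return 0.0
    return np.sqrt(dcov2_uv) / (dcov2_uu * dcov2_vv) ** 0.25

def prescreen(X, Y, size):
    """Rank covariates by omega_i = dcorr(x_i, y) and keep the top `size` indices."""
    omega = np.array([dcorr(X[:, i], Y) for i in range(X.shape[1])])
    return set(np.argsort(omega)[::-1][:size].tolist())
```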
2.2 Two-step projection estimator
By borrowing ideas from Li et al.[24], the proposed estimator is constructed in a row-wise manner, i.e.,
for any j \in \{1,\cdots,p\}, where the two-step projection residual vector {\boldsymbol{z}}_j is defined by the following two-step procedure:
Step 1: Given a prescreened set \hat{S} for the identifiable signals in {\boldsymbol{B}}, we obtain the following residual vector by an exact orthogonalization of {\boldsymbol{x}}_k against {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}:
{\boldsymbol{\psi}}_{k}^{(j)} = ( {\boldsymbol{I}}_n - {\boldsymbol{P}}_{\hat{S}\backslash \{j\}}) {\boldsymbol{x}}_{k}, \qquad (4)
where {\boldsymbol{P}}_{\hat{S}\backslash \{j\}} = {\boldsymbol{X}}_{\hat{S}\backslash \{j\}} ( {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}^{\top} {\boldsymbol{X}}_{\hat{S}\backslash \{j\}})^{-1} {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}^{\top} is the projection matrix onto the column space of {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}.
Step 2: Then, {\boldsymbol{z}}_{j} is constructed as the residual of the lasso regression of {\boldsymbol{\psi}}_{j}^{(j)} against {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} . That is,
{\boldsymbol{z}}_{j} = {\boldsymbol{\psi}}_{j}^{(j)} - {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} { \hat{\boldsymbol{\nu}}}_{j}(\lambda_{j}), \qquad (5)
where {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} is the matrix composed of column vectors {\boldsymbol{\psi}}_{k}^{(j)} for k \in \hat{S}^{c}\backslash \{j\} , and {\hat{\boldsymbol{\nu}}}_{j}(\lambda_{j}) is the lasso estimator depending on the regularization parameter \lambda_{j} .
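To make the two-step construction of {\boldsymbol{z}}_j concrete, the Python sketch below first projects the covariates exactly onto the orthogonal complement of the prescreened columns and then takes the lasso residual. sklearn's Lasso is only a stand-in for the solver, and its alpha parameter is scaled differently from \lambda_{j} in Eq. (5), so this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_step_residual(X, S_hat, j, lam_j):
    """Two-step projection residual z_j for covariate j (illustrative sketch)."""
    n, p = X.shape
    S = sorted(set(S_hat) - {j})                       # prescreened set without j
    Sc = [k for k in range(p) if k not in S_hat and k != j]

    # Step 1: exact orthogonalization against X_{S_hat \ {j}}.
    if S:
        XS = X[:, S]
        P = XS @ np.linalg.solve(XS.T @ XS, XS.T)      # projection onto col(X_S)
        Psi = X - P @ X                                # psi_k^{(j)} for every column k
    else:
        Psi = X.copy()

    # Step 2: lasso residual of psi_j^{(j)} against the remaining orthogonalized columns.
    if Sc:
        fit = Lasso(alpha=lam_j, fit_intercept=False).fit(Psi[:, Sc], Psi[:, j])
        z_j = Psi[:, j] - Psi[:, Sc] @ fit.coef_
    else:
        z_j = Psi[:, j]
    return z_j
```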
Based on the above two-step strategy, the two-step projection residual vector {\boldsymbol{z}}_{j} satisfies the following two properties:
(a) It is strictly orthogonal to {\boldsymbol{X}}_{\hat{S} \backslash \{j\}}, which consists of the important covariates.
(b) It is orthogonal in a relaxed (regularized) sense to {\boldsymbol{X}}_{\hat{S}^{c} \backslash \{j\}}, which consists of the other covariates.
This kind of hybrid orthogonalization brings the benefit of reducing the estimation bias of { \widehat{\boldsymbol{b}}}^j. To see this, plugging model (2) into (3) yields
It is easy to see from the above that the influence of the identifiable signals in \hat{S} \backslash \{j\} is eliminated from the bias term of the estimation error.
Let \tau_{j} = \| {\boldsymbol{z}}_{j}\| / | {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}|. Then the confidence interval for b_{j,\,k} is constructed as
{CI}_{j,k} = \left[ \hat{b}_{j,k} - \varPhi^{-1}(1-\alpha/2) \hat{\sigma} \tau_{j},\ \hat{b}_{j,k} + \varPhi^{-1}(1-\alpha/2) \hat{\sigma} \tau_{j} \right], \qquad (6)
for any j = 1,\cdots,p and k = 1,\cdots,q, where \varPhi denotes the standard normal distribution function, \alpha is the significance level, and \hat{\sigma} is a consistent estimator of \sigma . Following Chevalier et al.[23] and Reid et al.[27], in this paper, we suggest the cross-validation-based variance estimator
where \hat {\boldsymbol{e}}_t is the t th column of { \hat{\boldsymbol{E}}} = {\boldsymbol{Y}}- {\boldsymbol{X}} \hat {\boldsymbol{B}}_{\rm CV} with \hat {\boldsymbol{B}}_{\rm CV} being the multivariate group lasso estimator tuned by cross-validation, and \hat{s} = |{\rm supp}( \hat {\boldsymbol B}_{\rm CV})| denotes the number of nonzero rows of \hat {\boldsymbol{B}}_{\rm CV}.
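A pooled version of the residual-based variance estimator of Reid et al.[27] might be sketched as follows; pooling the residual sums of squares across the q tasks and the degrees-of-freedom correction n - \hat{s} are our reading of the description above (an assumption), not a formula quoted verbatim from the paper.

```python
import numpy as np

def estimate_sigma(Y, X, B_cv):
    """Cross-validation-based noise level estimate (hedged sketch).

    B_cv: p x q multivariate group lasso estimate tuned by cross-validation.
    The pooling over tasks and the (n - s_hat) correction are assumptions.
    """
    n, q = Y.shape
    E_hat = Y - X @ B_cv
    s_hat = int(np.sum(np.linalg.norm(B_cv, axis=1) > 0))   # nonzero rows of B_cv
    sigma2 = (E_hat ** 2).sum() / (q * max(n - s_hat, 1))
    return float(np.sqrt(sigma2))
```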
Note that a prescreened set \hat{S} for the identifiable signals in {\boldsymbol{B}} is necessary for the proposed method. In this paper, we suggest utilizing DC-SIS[25] to obtain a suitable prescreened set \hat{S}, and we provide the sure screening property of DC-SIS in Proposition 1. The proposed TPE method is summarized in Algorithm 1. However, we cannot guarantee that all truly important signals are retained in practice. In view of this potential scenario, we also propose a variant two-step estimator based on the self-bias correction idea.
Given a preliminary estimate such as the multivariate group lasso estimate \hat{\boldsymbol{B}}_{0} = ( {\hat{\boldsymbol{b}}_{0}^{1}}^{\top}, \cdots, {\hat{\boldsymbol{b}}_{0}^{p}}^{\top} )^{\top}, the variant two-step estimator \hat{\boldsymbol{B}}_{\rm V} = ( {\hat{\boldsymbol{b}}_{\rm V}^{1}}^{\top}, \cdots, {\hat{\boldsymbol{b}}_{\rm V}^{p}}^{\top} )^{\top} can be defined through each row as
If some truly important features are omitted in the prescreening step, the variant procedure can be a reliable choice since the magnitude of the bias term depends on \displaystyle \sum\nolimits_{k \in \hat{S}^{c}, k \neq j} \| {\boldsymbol{b}}^k - { \hat{\boldsymbol{b}}}_{0}^k\| instead of the diverging term \displaystyle \sum\nolimits_{k \in \hat{S}^{c}, k \neq j} \| {\boldsymbol{b}}^k\|.
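Since the display for the variant estimator is not reproduced above, the following sketch should be read as one plausible self-bias-corrected form in the spirit of standard debiasing with an initial estimate \hat{\boldsymbol{B}}_{0}. It illustrates why the residual bias involves {\boldsymbol{b}}^k - \hat{\boldsymbol{b}}_{0}^k rather than {\boldsymbol{b}}^k, but it is not guaranteed to coincide with the authors' exact formula.

```python
import numpy as np

def variant_row_estimate(Y, X, B0, z_j, j):
    """Self-bias-corrected estimate of the j-th row (assumed debiasing form).

    B0 is an initial estimate such as the multivariate group lasso; the correction
    projects the residual Y - X @ B0 onto the two-step residual z_j.
    """
    correction = z_j @ (Y - X @ B0) / (z_j @ X[:, j])   # 1 x q correction for row j
    return B0[j, :] + correction
```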
Algorithm 1 TPE algorithm
Require: {\boldsymbol{X}} \in {\mathbb{R}}^{n \times p}, {\boldsymbol{Y}} \in {\mathbb{R}}^{n \times q}, a prescreened set \hat{S}, a significance level \alpha;
Ensure: {CI}_{j,k} for j = 1,\cdots,p and k = 1,\cdots,q.
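Since only the inputs and outputs of Algorithm 1 are shown above, the following end-to-end sketch pieces the procedure together: for each covariate j it builds {\boldsymbol{z}}_j (reusing the two_step_residual helper sketched in Section 2.2), forms a row-wise estimate, and returns the confidence intervals in (6). The plain projection form {\boldsymbol{Y}}^{\top} {\boldsymbol{z}}_j / ( {\boldsymbol{z}}_j^{\top} {\boldsymbol{x}}_j) used for the row-wise estimate is our reading of the estimator in Eq. (3) and should be checked against the paper.

```python
import numpy as np
from scipy.stats import norm

def tpe_confidence_intervals(X, Y, S_hat, alpha, sigma_hat, lam):
    """End-to-end sketch of Algorithm 1 (illustrative, not the authors' code)."""
    n, p = X.shape
    q = Y.shape[1]
    z_crit = norm.ppf(1 - alpha / 2)
    B_hat = np.zeros((p, q))
    lower = np.zeros((p, q))
    upper = np.zeros((p, q))
    for j in range(p):
        z_j = two_step_residual(X, S_hat, j, lam)   # two-step projection residual
        denom = z_j @ X[:, j]
        b_row = (Y.T @ z_j) / denom                 # row-wise estimate (assumed form)
        tau_j = np.linalg.norm(z_j) / abs(denom)    # noise factor tau_j
        half = z_crit * sigma_hat * tau_j           # half-width of the CI in (6)
        B_hat[j, :] = b_row
        lower[j, :] = b_row - half
        upper[j, :] = b_row + half
    return B_hat, lower, upper
```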
3. Theoretical properties
In this section, we provide statistical properties of the proposed TPE method. First, we state some conditions on the model.
Condition 1. The rows of {\boldsymbol{X}} are independent and identically distributed (i.i.d.) from N( {\bf{0}},{\boldsymbol{\varSigma}}_{X}), and the eigenvalues of {\boldsymbol{\varTheta}} = {\boldsymbol{\varSigma}}_{X}^{-1} = (\theta_{i,j}) are bounded within the interval [1/L,L] for some L \geqslant 1.
Condition 2. s^{*} = \max_{1 \leqslant j \leqslant p} s_{j} = o(n/\log (p)), where s_{j} = |\{ k \in \{1,\cdots,p\}: k \neq j, {\theta}_{j,k} \neq 0 \}| is the sparsity with respect to the rows of {\boldsymbol{\varTheta}}.
Condition 3. s = o(\max\{n/\log (p), n/s^{*} \}) with s = |S_{A}| , and \displaystyle \sum\nolimits_{k \in S_{A}^{c}} \| {\boldsymbol{b}}^k\| = o(1/\sqrt{\log (p)}).
Conditions 1 and 2 are the same as those in Ref. [24], which provide theoretical guarantees for estimation and prediction consistency in the two-step procedure. The first part of Condition 1 is a common Gaussian assumption, which can be relaxed to a general case such as sub-Gaussian. The second part of Condition 1 assumes that the eigenvalues of {\boldsymbol{\varTheta}} are well bounded from below and above, which is used for characterizing the identifiability of the design matrix {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} in Eq. (5) based on the bounded sign-restricted cone invertibility factor[28]. Condition 2 imposes a typical constraint on the maximum column sparsity of {\boldsymbol{\varTheta}}, which is needed to guarantee consistent estimation in the two-step procedure.
Condition 3 reflects the main contribution of the proposed method. The first part of this condition allows the number of identifiable signals to be of order s = o(\max\{n/\log (p), n/s^{*} \}) , which is much weaker than the order o(\sqrt{n}/\log (p)) required in Ref. [23]. Moreover, the order of s can be much larger than o(n/\log (p)) if s^{*} \ll \log(p) . The second part of Condition 3 imposes a constraint on the weak signals in S_{A}^{c} , which guarantees that the influence of the weak signals does not break the inference procedure. Next, the following definition characterizes the theoretical properties of a suitable prescreened set \hat{S}.
Definition 1 (suitable set). \hat{S} is called a suitable prescreened set if it satisfies: (a) \hat{S} is independent of ( {\boldsymbol{X}}, {\boldsymbol{Y}}); (b) with probability at least 1-\epsilon_{n,p} , S_{A} \subset \hat{S} and |\hat{S}| = O(s) , where \epsilon_{n,p} is asymptotically vanishing.
Definition 1 is similar to the definition of an acceptable set in Ref. [24]. The first part of this definition is applied to eliminate the dependence between \hat{S} and the proposed estimator, which can be achieved by the common sample splitting technique. The second part of this definition assumes the sure screening property of \hat{S} , which can be justified by the following proposition.
Proposition 1. Under Condition 1, there exists a constant c_{0}>0 such that
for any j = 1,\ldots, p , where {\boldsymbol{\varLambda}}_{j,\cdot} denotes the j th row of {\boldsymbol{\varLambda}}.
Part b: Further assume that Conditions 1–3 hold. For some constants \epsilon > 0 and \delta \geqslant 1, let \lambda_{j} = (1+\epsilon)\sqrt{2\delta \log(p)/(n\theta_{j,j})} with j = 1,\ldots, p , where \lambda_{j} is the regularization parameter in Eq. (5). Then, for any j = 1,\ldots, p , with probability at least 1-\epsilon_{n,p}-o(p^{1-\delta}) , we have
The first part of Theorem 1 shows that the error \sqrt{n}( \hat {\boldsymbol{B}}- {\boldsymbol{B}}) can be decomposed into a Gaussian term {\boldsymbol{\varLambda}} with zero mean and a bias term {\boldsymbol{\varDelta}}. The second part of this theorem shows that the bias term {\boldsymbol{\varDelta}} is asymptotically negligible with high probability. In particular, we prove that \hat{\theta}_{j,j}^{1/2} converges to {\theta}_{j,j}^{-1/2} with asymptotic probability one, which ensures the effectiveness of the inference in terms of the length of the confidence interval. It is worth noting that the product of the noise term \tau_{j} and n^{1/2} converges to the same constant as that of the estimator in Ref. [23] with asymptotic probability one. Since the noise factor is proportional to the variance of the estimator, the lengths of the confidence intervals for the two estimators are theoretically equal. Based on this theorem, we immediately obtain the asymptotic properties for all elements of the proposed two-step projection estimator, as shown in the following corollary.
Corollary 1. Under the conditions in Theorem 1, with a given significance level \alpha , we further have
Note that Corollary 1 still holds if the noise level \sigma is replaced by a consistent estimator \hat{\sigma}. Therefore this corollary provides theoretical guarantees for the constructed confidence interval in (6).
4. Simulation studies
In this section, we conduct simulation studies to investigate the performance of the proposed method compared with the generalization of LDPE[22] for multi-task regression[23] (denoted by MLDPE for simplicity). The implementation of \hat {\boldsymbol{B}}_{\rm CV} with \ell_{2,1} group regularization is performed via the R package RMTL[29]. Based on \hat {\boldsymbol{B}}_{\rm CV}, we obtain a consistent estimator \hat{\sigma} for \sigma by applying the method stated in Section 2.2. Moreover, DC-SIS[25] is utilized to obtain a suitable prescreened set \hat{S} for those important predictors. To accurately control the size of \hat{S}, we take the least squares estimates on the subsets of \hat{S} and then use a BIC-type criterion[30] to choose the best subset.
For the generation of the design matrix and the error matrix, we conduct simulations based on the following two models in both the sparse setting and the approximately sparse setting.
Model 1: The rows of the design matrix {\boldsymbol{X}} are sampled as independent and identically distributed copies from N( {\bf{0}},{\boldsymbol{\varSigma}}_{X}), where {\boldsymbol{\varSigma}}_{X} = (0.5^{|i-j|})_{p\times p}. The entries of {\boldsymbol{E}} are independent and identically distributed N(0,\sigma^{2}) with \sigma = 1 .
Model 2: The entries of the design matrix {\boldsymbol{X}} = (x_{i,j}) are Bernoulli random variables with a success probability of 0.8. All the columns of {\boldsymbol{X}} are centered to have zero mean. The entries of {\boldsymbol{E}} are generated from a t-distribution with 10 degrees of freedom.
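For reproducibility, the two data-generating mechanisms can be coded as follows; the function and variable names are ours, and the fixed random seed is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_model1(n, p, B, sigma=1.0):
    """Model 1: AR(1)-correlated Gaussian design, i.i.d. N(0, sigma^2) errors."""
    idx = np.arange(p)
    Sigma_X = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)
    E = sigma * rng.standard_normal((n, B.shape[1]))
    return X, X @ B + E

def simulate_model2(n, p, B):
    """Model 2: centered Bernoulli(0.8) design, t(10) errors."""
    X = rng.binomial(1, 0.8, size=(n, p)).astype(float)
    X -= X.mean(axis=0)                              # center each column
    E = rng.standard_t(df=10, size=(n, B.shape[1]))
    return X, X @ B + E
```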
We set the number of responses q = 200 . Then, the sample size n , the number of predictors p and the coefficient matrices {\boldsymbol{B}} in different settings are constructed as follows:
Sparse setting: We set (n,p) = (100,200), (150,400), (200,800), respectively. Similar to Ref. [31], the elements in the first five rows of {\boldsymbol{B}} are drawn from a uniform distribution on [-5,-1]\cup[1,5] , and the elements in other rows are set to be 0.
Approximately sparse setting: We set (n,p) = (100,400), (150,600), \,(200,1000), respectively. Similar to Refs. [22, 24], the j th important signal satisfies \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ} with \lambda_{\rm univ} = \sqrt{2\log(p)/n} for j = 40,\;80,\;120,\;160,\;200, and \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ}/j^{2} for all other j . More specifically, to generate the j th row of {\boldsymbol{B}} with \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ}, we first generate a q -dimensional vector {\boldsymbol{v}} = (v_{1}, \cdots, v_{q}) with entries v_{k} \sim U[0,1],\, k = 1,\ldots,q. Then, we normalize {\boldsymbol{v}} such that \| {\boldsymbol{v}}\| = 1. Finally, we set {\boldsymbol{b}}^j = 3\lambda_{\rm univ} {\boldsymbol{v}}.
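The coefficient matrices in the two settings can be generated as sketched below; the random-direction construction is described in the text only for the strong rows, so applying it to the weak rows as well is our assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_B_sparse(p, q, n_signal=5):
    """Sparse setting: first n_signal rows drawn from U([-5,-1] U [1,5]), the rest zero."""
    B = np.zeros((p, q))
    B[:n_signal] = rng.uniform(1, 5, (n_signal, q)) * rng.choice([-1.0, 1.0], (n_signal, q))
    return B

def make_B_approx_sparse(p, q, n, strong=(40, 80, 120, 160, 200)):
    """Approximately sparse setting: ||b^j|| = 3*lam_univ for strong rows, 3*lam_univ/j^2 otherwise."""
    lam_univ = np.sqrt(2 * np.log(p) / n)
    B = np.zeros((p, q))
    for j in range(1, p + 1):                        # 1-based row index, as in the paper
        v = rng.uniform(0, 1, q)
        v /= np.linalg.norm(v)                       # unit Euclidean norm direction
        scale = 3 * lam_univ if j in strong else 3 * lam_univ / j ** 2
        B[j - 1] = scale * v                         # weak-row construction is an assumption
    return B
```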
The primary purpose of our simulation is to yield the 95% confidence intervals for the regression coefficients {\boldsymbol{B}}. In each setting, we run 100 replications and calculate the same three performance measures as those in Ref. [24]: the average coverage probability for all regression coefficients (CPA), the average coverage probability for important coefficients (CPI), and the average length of confidence intervals for all regression coefficients (Length).
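Given the lower and upper interval limits from one replication, these three measures can be computed with a small helper; the function and argument names below are of our own choosing.

```python
import numpy as np

def performance_measures(B_true, lower, upper, important_rows):
    """CPA, CPI and average interval length for one replication."""
    covered = (lower <= B_true) & (B_true <= upper)
    cpa = covered.mean()                            # coverage over all p*q coefficients
    cpi = covered[important_rows, :].mean()         # coverage over the important rows only
    length = (upper - lower).mean()                 # average confidence interval length
    return cpa, cpi, length
```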
Tables 1 and 2 summarize the results for the two methods in different settings. Clearly, the results in the sparse setting and the approximately sparse setting are similar. In the Gaussian settings, it can be seen from the CPA (or CPI) values that the average coverage probabilities for all regression coefficients (or important coefficients) of the proposed method are approximately 95%, while those of MLDPE deviate slightly from 95%. In terms of Length, the average lengths of the confidence intervals of the two methods are roughly the same. In the non-Gaussian settings, the performance of both TPE and MLDPE tends to worsen.
Table 1. Comparison of performance measures for the two methods in the sparse setting.
In each setting, we further set \hat{\sigma} = 1 to calculate performance measures for comparison. We can see that the performance of the proposed method remains stable, while the performance of MLDPE fluctuates slightly. In summary, the proposed method outperforms MLDPE in both the sparse setting and the approximately sparse setting.
5. Application to TCGA-OV data
The Cancer Genome Atlas (TCGA) is a cancer genomics program incorporating clinical data on human cancers and tumor subtypes, including aberrations in gene expression, epigenetics (miRNAs, methylation), and protein expression. In this section, we apply our methodology to the ovarian serous cystadenocarcinoma (TCGA-OV) data downloaded from the TCGA website①. We regard the miRNA expression quantification obtained by the miRNA-seq experimental strategy as predictors and the protein expression quantification obtained by the reverse-phase protein array technique as responses. Our goal is to explore the regulatory effect of miRNAs on protein expression in ovarian cancer.
After preliminary data processing, the number of miRNAs was reduced to 1530, and the number of proteins to 216. We then utilized the DC-SIS method to perform feature screening on the predictors, selected the top 400 important miRNAs as predictors, and chose the 50 proteins with the largest variances as response variables. Finally, we obtained n = 300 common samples with p = 400 miRNAs as predictors and q = 50 proteins as responses. It is worth noting that both predictors and responses are centered and normalized to have zero mean and a common \ell_{2} -norm \sqrt{n} .
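The centering and rescaling step can be written as a small helper; rescaling every column to Euclidean norm \sqrt{n} is exactly the normalization stated above.

```python
import numpy as np

def center_and_normalize(M):
    """Center each column and rescale it to have Euclidean norm sqrt(n)."""
    n = M.shape[0]
    M = M - M.mean(axis=0)
    norms = np.linalg.norm(M, axis=0)
    norms[norms == 0] = 1.0                         # guard against constant columns
    return M * (np.sqrt(n) / norms)
```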
By applying the proposed method, we obtain 95% confidence intervals for all unknown coefficients. As a result, the top 45 important miRNAs are selected and listed in Table 3, 32 of which, highlighted in bold, are also chosen by MLDPE. Here, the importance of a miRNA is measured by the sum of the absolute values of its estimated coefficients over all 50 proteins. The selected miRNAs play important regulatory roles in cancer. For example, compared with normal ovaries, some miRNAs are dysregulated in ovarian cancer: among the selected miRNAs, hsa-mir-182, hsa-mir-200a, hsa-mir-223, and hsa-mir-16 are upregulated, while hsa-mir-432, hsa-mir-493, hsa-mir-9 and hsa-mir-377 are downregulated[32]. Due to their dysregulation, some of these miRNAs could potentially be used as diagnostic biomarkers, including hsa-mir-214 and hsa-mir-21[33].
Table 3. The 45 miRNAs selected by TPE. The miRNAs also selected by MLDPE are highlighted in bold.
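The ranking used to obtain Table 3 follows directly from the estimated coefficient matrix: predictors are ordered by the sum of absolute estimated coefficients across the 50 proteins, as in the following sketch (function name ours).

```python
import numpy as np

def rank_predictors(B_hat, top=45):
    """Order predictors by sum_k |b_hat_{j,k}| and return the indices of the top ones."""
    importance = np.abs(B_hat).sum(axis=1)
    order = np.argsort(importance)[::-1]
    return order[:top], importance[order[:top]]
```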
Experimental results show that the average length of the confidence intervals obtained by the TPE method is 0.4372, while the MLDPE method yields an average length of 0.4136, so the average lengths of the two methods are at a similar level. The top-ranked miRNA chosen by both TPE and MLDPE is hsa-mir-486-2, which was also identified as a potential biomarker for lung adenocarcinoma[34]. For the unknown coefficients of hsa-mir-486-2, Fig. 1 displays their estimates and the corresponding 95% confidence intervals over all 50 proteins, which further supports the important regulatory role of hsa-mir-486-2.
Figure 1. Estimates of the unknown coefficients of miRNA hsa-mir-486-2 (red squares for TPE and black dots for MLDPE) and the corresponding 95% confidence intervals (obtained by TPE) over all 50 proteins.
6. Conclusions
In this paper, we develop a new inference methodology based on the two-step projection estimator (TPE) in high-dimensional multi-task regression. The proposed estimator is constructed in a row-wise manner, which has the benefit of reducing the estimation bias induced by the important signals. In addition, we provide rigorous theoretical guarantees for our method, including asymptotic normality and corresponding confidence intervals. Moreover, numerical results indicate that the proposed method works quite well. Specifically, we apply this approach to an ovarian cancer dataset and identify several miRNAs that are closely associated with protein expression levels. The results demonstrate that these miRNAs can potentially serve as biomarkers in disease research, aiding in the diagnosis of ovarian cancer.
It would be interesting to extend our method to more general settings, such as multi-response linear regression models with measurement errors and generalized linear models. It is also of interest to build a relationship between the two-step projection technique and the partially penalized procedure in high-dimensional multi-response settings. These generalizations are interesting topics for future research.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (12101584), the China Postdoctoral Science Foundation (2021TQ0326, 2021M703100), Fundamental Research Funds for the Central Universities (WK2040000047), Hefei Postdoctoral Research Project Funds in 2021, and Anhui Postdoctoral Research Project Funds in 2021.
Conflict of interest
The authors declare that they have no conflict of interest.
References
[1]
Lounici K, Pontil M, Tsybakov A B, et al. Taking advantage of sparsity in multi-task learning. arXiv:0903.1468, 2009.
[2]
Obozinski G, Taskar B, Jordan M I. Joint covariate selection and joint subspace selection for multiple classification problems. Stat. Comput.,2010, 20 (2): 231–252. DOI: 10.1007/s11222-008-9111-x
[3]
Lounici K, Pontil M, Van De Geer S, et al. Oracle inequalities and optimal inference under group sparsity. Ann. Statist.,2011, 39 (4): 2164–2204. DOI: 10.1214/11-AOS896
[4]
Wang H, Nie F, Huang H, et al. Identifying quantitative trait loci via group-sparse multitask regression and feature selection: An imaging genetics study of the ADNI cohort. Bioinformatics,2012, 28 (2): 229–237. DOI: 10.1093/bioinformatics/btr649
[5]
Greenlaw K, Szefer E, Graham J, et al. A Bayesian group sparse multi-task regression model for imaging genetics. Bioinformatics,2017, 33 (16): 2513–2522. DOI: 10.1093/bioinformatics/btx215
[6]
Zhou J J, Cho M H, Lange C, et al. Integrating multiple correlated phenotypes for genetic association analysis by maximizing heritability. Human Heredity,2015, 79 (2): 93–104. DOI: 10.1159/000381641
[7]
Kim S, Sohn K-A, Xing E P. A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics,2009, 25 (12): i204–i212. DOI: 10.1093/bioinformatics/btp218
[8]
Mørk S, Pletscher-Frankild S, Palleja Caro A, et al. Protein-driven inference of miRNA-disease associations. Bioinformatics,2014, 30 (3): 392–397. DOI: 10.1093/bioinformatics/btt677
[9]
Gommans W M, Berezikov E. Controlling miRNA regulation in disease. In: Next-Generation MicroRNA Expression Profiling Technology: Methods and Protocols. Totowa, NJ: Humana Press, 2012: 1–18.
[10]
Izenman A J. Reduced-rank regression for the multivariate linear model. J. Multivariate Anal.,1975, 5 (2): 248–264. DOI: 10.1016/0047-259X(75)90042-1
[11]
Velu R, Reinsel G C. Multivariate Reduced-Rank Regression: Theory and Applications. New York: Springer Science & Business Media, 1998.
[12]
Anderson T W. Asymptotic distribution of the reduced rank regression estimator under general conditions. Ann. Statist.,1999, 27 (4): 1141–1154. DOI: 10.1214/aos/1017938918
[13]
Uematsu Y, Fan Y, Chen K, et al. SOFAR: Large-scale association network learning. IEEE Trans. Inform. Theory,2019, 65 (8): 4924–4939. DOI: 10.1109/TIT.2019.2909889
[14]
Zheng Z, Li Y, Wu J, et al. Sequential scaled sparse factor regression. J. Bus. Econom. Statist.,2022, 40 (2): 595–604. DOI: 10.1080/07350015.2020.1844212
[15]
Yuan M, Ekici A, Lu Z, et al. Dimension reduction and coefficient estimation in multivariate linear regression. Journal of the Royal Statistical Society, Series B: Statistical Methodology,2007, 69 (3): 329–346. DOI: 10.1111/j.1467-9868.2007.00591.x
[16]
Bunea F, She Y, Wegkamp M H. Joint variable and rank selection for parsimonious estimation of high-dimensional matrices. Ann. Statist.,2012, 40 (5): 2359–2388. DOI: 10.1214/12-AOS1039
[17]
Chen L, Huang J Z. Sparse reduced-rank regression for simultaneous dimension reduction and variable selection. J. Amer. Statist. Assoc.,2012, 107 (500): 1533–1545. DOI: 10.1080/01621459.2012.734178
[18]
Chen K, Chan K-S, Stenseth N C. Reduced rank stochastic regression with a sparse singular value decomposition. Journal of the Royal Statistical Society, Series B: Statistical Methodology,2012, 74 (2): 203–221. DOI: 10.1111/j.1467-9868.2011.01002.x
[19]
Obozinski G, Wainwright M J, Jordan M I. Support union recovery in high-dimensional multivariate regression. Ann. Statist.,2011, 39 (1): 1–47. DOI: 10.1214/09-AOS776
[20]
Turlach B A, Venables W N, Wright S J. Simultaneous variable selection. Technometrics,2005, 47 (3): 349–363. DOI: 10.1198/004017005000000139
[21]
Quattoni A, Carreras X, Collins M, et al. An efficient projection for ℓ1, ∞ regularization. In: Proceedings of the 26th Annual International Conference on Machine Learning. New York: ACM, 2009: 857–864.
[22]
Zhang C-H, Zhang S S. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society Series B: Statistical Methodology,2014, 76 (1): 217–242. DOI: 10.1111/rssb.12026
[23]
Chevalier J-A, Salmon J, Gramfort A, et al. Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task lasso. In: Advances in Neural Information Processing Systems 33. Red Hook, NY: Curran Associates, Inc., 2020: 1759–1770.
[24]
Li Y, Zheng Z, Zhou J, et al. High-dimensional inference via hybrid orthogonalization. arXiv:2111.13391, 2021.
[25]
Li R, Zhong W, Zhu L. Feature screening via distance correlation learning. Journal of the American Statistical Association,2012, 107 (499): 1129–1139. DOI: 10.1080/01621459.2012.695654
[26]
Székely G J, Rizzo M L, Bakirov N K. Measuring and testing dependence by correlation of distances. Ann. Statist.,2007, 35 (6): 2769–2794. DOI: 10.1214/009053607000000505
[27]
Reid S, Tibshirani R, Friedman J. A study of error variance estimation in lasso regression. Statist. Sinica,2016, 26: 35–67. DOI: 10.5705/ss.2014.042
[28]
Ye F, Zhang C H. Rate minimaxity of the lasso and Dantzig selector for the ℓq loss in ℓr balls. Journal of Machine Learning Research,2010, 11: 3519–3540.
[29]
Cao H, Zhou J, Schwarz E. RMTL: an R library for multi-task learning. Bioinformatics,2019, 35 (10): 1797–1798. DOI: 10.1093/bioinformatics/bty831
[30]
Sakurai T, Fujikoshi Y. High-dimensional properties of information criteria and their efficient criteria for multivariate linear regression models with covariance structures. 2017. http://www.math.sci.hiroshima-u.ac.jp/stat/TR/TR17/TR17-13.pdf. Accessed August 1, 2022
[31]
Li Y, Nan B, Zhu J. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure. Biometrics,2015, 71 (2): 354–363. DOI: 10.1111/biom.12292
[32]
Aziz N B, Mahmudunnabi R G, Umer M, et al. MicroRNAs in ovarian cancer and recent advances in the development of microRNA-based biosensors. Analyst,2020, 145 (6): 2038–2057. DOI: 10.1039/c9an02263e
[33]
Wu Y D, Li Q, Zhang R S, et al. Circulating microRNAs: Biomarkers of disease. Clinica Chimica Acta,2021, 516: 46–54. DOI: 10.1016/j.cca.2021.01.008
[34]
Ren Z P, Hou X B, Tian X D, et al. Identification of nine microRNAs as potential biomarkers for lung adenocarcinoma. FEBS Open Bio,2019, 9 (2): 315–327. DOI: 10.1002/2211-5463.12572