[1] |
ROSIPAL R. Kernel-based regression and objective nonlinear measures to assess brain functioning[D]. Scotland: University of Paisley, 2001.
|
[2] |
SCHÖLKOPF B, SMOLA A, MÜLLER K R. Nonlinear component analysis as a kernel eigenvalue problem[J]. Neural Computation, 1998, 10(5): 1299-1319.
|
[3] |
FINE S, SCHEINBERG K. Efficient SVM training using low-rank kernel representations[J]. Journal of Machine Learning Research, 2001, 2: 243-264.
|
[4] |
WILLIAMS C K I, SEEGER M. Using the Nyström method to speed up kernel machines[C]// Proceedings of the 13th Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2000: 661-667.
|
[5] |
BACH F. Sharp analysis of low-rank kernel matrix approximations[C]// Proceedings of the 26th Conference on Learning Theory. Princeton: PMLR, 2013, 30: 185-209.
|
[6] |
ALAOUI A E, MAHONEY M W. Fast randomized kernel ridge regression with statistical guarantees[C]// Proceedings of the 28th Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 775-783.
|
[7] |
RUDI A, CAMORIANO R, ROSASCO L. Less is more: Nyström computational regularization[C]// Proceedings of the 28th Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 1657-1665.
|
[8] |
RAHIMI A, RECHT B. Random features for large-scale kernel machines[C]// Proceedings of the 20th Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2007: 1177-1184.
|
[9] |
YAO Y, ROSASCO L, CAPONNETTO A. On early stopping in gradient descent learning[J]. Constructive Approximation, 2007, 26(2): 289-315.
|
[10] |
BLANCHARD G, KRÄMER N. Optimal learning rates for kernel conjugate gradient regression[C]// Proceedings of the 23rd Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2010: 226-234.
|
[11] |
ZHANG Y, DUCHI J, WAINWRIGHT M. Divide and conquer kernel ridge regression: A distributed algorithm with minimax optimal rates[J]. Journal of Machine Learning Research, 2015, 16: 3299-3340.
|
[12] |
GU Q, HAN J. Clustered support vector machines[C]// Proceedings of the 16th International Conference on Artificial Intelligence and Statistics. Scottsdale: PMLR, 2013, 31: 307-315.
|
[13] |
HSIEH C J, SI S, DHILLON I S. A divide-and-conquer solver for kernel support vector machines[C]// Proceedings of the 31st International Conference on Machine Learning. Beijing: PMLR, 2014, 32(1): 566-574.
|
[14] |
TANDON R, SI S, RAVIKUMAR P, et al. Kernel ridge regression via partitioning[J/OL]. (2016-08-05) [2017-05-24]. https://arxiv.org/pdf/1608.01976.pdf
|
[15] |
GITTENS A, MAHONEY M W. Revisiting the Nyström method for improved large-scale machine learning[J]. Journal of Machine Learning Research, 2016, 17(1): 3977-4041.
|
[16] |
HUANG P S, AVRON H, SAINATH T N, et al. Kernel methods match deep neural networks on TIMIT[C]// Proceedings of the 39th IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE Press, 2014: 205-209.
|
[17] |
BOTTOU L, VAPNIK V. Local learning algorithms[J]. Neural Computation, 1992, 4(6): 888-900.
|
[18] |
ZHANG Y, DUCHI J C, WAINWRIGHT M J. Communication-efficient algorithms for statistical optimization[J]. Journal of Machine Learning Research, 2013, 14(1): 3321-3363.
|
[19] |
MACKEY L, TALWALKAR A, JORDAN M I. Divide-and-conquer matrix factorization[C]// Proceedings of the 24th Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2011: 1134-1142.
|
[20] |
PAN Y, XIA R, YIN J, et al. A divide-and-conquer method for scalable robust multitask learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(12): 3163-3175.
|