ISSN 0253-2778

CN 34-1054/N

Open Access | JUSTC Research Articles: Management Science and Engineering

An end-to-end multitask method with two targets for high-frequency price movement prediction

Cite this: https://doi.org/10.52396/JUST-2021-0052
  • Received Date: 18 February 2021
  • Revision Received Date: 24 March 2021
  • Publish Date: 31 March 2021
  • High-frequency price movement prediction aims to predict the direction (e.g., up, unchanged, or down) of the price change over a short time interval (e.g., one minute). It is challenging to predict price movement from historical high-frequency transaction data because the relation between them is noisy, nonlinear, and complex. We propose an end-to-end multitask method with two targets to improve high-frequency price movement prediction. Specifically, the proposed method introduces an auxiliary target (the high-frequency rate of price change), which is highly related to the main target (high-frequency price movement) and is useful for improving its prediction. Each task has a feature extractor based on a recurrent neural network and a convolutional neural network to learn the noisy, nonlinear, and complex temporal-spatial relation between the historical transaction data and the two targets. In addition, the shared parts and the task-specific parts of each task are separated explicitly to alleviate the potential negative transfer caused by the multitask method. Furthermore, a gradient balancing approach is adopted to exploit the close relation between the two targets, filtering out the temporal-spatial dependency learned from inconsistent noise while retaining the dependency learned from consistent true information. Experimental results on real-world datasets show that the proposed method utilizes the highly related auxiliary target to help the feature extractor of the main task learn a temporal-spatial dependency that generalizes better, thereby improving high-frequency price movement prediction. Moreover, the auxiliary target (the high-frequency rate of price change) not only improves the generalization of the overall temporal-spatial dependency learned by the whole feature extractor but also improves the temporal-spatial dependency learned by the different parts of the feature extractor.
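To make the multitask setup described in the abstract concrete, the following is a minimal PyTorch sketch of a two-target model with a shared CNN+LSTM feature extractor, task-specific CNN+LSTM extractors, a classification head for price movement (up / unchanged / down), and a regression head for the rate of price change. The layer sizes, fusion scheme, and loss weighting are illustrative assumptions, not the authors' exact architecture, and the paper's gradient balancing step is only indicated by a comment rather than reproduced.

```python
# Illustrative sketch only: assumed shapes and hyperparameters, not the paper's configuration.
import torch
import torch.nn as nn


class Extractor(nn.Module):
    """CNN over the feature dimension followed by an LSTM over time."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, x):                                   # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2)))        # (batch, hidden, time)
        out, _ = self.lstm(z.transpose(1, 2))                # (batch, time, hidden)
        return out[:, -1]                                    # last time step


class TwoTargetModel(nn.Module):
    """Shared and task-specific extractors feeding two heads, one per target."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.shared = Extractor(n_features, hidden)          # shared part
        self.move_specific = Extractor(n_features, hidden)   # task-specific parts
        self.rate_specific = Extractor(n_features, hidden)
        self.move_head = nn.Linear(2 * hidden, 3)            # up / unchanged / down
        self.rate_head = nn.Linear(2 * hidden, 1)            # rate of price change

    def forward(self, x):
        s = self.shared(x)
        move_logits = self.move_head(torch.cat([s, self.move_specific(x)], dim=-1))
        rate_pred = self.rate_head(torch.cat([s, self.rate_specific(x)], dim=-1))
        return move_logits, rate_pred.squeeze(-1)


# One joint training step on synthetic data. The simple weighted-sum loss is a
# stand-in for the paper's gradient balancing approach, which is not reproduced here.
model = TwoTargetModel(n_features=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 60, 10)                  # 32 samples, 60 time steps, 10 features
y_move = torch.randint(0, 3, (32,))          # movement labels (main target)
y_rate = torch.randn(32)                     # rate-of-change targets (auxiliary target)
logits, rate = model(x)
loss = nn.CrossEntropyLoss()(logits, y_move) + 0.5 * nn.MSELoss()(rate, y_rate)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch, each head sees the concatenation of the shared representation and its own task-specific representation, which mirrors the explicit separation of shared and task-specific parts that the abstract credits with alleviating negative transfer.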