Using transfer adaptation method for dynamic features expansion in multi-label deep neural network for recommender systems
Abstract
In this paper, we propose a convertible deep neural network (DNN) model with a transfer adaptation mechanism to handle a varying number of input and output neurons. The flexible DNN model serves as a multi-label classifier for the recommender system as part of the retrieval system's push mechanism: it learns combinations of tabular features and proposes a number of discrete offers (targets). Our retrieval system applies the transfer adaptation mechanism as follows: when the number of features changes, it replaces the input layer of the neural network, freezes the gradients of all subsequent layers, trains only the replaced layer, and then unfreezes the entire model. The experiments show that the transfer adaptation technique yields a stable decrease in loss and improves learning speed during training.
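A minimal sketch of the transfer adaptation step described above, written in PyTorch. The class and function names (TabularMLP, adapt_input_layer, unfreeze), layer sizes, learning rates, and loss choice are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch of transfer adaptation for a changed feature count.
import torch
import torch.nn as nn

class TabularMLP(nn.Module):
    """Multi-label classifier over tabular features (hypothetical architecture)."""
    def __init__(self, n_features: int, n_targets: int, hidden: int = 64):
        super().__init__()
        self.input_layer = nn.Linear(n_features, hidden)
        self.body = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_targets),  # sigmoid is applied inside the loss
        )

    def forward(self, x):
        return self.body(self.input_layer(x))

def adapt_input_layer(model: TabularMLP, new_n_features: int) -> TabularMLP:
    """Replace the input layer when the feature count changes and
    freeze the following layers so only the new layer trains."""
    hidden = model.input_layer.out_features
    model.input_layer = nn.Linear(new_n_features, hidden)
    for p in model.body.parameters():
        p.requires_grad = False  # freeze all subsequent layers
    return model

def unfreeze(model: TabularMLP) -> TabularMLP:
    """After the warm-up phase, unfreeze the entire model."""
    for p in model.parameters():
        p.requires_grad = True
    return model

# Usage: warm up only the replaced layer, then fine-tune everything.
model = TabularMLP(n_features=10, n_targets=5)
model = adapt_input_layer(model, new_n_features=13)
warmup_opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
# ... train a few epochs with warmup_opt and nn.BCEWithLogitsLoss() ...
model = unfreeze(model)
full_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# ... continue training the whole network with full_opt ...
```

In this sketch the warm-up optimizer only sees the parameters of the replaced input layer; after unfreezing, a fresh optimizer over all parameters continues training at a lower learning rate.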