Research on the Prediction of Quay Crane Resource Hour based on Ensemble Learning
DOI: https://doi.org/10.30564/jmser.v3i2.2689
Abstract: In container terminals, a quay crane's resource hours are affected by various complex nonlinear factors and are difficult to forecast quickly and accurately. At present, most ports rely on empirical estimation, and most existing studies assume that accurate quay crane resource hours can be obtained in advance. In this work, the factors influencing quay crane resource hours and their correlations were analyzed from a large volume of historical data using ensemble learning (EL), and a multi-factor ensemble learning model for estimating quay crane resource hours was established. A numerical example shows that the AdaBoost algorithm achieves the best prediction performance, with an error of 1.5%. The example analysis further shows that the empirical method produces an error of 131.86%, which means subsequent vessels cannot be serviced as scheduled, increasing equipment waiting and preparation times and generating additional cost and energy consumption. In contrast, the error of the AdaBoost-based estimation method is 12.72%, so AdaBoost offers better performance.
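As a rough illustration of the modelling pipeline described in the abstract, the following is a minimal sketch (not the authors' code) of fitting an AdaBoost regressor to multi-factor historical records and scoring it by percentage error; the file name and feature columns are hypothetical placeholders, and scikit-learn's AdaBoostRegressor is assumed as the implementation.

import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split

# Hypothetical historical records: one row per vessel call, with influence
# factors (e.g. container moves, vessel size) and the observed resource hours.
data = pd.read_csv("quay_crane_history.csv")  # placeholder file name
X = data.drop(columns=["resource_hour"])      # influence factors
y = data["resource_hour"]                     # quay crane resource hours

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# AdaBoost regressor, the algorithm the paper reports as performing best.
model = AdaBoostRegressor(n_estimators=100, learning_rate=0.5, random_state=0)
model.fit(X_train, y_train)

# Evaluate with mean absolute percentage error, matching the kind of
# percentage error quoted in the abstract.
pred = model.predict(X_test)
mape = np.mean(np.abs((y_test.to_numpy() - pred) / y_test.to_numpy())) * 100
print(f"AdaBoost MAPE: {mape:.2f}%")

Other ensemble regressors (e.g. bagging or gradient boosting) could be substituted for AdaBoostRegressor in the same pipeline to reproduce the kind of comparison the study performs.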