
Well-posedness and approximate controllability of neutral network systems

  • In this paper, we study the concept of approximate controllability of retarded network systems of neutral type. On one hand, we reformulate such systems as free-delay boundary control systems on product spaces. On the other hand, we use the rich theory of infinite-dimensional linear systems to derive necessary and sufficient conditions for the approximate controllability. Moreover, we propose a rank condition for which we can easily verify the conditions of controllability. Our approach is mainly based on the feedback theory of regular linear systems in the Salamon-Weiss sense.

    Citation: Yassine El Gantouh, Said Hadd. Well-posedness and approximate controllability of neutral network systems[J]. Networks and Heterogeneous Media, 2021, 16(4): 569-589. doi: 10.3934/nhm.2021018




    In recent times, financial institutions have increasingly included cryptocurrencies in their investment portfolios. Cryptocurrencies have garnered significant attention since their inception in 2009, emerging as a significant player in the global financial market (Paule-Vianez et al., 2020; Yang et al., 2020). One noteworthy aspect of cryptocurrencies is that they dispense with financial intermediaries, which reduces transaction costs. Consequently, unlike other financial assets, cryptocurrencies operate without a higher authority. Additionally, cryptocurrencies' values are secured by an algorithm that enables the tracking of all transactions (Abakah et al., 2020). The primary advantages of cryptocurrencies include their transparency and 24/7 availability due to the decentralized nature of the cryptocurrency market. Besides, cryptocurrency transactions do not occur in a physical location; instead, they are recorded on a public, open ledger known as the blockchain (Grobys and Sapkota, 2020; Vo and Yost-Bremm, 2020). Consequently, many hedge funds and asset managers have started incorporating cryptocurrency-related assets into their investment portfolios and strategies. This integration has led to a notable surge in interest and activity in both cryptocurrencies and cryptocurrency trading (Fang et al., 2022).

    Cryptocurrency markets, regarded as part of the space of alternative investments, are garnering attention given recent technological advances. This growing investor interest makes cryptocurrency markets an important asset class for traders and researchers alike (Akyildirim et al., 2021). The analysis of cryptocurrency market predictions emerges as a valuable tool for investors, as it can enhance the synchronisation of their investment decisions, mitigate risks, and adjust portfolios to reduce potential losses. Moreover, these models may attract more investors to cryptocurrency markets, boosting liquidity and potentially contributing to market efficiency by reducing information asymmetry and facilitating faster price adjustments. Increasing confidence in automated trading strategies based on neural network predictions could drive trading activity, underscoring the complexity and importance of studying cryptocurrency prediction accuracy in terms of performance, market efficiency, risk management, and market dynamics.

    In addition, it is interesting to comprehend the liquidity interconnectivity within the cryptocurrency market and to scrutinize the disparities between emerging cryptocurrencies and established ones, like Bitcoin and Ethereum, concerning liquidity, trading volume, and supply. Cryptocurrencies have demonstrated significant potential, leading to an increase in trading volumes in cryptocurrency markets and indicating a substantial improvement in liquidity levels. According to Hasan et al. (2022), liquidity interconnectivity is more noticeable in the short-term timeframe compared to the medium and long-term perspectives. Trimborn et al. (2020) investigate the potential benefits of incorporating cryptocurrencies into optimised risk portfolio holdings, considering them as promising investment assets due to their rapid growth. At the same time, these digital currencies often display higher volatility, possess long tails in their distribution patterns, and exhibit relatively limited liquidity, creating a challenging investment landscape. Their investigations indicate that Bitcoin (BTC), being the earliest and most prevalent cryptocurrency, is given zero importance when liquidity constraints are not a factor in both risk and volatility assessments. Consequently, BTC might not be the most appealing cryptocurrency when considering risk-return optimization. This underscores the significance of incorporating cryptocurrencies other than BTC in constructing a portfolio.

    In the dynamic landscape of recent years, a cohort of emerging cryptocurrencies is making significant strides for diverse reasons. Decentraland stands out as a thriving virtual reality platform on the Ethereum blockchain, catering to content creators and businesses through its MANA and LAND tokens (Guidi and Michienzi, 2022). LuckyBlock has swiftly become a notable player, achieving a remarkable US$1 billion market capitalization and pioneering transparency and fairness in gaming, attracting a substantial investor base (Mirkamol and Mansur, 2023). SafeMoon is gaining traction in the DeFi space, incorporating reflection, LP acquisition, and burn functions while ambitiously aiming to establish an NFT exchange coupled with charity initiatives (Hassan et al., 2022). Avalanche, positioned as a formidable competitor to Ethereum, offers heightened transaction output through its three blockchains—X-Chain, C-Chain, and P-Chain (Makarov and Schoar, 2022). SeeSaw prioritizes security with its blockchain implementation, ensuring technical robustness and resilience to tampering or hacking (Murray-Rust et al., 2023). King Cardano introduces itself as the first auto-claim ADA token, boasting high yields and contributing to the open market by buying back 3% of each transaction (King and Koutmos, 2021). Binamon focuses on community growth, offering a gaming metaverse with digital monsters across multiple games for users to earn tokens and passive income. Kasta facilitates instant and free cryptocurrency transfers via QR codes, with the KASTA Token playing a pivotal role in driving the adoption of cryptocurrencies (King and Koutmos, 2021). Last, X2Y2 emerges as a decentralized NFT marketplace, connecting crypto investors to various wallets and addressing challenges posed by the established NFT trading platform, OpenSea, since its launch in February 2022 (Makridis et al., 2023). 
Based on this analysis of their respective strengths and utilities, we have chosen these cryptocurrencies for our study.

    Since the inception of Bitcoin in 2009, numerous cryptocurrencies have come into existence, making them a regular feature of financial newsletters. Such currencies have attracted interest from both financial institutions and central banks due to their novel technology drivers and their ability to perturb the prevailing financial institutions' frameworks (Nica et al., 2022). The growing popularity and acceptance of new emerging cryptocurrency markets demand dedicated strategies for maximizing the investment return potential through "learning" algorithms (Petukhina et al., 2021). Furthermore, given the aforementioned characteristics of cryptocurrencies and, above all, their speculative nature and volatility, the cryptocurrency market offers the opportunity to study the dynamics of algorithms from the point of view of human psychology (Vo and Yost-Bremm, 2020; Fang et al., 2022).

    The majority of traders and analysts prefer employing programmatic trading techniques in the cryptocurrency markets. Technological advancements have revolutionized the way investors engage in financial markets. Algorithmic trading (AT), which involves "the use of computer algorithms to automatically make certain trading decisions, submit orders and manage them after submission," exemplifies these technological shifts (Frino et al., 2020). The impact of AT on market quality is deemed significant by researchers, industry professionals, and policymakers. According to Hendershott et al. (2011), AT reduces trading costs and enhances quote information. Moreover, the revenues of liquidity providers also see an increase with AT, although this effect appears to be transient. AT is predominantly active over brief time intervals, spanning from minutes to hours or days. As trading timeframes contract further, down to the level of a minute, a second, or a millisecond, trading evolves into a form termed high-frequency trading (HFT). Within HFT, algorithms take on the exclusive role of managing the negotiation process (Vo and Yost-Bremm, 2020). In essence, financial trading requires AT to systematically assess the environment for appropriate and swift decisions, especially when continuous data monitoring is not in place (Aloud and Alkhamees, 2021).

    Researchers have extensively studied algorithmic trading, encompassing algorithms guided by both fundamental and technical indicator analyses, as well as those reinforced by machine learning techniques. In the realm of financial applications, Machine Learning (ML) is assuming a progressively vital role, according to Goldblum et al. (2021). Their assertion extends to Deep Learning (DL), a subset of ML techniques honing in on deep neural networks. DL algorithms are advancing, exhibiting the capacity to train on intricate datasets and forecast outcomes. The present landscape sees a surge in active investment by various financial entities, including hedge funds, investment banks, retailers, and contemporary FinTech providers, as they strive to develop expertise in data science and ML (Goodell et al., 2021). Indeed, finance professionals are increasingly trusting automated ML systems for AT, such as fraud recognition systems that detect illegal trades (Abdallah et al., 2016; Bhattacharyya et al., 2011; Milana and Ashta, 2021), risk systems that approve or reject loans (Binns, 2018), HFT applications that make decisions on time scales humans are unable to verify (Hendershott et al., 2011; Arévalo et al., 2016; Klaus and Elzweig, 2017; Borovkova and Tsiamas, 2019; Egger et al., 2020), and integration with other emerging technologies, including cryptocurrencies and blockchain (Milana and Ashta, 2021).

    The most common DL technologies used in cryptocurrency trading have been Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), Multilayer Perceptrons (MLP), and Long Short-Term Memory (LSTM). Many advanced deep neural network models have been implemented by researchers in cryptocurrency trading. Recent research highlights the success of models utilizing these architectures for modelling and predicting financial time series, including cryptocurrencies (Fang et al., 2022). Livieris et al. (2020) presented a model, CNN-LSTM, specifically designed for accurately predicting gold prices. Similarly, Lu et al. (2020) suggested a method based on CNN-LSTM for predicting stock prices.
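Whatever the architecture, sequence forecasters of this kind are trained on sliding windows of past prices paired with a future target. A minimal sketch of that common data-preparation step is shown below; the window length and horizon are illustrative choices, not taken from the cited studies.

```python
def make_windows(prices, window=4, horizon=1):
    """Turn a univariate price series into supervised pairs: each input
    is `window` consecutive prices, and the target is the price
    `horizon` steps after the end of the window."""
    inputs, targets = [], []
    for i in range(len(prices) - window - horizon + 1):
        inputs.append(prices[i:i + window])
        targets.append(prices[i + window + horizon - 1])
    return inputs, targets

# Toy closing-price series, purely for illustration.
prices = [100.0, 101.5, 99.8, 102.3, 103.1, 102.7, 104.0]
X, y = make_windows(prices, window=4, horizon=1)
# X[0] holds the first four prices; y[0] is the fifth price (103.1).
```

The resulting `(X, y)` pairs are what an LSTM- or CNN-based regressor would consume after reshaping and scaling.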

    On a different note, the genetic algorithm (GA) is a widely recognized metaheuristic algorithm that draws inspiration from the biological process of evolution. It operates on the principles of chromosome representation and fitness selection. Significant benefits of genetic algorithms are that they are efficient in handling non-stationary data and that there is no need to assume a particular distribution of the data (Huang et al., 2019; Katoch et al., 2021). According to Drachal and Pawłowski (2021), research studies use GA from both theoretical and practical perspectives, but the latter is mostly found in the engineering sciences and very little in economics or finance. Besides, finance-oriented studies focus on optimization questions and do not deal with time-series forecasting. Therefore, possible applications of GA lie in finance, in particular bankruptcy prediction, risk management, and financial forecasting (Sezer et al., 2020; Bustos and Pomares-Quimbaya, 2020). Authors such as Alameer et al. (2019) and Weng et al. (2018) applied GA to estimate the variables of models like the Adaptive Neuro-Fuzzy Inference System (ANFIS). Besides, GA has proven effective in predicting commodity prices with models such as support vector machines (SVM). Although GAs have many promising applications, they remain a broad area for further development and improvement.
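As a rough illustration of the chromosome-representation and fitness-selection loop described above, the following sketch evolves a single real-valued trading parameter against a toy fitness function. The population size, operators, and fitness function are illustrative assumptions, not a reconstruction of any cited model.

```python
import random

def evolve(fitness, pop_size=20, generations=30, mutation=0.1, seed=42):
    """Evolve a real-valued parameter in [0, 1] by selection,
    blend crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness selection: keep the fitter half as parents (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0              # crossover: blend two parents
            child += rng.gauss(0.0, mutation)  # mutation: Gaussian jitter
            children.append(min(1.0, max(0.0, child)))
        population = parents + children
    return max(population, key=fitness)

# Toy fitness with a single peak at 0.7 (a stand-in for a backtested
# return curve over a threshold parameter).
best = evolve(lambda x: -(x - 0.7) ** 2)
```

In a trading application, the fitness function would be replaced by the backtested return of a strategy parameterised by the chromosome.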

    To address the void in this research domain, we compare deep learning and quantum methods in neural networks, together with genetic algorithm techniques, applied to the algorithmic trading of the youngest cryptocurrencies considered here: Decentraland, LuckyBlock, SafeMoon, Avalanche, SeeSaw Protocol, Binamon, Kasta, and X2Y2. Blockchains are one of the most significant emerging technologies and, according to Saad et al. (2019), will play a central role in the future configuration of our society. The market capitalization of cryptocurrencies constitutes the main feature representing the importance of the market. Since the introduction of Bitcoin in 2009, there has been huge growth in the blockchain ecosystem, led by conceptual and algorithmic innovations and the emergence of a great number of new cryptocurrencies. A key attribute of the cryptocurrency market is its notable volatility, a trait that escalates with the emergence of new cryptocurrencies. As some new cryptocurrencies generate millionaires for crypto investors while others cause massive losses for crypto portfolios, an analysis of emerging cryptocurrencies is necessary. Therefore, we should evaluate which new cryptocurrencies are secure, stable, and non-fluctuating, as some new cryptocurrencies address the weaknesses of older ones, with better performance, scalability, and schedulability (Marzo et al., 2022). Thus, in our research, we have chosen as a sample the main cryptocurrencies that increased their exchange in the market in 2022. These new cryptocurrencies have been on the market for only a few years and hence offer little historical data with which to assess the effectiveness of computational algorithms applied to trading strategies. Moreover, they are the major cryptocurrencies that appear on cryptocurrency data sites such as CoinMarketCap, CoinGecko, and Bloomberg Cryptocurrencies Insights.
In addition, some recent papers have proposed the analysis of other new cryptocurrencies, which benefits cryptocurrency traders and investors in making investment and trading decisions, since most prior work has analyzed Bitcoin and Ethereum (Nica et al., 2022; Petukhina et al., 2021).

    We expect to make at least four further contributions to the literature. First, most existing research has analyzed the largest cryptocurrency, Bitcoin; however, several authors have proposed the analysis of other cryptocurrencies as future lines of research. Thus, Vo and Yost-Bremm (2020) conclude that future studies ought to widen the scope of the analysis to include other cryptocurrencies, to check whether the high-frequency trading strategy is applicable across all cryptocurrencies. Nakano et al. (2018) and Corbet et al. (2020) conclude that building a multi-asset portfolio of various cryptocurrencies would appear to be worthwhile, rather than focusing only on Bitcoin. In addition, Gerritsen et al. (2020) suggest that future investigation should target other leading new cryptocurrencies, to the benefit of investors and operators making informed investment and trading decisions. Moreover, Gradojevic et al. (2023) state that, in forthcoming research, it would be valuable to expand the scope to encompass alternative cryptocurrencies and include a broader range of data frequencies, provided high-frequency data is available. Finally, Adcock and Gradojevic (2019), Akyildirim et al. (2021), and Ren et al. (2022) suggest that other cryptocurrencies could also be studied, since new cryptocurrencies are often overlooked in the existing body of literature. Therefore, our study responds to this call by analyzing the eight new cryptocurrency markets emerging in 2022.

    Second, we compare innovative ML techniques not previously used in the emerging cryptocurrency market. Aloud and Alkhamees (2021) suggest, as a future direction, evaluating the application of reinforcement learning algorithmic trading in various emerging markets, for instance the currency market and cryptocurrencies. For their part, Goodell et al. (2021) conclude that the rise of Artificial Intelligence (AI) and ML has opened future research opportunities to build innovative FinTech business models supported by technology for finance. Examples of core FinTech innovations include blockchain and cryptocurrencies, digital trading, and advisory systems. Besides, further research is needed to comprehend the many faces of FinTech and the implications of AI and ML in this emerging field. Jia et al. (2019) conclude that, although deep learning has demonstrated excellent performance on many signal-processing problems, the combination of deep learning and reinforcement learning in financial quantitative trading remains to be studied further. Also, Maghsoodi (2023) recommends, for future developments in cryptocurrency markets, the application and comparison of ML and DL models such as Long Short-Term Memory, Deep Convolutional Networks, Adaptive-Network-based Fuzzy Inference Systems, and Artificial Neural Networks. Last, Corbet et al. (2020) propose, as future research, extended technical trading analyses applying more complicated technical trading rules and comparing the results.

    In the third step, we introduce additional non-financial indicators of volatility (Bollinger Band), market strength (Klinger Oscillator), and trend (Trend Elimination Oscillator), proposed by Vo and Yost-Bremm (2020) as a potential area for future investigation. Gradojevic et al. (2023) also suggest exploring a broader range of technical indicators and covering different data frequencies, assuming the availability of high-frequency data. Demir et al. (2019) showcase how seamlessly integrating technical indicators into day-ahead electricity market predictions significantly improves the predictive accuracy of machine learning models. They highlight the enhanced explanatory capacity provided by non-financial indicators, emphasizing that technical analysis, as an analytical approach, focuses on examining statistical trends in historical market data to predict upcoming price movements. Technical indicators, a tool in technical analysis, analyze market data using various formulas to identify complex price patterns, indicating situations of excessive buying or selling, deviations from a central trend, or proximity to support and resistance lines. Moreover, they demonstrate that technical indicators can offer early indications of future price movements, thereby boosting the predictive capability of models forecasting financial time series and showcasing the explanatory effectiveness of technical indicators across stock markets. Related work presents evidence that technical indicators provide directional information about a security's price, assisting in predicting security prices and simplifying machine learning. Consequently, these studies conclude that technical indicators wield substantial explanatory power in stock markets. Alonso-Monsalve et al. (2020) conclude that, when analysing the change in the value of cryptocurrencies in relation to the US dollar, the use of convolutional neural networks (CNNs) can be beneficial for predicting market trends at short time intervals.
However, their research is based on a limited set of technical indicators. Expanding the scope of these indicators would likely reveal the possibility of improving prediction performance. Therefore, it would be advisable for future research to consider exploring this avenue to achieve more robust results.
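As an example of one of the volatility indicators named above, Bollinger Bands bracket a moving average with bands at plus or minus k standard deviations. A minimal sketch follows; the window length and k used here are illustrative, not the values used in any cited study.

```python
from statistics import mean, pstdev

def bollinger(prices, window=5, k=2.0):
    """Return (middle, upper, lower) Bollinger Bands over the most
    recent `window` prices: a moving average +/- k standard deviations."""
    recent = prices[-window:]
    mid = mean(recent)
    sd = pstdev(recent)  # population standard deviation of the window
    return mid, mid + k * sd, mid - k * sd

# A flat price series has zero volatility, so both bands collapse onto
# the moving average.
mid, upper, lower = bollinger([10.0, 10.0, 10.0, 10.0, 10.0])
```

As volatility rises, the bands widen, which is the property that makes the indicator useful as a model feature.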

    Fourth, our study covers data and algorithms that could be replicated in further examples of high-frequency trading. According to Akyildirim et al. (2021), the majority of research on cryptocurrency markets focuses on daily returns, whereas analysis at higher frequencies is commonly overlooked and should be considered in future research. Furthermore, Adcock and Gradojevic (2019) propose further research to investigate the profitability of other non-linear high-frequency technical trading in cryptocurrency markets. Briola et al. (2021) demonstrate the effective training and application of Deep Reinforcement Learning models within the realm of high-frequency trading. They explore the impact of three distinct state definitions on the out-of-sample performance of agents, discovering that awareness of the mark-to-market profitability of the current position significantly enhances both the overall profit and loss and the reward for an individual trade. Alonso-Monsalve et al. (2020) suggest that future studies could explore incorporating higher-frequency data into their analyses. They argue that by increasing the number of features used at high return frequencies, it may be possible to develop models with improved predictive accuracy, particularly at shorter time intervals. We evaluate the performance of neural networks and genetic algorithms in predicting "buy" or "sell" decisions for various cryptocurrencies. Accuracy is measured with three key metrics: F1 score, precision, and recall. The study encompasses multiple classifiers across different cryptocurrencies, with micro-means used to consolidate results.
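The micro-averaging mentioned above pools true-positive, false-positive, and false-negative counts across all classes before computing the metrics. A minimal sketch is given below; the buy/sell labels are illustrative.

```python
def micro_metrics(y_true, y_pred):
    """Micro-averaged precision, recall and F1: pool true-positive,
    false-positive and false-negative counts over all classes first."""
    tp = fp = fn = 0
    for cls in set(y_true) | set(y_pred):
        for t, p in zip(y_true, y_pred):
            if p == cls and t == cls:
                tp += 1        # predicted this class and was right
            elif p == cls:
                fp += 1        # predicted this class but was wrong
            elif t == cls:
                fn += 1        # missed an instance of this class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 3 of 4 predictions are correct, so all three micro metrics equal 0.75.
p, r, f1 = micro_metrics(["buy", "sell", "buy", "sell"],
                         ["buy", "buy", "buy", "sell"])
```

Note that for single-label classification, micro precision, recall, and F1 all coincide with overall accuracy; micro-averaging matters when classifiers or assets are pooled with unequal sample counts.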

    The subsequent sections of this paper are organized as follows. Section 2 provides a review of the existing literature. Section 3 outlines the methodology employed in the study. The indicators and data utilized in the research are described in Section 4. Section 5 presents the obtained results and findings. Section 6 discusses the results. Finally, Section 7 presents our conclusions.

    The literature on cryptocurrencies has garnered significant attention in recent times and has been the subject of a variety of research. Some studies have addressed issues concerning their prices, like substantial price volatility (Katsiampa et al., 2019; Bianchi et al., 2022), pricing efficiency (Sensoy, 2018; Corbet and Katsiampa, 2018; Mensi et al., 2019), price determination, and predictability (Bouri et al., 2019; Panagiotidis et al., 2018; Giudici and Abu-Hashish, 2019). Panagiotidis et al. (2018) analyze 21 variables that impact Bitcoin returns, concluding that the most crucial factors are the intensity of Google searches, gold yields, and political uncertainty. They state that European economic policy uncertainty, the NIKKEI index, and negative feedback from the Google trend appear to be drivers of Bitcoin. Bianchi et al. (2022) illustrate that, as far as the gains from the short-term investment strategy represent the returns from offering liquidity, their findings indicate that market makers generally achieve greater expected returns when there is heightened concern about adverse selection on both sides of the trade. Furthermore, they reveal that the diminished market liquidity observed during periods of increased volatility appears to be, at least in part, due to the elevated risk premiums necessary for offering liquidity.

    Other authors have investigated the factors determining cryptocurrency market liquidity. For example, Westland (2023) examined five hypotheses regarding liquidity and the factors that influence it, using a selection of cryptocurrencies that make up approximately 90% of trading volume and market capitalization, making the findings broadly applicable. The research findings are derived from the analysis of a concise yet convincing group of cryptocurrencies. These findings are supported by calculated statistics such as liquidity and inferred transaction fees, which were derived from extensive datasets. This author concluded that price serves as an effective predictor of liquidity, while volume can act as a surrogate for liquidity, although there was some contradictory evidence regarding this hypothesis. The study did not find supporting evidence for the notion that interchangeability between cryptocurrencies predicts liquidity. Trimborn et al. (2020) delve into the advantages of integrating cryptocurrencies into risk-optimized portfolios, considering the inherent liquidity constraints of the cryptocurrency market. They propose the Liquidity Bounded Risk-return Optimization method, which introduces an additional liquidity constraint based on the expected investment amount. Application of this approach to monthly and weekly adjusted portfolios, encompassing constituent stocks from the S & P 100, the Barclays Capital US Aggregate Index (US-Bonds Index), and the S & P GSCI (Commodities Index), alongside cryptocurrencies, yields significant improvements in the relationship between volatility/risk and return when employing both Markowitz and CVaR models. Bianchi et al. (2022) highlight that limited liquidity supply, as evidenced by heightened idiosyncratic volatility and broader spreads in the Treasury-EuroDollar rate (TED) and bid-ask spreads, corresponds to increased anticipated returns on investment strategies. 
They conclude that liquidity providers tend to focus on demanding higher expected returns and risk premiums for lower-quality assets. Hasan et al. (2022) examine liquidity connectivity dynamics in the cryptocurrency market using six major cryptocurrencies: Bitcoin (BTC), Litecoin (LTC), Ethereum (ETH), Ripple (XRP), Monero (XMR), and Dash. Their analysis reveals a moderate level of liquidity connectivity among these cryptocurrencies, with BTC and LTC exerting notable influence. The authors determine that liquidity connectivity is more prominent in the short-term compared to the medium and long-term perspectives.

    Various studies explore the influence of specific factors on the Bitcoin exchange rate and returns. Demir et al. (2018) delve into the impact of economic policy uncertainty, while Zhu et al. (2017) focus on economic indicators such as the consumer price index and the US dollar index. Other drivers examined include market sentiment and Bitcoin popularity, as investigated by Makrichoriti and Moratis (2016), as well as Polasik et al. (2015). Additionally, researchers like Feng et al. (2018), Bouri et al. (2019), Vidal-Tomas et al. (2018), and Mensi et al. (2019) study broader market behavior. Atsalakis et al. (2019) take a different approach by exploring the predictive and classification ability of Bitcoin. They propose the use of a hybrid neuro-fuzzy controller system named PATSOS, consisting of two sub-systems of the Adaptive Neuro-Fuzzy Inference System (ANFIS). Their findings demonstrate that the neuro-fuzzy PATSOS model outperforms both the ANFIS model and the Artificial Neural Networks model, offering higher returns for potential investors. This conclusion is supported through validation involving three alternative cryptocurrencies, specifically Ethereum, Litecoin, and Ripple.

    Most studies on the cryptocurrency market have been based primarily on Bitcoin (Kristoufek, 2015; Panagiotidis et al., 2018; Adcock and Gradojevic, 2019; Atsalakis et al., 2019; Huang et al., 2019; Gerritsen et al., 2020; Gradojevic et al., 2023). Kristoufek (2015) examines Bitcoin price formation, observing that conventional fundamental factors, including usage in trade, money supply, and price level, exert an influence on the long-term price of Bitcoin. In addition, the results show that Bitcoin prices follow investor interest in the cryptocurrency, and that Bitcoin no longer represents a safe investment. Some studies reported a decline in trust in the integrity of the cryptocurrency market and revealed the existence of possible crime and fraud in this system (Gandal et al., 2018; Griffin and Shams, 2018; Corbet et al., 2020).

    Despite the wealth of literature on Bitcoin, other cryptocurrencies have received less attention. Akyildirim et al. (2021) conduct an in-depth analysis of the twelve most liquid cryptocurrencies, including Bitcoin Cash (BCH), Bitcoin (BTC), Dash (DSH), EOS (EOS), Ethereum Classic (ETC), Ethereum (ETH), Iota (IOT), Litecoin (LTC), OmiseGO (OMG), Monero (XMR), Ripple (XRP), and Zcash (ZEC). Their findings reveal consistent predictability in the expected return direction for cryptocurrency markets at daily or minute time scales, achieving classification accuracies of up to a 69% success rate. Bouri et al. (2020) explore various cryptocurrencies and uncover a connection wherein explosive prices in one cryptocurrency drive similar movements in another. Bouri et al. (2019) investigate the role of trading volume in forecasting yields and volatility across different cryptocurrencies, finding that trading volume is valuable for predicting extreme returns but less so for forecasting future volatility in a smaller set of cryptocurrencies. In a recent study, Mensi et al. (2019) examine the intraday efficiency of Bitcoin and Ethereum, noting Bitcoin's greater inefficiency during general uptrends and downtrends. Maghsoodi (2023) addresses the cryptocurrency allocation problem and real multi-scenario trading experience using a hybrid approach that combines the Prophetic Prediction Model (PFM) and cluster analysis. The author demonstrates the effectiveness of integrating the PFM and CLUS-MCDA approaches in resolving big data problems with multiple scenarios simultaneously. Hasan et al. (2022) delve into liquidity dynamics interactions within the cryptocurrency market, analyzing a dataset encompassing six major cryptocurrencies: Bitcoin (BTC), Litecoin (LTC), Ethereum (ETH), Ripple (XRP), Monero (XMR), and Dash. Their static analysis reveals moderate interdependence in liquidity among cryptocurrencies, with BTC and LTC exerting significant influence on overall interconnectedness.
They identify distinct liquidity clusters, comprising BTC, LTC, and XRP, and another cluster with ETH, XMR, and Dash. In their frequency domain examination, they find higher liquidity interconnections in the shorter term compared to the medium and long term, with BTC, LTC, and XRP being major contributors to short-term fluctuations, while ETH assumes this role in the longer term.

    Another strand of literature concerns the methodology applied in the cryptocurrency market. Some studies employ Artificial Neural Networks (ANN) with technical analysis (Gerritsen et al., 2020; Adcock and Gradojevic, 2019; Huang et al., 2019; Corbet et al., 2020; Nakano et al., 2018). Corbet et al. (2020) examine diverse dimensions of Moving Average (MA) strategies in cryptocurrencies, applying the technique to high-frequency Bitcoin returns at the one-minute frequency. They conclude that some technical rules may show signs of predictive power; nevertheless, more consideration needs to be given to questions like market liquidity and transaction costs. Nakano et al. (2018) assess the effectiveness of a deep learning artificial neural network (D-ANN) model incorporating technical indicators during the period from 2016 to 2018. Their Bitcoin predictions were evaluated over a 15-minute horizon, employing a trading strategy that utilized a more extensive set of technical indicators compared to the study by Corbet et al. (2020). They found that certain technical trading strategies could yield a positive excess return in comparison to the buy-and-hold strategy, even when factoring in reasonable and realistic trading expenses. Adcock and Gradojevic (2019) employ an ANN model with technical indicators, generating daily horizon predictions across the data range from 2011 to 2018. They establish the accuracy of their forecasts for the BTC/USD exchange rate, both in point forecast and density terms. Gerritsen et al. (2020) leverage daily price data spanning from July 2010 to January 2019, demonstrating that specific technical analysis trading rules, particularly the trading range breakout, possess meaningful predictive capabilities for Bitcoin prices. This results in significant excess returns, providing valuable insights for Bitcoin traders and investment decisions.

    Numerous studies dedicated to predicting and trading cryptocurrencies leverage deep learning methodologies. Atsalakis et al. (2019) introduce a neuro-fuzzy trading system incorporating lagged Bitcoin prices as model inputs, surpassing simpler neuro-fuzzy and artificial neural network (ANN) models in performance. In a trading simulation, signals from their model outperform a naïve buy-and-hold strategy by 71.21% in terms of investment yields. Adcock and Gradojevic (2019) explore the predictability of twelve cryptocurrencies at daily and minute frequencies using machine learning classification algorithms such as support vector machines, logistic regression, artificial neural networks, and random forests. On average, these algorithms achieve an accuracy ranging from 55% to 65% at daily or minute frequencies, with support vector machines proving most effective. Gradojevic et al. (2023) evaluate the predictability of BTC/USD exchange rate returns at hourly and daily forecast horizons, employing Support Vector Machine and Random Forest methods, with the Random Forest approach performing exceptionally well. Similarly, Madan et al. (2015) advocate for binomial regressions, support vector machines, and random forest algorithms to forecast random signs of Bitcoin price movement. Lahmiri and Bekiros (2019) utilize deep learning techniques to showcase their effectiveness in forecasting daily returns for Bitcoin, Digital Cash, and Ripple, revealing that long short-term memory (LSTM) neural network topologies significantly outperform generalized regression neural architectures. Finally, through machine learning optimization, Greaves and Au (2015) achieve a classification accuracy of approximately 55% for predicting upward and downward movements in Bitcoin prices.

    In summary, no previous study has compared deep learning and quantum methods, in both neural networks and genetic algorithms, as applied to the algorithmic trading of emerging cryptocurrencies. We therefore aim to fill this gap in the literature.

    As previously indicated, our analysis of emerging cryptocurrencies trading employs a variety of methods, aiming to establish a resilient model. This model undergoes testing not only through a single categorization technique but also through those proven successful in prior literature and diverse domains. Specifically, our research incorporates a diverse set of neural network approaches, such as Convolutional Neural Networks-Long Short Term Memory with EGARCH Integration, Gated Recurrent Unit-Convolutional Neural Networks with EGARCH Integration, Quantum Neural Networks with EGARCH Integration, Deep Recurrent Convolutional Neural Networks with EGARCH Integration, and Quantum Recurrent Neural Networks with EGARCH Integration. Additionally, our methodology includes Genetic Algorithms, featuring Adaptive Boosting and Genetic Algorithm with EGARCH Integration, Adaptive Neuro-Fuzzy Inference System-Quantum Genetic Algorithm with EGARCH Integration, Adaptive Genetic Algorithm with Fuzzy Logic with EGARCH Integration, Quantum Genetic Algorithm with EGARCH Integration, and Support Vector Machine-Genetic Algorithm with EGARCH Integration. The subsequent sections provide a succinct overview of the procedural aspects inherent in each of these classification techniques.

    Utilizing the capabilities of both CNN and LSTM, a price forecasting model is crafted based on CNN-LSTM. CNN, originally proposed by LeCun et al. (1998), functions as a feed-forward neural network and has exhibited proficiency in image and natural language processing (Kim and Kim, 2019). Its success extends to time-series forecasting, where local sensing and weight sharing efficiently reduce the parameter count, thereby enhancing the learning efficiency of the model (Qin et al., 2018). Comprising two key components, namely the convolution layer and the pooling layer, CNN houses multiple convolution kernels, with its calculation formula provided in Equation (1). Following the convolution operation, features are extracted from the data, leading to high-dimensional separated characteristics. To address this issue and mitigate the training cost of the network, a pooling layer is introduced directly after the convolution layer to reduce the dimensionality of the features. The CNN layer typically consists of a convolutional layer with multiple convolution kernels, as defined by Equation (1):

    $l_t = \tanh(x_t \ast k_t + b_t)$ (1)

    where $l_t$ is the output value after convolution, $\tanh$ is the activation function, $x_t$ is the input vector, $k_t$ is the weight of the convolution kernel, and $b_t$ is the bias of the convolution kernel.

    Introduced by Hochreiter and Schmidhuber in 1997, LSTM was specifically designed to tackle the persistent challenges of exploding and vanishing gradients in Recurrent Neural Networks (RNNs), as highlighted by Ta et al. (2020). Extensively employed in various tasks like speech detection, sentiment analysis, and text processing, LSTM stands out for its distinctive memory capabilities, enabling it to make relatively accurate predictions (Gupta and Jalal, 2020). The LSTM architecture consists of three essential components: the forgetting gate, the input gate, and the output gate. The computation of the LSTM model unfolds through the following steps:

    1. The output at the previous time step and the input at the current time step are processed by the forgetting gate, producing an output according to Equation (2):

    $f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$ (2)

    where $f_t$ takes values in (0, 1), $W_f$ is the weight matrix of the forget gate, $b_f$ is the bias of the forget gate, $x_t$ is the current input value, and $h_{t-1}$ is the output at the previous time step.

    2. The output at the previous time step and the current input are processed by the input gate, resulting in the output and the input gate's candidate status, as defined by Equations (3) and (4):

    $i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$ (3)
    $\tilde{C}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)$ (4)

    where $i_t$ takes values in (0, 1), $W_i$ is the weight of the input gate, $b_i$ is the bias of the input gate, $W_c$ is the weight of the candidate state, and $b_c$ is the bias of the candidate state.

    3. The current cell state is updated as follows:

    $C_t = f_t \ast C_{t-1} + i_t \ast \tilde{C}_t$ (5)

    where $C_t$ is the updated cell state.

    4. At time $t$, the output gate takes the previous output $h_{t-1}$ and the current input $x_t$ as its inputs. The resulting output $o_t$ is then established by the output gate:

    $o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$ (6)

    where $o_t$ takes values in (0, 1), $W_o$ is the weight of the output gate, and $b_o$ is the bias of the output gate.

    5. The ultimate computation of the LSTM output involves the consideration of both the output from the output gate and the cell state, as specified in Equation (7):

    $h_t = o_t \ast \tanh(C_t)$ (7)
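The gate computations of Equations (2)-(7) can be sketched as a single forward step in NumPy; the toy dimensions, random weights, and zero biases below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step following Equations (2)-(7):
    forget gate, input gate, candidate state, cell update, output gate."""
    z = np.concatenate([h_prev, x_t])                     # [h_{t-1}, x_t]
    f_t = sigmoid(params["Wf"] @ z + params["bf"])        # Eq. (2)
    i_t = sigmoid(params["Wi"] @ z + params["bi"])        # Eq. (3)
    c_tilde = np.tanh(params["Wc"] @ z + params["bc"])    # Eq. (4)
    c_t = f_t * c_prev + i_t * c_tilde                    # Eq. (5)
    o_t = sigmoid(params["Wo"] @ z + params["bo"])        # Eq. (6)
    h_t = o_t * np.tanh(c_t)                              # Eq. (7)
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
params = {k: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1
          for k in ("Wf", "Wi", "Wc", "Wo")}
params.update({b: np.zeros(n_hid) for b in ("bf", "bi", "bc", "bo")})

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):   # run a short input sequence
    h, c = lstm_step(x, h, c, params)
```

Because $o_t \in (0,1)$ and $\tanh(C_t) \in (-1,1)$, the hidden output stays bounded in $(-1, 1)$ at every step.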

    Integrating EGARCH into CNN-LSTM enhances financial modeling by combining EGARCH's volatility modeling with CNN-LSTM's dynamic relationship capturing capabilities, promising improved accuracy in forecasting volatility and risk. EGARCH, a model for capturing conditional volatility, is integrated into the CNN-LSTM model as an additional feature. The EGARCH equation, which captures the conditional variance $h_t$, is as follows:

    $h_t = \omega + \alpha |r_{t-1}| + \beta h_{t-1}$ (8)

    where $h_t$ represents the conditional variance at time $t$, $\omega$ is the constant term, and $\alpha$ and $\beta$ are parameters controlling the impact of past returns ($r_{t-1}$) and past conditional variances ($h_{t-1}$). The use of the absolute value $|r_{t-1}|$ reflects the asymmetric nature of EGARCH, capturing how positive and negative returns affect volatility differently.

    The output from the EGARCH model, which is the conditional variance $h_t$, can be combined with the LSTM output. This is typically achieved by treating $h_t$ as an additional feature and appending it to the LSTM output.

    $\text{CombinedOutput}_t = [\text{LSTMOutput}_t,\ h_t]$ (9)

    where $\text{CombinedOutput}_t$ is the output at time $t$ that incorporates both the LSTM output and the EGARCH conditional variance $h_t$. This approach treats $h_t$ as an additional feature appended to the LSTM output.

    The combined output from the EGARCH and LSTM models is passed through a fully connected layer, and the model's output is compared with the actual values to calculate the prediction error. The CNN-LSTM-EGARCH model thus provides a comprehensive approach to financial time-series forecasting, integrating feature extraction, sequential modeling, and conditional-volatility capture for improved predictive accuracy.

    $\text{ModelOutput}_t = f(\text{CombinedOutput}_t \cdot W + b)$ (10)

    where $\text{ModelOutput}_t$ represents the model's output at time $t$, $\text{CombinedOutput}_t$ is the combined output from the EGARCH and LSTM models, $W$ signifies the weight matrix of the fully connected layer, $b$ denotes its bias vector, and $f$ is the activation function, typically used to introduce non-linearity.

    $\text{PredictionError}_t = |\text{ActualValue}_t - \text{ModelOutput}_t|$ (11)

    where $\text{PredictionError}_t$ measures the absolute difference between the actual value at time $t$ and the model's output at the same time. This error quantifies the accuracy of the model's predictions.
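As a rough illustration of Equations (8)-(11), the following NumPy sketch runs the conditional-variance recursion, appends it to a stand-in model output, and computes the absolute prediction error; the parameter values, the initialisation of the variance, and the toy "LSTM output" are assumptions for demonstration only:

```python
import numpy as np

def variance_recursion(returns, omega=0.05, alpha=0.1, beta=0.85):
    """Conditional-variance recursion of Eq. (8): h_t = ω + α|r_{t-1}| + β h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = omega / (1.0 - beta)          # simple start value (an assumption)
    for t in range(1, len(returns)):
        h[t] = omega + alpha * abs(returns[t - 1]) + beta * h[t - 1]
    return h

rng = np.random.default_rng(1)
returns = rng.standard_normal(100) * 0.02
h = variance_recursion(returns)

# Eq. (9): append h_t to a toy "LSTM output" as an extra feature
lstm_out = rng.standard_normal((100, 4))
combined = np.column_stack([lstm_out, h])

# Eq. (10): fully connected layer; Eq. (11): absolute prediction error
W, b = rng.standard_normal(5) * 0.1, 0.0
model_out = np.tanh(combined @ W + b)
actual = rng.standard_normal(100) * 0.02
pred_error = np.abs(actual - model_out)
```

With positive $\omega$, $\alpha$, and $\beta$, the recursion keeps every $h_t$ strictly positive, as a conditional variance must be.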

    In this method, we aim to develop a powerful forecasting model by combining EGARCH, a model for conditional volatility, with the hybrid neural network framework that unifies the Gated Recurrent Unit (GRU) and Convolutional Neural Network (CNN) modules. This integrated approach allows us to capture both temporal dependencies and spatial features in load data for more accurate predictions. RNN utilizes hidden layers to retain information from earlier time points, shaping the output based on both current states and memories from previous instances. The structure of the unrolled RNN is illustrated as follows:

    $a_t = g_1(\omega_{aa} a_{t-1} + \omega_{ax} x_t + b_a)$
    $\hat{y}_t = g_2(\omega_{ay} a_t + b_y)$ (12)

    where $a_t$ represents the output of a single hidden layer at time $t$. The weight matrices for the hidden layer, input, and output are denoted $\omega_{aa}$, $\omega_{ax}$, and $\omega_{ay}$, respectively. Additionally, $b_a$ and $b_y$ represent the bias vectors for the single hidden layer and the output, respectively, while $g_1$ and $g_2$ signify the non-linear activation functions.

    The GRU presents a streamlined RNN structure, serving as a variation of the LSTM. Unlike the LSTM, the GRU comprises two gates – the update gate and reset gate – whereas the LSTM integrates three gates, encompassing the forget gate, input gate, and output gate (Gao et al., 2019). The GRU equations are:

    $\Gamma_u = \sigma(\omega_u [c_{t-1}, x_t] + b_u)$
    $\Gamma_r = \sigma(\omega_r [c_{t-1}, x_t] + b_r)$ (13)
    $\tilde{c}_t = \tanh(\omega_c [\Gamma_r \ast c_{t-1}, x_t] + b_c)$
    $c_t = (1 - \Gamma_u) \ast c_{t-1} + \Gamma_u \ast \tilde{c}_t$

    where $\omega_u$, $\omega_r$, and $\omega_c$ are the trainable weight matrices of the update gate, the reset gate, and the candidate activation $\tilde{c}_t$, respectively, and $b_u$, $b_r$, and $b_c$ stand for the corresponding bias vectors.
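A minimal NumPy sketch of one GRU step following Eq. (13); the dimensions, random weights, and zero biases are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, c_prev, p):
    """One GRU step following Eq. (13): update gate, reset gate,
    candidate activation, and the convex-combination state update."""
    z = np.concatenate([c_prev, x_t])
    gamma_u = sigmoid(p["wu"] @ z + p["bu"])                 # update gate
    gamma_r = sigmoid(p["wr"] @ z + p["br"])                 # reset gate
    z_reset = np.concatenate([gamma_r * c_prev, x_t])
    c_tilde = np.tanh(p["wc"] @ z_reset + p["bc"])           # candidate
    return (1.0 - gamma_u) * c_prev + gamma_u * c_tilde      # new state

rng = np.random.default_rng(2)
n_in, n_hid = 3, 4
p = {w: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for w in ("wu", "wr", "wc")}
p.update({b: np.zeros(n_hid) for b in ("bu", "br", "bc")})

c = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):
    c = gru_step(x, c, p)
```

Since the new state is a convex combination of the previous state and a $\tanh$-bounded candidate, the state remains in $(-1, 1)$ throughout.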

    The input to the CNN module is a spatiotemporal matrix, structured according to the sensors' locations and the chronological sequence of time steps. The representation of the spatiotemporal matrix is as follows:

    $x = \begin{bmatrix} X_1(1) & \cdots & X_1(n) \\ \vdots & \ddots & \vdots \\ X_k(1) & \cdots & X_k(n) \end{bmatrix}$ (14)

    We denote $k$ as the $k$th smart sensor, $n$ as the $n$th time step, and $X_k(n)$ as the data recorded by the $k$th smart sensor at time $n$. To extract the charging characteristics from the spatiotemporal matrix, a CNN is employed to process the information within the matrix, and the results from the CNN module are linked to a fully connected layer for additional processing. The EGARCH equation for capturing the conditional variance $h_t$ is given as:

    $h_t = \omega + \alpha |r_{t-1}| + \beta h_{t-1}$ (15)

    where $h_t$ represents the conditional variance at time $t$, $\omega$ is the constant term, and $\alpha$ and $\beta$ are parameters controlling the impact of past returns ($r_{t-1}$) and past conditional variances ($h_{t-1}$). The absolute value $|r_{t-1}|$ captures how positive and negative returns affect volatility differently.

    After combining the outputs of the GRU-CNN model with the EGARCH component, the subsequent step involves processing this combined information through a fully connected layer. This layer is instrumental in the forecasting process and produces the final forecasting outcomes.

    Let $X_{\text{combined}}$ represent the combined output from the GRU-CNN and EGARCH integration. The fully connected layer can be represented as a weighted sum followed by an activation function, where $W_{FC}$ represents the weights, $b_{FC}$ represents the biases, and $F$ is the activation function. The output $Y_{\text{forecast}}$ is the final load forecasting outcome:

    $Y_{\text{forecast}} = F(W_{FC} X_{\text{combined}} + b_{FC})$ (16)

    where $X_{\text{combined}}$ is the combined information from the GRU-CNN and EGARCH integration, $W_{FC}$ is the weight matrix of the fully connected layer, $b_{FC}$ is its bias vector, and $F$ is the activation function applied to the weighted sum.

    This equation signifies the pivotal role of the fully connected layer in the forecasting process: it transforms the combined information into the final forecasting outcomes, taking into account all the information provided by the GRU-CNN model and EGARCH, including temporal dependencies, spatial features, and conditional volatility information.
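The pipeline of Eqs. (14)-(16), namely the spatiotemporal matrix, a convolutional feature map, and the fully connected fusion of GRU-CNN features with the EGARCH variance, can be sketched as follows (the kernel size, the stand-in GRU output, the tanh activation, and the variance value are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 4, 16
x = rng.standard_normal((k, n))            # spatiotemporal matrix of Eq. (14)

# Toy CNN feature map: one 2x3 kernel slid over the matrix ('valid' convolution)
kernel = rng.standard_normal((2, 3)) * 0.5
feat = np.array([[np.tanh(np.sum(x[i:i+2, j:j+3] * kernel))
                  for j in range(n - 2)] for i in range(k - 1)])

gru_out = rng.standard_normal(8)           # stand-in for the GRU module output
h_t = 0.03                                 # stand-in EGARCH conditional variance

# Fuse CNN features, GRU output, and EGARCH variance, then apply Eq. (16)
x_combined = np.concatenate([feat.ravel(), gru_out, [h_t]])
W_fc = rng.standard_normal(x_combined.size) * 0.05
b_fc = 0.0
y_forecast = np.tanh(W_fc @ x_combined + b_fc)
```

The fused vector simply concatenates the three information sources, mirroring how the fully connected layer sees temporal, spatial, and volatility features side by side.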

    The fusion of CNNs and quantum computing has the potential to yield a computational approach with robust predictive capabilities (Wan et al., 2017). In quantum computing, a qubit serves as the smallest unit of information, representing a probabilistic state. A qubit can exist in either a "1" or "0" state or any superposition of the two (Gonçalves, 2019). The state of a qubit is defined in Equation (17):

    $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ (17)

    We denote by $\alpha$ and $\beta$ the amplitudes of the corresponding states, with $|\alpha|^2 + |\beta|^2 = 1$; the state is thus formed by the pair of numbers $[\alpha, \beta]^T$. The angle $\theta$ provides a geometric description of the state, defined by $\cos(\theta) = |\alpha|$ and $\sin(\theta) = |\beta|$ (Alaminos, Esteban and Salas, 2023). Quantum gates can be applied to adjust these probabilities through weight updates. An illustration of this is presented in formula (18) with the example of a rotation gate:

    $U(\Delta\theta) = \begin{bmatrix} \cos(\Delta\theta) & -\sin(\Delta\theta) \\ \sin(\Delta\theta) & \cos(\Delta\theta) \end{bmatrix}$ (18)

    The qubit state can be updated by applying this quantum gate. The application of the rotation gate to a qubit is given below:

    $\begin{bmatrix} \alpha' \\ \beta' \end{bmatrix} = \begin{bmatrix} \cos(\Delta\theta) & -\sin(\Delta\theta) \\ \sin(\Delta\theta) & \cos(\Delta\theta) \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ (19)
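Equations (17)-(19) can be simulated classically in a few lines of NumPy; the particular angles below are arbitrary illustrative choices:

```python
import numpy as np

def rotation(dtheta):
    """Single-qubit rotation gate of Eq. (18)."""
    return np.array([[np.cos(dtheta), -np.sin(dtheta)],
                     [np.sin(dtheta),  np.cos(dtheta)]])

# Qubit state |psi> = alpha|0> + beta|1> with |alpha|^2 + |beta|^2 = 1 (Eq. 17)
theta = 0.3
state = np.array([np.cos(theta), np.sin(theta)])   # the pair [alpha, beta]

# Eq. (19): applying the gate rotates the amplitudes while preserving the norm
new_state = rotation(0.2) @ state
```

Because the gate is a rotation, the amplitudes simply shift to $[\cos(0.5), \sin(0.5)]$ and the state stays normalised.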

    The process begins with a quantum hidden neuron in the state $|0\rangle$, setting up the superposition outlined in formula (20).

    $\sqrt{p}\,|0\rangle + \sqrt{1-p}\,|1\rangle \quad \text{with } 0 \le p \le 1$ (20)

    Here, "p" denotes the random probability of initiating the system in the state|O. The classical neurons are induced through the generation of random numbers, as discussed in the works of Sensoy (2018) and Katsiampa et al. (2019). The output from the quantum neuron is established as per Equation (21).

    $v_j = f\left(\sum_{i=1}^{n} w_{ji} x_i\right)$ (21)

    Considering "f" as a sigmoid or Gaussian function that depends on the specific problem, the description of the network's output is provided in the subsequent formula:

    $y_k = f\left(\sum_{j=1}^{l} w_{jk} v_j\right)$ (22)

    We assume the fully connected layer in the QNN processes the combined inputs, which now include $h_t$ from EGARCH, to compute the final load forecasting outcomes. We denote the fully connected layer's output as $z_k$ (Alaminos et al., 2020). The weighted sum $z_k$ of the inputs $v_j$ is calculated for each output neuron $k$ in the fully connected layer. This is a linear combination of the inputs, and the weights $w_{jk}$ determine the strength of the connections.

    $z_k = \sum_{j=1}^{l} \omega_{jk} v_j$ (23)

    The weighted sum (zk) is passed through an activation function (fk), which is typically a problem-dependent sigmoid or Gaussian function (Alaminos et al., 2020). The choice of activation function depends on the specific problem and design of the QNN.

    $v_k = f_k(z_k)$ (24)

    The output from the fully connected layer represents the forecasted value for a particular time step or prediction horizon.

    $y_k = v_k$ (25)

    To train the QNN, an objective function $E_k$ is defined to measure the error between the predicted value $y_k$ and the actual or desired output $o_k$. The most common objective function applied is the Mean Squared Error (MSE). The target output is denoted by $o_k$, and the adjustment of the output-layer weights is detailed in formulas (26) and (27):

    $E_k = \frac{1}{2}\left|y_k - o_k\right|^2$ (26)

    To update the weights $w_{jk}$ in the fully connected layer, the backpropagation algorithm is typically employed. The weight update for a given connection $w_{jk}$ can be calculated as:

    $\Delta w_{jk} = \eta\, e_k\, f'\, v_j$ (27)

    where $E_k$ represents the squared error between the network's output and the desired output, $\Delta w_{jk}$ is the update for the weight between hidden neuron $j$ and output neuron $k$, $\eta$ is the learning rate, $e_k$ is the error signal for output neuron $k$, and $f'$ is the derivative of the activation function applied to the output of neuron $j$ (Alaminos et al., 2020).

    The process of updating weights using backpropagation is typically applied iteratively to minimize the error (Ek) and improve the accuracy of forecasting. This integration of EGARCH with the QNN allows the forecasting model to consider both the temporal dependencies captured by the QNN and the conditional volatility information provided by EGARCH, resulting in enhanced forecasting accuracy. The specific equations and implementation details may vary based on the QNN architecture and the integration of EGARCH.
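A classical NumPy sketch of the output-layer computation and delta-rule update of Eqs. (23)-(27); the sigmoid activation, fixed hidden outputs $v_j$, toy targets, and learning rate are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

l, n_out = 6, 2
v = np.linspace(0.2, 1.0, l)          # fixed hidden-layer outputs v_j (assumed)
o = np.array([0.2, 0.8])              # arbitrary target outputs o_k
rng = np.random.default_rng(4)
w = rng.standard_normal((l, n_out)) * 0.5
eta = 0.5                             # learning rate

for _ in range(1000):
    z = v @ w                         # Eq. (23): weighted sums z_k
    y = sigmoid(z)                    # Eqs. (24)-(25): outputs y_k = v_k
    e = y - o                         # error signal (here e_k = y_k - o_k)
    # Eq. (27) as a descent step, with f'(z) = y(1 - y) for the sigmoid
    w -= eta * np.outer(v, e * y * (1.0 - y))

mse = 0.5 * np.sum((sigmoid(v @ w) - o) ** 2)   # Eq. (26) after training
```

Iterating the update drives the outputs toward the targets, so the final squared error is essentially zero.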

    RNNs have found success in various domains, particularly in time series forecasting, thanks to their significant predictive capabilities. The conventional RNN framework is organised around its output, which is influenced by its prior predictions (Mahajan, 2011). Formulas (28) and (29) relate an input sequence vector $x$, the hidden states of the recurrent layer $s$, and the output of a single hidden layer $y$:

    $s_t = \sigma(W_{xs} x_t + W_{ss} s_{t-1} + b_s)$ (28)
    $y_t = o(W_{so} s_t + b_y)$ (29)

    Here the weights from the input layer $x$ to the hidden layer $s$, from the hidden layer to itself, and from the hidden layer to its output layer are denoted $W_{xs}$, $W_{ss}$, and $W_{so}$, respectively. Additionally, $b_s$ and $b_y$ represent the biases of the hidden layer and output layer, and $\sigma$ and $o$ are the activation functions. The short-time Fourier transform of the input signals is given in formula (30):

    $\text{STFT}\{z(t)\}(\tau, \omega) = \int_{-\infty}^{+\infty} z(t)\, \omega(t-\tau)\, e^{-j\omega t}\, dt$ (30)

    In this context, $z(t)$ represents the vibration signals, while $\omega(t)$ symbolises the Gaussian window function centred around 0. The complex function $T(\tau, \omega)$ defines the vibration signals in terms of both time and frequency. To determine the hidden layers through the convolutional operation, we employ formulas (31) and (32):

    $S_t = \sigma(W_{TS} \ast T_t + W_{SS} \ast S_{t-1} + B_s)$ (31)
    $Y_t = o(W_{YS} \ast S_t + B_y)$ (32)

    where the $W$ are the convolution kernels.
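A discrete NumPy analogue of the Gaussian-windowed STFT in Eq. (30); the window length, hop size, Gaussian width, and the toy 8 Hz test signal are illustrative assumptions:

```python
import numpy as np

def stft_gaussian(z, win_len=32, hop=8, sigma=6.0):
    """Discrete analogue of Eq. (30): slide a Gaussian window over the
    signal and take the FFT of each windowed frame."""
    t = np.arange(win_len) - (win_len - 1) / 2.0
    window = np.exp(-0.5 * (t / sigma) ** 2)      # Gaussian window centred at 0
    frames = [z[i:i + win_len] * window
              for i in range(0, len(z) - win_len + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)  # rows: time, cols: frequency

fs = 128
tt = np.arange(4 * fs) / fs
z = np.sin(2 * np.pi * 8 * tt)                    # toy 8 Hz "vibration" signal
T = stft_gaussian(z)
```

With a 32-sample window at 128 Hz, the frequency bins are 4 Hz apart, so the 8 Hz tone peaks in bin 2 of every frame, illustrating how $T(\tau, \omega)$ localises the signal in both time and frequency.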

    For the formation of a deep architecture, stacking the recurrent convolutional neural network (RCNN) results in the DRCNN, as proposed by Huang and Narayanan in 2017. In this amalgamation, the ultimate segment of the model incorporates a supervised learning layer, as determined by Formula (33).

    $\hat{r} = \sigma(W_h h + b_h)$ (33)

    where $W_h$ and $b_h$ represent the weight and bias, respectively. To optimize parameter learning, stochastic gradient descent is utilized (Ma and Mao, 2019). Assuming that the actual data at time $t$ is denoted $r$, the loss function is outlined in formula (34).

    $L(r, \hat{r}) = \frac{1}{2}\, \|r - \hat{r}\|_2^2$ (34)

    To integrate EGARCH, we append its conditional variance $h_t$ to the input of the DRCNN model. The augmented input vector at time $t$ becomes $[x_t, h_t]$.

    After combining the outputs of the DRCNN model with the EGARCH component, the subsequent step involves processing this combined information through a fully connected layer to produce the final load forecasting outcomes.

    Let $Z_k$ denote the weighted sum in the fully connected layer, $V_k$ the output after applying the activation function, and $Y_k$ the final forecasting outcome:

    $Z_k = \sum_{j=1}^{l} \omega_{jk} v_j$ (35)
    $V_k = f_k(Z_k)$ (36)
    $Y_k = V_k$ (37)

    The choice of activation function fk depends on the specific problem and design of the DRCNN model. To train the DRCNN with EGARCH, a similar process as described in the previous response would be followed, involving the definition of an objective function to measure the error and the use of backpropagation to update the weights in the fully connected layer. This integration allows the DRCNN model to consider both the temporal dependencies captured by the DRCNN architecture and the conditional volatility information provided by EGARCH, leading to an enhanced load forecasting model.

    We denote the values predicted by the DRCNN-EGARCH model at time step $k$ as $\hat{r}_k$ and the actual values as $r_k$. The objective function measures the error between the predicted and actual values; a common choice for regression problems such as forecasting is the Mean Squared Error (MSE):

    $\text{MSE} = \frac{1}{N} \sum_{k=1}^{N} (\hat{r}_k - r_k)^2$ (38)

    In this step, we denote $N$ as the total number of time steps. The weights in the fully connected layer are updated using the gradient descent optimization algorithm. The gradient descent update rule for weight $\omega_{ij}$ in the fully connected layer is given by:

    $\omega_{ij}^{(t+1)} = \omega_{ij}^{(t)} - \eta\, \frac{\partial \text{MSE}}{\partial \omega_{ij}}$ (39)

    We denote $\eta$ as the learning rate, and $\frac{\partial \text{MSE}}{\partial \omega_{ij}}$ as the partial derivative of the MSE with respect to weight $\omega_{ij}$. The partial derivative is computed by applying the chain rule of calculus and backpropagation:

    $\frac{\partial \text{MSE}}{\partial \omega_{ij}} = \frac{2}{N} \sum_{k=1}^{N} (\hat{r}_k - r_k)\, \frac{\partial \hat{r}_k}{\partial \omega_{ij}}$ (40)

    The derivative $\frac{\partial \hat{r}_k}{\partial \omega_{ij}}$ involves the activations and outputs of the layers in the DRCNN-EGARCH model and depends on the specific architecture and activation functions used. This backpropagation process is applied iteratively over the training dataset to minimize the MSE and improve the model's forecasting performance.
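For a linear output head, Eqs. (38)-(40) reduce to ordinary least-squares gradient descent; the following NumPy sketch, with synthetic data and an assumed learning rate, verifies that the update rule recovers the generating weights:

```python
import numpy as np

rng = np.random.default_rng(5)
N, n_feat = 50, 3
v = rng.standard_normal((N, n_feat))        # final-layer inputs per time step
true_w = np.array([0.5, -0.2, 0.1])
r = v @ true_w                              # actual values r_k (synthetic)

w = np.zeros(n_feat)
eta = 0.1
for _ in range(2000):
    r_hat = v @ w                           # predictions r_hat_k (linear head)
    grad = (2.0 / N) * v.T @ (r_hat - r)    # Eq. (40): gradient of the MSE
    w -= eta * grad                         # Eq. (39): gradient-descent update
mse = np.mean((r_hat - r) ** 2)             # Eq. (38)
```

Because the problem is a noiseless linear regression, iterating the update drives the MSE to machine precision and the weights to their generating values.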

    A quantum system with $n$ qubits resides in the $n$-fold tensor-product Hilbert space $H = (\mathbb{C}^2)^{\otimes n}$, of dimension $2^n$. A quantum state is denoted by a unit vector $\psi \in H$, commonly expressed in quantum computing using bracket notation $|\psi\rangle \in H$; its conjugate transpose is represented as $\langle\psi| = |\psi\rangle^\dagger$. The inner product $\langle\psi|\psi\rangle = \|\psi\|_2^2$ signifies the square of the 2-norm of $\psi$, while the outer product $|\psi\rangle\langle\psi|$ produces a tensor of rank 2. The computational basis states are represented by $|0\rangle = (1, 0)^T$ and $|1\rangle = (0, 1)^T$. Compound basis states, such as $|01\rangle = |0\rangle \otimes |1\rangle = (0, 1, 0, 0)^T$, are formed by taking the tensor product of individual states.

    Therefore, a quantum gate serves as a unitary operation $U$ on $H$ that acts non-trivially on a subset $S \subseteq [n]$ of qubits, denoted $U \in SU(2^{|S|})$. To operate on $H$, we extend $U$ to act as the identity on the remainder of the space, i.e. $U_S \otimes 1_{[n]\setminus S}$. In a quantum circuit, the first gate, denoted $R(\theta)$, represents a unitary operation on a qubit, acting on the second qubit from below and depending on the parameter $\theta$. The dotted line extending from the gate signifies a "controlled" operation. For instance, if the control acts solely on a single qubit, the gate corresponds to the block-diagonal unitary map $|0\rangle\langle 0| \otimes 1 + |1\rangle\langle 1| \otimes R(\theta) = 1 \oplus R(\theta)$, indicating "if the control qubit is in state $|1\rangle$, apply $R(\theta)$". Gate sequences are computed as matrix products in quantum circuits.

    The projective measurements of a single qubit are represented by a Hermitian $2 \times 2$ matrix $M$, for instance $M = |1\rangle\langle 1| = \mathrm{diag}(0, 1)$; the complementary outcome is then denoted $M^\perp = 1 - M$. These measurements are depicted as meters within the circuit. For a given quantum state $|\psi\rangle$, the post-measurement state is $M|\psi\rangle/\sqrt{p}$ with probability $p = \|M|\psi\rangle\|^2$. This probability also serves as the post-selection likelihood, ensuring the measured result $M$. This likelihood can be elevated close to 1 through approximately $\sim 1/\sqrt{p}$ rounds of amplitude amplification, as proposed by Grover in 2005.

    In the training process, we employ quantum amplitude amplification (Guerreschi, 2019) on the output pathways to ensure accurate measurement of the correct token from the training data at each step. However, non-linear behavior is achievable in quantum mechanics. As an example, the single-qubit gate $R(\theta) = \exp(i\theta Y)$ for the Pauli matrix $Y$ (Nielsen and Chuang, 2001) acts as:

    $R(\theta) = \exp\left(i\theta \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\right) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$ (41)

    Specifically, it resembles a rotation within the two-dimensional space defined by the computational basis vectors of a single qubit, $\{|0\rangle, |1\rangle\}$. Although the rotation matrix itself maintains linearity, it is noteworthy that the state amplitudes, $\cos\theta$ and $\sin\theta$, exhibit non-linear dependence on the angle $\theta$. When elevating the rotation to a controlled operation $cR(i, \theta_i)$, contingent upon the $i$th qubit of a state $|x\rangle$ for $x \in \{0,1\}^n$, we attain the mapping

    $R(\theta_0)\, cR(1, \theta_1) \cdots cR(n, \theta_n)\, |x\rangle|0\rangle = |x\rangle \left(\cos(\eta)|0\rangle + \sin(\eta)|1\rangle\right) \quad \text{for } \eta = \theta_0 + \sum_{i=1}^{n} \theta_i x_i$ (42)

    Therefore, this equates to a rotation through an affine transformation of the basis vector $|x\rangle$ with $x = \{x_1, \ldots, x_n\} \in \{0,1\}^n$, influenced by a parameter vector $\theta = (\theta_0, \theta_1, \ldots, \theta_n)$. This procedure extends linearly to both the base and target state superpositions. Due to the form of $R(\theta)$, all alterations in amplitude introduced are real-valued.

    The non-linear transformation of cosine amplitudes is inherent in a controlled operation. However, the sine function exhibits a less sharply defined profile, resembling that of a rectified linear unit and closely resembling a sigmoidal activation function. This activation function introduces a parameter $ord$ (order) with a value greater than or equal to 1, influencing the tilt and thereby affecting the resulting amplitude. In the case of pure states, this quantum neuron undergoes a rotation by $f(\theta) = \arctan(\tan(\theta)^{2^{ord}})$, where $ord \ge 1$ represents the order of the neuron. Assuming an affine transformation $\eta$ of the input bitstring $x_i$, as depicted in formula (43), this rotation is then translated into the amplitudes. This approach highlights the nuanced interplay between non-linear transformations, activation functions, and rotational dynamics in the quantum neuron model.

    $\cos(f(\eta)) = \frac{1}{\sqrt{1 + \tan(\eta)^{2 \cdot 2^{ord}}}} \quad \text{and} \quad \sin(f(\eta)) = \frac{\tan(\eta)^{2^{ord}}}{\sqrt{1 + \tan(\eta)^{2 \cdot 2^{ord}}}}$ (43)

    This transformation emerges from normalising the operation $|0\rangle \rightarrow \cos(\theta)^{2^{ord}}|0\rangle + \sin(\theta)^{2^{ord}}|1\rangle$, as is evident. The circuit for $ord = 1$ is illustrated on the left, and for $ord = 2$ it is depicted on the right; higher orders can be constructed recursively. For a control in superposition, as in a state $(|x\rangle + |y\rangle)/\sqrt{2}$, the method is ineffective when $x \ne y$ for two bit-strings of length $n$. In such cases, the amplitudes within the overlap will hinge on the success outcome. A strategy known as fixed-point oblivious amplitude amplification (Tacchino et al., 2019) is employed. This technique essentially involves post-selecting the measurement result 0 while maintaining the unitarity of the operation with arbitrary precision.
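The amplitude expressions in Eq. (43) are consistent with the activation $f(\eta) = \arctan(\tan(\eta)^{2^{ord}})$, which a short NumPy check makes explicit (the sample angle and order are arbitrary):

```python
import numpy as np

def neuron_angle(eta, order=1):
    """Quantum-neuron activation f(eta) = arctan(tan(eta)^(2^order));
    order = 1 recovers the basic arctan(tan^2) neuron."""
    return np.arctan(np.tan(eta) ** (2 ** order))

eta, order = 0.4, 2
t = np.tan(eta) ** (2 ** order)             # tan(eta)^(2^ord)
cos_f = 1.0 / np.sqrt(1.0 + t ** 2)         # cos(f(eta)) from Eq. (43)
sin_f = t / np.sqrt(1.0 + t ** 2)           # sin(f(eta)) from Eq. (43)
f_val = neuron_angle(eta, order)
```

The identities $\cos(\arctan t) = 1/\sqrt{1+t^2}$ and $\sin(\arctan t) = t/\sqrt{1+t^2}$ show the two amplitudes are automatically normalised.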

    In this paper, we enhance the capabilities of this quantum neuron by introducing a greater number of control terms. Specifically, $\eta$ as defined in formula (44) is an affine transformation of the boolean vector $x = \{x_1, \ldots, x_n\}$ for $x_i \in \{0, 1\}$. With the introduction of multi-controlled gates, each having its own parameterized rotation denoted by a multi-index $\theta_I$ that varies based on the qubits $i \in I$ upon which the gate conditions, we have the flexibility to incorporate higher-degree polynomials. In other words,

    $\eta' = \theta_0 + \sum_{i=1}^{n} \theta_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} \theta_{ij} x_i x_j + \cdots = \sum_{I \subseteq [n],\, |I| \le d} \theta_I \prod_{i \in I} x_i$ (44)

    With "d" representing the degree of the neuron, for instance, when d = 2 and n = 4, an illustration of a checked rotation is provided as an example that elevates to this higher-order transformation η´ on the bit string xi. Consequently, boolean logic operations of higher degrees can be directly encoded within a single conditional rotation. For instance, an AND operation between two bits x1 and x2 can be succinctly represented as x1x2. To implement the constructed QRNN cell, it necessitates an iterative application of the QRNN cell to a sequence of input words in1,in2,,inL. The outgoing lanes outi denote a discrete distribution measuring pi over the class labels (Alaminos et al., 2022).

    In the intersection of financial forecasting and quantum-inspired machine learning, a pioneering method unfolds through the integration of EGARCH volatility forecasting into a Quantum Recurrent Neural Network (QRNN). The process involves embedding EGARCH output into the input sequence, employing a QRNN cell for dynamic updates, and optimizing with Cross-Entropy Loss. Uniquely, quantum principles like Amplitude Amplification and Repetition-to-Success circuits augment the training, providing a novel perspective. Post-selection techniques add adaptability based on outcomes.

    In this step, the EGARCH conditional volatility $h_t$ is integrated into the input sequence at each time step. The input at time $t$, $input_t$, is constructed as a concatenation of the original input $in_t$ and the EGARCH conditional volatility $h_t$:

    $input_t = [in_t, h_t]$ (45)

    This combined input is then fed into the subsequent layers of the QRNN.

    Within the Quantum Recurrent Neural Network (QRNN), the hidden state $s_t$ dynamically evolves by integrating EGARCH volatility $h_t$. The sigmoid activation function orchestrates this update, blending the previous hidden state $s_{t-1}$, the current input $x_t$, and the EGARCH information through the weight matrices $W_{ss}$ and $W_{sx}$ and a bias term $b_s$. This integration ensures the nuanced representation of EGARCH dynamics within the evolving hidden state.

    $s_t = \sigma(W_{ss} s_{t-1} + W_{sx} x_t + b_s)$ (46)

    The QRNN's predictive prowess emanates from the output $y_t$, crafted through a softmax transformation of the updated hidden state $s_t$. Fueled by the weight matrix $W_{ys}$ and bias term $b_y$, this process synthesizes the EGARCH-informed information into a probability distribution. This outcome encapsulates the network's ability to distill EGARCH-informed insights into actionable predictions, a pivotal aspect in enhancing financial modeling precision.

    y_t = \mathrm{softmax}(W_{ys} s_t + b_y) (47)
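A minimal numerical sketch of the forward pass in Eqs. (45)-(47) can be written as follows; the dimensions, random weights, and volatility values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def qrnn_step(s_prev, x_t, h_t, W_ss, W_sx, b_s, W_ys, b_y):
    """One classical step of Eqs. (45)-(47): concatenate the EGARCH
    volatility h_t onto the raw input, update the hidden state with a
    sigmoid, and emit a softmax distribution over the class labels."""
    x_aug = np.concatenate([x_t, [h_t]])                               # Eq. (45)
    s_t = 1.0 / (1.0 + np.exp(-(W_ss @ s_prev + W_sx @ x_aug + b_s)))  # Eq. (46)
    y_t = softmax(W_ys @ s_t + b_y)                                    # Eq. (47)
    return s_t, y_t

rng = np.random.default_rng(0)
d_in, d_hid, n_cls = 3, 4, 2                       # illustrative sizes
W_ss = rng.normal(size=(d_hid, d_hid))
W_sx = rng.normal(size=(d_hid, d_in + 1))          # +1 column for h_t
b_s = np.zeros(d_hid)
W_ys = rng.normal(size=(n_cls, d_hid))
b_y = np.zeros(n_cls)

s = np.zeros(d_hid)
for x_t, h_t in [(rng.normal(size=d_in), 0.02), (rng.normal(size=d_in), 0.03)]:
    s, y = qrnn_step(s, x_t, h_t, W_ss, W_sx, b_s, W_ys, b_y)
```

The output y is a valid probability distribution over the class labels at every step, regardless of the magnitude of the injected volatility.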

    In the intricate tapestry of financial modeling, the integration of EGARCH (Exponential Generalized Autoregressive Conditional Heteroskedasticity) stands as a pivotal element for enhanced volatility forecasting. Within this framework, the Cross-Entropy Loss function takes center stage, serving as the compass for training precision. The loss function is defined as

    Loss = -\sum_t \sum_i out_i \log(y_{t,i}) (48)

    The subsequent weight updates (Wss, Wsx, Wys, bs, by) reflect the network's adaptability to refine its parameters and align with the intricacies of EGARCH-informed predictions.

    W_{ss} \leftarrow W_{ss} - \eta \frac{\partial Loss}{\partial W_{ss}} (49)
    W_{sx} \leftarrow W_{sx} - \eta \frac{\partial Loss}{\partial W_{sx}} (50)
    W_{ys} \leftarrow W_{ys} - \eta \frac{\partial Loss}{\partial W_{ys}} (51)
    b_s \leftarrow b_s - \eta \frac{\partial Loss}{\partial b_s} (52)
    b_y \leftarrow b_y - \eta \frac{\partial Loss}{\partial b_y} (53)
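A single gradient step of Eqs. (48)-(53) can be sketched for the output-layer parameters; the recurrent weights are updated the same way via backpropagation through time, which is omitted here for brevity. The hidden state, labels, and learning rate are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
s_t = rng.random(4)                      # hidden state from Eq. (46)
W_ys = rng.normal(size=(2, 4))
b_y = np.zeros(2)
target = np.array([1.0, 0.0])            # one-hot label "out"
eta = 0.1                                # learning rate

y_t = softmax(W_ys @ s_t + b_y)
loss = -np.sum(target * np.log(y_t))     # Eq. (48)

# For softmax + cross-entropy, dLoss/dz = y_t - target, hence:
dW_ys = np.outer(y_t - target, s_t)      # dLoss/dW_ys
db_y = y_t - target                      # dLoss/db_y
W_ys -= eta * dW_ys                      # Eq. (51)
b_y -= eta * db_y                        # Eq. (53)

new_loss = -np.sum(target * np.log(softmax(W_ys @ s_t + b_y)))
```

Because the loss is convex in the output-layer pre-activation, a small step along the negative gradient strictly reduces it.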

    The introduction of the unitary operator U(Δθ), governed by a rotation matrix, adds a quantum-inspired dimension to this financial modeling landscape, opening avenues for novel approaches to volatility modeling and risk assessment.

    U(\Delta\theta) = \begin{pmatrix} \cos\Delta\theta & -\sin\Delta\theta \\ \sin\Delta\theta & \cos\Delta\theta \end{pmatrix} (54)
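The rotation operator of Eq. (54) is orthogonal, so applying it to a qubit's probability-amplitude pair preserves normalization. A quick check, with an arbitrary illustrative angle:

```python
import numpy as np

def U(dtheta):
    """Rotation matrix of Eq. (54); orthogonal/unitary, so it preserves
    the norm of an amplitude vector."""
    return np.array([[np.cos(dtheta), -np.sin(dtheta)],
                     [np.sin(dtheta),  np.cos(dtheta)]])

# (alpha, beta) with alpha^2 + beta^2 = 1
amp = np.array([np.cos(0.3), np.sin(0.3)])
rotated = U(0.25) @ amp
```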

    Quantum amplitude amplification is a technique used to enhance the probability of obtaining correct outcomes in a quantum system. In the context of QRNN training, this involves adjusting parameters and selecting the desired outcomes. A generic form of amplitude amplification can be represented as follows:

    U_{amplify} = 2\left(|\psi\rangle\langle\psi| - \frac{1}{N} I\right) - I (55)

    where |ψ⟩ is the quantum state, I is the identity matrix, and N is a normalization factor.

    Repeat-Until-Success (RUS) Circuit and Correction. The RUS circuit is used to check whether a quantum neuron has executed correctly; the correction circuit is applied when the outcome is not as expected. The operations can be expressed as:

    U_{RUS} = |0\rangle\langle 0| + e^{i\phi}|1\rangle\langle 1| (56)
    U_{correction} = I (57)

    where ϕ is a phase factor that depends on the success or failure of the quantum neuron.

    Post-selection techniques aim to fine-tune the training process based on measured outcomes. In the case of fixed-point oblivious amplitude amplification, additional quantum operations are performed. The operations can be represented as:

    U_{postselect} = \mathrm{diag}(1, e^{i\theta}) (58)

    where θ is a phase factor that influences the post-selection based on measured outcomes.

    In the traditional AdaBoost algorithm, the weight of each base classifier is determined after computation, without considering the adaptivity of each classifier individually (Wang et al., 2011). Therefore, in this study, Genetic Algorithm (GA) is employed in the adaptive integration procedures of the base classifiers. Both the probability of crossover and the probability of mutation play a crucial role in influencing the optimization effectiveness of the algorithm (Cheng et al., 2019). In selecting the appropriate crossover probability and mutation probability, we refer to the literature (Drezner and Misevičius, 2013). The crossover probability and mutation probability are defined as follows:

    P_c = \gamma (59)
    P_m = 0.1(1 - \gamma) (60)

    where γ is a regulatory factor. The fitness evaluation function was defined as:

    fit = \frac{\sum_{i=1}^{N} I(y(X_i) = y_i)}{N} (61)
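Equations (59)-(61) can be sketched directly; the value of the regulatory factor γ and the toy label vectors below are illustrative assumptions:

```python
import numpy as np

def crossover_mutation_probs(gamma):
    """Eqs. (59)-(60): crossover and mutation probabilities driven by a
    single regulatory factor gamma in (0, 1)."""
    return gamma, 0.1 * (1.0 - gamma)

def fitness(y_pred, y_true):
    """Eq. (61): fraction of correctly classified samples."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return float(np.mean(y_pred == y_true))

p_c, p_m = crossover_mutation_probs(0.8)
fit = fitness([1, -1, 1, 1], [1, -1, -1, 1])
```

Note that a large γ simultaneously encourages crossover and suppresses mutation, coupling the two operators through one knob.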

    Given {ω_{m,i} | i = 1, 2, ..., N; m = 1, ..., M}, which represents the weight of each sample in each base classifier, ω_{m,i} denotes the weight of the ith sample in the mth base classifier. Consider y_m(x) as a base classifier and Y(x) as a strong classifier. The term α_m indicates the weight assigned to the mth classifier, while ϵ_m signifies the error of the mth base classifier. The parameter M corresponds to the total number of base classifiers. The suggested AdaBoost-GA algorithm can be elaborated as follows:

    input_{i,t} = [x_t, h_i] (62)

    where the EGARCH conditional volatility (h_i) is appended to each input sequence (x_i) at each time step.

    Output:

    Y(.): The final strong classifier:

    Y(x) = \mathrm{sign}\left(\sum_{j=1}^{M} \alpha_j y_j(x)\right) (63)

    1. Initialize ω_{1,i} = 1/N.

    2. For m = 1 to M do

    3.

    \epsilon_m = \sum_{i=1}^{N} \omega_{m,i} I(y_m(X_i) \neq y_i) (64)

    4.

    \alpha_m = \ln\left\{\frac{1 - \epsilon_m}{\epsilon_m}\right\} (65)

    If ϵ_m ≤ 0.5 then α_m ≥ 0, and α_m increases as ϵ_m decreases. End if

    5.

    \omega_{m+1,i} = \frac{\omega_{m,i}}{Z_m} \exp(-\alpha_m y_i y_m([x_t, h_i])) (66)

    where

    Z_m = \sum_{i=1}^{N} \omega_{m,i} \exp(-\alpha_m y_i y_m([x_t, h_i])) (67)

    6. end for

    \alpha_m = GA(\alpha_{m,1}, \alpha_{m,2}, \ldots, \alpha_{m,k}, \beta_{m,1}, \beta_{m,2}, \ldots, \beta_{m,l}) (68)

    Utilize a Genetic Algorithm to optimize the weights (αm,i) of the base classifiers, considering both EGARCH parameters and base classifier weights.

    Y(x) = \mathrm{sign}\left(\sum_{j=1}^{M} \alpha_j y_j([x_t, h_i])\right) (69)

    Combine the base classifiers with their optimized weights, considering EGARCH-informed input, to form the final strong classifier. These equations represent the integration of EGARCH with AdaBoost-GA, where EGARCH information is embedded in the input, influencing both the training of base classifiers and the optimization process through Genetic Algorithms.
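The boosting loop of steps 1-5 can be sketched with a fixed pool of threshold stumps as base classifiers; the stump pool, the toy one-dimensional data, and the omission of the GA refinement of Eq. (68) are simplifying assumptions of this sketch:

```python
import numpy as np

def adaboost_fit(X, y, stumps, M):
    """Sketch of Eqs. (64)-(67): weighted-error stump selection, classifier
    weighting, and multiplicative sample reweighting. The GA refinement of
    the alpha_m weights (Eq. 68) would run afterwards and is omitted."""
    N = len(y)
    w = np.full(N, 1.0 / N)                          # step 1: omega_{1,i} = 1/N
    alphas, chosen = [], []
    for m in range(M):
        # pick the stump with the smallest weighted error eps_m  (Eq. 64)
        errs = [np.sum(w * (s(X) != y)) for s in stumps]
        best = int(np.argmin(errs))
        eps = max(errs[best], 1e-12)                 # guard against eps = 0
        alpha = np.log((1.0 - eps) / eps)            # Eq. (65)
        pred = stumps[best](X)
        w = w * np.exp(-alpha * y * pred)            # Eq. (66), before Z_m
        w = w / w.sum()                              # normalization Z_m, Eq. (67)
        alphas.append(alpha)
        chosen.append(stumps[best])

    def strong(Xq):                                  # Eq. (69)
        return np.sign(sum(a * s(Xq) for a, s in zip(alphas, chosen)))
    return strong

# toy 1-D data: label +1 iff x > 0.5
X = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X, t=t: np.where(X > t, 1, -1) for t in (0.3, 0.5, 0.7)]
Y = adaboost_fit(X, y, stumps, M=3)
```

In a full implementation the input vectors would be the EGARCH-augmented pairs [x_t, h_i] of Eq. (62) rather than raw scalars.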

    In this strategy, each qubit's probability amplitudes are treated as two genes, resulting in two gene strings within every chromosome. Each gene string serves as a representation of an optimization solution, with the quantity of genes determined by the number of optimization parameters. Quantum rotation gates are applied to rotate individual qubits toward the optimal chromosome, while quantum NOT gates induce mutations to bolster population diversity. The mutation process utilizes quantum NOT gates, and quantum rotation gates facilitate crossover and selection operations. The design of the rotation angle calculation function is shaped by the fluctuating trend of the fitness function at the search point. Should the change rate of the fitness function at a specific search point exceed that at other points, the rotation angle undergoes adjustment accordingly (Cao and Shang, 2010). The essential stages of the proposed model are outlined in Algorithm 1.

    In Algorithm 1, the proposed model unfolds in several steps. The process begins with Data Pre-processing in Phase one, followed by the generation of a random population in Step 2. Subsequently, radii values are calculated in Step 3, and the ANFIS model is initialized in Step 4. Moving on to Phase two, Step 5 involves executing the optimization algorithm using DCQGA. The algorithm then progresses to Step 6, where the optimal accuracy and performance are identified. Finally, Step 7 signals the termination of the entire process. This systematic approach outlines the key stages of the proposed model, combining data pre-processing, population generation, model initialization, optimization, and performance evaluation. The implementation of the proposed model involves two major phases.

    In Phase 1, referred to as Data Pre-processing, a systematic procedure is employed to convert raw inputs and outputs into a suitable format before the training process. The primary objective of this phase is to diminish the dimensionality of the input data and improve the overall generalization performance, as articulated by Bishop (1995).

    The initial data is normalized to the range [0, 1] through min-max normalization:

    X(i) = \frac{x(i) - m}{M - m} (70)

    for the time series data x, with m = min{x} and M = max{x}. Four categories of time series (the opening price, closing price, highest price, and lowest price) are individually normalized in the experiment.
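Equation (70) can be sketched directly; the closing-price values below are illustrative, not from the paper's dataset:

```python
import numpy as np

def min_max_normalize(x):
    """Eq. (70): X(i) = (x(i) - m) / (M - m), mapping a series into [0, 1].
    Applied separately to each of the four price series (open, close,
    high, low)."""
    x = np.asarray(x, dtype=float)
    m, M = x.min(), x.max()
    return (x - m) / (M - m)

close = min_max_normalize([101.0, 99.5, 104.0, 100.0])
```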

    In the initial step, we enrich the input representation for each time step within the ANFIS-Quantum GA model. The EGARCH conditional volatility (h_i) is integrated into the original time series (x_t) as part of the input sequence:

    input_{i,t} = [x_t, h_i] (71)

    This step ensures that the model considers both the raw time series data and the corresponding EGARCH volatility information during the training process, enhancing its ability to capture both temporal dependencies and conditional volatility patterns.

    Phase 2. Optimisation algorithm: The algorithm applied in this stage draws inspiration from the double-stranded quantum genetic algorithm (Cao and Shang, 2010) and is formulated as follows:

    1. Generate the initial angles and form the double strands from them

    P_{i,1} = (\cos(y_{i,1}), \cos(y_{i,2}), \ldots, \cos(y_{i,n})) (72)
    P_{i,2} = (\sin(y_{i,1}), \sin(y_{i,2}), \ldots, \sin(y_{i,n})) (73)

    where y_{i,n} is a random number between 0 and 2π, P_{i,1} is the cosine solution, and P_{i,2} represents the sine solution.

    2. Perform transformation in the solution space

    X_{(i,j)}^{c} = 0.5\left[b_i(1 + \alpha_{i,j}) + a_i(1 - \alpha_{i,j})\right] (74)
    X_{(i,j)}^{s} = 0.5\left[b_i(1 + \beta_{i,j}) + a_i(1 - \beta_{i,j})\right] (75)

    where i = 1, ..., m and j = 1, ..., n, with m the number of qubits and n the population size.

    3. Calculate the fitness value, which is equivalent to

    \mathrm{Fitness} = \frac{1}{1 + \mathrm{MSE}} (76)

    4. Update the optimal composition.

    \Delta\theta_{i,j} = \mathrm{sgn}(A)\,\Delta\theta_0 \exp\left(-\frac{|\nabla f(X_{i,j})| - \nabla f_i^{\min}}{\nabla f_i^{\max} - \nabla f_i^{\min}}\right) (77)

    where Δθ_0 represents the initial value of the rotation angle.

    5. Apply quantum rotation gates to update the relevant qubits on each chromosome.

    A = \begin{vmatrix} \alpha_0 & \alpha_1 \\ \beta_0 & \beta_1 \end{vmatrix} (78)

    Hence, the rotation angle Δθ can be determined based on rules such as the following: if A ≠ 0, the sign of Δθ is set to be opposite to the sign of A; otherwise, if A = 0, the direction of Δθ is arbitrary. The gradient terms in (77) are defined as:

    \nabla f(X_{i,j}) = \frac{f(X_{i,p}) - f(X_{i,c})}{(X_{i,j})_p - (X_{i,j})_c} (79)
    \nabla f_i^{\max} = \max\left(\left|\frac{f(X_{1,p}) - f(X_{1,c})}{(X_{1,j})_p - (X_{1,j})_c}\right|, \left|\frac{f(X_{2,p}) - f(X_{2,c})}{(X_{2,j})_p - (X_{2,j})_c}\right|, \ldots, \left|\frac{f(X_{n,p}) - f(X_{n,c})}{(X_{n,j})_p - (X_{n,j})_c}\right|\right) (80)
    \nabla f_i^{\min} = \min\left(\left|\frac{f(X_{1,p}) - f(X_{1,c})}{(X_{1,j})_p - (X_{1,j})_c}\right|, \left|\frac{f(X_{2,p}) - f(X_{2,c})}{(X_{2,j})_p - (X_{2,j})_c}\right|, \ldots, \left|\frac{f(X_{n,p}) - f(X_{n,c})}{(X_{n,j})_p - (X_{n,j})_c}\right|\right) (81)

    where X_{i,p} and X_{i,c} are the ith vectors in the solution space from the parent colony and the child colony, and (X_{i,j})_p and (X_{i,j})_c denote the jth variable of the vectors X_{i,p} and X_{i,c}, respectively.
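The adaptive rotation-angle rule of Eqs. (77) and (79) can be sketched numerically; the initial angle Δθ_0, the determinant value A, and the gradient bounds used below are illustrative assumptions, and sign conventions for sgn(A) vary across DCQGA variants:

```python
import numpy as np

def fitness_gradient(f_parent, f_child, x_parent, x_child):
    """Eq. (79): finite-difference change rate of the fitness between the
    parent-colony and child-colony values of one variable."""
    return (f_parent - f_child) / (x_parent - x_child)

def rotation_angle(A, grad, grad_min, grad_max, dtheta0=0.05 * np.pi):
    """Sketch of Eq. (77): the rotation step shrinks where the fitness
    changes steeply (large |grad|) and grows where it is flat, so the
    search moves carefully near promising regions. dtheta0 is assumed."""
    scale = np.exp(-(abs(grad) - grad_min) / (grad_max - grad_min))
    return np.sign(A) * dtheta0 * scale

g = fitness_gradient(0.9, 0.7, 1.2, 1.0)                    # roughly 1.0
dtheta = rotation_angle(A=0.3, grad=g, grad_min=0.2, grad_max=2.0)
```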

    6. Alter qubits according to the mutation probability (Pm = 0.1) equivalent to 1 over the population size (pop size = 10). These values have been empirically determined to ensure a satisfactory balance of speed and efficiency for the model.

    In the final step, the EGARCH-enhanced ANFIS-Quantum GA model is encapsulated as a function Y(x, h) = ANFIS-QGA (x, h). This function integrates the optimized ANFIS model with the additional EGARCH information embedded in the input. The combination of ANFIS structure and quantum optimization, guided by EGARCH, results in a powerful model capable of capturing both temporal dependencies and conditional volatility patterns for improved forecasting.

    The genetic algorithm commences with a collection of solutions, referred to as chromosomes, known as a population. These solutions give rise to additional solutions, termed offspring, based on their fitness value – the higher the fitness value, the more probable their reproduction. The fundamental genetic algorithm is outlined in the next paragraph.

    The Fundamental Genetic Algorithm begins by creating a random population with m chromosomes, denoted as f(y). Fitness is computed for each chromosome and f(y) within the population. The algorithm iterates through steps, including selection based on fitness, crossover of parents to produce new offspring, mutation, and acceptance of the offspring into a fresh population. This process is repeated until a termination condition is met, concluding with the return of the best solution from the current population. The algorithm then loops back to its initial steps for further iterations. This structured approach aims to iteratively refine populations, striving for optimal solutions.

    The adaptive genetic algorithm (AGA) is an enhanced version of the GA, employing adaptive mutations to reach the targeted optimisation results. A GA uses mutations on each parent chromosome, with random gene swapping occurring. In the suggested adaptive mutation, the mutation computation rate is related to the fitness of the chromosome. The mutation yield is based on the mutation rate. For the operation of the AGA, chromosomes have to be created from the set of solutions. Each chromosome undergoes many AGA steps.

    The steps in AGA are the creation of chromosomes, estimating fitness function, crossover, adaptive mutation, and selection. The chromosome stands for the information contained in a predefined form of the solution. Chromosome information is usually encoded in a binary string. Each bit of the chain could maintain a match with the feature of the solution. To put it another way, a number may be expressed by a whole string. There are many encoding techniques for encrypting solutions; it mainly pertains to the problem to be solved.

    Step 1: Chromosome Generation. Generating chromosomes is the starting step of this AGA algorithm. Since the chromosomes here are simply the rules produced by the fuzzy system, the genes are the parameters of those rules. In the solution space, C random chromosomes are produced, as given in the following formula.

    Ch_k = [G_{k0}, G_{k1}, \ldots, G_{k(CL-1)}], \quad 0 \le k \le M-1; \; 0 \le i \le CL-1 (82)

    where G_{ki} is the ith gene of the chromosome, M represents the total population, and CL denotes the chromosome's length.

    Step 2 involves calculating the fitness function, as depicted in formula (83). The primary aim of this function is to optimize the rules by maximizing their effectiveness as solutions are chosen.

    ft = \frac{\sum_{K=1}^{M} R_s}{M} (83)

    where the index s identifies the selected rule within the sum, R_s denotes the selected rule, and M represents the total count of rules. The fitness value, denoted as ft, is calculated for each chromosome based on the adopted rules. Every chromosome undergoes evaluation using the fitness function. Those chromosomes that meet the requirements of the fitness function, with the assistance of mutation, will be selected for inclusion in the breeding process.

    Step 3 involves crossover. This process entails generating a new chromosome by crossing two parent chromosomes. The resulting chromosome is termed the offspring. Crossover is executed based on targeted genes, and the success rate of producing offspring relies on the crossover rate (CO rate). The formula for determining the crossover point is presented below.

    CP_{rate} = \frac{CG}{CL} (84)

    where CP_rate represents the crossover rate, CG denotes the number of genes generated, and CL the length of the chromosome. Based on the calculated crossover (CO) rate, the parental chromosomes undergo crossover, generating a set of new chromosomes referred to as offspring. During crossover, we determine the crossover point and swap the genes at these points between the parental chromosomes, leading to offspring that inherit characteristics from both parents. The generated chromosomes typically exhibit higher fitness compared to the previous generation, making them better suited for further processing.

    Step 4: Adaptive Mutation. The mutation is done according to a rate of mutation, which is estimated as:

    MU_r = \frac{P_m}{CL} (85)

    where MU_r is the mutation rate, P_m represents the mutation point, and CL symbolizes the length of the chromosome.

    Mutation rate selection relies on the predicted fitness value. Because the rules are determined by fuzzy logic, the fitness value serves as the basis for this method. The mutation rate is compared with the given fitness values according to the threshold, and the resulting values are chosen as the final mutation rate. The vector depicting the mutation points follows:

    MU_r = \{mp_1, mp_2, \ldots, mp_l\} (86)

    where l is the length of the chromosome. The mutation rate is identified according to the fitness ft:

    MU_r = \begin{cases} 1, & \text{if } ft \le T \\ 0, & \text{otherwise} \end{cases} (87)

    where the computation of T is performed according to the resulting fuzzy rule. The mutation rate varies for each chromosome over each iteration and relies on the fitness value.
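A minimal sketch of Eqs. (85) and (87) follows; the threshold value T and the mutate-when-fitness-is-at-or-below-threshold reading of Eq. (87) are assumptions of this sketch, since the fuzzy rule that produces T is not specified here:

```python
def adaptive_mutation_decision(ft, T):
    """Eq. (87) sketch: mutate (rate 1) when the chromosome's fitness ft
    does not exceed the fuzzy-derived threshold T; otherwise leave the
    chromosome intact. The inequality direction is an assumption."""
    return 1 if ft <= T else 0

def mutation_rate_per_gene(Pm, CL):
    """Eq. (85): MU_r = P_m / CL for a chromosome of length CL."""
    return Pm / CL

rates = [adaptive_mutation_decision(ft, T=0.6) for ft in (0.4, 0.9)]
per_gene = mutation_rate_per_gene(2, 10)
```

This makes the mutation pressure chromosome-specific: weak chromosomes are perturbed, while high-fitness ones are protected.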

    Step 5: Selection. Given the fitness value achieved, the new chromosomes (N_p) are placed in a sorting pool. In the selection pool, the chromosomes with the best fitness values are kept at the top. The best N_p chromosomes in the selection pool are selected as the next generation from among the 2N_p chromosomes. The prediction model comprises several steps: rough-set-based attribute reduction, normalization, and then AGAFL classification. Each step of the described model is detailed below:

    1. Standardization

    Consider the dataset, which consists of attributes and entities. Normalization is applied to simplify the numerical complexity of the data by transforming it into a specific range. The common min-max method is employed for this normalization. The initial dataset is transformed into a range through min-max normalization.

    D_n = \frac{D - D_{min}}{D_{max} - D_{min}} \times (new_{max} - new_{min}) + new_{min} (88)

    where the target range of the transformed dataset is denoted by [new_min, new_max], with new_min = 0 and new_max = 1.

    The core framework of the proposed model involves processing inputs such as Decentraland, LuckyBlock, SafeMoon, Avalanche, SeeSaw Protocol, Binamon, Kasta, and X2Y2 to yield an Optimized Fuzzy Logic Classifier with refined classification rules as the output. The model's progression includes the registration of rough set theory to extract valuable information from the input datasets. Mined features from this process are then fed into the fuzzy logic classifier for model training and the generation of classification rules. This encompasses key steps like fuzzification, fuzzy rule generation, and defuzzification. Following this, an Adaptive Genetic Algorithm (AGA) is employed to optimize the classification rules derived in the previous step. The model's validity is established through cross-validation using test data, and its performance is evaluated using accuracy, specificity, and sensitivity metrics. Additionally, a thorough statistical analysis is conducted to affirm the robustness of the results obtained from the model. This comprehensive approach ensures the effectiveness and reliability of the Optimized Fuzzy Logic Classifier in handling the specified inputs.

    2. Reduced attributes through Rough Sets

    Reducing the attributes using Rough sets is the primary task. Moreover, the number of attributes is minimized and details that are not relevant, disjoint, noisy, or even redundant are removed.

    3. Depiction of the solution

    The solution is expressed in binary form. A bit of 1 indicates selection of the corresponding attribute, while 0 means non-selection. For example, for a data set comprising 10 attributes (a_1, a_2, a_3, ..., a_{10}) and a solution Y = 1010110010, the selected attributes are (a_1, a_3, a_5, a_6, a_9).
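The binary decoding described above can be sketched directly, reproducing the worked example from the text:

```python
def selected_attributes(solution_bits, names):
    """Decode a binary solution string: a bit of 1 selects the
    corresponding attribute, a bit of 0 discards it."""
    return [n for b, n in zip(solution_bits, names) if b == "1"]

attrs = [f"a{i}" for i in range(1, 11)]   # a1 ... a10
chosen = selected_attributes("1010110010", attrs)
```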

    4. Fitness Function

    The fitness value of each solution is established by the fitness function, and the best solution is selected according to the fitness value. The set of fuzzy logic classifier rules will constitute the population on which the fitness function is applied. The rule chosen to engage in reproduction to create the following generation will be a superset of S_f. Let R = {r_1, r_2, r_3, ..., r_m} be the set of rules considered to spawn the next population, and let R_f ⊆ R, where R_f is the set of rules that are supersets of S_f. The merit of each solution is evaluated using the fitness function S_f.

    5. Completion standards

    Only if the maximum number of iterations is achieved will the algorithm cease its implementation. The solution containing the highest fitness value is chosen using RS, and the AGAFL is employed to rank the datasets. The best attributes are provided as input to the fuzzy classifier.

    6. Forecasting with the fuzzy logic system

    Having performed feature reduction on the input dataset, the hybrid AGAFL classifier performs the forecast. The fuzzy logic classifier involves three stages: Fuzzification, Fuzzy inference engine, and De-Fuzzification. Incoming data is transformed into a membership value ranging from 0 to 1 through the application of the membership function (MF). The triangular membership approach has been selected to convert the input data into a fuzzy value. The underlying principle governing the assessment of membership values is detailed below:

    f(x) = \begin{cases} 0, & \text{if } x \le i \text{ or } x \ge k \\ \frac{x - i}{j - i}, & \text{if } i \le x \le j \\ \frac{k - x}{k - j}, & \text{if } j \le x \le k \end{cases} (89)
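The triangular membership function of Eq. (89) is straightforward to sketch; the feet (i, k) and peak (j) below are illustrative values:

```python
def triangular_mf(x, i, j, k):
    """Triangular membership function of Eq. (89): zero outside [i, k],
    rising linearly on [i, j] to a peak of 1 at j, and falling linearly
    on [j, k]."""
    if x <= i or x >= k:
        return 0.0
    if x <= j:
        return (x - i) / (j - i)
    return (k - x) / (k - j)

mu = triangular_mf(0.25, 0.0, 0.5, 1.0)
```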

    Formulating fuzzy rules is essential for linking inputs to their respective outputs. Given attributes represented by A_1, A_2, ..., A_N and class labels symbolized by C_1, C_2, ..., these rules can be constructed using linguistic values like high, medium, and low.

    The AGAFL system receives test data with narrowed attributes, which are subsequently transformed into fuzzy values using a fuzzy membership function. These fuzzy inputs are compared with the fuzzy rules established in the rule base. The rule inference technique is then utilized to derive linguistic values, which are further converted into fuzzy ratings through a weighted average approach. Finally, rankings are determined based on the obtained fuzzy scores.

    The integration of EGARCH with AGAFL involves enhancing the optimization process with EGARCH-based conditional volatility. The fitness function in AGAFL is designed to maximize the rules as solutions are selected. To incorporate EGARCH, we introduce conditional volatility (σt) into the fitness function. The modified fitness function (Ft) now considers both the fuzzy rules and EGARCH-based volatility:

    F_t = \frac{\sum_{K=1}^{M} R_s}{M} \times \sigma_t^2 (90)

    where R_s represents the chosen rule, M is the total rule count, and σ_t is the conditional volatility at time t obtained from EGARCH.

    The mutation rate in AGAFL is adaptive and depends on the fitness of the chromosome. To integrate EGARCH, we modify the mutation rate computation to consider EGARCH-based volatility:

    MU_r = \frac{P_m \times \sigma_t}{CL} (91)

    where MUr is the mutation rate, Pm represents the mutation point, σt is the EGARCH-based conditional volatility, and CL is the length of the chromosome.

    The selection process can be modified to favor chromosomes that not only provide good rule solutions but also align with the predicted volatility. One way to achieve this is by introducing a selection probability (Ps):

    P_s = \frac{F_t}{\sum_{i=1}^{N} F_i} (92)

    where Ft is the weighted fitness of the current chromosome and N is the total number of chromosomes.
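Equations (90) and (92) can be sketched together; the rule-hit vector and the two EGARCH variance values used below are illustrative assumptions:

```python
import numpy as np

def egarch_fitness(rule_hits, M, sigma2_t):
    """Eq. (90) sketch: rule-based fitness scaled by the EGARCH
    conditional variance at time t, so rule sets are rewarded more in
    high-volatility regimes."""
    return (sum(rule_hits) / M) * sigma2_t

def selection_probabilities(F):
    """Eq. (92): fitness-proportional selection probabilities."""
    F = np.asarray(F, dtype=float)
    return F / F.sum()

# same rule set evaluated under two different volatility regimes
F = [egarch_fitness([1, 0, 1, 1], M=4, sigma2_t=s2) for s2 in (0.02, 0.04)]
P = selection_probabilities(F)
```

Under this weighting, an otherwise identical chromosome is more likely to be selected when the market is more volatile.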

    The AGAFL algorithm is enhanced by integrating EGARCH-based volatility computation in every iteration. The calculation of EGARCH volatility is incorporated, and the mutation rate is adjusted accordingly. This modification ensures that the optimization process takes into account not just the fuzzy logic rules but also factors in the volatility of the financial market.

    The quantum evolutionary algorithm (QEA) is an evolutionary algorithm rooted in quantum computing principles. It incorporates concepts such as superposition states in quantum computing and utilizes a single encoding form to achieve enhanced experimental outcomes in combinatorial optimization problems. We aim to improve the global optimization capability of the genetic algorithm by incorporating a local search ability based on the quantum probabilistic model. To address limitations in the Quantum Evolutionary Algorithm (QEA), we introduce a new variant called the "Quantum Genetic Algorithm." This algorithm utilizes a quantum probabilistic vector encoding mechanism, combining the genetic algorithm's crossover operator with the updating strategy of quantum computation. The goal is to enhance the overall global search capacity of the quantum algorithm. The economic dispatching process using the quantum genetic algorithm is outlined in the following steps:

    Step 1: Population Initialization. In the Quantum Genetic Algorithm (QGA), the fundamental unit of information is a quantum bit. The state of a quantum bit can be represented as either 0 or 1:

    |\Psi\rangle = \alpha|0\rangle + \beta|1\rangle (93)

    where α and β are two complex numbers satisfying the normalization condition |α|² + |β|² = 1; |α|² and |β|² symbolize the likelihood of the quantum bit being in the 0 and 1 states, respectively. Quantum Genetic Algorithms (QGA) present an innovative coding method utilizing quantum bits, where a quantum bit is described by a pair of complex numbers. The representation of a system with m quantum bits is expressed as:

    \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_m \\ \beta_1 & \beta_2 & \cdots & \beta_m \end{bmatrix} (94)

    In the equation, |α_i|² + |β_i|² = 1 (i = 1, 2, ..., m). This representation can be employed to express any linear superposition of states. For example, consider a system of three quantum bits with the following probability amplitudes:

    \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{\sqrt{3}}{2} & \frac{1}{2} \\ \frac{1}{\sqrt{2}} & \frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix} (95)

    The state of the system can be identified as

    \frac{\sqrt{3}}{4\sqrt{2}}|000\rangle + \frac{3}{4\sqrt{2}}|001\rangle + \frac{1}{4\sqrt{2}}|010\rangle + \frac{\sqrt{3}}{4\sqrt{2}}|011\rangle + \frac{\sqrt{3}}{4\sqrt{2}}|100\rangle + \frac{3}{4\sqrt{2}}|101\rangle + \frac{1}{4\sqrt{2}}|110\rangle + \frac{\sqrt{3}}{4\sqrt{2}}|111\rangle (96)
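The three-qubit example of Eqs. (95)-(96) can be verified numerically: the amplitude of each basis state |b1 b2 b3⟩ is the product of the per-qubit amplitudes, and the squared amplitudes sum to 1:

```python
import numpy as np

# per-qubit (alpha, beta) pairs from the amplitude matrix of Eq. (95)
qubits = [(1 / np.sqrt(2), 1 / np.sqrt(2)),   # (alpha_1, beta_1)
          (np.sqrt(3) / 2, 1 / 2),            # (alpha_2, beta_2)
          (1 / 2, np.sqrt(3) / 2)]            # (alpha_3, beta_3)

# amplitude of |b1 b2 b3> = product of alpha (bit 0) or beta (bit 1)
state = {}
for b1 in (0, 1):
    for b2 in (0, 1):
        for b3 in (0, 1):
            amp = qubits[0][b1] * qubits[1][b2] * qubits[2][b3]
            state[f"{b1}{b2}{b3}"] = amp

total = sum(a ** 2 for a in state.values())   # should equal 1
```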

    Step 2: Carry out individual coding and measurement of the population generating units. QGA is a probabilistic algorithm similar to EA. The population is denoted as H(t) = {Q_1^t, Q_2^t, ..., Q_h^t, ..., Q_l^t} (h = 1, 2, ..., l), where l is the size of the population and Q_h^t = {q_1^t, q_2^t, ..., q_j^t, ..., q_n^t}, where n stands for the number of generator units, t is the evolution generation, and q_j^t represents the binary coding of the generation volume of the jth generator unit. Its chromosome is depicted below:

    q_j^t = \begin{bmatrix} \alpha_1^t & \alpha_2^t & \cdots & \alpha_m^t \\ \beta_1^t & \beta_2^t & \cdots & \beta_m^t \end{bmatrix} (97)

    (j = 1, 2, ..., n) (m is the length of the quantum chromosome).

    In the initialization of H(t), if all α_i^t, β_i^t (i = 1, 2, ..., m) in every q_j^t of Q_h^t are initialized to 1/√2, all possible linear superposition states occur with equal likelihood. During the step of generating S(t) from H(t), a collective solution set S(t) is formed by observing the state of H(t), where in the tth generation S(t) = {P_1^t, P_2^t, ..., P_h^t, ..., P_l^t} and P_h^t = {x_1^t, x_2^t, ..., x_j^t, ..., x_n^t}. Every x_j^t (j = 1, 2, ..., n) is a series (x_1, x_2, ..., x_i, ..., x_m) of length m, obtained from the qubit amplitudes |α_i^t|² or |β_i^t|² (i = 1, 2, ..., m). In the binary scenario, a random number in the range [0, 1] is chosen; if it is greater than |α_i^t|², '1' is selected; otherwise, '0' is selected.

    Step 3: Make an individual measurement for every item in S(t). Employ a fitness assessment function to evaluate each object in S(t) and retain the best object in the generation. If a satisfactory solution is achieved, the algorithm stops; if not, proceed to the fourth step.

    Step 4: Apply a suitable quantum rotation gate U(t) to update S(t). The traditional GA employs mating and mutation operations to uphold diversity within the population. In contrast, the quantum genetic algorithm utilizes a logic gate to modulate the likelihood amplitude of the quantum state, ensuring the preservation of population diversity. Therefore, the essence of the quantum genetic algorithm lies in the update facilitated by a quantum gate.

    U = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} (98)

    with θ representing the rotation angle of the quantum gate. Its numerical value is expressed as

    \theta = k \cdot f(\alpha_i, \beta_i) (99)
    k = \pi \cdot \exp\left(-\frac{t}{iter_{max}}\right) (100)

    We treat k as a variable associated with the evolutionary generation to dynamically adjust the mesh size: t denotes the evolutionary generation, π is the circular constant, and iter_max is a constant dependent on the complexity of the optimization problem. The purpose of the function f(α_i, β_i) is to guide the algorithm in searching for the optimal direction.

    Thus, the process of utilizing the quantum rotation gate to modify the probability amplitude for each individual object in the population, specifically by employing the quantum rotation gate U(t) to upgrade S(t) within the quantum genetic algorithm, can be articulated as:

    S(t+1)=U(t)×S(t) (101)

    where t denotes the evolutionary generation, U(t) represents the quantum rotation gate for the tth generation, S(t) means the probability amplitude of a specific object in the tth generation, and S(t+1) indicates the probability amplitude of the same object in the (t+1)th generation.
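The update S(t+1) = U(t) × S(t) of Eq. (101) can be sketched with the rotation gate of Eq. (98); the chromosome size and angle are illustrative, and the amplitudes are initialized to the equal-superposition value 1/√2:

```python
import numpy as np

def rotation_gate(theta):
    """Quantum rotation gate U of Eq. (98)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def update_amplitudes(S, theta):
    """Eq. (101): S(t+1) = U(t) x S(t), applied column-wise to the
    (alpha, beta) amplitude pairs of a quantum chromosome."""
    return rotation_gate(theta) @ S

# one chromosome with three qubits, all initialised to 1/sqrt(2)
S_t = np.full((2, 3), 1 / np.sqrt(2))
S_next = update_amplitudes(S_t, theta=0.1)
norms = np.sum(S_next ** 2, axis=0)   # each column stays normalised
```

Because U is orthogonal, every qubit's |α|² + |β|² = 1 constraint is preserved across generations without explicit renormalization.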

    Step 5: Perturbation is introduced as a countermeasure. Given QGA's tendency to be ensnared by superior local extreme values, we introduce population disturbance. QGA investigations reveal that if the leading individual of the current generation represents a local extreme value, liberating the algorithm becomes highly challenging. Consequently, the algorithm risks being confined to local extrema if the foremost individual remains unaltered in successive generations.

    Incorporating EGARCH into the Quantum Genetic Algorithm (QGA) enhances its capacity to optimize financial models by introducing volatility dynamics into the evolutionary process. EGARCH, a powerful tool for modeling financial time series, is seamlessly integrated into the QGA framework to improve the global optimization and local search abilities of the algorithm.

    The quantum bit representation is extended to include EGARCH parameters in addition to QGA parameters. For a system with m quantum bits, the quantum chromosome matrix Ψ is defined as:

    \Psi = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_m \\ \beta_1 & \beta_2 & \cdots & \beta_m \end{bmatrix} (102)

    where αi and βi are complex numbers corresponding to the likelihood of the respective state (0 or 1).

    The quantum chromosome for EGARCH at time t is denoted as:

    q_j^t = \begin{bmatrix} \alpha_1^t & \alpha_2^t & \cdots & \alpha_m^t \\ \beta_1^t & \beta_2^t & \cdots & \beta_m^t \end{bmatrix} (103)

    Then, we update the fitness function to include the volatility from EGARCH:

    F_t = \frac{\sum_{K=1}^{M} \omega R_s}{M} \cdot \sigma_t^2 (104)

    where σ_t² is the conditional volatility obtained from EGARCH at time t.

    The quantum rotation gate is applied to update the quantum chromosome for EGARCH:

    U(t) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} (105)

    The rotation angle θ is determined as:

    \theta = k \cdot f(\alpha_i, \beta_i) (106)
    k = \pi \cdot \exp\left(-\frac{t}{iter_{max}}\right) (107)

    where iter_max is a constant, and f(α_i, β_i) guides the algorithm towards the optimal solution.

    The application of the quantum rotation gate to the entire probability amplitude is represented as:

    S(t+1) = U(t) \times S(t) (108)

    where S(t) is the probability amplitude of the current generation at time t, and S(t+1) is the updated probability amplitude for the next generation.

    To introduce perturbation in the population and prevent the algorithm from getting stuck in local optima, a perturbation function can be defined. One common approach is to add a small random perturbation to the quantum chromosome elements. Let ΔΨ represent the perturbation matrix:

    \Delta\Psi = \begin{bmatrix} \Delta\alpha_1 & \Delta\alpha_2 & \cdots & \Delta\alpha_m \\ \Delta\beta_1 & \Delta\beta_2 & \cdots & \Delta\beta_m \end{bmatrix} (109)

    The perturbation is then applied to the quantum chromosome Ψ as follows:

    \Psi_{perturbed} = \Psi + \Delta\Psi (110)

    where Δα_i and Δβ_i are small random perturbations added to the corresponding elements α_i and β_i. The perturbation function ensures exploration of the solution space, promoting diversity in the population and preventing convergence to local extrema. The amount of perturbation can be controlled by adjusting the magnitude of Δα_i and Δβ_i based on the desired level of exploration.

    Addressing the parameterization challenge in Support Vector Machines (SVM) involves various approaches, from straightforward methods to more advanced metaheuristics. This method considers a trio of parameters, including the kernel type and a set of kernel-specific constants. The process initiates with the generation of an initial population of chromosomes using a pseudo-random or chaotic distribution. Each chromosome consists of both an integer-valued and a real-valued component. Individuals for the mating pool in each generation can be selected through elitism, roulette, or the recommended Boltzmann choice method. The remaining slots in the mating pool are filled using an n-point crossover operator and a boundary mutation technique (Ping-Feng et al., 2006; Chih-Hung et al., 2009), employing pseudorandom and chaotic sequences.

    SVRGBC is an amalgamation of an integer-valued and a real-valued genetic algorithm, employing various genetic operators like selection, crossover, and mutation to generate offspring from the population of real solutions. The chosen individuals are then gathered into a breeding pool, where crossover and mutation operators are applied. An issue with the GA crossover operation in the SVR parameter setting problem is chromosomal heterogeneity. A solution involves implementing a dominance scheme, as suggested by Lewis et al. (1998). Each gene's value is determined by its kernel, considering upper and lower limits of the allele. The subsequent stage is mutation, where several chromosomes are targeted for mutation to produce altered clones introduced into the new population. The current work employs a uniform mutation, as follows:

    c^{old} = \{c_1, c_2, \ldots, c_i, \ldots, c_n\}
    c_i^{new} = LB_i + r(UB_i - LB_i) (111)
    c^{new} = \{c_1, c_2, \ldots, c_i^{new}, \ldots, c_n\}

    where $c^{old}$ and $c^{new}$ denote a selected chromosome before and after mutation, respectively; n is the number of genes in the chromosome; $c_i^{new}$ stands for the new value of the i-th allele after mutation; r represents a random number in the range [0, 1], generated by one of the available probability distributions; and $LB_i$ and $UB_i$ are the lower and upper bounds of the i-th allele.
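The uniform mutation of Eq. (111) can be sketched in a few lines of Python; the helper name and per-allele mutation rate `mut_rate` are illustrative, not part of the original formulation:

```python
import random

def uniform_mutation(chromosome, lower, upper, mut_rate=0.1, rng=random):
    """Uniform mutation, Eq. (111): each selected allele i is replaced by
    LB_i + r * (UB_i - LB_i), with r drawn uniformly from [0, 1]."""
    mutated = list(chromosome)
    for i in range(len(mutated)):
        if rng.random() < mut_rate:
            mutated[i] = lower[i] + rng.random() * (upper[i] - lower[i])
    return mutated
```

With `mut_rate=1.0` every allele is resampled, always landing inside its `[LB_i, UB_i]` bounds.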

    The former is applied to manage the kernel type KT due to its integer encoding, while the latter alters real values such as $P_1$ or $P_2$. To achieve this, a slight modification of the presented mutation operator is required: a rounding function is introduced immediately after the perturbation of allele i to ensure integer values. Building on Goldberg's work (1990), we recommend Boltzmann selection, which chooses the surviving set of mates based on the system temperature determined by a cooling schedule. Each solution is accepted or rejected according to the Boltzmann distribution, as defined by (112).

    $P(x_i)=e^{-\Delta E/(kT)}$ (112)

    where k is the Boltzmann constant, $\Delta E$ the energy difference between the best and the current solution, and T the current system temperature. In simulated annealing (SA), the system temperature is driven by a cooling function, chosen from the classical exponential or linear annealing schemes (Kirkpatrick et al., 1983). Optimizing the Support Vector Regression (SVR) parameters through the Genetic Algorithm (GA) requires a fitness function to evaluate and select chromosomes for the mating set. In this study, the MSE is utilised to assess the quality of each solution so as to retain robust genetic material. It is determined by (113), where $\sigma_t$ is the observed volatility for period t, $\hat{\sigma}_t$ the forecasted volatility for period t, and n the total forecasted time frame.

    $MSE=\frac{1}{n}\sum_{t=1}^{n}(\sigma_t-\hat{\sigma}_t)^2$ (113)

    Furthermore, the Mean Squared Error (MSE) is determined through a statistical estimation of the model's generalization error, known as k-fold cross-validation (CV). This involves a technique for deriving the parameter values of a model from a training sample (Kohavi, 1995; Refaeilzadeh et al., 2009).
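The MSE fitness of Eq. (113) estimated via k-fold cross-validation can be sketched as follows; the `fit`/`predict` callables stand in for an SVR training routine and are hypothetical placeholders:

```python
def mse(actual, forecast):
    """Eq. (113): mean squared error of the volatility forecast."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def k_fold_mse(samples, targets, fit, predict, k=5):
    """Generalisation-error estimate by k-fold cross-validation: each
    interleaved fold is held out once while the model is fitted on the rest."""
    n = len(samples)
    scores = []
    for fold in range(k):
        test_idx = set(range(fold, n, k))          # interleaved fold handles n % k != 0
        train_x = [samples[i] for i in range(n) if i not in test_idx]
        train_y = [targets[i] for i in range(n) if i not in test_idx]
        model = fit(train_x, train_y)
        held = sorted(test_idx)
        scores.append(mse([targets[i] for i in held],
                          [predict(model, samples[i]) for i in held]))
    return sum(scores) / k
```

The averaged fold score is what the GA would use as the chromosome's fitness value.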

    The integration of EGARCH into SVM-GA marks a sophisticated synergy of financial modeling techniques. EGARCH's prowess in capturing time-varying volatility is seamlessly combined with SVM-GA, a robust optimization algorithm for SVM parameters. This fusion enhances the SVM-GA's capacity to optimize not only SVM parameters but also to consider dynamic volatility through EGARCH. The approach involves quantum-inspired chromosome representation, fitness function modification for EGARCH, and the adoption of Boltzmann selection in the genetic algorithm. This synergistic integration offers an advanced solution for refining support vector models, addressing both market trends and volatility dynamics. Now, we delve into the key steps that define this powerful integration.

    First, we extend the chromosome representation to include both SVM and EGARCH parameters. For a system with m genes:

    $Chromosome=[C\;\alpha_1\;\beta_1\;\alpha_2\;\beta_2\;\ldots\;\alpha_m\;\beta_m]$ (114)

    where C represents the SVM parameters and $\alpha_i$, $\beta_i$ the EGARCH parameters.

    In the integration of EGARCH into SVM-GA, generating a chromosome using a pseudo-random distribution involves assigning random values to the genes corresponding to SVM parameters. Here is the equation for generating a chromosome using a pseudo-random distribution for integer-valued SVM parameters:

    $c_i=\mathrm{RandomInteger}(LB_i,UB_i)$ (115)

    where $c_i$ is the i-th gene in the chromosome and RandomInteger($LB_i$, $UB_i$) generates a pseudo-random integer within the specified lower ($LB_i$) and upper ($UB_i$) bounds for the i-th SVM parameter. We update the fitness function to include both SVM and EGARCH components, considering the volatility obtained from EGARCH as well as the SVM performance:

    $Fitness=\lambda\times SVMFitness+(1-\lambda)\times EGARCHFitness$ (116)

    where λ is a weight parameter to balance the influence of SVM and EGARCH on the overall fitness.
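Eqs. (114)–(116) can be sketched as below; the function names and the default λ are illustrative assumptions:

```python
import random

def init_chromosome(int_bounds, real_bounds, rng=random):
    """Eqs. (114)-(115): integer-valued SVM genes (e.g. kernel type)
    followed by real-valued EGARCH genes (alpha_i, beta_i)."""
    svm_part = [rng.randint(lb, ub) for lb, ub in int_bounds]        # Eq. (115)
    egarch_part = [lb + rng.random() * (ub - lb) for lb, ub in real_bounds]
    return svm_part + egarch_part

def combined_fitness(svm_fitness, egarch_fitness, lam=0.5):
    """Eq. (116): weighted blend of the SVM and EGARCH fitness components."""
    return lam * svm_fitness + (1.0 - lam) * egarch_fitness
```

A chromosome is thus a flat list whose head is integer-encoded and whose tail is real-encoded, matching the mixed mutation operators described above.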

    Next, we choose individuals for the mating pool via Boltzmann selection, implemented probabilistically on the basis of individual fitness. The probability of selecting an individual $x_i$ for the mating pool follows the Boltzmann distribution; the selection probability $P(x_i)$ is given by:

    $P(x_i)=\frac{e^{-\Delta E_i/(kT)}}{\sum_{j=1}^{N}e^{-\Delta E_j/(kT)}}$ (117)

    where ΔE is the difference in fitness between the best solution and the current solution xi, k is the Boltzmann constant, T is the temperature of the system, which may decrease over generations, and N is the total number of individuals in the population.
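A minimal sketch of the Boltzmann selection rule of Eq. (117), assuming a minimisation setting where lower MSE is better (function names are illustrative):

```python
import math, random

def boltzmann_probabilities(fitnesses, temperature, k=1.0):
    """Eq. (117): Boltzmann selection weights, where delta_E is each
    individual's fitness gap to the best (lowest-MSE) solution."""
    best = min(fitnesses)                       # minimisation: lower MSE is better
    weights = [math.exp(-(f - best) / (k * temperature)) for f in fitnesses]
    total = sum(weights)
    return [w / total for w in weights]

def boltzmann_select(population, fitnesses, temperature, rng=random):
    """Draw one mating-pool member according to the Boltzmann weights."""
    return rng.choices(population,
                       weights=boltzmann_probabilities(fitnesses, temperature),
                       k=1)[0]
```

Lowering the temperature over generations, e.g. with an exponential schedule T ← 0.95·T, shifts selection from exploration toward exploitation, as in simulated annealing.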

    This probability distribution ensures that individuals with better fitness have a higher probability of being selected while still allowing exploration through the temperature T. We then apply crossover and mutation operators to generate offspring: the crossover must recombine both SVM and EGARCH parameters, and the mutation perturbs both parameter sets.

    Mutation: $\alpha_i^{new}=LB_i+r\times(UB_i-LB_i)$ (118)
    Mutation: $\beta_i^{new}=LB_i+r\times(UB_i-LB_i)$ (119)
    Mutation: $C^{new}=LB_C+r\times(UB_C-LB_C)$ (120)

    where r is a random number in the range [0, 1], and LB, UB denote the lower and upper bounds of the corresponding parameter.

    We obtain trading data for the emerging cryptocurrencies under study, based on their exchange rate with the U.S. dollar, from the CoinMarketCap and CoinGecko websites. The results of Vidal-Tomás (2022) show that cryptocurrency ranking platforms such as CoinMarketCap, CoinGecko, and BraveNewCoin cover the majority of trading activity in the cryptocurrency market, and that CoinMarketCap and CoinGecko are suitable options because their results are consistent with the major exchange platforms and databases in US dollars. The inputs used in this study are the prices of the cryptocurrencies, as commonly applied in algorithmic trading (Vo and Yost-Bremm, 2020; Fang et al., 2022), at daily frequency. To address concerns about potential variations in training intervals across cryptocurrencies, we implemented a unified training period from 2022/02/15 to 2022/07/18. Despite variations in the specific dates available for each currency, this standardized timeframe ensures a consistent and comparable duration for the training data. Table 1 summarizes the metadata of the datasets and descriptive statistics.

    Table 1.  Datasets' metadata and descriptive statistics.
    Dataset Date Range Min Open Price Max Open Price Mean (Price) Median (Price) Standard Deviation (Price) Kurtosis (Price) Market Cap
    Decentraland 2021/09/24–2022/07/18 $0.503 $5.90 $2.18 2.97515 2.74 3.86 $1,695,267,577
    LuckyBlock 2022/01/28–2022/07/18 $0.0006616 $0.009617 $0.00349 0.0051393 1.15 2.72 $38,783,443
    SafeMoon 2021/03/11–2022/07/18 $0.000000003257 $0.00000654 $0.000000958 0.00003272 0.073 3.19 $2,557,257
    Avalanche 2020/09/24–2022/07/18 $9.48 $146.22 $47.34 77.85 59.22 4.57 $6,692,455,244
    SeeSaw Protocol 2022/02/14–2022/07/18 $0.0005877 $0.4603 $0.00831 0.23044385 3.64 4.03 $1,849,274
    Binamon 2021/07/01–2022/07/18 $0.01029 $0.8615 $0.1793 0.435895 4.07 3.67 $1,616,528
    Kasta 2022/02/05–2022/07/18 $0.05889 $1.49 $0.2081 0.774445 1.37 3.11 $2,183,656
    X2Y2 2022/02/15–2022/07/18 $0.1254 $4.17 $1.29 2.1477 1.92 2.83 $32,712,428,183,656

     | Show Table
    DownLoad: CSV

    Following the trading literature (Vo and Yost-Bremm, 2020), we created the following indicators, conventionally used by finance professionals: Relative Strength Index, Stochastic Oscillator, Williams %R, Moving Average Convergence Divergence, On Balance Volume, Bollinger Bands, Trend Elimination Oscillator, and Klinger Oscillator. They are incorporated into a random walk model as a benchmark for introducing the inputs into the methodologies used in this study.

    Relative Strength Index (RSI)

    RSI is a trend indicator that seeks to capture overbought or oversold conditions. It compares the mean gain of bullish periods to the mean loss of bearish periods over a specified time frame; we use a 14-day period (Vargas et al., 2018). RSI assesses the pace of price-movement changes and is calculated using:

    $RSI=100-\frac{100}{1+RS_p}$ (121)

    where $RS_p$ represents the ratio between the average gain and the average loss over a specific period p. RSI is applied to determine a global trend, allowing for extrapolation of commercial momentum (Wilder, 1978). RSI's default time frame is 14 (p = 14), and a shorter time frame gives improved sensitivity to movement. RSI ranges between 0 and 100: a score below 30 points to an oversold situation, whereas a score above 70 signals an overbought one (Wilder, 1978). RSI allows us to observe price-momentum changes over time.
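A minimal sketch of Eq. (121), using simple averages of gains and losses over the window (Wilder's smoothed-average variant would differ slightly):

```python
def rsi(closes, period=14):
    """Eq. (121): RSI = 100 - 100 / (1 + RS), where RS is the ratio of the
    average gain to the average loss over the last `period` price changes."""
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    window = deltas[-period:]
    avg_gain = sum(d for d in window if d > 0) / period
    avg_loss = sum(-d for d in window if d < 0) / period
    if avg_loss == 0:                  # no bearish periods in the window
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```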

    Stochastic Oscillator (SO)

    The stochastic oscillator is a well-known momentum indicator. This indicator, credited to Lane (1984), values the momentum in the movement of price changes relative to price or volume. The equation for the stochastic oscillator is:

    $SO=100\,\frac{C-L_p}{H_p-L_p}$ (122)

    where C is the present closing price, and $H_p$ and $L_p$ represent, respectively, the highest and lowest prices over a specified period p. As with the RSI, the default time frame is 14 (p = 14). The stochastic oscillator is measured on a scale from 0 to 100: a reading of 20 or below indicates oversold conditions, while a reading of 80 or above signifies overbought conditions.
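Eq. (122) can be sketched directly from the rolling highest high and lowest low (function name is illustrative):

```python
def stochastic_oscillator(closes, highs, lows, period=14):
    """Eq. (122): %K = 100 * (C - L_p) / (H_p - L_p), with H_p and L_p the
    highest high and lowest low of the last `period` bars."""
    c = closes[-1]
    hp = max(highs[-period:])
    lp = min(lows[-period:])
    return 100.0 * (c - lp) / (hp - lp)
```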

    Williams %R (WR)

    Williams %R is another momentum indicator that evaluates the stock price movement. It gauges the closing price by considering the difference between the maximum and minimum prices. The calculation for Williams %R is performed at regular intervals as follows:

    $\%R=\frac{H_p-C}{H_p-L_p}\times(-100)$ (123)

    where C is the present closing price, and $H_p$ and $L_p$ denote, respectively, the highest and lowest prices over a specified period p, typically set at 14. The Williams %R scale ranges from 0 to -100: if the Williams %R falls below -80, the asset is deemed oversold; if it rises above -20, it is considered overbought. While the Williams %R might be viewed as the inverse of the stochastic oscillator, it offers a precise indication of a potential market reversal in upcoming trading periods (Murphy, 1999).
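Using the same rolling window as the stochastic oscillator, Eq. (123) can be sketched as:

```python
def williams_r(closes, highs, lows, period=14):
    """Eq. (123): %R = (H_p - C) / (H_p - L_p) * (-100), on a 0 to -100
    scale; below -80 reads as oversold, above -20 as overbought."""
    c = closes[-1]
    hp = max(highs[-period:])
    lp = min(lows[-period:])
    return (hp - c) / (hp - lp) * -100.0
```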

    Moving average convergence divergence (MACD)

    MACD serves as a trend-following indicator, calculated by deducting the 26-day Exponential Moving Average (EMA) from the 12-day EMA. Subsequently, a 9-day EMA of the MACD is utilised as a signal line (Vargas et al., 2018). MACD stands as a valuable momentum indicator in trading. It is widely acknowledged among traders for its effectiveness and simplicity, providing both trend and momentum signals (Appel, 2005). MACD reveals a difference between two exponential smoothing moving averages—one over a longer period and the other over a shorter period. This highlights the convergence and divergence of these two moving averages. The general understanding is that when the short-term moving average surpasses the long-term moving average (whether in an upward or downward direction), it signals a noise reduction insight into the asset's short-term future trajectory. The MACD equation can be interpreted as a signal line. If the MACD falls below the signal line, it indicates a sell signal, and conversely, if it rises above the signal line, it suggests a buy signal. The equations for both MACD and the signal line are presented as follows:

    $MACD=EMA_p(C)-EMA_q(C)$ (124)

    and

    $Signal\ Line=EMA_r(MACD)$ (125)

    where $EMA_p(C)$ and $EMA_q(C)$ are the exponential smoothing averages of the closing price C over periods p and q, respectively, with p consistently smaller than q. Similarly, $EMA_r(MACD)$ is the exponential smoothing average of the MACD over a period r. The usual settings for p, q, and r are 12, 26, and 9, respectively, although these values may be adjusted depending on the measurement interval.
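Eqs. (124)–(125) can be sketched as follows, using the standard EMA smoothing factor 2/(period + 1):

```python
def ema(values, period):
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    alpha = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd(closes, p=12, q=26, r=9):
    """Eqs. (124)-(125): MACD = EMA_p(C) - EMA_q(C); signal = EMA_r(MACD)."""
    fast, slow = ema(closes, p), ema(closes, q)
    macd_line = [f - s for f, s in zip(fast, slow)]
    return macd_line, ema(macd_line, r)
```

A buy signal corresponds to the MACD line crossing above its signal line, and a sell signal to the opposite crossing.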

    On Balance Volume (OBV)

    OBV is an impulse indicator that employs volume flow to forecast changes in the stock price. It is computed by adding the volume when the closing price is higher than the previous day's close and subtracting the volume when the closing price is lower. We also calculate a 20-day EMA of the OBV.

    To complete the definition, we set the OBV at the reference day p = 0 to zero, i.e., we let OBV0 = 0. At p = 1, OBV1 = V1 or -V1, depending on whether the price at p = 1 increases or decreases relative to the price at p = 0. A high value of OBV denotes good market sentiment. The OBV may also be used to forecast market reversals: if OBV moves upwards while price moves downwards, or vice versa, it signals an upcoming reversal of market sentiment (Tsang and Chong, 2009).

    Unlike prior indicators that only use price, OBV is a cumulative indicator that is centred on volume traded. Because the volume movement predates the price movement, a change in OBV indicates a subsequent price change (Granville, 1976). An increase in OBV suggests that the price will rise, while a fall in OBV implies a decline. The formula for estimating the OBV is

    If $C_p=C_{p-1}$, then $OBV_p=OBV_{p-1}$ (126)
    If $C_p<C_{p-1}$, then $OBV_p=OBV_{p-1}-V_p$ (127)
    If $C_p>C_{p-1}$, then $OBV_p=OBV_{p-1}+V_p$ (128)

    where $C_p$ is the closing price at time p and $V_p$ the volume at time p. When the closing price of a share equals its closing price of the preceding period, OBV stays constant. Whenever the closing price rises ($C_p>C_{p-1}$), the trading volume is added to the prior OBV; similarly, if the closing price declines, the trading volume is subtracted from the prior period's OBV.
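The cumulative rule of Eqs. (126)–(128) translates directly into code:

```python
def obv(closes, volumes):
    """Eqs. (126)-(128): cumulative On Balance Volume with OBV_0 = 0;
    volume is added on up-closes and subtracted on down-closes."""
    out = [0.0]
    for t in range(1, len(closes)):
        if closes[t] > closes[t - 1]:
            out.append(out[-1] + volumes[t])
        elif closes[t] < closes[t - 1]:
            out.append(out[-1] - volumes[t])
        else:
            out.append(out[-1])
    return out
```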

    Bollinger bands (BB)

    BB is a volatility indicator that captures prices by constructing an upward and a downward band around two standard deviations over a simple 21-day moving average (Vargas et al., 2018). Bollinger Bands are an indicator that contemplates feasible buy/sell points when prices are close to the bands (top/bottom) and takes profits when prices move towards the middle band. Bollinger bands trace two standard deviations up and down from a simple moving average. Moving standard deviations are employed to draw bands around a moving average.

    $\sigma=\sqrt{\frac{\sum_{j=1}^{N}(X_j-\bar{X})^2}{N}}$ (129)
    $\bar{X}=MA=\frac{\sum_{j=1}^{N}X_j}{N}$ (130)
    $Upper\ band=\bar{X}+2\sigma$
    $Middle\ band=\bar{X}$ (131)
    $Lower\ band=\bar{X}-2\sigma$
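A sketch of Eqs. (129)–(131) over a rolling window (the 21-day period and two-standard-deviation width follow the description above):

```python
def bollinger_bands(closes, period=21, width=2.0):
    """Eqs. (129)-(131): upper/middle/lower bands at +/- `width` standard
    deviations around a simple moving average of the last `period` closes."""
    window = closes[-period:]
    mean = sum(window) / len(window)
    sd = (sum((x - mean) ** 2 for x in window) / len(window)) ** 0.5
    return mean + width * sd, mean, mean - width * sd
```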

    Trend elimination oscillator (TEO)

    Trend oscillators are indicators that swing above and below a central line or between predetermined levels, with their values fluctuating over time. Oscillators have the ability to stay at extreme levels, indicating overbought or oversold conditions, for extended durations. However, they are not capable of indicating a trend continuously. In contrast, the On-Balance Volume (OBV) has the ability to follow a trend indefinitely, as it consistently increases or decreases in value over a sustained period (Fernández-Blanco et al., 2008).

    TEO recognizes cycles and overbought and oversold areas independently of price movements. Cycles of longer duration are in turn formed by smaller cycles. Moreover, TEO focuses more attention on short trading periods regardless of longer-term trends. An appropriate period must be chosen as shorter cycles are best monitored. Short-term traders can buy when the oscillator turns positive and sell when it turns negative.

    $Med_i=\frac{1}{\zeta}\sum_{j=i-\zeta+1}^{i}Cie_j$ (132)
    $TEO_i=Cie_i-Med_{i-(\zeta/2+1)}$ (133)

    where $Cie_i$ is the close of the current bar.
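The TEO formulas above were damaged in extraction; a common reading, which this sketch assumes, is the detrended-price-oscillator convention: the current close minus a ζ-period simple moving average displaced ζ/2 + 1 bars back.

```python
def teo(closes, zeta=20):
    """Detrended reading of Eqs. (132)-(133): Med is a `zeta`-bar simple
    moving average; TEO subtracts the average displaced zeta // 2 + 1
    bars back from the current close."""
    shift = zeta // 2 + 1
    end = len(closes) - shift              # displaced anchor of the average
    med = sum(closes[end - zeta:end]) / zeta
    return closes[-1] - med
```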

    Klinger Volume Oscillator (KVO)

    KVO relates prices and volume by comparing the average of today's prices with yesterday's and, according to the result, applies a multiplier coefficient to the volume. Each day it therefore adds or subtracts a figure relating volume and prices. It then applies two exponential averages to absorb the movements and a momentum technique to establish the underlying trend of the resulting curve. Finally, KVO applies another exponential moving average to the result of the previous calculation, yielding a curve that moves stably in trends. The different interpretations of this indicator are:

    -From a charting point of view, interpreting divergences, breakouts, etc. about the price curve.

    -As a filter for signals from other oscillators, indicating the most favorable situations for market trading.

    -As a buy point, if it crosses the zero line upwards, and sell if it crosses it in the opposite direction.

    if $\frac{Max_i+Min_i+Cie_i}{3}>\frac{Max_{i-1}+Min_{i-1}+Cie_{i-1}}{3}$ (134)

    so Data1 = 1; otherwise Data1 = 0

    $Data2_i=Max_i-Min_i$ (135)

    If $Data1_i=Data1_{i-1}$, then $Data3_i=Data3_{i-1}+Data2_i$; otherwise $Data3_i=Data2_{i-1}+Data2_i$

    $Data4_i=Vol_i\times\left|\frac{2\,Data2_i}{Data3_i}-1\right|\times Data1_i\times 100$ (136)
    $Data5=MedExp_{p_2}(Data4)$ (137)
    $Data6=MedExp_{p_3}(Data4)$ (138)
    $Data7=Data5-Data6$ (139)
    $KVO=MedExp_{p_1}(Data7)$ (140)

    where $Max_i$ is the high of the current bar, $Max_{i-1}$ the high of the previous bar, $Cie_i$ the close of the current bar, $Cie_{i-1}$ the close of the previous bar, $Min_i$ the low of the current bar, $Min_{i-1}$ the low of the previous bar, and $Vol_i$ the volume of the current bar.
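A sketch that follows the document's conventions literally, with Data1 ∈ {0, 1} as in Eq. (134) (the standard Klinger formulation uses a ±1 trend flag instead); the p1/p2/p3 defaults of 13/34/55 are assumptions:

```python
def ema(values, period):
    """Exponential moving average (MedExp) with factor 2 / (period + 1)."""
    alpha = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def klinger(highs, lows, closes, volumes, p1=13, p2=34, p3=55):
    """Eqs. (134)-(140): trend flag from typical prices, bar range,
    cumulative range, volume force, then three EMAs yielding the KVO line."""
    n = len(closes)
    data1, data4 = [0] * n, [0.0] * n
    data2 = [h - l for h, l in zip(highs, lows)]
    data3 = [data2[0]] * n
    for i in range(1, n):
        tp = (highs[i] + lows[i] + closes[i]) / 3
        tp_prev = (highs[i - 1] + lows[i - 1] + closes[i - 1]) / 3
        data1[i] = 1 if tp > tp_prev else 0                            # Eq. (134)
        if data1[i] == data1[i - 1]:                                   # cumulative range
            data3[i] = data3[i - 1] + data2[i]
        else:
            data3[i] = data2[i - 1] + data2[i]
        data4[i] = volumes[i] * abs(2 * data2[i] / data3[i] - 1) * data1[i] * 100  # Eq. (136)
    data7 = [a - b for a, b in zip(ema(data4, p2), ema(data4, p3))]    # Eqs. (137)-(139)
    return ema(data7, p1)                                              # Eq. (140)
```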

    Tables 2 and 3 show the accuracy results for predicting a buy or sell decision with neural networks and genetic algorithms, respectively, using three metrics: F1, Precision, and Recall. To compute these metrics, we first calculate the following quantities for every classifier:

    Table 2.  Position accuracy (buy and sell) in Neural Networks with EGARCH integration.
    Dataset Decentraland LuckyBlock SafeMoon Avalanche SeeSaw Protocol Binamon Kasta X2Y2
    CNN-LSTM F1 Buy 94.51 94.54 94.56 94.57 94.61 94.67 94.69 94.72
    GRU-CNN F1 Buy 96.24 96.26 96.26 96.28 96.32 96.37 96.38 96.39
    QNN F1 Buy 97.47 97.50 97.50 97.51 97.54 97.60 97.60 97.64
    DRCNN F1 Buy 95.70 95.71 95.75 95.75 95.77 95.82 95.85 95.87
    QRNN F1 Buy 96.83 96.86 96.89 96.90 96.91 96.97 96.99 97.00
    CNN-LSTM F1 Sell 93.10 93.13 93.13 93.16 93.20 93.24 93.26 93.29
    GRU-CNN F1 Sell 94.81 94.81 94.81 94.82 94.85 94.93 94.97 94.99
    QNN F1 Sell 96.02 96.03 96.05 96.08 96.11 96.17 96.18 96.18
    DRCNN F1 Sell 94.28 94.31 94.35 94.37 94.40 94.45 94.45 94.47
    QRNN F1 Sell 95.39 95.42 95.44 95.48 95.50 95.55 95.57 95.60
    CNN-LSTM Precision Buy 93.83 93.85 93.87 93.87 93.88 93.93 93.93 93.94
    GRU-CNN Precision Buy 95.54 95.55 95.57 95.59 95.63 95.72 95.74 95.77
    QNN Precision Buy 96.77 96.80 96.80 96.85 96.86 96.89 96.89 96.92
    DRCNN Precision Buy 95.01 95.03 95.03 95.03 95.06 95.12 95.13 95.16
    QRNN Precision Buy 96.13 96.13 96.15 96.16 96.17 96.23 96.27 96.31
    CNN-LSTM Precision Sell 93.25 93.25 93.28 93.29 93.31 93.35 93.37 93.39
    GRU-CNN Precision Sell 94.95 94.99 95.01 95.02 95.04 95.07 95.11 95.14
    QNN Precision Sell 96.17 96.20 96.24 96.25 96.28 96.32 96.35 96.36
    DRCNN Precision Sell 94.42 94.44 94.47 94.51 94.51 94.56 94.59 94.63
    QRNN Precision Sell 95.54 95.55 95.60 95.60 95.64 95.70 95.71 95.73
    CNN-LSTM Recall Buy 92.45 92.46 92.46 92.46 92.47 92.52 92.56 92.58
    GRU-CNN Recall Buy 94.14 94.15 94.17 94.17 94.21 94.24 94.27 94.29
    QNN Recall Buy 95.35 95.37 95.39 95.41 95.44 95.49 95.49 95.50
    DRCNN Recall Buy 93.62 93.62 93.65 93.70 93.71 93.74 93.77 93.79
    QRNN Recall Buy 94.72 94.72 94.74 94.77 94.79 94.83 94.87 94.89
    CNN-LSTM Recall Sell 92.91 92.95 92.96 92.99 93.01 93.05 93.05 93.08
    GRU-CNN Recall Sell 94.61 94.63 94.64 94.67 94.71 94.77 94.79 94.80
    QNN Recall Sell 95.82 95.84 95.87 95.89 95.90 95.95 95.95 95.98
    DRCNN Recall Sell 94.08 94.10 94.11 94.13 94.16 94.18 94.20 94.22
    QRNN Recall Sell 95.19 95.23 95.27 95.31 95.34 95.37 95.40 95.41

     | Show Table
    DownLoad: CSV
    Table 3.  Position accuracy (buy and sell) in Genetic Algorithms with EGARCH integration.
    Dataset Decentraland LuckyBlock SafeMoon Avalanche SeeSaw Protocol Binamon Kasta X2Y2
    AdaBoost-GA F1 Buy 93.91 93.91 93.95 93.95 93.95 93.99 94.03 94.07
    ANFIS-QGA F1 Buy 95.63 95.66 95.67 95.71 95.73 95.79 95.83 95.86
    AGAFL F1 Buy 98.62 98.66 98.68 98.69 98.73 98.80 98.84 98.88
    QGA F1 Buy 96.83 96.88 96.92 96.94 96.94 96.99 97.02 97.03
    SVM-GA F1 Buy 95.88 95.92 95.94 95.96 95.99 96.03 96.05 96.06
    AdaBoost-GA F1 Sell 92.51 92.53 92.54 92.57 92.61 92.65 92.67 92.69
    ANFIS-QGA F1 Sell 94.20 94.23 94.27 94.32 94.33 94.37 94.40 94.44
    AGAFL F1 Sell 97.16 97.16 97.19 97.22 97.22 97.27 97.31 97.34
    QGA F1 Sell 95.39 95.41 95.45 95.46 95.49 95.51 95.51 95.54
    SVM-GA F1 Sell 94.46 94.48 94.51 94.51 94.53 94.57 94.58 94.60
    AdaBoost-GA Precision Buy 93.23 93.25 93.28 93.33 93.33 93.36 93.38 93.40
    ANFIS-QGA Precision Buy 94.94 94.96 95.00 95.02 95.06 95.13 95.16 95.18
    AGAFL Precision Buy 97.91 97.95 97.98 97.99 98.01 98.10 98.12 98.14
    QGA Precision Buy 96.13 96.16 96.18 96.20 96.25 96.33 96.36 96.40
    SVM-GA Precision Buy 95.19 95.21 95.24 95.26 95.29 95.33 95.37 95.41
    AdaBoost-GA Precision Sell 92.65 92.67 92.68 92.70 92.71 92.77 92.80 92.84
    ANFIS-QGA Precision Sell 94.35 94.36 94.37 94.38 94.42 94.47 94.52 94.56
    AGAFL Precision Sell 97.31 97.33 97.35 97.36 97.38 97.46 97.46 97.48
    QGA Precision Sell 95.54 95.55 95.58 95.58 95.62 95.68 95.69 95.72
    SVM-GA Precision Sell 94.60 94.62 94.66 94.67 94.70 94.74 94.77 94.80
    AdaBoost-GA Recall Buy 91.86 91.90 91.94 91.96 91.97 92.01 92.02 92.04
    ANFIS-QGA Recall Buy 93.54 93.59 93.62 93.65 93.66 93.71 93.72 93.73
    AGAFL Recall Buy 96.48 96.48 96.48 96.49 96.50 96.58 96.60 96.61
    QGA Recall Buy 94.73 94.76 94.80 94.82 94.86 94.88 94.90 94.94
    SVM-GA Recall Buy 93.80 93.83 93.86 93.88 93.90 93.94 93.96 93.99
    AdaBoost-GA Recall Sell 92.32 92.35 92.35 92.36 92.41 92.45 92.50 92.54
    ANFIS-QGA Recall Sell 94.01 94.03 94.04 94.08 94.10 94.16 94.17 94.17
    AGAFL Recall Sell 96.96 96.97 97.00 97.02 97.03 97.05 97.08 97.09
    QGA Recall Sell 95.19 95.21 95.22 95.25 95.29 95.32 95.34 95.38
    SVM-GA Recall Sell 94.26 94.27 94.29 94.31 94.36 94.41 94.44 94.45

     | Show Table
    DownLoad: CSV

    -TP: an output that the model correctly classifies as positive for a given cryptocurrency

    -FP: an output that the model wrongly classifies as positive

    -FN: an output that the model wrongly classifies as negative

    -TN: an output that the model correctly classifies as negative

    We then estimate the precision and recall for individual classifiers according to the below equations:

    $Precision_i=\frac{TP_i}{TP_i+FP_i}$ (141)
    $Recall_i=\frac{TP_i}{TP_i+FN_i}$ (142)

    To summarize the results more compactly, we utilize the micro-means of every metric. We calculate the micro-means of precision and recall using the below formulas:

    $Precision_\mu=\frac{\sum_{i=1}^{n}TP_i}{\sum_{i=1}^{n}(TP_i+FP_i)}$ (143)
    $Recall_\mu=\frac{\sum_{i=1}^{n}TP_i}{\sum_{i=1}^{n}(TP_i+FN_i)}$ (144)

    where n is the number of classifiers.

    Also, we calculate the F1 score, which merges the micro-averaged precision and recall into a single value. The F1 score is the harmonic mean of precision and recall.

    $F1=\frac{2\times Precision_\mu\times Recall_\mu}{Precision_\mu+Recall_\mu}$ (145)
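Eqs. (141)–(145) can be sketched as follows; the per-classifier (TP, FP, FN) tuple layout is an assumption for illustration:

```python
def micro_metrics(counts):
    """Eqs. (141)-(145): micro-averaged precision, recall and F1 from a
    list of per-classifier (TP, FP, FN) count tuples."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Micro-averaging pools the counts before dividing, so classifiers evaluated on more samples weigh proportionally more than in a macro average.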

    On average, Genetic Algorithms perform better than Neural Networks. Among the Neural Network techniques, QNN achieves the highest accuracy and precision on all three metrics, with the X2Y2 cryptocurrency reaching the greatest level in this technique (97.64% in F1 Buy, 96.92% in Precision Buy, and 95.50% in Recall Buy). Among the Genetic Algorithm techniques, AGAFL is the most accurate and precise on all three metrics, and again X2Y2 attains the highest accuracy level (98.88% in F1 Buy, 98.14% in Precision Buy, and 96.61% in Recall Buy). X2Y2 is an NFT marketplace that crypto investors can access through wallets such as Coinbase Wallet, WalletConnect, and imToken, among others. It was launched in February 2022 to challenge the bigger and older NFT trading platform OpenSea (Sinha, 2022). In both methodologies, accuracy rates for "buy" decisions are higher than for "sell" decisions.

    We have also assessed the cumulative gains for each cryptocurrency by assigning a potential dollar value through a rolling, out-of-sample simulation that aims to mimic a real live algorithmic trading system. The cumulative profits are presented in Table 4. Initially, we allocate $1,000 of capital. We first use one week of currency and signal data to train the methods, then apply the fitted parameters as trading signals for the following week and evaluate the gains or losses of the trades. We next add the second week of data to train a new model and use its parameters as trading signals for the third week, repeating this process across the entire sample. The timeframes considered are 5, 15, and 360 minutes. Profits or losses for each trade are cumulative (not accrued). This approach can be compared with a simple "long-only" strategy. In Table 4 we have also applied the Gibbons, Ross, and Shanken (GRS) test to our benchmark model. The GRS test examines the ex ante efficiency of portfolios and admits a variety of interpretations, the most usual being the "geometric mean-standard deviation" one. In practical examples, the multivariate approach of the GRS test produces more reliable results than the univariate dependent approach (García, 2004). The GRS statistic is calculated as:

    $F_{GRS}=\frac{T-N-K}{N}\cdot\frac{\hat{\alpha}^{\top}\hat{\Sigma}^{-1}\hat{\alpha}}{1+\hat{\mu}_K^{\top}\hat{\Omega}_K^{-1}\hat{\mu}_K}$ (146)
    Table 4.  Cryptocurrencies log returns with a $1,000 investment.
    Decentraland Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2020 2021 2022
    5 min 7,355.25 529.08% 791.741.538 3.94 0.043 790.40% 13.21% 60.33%
    15 min 8,376.76 314.14% 803.836.676 4.17 0.067 1024.44% 51.20% 186.30%
    360 min 1.344.018.317 91.31% 144.621.766 3.43 0.029 107.02% 28.52% 187.19%
    Long only 6.870.166.014 42.03% 137.548.728 3.24 0.048 34.60% 73.13% 88.11%
    LuckyBlock Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2022
    5 min 5,435.97 456.84% 7.415.007.158 5.16 0.033 634.50%
    15 min 6,190.93 241.48% 7.542.707.625 4.92 0.051 881.60%
    360 min 9.933.101.775 82.42% 1.854.816.605 4.696 0.027 112.99%
    Long only 5.077.464.895 32.59% 134.665.947 4.48 0.019 36.53%
    SafeMoon Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2021 2022
    5 min 5,195.70 856.84% 7.998.863.842 5.82 0.063 789.09% 13.18%
    15 min 5,917.29 700.48% 8.120.919.948 5.55 0.047 922.74% 51.11%
    360 min 9.494.058.676 122.42% 1.963.993.711 5.34 0.014 106.84% 28.48%
    Long only 4.853.040.946 62.59% 1.287.137.122 5.05 0.017 34.55% 73.01%
    Avalanche Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2020 2021 2022
    5 min 9,932.10 810.21% 8.509.145.672 3.79 0.032 746.14% 12.47% 56.95%
    15 min 11,311.49 662.36% 8.624.559.485 4.01 0.061 967.08% 48.33% 180.83%
    360 min 1.814.884.257 115.76% 1.857.113.174 3.62 0.020 101.03% 26.93% 176.71%
    Long only 9.277.073.073 59.19% 1.217.091.119 3.12 0.012 32.67% 69.04% 83.17%
    SeeSaw Protocol Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2022
    5 min 8,389.05 855.42% 8.983.956 4.97 0.072 687.78%
    15 min 9,554.14 699.32% 9.105.809.904 4.74 0.036 821.04%
    360 min 1.532.923.838 122.22% 1.960.740.089 4.35 0.029 96.67%
    Long only 7.835.787 62.49% 1.285.004.804 4.15 0.025 34.49%
    Binamon Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2021 2022
    5 min 11,689.23 953.55% 1.001.453.829 5.61 0.028 731.79% 29.34%
    15 min 13,312.66 779.54% 101.503.705 5.35 0.045 948.48% 93.76%
    360 min 213.596.389 136.24% 2.185.663.721 4.9 0.039 99.08% 63.38%
    Long only 1.091.832.331 69.66% 1.432.412.382 4.68 0.042 32.44% 112.50%
    Kasta Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2022
    5 min 12,341.49 1006.76% 1.057.334.953 3.65 0.047 927.15%
    15 min 14,055.51 823.03% 1.071.676.118 3.86 0.058 1201.68%
    360 min 2.255.150.675 143.84% 2.307.623.757 3.06 0.064 125.54%
    Long only 1.152.756.575 73.55% 1.512.340.993 2.89 0.052 40.59%
    X2Y2 Period Cumulative Profits Annualized Average Returns Sharpe Ratio GRS Test p-value (GRS Test) 2022
    5 min 11,101.89 1062.93% 1.116.334.244 4.78 0.071 898.88%
    15 min 12,643.75 868.96% 1.131.475.645 4.56 0.052 1048.74%
    360 min 2.028.638.831 151.86% 2.436.389.163 4.02 0.068 112.54%
    Long only 1.036.971.399 77.65% 1.596.729.621 3.84 0.046 39.85%

     | Show Table
    DownLoad: CSV

    where $\hat{\alpha}$ is the vector of intercepts of the asset-pricing model, T the total number of observations, N the number of test assets, K the number of risk factors, $\hat{\Sigma}$ the covariance matrix of the idiosyncratic component of asset returns, $\hat{\Omega}_K$ the covariance matrix of the risk factors, and $\hat{\mu}_K$ the vector of average risk-factor returns.

    Where the p-values are greater than 0.05, the null hypothesis of jointly zero intercepts cannot be rejected at the 5% significance level, so the model passes the GRS test for those specifications.

    Across all cryptocurrencies, the economic benefits of the 360-minute trading frequency are smaller, but still a meaningful enhancement over the "long-only" strategy. Our model yields sizable economic benefits. The highest results occur at the 15-min interval, with the top result for the cryptocurrency Kasta ($14,055.51), followed by Binamon ($13,312.66) and X2Y2 ($12,643.75). Kasta allows crypto investors to send cryptocurrencies for free right away: each transfer between crypto wallets becomes easy and convenient simply by scanning QR codes. The KASTA token plays a significant part in driving the adoption of cryptocurrencies (Sinha, 2022).

    Moreover, Figure 1 reveals the average importance of each indicator in the sample for each cryptocurrency in our model.

    Figure 1.  Average financial indicators feature importance.

    RSI is the most important indicator across all the cryptocurrencies studied. The MACD indicator also holds great importance, with a peak for the SafeMoon currency. The SO and TEO indicators come third and fourth in order of importance; however, for the LuckyBlock currency the TEO indicator outperforms MACD. In comparison with other works, Nakano et al. (2018) demonstrate the relative effectiveness of technical indicators (MACD, SO, and OBV) as ANN inputs for Bitcoin over simple return time series. They conclude that employing various technical indicators potentially avoids overfitting when classifying non-stationary financial time-series data, improving trading performance. Vo and Yost-Bremm (2020) create financial indicators and use an ML algorithm to devise a high-frequency, minute-level trading strategy for Bitcoin, employing six exchanges as their information-technology framework. The financial indicators utilised include RSI, SO, WR, MACD, and OBV. Their findings reveal that RSI holds the utmost significance, surpassing other indicators by a significant margin; the remaining indicators carry similar importance, with particular emphasis on WR and SO.

    In summary, our study marks a notable advancement in algorithmic trading analyses by achieving good precision and surpassing the accuracy benchmarks established in prior research. The integration of advanced methodologies, notably the EGARCH model, significantly contributes to this precision, offering a refined understanding of market dynamics and risk factors. This heightened precision becomes especially crucial in the volatile realm of cryptocurrency markets, where rapid fluctuations demand sophisticated modeling approaches.

    Our research confirms genetic algorithms and neural-network techniques as suitable strategies for algorithmic trading of the youngest cryptocurrencies. We achieve good results compared to the previous literature. Madan et al. (2015) focus on Bitcoin price data, leveraging 10-minute and 10-second intervals. They model the price-prediction problem as a binary classification task, experimenting with a custom algorithm that leverages both Random Forest (RF) and generalized linear models, and obtain 50–55% accuracy in forecasting the sign of the 10-minute-ahead price. McNally et al. (2018) predict the price of Bitcoin with machine learning algorithms, and their results show that SVM achieves the highest accuracy (62.31%). Zhengyang et al. (2019) analyze the accuracy of Bitcoin price prediction in USD through a Bayesian-optimized recurrent neural network (RNN) and an LSTM. The LSTM achieves the highest classification accuracy of 52% and an RMSE of 8%. They conclude that RNN and LSTM are efficient for Bitcoin prediction, with LSTM better able to recognize long-term dependencies. The LSTM surpassed the RNN slightly, but not significantly; it also takes noticeably more time to train.

    Nakano et al. (2018) explore Bitcoin intraday technical trading based on Artificial Neural Networks (ANN), deriving significant trading signals from technical input metrics computed from time-series performance data at 15-minute intervals. They achieve an average accuracy of about 75%. Vo and Yost-Bremm (2020) create an HFT strategy for Bitcoin using RF, reaching an average precision of 94%. They use Design Science Research, an inquiry method providing guidance on how to construct and assess an information technology artifact and how that artifact resolves a formulated problem. Nielsen and Chuang (2001) implement two machine learning models, ANN and LSTM, to forecast the prices of various cryptocurrencies, such as Bitcoin, Ethereum, Ripple, Stellar Lumens, Litecoin, and Monero. They consider RMSE in joint prediction, obtaining an average of 7%. Their results reveal that ANN, overall, beats LSTM, even though LSTM is theoretically better suited to modeling time-series dynamics. Their results also show that the prospective future state of a cryptocurrency time series depends to a large extent on its historical development.

    Othman et al. (2020) conduct an analysis of the volatility structure of the Bitcoin currency, utilizing a RapidMiner implementation of an artificial neural network (ANN) algorithm. The results reveal that the ANN is a suitable model, achieving an accuracy level of 92.15% in predicting Bitcoin market prices against actual prices; symmetric volatility attributes are employed in this analysis. The findings underscore the significant role of the low-price attribute, which emerges as the primary factor influencing Bitcoin price trends, accounting for 63% of the variation, followed by the close price at 49%, the high price at 46%, and the open price at 37%. Alonso-Monsalve et al. (2020) explore the use of convolutional neural networks (CNNs) for intraday trend classification of cryptocurrency exchange rates against the USD, assessing 1-minute exchange rates of six major cryptocurrencies over the year from July 1st, 2018 to June 30th, 2019. They find CNNs, especially the Convolutional LSTM, to be effective predictors of price trends, notably for Bitcoin, Ether, and Litecoin. The Convolutional LSTM consistently outperformed the other models, including for Dash and Ripple, though by a slight margin; the performance of the other models was limited for these two currencies, possibly due to inherent noise or unaccounted temporal behavior in the data-generation process. Patel et al. (2020) introduce a hybrid cryptocurrency prediction scheme combining GRU and LSTM, focusing on Litecoin and Monero. Their hybrid model outperforms a standalone LSTM network, as evidenced by lower prediction errors, demonstrating good accuracy in price prediction and indicating the scheme's potential applicability across various cryptocurrencies.

    In 2021, Akyildirim, Goncu, and Sensoy examined twelve major cryptocurrencies, assessing their predictability at daily and various intraday time scales. They employ machine learning classification algorithms such as support vector machines, logistic regression, artificial neural networks, and random forests. Notably, support vector machines emerged as the most reliable and robust models, consistently achieving over 50% accuracy in predicting next-day returns across all cryptocurrencies and time scales. While performance varied among coins, many achieved predictive accuracies surpassing 69% without extensive feature tuning, suggesting that with further refinement in model selection and feature exploration, machine learning algorithms could surpass 70% predictive accuracy. In 2022, Ortu et al. explored the impact of integrating social and trading indicators alongside traditional technical variables to analyze cryptocurrency price fluctuations at hourly and daily frequencies. They employ various deep learning techniques, including Multi-Layer Perceptron (MLP), Multivariate Attention Long Short Term Memory Fully Convolutional Network (MALSTM-FCN), Convolutional Neural Network (CNN), and Long Short Term Memory (LSTM) neural networks. Focusing on Bitcoin and Ethereum, they evaluate two models: a restricted model considering only technical indicators, and an unrestricted model also incorporating social and trading indicators. In the restricted analysis at hourly frequency, MALSTM-FCN achieved the highest performance for Bitcoin, with an average f1-score of 54%, while CNN performed best for Ethereum. In the unrestricted case, LSTM yielded the best results for both Bitcoin and Ethereum, with average accuracies of 83% and 84%, respectively.
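    The f1-score, precision, and recall figures quoted throughout this comparison are computed per class (here, the "buy" class). A self-contained sketch of those metrics:

```python
# Sketch: the precision / recall / F1 metrics reported throughout these
# studies, computed for the "buy" class from toy label sequences.

def prf1(y_true, y_pred, positive=1):
    """Per-class precision, recall, and F1 for the `positive` label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = buy, 0 = sell/hold
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
p, r, f = prf1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.75 0.75 0.75
```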

    In brief, the previous literature has focused mainly on Bitcoin and Ethereum. However, new emerging cryptocurrency markets are gaining acceptance and popularity, and some authors have proposed, as a future line of research, building a multi-asset portfolio of several new cryptocurrencies (Nakano, Takahashi, and Takahashi, 2018; Vo and Yost-Bremm, 2020).

    We unfold a thorough comparison of methodologies for analyzing algorithmic trading in 8 new cryptocurrencies of 2022, with a unique focus on integrating the EGARCH model. Diverse methods, such as CNN-LSTM, GRU-CNN, QNN, DRCNN, and QRNN in Neural Networks, and AdaBoost-GA, ANFIS-QGA, AGAFL, QGA, and SVM-GA in Genetic Algorithms, are applied to establish a robust model. Notably, the AGAFL technique, enriched by the integration of EGARCH, stands out with the highest accuracy and precision values across all metrics. The X2Y2 cryptocurrency, underpinned by EGARCH, attains high accuracy levels (98.88% in F1 Buy, 98.14% in Precision Buy, and 96.61% in Recall Buy). Furthermore, the Neural Network methods, particularly the QNN model, also exhibit notable accuracy (97.64% in F1 Buy, 96.92% in Precision Buy, and 95.50% in Recall Buy) through the incorporation of EGARCH.
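    As an illustration of the Genetic Algorithm component of hybrids such as AdaBoost-GA, SVM-GA, or AGAFL, the sketch below evolves a bit-mask that selects technical indicators. The fitness function is a hypothetical stand-in; in the actual hybrids it would be the validation performance of the trading model trained on the selected indicators:

```python
import random

# Toy sketch of the GA component of hybrids such as SVM-GA or AGAFL:
# evolve a bit-mask selecting technical indicators. The fitness function
# is a hypothetical stand-in -- in the real hybrids it is the validation
# performance of the model trained on the selected indicators.

random.seed(42)
N_FEATURES = 6  # e.g. RSI, MACD, SO, TEO, WR, OBV

def fitness(mask):
    """Pretend features 0 and 2 are informative; penalize mask size."""
    return mask[0] + mask[2] - 0.1 * sum(mask)

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)       # pick two parents
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = [1 - g if random.random() < p_mut else g  # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

    Because the top half of each generation survives unchanged, the best fitness never decreases, and the search quickly concentrates on the informative features.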

    In contrast to prior research, our study not only achieves superior accuracy results but also underscores the advantages of integrating EGARCH in enhancing machine learning models for algorithmic trading. The EGARCH model, known for its proficiency in capturing volatility dynamics, contributes significantly to the robustness of the applied methodologies. This integration allows for a more nuanced understanding of market behavior and risk factors, providing a comprehensive foundation for improved predictive capabilities.
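    Concretely, the EGARCH(1,1) recursion of Nelson (1991) models the logarithm of the conditional variance, which both guarantees positive variances and lets negative shocks raise volatility more than positive ones (the leverage effect). The following is a minimal filtering sketch with fixed, purely illustrative parameters; the models in this study estimate them from data:

```python
import math

# Minimal EGARCH(1,1) variance filter (Nelson, 1991):
#   ln(sigma_t^2) = omega + beta*ln(sigma_{t-1}^2)
#                   + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}
# Parameter values are purely illustrative, not estimates from our data.

def egarch_variance(returns, omega=-0.1, alpha=0.15, gamma=-0.05, beta=0.95):
    """Filter conditional variances; gamma < 0 encodes the leverage
    effect (negative shocks raise volatility more than positive ones)."""
    e_abs_z = math.sqrt(2.0 / math.pi)        # E|z| for standard normal z
    log_var = [math.log(max(1e-12, sum(r * r for r in returns) / len(returns)))]
    for r in returns[:-1]:
        z = r / math.exp(0.5 * log_var[-1])   # standardized residual
        log_var.append(omega + beta * log_var[-1]
                       + alpha * (abs(z) - e_abs_z) + gamma * z)
    return [math.exp(lv) for lv in log_var]

rets = [0.01, -0.03, 0.02, -0.04, 0.01]
variances = egarch_variance(rets)
print(len(variances) == len(rets))    # one variance per observation -> True
print(all(v > 0 for v in variances))  # log form guarantees positivity -> True
```

    The filtered variance series can then be fed to the learning models as an additional volatility feature, which is the role EGARCH plays in the methodologies above.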

    The innovative integration of EGARCH into our methodologies plays a pivotal role in addressing the complexities of cryptocurrency markets, particularly in managing and forecasting volatility. The EGARCH-enhanced AGAFL technique emerges as a standout performer, showcasing the model's ability to refine precision and accuracy metrics. This underscores the importance of considering volatility patterns, an area where EGARCH excels, when constructing robust machine learning models for algorithmic trading.

    Our study can serve as a resource for investors, regulators, and even developers. First, regarding investors, accuracy in predicting cryptocurrency movements can significantly influence their investment decisions. High accuracy can lead to more profitable trades in asset buying or selling and reduce losses. The findings of our study would be a valuable contribution for commercial purposes among cryptocurrency market participants. Investors can proactively predict cryptocurrency price trends and make the correct investment decision, whether that is buying, holding, or selling, to achieve normal market returns. According to Othman et al. (2020), financial markets have evolved significantly in the last decade, driven by new technologies in automated trading and financial infrastructure. Our findings are particularly important for algorithmic trading purposes, as deciding when to buy and sell cryptocurrencies intraday can be successfully automated with machine learning algorithms.

    Second, although cryptocurrency markets are far from being regulated, policymakers must continue to pay attention to these markets. The reason is that the Chicago Board Options Exchange (CBOE) and the Chicago Mercantile Exchange (CME) introduced Bitcoin futures in December 2017, so it is realistic to assume that options on Bitcoin and other emerging cryptocurrencies could also be introduced in the near future (Akyildirim et al., 2021). Regulators could, therefore, consider implementing guidelines or regulations to ensure transparency, fairness, and accountability in the use of neural networks for cryptocurrency trading. Policies could impose disclosure requirements on companies or individuals using neural networks in trading, including information about the accuracy of models, data sources, and potential conflicts of interest. Additionally, as indicated by Wang et al. (2023), forecast analysis models are also a concern for policymakers and central banks, who are increasingly interested in cryptocurrency volatility. Some central banks are considering launching their own cryptocurrencies.

    Third, developers play a crucial role in designing, implementing, and refining neural network models for cryptocurrency analysis. They need to continuously improve the accuracy of their models by incorporating new data, refining algorithms, and addressing potential biases or limitations.

    The implications of our analysis, enriched by the integration of EGARCH, extend to regulators and trading platform designers, influencing monetary policy appropriateness and the capacity of central banks to enforce it successfully. The EGARCH model contributes to a more accurate evaluation of risk, providing valuable insights for decision-makers in navigating the dynamic cryptocurrency landscape. The increasing acceptance of new emerging cryptocurrency markets is further underscored by the advantages brought about by EGARCH integration. EGARCH's ability to capture volatility nuances contributes significantly to understanding the characteristics of the cryptocurrency field, providing a solid foundation for strategic decision-making.

    In summary, our research, accentuated by the integration of EGARCH, makes an indispensable contribution to the field of High-Frequency Trading. The advantages of incorporating EGARCH are evident in the improved accuracy and precision of machine learning models, emphasizing its role in refining predictions and risk management. Stakeholders gain valuable insights into the application of EGARCH-infused methodologies for trading optimization, asset allocation, and portfolio construction in a globally oriented context.

    Future research can continue exploring and refining neural network models for cryptocurrency trading, aiming to enhance their reliability. Moreover, consideration can be given to analysing more complex models, incorporating sentiment data that could enrich cryptocurrency predictions. This sentiment data may include indicators extracted from social media platforms, such as the sentiment and emotions expressed in comments on GitHub and Reddit, as demonstrated by Ortu et al. (2022) in their study, considering emotions such as love, joy, anger, sadness, excitement, and similar sentiments. Besides, future research directions involve delving deeper into the advantages of EGARCH integration, exploring its impact on diverse cryptocurrency trading strategies, and extending its application to less-explored assets like derivatives and fixed-income assets. Additionally, ongoing exploration of models describing agent behavior in crypto asset markets should consider the added value that EGARCH brings to accurate estimations and simulations.

    The authors affirm that no artificial intelligence (AI) tools were used in the creation of this work.

    This research was funded by the Universitat de Barcelona, under the grant UB-AE-AS017634.

    All authors declare no conflicts of interest in this paper.


